ANNEALING TEMPERATURE AND COCATALYST EFFECTS TO THE PHOTOELECTROCHEMICAL PROPERTY OF CuInS2 THIN FILM SEMICONDUCTOR
A thin film of CuInS2 semiconductor was synthesized by stacked electrodeposition of copper and indium on a molybdenum-coated glass substrate, followed by sulfurization at annealing temperatures varied from 600 to 800 °C. The CuInS2 thin film was characterized using XRD, Raman, and SEM. Pt was then deposited on the CuInS2 at various deposition times, and the photocurrent property was observed. Finally, the photoelectrochemical properties of Pt or Rh cocatalysts deposited on In2S3-CuInS2 were also measured. The XRD, Raman, and SEM data showed that CuInS2 had different characteristics at the varied annealing temperatures. An annealing temperature of 680 °C gave the maximum photocurrent of CuInS2 as a photocathode. The introduction of cocatalysts increased the photocurrent, with the Rh cocatalyst giving a better applied-bias photon-to-current efficiency than Pt.
INTRODUCTION
Photoelectrochemical (PEC) water splitting to produce hydrogen gas (H2) using sunlight is hoped to be an ideal and environmentally clean technology to replace fossil fuel sources, which are expected to dwindle within a few decades. This idea motivates the scientific community to find various materials and strategies for that purpose. Fujishima et al., as the pioneers of PEC water splitting, applied a TiO2 photoanode irradiated with UV light. 1 This finding motivated researchers to explore various types of semiconductors and configurations of PEC devices to improve the efficiency of water reduction to produce hydrogen gas (H2). [2][3][4][5][6][7] A single absorber with a wide bandgap gives a low PEC water-splitting efficiency because it only works under UV light; therefore, to improve the efficiency by harvesting all regions of the sunlight spectrum, dual absorbers seem more promising. They consist of photoanode and photocathode electrodes that perform water oxidation and reduction, respectively. With this dual absorber system, researchers can focus more flexibly and optimally on each electrode, so that it is possible to obtain real bias-free water splitting under sunlight irradiation of the absorbers. Considering the absorber for the cathode part, Cu-based chalcopyrite materials, as p-type semiconductors, are promising candidate absorbers for efficient H2 evolution. [8][9][10][11][12][13] The compound CuInS2 is one of the most important chalcopyrites because it has an optimal bandgap value and a high absorption coefficient, namely 1.5 eV and 10^4 cm-1, respectively, allowing it to utilize sunlight efficiently. 14 Moreover, low-cost electrodeposition and annealing fabrication methods enable the preparation of CuInS2 thin-film photoelectrodes with high structural and optical quality. [15][16][17][18] Surface modification of CuInS2 using an n-type buffer layer and the introduction of platinum enhance the cathodic photocurrent, which is very important for a better separation of the electrons and holes generated by illumination. 19 Platinum is the best cocatalyst known to date for the hydrogen evolution reaction, facilitating interfacial charge transfer reactions 20 . Meanwhile, for water reduction, bare semiconductor surfaces used as electrodes have no particular catalytic property, owing to their high photogenerated carrier recombination. As is widely applied in solar cells by fabricating p-n junctions of Cu-based chalcopyrite semiconductors, covering the photoabsorber with an n-type buffer layer increases the photocurrent of the photocathode. [21][22][23][24] In this study, we investigated the effect of the annealing temperature of the CuInS2 thin film, and the insertion of a Pt or Rh cocatalyst on the surface of an In2S3-covered CuInS2 photocathode, to improve its photoelectrochemical properties, since limited previous works have discussed these cases. Those works mainly correlated the annealing temperature effect of CuInS2 with the structural, morphological, and optical properties. 25,26
EXPERIMENTAL
Materials and Instrumentations
The chemicals used were CuSO4, InCl3, trisodium citrate, citric acid, acetone, KCN, In2(SO4)3, thioacetamide, CH3COOH, H2PtCl6, RhCl3, Na2SO4, Eu(NO3)3, and NaH2PO4. All chemicals were bought from Merck and used without purification. Molybdenum-coated glasses were purchased from Geomatec Ltd., Japan. H2S (5%) and N2 gases were used for annealing and drying the Cu/In film, respectively. A potentiostat (Hokuto Denko 110) was used for the electrodeposition of copper, indium, platinum, and rhodium. X-ray diffraction (XRD) was performed for the analysis of the crystalline structures of the CuInS2 film using a PANalytical X'Pert 3 Powder X-ray diffractometer (Cu Kα, Ni filter). The CuInS2 thin film's morphology was analyzed using a scanning electron microscope (SEM), JSM-6510LA Analytical, at an acceleration voltage of 20 kV. Raman analyses were obtained using a Raman spectrophotometer (Jasco NRC 3100 Laser) with an excitation laser at a wavelength of 532 nm. Photocurrent responses and PEC measurements of bare and modified CuInS2 used a potentiostat coupled with a digital function generator at 0.3 Hz and a shutter controller.
Electrodeposition of Cu/In on Molybdenum Glass
The electrodeposition was carried out first from the copper and then from the indium electrolyte solution, with Ag/AgCl, a Pt wire, and a Mo-coated glass substrate (0.7 cm x 1.0 cm) as the reference, counter, and working electrodes, respectively. The copper electrolyte (pH 2.38) contained 0.05 M CuSO4, 0.15 M trisodium citrate, and 0.242 M citric acid, while the indium electrolyte contained 0.03 M InCl3, 0.242 M citric acid, and 0.036 M trisodium citrate. The electrodepositions were run for 7 and 15 min for copper and indium, at potentials of -0.2 and -0.78 V, respectively, using the potentiostat.
Effect of Annealing Temperature of CuInS2
The as-deposited Cu/In was converted to CuInS2 by pre-annealing for 30 min at a temperature of 160 °C in Ar gas flowing at 200 mL/min, followed by annealing for 10 min under a 200 mL/min flow of H2S (5% H2S) in a glass tube furnace. The annealing temperature was varied from 600 to 800 °C. The CuInS2 was then immersed in KCN (10%) for 2 min to remove excess CuxS. The effect of the annealing temperature on CuInS2 was observed by photocurrent response measurements using a potentiostat coupled with a digital function generator at 0.3 Hz and a shutter controller. Three electrodes, an Ag/AgCl reference electrode, a Pt counter electrode, and CuInS2 as the working electrode, were immersed in 0.2 M Eu(NO3)3 solution. The measurement was run under chopped AM 1.5 light irradiation of the working electrode with a potential sweep from 0 to -0.45 V and a scan rate of 10 mV/s. Raman, XRD, and SEM analyses were applied to characterize the effect of the varied temperatures on the synthesized CuInS2.
Effect of Pt Cocatalyst Deposition
Pt electrodeposition on bare-CuInS2 films was done using 20 mL of electrolyte consisting of 1 mM H2PtCl6 and 0.1 M Na2SO4 in a cylindrical flask with a window. The bare-CuInS2 photocathode, Pt, and Ag/AgCl were then inserted into the flask as the working, counter, and reference electrodes, respectively. Pt was photoelectrodeposited on the working electrode at various deposition times using the potentiostat under illumination. The photocurrent response was measured using a procedure similar to that above.
Effect of Type of Cocatalysts
Before deposition of the cocatalysts, the bare-CuInS2 film was covered with In2S3 by immersion in an electrolyte containing 0.025 M indium sulfate, 100 mM thioacetamide, and 100 mM acetic acid at 65 °C for 15 min. Then Pt or Rh was deposited on the modified CuInS2 with a procedure similar to that above, at a concentration of 1 mM for 10 s. Photoelectrochemical measurements were performed in 0.2 M NaH2PO4 solution using the same instrument and parameters as the photocurrent response measurements. The ABPE (applied-bias photon-to-current efficiency) was evaluated using the equation: ABPE (%) = J × Vb × 100 / PAM1.5, where J is the photocurrent (mA/cm2), Vb is the bias voltage (RHE scale), and PAM1.5 is the AM 1.5 simulated irradiance (100 mW/cm2). The RHE-scale potential is calculated as follows: ERHE = EAg/AgCl + 0.059 × pH + 0.199.
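Since the ABPE and RHE formulas above are simple arithmetic, a short sketch can make the calculation concrete. The following Python snippet is an illustration only; the example current, potential, and pH values are hypothetical, not measurements from this work:

```python
# Minimal sketch of the ABPE and RHE-scale conversion described above
# (illustrative only; the example numbers are hypothetical).

P_AM15 = 100.0  # AM 1.5 simulated irradiance, mW/cm^2

def ag_agcl_to_rhe(e_ag_agcl: float, ph: float) -> float:
    """E(RHE) = E(Ag/AgCl) + 0.059 * pH + 0.199."""
    return e_ag_agcl + 0.059 * ph + 0.199

def abpe_percent(j_ma_cm2: float, vb_rhe: float) -> float:
    """ABPE (%) = J * Vb * 100 / P_AM1.5, with J in mA/cm^2 and Vb on the RHE scale."""
    return j_ma_cm2 * vb_rhe * 100.0 / P_AM15

vb = ag_agcl_to_rhe(-0.2, 4.5)  # hypothetical bias of -0.2 V vs Ag/AgCl at pH 4.5
print(f"Vb = {vb:.3f} V vs RHE; ABPE = {abpe_percent(2.0, vb):.2f} %")
```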
RESULTS AND DISCUSSION
Effect of Annealing Temperature of CuInS2
CuInS2 semiconductors were prepared by successive electrodeposition of the Cu and In precursors followed by annealing the as-deposited Cu/In in H2S gas at temperatures from 600 to 800 °C. The photocurrent responses of the obtained CuInS2 were then examined in 0.2 M europium solution (as an electron scavenger) under chopped illumination, and the results are depicted in Fig.-1. The figure shows a maximum photocurrent of around 12.2 mA/cm2 at a potential of -0.4 V for CuInS2 annealed at 680 °C, with a good dark current. There is an anomaly in the photocurrent responses at temperatures of 620 and 640 °C. Nevertheless, the photocurrent response tends to be optimum at 680 °C and decreases above that temperature, namely to 5.8 and 5.2 mA/cm2 at annealing temperatures of 750 and 800 °C, respectively (Table-1). CuInS2 annealed at 700 to 800 °C gave a bad dark current, since the dark current was not flat; a good absorber semiconductor should have zero current when there is no light (in the dark). As the temperature increases (Fig.-4), the grain size improves up to 640 °C, with the largest grain size of 38.36 nm, and above that the grain size decreases to 32.48 nm. As seen from the Raman curves, at higher annealing temperatures (>700 °C) CuInS2 was not the only phase present in the thin film. Since the prepared CuInS2 thin films are almost pure, increasing the temperature is effective up to 640 °C; this is also consistent with previous research, which reported annealing of CuInS2 up to a temperature of 550 °C. 28 Figure-5 shows SEM images of CuInS2 thin films annealed at 600, 640, and 750 °C as representatives of the three regions with different grain sizes shown in Fig.-4. The SEM shows that the CuInS2 annealed at 600 °C is porous with a small grain size, whereas the CuInS2 annealed at 640 °C has a bigger grain size and the film looks as if it had melted. At an annealing temperature of 750 °C, the melted-like appearance disappeared, although a decrease in grain size was not apparent in the SEM image. This can be due to the presence of molybdenum covering its surface, as confirmed by the Raman measurement.
Effect of Pt Cocatalyst Deposition
Platinum is a cocatalyst for the hydrogen evolution reaction since it binds hydrogen ions with an ideal Pt−H bond strength, which facilitates the adsorption and reduction of hydrogen ions and allows H2 to be released easily when the reduction process is complete. 20 Figure-6 shows that the deposition of Pt improves the photocurrent and the onset potential compared with bare-CuInS2. The photocurrent increases from 1.5 to 5 mA/cm2 for bare and Pt-deposited CuInS2, respectively, while the onset potential is more positive for Pt-deposited CuInS2 than for bare-CuInS2. However, increasing the deposition time (from 20 s to 1 h) has no significant effect on the photocurrent or the onset potential of Pt-CuInS2.
Effect of Type of Cocatalyst
The effect of the type of cocatalyst was evaluated using Rh as a replacement for Pt. Since Pt has a relatively large work function of ca. 5.65 eV, 29 the formation of a Schottky-type potential barrier that resists electron transfer would be possible. Therefore, Pt can be replaced by Rh as a candidate to reduce the potential barrier because of its relatively small work function (4.98 eV), 29 the possibility of deposition by a photoelectrochemical method similar to that employed for the platinum deposition, and a relatively low overpotential for water reduction comparable to that of Pt. 30,31 Figure-7a shows typical current-potential scans of the Pt- and Rh-covered In2S3/CuInS2 electrodes, respectively. The result shows an appreciable improvement of the photocurrent and also of the onset potential achieved by using the Rh catalyst for the In2S3/CuInS2 electrode, improving the ABPE to more than 2% for the Rh catalyst, as shown in Fig.-7b. Since there is no significant improvement when the Rh catalyst is used instead of the Pt catalyst for the CdS/CuInS2 electrode system, the use of Rh should work well in combination with In2S3 but not with CdS, which has a relatively negative conduction band minimum (CBM). 32 Figure-8 shows the energy diagram of In2S3/CuInS2, Rh, and Pt. The absolute and electrochemical scales are given on the right side.
CONCLUSION
The CuInS2 thin film had an optimum photocurrent response value of 12.2 mA/cm2 when it was annealed at 680 °C. Deposition of platinum on CuInS2 improved the photocurrent compared to bare CuInS2; however, increasing the Pt deposition time had no significant effect on the photoelectrochemical properties of the Pt-CuInS2 photocathodes. Due to the relatively high CBM of the In2S3 buffer layer, direct deposition of the conventional Pt catalyst was found to be not optimal because of the generation of a large Schottky barrier; instead, the use of Rh was beneficial for this system, though it is still not sufficiently improved.

added: 2021-08-03T00:06:17.318Z | created: 2021-01-01T00:00:00.000
{
"year": 2021,
"sha1": "6298addf4a4721e3d17913086e7a31679b0f522c",
"oa_license": null,
"oa_url": "https://doi.org/10.31788/rjc.2021.1425818",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "1918a46e91b4ad7c186fa0892c4e8dfd0e321a4d",
"s2fieldsofstudy": [
"Materials Science",
"Chemistry"
],
"extfieldsofstudy": [
"Materials Science"
]
}
Bilateral Corneal Ghost Vessels in an Otherwise Healthy Child
Article info: Received: 03 Jun 2021 Revised: 23 Jun 2021 Accepted: 29 Jun 2021
Introduction
The cornea is completely avascular, and Corneal Neovascularization (CNV) is always considered a pathologic condition [1]. The etiology of CNV is not completely understood, and a wide range of inflammatory, infectious, and degenerative conditions may induce CNV [2]. Two possibly fundamental mechanisms may play a role in the development of neovascularization of the cornea: vasculogenesis (the de novo formation of new blood vessels by differentiation of circulating bone marrow-derived mesodermal precursors, largely during embryogenesis) and angiogenesis (the formation of new blood vessels from sprouting or splitting of preexisting vasculature). These two aforementioned mechanisms may overlap with each other [3]. Herein, we report an interesting case of a bilateral and almost symmetric pattern of regressed deep corneal stromal neovascularization in a 6-year-old girl with no identifiable underlying disease.
Case Presentation
A 6-year-old girl who was referred to the ophthalmologist due to suboptimal visual acuity in her pre-school screening program was presented to the cornea service for further evaluation. The best-corrected visual acuity was 20/40 in both eyes. Slit-lamp examination revealed regressed blood vessels ("ghost vessels") in the anterior and mid-corneal stroma. Some lumina were large and carried a few red blood cells in the periphery of the cornea, anterior to the limbus, in both eyes (Figure 1). There was no corneal opacity, and corneal thickness was within the normal range in both eyes. Other examinations of the anterior and posterior segments were normal. Confocal scanning microscopy of both corneas demonstrated scattered branching railroad-shaped ghost vessels at the level of the middle and anterior stroma. The endothelium, posterior stroma, and epithelium were unremarkable (Figure 2). She was born full-term, with normal vaginal delivery and normal developmental milestones. Her mother's obstetric history was unremarkable. A full systemic workup revealed no systemic, inflammatory, infectious, or degenerative disorders.
Discussion
Corneal avascularity is required for the preservation of corneal transparency and vision clarity [4]. CNV may occur due to a disrupted balance between angiogenic and antiangiogenic factors, which may cause invasion of new vascular structures into the cornea from the limbus; this can lead to corneal scarring, lipid deposition, and inflammation that may significantly alter visual acuity [5,6].
The cornea remains avascular despite the formation of large arteries and vascular networks in the periocular region. After taking an extensive history and examination, we discovered that this patient had an unremarkable past medical history, without certain risk factors such as inflammatory or infectious diseases after her birth. We assume that an insult during pregnancy may have been involved in the formation of CNV in our patient through exacerbation of vasculogenesis processes.
Previous experimental studies have identified, for the first time, potential pro- and anti-angiogenic factors in the anterior eye of developing avian embryos that may play a role in ocular vasculogenesis and corneal avascularity during embryonic development. Angiogenesis occurs when a disequilibrium between proangiogenic and antiangiogenic stimuli results in an up-regulation of proangiogenic factors, such as VEGF-A (Vascular Endothelial Growth Factor A), FGF1 (Fibroblast Growth Factor 1), and FGF2 (Fibroblast Growth Factor 2), and a downregulation of antiangiogenic agents, such as Sema3E (Semaphorin 3E), Sema3G (Semaphorin 3G), Netrin1, Netrin4, and sFlt1 (soluble fms-like tyrosine kinase 1) [3].
To determine when corneal avascularity is established, Kwiatkowski et al. visualized the vascular patterning of the anterior eye using transgenic quail embryos. On embryonic day 3, angioblasts and primitive blood vessels are present in the periocular region but are excluded from the presumptive cornea. Aggregation of angioblasts leads to the formation of the tubular temporal and nasal ciliary arteries and a "vascular ring" around the corneal periphery [4]. By embryonic day 10, angioblasts form a stream of blood vessels that approaches the limbal region and connects to the conjunctival vasculature to form the limbal vasculature. The angiostatic function of the limbus has been proposed as a mechanism for corneal avascularity [6,7].
Although the interaction between these factors in the cornea is not completely understood, soluble VEGF receptor-1 (sVEGFR1; also known as sFlt-1), which sequesters VEGF-A, is suggested as a key modulator inhibiting VEGF-driven angiogenesis [8].
In addition, as noted by McKenna et al., loss of Nrp1/Sema signaling in the presence of functional Nrp1/VEGF signaling results in angioblast invasion of the presumptive cornea and subsequent vascularization of the developing cornea, revealing mechanisms involved in vascular patterning during embryonic development [9].
There is no physical barrier between the periocular mesenchyme and the presumptive cornea. Thus, we attributed the corneal vascularization in the present case to an intrauterine insult, such as transient exposure to toxic materials, hypoxia, or inflammation during pregnancy, with consequent destruction of the balance between pro-angiogenic and anti-angiogenic factors, resulting in the attraction of angioblasts into the developing cornea, which regressed after elimination of the inciting mechanism.
Compliance with ethical guidelines
There were no ethical considerations to be addressed in this research.
Funding
This research did not receive any grant from funding agencies in the public, commercial, or non-profit sectors.

added: 2021-10-16T18:46:19.263Z | created: 2021-01-01T00:00:00.000
{
"year": 2021,
"sha1": "3c739482e93d6dc5e5b2962ba7ff72b0e0cf5a7f",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.18502/crcp.v6i3.7127",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "3c739482e93d6dc5e5b2962ba7ff72b0e0cf5a7f",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
}
Lung Stereotactic Body Radiation Therapy (SBRT) dose gradient and PTV volume: a retrospective multi-center analysis
Background: The treatment of lung lesions with stereotactic body radiation therapy calls for a highly conformal dose, which is evaluated by a number of metrics. Lung stereotactic body radiation therapy clinical trials constrain a plan's gradient index. The purpose of this work is to describe the dependence of the clinically achievable dose gradient on planning target volume. Methods: Three hundred seventy-four lung stereotactic body radiation therapy treatment plans were retrospectively reviewed and selected for this study. The relationship between R50% and planning target volume size was observed and compared against the RTOG 0915 and 0813 constraints, noting minor and major deviations. Then a least squares regression was used to determine the coefficients for a power functional form of the dependence of gradient measure (GM) on planning target volume size. Results: Of the 317 peripheral lung SBRT plans, 142 exhibited no deviation, 135 exhibited a minor deviation, and 40 exhibited a major deviation according to the RTOG 0915 dosimetric conformality and dose fall-off constraints. A plot of gradient measure versus planning target volume size for peripheral lesions, excluding RTOG 0915 major deviations, is fit with a power function of GM = 0.564 × V^0.215. Conclusions: Using the PTV size and GM relationship we have characterized, treatment plans with PTV < 85 cm3 can be evaluated against our previous plans and given a percentile GM. This relationship and evaluation are useful for volumetric modulated arc therapy lung stereotactic body radiation therapy treatment planning and quality control. Electronic supplementary material: The online version of this article (10.1186/s13014-019-1334-9) contains supplementary material, which is available to authorized users.
Background
In radiation oncology, stereotactic body radiation therapy (SBRT) for lung lesions is an external beam radiation therapy technique that utilizes precise targeting and dose delivery of radiation with acceptable toxicity [1]. The ablative target doses delivered with SBRT are modeled after intracranial stereotactic radiosurgery (SRS). Unlike conventionally fractionated radiation therapy, which achieves the therapeutic window through the relative radiosensitivity of tumor tissue compared to normal tissue, the stereotactic approach achieves the therapeutic window with geometric accuracy and a highly conformal dose distribution [2-4]. Lung SBRT is particularly challenging due to physiological organ and target motion (respiration). The necessary geometric accuracy has been achieved by utilizing advances in patient immobilization, tumor motion assessment, and near real-time imaging studies at the time of treatment [5-7]. The high dose per fraction makes steep dose gradients desirable: good plan quality is characterized by a highly conformal dose distribution and steep dose gradients that are nearly isotropic around the target. Previously, volumetric modulated arc therapy (VMAT) has been shown to offer improved target conformality with shorter treatment times for lung SBRT with both coplanar and non-coplanar delivery compared with conventional 3D conformal treatments [8-10]. An optimal lung SBRT plan achieves target dose conformality while avoiding excessive high-dose and intermediate-dose spillage. For example, the conformality of a plan is characterized by the conformity index (CI), which is the ratio of the prescription isodose volume (PIV), i.e. the volume encompassed by the 100% isodose line (IDL), to the volume of the planning target volume (PTV) [11]. Lung SBRT clinical trials aim for a CI less than 1.2 and utilize a number of other dose metrics [12,13]. The gradient index (GI) is a tool to evaluate intermediate dose fall-off and is the ratio of the volume of half the prescription isodose to the PIV [14]. The clinically achievable GI is dependent on the size of the PTV [13]. R50% is a similar quantity presented in the RTOG 0813 and 0915 lung SBRT protocols [12,13]; it is defined as the ratio of the 50% isodose volume to the PTV volume.
The Eclipse (Varian, Palo Alto, CA) treatment planning system reports gradient measure (GM), which is defined as the difference, in centimeters, of the equivalent sphere radii of the 50 and 100% prescription IDL volumes [15]. Similar to the GI and R50%, this metric has value in assessing the high dose fall off; but unlike GI or R50%, the dependence of clinically achievable GM on PTV size for lung SBRT has not yet been established. The aim of this work is to characterize the clinically achievable GM dependence on PTV size across multiple radiation oncology clinics for the purpose of dosimetric quality control.
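All of these metrics are simple functions of three volumes: the PTV, the PIV, and the 50% isodose volume. The following Python sketch (an illustration with hypothetical volumes, not Eclipse API code) computes CI, GI, R50%, and GM as defined above:

```python
import math

def eq_sphere_radius_cm(volume_cm3: float) -> float:
    """Radius of the sphere with the given volume: V = (4/3) * pi * r^3."""
    return (3.0 * volume_cm3 / (4.0 * math.pi)) ** (1.0 / 3.0)

def plan_metrics(ptv_cm3: float, piv_cm3: float, v50_cm3: float) -> dict:
    """CI = PIV/PTV; GI = V50/PIV; R50% = V50/PTV;
    GM = difference of the equivalent sphere radii of V50 and PIV, in cm."""
    return {
        "CI": piv_cm3 / ptv_cm3,
        "GI": v50_cm3 / piv_cm3,
        "R50%": v50_cm3 / ptv_cm3,
        "GM_cm": eq_sphere_radius_cm(v50_cm3) - eq_sphere_radius_cm(piv_cm3),
    }

# Hypothetical plan: 20 cm^3 PTV, 22 cm^3 prescription isodose volume,
# 90 cm^3 half-prescription isodose volume.
print(plan_metrics(ptv_cm3=20.0, piv_cm3=22.0, v50_cm3=90.0))
```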
Methods
Clinically approved treatment plans were retrospectively reviewed and selected for this study. All plans utilized a coplanar volumetric modulated arc therapy (VMAT) technique with one isocenter per target, with SBRT treatment delivered in one to five fractions. Treatments were collected across four centers within our institution and planned in accordance with the guidelines of RTOG 0813 or 0915 depending on lesion location: central or peripheral (> 2 cm from the proximal bronchial tree). The treatments were planned with Varian Eclipse versions 11 and 13.6, using the Analytical Anisotropic Algorithm (AAA) (version 11 or 13.6) for dose calculation with a grid size of 0.25 cm. All plans were treated on either a Varian TrueBeam or C-Series linear accelerator with a Millennium 120 multileaf collimator (MLC). All plans used 6 MV energy, but some used the higher dose rate 6X-SRS mode, and one of the machines used the flattening-filter-free 6X energy (6X-FFF).
The Varian Eclipse Scripting application programming interface (API) was used to extract treatment and planning quality metrics from each selected case. Specifically, treatment date, center, disease site and location (central or peripheral), prescribed dose, number of fractions, number of fields, monitor units (MU), PTV size (cm3), effective diameter, gradient measure, gradient index, R50%, conformity index, mean dose, max dose, minimum PTV dose, and percent of the PTV receiving 100% of the prescription dose (V100) were extracted or calculated. We then analyzed the relationships between these parameters.
The relationship between R50% and PTV size was observed and compared against the RTOG 0915 and 0813 constraints, noting minor and major deviations. Next, the relationship between GM and PTV size was investigated. Least squares regression was used to determine the coefficients for linear, exponential, logarithmic, and power functional forms of the dependence of GM on PTV size. The linear and exponential functional forms had low R2 values (0.762 and 0.696), while the logarithmic functional form had a better R2 value (0.823) but residuals that did not appear to be randomly distributed. To achieve the greatest R2 value (0.842) and a random residuals distribution, a power functional form was selected, presented in Eq. 1:

GM = A × V^B (1)

where GM is the gradient measure in cm, V is the PTV volume in cm3, and A and B are the unknown coefficients.
Quantile regression, which is more robust than least squares regression when there are outliers in the data, was also used to determine coefficients for Eq. 1 for the 10th, 25th, 50th, 75th, and 90th percentiles. All regression analyses were performed using R version 3.5.1 [16].
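As a rough illustration of this analysis, the sketch below refits Eq. 1 in Python instead of R. It assumes the power model can be linearized by taking logs (ln GM = ln A + B ln V), uses synthetic stand-in data rather than the actual plan database, and uses statsmodels' QuantReg for the quantile fits, so the estimation details may differ from the original analysis:

```python
import numpy as np
import statsmodels.api as sm

# Synthetic stand-in for the plan database: PTV volumes (cm^3) and gradient
# measures (cm) scattered around the published fit GM = 0.564 * V^0.215.
rng = np.random.default_rng(0)
v = rng.uniform(2, 120, 277)
gm = 0.564 * v**0.215 * np.exp(rng.normal(0.0, 0.08, v.size))

# Taking logs linearizes Eq. 1: ln(GM) = ln(A) + B * ln(V).
X = sm.add_constant(np.log(v))
y = np.log(gm)

ols = sm.OLS(y, X).fit()  # least squares fit of the power model
print(f"least squares: GM = {np.exp(ols.params[0]):.3f} * V^{ols.params[1]:.3f}")

# Quantile fits analogous to the percentile curves reported in Table 2.
for q in (0.10, 0.25, 0.50, 0.75, 0.90):
    res = sm.QuantReg(y, X).fit(q=q)
    print(f"q={q:.2f}: A={np.exp(res.params[0]):.3f}, B={res.params[1]:.3f}")
```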
Results
From January 2016 through March 2018, 374 lung SBRT plans were identified: 317 peripheral (85%) and 57 central (15%). Central was defined as being within a 2 cm radius of the airway or mediastinal pleura. PTV volumes ranged from 2.05 to 310.45 cc. A frequency distribution of the PTV volumes is shown in Fig. 1. Averages of the data binned using PTV volume bins from RTOG 0915 are presented in Table 1.
Of the 317 peripheral lung SBRT plans, 142 exhibited no deviation, 135 exhibited a minor deviation, and 40 exhibited a major deviation according to the RTOG 0915 dosimetric conformality and dose falloff constraints. Plan performance relative to RTOG 0915 dosimetric conformality and dose falloff constraints is presented in Fig. 2 for each PTV volume bin.
There were 277 protocol-acceptable peripheral lung SBRT plans. A plot of R50% versus PTV volume (cm3) is presented in Fig. 3, together with a power function fit using least squares regression and a plot of its residuals. The functional form of the relationship between PTV volume V (cm3) and R50% for peripheral lesions is a power function, R50% = A × V^B (Eq. 2), with a standard error of 0.021 and 0.007 for the A and B parameters and a coefficient of determination, R2, of 0.63. The residuals plot appears random up to a PTV volume of approximately 85 cm3 (Additional files 1 and 2). Above 85 cm3, Eq. 2 consistently predicted a smaller R50% than what was calculated in the clinically approved, protocol-acceptable plans. A plot of gradient measure versus PTV volume for peripheral lesions is presented in Fig. 4 with a power function fit using least squares regression. The functional form of that relationship is GM = 0.564 × V^0.215 (Eq. 3), with a standard error of 0.017 and 0.006 for the A and B parameters and an R2 value of 0.850. A plot of the residuals is also included in Fig. 4, and as was the case with R50%, Eq. 3 predicted a smaller gradient measure than what was achieved clinically. The improved coefficient of determination of Eq. 3 signifies that it can explain a greater percentage of the random variation of gradient measure than Eq. 2 can explain of R50%. A notable limitation of Eqs. 2 and 3 is their predictability for PTV volumes of approximately 85 cm3 and greater. Additional plots of the gradient measure versus PTV volume for all peripheral lesion plans including major deviations (n = 317) and all central lesion plans including major deviations (n = 57) are included in additional files.

Fig. 1 Distribution of the size of all PTVs in this study. PTV volume is presented using the RTOG 0915 volume bins. Data are separated between centrally located (within 2 cm of airways or mediastinal pleura) and peripheral.
Quantile regression [17] was performed for the 90th, 75th, 50th, 25th, and 10th percentiles of the relationship between GM and PTV volume for peripheral lesions (Fig. 5). In this case, the 90th percentile means that 90% of the plans had a gradient measure equal to or lower than that value, so the lower the percentile, the steeper the high-dose falloff. The coefficients for each of the percentile curves are presented in Table 2.
Discussion
A predictable relationship exists between PTV volume and gradient measure or R50% for protocol-acceptable, peripheral lung SBRT plans at our institution. Its functional form is presented in Eqs. 2 and 3. A limitation of these equations is their tendency to under-predict the gradient measure and R50% for large PTV volumes (⪆ 85 cm3). Eight of the 277 peripheral lung SBRT plans had a PTV volume greater than 85 cm3. A separate function could be fit to the larger PTV data, but more treatment plans are required in this volume range.

Fig. 2 Plan performance of all PTVs relative to RTOG 0915 dosimetric conformality and dose falloff constraints for peripheral lesions.

Fig. 3 R50% versus the PTV volume for peripheral lesions, excluding RTOG 0915 major deviations. A least squares fit of a power function is presented along with its functional form and R2. Residuals of the predicted R50% minus the actual R50% are presented on the right.
Narayanasamy et al. [18] have studied the relationship between R50% and PTV volume for a sample size of 105 lung SBRT plans. In their paper, the relationship between R50% and PTV volume was found to follow a similar power form, with an R2 of 0.58. Their formula predicts a similar, but larger, R50% (less steep dose dropoff) than the one presented in this work (Eq. 2), which is likely due to the plans being from another institution with different planning policies and procedures. Their planning techniques were a mix of 3DCRT, sliding window IMRT, and RapidArc, while this work only considered RapidArc plans.

Fig. 4 Gradient measure versus PTV volume for peripheral lesions, excluding RTOG 0915 major deviations. A least squares fit of a power function is presented along with its functional form and R2. Residuals of the predicted gradient measure minus the actual gradient measure are presented on the right.

Fig. 5 Gradient measure versus PTV volume along with quantile regression curves for the 90th, 75th, 50th, 25th, and 10th quantiles, for peripheral lesions, excluding RTOG 0915 major deviations.
The functional form of the GM and PTV size relationship offers lung SBRT treatment planners a tool to evaluate the GM of their plan beyond the simple "no deviation, minor deviation, or major deviation" grading described in clinical trials. Additionally, the quantile regression curves allow planners to estimate the percentile of a plan's GM, so they may develop an understanding of the greatest plan outliers and the potential GM improvement from replanning a treatment.
Additionally, the results presented in this work can be used prospectively during treatment planning to inform the creation of a planning pseudo-structure to reduce GM. Since the CI is near unity for most RapidArc plans at our institution, the average distance from the edge of the PTV to the 50% isodose line is approximately the gradient measure. This relationship can be used to create control structures for the purpose of minimizing R50%. Future work will explore a proposed workflow as follows (a sketch of steps 1 and 2 is given after this list):

1. The planner calculates the 25th or 10th percentile gradient measure given the PTV volume;
2. The planner creates a bespoke control ring (Fig. 6) with an inner dimension 1 GM from the PTV (the thickness of this ring is set such that the ring is continuous: 3 mm for a high resolution structure in Eclipse);
3. For optimization purposes, the control ring has an upper constraint of 0% receiving 50% of the prescription dose, with a priority equal to the lower constraint for PTV coverage;
4. After calculation, the planner and physicist benchmark the plan against the gradient measure from Eq. 3.

As part of plan QC, the percentile curves presented in Table 2 may be used to determine how the plan performed relative to the plans in this dataset. Since the planner is aiming for a lower gradient measure and R50%, the lower the percentile, the steeper the dose falloff. Naturally, the plan should be compared against any institutional normal tissue constraints and the RTOG 0915 dosimetric conformality and dose falloff constraints.
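The sketch below makes steps 1 and 2 concrete. It is a hypothetical helper, not code from the paper: because the Table 2 percentile coefficients are not reproduced in this text, it defaults to the least-squares coefficients of Eq. 3, and it is only meaningful for PTV volumes below roughly 85 cm3:

```python
def predicted_gm_cm(ptv_cm3: float, a: float = 0.564, b: float = 0.215) -> float:
    """Eq. 3: GM = 0.564 * V^0.215 (least squares fit; substitute the Table 2
    coefficients here to target a specific percentile, e.g. the 25th or 10th)."""
    return a * ptv_cm3**b

def control_ring_margins_cm(ptv_cm3: float, rind_mm: float = 3.0):
    """The ring's inner surface sits one predicted GM from the PTV; the ring is
    a thin rind (3 mm for a high resolution structure) so it stays continuous."""
    gm = predicted_gm_cm(ptv_cm3)
    return gm, gm + rind_mm / 10.0

inner, outer = control_ring_margins_cm(20.0)  # hypothetical 20 cm^3 PTV
print(f"ring margins from PTV: inner {inner:.2f} cm, outer {outer:.2f} cm")
```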
As shown in the residuals plot, there is variability that is likely from a source other than PTV volume, such as PTV shape, risk structure location, and user variability. More advanced algorithms such as knowledge based planning (KBP) [19] may prove to have a better predictability than the rudimentary method proposed in the previous paragraph since it considers the geometric relationship between the target and nearby organs at risk. However, advanced algorithms such as KBP are not yet widely available or implemented into routine use in the clinic, so this simpler approach should prove helpful for treatment planning and quality control in those settings.
Conclusion
PTV size can be used to predict the gradient measure for PTVs less than approximately 85 cm3. This relationship is useful for RapidArc peripheral lung SBRT treatment planning and quality control purposes when more advanced algorithms such as KBP are not available.
Additional files
Additional file 1: Figure S1. Gradient measure versus PTV volume for all peripheral lesion plans (n = 317) including major deviations. A least squares fit of a power function is presented along with its functional form and R2. (TIF 1714 kb)
Additional file 2: Figure S2. Gradient measure versus PTV volume for all central lesion plans (n = 57) including major deviations. A least squares fit of a power function is presented along with its functional form and R2.

Fig. 6 An example of how to use PTV volume to guide optimization. In this case, the PTV volume is measured, and the predicted gradient measure is determined for that volume. A ring structure is created with an inner radius that is 1 GM from the PTV and an outer radius that is 3 mm larger (3 mm thick rind). This ring is used to control the 50% IDL.

added: 2019-09-05T13:17:30.544Z | created: 2019-09-03T00:00:00.000
{
"year": 2019,
"sha1": "41501ba0749db917db4eed5408bd62265baf1aa1",
"oa_license": "CCBY",
"oa_url": "https://ro-journal.biomedcentral.com/track/pdf/10.1186/s13014-019-1334-9",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "0a1aaeb5a36f5711cea77d4569a5ed7d3e5f7933",
"s2fieldsofstudy": [
"Medicine",
"Physics"
],
"extfieldsofstudy": [
"Medicine"
]
}
Antidepressant-like activity, active components and related mechanism of Hemerocallis citrina Baroni extracts
Hemerocallis citrina Baroni [Asphodelaceae], a traditional herbal medicine, has been widely used for treating depressive disorders in East Asian countries. However, the active compounds and the corresponding anti-depression mechanism have not yet been completely clarified. In this study, the anti-depressive activities of six H. citrina extracts were first evaluated. The results showed that the water extract of H. citrina flowers (HCW) displays significant anti-depressive activity. A total of 32 metabolites were identified from HCW by high-performance liquid chromatography/quadrupole time-of-flight mass spectrometry (HPLC-Q-TOF-MS) and nuclear magnetic resonance (NMR). The anti-depressive activity of the most abundant compound in HCW (rutin) was then evaluated. The results indicated that rutin displays significant anti-depressive activity and is one of the main active ingredients. Finally, the anti-depressive mechanisms of HCW and rutin were investigated based on the intestinal microorganisms. The results showed that HCW and rutin increase the diversity and richness of the intestinal flora and regulate specific intestinal microorganisms, such as the Bacteroides and Desulfovibrio genera, in depressed mice. This work marks the first comprehensive study of the active components, anti-depressive activities, and corresponding mechanisms of different H. citrina extracts, providing a potential basis for developing new antidepressants.
Introduction
Depression, a mental disease with high morbidity and mortality, has become a severe public health problem in the 21st century. According to a prediction by the World Health Organization, depression will become the disease with the heaviest economic burden in the world by 2030 (World Health Organization, 2017; Miller and Campo, 2021). At present, synthetic drugs are the most commonly used and effective treatment in clinical practice, but they have disadvantages such as low response rates, serious side effects, and high prices (Krishnan and Nestler, 2008; Carhart-Harris et al., 2021). Traditional herbal medicines, however, have unique advantages in preventing and treating depression as alternative and complementary therapies. Therefore, the development of new antidepressants from traditional herbal medicines has become a research hotspot.
H. citrina Baroni (also called "Huang Hua Cai" in Chinese) is widely grown in China, Japan, and Korea, and its flower buds are one of the most commonly consumed vegetables in Asia (Ma et al., 2018; Liu et al., 2020). The flower buds of H. citrina were recorded to relieve depression in the medicinal book "Compendium of Materia Medica", a famous Chinese encyclopedia of medicine written by Li Shizhen in the Ming dynasty (Li et al., 2017; Xu et al., 2020; Qing et al., 2021a). Modern pharmacology has also proved that the flower bud extract of H. citrina has significant antidepressant activity, with polyphenols and flavonoids regarded as the main active components (Lin et al., 2013; Xu et al., 2016). However, the specific active ingredients in the extract of H. citrina that may display prominent antidepressant-like activity have rarely been identified and need further investigation.
The active constituents undergo successive changes during plant growth, and the metabolites differ between fresh and dry flower buds of H. citrina (Qing et al., 2017a; Yu et al., 2020; Qing et al., 2021b). In some previous studies (Du et al., 2014; Xu et al., 2016), a chronic unpredictable mild stress (CUMS) model was used to evaluate the anti-depressive effect of H. citrina using only one type of flower bud (mainly the dried flower buds), which may leave a series of active constituents neglected and unevaluated. In the present study, we comprehensively evaluate the antidepressant-like activities of extracts produced from the dried flower buds, fresh flower buds, and flowers of H. citrina (Supplementary Figure S1).
The antidepressant activity of H. citrina extracts and the related mechanisms have been investigated in previous studies. H. citrina extracts could increase the levels of monoamine neurotransmitters, such as 5-hydroxytryptamine (5-HT), dopamine (DA), and norepinephrine (NE), in the brain of depressed mice (monoamine hypothesis) (Gu et al., 2012; Lin et al., 2013; Xu et al., 2016; Qing et al., 2021a). In addition, the extracts of H. citrina were able to increase the content of BDNF (neurotrophic hypothesis) and reduce IL-1β, IL-6, TNF-α, and malondialdehyde (MDA) levels (stress hypothesis) in the brain of depressed mice. With the development of research on human health and gut microbes, depression has been found to be inextricably linked with changes in intestinal microorganisms. However, the relationships between the antidepressant-like activity of H. citrina extracts and variations in the intestinal microorganisms have rarely been studied and need further investigation.
In this study, the antidepressant-like activities of 6 different extracts from H. citrina were first assessed using a CUMS model. Then, the main chemical constituents of the active extract were identified by HPLC-Q-TOF-MS and NMR technology. Finally, the mechanisms of the antidepressant-like activity were investigated based on the intestinal flora.

To evaluate the extracts of the flowers and fresh flower buds (Supplementary Figure S1), the water and 80% ethanol extracts of both parts (a total of 4 different extracts, each at low and high dose, namely WHCWL, WHCWH, HCWL, HCWH, WHCEL, WHCEH, HCEL, and HCEH) were employed. The results showed that the sucrose preference test (SPT) index of the model control group was significantly lower (p < 0.05) compared with the normal control group, which indicated that the CUMS mouse model was successfully established (Figure 1A). The SPT index of the different-dose HCW groups (HCWL and HCWH), the WHCW low-dose group (WHCWL), and the fluoxetine hydrochloride (FH) group was significantly increased compared to that of the model group (p < 0.01 or p < 0.05). According to the SPT index, HCW displayed stronger anti-depressive activity than the positive control at the corresponding dose.
Compared with the normal control group, the ingestion latency test (ILT) time of mice in the model control group was significantly prolonged (p < 0.01); however, the ILT index in the FH group was significantly shorter than that in the model group (p < 0.01) (Figure 1B). The ILT of the depressed mice in the low and high-dose HCW groups (HCWL and HCWH) was significantly decreased (p < 0.05) compared to the model control group, and the high-dose HCW group had a shorter ILT index than the low-dose group. In addition, the ILT of the depressed mice in the high-dose WHCW (WHCWH), WHCE (WHCEH), and HCE (HCEH) groups was significantly decreased (p < 0.05 or p < 0.01) compared to the model control group (Figure 1B).
Compared with the normal control group, the activity time of the model control group was significantly decreased (p < 0.05), and the resting time was significantly prolonged (p < 0.01), which indicated that the CUMS mouse model was successfully established (Figures 1C,D). Compared with the model group, the activity time was significantly prolonged (p < 0.05) and the resting time was significantly decreased (p < 0.05) in the low and high-dose HCW groups. The low-dose WHCW (WHCWL), high-dose WHCW (WHCWH), low and high-dose WHCE (WHCEL and WHCEH), and FH groups displayed anti-depressive activities similar to those of the HCW groups (Figures 1C,D).

FIGURE 1
The effects of different dose extracts of H. citrina flowers and fresh flower buds on the behaviors of CUMS mice. (A) Sucrose preference test, (B) ingestion latency test, (C) tail suspension activity test, and (D) tail suspension still time test. Data are reported as mean ± SD. For statistical significance, #p < 0.05, ##p < 0.01 compared with the normal control group; *p < 0.05, **p < 0.01 compared with the model control group. NC, normal group; MC, model group; FH, fluoxetine hydrochloride group; WHCWL and WHCWH, low and high-dose water extracts of fresh flower buds; HCWL and HCWH, low and high-dose water extracts of flowers; WHCEL and WHCEH, low and high-dose 80% ethanol extracts of fresh flower buds; HCEL and HCEH, low and high-dose 80% ethanol extracts of flowers; low-dose, 200 mg/kg; high-dose, 500 mg/kg.
The results of the SPT, ILT, and tail suspension test (TST) experiments indicated that the low and high-dose HCW groups display significant anti-depressive activity, and the water extracts of the flowers (HCWL and HCWH) showed better activities than the ethanol extracts (HCEL and HCEH). In addition, the extracts of the flowers (HCWL, HCWH, HCEL, and HCEH) display more significant anti-depressive activities than those of the flower buds (WHCWL, WHCWH, WHCEL, and WHCEH).

To further compare the flowers with the dried flower buds (Supplementary Figure S1), the water and 80% ethanol extracts of both samples (a total of 3 different extracts, each at low and high dose, namely HCWL, HCWH, DHCWL, DHCWH, DHCEL, and DHCEH) were employed. The results showed that the SPT of the model control group was significantly lower (p < 0.05) compared with the normal control group, which indicated that the CUMS mouse model was successfully established (Figure 2A). Compared to the model group, the SPT of the low and high-dose HCW and DHCW groups (HCWL, HCWH, DHCWL, and DHCWH), the high-dose DHCE group (DHCEH), and the FH group was significantly increased (p < 0.05).
As shown in Figure 2B, the ILT of mice in the model control group was significantly prolonged (p < 0.05) compared to the normal control group, which also indicated that the CUMS mouse model was successfully established. The ILT of mice in the low and high-dose HCW groups (HCWL and HCWH) and the positive control group (FH) was significantly decreased (p < 0.05) compared with the model control group. However, the ILT of mice in the other four groups (DHCWL, DHCWH, DHCEL, and DHCEH) was not significantly decreased.
Compared with the normal control group, the activity time of the model control group was significantly decreased (p < 0.05). The results of the SPT, ILT, and TST in the second anti-depressive experiment indicated that the low and high-dose HCW groups (HCWL and HCWH) also display significant anti-depressive activity, whereas the water and 80% ethanol extracts of the dried flower buds did not show comparable effects. In summary, HCW displayed significant anti-depressive activity in both anti-depressive experiments, and the extracts of flowers showed more potent anti-depressive activity than those of fresh and dry flower buds.
Analysis and isolation of primary metabolites from H. citrina flowers
The HCW and HCE of H. citrina were preliminarily analyzed by HPLC-Q-TOF-MS to find the main ingredients contributing to the antidepressant activity. A total of 32 high-level metabolites, including 18 flavonoids, 7 chlorogenic acid-type compounds, 3 polyphenols, 2 acetamide alkaloids, and 2 diterpenoid saponins, were screened and tentatively identified by their tandem mass spectrometry (MS/MS) data (Table 1 and Figure 3). The highest-content ingredient (compound 15) was then further isolated by the MS-guided isolation method (Qing et al., 2014; Qing et al., 2016; Qing et al., 2017b), and its structure was unambiguously identified by nuclear magnetic resonance (NMR) data. The specific structural identification of the primary metabolites by HPLC-Q-TOF-MS and NMR is as follows.

FIGURE 4
The MS/MS spectra of standard 7 (A) and metabolite 6 (B) in ESI− mode, and the corresponding fragmentation behaviors.

2.4 Identification of the metabolites by high-performance liquid chromatography/quadrupole time-of-flight mass spectrometry

HPLC-Q-TOF-MS is a fast and sensitive tool widely used for the comprehensive screening and identification of plant metabolites. In this study, each high-level compound that appeared as a prominent peak in the total ion chromatogram (TIC) or the ultraviolet chromatogram was screened. The MS/MS data of the metabolites were obtained by targeted MS/MS methods, and their structures were determined from their characteristic fragmentation behaviors. Take metabolites 6 and 7 as examples. Compound 7 was unambiguously identified as chlorogenic acid by comparing its retention time, MS, and MS/MS data with the standard. The fragmentation pathways of compound 7 were investigated in detail (Figure 4A), and compound 6 showed fragmentation behaviors similar to the standard (Figure 4B). The difference in m/z value between compounds 6 and 7 was 15.9942 Da, which indicated that one of the hydroxyl groups in compound 7 was replaced by a hydrogen atom to form the structure of 6. In the MS/MS spectrum of compound 6 (Figure 4B), high-abundance ions at m/z 163.0379 and 119.0463 were formed, which demonstrated that one of the hydroxyl groups connected to the benzene ring was replaced; therefore, compound 6 was tentatively identified as 5-O-p-coumaroylquinic acid. In addition, metabolites 9, 21, and 30 were unambiguously identified as caffeic acid, rutin, and quercetin, respectively, by comparing their retention times, MS, and MS/MS data with the standards. The remaining primary metabolites in the HCW were also identified by their MS/MS spectra (Supplementary Figure S2; Table 1).
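The elemental reasoning behind the 15.9942 Da difference can be verified with simple arithmetic; the following snippet (illustrative only) compares the observed difference with the monoisotopic mass of one oxygen atom, consistent with replacing one hydroxyl group by a hydrogen atom:

```python
# Illustrative check: the m/z difference between metabolites 7 and 6 matches
# the monoisotopic mass of one oxygen atom (one -OH replaced by -H loses one O).
M_O = 15.9949            # monoisotopic mass of 16O, Da
observed_diff = 15.9942  # m/z difference between compounds 7 and 6, Da
print(f"deviation from one O atom: {abs(observed_diff - M_O) * 1000:.1f} mDa")
```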
Isolation and identification of compound 15
In order to further determine the structures of the primary metabolites, MS-guided isolation, a method developed in our laboratory (Qing et al., 2016; Qing et al., 2017b), was employed. Compound 15 was obtained from HCW, and its structure was unambiguously identified by NMR data. Compound 15 was obtained as a yellow amorphous powder, and its spectroscopic data were consistent with the literature (Siewek et al., 1984). Compound 15 is reported for the first time from H. citrina.
In summary, flavonoids (14-32) and chlorogenic acid-type compounds (2-4, 6-8, and 13) were the primary metabolites of the HCW and play a crucial role in its anti-depressive activity. Rutin (21) was the highest-content metabolite of HCW, and compounds 15 and 23 were also high-level metabolites (Figure 3B) that may have potential anti-depressive activity and need further investigation.
The anti-depressive activity of rutin (compound 21)
In order to find the anti-depressive active components of HCW, the anti-depressive activity of the main metabolite (21) was evaluated. The results showed that the SPT index of the model control group was significantly lower (p < 0.05) than that of the normal control group, which indicated that the CUMS mouse model was successfully established (Figure 5A). Compared to the model group, the SPT index of the different doses of rutin (RTL: 0.7 mg/kg, RTM: 1.8 mg/kg, RTH: 6.3 mg/kg, and RTE: 10.0 mg/kg; see Supplementary Table S1) and the FH group was significantly increased (p < 0.05).
As shown in Figure 5B, the ILT of mice in the model control group was significantly prolonged (p < 0.05) compared to the normal control group, which also indicated that the CUMS mouse model was successfully established. The ILT of mice in the RTH and RTE groups and the FH group decreased significantly (p < 0.05) compared to the model control group.
Compared with the normal control group, the resting time of the model control group was significantly prolonged (p < 0.05), which showed that the CUMS mouse model was successfully established. The resting time of mice in the FH, RTM, RTH, and RTE groups was significantly decreased (p < 0.05) compared with the model group. However, the tail suspension activity experiment failed (Figures 5C,D).
The results of the SPT, ILT, and TST experiments indicated that RTH and RTE have significant anti-depressive activity, which demonstrates that rutin is one of the main anti-depressive active compounds of HCW. The anti-depressive activities of other high-level compounds, such as metabolites 15 and 23, need further evaluation.
Effect of H. citrina flowers on the intestinal flora
Depression is a complicated and comprehensive mood disease, and its pathogenesis is still not completely clear. More and more studies have shown that the intestinal flora affects not only gastrointestinal physiology but also the function and behavior of the central nervous system through the microbiota-intestinal-brain axis (Collins et al., 2012; Rogers et al., 2016). However, the relationships between the antidepressant-like activity of H. citrina extracts and variations in the intestinal microorganisms have rarely been studied. Therefore, we performed 16S rRNA gene sequencing to determine the effects of two H. citrina extracts (low-dose HCW and HCE) on the gut microbiota of the depressed mice.
H. citrina flowers increase the diversity and richness of the intestinal flora
As the depth of sequencing increases, the rarefaction curves of all the samples approach the saturation plateau, indicating that the sequencing data cover all species in the samples (Figure 6A). The Venn diagram compares the differences in OTUs between groups. A total of 1044 OTUs were obtained from the five groups of intestinal flora sequencing data. As shown in Figure 6B, the five groups shared 430 OTUs. Unique OTUs were observed in the normal control (22), CUMS model control (48), fluoxetine (11), HCE (93), and HCW (181) groups. α diversity includes the ACE, Chao1, Shannon, and Simpson indices, which are intended to represent the community's richness and diversity (Liang et al., 2018; Perxachs et al., 2022). As shown in Figure 6C, the HCE and HCW groups exhibited an increase in the alpha diversity ACE index compared with the model group (p < 0.05), and the HCE group showed an increase in the Chao1 index compared with the model group (p < 0.05). The results showed that HCW and HCE treatment improved the diversity and richness of the mouse gut microbiota, which were decreased by depression. Principal coordinates analysis (PCoA) presented the gut microbiota communities in mice from the five groups, which were divided into different quadrants (Simpson et al., 2021). The model group was separated from the normal group in PCoA space, and the normal group and HCW group exhibited a certain polymerization tendency (Supplementary Figure S4). The results showed that stress stimuli decrease the enrichment and diversity of the gut microbiota, and that HCW can reverse that phenomenon.
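For readers unfamiliar with these alpha-diversity indices, the sketch below shows how Shannon, Simpson, and Chao1 values are computed from a vector of OTU counts; it is a minimal illustration with made-up counts, not the sequencing pipeline used in this study:

```python
import numpy as np

def shannon(counts: np.ndarray) -> float:
    """Shannon index H' = -sum(p_i * ln p_i) over non-zero OTU proportions."""
    p = counts[counts > 0] / counts.sum()
    return float(-(p * np.log(p)).sum())

def simpson(counts: np.ndarray) -> float:
    """Simpson diversity 1 - sum(p_i^2); higher values mean more diversity."""
    p = counts / counts.sum()
    return float(1.0 - (p**2).sum())

def chao1(counts: np.ndarray) -> float:
    """Bias-corrected Chao1 richness: S_obs + F1*(F1-1) / (2*(F2+1)),
    where F1 and F2 are the numbers of singleton and doubleton OTUs."""
    s_obs = int((counts > 0).sum())
    f1 = int((counts == 1).sum())
    f2 = int((counts == 2).sum())
    return s_obs + f1 * (f1 - 1) / (2 * (f2 + 1))

counts = np.array([120, 80, 40, 10, 5, 2, 1, 1, 0])  # made-up OTU counts
print(shannon(counts), simpson(counts), chao1(counts))
```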
H. citrina flowers regulate the abundance of specific intestinal flora
The effects of CUMS, HCW, and HCE on the composition and function of the intestinal flora were analyzed via 16S rRNA sequencing. The community composition was analyzed to obtain the abundance and diversity of each species (Pu et al., 2022). At the phylum level, Firmicutes, Bacteroidetes, and Proteobacteria were predominant in all samples but varied in their abundances (Figure 7A). The abundance of Firmicutes in the CUMS model group (MC) was significantly higher than that of the normal group (NC) and the HCW group (p < 0.01), which indicated that HCW could decrease the abundance of Firmicutes in the CUMS model group. The abundance of Bacteroidetes in the CUMS model group was significantly lower than that of the normal group and the HCW group (p < 0.01), which demonstrated that HCW could increase the abundance of Bacteroidetes in the CUMS model group (Figure 7B). In previous studies, the diversity and abundance of the intestinal flora in depressed patients were decreased (Naseribafrouei et al., 2014; Zheng et al., 2016; Lin et al., 2017). Compared to healthy individuals, at the phylum level, the abundance of Bacteroidetes was decreased and the level of Firmicutes was increased in the intestinal flora of depressed patients. In this study, HCW could increase the abundance of Bacteroidetes and decrease the level of Firmicutes in the intestinal flora of depressed mice at the phylum level, which indicated that HCW has the potential to be developed as an antidepressant by regulating the abundance of Firmicutes and Bacteroidetes at the phylum level.
At the genus level, Bacteroides and Desulfovibrio were the predominant genera in all samples but varied in their abundances (Figure 8A). The abundance of Bacteroides was significantly decreased when the normal mice became depressed, while its abundance was significantly increased after administration of FH and HCW. The abundance of Desulfovibrio was significantly increased when the normal mice became depressed, while its abundance was significantly decreased after administration of HCW (Figure 8B). The above results indicated that HCW regulates the abundance of Bacteroides and Desulfovibrio at the genus level to achieve its antidepressant activity through the microbiota-intestinal-brain axis. Gamma-aminobutyric acid (GABA) is an important neurotransmitter used to transmit signals in the synapses of the nervous system. In previous studies (Ironside et al., 2021; Prévot and Sibille, 2021), the level of GABA was significantly decreased in depressive patients, including patients with major depressive disorder (MDD). Previous studies also indicated that intestinal flora belonging to the genus Bacteroides can produce GABA in the gut, which affects the synapses of the nervous system in the brain via the microbiota-intestinal-brain axis (Strandwitz et al., 2019; Izuno et al., 2021; Otaru et al., 2021). In this study, HCW significantly increased the abundance of the genus Bacteroides in the gut of depressed mice, which indicates that HCW exerts its antidepressant activity by increasing the abundance of Bacteroides and thereby raising the level of GABA.
The genus Desulfovibrio can produce lipopolysaccharide, which disrupts the intestinal barrier and leads to the production of inflammatory factors. In previous studies (Haroon and Miller, 2017; Zhu et al., 2019), the abundance of Desulfovibrio was significantly increased in the intestinal flora of depressed mice, which is in accordance with the results of this study. Inflammatory factors such as IL-1β, IL-6, and TNF-α can damage the epithelial cells of the gut and affect the frontal cortex and hippocampus of the brain through the blood circulation, which can lead to depression (Chudzik et al., 2021; Morais et al., 2021). In this study, HCW significantly decreased the level of Desulfovibrio in depressed mice. The potential mechanism involves decreasing the level of lipopolysaccharide produced by Desulfovibrio and thereby reducing the content of inflammatory factors in the gut, blood, and brain of depressed mice.
Effect of rutin on the intestinal flora
Rutin (21) is a polyphenolic compound that has been proven to have antidepressant activity. However, the relationship between the antidepressant-like activity of rutin and variations in intestinal microorganisms has rarely been reported. Therefore, we performed 16S rRNA gene sequencing to investigate the effects of rutin (high-dose RT, namely RTE) on the gut microbiota of depressed mice.
Rutin increases the diversity and richness of the intestinal flora
According to the sample numbers and species OTUs, the rarefaction curves of all samples had reached a plateau, which indicated that the sequencing data cover all species in the samples (Supplementary Figure S5A). A total of 1,205 OTUs were obtained from the sequencing data of the intestinal flora of the four groups. As shown in Supplementary Figure S5B, the four groups shared 792 OTUs. Unique OTUs were observed in the normal control (35), CUMS model control (13), FH (21), and RT (33) groups. As shown in Supplementary Figure S5C, the model group was separated from the normal group in PCoA space, and the normal group and RT group exhibited a certain tendency to cluster together. The FH and RT groups exhibited an increase in the alpha diversity of the ACE, Chao1, and Shannon indices compared with the model group (p < 0.05), and the model group showed decreased alpha diversity of the ACE and Chao1 indices compared with the normal group (p < 0.05) (Supplementary Figures S5A-D). The results showed that FH and RT could improve the diversity and richness of the gut microbiota of depressed mice.
Rutin regulates the abundance of specific intestinal flora
At the phylum level, Firmicutes, Bacteroidetes, and Proteobacteria were predominant in all samples but varied in their abundances (Supplementary Figure S7A). Although the variation trend of the abundance of these microbiota was consistent with that of the HCW group, the RT group did not significantly regulate the levels of Firmicutes and Bacteroidetes in the depressed mice at the phylum level (p > 0.05). At the genus level, the RT group could increase the abundance of Bacteroides and decrease the level of Desulfovibrio (Supplementary Figure S7B), which was in accordance with the HCW group. However, the effect of the RT group on the levels of Bacteroides and Desulfovibrio in depressed mice was weaker than that of the HCW group. The results show that rutin is one of the main antidepressant ingredients in HCW.
Preparations of H. citrina extracts
Dried and fresh flower buds and flowers of H. citrina (Mengzihua, 100 kg each, Supplementary Figure S1) were collected from Qidong County, Hunan Province, China, and were unambiguously identified by Doctor Zhixing Qing (Hunan Agricultural University). The extraction experiments used water and 80% ethanol as the extraction solvents. The ratio of material to liquid was 6:1, and the extraction time was 24 h at room temperature. The extraction solvents were concentrated under reduced pressure and vacuum-dried. Finally, 6 different extracts were obtained for the antidepressant experiments: the water extract of flowers (HCW), water extract of fresh flower buds (WHCW), 80% ethanol extract of flowers (HCE), 80% ethanol extract of fresh flower buds (WHCE), water extract of dried flower buds (DHCW), and 80% ethanol extract of dried flower buds (DHCE).
Chemicals
Acetonitrile and formic acid (HPLC-grade) were purchased from Merck (Darmstadt, Germany) and ROE (Newark, New Castle, United States), respectively. Deionized water was purified using a Milli-Q system (MA, United States). All of them were used for HPLC-Q-TOF-MS analysis. Three standards, including chlorogenic acid (7)
High-performance liquid chromatography/quadrupole time-of-flight mass spectrometry conditions
Chromatography was performed using an Agilent 1290 HPLC system (Agilent Technologies, United States) consisting of an auto-sampler, a thermostatted column compartment, and a tunable UV detector. Separation was carried out on an XAqua C18 column (150 mm × 2.1 mm, 2.8 µm; Accrom Technologies Co. Ltd., China). The elution system was 0.1% formic acid in water (A) and 0.1% formic acid in acetonitrile (B). The linear gradient elution program was as follows: 0-30 min, 5%-45% B; 30-40 min, 45%-90% B. The sample injection volume was 5 μl. The flow rate was set at 0.3 ml/min, and the column temperature was maintained at 30°C. Mass spectrometric experiments were performed using a 6530 Q-TOF/MS accurate mass spectrometer (Agilent Technologies, United States) in negative ionization mode, and TOF data were acquired between m/z 100 and 1000 in centroid mode. The conditions of the Q-TOF-MS were optimized as follows: sheath gas temperature, 350°C; sheath gas flow, 12 L/min; gas temperature, 300°C; drying gas, 10 L/min; fragmentor voltage, 150 V; skimmer voltage, 65 V; capillary voltage, 4000 V. The TOF mass spectrometer was continuously calibrated using a reference solution (masses at m/z 112.9855 and 966.0007) to obtain high-accuracy mass measurements. Targeted MS/MS experiments were performed using variable collision energy (10-50 eV), which was optimized for each metabolite.
Experimental animal
Male ICR mice (weight range 18.0-22.0 g) were purchased from Hunan Slake Jing-da Experimental Animals Co., Ltd. (certificate number 43004700048590). The experimental animal production license number is SCXK (Xiang) 2011-0003, and the use license number is SYXK 2015-06. Animals were housed under a standard 12:12 h light/dark schedule with the light on at 8:00 a.m. and given free access to tap water and food pellets. The ambient temperature was controlled at 22°C ± 2°C, and the animals were given standard chow and water ad libitum for the duration of the study. All experiments and procedures were carried out according to the Regulations of Experimental Animal Administration issued by the State Committee of Science and Technology of China.
Establishment of depression model by chronic unpredictable mild stress
Except for the 10 mice in the normal control group, the mice were subjected to chronic unpredictable mild stress, including fasting (12 h), water deprivation (12 h), forced swimming (10 min), strobe light (12 h), noise (30 min), restraint (12 h) (in a 50 ml centrifuge tube with a diameter of 3.0 cm, a length of about 10 cm, and 6 to 7 vents with a diameter of 0.5 mm), cage tilting (12 h), wet cage (12 h), reversal of day and night, etc. (for the specific schedule, see Supplementary Table S2). Animals were deprived of food and water during restraint (Lu et al., 2019).
Drug administration
After 28 days of continuous modeling, the animals were randomly divided into several groups (Supplementary Tables S1, S3, S4; three antidepressant trials were conducted in this study) according to the results of the sucrose preference test and body weight. The normal and model control groups were given distilled water by intragastric administration, and the other groups were given the corresponding extract solutions at a volume of 20 ml/kg for 35 consecutive days. After the last administration, the mice were tested with the SPT, ILT, and TST. All antidepressant experimental procedures are shown in Supplementary Figure S9.
Sucrose preference test
The sucrose preference test was divided into a training period and a test period. In the training period, during the two days before the test, two bottles of 1% sucrose solution were given to the animals for the first 24 h, and one bottle of 1% sucrose solution was then replaced with a bottle of pure water for the next 24 h. The animals were not deprived of food and water during the 8 h before the test. During the test period (15 h), mice were given a bottle of 1% sucrose solution and a bottle of pure water, and the positions of the two bottles were swapped to avoid the influence of position preference. At the end of the test, the sucrose preference index was calculated as: sucrose preference index (%) = sucrose consumption/(sucrose consumption + pure water consumption) × 100%.
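As a worked example of the index above, the following snippet computes the sucrose preference from measured consumptions (function name and the numeric values are illustrative only):

def sucrose_preference(sucrose_g, water_g):
    # sucrose preference (%) = sucrose / (sucrose + water) * 100
    return 100.0 * sucrose_g / (sucrose_g + water_g)

print(sucrose_preference(6.4, 2.1))  # about 75.3%, a typical non-anhedonic value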
Ingestion latency test
The ingestion latency test was carried out after the SPT. The experiment lasted 2 days: the first day was the adaptation period, in which the animals were placed in a square open box to adapt for 10 min. After fasting for 24 h, the ingestion latency test was carried out. A food pellet was placed in the center of the open box, and the mice were put back to access the food pellet (mice were placed in the same position and direction each time). The time between when the animal was put into the box and its first ingestion of food was recorded.
Tail suspension test
After the ILT, the mice were subjected to the tail suspension test. The mice were fixed on the tail suspension device, with a baffle separating their lines of sight to avoid interference from other mice. The head was about 5 cm from the table, so the mice had nothing to climb onto or grasp. The activity and rest times of the mice were recorded over the next 4 min.
Collection of intestinal solutes
All mice were anesthetized in a container filled with ether gas after the behavioral tests and decapitated quickly to minimize pain. The intestines of the experimental mice were removed under aseptic conditions, and the intestinal solutes were scraped out with an aseptic knife. The intestinal solutes were collected into sterilized centrifuge tubes on an ice bag and stored in a refrigerator at −80°C for 16S rRNA analysis.
DNA extraction and PCR amplification
The DNA of the mouse intestinal samples was extracted according to the instructions of the E.Z.N.A.® Soil DNA Kit. Each DNA sample was diluted to 1 ng/µl with sterile water, and its purity and concentration were measured via agarose gel electrophoresis. PCR was performed using TransGen AP221-02 with barcoded specific primers (16S V4 region primers 515F and 806R) and the related enzymes. The PCR amplification products were detected by 1% agarose gel electrophoresis, and the target bands were recovered by gel excision. The qualified PCR amplification products were sent to Shanghai Meiji Biomedical Technology Co., Ltd. for sequencing.
Library construction and computer sequencing
The library was constructed using the NEXTFLEX Rapid DNA-Seq Kit. The V4 region of the 16S rRNA gene was analyzed by high-throughput sequencing on the Illumina HiSeq platform (NovaSeq PE250). After sequencing was completed, the data were subjected to low-quality read removal, splicing, filtering, and chimera removal to obtain valid data. Sequences were analyzed using the Quantitative Insights into Microbial Ecology software and the UPARSE pipeline.
3.8.7 Statistical method

SPSS 16.0 was used for the statistical analysis of the experimental data, and the statistical significance level was set at p ≤ 0.05. The data are expressed as mean ± standard deviation. Levene's test was used to test the homogeneity of variance. Multiple samples were compared through one-way ANOVA, and the LSD test was used for post hoc statistical analysis.
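The analysis described above can be approximated in open-source software as follows. This is a hedged sketch (SPSS was actually used in the study): Fisher's LSD is emulated here by unadjusted pairwise t-tests performed only after a significant omnibus ANOVA, and the group names and values are toy data.

from itertools import combinations
from scipy import stats

groups = {
    "NC": [78, 74, 81, 69, 77],   # toy sucrose-preference values per group
    "MC": [52, 49, 58, 55, 50],
    "HCW": [70, 66, 73, 68, 71],
}
f, p = stats.f_oneway(*groups.values())          # one-way ANOVA across all groups
print(f"ANOVA: F={f:.2f}, p={p:.4f}")
if p <= 0.05:                                    # LSD-style pairwise comparisons
    for a, b in combinations(groups, 2):
        t, pp = stats.ttest_ind(groups[a], groups[b])
        print(f"{a} vs {b}: p={pp:.4f}")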
Statistical differences and biological significance were considered in the evaluation.
Conclusion
In this study, the antidepressant activities of six extracts of H. citrina (HCW, HCE, WHCW, WHCE, DHCW, and DHCE) were evaluated in depressed mice induced by the CUMS model. The results showed that the extracts of H. citrina flowers (HCW and HCE) display significant antidepressant activities and that HCW has the strongest effect among the extracts. A total of 32 compounds, mainly flavonoids and chlorogenic acid-type compounds, were identified by HPLC-Q-TOF-MS/MS and NMR. Among them, the content of rutin (compound 21) was the highest. The antidepressant activity of rutin was then also evaluated; the results showed that this compound displays significant antidepressant activity and is one of the main active compounds of HCW. Finally, the 16S (V3+V4) region amplicons of the mouse intestinal flora were sequenced to explore the mechanisms of HCW and rutin on the intestinal microflora. The results indicated that HCW and rutin could increase the diversity and richness of the intestinal flora and regulate specific intestinal microorganisms of the depressed mice. In summary, the water extract of H. citrina flowers (HCW) has significant antidepressant activity; its main active metabolites were determined, and the related mechanism has been proposed.
Data availability statement
The original contributions presented in the study are included in the article/Supplementary Material; further inquiries can be directed to the corresponding authors.
Ethics statement
The animal study was reviewed and approved by Hunan Laboratory Animal Central (IACUC-2018 (3) 033). | 2022-08-30T14:04:48.292Z | 2022-08-29T00:00:00.000 | {
"year": 2022,
"sha1": "7bfb30ca6ecadb37fc720f21562b7ea9fd8af0ba",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Frontier",
"pdf_hash": "7bfb30ca6ecadb37fc720f21562b7ea9fd8af0ba",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
215744798 | pes2o/s2orc | v3-fos-license | Local Orientation-Preserving Symmetry Preserving Operations on Polyhedra
Unifying approaches by, amongst others, Archimedes, Kepler, Goldberg, Caspar and Klug, Coxeter, and Conway, and extending a previous formalisation of the concept of local symmetry preserving (lsp) operations, we introduce a formal definition of local operations on plane graphs that preserve orientation-preserving symmetries, but not necessarily orientation-reversing symmetries. These operations include, e.g., the chiral Goldberg and Conway operations as well as all lsp operations. We prove the soundness of our definition, and we introduce an invariant which can be used to systematically construct all such operations. We also give sufficient conditions for an operation to preserve the connectedness of the plane graph to which it is applied.
Introduction
Symmetry preserving operations on polyhedra have a long history, from Plato and Archimedes to Kepler [Kep19], Goldberg [Gol37], Caspar and Klug [CK62], Coxeter [Cox71], Conway [CBG08], and many others. Notwithstanding their utility, until recently we had no unified way of defining or describing these operations without resorting to ad-hoc descriptions and drawings. In [BGS17] the concept of local symmetry preserving operations on polyhedra (lsp operations for short) was introduced. These replace each chamber in the barycentric subdivision of a polyhedron with the same patch, which results in a new polyhedron while preserving the original symmetries. This established a general framework in which the class of all lsp operations can be studied, without having to consider individual operations separately. It was shown that many of the most frequently used operations on polyhedra fit into this framework.
However, some notable operations were not included. Most Goldberg operations and some of the extended Conway operations, like snub (see Figure 1), gyro, propeller, etc., are chiral, so they only preserve orientation-preserving symmetries. In order to also cover these, we can generalize lsp operations by decorating double chambers instead of single chambers, similar to what Goldberg did in [Gol37] for Goldberg operations. We call these local orientation-preserving symmetry preserving (lopsp) operations. In this paper, we formalize this approach for lopsp operations as [BGS17] did for lsp operations. In the remainder of this section we introduce a combinatorial characterization of plane graphs and the concept of chamber systems. These allow us to define lopsp operations in Section 2. We prove that each lopsp operation can be represented by a double chamber patch, but in contrast to the single chamber patch of an lsp operation, this one is not necessarily unique. We introduce the double chamber decoration of a lopsp operation, which can be easily constructed from the double chamber patch but is independent of the chosen patch, and is therefore unique for each lopsp operation. After some auxiliary results, we prove that the double chamber decoration is an invariant for equivalent lopsp operations. This makes it possible to identify a lopsp operation with its double chamber decoration. In Section 3 we give a combinatorial characterization of double chamber decorations independent of the corresponding lopsp operation, and identify the double chamber decorations of 2-connected and 3-connected lopsp operations. Such a characterization is one of the first steps towards constructing a generation algorithm for lopsp operations, as was done in [GCC20] for lsp operations. Finally, we prove that 2-connected resp. 3-connected lopsp operations preserve 2-connectivity resp. 3-connectivity, which makes it possible to see 3-connected lopsp operations as operations on polyhedra.
Plane graphs and chamber systems
In this paper, we will consider a plane graph as a rotation system on the set of directed edges.

This definition is equivalent to the more informal way of working with plane graphs. The permutation σ(e) gives the next edge with the same source vertex as e in clockwise direction, and θ(e) gives the inverse edge of e. Note that the set of vertices is not explicitly defined, but can be retrieved as the set of orbits of ⟨σ⟩.

The orbit corresponding to a vertex v is the set of edges with source v. The faces correspond to the orbits of ⟨σθ⟩, i.e., each face corresponds to the set of edges that have that face to their left. The size of a face is the size of its corresponding orbit.
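These orbit computations are easy to carry out mechanically. The following is a minimal sketch (helper names and toy data are our own; σ and θ are assumed to be given as Python dictionaries on the directed edges) that recovers vertices and faces as orbits:

def orbits(perm):
    # partition the domain of a permutation into its cycles (orbits)
    seen, result = set(), []
    for start in perm:
        if start in seen:
            continue
        cycle, e = [], start
        while e not in seen:
            seen.add(e)
            cycle.append(e)
            e = perm[e]
        result.append(cycle)
    return result

# directed edges of a single triangle: (u, v) and its inverse (v, u)
E = [(0, 1), (1, 0), (1, 2), (2, 1), (2, 0), (0, 2)]
theta = {(u, v): (v, u) for (u, v) in E}
sigma = {(0, 1): (0, 2), (0, 2): (0, 1),   # clockwise order around each vertex
         (1, 2): (1, 0), (1, 0): (1, 2),
         (2, 0): (2, 1), (2, 1): (2, 0)}
vertices = orbits(sigma)                         # orbits of <sigma>
faces = orbits({e: sigma[theta[e]] for e in E})  # orbits of <sigma theta>
print(len(vertices), len(faces))                 # 3 vertices, 2 faces (V - E + F = 2)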
Every plane graph G has an associated chamber system C_G [DH87]. This chamber system is obtained by constructing a barycentric subdivision of G, i.e., subdividing each edge by one vertex in its center, adding one vertex in the center of each face, and adding edges from the center of each face to its vertices and to the centers of its edges. In C_G, each vertex v has a type t(v) ∈ {0, 1, 2}, indicating the dimension of its corresponding structure in G. Each edge e has the type t(e) of the opposite vertex in an adjacent triangle. The chamber system C_G is a plane triangulation.

We call a pair of chambers sharing a type-0 edge a double chamber. Each chamber of C_G is contained in exactly one double chamber.
Chiral operations
An example of the construction of a chiral Goldberg operation is given in Figure 3. A quadrangular double chamber patch v_1, v_0, v_2, v̄_0, consisting of the triangle v_1, v_0, v_2 and its counterpart v_1, v̄_0, v_2, is cut out of the hexagonal lattice H. Given a plane graph G with chamber system C_G, we can glue this patch into each double chamber of C_G. The result is a plane graph G' with the same orientation-preserving symmetries as G, but not necessarily the same orientation-reversing symmetries. The symmetries of the hexagonal lattice ensure that after cutting and gluing the patch everything still fits together. But in order to have a combinatorial approach to the operations, we prefer to cut along a simple path P in C_H instead of cutting through edges and faces in arbitrary places. We will prove in Lemma 2.4 that it is always possible to find such a path.

It would be easy if we could split the double chamber patch into two separate triangles such that each triangle corresponds to the single chamber patch of an lsp operation [BGS17], and decorate each of the two types of chambers of C_G with one of these two patches. Unfortunately, this is not always possible. In Figure 4, such an example is given: a double chamber patch for the lopsp operation snub.
Lopsp operations
We define lopsp operations in a similar way to lsp operations, but instead of decorating each chamber with a single chamber patch, we will decorate double chambers.
Definition 2.1. Let T be a connected tiling of the Euclidean plane with chamber system C T , and let v 0 and v 2 be points in the Euclidean plane such that v 0 is the Figure 4. A double chamber patch for snub (see Figure 1).
center of a rotation ρ v 0 by 120 degrees in clockwise direction that is a symmetry of T and v 2 is the center of a rotation ρ v 2 by 60 degrees in clockwise direction that is a symmetry of T .
We call (T, v 0 , v 2 ) a local orientation-preserving symmetry preserving operation, lopsp operation for short.
In contrast to lsp operations, there is no obvious way to apply lopsp operations. We want to cut out the double chamber patch v_2, v_0, v_1, v̄_0 and glue it into each double chamber, but the straight lines between these vertices do not always coincide with edges of C_T, and if we allow other cut-paths there are multiple possibilities (see Figure 5). It is not difficult to imagine that no matter how we cut out this patch, the result after gluing the copies together will be the same. If we choose another path between v_1 and v_0 or between v_0 and v_2, we have to adapt the path between v_1 and v̄_0 resp. v̄_0 and v_2 accordingly, and the changes will cancel each other out when we glue the patches together. This suggests that if we identify the vertices and edges on the border v_1, v_0, v_2 of the patch with the vertices and edges on the border v_1, v̄_0, v_2, the result is a triangulation of the sphere that is independent of the chosen path. In Figure 6 the resulting triangulation for the snub operation is given. We can even construct this triangulation without choosing a path.
Lemma 2.3. The double chamber decoration of a lopsp operation is a plane triangulation.
Proof. Since C_T is a triangulation, we know that (σθ)^3(e) = e for all edges e of C_T, and this relation carries over to all edges e ∈ E of D. Moreover, since the three edges of a triangle of a chamber system have three different types, we have t(σθ(e)) ≠ t(e) and t((σθ)^2(e)) ≠ t(e), so it is impossible that σθ(e) = e or (σθ)^2(e) = e. Therefore, the size of each orbit of ⟨σθ⟩ is 3, which means that all the faces are triangles. Now that we have obtained the double chamber decoration D without choosing a cut-path, we can choose a path in D instead of C_T. This is easier to do, because we do not have to take the symmetries into account. We can always find a path along the edges of C_T, without crossing through edges or faces.
Lemma 2.4. If D is the double chamber decoration of a lopsp operation, then there exists a simple path P between v_1 and v_2 through v_0.

Proof. Since D is a plane triangulation, it is 3-connected and therefore also 2-connected. It stays 2-connected if we temporarily add a vertex w with edges to v_1 and v_2. By Menger's theorem [Men27], there exist two disjoint paths between w and v_0. This is only possible if there are disjoint paths from v_1 to v_0 and from v_0 to v_2.
We apply a double chamber decoration D to a plane graph G by cutting D open along the simple path P from the lemma above, which yields the subdivided patch v_1, v_0, v_2, v̄_0 that we glue into each double chamber of G. Instead of cutting and gluing, we can describe this application combinatorially.

Denote the set of directed edges on the path from v_2 to v_0 by P_2, and the set of their inverses by P̄_2. Denote the set of directed edges on the path from v_1 to v_0 by P_1, and the set of their inverses by P̄_1.
There is a one-to-one correspondence between the directed edges of a plane graph G = (E, σ, θ) and the double chambers of C_G, where each edge e corresponds to the double chamber c_e immediately to its left. The operations s_1(c_e) = c_{θ(e)} and s_2(c_e) = c_{σ^{-1}θ(e)} correspond to traversing the cyclic order around vertices of type 1 resp. 2. We call the set of double chambers of G together with s_1 and s_2 the double chamber system of G.

Definition 2.5. Given a plane graph G with double chamber system C and a double chamber decoration D = (E, σ, θ) with simple path P satisfying Lemma 2.4, the application of (D, P) to G results in a plane graph D_P(G) = (E × C, σ_P, θ_P) with

σ_P((e, c)) = (σ(e), s_{P,e}(c)) and θ_P((e, c)) = (θ(e), s_{P,e}(c)),

where s_{P,e} = s_1 if e ∈ P_1, s_{P,e} = s_1^{-1} if e ∈ P̄_1, s_{P,e} = s_2 if e ∈ P_2, s_{P,e} = s_2^{-1} if e ∈ P̄_2, and s_{P,e} = 1 otherwise.

It is possible that there is more than one simple path that satisfies Lemma 2.4. We still have to prove that the result of the operation does not depend on the chosen path P. We will do that in Theorem 2.7, but we first introduce some new terminology.
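As an aside, Definition 2.5 translates almost literally into code. The following is a minimal sketch (our own data layout; permutations are Python dictionaries, and the case analysis for s_{P,e} follows the reconstruction given above), not a verified implementation of the authors' software:

def apply_lopsp(D, P, G):
    # Apply the double chamber decoration D, cut along the path P, to the plane graph G.
    # D = (E_D, sigma_D, theta_D) and G = (E_G, sigma_G, theta_G): permutations as dicts.
    # P = (P1, P1_inv, P2, P2_inv): directed-edge sets of the cut path and their inverses.
    E_D, sigma_D, theta_D = D
    E_G, sigma_G, theta_G = G
    P1, P1_inv, P2, P2_inv = P
    invert = lambda p: {v: k for k, v in p.items()}
    sigma_G_inv = invert(sigma_G)
    # Double chambers of G are identified with the directed edges of G:
    # s1(c_e) = c_{theta(e)} and s2(c_e) = c_{sigma^(-1) theta(e)}.
    s1 = {c: theta_G[c] for c in E_G}
    s2 = {c: sigma_G_inv[theta_G[c]] for c in E_G}
    s1_inv, s2_inv = invert(s1), invert(s2)
    identity = {c: c for c in E_G}

    def s_P(e):
        # permutation of double chambers triggered when the cut path is crossed at e
        if e in P1: return s1
        if e in P1_inv: return s1_inv
        if e in P2: return s2
        if e in P2_inv: return s2_inv
        return identity

    E = [(e, c) for e in E_D for c in E_G]
    sigma_P = {(e, c): (sigma_D[e], s_P(e)[c]) for (e, c) in E}
    theta_P = {(e, c): (theta_D[e], s_P(e)[c]) for (e, c) in E}
    return E, sigma_P, theta_P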
Given a double chamber decoration D with two simple paths P and Q satisfying Lemma 2.4, consider the subgraph of C_D consisting of all the edges in P and Q (see Figure 8 for an example). In order to avoid confusion, we will refer to the faces of this subgraph as regions. With each directed edge e of C_D we associate exactly one region R_e. If e is an edge in P or Q we choose the region at the left-hand side of e, and for all other edges we choose the containing region. A region path R_0, . . . , R_n is a sequence of regions such that for each i < n there exists an edge e_i ∈ Q \ P such that e_i is associated with R_i and θ(e_i) is associated with R_{i+1}. A region path corresponds to the operation r_1 ∘ · · · ∘ r_n with r_i = s_{Q,e_i}. Two region paths are called equivalent if they correspond to the same operation.

Lemma 2.6. Given a double chamber decoration D with two simple paths P and Q satisfying Lemma 2.4, there exists a region R_{P,Q} such that there is a region path between R_{P,Q} and a region incident to v_2 with an associated operation of the form s_2^k, and a region path between R_{P,Q} and a region incident to v_1 with an associated operation of the form s_1^l.
Proof. Choose a region path R = R_0, . . . , R_n with R_0 incident to v_1 and R_n incident to v_2 and associated operation r = r_1 · · · r_n. Such a region path exists because P contains no cycles, so all regions are connected. If we add one vertex in each region and one vertex on each edge in Q, and an edge between a vertex in a region and a vertex on an edge if the edge is in the border of the region, this region path induces a path on these edges in a canonical way, and each operation r_i corresponds to an intersection of R and Q. An example is given in Figure 9(a). We will prove that if r_i and r_j correspond to two intersections of R and Q_t with t ∈ {1, 2} that are consecutive on Q_t, then r_{i+1} · · · r_{j-1} is the identity operation. For j = i + 1 this is obvious. Suppose j > i + 1. The subpath of Q_t between r_i and r_j together with the region path between r_i and r_j forms a closed cycle. If there is an intersection with Q_s in r_a with a = i + 1, there will be another intersection in r_b with a < b < j and r_b = r_a^{-1}, as illustrated in Figure 9(b). We can assume by induction that r_a · · · r_b = r_a r_b = 1. If b < j − 1, we can repeat this for a = b + 1 until b = j − 1, and thus r_{i+1} · · · r_{j-1} = 1.

Take m so that r_m ∈ {s_1, s_1^{-1}} and r_i ∈ {s_2, s_2^{-1}} for all i > m. If Q_2 crosses R in r_a with 1 < a < m and r_i ∈ {s_1, s_1^{-1}} for i < a, it will cross again in r_b with a < b < m, and r_a · · · r_b = 1. We can repeat this as long as there is an r_c ∈ {s_2, s_2^{-1}} with b < c < m, until r_1 · · · r_m = s_1^k. Since r_{m+1} · · · r_n = s_2^l, the region between r_m and r_{m+1} satisfies the conditions of R_{P,Q}.
We are now ready to prove that the application of a double chamber decoration D = (E, σ, θ) to a graph G with double chamber system C is independent of the chosen simple path P. In order to do that, we will construct an isomorphism between the plane graphs D_P(G) and D_Q(G), with Q another simple path satisfying Lemma 2.4. By choosing the region R_{P,Q}, we fix canonical points {R_{P,Q}} × C that will be invariant under this isomorphism.

Theorem 2.7. Given a plane graph G with double chamber system C and a double chamber decoration D = (E, σ, θ) with two simple paths P and Q satisfying Lemma 2.4, there exists an isomorphism between D_P(G) and D_Q(G).
Proof. For each edge e of C_D, let s_{P,Q,e} denote the operation corresponding to a region path from R_{P,Q} to R_e, and define f((e, c)) = (e, s_{P,Q,e}(c)). We first show that s_{P,Q,σ(e)} s_{P,e} = s_{Q,e} s_{P,Q,e} for every edge e, which makes f a homomorphism.

Consider the case s_{P,e} = 1. The operation s_{P,Q,σ(e)}, corresponding to a region path from R_{P,Q} to R_{σ(e)}, is equal to s_{P,Q,e} followed by the operation corresponding to the region path crossing e, which is s_{Q,e}. Therefore, s_{P,Q,σ(e)} s_{P,e} = s_{Q,e} s_{P,Q,e}. If s_{P,e} = s_1, there is a region path from R_{P,Q} to R_e consisting of a region path from R_{P,Q} to a region R_1 incident to v_1, corresponding to the operation s_1^k, followed by a region path from R_1 to R_e, corresponding to an operation r. Since e ∈ P_1, the region path from R_1 to R_e can follow the left-hand side of P_1. The region path from R_{P,Q} to R_{σ(e)} starts with the same region path to R_1. We can now follow the region path along the right-hand side of P_1 to R_{σ(e)}, corresponding to an operation r', after we go around v_1, which corresponds to the operation s_1^{-1}. In Figure 10, we see that r' is equal to r followed by s_{Q,e}. Therefore, s_{P,Q,σ(e)} s_{P,e} = r' s_1^{-1} s_1^k s_1 = s_{Q,e} r s_1^k = s_{Q,e} s_{P,Q,e}.

For s_{P,e} equal to s_1^{-1}, s_2, and s_2^{-1}, the proof is similar. Suppose f((e, c)) = f((e', c')), i.e., (e, s_{P,Q,e}(c)) = (e', s_{P,Q,e'}(c')). It follows immediately that e = e', and since s_{P,Q,e} is a permutation we have that c = c'. Thus (e, c) = (e', c') and f is injective. For each (e, c) ∈ E × C, f((e, s_{P,Q,e}^{-1}(c))) = (e, c), and thus f is surjective.

Since f is a bijective homomorphism, it is an isomorphism between D_P(G) and D_Q(G).
Double chamber decorations
In the previous section we constructed the double chamber decoration for a given lopsp operation. This double chamber decoration contains all the information necessary to apply the decoration to an embedded graph, but does not depend on the tiling T or on the simple path P chosen to define and apply the lopsp operation. Since two lopsp operations are equivalent if and only if they have the same double chamber decoration, it is easier to work with double chamber decorations directly instead of deriving them from lopsp operations. But in order to do that, we need a full characterization of these graphs. This is similar to what we did for lsp operations in [GCC20].
Proof. It is easy to verify that the double chamber decoration of a lopsp operation satisfies these properties.

Given a graph D that satisfies the properties, there exists a simple path P between v_1 and v_2 through v_0, since the proof of Lemma 2.4 holds for all plane triangulations. We can cut D open along this path to get a subdivided patch D', and glue this patch into each double chamber of the hexagonal lattice H. The result will be a chamber system C_T of a tiling T.

We will now prove that the type-2 subgraph of D', consisting of all type-2 edges, is connected. Let u and v be two vertices in the type-2 subgraph. Since every face of D' is a cycle, D' is 2-connected. Menger's theorem [Men27] gives us that there exist two vertex-disjoint paths between u and v. Since all faces of D' except for the outer face are triangles, these two paths form a cycle with only triangles on the inside. Since u is in the type-2 subgraph, it has type 0 or 1, and there is an edge (u, u') of type 2 on or in the cycle. If u' ≠ v, we can do the same for vertices u' and v, and we can choose a cycle that contains fewer triangles than the previous one. By induction, there exists a path between u and v in the type-2 subgraph of D'.

Given vertices u and v in the type-2 subgraph of C_T, there exists a sequence of chambers C_0, . . . , C_n of H such that two consecutive chambers C_i and C_{i+1} share one side, and u is contained in C_0 and v in C_n. Since there are at least two vertices on each side of D', and they are not both of type 2, at least one of them is in the type-2 subgraph of C_T. Thus, there is a type-2 path between u and v that passes through all chambers in the sequence C_0, . . . , C_n, and the type-2 subgraph of C_T is connected. It follows immediately that T is connected too.

We can choose the vertices of one double chamber of C_H in T as v_0, v_1, v̄_0, and v_2. Now (T, v_0, v_2) satisfies Definition 2.1 of a lopsp operation, and the double chamber decoration of this lopsp operation is D.
We call a lopsp operation and the corresponding double chamber decoration k-connected if it is derived from a k-connected tiling T. For the following results, we need a lemma from [GCC20], which we will repeat here without proof. For k = 2, we will prove that O(G) is 2-connected. A type-1 cycle of length 2 in C_{O(G)} is either completely contained in an area that was one double chamber of C_G before it was subdivided by O, or it is split between the areas of two adjacent double chambers. Neither case can occur: for any double chamber (resp. any pair of adjacent double chambers) of C_G there is an isomorphism between the area of this double chamber (resp. these two double chambers) in C_{O(G)} and the corresponding area in T, and T is connected and thus has no type-1 cycles of length 2. This implies that C_{O(G)} contains no type-1 cycles of length 2.

For k = 3, we will prove that O(G) is 3-connected. Consider a type-1 cycle C with edges e_1, . . . , e_n in C_{O(G)}. Let C_1, . . . , C_n be a sequence of double chambers of C_G such that e_i is contained in the area of C_i for 1 ≤ i ≤ n. If C_i = C_{i+1}, we can remove C_i from the sequence. This results in a reduced sequence C'_1, . . . , C'_m.

If C has length 2, then m ≤ 2. Thus, the cycle is contained in one or two neighbouring areas, and it would have to be present in the tiling T too, which is impossible.

If C has length 4, then m ≤ 4. Thus, the cycle is contained in the areas of at most 4 double chambers of C_G, and each double chamber has at least one vertex or edge in common with the previous and the next one, but not the same for both of them. We will now construct a type-1 cycle in C_G through these chambers. Depending on the position of the common elements in each double chamber, we choose type-1 edges of C_G as in Figure 11. This results in at most 4 edges that form a type-1 cycle or a single edge C' in C_G. In Figures 11(i) and 11(k) we choose two edges, but since v_0 and v̄_0 are of the same type, the path between them on the cycle C has to be of length at least 2 too. If C' is a type-1 cycle, it has to be empty since G is 3-connected. Thus, the situation is as in Figure 12(a).

The type-1 cycle C in C_{O(G)} is completely contained in the areas of the double chambers of C_G adjacent to C'. The only situation where this would not necessarily imply a type-1 cycle in C_T is when C is a cycle of length 4 surrounding a type-2 vertex. This implies that C passes through 3 or 4 areas corresponding to double chambers of C_G, as illustrated in Figures 12(b) and 12(c). There are at least two areas that contain only one edge of C. But since all the areas are isomorphic, it is easy to see that this is impossible. This theorem is particularly interesting for k = 3, for which it says that 3-connected lopsp operations are operations on polyhedra. | 2020-04-14T01:00:28.231Z | 2020-04-11T00:00:00.000 | {
"year": 2020,
"sha1": "bb0b0ebf47b78054bd54ef3f7f5fc8a13385411e",
"oa_license": null,
"oa_url": "https://biblio.ugent.be/publication/8675743/file/8675744.pdf",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "bb0b0ebf47b78054bd54ef3f7f5fc8a13385411e",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics",
"Computer Science"
]
} |
271484869 | pes2o/s2orc | v3-fos-license | CSNet: A Count-Supervised Network via Multiscale MLP-Mixer for Wheat Ear Counting
Wheat is the most widely grown crop in the world, and its yield is closely related to global food security. The number of ears is important for wheat breeding and yield estimation. Therefore, automated wheat ear counting techniques are essential for breeding high-yield varieties and increasing grain yield. However, all existing methods require position-level annotation for training, implying that a large amount of labor is required for annotation, limiting the application and development of deep learning technology in the agricultural field. To address this problem, we propose a count-supervised multiscale perceptive wheat counting network (CSNet, count-supervised network), which aims to achieve accurate counting of wheat ears using quantity information. In particular, in the absence of location information, CSNet adopts MLP-Mixer to construct a multiscale perception module with a global receptive field that implements the learning of small target attention maps between wheat ear features. We conduct comparative experiments on a publicly available global wheat head detection dataset, showing that the proposed count-supervised strategy outperforms existing position-supervised methods in terms of mean absolute error (MAE) and root mean square error (RMSE). This superior performance indicates that the proposed approach has a positive impact on improving ear counts and reducing labeling costs, demonstrating its great potential for agricultural counting tasks. The code is available at http://csnet.samlab.cn.
Introduction
Automated technology is vital for wheat food security because it enhances breeding efficiency and food production. Wheat is one of the most important food crops, providing approximately 20% of the world's protein and carbohydrate intake and bearing much of the burden of global food security [1]. In addition, wheat has various uses, including as an industrial raw material, biofuel, and animal feed. However, as the global population continues to grow and the world develops, the growth in wheat production has not matched that of demand. An in-depth analysis reveals that the annual growth rate of wheat demand is 1.7%, juxtaposed with a modest average annual rate of genetic increase of only 1% [2]. Therefore, automation technology has been fully utilized to improve breeding efficiency and cope with the global food crisis. More specifically, automation technology can use modern computers to replace manual statistical analysis of crop phenotypes (including height, color, number of ears, and other relevant phenotypes) [3], thus reducing labor and time costs and achieving efficient breeding.
Selecting varieties with desirable traits through high-quality automated counting is an essential process in breeding. Wheat yield, one of the most important traits, is determined by 3 elements: the number of wheat ears per unit ground area, the number of grains per ear, and the 1,000-grain weight [4]. Conventional breeding methods rely on manual counting to ascertain the number of wheat ears, a process that is inefficient, time-consuming, costly, and error-prone [5]. Consequently, the implementation of automated counting is indispensable for enhancing breeding efficiency and conserving human resources. To achieve high-quality and automated counting, researchers have begun to explore the potential of image-processing techniques for recognizing wheat ears. For instance, Cointault et al. [6] used color and texture feature processing techniques to achieve wheat ear segmentation in images. Alharbi et al. [7] utilized a Gabor filter and the K-means clustering algorithm to detect segmented wheat ear regions and perform wheat ear counting. Fernandez et al. [8] proposed a high-throughput and low-cost method for wheat ear counting by utilizing Laplace frequency and median filters to obtain low-noise wheat ear features. However, the aforementioned methods have mediocre generalization ability and are easily affected by interference factors such as illumination and environment, limiting their suitability for wheat ear images with rich backgrounds and diverse morphologies.
With the rapid development of deep learning, position-supervised methods, including box-supervised and point-supervised wheat counting approaches, have garnered substantial attention. On the one hand, in box-supervised wheat ear counting methods, bounding boxes are employed to select and quantify the wheat ears. For instance, Li et al. [9] utilized a Faster R-CNN trained on a self-constructed dataset to achieve fast recognition of wheat ears. Gong et al. [10] proposed a 2-space pyramid pooling network to improve YOLOv4, which further enhanced the detection accuracy of wheat ears. Zang et al. [11] improved the YOLOv5s model for detecting small-scale wheat ears. On the other hand, point-supervised wheat ear counting methods achieve counting by predicting a density map of the wheat ears. Lu et al. [12] proposed a local counting regression network known as TasselNet to address the problem of counting maize tassels in the wild. Xiong et al. [13] improved the accuracy and efficiency of wheat ear counting by adding contextual information to TasselNet. Khaki et al. [14] designed a lightweight wheat ear counting model with MobileNetV2 as the backbone, relying on a density map for the counting and localization of wheat ears. Ma et al. [15] selected filtered pyramid blocks and dilated convolutions to construct EarDensityNet, which predicts wheat ear density maps to obtain the number of ears. Wu et al. [16] constructed and optimized a density map regression network for wheat ear counting in unmanned aerial vehicle (UAV) images. The aforementioned methods address the problems of mediocre generalization and susceptibility to noise and have achieved excellent results in wheat ear counting; however, they require training on images with high-cost position-level annotations.
Both box-supervised and density map-based point-supervised wheat ear counting models are locally perceptive convolutional neural networks (CNNs) [17] that use location information (boxes or density maps) to learn wheat ear features, as shown in Fig. 1. However, the dense and varied location information of wheat ears is not only costly to label but also introduces inevitable noise that may distract the attention of the model from the wheat ears, limiting model performance. In particular, box-supervised wheat ear counting methods use many target boxes to locate wheat ears and remove duplicate boxes using non-maximum suppression [18]; however, this is not sufficiently accurate for overlapping wheat ears. Point-supervised methods, which focus on dense targets, use Gaussian kernels of the same size to localize wheat ears, making it difficult to adapt to ears of varying lengths [19]. Furthermore, applying a Gaussian kernel to generate density maps inevitably assigns density to the background surrounding the wheat ears, thereby introducing background noise. Therefore, a counting method with low labeling cost that does not depend on location information is crucial for increasing wheat yield.
In the field of crowd counting, researchers have recently explored count-supervised methods to reduce annotation costs and enhance counting accuracy. For instance, Yang et al. [20] proposed a soft-label sorting and counting network that achieves count-supervised crowd counting. Liang et al. [21] presented TransCrowd, a crowd-counting network based on a transformer [22] architecture that effectively extracts semantic crowd information through a self-attention mechanism. Wang et al. [23] presented a multi-granularity multilayer perceptron (MLP) to mine global information and overcome the lack of spatial cues through a proxy task known as split counting. The aforementioned methods are primarily designed to address the considerable density variations within crowds, making them less suitable for wheat ear scenarios, which are characterized by smaller density variations. To address the challenges encountered in count-supervised wheat ear counting, we propose a novel count-supervised wheat counting network known as CSNet, a multiscale global perception model that achieves accurate and efficient wheat ear counting with count information only. Specifically, we design a multiscale perception module (MPM) based on the MLP-Mixer network [24], which has a global perception capability, to learn wheat ear features in different spatial dimensions. For dense or differently sized wheat ears, the MLP-Mixer constructs global feature relationships to obtain the attention map of wheat ears without complex labeling information. Furthermore, we introduce a convolutional block attention module (CBAM) [25] to reduce the effects of background information. In the experiments, we validate the proposed CSNet on the global wheat head detection (GWHD) [26,27] dataset, achieving state-of-the-art results compared with advanced location-supervised approaches. Regarding dataset usage, the proposed CSNet uses labeled data at a much lower cost than location-supervised methods, exhibiting considerable potential for agricultural counting. In summary, the main contributions of this paper are as follows:

• To the best of our knowledge, this is the first study to propose a count-supervised wheat counting method that yields high-precision results at low labeling costs.
• We design an MPM that obtains attention maps of wheat ears in different spatial dimensions by constructing global feature relations, enabling the model to effectively handle diverse wheat ear sizes while relying solely on count information.
• We conduct quantitative and qualitative experiments on the GWHD dataset, manifesting the effectiveness of the CSNet and generalizability of similar agricultural counting tasks.
Materials and Methods
In this section, we introduce the multiscale count-supervised network (CSNet). The "Dataset" section describes the dataset required for the experiment. In the "Methods" section, we describe the framework of CSNet. The "Evaluation metrics" section introduces the evaluation metrics used in the counting model.
Dataset
The range of wheat cultivation is unrivaled, and wheat is grown in almost every country [28]. Wheat varieties vary across regions because of differences in natural conditions such as climate, soil, and light. Consequently, creating a universal wheat ear dataset remains challenging. To address this issue, David et al. [26] proposed the GWHD_2020 dataset, the first publicly available dataset of wheat crops from multiple countries. In particular, the GWHD_2020 dataset contains wheat varieties at different growth stages and a wide range of genotypes from Europe, North America, Australia, and Asia, totaling 4,700 RGB images containing 193,634 labeled wheat ears. The image data in the GWHD_2020 dataset were collected at heights ranging from 1.8 to 3 m above the ground, and data harmonization was performed after collection to ensure that all images in the dataset are clearly visible [26]. As shown in Fig. 2, the GWHD_2020 dataset covers a wide range of growth stages and varieties of wheat that vary in color, shape, size, and tilt angle. In 2021, the GWHD_2020 dataset was expanded and updated with the addition of 1,722 images and 81,553 labeled wheat ears from 5 additional countries, making it a larger, more diverse, and less noisy dataset [27], which we refer to as the GWHD_2021 dataset. These datasets have a substantial impact on wheat counting and provide invaluable resources for innovative research and advancement in wheat-related studies.
We selected the aforementioned datasets for the experiments to verify the validity of the proposed model. As summarized in Table 1, we used 3,422 images from the GWHD_2020 dataset for the experiments, where the maximum number of wheat ears in a single image was 112, the minimum was 0, and the average was 42.49, totaling 145,411 wheat ears. For the GWHD_2021 dataset, we used 6,509 images for the experiments, with a range of wheat ear counts per image of 0 to 190, averaging 42.29 ears per image and totaling 275,260 ears. Following the common method of dataset division, we randomly selected 80% of the dataset as training data, 10% as validation data, and 10% as test data.

In addition, we constructed a self-built wheat grain dataset containing 510 images as an extended test. In particular, the number of wheat grains per image ranged from 0 to 68, with an average of 38.9 grains per image and 19,839 grains in total. Similarly, we randomly divided 80% of the data into training data and the remainder into test data.
Methods
In this study, we propose a novel count-supervised wheat ear counting network known as CSNet, which comprises a backbone, a CBAM, an MPM, and a counting module (CM), as shown in Fig. 3. In particular, the backbone extracts image features, whereas the CBAM focuses on wheat region features. To further adapt to the diversity of wheat ears, we design an MPM to obtain the features of wheat ears in multiple spatial dimensions, which improves the ability of the model to recognize wheat ears. Finally, the CM uses a fully connected layer and an average pooling layer to directly regress the final counting results. In the following subsections, we elaborate on the implementation principles of each part.
Backbone
The backbone is an important component of the neural network because it is responsible for feature extraction and has a substantial impact on the generalization ability, robustness, and overall efficiency of the model [29]. To balance accuracy and resource overhead, we selected the first 10 layers of VGG16 [30], i.e., the first ten 3 × 3 convolutional layers and 3 max-pooling layers, as the backbone of CSNet [31]. This backbone is pretrained on ImageNet and thus already has the ability to extract low-level features, which brings notable advantages such as saving computational resources, increasing computational efficiency, and improving the generalization ability of the model.
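A plausible PyTorch realization of this truncated backbone is given below. It is a sketch under the assumption (not confirmed by the paper) that "first 10 layers" means the first ten convolutions and three poolings of VGG16's feature extractor, i.e., features[:23] in a recent torchvision, as in CSRNet-style models.

import torch
import torchvision

vgg = torchvision.models.vgg16(weights="IMAGENET1K_V1")   # ImageNet-pretrained weights
# keep conv1_1 ... conv4_3 plus the first three max-pool layers
backbone = torch.nn.Sequential(*list(vgg.features.children())[:23])

x = torch.randn(1, 3, 512, 512)
print(backbone(x).shape)  # torch.Size([1, 512, 64, 64]): stride 8, 512 channels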
Convolutional block attention module
In a complex environment with overgrown weeds and mutually occluded wheat plants, the model needs to focus its limited attention on the regions containing wheat ears to count them effectively. To address this problem, we introduce an efficient and lightweight CBAM [25] that combines channel and spatial attention. In particular, channel attention adjusts the degree of attention the model pays to each feature so that it focuses on essential features (e.g., the shape, size, and texture of the wheat ears) and ignores irrelevant ones (e.g., light variations and debris on the wheat ears). Spatial attention adjusts the extent to which the model focuses on each region of the image, thereby enhancing attention to the wheat regions and reducing the influence of background regions (e.g., weeds).
As shown in Fig. 3, the backbone outputs the feature map M_b, which is further refined by the CBAM to obtain the feature map M_f. Compared with M_b, M_f focuses more on the wheat ears in both channel and space. Initially, spatial average and max pooling operations are executed on M_b to derive the average and maximum values of each channel, respectively. Subsequently, these values are weighted through a shared fully connected layer to obtain the channel attention weight M_c, reflecting the degree of attention assigned to each channel. The channel attention feature map M_cb is then obtained by element-wise multiplication of the channel attention weight M_c with the feature map M_b. Next, channel-wise average and max pooling are performed on M_cb, and the results are passed through a convolutional layer to obtain the attention weight M_s, which contains spatial location information. Ultimately, multiplying each spatial location in M_cb by the attention weight M_s produces the feature map M_f, which is augmented with attention to the wheat ears in both channel and space. This process enhances the perceptual focus on the wheat regions and emphasizes the crucial features of the wheat ears. The CBAM process is formally expressed as follows:

M_c = σ(FC(pool^c_avg(M_b)) + FC(pool^c_max(M_b)))
M_cb = M_c ⊗ M_b
M_s = σ(Conv([pool^s_avg(M_cb); pool^s_max(M_cb)]))
M_f = M_s ⊗ M_cb    (1)

where σ denotes a sigmoid function, FC represents a fully connected layer, pool^c_max denotes spatial max pooling (one value per channel), pool^s_avg denotes average channel pooling (one value per spatial location), ⊗ denotes element-wise multiplication, and Conv denotes a convolutional layer.
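The module described above can be sketched in PyTorch as follows. This is a minimal re-implementation of the standard CBAM design; the reduction ratio 16 and kernel size 7 are the usual defaults from the CBAM paper, not values confirmed here.

import torch
import torch.nn as nn

class CBAM(nn.Module):
    def __init__(self, channels, reduction=16, kernel_size=7):
        super().__init__()
        # channel attention: shared FC on spatially avg- and max-pooled descriptors
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels))
        # spatial attention: conv over channel-wise avg- and max-pooled maps
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, m_b):
        b, c, h, w = m_b.shape
        avg = self.fc(m_b.mean(dim=(2, 3)))                 # pool^c_avg -> FC
        mx = self.fc(m_b.amax(dim=(2, 3)))                  # pool^c_max -> FC
        m_c = torch.sigmoid(avg + mx).view(b, c, 1, 1)      # channel weights M_c
        m_cb = m_c * m_b                                    # channel-refined features
        s = torch.cat([m_cb.mean(dim=1, keepdim=True),      # pool^s_avg
                       m_cb.amax(dim=1, keepdim=True)], 1)  # pool^s_max
        m_s = torch.sigmoid(self.conv(s))                   # spatial weights M_s
        return m_s * m_cb                                   # M_f

print(CBAM(512)(torch.randn(1, 512, 64, 64)).shape)  # torch.Size([1, 512, 64, 64])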
Multiscale perception module
Both box-supervised [9][10][11] and point-supervised [14][15][16] methods rely on positional information to recognize diverse and dense wheat ears; however, annotating dense and overlapping wheat ears is costly [32], and the subjectivity of the labeler can lead to ambiguity. Therefore, we believe that location information may not be essential for wheat ear counting, which motivates the design of the MPM to perceive diverse and dense wheat ears using only quantity information.
To perceive wheat ears in the absence of positional information, we adopt the mixer layer of the MLP-based MLP-Mixer [24] network to learn the relationship between each patch and all other patches, thus sensing the connections between wheat ears for counting. However, the phenotypes (size, color, and shape) of wheat ears are so diverse that perceiving all wheat ears at a single scale is impossible. To address this problem, we propose a multiscale method that captures wheat ear features in multiple spaces for accurate recognition. As shown in Fig. 3, the feature maps are sliced into patches of different sizes, with smaller patches capturing more subtle features. By perceiving features at different scales, the MPM can distinguish features in multiple spatial dimensions to identify diverse wheat ears.
In detail, the MPM slices and projects the wheat ear feature map M_f onto multiple feature matrices, which then exchange information through mixer layers to obtain comprehensive global attention information, as shown in Fig. 3. First, the wheat ear feature map M_f output from the CBAM is sliced into feature patches of sizes n_1 × 512 × 16 × 16, n_2 × 512 × 8 × 8, and n_3 × 512 × 4 × 4, with n_1, n_2, and n_3 equal to 16, 64, and 256, respectively. The smaller the slice size, the larger the number of patches. Each feature patch is then mapped to a feature vector, and together these form a feature matrix in which the rows represent different channels in the same space and the columns represent the same channel in different spaces. Furthermore, the feature matrix is fed into the mixer layer for information interaction. The mixer layer comprises Layer Norm and MLPs: each row of the feature matrix is normalized by Layer Norm and then communicated through the MLPs. In addition, the rows of the feature matrices, representing different spatial or channel information, are interchanged via transposition and passed through MLPs to obtain comprehensive global attention information. Finally, the MPM concatenates the 3 feature matrices carrying different scale information and lets them interact again through a mixer layer, producing a wheat ear feature matrix that incorporates global attention at 3 scales. This last mixer layer fuses and optimizes features from multiple scales, eliminating discrepancies and promoting a consistent feature representation. Denoting the feature matrices by T_1, T_2, T_3, and T_all, the entire process can be defined as follows:

T_1 = Mix_1^N(F_1(S_{16×16}(M_f)))
T_2 = Mix_2^N(F_2(S_{8×8}(M_f)))
T_3 = Mix_3^N(F_3(S_{4×4}(M_f)))
T_all = Mix(Concat(T_1, T_2, T_3))    (2)

where S denotes the slice operation, the subscript 16 × 16 denotes the size of the sliced patch, and F_i denotes a linear projection. In addition, Mix_i^N indicates that N mixer layers are applied at the ith scale.
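A single mixer layer of the kind used in the MPM can be sketched as follows. This is a minimal MLP-Mixer block in PyTorch; the hidden-layer widths are illustrative assumptions rather than the paper's actual settings.

import torch
import torch.nn as nn

class MixerLayer(nn.Module):
    # one mixer layer: token-mixing MLP (across patches) + channel-mixing MLP
    def __init__(self, n_patches, dim, token_hidden=256, channel_hidden=1024):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.token_mlp = nn.Sequential(
            nn.Linear(n_patches, token_hidden), nn.GELU(),
            nn.Linear(token_hidden, n_patches))
        self.norm2 = nn.LayerNorm(dim)
        self.channel_mlp = nn.Sequential(
            nn.Linear(dim, channel_hidden), nn.GELU(),
            nn.Linear(channel_hidden, dim))

    def forward(self, x):                        # x: (batch, n_patches, dim)
        y = self.norm1(x).transpose(1, 2)        # mix information across patches
        x = x + self.token_mlp(y).transpose(1, 2)
        x = x + self.channel_mlp(self.norm2(x))  # mix information across channels
        return x

x = torch.randn(2, 16, 512)                      # e.g., the 16-patch (16 x 16) scale
print(MixerLayer(16, 512)(x).shape)              # torch.Size([2, 16, 512])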
Counting module
The CM is designed to convert features into quantities without generating bounding boxes or density maps, instead regressing counts directly. In particular, the proposed CM feeds the information-rich wheat ear feature matrix output by the MPM into fully connected layers for dimensionality reduction and count generation. To mitigate potentially large discrepancies owing to the inherent variability of individual counts, the CM concurrently predicts a set of counts and subsequently aggregates the final predicted number of wheat ears via average pooling. The process can be summarized as
\[
\hat{C} = \mathrm{avgpool}\big(\mathrm{FC}(\sigma(\mathrm{FC}(T_{all})))\big),
\]
where σ denotes the ReLU function and Ĉ denotes the final predicted count.
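A sketch of the CM consistent with this description follows; the number of concurrently predicted counts (8) and the hidden width (64) are illustrative assumptions.

```python
import torch
import torch.nn as nn

class CountingModule(nn.Module):
    """Regress a set of counts from the MPM feature matrix, then average them."""
    def __init__(self, n_tokens: int, dim: int, n_counts: int = 8):
        super().__init__()
        self.fc1 = nn.Linear(dim, 64)                  # dimensionality reduction
        self.fc2 = nn.Linear(n_tokens * 64, n_counts)  # a set of count predictions
        self.act = nn.ReLU()

    def forward(self, t_all: torch.Tensor) -> torch.Tensor:  # (B, n_tokens, dim)
        x = self.act(self.fc1(t_all)).flatten(1)
        counts = self.fc2(x)
        return counts.mean(dim=1)  # average pooling over the predicted set
```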
Evaluation metrics
To investigate the counting performance of the model, we utilize the mean absolute error (MAE), root mean square error (RMSE), and R-squared (R²) metrics. MAE is the mean absolute difference between the predicted and actual values and is used to assess the accuracy of the model. RMSE is the root of the mean squared deviation between the predicted and true values and is used to measure the stability of the model. R² is a statistic that measures how well the regression model fits the data; it takes values from 0 to 1, and the closer it is to 1, the better the fit. The evaluation metrics are defined as follows:
\[
\mathrm{MAE} = \frac{1}{N}\sum_{i=1}^{N} \big|\hat{C}_i - C_i\big|, \qquad
\mathrm{RMSE} = \sqrt{\frac{1}{N}\sum_{i=1}^{N} \big(\hat{C}_i - C_i\big)^2},
\]
\[
R^2 = 1 - \frac{\sum_{i=1}^{N}\big(\hat{C}_i - C_i\big)^2}{\sum_{i=1}^{N}\big(C_i - \bar{C}\big)^2},
\]
where Ĉ_i denotes the estimated total number of wheat ears in the ith image, C_i denotes the real number in the ith image, C̄ denotes the average real number, and N denotes the number of predicted images.
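These three metrics can be computed directly from paired predicted/true counts, e.g.:

```python
import numpy as np

def mae(c_true, c_pred):
    return float(np.mean(np.abs(np.asarray(c_pred) - np.asarray(c_true))))

def rmse(c_true, c_pred):
    d = np.asarray(c_pred) - np.asarray(c_true)
    return float(np.sqrt(np.mean(d ** 2)))

def r_squared(c_true, c_pred):
    c_true, c_pred = np.asarray(c_true, float), np.asarray(c_pred, float)
    ss_res = np.sum((c_true - c_pred) ** 2)          # residual sum of squares
    ss_tot = np.sum((c_true - c_true.mean()) ** 2)   # total sum of squares
    return float(1.0 - ss_res / ss_tot)
```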
Experimental details
CSNet is optimized using the stochastic gradient descent (SGD) algorithm with a training batch size of 16. We use the MultiStepLR scheduler to adjust the learning rate, with an initial learning rate of 1 × 10⁻⁴, and employ the L1 loss as the loss criterion. Compared with the L2 loss, the L1 loss is less affected by challenging scenarios and prevents the model from being overly influenced by outliers. To accommodate multiscale slicing, all images are uniformly scaled to a size of 512 × 512. In addition, the number N of mixer layers is fixed at 4. All experiments are implemented in the PyTorch framework and trained on an NVIDIA A40 GPU for approximately 40 h using the original configuration, and the weights that perform best on the validation dataset are taken for testing. In addition, we evaluate the proposed method against 4 box-supervised methods, 5 point-supervised methods, and 1 count-supervised method on 2 datasets.
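A minimal training-loop sketch matching the stated configuration is shown below; the milestones, momentum value, epoch count, and the CSNet and train_loader objects are placeholders, not values reported in the paper.

```python
import torch

model = CSNet()  # hypothetical module combining backbone, CBAM, MPM, and CM
optimizer = torch.optim.SGD(model.parameters(), lr=1e-4, momentum=0.9)  # momentum assumed
scheduler = torch.optim.lr_scheduler.MultiStepLR(
    optimizer, milestones=[150, 250], gamma=0.1)  # milestones assumed
criterion = torch.nn.L1Loss()  # L1 is less sensitive to outliers than L2

for epoch in range(300):                    # epoch count assumed
    for images, counts in train_loader:     # images pre-resized to 512 x 512
        loss = criterion(model(images), counts.float())
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    scheduler.step()
```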
Performance comparison
To validate the effectiveness of the proposed CSNet, we compare it with box-supervised and point-supervised methods commonly used for wheat counting and with count-supervised methods used for crowds, on the GWHD_2020 and GWHD_2021 datasets, as summarized in Table 2. Among them, the box-supervised methods include the single-stage target detection methods SSD [33] and YOLOv8 [34], the 2-stage target detection method Faster R-CNN [35], and the transformer-based target detection method DETR [36]. For the point-supervised methods, we conduct experiments using MCNN [37], CSRNet [31], ASD [38], SPN [39], and WheatNet [14]. Furthermore, we compare the proposed method with the count-supervised method used for crowds known as TransCrowd [21]. As summarized in Table 2, the proposed CSNet outperforms both the box-supervised and point-supervised methods in terms of MAE and RMSE. On the GWHD_2020 dataset, the MAE of CSNet is 10.1% lower than that of the best box-supervised model (Faster R-CNN) and 23.6% lower than that of the best point-supervised model (CSRNet), which shows that the proposed count-supervised model can achieve better performance in the absence of location information. Comparing the 2 location-supervised approaches, the box-supervised models outperform the point-supervised models on average, exhibiting best MAEs of 3.27 and 3.85, and worst MAEs of 4.39 and 5.91, respectively. The MAE of the best model in the box-supervised approach is 25.5% lower than that of the worst model, whereas the reduction is 34.8% in the point-supervised approach, demonstrating that a considerable difference exists between models using the same type of supervision.
Notably, the inclusion of dense images in the GWHD_2021 dataset results in a decrease in the performance of all models. Compared with the GWHD_2020 dataset, the MAE of CSNet increases by 31.6%, that of Faster R-CNN by 66.9%, and that of CSRNet by 51.9%. CSNet continues to exhibit excellent performance on the GWHD_2021 dataset, which may stem from the fact that it is not limited by location information, thereby maximizing its perceptual ability. The substantial increase in the RMSE of the box-supervised methods (e.g., that of DETR is as high as 16.2) reveals their disappointing predictions on dense images, which is the main reason for the considerable increase in their MAE. Overlapping target boxes in dense scenes considerably degrade the performance of the box-supervised methods.
TransCrowd [21] is a count-supervised model with excellent crowd-counting performance that can operate in 2 modes, Token and GAP, depending on the final output type. However, it exhibits less satisfactory performance in the wheat counting task because wheat scenes are notably different from crowd scenes. As summarized in Table 2, although the GAP mode of TransCrowd outperforms the Token mode, it still falls short of all the other methods we experimented with. We attribute this underperformance to the substantial differences between wheat and crowd scenes, which make TransCrowd's design unsuitable for wheat ear counting.
To further explore the performance of the models, we use the R² metric to evaluate the goodness of fit of the linear regression between predicted and actual counts for each model on the GWHD_2020 dataset. As shown in Fig. 4, the R² of CSNet reaches 0.95, higher than that of the other models, proving that CSNet can effectively fit the number of ears without relying on location information. Clearly, the proposed method obtains excellent results with reduced labeling, which is of great importance for decreasing the application cost of counting models and promoting the development of counting tasks in agriculture.
Impact of different backbones
Classical networks are selected as the backbone of CSNet, including VGG16 [30], ResNet34 [40], ResNet50 [40], MobileNetV2 [41], DarkNet53 [42], and ViT [43], to further evaluate the impact of the backbone on model performance. All backbones are pretrained on ImageNet to learn generic low-level features. As summarized in Table 3, using VGG16 as the backbone enables the proposed model to achieve the best counting accuracy on the GWHD_2020 and GWHD_2021 datasets, which may be attributed to the fact that only the features of a single object class (wheat ear) need to be captured in the wheat ear counting task, without the need for a more complex network structure. The counting accuracies of ResNet50 and DarkNet53, which have more parameters than VGG16, do not increase but decrease. This may be because the wheat counting task is simple and only needs to focus on the features of wheat ears; a backbone with a large number of parameters therefore appears to overfit. The transformer-based ViT does not perform well as a backbone, most likely because it requires a large amount of data to exploit its potential. Furthermore, when employing the lightweight MobileNetV2 backbone, CSNet exhibits commendable performance on the GWHD_2020 dataset but achieves suboptimal results on the GWHD_2021 dataset. This suggests that MobileNetV2 is better suited to simpler scenarios, providing an ideal solution for uncomplicated settings in which inference speed is a priority. In conclusion, we observe that the backbone network has a considerable impact on model performance, and selecting the appropriate network for the task can effectively improve counting accuracy.
Impact of the CBAM
To explore the effect of CBAM on the attention mechanism of the model, we conduct a series of experiments comparing model performance with and without CBAM. As summarized in Table 4, we not only compare overall performance on the entire test set but also examine counting capability in denser scenes with counts greater than 40 and in less dense scenes with counts less than 40. When CBAM is utilized, there is a noticeable reduction in both MAE and RMSE in denser scenarios compared to the variant without CBAM. More specifically, on the GWHD_2020 dataset, although the MAE of the model with CBAM is slightly higher than that of the model without CBAM in less dense scenarios, its lower RMSE indicates that the CBAM-enhanced model has better attention generalization and accuracy. This suggests that CBAM plays a crucial role in refining the attention mechanism of the model, allowing it to adapt and excel in challenging and dense wheat ear configurations.
To further illustrate the efficacy of CBAM in addressing the challenges posed by dense and complex wheat images, we select images with larger error margins from the test set of the GWHD_2021 dataset for comparison. Specifically, we compute the absolute error (AE) of the model without CBAM on each image and select images with AE greater than 5; a total of 189 images, numbered from 1 to 189, are selected. As shown in Fig. 5, the utilization of CBAM leads to a notable enhancement in model performance on these challenging images, with the MAE reduced from 8.92 to 6.98.
MPM study
To confirm the effectiveness of the MPM, we vary the number of multiscale layers and the slice sizes in the experiments. The smaller the slice size, the finer the features; the more layers there are, the more spatial feature dimensions the model perceives. More specifically, we construct 1-, 2-, and 3-layer structures in which different slice sizes (16 × 16, 8 × 8, and 4 × 4) segment the feature information, resulting in 7 different structures, as summarized in Table 5. In addition, we add a finer layer to the 3-layer structure to explore the effect of an additional spatial scale. As summarized in Table 5, the MAE of slice size 4 × 4 is 8.8% lower than that of slice size 8 × 8 and 22% lower than that of slice size 16 × 16, indicating that the fineness of the features has a considerable impact on the perceptual ability of the model in the single-layer structure. For the 2-layer structure, the combination of slice sizes 16 × 16 and 4 × 4 performs best, which may be because their larger difference in feature size is more favorable for acquiring diverse feature information. Adding any layer with a different slice size to the single-layer structure improves performance. For the 3-layer structure, the MAE is reduced by 7.8% compared to the best 2-layer structure and by 10.6% compared to the best single-layer structure, confirming that multiscale perception has a positive effect on model performance and generalization. We also conduct ablation experiments on the merging layer after the 3 branches. The results demonstrate that this layer enhances the fusion of multiscale information, improving model performance with only a slight increase in the number of parameters. However, the 4-layer structure consumes considerably more parameters without improving performance, indicating that the benefit of multiscale fusion saturates at the 3-layer structure. Furthermore, the 3-layer structure exhibits satisfactory inference speed, effectively meeting the demands of real-time detection.
As summarized in Table 6, we conduct experiments on the number of layers N in the MPM to explore its impact on model performance. Notably, performance considerably improves as the number of layers increases, with the best results achieved when N = 4. In particular, on the GWHD_2020 dataset, the MAE decreases from 3.08 to 2.94 and the RMSE decreases from 4.05 to 3.88 as N increases from 2 to 4. Similarly, on the GWHD_2021 dataset, the MAE decreases from 3.98 to 3.87 and the RMSE decreases from 5.75 to 5.60 for the same transition. However, when the number of layers is increased to 6, model performance exhibits a decreasing trend and is worse than that with 2 layers. This suggests that excess layers may introduce excessive model complexity, which negatively affects performance in counting tasks. Hence, striking a balance in model complexity when selecting layers is imperative to fully harness accuracy in wheat ear counting.
Visualization analysis
To demonstrate the superiority of the proposed method, we conduct visualization experiments using Grad-CAM [44]. Grad-CAM propagates gradients from the predicted value back through the network to obtain gradient information for each layer. This gradient information reflects the contribution of each element to the predicted value; larger contributions indicate that the network focuses more on those elements. Finally, the attention regions of the network are obtained by locating the elements with high contributions. In this section, we visualize CSNet, CSRNet [31], and MCNN [37], as shown in Fig. 6. A minimal sketch of this procedure follows.
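The sketch below illustrates the gradient-weighted attention computation for a scalar count output; it uses plain forward/backward hooks rather than any particular Grad-CAM library, and the final normalization step is an assumption for display purposes.

```python
import torch
import torch.nn.functional as F

def grad_cam(model, image, target_layer):
    """Weight each channel of the target layer's activations by the mean
    gradient of the predicted count with respect to that channel."""
    feats, grads = {}, {}
    h1 = target_layer.register_forward_hook(lambda m, i, o: feats.update(a=o))
    h2 = target_layer.register_full_backward_hook(
        lambda m, gi, go: grads.update(g=go[0]))
    pred = model(image.unsqueeze(0)).sum()  # scalar predicted count
    model.zero_grad()
    pred.backward()
    h1.remove(); h2.remove()
    w = grads['g'].mean(dim=(2, 3), keepdim=True)   # per-channel contribution
    cam = F.relu((w * feats['a']).sum(dim=1))       # (1, H, W) attention map
    return (cam / (cam.max() + 1e-8)).squeeze(0).detach()
```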
In these experiments, we extract the regions of interest from the last convolutional layer of VGG16 and map them onto the original image as a heat map. Even without precise location information, the proposed model clearly identifies the objects that should be focused on. Compared to CSRNet and MCNN, the proposed model exhibits more stable and comprehensive attention when recognizing wheat ears of different colors, sizes, and growth stages, further proving the strong generalization and robustness of CSNet. In summary, through visualization experiments using the Grad-CAM technique, we verify the superiority of the CSNet model for wheat ear identification and demonstrate its ability to maintain stable attention to wheat ears with different characteristics and at different growth stages, providing important insights for a deeper understanding and optimization of the model.
To validate the distinctions between the MPM in CSNet and the multi-granularity MLP module in CrowdMLP [23], we conduct a visualization study on both crowd and wheat ear datasets. In particular, because the source code of the CrowdMLP model is not available, we utilize Grad-CAM to generate attention heatmaps for the various layers of the MPM and the backbone on wheat ear and crowd images. As shown in Fig. 7, the MPM exhibits attention features across 3 different size ranges in the wheat ear images, with larger segmentation sizes corresponding to broader attention ranges. CSNet relies on the multi-range attention mechanism of the MPM to comprehensively understand wheat ear scenes, minimize background interference, and thereby optimize its performance in wheat ear counting tasks. Because the wheat ear counting task does not pose the challenge of abrupt density variations observed in crowd counting, the design of the MPM is not specifically optimized for such challenges. Consequently, the attention of the MPM may diffuse and fail to accurately capture rapid density changes in crowd images. In contrast, CrowdMLP adopts a multi-granularity MLP module designed to address the rapid density variations inherent in crowd counting. The visualization results unequivocally demonstrate a distinct disparity in design philosophy between the MPM in CSNet and the multi-granularity MLP module in CrowdMLP. Moreover, the results demonstrate that different scales focus on different regions; fusing them allows the model to more accurately capture a wide range of detailed and global information in the input data, thereby increasing its perceptual capabilities.
Discussion
Counting tasks are crucial in agriculture, as counting crops (e.g., wheat, maize, and rice) can estimate growth, predict yields, and contribute to the efficiency of agricultural production. However, location-supervised methods incur high labeling costs, particularly for dense crops. To reduce the labeling cost, we propose a count-supervised model with multiscale global awareness, which achieves the best results among advanced methods. By reducing the cost and complexity of dataset creation, the proposed approach provides a more practical and cost-effective solution for automated counting in agriculture. We discuss the various aspects of CSNet in the following subsections.
Self-constructed wheat grain dataset
Wheat grain count is a critical determinant of wheat yield and a key metric for evaluating crop growth and predicting production. Therefore, we employ a self-constructed wheat grain dataset to perform an extended test illustrating the low cost and high accuracy of the proposed method. To increase the diversity and robustness of the dataset, images are captured against 2 distinct backgrounds: white and gray stripes. This deliberate variation helps train the model to adapt to a range of backgrounds and lighting conditions, thereby improving its generalization capability [45]. Subsequently, a predetermined number of wheat grains are randomly scattered and photographed against a given background to populate the dataset. Finally, we randomly increase or decrease the number of grains in the last shot to obtain a new image, as shown in Fig. 8, where each image is unique. Because the grain count is meticulously documented during each capture, no extensive annotation effort is needed, resulting in a low-cost dataset.
In particular, we establish a wheat grain dataset with 510 images. All images contain wheat grains except for 10 background images, which are used as negative samples. The number of grains per image follows a Gaussian distribution, with a mean of 40, a minimum of 11, and a maximum of 68; the specific distribution is shown in Fig. 9. We randomly select 80% of the dataset as training data and use the remainder as test data. To match the size of the wheat grains, the input images are resized to 256 × 256. CSNet achieves excellent results in these experiments, with an MAE of 2.79 and an RMSE of 4.49, confirming that it can be used to predict counts for more crops. Moreover, we conduct experiments by training on white background images and fine-tuning on a small set of gray-striped background images. Test results on the remaining gray-striped backgrounds demonstrate that fine-tuning considerably enhances model performance, with the MAE improving from 8.30 to 3.98. These results indicate that appropriate fine-tuning can enhance model performance under various background conditions, thereby increasing its usefulness and robustness in real-world applications.
Exploration of MPM
Multiscale techniques have been widely adopted and proven effective in computer vision for capturing different features in images, thereby enhancing the understanding and generalization capabilities of a model across different objects or scenes [46][47][48]. Given the potential variations in wheat fields, including different growth stages, varieties, and wheat ear densities, we introduce multiscale techniques to improve the adaptability of the proposed counting model to complex scenes. Initially, we adopted a pyramid structure commonly used in the visual domain to generate multiple feature maps of different sizes and attempted to slice each feature map into equally sized slices to capture information at different spatial scales. Nevertheless, our experiments reveal that this structure impedes the ability of the network to perceive wheat ear features, possibly because of the misalignment of semantic information across multiple feature maps. To ensure consistent semantic information, we instead slice a single feature map into segments of different sizes to capture information at multiple spatial scales.
Furthermore, to address the challenge of perceiving objects without location information, selecting a structure with a global receptive field is crucial. In contrast to Transformers, the structure of the MLP-Mixer does not rely on self-attention mechanisms. This characteristic enables the MLP-Mixer to train and generalize more effectively with limited data. Considering the relatively straightforward nature of the wheat ear counting task and the relatively small dataset, which does not require complex context understanding or long-range dependency modeling, the concise structure of the MLP-Mixer is more appropriate. The absence of self-attention mechanisms makes the model easier to train, and it exhibits superior performance in resource-constrained scenarios. However, using the MLP-Mixer directly for counting yields unsatisfactory results, with an MAE of 12.83 on the GWHD_2020 dataset.
In crowd images, individuals at varying distances exhibit substantial differences in size, resulting in notable density variations across different positions in the image. The multi-granularity MLP module in CrowdMLP is specifically designed to address rapid density changes in crowd-counting challenges. This module effectively captures and integrates semantic information from different granularities, thereby improving the adaptability of the model to variations in crowd density. Compared to the proposed MPM, the design focus of the CrowdMLP module is on density changes within crowds. Conversely, the proposed module addresses different growth stages and varieties in wheat scenes by extracting information at different spatial scales. The distinct design philosophies of the 2 modules enable each to excel in its respective scenario, exhibiting optimal performance.
Application prospects
Counting is a crucial task in the field of agriculture that provides accurate data support to farmers and aids scientific agricultural management and production decisions [49]. With advancements in computer vision, agricultural counting has gradually become more automated and intelligent [50]. However, the high cost of creating datasets has emerged as a bottleneck, hindering the widespread adoption of this technology and its ability to meet the diverse counting requirements of agriculture. The proposed method therefore aims to reduce the cost of dataset creation, thereby enabling low-cost automated counting. In practice, several agricultural quantity assessments are currently performed manually, and a small count-supervised dataset can be obtained simply by additionally taking images. Capturing images from various angles in a single region allows label reuse and reduces annotation costs. Furthermore, for plants grown in regional settings (e.g., grapes and tomatoes), a camera can be used to capture panning shots; in such instances, fruits of the same cluster appearing in different images must be counted only once, thereby reducing double counting. For neatly planted crops, quantitative information can be quickly obtained by manually recording the number of rows and columns. However, for densely or widely planted crops, manually counting rows and columns can be tedious. CSNet is an effective solution for automating the counting process while minimizing the cost of labeling.
Conclusion
In this study, we propose a novel method for the accurate and efficient counting of wheat ears using count supervision. First, we design a multiscale model with global perception that utilizes counting information to learn the relationships among wheat ear features; it comprises a backbone, CBAM [25], the MPM, and the CM. The backbone and CBAM are primarily used to extract wheat ear features and reduce interference from complex backgrounds. The MPM learns the relationship graph among wheat ear features in a multidimensional space, which is conducive to identifying the intrinsic connection between wheat ear features and counting information. To validate the proposed approach, experiments are conducted using the Global Wheat Head Detection dataset [26,27].
In comparative experiments, we compare CSNet with box-supervised and point-supervised approaches and achieve superior results. The effectiveness of the MPM is validated in ablation experiments and demonstrated through a visual analysis of the attention maps of CSNet. Finally, we establish a wheat grain dataset, which is used to evaluate the time cost required for count- and position-level annotations and to verify the robustness and generality of the proposed method.
The results demonstrate that the proposed method not only reduces the cost of creating datasets but also exhibits excellent counting performance.
Fig. 1.
Fig. 1. (A) Box-supervised CNN-based methods, which predict target boxes to locate wheat ears, are costly to label and handle overlapping wheat ears poorly. (B) Point-supervised CNN-based methods, which predict density maps to obtain the number of wheat ears, are costly to label and poorly suited to wheat ears of varying lengths. (C) The proposed CSNet is a multiscale global perception method based on count supervision, which is easy to label, low cost, and highly accurate.
Fig. 3.
Fig. 3. The overall architecture of CSNet, including the backbone, convolutional block attention module (CBAM), multiscale perception module (MPM), and counting module (CM). (A) The detailed structure of CBAM, used to improve attention to the wheat region. (B) The flow of the MPM, which utilizes wheat ear features at multiple scales to improve the generalization performance of the network. (C) The detailed operation of the mixer layer in the MPM, which achieves perception of wheat ears without location information by fusing global features.
Fig. 5.
Fig. 5. Comparison of performance with and without CBAM on images with large counting errors in the test set of the GWHD_2021 dataset.
Fig. 6.
Fig. 6. Feature visualization of the last layer of each model's backbone.
Fig. 8.
Fig. 8. Part of the images in the self-built wheat grain dataset, which contains different backgrounds and distributions.
Table 1.
The statistics of the dataset used in this study. Min, Max, Avg, and Total denote the minimum, maximum, average, and total number of annotated wheat ears, respectively.
Table 2.
Performance comparison of competing methods using different supervision on the 2020 version of the GWHD dataset and the latest 2021 version
Table 3.
Counting performance of our proposed method under different backbones on the GWHD dataset
Table 4.
Counting performance with and without CBAM on the GWHD_2020 and GWHD_2021 datasets
Table 5.
The impact of different numbers of layers and slice sizes in the multiscale module on the GWHD_2020 dataset
Table 6.
Impact of the number of layers N in MPM on model performance on the GWHD_2020 and GWHD_2021 datasets | 2024-07-27T15:12:42.538Z | 2024-07-25T00:00:00.000 | {
"year": 2024,
"sha1": "05c8700c9fc8e74b4cd348630647655a32609e81",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.34133/plantphenomics.0236",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "25ac5ee763a851e152bb09fa4107f28f82b53172",
"s2fieldsofstudy": [
"Agricultural and Food Sciences",
"Computer Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
119517792 | pes2o/s2orc | v3-fos-license | Sectoral Multipole Focused Beams
We discuss the properties of pure multipole beams with well-defined handedness or helicity, with the beam field a simultaneous eigenvector of the squared total angular momentum and its projection along the propagation axis. Under the condition of hemispherical illumination, we show that the only possible propagating multipole beams are "sectoral" multipoles. The sectoral dipole beam is shown to be equivalent to the non-singular time-reversed field of an electric and a magnetic point dipole Huygens' source located at the beam focus. Higher order multipolar beams are vortex beams vanishing on the propagation axis. The simple analytical expressions of the electric field of sectoral multipole beams, exact solutions of Maxwell's equations, and the peculiar behaviour of the Poynting vector and spin and orbital angular momenta in the focal volume could help to understand and model light-matter interactions under strongly focused beams.
"Gaussian beams" are the most typical beams employed in optical manipulation [1] because most lasers emit light beams whose transverse electric fields and intensity distributions are well approximated by Gaussian functions. However, after focusing, when the half width of the beam waist is comparable to the wavelength, λ, the exact solution of Maxwell's equations predicts significant deviations from the Gaussian shape, only valid in the paraxial scalar approximation (Gaussian approximation can be give accurate results only when the half width of the beam waist is greater than ∼ 10λ) [2].
A rigorous description of focal and scattered fields, usually based on the multipolar expansion of the field in terms of electric and magnetic multipoles, is needed for an accurate and quantitative discussion of field intensities, polarization and optical forces near the focal region of a focused beam [3][4][5][6][7]. A correct multipolar description of the light field is also relevant to analyze optical angular momentum (AM) phenomena [8,9], including the interplay between spin (SAM) and orbital (OAM) angular momenta in the focal volume of tightly focused beams [10,11], as well as to understand the spectral response and optical torques on small objects: illumination with a tightly focused beam can only excite particle multipolar modes which are already present in the incident beam [12,13]. As a recent example, the strong size selectivity in optical printing of silicon particles has been associated with the dominant contribution of dipolar modes of the tightly focused laser beam [14].
In many optical applications, where one wishes to maximize the electric energy density and/or minimize the cross-polarization, a converging electric-dipole wave is the natural choice as the incident wave [15][16][17][18][19], since higher order multipoles vanish at the focal point.
In the so-called 4π illumination (when the incident light can cover the full solid angle), the highest energy density for a given incoming power of a monochromatic beam is attained when the beam is a perfect converging dipolar beam [15]. However, for propagating beams where the incident angles are limited to a hemisphere (i.e. 2π illumination: 0 ≤ ϕ ≤ 2π and π/2 < θ ≤ π for a beam propagating along the z axis), the properties of the dipolar beam in the focal region can be very different, although the total energy density at the focus can still be more than half of the maximum possible [16]. The vector properties of the light strongly affect the field polarization and intensity distribution near the focus of tightly focused vector beams [20,21]. Our main goal here is to explore the field properties of propagating spherical multipole beams within a framework based on helicity, angular momentum and symmetry [22].
To this end, we consider pure multipole beams (PMB) with well-defined handedness or helicity, σ = ±1 (we associate left polarized light with σ = +1, positive helicity or handedness), being simultaneous eigenvectors of the squared total angular momentum, J², and its z-component, J_z. For a beam propagating along the z-axis, the radial component of the Poynting vector far from the focus is assumed to be always negative for incoming light (π/2 ≤ θ ≤ π) and always positive for outgoing light (0 ≤ θ < π/2). As we will see, this condition of hemispherical illumination restricts the possible propagating multipole beams to the so-called "sectoral" multipoles with m = σl. Figure 1 illustrates two examples of sectoral and non-sectoral beams. Interestingly, for l > 1, sectoral PMBs are vortex beams whose field vanishes on the propagation axis with a vortex topological charge of σ(l − 1). In contrast, dipole beams with well defined helicity (l = 1, m = σ) concentrate the field at the focus with an energy density that is 2/3 times the Bassett upper bound of passive energy concentration [15]. Dipole beams are equivalent to mixed-dipole waves [16] and can be seen as the sum of outgoing and incoming waves radiated from an electric and a magnetic dipole located at the focus with spinning axes on the focal plane. We will see that, although the helicity and the z-component of the total angular momentum of the beam are well defined, the spin and orbital contributions present a nontrivial distribution in the focal volume.
II. BEAM EXPANSION IN VECTOR SPHERICAL WAVEFUNCTIONS IN THE HELICITY REPRESENTATION
Let us assume that a monochromatic light beam (with an implicit time-harmonic factor e^{−iωt}) propagates through a homogeneous medium with real refractive index n_h, with wave number k = n_h ω/c = 2πn_h/λ_0 (λ_0 being the light wavelength in vacuum). We consider the expansion of the electric field, E, of the incident focused beam in vector spherical wavefunctions (VSWFs), Ψ^σ_{lm}, with well defined helicity, σ = ±1 [23]:
\[
\mathbf{E}(\mathbf{r}) = \sum_{\sigma=\pm 1}\sum_{l=1}^{\infty}\sum_{m=-l}^{l} C^{\sigma}_{lm}\, \boldsymbol{\Psi}^{\sigma}_{lm}(\mathbf{r}),
\]
where Ψ^σ_{lm} is defined as
\[
\boldsymbol{\Psi}^{\sigma}_{lm} = \frac{1}{\sqrt{2}}\left(\mathbf{N}_{lm} + \sigma\,\mathbf{M}_{lm}\right),
\qquad
\mathbf{M}_{lm} = j_l(kr)\,\mathbf{X}_{lm},
\qquad
\mathbf{N}_{lm} = \frac{1}{k}\,\nabla\times\mathbf{M}_{lm},
\qquad
\mathbf{X}_{lm} = \frac{\mathbf{L}\,Y_l^m}{\sqrt{l(l+1)}}.
\]
Here, M_{lm} and N_{lm} are Hansen's multipoles [24], X_{lm} denotes the vector spherical harmonic [25], j_l(kr) are the spherical Bessel functions (well-defined at r = 0), Y_l^m are the spherical harmonics, and L ≡ −i r × ∇ is the OAM operator. The expansion coefficients C^σ_{lm} in the helicity basis are equivalent to the so-called beam shape coefficients (BSCs) [4,26]. An explicit expression of the VSWFs in spherical polar coordinates (with unit vectors ê_{r,θ,ϕ}) follows from applying the curl to M_{lm}:
\[
\boldsymbol{\Psi}^{\sigma}_{lm} = \frac{1}{\sqrt{2}}\left[
\sigma\, j_l(kr)\,\mathbf{X}_{lm}
+ i\sqrt{l(l+1)}\,\frac{j_l(kr)}{kr}\,Y_l^m\,\hat{\mathbf{e}}_r
+ \frac{i}{kr}\,\frac{d\,[r\,j_l(kr)]}{dr}\,\hat{\mathbf{e}}_r\times\mathbf{X}_{lm}
\right].
\]
Let us recall that the multipoles Ψ^σ_{lm} can be built following the standard rules of angular momentum addition [24,27] as simultaneous eigenvectors of the square of the total angular momentum, J², and its z-component, J_z, with J = L + ↔S given by the sum of the OAM, L, and SAM, ↔S, operators,
\[
\mathbf{J} = \mathbf{L} + \overleftrightarrow{\mathbf{S}},
\qquad
\overleftrightarrow{S}_i = i\,\hat{\mathbf{e}}_i \times \overleftrightarrow{I},
\quad\text{i.e.}\quad (S_i)_{jk} = -i\,\epsilon_{ijk},
\]
where ê_{i=x,y,z} are unit Cartesian vectors and ↔I is the unit dyadic.
III. POYNTING VECTOR FOR PROPAGATING PURE MULTIPOLE BEAMS
In the helicity representation, the Poynting vector, P, of a monochromatic optical field, when calculated using either the electric or the magnetic field, separates into right-handed and left-handed contributions, with no cross-helicity terms [28]:
\[
\mathbf{P} = \mathbf{P}_{\sigma=+1} + \mathbf{P}_{\sigma=-1},
\]
where each helicity component P_σ depends only on the helicity-σ part of the field, ↔S is the spin tensor defined above, and Z = 1/(ε_0 n_h c) = √(μ_0/ε_0)/n_h. This is an interesting result showing that, for beams with well defined helicity σ, the z-component of the spin is simply proportional to the projection of the Poynting vector on the propagation axis.
In the far field, where
\[
\lim_{kr\to\infty} j_l(kr) \sim \frac{\sin\!\left(kr - l\pi/2\right)}{kr},
\]
the Poynting vector of a pure multipole beam (PMB) with indices (l, m, σ) takes a simple asymptotic form. Integrating its radial component over all incoming angles (π/2 ≤ θ ≤ π) yields the total incoming power, P_W, of a pure multipole beam, which, from the parity relations of the spherical harmonics and associated Legendre functions, is zero for l + m odd. This implies that the total amount of power carried by beams with l + m odd is identically zero, even though they may present incoming and outgoing Poynting vectors (notice that P_W is the actual total power flowing through the focal plane).
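For reference, the parity relations invoked here are the standard ones; the hemispherical power integral is written schematically below (overall prefactors omitted), under the assumption that only the radial component of P contributes in the far field:

```latex
\[
  P_l^m(-x) = (-1)^{\,l+m}\, P_l^m(x), \qquad
  Y_l^m(\pi-\theta,\varphi) = (-1)^{\,l+m}\, Y_l^m(\theta,\varphi),
\]
\[
  P_W \;\propto\; \lim_{r\to\infty} r^2
  \int_0^{2\pi}\!\mathrm{d}\varphi \int_{\pi/2}^{\pi}\!\sin\theta\,\mathrm{d}\theta\;
  \hat{\mathbf{e}}_r \cdot \mathbf{P}(r,\theta,\varphi),
\]
```

and the vanishing of P_W for l + m odd follows from applying these reflection relations to the integrand.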
The behaviour near the z-axis, i.e. for θ ≈ π and θ ≈ 0, shows that, for l > 1, the sectoral PMBs are vortex beams whose field vanishes on the propagation axis with a vortex topological charge of σ(l − 1). Figure 2a shows the Poynting vector lines of a sectoral quadrupolar beam (l = 2, m = 2, σ = +1), which presents a "doughnut-like" field intensity pattern around the focus.
For pure dipolar beams with l = 1, the field given by Eq. (18) is identical to the first term of the expansion of a circularly polarized plane wave around the origin [1,14]. The field at the focus, r = 0, is circularly polarized on the focal plane. The ratio between the electric energy density at the focal point and the total incoming power is then 2/3 of the Bassett upper bound [15]. The field intensities and Poynting vector lines of a dipole beam are shown in Fig. 3. As can be seen, the intensity is concentrated in a volume ∼(λ/2)³, as expected for a diffraction limited beam, with an interesting toroidal flow of the Poynting vector around the focus. It is worth mentioning that steady-state toroidal current flow of the Poynting vector of highly focused beams was already demonstrated experimentally and explained with an approximate field model [29].
A. Dipolar beams and time-reversal of a Huygens's source
Dipole beams are equivalent to mixed-dipole waves [16] and can be seen as the sum of outgoing and incoming waves radiated from an electric dipole, p^(σ), and an equivalent magnetic dipole, m^(σ), located at the focus. These electric and magnetic point sources are known as Huygens' sources, and their direct, outgoing emission presents a peculiar asymmetric radiation pattern [16,30,31]. As we show in Appendix B, the field of a dipole beam (Eq. (18) with l = 1) can be rewritten in terms of the outgoing dyadic Green function G_ee(r) and of G_em ≡ (1/k)∇ × G_ee acting on the dipole pair p^(σ) and m^(σ). Interestingly, for an incoming beam with helicity σ, Eq. (26) is exactly the time-reversed electric field radiated by the Huygens' source (with helicity −σ), as it would be obtained in a time-reversal mirror cavity [32].
V. LINEARLY POLARIZED DIPOLAR BEAMS
Focused linearly polarized dipolar beams can be built by combining two well-defined helicity beams of opposite handedness from Eq. (18), i.e., by superposing the σ = +1 and σ = −1 dipole beams.
The field at the focus is linearly polarized on the focal plane. The Poynting vector, obtained by inserting Eq. (A1) into Eq. (12) with both helicities included in the summation, presents the characteristic toroidal flow around the focus but, as expected, the intensity pattern at the focal plane is no longer axially symmetric (see Fig. 4). Since the beam carries a well-defined z-component of the total angular momentum, we may define its (constant) density as j_z = l_z + s_z, where we have introduced l_z and s_z as the OAM and SAM densities, respectively (although L_z and S_z are not proper angular-momentum operators [33]). For fields with well defined helicity σ, the z-component of the SAM can be calculated from the projection of the Poynting vector on the beam axis, i.e. from Eqs. (12) and (A1), or, alternatively, by taking into account that the vectors ξ_µ are eigenfunctions of S_z with eigenvalue µ.
VII. CONCLUDING REMARKS
Pure multipolar beams are exact analytical solutions of Maxwell's equations with well defined angular momentum properties that can be extremely useful to understand and model different phenomena associated with light-matter interactions under strongly focused beams, without requiring sophisticated and cumbersome numerical calculations. We have shown that dipole beams could be generated by time-reversal techniques [32] as the time reversal of a Huygens' source. The steady-state toroidal Poynting vector flow in focused beams, which was already demonstrated experimentally and explained with approximate beam models [29], appears here in a natural way in the exact analytical solution for the dipole beam. The peculiar distribution of the SAM and OAM around the focus could lead to interesting angular momentum transfer phenomena for small asymmetric and/or absorbing particles trapped by circularly polarized, highly focused beams [37]. Higher sectoral multipole vortex beams, with well-defined total angular momentum and projection on the propagation axis, could also be useful to understand scattering and AM phenomena in vortex beams [38]. The peculiar properties of the fields at the focus of highly focused beams may also be relevant to understand the light-induced emergence of cooperative phenomena in colloidal suspensions of nanoparticles [39,40].
In spherical polar coordinates, and under the stated assumptions, it is straightforward to show that the resulting expression corresponds to Eq. (26), extending Eq. (8) of Carminati et al. [32] to electric and magnetic sources.
"year": 2019,
"sha1": "7f430e5e793cdf121bf7598d4c8af298d31dc520",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1364/oe.27.016384",
"oa_status": "GOLD",
"pdf_src": "Arxiv",
"pdf_hash": "7f430e5e793cdf121bf7598d4c8af298d31dc520",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Medicine"
]
} |
14491742 | pes2o/s2orc | v3-fos-license | Identification of resistance of Plasmodium falciparum to artesunate-mefloquine combination in an area along the Thai-Myanmar border: integration of clinico-parasitological response, systemic drug exposure, and in vitro parasite sensitivity
Background: A markedly high failure rate of three-day artesunate-mefloquine was observed in the area along the Thai-Myanmar border.

Methods: Identification of Plasmodium falciparum isolates with intrinsic resistance to each component of the artesunate-mefloquine combination was analysed with integrated information on clinico-parasitological response, together with systemic drug exposure (area under the blood/plasma concentration-time curve (AUC)) of dihydroartemisinin and mefloquine, and in vitro sensitivity of P. falciparum, in a total of 17 out of 29 P. falciparum isolates from patients with acute uncomplicated falciparum malaria. Analysis of the contribution of in vitro parasite sensitivity and systemic drug exposure, and the relationship with pfmdr1 copy number, in the group with sensitive response was performed in 21 of 69 cases.

Results: Identification of resistance and/or reduced intrinsic parasitocidal activity of artesunate and/or mefloquine without pharmacokinetic or other host-related factors was confirmed in six cases: one with reduced sensitivity to artesunate alone, two with resistance to mefloquine alone, and three with reduced sensitivity to artesunate combined with resistance to mefloquine. Resistance and/or reduced intrinsic parasitocidal activity of mefloquine/artesunate, together with a contribution of a pharmacokinetic factor of mefloquine and/or artesunate, was identified in seven cases: two with resistance to mefloquine alone, and five with resistance to mefloquine combined with reduced sensitivity to artesunate. A pharmacokinetic factor alone contributed to recrudescence in three cases, all of which had inadequate whole blood mefloquine levels (AUC 0-7 days). Other host-related factors contributed to recrudescence in one case. Amplification of pfmdr1 (increase in pfmdr1 copy number) is a related molecular marker of artesunate-mefloquine resistance and seems to be a suitable molecular marker to predict the occurrence of recrudescence.

Conclusions: Despite the evidence of a low level of decline in the sensitivity of P. falciparum isolates to artemisinins in areas along the Thai-Myanmar border, artemisinin-based combination therapy (ACT) would be expected to remain the key anti-malarial drug for treatment of multidrug-resistant P. falciparum. Continued monitoring and active surveillance of the clinical efficacy of ACT, including identification of true artemisinin-resistant parasites, is required for appropriate implementation of malaria control policy in this area.
Background
The emergence and spread of multidrug-resistant Plasmodium falciparum is the key factor contributing to the complexity of malaria control. To deal with the threat of resistance, artemisinin-based combination therapy (ACT) has been promoted as a strategy to counteract the increasing resistance of the parasite as well as to prevent disease transmission [1]. Despite these precautionary measures, however, artemisinin-resistant P. falciparum malaria has emerged in western Cambodia and the bordering regions of Thailand, the hotspot of multidrug-resistant parasites [2][3][4][5][6][7][8][9][10], and appears to be emerging along the western border of Thailand [11,12].
In Thailand, a three-day artesunate-mefloquine combination regimen has been used as the first-line treatment for acute uncomplicated falciparum malaria throughout the country [13]. In a previous study aimed at monitoring the clinical efficacy of this three-day artesunate-mefloquine combination regimen during 2009 in 134 patients with acute uncomplicated falciparum malaria in the area along the Thai-Myanmar border, a markedly high failure rate was observed [11]. The 28- and 42-day cure rates calculated by Kaplan-Meier survival analysis with PCR correction for re-infection were 74.7 and 68.1%, respectively. It is noted that re-appearance of parasitaemia occurred as early as seven days after the first dose. In addition, there was a small but significant delay in parasite clearance in the group with recrudescence response (median (range) parasite clearance time 32.0 (28.0-34.0) h) compared with the sensitive group (26.0 (24.0-26.0) h). Only six (17.6%) and seven (20.5%) patients with recrudescence response, respectively, had parasitaemia and fever cleared within 24 hours. This observation is alarming and raises great concern that resistance of P. falciparum may have actually developed and spread in this area. In the present study, identification of resistance/reduced sensitivity of P. falciparum in this border area to each component of the three-day artesunate-mefloquine combination regimen was undertaken based on clinico-parasitological response, with confirmed adequacy of anti-malarial systemic drug exposure during the acute phase of infection, and on the in vitro sensitivity of P. falciparum isolates to each combination partner. In addition, the possible link between the identified "resistance" cases and P. falciparum multidrug resistance 1 (pfmdr1) copy number, a candidate molecular marker of mefloquine and/or artesunate resistance, was investigated.
Patients and study framework
The study was conducted at the Mae Tao clinic for migrant workers, Tak Province, Thailand [11]. Figure 1 summarizes the total number of cases included in the study and the number of cases included in each step of the analysis. Prior to the study, approval of the study protocol was obtained from the Ethics Committee of the Ministry of Public Health of Thailand. Written informed consent was obtained from all patients before study participation. The analysis for identification of resistance of P. falciparum to the artesunate-mefloquine combination was performed in a total of 91 (62 cases with sensitive response and 29 with PCR-confirmed recrudescence) Burmese patients (47 males and 44 females, aged between 16 and 57 years) with acute uncomplicated P. falciparum malaria (median (95% CI) admission parasitaemia 5,512 (5,040-6,930)/μl [14]). Re-appearance of parasitaemia in the 29 late parasitological failure (LPF) cases occurred between day 7 and day 42, with significant prolongation of parasite and fever clearance times (PCT and FCT) in patients with recrudescence compared with sensitive response (median (95% CI) 32.0 (28.0-34.0) vs 26.0 (24.0-26.0) h, and 32.0 (30.0-34.0) vs 26.0 (24.0-26.0) h, respectively). The study procedures and results of the clinical efficacy assessment, including the relationship with drug concentrations, were previously described in detail [11]. In brief, patients were treated with the standard three-day combination regimen of artesunate and mefloquine with primaquine (4 mg/kg body weight artesunate daily for three days; 750 and 500 mg mefloquine on the first and second day, respectively; 0.6 mg/kg body weight primaquine on the third day). All were admitted to the clinic during the course of treatment or until signs and symptoms of malaria disappeared. Prior to treatment, a blood sample (5 ml) was collected from each patient for in vitro sensitivity testing of P. falciparum isolates to artesunate and mefloquine, genetic analysis (molecular markers of recrudescence and drug resistance), and determination of baseline anti-malarial drug concentrations (mefloquine, artesunate and its active plasma metabolite, dihydroartemisinin).
Patients were requested to return for follow-up on days 7, 14, 21, 28 and 42, or at any time if fever or symptoms suggestive of malaria developed. At each visit, a parasite count was performed (Giemsa stain), and a detailed questionnaire on general symptoms was recorded. Blood samples were collected at specified time points for measurement of mefloquine (at one, two, six, 12, 24, 25, 36, 37, 48 and 49 hours, and three and seven days after the first dose) and artesunate/dihydroartemisinin (at one, six, 12 and 24 hours after the first dose) concentrations. Malaria blood smears were obtained on enrolment and thereafter twice daily until two consecutive slides were confirmed to be negative, as well as at every follow-up visit. Thick films were screened for 200 oil-immersion fields before declaring a slide negative. Asexual parasites and gametocytes were counted separately against 200 white blood cells (WBCs); if the parasite density was too high to count on the thick film, the number of parasites per 2,000 red blood cells (RBCs) on the thin film was counted. The parasite slope half-life for each patient was calculated as log_e(2)/k = 0.693/k, where k is the parasite clearance rate constant [15].
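As an illustration, the slope half-life can be obtained from a log-linear fit of the parasitaemia decline. The sketch below assumes a clean log-linear segment (formal estimators used in field studies additionally handle lag and tail phases), and the sample values are invented:

```python
import numpy as np

def slope_half_life(hours, parasitaemia):
    """Fit log_e(parasitaemia) vs time; k is the clearance rate constant
    and the slope half-life is log_e(2)/k = 0.693/k."""
    y = np.log(np.asarray(parasitaemia, dtype=float))
    slope, _ = np.polyfit(np.asarray(hours, dtype=float), y, 1)
    k = -slope                 # clearance rate constant (1/h)
    return np.log(2) / k       # half-life in hours

# Illustrative counts declining over the first 24 h after the first dose:
print(slope_half_life([0, 6, 12, 18, 24], [5512, 1800, 620, 200, 60]))  # ~3.7 h
```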
Plasmodium falciparum genotyping of the three polymorphic genes merozoite surface protein 1 (msp1), merozoite surface protein 2 (msp2), and glutamate-rich protein (glurp) was performed on paired samples collected prior to treatment and at the time of parasite re-appearance to distinguish between re-infection and recrudescence [16][17][18]. A blood sample (5 ml) was also collected from each individual with re-appearance of parasitaemia for determination of drug concentrations, in vitro sensitivity testing, and evaluation of the potential molecular marker of resistance, P. falciparum multidrug resistance 1 (pfmdr1) copy number. The clinical efficacy of the three-day course of artesunate-mefloquine was evaluated in the group of patients who completed the 42-day follow-up period. The classification of therapeutic outcome was according to the WHO protocol [14]. Plasma concentrations of artesunate and dihydroartemisinin were measured using liquid chromatography-mass spectrometry (LC/MS) according to the method of Thuy and colleagues [19]. Whole blood concentrations of mefloquine were determined using high performance liquid chromatography with UV detection (HPLC-UV) according to the method developed by Karbwang and colleagues, with modification [20].
Assessment of in vitro sensitivity of Plasmodium falciparum isolates
The in vitro sensitivity test was accomplished in a total of 76 P. falciparum isolates (60 and 16 isolates collected pre-treatment and at the time of recrudescence, respectively). Plasmodium falciparum 3D7 (chloroquine-sensitive) and K1 (chloroquine-resistant) were used as control clones. All were cultured according to the method of Trager and Jensen with modification [21]. In vitro sensitivity testing was performed in sterile 96-well flat-bottom microtiter plates (Costar™, Corning, Massachusetts, USA) according to the method of Rieckmann and colleagues [22]. Evaluation of the sensitivity of P. falciparum isolates to mefloquine and artesunate was based on the SYBR Green I assay [23].

Figure 1. Summary of cases included in the analysis at each step (clinico-parasitological response, pfmdr1 copy number, in vitro parasite sensitivity to artemisinin and mefloquine, and drug levels (dihydroartemisinin and mefloquine)). a = Reference [11].

The triplicate fluorescence intensity results for each drug were averaged. The dose-response curves obtained from the in vitro sensitivity assay were analysed by non-linear regression using CalcuSyn™ software (Biosoft, Cambridge, UK). Results were expressed as IC50 values, defined as the concentration of anti-malarial drug that produces 50% inhibition of parasite development compared to the control.
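Equivalently to the CalcuSyn fit, the IC50 can be estimated by non-linear regression of a sigmoidal dose-response model. The sketch below uses a simple Hill function and invented triplicate-mean data (normalized so the drug-free control equals 1), not values from the study:

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(conc, ic50, slope):
    """Fraction of parasite growth relative to the drug-free control."""
    return 1.0 / (1.0 + (conc / ic50) ** slope)

conc = np.array([0.25, 0.5, 1, 2, 4, 8, 16, 32])   # nM, assumed dilution series
resp = np.array([0.97, 0.93, 0.80, 0.55, 0.30, 0.12, 0.05, 0.02])
(ic50, n), _ = curve_fit(hill, conc, resp, p0=[np.median(conc), 1.0])
print(ic50)   # ~2 nM, of the order of the study's median artesunate IC50
```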
The criteria for discriminating between resistant and sensitive parasite isolates for mefloquine were as follows: sensitive (IC50 ≤ 24 nM) and resistant (IC50 > 24 nM) [24,25]. For artesunate, as there is no clear cut-off level for artemisinin resistance, two criteria were applied. First, susceptibility was classified into two levels according to the criteria of Pradines and colleagues [26], i.e., sensitive (IC50 ≤ 10.5 nM) and resistant (IC50 > 10.5 nM). In addition, an IC50 value greater than the upper limit of the 95% CI of the median defined from the sensitive isolate group, i.e., IC50 > 2.8 nM, as previously applied [27], was considered to indicate declining sensitivity to artesunate.
Determination of pfmdr1 copy number by SYBR Green I quantitative real-time PCR (qRT-PCR)

Investigation of pfmdr1 copy number was performed in a total of 120 P. falciparum isolates. Sixty-two samples were obtained from patients with sensitive response during the 42-day follow-up period. Twenty-nine paired samples (58 isolates) were obtained from patients with recrudescence, before treatment and at the time of parasite re-appearance. Genomic DNA was extracted from all samples using a modified Chelex resin technique [18]. SYBR Green I qRT-PCR was performed using an iCycler™ real-time PCR machine (Bio-Rad, California, USA) according to the method described by Ferreira and colleagues, with modification [28]. P. falciparum 3D7 (one gene copy) and Dd2 (four gene copies) were used as positive control clones for pfmdr1 copy number analysis (provided by Professor Steven A Ward, Liverpool School of Tropical Medicine, UK). Distilled water was used as the negative control.
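Relative copy number from qRT-PCR data is conventionally obtained with the 2^(−ΔΔCt) method against a single-copy reference gene, with the single-copy 3D7 clone as calibrator. The sketch below is a generic illustration; the Ct values (and the choice of reference gene) are assumptions, not the study's data:

```python
def pfmdr1_copy_number(ct_target_sample, ct_ref_sample, ct_target_3d7, ct_ref_3d7):
    """Relative pfmdr1 copy number by the 2^-ddCt method, with 3D7 (one copy)
    as calibrator; Dd2 (four copies) serves as a positive control."""
    d_ct_sample = ct_target_sample - ct_ref_sample
    d_ct_calib = ct_target_3d7 - ct_ref_3d7
    return 2.0 ** -(d_ct_sample - d_ct_calib)

# A sample whose pfmdr1 amplifies two cycles earlier (relative to the
# reference gene) than in 3D7 carries ~4 copies, as Dd2 does:
print(pfmdr1_copy_number(22.0, 24.0, 24.0, 24.0))   # -> 4.0
```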
Data analysis
A total of 38 patients with acute uncomplicated P. falciparum malaria were included in the analysis. Identification of parasite isolates with intrinsic resistance/reduced sensitivity to mefloquine and/or artesunate was analysed in 17 of 29 cases with complete information on clinico-parasitological response, drug concentrations, and in vitro sensitivity. Analysis of the contribution of these three factors in the group with sensitive response was performed in 21 of 62 cases. Quantitative variables are summarized as median (95% confidence interval: 95% CI) values, and qualitative variables are presented as number (%) values. Three criteria were applied for identification of P. falciparum cases with intrinsic resistance to mefloquine and/or artesunate: (i) PCR-confirmed recrudescence during the 42-day follow-up with delayed parasite slope half-life; (ii) adequacy of mefloquine and/or dihydroartemisinin systemic drug exposure (AUC); and (iii) in vitro parasite resistance/reduced sensitivity to mefloquine and/or artesunate as defined by the above criteria.
The upper limit of the 95% CI of the parasite slope half-life in the group with sensitive response (2.99 h) was used as the cut-off for delayed parasite slope half-life. The lower limit of the 95% CI of the area under the whole blood mefloquine concentration-time curve from day 0 to day 7 (AUC 0-7 days) in patients with sensitive response (8.48 mg.day/ml) was used as the criterion for adequacy of mefloquine systemic drug exposure. The lower limit of the 95% CI of the area under the plasma dihydroartemisinin concentration-time curve from hour 0 to hour 24 (AUC 0-24 h) in patients with sensitive response (462 ng.h/ml) was used as the criterion for adequacy of dihydroartemisinin systemic drug exposure. AUC was calculated using the trapezoidal rule. The association between mefloquine and dihydroartemisinin systemic exposure (AUC 0-7 days and AUC 0-24 h) and treatment response was assessed using Fisher's exact test.
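The trapezoidal AUC is a direct numerical integration of the sampled concentration-time profile; the values below are invented for illustration and are not patient data:

```python
import numpy as np

def auc_trapezoid(times, conc):
    """Linear trapezoidal rule: sum of 0.5*(C_i + C_{i+1})*(t_{i+1} - t_i)."""
    return float(np.trapz(np.asarray(conc, float), np.asarray(times, float)))

days = [0, 1, 2, 3, 7]                   # whole blood mefloquine sampling days
mefloquine = [0.0, 1.9, 2.6, 2.2, 1.1]   # illustrative concentrations
print(auc_trapezoid(days, mefloquine))   # compare against the AUC(0-7 days) cut-off
```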
Analysis of the correlation between the IC50 values of mefloquine and artesunate, between the IC50 values of both drugs and pfmdr1 copy number, as well as between pfmdr1 copy number and PCT, was performed using Spearman's (rho) correlation test at a statistical significance level of α = 0.05 (SPSS version 15; SPSS, Chicago, Illinois, USA).
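The same rank correlation is available in SciPy; a brief sketch with made-up IC50 values, not the study's data:

```python
from scipy.stats import spearmanr

# Spearman's rho between paired IC50 values (nM); illustrative numbers.
ic50_mefloquine = [12.0, 25.3, 30.1, 8.4, 41.2, 19.9]
ic50_artesunate = [1.1, 2.4, 3.0, 0.8, 4.2, 1.9]

rho, p_value = spearmanr(ic50_mefloquine, ic50_artesunate)
print(f"rho = {rho:.3f}, p = {p_value:.4f}")
```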
Results
The IC50 values of artesunate in all isolates collected before treatment ranged from 0.3 to 6.1 nM, with a median (95% CI) IC50 of 1.9 (1.6-2.4) nM. Based on the criteria defined by Pradines and colleagues [26], all were considered sensitive to artesunate. However, when the upper limit of the 95% CI of IC50 in the sensitive group was applied as the cut-off criterion, declined sensitivity to artesunate was observed in 22 (36.7%) isolates. Note, however, the relatively wide variation of IC50 values (0.3 to 6.1 nM) across isolates.
A relatively strong, significant positive correlation was observed between the IC50 values of mefloquine and artesunate (ρ = +0.701; p < 0.001).
Identification of resistance of Plasmodium falciparum isolates to artesunate-mefloquine combination therapy and pfmdr1 copy number as a candidate molecular marker of resistance
Identification of resistance of P. falciparum isolates to artesunate-mefloquine combination therapy was analysed in 17 out of 29 cases with confirmed recrudescence and complete information on clinico-parasitological response, drug concentrations, and in vitro sensitivity (Table 1). Based on the upper limit of the 95% CI of the parasite slope half-life in patients with sensitive response (2.99 h), all of the patients with recrudescence had a delayed parasite clearance rate.
Based on the previously defined criteria, a clinical recrudescence response with in vitro resistance and/or reduced intrinsic parasitocidal activity of mefloquine and/or artesunate, without pharmacokinetic or other host-related factors, was confirmed in six cases: two (Nos 13 and 14) with resistance to mefloquine alone, one (No 16) with reduced sensitivity to artesunate alone, and three (Nos 11, 12 and 15) with resistance to mefloquine combined with reduced sensitivity to artesunate. A clinical recrudescence response with in vitro resistance and/or reduced intrinsic parasitocidal activity of mefloquine/artesunate, together with a contribution of pharmacokinetic factors of artesunate and/or mefloquine, was confirmed in seven cases: two (Nos 9 and 10) with resistance to mefloquine alone, and five (Nos 1, 2, 6, 7 and 8) with resistance to mefloquine combined with reduced sensitivity to artesunate. A pharmacokinetic factor alone contributed to recrudescence in three cases (Nos 3, 4, and 5), all of which had inadequate mefloquine AUC0-7 days. Other host-related factors contributed to recrudescence in one case (No 17) (Table 1).
Pfmdr1 copy number data were available in all of the 17 of 29 PCR-confirmed recrudescence cases with complete information on clinico-parasitological response, systemic drug exposure during the acute phase, and in vitro parasite sensitivity. Twelve of 17 (70.6%) isolates in the group with recrudescence response carried >1 pfmdr1 copy. An increase in pfmdr1 copy number was associated with resistance/reduced sensitivity to mefloquine alone and to artesunate together with mefloquine in 11 and four cases, respectively (Table 1).
Analysis of contribution of in vitro parasite sensitivity and systemic drug exposure and relationship with pfmdr1 copy number in patients with sensitive response
Analysis of the contribution of in vitro parasite sensitivity and systemic drug exposure, including the relationship with pfmdr1 copy number, in the group with sensitive response was evaluable in 21 of 62 cases (Table 2). Pfmdr1 copy number data were available in all of the 21 cases with complete information on clinico-parasitological response, drug concentrations and in vitro parasite sensitivity. Fifteen of 21 (71.4%) carried only one pfmdr1 copy (Table 2).
Association between treatment response and systemic drug exposure
No significant association was found between mefloquine and dihydroartemisinin exposure (AUC0-7 days and AUC0-24 h) and treatment response (p = 0.137 and 0.583, respectively).
Discussion
Accumulating evidence suggests a decline in the efficacy, and some degree of resistance, of P. falciparum in the Greater Mekong Subregion (GMS) to artemisinins. Early evidence came from western Cambodia and the Thai-Cambodian border in patients following treatment with either artesunate monotherapy or artesunate-mefloquine [2,3,9,10]. Although results of the containment project in seven provinces of Thailand bordering Cambodia (Buriram, Chantaburi, Sakaew, Srisaket, Surin, and Trat) during 2009-2011, in a total of 1,709 P. falciparum-positive cases, suggest that the therapeutic efficacy of artesunate-mefloquine remains at an acceptable level with a cure rate of greater than 90%, continuous monitoring of P. falciparum resistance in both border areas is critical [29]. With regard to the Thai-Myanmar border, until recently [11,12], there was no clear evidence of a significant reduction in artemisinin efficacy at either the clinical or the in vitro sensitivity level. A longitudinal investigation in a total of 3,202 patients during 2001-2010 from malaria clinics of the Shoklo Malaria Research Unit, Tak Province, revealed that genetically determined artemisinin resistance in P. falciparum may have emerged along the Thai-Myanmar border at least eight years ago and has since markedly increased [12]. The clinical efficacy of artesunate-mefloquine combination therapy is thus beginning to decline along the Thai-Myanmar border, and resistance is no longer confined to western Cambodia and areas along the Thai-Cambodian border. Due to the limitations of the study design, however, it was not possible to attribute treatment failures in these studies to resistance or to host-related factors (e.g., pharmacokinetics). Furthermore, if resistance actually occurred, it was unclear whether this was due to intrinsic parasite resistance to artesunate alone, mefloquine alone, or both, because of the pre-existing background of mefloquine resistance in these areas. In order to exclude the contribution of host and confounding factors from the partner drug mefloquine, a series of investigations with artesunate monotherapy (2-6 mg/kg body weight/day for seven days) was performed during 2006-2008 using stringent criteria for defining artemisinin resistance with an integrated in vivo-in vitro approach [4][5][6][7][8].
The present study was designed to identify treatment failure cases due to intrinsic parasite factors for each component of the three-day regimen. Resistance or a decline in parasite susceptibility was confirmed based on integrated information from clinico-parasitological assessment together with in vitro sensitivity (intrinsic parasite resistance) and systemic drug exposure (pharmacokinetic factor) in 17 out of 29 patients with recrudescence (LPF) following treatment with a three-day combination regimen of artesunate-mefloquine [11]. All had a significant delay in parasite clearance rate (slope half-life) compared with the sensitive cases. Although delay in parasite clearance is influenced by host-related factors, it has been proposed as a more sensitive marker of reduced susceptibility to the artemisinin component of the ACT than recrudescence rate [30]. The main effect of artemisinins is proposed to be on the slope of the log-linear phase of parasite clearance and thus on the slope half-life [15]. Resistance of the P. falciparum isolates to mefloquine or artesunate was defined according to in vitro sensitivity criteria. Existing data on the relationship between in vitro sensitivity to artemisinins and clinical response are controversial: although a lack of significant correlation was found in some studies [4,31], good correlation was observed in most studies [2,3,9,26,32,33]. Adequacy of mefloquine and dihydroartemisinin systemic drug exposure was defined based on the lower limits of the 95% CI of the median AUC0-7 days and AUC0-24 h, respectively, in the sensitive group. Based on these defined criteria, the results suggest that a low-level decline in sensitivity of P. falciparum to artesunate (in terms of a small increase in the IC50 of artesunate and the number of identified cases with reduced sensitivity to artesunate) exists in this area on a background of pre-existing mefloquine resistance, and that parasite factors in conjunction with host-related factors contributed significantly to the high failure rate in this group of patients. There was only one (out of 17) confirmed case with reduced sensitivity to artesunate alone, while there were three cases with reduced sensitivity/resistance to both artesunate and mefloquine. A pharmacokinetic factor contributed to about 58.8% (10 of 17 cases) of the total recrudescence cases. Inadequacy of mefloquine and dihydroartemisinin systemic drug exposure was observed in five and five cases, respectively. However, there was no significant difference in the systemic exposure to either mefloquine or dihydroartemisinin (AUC0-7 days and AUC0-24 h) in patients with treatment failure compared with those with sensitive response.
The host-related factor contributing to treatment failure in one case could not be definitively identified. It is noted that the in vitro sensitivity of the only isolate with identified resistance/reduced sensitivity to artesunate alone (No 16) was markedly low compared with the others (IC50 5.2 nM). Furthermore, in one (No 11) of the three cases with in vitro resistance/reduced sensitivity to both mefloquine and artesunate, a mefloquine concentration as high as 1,250 ng/ml was detected on the day of recrudescence (day 17). It is of note that even this high level was inadequate to completely eliminate residual parasites on a background of resistance/reduced sensitivity to both mefloquine and artesunate. In the other two cases with a contribution of the mefloquine pharmacokinetic factor (Nos 1 and 2), variable whole blood mefloquine concentrations of 610 and 100 ng/ml were observed on the days of recrudescence. The relatively low drug concentrations in some of the recrudescence cases could be due to variability in the pharmacokinetics of mefloquine and artesunate/dihydroartemisinin, and these concentrations were no longer adequate once the level of resistance to either mefloquine or artesunate was aggravated. Systemic exposure to both drugs during the acute phase of infection was used as the criterion to define adequacy of drug levels. Whole blood mefloquine concentration on day 1 of treatment has been reported to be an important determinant of successful treatment [34], but no threshold levels of artesunate/dihydroartemisinin have been defined for the treatment of falciparum malaria. Sensitivity of P. falciparum isolates in this area to mefloquine was still at the resistance level (43.3% of the isolates with recrudescence) after a certain period of improvement [27]. Mefloquine was used as monotherapy for acute uncomplicated falciparum malaria in this area long before the introduction of the combination regimen, and thus mefloquine resistance had already reached a level too high to protect against the development and spread of artesunate resistance. The sensitivity of P. falciparum to artemisinins would be further compromised by intensifying resistance to mefloquine. A decline in the in vitro sensitivity of parasites in this, as well as other, areas to artesunate has been demonstrated [4,7,35], and a ten-fold decrease in in vitro artemisinin sensitivity has been observed over a 10-year period in north-western Thailand [36]. In a previous study, decreased in vitro susceptibility to dihydroartemisinin (IC50 21.2 nM) and artesunate (IC50 16.3 nM) was reported in a patient returning from the south of Laos and the north of Thailand [26]. In view of the short half-life of oral artesunate, on the other hand, the drug is expected to exert little drug pressure, provided the treatment generally results in parasitological cure. Compared with western Cambodia and areas along the Thai-Cambodian border, the intensity of malaria transmission is relatively low in the north-western border areas of Thailand. This would lead to lower selective drug pressure and slower emergence of drug resistance. Although artemisinin resistance is starting to gradually emerge [37,38], relatively good parasitological responses to artemisinins are observed in this border area even after almost 20 years of intensive use [31].
Analysis of the contribution of in vitro parasite sensitivity and drug concentrations, and the relationship with pfmdr1 copy number, in the group with sensitive response was performed in 21 of 62 cases. The results indicate a contribution of both parasite and pharmacokinetic factors to treatment response, but of different magnitudes. A pharmacokinetic factor contributed to 71.4% (15 of 21 cases) of the variation in mefloquine and/or dihydroartemisinin concentrations in this group of patients, of which 35.3% (six of 17 cases) were due to variable mefloquine pharmacokinetics (alone or together with dihydroartemisinin) and 64.7% (11 of 17 cases) were due to variable artesunate/dihydroartemisinin pharmacokinetics (alone or together with mefloquine). In vitro resistance/reduced sensitivity to mefloquine and artesunate was found in 36.7% (nine of 17 cases) and 43.3% (12 of 17 cases) of all cases, respectively. In the group with recrudescence response, a pharmacokinetic factor contributed to 58.8% (ten of 17 cases) of the variation in mefloquine and/or dihydroartemisinin concentrations; 50% (five of ten cases) were due to variable mefloquine pharmacokinetics (alone or together with dihydroartemisinin) and 50% (five of ten cases) were due to variable artesunate/dihydroartemisinin pharmacokinetics (alone or together with mefloquine). Resistance/reduced sensitivity to mefloquine and artesunate was found in 23.8% (five of 21 cases) and 42.9% (nine of 21 cases) of all cases, respectively. Altogether, these findings may suggest that the pharmacokinetic variability of mefloquine influences treatment response more than that of artesunate/dihydroartemisinin, on a background of mefloquine resistance together with decreasing susceptibility of the parasite to artesunate. Although statistical significance could not be achieved due to marked variability, the IC50 values of both mefloquine (medians of 49.9 vs 21.1 nM) and artesunate (medians of 2.8 vs 1.8 nM) tended to be higher in isolates obtained from patients with recrudescence response. In the two cases with sensitive response (Nos 18 and 20), the sensitivity of the parasite to both artesunate and mefloquine was relatively high, and even the low systemic exposure to mefloquine was still adequate to completely eliminate all the parasites. In the other three cases (Nos 31, 32 and 33) with in vitro resistance/reduced sensitivity to mefloquine and artesunate, radical cure could be obtained as long as adequate systemic exposure to both mefloquine and dihydroartemisinin was achieved.
A consensus on the molecular and cellular mechanisms of artemisinin resistance has not emerged. Despite continuous efforts to uncover definitive molecular markers of resistance to artemisinins in P. falciparum, no valid marker has been identified and confirmed, which precludes efficient monitoring of emerging and spreading resistance. Results from various molecular studies have varied, probably reflecting the multigenic nature of artemisinin resistance. Among these, polymorphisms of PfMDR1 (an ATP-binding cassette (ABC) transporter residing on the digestive vacuolar membrane of the parasite, where it transports solutes, including anti-malarial drugs, into the digestive vacuole), encoded by the pfmdr1 gene, have received the most attention. Several mutations (N86Y, Y184F, S1034C, N1042D and D1246Y) and copy number variations of pfmdr1 in field isolates collected from several endemic areas are associated with altered sensitivity to artemisinins and their combination partner mefloquine [39][40][41][42]. Amplification of pfmdr1 copy number has been proposed as a key determinant of both in vivo and in vitro resistance to both artemisinins and mefloquine in Cambodia and areas along the Thai-Cambodian and Thai-Myanmar borders [26,35,[43][44][45][46][47]. Nevertheless, some clinical trials could not confirm the link between PfMDR1 polymorphism/copy number variation and artemisinin resistance [4][5][6][7][8]. There also appeared to be no association between mutation or amplification of pfatp6, the sarco/endoplasmic reticulum Ca2+-ATPase, and artemisinin susceptibility in P. falciparum isolates in Thailand [38]. In this study, an increased copy number of pfmdr1 was shown to be associated with treatment failure and a decline in the in vitro susceptibility of P. falciparum isolates in this area following artesunate-mefloquine combination therapy, with approximately 40% of the isolates carrying between two and six pfmdr1 copies. Pfmdr1 copy number correlated well with clinical treatment response in both the recrudescence and sensitive groups: approximately 70.6% (12 of 17 cases) of the recrudescence group carried a gene copy number >1 (two to six), while 71.4% (15 of 21 cases) of the sensitive group carried one copy. Resistance to ACT may evolve even when the two drugs within the combination are taken simultaneously, and amplification of the pfmdr1 gene may partly contribute to this phenotype.
Conclusions
Despite the evidence of a low-level decline in the sensitivity of P. falciparum isolates to artemisinins in areas along the Thai-Myanmar border, ACT is expected to remain the key anti-malarial drug for treatment of multidrug-resistant P. falciparum. Continued monitoring and active surveillance of the clinical efficacy of ACT, including identification of truly artemisinin-resistant parasites, is required for appropriate implementation of malaria control policy in this area. | 2017-06-22T18:27:03.933Z | 2013-07-30T00:00:00.000 | {
"year": 2013,
"sha1": "71e497fd5157aa2427c3a1f8a2134977360ddf6b",
"oa_license": "CCBY",
"oa_url": "https://malariajournal.biomedcentral.com/track/pdf/10.1186/1475-2875-12-263",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "71e497fd5157aa2427c3a1f8a2134977360ddf6b",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
229371119 | pes2o/s2orc | v3-fos-license | Global Context Networks
The Non-Local Network (NLNet) presents a pioneering approach for capturing long-range dependencies within an image, via aggregating query-specific global context to each query position. However, through a rigorous empirical analysis, we have found that the global contexts modeled by the non-local network are almost the same for different query positions. In this paper, we take advantage of this finding to create a simplified network based on a query-independent formulation, which maintains the accuracy of NLNet but with significantly less computation. We further replace the one-layer transformation function of the non-local block with a two-layer bottleneck, which reduces the parameter count considerably. The resulting network element, called the global context (GC) block, effectively models global context in a lightweight manner, allowing it to be applied at multiple layers of a backbone network to form a global context network (GCNet). Experiments show that GCNet generally outperforms NLNet on major benchmarks for various recognition tasks. The code and network configurations are available at https://github.com/xvjiarui/GCNet.
INTRODUCTION
Long-range dependencies among pixels in an image are essential to capture for global understanding of a visual scene. This dependency modeling is proven to benefit a wide range of recognition tasks, such as image classification [2], object detection and segmentation [3], [4], and video action recognition [5]. In convolutional neural networks, long-range dependencies are mainly modeled by deep stacking of convolution layers, where each layer models pixel relationships within a local neighborhood. However, direct repetition of convolution layers is computationally inefficient and hard to optimize [5], due in part to difficulties in delivering messages between distant positions.
To address this issue, the non-local network (NLNet) [5] utilizes a layer to model long-range dependencies, via a self-attention mechanism [6]. For each query position, the non-local network first computes pairwise relations between the query position and all other positions to form an attention map, and then aggregates the features of all positions by a weighted sum with the weights defined by the attention map. The aggregated features are finally added to the features of each query position to form the output.
The query-specific attention weights in the non-local network are expected to reflect the importance of the corresponding positions to the query position. Visualizing these weights would help to better understand their behavior, but such analysis was largely missing in the original paper. In an analysis that we conducted, a surprising observation can be made. As shown in Figure 1, we found that the attention maps for different query positions are almost the same, indicating that the learnt dependency is basically query-independent. This observation is further verified by the statistical analysis in Tables 1, 2 and 3.

Based on this observation, we propose a simplification of the non-local block in which a query-independent attention map is explicitly used for all query positions. The output is then formed by the same aggregation of features using this attention map as weights. This simplified block requires significantly less computation than the original non-local block, but exhibits almost no decrease in accuracy on several important visual recognition tasks. The block design follows a general three-step framework: (a) a context modeling module which aggregates the features of all positions together to form a global context feature; (b) a feature transform module to capture the channel-wise interdependencies; and (c) a fusion module to merge the global context feature into the features of all positions. We further significantly reduce the parameter count by replacing the one-layer transformation function of the non-local block with a two-layer bottleneck, to form a new unit that we call the global context (GC) block.
Because of the lightweight computation of the GC block, it can be applied to all residual blocks in the ResNet architecture, in contrast to the original non-local block, which is usually applied after just one or a few layers due to its heavy processing. We refer to this network as the global context network (GCNet). On COCO object detection/instance segmentation, GCNet outperforms NLNet by 1.9% on AP bbox and 1.5% on AP mask with just a 0.07% relative increase in FLOPs. In addition, GCNet yields significant performance gains on four general visual recognition tasks: object detection/segmentation on COCO (2.7%↑ on AP bbox and 2.4%↑ on AP mask over Mask R-CNN with FPN and ResNet-50 as backbone [7]), semantic segmentation on Cityscapes (3.2%↑ on mIoU over ResNet-101 as backbone with dilated convolutions), image classification on ImageNet (0.8%↑ on top-1 accuracy over ResNet-50 [8]), and action recognition on Kinetics (1.1%↑ on top-1 accuracy over the ResNet-50 slow-only baseline [9]), with less than a 0.26% increase in computation cost.
The proposed global context network is a new general-purpose architecture. It introduces a novel global context block which injects long-range information into existing architectures, showing general improvements on a wide range of vision tasks, such as object detection, instance segmentation, image classification and action recognition.
Long-range dependency modeling
While existing deep architectures mainly work by stacking layers which operate locally, there are also methods that directly model long-range dependency using a single layer. Such methods can be categorized into two classes: pairwise based, and context fusion based.
Most pairwise methods are based on the self-attention mechanism, and the non-local network (NLNet) is a pioneering work [5] for pixel-pixel pairwise relation modeling that has proven beneficial for several visual recognition tasks, such as object detection and action recognition. There are also extensions of non-local networks proposed to benefit specific tasks. Object Context Networks (OCNet) [41] model pixel-wise relationships in the same object category via self-attention mechanisms and also capture context at multiple scales. Dual Attention Networks (DANet) [42] use self-attention mechanisms to model pixel-pixel relationships and channel-channel relationships to improve feature representations. Criss-Cross Networks (CCNet) [43] accelerate NLNet via stacking two criss-cross blocks, which can enlarge the dependency range to the whole feature map with low computational cost.
While it is widely believed that NLNet benefits visual recognition due to pairwise relation modeling, this paper empirically shows that this belief is inaccurate. In fact, for several important visual recognition tasks, such as ImageNet image classification, COCO object detection and Kinetics action recognition, we observe that NLNet degenerates to learning the same global context vector for different pixels, and thus the effectiveness of NLNet can mainly be ascribed to global context modeling rather than pairwise relation modeling. For some other visual recognition tasks, such as semantic segmentation, although we observe that some kind of pairwise relation is learnt, the accuracy improvement is still mostly ascribed to its global context modeling ability. Based on this observation, we propose a simplification of the non-local block, which explicitly learns global context rather than pairwise relations. The resulting block, called the global context (GC) block, consumes significantly less computation than the non-local block but performs with the same accuracy on several important tasks. Note that while the proposed GC block exploits this degeneration issue to explicitly simplify the non-local block, in a follow-up to this paper, our work on disentangled non-local networks (DNL) [44] on the contrary attempts to alleviate the degeneration problem by a disentangled design that allows learning of different contexts for different pixels while preserving the shared global context.

Different from pairwise methods, context fusion methods operate by strengthening the feature of each position with a context feature that aggregates information from all pixels, including those at long range. For example, SENet [2] fuses the two kinds of features by adaptively rescaling different channels. GENet [45] uses local patches to compute position-adaptive context features. PSANet [46] proposes to connect each position on the feature map to all the others through a self-adaptively learned attention mask, and aggregates the features of other positions via rescaling. CBAM [47] recalibrates the importance of both different spatial positions and channels, also via rescaling. All these methods adopt rescaling for feature aggregation, which may be of limited effectiveness for global context modeling.
The proposed GCNet is also a context fusion method. But by using a different context feature computation method (attention pooling) and a different fusion method (addition), GCNet performs generally better than the widely used SENet. Noting that the context feature computation and fusion methods used in GCNet are inherited from NLNet, the proposed GCNet can also be seen as a product of connecting two representative long-range dependency modeling methods, NLNet and SENet, while making good use of their respective strengths: GCNet matches NLNet in context modeling and information fusion, while being as lightweight as SENet.
In natural language processing, Transformer [6], which applies a self-attention mechanism to model long-range dependencies between words, is a milestone work for machine translation. Graph Attention Networks (GAT) [56] improve graph convolution with self-attention mechanisms that operate on graph-structured data, producing remarkable gains over baseline graph convolution methods. Self-attention Generative Adversarial Networks (SAGAN) [57] generate high-resolution details as a function of not only spatially local points but also distant points, via self-attention mechanisms that model long-range dependency.
For visual recognition, aside from pixel relation modeling, the attention mechanism is also applied for object-object/object-pixel relation modeling [3], [61], which is proven effective in object detection.
The presented analysis and proposed GCNet in this paper are basically about the general self-attention mechanism, with experiments and instantiations mainly targeting the problem of pixel-pixel relation modeling. Such an analysis and global context modeling approach could be extended to other self-attention applications such as object-object/object-pixel relation modeling, natural language processing, and graph social networks. For these applications, there are questions of whether the pairwise relations can be well learnt by the self-attention mechanism and how the global context modeling approach can effectively contribute. Both of these questions on broader applications are promising directions for further study.
ANALYSIS OF NON-LOCAL NETWORKS
In this section, we first review the design of the non-local block [5]. While in-depth studies have been rare on what a non-local block learns and what makes it effective, we conduct such a study both qualitatively and statistically. Qualitatively, we visualize the attention maps across different query positions generated by a widely-used instantiation of the non-local block. Statistically, we compute the average cosine distances between different feature maps (including input, attention map, output and so on) inside the non-local block, to delve deep into the non-local block design. This in-depth study brings a new understanding of the non-local block and may inspire new approaches as in the next section.
Revisiting the Non-local Block
The basic non-local block [5] aims at strengthening the features of the query position via aggregating information from other positions. We denote $\mathbf{x} = \{x_i\}_{i=1}^{N_p}$ as the feature map of one input instance (e.g., an image or video), where $N_p$ is the number of positions in the feature map (e.g., $N_p = H \cdot W$ for an image, $N_p = H \cdot W \cdot T$ for a video). $\mathbf{x}$ and $\mathbf{z}$ denote the input and output of the non-local block, respectively, which have the same dimensions. The non-local block is formulated as

$$z_i = x_i + W_z \sum_{j=1}^{N_p} \frac{f(x_i, x_j)}{\mathcal{C}(\mathbf{x})} \, (W_v \cdot x_j), \quad (1)$$

where $i$ is the index of query positions, and $j$ enumerates all possible positions. $f(x_i, x_j)$ denotes the relationship between positions $i$ and $j$, and has a normalization factor $\mathcal{C}(\mathbf{x})$. $W_z$ and $W_v$ denote linear transform matrices (e.g., 1x1 convolution). For simplification, we denote

$$\omega_{ij} = \frac{f(x_i, x_j)}{\mathcal{C}(\mathbf{x})}$$

as the normalized pairwise relationship between positions $i$ and $j$.
In [5], four instantiations of the non-local block are provided by defining $\omega_{ij}$ as different functions:

• Gaussian. $f$ in $\omega_{ij}$ is the Gaussian function, defined as $\omega_{ij} = \frac{e^{x_i^\top x_j}}{\sum_m e^{x_i^\top x_m}}$.

• Embedded Gaussian. A simple extension of Gaussian that uses an embedding space to compute similarity, defined as $\omega_{ij} = \frac{e^{(W_q x_i)^\top (W_k x_j)}}{\sum_m e^{(W_q x_i)^\top (W_k x_m)}}$.

(The other two instantiations in [5], Dot product and Concat, replace the softmax normalization with $\mathcal{C}(\mathbf{x}) = N_p$.)

We illustrate the architecture of the two most widely-used instantiations, Embedded Gaussian and Gaussian, in Figure 3(a) and 3(b). The non-local block can be regarded as a query-specific global context modeling block, which strengthens the feature at a query position with a query-specific global context vector, computed by a weighted sum over all positions. The weights are determined by the similarity between two positions, and the weights over all positions form an attention map for one query position. The time and space complexity of the non-local block are heavy in that they are both quadratic in the number of positions $N_p$. Likely as a result, it is applied at only a few places in a network architecture, e.g., as one block inserted into the Mask R-CNN framework.
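For concreteness, a minimal PyTorch sketch of the Embedded Gaussian non-local block is given below. It follows the formulation above; the module and variable names are ours, and details such as initialization and the sub-sampling tricks of [5] are omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NonLocalBlock(nn.Module):
    """Embedded Gaussian non-local block (simplified sketch)."""
    def __init__(self, channels, reduction=2):
        super().__init__()
        inter = channels // reduction
        self.w_q = nn.Conv2d(channels, inter, 1)   # query transform W_q
        self.w_k = nn.Conv2d(channels, inter, 1)   # key transform W_k
        self.w_v = nn.Conv2d(channels, inter, 1)   # value transform W_v
        self.w_z = nn.Conv2d(inter, channels, 1)   # output transform W_z

    def forward(self, x):
        n, c, h, w = x.shape
        q = self.w_q(x).flatten(2).transpose(1, 2)   # [N, HW, C']
        k = self.w_k(x).flatten(2)                   # [N, C', HW]
        v = self.w_v(x).flatten(2).transpose(1, 2)   # [N, HW, C']
        # omega_ij: one attention map per query position (softmax over j)
        attn = F.softmax(q @ k, dim=-1)              # [N, HW, HW]
        ctx = (attn @ v).transpose(1, 2).reshape(n, -1, h, w)
        return x + self.w_z(ctx)                     # residual connection

x = torch.randn(2, 64, 16, 16)
print(NonLocalBlock(64)(x).shape)  # torch.Size([2, 64, 16, 16])
```

Note the quadratic [HW, HW] attention matrix, which is the source of the heavy time and space complexity discussed above.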
The non-local block [5] is proven to benefit many visual recognition tasks, such as object detection/instance segmentation and action recognition. It is believed that such effectiveness arises from effective learning of pairwise pixel relations [5]. Nevertheless, direct evidence and an in-depth study of this have been lacking.

In the following, we analyze what is truly learnt in non-local networks, both qualitatively and statistically. Such a study sheds light on the behavior of non-local networks.
Visualization
To intuitively understand the behavior of the non-local block, we first visualize the attention maps for different query positions. As different instantiations achieve comparable performance [5], here we only visualize the most widely-used version, Embedded Gaussian, which has the same formulation as the block proposed in [6]. Since attention maps in videos are hard to visualize and understand, we only show visualizations on object detection/instance segmentation, which takes images as input. Following the standard setting of non-local networks for object detection [5], we conduct experiments on Mask R-CNN with FPN and ResNet-50, and only add one non-local block right before the last residual block of res4.

[Fig. 3: Two instantiations of the non-local block: Embedded Gaussian and Gaussian. The feature maps are shown by their dimensions, e.g., CxHxW. ⊗ denotes matrix multiplication, and ⊕ is broadcast element-wise addition. For two matrices with different dimensions, broadcast operations first broadcast features in each dimension to match the dimensions of the two matrices. The feature maps marked in red (e.g., '5 att') are statistically analyzed in Tables 1, 3 and 2.]
In Figure 2, we randomly select six images from the COCO dataset, and visualize three different query positions (red points) and their query-specific attention maps (heatmaps) for each image. We surprisingly find that for different query positions, their attention maps are almost the same. This suggests that it may be redundant for the non-local block to compute different attention maps for different positions in object detection, as the non-local block may not learn pixel-pixel relationships in this task but rather just global context. This observation motivates us to delve deep into the design of non-local block, to understand its real behavior.
[Table 1: per dataset and method, AP bbox, AP mask, and average cosine distances for 'input', 'output', and 'att'.]
Statistical Analysis
To more rigorously verify the phenomenon observed in the visualization, we statistically compare the differences (cosine distances) between the input features and the output features of different positions. Denote $v_i$ as the feature vector for position $i$. The average distance measure is defined as

$$\text{avg\_dist} = \frac{1}{N_p (N_p - 1)} \sum_{i \neq j} \text{dist}(v_i, v_j),$$

where $\text{dist}(v_i, v_j) = (1 - \cos\langle v_i, v_j\rangle)/2$ is the cosine distance between $v_i$ and $v_j$.

Different Non-local Instantiations/Tasks. The average cosine distances are computed between input features, attention maps, and output features of different positions, with four instantiations of the non-local block on three standard tasks: object detection on COCO, action recognition on Kinetics, and image classification on ImageNet. In detail, we compute the cosine distance between three kinds of vectors: the non-local block inputs ('input' in Table 1), the block outputs ('output' in Table 1), and the attention maps of query positions ('att' in Table 1).
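A small sketch of this statistic (our own helper, operating on a [positions, channels] feature matrix):

```python
import torch
import torch.nn.functional as F

def avg_cosine_distance(feats):
    """Average pairwise cosine distance over positions.

    feats: tensor of shape [Np, C], one feature vector per position.
    Returns a scalar in [0, 1]; values near 0 mean the vectors at
    different positions are almost identical.
    """
    v = F.normalize(feats, dim=1)          # unit-norm rows
    cos = v @ v.t()                        # [Np, Np] cosine similarities
    dist = (1.0 - cos) / 2.0               # cosine distance per pair
    n = feats.shape[0]
    off_diag = dist.sum() - dist.diag().sum()
    return off_diag / (n * (n - 1))        # mean over i != j

# Nearly identical position features give a tiny average distance:
f = torch.randn(1, 8).repeat(100, 1) + 1e-3 * torch.randn(100, 8)
print(avg_cosine_distance(f).item())
```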
Results with four instantiations of the non-local block on the standard tasks are shown in Table 1. First, the large cosine distances in the 'input' column show that the input features of the non-local block are discriminative across different positions. But the cosine distances in the 'output' column are at least one order of magnitude smaller than those in the 'input' column on COCO, Kinetics and ImageNet, indicating that the global context features modeled by the non-local block on these three tasks are almost the same for different query positions. The cosine distances of the attention maps ('att') are also very small for all instantiations on these three tasks, which again verifies the observation from the visualization.
To conclude, although a non-local block intends to compute a global context specific to each query position, the global context after training is actually independent of the query position. Hence, it may be redundant for the non-local block to compute different attention maps for different positions, allowing us to simplify the non-local block.

Insertion at Different Stages. It is widely accepted that the lower layers of deep networks contain low-level, less-semantic features such as local edges, and higher layers contain high-level features with more semantic information, such as parts and objects [62]. The non-local block may perform differently at different places in a deep network. To examine this, we have also done a statistical analysis across different stages with the most widely-used instantiation, Embedded Gaussian, on the four standard tasks.
For different tasks, the non-local block is applied at different positions. For example, in action recognition on Kinetics, the non-local blocks are inserted only in c4 and c5; hence we perform the experiments accordingly.
Results are presented in Table 2. Interestingly, we can see an obvious trend from lower layers to higher layers: the output features in higher layers are more query-dependent than those in the lower layers.

Fine-grained Analysis. To analyze the reason for this phenomenon, we have done a more fine-grained statistical analysis on the two most widely-adopted instantiations of the non-local block, Embedded Gaussian and Gaussian.
For Embedded Gaussian, we compute the average cosine distances between input features (input), features after the W k transform (key), features after the W q transform (query), different query features after inner product (prod), attention maps (att), and output features (output), which are marked in Figure 3 (a). For Gaussian, as marked in Figure 3 (b), we compute the average cosine distances between input features (input), different query features after inner product (prod), attention maps (att), and output features (output).
Results of the fine-grained statistical analysis are shown in Table 3. First, we look into the results on COCO, Kinetics and ImageNet. For Embedded Gaussian, although $W_q$ and $W_k$ are both 1x1 convolutions with the same input, the features after $W_q$ become very similar across queries, while the features after $W_k$ remain distinct. As a result, features after the inner-product computation become largely query-independent after training. For Gaussian, as this instantiation does not include the query and key transforms, the attention maps still appear query-dependent. But after attention pooling and the output transform, the differences between the output features are significantly reduced, and are almost one order of magnitude smaller than those of the input features.
In our understanding, the tasks drive the network components to learn the specific behavior that benefits the tasks most, and query-independence of the non-local block benefits three major tasks: object detection on COCO, action recognition on Kinetics, and image classification on ImageNet.
Exceptions. Although non-local networks do not learn pairwise relations on the above three important visual recognition tasks, we note that there are also tasks where non-local networks do learn pairwise relations, e.g., semantic segmentation on Cityscapes, as illustrated in Table 4. Table 12 also shows that NLNet can improve segmentation accuracy over the regular counterpart. The question is whether such improvements are due mainly to the learnt pairwise relations. Surprisingly, a simplified version of NLNet (denoted SNL, introduced in the next section) which models only global context also shows performance comparable to NLNet. This indicates that although the non-local block applied in semantic segmentation may learn pairwise relations, the accuracy improvement may be mostly ascribed to the modeling of global context.
METHOD
In the last section, both qualitative and statistical analysis indicate that non-local blocks tend to learn query-independent attention maps in many visual recognition tasks, instead of query-dependent context as implied by the formulation. This finding challenges the necessity of the query-dependent formulation in the original non-local block, and raises the question of whether explicit query-independent attention maps perform worse than the original query-dependent formulation. We answer this in the following subsections. We first present a simplified non-local formulation by explicitly making the attention maps query-independent in Section 4.1. We will show in experiments that this simplified formulation can significantly reduce computation yet maintain accuracy. Then in Section 4.2, we abstract this simplified non-local formulation into a general global context modeling framework, which interestingly also operates like the popular SE block [2]. Finally, in Section 4.3, we present our global context block, which is a new instantiation of the general framework by combining the strengths of the simplified non-local block and the SE block [2].
Simplifying the Non-local Block
As the widely-adopted Embedded Gaussian instantiation achieves representative performance on all three standard tasks, as shown in Table 1, we adopt Embedded Gaussian as the basic non-local block in the following sections. Based on the observation that the attention maps for different query positions are almost the same, we simplify the non-local block by computing a global (query-independent) attention map and sharing this global attention map among all query positions. Following the results in [3] that variants with and without $W_z$ achieve comparable performance, we omit $W_z$ in the simplified version. Our simplified non-local block is defined as

$$z_i = x_i + \sum_{j=1}^{N_p} \frac{e^{W_k x_j}}{\sum_{m=1}^{N_p} e^{W_k x_m}} \, (W_v \cdot x_j), \quad (2)$$

where $W_k$ and $W_v$ denote linear transformation matrices.
To further reduce the computational cost of this simplified block, we apply the distributive law to move $W_v$ outside of the attention pooling:

$$z_i = x_i + W_v \sum_{j=1}^{N_p} \frac{e^{W_k x_j}}{\sum_{m=1}^{N_p} e^{W_k x_m}} \, x_j. \quad (3)$$

This version of the simplified non-local block is illustrated in Figure 4(b). After moving $W_v$ outside of the attention pooling, the FLOPs of the 1x1 convolution $W_v$ are reduced from $\mathcal{O}(HWC^2)$ to $\mathcal{O}(C^2)$. Different from the traditional non-local block, the second term in Eqn. 3 is independent of the query position $i$, which means this term is shared across all query positions $i$. We thus directly model global context as a weighted sum of the features at all positions, and aggregate (add) the global context features to the features at each query position. In experiments, we directly replace the non-local (NL) block with our simplified non-local (SNL) block, and evaluate accuracy and computation cost on four tasks: object detection on COCO, semantic segmentation on Cityscapes, ImageNet classification, and action recognition on Kinetics, shown in Tables 5(a), 8(a), 12(a) and 10. As expected, the SNL block achieves performance comparable to (or slightly below) the NL block with significantly lower FLOPs.
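A sketch of this simplified block in PyTorch, mirroring Eqn. 3 (our own module; names are illustrative):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimplifiedNonLocal(nn.Module):
    """Simplified non-local (SNL) block: one shared attention map."""
    def __init__(self, channels):
        super().__init__()
        self.w_k = nn.Conv2d(channels, 1, 1)         # C -> 1 attention logits
        self.w_v = nn.Conv2d(channels, channels, 1)  # applied after pooling

    def forward(self, x):
        n, c, h, w = x.shape
        # Global (query-independent) attention weights over all positions.
        logits = self.w_k(x).flatten(2)              # [N, 1, HW]
        alpha = F.softmax(logits, dim=-1)            # shared attention map
        # Attention pooling: weighted sum of features -> [N, C, 1, 1].
        ctx = (x.flatten(2) * alpha).sum(-1).view(n, c, 1, 1)
        # Distributive law: apply w_v once to the pooled context.
        return x + self.w_v(ctx)                     # broadcast addition

x = torch.randn(2, 64, 16, 16)
print(SimplifiedNonLocal(64)(x).shape)
```

Because w_v now acts on a single pooled [N, C, 1, 1] context rather than on every position, its cost drops from O(HWC^2) to O(C^2), exactly as stated above.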
Global Context Modeling Framework
As shown in Fig. 4(b), the simplified non-local block can be abstracted into three parts: (a) global attention pooling, which adopts a 1x1 convolution W k and a softmax function to obtain the attention weights, and then performs attention pooling to obtain the global context features; (b) feature transform via a 1x1 convolution W v ; (c) feature aggregation, which employs addition to aggregate global context features to each position.
We regard this abstraction as a global context modeling framework, illustrated in Figure 4(a) and defined as

$$z_i = F\left(x_i, \; \delta\left(\sum_{j=1}^{N_p} \alpha_j x_j\right)\right), \quad (4)$$

where (a) $\sum_j \alpha_j x_j$ denotes the context modeling module, which groups the features of all positions together via weighted averaging with weights $\alpha_j$ to obtain the global context features (global attention pooling in the simplified NL (SNL) block); (b) $\delta(\cdot)$ denotes the feature transform to capture channel-wise dependencies (1x1 convolution in the SNL block); and (c) $F(\cdot, \cdot)$ denotes the fusion function to aggregate the global context features to the features of each position (broadcast element-wise addition in the SNL block).

Interestingly, the squeeze-excitation (SE) block proposed in [2] is also an instantiation of our proposed framework, consisting of: (a) global average pooling for global context modeling (set $\alpha_j = \frac{1}{N_p}$ in Eqn. 4), called the squeeze operation in the SE block; (b) a bottleneck transform module (let $\delta(\cdot)$ in Eqn. 4 be one 1x1 convolution, one ReLU, one 1x1 convolution and a sigmoid function, sequentially), to compute the importance of each channel, called the excitation operation in the SE block; and (c) a rescaling function for fusion (let $F(\cdot, \cdot)$ in Eqn. 4 be element-wise multiplication), to recalibrate the channel-wise features.
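To make the abstraction concrete, here is a sketch of the SE block written directly in the three-part form of Eqn. 4 (context modeling, transform, fusion); the reduction hyper-parameter is illustrative:

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-excitation block, phrased as the framework of Eqn. 4."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.transform = nn.Sequential(              # delta(.)
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        # (a) context modeling: alpha_j = 1/Np, i.e. global average pooling
        ctx = x.mean(dim=(2, 3), keepdim=True)       # [N, C, 1, 1]
        # (b) feature transform capturing channel interdependencies
        scale = self.transform(ctx)
        # (c) fusion F: element-wise multiplication (rescaling)
        return x * scale

x = torch.randn(2, 64, 16, 16)
print(SEBlock(64)(x).shape)
```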
Global Context Block
Here we propose a new instantiation of the global context modeling framework, named the global context (GC) block, which can effectively model long-range dependency as a simplified non-local block, and is lightweight for application to all layers with a small increase in FLOPs.
In the simplified non-local block, shown in Figure 4(b), the transform module has the largest number of parameters, coming from one 1x1 convolution with $C \cdot C$ parameters. When we add this SNL block to higher layers, e.g., res5, the number of parameters of this 1x1 convolution, $C \cdot C = 2048 \cdot 2048$, dominates the parameter count of the block. Hence, this 1x1 convolution is replaced by a bottleneck transform module, which significantly reduces the number of parameters from $C \cdot C$ to $2 \cdot C \cdot C / r$, where $r$ is the bottleneck ratio and $C/r$ denotes the hidden representation dimension of the bottleneck. With the default reduction ratio set to $r = 16$, the number of parameters of the transform module is reduced to 1/8 of that in the original SNL block. More results with different values of the bottleneck ratio $r$ are shown in Table 5(e).
As the two-layer bottleneck transformation increases the difficulty of optimization, we add layer normalization inside the bottleneck transformation (before ReLU) to ease optimization, as well as to act as a regularizer that can benefit generalization. As shown in Table 5(d), layer normalization can significantly enhance the performance of object detection and segmentation on COCO.
The detailed architecture of the global context (GC) block is illustrated in Figure 4(c) and formulated as

$$z_i = x_i + W_{v2} \, \mathrm{ReLU}\left(\mathrm{LN}\left(W_{v1} \sum_{j=1}^{N_p} \alpha_j x_j\right)\right), \quad (5)$$

where $\alpha_j = \frac{e^{W_k x_j}}{\sum_m e^{W_k x_m}}$ is the weight for global attention pooling, and $\delta(\cdot) = W_{v2}\,\mathrm{ReLU}(\mathrm{LN}(W_{v1}(\cdot)))$ denotes the bottleneck transform. Specifically, our GC block consists of: (a) global attention pooling for context modeling; (b) a bottleneck transform to capture channel-wise dependencies; and (c) broadcast element-wise addition for feature fusion.
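Putting the three parts together, a compact PyTorch sketch of the GC block follows. This is our own rendering of Eqn. 5, not the authors' released code; see https://github.com/xvjiarui/GCNet for the official implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GlobalContextBlock(nn.Module):
    """Global context (GC) block: attention pooling + bottleneck + add."""
    def __init__(self, channels, ratio=16):
        super().__init__()
        hidden = channels // ratio
        self.w_k = nn.Conv2d(channels, 1, 1)         # attention logits
        self.transform = nn.Sequential(              # two-layer bottleneck
            nn.Conv2d(channels, hidden, 1),          # W_v1
            nn.LayerNorm([hidden, 1, 1]),            # LN eases optimization
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, channels, 1),          # W_v2
        )

    def forward(self, x):
        n, c, h, w = x.shape
        # (a) global attention pooling: softmax over all H*W positions
        alpha = F.softmax(self.w_k(x).flatten(2), dim=-1)     # [N, 1, HW]
        ctx = (x.flatten(2) * alpha).sum(-1).view(n, c, 1, 1)
        # (b) bottleneck transform, then (c) broadcast element-wise addition
        return x + self.transform(ctx)

x = torch.randn(2, 256, 14, 14)
print(GlobalContextBlock(256)(x).shape)
```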
Since the GC block is lightweight, it can be applied in multiple layers to better capture long-range dependency with only a slight increase in computation cost. Taking ResNet-50 for ImageNet classification as an example, GC-ResNet-50 denotes adding the GC block to all layers (c3+c4+c5) in ResNet-50 with a bottleneck ratio of 16. GC-ResNet-50 increases ResNet-50 computation from ∼3.86 GFLOPs to ∼3.87 GFLOPs, corresponding to a 0.26% relative increase. Also, GC-ResNet-50 introduces ∼2.52M additional parameters beyond the ∼25.56M parameters required by ResNet-50, corresponding to a ∼9.86% increase.
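The parameter overhead quoted above can be roughly checked by counting the weights of a GC block at each ResNet-50 stage. This is a back-of-the-envelope estimate that ignores bias terms, so it lands slightly below the quoted ~2.52M:

```python
# Approximate extra parameters of GC blocks in ResNet-50 (c3+c4+c5), r=16.
# Per block: W_k (C -> 1), W_v1 (C -> C/r), LayerNorm (2*C/r), W_v2 (C/r -> C).
def gc_params(c, r=16):
    return c * 1 + c * (c // r) + 2 * (c // r) + (c // r) * c

stages = {512: 4, 1024: 6, 2048: 3}   # channels: number of residual blocks
total = sum(gc_params(c) * n for c, n in stages.items())
print(f"{total / 1e6:.2f}M extra parameters")   # ~2.51M
```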
Global context can benefit a wide range of visual recognition tasks, and the flexibility of the GC block allows it to be plugged into the network architectures used in various computer vision problems. In this paper, we apply our GC block to four general vision tasks (image recognition, object detection/instance segmentation, semantic segmentation and action recognition) and observe significant improvements in all four.

Relationship to the non-local block. As the non-local block actually learns query-independent global context, the global attention pooling of our GC block models the same global context as the NL block but with significantly lower computation cost. As the GC block adopts the bottleneck transform to reduce redundancy in the global context features, the number of parameters and FLOPs are further reduced. The FLOPs and parameter count of the GC block are significantly lower than those of the NL block, allowing our GC block to be applied at multiple layers with just a slight increase in computation, while better capturing long-range dependency and aiding network training.

Relationship to the squeeze-excitation block. The main difference between the SE block and our GC block is the fusion module, which reflects the different goals of the two blocks. The SE block adopts rescaling to recalibrate the importance of channels but inadequately models long-range dependency. Our GC block follows the NL block in utilizing addition to aggregate global context to all positions for capturing long-range dependency. A second difference is the layer normalization in the bottleneck transform: as our GC block adopts addition for fusion, layer normalization eases optimization of the two-layer bottleneck transform, which leads to better performance. Third, global average pooling in the SE block is a special case of global attention pooling in the GC block. Results in Tables 5(d), 5(f) and 8(b) show the superiority of addition in the fusion module, layer normalization in the two-layer bottleneck, and global attention pooling, respectively, compared to the SE block.
EXPERIMENTS
To evaluate the proposed method, we carry out experiments on four basic tasks: object detection/instance segmentation on COCO [63], image classification on ImageNet [64], action recognition on Kinetics [65], and semantic segmentation on Cityscapes [66]. Experimental results demonstrate that the proposed GCNet generally outperforms non-local networks with significantly lower FLOPs.
Object Detection/Instance Segmentation on COCO
We investigate our model on object detection and instance segmentation on COCO 2017 [63], whose train set comprises 118k images, validation set 5k images, and test-dev set 20k images. We follow the standard setting [7] of evaluating object detection and instance segmentation via the standard mean average-precision scores at different box and mask IoUs.
Setup. Our experiments are implemented in PyTorch [67] based on the open-source mmdetection [68]. Unless otherwise noted, our GC block with ratio r=16 is applied to stages c3, c4, c5 of ResNet/ResNeXt.

Training. We use the standard configuration of Mask R-CNN [7] with FPN and ResNet/ResNeXt as the backbone architecture. The input images are resized such that their shorter side is 800 pixels [69]. We train on 8 GPUs with 2 images per GPU (effective mini-batch size of 16). The backbones of all models are pretrained on ImageNet classification [64], then all layers except c1 and c2 are jointly finetuned with the detection and segmentation heads. Unlike the stage-wise training with respect to RPN in [7], end-to-end training as in [70] is adopted for our implementation, yielding better results. Different from the conventional finetuning setting [7], we use Synchronized BatchNorm to replace frozen BatchNorm. All models are trained for 12 epochs using Synchronized SGD with a weight decay of 0.0001 and momentum of 0.9, which roughly corresponds to the 1x schedule in the Mask R-CNN benchmark [71]. The learning rate is initialized to 0.02, and decays by a factor of 10 at the 8th and 11th epochs. The choice of hyper-parameters also follows the latest release of the Mask R-CNN benchmark [71].
Ablation Study
The ablation study is done on the COCO 2017 validation set. The standard COCO metrics, including AP, AP50 and AP75, are reported.

Block design. Following [5], we insert one non-local block (NL), one simplified non-local block (SNL), or one global context block (GC) right before the last residual block of c4. Table 5(a) shows that both SNL and GC achieve performance comparable to NL with fewer parameters and less computation, indicating redundancy in the computation and parameters of the original non-local design. Furthermore, adding the GC block in all residual blocks yields higher performance (1.1%↑ on AP bbox and 0.9%↑ on AP mask) with a slight increase in FLOPs and #params.
Positions. The NL block is inserted after the residual block (afterAdd), while the SE block is integrated after the last 1x1 convolution inside the residual block (after1x1). In Table 5(b), we investigate both cases with the GC block and they yield similar results. Hence, we adopt after1x1 as the default.
Stages. Table 5(c) shows the results of integrating the GC block at different stages. All stages benefit from global context modeling in the GC block (0.7%-1.7%↑ on AP bbox and AP mask ). Inserting into c4 and c5 both achieves better performance than into c3, demonstrating that better semantic features can benefit more from the global context modeling. With a slight increase in FLOPs, inserting the GC block into all layers (c3+c4+c5) yields even higher performance than inserting into only a single layer.
Bottleneck design. The effects of each component in the bottleneck transform are shown in Table 5(d). w/o ratio denotes the simplified NLNet using one 1x1 convolution as the transform, which has more parameters than the baseline. Even though r16 and r16+ReLU have far fewer parameters than the w/o ratio variant, two layers are found to be harder to optimize and lead to worse performance than a single layer. So LayerNorm (LN) is exploited to ease optimization, leading to performance similar to w/o ratio but with far fewer #params.
The reason we adopt layer norm here is that the alternatives, i.e., batch norm and group norm, do not perform well, probably due to insufficient statistics for computing the means and variances. The spatial resolution of the intermediate feature map in the GC block has been reduced to 1 × 1 (see Fig. 4(c)). If batch normalization is used, the number of elements for computing each mean and variance is b (b is the batch size), which is small. If group normalization is used, the number of elements for computing each mean and variance is C/r/g (g is the group number), which is also small. For layer norm, the number of elements used to compute each mean and variance is C/r, which is observed to be sufficient.

Bottleneck ratio. The bottleneck design is intended to reduce redundancy in parameters and provide a good tradeoff between performance and #params. In Table 5(e), we vary the ratio r of the bottleneck. As the ratio r decreases (from 32 to 4) with an increasing number of parameters and FLOPs, the performance improves consistently (0.8%↑ on AP bbox and 0.5%↑ on AP mask), indicating that our bottleneck strikes a good balance between performance and number of parameters. It is worth noting that even with a ratio of r=32, the network still outperforms the baseline by large margins.
Pooling and fusion. The different choices for pooling and fusion are ablated in Table 5(f). First, it shows that addition is more effective than scaling in the fusion stage. It is surprising that attention pooling only achieves slightly better results than vanilla average pooling. This indicates that how global context is aggregated to query positions (choice of fusion module) is more important than how features from all positions are grouped together (choice in context modeling module). It is worth noting that att+add significantly outperforms avg+scale, which denotes the approach of SENet with layer norm, because of the effective modeling of long-range dependency with attention pooling for context modeling, and the use of addition for feature aggregation.
Different normalization. The results with different normalization are presented in Table 6(a). GCNet improves performance by 1.0%↑ on AP bbox and 0.7%↑ on AP mask when replacing fixBN with syncBN in the backbone, while the baseline maintains similar performance. Since the backbone is already pretrained on ImageNet while the inserted GC block is randomly initialized, the running statistics of the backbone features could help with the training of the GC block. Following [72], [73], syncBN is further applied in both the backbone and the heads. Even though the baseline improves by 1.6%↑ on AP bbox and 0.8%↑ on AP mask, the gap between GC and the baseline is preserved: 2.6%↑ on AP bbox and 2.4%↑ on AP mask.

Longer training. We also train our models for 24 epochs, which roughly corresponds to the 2x schedule in [71]. As shown in Table 6(b), GCNet does not saturate, and a greater performance gain is observed, indicating the large potential capacity of GCNet.
Experiments on Stronger Backbones
We evaluate our GCNet on stronger backbones, by replacing ResNet-50 with ResNet-101 and ResNeXt-101 [15], adding deformable convolution to multiple layers (c3+c4+c5) [17], [18] and adopting the Cascade strategy [74]. The results of our GCNet with GC blocks integrated in all layers (c3+c4+c5) with bottleneck ratios of 4 and 16 are reported. Table 7(a) presents detailed results on the validation set. It is worth noting that even when adopting stronger backbones, the gain of GCNet compared to the baseline is still significant, which demonstrates that our GC block with global context modeling is complementary to the capacity of current models. For the strongest backbone, with deformable convolution and cascade RCNN in ResNeXt-101, our GC block can still boost performance by 0.8%↑ on AP bbox and 0.5%↑ on AP mask . To further evaluate our proposed method, the results on the test-dev set are also reported, shown in Table 7(b). On test-dev, strong baselines are also boosted by large margins by adding GC blocks, which is consistent with the results on the validation set. These results demonstrate the robustness of our proposed method.
Image Classification on ImageNet
ImageNet [64] is a benchmark dataset for image classification, containing 1.28M training images and 50K validation images from 1000 classes. We follow the standard setting in [8] to train deep networks on the training set and report the single-crop top-1 and top-5 errors on the validation set. Our preprocessing and augmentation strategy follows the baselines proposed in [75] and [2]. Concretely, the following augmentation and preprocessing steps are performed sequentially during training: [−10°, 10°] random rotation, [3/4, 4/3] random aspect ratio with [8%, 100%] random area crop, 224×224 resizing, horizontal flip with 0.5 probability, [0.6, 1.4] HSV random scaling, and PCA noise sampled from N(0, 0.1). The standard ResNet-50 is trained for 120 epochs on 4 GPUs with 64 images per GPU (effective batch size of 256) with synchronous SGD with momentum 0.9. Cosine learning rate decay is adopted with an initial learning rate of 0.1.
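As a rough sketch, the augmentation sequence and optimizer settings above could be written with torchvision as follows. The ColorJitter line only approximates the HSV scaling, and the PCA lighting noise is omitted, so treat those parts as assumptions rather than the exact recipe.

```python
import torch
import torchvision
import torchvision.transforms as T

# Augmentations in the order described in the text.
train_transform = T.Compose([
    T.RandomRotation(degrees=10),                                   # [-10°, 10°]
    T.RandomResizedCrop(224, scale=(0.08, 1.0), ratio=(3/4, 4/3)),  # area + aspect
    T.RandomHorizontalFlip(p=0.5),
    T.ColorJitter(brightness=0.4, saturation=0.4, hue=0.1),         # approx. HSV scaling
    T.ToTensor(),
])

model = torchvision.models.resnet50()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
# Cosine decay over the 120 training epochs, stepped once per epoch.
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=120)
```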
Block Design. As done for the block design on COCO, results with different blocks are reported in Table 8(a).

Pooling and fusion. The functionality of different pooling and fusion methods is also investigated on image classification. Comparing Table 8(b) with Table 5(f), it is seen that attention pooling has a greater effect in image classification, which could be one of the missing ingredients in [2]. Also, attention pooling with addition (GCNet) outperforms vanilla average pooling with scaling (SENet with layer norm) by 0.35% on top-1 accuracy with almost the same #params and FLOPs.
Comparison with Other Approaches. As shown in Table 9, we compare our approach with other state-of-the-art approaches on ImageNet image recognition, and find that our GCNet outperforms SENet [2] and CBAM [47].
Action Recognition on Kinetics
For human action recognition, we adopt the widely-used Kinetics [65] dataset, which has ∼240k training videos and 20k validation videos in 400 human action categories. All models are trained on the training set and tested on the validation set. Following [5], we report top-1 and top-5 recognition accuracy. We adopt the slow-only baseline in [9], the best single model to date that can utilize weights inflated [39] from an ImageNet-pretrained model. The inflated 3D strategy [5] greatly speeds up convergence compared to training from scratch. All experiment settings follow [9]; the slow-only baseline is trained with 8 frames (8 × 8) as input, and multi(30)-clip validation is adopted.

Ablation Study. The ablation study results are reported in Table 10. For the Kinetics experiments, the ratio of GC blocks is set to 4. First, when replacing the NL block with the simplified NL block and the GC block, the performance can be regarded as on par (0.19%↓ and 0.11%↓ in top-1 accuracy, 0.15%↑ and 0.14%↑ in top-5 accuracy). As on COCO and ImageNet, adding more GC blocks further improves results and outperforms NL blocks with much less computation.
Comparison with Other Approaches. As shown in Table 11, we compare our approach with other state-of-the-art action recognition methods on Kinetics, and find that our GCNet outperforms GloRE [49] and NLNet [5].
Semantic Segmentation on Cityscapes
The Cityscapes [66] dataset is one of the most popular benchmarks for semantic segmentation, consisting of 5,000 high-quality, pixel-level finely annotated images and 20,000 coarsely annotated images captured from 50 different cities. Only the finely annotated part of the dataset is utilized in our experiments; it is divided into 2,975/500/1,525 images for training, validation and testing. In total, 30 semantic classes are provided, 19 of which are used for evaluation. The standard mean Intersection over Union (mIoU) on the validation set is reported for measuring segmentation accuracy.
The training settings and hyper-parameters strictly follow CCNet [43]. The data are augmented by randomly scaling the original 2048 × 1024 high-resolution images by a factor in [0.5, 2] and then randomly cropping 769×769 patches. The poly learning policy is employed, where the initial learning rate 0.01 is multiplied by (1 − iter/iter_max)^0.9. SGD training is performed on 4 GPUs with 2 images per GPU with Synchronized Batch Normalization for 160 epochs, which is roughly 60k steps. Following the practice of recent semantic segmentation approaches [24], [26], [41], [42], [43], ResNet-101 pretrained by [76], [77] is used as the backbone, where the downsampling operation in c4,c5 is removed and dilated convolution [78] is incorporated. The backbone is followed by a semantic segmentation head. Like the design in CCNet [43], the c5 feature is encoded by a context operator (e.g. CCNet, GCNet, SNLNet, NLNet) and concatenated with c5 before the pixel-wise classification layer. In the FCN [22] baseline, there is no context operator. As done in previous works [24], [41], [42], [43], an auxiliary head is added after the c4 stage output for a deep supervision loss. We use ratio r = 4 for the GC block as the default for semantic segmentation experiments.
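Since the poly policy is just a closed-form schedule, it can be stated in a few lines; a minimal sketch:

```python
def poly_lr(initial_lr, cur_iter, max_iter, power=0.9):
    """Poly policy: lr = initial_lr * (1 - cur_iter / max_iter) ** power."""
    return initial_lr * (1.0 - cur_iter / max_iter) ** power

# With the settings above (initial lr 0.01, ~60k steps), halfway through
# training the learning rate is poly_lr(0.01, 30000, 60000) ≈ 0.0054.
```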
Block Design. As shown in Table 12(a), the SNL head achieves performance comparable to the NL head. Hence we argue that the accuracy gains from self-attention can be mainly ascribed to the modeling of global context rather than the learning of pairwise relations. Moreover, all heads significantly boost the performance over the baseline, which indicates that long-range dependency is essential in the fine-grained semantic segmentation task. Note that with the GC block incorporated in the head, GC blocks in the backbone do not have a significant effect because long-range dependency is already exploited.
Pooling and Fusion. The observations for pooling and fusion in Table 12(b) are similar to those for object detection. Moreover, attention pooling with addition (GCNet) outperforms vanilla average pooling with scaling (SENet with layer norm) with almost the same #params and FLOPs. We conjecture that simply recalibrating channels does not effectively exploit the rich semantic global context.
Comparison with Other Approaches. As shown in Table 13, we compare our approach with other state-of-the-art approaches on semantic segmentation of Cityscapes, and find that our GCNet achieves performance on par with DANet [42], ANN [79], CCNet [43] and NLNet [5].
Visualizations
Visualizations of Context Attention Map. In Figure 5, we randomly choose fifteen images from the COCO dataset and visualize their attention maps (softmax output of the context modeling module) for GCNet and NLNet. We can observe that NLNet learns similar attention maps for different query points in most cases, and these are also similar to the attention maps learnt by GCNet. In addition, we observe that the two models usually focus on small or thin objects like frisbees, skateboards, and snowboards. This may facilitate the detection of these objects, and the accuracy is hence boosted. Also note that the human body is an exception, as it is less attended. We hypothesize that this is because the person class is so common in the COCO dataset that it is not difficult to detect.
Output Activations of GC Block. We follow [2] to visualize the output activations of GC blocks in different layers. As depicted in Figure 6, the channel activations are class-agnostic in the shallow layers and more class-dependent in the deeper layers. This is intuitive, since for neurons closer to the final classification layer, a higher correlation between activation and class label is expected.

[Table 13 caption] Comparison of state-of-the-art methods with ResNet-101 on semantic segmentation with stronger augmentation on the Cityscapes validation set. The methods denoted with a "†" marker produce the pixel-wise classification logits by concatenating the stride-8 c5 backbone features with the context head, followed by a 3×3 convolution layer, while the others directly utilize the context head features without concatenating the 2048-dim c5 features.

Illustration of Class Selectivity. We use the class selectivity index proposed in [80] to study the effect of global context modeling on learned representations. In Figure 7, we plot the distribution of the class selectivity index on ImageNet. We use the last activation of each block in the c4 stage to compute the class selectivity index. The observed pattern is similar to that in GENet [45]. The distributions are almost the same in the first blocks. As the depth increases, GCNet begins to diverge from the baseline, and as shown in the last plot (c4.5.relu) in Figure 7, GCNet exhibits much less class selectivity. As also pointed out in [45], we speculate that there are some cases that suffer from local ambiguity, which would push the baseline network to specialize some neurons to overcome it. The global context computed by GCNet may avoid this burden, thus resulting in less class selectivity.
CONCLUSION
The long-range dependency modeling of non-local networks intends to model query-specific global context, but we have found empirically that it only models query-independent context on several important visual recognition tasks. Based on this, we simplify non-local networks and abstract this simplified version to a global context modeling framework. Then we propose a novel instantiation of this framework, the GC block, which is lightweight and can effectively model long-range dependency. Our GCNet is constructed via applying GC blocks to multiple layers, which generally outperforms simplified NLNet on major benchmarks for various recognition tasks.
We have verified that the global context block can benefit multiple visual recognition tasks. In the future, the global context block may be extended to generative models [81], [82], graph learning models [56], [83], and self-supervised models [84].
"year": 2020,
"sha1": "e1da715b8ae4436c5224e9b573309a3b72c7a53c",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "e1da715b8ae4436c5224e9b573309a3b72c7a53c",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science",
"Medicine"
]
} |
The Design Of Wheelchair Systems With Raspberry Pi 3-Based Joystick Analog And Voice Control
In this study, an electric wheelchair combining two controls, an analog joystick and voice control, is designed. An MCP3008 IC is used for joystick navigation, converting the analog joystick data into digital data. Movements on the xAxis (horizontal) axis are right and left turns, and movements on the yAxis (vertical) axis are forward and backward. The deflection of the yAxis and xAxis axes set by the user determines the speed of the wheelchair. Meanwhile, the AMR-Voice application on Android is used to navigate the wheelchair by voice. There are five commands in this voice control: "forward", "backward", "left", "right", and "stop". A command is sent to the Raspberry Pi 3 via the HC-06 module and then parsed for recognition. If a voice command is recognized correctly, the Raspberry Pi 3 sends an activation signal to the motor driver to move the wheelchair in the direction corresponding to the command given by the user. Voice control of the wheelchair was tested in both quiet and noisy rooms. The results of the voice-control tests indicate that the accuracy and speed of the wheelchair response rely heavily on the Internet connection and room conditions: the average response time was 0.16 s in a quiet room and 5.18 s in a noisy room. The wheelchair with joystick and voice control can be used by disabled people, whether or not they can move their fingers, at low cost, making it an alternative for developing countries.
Introduction
Paralysis is a major issue that requires someone to use a wheelchair. A wheelchair is often the main accommodation for people with paralysis, and its use is widely accepted as part of proper therapy for a paralyzed patient. Wheelchairs can greatly help patients in their daily activities. Unfortunately, wheelchairs generally still require the help of others to push them, yet there are times when disabled people must be able to move their wheelchair independently. Wheelchairs are in high demand in Indonesia because of the large number of people with disabilities there: based on data from the 2017 Sakernas survey, the number of people with disabilities in Indonesia was 21 million. Currently, wheelchairs are still widely used manually, that is, with the help of others, and psychologically a person with disabilities does not want to trouble the people around them. Much wheelchair development has been done; one example is a wheelchair that can be controlled by voice using an STM32F103C6T6 (STM32) module [1]. In other studies, brain-signal-based wheelchairs were developed with various methods of brain-signal processing [2], [3], [4]. This research develops a wheelchair-control system using an analog joystick and voice. Speed settings can be adjusted in joystick control mode. The use of relatively inexpensive components is expected to improve the availability of wheelchairs with special controls for the disabled in developing countries such as Indonesia.
Methodology
A. Raspberry Pi 3
Raspberry Pi 3 has 1 GB of RAM and Broadcom VideoCore IV graphics running at a higher clock frequency than earlier models, which ran at 250 MHz. The Raspberry Pi 3 has a 40-pin header, 26 of which are usable GPIO pins. Sensors are generally connected to the GPIO, or it is used to drive relays. GPIO serves as an alternative way for the Raspberry Pi to communicate with external devices, much like a USB port or Ethernet.
B. Joystick Analog
An analog joystick combines two potentiometers, one for vertical movement (yAxis) and one for horizontal movement (xAxis). The joystick also has a select switch. It outputs 2.5 V on both X and Y when centered; moving the joystick causes the output to vary from 0 V to 5 V depending on the direction of movement.

C. IC MCP3008
The MCP3008 is a 10-bit analog-to-digital converter (ADC). It provides 4 pseudo-differential input pairs or 8 single-ended inputs. Differential Nonlinearity (DNL) and Integral Nonlinearity (INL) are specified at ±1 LSB. Communication with the device is performed using a simple serial interface compatible with the SPI protocol. The MCP3008 has a conversion rate of up to 200 ksps, and the MCP3004/3008 devices operate over a wide voltage range (2.7 V-5.5 V).
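A minimal sketch of reading the two joystick axes through the MCP3008 over SPI on the Raspberry Pi is shown below, using the common spidev library. The channel assignments (xAxis on CH0, yAxis on CH1) and the SPI clock rate are illustrative assumptions.

```python
import spidev

spi = spidev.SpiDev()
spi.open(0, 0)                 # SPI bus 0, chip-select 0
spi.max_speed_hz = 1_350_000

def read_adc(channel):
    """Return the 10-bit value (0-1023) from one MCP3008 channel (0-7)."""
    # Byte 1: start bit; byte 2: single-ended mode + channel; byte 3: dummy
    reply = spi.xfer2([1, (8 + channel) << 4, 0])
    return ((reply[1] & 3) << 8) | reply[2]

x_axis = read_adc(0)   # assumed: joystick xAxis wired to CH0
y_axis = read_adc(1)   # assumed: joystick yAxis wired to CH1
```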
D. AMR-Voice Control
AMR-Voice is an Android application available in the Google Play store. When it is impractical for the user to type on the Android touch keyboard, the user can instead speak a sentence, and Google Voice will analyze the captured sound. Consequently, this voice control requires an Internet connection when used.
E. HC-06 Bluetooth Module
Bluetooth is a form of wireless data communication based on radio frequency. This Bluetooth module replaces wired serial communication. Bluetooth links involve two types of devices: a master (data sender) and a slave (receiver).
F. System Block Diagram
The two controls are integrated into one system capable of navigating the wheelchair. Figure 6 is a block diagram of wheelchair control using the joystick and voice control, and Figure 7 shows the flowchart of the control system using the joystick. The analog joystick produces analog data, which is converted into digital ADC data. As Figure 7 shows, to move the wheelchair forward the user must push the yAxis of the analog joystick until the value is larger than 600; the user can increase the speed by pushing the yAxis further, up to a maximum value of 1024. For backward movement, the user must pull the yAxis until the value is smaller than 470, and the backward speed can likewise be adjusted by moving the yAxis down to a minimum value of 0. Similarly, for a right turn the user must push the xAxis until the value is larger than 600, with speed increasing up to the maximum value of 1024, and for a left turn the user must push the xAxis until the value is smaller than 470, with speed increasing down to the minimum value of 0.

In this system, the speed of the wheelchair is divided into two modes, illustrated in the speed-control flowchart of Figure 8: 'normal' speed and 'fast' speed. Normal speed mode uses PWM 49 when the xAxis or yAxis value is > 600 and <= 800, or < 400 and > 200, while fast speed mode uses PWM 100 when the value is > 800 and <= 1024, or < 200 and > 0. These two speed modes apply to all directions of joystick control.
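The threshold logic above can be condensed into a short sketch. The thresholds and PWM duty values come from the text; the function name and the treatment of the unassigned 400-470 band are our assumptions.

```python
def classify_axis(value):
    """Map a 10-bit axis reading to (direction, pwm_duty) using the
    thresholds in the text. The 400-470 band is treated as a stop zone
    because the text does not assign it a speed mode."""
    if 600 < value <= 800:
        return "positive", 49     # normal speed
    if value > 800:
        return "positive", 100    # fast speed
    if 200 < value < 400:
        return "negative", 49     # normal speed
    if 0 <= value < 200:
        return "negative", 100    # fast speed
    return "neutral", 0           # dead zone: stop

y_dir, y_pwm = classify_axis(512)   # yAxis centered -> ("neutral", 0)
x_dir, x_pwm = classify_axis(900)   # xAxis pushed hard -> ("positive", 100)
```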
Wheelchair control by voice uses the AMR-Voice Android application, which makes use of the Google Voice feature; the data read by Google Voice is sent via the Android device's Bluetooth and received by the HC-06 module. Only five commands can be executed by the Raspberry Pi: forward, backward, right, left, and stop. When the user gives any other command, the Raspberry Pi does not execute it and the wheelchair does not respond. Figure 9 shows the flowchart of wheelchair control using voice commands.
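On the Raspberry Pi side, receiving and dispatching the five voice commands might look like the following sketch. The serial device path, the AMR-Voice framing characters, and the drive() helper are illustrative assumptions.

```python
import serial

ACTIONS = {"forward", "backward", "left", "right", "stop"}

def drive(direction):
    # Placeholder: set the motor-driver GPIO/PWM lines for this direction.
    print("moving:", direction)

bt = serial.Serial("/dev/rfcomm0", 9600, timeout=1)  # HC-06 serial link
while True:
    raw = bt.readline().decode("utf-8", errors="ignore")
    cmd = raw.strip("*# \r\n").lower()   # AMR-Voice typically frames as *cmd#
    if cmd in ACTIONS:
        drive(cmd)
    # Any other command is ignored, so the wheelchair does not respond.
```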
Result and Discussion
A. Wheelchair Control Testing Using Joystick Analog
This testing was done to determine the accuracy of the joystick output value with respect to the movement of the two wheelchair motors, which together determine the direction of wheelchair movement. To turn the wheelchair left with the analog joystick, the user must shift the analog lever left so that the xAxis value becomes < 470 and >= 0; then Motor 2 (M2) rotates clockwise (CW) while Motor 1 (M1) is stopped. Conversely, to turn right the user must shift the lever right so that the xAxis value becomes > 600 and <= 1024; then Motor 1 (M1) rotates clockwise (CW) while Motor 2 (M2) is stopped. For backward movement, the user must pull the lever back so that the yAxis value becomes < 470 and >= 0; then both motors rotate counterclockwise (CCW). For forward movement, the user must push the lever forward so that the yAxis value becomes > 600 and <= 1024; then both motors rotate clockwise (CW). The test in Table 2 was done to check the correspondence between the joystick output data and the direction and speed of the wheelchair motors. The results show that each joystick output value maps accurately to the intended direction and speed of the wheelchair motors.

B. Wheelchair Control Testing Using Voice Control
Voice commands spoken by the user are captured by AMR-Voice and sent to the Raspberry Pi 3 via the HC-06 module, after which the command is executed. If the command is valid, the wheelchair moves according to the given command; if not, the wheelchair does not respond. Table 3 reports the response time of AMR-Voice measured under two conditions: a 'quiet' room and a 'noisy' room. As Table 3 shows, the response time of the AMR-Voice application differs between quiet and noisy conditions; the response time in a quiet room is better than in a noisy one, which in turn affects the responsiveness of the wheelchair movement.
Conclusion
The purpose of this research was to make a wheelchair control system that is cheap and easy to use for people with disabilities who have limited use of their limbs or cannot use their fingers. This wheelchair control using a joystick and voice successfully moved the wheelchair in the desired direction. The joystick control is for users who can still use their hands to move the joystick, while the voice control is for users who are unable to use their fingers to push or direct it. Voice control has a very good response time in quiet environments with a good Internet connection.
"year": 2020,
"sha1": "cdecba75098f782e27d85e641a0819c8001196f6",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1757-899x/846/1/012032",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "bb98647156d3fc33cf573b3409c7cf3723dad8c5",
"s2fieldsofstudy": [],
"extfieldsofstudy": [
"Computer Science"
]
} |
Celiac-associated autoimmune thyroid disease: A study of 16 patients with overt hypothyroidism
Over 100 different disorders may occur in association with celiac disease (1), and a number of autoimmune disorders, including autoimmune thyroid diseases, have been described (2-5). In some case studies, celiac disease and hyperthyroidism have been described (2,6-10), while in others, celiac disease and hypothyroidism have been detected (11-16). In addition, some alterations in intestinal absorptive function have been detected, particularly in the presence of hyperthyroidism, but these are reported to normalize following treatment of the thyroid disease (17,18). In a recent study from a circumscribed geographical area in Scotland (19), measurements of thyroid function and studies for thyroid autoantibodies suggested that the risk of clinically overt thyroid disease, particularly hypothyroidism, was increased in patients with celiac disease. This is not surprising because the human lymphocyte antigen (HLA) haplotypes B8 and DR3 are both more commonly detected in autoimmune thyroid disease and celiac disease patients versus the general population (20-22).

In this report, patients with biopsy-defined celiac disease were evaluated for changes in thyroid function and the presence of thyroid autoantibodies as well as other associated clinical disorders. The results indicate that there is a high frequency of autoimmune thyroid disease in patients with celiac disease. These patients also have a high frequency of dermatitis herpetiformis and small intestinal neoplastic disease, particularly lymphoma.
PATIENTS AND METHODS
A total of 96 adults with celiac disease were seen at the University of British Columbia Hospital, Vancouver. In each patient, a small bowel biopsy diagnosis of celiac disease was established on the basis of typical histological features of a severe 'flat' lesion (23) or 'crypt hyperplastic villous atrophy' followed by a response to a strict gluten-free diet. Each patient's hospital and office records were reviewed for evidence of thyroid disease and other clinical disorders that have been previously closely linked with celiac disease, including dermatitis herpetiformis and lymphoma (24,25). For either dermatitis herpetiformis or lymphoma, a histological diagnosis was required. There were 16 patients with thyroid disease, and all except one were regularly reviewed in an adult celiac disease clinic at least on an annual basis. The one patient not followed on an annual basis died of pneumonia six months following diagnosis of celiac disease. One other patient with celiac disease died with a perforated small bowel due to lymphoma (25). Details of the clinical presentation and past medical history were recorded for each patient, as were hematological (hemoglobin, white blood cell count, platelet count) and biochemical results (carotene, iron studies, folic acid, vitamin B12, calcium, phosphate, total protein, albumin, prothrombin time, immunoglobulins and liver tests).
After small intestinal biopsies were done, patients were reviewed by a clinical dietitian with a special interest in celiac disease who provided specific instructions on a gluten-free diet. Patients were assessed periodically, as required, to address any concerns regarding diet treatment. Compliance and response to a prescribed strict gluten-free diet were evaluated during each clinic visit.
The 16 celiac disease patients included hypothyroid patients now on L-thyroxine with normal measurements of thyroid function; newly diagnosed thyroid disease patients; and patients with impaired thyroid gland function due to prior surgical and/or radio-iodine ablative treatment for Graves' disease. Most, but not all, of the 96 patients with celiac disease had thyroid function evaluated. Radioimmunoassays for total and/or free thyroxine and thyroid-stimulating hormone were done. Thyroid microsomal antibodies were detected by a standard agglutination technique. Hypothyroidism was diagnosed on the basis of a low thyroxine value, an increased thyroid-stimulating hormone measurement or both.
RESULTS
Patient data: All 96 patients were residents of British Columbia. Their average age at diagnosis of celiac disease was 47.3 years. Thirty-three were older than age 60 years; for the initial 30 of these elderly patients, the clinical spectrum of associated diseases was described earlier in a separate report (26). Another 63 of these 96 patients were 17 to 59 years old. Details related to the clinical features of some of these 96 patients have been documented elsewhere in earlier case report studies because of unusual or unique clinical and/or histopathological features related to the celiac disease (27-34). Sixteen of these 96 celiac disease patients had hypothyroidism detected, for an overall prevalence of at least 17%; this approximates a reported prevalence of 14% for overt thyroid disease in Scottish patients with celiac disease (19) and is greater than the prevalence of 2.7% recorded in an English study (4), 5.8% in a Swedish population (5) and 5.4% in a Finnish report (35).
There were 70 females and 26 males, for an overall female:male ratio of 2.7:1. These findings are similar to the age and sex distribution for celiac disease patients reported by investigators elsewhere (36) and recorded earlier in a separate report describing clinical features in elderly biopsy-defined celiac disease patients (26). In the 16 patients with thyroid disease reported here, there were 11 females and five males, for a female:male ratio of 2.2:1.
All 16 patients with thyroid disease were Caucasian and no patient had a known family history of celiac disease. However, two patients had an apparent family history of thyroid disease: a 70-year-old male reported that his mother had a 'goiter' and a 62-year-old female had a daughter with autoimmune (Hashimoto's) thyroiditis and hypothyroidism. The average age at diagnosis of celiac disease for these 16 patients (with both celiac and thyroid diseases) was 58.1 years (compared with the group average of 47.3 years).

Related clinical disorders: Table 1 lists related disorders that were identified in each patient. In the 16 patients with thyroid disease described in the present report, dermatitis herpetiformis was diagnosed in six patients (38%) and five (31%) had a neoplastic disorder. These included small intestinal lymphoma in three patients and a small intestinal adenocarcinoma in one. Although the relationship among celiac disease, lymphoma and dermatitis herpetiformis has been well established (24), thyroid disease has not previously been recognized in association with this triad of related clinical disorders. None of these 16 thyroid disease patients had any other endocrine disorder detected, including insulin-dependent diabetes.

Thyroid disease data: Details related to the thyroid disease of each of the 16 patients are provided in Table 2. The average age at diagnosis of thyroid disease in this group was 47.6 years. In contrast, in the same group the average age at celiac disease diagnosis was 58.1 years, over a decade later. This age difference in diagnosis for the two disorders may reflect a sex difference in expression of thyroid and celiac diseases. In females, the average ages at diagnosis of the thyroid and celiac diseases were 40.1 and 53.4 years, respectively; in males, however, the average ages were 64.2 and 67.8 years, respectively, indicating a similar age at diagnosis for both disorders in males. In these 16 patients, 13 had thyroid disease detected before the celiac disease diagnosis, two had both conditions detected concurrently, and only one patient had celiac disease detected about 10 years before the diagnosis of thyroid disease.
In 11 patients, the clinical and laboratory features were very typical of autoimmune hypothyroidism, with biochemical features of thyroid hypofunction and, in nine patients, positive thyroid microsomal antibodies. In four patients, hypothyroidism followed prior surgical or radio-iodine treatment for hyperthyroidism; in another patient a partial thyroidectomy for a thyroid adenoma was done. Thyroid microsomal antibodies were also detected in three of these patients. The titres of the thyroid antibodies detected in this series varied from dilutions of 1:100 to 1:25,600; interestingly, the high titres were found in the only two patients in this series with family histories of thyroid disease. All 14 surviving patients are euthyroid with normal thyroid function measurements and, in most instances, are now receiving replacement L-thyroxine therapy.
DISCUSSION
The present report indicates that autoimmune thyroid diseases are common in patients with celiac disease and may occur more often than has been previously recognized. In this study, 16 of 96 celiac disease patients (17%) had overt thyroid disease and, in most, this was associated with impaired thyroid function. It is possible that the prevalence was even higher because thyroid function was assessed in most, but could not be examined prospectively in all 96 patients in a systematic fashion. The association is not entirely unexpected, however, because the thyroid gland shares a common embryonic origin during fetal development, derived from the pharyngeal gut on the 17th day (37). In addition, patients with autoimmune thyroid disease and celiac disease both have increased frequencies of the HLA haplotypes B8 and DR3 (20-22). Finally, HLA DR antigen expression both occurs in the small bowel of patients with celiac disease and has been detected in epithelial glandular structures of patients with other autoimmune diseases (eg, salivary glands in Sjögren's syndrome) (38,39).
The 16 patients with both thyroid and celiac disease were much older than the average age of the 96 celiac patients in this series. Possibly this reflects the reported tendency to an increased prevalence of hypothyroidism with increasing age (40). Alternatively, elderly celiac disease patients, being untreated for prolonged periods before diagnosis, may be more likely to develop autoimmune thyroid or other diseases. Greater small intestinal permeability in celiac disease patients may permit excessive amounts of antigen to enter the circulation and cross-react with other tissues, including the thyroid gland (19).
In the present study, thyroid disease was detected before celiac disease in most patients, and only one patient had treatment with a gluten-free diet before detection of thyroid disease. As suggested elsewhere (19), it would be of particular interest to assess the prevalence of thyroid diseases in adult celiac patients treated with a gluten-free diet from childhood.
This study also explored clinical disorders that are already known to be strongly linked with celiac disease. Almost half of the patients in the present study also had dermatitis herpetiformis, small intestinal lymphoma or both. While this 'triad' has been previously recognized and linked with celiac disease (24), detection of altered thyroid function may lead to recognition of a clinical setting that carries an increased risk of neoplastic disease. In addition, earlier studies, especially in animal models, have suggested a critical role for thyroid hormones in altering the normal pattern of epithelial cell renewal and epithelial proliferation rates in the small intestine (41,42). Studies are needed to elucidate more precisely the role of altered levels of circulating molecules, including hormones, in the development of malignant complications in this clinical setting of adult celiac disease.
The association of celiac disease with autoimmune thyroid disease may be important to recognize from a clinical perspective, as the two disorders share several common features, such as anemia, altered bowel habit and impaired absorption, especially in hyperthyroidism. Failure to diagnose both conditions might result in a limited, or even an apparent lack of, response to a treatment regimen. Alternatively, hypothyroidism might mask the clinical features of celiac disease, such as weight loss and diarrhea. Thus, it may be important to exclude thyroid disease in all celiac disease patients, especially if there is an apparent failure to respond to a gluten-free diet. Conversely, hypothyroid patients failing to respond to oral L-thyroxine treatment may have undiagnosed or occult celiac disease. Prospective studies are needed to define further this intriguing relationship of celiac disease with autoimmune endocrine, including thyroid, diseases.
TABLE 1: Celiac disease patients with thyroid diseases
*Age (in years) at diagnosis of celiac disease; † Deceased; ‡ Diagnosed after celiac disease detected; DH Dermatitis herpetiformis
"year": 1995,
"sha1": "a8e53c3fce168459794f28e5933b259f22df286e",
"oa_license": "CCBY",
"oa_url": "https://downloads.hindawi.com/journals/cjgh/1995/342519.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "a8e53c3fce168459794f28e5933b259f22df286e",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Targeted Multifunctional Lipid ECO Plasmid DNA Nanoparticles as Efficient Non-viral Gene Therapy for Leber's Congenital Amaurosis
Development of a gene delivery system with high efficiency and a good safety profile is essential for successful gene therapy. Here we developed a targeted non-viral delivery system using a multifunctional lipid ECO for treating Leber’s congenital amaurosis type 2 (LCA2) and tested this in a mouse model. ECO formed stable nanoparticles with plasmid DNA (pDNA) at a low amine to phosphate (N/P) ratio and mediated high gene transfection efficiency in ARPE-19 cells because of their intrinsic properties of pH-sensitive amphiphilic endosomal escape and reductive cytosolic release (PERC). All-trans-retinylamine, which binds to interphotoreceptor retinoid-binding protein (IRBP), was incorporated into the nanoparticles via a polyethylene glycol (PEG) spacer for targeted delivery of pDNA into the retinal pigmented epithelium. The targeted ECO/pDNA nanoparticles provided high GFP expression in the RPE of 1-month-old Rpe65−/− mice after subretinal injection. Such mice also exhibited a significant increase in electroretinographic activity, and this therapeutic effect continued for at least 120 days. A safety study in wild-type BALB/c mice indicated no irreversible retinal damage following subretinal injection of these targeted nanoparticles. All-trans-retinylamine-modified ECO/pDNA nanoparticles provide a promising non-viral platform for safe and effective treatment of RPE-specific monogenic eye diseases such as LCA2.
INTRODUCTION
Leber's congenital amaurosis (LCA) is a genetic disease causing retinal degeneration with severe vision loss at an early age that affects 1 in 80,000 subjects. [1][2][3] One molecular form of this disease, LCA type 2 (LCA2), is caused by mutations in the RPE65 gene that encodes the RPE65 protein (retinal pigment epithelium-specific protein 65-kDa) predominantly expressed in the retinal pigmented epithelium (RPE). RPE65, a key enzyme in retinoid metabolism, catalyzes the hydrolysis and isomerization of all-trans-retinyl esters to 11-cis-retinal. RPE65 deficiency results in the accumulation of all-trans-retinyl esters and causes rod and cone photoreceptor dysfunction. [4][5][6] Currently, there is no approved therapy for effectively treating LCA2. As a monogenic disease, LCA2 is a good candidate for gene therapy because the photoreceptor cells and the RPE do not show extensive pathological abnormalities in the early stages of this disease. 7 Recently, gene replacement therapy with adeno-associated viral vectors (AAVs) has demonstrated considerable therapeutic efficacy in improving vision in RPE65-deficient animal models and human patients. [8][9][10][11][12][13] Although clinical trials have validated the overall benefit of gene replacement therapy, their success is limited by several drawbacks associated with viral delivery systems. The possibility of an immune response induced by viral vectors greatly compromises the efficiency of gene transfection and can cause complications in patients. 14 Studies have shown that vector DNA is detectable in the optic nerve and brain following subretinal injections, which raises additional safety concerns. 15 Non-viral gene delivery systems that employ cationic lipids, dendrimers, polycations, and polysaccharides have also been developed for gene delivery. [16][17][18][19][20] Non-viral systems generally exhibit advantages such as ease of production, good safety profiles, and unlimited cargo capacity. However, their clinical translation is still hindered by their low transfection efficiency and transient gene expression. 21 Novel designs of highly effective non-viral delivery systems are needed to overcome the limitations of existing non-viral delivery systems for gene therapy of inherited monogenic visual disorders to become effective and practical.
Recently, we designed a multifunctional lipid, (1-aminoethyl)iminobis [N-(oleicylcysteinyl-1-amino-ethyl)propionamide] (ECO), as a simple and smart gene delivery carrier based on its mechanism of pH-sensitive amphiphilic endosomal escape and reductive cytosolic release (PERC) of nucleic acids. [22][23][24][25][26] ECO contains a protonatable ethylenediamine (E) head group, two cysteine (C) functional linkers, and two oleoyl (O) lipophilic tails. The thiol groups of the cysteine residues can form disulfide bonds to stabilize particle formulations and can also be chemically modified with targeting ligands. Following cellular uptake, cytosolic release of the gene cargo is facilitated by pH-sensitive amphiphilic endosomal membrane destabilization through protonation of the head group of ECO in the acidic endosomal-lysosomal compartment (pH = 5-6) and dissociation of the nanoparticles by reduction of the disulfide bonds in the cytoplasm. ECO has demonstrated excellent transfection efficiency for RNAi cancer therapies without additional helper lipids. 25 An ECO/dendrimer hybrid system has also successfully transfected retinal tissue with GFP. 26 In this study, we designed and prepared all-trans-retinylamine modified ECO plasmid DNA (pDNA) nanoparticles with a polyethylene glycol (PEG) spacer to target interphotoreceptor retinoid-binding protein (IRBP) for enhanced gene delivery into the retina. All-trans-retinoids have a high binding affinity for retinoid binding proteins, which play important roles in visual transduction. 27 IRBP is a major protein in the interphotoreceptor matrix (IPM) that selectively transports 11-cis-retinal to photoreceptor outer segments and all-trans-retinol to the RPE. [28][29][30][31][32][33] Such a selective transport mechanism can increase the transfection efficiency directly into the RPE with the ECO/pDNA nanoparticles conjugated with all-trans-retinylamine. We first evaluated the in vitro transfection efficiency of ECO/pDNA nanoparticles in ARPE-19 cells, a human RPE cell line. The in vivo transfection efficiency of targeted ECO/pDNA nanoparticles to the RPE was then evaluated in wild-type BALB/c mice using GFP plasmids. Finally, the efficacy of gene therapy with the targeted nanoparticles was determined by electroretinography (ERG) in the Rpe65−/− mouse model of human LCA2.

[Figure 1C caption] Confocal fluorescence microscopy images demonstrating intracellular trafficking of ECO/Cy3-pDNA nanoparticles in ARPE-19 cells. Cells were treated with LysoTracker Green (1:2,500 dilution) and Hoechst 33342 (1:10,000 dilution) and then transfected with ECO/Cy3-labeled nanoparticles at N/P = 6. After 1, 4, and 24 hr of transfection, cells were fixed and imaged. Green, endosomes; blue, nuclei; red, Cy3-labeled pDNA. Arrows denote the ECO/Cy3-pDNA nanoparticles and Cy3-pDNA. Scale bars, 20 μm.
In Vitro Transfection with ECO/pDNA Nanoparticles
To determine the transfection efficiency of ECO in vitro, human RPE cells (ARPE-19) were transfected with ECO/plasmid GFP (pGFP; amine to phosphate [N/P] ratio = 6/1) nanoparticles, and confocal microscopy was used to determine GFP expression 48 hr after transfection (Figure 1A). ECO/pGFP nanoparticles produced significant GFP expression, with 69.7% of cells expressing GFP, whereas the Lipofectamine control transfected only 14.4% of the cells, as determined by flow cytometry (Figure 1B). The high gene expression efficiency of ECO/pDNA nanoparticles correlated positively with their efficient intracellular uptake. Figure 1C shows the intracellular uptake of ECO/pDNA nanoparticles as imaged by 3D confocal microscopy 1, 4, and 24 hr post-transfection with Cy3-pDNA as the tracker. After 1 hr incubation, ECO/Cy3-pDNA nanoparticles (red) were aligned at the surface of the cell membrane because of electrostatic interactions of these positively charged nanoparticles with the negatively charged cell membrane. After 4 hr, the nanoparticles entered the cell and co-localized with late endosomes, indicated by their yellow color. After 24 hr, the nanoparticles escaped endosomal entrapment, as shown by the red fluorescence in the cytoplasm and the diminished overlap with endosomes. Efficient cytosolic pDNA delivery of ECO/pDNA nanoparticles resulted in high gene expression efficiency in RPE cells in vitro.
Preparation of Retinylamine-Targeted ECO/pDNA Nanoparticles
To target IRBP, an all-trans-retinoid structure was introduced onto the surface of ECO/pDNA nanoparticles via a PEG (3.4-kDa) spacer. All-trans-retinylamine (Ret-NH2) was first reacted with the N-hydroxysuccinimide (NHS)-activated ester of NHS-PEG-maleimide (MAL) to yield Ret-PEG-MAL, which was characterized by MALDI-TOF mass spectrometry (Figures 2A and 2B). To form targeted ECO/pDNA nanoparticles, Ret-PEG-MAL was first reacted with 2.5 mol % ECO via Michael addition between the thiol and maleimide. The targeted nanoparticles were then formed by self-assembly with pDNA. Figure 2C shows the transmission electron microscopy (TEM) images of the untargeted and targeted ECO/pDNA nanoparticles. The average size of ECO/pDNA nanoparticles was approximately 100 nm based on TEM, while the average size of Ret-PEG-ECO/pDNA nanoparticles was around 120 nm, a slight increase after surface modification with Ret-PEG. This result was consistent with that measured by dynamic light scattering (DLS). Slight aggregation of ECO/pDNA nanoparticles might account for their wider size distribution relative to Ret-PEG-ECO/pDNA nanoparticles in the size distribution curve shown in Figure 2D; after conjugation with the targeting ligand, little aggregation was observed for the Ret-PEG-ECO/pDNA nanoparticles, which showed a narrow particle size distribution. The sizes and zeta potentials of ECO/pDNA and Ret-PEG-ECO/pDNA nanoparticles are depicted in Figure 2E. The average size of the ECO/pDNA nanoparticles (117 nm) was slightly smaller than that of the Ret-PEG-ECO/pDNA nanoparticles (131 nm) based on DLS measurements. After targeting ligand conjugation, the average zeta potential of Ret-PEG-ECO/pDNA nanoparticles dropped from 26 mV to 18 mV, which not only reduced cytotoxicity but also stabilized the delivery system.
In Vivo Transfection with Targeted Ret-PEG-ECO/pGFP Nanoparticles in Wild-Type BALB/c Mice
The Ret-PEG-ECO/pGFP nanoparticles were subretinally injected in BALB/c mice to determine in vivo gene delivery and expression efficiency with GFP as a reporter gene. Significant GFP expression was observed in RPE flat mounts with both unmodified ECO/pGFP nanoparticles and targeted Ret-PEG-ECO/ pGFP nanoparticles 3 days post-injection. However, Ret-PEG-ECO/pGFP nanoparticles produced greater GFP expression than ECO/ pGFP nanoparticles ( Figure 3A). ZO-1 staining of tight junction proteins in RPE flat mounts further confirmed that the enhanced GFP expression emanated from RPE cells ( Figure 3B). knocked down. Rpe65 À/À mice exhibit phenotypic features similar to human LCA2 patients. 34 Ret-PEG-ECO/pRPE65 nanoparticles were injected into the subretinal space of 1-month-old Rpe65 À/À mice. 15 days post-injection, treatment with Ret-PEG-ECO/pRPE65 nanoparticles produced higher mRNA levels of RPE65 in the treated group than in the untreated control group ( Figure 4A). This finding demonstrates successful introduction of the therapeutic gene. ERG was performed at an intensity of 1.6 log cd  s/m 2 to determine the efficacy of the nanoparticle treatment based on the electrical responses to light from the retina. 35 Figure 4B shows significant scotopic and photopic ERG response waveforms in the nanoparticle treatment group 7 days post-treatment, whereas there was almost no response in control group mice injected with Ret-PEG-ECO. The amplitudes of the major waves from all ERG tests were calculated 3, 7, 30, and 120 days posttreatment ( Figures 4C-4F). Significant increases in the amplitudes of scotopic a-waves and b-waves were observed for nanoparticle-treated groups but not for control groups (vehicle-injected). Introduction of the exogenous RPE65 gene increased about 50% of the scotopic ERG amplitude throughout all time points up to 120 days ( Figures 4C and 4E), which demonstrated improved function of rod photoreceptors. Cone function also improved, represented by a 3-to 5-fold increase in photopic b-wave amplitude in the first 7 days after treatment. Although the amplitude decreased at later time points, the photopic b-wave amplitude of the treatment group was 2-fold that of the control, even at 120 days. Photopic a-waves were higher in the treatment groups than in the controls, but the difference was not statistically significant.
Cone Preservation after Gene Replacement Therapy with Ret-PEG-ECO/pRPE65 Nanoparticles in Rpe65−/− Mice
To determine whether Ret-PEG-ECO/pRPE65 nanoparticles could rescue cone cells in Rpe65−/− mice, we prepared cryo-sections of the whole retina at 120 days post-injection and stained cone cells with peanut agglutinin (green). Compared with the control group (Figure 5A), the treatment group (Figure 5B) revealed substantial green fluorescence staining, representing a greater number of healthy cone photoreceptors. This result also explains the increase in photopic wave amplitudes in the ERG. Interestingly, fewer cone cells were observed away from the injection site (Figure 5C), suggesting local rescue in this gene therapy approach.
Therapeutic Effect of Gene Replacement Therapy with Ret-PEG-ECO/pRPE65 Nanoparticles in 3-Month-Old Rpe65−/− Mice
To determine the optimal timing for gene replacement therapy of LCA2 with the targeted nanoparticles, we initiated RPE65 gene therapy with Ret-PEG-ECO/pRPE65 nanoparticles in 3-month-old Rpe65−/− mice and performed ERG tests to evaluate its therapeutic efficacy. According to the ERG responses measured 7 and 30 days post-treatment, no differences were observed in scotopic or photopic waveforms between the treatment and control groups (Figure 6), indicating no observable improvement of retinal function. This result suggests that gene replacement therapy with targeted nanoparticles in these older mice was not as effective in restoring vision as in younger mice, likely because of the progression of irreversible retinal degeneration in older animals.
Safety of Subretinal Injection of Ret-PEG-ECO/pRPE65 Nanoparticles in Wild-Type BALB/c Mice
A slight decrease in response amplitudes was observed for some major waveforms 7 days post-injection because of the induced inflammation. Eye function after nanoparticle injection returned to normal at 30 days, and no deleterious effects were noted in the ERG major wave amplitudes (Figures 7B-7E). This result indicates that Ret-PEG-ECO/pRPE65 nanoparticles are safe for subretinal injection in gene replacement therapy.
DISCUSSION
Gene replacement therapy holds great promise for treating monogenic vision disorders. Thus, establishing a gene delivery system with high transfection efficiency, good therapeutic efficacy, and a high safety profile is critical for broad clinical applications of this treatment. Gene therapy with AAV 1 has been extensively investigated for treatment of LCA2, a monogenic genetic disease. Although gene delivery by AAV has been reported as successful in LCA2, 8,9 its application is limited for treating other monogenic ocular diseases because some therapeutic genes are too large to be loaded into this viral vector. Design of a safe and efficient non-viral gene delivery system has the potential to circumvent this limitation in gene therapy of monogenic visual disorders.
ECO is a multifunctional lipid that has demonstrated excellent efficiency for cytosolic delivery of a variety of genetic materials because of its intrinsic PERC effect. [22][23][24][25][26] This study has shown that ECO is also effective for delivering therapeutic pDNA for non-viral gene replacement therapy in Rpe65 À/À mice. The superior transfection properties of ECO/pDNA nanoparticles result from the multifunctional properties of the lipid carrier, including self-assembly formation of stable nanoparticles with pDNA without helper lipids, pH-sensitive amphiphilic cell membrane destabilization and endosomal escape, as well as reductive dissociation of the nanoparticles to release nucleic acids in the cytoplasm. 22,26 Here we tested a targeting mechanism that involves the use of IRBP to enhance pDNA delivery with ECO into RPE cells (Figure 8). In the retina, the interphotoreceptor matrix fills the space between rod outer segment and RPE cells, where IRBP is the major carrier that selectively transports all-trans-retinol from photoreceptor cells to RPE cells. 28 Modification of ECO/pDNA nanoparticles with this all-trans-retinoid can then facilitate their binding to IRBPs for enhanced delivery to the RPE. When injected into the subretinal space, IRBPs will quickly bind the targeting ligand and help to transport and release the particles near the apical side of the RPE before being internalized by the RPE cells. 33 In vivo transfection with pGFP demonstrated the enhanced gene transfer and expression efficiency of ECO/pDNA nanoparticles with this targeting mechanism in the RPE.
Gene replacement therapy using Ret-PEG-ECO/pRPE65 nanoparticles successfully introduced the expression of exogenous therapeutic RPE65 genes in the RPE layer. ERG results in treated Rpe65−/− mice demonstrated a significant increase in function of both rod and cone photoreceptors, with a therapeutic effect comparable to that of viral gene delivery systems. 36 In addition to protecting the RPE, Ret-PEG-ECO/pRPE65 treatment protected cone photoreceptors adjacent to the injection sites in these mice, slowing cone degeneration for at least 4 months. This therapeutic effect is similar to that reported for viral delivery of RPE65, which also delayed cone degeneration 4 months post-injection in Rpe65−/− mice. 37 Furthermore, gene therapy with Ret-PEG-ECO/pRPE65 nanoparticles has demonstrated a safe profile that does not irreversibly damage the retina. The slight drop in ERG major wave amplitudes caused by the injection's inflammatory effect recovered shortly after the injection.
Similar to human LCA2 patients, mice with the RPE65 gene knocked out have diminished ERG responses because the 11-cis-retinal visual chromophore cannot be regenerated. Rather than producing the chromophore, the impaired visual cycle accumulates the intermediate product all-trans-retinyl ester in the RPE, which gradually damages the retina. Massive degeneration of cone cells can occur as early as 1 month of age in Rpe65−/− mice. Photoreceptor outer segment abnormalities are commonly visible at 1 or 2 months of age, and the outer nuclear layer is thinner than normal at 3 months of age. 34 After introduction of the exogenous RPE65 gene to the RPE layer of 1-month-old mice, even in a small number of RPE cells, the visual cycle can supply sufficient 11-cis-retinal for the adjacent photoreceptor cell layer to provide improved visual function. 38
Figure 5. Cone Preservation after Gene Replacement Therapy with Ret-PEG-ECO/pRPE65 Nanoparticles in Rpe65−/− Mice 120 Days after Treatment. (A-C) Peanut agglutinin (green) was used to stain cone photoreceptors. Nuclei were stained with DAPI (blue).
Rescue of cone cells by Ret-PEG-ECO/pRPE65 nanoparticle gene therapy could also be attributed to RPE65 expression in cone cells themselves as well as in RPE cells. This effect was also reported in a clinical trial that demonstrated cone rescue after gene therapy. 9 Although the treatment was effective in 1-month-old mice, treatment of 3-month-old mice with Ret-PEG-ECO/pRPE65 nanoparticles had no effect because the photoreceptor cells had begun to degenerate by this age. This observation is consistent with human LCA2 patients. During the first few years of life, children with LCA2 are less visually responsive than healthy children. Older LCA2 patients demonstrate more severe retinal degeneration that makes the retina less responsive to RPE65 gene therapy.
Other strategies have been reported previously to target the RPE layer. For example, hyaluronan has been applied to target CD44 receptors expressed by RPE cells. Folate has been tested as a targeting ligand for folate receptors associated with the RPE. 39,40 However, the expression of CD44 receptors is more restricted in inflammatory tissue, and folate receptors reside predominantly in the basal rather than apical membranes of RPE cells. Therefore, the distribution of these receptors greatly restricts the use of these ligands to target the RPE. By comparison, targeting IRBPs with the all-trans-retinyl group can avoid the restrictions of the targeting mechanisms reported above and provide efficient gene delivery to RPE cells after subretinal injection.
Here we have demonstrated the efficacy of a targeted gene delivery system for gene replacement in LCA2. However, improvements are needed to optimize the system prior to clinical translation. These include prolonging gene expression, identifying the appropriate disease stage for maximal effectiveness of therapy, and exploring alternate routes of injection to transfect the whole RPE layer. In future work, we will address the transience of gene expression with these nanoparticles by modifying the DNA plasmid with sequences that prolong gene expression, such as scaffold/matrix attachment regions (S/MARs). 41 The targeted ECO/pDNA nanoparticles can also be further optimized by introducing a pH-sensitive spacer between PEG and ECO to enhance endosomal escape of the ECO/pDNA nanoparticles. 42 Although modification of ECO/pDNA with Ret-PEG promoted cellular uptake of the nanoparticles, the PEG layer could hinder the pH-sensitive amphiphilic endosomal escape of the ECO/pDNA nanoparticles. Incorporation of a pH-sensitive hydrazone spacer will shed the PEG layer via pH-sensitive hydrolysis of hydrazone in acidic endosomes. This will expose the core ECO/pDNA nanoparticles to enhance endosomal escape, with cytosolic gene delivery and increased expression of the therapeutic gene.
In conclusion, the multifunctional lipid ECO-based gene delivery system used here demonstrated excellent gene transfection efficiency because of its unique ability to escape from the endosome. Modification of ECO/pDNA nanoparticles with all-trans-retinylamine to target RPE cells greatly enhanced transfection efficiency in the RPE in vivo. Gene replacement therapy with Ret-PEG-ECO/pRPE65 nanoparticles significantly improved the ERG activity and vision of Rpe65−/− mice, and this therapeutic effect continued for at least 120 days. Ret-PEG-ECO/pRPE65 nanoparticles were safe for subretinal injection, as shown in wild-type BALB/c mice. All-trans-retinylamine-modified pH-sensitive ECO/pDNA nanoparticles comprise a promising non-viral platform for safe, efficient, and targeted delivery of gene therapeutics to treat RPE tissue-specific monogenic eye diseases, including LCA2.
Cell Cultures
ARPE-19 cells were cultured in DMEM supplemented with 10% fetal bovine serum, 100 μg/mL streptomycin, and 100 units/mL penicillin (all reagents were from Invitrogen). Cells were maintained in a humidified incubator at 37°C and 5% CO2.
Animals
BALB/c wild-type mice were purchased from Jackson Laboratory. Rpe65−/− C57BL6 mice were obtained from Michael Redmond (National Eye Institute, NIH) and genotyped as described previously. 43 All mice were housed and cared for in the animal facility at the School of Medicine, Case Western Reserve University, and all animal procedures were approved by the Case Western Reserve University (CWRU) Institutional Animal Care and Use Committee.
Synthesis of Ret-PEG-MAL
The reaction mixture was stirred at room temperature overnight. The product Ret-PEG-MAL was precipitated in 50 mL diethyl ether and washed three times. The product was dried under vacuum to give Ret-PEG-MAL (yield, 89%).
Preparation of ECO/pDNA and Ret-PEG-ECO/pDNA Nanoparticles
Multifunctional pH-sensitive lipid ECO was synthesized as reported previously. 24 The ECO/pDNA nanoparticles were prepared by self-assembly of ECO with plasmid DNA at an amine/phosphate (N/P) ratio of 6. The ECO stock solution (2.5 mM in ethanol) and plasmid DNA stock solution (0.5 mg/mL) at predetermined amounts based on the N/P ratio were diluted into equal volumes with nuclease-free water, mixed, and shaken for 30 min at room temperature. The Ret-PEG-MAL solution (0.4 mM in 50% DMSO and water) was then added to the mixture at 2.5 mol % and shaken for another 30 min to facilitate the reaction between the maleimide functional group of Ret-PEG-MAL and the free thiol groups of ECO.
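As an aside, the arithmetic behind the N/P ratio can be made explicit. The sketch below is not code from the study; it only illustrates how the stock volumes could be computed for a chosen pDNA dose at N/P = 6. The average nucleotide mass (330 g/mol, approximately one phosphate per nucleotide) and the number of protonatable amines per ECO molecule are assumptions, flagged in the comments.

using System;

class NpRatioCalc
{
    // Stock concentrations stated in the text.
    const double EcoStockMolar = 2.5e-3;   // 2.5 mM ECO in ethanol
    const double DnaStockMgPerMl = 0.5;    // 0.5 mg/mL plasmid DNA
    // Assumed constants, for illustration only.
    const double NucleotideMw = 330.0;     // g/mol per nucleotide, ~1 phosphate each
    const double AminesPerEco = 4.0;       // protonatable amines per ECO molecule (assumed)

    static void Main()
    {
        double np = 6.0;          // target amine/phosphate (N/P) ratio
        double dnaMassMg = 0.01;  // hypothetical 10 microgram pDNA dose

        double molPhosphate = dnaMassMg * 1e-3 / NucleotideMw;   // mol phosphate in the DNA
        double molEco = np * molPhosphate / AminesPerEco;        // mol ECO needed for N/P = 6

        double dnaVolUl = dnaMassMg / DnaStockMgPerMl * 1000.0;  // uL of DNA stock
        double ecoVolUl = molEco / EcoStockMolar * 1e6;          // uL of ECO stock

        Console.WriteLine($"DNA stock: {dnaVolUl:F1} uL; ECO stock: {ecoVolUl:F1} uL");
    }
}

Both volumes would then be diluted to equal volumes with nuclease-free water and mixed, as described above.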
TEM
The morphology of ECO/pDNA (N/P = 6) and Ret-PEG-ECO/pDNA (N/P = 6) nanoparticles was imaged with a transmission electron microscope (JEOL JEM2200FS). Samples for TEM were prepared by depositing 20 μL of the particle solution onto a 300-mesh copper grid covered by a thin amorphous carbon film (20 nm). Immediately after deposition, the excess liquid was removed by touching the grid with filter paper. Samples were stained twice by adding 3 μL of 2% uranyl acetate aqueous solution; the excess staining solution was removed after each addition. Images of the nanoparticles were acquired by TEM after the samples were dried.
DLS
The sizes and zeta potentials of ECO/pDNA (N/P = 6) and Ret-PEG-ECO/pDNA (N/P = 6) nanoparticles were determined by DLS with an Anton Paar Litesizer 500 (Anton Paar USA). Three measurements were performed and averaged for each sample at 20°C.
In Vitro Transfection
ARPE-19 cells were seeded onto 12-well plates at a density of 4 × 10^4 cells/well and allowed to grow for 24 hr at 37°C. Transfections were conducted in 10% fetal bovine serum medium with the ECO nanoparticles of GFP plasmid DNA (Altogen Biosystems) at a DNA concentration of 1 μg/mL. ECO/pGFP nanoparticles were incubated with ARPE-19 cells for 8 hr at 37°C. The medium was then replaced with fresh serum-containing medium (10% serum), and cells were then cultured for an additional 48 hr. GFP expression was evaluated with an Olympus FV1000 confocal microscope (Olympus). After removal of the culture medium, each well was washed twice with PBS (10 mM sodium phosphate [pH 7.2] and 100 mM NaCl). Cells were harvested after treatment with 0.25% trypsin containing 0.26 mM EDTA (Invitrogen), followed by centrifugation at 1,000 rpm for 5 min and fixation in 750 μL PBS containing 4% paraformaldehyde, and finally passed through a 35-μm cell strainer (BD Biosciences). A BD FACSCalibur flow cytometer (BD Biosciences) was used to determine GFP expression based on the fluorescence intensity from a total of 10,000 cells for each sample.
Intracellular Uptake
ARPE-19 cells (4 × 10^4/well) were seeded onto glass-bottom microwell dishes and allowed to grow for 24 hr at 37°C before they were stained with 4 μg/mL Hoechst 33342 (Invitrogen) and 100 nM LysoTracker Green (Life Technologies). Cells then were treated with ECO/Cy3-pDNA (Mirus Bio, catalog number MIR7904, N/P = 6) nanoparticles in 10% fetal bovine serum medium. Cells were cultured with nanoparticles for 1, 4, and 24 hr (media were replaced by fresh media after 4 hr), and then the media were removed, and cells were washed with PBS three times before fixation with PBS containing 4% paraformaldehyde. Fluorescence images were acquired with an Olympus FV1000 confocal microscope.

When injected into the subretinal space, all-trans-retinylamine-modified ECO nanoparticles will bind to IRBP in the interphotoreceptor matrix. IRBP binding helps to retain the nanoparticles in the space and transports the nanoparticles to the target cells in the RPE. Following cellular uptake by endocytosis, the nanoparticles escape from the endosomal compartment and release the RPE65 plasmid DNA via the PERC mechanism. Finally, the RPE65 gene is expressed by the RPE cell, where it slows cone cell degeneration and preserves visual function.
In Vivo Subretinal Transfection with ECO/pDNA and Ret-PEG-ECO/pDNA Nanoparticles
All surgical manipulations were carried out under a surgical microscope (Leica M651 MSD). Mice were anesthetized by intraperitoneal injection of a cocktail (15 μL/g body weight) comprised of ketamine (6 mg/mL) and xylazine (0.44 mg/mL) in PBS buffer (10 mM sodium phosphate and 100 mM NaCl [pH 7.2]). Pupils were dilated with 1.0% tropicamide ophthalmic solution (Bausch & Lomb). A 33G beveled needle (World Precision Instruments) was used as a lance to make a full-thickness cut through the sclera 1.0 mm posterior to the limbus. This needle then was replaced with a 36G beveled needle attached to an injection system (UMP-II microsyringe pump and a Micro4 controller with a footswitch, World Precision Instruments). The needle was aimed toward the inferior nasal area of the retina, and an ECO/pDNA or Ret-PEG-ECO/pDNA nanoparticle solution (2.4 μL) was injected to deliver either pRPE65 (OriGene) or pGFP at a dose of 240 ng into the subretinal space. Successful administration was confirmed by bleb formation. The tip of the needle remained in the bleb for 10 s after bleb formation, when it was gently withdrawn. A solution (2.4 μL) of the Ret-PEG-ECO carrier alone, at the same concentration as in the Ret-PEG-ECO/pDNA nanoparticles, was also injected into the subretinal space of the contralateral eye to serve as a control. Each group included at least three eyes with successful subretinal injection. To assess GFP expression, the mice were sacrificed, and eyes were collected 3 days after injection, washed with penicillin-streptomycin solution (Sigma), and rinsed with Hank's balanced salt solution (HyClone). Eye cups were prepared as described previously. 26 The retina and RPE layers were placed in glass-bottom confocal plates and fixed with 1 mL of PBS containing 4% paraformaldehyde. GFP expression in the RPE layer was evaluated with an Olympus FV1000 confocal microscope.
qRT-PCR
Rpe65−/− mice were sacrificed 15 days after subretinal injection with Ret-PEG-ECO/pRPE65 nanoparticles, and RNA was isolated from their eyes. cDNA was synthesized with the QuantiTect reverse transcription kit (QIAGEN) following the manufacturer's instructions. qRT-PCR amplification was performed with SYBR Green I Master mix (Roche Diagnostics). Fold changes were calculated after normalizing the data to glyceraldehyde 3-phosphate dehydrogenase. Rpe65−/− mice without treatment were used as controls.
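The fold-change calculation itself is simple arithmetic. A common implementation is the 2^(−ΔΔCt) method, sketched below with hypothetical Ct values; the paper states only that data were normalized to glyceraldehyde 3-phosphate dehydrogenase (GAPDH), so the exact formula is an assumption here.

using System;

class FoldChange
{
    // 2^(-ddCt): normalize the target gene Ct to the reference gene (GAPDH),
    // then compare treated vs. untreated samples. Assumed method, for illustration.
    static double DdCtFold(double ctTargetTreated, double ctRefTreated,
                           double ctTargetControl, double ctRefControl)
    {
        double dCtTreated = ctTargetTreated - ctRefTreated;  // normalization to GAPDH
        double dCtControl = ctTargetControl - ctRefControl;
        double ddCt = dCtTreated - dCtControl;               // relative to untreated mice
        return Math.Pow(2.0, -ddCt);
    }

    static void Main()
    {
        // Hypothetical Ct values, not data from the study.
        double fold = DdCtFold(24.0, 18.0, 30.0, 18.0);
        Console.WriteLine($"RPE65 fold change: {fold:F1}");  // prints 64.0
    }
}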
Electroretinograms
Electroretinograms were acquired according to a reported method. 44 Animals were anesthetized by intraperitoneal injection of a cocktail (15 μL/g body weight) comprised of ketamine (6 mg/mL) and xylazine (0.44 mg/mL) in PBS buffer (10 mM sodium phosphate and 100 mM NaCl [pH 7.2]). Pupils were dilated with 1% tropicamide for imaging. Experiments were performed in a dark room. Three electrodes were placed on each mouse: a contact lens electrode on the eye, a reference electrode underneath the skin between the ears, and a ground electrode underneath the skin of the tail. Electroretinograms were recorded with the universal electrophysiologic system UTAS E-3000 (LKC Technologies).
Histology
Eye cups were fixed in 2% glutaraldehyde and 4% paraformaldehyde and processed with optimum cutting temperature (OCT) formulation. Sections were cut at 1 μm. The sample slides were permeabilized and fixed sequentially with 4% paraformaldehyde (PFA) and 0.25% Triton X-100, followed by treatment with 0.5% BSA blocking solution for 1 hr at room temperature. Fluorescence-labeled peanut agglutinin (PNA) was applied at a concentration of 12.5 μg/mL for 1 hr at room temperature and washed three times with a 0.1% Tris-buffered saline with Tween 20 (TBST) solution for 5 min each time. Slides were counter-stained with DAPI and mounted with a coverslip using the Prolong Gold reagent (Invitrogen) before imaging. Stained tissue was imaged with an Olympus FV1000 confocal microscope.
Statistical Analysis
Statistical analyses were conducted with two-tailed Student's t tests using a 95% confidence interval. Statistical significance was accepted when p ≤ 0.05. | 2017-11-02T06:06:08.449Z | 2017-02-28T00:00:00.000 | {
"year": 2017,
"sha1": "ff2d794f67711a64fb7aee912e83749a7af9c98b",
"oa_license": "CCBYNCND",
"oa_url": "http://www.cell.com/article/S2162253117301324/pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "ff2d794f67711a64fb7aee912e83749a7af9c98b",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
213695089 | pes2o/s2orc | v3-fos-license | A Study on the Extraction of Texture from Ethnic Minorities by Shape Grammar
Objective: to study the application of shape grammar to the extraction of minority-nationality pattern symbols, to construct minority pattern symbols, and to extract pattern symbols that can be used in modern design, so as to spread minority culture. Method: minority pattern symbols were collected and the connotations of the different minority patterns were analyzed in depth; using network information and shape grammar, pattern symbols represented by the Zhuang, Miao, Yi and Jinuo nationalities were extracted, and distinct ethnic symbol elements were proposed, providing a theoretical basis for combining minority patterns with modern design. Conclusion: the results demonstrate the feasibility of the method, provide a theoretical basis for combining the symbols of ethnic minorities with modern design, and inject the vitality of minority cultures into modern design. This is a natural path for the innovation and development of ethnic minority patterns in modern design, and it is conducive to the dissemination of minority patterns and the inheritance of minority cultures.
Introduction
With the improvement of economic conditions, people are gradually no longer satisfied with a rich material life and pay more attention to their spiritual life. At the same time, with the development of science and technology and the rapid rise of electronic products, people's lives have become more convenient. But all of this has brought a series of consequences: the fast pace of life makes people irritable, and more people choose to travel away from the hustle and bustle of the city, to the quiet and plain countryside and to minority areas, to experience ethnic customs. People's demand for minority handicrafts is growing, which raises the quality bar for such handicrafts; how to innovatively apply minority patterns in new handicrafts is a problem that urgently needs to be solved. However, the extraction of ethnic minority patterns has become a difficult problem for such innovation.
Research status of shape grammar: shape grammar is a design and analysis method based on the formal language theory proposed by the American design and computation theorists George Stiny and James Gips in 1972 [1]. Shape grammar has definite geometric shape elements and grammatical rules, which leave a traceable record of how a figure was derived. Shape grammar is often used in painting and decorative art, for example in the design of Palladio's villas and the Shenzhen concert hall. The works of the op artist Victor Vasarely are also designed by applying the basic method of shape grammar. The study of shape grammar is therefore valuable for the extraction of minority patterns.
Design Ideas of Shape Grammar and Pattern Extraction
Shape grammar is a design method used to compute shape evolution, which makes a shape derive new forms and symbol elements according to specific evolution rules [2]. It can not only retain the original shape and form, but also derive new symbolic elements. The methods used in shape grammar include rotation, translation, mirroring, deformation and so on. The basic patterns of ethnic minorities are obtained by searching for information and then summarizing them. Shape grammar can be used to extract ethnic minority patterns, increase the novelty of the extraction, and support the application of minority patterns in modern design. For the extraction, the most basic minority patterns, namely geometric patterns and text patterns, are used for the following reasons: 1) the use of shape grammar requires simple patterns; 2) the other patterns are too complicated to be read after transformation.
Shape Grammar Concept
Shape grammar originated from the "symbolic language" proposed by American computer theorists in 1972; it is a design and analysis method based on shape. It first appeared in automobile design, painting, graphic design, decorative art and web design. In daily life, the arrangement of pavements, the pattern design of Chinese windows, the composition rules of the tangram and the design of roof ridges all follow the transformation rules of shape grammar. Shape grammar works like a formula: according to the formula, elements are added or removed, rotated or translated to form a new pattern, and the resulting pattern is traceable. Because of this characteristic, more and more designers have begun to use shape grammar in their studies.
Ideas for Shape Grammar Extraction
First, establish the element; the element comes from the basic patterns of ethnic minorities. According to the characteristics of each element, the shape-grammar rules of rotation, translation and vertical mirroring are applied. For example, one pattern is rotated by 45°, with the rotation angle and the number of repetitions decided according to the characteristics of the pattern; it is then translated, and finally vertically mirrored to obtain a new pattern. One can also translate first and then rotate; how many times to rotate or translate is a subjective choice, and the result is again a new pattern.
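To make these rules concrete, the sketch below applies them to the points of a toy motif in C#. It is an illustration only; the motif coordinates, the 45° angle and the translation offsets are arbitrary choices standing in for a real minority pattern element.

using System;

struct Pt
{
    public double X, Y;
    public Pt(double x, double y) { X = x; Y = y; }
}

class ShapeGrammarRules
{
    // Rotate a point about the origin by the given angle in degrees.
    static Pt Rotate(Pt p, double deg)
    {
        double r = deg * Math.PI / 180.0;
        return new Pt(p.X * Math.Cos(r) - p.Y * Math.Sin(r),
                      p.X * Math.Sin(r) + p.Y * Math.Cos(r));
    }

    // Translate (parallel move) a point by (dx, dy).
    static Pt Translate(Pt p, double dx, double dy) => new Pt(p.X + dx, p.Y + dy);

    // Vertical mirror: reflect about the vertical axis.
    static Pt MirrorVertical(Pt p) => new Pt(-p.X, p.Y);

    static void Main()
    {
        // A hypothetical three-point motif standing in for a basic element.
        Pt[] motif = { new Pt(0, 0), new Pt(1, 0), new Pt(1, 1) };
        foreach (Pt p in motif)
        {
            // Rule sequence from the text: rotate 45 degrees, translate, mirror.
            Pt q = MirrorVertical(Translate(Rotate(p, 45.0), 2.0, 0.0));
            Console.WriteLine($"({q.X:F2}, {q.Y:F2})");
        }
    }
}

The derived point set is the new pattern; repeating the rule sequence with different angles or offsets yields further variants, each traceable back to the original element.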
Extraction of Basic Minority Patterns
In the process of their formation and development, the patterns of each nation acquired their own national characteristics. In exploring ethnic patterns, it was found that most minority pattern symbols consist of geometric lines, plant motifs, animal motifs and character motifs. These patterns may originally have been copies of a specific image or subject, may have evolved from plants, animals and natural objects, or may have served as totem signs [3]. Each nation forms its own ethnic patterns according to its own living habits, history, culture and legends. The geometric pattern is the most common national pattern and appears in almost every nation; history shows that geometric patterns appeared earliest, have been used since ancient times, and continue to be used now, so the geometric pattern has become the most basic pattern of every nation. From the simplest geometric lines, patterns slowly became more complicated, forming the plant and animal motifs that gradually became the rich variety of ethnic patterns we see today. As attention turns to ethnic minorities, ethnic handicrafts, clothing and related products have gained favor, and ethnic patterns are gradually being applied in modern design. In this process, extracting minority patterns for application in modern design is the key problem: because of the strong particularity of ethnic patterns, they are difficult to extract and apply in modern design, and innovating on their features is even harder [4].
Brief Introduction of Ethnic Minority Patterns
Ethnic minority patterns emerged over a long history, beginning in ancient times, and their development has gone through three stages. First, in ancient times there was no writing, and events were conveyed through patterns; quite a few such patterns can now be seen on excavated rocks. Over time, these marks gradually evolved into the patterns of different peoples. Second, the arts-and-crafts patterns formed in ancient China became an important part of the formation of national patterns. Third, under the influence of modern society, ethnic minority patterns have gradually entered the modern era.
Geometric patterns are mostly composed of regular and irregular shapes. Regular shapes include triangles, squares, and so on; irregular shapes are deformations of regular patterns. The geometric patterns of minority nationalities are composed of these two parts. Most geometric patterns of the minority nationalities in southwest China are mainly circular and square. Depending on the living environment, geometric patterns may differ, but apart from being restricted by taboo colors, national geometric patterns are basically consistent in their artistic construction, reflecting a shared aesthetic orientation [5], as shown in Figure 2. The minority character pattern is not the normal state of the traditional national pattern; character patterns originate from the recording symbols of ethnic minorities, are the joint product of the development of character-pattern art and written symbols, and are also the origin of minority calligraphy art. Character patterns are mainly based on the hieroglyphs of minority nationalities, but abstract characters also occur. For example, the auspicious symbols of the Tibetan people are basically artistic abstractions of Tibetan script, with deep blessings implied in the folding of the characters [7]. Figure 4 shows early minority character patterns; the character patterns there mainly come from the wan ("ten thousand") glyphs on ancient pottery pots. Animal patterns are usually used in national costumes through deformation and exaggeration. The animals used differ between nationalities: for example, the Zhuang and Yao worship cattle, the Hui use the pig, and the Miao the butterfly. The most common animal motifs are cattle and pigs; others include the butterfly, man, elephant, deer, dog, rabbit, rat, chicken, fish, bat, shrimp, bee, bird, horse, lion, frog, loach, goose, duck and sheep, as well as imaginary dragons, phoenixes and unicorns, human-beast hybrids from primitive witchcraft myths and legends, human ghosts and spirits, etc. [8].
Application of Symbol Extraction of Zhuang, Miao, Yi and Jinuo Nationalities
There are commonalities among nationalities, but each also has its own personality. Because patterns share a common developmental process and origin, the patterns of various nationalities are often nearly the same; however, the different lifestyles and living environments formed during each nationality's development give the same patterns different meanings for different nationalities. Over time, each ethnic group has formed its own patterns and culture. How can shape grammar be used to extract minority patterns? Representative elements of the 56 ethnic groups were sampled. First, four minority patterns, of the Zhuang, Miao, Yi and Jinuo, were chosen as research representatives. Second, the character and geometric patterns of these four groups were collected for comparison. Finally, the character and geometric patterns of these four minorities were extracted with shape grammar and compared. Through this comparative experiment, new patterns were obtained by using shape grammar to extract minority patterns.
Zhuang
Characters often appear in the embroidery of the Zhuang people. These characters include xi, shou, tian, mi and jing, which symbolize the yearning for and pursuit of a better life. Different characters and patterns carry different meanings and are used in the national costumes; the characters are unique in shape and meaning. The Zhuang people also adopted characters with a complex historical background, such as the swastika, which appears in the composition of Zhuang costumes [9]. All this shows that the Zhuang people have deep feelings for character patterns. In the diagram below, the swastika pattern was extracted, and elements were extracted according to the rules of shape grammar; the new element pattern, Figure 6, was obtained by means of translation, rotation, deformation and vertical mirroring.
Figure 6. The basic graphic evolution of Zhuang shape grammar.
Miao
As one of the ethnic minorities with a large population in China, the Miao nationality has enjoyed its own development and prosperity. Dragon motifs can often be seen in Miao dress patterns, and the plant patterns frequently contain flowers and grasses, with peony flowers especially common. At the same time, the literature on the Miao shows that their national costumes are quite similar to those of palace maids and guards, mainly because palace maids and guards fleeing the war-torn palace in ancient times escaped to the southwest and brought their costumes to the Miao tribes, influencing the clothing patterns of the Miao people. As for the geometric patterns of Miao costumes, some of the patterns themselves carry a specific meaning, while in other cases an overall symbolic meaning is expressed by a composite pattern; some are hieroglyphic forms of Chinese characters and abstract patterns transformed from deformed characters [10]. The cross pattern of the Miao is adopted here for the extraction of the Miao pattern. In Figure 7, the rules of shape grammar are used to extract the Miao elements, and the new element pattern is obtained by means of translation, rotation, deformation and vertical mirroring.
Yi
Yi geometric patterns are composed of points, lines and surfaces; in the geometric pattern in the figure below, the point, line and surface structure is very clear. Their modeling is concise, embodying a long aesthetic tradition, and they often combine with other concrete natural symbols to form new patterns [11]. If it continues to be ignored by young people, the inheritance of Yi patterns will become more and more difficult. Elements are extracted by combining the Yi geometric patterns with the rules of shape grammar, and new elements are obtained by means of translation, rotation, deformation, vertical mirroring, etc. Their application in modern design injects vitality into the inheritance of Yi patterns.
Figure 8. The basic graphic evolution of Yi shape grammar.
Jinuo
The Jinuo are a minority nationality with a relatively small population. Therefore, people know little about the Jinuo and lack knowledge of the characteristics of their ethnic patterns. The figure below shows the geometric patterns of the Jinuo nationality. These patterns are derived from plants and are easy to make by the cross-stitch method; their application range is therefore very wide, and almost every branch of the Jinuo nationality uses geometric patterns [12]. At the same time, it is relatively difficult to apply the ethnic patterns of the Jinuo in modern design. Shape grammar was used to extract the patterns of the Jinuo nationality, so that the ethnic characteristics of the Jinuo can be displayed better and faster in modern design. The figure below shows the new patterns obtained through the evolution rules of shape grammar.
Figure 9. The basic graphic evolution of Jinuo shape grammar.
Conclusion
This paper constructs minority pattern elements based on minority patterns and discusses the application of the Zhuang, Yi, Miao and Jinuo patterns. The principles of shape grammar are used to derive new patterns from the basic patterns of the Zhuang, Miao, Yi and Jinuo nationalities. In this paper, shape grammar uses translation, rotation, symmetry, vertical mirroring and other methods. Translation and symmetry are applied to the Zhuang, Yi and Miao patterns and can be used in graphic design, background design and packaging design. Vertical rotation and mirroring are reflected in the extraction of the Jinuo patterns, which can be used in web design. Using shape grammar to extract minority patterns, the extracted patterns have good visual effect and can thus be used in modern design. The results show that it is reasonable and effective to extract minority patterns with shape grammar, and the new patterns can promote the protection and inheritance of minority patterns. How to apply the newly obtained patterns in these designs will be the future work of this study. In this paper, the extraction of minority patterns only adopts simple graphic elements; the extraction of other minority patterns remains to be studied. | 2020-01-30T09:05:23.696Z | 2020-01-28T00:00:00.000 | {
"year": 2020,
"sha1": "3882ded52fe6d5eb7f9959e15235d227cdb84a2e",
"oa_license": null,
"oa_url": "http://dpi-proceedings.com/index.php/dtssehs/article/download/33762/32349",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "3450ebc936f47c649339ec157c70a9a6a3f0fa13",
"s2fieldsofstudy": [
"Art"
],
"extfieldsofstudy": [
"Sociology"
]
} |
53221060 | pes2o/s2orc | v3-fos-license | RANIBIZUMAB 0.5 MG TREATMENT IN ADOLESCENTS WITH CHOROIDAL NEOVASCULARIZATION: SUBGROUP ANALYSIS DATA FROM THE MINERVA STUDY
In the 12-month MINERVA study, a subgroup of 5 adolescent patients aged 13-17 years received open-label ranibizumab 0.5 mg at baseline, followed by an individualized pro re nata regimen based on disease activity, for the treatment of choroidal neovascularization. Visual and anatomical outcomes improved, and no new safety findings were observed with ranibizumab.
Choroidal neovascularization (CNV) in children is a rare ocular disease that can cause significant visual impairment and even severe vision loss if untreated. 1 In children, CNV may be associated with a specific underlying ocular condition, but in many cases, the cause remains uncertain. 1,2 The overall prevalence of CNV is much lower in children and adolescents than in adults; however, the prevalence increases with age and remains a cause for significant decline in visual function. 3,4 Currently, there is no approved therapy or established standard of care for the treatment of CNV in adolescents and children. The available treatment modalities for CNV in the pediatric population include observation, macular surgery, laser photocoagulation, verteporfin photodynamic therapy, and, more recently, the off-label use of anti-vascular endothelial growth factor (anti-VEGF) agents. 2,5,6 Limited data on the natural history of CNV in adolescents suggest that observation alone may be required in some cases. 7 Furthermore, although surgery may improve the visual acuity in such patients, there could be an inherent risk of ocular complications or the need for reoperation or postoperative laser treatment. 2,7 Laser photocoagulation is reported to be safe but involves a risk of scarring and thermal injury. 2 A small number of case reports and series have reported that verteporfin photodynamic therapy can improve visual acuity with immediate reduction in CNV leakage in children; however, retinal pigment epithelial atrophy has been reported. 2,3,8 Ranibizumab is approved for the treatment of CNV secondary to age-related macular degeneration (AMD) and CNV in adults. 9 The MINERVA study was specifically designed to evaluate the efficacy and safety of ranibizumab 0.5 mg using an individualized pro re nata regimen based on disease activity in adult patients with visual impairment due to CNV associated with any cause other than neovascular AMD and myopic CNV, with an open-label, nonrandomized setting in adolescent patients. 10,11 Here, we present the efficacy and safety results of ranibizumab 0.5 mg in adolescent patients with CNV enrolled in the MINERVA study.
Study Design and Population
The MINERVA study was a 12-month, Phase III, randomized, double-masked, sham-controlled, multicenter study in adult patients, with a nonrandomized, open-label group of adolescent patients. 10,11 Of the total 183 patients enrolled in this study, 178 were adults and 5 were adolescents. 10,11 The study was conducted in accordance with the Declaration of Helsinki. Written informed consent from guardians of adolescent patients and written assent from adolescent patients were obtained before any study assessments were performed.
Treatment-naive adolescent patients aged ≥12 to <18 years with visual impairment due to any CNV etiology were included in the study. Adolescent female patients with positive pregnancy tests were excluded from the study. Complete eligibility criteria have been described previously. 10,11
Treatment
All adolescent patients received open-label ranibizumab 0.5 mg in the study eye at baseline, followed by an individualized pro re nata regimen from Month 1 onward based on the disease activity, as assessed by the investigator at each monthly visit. Evidence of disease activity was judged clinically (e.g., visual acuity impairment, intraretinal/subretinal fluid, and hemorrhage or leakage) or based on real-time imaging and functional testing. Retreatment was warranted by the presence of disease activity and as per the investigator's discretion. The fellow eye could receive ranibizumab treatment if it presented with or developed CNV due to the same underlying disease as in the study eye during the course of the study.
For adolescent patients, intravitreal injections were administered and anesthetic procedures were performed as per the local practice.
Objectives
The objective of the study was to describe the efficacy and safety findings with ranibizumab 0.5 mg treatment in adolescents with any CNV etiology, similar to those assessed for adult patients over 12 months. 10,11 The study assessments included the following: 1) change in best-corrected visual acuity (BCVA) of the study eye from baseline to Months 2, 6, and 12; 2) change in central subfield thickness (CSFT) of the study eye from baseline to Months 2, 6, and 12; 3) change in macular volume of the study eye from baseline to Months 2, 6, and 12; 4) presence of subretinal fluid, intraretinal edema, and CNV leakage in the study eye at Months 2, 6, and 12; 5) treatment exposure in the study eye over 12 months; and 6) safety over 12 months.
Efficacy and Safety Assessments
Study assessments were performed at screening, baseline (Day 1), and at all monthly visits up to the last visit. Spectral domain optical coherence tomography (OCT) including Cirrus (Carl Zeiss Meditec, Dublin, CA) and Spectralis (Heidelberg Engineering, Dossenheim, Germany) was performed at each monthly monitoring visit. The images were evaluated for quantitative (e.g., CSFT and macular volume) and qualitative (e.g., macular edema, cysts, and intraretinal and subretinal fluid) anatomical parameters and their change over time by the central reading center (CRC; Bern Photographic Reading Center, Bern, Switzerland) and by the investigators at the sites. The analysis is based on the assessment by the CRC. Macular volume was recorded by the CRC as the volume of 3-mm diameter field around the foveal center. Fluorescein angiography (FA) and color fundus photography assessments are described previously. 10,11 Data were collected on the number of ranibizumab treatments received over 12 months.
Safety assessments included type, frequency, and severity of adverse events (AEs) and serious AEs, and the occurrence of abnormal vital signs or intraocular pressure ≥30 mmHg at any time point up to Month 12.
Statistical Analysis
The efficacy and safety outcomes of the adolescent patients were assessed descriptively as individual case reports at Month 12. Descriptive statistics included the number of observations (n), mean, median, SD (as required), and ranges for continuous variables, and frequencies and percentages for categorical values. Statistical analysis was performed using SAS (version 9.3 or higher).
Results
Five adolescent patients aged 13-17 years, diagnosed with CNV in the study eye, were included in the study. At baseline, two patients had CNV secondary to idiopathic chorioretinopathy, two patients had CNV due to Best disease, and one patient had CNV secondary to optic disk drusen (Table 1). All patients completed the 12-month study.
Efficacy
Best-corrected visual acuity improved from baseline to Month 12 in all 5 adolescent patients (Table 2). Baseline BCVA ranged from 34.0 to 82.0 letters (mean, 58.0 letters) and the change in BCVA from baseline to Month 12 ranged from +5.0 to +38.0 letters (mean, +16.6 letters). The mean change in BCVA of the study eye from baseline to Months 2, 6, and 12 was +9.2, +16.6, and +16.6 letters, respectively. One of the patients received ranibizumab in the fellow eye (Patient A) for the treatment of CNV secondary to Best disease (the same CNV etiology as in the study eye). The fellow eye had a BCVA gain of +12.0 letters at Month 12, from a baseline BCVA of 42.0 letters. Over 12 months, CSFT was stable or reduced in all 5 adolescent patients. The mean change in CSFT of the study eye from baseline to Months 2, 6, and 12 was −31.4, −87.6, and −116.4 μm, respectively (Table 2). In the majority of adolescent patients by Month 12, macular volume was reduced or stable, whereas subretinal fluid, intraretinal edema, and CNV leakage were absent. The OCT and FA findings (macular volume, subretinal fluid, intraretinal edema, and CNV leakage) of the study eye at baseline and Months 2, 6, and 12 are described in Table 3.
As an example, the FA, OCT, and color fundus photography outcomes for one of the patients with CNV secondary to Best disease are shown in Figure 1.
Treatment Exposure
Over 12 months, a mean of three ranibizumab injections (range, 2-5) was administered in the study eye out of a possible 12 injections (Table 4). The patient who received treatment in the fellow eye for CNV
Safety
No serious AEs or severe AEs occurred, and no AEs were suspected to be related to the study drug. Ocular and nonocular AEs of the study eye and the fellow eye experienced by the patients are summarized in Table 5. No clinically notable abnormal vital signs were recorded, and no patient had an intraocular pressure ≥30 mmHg in the study eye at any time after baseline. No deaths or cases of endophthalmitis were reported.
Discussion
Currently, anti-VEGFs are the first-line treatment for CNV lesions in adults with neovascular AMD and myopic CNV. 9 In the European Union, ranibizumab was approved in November 2016 to also treat CNV due to other causes in adults. 12 No clear standard of care or treatment paradigm is established for the pediatric population with CNV, although anti-VEGF agents, laser photocoagulation, and verteporfin photodynamic therapy appear to be effective treatment options in cases with severe vision loss. 8,[13][14][15][16][17][18][19][20] Ranibizumab is a well-established and approved therapy for neovascular AMD and CNV in adults, 9,10,[21][22][23] and therefore is likely to have a similar beneficial effect in a more diverse population with CNV lesions due to other etiologies. Thus, in the MINERVA study, all adolescents diagnosed with various CNV etiologies received an open-label ranibizumab treatment.
The MINERVA study included five adolescent patients with CNV secondary to idiopathic chorioretinopathy, optic disk drusen, and Best disease. Idiopathic CNV occurs in the absence of any known associated condition. 18 Optic disk drusen usually simulate papilledema, but the associated hemorrhages resulting from the CNV are largely responsible for the central vision loss in children and adolescents. 18 Best disease is characterized by vitelliform lesions of the central macula and electro-oculographic abnormalities. 19,20 Choroidal neovascularization secondary to Best disease is often associated with acute vision loss, but some cases may also progress to disciform scarring with vision loss. 19,20 In MINERVA, the adolescent patients with these CNV lesions reported a mean visual acuity gain of +16.6 letters at Month 12 with ranibizumab treatment. Few pediatric cases with these CNV lesions have also reported improvement of visual acuity with other treatment options, such as laser photocoagulation and verteporfin photodynamic therapy. 2,3,8,[18][19][20] Limited studies have reported the off-label use of anti-VEGFs in children with CNV irrespective of the underlying etiology. [13][14][15][16][17][24][25][26] Vision improvement has been reported in some patients with CNV aged 9-11 years after the administration of a few (range, 1-3) intravitreal bevacizumab injections. [24][25][26] In a 14-year-old girl with CNV due to acute multifocal posterior placoid pigment epitheliopathy, a single intravitreal ranibizumab injection resulted in complete regression of CNV. 13 Similarly, CNV due to traumatic Bruch membrane rupture resolved after a single ranibizumab injection in a 14-year-old boy. 14 Choroidal neovascularization due to toxoplasmosis in a 7-year-old patient was successfully treated with ranibizumab and antiparasitic therapy. 15 Kohly et al 16,17 described four pediatric patients with CNV, each of whom was treated with different intravitreal anti-VEGF agents: one patient was treated with pegaptanib sodium, two with bevacizumab, and one with ranibizumab. 16 Visual acuity was improved or maintained after two to five injections in all four pediatric patients. 16 Short-term results of ranibizumab treatment for CNV due to inflammatory chorioretinal disease and idiopathic CNV in patients aged 10 and 15 years, respectively, have been encouraging in improving visual acuity. 17 In this subset of the MINERVA study, ranibizumab 0.5 mg treatment over 12 months was beneficial in improving BCVA and stabilizing or reducing CSFT in adolescent patients with CNV. The associated CNV etiologies enrolled in this study were Best disease, optic disk drusen, or idiopathic chorioretinopathy. These are important findings, as the characteristics of CNV may differ between children and adults and may affect the prognosis and natural course of CNV. In a few studies, it has been suggested that such differences in the characteristics of CNV may lead to more favorable treatment outcomes in younger patients. [1][2][3][4] It should be noted that a mean of only three ranibizumab injections over 12 months was required, potentially preventing rapid deterioration of the underlying retinal disease and worsening of vision. These findings from the MINERVA study were consistent with previously published literature in children and adolescents, [13][14][15][16][17] including a few isolated case reports in which a single intravitreal injection of ranibizumab was reported to be effective in resolving CNV. [27][28][29]
Moreover, in MINERVA, ranibizumab injection was well tolerated in adolescents, considering the challenges in administering intravitreal treatment to younger patients. These results reinforce that ranibizumab may be an effective treatment option in adolescents with CNV. [13][14][15][16][17] In a clinical trial setting, the MINERVA study describes treatment with ranibizumab 0.5 mg in adolescent patients with CNV of certain etiologies: idiopathic chorioretinopathy, Best disease, and optic disk drusen. The adolescent part of this study entailed several limitations. The sample size was small, and no other CNV etiologies were enrolled. Other limitations included the open-label study design, with no control group and a relatively short follow-up duration of 12 months for the evaluation of retinopathy. However, because CNV is rare in the pediatric population, and owing to ethical considerations in a disease without any standard of care, it was not feasible to conduct a randomized, sham-controlled clinical study. Despite these limitations, it should be noted that all cases were well documented, including the patients' retinal imaging and OCT findings.
Ranibizumab treatment proved to be beneficial for improving visual acuity in these patients with relatively few injections, but in addition prevented the worsening of vision in this pediatric population. This improvement was accompanied by stabilization or reduction in CSFT over the 12-month period. Overall, ranibizumab 0.5 mg was well tolerated, and there were no new safety findings identified up to Month 12. The MINERVA study findings complement the limited available data on the use of ranibizumab for the treatment of CNV with various etiologies in adolescents.
Key words: adolescents, anti-vascular endothelial growth factor therapy, best-corrected visual acuity, choroidal neovascularization, multicenter, open-label, ranibizumab.
India) for her medical writing and editorial assistance toward the development of this manuscript. Steering Committee Members: P.G. Hykin (United Kingdom) (supported by the NIHRBMRC for Ophthalmology), G. Staurenghi (Italy), and T. Y.Y. Lai (Hong Kong) are the steering committee members of the MINERVA study and have contributed significantly toward the design of the study, interpretation of data, and development of this manuscript. | 2018-11-15T16:51:26.312Z | 2018-11-02T00:00:00.000 | {
"year": 2018,
"sha1": "ad2ac23061466470b2659e53ba9d6ed49bc53af2",
"oa_license": "CCBY",
"oa_url": "https://journals.lww.com/retinalcases/Fulltext/2021/07000/RANIBIZUMAB_0_5_MG_TREATMENT_IN_ADOLESCENTS_WITH.3.aspx",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "eb4b2f754737f820f9c4fdb04ec834a8f6b0958b",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
44412334 | pes2o/s2orc | v3-fos-license | Representation of Object-Oriented Database for the Development of Web Based Application Using Db4o
Impedance mismatch of data flow is the major problem with relational databases when developing web based applications. It hinders the development of scalable and reusable web applications, and because of it the cost of development and maintenance increases. The present work is therefore an attempt at an object-oriented database system, using Db4o to overcome the impedance mismatch problem. In the current work, the development of a web application for the Indian Postal Services using Db4o is demonstrated on the .NET platform. Before developing the application, the authors built a Unified Modeling Language (UML) model in the form of UML class, sequence and use-case diagrams. Db4o is used to store the object-oriented database, the performance of Db4o is observed through several object-oriented queries, and the corresponding results are demonstrated.
Relational databases are the most common database management systems (DBMS) used as back-ends for web based applications. In the current trend of software development, many software companies still use relational databases, but due to the impedance mismatch of data flow and the evolution of graphical user interfaces (GUI), the software industry is switching from relational to object-oriented databases. Let us first describe the important references related to databases. Elmasri and Navathe [1] have presented a clear explanation of theory and design, broad coverage of models and real systems, and an up-to-date introduction to modern database technologies, resulting in a leading introduction to database systems. Silberschatz et al. [2] have discussed the inner workings of database systems and also discussed a few commercial as well as open source database systems. Paterson et al. [3] have described how to configure Db4o, query and manage objects, perform transactions, and replicate data. Db4o is highly reliable and scalable, and cuts down on development time by integrating smoothly into the system, removing the costly object-relational mapping typical of larger applications. The Db4o developer community [4] presents various blogs and forums related to object-oriented database systems such as Db4o, Versant Object Database, Versant JPA and ODBMS.ORG, which help to gain a deep understanding of various topics. In [5], Rosenberger recounts that storing all objects manually in SQL took up half of the total development time; today Db4o is the best-known brand in the niche category of "object databases". A big RDBMS machine is oversized for a cell phone; in this case, a tight integration between the application and the database helps to achieve top performance. Db4o is also used for caching in near real-time environments when application models are complex, such as in the financial sector for trading applications.
Rădulescu [6] has given a brief overview of SQL technology, demonstrated its limitations regarding the persistence of an object-oriented model, and also discussed the principles and concepts of Db4o and its implementation on the .NET platform. Danturthi [7] has described a comparative study between three different methods of data access: SQL Server 2005 with stored procedures, LINQ to SQL, and Db4o. Db4o solves the problem of impedance mismatch, making the development of the database model much simpler and similar to the application domain model. Liaw et al. [8] have discussed how a three-tier web based application can be developed to take advantage of an object-oriented database instead of using ADO.NET or a traditional relational database. Bernardi et al. [9] have presented the validation and performance evaluation of systems with the help of UML sequence diagrams and state charts. They proposed an automatic translation of state charts and sequence diagrams into generalized stochastic Petri nets based on the abstract syntax of the UML collaborations and state machines packages.
Rumbaugh et al. [10] completely described UML concepts, including major revisions to sequence diagrams, activity models, state machines, components, the internal structure of classes and components, and profiles. Umoh et al. [11] have focused on creating a UML structure by specifying the use-cases, classes, and activities in a client-server application to design a web-based object-oriented database management system (OODBMS); a working prototype of the system on a three-tier client-server architecture is discussed in that paper. Chaurasia and Saxena [12] have also introduced a model designed through UML for a mobile based electricity bill deposit system, considering the real case study of the Uttar Pradesh electricity bill deposit system. In [13], Tagliati and Caloro have discussed the application of UML to the analysis of dynamic words. Mangala [14] also described the need for ASP.NET and designed the programming model through ASP.NET, a completely re-engineered and enhanced technology that offers much more than traditional ASP and can increase productivity significantly.
Object-Oriented Database
The evolution of the internet and extranets has increased the usage of web-based technology, and companies have shown interest in object-oriented database management systems (OODBMS) to handle complex data. An OODBMS is a fusion of two technologies: object-oriented systems and database management systems. New applications require the data persistence, transactions, authorization, distribution, buffering and data scalability associated with a database system, which an OODBMS fulfills. The term object-oriented database system first appeared in 1985, and 2004 marked a second growth period, when open source database systems such as Db4o and DTS/SI emerged that were affordable and easy to use. Db4o (database for objects) is open source, bi-licensed software (General Public License and commercial), written in both Java and .NET, and it can run on any operating system that supports Java or .NET. It stores objects directly, without changing their characteristics or cropping them to fit into the tables of a relational data model. The Db4o project was started in 2000 by Carl Rosenberger; it was commercially launched in 2004 by Db4objects Inc. and was bought by Versant Corporation in 2008.
Unified Modeling Language
The Unified Modeling Language (UML) is a very powerful graphical modeling language for specifying, constructing, visualizing and documenting the artifacts of a software system. It is a collection of best engineering practices that have proven successful in modeling the design of huge and complex systems. Modeling with UML falls into three categories: the class model, the state model and the interaction model. Class modeling is done with the help of object diagrams and class diagrams, state modeling with state diagrams, and interaction modeling with activity diagrams, use-case diagrams and sequence diagrams. The main task of UML is to create a simple, well documented and easy to understand software model.
Web Based Applications
Web based applications are classified into two major categories: static and dynamic. Static web applications are those whose elements are static HTML pages; in this environment, the end user cannot interact with or modify the application behaviour. Dynamic web applications, on the other hand, can interact with the user and modify their behaviour. The end user gives input, which can be parsed either at the client end or at the server end. Client-side scripting languages such as VBScript and JavaScript are used to parse and react to the client input. Using a client-side scripting language reduces network traffic and also increases the throughput of web based applications.
Server-side scripting is a web server technology in which programs are executed on the server to generate dynamic web pages. The languages used for these tasks are normal programming languages with special libraries/packages for server-side scripting, such as ASP, C via CGI (*.c, *.csp), PHP (*.php), ASP.NET (*.aspx), and Java via Java Server Pages (*.jsp).
The clients use web browsing software to interact with the web server. In ASP.NET, when any *.aspx file is requested, the request is sent to the Internet Information Server (IIS), which passes it to the .NET engine. The .NET engine parses the client request and converts it to HTML, which is returned to IIS. Finally, IIS returns the HTML to the web browser. Hypertext transfer protocol (HTTP) and file transfer protocol (FTP) are used to handle the client requests and server responses.
Architecture of Web Based Application
The structure of the web based application can be singletiered, two-tiered or three-tiered.Here, authors are using the three-tiered architecture as shown below in Figure 1, which contains the presentation layer, application layer Windows XP (SP1) operating system.An object-oriented database management system Db4o 7.14 is used as a back-end.Db4o is an open source OODBMS, which is freely available and it can be downloaded from: http:// community.versant.com/Downloads.aspx.The software and hardware requirements are described above in the Table 1.
Installation of Db4o database is done with the help of an object browser known as Object Manager Enterprise (OME).After installing it as a plug-in of Ms-Visual Studio 2008, one can get a sub option of object Manager Enterprise in tools option of menu bar.After the completion of installation, new tools are added in the window and it is shown in Figure 2. To create a new web application choose "New Project" from the file menu, in the pop up window select "web" from the visual c# project type.Then choose "ASP.NET Web Application" from the template sub window as shown in Presentation of data and GUI environments are provided to the user with the help of presentation layer.All the rules for the accessing of data and business logic is presented in the application layer.The data layer handles the requests from the application layer.Figure 1 represents the three tier application architecture of web based applications.
Sample Queries for Indian Post Office Database
The project has been developed in the Microsoft Visual Studio 2008 IDE with ASP.NET using C# on the Microsoft Windows XP operating system. After completing these tasks, one can store new objects in the database. Generalization of the use-case "Add Scheme" is done into two child use-cases, "Add Banking Services" and "Add Non-Banking Services". Similarly, the use-case "Make Trade" has two specialized child use-cases, "Credit Card Mode" and "Net Banking Mode".
UML Sequence Diagram
The sequence diagram shown in Figure 8 has five objects: Admin, Scheme_Database, Office_Database, Business_Layer, and DataAccess_Layer. In this diagram, the Admin adds a new scheme to the Scheme_Database: the request is sent to the Business_Layer, and the PerformUpdate(SID, Qty) operation is performed at the DataAccess_Layer. The Admin can also add new employees and branches in the office database; again the request is sent to the Business_Layer and the PerformUpdate( ) operation is performed at the DataAccess_Layer. The Admin can perform queries for the available schemes in the Scheme_Database: the ComputeQuery( ) operation is performed at the Business_Layer and the RetrieveResult( ) operation is performed at the DataAccess_Layer. Finally, the result is passed to the Admin.
UML Class Diagram
The UML class diagram shows the static structure of the system, in which the attributes and operations are designed for the complete system.
Conclusion
Most web applications are developed using a relational database management system (RDBMS), but considerable time is consumed in relating the .NET object model to the relational database model. An object-oriented database management system (OODBMS) is an alternative to an RDBMS that is efficient and easy to use. The web application for the post office presented here was developed in the Microsoft Visual Studio IDE and takes advantage of the object-oriented database system Db4o. Further studies can be done to explore new capabilities and potentials of Db4o, such as data mining and clustering of object-oriented databases.
We have to add the Db4objects.Db4o.dll file to the references of the project; the library is freely available, and the process is shown in Figures 3 and 4.
Figure 5. Storing objects in C# for Db4o.
The querying mechanisms available in Db4o are: Query By Example (QBE), Simple Object Data Access (SODA), and Native Queries (NQ). The authors have used native queries for the querying purpose in this paper. The query is expressed as a predicate in a .NET language over the object template, and the problem of impedance mismatch is completely removed when one uses native queries in Db4o. Code for the native query is given below: try {
Figure 7 represents a use-case diagram for the Admin, which has five use cases: Search Scheme, Add Scheme, Validate Password, Make Trade, and Add Emp & Branches.
Figure 9 represents the class diagram of the online post office.
Figure 7. UML use-case diagram for Customer.
Figure 9. UML class diagram for the application.
Table 1. Software and hardware requirements.
Object files are the class objects with their properties. Here we have four major classes: User, Scheme Database, Office Database, and SalesRecord. The User class has three sub-classes: Admin, Employee, and Customer. The User object file is created by defining a class in C# with all its properties; these properties are converted into a database object file for Db4o. The object storage mechanism in Db4o is shown in Figure 5. | 2017-12-10T00:38:09.839Z | 2012-09-14T00:00:00.000 | {
"year": 2012,
"sha1": "7e274b5652c63f3d71c14777dcb21b00cb8ce86e",
"oa_license": "CCBY",
"oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=22361",
"oa_status": "HYBRID",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "7e274b5652c63f3d71c14777dcb21b00cb8ce86e",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
232310883 | pes2o/s2orc | v3-fos-license | A Comparison of Environment Classification Among Premium Hearing Instruments
Hearing aids classify acoustic environments into multiple, generic classes for the purposes of guiding signal processing. Information about environmental classification is made available to the clinician for fitting, counseling, and troubleshooting purposes. The goal of this study was to better inform scientists and clinicians about the nature of that information by comparing the classification schemes among five premium hearing instruments in a wide range of acoustic scenes including those that vary in signal-to-noise ratio and overall level (dB SPL). Twenty-eight acoustic scenes representing various prototypical environments were presented to five premium devices mounted on an acoustic manikin. Classification measures were recorded from the brand-specific fitting software then recategorized to generic labels to conceal the device company, including (a) Speech in Quiet, (b) Speech in Noise, (c) Noise, and (d) Music. Twelve normal-hearing listeners also classified each scene. The results revealed a variety of similarities and differences among the five devices and the human subjects. Where some devices were highly dependent on input overall level, others were influenced markedly by signal-to-noise ratio. Differences between human and hearing aid classification were evident for several speech and music scenes. Environmental classification is the heart of the signal processing strategy for any given device, providing key input to subsequent decision-making. Comprehensive assessment of environmental classification is essential when considering the cost of signal processing errors, the potential impact for typical wearers, and the information that is available for use by clinicians. The magnitude of differences among devices is remarkable and to be noted.
Introduction
In most listening situations, a dynamic mixture of sounds from multiple sound sources simultaneously reaches our ears. Despite the fact that the elements in this mixture are typically interleaved and overlapped in time and frequency, the auditory system is able to accurately parse and group different patterns of the sound sources in terms of timing, space, and frequency into a coherent sound stream through a process known as auditory scene analysis (Bregman, 1990; Büchler et al., 2005). This phenomenon of parsing, grouping, and streaming is one of the theoretical bases for the classification system in hearing aids. In an analogous process, modern hearing aids automatically classify the incoming acoustic mixture into one or more of a larger set of sound scene categories. Because everyday situations present a mixture of speech sounds, musical sounds, environmental sounds, and low-level or quiet moments, the hearing aid is tasked with continuous classification of the listener's current acoustic environment.
The automatic classification is based on many different acoustic analyses over various time scales, some of which are categorical and some of which are scalar. Environmental classification is perhaps the most important function of a modern hearing aid: it relies on current environmental factors to guide decision-making regarding automatic signal processing involving simple and advanced digital signal processing (DSP) features.
Decisions regarding the nature and methods of signal processing are based, in part, on an initial and ongoing classification of the current acoustic scene. The resulting classification is used to populate the datalogging feature of the fitting software for most hearing aids and was available for each of the devices evaluated in the current investigation. Such datalogging information can be used by a clinician to better understand the nature of the listening environments encountered by a given patient. Clinicians also can use this information in patient counseling, as a troubleshooting tool, or as the basis for device adjustment or accessory recommendations. While information from the datalogging feature within the fitting software is the only data readily available to clinicians, it should be noted that investigation of hearing aid classifiers based on this datalogging data is cursory rather than comprehensive. The datalogging feature does not reveal the dynamics of the classification output and does not reveal if, when, or how the hearing aid uses level detectors, estimates signal-to-noise ratio (SNR), estimates the presence of wind noise or feedback, or compares information across aids. For example, others have shown that output SNR can vary significantly among multiple devices for the same speech-in-noise environment (Miller et al., 2017). Nevertheless, the datalogging feature is what is made available to the clinician, and thus what forms the basis for clinical decision-making. The present study is a detailed evaluation of the environment classifiers, as measured by company-specific datalogging, from five major hearing aid companies and is presented in conjunction with listener judgments.
Advanced DSP features are intended to adapt the corresponding signal processing to a scene class to improve listener experience. In other words, the choice of which DSP feature(s) to engage and the strength of engagement requires some knowledge of the types of stimuli present in the acoustic environment (i.e., environment classification). In most cases, the critical information regarding classification and subsequent decision-making is not widely distributed or known, as the rules governing such processing and the consequences of those rules are typically proprietary and technical in nature. From a broader clinical perspective, a given hearing aid is created with a certain design philosophy that includes the nature of the level-dependent gain-frequency model, the dynamics of that model in terms of amplitude compression, the engagement of other signal processing features, and interactions among these. In many ways, the initial step in implementing the design philosophy begins with environmental classification, a process that typically is not well understood outside of the design team. This investigation represents an initial attempt to gain an understanding of similarities and differences among the environmental classification processes employed in the premium products of five major hearing aid companies. It involves collecting information about environmental classification that technically is available to the clinician, though the data collection methods are generally prohibitive for the average clinician or clinician-scientist. In collecting and analyzing such data, we highlight several key acoustic features that influence such classification including overall level, SNR, stimulus source number, and stimulus source type, from which some of the underlying philosophical differences can be inferred.
Early sound classification algorithms were developed based on subjective judgment, such as listening-environment preference (Elberling, 1999; Fedtke et al., 1991). Based on a library of relevant sounds and different kinds of background competition, multiple amplification schemes were developed by identifying different hearing aid characteristics for these desired listening conditions (Kates, 1995; Keidser, 1995, 1996). Consequently, current hearing aids can be conceptualized as providing several different "programs" with each program tailored to a particular class of sound environments and/or to particular user preferences. With advances in automatic processing, however, the concept of distinct programs is giving way rapidly to dynamic arrays of individual signal processing features and sets of features that may be engaged or disengaged synchronously, individually, or by degree based on classification of the acoustic environment and other real-time monitoring such as sound pressure level (SPL) and SNR.
Today, environment classifiers available in premium hearing aids possess a fixed number of environment classes (as many as nine; e.g., speech-in-quiet, quiet, speech-in-noise, noise alone, music, etc.). Each device classifier is pre-trained on a known set of audio files using computational algorithms that learn which sound features are best associated with each class. The algorithms often follow standard approaches like a Bayes classifier (Lamarche et al., 2010; Ostendorf et al., 1998), neural networks (Freeman, 2007; Park & Lee, 2020; Zhao et al., 2018), or Hidden Markov models (Dong et al., 2007; Freeman, 2007; Nordqvist & Leijon, 2004). Training data are deconstructed into spectro-temporal acoustic features as they would be in real-time in the device, ranging from simple (e.g., overall level or level within frequency channel) to complex feature sets including those based on perceptual models of human hearing (e.g., modulation frequency and depth; mel-frequency cepstral coefficients, etc.; Ravindran et al., 2005). For example, complex scenes with speech are often classified based on their spectral profile and temporal envelope (Chen et al., 2014; Feldbusch, 1998; Kates, 1995), their statistical amplitude distribution (Wagener et al., 2008), or their characteristic temporal and/or spectral modulation frequencies (Nordqvist & Leijon, 2004; Ostendorf et al., 1998).
Classifiers exist physically as software stored and running on a microchip that creates a set of weighting functions that have dimensions specific to a company's desired number of classes and the acoustic features associated with those classes. As shown in Figure 1, at the earliest input stage (e.g., after the microphones), the classifier extracts the acoustic features of the incoming signal before applying weighting functions that project to a class or classes. Depending on the weight matrix, the system switches between the classes in postprocessing or blends class-dependent postprocessing. The resultant class or classes then affect decision rules for hearing aid features, such as directional microphone strategy, amplitude compression/expansion, and adaptive noise reduction. It is important to note that though the system performs these computations in real-time, companies often apply some temporal rules to avoid frequent DSP changes which could lead to adverse listening experiences. The pace at which a device may change environments is company-specific, may vary widely across companies, and is virtually unknown to and unknowable by the clinician (Mueller et al., 2014).
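To make the weighting step concrete, the following C# sketch projects a small feature vector through a class-by-feature weight matrix and picks the highest-scoring class. The two features, the weight values, and the class names are our own illustrative assumptions and do not represent any company's actual classifier.

public class ToyEnvironmentClassifier
{
    static readonly string[] Classes = { "SpeechInQuiet", "SpeechInNoise", "Noise", "Music" };

    // Hypothetical per-class weights over two toy features:
    // [normalized overall level, envelope modulation depth near the syllable rate].
    static readonly double[,] W =
    {
        { -0.8,  1.2 },   // SpeechInQuiet: lower level, strong speech-rate modulation
        {  0.6,  0.9 },   // SpeechInNoise: higher level, modulation still present
        {  0.7, -1.0 },   // Noise: higher level, relatively flat envelope
        {  0.1,  0.2 }    // Music: weakly tied to these two features
    };

    public static string Classify(double levelDbSpl, double modulationDepth)
    {
        double level = (levelDbSpl - 70.0) / 15.0;   // crude normalization around 70 dB SPL
        double best = double.NegativeInfinity;
        int arg = 0;
        for (int c = 0; c < Classes.Length; c++)
        {
            double score = W[c, 0] * level + W[c, 1] * modulationDepth;
            if (score > best) { best = score; arg = c; }
        }
        return Classes[arg];
    }
}

In a real device, the feature set is far larger, the weights are learned from training audio, and temporal smoothing rules limit how quickly the winning class may change.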
Every hearing aid manufacturer has engineers who design their unique classification schemes. There are potentially hundreds of acoustic parameters that could positively or negatively influence the quality of each of the classification schemes they design. But the devices themselves have limited physical resources to implement detection and actuation on the basis of all of those acoustic parameters. Thus, choices must be made, and limits set as to what is most important in the acoustic milieu for the purpose of their scheme. Those choices are based on the company's and the engineer's philosophy of what is going to be most efficacious for the listener in the widest range of listening environments. It is that bespoke philosophy that determines how the device will classify and ultimately accommodate each listening environment in which it is worn. Hence, there is simply no way that all hearing aid classifiers are created equal. Understanding the philosophy of each company, therefore, should be a contributing factor when prescribing hearing aids according to the listener's individual lifestyle, abilities, desires, and needs. The present study was designed to assess the behavior of the classifiers of five different hearing aid companies using a broad array of acoustic environments. Because each company has its own class labeling, the results of the measurements given here were transformed to four major classes in order to compare across companies. In each case, great effort was taken to line up equivalent classes across companies based on the intended use of each uniquely named company class. Finer subclass divisions of each company might reveal the uniqueness and philosophical disparity for each company, but these granular points were not the focus of the study, nor was the intention to single out any one specific company.
Previous reports on hearing aid classifiers have described individual methods of classification or have compared various types of classification tools (Abe et al., 2011; Büchler et al., 2005). Development of classification algorithms involves a balance between identifying and defining some number of relevant acoustic environments and the ability of classification procedures to do so accurately and efficiently. Information about the relevant environments and the frequency with which typical hearing aid wearers are in those environments has been obtained by self-report (Keidser, 2009; Walden et al., 2004), acoustic recordings and subsequent off-line analyses (Wagener et al., 2008), synchronized acoustic recordings and self-report (Wu & Bentler, 2012), and via datalogging features that catalog the classifier results over time during real-world hearing aid use (Humes et al., 2018; Taylor & Hayes, 2015). It is interesting and reassuring that each of these methods, with their relative advantages and disadvantages, has converged on very similar information. To summarize the results of the investigations cited earlier, the environmental descriptors were equated, and the data were averaged. This average provides an approximation to the proportion of time that hearing aid wearers (sampled with a clear elderly age bias) are in "quiet" (~28%), "noise alone" (~23%), and speech plus noise (~29%). Interestingly, despite the fact that the chief complaint of a person with hearing loss is difficulty listening to speech in background noise (Beck & Le Goff, 2018; Nabelek et al., 1991; Wu & Bentler, 2012), and the fact that hearing aid wearers are most dissatisfied with the performance of their devices when they are in the same environment (Nabelek et al., 1991; Plyler et al., 2019; Turan et al., 2019; Walden et al., 2004), actual wearers are only in such environments a fairly small proportion of the time. Their complaints, however, are primary factors driving motivation to seek hearing aids (Olsen et al., 2014; Takahashi et al., 2007; Turan et al., 2019) and satisfaction with hearing aids (Huber et al., 2018; Kochkin, 2005; Korhonen et al., 2017; Picou, 2020; Taylor & Hayes, 2015; Wong et al., 2003; Wu et al., 2019). Thus, manufacturers continue to focus on development and refinement of signal cleaning strategies to mitigate the effects of background noise, in addition to refining the classification strategies used to govern the signal cleaning strategies.
Few investigations, however, have directly compared the results of the classification process across hearing aid companies under comparable circumstances. The study reported by Groth and Cui (2017) did just that and included two main components. The first was human subject evaluation of selected acoustic environments, for which interrater agreement was high and judgments appeared to represent accurate descriptors. The second involved assessment of hearing device classification of the same selected acoustic environments, as coded by the datalogging feature associated with each company's fitting software. For the latter, accuracy was defined as agreement between device and human subject classification. For the quiet, speech, and steady noise environments, the classification performance was highly consistent among the devices from six different companies. For the speech babble and noise scenes, five of the six devices had similar classification performance. One device (from different companies) in each of those scenes had a fairly high proportion of unexpected classification results. As the scene complexity increased by combining turn-taking conversational speech with one of four different "noisy" backgrounds, more substantial differences among the devices were revealed. Each of those four scenes was considered to be speech in noise by the human subjects, though the proportion of time the six devices classified those scenes as speech in noise ranged from 98% to 41% with an average across devices of about 67% of the time. It would be even more interesting to know how the speech-in-noise scenes were classified when unexpected classes occurred. Likewise, there was some variability in the accuracy of classification when faced with music as the primary or secondary source in the acoustic scene. Overall, the study revealed a fairly high degree of parity among the classifier results for relatively simple or unitary environments and more diverse results for the speech in noise and music scenes. To further challenge classifier performance, specific scenes could also vary systematically in overall level and in signal-to-background ratio. It is likely that many manufacturers use estimates of overall level and signal-to-noise ratio in their classification schemes as part of their comprehensive analyses. This would be especially interesting in the case of music, as one could imagine background music emerging as the primary signal of interest (or a distraction) as the music-to-background ratio increases from negative to positive. Furthermore, the classifier for a given device and given scene will always weigh the possible categories to a sum of 100%. Thus, when a classification result is unexpected, or when there is an ambiguous scene, it is important to consider the proportions assigned to each possible class. For these reasons, the present investigation includes a wide array of acoustic scenes with systematic changes in overall level and signal-to-background ratios in a design that is somewhat similar to that described by Groth and Cui (2017) but that presents a more detailed analysis of the classifier data.
Acoustic Scenes
Acoustic scenes were developed by mixing different speech and nonspeech sounds chosen from an in-house library of audio files of various durations. Original files were digitized at a 44.1 kHz sampling rate and stored in separate mono 16-bit .WAV format. Speech passage recordings were drawn from both male (74 s) and female (54 s) talkers, and nonspeech sounds included music (214 s), a subway platform (177 s), a food court (238 s), a playing-card hall (240 s), and 10-talker babble (70 s). The speech sounds are reproduced on the "Phonak Sound CD 2" (D41-0508-02/0501/CU) distributed by Phonak AG. The music passage "My Baby Just Cares for Me" is distributed by FreeSound.org, as was the recording of the London tube subway sound. All other sounds were recorded and mastered at Unitron. Speech always was presented from a loudspeaker located at 0° relative to the head. The subway, food-court, and card hall each included four unique audio channels that were presented from four loudspeakers spatially separated by 90° (45°, 135°, 225°, and 315°). The 10-talker babble was a single recording presented diffusely from six loudspeakers (45°, 90°, 135°, 225°, 270°, and 315°). Music was a single track from a jazz artist (Ella Fitzgerald, "My Baby Just Cares for Me") presented in stereo from loudspeakers at 45° and 315°.
Eight primary acoustic scenes, 80 min in duration, were derived from looping the audio files. Among the primary scenes, 28 conditions were created by varying overall level (L Scenes) or the SNR (S Scenes). For subsets in which overall level varied, SNR was fixed. Likewise, for subsets for which SNR varied, overall level was fixed. Figure 2 (top row) shows the long-term average spectra for each of the isolated audio files (left panel) and primary scenes (L Scenes [middle panel] and S Scenes [right panel]) with 0-dB SNR and normalized to an RMS of 1 (magnitude is given in dB full scale). The temporal modulation index (Gallun & Souza, 2008; Krause & Braida, 2004) was computed for octave bands centered at 0.5, 1, and 4 kHz. The bottom row of Figure 2 provides the modulation index for the 1 kHz octave band, which indicates the dominant modulation rate and relative depth across scenes in that frequency region. Table 1 describes the individual conditions by their prominent audio (e.g., speech, music, and noise), overall level, SNR, and modulation index (at 0.5, 1, and 4 kHz octave bands). Although not an exhaustive arrangement of acoustic scenes, all the SNR and level values were chosen to be within the typical range of realistic if not challenging listening environments. The music track was chosen because it contained both voiced and instrumental audio, and also because listeners have previously been shown to be more sensitive to differences in hearing aids when listening to jazz (Vaisberg et al., 2017).
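Although the stimulus-preparation code was not published, the gain arithmetic implied by these SNR and level specifications follows directly from the dB definitions. The C# sketch below (our own illustration; the signal buffers are placeholders) scales a noise signal so that the speech-to-noise ratio equals a target SNR and then scales the mixture to a target RMS; calibration from digital RMS to dB SPL depends on the playback chain and is assumed to be handled elsewhere.

using System;

public static class SceneMixer
{
    static double Rms(float[] x)
    {
        double sum = 0;
        foreach (float v in x) sum += (double)v * v;
        return Math.Sqrt(sum / x.Length);
    }

    // speech and noise must share length and sample rate.
    public static float[] Mix(float[] speech, float[] noise, double snrDb, double targetRms)
    {
        // Scale noise so that 20*log10(rms(speech)/rms(scaledNoise)) == snrDb.
        double noiseGain = Rms(speech) / (Rms(noise) * Math.Pow(10.0, snrDb / 20.0));

        float[] mix = new float[speech.Length];
        for (int i = 0; i < mix.Length; i++)
            mix[i] = speech[i] + (float)(noiseGain * noise[i]);

        // Scale the mixture to the target overall level (fixed-SNR, varying-level scenes).
        double levelGain = targetRms / Rms(mix);
        for (int i = 0; i < mix.Length; i++)
            mix[i] = (float)(levelGain * mix[i]);

        return mix;
    }
}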
Hearing Aid Classification
The acoustic scenes were presented in the free field using a 24-channel speaker (KEF Q100) array with subwoofer (KEF KUBE-I) in a sound-attenuating booth (Acoustic Systems RE-245). Digital-to-analog conversion was handled by a MOTU 24ao routed to three 8-channel power amplifiers (Ashley ne8250). To improve test efficiency, up to three pairs of hearing aids were evaluated simultaneously using a Klangfinder Twinface (Klangspektrum) head and ear simulator positioned in the center of the speaker array (41 in. radius) and adjusted such that the center of the aperture of the middle ear canal was level with the center of the dual concentric cone drivers in the KEF loudspeakers (Figure 3).
To evaluate classification of acoustic scenes by hearing aids, audio scenes were presented for the full 80 min. This duration was chosen based on pilot data which revealed that some of the chosen devices required more than an hour but less than 80 min to reliably populate the datalogging feature in the commercial fitting software. The premium models (launched in 2017) from each of five major hearing aid companies (subsequently labeled as A, B, C, D, and E) were selected for comparison. All devices had a miniature behind-the-ear, receiver-in-the-ear form factor (RITE or RIC) and were powered by size 312 zinc-air batteries. Device programming involved choosing the company "first fit" option and using the corresponding default settings; programming was done via the company-specific clinical fitting software through the HiPro2 (Otometrics) USB interface using the same generic mild-sloping hearing loss audiogram. It is expected that other fitting strategies would not impact classification performance, but this was not evaluated specifically. The datalogging feature of each company was enabled and reinitiated prior to each audio scene presentation, and the placement of hearing aids on the Klangfinder Twinface was counterbalanced by company per condition to avoid possible effects of elevation differences.
The number of environment classes and corresponding company-supplied descriptions of the prototypical classes varied across the companies. The technical aspects of defining the environment and the analysis methods of the acoustic scenes are proprietary to each company. Therefore, to keep the analysis consistent across the devices, data-log classes were remapped to the following four generic classes -Speech in Quiet, Speech in Noise, Noise and Music. Re-mapping of classes was done based on a review of DSP features for a given class across all five devices using the information available publicly. This allowed for more direct comparisons per sound class among devices and the subjective classification from human subjects. However, it should be noted that by removing the granularity of the classifiers specific to each company, it is possible that the following observations will be considered too general and not capture the full breadth of each company's classification philosophy. Nevertheless, the present design was chosen to provide the fairest cross-company comparisons.
Human Listener Judgment
Twelve young normal-hearing adults (age M ± SD: 24 ± 2.25 years; 8 females, 4 males) participated in the environment judgment task. All had normal hearing thresholds (i.e., <20 dB HL) at octave frequencies between 250 and 8000 Hz and reported no history of neurological disorders. Each provided written informed consent following procedures approved by the University of South Florida institutional review board.
To evaluate classification of acoustic scenes by normal-hearing listeners, two-channel audio files were recorded using microphones (1/2 in. B&K model 4134 condenser mic) mounted in Zwislocki ear simulators (B&K model DB100) in a KEMAR acoustic manikin (Knowles Electronics, Chicago, IL) and connected to a preamplifier (B&K model 2966), amplified with a G.R.A.S. model 12AA conditioner, and routed to the Motu 24ao audio interface that sampled the stimuli at 44.1 kHz. The sounds were equalized digitally for playback over Sennheiser Precision 580 headphones in a single-walled sound-attenuating booth. Each recorded audio file was 2 min in duration. Processed audio files are provided in the supplemental material of this report.
Listeners were presented with the same 28 conditions used in the device tests. Each self-paced test consisted of three trials per scene (pseudo-randomized). Listeners were instructed to identify the sound scene by using a maximum of four out of six key phrases: (1) "listening to speech in quiet," (2) "listening to speech in noise," (3) "listening to music," (4) "mostly quiet," (5) "mostly noise," and (6) "mostly music." The key phrases were designed to probe perceived foreground. To directly compare to the device tests, choices (1) and (4) were combined and (3) and (6) were combined to leave four generic classes as in the device tests earlier. Subjective classification was tallied for each sound scene and tested for inter-and intrasubject reliability. Intersubject variability was evaluated using an intraclass correlation coefficient (ICC) based on absolute agreement and a two-way mixed model in SPSS (Bland & Altman, 1999). Values greater than 0.9 indicate excellent reliability, values between 0.75 and 0.9 indicate good reliability, values between 0.5 and 0.75 indicate moderate reliability, and values less than 0.5 indicate poor reliability (Koo & Li, 2016).
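As a rough illustration of the agreement statistic, the following C# sketch computes a two-way, absolute-agreement, single-measure ICC from an n-by-k matrix of ratings (n scenes by k listeners) using the standard mean-square decomposition. This is our reconstruction of the textbook formula, not the SPSS routine used in the study, and it omits the confidence intervals a full analysis would report.

using System;

public static class Icc
{
    // ratings[i, j]: rating of target i (scene) by rater j (listener).
    public static double TwoWayAbsoluteSingle(double[,] ratings)
    {
        int n = ratings.GetLength(0), k = ratings.GetLength(1);
        double grand = 0;
        double[] rowMean = new double[n];
        double[] colMean = new double[k];
        for (int i = 0; i < n; i++)
            for (int j = 0; j < k; j++)
            {
                rowMean[i] += ratings[i, j] / k;
                colMean[j] += ratings[i, j] / n;
                grand += ratings[i, j] / (n * k);
            }

        double ssRows = 0, ssCols = 0, ssErr = 0;
        for (int i = 0; i < n; i++) ssRows += k * Math.Pow(rowMean[i] - grand, 2);
        for (int j = 0; j < k; j++) ssCols += n * Math.Pow(colMean[j] - grand, 2);
        for (int i = 0; i < n; i++)
            for (int j = 0; j < k; j++)
                ssErr += Math.Pow(ratings[i, j] - rowMean[i] - colMean[j] + grand, 2);

        double msr = ssRows / (n - 1);               // between-targets mean square
        double msc = ssCols / (k - 1);               // between-raters mean square
        double mse = ssErr / ((n - 1) * (k - 1));    // residual mean square

        // ICC(A,1): absolute agreement, single rater, two-way model.
        return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n);
    }
}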
Inter- and Intrasubject Response Reliability for Audio Scenes
The intersubject reliability was inferred from two measures of ICC: (a) across all class judgments and (b) within an audio scene type. First, among the four different possible class judgments, listeners were considered to have excellent reliability for judging speech-in-noise (ICC = 0.97), noise (0.96), music (0.91), and quiet (0.89). Second, within each of the audio scenes, listeners judged scene classes with excellent reliability for seven of the eight audio scene types (ICC between 0.83 and 0.99). Listeners' judgments of the speech in a subway background (S Sub ) were only moderately reliable (0.69). Collectively, these results indicate that the designed audio scenes could be reliably classified by human listeners based on the four generic classes.
Level Change
A total of 12 audio scenes varied in overall level with fixed SNR (L Scenes). Among these were four types of scene: speech alone (L S ), speech in noise (L SN ), music alone (L M ), and music in noise (L MN ). Within each scene type, there were three overall levels as indicated in the second column from the left in Figure 4 (55, 70, and 85 dB SPL; also see Table 1). In Figure 4, each row is an audio scene, with the listener judgments (S) and device classifier outputs (A to E) represented by columns. Columns are grouped by the generic environment classes: Speech in Quiet, Speech in Noise, Noise, and Music. Each cell in the table indicates a percentage and a corresponding shade of gray as indicated in the color bar. For a given device (A to E) in a single row, the four corresponding cells sum to 100%. These are the percentages extracted from the company-specific datalogging. For example, in the first row (L S 55), Device A classified the audio scene proportionally as 92.8% Speech in Quiet and 7.2% Speech in Noise over the course of the 80-min presentation.
Speech-Dominant Scenes. In the first scene type (L S ), a male and female turn-taking conversation in quiet, subjects and devices mostly agreed in their assessment; specifically, a high percentage of the scene was classified as Speech in Quiet by subjects and by devices A, B, and E, independent of the change in level. That is, in the absence of background distractors, the other three classes did not register to a high degree. On the other hand, for levels at or above 70 dB SPL, Devices C and D transitioned from Speech in Quiet class to Speech in Noise class as the overall level increased. These results indicate that, in this type of acoustic scene, Devices C and D invoke level-sensitive algorithms for distinguishing among the Speech in Quiet and Speech in Noise classes, whereas the other devices and human judges did not weight level strongly in decision-making over this 30-dB range.
The second scene type (L SN ) included a food court background at 5-dB SNR. Whereas subjects and most devices classified this scene as a Speech in Noise scenario, independent of overall level, there were some nuances among the classifiers. Device C was consistently at 100% Speech in Noise for all levels. Devices A and E increased the proportion of the Speech in Quiet class as overall level increased, possibly indicating that the positive SNR (5 dB) interacted with overall level for these classifiers. Finally, Devices B and D performed in a more idiosyncratic fashion: at the lowest and highest overall levels, the classifier output was mostly Speech in Noise (B: 99% and 77%; D: 93% and 94%, respectively), yet the intermediate level led to a split between Speech in Quiet (B: 52%; D: 49%) and Speech in Noise (B: 39%; D: 50%). Across devices, the proportion of speech-in-noise classification for speech-in-noise scenes ranged from 38% to 100%, strikingly similar to the 41% to 98% range reported by Groth and Cui (2017).
Music-Dominant Scenes. Because all modern premium devices include a classifier destination for music environments, the third and fourth scene types tested the likelihood of each classifier selecting the Music class at varying levels either in quiet (L M ) or in background noise (L MN ) consisting of card hall noise at an SNR of 0 dB. In quiet (L M ), human subjects judged the scene to be Music with greater than 90% proportion for each of the overall levels. In noise (L MN ), however, the proportion of Music judgments was considerably lower (between 43% and 58%), with the remaining percentage assigned mostly to the Speech in Noise or Noise classes. The distribution of weights among those classes appears to be level dependent, with Music and Noise having relatively higher weight for 55 and 85 dB and Speech in Noise having a higher weight for the 70 dB level.
Device classification in music was idiosyncratic, but detailed analysis of the cells in Figure 4 supports logical inferences for each device. For the quiet (L M ) scene, Devices A and D mirrored the subject judgments in percentage classified as Music. Devices C, E, and B were progressively more level-dependent in their classification of Music. For Device C, at 55 dB SPL, classification was 25% Music and 75% Speech in Quiet, whereas at 70 and 85 dB SPL classification was 100% Music. For Device E, the classification percentage gradually increased from 58% to 72% with increasing level. At 55 dB SPL, the remaining percentage was attributed to Speech in Quiet, while at 70 and 85 dB SPL the remaining percentage was attributed to Speech in Noise. For Device B, classification at 55 dB SPL was 100% Speech in Quiet. At 70 dB SPL, classification was 63% Music, 26% Speech in Noise, and 11% Speech in Quiet. At 85 dB SPL, classification was 24% Music and 76% Speech in Noise.
When music was presented in noise (L MN ), no device mirrored the human subject judgments. The classification by devices A, B, C, and D clearly was level dependent, with A and C progressing from Speech in Quiet dominant at the lowest level to percentages more broadly distributed (device D) among the four classes at 85 dB SPL. Device E stood out as not being level dependent: at each level, the classification proportions remained approximately 65% Speech in Noise, about 15% Noise, and about 17% Music. While the aforementioned data reveal level and SNR dependencies when classifying music in quiet or mixed with background sounds, the variability among devices is consistent with the fixed-level, music-alone data from Groth and Cui (2017).
SNR Change
The aforementioned level-change conditions showed that some device classifiers shifted destinations depending on the overall level whereas others were not affected as much by level changes. The presumption was that, like the human listeners in general, device classifiers that were mostly level-invariant would instead show effects of changing SNR. Figure 5 shows a heat map similar to that in Figure 4. For this analysis, there were 16 audio scenes encompassing 4 scene types with varying SNRs (S Scenes; also see Table 1): a single talker with background subway noise (S Sub ), a single talker with a 10-talker babble background (S 10 ), a single talker in a food court background (S 1 ), and three talkers in a food court background (S 3 ). Background stimuli were chosen to provide a variety of speech and nonspeech with their inherent spectral and modulation differences. For example, the subway background was a low-frequency, steady background with minimal or no fluctuations in the spectrum but with level changes as trains arrived and departed, and the 10-talker stimulus contained greater fluctuation at spectral regions common to speech.
Low-Frequency, Steady Background. The first scene (S Sub ) varied in SNR from -10 dB to 10 dB with an overall level held constant at 80 dB SPL. Subjects showed a consistent effect of SNR, shifting from a classification of Noise combined with Speech in Noise at negative SNRs to mostly Speech in Noise at SNRs greater than or equal to 0 dB. Because their judgments, shown in Figure 4, were largely level independent, this is consistent with our earlier presumption that decisions likely would be made based on level or SNR but not both. Device D followed a similar trend; Devices B and E also mirrored this trend, though they tended to classify a large proportion as Speech in Quiet at the highest SNR. Device A showed a different effect of SNR such that negative SNRs were classified as Speech in Noise, 0-dB SNR was classified mostly as Noise, and positive SNRs were again mostly Speech in Noise. Finally, Device C was 100% Speech in Noise for all SNRs, not only for this scene type but for all other scene types as well. Leaving Device C aside for a moment, the other devices each showed some dependence on SNR at this relatively high overall level, confirming expected effects of SNR.
Multitalker Background Babble. The next scene (S 10 ) also varied in SNR from -10 dB to 10 dB but with a constant level of 70 dB SPL. Subjects transitioned from predominantly Noise class judgments to Speech in Noise as SNR increased. No device classified the scene in a comparable way to the human subjects. Rather, Devices B, C, and D were consistently in the Speech in Noise destination independent of SNR, and though Device A was like these other devices, its classifier also dedicated a minor proportion to the Speech in Quiet class. The more surprising result was seen in Device E: at low SNRs, the Music class was evoked to a high degree, even though no actual music was in the background noise. Without deeper knowledge of the company's classification system, including its acoustic feature analysis, it is challenging to infer the full implications of this result.
One Versus Three Speech Sources. The remaining two scene types (S 1 and S 3 ) tested three SNRs (0, 5, and 10 dB) and were fixed at 80 dB SPL. The addition of talkers in S 3 did not have considerable effects on subject or device classification. In general, classification was primarily Speech in Noise with some exceptions. Specifically, Device A tended to classify more as Noise at 0-dB SNR with three talkers, and Device B shifted to the Speech in Quiet destination at the highest SNR, as was seen in the S Sub scene.
Discussion
Hearing aid classifiers were shown to be remarkably different from each other and, in some cases, from human subject judgments, despite all having some sensitivity to level, SNR, or both. Although the resulting heterogeneity may not be too surprising given the variety of acoustic analyses available to and methodologies employed by the different companies, as well as their specific classification philosophies, the stark contrasts could have a substantial impact on hearing-aid users and dispensing clinicians.
Effect of Overall Level and SNR on Environment Classification
The present results indicate that at low levels, all devices can reliably classify speech in quiet, but in two cases, increased level led to speech-in-noise classification (C and D). In contrast, the devices performed more idiosyncratically when speech was presented with a food court background. Whereas one device (C) had no dependence on overall level and consistently classified as Speech in Noise, the classification by other devices varied between Speech in Quiet and Speech in Noise in a manner dependent on overall level. Groth and Cui (2017) previously evaluated the classification accuracy of six different hearing aids in various scenes, including speech alone or speech in noise, and reported that accuracy was poorer for most devices in noise backgrounds relative to human judgments. In that study, the "café" background was especially challenging, likely due to the presence of speech in the scene, and it is probable that the present food court scene with some distinct speech proved just as challenging for some of the classifiers. The fact that most devices showed a nuanced approach for dynamic backgrounds in both studies, however, may also indicate that for this type of background, a wider range of DSP features are available to the listener and processing is more dynamic over the course of the 80-min presentation.
When comparing device classification for scenes with music, again, the results show that for isolated scenes, like speech (L S ) or music (L M ) alone, the devices are mostly consistent with each other and in agreement with the human listeners. Device B was a unique case: at the lowest level it classified the chosen jazz sample as Speech in Quiet, at the medium level it favored Music, and at the highest level it favored Speech in Noise. Music has a wider range of levels and spectro-temporal characteristics compared to speech, which places greater demands on hearing aid circuits and algorithms for producing acceptable sound quality (Chasin & Russo, 2004). It is not surprising therefore that Music classification has been a more recent innovation in the industry, though recent studies have shown that music alone can be reliably classified (Büchler et al., 2005; Gil-Pita et al., 2015; Groth & Cui, 2017). For hybrid sounds containing both music and noise, however, there are known challenges for environmental classifiers (Büchler et al., 2005), just as there were for scenes with both speech and noise. In the present study, adding background noise to music was shown to steer device classifiers to a variety of classes that depended on overall level for four of the five devices. From human judgments, even at the highest overall level, music was considered the listening foreground for roughly 50% of the scene, which was not well-matched by the devices.
The second set of audio scenes was designed to measure the effects of SNR on environment classification. Except for Device E classifying Music somewhat erroneously in the S 10 scene, most devices and the human subjects tended to classify these various speech-in-noise scenes as Speech in Noise. The Device C classifier was undeterred by SNR changes, classifying all scenes as Speech in Noise, likely due to the relatively high overall levels of the scenes. The other devices showed some variability in their classification approach, but the results of this analysis primarily show that both subjects and devices generally agree. One explanation for this is that, at these high levels, there is often a perceptual roll-off (Dirks et al., 1977; Hannley & Jerger, 1981; Jerger & Jerger, 1971), and the perceptual benefit of digital hearing aids also decreases as input levels increase (Kuk et al., 2015). A large number of the acoustic scenes were presented at relatively high overall levels, which may have limited any chance to differentiate devices in the SNR-change conditions.
Understanding Classification Accuracy
Several studies on environment classification algorithms have shown that these tools can be extremely accurate, with up to 98% validation accuracy when learning the four primary classes (speech, speech in noise, music, and noise; e.g., Lamarche et al., 2010;Ravindran et al., 2005). The definition of accuracy can vary among investigations, however, and when considering actual device classifiers we must also consider the goals of the device under different scenarios. For example, Groth and Cui (2017) reported device classification accuracy as it related to human judgments, and they reported a wide range of accuracy across devices and listening conditions. The results of the present study demonstrate that the output of advanced hearing aid classifiers is often in contrast to human judgments, but again, this should not necessarily be an indication that devices were inaccurate in their classification. Rather, it may be the case that classification in these conditions has been intentionally biased to one class or another to support the desired DSP feature engagement and other adaptive processes according to the company's overall amplification philosophy (Hayes & Eddins, 2008). Thus, the present study reports only the percentage of time in which each classification was chosen for each scene condition.
Among the more surprising results of the present study was the variety of responses to music in noise at 0-dB SNR. The human listeners judged that this scene contained music in the foreground, but certainly recognized that noise and speech were also present. The devices, on the other hand, mostly avoided the Music class, opting instead for either Noise or Speech in Noise. This distinction in understanding accuracy is important because, even when music is present, a general bias towards the Speech in Noise and/or Noise classifications could be intended to engage decision rules that give noise reduction precedence in order to preserve comfort and sound quality. This is also a good example of a case where the philosophy of the classifier and the perception of the listener may or may not be at odds. In natural environments, the appropriate hearing aid processing likely depends on the intent of the listener. The problem for the classifier is that the DSP features in use for speech clarity in noise and for improved sound quality for music are almost perfectly at odds with one another (Chasin & Russo, 2004). Speech clarity in noise leads to reductions in input levels and increased signal processing with directional microphones, noise cancellers, and speech enhancement. But improved sound quality for music typically requires less heavy-handed signal processing, including omnidirectional microphones, linear gain characteristics, and a removal of noise cancelling (Arehart et al., 2011; Croghan et al., 2014). Thus, the differences between what the listeners heard in this experiment and how the classifiers responded nicely highlight the quandaries inherent in classification philosophy. In retrospect, it is also possible that the levels used for the audio scenes were not comparable to realistic scenarios from which the classifiers were trained. Although Smeds et al. (2015) observed natural music settings to range from 0 to 15 dB SNR, the average listening level was not greater than 70 dB SPL. In the present study, two of the three music scenes were at or above 70 dB SPL.
The current evaluation of environmental classifiers illustrates the importance of differences among devices and highlights differences in evaluation criteria. Rather than focusing on accuracy relative to human judgments, here the focus was on consistency, both within and among devices, in various sound scenes. While extensive, this investigation certainly was not exhaustive. We chose to have listeners with normal hearing evaluate the scenes, as an internal reference and a means for comparison to previous investigations. We did not include listeners with hearing loss, though it would be of great interest to know how such listeners differ in their classifications relative to normal hearing, and whether such listeners classify sound scenes differently unaided versus aided. We developed a broad set of acoustic scenes, but did not cover all of the common environment scenes that listeners may encounter (Smeds et al., 2015; Wolters et al., 2016). We did not specifically consider reverberation that may have been inherent in the original recordings, and we did not manipulate reverberation with room treatments or audio processing methods. In the current set of sound scenes, all conditions were rather static, whereas moving sound sources and varying sound source distances might also impact environmental classification in important and potentially company-specific ways. Finally, and perhaps most profound in impact, neither we nor others have specifically investigated the performance of environmental classification relative to listener intent. At present, such classifiers still operate exclusively in the acoustic dimension without knowledge of listener intent or current focus of attention. Thus, there is a substantial likelihood of a mismatch between what the aid determines to be the prominent or important signal versus the signal to which the listener would like to attend. The present study did not measure listener intent and therefore cannot assess the accuracy of the devices in this way, but future work should consider accuracy of classifiers not only relative to foreground but also relative to listener intent. Future innovations in classification technology will undoubtedly seek to leverage human interfacing to incorporate the individual's unique perspective (Carlile et al., 2017), and the environment classifier will continue to be the bridge between the acoustic world and the DSP decision rules.
Conclusions
It is clear that premium hearing aid classification varies significantly among brands, and this presumably drives very different DSP-feature engagement subsequent to classification. This result itself connotes little valence if it can be assumed that individual companies understand the dependencies of their classifiers and take that into account when driving feature engagement via environment. The choice of feature activations and the subsequent changes to signal processing should always aim for some benefit to the individual listener. The idiosyncratic patterns revealed in some devices, and the sheer variability across devices, however, are more concerning from a clinician's perspective. Because the attributes of classifiers are not exposed to hearing health-care professionals to the same extent as the signal processing features they control, the importance of classification is often overlooked. The present data indicate that an individual clinician likely needs more information than they are provided in order to use classification data, as revealed via the datalogging feature in fitting software, for counseling, troubleshooting, and making decisions about fitting adjustments. Presumably, knowledge of company DSP philosophy could better inform the clinician when prescribing premium hearing aids over base-level devices (Johnson et al., 2016). Therefore, understanding that there are differences and commonalities among companies may help the clinician give their best judgment in accordance with the patient's needs. | 2021-03-23T06:16:43.602Z | 2021-03-22T00:00:00.000 | {
"year": 2021,
"sha1": "bda6b84f82d286e6b664ff074a7c126f3aee103a",
"oa_license": "CCBYNC",
"oa_url": "https://journals.sagepub.com/doi/pdf/10.1177/2331216520980968",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d65a14f8a32c771eda5239474f4a02d2fc2a2274",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
260953129 | pes2o/s2orc | v3-fos-license | P1366: SAR442257, A CD38/CD28/CD3 TRISPECIFIC ANTIBODY, POTENTIATES CAR T-CELL ACTIVITY AGAINST LARGE B-CELL LYMPHOMA
1. To understand the impact of the rrLBCL immune microenvironment on clinical failure of CAR T therapy
2. To investigate targeting of CD38 with a novel CD38 T-cell-engaging trispecific antibody in combination with CD19 CAR T-cells
Methods:
Discovery analysis: 13 rrLBCL tumors, including 7 CAR T naïve and 6 CAR T refractory tumors, were subjected to scRNA-seq. Clustering was performed and frequency, clonal dominance, and expression profiles of major cell types and subclusters were compared between groups.
Functional experiments: The effect of SAR442257 was assessed in cytotoxicity assays and compared to control antibodies lacking the CD38 or CD38/CD3/CD28 targeting regions. RL and HT lymphoma cell lines were CRISPR-modified to knock out CD19, and isogenic WT clones were used as target cells. CD19 CAR T-cells were constructed from PBMCs of rrLBCL patients obtained at the time of apheresis using a construct similar to axicabtagene ciloleucel (axi-cel).
For cytotoxicity assays, cell lines were co-cultured at E:T ratios of 1:1 for 24h or 48h.
Results:
scRNA-seq revealed that CAR T refractory rrLBCL tumors possessed significantly higher fractions of terminally exhausted LAG3+TIM3+CD38+ CD8 T-cells (CD8 TEX ) with high expression of T-cell dysfunction and TEX signatures compared to CAR T naïve tumors. CD8 TEX were most frequent in CAR T refractory tumors and enriched within CAR+CD8 T-cells detected within all CAR T refractory tumors. We have previously identified similar cells enriched in axi-cel infusion products of rrLBCL patients failing to respond (Deng et al., Nat. Med. 2020). TCR clonotype analysis revealed highly expanded T-cell clones within the CD8 TEX cluster, significantly increased clonal dominance, and reduced clonal diversity of CD8 TEX cells in CAR T refractory compared to CAR T naïve tumors. Single cell differential gene expression revealed significantly increased expression of LAG3, TIM3,
and CD38 in CD8 T-cells from CAR T refractory compared to CAR T naïve tumors.
We hypothesized that SAR442257, a CD38/CD28xCD3 trispecific antibody, could boost the activity of CD19 CAR T-cells from rrLBCL patients in part by allowing for dual antigen targeting of CD19 and CD38 on lymphoma cells and by inducing fratricide of CD38-expressing CD8 TEX cells. Cytotoxicity assays showed that SAR442257 significantly increased the killing of CAR T-cells against HT and RL WT (CD19+/CD38+) cells (24-hr HT P-value=3e-4, 24-hr RL P-value=1e-2) which persisted at 48 hours. Addition of SAR442257 to CAR T-cells allowed killing of HT and RL CD19KO (CD19-/CD38+) (24-hr HT P-value=8e-7, 24-hr RL P-value=2e-8) at a level similar to the combination in WT cells. Addition of antibodies lacking CD38 or CD38/CD3/CD28-targeting regions did not boost CAR T cytotoxicity or rescue killing of CD19KO targets. We observed significant T-cell fratricide, which was beneficial or non-detrimental in light of significantly increased lymphoma cell killing.
Summary/Conclusion:
The tumor microenvironment of CAR T refractory rrLBCL is enriched in clonally expanded and terminally exhausted CD8 T-cells expressing CD38. The CD38/CD28xCD3 trispecific antibody SAR442257 boosted CAR T-cell activity through recognition of CD38 on the tumor, costimulation of CAR T-cells, and induction of fratricide of CD38+ T-cells, resulting in superior tumor cell killing. In addition, SAR442257 enabled CD19 CAR T-cells to kill CD19-/CD38+ LBCL cells. | 2023-08-18T05:06:56.489Z | 2023-08-01T00:00:00.000 | {
"year": 2023,
"sha1": "3dcd0ce33b40b6cbea4a3cb19d76352b32753962",
"oa_license": "CCBYNCND",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "3dcd0ce33b40b6cbea4a3cb19d76352b32753962",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": []
} |
6986486 | pes2o/s2orc | v3-fos-license | Identification of OBO nonalignments and its implications for OBO enrichment
Motivation: Existing projects that focus on the semiautomatic addition of links between existing terms in the Open Biomedical Ontologies can take advantage of reasoners that can make new inferences between terms that are based on the added formal definitions and that reflect nonalignments between the linked terms. However, these projects require that these definitions be necessary and sufficient, a strong requirement that often does not hold. If such definitions cannot be added, the reasoners cannot point to the nonalignments through the suggestion of new inferences. Results: We describe a methodology by which we have identified over 1900 instances of nonredundant nonalignments between terms from the Gene Ontology (GO) biological process (BP), cellular component (CC) and molecular function (MF) ontologies, Chemical Entities of Biological Interest (ChEBI) and the Cell Type Ontology (CL). Many of the 39.8% of these nonalignments whose object terms are more atomic than the subject terms are not currently examined in other ontology-enrichment projects due to the fact that the necessary and sufficient conditions required for the inferences are not currently examined. Analysis of the ratios of nonalignments to assertions from which the nonalignments were identified suggests that BP–MF, BP–BP, BP–CL and CC–CC terms are relatively well-aligned, while ChEBI–MF, BP–ChEBI and CC–MF terms are relatively not aligned well. We propose four ways to resolve an identified nonalignment and recommend an analogous implementation of our methodology in ontology-enrichment tools to identify types of nonalignments that are currently not detected. Availability: The nonalignments discussed in this article may be viewed at http://compbio.uchsc.edu/Hunter_lab/Bada/nonalignments_2008_03_06.html. Code for the generation of these nonalignments is available upon request. Contact: mike.bada@uchsc.edu
INTRODUCTION
Several efforts in recent years have focused on the semiautomatic addition of links between existing terms in the Open Biomedical Ontologies (OBOs) through the creation of formal definitions of these terms using more atomic terms, a process to which we refer as ontology enrichment. Of note, the Gene Ontology Next Generation (GONG) project first used the description-logic-based language DAML+OIL to formally define 250 Gene Ontology (GO) metabolism terms using MeSH terms (Wroe et al., 2003), and later OWL to formally define a much larger number of GO metabolism, binding and transport terms again using MeSH terms (Aranguren, 2004); this project has since evolved into the more general Biological Ontology Next Generation (BONG), which currently exists as a plugin to the Protege ontology editor. The Obol effort uses a series of Prolog production rules that can be used to decompose a given matching GO term into an Aristotelean genus (category) and one or more differentiae (necessary and sufficient conditions that differentiate the term from other terms of the same genus); the Gene Ontology Consortium is currently using Obol to generate Aristotelean definitions of OBO terms that refer to other OBO terms (Mungall, 2004). In our frame-based Protege ontology-enrichment effort, we have created over 9600 assertions linking terms in the GO (The Gene Ontology Consortium, 2000), Chemical Entities of Biological Interest (ChEBI) ontology (Degtyarenko, 2003), and the Cell Type Ontology (CL) (Bard et al., 2005); these base assertions have been integrated into this set of ontologies such that each assertion is consistent with all assertions made at more general levels (Bada and Hunter, 2007).
Both GONG and Obol have been able to take advantage of associated reasoners; for the former, an OWL reasoner can be used, while for the latter, the Aristotelean definitions can be imported into OBO-Edit (www.oboedit.org), the primary tool in which OBOs are developed, and its associated reasoner invoked. A great advantage of using such a reasoner is its ability to make new inferences derived from the added formal term definitions. For example, in the second published GONG study, using the newly added formal definitions for the GO molecular function (MF) terms neurotransmitter binding and glutamate binding (which use the MeSH terms Neurotransmitters and Glutamates, respectively), the OWL reasoner inferred that neurotransmitter binding subsumes glutamate binding, a link absent at that point in GO. However, both GONG/BONG and Obol/OBO-Edit require that these definitions use necessary and sufficient conditions in order for these inferences to be made. This is a strong requirement that does not hold bidirectionally in many, if not most cases: it is necessary and sufficient that catecholamine transport is a transport that results in the directed movement of a catecholamine. However, the semantics of OWL or OBO say that, for an existential restriction expressed for a subject class A linking it to an object class B via property p, each instance of A must have at least one value from B for p. Since we cannot say that every catecholamine takes part in a catecholamine-transport process, it is not even possible to make this a necessary assertion. Consequently, using terms from these two terminologies that have been linked, these new subsumptive inferences can only be made between subject terms for which necessary and sufficient definitions can be created (e.g. substance-transport terms) and not with the object terms (e.g. the substances that are being transported) used in these definitions.
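As a rough formal gloss of that restriction semantics (a standard first-order reading of an OWL existential restriction; the symbols here are generic placeholders, not terms from the linked ontologies):

\forall x \, \big( A(x) \rightarrow \exists y \, ( p(x, y) \wedge B(y) ) \big)

Read with A as catecholamine, p as takes part in, and B as catecholamine transport, this would require every catecholamine to take part in some transport process, which is exactly the overly strong condition described above.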
The inferences that are made by these reasoners point to what we call nonalignments: subsets of terms that are linked (other than via is_a), but that are not aligned in that the terms of one side of the links are linked by subsumption while the terms of the other side are not. (The nonalignments we identify all consist of subject terms that are subsumptively linked and object terms that are not subsumptively linked.) For example, as can be seen in Figure 1, we have linked the ChEBI term chlorohydrocarbons to the GO term chlorinated hydrocarbon metabolism and also the ChEBI term 1,3-dichloro-2-propanol to the GO term 1,3-dichloro-2-propanol metabolism. These pairs of terms are not aligned in that 1,3-dichloro-2-propanol is subsumed by chlorohydrocarbons in ChEBI, but 1,3-dichloro-2-propanol metabolism is not subsumed by chlorinated hydrocarbon metabolism in GO. We expect the two sides to be aligned in that if 1,3-dichloro-2-propanol is indeed a kind of chlorohydrocarbon (as represented in ChEBI), then it should be metabolized in a kind of chlorinated-hydrocarbon metabolism, but 1,3-dichloro-2-propanol metabolism is not a kind of chlorinated-hydrocarbon metabolism (as represented in GO). In the nonalignments we identify, if the more specific subject entity (e.g. 1,3-dichloro-2-propanol) is indeed a kind of the more general subject entity (e.g. chlorohydrocarbons), then the assertion made for the more specific subject entity (e.g. that 1,3-dichloro-2-propanol can be metabolized in a 1,3-dichloro-2-propanol-metabolism process) should be subsumed by the assertion made for the more general subject entity (e.g. that a chlorohydrocarbon can be metabolized in a chlorinated-hydrocarbon-metabolism process).
In this example, with necessary and sufficient definitions of chlorinated hydrocarbon metabolism and 1,3-dichloro-2-propanol metabolism in terms of chlorohydrocarbons and 1,3-dichloro-2-propanol, respectively, these reasoners would point to this nonalignment through the suggestion of an is_a link from 1,3-dichloro-2-propanol metabolism to chlorinated hydrocarbon metabolism. However, if instead 1,3-dichloro-2-propanol was not subsumed by chlorohydrocarbons and 1,3-dichloro-2-propanol metabolism was subsumed by chlorinated hydrocarbon metabolism, these reasoners would not be able to suggest an is_a link from 1,3-dichloro-2-propanol to chlorohydrocarbons, because the required necessary and sufficient definitions of 1,3-dichloro-2-propanol and chlorohydrocarbons in terms of 1,3-dichloro-2-propanol metabolism and chlorinated hydrocarbon metabolism, respectively, could not be created using these terms in an ontologically valid way. This is not a fault of OWL or of Aristotelean formalism; these representational systems have strict semantics, to which ontologists should adhere when making assertions. It is just that reasoners relying solely on necessary and sufficient definitions will likely miss many of these nonalignments because ontologically valid definitions cannot be created, and it is desirable that as many of these nonalignments as possible be rectified.
We have implemented our ontology-enrichment project in Protege-Frames (mainly because this is part of a larger frame-based effort). There is no reasoner associated with Protege-Frames, so we implemented a simple reasoning system to ensure the global consistency of the added assertions in our set of integrated ontologies. It is this same reasoning system we use here to discover nonalignments in the constituent ontologies through structural analysis of the assertions we added in our previous work (Bada and Hunter, 2007). Reasoning over these assertions, we were able to discover over 1900 instances of nonredundant nonalignments, 39.8% of which likely could not be identified via suggested inferences by OWL or OBO-Edit reasoners due to the fact that the required necessary and sufficient definitions could not be created in an ontologically valid way using these terms of the linked ontologies. We propose that those nonalignments for which such inferences cannot be made by these reasoners also be examined to increase consistency among the linked ontologies.
METHODS
The method by which we ensure the global consistency of the set of assertions to the ontologies is through an analysis of the object classes of the properties of the classes. Specifically, this analysis relies on the fact that the object expression (here, an object class or union of object classes) of a property at a given class level must be subsumed by the object expression of the property at higher (i.e. more general) class levels. Furthermore, the object expression of a given property must be subsumed by the object expression at higher property levels. Put more simply, object expressions should monotonically narrow as one descends to more specific classes and slots. In order for each assertion to be consistent with each assertion made at more general levels, any object class of a property at a given class level that was not subsumed by an object class at a higher class and/or property level was appropriately propagated up the class and/or slot hierarchies such that these conditions were satisfied. The full details of this procedure can be read in the initial publication of our OBO-enrichment work (Bada and Hunter, 2007).

Fig. 1. The relationships between a pair of terms from ChEBI and another pair of terms from the GO BP ontology, from the analysis of which an ontology nonalignment has been identified. Specifically, 1,3-dichloro-2-propanol is subsumed by chlorohydrocarbons in the former, but 1,3-dichloro-2-propanol metabolism is not subsumed by chlorinated hydrocarbon metabolism in the latter. This nonalignment was identified by analyzing the respective object classes of is metabolized in at the levels of 1,3-dichloro-2-propanol and of chlorohydrocarbons.
Our methodology for discovering ontology nonalignments follows from this global consistency enforcement. For each base assertion (represented as a triple of a subject class, property and object class), each of the class's direct superclasses is checked to see if it is within the domain of the property. If so, it is checked if at least one of the object classes of the property of the superclass subsumes the object class of the property of the base assertion. If there is no such subsuming class, this is a nonalignment between the subject and object classes of the two assertions. If there is such a subsuming class at the level of this direct superclass, the same examination is performed for each of its direct superclasses. This continues recursively until either all direct superclasses are outside of the domain of the given property or a root of the ontology is reached.
This can be made clearer with a simple but real example. Consider the base assertion 1,3-dichloro-2-propanol is metabolized in 1,3-dichloro-2-propanol metabolism, which states that 1,3-dichloro-2-propanol can be metabolized in a 1,3-dichloro-2-propanol-metabolism process. The sole direct superclass of 1,3-dichloro-2-propanol, namely chlorohydrocarbons, is obtained. It is checked that chlorohydrocarbons is within the domain of the slot is metabolized in, which is the case. The set of allowed classes of is metabolized in at the level of chlorohydrocarbons is then obtained, which is the single class chlorinated hydrocarbon metabolism (which indicates that a chlorohydrocarbon can be metabolized in a chlorinated-hydrocarbon-metabolism process). The set of allowed classes at the superclass level (the one-member set chlorinated hydrocarbon metabolism) should subsume the set of allowed classes at the base-assertion level (the one-member set 1,3-dichloro-2-propanol metabolism). However, it does not; this is thus a nonalignment. Figure 1 illustrates this example.
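The recursive check just walked through can be sketched in a few lines of Python. The dictionaries below are hypothetical stand-ins for the frame-based model, and the domain test is approximated by the presence of an entry in the allowed-classes table:

IS_A = {  # child -> direct superclasses
    "1,3-dichloro-2-propanol": ["chlorohydrocarbons"],
    "chlorohydrocarbons": ["molecular entities"],
    "1,3-dichloro-2-propanol metabolism": ["metabolism"],
    "chlorinated hydrocarbon metabolism": ["metabolism"],
}
ALLOWED = {  # (subject class, slot) -> allowed (object) classes
    ("1,3-dichloro-2-propanol", "is metabolized in"):
        ["1,3-dichloro-2-propanol metabolism"],
    ("chlorohydrocarbons", "is metabolized in"):
        ["chlorinated hydrocarbon metabolism"],
}

def subsumes(general, specific):
    """True if `specific` is `general` or one of its transitive descendants."""
    return specific == general or any(
        subsumes(general, parent) for parent in IS_A.get(specific, []))

def find_nonalignments(subject, slot, obj):
    """Walk the direct superclasses of `subject`; report each superclass whose
    allowed classes for `slot` fail to subsume `obj`, recursing otherwise."""
    found = []
    for sup in IS_A.get(subject, []):
        allowed = ALLOWED.get((sup, slot))
        if allowed is None:        # superclass outside the slot's domain
            continue
        if any(subsumes(a, obj) for a in allowed):
            found += find_nonalignments(sup, slot, obj)  # keep climbing
        else:
            found.append((subject, sup, obj, allowed))   # a nonalignment
    return found

print(find_nonalignments("1,3-dichloro-2-propanol", "is metabolized in",
                         "1,3-dichloro-2-propanol metabolism"))
# -> [('1,3-dichloro-2-propanol', 'chlorohydrocarbons',
#      '1,3-dichloro-2-propanol metabolism',
#      ['chlorinated hydrocarbon metabolism'])]

Running the sketch on the base assertion prints the single nonalignment of Figure 1, summarized by its four entities.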
For each discovered nonalignment, we extracted four entities into which the nonalignment can be distilled: the subject class of the base assertion, the superclass of this subject class at the level of which the nonalignment was found, the object class of the base assertion (i.e. the allowed class of the assertion), and the set of object classes at the level of the superclass (i.e. the set of allowed classes for the slot at the level of the superclass). There is only one object class for each base assertion, while there can be more than one object class at the level of the superclass, since monotonicity as one travels down the class hierarchy is preserved as long as an object class of a property of a class is subsumed by at least one object class of the property of the superclass. Figure 2 illustrates another real example where the set of allowed classes at the level of the superclass has more than one member. In this example, the set of object classes for results in binding of at the level of protein binding was assigned the set [proteins, protein polypeptide chains, protein complex]. Such a multiply membered set of object classes is represented as a union of classes, so this assertion indicates that a protein-binding process can result in the binding of either a protein, a protein polypeptide chain, or a protein complex. (This was done because the definition of protein binding is 'interacting selectively with a protein or protein complex'.) However, relatively few terms so far have been assigned multiple allowed classes as in this example, so this is currently an exceptional case.
Each stored nonalignment, represented by the four summarizing entities, was written out to a text file in the following format:

subject class of base assertion -> superclass of subject class
object class of base assertion !-> object-class set at level of superclass

This neatly summarizes the nonalignment by stating that the subject class of the base assertion is subsumed by the superclass, but the object class of the base assertion is not subsumed by any of the object classes at the level of the superclass. Thus, the nonalignment illustrated in Figure 1 is represented as:

1,3-dichloro-2-propanol -> chlorohydrocarbons
1,3-dichloro-2-propanol metabolism !-> chlorinated hydrocarbon metabolism

Such a representation makes clear the essence of the nonalignment: that 1,3-dichloro-2-propanol is subsumed by chlorohydrocarbons (in ChEBI), but 1,3-dichloro-2-propanol metabolism is not subsumed by chlorinated hydrocarbon metabolism (in the GO biological process (BP) ontology).
Due to the extensive multiple inheritance of the component ontologies, it is possible to discover redundant nonalignments or even the same nonalignment more than once. Only nonredundant nonalignments were stored and exported, as examining redundant nonalignments to assess whether there are true semantic discrepancies entails additional, unnecessary effort and biases statistics. Two nonalignments are redundant if the resolution of the one also results in the resolution of the other. Consider the following two nonalignments:

benzoate -> anions
benzoate transport !-> anion transport

benzoate -> ions
benzoate transport !-> ion transport

These two nonalignments are redundant with respect to one another. If the first nonalignment was resolved by adding an is_a link from benzoate transport to anion transport, the second nonalignment would also be resolved, since this link addition would imply that benzoate transport is a type of ion transport. In cases of redundancy, we have kept the more specific nonalignment; thus, for the example above, only the first nonalignment was stored. The relevant relationships between the terms of these two nonalignments are illustrated in Figure 3.
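A minimal sketch of this redundancy test, again in Python, with the benzoate fragment above encoded as hypothetical is_a edges:

IS_A = {
    "benzoate": ["anions"],
    "anions": ["ions"],
    "anion transport": ["ion transport"],
    "ion transport": ["transport"],
}

def subsumes(general, specific):
    return specific == general or any(
        subsumes(general, parent) for parent in IS_A.get(specific, []))

# A nonalignment as its four summarizing entities:
# (subject, superclass, object, allowed classes at the superclass level)
n1 = ("benzoate", "anions", "benzoate transport", ["anion transport"])
n2 = ("benzoate", "ions", "benzoate transport", ["ion transport"])

def implied_by(specific, general):
    """True if resolving `specific` (e.g. by adding an is_a link from its
    object class to one of its allowed classes) would also resolve `general`."""
    s1, sup1, o1, a1 = specific
    s2, sup2, o2, a2 = general
    return (s1 == s2 and o1 == o2 and subsumes(sup2, sup1)
            and all(any(subsumes(g, s) for g in a2) for s in a1))

assert implied_by(n1, n2) and not implied_by(n2, n1)  # keep only n1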
The March 6, 2008 versions of GO, ChEBI and CL were used for this study. These base ontologies were previously enriched with 10 270 additional assertions linking the component terms using 50 specific relationships detailed in the initial publication of our OBO-enrichment work. It is important to note that although this study relies upon the links we created in our previously published ontology-enrichment work, our methodology for nonalignment identification is not limited by the specific relationships we chose to use. (The quality of the nonalignments, however, is dependent on the quality of the links that the methodology analyzes.) In fact, we have recently generated nonalignments based on links created by members of the OBO Consortium and have begun a discussion of ways of managing these nonalignments.

Fig. 2. The relationships between a pair of terms from the GO MF ontology and terms from ChEBI and the GO CC ontology, from the analysis of which an ontology nonalignment has been identified. Specifically, histone binding is subsumed by protein binding in the former, but histones is not subsumed by proteins, protein polypeptide chains or protein complex in the latter. This nonalignment was identified by analyzing the respective object classes of results in binding of at the levels of histone binding and of protein binding.
RESULTS
Using this methodology resulted in a total of 1938 nonredundant nonalignments within the set of GO, ChEBI and CL; this set of nonalignments can be examined at http://compbio.uchsc.edu/Hunter_lab/Bada/nonalignments_2008_03_06.html.
To better characterize their distribution, we clustered the nonalignments according to the ontologies that were the sources of the subject and object terms of the nonalignments. For example, the nonalignment illustrated in Figure 1 is a ChEBI-to-BP nonalignment, since the subject terms (1,3-dichloro-2-propanol and chlorohydrocarbons) are from ChEBI and the object terms (1,3-dichloro-2-propanol metabolism and chlorinated hydrocarbon metabolism) are from the GO BP ontology. There is a slight complication in that the two sets of object terms of a nonalignment may be from different ontologies, but this is rare. In such a case, the object term of the base assertion is used for the classification of the nonalignment. Table 1 lists the number of assertions and nonredundant nonalignments for each directed pairwise combination of ontologies for which there is at least one corresponding assertion. For example, there are 2710 total added assertions from a GO BP term to another GO BP term, and 94 nonredundant nonalignments were identified from these assertions. The numbers of nonalignments are largely symmetric. The biggest discrepancy is that between the 598 nonalignments identified from the BP-to-ChEBI assertions and the 1022 nonalignments identified from the ChEBI-to-BP assertions. Table 2 lists the numbers of assertions and nonredundant nonalignments and the ratio of nonalignments to assertions for each undirected pairwise combination of ontologies for which there is at least one corresponding assertion. The lowest ratios of nonalignments to assertions are those between BP terms and MF terms (0.02), between BP terms and BP terms (0.034), between BP terms and CL terms (0.064) and between cellular component (CC) terms and CC terms (0.065). This suggests that terms within these pairs of ontologies are relatively well-aligned. The highest ratios of nonalignments to assertions are those between ChEBI terms and MF terms (0.306), between BP terms and ChEBI terms (0.268) and between CC terms and MF terms (0.19). This suggests that these pairs of ontologies are relatively not aligned well, which agrees with our empirical observations in our ontology-enrichment work that ChEBI is relatively not aligned well with GO.
Another way to characterize the nonalignments is whether the subject terms of the nonalignments are the more complex terms or the more atomic terms. For example, in the example illustrated in Figure 1, the subject terms (1,3-dichloro-2-propanol and chlorohydrocarbons) are more atomic than the object terms in that the latter are built up from the former. Conversely, in the example illustrated in Figure 2, the subject terms (protein binding and histone binding) are more complex than the object terms. As will be explained more fully in the next section, this characterization has important implications in that the new inferences made by the GONG/BONG and Obol projects correspond to the first type of nonalignment, in which the subject classes are more atomic, since ontologically valid necessary and sufficient definitions, which are required for these projects, can more easily be constructed in these cases. The second type of nonalignment includes all of the BP-to-CC, BP-to-ChEBI, BP-to-CL, BP-to-MF, MF-to-CC and MF-to-ChEBI nonalignments, while the BP-to-BP and CC-to-CC sets of nonalignments have mixtures of the two types of nonalignments. We have found that 772 (39.8%) of the 1938 nonredundant nonalignments are of the second type, thus showing that our methodology can identify a large number of nonalignments that may be missed by the reasoning methods of the other projects.

Fig. 3. The relationships between terms from ChEBI and the GO BP ontology, from the analysis of which two redundant ontology nonalignments were identified. Specifically, benzoate is subsumed by anions in the former, but benzoate transport is not subsumed by anion transport in the latter. Also, benzoate is subsumed by ions in the former, but benzoate transport is not subsumed by ion transport in the latter.
Evaluation and management of nonalignments
In this study, we have used the term nonalignment to refer to two analogous sets of entities such that one entity is subsumed by the other in the first pair while one entity is not subsumed by the other in the second pair. Upon examination of a given nonalignment, if it is determined that the pairs of entities should be aligned, we term this a discrepancy. Not all nonalignments are discrepancies; Figure 4 illustrates such an example. Here, laminin-1 binding is subsumed by extracellular matrix binding in the GO MF ontology, but laminin-1 complex is not subsumed by extracellular matrix in the GO CC ontology. Even though it is a nonalignment, we believe that this is not a discrepancy in that these pairs of terms should not be aligned; that is, laminin-1 binding is a type of extracellular-matrix binding, but the laminin-1 complex is not a type of extracellular matrix (but rather a component of the extracellular matrix). Nevertheless, we assert that a large majority of the nonalignments we have identified are indeed discrepancies. If a given nonalignment is assessed to be a discrepancy, there are two ways to resolve it. The first is to add an is_a link from the object term of the base assertion to the object term at the superclass level (or, in the case of multiple object terms at the superclass level, to at least one of the object terms). For example, we assert the nonalignment illustrated in Figure 1 is a discrepancy: according to this model, a chlorohydrocarbon can only be metabolized in a chlorinated-hydrocarbon-metabolism process, but a molecule of 1,3-dichloro-2-propanol, which is a kind of chlorohydrocarbon (according to ChEBI), can only be metabolized in a 1,3-dichloro-2-propanol-metabolism process, which is not a kind of chlorinated-hydrocarbon-metabolism process (according to GO BP). One way to resolve this discrepancy is the addition of an is_a link from 1,3-dichloro-2-propanol metabolism to chlorinated hydrocarbon metabolism. With this addition, a molecule of 1,3-dichloro-2-propanol can be metabolized in a 1,3-dichloro-2-propanol-metabolism process, which is now a more specific kind of chlorinated-hydrocarbon-metabolism process.
The second way to resolve a discrepancy is the removal of the is_a link from the subject term of the base assertion to the subject term at the superclass level. In Figure 1, this corresponds to the removal of the is_a link from 1,3-dichloro-2-propanol to chlorohydrocarbons. With the removal of this link, 1,3-dichloro-2-propanol is no longer a more specific kind of chlorohydrocarbon, which aligns with the fact that a 1,3-dichloro-2-propanol-metabolism process is not a kind of a chlorinated-hydrocarbon-metabolism process.
In the case of a nonalignment that is not a discrepancy, there is still a logical inconsistency, and action should be taken to rectify the inconsistency. A general, automatic solution to such an inconsistency is the propagation of the object class of the base assertion up to the superclass level; this is the type of upward propagation we previously extensively employed in our ontology-enrichment work so as to ensure the global consistency of the ontologies when adding enriching assertions. For example, in Figure 4, we assert that neither of the two steps described in the previous paragraphs should be performed; however, there is still a logical inconsistency in that an extracellular-matrix-binding process results in the binding of an extracellular matrix, but a laminin-1-binding process, which is a kind of extracellular-matrix-binding process (according to GO MF), results in the binding of a laminin-1 complex, which is not an extracellular matrix (according to GO CC). (According to GO CC, laminin-1 complex is transitively part_of extracellular matrix.) The rectification we describe here consists of adding laminin-1 complex as an object class of results in binding of at the level of extracellular matrix binding; this is illustrated in Figure 5. The semantics of this new model are that an extracellular-matrix-binding process results in the binding of an extracellular matrix or a laminin-1 complex, while a laminin-1-binding process further restricts this to a laminin-1 complex.
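A sketch of this automatic upward propagation, with the slot table as a hypothetical fragment mirroring the laminin-1 example and allowed-class sets modeled as Python lists (unions of object classes):

ALLOWED = {
    ("laminin-1 binding", "results in binding of"):
        ["laminin-1 complex"],
    ("extracellular matrix binding", "results in binding of"):
        ["extracellular matrix"],
}

def propagate_up(subject, superclass, slot):
    """Add the object classes of the base assertion to the allowed classes of
    the same slot at the superclass level, so that the higher-level assertion
    subsumes the lower-level one."""
    for obj in ALLOWED[(subject, slot)]:
        if obj not in ALLOWED[(superclass, slot)]:
            ALLOWED[(superclass, slot)].append(obj)

propagate_up("laminin-1 binding", "extracellular matrix binding",
             "results in binding of")
# extracellular matrix binding now results in the binding of an extracellular
# matrix or a laminin-1 complex, as in Figure 5.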
A more elegant solution in this example is to instead add the GO CC term extracellular matrix part as an allowed class of results in binding of at the level of extracellular matrix binding; the semantics of this are that an extracellular-matrix-binding process results in the binding of an extracellular matrix or an extracellular-matrix part, which seems to be a valid definition for extracellular matrix binding. The original nonalignment would be resolved in that laminin-1 complex at the level of laminin-1 binding would be subsumed by extracellular matrix part at the level of extracellular matrix binding. Though this is semantically closer to the definition of extracellular matrix binding, it is also more manual and thus more labor-intensive (which is not to say that it should not be done). Our methodology could be used to either automatically upwardly propagate the specific classes so as to make the ontologies consistent, as described in the previous paragraph, or it could be used to automatically make suggestions to the ontology curators, who would decide to add either the specific terms or more general terms (such as extracellular matrix part).

Fig. 4. The relationships between a pair of terms from the GO MF ontology and a pair of terms from the GO cellular-component ontology, from the analysis of which an ontology nonalignment has been identified. We assert this is an example of a nonalignment that is not a discrepancy in that the subsumption relationship between the subject terms and the lack of a subsumption relationship between the object terms appear to be valid.
Of the total of 1938 nonredundant nonalignments, 100 were randomly selected for evaluation. Out of these 100, 96 were assessed to be discrepancies; that is, we assert that they should be similarly aligned through the addition or removal of an is_a link, corresponding to the first two types of resolution. The remaining four nonalignments are analogous to the example seen in Figure 4, in which the subject and object terms should not be aligned; rather, the third type of resolution should be undertaken, in which an object term should be added to the higher-level assertion such that the lower-level assertion is subsumed, as seen in Figure 5.
Comparison to other projects
Both the GONG/BONG and Obol projects have been focusing on creating formal definitions of OBO terms using more atomic OBO terms in necessary and sufficient conditions. These definitions can then be reasoned over (by an OWL reasoner for the former and by the Obol reasoner or the OBO-Edit reasoner for the latter), which can make new inferences using the definitions. However, the reasoner can only make new inferences using the linked terms if ontologically valid necessary and sufficient definitions can be constructed. The type of inferences that can be made largely corresponds to the absent subsumptions in the type of nonalignments in which the subject terms are more atomic than the object terms. Figure 1 is such an example. Necessary and sufficient definitions could be produced for 1,3-dichloro-2-propanol metabolism (as a subclass of metabolism with a results in metabolism of 1,3-dichloro-2-propanol condition) and for chlorinated hydrocarbon metabolism (as a subclass of metabolism with a results in metabolism of chlorohydrocarbons condition). If the associated reasoner reasons over ChEBI and GO (including these added definitions), given that 1,3-dichloro-2-propanol is subsumed by chlorohydrocarbons as in Figure 1, it can infer an is_a link from 1,3-dichloro-2-propanol metabolism to chlorinated hydrocarbon metabolism. This is the same link that is the absent subsumption between the object terms (i.e. that 1,3-dichloro-2-propanol metabolism is not subsumed by chlorinated hydrocarbon metabolism) of the nonalignment described for this example. Thus, these projects could predict analogous inferences for all of our nonalignments in which the subject terms are more atomic than the object terms, so long as ontologically valid necessary and sufficient definitions could be constructed, as was done in this example. Our methodology does not automatically suggest that all object pairs in each identified nonalignment be linked via is_a, as this may not be the correct action to take; it allows the curator to resolve the nonalignment with any of the four methods described in the previous section.
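In description-logic notation, the inference available to these reasoners in the Figure 1 case can be sketched as follows, with DCP abbreviating 1,3-dichloro-2-propanol and the property name illustrative:

\begin{aligned}
\textit{CHMetabolism} &\equiv \textit{Metabolism} \sqcap \exists\,\textit{resultsInMetabolismOf}.\textit{Chlorohydrocarbon}\\
\textit{DCPMetabolism} &\equiv \textit{Metabolism} \sqcap \exists\,\textit{resultsInMetabolismOf}.\textit{DCP}\\
\textit{DCP} \sqsubseteq \textit{Chlorohydrocarbon} &\implies \textit{DCPMetabolism} \sqsubseteq \textit{CHMetabolism}
\end{aligned}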
However, these projects likely could not predict new inferences for many if not all of the nonalignments in which the object terms are more atomic than the subject terms presented here, because the required necessary and sufficient definitions likely could not be made in an ontologically valid manner. Figure 6 illustrates such an example. The nonalignment identified here is that aldonate transport is subsumed by hexose transport in GO BP, but aldonates is not subsumed by hexoses in ChEBI. Given necessary and sufficient definitions of hexose transport in terms of hexoses and aldonate transport in terms of aldonates and the fact that aldonate transport is subsumed by hexose transport, a reasoner from one of these projects cannot infer that hexoses subsumes aldonates. In order for the reasoner to infer an is_a link from aldonates to hexoses (which is one way to resolve this nonalignment) from these terms and their definitions, necessary and sufficient definitions for aldonates (perhaps as a subclass of molecular entities and an is transported in aldonate transport condition) and hexoses (perhaps as a subclass of molecular entities and an is transported in hexose transport condition) would have to be created. However, this is too strong a condition, as, for example, an aldonate is not necessarily transported elsewhere; it may be used where it was synthesized. Without these necessary and sufficient definitions, this inference cannot be made.

Fig. 5. The nonalignment of Figure 4 has been rectified by the propagation of laminin-1 complex. Specifically, laminin-1 complex has been added as an object class of results in binding of at the level of extracellular matrix binding.

Fig. 6. The relationships between a pair of terms from the GO BP ontology and a pair of terms from ChEBI from which a nonalignment was identified. This is an example of a nonalignment that is not currently examined in other ontology-enrichment methodologies, which require necessary and sufficient conditions to make new inferences.
It can be argued that a reasoner in one of these other projects can infer an is_a link between chemicals by creating ontologically valid necessary and sufficient definitions in terms of, for example, parts or functions of these chemicals. However, this presupposes that not only such a more basic ontology but also the required specific object terms exist. Such an approach laboriously requires the creation of an entirely new set of assertions, and there may be recursion in that the more basic object terms may not exist in a hierarchical relationship, thus once again preventing the inference of the is_a link between the more composite subject terms. Our approach only requires one set of assertions and their automatically generated inverse assertions and relies on a different kind of reasoning than the deduction used by reasoners in the aforementioned projects. However, we assert that a functionally equivalent methodology could be implemented, e.g. using an OWL API, without the use of explicitly represented inverse assertions.
We have found that 39.8% of the total nonredundant nonalignments identified in this study are those in which the subject terms of the nonalignments are built up from the object terms; these correspond to the instances in which it is difficult to produce the required ontologically valid necessary and sufficient conditions, in which case new inferences by the aforementioned reasoners cannot be made using the linked terms of the ontologies.
Our methodology essentially uses subsumptive analysis of term attributes toward quality assurance of ontologies, a technique which has been used by others in the field. The BERNWARD system reconstructed sets of medical concepts into hierarchies based on five subsumptive principles, but it is different in that it takes into account partonomy in its subsumption without resolution of the type we perform as in Figures 4 and 5 (Bernauer, 1994). In an analysis of UMLS, Cimino (1998) found that the semantic type of 0.5% of concepts was neither the same as nor more specific than the semantic type of their respective parents. In an analysis of the links between diseases and their respective anatomical locations in SNOMED CT, Burgun et al. (2005) looked for differences between sets of disorders associated with all descendants of given anatomical entities and the sets of descendant disorders of the disorders associated with the given anatomical entities. Bodenreider et al. (2007) found that SNOMED CT contained 7226 parent-child pairs in which a role or value present in the parent was not present in the child and 21 799 pairs in which a value of a role present in the parent was not identical or more specific in the child. In addition to being the first subsumptive study of links among OBO terms, ours suggests both fully automatic and semiautomatic solutions to correct the inconsistencies that result upon linking the terms and highlights those that are not currently found by existing reasoning methods in other biomedical ontology-enrichment projects.
We are not calling for the abolition of the use of the OWL, Obol or OBO-Edit reasoners. Rather, we assert that functionality that identifies the type of nonalignments for which inferences cannot be made (due to absence of required necessary and sufficient conditions) can and should be built into ontology-enrichment tools such as BONG. A methodology analogous to ours appears possible through the use of an OWL API through a subsumptive analysis of directly asserted and inherited property-value pairs. Consider Figure 7, in which the nonalignment of Figure 6 has been resolved through the addition of an is_a link from aldonates to hexoses. The links from the subject terms to the object terms can be represented as necessary and sufficient existential (i.e. someValuesFrom) conditions. Comparing the value of results in transport of at the level of aldonate transport (aldonates) to the value of results in transport of at the level of hexose transport (hexoses), it can be determined that the former is subsumed by the latter; thus, there is no inconsistency. Conversely, considering Figure 6, using the same procedure, aldonates is not subsumed by hexoses, which could result in the suggestion of a nonalignment. The same methodology could be used to suggest nonalignments where necessary and sufficient definitions can be made, but this appears unnecessary, since existing reasoners can suggest new inferences for such cases. Moreover, this would require the use of statements for which ontologically valid necessary and sufficient conditions likely could not be made. Thus, the subsumptive inferences made by currently used reasoners and the nonalignments discovered by our methodology are complementary if the OBO curators continue to solely examine those nonalignments indicated by the inferences made by the reasoners using necessary and sufficient definitions.
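A minimal sketch of that pairwise filler comparison, with the lookup tables as hypothetical stand-ins for OWL API calls; as written, the toy data encodes the resolved state of Figure 7, and emptying the subsumption set reproduces the Figure 6 nonalignment:

FILLERS = {("aldonate transport", "results in transport of"): "aldonates",
           ("hexose transport", "results in transport of"): "hexoses"}
SUBSUMPTIONS = {("aldonates", "hexoses")}  # empty this set to reproduce Fig. 6

def check_pair(sub_cls, sup_cls, prop):
    """Compare the someValuesFrom filler of `prop` on a class with the filler
    on its asserted superclass; a failed subsumption suggests a nonalignment."""
    sub_filler = FILLERS[(sub_cls, prop)]
    sup_filler = FILLERS[(sup_cls, prop)]
    if sub_filler == sup_filler or (sub_filler, sup_filler) in SUBSUMPTIONS:
        return "consistent"
    return "suggested nonalignment: %s !-> %s" % (sub_filler, sup_filler)

print(check_pair("aldonate transport", "hexose transport",
                 "results in transport of"))   # -> consistent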
SUMMARY
We have described a methodology by which we have identified over 1900 instances of nonredundant nonalignments between terms from GO, ChEBI and CL. Analysis of the ratios of nonalignments to assertions from which the nonalignments were identified suggests that BP-MF, BP-BP, BP-CL and CC-CC terms are relatively well-aligned, while ChEBI-MF, BP-ChEBI and CC-MF terms are relatively not aligned well. We propose four ways to resolve an identified nonalignment: the addition of an is_a link between the object terms, the removal of an is_a link between the subject terms, the upward propagation of the object term to the superclass level, and the addition of a more general object term at the superclass level. Many of the 39.8% of these nonalignments in which the object terms are more atomic than the subject terms likely are not currently examined in other ontology-enrichment projects due to the fact that the necessary and sufficient conditions required for the inferences likely could not be added, as they are semantically too strong. We assert that a methodology analogous to ours could be implemented using an OWL API.

Fig. 7. The relationships between a pair of terms from the GO BP ontology and a pair of terms from ChEBI that result from the resolution of the nonalignment of Figure 6 via the addition of an is_a link from aldonates to hexoses. | 2014-10-01T00:00:00.000Z | 2008-05-07T00:00:00.000 | {
"year": 2008,
"sha1": "490e5784c76214e6f82b5246308e79a15fc4061b",
"oa_license": "CCBYNC",
"oa_url": "https://academic.oup.com/bioinformatics/article-pdf/24/12/1448/487229/btn194.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "490e5784c76214e6f82b5246308e79a15fc4061b",
"s2fieldsofstudy": [
"Computer Science",
"Medicine"
],
"extfieldsofstudy": [
"Medicine",
"Computer Science"
]
} |
17513522 | pes2o/s2orc | v3-fos-license | Structural and electronic properties of Eu- and Pd-doped ZnO
Doping ZnO with rare earth and 4d transition elements is a popular technique to manipulate the optical properties of ZnO systems. These systems may also possess intrinsic ferromagnetism due to the magnetic moment borne on 4f and 4d electrons. In this work, the structural, electronic, and magnetic properties of Eu- and Pd-doped ZnO were investigated by ab initio density functional theory methods based on the generalized gradient approximation. The relative stability of incorporation sites of the doped elements in the ZnO host lattice was studied. The ground state properties, equilibrium bond lengths, and band structures of both the ZnO:Eu and ZnO:Pd systems were also investigated. The total and partial densities of electron states were also determined for both systems. It was found that in the ZnO:Eu system, ambient ferromagnetism can be induced by introducing a Zn interstitial, which leads to carrier-mediated ferromagnetism, while the ZnO:Pd system possesses no ferromagnetism. PACS 31.15.E-, 75.50.Pp, 75.30.Hx
Introduction
Semiconductor metal oxides, with applications in photoelectrochemical cells, diluted magnetic semiconductors (DMS), field effect transistors, and photoluminescence devices, have recently initiated dynamic research activities [1][2][3]. In particular, ZnO has a significant advantage for applications in optical [4] and spintronic [5] devices. As a result, doping ZnO with various elements has been a popular technique to manipulate and control ZnO's extrinsic properties for device applications [6]. Notably, rare earth (RE)- and transition metal (TM)-doped ZnO systems exhibit interesting optical and magnetic properties which do not exist in undoped ZnO. Optically, ZnO systems doped with RE ions have been intensively investigated as electroluminors with wide technological applications [7]. In RE-doped ZnO, the intra-ionic 4f transitions of RE ions form luminescent centers which generate narrow and intense emission lines [8]. Meanwhile, the enhancement in the optical absorption of TM-doped ZnO can turn these materials into efficient photocatalysts [9].
Magnetically, the intrinsic magnetic moment borne by the RE and TM ions makes RE- and TM-doped ZnO systems potential diluted magnetic semiconductors with applications in spintronic devices. Over the past decade or so, a considerable amount of effort has been devoted to the search for ZnO-based DMSs with ferromagnetism at ambient temperature. This goal is meant to be achieved by doping ZnO mainly with the first-row TMs, such as Co, Mn, and Fe [10]. However, most recently, interesting magnetism has been observed in other metal oxides doped with second-row TMs, namely in the SnO2:Pd system [11]. This stimulated further search for possible ferromagnetism [12] and functional optical properties [13] in ZnO:Pd. The realization of magnetism in the ZnO:Pd system is motivated by a previous theoretical prediction [14] and experimental observation [15] of ferromagnetism in Pd clusters. In addition to systems containing TM ions, Eu-doped ZnO (ZnO:Eu) has also shown room temperature ferromagnetic ordering [16], which is partially caused by the high magnetic moment of the Eu ions. In this work, the structural and electronic properties of the ZnO:Pd and ZnO:Eu systems are investigated by a density functional approach. Furthermore, the effects of ZnO's two dominant point defects [17], the Zn interstitial (Zn_I) and the O vacancy (V_O), on the functional properties of these materials are studied. The results of the theoretical investigations presented here will shed light on the origin of the functional properties of this relatively new family of materials.
Computational details
Ab initio calculations were performed with the density functional theory-based DMol3 package developed by Accelrys [18,19]. Geometry optimization and partial density of states (PDOS) calculations were performed with the "double-numeric plus polarization" (DNP) basis set, while the generalized gradient approximation (GGA) based on the Perdew-Wang formalism was applied for the correlation functional [20]. Real-space global cutoff radii were set for all elements at 5 Å. Since only valence electrons affect the physical properties of interest, the nuclei and core electrons were replaced by DFT semi-core pseudopotentials with a relativistic correction [21]. Brillouin zone sampling was carried out by choosing a 2 × 2 × 2 k-point set using the Monkhorst-Pack scheme with a grid spacing of approximately 0.04 Å^-1 between k points. The convergence thresholds for energy, Cartesian components of internal forces acting on the ions, and displacement were set to be 10^-5 eV/atom, 0.05 eV/Å, and 0.001 Å, respectively. Convergence testing was performed, first by increasing the k-point mesh to 3 × 3 × 3; it was found that the total energy differed by less than 10^-5 eV/atom. Then, the k-point mesh was fixed at 2 × 2 × 2, and the cutoff radii were set for all elements to 5.5 Å. Once again, no significant change in the total energy was obtained. Thus, the results were well converged.
The formation enthalpy and bandgap of undoped ZnO were calculated to be -3.5 eV and 2.0 eV, respectively. The formation enthalpy is in good agreement with the experimental value of -3.64 eV [22]. However, the bandgap is underestimated by 1.4 eV, which is attributed to the intrinsic error of the GGA. The calculated lattice constants for undoped and fully optimized ZnO were found to be 3.279 Å for a and 5.281 Å for c, which are in good agreement with the experimental data [23], overestimated by only 0.9% and 1.5%, respectively. The Zn-O bond lengths in the relaxed structure were 2.005 and 1.997 Å along the c direction and within the ab plane, respectively. In order to avoid artificial hydrostatic stress in the doped structures, the lattice parameters were fixed to the calculated values of undoped ZnO while only the internal atomic coordinates were relaxed.
To simulate the low concentrations of dopants in ZnO, a large supercell of 4a × 4a × 2c was adopted for calculations. The original supercell contained 64 Zn-O formula units. By introducing one dopant in a substitutional or interstitial site, the concentration of the dopants would be 1.4%. With periodic boundary conditions applied, the average separation of the dopant ions is 13.114 Å. This distance is large enough to avoid artificial interaction between the dopants. As a result, the calculations on this supercell sufficiently resemble the experimental conditions of diluted dopant concentrations. The formation energy (E_f) of the dopants or a cluster of dopants in ZnO's host lattice was calculated as follows:

E_f = E_t(doped ZnO) - E_t(ZnO) + n μ_Zn - μ_M + q E_F

in which E_t, μ, and E_F represent the total energy, the chemical potential of the respective element, and the Fermi energy, respectively. μ_Zn and μ_M are set to be the calculated energies of metallic Zn and the dopants (Pd or Eu) per atom. n represents the number of Zn atoms removed from the supercell, which is zero in the case of an interstitial dopant and one for a substitutional dopant. q stands for the net number of electrons transferred from the defect to the conduction band. Since only neutral supercells were adopted for the calculations, q is zero for all configurations.
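A hedged illustration of this bookkeeping (the function mirrors the expression above; the numbers in the example call are placeholders, not values from this work):

def formation_energy(e_doped, e_host, n_zn_removed, mu_zn, mu_dopant,
                     q=0, e_fermi=0.0):
    """E_f = E_t(doped) - E_t(host) + n*mu_Zn - mu_M + q*E_F, in eV.
    n_zn_removed is 1 for a substitutional dopant, 0 for an interstitial;
    q is 0 for the neutral supercells used here."""
    return e_doped - e_host + n_zn_removed * mu_zn - mu_dopant + q * e_fermi

# Placeholder totals for a substitutional dopant (one Zn atom removed):
print(formation_energy(e_doped=-1000.0, e_host=-998.2,
                       n_zn_removed=1, mu_zn=-1.5, mu_dopant=-2.5))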
The ZnO:Eu system
The incorporation mechanism of Eu ions in ZnO's host lattice was investigated by calculating the E_f of the substitutional Eu (Eu_Zn) and interstitial Eu (Eu_I) in the stoichiometric ZnO, as presented in Table 1. It was found that the E_f of Eu_Zn is -2.391 eV while the E_f of Eu_I is 1.429 eV. Such a large difference in E_f indicates that Eu ions favorably substitute Zn ions rather than taking the interstitial sites of the ZnO lattice. The local geometry of Eu_Zn and Eu_I is presented in Figure 1a and b. Figure 1a shows that the length of the Eu-O bond along the c direction (ab plane) has increased to 2.280 Å (2.260 Å) in the ZnO:Eu_Zn system. The increase of the bond lengths is approximately equivalent to an expansion of 14% along the c direction and 13% within the ab plane with respect to the Zn-O bond length in undoped ZnO. On the other hand, in the ZnO:Eu_I system, in which Eu_I binds to three Zn and three O ions, the lengths of the Eu-O and Eu-Zn bonds were found to be 2.323 and 2.857 Å, comprising expansions of 16% and 24%, respectively, compared to the unrelaxed structure. In the ZnO:Eu_Zn system, although the length of the Eu bonds is also substantially expanded, the expansion is much smaller than that in the ZnO:Eu_I system. As a result, the ZnO:Eu_Zn system reaches structural stability with less lattice distortion.
Next, the E_f of Eu_Zn in the nonstoichiometric ZnO was studied by considering two distinct situations that lead to nonstoichiometry: the presence of an O vacancy (V_O) and of a Zn interstitial (Zn_I).
Next, the E f of Eu Zn in the nonstochiometric ZnO was studied by considering two distinct situations that lead To investigate the electronic properties of the ZnO:Eu systems, the total and Eu's 4f partial density of states (DOS) of all configurations were calculated and presented in Figure 2. A general feature of the Eu's 4f states in all configurations is that Eu's 4f states are localized in a narrow impurity band of the width of approximately 1 eV, which is located just below the Fermi level. Such localization of the 4f states indicates that 4f electrons are not affected by the local crystal environment. This point is further reinforced by the Eu's magnetization as presented in Table 1. The spin number (S) of the Eu ions in all configurations is approximately 6.9, very close the spin number of free Eu atoms which indicates the infinitesimal hybridization of Eu's f orbitals with other orbitals in the host crystals. According to Figure 2a, b, in stochiometric systems, ZnO:Eu Zn and ZnO:Eu I , there are minor electronic states available at the Fermi level resulting in limited mobile carriers in those systems. However, this amount of carriers is not sufficient to establish carrier-mediated magnetism in the stochiometric ZnO:Eu systems, and these systems remain paramagnetic [24]. By introducing V O , in the ZnO:Eu Zn + V O system, the V O 's impurity states appear below Eu's 4f states as shown in Figure 2c. Thus, V O does not enhance the carrier concentration in the ZnO:Eu Zn + V O system either. As shown in Figure 2d in the ZnO: Eu Zn + Zn I system, Zn I 's 4s states appear in a small peak at the Fermi level, partially hybridizing with Eu's 4f states and introducing further carriers at the Fermi level. To investigate the possibility of ferromagnetic coupling in the defective systems, two substitutional Eu ions were located in the supercells. Then the E f of each system was calculated once for ferromagnetic magnetic alignment (E FM ) and once again for antiferromagnetic magnetic alignment (E AFM ) of the Eu ions. Finally, ΔE is defined to be E AFM -E FM which is an indicator of ferromagnetic phase stabilization. For Eu ions separated by approximately 3.4 Å (nearest possible distance), ΔE was found to be 21 meV for the ZnO:Eu + Zn I system and 3 meV for the ZnO:Eu + V O system. However, for both systems, the ΔE vanishes when the separation between the Eu ions increases to approximately 6 Å. This trend in ΔE indicates that Zn I induces short range ferromagnetic coupling in the ZnO:Eu system.
The ZnO:Pd system
In the ideal ZnO lattice, the length of the O-Zn bond within the basal plane is 1.997 Å. The radius of O2-, being 1.40 Å, leaves enough room for Pd2+, with an ionic radius of 0.64 Å, to substitute the Zn2+ ion without significant lattice distortion, creating a Pd_Zn. Alternatively, Pd2+ can fit in the octahedral interstitial site, which is located in the interstitial channel along the c axis. In addition to the octahedral interstitial site, there is a tetrahedral interstitial site in ZnO which has a Zn2+ ion and an O2- ion as nearest-neighbor atoms, at a distance of about 1.67 Å (0.833 times the Zn-O bond length along the c axis). Thus, a Pd ion cannot be placed at this site without severe geometric constraints. In order to determine the preferred site of the Pd ion in the ZnO lattice, the E_f of Pd for both the ZnO:Pd_I and ZnO:Pd_Zn systems was calculated. It was found that the E_f of Pd_Zn and Pd_I were 0.776 and 1.612 eV, respectively. Such a difference results in a higher concentration of Pd_Zn than of Pd_I under thermal equilibrium conditions in the ZnO:Pd system. This finding is in agreement with the reported experimental data that Pd2+ tends to substitute Zn2+ in ZnO [13]. Figure 3a, b show the local geometry of Pd_Zn and Pd_I in ZnO. For the ZnO:Pd_Zn system, the Pd-O bond is 2.127 and 2.199 Å along the c direction and within the ab plane, respectively, with approximate expansions of 6% and 10% compared to the unrelaxed structure. For the ZnO:Pd_I system, the Pd-O and Pd-Zn bonds have increased to 2.236 and 2.451 Å, respectively, with expansions of 11% and 6% with respect to the unrelaxed structure. Similar to the ZnO:Eu system, in the ZnO:Pd system, Pd_Zn has a lower E_f and causes less lattice distortion. Electronically, a Mulliken population analysis indicated that both Pd_Zn and Pd_I are isovalent to the Zn ions, transferring two electrons to neighboring O ions. This implies that the Pd 4d shell remains fully occupied; thus the Pd ions are not magnetized in the ZnO host lattice, which is reflected in the zero magnetization of Pd ions in all configurations as presented in Table 2. As a result, the stoichiometric ZnO:Pd system is nonmagnetic.
Similar to the previous section, the effect of V_O and Zn_I on the ZnO:Pd system was investigated by calculating the E_f of the Pd_Zn + V_O and Pd_Zn + Zn_I complexes in ZnO, which was found to be 6.289 and 3.831 eV, respectively. In the ZnO:Pd_Zn + V_O system, the Pd-O bond is 2.300 and 2.358 Å along the c direction and the ab plane, respectively, corresponding to expansions of 15% and 18% in bond lengths with respect to the unrelaxed structure. In the ZnO:Pd_Zn + Zn_I system, the lengths of the Pd-O bond along the c direction and within the ab plane are 3.332 and 3.398 Å, respectively, which comprise an expansion of 66% along the c direction and 70% within the ab plane compared to the unrelaxed structure.
To investigate the electronic properties of the ZnO:Pd systems, the total and partial densities of states of both systems were calculated and are presented in Figure 4. As shown in Figure 4a, the 4d states of Pd_Zn and the O 2p states hybridize and form bonding (t_b) states in the valence band. The antibonding 4d states with e symmetry are located above the t_b states, separated by a gap of approximately 1 eV from the valence band maximum and positioned just below the Fermi level. The position of the Pd_Zn antibonding states with respect to the Fermi level indicates an n-type behavior of ZnO:Pd_Zn. Notably, the 4d states are completely degenerate for spin-up and spin-down states, with no exchange splitting. This implies that Pd_Zn does not induce any magnetization. For Pd_I, according to Figure 4b, the 4d states of Pd_I are split into bonding and antibonding states. The bonding states with t_b symmetry hybridize with the O 2p orbitals throughout the valence band. The antibonding states with e symmetry are divided into three peaks in the fundamental bandgap region. This is the major difference between the DOS of Pd_I and Pd_Zn, which may be caused by the different crystal fields experienced by each site. The closely located peaks in the bandgap region result in the high optical activity of the ZnO:Pd_I system; in particular, they may be the mechanism behind the observed red shift in the photoluminescence spectrum of the ZnO:Pd system [13]. The position of the e states relative to the Fermi level and the absence of exchange splitting indicate that Pd_I is an n-type dopant with no magnetization. In the nonstoichiometric systems, ZnO:Pd_Zn + V_O and ZnO:Pd_Zn + Zn_I, the Pd 4d states, as shown in Figure 4c, d, are distributed in a pattern similar to that of ZnO:Pd_Zn. The Pd spin-up and spin-down states are fully degenerate, indicating the absence of any magnetization per Pd ion. Therefore, these systems are nonmagnetic even when V_O and Zn_I exist in the ZnO:Pd system.
Conclusion
The structural and electronic properties of the ZnO:Eu and ZnO:Pd systems were investigated by ab initio techniques. It was found that both Eu and Pd ions favorably substitute for Zn in the ZnO host lattice. With excess Zn in the ZnO:Eu system, carrier-mediated ferromagnetism can be induced by Zn_I. In the ZnO:Pd system, the Pd ions prefer to substitute for Zn, forming substitutional Pd_Zn. Additionally, the Pd ions are isovalent to the Zn ions and, consequently, their 4d shell remains fully occupied, with no magnetization per Pd ion.
"year": 2011,
"sha1": "51ecbc23e2025fd7f588ac94eca11b97864aafa8",
"oa_license": "CCBY",
"oa_url": "https://nanoscalereslett.springeropen.com/track/pdf/10.1186/1556-276X-6-357",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "6cfa0d5ffc8aae172d6381af7ca41d9776ea4df8",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Materials Science",
"Medicine"
]
} |
Combination therapy targeting Toll-like receptors 7, 8 and 9 eliminates large established tumors
Background: The TLR7/8 agonist 3M-052 and the TLR9 agonist CpG ODN both trigger innate immune responses that support the induction of tumor-specific immunity. Previous studies showed that these agonists used individually could improve the survival of mice challenged with small tumors but were of limited therapeutic benefit against large/advanced tumors. Methods: Normal mice were challenged with syngeneic tumors. Once these tumors reached clinically detectable size (500-800 mm³) they were treated by intra-tumoral injection with 3M-052 and/or CpG ODN. Anti-tumor immunity and tumor growth were evaluated. Results: The co-delivery of agonists targeting TLRs 7, 8 and 9 increased the number and tumoricidal activity of tumor infiltrating CTL and NK cells while reducing the frequency of immunosuppressive MDSC. The combination of 3M-052 plus CpG ODN (but not each agent alone) eradicated large primary tumors and established long-term protective immunity. Conclusion: The combination of agonists targeting TLRs 7/8 and 9 represents a significant improvement in cancer immunotherapy.
Background
Toll-like receptors (TLRs) comprise a family of highly conserved germline-encoded pattern recognition receptors that detect pathogen-associated molecular patterns (PAMPs) expressed by a variety of infectious microorganisms [1]. The ability of TLRs to trigger the innate immune system and bolster adaptive immunity against antigens expressed by pathogens and tumors is well established [2,3]. At least 13 different TLRs have been identified in mammals, with TLRs 7, 8, and 9 being similar in their recognition of nucleic acid motifs and expression within endosomal compartments [1,4,5].
Studies show that TLR7 is primarily expressed by plasmacytoid dendritic cells (pDC), TLR8 by monocytes, monocyte-derived (m)DCs, macrophages and Langerhans cells, and TLR9 by DCs, B cells, monocytes and mast cells [6-9]. Synthetic agonists designed to stimulate TLR7 typically trigger TLR8 as well and induce the secretion of IL-12 and TNFα by mDCs and/or pDCs [10,11]. Many TLR7/8 agonists also enhance the expression of co-stimulatory molecules and the migration of DCs, thereby facilitating the induction of Th1 immune responses [12,13]. Synthetic oligonucleotides that express CpG motifs trigger TLR9 and elicit a Th1-dominated immune response characterized by the production of pro-inflammatory cytokines (including IL-12, IFNα, and TNFα) and the up-regulation of costimulatory (CD80 and CD86) and MHC class I and II molecules [14-16].
The anti-tumor activity of TLR agonists targeting TLRs 7, 8 and 9 has generally been explored by delivering them systemically to mice with relatively small tumors (typically ≤200 mm³) [17]. While effective against tumors <300 mm³, TLR-based therapy of large tumors (>500 mm³) has been much harder to achieve [18-21]. A growing body of evidence suggests that the efficacy of TLR agonists might be improved by i) using them in combination and ii) injecting them directly into the cancerous tissue [18,22]. Large tumors are commonly infiltrated by immunosuppressive leukocytes that down-regulate anti-tumor responses. Local delivery of TLR agonists appears to interfere with the function of tolerogenic cells in the tumor microenvironment. In this context, intra-tumoral injection of CpG ODN reduced the number and suppressive activity of tumor-infiltrating MDSC [22]. Based on preliminary findings, we hypothesized that a combination of agonists targeting TLRs 7, 8 and 9 might be highly effective against established tumors. Unfortunately, the physicochemical characteristics of first-generation TLR7/8 agonists resulted in a short in vivo half-life that reduced their activity when co-delivered with CpG ODN.
In the current work, this limitation was overcome by studying a novel TLR7/8 agonist (3M-052) modified with a lipophilic tail that persists in vivo at high levels for at least 24 hr after administration [23] (Additional file 1: Figure S1 and Additional file 2: Table S1). Results show that the combination of 3M-052 plus CpG ODN significantly increases CTL activity and Th1 cytokine production while down-regulating the activity of immunosuppressive MDSC. Although neither CpG ODN nor 3M-052 alone was effective against large tumors, the combination was highly active, mediating tumor eradication and the establishment of long-term immunity.
Results
Effect of TLR agonists on the growth of small tumors

CT26 colon cancer cells were implanted subcutaneously into the flank of syngeneic BALB/c mice. When these tumors reached 200 mm³ in volume, 100 μg of ODN and/or 50 μg of 3M-052 was injected intra-tumorally and the procedure repeated 2 days later. Tumors in untreated mice proliferated rapidly and increased in size by 5-fold within 2 wk (Figure 1). The rate of proliferation was significantly reduced by treatment with either CpG ODN or 3M-052, although the tumors persisted. In contrast, animals treated with a combination of CpG ODN plus 3M-052 completely rejected their tumors (p < 0.01; Figure 1).
Effect of TLR agonists on the frequency of tumor infiltrating mMDSC, NK and CD8 T cells
Immune cells in the tumor microenvironment profoundly influence the success of immunotherapy. A single cell suspension was prepared from tumor samples, and the frequency of various immune subsets evaluated by FACS (Additional file 3: Figure S2). The number of mMDSC is considered an important marker of immune suppression, as these cells suppress the tumoricidal activity of CTL and NK cells. Consistent with previous reports, the frequency of Gr1+CD11b+ mMDSC was significantly elevated in mice bearing CT26 tumors (Figure 2). Treatment with either CpG ODN or 3M-052 alone reduced the number of mMDSC infiltrating the tumor site by ≈50% (p < 0.05). The combination of these two agonists resulted in a nearly 90% reduction in mMDSC frequency (p < 0.01, Figure 2). This effect was detectable by 1 day after the second treatment.
Previous studies showed that the infiltration of NK and CD8 T cells into the tumor microenvironment was associated with improved host survival [22]. The effect of TLR agonist treatment on the frequency of tumoricidal cells was therefore analyzed. The number of NK cells was ≈25% higher in mice treated with 3M-052 or CpG ODN when compared to untreated controls (p < 0.05, Figure 2). This increase was magnified in mice treated with the combination of both TLR agonists. By comparison, while CpG ODN or 3M-052 alone increased CD8 T cell frequency by approximately 2-fold, the combination of both agonists synergistically increased CD8 T cell numbers by >5-fold (p < 0.05, Figure 2). No effect on the frequency of Foxp3+ Treg was observed (Additional file 4: Figure S3).
Two experiments were performed to explore the functional activity of these CD8 T cells. Splenocytes from mice in each group were isolated and stimulated ex vivo with the CT26-derived AH-1 tumor peptide. To monitor CTL activity, the frequency of IFNγ secreting cells was determined by ELISpot assay. Consistent with the changes in the frequency of CD8 T cells noted above, the number of cells stimulated by AH-1 peptide to produce IFNγ was >8-fold higher in mice treated with CpG ODN plus 3M-052 than in controls by 3 days post treatment (p < 0.001, Figure 3). To evaluate the relevance of these T cells in vivo, mice that had been challenged with tumor and treated with the combination of CpG ODN plus 3M-052 were injected with anti-CD8 Abs. As seen in Figure 4, protection was abrogated by depletion of CD8+ but not CD4+ T cells, indicating that tumor-specific CD8 T cells were critical mediators of tumor immunity.

Figure 3. Effect of TLR agonists on tumor-specific CTL. CT26 tumors were implanted into BALB/c mice as described in Figure 1. Spleen cells and tumor infiltrating lymphocytes were isolated one day after the second treatment, stimulated ex vivo with AH-1 peptide, and monitored for IFNγ secretion by ELISpot assay. Results represent the mean + SD of 6 mice from 2 independent experiments. *, p < 0.05; **, p < 0.01.
Effect of TLR agonists on gene expression in the tumor microenvironment
Treatment with CpG ODN and/or 3M-052 led to significant changes in the frequency of CD8 T cells, NK cells and MDSC (Figure 2). To evaluate the activity of these cells, the expression of genes associated with their immunological function was examined by qPCR. The genes selected to evaluate CD8 and NK cell responses were IL-12 and IFNγ (which contribute to the induction and maintenance of immunity) and granzyme B (which mediates their cytotoxicity) [24-26]. As seen in Table 1, cells isolated from the tumors of mice treated with either CpG ODN or 3M-052 had higher levels of expression of IL-12, IFNγ and granzyme B than tumor-infiltrating cells from untreated mice (p < 0.05). In animals treated with a combination of both agonists, mRNA levels were significantly higher than with either agonist alone (see Table 1 legend). This effect was additive for IL-12 and IFNγ and supra-additive for granzyme B.
It is well established that MDSC suppress T cell cytotoxicity in the tumor microenvironment largely through the depletion of L-arginine by arginase-1 and the production of nitric oxide by iNOS [27,28]. The expression of Arg1 and Nos2 by tumor-infiltrating immune cells was therefore evaluated by qPCR. Results show that 3M-052, but not CpG ODN, reduced the expression of the genes encoding these immunosuppressive enzymes (Table 1). The combination of CpG ODN plus 3M-052 further reduced the expression of both genes, an effect culminating in a nearly 90% reduction in Nos2 mRNA (p < 0.05).
Immune suppression in the tumor microenvironment can take many forms. One metric of the down-regulation of CTL activity is the expression of CTLA-4 by T cells; another is the production of the immunoinhibitory molecule TGFβ. CTLA-4, a homologue and antagonist of CD28 [29,30], acts as a negative regulator of T cell activation by depriving T cells of CD28-mediated costimulation [30,31]. TGFβ, in turn, suppresses both innate and adaptive immune responses in the tumor microenvironment, and CTL-mediated tumor elimination is reduced by its presence [32,33]. As both 3M-052 and CpG ODN tend to reduce the level of immune suppression in the tumor microenvironment, their effect on CTLA-4 and TGFβ expression was examined. When compared to cells isolated from tumors treated with PBS, both TLR agonists mediated a significant reduction in the level of expression of these genes (Table 1). The combination of both 3M-052 and CpG ODN was even more effective (p < 0.05).
Effect of TLR agonists on the growth of large tumors
To evaluate the effect of TLR agonists on tumors of clinically relevant size, CT26 cancer cells were implanted as described above and treatment initiated only after the resultant tumors reached ≈800 mm³ in volume. Mice were then injected intra-tumorally twice weekly for one month with 200 μg of CpG ODN and/or 100 μg of 3M-052. Tumors in untreated mice proliferated rapidly over this period, reaching a volume of >2,000 mm³ within 10 days (mandating their sacrifice as per ACUC guidelines, Figure 5). While both CpG ODN and 3M-052 therapy slowed tumor growth and prolonged survival, tumors in all animals reached 2,000 mm³ by 3 wk after the initiation of treatment (Figure 5). In contrast, 87% (13/15) of the mice treated with the combination of CpG ODN plus 3M-052 in 3 independent experiments completely rejected their tumors (p < 0.01; Figure 5).
To verify the utility of this combination against even more aggressive tumors, the studies were repeated in C57BL/6 mice challenged with B16-F10 tumor cells. Therapy was initiated when these tumors reached ≈500 mm³ in volume. These cancers grew so rapidly that all control mice reached the 2,000 mm³ endpoint and had to be sacrificed in less than one wk (Figure 6). The same endpoint was reached by all animals treated with a single TLR agonist within 2 wk. In contrast, nearly 90% of recipients (8/9) of the combination therapy survived indefinitely, totally clearing their tumors (Figure 6 and Additional file 5: Figure S4). Two approaches were taken to verify that TLR-induced tumor-specific immunity was responsible for these cures. First, lymphocytes were isolated from the draining LN of mice challenged with CT26 tumors one week after the initiation of therapy. These cells were then stimulated in vitro with AH-1 peptide and their production of IFNγ monitored. As in Figure 3, T cells from mice treated with the combination of CpG ODN plus 3M-052 generated significantly stronger tumor-specific responses than did any of the controls (p < 0.001, Figure 7A).
The twice-weekly combination therapy with CpG ODN plus 3M-052 was discontinued when tumors could no longer be detected (generally after 1 month). There was no recurrence of these cancers through 3 months of follow-up. To verify that these mice had developed long-lasting tumor-specific immunity, they were re-challenged with a 10-fold higher dose of CT26 cells. As seen in Figure 7B, all of these animals survived whereas naive controls perished.

Table 1. Mice were treated as described in Figure 1. mRNA was isolated from tumor infiltrating cells one day after the second treatment and analyzed by RT-PCR. Each point represents the mean ± SD fold difference in cells from treated vs untreated tumor-bearing mice derived from independently studying 6 mice/group in 2 independent experiments. *, p < 0.05; **, p < 0.01; ***, p < 0.001 when compared to PBS-treated controls. Note: the level of expression of all genes from mice treated with CpG ODN plus 3M-052 was also significantly different (p < 0.01-0.05) from that of mice treated with CpG ODN alone or 3M-052 alone.
Discussion
Individual TLR agonists can significantly improve the host's response to small tumors [34-36]. In the hope of identifying a pairing of agonists that might be effective against large established tumors, a number of TLR agonist combinations were examined. Based on preliminary studies, the combination of 3M-052 plus CpG ODN was selected for further analysis. The anti-tumor activity of CpG ODN includes i) the stimulation of pDC that improve the generation of tumoricidal NK and CD8 T cells and ii) triggering MDSC to differentiate into M1 macrophages that no longer mediate immune suppression [19,20,22,37,38]. TLR7/8 agonists also support the induction of cancer-specific immunity by triggering an innate response characterized by the production of Th1 cytokines (including TNFα, IL-12, and IFNγ) and DC maturation [39]. Of interest, TLRs 7, 8 and 9 are expressed on different subsets of immune cells that together include T cells, DCs, NK and NKT cells, all of which contribute to anti-tumor activity [40,41]. Whereas many forms of immunotherapy are effective against small tumors (<300 mm³), activity wanes when larger tumors are targeted. A number of factors contribute to the resistance of established tumors. In addition to challenging the immune system with a larger number of target cells, organized tumors are better able to cloak themselves in immunosuppressive Tregs and MDSC [42,43]. In this context, MDSC from patients with advanced tumors are particularly effective at inhibiting tumor-specific CD8 T cells [44]. Having found that intra-tumoral delivery of CpG ODN was considerably more effective than systemic administration for the treatment of tumors, our plan was to examine whether adding a TLR7/8 agonist could further improve this therapeutic approach. Unfortunately, first-generation TLR7/8 agonists were water soluble and proved ineffective when co-administered with CpG ODN. A relatively new TLR7/8 agonist was identified that contains a modified tail allowing it to persist in vivo after being injected into the tumor (3M-052) [23]. Further studies therefore evaluated the activity of locally administered 3M-052 in combination with CpG ODN.

Figure 7. Large established tumors (≈800 mm³ in volume) were treated as described in Figure 5. A) Cells from the tumor draining LN were isolated one day after the third treatment, stimulated ex vivo with AH-1 peptide, and monitored for IFNγ secretion by ELISpot assay. Results represent the mean + SD of 4 independently studied mice/group. B) Mice cured of their CT26 tumors by treatment with CpG ODN plus 3M-052 (a cure being defined as being free of detectable tumor for ≥2 months after the cessation of therapy) were rechallenged with 10⁶ CT26 cells. Their survival compared to naive mice challenged with the same tumor dose is shown (N = 6 mice/group). *, p < 0.05; **, p < 0.01; ***, p < 0.001.
The value of combination therapy was initially examined under conditions where a single TLR agonist only delayed tumor growth (Figure 1). Large tumors were then studied, in which the combination of CpG ODN plus 3M-052 proved highly successful against both CT26 colon cancers and B16-F10 melanomas. Whereas each agonist alone barely delayed the progression of these large tumors, cure rates on the order of 80-90% were achieved by combination therapy. Indeed, as weeping of the injected material from the tumor site was sometimes observed, it is possible that even higher success rates might be achieved by technical improvements in TLR agonist delivery. Successful therapy of large tumors required twice-weekly treatment with CpG ODN plus 3M-052 over the course of ≈1 month. A single dose had no detectable effect on the growth of large tumors, while 1-2 wks of combination therapy resulted in only short-lived tumor regression. Systemic treatment was uniformly unsuccessful.
There are several mechanisms by which TLR agonists can support the elimination of established tumors. CD28 is a co-stimulatory molecule that enhances the proliferation, cytokine production and survival of TCR-activated T cells. This process is antagonized by CTLA-4, a surface receptor that is up-regulated when T cells become activated [30,31]. We observed that the level of mRNA encoding CTLA-4 was significantly reduced in mice receiving combination therapy (Table 1). This down-modulation of CTLA-4 may help explain the improved activity of tumor-specific T cells found in the current work (Figure 3), consistent with previous findings [45]. We also observed a decrease in mRNA encoding the immunosuppressive cytokine TGFβ in mice treated with CpG ODN plus 3M-052. TGFβ is produced by tumor cells and Gr-1+CD11b+ MDSC in the tumor microenvironment and serves to suppress both innate and adaptive arms of the immune system [29,46,47]. Consistent with current findings, reduced TGFβ signaling is known to enhance tumor elimination by improving CTL activity [32,33].
To establish the role of increased CTL function in recipients of combination therapy, cells from the tumor draining lymph node were isolated and stimulated ex vivo with the CT26-specific AH-1 peptide. While CpG ODN and 3M-052 alone boosted the number of cells secreting IFNγ, significantly more cells from recipients of combination therapy were stimulated to produce that cytokine (Figures 3 and 7A). Consistent with the conclusion that these cells contribute to tumor eradication, the levels of mRNA encoding cytokines that promote Th1 and cellular immunity (IL-12 and IFNγ) and that mediate the lytic activity of NK and CD8 T cells (granzyme B) were all significantly upregulated in the tumor microenvironment (Table 1), as were the numbers of tumor-infiltrating CD8 T cells and NK cells (Figure 2).
Despite the above findings, the mechanism(s) by which engagement of TLRs 7, 8 and 9 synergistically enhances anti-tumor immunity will require further investigation. Since all three TLRs utilize the MyD88-dependent signaling pathway, it might seem unlikely that cells expressing receptors for all three agonists could be responsible for such synergy (as any single TLR agonist would be sufficient to trigger such cells). Yet recent studies of TLR expression by individual pDC indicate that phenotypically identical cells nevertheless express very different levels of each receptor, and that engagement of multiple receptors may be necessary to reach a critical activation threshold. Moreover, certain cells express TLR7 or 8 but not TLR9 (such as iNKT cells), while MDSC express high levels of TLR9 but only low levels of TLR7/8 [22,48].
Other examples of TLR synergy have been observed. For example, co-delivery of the TLR3 agonist poly A:U induced an effective anti-tumor response when used in combination with CpG ODN under conditions where each agonist alone was ineffective [49]. It was also shown that combinations of TLR2, 3 and 9 ligands could enhance DC function and the induction of T cell immunity following vaccination [50]. Finally, a DC-based vaccine delivered with TLR3 and TLR2 agonists provided enhanced protection to mice challenged with tumor [51].
A number of clinical trials have explored the activity of CpG ODN in cancer patients. Results indicate that CpG treatment induces a dose-related increase in serum levels of IP-10, IFNα, MIP-1α, and IL-12p40 [52,53]. While anti-tumor activity was observed in several phase II trials [54], this finding was not reproduced in a definitive phase III study [55,56]. Of note, none of these studies combined CpG ODN with a TLR7/8 agonist, and they generally administered the ODN systemically rather than intra-tumorally. We postulate that the local delivery of combination TLR 7/8/9 agonists is critical for improving the host's anti-tumor response by acting on multiple cell types in the tumor microenvironment, including mMDSC, CD8 T lymphocytes and NK cells. Of interest, MDSC express receptors for both agonists and play a vital role in protecting tumors from immune aggression by inhibiting T and NK cell activity [22,57]. Current findings demonstrate that the combination of CpG ODN plus 3M-052 reduced mMDSC frequency by 10-fold when compared to untreated mice and by 3-5-fold when compared to either agonist alone (Figure 2). This reduction was associated with a significant decline in iNOS and arginase-1 expression (Table 1), a constellation of findings that may explain the increase in tumor-specific CTL activity in mice treated with combination therapy (Figures 3 and 7A).
Conclusions
This work shows that co-administering CpG ODN with 3M-052 is remarkably effective at eliminating large established tumors. This anti-tumor activity is associated with a significant diminution in the frequency of tumor-resident MDSC and an accumulation of tumor-lytic NK and CD8 T cells, resulting in persistent anti-tumor immunity. These findings indicate that combination TLR immunotherapy may be of considerable benefit in cases of advanced cancer.
Reagents
3M-052 was supplied by 3M Drug Delivery Systems Division as a 4 mg/ml stock solution in ethanol. Endotoxin-free phosphorothioate ODN were synthesized at the Core Facility of the Center for Biologics Evaluation and Research, Food and Drug Administration (Bethesda, MD). The sequences used were: CpG ODN 1555 (5′-GCTAGACGTTAGCGT-3′) and control ODN 1612 (5′-GCTAGAGCTTAGCGT-3′). All ODN were dissolved in PBS at a final concentration of 4 mg/ml.
Mice and tumor cell lines
6-8 wk old BALB/c and C57BL/6 mice were obtained from the National Cancer Institute (Frederick, MD). The CT26 colon cancer cell line was a kind gift from Dr. Zack Howard (National Cancer Institute) and the B16-F10 cell line was purchased from the American Type Culture Collection (Manassas, VA). Tumor cell lines were maintained in RPMI 1640 medium supplemented with 10% FCS, 100 U/ml penicillin, 100 µg/ml streptomycin, 25 mM HEPES, 1.0 mM sodium pyruvate, nonessential amino acids, and 0.0035% 2-ME. All studies were approved by the National Cancer Institute Frederick Animal Care and Use Committee.
Tumor experiments
BALB/c mice were injected with 10⁵ CT26 tumor cells while C57BL/6 mice received 10⁵ B16-F10 tumor cells. All injections were s.c. into the right flank. Treatment was initiated when tumors reached a defined size (usually after 2-3 wk). Tumor volume was calculated by the formula (length × width × depth)/2, and mice whose tumor exceeded a diameter of 2.0 cm were euthanized as per ACUC regulations. Two treatment regimens were used. For small tumors (<300 mm³), two doses of 100 μg of CpG ODN (4 mg/ml) and/or 50 μg of 3M-052 (4 mg/ml) were injected intra-tumorally using a 30 G needle.
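The volume formula and the humane endpoint are simple to encode; the minimal sketch below illustrates them, where treating the 2.0 cm limit as applying to the largest caliper dimension is our interpretation and the example dimensions are hypothetical.

```python
def tumor_volume_mm3(length_mm: float, width_mm: float, depth_mm: float) -> float:
    """Tumor volume from caliper measurements, per the formula in the text:
    (length x width x depth) / 2."""
    return (length_mm * width_mm * depth_mm) / 2.0

def endpoint_reached(length_mm: float, width_mm: float, depth_mm: float,
                     max_diameter_mm: float = 20.0) -> bool:
    """Euthanasia criterion, interpreted here as any caliper dimension
    exceeding 2.0 cm."""
    return max(length_mm, width_mm, depth_mm) > max_diameter_mm

# A hypothetical 10 x 10 x 6 mm tumor: 300 mm^3, the upper bound of the
# "small tumor" regimen described above
print(tumor_volume_mm3(10, 10, 6))   # -> 300.0
print(endpoint_reached(10, 10, 6))   # -> False
```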
To deplete CD4+ or CD8+ T cell subsets, mice were injected i.p. with 25 µl of ascites of rat anti-mouse CD4 (L3/T4) or mouse anti-mouse CD8 (Ly2.2) Abs from Cedarlane Labs (Burlington, NC) on days −2, 0, 3 and 6 post-tumor implantation. For large tumors (500-800 mm³), 200 μg of CpG ODN and/or 100 μg of 3M-052 were injected intra-tumorally twice weekly for one month. Inactive controls for each TLR agonist were included in all experiments. Tumor growth curves were generated from five mice per group and all results were derived by combining data from 2-3 independent experiments.
ELISpot assay
Single cell suspensions were prepared from whole spleen, tumor-infiltrating leukocytes or tumor draining lymph nodes, and 1.5-3.0 × 10⁵ cells/well were stimulated for 12 hr with the class I-restricted CT26-derived AH-1 peptide (1 µg/ml) in 96-well Immulon II plates (Millipore, Billerica, MA) coated with anti-IFNγ Ab (R4-6A2) (BD Biosciences). The plates were washed and treated with biotinylated polyclonal goat anti-IFNγ Ab (R&D Systems, MN) followed by streptavidin alkaline phosphatase. Spots were visualized by the addition of a 5-bromo-4-chloro-3-indolyl phosphate solution (Sigma Aldrich) in low-melt agarose (Sigma Aldrich) and counted manually under ×40 magnification. The number of cytokine-secreting cells was determined by a single blind reader, and all data were generated by analyzing 12 separate wells per sample.
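For readers unfamiliar with ELISpot arithmetic, the sketch below converts raw spot counts into a frequency of cytokine-secreting cells, with mean ± SD over the 12 replicate wells analyzed per sample; the counts and plating density shown are hypothetical.

```python
import statistics

def spots_per_million(spot_counts: list[int], cells_per_well: float) -> list[float]:
    """Convert raw spot counts to IFN-gamma secreting cells per 1e6 input cells."""
    return [count / cells_per_well * 1e6 for count in spot_counts]

# Hypothetical counts from the 12 replicate wells, plated at 2e5 cells/well
counts = [34, 41, 38, 29, 36, 40, 33, 37, 35, 42, 31, 39]
freqs = spots_per_million(counts, 2e5)
mean, sd = statistics.mean(freqs), statistics.stdev(freqs)
print(f"{mean:.0f} +/- {sd:.0f} IFN-gamma secreting cells per 1e6 cells")
```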
Quantitative real-time PCR analysis
Total RNA was isolated from tumor infiltrating cells one day after the second treatment using TRIzol reagent (Invitrogen), precipitated, and then reverse transcribed with a Reverse Transcription Kit (Qiagen). IL-12p40, IFNγ,
"year": 2014,
"sha1": "03d7e79c9f2f0caa9c7a9b5e9b3d33b2d02b7a6b",
"oa_license": "CCBY",
"oa_url": "https://jitc.biomedcentral.com/track/pdf/10.1186/2051-1426-2-12",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "b2b2241feb22ed96d87be0bb305bd8d7c36d1563",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Development of a Green, Quick, and Efficient Method Based on Ultrasound-Assisted Extraction Followed by HPLC-DAD for the Analysis of Bioactive Glycoalkaloids in Potato Peel Waste
α-Solanine and α-chaconine are the two most predominant glycoalkaloids (GAs) present in potato. Potato peel contains a high concentration of GAs, which are especially interesting for application in the pharmaceutical industry due to their different beneficial properties (such as anticarcinogenic, anti-inflammatory, antiallergic, antipyretic, antiviral, fungicide, and antibiotic activities, among others); so, potato peel waste can be valorized by extracting these biologically active compounds. For this, a green, quick, and efficient miniaturized analytical approach based on ultrasound-assisted extraction (UAE) combined with HPLC-DAD was developed to quantify α-solanine and α-chaconine in potato peel. Several extraction parameters were optimized, including the extraction method, the type of solvent, and the sample/solvent ratio, by a three-factor, three-level (3³) full factorial experimental design. The optimal extraction conditions were obtained with UAE using methanol as the solvent and a sample/solvent ratio of 1:10 (w/v, g/mL). The analytical greenness metric for sample preparation (AGREEprep) tool was used to assess the greenness of the method, revealing an acceptably green analysis with a score of 0.61. The method was validated and applied to the evaluation of GAs in the peel of 15 commercial varieties of potato. The amounts of glycoalkaloids found in the samples evaluated ranged from 143 to 1273 mg/kg and from 117 to 1742 mg/kg dry weight for α-solanine and α-chaconine, respectively. These results reveal the substantial variability that exists between potato varieties, so their analysis is of great importance for selecting the most suitable ones for biovalorization (e.g., the Amandine and Rudolph varieties, with around 3000 mg/kg, in total, of both GAs). To provide higher stability to the peel during storage, freeze-drying or a medium-temperature drying process is preferable to avoid GA degradation. Overall, this study will contribute to the expansion of the future biovalorization of potato peel waste, as well as provide a powerful analytical tool for GA analysis.
Introduction
Glycoalkaloids (GAs) are secondary metabolites found in plants, mainly of the Solanaceae family. They are nitrogen-containing glycosides that have a trisaccharide moiety attached, via the 3-OH group, to a lipophilic six-ring steroid aglycone skeleton (called solanidine) [1]. In the common cultivated potato (Solanum tuberosum L.), α-chaconine and α-solanine are the main GAs (Figure 1); together, they account for approximately 95% of the GAs found in potato, so their total amount is generally referred to as total GAs (TGAs) [2]. GAs are present throughout the potato plant, including leaves, roots, flowers, fruits, tubers, and sprouts, with the highest levels observed in those parts of the plant with high metabolic rates [3]. In tubers, the distribution of GAs is not uniform, and their concentration is from 3 to 10 times greater in the peel than in the flesh, especially in the green parts of the peel, eyes, and sprouts. In addition, large differences can be found in TGA concentrations in tubers depending on the potato variety and stage of maturity; however, their accumulation is also affected by environmental variables during farming operations, postharvest handling, and storage, such as light exposure, high temperature, and mechanical damage, among others [4,5]. The presence of GAs in Solanaceae plants is associated with a natural defense mechanism against the attack of fungi, bacteria, insects, and herbivores. The ingestion of these compounds by humans at doses greater than 2 mg/kg of body weight can produce acute toxic effects due to the anticholinesterase action of GAs on the central nervous system, as well as cell membrane disruption, including gastrointestinal symptoms such as nausea, vomiting, diarrhea, and fever, and, in more extreme cases, can cause neurological disorders, low blood pressure, coma, or death [6]. As a result, the EU, in its Commission Recommendation 2022/561 on monitoring the presence of GAs in potatoes and potato-derived products, established an indicative level of 100 mg/kg fresh weight (FW) of TGAs in potatoes and processed potato products [7]. However, although GAs are understood to be potentially toxic, over the last two decades some studies have shown that they possess health-promoting effects. So, depending on the dosage and conditions of use, beneficial properties such as anticarcinogenic, anti-inflammatory, antiallergic, antipyretic, antiviral, fungicide, and antibiotic activities have been demonstrated, among others [8,9].
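To put these two thresholds in perspective, the short sketch below relates the EU indicative level to the acute-toxicity dose; the 70 kg body weight is an assumption chosen purely for illustration.

```python
def toxic_intake_kg(body_weight_kg: float, tga_mg_per_kg_fw: float,
                    toxic_dose_mg_per_kg_bw: float = 2.0) -> float:
    """Mass of fresh potato (kg) delivering the acute-toxicity threshold dose
    of total glycoalkaloids (default: 2 mg TGA per kg of body weight)."""
    toxic_dose_mg = body_weight_kg * toxic_dose_mg_per_kg_bw
    return toxic_dose_mg / tga_mg_per_kg_fw

# Assumed 70 kg adult eating potatoes at the EU indicative limit (100 mg/kg FW)
print(f"{toxic_intake_kg(70, 100):.1f} kg")  # -> 1.4 kg of potato in one sitting
```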
Nowadays, the potato stands as a significantly vital vegetable, cultivated and consumed across numerous countries worldwide and serving as a high-quality food rich in carbohydrates, proteins, vitamins, minerals, and fiber [10,11]. Ranking as the third largest food crop, its global production reached 359.07 million tons in 2020, according to the Food and Agriculture Organization of the United Nations (FAO) [12]. Forecasts suggest its rise to a prominent position within the global food security framework, especially as other cereal crops approach their yield limits. It is noteworthy that two-thirds of the world's population consider potatoes a staple food. Within developing countries, fresh potato remains the primary consumption preference, but the consumption of processed potato products such as French fries, potato chips, frosted potato, dehydrated potato flakes, etc., is gradually increasing [12]. This high consumption generates a huge volume of by-products, mainly potato peels, which are a problem, since wet peels are prone to microbial deterioration. This by-product is usually used as low-value animal feed or discarded, causing environmental concern. In fact, it is anticipated that in 2030 around 8000 kilotons of potato peel waste might be generated, with related greenhouse gas emissions of 5 million tons of CO2 equivalents [13]. In that respect, in an approach to the concept of "One Health", potato peel waste can be valorized by extracting high-value-added compounds (i.e., phenolic compounds, GAs, fiber, minerals, etc.), particularly interesting for application in the food and pharmaceutical industries, that can be used for the formulation of nutraceuticals and phytopharmaceutical products or as additives and ingredients in functional foods [9,14].
For this reason, efficient and sustainable industrial-scale extraction processes to obtain high-value extracts are required, this being one of the greatest challenges to be addressed for potato peel waste valorization [9]. In addition, this involves the development and validation of analytical protocols that allow for the characterization of the composition of the potato peels before they can be released for potential biovalorization [8].
Nowadays, analytical methods should be aligned with the current trend towards green analytical chemistry (GAC), whose most important assumptions were formulated in the form of twelve principles that express a willingness to care for human safety and the environment during the development and application of analytical procedures [15]. In this regard, the proper implementation of the GAC principles would include practices such as minimizing reagent consumption, using biodegradable and low-toxicity reagents, saving energy, reducing waste production, and increasing the degree of miniaturization of analytical tools and procedures, among others. Regarding GA analysis, high-performance liquid chromatography with a diode array detector (HPLC-DAD) or coupled to mass spectrometry (HPLC-MS) are recommended techniques [6] that have traditionally been used after GA extraction with an appropriate sample treatment protocol. Despite MS detection having advantages (i.e., higher selectivity), HPLC-MS methods require a time-consuming extract purification step to avoid matrix effects. For this reason, HPLC-DAD should be considered a suitable alternative due to the relatively widespread availability of the required instrumentation in analytical laboratories. Conventionally, α-solanine and α-chaconine are extracted from potato peel using the solid-liquid extraction (SLE) technique, using different laboratory mixers (i.e., vortex mixers, magnetic stirrers, shakers) to improve the extraction rate [16]. On the other hand, ultrasound-assisted extraction (UAE) has also been evaluated as, in general, this technique notably reduces extraction times and energy and solvent consumption and enhances the recovery yields of the target analytes [17]. In this sense, UAE is a potentially environmentally friendly choice to extract GAs from potato peels, which can be scaled up to the industrial level [18]. However, for an efficient GA extraction in line with GAC, some parameters need to be optimized, and the analytical methodology has to be validated to assure the quality of the results. In addition, because the efficiency of this extraction process depends on several parameters, a design-of-experiments approach is advisable to achieve the optimal experimental conditions. The methods developed would be useful to find and use potato varieties that have a greater or lesser tendency to accumulate GAs in the peel, depending on the final use of the extract obtained. In addition, the discovery of health benefits of potato GAs, balanced against concerns about their toxicity, implies that analytical methodology will be paramount in future efforts designed to enhance the levels of these compounds in the human diet [11].
Therefore, the aim of this study was to compare different extraction protocols for α-solanine and α-chaconine from potato peel waste. Several extraction parameters were optimized, including the extraction method, the type of solvent, and the sample/solvent ratio, by a three-factor, three-level (3³) full factorial experimental design methodology. Under the optimal conditions, the performance of the developed UAE-HPLC-DAD method was evaluated, and the method was applied to determine the levels of α-solanine and α-chaconine in the peel of fifteen varieties of potato. Finally, the effect of temperature on the GA content in the peel extracts was evaluated with the developed procedure. To the best of our knowledge, this is the first green, quick, efficient, and validated analytical methodology based on HPLC-DAD that allows the characterization of the main GAs in this by-product.
Reagents and Materials
The HPLC-grade solvents methanol (MeOH) and ethanol (EtOH) were obtained from Scharlab (Barcelona, Spain), and acetonitrile (ACN) was obtained from Fisher Chemical (Madrid, Spain). Monosodium phosphate (NaH2PO4) and disodium phosphate (Na2HPO4) were purchased from Panreac (Barcelona, Spain). Ultrapure deionized water with a resistivity of 18.2 MΩ cm was obtained from a Milli-Q system (Billerica, MA, USA).
Sample Collection and Moisture Determination
Fifteen commercial varieties of potato (Agata, Agria, Amandine, Amaris, Caesar, Colomba, Evolution, Frisia, Lady Amarilla, Memphis, Monalisa, Rudolph, Soprano, Universa, and Vivaldi) were acquired from supermarkets in Madrid and Albacete (Spain) in 2023. Table S1 summarizes the labelling information found for the prepackaged fresh potatoes used in this study (e.g., origin, caliber, category, and recommended use). The potatoes were peeled manually with a knife peeler (~1.5 mm thickness). The peels were then pre-frozen in an ultra-freezer at −80 °C for 24 h and freeze-dried in a LyoBench freeze-dryer (Noxair Life Sciences S.L., Barcelona, Spain) for 48 h at a temperature of −50 °C and a pressure of 0.076 mbar. Subsequently, the freeze-dried samples were powdered using an analytical grinder (IKA, Staufen, Germany) to yield potato peel powder, which was stored in Falcon® tubes in a desiccator at room temperature until use. The total moisture content of the peels was found to be between 78 and 85% (Table S1), determined by loss on oven-drying at 60 °C for 24 h, as described previously [19]. All optimization experiments for GA extraction were carried out with the Caesar potato variety. Because the GA content is not homogeneous in potato peel, a quantity of homogenized and powdered sample large enough for all the optimization experiments was prepared.
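The loss-on-drying determination reduces to a one-line calculation; the sketch below shows it with hypothetical weights (the formula, not the numbers, follows from the text).

```python
def moisture_percent(fresh_g: float, dried_g: float) -> float:
    """Moisture content by loss on drying: 100 * (fresh - dry) / fresh."""
    return 100.0 * (fresh_g - dried_g) / fresh_g

# Hypothetical peel weights before and after 24 h at 60 degrees C
print(f"{moisture_percent(5.00, 0.95):.1f}%")  # -> 81.0%, inside the 78-85% range
```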
Preliminary Studies
Preliminary studies were conducted to assess the efficacy of different solvents and extraction methods in the extraction of GAs. First, the sample and solvent (in a ratio of 1:20, w/v, g/mL) were homogenized by vortexing for 1 min at 3000 rpm (Rx3 Velp Scientifica, Usmate, MB, Italy). Then, for conventional SLE, the mixtures were subjected to agitation for 5, 10, 15, and 20 min with a magnetic stirrer (IKA RCT basic, Staufen, Germany). On the other hand, for UAE, the mixtures were sonicated with a Sonopuls HD 3100 ultrasonic homogenizer (100 W, 60 kHz, Bandelin, Berlin, Germany) equipped with an MS 73 titanium probe (13 mm diameter). The UAE employed an amplitude of 75% in pulsed mode (pulse durations of 0.1 s "on" and 0.2 s "off") for 5, 10, 15, and 20 min. These experiments were carried out at ambient temperature (around 23 °C) and were performed in triplicate.
Design of Experiments to Reach the Optimal Extraction Conditions
A full factorial experimental design methodology employing three factors at three levels (3³) was utilized. This design aimed to assess the impact of the extraction method (A), solvent type (B), and sample/solvent ratio (C) on GA extraction. The extraction methods examined in the experimental design included (i) conventional vortex-assisted SLE (VA-SLE) for 1 min; (ii) conventional SLE with magnetic stirring (MgS-SLE) for 5 min; (iii) UAE with 5 min of sonication. The solvents tested were MeOH, EtOH, and H2O, while the sample/solvent ratios of 1:10, 1:20, and 1:40 (w/v, g/mL) were evaluated. The choice of the levels for each independent variable was based on preliminary experiments and previous related research. The experimental design, the analysis of the results, and the prediction of the responses were conducted using Statgraphics Centurion software (version 16.3.03).
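The 27-run design matrix implied by this 3³ layout can be generated mechanically; the sketch below does so with the factor levels listed above (Statgraphics was used in the actual study, so this is only an illustration of the design's structure).

```python
from itertools import product

extraction = ["VA-SLE (1 min)", "MgS-SLE (5 min)", "UAE (5 min)"]  # factor A
solvent = ["MeOH", "EtOH", "H2O"]                                  # factor B
ratio = ["1:10", "1:20", "1:40"]                                   # factor C (w/v, g/mL)

# Full 3^3 factorial: every combination of the three levels of each factor
design = list(product(extraction, solvent, ratio))
print(len(design))  # -> 27 runs
print(design[0])    # -> ('VA-SLE (1 min)', 'MeOH', '1:10')
```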
Optimized Extraction Conditions
For UAE of GAs, 0.3 g of sample (freeze-dried potato peel powder) was placed in a Falcon® tube, and 3 mL of MeOH was added (sample/solvent ratio of 1:10, w/v, g/mL). The ultrasound probe was submerged to a depth of 5 mm in the solvent, and the mixture was sonicated at a constant frequency for 2.5 min at room temperature. Then, 1 mL of the resulting solution was filtered through a nylon membrane (0.45 µm) and, subsequently, analyzed by HPLC-DAD. The extraction was performed in triplicate (n = 3), and the results are presented as mean ± standard deviation (SD).
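Assuming the results in mg/kg DW are back-calculated from the HPLC-measured extract concentration in the usual way (analyte mass in the extract divided by the dry sample mass), the conversion for these conditions is as sketched below; the 50 mg/L concentration is hypothetical.

```python
def mg_per_kg_dw(conc_mg_per_l: float, solvent_ml: float, sample_g: float) -> float:
    """Analyte content (mg/kg dry weight) from the HPLC-measured extract
    concentration, assuming the analyte is fully transferred to the solvent."""
    analyte_mg = conc_mg_per_l * solvent_ml / 1000.0   # mg in the extract
    return analyte_mg / (sample_g / 1000.0)            # per kg of dry peel

# Optimized conditions: 0.3 g of peel powder in 3 mL of MeOH (ratio 1:10);
# a hypothetical extract concentration of 50 mg/L maps to 500 mg/kg DW
print(mg_per_kg_dw(50.0, 3.0, 0.3))  # -> 500.0
```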
Optimal Chromatographic Conditions for Analysis
The chromatographic analysis was performed on an Agilent 1260 Infinity II HPLC system (Agilent Technologies, Madrid, Spain), equipped with a G7104C 1260 flexible pump, a G7167A multisampler, a G7116A multicolumn thermostat, and a G7117C diode array detector HS. Agilent OpenLab CDS ChemStation Edition was used for full instrument control, data acquisition, and analysis. The separation of α-solanine and α-chaconine was performed using an InfinityLab Poroshell 120 EC-C18 column (3.0 mm i.d. × 150 mm, 2.7 µm particle size) to which a guard column (3.0 mm i.d. × 50 mm, 2.7 µm particle size) with the same stationary phase was attached. The mobile phase was a mixture of ACN-0.01 M sodium phosphate buffer, pH 7.2-MeOH (60:30:10, v/v/v) in isocratic mode. The flow rate was set at 1 mL/min with an injection volume of 20 µL. The column temperature was maintained at room temperature, and the autosampler tray was cooled to 4 °C. The analysis time was 11 min. Quantification was performed by UV detection at 202 nm and comparison with the external standards for both compounds. The results are expressed as mg/kg dry weight (DW) of each analyte and as TGAs (sum of the α-solanine and α-chaconine levels).
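The separation quality reported later in the Results (retention times of 8.3 and 9.4 min and a resolution of 5.1) can be checked against the standard resolution formula; in the sketch below the baseline peak widths are hypothetical, chosen so that the formula reproduces the reported value.

```python
def resolution(t1_min: float, t2_min: float, w1_min: float, w2_min: float) -> float:
    """Chromatographic resolution from retention times and baseline peak widths:
    Rs = 2 * (t2 - t1) / (w1 + w2)."""
    return 2.0 * (t2_min - t1_min) / (w1_min + w2_min)

# Retention times of alpha-solanine (8.3 min) and alpha-chaconine (9.4 min);
# the ~0.215 min baseline widths are hypothetical, chosen to reproduce Rs ~ 5.1
print(f"Rs = {resolution(8.3, 9.4, 0.215, 0.215):.1f}")  # -> Rs = 5.1
```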
Method Validation
Due to the current lack of official regulations on analytical performance requirements for GAs in food, method validation was conducted in terms of linearity, method detection (MDL) and quantification (MQL) limits, accuracy, precision, and selectivity, following the criteria described in the SANTE/12682/2019 document, in Regulation (EC) No 401/2006, and in the Q2(R1) ICH guidelines [20-22]. Linearity was evaluated through calibration curves, which were constructed using five standard solution mixtures containing from 1 to 100 mg/L of each GA, for three consecutive days. A suitable regression analysis of the signal (y, peak area) on the analyte concentrations (x) established in the calibration set yielded the calibration curve for the predicted responses. Linearity was evaluated through the coefficient of determination (R²) of the calibration curves. The sensitivity of the method was determined through the MDL and MQL from the analysis of the least concentrated standard solution analyzed (1 mg/L); these were estimated as the lowest concentrations of analyte that could be detected and quantified with a signal-to-noise ratio (S/N) exceeding 3 and 10, respectively. Accuracy was evaluated by spiking the samples at low (200 mg/kg of each analyte) and high (400 mg/kg of each analyte) levels and obtaining the recovery values (% ± SD). Recoveries were calculated by comparing the areas obtained for samples spiked with a known concentration of the target analytes and subjected to the optimized UAE procedure (n = 6) with the areas obtained for the simulated samples (samples spiked at the same concentration but at the end of the UAE procedure, just prior to their chromatographic analysis). The recovery values had to be between 70 and 120%. On the other hand, method precision was evaluated in terms of repeatability and reproducibility, using the same validation levels (200 and 400 mg/kg of each analyte). Repeatability is expressed as the relative standard deviation (RSD, %) for six replicates (n = 6) of a sample spiked with the GAs, at the low and high validation levels, on the same day. Reproducibility (also expressed as %RSD) was calculated from the analysis of three replicates of a sample (spiked with the analytes at both validation levels) carried out over three different days (n = 9). According to the validation guidelines used, the RSD values for these precision parameters had to be ≤20%. The selectivity of the method was determined by comparing the UV-Vis spectra of standard solutions of α-solanine and α-chaconine with the spectra obtained for the target analytes in the spiked and non-spiked sample extracts. The absence of coeluted peaks and signals from matrix interferences, as well as the constant retention times, demonstrated the selectivity of the method.
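The recovery and precision calculations described above are straightforward to express; the sketch below uses hypothetical peak areas for six replicates at one validation level.

```python
import statistics

def recovery_percent(spiked_areas: list[float], simulated_areas: list[float]) -> float:
    """Recovery (%): samples spiked before the UAE procedure vs. 'simulated'
    samples spiked just before HPLC analysis."""
    return 100.0 * statistics.mean(spiked_areas) / statistics.mean(simulated_areas)

def rsd_percent(values: list[float]) -> float:
    """Relative standard deviation (%), used for repeatability/reproducibility."""
    return 100.0 * statistics.stdev(values) / statistics.mean(values)

# Hypothetical peak areas for six replicates at one validation level
spiked = [1010, 985, 1022, 998, 1005, 990]
simulated = [1100, 1085, 1092, 1110, 1098, 1095]
print(f"Recovery = {recovery_percent(spiked, simulated):.1f}%")  # ~91%, within 70-120%
print(f"RSD = {rsd_percent(spiked):.1f}%")                       # well below the 20% limit
```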
Evaluation of the Effect of the Drying Conditions on the α-Solanine and α-Chaconine Content in Potato Peels
To assess the effect of temperature during potato peel drying on the GA content, two heating conditions achieved using a conventional laboratory oven (60 °C and 103 °C for 24 h) were evaluated. The potato peels from six different varieties (Agata, Amandine, Caesar, Monalisa, Rudolph, and Soprano) were subjected to drying. Subsequently, once the TGA content in each potato peel had been analyzed, the results were compared to those obtained for the same samples subjected to freeze-drying, as indicated in Section 2.2.
Optimization of the Chromatographic Method for α-Solanine and α-Chaconine Determination
The separation of α-solanine and α-chaconine can be achieved by HPLC-DAD in reversed-phase mode with C18 columns using aqueous phosphate buffers in combination with an organic modifier, typically MeOH or ACN [6]. Taking this into account, various experiments were conducted to optimize the chromatographic parameters. This included assessing the ratio between the organic solvents (ACN or MeOH) and the aqueous phosphate buffer in the mobile phase. Based on previous work [23], a combination of 60% ACN and 10% MeOH as the organic solvent was selected. With regard to the aqueous solution, a 30% proportion of sodium phosphate buffer was evaluated at different pH values. The results revealed that the best separation was achieved with a 0.01 M phosphate buffer solution at pH 7.2. Further optimization included adjusting the column temperature to 20 °C and setting the flow rate to 1 mL/min. Under these optimized conditions, α-solanine exhibited a retention time of 8.3 min, while α-chaconine showed a retention time of 9.4 min, achieving a resolution of 5.1. This indicated an excellent chromatographic separation of the target analytes (Figure 2).

Preliminary experiments were carried out using the one-factor-at-a-time methodology to select the variables for the experimental design used to optimize the extraction process. Figure 3 shows the results obtained, expressed in mg of TGAs per kg of DW. For this purpose, 0.5 g of sample was weighed, and 10 mL of solvent (sample/solvent ratio 1:20, w/v, g/mL) was added. The extractions were performed with both MeOH and EtOH, since they are common solvents previously used by other authors [24-28]. In all experiments, the sample-solvent mixture was first subjected to 1 min of vortex agitation in order to disperse the sample in the solvent and accelerate the extraction process; then, the resulting extracts were analyzed (0 min in Figure 3). The extracts were subsequently subjected to MgS-SLE or UAE. In both cases, 1 mL aliquots were taken every 5 min, and the TGA concentration was determined in the extracts (5, 10, 15, and 20 min in Figure 3). Based on the results obtained in previous work by Apel et al. [16], all experiments were carried out with UAE in pulse mode with an amplitude of 75%, since pulse amplitudes in the range of 50-100% had no significant effect on the extraction yields of the individual GAs or the TGAs of potato peels. The continuous-pulse mode was not tested, as it involves higher energy consumption. Furthermore, experiments were conducted at room temperature with the aim of developing a potential low-cost, green protocol that could be suitable for industrial-scale extraction.

As shown in Figure 3, the highest value of TGAs (around 1600 mg/kg DW) was achieved with MeOH. Using this solvent, no significant differences were found in the extraction after applying 5 min of magnetic stirring or UAE, compared to vortexing exclusively (0 min in Figure 3). On the other hand, with EtOH, it was observed that 5 min of magnetic stirring or UAE produced an increase in the TGAs extracted, with the higher value obtained with UAE. Extraction times greater than 5 min did not increase the extraction yields of the target analytes with either of the two solvents. With these results in mind, VA-SLE (for 1 min), MgS-SLE (for 5 min), and UAE (for 5 min) were selected for evaluation in the experimental design to optimize the extraction protocol.

Taking into account the results obtained with EtOH, which is a greener and more environmentally sustainable extraction solvent, assays were additionally carried out to evaluate whether an increase in temperature improved the extraction of GAs, as EtOH is a more viscous solvent. The tests were carried out at 50 °C applying 1 min of VA-SLE or 5 min of UAE, and the results showed no significant increase in the extraction yields. For this reason, this variable was not included in the experimental design, which was aimed at developing a potentially low-cost and environmentally friendly protocol suitable for industrial-scale extraction.
On the other hand, for the selection of a third solvent, SLE and UAE experiments were carried out with H2O, CH3COOH (1 and 5%), 5% CH3COOH-MeOH (1:1 and 1:4, v/v), ACN, 0.01 M sodium phosphate buffer (pH 7.2), and ACN-0.01 M sodium phosphate buffer (pH 7.2)-MeOH (60:30:10, v/v/v). Among these options, considering that the extraction yields were not significantly improved with any of the solvents tested, H2O was selected for the experimental design, as it is the most environmentally sustainable option. Furthermore, the aqueous extracts were suitable for HPLC analysis, allowing a very good chromatographic resolution of the α-solanine and α-chaconine peaks to be achieved.
Finally, in order to maximize the extraction of TGAs and to minimize the amount of residues, the sample/solvent ratio was considered as the third independent variable for the experimental design. In that respect, some preliminary assays were carried out to verify the influence of this variable on TGA extraction (from 1:10 to 1:40, w/v, g/mL), with significant differences between the different types of extraction and solvents used.
Experimental Design, Evaluation of the Variables Influencing the Extraction Efficiency, and Statistical Analysis
The optimization of the extraction conditions by the one-factor-at-a-time methodology does not consider the possible interactions between the studied factors. It is therefore very useful to apply a full factorial experimental design, which minimizes the number of experiments while allowing the effect of each factor, and of the interactions between factors, on the extraction yield to be evaluated simultaneously. Based on the results of the preliminary tests, the full factorial design proposed in this work included three independent variables, corresponding to the three factors of the design: two categorical (extraction type (A) and solvent type (B)) and one numerical (sample/solvent ratio (C)). Each factor had three levels: the categorical factors had levels 1, 2, and 3, while the numerical factor had a low (-1), a medium (0), and a high (1) level. The dependent variables corresponded to the concentrations of the analytes, i.e., TGAs, α-solanine, and α-chaconine, obtained in each case (Table 1).

Table 2 shows the results obtained for the 27 assays indicated in the experimental design matrix (Table 1), expressed as mean concentrations of three replicates ± SD. The experimental values ranged from 15 ± 6 to 917 ± 75 mg/kg DW for α-solanine, from 24 ± 7 to 915 ± 42 mg/kg DW for α-chaconine, and from 48 ± 22 to 1622 ± 157 mg/kg DW for TGAs. To determine the effects of each variable and the possible interactions between them, both the main effects plot of each variable and the Pareto chart were constructed and are shown in Figure 4. The main effects plot can be used to compare the relative strength of the effects of the different variables, as well as to determine whether each variable affects the response positively or negatively. The Pareto chart shows the absolute values of the standardized effects of each variable and of the possible interactions between them, and allows one to determine whether a factor has a significant effect on the response. The gradients of the main effects plot shown in Figure 4A indicated that the solvent type (B) was the most influential parameter, with the same effect on α-solanine and α-chaconine. The extraction type (A) and the sample/solvent ratio (C) showed a weaker effect on the GA extraction yield, with opposite directions for α-solanine and α-chaconine. While for α-solanine the best extraction type was MgS-SLE, for α-chaconine it was UAE, making the latter the best option for the analysis of TGAs. In relation to the sample/solvent ratio (C), the highest extraction yield for α-chaconine was obtained with a ratio of 1:20, while for α-solanine the extraction increased with the sample/solvent ratio up to 1:40, resulting in a slightly positive overall effect of this variable on the extraction of TGAs. The trend observed in the main effects plot (Figure 4A) for the three variables coincided with that observed in the Pareto chart (Figure 4B). As can be seen, the solvent type (B) was the variable with the strongest influence, showing a much more significant effect on the GA extraction yield than the other variables (A and C). This fact could be due to the intrinsic chemical nature of the solvents used: water, being a high-polarity solvent, had a lower GA extraction efficiency, since the aglycones have a lipophilic nature, whereas MeOH and EtOH, being less polar organic solvents than water, allowed for a better extraction efficiency of α-solanine and α-chaconine. The effect observed for the extraction type (A) was only significant for the α-chaconine extraction yield, and the effect of the sample/solvent ratio (C) was significant for the α-chaconine and TGA extraction yields.
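For readers who want to reproduce the layout of Table 1, a minimal sketch of how the 27-run 3³ design matrix can be enumerated is shown below. The level labels follow the factors described above; the sequential run ordering is illustrative, not the randomized order used in practice:

```python
from itertools import product

# Factors and levels as described above:
# A = extraction type, B = solvent type, C = sample/solvent ratio (w/v, g/mL)
extraction = ["VA-SLE", "MgS-SLE", "UAE"]   # A, categorical
solvent = ["MeOH", "EtOH", "H2O"]           # B, categorical
ratio = ["1:10", "1:20", "1:40"]            # C, numerical levels

design = list(product(extraction, solvent, ratio))  # 3**3 = 27 runs
for run, (a, b, c) in enumerate(design, start=1):
    print(f"run {run:2d}: A={a:<7s} B={b:<4s} C={c}")
```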
Regarding the interaction between the three variables studied, the interaction between extraction type and solvent type (AB) was considered significant for the α-chaconine and TGA extraction yields; so, it can be deduced that each solvent was efficient to a greater or a lesser extent depending on the type of extraction used. α-Chaconine and TGAs were also significantly affected by the interaction between solvent type and sample/solvent ratio (BC), whereas the interaction between extraction type and sample/solvent ratio (AC) had a significant effect only on the α-chaconine extraction yield. Finally, the quadratic term of the sample/solvent ratio (CC) was not significant in any of the cases, showing that this variable did not exert a quadratic effect on the GA extraction yield.
The statistical parameters obtained from the analysis of variance (ANOVA) (F-values, p-values, R², adjusted R², predictive R²) are reported in Table S2 and indicated that the resulting quadratic models had very high predictability and could be used to optimize the extraction procedure of GAs from potato peel. The statistical analysis also confirmed the significant terms of the obtained quadratic models (p < 0.05) shown in the Pareto chart (Figure 4B).
Based on the statistical outcomes derived from the full factorial experimental design, an optimized extraction procedure providing the highest extraction efficiency of GAs from potato peel samples was established. Statistically, the most efficient extraction type (A) was UAE (5 min), with MeOH as the extraction solvent (B) and a sample/solvent ratio (C) of 1:10 (w/v, g/mL). To verify the reliability of the statistically estimated optimum, the experimentally obtained results were compared with the predicted quantitative results. To maintain the sample/solvent ratio while reducing waste production and reagent consumption, extractions were performed with 0.3 g of sample and 3 mL of MeOH, as the ultrasonic probe used in the laboratory had a recommended sample volume of up to 3 mL. Table 3 demonstrates a high similarity between the experimental and the predicted results, validating the effectiveness and reliability of the optimized method for determining the optimal extraction conditions of GAs and maximizing their extraction. Finally, once the optimal extraction conditions for GAs were established, a re-optimization study was carried out to determine whether the extraction time could be reduced to increase the sustainability of the developed method. For this purpose, new UAE experiments were carried out applying 1, 2.5, and 5 min of sonication. All assays were performed on the same day in triplicate (n = 3) under the optimal conditions. Fisher's least significant difference (LSD) test was performed to discriminate between the mean TGA concentrations, which were 1131 ± 30, 1227 ± 50, and 1296 ± 69 mg/kg DW after 1, 2.5, and 5 min, respectively. As can be seen in Figure 5, the boxplot visually shows the distribution of the data and demonstrates that there were significant differences between the GA amounts extracted in the first (1 min) and second (2.5 min) trials, but no significant differences between the second (2.5 min) and third (5 min) trials. Additionally, the data shown in Figure 5B indicate two homogeneous groups according to the alignment of X in the columns: one group for trial 1 (1 min) and another for trials 2 (2.5 min) and 3 (5 min), meaning that there were no statistically significant differences between the values obtained after 2.5 and 5 min of sonication. In Figure 5C, a multiple comparison procedure was applied to determine which means differed significantly from the others. The asterisks shown for the pairs 1-2.5 min and 1-5 min indicate statistically significant differences at the 95% confidence level, whereas the pair 2.5-5 min showed no statistically significant difference at the same confidence level. Therefore, given that 2.5 min of sonication was sufficient to extract the maximum amount of GAs from the potato peel samples, this was the sonication time chosen for the method validation.
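As a rough cross-check, the LSD cutoff can be approximated from the reported summary statistics alone. The sketch below uses only the published means and SDs (for equal group sizes the pooled error variance is the mean of the group variances), so its pairwise verdicts are approximate and need not exactly reproduce the published grouping, which was computed from the raw replicates:

```python
import numpy as np
from scipy import stats

# Reported triplicate means +/- SDs (mg/kg DW) for 1, 2.5, and 5 min of sonication
means = np.array([1131.0, 1227.0, 1296.0])
sds = np.array([30.0, 50.0, 69.0])
n, k = 3, 3                               # replicates per group, number of groups

mse = np.mean(sds**2)                     # pooled error variance for equal n
df_err = k * (n - 1)
lsd = stats.t.ppf(0.975, df_err) * np.sqrt(2.0 * mse / n)
print(f"LSD(95%) ~ {lsd:.0f} mg/kg DW")

labels = ["1 min", "2.5 min", "5 min"]
for i in range(k):
    for j in range(i + 1, k):
        d = abs(means[i] - means[j])
        verdict = "significant" if d > lsd else "n.s."
        print(f"{labels[i]} vs {labels[j]}: |diff| = {d:.0f} -> {verdict}")
```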
Method Validation
The optimized UAE-HPLC-DAD method for the quantification of GAs in potato peels was validated, and the results are shown in Table 4. The external calibration curves (1-100 mg/L) were obtained with R² ≈ 0.991 for both analytes. In addition, the deviation of the slopes of the calibration curves obtained on three different days, with three consecutive injections for each standard solution (n = 9), was calculated, giving RSDs between 2 and 18%. The LOD and LOQ were 0.3 mg/L and 1 mg/L, respectively, for both analytes. Accuracy was evaluated at two concentration levels, showing adequate mean recovery values of 103 ± 5% and 100 ± 4% for α-solanine and α-chaconine, respectively (Table 4). As also shown in Table 4, satisfactory results were obtained for intra-day and inter-day precision at the two concentration levels, with RSD values lower than 13%. Finally, the selectivity of the method was studied, as shown in Figure 2, where the chromatograms obtained for the standard solutions of α-solanine and α-chaconine are compared with spiked and non-spiked sample extracts. Selectivity was further assessed through the purity of the chromatographic peaks: the absorption spectra of α-solanine and α-chaconine showed no evidence of co-elution of other compounds or interferents at the retention times of the target analytes. In addition, the retention times showed a deviation ≤ 0.1 min for all the analytes.
Table 4 footnotes: a x = mg/L; b LOD: limit of detection, estimated as 3 times the signal/noise ratio; c LOQ: limit of quantification, estimated as 10 times the signal/noise ratio; d accuracy and precision were obtained by spiking samples at two concentration levels: low (200 mg/kg of α-solanine and α-chaconine) and high (400 mg/kg of α-solanine and α-chaconine).
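To make the calibration and S/N-based limit calculations concrete, the sketch below fits an external calibration line and applies the 3×S/N and 10×S/N criteria described above. The response values and the baseline-noise estimate are simulated for illustration; they are not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical external calibration over 1-100 mg/L (simulated detector response)
conc = np.array([1.0, 5.0, 10.0, 25.0, 50.0, 100.0])          # mg/L
area = 12.4 * conc + 3.0 + rng.normal(0.0, 4.0, conc.size)    # peak areas

slope, intercept = np.polyfit(conc, area, 1)
pred = np.polyval([slope, intercept], conc)
r2 = 1.0 - np.sum((area - pred)**2) / np.sum((area - area.mean())**2)

noise_sd = 4.0                 # baseline noise estimate (assumed)
lod = 3.0 * noise_sd / slope   # concentration giving S/N = 3
loq = 10.0 * noise_sd / slope  # concentration giving S/N = 10
print(f"R2 = {r2:.4f}, LOD = {lod:.2f} mg/L, LOQ = {loq:.2f} mg/L")
```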
Greenness Evaluation of the Developed Method
The analytical procedure developed for the extraction of GAs from potato peel was evaluated in terms of greenness using the analytical greenness metric for sample preparation (AGREEprep) [29]. With this tool, the overall sample preparation greenness is indicated by a pictogram whose inner circle carries the overall score and a traffic-light color (from red to green). The criteria used were based on the ten principles of green sample preparation (e.g., use safe solvents and reagents, minimize waste, maximize sample throughput, etc.) [30], with overall values ranging from 0 to 1, a score of 1 indicating the greenest performance (see Figure 6). As can be observed, the proposed analysis method achieved a score of 0.61 points, calculated according to the criteria and scores established for this metric, owing to the sample and solvent miniaturization and the significant reduction in the extraction time (Figure 6).
Application to the Analysis of Different Potato Peels
The methodology developed was applied to determine the concentration of α-solanine and α-chaconine in the peel waste of fifteen different commercial varieties of potatoes (Table S1). As can be seen in Table 5, the TGA concentration in potato peel waste ranged from 260 ± 8 to 2823 ± 33 mg/kg DW, with Vivaldi being the variety with the lowest content and Amandine the one with the highest. In addition, significant differences in the TGA content of the potato peels were found depending on the variety analyzed. Nevertheless, it remains uncertain whether the concentration of GAs in the samples analyzed was predominantly influenced by the potato variety or by other variables related to cultivation and storage conditions, because the samples were chosen at random on the market, so the external factors that affect the GA content (such as climate, soil type, light exposure, maturity, etc.) were not controlled in this study. These results agree with those found in the literature: according to [2], depending on the potato cultivar, irradiation, storage conditions, and mechanical injury, the TGA content in potato peel can range from 84 to 3526 mg/kg DW. As can be seen in Table 5, α-chaconine was generally found in a higher percentage (from 45 to 77%) than α-solanine (from 23 to 55%), except in the Vivaldi variety, where 55% of α-solanine and 45% of α-chaconine were found. These results are consistent with those of other authors such as Musita et al. [31], who typically found a higher quantity of α-chaconine than of α-solanine. The concentration of α-solanine in the potato peel varied between 143 and 1273 mg/kg DW, while that of α-chaconine varied between 117 and 1742 mg/kg DW. There seemed to be no differences between potatoes with a yellow or a red peel, nor between potatoes of large or small caliber (Table 5). In this sense, these results demonstrated that the proposed UAE-HPLC-DAD method is suitable for determining GAs in potato peel samples over a wide range of concentrations.
Evaluation of the Temperature Effect on the Concentration of α-Solanine and α-Chaconine during Drying
Figure 7 shows the effect of the drying temperature of the potato peel on the GA content. As indicated in Section 2.6, two heating conditions obtained with a conventional laboratory oven (60 °C and 103 °C for 24 h) were evaluated, and the results were compared to those obtained for the same samples subjected to freeze-drying. The drying temperature had a significant effect on the TGA content of the potato peel samples: in all samples analyzed, the content of TGAs decreased significantly when the peels were subjected to heat drying. The decrease in TGAs at 60 °C varied from 20% (Soprano) to 62% (Amandine), while degradation at 103 °C resulted in a reduction in TGAs from 54% (Agata, Caesar, and Monalisa) to 77% (Rudolph and Amandine), as shown in Figure 7. In general, the level of analyte reduction increased with the heating temperature. Since potato peels have a high moisture content, they require a drying treatment to extend their shelf life and avoid enzymatic and microbial degradation during storage. In this sense, the results obtained in this study are important, since they show how the drying conditions affect the GA content. Therefore, to obtain extracts rich in these compounds at an industrial level (for subsequent purification and GA isolation), it would be preferable to use freeze-drying or a medium-temperature drying process.
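The percentage losses quoted above are simple relative reductions against the freeze-dried reference; a small helper makes the calculation explicit (the example values are illustrative, not taken from Figure 7):

```python
def tga_reduction_pct(c_freeze_dried, c_oven_dried):
    """Percent loss of TGAs after oven drying, relative to freeze-drying
    (concentrations in mg/kg DW)."""
    return 100.0 * (c_freeze_dried - c_oven_dried) / c_freeze_dried

# e.g. a peel with 2000 mg/kg DW after freeze-drying and 760 mg/kg DW after
# drying at 103 degrees C corresponds to a ~62% reduction
print(f"{tga_reduction_pct(2000.0, 760.0):.0f}%")
```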
Conclusions
A green, quick, and efficient miniaturized analytical approach using HPLC-DAD was developed to quantify α-solanine and α-chaconine in potato peel discarded by the food industry and consumers. A statistical analysis was carried out through a design of experiments (3³) to maximize the extraction of the target compounds from potato peels. The parameters optimized were the extraction method (conventional vortex-assisted SLE, conventional SLE with magnetic stirring, and UAE), the type of solvent (MeOH, EtOH, and H2O), and the sample/solvent ratio (1:10, 1:20, and 1:40, w/v, g/mL). The optimal extraction conditions involved the use of MeOH as the solvent, the application of the UAE technique for 2.5 min, and a sample/solvent ratio of 1:10, w/v. The AGREEprep tool used to evaluate the method indicated its acceptable ecological character, with a score of 0.61 points. The developed method was successfully validated and demonstrated an excellent recovery of the target analytes (around 100%), low limits of quantification (1 mg/L for both analytes), good precision, and selectivity. In addition, it was applied to 15 varieties of commercial potato peels to determine their GA concentration, yielding values of up to 2895 mg/kg DW for the combined amounts of both compounds in the Amandine and Rudolph varieties. Given the high levels of TGAs found in commercial samples, potato peels could potentially be valorized through the extraction of high-value-added compounds and used in the development of new products for the pharmaceutical industry. Lastly, it was verified that controlling the drying temperature is crucial, as the compounds to be extracted are affected by it. If scaled up to food industry levels, it is advisable to use controlled temperatures as low as possible to prevent the degradation of the GAs of interest.
Supplementary Materials: The following supporting information can be downloaded at https://www.mdpi.com/article/10.3390/foods13050651/s1. Table S1: Potato peel varieties analyzed in this work and their specifications (* data obtained from the product labels); Table S2: Values obtained from ANOVA analysis.
Figure 3. TGAs determined in potato peel after solid-liquid extraction with magnetic stirring (MgS-SLE) and ultrasound-assisted extraction (UAE) for different times, using ethanol (EtOH) or methanol (MeOH) as the extraction solvent. The sample-solvent mixture (1:20, w/v) was first subjected to vortex homogenization and then analyzed (0 min).
Figure 4. (A) Main effects chart of the three examined variables (extraction type, solvent type, and sample/solvent ratio) at three levels (see Table 1) for three responses (contents of TGAs, α-solanine, and α-chaconine, mg/kg DW). (B) Pareto charts from the 3³ full factorial experimental design of the standardized effect on each of the responses (contents of TGAs, α-solanine, and α-chaconine, mg/kg DW) for the analysis of the three variables: (A) extraction type; (B) solvent type; (C) sample/solvent ratio. The p value is significant when it is <0.05.
Figure 5. (A) Box plot. (B) Homogeneous group values obtained from Fisher's multiple range test. (C) Significant differences between pair values obtained from Fisher's multiple range test, demonstrating the statistical differences for the three extraction times evaluated under optimized UAE conditions for TGA determination (combined amounts of α-solanine and α-chaconine) in potato peel. All results were obtained with 95% confidence.
Figure 6. Greenness results of the UAE-HPLC-DAD method developed for glycoalkaloid determination in potato peel, evaluated with the AGREEprep metric. MeOH: methanol; UAE: ultrasound-assisted extraction.
Figure 7. Effect of the heating temperature during drying, compared to freeze-drying, in different potato peel samples. TGAs refers to the combined amounts of α-solanine and α-chaconine. Different letters (a, b, c) mean statistically significant differences (p ≤ 0.05).
Table 1. Summary of the three independent factors and their three different levels used for the experimental design.
Table 2. Results obtained from the 3³ full factorial experimental design methodology to optimize the glycoalkaloid extraction conditions from potato peel.
Table 3. Comparison of the predicted values with the experimental results at the statistically optimal extraction conditions. * Results are shown in mg/kg DW. TGAs refers to the combined amounts of α-solanine and α-chaconine.
Table 4. Validation parameters of the UAE-HPLC-DAD method for the quantification of α-solanine and α-chaconine in potato peel.
Fulfilling the promise of digital health interventions (DHI) to promote women's sexual, reproductive and mental health in the aftermath of COVID-19
Introduction
Globally, over 800 women die every day in pregnancy and childbirth; violence against women remains devastatingly pervasive, affecting 1 in 3 women in their lifetime; and depression rates among women are twice those of men, according to the World Health Organization (WHO) [1]. The report further emphasizes that sexual and reproductive health (SRH) services are quickly disrupted when health systems are under pressure, which is dangerous and disempowering. Therefore, access to contraception, safe abortion to the maximum extent permitted by law, STI prevention and recovery, care and assistance for abuse survivors, and self-care interventions should all be prioritized in countries' COVID-19 responses, according to WHO [2]. As the COVID-19 pandemic paralyzes health systems across nations, there has been a significant drop in access to routine healthcare, and many patients are showing interest in, and turning towards, telehealth, telemedicine, or remote virtual health services to access essential primary care. For example, in the United States, all the states have expanded their telehealth policies to reduce the pressure on hospitals treating COVID-19 patients and to reduce patients' exposure [3]. Global health emergencies in the past have revealed that during a crisis, access to safe abortion can be negatively affected [4]. While countries are still grappling with COVID-19 and its response is ever-evolving, the increased burden on health systems can result in reduced access to abortion facilities. As health systems come under mounting pressure and providers become infected, some countries have had to close down clinics offering abortion services. Such circumstances necessitate innovative solutions, not only in remote places or countries with limited resources but also in developed countries [5].
During global health emergencies, there is a total reversal of priorities and, as a result, the availability, accessibility, and affordability of SRH services may become challenging, especially in resource-poor settings A study from South Africa by Pattison et al. on the impact of the first wave of COVID-19 on maternal and reproductive health services and maternal mortality showed that there had been an increase of 30% in maternal deaths since lockdown started and the pandemic peaked in 2020, compared with the same period in 2019. Use of reproductive health services (contraception and termination of pregnancy) has declined sharply since lockdown. Rural provinces are experiencing increased pressure on their services due to pregnant women migrating from metropolitan areas back to their homes, increasing the burden on already under-resourced facilities [6].
Hence, it is critical to provide effective health facilities and health delivery systems to achieve UN SDG target 3.7 and universal access to SRHR services [7]. Sexual and reproductive health and rights (SRHR) and bodily autonomy are explicitly recognized in international human rights law. Under these rights, states are obligated to ensure access to abortion services and remove any obstacles that deny access [8]. The provision of digital health interventions (DHIs), e.g., telemedicine, mHealth, etc., is a promising strategy for transforming existing health systems and improving SRH services and healthcare in both the short and the long term. These technology-driven services address equity, especially for rural communities and marginalized groups with poor access to family planning providers and specialists in obstetrics and gynecology. As highlighted by McCoy et al., DHIs may provide access to disadvantaged or difficult-to-reach groups identified by geography, stigmatized attitudes or personalities, or people who value confidentiality. In addition, DHIs connect women to contraceptives, expand HIV self-testing and HIV pre-exposure prophylaxis (PrEP) access among vulnerable groups such as men who have sex with men (MSM), and can help spread the word about low-cost maternal care services [9]. Telehealth services such as digital communication channels have a wide scope and play a significant role in delivering messages to target groups or individuals, thereby improving SRHR service delivery. As highlighted by Bacchus et al., digital health technologies provide opportunities to advance SRH but also pose potential risks relating to confidentiality, SRH being a highly sensitive area [10]. Through telemedicine, existing geographic, social, or behavioral barriers to accessing SRH services can be addressed by facilitating the self-use of these services, adapted to the types of technology users can access and to their digital literacy skills [11].
Digital health and COVID-19
In May 2018, the Seventy-First World Health Assembly (WHA) passed Resolution WHA71.7 on Digital Health to promote healthy lives and wellbeing for everyone, everywhere, at all ages. The concept includes a range of functions for promoting the Sustainable Development Goals and equitable and universal access to quality health services; increasing health systems' sustainability, accessibility, and affordability; and strengthening health promotion, disease prevention, treatment, rehabilitation, and palliative care. It defines digital health as "the field of knowledge and practice associated with any aspect of adopting digital technologies to improve health, from inception to operation" and encompasses eHealth [12]. On March 6, 2019, the WHO Director-General announced the creation of the Department of Digital Health "to enhance WHO's role in assessing digital technologies and support Member States in prioritizing, integrating and regulating them" [13]. Digital health solutions are gaining popularity and attention and are likely to persist beyond the COVID-19 pandemic, revamping healthcare systems globally. The technology is improving day by day, and many developed nations have already conducted feasibility studies and implemented its use across various specialties for delivering healthcare services to remote patients [14].
Impact of COVID-19 on sexual and reproductive health and rights (SRHR)
COVID-19 has accelerated the use of digital technologies for immediate outbreak responses (including health communication, contact tracing, testing, surveillance, diagnostics, and treatment) and impact mitigation measures (including wellbeing and mental health promotion, telemedicine, support for gender-based violence survivors, and financial protection). Of 96 countries surveyed by WHO, 60 have deployed telemedicine to replace in-person consultations, and many have been using a range of digital technologies in their COVID-19 responses [15]. During COVID-19, to avoid preventable complications associated with abortion, it is necessary to enable self-managed abortion through telemedicine counseling, guarantee access to medications, and ensure that women are not criminalized for inducing self-abortions, a vital step towards fulfilling states' binding human rights obligations [5,16]. It is encouraging that several organizations have moved their services online and continued to sustain SRHR advocacy through innovative approaches such as telemedicine and mHealth services, or by partnering with other sectors such as commercial service deliveries and online commercial platforms, pushing governments to leverage the potential of telemedicine for SRH, particularly for abortion [17].
In a pandemic, pregnancy and childbirth are not placed on hold. Whatever the circumstances, all women have the right to a healthy and supportive pregnancy and childbirth experience, and they need high-quality, compassionate, and respectful maternity care. Evidence of unnecessarily separating mothers from their newborn babies during the pandemic is also alarming, posing serious health and well-being risks [2].
Abortion is a time-sensitive service; delays, restrictions, or lack of availability can lead people towards unsafe options to end a pregnancy. Several countries have enabled telemedicine for SRH services, including abortion. The United Kingdom, France, and Ireland have approved telemedicine and remote support of abortions. In addition, Albania has enabled telemedicine for prenatal care, and Belgium is using telemedicine for abortion pre-meetings and prescriptions. This is in accordance with WHO guidance, which confirms that self-managed abortion is safe, given that pregnant individuals have been fully informed on protocols and, if needed, have access to follow-up healthcare [3,17].
Globally, numerous DHIs are targeting a range of populations for a variety of SRHR topics across continents in different cultural contexts, which have been shown to be acceptable and feasible to implement by the end-user. Some of the recent successful interventions are summarized below (Table 1).
Therefore, the application of successful DHIs shows great promise in the area of SRH, where they can address issues of equity, access, and affordability, especially in certain remote settings. In this context, Crawford et al. have proposed a Digital Health Equity (DHE) framework that can be used to consider health equity factors, and they further argued that, along with person-centered care, DHE should be integrated into health provider education and promoted at the individual, institutional, and social levels [30].
However, under the guise of the COVID-19 pandemic, some governments, such as Poland and Romania, undermine women's health when it needs the most protection [17]. Some lawmakers and policymakers in the United States (US) have been effectively working to ban abortions by categorizing them as not "medically necessary" and "non-essential" care. In both the US and the Netherlands, courts' responses to petitions for safeguarding abortion access during this pandemic have been mixed [31]. Further, some countries have taken regressive approaches towards women's SRHR; for instance, the Lithuanian health minister has asked women to rethink abortion during their time in lockdown [17]. To cite some success stories from Africa, Zimbabwe and Nigeria have ensured continuity of SRH services by integrating them with other essential services such as immunization and food delivery programs. In Uganda, a mobile app, "SafeBoda", allowed women to order contraception to their doorstep via motorcycle delivery [32].
Access to safe abortion is essential now more than ever; reports have indicated that states' COVID-19 responses could increase unwanted pregnancies due to lockdowns, lack of access to contraceptive supplies, rising incidence of domestic violence, and increasing income insecurity [33]. Compelling women to continue with an unwanted pregnancy is a human rights violation under several circumstances, including foreseeable mental and physical health impacts on the pregnant person. During the COVID-19 pandemic, several health care services may be disrupted or inaccessible due to the increased burden on healthcare systems, further creating barriers to services required by a pregnant person [5,16,34].
Women and mental health
Common mental disorders such as depression, anxiety, and somatic complaints are more prevalent among women and affect 1 in 3 people in the community, constituting a huge public health problem. Depressive disorders account for 42% of disability from neuropsychiatric disorders among women, compared to 29% among men. The lifetime prevalence of violence against women ranges from 16 to 50%, and 80% of the 50 million people displaced by violent conflicts, civil wars, disasters, and displacements are women and children [35]. Regardless of exposure to the virus, people may experience fear and anxiety about becoming sick or dying and may feel helpless. Some may blame other people who are ill, potentially triggering a mental breakdown [29]. A wide range of psychiatric morbidities has been found, from depression, anxiety, panic attacks, somatic symptoms, and posttraumatic stress disorder (PTSD) symptoms to delirium, psychosis, and even suicidality [36]. A study of COVID-19 and adverse mental health outcomes by Gold et al. highlighted that healthcare workers, 70% of whom are women, are at high risk of mental health problems [37]. Anxiety and/or depressive disorders affect up to 20% of those seeking primary health care in developing countries, and many health professionals have gender biases that cause them to either over-treat or under-treat women when they dare to report their problems. Therefore, the WHO [35] emphasizes three key areas to address women's mental health: (1) build evidence on the prevalence, causes, mediating factors, and protective factors for mental health problems among women; (2) encourage the formulation and implementation of health policies that address the needs and concerns of women from childhood to old age; and (3) improve primary care providers' ability to recognize and manage the mental health effects of domestic violence, sexual harassment, and acute and chronic stress in women.
Application of digital health interventions (DHIs) in mental health
Mental health support for frontline health workers, patients, and carers will be crucial, as prolonged isolation, lack of social interaction, and anxiety over one's own and others' health take a toll on well-being [38]. Psychiatrists, psychotherapists, and psychologists need to ensure that they maintain their own mental health during this time, with programs such as professional supervision being of help [39]. Telemedicine services will become increasingly crucial in the pandemic setting, as physical isolation and frontline work pose both access issues and mental health stressors [40]. The various studies conducted among diverse groups of patients to assess different digital mental health interventions are summarized below (Table 2).
Protection of SRHR
States' obligations under international human rights law to respect, protect, and fulfil the rights to health, life, and non-discrimination, among other rights, should not be interrupted in times of crisis. Measures should be taken to prevent unsafe abortion, and access to SRH services, including abortion, is a non-derogable core obligation of states that should be upheld even during a crisis such as COVID-19 [50-52]. Therefore, to fulfill these core obligations, policies and laws that criminalize or obstruct access to sexual and reproductive services should be repealed. Governments must adopt WHO guidelines and a patient-centered, human rights-based approach. They must adapt their technical guidance, policies, and service-delivery models to guarantee access to SRHR by allowing telemedicine during the crisis [5,17]. The resistance towards making abortion safe and accessible has highlighted the importance of including feminist methodologies in global health research to reveal both formal and informal ways in which gender inequality manifests in healthcare access and delivery. To ensure inclusivity and representation, we must actively consider what barriers to participation exist, whose voices are missing, and what methods are used to expose these factors; above all, the global health agenda must be feminist [53]. WHO has made progress on many facets of women's rights, health, and gender equality over the last 25 years, as outlined in the visionary global policy framework, the 1995 Beijing Platform for Action on Women. Supporting feminist movements that keep governments accountable and drive change in societies by using a human rights-based approach is critical for continuing to advance the health and well-being of women everywhere, in all their diversity [54].
Conclusions
The COVID-19 pandemic has disrupted SRH services across the world, resulting in many unwanted pregnancies, stillbirths, and maternal and neonatal deaths, with negative impacts on women's mental health outcomes. Despite the challenges, some countries have leveraged health technologies to ensure access to and delivery of healthcare, paving the way to a digital health future. During lockdowns, mHealth and telemedicine have gained global prominence, revealing their potential beyond serving marginalized and underserved communities. In a post-COVID era, there are also opportunities to improve healthcare access and promote gender equality. Poverty, a lack of access to digital health interventions (DHIs), a lack of engagement with digital health in some communities, and barriers to digital health literacy are some factors that can lead to poor health outcomes. Therefore, digital health equity (DHE) should be integrated into health policies to address the issues of equity, access, and affordability, especially in remote settings. Hence, there is an urgent call for health systems to be intentional in correcting broader gender inequities and in integrating digital health technologies to build resilience to future health crises.
Evaluation of laser speckle contrast imaging as an intrinsic method to monitor blood brain barrier integrity
The integrity of the blood brain barrier (BBB) can contribute to the development of many brain disorders. We evaluate laser speckle contrast imaging (LSCI) as an intrinsic modality for monitoring BBB disruptions through simultaneous fluorescence and LSCI with vertical cavity surface emitting lasers (VCSELs). We demonstrated that drug-induced BBB opening was associated with a relative change of the arterial and venous blood velocities. Cross-sectional flow velocity ratio (veins/arteries) decreased significantly in rats treated with BBB-opening drugs, ≤0.81 of initial values. © 2013 Optical Society of America

OCIS codes: (170.0110) Imaging systems; (170.3880) Medical and biological imaging; (140.2020) Diode lasers; (170.6480) Spectroscopy, speckle.

References and links
1. N. J. Abbott, A. A. K. Patabendige, D. E. M. Dolman, S. R. Yusof, and D. J. Begley, "Structure and function of the blood-brain barrier," Neurobiol. Dis. 37(1), 13–25 (2010).
2. R. N. Kalaria, "The Blood-Brain Barrier and Cerebrovascular Pathology in Alzheimer's Disease," Ann. N. Y. Acad. Sci. 893, 113–125 (1999).
3. M. B. Shlosberg, D. Kaufer, and A. Friedman, "Blood-brain barrier breakdown as a therapeutic target in traumatic brain injury," Nat. Rev. Neurol. 6, 10 (2010).
4. O. Tomkins, I. Shelef, I. Kaizerman, A. Eliushin, Z. Afawi, A. Misk, M. Gidon, A. Cohen, D. Zumsteg, and A. Friedman, "Blood-brain barrier disruption in post-traumatic epilepsy," J. Neurol. Neurosurg. Psychiatry 79(7), 774–777 (2008).
5. E. Seiffert, J. P. Dreier, S. Ivens, I. Bechmann, O. Tomkins, U. Heinemann, and A. Friedman, "Lasting Blood-Brain Barrier Disruption Induces Epileptic Focus in the Rat Somatosensory Cortex," J. Neurosci. 24(36), 7829–7836 (2004).
6. W. H. Oldendorf, "Blood-Brain Barrier Permeability to Drugs," Annu. Rev. Pharmacol. 14(1), 239–248 (1974).
7. W. M. Pardridge, "CNS Drug Design Based on Principles of Blood-Brain Barrier Transport," J. Neurochem. 70(5), 1781–1792 (1998).
8. M. Kinoshita, N. McDannold, F. A. Jolesz, and K. Hynynen, "Noninvasive localized delivery of Herceptin to the mouse brain by MRI-guided focused ultrasound-induced blood-brain barrier disruption," Proc. Natl. Acad. Sci. U.S.A. 103(31), 11719–11723 (2006).
9. S. I. Rapoport, "Osmotic Opening of the Blood-Brain Barrier: Principles, Mechanism, and Therapeutic Applications," Cell. Mol. Neurobiol. 20(2), 217–230 (2000).
10. N. Sheikov, N. McDannold, N. Vykhodtseva, F. Jolesz, and K. Hynynen, "Cellular mechanisms of the blood-brain barrier opening induced by ultrasound in presence of microbubbles," Ultrasound Med. Biol. 30(7), 979–989 (2004).
11. Q. Jiang, J. R. Ewing, G. L. Ding, L. Zhang, Z. G. Zhang, L. Li, P. Whitton, M. Lu, J. Hu, Q. J. Li, R. A. Knight, and M. Chopp, "Quantitative evaluation of BBB permeability after embolic stroke in rat using MRI," J. Cereb. Blood Flow Metab. 25(5), 583–592 (2005).
12. P. S. Tofts and A. G. Kermode, "Measurement of the blood-brain barrier permeability and leakage space using dynamic MR imaging. 1. Fundamental concepts," Magn. Reson. Med. 17(2), 357–367 (1991).
13. S. Taheri, E. Candelario-Jalil, E. Y. Estrada, and G. A. Rosenberg, "Spatiotemporal Correlations between Blood-Brain Barrier Permeability and Apparent Diffusion Coefficient in a Rat Model of Ischemic Stroke," PLoS ONE 4(8), e6597 (2009).
14. M. Wintermark, J. Hom, J. Dankbaar, J. Bredno, and M. Olszewski, "Blood-brain barrier permeability: quantification with computed tomography and application in acute ischemic stroke," Dear Friends 53, 3 (2009).
15. L. Ruiz-Valdepeñas, J. A. Martínez-Orgado, C. Benito, A. Millán, R. M. Tolón, and J. Romero, "Cannabidiol reduces lipopolysaccharide-induced vascular changes and inflammation in the mouse brain: an intravital microscopy study," J. Neuroinflammation 8(1), 5 (2011).
16. D.-E. Kim, D. Schellingerhout, F. A. Jaffer, R. Weissleder, and C. H. Tung, "Near-infrared fluorescent imaging of cerebral thrombi and blood-brain barrier disruption in a mouse model of cerebral venous sinus thrombosis," J. Cereb. Blood Flow Metab. 25(2), 226–233 (2005).
17. E. E. Cho, J. Drazic, M. Ganguly, B. Stefanovic, and K. Hynynen, "Two-photon fluorescence microscopy study of cerebrovascular dynamics in ultrasound-induced blood-brain barrier opening," J. Cereb. Blood Flow Metab. 31(9), 1852–1862 (2011).
18. O. Prager, Y. Chassidim, C. Klein, H. Levi, I. Shelef, and A. Friedman, "Dynamic in vivo imaging of cerebral blood flow and blood-brain barrier permeability," Neuroimage 49(1), 337–344 (2010).
19. D. A. Boas and A. K. Dunn, "Laser speckle contrast imaging in biomedical optics," J. Biomed. Opt. 15(1), 011109 (2010).
20. A. Ponticorvo and A. K. Dunn, "How to build a Laser Speckle Contrast Imaging (LSCI) system to monitor blood flow," J. Vis. Exp. (45), (2010).
21. S. Yuan, A. Devor, D. A. Boas, and A. K. Dunn, "Determination of optimal exposure time for imaging of blood flow changes with laser speckle contrast imaging," Appl. Opt. 44(10), 1823–1830 (2005).
22. P. Miao, H. Lu, Q. Liu, Y. Li, and S. Tong, "Laser speckle contrast imaging of cerebral blood flow in freely moving animals," J. Biomed. Opt. 16(9), 090502 (2011).
23. Y. Atchia, H. Levy, S. Dufour, and O. Levi, "Rapid multiexposure in vivo brain imaging system using vertical cavity surface emitting lasers as a light source," Appl. Opt. 52(7), C64–C71 (2013).
24. A. K. Dunn, H. Bolay, M. A. Moskowitz, and D. A. Boas, "Dynamic imaging of cerebral blood flow using laser speckle," J. Cereb. Blood Flow Metab. 21(3), 195–201 (2001).
25. I. Sigal, Y. Atchia, R. Gad, A. M. Caravaca, D. Conkey, R. Piestun, and O. Levi, "Laser Speckle Contrast Imaging with Extended Depth of Field for Brain Imaging Applications," in CLEO: Science and Innovations, Imaging & Microscopy I (Optical Society of America, 2013), paper CTu2M.
26. A. K. Dunn, "Laser Speckle Contrast Imaging of Cerebral Blood Flow," Ann. Biomed. Eng. 40(2), 367–377 (2012).
27. J. D. Briers, "Laser Doppler, speckle and related techniques for blood perfusion mapping and imaging," Physiol. Meas. 22(4), R35–R66 (2001).
28. L. M. Richards, E. L. Towle, D. J. Fox, and A. K. Dunn, "Laser Speckle Imaging of Cerebral Blood Flow," in Optical Methods and Instrumentation in Brain Imaging and Therapy (Springer New York, 2013), pp. 117–136.
29. M. Kaiser, A. Yafi, M. Cinat, B. Choi, and A. J. Durkin, "Noninvasive assessment of burn wound severity using optical technology: a review of current and future modalities," Burns 37(3), 377–386 (2011).
30. H. Levy, D. Ringuette, and O. Levi, "Rapid monitoring of cerebral ischemia dynamics using laser-based optical imaging of blood oxygenation and flow," Biomed. Opt. Express 3(4), 777–791 (2012).
31. E. A. Munro, H. Levy, D. Ringuette, T. D. O'Sullivan, and O. Levi, "Multi-modality optical neural imaging using coherence control of VCSELs," Opt. Express 19(11), 10747–10761 (2011).
32. M. B. Bouchard, B. R. Chen, S. A. Burgess, and E. M. Hillman, "Ultra-fast multispectral optical imaging of cortical oxygenation, blood flow, and intracellular calcium dynamics," Opt. Express 17(18), 15670–15678 (2009).
33. J. Greenwood, J. Adu, A. J. Davey, N. J. Abbott, and M. W. Bradbury, "The effect of bile salts on the permeability and ultrastructure of the perfused, energy-depleted, rat blood-brain barrier," J. Cereb. Blood Flow Metab. 11(4), 644–654 (1991).
34. H. Ichikawa and K. Itoh, "Blood-arachnoid barrier disruption in experimental rat meningitis detected using gadolinium-enhancement ratio imaging," Brain Res. 1390, 142–149 (2011).
35. A. Saria and J. M. Lundberg, "Evans blue fluorescence: quantitative and morphological evaluation of vascular permeability in animal tissues," J. Neurosci. Methods 8(1), 41–49 (1983).
36. A. Y. Shih, J. D. Driscoll, P. J. Drew, N. Nishimura, C. B. Schaffer, and D. Kleinfeld, "Two-photon microscopy as a tool to study blood flow and neurovascular coupling in the rodent brain," J. Cereb. Blood Flow Metab. 32(7), 1277–1309 (2012).
37. H. P. Rani, T. W. Sheu, T. M. Chang, and P. C. Liang, "Numerical investigation of non-Newtonian microcirculatory blood flow in hepatic lobule," J. Biomech. 39(3), 551–563 (2006).
38. L. Grinberg, V. Morozov, D. Fedosov, J. A. Insley, M. E. Papka, K. Kumaran, and G. E. Karniadakis, "A new computational paradigm in multiscale simulations: Application to brain blood flow," in High Performance Computing, Networking, Storage and Analysis (SC), 2011 International Conference for (IEEE, 2011), pp. 1–12.
39. S. Lorthois, F. Cassot, and F. Lauwers, "Simulation study of brain blood flow regulation by intra-cortical arterioles in an anatomically accurate large human vascular network. Part II: flow variations induced by global or localized modifications of arteriolar diameters," Neuroimage 54(4), 2840–2853 (2011).
40. S. Lorthois, F. Cassot, and F. Lauwers, "Simulation study of brain blood flow regulation by intra-cortical arterioles in an anatomically accurate large human vascular network: Part I: methodology and baseline flow," Neuroimage 54(2), 1031–1042 (2011).
41. R. Byron Bird and P. J. Carreau, "A nonlinear viscoelastic model for polymer solutions and melts—I," Chem. Eng. Sci. 23(5), 427–434 (1968).
42. Y. I. Cho and K. R. Kensey, "Effects of the non-Newtonian viscosity of blood on flows in a diseased arterial vessel. Part 1: Steady flows," Biorheology 28(3-4), 241–262 (1991).
43. A. Sequeira and J. Janela, "An overview of some mathematical models of blood rheology," in A Portrait of State-of-the-Art Research at the Technical University of Lisbon (Springer, 2007), pp. 65–87.
44. B. M
Introduction
Exchanges between the blood and the central nervous system (CNS) are highly controlled by the blood-brain barrier (BBB), essentially composed of endothelial tight junctions and astrocytic glial cell endfeet wrapped around small brain vessels [1]. A breakdown of the barrier's integrity can negatively affect normal brain behavior. For example, recent studies demonstrated that malfunction of BBB integrity is a key factor in many pathological brain states such as Alzheimer's disease and Post Traumatic Epilepsy (PTE) [2][3][4][5]. Conversely, lack of BBB permeability represents a challenge for drug delivery into the CNS [6,7]. Consequently, many techniques are being developed to locally alter the barrier integrity and allow optimized local drug release into brain tissue [8][9][10]. It is thus often necessary to monitor the progression of BBB disruptions. MRI or CT scanners are used to monitor BBB integrity [11][12][13][14], as various MRI and CT parameters were proven to vary with BBB opening [11]. However, MRI and CT scanners are expensive to operate, are not optimized for prolonged measurement periods, and limit access to the subject during the recording period.
In preclinical studies on rodents, BBB permeability is often visualized and/or quantified using optical imaging methods and extrinsic fluorescent markers [15][16][17]. In fluorescence imaging, markers need to be injected into the vasculature, making these techniques inappropriate for continuous BBB permeability monitoring. Fluorescent dye can be re-injected in multiple successive imaging sessions, but the typically long lifetime of the dye inside the vasculature and/or the tissue considerably increases the intervals at which imaging can be performed, reducing the effective sampling rate.
As BBB opening is known to have an effect on cerebral blood flow [18], we studied the effect of BBB opening on blood flow velocity maps acquired with Laser Speckle Contrast Imaging (LSCI). LSCI is an intrinsic imaging technique that uses coherent light to measure wide-field relative flow velocity maps and offers high spatiotemporal resolution [19][20][21]. This technique has gained popularity in the past decade due to its simplicity, low cost, high spatiotemporal resolution, and its potential use in miniature, animal-mounted systems [22,23]. The ability of the technique to measure flow velocities with high sensitivity has been confirmed by comparison to Laser Doppler measurements [24], time-of-flight measurements in vivo [23], and microfluidic systems in vitro [25]. A number of review papers in the last few years outline the breadth of applications enabled by the technique in cortical blood flow imaging in healthy and damaged tissue [26][27][28][29].
Recently, we demonstrated that the coherence length of Vertical-Cavity Surface-Emitting Lasers (VCSELs) can be rapidly altered by applying different driving current modes (swept current and continuous current modes), thus making VCSELs an attractive illumination source for fast simultaneous measurements of blood flow and blood oxygenation via LSCI and Intrinsic Optical Signal Imaging (IOSI) modalities, respectively [30,31].
Previous studies reported simultaneous wide-field fluorescence imaging and LSCI [20,32]. Here we show that a similar system can be replicated with low-power VCSELs. VCSELs offer several advantages compared to LEDs (size, sharp emission peak, low energy consumption, stability) and other laser sources (cost, size, rapid intensity and coherence modulation), as further discussed in reference [31]. In this study, we acquired simultaneous relative velocity maps and vascular fluorescence maps during BBB opening and determined the effect of the opening on the relative velocity maps. Fluorescent dye imaging was used as a "gold standard" comparison technique to confirm BBB permeability changes and leakage. In addition, we simulated the effect of leaky vessel boundaries on the velocity maps to compare to our experimental observations. Our results show that the ratio between venous and arterial blood flow is significantly reduced in animals with compromised BBB, indicating that this parameter is a promising metric to assess BBB integrity. To our knowledge, this work represents the first demonstration that LSCI can be used as an intrinsic method for monitoring BBB integrity.
Evaluation of flow parameters
Figure 1 shows a schematic representation of vessels with both intact (Fig. 1(a)) and leaking (Fig. 1(b)) BBB. Based on the findings of Prager et al., who used a method based on fluorescent labeling to monitor changes in blood flow dynamics [18], we hypothesized that local changes in flow can be observed in blood vessels located near leaking zones and that leakage at the capillary level (with or without red blood cell extravasation) would translate into a change in the vascular output/input balance, i.e. the ratio of the blood flowing in and out of the tissue. Figure 1(c) represents the transverse flow profile along the dotted line in Fig. 1(b), which defines the different parameters that were assessed in this study: (1) the maximum relative blood velocity (usually at the vessel center), (2) the diameter of the vessel (full width at half maximum of the velocity profile) and (3) the area under the transverse velocity profile (calculated as a spatial integration of the flow velocity values between the two sides of the vessel). The last parameter was measured because it combines information on both (1) the relative blood velocity and (2) the vessel dilation (diameter change). The integrated velocity profile is obtained by integrating the speckle contrast values in the transverse direction; since the local speckle contrast values effectively integrate speed over depth, the integrated transverse velocity profile effectively represents the volume flux, i.e. speed multiplied by cross-sectional area.
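As an illustration of how these three parameters can be extracted in practice, the following minimal Python sketch (not part of the original study's code; the `profile` array and `pixel_size_um` sampling step are assumed inputs) computes the maximum relative velocity, the full-width-at-half-maximum diameter, and the integrated transverse profile:

```python
import numpy as np

def profile_parameters(profile, pixel_size_um):
    """Extract the three flow parameters from a 1-D transverse velocity profile.

    profile       : array of relative velocity values sampled across a vessel
    pixel_size_um : spatial sampling step of the profile, in micrometers
    """
    v_max = profile.max()                      # (1) maximum relative velocity
    above = np.where(profile >= v_max / 2)[0]  # samples above half maximum
    diameter_um = (above[-1] - above[0]) * pixel_size_um  # (2) FWHM diameter
    area = np.trapz(profile, dx=pixel_size_um)            # (3) integrated profile
    return v_max, diameter_um, area

# Example: an idealized parabolic profile across a 100 um vessel
x = np.linspace(-50, 50, 25)
profile = np.clip(1.0 - (x / 50) ** 2, 0, None)
print(profile_parameters(profile, pixel_size_um=x[1] - x[0]))
```

The integrated profile (parameter 3) is the quantity used for the output/input ratio later in the paper, since it tracks volume flux rather than peak speed alone.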
Animal preparation
The imaging studies were conducted on anesthetized (2-3% isoflurane) male Sprague Dawley rats (200-300 g). All animal studies were performed in accordance with ethics protocols approved by the University of Toronto Animal Care Committee. After anesthesia induction, the animal was placed in a stereotaxic frame. A local analgesic (lidocaine cream, EMLA) was applied to all pressure points and tissues to be incised. The animal body temperature was maintained at 37.5°C using a thermal blanket (T/Pump, Gaymar Industries, Orchard Park, NY). Hind limb withdrawal reflex, heart and breathing rates were observed at regular intervals throughout the experiment to ensure that the animal remained at a surgical plane of anesthesia. A 5 mm diameter craniotomy was performed over the sensory cortex and the dura was carefully removed to expose the brain tissue to be imaged. As previously described [30,31], the craniotomy was then surrounded with a petroleum gel track to form a well, which was filled with 1-2% agarose gel. Subsequently, the gel track was covered with a coverslip to form a cranial window. Evans blue dye (Sigma, 1.5%, 1 ml/kg) was injected via the tail vein. For local drug application, the well was filled with saline only. In five animals, a craniotomy was performed on both hemispheres, as described in section 3.2; a window was placed over the open skull area in one of the hemispheres while a saline bath (made from a petroleum gel wall surrounding the exposed skull area) was prepared over the second hemisphere. This configuration allowed for unilateral drug application by changing the solution in the bath. In two of these five animals, no fluorescent dye was injected to eliminate possible effects of dye injection on our results.
BBB opening
Lipopolysaccharide (LPS) or deoxycholic acid (DOC) was used to alter the blood-brain barrier permeability [15,33]. In a subset of experiments, we used tail vein injection of LPS (Sigma, 1 mg/kg), a drug known to induce a BBB opening [15]. Blood flow and fluorescence maps were acquired prior to, and two hours after, LPS injection, and were subsequently compared to one another. We were also interested in observing the effect of a local BBB opening. Local application of LPS on the tissue, rather than intravenously, does not systematically induce a disruption of the BBB [34]; for this reason, topical application of 2 mM DOC (dissolved in saline) was used for local BBB disruptions. In animals treated with DOC, relative blood flow and fluorescence maps were acquired prior to, and 30-40 minutes after, the application of the drug.
Optical imaging
A schematic representation of the imaging setup is shown in Fig. 2(a). Several VCSELs with different wavelengths (680, 795 and 850 nm) were incorporated into a small (~5 mm diameter) package (Vixar Inc., Plymouth, MN) to be used as an illumination source. The backscattered (or fluorescence) light was collected, collimated, and redirected onto a 14-bit EMCCD camera (Rolera EM-C2, QImaging, Surrey, BC) with the help of two imaging lenses (Nikon, 28 mm f/2.8 and 50 mm f/1.4). This configuration led to 1.8X magnification, and images of 500 × 500 pixels (or 2 × 2 mm) were acquired. To accommodate a fluorescence imaging configuration, two long-pass filters (NT54-753, Edmund Optics Inc., Barrington, NJ) were placed between the two imaging lenses (where the light is collimated) to sustain efficient blocking of the 680 nm excitation light (OD > 6), while allowing the emitted fluorescence light at wavelengths above 700 nm to pass through to the camera. A current source (Model 6221, Keithley Instruments Inc., Cleveland, OH) and a switch (Model 7001, Keithley Instruments Inc., Cleveland, OH) were used to power the VCSELs. For fluorescence images, Evans blue dye was excited by a 680 nm VCSEL source (swept driving current = 3-10 mA, optical power < 10 mW) and the camera integration time was set to 500 ms. The Evans blue dye was chosen because in its fluorescent form (when bound to albumin) it does not cross the intact BBB [35]; the accumulation of fluorescence in the extravascular milieu was thus used to confirm BBB permeation. While a 680 nm wavelength is not centered at the peak of the Evans blue absorption spectrum [35], it is well suited for fluorescence imaging since it creates sufficient fluorescence light that can be easily observed by our sensitive camera. Furthermore, light at this excitation wavelength has reduced absorption by hemoglobin as compared to visible wavelengths.
For LSCI images, the tissue was illuminated by a 795 nm VCSEL (driving current = 1.2 mA, optical power < 1 mW) and the integration time was set to 5 ms to satisfy the long exposure time criterion suggested in [21]. Speckle contrast images were calculated from the raw reflection images using a 5 × 5 pixel window around each pixel and averaged over 300 images to significantly reduce camera noise. 4-6 different depths were imaged to keep all vessels in focus. In previous studies [30], a 680 nm VCSEL was chosen to illuminate the brain tissue in speckle imaging using the LSCI technique. The current choice of 795 nm as an illumination wavelength for speckle imaging allows a simultaneous recording of fluorescence and speckle maps. This is done through a coded sequence of 50 fluorescence images acquired at high exposure times (300-500 ms) followed by 300 short-exposure (5 ms) speckle contrast images using the 795 nm VCSEL (Fig. 2(b)). The use of a single VCSEL package for both illumination wavelengths retains the advantages of a VCSEL as a fast, low-cost, and miniature light source for optical brain imaging.
The temporal averaging performed in this manner enables the extraction of robust flow values that are insensitive to transients and inhomogeneity of flow and scatterer density, while retaining spatial (~10 μm) and temporal (1.5 seconds per data point, repetition period of 16.5 seconds) resolution that is sufficiently high for the measurements performed.
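The local-contrast computation described above can be sketched as follows; this is an illustrative re-implementation assuming raw intensity frames as NumPy arrays, not the authors' actual processing code:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def speckle_contrast(frame, window=5):
    """Local speckle contrast K = sigma / mean over a window x window neighborhood."""
    f = frame.astype(float)
    mean = uniform_filter(f, window)
    mean_sq = uniform_filter(f ** 2, window)
    var = np.clip(mean_sq - mean ** 2, 0, None)  # guard against negative rounding
    return np.sqrt(var) / (mean + 1e-12)

def averaged_contrast(frames, window=5):
    """Average per-frame contrast maps (e.g. over 300 frames) to suppress camera noise."""
    return np.mean([speckle_contrast(f, window) for f in frames], axis=0)
```

Lower contrast corresponds to faster flow (more blurring of the speckle pattern within the 5 ms exposure), which is how the relative velocity maps are derived.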
Evaluation of BBB leakage on LSCI maps
The study was performed in two phases. The goal of the first phase was to determine the appropriate flow parameter to be used as a metric of BBB integrity compromise, while the goal of the second phase was to evaluate the efficacy of the metric to observe the temporal evolution of BBB permeability. The first phase of this study was performed on 15 rats separated into three distinct groups: a control group (n = 5), a second group treated with LPS (n = 4), and a third treated with DOC (n = 6). In animals treated with LPS, the BBB opening is global and all vessels are expected to be included in a treated region; we thus considered any vessel as being in the Region of Interest (ROI). However, in animals treated with DOC (local BBB opening) we only considered the arteries and the veins arising from the treated region. While the specific veins in the ROI may not always be fully related to the arteries that we observed in that ROI, we note that any arteries and veins that are diving vertically into an affected ROI region will have their relative velocities disturbed upon BBB opening (see for example Fig. 1 in [36]). For all groups (control, DOC and LPS) we evaluated blood velocity values and velocity profiles in different vessels. For the control group, 14 arteries (with diameters ranging from 60 to 300 µm) and 20 veins (diameters ranging from 60 to 260 µm) in five rats were analyzed. For the DOC group, the hemodynamic properties were evaluated for 21 arteries (diameters ranging from 45 to 230 µm) and 26 veins (diameters ranging from 85 to 400 µm) in six rats. For the LPS group, the same parameters were measured in 11 arteries (diameters ranging from 70 to 170 µm) and 15 veins (diameters ranging from 145 to 400 µm) in four rats. Only animals that showed no initial perioperative damage (brain swelling and dye leakage prior to drug application, noted in two animals) were used in this study.
In each animal, the hemodynamic changes of 3-5 veins and 3-5 arteries in the focal plane of the region of interest (ROI) were analyzed. These numbers were chosen according to the number of arteries and veins diving into the imaged ROI. Arteries and veins were identified visually by their morphology, diameter, branching pattern, and flow direction, as well as by their reflectance under green and red LED illumination, which differs due to their different concentrations of oxy- and deoxy-hemoglobin. The transverse flow profile of each vessel was traced and the three parameters defined in Fig. 1(c) were measured. In order to compare the effects in the different vessels, all measurements were normalized to their respective initial values. We then compared the changes observed in veins and arteries and calculated the relative changes in the output/input ratio, i.e. the relative changes observed in veins divided by the relative changes observed in arteries.
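A minimal sketch of this self-referenced metric, assuming per-vessel integrated-profile measurements before and after treatment (the array names are illustrative):

```python
import numpy as np

def output_input_ratio(veins_before, veins_after, arteries_before, arteries_after):
    """Relative change in veins divided by relative change in arteries.

    Each argument is an array with one integrated transverse-profile value
    per vessel; measurements are normalized to their initial values.
    """
    vein_change = np.mean(np.asarray(veins_after) / np.asarray(veins_before))
    artery_change = np.mean(np.asarray(arteries_after) / np.asarray(arteries_before))
    return vein_change / artery_change
```

A ratio near 1 indicates an intact output/input balance; values well below 1 indicate that venous outflow dropped relative to arterial inflow, the leakage signature proposed here.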
For the second phase of this study (n = 3 rats), LSCI-derived relative velocity maps and fluorescence maps were continuously acquired. Both hemispheres were simultaneously imaged. The temporal behavior of both arterial and venous velocities was analyzed for the drug-treated (DOC) and the control hemispheres, and compared to the rate of fluorescent dye leakage to evaluate the temporal progression of BBB opening on a given brain hemisphere.
Flow simulations
In support of our experimental flow velocity studies in the rat brain, we simulated blood flow velocities in a topology that resembles a section of the brain with leading arteries and collecting veins. We modeled an artery-like vessel (80 µm diameter) as an "inlet" branching into different arterioles, capillaries and venules (5-20 µm diameter), which then reconnected in a vein-like compartment (200 µm diameter). The simulated geometry has a total length of 2000 μm (Fig. 3). Geometries and dimensions were chosen according to previous imaging studies [36]. To provide better insight into the flow dynamics we included limited redundancy of the vascular network at the capillary level, even though accurate velocities and pressures for microcirculatory networks have been demonstrated in simpler models [37]. We considered the blood as a non-Newtonian fluid (NNF) whose viscosity, η, depends on the shear rate or shear rate history. This model captures the effects of the individual red blood cells on the flow through the properties of the fluid, as opposed to explicitly simulating them as a separate portion of the blood stream, thus enabling the observation of the general flow behavior. While more complex models consider the flow of red blood cells, they are, nevertheless, based on numerous assumptions and require extensive computational resources [38][39][40].
The relation between the shear stress and the shear rate for an NNF is non-linear, and can be time-dependent. The viscosity of blood is time-dependent and tends to decrease with increased stress. The Carreau power-law fluid model is commonly used to describe the viscosity of blood [41][42][43], and is given by the equation:

η(γ̇) = η∞ + (η0 − η∞)[1 + (λγ̇)²]^((n−1)/2)    (1)

where η∞ is the viscosity at infinite shear rate [Pa × s], η0 is the viscosity at zero shear rate [Pa × s], λ is the relaxation time [s], n is the power index, and γ̇ is the shear rate. In large- and medium-sized vessels, blood behaves as a homogeneous incompressible (∇ ⋅ u = 0) Newtonian fluid, with flow behavior described by the time-dependent Navier-Stokes equation:

ρ ∂u/∂t + ρ(u ⋅ ∇)u = ∇ ⋅ σ + f    (2)

Here u denotes the flow velocity vector, ρ is the constant fluid density, σ = −pI + 2ηD is the stress tensor (where I is the identity tensor and D is the rate of deformation tensor), and f are the external body forces per unit volume (e.g. gravity). The flow dynamics in small capillaries is described using the time-dependent viscosity given by the Carreau model (Eq. (1)). The COMSOL Multiphysics finite element method (FEM) suite was used to study the changes in the blood flow dynamics due to leakage in the capillaries connecting arteries and veins in the presence of a localized leaky boundary. We solved the time-dependent Navier-Stokes equation (Eq. (2)) by introducing the time-dependent viscosity predicted by the Carreau model (Eq. (1)) into the stress tensor σ. We considered a simplified model where the inflow was set as a constant (u_in = 10 mm/s) and the outflow had a zero-pressure boundary condition. The flow is considered laminar while the blood is treated as an incompressible NNF with density ρ = 1060 kg/m³, whose viscosity is obtained from the Carreau model. Following Eq. (1), the parameters for the time-dependent viscosity are given in ref [44] and ref [42], where η∞ = 0.00345 Pa × s, η0 = 0.056 Pa × s, λ = 3.13 s and n = 0.3568. Note that the blood flow velocity, u, in healthy brain capillaries varies between 5 mm/s for large (5-10 μm diameter) and < 1 mm/s for small (< 5 μm diameter) capillaries [30,45].
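For reference, the Carreau viscosity used in the simulations can be evaluated with a few lines of code; this is an illustrative sketch with the parameter values quoted above, not the COMSOL model itself:

```python
import numpy as np

# Carreau parameters for blood, as quoted in the text (refs [42,44])
ETA_INF, ETA_0, LAM, N = 0.00345, 0.056, 3.13, 0.3568  # Pa*s, Pa*s, s, dimensionless

def carreau_viscosity(shear_rate):
    """Shear-rate-dependent blood viscosity eta(gamma_dot), in Pa*s."""
    return ETA_INF + (ETA_0 - ETA_INF) * (1 + (LAM * shear_rate) ** 2) ** ((N - 1) / 2)

# Viscosity falls as shear rate rises (shear-thinning behavior)
print(carreau_viscosity(np.array([0.1, 1.0, 100.0])))
```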
Experimental effect of BBB opening on the hemodynamics of veins and arteries
Simultaneous recordings of the relative velocity map, using the LSCI technique, and of the fluorescence map from a fluorescent dye marker, Evans blue, were used to evaluate the permeability of the BBB in response to application of drugs. Following drug application, the observation of extravascular fluorescence from the dye that accumulated over time outside the vessels was used to confirm the presence of a leaky BBB (see Figs. 4(a), 5(a) and Fig. 9(a) in the Appendix). Figure 4(a) shows an example of fluorescence and relative velocity maps before (labeled "initial" in the figure) and after topical application of DOC, targeting a local opening of the BBB (labeled "DOC" in the figure), respectively. As shown in the fluorescence images (Fig. 4(a), bottom two panels), DOC induced a large permeability change and resulted in dye leakage from the vessels. Therefore, the vasculature that was clearly visible in the "initial" image is hardly visible in the final fluorescence image (even in the best focus plane). Correspondingly, this large permeability change resulted in visible changes in the flow velocity map (Fig. 4(a), top panels). The vasculature, for the most part, remains visible in the LSCI-derived relative blood flow map. There is a noticeable arterial vasodilatation accompanied by a net velocity reduction in many veins, as can be better seen by plotting the difference in flow velocities from their initial values after DOC application (Fig. 4(b)). Examples of relative flow profiles for an artery and for a vein (both locations are marked with a thick black line in Fig. 4(a)), overlaid by fits to a parabolic flow velocity profile, are shown in Fig. 4(c), in the upper and lower panels, respectively.
Another example of the effect of DOC application, where the permeability changes in the brain vessels of an individual rat were more subtle, is shown in Fig. 9. Note that in this case, fluorescence accumulation in the extravascular medium was observed only in discrete zones (see arrow in Fig. 9, bottom right panel), and relative flow profiles for an artery and for a vein near the fluorescent dye accumulation region (black lines in Fig. 9(a)) show smaller changes as well. Figure 5 shows a similar effect for the LPS-induced BBB permeability changes. Simultaneous recordings of the relative velocity map, and of the fluorescence map from the same fluorescent dye, Evans blue, were used to evaluate global changes in the permeability of the BBB in response to a tail vein injection of LPS. Figure 5(a) shows an example of the fluorescence and relative velocity maps before (labeled "initial" in the figure) and after tail vein injection of LPS, targeting a global opening of the BBB (labeled "LPS" in the figure), respectively. The action of the LPS drug was slower than that of the DOC drug. Consequently, the final maps were obtained 120 minutes after LPS injection. A clear observation of extravascular fluorescence from the dye due to opening of the BBB (Fig. 5(a), bottom two panels) was accompanied by a reduction of the relative flow velocity (Fig. 5(a), top two panels). There was no noticeable arterial vasodilatation. The reduction in net velocity in the arteries was accompanied by a larger net velocity reduction in many veins. The difference in flow velocities from their initial values is shown in Fig. 5(b). This difference can be seen in an example of the relative flow profile for an artery and for a vein (both marked with a thick black line in Fig. 5(a)), overlaid by a fit to a parabolic flow velocity profile, in Fig. 5(c). Our experimental data, shown in Figs. 4 and 5, suggest that BBB opening has a different effect on flow speeds in veins and arteries. There are several physiological factors that may affect the local individual flow velocities (anesthesia state and duration, body temperature, compensation process for the lost intravascular fluid, etc.), which will be discussed in section 4. In order to separate these possible effects and other factors causing spontaneous hemodynamic changes that are not related to BBB opening, we evaluate a "macroscopic" variable influenced by the drug-induced permeability changes, namely the ratio of the output velocities and input velocities.
The output/input ratio of the relative velocity values was evaluated for the three parameters defined in Fig. 1(c), namely (1) the maximum flow velocity value (amplitude), (2) vessel diameter, and (3) integrated transverse profile. The histogram presented in Fig. 6 shows a calculation of the output/input ratio for each parameter measured in the three distinct animal populations (control, DOC-treated, LPS-treated). The values of the integrated transverse profile output/input ratios (parameter 3), normalized according to initial values, are 1.03 ± 0.04, 0.78 ± 0.06 and 0.70 ± 0.07 for the control, DOC-treated and LPS-treated animal populations, respectively (data shown as mean ± SE). Note that while there seems to be a slight reduction in the vessel diameter ratios (2) in treated animals, no statistically significant changes in this parameter were observed. The integrated transverse profile (3) showed a statistically significant reduction in the output/input ratio after DOC (two-tailed t-test, p = 0.01) and LPS treatment (two-tailed t-test, p = 0.02) when compared to the initial values and when compared to the control group (p = 0.04 and 0.03, respectively).

Fig. 6. BBB opening and permeability change effects on the vascular output/input ratio. To compare the vein and artery hemodynamics before and after drug-induced BBB disruption, three parameters were measured in veins and arteries: the vessel diameter, the maximum relative flow and the profile area. These parameters were normalized to initial values and used to calculate the output/input ratios, i.e. the relative change in veins divided by the relative change in arteries for each measured parameter. Output/input ratios for the vessel diameter (width at half maximum of the velocity profile), the maximum relative velocity (the maximum of the velocity profile) and the transverse profile area (the area under the relative velocity profile curve) are shown for control animals (white, n = 5 rats; 14 arteries and 20 veins), animals treated with DOC (dark blue, n = 6 rats; 21 arteries and 26 veins) and animals treated with LPS (pale blue, n = 4 rats; 11 arteries and 15 veins). For each animal, the ratio was normalized according to the initial values (* = p < 0.05).
Continuous monitoring
Temporal evaluation of the effect of the DOC drug before and after application (Fig. 4) showed that, while individual arteries and veins may be susceptible to velocity fluctuations during the induced BBB permeability changes, the calculated ratio of venous and arterial blood velocity aggregated over several vessels in the ROI was significantly reduced in treated animals and can be used as a metric to assess BBB integrity. Simultaneous observation of fluorescence and speckle-derived relative velocity maps in two regions of the brain, while only one of them was treated with DOC, validates the assertion that the "control" side is minimally affected by the permeability changes, and that the velocity maps clearly differentiate between the regions where the BBB was altered and the regions where the BBB was intact in the same animal. As described in the methods section, two cranial windows were prepared and DOC was applied in only one of them, allowing the untreated hemisphere to serve as a reference (see Fig. 7(a)). Figure 7(b) shows the fluorescence (top two panels) and relative velocity (bottom two panels) maps after DOC application. A clear accumulation of dye outside the vessels is shown in the fluorescence image (top right) for the treated hemisphere. Overlaid on the relative velocity image for the treated hemisphere (bottom right) are markings for regions inside a vein, inside an artery, and in the extravascular tissue where the temporal evolution of the relative flow velocity is traced, as shown in Fig. 7(c). While the temporal curves for all tissue compartments (vein, artery and tissue) show a monotonic slow reduction in the blood velocity of ~10% over an hour, the drug-treated hemisphere (bold lines, Fig. 7(c)) clearly shows an effect of the permeability change that results in relative velocity changes. In this animal, the observed velocity in the drug-treated hemisphere increased in all of these tissue compartments. However, the increase was stronger for the artery, leading to an effective reduction in the output/input ratio for these individual vessels. Moreover, our analysis shows that the output/input integrated transverse profile ratio, calculated for several veins and several arteries in the ROI, was significantly reduced in the treated area (Fig. 7(d), thick bold time traces), as compared with the untreated "control" hemisphere, in agreement with the findings reported in Fig. 6. Taken over n = 5 rats, the output/input ratio (integrated transverse velocity profile ratio) was reduced after DOC application to the treated hemisphere (Fig. 7(d), bar graph). This reduction in flow velocity ratio was associated with an increase of the extravascular tissue fluorescence intensity due to dye accumulation (Fig. 7(e)), indicating that it is related to permeability changes and opening of the BBB. The mean output/input ratio (integrated transverse velocity profile ratio) was reduced to 0.81 ± 0.11 after 10 minutes of DOC application (Fig. 7(f)).
This reduction is significantly different from the ratio observed in the untreated "control" hemisphere, 1.01 ± 0.08 (n = 5 rats, p = 0.002). Dye was injected at the beginning of the experiment, 20 minutes before DOC application, to separate the effects of dye injection from those of DOC application. However, since the dye injection caused an increase in blood volume that could affect blood flow and even compromise the BBB integrity, two experiments were performed without dye injection and fluorescence imaging. Even without dye injection, DOC application led to a reduction of the output/input ratio (Fig. 10 in the Appendix). The time dynamics of the response were similar to those observed with the dye present in the blood stream.
Flow simulation of leaky vessels
In parallel to our experimental studies, we sought to evaluate whether a change in BBB permeability can be observed as velocity changes in our model. In order to first validate the model, we compared simulated and measured flow velocities without leaking conditions in the flow map shown in Fig. 11 in the Appendix. The absolute velocities were measured using the time of flight technique described in [30]. The simulated velocity values match the measured velocities with an average absolute variance of 12.0 ± 9.0% (Table 1 in the Appendix). Both simulated and measured values are in good agreement with our previous studies and the literature [30,36]. Figures 8(a)-8(b) show the simulated schematic of a vasculature topology before and after applying leaky boundary conditions. When comparing the simulated relative velocity maps in intact conditions (Fig. 8(a)) to a case where localized leaky boundaries were added to a capillary (see arrow in Fig. 8(b)), we observed changes in the relative flow velocities. These changes become clearly visible in the velocity changes map (Fig. 8(c)), calculated as (v_f - v_i)/v_i, where v_f is the final velocity (with leak) and v_i stands for the initial velocity (no leak). According to our simulations, a leakage affects velocities both upstream and downstream of the capillary bed (see Figs. 8(d)-8(g)). Note that in the present case, only one leaking zone was simulated (a 20 µm long leaking zone, with the leaking velocity at the boundary varying from 0 to 0.2 mm/s within 2 s and following u_leak = u_0[1 + tanh(t - t_0)] [mm/s]) [46]. The leak results in complex velocity changes, and depending on the location of the leak the effect on the velocity measured in one artery (or one arteriole) may differ (see Figs. 8(e)-8(g)). However, there is a consistent drop in the blood output/input velocity ratio when comparing the velocity in the superficial vein and the arteries. Different leaking conditions (opening sizes, flow velocities, and locations of the opening in the network) repeat the general trend, showing a decrease of the vein-artery flow velocity ratio. As such, in these scaled-down simulations we chose to scale down the amount of leakage proportionally.
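The leak boundary condition quoted above can be sketched as a smooth ramp; the 0.2 mm/s plateau corresponds to u_0 = 0.1 mm/s, since 1 + tanh(t - t_0) spans 0 to 2 (illustrative values, following the text):

```python
import numpy as np

def u_leak(t, u0=0.1, t0=1.0):
    """Leak boundary velocity in mm/s: ramps from ~0 to 2*u0 around t0 (within ~2 s)."""
    return u0 * (1 + np.tanh(t - t0))
```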
Similarly, we evaluated the effect of a partial occlusion on cerebral blood flow. The leaking zone was replaced by a vessel diameter reduction by a factor of 2 over a 20 µm length. Occlusion led to a rerouting of blood in the vessel network, but no change in the output/input velocity ratio was seen (see Fig. 12 in the Appendix).
Discussion
In this study we used the intrinsic LSCI technique to observe changes in flow velocity and related parameters, such as the flow output/input ratio, accompanying drug-induced BBB opening.
A comparison of the hemodynamic response of arteries and veins penetrating a given region of the cortex allows us to infer the microvasculature state in deeper cortical layers. Our most important observation is the different manner in which the arterial and venous velocities changed (see experimental examples in Figs. 4(c), 5(c) and 9(c), and results derived from simulations in Figs. 8(d)-8(g)). Notably, observing the superficial arterial and venous flow velocities informs us of the state of the underlying vessel network, including the BBB integrity, over time. Initially we expected to observe a decrease in venous velocity following BBB disruption due to capillary leakage, but this predicted reduction of the venous flow was observed in only 45% of the veins in the ROI; interestingly, an increase in arterial flow velocities was observed in 64% of the arteries in the ROI, across all animals in our study. This suggests a local physiological compensation mechanism for the losses in fluid volume to the extravascular tissue space. This compensation could be due to signaling involving nearby cells that causes arterial vasodilatation upstream.
Several factors can influence the local flow dynamics for individual vessels, including the state and duration of the anesthesia. No statistically significant changes in raw velocity values in individual veins and arteries were observed; we thus used the ratio of venous to arterial integrated velocity profiles as a metric for a compromised BBB. This self-referencing of the measurements reduced the effect of other physiological factors involved in the regulation of blood flow in the brain, as well as the effects of possible changes in the absorption and scattering properties of the tissue following drug application. Applying a drug to disrupt the BBB significantly decreased this ratio, indicating a permeability change in the BBB. Indeed, this ratio was reduced by 22 ± 6% (final ratio of 0.78 ± 0.06) after DOC application and 30 ± 7% (final ratio of 0.70 ± 0.07) after LPS injection, which is statistically significant (see histogram in Fig. 6). The reduction of the vasculature output/input ratio after LPS or DOC treatment is believed to be due to a leakage at the capillary bed and superficial vessels. This leakage was confirmed by observing the accumulation of extravascular fluorescent dye, as shown in Figs. 4(a) and 5(a). Furthermore, the same output/input ratio was measured in rats where only one hemisphere was treated with DOC. The output/input ratio calculated 10 minutes after drug application was significantly smaller than the one calculated from the control hemisphere (0.81 ± 0.11 and 1.01 ± 0.08, respectively), as shown in Fig. 7(f). This is consistent with the values reported in Fig. 6. A reduction of the same ratio was also observed in animals where no fluorescent dye was used (Fig. 10).
Our simulations provide insight into the leakage dynamics due to focal BBB opening. Simulated velocity maps confirmed that the vein/artery velocity ratio decreases due to leakage from the blood vessels. The simulation results (Fig. 8) are in good agreement with the experimental observations. Although these simulations represent a simplified version of the realistic brain vascular network compared to more extensive computational flow models [38,39,47] (limited redundancy, use of a homogeneous fluid without simulated blood cells), the general behavior depicted by the model was in agreement with values reported using the more complex models.
A possible limitation of the LSCI-based technique that we currently see is that each area under study must contain a few diving veins and arteries to ensure statistically significant results, thereby requiring a large field of view and potentially limiting the spatial resolution of this technique. A potential solution is to measure the velocity in smaller arterioles, venules and capillaries located in deeper cortical layers using a higher-magnification optical imaging system. Another limitation of using LSCI is the depth resolution provided by the technique, since the contrast ratios calculated for each pixel are affected by moving scattering elements above and below the vessel of interest. To limit the effect of underlying vessels, more than one vessel was analyzed, and vessel cross sections were also chosen to avoid non-parabolic velocity distributions when possible. The balance between arterial and venous flow could also potentially be measured with other techniques such as functional micro-ultrasound, recently reported by van Raaij and colleagues [48]. Their method quantifies blood flow and volume in arterioles and venules. Nevertheless, LSCI remains an advantageous solution when cost, portability, and simplicity are considered.
It is important to keep in mind that using LPS and DOC to instigate the BBB opening triggers a complex inflammatory response, leading to osmotic exchanges, recruitment of cells such as leukocytes, and thrombosis, which can also affect flow values. For example, it was demonstrated that thrombosis caused a rerouting of flow [18,49,50]; the observed changes were similar for superficial veins and arteries. A decrease in both arterial and venous flow speeds was observed near the thrombosis site and a compensating increase was observed in peripheral regions [18]. These results are consistent with our occlusion simulation (Fig. 12), which predicts homogeneous changes across arteries and veins and no change in the output/input ratio. It is challenging to completely separate BBB opening from the aforementioned effects since they are interconnected. Indeed, it was reported that flow is an important factor in leukocyte adhesion [51,52]. Nevertheless, the inflammatory response and the recruitment of immune cells depend on several processes, including the expression of binding proteins occurring minutes to hours after the insult [53]. Overall, the following four factors suggest that the changes observed in the first minutes following drug application are caused by leakage: 1) the reported and simulated effect of thrombosis on cerebral blood flow is uniform over veins and arteries; 2) the time required for cell rolling and attachment to the vessel wall is long (reported to be several tens of minutes, even hours [54,55]); 3) similar flow changes were observed in previous studies with a different modality [18]; and 4) there is good agreement between the temporal behavior of the fluorescent dye accumulation outside the vessel and the observed changes in the venous/arterial flow ratio. Nevertheless, the contribution of occlusions, osmotic changes, and other contributors to the flow changes remains to be investigated further.
We emphasize that the sensitivity of the LSCI technique well exceeds the threshold required to observe the ~20% changes in the venous-to-arterial flow ratio we report on. Calibration against absolute velocities in vivo [23] and in vitro [25] shows the ability to reliably discern flow speed changes of < 5%. The expected profile shape of the vessels is parabolic; such a profile is observed at the center of the blood vessels using the LSCI technique. We note that experimentally observed deviations from the parabolic shape near vessel edges are due to the convolution of the raw speckle images with the 5 × 5-pixel-wide filter, leading to blurring of the vessel edges due to the contribution of the static scatterers in the nearby tissue.
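The parabolic fits shown in Figs. 4(c), 5(c) and 9(c) can be reproduced with a least-squares fit restricted to the vessel center, where the profile is least affected by edge blurring; this is a sketch under assumed inputs, not the authors' original analysis code:

```python
import numpy as np

def fit_central_parabola(x, v, central_fraction=0.6):
    """Fit v ~ a*x^2 + b*x + c using only the central part of the profile,
    where the parabolic assumption holds best (edges are blurred by the
    5 x 5 speckle-contrast window)."""
    n = len(x)
    lo = int(n * (1 - central_fraction) / 2)
    hi = n - lo
    return np.poly1d(np.polyfit(x[lo:hi], v[lo:hi], 2))  # callable fitted parabola
```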
The sampling rate of fluorescence imaging methods for monitoring the state of the BBB is limited by the dye's lifetime within the vasculature. For example, our recent studies using Cy5.5-based molecular markers have shown fast dye accumulation dynamics (within the first ~20 minutes) and long dye retention dynamics (typically for hours inside the body) [56]. Furthermore, our data indicate (Figs. 7(c), 7(d), 7(e)) that the flow response to a barrier permeability change is visible several minutes prior to an appreciable change in the fluorescence signal, rendering our proposed technique better suited to studies where precise permeability change dynamics are of interest. Our proposed technique can also present an alternative to modalities such as MRI and CT, which require expensive instrumentation and allow little access to the subject, drawbacks in experiments where a large number of animals need to be studied or where additional drugs need to be administered to the animal following the BBB permeability change.
Conclusion
In this work we demonstrated that the ratio of arterial and venous flow profiles measured by wide-field LSCI is affected by LPS and DOC application, and that LSCI is a potential low-cost, label-free technique to monitor BBB integrity in a live rodent brain, with a possibility for application in long (hours to days) imaging sessions. Using a simultaneous measurement of the fluorescence intensity of a dye leaking from vessels and of relative velocity changes, we observed that arteries and veins respond differently to a drug-induced blood-brain barrier opening. We proposed to measure the ratio between arterial and venous blood velocities as a metric to track the BBB disruption dynamics. Our numerical simulations in the simplified blood network model agree well with our experimental findings. The integrated transverse velocity profile ratio was shown to be the most promising metric for assessing the changes associated with drug-induced BBB opening. Along with further understanding of the hemodynamic and physiological changes associated with localized BBB leaky boundaries, the technique proposed in this paper will help narrow the parameters to be measured to assess BBB integrity, and help develop screening protocols for monitoring brain diseases such as Alzheimer's disease and post-traumatic epilepsy.
Appendix
In this Appendix we include supplementary information supporting our studies on the evaluation of LSCI as a technique for monitoring the BBB integrity in a live rodent brain. An example of the effect of DOC application, where the permeability changes in the brain vessels of an individual rat were subtle, is shown in Fig. 9. The time course of the normalized integrated transverse output/input profile ratio without dye injections is shown in Fig. 10. Table 1 shows the measured and simulated velocity values for the vessel morphology maps marked in Fig. 11. Figure 12 shows the simulated effects of a localized clog on a velocity map.
Fig. 1. (a) Hypothetical illustration of the blood flowing through an artery (or arteriole), capillaries and a vein (or venule), subsequently, in the condition of an intact BBB. (b) The same representation for a compromised blood-brain barrier, wherein the venous output (red arrow) is decreased. The input and output blood volumes are represented by white and red arrows, respectively. (c) Hypothetical transverse velocity profile along the dotted line in (b) and definition of the different parameters measured and analyzed in this study. The maximal velocity amplitude is represented by the vertical arrow, the vessel diameter (full width at half maximum) is represented by the horizontal arrow, and the area under the transverse velocity profile is represented by the shaded area.
Fig. 2. Schematics of (a) the imaging experimental setup and (b) the illumination and image acquisition sequences.
Fig. 4. DOC-induced BBB opening signature in the blood velocity map. (a) Fluorescence intensity and blood relative velocity maps before and after DOC application. (b) Relative changes of flow velocity after DOC application. (c) Relative blood velocity profiles were traced for an artery (upper) and a vein (lower) before and after DOC application. The DOC fluorescence and relative velocity images were recorded 30 minutes after the drug application. Initial profiles are traced in black and the final ones are traced in grey. Locations of the profiles shown in (c) are highlighted in panel (a) by the black bars.
Fig. 5. LPS-induced BBB opening signature in the blood velocity map. (a) Fluorescence intensity and blood relative velocity maps before and after LPS application. (b) Relative changes of flow velocity after LPS application. (c) Relative blood velocity profiles were traced for an artery (upper) and a vein (lower) before and after LPS application. The LPS fluorescence and relative velocity images were recorded 2 hours after the drug application because the LPS effect was slower. Initial profiles are traced in black and the final ones are traced in grey. Locations of the profiles shown in (c) are highlighted in panel (a) by the black bars.
Fig. 7. Simultaneous observation of fluorescence and blood flow velocity maps in treated and untreated hemispheres. (a) Schematic representation of the surgical procedures. (b) Fluorescence (top) and relative velocity (bottom) maps for the untreated "control" hemisphere (left two panels) and the treated hemisphere (right two panels) as they appear 60 minutes after DOC application on the right, treated hemisphere. (c) Temporal evolution of the relative flow velocity in different tissue compartments for the treated (bold lines) and untreated hemispheres. Artery (black), vein (grey) and extravascular tissue (dotted lines) are presented. The regions over which the relative velocities were averaged are highlighted in panel (b). (d) Time course of the normalized integrated transverse output/input profile ratio (bar graph, data shown as mean ± SE) and of the ratio calculated from the vessels shown in (b)-(c) (thick bold time traces). (e) Extravascular fluorescence accumulation for treated and control regions. Note that in panels (c)-(e), the DOC application duration is represented by a grey zone. (f) Mean output/input ratio 10 minutes after DOC application (n = 5 rats, ** = p < 0.01, data shown as mean ± SD).
Fig. 8. Simulated effects of a localized leakage on velocity maps. (a-b) Velocity maps without (a) and with (b) leaky boundary conditions. The arrow in (b) shows the location of the leaking zone. (c) Velocity changes calculated as 100·(v_f - v_i)/v_i, where v_f is the final velocity (with leak) and v_i stands for the initial velocity (no leak). (d) Initial (black curve) and final (grey curve) transverse velocity profiles of the simulated vein. (e-g) Initial and final transverse velocity profiles of different simulated arteries or arterioles. Lines along which the profiles were taken (corresponding to locations i, ii, iii) are marked in panel (b).
Fig. 9. Small permeability changes in vessels due to DOC application to a live rat brain. (a) Fluorescence intensity and blood flow relative velocity maps before and after DOC application. In some discrete locations, an accumulation of fluorescent dye is observed in the extravascular region (white arrow). There is a slight change in the blood velocity map (top), correlated with the observed small changes in intensity outside the vessels in the fluorescence map (bottom). (b) Relative changes of velocity after DOC application. (c) Transverse relative blood velocity profiles for an artery (upper) and a vein (lower) before and after DOC application. Initial blood velocity profiles (black) and the final blood velocity profiles (grey) after DOC application are overlaid in the figure. Spatial locations of the plotted profiles are highlighted in black in (a), top left panel.
Table 1. Measured and simulated velocity values for the vessels marked by a number in Fig. 11. Velocities were measured with the time of flight technique under green LED illumination along the lines highlighted in Fig. 11. Given the challenges in translating the vessel morphology accurately into COMSOL, the simulations did not converge for a portion of the vessels.
Fig. 10. Time course of the normalized integrated transverse output/input profile ratio (N = 2 rats, data shown as mean ± SE). Grey bars represent the treated hemisphere and white bars the untreated hemisphere. The DOC application period is marked with a grey box.
Fig. 12. Simulated effects of a localized clog on velocity maps. (a-b) Velocity maps without (a) and with (b) an occlusion. The arrow in (b) shows the location of the clogged zone (smaller diameter over a length of 20 µm). (c) Velocity changes calculated as 100·(v_f - v_i)/v_i, where v_f is the final velocity (with an occlusion) and v_i stands for the initial velocity. (d) Initial (black dots) and final (grey crosses) transverse velocity profiles of a simulated vein. (e) Initial and final transverse velocity profiles of the simulated artery. Lines along which the profiles were taken are marked in panel (b).
| 2018-04-03T04:01:05.542Z | 2013-10-01T00:00:00.000 | {
"year": 2013,
"sha1": "8b716850b2f08b2ca192af05db781fe8660608ff",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1364/boe.4.001856",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "8b716850b2f08b2ca192af05db781fe8660608ff",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
150725682 | pes2o/s2orc | v3-fos-license | The Effect of Corporate CSR on Customer Attitudes
Prita Prasetya, Management Program, Faculty of Economics & Business, Universitas Mercu Buana. Abstract: The research aims to analyze (1) the effect of CSR on corporate image, (2) the effect of corporate image on attitude, (3) the effect of CSR on customer perception, and (4) the effect of customer perception on attitude toward Bank Mandiri. This descriptive study uses a survey method with a questionnaire for data collection. Respondents in this research are customers of Bank Mandiri in the Jabodetabek area. The analysis approach used in this research is Structural Equation Modeling (SEM). The hypothesis test results indicate that (1) CSR has a significant positive influence on corporate image, (2) corporate image has a significant positive influence on attitude, (3) CSR has a significant positive influence on customer perception, and (4) customer perception has a significant positive influence on attitude. CSR has a direct positive influence, and an indirect positive influence through corporate image and customer perception, on customer attitude.
Corporate social responsibility is one of the factors that forms a good company image. In addition to forming a good image, corporate CSR also forms a good perception in the eyes of customers. Furthermore, the attitude of respondents, as the company's consumers, toward companies that do not carry out CSR is to not buy products from the company concerned and to talk to others about the company's shortcomings. The survey simply explains that the implementation of CSR will shape good opinion in the community while at the same time forming a good image of the company.
Research related to the importance of CSR in relation to corporate image was also found in several studies. For example, Yong Tae Bang (2010) found that social image, location image, and brand image had a positive and significant influence on consumer loyalty. Meanwhile, the study by Privanko Gucharit, Mark Anner et al. found that consumer perceptions of CSR have a positive and significant influence on consumer attitudes and consumer behavioral intentions. Second, their research findings also indicate that CSR and consumer perceptions have a significant effect on the quality of services received by consumers.
The focus of the discussion in this study is on all of Bank Mandiri's CSR activities, which in principle aim to create benefits for the community so that it grows to be more prosperous and independent, and to build positive perceptions of Bank Mandiri as a leading financial institution in Indonesia committed to harmonizing its vision and mission with the spirit of making the country prosperous. The CSR that has been carried out is expected to form a good corporate image, which is an absolute requirement for the success of a bank. A good image and good perceptions from the community will increase their trust, which ultimately forms a positive attitude toward establishing cooperative relations in the form of financial management.
Corporate Social Responsibility (CSR)
According to some experts and researchers, when a company or institution implements corporate social responsibility, there will be benefits such as increasing positive perceptions (Bhattacharya and Holmes in Muhadjir and Qurani, 2011) as well as improving the company's image to be more positive (Tench and Yeomans in Bruhn, 2013; Smith and Stodghill in Pirsch et al., 2007; Kim et al. in Meechoobot and Rittipant, 2012). Corporate Social Responsibility programs need to be carefully organized and managed so that a company can be socially responsible in accordance with the full social response approach.
A collection of images in the minds of audiences or the public forms the corporate image. Corporate image reflects the public's perception of past actions and future company prospects, and explains the company's overall appeal to related parties (stakeholders) when compared with other leading companies (Fombrun). Customer attitude, in turn, reflects views on products and is formed through learning processes, both from direct experience and from others, and can be positive or negative. From the various opinions, it can be concluded that customer attitude is a customer's belief about a banking product or about the banking activities carried out. Customer attitude reflects an evaluation of the object being assessed, so it can be positive, negative, or neutral. One attitude model is the Three-Component Attitude Model, which was developed by behavioral experts, especially social psychologists. According to this model, attitudes consist of three components:

1) Cognitive Component
The cognitive component consists of the cognitions and perceptions obtained through a combination of direct experience with the attitude object and related information obtained from various sources. These components are often known as beliefs, such that consumers believe an attitude object has certain attributes and that certain behaviors will lead to certain outcomes.

2) Affective Component
The affective component consists of emotions or feelings toward a particular product or brand. Emotions and feelings are mainly evaluative in nature, namely whether consumers like or dislike certain products.
3) Conative Component
The conative component is the tendency of a person to carry out an action and behave in a certain way toward an attitude object. In marketing and consumer research, the conative component is usually treated as an expression of the consumer's intention to buy or reject a product.
The objectives of this study are: (1) to determine the effect of Corporate Social Responsibility (CSR) on corporate image; (2) to determine the effect of corporate image on attitude; (3) to determine the effect of Corporate Social Responsibility (CSR) on customer perceptions; (4) to determine the effect of customer perceptions on attitude; and (5) to determine the influence of Corporate Social Responsibility (CSR) on attitude.
The expected output of this study is to provide benefits to the company. For PT Bank Mandiri Tbk, this research serves as a reference for the success of the Corporate Social Responsibility activities that have been carried out and can be used to evaluate the effectiveness of their implementation, so that subsequent activities can be carried out better. For the wider community, this research can serve as a source of knowledge and information regarding the implementation of Corporate Social Responsibility and its impact on the attitude of customers in choosing a bank as a trusted institution for saving and managing their money.
Research design
In this study the researcher tests hypotheses about the relationships between variables. Data and information are collected from the sample using a questionnaire and then analyzed to obtain accurate data about the facts and the relationships between the research variables.
Population and sample
In this study the population is all customers of Bank Mandiri in the Jabodetabek area. The study location was determined based on the researchers' constraints, with a sample of 125 respondents.
Data Analysis Techniques
The level of measurement used in this study is a questionnaire constructed as a rating scale using a Likert scale. The data analysis techniques used in the study are validity and reliability analysis, structural equation model (SEM) analysis, and dimensional correlation analysis.
Analysis of Structural Equation Model (SEM), Validity and Reliability Test
SEM data processing techniques with the confirmatory analysis method were used in this study. The validity test concerns whether a variable measures what it is supposed to measure; based on Confirmatory Factor Analysis (CFA), a standardized factor loading ≥ 0.50 is considered very significant. Reliability is the consistency of a measurement: high reliability shows that the indicators have high consistency in measuring their latent construct. In SEM analysis, the most appropriate reliability test uses the construct reliability value; a CR value ≥ 0.70 indicates good reliability.
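As an illustration, the construct reliability criterion can be computed from standardized factor loadings with the usual formula CR = (Σλ)² / ((Σλ)² + Σ(1 − λ²)); the loading values below are hypothetical, not taken from this study:

```python
def construct_reliability(loadings):
    """Construct reliability from standardized factor loadings.

    CR = (sum(lambda))^2 / ((sum(lambda))^2 + sum(1 - lambda^2)),
    where 1 - lambda^2 is each indicator's error variance.
    """
    s = sum(loadings)
    error = sum(1 - l ** 2 for l in loadings)
    return s ** 2 / (s ** 2 + error)

print(construct_reliability([0.72, 0.68, 0.81]))  # ~0.78, above the 0.70 threshold
```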
Source: Ferdinand (2002)
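To make the two numeric criteria above concrete, the following is a minimal sketch (the factor loadings are hypothetical, not the study's estimates) that computes construct reliability from standardized CFA loadings using the common Fornell-Larcker formula, taking each indicator's error variance as 1 − λ², and checks both thresholds.

```python
# Construct reliability (CR) from standardized factor loadings.
# Illustrative only: the loadings below are hypothetical, not from the study.

def construct_reliability(loadings):
    """CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances),
    with the error variance of each indicator taken as 1 - loading^2."""
    sum_l = sum(loadings)
    sum_err = sum(1.0 - l**2 for l in loadings)
    return sum_l**2 / (sum_l**2 + sum_err)

loadings = [0.72, 0.81, 0.66, 0.78]       # hypothetical CFA loadings of one construct
cr = construct_reliability(loadings)
valid = all(l >= 0.50 for l in loadings)  # validity rule: loading >= 0.50
reliable = cr >= 0.70                     # reliability rule: CR >= 0.70
print(f"CR = {cr:.3f}, indicators valid: {valid}, construct reliable: {reliable}")
```

With these illustrative loadings the CR works out to about 0.83, which would satisfy the ≥ 0.70 rule used in the study.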
In SEM there is no single statistical tool to test the model. Generally, the suitability of the model is assessed by testing various goodness-of-fit criteria. Table 3 shows some fit indices used to decide whether a model can be accepted or rejected.
Hypothesis testing
After the measurement model meets the requirements, the next step is to test the hypotheses. The t-test shows how far one independent variable individually explains the variation of the dependent variable.
Operational Definition of Research
Operational definitions of variables are statements relating to measurements, emphasizing the properties of concepts that can be observed and measured. Each variable is measured based on dimensions or indicators, as presented in the table of operational definitions.
Respondents Characteristics
The characteristics of the respondents in this study were grouped by gender, age, recent education, occupation and duration of being a customer. Specifically, the demographic characteristics of consumers in this study are shown in Table 4.
CSR is operationally defined as the concept whereby companies give attention to society and the environment.
Reliability Test
Reliability is a measure of the internal consistency of the indicators of a construct, showing the degree to which each indicator reflects a common construct. Reliability tests are also used to verify that a research instrument, when used several times to measure the same object, produces the same data.
In the SEM analysis in this study, reliability testing was carried out using the construct reliability value. A construct reliability value ≥ 0.70 indicates good reliability. The reliability test results are shown in Table 6 below:
Measurement Model Analysis
The initial SEM model, in the form of a basic standardized solution, is shown in Figure 2. A goodness-of-fit test is then performed; the results of the fit test are shown in Table 7.
Source: Results of LISREL 8.8 -2018 Data Processing
Of the nine model fit measures, six showed good values (fit), one showed marginal fit, and three did not fit. Overall the model is good. According to Wijanto (2007), there are several fit measures in SEM, and model fit is assessed based on how many of them are satisfied by the research model; the more target values of the goodness-of-fit measures are met, the better the research model.

The characteristics of the respondents in this study were grouped by gender, age, education, occupation and length of time as a Bank Mandiri customer. The results indicate that Bank Mandiri customers are mostly over 30 years old (71%). The majority of respondents hold a diploma or bachelor's degree (83%) and work as private employees (73%); 54% of respondents have been Bank Mandiri customers for more than 5 years.

Throughout 2017, Bank Mandiri carried out many CSR activities as a form of corporate social responsibility. CSR is also a factor in forming a good company image, leading to favorable perceptions in the eyes of its customers. The results showed that Bank Mandiri's CSR activities had a positive effect on corporate image. This shows that CSR carried out by Bank Mandiri can shape the customer's perspective on the company. Corporate image is a character owned by a company: the activities the company carries out influence others' impressions of its character. A person forms an impression of the company when using the services offered. By having a good corporate image, the community expects the company to be socially responsible. Bank Mandiri customers take the company's environmental and social image into account in their purchasing decisions. It is therefore important for companies to manage the CSR programs that will be or have been delivered, in order to obtain good results for the company's image.
A good corporate image will form a positive customer attitude, as shown by the positive and significant results of this study. Customer attitude is one of the important factors influencing customers' decisions in choosing banking products or services. The concept of attitude is closely related to trust and behavior, reflecting a preference for, or evaluation of, an idea or object; this evaluation can lead to positive, neutral or negative feelings. In this study, customers assessed the implementation of CSR positively, as indicated by respondents' agreement with the statement that the CSR carried out by Bank Mandiri was purely well-intentioned and right on target.
CSR also has a positive and significant influence on customer perceptions. Acceptance of this hypothesis means that CSR activities carried out by Bank Mandiri, as a form of awareness and real social responsibility, can improve customer perceptions. Furthermore, customer perceptions have a positive and significant influence on customer attitude: because Bank Mandiri was perceived positively, customers' attitude toward choosing Bank Mandiri as a provider of banking products and services was affected. This positive attitude will also lead to positive customer behavior. The company can help customers by involving and empowering the community in Corporate Social Responsibility activities.
Corporate Social Responsibility has a positive effect on customer attitudes both directly and through the mediating variables of corporate image and customer perception. The CSR activities carried out proved able to form a positive attitude among Bank Mandiri customers. CSR also improves the company's image and builds positive perceptions. The company's CSR activities are not only a fulfillment of government regulations but also a form of corporate appreciation for the community, which indirectly participates in achieving the company's goals and survival.
This is in line with the research by Binawan and Ali (2017), entitled "Analysis of the Company Image and Service Quality through Customer Satisfaction to Customer Loyalty (A Field Research in PT Nusantara Water Center)", which showed that company image has a significant positive effect on customer satisfaction, service quality has a significant positive effect on customer satisfaction, company image has a significant positive effect on customer loyalty, and service quality has a significant positive effect on customer loyalty and satisfaction.
"year": 2019,
"sha1": "a012c020d386e34a5797f971ad82787205715736",
"oa_license": "CCBY",
"oa_url": "https://www.iiste.org/Journals/index.php/JMCR/article/download/46730/48256",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "ec31b7f5013250c139721969e3965a81a32c9909",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": [
"Psychology"
]
} |
First Phytochemical Evidence of Chemotypes for the Seagrass Zostera noltii
The variability of the flavonoid content of two populations of Z. noltii from different geographical zones, i.e., the Bay of Arcachon and the Bay of Cadiz, was evaluated. Samples were collected in spring and autumn at the two sites, and extracts were prepared by maceration in water. The phenolic content was fully characterized using Nuclear Magnetic Resonance (NMR), UV and Liquid Chromatography-Mass Spectrometry (LC-MS), and the concentrations of the individual phenolics were determined by quantitative High-Performance Liquid Chromatography with Diode-Array Detection (HPLC-DAD). The two populations show a strong geographical differentiation in their flavonoid content. The samples from Cadiz were dominated by apigenin 7-sulfate, which represents 71% (autumn collection) and 83% (spring collection) of the total flavonoids, whereas the samples from Arcachon were characterized by diosmetin 7-sulfate (85 and 93% of the total flavonoids). Structures of the individual phenolics were assigned using the complementary information from their spectral evidence. In addition, the results were confirmed by acid hydrolysis of the flavonoid sulfates, and comparison to synthetic standards obtained by sulfation of apigenin, diosmetin and luteolin. The results represent the first experimental evidence of the existence of chemotypes within the species Z. noltii.
Introduction
Seagrasses are a group of about 60 species of rooted vascular plants of terrestrial origin that have successfully returned to the sea. They form the most widespread and productive coastal system in the world, but also one of the most threatened. They grow in large marine meadows, which constitute valuable habitats. Their contribution to the productivity of the oceans has become increasingly recognized over recent decades [1]. The subdivision of the genus Zostera (Zosteraceae family) is still under debate, and a discussion continues about dividing or not dividing the genus Zostera into two genera, Zostera and Nanozostera [2]. Nine species belong to this genus, among which Zostera noltii constitutes a homogenous group.
Seagrass declines have been reported worldwide either by natural or anthropogenic disturbances [3]. The resilience of seagrass meadows to these events may be strongly mediated by the presence and abundance of secondary metabolite compounds, which could be a factor determining the way the meadow responds to periodic or permanent disturbances. All submersed aquatic angiosperms are secondarily adapted for life in water. A survey of 43 species of seagrass showed that five of the 12 genera, including the genus Zostera, had flavonoid sulfates [4]. Flavonoid sulfates have been identified as being of possible taxonomical and ecological significance for seagrasses and other plants of saline habitats, and may play a role in their allelochemical relations [4,5]. Water-soluble compounds from leaves of Zostera marina have been reported to exhibit antialgal and antibacterial activities, but the causative substances have not been identified [6,7].
Z. noltii Hornem. (common name dwarf eelgrass) is an important species of eelgrass occurring along European and North African coasts [1]. Zostera beds were severely reduced in Europe due to an outbreak of an epidemic disease in the 1920s. Since then, recovery has been slow and patchy. Z. noltii is under increasing threat, with local extinctions recorded for some meadows, and the species is classified as vulnerable and endangered in many parts of Europe. Only a few studies, other than our own [8][9][10], have investigated the concentration of phenolics in Z. noltii. Concerning the characterization of the flavonoid profile, only qualitative studies have been reported. Diosmetin and luteolin 7-sulfates had been previously reported for a Herbarium tissue of Z. noltii [4,5]. Diosmetin, diosmetin-7-O-glucoside and luteolin-7-O-glucoside have been mentioned, but not unambiguously characterized, for Z. noltii from the Black Sea [11], and luteolin-7-O-glucoside for the specimen from the Adriatic Sea [12]. However, compound identifications in the latter two were only based on the comparison of paper electrophoretic mobility or Thin Layer Chromatography (TLC), and the extraction conditions were not adapted to flavonoids and do not meet the current standard. As a result, many of the compounds reported for Z. noltii, which were not detected by appropriate phytochemical methods, could represent artifacts.
Until recently, Zostera taxonomy was only based on morphology. The development of DNA-based molecular markers has led to an abundant literature on seagrass genetics over the last decade. The existence of geographically distinct populations of Z. noltii throughout its biogeographic range has been reported [13,14]. As yet, the factors underlying this geographical genetic variability are poorly understood. Their possible consequences for the phenolic secondary chemistry of Z. noltii have not been considered despite the role of these substances as chemical defenses.
Our aim was to fully characterize the flavonoid profile in living tissues of Z. noltii and to examine how these compounds vary among seagrass meadows across large geographical areas. This preliminary work reports on Z. noltii specimen from two intertidal meadows separated by approximately 1,000 km, namely Arcachon Bay (French Atlantic coast) and Cadiz Bay (Cadiz Gulf, Spain), and the characterization and quantification of their flavonoid content using spectroscopic and chromatographic methods.
Results and Discussion
Aqueous extracts of Z. noltii leaves were prepared from the samples collected in the Bays of Arcachon and Cadiz (see Figure 1 and Table 1 for details). The crude extracts were analyzed by Nuclear Magnetic Resonance (NMR) and High-Performance Liquid Chromatography with Diode-Array Detection (HPLC-DAD), which both gave a clear understanding of their flavonoid content. 1H- and 13C-NMR spectra of the crude extracts from Arcachon show a well-defined, typical pattern of a diosmetin moiety as the major phenolic, whereas crude extracts from Cadiz show the absence of a methoxy group and the typical pattern of an apigenin moiety. In both cases, the shifts observed for 1H and 13C resonances of ring A are in good agreement with the presence of a sulfate group linked to the C-7 hydroxyl group [15].

High-performance liquid chromatography (HPLC) combined with diode-array detection (DAD) was used for both qualitative and quantitative analyses of the extract composition (Table 1). The results show that the Arcachon samples contain higher amounts of flavonoids than the Cadiz samples (6,623 and 9,895 versus 3,378 and 4,355 μg/g, respectively). As expected on the basis of the NMR data, the HPLC flavonoid profiles of the four extracts were largely dominated by a single product, eluted at 26.3 min (peak 5, on-line λmax 337 nm; Cadiz) and 27.3 min (peak 6, on-line λmax 347 nm; Arcachon) (Figure 2), respectively. Apigenin 7-sulfate accounted for 2,410 μg/g (Cadiz, autumn 2007 collection) and 3,600 μg/g (Cadiz, spring 2008 collection), representing 71 and 83%, respectively, of the total flavonoids (TF) detected (Figure 3). In contrast, apigenin 7-sulfate was not found in the samples from Arcachon, which were dominated by diosmetin 7-sulfate with 5,636 μg/g (autumn 2007, 85% of TF) and 9,198 μg/g (spring 2008, 93% of TF). Diosmetin 7-sulfate is also found in Cadiz, but as a minor product accounting for only 410 and 256 μg/g, which represents 9 and 6% of the TF (Figure 3). In addition, small amounts of apigenin 7-O-glucoside (peak 4, on-line λmax 335 nm) were detected in Cadiz, but not in Arcachon. Luteolin 7-sulfate was found as a minor product at the two locations (peak 3). The comparison of the HPLC-DAD profiles (Figure 2) and the flavonoid composition expressed as a percentage of the TF detected at each site (Figure 3) clearly shows the dramatic geographical chemodifferentiation between the two study sites.
All the UV absorptions are in agreement with the literature [4,16]. The structural assignments were supported by LC-ESI-MS analysis in positive mode. In particular, the mass spectra clearly show the [M+1] molecular peak for all the flavonoids detected, and the characteristic ion peak at [M+1-80] for sulfated flavonoids or [M+1-162] for gluco-flavonoids. The linkage of the sulfate moiety to the 7-position was established from the UV [5,16] and NMR data [15]. Our results were confirmed by acid hydrolysis of the crude extracts, which led to diosmetin (Arcachon) and apigenin (Cadiz) as the major product (comparison with standards). In addition, authentic samples of the 7-sulfated flavonoids were synthesized by sulfation of luteolin, apigenin and diosmetin with tetrabutylammonium hydrogen sulfate [15]. Comparison of the NMR, MS and UV spectra and HPLC retention time allows the unambiguous identification of the sulfated flavonoid content of Z. noltii from Arcachon and Cadiz.
Zosteric acid was also found as a minor compound at the two study sites (see Table 1), and caffeic acid in the samples from Arcachon. Zosteric acid is a highly hydrophilic sulfated coumaric acid, which is known to prevent settlement of some marine bacteria, algae, barnacles and tubeworms at low concentration [9].
The sulfate component is believed to represent a marine adaptation [17]. Sulfate is the third highest ion in concentration in seawater, and hydrogen sulfide is commonly found in anoxic marine sediment. Harborne evoked the possibility of flavonoid sulfates having a dynamic function in salt uptake and metabolism [17]. Nissen and Bessen [18] found that 50% of the radiolabeled sulfate fed to Zostera marina was recovered in the phenolic flavonoid fraction. Taxonomic and ecological implications were evoked by McMillan et al. [4]. The role of sulfated flavonoids in seagrasses remains unclear and has yet to be documented. Nevertheless, there is increasing evidence that these hydrophilic substances have a role to play in the physiological survival of seagrasses in the marine environment. Luteolin 7-O-D-glucopyranosyl-2-sulfate isolated from the tropical seagrass Thalassia testudinum has been shown to chemically defend the seagrass against zoosporic fungi [19]. We have recently shown that aqueous extract of Z. noltii from the Bay of Arcachon and the Thau lagoon significantly inhibits the growth of the Harmful Algal Bloom (HAB) Alexandrium catenella. The highest concentrations of phenolics were found to correspond to the lowest EC50 values, suggesting that these metabolites might be responsible for the observed algicidal activity [20]. This is the first time sulfated flavonoids have been quantified in Zostera species. Apigenin 7-sulfate has never been reported for Z. noltii before. Long-term monitoring of the phenolic content in monthly-collected fresh leaves of Z. noltii from the Arcachon lagoon is now in progress in our laboratory. From the results acquired since 2007, it appears that diosmetin 7-sulfate was the only major flavonoid sulfate regardless of the season, while apigenin 7-sulfate has never been detected [21]. Based on these data, our results show that Z. noltii grown in Cadiz Bay is chemically distinct from specimens grown in Arcachon Bay.
Only a few chemotaxonomical studies have been reported for some seagrasses. They were generally conducted at the intergeneric or interspecific level [22]. To the best of our knowledge, the only study at the specific level was reported for Halophila ovalis subspecies populations of the Pacific, Indian Ocean and Australia, which differ in the occurrence of sulfated flavonoids on the basis of morphological variations and geographical distribution [23,24].
From a biosynthetic point of view, flavonoid compounds result from the stepwise condensation of three molecules of malonyl CoA and one molecule of 4-coumaroyl CoA followed by a stereospecific cyclization leading to a flavanone [25]. All flavonoids are derived from a limited number of flavanone intermediates, which serve as substrates for a variety of enzyme activities, enabling the generation of diversity in flavonoid structures. The biosynthesis of the 7-sulfate of apigenin, diosmetin, and luteolin is summarized in Figure 4. They all share the same metabolic flavanone precursor, naringenin, but the subsequent steps differ. While naringenin leads directly to apigenin (FNS step), followed by apigenin 7-sulfate (F7S step), diosmetin 7-sulfate can originate from two distinct pathways: through apigenin or through eriodictyol, which both involve a flavonoid 3'-hydroxylase (F3'H). The lack of diosmetin 7-sulfate in the sample from Cadiz suggests a poor expression of the gene encoding F3'H. Further research will be needed to elucidate the different metabolisms of these two populations. In particular, it would be of interest to perform a cross phytochemical/phylogenetic analysis of Z. noltii to correlate the phenolic fingerprint and the amino acid sequences of genes encoding the flavonoid pathway. This is the first report of quantitative data on the individual flavonoids in Z. noltii and the first report of the existence of chemotypes within the Zosteraceae family. This work reveals unknown features about the chemical plasticity and patterns of the phenolic composition in Z. noltii and shows the need for tandem phytochemical and genetic studies of this species throughout its biogeographic range. Understanding the underlying causes of the geographic variation of the Z. noltii sulfated flavonoid content and its possible link with ecological factors appears crucial to elucidating the functioning of Z. noltii communities, and for monitoring and managing Zostera beds. Fingerprinting of specimens collected at twelve localities throughout the Atlantic and Mediterranean is now in progress.
General Methods
The solvents used were all HPLC-grade. Standards were purchased from Extrasynthèse (Genay, France), and all the chemical reagents used were from Aldrich Chemical Company. 1H-, 13C- and 2D-NMR spectra were recorded on an AVANCE 300 MHz instrument (Bruker) in DMSO-d6 (Euriso-Top, Gif-sur-Yvette). Chemical shifts are expressed in δ (ppm) values relative to tetramethylsilane (TMS) as an internal reference. Coupling constants are reported in hertz (Hz). 13C-NMR assignments were made by 2D HSQC and HMBC experiments. High-performance liquid chromatography (HPLC) combined with diode-array detection (DAD) was performed on a Thermo Electron liquid chromatography system. LC-MS was performed using an HP1100 (Hewlett-Packard) equipped with an Agilent MSD 1946B single quadrupole mass spectrometer and HP Chemstation software.
Study Sites and Plant Collection
The two study sites are intertidal monospecific Z. noltii meadows. Both are exposed to long periods of desiccation and to rapid variations and extreme values of temperature, light intensities and salinity.
The Bay of Cadiz (SW Spain; 36°23'-36°37'N, 6°09'-6°21'W) is located in the Atlantic Ocean, close to the Mediterranean Sea and to Northern Africa (Figure 1). The bay is subdivided into two basins, a shallower basin (inner bay), with a maximum depth of 11 m, and a deeper basin (outer bay) with a maximum depth of 17 m. In Cadiz Bay, Z. noltii beds are extensive and colonize the major part of the exposed intertidal area.
The Bay of Arcachon (Figure 1) is a 155 km² mesotidal system located on the south-western French Atlantic coast (44°40'N, 1°10'W). It opens to the ocean via a narrow channel. Approximately 130 × 10⁶ m³ (neap tide) and 400 × 10⁶ m³ (spring tide) of water are exchanged between the lagoon and the ocean during one tidal cycle. The tidal amplitude ranges from 1.10 m on neap tides to 4.95 m on spring tides, and the local mean sea level (MSL) is +2.20 m, relative to French marine 0. Rivers and streams, mainly located in the northern and eastern parts of the Bay, provide a freshwater inflow of approximately 14,000 m³/d. Z. noltii beds are extensive in the Arcachon Bay and colonize the major part of the exposed intertidal area between −1.9 m and +0.8 m relative to the local mean sea level. The period of meadow emersion during low tide is long (10-14 h). The sampling station was located in Andernos (inner part of the Bay).
Thirty shoots of Z. noltii Hornem. (Zosteraceae) were sampled in the growing season in 2007 and 2008 from intertidal monospecific meadows in Andernos (Arcachon Bay, 14 October 2007 and 6 June 2008) and El Bajo de la Cabezuela (Cadiz Bay, 2 October 2007 and 3 June 2008). Plants were gathered carefully to keep belowground parts intact and transported to the laboratory. After collection, the samples were thoroughly rinsed in seawater, and then quickly washed in freshwater to remove sand and salt. The collected material was handpicked to remove associated debris, and leaves were separated from rhizomes. Then, the plant material was air-dried at room temperature to a constant weight. The moisture content of the dried material was <1%. Leaves were manually ground using a mortar and pestle immediately before extraction.
Extraction and Flavonoid Content Determination
The pulverized air-dried leaf material (10 g) was extracted by maceration in water for 24 h at room temperature. The process was repeated twice, and then the extracts were pooled together and freeze-dried yielding an amorphous powder. Extraction yields were: 27.6 and 25.4%, respectively for the Cadiz samples; 30 and 25.1%, respectively for the Arcachon samples (given as % of the seagrass dry weight).
Quantification of the Phenolic Content
Separation and quantification of phenolics in the crude extracts were performed using high-performance liquid chromatography, consisting of a liquid chromatography system (Thermo Electron) equipped with an SCM 1000 solvent degasser, a thermostatically controlled column compartment, an AS 3000 autosampler with a 100 μL loop, a PDA UV6000LP detector and a Chromquest Chromatography Workstation. Separations were carried out at 40 °C on a Hypersil GOLD C8 column (Thermo Finnigan), 175 Å pore size, 5 μm particle size, 250 × 4.6 mm i.d. The analytes were eluted at a flow rate of 1 mL/min using the binary gradient 0.1% (v/v) TFA in water (A) and methanol (B). The following linear gradient was used: zero min, 1% B; 60 min, 99% B. Run time was 60 min, stop time was 60 min, post time was 10 min. UV spectra were collected over the range of 220-440 nm, and the chromatograms were recorded at 270, 328 and 350 nm with a resolution of 1 nm and no smoothing. In addition, the data were processed to create a chromatogram in which each chromatographic peak represents the absorbance of the eluting substance at its λmax (max-plot chromatogram). The injection volume was 20 μL. The data were integrated using the Chromquest automated software system. Stock solutions of the dried extracts were prepared in DMSO at a concentration of 0.5 mg/mL. All solutions were filtered prior to analysis through a 0.2 μm syringe filter and injected three times into the HPLC. Chromatographic peaks were checked for peak purity, and identification was achieved by comparing retention times and UV spectra with those of standards and authentic samples obtained by sulfation of apigenin, diosmetin and luteolin.
Quantitative determinations of flavonoids were carried out by peak area measurements at 350 nm, using an external calibration curve of diosmetin dissolved in DMSO. The curve was established on six data points, covering the concentration range 0.0619-0.00619 mg/mL. Linear regression on the HPLC analyses gave R² values of 0.9994.
Quantitative determinations of zosteric acid were carried out by peak area measurements at 280 nm, using a calibration curve of coumaric acid [9]. The linear regression coefficient was 0.9998 (six points).
Quantitative determinations of caffeic acid were carried out by peak area measurements at 328 nm, using a calibration curve of caffeic acid at the same wavelength (0.9996, six points).
The data presented in Table 1 are the average of three experiments, calculated using the following equation:

individual phenolic compound = (C × y) / Cs, (1)

where C is the concentration of the tested phenolic compound (mg/mL) in the analyzed extract, calculated from peak areas and linear regression; y is the extraction yield; and Cs is the concentration of the sample (mg/mL), diluted in DMSO/deionised water 4:1 (v/v).
Data are expressed in micrograms per gram of dry matter of Z. noltii (μg/gdw; mean ± standard deviation (SD) of three determinations).
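As an illustration of equation (1), the sketch below (all numeric values are hypothetical, not the study's calibration data) converts an HPLC peak area into μg of phenolic per g of dry matter: the external calibration line yields C, which is then scaled by the extraction yield y and divided by the sample concentration Cs.

```python
# Equation (1): individual phenolic = (C * y) / Cs
# All numeric values below are hypothetical and for illustration only.

slope = 4.0e6        # hypothetical calibration slope: peak area per (mg/mL)
intercept = 5.0e2    # hypothetical calibration intercept
peak_area = 6.1e4    # hypothetical HPLC peak area measured at 350 nm

C = (peak_area - intercept) / slope   # mg/mL of the phenolic in the injected solution
y = 0.30                              # extraction yield (30% of seagrass dry weight)
Cs = 0.5                              # mg/mL, concentration of the extract sample

content = (C * y) / Cs                # mg of phenolic per mg of dry matter
print(f"{content * 1e6:.0f} ug/g dry weight")   # scale to micrograms per gram
```

With these illustrative values the result is on the order of 9,000 μg/g, i.e., the magnitude reported for diosmetin 7-sulfate in the Arcachon samples.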
LC-MS Analyses
HPLC-PDA-ESI/MS analyses were performed using an HP1100 (Hewlett-Packard) equipped with an Agilent MSD 1946B single quadrupole mass spectrometer and HP Chemstation software. Positive mode ESI spectra of the column eluate were recorded in the range of m/z 100-1,000 a.m.u. Absorbance was measured at 280 and 320 nm. Compounds were separated using an MN Nucleodur C18 column (Macherey-Nagel, Germany) measuring 125 mm × 2 mm i.d., 3 µm particle size. The analytes were eluted at a flow rate of 0.3 mL/min using the binary gradient (v/v) formic acid in water (1%, pH = 2.55, A) and methanol (B). The following linear gradient was used: 15% B to 100% B over 15 min. Separation of the analytes was carried out at 50 °C. The injection volume was 2 µL. For mass spectrometric analysis, compounds were detected using the following conditions: nebulising gas pressure, 60 psi; drying gas flow rate, 12 L/min; temperature, 350 °C; capillary voltage, 4,000 V; temperature source, 350 °C. Data were acquired in full scan mode (m/z 100-1,000) at a fragmentor voltage of 70 V.
Acid Hydrolysis of the Crude Extracts
One hundred milligram samples of crude extract from Arcachon and Cadiz were separately dissolved in 100 mL of methanol and stirred with 5 mL of TFA at room temperature until total disappearance of the sulfated flavonoids, as monitored by HPLC. After evaporation of methanol under vacuum, the reaction mixture was partitioned between n-butanol and water. Addition of BaCl₂ to the aqueous layer gave a white precipitate of BaSO₄. The butanolic fraction was evaporated to dryness, then analyzed by HPLC, UV and NMR, and compared with authentic samples of apigenin, diosmetin and luteolin. Results showed the large predominance of apigenin in the case of Cadiz and diosmetin in the case of Arcachon. Small amounts of luteolin were also found in the hydrolysis mixtures, which confirms the presence of luteolin 7-sulfate at the two locations.
Conclusions
The present study is the first effort to compare and fully characterize the sulfated flavonoid profile in two populations of the seagrass Z. noltii. Results show that each population is largely dominated by a specific flavonoid: apigenin 7-sulfate in Cadiz and diosmetin 7-sulfate in Arcachon. This well-marked chemodifferentiation fits well with the recent genetic data reported in the literature for the Cadiz group. The results represent the first experimental evidence of the existence of chemotypes within the species Z. noltii. They demonstrate the need to correlate data obtained with DNA-based molecular markers and the secondary chemistry, in addition to morphology.
"year": 2012,
"sha1": "bf4bfee281d4f60973228e6108f32a54aa886d16",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.3390/plants1010027",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "c4aa0d74106e7760a934f03429893cb75f131340",
"s2fieldsofstudy": [
"Biology",
"Chemistry",
"Environmental Science"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
Wheal and flare reactions in skin prick tests of patients treated with montelukast alone or in combination with antihistamines
Because antileukotrienes may inhibit inflammation, it is plausible that montelukast administered for a long time could suppress the skin wheal and flare reaction, and thus that it should be discontinued prior to the tests. This study assessed the effect of long-lasting treatment with montelukast, alone or in combination with antihistamines, on wheal and flare in skin prick tests (SPT) in patients sensitized to perennial allergens. We conducted a 32-week, double-blind, placebo-controlled, cross-over and randomized trial with two arms: in arm A, 20 patients received levocetirizine, montelukast with or without levocetirizine, or placebo; in arm B, 20 patients received desloratadine, montelukast with or without desloratadine, or placebo. All treatment periods lasted 6 weeks and were separated by 2-week washouts. At baseline and on the last day of each treatment period, SPT were performed in all participants. Both levocetirizine and desloratadine, in monotherapy or in combination with montelukast, were effective in reducing wheal and flare in SPT. Monotherapy with montelukast did not change the size of the wheal for either histamine or house dust mites in either arm of the study, but significantly reduced the size of the flare for histamine in arm A. Addition of montelukast to antihistamine did not exceed the efficacy of monotherapy with antihistamine in either arm of the study. Since the size of the wheal determines the results of SPT, montelukast, even taken for a long time, does not have to be discontinued prior to the tests.
Introduction
Skin prick tests (SPT) are commonly used to confirm sensitization to a wide spectrum of environmental allergens. SPT help to diagnose the underlying cause of rhinitis, asthma or urticaria, and are required to recommend appropriate prophylaxis or for qualification for immunotherapy [1,2]. Since early wheal and flare reactions result mainly from histamine released from degranulating mast cells, it is obvious that antihistamines are more or less able to inhibit this reaction [3].
Since montelukast, a potent and selective leukotriene receptor antagonist, suppresses allergic inflammation [4][5][6][7][8], improves control of asthma [8] and reduces symptoms of seasonal [4,5] and perennial [6,7] allergic rhinitis (AR), it is possible that it may inhibit skin response to allergens measured in skin prick tests. Although guidelines for skin prick testing do not recommend discontinuation of montelukast before the SPT [1,2], most studies relied on assessment of wheal and flare reactions after the single dose [9] or a very short-term treatment with montelukast [10]. Furthermore, there were studies that confirmed gradually increasing improvement of AR symptoms in the course of long-lasting treatment with montelukast [7,11,12] or combination of montelukast with antihistamine [11]. Therefore, it is plausible that one dose or short treatment with montelukast may not affect SPT, whereas long lasting treatment with this agent administered alone or in combination with antihistamine due to the increasing efficacy and immunomodulative properties may affect the skin response in SPT.
In this study, we aimed to determine the influence of montelukast administered for 6 weeks as monotherapy or added to antihistamine on the size of wheal and flare in SPT of patients with allergic rhinitis sensitized to perennial allergens, in relation to placebo as well as to monotherapy with desloratadine or levocetirizine.
Study design
This study was designed as a prospective, double-blind, randomized, cross-over, placebo-controlled trial, including two arms with a 2-week run-in period and four treatment periods, each lasting 6 weeks, separated by 2-week washouts.
Patients were recruited in our outpatient clinic over 4 months (June-September), and the study was performed between September and March.
The study included male and female patients, aged 18-65 years, with at least a 2-year history of mild to severe persistent allergic rhinitis, who were sensitized to perennial allergens relevant to Central Europe (house dust mite (HDM), cat, and dog), as confirmed by a positive history and positive results of skin prick tests. Patients who suffered from skin diseases that prevented execution and interpretation of skin prick tests, who were treated with systemic steroids or immunomodulative medicaments, as well as current smokers, patients with an infection within the 6 weeks preceding the study, and patients with neoplastic or other severe diseases, were excluded from the study. Pregnant and breast-feeding women were excluded too. Patients could not use allergen-specific immunotherapy or any anti-allergic medications during the course of the study except the study medication. Xylometazoline (0.1%) nasal drops were allowed as a rescue medication.
After a two-week run-in period, all eligible patients (30 female, 10 male; mean age 28.9 ± 2.7 years) were assigned randomly to group A (n = 20), receiving either levocetirizine (5 mg tablet once daily in the evening), montelukast (10 mg tablet once daily in the evening), a combination of montelukast and levocetirizine (in the evening), or placebo, or to group B (n = 20), receiving either desloratadine (10 mg tablet once daily in the evening), montelukast (10 mg tablet once daily in the evening), a combination of montelukast and desloratadine (in the evening), or placebo (5 mg saccharose in starch pills, one daily in the evening). Medications were administered in a cross-over and blinded manner.
Both at baseline and on the last day of each treatment period, skin prick tests were done for each participant.
All patients signed written informed consent and the study protocol was approved by the ethical committee of the Medical University in Lodz.
The principal endpoint of this study was the size of wheal and flare in skin prick tests after 6 weeks of treatment, either with monotherapy (antihistamine, i.e., desloratadine or levocetirizine, or montelukast) or with combination therapy (antihistamine plus montelukast), in relation to the baseline test and placebo.
Skin prick tests
Skin prick tests with 11 common allergens (Allergopharma J. Ganzer KG, Reinbeck, Germany) were performed for each patient, with histamine (10 mg/ml) as a positive and diluent as a negative control. Results were regarded as positive when the mean wheal diameter (assessed as half the sum of the largest diameter and its perpendicular measurement) was greater than or equal to 3 mm. Since all patients were sensitized to house dust mites, results of the SPT were presented for Dermatophagoides pteronyssinus and D. farinae in relation to histamine.
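Expressed as a small sketch (the measurements below are hypothetical, not study data), the positivity rule reads:

```python
# SPT positivity: mean wheal diameter = (largest diameter + perpendicular) / 2,
# regarded as positive when >= 3 mm. Measurements below are hypothetical.

def mean_wheal_diameter(largest_mm, perpendicular_mm):
    return (largest_mm + perpendicular_mm) / 2.0

def is_positive(largest_mm, perpendicular_mm, threshold_mm=3.0):
    return mean_wheal_diameter(largest_mm, perpendicular_mm) >= threshold_mm

print(is_positive(5.0, 4.0))   # True: mean diameter 4.5 mm
print(is_positive(3.0, 2.0))   # False: mean diameter 2.5 mm
```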
Statistical methods
The distribution of the results was determined with the Shapiro-Wilk normality test, a Mann-Whitney test was used to compare groups, and one-way analysis of variance (ANOVA) was used to compare results in each arm on different visits. A p < 0.05 was considered statistically significant. The mean with standard error of the mean (SEM) was provided. Statistica 5.1 PL for Windows software (StatSoft Polska, Cracow, Poland) was used for the analyses.
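The same workflow can be sketched with SciPy as follows (synthetic wheal diameters, not the study's data; the actual analyses were run in Statistica):

```python
# Sketch of the statistical workflow: normality check, group comparison, ANOVA.
# All data below are synthetic and for illustration only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
baseline = rng.normal(6.0, 1.0, 20)        # hypothetical mean wheal diameters (mm)
antihistamine = rng.normal(2.5, 0.8, 20)
montelukast = rng.normal(5.8, 1.0, 20)
placebo = rng.normal(6.1, 1.1, 20)

# 1) Check the distribution of the results (Shapiro-Wilk normality test).
w, p_normal = stats.shapiro(baseline)

# 2) Compare two groups (Mann-Whitney U test).
u, p_groups = stats.mannwhitneyu(antihistamine, placebo)

# 3) Compare results across the four treatment periods (one-way ANOVA).
f, p_anova = stats.f_oneway(baseline, antihistamine, montelukast, placebo)

for name, p in [("Shapiro-Wilk", p_normal), ("Mann-Whitney", p_groups), ("ANOVA", p_anova)]:
    print(f"{name}: p = {p:.4f}, significant at 0.05: {p < 0.05}")
```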
Results
All randomized patients completed four treatment periods, and only two patients were lost to follow-up. All participants were sensitized to house dust mites and six patients were additionally sensitized to cat and dog allergens. Although sensitization to seasonal allergens was present in some patients, it was not essential in relation to HDM. Patients' baseline characteristics are presented in Table 1.
Generally, the mean size of the wheal and flare was largest at baseline. The placebo did not affect the size of skin reactions to either histamine or HDM in patients evaluated in arms A and B.

Levocetirizine administered as monotherapy or in combination with montelukast in arm A, and monotherapy with desloratadine as well as concomitant treatment with desloratadine and montelukast in arm B, were the most effective treatment options for inhibiting the size of wheal and flare in SPT. There were no significant differences between antihistamine taken alone and in combination with montelukast (Table 2; Figs. 1, 2, 3).
If montelukast was administered as monotherapy, it did not change the size of the wheal for either histamine or HDM in either arm of the study. However, montelukast significantly reduced the size of the flare in the SPT with histamine in arm A and slightly, but not significantly (p = 0.052), reduced the size of the flare for D. pteronyssinus in arm B. Administration of montelukast in combination with antihistamine had no effect on the size of wheal and flare in comparison to monotherapy with antihistamine in both arms of the study (Table 2; Figs. 1, 2, 3).
Discussion
The results of this study demonstrate that long-term therapy with montelukast, administered as monotherapy or concomitantly with levocetirizine or desloratadine, does not affect the formation of wheals in SPT, nor does it potentiate the inhibitory effect of antihistamines. Since the diameter of the wheal underlies the assessment of the results of SPT, the slight inhibition of the flare by montelukast does not affect the outcomes of the test; thus, montelukast, even administered for a long time, does not have to be discontinued before the skin prick test.
The formation of wheals and flares in skin prick tests results from immunoglobulin E-dependent activation of basophils and mast cells, marked by the release of inflammatory mediators, including histamine [1]. Histamine induces capillary dilation, increases vascular permeability, stimulates nociceptors responsible for pain, and further causes eosinophil chemotaxis to the inflamed tissue. As a result, the exudate enters the skin and causes swelling accompanied by itching [1,2]. Although it is unlikely that montelukast, a potent leukotriene receptor antagonist, directly affects the release of histamine from basophils and mast cells [10,13], it is well documented that it possesses other anti-inflammatory properties [14][15][16][17][18] and may increase the clinical effect during treatment [11,12]. Thus, if taken for a long time, it could alter the inflammatory response of the skin. Montelukast has been shown to modify the skin response; however, the number of studies supporting this finding is still limited. In rats exposed to water avoidance stress, a 5-day treatment with montelukast decreased the number of both degranulated and mature granulated mast cells in the dermis [19]. In humans, montelukast significantly delayed the occurrence of late skin responses, which constitute very frequent side effects of specific immunotherapy [9], and decreased the severity of severe hypersensitivity reactions to platinum in patients undergoing rapid desensitization [20]. In patients with delayed pressure urticaria, the addition of montelukast to antihistamine resulted in better suppression of the challenge test and greater clinical improvement [21].
Despite its anti-inflammatory properties and clinical efficacy, a single dose [9] or 1-week treatment with montelukast [10,22] did not significantly suppress both wheal [9,10,22] and flare [9,10] at any time point compared with placebo. Furthermore, in this field, the efficacy of short treatment with montelukast was always inferior to efficacy of antihistamines [9,10]. What is more, montelukast as add-on therapy to antihistamine did not bring any additional benefits compared to monotherapy with antihistamine [10].
Similarly in this study, the long-lasting treatment with montelukast did not alter the size of the wheal in SPT. The efficacy of montelukast was comparable to placebo, and significantly lower than efficacy of monotherapy with levocetirizine and desloratadine in arm A and B, respectively. Furthermore, montelukast did not change the mean diameter of flare for HDM in arm A and B and histamine in arm B, but significantly reduced the size of the flare to histamine in arm A when compared to baseline. However, the efficacy of montelukast was lower than the efficacy of levocetirizine and desloratadine and comparable to placebo at any time point of the study. Also, addition of montelukast did not potentiate effects of antihistamines.
The results of our study confirm that montelukast, which may modify tissue infiltration and the inflammatory milieu, and subsequently may modify the late phase of allergic inflammation, does not seem to affect the release of histamine from activated mast cells and basophils. Therefore, montelukast, even if it is taken for a long time, does not need to be discontinued before allergen skin testing. The reduction of the flare for histamine in patients treated with montelukast most likely results from the small number of participants, rather than from anti-inflammatory properties of montelukast, and does not affect the reading of SPT, where the mean size of the wheal is what matters.
"year": 2013,
"sha1": "b56ccb88231bb7217e47214f66b6545d744c0ea6",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s00011-013-0688-y.pdf",
"oa_status": "HYBRID",
"pdf_src": "SpringerNature",
"pdf_hash": "b56ccb88231bb7217e47214f66b6545d744c0ea6",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
4374779 | pes2o/s2orc | v3-fos-license | Vision Diagnostics and Treatment System for Children with Disabilities
Vision plays a crucial role in children's mental development. Therefore, early diagnosis of any vision disorders and implementation of a correct therapy are very important. However, carrying out such a procedure in the case of young children, and especially children with brain dysfunctions, poses some limitations due to cooperation problems. The vision diagnostics and treatment (VisDaT) system presented in this paper is meant to help therapists in proper diagnosis and treatment involving such children. It utilizes a computer connected to two monitors and equipped with specialized software. The main system components are as follows: an eye tracker recording a child's eye movements and a digital camera monitoring the child's reactions online. The system's specialized software creates the opportunity to stimulate children's vision with dedicated stimuli and supports post hoc analyses of recorded sessions, which enable decisions about future treatment.
Introduction
From the very moment of birth, every human is faced with the challenge of getting to know the world and has to activate all the senses for this purpose. There is huge progress in skills during the first year of life, and proper vision development has a considerable impact on this process. Recognizing shapes, colors, or other people's emotions is facilitated significantly when the oculomotor system cooperates correctly with the brain. However, not all people are so fortunate: their eyesight may be impaired from the beginning of their lives or become weakened or damaged later. Lack of ability to see the surrounding world properly may result in serious negative consequences, especially for children, because it may not only slow down cognitive processes but also affect psychosocial relationships. The problem becomes more severe when it concerns infants or older children whose vision impairments are accompanied by other disabling conditions such as cerebral injuries [1]. The difficulty stems from the fact that a child is unable to communicate what and how he/she sees, which makes reaching a diagnosis a demanding task. It is crucial to evaluate the extent to which children with impairments are able to use their vision, because even weak visual abilities may serve as the basis for vision enhancement [2], which subsequently may support overall intellectual development [3]. Children with complex disabilities require systematic brain stimulation. If many visual, auditory, and tactile stimuli engage children's attention, then the brain, receiving pieces of information from various senses, has to organize them by recognizing, analyzing, and integrating them. Therefore, a therapy appropriate for a particular impairment should be prepared based on a functional vision assessment examination [4]: (i) Fixation correctness: the ability to keep eyesight on a visual stimulus, (ii) Eyeball motility: the ability to trace a moving stimulus with the eyes, (iii) Functional visual acuity: the distance from which a child recognizes a character of a given size, (iv) Contrast sensitivity: the impact of the presented level of contrast on a child's visual ability, (v) Field of vision: the area within which a child is able to see the presented object.
During the series of tests determining the conditions in which a child is able to perceive objects, feedback from participants is expected. As has been mentioned, this may pose certain difficulties; thus, supporting techniques are sought. A solution that proves useful in dealing with the aforementioned problem is eye tracking technology, which provides methods for registering and analyzing eye movements and thus for evaluating vision quality.
This was the motivating factor behind the decision to undertake studies on developing an environment for vision diagnosis and therapy for children with various visual impairments. In this paper, we present such a solution and discuss its possible applications. The solution uses a specialized device for collecting eye movements and a multicomponent system facilitating data gathering and its further processing and analysis.
The paper starts with the introduction to eye tracking methods and eye movement data processing presented in Section 2-Eye Tracking Basics. The description of the proposed system including the main modules and algorithms is described in Section 3-VisDaT System. Subsequently, some examples of VisDaT system usages, together with obtained preliminary results, are presented in Section 4-Results. Finally, the last section summarizes the presented studies.
Eye Tracking Basics
When considering an application of eye tracking methods, the first decision must concern the device utilized to record eye movements. Of the currently used technologies (electrooculography [5] and video-oculography (VOG) systems [6,7]), the latter, due to its low invasiveness, is the obvious choice for children-oriented experiments. VOG eye trackers do not require direct contact with the eyes, because they record eye movement by means of digital video cameras capturing a sequence of eye images. An eye image may be obtained by the application of an infrared illuminator. Most implementations use near-IR light sources with a wavelength of approximately 880 nm, almost invisible to the human eye, but still detectable by most commercial cameras [8]. On the basis of eye images, specialized algorithms are used to evaluate the center of the pupil and the differences in its positions to determine eye movements. Such calculations utilize reflections caused by light falling into the eye (Figure 1).
These reflections are almost stable regardless of eyeball rotation, which provides reference points for pupil center positions. There are four such reflections, named Purkinje images (Figure 2): (i) The first Purkinje image (P1) is the reflection from the outer surface of the cornea (CR, also called the glint).
(ii) The second Purkinje image (P2) is the reflection from the inner surface of the cornea.
(iii) The third Purkinje image (P3) is the reflection from the outer (anterior) surface of the lens.
(iv) The fourth Purkinje image (P4) is the reflection from the inner (posterior) surface of the lens.
Video-based eye trackers usually use only the first one, because this is the brightest and easiest reflection to detect and track. The difference vector between the pupil center and the corneal reflection indicates the direction and scope of eye movement.
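As a simple illustration of this computation, the sketch below (with hypothetical pixel coordinates, not data from a real device) derives the pupil-glint difference vector and its direction and magnitude for two consecutive eye-camera frames.

```python
# Difference vector between pupil center and corneal reflection (first Purkinje
# image). Because the glint is nearly stationary under eye rotation, changes in
# this vector between frames reflect eye movement. Coordinates are hypothetical.
import math

def gaze_vector(pupil_xy, glint_xy):
    dx = pupil_xy[0] - glint_xy[0]
    dy = pupil_xy[1] - glint_xy[1]
    magnitude = math.hypot(dx, dy)                 # scope of the deviation (pixels)
    direction = math.degrees(math.atan2(dy, dx))   # direction of the deviation
    return (dx, dy), magnitude, direction

# Two consecutive eye-camera frames (glint stays put, pupil moves):
v1, m1, d1 = gaze_vector(pupil_xy=(312, 245), glint_xy=(300, 240))
v2, m2, d2 = gaze_vector(pupil_xy=(318, 243), glint_xy=(300, 240))
print("frame-to-frame change of the vector:", (v2[0] - v1[0], v2[1] - v1[1]))
```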
The limitation of such a solution is that the eyes should remain visible in the recording, because subsequent eye positions are evaluated based on information obtained from consecutive images. Construction-wise, VOG eye trackers can be divided into three categories: (i) Tower eye trackers-which use a tower-based ergonomic chin rest to ensure the stable position of the head and record an eye position using a high-end camera located directly above patient's eyes. They ensure the best accuracy and precision among all types of eye trackers.
(ii) Remote eye trackers-contactless, placed at some distance in front of an examined person. Because of this distance, such eye trackers are less accurate but also less cumbersome for users, as there is no chin rest and limited head movements are allowed.
(iii) Glasses-mobile, wearable eye trackers designed to capture visual behavior in any environment. Such eye trackers are typically equipped with two cameras-eye and scene cameras-and allow users to make free movements. The data obtained from both cameras must be synchronized to calculate the gaze position.
Since we need to ensure that the eyes are seen by an eye tracker camera, a tower eye tracker seems to be the best solution. However, it is easy to imagine how difficult it may be for children, even in the case of healthy ones, to sit in a still position with an immobilized head. It is even more complicated when the problem regards children with various impairments. On the other hand, head-mounted eye trackers allow for free head movement during recordings; however, they may be unadjusted to children's heads, which makes it uncomfortable and disturbing for them. There are trials of overcoming this problem with the device suitable for children [9], but it must be emphasized that an experiment conducted with such a device is rather child driven, because the scene viewed by a child is under its control and it is difficult to automatically correlate a scene camera image with the intended stimulus. Thus, remote eye trackers are considered a solution to be used [10,11], despite the fact that turning a head away from an eye tracker causes loss of an eye movement signal. The aforementioned problems are strengthened when children with impairments such as cerebral palsy are taken into consideration, because of the lack of ability to control head movements. Thus, an examining setup may be complemented with an additional equipment such as a stroller, car seat, or wheelchair appropriately restricting the infants' movements [12].
2.1. Registered Data Processing. Ensuring the possibility of registering eye position is only the first step towards obtaining information about children's vision quality. Additional steps have to be undertaken to turn the registered data into valuable information. The aim of the first step is to adjust the eye tracking environment to a particular child, which is achieved during the calibration process.
The commonly applied calibration procedure consists of two stages. During the first one, the examined person's eye movements are registered while looking at stimuli in known locations of a scene; usually these are points evenly distributed over a screen. In the case of an experiment engaging adults, 9 points are presented. Subsequently, to correlate the eye tracker output with the appropriate points, a function mapping registered eye positions to points of the viewed scene is defined. It is further used to provide the coordinates of the user's gazes (1). Because of idiosyncratic features of the oculomotor system, this process is conducted independently for each examined person [13].
(x_s, y_s) = f(x_e, y_e), (1)

where x_e and y_e represent the data obtained from an eye tracker and x_s and y_s are the estimated gaze coordinates on a screen.

The function f may be defined in various ways [14]; however, the solution commonly used is second-degree polynomial regression:

x_s = A_x*x_e^2 + B_x*y_e^2 + C_x*x_e*y_e + D_x*x_e + E_x*y_e + F_x,
y_s = A_y*x_e^2 + B_y*y_e^2 + C_y*x_e*y_e + D_y*x_e + E_y*y_e + F_y. (2)

Not every term of the polynomial must be utilized in the mapping function; the set of terms may be adjusted to a particular environment during its configuration, based on the accuracy of the provided gaze points. The simplest way to assess this accuracy is the root-mean-squared error (RMSE), measuring the difference between the stimulus location and the mapped gaze point:

RMSE = sqrt((1/n) * sum_{i=1..n} (G_i - Ĝ_i)^2), (3)

where G_i is an observed value and Ĝ_i is the value calculated by the mapping function. Nevertheless, it is also possible to apply other methods [15].

Applying the above-presented procedure may be difficult in the case of young children [16], as they lose their attention very quickly, so the number of calibration points should be reduced at least to 5 [11]. Sometimes it is impossible to use even so few points, because of problems with communication or with understanding the rules. This forces us to further decrease the number of points to 2 [9,12] or to use other objects interesting for children. This solution was utilized in the work described in [17], where a small, visually attractive sounding toy was displayed at one of five predefined spatial positions. Similarly, in [18], infants were shown attention getters placed at the top left and bottom right corners of an imaginary rectangle corresponding to the corners of the stimulus viewed during the test. The calibration routine was repeated if the infant's gazes at test stimuli were outside of the assumed spatial accuracy.
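Returning to the mapping function (2) and the error measure (3), the following is a minimal sketch (with synthetic calibration data, not output of any particular eye tracker) of fitting the polynomial coefficients for one screen coordinate by least squares with NumPy and evaluating the RMSE of the fitted mapping; the x_s coordinate would be fitted identically against its own targets.

```python
import numpy as np

# Synthetic calibration data: raw eye-tracker outputs and the known on-screen
# target coordinate (here only y_s; x_s is fitted the same way).
xe = np.array([0.12, 0.48, 0.85, 0.15, 0.50, 0.82, 0.10, 0.52, 0.88])
ye = np.array([0.20, 0.18, 0.22, 0.55, 0.52, 0.50, 0.80, 0.84, 0.79])
ys_target = np.array([60, 60, 60, 380, 380, 380, 700, 700, 700], dtype=float)

# Design matrix for eq. (2): [xe^2, ye^2, xe*ye, xe, ye, 1] -> coefficients A..F.
M = np.column_stack([xe**2, ye**2, xe * ye, xe, ye, np.ones_like(xe)])
coeffs, *_ = np.linalg.lstsq(M, ys_target, rcond=None)

# Mapped gazes and the RMSE of eq. (3).
ys_hat = M @ coeffs
rmse = np.sqrt(np.mean((ys_target - ys_hat) ** 2))
print("coefficients A..F:", np.round(coeffs, 2), " RMSE:", round(rmse, 3))
```

Dropping columns from the design matrix corresponds to removing terms of the polynomial, which is exactly the per-environment adjustment mentioned above.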
However, when children are affected by serious cerebral impairments [4], additional simplification may be required; thus, extended methods are searched for. The purpose of this paper is to present the solution ready to be applied for examining vision quality of children with whom verbal communication is very difficult.
The VisDaT System Description
To prove useful, an environment meant for vision diagnosis and therapy should offer many visual, auditory, and tactile stimuli, which force the brain to organize incoming information by recognizing, analyzing, and integrating it. Additionally, it should be flexible in adjusting to children's needs, which differ depending on their age and impairment. For this purpose, the VisDaT system was developed in cooperation with a group of therapists from the BRUNO association involved in the daily care of impaired children. They provided many useful guidelines for tool development, especially regarding children's physical condition and stimuli design. Based on the collected knowledge, some assumptions were made with regard to hardware and software configuration. These assumptions and the system implemented under them are described in the subsequent sections.
3.1. Assumptions. The first and most important circumstance is the inability to explain the rules of an intended eye tracking procedure to the previously described group of children and to encourage them to follow these rules. To facilitate eye movement registration in such difficult cases, it was necessary to prepare a workplace that would be usable even when it is not possible to calibrate users properly. This imposes the need to work on new methods that could overcome calibration problems with an implicit procedure, without any cooperation from the user. Even when the system is not calibrated at all, it should give some feedback: at least information on whether users somehow respond to the presented stimuli.
On the other hand, the system should be attractive for children and engage them as much as possible, taking into account their visual impairments. Therefore, stimuli preparation should be preceded by an initial recognition of objects that are interesting for a child. It would be convenient to define specific stimuli and feedback for each child, that is, specific colors, shapes, and sounds that a particular child normally finds attractive. Additionally, to provide a reward system, visual and acoustic responses of the system should be introduced to trigger an appropriate reaction from the child. These responses may be activated automatically or manually by an operator conducting an eye tracking session. It is assumed that this will be a therapist who works with the child on a daily basis.
3.2. System Realization. The VisDaT system is the extended version of the prototype presented in [19]. It consists of a computer with a dual graphics card, two displays, an Eye Tribe eye tracker, a Genius Eye camera, and speakers. The examined person (child) sits in front of one of the displays (the stimulation screen) and watches stimuli. The Eye Tribe is situated below the display using a special mount and records the child's eye movements during the whole session. The Genius camera is situated above the display and records the child's movements. This information may be helpful in post hoc analysis of the session. Speakers are used to emit sound, which is treated as a reward for the child's active participation. The other display (the control screen) is located nearby, but in such a way that its content is not visible to the examined person. The operator of the system observes the content of the control screen and uses a mouse to change the content of the stimulation screen and to influence the work of the eye tracker. Both outputs, from the eye tracker and the camera, are presented online to the operator. The simplified setup of the system is presented in Figure 3.
The system is controlled by specialized software. It was written in the Java language and is thus independent of the operating system. However, so far it has been tested only in the Microsoft Windows 10 environment. The workplace presented in the paper used a Dell Precision T1600 computer with an Intel Xeon 3.1 GHz CPU and 4 GB RAM, but any computer with at least one USB3 socket should be suitable.
3.3. Solving the Lack of Calibration Problem. As mentioned in Section 2, a typical calibration scenario, in which a participant is instructed to follow a point displayed on a screen with his or her eyes, is not feasible for children with brain disorders, as they are not likely to follow any instructions. Therefore, instead of the traditional calibration-presentation scenario, two other techniques were proposed for the presented system: differential analysis (DA) and implicit calibration (IC).
3.3.1. Differential Analysis. Any usage of an Eye Tribe eye tracker has to be preceded by calibration to make registration possible. Thus, at first, it must be calibrated by the operator. The eye tracker's output is then calculated based on the operator's calibration, and this output is used for subsequent experiments. Such a method does not give accurate absolute gaze coordinates, but it may be used to check whether the eyes move at all. Our previous experiments have shown that when an eye tracker calibrated for one person is utilized for registering the eye movements of other people, in most cases the direction of a calculated gaze movement is similar, yet less accurate than for the calibrated user [20].
Examples of such recordings are visualized in Figure 4. Figure 4(a) shows recordings for the calibrated person, while the following three show other users, evaluated with the usage of the same calibration function. Red crosses denote stimulus locations, the blue ones are the eye tracker's output, and green lines connect calculated gaze coordinates with the actual stimulus positions. It may be noticed that in all cases, eye positions are shifted in one direction with respect to the stimulus. It means that when the same calibration model is applied to different subjects, their data should show correct eye movement directions, but not correct eye positions. Thus, when there is no need to provide a gaze point with high accuracy, the proposed solution will fulfill its task; namely, it will point out the direction of an eye movement.
So, even when the eye tracker is not calibrated for the person who is being observed, it is very likely that the real direction of movements will be the same as the one shown by the eye tracker. Therefore, a special component called the gaze-rose has been introduced and is visible on the operator's screen. The gaze-rose shows the current direction of eye movement (Figure 5).
This direction is presented in the form of a vector (dx, dy) calculated as the resultant of the movement vectors for the previous five recordings (a time window of approximately 80 ms of the recording). For a given moment k, the coordinates of the vector are calculated using the equations:

dx = sum_{i=k-4}^{k} (x_i - x_{i-1}), dy = sum_{i=k-4}^{k} (y_i - y_{i-1}),

where x_i and y_i are gaze coordinates read from the eye tracker. The gaze-rose may be used to detect whether a child reacts to a new stimulus. For instance, when the stimulus appears on the right side of the screen and the operator notices a sudden eye movement to the right (the right red arrow on the gaze-rose), she/he may assume that this movement is a reaction to the stimulus and, most importantly, that the child sees this stimulus.
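A minimal Python sketch of this computation is given below; it assumes a roughly 60 Hz tracker, so that five consecutive movements span approximately 80 ms, and the sample points are invented.

    from collections import deque

    def gaze_rose_vector(positions):
        # Resultant (dx, dy) of the movement vectors between the last six
        # gaze positions, i.e., the previous five recorded movements
        window = list(positions)[-6:]
        dx = sum(b[0] - a[0] for a, b in zip(window, window[1:]))
        dy = sum(b[1] - a[1] for a, b in zip(window, window[1:]))
        return dx, dy

    recent = deque(maxlen=6)
    for point in [(0.10, 0.50), (0.15, 0.50), (0.22, 0.51),
                  (0.30, 0.50), (0.41, 0.50), (0.50, 0.50)]:
        recent.append(point)
    print(gaze_rose_vector(recent))  # a mostly rightward vector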
3.3.2. Implicit Calibration. As the explicit calibration of the system with children's cooperation is impossible, it must be done implicitly during normal activity. This requires obtaining pairs of an eye tracker output and the actual gaze coordinates. Given such pairs, it is possible to build a regression function that maps the eye tracker output to gaze coordinates. Of course, the function works better when there are more points and the data is of better quality. However, in the case of children with brain disorders, we have to accept objective restrictions. Therefore, the calibration is done 'on the fly': whenever a stimulus appears on the screen and the operator notices a child's reaction (indicated by the gaze-rose), this may mean that the child is looking towards the stimulus. One click on the operator's screen then triggers the process of collecting data for the calibration. The implicit calibration algorithm works as follows: (1) A stimulus appears on the screen.
(2) A child moves its eyes towards the stimulus.
(3) The operator sees the movement and, when it is finished, clicks on the stimulus to inform the system that the child is now looking at that point.
(4) The system starts registration of the eye tracker's output and triggers potential feedback (sound or movie).
(5) When the eye tracker's output changes (indicating a subsequent eye movement), the calibration module stops the registration and adds the clicked point's coordinates, together with the registered eye tracker's output, to the existing calibration dataset.
(6) The calibration function is recalculated using all data from the calibration dataset.
Of course, more points result in a better calibration model, but even a one-point calibration may be valuable. For instance, in [20] it was shown that even after a one-point calibration, it was possible to evaluate at which of nine parts of a screen the user was looking.
The number of collected points is not the only factor that influences accuracy. It is also important whether the points are more or less evenly distributed across a screen and whether the calibrated area corresponds to the scene of the subsequent experiment. Thus, for instance, if an experiment task is to explore the efficiency of a child's vision in the horizontal direction, the stimulus should consist of objects appearing along this axis. Or, if an experiment is based on the central area of a screen, stimuli should be concentrated around this central point within a particular radius.
Another very important factor is the quality of the recordings. If the operator clicks the point and the child immediately turns its head or closes its eyes, the quality of the eye tracker's output may deteriorate, and adding such a point to the model will probably worsen the overall accuracy of the model. Examples of points with high and low deviations may be observed in Figure 4. Therefore, before building the model, the application automatically checks the deviation of eye tracker outputs for each point and then removes points for which the deviation is significantly higher than for others.
The deviation for N readings recorded for one gaze point is calculated as follows:

d = sqrt( (1/N) * sum_{i=1}^{N} [ (x_i - μ_x)^2 + (y_i - μ_y)^2 ] ),

where x_i and y_i are the eye tracker outputs and μ_x and μ_y are the average values in both directions. When there are more than three points registered, the point with the highest deviation is automatically removed from the calculations. The next step is the creation of pairs (x_s, y_s) and (μ_x, μ_y), where x_s and y_s are the screen coordinates and μ_x and μ_y are the average eye tracker's output registered for this point. These pairs are then used to calculate the coefficients of the polynomial presented in (2) by means of the classic Levenberg-Marquardt algorithm [21]. The results are then immediately used to recalculate the gaze point presented on the control screen.
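The deviation filter and the coefficient fit can be sketched as follows; the calibration pairs are fabricated for illustration, and SciPy's Levenberg-Marquardt solver stands in for the system's actual implementation.

    import numpy as np
    from scipy.optimize import least_squares

    def deviation(points):
        # Spread of N eye tracker readings (x_i, y_i) around their mean
        pts = np.asarray(points, dtype=float)
        mu = pts.mean(axis=0)
        return np.sqrt(np.mean(np.sum((pts - mu) ** 2, axis=1)))

    def residuals(c, xe, ye, target):
        # Residuals of the second-degree polynomial (2) for one screen axis
        pred = c[0]*xe**2 + c[1]*ye**2 + c[2]*xe*ye + c[3]*xe + c[4]*ye + c[5]
        return pred - target

    # Fabricated implicit-calibration pairs: mean tracker output per clicked
    # point (mu_x, mu_y) and the clicked screen x coordinates (xs)
    mu_x = np.array([0.2, 0.5, 0.8, 0.2, 0.5, 0.8])
    mu_y = np.array([0.3, 0.3, 0.3, 0.7, 0.7, 0.7])
    xs = np.array([100.0, 500.0, 900.0, 120.0, 510.0, 880.0])

    fit = least_squares(residuals, x0=np.zeros(6),
                        args=(mu_x, mu_y, xs), method="lm")
    print(fit.x)  # estimated coefficients A_x .. F_x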
3.4. Automatic and Manual Feedback. Sometimes, the eye tracker data may not be calibrated sufficiently, and it may be difficult for the system to automatically produce feedback when a child looks directly at the stimulus. Therefore, the operator has the opportunity to produce the feedback manually. If the operator notices the child's reaction to the stimulus (utilizing the gaze-rose), he/she may click on it. This adds the point to the calibration model and, at the same time, triggers the action attached to this stimulus (sound or movement).
3.5. Post Hoc Analysis. All events during every session are recorded in a specific format by the storage module, meant to keep data for further analysis. The recording includes the eye tracker output, video registered by the camera, and screenshots of the stimulus screen. The developed software provides functionality facilitating eye movement analysis by means of charts, scan-paths, and heatmaps [22]. The last two techniques use eye movement events: fixations and saccades. The first is defined as a movement during which a person's eyes are almost stable and in the process of acquiring information from a scene. The other is a quick movement between two fixations [15,23]. In the case of the presented system, the IDT algorithm has been used for detecting these events [24]. The algorithm is based on measuring the dispersion between subsequent eye positions. If the dispersion is below a specified threshold, the gaze is considered to belong to a fixation. The presented software uses a 0.5-degree threshold.
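The dispersion logic of IDT can be sketched as below; the gaze samples are synthetic, coordinates are in arbitrary units, and a real implementation would convert the 0.5-degree threshold into those units from the screen geometry.

    def _dispersion(window):
        # IDT dispersion: (max(x) - min(x)) + (max(y) - min(y))
        xs = [p[0] for p in window]
        ys = [p[1] for p in window]
        return (max(xs) - min(xs)) + (max(ys) - min(ys))

    def idt_fixations(gaze, threshold, min_samples):
        # Return (start, end) sample index pairs of detected fixations
        fixations, i = [], 0
        while i + min_samples <= len(gaze):
            j = i + min_samples
            if _dispersion(gaze[i:j]) <= threshold:
                # Grow the window while dispersion stays under the threshold
                while j < len(gaze) and _dispersion(gaze[i:j + 1]) <= threshold:
                    j += 1
                fixations.append((i, j))  # fixation spans samples [i, j)
                i = j
            else:
                i += 1
        return fixations

    samples = [(0.0, 0.0), (0.1, 0.0), (0.05, 0.1), (5.0, 5.0), (5.1, 5.0)]
    print(idt_fixations(samples, threshold=0.5, min_samples=2))  # [(0, 3), (3, 5)]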
Results and Discussion
The described system, the general schema of which is provided in Figure 6, was tested with the usage of various stimuli. The stimuli differed in the form, filling, and type of feedback they produced in response to children's reactions, which included the following:

(i) Changing shape
(ii) Changing color
(iii) Emitting sounds
(iv) Animation

4.1. Stimulus Examples. An example of the first stimulus is a simplified image of a face moving on the screen (Figure 7). The face expression is sad until a child looks at the face, which makes the face smile.
Some children are very sensitive to color changes. An example of a stimulus for them is presented in Figure 8. The butterfly changes its color from black to red when a subject looks at it.
An ability to emit sounds gives the system the opportunity to stimulate children not only visually. According to therapists, it is very important to teach children that a combination of different types of stimuli may occur. An example of such a stimulus is presented in Figure 9. When a child looks at one of the animals, the appropriate sound is emitted.
By means of the stimulus editor, it is also possible to prepare more sophisticated stimuli, including simple animations. An example of such animated feedback is presented in Figure 10. The task of this stimulus is to introduce objects' movements caused by a directed gaze. If the gaze is placed on the watering can, it changes its position to water the flower. Directing the eyes towards the flower changes its shape.
4.2. Visualization Examples. The experiment presented in this section involved an animated stimulus. During the experiment, a bike moved on the screen from right to left and back. The child was supposed to follow the moving object with their eyes. The eye tracking data was recorded during the experiment and could be used for post hoc analysis. Figure 11 presents the stimulus together with a part of the scan-path recorded for one child. It is visible that the child's eyes followed the object. A more informative visualization is presented in Figure 12. It presents the eyes' positions separately in the horizontal and vertical directions, compared to the current location of the bike stimulus (the colored belt). It is evident that the child's eyes followed the object moving horizontally. Figure 13 presents the same visualization, but for another child with more severe disparities. This time the child was able to follow the object only at the very beginning of the presentation, and even then, the eye movement signal is very unstable and disappears for some periods.
More examples of the system usage are discussed in [19].
The explicit assessment of the proposed system is infeasible in the short term, because its effectiveness strongly depends on the child's age, impairment, and general health condition, which influence the regularity of therapy experiments. The first and basic possibility is careful observation of a child by its therapist, who knows the child best. Additionally, we can point out some factors which can help to confirm our observations, namely, the time to the first gaze on a stimulus, increased duration of fixations on a stimulus, longer pursuit of a stimulus, or more precise saccadic movements. The first of the aforementioned metrics may indicate better perception of the objects. The second and third ones may provide evidence of an improved ability to keep attention and of better functioning of the oculomotor muscles. Finally, reaching a stimulus position more precisely may reveal both better object recognition and progress in the oculomotor system's work.
All these metrics collected during therapy sessions may be collated and compared on various charts, providing a convenient way of reasoning about therapy results and vision advances.
Discussion and Conclusions
Children with communication impairments require special care and treatment, because it is difficult for them to solve many everyday issues due to problems with articulating them. Thus, the life of such children is hard not only because of health issues but also because their needs are not properly met.
The solution presented in this paper is a step towards alleviating communication difficulties and was meant as a tool providing an objective assessment of a child's vision quality and areas of interest. This was achieved by the application of an affordable eye tracker, by which the developed system might become available to a wider group of people. However, this device alone does not suffice in the case of children with impairments, as they do not cooperate. For this reason, other solutions had to be elaborated, including an application working on two displays, an implicit calibration procedure, and multimedia simulations responding automatically to the expected eye movement.
Preliminary experiments conducted by means of the proposed system corroborated the discussed solution's ability to support revealing children's visual performance. However, it must be emphasized that this is not the only function of this tool. Ascertaining children's sight introduces the possibility of undertaking an appropriate therapy, which may be realized within the presented environment. The usage of the same stimulus, or its continuous adjustment to a particular child, constantly stimulates the brain and oculomotor system. This functionality has been achieved thanks to a convenient mechanism of stimulus set extension, easy to perform for each system operator. This feature offers other opportunities for the system's utilization, not only for children but also for any "difficult" subjects who cannot or do not want to cooperate, such as adults with various diseases including, for instance, Parkinson's disease, alcoholic disease, or even anorexia. Currently, the authors intend to carry out detailed clinical trials of the created system in cooperation with the Association for Children with Developmental Dysfunctions BRUNO located in Rzeszow, Poland.

Figure 10: A flower in a pot and a watering can. At the beginning, the flower is withered. When a child looks at the watering can, the animation presenting watering the flower is invoked, which results in the flower straightening. If a gaze is focused on the flower, its petals grow. | 2018-04-04T00:06:17.680Z | 2018-01-10T00:00:00.000 | {
"year": 2018,
"sha1": "b32bc1946e58b5e93c00c8c0bc7442531b30d864",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1155/2018/9481328",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "ff9f9f976f45404c3a4c20fe12e85e9f270d43a1",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Computer Science",
"Medicine"
]
} |
255481 | pes2o/s2orc | v3-fos-license | COPD exacerbation severity and frequency is associated with impaired macrophage efferocytosis of eosinophils
Background Eosinophilic airway inflammation is observed in 10-30% of COPD subjects. Whether increased eosinophils or impairment in their clearance by macrophages is associated with the severity and frequency of exacerbations is unknown. Methods We categorised 103 COPD subjects into 4 groups determined by the upper limit of normal for their cytoplasmic macrophage red hue (<6%), an indirect measure of macrophage efferocytosis of eosinophils, and area under the curve sputum eosinophil count (≥3%/year). Eosinophil efferocytosis by monocyte-derived macrophages was studied in 17 COPD subjects and 8 normal controls. Results There were no differences in baseline lung function, health status or exacerbation frequency between the groups: A-low red hue, high sputum eosinophils (n = 10), B-high red hue, high sputum eosinophils (n = 16), C-low red hue, low sputum eosinophils (n = 19) and D-high red hue, low sputum eosinophils (n = 58). Positive bacterial culture was lower in groups A (10%) and B (6%) compared to C (44%) and D (21%) (p = 0.01). The fall in FEV1 from stable to exacerbation was greatest in group A (ΔFEV1 [95% CI] -0.41 L [-0.65 to -0.17]) versus group B (-0.16 L [-0.32 to -0.011]), C (-0.11 L [-0.23 to -0.002]) and D (-0.16 L [-0.22 to -0.10]; p = 0.02). Macrophage efferocytosis of eosinophils was impaired in COPD versus controls (86 [75 to 92]% versus 93 [88 to 96]%; p = 0.028); was most marked in group A (71 [70 to 84]%; p = 0.0295) and was inversely correlated with exacerbation frequency (r = -0.63; p = 0.006). Conclusions Macrophage efferocytosis of eosinophils is impaired in COPD and is related to the severity and frequency of COPD exacerbations.
Background
Chronic obstructive pulmonary disease (COPD) is a heterogeneous condition, exemplified by the identification of a subgroup of COPD subjects with eosinophilic airway inflammation [1,2]. The role of eosinophilic inflammation in COPD remains controversial, but it is consistently reported in 10-30% of COPD subjects and is associated with better responses to inhaled and oral corticosteroids [3,4]. The relationship between eosinophilic airway inflammation, the clearance of these cells, and clinical outcomes in COPD is poorly understood.
Apoptosis and subsequent removal of dead cells by phagocytes is a critical mechanism for the non-inflammatory clearance of granulocytes, including eosinophils [5][6][7][8]. Failure of phagocytosis and efferocytosis, the clearance of apoptotic cells, leads to secondary necrosis of these cells and release of toxic intracellular pro-inflammatory mediators. Impaired phagocytic ability of macrophages is consistently observed in COPD [9][10][11][12][13][14] and asthma [15], but whether this extends to abnormal efferocytosis of eosinophils in COPD needs to be determined.
Efferocytosis of eosinophils by macrophages can be measured directly in vitro and indirectly in vivo by the assessment of macrophage cytoplasmic red hue analysed on stained sputum cytospins [16]. In asthma, an increased macrophage cytoplasmic red hue predicts the future risk of the emergence of a sputum eosinophilia and poor asthma control following corticosteroid withdrawal [16]. Whether this biomarker can identify clinically important subgroups with impaired eosinophil efferocytosis in COPD is unknown.
We hypothesised that i) COPD subjects categorised into subgroups determined by their sputum eosinophilia and sputum macrophage red hue will identify important differences in terms of their clinical characteristics, exacerbation frequency and severity, ii) macrophage efferocytosis of eosinophils in COPD will be impaired; directly related to the sputum macrophage cytoplasmic red hue and indirectly associated with exacerbation frequency and severity. To test our hypotheses, we have examined sputum cytospins available from an earlier study [17] and prospectively assessed macrophage efferocytosis in subjects that participated in this study.
Subjects and study design
Clinical data and sputum cytospins were available from 196 subjects who had participated in an observational study of COPD exacerbations [17,18]. Subjects had undergone extensive clinical characterisation, including clinical history, demographics, visual analogue symptom (VAS) scores, health status assessment using the Chronic Respiratory Questionnaire (CRQ) and St George's Respiratory Questionnaire (SGRQ), spirometry before and after administration of a short-acting bronchodilator, sputum analysis for cellular profiles, and microbiological assessment at baseline, 3-monthly stable follow-up visits, and exacerbations for at least one year. All included COPD subjects were either ex-smokers or current smokers. Subjects assessed at ≥2 stable visits with sputum cytospins of adequate quality to assess the cytoplasmic red hue of ≥50 macrophages were included.
From the original cohort of 196 subjects, 103 subjects met the inclusion criteria (Figure 1). These subjects were not significantly different from the 196 in terms of lung function, symptoms, or health status. We then imaged 70-100 macrophages for each subject in Romanowsky-stained sputum cytospin slides, except in 15 subjects that had fewer macrophages, in whom at least 50 were imaged. The percentage area of cytoplasm with red hue was determined by thresholding. Using ImageJ software, the cytoplasmic area of macrophages was selected in saved (TIFF) images (Additional file 1: Figure S1). After defining a suitable threshold, the software calculated the number of red pixels corresponding to eosinophilic staining, and the median percentage area of cytoplasm was derived as previously described [16]. The sputum eosinophil area under the curve (AUC) was derived from the sputum samples collected at stable visits and expressed as sputum eosinophil %/year. Subjects were stratified into 4 groups based on cut-offs for the sputum eosinophil count (≥3%) and the upper limit of the normal range for % area macrophage red hue (>6%) [16] (Figure 1 and Tables 1 and 2).
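As a rough illustration of this thresholding step (not the authors' ImageJ procedure), the red-pixel percentage of a segmented cytoplasm might be computed as follows; the image, mask, and threshold value are all hypothetical.

    import numpy as np

    def red_hue_percentage(rgb_image, cytoplasm_mask, red_threshold=150):
        # Percent of cytoplasm pixels whose red channel dominates; rgb_image
        # is an (H, W, 3) uint8 array and cytoplasm_mask a boolean array
        # marking the selected macrophage cytoplasm
        r = rgb_image[..., 0].astype(int)
        g = rgb_image[..., 1].astype(int)
        b = rgb_image[..., 2].astype(int)
        red = (r > red_threshold) & (r > g) & (r > b) & cytoplasm_mask
        return 100.0 * red.sum() / max(cytoplasm_mask.sum(), 1)

    # Hypothetical 4x4 image with a 2x2 cytoplasm region and one red pixel
    img = np.zeros((4, 4, 3), dtype=np.uint8)
    img[0, 0] = (200, 40, 40)
    mask = np.zeros((4, 4), dtype=bool)
    mask[:2, :2] = True
    print(red_hue_percentage(img, mask))  # 25.0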
To study macrophage efferocytosis, we prospectively recruited 17 of the 103 COPD subjects and 8 healthy controls. These subjects underwent clinical characterisation and donated blood to generate monocyte-derived macrophages (MDM). All subjects gave written informed consent and the study was approved by the Leicestershire, Northamptonshire and Rutland local ethics committee.
Eosinophil purification and induction of apoptosis
Eosinophils were immunomagnetically purified from peripheral blood 2 days before co-culture with MDM, as described previously [16].
MDM efferocytosis of eosinophils
Apoptotic eosinophils were added to MDM at a 1:5 ratio and incubated for 120 minutes in the same medium and conditions used to culture MDM [16]. Cells were then fixed and permeabilised with 4% paraformaldehyde and 0.1% saponin. Immunofluorescence staining was carried out as previously described [16] with mouse monoclonal anti-human ECP (Diagnostic Development, Sweden) indirectly conjugated with RPE (Dako, Denmark) and directly conjugated CD68-FITC (Dako). Efferocytosis was quantified in 100 macrophages per donor, and the percentage of MDM that had fully ingested or were engulfing eosinophils was recorded. Cytospins were prepared from the efferocytosis experiments and stained as per the sputum slides, and the % area of MDM red hue was measured before and after feeding with eosinophils to validate that red hue increases after ingestion of eosinophils. All slides were assessed by a single blinded observer. The observer was blinded during the counting of MDM efferocytosis from the captured images.
Statistical analysis
GraphPad Prism version 6 (GraphPad, San Diego) and IBM SPSS version 20 (SPSS, Inc., Chicago) were used to perform the statistical analysis. Mean (standard error of the mean [SEM]) was used to present parametric data, median (interquartile range [IQR]) for non-parametric data, and geometric mean (95% confidence interval) for data that were log-normally distributed. Comparisons between groups used the unpaired t-test or Mann-Whitney test for parametric or non-parametric data, respectively. Comparisons across groups were assessed by one-way analysis of variance (ANOVA) with Tukey pair-wise comparisons or the Kruskal-Wallis test with Dunn's multiple pair-wise comparisons for parametric and non-parametric data, respectively. Chi-square or Fisher's exact tests, as appropriate, were used to assess categorical data. Spearman rank correlation coefficients were used to assess correlations. A p value less than 0.05 was considered statistically significant.
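Although the original analysis used GraphPad Prism and SPSS, the non-parametric tests described above can be illustrated with SciPy; the values below are invented and do not correspond to the study data.

    import numpy as np
    from scipy import stats

    # Invented efferocytosis percentages for four groups
    a = np.array([71.0, 70.0, 84.0, 75.0])
    b = np.array([88.0, 85.0, 90.0, 86.0])
    c = np.array([87.0, 83.0, 91.0, 89.0])
    d = np.array([92.0, 90.0, 88.0, 93.0])

    # Kruskal-Wallis test across the four groups
    h, p = stats.kruskal(a, b, c, d)
    print(f"Kruskal-Wallis H = {h:.2f}, p = {p:.3f}")

    # Spearman rank correlation, e.g., efferocytosis vs. exacerbations/year
    eff = np.array([71.0, 75.0, 86.0, 90.0, 92.0])
    exac = np.array([4, 3, 2, 1, 0])
    rho, p = stats.spearmanr(eff, exac)
    print(f"Spearman r = {rho:.2f}, p = {p:.3f}")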
Results
The 103 COPD subjects were categorised into four groups: A-low red hue, high area under the curve sputum eosinophil count (n = 10); B-high red hue, high sputum eosinophils (n = 16); C-low red hue, low sputum eosinophils (n = 19); and D-high red hue, low sputum eosinophils (n = 58) (Figure 1). The distributions of the macrophage red hue and sputum eosinophils for the 4 groups are as shown (Figure 2a), and example cytospins for each group are as illustrated (Figure 2b). The baseline clinical characteristics of the groups are as shown (Table 1). There were no significant differences in age, gender, health status, symptoms, use of inhaled corticosteroids, or exacerbation frequency between the 4 groups. Lung function was not significantly different between groups; however, group A had a greater proportion of subjects with GOLD stage 2 and fewer with GOLD stage 3 than the other groups. The peripheral blood eosinophil count was elevated in groups A and B. Positive bacterial culture was lower in groups A (10%) and B (6%) (the eosinophilic groups) compared to C (44%) and D (21%) (the non-eosinophilic groups) (p = 0.04), although there was no difference in total bacterial colony-forming units (Figure 2c). Health status and symptoms worsened at exacerbation in all groups, but there were no differences between groups (Table 2).
The clinical characteristics of the COPD subjects and healthy volunteers who provided blood to generate MDM are as shown (Table 3). There were no significant differences in age, gender, lung function, or dose of inhaled corticosteroids between the 4 COPD groups (ANOVA p = 0.26). Examples of MDM either undergoing eosinophil efferocytosis or not are shown (Figure 3a). As expected, MDM efferocytosis of eosinophils was significantly correlated with sputum macrophage cytoplasmic red hue (Spearman r = 0.54; p = 0.027). Furthermore, there was a significant increase in the mean (SEM) % area of MDM red hue from 0.5 (0.2)% before to 7.1 (0.7)% following COPD-derived MDM feeding with eosinophils (p < 0.0001) (Figure 3e). The mean (SEM) % area of MDM red hue after feeding with eosinophils was significantly higher in control subjects compared to those with COPD (10.3 (1.5)% versus 5.8 (0.6)%; p = 0.027).
The median (IQR) proportion of MDM that efferocytosed eosinophils was impaired in COPD patients (86 [75 to 92]%) compared to controls (93 [88 to 96]%) (p = 0.028) (Figure 3b). Even though the healthy controls were younger than the subjects with COPD, there was no significant difference in efferocytosis of eosinophils by macrophages between the 5 control subjects <60 years old and the 3 ≥60 years old (p = 0.14). In addition, there was no significant correlation between MDM efferocytosis of eosinophils and age in the COPD subjects, or in the COPD subjects and healthy controls combined. Likewise, there was no relationship between smoking status or pack-years and MDM efferocytosis of eosinophils in COPD subjects. There were no significant correlations between health status or symptoms and MDM efferocytosis of eosinophils. The fall in FEV1 at exacerbation was greater in those COPD subjects with <90% (n = 8) versus ≥90% MDM eosinophil efferocytosis (n = 5). The MDM efferocytosis of eosinophils was significantly different between the 4 COPD subgroups and healthy controls (Kruskal-Wallis p = 0.048). Post-hoc pairwise comparisons demonstrated that the impairment of efferocytosis was greatest in group A, those subjects with high sputum eosinophils and low red hue (71 [70 to 84]%), which was significantly lower than in the healthy controls (p = 0.0295) (Figure 3b), but not significantly different from the other COPD groups.
Discussion
Here we report for the first time that macrophage efferocytosis of eosinophils is impaired in COPD and is related to increased exacerbation frequency and severity. We have assessed MDM efferocytosis of eosinophils directly in vitro and indirectly in vivo by the assessment of the sputum macrophage cytoplasmic red hue. This index of macrophage function, together with the sputum eosinophil count measured over time, allowed us to identify 4 subgroups of COPD segmented into those with high or low sputum eosinophil counts and high or low macrophage red hue. The group with a high sputum eosinophil count and low red hue in vivo is predicted to represent those subjects with the greatest impairment in MDM efferocytosis of eosinophils, which was confirmed in vitro. This group had the greatest fall in lung function during exacerbations. Exacerbation frequency was associated with MDM efferocytosis of eosinophils, and impairment was greatest in those with evidence of frequent exacerbations in the last year. Taken together, our findings suggest that macrophage dysfunction in COPD might play an important role in the persistence of eosinophilic inflammation in some subjects, which in turn is related to the severity and frequency of exacerbations.

This is the first study in COPD to apply the cytoplasmic macrophage red hue, an index validated in asthma as a specific biomarker of exposure to and efferocytosis of eosinophils, which had excellent interobserver and intraobserver repeatability [16]. A high cytoplasmic red hue suggests that the airway macrophages are both exposed to eosinophils and able to competently efferocytose apoptotic cells [16]. A low red hue suggests either a lack of exposure over time or impaired eosinophil efferocytosis. Indeed, macrophage red hue was correlated with MDM efferocytosis, and MDM red hue increased substantially following eosinophil efferocytosis. The application of this index reveals hitherto unrecognised features of COPD. Firstly, the majority of subjects had high red hue and normal sputum eosinophil counts, suggesting that the contribution of the eosinophil to the total inflammatory burden in COPD might be under-estimated. Indeed, only 18% of subjects had neither high sputum eosinophils nor high red hue. Secondly, some subjects that are exposed to eosinophils have impaired eosinophil clearance. This was confirmed by direct assessment of MDM efferocytosis of eosinophils, suggesting that it is an intrinsic abnormality observed in peripheral blood-derived cells rather than secondary to the airway environment. This failure of macrophage function in COPD adds to the growing evidence of impairment in macrophage efferocytosis and phagocytosis. To date this has been considered a phenomenon that promotes bacterial colonisation [9,10,12,14]. However, we have identified that impaired efferocytosis can occur in subjects with high sputum eosinophil counts and low bacterial colonisation. Therefore, a more plausible explanation is that impaired macrophage function acts as an amplifier of the underlying abnormal innate immunity or inflammatory profile rather than being a primary event. This also suggests that strategies that promote macrophage function [6,19] might be best targeted at those subjects with evidence of dysfunction rather than those with bacterial colonisation per se.
Interestingly, the subjects who had persistently high sputum eosinophilia with a low red hue were those who had the greatest impairment in MDM efferocytosis of eosinophils and the greatest fall in FEV1 at exacerbations. Additionally, there was a strong relationship between exacerbation frequency and MDM efferocytosis of eosinophils. Indeed, macrophage function was impaired in those subjects with frequent exacerbations (≥2/year) compared to those without [20]. However, this relationship did not persist retrospectively beyond the year prior to the assessment of MDM efferocytosis, suggesting that it is more variable over time. Future studies will need to examine the stability of this relationship. Notwithstanding this potential limitation, this group might represent those who are at greatest risk and would warrant further eosinophil-directed therapy such as anti-IL-5, which would likely reduce the overall eosinophil burden as observed in asthma [21,22].
One of the strengths of our study was our ability to use data from a longitudinal observational study of COPD subjects extensively studied in the stable state and at exacerbations. Although this was a strength for the comparison of macrophage cytoplasmic red hue and sputum eosinophil counts across groups, one important criticism is that the number of subjects in each of the COPD subgroups that were recruited to study MDM efferocytosis of eosinophils was small. This was a consequence of our inability to recall many of these subjects. Additionally, although the inter- and intra-observer variability of the sputum macrophage red hue is excellent, the reproducibility of this measure, and how it varies dynamically with the sputum eosinophil count over time and in response to exacerbations, is unknown and requires further study. Another potential drawback is that our normal controls were not well matched to the COPD subjects. The normal controls were younger and had a lower smoking pack-year history than the COPD subjects. However, we have shown that age and smoking history, albeit in contrast to previous studies [23], were not correlated with MDM eosinophil efferocytosis. Nevertheless, further comparisons in larger populations of healthy volunteers are required to explore the effects of age upon macrophage function. The effects observed in the COPD subjects were also independent of age; therefore, although age might contribute to macrophage dysfunction, we do not think it is likely to exert a major influence upon the observations we have made here. Additionally, the normal ranges for macrophage red hue were derived from our earlier work. We agree that this does not represent a large population study of the normal range of macrophage red hue, but it does represent the largest study to date. Another minor potential critique is that eosinophils were obtained from subjects with asthma and/or other allergic diseases. However, there was a strong correlation between red hue and efferocytosis in our study irrespective of the donor's diagnosis. Thus, the differences observed here between health and disease are likely to be real, but need to be interpreted with caution. Together these limitations underscore the need for a larger multicentre study, including assessment of alveolar macrophages, to validate and replicate our findings. Furthermore, we have not explored the underlying mechanisms driving the macrophage dysfunction observed here, and this needs to be addressed in future studies.
"year": 2014,
"sha1": "0a01af56dbd576fa0d3deb092ff8ccd2dfc0668b",
"oa_license": "CCBY",
"oa_url": "https://bmcpulmmed.biomedcentral.com/track/pdf/10.1186/1471-2466-14-112",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "18bae3d9cf2b49728cd6c0f38cf8b394e07c3be7",
"s2fieldsofstudy": [
"Medicine",
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
270604645 | pes2o/s2orc | v3-fos-license | Reproducible Research Practices in Magnetic Resonance Neuroimaging: A Review Informed by Advanced Language Models
MRI has progressed significantly with the introduction of advanced computational methods and novel imaging techniques, but their wider adoption hinges on their reproducibility. This concise review synthesizes reproducible research insights from recent MRI articles to examine the current state of reproducibility in neuroimaging, highlighting key trends and challenges. It also provides a custom generative pretrained transformer (GPT) model, designed specifically to aid in the automated analysis and synthesis of information pertaining to the reproducibility insights associated with the articles at the core of this review.
Introduction
Reproducibility is a cornerstone of scientific inquiry, particularly relevant for data-intensive and computationally demanding fields of research, such as MRI. 1 Ensuring reproducibility thus poses a unique set of challenges and necessitates the diligent application of methods that foster transparency, verification, and interoperability of research findings.
While numerous articles have addressed the reproducibility of clinical MRI studies, few have looked at the reproducibility of the MRI methodology underpinning these studies. This is understandable given that the MRI development community is smaller, driven by engineers and physicists, with modest representation from clinicians and statisticians. However, performing a thorough meta-analysis or a systematic review of these studies in the context of reproducibility presents challenges due to: (i) the diversity in study designs across various MRI development subfields, and (ii) the absence of standardized statistics to gauge reproducibility performance.
Considering these challenges, we opted to conduct a mini-review leveraging the semantic extraction capabilities of advanced language models. Specifically, we trained a custom generative pretrained transformer (GPT) model using a knowledge base constructed for a selection of articles, coupled with web scraping of content pertaining to their reproducibility. With this mini-review, we aim to examine the current landscape of reproducible research practices across various MRI studies, drawing attention to common strategies, tools, and repositories used to achieve reproducible outcomes. We anticipate that this approach provides a living review that can be automatically updated to accommodate the continuously expanding breadth of methodologies, helping us identify commonalities and discrepancies across studies.
Methodology
In distilling reproducibility insights powered by GPT, this review centered on 31 research articles published in the journal Magnetic Resonance in Medicine (MRM), chosen by the editor for their dedication to enhancing reproducibility in MRI.Since 2020, the journal has published interviews with the authors of these selected publications, discussing the tools and practices they used to bolster the reproducibility of their findings (available at https://blog.ismrm.org/category/highlights).
Mapping selected articles in the semantic landscape of reproducibility
We performed a literature search to identify where these studies fall within the broader literature of reproducible neuroimaging. To retrieve articles dedicated to reproducibility in MRI, we utilized the Semantic Scholar API.2 Among the 1098 articles included in the Semantic Scholar records, SPECTER vector embeddings3 were available for 612 articles, representing the semantics of the publicly accessible content in abstracts and titles. For these articles, the high-dimensional semantic information captured by the word embeddings was visualized using the uniform manifold approximation and projection (UMAP) method4 (Fig. 1). This visualization allowed inspection of the semantic clustering of the articles, facilitating a deeper understanding of their contextual placement within the reproducibility landscape. In addition, to illustrate the hierarchical clustering of the selected studies within the broader literature, a PRISMA (Preferred Reporting Items for Systematic reviews and Meta-Analyses) diagram is provided (Fig. 2).
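A minimal Python sketch of this retrieval-and-projection step is shown below; the endpoint and field names follow the public Semantic Scholar graph API as we understand it, and should be treated as assumptions rather than the exact scripts used in this review.

    import numpy as np
    import requests
    import umap  # pip install umap-learn

    API = "https://api.semanticscholar.org/graph/v1/paper/batch"

    def fetch_specter_vectors(paper_ids):
        # Fetch SPECTER title/abstract embeddings for a batch of paper IDs
        resp = requests.post(API, params={"fields": "embedding"},
                             json={"ids": paper_ids}, timeout=30)
        resp.raise_for_status()
        records = [p for p in resp.json() if p and p.get("embedding")]
        return np.array([p["embedding"]["vector"] for p in records])

    def project_2d(vectors):
        # Project high-dimensional embeddings to 2-D for cluster inspection
        return umap.UMAP(n_components=2, random_state=42).fit_transform(vectors)

    # With the 612 available embeddings, project_2d(...) yields a map
    # analogous to Fig. 1.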
Creating a knowledge base for a custom GPT
We created a custom GPT model, designed specifically to assist in the analysis and synthesis of information pertaining to the 31 reproducible research insights. The knowledge base of this retrieval-augmented generation framework incorporates GPT-4 summaries of the abstracts from the 31 MRM articles, merged with their respective MRM Highlights interviews, as well as the titles and keywords associated with each article (refer to Appendix A). This compilation was assembled via API calls to OpenAI on November 23, 2023, using the gpt-4-1106-preview model. Please visit https://preprint.neurolibre.org/10.55458/neurolibre.00021/gpt_insights.html to explore the respective Python (v3.8) scripts and to create newer versions of the summary file (requires an OpenAI authorization token).
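A condensed sketch of such an API call using the openai Python client (v1.x) is shown below; the system prompt is a placeholder, as the actual prompts used are given in Appendix B.

    from openai import OpenAI  # pip install openai>=1.0

    client = OpenAI()  # expects OPENAI_API_KEY in the environment

    def summarize_article(abstract: str, interview: str) -> str:
        # Ask the gpt-4-1106-preview model for a reproducibility-focused
        # summary of one article plus its MRM Highlights interview
        response = client.chat.completions.create(
            model="gpt-4-1106-preview",
            messages=[
                {"role": "system",
                 "content": "Summarize the study and its reproducibility "
                            "practices in four numbered points."},
                {"role": "user",
                 "content": f"Abstract:\n{abstract}\n\nInterview:\n{interview}"},
            ],
        )
        return response.choices[0].message.content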
This specialized GPT, named RRInsights, is tailored to process and interpret the provided data in the context of reproducibility; for the system prompts, please see Appendix B.
Contextual placement of the selected articles in the landscape of reproducibility
The MRI systems cluster contained the majority of the selected MRM articles (23/31), with only two publications appearing in a different journal.5,6 Additionally, this cluster was sufficiently distinct from the rest of the reproducibility literature, as can be seen from the location of the dark red dots in Fig. 1. A few other selected articles (8/31) were found at the intersection of the MRI systems, deep learning, and data/workflows clusters, which in total span 103 articles. Since the custom GPT model was trained on the 31 selected MRM articles (red dots), Fig. 1 serves as a map for inferring the topics where RRInsights is more likely to be context-aware.
Custom GPT for reproducibility insights
Through its advanced natural language processing capabilities, RRInsights can efficiently analyze the scoped literature, extract key insights, and generate comprehensive overviews of research papers focusing on MRI technology. The custom GPT is available at https://chat.openai.com/g/g-5uDwBlnx4-rrinsights (requires a subscription as of May 2024). Figure 3 presents an example interaction with RRInsights, creating a summary of the vendor-neutral solutions found in the knowledge base. The model's response to this inquiry demonstrates that RRInsights is context-aware, adept at pinpointing relevant information in the knowledge base, and offers interpretations within the framework of reproducibility. The following subsections are written based on interactions with RRInsights.
GPT-Powered Summary of Reproducible Magnetic Resonance Neuroimaging
Most MRI development is done on commercial systems using proprietary hardware and software. Peeking inside the black boxes that generate the images is nontrivial, but it is essential for promoting reproducibility in MRI.
Quantitative MRI articles are powerful showcases of reproducible research practices, as they usually come with fitting models that can be shared on public code repositories.[18][19][20] Transparent reconstruction and analysis pipelines are also prominently featured in the reproducible research insights, including methods for real-time MRI,21 parallel imaging,22 large-scale volumetric dynamic imaging,23 pharmacokinetic modeling of dynamic contrast-enhanced MRI (DCE-MRI),24 phase unwrapping,25 hyperpolarized MRI,26 Dixon imaging,27 and X-nuclei imaging.28 Deep learning is increasingly present in the reproducibility conversation, as MRI researchers are trying to shine a light on AI-driven workflows for phase-focused applications,29 CEST,14 diffusion-weighted imaging,30 myelin water imaging,18 B1 estimation,31 and tissue segmentation.32

Reproducibility of MRI hardware is still in its infancy, but a recent study integrated RF coils with commercial field cameras for ultrahigh-field MRI, exemplifying the coupling of hardware advancements with software solutions. The authors shared the design CAD files, performance data, and image reconstruction code, ensuring that hardware innovations can be reproduced and utilized by other researchers.33

Finally, vendor-neutral pulse sequences are putting interoperability and transparency at the center of the reproducibility landscape. Pulseq and gammaSTAR are vendor-neutral platforms enabling the creation of MRI pulse sequences that are compatible with three major MRI vendors.34,35 In addition, VENUS (vendor-neutral sequences) is an end-to-end vendor-neutral workflow that was shown to reduce inter-vendor variability in quantitative MRI measurements of myelin, thereby strengthening the reproducibility of quantitative MRI research and facilitating multicenter clinical trials.36,37

Data Sharing

There is a growing number of studies providing access to raw imaging data, preprocessing pipelines, and post-analysis results.
Repositories like Zenodo, XNAT, and the Open Science Framework serve as vital resources for housing and curating MRI data. Data sharing is also made easier thanks to unified data representations, such as the International Society for Magnetic Resonance in Medicine (ISMRM) raw data format38 for standardizing k-space data, and the Brain Imaging Data Structure for organizing complex datasets39 and their derivatives.40

Code Sharing

Software repositories such as GitHub and GitLab are making it easier to centralize processing routines and to adopt version control, unit tests, and other robust software development practices. The introduction of tools for automated quality assurance (QA) processes, as seen in the development of platforms like PreQual for diffusion-weighted imaging (DWI) analysis,12 signifies an emphasis on interoperability and standardization.
The increasing adoption of containerization and virtual environments makes workflows transparent and easy to execute. Tools like Docker and Singularity are used to package computing environments, making them portable and reproducible across different systems. Studies employing these tools enable MRI researchers to replicate computational processing pipelines without dealing with dependency issues in local computational environments.32,35,36

The rise of machine learning and artificial intelligence in MRI necessitates rigorous evaluation to ensure reproducibility. Studies that use deep learning are beginning to supplement their methodological descriptions with the open-source code, trained models, and simulation tools that underpin their algorithms. Algorithms such as DeepCEST, developed for B1 inhomogeneity correction at 7T, showcase how clinical research can be improved by reproducible research practices.14 Sharing these algorithms allows others to perform direct comparisons and apply them to new datasets.

Fig. 3: An example of user interaction with the RRInsights custom GPT. The model efficiently retrieves relevant studies concerning the requested content, such as vendor-neutral solutions. It provides summaries that highlight thematic similarities, with a particular focus on reproducibility aspects. GPT, generative pretrained transformer.
[34][35][36] For a long time, MRI vendors have been reluctant to open up their systems,41 but standardized phantoms42 are creating benchmarks that require transparency and reproducibility.45

Dissemination

Reproducibility is also bolstered by interactive documentation and tools such as Jupyter Notebooks, allowing for dynamic presentation and hands-on engagement with data and methods. Platforms incorporating such interactive elements are being utilized with greater frequency, providing real-time demonstration of analysis techniques and enabling peer-led validation. Resources such as MRHub (https://ismrm.github.io/mrhub), MRPub (https://ismrm.github.io/mrpub), Open Source Imaging (https://www.opensourceimaging.org/projects/), and NeuroLibre46 serve as a gateway to a wide range of tools and tutorials that promote reproducibility in MRI. The curation of these resources is essential for ensuring that publications featuring Jupyter Notebooks and R Markdown files47 remain executable and properly archived.47,48

Discussion and Future Directions

The progress toward reproducibility in MRI research points to a distinct cultural shift in the scientific community. The move toward open-access publishing, code-sharing platforms, and data repositories reflects a concerted effort to uphold the reproducibility of complex imaging studies. Adopting containerization technologies, pushing for standardization, and consistently focusing on quality assurance are key drivers that will continue to improve reproducibility standards in MRI research.
Figure 4 is a word cloud generated from the articles included in this review, highlighting the concepts and vocabulary that are driving reproducibility in MRI. As can be seen from the figure, the components of reproducibility in MRI research are multifaceted, integrating not just data and code but also analytical pipelines and hardware configurations. The shift toward comprehensive sharing is motivated both by a scientific ethic of transparency and by the practical need for rigorous validation of complex methodologies.49,50 However, this shift is not without challenges.51 Variations in data acquisition and analysis methodologies limit cross-study comparisons. Sensitivity to software and hardware versions can impede direct reproducibility. Privacy concerns and data protection regulations can be barriers to data sharing, particularly with clinical images. While challenges persist, steps are being taken by individual researchers and institutions to prioritize reproducibility. Moving forward, the MRI community should work collectively to overcome barriers, institutionalize reproducible practices, and constructively address data sharing concerns to further the discipline's progress.
The initiatives and tools identified in this review serve as a blueprint for future studies to replicate successful practices, safeguard against bias, and accelerate neuroscientific discovery.As MRI research continues to advance, upholding the principles of reproducibility will be essential to maintaining the integrity and translational potential of its findings.
We also hope that our methodology in generating this review will pave the way for future studies that leverage large language models to create unique literature insights. In particular, we believe that the RRInsights GPT can serve as a blueprint for generating a scoping review52 and inspire other scientists to experiment with the format of scientific publications in the age of AI.
2. Reproducibility was ensured by sharing the project's code on GitHub, including a requirements file to specify code dependencies, and by detailing the training/testing processes for their deep learning models.
3. The decisions on data/code sharing were guided by lab and institutional policies, as well as data privacy considerations, and active steps were taken to anonymize data sets and secure necessary permissions for sharing.

---------------------------

1. This study developed and successfully implemented a Fourier-based decomposition method using a voxel-GRAPPA (vGRAPPA) approach to simultaneously acquire and discern spectroscopic data from two MRS voxels within the brain at 7T, achieved by a multi-banded 2 spin-echo, full intensity acquired localized (2SPECIAL) sequence.
2. Reproducibility was underscored by sharing a meticulously documented GitLab code repository that included example data, the vGRAPPA algorithm, and a demo script, inviting external researchers to assess and build on the work.
3. From inception, there was a clear intent to share the code and data, inspired by a broader culture of transparency and accountability in using public funds for research, as well as initiatives from scientific organizations that encourage reproducible practices.
4. To further stimulate the sharing of open-source materials within the MRI community, the researchers suggest giving published data sets and code equivalent scientific merit to traditional publications, which would acknowledge the time and effort required to produce high-quality, reusable research outputs.
---------------------------

easy application and testing of the research outcomes by the broader scientific community.
3. The researchers recognized the increased visibility and potential for adoption that comes with sharing code, leading them to commit to making their diffusion MRI processing tools readily available within their evolving toolbox.
4. They believe that institutional recognition for software development and maintenance, alongside traditional academic metrics such as publication numbers, could encourage more researchers to invest time in creating open-source code, which is currently underestimated in the evaluation of researchers' contributions.
---------------------------

1. The study developed an algorithm to reconstruct undersampled parallel transmit field maps efficiently for both body and brain MRI without the need for calibration data, using a joint transmit and receive low-rank tensor completion approach.
2. Reproducibility was facilitated by sharing all scripts and data necessary to reproduce every figure published in the paper, enabling independent validation and educational opportunities for others.
3. The researchers had a proactive approach to sharing code and data from the start of the project, which allowed for thorough documentation and adherence to good practices. They also ensured data protection compliance by re-organizing brain dataset releases to avoid reconstruction of deanonymizing facial features.
4. They suggest that rewarding and citing open-source contributions, as well as potentially creating a dedicated category for papers detailing community resource releases, could further encourage the MRI community to contribute open-source code alongside research papers. Additionally, they express interest in using container tools like Docker or Singularity for managing dependencies in future work.
---------------------------

1. The study proposed a model-based framework to correct ΔB1+ (B1 inhomogeneity) errors in magnetization transfer saturation (MTsat) and inhomogeneous magnetization transfer (ihMT) saturation maps, using an R1 and B1+ map alongside numerical simulations of the sequence.
2. Reproducibility was supported by sharing the code associated with the paper, featuring extensive and detailed documentation within the code and on external platforms such as GitHub.
3. The code release aligns with the lab's open science practices, supported by their institution, particularly after paper publication and peer review to ensure meaningful and validated outputs are distributed.
4. By releasing their code, the authors aimed to facilitate the MRI community's ability to implement B1 inhomogeneity corrections, with the benefit of aiding peer validation, enhancing educational value, and fostering good internal documentation and archiving practices. They also expressed an interest in exploring interactive scripting tools like Jupyter Notebooks in future work.
--------------------------- Paper number: 12 Year: 2020 Title: Myelin water fraction estimation using small-tip fast recovery MRI Keywords: Myelin Water Fraction, Small-Tip Fast Recovery MRI, Optimization, Brain Imaging Study Achievements and Reproducibility Summary: 1. The study demonstrates the feasibility of using an optimized set of small-tip fast recovery (STFR) MRI scans for rapidly estimating the myelin water fraction (MWF) in the brain, providing an efficient approach to brain imaging. 2. Reproducibility was prioritized by sharing all the code used in the research, accompanied by scripts that allow for the reproduction of every figure in the paper, fostering transparency and enabling peers to build upon their work. 3. Sharing was an integral component from the beginning of the project, with the understanding that releasing code and data would contribute to educational efforts and facilitate validation by others in the field. 4. The researchers suggest embedding reminders about code/data sharing within the manuscript submission system and considering code/data citations for open-source contributions, thereby encouraging more MRI researchers to share their work openly. They also made use of Julia as a programming language for its interactive nature and swift execution, and expressed an interest in enhancing documentation and exploring Jupyter Notebooks for future projects.
--------------------------- 1. The study achieved the application and demonstration of an artificial neural network (ANN), specifically a convolutional neural network, for real-time processing of myelin water imaging (MWI), presenting a significant advance in speed without compromising accuracy. 2. Reproducibility was prioritized by sharing not only the ANN code but also the pre-trained deep learning network, facilitating direct application and comparative studies by other researchers. 3. The researchers adopted a proactive approach to sharing, embedding this practice in the early stages of their research to enhance collaborative opportunities and research validation. 4. While sharing the code and the trained models was routine, the researchers faced challenges in data sharing due to institutional review board (IRB) policies, which they now aim to address in future projects to further support reproducible research practices. They encourage using their trained model as-is or as a seed for transfer learning to adapt to other experimental setups.
2. The reproducibility of this research was ensured by sharing the code for the framework, along with Dockerfiles that reproduce the necessary coding environment, which allows peers to execute the software under the same conditions the authors used. 3. The decision to share code and data was inherent from the start, motivated by a wish to enhance the framework's growth through use by the broader community and to invite scrutiny and feedback for iterative improvement. 4. The authors mentioned future interests in decentralized web technologies such as the Interplanetary File System (IPFS) and blockchain for robust reproducibility and easier contributions by individuals. They also encouraged researchers developing software to consider using web technologies for user interface design and to explore Docker as a solution to ensure software is installed correctly.
--------------------------- 1. The study developed an end-to-end workflow that begins with a vendor-neutral acquisition protocol and demonstrated that using vendor-neutral sequences reduces intervendor variability in quantities like T1, MTR (magnetization transfer ratio), and MTsat (magnetization transfer saturation) measurements.
--------------------------- 1. This study developed a deep neural network (DNN) for fitting intravoxel incoherent motion (IVIM) models to diffusion-weighted magnetic resonance imaging (DW-MRI) data and demonstrated successful training for IVIM parameter estimation. 2. Reproducibility is supported through the sharing of the code and the pre-trained network, enabling the DNN to be applied directly or used as a foundation for further research, improving accessibility and verification of results. 3. The authors chose to share their code from the early stages of the project, with a commitment to transparency and open science that allows for the reproducibility of methods and results. 4. They also shared a Jupyter Notebook as a demo to help others interactively understand and execute their analysis, illustrating a commitment to open-source practices and improving the communication of complex computational methods to a broader audience. They believe that providing detailed documentation and sharing clinical data responsibly will facilitate open science and reduce the need for duplicative data collection efforts.
--------------------------- 1. The study achieved the development of a rapid and accurate MRI phase unwrapping technique called ROMEO, designed to address the challenges of high magnetic field strengths and the presence of metal implants or post-operative cavities by incorporating both spatial and temporal coherence information. 2. Reproducibility is promoted through sharing open-source code, accompanying executables for different operating systems, and detailed documentation, all of which enable the application of ROMEO by other researchers and its verification against traditional methods. 3. The decision to share code and data was made from the outset to ensure transparent validation of the results and to facilitate broader dissemination and feedback that enhance the method's usability and performance. 4. The authors highlighted the importance of sharing not just algorithms but also tools and the ethos of reviewing the results, rather than relying solely on selected images in manuscript figures. They advocate for comprehensive documentation, realistic input data for testing, visualization of significant algorithm steps, and cooperative development with thorough testing to ensure long-term usability and enhancement of the code.
--------------------------- 1. The study introduced the Generalized Bloch model, a theoretical construct to describe the pulsed magnetization transfer process in MRI, particularly for nuclei in semi-solid pools and their interaction with nuclei in water. 2. Reproducibility was enhanced by sharing the full implementation code, datasets, and figure reproduction scripts, permitting users to replicate the research findings and utilize the model for their experiments. 3. A commitment to sharing materials as part of the publication process was evident, with the understanding that such practices not only benefit the scientific community by promoting collaboration and progress but also assist the authors in gaining credibility and visibility for their work. 4. Alongside standard reproducibility practices, the authors went a step further by creating a comprehensive tutorial with interactive figures and MyBinder links, thus offering a deepened understanding of the Generalized Bloch model and encouraging its practical application and examination by peers. They emphasize learning how to appropriately share larger datasets while addressing privacy concerns as a future goal for reproducible research habits.
--------------------------- 1. The study developed PreQual, an automated processing pipeline designed to conduct preprocessing and quality assurance of diffusion weighted MRI images, which integrates a variety of standard tools and produces comprehensive quality reports. 2. Reproducibility is augmented by open-source sharing of the pipeline code and containerizing the environment using Singularity, enabling users across the MRI community to implement PreQual easily and reliably. 3. The commitment to open science principles guided the decision to openly share the code and was reflected in the research practices of the lab and institution, which emphasize accessibility and collaboration in clinical and basic science studies. 4. The authors recommend adopting containerization as standard practice for newly published pipelines to facilitate reproducibility and believe it is essential for advancing reproducible science. They plan to continue making their tools open-source, and in the future, they aspire to share data as widely as possible within the bounds of privacy protection.
--------------------------- 1. The study developed a pipeline that improves the reproducibility of R1 mapping at 7T MRI by using a new model to compensate for inter-scan motion artifacts. 2. The study's reproducibility is supported by sharing the pipeline code as part of an open-source toolbox (hMRI) and providing scripts that allow the reproduction of all simulation figures. 3. A highly committed approach to open science from both the lab and institution ensures that code sharing is a standard practice for all publications. 4. The researchers emphasize starting with the intention to release code when drafting the publication, as this promotes careful and reproducible research practices and benefits both the scientific community and the authors.
--------------------------- 1. The study developed a dual bandwidth rapid acquisition with relaxation enhancement (RARE) Dixon imaging technique, which is used to improve the signal-to-noise ratio (SNR) in MRI by removing dead times between refocusing pulses and preventing redundant chemical shift encoding. 2. Reproducibility was supported through the sharing of code and interactive figures that help other researchers to understand complex results easily and explore the techniques implemented within the paper. 3. A proactive approach to sharing was demonstrated, with the researchers making their interactive plots available as an intuitive resource for understanding their method's impact. 4. The authors believe in open science practices, planning to continue sharing their work in future publications. They encourage the MRI community to share data/code where beneficial and suggest that peer review could include an evaluation of reproducibility efforts to enhance contribution standards.
--------------------------- 1. The study optimized methods for correcting B1 inhomogeneity artifacts commonly caused by surface RF coils in fast spin-echo (RARE) MRI sequences at ultrahigh magnetic fields. 2. Reproducibility was facilitated by making the correction code and data openly available, so other researchers can apply the methods to their own data and benefit from improved B1 corrections. 3. The decision to share code and data was inspired by the positive practices in the field, following a trend toward transparency. The authors felt collaborating and sharing within the scientific community was not just a responsibility but beneficial for advancement.
Fig. 1 Edge-bundled connectivity of the 612 articles identified by the literature search. A notable cluster (red) is formed by most of the MRM articles that were featured in the reproducible research insights (purple nodes), particularly in the development of MRI systems. Few other selected articles fell at the intersection of MRI systems, deep learning, and workflows (delineated by the dashed line). Notable clusters for other studies (pink) are annotated by gray circles. An interactive version of this figure is available at: https://preprint.neurolibre.org/10.55458/neurolibre.00021. MRM, Magnetic Resonance in Medicine.
Fig. 2 PRISMA flow diagram illustrating the hierarchical relationship between the 31 studies included in the review and the broader literature on reproducible magnetic resonance neuroimaging. Web services and software packages used to automate this process are also displayed (right panel). PRISMA, Preferred Reporting Items for Systematic reviews and Meta-Analyses.
Fig. 4 A word cloud generated from the 31 reproducible research insights published by Magnetic Resonance in Medicine Highlights.
of single breath hyperpolarized 129Xe MRI with dynamic 19F MRI in cystic fibrosis lung disease Keywords: Cystic Fibrosis, Hyperpolarized 129Xe MRI, Dynamic 19F MRI, Ventilation Abnormalities Study Achievements and Reproducibility Summary: 1. The study achieved a quantitative comparison between dynamic 19F MRI and single breath hyperpolarized 129Xe MRI for detecting ventilation abnormalities in subjects with mild CF lung disease, contributing valuable insights into the efficacy of different imaging modalities for CF assessment. 2. Reproducibility was enhanced by sharing the code and data that reproduce several figures, allowing others in the research community to validate and potentially extend the findings. 3. The decision to share came during the submission process, and recognizing the benefits, including feedback and collaboration, the team has been motivated to continue sharing for future publications. 4. The researchers advocate for culture change within the MRI community where data sharing becomes an expectation and plan to explore additional reproducibility tools, such as explanatory videos, to further disseminate their research methods and findings.
--------------------------- Paper number: 3 Year: 2019 Title: Extreme MRI: Large-scale volumetric dynamic imaging from continuous non-gated acquisitions Keywords: Dynamic MRI, Non-gated Acquisition, Large-scale Volumetric Imaging, Pulmonary Imaging Study Achievements and Reproducibility Summary: 1. The study achieved the development of a framework capable of reconstructing large-scale volumetric dynamic MRI images from continuous, non-gated acquisitions, with successful applications demonstrated in pulmonary and DCE imaging. 2. Reproducibility was prioritized through the sharing of code and an interactive Google Colab notebook demo, which allows for easy visualization and manipulation of the reconstruction process. 3. The culture of sharing within UC Berkeley and the research team directly influenced the early decision to share code/data, with the practice of version controlling, documentation, and unit tests facilitating the sharing process. 4. Encouraging open-source contributions in the MRI community via showcases, hands-on sessions, and interviews has proven effective, and the authors intend to continue these efforts while exploring additional platforms like Zenodo for data hosting.
4. To promote a culture of open source code contribution, the researcher recommended MRI challenges where code is a submission requirement and showed an interest in exploring new platforms, such as Weights & Biases, for enhanced tracking and reproducibility of research results.
--------------------------- Paper number: 8 Year: 2022 Title: Fourier-based decomposition for simultaneous 2-voxel MRS acquisition with 2SPECIAL Keywords: 2-Voxel MRS, 2SPECIAL Sequence, vGRAPPA Decomposition, 7T MRI Study Achievements and Reproducibility Summary:
--------------------------- Paper number: 10 Year: 2020 Title: Accelerated calibrationless parallel transmit mapping using joint transmit and receive low-rank tensor completion Keywords: Parallel Transmit Mapping, Accelerated MRI, Calibrationless Imaging, Low-rank Tensor Completion Study Achievements and Reproducibility Summary: 3. A commitment to transparency and the educational value of shared resources motivated the researchers to share their work, with the intention of sharing established from the project's commencement. 4. The researchers endorse efforts like MRM Highlights to inspire open-source contributions and suggest that making code sharing advantageous or essential for researchers' careers could further promote widespread openness in the scientific community. They also express interest in providing code that is broadly compatible across different systems without user modification for future projects.
--------------------------- Paper number: 16 Year: 2020 Title: Portable and platform-independent MR pulse sequence programs Keywords: MRI Sequence Programming, Pulse Sequence, Vendor-Independence, Modular Development Study Achievements and Reproducibility Summary:
2. The reproducibility of the research is strengthened by the sharing of open-source pulse sequences, Docker containers, BIDS-compliant data, and an interactive Jupyter Book containing code that generates the figures from the study. 3. The decision to share code and data was made early in the study, driven by institutional policies and lab culture that prioritize open science and the sharing of research materials. 4. The authors utilized open-source tools to create the vendor-neutral sequences and enable transparent pipelines, making their work highly reproducible and useful for quantitative MRI research and multicenter clinical trials. They suggest that offering incentives might increase contributions of open-source content, and they express interest in developing and sharing interactive dashboards for future work to enhance the reproducibility and visualization of research findings.
--------------------------- Paper number: 18 Year: 2019 Title: Deep Learning How to Fit an Intravoxel Incoherent Motion Model to Diffusion-Weighted MRI Keywords: Deep Neural Network, Intravoxel Incoherent Motion, Diffusion-Weighted MRI, Quantitative Analysis Study Achievements and Reproducibility Summary:
--------------------------- Title: Phase unwrapping with a rapid open-source minimum spanning tree algorithm (ROMEO) Keywords: Phase Unwrapping, Minimum Spanning Tree, Quantitative MRI, High Field Strength MRI Study Achievements and Reproducibility Summary:
4. Future directions aim to increase the flexibility of data sharing as part of informed consent, which would enable the release of example cases alongside the tools. The authors recommend Docker containers to overcome environment setup challenges and ensure portability and consistency across different systems.
--------------------------- Paper number: 21 Year: 2021 Title: Generalized Bloch model: A theory for pulsed magnetization transfer Keywords: Magnetization Transfer, Bloch Model, Classical Model, Semi-solid Spin Pool Study Achievements and Reproducibility Summary: 3. A strong commitment to open science, beginning with public development on GitHub and continuing through the final stages of the research, facilitated the sharing of materials alongside the published article. 4. The authors advocate that increasing recognition of code as a crucial scientific output, alongside traditional publications, could promote the sharing of open-source content. They also welcome contributions to their toolbox and suggest that researchers ensure their code can work on various systems and is free of local dependencies for true reproducibility.
--------------------------- Paper number: 23 Year: 2020 Title: PreQual: An automated pipeline for integrated preprocessing and quality assurance of diffusion weighted MRI images Keywords: Diffusion Weighted Imaging, Automated Processing Pipeline, Quality Assurance, Deep Learning Study Achievements and Reproducibility Summary: 3. A commitment to transparent and reproducible research guided the decision to share code from the inception of the project, with a lab culture supportive of open sharing aligned with the trend toward open science. 4. The authors encourage code sharing as a means to validate methods and findings, and they consider sharing code and data an essential component of facilitating scientific progress and community collaboration. They plan to adopt more consistent practices for structuring directories and naming files in future research to make reproducibility even more straightforward.
--------------------------- Paper number: 28 Year: 2021 Title: Deep neural network based CEST and AREX processing: Application in imaging a model of Alzheimer's disease at 3T Keywords: Deep Learning, CEST MRI, AREX, Alzheimer's Imaging Study Achievements and Reproducibility Summary: 1. The study optimized and applied a deep learning-based pipeline (deepCEST and deepAREX) for CEST MRI imaging, improving analysis for brains affected by Alzheimer's disease at 3T. 2. Reproducibility was fostered by sharing the code and sample data via an easily accessible platform, demonstrating the implementation's effectiveness and enabling others to apply the techniques to their datasets. 3. Sharing was motivated to promote the deep learning-based processing method for CEST/AREX and to allow others to test and incorporate the approach into their research. 4. The authors encourage sharing deep learning implementations with detailed instructions for use, offering training models and suggesting that sharing with repositories like Zenodo, which assigns DOIs to shared items, could be a useful practice for future research efforts.
--------------------------- Paper number: 29 Year: 2020 Title: B1 inhomogeneity correction of RARE MRI with transceive surface radiofrequency probes Keywords: B1 Inhomogeneity, RARE MRI, Surface RF Coils, Quantitative MRI Study Achievements and Reproducibility Summary:
4. The authors provide thorough documentation in their software repository to ensure the reliability and longevity of their code. They also express the importance of code sharing for the credibility of research publications and advocate for open science initiatives that reinforce the value of sharing among the MRI community.
--------------------------- Paper number: 30 Year: 2020 Title: FSL-MRS: An end-to-end spectroscopy analysis package Keywords: Magnetic Resonance Spectroscopy, Toolbox, FSL-MRS, Open-Source Study Achievements and Reproducibility Summary: 1. The study developed FSL-MRS, a comprehensive open-source toolbox for magnetic resonance spectroscopy analysis that integrates with the FSL software library. 2. Reproducibility was a focal point, achieved by sharing the FSL-MRS code as part of the wider FSL software, along with providing exemplary documentation and examples. 3. The decision to share was made early, with an understanding that transparency and accessibility are crucial for advancing healthcare research. The goals were to encourage community development and to maintain a useful archive of the research process. 4. The authors stress the importance of making academic tools open-source and believe that sharing code and data should become a standard practice that is promoted through education, training, and changing the cultural norms within academia. They also suggest that better recognition of the contribution of shared code and data could incentivize researchers to adopt more reproducible research practices.
--------------------------- Paper number: 31 Year: 2021 Title: Integration of an RF coil and commercial field camera for ultrahigh-field MRI Keywords: RF Coil, Field Monitoring, Ultra-High Field MRI, Spiral Imaging 2. Reproducibility was ensured by sharing both the source code for FatSegNet and the Dockerfiles necessary to create an environment to run the software, allowing users to replicate the environment and reproduce the results. 3. The authors had a commitment to open-source sharing from the project's inception, with the intent to provide the scientific community with tools that can be readily validated, replicated, and applied to different datasets.
2. Reproducibility of the research was supported by sharing the deep learning model, code, and training datasets publicly, allowing other researchers to test, apply, and potentially improve upon the method. 3. The team decided during the revision process of their manuscript to share code and data to facilitate validation and further exploration by the community, inspired by skepticism from a reviewer that led to a positive outcome. 4. The authors shared a Jupyter Notebook via Google Colab for running the model without installation hassles; they suggest researchers adopt open source sharing early on and consider the type of software license when releasing code. They aspire to make sharing example cases alongside tools a standard practice, pending appropriate data sharing permissions.
2. Reproducibility is enhanced by the shared R code, accompanied by detailed instructions that allow researchers to reproduce the results reported in the paper completely.
1. The study introduced an adaptive baseline fitting algorithm (ABfit) that improves the accuracy of metabolite estimation in MR spectroscopy, particularly useful at short echo-times where interference from baseline artifacts is pronounced.AI | 2024-06-20T15:20:10.720Z | 2024-06-19T00:00:00.000 | {
"year": 2024,
"sha1": "504905dab8e80c0e0caf9c7140dae33cd34abced",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.2463/mrms.rev.2023-0174",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "118bc90e3edde062b8e676ac13802b7fd4a22de9",
"s2fieldsofstudy": [
"Medicine",
"Computer Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
234677128 | pes2o/s2orc | v3-fos-license | The Bilateral Limb Deficit (BLD) Phenomenon During Leg Press: An Investigation Into Central And Peripheral Factors
Background: The bilateral limb deficit (BLD) phenomenon is the difference in maximal or near maximal force generating capacity of muscles when they are contracted alone or in combination with the contralateral muscles. It has been suggested that the BLD may be due to interhemispheric inhibition; however, the origin of the deficit is yet to be determined. The aim of this study was to investigate central and peripheral factors responsible for the BLD during leg press using surface electromyography (EMG) and electroencephalography (EEG). Methods: Fourteen adults (age = 23.7 ± 4.7 years old) completed bilateral (BL), unilateral left (UL) and unilateral right (UR) isometric leg press exercises. The bilateral limb ratio (BLR) was calculated similar to previous studies, and surface EMG from three muscles of the quadriceps femoris (vastus lateralis, vastus medialis and rectus femoris) was used to compare signal amplitude in each condition. Movement related cortical potentials (MRCPs) over the left and right motor cortex areas (C3 and C4, respectively) were used to assess brain activity asymmetries reflecting central factors. Results: The BLD was present in ten of the fourteen participants (mean BLR = 81.4%). Mean RMS activity demonstrated differences in amplitudes between the quadriceps muscles; however, no significant differences were noted between bilateral and unilateral conditions. No significant differences in MRCPs were observed between brain activity of the C3 and C4 electrodes in any of the conditions. Conclusion: This study noted the presence of BLD; however, the results did not provide evidence of significant limitations in either the EMG or EEG data.
Background
Evidence suggests that lower forces are produced with bilateral limb contraction when compared to the summed force produced when the same homologous muscles are contracted unilaterally (Ruiz-Cárdenas et al. 2018). This phenomenon, termed the bilateral limb deficit (BLD), has been exhibited in both upper and lower limbs; however, the magnitude of the deficit is typically larger in lower limbs (Magnus & Farthing 2008). BLD occurs similarly in males and females (Kuruganti et al. 2005; Kuruganti & Seaman 2006), but it has been shown to be sensitive to limb dominance and training interventions (Cornwell et al., 2012; Howard & Enoka, 1991; Kuruganti et al., 2005; Yuko Taniguchi, 1998). It has been demonstrated that specificity training can reduce BLD; for example, training under unilateral and bilateral conditions can increase unilateral and bilateral strength, respectively (Kuruganti et al., 2005; Rube & Secher, 1991; Yuko Taniguchi, 1998).
Despite the established force deficit, the source of BLD is still poorly understood. Understanding the cause of the deficit is important for understanding neuromuscular function and how it can be impacted by tasks which use both limbs simultaneously. Investigating the source of this inhibition will help to understand the effect of the deficit and its functional implications, including muscle imbalance and coordination.
Two primary theories have been proposed, referred to as the postural stability theory and the neural inhibition theory. The postural stability theory postulates that the postural stability requirements of the exercise studied may be the cause of the deficit (Herbert & Gandevia, 1996; Magnus & Farthing, 2008). This was further supported by evidence demonstrating that multi-jointed lower body exercises, particularly those involving large muscles and high force generation, require more postural stability and exhibit a greater deficit (Ning Lan, 2002). It has been suggested that those exercises involving multiple muscle groups and higher ground reaction forces may exhibit larger bilateral deficits due to the increased difficulty in maintaining postural stability under the bilateral condition (Magnus & Farthing, 2008). However, no differences in activation of postural-related core muscles have been found, and this still does not explain why the BLD is seen even in upper limb tasks (Magnus & Farthing, 2008).
The neural inhibition theory, conversely, has received more attention. Surface electromyography (EMG), typically quantified through RMS calculations, has been used to measure the extent of the neural commands sent to the muscle.
Although some work has had mixed results regarding the relationship between force and muscle activity (Cresswell & Overdal, 2002; Häkkinen et al., 1997; Howard & Enoka, 1991; Kuruganti et al., 2011; Kuruganti & Seaman, 2006), there is evidence suggesting neural mechanisms are behind BLD. Early research (Ohtsuki, 1983) suggested that the deficit could be related to inhibitory spinal reflexes, which occur when the neural control for one limb is affected when the opposite limb is simultaneously activated. It is possible that afferent sensory information from one limb may inhibit the control of the motor neurons acting on the contralateral limb (Khodiguian et al. 2003). Furthermore, a study looking at BLD in plantar flexor muscles also suggested that reduced motor neuron excitability during bilateral contraction may contribute to BLD (Kawakami et al., 1998). EEG can be an effective method of measuring brain activity during human movement (Spring et al. 2016). During maximal voluntary contractions, the preparatory brain activity that occurs prior to the onset of movement can be analyzed to investigate event-related potentials, more specifically, movement-related cortical potentials (MRCPs). MRCPs are composed of two distinct components. The first component is termed the Bereitschaftspotential (or Readiness Potential, RP), which is classified as a slow negative shift that occurs prior to the onset of movement and is related to the MRCP peak amplitude (Shibasaki & Hallett 2006; Spring et al. 2016). The second component is called the Negative Slope (NS'), which occurs between 500 ms before the onset of movement and movement onset. The Motor Potential (MP) falls within the NS', occurs at the onset of movement, and corresponds to the peak amplitude of the MRCP.
MRCPs have been investigated in the motor cortex area (C3 and C4) during unilateral and bilateral handgrip contractions (Oda & Moritani, 1995). It was concluded that a bilateral deficit in both force production and EMG was associated with a reduction in MRCPs, indicating that the bilateral handgrip contraction produced less force and EMG activity than the unilateral handgrip contraction because of a mechanism of interhemispheric inhibition. Interhemispheric inhibition is thought to occur when the activity in one hemisphere of the brain affects the activity in the opposite hemisphere while both are concurrently activated, thereby decreasing neural drive to the muscles (Oda & Moritani, 1995; Y. Taniguchi et al., 2001; Van Dieen et al., 2003). However, few studies have examined brain activity simultaneously with EMG and force to study BLD, and this effect has not been shown during lower limb movements.
The purpose of the current study was to investigate the underlying cause of the BLD phenomenon in the lower limbs of active, young adults. Force output was recorded in parallel with surface EMG and EEG data during unilateral and bilateral leg press exercises using an isokinetic dynamometer. It was hypothesized that (1) the bilateral force output would be less than the sum of the unilateral force outputs during the leg press, (2) the unilateral muscle activity would support the discrepancies in force output, and (3) there would be differences in neuronal activity between the bilateral and unilateral leg press, suggesting that the bilateral deficit is caused, at least in part, by the central nervous system.
Methods
Fourteen healthy men (n = 5) and women (n = 9) participated in the present study. Participant characteristics are summarized in Table 1. A questionnaire was distributed prior to beginning the study and it was observed that all participants were right-leg dominant (determined by asking which leg they would kick a soccer ball with) and were considered active (i.e., they engaged in resistance training at least three times per week on a regular basis) but were not varsity athletes. All participants were provided a detailed overview of the study and written informed consent was obtained from each participant prior to testing. This study was approved by the University of New Brunswick Research Ethics Board and has been assigned the file number REB#2019-159.
Table 1 (surviving fragment): Right Anterior Patella Skinfold [mm]: 9.9 ± 3.9, 14.6 ± 2.0, 12.9 ± 3.6.
Instrumentation
Torque data for the unilateral and bilateral leg press were collected using an isokinetic dynamometer (Cybex Humac Norm, CSMI Inc., USA) with an attached closed kinetic chain adapter. The sampling frequency of the dynamometer was 100 Hz. A 32-channel wireless surface EMG system (Trentadue, OT Bioelettronica, Italy) was used to record muscle activity during all maximal voluntary contractions (MVCs). The EMG system had a Common Mode Rejection Ratio (CMRR) of over 96 dB and a signal bandwidth of 10-500 Hz. The signals were sampled at a frequency of 2000 Hz with an A/D converter resolution of 24 bits and a gain of 256. A dry, wireless EEG system (Cognionics Quick-30 Dry Electrode, Cognionics Inc., San Diego, CA, USA) was used to acquire brain activity during the leg press at a sampling frequency of 1000 Hz. To create a time-stamp for the MRCPs, a microcontroller (Arduino MEGA 2560, Arduino LLC, Italy) was used to send a trigger impulse to the EEG system when the participant reached 5 percent of their maximum torque production.
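To illustrate the trigger logic described above, a minimal offline sketch in Python is given below; the function name and threshold handling are illustrative assumptions, since the study implemented this trigger online on an Arduino, which is not reproduced here.

```python
import numpy as np

def movement_onset_index(torque, threshold_frac=0.05):
    """Return the first sample index where |torque| reaches a fraction of its maximum.

    Offline analogue of the hardware trigger described above (5% of maximum
    torque); the study generated this trigger online with a microcontroller.
    """
    thresh = threshold_frac * np.max(np.abs(torque))
    idx = np.flatnonzero(np.abs(torque) >= thresh)
    return int(idx[0]) if idx.size else -1
```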
Isometric Strength Testing
Participants were seated in an upright position on the Cybex. The dynamometer was positioned at a self-selected back-angle (approximately 90°) and a horizontal translation (35-40°) to ensure comfort, and the closed kinetic chain adapter was set so that the participant's knees were at a 90° angle, measured using a goniometer. Hip angle varied as participants were able to adjust the back angle for comfort, but the angle was typically kept at approximately 90° (85-100°). To ensure no contribution of force transmitted from the upper body, participants crossed their arms over their chest during the contractions. Participants were then instructed to perform three bilateral maximum voluntary contractions (MVCs), three unilateral MVCs with their left leg, and three unilateral MVCs with their right leg, where the order of testing was randomized. Participants were asked to hold the contraction for 5 seconds to provide sufficient time to reach maximal force production. A two-minute rest period was given after each MVC to minimize fatigue.
During all trials, experimenters provided verbal encouragement (such as "push as hard as you can") to elicit motivation for maximal force production.
Surface Electromyography
Skinfold measurements were taken on the right leg of all participants in the supine position for the anterior thigh (mid-point between the patella and the inguinal fold) and patella (2 cm above the proximal edge of the patella). Thigh girth for the right leg of each participant was also measured. Criteria for skinfold measurements were similar to those of Kuiken et al. (2003) to ensure that all participants had less than 0.4 mm of adipose tissue which could interfere with the myoelectric signal. Bipolar surface electrodes (Duotrode silver-silver chloride electrodes (Myo-tronics Inc.); interelectrode spacing of 21.0 ± 1.0 mm) were placed bilaterally (left and right) on palpated muscle bellies of the rectus femoris (RF), vastus medialis (VM), and vastus lateralis (VL) adhering to SENIAM guidelines (The SENIAM Project, 1999). To reduce impedance caused by skin, the area was shaved and cleaned with alcohol wipes prior to electrode placement. For the RF, electrodes were placed parallel to the muscle fibers at half the distance between the anterior superior iliac spine (ASIS) and the superior part of the patella. Electrodes were placed over the VL at two-thirds the distance between the ASIS and the lateral aspect of the patella. Electrodes were then placed over the VM at an oblique angle (55°) at 80% of the distance between the ASIS and the joint space in front of the anterior border of the medial ligament. The reference electrode was placed over the right patella. All data were filtered using commercial software (OTBioLab Software, OT Bioelettronica, Italy).
Electroencephalography
A dry EEG headset (Cognionics Quick-30) was used to acquire continuous brain wave activity during each set of 3 trials for the leg press. The sampling rate of the EEG was 1000 Hz and conductive gel was used to keep impedances around 100 kOhm for the electrodes. The system was positioned on each participant's head based on the standard 10/20 channel system with the left earlobe as the reference point as shown in Fig. 1.
Data Analysis
Torque
The trial with the highest peak force was chosen for further analysis. The corresponding trial was used for further processing of the EMG data. The bilateral limb ratio (BLR) was calculated similar to previous studies as follows (Ohtsuki, 1983).
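The equation itself did not survive text extraction. Based on the Ohtsuki (1983) convention cited here and on the percentages reported in the Results (deficit below 100%, facilitation above 100%), the ratio is presumably:

BLR (%) = 100 × BL / (UL + UR)

where BL is the peak bilateral torque and UL and UR are the peak torques of the unilateral left and right contractions.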
Surface Electromyography
Surface EMG signals that corresponded to the trial with the maximum peak torque were used for processing. For these trials, a band-pass filter of 20-400 Hz was applied in the OTBioLab software and the files were then exported into an Excel spreadsheet. For further processing, the data were converted to a MATLAB file and a 60 Hz notch filter was applied to the data. The amplitude of the EMG signal was estimated using the root mean square (RMS) calculation. A 1.0 second window of EMG data, centered at the peak force, was used for all calculations, similar to previous studies (Kuruganti et al., 2008; Kuruganti et al., 2011).
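As an illustration of this windowed RMS estimate, a minimal NumPy sketch is shown below; it is not the authors' OTBioLab/MATLAB pipeline, and the function and variable names are hypothetical. It assumes the EMG has already been band-pass and notch filtered and that the EMG and torque recordings share a common start time.

```python
import numpy as np

def rms_around_peak(emg, torque, fs_emg=2000, fs_torque=100, window_s=1.0):
    """RMS amplitude of a filtered EMG channel over a window centered at peak torque.

    emg: 1-D EMG signal sampled at fs_emg (already band-pass and notch filtered).
    torque: 1-D dynamometer torque trace sampled at fs_torque.
    """
    t_peak = np.argmax(np.abs(torque)) / fs_torque   # time of peak torque (s)
    center = int(t_peak * fs_emg)                    # corresponding EMG sample
    half = int(window_s * fs_emg / 2)                # half of the 1.0 s window
    lo, hi = max(0, center - half), min(len(emg), center + half)
    segment = emg[lo:hi]
    return float(np.sqrt(np.mean(segment ** 2)))
```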
Electroencephalography
The EEG data were processed using a custom MATLAB script (MathWorks, Natick, MA, USA) using EEGLAB (Delorme & Makeig, 2004) functions. Data were first filtered with a band-pass filter of 0.1-100 Hz to eliminate low frequency noise/DC offset. All blinking and other ocular artifacts were removed from the data using an independent component analysis approach (Delorme & Makeig, 2004). Epochs time-locked to the onset of movement were extracted from the data from -1500 ms to 200 ms in order to analyze the MRCP. Similar to the EMG data, a 60 Hz notch filter was applied. The electrodes used to analyze the MRCP were over the left and right precentral cortex (C3 and C4, respectively), as they reside over the primary motor cortex (Oda & Moritani, 1995). The grand average of all EEG trials was calculated and then used to obtain the MRCP according to Shibasaki & Hallett (2006) at three phases: readiness potential (RP; -1000 to -600 ms), negative slope (NS'; -600 to -200 ms) and the motor potential (MP; -200 to 50 ms).
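The sketch below illustrates how such integrated MRCP amplitudes can be computed from movement-locked epochs; it is a simplified NumPy stand-in for the authors' EEGLAB/MATLAB processing, with hypothetical names, and the MP window endpoint of 50 ms is an assumption based on the epoch length.

```python
import numpy as np

# MRCP phase windows in ms relative to movement onset, as used above
# (the MP endpoint of 50 ms is an assumption).
PHASES = {"RP": (-1000, -600), "NS'": (-600, -200), "MP": (-200, 50)}

def mrcp_integrated_amplitudes(epochs_mv, fs=1000, t0_ms=-1500):
    """Integrated amplitude (mV*s) of each MRCP phase from movement-locked epochs.

    epochs_mv: array (n_trials, n_samples) for one channel (e.g., C3 or C4),
    epoched from t0_ms relative to movement onset, in mV.
    """
    grand_avg = epochs_mv.mean(axis=0)                       # grand average MRCP
    t_ms = t0_ms + np.arange(grand_avg.size) * 1000.0 / fs   # time axis in ms
    out = {}
    for name, (a, b) in PHASES.items():
        mask = (t_ms >= a) & (t_ms < b)
        out[name] = np.trapz(grand_avg[mask], t_ms[mask] / 1000.0)  # mV*s
    return out
```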
Normality of the dataset was assessed using a Shapiro-Wilk test prior to any statistical analyses. A two-way repeated measures analysis of variance was used to examine the effect of contraction type (bilateral, unilateral) and muscle type (VM, RF, VL) on the RMS values. A pairwise t-test using a Bonferroni correction was used when an ANOVA resulted in a p-value less than the alpha value, which was set at 0.05. All of the statistical tests were performed using RStudio 1.0.136 (RStudio, Boston, MA).
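For readers who want to mirror this analysis outside R, a hedged Python sketch is given below; the authors used RStudio, so the pingouin package and the data frame layout here are assumptions for illustration only.

```python
import pandas as pd
import pingouin as pg
from scipy import stats

# Hypothetical long-format table: one row per participant x condition x muscle,
# with columns 'participant', 'condition', 'muscle', and 'rms'.
df = pd.read_csv("rms_long.csv")  # hypothetical file name

print(stats.shapiro(df["rms"]))   # Shapiro-Wilk normality test

# Two-way repeated measures ANOVA: condition (bilateral/unilateral) x muscle (VM/RF/VL)
aov = pg.rm_anova(data=df, dv="rms", within=["condition", "muscle"], subject="participant")
print(aov)

# Bonferroni-corrected pairwise comparisons when an effect is significant (alpha = 0.05);
# note: pairwise_tests is named pairwise_ttests in older pingouin releases.
if (aov["p-unc"] < 0.05).any():
    post = pg.pairwise_tests(data=df, dv="rms", within=["condition", "muscle"],
                             subject="participant", padjust="bonf")
    print(post)
```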
Unilateral and Bilateral Torque
The mean torque data for the unilateral and bilateral conditions are shown in Table 2. The mean BLR across all participants was 94.8 ± 22.0%, which was less than, but not statistically significantly different from, 100% (p > 0.05). Out of 14 participants, 10 showed a bilateral limb deficit. An analysis was performed on both the participants that demonstrated a BLD response (n = 10) and the participants that demonstrated a facilitation (n = 4). A t-test showed that those participants that exhibited a deficit had a mean BLR = 81.4%, which was significantly lower than 100% (p < 0.01). The participants that demonstrated a facilitation had a BLR of 117.1%, which was significantly higher than 100% (p = 0.0155), indicating a bilateral facilitation.
Unilateral and Bilateral EMG
Figure 2 provides sample EMG data from one subject. Muscle activity from the rectus femoris (RF), vastus medialis (VM), and vastus lateralis (VL) during unilateral and bilateral isometric leg press is shown. Table 3 presents the amplitude data for each muscle (RF, VM, VL) for bilateral and unilateral conditions. The repeated measures ANOVA did not reveal any significant differences due to condition (bilateral versus unilateral). The subset of individuals who presented a BLD were also examined and there were no significant differences detected due to condition in those individuals either. There were significant differences detected between muscles (p < 0.001), with the VM having higher amplitude than either the VL or RF.
Unilateral and Bilateral EEG
Figure 4 illustrates the average integrated amplitudes during the three components of the MRCP (RP, NS', and MP) during the three conditions. The C3 and C4 electrodes represent electrical activity at the left (C3) and right (C4) precentral cortex. There were no significant differences found between the electrodes for the RP, NS', or MP for any condition (p > 0.05). When comparing within each electrode, there was a significant difference found in C4 (the precentral cortex of the right hemisphere) between the unilateral right and bilateral conditions; the average NS' was significantly greater in the bilateral condition than the unilateral right condition (0.00257 ± 0.00475 mV*s and -0.00168 ± 0.00361 mV*s for the bilateral and unilateral right conditions, respectively; p < 0.05). The average MP was also greater in the bilateral condition compared to the unilateral right condition (0.00106 ± 0.00238 mV*s and -0.000351 ± 0.00187 mV*s for the bilateral and unilateral right conditions, respectively; p < 0.05).
Discussion
This study observed a BLD during leg press similar to other studies (Janzen et al., 2006; Magnus & Farthing, 2008; MacDonald, Losier, Chester & Kuruganti, 2014), but with varying results. The mean BLR detected in the present study (~81%) was similar to, but slightly higher than, what was reported in previous research (MacDonald et al., 2014). This could be due, in part, to the fact that MacDonald et al. studied varsity swimmers that incorporate potential unilateral and bilateral training into their programs, ultimately reducing the deficit. Overall, the torque data presented in this work were also higher than those of MacDonald et al. (2014).
It has been previously reported that the BLD is more evident in dynamic exercises (e.g. isokinetic knee extension) than isometric contractions (Jakobi and Chilibeck, 2001; Kuruganti et al., 2005; Kuruganti et al., 2006). Similar to Janzen et al. (2006), we found that the BLD is present in complex exercises such as the leg press, which combines hip and knee extension. In addition, the nervous system may be more involved during multi-articulate contractions such as the leg press that involve movements at multiple joints (Chilibeck et al., 1998). The postural stability theory suggests that exercises involving multiple muscle groups and higher ground reaction forces, such as the leg press, might exhibit larger bilateral deficits because it is more difficult to maintain postural stability under the bilateral condition (Magnus and Farthing 2008). There is comparable evidence to suggest that single-jointed movements, such as knee extension, may result in a smaller bilateral deficit compared to multi-jointed movements, such as a lat pull-down and leg press (Janzen et al. 2006). This is because multi-jointed movements tend to involve larger muscles and greater force production, thus requiring greater postural stability (Simoneau-Buessinger et al., 2015). It was determined that muscle activation of the trunk was significantly greater in the leg press, a multi-joint movement, compared to the knee extension and handgrip exercises, which are single-jointed movements.
The surface EMG data in this study did not show any differences between the bilateral and unilateral conditions for any of the quadriceps muscles. While 14 individuals participated in this research, EMG from only nine participants was used for analysis due to a hardware issue, further reducing the sample size.
The EMG was also examined from those that exhibited a BLD (10/14 participants) and no significant trend was observed with respect to the deficit.
Similar to previous studies, this study found that the muscles of the quadriceps femoris are not homogeneously activated during the leg press (Ema et al., 2016). Ema et al. studied knee extension and leg press at differing intensities and found inter-muscle and inter-exercise differences in the activation of the quadriceps femoris arising from the involvement of the hip extension torque, and that RF activation is low in multi-joint exercise. However, Alkner et al. (2000) did not find significant differences in the EMG amplitude of the VL, VM, RF and biceps femoris (BF) between isometric knee extension and leg press. While the sample in this study was small, the results suggest that there are differences in the relative contributions of each muscle to the overall activation. One limitation of the present study was the lack of measured antagonist muscle activity. In addition, this work used traditional bipolar surface EMG over the three muscles. Using multichannel, high density EMG electrodes over the entire quadriceps muscle may reveal greater insight regarding muscle activation during the leg press and also provide greater support to the postural stability theory of the BLD.
Previous studies that have investigated the role of surface EMG in the development of the BLD have been inconclusive, and in many cases EMG data have not paralleled force or torque data under the same conditions. Some researchers have reported that the amplitude of the EMG signal is lower under bilateral conditions compared to unilateral conditions (Kawakami et al. 1998; Koh et al. 1993; Oda and Moritani 1994; Ohtsuki 1981, 1983; Rube and Secher 1990; Steger and Denoth 1996; Van Soest et al. 1985; Vandervoort et al. 1984). Several authors (Henry & Smith, 2013; Oda & Moritani, 1994; Ohtsuki, 1981) have observed a greater force reduction in the dominant limb when investigating BLD; however, these results were primarily based on upper limbs. Other studies have also found that bilateral EMG amplitudes are lower than the unilateral (Cresswell & Overdal, 2002; Oda & Moritani, 1995; Rejc et al., 2009; Vandervoort et al., 1984). While some researchers have found that EMG amplitudes are lower during bilateral conditions compared to unilateral conditions (MacDonald et al., 2014, and Murphy 2008; Cresswell and Overdal 2002; Kawakami et al. 1998; Koh et al. 1993; Oda and Moritani 1994; Ohtsuki 1981, 1983; Rube and Secher 1990; Steger and Denoth 1996; Van Soest et al. 1985; Vandervoort et al. 1984), others have shown no deficit in the EMG data (Howard and Enoka 1991; Owings and Grabiner 1998; Schantz et al. 1989). In addition, this study only found differences in bilateral and unilateral EMG on the left side, suggesting other factors contribute to the deficit. It has been suggested by researchers that the deficit may be caused by significant decreases in motor unit activation of the quadriceps muscles during the bilateral contraction compared to the unilateral (Vandervoort et al., 1984), decreased cortical activity (Oda & Moritani, 1995), and a reduction in neural drive in conjunction with interhemispheric inhibition (Cresswell & Overdal, 2002; Rejc et al., 2009).
While some studies have proposed that BLD is due to neural inhibition during bilateral compared to unilateral tasks (Vandervoort et al., 1984), few studies have used EEG to explore brain activity during these types of contractions (Oda and Moritani, 1995). In this study we examined strength, surface EMG measures, and brain activity during bilateral and unilateral contractions. Previously, Oda and Moritani (1995) concluded that there was a greater MRCP deficit of the non-dominant right hemisphere compared to the dominant left hemisphere. It was also suggested that the bilateral deficits in the integrated amplitudes for both the negative slope (NS') and motor potential (MP) could be due to the decreased neural activation of the primary motor cortex. Similar to their findings, our results illustrated no differences between hemispheres during each condition, but there was a decrease in brain activity in the (non-dominant) right hemisphere during the unilateral right condition compared to the bilateral condition. Given that the right hemisphere controls the left side of the body, it is plausible that this hemisphere would display a decrease in neural activity when the left leg is not involved in the MVC.
This study was limited to one movement and it would be interesting to determine if there are neural differences in other types of contractions which have demonstrated the BLD, such as elbow flexion. The fact that the lower-extremity primary motor cortex is located in close proximity to the medial longitudinal fissure may introduce barriers in measuring interhemispheric interactions in lower extremities (Palmer et al., 2017). One such challenge may be that the electrical fields created by the activation in the adjacent parts of the fissure may be polar opposites, thus canceling out the signal when measuring the overall potential using EEG.
Conclusions
This study found the presence of the BLD during isometric leg press. There was no evidence of reduced muscle activity in bilateral compared to unilateral contractions. There were also no significant differences found between cortical hemispheres during bilateral and unilateral contractions, indicating that the deficit was not induced by interhemispheric inhibition during isometric leg press. This study examined contractions from healthy, university aged men and women. It is well established that muscle strength declines with age and is often accompanied by alterations in muscle and neural activity. A higher sample size as well as a larger age range may provide greater information regarding muscle and neural adaptation due to the deficit. Furthermore, it has been shown that the BLR can be reduced with targeted training. Including EEG measurement may provide greater insight regarding the response of the deficit to training. Ethics approval: We confirm that we have read the Journal's position on issues involved in ethical publication and affirm that this report is consistent with those guidelines. This project was approved by the University of New Brunswick Research Ethics Board (REB) and is filed with the university (REB#2019-159).
Consent to participate: Written informed consent was obtained from all participants prior to engaging in the experiment.
Consent for publication: Informed consent was obtained from all participants ensuring that they understood that while their individual identifiable data would not be made public, aggregate and coded mean data would be made available for publication.
Availability of data and material: The datasets generated and/or analysed during the current study are not publicly available due to privacy issues but are available from the corresponding author on reasonable request.
Code availability: The software and custom code used are not available for public use.
Authors' contributions: UK conceived and designed research. EW, OO and JT conducted experiments. EW, OO and JT analyzed data. EW wrote the manuscript. All authors read and approved the manuscript.
Acknowledgements: Not applicable
Figure 2 Sample Data. EMG data from one subject during an MVC in a bilateral leg press. First column: EMG data from the left limb (RF, VM, VL). Second column: EMG data from the right limb (RF, VM, VL).
Figure 4 Mean integrated amplitudes (mV*s) of the RP, NS', and MP at the C3 and C4 electrodes during each condition | 2020-10-28T18:42:13.968Z | 2020-09-30T00:00:00.000 | {
"year": 2020,
"sha1": "9cf1cc57bb666e2abf2bc6cde42b3317373fa361",
"oa_license": "CCBY",
"oa_url": "https://www.researchsquare.com/article/rs-70761/v1.pdf?c=1601495671000",
"oa_status": "GREEN",
"pdf_src": "Adhoc",
"pdf_hash": "64dbcb8ddd8524c847a9b9640a55c490f7db1758",
"s2fieldsofstudy": [
"Medicine",
"Psychology",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
255784647 | pes2o/s2orc | v3-fos-license | Life-history traits of Drosophila melanogaster populations exhibiting early and late eclosion chronotypes
The hypothesis that circadian clocks confer adaptive advantage to organisms has been proposed based on their ubiquity across almost all levels of complexity and organization of life-forms. This thought has received considerable attention, and studies employing diverse strategies have attempted to investigate it. However, only a handful of them have examined how selection for circadian clock controlled rhythmic behaviors influences life-history traits which are known to influence Darwinian fitness. The 'early' and 'late' chronotypes are amongst the most widely studied circadian phenotypes; however, life-history traits associated with these chronotypes, and their consequences on Darwinian fitness, remain largely unexplored, primarily due to the lack of a suitable model system. Here we studied several life-history traits of Drosophila melanogaster populations that were subjected to laboratory selection for morning (early) and evening (late) emergence. We report that the late eclosion chronotypes evolved longer pre-adult duration as compared to the early eclosion chronotypes both under light/dark (LD) and constant dark (DD) conditions, and these differences appear to be mediated by both clock dependent and independent mechanisms. Furthermore, longer pre-adult duration in the late chronotypes does not lead to higher body-mass at pupariation or eclosion, but the late females were significantly more fecund and lived significantly shorter than the early females. Coevolution of multiple life-history traits in response to selection on timing of eclosion highlights correlations of the genetic architecture governing timing of eclosion with that of fitness components, which suggests that timing ecologically relevant behaviors at specific times of the day might confer adaptive advantage.
Background
It is believed that circadian timekeeping mechanisms underlying rhythmic processes provide adaptive advantage to organisms [1][2][3][4][5][6][7], and this has prompted studies employing a variety of strategies to examine the adaptive benefits of possessing functional circadian clocks. Surgical ablation of the mammalian 'master circadian clock', the suprachiasmatic nucleus [8], and genetic manipulation of circadian clocks in the fruit fly Drosophila melanogaster [9], both of which drive loss of rhythmicity in several key circadian behaviors, result in reduced survivorship [10][11][12][13]. Environmentally induced or naturally occurring circadian dysfunction has also been reported to reduce longevity in D. melanogaster [14,15]. Beaver et al. [16,17] reported that D. melanogaster strains carrying loss-of-function mutations in two core clock genes exhibit reduced reproductive output. In addition, studies on organisms inhabiting different latitudes, as well as those living in constant conditions, reported large variation in circadian phenotypes in accordance with their local habitats, suggesting that the underlying clocks may have evolved as an adaptation to the presence or absence of cyclic environmental conditions [6,[18][19][20][21][22][23][24][25][26][27][28]. Nevertheless, conclusions drawn from such studies are limited by the lack of adequate information about the ancestry, population size and history of the environmental conditions pertaining to the organism's ecology [6].
The eclosion waveform of D. melanogaster comprises a primary peak at dawn (under natural conditions) or around the night-day transition (under laboratory light/dark cycles) which gradually declines through the day, with little or no eclosion occurring at night (Additional file 1: Figure S1; [9,29]). This restriction/gating of eclosion primarily around dawn is hypothesized to be an adaptation to avoid desiccation of pharate adults by the high temperature and low humidity prevailing during the rest of the day [3], partly supported by the results of a recent study [30]. A laboratory selection approach has previously been adopted to study how circadian clocks evolve in response to selection for time/phase of eclosion. Selection for 'early' and 'late' emerging strains of Drosophila pseudoobscura and the moth Pectinophora gossypiella under LD12:12 (12 h each of light and darkness) resulted in the evolution of divergent phases of eclosion (4 h in D. pseudoobscura and 5 h in P. gossypiella) [31,32]. As a correlated response, the early flies in both studies evolved longer circadian clock periods while the late flies evolved shorter periods. However, these studies suffered from some major shortcomings, such as a lack of population level replication and of details of population ancestry and the selection protocols employed (population maintenance methodology, population size and sex ratio), all of which are known to considerably modify evolutionary trajectories in response to selection and thus might have led to misinterpretation of the observed responses to selection (reviewed in [6]). Although these studies suggest that circadian clocks might have evolved to ensure temporal order in behavior and physiology, thus enhancing Darwinian fitness (reviewed in [6]), our understanding of how selection for the timing of clock controlled behaviors influences life-history traits remains nominal.
To explore the evolutionary trajectory of circadian clocks in response to selection for timing of eclosion, we initiated a long-term study on D. melanogaster populations by imposing selection for eclosion during early morning and late evening hours, in contrast to the usual time of eclosion in this species. From a set of 4 ancestral control populations we derived 8 populations: 4 replicate early populations using flies that eclose early in the morning and 4 replicate late populations using flies that eclose late in the evening (Additional file 1: Figure S2; see materials and methods for the detailed selection protocol). Consequently, the early 1-4 and the late 1-4 populations evolved significantly higher morning and evening eclosion respectively relative to the control 1-4 populations, and exhibited several properties analogous to the well-known 'morning/early' and 'evening/late' chronotypes in humans. Similar to the 'early' and the 'late' human chronotypes [33][34][35], the early and the late Drosophila populations evolved shorter and longer clock periods respectively, with the control populations exhibiting intermediate periods [36], and also exhibited diverged photic phase response curves (PRCs) for both eclosion [36] and activity/rest rhythms [37]. These results indicate that the circadian clocks of the two sets of populations 'entrain' differently to LD cycles, or in other words, that they are differentially sensitive to and interact differentially with LD cycles. This is corroborated by the results of a previous study which reported that the early populations are sensitive to light primarily in the evening while the late populations are sensitive to light primarily in the morning [38]. Collectively, these studies suggest that divergent coevolution of circadian clocks in the early and the late populations might mediate differential interaction/entrainment to regulate time of eclosion.
In the present study, we used the early and the late populations to examine genetic correlations between mechanisms that underlie eclosion at a specific time of the day and various pre-adult traits (egg-to-puparium and egg-to-adult duration, egg-to-puparium and egg-to-adult survivorship, and puparial dry-weight) as well as adult life-history traits (dry-weight at eclosion, fecundity, pre- and post-fecundity assay dry-weight, and longevity). As discussed earlier, the early and the late eclosion chronotypes have been shown to be associated with different circadian clock periods and differential entrainment to LD cycles, and pre-adult traits such as egg-to-adult duration are known to be correlated with circadian clock period. Therefore, to assess the relative contributions of circadian clock period and differential entrainment to LD cycles in driving life-history trait differences between the early and the late populations, we performed some of our experiments under both LD12:12 and constant darkness (DD). The rationale is that if differences in life-history traits between the early and the late populations are solely determined by circadian clock period, as can be observed under DD when the clock is not under the influence of LD cycles, such differences would either decrease or cease to exist under LD12:12 because the clock periods of all the populations would be held at 24 h by virtue of entrainment [8]. Persistence of differences between populations under both light regimes would imply that the observed life-history trait differences are also driven by clock independent mechanisms.
As mentioned earlier, since D. melanogaster eclose predominantly during 'dawn', eclosion at other times of the day is considered to be maladaptive (Additional file 1: Figure S1; [3]). If this is true, then the proportion of individuals which normally eclose early in the morning in the control populations might also differ in terms of fitness from those that eclose late in the evening. To test for such a possibility, one generation before the assays we derived 8 additional populations from the controls ─ 4 populations comprising individuals emerging early in the morning, referred to as the early-control, and similarly, 4 populations comprising individuals emerging late in the evening, referred to as the late-control. The early-control and the late-control populations are also likely to reveal whether the observed differences in fitness measures between the early and the late populations (if any) are indeed evolved responses to the selection imposed on the timing of eclosion, or are merely environment-driven.
We report that the late populations have evolved significantly longer median egg-to-puparium duration leading to longer egg-to-adult duration, are more fecund around day 11 post-emergence (the usual day for egg collection as per the selection protocol; see materials and methods), and also exhibit reduced median longevity as compared to the early populations. In contrast, the early-control and the late-control populations did not differ in the aforesaid life-history traits, suggesting that the observed differences between the selected populations (early and late) are evolutionary responses to selection for timing of eclosion. Also, even though the early populations differed significantly from the late populations, they were similar to the control populations for most of the traits assayed; the possible reasons for this are discussed later.
Results
Egg-to-puparium duration

ANOVA on median egg-to-puparium duration showed statistically significant effects of population, light regime and population × light regime interaction (Table 1a). Across light regime comparisons revealed that egg-to-puparium duration was significantly longer in LD12:12 than in DD for all populations (Additional file 1: Figure S3; Additional file 1: Table S1). In LD12:12, the late populations had a significantly longer (by 6.5 h or 5.4 %) egg-to-puparium duration (129.11 h) as compared to all other populations (early = 122.43 h, early-control = 123.01 h, control = 121.84 h and late-control = 122.89 h), while the remaining four sets of populations did not differ among each other (Fig. 1a, c; Additional file 1: Figure S3; Additional file 1: Table S1).

Table 1. Summary of results of ANOVA on (a) median egg-to-puparium duration, (b) arc-sine square root transformed egg-to-puparium survivorship, (c) median egg-to-adult duration, (d) arc-sine square root transformed egg-to-adult survivorship, (e) dry-weight at pupariation, and (f) dry-weight at eclosion of all populations under LD12:12 and DD light regimes; and on (g) average eggs laid/female, (h) dry-weight at pre- and post-fecundity assay stages, (i) log transformed fecundity per unit dry-weight loss, and (j) median longevity of virgin males and females of all populations under LD12:12.
In DD, the late populations took significantly longer (by 5 h or 3.6 %) to pupariate (118.14 h) as compared to the early (113.57 h) and the control (114.47 h) populations, but did not differ from the early-control and the late-control populations (Additional file 1: Figure S3; Additional file 1: Table S1).
Egg-to-puparium survivorship
ANOVA on egg-to-puparium survivorship revealed that the effects of population, light regime and population × light regime interaction were not statistically significant (Table 1b), indicating that the populations did not differ in their egg-to-puparium survivorship either within or across light regimes.
Egg-to-adult duration
ANOVA on median egg-to-adult duration revealed statistically significant effects of population, light regime and population × light regime interaction (Table 1c). As observed for egg-to-puparium duration, the egg-to-adult duration in LD12:12 was also significantly longer (by 16 h or 7.5 %) for all the populations as compared to that in DD (Additional file 1: Figure S4; Additional file 1: Table S1). Under both light regimes, the late populations exhibited significantly longer egg-to-adult duration than the early populations (Additional file 1: Figure S4; Additional file 1: Table S1).
Dry-weight
Since the late populations exhibited significantly longer egg-to-puparium and egg-to-adult duration, we further tested if this lengthening of pre-adult developmental duration translated to higher dry-weight at pupariation and eclosion.
ANOVA on pupal dry-weight revealed statistically significant effect of population and light regime but not of population × light regime interaction (Table 1e). In accordance with the difference in egg-to-puparium duration between light regimes, the pupal dry-weight was found to be significantly higher (on an average by 6.3 %) in LD12:12 (early = 576.16 μg, early-control = 570.53 μg, control = 572.17 μg, late-control = 575.11 μg and late = 580.16 μg; Fig. 3a; Additional file 1: Table S3) as compared to that in DD (early = 533.52 μg, early-control = 536.46 μg, control = 533.33 μg, late-control = 544.07 μg and late = 544.83 μg; Fig. 3a; Additional file 1: Table S3) whereas no difference was observed between populations within either of the light regimes.
ANOVA on dry-weight at eclosion reported a statistically significant effect of light regime but not of population or population × light regime interaction (Table 1f). In accordance with egg-to-adult duration differences across light regimes, dry-weight at eclosion was found to be significantly higher (on average by 4.35 %) in LD12:12 (early = 359.39 μg, early-control = 358.30 μg, control = 361.71 μg, late-control = 362.64 μg and late = 369.94 μg; Fig. 3b; Additional file 1: Table S3) as compared to that in DD (early = 342.63 μg, early-control = 347.85 μg, control = 346.19 μg, late-control = 348.12 μg and late = 348.06 μg; Fig. 3b; Additional file 1: Table S3), whereas the populations did not differ among each other in either of the light regimes.

Fecundity

ANOVA on average fecundity data revealed a statistically significant effect of population (Table 1g): the late populations laid significantly more eggs than all other populations (Fig. 4a; Additional file 1: Table S4).
Pre- and post-fecundity assay dry-weights

ANOVA on female dry-weight measurements at pre- and post-fecundity assay stages showed statistically significant effects of stage (pre/post-fecundity assay) and population × stage interaction but not of population (Table 1h; Additional file 1: Table S4).
Fecundity per unit loss in dry-weight
When normalized by the dry-weight lost (difference between pre- and post-fecundity assay dry-weight), fecundity per unit dry-weight lost did not differ statistically across populations (early = 0.15 eggs/μg, early-control = 0.16 eggs/μg, control = 0.17 eggs/μg, late-control = 0.15 eggs/μg and late = 0.15 eggs/μg; Fig. 4c; Table 1i), suggesting that although the late populations were more fecund, they lost more dry-weight owing to the higher number of eggs laid. As an additional confirmation, we performed a linear correlation between egg output and dry-weight loss by pooling data from all the populations, and found that the two were significantly positively correlated (r = +0.75, p < 0.0001; Fig. 4d).
Longevity
ANOVA on median longevity reported statistically significant effects of population, sex and population × sex interaction (Table 1j; Fig. 5; Additional file 1: Table S4). Within-sex comparisons revealed that females of the late populations exhibited significantly shorter (~12 %) median longevity as compared to females of the other populations, with the exception of the late-control females, which did not differ statistically from the late populations. Median male longevity was not observed to differ statistically across populations (Fig. 5; Additional file 1: Table S4).
Discussion
We observe that the late populations evolved longer egg-to-puparium and egg-to-adult duration as compared to the early populations, thus highlighting an association between eclosion chronotype and pre-adult developmental duration (Figs. 1a-c, 2a-c). Under LD12:12, the difference in median egg-to-puparium duration between the late and the early populations is ~7 h, which is reduced to ~3 h at eclosion (Figs. 1c, 2c). One possible reason for this might be a genotype dependent effect of light on pupal development, as also suggested by the median egg-to-puparium duration of the late populations, which differs from all other populations under LD12:12 but not DD (Fig. 1c). Alternatively, under LD cycles, timing of eclosion is known to be governed by a circadian clock component as well as a clock independent masking response to lights-ON, as discussed with respect to the same populations in a previous study [39]. Thus, in addition to the clock determined time of eclosion, the masking response to light, which results in an additional burst of eclosion immediately following lights-ON, might have reduced the median pre-adult duration. This is further supported by our observation under DD that the ~4 h difference in egg-to-puparium duration between the late and the early populations increases to ~7 h at eclosion (Figs. 1c, 2c). Thus, the observed reduction in median egg-to-adult duration under LD12:12 may be a result of the combination of both (a) an artefact of the masking response to lights-ON, which is clearly absent under DD, and (b) a differential effect of light on pupal development, which remains to be addressed further. In addition to the divergent eclosion chronotypes, the early and the late populations have also evolved shorter and longer clock periods differing by 40 min [36,39], which suggests a correlation between emergence chronotype and circadian clock period. Such correlations have been reported earlier in the melon fly, Bactrocera cucurbitae [40,41], and between clock period and egg-to-adult duration in fruit flies D. melanogaster [42][43][44], suggesting that clock period differences influence pre-adult developmental rates. In DD, egg-to-puparium duration of the late populations was 118.14 h (4.9 days) and egg-to-adult duration was 226.41 h (9.4 days), as opposed to 113.56 h (4.7 days) and 218.90 h (9.1 days) respectively for the early populations. If the pre-adult duration of the early and the late populations were entirely driven by the circadian clock period difference, under DD the early and the late populations would drift apart by 0.66 h (40 min) every day, and consequently the two populations would exhibit a 3.12 h difference in egg-to-puparium duration (in 4.7 days, which is the time taken by the early populations to pupariate) and a 6.01 h difference in egg-to-adult duration (in 9.12 days), which is considerably smaller than that observed empirically (Figs. 1a, c; 2a, c). Since eggs for the egg-to-puparium and egg-to-adult duration assays were collected from all populations at the same time (thus were age matched), the observed differences in pre-adult duration between the early and the late populations are unlikely to be due to differences in the age of eggs. Moreover, the time of egg-collection or the age of eggs does not alter the difference in egg-to-adult duration between the early and the late populations [45].
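The drift arithmetic above can be checked in a few lines of Python; all figures are taken from the text, and the small differences from the quoted 3.12 h and 6.01 h are due to rounding:

```python
# Expected separation if pre-adult duration were driven solely by the
# ~40 min/day clock-period difference (rounded values as quoted in the text).
drift_per_day = 0.66                        # h/day (40 min)
expected_pupal_gap = drift_per_day * 4.7    # days to pupariation (early)
expected_adult_gap = drift_per_day * 9.12   # days to eclosion (early)

observed_pupal_gap = 118.14 - 113.56        # h, late minus early, egg-to-puparium
observed_adult_gap = 226.41 - 218.90        # h, late minus early, egg-to-adult

print(f"expected: {expected_pupal_gap:.2f} h / {expected_adult_gap:.2f} h")
print(f"observed: {observed_pupal_gap:.2f} h / {observed_adult_gap:.2f} h")
```

The observed gaps (~4.6 h and ~7.5 h) exceed the clock-drift expectation, which is the basis for invoking clock independent mechanisms below.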
Taken together, these results suggest that the difference in pre-adult developmental rates of the early and the late populations is not entirely circadian clock driven, and may also involve clock independent mechanisms which might drive differential interaction with LD cycles (significant population × light regime interaction reported in Table 1a, c). Furthermore, a light mediated slowing of pre-adult development is apparent, as both egg-to-puparium and egg-to-adult duration of all the populations were 7-7.5 % longer in LD12:12 as compared to DD (Figs. 1a-c, 2a-c; Additional file 1: Figure S3, Additional file 1: Figure S4; Additional file 1: Table S1). While the effect of light on pre-adult duration has been documented earlier [46,47], the precise mechanisms underlying such effects remain to be explored. The timing of eclosion in Drosophila is believed to depend upon a number of factors including the developmental state of the fly, the phase and period of the circadian rhythm, hormonal cascades, and environmental conditions [48,49]. LD cycles interact with the circadian clock controlled gate of eclosion such that even flies that have completed development are allowed to eclose only during certain times of the day, and not merely in accordance with their developmental state; consequently, the time of eclosion is delayed in LD12:12 as compared to DD [47][48][49][50]. Additionally, the time of eclosion on a given day is also a function of the circadian clock period, such that individuals with shorter periods eclose earlier than those with longer clock periods [51]. This further supports the notion that pre-adult development in D. melanogaster is probably mediated by the interaction of circadian clocks with LD cycles.
Differences in pre-adult developmental rates of the early and the late populations do not seem to influence their egg-to-puparium and egg-to-adult survivorship; nor do the light regime mediated differences affect pre-adult survivorship (Figs. 1d, 2d; Additional file 1: Table S2). This might be because the magnitude of the difference in egg-to-puparium or egg-to-adult duration between the populations is not large enough to influence egg-to-puparium and egg-to-adult survivorship.
Although the late populations have evolved longer pre-adult developmental duration, their body-weight at pupariation and at eclosion did not differ from that of all the other populations (Fig. 3; Additional file 1: Table S3). However, dry-weights of puparia and adults were found to be significantly higher for all the populations in LD12:12 as compared to DD (Fig. 3b; Additional file 1: Table S3), which is not surprising as egg-to-puparium and egg-to-adult duration is significantly longer in LD12:12 than in DD (Figs. 1, 2).
Coevolution of pre-adult life-history traits in response to selection for timing of eclosion is intuitive, as changes in pre-adult stages can directly affect the time course and waveform of eclosion. It would be interesting to know whether selection for eclosion at different times of the day also led to correlated changes in adult life-history traits that may not necessarily influence eclosion time but would highlight the underlying genetic correlations. In this regard, we observed that the late populations exhibited significantly higher fecundity as compared to all other populations (Fig. 4a; Additional file 1: Table S4). In D. melanogaster, pre-adult developmental duration is known to be correlated with fecundity, and delayed development is associated with higher dry-weight and in turn with higher fecundity [52][53][54]. This does not appear to be the case in the late populations, since neither their dry-weight at eclosion nor their pre-fecundity assay dry-weight differed from the other populations (Fig. 4b; Additional file 1: Table S4). However, the post-fecundity assay dry-weight of the late populations was significantly lower as compared to that of the other populations (Fig. 4b; Additional file 1: Table S4), but when normalized by the loss of dry-weight (difference between pre- and post-fecundity assay dry-weight), fecundity per unit dry-weight lost was similar for all populations (Fig. 4c). Therefore, the significant reduction in post-fecundity assay dry-weight in the late populations appears to be a consequence of the higher number of eggs laid, as also substantiated by the significant correlation observed between the number of eggs laid and dry-weight lost (Fig. 4d). Therefore, contrary to the well-known positive correlation between pre-adult developmental duration, dry-weight and fecundity, our results suggest that the observed higher fecundity in the late populations is not due to higher dry-weight attained because of the delay in the timing of eclosion, but may be due to other mechanisms such as pleiotropy or mutation accumulation [55]. Alternatively, higher fecundity in the late populations might have evolved as an artefact of the nature of the selection protocol employed. For instance, to ensure that the number of adults in all populations is ~1200, every generation we collect a larger number of eggs for the late populations and a relatively smaller number for the early populations as compared to the control populations (see materials and methods): 24 vials per replicate population for the early populations and 48 vials for the late populations, as opposed to 16 vials for the control populations, with each vial housing approximately 300 eggs. Therefore, the number of eggs collected from the late populations (~14400 eggs) is approximately twice that of the early (~7200 eggs) and thrice that of the control (~4800 eggs) populations. This might have led to an inadvertent selection for higher fecundity in the late populations; also, possibly as a consequence of higher effective population size (Ne), the late populations might experience the lowest extent of inbreeding depression, followed by the early populations, with the control populations experiencing the highest degree of inbreeding depression. If this were true, then the early populations would also be expected to exhibit higher fecundity as compared to the control populations, but that does not seem to be the case.
Therefore, it is unlikely that this reasoning can account for the evolution of higher fecundity in the late populations, even though it cannot be entirely disregarded. The possibility of such a scenario could also be clarified by a cross experiment between the early and the late populations. Additionally, given that fecundity in Drosophila is not constant across the lifespan, the difference in fecundity between the populations observed on days 10-12 post-eclosion might also vary across ages. For instance, in light of the results from a previous study [55], it is possible that the early populations, which exhibit significantly lower fecundity on days 10-12, might exhibit higher fecundity at an earlier stage, and vice versa for the late populations. However, since the fly populations used in the current study are maintained on a non-overlapping 21 day generation cycle (see materials and methods) wherein only eggs laid on day 11 of adulthood are used for the next generation, only eggs laid around this day determine the populations' fitness in this regime. Therefore, fecundity during other life-stages is irrelevant under the current regimen, but will nevertheless be interesting to examine.
Further, we found that females of the late populations live significantly shorter than those from the early and the control populations, while no difference in longevity was observed between males (Fig. 5a-c). The reduction in longevity of the late females as compared to the early and the control females was consistently observed in all four replicate populations maintained under similar environmental conditions. In light of the fecundity and dry-weight results, the observed reduction in longevity of the late females appears to represent the classic trade-off between fecundity and adult lifespan due to the antagonistic pleiotropic effects of the underlying genes [56][57][58]. However, since the results presented here are on virgins, the observed reduction in longevity cannot be explained entirely by such a trade-off. Therefore, even though higher reproductive output may have evolved as an artefact of the selection protocol, reduced longevity in the late females as compared to the early and the control females may have evolved as a correlated response to selection for late evening eclosion and not directly as a consequence of higher fecundity. Interestingly, we also observe that the reduced longevity in the females of the late populations is primarily a consequence of early death (around days 20-40) rather than death during later life-stages, and a similar trend is also observed in the males. The possible reason for these observations remains to be explored.
Thus, we report that selection for late evening eclosion in fruit flies D. melanogaster is associated with the coevolution of several life-history traits in the late populations, while no difference was observed between the early, the early-control and the late-control populations. Such correlations between chronotypes/circadian clocks and life-history traits have been reported earlier. Notably, Yadav and Sharma [59,60] demonstrated that selection for faster pre-adult development leads to the coevolution of shorter clock period, and that the faster developing populations evolve reduced dry-weight, body size, fecundity, starvation and desiccation resistance, and longevity. Similarly, in a separate study on the melon fly Bactrocera cucurbitae, selection on egg-to-adult duration resulted in the coevolution of divergent phases of activity/rest and mating rhythms [40]. Most differences observed in our study, however, correspond to the late populations relative to the control populations, while very little difference was observed between the early and the control populations. Also, most of the life-history traits assayed in the late populations differed by a small magnitude, varying from 2-10 % as compared to the early and the control populations. This is not surprising considering the larger time difference between the selection window of the late populations and the eclosion peak of the control populations, and the proximity of the selection window of the early populations to the eclosion peak of the control populations (see materials and methods). Since evening eclosion is not predominant in the control populations, the late populations would experience a much stronger selection pressure as compared to the early populations, which in turn might drive faster coevolution of life-history traits.
In summary, selection for late evening eclosion leads to lengthening of pre-adult duration without any increase in body-weight at eclosion, increased fecundity associated with greater post-fecundity assay dry-weight loss and reduced virgin female longevity. The observed life-history traits in the late populations being evolved responses to selection is further supported by our observation on the early-control and the late-control populations. That life-history traits of the early-control and the late-control populations did not differ significantly from each other but were different from that of the early and the late populations (in most cases) suggests that the observed life-history trait differences between the early and the late populations are evolutionary response to the imposed selection and are not merely environmentally driven. Furthermore, although the pre-adult and adult life-history traits studied here are known to be highly correlated, enhanced fecundity in the late populations does not seem to be a consequence of higher biomass attained by lengthening of egg-to-adult duration. Thus the differences in adult traits do not seem to be associated with pre-adult trait differences and appear to be driven by independent mechanisms that might have evolved as a consequence of selection.
Conclusions
Thus, in contrast to studies which demonstrated the effect of direct manipulation of the circadian clock on aspects of fitness (see introduction), we report coevolution of life-history traits in independently evolved replicate populations of D. melanogaster exhibiting early and late eclosion chronotypes, suggesting that the genetic architecture underlying eclosion at specific times of the day (eclosion chronotypes) is genetically correlated with several life-history traits, and that these correlations appear to encompass both circadian clock-dependent and clock-independent mechanisms. The extent of the circadian clocks' influence on the observed trait differences, and the underlying genetic architecture, remain to be explored.
Experimental populations and assay conditions
Additional file 1: Figure S2 presents a schematic of the selection protocol employed to generate the early and the late populations from the control populations. Populations selected for early morning and late evening eclosion comprised four replicates each of the early i and the late j (i = j = 1-4), initiated from four replicates of the control k (k = 1-4) whose ancestry details are provided elsewhere [36]. Briefly, the early and the late populations with a given subscript were derived from the control population with the same subscript, and therefore share a common ancestry. For example, the early 1 and the late 1 populations were initiated from the control 1 population, and similarly for the other three replicates. Since our study aims at exploring the evolutionary trajectories of traits in a population, the unit of biological replication is a population, and thus the four populations of each selection type are biological replicates in all our experiments. All 12 populations (four replicates each of early, control and late) were maintained on a 21 day discrete generation cycle, and flies were housed in plexi-glass cages of dimensions 25 × 20 × 15 cm³ with ~1200 individuals per cage (sex ratio ~1:1), and were provided with ad libitum banana-jaggery (BJ) medium. The parental populations were provided with food supplemented with yeast paste (to boost their fecundity) for three days prior to egg collection, and ~300 eggs were collected and dispensed into each culture vial (16 vials for control, 24 for early and 48 for late populations) containing ~6 ml BJ medium. From the initiation of eclosion, which is generally on day 9 (at 25°C) post egg collection, flies that eclosed early in the morning between Zeitgeber Time (ZT) 21-01 (ZT00 and ZT12 represent the times of lights-ON and lights-OFF respectively under LD12:12) for 3-4 consecutive days were collected to form the early populations, while flies that eclosed late in the evening between ZT09-13 formed the late populations. For the control populations, flies were collected once every 24 h for the same 3-4 days and thus comprised individuals emerging throughout these 3-4 days, without any selection imposed on timing of eclosion. The days of initiation and termination of fly collection within the respective selection windows were kept constant for all populations. In other words, if collection of flies for the early populations was started on day 'x' and terminated on day 'y', collection of flies for the control and the late populations was also initiated and terminated on days 'x' and 'y' respectively, so as to ensure that the populations were selected only for eclosion at different gates/times of the day and to avoid any unintended selection for faster or slower pre-adult development. The implementation of the selection protocol and regular maintenance of populations were performed under LD12:12 with ~0.4 W m-2 light intensity during the light phase, 25 ± 0.5°C temperature, and 75 ± 5 % relative humidity.
In addition to the four replicate populations each for the early, the control and the late, we used four replicates each for the two other populations (early-control and late-control; see Introduction). From the control populations, flies emerging in the morning window (ZT21-01) were collected to form the early-control populations and similarly, flies emerging in the evening window (ZT09-13) formed the late-control populations. This procedure was implemented on all four replicates of the control populations for only one generation prior to the assays, and therefore, unlike the early and the late populations, the early-control and the late-control populations were not subjected to any long-term selection protocol.
To minimize the effects of non-genetic inheritance (reviewed in [61]) due to the different selection regimes, all populations were subjected to one generation of standardization with a maintenance protocol identical to that used for the control populations. This was achieved by relaxing selection on timing of eclosion, i.e., collecting all flies that eclosed throughout the first 4 days as for the control populations, while the population size was maintained at ~1200 flies per replicate population. Since the primary purpose of using the early-control and the late-control populations was to assess whether the observed differences between the early and the late populations are evolved responses to selection and not merely environmental in origin, these populations were also subjected to standardization by deriving them from the control populations followed by relaxation of selection for one generation as described above. All assays described in the present study were performed on the progeny of the standardized populations at the 242nd generation (~14 years), either in LD12:12 or DD, or both, with light intensity, temperature and humidity the same as for the maintenance of populations. Fly handling and experiments in the dark were performed under dim red light (λ > 650 nm).
Egg-to-puparium duration assay
Egg-to-puparium duration for all the populations was assayed under two light regimes ─ LD12:12 and DD. After having been provided yeast paste supplemented media for three days, all populations were given media plates for 1 h as a substrate for oviposition. These plates were then replaced by fresh food medium plates for the next 1 h. Eggs laid on these plates were collected, and 30 eggs were dispensed into each vial. A total of 10 such vials were used per replicate population per light regime, making a total of 300 eggs per population per light regime. These vials were transferred to the respective light regimes and monitored for the first pupariation event. After the first puparium was observed, vials were checked every two hours to count the number of puparia formed thereafter, and the assay was terminated when no pupariation event was seen for 24 consecutive hours. It was observed that a small proportion of larvae took relatively longer to pupariate, rendering the egg-to-puparium duration distribution right skewed (Fig. 1a, b; Additional file 1: Figure S3). Mean egg-to-puparium duration cannot be used as a reliable measure for such distributions [62], and therefore we used median egg-to-puparium duration (calculated as the time from egg collection for 50 % of the total pupariation events in a vial). The median egg-to-puparium duration was estimated for every replicate vial and then averaged across vials to obtain the median egg-to-puparium duration for a given replicate population.
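The vial-level median can be computed directly from the two-hourly counts. A minimal sketch in Python (the function name and the example counts are ours, purely illustrative):

```python
from bisect import bisect_left

def median_duration(check_times, new_counts):
    """Time (h from egg collection) by which 50 % of all pupariation events
    in a vial had occurred; check_times are the two-hourly inspection times,
    new_counts the puparia newly seen at each check."""
    cumulative, total = [], 0
    for c in new_counts:
        total += c
        cumulative.append(total)
    # First inspection time at which the running count reaches half the total.
    return check_times[bisect_left(cumulative, total / 2)]

# Hypothetical vial checked every 2 h from 110 h onwards:
times = [110 + 2 * i for i in range(10)]
counts = [1, 2, 4, 6, 7, 5, 3, 1, 0, 1]
print(median_duration(times, counts))  # vial-level median; averaged across vials
```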
Egg-to-adult duration assay
The egg collection protocol and environmental conditions for the egg-to-adult duration assay were identical to those of the egg-to-puparium duration assay. After egg collection and transfer to LD12:12 or DD, eclosion of the first adult fly was monitored, following which vials were subjected to two-hourly checks to count the number of flies that eclosed thereafter. The assay was terminated when no eclosion event was observed for 24 h. To facilitate comparisons between egg-to-puparium and egg-to-adult duration, we used median durations as the measure for analysis. The procedure to estimate median egg-to-adult duration was the same as that described for median egg-to-puparium duration.
Estimation of egg-to-puparium and egg-to-adult survivorship

The egg-collection protocol and environmental conditions for the survivorship assays were the same as for the egg-to-puparium and egg-to-adult duration assays. The proportion of the 30 eggs (total number of eggs dispensed per vial for the assay) that successfully pupariated was used as the measure of egg-to-puparium survivorship, while the proportion of adults that successfully eclosed was used to estimate egg-to-adult survivorship. Individuals that were stuck in the pupal case and died within the pupa were considered not to have eclosed successfully. Percentage survivorship was calculated for every replicate vial and then averaged across vials to obtain the average survivorship per replicate population.
Dry-weight at pupariation
The protocol for egg collection and the subsequent environmental conditions for development under LD12:12 and DD were the same as described for the egg-to-puparium duration assay. From the initiation of the first pupariation event, freshly formed puparia (P1 stage) were collected every 2 h and frozen at -20°C. These puparia were later sorted into 10 replicate groups with 5 puparia in each group, and dried at 70°C for 36 h, after which their dry-weight was assayed. The dry-weight of each group was measured at least thrice to account for instrument error and then normalized by the number of puparia (n = 5). The dry-weight measurements from the 10 groups were then averaged to obtain the mean puparium dry-weight per replicate population.
Dry-weight at eclosion
The protocol for assaying dry-weight at eclosion was the same as that for the dry-weight at pupariation assay, except that, instead of puparia, freshly eclosed (within 2 h of eclosion) adult flies in LD12:12 or DD were used.
Fecundity assay
The populations used in the present study are maintained on a 21 day discrete generation cycle where eggs for the next generation are collected on day 21 post egg collection (average adult age of 11 days). Since only eggs laid around this day determine an individual's contribution to the gene pool of the next generation, and consequently its fitness, we estimated fecundity only under LD12:12 around day 11 (post-eclosion) in the progeny of the standardized populations, which were collected in plexi-glass cages and maintained under LD12:12 in mixed-sex groups similar to those used for regular maintenance of populations. On day 8 (average adult age), flies from the plexi-glass cages were collected, separated using mild carbon dioxide anesthesia and transferred into vials containing ~4 ml BJ media for conditioning at a density of 10 flies/vial (5 of each sex). In parallel, additional sets of conditioning vials were set aside from which flies for the pre-fecundity dry-weight assay were collected later (described in the following section). On day 10, flies from the conditioning vials were sorted into single male-female pairs and transferred into 20 vials/population containing 1 ml BJ medium. After 24 h (day 11), flies were transferred to a fresh set of vials, and the same was repeated on day 12. The average number of eggs laid per female across days 10-12 was used as the measure of mean fecundity/female around day 11. Only vials from which data could be collected for all three days were used; those in which either the male or the female died within the three days were excluded from data analysis.
Estimation of pre-and post-fecundity assay dry-weights
To assess the pre-fecundity assay dry-weight of females, 20 females (for every replicate population) from the separate sets of conditioning vials (which were not used for the fecundity assay) described in the preceding section were frozen at -20°C at the beginning of day 10. Additionally, at the end of the fecundity assay (end of day 12), the females used for the assay were collected and frozen. All flies were then dried at 70°C for 36 h, sorted into groups of 5 individuals each and weighed at least thrice to estimate dry-weight/female. Dry-weight measurements were then averaged across groups to calculate the mean pre- and post-fecundity assay dry-weight/female/replicate population. Further, dry-weight loss during the fecundity assay was estimated as the difference between pre- and post-fecundity assay dry-weight, and was used to normalize the fecundity/female values to calculate fecundity per unit dry-weight lost as an estimate of the biomass-to-egg conversion ratio. This assumes that the biomass lost is entirely converted to eggs laid, which may not necessarily be the case, but it can nevertheless be used as a proxy for assessment of the biomass-to-egg conversion ratio.
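A minimal sketch of this normalization in Python (the function name and the numbers are illustrative, not the study's raw data):

```python
def fecundity_per_unit_loss(mean_eggs, pre_weight_ug, post_weight_ug):
    """Eggs laid per microgram of dry-weight lost during the assay,
    used as a proxy for the biomass-to-egg conversion ratio."""
    return mean_eggs / (pre_weight_ug - post_weight_ug)

# Illustrative values only (not the study's raw data):
print(round(fecundity_per_unit_loss(30.0, 420.0, 220.0), 2))  # -> 0.15 eggs/ug
```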
Longevity assay
Longevity of flies was assayed only in LD12:12, with environmental conditions the same as described previously. Freshly eclosed virgin males and females were collected from the progeny of the standardized populations every 6 h over three consecutive days. On the fourth day, all flies of a given sex and population were mixed and randomly distributed, in groups of 10 flies/vial/sex, into 10 replicate vials containing ~4 ml BJ media. Therefore, every replicate population comprised 20 vials in total, with 10 vials for each sex and each vial housing 10 flies (average age of 2 days). Thereafter, flies were transferred to fresh BJ media every third day, and longevity was estimated by counting the number of dead flies in each vial every 24 h. The assay was continued until all flies died. While care was taken to ensure no flies escaped during transfers to fresh media vials, a few either escaped or were crushed between the cotton plug and the vial, and these were not considered when calculating percentage survivorship for that vial. Similar to egg-to-puparium and egg-to-adult duration, the longevity distribution was also right-skewed, and therefore we used median longevity (time taken for the death of 50 % of individuals in a given vial) as the measure of longevity.
Statistical analyses
All measures of pre-adult duration, pre-adult survivorship, fecundity, dry-weight and longevity were estimated for each replicate vial and then averaged to obtain mean values for the replicate populations. These replicate means served as data for statistical analyses by a randomized block design mixed model analysis of variance (ANOVA), with 'population', 'light regime', 'stage' (at which fecundity was assayed) or 'sex' (whichever was appropriate) as fixed factors and 'replicate population' as a random factor. All percentage and ratio values were arcsine square root and log transformed respectively before subjecting them to ANOVA. Post hoc multiple comparisons were performed at a significance level (α) of 0.05 by the method of Tukey's HSD. All statistical analyses were implemented in STATISTICA for Windows, Release 5.0B (Statsoft 1995).
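For readers without STATISTICA, one way to approximate this randomized block design is a linear mixed model with the replicate block as a random intercept. A sketch in Python with statsmodels, using toy data (every value below is fabricated for illustration only):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Toy data: one population-level mean per replicate block per light regime.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "population": np.repeat(["early", "control", "late"], 8),
    "regime": np.tile(np.repeat(["LD", "DD"], 4), 3),
    "block": np.tile([1, 2, 3, 4], 6),         # replicate population as block
    "duration": rng.normal(120, 3, size=24),   # e.g. median duration in hours
})
# Proportions would be transformed first, e.g. np.arcsin(np.sqrt(p)),
# and ratios log transformed, mirroring the paper's procedure.

# 'population' and 'regime' are fixed effects; block enters as a random intercept.
model = smf.mixedlm("duration ~ population * regime", df, groups=df["block"])
print(model.fit().summary())
```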
Availability of data and materials
The dataset(s) supporting the conclusions of this article is(are) included within the article and as supplementary online material. | 2023-01-14T14:57:41.207Z | 2016-02-27T00:00:00.000 | {
"year": 2016,
"sha1": "ca03324b909398e02b440e33a7a741da829ae260",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1186/s12862-016-0622-3",
"oa_status": "GOLD",
"pdf_src": "SpringerNature",
"pdf_hash": "ca03324b909398e02b440e33a7a741da829ae260",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": []
} |
234088530 | pes2o/s2orc | v3-fos-license | Preparation and Characterization of Activated Carbon with (ZnCl2 - Activated) from (PET) Bottle Waste for Removal of Metal ions (Cu+2) in Aqueous Solution
Activated carbon derived from polyethylene terephthalate (PET) by chemical and physical activation was assessed for the adsorption of metal ions (Cu+2) under varied conditions, namely the concentration of the metal ion in solution and the contact time. The chief objective is to reduce the toxicity posed by the above metal and to reduce the environmental contamination caused by bottle waste after disposal. In this work, activated carbons were prepared from bottle waste by carbonization and activation. The carbonization temperatures were 500°C and 900°C under argon gas at a flow rate of 150 cm3 min−1, and ZnCl2 was utilized as the activating agent. The Langmuir and Freundlich isotherm models were studied: the Langmuir model was more appropriate when the carbonization temperature was 900°C, whereas for carbonization at 500°C the Freundlich model fitted best. Pseudo-first-order and pseudo-second-order kinetics were also studied; the pseudo-second-order model was more suitable to describe the adsorption of (Cu+2) when the carbonization temperature was 900°C. In general, the (PET) waste activated with ZnCl2 at 900°C adsorbed better than that activated at 500°C.
Introduction
A well-known method of recycling polymer waste is to manufacture inexpensive products or consumer goods. The alternative is to produce a fundamentally new product, such as activated carbon, which is effective and cheap [1]. AC is used in advanced technology and to meet many water quality requirements. For thousands of years, AC has been used to improve the quality of drinking water, serving as an adsorption medium. In its various forms (powder and granules), AC is used to remove colour-producing heterocyclic compounds as well as pollutant precursors during water cleaning, and to capture trace pollutants, whether organic or inorganic, including taste and odor compounds [2]. Organic materials of biological origin have been used to obtain activated carbon in various forms; in other words, organic materials such as wood, banana pith, coconut shells, and corncobs are converted to activated carbon [3]. The rapid expansion of heavy metal related industries such as those involved in electroplating, mining, smelting, battery manufacture, tanneries, paint, pesticides, printing, and photography has caused a serious environmental problem due to incomplete heavy metal treatment [4]. Because of their acute toxicity and persistence in nature, heavy metals have proven harmful to both the environment and human health [5]. A heavy metal is a metallic chemical element with a relatively high density. Most heavy metals are toxic even at low concentrations, and because they tend to bio-accumulate they are dangerous [6]. On the other hand, the increase of plastic waste in large quantities, while not a direct threat to the environment, is a problem of great concern due to the amount of solid waste generated that cannot be decomposed [7]. Usually, a person obtains important minerals, including heavy metals, through various nutrients, obtaining the minerals necessary for metabolism or for strengthening the immune system. It has been reported that some minerals such as copper, selenium, and zinc play important and beneficial roles in human metabolism. For example, copper at low concentration acts as a cofactor for various redox cycle enzymes; however, at high concentrations it disrupts human metabolism, leading to anemia and irritation of the liver, stomach, kidneys, and intestines. Heavy metal toxicity can also disrupt or damage the mental and central nervous system, alter blood composition, and damage the lungs, kidneys, liver, and other important organs. It has also been found that damage to the human respiratory system develops after exposure to high levels of metals [8]. A common treatment approach is to remove metal ions from industrial water or wastewater through adsorption, bio-absorption, chemical precipitation, solvent extraction, reverse osmosis, filtration, or other processes. Among these treatments, adsorption stands out because it is efficient and at the same time economical, and it is therefore widely used to remove heavy metal ions from aqueous solutions. Activated carbon is a suitable adsorbent for removing heavy metals from industrial wastewater or sewage due to its high surface area, the chemical nature of its surface, its fine porous structure, and its ease of manufacture at low cost [9].
The aim of this study was to prepare physically and chemically activated carbon, to characterize the carbonaceous materials obtained from PET waste, and to apply them for the adsorptive removal of (Cu+2) from aqueous solution.
Preparation of the activated carbon
PET (polyethylene terephthalate) from drink bottles was used as the raw material for the carbonization process. After thorough washing and drying, the bottles were cut into small pieces (0.1-2 cm). The pieces were then carbonized in an electric oven at 500ºC with a 2-hour hold to obtain the carbon. Carbonization was carried out in an oxygen-free atmosphere under argon gas at a flow rate of 150 cm³ min⁻¹. The carbon resulting from carbonization, sieved through mesh number 80-160 (about 0.098-0.2 mm), is sample B1; impregnating it with ZnCl2 and mixing on a magnetic stirrer for 4 hours gives sample C1. The carbon produced from carbonization (B1) and the activated carbon resulting from the ZnCl2 impregnation (C1) were also carbonized at 900ºC, giving carbons B2 and C2, respectively, as shown in Table 1.
Preparation of metal solution
The aqueous (Cu+2) solution was prepared by dissolving 1 g of Cu in 50 cm³ of 5 M HNO3 and diluting to 1 L in a volumetric flask with deionized water. Solutions with different metal ion concentrations (5, 10, 15, 20, 25, and 30 ppm) were then prepared, and the equilibrium concentration of the solutions was determined with an atomic absorption spectrometer (air-acetylene flame). The calibration curve relating absorbance to (Cu+2) concentration (ppm) is shown in Figure 1.
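In practice such an AAS calibration is a straight-line fit that is inverted to read unknown samples; a short Python sketch with fabricated absorbance readings (not the study's data):

```python
import numpy as np

conc = np.array([5.0, 10.0, 15.0, 20.0, 25.0, 30.0])        # Cu(2+) standards, ppm
absorbance = np.array([0.05, 0.11, 0.16, 0.21, 0.27, 0.32])  # fabricated readings

slope, intercept = np.polyfit(conc, absorbance, 1)           # linear calibration

def conc_from_absorbance(a):
    """Invert the calibration line to estimate a sample concentration (ppm)."""
    return (a - intercept) / slope

print(round(conc_from_absorbance(0.18), 1))
```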
Adsorption equilibrium and isotherms
To determine the adsorbed amount, changes in the metal concentration (mg/L) in solution and in the contact time (hour) were monitored, and the uptake was calculated using the balance equation [10,11]:

qe = (C0 - Ce) V / m    (1)

Here V is the volume of the metal ion solution (L), m is the weight of the prepared carbon (g), C0 is the initial concentration of the adsorbate (mg/L), and Ce is the concentration of the adsorbate at equilibrium (mg/L).
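A small Python helper for Eq. (1); the numbers below are illustrative only and not taken from the study:

```python
def q_e(c0, ce, volume_l, mass_g):
    """Equilibrium adsorption capacity, Eq. (1): mg of adsorbate per g of carbon."""
    return (c0 - ce) * volume_l / mass_g

# Illustration: 30 ppm Cu(2+) falling to 9 ppm, with 50 mg carbon in 50 mL solution.
print(q_e(30.0, 9.0, 0.050, 0.050))  # -> 21.0 mg/g
```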
Langmuir isotherm model. The model proposed by Langmuir assumes a homogeneous surface on which sorption forms a monomolecular layer and the active sites are energetically identical; it treats adsorption as a chemical phenomenon [12]. The Langmuir formulation is:

qe = qmax b Ce / (1 + b Ce)

and its linear form is:

Ce/qe = 1/(qmax b) + Ce/qmax

where qe represents the amount of metal accumulated (mg/g), qmax is the maximum metal sorption (mg/g), and b is the ratio of adsorption and desorption rates (mL/mg).
Freundlich isotherm model. The model proposed by Freundlich (1906) is based on an equation that accommodates the heterogeneity of surfaces with a wide range of site affinities. It assumes that the active sites and their energies are distributed exponentially, so that the stronger binding sites are occupied first and the binding strength decreases with increasing site occupancy [11,13]. The Freundlich formulation is:

qe = Kf Ce^(1/n)

and the well-known linear form of the Freundlich isotherm is:

log qe = log Kf + (1/n) log Ce

where Kf and (1/n) are empirical constants dependent on several environmental factors.
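Both linearized isotherms reduce to straight-line fits, so the constants can be recovered with an ordinary least-squares fit. A minimal Python sketch, using fabricated (Ce, qe) pairs purely for illustration (not the study's measurements):

```python
import numpy as np

# Fabricated equilibrium data: Ce (mg/L) and qe (mg/g), illustration only.
ce = np.array([1.2, 3.5, 6.8, 10.4, 15.1, 20.3])
qe = np.array([4.8, 11.0, 16.5, 20.1, 23.4, 25.2])

# Langmuir linear form: Ce/qe = 1/(qmax*b) + Ce/qmax
slope_l, intercept_l = np.polyfit(ce, ce / qe, 1)
qmax = 1.0 / slope_l
b = slope_l / intercept_l          # since intercept = 1/(qmax*b)

# Freundlich linear form: log qe = log Kf + (1/n) log Ce
slope_f, intercept_f = np.polyfit(np.log10(ce), np.log10(qe), 1)
kf = 10.0 ** intercept_f
n = 1.0 / slope_f

print(f"Langmuir:   qmax = {qmax:.1f} mg/g, b = {b:.3f} L/mg")
print(f"Freundlich: Kf = {kf:.2f}, 1/n = {slope_f:.2f}, n = {n:.2f}")
```

Comparing the correlation coefficients of the two straight-line fits is how the text decides which model describes a given carbon better.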
Adsorption kinetics
Pseudo-first-order kinetic model. The pseudo-first-order rate expression can be written as:

dqt/dt = k1 (qe - qt)

Using the initial condition qt = 0 at t = 0, this equation can be integrated to give:

log(qe - qt) = log(qe) - (k1 / 2.303) t

where k1 (1/h) is the kinetic constant of pseudo-first-order adsorption, and qe and qt are the amounts adsorbed (mg/g of AC) at equilibrium and at time t (h), respectively.
Pseudo-second-order kinetic model.
The pseudo-second-order rate expression is, in general, defined by the following equation [15,16]:

dqt/dt = k2 (qe - qt)^2    (9)

Integrating Eq. (9) and again using the initial condition qt = 0 at t = 0, the following equation is obtained:

t/qt = 1/(k2 qe^2) + t/qe    (10)

in which qe and qt have the same meaning as before, and k2 (g/mg·h) is the corresponding kinetic constant.
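Fitting the linearized Eq. (10) is likewise a straight-line regression of t/qt against t; a Python sketch with fabricated kinetic data (not the study's raw values):

```python
import numpy as np

# Fabricated uptake data: qt (mg/g) at contact time t (h), illustration only.
t = np.array([1.0, 2.0, 4.0, 8.0, 16.0, 24.0, 48.0])
qt = np.array([6.0, 10.5, 16.0, 24.0, 32.0, 36.5, 40.0])

# Eq. (10): t/qt = 1/(k2*qe^2) + t/qe
slope, intercept = np.polyfit(t, t / qt, 1)
qe = 1.0 / slope                 # equilibrium uptake, mg/g
k2 = slope ** 2 / intercept      # since intercept = 1/(k2*qe^2)

print(f"qe = {qe:.1f} mg/g, k2 = {k2:.4f} g/(mg h)")
```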
Equilibrium studies
The (PET-AC) adsorption capacities for the metal ion (Cu+2), calculated using equation (1) for the 4 samples (B1, C1, B2, C2), were found to increase with increasing initial metal ion concentration, as shown in Figure 2. This is attributed to increasing saturation of the adsorbent surface as the initial metal ion concentration increases [17]. The relationship between contact time and the adsorption capacity of the activated carbons prepared with ZnCl2 is shown in Figure 3. From the results obtained, it is clear that metal ion removal increased as contact time increased. The adsorption capacity at an initial Cu ion concentration of 30 ppm and 48 hr contact time was 41.752, 22.086, 16.144, and 7.700 mg/g for samples C2, B2, C1, and B1, respectively. Increasing the contact time further did not change the removal percentage, but resulted in desorption of the metal ions from the AC surface. The results also show that the metal ions reached equilibrium at different times on the different carbons (B1 and C1) because of their specific properties [18].

Figure 2. Effect of initial metal ion concentration on the adsorption capacity of ACs for adsorption of (Cu+2), at time = 1 hr, pH = 3, temp. = 30ºC and ACs dosage = 50 mg.

Figure 3. Effect of contact time on the adsorption capacity of ACs for adsorption of (Cu+2), at metal ion concentration = 30 ppm, pH = 3, temp. = 30ºC and ACs dosage = 50 mg.
Adsorption Isotherm results
The shape of the isotherms is an experimental tool to diagnose the type of adsorption. Equilibrium isotherm models are used to describe the adsorption data; the thermodynamic parameters underlying these models give insight into the adsorption mechanism, the adsorbent affinity, and the surface properties. As more applications are developed, obtaining the best-fitting equilibrium isotherm becomes increasingly important, and more precise and detailed isotherm descriptions are required for the design of adsorption systems [19]. The Langmuir isotherm can be used to predict whether an adsorption system is "favourable" or "unfavourable" through the separation factor, defined as [20]:

RL = 1 / (1 + b C0)    (11)

The Langmuir constant, b, was used to calculate the separation factor RL. When 0 < RL < 1, the isotherm is favourable; when RL > 1, unfavourable; when RL = 1, linear; and when RL = 0, irreversible. The RL values were found to be 0.259, 0.173, 0.152, and 0.285 for B1, C1, B2, and C2 respectively, so the adsorption of the metal ion (Cu+2) is favourable, as shown in Figure 4. For all types of activated carbons, the linear Freundlich plot of (log Ce) versus (log qe) shown in Figure 5 gives high correlation coefficients (R²): 0.979, 0.991, 0.986, and 0.881 for B1, C1, B2, and C2 respectively [21].
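A small check of the separation factor: with a hypothetical Langmuir constant b ≈ 0.095 L/mg at C0 = 30 mg/L, Eq. (11) reproduces the RL ≈ 0.259 reported for B1 (the value of b here is our assumption, chosen only to illustrate the computation):

```python
def separation_factor(b, c0):
    """R_L = 1 / (1 + b * C0); 0 < R_L < 1 indicates favourable adsorption."""
    return 1.0 / (1.0 + b * c0)

b = 0.095   # hypothetical Langmuir constant, L/mg (assumption, not from the study)
c0 = 30.0   # initial Cu(2+) concentration, mg/L
rl = separation_factor(b, c0)
print(f"R_L = {rl:.3f} -> {'favourable' if 0 < rl < 1 else 'not favourable'}")
```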
Kinetic study
Two kinetic models were utilized to analyse the adsorption process: pseudo-first-order and pseudo-second-order. These models were applied to the adsorption of (Cu+2) onto the surfaces of B1, C1, B2, and C2 under the following conditions: initial metal ion concentration 30 mg/L, temperature 30 ºC, pH = 3, and ACs dosage 50 mg. The applicability of a particular rate equation is judged from the value of the correlation coefficient R². The R² values for the pseudo-first-order and pseudo-second-order models were close, as shown in Figure 6, but the values for the pseudo-first-order model are not satisfactory. Therefore, the pseudo-second-order model describes the adsorption kinetics more suitably [22].
Conclusions
Activated carbons were prepared from polyethylene terephthalate to assess the adsorption of the heavy metal ion (Cu+2), using physical and chemical activation. In the carbon preparation process, 500 and 900 ºC were chosen as the optimal pyrolysis temperatures, with ZnCl2 activation. From (PET) bottle | 2021-05-10T00:03:55.026Z | 2021-02-01T00:00:00.000 | {
"year": 2021,
"sha1": "051491d6a8f8bd13c0fe4f410f06e936ae2d7ada",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1088/1757-899x/1094/1/012131",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "61af96fa2ac95fe551c2c75294f18ea11a138e39",
"s2fieldsofstudy": [
"Materials Science",
"Environmental Science",
"Chemistry"
],
"extfieldsofstudy": [
"Chemistry",
"Physics"
]
} |
258920432 | pes2o/s2orc | v3-fos-license | Non-medical prescription opioid use among high school students in 38 U.S. States
Highlights
• The lifetime NMPOU was 15%, and ranged from 9% to 23% across states.
• Nearly half the students who reported lifetime NMPOU also reported recent use.
• There were modest differences in lifetime and recent NMPOU among girls and boys.
• Recent NMPOU was associated with past 30-day alcohol and cannabis use.
Introduction
Non-medical prescription opioid use (NMPOU) can lead to fatal or nonfatal overdose (Gaither et al., 2018; Mattson et al., 2018; Wilson et al., 2020), and can also be a stepping stone to heroin use and drug injection (Cerdá et al., 2015; Jones et al., 2020; McCabe et al., 2021). As described below, large, nationally representative surveys of adolescents suggest that estimates of lifetime and annual NMPOU are high. Given the risks posed by adolescent NMPOU, the present study provides a more comprehensive understanding of the epidemiology of use to inform prevention strategies.
The 2019 National Youth Risk Behavior Survey (YRBS) indicates that 14% of US high school students report lifetime NMPOU, with significantly higher prevalence among girls (16%) compared to boys (12%) (Centers for Disease Control and Prevention, n.d.). Data from the 2021 Monitoring the Future survey show that the past 12-month prevalence of non-medical OxyContin use among high school seniors is 0.9% (Johnston et al., 2022). Finally, a study based on the 2015-2016 National Survey on Drug Use and Health (NSDUH) data reported that the annual prevalence of NMPOU among 12-17-year-olds was 21%. The study further showed that at least half of the youth reporting past-year NMPOU had also used tobacco or cannabis (Hudgins et al., 2019).
Given the high prevalence of NMPOU, there is a clear need to develop a more comprehensive understanding of the epidemiology of adolescent opioid use. In particular, estimates of the frequency of past 30-day use could provide insights about the scope of the problem. This information is not known because questions to assess current use have not historically been included in behavioral health surveys. Additionally, there is limited clarity about demographic or geographic subpopulations that might be at higher risk for NMPOU. Boys are more likely to report use of all of the commonly used drugs, but a few studies on past 12-month NMPOU noted that there were no sex differences (Carmona et al., 2020). A few studies noted variation in NMPOU across different cities, whereas others noted higher prevalence in states with large rural areas (Keyes et al., 2014; Lipari et al., 2017). To address knowledge gaps and generate information to guide program and policy development, we describe sex differences in adolescent NMPOU, and also summarize variation in use across states.
The main purpose of this study is to examine NMPOU among US high school students, with focus on frequency of lifetime and past 30-day use. Several states participating in the YRBS asked about past 30-day NMPOU for the first time in 2019, enabling an examination of recent NMPOU. Secondarily, we consider whether recent NMPOU may be part of a constellation of polysubstance use by assessing recent NMPOU in association with recent use of alcohol and cannabis (Jessor, 2018;Tomczyk et al., 2016). Our objectives were to: [1] identify states with high prevalence of lifetime and past 30-day NMPOU, with a focus on examining sex differences in use; and [2] examine past 30-day NMPOU in association with alcohol and cannabis use.
Youth Risk Behavior Survey (YRBS)
To monitor a broad range of health behaviors, including substance use, the Centers for Disease Control and Prevention (CDC) supports the Youth Risk Behavior Survey. The YRBS includes biennial, school-based surveys of high school students, and data are representative of 9th-12th graders at the national, state, and district levels. We used data from the 2019 State YRBS. State YRBS samples are generated based on a two-stage cluster sampling design, with schools and classrooms within schools selected for participation. Analysis was deemed exempt from review by the Johns Hopkins Bloomberg School of Public Health Institutional Review Board.
Measures
Lifetime NMPOU was assessed using the question, "During your life, how many times have you taken prescription pain medicine without a doctor's prescription or differently than how a doctor told you to use it?" Past 30-day NMPOU was assessed using the question, "During the past 30 days, how many times have you taken prescription pain medicine without a doctor's prescription or differently than how a doctor told you to use it?" Both instructed respondents to "Count drugs such as Codeine, Vicodin, OxyContin, Hydrocodone, and Percocet." Response options for both items included: 0 times, 1-2 times, 3-9 times, 10-19 times, 20-39 times, and 40 or more times. We derived binary measures for both variables, i.e., any versus no lifetime NMPOU and any versus no past 30-day NMPOU. We also created a new variable representing the number of times used in the past 30 days as 0, 1-2, or 3 or more.
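To make the variable construction concrete, the sketch below derives the binary and three-level measures from the response options; the column names (q_rx_life, q_rx_30d) are hypothetical stand-ins, not the official YRBS variable names.

import pandas as pd

raw = pd.DataFrame({
    "q_rx_life": ["0 times", "1-2 times", "3-9 times", "0 times"],
    "q_rx_30d":  ["0 times", "1-2 times", "10-19 times", "0 times"],
})

def any_use(col):
    return (col != "0 times").astype(int)  # binary: any vs. no use

raw["nmpou_life"] = any_use(raw["q_rx_life"])
raw["nmpou_30d"] = any_use(raw["q_rx_30d"])

# Three-level past 30-day frequency: 0, 1-2, or 3+ times.
freq_map = {"0 times": "0", "1-2 times": "1-2"}
raw["nmpou_30d_freq"] = raw["q_rx_30d"].map(freq_map).fillna("3+")
print(raw)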
Covariates include alcohol use, cannabis use, and demographic factors. Alcohol and cannabis use were derived from responses to questions about frequency of recent alcohol use (i.e., "During the past 30 days, on how many days did you have at least one drink of alcohol?") and cannabis use (i.e., "During the past 30 days, how many times did you use marijuana?"). We used binary measures of past 30-day use of alcohol and cannabis. Demographic measures included sex (male or female), grade level (9th, 10th, 11th, 12th), and race/ethnicity (Hispanic/Latino youth of any race, non-Hispanic White, non-Hispanic Black, and all others). The "other" category included youth who identified as Multi-Racial, Native Hawaiian or Other Pacific Islander, American Indian/Alaska Native, and Asian.
Lifetime NMPOU across states (38 States)
The first set of analyses focused on state estimates of lifetime NMPOU among boys and girls. The analytic sample was comprised of 38 states. Of the 12 states not in the sample, four did not participate in the YRBS (i.e., MN, OR, WA, WY), two (DE, IN) did not achieve a sufficiently high response rate to be included in YRBS data (i.e., 60%), two did not provide permission to distribute (MA, OH), and four (NH, NJ, NY, VA) did not include the item on lifetime NMPOU (Underwood et al., 2020). We pooled the data and presented descriptive analyses. Using state-specific data, we generated estimates of lifetime NMPOU in each state for boys and girls, and conducted chi-square tests to determine whether sex differences were statistically significant. We present weighted prevalence estimates and 95% confidence intervals (CIs).
Past 30-day NMPOU (8 States)
The analytic sample for analyses of recent NMPOU included a subsample of the 38 states that included an item on past 30-day use. Data from the 8 states (i.e., AK, GA, HI, MI, MO, NE, NV, NM) were pooled. After descriptive analyses, we generated prevalence estimates for past 30-day NMPOU for the full sample and for each state separately. Next, we restricted the sample to those reporting any lifetime use and conducted sex-stratified, multivariable logistic regression to estimate the odds of recent NMPOU in association with recent use of alcohol and cannabis. We generated odds ratios (ORs) and 95% CIs, and models were adjusted for race/ethnicity, grade, and state. All analyses were conducted in SAS Studio version 3.7 using the sampling weights in the YRBS data to account for sampling probabilities and nonresponse (Brener et al., 2013).
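Although the analyses here were run in SAS, the logistic model has the shape sketched below. This is a minimal approximation that treats the sampling weights as frequency weights for point estimates only; proper YRBS variance estimation also requires the strata and primary sampling units. All inputs are simulated placeholders.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([
    np.ones(n),                 # intercept
    rng.integers(0, 2, n),      # past 30-day alcohol use
    rng.integers(0, 2, n),      # past 30-day cannabis use
])
y = rng.integers(0, 2, n)       # past 30-day NMPOU (binary)
w = rng.uniform(0.5, 2.0, n)    # sampling weights

model = sm.GLM(y, X, family=sm.families.Binomial(), freq_weights=w)
res = model.fit()
print(np.exp(res.params))       # odds ratios for each column of X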
Description of samples
There were 151,910 high school students in the full sample (lifetime NMPOU) and 28,439 in the subsample (recent NMPOU). Both samples were balanced on sex and grade level (Table 1). In the full sample, 46.7% were White, 28.3% were Hispanic/Latino, and 15.1% were Black. The composition by race in the subsample was slightly different; 52.1% were White, 20.3% were Black, and 16.6% were Hispanic/Latino.
In the full sample, the large majority reported no lifetime NMPOU (85.1%), 8% reported having used three or more times, and 6.9% reported having used just once or twice. In the subsample, 91.3% reported no past 30-day NMPOU, 4.5% reported having used once or twice, and 4.2% reported having used three or more times. Past 30-day use of alcohol and cannabis was similar in both samples; more than one-fifth reported alcohol use and approximately 18% reported cannabis use.
Lifetime NMPOU (38 States)
Prevalence estimates of lifetime NMPOU varied widely by state, with a range of 8.6-23.2%. The range in state prevalence estimates was 9.4-22.7% for girls and 8.6-23.2% for boys (Fig. 1). Eight states were in the top quartile for both girls and boys; those states were: Nevada, Arizona, and New Mexico in the West; Missouri in the Midwest; and Arkansas, Louisiana, Mississippi, and Alabama in the South. In most states, there was no difference in lifetime NMPOU among girls compared to boys. Point prevalence estimates were 1.1 to 1.6 times higher among girls versus boys (p < 0.05) in 11 of the 38 states (i.e., CA, FL, GA, ID, MD, MT, NE, NV, ND, PA, TX), but interval estimates overlapped for 10 of those 11 states. The exception was Florida, where girls were significantly more likely to report lifetime NMPOU than boys, i.e., 16.2% (95% CI: 14.3, 18.0%) versus 11.6% (95% CI: 10.3, 12.9%).
Past 30-day NMPOU (8 States)
Eight states included an item on past 30-day use, i.e., AK, GA, HI, MI, MO, NE, NV, and NM. Among students in those states, 45% of those reporting any lifetime NMPOU also reported past 30-day NMPOU. Table 2 shows the past 30-day prevalence of NMPOU among those reporting lifetime use. Estimates ranged from 33.0% to 50.7% for girls (median = 42.6%) and 40.7% to 52.3% for boys (median = 45.8%). For both girls and boys, the state prevalence estimates were highest in New Mexico (50.7% for girls, 52.3% for boys). There were no statistically significant sex differences in past 30-day NMPOU in any of the states.
Estimates of the frequency of past 30-day NMPOU among girls and boys who reported any lifetime NMPOU in the eight states are presented in Fig. 2. For most states, more than 50% of students reported no past 30-day NMPOU. The exceptions were New Mexico for girls (49.2%), and Georgia and New Mexico for boys (48.9% and 47.7%). Among girls in three states (i.e., MO, NV, and NM), more than one-fifth indicated having used three or more times in the past 30 days. Among boys, over one-fifth reported having used 3 or more times in the past 30 days in seven states, the exception being Alaska. Georgia was the only state where there were statistically significant sex differences in past 30-day frequency of NMPOU. One-third of girls in Georgia reported having used 1-2 times and 15.8% reported using 3 or more times. By contrast, 18.8% of boys reported having used 1-2 times and 32.3% reported using 3 or more times.
Past 30-day NMPOU in association with alcohol and cannabis use
Analyses of past 30-day NMPOU in association with alcohol and cannabis use were restricted to students who reported any lifetime NMPOU. Regression analyses indicate that past 30-day NMPOU was significantly associated with past 30-day alcohol and cannabis use for boys and girls (Table 3). The odds of past 30-day alcohol use among students who reported any past 30-day NMPOU (versus no recent use) were 5.1 (95% CI: 4.3, 6.1). The magnitude of the association was higher for boys (OR: 8.5; 95% CI: 6.5, 11.0) than for girls (OR: 3.5; 95% CI: 2.8, 4.4). The odds of past 30-day cannabis use were 3.7 (95% CI: 2.8, 4.8) among those who reported past 30-day NMPOU (versus no use). As with alcohol, the magnitude of the association was higher among boys (OR: 5.1; 95% CI: 3.6, 7.2) than girls (OR: 2.8; 95% CI: 2.0, 3.9).
The next series of analyses estimated the odds of past 30-day alcohol and cannabis use in association with the frequency of past 30-day NMPOU among students who reported any lifetime use, with statistical adjustment for grade, sex, state, and race/ethnicity. Results indicate strong associations between NMPOU and alcohol or cannabis use. Students who reported having used prescription opioids non-medically just 1-2 times in the past 30 days had 3.4 times the odds of past 30-day alcohol use (95% CI: 2.5, 4.7), and 2.9 times the odds of past 30-day cannabis use (95% CI: 2.1, 4.0) relative to those with no past 30-day NMPOU. Odds ratios were higher for students who reported more frequent NMPOU (i.e., 3 or more times): 9.2 for alcohol (95% CI: 6.6, 12.8) and 4.9 for cannabis (95% CI: 2.8, 8.6). The magnitudes of the associations of NMPOU with alcohol and cannabis were higher among boys than girls. In fact, the odds of alcohol use among boys reporting 3 or more times of NMPOU (OR: 18.4; 95% CI: 10.4, 32.6) were over threefold higher than the odds for girls (OR: 5.2; 95% CI: 3.3, 8.0). The odds of past 30-day cannabis use among students reporting 3 or more times of NMPOU were twice as high among boys (OR: 7.5; 95% CI: 3.9, 14.6) as among girls (OR: 3.2; 95% CI: 1.8, 5.8).
Discussion
Our goal was to present new information about patterns of nonmedical prescription opioid use (NMPOU) among US high school students. Using data from CDC's State YRBS program (2019), we investigated the prevalence of NMPOU, and also estimated associations between NMPOU with past 30-day alcohol and cannabis use. The lifetime and past 30-day prevalence estimates for NMPOU were high, respectively 15% and 9%. We observed a strong and statistically significant association between recent NMPOU and alcohol or cannabis use for boys and girls.
Data from all states combined show that the vast majority of US high school students reported no lifetime NMPOU. Of the 15% who did, 7% used once or twice only and 8% used three or more times. There was substantial variation in lifetime NMPOU among high school students across the 38 states, with estimates ranging from 9% to 23%. States with estimates of lifetime NMPOU in the top quartile (greater than 17%) were Nevada, Arizona, New Mexico, Missouri, Arkansas, Louisiana, Mississippi, and Alabama. The only state with a statistically significant sex difference was Florida, where girls had a higher prevalence of use than boys.
Table 2
Past 30-day non-medical prescription opioid use among high school students who reported any lifetime use in eight states, by sex (N = 28,439). Note: Chi-square tests did not indicate that sex differences were statistically significant.
Data from the 8 states with information on past 30-day NMPOU provide new information about recent use. The state-pooled data show that 45% of lifetime users also reported recent use, and 4% of all students indicated having used 3 or more times in the past 30 days. Statewide estimates of recent use among students who reported lifetime use ranged from 33% to 51% for girls (median = 43%) and 41% to 52% for boys (median = 46%). Past 30-day use among lifetime users exceeded 50% among girls in New Mexico, and among boys in Georgia, Hawaii, and New Mexico.
These findings suggest that nearly one-half of high school students who misuse prescription opioids in their lifetime have used multiple times and also that they have used in the past 30 days. The frequency and recency of NMPOU should raise public health concern about the likelihood of escalation in use or progression to other opioids, especially in states with the highest prevalence of use. Our results highlight the importance of examining more detailed measures of NMPOU and also of state and local surveillance of adolescent NMPOU.
We observed strong and statistically significant associations between past 30-day NMPOU and both alcohol and cannabis use. Students who reported past 30-day NMPOU had more than three times the odds of past 30-day alcohol use (OR: 5.1; 95% CI: 4.3, 6.1) and cannabis use (OR: 3.7; 95% CI: 2.8, 4.8). Associations were stronger for boys than for girls. We also observed a dose-response relationship: higher frequency of NMPOU was associated with increased odds of alcohol and cannabis use. These findings provide initial evidence that NMPOU may be part of a constellation of polysubstance use, meaning that NMPOU could be associated with use of multiple drugs. This finding is consistent with previous studies (Carmona et al., 2020; Fiellin et al., 2013), and emphasizes that NMPOU should be included in strategies for the primary prevention of substance use.
These findings provide new information about NMPOU among adolescents but are not representative of all US high school students or of all US adolescents. Findings for lifetime NMPOU reflect data from high school students in 38 states. States that did not participate in the YRBS were excluded, as were participating states that did not achieve a response rate of 60% and/or that did not include a question on lifetime NMPOU. Findings on past 30-day NMPOU were further limited to 8 states that inquired about recent use. Additionally, results cannot be generalized to adolescents who are not attending high school, a population with higher levels of substance use. However, 95% of youth aged 14 to 17 years were enrolled in school in 2019 (National Center for Education Statistics, n.d.).
This study has some limitations. We cannot differentiate between students who misused prescription opioids for medical purposes (additional pain relief) from those who misused them for recreational purposes. Additionally, the questions assessing lifetime and current prescription opioid misuse refer to prescription pain medicine; however, the questions provide examples of opioid-containing prescription medications only. Therefore, if students considered nonopioid prescription pain medications when answering, an overestimation of prescription opioid misuse prevalence might have occurred. Finally, although the strong association between alcohol use and prescription opioid use represents a risk for overdose, YRBS data provide information about concurrent use but not simultaneous use. We cannot address the question of whether adolescents co-use alcohol and prescription opioids.
Nonetheless, the prevalence of past 30-day NMPOU among students reporting lifetime NMPOU is high, and its association with alcohol and cannabis use suggests implications for practice. The findings indicate the need for communicating with youth about misuse of prescription opioids. School and community-based programs are important avenues for primary prevention, as are clinical settings. Screening for opioid misuse is recommended as part of clinical examination, particularly among youth who report alcohol and cannabis use. Several studies have shown that screening and brief interventions (SBI) are effective in reducing adolescent alcohol and cannabis use in schools, primary care settings, and emergency departments (Bernstein et al., 2009; D'Amico et al., 2018; Lunstead et al., 2017). Although more needs to be known about the effectiveness of SBI for NMPOU among adolescents, it may help prevent youth from initiating or escalating opioid use (Hadland, 2019).
Our study contributes to the growing literature on the opioid crisis among adolescents, particularly NMPOU. These findings underscore the need for youth-focused clinical, prevention and intervention strategies. A better understanding of geographical variations in NMPOU, as well as how socio-contextual factors relate to adolescent opioid use is needed to help address the root causes of opioid use among adolescents. Additional research should also identify factors that increase risk for use, escalation, and/or that increase availability, such as prescribing rates, outlier prescribing, and widespread overprescribing. Finally, state priority and public health planning for opioid prevention should include youth.
Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Data availability
The data were derived from the following resources available in the public domain: Youth Risk Behavior Surveillance System (YRBSS) available at YRBSS Data and Documentation (cdc.gov). | 2023-05-27T15:16:47.302Z | 2023-05-01T00:00:00.000 | {
"year": 2023,
"sha1": "cf1cb1abb81e1e5aed68900a855d030c4b0e6f43",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.abrep.2023.100498",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "b4c1c1297355871de376fe6be3f4e9dfa019b4ba",
"s2fieldsofstudy": [
"Medicine",
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
119235308 | pes2o/s2orc | v3-fos-license | Environmentally-Sensitive Theory of Electronic and Optical Transitions in Atomically-Thin Semiconductors
We present an electrostatic theory of band gap renormalization in atomically-thin semiconductors that captures the strong sensitivity to the surrounding dielectric environment. In particular, our theory aims to correct known band gaps, such as that of the three-dimensional bulk crystal. Combining our quasiparticle band gaps with an effective mass theory of excitons yields environmentally-sensitive optical gaps as would be observed in absorption or photoluminescence. For an isolated monolayer of MoS$_2$, the presented theory is in good agreement with ab initio results based on the GW approximation and the Bethe-Salpeter equation. We find that changes in the electronic band gap are almost exactly offset by changes in the exciton binding energy, such that the energy of the first optical transition is nearly independent of the electrostatic environment, rationalizing experimental observations.
Introduction. Atomically-thin materials exhibit remarkable electronic properties due to their quasi-two-dimensional nature [1][2][3][4]. However, their size also makes them extremely sensitive to their local environment. A complete theoretical picture must simultaneously treat the two-dimensional nature of carriers and the dielectric character of the surroundings. This latter property is the primary distinction between atomically-thin materials (such as the transition metal dichalcogenides) and heterostructured semiconductor quantum wells (such as GaAs in AlGaAs).
To date, many theoretical studies of atomically-thin materials have focused on the excitonic properties, including the large exciton binding energy [5][6][7], the unique excitonic Rydberg series [8,9], the nature of selection rules [10][11][12], and Berry phase modifications of the exciton spectrum [13,14]. Surprisingly, the quasiparticle band gap has received significantly less attention, especially from simplified microscopic theories, perhaps because it is challenging to measure experimentally. In fact, simple theories of the exciton binding energy are often times used in conjunction with the experimentally measured optical gap in order to estimate the quasiparticle band gap [8,15].
The GW approximation represents the current method-of-choice for the accurate calculation of band structures and band gaps [16,17]. However, the quasi-two-dimensional nature of the atomically-thin materials makes these calculations very challenging to converge [18][19][20]. In this work, we provide a simple electrostatic theory of band gap renormalization due to electrostatic proximity effects. Through combination with an effective mass theory of the exciton binding energy, we find that the optical gap -i.e. the sum of the band gap and the (negative) exciton binding energy -is extremely insensitive to the dielectric environment. To the best of our knowledge, this represents the first quasi-analytical demonstration of this remarkable effect.
The band gap of nanoscale materials differs from that of the bulk parent material because of two separate effects: carrier confinement and dielectric contrast. In the first case, the geometric confinement of carriers leads to an increased kinetic energy and a concomitantly larger band gap. However, in layered materials (such as the TMDCs), the two-dimensional confinement is already largely reflected in the bulk band gap, as evidenced by the small bandwidth in the perpendicular (stacking) direction. Therefore, in the following, we employ this idealized scenario of carriers confined to two dimensions, even when describing the bulk material. In particular, this approximation is invoked to describe low-energy carriers at the K-points of the Brillouin zone; here, the wavefunction character is primarily that of transition-metal d-orbitals, which are confined to the center of the TMDC layer, precluding strong interlayer hybridization. In Fig. 1, we show the band structure of bulk and monolayer MoS2 calculated using density functional theory [21]. The monolayer band gap at the K-point is only 0.09 eV larger than that of the bulk, indicating that any band gap renormalization due to carrier confinement is already (largely) accounted for in the bulk band gap; we henceforth neglect this small shift so as to focus on alternative effects while treating the monolayer and bulk on equal footing. We emphasize that this geometric carrier confinement is a one-electron (kinetic energy) effect that is well-described by density functional theory -unlike dielectric screening effects.
As mentioned above, a second source of band gap renormalization in nanomaterials is the dielectric contrast effect. Physically, we recall that the quasiparticle conduction and valence bands measure the electron affinities and ionization potentials, respectively. The excess charge created in these processes polarizes the material and its environment such that the potential energy of the charge depends on the local dielectric geometry. We model atomically-thin semiconductors as a slab of dielectric constant ε_1 and width d, surrounded by environmental dielectric constants ε_2 below and ε_3 above, as shown in Fig. 2. Consistent with the arguments presented above, the carriers will be assumed to occupy the center of the slab, at z = 0.
We now proceed to calculate the band structure corrections due to such a heterogeneous dielectric environment. We assume that a reference many-body band gap is known, which could come from experiment or calculation. In particular, we will primarily consider band structure corrections to the three-dimensional bulk material. Corrections will be calculated in two ways: (1) classically, using electrostatic continuum theory; and (2) quantum mechanically, using the static Coulomb-hole plus screened exchange (COHSEX) approximation to the quantum mechanical GW self-energy. When correcting a reference band structure, we require the difference in the screened Coulomb interaction, δW(r, r ) ≡ W(r, r ) − W ref (r, r ), where W is the total screened Coulomb interaction. We calculate the respective screened interactions through their electrostatic counterparts associated with the slab dielectric geometry shown in Fig. 2. While this is a classical approximation, which neglects local field effects, it avoids the high cost of an ab initio calculation of the screened Coulomb interaction.
In recent years, effective mass theories of atomically-thin materials have made frequent use of the model potential energy derived by Rytova [22] and Keldysh [23] (RK),

W_RK(ρ) = (πe² / [(ε_2 + ε_3) ρ_0]) [ H_0(ρ/ρ_0) − Y_0(ρ/ρ_0) ],    (1)

where H_0 and Y_0 are the Struve function and the Bessel function of the second kind and ρ is the two-dimensional in-plane separation. The screening length is given by ρ_0 = ε_1 d/(ε_2 + ε_3) and can be related to a two-dimensional sheet polarizability [5,24]. For the purposes of the present manuscript, the RK potential suffers from two deficiencies. First, it applies only in the limit of extreme dielectric mismatch between the slab and its surroundings; while this approximation is good for isolated (suspended) monolayers, it breaks down in more general dielectric environments. Second, the RK potential has an unphysical logarithmic divergence at ρ = 0, which precludes its use in simple electrostatic theories of band gap renormalization. Instead, we employ the exact solution of the finite-thickness electrostatic problem shown in Fig. 2. We emphasize that the logarithmic behavior of the RK potential is correct over some intermediate length scale and only incorrect for ρ ≲ d.
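For concreteness, the RK potential of Eq. (1) can be evaluated numerically as sketched below (Gaussian units with e² = 1.44 eV·nm); the prefactor convention follows the reconstructed form above and should be checked against Refs. [22,23], while the slab parameters are the MoS2 values quoted later in the text.

import numpy as np
from scipy.special import struve, y0

E2_NM = 1.44                        # e^2 in eV*nm (Gaussian units)
eps1, eps2, eps3 = 14.0, 1.0, 1.0   # MoS2 slab in vacuum
d = 0.6                             # slab thickness in nm

rho0 = eps1 * d / (eps2 + eps3)     # = 4.2 nm, matching the text
def w_rk(rho_nm):
    x = rho_nm / rho0
    return np.pi * E2_NM / ((eps2 + eps3) * rho0) * (struve(0, x) - y0(x))

for rho in (0.5, 2.0, 10.0, 40.0):
    print(f"rho = {rho:5.1f} nm  W_RK = {w_rk(rho):.4f} eV")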
The potential energy of two charges in a slab with locations z_1, z_2, and in-plane separation ρ can be calculated via image charges to give a screened interaction W(z_1, z_2, ρ) [25].

FIG. 2: Idealized dielectric slab geometry used to model the electrostatics of atomically-thin semiconductors.

In the center of the slab (z_1 = z_2 = 0), we find

W(ρ) = (e²/ε_1) [ 1/ρ + Σ_{n=1}^{∞} 2(L_{12}L_{13})^n / √(ρ² + (2nd)²) + Σ_{n=0}^{∞} (L_{12} + L_{13})(L_{12}L_{13})^n / √(ρ² + ((2n+1)d)²) ],    (2)

where L_{1n} = (ε_1 − ε_n)/(ε_1 + ε_n). Unlike the RK potential, this continuum electrostatic potential is correct in the uniform case ε_1 = ε_2 = ε_3 and has the proper divergence as ρ → 0.

Electrostatic solution. In the simplest electrostatic (Born) approximation, the conduction and valence band corrections in the center of the slab are given by the self-interaction energy [25,26]

P = (1/2) lim_{ρ→0} δW(ρ),    (3)

which is non-divergent due to the use of an interaction difference, δW, as long as the slab dielectric ε_1 is identical in both W and W_ref. When the reference potential energy is that of a uniform, bulk dielectric, i.e. W_ref(r, r′) = e²/(ε_1|r − r′|), then the electrostatic corrections using Eqs. (2) and (3) can be summed analytically to give the relatively simple expression

P = (e²/2ε_1 d) [ −ln(1 − L_{12}L_{13}) + (L_{12} + L_{13}) tanh^{−1}(√(L_{12}L_{13})) / √(L_{12}L_{13}) ].    (4)

Tight-binding COHSEX. First-principles band structure calculations typically employ the GW approximation to the self-energy. In the static screening limit, this approximation yields two contributions to the self-energy: a Coulomb-hole (COH) term and a screened exchange (SEX) term [16]. By assuming that an initial, many-body reference band structure is known, we can calculate corrections in alternative electrostatic environments as diagonal elements of the self-energy operator,

δΣ_p(k) = ⟨pk| δΣ_COH + δΣ_SEX |pk⟩,  δΣ_COH(x_1, x_2) = (1/2) δ(x_1 − x_2) δW(x_1, x_2),  δΣ_SEX(x_1, x_2) = −ρ(x_1, x_2) δW(x_1, x_2),    (5)

where x = (ρ, τ) is the combined space and spin variable, ρ(x_1, x_2) is the reduced density matrix of the mean-field reference, N_k is the number of k-points sampled in the Brillouin zone, and p = (c, v) indexes the conduction or valence band. In the simplest approximation, we consider the two-band tight-binding Hamiltonian [27]

H(k) = at (k_x σ_x + k_y σ_y) + (E_g/2) σ_z,    (6)

with eigenvectors ⟨x|pk⟩ = φ_pk(x) and eigenvalues E_{c/v}(k) = ±(1/2)√(E_g² + (2atk)²). In this Hamiltonian, E_g is the band gap, a is the lattice constant, and t is the interatomic transfer integral. A single (doubly-occupied) valence band leads to the simple density matrix ρ(x_1, x_2) = Σ_q φ_vq(x_1) φ*_vq(x_2). Further simplifications concerning the locality of the underlying real-space basis functions lead to an expression for the SEX self-energy, Eq. (7), in which A_BZ is the area of the Brillouin zone and the primed summation excludes the term with G = 0 when k = q. Summarizing, the COH term yields a positive, constant shift to both the conduction and valence band, which is exactly equal to the (positive) correction obtained in the pure electrostatic theory presented above; the SEX term yields a negative, k-dependent shift with a magnitude that depends on overlap factors between the valence band and the band being corrected. To a reasonable approximation (verified numerically below), the SEX contribution is negligible in the conduction band (due to vanishing overlaps) but is substantial in the valence band. Further, if the squared overlap is approximated by unity, i.e. |⟨vk|vq⟩|² ≈ 1, then the magnitude of the SEX correction in the valence band is exactly twice that of the COH term. As shown in Ref. 28 for the case of molecules near metal surfaces, we therefore have simple, approximate COHSEX corrections given by δΣ_c ≈ +P − 0 = +P and δΣ_v ≈ +P − 2P = −P, where P = (1/2) lim_{ρ→0} δW(ρ) is precisely the electrostatically-derived correction. In reality, the squared overlap can be less than one, and the SEX correction to the valence band (and thus the band gap) will be slightly smaller than that of the continuum electrostatic theory.
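The image-charge series of Eq. (2) and the analytically summed shift of Eq. (4) can be cross-checked numerically as in the sketch below (a reconstruction; prefactors should be verified against Ref. [25]). With the MoS2 parameters used later (ε_1 = 14, d = 6 Å, vacuum surroundings), the sketch gives 2P ≈ 0.69 eV, consistent with the roughly 0.05 eV excess of the electrostatic estimate over the 0.64 eV COHSEX gap shift quoted below.

import numpy as np

E2 = 1.44                                  # e^2 in eV*nm (Gaussian units)
eps1, eps2, eps3, d = 14.0, 1.0, 1.0, 0.6  # suspended MoS2 monolayer

L12 = (eps1 - eps2) / (eps1 + eps2)
L13 = (eps1 - eps3) / (eps1 + eps3)

def w_slab(rho, nmax=2000):
    """Image-charge sum of Eq. (2) for both charges at the slab center."""
    n = np.arange(1, nmax + 1)
    m = np.arange(0, nmax)
    s = 1.0 / rho
    s += np.sum(2 * (L12 * L13) ** n / np.sqrt(rho**2 + (2 * n * d) ** 2))
    s += np.sum((L12 + L13) * (L12 * L13) ** m
                / np.sqrt(rho**2 + ((2 * m + 1) * d) ** 2))
    return E2 * s / eps1

# Analytic rho -> 0 limit of dW = W - e^2/(eps1*rho), i.e. Eq. (4):
x = L12 * L13
P = E2 / (2 * eps1 * d) * (-np.log(1 - x)
                           + (L12 + L13) * np.arctanh(np.sqrt(x)) / np.sqrt(x))
print(f"P = {P:.3f} eV, predicted gap opening 2P = {2 * P:.3f} eV")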
Effective-mass theory of excitons. The optical gap, as measured in linear spectroscopies such as absorption or photoluminescence, is the sum of the quasiparticle band gap and the (negative) exciton binding energy. At a similar level of theory to that used so far, the exciton states can be calculated using an effective mass theory,

[ −(ħ²/2μ) ∇²_ρ − W(ρ) ] Ψ_n(ρ) = E_n Ψ_n(ρ),    (8)

where ρ is the electron-hole separation, Ψ_n is the exciton wavefunction, and E_n is its binding energy. The material parameters enter through the exciton reduced mass μ = m_e m_h/(m_e + m_h) and the same screened Coulomb interaction W as used above. Due to the angular symmetry, the effective mass equation is a simple one-dimensional Schrödinger equation in the radial direction, which may be solved numerically exactly on a real-space grid to obtain the full Rydberg series of band-edge excitons. The exciton wavefunctions and binding energies are sensitive to the local dielectric environment, where higher dielectric constants result in stronger screening, more diffuse wavefunctions, and smaller binding energies.
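The radial solution of Eq. (8) for the s-states can be sketched with a finite-difference diagonalization, shown below with the RK interaction standing in for the full slab potential; the reduced mass value is an assumed illustrative number, and the substitution u(r) = √r Ψ(r) removes the first-derivative term at the cost of an extra −1/(4r²) term.

import numpy as np
from scipy.special import struve, y0
from scipy.linalg import eigh_tridiagonal

HB2_2M0 = 0.0381    # hbar^2/(2 m0) in eV*nm^2
mu = 0.25           # assumed exciton reduced mass in units of m0
eps2 = eps3 = 1.0
rho0 = 4.2          # nm, screening length quoted in the text
E2 = 1.44           # e^2 in eV*nm

def w_rk(r):
    x = r / rho0
    return np.pi * E2 / ((eps2 + eps3) * rho0) * (struve(0, x) - y0(x))

n, rmax = 4000, 60.0
r = np.linspace(rmax / n, rmax, n)
h = r[1] - r[0]
t = HB2_2M0 / mu
diag = 2 * t / h**2 - t / (4 * r**2) - w_rk(r)   # centrifugal-like term for l = 0
off = -t / h**2 * np.ones(n - 1)
vals, _ = eigh_tridiagonal(diag, off, select="i", select_range=(0, 1))
print("1s, 2s binding energies (eV):", -vals[:2])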
Results. While our theory is appropriate for any atomically-thin semiconductor, we will apply it to the well-studied case of MoS2, a prototypical layered transition-metal dichalcogenide. As is common for quantum-confined materials, we correct the bulk band gap using a uniform reference Coulomb potential with ε_1 = ε_2 = ε_3, i.e. W_ref(r, r′) = e²/(ε_1|r − r′|) [29]; for MoS2, we use ε_1 = 14. For the monolayer, we solve the electrostatic problem in Fig. 2 with ε_1 = 14 and d = 6 Å, which roughly corresponds to the perpendicular extent of monolayer MoS2; these parameters yield the ideal screening length ρ_0 = 42 Å, in good agreement with the ab initio value of 41.5 Å [5]. We take the reference A-series band gap of bulk MoS2 to be E_g^bulk = 1.98 eV [30], and for the tight-binding Hamiltonian in Eq. (6) we use at = 3.51 eV·Å.
First, we consider the experimentally-relevant situation of a monolayer on a substrate with dielectric constant ε_2 and vacuum above (ε_3 = 1). In Fig. 3(a), we show the band gap calculated using the tight-binding COHSEX approximation, as a function of the substrate dielectric constant. The purely electrostatic approximation in Eq. (4) is not shown, but gives nearly identical results, predicting band gaps that are slightly larger (about 0.05 eV), which can be understood based on arguments presented above. Remarkably, the simple theory presented here -parameterized only on bulk data and an estimate of the monolayer width -predicts an isolated monolayer (ε_2 = 1) band gap of 2.62 eV (a 0.64 eV increase from bulk); this compares very favorably to a recent, carefully-converged ab initio calculation using the many-body G_0W_0 approximation, which predicts 2.67 eV [20]. This huge increase in the quasiparticle band gap reflects the strong role played by reduced dielectric screening in atomically-thin materials.
At larger values of ε 2 , the increased screening ability of the substrate yields a rapid decrease in the band gap, demonstrating the strong sensitivity of atomically-thin materials to their local environment. Even a modest substrate like silica, with a dielectric constant of ε 2 ≈ 4, is predicted to have a band gap of 2.35 eV, which is 0.27 eV smaller than an ideal, suspended monolayer. On graphite, with ε 2 ≈ 10, the band gap is reduced by 0.45 eV. Similar results have been obtained with an approximate treatment of substrate screening in otherwise ab initio G 0 W 0 calculations [31,32]. These findings underscore the care required when comparing experimental measurements on substrates to ab initio calculations of isolated atomically-thin materials. In reverse, the simple formula given in Eq. (4) can be used to infer the ideal, suspended band gap based on measurements performed on substrates.
In Fig. 3(a), we also show the optical gap for the 1s and 2s exciton states, obtained by summing the quasiparticle band gap and the exciton binding energies of each state, as a function of the substrate dielectric constant. For the isolated monolayer, we predict optical gaps of 2.03 eV and 2.35 eV (positive binding energies of 0.59 eV and 0.27 eV) for the 1s and 2s states, respectively. Again, these compare well with converged ab initio calculations using the Bethe-Salpeter equation, which predict optical gaps of 2.04 eV and 2.32 eV (binding energies of 0.63 eV and 0.35 eV) [20].
As the dielectric constant of the substrate increases, the exciton binding energies are reduced due to increased environmental screening. Remarkably, the competing effects in the band gap and 1s binding energy almost exactly cancel. Up to a substrate dielectric constant of ε 2 = 20, the 1s optical transition energy only changes by 0.1 eV. In the aforementioned examples of silica and graphite substrates, the exciton binding energy is reduced by 0.24 eV and 0.49 eV, respectively. Not only is the optical transition energy roughly constant, but the cancellation is almost perfect such that the monolayer transition energy is nearly identical to the bulk transition energy (the bulk band gap and optical gap roughly coincide, because the exciton binding energy is only about 0.04 eV [30]).
In addition to the well-known observation that the optical gap of bulk TMDCs is almost identical to that of monolayers, the effects predicted by the theory are in good agreement with a number of other more detailed experimental findings, such as the insensitivity of the optical gap in TMDCs when comparing suspended samples and samples on fused silica substrates [33]. Identical effects in the band gap, optical gap, and exciton binding energy have been observed in a joint experimental-computational study of MoSe 2 on bilayer graphene and graphite: the latter exhibits a 0.24 eV reduction in the band gap and a concomitant 0.28 eV reduction in the exciton binding energy, leading to a minimal change in the optical gap [31].
The above analysis can be repeated for more general dielectric environments; the results of uniform encapsulation (ε 2 = ε 3 ) are shown in Fig. 3(b). While the qualitative behavior is the same, the effects are naturally stronger due to the simultaneous screening from above and below the monolayer.
Finally, we mention that although we have focused on the band gap, our theory separately predicts changes to the ionization potential and electron affinity. The environmental renormalization of these quantities may be of interest for photochemistry, catalysis, or device engineering.
Conclusions. In summary, we have presented a simple, but powerful theory of environmentally-sensitive electronic and optical transition energies in atomically-thin materials. While the theory shows that the quasiparticle band gap and the exciton binding energy are individually very sensitive to their local dielectric environment, the sum of the two (the lowestenergy optical transition) is almost completely insensitive. In some sense, this is an unfortunate state of affairs for the use of atomically-thin materials as environmental or chemical sensors, because optical transitions are the simplest to measure (by absorption or photoluminescence); by contrast, measuring the band gap by photoemission or electron tunneling experiments is much more difficult. Nonetheless, the theory presented here enables rapid and quantitative exploration of accessible energetic changes through dielectric engineering.
In light of our results, we propose that the higher-lying excitonic resonances are promising optical reporters of the local environment. Even the 2s resonance -which can typically be resolved in experiments -is predicted to redshift by 0.1 eV when a suspended sample is placed on a silica substrate. Indeed, the 1s-2s separation was used recently as an experimental probe of environmental effects [15].
Going forward, this approach can be used to study other environmentally-sensitive, atomically-thin materials such as black phosphorous [34]. These techniques can also be applied to more heterogeneous dielectric environments, as might be experimentally realized through patterning [15], molecular coverage [35,36], or functional layered heterostructures [37][38][39]. In many cases, explicit electronic hybridization and charge transfer should be accounted for in the theory. Work along these lines is currently in progress. | 2017-09-04T18:01:15.000Z | 2017-09-04T00:00:00.000 | {
"year": 2017,
"sha1": "0e3147308adcb0107fc5db27bf1bf5909971d96a",
"oa_license": "publisher-specific, author manuscript",
"oa_url": "https://link.aps.org/accepted/10.1103/PhysRevB.97.041409",
"oa_status": "HYBRID",
"pdf_src": "Arxiv",
"pdf_hash": "0e3147308adcb0107fc5db27bf1bf5909971d96a",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Materials Science"
]
} |
253157995 | pes2o/s2orc | v3-fos-license | Effective action of a self-interacting scalar field on brane
In extra dimensional theories, the four-dimensional field theory is reduced from a fundamental field theory in the bulk spacetime by integrating out the extra dimensional part. In this paper we investigate the effective action of a self-interacting scalar field on a brane in the five-dimensional thick braneworld scenario. We consider two typical thick brane solutions and obtain the Pöschl-Teller and harmonic potentials of the Kaluza-Klein (KK) modes, respectively. The analytical mass spectra and the wave functions along the extra dimension of the KK modes are obtained. Further, the effective coupling constants between different KK particles, the cross sections, and the decay rates for some processes of the KK particles are related to the fundamental coupling in five dimensions and the new physics energy scale. Some interesting properties of these interactions are found with these calculations. The KK particles with higher modes have longer lifetimes, and they almost do not interact with ordinary matter on the brane if their mode numbers are large enough. Thus, these KK particles with higher modes might be a candidate for dark matter.
I. INTRODUCTION
After it was proposed in the 1920s, Kaluza-Klein (KK) theory [1,2] was discussed repeatedly over the past 100 years. The essential idea of this theory is that the observable physical phenomena of our four-dimensional world can be reduced from a five-dimensional fundamental theory. According to string theory, the existence of extra dimensions is inevitable. Various models containing extra dimensions were proposed and discussed seriously as effective theories at certain energy levels. In the braneworld scenario [3][4][5][6], our world is described as a four-dimensional brane embedded in a higher-dimensional spacetime called the bulk, and extra dimensions are suggested to extend far larger than they were thought before [4], or even be infinite [3,6].
The development of extra dimension theories gives new insight into fundamental physics. Nevertheless, the reasons that introduce the concept of extra dimensions are quite different. In KK theory, the fifth dimension was proposed in order to unify the four-dimensional gravity and electromagnetic force by regarding the electromagnetic field as the fifth component of the metric. In the Arkani-Hamed-Dimopoulos-Dvali (ADD) model [4] and the Randall-Sundrum (RS)-1 model [5], the existence of extra dimensions provides an alternative mechanism to resolve the gauge hierarchy problem in particle physics. On the other hand, some efforts have been made to explore the possibility of infinite extra dimensions, such as the RS-2 model [6] and the domain wall model [3]. Combining the above two models [3,6], one can obtain the model of a brane with thickness, called a thick brane, in a five-dimensional curved spacetime with an infinite extra dimension. In contrast, the branes in the RS-1 and RS-2 models are called thin branes since they are pure four-dimensional hypersurfaces without thickness.
We may regard a thin brane as an approximation to a thick brane if the thickness of the brane can be neglected. However, there is a significant difference between thin branes and thick branes. In a thick brane model, there are no special dimensions singled out as "extra dimensions", since the four-dimensional fields in the Standard Model, which are described by zero modes of some higher-dimensional fields, live in the bulk. However, those zero modes are localized around the thick brane and they cannot propagate along the extra dimensions. For a thin brane model, there are some very special extra dimensions and all the matter fields are confined on a hypersurface, i.e., the brane. Both thin and thick branes have their respective advantages. For example, the thin brane resolves the hierarchy problem, while the thick brane has its application in holographic QCD.
For all kinds of extra dimension theories, an important question is why we stay in a four-dimensional world and do not observe these extra dimensions. In the original KK theory it was explained by compactifying the extra dimension into a tiny circle with a size of the Planck scale, so that we cannot find it in current experiments even though we actually occupy the whole volume of the five dimensions. In the ADD model and the RS-1/RS-2 models, it is a prior hypothesis that all matter fields are confined on a four-dimensional hypersurface embedded in five-dimensional spacetime. In the thick brane model, which we mainly concern ourselves with in this paper, all fields live in the bulk and the extra dimension extends infinitely, but the matter fields are trapped in a very narrow region along the extra dimension. This mechanism is called localization. There are a lot of references for the thick brane models and the localization of gravity and matter fields [7][8][9][10][11][12][13][14][15][16][17][18][19][20]; see Refs. [21][22][23][24][25][26] for reviews.
Another principal problem that needs to be considered is how to recover the ordinary four-dimensional theory, such as electroweak gauge theory and general relativity, from the underlying five-dimensional theory. It leads to the idea that ordinary four-dimensional physics corresponds to some five-dimensional physics. From the very popular AdS/CFT perspective, our ordinary four-dimensional physics, as a field theory living on the boundary of a five-dimensional spacetime, is totally equivalent to a gravitational theory in the bulk. On the other hand, in KK theory, the domain wall model, and brane models, it is accomplished by reducing the higher-dimensional fundamental theory to the four-dimensional effective one, which is realized by integrating the higher-dimensional action over the extra dimensions.
Most of the works about the localization of a scalar field on a thick brane focus on free fields, and few works discuss the self-interaction of a scalar field in the bulk as well as its effective theory on a thick brane. In this paper we will investigate the effective action of a self-interacting scalar field on a thick brane. We first assume a five-dimensional fundamental action of a self-interacting scalar field. Then we employ the KK reduction procedure to derive the four-dimensional effective actions of the scalar KK modes. The effective action should be coincident with the action in the current four-dimensional theory. Furthermore, the five-dimensional interaction will bring new four-dimensional particles and new interaction terms between them. This will yield some significant predictions for particle collisions at higher energy levels.
The organization of this paper is as follows. In Sec. II, we give a general formulation for the KK reduction of a five-dimensional action of a self-interacting scalar field. In Sec. III, we consider two brane solutions and obtain two kinds of typical potentials which demonstrate some significant properties of the four-dimensional effective action. Finally, Sec. IV is devoted to conclusions and discussions.
II. KK REDUCTION OF ACTION
Let us first consider a free massless scalar field in five-dimensional spacetime:

S_0 = −(1/2) ∫ d^5x √(−g) g^{MN} ∂_M φ ∂_N φ.    (1)

The five-dimensional metric is proposed as

ds² = g_{MN} dx^M dx^N = e^{2A(z)} ( ĝ_{μν}(x^λ) dx^μ dx^ν + dz² ),    (2)

where M, N and μ, ν are the five-dimensional and four-dimensional coordinate indices, z = x^5 is the extra dimensional coordinate, and ĝ_{μν}(x^λ) is the reduced four-dimensional metric at any fixed position of the extra dimensional coordinate z = z_0 up to a constant factor e^{2A(z_0)}. Usually, we can set e^{2A(0)} = 1 by using the degree of freedom of the coordinate transformation x^μ → x̃^μ = e^{A(0)} x^μ. In order to reduce the five-dimensional fundamental action (1) to a four-dimensional effective one, we separate the variables of the extra dimension from the ordinary four dimensions, which is called the KK decomposition:

φ(x^μ, z) = Σ_n ϕ_n(x^μ) f_n(z).    (3)

Substituting the above decomposition into the action (1), we have

S_0 = −(1/2) Σ_{m,n} ∫ d^4x dz e^{3A(z)} √(−ĝ) [ f_m f_n ĝ^{μν} ∂_μ ϕ_m ∂_ν ϕ_n + f_m′ f_n′ ϕ_m ϕ_n ].    (4)

The field equation corresponding to the action (1) is the five-dimensional Klein-Gordon equation:

(1/√(−g)) ∂_M ( √(−g) g^{MN} ∂_N φ ) = 0.    (5)

By virtue of the KK decomposition (3), the above Klein-Gordon equation (5) can be converted to two equations. One is the familiar four-dimensional Klein-Gordon equation of the four-dimensional modes ϕ_n(x^μ),

□^{(4)} ϕ_n(x^μ) = ζ_n ϕ_n(x^μ),    (6)

and another is an eigenvalue equation of the extra-dimensional part f_n(z):

−f_n″(z) − 3A′(z) f_n′(z) = ζ_n f_n(z),    (7)

where □^{(4)} = ĝ^{μν} ∇_μ ∇_ν. With the redefinition of the field

f_n(z) = e^{−3A(z)/2} χ_n(z),    (8)

Eq. (7) becomes a one-dimensional Schrödinger-like equation

−χ_n″(z) + V(z) χ_n(z) = ζ_n χ_n(z),    (9)

where the effective potential is

V(z) = (3/2) A″(z) + (9/4) A′²(z).    (10)

If the eigenstate χ_n(z) of the Schrödinger-like equation is a normalizable bound state, the corresponding KK mode is localized on the brane, in the sense that the energy density of the field actually distributes in a finite region of the extra dimension, and one can obtain the four-dimensional action of the KK mode ϕ_n(x^μ). To this end, we require the normalization condition, as in quantum mechanics,

∫ dz e^{3A(z)} f_m(z) f_n(z) = δ_{mn},    (12)

or equivalently

∫ dz χ_m(z) χ_n(z) = δ_{mn},    (13)

the orthogonality part of which could be inferred from Eq. (9),

(ζ_m − ζ_n) ∫ dz χ_m(z) χ_n(z) = 0,

where m, n are not summed here. The ground state with ζ_0 = 0 is called the zero mode and is interpreted as the ordinary four-dimensional scalar field that we have observed on the brane. By virtue of the normalized conditions (12) and (13), the action (1) is reduced to

S_0 = −(1/2) Σ_n ∫ d^4x √(−ĝ) ( ∂_μ ϕ_n ∂^μ ϕ_n + ζ_n ϕ_n² ),

where ∂^μ ≡ ĝ^{μν} ∂_ν and ζ_n is the eigenvalue of Eq. (7), which can be proven to be nonnegative. The result is interpreted as a family of four-dimensional scalar fields with different masses, and √ζ_n is called the induced mass originating from the extra dimension. It can be seen in another way that a massless particle in five-dimensional spacetime satisfies the relation

p_M p^M = −E² + p⃗ ² + p_5² = 0.

In the case of a nonzero momentum along the extra dimension, the four-dimensional particle on the brane has an effective mass m = |p_5|.
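The Schrödinger-like problem of Eqs. (9) and (10) is straightforward to solve numerically for a given warp factor, as in the sketch below; the warp factor A(z) = −ln cosh(kz) is a common smooth-brane example used purely for illustration, for which the lowest eigenvalue is the zero mode ζ_0 = 0.

import numpy as np
from scipy.linalg import eigh_tridiagonal

k = 1.0
n, zmax = 4000, 40.0
z = np.linspace(-zmax, zmax, n)
h = z[1] - z[0]

A1 = -k * np.tanh(k * z)              # A'(z)
A2 = -k**2 / np.cosh(k * z) ** 2      # A''(z)
V = 1.5 * A2 + 2.25 * A1**2           # Eq. (10)

diag = 2.0 / h**2 + V                 # finite-difference -d^2/dz^2 + V
off = -np.ones(n - 1) / h**2
zeta, chi = eigh_tridiagonal(diag, off, select="i", select_range=(0, 3))
print("lowest KK eigenvalues zeta_n:", np.round(zeta, 4))
print("KK masses sqrt(zeta_n):", np.round(np.sqrt(np.abs(zeta)), 4))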
It should be underlined that in our formulation, by virtue of the KK decomposition of the scalar field, the equation for the extra-dimensional part f_n(z) of the scalar field decouples from the equation of the four-dimensional part ϕ_n(x^μ). This guarantees that we can study physics on the brane; otherwise we would be blind to a dynamical variable and we would never have complete dynamical equations. The approach of the effective action on the brane accords with the assertion that all observers have a right to describe physics using an effective theory based only on the variables they can access. In particle physics we have met this concept in renormalization group theory. To describe particle interactions at 10 GeV in the lab, we do not need to know what happens at 10^14 GeV. We have also seen this concept demonstrated in cosmic censorship, where a horizon protects our predictive ability, on the basis of general relativity, from a singularity. In the braneworld scenario here, if observers are confined on the brane, they should still be able to study physics using only the variables accessible to them, without having to know what happens outside of the brane.
So far the scalar field is free. We would like to include interactions by adding a perturbative interaction potential U(φ) to the action (1),

S_int = −∫ d^5x √(−g) U(φ).

We expect that the four-dimensional effective interaction term can be obtained by integrating the above interaction term over the extra dimension,

S_int^{(4)} = −∫ d^4x √(−ĝ) Û,  where  Û ≡ ∫ dz e^{5A(z)} U(φ(x^μ, z)).

It is just an integral projection from five-dimensional spacetime to our four-dimensional one. Comparing to the ordinary four-dimensional action, which only involves the zero mode ϕ_0, it can be seen that the second term in the four-dimensional action (21) is new and totally originates from the extra dimension. It predicts not only new massive particles but also new interactions: S_int contains more terms than the ordinary four-dimensional interaction S^{(4)}; it contains all possible interactions of the various KK modes ϕ_n.
In this paper we will investigate an interaction of quartic form, U(φ) = λφ⁴. It can be expressed in a four-dimensional effective action as

S_int^{(4)} = −λ Σ_{k,l,m,n} γ_{klmn} ∫ d^4x √(−ĝ) ϕ_k ϕ_l ϕ_m ϕ_n,    (22)

where

γ_{klmn} = ∫ dz e^{5A(z)} f_k f_l f_m f_n = ∫ dz e^{−A(z)} χ_k χ_l χ_m χ_n.    (23)

As is known, in general f_k, f_l, f_m, f_n can be four different or identical KK modes. From the four-dimensional viewpoint, there is a family of scalar KK particles with different masses √ζ_n. They interact with each other through the form (22). The coupling constant λγ_{klmn} is exactly the tree-level scattering amplitude with the Feynman diagram shown in Fig. 1. We will calculate these couplings in the next section.
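Given grid eigenfunctions χ_n from a solver like the one sketched in Sec. II, the overlap in Eq. (23) reduces to a one-dimensional quadrature, as below; the warp factor and the zero mode used here (χ_0 ∝ e^{3A/2} for A = −ln cosh z) are illustrative.

import numpy as np

def gamma(A_vals, modes, z):
    """gamma_klmn = int dz e^{-A} chi_k chi_l chi_m chi_n on a grid."""
    integrand = np.exp(-A_vals) * modes[0] * modes[1] * modes[2] * modes[3]
    return np.trapz(integrand, z)

z = np.linspace(-20, 20, 4001)
A_vals = -np.log(np.cosh(z))              # illustrative warp factor
chi0 = np.cosh(z) ** -1.5                 # zero mode e^{3A/2} for this A(z)
chi0 /= np.sqrt(np.trapz(chi0**2, z))     # normalize per Eq. (13)
print("gamma_0000 =", gamma(A_vals, [chi0] * 4, z))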
Furthermore, there is a very interesting difference between the cases of interacting and free scalar fields. We are convinced that we cannot measure the distribution of the field along the extra dimension, since we are restricted to the brane. What we could measure on the brane is just the mass spectrum of the KK particles. So a problem arises: can we discover the metric of the five-dimensional spacetime while the observers and all our observations are confined on a four-dimensional brane?
This question is equivalent to the quantum mechanics problem: can we reconstruct the shape of a potential V(z) from its eigenvalue spectrum {E_n}? Unfortunately, the answer is negative. For example, the Morse potential and the Scarf (hyperbolic) potential have exactly the same eigenenergies, even though the potentials and eigenfunctions are quite distinct. Here C_1 and C_2 are constants and −∞ < z < ∞. In fact, we could learn from supersymmetric quantum mechanics that there are a large number of potentials that have the same eigenenergies and are related with each other by the so-called isospectral deformations. We do not pursue the details here. There seems to be an inevitable conclusion: the five-dimensional metric cannot be known if we just stay on the brane. Nevertheless, the interactions of various KK modes will change this situation.
With a fundamental interaction, say, φ⁴, we could measure not only the mass spectrum {m_n} of the KK particles, but also the coupling constants {λγ_{klmn}}, which would distinguish the potentials corresponding to the same eigenvalues. The point is that the coupling constants in Eq. (23) involve the eigenfunctions {f_n}, which are distinct for different potentials. So we have seen the significance of the interaction: the effective coupling constants reveal the structure of the five-dimensional spacetime, which cannot be inferred from the free fields on the brane. On the other hand, the interaction provides channels through which KK particles transform into each other. The massive KK modes may decay to the zero mode, while they can also be produced from collisions of zero-mode particles.
III. PROPERTIES OF THE EFFECTIVE ACTION
We have seen that the effective potential of the KK modes is determined solely by the warp factor A(z) in the bulk metric (2),

V(z) = (3/2) A″(z) + (9/4) A′²(z),

which determines the KK modes f_n as well as the effective coupling constants λγ_{klmn}. In this paper, we only use some explicit forms of the warp factor A(z) instead of going into the details of solving the field equations for the background spacetime, which depend on the chosen gravitational theory and the background fields generating the thick brane. We will discuss two kinds of typical potentials which demonstrate some significant properties of the four-dimensional effective action.

In Refs. [27,28], the authors presented the de Sitter braneworld model, in which an induced 3-brane with a spatially flat cosmological background is considered. The action and the five-dimensional metric for this braneworld model are given in Refs. [27,28], with κ²_5 = 1/M³_*, where M_* is the five-dimensional fundamental scale, Λ_5 is the bulk cosmological constant, and a(t) is the scale factor of the brane. The brane solution is given in Refs. [27,28], where 1/b parameterizes the thickness of the brane and the parameter H is the Hubble parameter. The relation between the effective four-dimensional cosmological constant on the brane and the Hubble parameter H is Λ_4 = 3H². In this paper, we consider the special case of b = H for simplicity. For the de Sitter braneworld model, the corresponding potential is the Pöschl-Teller (PT) potential, which is shown in Fig. 2. For this potential, we have two bound states. The first one is the ground state, i.e., the zero mode, with mass m_0 = 0.

FIG. 1: The Feynman diagram for (22).
The second one is the first massive KK mode, whose mass is set by H. Here we can see that the parameter H can be viewed as the new-physics energy scale, since it determines the mass of the new particle (the first massive KK particle) beyond the Standard Model. Obviously the masses are determined by the warp factor A(z) in the bulk metric.
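The explicit potential is not reproduced above, but in the de Sitter thick-brane literature the b = H case is commonly quoted as V(z) = (9/4)H² − (15/4)H²/cosh²(Hz); under that assumption (which is ours here, not a statement from the text), the textbook sech²-well formula gives exactly the two bound states described, with m₀ = 0 and m₁ = √2 H:

```python
import numpy as np

def sech2_bound_m2(k, U, V_inf):
    """Bound m_n^2 for V(z) = V_inf - U / cosh^2(k z), with hbar = 2m = 1.
    Bound states exist for n < s, where s(s + 1) = U / k^2."""
    s = 0.5 * (-1.0 + np.sqrt(1.0 + 4.0 * U / k**2))
    n = np.arange(int(np.ceil(s)))           # n = 0, ..., n < s
    return V_inf - k**2 * (s - n)**2

H = 1.0
m2 = sech2_bound_m2(k=H, U=15.0 / 4.0 * H**2, V_inf=9.0 / 4.0 * H**2)
print(np.sqrt(m2))   # -> [0.0, 1.414...]: the zero mode and m1 = sqrt(2) H
```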
The two bound states are in fact two kinds of KK particles on the brane. The two KK modes interact with each other via the four-dimensional effective interactions descending from the fundamental scalar potential φ⁴. The effective coupling constants can be computed explicitly, and they show that there are three kinds of interactions between these two KK modes. The corresponding Feynman diagrams are shown in Fig. 3.
It can be seen that, like the mass spectrum of the KK modes, the effective coupling constants are determined by the warp factor A(z). Both of them originate from the extra dimension. However, they involve two independent parameters, H and λ. We assume that the zero mode represents a four-dimensional particle which has already been observed (so that Hλ can be fixed) and which has a self-interaction λγ_{0000}ϕ₀⁴. Provided that we collide zero-mode particles with sufficient energy, new particles corresponding to the first excited mode f₁ should be observed in the reaction (so that H and λ can be fixed separately), alongside zero-mode particles. At tree level the reaction amplitudes are just the effective coupling constants. In the cross sections of the above two processes, p₁ and p₂ are the 4-momenta of the initial-state particles and p₃ and p₄ are the 4-momenta of the final-state ones; the total cross sections follow by integrating over the phase space of the final states. We see that the branching ratio is significant only if the initial energy p₁⁰ + p₂⁰ is high enough. The above results show that the cross sections and branching ratio are related to the fundamental coupling λ in five dimensions and to the new-physics energy scale H.
Note that the decay of the first excited mode f₁ into three zero modes f₀ is forbidden. This implies that there exist "selection rules" in the interactions between KK particles, just like for transitions between states in quantum mechanics. From our discussion at the end of Sec. II, in the case of the PT potential there must then be plenty of f₁ particles on the brane.
Although the above PT potential illustrates some main features of the effective action, it is rather simple because it supports only two kinds of KK particles on the brane. The case of the harmonic potential is richer, since all of its eigenstates are bound states and we have an infinite tower of localized KK particles in four dimensions. Therefore, we can investigate how the four-dimensional effective interactions vary as the KK modes go higher.
To this end, we consider the flat thick brane model generated by a mimetic scalar field with a Lagrange multiplier [29]. In the action of this theory, U = U(φ) and V = V(φ) are functions of the scalar field φ, and λ is a Lagrange multiplier. For simplicity, one can choose natural units with κ₅² = 1. One of the solutions for the flat thick brane metric (2) with g_{μν} = η_{μν} is given in [29], where the parameter k is the scale parameter which controls the thickness of the brane. Here, we do not list the expressions for λ, U, and V. For this brane solution, the warp factor A(z) generates a harmonic potential. The eigenfunctions are built from the Hermite polynomials H_n, the four-dimensional induced mass (i.e., the eigenvalue) grows linearly with the mode number, and the effective coupling constants follow from the overlap integrals of the eigenfunctions. Here, we can again see that the parameter k is related to the new-physics energy scale.
Notice that if we add a constant c to the potential (50), the induced masses change, while the effective coupling constants remain unchanged, since the configurations of the KK modes stay the same. This constant term c may come from a five-dimensional mass term for the scalar field. Thus, we get new induced masses but the same effective coupling constants; i.e., the effective interaction on the brane is independent of the five-dimensional mass of the scalar field.
It is assumed that only the zero mode ϕ₀ and the interaction λγ_{0000}ϕ₀⁴ have been observed on the brane, so they play the roles of the "ordinary" particle and the four-dimensional ϕ⁴ interaction, respectively. As demonstrated before, the excited KK modes emerge and interact with each other when we raise the collision energy to a sufficiently high level.
Note that Eq. (54) is an integral. The integrand is too complicated to be evaluated analytically for general modes (k, l, m, n). Hence, we consider the effective coupling constants between KK particles for some fixed lower modes. The effective coupling constant of the lowest mode, γ_{0000}, will serve as the unit in the following calculations; the remaining results are presented numerically. First, let us check the "nnnn" interaction, i.e., the interaction of the form ϕₙ⁴ between identical KK modes. We see that all kinds of KK particles have a ϕₙ⁴ interaction. However, as the quantum number n increases, the coupling varies from weak to extraordinarily strong; see Tab. I. Unlike the mass of the n-th excited KK particle, which depends linearly on the quantum number n, the effective coupling constant appears to grow roughly exponentially with n. Remarkably, even though the interaction in the five-dimensional spacetime is assumed to be weak, the four-dimensional effective interaction on the brane is not necessarily weak. In fact, it can be very strong.
Next, we consider the "00mn" interactions, i.e., the interaction terms containing two zero modes and two massive modes ϕₘ, ϕₙ. Such interactions correspond to the process in which two zero-mode particles scatter into two KK particles in modes m and n. The tree-level amplitude is exactly λγ_{00mn}. The result is listed in Tab. II. Obviously, there is a selection rule: γ_{00mn} is nonzero only if m + n is even. This is easily proven from the parities of the eigenfunctions. Another notable feature is that the sign of the nonzero γ_{00mn} alternates, which means the effective interaction alternates between repulsive and attractive.
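The parity selection rule is easy to verify numerically. In the sketch below the couplings are modelled, up to the overall warp-factor measure (which we drop; this is an assumption, so only zeros and relative signs are meaningful), as overlap integrals γ_{klmn} ∝ ∫ f_k f_l f_m f_n dz of the normalised harmonic-oscillator eigenfunctions:

```python
import numpy as np
from scipy.special import eval_hermite, factorial

x = np.linspace(-12.0, 12.0, 6001)

def f(n):
    """Normalised harmonic-oscillator eigenfunction (dimensionless units)."""
    norm = 1.0 / np.sqrt(2.0**n * factorial(n) * np.sqrt(np.pi))
    return norm * eval_hermite(n, x) * np.exp(-x**2 / 2.0)

def gamma(k, l, m, n):
    """Overlap integral modelling gamma_{klmn}, up to the warp measure."""
    return np.trapz(f(k) * f(l) * f(m) * f(n), x)

g0000 = gamma(0, 0, 0, 0)
for m in range(5):
    print(m, ["%+.3f" % (gamma(0, 0, m, n) / g0000) for n in range(5)])
```

The printed table vanishes whenever m + n is odd, as dictated by parity, and the sign of the surviving entries alternates, in line with the behaviour described above.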
On the other hand, the dependence of the nonvanishing γ_{00mn} on m and n is interesting. Figure 4 shows how the magnitude of the interaction varies. It can be seen from Tab. II and Fig. 4 that the nonzero |γ_{00mn}| reaches its maximum at n = m for fixed m and decreases with the "distance" |m − n|. It can also be seen that the nonzero |γ_{00mn}| tend to zero as n → ∞. We thus conclude that scattering processes involving one or two very highly excited KK states can be neglected.
In the cross section of the process 00 → mn, the subscripts 1 and 2 refer to the initial-state particles and 3 and 4 to the final-state particles. Integrating over the final-state phase space, we obtain the total cross section, in which a kinematic factor characterizes the size of the final-state phase space.
It can be shown that the total cross section declines with m and n; see Fig. 5 for an illustration. For a fixed center-of-mass energy E = p₁⁰ + p₂⁰ of the two zero-mode particles, the total cross section vanishes when m₃ + m₄ > E.
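The kinematic origin of this cutoff is transparent: for a 2 → 2 process the final-state momentum, built from the Källén function, vanishes at threshold. A small sketch (the tower masses used here are illustrative placeholders, not the model's values):

```python
import numpy as np

def p_final(E_cm, m3, m4):
    """Centre-of-mass momentum of the final state; zero below threshold."""
    s = E_cm**2
    kallen = (s - (m3 + m4)**2) * (s - (m3 - m4)**2)
    return np.sqrt(np.maximum(kallen, 0.0)) / (2.0 * E_cm)

E = 6.0                                          # centre-of-mass energy
masses = [np.sqrt(2.0 * n) for n in range(8)]    # placeholder KK tower, k = 1
for m in range(4):
    row = [p_final(E, masses[m], masses[n]) for n in range(8)]
    print(m, ["%.2f" % p for p in row])  # entries hit 0 once m3 + m4 > E
```

Since the total cross section carries this momentum as an overall factor (on top of |γ_{00mn}|² and flux factors), heavy final states are doubly suppressed.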
Let us assume that the initial energy is very high, so that a large number of reaction channels are open. The branching ratio R(00 → 00) is always the largest one. When the initial energy p₁⁰ + p₂⁰ is much larger than m₃ and m₄, the variation of the final-state phase-space factor is negligible, and the branching ratio R(00 → mn) grows with n and reaches its maximum at n = m, behaving just like |γ_{00mn}|. On the other hand, larger values of n or m suppress the scattering amplitude drastically, and also shrink the final-state phase space.
Therefore, the observable branching ratios R(00 → mn) come mainly from channels with small m and n. Furthermore, in an inclusive process whose final state involves a KK mode n, the branching ratios of processes in which the final state includes another KK mode close to n are the most significant.
The "000n" interaction is especially important because it represents the amplitude that a KK particle decays to three zero mode particles.We could deduce the decay rate Γ n000 of a KK particle of mode n from γ 000n : where the subscript f denotes the final state.The values of Γ n000 in unit of 10 −9 k 3 are shown in Tab.III.For the even modes n, the decay rate Γ n000 decreases with mode number n.For the odd modes, the decay channels n → 000 do not exist.Instead, they have nonzero Γ n100 , see Tab.IV.The above two tables III and IV show the same regular pattern: the decay rate vanishes when the mode number increases.The KK particle with higher modes seem "isolate" from the zero mode in the decay process.
A KK particle with mode n can decay into three lower-mode particles, so there are various decay channels n → klm. The lifetime τ of the particle is the reciprocal of the sum of its decay rates into all possible final states. Checking the lifetimes of the KK particles, we find that particles with higher modes have longer lifetimes (see Fig. 6). The KK particles with the highest modes may be practically stable.
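Operationally, the lifetime follows from the partial widths as in the toy snippet below; the numerical rates are hypothetical placeholders, not the values of Tables III and IV.

```python
def lifetime(partial_rates):
    """Lifetime tau = 1 / (sum of decay rates over all open channels)."""
    total = sum(partial_rates.values())
    return float("inf") if total == 0.0 else 1.0 / total

# Hypothetical partial widths (in units of k^3) for a mode-n particle:
rates = {"n->000": 3.2e-9, "n->110": 1.1e-9, "n->211": 4.0e-10}
print(lifetime(rates))   # higher modes, with smaller widths, live longer
```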
Note that the mass spectrum of the KK modes, the cross sections, the branching ratios, the decay rates, and the lifetimes of the KK particles are also affected by the fundamental coupling λ in five dimensions and, in the second model, by the new-physics energy scale k.
IV. CONCLUSIONS AND DISCUSSIONS
In this paper we have investigated the effective action on the brane arising from two five-dimensional braneworld models with a perturbative scalar-field interaction. We have discussed how a five-dimensional interaction manifests itself in our four-dimensional world, in the cases of the PT and harmonic potentials. We demonstrated features that are common to these effects, rather than specific to these particular potentials; the conclusions should apply to interactions more general than φ⁴.
The first conclusion is that one or more KK particles will appear if we raise the energy of particle collisions, and new interactions among those KK particles, including the zero-mode particle, will arise at the same time. The properties of these interactions can be read off from the effective coupling constants. Nevertheless, the new particles corresponding to higher KK modes are harder to discover, not only because of the higher energy required, but also because the relevant effective interactions diminish.
There is an obvious tendency in the effective coupling constants: γ_{klmn} tends to vanish when k, l, m, n "depart" from each other. In particular, the zero mode ϕ₀ effectively decouples from KK modes ϕₙ with large n.
These properties of the KK particles lead to an interesting observation. The KK particles with higher modes have larger masses and longer lifetimes, and they barely interact with ordinary matter in the four-dimensional world if the mode number n is large enough. These are precisely features of dark matter. Therefore, KK particles with higher modes might be dark matter candidates if they are localized on the brane.
In future work, we would like to study the interaction between KK fermions and KK vectors arising from Ψ̄γ^M A_M Ψ. The method is analogous to what we performed in this paper, but there are significant differences. For a spinor field, the left- and right-chiral components of the zero mode cannot be localized on the brane at the same time [15,16,25,30-35]. If we regarded the zero-mode particle as an electron, a contradiction would arise, since electrons are observed with both chiralities. On the other hand, the four-dimensional effective electrodynamics may not be gauge invariant, as a result of the massive KK modes of the vector field.
TABLE I: The values of γ_{nnnn} in units of γ_{0000}.
TABLE II: The values of γ_{00mn} in units of γ_{0000}.
TABLE III: The values of Γ_{n000} in units of 10⁻⁹ k³ for even n.
TABLE IV: The values of Γ_{n100} in units of 10⁻⁹ k³ for odd n. | 2022-10-28T01:16:06.159Z | 2022-10-27T00:00:00.000 | {
"year": 2023,
"sha1": "d7815ce25c4b667f3f54559f3e7c4a2547048d0d",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1140/epjc/s10052-023-11270-y.pdf",
"oa_status": "GOLD",
"pdf_src": "ArXiv",
"pdf_hash": "39193a911d361599c43bae7d385be2007753b153",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
119301227 | pes2o/s2orc | v3-fos-license | Full sky harmonic analysis hints at large UHECR deflections
The full-sky multipole coefficients of the ultra-high energy cosmic ray (UHECR) flux have been measured for the first time by the Pierre Auger and Telescope Array collaborations using a joint data set with E>10 EeV. We calculate these harmonic coefficients in the model where UHECR are protons and sources trace the local matter distribution, and compare our results with observations. We find that the expected power for low multipoles (dipole and quadrupole, in particular) is systematically higher than in the data: the observed flux is too isotropic. We then investigate to what degree our predictions are influenced by UHECR deflections in the regular Galactic magnetic field (GMF). It turns out that the UHECR power spectrum coefficients $C_\ell$ are quite insensitive to the effects of the GMF, so it is unlikely that the discordance can be reconciled by tuning the GMF model. On the contrary, a sizeable fraction of uniformly distributed flux (representing for instance an admixture of heavy nuclei with considerably larger deflections) can bring simulations and observations to an accord.
INTRODUCTION
Despite the fact that the actual sources of UHECRs have still not been identified, it is rather natural to expect them to follow, to some extent, the large scale structure (LSS) observed in the sky. Indeed, the propagation distance of UHECRs above 10¹⁹ eV is limited to several hundred Mpc due to their interaction with the intergalactic medium (IGM) [1,2]. The matter distribution is not homogeneous over such distances; hence, if UHECRs are extragalactic, one expects an anisotropy in the distribution of their arrival directions, reflecting the inhomogeneity of the source distribution. Such anisotropies, on a sphere, can be revealed via a harmonic analysis, in which the coefficients of the complete set of spherical harmonics carry the information, multipole by multipole, about the possibly non-uniform UHECR flux. The harmonic analysis is thus a way to compress the data into a form most suitable for statistical tests.
Recently the Telescope Array (TA) and Pierre Auger Observatory (PAO) collaborations have joined forces to provide the first full-sky ultra-high energy cosmic ray (UHECR) map [3]. With single ground-based experiments necessarily blind to a large chunk of the sky, only the combined data sets from two or more observatories can provide the complete picture. This is particularly important for the harmonic analysis, which, as detailed in the joint TA/PAO paper, strongly benefits from whole-sky coverage, both theoretically/qualitatively (no need to assume anything about the flux) and practically/quantitatively (some errors are significantly suppressed) [4]. Joining the data of the two experiments thus made it possible, for the first time, to measure the harmonic multipoles of the UHECR flux distribution in an assumption-free way.
One natural question is then: is the harmonic power spectrum expected from the LSS the same as that actually observed? The caveat here is that UHECRs do not travel on straight lines from source to the Earth, because the magnetic fields (MFs) they encounter on their way deflect their trajectories and mask the original arrival directions, and with them the sources of UHECRs. The most relevant MF in this respect is hosted by our own Galaxy (GMF), with strength in the µG range; see for instance [5] and references therein. The GMF is separated into large-scale (regular or coherent) and small-scale (turbulent or random) components, the regular part dominating the CR deflections. Combined, these fields are expected to steer 10 EeV protons by a few degrees far from the galactic plane, and by up to several tens of degrees at very low galactic latitudes. Now, what does this mean for the harmonic analysis? Are the anisotropies erased, or is the power spectrum distorted?
What we find in this analysis is a striking mismatch between the power spectrum we simulate from the LSS, assuming a purely protonic primary composition, and the one reconstructed from the data: the amplitudes of the low multipoles (particularly the quadrupole C₂, the second moment of the harmonic decomposition) in the data are significantly lower than the calculated LSS ones. The scope of this work is to delve deeper into this issue; in particular, we want to understand the rôle of the GMF in this result.
Before embarking on the analysis, we briefly summarise our findings:
• there is a lack of power in the low multipoles (notably, dipole and quadrupole) as observed by TA/PAO compared to the expectations from protons tracing the LSS;
• the regular GMF shuffles the direction-dependent individual harmonic coefficients, demonstrating that these are not fully reliable indicators of source anisotropy;
• however, the power spectrum is barely affected by the regular GMF, which means that the latter cannot bring observations and simulations into accord;
• the random GMF also has very little effect on the low multipoles;
• a moderate fraction of uniformly distributed events (which could, for instance, represent an admixture of heavy nuclei) does instead temper the tension between data and expectations, for it contributes to the isotropisation of the signal even on the largest angular scales.
For the rest of the paper, we begin by summarising the results of the joint TA/PAO analysis in Section 2; we then introduce the simulated power spectra from the LSS and discuss the missing-quadrupole problem in Section 3. The impact of the regular GMF on this result is detailed in Section 4, whereas the turbulent GMF is discussed in Section 5, alongside the effect on the power spectrum of a different composition of cosmic-ray primaries. We conclude in Section 6.
THE JOINT TA/PAO ANALYSIS
As with any angular distribution on the unit sphere, the flux of cosmic rays Φ(n) in a given direction n can be decomposed in a multipolar expansion onto the spherical harmonics Y_{ℓm}(n),

Φ(n) = Σ_{ℓ≥0} Σ_{|m|≤ℓ} a_{ℓm} Y_{ℓm}(n).    (1)

Anisotropy fingerprints are encoded in the a_{ℓm} multipoles: non-zero amplitudes in the ℓ modes contribute variations of the flux on angular scales of about π/ℓ radians. Cosmic-ray events, in this language, are simply sample points for the underlying source distribution on the sphere. However, because the sky coverage is non-uniform, what these events are sampling is the flux times the exposure distribution. With full-sky but non-uniform coverage, the customary recipe [3] for decoupling directional-exposure effects from anisotropy consists in weighting the observed angular distribution by the inverse of the relative directional exposure function ω_r(n), so that, inverting Eq. (1), the actual data points give unbiased estimators â_{ℓm} of the underlying flux; one can prove [3] that upon averaging over a large number of realisations ⟨â_{ℓm}⟩ = a_{ℓm}. Here N is the number of events, which are described as Dirac delta functions centred at the actual arrival directions n_i. While the individual a_{ℓm} coefficients are direction-dependent, the angular power spectrum coefficients

C_ℓ = (1/(2ℓ+1)) Σ_{m=−ℓ}^{ℓ} |a_{ℓm}|²,

defined as averages of |a_{ℓm}|² over m, are rotation-invariant quantities. Given that the regular GMF results in a (direction-dependent) rotation of the events, one may expect the power spectrum coefficients C_ℓ to be much less sensitive to the presence of the GMF than the individual amplitudes a_{ℓm}. Now, in order to achieve full-sky coverage the data of two different experiments must be combined; hence, the total exposure has to be cross-calibrated so as not to introduce spurious effects in the estimator (2). The details of the cross-calibration procedure, and its performance, do not matter for us here, but can be found in Ref. [3] (see also [6], §2). Note, however, that in the cross-calibration procedure only a small subset of all events, those in the region of overlapping exposures, is used, and that the cross-calibration errors propagate mainly into the m = 0 components of the coefficients a_{ℓm} (in equatorial coordinates).
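As a schematic illustration (deliberately ignoring the cross-calibration machinery, and assuming the simple inverse-exposure-weighted estimator â_{ℓm} ∝ Σ_i Y*_{ℓm}(n_i)/ω_r(n_i), normalised to the monopole as in the convention quoted below), the coefficients and the power spectrum can be computed along these lines:

```python
import numpy as np
from scipy.special import sph_harm

def alm_estimator(ra, dec, omega_r, lmax=3):
    """Exposure-weighted a_lm estimate from event directions (radians).
    Note scipy's convention: sph_harm(m, l, azimuth, colatitude)."""
    theta, phi = ra, np.pi / 2.0 - dec
    w = 1.0 / omega_r
    a = {(l, m): np.sum(w * np.conj(sph_harm(m, l, theta, phi)))
         for l in range(lmax + 1) for m in range(-l, l + 1)}
    a00 = a[(0, 0)].real
    return {k: np.sqrt(4.0 * np.pi) * v / a00 for k, v in a.items()}

def power_spectrum(a, lmax=3):
    """C_l = (1 / (2 l + 1)) * sum_m |a_lm|^2."""
    return [sum(abs(a[(l, m)])**2 for m in range(-l, l + 1)) / (2 * l + 1)
            for l in range(lmax + 1)]

# Toy usage: isotropic mock events with unit relative exposure.
rng = np.random.default_rng(0)
ra = rng.uniform(0.0, 2.0 * np.pi, 10000)
dec = np.arcsin(rng.uniform(-1.0, 1.0, 10000))
print(power_spectrum(alm_estimator(ra, dec, np.ones(10000))))
```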
The data sets used in the analysis consist of UHECRs with energies above 10 EeV, which amounts to 8259 events for PAO and 2130 for TA. Table 2 reports the results for the a_{ℓm} coefficients as presented in the joint TA/PAO paper.
As one may see, there are no statistically significant deviations from the isotropic expectation in any of the harmonic coefficients, the largest discrepancy being in the value of a_{1,−1}. One also observes that the errors are systematically larger for m = 0, particularly for the dipole ℓ = 1: a consequence of the cross-calibration procedure.
Dipole, quadrupole, and octupole moments, with their uncertainties, in equatorial coordinates. These are normalised so that the a_{ℓm} measure the relative deviation with respect to the monopole a_{00}; that is, the a_{ℓm} are redefined such that a_{ℓm} → √(4π) a_{ℓm}/a_{00}.
LARGE SCALE STRUCTURES AND ANISOTROPIES
In order to compare the power spectrum reconstructed from the data with the expectations from the LSS model, we need to build the expected flux map, which we then sample with random Monte Carlo events to derive expectations for the multipole coefficients. The procedure used to build the expected flux is described in detail in [7,8]. We first choose a galaxy catalogue, in this case the 2MASS Galaxy Redshift Catalog (XSCz), which is derived from the 2MASS Extended Source Catalog (XSC). The flux is calculated from the flux-limited subsample of galaxies with apparent magnitude m < 12.5 at distances D < 250 Mpc by the method described in [7,9,10]. The contribution from beyond 250 Mpc is taken to be uniform. All galaxies are assumed to have the same intrinsic UHECR luminosity. To determine their individual contributions to the total flux we propagate protons to the Earth, taking full account of redshift, distance, and attenuation effects. Individual fluxes are then smeared with a Gaussian distribution of angular width ϑ, which is a free parameter. This is done to account for the limited detector resolution and, most importantly, for the effects of the regular and turbulent GMF. In this Section we do not attempt to trace the events back through the coherent GMF; that is instead investigated in detail in Sec. 4. Finally, where relevant, the flux map is weighted by the non-uniform exposure of the actual experiment (or experiments).
With the map of the expected flux on Earth at hand we simulate random sets of cosmic ray events that this flux distribution would produce. Each mock set has the same number of events as the actual data. We then calculate the harmonic coefficients and the power spectrum for each of these mock sets. For each harmonic coefficient we determine the mean value and the variance; we generate as many mock sets as is necessary to make the variances negligible. In Fig. 1 we show the result of this procedure: orange diamonds are the actual data points with their errors; blue triangles, green boxes, and red circles are the expectations from the simulations with smearing angles of ϑ = 15 • , 25 • , and 35 • , respectively 3) .
The most striking feature of this plot is the considerable tension between the power in the low multipoles (notably, the dipole and quadrupole) expected from the galaxy distribution and what is observed in the data, the predicted power being systematically higher. With a smearing angle of 15°, both the dipole and quadrupole components of the flux are expected to vary at the level of ∼10%, while no such flux variations are detected in the data. However, because the dipole measurement has a larger error, the discrepancy is most prominent for the quadrupole. As the smearing angle grows, the expected flux variation is watered down; the larger the multipole number, the faster isotropy sets in.
3) Smearing the flux with a given ϑ by definition wipes out any power for multipoles ℓ ≳ π/ϑ, so we do not include multipoles for which, by construction, there is no power.
All the higher multipoles are more or less within their expected values in the LSS case, although at the level of precision currently attainable with TA and PAO they are difficult to distinguish from simple isotropy. We devote the rest of the paper to discussing this observation, and to understanding and clarifying the rôle of the GMF in drawing conclusions from it.
A comment is in order at this point. As we have seen, the measurement of the power spectrum coefficients C_ℓ, in particular C₁, is obscured by the cross-calibration procedure, which introduces a large error. One may define observables that are free from this problem (note that, as we will argue in the next section, such observables would also lose an important advantage of the C_ℓ: their insensitivity to the regular magnetic field). These are the coefficients c_n of the Fourier decomposition of the flux in right ascension φ, defined with the basis functions Y_n(φ) ≡ √2 cos(nφ)/√(2π) for n > 0, 1/√(2π) for n = 0, and √2 sin(|n|φ)/√(2π) for n < 0. The coefficients c_n can be measured by a single experiment without making any assumption about the flux. They are obviously free from the errors introduced by the cross-calibration procedure 4).
In order to make contact with previously defined quantities, we note that c_n can be expressed in terms of the spherical harmonic coefficients a_{ℓm} with ℓ ≥ |n| (see the appendix for a derivation). So we may use the harmonic coefficients a_{ℓm} calculated above for the LSS model to infer the LSS prediction for c_n. The two lowest coefficients c_{±1} receive contributions from the projection onto the right-ascension plane of the dipole and of the odd higher multipoles. For the LSS model the contribution of the dipole is dominant, and we may write the approximate relations of Eq. (4).
4) Although it is indeed possible to determine these coefficients unambiguously from a single experiment, it is not guaranteed that they will agree between the northern and southern hemispheres.
In order to compare with the measurements, the coefficients (4) can be combined into an amplitude d² ≡ (c²₋₁ + c²₊₁)/2 and a phase α ≡ arctan(c₋₁/c₊₁). For a smearing angle of 15° the LSS model predicts d = 0.0226 and α = 73°, to be compared with d = 0.0138 ± 0.0049 and α = 89° ± 20° obtained from the joint data set of [3]. Notice that the incompatibility with the LSS model worsens when we include the z component to form C₁, despite the fact that the largest error comes with a_{1,0}; this is because the data value of a_{1,0} is very small compared to that of the LSS model, while the latter is within a factor of 3 of the other dipole coefficients.
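For reference, a minimal weighted first-harmonic (Rayleigh-type) analysis in right ascension, which up to normalisation conventions reproduces the amplitude d and phase α built from c_{+1} (cosine) and c_{−1} (sine), may be sketched as follows:

```python
import numpy as np

def first_harmonic_ra(ra, omega_r):
    """Weighted first harmonic in right ascension (ra in radians); returns
    an amplitude (up to an O(1) normalisation) and the phase in degrees."""
    w = 1.0 / np.asarray(omega_r)
    c_cos = np.sum(w * np.cos(ra)) / np.sum(w)
    c_sin = np.sum(w * np.sin(ra)) / np.sum(w)
    return np.hypot(c_cos, c_sin), np.degrees(np.arctan2(c_sin, c_cos))
```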
The relations (4) become exact if the flux contains only a dipole plus even multipoles, with zero power for all odd ℓ > 1; in this approximation these coefficients have also been measured by the Pierre Auger collaboration alone [11,12], and in that case the errors on these quantities are indeed smaller. The price to pay is an a priori decision on what the flux should be.
THE GMF AND THE HARMONIC MULTIPOLES
The results of the previous section did not take into account the propagation of UHECRs through the GMF directly, but only indirectly, through a variable and relatively large smearing angle. A better approximation is to treat the regular part of the GMF explicitly and leave the smearing to represent the random deflections only, which can amount to about 10° to 20° for our 10 EeV protons; see [13,14]. One may thus wonder whether the regular GMF could wipe out, or distort, any anisotropic harmonic imprint, for example by transferring power from one multipole to another. A caveat here is that the regular GMF is not known well enough for accurate predictions of the a_{ℓm}.
We will show, however, that while the direction-dependent a_{ℓm} are indeed quite sensitive to the strength (and shape) of the magnetic field in the Milky Way, the direction-blind power spectrum C_ℓ is quite stable against these perturbations.
In order to demonstrate this empirically we adopted the model of [15] for the regular GMF and simulated again the expected fluxes from the LSS, now propagating the flux through the GMF. This GMF model has two components, a disk field and a halo field, with independent strengths. We chose to work with the best-fit parameters reported in [15] for the version dubbed bisymmetric spiral structure, or BSS: the overall disk and halo strengths are B_disk = 2 µG and B_halo = 4 µG, respectively. We should stress, however, that neither the choice of the model parameters nor the model itself has too strong an impact on our results, since what we found is that the effect of the GMF on the coefficients of the power spectrum is small.
FIG. 2: Colour-coded relative variation of the a_{ℓm} and the C_ℓ with varying magnetic field strength (in percent).
To assess the variability, or sensitivity, of the harmonic coefficients and the power spectrum to the strength of the GMF (a similar virtual experiment can be performed by changing its shape, or both), we generated 1000 fluxes 5) with randomly chosen field strengths ranging from zero to twice the best-fit values, that is, B_disk = [0, 4] µG and B_halo = [0, 8] µG. Since each time the reconstructed flux is slightly different, we show the relative percent variation (standard deviation over the mean, σ_x/mean_x, where x stands for the a_{ℓm} and C_ℓ) in Fig. 2. We immediately notice that the average spread of some harmonic coefficients a_{ℓm} is much larger than that of the power spectrum coefficients C_ℓ, proving our point above. We show in the figure only multipoles up to ℓ = 5, but we performed the same exercise for multipoles up to ℓ = 20 and obtained the same result. A legitimate doubt is that, since we do not expect these distributions to be Gaussian (we are not sampling the same sky many times with different randomly generated events), a very skewed distribution may bias this result, and the standard deviations would not represent the actual excursion of the quantities under observation. For example, the spread of the C_ℓ might be a bad indicator of how much the power spectrum actually varies while being accurate in describing the fluctuations of the a_{ℓm}. We have checked that the ratio between the total excursion of a given parameter (its maximum minus its minimum) and the corresponding deviation, |max(x) − min(x)|/σ_x, is more or less the same (to within 15%) for all the quantities we analyse, which means that for both the a_{ℓm} and the C_ℓ coefficients the spread is an equally good descriptor of the range these parameters can attain.
5) When propagating UHECRs through the magnetic field we use monochromatic primaries for simplicity, since the deflections are maximised at the lowest energy.
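The spread statistics used here are elementary; a sketch of the two diagnostics (percent variation and total-excursion check), to be applied per coefficient across the 1000 GMF realisations:

```python
import numpy as np

def spread_diagnostics(samples):
    """Relative variation (percent) and total-excursion-to-sigma ratio of a
    coefficient (a_lm or C_l) across the ensemble of GMF realisations."""
    s = np.asarray(samples, dtype=float)
    rel_percent = 100.0 * s.std() / abs(s.mean())
    excursion_over_sigma = (s.max() - s.min()) / s.std()
    return rel_percent, excursion_over_sigma
```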
The conclusions we draw from these tests are that:
• the power spectrum is a much more suitable quantity for assessing the anisotropic properties of the UHECR flux when dealing with the GMF, as it is much more robust than the direction-dependent a_{ℓm} against the still poorly known details of the GMF itself;
• the power spectrum itself is not much affected quantitatively by the regular GMF: we thus believe it is unlikely that the reason behind the low quadrupole observed in the data is to be found in GMF-induced UHECR deflections.
ISOTROPIC FRACTION AND THE HARMONIC MULTIPOLES
So far we have always worked with proton primaries, but the UHECR composition at the highest energies is not known. If instead of protons we were to propagate iron nuclei, the deflections they endure would be a factor of 26 larger, owing to their correspondingly lower rigidity. We therefore expect that a fraction of iron or other nuclei in the total UHECR flux, because of its tendency to isotropise, would help loosen the tension between the observed and simulated multipoles.
To assess this, we again generated several flux maps in which we subtract a fraction of the total proton flux and replace it with an isotropic one, to roughly simulate the contribution of iron. We vary this "iron fraction" (essentially, the isotropic fraction) between zero and one and recalculate the power spectrum for each map; we then compare with the data and their errors, and compute the statistical significance of the low coefficients of the power spectrum in each map. Were the primaries a mix of several different elements, the LSS predictions would fall in between the values we obtain below.
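At the level of expectation values (i.e., ignoring the finite-event shot noise that the Monte Carlo maps include), the effect of an isotropic admixture on the power spectrum is analytic: a flux (1 − f)Φ_LSS + fΦ_iso with fixed monopole rescales every a_{ℓm} with ℓ ≥ 1 by (1 − f), hence every C_ℓ by (1 − f)². A one-line sketch:

```python
import numpy as np

def mixed_power_spectrum(cl_lss, iso_fraction):
    """C_l of (1 - f) * LSS flux + f * isotropic flux (same monopole):
    the isotropic part carries no power at l >= 1."""
    cl = np.array(cl_lss, dtype=float)
    cl[1:] *= (1.0 - iso_fraction)**2
    return cl

print(mixed_power_spectrum([1.0, 0.01, 0.02, 0.005], iso_fraction=0.5))
# -> l >= 1 power drops by a factor of 4 for a 50% isotropic fraction
```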
In Fig. 3 we show the 1σ, 2σ, 3σ, etc., contours for the dipole C₁, quadrupole C₂, and octupole C₃, where in addition to varying the isotropic fraction we also change the smearing angle of the map, to account for a variable turbulent GMF strength.
We can perform this test with or without the regular GMF, and according to the previous section the results should not change much; this is indeed the case, as Fig. 4 shows: the curves move down by approximately 1σ, which again does not suffice to resolve the tension between data and LSS expectations.
FIG. 3: Dipole, quadrupole, and octupole against LSS expectations for varying iron fraction and turbulent GMF; no regular GMF.
As we see, both the dipole and the quadrupole in the data prefer a more isotropic Universe, with the quadrupole being the most markedly incompatible with the expectations from the LSS.
Correlating the arrival directions of UHECRs with the LSS is a logical surmise, so this result is somewhat puzzling; however, it is not completely unexpected, as at this energy it is known that, for example, the TA data alone prefer isotropy over the LSS [8]. What is shown here is then simply another parametrisation of the same result, but one which can bring some insight into the physics behind it. For instance, there may be a bias or systematic effect which causes the dipole and/or the quadrupole to have surprisingly low power; such an effect could arise from an excess on the galactic plane, for example.
It would be extremely interesting to be able to look at the same figures above, say, 60 EeV, where instead an isotropic flux is incompatible with TA data at more than 3σ at not too large smearing angles [8]; the same data are more in line with the predictions from LSS. Unfortunately, the very low statistics in the common band of the two experiments (and consequently, large errors) makes this task quite futile at present.
Finally, as expected, the random GMF, mimicked through the smearing angle, does not affect the low-ℓ part of the power spectrum significantly; only at large smearing angles do features tend to be blurred, so that the differences between the LSS flux and an isotropic one are diluted.
CONCLUSION
For the first time we have a full-sky map of UHECRs; this opens up the possibility of decomposing the flux on the sphere in a harmonic basis and obtaining its angular power spectrum C_ℓ, shown in Fig. 1. We wanted to see whether, in this language, a source distribution which traces that of matter (galaxies) would produce the same C_ℓ.
We find that, assuming pure proton primaries and discarding for now the GMF, this is not the case. In particular, LSS models tend to generate much larger power than what is extracted from the data, especially at low multipoles such as the dipole and quadrupole; the experimental full-sky map is much more isotropic. The discrepancy for the quadrupole C₂ can be as strong as about 6σ, while for the dipole C₁ at small smearing angles the data value is 4σ away from the LSS prediction.
FIG. 4: Dipole, quadrupole, and octupole against LSS expectations for varying iron fraction and turbulent GMF; with regular GMF.
When we turn on the regular GMF we observe a
strong correlation between the variability of the a_{ℓm} values and the strength and shape of the GMF; at the same time, the power spectrum C_ℓ is much more stable against the same perturbations. The latter is therefore a more reliable observable in investigations like the one we present here. This also means that the incompatibility between data and LSS is not an artifact of ignoring the GMF.
Since the data prefer a more isotropic Universe, one possibility is that the primaries are heavy and diffuse in the GMF (both regular and random). We introduced heavy nuclei into our study in the guise of a variable isotropic fraction of the total CR flux, to understand how much more isotropic the distribution of UHECR sources needs to be: in some cases (the quadrupole at small and intermediate smearing angles), taming the LSS prediction may require an isotropic flux fraction of about 50% or more.
In fact, since our method simply discriminates between an anisotropic proton flux and an isotropic one, alternative explanations not related to the composition of the primaries are possible. An additional isotropic component may be the result of acceleration mechanisms operating away from galaxies; alternatively, the more isotropic distant sources may be contributing more than expected (for instance due to exotic particle interactions); one more possibility is that our Galaxy is immersed in a strong magnetic wind, which isotropises the arrival directions of UHECRs even before they reach the Milky Way.
At 10 EeV, the energy threshold of the data sets used in this work, the outcome of our analysis is not a surprise, as the data are known to be incompatible with LSS models; the multipolar description of the same result (now with data from the full sky) can help in identifying the physical reason behind it: for instance, the C₁ and C₂ results may be signalling the presence of some systematic effect. On the other hand, the 60 EeV data do prefer the LSS: it would be extremely useful to repeat our exercise at those energies, but with current data this would be inconclusive. In the future, the source of these discrepancies could be identified, bringing crucial insight into the hunt for UHECR sources.
"year": 2015,
"sha1": "e687ab5c8d3f8a0a56e9bf57170cf41a80b75ff7",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1411.2486",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "e687ab5c8d3f8a0a56e9bf57170cf41a80b75ff7",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
253880494 | pes2o/s2orc | v3-fos-license | Editorial: New perspectives of L2 acquisition related to human-computer interaction (HCI)
Editorial: New perspectives of L2 acquisition related to human-computer interaction (HCI)
Editorial on the Research Topic: New perspectives of L2 acquisition related to human-computer interaction (HCI)
The present Research Topic aimed to collect cutting-edge research into L2 acquisition with respect to very recent findings in psycholinguistics, cognitive science, artificial intelligence (AI), Information and Communication Technologies (ICT), data science, and other human-computer interaction (HCI) related areas. Unprecedented developments and trends in AI and ICT need our attention as they (will) dramatically influence how languages are acquired and used. The four research articles published under the topic cover a range of areas on which international teams are working worldwide. They pay tribute to the value of interdisciplinarity as a key to better understanding the issues at play when technology meets language, always keeping in mind the needs and best interests of the human end users. The methodologies adopted range from experimental to qualitative studies; the target populations range from young children to English for specific purposes learners; and the language foci cover pronunciation, vocabulary, sentence comprehension, and also emotions more generally.
Song et al. address interactions between voice-activated AI assistants and human speakers; they also discuss the implications of such interactions for Second Language Acquisition. Their experimental study shows that whilst voice-activated artificially intelligent (voice-AI) assistants are very effective at processing spoken commands by native speakers, the results are much less good when the commands are produced by L2 speakers. Their study focuses on minimal vowel pairs and on Korean-speaking L2 learners of English. The AI assistant (Alexa in the present case) only achieves a 55% accuracy rate with L2 productions, compared to 98% for native productions. The study also examines modifications following a misrecognition by Alexa in the first attempts and shows that, in such cases, L2 learners made acoustic modifications which exhibited some, but not all, of the predicted characteristics of clear speech and target-like pronunciation. Despite functional fluency in a second language, subtle but significant pronunciation differences can thus still lead to misrecognition by voice-AI devices. The present study suggests, in contrast, encouraging learners to repeat their utterance rather than abandoning it. Voice-AI can thus also prove useful as a pedagogical tool for learning/improving L2 pronunciation.
Rafiq et al., for their part, offer new qualitative perspectives in HCI for the design of an English language mobile module in science, technology, engineering and mathematics (STEM). In a qualitative study using semi-structured interviews, the authors covered four main themes: the importance of learning English, learners' reported problems, their language learning strategies and their readiness in using a mobile app. Mobile learning seemed an ad hoc option for learners. Beyond the typical needs related to app usage (e.g., user-friendliness and comfort), the needs' based approach adopted revealed the importance of vocabulary acquisition in ESP as a primer for all other skills. It also testified to the anxiety that STEM learners face in terms of vocabulary use. In addition, participants also mentioned the importance of audio-visual materials to support vocabulary acquisition and stressed the importance of the teacher's role to scaffold language learning.
Boustani et al.'s study analyzes how multisensory input modulates L2 Sentence Comprehension. A blend of visual, auditory, and kinesthetic/tactile senses (coined as exvolvement) and a combination of auditory, visual, kinesthetic/tactile, olfactory, and gustatory modalities (coined as involvement) were used. The authors focus on two specific measures of the event-related potential (ERP) tool to measure brain response as a result of specific sensory, cognitive, or motor events: N400 and P200. The former is sensitive to prediction and expectation functions, as well as semantic processing of words and sentences, whilst the latter is sensitive to diverse language-oriented stimuli, with functions more directed toward attention. Using various multisensory input (to present a list of unfamiliar L2 words subsequently embedded in an acceptability judgment task with 360 pragmatically correct and incorrect sentences), the authors found that the combination of five senses leads to more accurate and quicker responses, thereby empowering the subjects' performance on the acceptability judgment task. Overall, they concluded that the more senses are involved in learning a new concept, the more probable the new information is committed to long-term memory and less susceptible to forgetting.
All in all, despite their varied foci, results and methodologies, the four studies invite us not to underestimate the centrality of "humans" in human-computer interactions. Whilst technology is increasingly used in language learning and teaching, human interactions, teacher scaffolding, emotions, and multisensory aspects are more central than ever and should never become the "parent pauvre" of language learning and teaching. | 2022-11-26T14:04:39.169Z | 2022-11-25T00:00:00.000 | {
"year": 2022,
"sha1": "243538a285f32454ddc4dec147c1a02184727a39",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "ScienceParsePlus",
"pdf_hash": "243538a285f32454ddc4dec147c1a02184727a39",
"s2fieldsofstudy": [
"Computer Science",
"Linguistics",
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
3780797 | pes2o/s2orc | v3-fos-license | Effect of light wavelength on hot spring microbial mat biodiversity
Hot spring associated phototrophic microbial mats are purely microbial communities, in which phototrophic bacteria function as primary producers and thus shape the community. The microbial mats at Nakabusa hot springs in Japan harbor diverse photosynthetic bacteria, mainly Thermosynechococcus, Chloroflexus, and Roseiflexus, which use light of different wavelength for energy conversion. The aim of this study was to investigate the effect of the phototrophs on biodiversity and community composition in hot spring microbial mats. For this, we specifically activated the different phototrophs by irradiating the mats with different wavelengths in situ. We used 625, 730, and 890 nm wavelength LEDs alone or in combination and confirmed the hypothesized increase in relative abundance of different phototrophs by 16S rRNA gene sequencing. In addition to the increase of the targeted phototrophs, we studied the effect of the different treatments on chemotrophic members. The specific activation of Thermosynechococcus led to increased abundance of several other bacteria, whereas wavelengths specific to Chloroflexus and Roseiflexus induced a decrease in >50% of the community members as compared to the dark conditions. This suggests that the growth of Thermosynechococcus at the surface layer benefits many community members, whereas less benefit is obtained from an increase in filamentous anoxygenic phototrophs Chloroflexus and Roseiflexus. The increases in relative abundance of chemotrophs under different light conditions suggest a relationship between the two groups. Aerobic chemoheterotrophs such as Thermus sp. and Meiothermus sp. are thought to benefit from aerobic conditions and organic carbon in the form of photosynthates by Thermosynechococcus, while the oxidation of sulfide and production of elemental sulfur by filamentous anoxygenic phototrophs benefit the sulfur-disproportionating Caldimicrobium thiodismutans. In this study, we used an experimental approach under controlled environmental conditions for the analysis of natural microbial communities, which proved to be a powerful tool to study interspecies relationships in the microbiome.
Introduction Phototrophic microbial mats are multi-layered biofilms consisting of phototrophic and chemotrophic bacteria that form in illuminated, undisturbed habitats such as hot springs, shallow sea floors, and salt lakes [1]. Hot spring microbial mats are purely microbial ecosystems owing to their elevated temperatures [2,3], and can be found all over the world. In particular, various phototrophic bacteria and coexisting chemotrophic bacteria in the mats in Nakabusa hot springs, Japan, have been studied extensively [2,[4][5][6][7].
Photosynthetic bacteria in microbial mats shape the microbial community and influence chemotrophic bacteria in many ways, e.g., they act as primary producers, aerobic environment producers, or sulfide consumers (Fig 1). For example, autotrophic cyanobacteria produce oxygen, organic matter, and vitamins through photosynthesis and provide an environment for aerobic heterotrophs [8][9][10][11]. Some filamentous anoxygenic phototrophs such as Chloroflexus spp. play a crucial role in the natural sulfur cycle by oxidizing sulfide to elemental sulfur [5,7,12,13]. Transcriptomic and metabolomic studies of microbial mats confirmed the exchange of organic carbon, O₂, and nitrogen between photoautotrophic and photoheterotrophic bacteria [14][15][16]. These findings support a light-dependent relationship between photosynthetic and chemotrophic bacteria. Although reports comparing the effect of environment on diel cycling or between different hot springs provide valuable information, they represent a purely observational approach. Thus, experimental studies using natural communities such as the one presented here will expand current knowledge about the environmental impact on the interspecies relationships shaping microbial mat communities.
Based on the different absorption maxima of various phototrophs, we hypothesize that irradiating microbial mats with specific light wavelengths will activate the corresponding photosynthetic bacteria to subsequently impact community composition. In the Nakabusa hot springs, oxygenic, photosynthetic cyanobacteria (genus Thermosynechococcus) occur in the surface layers of microbial mats at temperatures of 48-62˚C [6], whereas phototrophic Chloroflexi (genera Chloroflexus and Roseiflexus) are found underneath the cyanobacterial layer [2,6,17]. These photosynthetic bacteria each utilize different light wavelengths; cyanobacteria mostly absorb light around 625 and 680 nm via phycobilin and chlorophyll (Chl) a, respectively [18], whereas Chloroflexus and Roseiflexus primarily absorb wavelengths of around 740 and 880 nm via bacteriochlorophyll (BChl) c [19,20] and BChl a, respectively [21,22]. Cyanobacterial photosynthesis provides organic matter and oxygen to the surrounding microenvironment [8][9][10][11]. Chloroflexus spp. are reported to grow photoautotrophically via the 3-hydroxypropionate pathway and subsequently circulate organic matter to adjacent heterotrophic bacteria [19,[23][24][25]. Although Roseiflexus castenholzii has not been shown to grow photoautotrophically, its ability to fix inorganic carbon during autotrophic or mixotrophic growth is assumed, given that it harbors the complete gene set required for the 3-hydroxypropionate pathway [26,27]. Furthermore, Roseiflexus sp. RS-1 isolated from hot springs in Yellowstone National Park (YNP) was demonstrated to grow photoautotrophically or photomixotrophically in situ by stable carbon isotope and metatranscriptome analysis [14,26,28]. Based on this evidence, we hypothesize that irradiating microbial mats with defined light wavelengths utilized by Thermosynechococcus, Chloroflexus, and Roseiflexus spp. will specifically enrich the corresponding phototrophs, as well as their commensal chemotrophs. In the present study, natural microbial mat communities were incubated under controlled light conditions in situ and analyzed by 16S rRNA gene sequencing, which serves as a powerful approach to study interspecies relationships in microbiomes.
triplicate light-irradiating devices covered with a clear acrylic board (S3 Fig). The remainder of the bacterial suspension, accounting for ~15 cm³ of the original mat, was transferred to 2-mL reaction tubes (Eppendorf, Hamburg, Germany) for DNA isolation and 16S rRNA gene analysis. The devices were then placed in a newly dug horizontal hot spring channel at Site B; mounting the experimental setup at the Wall site was not possible due to its vertical location (S1 Fig). The average temperature and pH at the spot where the devices were placed were 50-56 ˚C and 7.3, respectively (S4 Fig). After irradiation under experimental conditions for 20 days, the mats were collected using autoclaved tweezers and mixed with the biofilms that had developed on the surface of the clear cover. Hot spring water (1 L) was collected from the incubation site surrounding device 1 on days 0, 7, 14, and 20 for 16S rRNA gene sequencing. Due to the limited volume of experimental mat, mat samples were obtained only at the end of the experiment, and temporal observations could not be made. All measurements were performed in triplicate (#1-3). Due to the serial positioning of the triplicates in the hot spring stream, the microbial communities experienced slightly different ambient temperatures (55 ˚C, 53 ˚C, and 51 ˚C for devices 1-3, respectively; S4 Fig).
Light irradiation
Microbial mats were irradiated with light at specific wavelengths using a device developed by our group, consisting of a black acrylic board (Shinkolite, Mitsubishi Rayon Co., Ltd., Tokyo, Japan) with five tracks for dark, 625 nm, 730 nm, 890 nm, and all three wavelengths combined (S5 Fig). Homogenized microbial mat samples were placed in the cavities and covered with a clear acrylic board. The mats were continuously irradiated for 20 days using LEDs specific for each wavelength (5 mA; OSR5CA5B61P for 625 nm, SX534IR-730 for 730 nm, and TSHF5410 for 890 nm; all from Akizuki Denshi Tsusho, Co Ltd., Tokyo, Japan). The incubation period of 20 days was chosen based on the doubling time of the three phototrophs, which is approximately 24 h [23,28,33]. Furthermore, partial mat recovery was observed in the initial mat sampling spot over that time, as shown in S6 Fig. The time frame was thus expected to be sufficient for the observation of specific differences between the conditions. The distance between the LEDs and the microbial mat surface was 20 mm. Due to a size limitation of the spectroradiometer (OL-750, Gooch & Housego, Ilminster, UK), the light intensity at 20 mm distance was calculated using the inverse-square law from intensity values measured at 30 and 50 cm distance (S7 Fig). The light intensities of the 625, 730, and 890 nm LEDs at 20 mm were approximately 0.2, 0.4, and 0.1 W/m²/nm, respectively. Due to the ±30 nm emission range of each LED, 625 nm rather than 680 nm was chosen to specifically activate photosynthesis in Thermosynechococcus spp., to avoid overlap with the in-situ absorbance of BChl c (S7 Fig).
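The inverse-square extrapolation used for the LED intensities can be written in a few lines; the reference readings below are placeholders, not the actual spectroradiometer values:

```python
import numpy as np

def intensity_at(d_target_cm, d_refs_cm, i_refs):
    """Fit I(d) = c / d^2 to the reference measurements (30 and 50 cm here)
    and evaluate at the mat surface (2 cm); intensities in W/m^2/nm."""
    c = np.mean(np.asarray(i_refs) * np.asarray(d_refs_cm)**2)
    return c / d_target_cm**2

print(intensity_at(2.0, [30.0, 50.0], [9e-4, 3.2e-4]))  # made-up readings
```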
DNA isolation, PCR amplification, and sequencing
Genomic DNA was isolated from the microbial mat and hot spring water samples using the PowerBiofilm DNA and PowerWater Sterivex DNA isolation kits (Mo Bio Laboratories, Carlsbad, CA, USA), respectively. An area spanning the V3 and V4 variable regions of the 16S rRNA gene was amplified using KOD FX Neo polymerase (Toyobo, Osaka, Japan) according to the manufacturer's protocol (primer information is in S1 Table). PCR products were cleaned using the Wizard SV Gel and PCR Clean-Up System (Promega, Madison, WI, USA). The cleaned samples were then loaded onto a MiSeq reagent cartridge for paired-end sequencing and automated clustering with MiSeq (Illumina, San Diego, CA, USA) with dual index reads and a 300-bp read length at Earth-Life Science Institute of Tokyo Institute of Technology.
Taxonomic classification based on 16S rRNA gene sequences
The paired-end reads of the partial 16S rRNA gene sequences were clustered at 97% nucleotide identity and then assigned taxonomic information using the SILVA database [34]. The steps for data processing and assignment were as follows: (i) trimming sequences from the 3'-end by quality score, with a threshold score of 20, in PRINSEQ [35]; (ii) removing reads of the PhiX genome with Bowtie2 [36]; (iii) trimming primer sequences at a 20% error tolerance in cutadapt [37]; (iv) joining paired-end reads with QIIME [38]; (v) quality-filtering reads with usearch [39] with total expected errors set to 1; (vi) dereplicating reads at 100% identity with usearch; (vii) removing singletons and chimeras with usearch; (viii) clustering operational taxonomic units (OTUs) at 97% identity; and (ix) assigning taxonomic information to each OTU using uclust with SILVA taxonomy data (SILVA 123 QIIME compatible database, taxonomy 7 levels, last modification May 2016) at 97% identity in QIIME. We used all processed sequences for OTU clustering and for the relative abundance of OTU sequences in each sample, without a subsampling step. The numbers of raw and processed sequences are shown in S2 Table, and p-values were calculated by two-tailed paired t-test for comparisons among experimental mats and by two-tailed Welch's t-test for comparisons among experimental mat, initial mat, and hot spring water.
The change in relative abundance was analyzed for each OTU observed; only a selection of OTUs will be discussed here. We evaluated those members that showed a clear or notably strong response to the experimental conditions and, based on the assumption that more abundant mat members have a higher ecological significance, specifically analyzed the more abundant (≥1% averaged relative abundance in any light condition) mat community members. For the calculation of the average, we summed the relative abundance of an OTU across the triplicate and divided the sum by three. The 1% cutoff was chosen somewhat arbitrarily, based on experience from previous studies [40] and on the average relative abundance of the representative amplicon sequences. In addition, OTU sequences representing possible phototrophic bacteria and/or those showing considerable changes in relative abundance (see the section "Effect of light on specific microbial mat members" for the criterion) were included in an in-depth analysis to prevent the bias introduced by focusing on abundance only. The sequences were taxonomically identified by comparison to known sequences in the NCBI nr/nt databases by BLAST search [41] and by phylogenetic analysis using the ARB software package [42]. Imported sequences were aligned automatically using the pt_server database and manually corrected based on secondary structure information. Initial phylogenetic affiliations were obtained by adding the aligned sequences to the tree_SSURefNR99_1200_slv_123 tree backbone implemented in SILVA (SSU Ref. NR 123, released July 2015). Phylogenetic trees were generated based on the maximum likelihood method using the phyML software included in the ARB package. The inferred confidence was based on 100 bootstrap replicates, and only values >50 are shown in the phylogenetic trees. Only sequences with length ≥1,000 nt were used for phylogenetic calculations. Short amplicon sequences (<1,000 nt) from the present or previous studies, as well as partial sequences of uncultivated relatives, were added to the trees using the ARB parsimony method without changing the tree topology.
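The abundance cutoff can be illustrated with a short Python sketch; the OTU abundances used here are hypothetical.

```python
# Sketch: selecting "abundant" OTUs by the >=1% averaged-relative-abundance
# criterion described above. Abundances (in %) are hypothetical.
import numpy as np

# keys: OTU IDs; values: relative abundance (%) in the triplicate devices
rel_abundance = {
    "OTU_A": np.array([17.0, 15.5, 11.2]),
    "OTU_B": np.array([0.4, 0.9, 2.1]),
    "OTU_C": np.array([0.05, 0.02, 0.1]),
}

# average = sum over the triplicate divided by three (as in the text)
abundant = [otu for otu, vals in rel_abundance.items() if vals.sum() / 3 >= 1.0]
print(abundant)  # ['OTU_A', 'OTU_B']
```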
Biodiversity analysis
Bacterial biodiversity was assessed by the Shannon Diversity Index, Chao1, observed OTUs, and equitability based on 97% nucleotide sequence identity. These values and rarefaction curves were calculated by QIIME [38] with a depth of 90,000 and 10 trials. P-values were calculated by two-tailed paired t-test for comparisons among experimental mats and by two-tailed Welch's t-test for comparisons among the experimental mats, initial mat, and hot spring water. Furthermore, wavelength-induced differences in bacterial community composition were determined by calculating the relative abundance for each OTU under different light conditions with respect to controls grown in the dark using the following equation: F_{i,j,k} = R_{i,j,k} / R_{i,0,k}, where F_{i,j,k} indicates the fold change in the relative abundance of samples grown in light (R_{i,j,k}) and dark (R_{i,0,k}) conditions, and i, j, and k represent the OTU ID, light condition (0: dark, 1: 625 nm, 2: 730 nm, 3: 890 nm, 4: combined light), and device ID (1-3), respectively. The fold-change analysis was restricted to OTUs with ≥10 reads, as smaller values would result in less reliable estimates of relative changes in species abundance.
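A minimal Python sketch of the fold-change calculation, including the ≥10-read filter, is given below; the read counts are hypothetical.

```python
# Sketch: the fold-change statistic F_{i,j,k} = R_{i,j,k} / R_{i,0,k}
# comparing each light condition (j) against the dark control (j = 0)
# within the same device (k). Read counts are hypothetical.
import numpy as np

MIN_READS = 10  # OTUs with fewer reads are excluded as unreliable

def fold_change(reads_light: np.ndarray, reads_dark: np.ndarray) -> np.ndarray:
    """Per-OTU fold change in relative abundance, light vs. dark."""
    rel_light = reads_light / reads_light.sum()
    rel_dark = reads_dark / reads_dark.sum()
    keep = (reads_light >= MIN_READS) & (reads_dark >= MIN_READS)
    fc = np.full(len(reads_light), np.nan)
    fc[keep] = rel_light[keep] / rel_dark[keep]
    return fc

light = np.array([1200, 40, 5, 800])   # hypothetical OTU read counts, 625 nm
dark = np.array([300, 55, 90, 900])    # hypothetical OTU read counts, dark
print(fold_change(light, dark))        # NaN where either count is < 10
```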
Observed differences after experimental cultivation in situ
In this study, we used a controlled approach with defined light wavelengths to examine the effect of the physiological activity of different phototrophic bacterial members on diversity and community composition in phototrophic microbial mats. Hot spring-associated phototrophic microbial mat communities were sampled, homogenized, and incubated in-situ under varying light conditions to specifically stimulate three different phototrophic members of the mat community, i.e., Thermosynechococcus, Chloroflexus, and Roseiflexus. Three different wavelengths (625 nm, 730 nm, and 890 nm) were used to specifically activate one of the phototrophs under each condition. Dark and combined-light conditions served as control treatments. The mats were incubated in-situ in natural hot spring water under controlled, constant LED light conditions (Fig 2). After 20 days of incubation, the microbial mats were sampled and the microbial community was analyzed using 16S rRNA gene amplicon sequencing (S1 and S2 Datasets). Abundant members in the experimental mats (averaged relative abundance ≥1%), the three phototrophs, and the Sulfurihydrogenibium sp. (OTU3, 99% nt identity) dominant in hot spring water are shown in Fig 3 and Tables 1 and 2. Furthermore, they were also subjected to identification via BLAST and phylogenetic analysis (Figs 4-7). Although OTU sequences related to the three phototrophs, i.e., Thermosynechococcus sp., Chloroflexus aggregans, and Roseiflexus castenholzii, increased under the corresponding light conditions, we discuss them in more detail in the section "Effect of light wavelength on phototrophic bacteria" below.
[Fig 3 caption: The relative abundance of community members was examined in microbial mats before (indicated as "IM") and after irradiation in triplicate with light at 625, 730, or 890 nm for 20 days. Samples cultivated in the dark and with combined light served as controls. Hot spring water around the devices was also sampled on days 0, 7, 14, and 20 (indicated as "HSW" with w0, w1, w2, and w3, respectively). Members with averaged abundance ≥1% in at least one experimental condition, the three phototrophs, and the Sulfurihydrogenibium sp. (OTU3) dominant in hot spring water are shown.]
We first discuss the visual differences of the experimental mats together with the 16S rRNA gene amplicon sequencing analysis. Visual differences in color were observed in the microbial mats after 20 days of cultivation; their development under the different conditions is shown in Fig 2. Mats cultivated with 625-nm light harbored a thin green layer of Thermosynechococcus sp., as supported by 16S rRNA gene sequencing (Table 1). This layer had a thickness of <1 mm, similar to the newly formed green mats observed on the sediment surrounding the light-irradiating devices exposed to natural sunlight. The microbial mats cultivated with 730-nm light showed a ~3-mm-thick brown upper layer most likely dominated by Chloroflexus sp., which overlaid a ~2-mm-thick layer of orange-pink Roseiflexus (S8 Fig). This distribution matches that of hypersaline mats, in which a Chloroflexus layer forms immediately on top of a concentrated layer of Roseiflexus, as determined by FISH analysis [43]. No color differences were observed between mats cultivated with 890-nm light or in the dark; both were orange-pink, a color associated with Roseiflexus-dominated communities [2,6].
This is not unexpected, given that Roseiflexus can grow both photomixotrophically/photoheterotrophically and chemoheterotrophically, and given the observed abundance of Roseiflexus castenholzii OTU2 under both conditions (17%±4% SD vs. 11%±1% SD) [21].
Differences in mat consistency were also noticed between the different light conditions. The microbial mats cultivated with 730-nm, 890-nm, and the combined light were dense, whereas those grown in the dark or with 625-nm light were rather loose. Cyanobacteria are known to produce extracellular polymeric substances (EPS) that aid biofilm and mat formation [44]. However, in this study, stimulation of cyanobacteria under the 625-nm LED condition led to only loose mats, which might indicate that the Chloroflexus sp. and/or Roseiflexus sp. enhanced under the 730- and 890-nm LED conditions were directly or indirectly responsible for the formation of dense and firm microbial mats.
[Figure caption: The tree shows sequences obtained from the Nakabusa microbial mats in previous studies (bold) and this study (bold, red).]
Chloroflexus and Roseiflexus spp. have the potential to produce cellulose, which is also a known biofilm-enhancing component [45], as they possess the cellulose-related cesA/celA/bcsA gene set [46]. The hypothesized presence of cellulose in these dense mats is further supported by an increase of sequences representing anaerobic and putatively cellulose-degrading species, e.g., the SJA-28 member OTU41 (Chlorobi) (S3 Table). Sequences affiliated with the SJA-28 group have been reported to increase in the presence of cellulose under anaerobic methanogenic conditions [47], indicating a putative ability to degrade cellulose, as has readily been shown for Ruminiclostridium spp. [48]. Effects of the different light conditions were thus observed visually as differences in color and consistency after an incubation period of just 20 days. Furthermore, as hypothesized, cultivation under different light conditions led to changes in the relative abundance of different community members (Fig 3), which will be discussed in the section "Effect of light on specific microbial mat members" below.
Bacterial biodiversity in experimental mats, initial mat, and hot spring water
The relative biodiversity in the microbial mats before and after 20 days of irradiation with specific light wavelengths, and in the hot spring water, was analyzed by 16S rRNA gene amplicon sequencing (Fig 3). A total of 22 samples from the microbial mats and the hot spring water were analyzed. Microbial mats incubated under five different light conditions, as well as the initial mat used as inoculum, were analyzed in triplicate, while the surrounding hot spring water was analyzed at four different time points (0, 7, 14, 20 days). A total of 129,173±18,479 SD trimmed/processed sequences were analyzed for each of the 22 samples (S2 Table). No statistically significant differences in the numbers of analyzed sequences were observed between the samples, neither among light conditions nor among replications (temperatures) (p >0.14), except for the differences between the dark condition and the 625-nm and combined-light conditions (p ≈0.07 and ≈0.03, respectively). The numbers of sequences were 143,096±9,024 under dark conditions, 113,107±16,739 under 625-nm conditions, and 126,066±5,680 under combined-light conditions. However, the rarefaction curves, which almost plateau beyond 10,000 sequences (S9-S12 Figs), showed that the numbers of sequences and the coverage were sufficient for all samples. The diversity of the communities was assessed by the Shannon Diversity Index, Chao1, OTU richness, equitability based on 97% nucleotide sequence identity, and the relative abundance of community members (Table 3). On average, 380±75 OTUs were detected in each of the different samples, and the expected OTU richness (Chao1) was well covered at 90±6%. The Chao1 richness in the hot spring water was significantly higher than that in the mat samples (577±95 vs. 391±30, respectively) (p <0.05). Despite the higher number of obtained OTUs and greater Chao1 richness, the water samples displayed lower diversity due to lower equitability (2.9±0.4 vs. 5.1±0.3 in Shannon Diversity Index and 0.32±0.05 vs. 0.60±0.03 in equitability for the hot spring samples and the mat samples, respectively) (Table 3).
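For reference, a minimal Python sketch of the Shannon index and equitability (Pielou's evenness) calculations is shown below; log base 2 is assumed for the Shannon index, as in QIIME's default, and the count vectors are hypothetical.

```python
# Sketch: Shannon diversity H and equitability (Pielou's evenness, H / log S)
# from OTU read counts, matching the diversity metrics reported above.
# Log base 2 is assumed here (QIIME's default); counts are hypothetical.
import numpy as np

def shannon(counts: np.ndarray, base: float = 2.0) -> float:
    p = counts[counts > 0] / counts.sum()
    return float(-(p * np.log(p)).sum() / np.log(base))

def equitability(counts: np.ndarray, base: float = 2.0) -> float:
    s = np.count_nonzero(counts)            # observed OTU richness
    return shannon(counts, base) / (np.log(s) / np.log(base))

water = np.array([7300, 1000, 200, 150, 50])    # uneven, dominated community
mat = np.array([1700, 1600, 1500, 1400, 1300])  # more even community
print(shannon(water), equitability(water))       # lower evenness
print(shannon(mat), equitability(mat))           # evenness closer to 1
```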
All mat communities clearly differed from the surrounding hot spring water community, and notable differences in biodiversity were also observed before and after experimental cultivation (Fig 3). The Shannon Diversity Index and equitability increased significantly under the experimental conditions (p <0.05), and changes in community composition and in the relative abundance of community members were observed (Tables 1 and 3). Of the 16 abundant members (≥1% relative abundance) in the initial mat samples, eight showed a decrease in relative abundance after irradiation with the combined light, whereas only three members increased in relative abundance after experimental cultivation (S4 Table). Notably, the abundance of phototrophic aerobic and microaerobic bacteria (Thermosynechococcus sp. OTU7, Chloracidobacterium sp. OTU26, Elioraea sp. OTU34) decreased relative to the initial mat community (from 6.3%, 3.9%, and 1.7% to 0.3%, 0.0%, and 0.2%, respectively). Although Elioraea tepidiphila was described as chemoheterotrophic [49], Elioraea sp. can be assumed to be photosynthetic and will be discussed in more detail in the section "Effect of light wavelength on phototrophic bacteria" below.
The observed differences in anaerobic and aerobic bacteria after experimental cultivation indicate a reduced oxygen concentration in these mats under experimental conditions, even under light conditions supporting oxygenic photosynthesis (S4 Table). This might be explained by the homogenization of the initial mat and/or the relatively low light intensities used in the experiment. Low light conditions could result in decreased cyanobacterial photosynthesis activity. Furthermore, homogenization could have led to increased oxygen consumption from abundant biomass degradation. The intensity of the experimental light was ~20% of natural sunlight intensity on a clear day, and thus more representative of conditions on a cloudy day [50]. This could explain the lower photosynthesis activity and lower oxygen production in the mats in comparison to the initial mat, which was located on a horizontal, south-facing wall with abundant sun exposure and available nutrients and air/oxygen from the falling hot spring water. Additionally, the continuous irradiation and limited wavelengths represent artificial conditions not observed in natural habitats and may also be responsible for the observed decrease in phototrophic bacteria, which might rely on varying conditions of light and/or oxygen, as has been indicated by diel metatranscriptome analyses of phototrophic hot spring mat community members [14,51,52]. Oxygen concentrations measured previously in alkaline hot spring microbial mats clearly show oxygen supersaturation during the day (light conditions) and anoxic conditions during the night (dark conditions) [53], leading us to hypothesize relatively anoxic conditions in the experimental mats of this study.
Variability and temperature effects in experimental mats
In this study, changes in microbial community composition and diversity were observed in microbial mats incubated in-situ under controlled light conditions. We chose an approach with three replications to minimize the influence of natural variations in these mats, and the results will be discussed in the following sections. Due to the given conditions in natural hot spring environments, it was not possible to keep temperature conditions stable among the three replications. A naturally occurring temperature gradient in the hot spring channel and the sequential set-up of the experimental devices led to temperature differences within the incubation location (Fig 2). In order not to add a second independent variable between the different treatments, we chose to allow different temperatures between the replications, which simultaneously tested for the influence of variable temperature. Significantly lower OTU richness (p ≈0.02) and Chao1 (p ≈0.07) were observed in device 1 (55°C) compared to device 3 (51°C) (345±10 vs. 379±11 for OTU richness, and 383±20 vs. 419±16 for Chao1, respectively; Table 3), indicating a lower microbial diversity at higher temperatures, as has previously been indicated by terminal restriction fragment length polymorphism and clone library analyses in Nakabusa hot spring [17]. However, the observed differences between devices 1 and 2, as well as between devices 2 and 3, were not significant for any of the parameters tested (Table 3). The results suggest that the temperature gradient in our experiment produced a corresponding gradual change in community diversity.
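As an illustration of the device comparison, the sketch below applies a two-tailed paired t-test (pairing the five light conditions between devices) with SciPy; the richness values are hypothetical, and whether pairing is appropriate depends on the exact comparison design.

```python
# Sketch: comparing OTU richness between the warmest and coolest devices,
# pairing the same light conditions across devices. The richness values
# below are hypothetical, not the reported measurements.
from scipy import stats

richness_device1 = [345, 338, 352, 341, 349]  # 55 C; five light conditions
richness_device3 = [379, 371, 386, 377, 382]  # 51 C; same five conditions

t, p = stats.ttest_rel(richness_device1, richness_device3)
print(f"t = {t:.2f}, p = {p:.4f}")
```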
Additionally, the relative abundance of individual OTUs varied between the replications, as seen in Fig 3 (Table 2). Part of the variation can be attributed to natural heterogeneity, whereas other parts are expected to represent specific temperature adaptations of the corresponding community members. High variation under all experimental conditions, indicating a strong effect of temperature (CV >0.5), was observed, e.g., for Hydrogenedentes OTU4 and OTU21, Fervidobacterium sp. OTU12, "Ca. Chloranaerofilum sp." OTU27, and Ruminiclostridium sp. OTU30 (Fig 3 and S3 Table). In particular, Hydrogenedentes OTU4 showed a clear trend to higher relative abundance in device 3 (51°C), indicating a preference for lower temperatures, which correlates with its high sequence similarity (99% nt identity) to an uncultured bacterium detected in a 45-53°C microbial mat in Hillside Springs [54]. In contrast, Hydrogenedentes OTU21 (89% nt identity with OTU4) showed the opposite trend, towards higher relative abundance in device 1 (55°C), which might indicate a different optimum temperature. OTU12 also showed a trend towards higher relative abundance in device 1, indicating a preference for higher temperatures, which correlates with the optimal growth temperature of 65°C for its closest isolated relative, Fervidobacterium riparium (98% nt identity) [55]. Interestingly, "Ca. Chloranaerofilum sp." OTU27 and Ruminiclostridium sp. OTU30 each strongly increased in only one sample (S3 Table), indicating a response to a specific combination of light and temperature (as discussed in the section "Effect of light wavelength on phototrophic bacteria" for OTU27). Other members showed high variation only under some, but not all, conditions. In particular, the variations for phototrophic members differed between the conditions. Thermosynechococcus sp. OTU7, for example, showed large variation under dark conditions (0.67) but small variation under the activating 625-nm and combined-light conditions (0.23 and 0.22, respectively). A similar trend was observed for Chloroflexus sp. OTU10, for which smaller variations were observed under 730-nm and combined-light conditions than under dark conditions (0.29 and 0.14 vs. 0.41). In these cases, the lower variation in relative abundance under favorable light conditions could indicate active competition, whereas under unfavorable dark conditions these members were competitively passive, with their relative abundance more strongly affected by other, more active members. In contrast, Roseiflexus sp. OTU2 did not follow this trend and showed relatively small variation under dark conditions (0.06), which may be attributed to the chemoheterotrophic growth of Roseiflexus sp. [21]. The variation seen between triplicates in the different light treatments indicates temperature effects due to the temperature gradient between the devices. Overall, higher temperature reduced species richness, while the effects seen for specific members can be interpreted as direct or indirect temperature effects.
Microbial mat community grown under dark conditions
The Shannon diversity in mats incubated under dark conditions did not significantly differ from that in the mats incubated under light conditions (p >0.24) (Table 3). Although relative bacterial abundance differed between the different treatments, most of the abundant (≥1% average sequence abundance) members in the mats grown under dark conditions were consistent with those in all conditions with LED light. One exception was the OTU10 sequences representing Chloroflexus aggregans, which showed a considerably lower abundance under dark conditions as compared to the combined-light control (0.3±0.1% vs. 3.1±0.4%). This strong decrease under dark conditions reflects this organism's preference for a phototrophic lifestyle, as well as its need for oxygen for chemotrophic growth. Chemotrophic growth in the dark has been observed in the type strain only under aerobic conditions [20], indicating that the expected anoxic conditions in the dark mats inhibited the growth of Chloroflexus. Interestingly, all abundant species under dark conditions were heterotrophic, and the presence of chemoautotrophic members as primary producers was not indicated, although some OTUs related to chemoautotrophs, such as Thiobacter sp. (OTU46, 0.8±0.1%) and Caldimicrobium sp. (OTU45, 0.13±0.01%), were moderately abundant (≤1% and ≥0.1%). These data suggest that the microbial mat biodiversity was mostly dictated by the biomass and nutrients introduced with the initial mat rather than by primary production by autotrophs. However, although not demonstrated by culture experiments, the Thermodesulfovibrio sp. related to OTU9 (4±1% in the dark condition) is suggested to have the ability to grow autotrophically based on the existence of reductive acetyl-CoA pathway enzyme genes [56], and could have contributed to primary production in the mats under dark and anaerobic conditions.
Effect of light wavelength on microbial mat biodiversity
Although no significant differences in species richness, Shannon diversity, or equitability were detected between the different light conditions, an effect of light wavelength on the microbial mat community was observed in the relative abundance of different OTUs in comparison to the dark conditions, shown as semilogarithmic histograms in Fig 8. The average median values of the fold changes in the histograms for the 625-nm, 730-nm, 890-nm, and combined-light mats were 1.15, 0.90, 0.98, and 0.98, respectively. With an average median fold change of 1.15, 62% of the OTUs in mats irradiated with 625-nm light had a higher relative abundance than under dark conditions. In contrast, an average median fold change of <1.0 represents a decrease in relative abundance under the light conditions for more than 50% of the OTUs (with fewer OTUs showing a marked increase). A broadening of the histogram, as seen for the 890-nm samples, is indicative of more pronounced changes in abundance that are evenly distributed between the different OTUs, such that they average out to a median value of ~1.0. The observed changes are likely related to the most abundant photosynthetic bacterium for each wavelength (i.e., Thermosynechococcus, Chloroflexus, and Roseiflexus in the 625-, 730-, and 890-nm samples, respectively).
The increase observed for 62% of OTUs under 625-nm LED conditions suggests that the initial community was well adapted to growing with Thermosynechococcus, and that the majority of the initial mat community suffers under dark conditions. Cyanobacteria produce molecular oxygen through photosynthesis and provide vitamins and organic matter, which has a profound impact on the other species within the microbiome [10,11]. As indicated by an average median value of <1.0, several bacteria decreased in abundance in mats irradiated with 730-nm light, suggesting that the increase of Chloroflexus under these conditions does not benefit many other community members and may result in a competitive disadvantage for other members. Chloroflexus aggregans, which shares 99% nucleotide identity with OTU10, consumes various types of organic matter under anaerobic light conditions, indicating that the outgrowth of this bacterium likely depletes available nutrients and manifests as the observed decrease in other heterotrophs [20].
The increase of Roseiflexus in the 890-nm samples had equally positive and negative effects on the microbial community, as shown by the broadening of the histogram in Fig 8 and an average median value close to 1.0. By utilizing inorganic carbon sources during assumed autotrophic or mixotrophic growth, Roseiflexus castenholzii, represented by OTU2 sequences, would act as a primary producer of organic carbon and nutrients available to surrounding heterotrophs. Furthermore, this species likely participates in the oxidation of sulfide and/or hydrogen based on genome information [28], which would facilitate the growth of sulfide-sensitive species and hinder the growth of species reliant on these available electron donors.
Effect of light on specific microbial mat members
In the following section, we discuss the effect of the different light conditions on selected microbial community members. Due to a combination of natural heterogeneity and the introduction of a second variable (temperature) between the replications, differing variations between the replications were observed, and average values are of limited reliability. We therefore focus our discussion on selected members for which a strong effect (<0.5 or >1.5 fold change from dark conditions) was observed in at least two devices (replications) of any light condition. In total, 16 OTUs met these criteria (S3 Table) and are discussed in detail here.
Effect of light wavelength on phototrophic bacteria. The most abundant photosynthetic bacteria observed in the present study were Roseiflexus, Chloroflexus, and Thermosynechococcus, which predominated in mats cultivated with 890-, 730-, and 625-nm light, respectively (Table 1). Although Thermosynechococcus sp. showed the most profound increase in mats irradiated with the combined light, this was not shared by Chloroflexus and Roseiflexus spp. This could indicate that the microbiome in these mats harbors increased competition for electron donors between Chloroflexus and Roseiflexus, or that high oxygen could inhibit the growth of both phototrophic Chloroflexi, which are known to grow phototrophically only under anaerobic conditions [20,21]. Furthermore, Chloroflexus and Roseiflexus are found at almost the same depth in mats [43], suggesting that these species utilize common resources. Under combined-light conditions, both filamentous anoxygenic phototrophs are activated and would therefore compete against each other for those common resources.
Similar to all cyanobacteria, Thermosynechococcus spp. are oxygenic chlorophototrophs that express the photosynthetic pigments chlorophyll a (A_max = 680 nm) and phycobilins (e.g., allophycocyanin in light-harvesting phycobilisomes; A_max = 625 nm) [18]. Although the species showed decreased relative abundance under all experimental conditions compared to the initial mat, the light conditions increased the relative abundance compared to the dark conditions. As expected given the in vivo absorption maxima of cyanobacteria, 625-nm light and the combination of all three wavelengths had the largest impact on relative abundance, with 16- and 22-fold increases, respectively. This effect was also observed visually based on the presence of a 1-mm-thick, dark green layer on top of the mats and on either side of the glass cover. Moreover, the increased sequence abundance observed at 730 nm and 890 nm could result from partial absorbance at these wavelengths, but did not manifest as a visual color change on the mats.
The BChl c- and chlorosome-containing filamentous anoxygenic phototroph Chloroflexus aggregans specifically increased under the 730-nm and combined-light conditions, which is well in accordance with the A_max of BChl c of 740 nm in this organism [20]. As the abundance of BChl a is clearly lower than that of BChl c in this organism [20], the light absorbed by BChl a (890 nm) had no considerable effect on the relative abundance of C. aggregans-related 16S rRNA gene sequences.
Sequences representing Roseiflexus castenholzii, a chlorosome-lacking filamentous anoxygenic phototroph that expresses BChl a as its main photopigment, increased in abundance at 890 nm and with the combined light (17±4% and 15±3% vs. 11±1% relative abundance under dark conditions), in accordance with the A_max of BChl a in this organism at 880 nm [21]. However, high abundance of Roseiflexus sequences was not restricted to these conditions; rather, these were the predominant sequences in all mats in the present study, as well as in those collected from hot springs in YNP in a previous study [40], reflecting the ability of Roseiflexus to grow both photo- and chemotrophically.
In addition to these three abundant photosynthetic bacteria, sequences of four less abundant (<1%) photosynthetic bacteria, namely Elioraea sp. (OTU34), "Ca. Chloranaerofilum sp." (OTU27), "Ca. Roseilinea sp." (OTU120), and Chloracidobacterium sp. (OTU26), also increased in abundance under specific experimental light conditions (S3 Table). For example, the abundance of Elioraea sp. increased in all conditions with LED light, most pronouncedly in the mats at 890 nm and with the combined light (all fold changes >1.5 in triplicate). Although Elioraea tepidiphila has been described as chemoheterotrophic [49], Elioraea sp. can be assumed to be photosynthetic based on the presence of all genes necessary for BChl a production and anoxygenic photosynthesis in the E. tepidiphila type strain genome and in a related metagenome bin observed in YNP hot spring microbial mats [40]. Moreover, the Elioraea sp. isolate "Ca. E. thermophila" obtained from microbial mats in Mushroom Spring, which shares 99% 16S rRNA nucleotide identity with OTU34, has been confirmed to produce BChl a and grow phototrophically [57]. Based on this close relationship and the observed increase under the light conditions in this experiment, a photoheterotrophic lifestyle for Elioraea sp. OTU34 is assumed. OTU27 sequences related to the phototrophic "Ca. Chloranaerofilum corporosum" (Chloroflexi) increased in the presence of light only in device 3, which had the lowest ambient temperature (51°C) (S3 Table). This may indicate a preference for lower temperatures for this organism (S4 Fig). "Ca. Chloranaerofilum corporosum" (OTU27, 98% nt identity) reportedly expresses BChls a and c based on metagenomic and autofluorescence studies, and has been observed to grow phototrophically under anaerobic conditions in the laboratory [57]. Thus, our observations in the present study further support the growth of "Ca. Chloranaerofilum sp." in Nakabusa hot spring mats. Another sequence representing a putatively phototrophic member of the community, OTU120, is related to "Ca. Roseilinea gracile" (96% nt identity), a BChl a-expressing uncultured phototroph first identified in a YNP hot spring mat [57]. OTU120 showed a ~2-fold increase in mats cultivated with 890-nm light, supporting a phototrophic lifestyle for this organism in these mats. Interestingly, this increase was observed primarily in devices 1 and 2, which had slightly higher temperatures than device 3 (S4 Fig), possibly indicating a preference for higher temperatures. This filamentous anoxygenic phototrophic bacterium displays a need for oxygen and is affiliated with the class Anaerolineae within the phylum Chloroflexi [56,57].
The fourth phototrophic low-abundance (<1%) member was represented by OTU26, which represents the BChl c- and BChl a-containing anoxygenic photoheterotrophic acidobacterium Chloracidobacterium thermophilum (97% nt identity) [58]. Chloracidobacterium sp. OTU26 sequences showed limited abundance in the initial mat samples but increased in all light conditions compared to the dark condition in device 3 (3.0-, 2.3-, 3.0-, and 4.5-fold increases in sequence abundance with 625-nm, 730-nm, 890-nm, and combined light, respectively; S3 Table). This device exhibited the lowest ambient temperature (51°C), which corresponds to the optimum temperature of 51°C for the closest isolated relative, C. thermophilum strain B(T) (97% nt identity) [58], reflecting a phototrophic lifestyle. Because this photoheterotrophic species expresses BChl c, BChl a, and Chl a with optimal absorbance at 745 nm [58], an increase in relative abundance was mainly expected under the 730-nm and combined-light conditions. However, relative sequence abundance also increased in the 625- and 890-nm mats. As C. thermophilum has been shown to depend on low oxygen concentrations, and cyanobacterial sequences also increased under all experimental light conditions, microaerobic conditions are hypothesized to have occurred in these mats. Overall, we confirmed the effect of the various light wavelengths on phototrophic bacteria in microbial mats.
Light effects on chemotrophic bacteria. Light of specific wavelengths was hypothesized to affect chemotrophic bacteria indirectly via the activation of different phototrophic mat members. In particular, the activity of the oxygenic phototroph Thermosynechococcus was expected to contribute to the growth of other bacteria in mats irradiated with 625-nm light by providing aerobic conditions and nutrients. Abundant chemotrophic bacteria (≥1% average sequence abundance) that varied in abundance in a wavelength-dependent manner included Exilispira sp. OTU5, Fervidobacterium sp. OTU12, and Thermodesulforhabdus sp. OTU28 (Table 1). For all three of these bacteria, an influence of oxygen is hypothesized. Exilispira sp. has been reported to be a strictly anaerobic, chemoheterotrophic bacterium [59], which correlates with its lower abundance under the 625-nm LED condition, in which the presence of oxygen can be suspected in this experiment. Similarly, sequences related to the strictly anaerobic, chemoheterotrophic Thermotogae member Fervidobacterium riparium (OTU12, 99% nt identity), suggested as a temperature-sensitive bacterium in the section "Variability and temperature effects in experimental mats", were least abundant (1.9±1.2%) in the 625-nm mats and most abundant at 730 nm (4.8±7.3%). This trend was clearly observed in device 1 (3.1% and 13.2%, respectively), which exhibited the highest ambient temperature of the three devices (55°C); this might reflect a preference for higher temperatures, as seen in the optimal growth of the type strain at 65°C [55]. As the oxygen produced by Thermosynechococcus sp. under the 625-nm condition most likely inhibited its growth, and given that elemental sulfur promotes Fervidobacterium riparium growth [55], the increased presence of elemental sulfur associated with Chloroflexus sp., together with the anaerobic conditions, was likely responsible for its pronounced abundance at 730 nm. Lastly, sequences representing Thermodesulforhabdus sp. were most abundant in mats cultivated in the dark (3.0±0.5%) and less abundant in those irradiated at 730 nm, 890 nm, and the combined light (1.7±0.3%, 1.5±0.8%, and 1.4±0.6%, respectively). The sequences were related to Thermodesulforhabdus sp. M40/2 CIV-3.2 (94% nt identity) and Thermodesulforhabdus norvegicus (92% nt identity), which both reduce sulfate by using acetate as an electron donor [60,61]; thus, these sequences may decrease in the presence of Chloroflexus and Roseiflexus, which also utilize acetate [14,20].
Three additional, but less abundant (<1%), sequences affiliated with chemoheterotrophic species increased in response to different light conditions: Meiothermus OTU33, Thermus OTU67, and Caldimicrobium OTU45 (S3 Table). A strong influence of oxygen concentrations is similarly hypothesized for the first two of these species, but, in contrast to the aforementioned species, a positive effect is postulated. Meiothermus and Thermus spp. are strictly aerobic heterotrophs belonging to the family Thermaceae, and their sequence abundance increased in conjunction with Thermosynechococcus in 625-nm light. An interaction between Thermosynechococcus and Meiothermus has been reported previously, in which Thermosynechococcus provides organic carbon, oxygen, and reduced nitrogen to heterotrophic Meiothermus, and Meiothermus enhances the biomass production efficiency of Thermosynechococcus and reduces cyanobacterium-induced oxidative stress [11]. Given their high similarity to Meiothermus, Thermus spp. are hypothesized to have a similar relationship with Thermosynechococcus. Although cyanobacteria contribute to the growth of different heterotrophs [11,62,63], only Meiothermus and Thermus showed a clear positive association with Thermosynechococcus in our experimental data. In contrast, sequences similar to those of the hot spring-derived, sulfur-disproportionating Thermodesulfobacteria species Caldimicrobium thiodismutans increased along with those of Chloroflexus and Roseiflexus, suggesting that these filamentous anoxygenic phototrophs may function cooperatively with it in the sulfur cycle. For instance, Chloroflexus aggregans and Roseiflexus castenholzii both oxidize sulfide via sulfide:quinone oxidoreductase activity [7,12]. Notably, none of the sequenced Chloroflexus strains encode dissimilatory sulfite reductase (dsr) or sulfur oxidation (sox) genes, consistent with the observation that globules of elemental sulfur are deposited outside the cells in sulfide culture medium [7,12,64]. Based on these findings, a possible sulfur-cycle mechanism in hot spring microbial mats consists of Chloroflexus and Roseiflexus oxidizing sulfide to elemental sulfur, which can then be disproportionated by Caldimicrobium [65].
In contrast, the abundance of several sequences was highest under dark conditions and decreased in response to experimental light conditions, such as the 50% decrease of Thiobacter subterraneus (OTU46, 100% nt identity) sequences [66] in mats irradiated with the combined light (S3 Table). Thiobacter subterraneus is a strictly chemoautotrophic bacterium oxidizing thiosulfate/elemental-sulfur as a sole energy source with molecular oxygen as the electron acceptor [66]. Thiobacter and Caldimicrobium spp. both utilize and compete for elemental sulfur as an electron donor. Given that Thiobacter utilizes oxygen whereas Caldimicrobium prefers anoxic conditions, it is likely that Thiobacter would exhibit a competitive advantage under oxygenic light conditions; however, the abundance of Caldimicrobium sp. sequences increased in mats irradiated with the combined light. One possible explanation for this could be a higher pH tolerance of Caldimicrobium thiodismutans over Thiobacter subterraneus indicated by their type strain descriptions [65,66], as the autotrophic growth of cyanobacteria can significantly increase the pH of hot spring microbial mats [53,67].
Hot spring water community
The hot spring water surrounding the microbial mats is not only the chemical source for the mat community but also a possible source of bacterial seeds invading the mats. We studied the hot spring water microbiome at different time points during the experimental incubations. The water microbiome differed significantly from the mat communities in both diversity and community composition. Although species richness was higher than in the mat samples, diversity was reduced and the community highly uneven. The water community was dominated by sequences representing a single species, the sulfur-oxidizing Aquificae member Sulfurihydrogenibium azorense [68] (OTU3, 99% nt identity; abundance gradually decreased from 73% to 53%), which is a common and dominant member of the chemotrophic streamer communities found at higher temperatures (67-75°C) upstream of the experimental site [17]. Additionally, sequences representing Tepidimonas thermarum (OTU24, 99% nt identity; abundance gradually increased from 0.01% to 10%), Hydrogenophilus thermoluteolus (OTU48, 99% nt identity, 2±1%), "Ca. Roseovibrio tepidum" (OTU29, 99% nt identity; abundance gradually increased from 0.002% to 3%), and Thermus arciformis (OTU67, 99% nt identity, 1.1±0.5%) were detected in the hot spring water [57,69-71] (S5 Table). The sequences obtained in the water samples most likely originated from the white bacterial streamers observed upstream and are not adapted to the relatively lower temperatures in this experiment. However, Tepidimonas thermarum OTU24 and "Ca. Roseovibrio tepidum" OTU29 gradually increased under these conditions. Their common features can be assumed to be an aerobic lifestyle and adaptation to the ambient temperature. Tepidimonas thermarum is strictly aerobic, and its optimum growth temperature is approximately 50-55°C [69]. Furthermore, OTU29, which shares 99% nucleotide identity with the novel aerobic anoxygenic photoheterotroph "Ca. Roseovibrio tepidum" [57], also shares 96% nucleotide identity with the aerobic Roseomonas alkaliterrae, which can grow at up to 55°C (optimum, 40-50°C) [72]. Their growth temperatures would be related to the temperature at the sampling location (~56°C, T1 in S4 Fig). Consistent with aerobic conditions, the proportions of the oxygen-producing Thermosynechococcus sp. and of Chloroflexus aggregans also increased notably, from 0.001% to 2.5% and from 0.1% to 2%, respectively (S5 Table). The incubation channel for this experiment was artificially constructed, and no natural microbial mat communities were present around the installed irradiation devices at the beginning of the incubation. During the incubation period, and correlating with the increase of Thermosynechococcus and Chloroflexus spp. sequences detected in the hot spring water, a thin green microbial mat formed on the sediment surrounding the light-irradiating devices. Thus, the Thermosynechococcus sp. and Chloroflexus aggregans sequences detected in the hot spring water likely originated from the unintended disruption of these young microbial mats during sample collection. However, one member, Thiobacter sp. OTU46 (0.6% in hot spring water), was not detected in the initial mat but was present in the mats after 20 days of cultivation (highest in the 625-nm condition, 0.8%), which might indicate invasion from the surrounding hot spring water. Further, although not detected in high abundance in the initial spring water, such low-abundance cells can be hypothesized to be the seeds for the newly grown phototrophic microbial mats observed.
We therefore confirmed the possibility that the hot spring supplied not only chemical compounds but also bacteria to the mats.
Conclusions
In this study, we examined the effect of photosynthetic bacteria on chemotrophic members in a hot spring microbial mat in situ under controlled light conditions using 16S rRNA gene sequencing. Biodiversity analysis before and after 20 days of cultivation revealed an increase in anaerobic bacteria and a decrease in the relative abundance of phototrophic bacteria, which could be explained by the homogenization methods and artificial light conditions. As hypothesized, mats irradiated with light at wavelengths of 625 nm, 730 nm, and 890 nm showed significant increases in the abundance of Thermosynechococcus, Chloroflexus, and Roseiflexus, respectively. We also observed increases of other minor phototrophic bacteria with light. These results reinforce the current knowledge of phototrophs and characterize their commensal relationships with chemotrophs, which shape the mat microbiome in situ. For example, the abundance of aerobic chemoheterotrophs such as Thermus sp. and Meiothermus sp. increased, with Thermosynechococcus providing aerobic conditions and photosynthates. Some chemotrophs involved in the sulfur cycle, such as Caldimicrobium thiodismutans, were correlated with the increase in Chloroflexus and Roseiflexus abundance. Control of environmental conditions in natural microbial ecosystems is a powerful tool to reveal interspecies relationships because it can reproduce various environmental conditions or regulate a specific factor. To further test the hypotheses generated and fully characterize the molecular basis of these interactions, dynamic/spatial sampling of mats and environmental information under controlled environmental conditions will be performed in the future.
Supporting information
S2 Table. Sequences processed for taxonomic classification based on 16S rRNA gene sequences. MiSeq output sequences (total number of sequences) had reads of the PhiX genome removed (PhiX genome-removed sequences), and the subsequently processed sequences (trimmed/processed sequences) were clustered as OTUs. (XLSX)
S3 Table. Abundant members with relative abundance ≥0.5% in hot spring water. Orange highlight indicates relative abundance ≥0.5%. Nearest neighbors were determined by BLAST analysis of all NCBI database sequences. (XLSX)
S1 Dataset. The number of reads of all OTUs taxonomically assigned using the SILVA database. "INI" indicates the initial mat. "HSW" indicates hot spring water samples collected on days 0, 7, 14, and 20. (XLSX)
S2 Dataset. The averages, standard deviations, and coefficients of variation between triplicates for the relative abundance of all taxonomically assigned OTUs. "INI" indicates the initial mat. "HSW" indicates hot spring water samples collected on days 0, 7, 14, and 20. (XLSX)
"year": 2018,
"sha1": "06bb473f2be82e846ee7ab361a92a5e2e9dff156",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0191650&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "06bb473f2be82e846ee7ab361a92a5e2e9dff156",
"s2fieldsofstudy": [
"Environmental Science",
"Biology"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
The International Pulsar Timing Array second data release: Search for an isotropic Gravitational Wave Background
We searched for an isotropic stochastic gravitational wave background in the second data release of the International Pulsar Timing Array, a global collaboration synthesizing decadal-length pulsar-timing campaigns in North America, Europe, and Australia. In our reference search for a power law strain spectrum of the form $h_c = A(f/1\,\mathrm{yr}^{-1})^{\alpha}$, we found strong evidence for a spectrally-similar low-frequency stochastic process of amplitude $A = 3.8^{+6.3}_{-2.5}\times10^{-15}$ and spectral index $\alpha = -0.5 \pm 0.5$, where the uncertainties represent 95\% credible regions, using information from the auto- and cross-correlation terms between the pulsars in the array. For a spectral index of $\alpha = -2/3$, as expected from a population of inspiralling supermassive black hole binaries, the recovered amplitude is $A = 2.8^{+1.2}_{-0.8}\times10^{-15}$. Nonetheless, no significant evidence of the Hellings-Downs correlations that would indicate a gravitational-wave origin was found. We also analyzed the constituent data from the individual pulsar timing arrays in a consistent way, and clearly demonstrate that the combined international data set is more sensitive. Furthermore, we demonstrate that this combined data set produces comparable constraints to recent single-array data sets which have more data than the constituent parts of the combination. Future international data releases will deliver increased sensitivity to gravitational wave radiation, and significantly increase the detection probability.
INTRODUCTION
Inspiralling supermassive black hole binaries (SMBHBs) with masses larger than 10^7 M_⊙ are expected to generate the strongest gravitational-wave (GW) signals in the Universe. The incoherent superposition of all of these inspiralling SMBHBs should generate a stochastic GW background (GWB) that is strongest in the nanohertz frequency band (e.g., Rajagopal & Romani 1995; Jaffe & Backer 2003; Sesana et al. 2008; Burke-Spolaor et al. 2019). Other sources that could also produce a stochastic background in the nanohertz band are cosmic strings (e.g., Ölmez et al. 2010), cosmological phase transitions, and a primordial background produced by quantum fluctuations in the gravitational field in the early universe (e.g., Grishchuk 2005; Lasky et al. 2016). For comparison, the Laser Interferometer Gravitational-Wave Observatory (LIGO) and the Virgo Collaboration, which are terrestrial GW detectors and have detected GWs from merging stellar-mass black holes and neutron stars (e.g., Abbott et al. 2019, 2021), are only sensitive to GW signals that are ten orders of magnitude higher in frequency than those accessible to PTAs.
A nanohertz GWB can be detected using a precisely timed ensemble of millisecond pulsars (Sazhin 1978; Detweiler 1979), called a pulsar timing array (PTA, Foster & Backer 1990). The GWs distort the space-time between the Earth and the pulsars, changing their proper distance, thereby leading to a measurable deviation of the pulsar pulse arrival times. Since such effects cannot be detected with confidence using only one pulsar, PTAs leverage the imprint of spatially-correlated timing deviations between pulsars which are separated by kiloparsec distances across the galaxy, yet are subject to the common influence of the GWB.
An isotropic GWB manifests itself as a long-timescale, low-frequency (or red) common signal across the pulsars in a PTA. This common signal is characterized by the common spectrum and the inter-pulsar spatial correlations. For an isotropic GWB these spatial correlations are unique, referred to as the Hellings & Downs (1983) (HD) correlations, and thus are considered to be the "smoking gun" signature for the presence of a GWB (Tiburzi et al. 2016) in any PTA data set. The spectral amplitude of this common signal is determined by the characteristic strain, h_c(f), of the GWB, which itself is a function of the physics sourcing the GWB (e.g., SMBHB masses, merger timescale, and number density) (e.g., Sesana 2013a; Kelley et al. 2017; Chen et al. 2019). Thus, precise spectral characterization of the GWB will allow us to extract the underlying astrophysics of the background, as well as distinguish between different sources of the GWB (e.g., Pol et al. 2021).
The ability to detect GWs relies on, among other things, the number of pulsars available to cross-correlate in GWB searches, and on the length of each pulsar data set (Siemens et al. 2013). Improvements in both of these parameters increase the detection significance of the GWB signal, which in turn allows for better constraints on the parameters of the GWB spectrum (Pol et al. 2021). Hence, international efforts spanning decades from the European Pulsar Timing Array (EPTA, Desvignes et al. 2016), the North American Nanohertz Observatory for Gravitational Waves (NANOGrav, Arzoumanian et al. 2016), and the Parkes Pulsar Timing Array (PPTA, Manchester et al. 2013), as well as newer PTAs such as the Indian Pulsar Timing Array (InPTA, Joshi et al. 2018), the Chinese Pulsar Timing Array (CPTA, Lee 2016), and observations with the MeerKAT interferometer in South Africa (Bailes et al. 2020), share and combine their data to form the International PTA (IPTA, Hobbs et al. 2010).
In this spirit of international collaboration, the IPTA has produced two data sets to date. The first IPTA data release (DR1, Verbiest et al. 2016) consisted of 44 millisecond pulsars and yielded no conclusive detection of a GWB. The second IPTA data release (DR2, Perera et al. 2019) consists of 65 pulsars and is the focus of this analysis. The pulsars in DR2 have data sets spanning 0.5-30 years. For the first time, we process the data subsets from each individual PTA and search for a GWB in a self-consistent way, thus enabling us to make a fair comparison of the respective PTA constraints.
Recently, a spatially uncorrelated (pulsar-weighted-average) spectrally similar common process, or common-spectrum process (CP), was detected in the NANOGrav 12.5-year data set (Arzoumanian et al. 2020), the second data release of the Parkes Pulsar Timing Array (Goncharov et al. 2021a), and the EPTA six-pulsar data set from its second data release (Chen et al. 2021b). The process is modeled as an additional time-correlated term with the same power spectrum in all of the pulsars. However, there is little evidence to support the existence of spatial HD correlations in any of these data sets. We compare the IPTA DR2 constraints on the GWB with those obtained from these analyses.
The paper is organized as follows: in Section 2 we give an overview of the second IPTA data release, hereafter referred to as DR2. We describe our data analysis methods in Section 3, and give our results in Section 4. Caveats and implications of our analysis and results are discussed in Section 5, including the astrophysical interpretation of a potential GWB. The conclusion is given in Section 6.
IPTA DATA RELEASE 2
IPTA DR2 includes a combination of timing data from the following individual PTA data releases: the EPTA data release 1.0 (Desvignes et al. 2016), the NANOGrav 9-year data set (Arzoumanian et al. 2015), and the PPTA first data release (Manchester et al. 2013) and its extended version (Reardon et al. 2016). The EPTA data set includes high-precision timing data from 42 MSPs obtained with the largest radio telescopes in Europe (the Effelsberg telescope, the Lovell telescope, the Nançay telescope, and the Westerbork Synthesis telescope), covering data from 1996 to 2015 with time baselines between 7 and 18 yr. In addition to these data, archival timing data of PSR J1939+2134 since 1994 were included.
The NANOGrav 9-year data set includes high-precision timing observations of 37 MSPs obtained with the Robert C. Byrd Green Bank Telescope and the Arecibo telescope, spanning time baselines between 0.6 and 9.2 yr and covering data from 2004 to 2013. In addition, the long-term timing data of PSR J1713+0747 from Zhu et al. (2015) and the data of PSRs J1857+0943 and J1939+2134 from 1984 through 1992 (Kaspi et al. 1994) were included. The PPTA data set includes high-precision timing observations of 20 MSPs obtained with the Parkes radio telescope (also known as Murriyang) from 2004 to 2011. IPTA DR2 also included single-frequency-band (1.4 GHz/L-band) Parkes telescope legacy data obtained since 1994. The additional 3.0 GHz timing data reported in Shannon et al. (2015) for PSRs J0437−4715, J1744−1134, J1713+0747, and J1909−3744 were also included in the data set. In total, the timing data of 65 MSPs were included in IPTA DR2, which has 21 more sources than IPTA DR1 (Verbiest et al. 2016). There are 27 and 7 MSPs in IPTA DR2 with timing baselines >10 yr and >20 yr, respectively. All pulsars were observed at multiple frequencies. All EPTA and PPTA observations were averaged in time and frequency to obtain a single time-of-arrival (TOA) for each receiver and observation. The NANOGrav observations were averaged in time but retained sub-band information, i.e., they were averaged in frequency only to a resolution ranging from 1.5 to 12.5 MHz depending on the receiver and backend instrument combination, resulting in a single TOA for each frequency channel. More details about the constituent PTA data sets can be found in Perera et al. (2019).
The different data sets for a given pulsar in IPTA DR2 were combined by fitting for time offsets, referred to as JUMPs, in the timing model to account for any systematic delays between data sets. The most highly weighted data set, with the lowest sum of TOA uncertainties, was used as the reference data set in this process. The timing models of the pulsars included astrometric parameters, rotational frequency information, dispersion measure information, and Keplerian and post-Keplerian parameters if the pulsar is in a binary system. For NANOGrav-observed pulsars, "FD" parameters were included to minimize the effect of frequency-dependent profile variations (see Arzoumanian et al. 2015). IPTA DR2 produced two data set versions, depending on the method used to handle the dispersion measure (DM) variations of the pulsars over time (VersionA and VersionB; see Perera et al. 2019 for details). In VersionA, the DM variations were determined using DMMODEL, described in Keith et al. (2013), and the noise parameters for the different data sets were taken directly from their original data releases. In VersionB, the DM variations were modeled using the first two time derivatives of the DM and a time-correlated stochastic DM process in the timing model. The noise parameters were also re-estimated based on the new IPTA data combination in this version. We use VersionB for this work.
DATA-ANALYSIS METHODS
In this work we follow the conventions established by other pulsar timing array data analyses (i.e., Arzoumanian et al. 2016, 2020; Lentati et al. 2015; Goncharov et al. 2021a). The multivariate Gaussian likelihood L(δt|θ) is employed to model noise and signal contributions, parametrised by θ, to the observed timing residuals. Our likelihood is of the same form as in other PTA analyses (e.g., Arzoumanian et al. 2015). We used enterprise (Ellis et al. 2020) to evaluate the likelihood and priors, and PTMCMCSampler (Ellis & van Haasteren 2017) to perform a Markov chain Monte Carlo (MCMC) simulation, drawing samples from the posterior probability distribution. Model selection was performed via the product-space sampling method (Carlin & Chib 1995; Hee et al. 2016). Additionally, we used the Savage-Dickey approximation to the Bayesian evidence ratio when appropriate.
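A minimal sketch of the Savage-Dickey density-ratio idea is given below: the Bayes factor for a nested model is estimated as the ratio of the posterior to the prior density at the nested parameter value. The posterior samples and prior used here are mock values for illustration, not outputs of the actual analysis.

```python
# Sketch: Savage-Dickey density-ratio estimate of the Bayes factor for a
# nested model (e.g., "no common process", theta = theta_0) from MCMC
# posterior samples: BF_01 ~= p(theta_0 | data) / p(theta_0). The posterior
# density at theta_0 is estimated from a simple histogram bin; the samples
# and prior below are mock values for illustration only.
import numpy as np

rng = np.random.default_rng(0)
samples = rng.normal(loc=0.8, scale=0.4, size=100_000)  # mock posterior draws

theta0 = 0.0
prior_lo, prior_hi = -5.0, 5.0
prior_density = 1.0 / (prior_hi - prior_lo)             # uniform prior

# Posterior density at theta0 from the fraction of samples in a narrow bin
half_width = 0.05
post_density = np.mean(np.abs(samples - theta0) < half_width) / (2 * half_width)

bf_01 = post_density / prior_density  # support for the nested (null) model
print(f"BF_01 ~= {bf_01:.3f}; BF_10 ~= {1 / bf_01:.1f}")
```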
Noise models
For each pulsar, we modeled the TOAs with a combination of four processes: the timing model, white noise, intrinsic red noise, and DM variations. Deterministic contributions from the timing model, described in Section 2, were analytically marginalised (van Haasteren et al. 2009). The time-uncorrelated white noise was modeled with EFAC and EQUAD parameters, plus ECORR parameters for the NANOGrav pulsars with their sub-banded TOAs (definitions of EFAC, EQUAD, and ECORR can be found in, e.g., Verbiest et al. 2016; Perera et al. 2019). Every observing receiver and backend system combination is given its own set of white noise parameters. The time-correlated red noise process (e.g., pulsar spin noise, Shannon & Cordes 2010) and stochastic DM variations (Keith et al. 2013) were modeled as Fourier-basis Gaussian processes. In each case the Fourier spectrum coefficients were modeled as power laws,

$$P(f) = \frac{A^2}{12\pi^2} \left(\frac{f}{f_{\rm yr}}\right)^{-\gamma} f_{\rm yr}^{-3}, \quad (1)$$

where A is the power law amplitude, γ its spectral index, and f_yr = 1 yr^{-1} ≈ 3.17 × 10^{-8} Hz. The difference between these two processes lies in their dependence on the radio frequency ν. Intrinsic red noise is achromatic, i.e., frequency independent, while DM variations follow a ν^{-2} dependence (e.g., Lentati et al. 2016). Despite all MSPs in IPTA DR2 exhibiting high rotational stability, such that the marginalized timing model, red noise, and DM noise terms are in general sufficient, certain pulsars have been found to experience timing events that need to be included in their data model. Of interest to this analysis is PSR J1713+0747, which was observed to experience multiple sudden drops in apparent DM with an exponential recovery (Demorest et al. 2013; Lam et al. 2018; Goncharov et al. 2021b). Only the first such event lies within the timespan of IPTA DR2 and was included as an additional deterministic term in the full noise model of PSR J1713+0747. The amplitude, epoch, and recovery time scale of the DM exponential dip are sampled simultaneously with the pulsar red and DM noise terms. Lentati et al. (2016) found additional sources of red noise in IPTA DR1. These include radio-frequency-band-dependent and observing-system-dependent terms, which may affect measurements of P_RN and P_DM if not modeled. It is possible that mismodeling these effects can bias the recovery of the CP; more prescriptive models for the CP should be less affected by this bias. Recent PTA analyses have included more complex red noise and DM variation models, where different pulsars in the array use different models (e.g., Aggarwal et al. 2019; Goncharov et al. 2021b). In the name of computational efficiency we opted to use the same power law models for all pulsars except when absolutely necessary, as is the case for PSR J1713+0747.
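A minimal Python sketch of the power-law spectrum of Eqn. (1), together with the ν^-2 chromatic scaling that distinguishes DM noise from achromatic red noise, is given below; the parameter values are illustrative only.

```python
# Sketch: evaluating the power-law PSD of Eqn. (1) for achromatic red noise,
# and the nu^-2 chromatic scaling that distinguishes DM variations.
# Parameter values are illustrative only.
import numpy as np

F_YR = 1 / (365.25 * 86400)  # 1/yr in Hz, ~3.17e-8

def powerlaw_psd(f, log10_A, gamma):
    """P(f) = A^2 / (12 pi^2) * (f / f_yr)^(-gamma) * f_yr^-3  [s^3]."""
    A = 10 ** log10_A
    return A**2 / (12 * np.pi**2) * (f / F_YR) ** (-gamma) * F_YR ** (-3)

def dm_scaling(nu_obs_mhz, nu_ref_mhz=1400.0):
    """Chromatic nu^-2 amplitude scaling of DM noise relative to nu_ref."""
    return (nu_ref_mhz / nu_obs_mhz) ** 2

f = np.array([1, 2, 5, 10]) * F_YR           # first few Fourier frequencies
print(powerlaw_psd(f, log10_A=-14.5, gamma=13 / 3))
print(dm_scaling(700.0))                      # DM delay ~4x larger at 700 MHz
```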
IPTA DR2, being the combination of data from multiple telescopes and many observing systems, has a larger model parameter space than its constituent data sets. The large number of model parameters and TOAs increases the computational complexity of the analysis. As we searched for long-term processes, such as the GWB, we limited our analysis to pulsars whose observation time exceeded 3 years. This reduced the number of pulsars from the full 65 in DR2 to 53. Additionally, we fixed the white noise parameters (EFAC, EQUAD, and ECORR) to their median a posteriori values from single-pulsar analyses. Both of these choices reduced the analysis parameter space to a more manageable size.
Common-spectrum process models
In addition to modeling noise intrinsic to the individual pulsars, we also include a red CP that is present in all of the pulsars. The source of this process could be the GWB, or any other common noise that manifests itself in all pulsars, such as clock errors (Caballero et al. 2016; Hobbs et al. 2020) or errors in the Roemer delays from Solar-system ephemeris (SSE) systematics (Tiburzi et al. 2016; Vallisneri et al. 2020). The choice of red noise priors also affects the recovery of a CP, due to covariance between pulsar intrinsic red noise and the CP (Hazboun et al. 2020b; Goncharov et al. 2021a). Each of these effects can be distinguished by a unique pattern of spatial cross-correlations between pulsars. The cross-power spectral density is defined as

S_ab(f) = Γ_ab P_CP(f),

where P_CP is the common-spectrum process power spectrum and Γ_ab is the overlap reduction function (ORF) describing the inter-pulsar correlations.
For some analyses we did not account for any inter-pulsar correlations, taking Γ_ab = δ_ab to be the identity matrix. For others, we also included different choices of non-diagonal ORFs, such as the quadrupolar Hellings & Downs (1983) correlations that describe a GWB, dipolar correlations associated with SSE errors, or monopolar correlations, Γ_ab = 1, associated with clock errors. In some cases we split the diagonal auto-correlation part of the ORF from the off-diagonal cross-correlation part, treating them as independent processes as a consistency check. When modeling the CP using only the auto-correlations, it is possible to analyze the data from each pulsar independently and then recombine the results to obtain a joint posterior on the CP. We refer to this as the factorized likelihood approach.
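The Hellings-Downs curve itself is straightforward to evaluate. A sketch, assuming the common normalization in which the auto-correlation term equals 1 (overall factors vary between conventions in the literature):

```python
import numpy as np

def hellings_downs(xi, auto=False):
    """ORF of an isotropic GWB (Hellings & Downs 1983) for a pulsar
    pair separated by angle xi (radians)."""
    x = (1.0 - np.cos(xi)) / 2.0
    with np.errstate(divide="ignore", invalid="ignore"):
        orf = 1.5 * x * np.log(x) - 0.25 * x + 0.5
    orf = np.where(x == 0.0, 0.5, orf)   # x ln x -> 0 as x -> 0
    return orf + 0.5 if auto else orf    # Gamma_aa = 1 for auto-correlations
```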
We modeled the CP using a Fourier-basis Gaussian process, using basis frequencies f = 1/T, 2/T, ..., where T is the timespan between the earliest and latest observation in the data set. We model the power spectrum of the CP as a power law using Eqn. (1), replacing the pulsar noise amplitude and spectral index with those of the common process, A_CP and γ_CP. In this parameterization of the power spectrum, the characteristic strain spectrum for the GWB is

h_c(f) = A_CP (f/f_yr)^α. (4)

In some cases we fixed γ_CP = 13/3, equivalent to α = (3 − γ_CP)/2 = −2/3, the expected spectrum for a GWB composed of circular supermassive binary black holes (Phinney 2001), and in others we left γ_CP as a free parameter.
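Eq. (4) together with Eq. (1) gives the standard conversion between residual PSD and characteristic strain, h_c²(f) = 12π² f³ P(f); a small helper of the kind used to produce strain representations such as the lower panel of Figure 1:

```python
import numpy as np

def psd_to_strain(f, psd):
    """Characteristic strain from residual PSD: h_c(f) = sqrt(12 pi^2 f^3 P(f))."""
    return np.sqrt(12.0 * np.pi**2 * f**3 * psd)
```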
To determine the number of Fourier frequencies used in the power law CP model, we fit the power spectrum with a broken power law model. The broken power law is the sum of the standard red power law and a white spectrum. This is implemented as a single spectrum with a fixed spectral index at low frequencies that smoothly transitions into a flat, white-noise-dominated spectrum at high frequencies:

P(f) = A²/(12π²) (f/f_yr)^(−γ_CP) [1 + (f/f_bend)^(1/κ)]^(κ γ_CP) f_yr^(−3), (5)

where f_bend is the frequency at which the spectral index of the power spectrum changes and κ controls the smoothness of the transition. In this model P(f) ∼ f^(−γ_CP) for f ≪ f_bend, and P(f) is constant for f ≫ f_bend. As a verification of our power law models, we performed a free spectral analysis, where the power at each frequency is fit independently rather than being constrained by a particular spectral shape P(f).
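A sketch of Eq. (5); the default smoothing κ = 0.1 and the optional high-frequency index delta (0 for a flat, white spectrum) are illustrative choices, not the values adopted in the analysis:

```python
import numpy as np

F_YR = 1.0 / (365.25 * 24 * 3600)

def broken_powerlaw_psd(f, log10_A, gamma, f_bend, kappa=0.1, delta=0.0):
    """Eq. (5): ~f^-gamma for f << f_bend, smoothly flattening to
    ~f^-delta (white for delta = 0) above the bend."""
    A = 10.0 ** log10_A
    shape = (1.0 + (f / f_bend) ** (1.0 / kappa)) ** (kappa * (gamma - delta))
    return (A**2 / (12.0 * np.pi**2)) * (f / F_YR) ** (-gamma) * shape / F_YR**3
```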
Frequentist analyses
As a comparison for our primary Bayesian data analysis pipeline, we performed a frequentist analysis using the noise-marginalized optimal statistic. The optimal statistic is an estimator for the amplitude of the GWB based on the inter-pulsar correlations (Anholm et al. 2009; Demorest et al. 2013; Chamberlin et al. 2015). Its original derivation assumed the pulsars have no intrinsic red noise. The noise-marginalized optimal statistic uses posterior samples from the Bayesian data analysis to marginalize over the pulsars' red noise. It has been shown to estimate the amplitude of the GWB more accurately when the pulsars have intrinsic red noise (Vigeland et al. 2018), as is the case in IPTA DR2.
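Schematically, the optimal statistic is a ratio of cross-correlated data products to a template normalization, with the S/N following from the same denominator. A simplified dense-matrix sketch (the production code works in a reduced basis and repeats the calculation over noise-posterior draws to marginalize the red noise, both omitted here); the container names are illustrative:

```python
import numpy as np

def optimal_statistic(residuals, noise_covs, templates):
    """A^2 estimator from cross-correlations (Anholm et al. 2009;
    Chamberlin et al. 2015).  templates[(a, b)] is the unit-amplitude
    GWB cross-covariance S_ab (ORF included); noise_covs[a] the
    single-pulsar noise covariance; residuals[a] the timing residuals."""
    num = den = 0.0
    names = sorted(noise_covs)
    for i, a in enumerate(names):
        Pa = np.linalg.inv(noise_covs[a])
        for b in names[i + 1:]:
            Pb = np.linalg.inv(noise_covs[b])
            S = templates[(a, b)]
            num += residuals[a] @ Pa @ S @ Pb @ residuals[b]
            den += np.trace(Pa @ S @ Pb @ S.T)
    a2 = num / den
    return a2, a2 * np.sqrt(den)   # (A^2_hat, S/N) in the weak-signal limit
```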
RESULTS
The IPTA DR2 data set, with its large number of pulsars (53 for this work), long timespan, and various independent observing systems, offers a wealth of different analysis opportunities. Here we present a selection of analyses to give a complete picture of what IPTA DR2 teaches us and how it compares to other PTA data sets.
IPTA DR2 data set
We first show results from the full combined IPTA DR2 data set using methods which differ in their spectral modeling and choice of ORF. The full standard GWB search uses all the available information from the auto- as well as the cross-terms (which are assumed to follow the HD correlation). As the cross-terms, which come from the inter-pulsar correlations, are the defining feature of the GWB, we insist on their presence in order to confidently claim a detection. However, the auto-correlations are initially the dominant source of information, especially for spectral parameter estimation, and the detection of power in them is considered to be the first hint of a GWB (Romano et al. 2021). It is important to emphasize that detecting the auto-correlations alone is insufficient to claim a detection of a GWB.
Common-spectrum process
To begin, we apply a free spectral model for the CP, measuring the amount of power at each sampling frequency independently, up to 30 frequency bins; the result is shown in the top panel of Figure 1. The common red noise power can be converted into GW strain using Eqn. (4). This alternative representation of the IPTA DR2 analysis can be found in the lower panel of Figure 1. For reference, we include the predicted sensitivity curve, made using hasasia (Hazboun et al. 2019a,b) and the measured white noise parameters of the DR2 data set. Note that the noise power spectral density used in this curve only contains TOA errors, EFAC, EQUAD and ECORR, and does not contain any estimates of the red noise, as for many pulsars it is difficult (and in fact the point of this analysis) to disentangle intrinsic red noise from a GWB. Hence the low-frequency end of the sensitivity curve represents a "best case" scenario for comparison. The lowest frequency bin corresponds to the longest timespan, and only two pulsars, J1939+2134 (29 yr) and J1857+0943 (28 yr), have observation baselines sufficient to probe this frequency. However, both have significant red noise with spectral indices ∼3.3 (Perera et al. 2019). Therefore, it is not surprising that we do not confidently detect any power there, as evidenced by the wide tail extending to low power and a median of ∼10⁻⁷ s. However, the second, third, fifth, and eighth frequency bins show power well above the expected sensitivity of IPTA DR2. This could either be the emergence of a GWB or some other unmodeled noise process.
The CP power spectrum can be modeled with a simple power law using Eqn. (1). Arzoumanian et al. (2020) have shown that the choice of the number of modeled frequencies can affect the constraints on the power-law amplitude and spectral index. Thus, we apply the broken power law model from Eqn. (5) to find the optimal number of frequency bins for the analysis. Figure 2 shows the marginalized posterior on the bend frequency. We can identify a clear peak at the 13th frequency bin, 13/T, corresponding to a frequency of 1.4 × 10⁻⁸ Hz, indicated by the orange dashed line. For the remainder of this work, we will limit the search to the lowest 13 frequencies with the simple power law model. This produces constraints equivalent to an analysis using the broken power law, but in a simpler and more computationally efficient way.
We have also verified that the addition of the BAYESEPHEM SSE model (Vallisneri et al. 2020) to the analysis does not change our results significantly from an analysis that fixed the SSE to DE438. The nearly 30 years of timespan allows for the separation of the SSE effects from other correlated signals (Vallisneri et al. 2020). For simplicity, we will only show results with DE438, unless stated otherwise.
Figure 3 compares the results when using two different ORFs. The model that uses only the auto-correlation terms, which we denote CP in Table 1, is very strongly favored over a model with only intrinsic pulsar noise and no common-spectrum process, with a log10 Bayes factor of 8.2. Despite the large Bayes factor in favor of the CP, this does not suffice to claim a GWB detection, as we have only used the auto-correlations. This strong evidence only indicates that a number of pulsars have red noise with similar spectral characteristics. We must turn to the cross-correlations to determine if this CP is HD correlated, as a GWB should be. Using the full HD ORF containing both auto- and cross-correlations, we find only middling evidence in favor of the auto+cross HD model. The log10 Bayes factor for the full HD model compared to the auto-correlated-only CP is 0.3, as shown in Table 1.

Figure 3. Comparison of common-spectrum process parameters when using auto-correlations only and the full auto+cross-correlated HD model. Left: 2D posterior for common-spectrum process power-law parameters. Green lines mark γ = 13/3 and A_CP = 2.8 × 10⁻¹⁵, while the contours represent the 1-, 2- and 3-σ confidence intervals. Right: 1D posterior for common-spectrum process power-law amplitude, using fixed spectral index γ = 13/3.
Figure 3a shows that the 2D posterior contours of these two models are in relatively good agreement. A small shift towards lower amplitudes and higher spectral index can be seen when using the full HD ORF with both auto- and cross-correlations. Using the auto-correlation terms only, we find A_CP = 5.1^{+6.7}_{−3.1} × 10⁻¹⁵ and γ_CP = 3.9 ± 0.9, where the errors represent 95% credible regions. Using the full HD ORF we can constrain the CP power law to A_CP = 3.8^{+6.3}_{−2.5} × 10⁻¹⁵ and γ_CP = 4.0 ± 0.9.
When we fix the power spectrum index to γ = 13/3, as shown in Figure 3b, it is clear that the full HD model finds a systematically lower amplitude. In this case we find an amplitude of A_CP = 3.2 ± 1.0 × 10⁻¹⁵ for the auto-correlation-only analysis and an amplitude of A_CP = 2.8^{+1.2}_{−0.9} × 10⁻¹⁵ using the full HD ORF, where the uncertainties represent the 95% credible regions. These results are in broad agreement with published constraints on the CP. A more detailed comparison can be found in Section 4.3.
Split ORF analysis
Similar to how we may consider the auto-correlation parts of the ORF alone, the full ORF can be split into two independent processes. In this case the auto-correlation and the cross-correlation parts each have their own independent amplitude, as was done in Arzoumanian et al. (2020). In the HD ORF the auto-correlation part is Γ_aa = 1 and the cross-correlation parts are suppressed by at least a factor of 2, Γ_ab < 0.5. This makes the cross-correlations harder to constrain. Figure 4 shows the posteriors for the two amplitudes of a split ORF analysis for fixed γ = 13/3, compared to the full auto+cross-correlation model. The cross-correlations do not have sufficient precision to place constraints on the amplitude of the GWB. However, they do place a 95% upper limit of 3.6 × 10⁻¹⁵ on the GWB when the prior choice is taken into account. This is consistent with the amplitude derived using the full auto- and cross-correlation model in Fig. 4. The auto-correlation terms are much more informative. Combining the information from both shifts the amplitude towards lower values. This shows that the cross-terms can contribute to the full GWB search, even if they provide less information. The auto-correlations are more likely to be affected by intrinsic pulsar noise. Using a more sophisticated noise model for each pulsar can help produce a more robust estimate of the amplitude of any CP.
Optimal statistic
Figure 5 shows the amplitudes and signal-to-noise ratios (S/N) recovered by the pulsar-noise-marginalized optimal statistic (OS) method, which uses cross-correlations only. We find no evidence for a dipolar correlated process, as the amplitude and S/N for this model are centered on 0. SSE systematics are expected to manifest at specific frequencies related to the celestial bodies. The IPTA DR2 data set is long enough to probe lower frequencies that should be less affected by SSE errors (Vallisneri et al. 2020). The S/N = 0.6^{+1.2}_{−0.8} for the Hellings-Downs correlation is insufficient to claim a detection. This is consistent with the Bayesian model selection. The HD amplitude from the OS seems to be in tension with the Bayesian results for the auto-correlated CP, but consistent with the Bayesian results for the full HD model. This strengthens the case that the cross-terms have a significant role to play in parameter estimation as well as detection confidence. Finally, the OS has the largest S/N = 2.0^{+1.8}_{−1.4} for a monopole with a small amplitude. This could be due to the complexity of IPTA DR2 and some amount of unmodeled noise.
As the spatial correlations are not well constrained (see Figure 6), both the HD and monopolar correlations can fit the data. We have binned pulsar pairs according to their angular separation. Increasing the number of pulsars in the array, as well as better timing of the pulsars, can help to tighten the constraints.
We test the significance of the OS S/N by performing two analyses which estimate the false alarm rate for a given S/N. So-called phase shifts and sky scrambles (Cornish & Sampson 2016; Taylor et al. 2017) break the correlations between the pulsars, leaving the red noise power in the pulsars but removing evidence for spatial correlations. By analyzing the phase-shifted and sky-scrambled data we can determine the rate of observing a particular S/N for a type of correlation in data that has none. These two false alarm studies result in p-values (Table 2) too high to conclude that there is evidence for any correlations. It is possible that the measured HD S/N can therefore arise by chance from a common process with no spatial correlation.
IPTA DR2 data subsets
With the large volume of the IPTA DR2 data set we can look at different subsets to investigate hints of the origin and evolution of the CP signal across different slicings of the full data set.
Pulsar-based selection
Since a PTA is made from a number of single pulsars, we can look at how each pulsar contributes to the CP by itself. The dropout factor measures how consistent a given pulsar's intrinsic red noise is with the CP by comparing models with and without the CP for that pulsar (see e.g., Arzoumanian et al. 2020). The dropout factors for each pulsar, computed using both the traditional hypermodel and factorized likelihood approaches, are shown in Figure 7. About 20 pulsars have factors > 1, while only three slightly disfavor the CP, with the remaining pulsars displaying indifference. Monte Carlo sampling uncertainties on the dropout factors (computed either way) can be estimated through statistical bootstrapping (Efron & Tibshirani 1994). In the hypermodel dropout analysis the MCMC chain is re-sampled with replacement, generating a new statistical realization of the sampled chain that is exceptionally unlikely to be identical to the original chain. This process was repeated 10³ times, generating many realizations of the MCMC chain from which the dropout factors were computed. Hence a distribution of dropout factors over bootstrap realizations was generated for each pulsar, allowing us to compute median values and 95% confidence intervals. A similar procedure was performed for the factorized likelihood approach. For a given bootstrap realization, each individual pulsar's MCMC chain was re-sampled with replacement. With the re-sampled CP posteriors for each pulsar, the factorized-likelihood approach pieces together the dropout factor by iteratively removing pulsars from the array, all using bootstrapped pulsar CP posterior chains. This process was repeated for 10³ bootstrap realizations across 25 different combinations of metaparameters used in the factorized-likelihood dropout factor calculation. The end result is that the median dropout factor and 95% confidence intervals were computed from a total of 2.5 × 10⁴ factorized-likelihood dropout values for each pulsar. As seen in Figure 7, the dropout factors are consistent between the two techniques. The vast majority of pulsars have dropout factors with overlapping error bars from both methods, and those that do not are within a few sigma of each other. The pulsars showing the largest disparity are ones for which MCMC sampling inefficiencies manifested in different stages of the dropout factor calculation, e.g., in the Savage-Dickey density ratio, or in the integral of the (N − 1) array's CP likelihood over the posterior of a given pulsar (Taylor et al., in prep.).
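The bootstrap step itself is compact. A sketch, assuming dropout_fn wraps the (expensive) dropout-factor computation for one realization of the chain:

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_dropout(chain, dropout_fn, n_boot=1000):
    """Re-sample an MCMC chain with replacement n_boot times and
    recompute the dropout factor, returning the median and 95% CI."""
    n = len(chain)
    vals = np.array([dropout_fn(chain[rng.integers(0, n, size=n)])
                     for _ in range(n_boot)])
    lo, med, hi = np.percentile(vals, [2.5, 50.0, 97.5])
    return med, (lo, hi)
```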
The modularity and speed of the factorized likelihood method can be used to approximate different combinations of pulsars within the array. These sub-arrays are a useful way to verify and understand the results we see in the full array. We created four sub-arrays, consisting of the pulsars with the highest/lowest dropout factors and the longest/shortest timespans. These pulsars were selected by sorting all pulsars in the array by their dropout and timespan characteristics, and then taking the top half of 27 pulsars and the bottom half of 26 pulsars. The Savage-Dickey density ratio was calculated for these sub-arrays to compare them to that of the full array. The sub-array made up of the top half of pulsars according to dropout factor had a Savage-Dickey density ratio of 5.6 × 10⁹, an order of magnitude larger than that of the unaltered array, 1.6 × 10⁸. The corresponding sub-array of the bottom half of pulsars based on dropout had a density ratio of 1.8. When comparing the pulsars with the longest and shortest timespans, the sub-array with the longer timespans had a Savage-Dickey density ratio of 1.8 × 10⁷ and the shorter-timespan sub-array had a density ratio of 3.6. These results were not surprising, as the dropout factor measures the evidence for the array's common process in a particular pulsar, so removing those with low (high) dropout factors increases (decreases) the evidence for the CP. Similarly, pulsars with short timespans are not sensitive to the lowest frequencies explored by the array when in combination with longer-timespan pulsars.
Splitting IPTA DR2 by time
To test the evolution of the common-spectrum process, we split DR2 into two data sets with equally long timespans, i.e., cutting DR2 into two time slices (in a similar manner as Hazboun et al. 2020a). The two data sets are not fully equivalent, though: the early part contains only 19 pulsars and is mostly dominated by single-radio-frequency observations, while the second part has data from all 53 pulsars as well as multi-radio-frequency coverage and higher-quality timing measurements. Each data set is then analysed separately. We find that the first half contributes little information on the CP, with a broad power-law 2D posterior contour that still encompasses the contour from the full data set. The second half contains the majority of the information and produces almost identical constraints as the full data set. This is the expected evolution, as the quality of the data set gradually improves over time (Hazboun et al. 2020a).
Constituent data sets
We can also select the data that were provided by the constituent PTA collaborations to get three data subsets: EPTA, NANOGrav and PPTA. As each data subset has a different timespan, we set a frequency cutoff at 1.4 × 10⁻⁸ Hz to limit the number of frequencies for the analyses. Figure 8 shows that IPTA DR2 produces the tightest constraints on the CP power law compared to the constituent data sets.
While the PPTA data are still consistent with an upper limit, some support for a common red noise can be found in the EPTA and NANOGrav data. The free spectra also show consistency with a power law model that spans all three constituent data sets and IPTA DR2.
Comparison with other recent data sets
Since the data from the regional PTAs were combined to form the IPTA DR2, the regional PTAs have continued to collect data and improve their data analysis methodology.
We can compare the results using the older IPTA DR2 data set and the most recent data sets from NANOGrav, PPTA and EPTA. Compared to the constituent PTA data sets, the recent NANOGrav data set includes ∼4 more years and 10 new pulsars, the PPTA expands by ∼3 years and 7 pulsars, and the EPTA DR2 adds ∼7 years for 6 pulsars (Alam et al. 2021; Kerr et al. 2020; Chen et al. 2021b).
The published free spectral and power law model recoveries can be found in Figure 9. For simplicity, we also show the recovered amplitudes at the reference frequency of 1/(1 yr) and fixed γ_CP = 13/3 in Figure 10.

Figure 10. CP amplitude posteriors for fixed spectral index, γ = 13/3. IPTA DR2 and EPTA DR2 find a systematically higher amplitude for the common-spectrum process than NANOGrav 12.5 yr and PPTA DR2, although the disagreement is not substantial.

The Mahalanobis distance D_M acts as a generalization of the one-dimensional sigma deviation to n-dimensional distributions (Mahalanobis 1936),

D_M = [(μ1 − μ2)ᵀ Σ⁻¹ (μ1 − μ2)]^(1/2), (6)

where μ1 and μ2 are the mean vectors of the multivariate distributions to be compared and Σ = Σ1 + Σ2 is the joint covariance. To quantify the overlap and consistency of the power law parameters as determined using each data set, the Mahalanobis distances between the 2D posterior distributions are computed in Table 3. Despite some differences, the posteriors overlap to better than 3-sigma for all pairs of distributions. IPTA DR2, using older observations, still shows similar features as the NANOGrav 12.5 yr, 6-pulsar EPTA DR2 and PPTA DR2 analyses, which have added a significant amount of new data to the regional PTA data sets. A future combination of these data sets will boost the total PTA sensitivity in the same way IPTA DR2 is more sensitive than its constituent data sets. Future combined IPTA data sets will be important for investigating the origin of this common-spectrum process.
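Given 2D posterior samples of (log10 A_CP, γ_CP) from two PTAs, Eq. (6) is easy to evaluate using sample means and covariances as Gaussian summaries; a sketch:

```python
import numpy as np

def mahalanobis(samples1, samples2):
    """Eq. (6): distance between two posteriors, each an (n, 2) array
    of (log10-amplitude, spectral index) samples, with Sigma = S1 + S2."""
    mu1, mu2 = samples1.mean(axis=0), samples2.mean(axis=0)
    sigma = np.cov(samples1, rowvar=False) + np.cov(samples2, rowvar=False)
    d = mu1 - mu2
    return float(np.sqrt(d @ np.linalg.solve(sigma, d)))
```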
Source of the common-spectrum process
The first IPTA data release did not show signs of a common-spectrum temporally-correlated process, but instead set an upper limit of 1.7 × 10⁻¹⁵. This appears to be in tension with our results from the analysis of the second data release, with a CP amplitude of 2.8 × 10⁻¹⁵. However, there are two major differences to point out: 1) the different choice of priors for the pulsar red, DM and common noise (Hazboun et al. 2020b) and 2) the DR1 upper limit was computed without the use of an SSE uncertainty model (Vallisneri et al. 2020). Both have been shown to lead to an increase in the upper limit, alleviating the tension between the DR1 and DR2 CP amplitudes.
As in other recent PTA analyses, we find strong evidence in favor of the CP over the noise-only hypothesis. It is important to note that 1) the lack of support for GW-like spatial correlations prohibits any claims of GW detection; however, 2) this type of evidence for a similar red noise is expected to precede a detection of spatial correlations (Siemens et al. 2013; Pol et al. 2021; Romano et al. 2021). Goncharov et al. (2021a) recently demonstrated that the common-spectrum process model is favored over the noise-only hypothesis when the noise spectra cluster in a similar range, and that it is no longer favored when the noise spectra are drawn from the prior distribution. Because we know that the employed prior distribution for red noise parameters is not representative, it is possible that the evidence we find for a common-spectrum process is caused by a rejection of a null hypothesis rather than by all pulsars exhibiting the spatially-uncorrelated component of a GWB. Thus, it is important to examine the single-pulsar red noise in detail. We have looked at constraints on the simple power law models for the pulsars used in the CP search. In general, pulsars with detectable intrinsic noise have comparable or larger noise than the CP; pulsars without red noise typically have a large amount of white noise, such that the CP is 'hidden'. One noticeable exception is PSR J2317+1439, whose noise spectrum falls clearly below the CP; see also its low dropout factor in Figure 7.
As the search for the common spectrum can be influenced by pulsar intrinsic noise, especially in an inhomogeneous data set, the crucial analysis has to consider information from the cross-correlations. It should be noted that the median amplitudes are slightly different in the analyses with and without spatial correlations, 2.8 × 10⁻¹⁵ vs 3.2 × 10⁻¹⁵. One can also note the stark difference between the posteriors from the split ORF analysis (Figure 4) and the optimal statistic analyses (Figure 5) versus the Bayesian uncorrelated analysis. In other analyses (Arzoumanian et al. 2020; Goncharov et al. 2021a; Chen et al. 2021b) the amplitudes between the two analyses are more in line with one another. The difference here could be in part due to the very long baselines of only a handful of pulsars. This legacy data allows only scant opportunity for correlations amongst those few pulsars, while the long baselines allow the detection of auto-correlated power, even in noisier data. Another possibility is that there is unaccounted-for noise in individual pulsars that is contaminating the signal. Advanced models taking these pulsar noises into account in the GWB search have been shown to affect the individual pulsar red noise (e.g., Arzoumanian et al. 2020; Goncharov et al. 2021a; Chalumeau et al., in prep.).
Other Correlated Signals
Spatial correlations in pulsar timing data have been studied in depth in the literature (Tiburzi et al. 2016; Roebber 2019), and their consideration is an important part of any GW detection procedure. While GWs induce a quadrupole-dominated set of correlations, there are other types of spatial correlations between pulsar data sets (Roebber & Holder 2017; Tiburzi et al. 2016; Roebber 2019). Monopolar spatial correlations, i.e., all pulsars seeing the same shifts in residuals irrespective of sky position, can manifest from clock errors, either in the BIPM clock standards or in the various observatory clocks used across the world (Hobbs et al. 2020). Dipolar spatial correlations can manifest from errors in the measurement of processes where the motion of the Earth in the solar system is important (Caballero et al. 2018). This is most direct in the modeling of the solar system barycenter frame of reference, into which all pulsar TOAs are transformed. Errors in solar wind modeling can also add dipolar correlations (Tiburzi et al. 2016). While monopole and dipole correlations are theoretically orthogonal to HD correlations in the space of overlap reduction functions, real data with noise can result in some of these modes mixing (Roebber 2019). This mixing could erroneously be detected as a GWB.
The polarization content of the GWB could also deviate from the two tensor transverse (TT) modes predicted by general relativity, which lead to the HD spatial correlations (Arzoumanian et al. 2021b). Deviations from general relativity would result in a correlation pattern that differs from HD. We would like to emphasize that the current data set does not allow us to draw any conclusions on the presence of spatial correlations. As can be seen in Figure 6, the uncertainties on the spatial correlation coefficients Γ_ab(ξ) determined by the optimal statistic analysis are large. For most angular bins the correlation is indistinguishable from zero, corresponding to the uncorrelated CP. Close to the submission of this paper, we noticed a preprint by Chen et al. (2021a). They have analyzed the IPTA DR2 searching for alternative GW polarizations and claim evidence for spatial correlations induced by a scalar transverse (ST) polarization mode. Chen et al. (2021a) also report a Bayes factor in favor of HD correlations (TT modes) compared to an uncorrelated CP about six times larger than we find: 12 against our 2. Even though we have not searched for the ST mode, we would like to highlight that the reported high evidence for spatial correlations in Chen et al. (2021a) is contrary to what we conclude using the same data set. The scalar transverse ORF is positive definite and should be accompanied by positive evidence for a monopolar correlation in analyses using only the cross-terms (Arzoumanian et al. 2021b). This is the case in our optimal statistic analysis, where the monopolar correlations have the largest S/N and smallest false alarm p-value. In this sense finding some evidence in favor of ST correlations is not too surprising; however, we find no conclusive evidence in favor of any correlation pattern. Using more information in our analysis, with both auto- and cross-terms, disfavors monopolar correlations compared to an uncorrelated CP. Therefore, the conclusions of Chen et al. (2021a) need to be taken with caution. Finally, Chen et al. (2021a) find a Bayes factor in favor of the common process several orders of magnitude smaller than we do (log10 BF = 4.6 compared to our 8.2). They use a different pulsar noise model, including an additional sinusoidal annual DM variation in all pulsars, which could account for some, but likely not all, of the differences.
Astrophysical Interpretation
The nHz GWB is generally thought to be dominated by GW emission from SMBHBs (Burke-Spolaor et al. 2019), with the most massive local SMBHBs expected to be individually observable in the next 5-10 years (Mingarelli et al. 2017). Given that these systems are just a local subset of the cosmological SMBHB population producing the GWB, their local number density, Φ_BHB,0, should correlate with the amplitude of the GWB. The GWB amplitude induced by a cosmological population of circularized SMBHBs can be expressed in geometric units (where G = c = 1) as (e.g. Phinney 2001; Sesana et al. 2008; Sesana 2013b)

h_c²(f) = 4 f^(−4/3) / (3 π^(1/3)) ∫ dM1 dz dq [d³Φ_BHB/(dM1 dz dq)] M_c^(5/3) (1 + z)^(−1/3), (7)

where M1 is the mass of the primary SMBH, M2 the mass of the secondary, q = M2/M1 ≤ 1 is the mass ratio, M_c^(5/3) = [q(1 + q)⁻²] M^(5/3) is the SMBHB chirp-mass term with total binary mass M = M1 + M2, and d³Φ_BHB/(dM1 dz dq) is the differential comoving number density of SMBHBs per unit M1, z, and q.
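Eq. (7) can be evaluated numerically once a population model for d³Φ_BHB/(dM1 dz dq) is chosen. A quadrature sketch in geometric units; the density callable and the integration grids are user-supplied assumptions, not a model from this work:

```python
import numpy as np

def hc_squared(f, density, m1_grid, z_grid, q_grid):
    """Triple trapezoid integral of Eq. (7):
    h_c^2(f) = 4/(3 pi^(1/3)) f^(-4/3) * Int dM1 dz dq
               d3Phi/(dM1 dz dq) * Mc^(5/3) * (1+z)^(-1/3)."""
    M1, Z, Q = np.meshgrid(m1_grid, z_grid, q_grid, indexing="ij")
    mc53 = (Q / (1.0 + Q) ** 2) * (M1 * (1.0 + Q)) ** (5.0 / 3.0)  # Mc^(5/3)
    integrand = density(M1, Z, Q) * mc53 * (1.0 + Z) ** (-1.0 / 3.0)
    inner = np.trapz(np.trapz(np.trapz(integrand, q_grid, axis=2),
                              z_grid, axis=1), m1_grid)
    return 4.0 / (3.0 * np.pi ** (1.0 / 3.0)) * f ** (-4.0 / 3.0) * inner
```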
To determine the local number density of SMBHBs implied by an A_GWB ≈ 2.8 × 10⁻¹⁵ signal, we adopt the quasar-based approach of Casey-Clyde et al. (2021), which assumes proportionality between the SMBHB and quasar populations (which may be triggered by galaxy major mergers, Stemo et al. 2020) over mass and redshift. This has the effect of setting A_GWB ∝ Φ_BHB,0, so that A_GWB directly implies Φ_BHB,0. To check coverage of the entire signal from SMBHBs over mass and redshift, we parameterize A_GWB = A_GWB(Φ_BHB,0, M1,min, zmax), where M1,min and zmax are the minimum primary SMBH mass and maximum redshift, respectively, in Equation 7 (Casey-Clyde et al. 2021). We plot this parameterization of the GWB compared to various strain measurements in Figure 11, including an A_GWB ≈ 2.8 × 10⁻¹⁵ signal (bold contour, gray isosurface).
The 2D panels of Figure 11 show three representative slices from this parameter space (one along each axis), with contours denoting their intersection with isosurfaces of constant GWB signal amplitude. The 3D plot shows the bottom-right 2D panel in this 3D parameter space, along with its intersection with an A_GWB ≈ 2.8 × 10⁻¹⁵ isosurface. We find that recovery of a background amplitude like A_CP requires Φ_BHB,0 ≈ 1.5 × 10⁻⁵ Mpc⁻³ (corresponding to the bottom-right 2D panel in Figure 11), roughly an order of magnitude larger than the ∼1.6 × 10⁻⁶ Mpc⁻³ number density implied by Mingarelli et al. (2017).
Besides this new quasar-based method, the standard approach to determining the local number density Φ_BHB,0 is to model d³Φ_BHB/(dM1 dz dq) using major mergers and empirically observed galaxy and black hole relations (e.g., Simon & Burke-Spolaor 2016; Chen et al. 2019; Middleton et al. 2021). Following the methods of Middleton et al. (2021), we analyze the IPTA DR2 CP amplitude. Figure 12 compares the spread of amplitude and spectral index from the IPTA DR2 CP against values recovered from realizations of SMBHB population simulations. The original population constraints using the NANOGrav (Arzoumanian et al. 2020) frequency bins are shown by the grey shaded area. Repeating the analysis with the frequency coverage of the IPTA DR2 gives the purple shaded contours. As we reach lower frequencies the simulations become more constrained towards the expected spectral index γ = 13/3. Limiting the SMBHB chirp mass to M_c > 10^8.5 M_⊙ in the integral of Equation 7, we get Φ_BHB,0 ≈ 3.0 × 10⁻⁵ Mpc⁻³, which is about a factor of 20 larger than the number density from Mingarelli et al. (2017).
Outlook for other GWBs
Although we have evidence for a single common process whose amplitude and spectral index are consistent with predicted values from a population of SMBHBs with number density Φ0 ≈ 10⁻⁵ Mpc⁻³, other sources are also plausible, for example primordial black holes (e.g., Vaskonen & Veermäe 2021; De Luca et al. 2021; Kohri & Terada 2021), cosmic strings (e.g., Ellis & Lewicki 2021; Blasi et al. 2021; Blanco-Pillado et al. 2021) or phase transitions (e.g., Arzoumanian et al. 2021a, and references therein). These sources could produce a GWB consistent with the CP contours from PTA data. Pol et al. (2021) have shown that an initial confident detection of a GWB including HD correlations will place very stringent constraints on the properties of the possible sources of GWBs. It is also possible to have several backgrounds affecting the data, splitting the total common power into several components. A detailed study of how well we can separate multi-component GWBs is underway.
Modern Noise Mitigation
The PTA community continues to develop new data analysis strategies towards the detection of gravitational waves in pulsar timing data. Aggarwal et al. (2019, 2020) showed that unmodeled noise features in a single pulsar could leak into the gravitational wave channel for deterministic continuous GW and GW memory signals, respectively. More recently, immense effort was put into the development of individualized noise models for PPTA pulsars, demonstrating that sensitivity can be gained from better modeling (Goncharov et al. 2021b). Similar advanced noise modeling efforts are currently underway in both NANOGrav (Arzoumanian et al., in prep.) and the EPTA (Chalumeau et al., in prep.). More sophisticated noise modeling is important because many types of noise can add steep-spectral-index, low-frequency power to pulsar data sets, complicating GWB recovery. Examples include noise from fluctuations due to the interstellar medium and time-correlated noise from long-term instrumental effects in telescope systems, which could arise from a combination of polarisation miscalibration and secular changes in receiver gain.
CONCLUSION
This work shows the immense potential of combining the global efforts of PTA collaborations into one data set. Figure 1 and Figure 8 show that IPTA DR2 is significantly more sensitive than any of the constituent data sets from which it is constructed. While the data in DR2 have now been superseded by more up-to-date efforts (Alam et al. 2021; Kerr et al. 2020; Chen et al. 2021b), Figure 9 shows that the sensitivity from combining these older data sets is comparable with these newer single-PTA data releases.
The conclusions of this analysis are broadly similar to the various GWB analyses carried out by NANOGrav, the PPTA and the EPTA (Arzoumanian et al. 2020; Goncharov et al. 2021a; Chen et al. 2021b). All of these data sets favor the CP model over one with only intrinsic red noise in the individual pulsars. None of these data sets shows clear support for the spatial correlations indicative of a GWB; therefore, a detection of a GWB cannot be claimed. The strong detection of red noise that broadly matches the spectral characteristics of a GWB from SMBHBs, before there is support for spatial correlations, is expected from our understanding of the change in sensitivity of PTAs (Romano et al. 2021; Siemens et al. 2013). As shown in Pol et al. (2021), if the power in the auto-correlations of these pulsars is the first sign of the GWB, then evidence for the spatial correlations should follow in upcoming data sets. If the next individual PTA data sets show increased support, but fall short of detection thresholds, then combining them into an IPTA data set could immediately result in a data set with a significant detection. Such a combination will have the longest timespan, the largest number of pulsars and independent observing systems, and will thus enable a robust GWB search.

All authors contributed to the writing of the paper. The figures were created by NSP, JSH, JACC, and SChe.
Figure 1. IPTA DR2 free spectrum and characteristic strain: The top panel shows the power in terms of time-of-arrival residuals at frequencies in the nanohertz band for the full PTA. The maximum-likelihood power law is shown overlaid on posteriors for free spectral parameters, a generic model that measures power at various frequencies without imposing any empirical model. The bottom panel shows the power law and free spectral information from above converted into units of characteristic strain, i.e., the noise power measured in the same units as GW amplitude. The additional line shows the characteristic strain for the detector using the noise parameters for the pulsars. The lower limit of all violins is a result of the lower bound of the prior range for each frequency component.
Figure 2. Bend frequency: The posterior for the bend frequency parameter in a broken power law search. The peak of the posterior is at the 13th frequency, 13/T, for the data set, denoted by the dashed vertical line.
Figure 4. The constraints from the split ORF analysis on the common process amplitude with spectral index fixed to γ = 13/3. The posterior from the auto-correlations is well constrained, while the posterior from the cross-correlations is unconstrained but prefers amplitudes slightly smaller than the auto-only analysis. The effect of this is seen in the amplitude posterior from the full auto+cross-correlation model, where the posterior peaks at lower amplitude than the auto-only analysis.
Figure 5. Results from the noise-marginalized optimal statistic. Top: Optimal statistic, Â², distribution for monopole, dipole and HD ORFs. The relevant posteriors from the Bayesian split ORF analysis are also shown for comparison. Bottom: The signal-to-noise (S/N) distribution for monopole, dipole and HD ORFs.
Figure 6. Cross-correlation ORF curve from the optimal statistic. The black points indicate the amount of cross-correlation for a given angular separation. Due to the large number of pulsar pairs, we have binned multiple pairs with similar angular separations. The blue and orange dashed lines show the best-fit values for the HD and monopole correlations.
Figure 7. Individual pulsar consistency with the common-spectrum process; error bars represent 95% credible intervals. Pulsars with dropout factors > 1 contribute to the detection of the CP. Dropout factors of ∼1 correspond to no evidence for or against the CP, usually due to higher white noise levels and/or shorter observation timespans. Pulsars with dropout factors < 1 are in tension with the CP.
Figure 9. Comparison of IPTA DR2 to other recent data sets. Left: Free spectral common-spectrum process model. The inclusion of legacy data not used in recent PTA analyses allows IPTA DR2 to reach lower frequencies despite missing the most recently collected data. Right: 2D posterior for CP parameters log-amplitude and spectral index, where the contours represent the 1-, 2-, and 3-σ confidence intervals. All recent data sets are in broad agreement on the characteristics of a common-spectrum process.
Figure 11. The GWB characteristic strain as a function of the local SMBHB number density, Φ_BHB,0, the minimum primary BH mass, M_BH,1,min, and the maximum redshift, zmax, of the population contributing 95% of the GWB signal. Left: Three representative slices of the strain in this parameter space (one along each axis), with solid contours showing their intersection with isosurfaces of constant strain value (A_CP shown in bold). Right: 3D visualization of the zmax − M_BH,1,min panel from the left and its intersection with an A_GWB = 2.8 × 10⁻¹⁵ isosurface (gray).
Figure 12. Comparison of power law constraints versus theoretical SMBHB populations. The 2D amplitude and spectral index constraints of the CPs from Figure 9 are compared to the region of parameters recovered from a large number of realizations of SMBHB population simulations using astrophysical relations from Middleton et al. (2021) (M21 Pop), shown in grey contours, and this work (IPTA DR2 Pop), shown in purple contours.
Table 1. Bayes factor model comparison: The table shows the logarithmic Bayes factors for a number of model comparisons from the hypermodel and factorized likelihood (marked with an asterisk *) analyses. The preferred model is on the left side of each comparison. Brackets indicate the uncertainty in the last digit of the Bayes factors.
Table 2. The p-values calculated from various false alarm analyses of the data set. The measured values of the S/N are compared to the distribution of 10,000 analyses in which the correlations are broken with phase shifts and sky scrambles. Since a monopolar spatial correlation is uniform across the sky, sky scrambles are unable to break the correlations; hence only the phase shift p-value is quoted.
Table 3. Mahalanobis distance between CP parameters (log-amplitude and spectral index) for each pair of PTAs. For all cases there is less than 3-sigma separation.
Purification and In Situ Immobilization of Papain with Aqueous Two-Phase System
Papain was purified from spray-dried Carica papaya latex using an aqueous two-phase system (ATPS). It was then recovered from the PEG phase by in situ immobilization or by preparing cross-linked enzyme aggregates (CLEAs). The Plackett-Burman design and the central composite design (CCD), together with response surface methodology (RSM), were used to optimize the ATPS processes. Highly purified papain (96–100%) was achieved under the optimized conditions: 40% (w/w) 15 mg/ml enzyme solution, 14.33–17.65% (w/w) PEG 6000, 14.27–14.42% (w/w) NaH2PO4/K2HPO4 and pH 5.77–6.30 at 20°C. An in situ enzyme immobilization approach, carried out by directly dispersing aminated supports and chitosan beads into the PEG phase, was investigated to recover papain, with which a high immobilization yield (>90%) and activity recovery (>40%) were obtained. Moreover, CLEAs were successfully used to recover papain from the PEG phase, with a hydrolytic activity hundreds of times higher than that of the carrier-bound immobilized papain.
Introduction
Papain (EC 3.4.22.2) is one of the minor constituents (5-8%) of the cysteine endopeptidases extracted from the latex of Carica papaya [1]. It is one of the most exploited plant proteases and has been used in brewing, baking, meat tenderizing, wound defibrination, edema treatment, wool anti-shrinking, cell isolation and Fab fragment preparation, among other applications [2]. Papain has also been successfully applied in the synthesis of many compounds such as peptides, lipoamino acid-based surfactants, esters of amino acids and carbohydrate derivatives [3].
Papain is extracted from the latex of the Carica papaya fruit. Previously, the commercially available latex, which was seriously contaminated and contained substantial quantities of insoluble material, was usually sun- or oven-dried without further purification. The spray-dried latex now available on the market is more refined and free of insoluble material [1,4]. Traditionally, both types of papaya latex are used to purify papain by multi-step salt precipitation followed by crystallization. However, the process is time-consuming and the purified enzyme is still contaminated with other proteases [4,5]. Another purification strategy, which involves various chromatographic techniques including ion-exchange, covalent or affinity chromatography, is difficult to scale up and costly [6,7].
It is important to develop industrially desirable procedures that are not only time-saving and low-cost, but also generate enzyme with high yield and purity. Aqueous two-phase system (ATPS) extraction is such a powerful method: it has been extensively exploited to separate or purify biological products from different sources, and generates robust, easy-to-scale and biocompatible extraction processes [8]. This purification process integrates clarification, concentration and purification in one unit operation. An ATPS forms when two incompatible hydrophilic polymers, or a polymer and a salt, are mixed in aqueous solution above a critical concentration. Biological products such as enzymes can then be partitioned between the phases and purified to a good extent [9]. Some successful applications of ATPS on large/industrial scale have been demonstrated [10,11]. In 1990, Kuboi et al. used ATPS for the separation of papain from papaya latex [12]. Their study showed that the separated papain was still contaminated with chymopapain. In 2006, Nitsawang et al. reported the use of a polyethylene glycol (PEG)-(NH4)2SO4 system for purifying papain from fresh papaya latex collected directly from the papaya fruit (which is not commercially available and is difficult to handle) [7]. However, that study was based on a single-factor experimental design and did not systematically optimize the ATPS process. Furthermore, it did not address how to recover the purified papain from the PEG phase.
An ideal partition of proteins in ATPS can be accomplished by manipulating a variety of system parameters [13]. It is therefore crucial to optimize the parameters of the ATPS process for purifying papain from papaya latex. Response surface methodology (RSM), which includes experimental design, model fitting, validation and condition optimization, eliminates the drawbacks of single-factor experimental design and has proved powerful and useful for the optimization of ATPS [14,15].
ATPS extraction of protein mixtures leads to one or several protein fractions, which also contain mainly one of the phase-forming polymers. Another problem for ATPS industrialization is therefore how to recover the target protein from the phase-forming polymer. Traditionally, a number of methods can be used for this purpose, such as gel chromatography, ultrafiltration, ion-exchange chromatography and back extraction [16]. However, these methods are complicated, expensive and difficult to scale up. Alternatively, an in situ immobilization method, carried out by direct immobilization of the enzyme from the PEG phase onto a support, may be a feasible choice. It avoids the use of other purification steps and yields an immobilized biocatalyst at the same time. More importantly, the PEG phase or salt phase can be recycled. Several works have reported this method for the isolation and immobilization of enzymes, and good results have been attained [17][18][19]. In the present work, we optimized the ATPS to purify papain from commercially available papaya latex using RSM. The in situ immobilization method was then investigated to recover and immobilize the papain from the PEG phase. In addition, preparing cross-linked enzyme aggregates (CLEAs) was preliminarily proposed by Kallenberg et al. in a review as a potential method to recover enzymes from ATPS [20], which inspired us to explore, for the first time, the feasibility of preparing CLEAs from the PEG phase.
Sample preparation
45 g of spray-dried latex powder was dissolved in 250 ml of 20 mM cysteine buffer (containing 1 mM EDTA, pH 5.7) at 4°C. The resulting suspension was centrifuged (20,000×g, 4°C, 15 min). The supernatant (approximately 45 mg/ml), used as the starting enzyme solution for ATPS, was diluted to different protein concentrations.
Aqueous two-phase systems preparation
Aqueous two-phase systems were prepared in graduated tubes with 4 g of enzyme solution plus various amounts of PEG (4000 or 6000), salt solution (40% w/w phosphate or 40% w/w (NH4)2SO4) and deionized water to reach a total weight of 10 g. The phosphate solution was prepared using K2HPO4 and NaH2PO4, as they display greater solubility than their respective monobasic and dibasic salts [21]. To achieve a given pH value, different ratios of 40% (w/w) monobasic and dibasic salt solutions were mixed. The pH of enzyme solutions was adjusted with 6 M HCl or NaOH. All system components were thoroughly mixed in orbital shakers at 4 or 20°C for 2 h. To ensure complete phase separation, the systems were centrifuged at 10,000×g for 15 min at the respective temperature.
Phase volumes were measured, and aliquots of the phases were taken to determine protein concentration and activity. The presence of papain was verified by basic protein native-PAGE and FPLC. Phase composition was determined using the phase diagrams reported by Albertsson [22].
Experimental design for ATPS process
2.4.1. Screening of important factors. To screen the important factors, a Plackett-Burman (P-B) design was adopted. The P-B design is an efficient way to screen the important factors among a large number of variables, which are studied at two widely spaced levels (the low level (−1) and the high level (+1)). Table 1 shows the design matrix covering seven variables, used to evaluate their effects, plus two dummy variables.
2.4.2. Optimization of screened components. Central composite design (CCD) was employed to determine the optimal conditions of the three most significant factors identified by the P-B design. Each variable was studied at five levels, with six star points and six replicates at the centre point; 20 experiments were required for this procedure. The alpha value was set to 2. Table 2 shows the CCD design matrix and the responses for papain purity and total activity of the PEG phase.
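For reference, the coded design matrix of such a CCD (k = 3 factors, α = 2, six centre replicates, giving 8 + 6 + 6 = 20 runs as in Table 2) can be generated as below; mapping the coded levels to physical concentrations and pH is a separate step:

```python
import numpy as np
from itertools import product

def central_composite(k=3, alpha=2.0, n_center=6):
    """Coded CCD matrix: 2^k factorial points, 2k star points at
    +/- alpha, and n_center replicated centre points."""
    factorial = np.array(list(product([-1.0, 1.0], repeat=k)))
    star = np.vstack([sign * alpha * np.eye(k)[i]
                      for i in range(k) for sign in (-1.0, 1.0)])
    return np.vstack([factorial, star, np.zeros((n_center, k))])

design = central_composite()   # shape (20, 3)
```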
Statistical design and analysis were performed using Design-Expert software (version 7.1.6, Stat-Ease, Inc., Minneapolis, MN, USA).
Determination of protein content
The protein content of the samples was determined by the Bradford method using bovine serum albumin (BSA) as the standard [23].
Enzyme assay for amidase activity
The amidase activity of the samples was measured using DL-BAPNA as the substrate [24,25]. The substrate stock solution was 10 mM DL-BAPNA in DMSO. The activity buffer (pH 6.8) contained citrate-borate-phosphate (100 mM each), 2.5 mM DTT and 1 mM EDTA.
x ml of enzyme solution was incubated with (1.8 − x) ml of activity buffer at 37°C for 15 min. Then 0.2 ml of substrate pre-incubated at 37°C was added to start the reaction. After 15 min, the reaction was stopped with 0.5 ml of 50% acetic acid. When immobilized enzymes or CLEAs were used, x g of the enzyme was incubated with 1.8 ml of activity buffer. The release of p-nitroaniline was determined spectrophotometrically at 410 nm using ε410 = 8800 M⁻¹ cm⁻¹. x was chosen so that ΔA410 never exceeded 1.0 after 15 min [26,27]. One unit of activity (nkat) is the amount of proteinase (free, immobilized or CLEAs) that hydrolyses one nmol of substrate per second under the above-mentioned conditions.
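The activity follows from the Beer-Lambert law. A sketch, assuming a 1 cm light path and quantifying p-nitroaniline in the final assay volume (1.8 + 0.2 + 0.5 = 2.5 ml); adjust if the stop-solution dilution is treated differently:

```python
EPS_410 = 8800.0   # M^-1 cm^-1, p-nitroaniline at 410 nm
PATH_CM = 1.0      # assumed cuvette path length

def amidase_activity_nkat(delta_A410, v_total_ml=2.5, t_s=900.0):
    """Activity in nkat = nmol p-nitroaniline released per second."""
    conc_M = delta_A410 / (EPS_410 * PATH_CM)       # Beer-Lambert
    nmol = conc_M * (v_total_ml / 1000.0) * 1e9     # mol -> nmol in assay volume
    return nmol / t_s
```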
Purity analysis by fast protein liquid chromatography (FPLC)
The purity of the purified papain was evaluated by ion-exchange chromatography on an FPLC system (AKTA Explorer 100, Amersham Biosciences, Uppsala, Sweden). Chromatographic separations were achieved on a HiTrap™ SP-FF (1 ml) column. All top PEG-phase samples were diluted to 1 mg/ml for FPLC.
Mobile phase A (50 mM NaAc buffer, pH 5.0) and mobile phase B (50 mM NaAc, 1 M NaCl, pH 5.0) were used for FPLC. The mobile phases were filtered prior to use. The sample (1 ml) was loaded onto the column pre-equilibrated with phase A, and the chromatographic separation was carried out using a gradient (5-50 column volumes, phase B from 0% to 70%; 50-60 column volumes, held at 70% phase B) at a flow rate of 1.0 ml/min. The UV-900 detector was set at 280 nm to measure the proteins' aromatic residues. The elution peak of papain was confirmed with standard papain and by reference to published works [24,28]. The peak areas of papain and the other proteins were obtained from an automatic integrator. The purity of papain was specified as the percentage peak area of papain with respect to the total peak area.
Basic protein native-polyacrylamide gel electrophoresis (PAGE)
The experiment was carried out according to the method of Reisfeld et al. [29] and Dekeyser et al. [25] with some modification. The stacking gel consisted of 5% polyacrylamide (pH 6.8), and the resolving gel consisted of 15% polyacrylamide (pH 4.3). The electrode buffers in the upper and lower chambers consisted of 0.35 M β-alanine-0.14 M acetic acid (pH 4.5). The protein sample was diluted (1:1, v/v) prior to electrophoresis.

Table 2. Design matrix for optimization of papain purity and total activity using CCD.
In situ immobilization of papain from PEG phase
In the ATPS, papain was enriched in the top PEG phase but still mixed with PEG. It is therefore important to further recover papain from the PEG phase. An "in situ" enzyme immobilization method, comprising carrier-bound and carrier-free (CLEA) immobilization, was assessed for this purpose in this work. The ATPS used here consisted of 40% (w/w) 15 mg/ml enzyme solution, 14.33% (w/w) PEG 6000 and 14.27% (w/w) NaH2PO4/K2HPO4 at pH 5.77 and 20°C.
2.9.1. Activation of aminated supports. The supports (ZH-HA, LH-HA and BB-A) were activated as follows: 3 g of support was incubated with 12 ml of 0.1 M potassium phosphate buffer (pH 8.0), stirred (200 rpm) in an orbital shaker for 1 h with the pH maintained between 7.8 and 8.2. The support was then filtered and added to 12 ml of 2% (w/v) glutaraldehyde in 0.02 M potassium phosphate buffer (pH 8.0), and stirred (200 rpm) at 25°C for 1 h. The activated support was thoroughly rinsed with deionized water and stored at 4°C (used within 24 h).
2.9.2. Preparation and activation of chitosan beads. Chitosan beads were prepared according to reported methods with some modification [30,31]. 2 g of chitosan powder was added to 200 ml of 1.5% (v/v) acetic acid solution (70-80°C). The resulting 1% (w/v) chitosan solution was dropped through a syringe into a gently stirred 1 M NaOH-30% (v/v) methanol solution at room temperature. Beads of 2.5-3.0 mm diameter with uniform shape were selected and immediately washed with plenty of deionized water until the solution became neutral, then stored in water at 4°C until activation. 10 g of chitosan beads were added to 40 ml of 2% (w/v) glutaraldehyde in 0.02 M potassium phosphate buffer (pH 8.0) and stirred (200 rpm) in an orbital shaker at 25°C for 5 h. The activated beads were thoroughly rinsed with deionized water and stored in water at 4°C (used within 24 h).
2.9.3. Immobilization of papain onto activated supports. Generally, supports of different weights were added to 5 ml of enzyme solution (the PEG phase from the ATPS) in a 25 ml screw-capped glass vial, and the mixture was stirred at 25 °C and 200 rpm in an orbital shaker. The protein concentration of the supernatant was determined at intervals. The immobilized enzyme particles were first washed with deionized water, then rinsed with 1 M NaCl solution (prepared with 0.02 M potassium phosphate buffer, pH 7.0), and finally washed thoroughly with 0.02 M potassium phosphate buffer (pH 7.0). The immobilized enzymes were assayed for activity and stored at 4 °C. The immobilization yield and activity recovery were calculated according to [32].
2.9.4. Preparation of CLEAs. Precipitant screening: 450 µl of precipitant (acetone, acetonitrile, DMSO, dioxane, ethanol, propanol, iso-propanol, butanol, pentanol, or hexanol) was added to 50 µl of enzyme solution (the PEG phase from the ATPS). Precipitation was allowed to proceed for 15 min at 4 °C. The mixture was then centrifuged (12,000 rpm, Eppendorf 5415D) and the precipitates were redissolved in 500 µl of activity buffer. The activity of the redissolved precipitates was measured. The appropriate ratio of precipitant to enzyme solution was also investigated. CLEAs preparation: The pilot assays yielded optimal enzyme precipitation when propanol was used at a precipitant/enzyme solution ratio of 4:1; accordingly, 0.8 ml of propanol was added to 0.2 ml of enzyme solution. The mixture was allowed to precipitate for 15 min at 4 °C. Then, an appropriate amount of glutaraldehyde solution (25%, w/v) was added to the suspension to attain the desired concentration (0.2%, 0.5%, 1%, or 2%), and the mixture was stirred at 25 °C and 200 rpm for 2 h. After cross-linking, the cross-linked aggregates were quenched with a 9-fold volume of activity buffer. A sample (A) containing the CLEAs as well as residual free enzyme was withdrawn from the suspension and assayed for activity. Then, the CLEAs were centrifuged off (20,000×g, 15 min), and the supernatant containing only free enzyme was withdrawn as sample (B). The difference in activity between samples A and B was taken as the CLEAs activity.
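The activity bookkeeping in this A/B scheme is a plain subtraction; a minimal sketch follows, with the activity readings as hypothetical placeholders.

```python
# CLEAs activity by difference: sample A = CLEAs + residual free enzyme,
# sample B = supernatant (free enzyme only) after pelleting the CLEAs.
def cleas_activity(activity_a: float, activity_b: float) -> float:
    """Both activities in the same units (e.g., nkat per ml of suspension)."""
    return activity_a - activity_b

# Hypothetical assay readings:
print(cleas_activity(activity_a=12.8, activity_b=4.3))  # -> 8.5
```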
The pilot assays yielded optimally active CLEAs when propanol was used at a 4:1 ratio with a 2 h cross-linking period at 0.5% glutaraldehyde. To scale up the CLEAs production, an initial 20 ml of enzyme solution was used. At the end of the cross-linking period, the entire suspension was centrifuged at 20,000×g and 4 °C for 15 min. The collected CLEAs were washed three times with deionized water. Finally, the CLEAs preparation was lyophilized to obtain a dry powder.
Plackett-Burman screening
Based on earlier published reports [7,12] and our preliminary tests, seven factors were considered in the P-B design (Table 1). Analysis of the experimental data (taking both papain purity and activity recovery into account) showed that three variables, namely PEG concentration, salt concentration, and pH, had significant effects (data not shown). The preferred ATPS used an initial protein concentration of 15 mg/ml, PEG 6000, NaH2PO4/K2HPO4, and 20 °C.
Optimization of screened factors
Central composite design (CCD) was employed to optimize the three most significant factors identified by the P-B design (PEG concentration, C_PEG (%); salt concentration, C_salt (%); and pH) to enhance the responses of papain purity (P_PAP, %) and total activity of the PEG phase (A_TOP, nkat). The three variables were studied at five levels, and a set of 20 experiments was carried out (Table 2).
The responses P_PAP and A_TOP were best fitted by second-order polynomial equations (see the generic form after this paragraph). Both models were verified using ANOVA (see Supplementary Tables S1 and S2). The regression model was determined by the Design Expert procedure, which initially considered all the factors and then eliminated those having no effect step by step. The significance of each term in the model was evaluated by its corresponding P value: a value less than 0.05 indicated that the term was significant, whereas a value greater than 0.1 indicated that it was not. The large F values (45.56 for P_PAP and 89.11 for A_TOP) and very low P values (<0.0001 for both P_PAP and A_TOP) indicated that both models were significant at a high confidence level. The lack-of-fit values (1.22 for P_PAP and 3.68 for A_TOP) were not significant with respect to their corresponding pure errors, which showed that both models could be used to evaluate the responses. Furthermore, the fitness of the models was assessed by the determination coefficient (R²). The adjusted R² values (0.93 for P_PAP and 0.97 for A_TOP), indicating that more than 90% of the variation was explained by the variables in the models, were in reasonable agreement with the predicted R² values (0.87 for P_PAP and 0.92 for A_TOP). The high R² values (0.95 for P_PAP and 0.98 for A_TOP) indicated good agreement between predicted and experimental values.
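For reference, the fitted models take the generic second-order RSM form in the coded variables; the specific coefficient values from the original regression are not reproduced in this text:

$$Y = \beta_0 + \sum_{i=1}^{3}\beta_i x_i + \sum_{i=1}^{3}\beta_{ii} x_i^2 + \sum_{i<j}\beta_{ij} x_i x_j,$$

where $Y$ stands for $P_{PAP}$ or $A_{TOP}$; $x_1$, $x_2$, and $x_3$ are the coded levels of $C_{PEG}$, $C_{salt}$, and pH; and the $\beta$ terms are regression coefficients estimated by the Design Expert procedure.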
The criteria for the numerical solutions were set by targeting maximum P_PAP and A_TOP with different importance values, while the other variables were kept within their ranges. The predicted solutions and the experimental results are shown in Table 3 (Runs 1-4). The higher the papain purity obtained, the lower the total activity observed in the PEG phase. The results in Table 3 (Runs 1-4) clearly indicate that the optimization was effective for purifying papain using ATPS. The native-PAGE (Figure 1) also confirmed that papain was extracted into the PEG phase.
Table 3. Constraints targeting both P_PAP and A_TOP and the solutions according to the model.
Figure 1. Lanes 1a-4a, corresponding to the run numbers of Table 3, represent the PEG phase of the ATPS; lanes 1b-4b, corresponding to the run numbers of Table 3, represent the salt phase of the ATPS. Crude papain: spray-dried latex powders; standard papain: 2× crystallized papain. All samples were loaded at 10 µg. doi:10.1371/journal.pone.0015168.g001
The proteins (often called crude papain) in the spray-dried latex powders were separated into five bands on the electrophoresis gel. One of these proteins was identified as papain by comparison with standard papain, and the other protein bands were identified from published reports [6,26,33]. The electrophoresis patterns indicated that the purity of the obtained papain improved with increasing C_PEG, C_salt, and pH (i.e., with increasing importance value of P_PAP), and that all the purified papain was purer than the purest commercially available preparation, obtained by 2× crystallization.
To confirm whether the optimum operating conditions established for the PEG/phosphate system could indeed deliver the desired outcome at larger scale, validation experiments (Table 3, Runs 5-6) were performed using a 200 g ATPS; the results were consistent with those obtained in the smaller (10 g) system. The optimum operating conditions for purifying papain in the ATPS can therefore be summarized as: 40% (w/w) of 15 mg/ml enzyme solution, 14.33-17.65% (w/w) PEG 6000, 14.27-14.42% (w/w) NaH2PO4/K2HPO4, and pH 5.77-6.30 at 20 °C. The purity of the papain obtained ranged from 96% to 100%.
In situ immobilization of papain from PEG phase on aminated supports
The immobilization of enzymes on glutaraldehyde-preactivated supports is quite simple and efficient, and in some instances even improves enzyme stability through multipoint or multisubunit immobilization. In general, the immobilization of an enzyme on preactivated aminated supports follows a two-step mechanism: first, a rapid, modest ionic-exchange adsorption of the enzyme onto the support; second, the covalent reaction between the adsorbed enzyme and the activated groups on the support [34]. It is therefore important to know when the enzyme is adsorbed onto the support and when the immobilization is finished, i.e., to clarify the time course of immobilization. Figure 2A presents the immobilization yield and activity recovery of papain versus immobilization time. In the first 12 h, papain was quickly adsorbed onto the surface of the supports and the activity recovery increased rapidly. After 12 h, the increase in immobilization yield and activity recovery slowed down because proteins diffused slowly into the pores of the supports and reacted with the activated groups inside. The immobilization on ZH-HA finished at 24 h, with an immobilization yield of 90.2% and an activity recovery of 52.5%. The immobilization on LH-HA and BB-A finished at 36 h; the immobilization yields of these two supports (more than 90%) were almost the same as for ZH-HA, but the activity recoveries were only 38.9% and 28.2%, respectively. The appropriate support/enzyme solution ratio (g/ml) was also investigated. The results showed that the best ratios for ZH-HA, LH-HA, and BB-A were 0.3/5, 0.5/5, and 0.5/5, respectively (Figure 2B).
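For reference, the immobilization yield and activity recovery percentages reported here follow the standard definitions; since the original expressions (per [32]) are not reproduced in this text, the following is a plausible reconstruction, with $C_0$, $V_0$ the initial protein concentration and volume of the enzyme solution and $C_s$, $V_s$ those of the supernatant after immobilization:

$$\text{Immobilization yield } (\%) = \frac{C_0 V_0 - C_s V_s}{C_0 V_0} \times 100,$$

$$\text{Activity recovery } (\%) = \frac{\text{activity of the immobilized enzyme}}{\text{activity of the free enzyme loaded}} \times 100.$$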
In situ immobilization of papain from PEG phase on chitosan beads
The mechanism of immobilizing papain onto chitosan beads (CH) is similar to that on the aminated supports. The immobilization yield and activity recovery of papain versus immobilization time are presented in Figure 3A. As shown, the immobilization on chitosan beads finished at 36 h, by which time the maximum immobilization yield and activity recovery had been achieved. The optimal ratio of chitosan beads to enzyme solution was also investigated (Figure 3B): at the best ratio of 1.2/5 (g/ml), the immobilization yield and activity recovery reached 90.4% and 40.3%, respectively.
In this work, we also tested the in situ immobilization of papain on epoxy supports such as Eupergit C, Amerzyme, and LH-EP. Unfortunately, the immobilization yield and activity recovery for all the epoxy supports were very low (data not shown). There may be three reasons for this: (1) the low ionic strength of the PEG phase could not promote adsorption of the enzyme onto the highly hydrophobic surface of the epoxy supports; (2) the pH of the PEG phase was acidic, whereas the covalent reaction between the adsorbed enzyme and the activated groups on the supports is promoted at alkaline pH; and (3) the epoxy groups on the supports might react with the thiol group in the active site of papain and thus inactivate it [35].
In situ immobilization of papain from the PEG phase not only achieved the separation of papain from PEG and avoided additional purification steps, but also opened the door to reusing the phase-forming polymer (PEG). After in situ immobilization and filtering out the supports, the top phase can be mixed with the bottom phase to re-form the ATPS, which can then be used for further purification of papain [17].
Preparation of CLEAs from PEG phase
CLEAs preparation consists of two steps: aggregation by precipitation, and cross-linking. Precipitation by the addition of salts, organic solvents, or nonionic polymers to an enzyme solution is a commonly used method for enzyme purification [32]. The resulting physical aggregates of enzyme molecules are supramolecular structures held together by non-covalent bonding, and they can easily be redissolved in water. Cross-linking produces insoluble CLEAs in which the structural properties and catalytic activities of the enzyme are maintained. Owing to the different biochemical and structural properties of enzymes, the best precipitant and cross-linker can vary from one enzyme to another [36]. Our work precipitated the purified papain from the PEG phase and then cross-linked the aggregates with glutaraldehyde. In the precipitant screening, propanol was found to generate solid enzyme aggregates with almost 120% activity upon resolubilization through dilution of the precipitant (Figure 4A). This hyperactivation is thought to originate from conformational changes of the protein induced by the aggregated state [37]. A similar phenomenon was observed by R. Schoevaart et al. [32]. We also found that the optimal ratio of propanol to enzyme solution for completely precipitating papain was 4/1 (v/v).
R. Schoevaart et al. reported that temperature had little effect on precipitation and that, at room temperature, generally no further cross-linking was observed after 3 h [32]. We therefore carried out cross-linking at 25 °C and quenched the reaction after 2 h. Glutaraldehyde was chosen as the cross-linker because it is inexpensive and readily available in quantity. In preparing the CLEAs, the glutaraldehyde concentration had to be optimized: with too little cross-linker, the enzyme molecules may remain too flexible, whereas too much cross-linker can eliminate the minimum flexibility needed for enzyme activity [36]. Figure 4B presents the CLEAs activity after cross-linking at different glutaraldehyde concentrations. As shown, the CLEAs reached maximum activity at 0.5% glutaraldehyde.
To test the validity of the parameters found in the small-scale pilot assays of CLEAs preparation, we scaled up the procedure 100-fold. The final CLEAs product was lyophilized to obtain a dry powder. The dried CLEAs had a hydrolytic activity one to two orders of magnitude higher than those of the carrier-bound immobilized papain preparations (360.0 nkat/g for CLEAs versus 27.5 nkat/g for ZH-HA, 16.9 nkat/g for LH-HA, 10.9 nkat/g for BB-A, and 5.0 nkat/g for CH). This is because a distinct disadvantage of carrier-bound enzymes, whether bound to or encapsulated in a carrier, is the dilution of catalytic activity caused by the introduction of a large proportion of non-catalytic mass, generally ranging from 90% to more than 99% of the total mass. This inevitably leads to lower volumetric and space-time yields and lower catalyst productivity. CLEAs do not suffer from this disadvantage, because the molecular weight of the cross-linker is negligible compared with that of the enzyme [38]. This was also confirmed by scanning electron microscopy of the papain CLEAs (Fig. 5): the CLEAs had large open channels and a loose structure, which can overcome the diffusion limitation often observed in carrier-bound immobilization [20,39].
Conclusions
The feasibility of using ATPS for the purification of papain from spray-dried papaya latex, followed by enzyme immobilization, was demonstrated in this paper. RSM was used to optimize the ATPS process. The optimum process conditions were 40% (w/w) of 15 mg/ml enzyme solution, 14.33-17.65% (w/w) PEG 6000, 14.27-14.42% (w/w) NaH2PO4/K2HPO4, and pH 5.77-6.30 at 20 °C. The purity of papain reached 96-100%. In situ immobilization of papain from the PEG phase resulted in very high immobilization yields (>90% for all supports except ZH-HA) and reasonable activity recoveries (43.3% for ZH-HA, 38.9% for LH-HA, 28.2% for BB-A, and 40.3% for CH). Moreover, preparation of CLEAs was realized to recover papain from the PEG phase for the first time, and the obtained CLEAs had a hydrolytic activity one to two orders of magnitude higher than those of the carrier-bound immobilized papain preparations. Supporting Information: Table S1. ANOVA for papain purity in CCD. (DOC) | 2014-10-01T00:00:00.000Z | 2010-12-13T00:00:00.000 | {
"year": 2010,
"sha1": "303ba05e1e02b50a9e0014e72600d6c11fd0bfa1",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0015168&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "303ba05e1e02b50a9e0014e72600d6c11fd0bfa1",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
254650785 | pes2o/s2orc | v3-fos-license | Bioinspired Legged Robot Design via Blended Physical and Virtual Impedance Control
In order to approach the performance of biological locomotion in legged robots, better integration between body design and control is required. In that respect, understanding the mechanics and control of human locomotion will help us build legged robots with comparably efficient performance. From another perspective, developing bioinspired robots can also improve our understanding of human locomotion. In this work, we create a bioinspired robot with blended physical and virtual impedance control to configure the robot's mechatronic setup. We consider human neural control and the musculoskeletal system a blueprint for a hopping robot. The hybrid electric-pneumatic actuator (EPA) presents an artificial copy of this biological system with which to implement the blended control. By defining efficacy as a metric that encompasses both performance and efficiency, we demonstrate that incorporating a simple force-based control alongside constant-pressure pneumatic artificial muscles (PAMs) can increase the efficacy by up to 21% in simulations and 7% in experiments with the 2-segmented EPA-hopper robot. Also, we show that with proper adjustment of the force-based controller and the PAMs, the efficacy can be further increased to 41%. Finally, experimental results with the 3-segmented EPA-hopper robot and comparisons with human hopping confirm the extendability of the proposed methods to more complex robots.
that body mechanics and neural control are inextricably linked together. The human body comprises hundreds of muscle-tendon complexes (MTCs), each a high-performance unit with neuromechanical control. The impedance formulated in the force-length and force-velocity relationships [3] governing these MTCs can be varied by changes in the physical properties of the muscle (e.g., muscle thickness, tendon stiffness) or tuned by the activation signals coming from the central nervous system [4]. This neuromuscular system leverages the biological actuators by physically or virtually adapting their impedance.
Inspired by the functional performance and neuromechanical control of biological muscles [5], appropriate design of the physical body dynamics and the controller can largely enhance locomotion performance [6]. By adding compliance to the body of a given robot, or by adapting the existing compliant elements in the body, part of the locomotion control problem can be shifted from the brain to the body dynamics [7,8]. The use of the body as a computational resource in conjunction with the brain is a concept generally known as control embodiment [9]. Control embodiment is quite advantageous for reducing control effort [10], minimizing energy consumption, protecting motors from impacts, decreasing peak motor power requirements, and reducing the amount of required sensor data [11-13]. Compliant elements come in different forms, with fixed or variable stiffness, and in different configurations, mainly categorized as series elastic actuators (SEAs) [1,14], parallel elastic actuators (PEAs) [15-18], and their combination (SPEAs) [19,20]. Numerous studies have already applied such combinations to improve the efficiency or robustness of legged robot locomotion [10-12, 21, 22]. The search for the best arrangement among these actuators showed that, for each specific task, one can be preferred over the others [23,24], meaning that there is still no single winning design. Moreover, adaptable compliance, as found in biological systems, provides further significant advantages over traditional actuation for legged robots [6,14,25,26]. However, despite the significant progress in the development of variable impedance actuators (VIAs) in recent years, they are still not comparable with their biological counterparts regarding efficient performance across a wide range of tasks and motion conditions [27].
As an alternative to the aforementioned elastic actuators, in [28] we suggested combining pneumatic artificial muscles (PAMs) and electric motors (EMs) in the EPA (electric-pneumatic actuator). This novel hybrid actuator provides direct access for virtual and physical impedance adjustment in robots. On the one hand, PAMs are the closest actuators to biological muscles [29,30], and they can be considered cheap and reconfigurable [31] physical impedances. Because of their high power-to-weight ratio, this adjustable physical impedance is advantageous for periodic movements such as legged locomotion [32]. On the other hand, EMs are suitable actuators for precise control and for implementing virtual impedance control. Different arrangements (SEA, PEA, SPEA), and even the implementation of multiarticular coupling using PAMs, provide the flexibility the EPA needs to benefit from morphological computation and control embodiment. Within this hybrid design, different features of legged locomotion can be optimized; e.g., the addition of parallel PAMs to EMs can make the system more efficient (compared to EMs alone) and more robust against impacts [16,33].
In this work, we develop two EPA-based robots, namely EPA-Hopper-I and -II, based on the blended physical and virtual impedance control concept. In the knee joint of these robots, the PAM pressure tunes the physical impedance, while the EM controls the virtual knee impedance. Inspired by human motor control [34,35], the leg force (equivalently, the ground reaction force, GRF) can serve as a feedback signal to tune muscle activation and, consequently, the virtual impedance. With this insight, we previously developed force-modulated compliance (FMC) control methods that adjust the stiffness (impedance) of the hip and ankle to control a variety of gaits [36,37]. More recently, we applied FMC to the knee joint of the 1D MARCO-Hopper II robot and showed that this controller can generate stable hopping patterns [38]. Similar to the virtual model control framework [39,40], here we consider a virtual knee spring whose stiffness is modulated by the GRF. The simple design of this control method and its extendability to other robots (e.g., with more degrees of freedom) are the key features of FMC investigated in this study. The contribution of this paper is threefold: 1) verifying the applicability of a bioinspired GRF-based control (FMC) on a simple segmented hopper robot and its extendability to an anthropomorphic hopper robot; 2) investigating the benefits of tuning physical and virtual impedance in terms of efficiency and performance; and 3) introducing a modified version of the FMC controller and harnessing the potential of PAMs to achieve higher efficiency, performance, and human-like motion behavior. In the following, we describe how the blended impedance control is developed and applied to the EPA-hopper robots, analyze the effects of the virtual and physical impedance on efficiency and performance, and finally compare the outcomes with human hopping.
Methods
The first part of our blended control scheme is the physical impedance control, addressed in the following via the EPA actuation system. We then explain the second part by introducing the force-modulated compliant knee (FMCK) as the bioinspired virtual impedance control approach. We also present a measure for evaluating the performance and efficiency of the controlled movement, followed by human hopping experiments that serve as a reference.
Physical Compliance Control with EPA
in cyclic tasks. PAMs can also be installed in a parallel configuration with the EM to reduce power or torque requirements, provided that they are appropriately tuned. In another possible arrangement, a PAM can cross two joints of an articulated robot, acting as an energy exchanger. The biarticular arrangement is advantageous for increasing energy efficiency [41] and simplifying the control task [14]. In Fig. 1, a general schematic of these arrangements is depicted for a three-segmented mechanism (e.g., a two-segmented leg and a trunk). A specific combination of the three arrangements (shown in different colors) can provide an optimal solution for different applications. Each PAM can be used as an adjustable compliant element or as an individual actuator. This hybrid actuation system can be applied to any legged robot.
In the following, we first describe the design of EPA-Hopper-I and then its extension to the 3-segmented leg in EPA-Hopper-II.
Fig. 1 General schematic of Electric-Pneumatic Actuation showing different possible arrangements of electric motors (M) and PAMs in a three-segmented mechanism. Here, the green, red, and blue colors represent series, mono-articular, and bi-articular PAMs, respectively.
EPA-Hopper-I
To concentrate on the blended impedance control concept, we select only two mono-articular PAMs (a knee extensor and a knee flexor) from the general arrangement shown in Fig. 1. These two McKibben PAMs are placed in a parallel configuration and act antagonistically on the knee joint; see Fig. 2a. This robotic leg comprises two actuated degrees of freedom (DoF) moving in 2D, with the hip joint constrained to move only in the vertical direction to exclude body posture control. Figures 2b and 2c show the robot and its detailed simulation model.
The thigh and shank segments of the leg are made of hollow lightweight carbon-fiber tubes to keep the weight and moment of inertia low. Most other mechanical parts, except the bearings, are 3D printed from PLA and ABS thermoplastics. In this setup, the leg is equipped with two electric motors and two hand-made PAMs. The two brushless direct-current (BLDC) motors (HYmotor E8318-120kV) are located co-axially at the hip to minimize the leg's moment of inertia. The first actuates the hip joint directly, and the other drives the knee joint via a rope-pulley transmission with a moment-arm ratio of 1:5. Using a rope-and-pulley system instead of a gearbox helps avoid friction and high mechanical stiffness in the transmission chain. Therefore, the direct drive for the hip and the quasi-direct drive for the knee ensure transparency between the motors and the environment [42]. This setup facilitates torque-control implementation using motor current sensing. The electric motors are also equipped with current sensors and AMT10-series incremental encoders for position measurement. Each PAM on the robot operates with two continuous valves (PVQ-series proportional solenoid valves) for supplying and exhausting air. The air pressure is provided by a JUN-AIR (Quiet Air 6-15) compressor. PSE530 sensors were used to control the PAM pressure.
EPA-Hopper-II
To develop EPA-Hopper-II (Fig. 2d), we extended EPA-Hopper-I with a 3D printed foot as well as an ankle extensor PAM and an ankle flexor spring, mimicking the Soleus (SOL) and Tibialis Anterior (TA) muscles, respectively. As the ankle joint in humans has been shown to behave more elastically than the knee and hip [43], no electric motor was used for this joint; it is passively actuated by the SOL PAM and the TA spring. The compliant curved foot is designed to resemble the shape of the human foot. A rubber sheet is also attached beneath the foot to further absorb the shock during initial ground contact. The SOL-like PAM is located between the heel and the top of the shank to support ankle extension passively. In both setups, a lithium-polymer battery provides the power source, delivering high peak currents to the motor drivers. For measuring the GRF during hopping, a force plate is placed under the leg. The kinematics are measured using three motion-capture cameras. Finally, with the xPC Target of MATLAB, we control the robot using Simulink in real time and collect data at 1 kHz.
To ensure safety while conducting experiments with the robot, multiple safety measures are implemented in the xPC Target environment. The desired motor currents for both hip and knee are saturated at a maximum of 50 A. The knee and hip angles are also limited to a range that yields a reasonable motion output. On top of these measures, a kill switch allows the electronics to be shut off manually.
Simulation Model
To develop simulation models of the EPA-Hopper robots that best match the experimental setups, we imported the 3D CAD-designed parts of the leg, along with their inertia properties, directly into the SimMechanics environment of MATLAB. Moreover, by means of a 1-DoF prismatic joint used as a guide, we limited the leg motion to the vertical direction; see Fig. 2c. For modeling the ground and contact forces, we utilized the Simscape Multibody Contact Forces Library with friction and a nonlinear force law. The parameters of the simulated ground (e.g., stiffness, damping, and maximum penetration for full damping) and the friction values of the joints were all identified in dedicated experiments to further match the leg model to the physical robot. Table 1 summarizes the parameters of the simulation models of the EPA-Hopper robots.
To incorporate the PAMs into the simulations, we modeled them as prismatic actuators acting antagonistically on the knee joint. The output forces generated by these actuators are predicted with a dynamical muscle-like model, which our previous work showed to predict the actual dynamic behavior of the PAMs with acceptable precision [29]. Our biological model of the PAM consists of a contractile element in parallel with a compliant component:

$$F_{PAM} = \bar{F}\,\big[\,P\, f_{la}(l)\, f_v(v) + f_{lp}(l)\,\big],\qquad (1)$$

where $P$ is the instantaneous pressure inside the PAM, playing the role of muscle activation, and $\bar{F}$ is a constant corresponding to the maximum isometric force in the Hill-type muscle model. The two polynomial functions $f_{la}$ and $f_{lp}$ describe the dependency of the PAM force on its length $l$, representing the (active and passive) force-length patterns of biological muscles. Finally, $f_v$ is a linear function of the velocity $v$, defined as the rate of change of the PAM length. For more details on the PAM modeling and identification, please refer to [29].
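As an illustration, below is a minimal numerical sketch of such a muscle-like PAM model. The structure follows the Hill-type decomposition above, but the polynomial shapes, the maximum isometric force, and the normalizing pressure are hypothetical placeholders for the identified values in [29].

```python
import numpy as np

# Hypothetical parameters standing in for the identified values in [29].
F_MAX = 1500.0   # N, maximum isometric force (placeholder)
P_MAX = 0.6e6    # Pa, pressure used to normalize the activation (assumed)

def f_la(l: float) -> float:
    """Active force-length factor (placeholder polynomial, peak near l = 0.2 m)."""
    return max(0.0, 1.0 - 25.0 * (l - 0.2) ** 2)

def f_lp(l: float) -> float:
    """Passive force-length factor (placeholder, engages beyond l = 0.21 m)."""
    return 4.0 * max(0.0, l - 0.21)

def f_v(v: float) -> float:
    """Linear force-velocity factor (placeholder slope)."""
    return max(0.0, 1.0 - 2.0 * v)

def pam_force(P: float, l: float, v: float) -> float:
    """Hill-type PAM force: contractile element in parallel with a compliant one."""
    activation = float(np.clip(P / P_MAX, 0.0, 1.0))
    return F_MAX * (activation * f_la(l) * f_v(v) + f_lp(l))

print(pam_force(P=0.6e6, l=0.2, v=0.0))  # isometric force at optimal length -> 1500.0
```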
In the EPA-Hopper-I simulation model, we considered a pair of mono-articular PAMs, one extensor and one flexor, placed on the knee joint, with lengths of 22 cm and 16 cm, respectively. The parameters used to model these PAMs were identified in separate experiments, as explained in [29].
Virtual Compliance Control with FMCK
Achieving a stable bouncing pattern, or hopping in place, while experiencing consecutive ground impacts and other possible disturbances is a challenging problem [10]. It was shown that, by injecting a predetermined amount of energy greater than the system losses during a hopping cycle, an actuated prismatic leg (MARCO-Hopper) can generate stable hopping [44]. Even a simple feed-forward controller with minimal sensory information can achieve stable hopping with a segmented leg and sufficient robustness against moderate perturbations [45]. However, it requires a precise set of parameters to function, which in turn makes it sensitive to changes, uncertainties, and large disturbances. Feedback controllers overcome these drawbacks by incorporating additional sensory information. Among feedback control methods, virtual model control (VMC) [39] is an exemplary approach for robust hopping with a tunable hopping height, achieved simply by emulating a virtual spring that mimics human leg behavior in hopping [40]. Faster recovery from ground-level perturbations counts as one advantage of VMC in comparison to feed-forward approaches.
In a biomechanical study, Geyer et al. attempted to mimic the human reflex system, in which muscle length, velocity, or force is used for proprioceptive feedback [46]. They showed that positive force and length feedback of an extensor muscle in a two-segmented leg model can result in periodic hopping [46], with force feedback having further advantages regarding improved performance and the reproduction of human-like elastic leg behavior. Later, this positive force feedback was further analyzed with respect to muscle properties [47] and the sensor-motor-map concept [48]. Recently, the applicability of the neuromuscular reflex controller was tested in both single- and two-legged robots [49]. The combination of feed-forward and feedback signals for hopping control was also investigated in [50]. It was found that this combination improves hopping stability and recovery from perturbations, thanks to the nonlinear Hill-type representation of intrinsic muscle properties. However, the performance of these methods relies on the level of detail considered in implementing the controller (i.e., the nonlinear force-length-velocity relationships). Moreover, finding the right parameters for these controllers and tuning them demands an exhaustive search [46-48,51].
Inspired by the positive force feedback concept [46], here we introduce the force-modulated compliant knee (FMCK) method to control the EPA robots. Similar to our previous FMCH [36] and FMCA [52] control methods, which tune hip and ankle compliance respectively, here we use the ground reaction force (GRF) to modulate the knee compliance. In the FMCK control method, the knee torque τ_k is given by an adjustable spring equation:

$$\tau_k = C \cdot GRF \cdot (\varphi_k - \varphi_{k0}),\qquad (2)$$

where $GRF$, $\varphi_k$, $\varphi_{k0}$, and $C$ are the ground reaction force, the current knee angle with respect to the thigh, the nominal (rest) knee joint angle, and the normalized stiffness, respectively. Hence, the FMCK can be interpreted as a simplified reflex control in which the muscle and the muscle force are replaced by a spring and the leg force (measured by the GRF), respectively.
To implement the controller, the hopping sequence is divided into flight and stance sub-phases, with foot collision detected in between using the measured GRF signal. During the flight phase, the knee and hip joints are position controlled to predefined target angles using independent PD controllers. The parameters of the PD controllers are manually tuned to reach the desired leg posture before the foot touches the ground, ensuring the repeatability of each experiment. At the onset of foot collision, a PD transition controller from flight to stance is employed for both hip and knee motors for a short period (t_c = 5 ms). This collision-phase controller has a relatively low P gain but a high D gain to absorb the impact energy during the collision and prevent undesired oscillations after landing. After this short period, the controller switches to FMCK for the knee joint in the stance sub-phase, and the hip motor is set free. The desired knee motor current (computed from Equation 2) is input to the motor driver, where the low-level field-oriented control (FOC) runs at 20 kHz. Figure 3 shows the block diagram of this control approach.
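To make the phase logic concrete, the following is a minimal sketch of how one control tick could select the knee command. All gains, thresholds, and the torque-to-current conversion (including the 1:5 pulley) are illustrative placeholders, not the values used on the robot.

```python
# One control tick of the hopping state machine: flight -> collision -> stance.
# All gains and thresholds below are illustrative placeholders.
GRF_CONTACT = 20.0   # N, contact detection threshold (assumed)
T_COLLISION = 0.005  # s, duration of the collision transition controller
C_FMCK = 0.07        # m/rad, normalized stiffness of the virtual knee spring
PHI_K0 = 0.0         # rad, virtual spring rest angle
KT = 0.14            # Nm/A, effective torque constant incl. the pulley (assumed)

def knee_current(phase: str, t_in_phase: float, grf: float,
                 phi_k: float, dphi_k: float) -> float:
    """Return the desired knee motor current for the current control phase."""
    if phase == "flight":
        # PD position control toward a predefined touchdown posture.
        kp, kd, phi_target = 8.0, 0.2, 0.6
        tau = kp * (phi_target - phi_k) - kd * dphi_k
    elif phase == "collision" and t_in_phase < T_COLLISION:
        # Low P, high D: absorb impact energy right after touchdown.
        kp, kd = 1.0, 1.5
        tau = -kd * dphi_k  # hold current posture, damp oscillations
    else:  # stance: force-modulated compliant knee (Equation 2)
        tau = C_FMCK * grf * (phi_k - PHI_K0)
    return tau / KT  # torque command -> motor current for the FOC driver

print(knee_current("stance", 0.0, grf=100.0, phi_k=0.8, dphi_k=0.0))  # -> 40.0
```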
Comparison Metrics
To evaluate and compare the results in the following sections, we define three metrics, namely hopping height, energy consumption, and efficacy, all measured once the robot performs stable hopping. The hopping height, denoted by h hereinafter, is defined as the difference between the maximum hip height and a predetermined hip height at touchdown:

$$h = z_{max} - z_{TD}.\qquad (3)$$

The energy consumption E is calculated as the sum of the absolute knee and hip work during both stance and flight phases:

$$E = \int_{0}^{T} \left( |\tau_k \dot{\varphi}_k| + |\tau_h \dot{\varphi}_h| \right) dt,\qquad (4)$$

where $\tau_k$ and $\dot{\varphi}_k$ indicate the knee torque and velocity, respectively; $\tau_h$ and $\dot{\varphi}_h$ are defined similarly for the hip joint; and $T$ is the period of the hopping cycle. To define the efficacy criterion, we use the ratio between the energy required to reach a certain hopping height and the energy consumed by the robot. During the stance phase, the robot needs to absorb the kinetic energy at touchdown and generate the same amount while moving in the opposite direction at the take-off moment, so as to return to its initial state. Therefore, twice the kinetic energy at touchdown can serve as a measure for normalizing the robot's mechanical work E and defining the efficacy. An easy way to find the kinetic energy at touchdown is to compute the potential energy difference over the hopping height h, namely mgh, where m is the total mass of the robot and g is the gravitational acceleration. As a result, the efficacy ρ (expressed in percent) is computed as:

$$\rho = \frac{2mgh}{E} \times 100.\qquad (5)$$

With this definition, the larger the efficacy, the more cost-effective the movement.
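The three metrics reduce to simple operations on logged trajectories. Below is a minimal sketch of that computation; the sampled arrays and the robot mass are synthetic placeholders (the true parameters live in Table 1), and ρ is reported in percent as above.

```python
import numpy as np

def hopping_metrics(t, z_hip, z_td, tau_k, dphi_k, tau_h, dphi_h, m, g=9.81):
    """Hopping height h (Eq. 3), absolute joint work E (Eq. 4), efficacy rho (Eq. 5, %)."""
    h = np.max(z_hip) - z_td                          # hopping height (m)
    power = np.abs(tau_k * dphi_k) + np.abs(tau_h * dphi_h)
    E = np.trapz(power, t)                            # absolute work over the cycle (J)
    rho = 100.0 * 2.0 * m * g * h / E                 # efficacy in percent
    return h, E, rho

# Synthetic one-cycle log (placeholders, not robot data):
t = np.linspace(0.0, 0.65, 651)
z_hip = 0.55 + 0.09 * np.sin(2.0 * np.pi * t / 0.65)  # hip height (m)
tau_k = 40.0 * np.exp(-((t - 0.2) / 0.08) ** 2)       # knee torque burst (Nm)
dphi_k = 3.0 * np.cos(2.0 * np.pi * t / 0.65)         # knee velocity (rad/s)
tau_h = np.zeros_like(t)                              # hip set free in stance
dphi_h = np.zeros_like(t)
print(hopping_metrics(t, z_hip, z_td=0.55, tau_k=tau_k, dphi_k=dphi_k,
                      tau_h=tau_h, dphi_h=dphi_h, m=2.9))
```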
Human Hopping Experiment
The hopping experiments were conducted with seven young, healthy subjects (6 males and 1 female; age: 24.14 ± 3.33 years; mass: 68.5 ± 9.7 kg). All participants provided written informed consent. The subjects were instructed to perform vertical hopping on both legs with their hands resting on their hips. For the first hopping trial, subjects were asked to hop at their preferred hopping frequency (PHF) for 20 s; the purpose of this trial was to compute each individual's PHF. After that, they were cued by a metronome tone and asked to hop at 75%, 100%, 125%, and 150% of their PHF. Subjects repeated hopping at each frequency six times, for 35 s each, with two minutes of rest in between. Each trial started with 5 s of preparation, followed by 5 s of standing still on the force plates, then 20 s of continuous hopping, and finally 5 s of standing still again. For kinematic data collection, 10 motion-capture cameras (Qualisys Type 5+/6+, 500 Hz) recorded the movements of 21 markers placed on anatomical locations with minimal skin/muscle motion. Moreover, for measuring the ground reaction force (GRF) during hopping, two piezoelectric Kistler force plates (Type 9260AA) were used for the left and right legs individually. Finally, OpenSim [53], along with its inverse kinematics and inverse dynamics tools, was used to calculate the hip, knee, and ankle angles as well as the corresponding joint torques.
Fig. 3 Schematic overview of the control system architecture that uses the ground reaction force (GRF) and joint angles (φ_i) as feedback signals for the control of hopping. The high-level controller is implemented in real time with the MATLAB Simulink xPC target. The interface between the xPC target machine and the other parts is an EtherCAT communication bus running at 1 kHz.
Results
We conducted the EPA simulations and experiments in three different scenarios. In scenario A, we put the FMCK controller to the test by relying only on the electric motors (i.e., with the PAMs turned off); the purpose of this scenario is to evaluate the performance of our proposed virtual impedance controller in achieving stable hopping. In scenario B, we incorporated the PAMs into the hopping control, inflated with a fixed air pressure. Then, in scenario C, we added another degree of freedom to the leg in the form of a passive foot and compared the results with human hopping data.
Virtual compliance control with FMCK
The adjustable virtual compliance is determined by the normalized stiffness C and the rest angle φ_k0 (in Equation 2). To assess the performance of our proposed FMCK controller, we first searched for the values with which the robot can hop stably. Note that in this scenario, the PAMs were completely off. Here, stable periodic hopping of the robot was checked and confirmed by return maps of the apex height. The hopping heights achieved as a function of the FMCK control parameters are shown in Fig. 4a. For illustration purposes and better comparison, Fig. 4b displays a zoomed area of the former figure in 2D. According to this figure: (1) various hopping heights up to 25 cm can be achieved with the FMCK controller; (2) hopping at different heights can be regulated, especially by tuning φ_k0 in the range 0° ≤ φ_k0 < 50°; and (3) there is a range of control parameter values ([C, φ_k0]) that reach a given hopping height. Thus, the control parameters can be used to optimize other metrics, such as energy efficiency. This property can be analyzed in Figs. 4c and 4d, which illustrate the consumed energy of the robot over the same range of control parameters. These two graphs make it possible to find an efficient solution for a specific desired hopping height. For example, by setting C to 0.3 m/rad, a hopping height of 25 cm can be achieved with 0° ≤ φ_k0 ≤ 12°, while increasing the rest angle reduces the consumed energy from 50 J to 42 J. This 16% reduction in energy consumption is obtained by tuning one control parameter without disturbing the hopping performance.
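The parameter scan behind Fig. 4 can be sketched as a grid search with an apex return-map stability check. In the sketch below, `toy_simulate` is a stand-in for the SimMechanics model, and the parameter ranges are assumed from the figure.

```python
import numpy as np

def is_stable(apex_heights, tol=1e-3):
    """Apex return-map check: the last few apex heights converge to a fixed point."""
    diffs = np.abs(np.diff(np.asarray(apex_heights)[-10:]))
    return bool(np.all(diffs < tol))

def grid_search(simulate, C_values, phi_k0_values):
    """Scan FMCK parameters; `simulate` maps (C, phi_k0) -> (apex_heights, energy)."""
    stable = []
    for C in C_values:
        for phi_k0 in phi_k0_values:
            apex, energy = simulate(C, phi_k0)
            if is_stable(apex):
                stable.append({"C": C, "phi_k0": phi_k0,
                               "height": apex[-1], "energy": energy})
    return stable

# Toy surrogate in place of the SimMechanics model, for demonstration only:
def toy_simulate(C, phi_k0):
    apex = [0.25 * np.exp(-k) + 0.20 for k in range(20)]  # converging apex series
    return apex, 40.0

print(len(grid_search(toy_simulate, np.linspace(0.05, 0.5, 5),
                      np.deg2rad(np.arange(0, 50, 10)))))  # -> 25
```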
We implemented the same virtual compliance control approach (described in Fig. 3) on the EPA-Hopper-I (and -II) robot to validate the simulation model and test the performance of the controller in practice. Since the objective is to test the quality of the FMCK (virtual impedance) control without additional physical impedance, both flexor and extensor PAMs were turned off during hopping; see Extension 1 for the robot's hopping performance in this case.
To compare the simulation and experimental results, Fig. 5 (first row) shows the hip position, GRF, and consumed power in one hopping cycle. Here, the control parameters are set to C = 0.07 m/rad and φ_k0 = 0 for both the simulation and experimental trials. As seen in Fig. 5a, the hip position follows a periodic sinusoidal pattern with a frequency of f = 1.53 Hz. The simulation reproduces the hip position observed in the experiment, taken as a measure of kinematic behavior. Figures 5b and 5c show that the patterns and magnitudes of the GRF and the power profiles of the knee and hip joints resemble and are commensurate with those of the simulations. Contrary to the simulation outcomes, the GRF (and consequently the power) is jittery in the experiments; this discrepancy might come from the imprecise contact model. Nevertheless, the general kinematic and energetic behaviors of the experiments and simulations match.
Calculation of the metrics for this experiment gives a hopping height of h = 17.9 cm, an energy consumption of E = 40.59 J, and an efficacy of ρ = 24.85. Notwithstanding the stable hopping motion obtained with this controller, the efficacy can be further improved by using the EPA design, as described in the following.
Physical compliance control with EPA
In this scenario (B), tuning the physical compliance by setting an appropriate PAM pressure complements the virtual compliance control (FMCK). To investigate the influence of the PAMs on the hopping performance and efficiency, we chose the same control parameters as in scenario A for both simulation and experiment. After searching for appropriate PAM compliances, we set the initial air pressures inside the extensor and flexor PAMs to P = 0.6 MPa and P = 0 MPa, respectively. The results of this scenario are shown in Fig. 5 (second row). The first observation from Fig. 5d is the increased hopping height, h = 19.16 cm, compared to scenario A (Fig. 5a). This improvement is achieved with less energy expenditure than in the previous scenario (E = 39.68 J), according to Fig. 5f. Computing the efficacy metric in this case yields ρ = 26.62, an improvement of 7.1% over scenario A. The GRF pattern (Fig. 5e) is also more human-like (curved instead of plateaued) than that of scenario A (Fig. 5b). This result shows that adding the PAMs to the robot can relieve the motors, reduce energy consumption, and help improve the overall performance.
Extended Applicability of Blended Impedance Control
To support the comparison of the robot's kinematic and dynamic hopping behaviors with those of humans, we applied the blended impedance controller to the three-segmented EPA-Hopper-II robot. This robot includes a foot, to better resemble the morphology of the human leg. We kept the same control system as for EPA-Hopper-I. The ankle joint is controlled passively by an ankle extensor PAM (mimicking the SOL muscle) and a metal spring (representing the TA muscle), as shown in Fig. 2a. Interestingly, the same FMCK control parameters used for EPA-Hopper-I (C = 0.07 m/rad and φ_k0 = 0) can stabilize EPA-Hopper-II when the SOL PAM is pressurized with at least P = 0.3 MPa. The results obtained from this experiment are depicted in Fig. 6. Stable hopping is achieved despite the addition of a third DoF, which changes both the kinematics and the dynamical behavior of the leg. This outcome supports the extensibility of the proposed bioinspired control concept to more complex robots with minimal required changes. The next achievement is the similarity between robot and human hopping behavior, shown in Fig. 6 (Extension 2). As hypothesized, the addition of the foot improves the similarity to human hopping. Figure 6b demonstrates that the GRF pattern of EPA-Hopper-II looks more like the single-peak pattern observed in human hopping. In Fig. 6c, we normalized the consumed power to body weight (BW) and then accounted for the hopping-height ratio. For this, we calculated the normalized hopping height for the human (α_h) and the robot (α_r) as the ratio of the hopping height to the leg length; for a fair comparison, we multiplied the robot power by α_r/α_h. The normalized power graphs are comparable, as shown in Fig. 6c.
Discussions & Future Outlook
In this work, we developed a hopper robot actuated by a hybrid electric-pneumatic actuator as an infrastructure for the simultaneous adjustment of physical and virtual impedance. In this setup, the PAM pressure was adjusted to tune the physical impedance, while the motors were controlled by a force-feedback scheme, named FMCK, as a simplified bioinspired neural control for virtual impedance adjustment. The core concept of having physical impedance is supported by the human musculoskeletal system: even with sufficiently high bandwidth, the real-time response of a physical element cannot be exactly replicated by control. In [54], the dissimilarity of the dynamic behavior developed under virtual versus physical constraints was demonstrated analytically. Improved efficiency, decreased dependency on sensors and motor functionality (e.g., noise effects), and insensitivity to control-loop delay are other advantages of using physical impedance [55,1].
Achievements of Blended Control with EPA
The aforementioned biological and mathematical pieces of evidence support the application of PAMs as physical impedance alongside the virtual impedance control in our EPA design. For repetitive movements, we suggested using the PAM as a tunable physical impedance while the electric motor continuously controls the total joint (or leg) impedance.
Fig. 6 (caption, continued): ... normalized total power consumed in the joints. In all figures, the solid and dashed graphs with their corresponding shaded areas depict the human and robot data (mean ± variance), respectively.
Based on our simulation and experimental studies, the achievements of this article, discussed further in the following, can be summarized as: 1) successful implementation of a new bioinspired GRF-based control of virtual impedance; 2) demonstration of the advantages of tuning the physical compliance (impedance); 3) verification of the extendability of the proposed approach; and, finally, 4) the ability to produce human-like hopping performance. In Section 4.2, we also analyze the hybrid control of physical and virtual impedance, supported by simulations, showing great potential for significant improvements in efficiency and efficacy.
1) GRF-based virtual impedance control:
Using a virtual spring to generate bouncing behavior, as in the SLIP (spring-loaded inverted pendulum) model [56] for hopping and running, has already been applied to robots [26,40]. Instead of emulating a fixed or switching virtual spring using virtual model control (VMC) [39], [26], and [40], here the continuous GRF signal is employed for feedback control. As shown in Fig. 4, the normalized stiffness (C) and the rest angle (φ_k0) can be used to adjust the hopping height. Clearly, φ_k0 is more appropriate than C for tuning the hopping height. By selecting C between 0.2 and 0.3 m/rad, this adjustment can tune the hopping height to any value below 25 cm. Generally speaking, increasing the rest angle of the virtual knee spring reduces the hopping height; a lower φ_k0 corresponds to a more extended knee rest angle. Furthermore, this value (besides the normalized stiffness) can also be used to decrease the consumed energy. For example, with C = 0.3 m/rad, changing φ_k0 from 0° to 12° reduces energy consumption by 16% while keeping the same hopping height. With this bioinspired control and a learning-based adaptation, we could first find the appropriate parameter range for a specific hopping height and then fine-tune the parameters to minimize energy consumption. This approach could be executed in a higher-level control layer (not investigated here).
In FMCK, the body load is the primary feedback signal for the virtual impedance control of the stance leg. From a broader perspective, legged locomotion can be described as a composition of three locomotor subfunctions (LSFs): stance (which characterizes the axial leg function), swing, and balance (posture control) [57]. The GRF-based control strategy, which is supported by biological studies on healthy [58] and pathological gait [59,34], can provide further advantages in synchronizing the different LSFs. In [60], a concerted control concept was introduced using the GRF as the leading signal from the conductor to harmonize the stance and balance LSFs as two key players in locomotion. The successful implementation of GRF-based control to modulate ankle (FMCA) and hip (FMCH) joint impedance, in a prosthetic foot [52] and an exoskeleton [61] respectively, supports the idea of concerted control at the joint level. In that respect, FMCK could complement FMCH [36] and FMCA [52] to generate a stable gait with coordinated joint movements led by the GRF as the conductor.
2) Adjustable physical impedance: Locomotion can be considered a sequence of oscillatory motions [62]. It is known that every oscillatory system has at least one natural frequency, at which it needs minimal control effort. To change the natural frequency, the mechanical properties of the system must change: in a spring-mass system, either the mass or the spring stiffness can be used to tune the natural frequency. In our EPA-hopper design, the PAM pressure can be used to change the leg stiffness and, consequently, the natural frequency. If the desired frequency is close to the natural frequency, the effort demanded from the electric motor (EM) can be minimized. The results of our simulations and experiments in Table 2 support the successful application of the PAM as a tunable physical impedance for adjusting the natural frequency and, consequently, more efficient hopping. Improving energy efficiency is not the only advantage of the EPA design: Fig. 5 demonstrates the filtering effect of the parallel PAM in the smoothed GRF and power graphs of the second-row plots. An appropriate PAM pressure can also generate more human-like GRF patterns.
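For the spring-mass abstraction invoked here, the relationship is the textbook one:

$$f_n = \frac{1}{2\pi}\sqrt{\frac{k}{m}},$$

so raising the PAM pressure, which stiffens the pneumatic spring (a larger effective $k$), raises the natural hopping frequency for a fixed mass $m$.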
In our experiments (and simulations), the PAMs are not used as the main energy sources but as simple adjustable compliances. For a given movement (hopping under a certain condition), no energy is required for the PAMs except the initial pressure adjustment, which is negligible compared with the total electrical energy over repetitive hops. PAM pressure adaptation could potentially increase energy efficiency further by optimizing the absorption and recoil of gravitational energy. In Section 4.2, we show that an additional knee extensor PAM allows the motor to be off in the first half of the stance phase, significantly reducing the required energy.
3) Extensibility of the proposed approach: Besides the bioinspired control approach, the combination of the EM and PAM in the EPA framework provides an extendable basis for versatile and efficient hopping. Inspired by human leg morphology, a three-segmented leg is well suited to a wide range of locomotion tasks, such as bouncing gaits [63]. By keeping the control architecture and extending the robot with the additional foot segment, we examined the extensibility and modularity of the EPA-based robot design and the GRF-based control. Interestingly, without another energy source at the ankle, a passively compliant joint complemented the implemented control of the hip and knee joints and generated stable hopping (the FMCK control parameters in all cases were C = 0.07 m/rad and φ_k0 = 0). Although the elastic behavior of the ankle joint in human hopping had been demonstrated previously [64], achieving stable hopping without changing any control parameter was not expected. This level of robustness against changing system dynamics might come from the FMCK control and the built-in physical compliance. In other words, the GRF feedback includes the required information about the system status, which is complemented by the elastic behavior induced by the physical impedance. Based on these findings, we envision extending the proposed design and control framework to generate robust and efficient forward hopping, bipedal hopping, and running. In doing so, taking insights from other locomotor systems that generate locomotion by leveraging passive dynamics and using the interaction of internal impact forces and external static friction can also be helpful [65,66].
4) Mimicking human hopping:
By approaching the human leg morphology with EPA-Hopper-II, the resulting behavior also approached human hopping. As shown in Fig. 6, the kinematic and kinetic patterns became more similar to those of humans than with the 2-segmented leg of EPA-Hopper-I. Despite the comparable hopping height, the robot's performance is not as symmetric as human hopping. However, the comparable GRF magnitude and pattern, normalized to body weight, support the biologically inspired design and control of EPA-Hopper-II. Further, the power consumed to reach a certain hopping height (relative to leg length) is similar between the human and the robot. This means that the efficiency of our EPA-based robot approaches human hopping efficiency. Therefore, learning from human mechanics and control in the design and control of the hopper robot successfully provided comparable outcomes.
Harnessing the Potential of PAMs
So far, we have discussed virtual impedance control using the FMCK together with constant PAM pressure for the physical impedance. To better analyze the co-evolution of these two impedance sources, we utilize the template-and-anchor concept [67]. Although the hybrid dynamics of locomotion complicates the stability analysis of periodic motion, the combination of mass and spring provides template models that support a better understanding of the motion [56,62]. To represent the hybrid dynamics of vertical hopping, covering both stance and flight phases, two masses connected by a spring were used as a template model [68,44]. Although this system is not a smooth oscillator, adding a periodic actuation force to the spring can compensate for losses (e.g., impacts) and yield a smooth system [68,62]. Following this argument, which was mathematically proven in [68], the combination of the EM and PAM can generate a smooth oscillatory hopping motion. To provide a realistic picture of the EPA design's potential, we examined more advanced versions of the proposed blended control in a further simulation study with the EPA-Hopper-I robot.
1) Hybrid state-based control of physical and virtual impedance adjustment:
Inspired by human hopping [43] and the robot results (Figs. 5c and 5f), we modified the FMCK-based EPA control by switching the motors off during the downward movement. Thus, the knee extensor PAM, instead of the knee motor, provides the required torque opposing gravity to decelerate the robot in the first half of the stance phase (the deceleration phase).
Here, we carried out the simulations in two cases to investigate the performance of this control approach. In the first case, C1, we used two parallel extensor PAMs with the same fixed pressure while controlling the electric motors with the FMCK method only after maximum compression is reached. The second PAM compensates for the missing motor contribution in the first half of the stance phase.
In the second case, C2, we repeated the same approach but added a third extensor PAM that acts simultaneously with the motor in the acceleration phase (the second half of the stance phase). The third PAM is therefore activated at the moment of maximum compression by injecting air pressure and deactivated by venting as the flight phase begins.
To simulate these cases, the controller parameters were chosen as C = 0.032 m/rad for C1 and C = 0.012 m/rad for C2, with φ_k0 = 0 for both. The controller gains were lowered in these simulations for comparison purposes, as these values generate hopping heights comparable to scenario B (see Section 3.2). The results are shown in Fig. 7. Both cases resulted in stable movements with similar hopping heights, h = 24.55 cm. As expected, eliminating the knee joint's negative work in the deceleration phase reduces the energy consumption (Fig. 7c): the energy consumption for the first and second cases is E = 26.18 J and E = 23.58 J, reductions of 28.7% and 35.8% relative to scenario B (Section 3.2). Likewise, the efficacy for these cases increased to ρ = 52.80 and ρ = 58.68, respectively.
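A minimal sketch of this state-based switching is given below; detecting maximum compression from the sign of the hip velocity and the boolean valve interface are simplifying assumptions, not the simulated implementation.

```python
# State-based hybrid control for cases C1/C2: motor off while compressing,
# FMCK after maximum compression; in C2 a third PAM is valved on as well.
C_FMCK, PHI_K0 = 0.012, 0.0   # C2 gains from the simulations above

def stance_commands(grf, phi_k, dz_hip, case="C2"):
    """Return (knee_motor_torque, third_pam_valve_open) during stance."""
    compressing = dz_hip < 0.0          # hip still moving downward
    if compressing:
        return 0.0, False               # deceleration: extensor PAMs only
    tau_k = C_FMCK * grf * (phi_k - PHI_K0)   # acceleration: FMCK active
    return tau_k, (case == "C2")        # C2: pressurize the third extensor PAM

print(stance_commands(grf=120.0, phi_k=0.9, dz_hip=+0.4))  # -> (1.296, True)
print(stance_commands(grf=120.0, phi_k=0.9, dz_hip=-0.4))  # -> (0.0, False)
```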
2) Higher performance with lower energy consumption: Utilizing two parallel PAMs with fixed pressure could significantly reduce the load on the knee motor by switching it off in the first half of the stance phase. Compared to actuation with the EM alone, the EPA can increase the hopping height by 21% (to 24.55 cm) while decreasing the consumed energy by about 30% (compared to Table 2). Hence, this augmentation of the EPA design yields a 71% increase in the efficacy (ρ) of motion (compared to scenario A). The simple two-level PAM pressure adjustment during hopping in case C2 could generate an acceptable improvement (about 13%) in energy consumption (compared to C1). More elaborate PAM control, such as adjustable physical compliance, could increase the benefits further.
Outlook
Our hybrid actuator design is a key feature that provides access to investigate body intelligence [9] in terms of identifying the role of reflex control and mechanics. We demonstrated higher stability, energy efficiency, and performance using the blended control with the EPA design. Other motion control characteristics, such as robustness against uncertainties or perturbations and adaptation to different environmental or gait conditions, are further topics to be tested with the EPA technology and the blended control. In [33], we demonstrated the role of parallel PAMs (in EPA-Hopper-I) in increasing robustness against ground-level perturbations.
Compared to human motor control, the functionality of the PAM pressure, which could emulate the stimulation signal for muscle activation, was not fully realized in our control implementation. Coordination between the virtual and physical impedance control can be improved by using feedback control for the PAMs. One approach could be implementing a GRF-based pressure control similar to the FMCK. GRF could also synchronize the virtual and physical impedance control at one joint in such a condition. Thus, we need to compromise between control complexity and efficacy. Adjusting the PAM pressure during locomotion might improve efficacy (performance and efficiency), but it complicates control and increases sensitivity to measurement uncertainties and noise.
Developing a stable, performant, and efficient robot was not the only target of this study. With the bioinspired design and control, we also intended to generate humanlike movements and to understand the role of leg morphology and motor control in human gaits. The first step toward human leg morphology was the addition of the foot to the 2-segmented leg. Surprisingly, the FMCK controller, which was designed for the 2-segmented leg, was also able to stabilize the robot with one extra degree of freedom, thanks to the passive compliant ankle design. Furthermore, having a leg morphology closer to the human leg increased the behavioral similarity to human hopping. Still, only a few muscles were modeled by the EPA-Hopper-II, which can be extended by additional EPA actuators representing the other mono- and bi-articular muscles, as shown in Fig. 1. Our preliminary experiments (not shown) supported the important role of the gastrocnemius muscle in hopping. This result supports the idea of morphological computation, which can significantly simplify control. Therefore, EPA-based robots and blended control can be utilized as a practical tool for reverse engineering human locomotion control. The identified principles can be applied in the design and control of assistive devices [69].

Author contributions: ... and contributed to the robot experiments. Andre Seyfarth made substantial contributions to the conception of the work and contributed to the interpretation of data, discussions, and commenting on and writing the article. Koh Hosoda also contributed to the conception of the study and revised it critically for important intellectual content. Maziar Ahmad Sharbafi was responsible for the conception of the work; designing and supervising both human and robot simulations/experiments; analysis and interpretation of data; and writing the manuscript. All authors gave final approval for participation.
Declarations
Ethics approval The human hopping experimental study was approved by the Ethical Committee of the Technical University of Darmstadt.
Consent to participate All subjects voluntarily provided written informed consent to participate in the hopping experiments.
Consent for Publication
All subjects gave consent to publish their data prior to submitting the manuscript to a journal.
Competing interests
The authors have no conflicts of interest to declare that are relevant to the content of this article.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. | 2022-12-15T14:25:16.126Z | 2022-05-01T00:00:00.000 | {
"year": 2022,
"sha1": "c56f8ac3633fe92cae086fc86555bc3df09d27f7",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s10846-022-01631-2.pdf",
"oa_status": "HYBRID",
"pdf_src": "SpringerNature",
"pdf_hash": "c56f8ac3633fe92cae086fc86555bc3df09d27f7",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": []
} |
12298702 | pes2o/s2orc | v3-fos-license | Gastric Schwannoma Diagnosed by Endoscopic Ultrasonography-Guided Trucut Biopsy
Schwannomas of the gastrointestinal (GI) tract are rare subepithelial tumors comprising approximately 3.3% to 12.8% of all mesenchymal tumors of the GI tract. On endoscopic ultrasound (EUS) they are seen as hypoechoic tumors arising most commonly from the fourth sonographic layer (the muscularis propria). Although EUS helps to distinguish tumor characteristics, tissue sampling is required for differentiation from other, more common tumors such as GI stromal tumors. Both EUS-guided fine needle aspiration and EUS-guided trucut biopsy (EUS-TCB) can be used for tissue sampling. However, only EUS-TCB allows core biopsy and a high yield of immunohistochemical staining. We report a case of a gastric schwannoma diagnosed by EUS-TCB.
INTRODUCTION
Schwannoma of the gastrointestinal (GI) tract is considered to be a rare neoplasm, distinctively different from conventional schwannomas that arise in soft tissue or the central nervous system. Histologically, GI schwannomas are S-100 protein-positive spindle cell tumors with a microtrabecular pattern, peripheral lymphoid cuffing, and occasional germinal centers. 1 They are benign tumors associated with an excellent prognosis after surgical resection, as reported by Daimaru et al. 2 GI schwannomas occur most commonly in the stomach (60% to 70% of cases), followed by the colon and rectum. 1 Esophageal and small intestinal schwannomas have rarely been reported. 2 Endoscopic ultrasound (EUS)-guided biopsies are reliable, safe, and effective techniques for obtaining samples for cytological or histological examination, either as a primary procedure or in cases where other biopsy techniques have failed. EUS-guided fine needle aspiration biopsy (EUS-FNA), as well as EUS-guided Trucut biopsy (EUS-TCB), has been proven to be of significant value in the diagnostic evaluation of benign and malignant diseases, as well as in the staging of malignant tumors of the GI tract and of adjacent organs. 3 To our knowledge, this is a very rare case report of a gastric schwannoma diagnosed by EUS-TCB.
CASE REPORT
A 72-year-old female was referred for the evaluation of right colon cancer. During staging workup, esophagogastroduodenoscopy showed a submucosal elevated lesion with bridging folds located at the greater curvature side of the antrum (Fig. 1A). EUS (UM-2000; Olympus, Tokyo, Japan) demonstrated a 40.7×31.7 mm round, hypoechoic mass with internal hyperechogenicity originating from the proper muscle layer (Fig. 1B). Abdominal computed tomography scan showed a 5×4 cm heterogeneous mass located at the greater curvature of the stomach without enlarged lymph nodes (Fig. 1C). EUS-TCB with a 19-gauge needle (QuickCore; Wilson-Cook Medical Inc., Winston-Salem, NC, USA) was performed (Fig. 1D). The tissue from the tumor was composed of spindle cells that stained only for S-100 on immunohistochemistry.
Staining for CD117, CD34, smooth muscle actin, and desmin was negative. These results corresponded with schwannoma of the stomach (Fig. 2). Laparoscopic gastric wedge resection was performed together with right hemicolectomy for the colon cancer. There was an intramural solid mass arising from the muscularis propria of the stomach, measuring 5.5×3.5 cm. On section, the cut surface showed a yellowish-gray, myxoid appearance (Fig. 3). On immunohistochemistry, diffuse and intense positivity for S-100 protein, indicating neurogenic differentiation of the tumor cells, was observed. The tumor did not show any expression of CD117, CD34, smooth muscle actin, or desmin, identical to the result of the TCB. Thus, the final pathologic diagnosis was also gastric schwannoma.
DISCUSSION
Schwannomas arise from Schwann cells of the nerve sheath, which encompass the axons of peripheral nerves. Schwannomas of the GI tract have rarely been reported and occur predominantly in the stomach. Their reported prevalence ranges from 3.3% to 12.8% of all GI mesenchymal tumors. 1,2 In contrast to gastric GI stromal tumors (GISTs), which may be malignant or have malignant potential, schwannomas behave in a benign fashion, and no recurrence, metastasis, or tumor-related mortality has been reported after curative resection. 1 EUS is a useful technique to further characterize subepithelial lesions and surrounding structures of the GI tract. 4,5 It can be used to ascertain the layer of tumor origin and to differentiate intramural lesions from extrinsic compression. EUS measurements of tumor size, cystic spaces, and extraluminal margins have a positive predictive value in differentiating between benign and malignant submucosal tumors. 6 Ji et al. 7 reported a positive predictive value of 98.7% for locating GI mesenchymal tumors with EUS and an 80.3% rate of correct differentiation between benign and malignant tumors using EUS.
It is also difficult to obtain tissue specimens from the muscularis propria or muscularis mucosa by conventional endoscopic biopsy. These types of tumors, which cannot be clearly diagnosed with EUS, require EUS-FNA or EUS-TCB for better evaluation. Preoperative diagnosis using EUS-FNA or EUS-TCB may be improved with the application of immunohistochemical analysis. As in our case, subepithelial tumors originating from the proper muscle layer of the stomach include GISTs, leiomyomas, etc. All these subepithelial tumors appear as hypoechoic lesions originating from the proper muscle layer on EUS. EUS imaging alone cannot make a differential diagnosis among these subepithelial tumors. Thus, obtaining tissue from a submucosal tumor with EUS-FNA or TCB and performing immunohistochemical analysis will help to make accurate diagnoses and improve patient prognosis. 8,9 In a recent study, Fernández-Esparrach et al. 10 evaluated the diagnostic yield of EUS-FNA and EUS-TCB in 40 patients with gastric subepithelial lesions. The diagnostic yield of EUS-FNA was 52% and that of EUS-TCB was 55%, owing to technical failures and inadequate samples. There are reported cases of mediastinal schwannoma 11 and retroperitoneal schwannoma 12 diagnosed by EUS-FNA. Until now, gastric schwannomas diagnosed by EUS-FNB have seldom been reported.
Spindle cell neoplasms of the GI tract show considerable morphologic overlap. Thus, immunohistochemistry is an essential tool for the proper diagnosis of these lesions. Several immunohistochemical stains are particularly helpful in the evaluation of these spindle cell lesions, each with its own strengths and limitations. The strength of S-100 protein expression in spindle cell neoplasms of the GI tract lies in its ability to identify schwannomas. Schwannomas show strong and uniform nuclear and cytoplasmic staining for S-100 protein in 100% of cases in reported series. [13][14][15][16] In this case, the tumor was successfully diagnosed as gastric schwannoma by EUS-TCB with immunohistochemistry, and curative resection was performed by laparoscopic surgery. | 2016-05-12T22:15:10.714Z | 2013-05-01T00:00:00.000 | {
"year": 2013,
"sha1": "2180af8be562c83682d68b82d62980183994ec43",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.5946/ce.2013.46.3.284",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "2180af8be562c83682d68b82d62980183994ec43",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
238198393 | pes2o/s2orc | v3-fos-license | Demazure formula for $A_n$ Weyl polytope sums
The weights of finite-dimensional representations of simple Lie algebras are naturally associated with Weyl polytopes. Representation characters decompose into multiplicity-free sums over the weights in Weyl polytopes. The Brion formula for these Weyl polytope sums is remarkably similar to the Weyl character formula. Moreover, the same Lie characters are also expressible as Demazure character formulas. This motivates a search for new expressions for Weyl polytope sums, and we prove such a formula involving Demazure operators. It applies to the Weyl polytope sums of the simple Lie algebras $A_n$, for all dominant integrable highest weights and all ranks $n$.
Introduction
The Brion formula [3,4] is a general expression for an exponential sum over lattice points in a polytope. That sum is sometimes called the integer-point transform of the polytope.
In the weight lattice of a simple Lie algebra, a Weyl polytope has the weights in an orbit of the Weyl group as its vertices. Applied to Weyl polytopes, the Brion theorem yields a formula that is remarkably similar to the Weyl character formula [6,14,11]. As a consequence, the polytope expansion of Weyl characters in terms of integer-point transforms is natural and useful [14,11,13,15,6,12].
Here we explore further the relation between Lie characters and integer-point transforms. Other formulas exist for the characters, and following [14], we are interested in finding similar expressions for the integer-point transforms of Weyl polytopes, herein called Weyl polytope sums. We focus on the Demazure character formula [5,1,7] and use the Demazure operators involved to write expressions for the Weyl polytope sums. We have obtained results for the simple Lie algebras $A_n$, for all ranks $n \in \mathbb{N}$.
In the following section, we review the initial motivation for the present work, in part to establish our notation. We describe the similarity between the Weyl character formula and the Brion formula, and the polytope expansion that exploits it. Section 3 is a quick account of the Demazure character formula. Our new formula for A n Weyl polytope sums is presented and proved in Section 4. The final section offers a short conclusion.
Polytope expansion of Lie characters
Let $X_n$ denote a simple Lie algebra of rank $n$ ($X$ is a letter from $A$ to $G$). The sets of fundamental weights and simple roots are denoted by $F := \{\Lambda_i \mid i = 1, \ldots, n\}$ and $S := \{\alpha_i \mid i = 1, \ldots, n\}$, respectively. The corresponding weight and root lattices are $P := \mathbb{Z}F$ and $Q := \mathbb{Z}S$. The set of dominant integrable weights is $P_+ := \mathbb{N}_0 F$, and we write $R$ ($R_+$, $R_-$) for the set of (positive, negative) roots of $X_n$.
2.1. Weyl character formula. Consider a finite-dimensional irreducible module $L(\lambda)$ over $X_n$ of highest weight $\lambda \in P_+$. The formal character of $L(\lambda)$ is defined as
$$\mathrm{ch}_\lambda \;=\; \sum_{\mu \in P(\lambda)} \mathrm{mult}_\lambda(\mu)\, e^{\mu}, \qquad (1)$$
where $\mathrm{mult}_\lambda(\mu)$ is the multiplicity of the weight $\mu$ in the module $L(\lambda)$, and $P(\lambda)$ is the set of weights of $L(\lambda)$. The formal exponentials of weights obey $e^{\mu} e^{\nu} = e^{\mu+\nu}$. Writing $(\mu, \sigma)$ for the inner product of weights $\mu$ and $\sigma$, the formal exponential $e^{\mu}$ simply stands for $e^{(\mu,\sigma)}$ before a choice of weight $\sigma$ is made. A choice of $\sigma$ fixes a conjugacy class of elements in the Lie group $\exp(X_n)$, and the formal character becomes a true character: $\mathrm{ch}_\lambda(\sigma)$, the trace, in the irreducible highest-weight representation of highest weight $\lambda$, of elements of $\exp(X_n)$ in the conjugacy class labelled by $\sigma$.
The celebrated Weyl character formula is
$$\mathrm{ch}_\lambda \;=\; \frac{\sum_{w \in W} \det(w)\, e^{w(\lambda+\rho)}}{\sum_{w \in W} \det(w)\, e^{w(\rho)}}, \qquad (5)$$
where $\rho := \sum_{i=1}^{n} \Lambda_i$. The Weyl invariance of the character can be made manifest. For brevity, we sometimes write $w(\,\cdot\,)$, meaning that $w$ acts on an explicitly indicated argument only or, if no argument is given, on everything to its right. We can then rewrite (5) in a manifestly Weyl-invariant form, given in (8) and (9).

2.2. Brion formula. A polytope is the convex hull of finitely many points in $\mathbb{R}^d$. A polytope's vertices form such a set of points, with minimum cardinality. A lattice polytope has all its vertices in an integral lattice in $\mathbb{R}^d$. The corresponding (formal) integer-point transform of the polytope is the sum of terms $e^{\phi}$ over the lattice points $\phi$ in the polytope.
Brion [3,4] found a general formula for these integer-point transforms. For $\lambda \in P$, let the Weyl polytope $\mathrm{Pt}_\lambda$ be the polytope with vertices given by the Weyl orbit $W\lambda$. Consider its integer-point transform (10), where the relevant lattice is the $\lambda$-shifted root lattice $\lambda + Q$ of the algebra $X_n$. We refer to these integer-point transforms as Weyl polytope sums. By (11), the Weyl polytope sum has an interpretation as a "multiplicity-free" character [6]: it is obtained from the character (1) by putting $\mathrm{mult}_\lambda(\mu) \to 1$ for all $\mu \in P(\lambda)$.
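Before specializing to Weyl polytopes, Brion's theorem can be checked by hand in the simplest nontrivial case, a lattice segment, where it says that the integer-point transform equals a sum of one rational "vertex cone" term per vertex. The following sympy snippet is our illustration and is not taken from the paper:

```python
import sympy as sp

x = sp.symbols('x')
a, b = 2, 7
lattice_sum = sum(x**k for k in range(a, b + 1))  # integer-point transform of [a, b]
brion = x**a / (1 - x) + x**b / (1 - 1 / x)       # one rational term per vertex
assert sp.cancel(lattice_sum - brion) == 0        # the two expressions agree
```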
Applied to a Weyl polytope, the Brion formula yields expression (12), which, following (8) and (9), can be rewritten in a manifestly Weyl-invariant form.

2.3. Polytope expansion. The Brion formula (12) is remarkably similar to the Weyl character formula (5) [14,11,6]. It is therefore natural and fruitful to consider the polytope expansion of Lie characters [14,6,15], in which characters are expanded in Weyl polytope sums with integer coefficients $A_{\lambda,\mu}$. These coefficients were dubbed polytope multiplicities and denoted $\mathrm{polyt}_\lambda(\mu)$ in [15], in analogy with the weight multiplicities $\mathrm{mult}_\lambda(\mu)$ appearing in the expansion (1). For type $A$, they were shown in [9] to be non-negative. However, other examples have been found that are negative [9], so "multiplicity" appears to be a misnomer.
We do not consider the polytope expansion further in this note. Instead, we focus on the striking relationship between characters and Weyl polytope sums.
Demazure character formula
Here we show that expressions similar to the Demazure character formula can be written for the Weyl polytope sums in (10).
Let us first sketch the Demazure character formula. The Weyl group $W$ is generated by the reflections $r_\beta$ in weight space across the hyperplanes normal to the corresponding roots $\beta \in R$,
$$r_\beta(\mu) \;=\; \mu - (\mu, \beta^\vee)\,\beta,$$
where $\beta^\vee := 2\beta/(\beta, \beta)$. In fact, the Weyl group is generated by the primitive (simple-root) reflections $r_i := r_{\alpha_i}$. For each primitive reflection $r_i$, we define the Demazure operator
$$D_i \;=\; \frac{1 - e^{-\alpha_i}\, r_i}{1 - e^{-\alpha_i}}\,.$$
For $\lambda \in P$, we set $r_i(e^{\lambda}) := e^{r_i \lambda}$. We will also use the modified Demazure operators $d_i$. For every $w \in W$, a Demazure operator $D_w$ can be defined: in a reduced decomposition of $w$, replace the factors $r_j$ with $D_j$. Demazure has shown [5] that the resulting operator $D_w$ is independent of which reduced decomposition is used (see also [7]). Accordingly, the Demazure operators obey relations encoded in the Coxeter-Dynkin diagrams of $X_n$. To illustrate, let $w_L \in W$ denote the longest element of the Weyl group. For $A_2$, for example, we have $w_L = r_1 r_2 r_1 = r_2 r_1 r_2$, and the associated Demazure operator can be written in the two ways $D_{w_L} = D_1 D_2 D_1 = D_2 D_1 D_2$.
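The independence of $D_w$ from the chosen reduced decomposition can be verified by machine for $A_2$. The sketch below implements the Demazure operators in the standard form quoted above (assumed here to match the paper's convention), through the well-known closed form of $D_i e^{\lambda}$, with weights written in the basis of fundamental weights:

```python
# Group-algebra elements are dicts {weight: coefficient}; a weight (a, b) is
# given in the fundamental-weight basis, so <lam, alpha_i^vee> = lam[i - 1].
ALPHA = {1: (2, -1), 2: (-1, 2)}  # simple roots of A_2 in the Lambda basis

def demazure_on_exp(lam, i):
    """Closed form of D_i e^lam: a truncated geometric string along alpha_i."""
    a1, a2 = ALPHA[i]
    m = lam[i - 1]  # <lam, alpha_i^vee>
    if m >= 0:      # e^lam + e^{lam - alpha_i} + ... + e^{r_i lam}
        return {(lam[0] - j * a1, lam[1] - j * a2): 1 for j in range(m + 1)}
    if m == -1:     # the numerator vanishes identically
        return {}
    return {(lam[0] + j * a1, lam[1] + j * a2): -1 for j in range(1, -m)}

def demazure(f, i):
    out = {}
    for lam, c in f.items():
        for mu, d in demazure_on_exp(lam, i).items():
            out[mu] = out.get(mu, 0) + c * d
    return {k: v for k, v in out.items() if v != 0}

f = {(3, -2): 1, (-1, 4): 2, (0, 0): -3}  # an arbitrary test element
lhs = demazure(demazure(demazure(f, 1), 2), 1)
rhs = demazure(demazure(demazure(f, 2), 1), 2)
assert lhs == rhs  # D_1 D_2 D_1 = D_2 D_1 D_2, as required for D_{w_L}
```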
$A_n$ weight-polytope formulas of Demazure type
We will now restrict attention to the simple Lie algebras $A_n$. When appropriate, the superscript $(n)$ will be used to indicate the dependence on the rank $n$.
These quantities will be useful below. For $k, m \in \{1, \ldots, n\}$ with $k \neq m$, we also define further auxiliary combinations, whose basic relations follow directly from the definitions. Here and henceforth we use the standard numbering of $A_n$ simple roots, so that $(\alpha_k, \alpha_{k+1}^\vee) = -1$ for all $k \in \{1, \ldots, n-1\}$.
The following expression is reminiscent of the Poincaré series discussed by Macdonald in [10].
Our main result is given in (33) below and may be viewed as the Demazure analogue of this Lemma.
Proof. Use (21) and substitute into (28) to obtain an intermediate expression; noticing a further identity among the resulting terms, the result follows.
Given the similarity between (28) and (26), the Weyl group algebra relation (27) motivates our main result, the following Theorem.
The relation (38) now follows, thus completing the proof.
Conclusion
Our main result is the formula (33), involving (modified) Demazure operators, for the weight-polytope lattice sums of the Lie algebras $A_n$. It is valid for all ranks $n \in \mathbb{N}$ and all dominant integrable highest weights.
In [16], formulas are written for the rank-2 weight-polytope lattice sums. It is interesting to note that these formulas are easily recast into a form that is very similar to the one we have found for $A_n$. Apart from $A_2$, the algebras $C_2 \cong B_2$ and $G_2$ are the only (up to isomorphism) rank-2 simple Lie algebras. In both cases, let $\alpha_1$ denote the short root. For $C_2$, we then find an expression of the same form, while for $G_2$, we obtain
$$B \;=\; (1 + d_2)\left(1 + d_1 + r_1 d_2 + r_1 r_2 d_1 + r_1 r_2 r_1 d_2 + r_1 r_2 r_1 r_2 d_1\right).$$
We believe this indicates that we are on track toward a general form, valid for all simple Lie algebras. Furthermore, we hope that such a formula might lead to one that applies beyond the Lie context, as the Brion formula does, to polytopes besides the Weyl polytopes.
To finish, let us mention some interesting related work. The polytope expansion of Lie characters is highly reminiscent of the early work of Antoine and Speiser [2] and the recursive formulas found by Kass [8]. Recent work generalizes the context significantly. Dhillon and Khare [6] thus report results for all simple highest-weight modules over Kac-Moody algebras. In [9] by Lecouvey and Lenart, a connection with the atomic decomposition of characters is described, along with (q- or t-)deformations of the structures described herein. | 2021-09-29T01:15:50.963Z | 2021-09-27T00:00:00.000 | {
"year": 2021,
"sha1": "58a69b8a8bdc7eb1c6dd90cc187e88551f325886",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/2109.13314",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "58a69b8a8bdc7eb1c6dd90cc187e88551f325886",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics",
"Physics"
]
} |
44218416 | pes2o/s2orc | v3-fos-license | Topological phase for spin-orbit transformations on a laser beam
We investigate the topological phase associated with the double connectedness of the SO(3) representation in terms of maximally entangled states. An experimental demonstration is provided in the context of polarization and spatial mode transformations of a laser beam carrying orbital angular momentum. The topological phase is evidenced through interferometric measurements, and a quantitative relationship between the concurrence and the fringe visibility is derived. Both the quantum and the classical regimes were investigated.
The seminal work by S. Pancharatnam [1] introduced for the first time the notion of a geometric phase acquired by an optical beam passing through a cyclic sequence of polarization transformations. A quantum mechanical parallel for this phase was later provided by M. Berry [2]. Recently, the interest in geometric phases was renewed by their potential applications to quantum computation. The experimental demonstration of a conditional phase gate was recently provided both in nuclear magnetic resonance [3] and trapped ions [4]. Another optical manifestation of geometric phase is the one acquired by cyclic spatial mode conversions of optical vortices. This kind of geometric phase was first proposed by van Enk [5] and recently found a beautiful demonstration by E. J. Galvez et al [6].
The Hilbert space of a single qubit admits a useful geometric representation of pure states on the surface of a sphere. This is the Bloch sphere for spin-1/2 particles or the Poincaré sphere for polarization states of an optical beam. A Poincaré sphere representation can also be constructed for the first order subspace of the spatial mode structure of an optical beam [7]. Therefore, in the quantum domain, we can attribute two qubits to a single photon, one related to its polarization state and another one to its spatial structure. Geometrical phases of a cyclic evolution of the mentioned states can be beautifully interpreted in such representations as being related to the solid angle of a closed trajectory. However, in order to compute the total phase gained in a cyclic evolution, one should also consider the dynamical phase. When added to the geometrical phase, it leads to a total phase gain of π after a cyclic trajectory. This phase was first put into evidence using neutron interference [8]. The appearance of this π phase is due to the double connectedness of the rotation group SO(3). However, in the neutron experiment, only two-dimensional rotations were used, and this topological property of SO(3) was not unambiguously put into evidence, as explained in detail in [9,10].
As discussed by P. Milman and R. Mosseri [9,11], when the quantum state of two qubits is considered, the mathematical structure of the Hilbert space becomes richer and the phase acquired through cyclic evolutions demands a more careful inspection. The naive sum of independent phases, one for each qubit, is applicable only to product states. In this case, the two qubits are geometrically represented by two independent Bloch spheres. When a more general partially entangled pure state is considered, the phase acquired through a cyclic evolution has a more complex structure and can be separated into three contributions: dynamical, geometrical and topological. Maximally entangled states are solely represented on the volume of the SO(3) sphere, which has radius π and diametrically opposite points identified. This construction reveals two kinds of cyclic evolutions, each one mapped to a different homotopy class of closed trajectories in the SO(3) sphere. One kind is mapped to closed trajectories that do not cross the surface of the sphere (0−type) and the other one to trajectories that cross the surface (π−type). The phase acquired by a maximally entangled state is 0 for the first kind and π for the second one.
In the present work we demonstrate the topological phase associated with polarization and spatial mode transformations of an optical vortex. This phase appears first in the classical description of a paraxial beam with arbitrary polarization state and has its quantum mechanical counterpart in the spin-orbit entanglement of a single photon, which constitutes one possible realization of a two-qubit system and of the topological phase discussed in Ref. [9]. However, it is interesting to observe that, like the Pancharatnam phase, the two-qubit topological phase also admits a classical manifestation, since it can be implemented on the classical amplitude of the optical field. This is also the first experiment unambiguously showing the double connectedness of the rotation group SO(3). The optical modes used in our experiment have a mathematical structure analogous to that of entangled states, so that the geometrical representation developed in [10] also applies and the results of Refs. [9,11] can be experimentally demonstrated. When excited with single photons, these modes give rise to single-particle entangled states and provide a more direct relationship with the ideas put forward in Refs. [9,10,11]. This regime is also investigated in the present work. There are a number of quantum computing protocols that can be implemented with single-particle entanglement and will certainly benefit from our results.
Let us now combine the spin and orbital degrees of freedom in the framework of the classical theory in order to build the same geometric representation applicable to a two-qubit quantum state. Consider a general first order spatial mode with arbitrary polarization state, Eq. (1), where êH(V) are two linear polarization unit vectors along two orthogonal directions H and V, and ψ±(r) are the normalized first order Laguerre-Gaussian profiles, which are orthogonal solutions of the paraxial wave equation [12]. We may now define two classes of spatial-polarization modes: the separable (S) and the nonseparable (NS) ones. The S modes are products of a single spatial profile and a single polarization vector, Eq. (2); for these modes, a single polarization state can be attributed to the whole wavefront of the paraxial beam. They play the role of separable two-qubit quantum states.
For nonseparable (NS) paraxial modes, the polarization state varies across the wavefront. As for entanglement in two-qubit quantum states, the separability of a paraxial mode can be quantified by the analogous definition of concurrence; for the spin-orbit mode described by Eq. (1), it is given by Eq. (3). Let us first consider the maximally nonseparable modes (MNS) of the form of Eq. (4), for which C = 1. It is important to mention that the concept of entanglement does not apply to the MNS mode, since the object described by Eq. (4) is not a quantum state but a classical amplitude. However, we can build an SO(3) representation of the MNS modes as was done in Refs. [11,13]. Let us define four normalized MNS modes E1, E2, E3 and E4 (Eq. (5)). The SO(3) sphere is then constructed in the following way: mode E1 is represented by the center of the sphere, while modes E2, E3, and E4 are represented by three points on the surface, connected to the center by three mutually orthogonal segments. Each point of the SO(3) sphere corresponds to an MNS mode. Following the recipe given in Ref. [13], the coefficients α and β of Eq. (4) are parametrized by a unit vector k = (kx, ky, kz) and an angle a between 0 and π (Eq. (6)). With this parametrization, each MNS mode is represented by the vector a k in the sphere. In order to evidence the topological phase for cyclic transformations, we must follow two different closed paths, each belonging to a different homotopy class, and compare their phases. The experimental setup is sketched in Fig. (1). First, a linearly polarized TEM00 laser mode is diffracted on a forked grating used to generate Laguerre-Gaussian beams [14]. The two side orders carrying the ψ+(r) and ψ−(r) spatial modes are transmitted through half waveplates HWP-A and HWP-B, followed by two orthogonal polarizers Pol-V and Pol-H, and finally recombined at a beam splitter (BS-1). Half waveplates HWP-A and HWP-B are oriented so that their fast axes are parallel. This allows us to adjust the mode separability at the output of BS-1 without changing the corresponding output power, which prevents normalization issues.
Experimentally, an MNS mode is produced when both HWP-A and HWP-B are oriented at 22.5°, so that the setup prepares mode E1 located at the centre of the sphere. Other MNS modes can then be obtained by unitary transformations in only one degree of freedom. Since polarization is far easier to operate than spatial modes, we choose to implement the cyclic transformations in the SO(3) sphere using waveplates. The MNS mode E1 is first transmitted through three waveplates. The first one (HWP-1) is oriented at 0° and makes the transformation E1 → E2, the second one (HWP-2) is oriented at −45° and makes the transformation E2 → E3, and the third one (HWP-3) is oriented at 90° and makes the transformation E3 → E4. Finally, two alternative closures of the path are performed in a Michelson interferometer. In one arm a π−type closure is implemented by double pass through a quarter-waveplate (QWP-1) fixed at −45°. In the other arm, either a 0−type or a π−type closure is performed by a double pass through another quarter-waveplate (QWP-2) oriented at a variable angle between −45° (π−type) and 45° (0−type). These trajectories are analogous to spin rotations around different directions of space [13]. They evidence the topological properties of the three dimensional rotation group.
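The π phase produced by a cyclic polarization transformation can be illustrated with Jones matrices. The snippet below is our own toy illustration, not the specific waveplate sequence of the setup: a product of half waveplates that returns every polarization state to itself while flipping the overall sign, the SU(2) signature of the doubly connected SO(3).

```python
import numpy as np

def hwp(theta_deg):
    """Jones matrix of a half waveplate with fast axis at theta (up to a global phase)."""
    t = np.radians(2.0 * theta_deg)
    return np.array([[np.cos(t), np.sin(t)], [np.sin(t), -np.cos(t)]])

cycle = hwp(45) @ hwp(0) @ hwp(45) @ hwp(0)  # closed polarization trajectory
print(np.round(cycle, 12))                   # minus the identity: an overall pi phase
```

Each pair of plates acts as a 180° rotation on the Poincaré sphere, so the four-plate product implements a closed 2π rotation; the resulting overall factor of −1 is exactly the π phase that the interferometer is designed to reveal.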
In order to provide spatial interference fringes, the interferometer was slightly misaligned. The interference patterns were registered with either a charge coupled device (CCD) camera or a photocounter (PC), depending on the working power. First, we registered the interference patterns obtained when an intense beam is sent through the apparatus. The images shown in Fig. (2a) clearly demonstrate the π topological phase shift. The phase singularity characteristic of Laguerre-Gaussian beams can be easily identified in the images and is very useful to evidence the phase shift. When both arms perform the same kind of trajectory in the SO(3) sphere (QWP-1 and QWP-2 oriented at −45°), a bright fringe falls on the phase singularity. When QWP-2 is oriented at 45°, the trajectory performed in each arm belongs to a different homotopy class and a dark fringe falls on the singularity, which clearly demonstrates the π topological phase shift.
In order to discuss the role played by mode separability, it is interesting to observe the pattern obtained when QWP-2 is oriented at intermediate angles, which correspond to open trajectories in the SO(3) sphere. We observed that during the phase shift transition, the interference fringes are deformed and finally return to their initial topology with the π phase shift. This is clearly illustrated by the intermediate image displayed in Fig. (2a), which corresponds to QWP-2 oriented at 0°. Notice that, despite the deformation, the interference fringes display high visibility.
As we mentioned above, the mode preparation settings can be adjusted in order to provide a separable mode. For example, when we set HWP-A and B both at 45°, the output of BS-1 is the separable mode ψ+(r)êH, which can be represented in the Poincaré spheres for spatial and polarization modes. The same π phase shift can be observed when QWP-2 is rotated, but the transition is essentially different. The interference pattern is not topologically deformed, but its visibility decreases until it completely vanishes at 0°, and then reappears with the π phase shift. This transition is clearly illustrated by the three patterns displayed in Fig. (2b). In this case, the π phase shift is of purely geometric nature, since the spatial mode is kept fixed while the polarization mode is turned around the equator of the corresponding Poincaré sphere.
The relationship between mode separability and fringe visibility can be clarified by a straightforward calculation of the interference pattern. Let us consider that HWP-A and B are oriented so that the output of BS-1 is the mode Eε(r) of Eq. (7), where ε is the fraction of the ψ+(r)êH mode in the output power. Now, let us consider that QWP-2 is oriented at 0° and suppose that the two arms of the Michelson interferometer are slightly misaligned, so that the wave-vector difference between the two outputs is δk = δk x̂, orthogonal to the propagation axis. Taking into account the passage through the three half waveplates, and the transformation performed in each arm of the Michelson interferometer, we arrive at expression (8) for the interference pattern, where φ = arg(x + iy) is the angular coordinate in the transverse plane of the laser beam, and |ψ(r)|² is the doughnut intensity distribution of a Laguerre-Gaussian beam. It is clear from Eq. (8) that the visibility of the interference pattern is 2√[ε(1 − ε)], which is precisely the concurrence of Eε(r) as given by Eq. (3). Therefore, the fringe visibility is quantitatively related to the separability of the mode sent through the setup. However, the numerical coincidence with the concurrence is restricted to modes of the form given by Eq. (7). In fact, it is important to stress that the fringe visibility cannot be regarded as a measure of the concurrence for any nonseparable mode, but for our purposes it evidences the topological nature of the phase shift implemented by the experimental setup. A detailed discussion on the measurement of the concurrence is available in Ref. [15].
As a conclusion, we demonstrated the double connected nature of the SO(3) rotation group and the topological phase acquired by a laser beam passing through a cycle of spin-orbit transformations. We investigated both the classical and the quantum regimes and com- pared the separability of the mode travelling through the apparatus with the visibility of the interference fringes.
Our results may constitute an useful tool for quantum computing and quantum information protocols. The authors are deeply grateful to S.P. Walborn and P.H. Souto Ribeiro for their precious help with the photocounting system and for fruitful discussions. Funding was provided by Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES), Fundação de Amparò a Pesquisa do Estado do Rio de Janeiro (FAPERJ-BR), and Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq). | 2007-04-06T14:27:51.000Z | 2007-04-06T00:00:00.000 | {
"year": 2007,
"sha1": "e7b4c7cb3afc9ed1f185529accc191f35c693bf8",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/0704.0893",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "e7b4c7cb3afc9ed1f185529accc191f35c693bf8",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Medicine"
]
} |
256906935 | pes2o/s2orc | v3-fos-license | The independent effects of vitamin D deficiency and house dust mite exposure on lung function are sex-specific
Vitamin D deficiency is increasing around the world and has been associated with the development of asthma. This study aims to evaluate the effect of dietary vitamin D deficiency at different life stages on lung function using a murine model of allergic airways disease. BALB/c mice were challenged intranasally with HDM or saline alone for 10 days. Twenty-four hours after the last challenge, mice were anesthetized and lung function was measured using the forced oscillation technique (FOT). Mice were euthanized for assessment of inflammation in the bronchoalveolar lavage (BAL) and total collagen content in lung homogenates by ELISA. Vitamin D deficiency impaired lung function in both male and female mice, increasing tissue damping and elastance; however, it had no effect on HDM-induced inflammation. The impact of vitamin D deficiency was more evident in females. HDM also decreased airway distensibility, but only in females, and this response was not altered by vitamin D deficiency. Our data suggest that vitamin D deficiency and HDM exposure have independent effects on lung mechanics and that females are more susceptible to these effects. Vitamin D deficiency may exacerbate lung function deficits by having a direct, but independent, effect on parenchymal mechanics.
Asthma is a chronic disease characterized by airway inflammation, airway remodeling and reversible deficits in lung function 1,2 . The prevalence of asthma increases when communities adopt western lifestyles and become more urbanized [3][4][5][6] . Due to the associated reduction in outdoor activity, some have suggested that vitamin D deficiency may be responsible for this association 5 . It has been estimated that one billion people around the world have inadequate levels of vitamin D due to many factors, such as an indoor lifestyle, increased use of sunscreen 7 and low dietary vitamin D 8 . Due to the scale of this problem, it is important that we understand the potential health implications of widespread vitamin D deficiency.
While recent vitamin D supplementation trials in community-based cohorts of pregnant women have shown no effect on the risk of wheeze in children at 3 years of age 9,10 , it is unclear whether maternal vitamin D supplementation has effects on postnatal lung function, which is an important risk factor for asthma later in life 11 . We have shown that maternal vitamin D deficiency at 16-20 weeks' gestation is associated with impaired lung function at 6 years of age in offspring 12 . In line with this finding, we have also shown that in utero vitamin D deficiency is sufficient to induce increased airway smooth muscle (ASM) mass and cause deficits in lung function in a mouse model [13][14][15] , both of which are key characteristics of the asthmatic phenotype 1 .
While these observations point to a role for vitamin D deficiency in causing alterations in lung structure, the inflammatory process itself can also lead to airway remodeling. House dust mite (HDM), a prevalent environmental allergen, is associated with allergic airway diseases 16 and drives inflammatory processes that are associated with airway remodeling 17,18 resulting in increased airway resistance 19 . HDM induces a robust Th-2 driven inflammatory response in the airways that is characterized by eosinophilia and the production of IL-4, IL-5 and IL-13 17,18 . These inflammatory processes lead to goblet cell metaplasia, an increase in ASM thickness and deposition of collagen around the airways 18,20 . Collectively, these structural changes cause deficits in lung function 18,19,21 .
Given that both vitamin D deficiency and HDM may lead to deficits in lung function, we investigated the interaction between vitamin D deficiency and HDM, and their effects on lung function. We hypothesized that the combination of vitamin D deficiency and HDM exposure would lead to deficits in lung function that are greater than the individual effects of vitamin D deficiency and HDM alone. We addressed this hypothesis by evaluating the effects of in utero, postnatal and whole life vitamin D deficiency on lung function in a murine model of HDM induced allergic airways disease.
Materials and Methods
Mouse model. All studies were conducted with the approval of the University of Tasmania Animal Ethics Committee and conformed to the guidelines of the National Health and Medical Research Council (Australia). Three-week-old female BALB/c mice (Cambridge Farm Facility, University of Tasmania, TAS, AU) were placed on vitamin D deficient or replete diets and mated with vitamin D replete males at 8 weeks of age as described previously 14 . Pups were cross-fostered at birth to assess the effects of in utero (Vit D −/+), postnatal (Vit D +/−) and whole-life (Vit D −/−) vitamin D deficiency on inflammation and lung function outcomes compared to replete controls (Vit D +/+) 15 . At 8 weeks of age (7-13 mice per group; see Figure legends for further details), male and female offspring were challenged intranasally with 25 µg of an HDM extract (Greer Laboratories, Lenoir, NC, USA) in 50 µl of saline or saline alone for 10 consecutive days under light methoxyflurane anesthesia. Twenty-four hours after the last challenge, the outcomes described below were assessed.
Lung function. Mice were anesthetized with ketamine (40 mg/mL) and xylazine (2 mg/mL) by intraperitoneal injection at a dose of 0.01 mL/g body weight. Two-thirds of the dose was administered before tracheostomy and cannulation, and the remaining anesthetic was given when the mice were connected to the animal ventilator (HSE-Harvard MiniVent; Harvard Apparatus, Holliston, MA, USA). Mice were ventilated at 400 breaths/min with a tidal volume of 10 mL/kg, and 2 cmH2O of positive end-expiratory pressure (PEEP). Lung mechanics were assessed using a modified low frequency forced oscillation technique (LFOT) 22 during slow inflation manoeuvers from end-expiratory lung volume (EELV), up to 20 cmH2O transrespiratory pressure (Prs) 22 . The oscillatory signal consisted of nine frequencies ranging from 4 to 38 Hz and was delivered to the endotracheal cannula via a wavetube of known impedance to calculate the respiratory system impedance. A four-parameter model with constant-phase tissue impedance was fitted to the respiratory system impedance spectrum 23 . This allowed us to calculate airway resistance (Raw), tissue damping (G), tissue elastance (H) and hysteresivity (η = G/H) from 0 to 20 cmH2O Prs. We also used these data to calculate airway distensibility as the slope of the conductance (Gaw = 1/Raw) versus pressure curve between 2 and 10 cmH2O Prs.
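For concreteness, the sketch below fits the four-parameter constant-phase model, Zrs(ω) = Raw + iωIaw + (G − iH)/ω^α with α = (2/π)·arctan(H/G), to a synthetic impedance spectrum over the 4-38 Hz band used here. The parameter values, starting guesses and units are illustrative assumptions, not values from this study.

```python
import numpy as np
from scipy.optimize import least_squares

def constant_phase(params, f):
    """Four-parameter constant-phase model of respiratory input impedance."""
    raw, iaw, g, h = params
    w = 2.0 * np.pi * f
    alpha = (2.0 / np.pi) * np.arctan2(h, g)
    return raw + 1j * w * iaw + (g - 1j * h) / w**alpha

def residuals(params, f, z_meas):
    dz = constant_phase(params, f) - z_meas
    return np.concatenate([dz.real, dz.imag])

f = np.linspace(4.0, 38.0, 9)                          # nine oscillation frequencies (Hz)
z_meas = constant_phase([0.25, 0.005, 2.0, 20.0], f)   # synthetic "measurement"
fit = least_squares(residuals, x0=[0.1, 0.001, 1.0, 10.0], args=(f, z_meas))
raw, iaw, g, h = fit.x
print(f"Raw={raw:.3f}  Iaw={iaw:.4f}  G={g:.2f}  H={h:.2f}  eta=G/H={g / h:.3f}")
```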
Differential cell counts.
After lung function measurements, mice were euthanized with an overdose of ketamine/xylazine, and a bronchoalveolar lavage (BAL) was performed by washing the lung 3 times with 500 µL of saline. The BAL was centrifuged at 5000 rpm for 5 minutes. Cytospin slides were generated from the resuspended pellet and stained with Haem Kwik (HD Scientific Supplies Pty Ltd., AU). Differential cells counts were performed under light microscopy by counting a minimum of 200 cells per mouse.
Collagen.
We have previously found that vitamin D deficiency can increase collagen type 1 alpha 1 (COL1A1) expression in utero 24 , which may impact lung mechanics. In order to determine whether this persisted into adulthood, and whether it was altered by HDM exposure, we assessed COL1A1 expression in lung homogenates by ELISA according to the manufacturer's instructions (DLDEVELOP Ltd., Wuxi, Jiangsu, PRC). COL1A1 levels were calculated relative to total protein content in the lung measured by Bradford assay (Thermo Fisher Scientific, Waltham, MA, USA).

Statistical analysis. SigmaPlot (v12.5, Systat, Germany) was used to perform the statistical analysis.
Two-way ANOVA with Holm-Sidak post hoc tests was used to assess the effects of vitamin D deficiency and HDM exposure on the outcomes of interest. Data were log-transformed when necessary to satisfy the model assumptions. A p-value < 0.05 was considered significant. Data are presented as mean (SD).
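A minimal sketch of the corresponding analysis in Python using statsmodels; the synthetic data frame, column names and p-values are placeholders for illustration:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(0)
# Long-format table: one row per mouse (values are synthetic placeholders).
df = pd.DataFrame({
    "vitd": np.repeat(["replete", "deficient"], 20),
    "hdm": np.tile(np.repeat(["saline", "HDM"], 10), 2),
    "value": rng.normal(100.0, 10.0, 40),
})
model = ols("value ~ C(vitd) * C(hdm)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))  # two-way ANOVA with interaction term

# Holm-Sidak adjustment of a set of pairwise-comparison p-values.
pvals = [0.011, 0.048, 0.300]  # placeholder raw p-values
reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="holm-sidak")
print(p_adj, reject)
```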
Results
Lung function. Raw, G, H and η have characteristic pressure dependences 25 . Specifically, Raw (airway resistance) decreases monotonically from 0 to 20 cmH2O Prs (Fig. 1A), G (tissue damping) and H (tissue elastance) (Fig. 1B,C) initially decrease as Prs increases before increasing exponentially at high Prs, while η (hysteresivity = G/H; Fig. 1D) initially increases before decreasing at high Prs. In order to simplify the analysis of our data, and to facilitate simple comparisons between groups, we characterized the pressure dependence of lung mechanics using the following indices: Raw, G, H and η at 0 cmH2O Prs (R0, G0, H0, η0); Raw, G, H and η at 20 cmH2O Prs (R20, G20, H20, η20); the minimum G and H (Gmin, Hmin); and the maximum η (ηmax) (Fig. 1). We then compared these parameters for each of the vitamin D deficiency groups (Vit D −/−, Vit D −/+ and Vit D +/−) against the replete controls (Vit D +/+).
Females. In females, Raw was not affected by HDM (p > 0.05 for all comparisons) or vitamin D deficiency (p > 0.05 for all comparisons, data not shown). However, whole-life vitamin D deficiency increased tissue damping (Fig. 2A).

Males. In males, whole-life vitamin D deficiency increased airway resistance at R0 (p = 0.017, data not shown), tissue damping at G20 (18070 hPa.L−1 vs 17040 hPa.L−1; p = 0.008) (Fig. 2D) and tissue elastance at H20 (122340 hPa.L−1 vs 111740 hPa.L−1; p = 0.011) (Fig. 3D), with no differences in hysteresivity (Fig. 4D). Similar to the female mice, HDM exposure did not affect Raw, G, H or η (p > 0.05 for all comparisons). In contrast to the female mice, deficits in lung mechanics were only observed in male mice that were whole-life vitamin D deficient, while airway distensibility (Fig. 5B) was not affected by HDM exposure (p = 0.48) or vitamin D deficiency (p = 0.85).
Males. In males, HDM caused an influx of eosinophils (p < 0.001, Fig. 6C) and neutrophils (p < 0.001, data not shown), while vitamin D deficiency had no effect on these cells (eosinophils, p = 0.761; neutrophils, p = 0.550). In contrast to the female mice, HDM also increased lymphocyte numbers in the BAL (Fig. 6D).

Collagen. We sought to determine whether the effects we saw were due to differences in COL1A1 expression; however, there were no differences in COL1A1 between groups (data not shown).
Discussion
In this study, we evaluated the independent and combined effects of vitamin D deficiency during different life stages (in utero and/or postnatal) and allergen exposure on lung function outcomes using a mouse model. Vitamin D deficiency in utero and/or postnatally had wide-ranging effects on lung function, particularly in female mice, causing significant impairments in tissue mechanics (G, H and G/H). Vitamin D deficiency also resulted in an impairment in lung mechanics (Raw, G, and H) in male mice, but to a lesser extent than observed in females, and only in response to whole-life vitamin D deficiency. Vitamin D deficiency did not appear to have an influence on airway stiffness. In contrast, while HDM had no effect on the pressure dependence of Raw, G or H, it significantly decreased airway stiffness, but only in female mice. These differences were discordant with cellular inflammation and could not be explained by differences in collagen expression in the lungs. These findings suggest that vitamin D deficiency and HDM have independent effects on lung function, which are unrelated to inflammation, and are sex-dependent. Thus, the net effect of in utero vitamin D deficiency on lung outcomes may depend on sex, and will be influenced by postnatal allergen responses acting via vitamin D-independent pathways.
In this study, vitamin D deficiency had a significant impact on lung mechanics, particularly parenchymal tissue mechanics. In males, these effects were limited to the whole-life vitamin D deficiency group, while in females these deficits were evident in all deficient groups. The observation that whole-life vitamin D deficiency increased G, a measure of lung mechanics linked to the small airways and ventilation heterogeneity 26 , in both males (at G20) and females (Gmin and G20) at 8 weeks of age is consistent with our previous studies on 2-week-old animals 13 . A similar pattern was observed in H, a measure of tissue stiffness, while changes in η were only observed in females. Collectively, these observations suggest that vitamin D deficiency has an impact on parenchymal lung mechanics. Given that we have previously shown that vitamin D deficiency does not affect lung volume in adulthood 14 , it is unlikely that these differences are due to the influence of vitamin D deficiency on somatic growth. Vitamin D deficiency in female mice, either in utero or postnatally, was sufficient to impair lung function. These deficits in lung function may increase susceptibility to chronic lung disease and respiratory morbidity later in life 27 .
There are well-described differences in lung mechanics between males and females, and we have previously described increased susceptibility in females to altered lung function as a result of in utero vitamin D deficiency 12,15 . In relation to asthma, boys have a higher prevalence of asthma in early life whereas, after puberty, asthma is more prevalent in females 28 . Similarly, airway reactivity increases with age in females but decreases in males 29 . Some of these sex differences in asthma susceptibility have been linked to estrogen levels 30 , and estrogen signaling is a critical component of lung development 31 . Given the intimate association between vitamin D and estrogen synthesis 32 , it is possible that estrogen is related to the increased susceptibility of females to the effects of vitamin D deficiency, although we did not directly address this in the present study.
Despite the strong impact of vitamin D deficiency on the pressure-dependent parenchymal mechanics, airway distensibility was not affected by vitamin D deficiency. However, airway distensibility was diminished after exposure to HDM, but only in whole-life vitamin D deficient female mice. Airway distensibility is related to airway stiffness and is reduced in asthmatics 33 . Our observation provides evidence that HDM exposure can directly alter airway stiffness, which has been linked to the increased propensity of the asthmatic airway to constrict 34 . Based on these data, it is clear that vitamin D and HDM had independent effects on lung function, whereby vitamin D did not modify the response to HDM for any of the lung function outcomes we measured. Thus, the net effect of HDM exposure and vitamin D deficiency is likely to be additive.

Figure 5. Airway distensibility, calculated as the slope of the conductance (Gaw = 1/Raw) versus pressure curve between 2 and 10 cmH2O Prs, for female (A) and male (B) vitamin D replete (Vit D +/+) and vitamin D deficient (Vit D −/−) mice exposed to 25 µg of HDM in 50 µL intranasally for 10 days (black bars) or saline alone (grey bars). Data are presented as mean (SD), n = 7-10 for each group in females and 8-12 in males. *p < 0.05.
Interestingly, these lung function responses were completely discordant with inflammation. For example, in female mice, HDM caused substantial eosinophilia that was unaltered by vitamin D deficiency and yet parenchymal mechanics was altered in response to vitamin D deficiency. In contrast, vitamin D deficiency modified the inflammatory response to HDM in male mice but this was not associated with alterations in lung mechanics. While eosinophilia was associated with decreased airway distensibility in the females, inflammation was clearly not sufficient to alter airway stiffness in all cases as this association was not evident in male mice. At this stage we also do not have a structural explanation for the alterations in lung function and deficits in parenchymal mechanics as a result of vitamin D deficiency were not due to altered type 1 collagen levels.
There are several limitations to this study. Firstly, after the measurements of lung function and post-mortem tissue processing (collecting BAL fluid), we were unable to obtain reliable structural measurements that could be directly related to the changes in lung function. Secondly, structural protein analysis was limited to only one type of collagen, and it is possible that the changes in lung mechanics are related to other functional proteins, such as surfactant proteins, that are essential for lung function and pulmonary homeostasis.
Notwithstanding these limitations, our data suggest that vitamin D deficiency and HDM have independent effects on lung function that are sex-specific. HDM induces a robust inflammatory response that may lead to increased airway stiffness in females. In contrast, vitamin D deficiency had limited effects on inflammation but caused consistent deficits in parenchymal mechanics that were more pronounced in female mice. Interestingly, in utero or postnatal vitamin D deficiency was sufficient to alter lung mechanics in these mice. While we were unable to identify mechanisms linking these observations, our data clearly highlight the complexity of the effects of vitamin D on lung function and the importance of probing the influence of sex on responses to respiratory insults.
Data availability. All data generated or analysed during this study are included in this published article.

[Figure caption, fragment] ... intranasally in 50 µL of saline or saline alone (grey bars) for 10 consecutive days. Data are presented as mean (SD), n = 8-11 for each group in females and 7-11 in males. *p < 0.05; **p < 0.01; ***p < 0.001. | 2023-02-17T14:24:22.881Z | 2017-11-09T00:00:00.000 | {
"year": 2017,
"sha1": "5cb9d2e69c0b0a534a2c493ae6659da5100ca27c",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41598-017-15517-z.pdf",
"oa_status": "GOLD",
"pdf_src": "SpringerNature",
"pdf_hash": "5cb9d2e69c0b0a534a2c493ae6659da5100ca27c",
"s2fieldsofstudy": [
"Medicine",
"Environmental Science",
"Biology"
],
"extfieldsofstudy": []
} |
Investigation of the microbial diversity of an extremely acidic, metal-rich water body (Lake Robule, Bor, Serbia)
An investigation of the microbial diversity in the extremely acidic, metal-rich Lake Robule was performed using culture-dependent and culture-independent (T-RFLP) methods. In addition, the ability of the indigenous bacteria from the lake water to leach copper from a mineral concentrate was tested. T-RFLP analysis revealed that the dominant bacteria in the lake water samples were the obligate heterotroph Acidiphilium cryptum (≈50 % of the total bacteria) and the iron-oxidizing autotroph Leptospirillum ferrooxidans (≈40 %). The iron/sulfur-oxidizing autotroph Acidithiobacillus ferrooxidans was reported to be the most abundant bacterium in the lake in an earlier study, but it was not detected in the present study using T-RFLP, although it was isolated on solid media and detected in enrichment (bioleaching) cultures. The presence of the two bacterial species detected by T-RFLP (L. ferrooxidans and A. cryptum) was also confirmed by cultivation on solid media. The presence and relative abundance of the bacteria inhabiting Lake Robule were explained by the physiological characteristics of the bacteria and the physico-chemical characteristics of the lake water.
INTRODUCTION
The Copper Mine Bor has been in operation since 1903, and during that time millions of tons of mine tailings have been deposited in close proximity to the town of Bor. Exposure of the tailings to air and water initiated the microbially accelerated oxidative dissolution of sulfide minerals, forming acid mine drainage (AMD) containing elevated concentrations of metal cations and sulfates. It is known that acidophilic bacteria and archaea, some of which are directly involved in the process of sulfide mineral oxidation and thereby accelerate the production of AMD by a factor of up to 10^6, rapidly populate acidic environments.1 This commonly leads to the formation of acidic lakes and ponds beneath deposits of tailings and in open pits.
Lake Robule is located at the foot of the overburden of the open pit named Visoki planir, which was created during the five-year period from 1975 until 1980. It is the largest overburden of the Copper Mine Bor (100 m in height) and is composed of 150 million tons of waste rock and off-balance ore.2 By preventing water circulation, the Visoki planir overburden created Lake Robule. The lake also collects water draining the overburden after rainfall. The lake is 200 m in length and 150 m in width, with a maximum depth of about 10 m. Although water is drained from the lake into the Bor River at a constant rate of about 500 m^3 per day,3 the lake water level is constant.
Lake Robule has become an extreme environment due to the acidic solutions containing elevated concentrations of metal cations and sulfates that continuously flow into it from the surrounding overburden following rainfall. The input of AMD from the overburden has caused the water of the lake to become highly acidic and deep red in color, due to the high concentration of ferric iron. Such an environment is a potential source of acidophilic bacteria that could be used in biohydrometallurgy, specifically for the leaching of copper from concentrates, ores and tailings, referred to as bioleaching. Bioleaching technology enables the economically feasible exploitation of low-grade ores and tailings, as it is significantly cheaper than classical pyrometallurgical technology.4 In addition, the environmental impacts of bioleaching technology are generally much lower than those associated with the pyrometallurgical processing of copper-containing ores.5 It is estimated that the tailings of the Copper Mine Bor still contain about one million tons of copper. According to their mineral composition and average copper contents, the tailing deposits Visoki Planir, Cerovo and the Old Flotation are particularly suitable for microbial leaching. These dumps contain 200 million tons of tailings, with an estimated content of 346 000 tons of copper.6 It is now accepted that the composition of microbial communities cannot be determined solely using culture-dependent methods7 and that culture-independent approaches generally yield data that are more comprehensive. In order to better define the bacterial community inhabiting Lake Robule, terminal restriction enzyme fragment length polymorphism (T-RFLP) analysis was used in this study, in parallel with the cultivation of acidophilic bacteria on solid media. T-RFLP enables the microbial diversity and relative abundance of microorganisms in samples to be estimated, based on the different lengths of polymerase chain reaction (PCR)-amplified DNA (often 16S rRNA genes) digested with restriction endonucleases.
The aims of this study were to investigate the microbial diversity of Lake Robule, to isolate and identify indigenous acidophilic bacteria, and to test the bioleaching potential of the microbial community in the lake water.
Sampling and measurement of the physicochemical parameters of the lake water
Lake water samples were collected in 50 mL sterile plastic containers on July 26th, 2012. Water temperature, pH and conductivity were measured on site using a Hanna Instruments HI98311 mobile instrument. The redox potential of the water was measured using a combined Pt-Ag/AgCl electrode system.
Copper analysis
The total copper concentrations in the water samples were measured using a modified method described by Anwar et al.8 Cu(II) was reduced to Cu(I) by adding 200 µL of 10 % hydroxylamine solution to 100 µL of water sample. The solution was mixed well and incubated for 5 min at room temperature. First, 1 mL of tartrate buffer (1 mL of 0.5 M HCl added to 100 mL of 0.5 M sodium tartrate and the pH adjusted to 5.5) was added and the solution vortexed. Next, 500 µL of phosphate buffer (87.7 mL of 0.2 M NaH2PO4 with 13.3 mL of 0.2 M Na2HPO4) and 100 µL of 0.1 % bicinchoninic acid (Sigma Chemical, USA) diluted in tartrate buffer were added and the mixture vortexed. Finally, 0.8 mL of distilled H2O was added and the solution was mixed well. After 10 min at room temperature, the absorbance was measured at 562 nm (Cecil CE1011 spectrophotometer).
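Converting the measured absorbance at 562 nm to a copper concentration is normally done against a linear standard curve. Below is a minimal sketch in Python, assuming a Beer-Lambert (linear) calibration; the standard concentrations, absorbances and the sample reading are illustrative placeholders, not values from this study.

```python
import numpy as np

# Hypothetical Cu(I)-bicinchoninic acid calibration data (A562 vs mg/L Cu);
# replace with the standards actually run on the spectrophotometer.
std_conc = np.array([0.0, 0.5, 1.0, 2.0, 4.0])       # mg/L Cu
std_abs = np.array([0.00, 0.11, 0.22, 0.45, 0.89])   # absorbance at 562 nm

# Least-squares straight-line fit in the Beer-Lambert region: A = m*c + b
m, b = np.polyfit(std_conc, std_abs, 1)

def cu_mg_per_l(a562, dilution_factor=1.0):
    """Convert a sample absorbance at 562 nm to a Cu concentration (mg/L)."""
    return (a562 - b) / m * dilution_factor

# Example: absorbance 0.34 measured on a 10-fold dilution of lake water
print(f"Cu = {cu_mg_per_l(0.34, dilution_factor=10):.1f} mg/L")
```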
Iron analysis
The ferrozine assay was used to determine the concentrations of both soluble Fe(III) and Fe(II). The concentrations of Fe(II) were determined using the standard method described by Lovley and Phillips.9 Then, the total iron concentration was determined by repeating the analysis following the addition of hydroxylamine (to reduce the Fe(III) present to Fe(II)). The Fe(III) concentrations were obtained from the difference between the two values.
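The Fe(III)-by-difference step is simple arithmetic; a minimal sketch follows, with illustrative numbers only, chosen to be consistent with the Fe(III) share reported later in the paper:

```python
def speciate_iron(fe2_mg_per_l, fe_total_mg_per_l):
    """Fe(III) by difference from two ferrozine measurements.

    fe2_mg_per_l: Fe(II) measured before hydroxylamine reduction.
    fe_total_mg_per_l: total Fe measured after reducing all Fe(III) to Fe(II).
    """
    fe3 = fe_total_mg_per_l - fe2_mg_per_l
    return fe3, 100.0 * fe3 / fe_total_mg_per_l

# Illustrative values: ~614 mg/L Fe(III) amounting to 99.7 % of total iron
fe3, pct = speciate_iron(fe2_mg_per_l=2.0, fe_total_mg_per_l=616.0)
print(f"Fe(III) = {fe3:.0f} mg/L ({pct:.1f} % of total Fe)")
```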
Isolation and cultivation of acidophilic bacteria from the Lake Robule on selective solid media
Acidophilic bacteria from Lake Robule were cultivated on overlay solid media10 that comprised a bottom layer of solid medium inoculated with the acidophilic heterotrophic bacterium Acidiphilium cryptum (strain SJH) and a top layer of the same medium inoculated with water from Lake Robule. A. cryptum SJH was added to the bottom layer of each plate in order to metabolize the products of agarose hydrolysis, which have an inhibitory effect on the growth of most acidophilic chemolithoautotrophs. Bacteria were cultivated on two types of overlay solid media: iFeo medium, which contains ferrous sulfate as the sole energy source and is suitable for the cultivation of iron oxidizers such as Leptospirillum ferrooxidans, and FeSo medium, which contains ferrous iron, tetrathionate and tryptone soya broth and supports the growth of iron- and sulfur-oxidizers, such as Acidithiobacillus ferrooxidans, as well as some heterotrophic acidophiles.11 The iFeo medium contained 1× basal salts (from a 50× concentrated solution: 12.5 g L-1 (NH4)2SO4, 5 g L-1 MgSO4·7H2O), 0.1 % trace elements solution and 25 mM FeSO4, while the composition of the FeSo medium was as follows (final concentrations): 1× basal salts (from the 50× concentrated solution), 0.1 % trace elements solution, 0.025 % tryptone soya broth, 5 mM FeSO4 and 10 mM K2S4O6. All media were gelled using a 0.5 % agarose solution. The inoculated plates were incubated for 30 days at 30 °C.
Extraction and analysis of DNA from the bacterial colonies
A small amount of biomass from several colonies displaying the same morphology was suspended in 20 μL of cell lysis solution (0.05 M NaOH, 0.25 % sodium dodecyl sulfate) and heated at 95 °C for 15 min in a PCR thermocycler. The crude cell lysates were allowed to cool and 180 μL of MilliQ:Tris buffer (0.01 mM Tris, pH 7.5) was added. The Fe(III)-encrusted colonies were washed first in 100 mM oxalic acid and then in sterile ultra-pure water, followed by addition of the cell lysis solution. To identify the isolates, their 16S rRNA genes were amplified using PCR (described below), the products were digested with the restriction enzyme HaeIII and the fragment lengths were determined using T-RFLP (described below). Bacterial identities were determined by comparing the fragment lengths obtained with those in the databank of acidophilic bacteria maintained at Bangor University, UK.
PCR amplification of 16S rRNA genes
Bacterial 16S rRNA genes were amplified using the primers 27F: 5′-AGAGTTTGATCMTGGCTCAG-3′ and 1387R: 5′-GGGCGGAGTGTACAAGGC-3′. PCR was performed in a final volume of 25 µL containing 12.5 µL of master mix (Promega, USA), 10 pmol of each primer, 2.5 mM MgCl2, 0.5 µL of ultra-pure dimethylsulfoxide, 1 µL of DNA template and ultra-pure water. The PCR reactions were run in a Techne TC-312 thermocycler. Amplification was performed as follows: initial denaturation at 95 °C for 5 min, followed by 30 cycles of denaturation at 95 °C for 30 s, annealing at 55 °C for 30 s and elongation at 72 °C for 90 s. The final extension was performed at 72 °C for 10 min. The PCR products were analyzed by gel electrophoresis on a 0.7 % agarose gel.
Isolation of DNA from Lake Robule
Approximately 400 mL of lake water was filtered through a 0.2 µm (pore size) sterile membrane filter. The filter was cut into segments and DNA was isolated using a MoBio Ultra Clean Soil DNA isolation kit following the manufacturer's instructions. The isolated DNA was used as a template for the amplification of the 16S rRNA genes.
Terminal restriction fragment length polymorphism (T-RFLP) analysis of the amplified 16S rRNA genes
T-RFLP analysis was used to identify the isolated bacteria and to study the diversity and relative abundance of the microorganisms present in the lake water samples, as well as in samples following bioleaching of the copper concentrate. In this case, PCR amplification was performed as described above, using a 27F primer labeled with Cy5 dye at the 5′ end (MWG Biotech, Germany) and an unlabelled 1387R primer. The PCR products were digested using three different restriction endonucleases, HaeIII, AluI and CfoI, in three separate reactions. The reaction mixture consisted of 0.5 µL of enzyme, 1 µL of enzyme-specific buffer, 1 µL of PCR product and 7.5 µL of ultra-pure water, and was incubated at 37 °C for 1 h. Mixes containing 2 µL of digestion products and 28 µL of sample loading solution were analyzed using a Beckman Coulter CEQ 8000 capillary electrophoresis apparatus. The sample loading solution contained 0.5 µL of the CEQ DNA size standard 600 dissolved in 27.5 µL of formamide. The T-RFLP analysis for each restriction enzyme was performed in triplicate and the summarized results are presented.
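The relative abundances reported in the Results are derived from the electropherogram peak areas: each terminal restriction fragment's peak area is expressed as a percentage of the total peak area. A minimal sketch of that calculation, with peak areas as placeholders chosen to reproduce the percentages reported below:

```python
# Relative abundance of each terminal restriction fragment (T-RF) as a
# percentage of the total peak area in a T-RFLP profile. The areas are
# illustrative, not raw instrument output.
peak_areas = {
    "A. cryptum": 50000,
    "L. ferrooxidans": 40000,
    "Acd. rubrifaciens": 1300,
    "unassigned (pseudo T-RFs)": 8700,
}

total_area = sum(peak_areas.values())
for trf, area in peak_areas.items():
    print(f"{trf:28s} {100.0 * area / total_area:5.1f} %")
```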
Bioleaching of copper concentrate
To evaluate the bioleaching potential of bacteria inhabiting Lake Robule, a concentrate containing 17 % of copper from the Copper Mine Majdanpek, Serbia, which contained chalcopyrite as the dominant copper sulfide mineral, was used as the test material.The basal salts solution (100 mL, pH 2.0) was transferred into 250 mL conical flasks (in triplicate) and 1 g of concentrate and 1 mL of water from Lake Robule were added.The cultures were incubated at 30 °C and shaken at 150 rpm.The concentrations of soluble iron and copper, pH, redox potentials (using a combined Pt-Ag/AgCl electrode) and the bacteria present in the cultures were determined after three weeks of incubation.
Physical and chemical properties of Lake Robule
The physical and chemical properties of Lake Robule measured on site are given in Table I. The lake water is highly acidic and characterized by high conductivity due to the presence of elevated concentrations of dissolved ions. The highly positive redox potential of the lake water is a consequence of the high concentration of Fe(III), which accounts for 99.7 % of the total iron present.
T-RFLP analysis of the bacterial community of Lake Robule
The results of T-RFLP analysis of the PCR-amplified 16S rRNA genes suggested that the bacterial diversity in Lake Robule was very limited, as only three bacterial species were identified (Fig. 1). According to the T-RFLP profiles, the bacteria present in this extreme environment were L. ferrooxidans, A. cryptum and (more tentatively) Acidisphaera rubrifaciens. The presence of L. ferrooxidans and A. cryptum was confirmed by terminal restriction fragments (T-RFs) produced with all three restriction enzymes, but the presence of Acd. rubrifaciens was less certain, as only one corresponding T-RF (in the AluI digests) was detected (Table II). The terminal restriction fragments observed in the T-RFLP profiles that could not be related to any fragment in the database are most likely pseudo T-RFs, i.e., PCR-related artifacts.12 The approximate relative abundance of bacteria in the lake water was calculated from the peak area of each terminal restriction fragment as a percentage of the total peak area. The most abundant bacterium was A. cryptum (50 %), followed by L. ferrooxidans (40 %) and Acd. rubrifaciens (1.3 %).
The relative abundance of unidentified T-RFs (pseudo T-RFs) was 8.7 %.
Isolation of bacteria from Lake Robule
Three species of acidophilic bacteria were isolated from Lake Robule on overlay plates. Only very small Fe-encrusted colonies (identified as L. ferrooxidans) grew on the iFeo medium. In contrast, three colony variants were identified on the FeSo overlay medium: very small Fe-encrusted colonies of L. ferrooxidans, larger Fe-encrusted colonies with translucent halos of At. ferrooxidans, and round, non-ferric-iron-stained colonies of A. cryptum. The colony variants that grew on the FeSo overlay plates are shown in Fig. 2. The most abundant colonies were those of A. cryptum, followed by those of L. ferrooxidans, while the colonies of At. ferrooxidans were the least abundant.
Bioleaching test
After three weeks of the bioleaching experiment, the pH value of the solution was 2.20 and the redox potential was 820 mV. The concentration of total iron was 815±1.633 mg L-1 and the concentration of total copper was 808.97±5.735 mg L-1. These concentrations are the mean values of three measurements.
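As a rough check on leaching performance, the measured copper in solution can be expressed as a fraction of the copper initially available. This is a sketch under stated assumptions (1 g of 17 % Cu concentrate in a constant 100 mL volume, with no corrections for evaporation or sampling losses), not a calculation from the paper:

```python
# Percent copper extraction after three weeks of bioleaching.
concentrate_g = 1.0       # mass of concentrate added per flask
cu_fraction = 0.17        # 17 % Cu in the Majdanpek concentrate
volume_l = 0.100          # 100 mL basal salts solution

cu_available_mg = concentrate_g * cu_fraction * 1000.0   # 170 mg Cu
max_conc_mg_per_l = cu_available_mg / volume_l           # 1700 mg/L if fully leached

measured_mg_per_l = 808.97
extraction_pct = 100.0 * measured_mg_per_l / max_conc_mg_per_l
print(f"Cu extraction = {extraction_pct:.0f} %")         # roughly 48 %
```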
T-RFLP analysis was conducted using only HaeIII digests, as this restriction endonuclease produced a different T-RF for each of the bacterial species identified in the lake water. Two bacterial species were identified: L. ferrooxidans and At. ferrooxidans. However, an additional (and relatively minor) T-RF, not found in the HaeIII digests of the genes amplified from Lake Robule itself, was observed. No acidophilic bacterium corresponding to this T-RF was present in the database. The relative abundances of the microorganisms present in the bioleach liquor are shown in Fig. 3.

DISCUSSION

Lake Robule has been studied for over thirty years. Korać and Kamberović13 reported that the pH of the lake was 2.97, and that it contained high concentrations of iron (895 mg L-1), sulfate (4145 mg L-1) and copper (55.6 mg L-1). Beškoski et al.3 monitored the physical and chemical properties as well as the microbial diversity of the lake water between 1975 and 2008, and identified At. ferrooxidans as the most abundant bacterium in the lake. These authors reported that the concentration of copper decreased between 1975, when it was 153 mg L-1, and 2008, when it was 96 mg L-1. The concentration of soluble iron, as well as the pH and redox potential, fluctuated during this time. The highest and lowest concentrations of iron were detected in 1988 (961 mg L-1) and 1975 (562 mg L-1), respectively. The redox potential of the water was highest in 1988 (527 mV) and lowest in 1975 (297 mV); it was measured using a saturated calomel reference electrode (personal correspondence with the author). The lowest pH of the lake water was recorded in 1975 (2.40) and the highest in 1988 (2.81). The results obtained in the present study were concordant with those of these authors, with the exception of the redox potential of the water (measured with a Pt-Ag/AgCl electrode pair), which was higher than any value reported in previous studies. This could be explained by the dominance of Fe(III), which constitutes 99.7 % of the total iron in the water.
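Because the historical redox values were measured against a saturated calomel electrode (SCE) while the present study used Ag/AgCl, comparing them requires conversion to a common reference scale. A minimal sketch, assuming textbook 25 °C offsets of roughly +244 mV (SCE) and +197 mV (saturated-KCl Ag/AgCl) versus the standard hydrogen electrode; the exact offsets depend on the filling solution and temperature, so these are assumptions rather than values from the paper:

```python
# Convert redox potentials measured against different reference electrodes
# to the standard hydrogen electrode (SHE) scale for comparison.
E_SCE_VS_SHE = 244    # mV, saturated calomel electrode (assumed, ~25 degC)
E_AGCL_VS_SHE = 197   # mV, Ag/AgCl with saturated KCl (assumed, ~25 degC)

def sce_to_she(e_mv):
    return e_mv + E_SCE_VS_SHE

def agcl_to_she(e_mv):
    return e_mv + E_AGCL_VS_SHE

# 1988 maximum (vs SCE) compared with the present study's reading (vs Ag/AgCl)
print(sce_to_she(527))   # ~771 mV vs SHE
print(agcl_to_she(850))  # ~1047 mV vs SHE, still well above the 1988 value
```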
T-RFLP analysis is a molecular fingerprinting technique that is widely used in studies of microbial ecology. It does not require microorganisms to be isolated in order for them to be identified, as it is based on the analysis of genes amplified using environmental DNA as a template.14 It is a rapid and reliable molecular method for the identification of microorganisms in environmental samples when the microbial diversity of the analyzed sample is low.15 The T-RFLP profiles obtained with three different restriction enzymes confirmed with high confidence that the most abundant bacteria in Lake Robule are A. cryptum and L. ferrooxidans (Fig. 4). In addition, there is an indication that a bacterium related to Acd. rubrifaciens might be present in relatively low numbers, but since only one T-RF characteristic of this bacterium was observed following digestion with AluI, further analysis (e.g., construction and analysis of a clone library) is needed to resolve this. Interestingly, At. ferrooxidans, previously reported as the dominant bacterial species in Lake Robule,3 was not detected in the lake water by T-RFLP analysis, although it was isolated on solid medium (FeSo plates), along with A. cryptum and L. ferrooxidans. This indicates that while At. ferrooxidans is present in the lake, its relative abundance is low compared to those of both L. ferrooxidans and A. cryptum.
Earlier studies suggested that microbial communities in acidic environments were dominated by At. ferrooxidans, but this appears to have been an artifact of the methods used, particularly enrichment culturing and, most probably, most-probable-number counts.7 Media that have been widely used for the cultivation of acidophilic bacteria, and sometimes still are, such as 9K,16 contain very high concentrations of Fe2+ (9 g L-1 in 9K) that favor the growth of At. ferrooxidans. Even if only a very small number of At. ferrooxidans cells are present in a sample, this species will dominate after cultivation in 9K medium. Therefore, the results obtained in such studies are not surprising, as At. ferrooxidans thrives in environments with high concentrations of Fe2+ and a low redox potential.17 In contrast, L. ferrooxidans has a far higher affinity for ferrous ions and a greater tolerance of ferric ions, and therefore tends to out-compete At. ferrooxidans in high redox potential environments.18 Both direct plating of mine waters onto overlay media19 and molecular methods such as T-RFLP and fluorescent in situ hybridization (FISH)7 have revealed that the most abundant bacterium in iron-rich acidic environments is often L. ferrooxidans. The concentration of ferric iron in Lake Robule at the time of sampling was 614 mg L-1 (11 mM) and the redox potential was 850 mV, conditions that are far more conducive to the growth of L. ferrooxidans than of At. ferrooxidans (Table I). The composition of the microbial community in the lake reported by Beškoski et al.,3 which differs significantly from the results presented in this paper, is probably a consequence of the methods previously used to cultivate bacteria from the lake water. However, it is also possible that At. ferrooxidans was indeed relatively more abundant in the past, when redox potentials were generally lower (and more variable) than they have been more recently.
At the end of the bioleaching experiment, only L. ferrooxidans and At. ferrooxidans were detected in the mineral leachate (Fig. 3). At the start of the experiment, both the ratio of the Fe(III) to Fe(II) concentrations and the redox potential were low, but both increased during incubation of the culture. Initially, At. ferrooxidans would have outgrown L. ferrooxidans, since At. ferrooxidans has a faster growth rate than L. ferrooxidans in low redox potential solutions. However, because the leptospirilli have a greater affinity for Fe(II) and are less sensitive to Fe(III), they would become dominant in the later stages of the bioleaching process.18 These data indicate that At. ferrooxidans exists in the lake water, but its numbers in the lake are extremely low and undetectable by T-RFLP. The obligately heterotrophic acidophile A. cryptum, the most abundant bacterium in the lake water as determined by T-RFLP analysis and isolation on solid medium, was not detected at the end of the bioleaching period, since it is more sensitive to copper than both At. ferrooxidans and L. ferrooxidans, tolerating a maximum of about 10 mM Cu (635 mg L-1).20 However, the concentration of copper determined in the bioleaching solution was greater than this, i.e., 808.97 mg L-1.
The numbers of heterotrophic acidophiles in acidic, sulfide mineral-rich environments are often much lower than those of chemolithoautotrophic acidophiles, such as L. ferrooxidans and At. ferrooxidans. Heterotrophic acidophiles in these environments use the metabolic products (lysates and exudates) of autotrophic acidophiles as growth substrates, as well as any extraneous organic carbon. In this mutualistic relationship, the autotrophs produce growth substrates for the heterotrophs, while the heterotrophs, by utilizing them, eliminate organic compounds (notably small molecular weight aliphatic acids) that are toxic to most acidophiles.21 Since autotrophic acidophiles produce only small amounts of organic compounds, the numbers of heterotrophic acidophiles are often lower than the numbers of autotrophic acidophiles. However, if there is enough organic substrate, acidophilic heterotrophs can grow faster and can outnumber the autotrophs. One potential source of organic matter in the lake is a municipal waste dump in close proximity to the lake, while another potential source could be acidophilic algae. At the bottom of the lake, green and filamentous biomass exists in the form of a microbial mat, indicating the presence of algae and fungi. Recent reports have shown that, in acidic environments exposed to sunlight, the primary producers of organic matter are algae. Acidophilic algae excrete glycolic acid and sugars and sustain the growth of heterotrophic acidophilic bacteria, including Acidiphilium spp.22 The production of oxygen by algae also aids the growth of chemolithoautotrophic acidophiles. Since Leptospirillum spp. are very sensitive to the presence of organic compounds in the environment (particularly organic acids), it appears that the heterotrophic acidophilic bacterium A. cryptum efficiently metabolizes organic compounds, facilitating the growth and activity of L. ferrooxidans within Lake Robule. The bacterial consortium populating Lake Robule is limited in its biodiversity and is dominated by two bacterial species: A. cryptum and L. ferrooxidans. The most abundant microorganism in the lake is the heterotrophic bacterium A. cryptum, which suggests that the lake water has a constant supply of organic matter; possible sources are the nearby municipal waste dump and the acidophilic algae that populate the microbial mat at the bottom of the lake. L. ferrooxidans is an autotrophic iron oxidizer that thrives in environments, such as that of Lake Robule, with high concentrations of Fe3+ and very positive redox potentials. These conditions are less suitable for the growth of At. ferrooxidans, which was not detected by T-RFLP analysis but was isolated directly from lake water on an overlay solid medium. At. ferrooxidans was also detected in the leach liquor from a test of the bioleaching of copper from a chalcopyrite concentrate using Lake Robule water as the inoculum. This bacterium prefers low redox potentials and high concentrations of Fe2+, and it grew faster than L. ferrooxidans during the initial stages of the bioleaching process. This finding indicates that the lake water contains At. ferrooxidans, but in relatively low abundance.
Cultivating bacteria from Lake Robule on media with high concentrations of ferrous ions could therefore lead to wrong conclusions concerning the microbial diversity of the lake. Moreover, this study showed that using only molecular, cultivation-independent methods (such as T-RFLP) to evaluate the microbial diversity of environmental samples is not sufficient, since At. ferrooxidans was not detected by this method. For the most accurate evaluation of microbial diversity in extremely acidic environments, the employment of both molecular- and cultivation-based methods is required.
The physical and chemical properties of the lake display both seasonal and long-term variations. Consequently, the microbial community of Lake Robule is also probably subject to variation, and the results presented in this paper represent lake water sampled during the summer months in the recent past. Future research should focus on tracking the changes in the physical properties and chemistry of Lake Robule, followed by investigation of the microbial diversity by combining molecular methods and plating on overlay solid media. This approach would give insight into changes in the microbial communities that populate Lake Robule over time, and could reveal correlations between changes in the physical and chemical properties of the lake water and the structure of the bacterial consortium that inhabits this extreme environment.
Fig. 1. Analysis of restriction fragments obtained from a sample of Lake Robule water. The lengths of the T-RFs identified after digestion with the three endonucleases (x-axis) and their relative abundance (y-axis).
Fig. 2. Colonies of the three species of acidophilic bacteria on a FeSo overlay plate inoculated with water from Lake Robule.
Fig. 3. Relative abundance of bacteria in solution after bioleaching, determined by T-RFLP analysis.
Fig. 4. Relative abundance of bacteria in Lake Robule, determined by T-RFLP analysis.
Table I. Physical and chemical properties of Lake Robule water
"year": 2014,
"sha1": "312ced282972dbab5e3f3d0ab936baf343cf0e7e",
"oa_license": null,
"oa_url": "http://www.doiserbia.nb.rs/ft.aspx?id=0352-51391300071S",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "312ced282972dbab5e3f3d0ab936baf343cf0e7e",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
Negative Symptom and Cognitive Deficit Treatment Response in Schizophrenia. Edited by R. S. E. Keefe and J. P. McEvoy. (Pp. 201.) American Psychiatric Press: Washington, DC. 2001.
Some years ago I attended a lecture given by the then outgoing President of the Royal College of Psychiatrists. The topic was the future role of the consultant in 'general adult psychiatry'. The take-home messages were that consultants would become increasingly acquainted with neuroscience but see fewer patients; would manage teams but see only 'difficult cases'; that psychotherapy would be ceded to psychologists (and would comprise only cognitive behavioural therapies). Nowadays, I meet trainees who are disenchanted with the state of psychiatry. Journal clubs spent picking apart company-sponsored drug trials lack the sustenance of a good paper by Freud; out-patient clinics are sometimes perceived as prescription services. Has the biological paradigm been 'too successful'? The risk is that the perceived reductionism of the current paradigm may precipitate a reactionary flight into a narrow, psychodynamic obscurantism. The book under review may go some way towards encapsulating the disquiet felt by some of these psychiatrists in training.
Peter Zachar's text is part of a series entitled Advances in Consciousness Research and, throughout, he argues for a space for psychological description, theory and practice in contemporary psychiatry. His theoretical opponents reside among the 'biomedical materialists' (of psychiatry) and 'eliminative materialists' (of philosophy). These authors appear to argue for a radical reduction of the first-person, subjective world of experience (and 'folk belief') to a third-person, objective world of scientific measurement and data ('broken brains'). Proponents of the latter view purvey a form of 'scientism' (below), which is narrow in its purview and heavily dependent upon analogies with the more neurological end of psychiatric practice. Hence, if it can be argued that all mental disorders are really brain disorders akin to general paresis of the insane (GPI) or traumatic brain injury (TBI), then what is really required for their treatment is invariably a biomedical solution. There is no point 'wasting time' on psychological understandings when physical treatments are required. Zachar pulls at the seams of each of these assertions, eventually demonstrating that they may be unravelled. However, the biomedical materialists may also go on to espouse moral justifications for their project: biomedical perspectives on psychiatric conditions will reduce blame and stigma, and harsh treatments will be curtailed, while psychological theories and treatments unfairly burden sufferers. This is a Utopian quest that sufficient knowledge of genetics and neurophysiology will serve to instantiate. How much of this is true? Is it all just 'scientism'?
'Scientism is the (implicit) presumption that, in addition to the superiority of scientific methodology, the more rigorously and exclusively we use the scientific approach in any endeavor, the more superior the product' (Zachar, p. 115, emphasis added).
Zachar's response is a measured, careful and (perhaps) overly long consideration of the evidence. He is explicit in stating his bias: he is for a 'broadly considered co-evolutionary perspective, which accepts multiple levels of analysis, explanatory pluralism, the ecology of neuroscience, and molar explanation (in the evolutionary sense of the term)' (p. 269). He shows that the most rigorous diagnostic systems rely for their application upon the elicitation of subjective information describing mood or perceptions, that empathy and understanding are central to practice, and that psychiatric diagnoses do not resemble 'natural kinds'. Instead, Zachar posits that our diagnostic categories comprise 'practical kinds': prototypical descriptions, of clinical utility but not equivalent to 'ultimate reality'. Hence, investigators must be careful to avoid the reification of concepts they may misapply. Zachar argues that a diagnosis has multiple uses: providing a common language between professionals, access to services, insurance cover. Such categories are not random or arbitrary, they do say something about the way subjects become ill, but they do not (yet) represent natural kinds (nature has yet to be 'carved at her joints'). Yet Zachar is careful to avoid slipping into relativism, and this is not a 'post-modern' critique: throughout, the consideration of case material is sober and concerned with achieving the appropriate level of description. Does borderline personality disorder resemble GPI or TBI? Can we be so confident that a purely biological level of explanation will suffice? If we tell a patient in the clinic that his affective disorder is 'due to' an imbalance of neurotransmitters, are we telling the truth? Or are we saying more than we really know?
Zachar addresses what he sees as the excessive criticism of psychoanalysis, Freud and interpersonal therapies. He critiques the 'false dawns' of the biomedical paradigm: frontal lobotomy, social Darwinism and eugenics. Again, he is careful to state that biological theories and treatments are not inherently flawed, merely that they may be perverted or misapplied. Equanimity is called for; there is a place for biological accounts and another for the psychological. Importantly, psychiatry does not collapse into neurology; these are still distinct specialties.
In many ways the dilemma for clinical psychiatry is similar to that for contemporary philosophy. The latter may be caricatured as a contrast between sterile language games that have little to do with wisdom and arcane hermeneutics where sentences may seem to lack veridical content (Critchley, 2001). This distinction approximates to one between the Anglo-American and Continental traditions, respectively (though not exclusively). Is there a problem with 'meaning' in psychiatry, or is Zachar merely pushing against an open door? I think his thesis is timely, indeed perennial; no matter how good our biological accounts may be, we will still be interacting with human agents who experience distress and describe beliefs about the world.
A broader, political perspective is also relevant here: why do ex-psychiatric patients require a 'survivors' movement'? What is the appropriate relationship that should pertain between doctors and the pharmaceutical industry? Sceptics may wish to consider the contents of a recent issue of Adbusters magazine, devoted entirely to the perceived deception practiced by industry and its apologists. Zachar's account is not polemical but it does raise a central question: when we understand the mind in a material way, does this affect what we are thinking when we listen to a patient?
It may be argued that conscious states provide a necessary level of description in psychiatry, if only because they tell the subject the consequences of their actions (Spence et al. 2002), and in some cases comprise the very result that the subject seeks (as when a substance is procured to alter consciousness). Describing the neural correlates of such an experience may be informative but it does not provide an exhaustive account of that experience (or the motivation for its pursuit).
Relevant here may be a philosopher who is not mentioned by Zachar, but whose work dealt explicitly with the work of conscious thought and its place in action. When asked why she encouraged thinking in her students, rather than merely teaching them (didactically), Arendt responded: [W]hen the chips are down, the question is how they will act. And then this notion[:] that I examine my assumptions … that I think 'critically', and that I don't let myself get away with repeating the cliches of the public mood … And I would say that any society that has lost respect for this, is not in very good shape.

In 1998 Professor Sir Michael Rutter retired from the MRC Child Psychiatry Unit. This was a major event for child mental health research and clinical practice, both in the UK and worldwide. His retirement coincided with the opening of the MRC Centre for Social, Developmental and Genetic Psychiatric Research, of which he was the initial director. Sir Michael's remarkable research and clinical contributions to child and adolescent psychiatry occurred continuously over four decades. His outstanding contribution was recognized by the Royal College of Psychiatrists and the Association of Child Psychology and Psychiatry, who combined forces to publish the contents of two separate celebratory events as a single Festschrift in honour of his achievements. A second volume, reproducing some of his classic papers, acts as both a compilation and valuable source material for much of what is referred to in Volume 1. The second volume also provides a remarkable overview of the breadth of Michael's interests over his working lifetime. The selected papers provide a very real insight into Michael's skills as a clinical scientist and his outstanding capacity for the synthesis of scientific fact from many sources, for both an academic and a clinical audience. The topics covered in both volumes include contributions on psychiatric genetics, autism, conduct disorders, social psychiatry and the importance of longitudinal and prospective studies. To do all this within a developmental framework, and to be recognized as one of the founders of the modern discipline of developmental psychopathology, is an extraordinary achievement for one working lifetime.
Michael Rutter entered the field of child psychiatry at a time when the subject was driven by opinion, and psychological theories with little scientific validity dominated the practice within child and adolescent mental health services. From the mid-1960s he set about the task of introducing measurement of childhood behaviour and the social environment, together with assessment and classification procedures for clinical syndromes. By the 1970s he had set the standards for child psychiatry research worldwide and was publishing not only original science but also highly influential books and monographs on theory and practice. His views have always been based firmly on facts and observations derived from a critical scientific method. Michael has spent virtually all of his career at the Institute of Psychiatry, the foremost centre for psychiatric research in the world. It is from this secure base that he developed his ideas and methods and promulgated, in particular, some of the most influential studies of autism. He demonstrated the importance of genetic influences in the origins of this most profound neuropsychiatric disorder, firmly refuting the notion that the syndrome was caused by maternal emotional indifference in the early years of life. Chapter 6 of Volume 1, entitled 'Five decades of research on autism' by Fred Volkmar, is a most readable summary of where the field has gone since Rutter established a biological basis for this disorder. Chapter 4 in Volume 2 provides a fine illustration of a set of papers from Michael and colleagues over the past two decades regarding the role of brain dysfunction in autism and related neuropsychiatric conditions. His interests in autism were clinical as well as scientific, as indicated by the inclusion of his own excellent chapter on the interplay between research and clinical work in Volume 1.
His early work on the influence of parental psychiatric disorder and the subsequent studies of the role of psychosocial adversities in child development and psychopathology paved the way for a reconsideration of how social processes exert different effects on the liability for psychiatric disorder over time. He demonstrated that childhood developmental trajectories are determined not by some 'fixed dose' of early family difficulty but through a dynamic and ongoing interplay of experience and behaviour. The implications were that there are opportunities and turning points throughout childhood for the disadvantaged and the institutionalized young person. His psychosocial studies have paved the way for delineating the mechanisms and processes by which life experiences, both good and bad, exert their effects on the developing mind. This psychosocial side of Michael's work is abundantly represented throughout these volumes. The inclusion of two seminal papers in the first section of Volume 2 on the concept and practice of developmental psychopathology, and an essay on the same topic in Volume 1 by Dante Cicchetti, provide very readable accounts of what this term means and why it is important. It would be easy to continue to describe each and every contribution in this Festschrift in glowing terms. Instead, I recommend these volumes as a 'must purchase' for mental health professionals and all libraries. They document, more than any other current publication, how child and adolescent psychiatry came of age in the last third of the twentieth century, thanks in very large measure to the extraordinary achievements of Michael Rutter and his colleagues at the Institute of Psychiatry, London.

A past generation might consider the title of this book an oxymoron; forensic psychiatry was once the province of closed institutions, with merely a handful of specialists involved. This book marks forensic psychiatry's coming of age and charts its transformation, not merely in the number of psychiatrists (in the UK, a nine-fold increase in consultants in the last 30 years) but in its transition to all levels of secure setting, especially where security appears ethereal: the community. So if it is not the walls, what defines the speciality? In contrast to other psychiatric sub-specialities, it is the social, the breaking of laws or the risk of the same, which is the key determinant, and the sociological perspective is one that is particularly well addressed in this work. Alec Buchanan has gathered well-informed commentators from a number of disciplinary perspectives. There is a light editorial touch, which allows spirited debate on the issues in question, such as the ethics of risk, the role of compulsory treatment in the community and the management of personality disorder. The reader may not come away with easy answers but will certainly be better informed about the questions. What, after all, has changed in the science of forensic psychiatry to help explain its rise? As Joan Busfield comments, the two over-riding empirical conclusions from research reveal that mental disorder predicts violence rather poorly and that the accuracy of risk prediction is similarly limited. Yet the one theme to run throughout the book is that of risk. The language of risk assessment and management was supposed to reduce stigma and aid treatment. It has now perhaps created a stigma of its own; whereas only a few patients were once ascribed the label of 'dangerous', everyone now has a level of risk.
The terminology of risk, adopted in the early 1990s, resonated with changes in public attitudes about risk. To understand the expansion of forensic psychiatry, one needs to appreciate this wider context, which is well described by David Tidmarsh and other contributors. From the government policy perspective, the catchphrase is 'safe, sound and supportive' mental health services. From the professional perspective, safe practice is risk averse and failure to prevent violence is coupled with the fear of a blame-prone culture.
Forensic clinicians need to be well informed about the risk debate. Proper appreciation of why the terminology was adopted in psychiatry, and of the current state of the art regarding risk assessment, should facilitate good clinical practice. Contributions by Jennifer Skeem and Edward Mulvey are excellent on this, but also read the introduction by Paul Mullen and the chapter by Nikolas Rose, which give timely ethical warnings. As Mullen states, '[if] risk management is to emerge out of risk assessments; if risk management is to amount to more than coercion and incapacitation; if risk management is to be a legitimate activity for health professionals, then assessments must focus on establishing those vulnerabilities contributing to offending which are open to modification through appropriate health related treatments.'
A key theme that emerges is the interface between general and forensic psychiatry. Different models of community forensic psychiatry, and definitions of which patients might be suitable for a specialist service, are appraised and contrasted. The book balances contributions with a practical and organizational focus against contributions that are more reflective and place current concerns in a wider context. Among the clinical contributors there is a preference for the former, while the non-clinical academics prefer the latter.
The book as a whole is both practical and theoretical. Clinically it is well informed and evidence based, but its excellence is to place that material in a wider ethical and moral context. This is a book that marks the development of forensic psychiatry in Britain and should be a core text for forensic practitioners, as well as providing important reading for those with an interest in forensic psychiatry or who are responsible for planning forensic services.

Health and, to some extent, social services in the UK continue to organize themselves based on the age of their users. Old age psychiatrists predominantly look after patients over the age of 65, while geriatricians use 75 as their cut-off. Unfortunately, most humans are oblivious of such arbitrary cut-offs and develop the wrong illness at the wrong time! Patients with early onset dementia and their carers often have to cope not only with the disease itself but also with the boundaries of various services and poorly trained specialists. During my specialist registrar training in adult and old age psychiatry I received considerable training in the diagnosis and management of dementia in older people, but my experience of managing patients with early onset dementia was, to be polite, limited. Hence, I read this first 'comprehensive and international' book on early-onset dementia with great enthusiasm. I liked the layout of the book, and found most of the chapters easy to read. Judicious use of tables and diagrams makes difficult text easier to understand and helps focus on key points. The book is divided into three sections (as per the preface, though not marked in the index). The first section concentrates on assessments (physical, psychological, psychiatric and radiological) and the pathology of early onset dementia. The second section includes chapters on individual diseases and the third section covers the management of early onset dementia. The book is truly comprehensive. It includes well-researched information on diseases causing early onset dementia that I did not know even existed! I especially enjoyed reading the chapters on neuropsychological assessment, functional neuroimaging, Huntington's disease, and inflammatory and infective disorders. I got the impression that one of the main aims of the book was to encourage clinicians to look for specific treatable conditions when dementia is of early onset, especially if features atypical of common degenerative causes are present. John Hodges has done well in this respect.
The book describes the clinical, neuropsychological, neuroradiological and pathological features of early onset dementia in great depth (the first two being topics of special interest to the editor). There is some overlap and repetition of information, as the first section covers many causes of early onset dementia that have chapters of their own in the second section of the book. It is useful to have key points highlighted at the end of each chapter, but in some chapters the key points are rather general and not specifically related to early onset dementia. For some reason, alcohol-related dementia, which often presents early, does not get much mention. Also, I found the third section, on management, rather weak, especially with regard to non-pharmacological approaches. This is particularly relevant, as the book's title includes 'a multidisciplinary approach'. I hope that the next edition will include contributions from other professionals (e.g. social workers, occupational therapists, specialist nurses). A key question facing clinicians and managers in health and social care is how to meet the multi-faceted and unique needs of patients with early onset dementia. In this respect, it would have been useful to have a chapter describing the organization of specialist services in different countries, followed by a discussion of the pros and cons to consider. This book will be of particular interest to readers in the UK, as the current service provision for these vulnerable patients varies considerably and needs to be improved in line with National Service Frameworks.
I would recommend this book to doctors from the fields of medicine, neurology and psychiatry who have a special interest in patients with early onset dementia. For other doctors, who do not commonly come across such patients in their practice, it would still be useful to have on the shelf as a reference book.

This book is a constructive contribution to the issue of treating negative and deficit symptoms in schizophrenia. They are the major reason for patients' failure to survive in the community, and the ability to improve them is often perceived pessimistically by clinicians. The book is authored mostly by psychiatrists, but there are some chapters by psychologists, and it is of benefit that some of the authors have written their contributions with knowledge of the other chapters. The research methodology is closely inspected and it is constantly emphasized how the area is bedevilled by the failure of many researchers to distinguish between the primary cognitive deficit and secondary negative features of the condition. This results in an appropriately cautious interpretation of the research. Areas covered include the assessment of social functioning, cognitive deficit, the experience of emotion and the family perspective. The biological basis and pathophysiology of negative symptoms are also addressed. For trainee, trainer and researcher, there is much to be gained from reading this book. It would have benefited from a broader professional range of authors, as the psychosocial treatments were not addressed as fully as they might have been. It is, after all, nurses and occupational therapists who spend most time rehabilitating these patients, and the research from these professions is not particularly drawn upon. I also felt the emphasis on secondary negative features was over-restricted to the areas of EPS and drug side-effects, once again to the neglect of psychosocial causes. An important point is that the cover and print are dowdy and there were virtually no diagrams to heighten interest. In spite of these reservations, this reviewer used the book as the basis for a teaching session that was well received. This is perhaps the best proof of the value of this book.
"year": 2003,
"sha1": "da9c104062af29be7184dddf30ce3f2fffcf5367",
"oa_license": "CCBYNC",
"oa_url": null,
"oa_status": null,
"pdf_src": "Cambridge",
"pdf_hash": "da9c104062af29be7184dddf30ce3f2fffcf5367",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": []
} |
PEMPHIGUS VULGARIS MASQUERADING AS SUBCORNEAL PUSTULAR DERMATOSES – A CASE REPORT
Both pemphigus vulgaris and subcorneal pustular dermatosis are intraepidermal blistering disorders, although the treatments for the two differ. We present a 26-year-old male patient with multiple vesicles and bullae filled with clear fluid as well as pus, a positive hypopyon sign, lesions predominantly over the trunk, and crusted lesions over the scalp. The patient did not have mucosal involvement at presentation, and the Nikolsky's sign and bulla spread sign were negative. A clinical diagnosis of subcorneal pustular dermatosis and IgA pemphigus was made; however, direct immunofluorescence was suggestive of pemphigus vulgaris, as was histopathological examination. The patient responded to treatment with oral corticosteroids. A previous case of pemphigus foliaceus presenting as IgA pemphigus and responding to dapsone has been reported, as has a report of pemphigus vulgaris presenting with multiple pustules.
Introduction
Pemphigus is a group of chronic autoimmune bullous diseases of the skin and/or mucosae characterized by the presence of desmoglein 3 and/or 1 antibodies. There are two main types of pemphigus: pemphigus vulgaris (and its variant pemphigus vegetans) and pemphigus foliaceus (and its variant pemphigus erythematosus) [1]. In pemphigus vulgaris, flaccid blisters filled with clear fluid arise either on normal skin or on an erythematous base. Mucosal erosions may precede cutaneous lesions by many days or months. Direct immunofluorescence shows deposition of IgG and C3 in the intercellular spaces in a 'fishnet' pattern. Subcorneal pustular dermatosis presents with flaccid pustules in which the pus characteristically accumulates in the lower half. Both direct and indirect immunofluorescence are negative. IgA pemphigus presents as flaccid vesicles or pustules, usually associated with pruritus. The lesions have a predilection for the axillae and groins. Intercellular IgA deposition is seen in the epidermis, either at different levels or throughout. This case is unique in that the patient presented with lesions clinically suggestive of subcorneal pustular dermatosis, but DIF proved otherwise.
Case Report
A 26-year-old married male fisherman presented to us with painful lesions over the scalp of two months' duration and with fluid- and pus-filled vesicles and bullae over the trunk and upper limbs of 5-6 days' duration, associated with itching. A few lesions had ruptured after scratching, giving rise to erosions. There was no history of spontaneous rupture of lesions, of peripheral extension, or of healing with milia formation. There was no history of mucosal lesions at presentation, no history of prior drug intake or of any constitutional symptoms, and no history suggestive of wheal formation or severe itching prior to the development of lesions. There was no history suggestive of systemic involvement. On examination he had crusting, erosions, and matting of hair over the scalp, with multiple tense vesicles and bullae filled with both clear fluid and pus predominantly over the trunk (Fig. 1) and a few over the arms, neck, and medial aspect of the thigh. The hypopyon sign was positive (Fig. 2).
A few erosions over the trunk were seen. Oral cavity examination revealed a single erosion over the left buccal mucosa. The genitals were normal. Both the marginal and direct Nikolsky's signs and the bulla spread sign were negative. A clinical diagnosis of subcorneal pustular dermatosis and IgA pemphigus was made, and the patient was investigated. Tzanck smear showed only pus cells, and Gram staining from a pustule revealed gram-positive cocci and pus cells. A perilesional skin biopsy for direct immunofluorescence showed intercellular staining with IgG and C3, and a skin biopsy from a vesicle showed suprabasal acantholysis with cleft formation and a detached roof of the bulla (Fig. 3). Basal cells with increased melanin pigmentation, arranged in a row-of-tombstones fashion in the base of the bulla, were also seen. Indirect immunofluorescence showed intercellular staining with IgG at 1:100 dilution. ELISA for desmoglein 1 and 3 was positive, with >200 RU/ml. Complete blood count, fasting and postprandial blood sugars, renal function, liver function tests, and urine examination were within normal limits. Chest X-ray was normal. Serum protein electrophoresis showed a normal pattern. On the basis of the DIF, IIF, ELISA, and histopathology findings, a diagnosis of pemphigus vulgaris was made, and the patient was treated with oral prednisolone 1 mg per kg body weight, to which he had a dramatic response.
Discussion
The typical presentation of pemphigus vulgaris is as flaccid blisters, which may occur anywhere on the skin surface. The blisters burst to give rise to erosions, which have a tendency to spread at their periphery. Some 50-70 % of patients may present with oral erosions, which are irregularly shaped, over the palate or buccal mucosa [2]. Many atypical presentations of pemphigus vulgaris have been reported. A 60-year-old patient presented with ulceration over the bilateral dorsa of the feet, which persisted for four months before the characteristic lesions of pemphigus vulgaris appeared. A 30-year-old female patient presented with a single erythematous crusted plaque on the right nasal wing; on histologic examination and immunofluorescence it was found to be pemphigus vulgaris [3]. A 50-year-old male patient presented with erythematous scaly plaques and was diagnosed with psoriasis, which however did not respond to treatment; direct immunofluorescence revealed pemphigus foliaceus [4]. Pemphigus foliaceus masquerading as IgA pemphigus and responding to dapsone has been reported [5]. There has also been a case report of pemphigus foliaceus presenting with prominent neutrophilic pustules in which the lesions mimicked subcorneal pustular dermatosis clinically [6]. In both cases a correct diagnosis was made based on the findings of direct immunofluorescence. No case has yet been reported in which pemphigus vulgaris presented with subcorneal pustular dermatosis-like lesions.
Figure 1 .
Figure 1. Multiple tense vesicles and bullae filled with both clear fluid and pus, predominantly over the trunk.
Figure 2 .
Figure 2. Hypopyon sign positive. | 2017-10-19T16:51:20.061Z | 2014-04-11T00:00:00.000 | {
"year": 2014,
"sha1": "57479f9aabe0d45733e09146ef14b1e4b73741f6",
"oa_license": "CCBY",
"oa_url": "http://www.odermatol.com/odermatology/22014/12.PV-KabraV.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "57479f9aabe0d45733e09146ef14b1e4b73741f6",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
13092798 | pes2o/s2orc | v3-fos-license | Environmental Controls on the Increasing GPP of Terrestrial Vegetation across Northern Eurasia
Terrestrial ecosystems of northern Eurasia are demonstrating an increasing gross primary productivity (GPP), yet few studies have provided definitive attribution for the changes. While prior studies point to increasing temperatures as the principal environmental control, influences from moisture and other factors are less clear. We assess how changes in temperature, precipitation, cloudiness, and forest fires individually contribute to changes in GPP derived from satellite data across northern Eurasia using a light-use-efficiency-based model, for the period 1982-2010. We find that annual satellite-derived GPP is most sensitive to the temperature, precipitation, and cloudiness of summer, which is the peak of the growing season and also the period of the year when the GPP trend is greatest. Considering the regional median, the summer temperature explains as much as 37.7 % of the variation in annual GPP, while precipitation and cloudiness explain 20.7 and 19.3 %. Warming over the period analysed, even without a sustained increase in precipitation, led to a significant positive impact on GPP for 61.7 % of the region. However, a significant negative impact on GPP was also found, for 2.4 % of the region, primarily the drier grasslands in the southwest of the study area. For this region, precipitation positively correlates with GPP, as does cloudiness. This shows that the southwestern part of northern Eurasia is relatively more vulnerable to drought than other areas. While our results further advance the notion that air temperature is the dominant environmental control for recent GPP increases across northern Eurasia, the role of precipitation and cloudiness cannot be ignored.
Introduction
Several analyses of normalized difference vegetation index (NDVI) data derived from satellite remote sensing have pointed to a positive trend in gross primary productivity (GPP) and leaf area index (LAI) of the northern high latitudes in recent decades (Myneni et al., 1997; Carlson and Ripley, 1997; Zhou et al., 2001; Guay et al., 2014). Warming has also occurred over this time. Global mean surface air temperatures increased by 0.2 to 0.3 °C over the past 40 years, with warming greatest across northern land areas around 40-70° N (Nicholls et al., 1996; Overpeck et al., 1997). Precipitation increases have also been observed over both North America and Eurasia over the past century (Nicholls et al., 1996; Groisman et al., 1991). Urban et al. (2014) describe the co-occurrence of these climatic and ecosystem changes. Here we investigate the increasing GPP of terrestrial ecosystems of northern Eurasia and determine the relative attribution arising through changes in several geophysical quantities, hereinafter referred to as "environmental variables", as they potentially drive observed temporal changes in vegetation productivity.
GPP is a physical measure of the rate of photosynthesis, or the rate at which atmospheric CO2 is fixed by autotrophic (generally green) plants to form carbohydrate molecules. Photosynthesis, being a biological process, is regulated by several environmental factors. Productivity is highest at the optimum temperature, though this optimum can be modified by cold or warm acclimation (Larcher, 1969, 2003). Water availability also affects plant hydraulics and chemistry by controlling the nutrient uptake through shoot transportation (Sharp et al., 2004; Stevens et al., 2004). Increasing atmospheric CO2 concentration increases GPP by biochemical fertilization for C3 plants and by increasing water use efficiency for both C3 and C4 plants (Bowes, 1996; Rötter and Geijn, 1999).
There is both direct and indirect evidence of increasing productivity across the northern high latitudes. Flask- and aircraft-based measurements show that the seasonal amplitude of atmospheric CO2 concentration across the Northern Hemisphere has increased since the 1950s, with the greatest increases occurring across the higher latitudes (Graven et al., 2013). This trend suggests a considerable role of northern boreal forests, consistent with the notion that warmer temperatures have promoted enhanced plant productivity during summer and respiration during winter (Graven et al., 2013; Kim et al., 2014; Myneni et al., 1997). Observed at eddy covariance sites, net ecosystem exchange (NEE), the inverse of net ecosystem productivity (NEP), is a strong function of mean annual temperature at mid- and high latitudes, up to the optimum temperature of approximately 16 °C, above which moisture availability overrides the temperature influence (Yi et al., 2010). Other studies have found vulnerabilities in ecosystems of North America as well as Eurasia from warming-related changes in hydrological patterns (Parida and Buermann, 2014; Buermann et al., 2014), thereby highlighting the importance of precipitation. With warming, low-temperature constraints to productivity have relaxed (Nemani et al., 2003; Zhang et al., 2008; Yi et al., 2013). Tree-ring data suggest that black spruce forests have experienced drought stress during extreme warmth (Walker et al., 2015). Over northern Eurasia, precipitation trends have complicated the relationship between temperature and productivity, as increasing moisture constraints have made northern Eurasia more drought-sensitive (Zhang et al., 2008; Yi et al., 2013). Increasing atmospheric CO2 concentration is another factor, as CO2 fertilization has been demonstrated through observations, models, and FACE (free-air CO2 enrichment) experiments (Ainsworth and Long, 2005; Hickler et al., 2008; Graven et al., 2013). Cloudiness or shade can strongly influence vegetation productivity (Roderick et al., 2001), particularly over northern Eurasia (Nemani et al., 2003). Disturbances through forest fires also affect vegetation productivity by destroying existing vegetation and allowing for regeneration (Goetz et al., 2005; Amiro et al., 2000; Reich et al., 2001).
The role of temperature and precipitation in the positive trend of GPP of the northern high latitudes, especially northern Eurasia, has not been firmly established. Few studies have examined the effect of CO2 concentration, cloudiness, and forest fires. Of these environmental variables, CO2 concentration is unlike the others, given its long atmospheric lifetime (∼ 100-300 years; Blasing, 2009). Thus, CO2 concentration is assumed to be more spatially uniform. As a result, any statistical analysis using this variable will not be comparable with the other variables. We consequently do not analyse the influence of CO2 concentration. While some studies have focused on terrestrial ecosystems of the pan-Arctic (Urban et al., 2014; Myneni et al., 1997; Guay et al., 2014; Kim et al., 2014) or the high latitudes of North America (Goetz et al., 2005; Buermann et al., 2013; Thompson et al., 2006), few studies have investigated the relative role of different environmental variables in the increasing GPP of northern Eurasia. Therefore, we assess in this study how vegetation productivity trends in northern Eurasia are influenced by the environmental variables air temperature, precipitation, cloudiness, and forest fire. The objectives are to (1) calculate the long-term trend of both GPP and the environmental variables, (2) assess the magnitude of the effect of the environmental variables on GPP, (3) identify the seasonality of the variables, and (4) identify the regions of northern Eurasia where the variables boost or reduce GPP. Exploiting the availability of long-term time series observation-based data, we perform a spatially explicit grid point statistical analysis to achieve the above objectives.
Land cover
The study domain is the Northern Eurasia Earth Science Partnership Initiative (NEESPI) region (Groisman and Bartalev, 2007), defined as the area between 15° E longitude in the west, the Pacific coast in the east, 45° N latitude in the south, and the Arctic Ocean coast in the north; the total area of this region is 22.4 million km2. Land cover distribution for the region is drawn from the Moderate Resolution Imaging Spectroradiometer (MODIS) MCD12Q1 Type 5 land cover product for the year 2007, available online from the Land Processes Distributed Active Archive Center (LP DAAC, https://lpdaac.usgs.gov), Sioux Falls, South Dakota, USA. The product provides global land cover at 1 km spatial resolution, produced from several classification systems, principally that of the International Geosphere-Biosphere Programme (IGBP). Friedl et al. (2002) describe the supervised classification methodology, which leveraged a global database of training sites interpreted from high-resolution imagery. The GPP products used in this study (described below) use a static land cover (LC) classification to define biome response characteristics over the study record; thus the effect of each environmental variable accounts only for changes in NDVI and does not track potential changes in land cover type. While the GPP products use the standard IGBP MODIS global land cover classification, for our statistical analysis we simplify the LC distribution into two fundamental types. One is "herbaceous", without woody stems, found in the tundra to the north and the grasslands to the south, one of the driest biomes of northern Eurasia. The second is "woody vegetation", plants with woody stems, located within the area of boreal forests extending from west to east across much of the centre of the domain (Fig. 1).
Vegetation productivity - long-term data
GPP represents the total amount of carbon fixed per unit area by plants in an ecosystem utilizing the physiological process of photosynthesis (Watson et al., 2000). GPP is one of the key metrics useful in assessments of changes in vegetation productivity. It is also a standard output of process-based vegetation models. The GPP fields used in this study represent model estimates driven by satellite data. The GPP model used is based on a light use efficiency (LUE) model that prescribes a theoretical maximum photosynthetic conversion efficiency for different land cover classes. LUE is reduced from potential (LUE_max) rates for suboptimal environmental conditions, determined as the product of daily environmental control factors defined for the different land cover types using daily surface meteorological inputs from ERA-Interim reanalysis data. Daily surface meteorology inputs to the model include incident solar radiation (SW_rad), minimum and average daily air temperatures (T_min and T_avg), and atmospheric vapour pressure deficit (VPD). GPP is derived on a daily basis as GPP = ε × FPAR × PAR, with ε = ε_max × T_f × VPD_f (Running et al., 2004; Zhang et al., 2008), where ε is a LUE parameter (g C MJ−1) for the conversion of photosynthetically active radiation (PAR, MJ m−2) to GPP. FPAR is estimated from NDVI using biome-specific empirical relationships emphasizing northern ecosystems (Yi et al., 2013). Several studies demonstrated the linear relationship between NDVI and FPAR through field measurements and theoretical analysis (Fensholt et al., 2004; Myneni and Williams, 1994; Ruimy et al., 1994; Sellers, 1985). Two sets of NDVI records are obtained for this study and used to derive alternative FPAR and GPP simulations: (i) the third-generation Global Inventory Modeling and Mapping Studies record (GIMMS3g; Zhu et al., 2013; Pinzon and Tucker, 2010), downloaded from https://nex.nasa.gov/nex/ (referred to as GIMMS-GPP), and (ii) the Vegetation Index and Phenology (VIP) database (Didan, 2010; Barreto-Munoz, 2013), downloaded from http://phenology.arizona.edu/ (University of Arizona's Vegetation Index and Phenology Lab; referred to as VIP-GPP). The 16-day NDVI records are first interpolated to a daily time step using temporal linear interpolation to estimate daily FPAR following previously established methods (Yi et al., 2013). The use of daily NDVI and FPAR inputs rather than coarser (8-day or 16-day) temporal composites reduces potentially abrupt step changes in the model calculations due to temporal shifts in the coarser time series canopy inputs. Moreover, the daily interpolation was found to improve simulations of GPP seasonality
especially during spring and autumn transitional periods over northern land areas (Yi et al., 2013). PAR is estimated as a constant proportion (0.45) of incident shortwave solar radiation (SW_rad). ε_max is the potential maximum ε under optimal environmental conditions. T_f and VPD_f are scalars that define suboptimal temperature and moisture conditions represented by the respective daily T_min and VPD inputs. T_f and VPD_f are defined using a linear ramp function (Yi et al., 2013; Heinsch et al., 2006), as well as minimum and maximum environmental constraints defined for different biome types (T_mn_min and T_mn_max, VPD_min and VPD_max). Table 1 summarizes the biome property look-up table (BPLUT) used to define the environmental response characteristics in the model. These GPP data sets are currently available through a public FTP directory (ftp://ftp.ntsg.umt.edu/pub/data/HNL_monthly_GPP_NPP/). The GPP data are derived at a daily time step and have been aggregated to a monthly time step for this study. Spatial resolution is 25 km, with a temporal range from 1982 to 2010, restricted to the northern high latitudes (> 45° N). In many of the statistical analyses to follow we use the ensemble mean of the two satellite-derived GPP data sets, henceforth denoted as "GPPsat". Winter is characterized by extremely low productivity, and technical constraints of optical-IR remote sensing due to low solar illumination and persistent cloud cover make for a particular challenge in estimating vegetation indices and consequently computing GPP across the high latitudes (Pettorelli et al., 2005). Given the limited confidence in GPP data over winter (driven mainly by the uncertainty in winter NDVI), we focus on the remainder of the year in our analysis.
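To make the LUE formulation above concrete, the following is a minimal Python sketch of the daily calculation GPP = ε_max × T_f × VPD_f × FPAR × PAR. The ramp bounds, ε_max value, and the NDVI-FPAR coefficients are illustrative placeholders, not the biome-specific BPLUT values of Table 1.

```python
import numpy as np

def ramp(x, x_min, x_max):
    """Linear ramp scalar: 0 at or below x_min, 1 at or above x_max."""
    return np.clip((x - x_min) / (x_max - x_min), 0.0, 1.0)

def daily_gpp(ndvi, sw_rad, t_min, vpd,
              eps_max=1.0,                 # potential max LUE, g C MJ^-1 (placeholder)
              t_ramp=(-8.0, 10.0),         # T_min ramp bounds, deg C (placeholder)
              vpd_ramp=(650.0, 3000.0)):   # VPD ramp bounds, Pa (placeholder)
    """Daily GPP (g C m^-2 d^-1) from NDVI and surface meteorology."""
    fpar = np.clip(1.24 * ndvi - 0.168, 0.0, 1.0)  # illustrative linear NDVI-FPAR fit
    par = 0.45 * sw_rad                    # PAR as a fixed fraction of shortwave radiation
    tf = ramp(t_min, *t_ramp)              # temperature constraint scalar T_f
    vpdf = 1.0 - ramp(vpd, *vpd_ramp)      # moisture constraint: high VPD reduces LUE
    return eps_max * tf * vpdf * fpar * par

print(daily_gpp(ndvi=0.7, sw_rad=20.0, t_min=5.0, vpd=900.0))
```

The arrays would in practice be 25 km gridded daily fields; the sketch only fixes the order of operations implied by the equation.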
The accuracy of the GIMMS-NDVI data set has been examined in several recent studies. Analysing trends in growing-season start over the Tibetan Plateau, Zhang et al. (2013) found that GIMMS NDVI differed substantially over the period 2001-2006 from SPOT-VGT and MODIS NDVIs, indicating significant uncertainty among NDVI retrievals from different satellite sensors and data records. The GIMMS3g data set is based on the NOAA-AVHRR (Advanced Very High Resolution Radiometer) long-term time series record, which comprises AVHRR2 and AVHRR3 sensors on board the NOAA-7 through to NOAA-19 satellites spanning multiple overlapping time periods; this leads to potential artifacts from cross-sensor differences and inter-calibration effects influencing long-term trends in the AVHRR NDVI time series (Pinzon and Tucker, 2014). The Vegetation Index and Phenology (VIP) NDVI data set applies a different data processing scheme from that of GIMMS3g (Fensholt et al., 2015) and involves an integration and calibration of overlapping AVHRR, SPOT, and MODIS sensor records for generating consistent NDVI (Didan, 2010). The ensemble mean and variance of the alternate GPP calculations derived using the GIMMS3g and VIP NDVI records were used as a metric of uncertainty in the regional productivity trends and the underlying satellite observation records.
Flux tower data
To verify the satellite-based GPP estimates we use gap-filled daily tower GPP data at 10 flux tower sites distributed across northern Eurasia, available for different periods of time. Details of the individual towers are provided in Table 2. The data, generated using the eddy covariance measurements acquired by the FLUXNET community, were collected from http://www.fluxdata.org/ for the "free fair-use" data subset. The spatial distribution of the flux towers used in this study is shown in Fig. 1. Unless otherwise noted, we use seasonal totals of the daily gap-filled tower GPP data. Monthly and seasonal values were aggregated from the daily data.
We also use monthly GPP data computed using FLUXNET observations of carbon dioxide, water, and energy fluxes upscaled to the global scale for additional verification of the satellite-derived GPP record for the entire study area, on a per-grid-cell basis. Upscaling of the FLUXNET observations was performed using a machine learning technique, the model tree ensembles (MTE) approach, from the Max Planck Institute of Biogeochemistry, Jena, Germany, and is available online at https://www.bgc-jena.mpg.de/geodb/projects/Data.php. Description and benchmarking of this data set can be found in Jung et al. (2009) and Jung et al. (2011). Of the two versions available, we use the one which incorporates flux partitioning based on Reichstein et al. (2005).
Temperature, precipitation, and cloudiness
Monthly values of 2 m air temperature (in °C), precipitation (in mm), and cloudiness (in %) are taken from monthly observations from meteorological stations, extending over the global land surface and interpolated onto a 0.5° grid (Mitchell and Jones, 2005). The data set, CRU TS 3.21, is produced by the Climatic Research Unit of the University of East Anglia in conjunction with the Hadley Centre (at the UK Met Office) and is available at http://iridl.ldeo.columbia.edu/SOURCES/.UEA/.CRU/.TS3p21/.monthly/ (Jones and Harris, 2013).
Although the LUE-based GPP model does not use precipitation as an input, we assume that precipitation is a useful metric of water supply to vegetation and thus analyse it as one of the environmental variables affecting GPP. Here we use monthly values of temperature, precipitation, and cloudiness for the period 1982 to 2010, since this is the common period for which both GPPsat and the environmental variable data are available. Seasonal means for spring (March, April, May), summer (June, July, August), and autumn (September, October, November) are derived from the monthly values. As explained in Sect. 2.1.2, lower reliability and availability of satellite NDVI observations and associated GPP data for the winter months lead us to focus on the spring, summer, and autumn seasons.
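As an illustration of the seasonal aggregation just described, the sketch below reduces a monthly series to MAM, JJA, and SON means for each year; the January-1982 start and one-cell array layout are assumptions for the example only.

```python
import numpy as np

def seasonal_means(monthly, months):
    """Mean over the given calendar months (1-12) for each year.

    Assumes `monthly` covers whole years starting in January."""
    x = np.asarray(monthly, dtype=float).reshape(-1, 12)  # (years, 12)
    idx = [m - 1 for m in months]
    return x[:, idx].mean(axis=1)

monthly = np.random.default_rng(0).normal(size=29 * 12)  # stand-in data, 1982-2010
spring = seasonal_means(monthly, (3, 4, 5))
summer = seasonal_means(monthly, (6, 7, 8))
autumn = seasonal_means(monthly, (9, 10, 11))
print(summer.shape)  # (29,) -> one value per year
```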
Fire
Fire is represented by proportional burnt area (% of each grid cell) estimates from the Global Fire Emissions Database (GFED) Monthly Burned Area Data Set Version 3.1, released in April 2010. This product was developed on a global scale at 0.5° spatial resolution and covers the period from 1997 to 2011. The GFED is an ensemble product of burn areas derived from multiple satellite sensors, though primarily emphasizing MODIS surface reflectance imagery (Giglio et al., 2010).
Spatial interpolation
Data not on a 0.5° grid were interpolated to that resolution using a spherical version of Shepard's traditional algorithm (Shepard, 1968; Willmott et al., 1985). This method takes into account (i) the distances of the data points to the grid location, (ii) the directional distribution of stations, in order to avoid overweighting of clustered stations, and (iii) spatial gradients within the data field in the grid point environment.
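A simplified sketch of Shepard-style inverse-distance weighting on the sphere is given below, using great-circle distances; the directional and gradient corrections of the full spherical algorithm (points ii and iii above) are omitted, and the station coordinates and values are hypothetical.

```python
import numpy as np

def great_circle(lat1, lon1, lat2, lon2, radius_km=6371.0):
    """Haversine great-circle distance in km; inputs in degrees."""
    p1, l1, p2, l2 = map(np.radians, (lat1, lon1, lat2, lon2))
    a = (np.sin((p2 - p1) / 2) ** 2
         + np.cos(p1) * np.cos(p2) * np.sin((l2 - l1) / 2) ** 2)
    return 2 * radius_km * np.arcsin(np.sqrt(a))

def shepard(lat, lon, st_lat, st_lon, st_val, power=2.0):
    """Inverse-distance-weighted estimate at (lat, lon) from station values."""
    d = great_circle(lat, lon, st_lat, st_lon)
    if np.any(d < 1e-9):                    # grid point coincides with a station
        return float(st_val[np.argmin(d)])
    w = 1.0 / d ** power
    return float(np.sum(w * st_val) / np.sum(w))

st_lat = np.array([55.0, 60.0, 52.0])       # hypothetical stations
st_lon = np.array([60.0, 90.0, 110.0])
st_val = np.array([10.2, 8.7, 11.5])
print(shepard(57.0, 85.0, st_lat, st_lon, st_val))
```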
Verification
The GIMMS-GPP and VIP-GPP simulations are evaluated against co-located tower-based GPP observations for model grid cells corresponding to each of the ten regional flux tower locations (Table 2). The evaluation is carried out using five different approaches:
1. Pearson's product moment correlation, which is a measure of the linear dependence between simulated (GIMMS-GPP and VIP-GPP) and observed (tower-based GPP) values; its value ranges from −1 to +1, where 0 is no correlation and −1 / +1 is total negative or positive correlation respectively.
2. Percent bias, which measures the average tendency of the simulated values to be larger or smaller than the corresponding observations. The optimal value is 0.0, with low-magnitude values indicating accurate model simulations. Positive values indicate overestimations and vice versa (Yapo et al., 1996; Sorooshian et al., 1993).
3. The Nash-Sutcliffe efficiency (NSE) coefficient, which is a normalized statistic that determines the relative magnitude of the residual variance compared to the measured data variance (Nash and Sutcliffe, 1970). The statistic indicates how well the plot of observed vs. simulated data fits the 1 : 1 line. Nash-Sutcliffe efficiencies range from −∞ to 1. An efficiency of 1 corresponds to a perfect match of model-simulated GPP to the observed data. An efficiency of 0 indicates that the model predictions are as accurate as the mean of the observed data, whereas an efficiency less than zero occurs when the observed mean is a better predictor than the model or, in other words, when the residual variance (between modelled and observed values) is larger than the data variance (between observed values and the observed mean). Essentially, the closer the model efficiency is to 1, the more accurate the model is. (A short sketch of these three statistics follows this list.)
Table 3. Validation of the GIMMS3g and VIP-GPP data sets, along with their ensemble mean, using flux tower GPP from 10 flux tower sites across northern Eurasia. The spatial distribution of the flux tower sites is shown in Fig. 1. Validation was carried out using Pearson's product moment correlation, percent bias, and the Nash-Sutcliffe model efficiency coefficient, as described in the surrounding list.
4. A scatter plot, which uses Cartesian coordinates to show the correlation between satellite-derived GPP and tower-derived GPP at the respective sites over the respective time periods. This, along with the line of best fit, helps determine how well the two data sets agree with each other.
5. Spatially explicit, pixel-by-pixel validation against the upscaled GPP data from FLUXNET observations (described in Sect. 2.1.3), using correlation and difference maps for the entire period.
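As flagged above, the three scalar statistics in this list can be sketched as follows; the paired GPP values are stand-in numbers, not tower data.

```python
import numpy as np

def pearson_r(sim, obs):
    """Pearson product moment correlation between two paired series."""
    return float(np.corrcoef(sim, obs)[0, 1])

def percent_bias(sim, obs):
    """Positive values indicate overestimation by the simulation."""
    return float(100.0 * np.sum(sim - obs) / np.sum(obs))

def nash_sutcliffe(sim, obs):
    """1 = perfect match; 0 = no better than the observed mean; < 0 = worse."""
    return float(1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - np.mean(obs)) ** 2))

obs = np.array([420.0, 510.0, 610.0, 700.0, 660.0])   # stand-in annual GPP, g C m^-2
sim = np.array([450.0, 480.0, 640.0, 650.0, 700.0])
print(pearson_r(sim, obs), percent_bias(sim, obs), nash_sutcliffe(sim, obs))
```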
Trend analysis
Temporal changes for each environmental variable are determined using linear regression. Both annual and seasonal time integrations are examined. Trends are deemed statistically significant at the 95 % level. For each variable, we compute the trend per decade (10 yr−1) from the monthly values (month−1). Other studies have implemented a similar methodology to identify trends (Piao et al., 2011; de Jong et al., 2011; Forkel et al., 2013; Goetz et al., 2005). In order to determine whether the temporal rate of change differs across the study period, we plot the percentage difference of the annual means (of the regional average) from the first 5-year mean.
For the entire period of study, a few of the variables assessed show strong trends. Moreover, we assume the variables to be linearly associated. This introduces the issue of collinearity, as a consequence of which the study of the influence of one variable on another becomes less precise. Therefore, in order to make accurate assessments of the correlation between two variables, correlation analysis has only been carried out after the long-term trends (for the period 1982-2010) have been removed, so that only the interannual variability is preserved.
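A minimal sketch of the per-decade trend estimate and the detrending step described above, assuming a monthly series and the 95 % significance threshold used in this study:

```python
import numpy as np
from scipy import stats

def trend_per_decade(y):
    """Least-squares slope of a monthly series, scaled to per-decade units."""
    t = np.arange(len(y), dtype=float)            # time in months
    res = stats.linregress(t, y)
    slope_decade = res.slope * 120.0              # per month -> per 10 years
    significant = res.pvalue < 0.05               # 95 % significance level
    return slope_decade, significant

def detrend(y):
    """Remove the linear long-term trend, keeping interannual variability."""
    t = np.arange(len(y), dtype=float)
    res = stats.linregress(t, y)
    return y - (res.intercept + res.slope * t)

y = np.cumsum(np.random.default_rng(1).normal(0.1, 1.0, 29 * 12))  # stand-in series
print(trend_per_decade(y))
```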
Correlation
We use the Pearson product-moment correlation coefficient (represented as R), one of the more popular measures of dependence between two variables, which is sensitive only to a linear relationship. This metric ranges from +1 (perfect increasing linear relationship) to −1 (perfect decreasing linear relationship, or "inverse correlation"), and as the value approaches zero the relationship becomes uncorrelated (Dowdy and Wearden, 1983). When a single variable is affected by more than one independent factor, simple correlation is inappropriate. We perform partial correlation to better assess the relationship between two variables after eliminating the influence of other variables.
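The partial correlation described above can be sketched via regression residuals, as below; the variable names and synthetic data are illustrative only. The square of this coefficient corresponds to the partial R2 values reported later (Table 5).

```python
import numpy as np

def residuals(y, controls):
    """Residuals of y after ordinary least-squares on the control variables."""
    X = np.column_stack([np.ones(len(y))] + list(controls))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

def partial_corr(x, y, controls):
    """Correlation of x and y after removing the linear effect of `controls`."""
    rx = residuals(x, controls)
    ry = residuals(y, controls)
    return float(np.corrcoef(rx, ry)[0, 1])

rng = np.random.default_rng(2)
temp, precip, cloud = rng.normal(size=(3, 29))        # stand-in detrended series
gpp = 0.6 * temp - 0.2 * precip + rng.normal(scale=0.5, size=29)
print(partial_corr(temp, gpp, [precip, cloud]))
```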
Attribution
The primary objective of this study is to determine the magnitude and spatio-temporal variations in trends for the environmental conditions (variables) which have contributed to the increase in GPP of northern Eurasia indicated by the satellite records. Ideally one would study the direct influence of one condition on another in experiments in which all other possible causes of variation are eliminated. However, since this study involves only large-scale observational data and not process-based models or laboratory-based experiments, there is no control over the causes of variation. Investigations into the structure and function of terrestrial ecosystems, like those for many elements of the biological sciences, involve quantities which are often correlated. In some cases, the derived relationship may be spurious. The coefficient of determination (represented as R2) is a common measure to estimate the degree (percentage) to which one variable can be explained by another (Wright, 1921), while correlation analysis (R) can explain this dependence of one variable on another while keeping the sign of the relationship (±) intact (Aldrich, 1995).
Verification of satellite-derived GPP
The GIMMS-GPP and VIP-GPP, as well as their ensemble mean (GPPsat), are individually verified against the flux-tower-based GPP data using Pearson's correlation coefficient, percent bias, and the Nash-Sutcliffe normalized statistic. Scatter plots (Fig. 2) show that GPP derived from the satellite NDVI records is generally higher than the tower-based GPP at the flux tower sites that have comparatively lower productivity (and vice versa). Moreover, the agreement is stronger at lower-productivity sites than at higher-productivity sites. Though Table 3 lists all of the verification statistics, we focus primarily on the annual GPPsat results for the rest of the study. The correlation coefficients are all positive and high (0.7 for annual GPPsat); percent bias is predominantly negative (18.3 %); and since all the values of the Nash-Sutcliffe efficiencies are above zero (0.33), we conclude that the satellite NDVI-derived values are a more accurate estimate of GPP than the observed mean for the respective flux tower sites. Spatially explicit verification of GPPsat reveals that the correlation is high and statistically significant for almost the entire study area (Fig. 3a). GPPsat shows a general underestimation in the boreal forests of the western parts of northern Eurasia and an overestimation in the Eurasian steppes to the south of the study area (Fig. 3b).
Satellite-derived vegetation indices have been evaluated using a variety of techniques. Using tree-ring width measurements as a proxy for productivity, Berner et al. (2011) examined their relationship with NDVI from AVHRR instruments and found the correlation to be highly variable across the sites, though consistently positive. Remarkably strong correlations were observed in comparisons of GIMMS3g NDVI to aboveground phytomass at the peak of summer at two representative zonal sites along two trans-Arctic transects in North America and Eurasia (Raynolds et al., 2012). From a comparison of production efficiency model-derived NPP (Zhang et al., 2008) to the stand-level observations of boreal aspen growth for the 72 CIPHA (Climate Impacts on Productivity and Health of Aspen) sites, the correlation was found to be positive. LUE algorithms similar to the one used in this study for the generation of GPP data sets from satellite NDVI produce favourable GPP results relative to daily tower observations, with a strong positive correlation (Yi et al., 2013; Yuan et al., 2007; Schubert et al., 2010). Evaluating the uncertainties in the estimated carbon fluxes computed using a similar LUE-based GPP model, Yi et al. (2013) concluded that uncertainty in the LUE (ε) characterization is the main source of simulated GPP uncertainty. GPP simulation errors under dry conditions are increased by an insufficient model vapour pressure deficit (VPD) representation of soil water deficit constraints on canopy stomatal conductance and ε (Leuning et al., 2005; Schaefer et al., 2012). It was also found that the GPP model does not consider the response of ε to diffuse light due to canopy clumping (Chen et al., 2012) and shaded leaves (Gu et al., 2002).
Temporal changes in GPP
Across the study domain, regionally averaged GPPsat exhibits a trend of 2.2 (±1.4) g C m−2 month−1 decade−1. Figure 4a displays the annual GPP trend map. Increases are noted across most of the region, except for a small area in the north-central part of the region, just east of the Yenisey River. The largest increases are located in the western and south-eastern parts of the region. Over half (69.1 %) of the study area exhibits a statistically significant positive trend (95 % significance level), while 0.01 % of the area has a statistically significant negative trend. Uncertainty in the ensemble mean GPP is illustrated by the coefficient of variation map (Fig. 4b). The highest uncertainty is noted in the north-central and south-western parts of the region. The yearly increase in annual GPP for both GIMMS-GPP (red) and VIP-GPP (blue; Fig. 4c) reveals the difference between the two data sets, which is highest at the beginning of the study period. The nature of the increase in GPP is also different for the two data sets, with the rise in one being more linear than the other. A possible explanation for the differences between the two data sets is discussed in Sect. 2.1.2. Examining the seasonality of GPP trends (of GPPsat; Fig. 5), we find that the summer trend is greatest among all seasons. This implies that the response of GPP to environmental changes is greatest at the peak of the growing season. While the productivity of the region is predominantly increasing, there are clearly certain areas each season with decreasing productivity.
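As a sketch of the ensemble statistics referred to above (the ensemble mean GPPsat and the coefficient of variation of Fig. 4b), assuming two co-registered GPP grids; the grid values are synthetic stand-ins.

```python
import numpy as np

gimms = np.random.default_rng(3).uniform(50, 150, size=(4, 4))   # stand-in GPP grids
vip = gimms + np.random.default_rng(4).normal(0, 10, size=(4, 4))

stack = np.stack([gimms, vip])              # shape (2, lat, lon)
gpp_sat = stack.mean(axis=0)                # ensemble mean, "GPPsat"
cv = stack.std(axis=0, ddof=1) / gpp_sat    # coefficient of variation per cell;
print(float(cv.mean()))                     # higher CV = larger data-set disagreement
```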
The GPP increase described here is consistent with the results of Sitch et al. (2007), who also noted considerable interannual and spatial variability, with many areas demonstrating decreased greenness and lower productivity. Using a process-based model (LPJ-DGVM) to perform a retrospective analysis for the period 1982-1998, Lucht et al. (2002) found, after accounting for the carbon loss due to autotrophic respiration, that boreal zone NPP increased by 34.6 g C m−2 yr−1, which is comparable to our estimate. The higher GPP trend in summer (Fig. 5), especially over the northern Eurasia portion of the domain, suggests that the vegetation of this region is predominantly cold-constrained, a finding described in other recent studies (Yi et al., 2013; Kim et al., 2014).
Temporal changes in the environmental variables
The regionally averaged air temperature increase is nearly monotonic, and the distributions displayed in Fig. 6a show that the region has a predominantly positive trend for all parts of the growing season. Warming is highest in autumn. A statistically significant increase in temperature is noted for approximately half of the region. The greatest increases are found in the north-eastern and south-western parts of the region (maps not shown). Unlike temperature, precipitation does not exhibit a sustained increase over the study period. While the regional median trend for precipitation is highest for spring (Fig. 6b), the range of trends for this region, from minimum to maximum, is highest for summer. The fraction of the region experiencing significant increases in annual precipitation is about 3 times the area experiencing significant decreases. The significant positive trends are located in the north-eastern and western parts (mainly boreal forests) of the domain, while significant negative trends are located in the west-central (boreal forests) and south-eastern (steppes) parts of the region (maps not shown). Along with the regional averages of the other environmental variables, Table 4 reveals the regional average of cloudiness, which shows a negative trend. However, similar to precipitation, the spatial standard deviation is very high, implying a high spatial variability in cloudiness trends across the region. Unlike precipitation, a greater fraction of the region is experiencing significantly decreasing cloudiness, i.e. a significant clear-sky trend (Fig. 6c).
Compared to the rest of the region, annual cloudiness shows higher negative trends in the southern parts of the study area (maps not shown). Burnt area exhibits significant trends, both positive and negative, over only 1 % of the region, with the total yearly burnt area for the study area increasing from 15.9 to 17.1 million hectares from 1997 to 2010. The negative trend of the regional mean (Table 4; Fig. 6d) is not significant.
Recent studies have reported similar changes in these environmental variables. For the period 1979 to 2005, Trenberth et al. (2007) found that temperature trends over the region range from 0.3 to 0.7 °C decade−1, and for most regions of the higher latitudes, especially from 30 to 85° N, positive precipitation trends have occurred. Contrary to the cloud cover trend we find here, studies reported in AR4 suggest an increase in total cloud cover since the middle of the last century over many continental regions, including the former USSR and western Europe (Sun et al., 2001; Sun and Groisman, 2000). The large spatial variability in the gridded cloud cover trends (Table 4) may explain the disagreement. Burnt area data, representing fire disturbance, are dissimilar from the other environmental variables in that they span only 14 years of the 29-year study record and are spatially non-uniform, involving only a fraction of the total study area. This limitation makes it difficult to assess impacts on vegetation productivity (Balshi et al., 2007). While the model used to generate the satellite NDVI-derived GPP data does not account for CO2 fertilization directly, the fertilization effect may be partially represented through associated changes in NDVI. As stated in Sect. 1, we do not analyse atmospheric CO2 concentration due to its spatial homogeneity.
Attributing GPP changes to environmental variables and assessing seasonality
Annual GPP is affected by more than one environmental variable. To study the impact of an individual environmental variable, we eliminate the impact of the other variables by performing partial correlations. With the temporal range of the fire data (GFED) being a fraction of that of the other environmental variables, it is not possible to compute the partial correlation. Consequently, we are unable to assess the effects of fire alone by eliminating the effects of the other variables. Moreover, fires have been found to be significantly correlated with annual GPP (GPPsat) for only a small fraction (1.7 to 3.4 %, depending on season) of the entire study area. The impact of fires on annual GPP for the region is therefore ignored in this study. The regional median partial coefficient of determination (R2) for significant values (Table 5) suggests that the summer values of the environmental variables have the highest influence on annual GPPsat. The contrast between summer and the other seasons is strongest for temperature, highlighting the importance of summer temperatures to annual productivity. Figure 7 reveals that the relationships between annual GPP and the environmental variables are not completely explained by simple correlation (R2), as the distributions of partial correlations provide more information about the interaction. Considering only significant correlations (Fig. 7), we find that increasing temperatures predominantly increase GPP. The relationship between precipitation or cloudiness and GPP, on the other hand, leads to a predominantly bimodal distribution, with both positive and negative effects. Other than in spring, areas demonstrating significant negative partial correlations appear to be larger than the areas of significant positive partial correlations. Among the environmental variables assessed, temperature has the highest partial coefficient of determination (Table 5). Moreover, unlike precipitation and cloudiness, temperature has a predominantly positive relationship with annual GPP. These relationships imply that, over recent decades, low temperatures have been the major constraint for GPP in northern Eurasia.
Similar results were reported by Yi et al. (2014), who concluded that satellite-derived vegetation indices show an overall benefit for summer photosynthetic activity from regional warming and only a limited impact from spring precipitation. The dominant constraint of temperature was described by Zhang et al. (2008), who found the same constraint to be decreasing. However, our results contrast with those of Piao et al. (2011), who concluded that, at the continental scale of Eurasia, vegetation indices in summer are more strongly regulated by precipitation, while temperature is a relatively stronger regulator in spring and autumn. Regarding the dominance of temperature as a regulator, Yi et al. (2013) concluded that, over the last decade, Eurasia has been more drought-sensitive than other high-latitude areas.
Table 5. Medians of the distributions of the relative partial significant contribution (R2, 95 % significance) of each de-trended environmental variable (except fire) in each season to the interannual variability in de-trended annual GPP (GPPsat). In each case the total contribution may not add up to 100 %; in these cases the factors behind the unexplained attribution are not identified.
Since GPP trends are highest in summer (Fig. 5), the peak of the growing season, we are interested more in the impact of the environmental variables during summer on annual GPP, since the terrestrial vegetation is likely to be more responsive to variations in summer environmental conditions relative to other seasons. Spatial analysis helps to elaborate on the results shown in Table 5 and Fig. 7. Assessing the partial significant correlation of annual GPP and summer temperature (Fig. 8a; Table 6), we find that areas with a positive correlation (62 % of the area) are concentrated to the north and east of the region, which include both tundra and boreal forest areas. Negative correlations occur across 2 % of the region, largely in the south within the Eurasian steppes. For other parts of the year (maps not shown for spring and autumn correlations, but distributions represented in Fig. 7), significant negative correlations become more spatially dispersed, while significant positive correlations are limited to the centre and west of the region for spring, becoming more dispersed in autumn. Determining the partial correlation between annual GPP and summer precipitation, Fig. 8b reveals that the areas of significant positive correlations (4 % of the area) are scattered over the southern part of the study area (steppe vegetation), while the significant negative correlations (16 % of the area) are scattered across the north (tundra and boreal). Correlations for spring precipitation with annual GPP (maps not shown) are predominantly positive, while those for autumn precipitation are predominantly negative. The spatial correlations for summer cloudiness and summer precipitation are similar (Fig. 8c), though the area under significant correlation is comparatively smaller. Negative correlation areas are about 9 times more extensive than positive correlation areas (Table 6). Compared to summer, the area under significant positive correlation is higher for spring, while the area under negative correlation is higher for autumn (maps not shown).
The negative correlations for temperature and the positive correlations for precipitation and cloudiness in the southern grasslands (Eurasian steppes) are not surprising, as these grasslands are relatively dry compared to other biomes in the broader region. In this part of the study area, increasing temperatures in summer may lead to greater water stress (Gates, 1964; Wiegand and Namken, 1966; Jackson et al., 1981). Moreover, increasing cloud cover would tend to lead to a higher probability of rain (Richards and Arkin, 1981), thus relieving water stress induced by warming in this relatively dry area. The cause of the negative correlations in the north is unclear. The relationship may be attributable to the predominantly positive relationship between cloud cover (equivalent to the inverse of sunshine duration) and precipitation (Sect. 3.5).
In the light-limited and relatively colder north, an increase in cloud cover could, on the one hand, cause a decrease in direct radiation and an increase in diffuse radiation, which may increase GPP through higher LUE (Alton et al., 2007; Gu et al., 2002; Williams et al., 2014; Roderick et al., 2001). However, an increase in cloud cover could also decrease total solar radiation and, in turn, productivity (Nemani et al., 2003; Shim et al., 2014).
Recent studies have shown relationships similar to those found here. Zhang et al. (2008) showed that, across the pan-Arctic basin, while productivity increased with warming, increasing drought stress can offset some of the potential benefits. However, Yi et al. (2013) concluded that while GPP was significantly higher during warm years for the pan-Arctic, the same was not true for the Eurasian boreal forests, which showed greater drought sensitivity. Positive impacts of warming on GPP have been suggested in warming experiments (Natali et al., 2013). However, decreasing growing-season forest productivity, represented as a decline in "greenness" across northern Eurasia, may be a reflection of continued summer warming in the absence of sustained increases in precipitation (Buermann et al., 2014; Zhou et al., 2001).
Relationships among individual environmental variables
Environmental variables are not independent of one another. We examine correlations among the de-trended individual variables to better understand their interactions. Figure 9 shows distributions of the correlations. The temperature-precipitation correlation is predominantly negative, indicating that increases in precipitation did not accompany recent warming. Significant negative correlations are located in the southern parts of the study area (steppes) as well as the boreal forests at the western and eastern ends of the region. These changes may be leading to increasing water stress, evidence of which is noted in a subset of the region. Indeed, approximately 2.4 % of the area in the southern parts of the study area (Fig. 8a) shows a significant negative partial correlation between annual GPP (GPPsat) and summer temperature. The relationship between temperature and cloud cover is similarly predominantly negative. Spatially, however, the significant negative correlations are located in the central and western parts of the region. Grid-cell-wise correlations between precipitation and cloud cover are predominately positive, with the significant correlations spread out across the region.
As described in Sect. 3.4, the correlations between precipitation and cloud cover help to explain why the spatial distributions of the correlation coefficients of precipitation and cloud cover with GPP are similar. Wang et al. (2014) documented a positive relationship between sunshine duration (equivalent to the inverse of cloud cover) and vegetation greenness.
While increasing cloud cover leads to an increased probability of precipitation, and thus reduces water stress, it also reduces the sunshine duration and hence GPP. According to Table 4, regional mean precipitation has a positive trend, while cloudiness has a negative trend. However, Fig. 9 reveals a predominantly positive correlation between these two variables. This apparent contradiction arises because the long-term trends are calculated for the actual values, while the correlation analysis is performed after de-trending (removing the long-term trends from) the variables. Consistent with our results, Thompson et al. (2006) found that, in the boreal and tundra regions of Alaska, NPP decreased when it was warmer and drier and increased when it was warmer and wetter. They also described how colder and wetter conditions also increased NPP. Yi et al. (2013) concluded that while, globally, annual GPP for boreal forests is significantly higher in warmer years, the relationship does not hold true for Eurasian boreal forests, which they identify as more drought-sensitive. For this reason, regional GPP variations are more consistent with regional wetting and drying anomalies, as we note for the south-western part of the study region. In this study we assessed only GPP. Other carbon cycle processes, such as autotrophic and heterotrophic respiration and disturbances, may not be responding in a similar manner. Additional studies are required before extrapolating these results to other carbon cycle components.
Conclusions
The ensemble mean of the GPP data sets derived from the GIMMS3g and VIP NDVI data indicates that vegetation productivity generally increased across northern Eurasia over the period 1982 to 2010, with a significant increase for as much as 69.1 % of the region. A significant decrease in GPP occurred across only 0.01 % of the region. We note some disagreement in the nature and magnitude of the increasing GPP between the two data sets. The regional mean trend for the ensemble mean GPP is 2.2 (±1.4) g C m−2 month−1 decade−1. The regional analysis is consistent with the results of prior studies, which have suggested that air temperature is the dominant environmental variable influencing productivity increases across the northern high latitudes. Examining partial coefficients of determination (R2), we find that the summer values of temperature, precipitation, and cloudiness have the highest influence on annual GPP. Considering the regional median of partial significant R2 values, summer air temperature explains as much as 37.7 % of the variation in annual GPP. In contrast, precipitation and cloudiness explain 20.7 and 19.3 % respectively. A significant positive partial correlation between summer air temperature and annual GPP is noted for 61.7 % of the region. For 2.4 % of the area, specifically the drier grasslands in the south-west, temperature and GPP are inversely correlated. Precipitation and cloudiness during summer also impart a significant influence, showing areas with both positive and negative significant partial correlations with annual GPP. Fire has a very small effect, with only up to 3.4 % of the region showing significant correlation, and consequently the impact of fire on GPP was ignored for the subsequent analysis. The spatial analysis reveals that the statistical relationships are not spatially homogeneous. While warming likely contributed to increasing productivity across much of the north of the region, the relationship reverses in the southern grasslands, which are relatively dry. That region exhibits increasing GPP, but with warming accompanying increased moisture deficits potentially restricting continued productivity increases. This result demonstrates that vegetation has been resilient to drought stress, which may be increasing over time.
We recommend that this study be followed up with experiments conducted using process-based models in which a single forcing variable is manipulated independently of the others. If feasible, multiple models should be used in order to quantify the uncertainty due to differences in model parameterization. Depending on emissions, population, and other forcing scenarios, rates of change in the environmental drivers such as air temperature and precipitation may differ from those found in this study. Thus it is critical to examine future scenarios of change across the region to better understand terrestrial vegetation dynamics under the respective model simulations. Environmental drivers influence other elements of the carbon cycle beyond the individual plant. In order to determine how terrestrial carbon stocks and fluxes have changed in the recent past, or may change in the near future, all aspects of the carbon cycle should be investigated in the context of changes in overarching climate influences.
Figure 1 .
Figure 1. Simplified land cover for northern Eurasia for the year 2007, overlaid with the spatial distribution of the 10 flux tower sites whose GPP (gross primary productivity) data were used to validate the GPP data derived from satellite NDVI (normalized difference vegetation index). For our statistical analysis, we show the distribution of two fundamental vegetation types: (i) herbaceous, i.e. without woody stems, which includes tundra in the north and grasslands (Eurasian Steppe) to the south, and (ii) wooded, i.e. plants with wood as their structural tissue, which includes the boreal forests appearing in the middle and extending from the western to the eastern boundary. This land cover map has been derived from the Moderate Resolution Imaging Spectroradiometer (MODIS) Type 5 land cover product (Friedl et al., 2002). The details of the flux tower sites are listed in Table 2.
Figure 2 .
Figure 2. Relationship between the annual GPP recorded at the flux tower sites and the corresponding values of the satellite-derived GPP. The black solid line is the line of best fit and helps to characterize the relationship between the two. The dashed line is the 1 : 1 line and demonstrates how much the relationship between the two sets of values deviates from a perfect 1 : 1 relationship.
Figure 3 .
Figure 3. Spatially explicit validation of GPPsat using upscaled FLUXNET observations. Panel (a) is the correlation map and displays the statistically significant (95 % level) correlations between the two sets of values of annual GPP for the period 1982-2010. Panel (b) is the difference between the 29-year means of GPPsat and the upscaled FLUXNET database, with negative values demonstrating an underestimation by GPPsat and vice versa.
Figure 4 .
Figure 4. Change in annual GPP for GPPsat over the period 1982-2010. Panel (a) is the trend map for GPPsat, i.e. the ensemble mean (of the two GPP data sets). Shades of green represent a positive trend and shades of red a negative trend. The trends have been derived from a linear least-squares fit to the GPP time series for the GIMMS3g and VIP data sets. Trend values represent the rate of change of productivity per decade (g C (carbon) m−2 month−1 10 yr−1). Panel (b) is the uncertainty map (uncertainty due to the use of two GPP data sets), represented by computing the coefficient of variation (CV). Darker values represent higher uncertainty and vice versa. Panel (c) shows the yearly change in the regional average GPP for the data sets derived from the GIMMS3g (red) and VIP (blue) NDVI data sets. The interannual variation is smoothed using a smoothing spline with a smoothing parameter of 0.8.
Figure 5. Box plot showing grid distributions of seasonal GPP trends for GPPsat. The GPP trends are in g C m−2 month−1 10 yr−1. The black band and middle notch represent the 2nd quartile or median; box extents mark the 25th (1st quartile) and 75th (3rd quartile) percentiles. Whiskers extend from the smallest non-outlier value to the largest non-outlier value. The colours green, red, orange, and grey represent spring, summer, autumn, and annual seasonal trends, respectively. As described in Sect. 2.1.2, GPP trends for winter have not been assessed in this study.
Figure 6. Change in the environmental variables over the period of study, represented by seasonal trends. Panels (a-c) show the distributions of 2 m air temperature, precipitation, and cloud cover, respectively, for the period 1982-2010, and panel (d) illustrates seasonal trends of total burnt area for the period 1997-2011. The temperature, precipitation, and cloud cover data are taken from the Climatic Research Unit (CRU TS 3.21) data set (Harris et al., 2014). Burnt area data are taken from the Global Fire Emissions Database (GFED; Giglio et al., 2010).
Figure 7. Bean plots of the multi-modal distribution for significant (95 % significance) partial correlation between annual de-trended GPP (GPPsat) and the values of each de-trended environmental variable after eliminating the influence of the other variables. A bean plot is an alternative to the box plot and is fundamentally a one-dimensional scatter plot. Here it is preferred over a box plot as it helps to show a multi-modal distribution. The thickness of a "bean" is a function of the frequency of the specific value; that is, the thicker a "bean" is for a value, the higher the number of grid points having that value. The values shown are Pearson's correlation coefficients, which are based on the linear least-squares trend fit. Correlation values range from −1 to +1. Values closer to −1 or +1 indicate stronger negative or positive correlations, respectively.
Figure 8. Spatial distribution of statistically significant (95 % significance level) partial correlation between de-trended annual GPP (GPPsat) and de-trended summer values of environmental variables: (a) temperature, (b) precipitation, and (c) cloud cover. Negative correlations are shown in shades of red and positive correlations in shades of blue.
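The partial correlations behind Figures 7 and 8 can be approximated by de-trending each series with a linear fit and then correlating regression residuals, i.e. relating GPP to one driver after the influence of the other drivers has been removed. The following is a hedged sketch with synthetic data, not the analysis code used in the study.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 29                                   # one value per year, 1982-2010
t = np.arange(n)
temp, precip, cloud = rng.normal(size=(3, n))
gpp = 0.6 * temp - 0.3 * cloud + rng.normal(0, 0.5, n)   # invented relationship

def detrend(y):
    """Remove a linear least-squares trend."""
    return y - np.polyval(np.polyfit(t, y, 1), t)

def residualize(y, controls):
    """Residuals of y after regressing out the control variables."""
    X = np.column_stack([np.ones(n)] + controls)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

g, te, pr, cl = map(detrend, (gpp, temp, precip, cloud))
r_partial = np.corrcoef(residualize(g, [pr, cl]),
                        residualize(te, [pr, cl]))[0, 1]
print(f"partial r(GPP, temperature | precip, cloud) = {r_partial:.2f}")
```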
Table 1. Biome property look-up table (BPLUT) for the GPP algorithm with ERA-Interim and NDVI as inputs. The full names for the University of Maryland land cover classes (UMD_VEG_LC) in the MOD12Q1 data set are evergreen needleleaf forest (ENF), evergreen broadleaf forest (EBF), deciduous needleleaf forest (DNF), deciduous broadleaf forest (DBF), mixed forests (MF), closed shrublands (CS), open shrublands (OS), woody savannas (WS), savannas (SVN), grassland (GRS), and croplands (Crop).
The study region is defined as the area between 15° E longitude in the west, the Pacific coast in the east, 45° N latitude in the south, and the Arctic Ocean coast in the north. The total area of this region is 22.4 million km². Land cover distribution for the region is drawn from the Moderate Resolution Imaging Spectroradiometer (MODIS) MCD12Q1 Type 5 land cover product for the year 2007, available online at https://lpdaac.usgs.gov/data_pool from the Land Processes Distributed Active Archive Center (LP DAAC), Sioux Falls, South Dakota, USA (Friedl et al., 2002). The product provides global land cover at 1 km spatial resolution, produced from several classification systems, principally that of the International Geosphere-Biosphere Programme (IGBP). Friedl et al. (2002) describe the supervised classification methodology, which leveraged a global database of training sites interpreted from high-resolution imagery. The GPP products used in this study (described below) use a static land cover (LC) classification to define biome response characteristics over the study record.
Table 2. Details of the flux towers whose GPP data have been used to validate the satellite NDVI-based GPP data. The spatial distribution of these flux towers is shown in Fig. 1.
Table 4. Trend statistics for annual monthly averages of environmental variables. The first and second columns list the fraction of the region with significant (95 % significance level) positive trends and negative trends, respectively. The third column is the regional mean trend of the variables per decade. The fourth column is the coefficient of variation, estimated as the distribution mean divided by the standard deviation.
Table 6. Connection between annual GPP of northern Eurasia (GPPsat) and summer values of environmental variables, shown as the percentage of the study area with statistically significant (95 % significance level) positive and negative partial correlation coefficients. | 2017-03-23T18:01:15.529Z | 2015-06-18T00:00:00.000 | {
"year": 2015,
"sha1": "807e3d83e4a4951281d8fd14b93d988dc90316d2",
"oa_license": "CCBY",
"oa_url": "https://www.biogeosciences.net/13/45/2016/bg-13-45-2016.pdf",
"oa_status": "GOLD",
"pdf_src": "Grobid",
"pdf_hash": "d24142676d4a08f7da6f75176b5fecd947a539bd",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Environmental Science"
]
} |
218899014 | pes2o/s2orc | v3-fos-license | The Effect of (E)-1-(4’-aminophenyl)-3-phenylprop-2-en-1-one on MicroRNA-18a, Dicer1, and MMP-9 Expressions against DMBA-Induced Breast Cancer
Background: Most breast cancer patients are estrogen receptor alpha-positive, and chemotherapeutic drugs are limited by high resistance and side effects. Therefore, discovering an effective anticancer agent is needed. This research explored the effect of (E)-1-(4'-aminophenyl)-3-phenylprop-2-en-1-one (APE) on miR-18a, Dicer1, and MMP-9 expressions. Methods: Twenty-four female Sprague-Dawley rats were investigated in this study. The rats were divided into 6 groups of 4. G1 served as the normal group. G2, G3, T1, T2, and T3 were given DMBA 20 mg/kgBW twice a week for 5 weeks to induce mammary cancer. After cancer developed, G2 was given vehicle and G3 was treated with tamoxifen. T1, T2, and T3 were treated with APE intraperitoneally every day for 21 days at doses of 5, 15, and 45 mg/kgBW/day, respectively. Blood plasma was collected to measure miR-18a expression using qRT-PCR. Mammary tissues were also collected to determine Dicer1 and MMP-9 expressions by immunohistochemistry. Results: The results showed significant down-regulation of miR-18a relative expression and up-regulation of Dicer1 expression in G3 and T1 compared to G2 (P<0.05). MMP-9 expression decreased significantly in T1 compared to G2 (P<0.05). Conclusion: APE can decrease miR-18a and MMP-9 expressions and increase Dicer1 expression in rat mammary cancer. Therefore, this compound could be a candidate novel anticancer agent.
Medicinal chemists have explored natural products for the discovery of potential anticancer agents, such as flavonoids, given their antioxidant and cytotoxic properties (Pande et al., 2017). Flavonoids regulate miRNA expression through epigenetic modification, modulation of transcription factors, and the miRNA maturation process (Srivastava et al., 2015). Chalcones belong to the flavonoid family and have been shown to exert many activities relevant to human diseases, including anticancer effects (Jin et al., 2013).
A series of new chalcone derivatives had been successfully synthesized to discover new anticancer agents, such as (E)-1-(4'-aminophenyl)-3-phenylprop-2-en-1-one (APE) (Suwito et al., 2015). Previously, the cytotoxicity of this compound against a breast cancer cell line had been demonstrated, and it was shown to reduce tumor growth and down-regulate miR-21 in rat mammary cancer (Wahyuniari et al., 2017). However, studies on the mechanism of action of this compound in metastasis are lacking.
Loss of Dicer expression is associated with the progression and metastasis of breast cancer (Khoshnaw et al., 2012). Dicer expression is significantly lower in triple-negative breast cancer (TNBC) than in estrogen receptor-positive (ER+) clinical specimens of primary tumors. Overall, TNBC has a poorer prognosis than ER+ breast cancer (Spoelstra et al., 2016). Dicer is a ribonuclease III enzyme playing a crucial role in microRNA (miRNA) maturation from pre-miRNA molecules (Price and Chen, 2014). miRNA is an important epigenetic mechanism which acts as a negative gene regulator by binding to mRNA, and epigenetics is a potential target in cancer treatment and prevention due to its modifiable nature (Basse and Arock, 2015). Furthermore, the activity of more than 60% of all protein-coding genes is predicted to be controlled by miRNAs in mammals (Catalanotto et al., 2016). Interestingly, Dicer1 is itself regulated by miRNA; hence, over-expression of miRNAs targeting Dicer1 leads to global down-regulation of miRNA expression (Luo et al., 2013).
One of the miRNAs that regulate Dicer is miR-18a, through its affinity with the 3' untranslated region (Luo et al., 2013; Chen et al., 2014). MiR-18a also regulates ERα (Howard and Yang, 2018); however, Dicer is a more significant prognostic factor than ER (Khoshnaw et al., 2012). In breast cancer, the expression of miR-18a increases (Shidfar et al., 2016), but the expression of Dicer1 decreases (Yan et al., 2012). A previous study showed that increased miR-18a expression targeting Dicer1 in nasopharyngeal cancer led to a 78% decrease in miRNA expression, including miR-143 (Luo et al., 2013). A decrease in miR-143 expression leads to an increase in matrix metalloproteinase-9 (MMP-9) expression, because MMP-9 is a target molecule of miR-143 (Abba et al., 2014). MMP-9 degrades proteins in the extracellular matrix and is associated with tumor invasion, metastasis, and poor prognosis in breast cancer (Merdad et al., 2014; Yousef et al., 2014). In addition, the increase in MMP-9 expression can also be due to insufficiency of PTEN (Chiang et al., 2016), which is also a binding target of miR-18a (Zhang et al., 2016).
The current study aimed to investigate the effect of APE on miR-18a, Dicer1, and MMP-9 as molecular targets. Given that an animal model may generate ERα-positive breast cancer in Sprague-Dawley rats (Abba et al., 2016; Alvarado et al., 2017), we first determined the molecular mechanism by which APE inhibits invasiveness in 7,12-dimethylbenz(a)anthracene (DMBA)-induced breast cancer.
Tested compound and animals
We tested (E)-1-(4'-aminophenyl)-3-phenylprop-2-en-1-one (APE) in this experimental research. It was synthesized by Suwito et al. (2015) at the Department of Chemistry, Faculty of Science and Technology, Universitas Airlangga, Indonesia. This in vivo research used a randomized post-test only control group design with twenty-four female Sprague-Dawley rats aged 3-4 weeks, provided by Laboratorium Penelitian dan Pengujian Terpadu, Universitas Gadjah Mada (UGM). This study was approved by the Medical and Health Research Ethics Committee, Faculty of Medicine, UGM. The rats were caged individually in the animal house of the Department of Pharmacology and Therapy, Faculty of Medicine, UGM. They were maintained on a 12 h light-dark cycle at 24°C. Rats were fed a standard diet and given ad libitum access to water. The rats were randomly divided into 6 groups of 4. Group 1 (G1) was treated with corn oil as the control group. The other groups (T1, T2, T3, G2, G3) were given the chemical carcinogen dimethylbenz(a)anthracene (DMBA) 20 mg/kgBW (Sigma-Aldrich, St Louis) dissolved in corn oil twice a week for five weeks to induce mammary cancer (Meiyanto et al., 2007). The tested compound was dissolved in vehicle (saline: tween 80: DMSO = 8: 1: 1). Upon the appearance of mammary cancer, G2 (mammary cancer) was given vehicle and G3 (mammary cancer + tamoxifen) was treated with tamoxifen citrate (tamofen 10, Kalbe Farma, Indonesia) 6.6 mg/kgBW. All T (mammary cancer + APE) groups were treated with APE dissolved in vehicle intraperitoneally every day for 21 days at doses of 5, 15, and 45 mg/kgBW, respectively (Wahyuniari et al., 2017).
Analysis of qRT-PCR
A Qiagen miRNeasy plasma kit (Cat#217184) and miRNeasy plasma spike-in control (Cat#219610) were used to extract microRNA in accordance with the manufacturer's protocol. Synthesis of cDNA was based on the protocol of the Qiagen miScript II RT Kit (Cat#218160). The level of miRNAs was quantified by using a MyGo Mini real-time PCR instrument (IT-IS Life Science, UK) and the Qiagen miScript SYBR Green PCR Kit (Cat#218073). The primers used were the rno-miR-18a specific primer, 5'-TAAGGTGCATCTAGTGCAGATAG-3', and the miScript universal primer (IDT, Singapore). Plasma miR-18a levels used C. elegans miR-39 as an internal control and were normalized with the comparative Ct method relative to this exogenous miRNA. The fold change in expression of the target gene relative to the internal control gene was calculated using the 2^−ΔΔCt method (Wang et al., 2015b; Vigneron et al., 2016).
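As a concrete illustration of the comparative Ct calculation, the snippet below computes a 2^−ΔΔCt fold change. The Ct values are invented for demonstration and do not come from this study.

```python
def fold_change(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Relative expression of the target vs. the control group by 2^-ddCt."""
    d_ct_sample = ct_target - ct_ref              # normalize to the internal control
    d_ct_control = ct_target_ctrl - ct_ref_ctrl
    dd_ct = d_ct_sample - d_ct_control
    return 2.0 ** (-dd_ct)

# Hypothetical Ct values: a treated rat vs. the untreated group mean
print(fold_change(ct_target=24.1, ct_ref=19.0,
                  ct_target_ctrl=22.5, ct_ref_ctrl=19.2))  # < 1 => down-regulated
```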
Discussion
This study showed that APE at a dose of 5 mg/kgBW could decrease miR-18a expression in vivo.
This compound belongs to the chalcone derivatives, part of the flavonoid family (Solomon and Lee, 2012). One study showed that flavonoids affect epigenetic mechanisms, including microRNA (Busch et al., 2015; Srivastava et al., 2015). Flavonoids regulate miRNA expression through modulation of transcription factors, epigenetic modification, and maturation of miRNA (Srivastava et al., 2015). This compound was also reported to down-regulate miR-21 (Wahyuniari et al., 2017), and chrysin was revealed to down-regulate miR-221, miR-21, and miR-18a expressions in gastric cancer (Mohammadian et al., 2016). The expression of miR-18a in this study was significantly higher in untreated rat mammary cancer (G2). Some studies showed similar results for breast cancer (Kodahl et al., 2014; Shidfar et al., 2016) and other cancers, such as nasopharyngeal cancer (Luo et al., 2013), gastric cancer (Tsujiura et al., 2015), esophageal cancer (Hirajima et al., 2013), pancreatic cancer (Morimura et al., 2011), colorectal cancer (Zhang et al., 2013; Yau et al., 2014), and hepatic cancer (Li et al., 2012). MicroRNA-18a is highly stable in blood; therefore, it is suitable as a biomarker for non-invasive monitoring of tumor dynamics (Komatsu et al., 2014; Jin et al., 2015). MicroRNA can be a specific target for new anticancer agents due to its regulation of multiple target mRNAs (Guo et al., 2013). Dicer1 mRNA is regulated by miR-18a through its affinity with the 3' untranslated region of Dicer1 (Luo et al., 2013; Chen et al., 2014). In breast cancer, the expression of miR-18a increases (Shidfar et al., 2016) but the expression of Dicer1 decreases (Yan et al., 2012). In this study, it was found that APE at the dose of 5 mg/kgBW significantly up-regulated Dicer1 expression in T1. It is therefore consistent that the decrease in miR-18a expression was followed by up-regulation of Dicer1 in group T1 after APE administration. According to a previous study, Dicer expression was positively correlated with the 5-year disease-free interval: patients with high Dicer expression were less likely to have recurrence within 5 years than patients with low Dicer expression. Dicer was thought to be a more significant prognostic factor than the estrogen receptor in breast cancer patients (Khoshnaw et al., 2012).
In this study, we found that Dicer1 had the lowest expression in rat mammary cancer without treatment (G2). Another study showed a gradual decrease of Dicer during the progression of breast cancer, with the strongest Dicer expression found in normal breast epithelial cells and the weakest in metastatic cells (Khoshnaw et al., 2012). Given that Dicer plays a crucial role in the final maturation of miRNA (Price and Chen, 2014), Dicer dysregulation can lead to global disruption of miRNA expression. Interestingly, Dicer1 is itself regulated by miRNA. One study showed that up-regulation of miR-18a suppressed Dicer expression, causing global down-regulation of miRNA expression (78%), including miR-143 (Luo et al., 2013). One of the target molecules of miR-143 is MMP-9, which is associated with tumor invasion, metastasis, and poor prognosis of breast cancer via protein degradation in the extracellular matrix (Merdad et al., 2014; Yousef et al., 2014).
In this study, the anti-invasive effect of APE was confirmed by the measurement of MMP-9 expression. Down-regulation of MMP-9 was detected in the groups treated with tamoxifen and with APE at all three dosages, but it was significant only in the APE group at a dose of 5 mg/kgBW. In addition, MMP-9 down-regulation may also be due to PTEN up-regulation, PTEN being another binding target of miR-18a. We found that APE is capable of inhibiting cancer progression. Other chalcone derivatives act in the same manner in metastasis inhibition: the synthetic chalcone E-2-(4′-methoxybenzylidene)-1-benzosuberone decreased secretion of MMP-9 (Pilatova et al., 2010), and MMP-2 expression was decreased by a novel anthraquinone-based chalcone analogue (Kolundzija et al., 2014) and by 2,2-dimethylbenzopyran (Wang et al., 2015a).
The best response to APE was found at the lower dosage. High doses of flavonoids may be toxic given their pro-oxidant activity and pro-inflammatory effects (Galati and O'Brien, 2004; Bouayed and Bohn, 2010; Corcoran et al., 2012). Further studies are suggested to find the optimal dosage of APE, considering the better efficacy of its lower dosage.
In conclusion, this study suggested that (E)-1-(4′-aminophenyl)-3-phenylprop-2-en-1-one (APE) may decrease miR-18a and increase Dicer1. This compound could inhibit the invasiveness of breast cancer by decreasing MMP-9 expression. These results are substantial for the future development of this compound to control the metastasis that often occurs in resistance and recurrence of breast cancer. | 2020-05-27T03:41:57.190Z | 2020-05-01T00:00:00.000 | {
"year": 2020,
"sha1": "012d077faf57ff20eec2417eac88da1dc0ad3405",
"oa_license": "CCBY",
"oa_url": "http://journal.waocp.org/article_89074_29f072a31493b61f1c4a68b7b7220737.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "012d077faf57ff20eec2417eac88da1dc0ad3405",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
248718566 | pes2o/s2orc | v3-fos-license | Safety assessment of the process INTCO MALAYSIA, based on the EREMA Basic technology, used to recycle post‐consumer PET into food contact materials
Abstract
The EFSA Panel on Food Contact Materials, Enzymes and Processing Aids (CEP) assessed the safety of the recycling process INTCO MALAYSIA (EU register number RECYC236), which uses the EREMA Basic technology. The input material is hot caustic washed and dried poly(ethylene terephthalate) (PET) flakes originating from collected post‐consumer PET containers, including no more than 5% PET from non‐food consumer applications. The flakes are heated in a continuous reactor under vacuum before being extruded. Having examined the challenge test provided, the Panel concluded that the continuous reactor (step 2) is critical in determining the decontamination efficiency of the process. The operating parameters to control the performance of this step are temperature, pressure and residence time. It was demonstrated that this recycling process is able to ensure a level of migration of potential unknown contaminants into food below the conservatively modelled migration of 0.1 μg/kg food, derived from the exposure scenario for infants when such recycled PET is used at up to 100%. Therefore, the Panel concluded that the recycled PET obtained from this process is not considered to be of safety concern when used at up to 100% for the manufacture of materials and articles for contact with all types of foodstuffs, including drinking water, for long‐term storage at room temperature. Articles made of this recycled PET are not intended to be used in microwave and conventional ovens and such uses are not covered by this evaluation.
Background and Terms of Reference as provided by the requestor
Recycled plastic materials and articles shall only be placed on the market if the recycled plastic is from an authorised recycling process. Before a recycling process is authorised, the European Food Safety Authority (EFSA)'s opinion on its safety is required. This procedure has been established in Article 5 of Regulation (EC) No 282/2008 on recycled plastic materials intended to come into contact with foods and Articles 8 and 9 of Regulation (EC) No 1935/2004 on materials and articles intended to come into contact with food.
According to this procedure, the industry submits applications to the competent authorities of Member States, which transmit the applications to EFSA for evaluation.
In this case, EFSA received from the Bundesamt für Verbraucherschutz und Lebensmittelsicherheit, Germany, an application for evaluation of the recycling process INTCO MALAYSIA, European Union (EU) register No RECYC236. The request has been registered in EFSA's register of received questions under the number EFSA-Q-2021-00005. The dossier was submitted on behalf of INTCO MALAYSIA SDN BHD, Malaysia.
According to Article 5 of Regulation (EC) No 282/2008 on recycled plastic materials intended to come into contact with foods, EFSA is required to carry out risk assessments on the risks originating from the migration of substances from recycled food contact plastic materials and articles into food and deliver a scientific opinion on the recycling process examined.
According to Article 4 of Regulation (EC) No 282/2008, EFSA will evaluate whether it has been demonstrated in a challenge test, or by other appropriate scientific evidence, that the recycling process is able to reduce the contamination of the plastic input to a concentration that does not pose a risk to human health. The poly(ethylene terephthalate) (PET) materials and articles used as input of the process as well as the conditions of use of the recycled PET are part of this evaluation.
2. Data and methodologies
2.1. Data
The applicant has submitted a dossier following the 'EFSA guidelines for the submission of an application for the safety evaluation of a recycling process to produce recycled plastics intended to be used for the manufacture of materials and articles in contact with food, prior to its authorisation' (EFSA, 2008).
Additional information was provided by the applicant during the assessment process in response to requests from EFSA sent on 30 July 2021 and 22 December 2021 (see 'Documentation provided to EFSA').
The following information on the recycling process was provided by the applicant and used for the evaluation:
• General information: general description, existing authorisations.
• Specific information: recycling process, characterisation of the input, determination of the decontamination efficiency of the recycling process, characterisation of the recycled plastic, intended application in contact with food, compliance with the relevant provisions on food contact materials and articles, process analysis and evaluation, operating parameters.
2.2. Methodologies
The principles followed for the evaluation are described here. The risks associated with the use of recycled plastic materials and articles in contact with food come from the possible migration of chemicals into the food in amounts that would endanger human health. The quality of the input, the efficiency of the recycling process to remove contaminants as well as the intended use of the recycled plastic are crucial points for the risk assessment (EFSA, 2008).
The criteria for the safety evaluation of a mechanical recycling process to produce recycled PET intended to be used for the manufacture of materials and articles in contact with food are described in the scientific opinion developed by the EFSA Panel on Food Contact Materials, Enzymes, Flavourings and Processing Aids (EFSA CEF Panel, 2011). The principle of the evaluation is to apply the decontamination efficiency of a recycling technology or process, obtained from a challenge test with surrogate contaminants, to a reference contamination level for post-consumer PET, conservatively set at 3 mg/kg PET for contaminants resulting from possible misuse. The resulting residual concentration of each surrogate contaminant in recycled PET (Cres) is compared with a modelled concentration of the surrogate contaminants in PET (Cmod). This Cmod is calculated using generally recognised conservative migration models so that the related migration does not give rise to a dietary exposure exceeding 0.0025 µg/kg body weight (bw) per day (i.e. the human exposure threshold value for chemicals with structural alerts for genotoxicity), below which the risk to human health would be negligible. If the Cres is not higher than the Cmod, the recycled PET manufactured by such recycling process is not considered to be of safety concern for the defined conditions of use (EFSA CEF Panel, 2011).
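The arithmetic behind this scheme is straightforward, as the illustrative sketch below shows. The Cmod value used in the example is a placeholder, since the actual Cmod comes from the migration modelling described in EFSA CEF Panel (2011).

```python
REFERENCE_CONTAMINATION = 3.0   # mg/kg PET, conservative misuse level

def residual_concentration(efficiency_pct):
    """Cres in mg/kg PET after applying the measured decontamination efficiency."""
    return REFERENCE_CONTAMINATION * (1.0 - efficiency_pct / 100.0)

def is_of_no_safety_concern(efficiency_pct, c_mod_mg_per_kg):
    """Safety criterion: Cres must not exceed the modelled concentration Cmod."""
    return residual_concentration(efficiency_pct) <= c_mod_mg_per_kg

print(residual_concentration(97.5))         # 0.075 mg/kg PET
print(is_of_no_safety_concern(97.5, 0.1))   # with a placeholder Cmod of 0.1 mg/kg
```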
The assessment was conducted in line with the principles described in the EFSA Guidance on transparency in the scientific aspects of risk assessment (EFSA, 2009) and considering the relevant guidance from the EFSA Scientific Committee.
3.1. General information
According to the applicant, the recycling process INTCO MALAYSIA is intended to recycle food grade PET containers using the EREMA Basic technology. The recycled PET is intended to be used at up to 100% for the manufacture of materials and articles to be used in direct contact with all kinds of foodstuffs, such as bottles for mineral water, soft drink and beer as well as sheet/thermoforming applications for food containers, for long-term storage at room temperature, with or without hotfill. The final articles are not intended to be used in microwave and conventional ovens.
3.2. Description of the process
3.2.1. General description
The recycling process INTCO MALAYSIA produces recycled PET pellets or sheets from PET containers from post-consumer collection systems (kerbside and deposit systems).
The recycling process comprises the three steps below.
Input
• In step 1, the post-consumer PET containers are processed into hot caustic washed and dried flakes. This step may be performed by a third party or by the applicant.
Decontamination and production of recycled PET material
• In step 2, the flakes are crystallised and decontaminated under high temperature and vacuum.
• In step 3, the decontaminated flakes are extruded to produce pellets or sheets.
The operating conditions of the process have been provided to EFSA. Pellets and sheets, the final products of the process, are checked against technical requirements, such as intrinsic viscosity, colour and black spots.
3.2.2. Characterisation of the input
According to the applicant, the input material for the recycling process INTCO MALAYSIA consists of hot washed and dried flakes obtained from PET containers, e.g. bottles, previously used for food packaging, from post-consumer collection systems (kerbside and deposit systems). A small fraction may originate from non-food applications. According to the applicant, the proportion will be no more than 5%.
Technical specifications on the hot washed and dried flakes are provided, such as information on physical properties and on residual contents of moisture, poly(vinyl chloride) (PVC), glue, polyolefins, cellulose and metals (see Appendix A).
3.3. EREMA Basic technology
3.3.1. Description of the main steps
The general scheme of the EREMA Basic technology, as provided by the applicant, is reported in Figure 1. The steps are:
• Decontamination in a continuous reactor (step 2): The flakes are continuously fed into a reactor equipped with a rotating device, running under high temperature and vacuum for a pre-defined minimum residence time.
• Extrusion of the decontaminated flakes (step 3): The flakes, continuously introduced from the previous reactor, are melted in the extruder.
The process is run under defined operating parameters of temperature, pressure and residence time.
3.3.2. Decontamination efficiency of the recycling process
To demonstrate the decontamination efficiency of the recycling process INTCO MALAYSIA, a challenge test on step 2 was submitted to the EFSA.
PET flakes were contaminated with toluene, chlorobenzene, chloroform, methyl salicylate, phenylcyclohexane, benzophenone and methyl stearate, selected as surrogate contaminants in agreement with the EFSA guidelines (EFSA CEF Panel, 2011) and in accordance with the recommendations of the US Food and Drug Administration (FDA, 2006). The surrogates include different molecular masses and polarities to cover possible chemical classes of contaminants of concern and were demonstrated to be suitable to monitor the behaviour of PET during recycling (EFSA, 2008).
Solid surrogates (benzophenone and methyl stearate) and liquid surrogates (toluene, chlorobenzene, chloroform, methyl salicylate and phenyl cyclohexane) were added to 25 kg of conventionally recycled post-consumer PET flakes. Sixteen such barrels were prepared and stored for 7 days at 50°C with periodical agitation. Afterwards, the contaminated flakes were rinsed with 10% ethanol. For each batch, the concentrations of surrogates were determined. The barrels were shipped to the EREMA facilities, where they were merged into two batches of 200 kg each.
Step 2 of the EREMA Basic technology was challenged at industrial scale. The contaminated flakes (200 kg) were fed into the decontamination reactor. Samples were taken during the filling and at the outlet of the reactor at regular intervals and analysed for their concentrations of the applied surrogates.
Instead of being operated continuously (as it would be in the industrial process), the challenge test was run in batch mode. The Panel considered that the reactor ran at the same temperature and pressure as foreseen for the industrial process. In order to prove the representativeness of the residence time of the flakes in the challenge test, an additional challenge test running in continuous mode was provided. In this test, a mixture of green (contaminated) and clear (non-contaminated) flakes was challenged. At different residence times, the ratio of green and clear flakes exiting the reactor was determined. Based on the results, the Panel concluded that the residence time in the challenge test reactor corresponded to the minimum residence time in the industrial continuous reactor.
The decontamination efficiency of the process was calculated from the concentrations of the surrogates measured in the washed contaminated flakes before and after the EREMA Basic reactor (step 2). The results are summarised in Table 1. The decontamination efficiency ranged from 97.5% for chloroform and phenylcyclohexane up to 99.8% for toluene.
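For illustration, the efficiencies in Table 1 follow from a simple ratio of surrogate concentrations before and after the reactor. The input and output concentrations below are invented values chosen only to reproduce the reported range.

```python
def decontamination_efficiency(c_in, c_out):
    """Percentage removal of a surrogate across the decontamination step."""
    return 100.0 * (c_in - c_out) / c_in

surrogates = {                    # (mg/kg before, mg/kg after), hypothetical values
    "toluene": (520.0, 1.0),
    "chloroform": (410.0, 10.3),
}
for name, (c_in, c_out) in surrogates.items():
    print(f"{name}: {decontamination_efficiency(c_in, c_out):.1f} %")
```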
Discussion
Considering the high temperatures used during the process, the possibility of contamination by microorganisms can be discounted. Therefore, this evaluation focuses on the chemical safety of the final product. Technical specifications, such as information on physical properties and residual contents of PVC, glue, polyolefins and metals, were provided for the input materials (i.e. hot caustic washed and dried flakes, step 1). These are produced from PET containers, e.g. bottles, previously used for food packaging, collected through post-consumer collection systems. However, a small fraction may originate from non-food applications, such as bottles for soap, mouth wash or kitchen hygiene agents. According to the applicant, the collection system and the process are managed in such a way that in the input stream this fraction will be no more than 5%, as recommended by the EFSA CEF Panel in its 'Scientific opinion on the criteria to be used for safety evaluation of a mechanical recycling process to produce recycled PET intended to be used for manufacture of materials and articles in contact with food' (EFSA CEF Panel, 2011).
The process is adequately described. The washing and drying of the flakes from the collected PET containers (step 1) is conducted in-house or by third parties and, according to the applicant, this step is under control. The EREMA Basic technology comprises the continuous decontamination reactor (step 2) and extrusion (step 3). The operating parameters of temperature, pressure and residence time have been provided to EFSA.
A challenge test to measure the decontamination efficiency was conducted at industrial plant scale on the process step 2 (decontamination reactor). The reactor was operated under pressure and temperature conditions as well as residence time equivalent to those of the commercial process. Since step 2 was conducted with only contaminated flakes, cross-contamination could not occur. The Panel considered that this challenge test was performed correctly according to the recommendations of the EFSA guidelines (EFSA, 2008) and that step 2 was critical for the decontamination efficiency of the process. Consequently, temperature, pressure and residence time of step 2 of the process should be controlled to guarantee the performance of the decontamination (Appendix C).
The decontamination efficiencies obtained for each surrogate, ranging from 97.5% to 99.8%, have been used to calculate the residual concentrations of potential unknown contaminants in PET (Cres) according to the evaluation procedure described in the 'Scientific opinion on the criteria to be used for safety evaluation of a mechanical recycling process to produce recycled PET' (EFSA CEF Panel, 2011; Appendix B). By applying the decontamination percentages to the reference contamination level of 3 mg/kg PET, the Cres for the different surrogates was obtained (Table 2).
According to the evaluation principles (EFSA CEF Panel, 2011), the dietary exposure must not exceed 0.0025 µg/kg bw per day, below which the risk to human health is considered negligible. The Cres value should not exceed the modelled concentration in PET (Cmod) that could result, after 1 year at 25°C, in a migration giving rise to a dietary exposure exceeding 0.0025 µg/kg bw per day. Because the recycled PET is intended for the manufacture of containers (also for drinking water), the exposure scenario for infants has been applied (water could be used to prepare infant formula). A maximum dietary exposure of 0.0025 µg/kg bw per day corresponds to a maximum migration of 0.1 µg/kg of the contaminant into the infant's food and has been used to calculate Cmod (EFSA CEF Panel, 2011). Cres reported in Table 2 is calculated for 100% recycled PET, for which the risk to human health is demonstrated to be negligible. The relationship between the key parameters for the evaluation scheme is reported in Appendix B. On the basis of the provided data from the challenge test and the applied conservative assumptions, the Panel considered that under the given operating conditions the recycling process INTCO MALAYSIA using the EREMA Basic technology is able to ensure that the level of migration of unknown contaminants from the recycled PET into food is below the conservatively modelled migration of 0.1 µg/kg food. At this level, the risk to human health is considered negligible when the recycled PET is used at up to 100% to produce materials and articles intended for contact with all types of foodstuffs including drinking water.
The Panel noted that the input of the process originates from Malaysia. In the absence of data on misuse contamination of this input, the Panel used the reference contamination of 3 mg/kg PET (EFSA CEF Panel, 2011) that was derived from experimental data from an EU survey. Accordingly, the recycling process under evaluation using the EREMA Basic technology is able to ensure that the level of unknown contaminants in recycled PET is below a calculated concentration (Cmod) corresponding to a modelled migration of 0.1 µg/kg food.
Conclusions
The Panel considered that the INTCO MALAYSIA recycling process using the EREMA Basic technology is adequately characterised and that the critical step to decontaminate the PET is identified. Having examined the challenge test provided, the Panel concluded that temperature, pressure and residence time in the continuous reactor of step 2 are critical for the decontamination efficiency.
The Panel concluded that the recycling process INTCO MALAYSIA is able to reduce foreseeable accidental contamination of post-consumer food contact PET to a concentration that does not give rise to concern for a risk to human health if: i) it is operated under conditions that are at least as severe as those applied in the challenge test used to measure the decontamination efficiency of the process; ii) the input material of the process is washed and dried post-consumer PET flakes originating from materials and articles that have been manufactured in accordance with the EU legislation on food contact materials, with no more than 5% of the PET coming from non-food consumer applications; iii) the recycled PET is used at up to 100% for the manufacture of materials and articles for contact with all types of foodstuff, including drinking water, for long-term storage at room temperature, with or without hotfill.
The final articles made of this recycled PET are not intended to be used in microwave and conventional ovens and such uses are not covered by this evaluation.
Recommendation
The Panel recommended periodic verification that the input material to be recycled originates from materials and articles that have been manufactured in accordance with the EU legislation on food contact materials and that the proportion of PET from non-food consumer applications is no more than 5%. This adheres to good manufacturing practice and the Regulation (EC) No 282/2008, Art. 4b. Critical steps in recycling should be monitored and kept under control. In addition, supporting documentation should be available on how it is ensured that the critical steps are operated under conditions at least as severe as those in the challenge test used to measure the decontamination efficiency of the process.
6. Documentation provided to EFSA | 2022-05-12T15:20:03.612Z | 2022-05-01T00:00:00.000 | {
"year": 2022,
"sha1": "86c8b73e6a6c25b479c9f7a88f72af62d28e77ed",
"oa_license": "CCBYND",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.2903/j.efsa.2022.7232",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "e08e657d9f97056f0b2881916c888c744bbb2ff4",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
256688602 | pes2o/s2orc | v3-fos-license | A flexible electron-blocking interfacial shield for dendrite-free solid lithium metal batteries
Solid-state batteries (SSBs) are considered to be the next-generation lithium-ion battery technology due to their enhanced energy density and safety. However, the high electronic conductivity of solid-state electrolytes (SSEs) leads to Li dendrite nucleation and proliferation. Uneven electric-field distribution resulting from poor interfacial contact can further promote dendritic deposition and lead to rapid short circuiting of SSBs. Herein, we propose a flexible electron-blocking interfacial shield (EBS) to protect garnet electrolytes from the electronic degradation. The EBS formed by an in-situ substitution reaction can not only increase lithiophilicity but also stabilize the Li volume change, maintaining the integrity of the interface during repeated cycling. Density functional theory calculations show a high electron-tunneling energy barrier from Li metal to the EBS, indicating an excellent capacity for electron-blocking. EBS protected cells exhibit an improved critical current density of 1.2 mA cm−2 and stable cycling for over 400 h at 1 mA cm−2 (1 mAh cm−2) at room temperature. These results demonstrate an effective strategy for the suppression of Li dendrites and present fresh insight into the rational design of the SSE and Li metal interface. The high electronic conductivity of solid-state electrolytes leads to Li dendrite growth, thus hindering the commercialization of solid-state batteries. Here, the authors propose a flexible electron-blocking interface to protect garnet electrolytes from the electronic degradation.
Due to the rapid development of portable devices and electric vehicles, current lithium-ion batteries cannot meet future requirements for energy density, cycle life, and safety 1 . Solid-state batteries (SSBs) have received much attention for their potential as next-generation batteries 2 . Solid-state electrolytes (SSEs) paired with a Li metal anode and a high-voltage cathode not only enhance energy density but also improve safety through the elimination of flammable liquid electrolytes.
It is generally acknowledged that interfacial properties play a critical role in regulating Li deposition 19 . LLZO shows poor wettability with Li metal. Large interfacial resistance, which promotes Li dendrite nucleation, is therefore difficult to prevent. Various strategies have been proposed to enhance the interfacial contact between LLZO and Li metal, such as introducing intermediate layers [20][21][22] , cleaning surface contaminants 23,24 , increasing the pressure or temperature 25,26 , and constructing a three-dimensional (3D) interfacial structure 27 . These approaches improve the wettability and thus reduce Li dendrite propagation to some extent; however, lithium penetration through the electrolyte still occurs with increased current density or extended cycling time 28 . These results indicate that improvement of interfacial contact is not enough to address dendritic deposition on its own.
Although the mechanisms for Li dendrite growth remain elusive, the electron attack on garnet electrolytes has recently received increased scrutiny since the high electronic conductivity of SSEs was reported as the cause of certain types of Li dendrites 29 . The electron attack on garnet electrolytes was recently visualized by a scanning electron microscope (SEM) 30 . The electron beam in the SEM, irradiating the Ta-doped LLZO (LLZTO) surface, can expel Li from the LLZTO to satisfy charge neutrality. Due to heterogeneous Li + transport in LLZTO electrolytes, Li + preferentially accumulates in defects and voids, forming metallic Li as it combines with electrons 31,32 . Poor interfacial contact leads to an uneven electric field, resulting in large local currents at the interface and promoting rapid Li dendrite penetration through the electrolyte. The construction of an electron-blocking interface with excellent wettability is therefore important for the development of dendrite-free Li anodes in SSBs 33 . Unfortunately, most interlayers alloy with Li metal and are electronically conductive, while ionically conductive but electronically insulating materials show large interfacial resistance with Li metal 34 . A hybrid interlayer formed with electronically conductive nanoparticles embedded in an ionically conductive matrix was recently reported to achieve excellent interfacial wettability and a uniform electric field distribution 28 ; however, the improvement of electrochemical performance was still limited because the conductive interface does not prevent electron mobility within the garnet electrolyte.
Herein, we propose a flexible electron-blocking interfacial shield (EBS) to achieve uniform interfacial contact and prevent the dendritic deposits attributed to high electronic conductivity in garnet electrolytes. Polyacrylic acid (PAA) polymer at the interface reacts with molten Li at 250°C, forming Li-inserted PAA (LiPAA). Such an EBS leads to good wettability with Li metal, decreasing the interfacial resistance from 1104.3 to 54.5 Ω cm2 at 25°C. In addition, the flexible polymer interface relieves the interfacial stress generated by the changing volume of the Li anode, thus maintaining excellent interfacial contact during cycling 35 . The electron-blocking nature of the EBS is supported by density functional theory (DFT) calculations. Electrostatic potential profiles and density of states (DOS) profiles show that the LLZTO electrolyte and surface Li2CO3 contamination conduct electrons, while the EBS is electrically insulating. As a proof of concept, garnet electrolytes with the EBS show improved performance in both Li symmetric cells and LiFePO4/Li cells.
Results
Characterizations of the LLZTO@PAA. LLZTO ceramic pellets were fabricated by the hot-press sintering technique detailed in our previous study 36 . Cross-sectional SEM images of LLZTO show a transgranular fracture morphology without obvious grain boundaries, leading to a high relative density of over 99.5% ( Supplementary Fig. 1a) 37 . The X-ray diffraction (XRD) pattern, shown in Supplementary Fig. 2, shows diffraction peaks which match well with the standard pattern of cubic-phase garnet electrolytes (PDF#45-0109). The high relative density and pure cubic phase result in an ionic conductivity as high as 1.1 × 10 −3 S cm −1 at 25°C (Supplementary Fig. 1b). PAA exhibits an amorphous structure with a broad peak at 2θ =~18°3 8 . The PAA was dissolved in a dimethyl sulfoxide (DMSO) solution and coated on the surface of the LLZTO by drip casting. To evaluate the chemical stability between the LLZTO, PAA, and DMSO, the LLZTO particles were mixed with the PAA slurry and the DMSO solvent evaporated at 80°C. The XRD pattern in Supplementary Fig. 2 shows no change to the garnet structure, confirming the stability of the constituent components.
Time-of-flight secondary-ion mass spectroscopy (TOF-SIMS) was carried out to examine the thickness and homogeneity of the PAA thin films on garnet electrolytes. TOF-SIMS depth profiling reveals the composition of fragments from the specimen during the sputtering process 39 . Here, CHO2− and C2HO− fragments originate from the PAA layer, while LaO2−, ZrO2−, and TaO2− fragments come from the LLZTO underneath. As shown in Fig. 1a, the CHO2− and C2HO− signal intensities are initially high, but gradually decline after 45 s of Cs+ sputtering. In contrast, the LaO2−, ZrO2−, and TaO2− signals from the LLZTO are initially weak, but gradually increase during the 45 s of Cs+ sputtering. A uniform PAA film is thus shown to coat the LLZTO pellet. The thickness of the PAA coating is estimated to be 43 nm based on a sputtering rate of 0.96 nm s−1. Supplementary Fig. 3 shows the TOF-SIMS mappings of the CHO2−, C2HO−, LaO2−, ZrO2−, and TaO2− signals after sputtering. Strong LaO2−, ZrO2−, and TaO2− signals corresponding to LLZTO are observed from the sputtered region, and intense CHO2− and C2HO− signals corresponding to the PAA are observed across the pristine region. Three-dimensional views of the sputtered volume of LLZTO@PAA directly visualize the homogeneous coverage of PAA on the surface of the LLZTO electrolyte (Fig. 1b). In addition, Supplementary Fig. 4 shows the cross-sectional SEM image and energy dispersion spectrum (EDS) scanning of the PAA-coated LLZTO pellet. The thickness of the uniform PAA film is ~48 nm, which is consistent with the TOF-SIMS result.
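The thickness estimate quoted above is simply the product of the calibrated sputtering rate and the time needed to reach the substrate signal, as this minimal sketch illustrates.

```python
# Depth-profile thickness estimate: coating thickness = sputter rate x sputter time.
sputter_rate_nm_per_s = 0.96   # calibrated Cs+ sputtering rate (from the text)
time_to_substrate_s = 45.0     # time until LLZTO fragments dominate (from the text)
thickness_nm = sputter_rate_nm_per_s * time_to_substrate_s
print(f"estimated PAA thickness: {thickness_nm:.0f} nm")  # ~43 nm
```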
Topographical atomic force microscopy (AFM) images indicate that loose surface contaminants (e.g. Li 2 CO 3 ) from exposure to air leave LLZTO pellets with a rough surface (Fig. 1c) 40 . The surface becomes relatively smooth after coating with the PAA film (Fig. 1e). Interfacial hardness can greatly affect Li dendrite growth due to residual stresses during repeated cycling 41 .
Interfaces with poor ductility may be broken by the Li volume change, leading to poor interfacial contact and large resistance 35 . A soft interface is therefore required to relieve interfacial stress and maintain good interfacial contact. To compare the surface hardness before and after coverage with PAA, Young's modulus (E) mappings were created by fitting force-distance curves at 100 locations in a 30 × 30 μm 2 area. The average Young's modulus (Avg E ) for LLZTO is 20.6 GPa, while the Avg E for LLZTO@PAA is 3.3 GPa ( Fig. 1d and f). The decreased Avg E indicates a flexible interface which can serve as a stable interface during cycling and suppress Li dendrite growth.
Formation of the EBS by the substitution reaction. The EBS was formed in situ by the reaction of a PAA film with molten Li at 250°C. The reaction mechanisms and products were investigated using first-principles calculations. Supplementary Fig. 5a and b shows there are two possible reaction mechanisms between PAA and molten Li. One is a recombination reaction, where Li inserts directly into PAA polymer chains. The other is a substitution reaction, where Li replaces the H in a PAA -COOH group. The electrostatic potential profiles in Supplementary Fig. 5c show that the dehydrogenated interphase created by the substitution reaction is more stable. Differential electrochemical mass spectrometry (DEMS) was used to detect the H 2 release and further confirm the substitution reaction (Supplementary Fig. 6 and Supplementary Note 1). The escaping electrons accompanied by H 2 gas release suppress the interfacial electrostatic potential and prohibit electron permeation. The structure and composition of LiPAA were studied by SEM, X-ray photoelectron spectroscopy (XPS), Fourier Transform Infrared Spectroscopy (FTIR), and Raman, which can support the results of theoretical calculations (Supplementary Fig. 7 and Supplementary Note 2).
The work of adhesion (Wad) for dehydrogenated PAA on Li metal is 60.1 meV Å−2, much higher than the 58.0 meV Å−2 for LLZTO(110)/Li(001) and the 16.5 meV Å−2 for Li2CO3(001)/Li(001) (Fig. 2a, b and Supplementary Fig. 8a). Note that Li2CO3 is the main component of the contamination on LLZTO surfaces exposed to air 40 . As a result, the PAA layer improves the wettability of Li metal on LLZTO, especially when the LLZTO is covered by lithiophobic Li2CO3. The contact angle was calculated with the following (Young-Dupré) equation: Wad = σLi(1 + cos θ), where Wad is the interfacial work of adhesion, σLi is the surface energy of Li, and θ is the contact angle 42,43 . The θ for PAA/Li, LLZTO/Li, and Li2CO3/Li is ~0°, 85.9°, and 132.7°, respectively, indicating greatly improved wettability between the LLZTO and Li metal using a PAA intermediate layer.
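For illustration, the Young-Dupré relation above can be inverted for θ. Note that σLi is not quoted in the text, so the value below is an assumed placeholder and the computed angles will not exactly reproduce the DFT-derived values.

```python
import math

SIGMA_LI = 52.0  # meV A^-2; assumed Li surface energy (placeholder, not from the paper)

def contact_angle_deg(w_ad, sigma_li=SIGMA_LI):
    """Invert W_ad = sigma_Li * (1 + cos(theta)) for the contact angle theta."""
    cos_theta = w_ad / sigma_li - 1.0
    if cos_theta >= 1.0:
        return 0.0  # W_ad >= 2*sigma_Li implies complete wetting
    return math.degrees(math.acos(max(cos_theta, -1.0)))

# Work-of-adhesion values quoted in the text (meV A^-2)
for interface, w_ad in [("PAA/Li", 60.1), ("LLZTO/Li", 58.0), ("Li2CO3/Li", 16.5)]:
    print(interface, f"{contact_angle_deg(w_ad):.1f} deg")
```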
Electron-blocking property of the EBS. A LiPAA EBS effectively blocks electrons at the interface. This fact is confirmed by the electrostatic potential profiles and DOS simulation results shown in Fig. 2c and Supplementary Fig. 8b for LLZTO(110)/Li(001) and Li 2 CO 3 (001)/Li(001), respectively. There is no barrier to the transfer of electrons from the interface to the LLZTO electrolyte.
In the case of LLZTO(110)/Li(001), electrons and Li atoms preferentially deposit within the LLZTO rather than at the LLZTO/Li interface, a behavior corroborated by DOS results. The result is that LLZTO becomes electronically conductive when lithiated ( Fig. 2d and e), forming Li dendrites across LLZTO electrolytes 29 .
The interfacial electron density of Li2CO3(001)/Li(001) is higher than that of LLZTO(110)/Li(001). An abnormal space charge layer is shown in Supplementary Fig. 8c and d. The outer layer has a slightly higher electronic DOS than the inner layer, indicating that insulative Li2CO3 promotes electron permeation due to complex interfacial phenomena. In contrast, the electrostatic potential of the PAA/Li(001) interface is 1.92 eV lower than that of the LiPAA polymer, which is attributed to the dehydrogenation reaction (Fig. 2f). Electrons are contained within the Li metal and permeate only into the outer layer of the interface. In addition, Li deposition occurs preferentially at the interface rather than within the LiPAA, prohibiting the penetration of Li dendrites through the PAA. The electronically insulating nature is further confirmed by the DOS results for PAA/Li(001), shown in Fig. 2g and h. Electrons are captured within the Li/PAA interfacial bonds, while the inner layer remains insulating. To further confirm the electronically insulating property of LiPAA, the electronic conductivity of the LLZTO and the LLZTO@EBS was evaluated by DC polarization at 0.1 V. As shown in Supplementary Fig. 9, the electronic conductivity of the LLZTO@EBS is smaller than that of the LLZTO, indicating the excellent electron-blocking capability of the EBS. The LLZTO@EBS/Li wettability was evaluated with molten Li and LLZTO@PAA. As shown in Fig. 3a, molten Li forms a sphere on the LLZTO surface, indicating a large θ. This poor wettability leads to gaps at the interface. In contrast, molten Li completely wets LLZTO@EBS (Fig. 3b). A cross-sectional SEM image shows intimate contact between the LLZTO@EBS and the Li metal without any voids at the interface. This enhanced wettability is consistent with the simulated θ. Complete wetting significantly decreases the interfacial resistance, thus improving electrochemical performance.
SSBs benefitting from the EBS. Li/LLZTO@EBS/Li and Li/LLZTO/Li symmetric cells were assembled for electrochemical characterization. Electrochemical impedance spectroscopy (EIS) was carried out to compare the interfacial resistance of cells with and without the EBS. Figure 3c shows the impedance spectra obtained at 25°C. The impedance spectrum of the Li/LLZTO/Li cell exhibits one large semicircle. The starting point of the spectrum corresponds to the bulk resistance of the LLZTO, while the semicircle corresponds to the interfacial resistance between LLZTO and Li metal 37 . In an ideal situation, the charge transfer across the two Li/LLZTO interfaces should be identical in a symmetric cell. The interfacial resistance determined from the semicircle is therefore divided by two to obtain the value for each Li/LLZTO interface. Thus, the LLZTO/Li interfacial resistance is found to be 1104.3 Ω cm2. The Li/LLZTO@EBS/Li symmetric cell shows multiple semicircles resulting from the EBS bulk and the EBS/LLZTO interface at high frequency and the EBS/Li interface at low frequency. The overall resistance of the LLZTO@EBS/Li interface was 54.5 Ω cm2. The decrease in interfacial resistance from 1104.3 Ω cm2 to 54.5 Ω cm2 can be ascribed to the lithiophilicity of the EBS film. In addition, the temperature dependence of the interfacial resistance was characterized between 25°C and 85°C. The activation energy (Ea) of the EBS-modified and the unmodified interface was calculated using the Arrhenius law. The Ea of the LLZTO@EBS/Li interface is 0.38 eV, while the Ea of the LLZTO/Li interface is 0.51 eV (Fig. 3d). The decreased Ea is beneficial for Li+ migration across the interface 28 .
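The two quantities extracted here, a per-interface area-specific resistance from a symmetric cell and an activation energy from the Arrhenius law, reduce to a few lines of arithmetic. The sketch below uses a simplified Arrhenius form and invented resistance values; only the 25°C point matches the text.

```python
import numpy as np

def interfacial_asr(r_total_ohm, r_bulk_ohm, area_cm2):
    """ASR of one Li/electrolyte interface in a symmetric cell (ohm cm^2).
    The interfacial semicircle is shared by two identical interfaces."""
    return (r_total_ohm - r_bulk_ohm) * area_cm2 / 2.0

K_B = 8.617e-5  # Boltzmann constant, eV K^-1
temps_K = np.array([298.0, 313.0, 328.0, 343.0, 358.0])  # 25-85 C
asr_ohm_cm2 = np.array([54.5, 30.0, 17.5, 10.8, 7.0])    # hypothetical values

# Simplified Arrhenius fit: ln(1/R) vs 1/T has slope -Ea/kB
slope = np.polyfit(1.0 / temps_K, np.log(1.0 / asr_ohm_cm2), 1)[0]
print(f"Ea ~ {-slope * K_B:.2f} eV")
```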
The critical current density (CCD) was used as a measure of the interfacial stability and capacity for Li dendrite suppression. The CCD is defined as the current density where the cell reaches a short circuit. An applied current density was increased from 0.1 to 1.5 mA cm −2 with a step increase of 0.1 mA cm −2 per hour at 25°C. Figure 3e shows that the CCD of the Li/LLZTO/Li cell is as low as 0.2 mA cm −2 . The large overpotential over 1 V is a result of poor interfacial contact. The CCD of the Li/LLZTO@EBS/Li cell is significantly improved to 1.2 mA cm −2 . The voltage profile of the Li/LLZTO@EBS/Li cell remains relatively stable before short circuiting. The improvement in CCD can be attributed to combined contributions from the electronically insulating interface and from the relieved interfacial stress. More specifically, the LiPAA EBS facilitates Li + transport and prevents electronic degradation of the LLZTO bulk. In addition, the flexibility of the polymer interface alleviates interfacial stress, maintaining interfacial contact and suppressing Li dendrite growth. To our knowledge, a CCD of 1.2 mA cm −2 at room temperature is the highest value ever reported for garnet electrolytes (Supplementary Table 1). Despite the various surface modification approaches used to decrease interfacial resistance by enhancing wettability, the CCD is still limited due to electronic degradation and poor interfacial stability at high current densities.
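Read concretely, the CCD protocol is a simple current staircase; the sketch below generates the schedule described above (symmetric plating/stripping within each one-hour step is an assumption of this illustration).

```python
# Current staircase for the CCD test: 0.1 to 1.5 mA cm-2, +0.1 mA cm-2 each hour.
def ccd_schedule(start=0.1, stop=1.5, step=0.1, hold_h=1.0):
    j = start
    schedule = []
    while j <= stop + 1e-9:
        schedule.append((round(j, 1), hold_h))  # (current density, hold time)
        j += step
    return schedule

for current, hours in ccd_schedule():
    print(f"hold {current:.1f} mA cm-2 for {hours:.0f} h")
```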
Galvanostatic Li plating/stripping experiments were carried out to evaluate the long-term stability of Li+ transport and the effectiveness of dendrite suppression at the interface. As shown in Fig. 4a, the Li/LLZTO/Li cell exhibits an overpotential over 0.45 V for the first charge/discharge cycle at 0.2 mA cm−2 (0.1 mAh cm−2), indicating inhomogeneous Li deposition. A short circuit occurs within three cycles. The poor LLZTO/Li interfacial contact leads to uneven current distribution and serious electronic degradation at the defects, thus inducing Li dendrite growth 44 . In contrast, the Li/LLZTO@EBS/Li cell continuously operates for over 1000 h with an overpotential of 46.1 mV at 0.2 mA cm−2 (Fig. 4b). Moreover, the Li/LLZTO@EBS/Li cell shows stable cycling for 400 h at 0.5 mA cm−2 (0.25 mAh cm−2), while the Li/LLZTO/Li cell cannot be cycled even once (Fig. 4c and d). Increasing the current density and areal capacity to 1 mA cm−2 and 1 mAh cm−2, the Li/LLZTO@EBS/Li cell continues to show stable cycling for 400 h (Fig. 4e). To our knowledge, the performance of LLZTO@EBS is superior to the performance achieved with garnet electrolytes in all previous studies (Fig. 4f).
After disassembling the short-circuited Li/LLZTO/Li cell and reacting the Li metal with a water/alcohol solution, a rough LLZTO surface with voids and defects is revealed. The dark spots reveal areas where Li dendrites have grown into the LLZTO pellet (Fig. 5a), which is confirmed by SEM (Fig. 5b). The cross-sectional SEM image shows the proliferation of Li dendrites through the LLZTO grain boundaries (Fig. 5c and d), which caused the short circuit. As shown in Fig. 5e and f, the surface of the LiPAA-protected LLZTO remains smooth after 1000 h, without dark spots from dendrites. The flexible polymer EBS accommodates the Li volume change to maintain good contact (Fig. 5g). The slight increase in overall resistance from 209.1 to 224.3 Ω cm² confirms that no short circuit occurs after 1000 h of cycling (Supplementary Fig. 10). The dendrite-free grain boundaries of the LLZTO further confirm the ability of the EBS to prevent dendrite growth (Fig. 5h).
Discussion
To understand the mechanisms of dendrite suppression by the EBS, other interfacial layers were synthesized between the LLZTO electrolyte and the Li metal for comparison. A Au layer was first coated on the LLZTO surface by sputtering. Supplementary Fig. 11a shows the excellent wettability of the LLZTO@Au with molten Li due to the formation of a Au–Li alloy at 300 °C. This enhanced wettability leads to a dramatically decreased interfacial resistance of 43.2 Ω cm² (Supplementary Fig. 11b). The Li/LLZTO@Au/Li cell exhibits a CCD of 0.7 mA cm⁻² and stable cycling over 200 h at 0.5 mA cm⁻² (0.25 mAh cm⁻²). This is a marked improvement over the pristine Li/LLZTO/Li cell (Supplementary Figs. 11c and 12), in which the poor wettability of the LLZTO causes an uneven electric field and local hot spots that lead to Li dendrite nucleation and propagation (Fig. 6a). The improved cycling performance achieved with a Au interfacial layer supports the idea that interfacial wettability is important for a uniform distribution of the electric field (Fig. 6b).
Although the interfacial resistance of LLZTO@Au/Li is smaller than that of LLZTO@EBS/Li, the Li/LLZTO@Au/Li cell shows a much shorter cycle life than the Li/LLZTO@EBS/Li cell at 0.5 mA cm⁻² (Supplementary Fig. 12). This can be attributed to electron attack29. Electrons within the electrolyte can combine with lithium ions to form Li metal within polycrystalline electrolytes, especially at the grain boundaries. Pristine LLZTO, with its lithiophobic nature, accumulates electrons unevenly at locations with point contact, more readily forming Li dendrites (Fig. 6a). The LLZTO@Au is also unable to avoid this form of degradation due to the conductive nature of the interface (Fig. 6b). In contrast, the EBS-protected LLZTO prevents electrons from entering the electrolyte, avoiding dendrite formation (Fig. 6c)45. The flexible LiPAA polymer maintains interfacial contact by accommodating the Li volume change during cycling. In contrast, the overpotential of the Li/LLZTO@Au/Li cell gradually increases after 150 h of cycling due to fracturing of the interface, accelerating the short circuit (Supplementary Fig. 12).
To further explore the capability of the EBS layer in suppressing dendrites, PAA was coated on LLZTO electrolytes with a relative density of 96% (LLZTO(96%)), which facilitates Li dendrite growth compared with high-density LLZTO. The cross-sectional SEM image in Supplementary Fig. 13a shows that the LLZTO(96%) consists of small garnet grains with many grain boundaries and voids, leading to a decreased ionic conductivity of 5.4 × 10⁻⁴ S cm⁻¹ at 25 °C (Supplementary Fig. 13b). The increased number of grain boundaries traps electrons and leads to increased Li dendrite formation32,46. Supplementary Fig. 14 shows short-circuiting of the Li/LLZTO(96%)@Au/Li cell after 18 h of cycling at 0.2 mA cm⁻² (0.1 mAh cm⁻²). SEM images show a large amount of mossy Li dendrite within the grain boundaries of the short-circuited LLZTO(96%) electrolyte (Supplementary Fig. 15). In contrast, the Li/LLZTO(96%)@EBS/Li cell demonstrates stable cycling for 150 h at 0.2 mA cm⁻² (Supplementary Fig. 14).
Li⁺ transport at the interface also affects Li dendrite growth. For comparison, a traditional poly(ethylene oxide)/lithium bis(trifluoromethanesulfonyl)imide electrolyte layer (PEO) was coated on the surface of LLZTO pellets. Supplementary Fig. 16a shows a PEO layer thickness of ~5 μm. The EIS spectrum of the Li/LLZTO@PEO/Li cell shows two semicircles at 60 °C (Supplementary Fig. 16b). The small semicircle at high frequency corresponds to the resistance of the PEO layer, while the large semicircle at low frequency is attributed to the resistance of the LLZTO/PEO and PEO/Li interfaces. The interfacial resistance resulting from the PEO modification is 683.2 Ω cm² at 60 °C, one order of magnitude larger than that of the EBS modification. Although the PEO layer is also electronically insulating, sluggish Li⁺ transport at the interface leads to large overpotentials during the CCD test. The CCD of the Li/LLZTO@PEO/Li cell is 0.6 mA cm⁻² (Supplementary Fig. 16c). In addition, polymer electrolytes with a Li salt as an intermediate layer always exhibit a low Li⁺ transference number (<0.5), which can induce uneven Li deposition47,48. In contrast, the LiPAA polymer guides homogeneous Li deposition without the interference of anions. The Li/LLZTO@PEO/Li cell shows an overpotential of over 0.8 V and short-circuits after 50 h of cycling at 0.5 mA cm⁻² (0.25 mAh cm⁻²) at 60 °C (Supplementary Fig. 17). The Li/LLZTO@EBS/Li cell operates continuously for 400 h at 0.5 mA cm⁻² (0.25 mAh cm⁻²) at 25 °C, indicating that the good Li⁺ transport at the EBS interface is beneficial to performance.
To extend the substitution reaction to other polymers, a PEO film was coated on the surface of the LLZTO pellets and reacted with the molten Li by the same method. The interfacial resistance of the Li/LLZTO(PEO)/Li cell is even one order of magnitude larger than that of the Li/LLZTO/Li cell without modification (Supplementary Fig. 18). The blocked Li⁺ transport could be attributed to the following reasons: (1) compared with the –OH in PEO, the –COOH of PAA, as an acid group, reacts more readily with Li via the substitution reaction; (2) the –COOH groups of PAA are on the main polymer chain, while the –OH group of PEO is a terminal group, which provides a larger number of –COOH groups for the substitution reaction and for Li⁺ transport through segmental motion; (3) PAA (Mw: 450,000) has a much smaller molecular weight than PEO (Mw: ~1,000,000), which may be beneficial for Li⁺ transport due to the decreased crystallinity.

Full SSBs with a LiFePO4 (LFP) cathode and a Li metal anode were constructed using LLZTO@EBS and compared to bare LLZTO. Supplementary Fig. 19a shows the configuration of the SSBs. The introduction of an ionic liquid as a wetting agent enhances Li⁺ migration into the composite cathode for room-temperature feasibility23,40. The LFP/LLZTO@EBS/Li cell shows smaller polarization than the LFP/LLZTO/Li cell at various current rates (Supplementary Fig. 19b and c). The LFP/LLZTO@EBS/Li cell delivers a specific discharge capacity of 142.3 mAh g⁻¹ at 0.1 C; the discharge capacity is 130.2, 119.5, and 95.4 mAh g⁻¹ at 0.2, 0.5, and 1 C, respectively (Supplementary Fig. 19d). After high-rate cycling, the cell recovers a discharge capacity of 142.5 mAh g⁻¹ at 0.1 C. The high capacity and excellent cycling stability can be ascribed to good interfacial contact, an electronically insulating interface, and accommodation of the Li volume change. In contrast, the LFP/LLZTO/Li cell delivers a discharge capacity of only 125.4 mAh g⁻¹ with a high polarization (Supplementary Fig. 19d). Moreover, the SSB with LLZTO@EBS retains 82.8% capacity after 300 cycles at 0.2 C and 83.1% capacity after 200 cycles at 0.5 C at room temperature (Supplementary Fig. 19e and f). This cycling performance is better than that of the LFP/LLZTO/Li, LFP/LLZTO@Au/Li, and LFP/LLZTO@PEO/Li cells (Supplementary Fig. 20).
In summary, a flexible LiPAA EBS is formed between an LLZTO electrolyte and a Li metal anode to suppress Li dendrite growth. The interfacial resistance is dramatically decreased from 1104.3 to 54.5 Ω cm² at 25 °C owing to a substitution reaction at the interface. The flexible EBS interface alleviates interfacial stress to maintain interfacial contact during cycling. The electronically insulating nature of the EBS is supported by electrostatic potential profiles and DOS results based on DFT simulations. EBS-protected LLZTO electrolytes prevent electronic degradation, avoiding the direct reduction of Li⁺ to Li metal dendrites within the LLZTO. Li/LLZTO@EBS/Li cells exhibit a CCD as high as 1.2 mA cm⁻² at 25 °C and can operate continuously for over 1000 h at 0.2 mA cm⁻² and for 400 h at 1 mA cm⁻². The performance of the EBS layer is superior to that of electron-conducting Au and traditional PEO polymer interfacial layers. This work represents the rational design of an interface between SSEs and Li metal anodes, and presents a promising strategy to achieve long-life and dendrite-free SSBs with high energy density and excellent safety.
Methods
Fabrication of LLZTO@PAA. Ta-doped garnet Li6.4La3Zr1.4Ta0.6O12 (LLZTO) powders were fabricated by solid-state reactions, while LLZTO pellets were sintered by the hot-pressing technique36. The LLZTO ceramic pellets show a high relative density of 99.5 ± 0.5%, evaluated by Archimedes' principle (Supplementary Table 2).
Material characterization. Crystal structures of samples were examined by XRD (Bruker D2 Phaser), using Cu Kα radiation with 2θ in the range of 10°–80° and a step size of 0.02°. Surface and cross-section morphologies of the LLZTO pellets were investigated by scanning electron microscopy (SEM, S3400). TOF-SIMS testing was conducted using a TOF-SIMS IV instrument (ION-TOF GmbH, Germany) with a 25 keV bismuth liquid metal ion source and a base pressure of ≈10⁻⁸ mbar in the analysis chamber. Negative secondary ions were induced by primary ion beam bombardment on the surface of the LLZTO. The analysis area was 334 µm × 334 µm. Depth profiles were obtained by sputtering with a Cs⁺ ion beam (3 keV) over a 100 µm × 100 µm square. The sputtering rate was measured on a Si wafer as 0.96 nm s⁻¹ with a sputtered area of 100 µm × 100 µm. AFM (Dimension V equipped with a Nanoscope controller V and Nanoscope software 7.30, Veeco) was used to measure the elastic modulus of the LLZTO and LLZTO@PAA. The sensitivity and the spring constant of the AFM tip were measured under the contact model and thermal tune model, respectively. A force–strain mapping consisting of 10 × 10 points was measured over a 10 μm × 10 μm area. The elastic modulus mapping was fitted and plotted using the SPIP (Scanning Probe Image Processor) software. DEMS was used to detect the H2 release and confirm the reaction mechanism. The structure and composition of LiPAA were studied by SEM, XPS, FTIR, and Raman spectroscopy.
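Returning to the relative density quoted in the fabrication paragraph above, the Archimedes arithmetic can be sketched as follows. The immersion-liquid density and the theoretical LLZTO density are assumed inputs (water at room temperature and a literature-style value of ~5.5 g cm⁻³, respectively), and the masses are illustrative only.

def relative_density(m_dry_g, m_immersed_g, rho_liquid=0.998, rho_theory=5.5):
    # Relative density (%) from Archimedes' principle.
    # rho_liquid: immersion liquid density, g cm^-3 (water near 22 C assumed).
    # rho_theory: theoretical LLZTO density, g cm^-3 (assumed value).
    volume = (m_dry_g - m_immersed_g) / rho_liquid  # buoyancy gives sample volume
    rho_bulk = m_dry_g / volume
    return 100.0 * rho_bulk / rho_theory

print(f"{relative_density(2.750, 2.249):.1f} %")  # illustrative masses only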
To investigate the interface between the LLZTO ceramic pellets and the Li metal, SEM sample preparation was performed as follows: the Li metal on the LLZTO pellets was melted at 250 °C for 30 min and then cooled to room temperature. The LLZTO pellets with Li metal were fractured using thin-tipped tweezers. Cross-sectional samples were chosen for SEM investigation.
To investigate the Li dendrite growing along the grain boundary of LLZTO ceramic pellets, SEM sample preparation was performed as follows: short-circuited cells were disassembled in an Ar-filled glovebox. After completely removing the Li metal on the LLZTO pellets by sanding, dark spots were observed on the white LLZTO surface, indicating the endpoints of Li dendrite penetration. LLZTO pellets were fractured at dark spots using thin-tipped tweezers. Cross-sectional samples with a black line along grain boundaries were chosen for SEM investigation.
DFT calculations. Calculations were performed using the Vienna ab initio simulation package (VASP) code based on DFT49,50, employing projector augmented wave (PAW) potentials51 and the Perdew–Burke–Ernzerhof (PBE) generalized-gradient approximation (GGA) as the exchange-correlation functional52. In the PAA/Li model, a single chain was attached to Li(001), the Li atoms of the innermost layer were fixed at their bulk positions, and a 35 Å vacuum layer was built above the Li surface. The DFT-D2 method was employed to incorporate van der Waals interactions between atoms53. The Li2CO3(001)/Li(001) and LLZO(110)/Li(001) models were built following refs. 54–56. For structural relaxation and energy/DOS calculations, an energy cutoff of 520 eV and a 1 × 1 × 1 Monkhorst–Pack k-point grid were used. The convergence criteria for energy and force during structural relaxation were set to 1.0 × 10⁻⁵ eV and 0.01 eV Å⁻¹, respectively. All structures were visualized with VESTA57.
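One possible way to collect the stated settings in a driver script is through the ASE interface to VASP, sketched below. The authors do not say how their runs were scripted, so this is only an illustration; in particular, the IVDW flag value used for Grimme's DFT-D2 and the relaxation flags (ibrion, isif) are our assumptions.

from ase.calculators.vasp import Vasp

# Settings taken from the text: PBE-GGA, PAW potentials, 520 eV cutoff,
# 1x1x1 Monkhorst-Pack grid, 1e-5 eV energy and 0.01 eV/A force criteria.
calc = Vasp(
    xc="PBE",        # PBE exchange-correlation functional
    encut=520,       # plane-wave cutoff (eV)
    kpts=(1, 1, 1),  # Monkhorst-Pack k-point grid
    ediff=1.0e-5,    # electronic convergence criterion (eV)
    ediffg=-0.01,    # negative value: force criterion of 0.01 eV/A
    ivdw=1,          # Grimme DFT-D2 dispersion (flag value assumed)
    ibrion=2,        # conjugate-gradient ionic relaxation (assumed)
    isif=2,          # relax ions, keep the cell fixed for slab models (assumed)
)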
Electrochemical performance tests. The ionic conductivity of the LLZTO samples was measured with an impedance analyzer (Novocontrol Beta High Performance Impedance Analyzer) using an AC amplitude of 10 mV over a frequency range from 0.1 Hz to 20 MHz. Thin gold layers were sputtered onto both sides of the ceramic pellets as electrodes for conductivity testing. The LLZTO@PAA pellets were sandwiched between two pieces of Li metal to construct symmetric cells. Li metal electrodes were melted onto the two sides of the LLZTO@PAA pellets at 250 °C for 30 min in an Ar-filled glovebox before sealing in Swagelok-type cell molds. A pressure of 10 N cm⁻² was exerted on the ceramic plates using springs to maintain good contact. EIS measurements were performed in a frequency range from 1 MHz to 0.1 Hz with an amplitude of 10 mV on an Autolab instrument. Galvanostatic cycling tests were conducted on a NEWARE battery cycler (CT-4000) at different current densities at 25 °C. Li symmetric cells with the other surface modifications were cycled under the same conditions.
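The bulk ionic conductivity follows from the measured resistance and the pellet geometry as σ = L/(R·A). A minimal sketch of this arithmetic is given below; the pellet dimensions and resistance are illustrative numbers, not the reported values.

import math

def ionic_conductivity(R_ohm, thickness_cm, diameter_cm):
    # sigma = L / (R * A), returned in S cm^-1.
    area = math.pi * (diameter_cm / 2.0) ** 2
    return thickness_cm / (R_ohm * area)

# Illustrative pellet: 1 mm thick, 12 mm diameter, 120 ohm bulk resistance.
print(f"{ionic_conductivity(120.0, 0.1, 1.2):.2e} S cm^-1")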
The composite cathode was prepared as follows: 0.3 M lithium bis(trifluoromethanesulfonyl)imide (LiTFSI, Sigma-Aldrich) was dissolved in an ionic liquid (IL) (PY14TFSI, Sigma-Aldrich) to obtain a homogeneous IL-0.3 M solution. LiFePO4 (LFP), Super P conductive additive (SP), polyvinylidene fluoride (PVDF), and IL-0.3 M in a weight ratio of LFP:SP:PVDF:IL-0.3 M = 8:1:1:6 were then ground thoroughly in a mortar. Finally, the slurry was coated on Al foil to form a composite cathode with an active material loading of ~2 mg cm⁻².
Data availability
The data that support the findings of this study are available from the authors on reasonable request; see author contributions for specific data sets.
| 2023-02-09T14:19:20.671Z | 2021-01-08T00:00:00.000 | {
"year": 2021,
"sha1": "aa261e4e4039208a1adf016022fc96906c9960f6",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41467-020-20463-y.pdf",
"oa_status": "GOLD",
"pdf_src": "SpringerNature",
"pdf_hash": "aa261e4e4039208a1adf016022fc96906c9960f6",
"s2fieldsofstudy": [
"Materials Science",
"Engineering",
"Chemistry"
],
"extfieldsofstudy": []
} |
232363984 | pes2o/s2orc | v3-fos-license | Effects of Dendrobium Polysaccharides on the Functions of Human Skin Fibroblasts and Expression of Matrix Metalloproteinase-2 under High-Glucose Conditions
The effects of Dendrobium polysaccharides (PDC) on the functions of human skin fibroblasts (HSFs) and on the expression of matrix metalloproteinase-2 under high-glucose conditions, as well as the underlying mechanism, remain unclear. We used the 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) assay and flow cytometry to evaluate cell viability and apoptosis. Collagen levels were determined by the Sircol™ Collagen Assay. Real-time quantitative polymerase chain reaction (RT-PCR) was used to detect the expression of matrix metalloproteinase-2 (MMP-2) and tissue inhibitor of metalloproteinase-2 (TIMP-2) mRNA. We found the following: (1) under the high-glucose condition, HSF cell viability, TIMP-2 mRNA expression, and collagen levels were reduced, while the apoptosis rate and MMP-2 mRNA expression increased (P < 0.05). (2) In the high-glucose + PDC group, PDC reversed the changes in the collagen level, viability, and apoptosis rate of the HSF cells caused by high glucose, with TIMP-2 mRNA and protein expression increased and the level of MMP-2 mRNA decreased (P < 0.05). This is the first study to suggest that PDC can exert protective effects on HSFs under high-glucose conditions, which may be related to upregulation of TIMP-2 expression and inhibition of MMP-2 expression.
Introduction
The skin of diabetic patients is known to be susceptible to damage and does not heal easily following an injury. Diabetic skin lesions are related to diabetic vascular disease, neuropathy, cell dysfunction, and abnormal cytokine secretion; however, the specific mechanism of their formation remains unknown [1,2]. Thus, evaluation of the molecular mechanism leading to the development of diabetic skin ulcers and the determination of appropriate interventions are of great significance for the diagnosis and effective treatment of this condition. Human skin fibroblasts (HSFs) are a type of repair cell in the dermis and play an important role in tissue metabolism. Cells of this type also secrete collagen as well as noncollagen components of the extracellular matrix, and play a critical role in wound healing [3,4]. In addition, they are important in maintaining cell elasticity, retaining water, and supporting epidermal cells [5][6][7][8]. It has been determined that fibroblasts are crucial effector cells in the processes related to diabetic skin lesions and participate in the entire wound repair process. The changes in the biological characteristics of these cells are fundamental to the development of diabetic skin lesions. It is noteworthy that the decrease in the number and activity of fibroblasts is also one of the reasons for the reduced synthesis of collagen [9,10]. Many in-depth studies have shown that the amount and vitality of skin fibroblasts are also important in diabetic skin injury and healing [11,12]. Dendrobium polysaccharide, also called polysaccharides of Dendrobium candidum (PDC), is the main bioactive substance of Dendrobium candidum. Previous studies confirmed that PDC could inhibit islet cell apoptosis and necrosis, protect islet cells, and prevent diabetes. In addition, they have also been shown to prevent calcium overload and inhibit corneal epithelial cell apoptosis. PDC prevent skin photoaging; therefore, they display protective and repair properties [13][14][15]. However, their effects on human skin fibroblasts under high-glucose conditions are yet to be confirmed. Consequently, in the present study, we used various concentrations of PDC to treat HSF in vitro under high-glucose (HG) conditions. We observed the effects of the PDC on the activity and apoptosis of HSF, as well as on the expression of matrix metalloproteinase-2 (MMP-2). The key aim was to establish the protective properties of the PDC on HSF in diabetes and to provide new directions for the treatment of diabetic skin lesions.
Extraction of Polysaccharide of Dendrobium candidum.
The extraction of polysaccharides from the protocorm of Dendrobium officinale has been studied previously. The process in our study was as follows: raw material, weighing, hot water extraction, rough filtration, evaporation and concentration, filtrate standing, vacuum filtration, water bath concentration, freeze-drying, and crude polysaccharide. Fresh raw materials of Dendrobium officinale (2000 g) were weighed and cut into small sections to facilitate the extraction of polysaccharides; 200 g was reserved for freeze-drying. The remaining parts were extracted with hot water several times. According to the ratio of medicine to water, the material was first extracted with hot water (medicine : hot water = 1 : 4) for 2 hours and the initial extraction solution was reserved; the mixture was then extracted again (initial extraction solution : water = 1 : 2) for one more hour. The filtrate was then evaporated and condensed to a certain volume and freeze-dried for 24 hours to obtain the crude polysaccharide. The extraction technology of polysaccharides from the protocorm of Dendrobium officinale was optimized, and the extraction conditions were established by an L9(3⁴) orthogonal design. The experiment was repeated three times. The crude polysaccharide from the protocorm of Dendrobium officinale was extracted by the optimized technology, and its purity was 51.3%. For the experiments, the crude polysaccharide of the Dendrobium candidum protocorm was dissolved in triple-distilled water to prepare an initial solution of 10 mg/mL, which was filtered and sterilized. Before use, culture medium was added to dilute it to the required concentration.
Primary Culture of HSF.
The procedure was conducted based on a previously reported method [16]. The infant foreskin was aseptically removed and treated with penicillin (100 μg/mL) and streptomycin (100 μg/mL). It was then thoroughly washed with phosphate-buffered saline (PBS) and D-Hanks' solution. After removing the subcutaneous tissue, the skin specimen was cut into small pieces using sterile ophthalmic scissors and placed in a type II collagenase digestion solution at 4 °C for overnight digestion. The epidermis and dermis were aseptically separated. The epidermis was discarded, whereas the dermis was cut and transferred to a solution containing 0.25% trypsin (without ethylenediaminetetraacetic acid (EDTA)) for approximately 15 min. The digestion was stopped by the addition of high-glucose DMEM medium containing 10% calf serum. The solution was centrifuged at 800 g for 5 min, and the resulting pellet was inoculated into a 25 mL plastic culture flask. Approximately 4 mL of high-glucose DMEM culture solution containing 10% calf serum was added carefully, so as not to float the tissue. The culture flask was placed in a 37 °C, 5% CO2 incubator overnight [16]. The following day, 5 mL of medium was added. The primary cells were passaged when they were >80% confluent. During passaging, the cells were digested with 0.25% trypsin (without EDTA). The culture flask was tapped while the retraction of the cell bodies was observed under an inverted microscope, and the digestion was then stopped by the addition of high-glucose DMEM medium containing 10% calf serum. The cells on the wall of the flask were gently pipetted off and collected into a 10 mL centrifuge tube. The cells were centrifuged at 800 g for 5 min, and the supernatant was discarded. The cells were then inoculated and routinely cultured.
High-Glucose Model Establishment and Experimental Grouping.
The HSF cells were cultured and passaged. When the cells reached 80% confluence, the serum-containing culture medium was discarded. The serum-free medium was changed after 24 h, and cultures containing different concentrations of glucose were subsequently prepared (5, 10, 15, 20, 25, 30, 35, and 40 mmol/L glucose). After observation at 12, 24, 36, 48, and 72 h, the optimal glucose concentration for the high-glucose model of the HSF cells was established at 25 mmol/L. The experiment was divided into the following groups: (1) control group (control, C): HSF cells cultured with DMEM medium containing 10% fetal bovine serum for 48 h; (2) high-glucose group (high glucose, HG): HSF treated with culture solution containing 25 mmol/L glucose for 48 h; (3–5) PDC groups at different doses: HSF treated with culture solution containing 100, 200, or 400 μg/mL PDC for 48 h; and (6–8) high glucose + different doses of PDC (HG + PDC): HSF treated with culture solution containing 25 mmol/L glucose and 100, 200, or 400 μg/mL PDC for 48 h. Mannitol (25 mmol/L) was used as the osmotic pressure control.
MTT Cell Viability Test.
The 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) method was used to detect cell viability. The principle of this assay is that dehydrogenase enzymes in living cells reduce the tetrazolium salt to a water-insoluble blue product, formazan, which precipitates in the cells; dead cells do not exhibit this activity. Dimethyl sulfoxide dissolves the blue-purple crystals in the cells, and the color depth is proportional to the amount of formazan [17,18].
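A minimal sketch of the viability arithmetic usually applied to MTT absorbance readings is given below. The blank-correction scheme is an assumption, since the text does not specify it, and the OD values are invented, not measured data.

import numpy as np

def viability_percent(od_treated, od_control, od_blank=0.0):
    # Cell viability (%) relative to the untreated control,
    # as a blank-corrected ratio of formazan absorbances.
    od_treated = np.asarray(od_treated, dtype=float)
    return 100.0 * (od_treated - od_blank) / (od_control - od_blank)

# Illustrative OD readings for three replicate wells of one treated group.
print(viability_percent([0.62, 0.60, 0.64], od_control=0.85, od_blank=0.05))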
Collagen Detection Method.
The cell supernatants of each group were collected for the determination of the collagen content in the cell culture medium. A 100 μL aliquot of the culture medium containing collagen was separated, and the concentration reagent was added. The supernatant was discarded following centrifugation at 4 °C overnight. Standard collagen was diluted to 0, 0.01, 0.05, 0.1, 0.2, and 1.00 mg/L, respectively. Then, 500 μL of Sircol dye was added to each sample and incubated for 30 min. The samples were then washed with Acid-Salt Wash Reagent. Next, 250 μL of alkali reagent was added to each tube before vortex mixing. A 200 μL aliquot of each sample was transferred onto a 96-well plate and analyzed using a microplate reader. The detection wavelength was 555 nm, and the absorbance value was zeroed with water. The absorbances of the blank reagents, standard collagen, and test samples were measured three times for accuracy. The test error was ±10%.
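Sample concentrations are read off a standard curve built from the dilutions listed above. The sketch below fits a linear standard curve by least squares; the A555 readings for the standards are hypothetical placeholders, not real measurements.

import numpy as np

# Standards from the protocol (mg/L) and hypothetical A555 readings.
std_conc = np.array([0.0, 0.01, 0.05, 0.1, 0.2, 1.0])
std_a555 = np.array([0.00, 0.01, 0.05, 0.10, 0.19, 0.98])  # illustrative

slope, intercept = np.polyfit(std_conc, std_a555, 1)  # linear standard curve

def collagen_mg_per_L(a555):
    # Invert the standard curve to get the sample concentration.
    return (a555 - intercept) / slope

print(f"{collagen_mg_per_L(0.42):.3f} mg/L")  # example sample reading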
Detection of the Effect of PDC on the Apoptosis of HSF in the HG Environment Using Annexin V-FITC/PI and Flow Cytometry.
The cells of the nine groups were harvested during the logarithmic growth period. Following digestion with trypsin without EDTA, the cells were placed in centrifuge tubes and rinsed twice with PBS. The cells were subsequently centrifuged for 5 min at 800 g, collected, and placed in 1 mL Eppendorf tubes. Binding buffer (500 μL) was added to suspend the cells. Then, 5 μL of annexin V-FITC and 5 μL of propidium iodide were added and allowed to react for 5–15 min at room temperature.
Determination of the Effect of PDC on the Expression of TIMP-2 mRNA and MMP-2 mRNA in the HSF Cells in the HG Environment Using RT-qPCR.
Nine groups were treated according to the described procedures. The cells were collected to determine the expression of TIMP-2 mRNA and MMP-2 mRNA in the HSF cells using real-time quantitative polymerase chain reaction (RT-qPCR). The extraction of total RNA from the HSF cells, as well as the reverse transcription of RNA and the RT-qPCR, was carried out using the SYBR® method. The target gene sequences were retrieved from the National Center for Biotechnology Information (NCBI) database. Primer 5 software was used to design the primers. The composition of the quantitative PCR system and the amplification were conducted according to the method previously described by Megha [19] (Table 1).
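Relative mRNA expression from RT-qPCR data is commonly computed with the 2^(−ΔΔCt) method. Whether the protocol of Megha [19] uses exactly this normalization is not stated, so the sketch below is only one standard way to process Ct values; the use of GAPDH as the reference gene and all Ct numbers are assumptions.

def fold_change(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    # 2^(-ddCt) relative expression of a target gene versus the control group.
    d_ct_sample = ct_target - ct_ref            # normalize to reference gene
    d_ct_control = ct_target_ctrl - ct_ref_ctrl
    dd_ct = d_ct_sample - d_ct_control
    return 2.0 ** (-dd_ct)

# Illustrative Ct values: MMP-2 versus an assumed GAPDH reference gene.
print(f"{fold_change(24.1, 18.0, 25.6, 18.1):.2f}-fold")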
Statistical Treatment.
All the data were analyzed using the SPSS version 19.0 software package. Measurement data were presented as mean ± standard deviation (SD). The independent-samples t-test was performed to compare the average values between two groups satisfying the normal distribution and homogeneity of variance. Significance between groups was also evaluated by one-way analysis of variance (ANOVA) followed by a Tukey HSD post hoc test. One-way ANOVA was used for the comparison of average values among the various groups; findings were given as mean ± SD and compared by Dunnett's test. All groups were compared in pairs. P < 0.05 indicated that the difference was statistically significant.
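The one-way ANOVA with Tukey HSD described above maps onto standard library calls. A minimal Python sketch is given below with invented group data (n = 3 per group, matching the study design); the group means are placeholders, not study results.

import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
control = rng.normal(100, 8, 3)       # illustrative viability values
high_glucose = rng.normal(60, 8, 3)
hg_pdc = rng.normal(85, 8, 3)

print(f_oneway(control, high_glucose, hg_pdc))  # one-way ANOVA

values = np.concatenate([control, high_glucose, hg_pdc])
labels = ["C"] * 3 + ["HG"] * 3 + ["HG+PDC"] * 3
print(pairwise_tukeyhsd(values, labels, alpha=0.05))  # Tukey HSD post hoc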
Effects of High Glucose Concentration and PDC on the Viability of the HSF Cells.
The obtained results are shown in Figure 1 and Table S1. As can be seen, compared with the control group, the HSF cell viability of the high-glucose group was significantly reduced (P < 0.05). In contrast, the viability of the iso-osmotic mannitol group was not significantly different from that of the control group (P > 0.05), indicating that a high glucose concentration leads to reduced viability of the HSF cells and suggesting that the induced cell damage is not related to the osmotic pressure. No significant difference in cell viability between the Dendrobium polysaccharide (100, 200, and 400 μg/mL) groups and the control group was observed (P > 0.05). Compared with the high-glucose group, the cell viability of the high-glucose + PDC (100, 200, and 400 μg/mL) groups significantly increased, demonstrating a concentration dependence (P < 0.05).
Effects of High Glucose Concentration and PDC on HSF Cell Apoptosis.
The results of the analysis are shown in Figure 2 and Table S2. It was determined that the apoptosis rate of the HSF cells in the high-glucose group significantly increased compared with the control group (P < 0.05). Conversely, the apoptosis rate of the group with the same mannitol content was not significantly different from the control group (P > 0.05), indicating that a high glucose concentration leads to increased apoptosis of the HSF cells and that the induced cell damage is not related to the osmotic pressure. Apoptosis rates of the PDC (100, 200, and 400 μg/mL) groups were not significantly different from those of the control group (P > 0.05). Notably, compared with the high-glucose group, the high-glucose + PDC (100, 200, and 400 μg/mL) groups showed reduced high glucose-induced apoptosis in a concentration-dependent manner (P < 0.05).
Effects of High Glucose Concentration and PDC on HSF Cell Collagen Content in the Cell Culture Fluid.
Compared with the control group, the collagen level in the HSF cell culture fluid of the high-glucose group significantly decreased (P < 0.05) (Figure 3 and Table S3). Collagen levels in the cell culture fluid of the PDC (100, 200, and 400 μg/mL) groups did not change compared with the control group (P > 0.05). Moreover, in comparison with the high-glucose group, the collagen level in the culture fluid of the high-glucose + PDC (100, 200, and 400 μg/mL) groups significantly increased in a concentration-dependent manner (P < 0.05).
Effect of High Glucose Concentration and PDC on the mRNA Expression of MMP-2 and TIMP-2 in the HSF Cells.
Compared with the control group, the expression of MMP-2 mRNA significantly increased in the high-glucose HSF cells (P < 0.05), while the expression of TIMP-2 mRNA decreased (Figures 4 and 5 and Tables S4 and S5). Compared with the control group, the PDC (100, 200, and 400 μg/mL) downregulated the expression of MMP-2 mRNA and upregulated the expression of TIMP-2 mRNA in the HSF cells in a concentration-dependent manner (P < 0.05). Compared with the high-glucose group, the expression of MMP-2 mRNA in the HSF cells in the high-glucose + PDC (100, 200, and 400 μg/mL) groups was notably reduced, whereas the expression of TIMP-2 mRNA increased in a concentration-dependent manner (P < 0.05).
Discussion
The high-glucose environment mimicking the diabetic state can not only reduce the migration and proliferation abilities of fibroblasts, but also increase cell apoptosis. As previously mentioned, fibroblasts constitute the principal repair cells in wound healing and are among the main components of granulation tissue. They synthesize and secrete the extracellular matrix, including collagen, fibronectin, and hyaluronic acid [20][21][22][23].
Thus, in the pathological state of diabetes, increased fibroblast apoptosis inevitably hinders the healing of skin ulcers. The dynamic balance of the matrix metalloproteinase (MMP) family is abnormal during diabetes. The increase in the expression of MMPs and the enhanced enzyme activity can lead to excessive degradation of the extracellular matrix, resulting in the formation of a chronic refractory wound surface. Studies have shown that MMP-2 is increased in undamaged dermal fibroblasts [24][25][26].
[Figure 1 caption: The MTT assay was used to detect the cell viability. All the data were expressed as mean ± SD (n = 3). *P < 0.05, compared with the control group; #P < 0.05, compared with the high-glucose group; ▲P < 0.05, compared with the high-glucose + Dendrobium polysaccharide (100 μg/mL) group; ◆P < 0.05, compared with the high-glucose + Dendrobium polysaccharide (200 μg/mL) group.]
It has been demonstrated previously that, in traditional Chinese medicine, Dendrobium officinale strengthens the spleen and nourishes the stomach, lungs, and kidneys. It is typically used to treat chronic gastritis, hypertension, diabetes, chronic nephritis, and nephropathy, and is often employed as an antitumor and antiaging agent [27,28]. Similar substances were shown to have effects on keratinocytes or endothelial cells, which are also involved in diabetic wound healing. Mo et al. explored the effect of erianin, a bibenzyl compound extracted from Dendrobium chrysotoxum, on proliferation and apoptosis in HaCaT cells and demonstrated that erianin could be recognized as a potential antipsoriasis drug that inhibited proliferation and induced apoptosis of HaCaT cells through ROS-mediated JNK/c-Jun and AKT/mTOR signaling pathways [29]. Erianin was also found to induce a JNK/SAPK-dependent metabolic inhibition in human umbilical vein endothelial cells [30]. The ethanol extract of Dendrobium chrysotoxum Lindl ameliorates retinal angiogenesis during the development of diabetic retinopathy via inhibiting the expression of VEGF/VEGFR2 and other proangiogenic factors such as MMP-2/9 [31]. The present study found that, under high-glucose conditions, the HSF cell viability was significantly reduced, apoptosis rates considerably increased, and collagen levels were notably reduced. Furthermore, the mRNA expression of TIMP-2 decreased, while the mRNA expression of MMP-2 increased in the HSF cells. The addition of PDC (100, 200, and 400 μg/mL) to HSF cells did not have a significant effect on cell viability, apoptosis, or collagen levels. However, PDC (100, 200, and 400 μg/mL) could reverse the changes in the collagen level, viability, and apoptosis rate of the HSF cells caused by high glucose, with the expression of TIMP-2 mRNA and protein increased and the level of MMP-2 mRNA decreased in a concentration-dependent manner (P < 0.05). Hence, the present study confirms that PDC can exhibit certain protective effects on human skin fibroblast function under high-glucose conditions. Besides, we inferred that PDC could increase the collagen synthesis of the skin fibroblasts by upregulating TIMP-2 and inhibiting the mRNA expression of MMP-2, which may be conducive to the repair of diabetic skin lesions. Further research is needed to determine the signaling pathways involved in the regulation of collagen synthesis by TIMP-2 in skin fibroblasts. The overexpression of MMP-2 in the involved psoriatic epidermis was found to be accompanied by basement membrane alterations with degraded collagen type IV [32]. Previous studies have also shown that whole-brain irradiation mediates degradation of collagen type IV by altering the balance of MMP-2 and TIMP-2 levels in the brain [33]. MMPs and TIMPs are responsible for remodeling in the healthy extracellular matrix, where they are produced in a coordinated manner, and Kozaci et al. found that pro-MMP-2 levels negatively correlated with the collagen content in herniated disc material [34]. The molecular mechanisms in the pathways regulating TIMP-2 and MMP-2 expression have been described in many diseases, in diabetes in particular but also in other conditions. The study conducted by Ho et al. [35] found that oxidative stress induced by high glucose might be involved in the opposite effects of MMP-2 activation and TIMP-2 downregulation.
[Figure 4 caption: The total RNA of the cells was collected, and the mRNA expression of MMP-2 in the cells was detected by real-time quantitative PCR. All the data were expressed as mean ± SD (n = 3). *P < 0.05, compared with the control group; ▼P < 0.05, compared with the Dendrobium polysaccharide (100 μg/mL) group; ★P < 0.05, compared with the Dendrobium polysaccharide (200 μg/mL) group; #P < 0.05, compared with the high-glucose group; ▲P < 0.05, compared with the high-glucose + Dendrobium polysaccharide (100 μg/mL) group; ◆P < 0.05, compared with the high-glucose + Dendrobium polysaccharide (200 μg/mL) group.]
This reactive oxygen species (ROS)-dependent MMP-2 activation, in turn, mediated high-glucose-induced cell apoptosis in human umbilical vein endothelial cells (HUVECs). Besides, the transforming growth factor (TGF)/SMAD family member 3 (Smad3) pathway was found to regulate MMP/TIMP activity, inducing activation of the fibrosis mediators and suppressing the degradation of the extracellular matrix [36][37][38]. TIMPs could control the MMP activity, and MMP-2 could digest fibrillar collagen peptides and newly formed collagen fibers to degrade collagen [39,40]. Furthermore, more evidence showed that TIMP-2 and MMP-2 expression could play an important role in the development of other diseases. MMP-2, TIMP-2, and MMP-2/TIMP-2 ratios may act as biomarkers for susceptibility to systemic lupus erythematosus (SLE) [41]. The pathogenesis of chronic rhinosinusitis might also be related to the regulation of MMP-2 and TIMP-2 expression; steroids could inhibit smoke-regulated MMP-2 and TIMP-2 production and activation through the reactive oxygen species (ROS)/PI3K, Akt, and NF-κB signaling pathways in nasal fibroblasts [42]. However, the causal relationship between MMP-2/TIMP-2 activity and collagen in skin fibroblasts remains unclear and needs to be further investigated. Selective MMP-2 inhibitors and MMP-2 knockout mice could be employed as in-depth pharmacological and genetic approaches to elucidate a mechanistic link among the MMP-2 expression, TIMP-2 expression, and changes in the collagen content in skin fibroblasts exposed to elevated glucose.
[Figure 5 caption: The total RNA of the cells was collected, and the mRNA expression of TIMP-2 in the cells was detected by real-time quantitative PCR. All the data were expressed as mean ± SD (n = 3). *P < 0.05, compared with the control group; ▼P < 0.05, compared with the Dendrobium polysaccharide (100 μg/mL) group; #P < 0.05, compared with the high-glucose group; ▲P < 0.05, compared with the high-glucose + Dendrobium polysaccharide (100 μg/mL) group; ◆P < 0.05, compared with the high-glucose + Dendrobium polysaccharide (200 μg/mL) group.]
In conclusion, our study revealed that PDC can exert protective effects on HSF under high-glucose conditions, which may be related to upregulation of TIMP-2 expression and inhibition of MMP-2 expression, providing new concepts for the prevention and treatment of diabetic skin ulcers or wounds using traditional Chinese medicine, including Dendrobium officinale.
Data Availability
The data generated or analyzed during this study are included in this published article. The datasets generated and/or analyzed during the current study are available from the corresponding author upon reasonable request.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
Supplementary Materials
Table S1: effects of high glucose concentration and PDC on the viability of the HSF cells. Table S2: effects of high glucose and PDC on apoptosis of HSF cells. Table S3: effect of high glucose concentration and PDC on the collagen content in the cell culture fluid. Table S4: effects of high glucose concentration and PDC on the mRNA expression of MMP-2 in HSF cells. Table S5: effects of high glucose concentration and PDC on the mRNA expression of TIMP-2 in HSF cells. | 2021-03-27T05:14:56.685Z | 2021-03-08T00:00:00.000 | {
"year": 2021,
"sha1": "47d45ea28b19266bb199810b932889300d0f19b8",
"oa_license": "CCBY",
"oa_url": "https://downloads.hindawi.com/journals/ije/2021/1092975.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "47d45ea28b19266bb199810b932889300d0f19b8",
"s2fieldsofstudy": [
"Medicine",
"Environmental Science",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
246281308 | pes2o/s2orc | v3-fos-license | The ALR-RSI score is a valid and reproducible scale to assess psychological readiness before returning to sport after modified Broström-Gould procedure
Purpose Psychological readiness scores have been developed to optimize the return to play after many sports-related injuries. The purpose of this study was to statistically validate the ankle ligament reconstruction-return to sport injury (ALR-RSI) scale after the modified Broström-Gould (MBG) procedure. Methods A version of the ACL-RSI scale with 12 items was adapted to quantify the psychological readiness to RTS after MBG, and the construct validity, discriminant validity, feasibility, reliability, and internal consistency of the scale were described according to the COSMIN methodology. The term "knee" was replaced by "ankle". The AOFAS and Karlsson scores were used as reference patient-reported outcome measures (PROMs). Results A total of 71 patients were included. The ALR-RSI score after the MBG procedure was highly (r > 0.5) correlated with the AOFAS and Karlsson scores, with Pearson coefficients r = 0.69 [0.54–0.80] and 0.72 [0.53–0.82], respectively. The mean ALR-RSI score was significantly greater in the subgroup of 55 patients who resumed sports activity than in those who no longer practiced sport: 61.9 (43.8–79.6) vs 43.4 (25.0–55.6) (p = 0.01). The test–retest showed "excellent" reproducibility, with a ρ intraclass correlation coefficient of 0.93 [0.86–0.96]. The Cronbach's alpha statistic was 0.95, attesting to "excellent" internal consistency between the 12 ALR-RSI items. Conclusion The ALR-RSI score is a valid and reproducible tool for the assessment of psychological readiness to RTS after an MBG procedure for the management of CLAI in a young and active population. The ALR-RSI score may help to identify and counsel athletes on their ability to return to sport. Level of evidence III. Supplementary Information The online version contains supplementary material available at 10.1007/s00167-022-06895-7.
Introduction
Inversion ankle trauma is one of the most frequent sports-related injuries and usually involves partial or complete rupture of the anterior talofibular ligament (ATFL) [7,12,29]. Conservative treatment and functional rehabilitation remain the standard management for acute ankle sprains, with satisfactory outcomes [24], but controversy persists about the prevention of ankle sprain recurrence [11]. A number of patients still experience ankle swelling, pain, and/or a feeling of instability [25,36]. Among them, >20% experience chronic lateral ankle instability (CLAI), defined as recurrent acute sprains, giving way of the ankle with a perception of an insecure ankle by the patient, and avoidance/adaptation of sport activities, for at least 1 year [9,29]. This instability can lead to cartilage damage [30], kinematic disorders [4], and early osteoarthritis of the ankle [32]. Surgery should be considered for patients with sporting demands and symptomatic ankle instability.
The modified Broström-Gould procedure (MBG) remains the gold standard for the management of CLAI and consists in the association of the ATFL repair (retention and direct suture) and the transfer of the extensor retinaculum [6,16,29]. The all-arthroscopic approach is increasingly used, to assess and address any associated intra-articular lesions with reduced morbidity for the patient [19,21,37].
In a young and active population undergoing anatomic repair surgery, one of the main expectations is the ability to resume sports activity, and patients are usually anxious to know the achievable level of play. Although the MBG is safe and allows most patients to resume preinjury sports activities, it has been reported that around 25% are unable to return to sport (RTS) [18,21]. Some reports suggest that physical impairments are not sufficient to explain these low RTS rates following sport-related injuries and have highlighted the role of psychological factors in the RTS process [1,23]. The assessment of motivation, self-confidence in performance, and fear remains a key element to take into account before resuming sports activities. Therefore, psychological measurement scales have been developed to optimize the RTS rate and reduce the risk of surgical failure in many athletic injuries. In particular, Webster et al. [33] designed a 12-item scale for patients following anterior cruciate ligament reconstruction (ACLR) to assess their psychological readiness to resume sports. This was followed by the development of numerous psychological assessment scales for return to sport after sport-related injuries, including shoulder instability (SIRSI) [8], hip arthroscopy for femoroacetabular impingement (FAI) syndrome (Hip-RSI) [35], and ankle ligament reconstruction (ALR-RSI) [28]. To date, no tool exists to analyze psychological readiness after the MBG procedure.
The main purpose of this study was to statistically validate the ankle ligament reconstruction-return to sport injury (ALR-RSI) scale using a population of patients who underwent MBG surgery. A version of the ACL-RSI scale with 12 items was adapted to quantify the psychological readiness to RTS after MBG, and the construct validity, discriminant validity, feasibility, reliability, and internal consistency of the scale were described. Physicians and patients could use this tool to ensure psychological readiness to return to sport after the MBG procedure.
Study design
Institutional review board approval (COS-RGDS-2021-06-003) was granted for the study, and all patients provided informed consent to participate.
This study identified and enrolled patients who underwent ankle ligament repair for the treatment of CLAI in 2018 and 2019, via the databases of three surgical units, by searching for relevant diagnostic codes. From this group, the inclusion criteria comprised the following: minimum follow-up of 2 years, age >18 years, sport activity prior to surgery, and no associated lesion found during the procedure. An all-inside endoscopic MBG procedure was performed in all cases [10].
ALR-RSI scale
The ALR-RSI was similar to the scale validated for ankle ligament reconstruction and adapted from the ACL-RSI score [2]. The word "knee" was replaced by "ankle". It contains 12 items thought to capture the psychological readiness before RTS, covering emotions (5 items), confidence in performance (5 items), and risk evaluation (2 items). The total score is equal to the sum of the 12 answers divided by 1.2 to obtain a percentage. The scale ranges from 0 (lowest psychological readiness) to 100 (highest psychological readiness).
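The scoring rule can be written out directly. A minimal sketch follows, assuming each of the 12 items is answered on a 0–10 scale as in the original ACL-RSI; the example answers are invented.

def alr_rsi_score(items):
    # ALR-RSI score (0-100) from the 12 item answers, each assumed 0-10.
    if len(items) != 12:
        raise ValueError("the ALR-RSI has exactly 12 items")
    return sum(items) / 1.2

print(alr_rsi_score([7, 6, 8, 5, 7, 6, 8, 7, 6, 5, 7, 8]))  # ~66.7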
The last version of the ALR-RSI was validated following the international Consensus-based Standards for the selection of health status Measurement Instruments (COSMIN) guidelines [20]. An additional standardized questionnaire was designed to capture demographic data and to collect two valid and reliable functional scores. The reference PROMs used were the Karlsson score [26] and the American Orthopedic Foot and Ankle Society (AOFAS) score [15].
A total of 71 patients responded to a questionnaire including the ALR-RSI scale, the Karlsson score, and the AOFAS score. Participants were also asked to give consent and to complete components relating to their return to sport. The ALR-RSI was completed twice at a 15-day interval.
Statistical analysis
To describe quantitative variables, the mean and standard deviation (SD) were used. To describe dichotomous variables, the number of events and their percentage were used. A sample size of 71 produces a two-sided 95% confidence interval with a width smaller than 0.24 when the estimate of Spearman's rank correlation is above 0.75. To estimate the correlations between the ALR-RSI, the total Karlsson score, and the AOFAS, Spearman coefficients were used. The correlation was considered "strong" if r > 0.5, "moderate" if 0.3 < r < 0.5, and "weak" if 0.1 < r < 0.3. A Wilcoxon test was used to compare the "patient" and "control" groups to assess the discriminant validity. We also compared the patients who resumed sport and those who had abandoned their sport activity. The Cronbach alpha coefficient was calculated to estimate the internal consistency and was considered "excellent" if α ≥ 0.90. The ρ intraclass correlation coefficient (ICCC) was used to evaluate the reliability; the reproducibility was "excellent" (ρ > 0.75) or "good" (0.40 < ρ < 0.75). The percentage of missing responses and the ceiling and floor effects were used to evaluate the feasibility [31]. The statistical analyses were performed using R software (version 3.5).
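Two of the computations in this analysis plan, the rank correlation against the reference PROMs and the Cronbach alpha across the 12 items, can be sketched as follows. All data below are random placeholders, not study data, and the Cronbach formula shown is the standard variance-based one.

import numpy as np
from scipy.stats import spearmanr

def cronbach_alpha(items):
    # items: (n_subjects, n_items) array of item scores.
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_variances / total_variance)

rng = np.random.default_rng(1)
alr_rsi = rng.uniform(20, 90, 71)              # placeholder total scores
aofas = 0.8 * alr_rsi + rng.normal(0, 8, 71)   # correlated placeholder PROM

rho, p = spearmanr(alr_rsi, aofas)
print(f"rho = {rho:.2f} (p = {p:.1e})")        # r > 0.5 reads as 'strong'

# Random, uncorrelated items give a low alpha; real items are correlated.
items = rng.integers(0, 11, size=(71, 12)).astype(float)
print(f"alpha = {cronbach_alpha(items):.2f}")  # alpha >= 0.90 is 'excellent'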
Results
A total of 71 patients completed the survey and were included in the study. Each had undergone ankle ligament repair with an all-inside endoscopic MBG procedure.
Of these 71 cases, only two were professional athletes (2.8%), whereas 25 practiced sport competitively (35.2%), 33 at a recreational and regular level (46.5%), and 11 occasionally (15.5%). The main score outcomes and the distribution of sports commonly practiced by the study population are summarized in Table 1.
Convergent and structural validity (Tables 2, 3)
The ALR-RSI scale was highly (r > 0.5) and significantly correlated with the reference PROMs, with a Pearson correlation coefficient r = 0.69 [0.54–0.80] for the Karlsson score and r = 0.72 [0.53–0.82] for the AOFAS. The ALR-RSI scores also discriminated between the RTS subgroups: the mean ALR-RSI score was significantly higher among the 55 patients who resumed sports activity than among those who no longer practiced sport: 61.9 (43.8–79.6) vs 43.4 (25.0–55.6) (p = 0.01).
Feasibility
No item of the ALR-RSI was missed. The floor effect, defined as the proportion of patients with the minimum score, ranged from 0 to 1.7%, and the ceiling effect, relating to the highest score, ranged from 4.2 to 35.2%.
Reliability (Fig. 1; Table 4)
The reliability of the ALR-RSI score was explored through the calculation of the ρ intraclass correlation coefficient (ICCC). In the current study, the ICCC was found to be 0.93 [0.86–0.96], reflecting reproducibility that was considered "excellent". In addition, the mean ALR-RSI score was 58.3 (41.2–77.5) at the first survey completion and 59.2 (37.5–80.4) at the second.
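The test–retest ICCC reported here is typically computed from a long-format table of (patient, session, score) records. One way to do this in Python is via the pingouin package, sketched below with simulated data; the choice among the ICC forms that pingouin reports is left open, since the exact model used in the study is not specified.

import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(2)
base = rng.uniform(25, 90, 71)               # simulated first completion
retest = base + rng.normal(0, 5, 71)         # simulated second completion
df = pd.DataFrame({
    "patient": np.repeat(np.arange(71), 2),
    "session": np.tile(["test", "retest"], 71),
    # interleave so each patient's two rows carry their test/retest scores
    "score": np.column_stack([base, retest]).ravel(),
})

icc = pg.intraclass_corr(data=df, targets="patient",
                         raters="session", ratings="score")
print(icc[["Type", "ICC", "CI95%"]])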
Internal consistency
The Cronbach's alpha statistic for the ALR-RSI after the MBG procedure was 0.95, attesting to "excellent" internal consistency between the 12 ALR-RSI items.
RTS at a minimum 2-year follow-up
Fifty-five participants (77.5%) returned to sport after the MBG procedure. In this group, patients returned to the same sport at the preinjury level in 21 cases (38.2%) and at a lower performance level in 19 cases (34.5%). Sports practice was modified in 15 patients (27.3%).
Discussion
The main finding of this study was that the ALR-RSI score is a valid and reproducible tool for the assessment of psychological readiness to RTS after an MBG procedure for the management of CLAI in a young and active population.
In the current study, 77.5% of patients returned to sport after MBG surgery, with a median follow-up of 2.6 years (2.0; 3.7). Previous literature supports this finding. In particular, a prospective study conducted by the French Arthroscopic Society observed an RTS rate of around 90% following arthroscopic repair for CLAI in recreational athletes and 73% in competitive athletes. Regarding RTS at the preinjury level, Maffulli et al. [18] presented long-term outcomes (8.7 years) for 42 athletes who underwent arthroscopic anterior talofibular Broström repair. In this cohort, the authors reported that 22 patients (58%) resumed sports at their preinjury performance level, while six changed to less demanding sports and ten patients had to abandon their sport activity. Similar outcomes were observed by Nery et al. [21] following the MBG procedure at an average 10-year follow-up. More recently, Feng et al. [6] compared outcomes between the arthroscopic MBG procedure with and without repair of the ATFL remnant. No difference was found in terms of ankle function or RTS rate between the repair and non-repair groups at a minimum 2-year follow-up; in particular, around 70% of patients resumed sports at their previous level in both groups. In a retrospective case series, Park et al. [22] confirmed that the absence of a remnant does not affect functional outcomes. Therefore, the ALR-RSI could be used for the MBG procedure for CLAI, regardless of whether the ATFL remnant is repaired. In a retrospective study, Lee et al. [17] focused their research on 18 elite athletes who underwent the MBG operation for CLAI. The return to play (RTP) rate was 83.3% at 4 months after the index surgery and 100% at 7 months, and all professional athletes returned to their preinjury level. These excellent outcomes may be related to the greater motivation often reported in professional athletes [3], but also to better access to professional monitoring and high-level rehabilitation. However, this report must be interpreted with caution given the small number of patients in the cohort (n = 18).
The RTP timeline and the ability to resume sport at the preinjury level are of primary concern for a young and sportive population. White et al. [34] reported a lack of documented data to guide athletes in their RTP timeline; this is also the case for CLAI [29]. In a recent systematic review, Hunt et al. [13] highlighted the heterogeneity and the lack of consistent metrics for RTS in the included studies. The authors call for standardized, valid, and reproducible tools for reporting RTS. In the same way, Clanton et al. [5] pointed out the need for subjective data to determine the ability to resume sports. To this end, the ALR-RSI scale should be used routinely, because functional testing coupled with psychological assessment allows RTS decisions to be taken safely. The ACL-RSI score remains an example of the value of a psychological RTS evaluation after surgery: a strong and significant correlation between this psychological scale and return to sport has been demonstrated [27]. The choice of a survey with numeric answers simplifies data collection compared with an open questionnaire. Sports surgeons and physicians can easily refer to this questionnaire to counsel patients on RTS. However, Webster and Feller examined the responsiveness of the ACL-RSI score and found a moderate responsiveness over 6 months, using anchor-based methods. Specifically, the authors showed that the ACL-RSI scale had sufficient responsiveness to detect clinically relevant changes at a group level but was more limited at an individual level.
There are several limitations to the current study. First, a possible selection bias may have been introduced, related to the inherent nature of a retrospective study. Second, the ALR-RSI scale was initially based on a modification of the ACL-RSI score and was not developed specifically for CLAI. However, the ALR-RSI has recently been validated for ankle instability after anatomic ligament reconstruction [28]. Moreover, many psychological assessment scores after sport-related injuries have been based on the ACL-RSI scale, which has been shown to be easily transferable to other joints and pathologies [8,14,35].
This study validated the ALR-RSI score as a routine-practice tool to assess the psychological readiness to RTS after the Broström procedure in patients with CLAI.
Conclusion
Through the results of this cohort, the ALR-RSI should be considered an RTS metric tool that provides a clear message to patients who underwent the MBG procedure for CLAI. Orthopaedic surgeons may use these findings to counsel their active patients and set expectations, using evidence-based medicine, on their ability to return to their favorite sport.
| 2022-01-26T14:38:39.818Z | 2022-01-25T00:00:00.000 | {
"year": 2022,
"sha1": "e2d20c0efd6680afbe3acc07187aff593d0f856d",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s00167-022-06895-7.pdf",
"oa_status": "HYBRID",
"pdf_src": "Springer",
"pdf_hash": "a3882403128df39db616ae29f517e44d5abb585d",
"s2fieldsofstudy": [
"Medicine",
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
34006200 | pes2o/s2orc | v3-fos-license | Polarization-independent chalcogenide glass nanowires with anomalous dispersion for all-optical processing
We demonstrate the design and fabrication of square Ge11.5As24Se64.5 (Ge11) nonlinear nanowires fully embedded in a silica cladding for polarization-independent (P-I) nonlinear processing. We observed similar performance for FWM using both TE and TM modes, confirming that near P-I operation was obtained. In addition we find that the supercontinuum spectrum that can be generated in the nanowires using 1 ps pulses with around 30 W peak power is independent of polarization. ©2012 Optical Society of America OCIS codes: (130.2755) Glass waveguides; (220.0220) Optical design and fabrication; (190.4410) Nonlinear optics, parametric processes.
References and links
1. J. T. Gopinath, M. Soljacic, E. P. Ippen, V. N. Fuflyigin, W. A. King, and M. Shurgalin, "Third order nonlinearities in Ge-As-Se-based glasses for telecommunications applications," J. Appl. Phys. 96(11), 6931–6933 (2004).
2. J. M. Harbold, F. O. Ilday, F. W. Wise, and B. G. Aitken, "Highly nonlinear Ge-As-Se and Ge-As-S-Se glasses for all-optical switching," IEEE Photon. Technol. Lett. 14(6), 822–824 (2002).
3. A. Prasad, C. J. Zha, R. P. Wang, A. Smith, S. Madden, and B. Luther-Davies, "Properties of GexAsySe1-x-y glasses for all-optical signal processing," Opt. Express 16(4), 2804–2815 (2008).
4. A. Prasad, "Ge-As-Se chalcogenide glasses for all-optical signal processing," in Laser Physics Center (Australian National University, 2010).
5. M. R. E. Lamont, B. Luther-Davies, D. Y. Choi, S. Madden, X. Gai, and B. J. Eggleton, "Net-gain from a parametric amplifier on a chalcogenide optical chip," Opt. Express 16(25), 20374–20381 (2008).
6. F. Luan, M. D. Pelusi, M. R. E. Lamont, D. Y. Choi, S. Madden, B. Luther-Davies, and B. J. Eggleton, "Dispersion engineered As2S3 planar waveguides for broadband four-wave mixing based wavelength conversion of 40 Gb/s signals," Opt. Express 17(5), 3514–3520 (2009).
7. M. D. Pelusi, F. Luan, S. Madden, D. Y. Choi, D. A. Bulla, B. Luther-Davies, and B. J. Eggleton, "Wavelength Conversion of High-Speed Phase and Intensity Modulated Signals Using a Highly Nonlinear Chalcogenide Glass Chip," IEEE Photon. Technol. Lett. 22(1), 3–5 (2010).
8. M. Galili, J. Xu, H. C. H. Mulvad, L. K. Oxenløwe, A. T. Clausen, P. Jeppesen, B. Luther-Davies, S. Madden, A. Rode, D. Y. Choi, M. Pelusi, F. Luan, and B. J. Eggleton, "Breakthrough switching speed with an all-optical chalcogenide glass chip: 640 Gbit/s demultiplexing," Opt. Express 17(4), 2182–2187 (2009).
9. T. D. Vo, H. Hu, M. Galili, E. Palushani, J. Xu, L. K. Oxenløwe, S. J. Madden, D. Y. Choi, D. A. P. Bulla, M. D. Pelusi, J. Schröder, B. Luther-Davies, and B. J. Eggleton, "Photonic chip based transmitter optimization and receiver demultiplexing of a 1.28 Tbit/s OTDM signal," Opt. Express 18(16), 17252–17261 (2010).
10. M. D. Pelusi, F. Luan, D. Y. Choi, S. J. Madden, D. A. P. Bulla, B. Luther-Davies, and B. J. Eggleton, "Optical phase conjugation by an As2S3 glass planar waveguide for dispersion-free transmission of WDM-DPSK signals over fiber," Opt. Express 18(25), 26686–26694 (2010).
11. G. P. Agrawal, Nonlinear Fiber Optics (Academic Press Inc., 2001).
12. W. R. Headley, G. T. Reed, S. Howe, A. Liu, and M. Paniccia, "Polarization-independent optical racetrack resonators using rib waveguides on silicon-on-insulator," Appl. Phys. Lett. 85(23), 5523–5525 (2004).
13. X. Chen and H. K. Tsang, "Polarization-independent grating couplers for silicon-on-insulator nanophotonic waveguides," Opt. Lett. 36(6), 796–798 (2011).
14. S. M. Gao, X. Z. Zhang, Z. Q. Li, and S. L. He, "Polarization-Independent Wavelength Conversion Using an Angled-Polarization Pump in a Silicon Nanowire Waveguide," IEEE J. Sel. Top. Quantum Electron. 16(1), 250–256 (2010).
15. Y. Tian, P. Dong, and C. X. Yang, "Polarization independent wavelength conversion in fibers using incoherent pumps," Opt. Express 16(8), 5493–5498 (2008).
16. S. P. Chan, C. E. Phun, S. T. Lim, G. T. Reed, and V. M. N. Passaro, "Single-mode and polarization-independent silicon-on-insulator waveguides with small cross section," J. Lightwave Technol. 23(6), 2103–2111 (2005).
17. S. T. Lim, C. E. Png, E. A. Ong, and Y. L. Ang, "Single mode, polarization-independent submicron silicon waveguides based on geometrical adjustments," Opt. Express 15(18), 11061–11072 (2007).
18. X. Gai, D. Y. Choi, S. Madden, and B. Luther-Davies, "Interplay between Raman scattering and four-wave mixing in As2S3 chalcogenide glass waveguides," J. Opt. Soc. Am. B 28(11), 2777–2784 (2011).
19. X. Gai, S. Madden, D. Y. Choi, D. Bulla, and B. Luther-Davies, "Dispersion engineered Ge11.5As24Se64.5 nanowires with a nonlinear parameter of 136 W−1m−1 at 1550 nm," Opt. Express 18(18), 18866–18874 (2010).
20. A. B. Fallahkhair, K. S. Li, and T. E. Murphy, "Vector finite difference modesolver for anisotropic dielectric waveguides," J. Lightwave Technol. 26(11), 1423–1431 (2008).
21. P. Lusse, P. Stuwe, J. Schule, and H. G. Unger, "Analysis of vectorial mode fields in optical waveguides by a new finite-difference method," J. Lightwave Technol. 12(3), 487–494 (1994).
22. C. Koos, L. Jacome, C. Poulton, J. Leuthold, and W. Freude, "Nonlinear silicon-on-insulator waveguides for all-optical signal processing," Opt. Express 15(10), 5976–5990 (2007).
23. X. Gai, R. P. Wang, C. Xiong, M. J. Steel, B. J. Eggleton, and B. Luther-Davies, "Near-zero anomalous dispersion Ge11.5As24Se64.5 glass nanowires for correlated photon pair generation: design and analysis," Opt. Express 20(2), 776–786 (2012).
24. D. Y. Choi, S. Madden, A. Rode, R. P. Wang, A. Ankiewicz, and B. Luther-Davies, "Surface roughness in plasma-etched As2S3 films: Its origin and improvement," IEEE Trans. Nanotechnol. 7(3), 285–290 (2008).
25. J. J. Hu, N. N. Feng, N. Carlie, L. Petit, J. F. Wang, A. Agarwal, K. Richardson, and L. Kimerling, "Low-loss high-index-contrast planar waveguides with graded-index cladding layers," Opt. Express 15(22), 14566–14572 (2007).
26. Q. Lin and G. P. Agrawal, "Vector theory of four-wave mixing: polarization effects in fiber-optic parametric amplifiers," J. Opt. Soc. Am. B 21(6), 1216–1224 (2004).
Introduction
Recently, nonlinear waveguides fabricated from chalcogenide glasses have been shown to be excellent materials for all-optical processing due to their attractive optical properties, which include good light confinement as a result of their high refractive index; a high nonlinear refractive index, n2 [1-3]; and negligible two-photon absorption (TPA) and free-carrier absorption (FCA) at telecommunications frequencies [2,4]. Many all-optical processes have now been demonstrated using dispersion-engineered As2S3 chalcogenide rib waveguides, including parametric amplification [5]; wavelength conversion [6,7]; Tb/s demultiplexing [8,9]; and dispersion compensation by mid-span spectral inversion [10]. Most of these functions involved four-wave mixing (FWM), which becomes efficient when phase matching is achieved according to the condition −4γP < β2Δω² < 0, where γ = 2πn2/(λAeff) is the nonlinear parameter of the waveguide, P is the peak pump power, β2 is the second-order dispersion, and Δω is the difference between pump and signal frequencies. According to this phase matching relation, the waveguides need to have small and anomalous dispersion (β2 ≤ 0) to achieve exponential gain from FWM [11]. However, in a typical dispersion-engineered As2S3 rib waveguide only one polarization state, the transverse magnetic (TM) mode, exhibits anomalous dispersion, whilst the transverse electric (TE) mode has a large normal dispersion that does not support FWM except over a vanishingly small bandwidth. Since light from a fiber is normally randomly polarized, the polarization-dependent dispersion of an As2S3 rib waveguide means that both pump and signal waves must be adjusted to have the same polarization state, and this requires many additional components to convert the light into the TM mode. This is undesirable for integrated optical circuits and impractical for commercial applications in fiber communication networks. As a result, structures that would allow FWM of randomly polarized signal beams in polarization-insensitive waveguides are of significant interest.
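As a rough illustration of this phase-matching window, the following sketch (not from the paper; all parameter values are illustrative) estimates the maximum pump-signal detuning that still permits exponential FWM gain:

```python
# Illustrative check of the FWM gain condition -4*gamma*P < beta2*dw**2 < 0.
import numpy as np

gamma = 10.0      # nonlinear parameter, W^-1 m^-1 (illustrative)
P = 1.0           # peak pump power, W (illustrative)
beta2 = -5e-26    # GVD, s^2/m (anomalous; illustrative)

# Largest angular-frequency detuning satisfying beta2*dw**2 > -4*gamma*P:
dw_max = np.sqrt(4 * gamma * P / abs(beta2))     # rad/s
print(f"max pump-signal detuning: {dw_max / (2 * np.pi) / 1e12:.1f} THz")
```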
There have been several reports aimed at achieving polarization-independent (P-I) devices, e.g. P-I racetrack resonators [12]; P-I grating couplers [13]; P-I wavelength convertors [14,15]; and P-I waveguides [16,17]. However, most of these studies focused on devices made on the silicon-on-insulator (SOI) platform where, in general, the silicon film thickness is not a free parameter. By comparison, there have been no studies of P-I chalcogenide glass waveguides. Furthermore, in previous work, the P-I waveguides were designed to operate at a fixed wavelength, and the geometry of an asymmetric waveguide (its width, thickness and etch depth) was varied to find a crossing point where the effective index, neff, of the TM and TE modes was the same so that zero birefringence could be obtained [16,17]. However, this kind of structure maintains zero birefringence only over a very narrow bandwidth, because neff will change with frequency due to material and waveguide dispersion. Furthermore, zero birefringence indicates little about the relative values of β2 for the TE and TM modes, which, due to the structural asymmetry, will almost certainly be different. Since β2 affects the bandwidth and shape of the FWM gain spectrum, this can result in very different FWM behavior between the TM and TE modes [18]. Therefore, waveguides with P-I dispersion are required for all-optical processing.
Two conditions are required to achieve a waveguide with P-I dispersion suitable for FWM. Firstly, the structure should be symmetric, since this is the most straightforward way both to eliminate birefringence and to achieve the same dispersion in the TM and TE modes. This requires a waveguide with square cross-section embedded in a uniform cladding. Secondly, because most chalcogenide glasses exhibit very high normal material dispersion at telecommunications wavelengths, large anomalous waveguide dispersion is required to compensate the material dispersion and achieve a net dispersion that is small and anomalous. However, square waveguides with dimensions of a few microns do not create enough anomalous waveguide dispersion because the fundamental modes are too tightly confined within the core. In order to increase the anomalous waveguide dispersion, sub-micron nanowires are, therefore, required. It is also beneficial if the waveguide is very short, since this minimizes the effects of any residual polarization-dependent loss, residual birefringence or any difference in the dispersion caused by fabrication errors. Since any nonlinear process requires some minimum nonlinear phase change to achieve adequate efficiency, a high nonlinear parameter γ is then essential, and this can be achieved using a nanowire.
In our previous work we have shown that Ge11.5As24Se64.5 (Ge11) chalcogenide glass, whose nonlinear index is about three times that of As2S3 [3], can be used to make nanowires with a high nonlinear parameter γ of up to 136 W−1m−1 [19], which is more than ten times that of As2S3 rib waveguides [5-10]. Compared with the 6 cm long As2S3 waveguides widely used in our previous work, a Ge11 nanowire only has to be ≈0.5 cm long to obtain the same nonlinear response. Assuming a difference of 10 ps/km/nm in the group velocity dispersion (GVD) between the TM and TE modes arises from fabrication errors, only 0.5 × 10−5 ps/nm dispersion difference will be induced, and this implies that polarization-independence can be achieved over a very broad bandwidth.
In this paper, we report on the fabrication and properties of square Ge11 nonlinear nanowires fully embedded in a silica cladding. We measure their performance for FWM using both TE and TM modes to confirm that near P-I operation was obtained. In addition we demonstrate that the supercontinuum spectrum that can be generated in the waveguides using 1 ps pulses with around 30 W peak power is independent of polarization.
Design and fabrication of P-I nanowires
In order to obtain a high nonlinear parameter γ in the nanowires, we need to minimize the effective area Aeff of the waveguide mode. The mode intensity distribution is shown in Figs. 1(a) and 1(b) for the TM and TE polarizations, respectively. Figure 1(c) shows the effective area as a function of waveguide dimension for a square Ge11 nanowire fully embedded in a SiO2 cladding at 1550 nm, with the eigenvalues and mode distribution calculated by the FDTD method [20,21] and the effective area defined as in [22]. The effective area increases exponentially when the waveguide dimension drops below 0.4 µm because the core-cladding index difference (≈1.2) cannot confine a smaller mode. In comparison, nanowires with dimensions between 0.4 µm and 0.6 µm achieve the smallest effective areas, ranging from 0.25 µm² to 0.28 µm², and this is, therefore, the range that we concentrated on.
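As a quick consistency check of the relation γ = 2πn2/(λAeff), the sketch below back-computes γ from the effective area quoted above; the n2 value is an assumption chosen to reproduce the γ ≈ 136 W−1m−1 reported in [19], not a tabulated constant:

```python
# Sketch: nonlinear parameter from gamma = 2*pi*n2 / (lambda * A_eff).
import math

lam = 1550e-9        # wavelength, m
A_eff = 0.25e-12     # effective area, m^2 (0.25 um^2, from the text)
n2 = 8.4e-18         # nonlinear index, m^2/W (assumed value, see note above)

gamma = 2 * math.pi * n2 / (lam * A_eff)
print(f"gamma = {gamma:.0f} W^-1 m^-1")   # ~136
```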
In this range, as would be expected, the same anomalous dispersion is obtained for both TM and TE modes at 1550 nm, and a GVD of 165 ps/km/nm is predicted for a 580 nm × 580 nm nanowire, as shown in Fig. 1(d). Although a smaller, near-zero dispersion is preferred in most all-optical devices because it gives a broad bandwidth for FWM and reduces distortion of the pulses during propagation, this is not a problem for Ge11 nanowires because only a short device is required to obtain sufficient nonlinear response. In fact, a 0.5 cm long Ge11 nanowire with a GVD of 165 ps/km/nm will lead to only about half the dispersive phase shift of a 6 cm As2S3 rib waveguide with a GVD of 30 ps/km/nm, and this implies that a wider bandwidth can be achieved with less dispersive distortion of the pulses. Since the As2S3 rib waveguides have already demonstrated a FWM bandwidth in excess of 10 THz, the higher dispersion is, therefore, of no concern. On the other hand, the high anomalous dispersion leads to a very short soliton fission length, which is critical for supercontinuum (SC) generation and allows SC to be created in a very short device. The effective index neff and nonlinear parameter γ are shown in Fig. 1(e) and Fig. 1(f), respectively.

The nanowires were fabricated using the following process. Firstly, a 580 nm Ge11 film was deposited onto an oxidized silicon wafer by thermal evaporation. A 200 nm thick layer of ZEP was then spin-coated onto the Ge11 film as the e-beam resist. E-beam lithography (EBL) was used to transfer the nanowire pattern onto the ZEP using the fixed beam moving stage (FBMS) method, in which the electron beam is scanned over a small area whilst the stage is moved simultaneously to draw the pattern. By using FBMS the stitching errors between different writing fields in EBL were eliminated, which helps reduce the loss of the nanowires. Inductively coupled plasma dry etching (Plasmalab system 100) was applied to transfer the pattern into the Ge11 film, and the residual resist was removed by oxygen plasma. At the end of the process, about 1.5 µm of SiO2 was coated onto the nanowires as the top cladding using ion sputtering. "Snake" structures were also fabricated to integrate longer nanowires on the small chips, as shown in Fig. 2(a), and to allow loss measurements by the "cut-back" method. The bend radius was 20 µm, for which modeling predicted the bending loss to be negligible. Figure 2(b) shows SEM images of the profile of the resulting Ge11 nanowires. The square nanowires were well defined with near-vertical side-walls. The nanowire width was measured to be 584 nm and the height 575 nm. Therefore, the fabrication error was controlled to within ±5 nm, which is less than one percent of the nanowire dimension. In order to analyze the mismatch of neff and GVD between the TM and TE modes due to the fabrication errors, we calculated neff and the GVD for 585 nm × 575 nm nanowires, as shown in Figs. 2(c) and 2(d). The differences in neff and GVD are predicted to be 0.004 and 1.7 ps/km/nm at 1550 nm, respectively. Assuming a 5 mm long device, these differences have a negligible effect on nonlinear processes such as FWM. The loss of the nanowires was measured to be 1.65 dB/cm and 2.2 dB/cm for the TM and TE modes, respectively, by the cut-back method using three different waveguide lengths: 0.7 cm, 1.2 cm and 1.7 cm (Fig. 2(e)).
The coupling loss was 6 dB at each facet. As a result, only a 0.28 dB difference in loss is introduced over a 5 mm long nanowire, which is small enough not to affect the polarization of the propagating beams, although lower PDL would be desirable in longer devices. An advantage of using SiO2 as a cladding is that there are no additional losses due to cladding absorption. In our previous work, As2S3 rib waveguides were generally clad with a UV-cured polysiloxane inorganic polymer glass (IPG). However, the cured IPG has an absorption band from 1375 nm to 1470 nm with a dip of 2.5 dB/cm and a very strong absorption band beyond 1620 nm, with the absorption reaching −32 dB/cm at 1675 nm, as shown in Fig. 3(a). Therefore, in a dispersion-engineered waveguide where a significant part of the field penetrates the cladding, additional losses are present due to C-H and O-H overtone absorptions. We measured the transmission spectrum of a 6.5 cm long, 2 µm × 0.85 µm As2S3 rib waveguide with the IPG cladding using a supercontinuum generated by passing 7 ps pulses from a mode-locked Nd:YVO4 laser through a photonic crystal fiber. The results are shown in Fig. 3(b). The loss increased by 8 dB at the strongest IPG absorption peak, leading to poor performance in the L-band. Since the SiO2 cladding does not show any significant absorption in the telecommunications bands, these losses can be eliminated. Figure 3(c) shows the transmission spectrum of the Ge11 P-I nanowires; it is essentially flat from 1250 nm to 1700 nm with no extra losses observed.

Since P-I Ge11 nanowires have many applications in all-optical processing of telecommunication signals involving FWM, it is important to demonstrate P-I FWM in these nanowires. Because the device length is very short, it is difficult to measure the GVD directly. However, the bandwidth of FWM is determined by the GVD through the phase matching condition, and hence we can evaluate differences in GVD by measuring the FWM spectrum.
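A minimal sketch of the cut-back analysis described above is given below; the insertion-loss values are fabricated placeholders chosen only to roughly reproduce the quoted TM numbers (≈1.65 dB/cm propagation loss and ≈6 dB coupling loss per facet):

```python
# Cut-back method: fit total insertion loss vs length; the slope is the
# propagation loss and the intercept the total (two-facet) coupling loss.
import numpy as np

lengths = np.array([0.7, 1.2, 1.7])        # waveguide lengths, cm
loss_db = np.array([13.2, 14.0, 14.8])     # total insertion loss, dB (placeholder)

slope, intercept = np.polyfit(lengths, loss_db, 1)
print(f"propagation loss: {slope:.2f} dB/cm")
print(f"coupling loss per facet: {intercept / 2:.1f} dB")
```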
Polarization independent FWM and Supercontinuum generation
The experimental setup is shown in Fig. 4(a). Here 4 ps pulses from a mode-locked fibre laser at a repetition rate of 10 MHz acted as a pump at 1554.7 nm. The pump was combined with a tunable CW probe signal using a 10/90 coupler. Two polarization controllers were added to set the polarization state to either the TM or TE mode. Lensed fibres producing a beam with a diameter of 2.5 µm were used to couple light into and out of the nanowires. In order to monitor the polarization of the beam, a ×10 objective lens was used to image the light at the output onto an InGaAs CCD camera through a Wollaston prism, which separated the TM and TE polarizations vertically in the image plane, as shown in Fig. 4. In the experiment, the pump that was coupled into the 7 mm long nanowires had a peak power of 2.8 W and was combined with a CW probe with a power of about 10 µW. By tuning the wavelength of the probe beam, FWM was obtained for both TE and TM modes over a bandwidth of more than 180 nm, as shown in Fig. 5. The increase in the FWM signal around 1602 nm is due to a contribution from stimulated Raman scattering [18,23]. As shown in [18,23], near the Raman peak, which in this material lies between 5 and 10 THz below the pump frequency, the FWM conversion is modulated by the real part of the Raman response function. This causes a dispersion-like modulation of the conversion efficiency, which first rises as the Raman peak is approached before dropping markedly and finally recovering to around 50%-70% of that for small frequency shifts, before finally decreasing as the limits of phase matching are reached. The similar behavior observed for both TM and TE modes indicates that they have a very similar GVD, confirming P-I dispersion.
To be more specific, we calculated the FWM spectrum by solving the nonlinear Schrödinger equation below using the split-step Fourier method:

$$\frac{\partial A}{\partial z} = -\frac{\alpha}{2}A + \sum_{m\geq 2}\frac{i^{m+1}}{m!}\,\beta_m\frac{\partial^m A}{\partial t^m} + i\left(\gamma + \frac{i\alpha_2}{2A_{\mathrm{eff}}}\right)A(z,t)\int_0^{\infty}R(t')\,\big|A(z,t-t')\big|^2\,dt'$$

where A is the electric field amplitude; α = 1.67 dB/cm for TM and 2.33 dB/cm for TE is the linear loss of the waveguide; βm is the m-th order dispersion, with the GVD calculated at 155 ps/km/nm as shown in Fig. 2; α2 = 9.3 × 10−14 m/W is the two-photon absorption coefficient; and γ is the nonlinear coefficient of 130 W−1m−1 for the square Ge11 nanowires. R(t) = (1 − fR)δ(t) + fR hR(t) is the response function, including the Raman contribution hR(t), and fR = 0.12 is the fractional Raman contribution.
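A minimal split-step sketch of this propagation model is shown below. It keeps only the Kerr term, GVD and linear loss (the Raman convolution and TPA of the full equation are omitted for brevity), and the grid sizes are arbitrary choices; the waveguide parameters follow the TM values quoted above:

```python
# Simplified split-step Fourier solver: dispersion + loss applied in the
# frequency domain, Kerr nonlinearity applied in the time domain.
import numpy as np

N, T = 2**12, 20e-12                          # samples, time window (s)
t = np.linspace(-T/2, T/2, N, endpoint=False)
w = 2 * np.pi * np.fft.fftfreq(N, t[1] - t[0])

gamma = 130.0                                 # W^-1 m^-1
beta2 = -1.98e-25                             # s^2/m (~155 ps/km/nm at 1550 nm)
alpha = 1.67 * 100 / 4.343                    # 1.67 dB/cm -> 1/m
L, dz = 7e-3, 1e-5                            # device length, step size (m)

t0 = 4e-12 / (2 * np.sqrt(np.log(2)))         # 4 ps FWHM Gaussian pump
A = np.sqrt(2.8) * np.exp(-t**2 / (2 * t0**2)) + 0j   # 2.8 W peak power

lin = np.exp((0.5j * beta2 * w**2 - alpha / 2) * dz)  # linear step operator
for _ in range(round(L / dz)):
    A = np.fft.ifft(lin * np.fft.fft(A))              # dispersion and loss
    A *= np.exp(1j * gamma * np.abs(A)**2 * dz)       # Kerr phase

spectrum = np.abs(np.fft.fftshift(np.fft.fft(A)))**2  # output spectrum
```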
We compared the conversion to the idler determined from the measurement with the simulation and obtained a good fit, as shown in Fig. 6. According to both the experiments and the modeling, the conversion for the TE mode was about 0.5 dB lower than that for the TM mode. This difference is mainly due to the different propagation losses of the TM and TE modes. Figure 6 clearly shows modulation of the FWM conversion around the Raman peak, confirming the predictions in [18,23]. To our knowledge, this is the first experimental observation confirming the influence of Re[h(ω)] on FWM in a chalcogenide waveguide. Another important nonlinear process that is sensitive to the sign and magnitude of the dispersion is supercontinuum (SC) generation. This was achieved using a 12 mm long P-I nanowire pumped with 1 ps duration pulses for both TM and TE modes, as shown in Fig. 7(a). The power required to generate SC was slightly different for the TE and TM modes due to the different losses, and was ≈31 W for the TM mode and 33 W for the TE mode. Although the power required for SC was different, the spectra for both TM and TE modes were very similar. SC is a very complicated process involving self-phase modulation, cross-phase modulation, FWM, soliton fission, Raman scattering, etc., but the similarity of the SC spectra indicates that these square Ge11 nanowires had not only similar GVD but also similar higher-order dispersion. We simulated the SC spectrum using the split-step Fourier method for both TM and TE polarizations for different peak powers, and the results are shown in Figs. 7(b) and 7(c), respectively. The simulation results fit the measured spectra extremely well and also predict the measured difference in threshold power for generating SC for the two polarizations. According to the simulations, the SC extended over almost one octave, from 1100 nm to 2100 nm. To our knowledge, this is the shortest glass waveguide ever to be used for broadband SC generation, which indicates it should be possible to fabricate very compact P-I light sources on a photonic chip.
Conclusion
We have shown that it is possible to fabricate highly nonlinear chalcogenide glass nanowires that offer polarization-independent operation for nonlinear processes such as FWM and supercontinuum generation. The small residual PDL is likely to be due to roughness of the etched vertical sidewalls, which has a bigger effect on the TE mode. It is possible that the PDL could be reduced by evaporating a thin Ge11 overcoat after etching, as this has previously been shown to reduce sidewall losses by smoothing high-spatial-frequency roughness [24,25]. From our experimental results we achieved a structure with very small polarization-dependent dispersion, and this should open up the possibility of achieving FWM with a randomly polarized input, made possible with a pair of orthogonally-polarized pumps [26]. To our knowledge this represents the first report of polarization-independent nonlinear processing in an optical nanowire.
Fig. 1.
Fig. 1. Design of square Ge11 nanowires fully embedded in SiO2. (a) Intensity distribution for the TM mode. (b) Intensity distribution for the TE mode. (c) The effective area Aeff as a function of the waveguide dimensions. (d) GVD for 580 nm × 580 nm Ge11 nanowires. (e) Effective index neff for 580 nm × 580 nm Ge11 nanowires. (f) Nonlinear parameter γ for 580 nm × 580 nm Ge11 nanowires. Blue dots are for the TM mode and the red line is for the TE mode.
Fig. 2.
Fig. 2. (a) An optical micrograph showing the bends which are part of the "snakes" used to increase the length of the Ge11 nanowires for loss measurements. The radius is 20 µm. (b) An SEM cross-sectional image of the square Ge11 nanowires buried in the SiO2 cladding. The width was measured to be 584 nm and the height 575 nm. (c) The effective index of 585 nm × 575 nm nanowires. (d) The GVD of 585 nm × 575 nm nanowires. (e) The propagation loss measured by the cut-back method. The green diamonds are for the TM and the red triangles for the TE modes.
Fig. 3.
Fig. 3. (a) The absorption spectrum of the cured IPG cladding. (b) The transmission spectrum of an As2S3 rib waveguide with the IPG cladding. (c) The transmission spectrum of the Ge11 P-I nanowires, essentially flat from 1250 nm to 1700 nm with no extra losses observed.
Fig. 4 .
Fig. 4. Experimental setup. The lower image shows how the polarization state was set to the TM or TE mode by imaging the waveguide output through a Wollaston prism onto an InGaAs camera.
Fig. 5.
Fig. 5. FWM results for TM and TE modes. A pulsed pump was launched into the waveguide along with a low-power CW probe beam. The idler output on the long-wavelength side was measured as a function of probe wavelength. (a) TM mode. (b) TE mode.
Fig. 6 .
Fig. 6. Calculated (solid lines) and measured FWM conversion efficiency as a function of idler wavelength for the TE (red) and TM (blue) modes.
Fig. 7.
Fig. 7. (a) Measured SC spectra for TE and TM modes. (b), (c) Supercontinuum spectra for different pump powers calculated using the split-step Fourier method, compared with the measured spectra. | 2018-04-03T04:42:52.493Z | 2012-06-04T00:00:00.000 | {
"year": 2012,
"sha1": "2037610c2779fcdc0cc5d5a560eec2d7127ef756",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1364/oe.20.013513",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "2037610c2779fcdc0cc5d5a560eec2d7127ef756",
"s2fieldsofstudy": [
"Engineering",
"Materials Science",
"Physics"
],
"extfieldsofstudy": [
"Materials Science",
"Medicine"
]
} |
121950382 | pes2o/s2orc | v3-fos-license | Beam energy scan results from PHENIX
Probes like J/ψ and π0 production are very important for studying the properties of the strongly interacting partonic medium created in heavy ion collisions. Their production is strongly suppressed in √sNN = 200 GeV Au+Au collisions, in comparison to the expectation from binary-collision-scaled p+p collisions. The recent low-energy scan at RHIC provided the PHENIX collaboration with an opportunity to study the evolution of the suppression at √sNN = 39 and 62.4 GeV center-of-mass energies in order to disentangle multiple contributing mechanisms. The suppression of the J/ψ observed is similar to that previously measured at 200 GeV. In contrast, neutral pion suppression shows a distinct energy dependence in the moderate pT region of central collisions.
Introduction
The theory of Quantum Chromodynamics (QCD) predicts a phase transition from hadronic matter to a deconfined Quark Gluon Plasma (QGP) at high temperature and energy density. The Relativistic Heavy Ion Collider (RHIC) was built to achieve these conditions by colliding heavy nuclei at very high energies in order to test the predictions of QCD and understand the properties of the medium. One of the powerful probes is a hard parton scattered in the early stage of the collision. The scattered parton then passes through the created matter and fragments. If a dense, colored medium is formed in Au+Au collisions, these hard-scattered partons may lose energy while traversing it. In the case of quarkonia (qq) bound states like the J/ψ(1S), the suppression mechanism is thought to be very different, as there will be additional effects like color screening in the medium which may hinder its production. Therefore, the observed hadron yield will be lower than that expected from binary collision scaling. This suppression is quantified in terms of the nuclear modification factor R_AA:

$$R_{AA} = \frac{dN_{AA}/dy}{T_{AA}\; d\sigma_{pp}/dy}$$

where dN_AA/dy is the invariant yield in Au+Au collisions, dσ_pp/dy is the p+p cross-section and T_AA is the nuclear overlap function. The PHENIX experiment previously measured R_AA for J/ψ and π0 production in 200 GeV Au+Au collisions and found a strong suppression in the most central collisions [1]. The measured J/ψ suppression was similar to that measured at CERN-SPS energies in Pb+Pb collisions at √sNN = 17.2 GeV. This is in contradiction with the color screening interpretation, in which the suppression is expected to increase at higher temperature. So additional effects need to be investigated. On the other hand, π0 production is suppressed in Au+Au collisions while it is observed to be enhanced in the lighter Cu+Cu system at √sNN = 22.4 GeV. In order to study the transition from enhancement to suppression of the π0 R_AA and shed light on the J/ψ puzzle, RHIC started the beam energy scan program in 2010, varying the beam energy and studying the effect on R_AA. It is a unique characteristic of RHIC to be able to collide different combinations of species over a wide range of energies. Measurement of R_AA over a wide range of system energies is an important diagnostic that will help quantify the medium properties. We present new measurements of the π0 and J/ψ R_AA at the lower energies of √sNN = 39 and 62.4 GeV in Au+Au collisions.
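A trivial sketch of this definition follows; all numbers below are placeholders, not PHENIX data:

```python
# Nuclear modification factor R_AA = (dN_AA/dy) / (T_AA * dsigma_pp/dy).
def r_aa(dn_dy_aa, dsigma_dy_pp, t_aa):
    return dn_dy_aa / (t_aa * dsigma_dy_pp)

# Placeholder inputs: T_AA in mb^-1, p+p cross-section in mb.
print(r_aa(dn_dy_aa=0.5, dsigma_dy_pp=0.01, t_aa=250.0))  # -> 0.2
```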
2. π0 Measurement at 39 and 62.4 GeV

Neutral pion production was measured at midrapidity (|y| < 0.35) at several energies of Au+Au collisions. Our earlier π0 measurements in √sNN = 130 and 200 GeV Au+Au collisions revealed a strong suppression at high pT, which was interpreted very well by the partonic energy loss model in the medium [1]. Additionally, data from √sNN = 200 GeV d+Au collisions showed no suppression or enhancement, indicating that the hadron suppression is a final state effect. Moreover, the PHENIX experiment also studied π0 production in a lighter collision system, Cu+Cu, at three energies (√sNN = 22.4, 62.4 and 200 GeV). An enhancement was observed at 22.4 GeV but production was suppressed at the higher energies. This was interpreted as the interplay of multiple soft scattering (the Cronin effect) in the medium at the various energies. During the beam energy scan program in 2010, PHENIX collected data at √sNN = 39 and 62.4 GeV Au+Au collisions. These energies were specifically chosen to study this transition from enhancement to suppression as a function of collision energy in order to constrain the energy loss models.
Neutral pions are reconstructed via their π0 → γγ decay with the electromagnetic calorimeter (EMCal). Yields were extracted by statistical means, subtracting from the photon pairs the combinatorial background estimated by event mixing in each pT and centrality bin [2]. In order to compute R_AA, baseline p+p measurements are needed. Corresponding p+p reference data at 62.4 GeV were collected earlier, in 2006, by PHENIX, but RHIC has never been run with 39 GeV p+p collisions. Therefore, p+p data from Fermilab E706 were used as the baseline at 39 GeV.

Figure 2. π0 nuclear modification factor for pT > 6 GeV.
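The event-mixing subtraction used in this extraction can be sketched as follows; the histogram interface and the normalization window are assumptions for illustration, not the actual PHENIX analysis code:

```python
# Sketch of mixed-event combinatorial-background subtraction for
# pi0 -> gamma gamma: scale the mixed-event invariant-mass histogram to
# the same-event one in a background-dominated mass window, then subtract.
import numpy as np

def subtract_mixed(same_hist, mixed_hist, norm_bins):
    lo, hi = norm_bins                       # background-dominated window
    scale = same_hist[lo:hi].sum() / mixed_hist[lo:hi].sum()
    return same_hist - scale * mixed_hist

same = np.array([120., 130, 400, 800, 350, 140, 125])   # placeholder counts
mixed = np.array([118., 128, 150, 160, 150, 138, 124])  # placeholder counts
signal = subtract_mixed(same, mixed, norm_bins=(0, 2))
```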
The suppression becomes weaker at lower energies. The new measurements of π0 suppression over a wide energy range will help constrain the energy-loss models.
J/ψ Measurement at 39 and 62.4 GeV
Quarkonia bound states are expected to be suppressed in the Quark Gluon Plasma (QGP) due to color screening [3]. PHENIX measured a strong J/ψ suppression (a factor of ∼5 for the most central collisions) at both mid and forward rapidities in Au+Au collisions at 200 GeV. This suppression is very similar to that measured at the CERN-SPS in Pb+Pb collisions at √sNN = 17.2 GeV [4]. This contradicts the color screening interpretation, in which the dissociation of the quarkonia states should increase with energy density. It was clear that there are additional classes of effects, like "cold nuclear matter" effects that are not due to the QGP, and that additional QGP mechanisms (e.g. coalescence, energy loss) are in play. PHENIX extended the J/ψ measurements to the lower energies √sNN = 39 and 62.4 GeV [5] in order to disentangle the different competing physics processes which might contribute. A broad measurement over √sNN will vary not only the temperature and density of the medium but also the cc production and the cold nuclear matter (CNM) effects. Two forward spectrometers composed of the Muon Tracker and Muon Identifiers are used to reconstruct the muon tracks. The J/ψ candidates were reconstructed via the dimuon channel in the rapidity range 1.2 < |η| < 2.2 and full azimuth. There are no PHENIX measurements of the J/ψ reference in p+p collisions at these energies, since there was no run with 39 GeV p+p collisions and only a limited dataset was previously collected in 62.4 GeV p+p collisions, which could not provide a reasonable baseline for the J/ψ. Hence a compilation of measurements from other experiments, in addition to a theoretical estimate (the Color Evaporation Model by R. Vogt), was used to make a best estimate of the baseline cross-section. The details can be found in [5]. Figure 3 (right) shows the J/ψ suppression at forward rapidities at all three energies as a function of Npart. At the lower energies the J/ψ suppression is similar to that at 200 GeV at forward rapidity. Although there is a modest decrease of the suppression in central collisions at 39 GeV, overall they agree within systematic errors. The similarity between the different energies is the cumulative effect of the different competing physics processes. However, the strength of these processes remains unclear without a detailed understanding of the production of the J/ψ in nuclear targets, termed cold nuclear matter effects. These effects are expected to be different at each collision energy. It is therefore important to measure these effects experimentally in d(p)+Au collisions at the same energies as the Au+Au collisions.
A new theory calculation, which includes cold nuclear matter effects, regeneration and QGP suppression, is compared to the J/ψ suppression data at forward rapidity in Figure 3 (left). The contributions of direct J/ψ production and of regeneration are shown separately. It appears that the QGP suppression decreases in going from 200 GeV to 39 GeV, as seen in the direct component, but there is a strong regeneration effect at the higher energies due to the larger number of cc pairs. The regeneration contribution is expected to increase with collision energy due to the increase in the total number of charm pairs produced. The inclusion of the regeneration contribution thus balances the stronger QGP suppression at higher energies, resulting in a similar overall suppression at the measured energies.
Summary
The PHENIX experiment has measured both π0 and J/ψ production at √sNN = 39 and 62.4 GeV in Au+Au collisions. Using suitable p+p reference estimates from other experiments and theory calculations when our own measurements were not available, R_AA was calculated and compared with our earlier 200 GeV measurements. The observed π0 production is strongly suppressed in the most central collisions at all energies, but the suppression becomes weaker at lower energies in the moderate pT region. The J/ψ R_AA results are similar to those measured at 200 GeV, with slightly less suppression at these lower energies. The J/ψ results are consistent with the theoretical calculation, which shows a balance between the effects of more QGP suppression and more regeneration in higher-energy collisions. | 2019-04-19T13:03:52.420Z | 2013-08-23T00:00:00.000 | {
"year": 2013,
"sha1": "eb60f8fe8e491b885aad61e839c8b0c98cad553f",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1742-6596/458/1/012002",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "92451f9237db97a694ee1ac8b20bae795f5e61bf",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
119479239 | pes2o/s2orc | v3-fos-license | Efficient switching of Rashba spin splitting in wide modulation-doped quantum wells
We demonstrate that the size of the electric-field-induced Rashba spin splitting in an 80 nm wide modulation-doped InGaSb quantum well can depend strongly on the spatial variation of the electric field. In a slightly asymmetric quantum well it can be an order of magnitude stronger than for the average uniform electric field. For even smaller asymmetry spin subbands can have wave functions and/or expectation values of the spin direction that are completely changed as the in-plane wave vector varies. The Dresselhaus effect can give an anticrossing at which the spin rapidly flips.
I. INTRODUCTION
There is presently a strong interest in spin-related phenomena in semiconductors, and the prospects of utilizing the spin rather than the charge of the electron for devices have given rise to a new research area called spintronics. 1 One important mechanism that can be used in spintronic devices is called the Rashba effect. 2 An applied electric field is seen in the frame of a moving electron as having a magnetic field component and yields a spin splitting even in the absence of a magnetic field or magnetic ions. 3 The Rashba effect is the mechanism behind the Datta-Das spin field effect transistor, 4 which is perhaps the most well-known spintronic device. The spin-orbit coupling in a quantum well gives a subband splitting that is usually described in the Rashba model by 2,5

$$\Delta E = \alpha k, \qquad \alpha = \frac{\hbar^2}{2m^*}\,\frac{\Delta(2E_g+\Delta)}{E_g(E_g+\Delta)(3E_g+2\Delta)}\,e\varepsilon \qquad (1)$$

where α is commonly called the Rashba coefficient, k is the in-plane wave vector, ε is the electric field, E_g is the band gap, Δ is the spin-orbit splitting, and m* is the effective mass. For the expression for α taken from Ref. 5 we insert the parameters of the well material.
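For orientation, Eq. (1) can be evaluated numerically; the band parameters below are generic InSb-like values inserted as assumptions (the paper uses the In0.74Ga0.26Sb well parameters, which are not listed here):

```python
# Evaluating the Rashba coefficient of Eq. (1) for assumed band parameters.
hbar = 1.0546e-34   # J s
e = 1.602e-19       # C
m0 = 9.109e-31      # kg

Eg = 0.235 * e      # band gap, J (InSb-like, assumed)
Delta = 0.81 * e    # spin-orbit splitting, J (assumed)
m_eff = 0.014 * m0  # effective mass (assumed)
eps = 1e6           # electric field, V/m (10 kV/cm)

alpha = (hbar**2 / (2 * m_eff)) * Delta * (2 * Eg + Delta) \
        / (Eg * (Eg + Delta) * (3 * Eg + 2 * Delta)) * e * eps
print(f"alpha = {alpha / e * 1e12:.1f} meV nm")  # splitting dE = alpha * k
```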
The Rashba coefficient is related to the electric field perpendicular to the quantum well, but so far little attention has been paid to the influence of the spatial variation of the electric field. We here find that under certain circumstances insertion of the different kinds of averages, e.g. the expectation value of the electric field, gives incorrect results. In particular we show how modulation doping can give a strong Rashba effect with an applied field being an order of magnitude smaller than in the case of uniform doping, and that it can also give rise to interesting anticrossing phenomena.
II. THEORY
We have gone beyond the Rashba model and performed self-consistent subband structure calculations in the Hartree approximation in a multi-band k · p envelope function approach. The interaction between the conduction band, heavy-hole band, light-hole band and split-off band is included exactly in an 8 × 8 matrix, and the contributions from the remote bands are included in perturbation theory. 3,6 We include terms due to the asymmetry of the zincblende lattice (the Dresselhaus effect 7 ) and add the macroscopic potential along the diagonal of the matrix. This approach simultaneously gives accurate descriptions of the electron and hole subbands. We have here considered an 80 nm wide In0.74Ga0.26Sb quantum well (QW) surrounded by In0.7Al0.3Sb barriers. In this way we essentially retain the strong spin-orbit coupling of InSb and get lattice-matched well and barrier materials with a suitable conduction band offset. To illustrate one important effect, we compare the situation in a modulation-doped quantum well (MDQW) with that in a QW with a uniform electric field. The potential difference between the interfaces (below denoted the quantum well bias, QWB) is the same in both cases, 36 meV. We here take the wave vector to be in the [10] direction of the two-dimensional Brillouin zone. In this direction the Rashba effect dominates over the Dresselhaus effect. The potentials, squared wave functions and spin-split ground state subbands are shown in Fig. 1. In the modulation-doped case the carrier density was taken to be 6 · 10^11 cm^-2. We then have two weakly interacting electron gases in the interface regions.
It is seen that the spin splitting is an order of magnitude larger in the modulation-doped case. Relative to a symmetric QW without Rashba splitting, this suggests a modified mechanism: apply a moderate QWB and take advantage of the much stronger built-in electric field to obtain a substantial Rashba splitting. The reason for this effect is seen from the wave functions. In a symmetric QW the wave functions of the two lowest subbands would be symmetric and antisymmetric, respectively, and thus spread over the entire QW. But for sufficiently large asymmetry each wave function becomes localized to one of the interface regions. There the electric field is quite strong, and it is this local field, not any average field, that determines the size of the spin splitting, in contrast to common belief so far.
IV. WAVE FUNCTION DEPENDENCE ON IN-PLANE WAVE VECTOR
Interesting things happen if we consider a MDQW with a very small QWB, 1.7 meV. This is comparable to the energy separation at k = 0 between the lowest two subbands, E21 = 1.4 meV. This leads to interesting anticrossing phenomena, and the influence of the next lowest subband must be seriously considered. We have found that anticrossings can be influenced strongly by the Dresselhaus effect, which is stronger when k is in the [11] direction. From now on we consider k in this direction.
The wave function at k = 0 for the next lowest subband is mainly localized in the right interface region, where the electric field is reversed (cf. Fig. 1a), and therefore we have the opposite order between the "spin-up" and "spin-down" subbands. In order of increasing energy at small k, it is therefore appropriate to label the lowest four spin subbands 1 ↓, 1 ↑, 2 ↑ and 2 ↓. For such a small asymmetry the wave functions at k = 0 also have a non-negligible amplitude in the other interface region (Fig. 2). When we increase k, the wave functions of two adjacent spin subbands (1 ↑ and 2 ↑) move towards the opposite interface region, which is rather unexpected at first sight. Near k = 0.03 nm−1 the squared wave functions have an even distribution between the two interface regions, and there the energy separation also has a local minimum. The other two wave functions (1 ↓ and 2 ↓), on the other hand, become more strongly localized to one interface region (not shown).
Another type of anticrossing takes place between the spin subbands 1 ↓ and 1 ↑ around k = 0.168 nm−1 (Fig. 3). It is seen that in a narrow range of k-values, 0.166 to 0.17 nm−1, the wave functions are interchanged and, simultaneously, the expectation value of the spin for a given spin subband is flipped.
V. ANALYSIS OF ANTICROSSING PHENOMENA
The interchange of properties is typical for an anticrossing of subbands. For uncoupled spin subbands the Rashba model (Eq. (1)) predicts a linear increase of the energy splitting with k, and it is clear that eventually it would exceed E21 and the spin subbands 1 ↑ and 2 ↑ would cross. In our multi-band approach an anticrossing takes place around k = 0.03 nm−1 between these spin subbands instead. This anticrossing takes place over a rather wide range of k-values. Since these subbands have parallel spins, no significant modification of the spin expectation values takes place.
The first anticrossing described above makes the character of the second anticrossing possible. Between the anticrossings the next lowest spin subband (1 ↑) has the opposite wave function localization and spin direction compared to the lowest spin subband 1 ↓. Fig. 3 displays a different mechanism compared to the gradual spin precession utilized in the Datta-Das spin transistor. 4 The weak interaction between these two spin subbands makes it possible to reach an energy separation of only 0.4 meV and to have such a rapid interchange of properties as k increases. Inclusion of the Dresselhaus effect is essential to get this behavior. Although the spin flip here occurs as k increases, it should be possible to design a structure where the spin direction at the Fermi level of a spin subband is reversed when the bias is changed slightly.
It is clear that the wave vector range over which the anticrossing takes place strongly depends on the spin directions of the anticrossing spin subbands. The anticrossing can be conveniently controlled by the well width, the spacer layer widths, the carrier density and the applied bias.
One may argue that our results could be explained within the common Rashba model provided that the expectation value of the electric field, which can be expected to be enhanced by the localization of the wave function, is inserted into Eq. (1). This procedure would be well defined if both spin subbands had the same expectation value of the electric field. However, for small electric fields we find that the expectation values averaged over the filled states (or evaluated at the Fermi energy) can be quite different for the different spin subbands, as a result of the strong wave vector dependence of the wave functions.
VI. DEVICE ASPECTS
The strong enhancement of the Rashba splitting described in Fig. 1 due to modulation doping can be expected to have important implications for spintronic devices like the spin transistor proposed by Datta and Das. 4 For its performance it is essential that a large wave vector splitting ∆k of a spin-split subband can be achieved with a small bias. By utilizing the built-in electric field, a given ∆k can be achieved with a QWB that is an order of magnitude smaller than with a uniform electric field. We have previously 8 approximated the switch energy for n-type and p-type spin transistors by CV², where C is the capacitance of a QW structure surrounded by two gates and V is the applied bias between them. We then concluded that n-type spin transistors with the original design would have problems becoming competitive with conventional transistors unless fundamentally new ideas were presented. If we consider only the lowest spin subband pair and follow the approach of Ref. 8, we obtain a switch energy of 0.4 aJ in the modulation-doped case and 35 aJ in a spin transistor with the same length and a uniform electric field. The former figure compares very well with present state-of-the-art transistors, 9 for which 3 aJ has been projected.
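A back-of-the-envelope version of this CV² estimate is sketched below; the parallel-plate capacitance model and all dimensions are assumptions chosen only to land in the aJ range, not the geometry used in Ref. 8:

```python
# Order-of-magnitude switch energy E = C * V**2 for a double-gated QW.
eps0 = 8.854e-12                         # vacuum permittivity, F/m

def switch_energy(eps_r, area, gate_sep, V):
    C = eps_r * eps0 * area / gate_sep   # parallel-plate capacitance
    return C * V**2

# Assumed geometry: 1 um x 0.2 um channel, 100 nm gate separation, 0.1 V.
E = switch_energy(eps_r=16.8, area=1e-6 * 0.2e-6, gate_sep=100e-9, V=0.1)
print(f"{E * 1e18:.1f} aJ")              # a few aJ
```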
However, a complication with our design is that the second subband pair, with the opposite sign of ∆k and spin precession direction, is also filled. This does not exclude the possibility that the spins at the two interfaces have each precessed by an angle π, albeit in opposite directions, on arrival at the drain. Furthermore, the matter is complicated by the k-dependent wave functions and the redistribution of carriers in the QW. Still, it is clear that the controllable properties open up interesting possibilities for the design of modified spin transistors, especially if one manages to contact the electron gases in the interface regions separately. Such considerations will be presented elsewhere.
VII. DISCUSSION
The effects described here also apply to p-type QWs. However, we have recently demonstrated 10 that for p-type spin transistors the optimal choice is quite a small electric field (∼2-5 kV/cm), which is remarkably efficient at creating a huge Rashba splitting ∆k.
We have implicitly assumed coherence of the wave function across the 80 nm QW with a high and broad barrier in the middle. Whether this coherence actually prevails should depend on the sample quality. This system with our predicted effects seems ideal for further studies of this fundamental problem.
VIII. SUMMARY
In conclusion we have demonstrated that the nonuniform electric field in wide modulation-doped quantum wells gives interesting and useful effects. One can use a bias corresponding to a moderate average electric field and still get a Rashba splitting typically enhanced by an order of magnitude due to the built-in local electric field in the interface region. The switching mechanism is based on localization of the wave function to one interface region with a barely sufficient bias. For very small bias the wave functions and spin directions can become strongly dependent on the in-plane wave vector. At anticrossing of spin subbands the wave function moves towards the opposite interface as k increases and sometimes the spin is also flipped. The device prospects are promising but require further analysis. | 2019-04-14T02:12:34.487Z | 2006-10-13T00:00:00.000 | {
"year": 2006,
"sha1": "1d18c77338c76ebeff7a09522307628bb45ee792",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/cond-mat/0610371",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "1d18c77338c76ebeff7a09522307628bb45ee792",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
268393746 | pes2o/s2orc | v3-fos-license | Commentary: Language Policy in Galicia, 1980-2020. An Overview
Galician is a minority language spoken in Galicia, an autonomous region in northwestern Spain. This paper will provide some basic data on the evolution of the sociolinguistic situation of Galician. It will consider the dynamics of change and revitalisation of the language and will examine the linguistic policies that underpin them. In 1981, the Statute of Autonomy of Galicia was approved, establishing the co-official status of Galician. In 1983, the Galician Parliament passed the Law on the linguistic normalisation of Galicia, which laid the ground for the language policies of successive autonomous governments. After four decades, there are several signs that the language policy, based on a non-confrontational model centred on the teaching and learning of Galician and the promotion of its literary and cultural prestige, has proved inadequate for promoting Galician.
Introduction
Galician is a minority language spoken in the north-western region of Spain.This paper will provide some basic data on the evolution of the sociolinguistic situation of Galician.It will focus on the dynamics of change and revitalisation of the language and the language policies that have underpinned them, from the beginning of the autonomous regime (1981) to the present day.This article draws on the most important recent contributions to the field.
Preference will be given to references in English, although, when indispensable, some literature in Spanish and Galician will also be mentioned.
The sociolinguistic situation of Galician. Some key points
Historically, the Galician linguistic community has been structured on the basis of diglossia, i.e., the subordination of the Galician language and discrimination against Galician speakers (Monteagudo, 2017). Centuries of minoritisation culminated in the policy of annihilation under the Franco dictatorship, which precipitated the dynamic of language shift that had been brewing since the beginning of modern times (Monteagudo & Santamarina, 1993, pp. 119-126; Neira, 2002; Beswick, 2007, pp. 53-74; Monteagudo, 2021a). The Statute of Autonomy of Galicia, approved in 1981, grants Galician the status of official language in the region alongside Spanish. In 1983, the Galician Parliament passed a Law on the linguistic normalisation of Galicia, which covers the linguistic rights of Galicians, the official use of Galician, its teaching and use in the education system and in the media, as well as the authority of the Royal Galician Academy over linguistic norms. On the ground, there is also support for the language from local organizations and advocacy groups that work to preserve and promote Galician culture and language (Monteagudo, 2019b).
Let us briefly consider the key points.
Vitality and status
Galician is the original and usual language of most of the population in Galicia. However, despite being the majority language in Galicia, Galician is a minority language within Spain, so it still faces challenges due to the pressure of dominant (Castilian) Spanish.

Figure 1. Competence in Galician, 1992-2018 (Monteagudo, 2022, p. 319; see also Monteagudo, Loredo, & Vázquez, 2016).
The Galician language enjoys remarkable demographic strength. According to the latest demolinguistic survey, 98% of Galicians claim to understand the language and almost 90% report that they know how to speak it (Figure 1). More than half of the Galician population (52%) declares that in informal communication they speak only or mostly in Galician, and a further 23% state that they speak it occasionally (Figure 2).

Figure 2. Habitual language use in Galicia (Monteagudo, 2022, p. 321).
Overall, the number of daily speakers of Galician may be close to 1,500,000, and around 2,000,000 people use it frequently. Other European minor (but not minority) languages have similar or even lower figures: Estonian has about 1,110,000 speakers, Latvian about 1,500,000 and Slovenian about 2,200,000 speakers. All three are official languages of their respective states and thus of the European Union. These figures make Galician the regional minority language in Europe with the highest proportion of speakers relative to the region's population. However, this vitality is hindered by the language's low prestige, due to the persistence of discriminatory social and functional diglossia and a marked sociolinguistic polarisation according to socio-economic status, level of education, and place of residence. Spanish remains the language of social progress and integration, both in urban society and in the most significant parts of the labour market.
Galician is still considered by some sectors to be a vulgar sociolect rather than a "real" language, while the process of language shift has accelerated in recent decades, so that the use of Galician is rapidly decreasing among the younger generations (Beswick, 2007, pp. 205-223; Monteagudo, Loredo, & Vázquez, 2016; Monteagudo, Nandi, & Loredo, 2021).
Proximity to Spanish and Portuguese
Spanish and Portuguese are languages which are widely spoken internationally; Galician is a bridge language which allows easy passage between these two languages, but which, for the same reason, runs the risk of dialectalisation through the assimilationist pressure of the dominant language and the renunciation of its own identity vis-à-vis its big sister.
Comprehension between Galician and Castilian is spontaneous but asymmetrical and facilitates "bilingual encounters" which encourage communication and coexistence, although at the same time it discourages the use of Galician by Castilian speakers (Del Valle, 2000).
Galician competes with a widely-spoken language which, moreover, is the dominant language in countries that have received a large number of Galician migrants (Monteagudo & Reyna Muniain, 2019).There is also mutual intelligibility between Portuguese and Galician, but the diffusion and knowledge of the sister language in Galicia is much lower, both quantitatively and qualitatively, than could be reasonably expected (Beswick, 2007, pp. 108-138;Monteagudo 2019b).
Linguistic standardisation
The lack of a standard linguistic variety facilitates permeability towards Castilian and encourages dialectalisation, which manifests itself in a strong tendency towards hybridisation of popular speech.This point is further developed below.
Attitudes
Most Galician citizens express a dual identity, while at the same time associating the language with Galician identity in a nebulous way. On the one hand, since the second half of the nineteenth century, the Galician people have had emigration on their horizon, which has facilitated the acceptance of Spanish as the language of education, since it was perceived as useful for integration in the host countries, which were mainly Spanish-speaking. On the other hand, unlike Catalonia and the Basque Country, Galicia did not experience the pressure of strong immigration, which in those regions fostered a sense of a distinct identity in contrast with that of the foreign arrivals. Castilian is not primarily associated with foreigners, but rather with urban life and social status. Galician cities have developed slowly, gradually assimilating the Galician speakers who have settled there. Nevertheless, pro-Galician activism has a vigorous urban presence, as evidenced by civic mobilisation and popular responses to some advertising campaigns (Beswick, 2007, pp. 188-223; Monteagudo, 2019a).
Literary and cultural tradition, modern mass culture
Few European languages have a medieval literary tradition comparable to Galician's, or as brilliant a literary culture in modern times. From music and song to popular festivals, indigenous cultural traditions remain fertile and are increasingly appreciated. There is a thriving publishing industry which disseminates high-quality and up-to-date literary production, yet which fails to reach a large market. There is a great deficit in mass culture, media, and audiovisual products in Galicia, so creative production only circulates in restricted spheres (Colmeiro, 2014; Gómez Viñas, 2014; Reimóndez, 2014; Ramallo, 2017; Casares & Monteagudo, 2021).
These broad outlines must in turn be seen in the context of the economic, social, cultural, and political evolution of Galicia in modern times: its late industrialisation and urbanisation, the low standard of living, the importance of the primary sector until a few decades ago, the enormous importance of emigration, and the low average level of educational qualifications. However, in recent times, Galician society has undergone a radical metamorphosis, which has led to a radical change in the sociolinguistic dimension: the concentration of the population in the urban conglomerates of western Galicia, the rise in the standard of living and educational qualifications, the rapid de-ruralisation and massive growth of tertiary industries, the transformation of the migration profile (including a large number of highly qualified young people), etc. And, of course, all of this must also be considered in the context of the transformations that Spain, Europe, and the whole world are undergoing, with an extraordinary impact on the ecology of languages, including the worldwide expansion of English, the erosion of the status of old nation-state languages and, at the same time, an accelerated reduction in humanity's linguistic and cultural diversity.
The late standardisation of Galician
In 1980, Galician lacked a written standard variety and spelling rules, although foundations had begun to be laid in Ricardo Carballo's Gramática elemental del gallego común (1966), the first Normas ortográficas e morfolóxicas issued by the Real Academia Galega (RAG) (1971), and the Bases prá unificación das normas lingüísticas do galego (1977), promoted by the Instituto da Lingua Galega of the University of Santiago de Compostela (ILG). All of them follow an autonomist approach, on the basis that Galician is conceived as an independent language. At the end of the 1970s, a so-called reintegrationist current began to take shape, advocating the integration of Galician into the Portuguese language area (Monteagudo, 1993; Monteagudo & Santamarina, 1993, pp. 151-165; Beswick, 2007, pp. 75-94 and 125-138; Herrero-Valeiro, 2003; Dayán & O'Rourke, 2020). The joint approval by the RAG and the ILG of the Normas Ortográficas e Morfolóxicas do Idioma Galego (1982), made official by the Xunta de Galicia, meant a qualitative leap forward. As mentioned above, the Law on the linguistic normalisation of Galicia gave the RAG the authority over linguistic norms, so this institution started to become a language academy and to incorporate an increasing number of professional linguists in its ranks. Thus, since 1982, Galician has had an official standard that has been used as the basis for the further development of grammars, dictionaries, textbooks, terminological glossaries, and style manuals. This standard is used in all official and institutional bulletins, in administrative forms, in most of the media, in the majority of the prolific publishing production in Galician and, of course, in the educational system. Standard Galician has evolved over the last few decades to an advanced level, although there are still some important gaps, such as the lack of a reference grammar and of a comprehensive and modern dictionary (Ramallo & Rei-Doval, 2015).
However, some particularly problematic issues have emerged in the process. There is a certain carelessness in audiovisual media and in the public use of Galician, both on the part of some professionals as well as public figures, and its servitude to Castilian is sometimes evident. As for written Galician, the debates on spelling have not yet ended, although in recent decades these have been expressed in much less confrontational terms than in the past. Lastly, among native speakers there is a general feeling of a certain degree of alienation from standard Galician, which has led in some cases to a retreat towards the vernacular and dialectal (Roseman, 1995; Loureiro-Rodriguez, Boggess, & Goldsmith, 2012; O'Rourke, 2018; Recalde, 2021). Also noteworthy is the emergence of an activist urban sector of Spanish speakers who have adopted Galician as their habitual language (neofalantes or new speakers) (O'Rourke & Ramallo, 2013). In short, standard Galician is making its way with deficits in elaboration (lack of codification tools such as a complete normative grammar or a comprehensive dictionary) and socialisation (lack of means of disseminating the norm in society, such as a significant presence in the audiovisual media and the written press), in a complex dialectic in which dialectal tendencies, interference from Castilian, and transfers from Portuguese all play a role.
Language policies in autonomous Galicia. A short account
Spain's linguistic diversity was explicitly recognised in the drafting of the Spanish Constitution of 1978, which is the basis of the current official language regime. Article 3 of this Constitution states that Castilian (i.e., Spanish) is the official language of the State, that all Spanish citizens have the right to use it and the duty to know it, and that "the other Spanish languages shall also be official in the respective Autonomous Communities, in accordance with their statutes". Despite this important recognition for Galician and the other "peripheral" languages of Spain (i.e., Catalan and Basque), the constitutional status of the latter is relatively weak, since Castilian is stipulated as the only official language of the State (i.e., of its central institutions, such as the Government, the Parliament, the High Court of Justice, etc.), and the personal principle of officiality (which benefits Spanish) is applied without restrictions to all citizens, whereas a restricted principle of territoriality ("in the respective Autonomous Communities") is provided for the other co-official languages (Ramallo, 2018a).
The co-official status of Galician was established in the 1981 Statute of Autonomy of Galicia, which in Article 5 declares Galician to be "the language of Galicia" and determines its official status and the right of Galicians to know and use it. It also establishes the duty of the public authorities of Galicia to promote its use "at all levels of public, cultural and informative life" and to facilitate its knowledge, as well as enshrining the principle of non-discrimination on the grounds of language. In 1983, during the first legislature of the autonomous region, the Galician Parliament unanimously approved a Law on the linguistic normalisation of Galicia (LNLG). It should be borne in mind that "linguistic normalisation" in Galicia means the process of extending the use of Galician to all areas of social activity from which this language had historically been excluded, such as the education system, the local administration, and the media. This law established the legal framework for the official and public use of the language. At the same time, a Language Policy Office was created within the Xunta de Galicia, which was responsible for its implementation as well as for other initiatives and regulations undertaken in the following years (Beswick, 2007, pp. 161-187; O'Rourke, 2014; Monteagudo, 2019a).
The vitality of the language itself and the recognition of its distinct cultural tradition and a differentiated Galician collective identity were fundamental factors in legitimising the autonomous regime. While the overt goal of the LNLG was to provide the legal framework for language promotion policies, at a more general level the covert purpose was to strengthen the legitimisation of Galicia's autonomy. During the first years of autonomy, language policy was given a boost under both centre-right (1981-86) and centre-left (1986-89) governments.
The creation of autonomous institutions triggered a dynamic of strengthening Galician identity, and the promotion of the language was associated with the whole current of social, political, and cultural change at the time. The Galician language is promoted by the regional government through various measures which include provisions for its use in the following domains:
- Public administration: knowledge of Galician is required for access to civil service positions, especially in education as well as in local and regional administrations;
- Education: Galician became a compulsory subject in schools alongside Spanish, both in primary and secondary education. A bilingual education system was slowly designed at the elementary and middle school levels (Vila, Lasagabaster, & Ramallo, 2017);
- Media: investments were also made in the media, with the creation of a television channel in Galician (Televisión Galega, TVG), which started broadcasting in 1985 (Ramallo, 2017);
- Cultural production: promotion of literary and, to a lesser extent, audiovisual production in the Galician language.
However, the degree of support has varied depending on political priorities and budget constraints. As stated before, after the LNLG was approved, a Language Policy Office was created within the Regional Ministry of Education of the Autonomous Government, which was the most important institutional agent in this field. Following the codification of the standard variety, the compulsory teaching of Galician led to the creation of a large body of specialised teachers, the expansion of the reading public, the emergence of a nascent book market, and the consolidation of a flourishing publishing industry.
In the 1980s, an extraordinary effort was made to provide language training for teachers and civil servants through courses in Galician. In addition, language normalisation departments were created in different bodies and institutions (the Xunta and Parliament, Galician public radio and television, regional councils, universities, courts of justice), which played a very important role in facilitating the official use of Galician.
A 1983 decree made the teaching of the Galician language compulsory in all elementary and secondary schools. In 1987, another decree made it compulsory to teach at least one subject in Galician in addition to the Galician language. In 1995, it was established by decree that Galician should be the language of instruction for at least one third of the subjects taught as part of compulsory education. The policy of creating chairs and centres of Galician studies in foreign universities also began.
On the other hand, in 1994, the children's programme Xabarín Club started broadcasting on TVG, with a great impact on its audience, and soon afterwards several Galician-produced television series were launched, which were very well received by the public. A fully-fledged audiovisual industry began to emerge. The 1990s also saw the explosion of "rock bravú" (independent rock bands that sing in the Galician language), which had a considerable impact on the young public. The first newspaper written entirely in Galician appeared (O Correo Galego, later to become Galicia Hoxe), as did the first stable digital media in Galician (Vieiros).
At the same time, the publication of the Sociolinguistic Map of Galicia in 1994-1996 raised alarm about the extent of the abandonment of Galician detected in the linguistic practices of the younger population. In 2000, the Spanish parliament ratified Spain's accession to the European Charter for Regional and Minority Languages, committing the state to the highest standards of protection for Galician, Basque, and Catalan. However, the Committee of Experts, responsible for monitoring the fulfilment of the commitments acquired by the states, has identified serious shortcomings with regard to Galician, particularly in the key areas of education and the administration of justice. The inclusion of Galician in the UNESCO list of endangered languages (2002) led to a reflection on the need to strengthen support for this language. The approval by the Real Academia Galega of the reform of the official orthographic and morphological rules (2003) helped to appease the controversy about linguistic standardisation. Since then, the debate on the relationship between Galician and Portuguese has taken on new perspectives, with a positive focus on the teaching of Portuguese and initiatives to strengthen relations with Portuguese-speaking countries.
On the other hand, following the unanimous approval in the Galician Parliament of a General Plan for the Normalisation of the Galician Language in 2004, the level of political consensus in favour of the normalisation of Galician reached its peak. Thus, the 21st century began with considerable momentum. This dynamic continued after the electoral victory of the left in the 2005 regional elections and the formation of a coalition government between socialists and nationalists. In 2007, the Xunta de Galicia approved a new decree for Galician in education, which provided that Galician should be used as a vehicular language in compulsory education for a minimum of 50% of the time. This was opposed by the right-wing Partido Popular, while an intense public dispute arose over language policy with the emergence of discourses against the 'imposition' of Galician and in favour of 'language freedom'. This dispute continued during the 2009 regional election campaign, which was won by the Partido Popular.
Summing up the objectives of successive governments until 2009 (both centre-right and centre-left), it can be said that the Xunta de Galicia has tried to promote the knowledge and use of Galician, avoiding conflicts and responding to the wishes of the majority of citizens, and seeking a balance between the social demands of the majority and the most pressing demands of cultural elites and activist minorities. Therefore, efforts have been concentrated on improving normative skills in Galician and the linguistic attitudes of the population, as well as on increasing the prestige of Galician through the promotion of cultural production and on encouraging its use in certain public and institutional spheres. This is what was meant by the policy of "promotion without conflict", a policy that found its emblematic approach in the formula of "harmonious bilingualism", which presented the normalisation of Galician in gradual, non-imposed, and convivial terms. However, this formula has also been criticised for being ambiguous and indecisive, as a thinly disguised kind of benign neglect (Nandi, 2017).
In 2009, the discourse of 'language freedom' prevailed. This discourse was articulated by the centralist intelligentsia, widely disseminated by the main Spanish media, and actively sponsored by right-wing and centralist political forces. Its basic tenet is the denunciation of policies for the promotion of Spain's peripheral languages as impositions against the freedom of Spanish speakers. These discourses enjoyed a certain vogue for a while in Galicia, and they have strongly influenced the weakening of the Galician government's language policy since the return of the Partido Popular to power in Galicia (Monteagudo, 2021b). Thus, in 2009-2010, a series of measures aimed at promoting Galician that had been in place until then were repealed or scaled down. This regressive policy was met with strong political and social protests, including a series of mass demonstrations denouncing the lack of protection for Galician. The most emblematic and contested provision was the 2010 decree on multilingualism, which reduced the presence of Galician as a language of instruction. At the same time, the economic crisis hit the Galician-language press, both digital and print, and led to the disappearance of most of the titles. Neoliberal language policy in Galicia took the guise of a laissez-faire approach, with disastrous consequences for the weak process of Galician's recovery. In 2013, the publication of data on the knowledge and use of the language collected by the Galician Institute of Statistics sounded the alarm by revealing a sharp decline in Galician among children and young people.
Since 2012, the polemical tone of the linguistic controversies has tended to diminish.
In recent times, less confrontational attitudes have tended to prevail. The current de-escalation may pave the way for broader political and social agreements, which are essential to set Galicia on the road to the future. However, so far, the party with the electoral majority in Galicia continues to stick to a policy of benign neglect (Monteagudo, 2022).
Language policy in Galicia in the last decades. A brief assessment
The revitalization of the Galician language faces several challenges, including:
- Competition with Spanish: Spanish remains the language of social progress and integration, both in urban society and in the labour market. As stated before, mutual understanding between Galician and Spanish is easy and enables the "bilingual encounter", which favours communication between speakers of each language, although at the same time it discourages the use of Galician by Castilian speakers. The Spanish central state keeps imposing Spanish as the only "national language", and most of the mainstream media in Spain foster negative attitudes towards the minority languages (Galician, Catalan, and Basque).
- Faltering official support: the role of government and official institutions in supporting minority languages can have a significant impact on their revitalization. The challenge of normalisation was taken up without any historical experience of language revitalisation policies and with very limited know-how. This has been compounded by weak political will, which has resulted in policies that are more propagandistic than effective.
- Low social prestige and scant value in the labour market: Galician is still considered by some sectors as a vulgar sociolect rather than a "real" language. The use of Galician is sometimes associated with low social status and is not seen as desirable in some professional or social contexts.
- Language shift and lack of intergenerational transmission: the process of language shift has accelerated in the last decades, so that the use of Galician is rapidly decreasing among the younger generations. On top of this, Galician is not being passed down to the younger generation at the same rate as in the past, leading to a decline in its use and a decrease in the number of speakers (Monteagudo, Nandi, & Loredo, 2020).
- Media representation: the representation of minority languages in the media and online can help increase their visibility and reach.
- Technological advancements: the growth of digital media and communication technologies has facilitated access to and promotion of the Galician language, increasing its visibility and reach, but it has also increased the overwhelming dominance of Spanish, especially among young people.
The challenges faced in revitalizing the Galician language have evolved over time, reflecting changes in the political and social landscape of the region. Some of these changes include:
- Changes in attitudes: over time, a considerable amount of conflict has surfaced around the promotion of Galician, fuelled both by the mainstream Spanish media and by a very belligerent Spanish nationalist current, mostly from the right and extreme right.
- Declining official support: in the hands of the conservative Partido Popular, the regional government has reduced its support for the Galician language in recent years.
- The growing pressure of English: in the education system as well as in information and communication technologies, the pressure of English adds to that of Spanish. Learning Galician is presented not only as competing with and detrimental to acquiring Spanish, but also to learning English.
The main challenge is to rearticulate the Galician linguistic community on a fair and equitable basis, thus reinforcing its cohesion. Some current priorities include:
- Equal and mutual bilingualism between Galician and Spanish: this implies a greater willingness on the part of Spanish speakers to use Galician rather than waiting for Galician speakers to switch to Spanish.
- Intergenerational transmission: ensuring the intergenerational transmission of minority languages is important for their preservation and long-term viability. Encouraging the transmission of Galician from one generation to the next is critical to its revitalization.
- The promotion of Galician among the younger population through education: providing access to education and resources in minority languages is a key factor in promoting their use and revitalization. Ensuring that children and young people have access to Galician-language education and resources is a key priority.
- The presence of Galician in the media and on the Internet: this presence is vital to increase the language's visibility and reach.
- Value in the labour market: it is essential to increase the value of Galician in the labour market, not only in public administration but also in private companies.
- Engaging the community: community engagement and involvement in language revitalization initiatives is crucial for their success.
- Celebrating culture: promoting minority-language culture through events and initiatives can increase appreciation for the language and its cultural significance.
- Overcoming negative attitudes: addressing negative attitudes and beliefs about minority languages and challenging stereotypes can help promote their revitalization.
Discussion and conclusions
For centuries, the Galician linguistic community has been structured on the basis of social and functional diglossia, i.e., the subordination of the Galician language and discrimination against Galician speakers. Under current conditions, this type of diglossia is no longer sustainable, so the options that remain are: (a) the complete substitution of Galician and the imposition of monolingualism in Castilian; (b) the promotion of Galician in order to achieve a bilingual, egalitarian, and cohesive community; or (c) building a monolingual society in Galician.
After the brutal experience of the dictatorship, the majority of citizens opted for the second option. Reshaping the Galician linguistic community on a more balanced basis remains a costly and extremely complex task, as it involves overturning entrenched privileges, eliminating prejudices, overcoming inertia, and changing deeply internalised routines.
However, proposals to split the Galician community into two opposing monolingual communities have been rejected by the majority of citizens as a deeply unpopular option. At the same time, the majority of the Galician-speaking community is in favour of a self-identified Galician language which is aware of its kinship with Portuguese but does not want to be subsumed into it.
The challenge of normalisation was taken up without any historical experience of language revitalisation policies and with very limited know-how; moreover, circumstances changed at a rapid pace along the way. When Galicia attained autonomy, no one had any idea about the rise of new information and communication technologies, and no one could suspect how radically computers and mobile phones would transform the global environment and everyday experiences: the term 'globalisation' did not even exist. In the heyday of this phenomenon, no one foresaw the global economic crisis or the explosion of the global COVID-19 pandemic, two cataclysmic events that have plunged humanity into uncertainty, disrupted the international order, and forced many aspects of civilisation to be rethought.
The continuing widespread phenomenon of language shift in favour of Castilian and, equally alarmingly, the breakdown of the intergenerational transmission of the Galician language could be taken as symptomatic of how indecisive language policies have been inadequate for the promotion of Galician. It has been argued that Galician's numerical strength may have led to a somewhat distorted view of the sociolinguistic reality of the language, failing to recognize the heavy legacy of history. The powers of the Spanish state and the larger part of Spanish society have still to reckon with the deep furrows that a markedly centralist and supremacist centuries-old language policy has left on the linguistic attitudes of the population, especially during the Franco dictatorship. This includes its main consequences, namely the dialectalisation of the Galician language and the unequal distribution of power and resources between Spanish and Galician speakers.
Whatever the case, while assuming the hegemony of Spanish in the economy, the media, and mass culture, the non-conflictual model focused on the teaching and learning of Galician and the promotion of its literary and cultural prestige has been revealed as ineffective (O'Rourke, 2014). After all, this model ends up maintaining the status quo and thus implicitly accepts Spanish as the most value-neutral language. After overcoming centuries of marginalisation, do Galician and its speakers not deserve an assured future of normality?
Figure 1. Ability to speak and write Galician among the population aged 15 and over.
Table. Some basic sociolinguistic data, 2018. Prepared by the author.
"year": 2024,
"sha1": "6a4c7ac592148af2697ff2277e9182d50df7815f",
"oa_license": "CCBY",
"oa_url": "https://www.ecmi.de/JEMIE/index.php/journal/article/download/96/35",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "53edb507c92f73b862843812b4464dc875a1d21d",
"s2fieldsofstudy": [
"Political Science",
"Linguistics"
],
"extfieldsofstudy": []
} |
149459151 | pes2o/s2orc | v3-fos-license | The solution of the heat conduction problem for a rectangular area with mixed boundary conditions and internal source in general by using fast expansions method
The fundamentals of the fast expansions method are outlined for the case in which mixed Dirichlet-Neumann boundary conditions are specified. Using the fast expansions method, the solution of the problem is then obtained in an explicit analytical form, up to the accuracy of the quadratures for the Fourier coefficients. Such a solution makes it possible to explore temperature fields in a rectangle depending on the given input data of the problem, which include the size of the rectangle, the boundary conditions, and the internal source. The resulting solution for a rectangle can also be used to construct a solution to a problem in a curvilinear domain, for a two-phase Stefan problem with a curvilinear boundary, and for other applied problems.
Introduction
The method of fast expansions is based on the ideas of [1], where fast expansions are given for Dirichlet + Dirichlet or Neumann + Neumann boundary conditions. In this article, boundary conditions of the mixed Dirichlet + Neumann type are set. This leads to fundamental differences from the first case in the construction of the boundary functions used, in the compatibility conditions for the boundary conditions of the problem, and in the direct organization of the fast expansions. The main advantages of the fast expansions method are that the solution is obtained in analytical form, that the Fourier series used, which are considered on a finite segment, admit multiple term-wise differentiation, and that they converge rapidly. Conditions for the term-wise differentiation of Fourier series of periodic functions defined on an infinite interval are considered in the classical literature, and on a finite interval in [2]. Questions of the convergence of spectral expansions are treated in [3,4] and elsewhere. The boundary function M(x) is determined using special polynomials P_s(x), called fast polynomials, which are defined by double integrals via recurrence formulas. This is a significant difference between the method proposed here and the method used in [5-8]. In [5-8], to increase the rate of convergence of a Fourier series, the authors form a special system from which, each time for a given f(x), they again find the coefficients for constructing an improvement function. In addition, the approach of [5-8] does not generalize to the solution of multidimensional nonlinear boundary integro-differential problems in curvilinear domains. The fast expansions method makes it possible to solve such boundary value problems of heightened difficulty.
Solutions of multidimensional partial differential problems with mixed boundary conditions are of definite applied and scientific interest. Such solutions for classical fields are considered in an extensive literature: in [9] for the heat equation, in [10] for problems on the deformation of plates and shells, and in [11,12] for elasticity problems. The fast expansions method has proved especially effective in solving complex problems [13-16] and others, where numerical examples are also given.
The objective of this work is to demonstrate the broad possibilities of the fast expansions method for multidimensional boundary value problems whose input conditions are given in general form, something that cannot be done with the other known analytical methods that offer an increased convergence rate of the Fourier series.
The organization of fast expansions for boundary conditions of the mixed Dirichlet + Neumann type
We require that the functions under consideration (where q is some given integer) satisfy homogeneous boundary conditions, without first setting any applied problem. In (2.1), a function is given at the left end of the segment [0, a], and its derivative is given at the right end. Since Fourier series will be used below to solve various problems, we first consider the simplest trigonometric system. Here, the process of building the spectrum of eigenfunctions and eigenvalues is not tied to the Laplace operator, and there is also no need to consider the Sturm-Liouville problem. The obtained spectra will be used in the development of the fast expansions method. The fast expansions method proposed below is intended for solving multidimensional nonlinear integro-differential problems of an applied nature. The method uses Fourier series with a high rate of convergence. Usually, when deriving formulas for the Fourier coefficients in such cases, integration by parts is applied repeatedly. For this to apply directly, a necessary condition is the continuity and smoothness of the functions. If the functions are discontinuous, then the magnitude of the discontinuity must be taken into account in these formulas, which creates certain problems in applied multidimensional settings and significantly complicates their use in differential problems. In this connection, we will further assume that the functions under consideration are continuous and sufficiently smooth.
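To fix notation, here is a minimal sketch of the presumed construction, assuming the homogeneous mixed conditions u(0) = 0, u'(a) = 0 on the segment [0, a] described above; the paper's own normalization in (2.1)-(2.5) may differ. The trigonometric system and its spectrum are

\[
X_n(x) = \sin(\lambda_n x), \qquad \lambda_n = \frac{(2n-1)\pi}{2a}, \qquad n = 1, 2, \dots,
\]

which satisfies \(X_n(0) = 0\) and \(X_n'(a) = \lambda_n \cos(\lambda_n a) = 0\), since \(\cos\big((2n-1)\pi/2\big) = 0\). A fast expansion of a smooth function \(f\) then takes the schematic form

\[
f(x) = M_q(x) + \sum_{n=1}^{\infty} b_n \sin(\lambda_n x),
\]

where the boundary function \(M_q(x)\) is built from the fast polynomials \(P_s(x)\) so that the difference \(f - M_q\) satisfies the homogeneous conditions up to the order fixed by \(q\); it is this correction that makes the coefficients \(b_n\) decay rapidly.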
When considering applied multidimensional problems in which discontinuities must be taken into account, the discontinuity surfaces are curvilinear, and this creates great difficulties in obtaining a solution. Usually, in such cases, the region Ω in which the solution is sought, and inside which the discontinuity surface lies, is divided into parts; in each part, taken separately, the application of the fast expansions method will be very effective, since the Fourier series will converge quickly, and in order to maintain the required accuracy it is sufficient to retain a small number of terms in their partial sums.
When dividing Ω into parts Ω₁ and Ω₂, it is necessary to impose on the discontinuity surface S certain conditions conjugating the solutions in Ω₁ and Ω₂, which play the role of boundary conditions. These conditions depend on the formulation of the original problem. Therefore, in what follows we consider only continuous and sufficiently smooth functions.
A function satisfying these requirements can be represented by a Fourier series in terms of the eigenfunctions (2.4). In (2.5) we have the classical Fourier series, which generally converges slowly, and it is therefore impractical to use it directly. To find the conditions under which the series (2.5) converges quickly, the expression for B_n is integrated by parts. From (2.6) it can be seen that, in addition to the smoothness condition written in (2.5), we must additionally require the equality (2.7). Hence, if, in addition to (2.7), we also require the fulfillment of the second condition, we arrive at (2.14). To understand the regular pattern in writing additional conditions like (2.13), under which the rate of convergence of the Fourier series continues to increase, integration by parts is applied to formula (2.14) for the fourth time. When conditions (2.17) or (2.19) are fulfilled, the rate of convergence of the Fourier series greatly increases, and the possibility of multiple term-by-term differentiation of these series appears. Their use then becomes promising, since in the partial sum of the series it is enough to use only the first few terms. These properties can be formulated as the following theorem. The proof of the first proposition of the theorem essentially follows from the procedure for obtaining formulas (2.18) and (2.20). In (2.22), the equality sign is replaced by the correspondence sign, since this equality has not yet been proved. We write the Fourier series in cosines for the derivative. Upon obtaining (2.24), the first condition from (2.17) was used. After comparing the coefficients f_m of the cosines in (2.22) with the last equation in (2.24), we obtain the proof of the possibility of calculating the first derivative by term-by-term differentiation of the Fourier series (2.21). The correspondence sign in (2.22) can now be replaced by the equality sign. The validity of multiple differentiation of the series (2.21) is proved in a similar way.
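The mechanism behind these conditions can be illustrated with a short calculation in the eigensystem assumed above; the numbered conditions (2.7) and (2.13) presumably correspond to the boundary equalities appearing below. For the sine coefficients, one integration by parts gives

\[
B_n = \frac{2}{a}\int_0^a f(x)\sin(\lambda_n x)\,dx
    = \frac{2}{a\lambda_n}\left( f(0) + \int_0^a f'(x)\cos(\lambda_n x)\,dx \right),
\]

because the boundary term at \(x = a\) vanishes automatically: \(\cos(\lambda_n a) = 0\). If \(f(0) = 0\), a second integration by parts gives

\[
B_n = \frac{2}{a\lambda_n}\left( \frac{f'(a)\sin(\lambda_n a)}{\lambda_n}
      - \frac{1}{\lambda_n}\int_0^a f''(x)\sin(\lambda_n x)\,dx \right),
\]

and since \(\sin(\lambda_n a) = (-1)^{n+1}\), the additional requirement \(f'(a) = 0\) removes the remaining boundary term and raises the decay to \(O(\lambda_n^{-2})\) times the coefficients of \(f''\). Each further pair of conditions, \(f''(0) = 0\) and \(f'''(a) = 0\), gains two more powers of \(\lambda_n\), which is exactly what makes the series term-wise differentiable.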
The proof of the third proposition of the theorem follows from the fact that if the inequality f(0) ≠ 0 holds, then the last equality in (2.24) will not hold. The theorem is completely proved.
Solution of the heat equation for a rectangular region with arbitrary mixed boundary conditions and an arbitrary internal source
We write the heat equation for a rectangular region, with a known internal source F(x, y). The boundary conditions on two sides of the rectangle, at x = 0 and at y = 0, are given in Dirichlet form, and on the opposite sides, at x = a and at y = b, in Neumann form, with given boundary functions g₁(y), g₂(x), g₃(y), g₄(x).
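The presumed form of the boundary value problem, consistent with the verbal description above, can be sketched as follows; the sign convention for the source term and the exact smoothness classes required of g₁, ..., g₄ in (3.2) are assumptions of this sketch rather than the paper's statement:

\[
\frac{\partial^2 U}{\partial x^2} + \frac{\partial^2 U}{\partial y^2} = -F(x,y), \qquad 0 \le x \le a, \quad 0 \le y \le b, \tag{3.1}
\]

\[
U(0,y) = g_1(y), \quad U(x,0) = g_2(x), \quad
U_x(a,y) = g_3(y), \quad U_y(x,b) = g_4(x). \tag{3.2}
\]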
The orders of the smoothness conditions are indicated in (3.2). The substitution of the expansion (3.3) into (3.1) is legitimate, since the Fourier series in (3.3) admits term-by-term differentiation more than two times. The left-hand side of (3.6) will be considered as a function of (x, y). Since the differential equation (3.1) is of the second order and the Fourier series taken from (3.3) is differentiated in it twice, a first-order operator remains. The left and right sides of system (3.12) can be represented by fast expansions of the third order.
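To make the reduction step concrete, the following is a minimal hedged sketch of how a fast expansion in x turns (3.1) into ordinary differential equations in y; the truncation order N, the boundary function M(x, y), and the coefficient notation F_n(y) are our illustrative assumptions:

\[
U(x,y) \approx M(x,y) + \sum_{n=1}^{N} u_n(y) \sin(\lambda_n x), \qquad \lambda_n = \frac{(2n-1)\pi}{2a}.
\]

Substituting this into (3.1), differentiating the series term by term (which the fast expansion legitimizes), and matching the coefficients of \(\sin(\lambda_n x)\) yields, for each \(n\),

\[
u_n''(y) - \lambda_n^2\, u_n(y) = -F_n(y), \qquad n = 1, \dots, N,
\]

where \(F_n(y)\) collects the corresponding expansion coefficient of \(F(x,y)\) together with the contribution of \(M\). Each \(u_n(y)\) can then itself be represented by a fast expansion in \(y\), which is presumably the role of the third-order fast expansions mentioned for system (3.12).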
"year": 2019,
"sha1": "3cb3b0195bb7dcb14c9a1118d5ce6250b44dde48",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1742-6596/1203/1/012032",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "e713af00006f0544a4bead5f1608aa879f5bf7cd",
"s2fieldsofstudy": [
"Engineering",
"Physics"
],
"extfieldsofstudy": [
"Materials Science",
"Physics"
]
} |
270575778 | pes2o/s2orc | v3-fos-license | Variation in the Content and Fluorescence Composition of Dissolved Organic Matter in Chinese Different-Term Rice–Crayfish Integrated Systems
This study examines the fluorescence characteristics of dissolved organic matter (DOM) in soils from different periods of rice–crayfish integrated systems (RCISs) in China. Utilizing three-dimensional excitation-emission matrix (3D-EEM) fluorescence spectroscopy, the study investigated the hydrophobicity, molecular weight distributions, and fluorescence properties of DOM in 2-, 5-, and 7-year RCIS operations, with rice monoculture (RM) serving as a control. The findings indicate that in the initial 2 years of an RCIS, factors such as rice straw deposition, root exudates, and crayfish excretions increase dissolved organic carbon (DOC) release and alter DOM composition, increasing the humic acid content in the soil. As the system matures at 5 years, improvements in soil structure and microbial activity lead to the breakdown of high-molecular-weight humic substances and a rise in small-molecular-weight amino acids. By the 7-year mark, as the aquatic ecosystem stabilizes, there is an increase in humic substances and in the humification index of the soil DOM. These variations in DOM properties are essential for understanding the effects of integrated farming systems on soil quality and sustainability.
Introduction
The co-culture of rice and aquatic animals represents a multifaceted agricultural approach within the framework of green economics [1]. By integrating aquatic organisms into rice fields and maximizing interactions among species, these systems yield both rice and aquatic products [2,3]. Compared to rice monoculture (RM), rice-aquatic animal co-culture systems significantly increase protein yields and economic efficiency by more effectively utilizing environmental resources [4,5]. Among these, the rice-crayfish (Procambarus clarkii) integrated system (RCIS) has achieved the largest development scale, spreading widely across many Asian countries [6]. Particularly in China, as of a 2021 report, the total area dedicated to integrated rice-crayfish farming reached 1.4 million hectares, accounting for 52.95% of the country's total rice-aquatic animal co-culture area [7]. With ongoing research and enhancements, the RCIS is expected to become more profitable and socially sustainable, addressing the critical needs of the growing global population [8].
The initial research on the RCIS mainly concentrated on rice yield, quality, and soil composition [9,10]. Later, the focus expanded to include ecosystem services, microbial diversity, and greenhouse gas emissions in rice paddies [11-13]. Notably, there has been limited examination of changes in paddy soil dissolved organic matter (DOM) in the context of varying RCIS terms. DOM, a highly mobile and active component of soil organic matter, is crucial in the biogeochemical cycles of carbon and other elements [14]. Its surface, adorned with numerous functional groups, can bind various metal ions, altering their bioavailability [15]. As a direct carbon source, DOM significantly influences the carbon, nitrogen, and other cycles vital for soil microorganisms, directly impacting soil fertility and quality [16,17]. In an RCIS, the extended flooding disrupts the normal wet/dry cycle, reducing oxygen supply and causing sulfide buildup and secondary soil gleization [18,19]. The activity of crayfish enhances soil structure by increasing porosity [20]. However, uneaten feed, manure waste, and water movement from crayfish feeding may result in excessive nutrient accumulation [21], impacting DOM structure and composition and potentially altering soil fertility and properties over time. Further studies are needed to comprehensively understand the impact of this aquaculture method on soil fluorescent DOM.
Three-dimensional excitation-emission matrix (3D-EEM) fluorescence spectroscopy has been the primary method for studying DOM composition [22]. This technique utilizes fluorescence fingerprinting, analyzing peak intensity, location, and distribution, along with fluorescence indices, for reliable DOM characterization under various conditions [23,24]. Typically, it categorizes DOM into two main types: humic-like and protein-like substances. Coupled with fluorescence indices, EEM data can reveal variations in DOM composition and bioavailability under different land use types, climatic conditions, and management practices [25,26].
In this study, we collected paddy soil samples from Chinese RCISs of varying durations (2, 5, and 7 years) and used excitation-emission matrix (EEM) analysis to study the composition and changes in the fluorescence characteristics of DOM. We examined the changes in the DOM's specific fluorescence intensity (SFI), Stokes shift, and fluorescence indices such as the spectral slope ratio, E2:E3, specific UV absorbance (SUVA), humification index (HIX), A:T ratio, fluorescence index (FI370), and freshness index (β:α). Additionally, we analyzed the hydrophobicity and molecular weight distributions of the DOM and assessed the dissolved organic carbon (DOC) content to quantitatively evaluate the DOM. Finally, we summarized the changes in the DOM characteristics of RCISs over different periods and their possible causes. The potential impacts of DOM characteristics on soil fertility are further discussed. These findings provide a theoretical foundation for the sustainable development of soil ecosystems and fertility cultivation in the RCIS.
Soil Sample Collection
The RCISs were categorized into three groups based on operational years: RCIS-7 (operating for 7 years), RCIS-5 (operating for 5 years), and RCIS-2 (operating for 2 years), with rice monoculture (RM) soil as an additional control group. Soil samples from the three RCISs and the RM were collected during winter in Xuyi County, Jiangsu Province, China (Figure 1a). The region experiences a semi-humid, subtropical monsoon climate with an average annual temperature of 14.7 °C and an annual rainfall of approximately 1000 mm. Under the unified organization of the town government, the agricultural management of the RCISs (field structure, farming practices, cultivation density) was almost identical across the three farms. Each RCIS included a 2 m wide, 0.5 m deep annular ditch, a paddy field, and inner and outer ridges (Figure 1b,c). An RCIS can be divided into the rice season (June to October) and the non-rice season (October to May of the following year). During the rice season, rice was sown in June and harvested in October. In late October each year, the water in the rice paddy was drained, and the field was sterilized by sun-drying for 15-20 days. The field was irrigated in early November, and the water level exceeded the inner ridge. In December, 120-150 kg/ha of compound fertilizer (N:P:K = 25:10:10) was added to improve the fertility of the water. In February of the following year, moderate numbers of juvenile crayfish were restocked into the fields. Commercial crayfish were sold from early April to late May. Before the rice season, an appropriate amount of water was slowly released from the rice paddy until the surface of the inner ridge was exposed, so that the crayfish could enter the annular ditch by the end of May.
For both the RCISs and the RM, nine topsoil samples were collected from each field at a depth of 0-20 cm, mixed, air-dried, and ground to pass a 100-mesh sieve for further analysis. The soil texture of all fields is sticky clay, and the basic soil properties are shown in Table 1.
DOM Extraction and DOC Determination
All soil samples were air-dried and sieved through a 100-mesh sieve. Fifty grams of soil was accurately weighed into a 100 mL Erlenmeyer flask, to which 300 mL of deionized water was added, giving a soil-to-water mass ratio of 1:6 [27]. The flask was shaken for 24 h at room temperature (160 rpm). After settling, the mixture was filtered through a 0.45 µm microporous membrane to obtain the DOM solution.
The DOC concentration in the soil extracts was measured using a wet oxidation total organic carbon analyzer (Aurora 1030C, OI Analytical, College Station, TX, USA). This analyzer includes a sample injection needle module, an infrared CO₂ analyzer, and other components. For analysis, the sample (1 mL per injection) was acidified with 0.2 mL of 2 M HCl. The CO₂ released during the acid addition step was injected into the CO₂ analyzer. Any remaining carbon in the sample was then oxidized, and the DOC concentration was determined from the resulting difference.
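The "resulting difference" admits a one-line formalization; the following is a hedged sketch of the TC/IC difference convention, with symbols of our choosing rather than the instrument manual's:

\[
C_{\mathrm{DOC}} = C_{\mathrm{TC}} - C_{\mathrm{IC}},
\]

where \(C_{\mathrm{IC}}\) is the inorganic carbon quantified from the CO₂ released during acidification and \(C_{\mathrm{TC}}\) is the total carbon of the injected sample; operationally, the oxidation step quantifies the carbon remaining after sparging, i.e., the difference itself.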
3D-EEM Analysis
The 3D-EEM was examined using an absorption and three-dimensional fluorescence scanning spectrometer (Aqualog, HORIBA Instruments Inc., Edison, NJ, USA), outfitted with a 1 cm × 1 cm quartz fluorescence sample cell. This Aqualog fluorescence spectrometer features an aberration-corrected double-grating excitation monochromator and an emission detector. A xenon lamp served as the excitation light source, ensuring a signal-to-noise ratio exceeding 20,000:1. The excitation wavelength (Ex) ranged from 211 to 618 nm, with a scan interval of 3 nm. The emission wavelength (Em) spanned 240 to 600 nm, using an electrically cooled CCD detector and a scanning interval of 3.54 nm.
For the 3D-EEM test, the DOM extract was diluted fivefold with ultrapure water (18 MΩ·cm). Data processing of the measured three-dimensional fluorescence spectra was performed using Aqualog V3.6 software, by removing Raman scattering, eliminating first- and second-order Rayleigh scattering, and correcting inner-filter effects. The correction formula for the inner-filter effect is expressed in Equation (1), where F_obs and F_ideal represent the measured and corrected fluorescence intensities, respectively, while Abs_Ex and Abs_Em denote the absorbance at the excitation and emission wavelengths, respectively.
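A hedged reconstruction of Equation (1): the standard absorbance-based inner-filter correction consistent with the variables defined above is presumably

\[
F_{\mathrm{ideal}} = F_{\mathrm{obs}} \times 10^{\left( Abs_{Ex} + Abs_{Em} \right)/2}, \tag{1}
\]

i.e., the measured intensity is scaled up by the mean attenuation at the excitation and emission wavelengths.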
Fluorescence Indices
The calculation of various fluorescence spectrum indices, such as the spectral slope ratio, E2:E3, SUVA, HIX, A:T ratio, FI370, and β:α, was performed using the corrected EEMs. The definition, calculation method, and indicative meaning of each fluorescence index are shown in Table 2.
Table 2. Definition, calculation method, and indicative meaning of the fluorescence indices.

Spectral slope ratio: ratio of the spectral slope of the absorbance curve over the 275-295 nm range to that over the 350-400 nm range. Spectral slopes provide insights into the average characteristics (chemistry, source, diagenesis) of DOM; the larger the value, the smaller the DOM molecular weight [28,29].

E2:E3: ratio of absorption intensities at 250 and 365 nm. E2:E3 is used to track changes in the relative molecular size of DOM and generally correlates inversely with DOM's aromaticity and molecular weight [28,29].

SUVA250-280: ratio of the UV absorbance to the DOC concentration (mg/L), reported in units of A.U./(mg DOC L⁻¹). Higher SUVA250-280 values indicate more complex aromatic structures, making the organic matter more resistant to decomposition and utilization [30].

HIX: ratio of the integral value (or average) of the emission intensity at 435-480 nm to that at 300-345 nm, at an excitation wavelength of 254 nm. HIX values are positively correlated with the degree of humification of DOM. HIX > 6 indicates high humification with a large terrestrial contribution; HIX of 4-6 indicates high humification with weak autochthonous characteristics; HIX < 4 indicates a weak degree of humification and autochthonous DOM [31,32].

FI370: ratio of fluorescence intensities at 470 nm and 540 nm emission with 370 nm excitation. FI370 is a simple and sensitive indicator of DOM sources, useful for distinguishing between terrestrial and microbial origins of DOM [33,34].

A:T: ratio of peak A (Ex = 260 nm/Em = 450 nm) to peak T (Ex = 275 nm/Em = 304 nm). The A:T ratio measures the relationship between humic-like and tryptophan-like fluorescence intensities.

β:α (freshness index): indicates the proportion of newly produced microbial DOM components relative to native inputs in aquatic systems [35,36].
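As a compact summary, the calculation rules in Table 2 can be written as formulas. This is our formalization of the definitions above; the wavelength pairs and integration limits are taken from the table, while the β:α emission wavelengths follow the common literature convention and are an assumption here:

\[
\mathrm{HIX} = \frac{\displaystyle\int_{435}^{480} I_{254}(\lambda_{\mathrm{Em}})\,d\lambda_{\mathrm{Em}}}
                   {\displaystyle\int_{300}^{345} I_{254}(\lambda_{\mathrm{Em}})\,d\lambda_{\mathrm{Em}}}, \qquad
\mathrm{FI}_{370} = \frac{I(370,\,470)}{I(370,\,540)},
\]

\[
\mathrm{A{:}T} = \frac{I(260,\,450)}{I(275,\,304)}, \qquad
\beta{:}\alpha = \frac{I(310,\,380)}{\max_{420 \le \lambda_{\mathrm{Em}} \le 435} I(310,\,\lambda_{\mathrm{Em}})},
\]

where \(I(\mathrm{Ex}, \mathrm{Em})\) denotes the corrected fluorescence intensity at the given excitation/emission pair.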
Hydrophobic and Hydrophilic Fractionation
To investigate the influence of hydrophilicity on DOM behavior, DAX (or XAD) resin columns are used to separate DOM into hydrophilic and hydrophobic fractions [37]. The separation involves adsorbent resin chromatography, using a Supelite™ DAX-8 resin column to isolate the hydrophobic and hydrophilic components of DOM, with critical retention factors set at 5, 10, 25, 50, and 100. The columns are made from Plexiglas, measuring 1.0 cm in diameter and 20 cm in length, and are packed with 40-to-60-mesh DAX-8 resin, resulting in a dead volume of 10 cm³ [38]. The operational steps are as follows: (1) a neutral water sample was passed through the column, and the filtrate was then adjusted to pH 2; subsequent passage through the column yielded the hydrophilic substances (HIS); (2) the column was flushed with a 0.1 mol L⁻¹ HCl solution, twice the column volume, to collect the hydrophobic base components (HOB); (3) a 0.1 mol L⁻¹ NaOH solution, followed by four times the column volume of ultrapure water, was applied to extract the hydrophobic acid components (HOA).
DOC Content of Soil DOM
The determination of DOC is a commonly used quantitative indicator for assessing soil DOM in many studies [39-41]. Figure 2 displays the DOC content findings of this study. Specifically, the DOC concentration in the RM soil extract was 9.83 ± 0.95 mg/L. For soils from the RCISs, the concentrations were 9.35 ± 0.32 mg/L for RCIS-7, 10.80 ± 0.68 mg/L for RCIS-5, and 12.97 ± 0.22 mg/L for RCIS-2. Contrary to expectations, the DOC content in the RCIS soil did not increase with the accumulation of farming years but instead decreased. There was a notable increase in the soil DOC concentration in RCIS-2, significantly different from the RM. Initially, the aquatic ecosystem in the RCIS was unstable, influenced by factors such as rice straw deposition, root exudation, and crayfish excretion, which elevated the DOC content [42,43]. As the operational duration increased, the soil DOC concentrations in RCIS-5 and RCIS-7 showed no significant variance from the RM. In a prolonged RCIS, it is likely that the soil structure was compromised due to extensive land tilling for rice cultivation and crayfish growth activities [44], accelerating the mineralization process of rice soils. Consequently, the biological community within a long-term RCIS exhibits greater stability and structural integrity, fostering microbial activities that enhance the decomposition of organic matter [45]. This suggests that with the extended duration of the RCIS, there was a notable and gradual decrease in DOC content.
Gao et al. reported significant disparities in DOC contents within soils located in subtropical monsoon climate regions [27], with a peak DOC content of 86.01 mg/L; Jiangsu Province lies within this climatic zone. However, the DOC content in our study was lower than the values reported by Gao et al. Li et al. explored DOM mineralization in subtropical rice soils and noted seasonal variations influencing DOC content, with the highest and lowest concentrations observed in November and July, respectively [46]. Therefore, the relatively low DOC concentrations in our study may be attributed to seasonal fluctuations affecting temperature, precipitation, microbial activity, and other relevant factors, particularly during the winter months [47].
Hydrophobicity and Molecular Weight Distributions of the DOM
Hydrophobicity is a key physicochemical property of DOM [48]. The overall hydrophobicity and molecular weight distribution of DOM from the soils of the RM and the RCISs are depicted in Figure 3. The DOC proportions of the different fractions (HIS, HOA, and HOB) generally follow the order HOA > HIS > HOB (Figure 3a). Compared to the RM, the RCIS soil shows an initial increase followed by a decrease in HOA content over time, while the HOB content initially decreases and then increases. The HIS content remains largely constant.
Regarding the chemical composition of DOM, polysaccharides are mostly found in the HIS, whereas proteins are more prevalent in the HOA and HOB (Figure 3b,c). The distribution of these chemicals between the hydrophobic and hydrophilic fractions matches well with their functional groups. Polysaccharides, which are rich in hydroxyl groups, are significantly more hydrophilic than proteins [49]. Aromatic amines and acids in proteins typically make up the hydrophobic components in the HOA and HOB [37]. In the RCIS soil, the proportion of polysaccharides in the HIS has increased compared to the RM, while the proportion of proteins remains unchanged. DOM in the various rice paddy soils shows a wide range of molecular weights (Figure 3d). In the RM, molecular weights mostly range from 5 to 50 kDa. Notably, there is a significant rise in low-molecular-weight substances (under 5 kDa) in the RCISs, increasing with the duration of cultivation.

In the RCIS, land tilling, together with microbial activities, likely enhances the infiltration of large organic molecules in granular form into deeper soil layers. Subsequently, some of this organic matter transforms into smaller dissolved organic molecules. While most of this transformed matter degrades biologically, a residual amount remains at the soil surface [48]. This could explain the observed variations in molecular weight and hydrophobicity over the different years of the RCIS. The broad range of hydrophobicity and molecular weight distributions is useful for comparing the various fractions and understanding the impact of these properties on fluorescence.
EEM Spectra of the DOM Fractions
The EEM fluorescence spectra of DOM from RM soil and various RCIS terms are illustrated in Figure 4a.The general pattern of two spectra, in terms of peak distribution, is similar for RM and RCIS-7, but the FI of RCIS-2 is slightly higher than that of RM.The most distinct peaks, seen in the spectra of RM and RCIS-2, occur at the wavelengths (Ex, Em) = (260, 425) nm, (325, 425) nm, and (275, 310) in spectra of RM and RCIS-2.The first two peaks are identified as humic acid-like substances, primarily originating from human activities such as agricultural practices [38].The third fluorescence peak indicates tyrosine, a protein primarily derived from internal biodegradation processes [50].The FI of the RCIS-5 spectra is significantly lower than RM and RCIS-2, detecting only the peak corresponding to tyrosine peak near (Ex, Em) = (275, 310).The FI of the RCIS-7 spectra, although higher than that of RCIS-5, is still notably lower than that of RM and RCIS-2.In the RCIS- Regarding the chemical composition of DOM, polysaccharides are mostly found in HIS, whereas proteins are more prevalent in HOA and HOB (Figure 3b,c).The distribution of these chemicals between hydrophobic and hydrophilic fractions matches well with their functional groups.Polysaccharides, which are rich in hydroxyl groups, are significantly more hydrophilic than proteins [49].Aromatic amines and acids in proteins typically make up the hydrophobic components in HOA and HOB [37].In RCIS soil, the proportion of polysaccharides in HIS has increased compared to RM, while the proportion from proteins remains unchanged.DOM in various rice paddy soils shows a wide range of molecular weights (Figure 3d).In RM, molecular weights mostly range from 5 to 50 kDa.Notably, there is a significant rise in low-molecular-weight substances (under 5 kDa) in RCISs, increasing with the duration of cultivation.
In the RCIS, land tilling, promoted by microbial activities, likely enhances the infiltration of large organic molecules in a granular form into deeper soil layers. Subsequently, some of this organic matter transforms into smaller dissolved organic molecules. While most of this transformed matter degrades biologically, a residual amount remains at the soil surface [48]. This could explain the observed variations in molecular weight and hydrophobicity over different years within the RCIS. The broad range of hydrophobicity and molecular weight distribution is useful for comparing various fractions and understanding the impact of these properties on fluorescence.
EEM Spectra of the DOM Fractions
The EEM fluorescence spectra of DOM from RM soil and various RCIS terms are illustrated in Figure 4a. The general pattern of the two spectra, in terms of peak distribution, is similar for RM and RCIS-7, but the FI of RCIS-2 is slightly higher than that of RM. The most distinct peaks, seen in the spectra of RM and RCIS-2, occur at (Ex, Em) = (260, 425) nm, (325, 425) nm, and (275, 310) nm. The first two peaks are identified as humic acid-like substances, primarily originating from human activities such as agricultural practices [38]. The third fluorescence peak indicates tyrosine, a protein primarily derived from internal biodegradation processes [50]. The FI of the RCIS-5 spectra is significantly lower than that of RM and RCIS-2, detecting only the tyrosine peak near (Ex, Em) = (275, 310) nm. The FI of the RCIS-7 spectra, although higher than that of RCIS-5, is still notably lower than that of RM and RCIS-2. In the RCIS-7 spectra, two peaks are evident at (Ex, Em) = (250, 425) nm and (325, 425) nm, both associated with humic acid-like substances.
SFI is obtained by dividing the fluorescence intensity (FI) by the TOC concentration, expressed in R.U./(mg-TOC/L) [28] (Figure 4b). The trends in SFI from RM to RCIS-2 are consistent with the changes in FI, suggesting that these changes are mainly due to variations in the concentration of fluorescent substances rather than shifts in other DOC components.
Changes in Fluorescence Indices and Stokes Shift in DOM
The fluorescence indices of RM and RCISs are summarized in Table 3. The variations in SUVA for RM and RCIS across the wavelength range of 240-600 nm are illustrated in Figure 5. The SUVA of all samples decreases with increasing wavelength, showing an absorption peak between 250 and 280 nm. Compared to RM, the SUVA of the RCIS initially decreases and then increases over the cultivation period. Specifically, the SUVA of sample RCIS-5 is significantly lower than that of RCIS-2 and RCIS-7, with RCIS-2 exhibiting a slightly higher SUVA than RCIS-7.

Spectral slopes further elucidate the general characteristics of DOM, such as its chemistry, source, and diagenesis [29]. The spectral slope values for samples RM, RCIS-7, and RCIS-2 are similar, whereas RCIS-5's value is notably higher, suggesting that RCIS-5's soil DOM has the lowest molecular weight and weakest aromaticity after five years of RCIS (Figure 6) [29]. The E2:E3 ratio, which measures the absorbance ratio between 250 nm and 280 nm, generally correlates inversely with DOM's aromaticity and molecular weight [51]. The E2:E3 ratios are 4.7 for RM and range from 5.1 to 5.8 for RCIS, indicating a lower molecular weight of DOM in RCIS. This observation aligns with previous studies on molecular weight distribution. The 250-280 nm range primarily absorbs aromatic groups in organic macromolecules, and thus the SUVA in this region reflects DOM's aromaticity [52]. SUVA values between 250 and 280 nm are positively associated with the degree of aromatic condensation [53]. Higher SUVA values indicate more complex aromatic structures, making the organic matter more resistant to decomposition and utilization. In this study, the SUVA value between 250 and 280 nm for RM is 0.017, which is higher than RCIS's range of 0.006-0.013. This suggests that RM's soil DOM has greater aromaticity and structural complexity, likely due to the rapid consumption of easily decomposable materials in RM, leading to the accumulation of humic substances rich in aromatic structures. In contrast, the RCIS process enhances microbial activity, promoting the decomposition of aromatic organic macromolecules.

The changes in various fluorescence and absorbance indices (HIX, FI370, A:T, and β:α) for both RM and RCIS were examined further (Figure 7). HIX is strongly linked to the aromaticity of DOM and is inversely associated with its carbohydrate content [54]. Higher HIX values indicate either more condensed aromatic structures or greater conjugation in aliphatic chains. Among the four sample groups, RM recorded the highest HIX value at 5.05, while RCIS-5 had the lowest at 1.24. The HIX values for RCIS-7 and RCIS-2, at 4.10 and 4.59, respectively, were similar and markedly higher than that of RCIS-5. The values for RM, RCIS-7, and RCIS-2, which ranged from 4 to 6, suggest a weak humic character with recent contributions from autochthonous sources to DOM formation [55]. In contrast, the HIX value for RCIS-5 was below 4, pointing to a predominantly biological or aquatic bacterial origin of DOM [56]. Insights from previous studies on molecular weight distribution suggest that RCIS could enhance microbial abundance and diversity, facilitating the breakdown of high-molecular-weight humic substances. However, this did not lead to the anticipated significant increase in HIX values for RCIS compared to RM.

The Stokes shift is calculated as the difference between Ex⁻¹ and Em⁻¹ and reflects the energy relaxation loss during fluorescence [57]. The distribution of the Stokes shift in DOM samples varied across the different management practices, as illustrated in Figure 8. The samples under RM showed a distinct peak at a Stokes shift of 1.07 µm⁻¹. For RCIS, samples RCIS-7 and RCIS-2 displayed Stokes shift peaks similar in shape and value to RM, whereas the RCIS-5 sample had a reduced peak at 1.07 µm⁻¹ and an increased fluorescence peak intensity at smaller Stokes shifts (0.27-0.62 µm⁻¹). These variations in Stokes shift, corresponding with changes in the EEM spectra, indicate that shifts in hydrophobic and hydrophilic properties affect fluorescence characteristics. This is supported by the understanding that hydrophobic compounds generally exhibit higher Stokes shifts due to higher aromaticity and larger conjugated systems [23]. According to Xiao et al., this may be because hydrophilic compounds contain a considerable number of carboxyl groups (the main type of acidic group), which can act as electron-withdrawing substituents that increase molecular hardness and reduce the Stokes shift [48,58]. The alterations in the RCIS-5 sample's Stokes shift spectrum can be explained by the consumption of hydrophobic organic acids, such as humic acids.

The FI370 was used to assess the source of DOM, whether from plant residues, soil organic matter, or microorganisms [34]. FI370 values between approximately 1.7 and 1.9 suggest microbial origins, whereas values from 1.3 to 1.4 indicate terrestrial-derived fulvic acid-like components [28]. In this study, RCIS-5 showed an FI370 value of 1.71, nearing 1.9, suggesting a microbial source of DOM. The other samples had FI370 values above 1.4 (around 1.5), indicating that DOM in RM, RCIS-7, and RCIS-2 largely stems from organic matter inputs due to water and fertilizer management in rice cultivation. DOM in RCIS soil may also derive from crayfish and their excreta. Differences in FI370 values between RM and RCIS were not substantial, likely because the sampled topsoil contained fewer plant residues, resulting in a dominance of microbially derived organic matter in deeper soil layers over time. Further analysis of the humus and protein fractions in these deeper layers is needed to substantiate these findings. Additionally, FI370 was negatively correlated with aromatic content [28], with the order of FI370 values being RM < RCIS-2 < RCIS-7 < RCIS-5. The highest aromatic content in DOM was found in RM soil, aligning with the findings mentioned above.
Discussion
Based on the observed variations in the EEM spectra and other fluorescence spectral indices, the impact of years of cultivation under RCIS on soil DOM can be summarized as follows. In the initial two years, the aquatic ecosystem remains unstable. Factors like rice straw deposition, root exudates, and crayfish excretions increase DOC release, enhancing humic acid content and altering DOM composition per unit concentration (mg/L DOC). By the fifth year, soil structure and microbial activity improve, leading to the decomposition of large-molecular-weight humic substances and an increase in small-molecular-weight amino acids. By the seventh year, the ecosystem tends to stabilize, and the content of humic substances in soil DOM rises.
Liang et al. compared the soil DOM characteristics of different management practices in RCIS and RM, finding that the HIX index for RM was 0.664, while for RCIS it was approximately 0.50 [59]. Although the HIX index obtained in Liang's study differs significantly from ours (possibly due to variations in soil texture, sampling, and management practices), both studies reached the same conclusion: contrary to expectations, RCIS does not enhance the soil's humification capacity. Therefore, the RCIS process should incorporate the application of various organic fertilizers to increase the content and diversity of DOM, particularly during the mid-term of RCIS. However, further sampling and analysis are needed to understand these changes under extended RCIS cultivation.
Conclusions
The study highlights significant changes in DOM fluorescence characteristics over different durations of RCIS cultivation through 3DEEM analysis. Early in the RCIS process (2 years), factors such as rice straw deposition, root exudates, and crayfish excretions promote DOC release and alter the DOM composition, increasing the soil's humic acid content. By the midpoint (5 years), improvements in soil structure and microbial activity lead to the breakdown of large-molecular-weight humic substances and a rise in small-molecular-weight amino acids. In the later stages (7 years), as the ecosystem nears stability, there is an increase in both the content of humic substances and the humification index in the soil DOM. This research emphasizes the potential environmental impacts of integrated farming systems on soil DOM, which is crucial for developing management strategies for soil health and sustainability in RCIS.
Figure 2. The DOC concentrations of the DOM of the different soil samples.

Gao et al. reported significant disparities in DOC contents within soils located in subtropical monsoon climate regions [27]. Given that Jiangsu Province is within this climatic zone, this study observed a peak DOC content of 86.01 mg/L. However, the DOC content in our study was lower compared to the findings of Gao et al. Li et al. explored DOM mineralization in subtropical rice soils and noted seasonal variations influencing DOC content, with the highest and lowest concentrations observed in November and July, respectively [46]. Therefore, the relatively lower DOC concentration in our study may be attributed to seasonal fluctuations affecting temperature, precipitation, microbial activity, and other relevant factors, particularly during the winter months [47].
Figure 3. The distribution of the hydrophobic/hydrophilic fractions of (a) TOC, (b) polysaccharides, and (c) protein; (d) the molecular weight distribution of TOC in soil samples.
Figure 4. EEM fluorescence spectra of DOM from the soil of RM and different-term RCIS: (a) FI, (b) SFI.
Figure 5. Changes in SUVA among RM and different-term RCIS.
Figure 8. Stokes shift distributions of the different samples.
Table 1. The basic properties of RCIS and RM soil.
Table 2. Description of fluorescence indices.
Table 3. Summary table for fluorescence indices of RM and RCIS.
Machine learning approaches to understand the influence of urban environments on human's physiological response
This research proposes a framework for signal processing and information fusion of spatial-temporal multi-sensor data pertaining to understanding patterns of humans' physiological changes in an urban environment. The framework includes signal frequency unification, signal pairing, signal filtering, signal quantification, and data labeling. Furthermore, this paper contributes to human-environment interaction research by proposing a field study to understand the influence of environmental features, such as varying sound level, illuminance, field-of-view, or environmental conditions, on humans' perception. In the study, participants of various demographic backgrounds walked through an urban environment in Zurich, Switzerland, while wearing physiological and environmental sensors. Apart from signal processing, four machine learning techniques (classification, fuzzy rule-based inference, feature selection, and clustering) were applied to discover relevant patterns and relationships between the participants' physiological responses and environmental conditions. The predictive models with high accuracies indicate that a change in the field-of-view corresponds to increased participant arousal. Among all features, the participants' physiological responses were primarily affected by changes in environmental conditions and field-of-view.
Introduction
Understanding the influence of environmental conditions on human perception is complex. Various environmental features, e.g., sound level, temperature, and illuminance, affect our senses. Therefore, we adopted enhanced measurement and analysis techniques to define and measure what influences citizens in dynamic urban environments. The environmental features measured in this research include sound level, dust, temperature, humidity, illuminance, and the field-of-view, since they influence a person's perception. The features of the data were recorded through devices and sensors at varying frequencies and had both temporal and spatial properties. The features had a temporal property due to continuous recording, and they had spatial characteristics because the recordings were associated with changes in location via the global positioning system (GPS). Hence, in this research, we proposed a framework that performs signal preprocessing, signal filtering, signal quantification, data fusion, and data labeling to answer the defined research questions.
Machine learning based techniques have been successfully applied for knowledge mining and pattern recognition in various real-world situations [32,39], since they are useful in identifying the underlying patterns within data [1,25]. Thus, we formulated the processed data such that four state-of-the-art machine learning techniques (classification, fuzzy rule-based inference, feature selection, and clustering) could be applied for discovering patterns in the participants' physiological responses related to the urban environmental conditions. The first step in this research was to assess the predictability of participants' perception (physiological responses) of the urban environment. Thus, a ten-fold cross-validation was performed on a reduced error-pruning tree (REP-Tree) classification model [29]. Following the classification approach, a fuzzy rule-based learning inferential model was built, using the fuzzy unordered rule induction algorithm (FURIA) [17], to investigate the relationship between the urban environmental features and the physiological response measures. Subsequently, the importance of the various urban environmental features was analyzed by applying a backward linear feature elimination filter (BFE) [22]. Furthermore, a self-organizing map (SOM) [18] was applied to visualize the impact of urban environment features on participants' physiological responses. In the final step, a method for referencing GPS locations (geo-locations) to compute the mean physiological response across all participants was developed. Since various methods were involved in data processing, additional graphics and multimedia can be found on the project website [12].
In summary, the following are the three essential contributions of this research: (a) a field study design for understanding human perception of the urban environment; (b) a framework design comprising signal processing, signal quantification, and data fusion methods that invokes a novel approach to physiological data quantification; (c) a comprehensive analysis using four machine learning methods to discover the patterns that are crucial to our understanding of human perception in urban settings.
We organized this paper into seven sections. Section 2 places this research in the context of the literature and describes the experimental procedure. Section 3 describes signal preprocessing, multi-sensor information fusion, and machine learning techniques in detail. Section 4 is devoted to explaining the obtained results, followed by a comprehensive discussion in Section 5. The challenges and opportunities of the research are presented in Section 6, and Section 7 concludes the findings of this research.
2 Human perception of the urban environment
Literature review
The process of measuring physiological data as an indicator of human perception is complex, particularly in real-world applications, since perception can be influenced by various factors [2]. However, physiological pattern recognition can derive significant evidence about human perception [27]. Similar to our research, Picard et al. [27] focused on physiological sensor data, specifically skin conductance, and they related high and low arousals to positive and negative biological reactions. Picard et al. [27] also focused on the collection and filtering of the physiological data to construct good quality data void of failures and corrupt signals. They formulated physiological data so that a k-nearest-neighbor classifier could predict a human's physiological arousal-based perception. Krause et al. [19,20], on the other hand, used wearable device data, including physiology-based sensor data (galvanic skin response), to identify a user's state in terms of physiological and activity context using SOM-based clustering. Specifically, they performed unsupervised learning to classify sensor data to determine the context from which the signals were generated.
In Wang et al. [38], pattern recognition and classification of physiological sensor signals were performed by first decomposing the signals into their constituent features and then applying a support vector machine to classify negative and positive emotion labels. Here, the labels associated with the signals were predefined during the experiment by exposing the participants to negative and positive environments during the recording of the signals. Rani et al. [31] performed an empirical study of four machine learning techniques, k-nearest neighbor, regression tree, Bayesian network, and support vector machine, for the recognition of the emotional state from physiological response data. They performed signal processing to evaluate features from the physiological data and labeled them with the emotional state reported by the participants.
Since we investigate the "cause and effect" between the environmental conditions and human perception, unlike Wang et al. [38] and Rani et al. [31], we performed signal processing on the physiological data to evaluate skin conductance response (SCR) arousals [40]. Subsequently, we assigned labels to signal fragments based on the degree of arousal within a specified time. While doing this, we considered the physiological data as the output in the classification model and the signals from the environment as the inputs, whereas Wang et al. [38] and Rani et al. [31] considered features of the processed data as the inputs and the reported environment as the output. Our approach, to first determine the arousal level, was adopted because of the complexities of the urban environment and because we cannot accurately consider an urban environment to be positive or negative towards the perceptual quality of a participant. Thus, we labeled environmental conditions as positive or negative by considering the physiological data as the target in the classifier's training.
Ragot et al. [30] found that the physiological response signals from the Empatica E4 wearable device were closely comparable to laboratory-based measurement devices. They also found that the data from such wearable devices could be used to train a support-vector-machine classifier to recognize the participants' emotional state. Similarly, Poh et al. [28] confirmed that EDA data from wearable devices are comparable to laboratory devices and are a valid physiological measure. Hence, our approach in this study was to employ the Empatica E4 to perform the physiological measurements.
Study design and measurements
We designed a study to understand the general pattern(s) of human perception related to events that occur in a dynamic urban environment. An event indicates a change in the environmental condition and, also, a sample of the measured environmental data. As a case study, we selected a neighborhood in Zürich, Switzerland (Fig. 1a), and invited participants to take a leisurely walk on a predetermined path (Fig. 1b). The participants were equipped with a "sensor backpack" [14] and an Empatica E4 wearable device [11]. The 1.3 km walking path was carefully selected to cover a diverse set of urban scenarios [15], e.g., spacious and narrow streets, green and urban areas, and loud and quieter locations.
Our sensor kit [14] measured the changes in sound level (decibel, dB), the amount of dust (mg/m³), temperature (°C), relative humidity (%), and illuminance (lx). We also calculated the field-of-view based on the GPS information and the spatial configuration of the neighborhood. The field-of-view is formally described by the Isovist descriptor, which refers to the open space a person can view from a single vantage point [4]. Since participants were walking in a forward direction, we considered a 180° field-of-view with a distance of 100 m. Subsequently, the Isovist descriptor for each participant's walk was measured by drawing a polygon around the participant's 180° field-of-view at their specific GPS location. From this, the following measures of the Isovist polygons were calculated: Area, the polygon's surface area; Perimeter, the polygon's perimeter length; Compactness, the ratio of area to perimeter (relative to an ideal circle); and Occlusivity, the length of occluding edges (a sketch of these polygon measures is given at the end of this subsection). The EDA measures an individual's physiological state [6], and it was recorded using the Empatica E4 wearable device, similar to the studies in [11,12,13]. We placed the wearable device on the participants' non-dominant hand and let it adjust for 10 minutes according to the Empatica guidelines [11]. The data were recorded on the Empatica website and corrected for motion artifacts [11]. The EDA measure (physiological response) was a time-series signal and has temporal dependencies. The sensor backpack, on the other hand, was designed to capture the contextual events that occur in an urban environment. In the context of this study, an event is non-temporal, since an event depends on the instance of its observation. Therefore, the continuous signals recorded for the environmental features and the continuous signals recorded for the participants' physiological responses were quantified in two different manners (Section 3.2). Moreover, since the recorded signals were associated with geographical locations, they also had spatial properties. The primary infrastructure of the urban environment and the season (April 2016) were uniform. However, inherent diversity arose from different experiment days, times of day, and participants' demographic backgrounds. The data for both the environment measures and the corresponding participants' physiological response measures are summarized in Table 1.
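As an illustration, the polygon measures can be derived with the shapely library. The 4πA/P² normalization for compactness is our assumption of "relative to an ideal circle", and occlusivity is omitted since it requires the visibility analysis itself; this is a sketch, not the study's actual GIS pipeline.

```python
import math
from shapely.geometry import Polygon

def isovist_measures(fov_coords):
    """Derive three of the four Isovist descriptors from the vertex
    coordinates of a 180-degree field-of-view polygon."""
    poly = Polygon(fov_coords)
    area = poly.area          # Isovist area
    perimeter = poly.length   # Isovist perimeter
    # Compactness relative to an ideal circle: 4*pi*A / P**2 equals 1.0
    # for a circle and decreases as the shape elongates.
    compactness = 4.0 * math.pi * area / perimeter ** 2
    return area, perimeter, compactness

# e.g., a half-disc approximating an unobstructed 100 m field-of-view
half_disc = [(100 * math.cos(a / 64 * math.pi),
              100 * math.sin(a / 64 * math.pi)) for a in range(65)]
print(isovist_measures(half_disc))
```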
Methodologies
A comprehensive signal processing and data-preprocessing framework was proposed in order to apply selected machine learning methods. Fig. 2 illustrates the framework and describes how it was used for information fusion and knowledge mining. Here, e_i and r_i indicate the i-th quantified event (a sample in the quantified environmental data) and response (a sample in the quantified physiological response data), respectively. The variable m_j for j ∈ {1, 2, ..., N} indicates the total number of samples belonging to the j-th participant p_j. The information was fused in three stages: (a) Each participant's event-based data (e) were collected from five sensors, re-sampled to a unique frequency, and the samples were aligned based on their timestamps (Fig. 2, mark "A").
(b) The environment and response data from each participant were independently cleaned, filtered, and quantified. Each participant's quantified event and response data were fused (paired) by assigning a quantified response r_i to event e_i (Fig. 2, mark "B").
(c) The paired participants' data were then stacked (Fig. 2, mark "C"). The three-stage information fusion approach produced the compiled dataset, which was fed to the selected machine learning techniques. For each machine learning technique, the compiled dataset (Fig. 2, mark "C") was arranged and configured as per the technique's requirements and objectives.
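A minimal pandas sketch of stages B and C (pairing and stacking) is shown below; the frame layout and the column names window_id and participant are illustrative assumptions, not the study's actual schema.

```python
import pandas as pd

def fuse_participants(event_frames, response_frames):
    """Pair each participant's quantified events e_i with the response
    r_i of the same time-window, then stack all participants."""
    stacked = []
    for pid, (events, responses) in enumerate(
            zip(event_frames, response_frames), start=1):
        paired = events.merge(responses, on="window_id")  # pairing (mark "B")
        paired["participant"] = pid
        stacked.append(paired)
    return pd.concat(stacked, ignore_index=True)          # stacking (mark "C")
```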
Frequency unification
The environmental features sound and dust were collected at a 0.4 Hz frequency, while GPS position, temperature, humidity, and illuminance were collected at a 1 Hz frequency (Table 1). Therefore, an up-sampling mechanism with linear interpolation was applied to the sound and dust data [5] to unify the frequencies of the gathered data. All features were then aligned to the same timestamps, which was crucial to ensure that all sensor values belong to the exact same event during the study.
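A sketch of this up-sampling step with pandas, assuming the signals carry datetime indices (the study's actual tooling for this step is not specified):

```python
import pandas as pd

def to_1hz(series_04hz: pd.Series) -> pd.Series:
    """Up-sample a 0.4 Hz signal (one sample every 2.5 s, e.g., sound
    or dust) to the 1 Hz grid of the other sensors using linear
    interpolation; the index must be a DatetimeIndex."""
    return series_04hz.resample("1s").interpolate("linear")

# toy example: four sound-level readings, 2.5 s apart
idx = pd.to_datetime([0, 2.5e9, 5e9, 7.5e9])  # nanoseconds since epoch
sound = pd.Series([60.0, 62.0, 61.0, 65.0], index=idx)
print(to_1hz(sound))
```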
Signal filtering and smoothing
The physiological response data (EDA signals) were kept at their original 4 Hz frequency to maintain the information required for arousal detection from the physiological data. On close inspection, we found that some participants' EDA signals were unusable and were discarded.

Physiological data selection

The EDA signal profiles illustrated in Figs. 3a, 3b, 3c, and 3d were considered for the data analysis; the EDA signals belonging to the two erroneous profile types illustrated in Figs. 3e and 3f were discarded. In total, 10 EDA signals were discarded. The erroneous EDA signal types were classified as: (a) Type-1 error, where the EDA signal values only fluctuate between two values, i.e., the EDA signal behaves like a step function; such a signal may also contain a significant amount of sensor loss (no sensor response recorded); and (b) Type-2 error, where the majority of the sample values are zero (significant sensor response loss), despite otherwise normal fluctuations (correct sensor responses) in the EDA signal.

The remaining (accepted) EDA signals were first smoothed and then filtered to remove artifacts, as recommended in the EDA literature [6,8]. The authors in [8] suggested an adaptive SWT-based smoothing method for EDA signals recorded over long periods (30 hours). In our study, EDA signals were recorded for 25-29 minutes; therefore, we applied a one-level SWT and reverse SWT for smoothing. Each EDA signal was transformed using "Haar" as the mother wavelet in the SWT [24]. A one-level SWT transformation was performed on each signal, and a threshold of ±0.001 was applied to the obtained wavelet coefficients to eliminate larger fluctuations in the signal; that is, the values of wavelet coefficients above +0.001 and below −0.001 were cut off (Fig. 4a). Finally, a reverse SWT was applied to the transformed coefficients to produce the smoothed signal (Fig. 4b).
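A minimal sketch of this smoothing step using the PyWavelets library; reading "cut off" as clipping the detail coefficients to the ±0.001 band is our interpretation (zeroing out-of-band coefficients would be an alternative reading), and the even-length padding is an implementation detail not described in the paper.

```python
import numpy as np
import pywt  # PyWavelets

def swt_smooth(eda, threshold=1e-3):
    """One-level stationary wavelet transform (Haar) smoothing of a
    4 Hz EDA signal, followed by the inverse transform."""
    x = np.asarray(eda, dtype=float)
    padded = x.size % 2 == 1          # level-1 SWT needs an even length
    if padded:
        x = np.append(x, x[-1])
    (cA, cD), = pywt.swt(x, "haar", level=1)
    cD = np.clip(cD, -threshold, threshold)  # suppress large fluctuations
    smooth = pywt.iswt([(cA, cD)], "haar")
    return smooth[:-1] if padded else smooth
```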
Truncation of the unwanted signal fragments

The SWT-based treatment of the EDA signals eliminated the large fluctuations from the signal. However, some sharp drops in the signal (corrupt fragments) caused by artifacts were not filtered out completely. Thus, the corrupt fragments and the participants' waiting-time fragments of the EDA signal were truncated from both the original (raw) and the smoothed EDA signals.
Signal quantification and labeling
Signal quantification involved three steps: time-window marking, arousal detection, and data labeling.
In fact, these are the critical steps in the fusion of the environmental data and the physiological response data. As shown in Fig. 2, the physiological data were quantified first, and then the timestamp information was passed to the environmental data for its quantification.
Time window marking
Each EDA signal's timestamp information was compared with the timestamps recorded at various stages during a participant's walk. Based on the signal filtering shown in Fig. 4b and the available timestamp information, the signal fragment belonging to the walking duration (indicated by Start and End in Fig. 5a) was marked at regular intervals with a time-window size of t seconds. Such time-window marking was crucial to our data analysis for observing participants' physiological states in relation to their experience of the events occurring at regular intervals of t seconds (Fig. 5a).
Therefore, for each time-window, the event e_i^{p_j} for i = 1 to m_j experienced by participant p_j is a vector of the environmental features and was computed by averaging the values of the signal fragments (environmental measurements) in the i-th time-window. On the other hand, the participant's physiological response r_i^{p_j} was computed using the arousal detection method described in Section 3.2.2. Additionally, the participants' field-of-view (Isovist descriptors: area, perimeter, occlusivity, and compactness) was computed at the start of each time-window. Thus, a participant's quantified data p_j had an identically independent vector of environmental conditions (event e_i^{p_j}) and a corresponding physiological state (response r_i^{p_j}) for each time-window.
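A sketch of the event quantification as plain NumPy window averaging; the array layout is assumed, and the window size corresponds to the quantification rate t.

```python
import numpy as np

def quantify_events(env_signals, fs=1.0, t=5.0):
    """Compute the event vectors e_i by averaging each environmental
    feature over consecutive t-second time-windows. env_signals is an
    (n_samples, n_features) array sampled at fs Hz after frequency
    unification."""
    win = int(fs * t)
    n_windows = env_signals.shape[0] // win
    trimmed = env_signals[:n_windows * win]   # drop the incomplete tail
    return trimmed.reshape(n_windows, win, -1).mean(axis=1)
```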
Arousal detection (EDA)
The level of arousal r_i^{p_j} in an EDA signal depends on identifying a specific signature (pattern) called a skin conductance response (SCR) or arousal [3,6,9,33,35]. The state of arousal in an EDA signal is typically defined as a peak having a specific signature [6]. We processed the EDA signals using the skin conductance processing tool Ledalab [3]. Ledalab offers a continuous decomposition analysis (CDA) method for analyzing an EDA signal. In CDA, an EDA signal is decomposed into a tonic skin conductance level (SCL) and phasic drivers (SCR).
We performed CDA on each participant's EDA signal data by using the recommended settings in Ledalab [3]. That is, the signal's optimization procedure was performed two times, which automatically determined the optimization parameters for evaluating the number of significant SCRs (nSCR) above a defined threshold of 0.01 µS within a time-window. We used nSCR because we could not, in a theory-driven manner, define which stimulus (event) caused a change in a participant's physiological arousal state. Thus, we relied on a data-driven approach by analyzing the phasic SCR, a non-specific, fast-changing EDA measure; i.e., the number of peaks in the phasic skin conductance response (nSCR) to any kind of event within the given time-window. Therefore, the nSCR gave us the measures of r_i^{p_j} shown in Fig. 5b.
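Ledalab's CDA runs in MATLAB; as a simplified stand-in for the peak-counting step only (not the decomposition or the two-pass optimization), per-window peaks on a phasic driver could be counted as follows.

```python
import numpy as np
from scipy.signal import find_peaks

def nscr_per_window(phasic_driver, fs=4.0, t=5.0, threshold=0.01):
    """Count SCR-like peaks above an amplitude threshold (in
    microsiemens) per t-second window on a phasic driver sampled
    at fs Hz."""
    peaks, _ = find_peaks(phasic_driver, height=threshold)
    win = int(fs * t)
    n_windows = len(phasic_driver) // win
    counts = np.zeros(n_windows, dtype=int)
    for p in peaks:
        if p < n_windows * win:      # ignore peaks in the trimmed tail
            counts[p // win] += 1
    return counts
```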
Data labeling
When aggregating all participants' data (Fig. 2, mark "C"), we observed that the nSCR value for a time-window varies from 0 to 12. An nSCR value of 0 indicates that, in a time-window, a participant had a normal physiological condition. On the other hand, an nSCR value greater than 0 for a time-window indicates that a participant experienced a state of arousal at least once in that time-window. Thus, for the labeling of each time-window of each participant's data, a binary-class label indicating the binary state of the phasic nSCR r_i^{p_j} can be used, where (a) class 0 is the "normal" physiological response ("N"), i.e., an nSCR value equal to 0; and (b) class 1 is the "aroused" physiological response ("A"), i.e., an nSCR value greater than 0.
A multi-class classification was also used, in which case the aroused physiological response "A" has two categories: class "LA" indicating a low arousal response, i.e., 0 < nSCR < 6, and class "HA" indicating a high arousal response, i.e., nSCR ≥ 6. A total of 6,057 samples and 9 input features were available in the compiled dataset for a time-window size t (quantification rate) of 5 seconds. In the compiled data, 3,491 samples belonged to the category "N" and 2,566 samples belonged to the category "A," i.e., approximately 60% and 40% of the samples, respectively. Furthermore, in the multiclass classification, 2,079 samples were labeled "LA" and 487 samples were labeled "HA."
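The labeling rule itself is small enough to state directly in code; this is a direct transcription of the thresholds above.

```python
import numpy as np

def label_windows(nscr, multiclass=False):
    """Binary labels: 'N' if nSCR == 0, 'A' otherwise. Multiclass:
    'LA' for 0 < nSCR < 6 and 'HA' for nSCR >= 6."""
    nscr = np.asarray(nscr)
    if not multiclass:
        return np.where(nscr == 0, "N", "A")
    labels = np.full(nscr.shape, "N", dtype=object)
    labels[(nscr > 0) & (nscr < 6)] = "LA"
    labels[nscr >= 6] = "HA"
    return labels
```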
Non-inferential modeling
We built a predictive model consisting of the environmental features as the inputs and the binary (and multiclass) quantified arousal level as the output using REP-Tree, which is a decision tree learner [29].
In a decision tree, a tree-like predictive model is built, where the leaves represent the target (e.g., the class labels "N" or "A") and the branches represent an observation for a feature (e.g., sound level) at a node. REP-Tree is a method applied to reduce the size of a decision tree, where it keeps pruning subtrees by replacing them with a leaf (a class label) as long as the error does not increase (i.e., the accuracy of the model does not decrease).
We chose REP-Tree to build a predictive model because the algorithm constructs a decision tree where each node makes a decision for a feature, and its specific value produces a particular class label. While making a predictive model, REP-Tree chooses the most significant features based on their contribution to the model's accuracy, which is advantageous for this problem since it is uncertain which environmental features influence physiological responses. For the validation of the model's predictive performance, we chose ten-fold cross-validation (10-fold CV). Section 4 describes the test accuracies of the 10-fold CV based REP-Tree training.
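REP-Tree is a Weka learner with no direct scikit-learn counterpart; the following sketch substitutes cost-complexity pruning for reduced-error pruning and uses synthetic stand-in data shaped like the compiled dataset, so the printed numbers are not the paper's results.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Stand-in data shaped like the compiled dataset: 6,057 windows,
# 9 environmental features, binary 'N'/'A' target (here 0/1).
X, y = make_classification(n_samples=6057, n_features=9, random_state=0)

# ccp_alpha prunes subtrees that do not pay for their complexity,
# a rough substitute for Weka's reduced-error pruning.
clf = DecisionTreeClassifier(ccp_alpha=1e-3, random_state=0)
scores = cross_val_score(clf, X, y, cv=10)  # 10-fold CV, as in the study
print(f"10-fold CV accuracy: {scores.mean():.2%}")
```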
Inferential modeling
Contrary to non-inferential modeling, inferential modeling explains the relationships between the input features and the output feature. A fuzzy rule-based inference system is capable of describing how the independent environmental features are related to the dependent physiological response (phasic nSCR) feature. For this, we applied FURIA, which is a fuzzy rule-based classifier [17].
Unlike conventional rule-based classifiers, FURIA gives fuzzy rules [17]. FURIA produces fuzzy rules with the operators ≤, =, and ≥; the operators define clear conditions for a feature's association with a class label (e.g., "N" or "A"). FURIA also provides a range (e.g., x → y) indicating fuzziness in a feature's condition, which may be considered a soft boundary when associating a feature with a class label [17]. This ability was particularly useful in this study, since we wanted to observe the specific value ranges of the environmental features that corresponded to a participant's state of arousal. For instance, we needed to determine for which particular sound level range a participant experienced a state of arousal. Since FURIA fulfills this requirement, it was selected as the technique for the inferential analysis. The interpretation of the obtained rules is described in Section 4.
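FURIA ships as a Weka package rather than a Python estimator. A hedged sketch of reproducing the 10-fold CV through the third-party python-weka-wrapper3 bridge is shown below; the ARFF file name is hypothetical, the Weka package "fuzzyUnorderedRuleInduction" must be installed first, and the paper does not state that this particular tooling was used.

```python
import weka.core.jvm as jvm
from weka.core.converters import Loader
from weka.core.classes import Random
from weka.classifiers import Classifier, Evaluation

jvm.start(packages=True)  # needed so the FURIA package is on the classpath

data = Loader(classname="weka.core.converters.ArffLoader") \
    .load_file("compiled_dataset.arff")   # hypothetical file name
data.class_is_last()                      # 'N'/'A' label in the last column

furia = Classifier(classname="weka.classifiers.rules.FURIA")
evaluation = Evaluation(data)
evaluation.crossvalidate_model(furia, data, 10, Random(1))
print(evaluation.percent_correct)         # the paper reports ~70%

furia.build_classifier(data)
print(furia)                              # the fuzzy rules themselves
jvm.stop()
```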
Feature selection
Feature selection is a process to determine the ability of each input feature to predict the output.
Moreover, feature selection involves making a model using a subset of features and testing its predictive accuracy. We applied the backward feature elimination (BFE) method in this research for its ability to examine all possible combinations of feature subsets [22]. BFE starts with all features in a set (in this case, it begins with 9 features) to build and test the model. Subsequently, BFE iteratively eliminates features one by one while propagating high-accuracy feature subsets to the next iteration. Finally, BFE gives a list of subsets with their corresponding accuracies, from which a subset can be selected depending on the accuracy or the number of features required. In addition to REP-Tree, MLP [16] and SVM [7] were used for a more comprehensive analysis in BFE. Therefore, the feature selection result was an assessment of three different predictors. During the feature selection, at each iteration, BFE used 60% randomly selected samples for training and the remaining 40% of the samples to test the model.
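As a rough illustration (not the study's exact pipeline, which used a 60/40 split and three predictors), scikit-learn's SequentialFeatureSelector can mimic the backward elimination down to a four-feature subset.

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.svm import SVC

# Stand-in data with 9 features, as in the compiled dataset.
X, y = make_classification(n_samples=500, n_features=9, random_state=0)

# Backward elimination: start from all 9 features and drop them
# one by one, keeping the subset with the best CV accuracy.
selector = SequentialFeatureSelector(
    SVC(), n_features_to_select=4, direction="backward", cv=5)
selector.fit(X, y)
print(selector.get_support())  # boolean mask of the 4 surviving features
```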
Pattern discovery
In general, the primary aim of a self-organizing map (SOM) is to map m-dimensional data onto a 2-dimensional (2D) plane. The 2D plane of a SOM consists of a network of neurons (nodes). The network's nodes acquire the underlying properties of the input data samples (e.g., events in the environmental data). Moreover, a SOM projects similar data samples to a cluster center (a node in the SOM) according to the similarity (Euclidean distance) of the data sample to the node [18,37].
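A minimal sketch of such a 20 × 20 SOM using the third-party MiniSom library; the random stand-in data and the training parameters are assumptions, as the paper does not list them.

```python
import numpy as np
from minisom import MiniSom

rng = np.random.default_rng(0)
data = rng.random((6057, 9))   # stand-in for the 9-feature compiled data

som = MiniSom(20, 20, 9, sigma=1.5, learning_rate=0.5, random_seed=0)
som.train_random(data, 10000)

u_matrix = som.distance_map()  # inter-node distances (U-matrix, Fig. 9b)
node = som.winner(data[0])     # the cluster centre (node) of one sample
```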
SOM is an appropriate choice for this problem since it is tedious to define the number of clusters, especially when problems have complex relations between the features; SOM produced the clusters automatically (see Section 4.4). Additionally, to analyze patterns related to the geo-locations, the geo-location referenced mean physiological response across all participants, r_mean_i = (r_{x_i,y_i}^{p_1} + r_{x_i,y_i}^{p_2} + ... + r_{x_i,y_i}^{p_N})/N, was computed by matching the GPS location information (x_i: latitude, y_i: longitude) and aggregating the samples. The geo-location referenced mean physiological responses r_mean_i were computed to visually understand patterns in the participants' physiological responses in relation to the actual map of the neighborhood, as described in Section 4.4.
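A sketch of the geo-location aggregation with pandas; the column names are illustrative, and matching here is by exact coordinate pairs, whereas the study may have binned nearby GPS fixes.

```python
import pandas as pd

def mean_response_by_location(df: pd.DataFrame) -> pd.DataFrame:
    """Aggregate the per-window responses of all participants sharing
    a GPS location (x_i, y_i) and normalise the mean to [0, 1] for the
    map in Fig. 10."""
    out = (df.groupby(["latitude", "longitude"], as_index=False)["nscr"]
             .mean()
             .rename(columns={"nscr": "mean_response"}))
    r = out["mean_response"]
    out["mean_response"] = (r - r.min()) / (r.max() - r.min())
    return out
```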
Sensitivity analysis (non-inferential modeling)
First, a classifier (REP-Tree, described in Section 3) was trained for varying quantification rates, and its performance is shown in the curve plot [26] in Fig. 6. The model's performance improved as the quantification rates decreased (Fig. 6). The model's high predictability for smaller quantification rates is an indicator of the participants' strong sensitivity towards the changes in the urban environment. The model's performance for the smoothed EDA data (red squares) was better than its performance for the raw EDA signal (circles). Thus, the smoothed EDA data more accurately draw the association between a change in the environmental features and the participants' physiological states of arousal.
The results of the 10-fold CV training of the REP-Tree classifier for both binary and multiclass classification, for the dataset where the smoothed EDA data were quantified at a 5-second time-window, are shown in Table 2. The classifier's predictive accuracy was found to be 87% for the binary-class classification and 80% for the multiclass classification.
Sensitivity range analysis (inferential modeling)
The non-inferential model indicates that the participants' physiological responses are sensitive to the environmental changes. Therefore, we built an inferential model to understand how the environmental features influence the participants' physiological responses. A fuzzy rule-based inferential model was built using FURIA, whose parameter settings are mentioned in Table A.1. We adopted a binary-class classification of nSCR, where the nSCRs were categorized into two classes: normal physiological response, "N," and aroused physiological response, "A." The FURIA algorithm offered an average test accuracy of 70.23% after 10-fold CV training. Such accuracy is notably high for the complex problem of understanding humans' perception of their urban environmental conditions.
We analyzed the set of fuzzy rules generated by FURIA by segregating the rules between the participants' "N" and "A" classes. Fig. 7 is a visual interpretation of the obtained fuzzy rules for both classes "N" and "A." We interpreted and represented the FURIA rules in Fig. 7 to find the values (ranges of values) of the environmental features that (a) were linked to class "A," which indicates the participants' aroused physiological state, and (b) did not significantly influence the participants' aroused physiological state.
To validate the knowledge obtained from the visual interpretation of the fuzzy rules, the distributions of the environmental features were examined through histograms in Figs. 7b, 7d, 7f, 7h, 7j, and 7l. The visual interpretation and summarization of the rules for sound level in Fig. 7a and its corresponding distribution in Fig. 7b indicate that the participants' normal physiological responses match a particular sound level distribution. For example, the sound level distribution around 60 dB to 66 dB (Fig. 7b) corresponds to a normal physiological state (Fig. 7a). Furthermore, the participants had a tendency to exhibit an aroused physiological state when they experienced sound levels above 66 dB. This result indicates that loud sound levels correspond to increased participant arousal.
The result was similar for temperature, where temperatures greater than 21-22 °C were associated with an aroused physiological state (Fig. 7e). However, it can be observed that the samples in the dataset for temperatures above 22 °C were fewer than for temperatures below 22 °C (Fig. 7f), which gives us confidence that heat alone did not cause the physiological arousal of the participants. In Fig. 7i, the participants exhibited physiological arousal at darker locations (illuminance level below 580 lx).
Simultaneous impact of environmental features
Inferential modeling provided the values of the environmental features that were responsible for the normal and aroused physiological states. However, it is also essential to discover which of the environmental features have the strongest influence on the participants' physiological responses. Hence, we constructed a backward feature elimination (BFE) based feature selection framework and analyzed the obtained results to build a significance hierarchy of feature subsets (Fig. 8). A feature subset's significance was estimated from its ability to predict the "N" and "A" classes with high accuracy.
Fig. 8 is a significance hierarchy triangle of the feature subsets, where a subset's predictability reduces as the number of features in the subset decreases. Three predictors provided three feature selection result sets, and Fig. 8 is the compilation of the result sets from all three predictors. The MLP, REP-Tree, and SVM agreed on the feature subset temperature, humidity, illuminance, and Isovist area, where REP-Tree had the highest accuracy, followed by SVM and MLP. Therefore, temperature, humidity, illuminance, and Isovist area were noted as the most significant feature set, although the final choice is a matter of trade-off between accuracy and the number of features, as indicated in the hierarchy triangle (Fig. 8).
Patterns of perceptual variations
The predictive modeling confirmed the sensitivity of the participants' physiological responses towards dynamic environmental conditions. The fuzzy rule-based analysis described the relationship between the environmental features and the physiological response. Feature selection indicated the most significant environmental features. However, pattern discovery explains: (a) which participants were experiencing similar environmental conditions and what their responses were; (b) whether the participants' physiological responses for certain environmental conditions were similar; (c) the patterns of the environmental features that influence the participants' physiological arousal.
The compiled data (see Fig. 2) were analyzed using SOM. Fig. 9 is the result of automatic clustering from a trained SOM, where the 9-dimensional input data were mapped onto a 20 × 20 dimension 2D plane consisting of hexagonal nodes. Each node in the map acquired the property of a set of samples.
Fig. 9a shows the maps of the environmental features as feature matrices (F-matrices). On the feature matrix (F-matrix) of an environmental feature (e.g., sound level), the feature's values assigned to the F-matrix nodes correspond to the nodes on the SOM's unified distance matrix (U-matrix) in Fig. 9b and the label matrix (L-matrix) in Fig. 9c. Hence, the positions and values of the nodes in all the maps (matrices) in Fig. 9 are comparable to each other. More specifically, the U-matrix is the result of the F-matrices of the environmental features, and the L-matrix holds the corresponding dominant labels associated with the nodes. Therefore, to make sense of a pattern, we need to compare all the matrices with one another.
The U-matrix in Fig. 9b shows the clusters of similar data points. The nodes with small differences (in terms of Euclidean distance) are shown in dark blue, and the nodes with high differences are shown in bright yellow. In addition, the patches of nodes with similar colors, separated by lighter colors, indicate clusters of data samples. Moreover, the data samples corresponding to a cluster in the U-matrix share a commonality, and dissimilar data samples are further apart. It is therefore implied that the participants whose ID labels belong to a cluster experienced similar environmental conditions. The matrices together reflect the self-organization of the dataset, which could carefully be interpreted as the "cause" (Fig. 9a) and "effect" (consult Fig. 9b and Fig. 9c) of the dynamic and simultaneous environmental features with the participants' physiological responses.
On the U-matrix (Fig. 9b), a bright yellow patch separates itself from all the other node clusters. This distinct yellow spot is the result of a high concentration of a set of similar input samples, which in this case is due to the concentration of high illuminance values, as evident from the F-matrix for illuminance (Fig. 9a). Fig. 9c shows that at the exact same spot, the participants had an aroused physiological state (most of the nodes are colored blue) and the nodes were labeled with the participant IDs 8, 13, 23, and 29, indicating that all the participants exposed to extremely high illuminance also experienced an equally aroused physiological state.

In the pattern analysis, the mean physiological response across all participants was mapped onto the geographic locations along the path. The geo-location referenced mean physiological response was computed and normalized between 0 and 1. The geo-location referenced physiological responses highlighted specific locations on the neighborhood's map where the participants experienced an aroused physiological state (Fig. 10). The locations where, on average, all participants exhibited a high physiological arousal response are indicated in red, while low physiological arousal is indicated in yellow. The varying size of the dots on the map in Fig. 10 is proportional to the degree of the participants' physiological arousal.

Fig. 10: Geo-location referenced mean physiological responses across all participants. An animation of this graphic indicating a real-time simulation is available at [12].
Discussion
Through this research, we extracted patterns from the data gathered during a controlled study, where we asked participants to walk through an urban environment (Section 2.2). Our data analysis methods had the following dimensions: signal processing, multi-sensor information fusion, and knowledge mining using machine learning techniques. The sensor frequency unification and quantification led to the preparation of identically independent data samples of events and corresponding physiological responses. During the data processing phase, we categorized the physiological response data (EDA signals) into clean and erroneous signals (Section 3.1); EDA signal recording is susceptible to artifacts, and the suggested definition identifies an erroneous EDA signal. Finally, the quantification method segmented the continuous temporal data into regular time intervals of t seconds (the time-window size) and paired each event with a corresponding physiological response. In the SOM-based analysis, samples gathered under similar environmental conditions produced a similar physiological arousal state and were expected to fall into the same cluster or node on the map; for example, a cluster formed due to extremely high illuminance and another for low illuminance conditions (Fig. 9).
This indicates that a particular environmental condition influences most of the participants equally, and the majority of participants exhibited a similar physiological response state when experiencing similar conditions. Furthermore, because the participants walked at different speeds, the number of quantified events corresponding to each participant varied slightly. Therefore, the geo-location referenced normalized mean of the events was the best method to show the participants' average physiological responses on the map (Fig. 10). This map can be used to visually inspect the impact of urban features, such as street width, street type, traffic, and type of area (residential and industrial), and their potential impact on the participants' physiological response.
Challenges and opportunities
The methods developed for this investigation help reveal patterns from complex human-environment interactions. The analysis predominantly focused on improved quantification methods for physiological arousal level detection and a means to correlate the arousal level with environmental stimuli. This approach allows us to observe an increase in physiological arousal in response to specific environmental conditions (Section 5). The primary challenge of this study was the process of selecting the appropriate tuning parameters to quantify and evaluate the arousal label. For example, the accuracy of the methods (Fig. 6) varied depending upon the quantification rate. Similarly, the accuracy of the method depends on the procedure and threshold adopted for the nSCR level detection [6]. Moreover, we captured 9 features of a real-world dynamic situation; hence, an increased number of features may further improve the predictive model's accuracy.
Future studies can utilize the presented experimental design and quantification methodology. For instance, it can be extended to capture citizens' public transport commuting experience (physiological response while walking, waiting, and riding); and, for traffic safety, the method can potentially be applied to understand the physiological arousal patterns of vehicle riders while they ride through cities [10,34]. Moreover, the developed predictive model can be used to extrapolate potential citizens' arousal levels to a larger geographic area when combined with the Isovist values and measured environmental data beyond the selected path.
In this research, we recognized factors influencing human perception. To meet the aforementioned challenges, our findings suggest that employing a virtual-reality set-up could help reduce noise that may be induced by unknown factors. Additionally, our findings suggest that participant-specific (subjective) thresholding of skin conductance can be employed to mitigate these challenges.
Moreover, in the field of urban studies, it is crucial to understand how the built environment influences human behavior and perception. This question has been central to practice and research and poses a fundamental methodological problem, since it is especially difficult to (a) objectively measure perception and (b) deal with the multitude of dynamic environmental factors that prevent isolating the effect of pure urban form on human perception. As an answer to this problem, this research provides a major contribution by presenting and empirically testing a novel research framework for predicting and inferring the effects of planning decisions on human perception. In essence, the framework provides insights into how and why architecture and urban design influence human perception, which is particularly helpful for evaluating planning proposals and guiding design decisions. For this purpose, we adopt state-of-the-art mobile sensing technologies as well as machine learning methods that are specifically chosen and adapted for the needs of architecture and urban design research.
Conclusions
This research presented a specific methodology to evaluate a complex dataset from an experiment linking the physiological responses of 30 participants to environmental conditions. The measurements in the dataset came from seven sensors with differing frequencies and four additional geometric features. The proposed data quantification and multi-sensor information fusion methods linked participants' physiological state of arousal to environmental conditions. Four categories of machine learning techniques (non-inferential modeling, inferential modeling, feature selection, and clustering) revealed patterns in the dataset. The high accuracy of the non-inferential predictive model was evidence that the participants' physiological state is sensitive to changes in environmental conditions. The fuzzy rule-based inferential modeling results indicate that the occurrence of "normal" and "aroused" physiological conditions corresponds to specific values (and ranges of values) of each environmental feature, suggesting that the changes in the participants' physiological arousal state primarily occurred due to fluctuations in the environmental conditions. Feature selection showed that some environmental features, such as temperature, humidity, illuminance, and the field of view, were more dominant in their influence on participants' physiological response than sound level and dust. Pattern analysis from the self-organizing map indicated that, primarily, participants who experienced similar environmental conditions responded with a similar physiological arousal state. Finally, the geo-location referencing of the average physiological response across all participants produced a means to visually inspect how participants respond during the actual walk in relation to permanent urban features. The proposed data analysis framework revealed patterns from the complex spatio-temporal environmental and physiological data that impact our understanding of urban settings.
Figs. 3a, 3b, 3c, and 3d were considered for the data analysis. The EDA signals belonging to the two erroneous EDA profile types illustrated in Figs. 3e and 3f were discarded; in total, 10 EDA signals were discarded. The erroneous EDA signal types were classified as: (a) Type-1 error, when the EDA signal values only fluctuate between two values, i.e., the signal behaves like a step function and may also contain a significant amount of sensor loss (no sensor response recorded); (b) Type-2 error, when the majority of the sample values are zero (significant sensor-response loss), despite otherwise normal fluctuations (correct sensor response) in the EDA signal.
Fig. 3 :
Fig. 3: Signals in (a), (b), (c), and (d) are the most commonly found EDA signal profiles and were considered for the analysis. The most commonly found errors in signals are shown in (e) and (f).
Fig. 4 :
Fig. 4: Stationary-wavelet-transform-based smoothing. (a) Wavelet transform of an original EDA signal using the Haar wavelet, and smoothing by applying a threshold over the wavelet coefficients. (b) Original and smoothed EDA signal, with filtering of corrupt and unnecessary fragments.
Fig. 5 :
Fig. 5: (a) Timestamps indicating the start and end of a participant's walk during the study, illustrating the approach to quantifying a participant's physiological response and environmental experience data. (b) Timestamp and time-window marking of an EDA signal (physiological response) at every t seconds for the detection of arousal $r_i^{p_j}$ for $i = 1$ to $m_j$.
Fig. 6 :
Fig. 6: ROC graph of classification models on two categories of datasets, represented by two different shapes: squares and circles. Squares represent the dataset prepared with the output feature being the quantified smoothed EDA data; circles represent the dataset prepared with the output feature being the quantified original EDA data.
Fig. 7 :
Fig. 7: Visual interpretation of the fuzzy rules. The color red indicates the range for which the fuzzy rules find nSCR > 0, i.e., an indicator of an aroused physiological state. The color blue indicates the range for which the fuzzy rules find nSCR = 0, i.e., an indicator of a normal physiological state. The color white indicates a range of fuzziness. The color gray indicates a range for which the rules do not provide any conclusive information.
Fig. 8 :
Fig. 8: Hierarchy of feature importance. The symbol I* appears only in the REP-Tree-based feature selection. The feature set {T, R, A, I} appears in all three predictors' results.
Fig. 9 :
Fig. 9: Trained SOM results; node values in the maps are indicated by color: the lowest value is shown in dark blue and the highest value in bright yellow. (a) U-matrix: SOM clustering map. (b) F-matrix: maps for the environmental features, which were linearly scaled to a variance of 1.0 so that they have equal importance in clustering. (c) L-matrix: map of participant IDs and participants' physiological response state labels ("N" and "A").
Fig. 9c is
Fig. 9c is an L-matrix in which each node is labeled with a participant ID and the state of the physiological response. White nodes indicate a normal physiological response and blue nodes indicate an aroused physiological response. By comparing these matrices, one can discover relevant patterns. Comparing with the U-matrix in Fig. 9a, we can find that the clusters at the bottom-left and the top-left in Fig. 9b are the results of high values of sound and temperature and extremely low values of illuminance. These clusters, when compared to the L-matrix in Fig. 9c, indicate that the majority of participants responded with an aroused physiological state. Similarly, the cluster on the top-right is due to a combination of low values of dust and temperature; the corresponding L-matrix in Fig. 9c has the majority of nodes indicating a normal physiological state. Further, the F-matrix for isovist area in Fig. 9b shows that a high isovist-area value resulted in an aroused physiological state, as is also evident from the L-matrix in Fig. 9c. The L-matrix also indicates that participant IDs 16, 23, 24, 29, 32, and 35 experienced such a high isovist area and responded with a similar physiological state.
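For readers wishing to reproduce this style of analysis, a minimal SOM sketch follows. It uses the third-party minisom package as a stand-in (the text does not state which SOM implementation was used); the unit-variance scaling follows the caption of Fig. 9b.

import numpy as np
from minisom import MiniSom  # assumed stand-in implementation

def train_som(features: np.ndarray, grid=(10, 10), iters=5000):
    """features: (n_events, n_features) environmental measurements."""
    # Scale each feature to variance 1 so all features weigh equally.
    X = (features - features.mean(axis=0)) / features.std(axis=0)
    som = MiniSom(grid[0], grid[1], X.shape[1], sigma=1.0, learning_rate=0.5)
    som.train_random(X, iters)
    u_matrix = som.distance_map()          # cluster structure (U-matrix)
    winners = [som.winner(x) for x in X]   # node assignment per event
    return som, u_matrix, winners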
Table 1 :
Measured features in the study. | 2018-11-03T13:03:24.096Z | 2018-12-10T00:00:00.000 | {
"year": 2018,
"sha1": "713e96362f7e62901df4e3fdf3a181fd26f5d974",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1016/j.ins.2018.09.061",
"oa_status": "HYBRID",
"pdf_src": "Arxiv",
"pdf_hash": "00300493383864fc2bf8bd63f8e1e0cfa3eb3819",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
5081067 | pes2o/s2orc | v3-fos-license | HUGO: Hierarchical mUlti-reference Genome cOmpression for aligned reads
Background and objective Short-read sequencing is becoming the standard of practice for the study of structural variants associated with disease. However, with the growth of sequence data largely surpassing reasonable storage capability, the biomedical community is challenged with the management, transfer, archiving, and storage of sequence data. Methods We developed Hierarchical mUlti-reference Genome cOmpression (HUGO), a novel compression algorithm for aligned reads in the sorted Sequence Alignment/Map (SAM) format. We first aligned short reads against a reference genome and stored exactly mapped reads for compression. For the inexactly mapped or unmapped reads, we realigned them against different reference genomes using an adaptive scheme that gradually shortens the read length. Regarding the base quality values, we offer lossy and lossless compression mechanisms. The lossy compression mechanism for the base quality values uses k-means clustering, where the user can adjust the balance between decompression quality and compression rate. Lossless compression is produced by setting k (the number of clusters) to the number of distinct quality values. Results The proposed method produced a compression ratio in the range 0.5-0.65, which corresponds to 35-50% storage savings on the experimental datasets. The proposed approach achieved 15% more storage savings than CRAM and a compression ratio comparable to Samcomp (CRAM and Samcomp are two of the state-of-the-art genome compression algorithms). The software is freely available at https://sourceforge.net/projects/hierachicaldnac/ with a General Public License (GPL) license. Limitation Our method requires multiple reference genomes and prolongs the execution time due to the additional alignments. Conclusions The proposed multi-reference-based compression algorithm for aligned reads outperforms existing single-reference-based algorithms.
1 Background introduction for some classic encoding techniques Huffman coding: Huffman coding was developed by David A. Huffman [1]. It is a popular entropy-encoding algorithm used for lossless data compression. The key idea is to encode a source symbol (such as a character in a file) by constructing a variable-length code table in a particular way, based on the estimated probability of occurrence of each possible value of the source symbol.
Huffman coding expresses the most common source symbols using shorter bit strings than those used for less common source symbols. The resulting code is known as a "prefix-free code", meaning the bit string representing a particular symbol is never a prefix of the bit string representing any other symbol. Although Huffman's original algorithm is optimal for symbol-by-symbol coding (i.e., a stream of unrelated symbols) with a known input probability distribution, it is not guaranteed to be optimal when the symbol-by-symbol restriction is dropped, or when the probability mass functions are unknown or violate the independent and identically distributed (i.i.d.) condition.
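As a concrete illustration (our minimal sketch, not the HUGO implementation), the following builds a Huffman code table from symbol frequencies using a heap:

import heapq
from collections import Counter

def huffman_code(data: bytes) -> dict:
    """Return a {symbol: bitstring} prefix-free code table."""
    freq = Counter(data)
    if not freq:
        return {}
    # Heap entries: (frequency, tie-breaker, tree); leaves are symbols.
    heap = [(f, i, sym) for i, (sym, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)
        f2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, count, (left, right)))
        count += 1
    codes = {}
    def walk(node, prefix):
        if isinstance(node, tuple):        # internal node
            walk(node[0], prefix + "0")
            walk(node[1], prefix + "1")
        else:                              # leaf symbol
            codes[node] = prefix or "0"    # degenerate single-symbol case
    walk(heap[0][2], "")
    return codes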
Run-length encoding: Run-length encoding (RLE) is typically used for sequences in which the same data value occurs consecutively. It stores each run of data as a single data value and a count, rather than the original run. It performs well when compressing files that contain many runs of the same data value.
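A minimal RLE encoder/decoder pair (ours, for illustration):

def rle_encode(seq):
    out, i = [], 0
    while i < len(seq):
        j = i
        while j < len(seq) and seq[j] == seq[i]:
            j += 1
        out.append((seq[i], j - i))   # (value, run length)
        i = j
    return out

def rle_decode(pairs):
    return [v for v, n in pairs for _ in range(n)]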
Delta encoding: Delta encoding is a scheme that stores or transmits data as differences between sequential values rather than as complete values. It is also known as data differencing. It can significantly reduce redundancy when encoding files in which consecutive values differ only slightly.
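A minimal delta encoder/decoder sketch (ours; it assumes a non-empty integer sequence):

def delta_encode(values):
    # Keep the first value; store the rest as successive differences.
    return [values[0]] + [b - a for a, b in zip(values, values[1:])]

def delta_decode(deltas):
    out = [deltas[0]]
    for d in deltas[1:]:
        out.append(out[-1] + d)
    return out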
Dictionary coding - LZW: Dictionary coding, also known as substitution coding, is a class of lossless data compression algorithms that operate by searching for matches between the text to be compressed and a set of strings contained in a data structure (the 'dictionary') maintained by the encoder. When the encoder finds such a match, it substitutes a reference to the string's position in the data structure.
LZW is a representative dictionary-coding method published by Welch in 1984 [2] as an improved implementation of the LZ78 algorithm published by Lempel and Ziv in 1978 [3]. At each stage of compression, LZW gathers input bytes into a sequence until the next character would produce a sequence for which no code yet exists in the dictionary. The code for the sequence (without that character) is emitted to the output, and a new code (for the sequence with that character) is added to the dictionary.
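The classic LZW compressor can be sketched as follows (our illustration; real codecs also manage code widths and dictionary resets):

def lzw_compress(data: bytes):
    """Emit a list of integer codes; dictionary seeded with all single bytes."""
    dictionary = {bytes([i]): i for i in range(256)}
    next_code = 256
    seq, out = b"", []
    for byte in data:
        candidate = seq + bytes([byte])
        if candidate in dictionary:
            seq = candidate
        else:
            out.append(dictionary[seq])
            dictionary[candidate] = next_code
            next_code += 1
            seq = bytes([byte])
    if seq:
        out.append(dictionary[seq])
    return out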
Prediction by partial matching: Prediction by partial matching (PPM) [4] is an adaptive statistical data compression technique based on context modeling and prediction. PPM models use a set of previous symbols in the uncompressed symbol stream to predict the next symbol in the stream, reducing predictions to symbol rankings. The number of previous symbols, n, determines the order of the PPM model. If no prediction can be made based on all n context symbols, a prediction is attempted with n-1 symbols; this process is repeated until a match is found or no more symbols remain in the context.
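The context-fallback idea can be sketched as a toy order-n frequency predictor (ours; it omits the escape-probability machinery that real PPM coders feed into an arithmetic coder):

from collections import defaultdict, Counter

class ToyPPM:
    def __init__(self, order=2):
        self.order = order
        # contexts[k][ctx] counts symbols seen after context ctx of length k.
        self.contexts = [defaultdict(Counter) for _ in range(order + 1)]

    def update(self, history, symbol):
        for k in range(min(self.order, len(history)) + 1):
            ctx = tuple(history[-k:]) if k else ()
            self.contexts[k][ctx][symbol] += 1

    def predict(self, history):
        # Fall back from the longest available context down to order 0.
        for k in range(min(self.order, len(history)), -1, -1):
            ctx = tuple(history[-k:]) if k else ()
            counts = self.contexts[k].get(ctx)
            if counts:
                return counts.most_common(1)[0][0]
        return None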
2 Detailed description of encoding techniques for fields other than 'Sequence' and 'Quality value' QNAME: This field corresponds to the query template name. Many reads exhibit a common long subsequence in this field. We store only the number of each query name and, if an identical query name is found in the existing list, encode the difference between the query's local position and its existing readID. Otherwise, we store the flag '0' to indicate that a new query name is introduced. Since identical query sequences are close to one another, these small values have a non-uniform distribution, and we encode them with Huffman coding. RNAME: This field corresponds to the reference sequence name. The majority of the query reads share an identical reference name over the entire BAM/SAM file. We label all appearing reference names and encode the numbers using run-length encoding (RLE).
POS: This field is the 1-based leftmost position/coordinate of the clipped sequence and ranges from 0 to $2^{29} - 1$ with an increasing trend. We apply delta encoding (i.e., ∆ coding) followed by Huffman coding.
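Using the sketches above, the POS pipeline could look like this (illustrative only; real deltas need a wider alphabet than one byte):

positions = [1043, 1047, 1051, 1051, 1060]            # sorted 1-based coordinates
deltas = delta_encode(positions)                      # [1043, 4, 4, 0, 9]
table = huffman_code(bytes(d % 256 for d in deltas))  # toy byte-sized alphabet
bitstream = "".join(table[d % 256] for d in deltas)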
MAPQ: This field is the mapping quality (Phred-scaled), ranging from 0 to 255, and contains many consecutive repetitions. We apply Huffman coding, which here provides better compression efficiency than RLE.
CIGAR: This field contains alphanumeric values of any length and also exhibits long runs of the same value. We apply the Lempel-Ziv-Welch (LZW) coding method to this field.
MRNM: This field refers to the mate reference sequence name, where most positions turn out to be '=' (meaning the mate's reference sequence is the same as this alignment's) or '*' (if there is no mate). We use RLE for this field. OPTIONAL FIELDS: The optional fields are tab-separated, and each read shares a similar format. In our experiments, we used the bzip2 tool, which works well for these largely identical descriptions. 4 Available sources of genome sequences and software referenced in the paper | 2015-07-06T21:03:06.000Z | 2013-12-24T00:00:00.000 | {
"year": 2013,
"sha1": "b7c94a43ea4821e1c583fbd2a4d5afe547863231",
"oa_license": "CCBYNC",
"oa_url": "https://europepmc.org/articles/pmc3932469?pdf=render",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "dce351dfe2a5cb1ad92a9527007a9eebad6f31ea",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science",
"Medicine"
]
} |
221377124 | pes2o/s2orc | v3-fos-license | Floating-Point Multiplication Using Neuromorphic Computing
Neuromorphic computing describes the use of VLSI systems to mimic neuro-biological architectures and is also looked at as a promising alternative to the traditional von Neumann architecture. Any new computing architecture would need a system that can perform floating-point arithmetic. In this paper, we describe a neuromorphic system that performs IEEE 754-compliant floating-point multiplication. The complex process of multiplication is divided into smaller sub-tasks performed by components Exponent Adder, Bias Subtractor, Mantissa Multiplier and Sign OF/UF. We study the effect of the number of neurons per bit on accuracy and bit error rate, and estimate the optimal number of neurons needed for each component.
Introduction
Neuromorphic computing has recently become prominent as a possible future alternative to the traditional Von Neumann architecture (Zargham, 1996) of computing. Some of the problems commonly faced when working with classical CMOS-based Von Neumann machines are the limits on their energy efficiency, and also the absolute limits to speed and scaling on account of physical constraints (Mead, 1990; Koch and Segev, 2003). Though Moore's Law held for a long time and made possible rapid and sustained progress in hardware performance (Moore, 1965), it is now quite clear that this will not last. Hence, there is a need to look for alternative computing architectures, including neuromorphic computing (Youjie Li et al., 2017; Kim et al., 2015; Esser et al., 2016). The Von Neumann architecture also has an inherent problem, commonly called the "Von Neumann bottleneck," caused by the limited bandwidth between the CPU and the main device memory. Thus, newer architectures often avoid a wide gap between processing and main memory (Monroe, 2014; Moore, 1965).
Rapid growth in cognitive applications is one of the important motivations for interest in neuromorphic computing, which promises the ability to perform a high number of complex functions through parallel operation. Neural solutions are possible for machine learning problems that involve complex mathematical calculations (Eliasmith, 2013;Pastur-Romay et al., 2017). There have been some attempts to develop systems of computation on neuromorphic architectures (Koch and Segev, 2003;Gosmann and Eliasmith, 2016) but not much has been done in the specific area of numerical computations, particularly for floating-point arithmetic.
Floating-point arithmetic (IEEE, 2019) is ubiquitous in scientific as well as general computing. It is a basic operation that should be supported by any computational architecture. In this paper, we describe a system which can perform the multiplication of two IEEE 754-compliant floating-point numbers on a neuromorphic architecture. Our work is an extension of George et al. (2019), who showed how floating-point addition can be achieved using neuromorphic computing. We have designed a modular architecture which performs the conventional multiplication process (Erle et al., 2009), but instead of logic gates it uses groups of neurons as the basic unit. The architecture is easily scalable to double-precision floating-point numbers.
The system is designed on the basis of the Neural Engineering Framework (NEF), which, as the name suggests, provides a basic framework for developing neuromorphic systems. For the implementation, simulation, and testing of our design we used Nengo (Nengo, c; Bekolay et al., 2014), a graphical and scripting-based software package for simulating large-scale neural systems. To use Nengo, we define groups of neurons called ensembles and then form connections between them based on the computation (Nengo, a, b) to be performed.
The architecture is divided into four components: Exponent Adder, Bias Subtractor, Mantissa Multiplier, and Sign/Overflow-Underflow. The Exponent Adder uses a stage-wise adder that takes two 8-bit exponents and produces an 8-bit output along with a carry. The Bias Subtractor takes the output of the Exponent Adder, subtracts the bias, and produces an 8-bit output; the subtraction is done using the 2's-complement method. The Mantissa Multiplier is the core of our system design; it follows a stage-wise process, taking two 23-bit mantissa inputs and outputting a 23-bit resultant mantissa (see Section 3.3). Our system also indicates whether there is an overflow or underflow during the exponent addition (see Section 3.5). The rest of the paper is structured as follows. We first give a brief description of the IEEE 754 floating-point multiplication process in Section 2.1, and then briefly describe the Neural Engineering Framework (NEF) and its three basic principles of representation, transformation, and dynamics in Section 2.2. After this we explain the overall architecture in Section 3 using Figure 3. The performance analysis in Section 4 deals with the two metrics we have used to evaluate our system: the Mean Absolute Error (MAE) and Mean Encoded Error (MEE). In Section 4.1 we describe the relationship between the number of neurons and accuracy, and in Section 4.2 the relationship between the number of neurons and bit error. In Section 4.3 we describe how we estimated the optimal number of neurons required for all the ensembles, listed in Table 1. Finally, we present the conclusions of our work in Section 5.
Background
First we briefly discuss the floating-point multiplication process as per the IEEE 754 standard (Erle et al., 2009), and then briefly describe the Neural Engineering Framework (NEF), which we have used to design, simulate, and evaluate our system (Stewart, 2012). In Figure 2, the exponents $E_1$ and $E_2$ are added, and the bias value (127) is subtracted from the sum of $E_1$ and $E_2$; the difference is placed in the exponent field (see Figure 1). Each mantissa is 24 bits (23 bits + 1 hidden bit). The mantissas $M_1$ and $M_2$ are multiplied, giving a 48-bit output; if the 48th bit is 1, the result is normalized by right-shifting and incrementing the resultant exponent (if it is 0, nothing further needs to be done). To find the resultant mantissa, we take the first 24 bits (23 bits + 1 hidden bit). The resultant sign field is the XOR of the two sign bits $S_1$ and $S_2$.
IEEE 754 floating-point multiplication
For a better understanding of the above algorithm, see Yi and Ding (2009).
Neural Engineering Framework
The Neural Engineering Framework (NEF) (Stewart, 2012) is a computational framework used for mapping computations onto biological networks of spiking neurons. It provides a general way to generate circuits with analytically determined synaptic weights that provide the desired functionality. NEF rests on three principles: representation, transformation, and dynamics (Nengo, c; Eliasmith and Anderson, 2002). Using these principles we can construct complex neural models.
Representation
Neural representations are defined by the combination of nonlinear encoding and weighted linear decoding (we use the notation given by Stewart (2012)). If $x$ is the value represented by a neural ensemble and $e_i$ is the encoding vector for which neuron $i$ fires most strongly, then the activity $a_i$ of each neuron can be represented as

$$a_i = G[\alpha_i (e_i \cdot x) + b_i],$$

where $G$ is the neural non-linearity, $\alpha_i$ is the gain parameter, and $b_i$ is the constant background bias current for the neuron. Given the activities, the value of $x$ can be estimated by finding linear decoders $d_i$, with $\hat{x} = \sum_i a_i d_i$.
Finding the decoding weights $d_i$ can be cast as a least-squares minimization problem: $d_i$ is the set of weights that minimizes the difference between $x$ and its estimate $\hat{x}$ (Stewart, 2012).
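A minimal numpy sketch of this decoder computation (our restatement of the standard NEF least-squares step; the regularization constant is an assumed detail):

import numpy as np

def compute_decoders(A: np.ndarray, X: np.ndarray, reg: float = 0.1):
    """A: (n_samples, n_neurons) activities over sample points;
    X: (n_samples, d) represented values. Returns (n_neurons, d) decoders."""
    n = A.shape[1]
    G = A.T @ A + reg * n * np.eye(n)   # regularized Gram matrix
    return np.linalg.solve(G, A.T @ X)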
Transformation
Section 2.2.1 shows how to encode and decode a vector from the distributed activity of a population of neurons. To perform computation, these neurons need to be connected and information transferred from one group of neurons to another; this is done via synaptic connections. In other words, we want the connections to compute functions, and transformation is used to approximate these functions (Stewart, 2012). Transformation is another weighted linear decoding: for approximating a function $f(x)$, the decoding weights $d_i^{f(x)}$ are chosen to minimize the difference between $f(x)$ and the estimate $\widehat{f(x)} = \sum_i a_i(x)\, d_i^{f(x)}$. In general, the more non-linear and discontinuous the function is, the lower the accuracy of its computation. Accuracy also depends on other factors such as neuron properties, the number of neurons, and the encoding method. The NEF uses the same trick seen in support vector machines (Cristianini and Shawe-Taylor, 2000) to allow complex functions to be computed in a single set of connections, through the choice of $e_i$, $\alpha_i$, and $b_i$. The function $f(x)$ is constructed as a linear sum of neuron tuning curves, so a wider variety of tuning curves leads to better function approximation (Stewart, 2012).
Dynamics
The dynamics of neural systems can also be modeled in NEF using control-theoretic state variables. Moreover, NEF provides a direct method for computing dynamic functions of the form

$$\dot{x} = F(x) + G(u),$$

where $x$ is the value being represented, $u$ is some input, and $F$ and $G$ are arbitrary functions.
System Architecture
We have designed a system that performs floating-point multiplication according to the IEEE standard (IEEE, 2019). Figure 3 illustrates the system architecture. The two inputs are represented as $(S_1, M_1, E_1)$ and $(S_2, M_2, E_2)$ and the output is represented as $(S_{out}, M_{out}, E_{out})$, where $S_i$ denotes the sign bit, $M_i$ the mantissa bits, and $E_i$ the exponent bits, for $i \in \{1, 2, \mathrm{out}\}$. This representation follows the IEEE 754 32-bit floating-point standard (IEEE, 2019). Each of the components is described in the following subsections.
Exponent Adder
As shown in Figure 3, the Exponent Adder takes three inputs: $E_1$, $E_2$, and a normalization bit produced by the Mantissa Multiplier (see Section 3.3). It adds the 8-bit values $E_1$ and $E_2$ together with the normalization bit (as $C_{in}$), producing an 8-bit output $E'$ and a carry bit $C_{out}$. To implement this stage-wise addition, we construct a network that takes two inputs (the corresponding bits of the two exponents, i.e., $a_i$ and $b_i$, where $0 \le i \le 7$) and represents them using two ensembles, say ensembles A and B. These two ensembles are then connected through synaptic connections to a third ensemble, say C, which then represents the sum of A and B. The adder is implemented in the same way as in prior literature (George et al., 2019; Nengo, a). The $C_{out}$ bit produced by the Exponent Adder is used in the calculation of overflow and underflow (see Section 3.5).
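For reference, the bitwise logic that such a stage-wise (ripple-carry) adder realizes can be sketched in plain Python; this is the arithmetic the ensembles mimic, not Nengo code:

def ripple_add(a_bits, b_bits, carry_in=0):
    """a_bits, b_bits: LSB-first lists of 0/1 of equal length.
    Returns (sum_bits, carry_out)."""
    out, carry = [], carry_in
    for a, b in zip(a_bits, b_bits):
        out.append(a ^ b ^ carry)
        carry = (a & b) | (a & carry) | (b & carry)
    return out, carry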
Mantissa Multiplier
The Mantissa Multiplier component is the core of our system. It is a stage-wise process; Figure 5 shows its working. We use an AND ensemble and adders as the building blocks for multiplication (see Figure 4). The AND ensemble is used to implement neuromorphic AND logic; its encoding scheme is given in (10). In the AND ensemble we connect two inputs: if both inputs are 1, their sum exceeds 1.5 and the output is set to 1; otherwise it is 0. The working and connections of each block at every stage are described below in detail, taking two mantissas A and B (a plain-Python reference sketch of the arithmetic follows this description): • Each block j of stage i is given four inputs: $A_i$, $B_j$, the sum $s_{in}$ produced by block (j + 1) of stage (i − 1), and the carry $c_{in}$ from block (j − 1) of stage i, where $0 \le i, j \le 23$.
• As shown in Figure 5, the last block of each stage i takes the $c_{out}$ of the previous stage's last block as its $s_{in}$.
• The AND ensemble of each block of every stage performs an AND operation on $A_i$ and $B_j$ and outputs $A_i B_j$.
• The adder of each block performs a 3-bit addition of $A_i B_j$, $s_{in}$, and $c_{in}$ and produces $s_{out}$ and $c_{out}$ (George et al., 2019; Nengo, a). • The $s_{out}$ and $c_{out}$ produced as outputs are fed as inputs to the next stage and the next block, respectively.
The first block of every stage is given $c_{in} = 0$. The output obtained at each stage ensemble is encoded and fed to the next stage ensemble as input; encoding the output at each stage helps to filter and boost the signal. At each stage, the first block's $s_{out}$ represents an output bit of the mantissa, as shown in Figure 5. At the end of this process we get a 48-bit product. If the 48th bit is 1, we set the normalization bit and right-shift the product by one, which results in incrementing the exponent by one (see Section 3.2). The resultant product is in the 1.M form as per the IEEE standard; we take the first 23 bits of M and store them as the resultant mantissa $M_{out}$.
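A plain-Python reference for this stage-wise arithmetic (our sketch of the shift-and-add scheme the ensembles implement, reusing ripple_add from the Exponent Adder sketch above):

def mantissa_multiply(a_bits, b_bits):
    """a_bits, b_bits: LSB-first 24-bit mantissas (hidden bit included).
    Returns the LSB-first 48-bit product."""
    product = [0] * (len(a_bits) + len(b_bits))
    for i, a in enumerate(a_bits):
        # Stage i: AND of bit a with every bit of B, shifted left by i.
        partial = [0] * i + [a & b for b in b_bits] + [0] * (len(a_bits) - i)
        product, carry = ripple_add(product, partial)
        assert carry == 0   # widths are chosen so no overflow occurs
    return product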
Bias Subtractor
As shown in Figure 3, this component subtracts the bias from the result of the exponent addition. The subtraction is done using the 2's-complement method (Lilja and Sapatnekar, 2005): we take the 2's complement of the bias and then perform an addition. To compute the 2's complement, we design a converter that takes the 8-bit bias, represents it using a neural ensemble, forms the 1's complement by flipping its bits, and then uses the 8-bit adder to add 1 to the 1's complement of the bias. The final output is stored as the resultant exponent $E_{out}$.
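The same reference logic for the bias subtraction (ours, again reusing ripple_add):

def subtract_bias(e_bits, bias=127):
    """e_bits: LSB-first exponent sum. Subtracts the bias via 2's complement:
    flip the bias bits, add 1, then add the result to e_bits."""
    bias_bits = [(bias >> k) & 1 for k in range(len(e_bits))]
    ones_complement = [1 - b for b in bias_bits]
    neg_bias, _ = ripple_add(ones_complement, [1] + [0] * (len(e_bits) - 1))
    result, _ = ripple_add(e_bits, neg_bias)   # final carry discarded (mod 2^8)
    return result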
S_out and OF/UF
This component computes the $S_{out}$ bit of the output along with an OF/UF (overflow/underflow) flag, which can then be used for rounding. It computes the output sign bit $S_{out}$ by performing a neuromorphic XOR operation on the two sign bits $S_1$ and $S_2$ (George et al., 2019). Overflow is indicated by setting the OF/UF flag to 1 if a carry is found during the exponent addition.
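Tying the components together, the reference arithmetic can be sanity-checked as follows (our sketch, using the helper functions above; it assumes normalized inputs and ignores rounding of the truncated mantissa):

import struct

def float_to_fields(x):
    bits = struct.unpack('>I', struct.pack('>f', x))[0]
    sign, exp, frac = bits >> 31, (bits >> 23) & 0xFF, bits & 0x7FFFFF
    mant = frac | (1 << 23)                       # restore the hidden bit
    return sign, exp, [(mant >> k) & 1 for k in range(24)]

def reference_multiply(x, y):
    s1, e1, m1 = float_to_fields(x)
    s2, e2, m2 = float_to_fields(y)
    prod = mantissa_multiply(m1, m2)              # 48 bits, LSB-first
    norm = prod[47]                               # normalization bit
    e_sum, of_flag = ripple_add([(e1 >> k) & 1 for k in range(8)],
                                [(e2 >> k) & 1 for k in range(8)],
                                carry_in=norm)
    e_out = subtract_bias(e_sum)
    frac = prod[24:47] if norm else prod[23:46]   # 23 bits below the hidden bit
    return s1 ^ s2, e_out, frac, of_flag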
Observations and Results
We simulated the individual components of the system and integrated them to arrive at a fully functional IEEE floating-point multiplier. We probed the outputs of each component at a time interval of 10 ms and computed the errors in each of them. We used the following two metrics for evaluating the performance of each component.
Mean Absolute Error:

$$\mathrm{MAE} = \frac{\sum \lvert \text{Computed value} - \text{Actual value} \rvert}{\text{number of values}}$$

The Mean Absolute Error is the measure of the absolute difference between the actual value and the value computed by our system, averaged over all the bits. In our case, MAE arises from approximating a discontinuous function using the NEF, plus noise and randomness in the spiking neurons.
Mean Encoded Error:

$$\mathrm{MEE} = \frac{\sum \lvert \text{Actual bit} \oplus \text{Encoded value} \rvert}{\text{number of bits}}$$

We encoded the output value of each component and compared it with the actual bit value; in other words, we calculated the Hamming distance between the encoded bit values and the actual bit values and averaged it over all the bits. We varied the number of neurons from 100 to a maximum of 800 per bit and observed the accuracy across all components. We observed that the accuracy initially increases with the number of neurons, but beyond some threshold number of neurons the increase is no longer significant. In the Mantissa Multiplier component we can see that accuracy increases rapidly until the number of neurons reaches 300; after that there is no significant improvement.
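In numpy terms, the two metrics above reduce to (our restatement):

import numpy as np

def mae(computed, actual):
    return np.mean(np.abs(np.asarray(computed) - np.asarray(actual)))

def mee(encoded_bits, actual_bits):
    # Hamming distance averaged over all bits.
    return np.mean(np.asarray(encoded_bits) != np.asarray(actual_bits))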
Bit error vs. number of neurons
For the Mantissa Multiplier component we observed that the bit error is high when the number of neurons is very low: when the number of neurons is below 200, we got 1 bit error out of 48 bits, which is roughly equivalent to 2%. After increasing the number of neurons to 300 we get no bit errors. For the Exponent Adder and Bias Subtractor we get no bit errors even with fewer than 200 neurons.
Total number of neurons
We observed in Section 4.1 that accuracy increases with an increase in the number of neurons. We estimated the optimal number of neurons required for all ensembles, as listed in Table 1 (e.g., the Sign and OF/UF component requires 100 neurons).

5 Conclusion

In this paper we describe an approach to building an IEEE 754 standard floating-point unit using neuromorphic hardware with spiking neurons. Such devices can mimic aspects of the brain's structure and may be an energy-efficient alternative to the classical Von Neumann architecture, and a neuromorphic floating-point unit is a critical step in developing an alternative, neuromorphic CPU architecture. Our architecture comprises the complete floating-point multiplication process. The most complex part of the process is the Mantissa Multiplier, which we have realized successfully by using stage-wise multiplication and a robust encoding scheme. The architecture is also easily scalable to double-precision floating-point numbers. We check for the presence of overflow and underflow errors, which can then be handled separately. We have studied the effect of the number of neurons on accuracy and bit error. Finally, we derive the optimal number of neurons required for each component, giving an indication of the hardware resources required to implement this approach.
"year": 2020,
"sha1": "b09f303e78f6ea9f94559c6b17649649687c5d1c",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "b09f303e78f6ea9f94559c6b17649649687c5d1c",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
10125016 | pes2o/s2orc | v3-fos-license | Shot noise from action correlations
We consider universal shot noise in ballistic chaotic cavities from a semiclassical point of view and show that it is due to action correlations within certain groups of classical trajectories. Using quantum graphs as a model system we sum these trajectories analytically and find agreement with random-matrix theory. Unlike all action correlations which have been considered before, the correlations relevant for shot noise involve four trajectories and do not depend on the presence of any symmetry.
One of the most prominent methods to describe spectral and transport properties of ballistic quantum systems with classically chaotic dynamics relies on the semiclassical representation of the Green's function in terms of classical trajectories. For closed systems this leads to Gutzwiller's trace formula [1], which expresses the oscillating density of states as a sum over periodic orbits. For open systems, the semiclassical theory of chaotic scattering [2,3], and in particular its applications to electronic transport through mesoscopic devices [4], are based on this approach. The power of this method as compared, e.g., to random-matrix theory (RMT) [5,6] is its potential to account for system-specific details. Its main drawback lies in the difficulty of handling the resulting sums over huge sets of classical trajectories. Until recently the only method to deal with this problem was Berry's diagonal approximation [7], which neglects any nontrivial correlations between the trajectories. As a consequence, many interesting phenomena such as weak localization, universality in spectral statistics and in conductance fluctuations, or the suppression of shot noise cannot be described properly. Although the role of correlations between the actions of classical orbits has been appreciated for a long time [8-10], there is only one special case where they can be accounted for explicitly: Sieber and Richter recently calculated the leading-order weak-localization corrections from correlations between specific orbit pairs [11]. This result stimulated further intense research [12,13] and it is clearly a very promising approach. However, as mentioned above, weak localization is just one among a variety of phenomena for which action correlations are relevant but which cannot be accounted for by the orbit pairs considered in [11-13].
In this paper we address shot noise in ballistic mesoscopic conductors, which is an important source of (experimentally accessible) information about the dynamics in such systems [18-21]. We identify the action correlations which are responsible for shot noise and explain how, due to these correlations, the universal result of RMT [5,14-20] can be recovered from a single system without ensemble average. The relevant correlations are fundamentally different from all those considered previously [8-13] because they involve four instead of just two classical trajectories. Moreover, they do not depend on the presence of symmetries. We emphasize that action correlations are no small correction: they are needed to understand shot noise to leading order. We will perform all explicit calculations in a specific model system, the quantum graph of Fig. 1b. Quantum graphs (networks) have a long record as models for electronic transport (see [22] and Refs. therein). Since the pioneering work of Kottos and Smilansky they are also established in quantum chaos [23-25,12]. They are particularly suitable for our purpose, as the representation in terms of classical trajectories is exact and also the analogue of action correlations amounts to exact degeneracies. Previous studies showed that despite these analytical simplifications the mechanism and the role of correlations between classical trajectories are equivalent to those in other systems such as billiards [12].
Consider a chaotic cavity with two attached waveguides supporting $N_1$ and $N_2$ transversal modes, respectively (Fig. 1a). For small bias voltage and temperature, negligible electron interactions, and fully coherent dynamics, all information about electron transport through this system is contained in the scattering matrix at the Fermi energy,

$$S = \begin{pmatrix} r & t' \\ t & r' \end{pmatrix}. \qquad (1)$$

Shot noise represents temporal current fluctuations due to the discreteness of the electron charge [26]. At zero temperature the average power of the noise can be expressed in terms of the transmission matrix $t$ as [5,27]

$$P = 2e|V|\, G_0\, \mathrm{Tr}\, t t^{\dagger}(1 - t t^{\dagger}), \qquad (2)$$

while the assumption of uncorrelated electrons yields $P_{\mathrm{Poisson}} = 2e|V|\,G$. Here $G = G_0\, \mathrm{Tr}\, t t^{\dagger}$ denotes the conductance, $e$ is the electron charge, $V$ the voltage, and $G_0 = 2e^2/h$ the conductance quantum. RMT yields [5]

$$\langle P \rangle = 2e|V|\, G_0\, \frac{N_1^2 N_2^2}{(N_1+N_2)^3}, \qquad (3)$$

and for the conductance $\langle G \rangle = G_0\, N_1 N_2/(N_1+N_2)$. Each result comes with a weak-localization correction which is small ($\delta P \ll P$, $\delta G \ll G$) for large $N_1, N_2$ and will not be considered here. For $N_1 = N_2$ the Fano factor $F = P/P_{\mathrm{Poisson}} = 1/4$ is obtained.
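To make the quoted value explicit (a one-line worked step of ours, not part of the original derivation): dividing the two RMT averages gives the Fano factor

$$F = \frac{\langle P \rangle}{P_{\mathrm{Poisson}}} = \frac{N_1 N_2}{(N_1+N_2)^2} = \frac{1}{4} \quad \text{for } N_1 = N_2.$$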
We will show that Eq. (3) can be recovered semiclassically. To this end, the element $t_{n_2 n_1}$ of the transmission matrix is expressed as a sum over all classical trajectories [30] connecting the incoming mode $n_1$ to the outgoing mode $n_2$,

$$t_{n_2 n_1} = \sum_p A_p\, e^{i S_p/\hbar}, \qquad (4)$$

where $S_p$ denotes the classical action and $A_p$ is an amplitude related to the stability of the trajectory. While for the conductance we have to evaluate a sum over pairs of trajectories (Eq. (5)), the shot noise also involves a term combining four classical paths,

$$\mathrm{Tr}\,(t t^{\dagger})^2 = \sum_{m_1,n_1,m_2,n_2} \sum_{p,q,r,s} A_p A_q^{*} A_r A_s^{*}\, e^{i(S_p - S_q + S_r - S_s)/\hbar}. \qquad (6)$$

Here, the trajectories p, q, r, s connect two incoming to two outgoing modes as shown in Fig. 2a. As Eqs. (5), (6) describe one particular system rather than an ensemble, the average $\langle \cdot \rangle$ is to be taken over an energy window.
It should be small enough to keep the classical dynamics, and in particular the amplitudes $A_p$, essentially unchanged. Nevertheless, in the semiclassical limit $\hbar \to 0$ the phase factor is rapidly oscillating, and only those orbit combinations for which the action changes are correlated survive the averaging.
In particular, setting $p = q$ in Eq. (5), the phase drops out and we are left with a sum over the classical probabilities $|A_p|^2$. This is the diagonal approximation [7]. Provided that the dwell time inside the open cavity is large compared to the time needed for equidistribution over the available phase space, the probability is the same for all outgoing modes and $G_{\mathrm{RMT}}$ is exactly recovered [4]. Hence, the contribution from other pairs of correlated trajectories that might exist must vanish, although no explicit demonstration of this fact has been given up to now. In the presence of time-reversal symmetry the above remains valid to leading order in the mode number $N$.

FIG. 2: (a) shows schematically four classical trajectories p, q, r, s connecting two incoming modes $m_1, n_1$ to two outgoing modes $m_2, n_2$ such that the diagram contributes to the semiclassical approximation of Eq. (2). A contribution to Eq. (6) results only if the trajectories are correlated such that the action difference between p, r (full lines) and q, s (dashed lines) remains small for varying energy. (b) and (c) show the simplest configurations where this is the case: two trajectories are pairwise equal (diagonal approximation). (d) shows the configuration which completely accounts for the universal shot noise in leading order.
For shot noise the diagonal approximation has two analogues (Fig. 2b,c): we can have (i) $p = q$, $r = s$ for $m_1 = n_1$ and (ii) $p = s$, $r = q$ for $m_2 = n_2$. In both cases no phases are left in Eq. (6) and the remaining summation is over two independent classical trajectories $p, r$, with the only constraint that they begin or end at the same mode, respectively. Proceeding as in the diagonal approximation to the conductance, we obtain the two diagonal contributions, Eqs. (7) and (8), whose sum equals $\mathrm{Tr}\, t t^{\dagger}$ [31]. Combining these two results we find, according to Eq. (2), that $\mathrm{Tr}\, t t^{\dagger}(1 - t t^{\dagger})$ vanishes (Eq. (9)); this means that within the diagonal approximation there is no shot noise. This is no surprise: the diagonal approximation reduces the quantum to the classical problem, and since classical dynamics is deterministic there is no uncertainty whether an incoming electron is transmitted or not, and hence no noise [18,29]. On the other hand, Eq. (9) is quite remarkable, as it means that within the semiclassical approximation shot noise is entirely due to nontrivial correlations between different trajectories.
What is the general mechanism for such correlations? Previous research [9-13] showed that pairs of trajectories have correlated actions if they explore the same (or symmetry-related) parts of phase space with a different itinerary. In terms of symbolic dynamics, the code words of the two orbits are composed of the same sequences, in permuted order. The analogy to diagrammatic perturbation theory and some recent results [11,12] further suggest that the importance of the correlations decreases with the growing number of sequences needed to represent the code of the trajectories: in the diagonal approximation to the conductance the codes are equal and the result is correct to leading order, orbit pairs composed of two loops give the next-to-leading-order correction, etc.
In the case of shot noise we have exhausted the diagonal approximation and therefore consider trajectories p, q, r, s whose codes can all be represented in terms of two subsequences. Inspection shows that the only option is the diagram of Fig. 2d. Indeed, the phase in Eq. (6) will almost vanish for such contributions, since the combination of p, r (full lines) almost coincides with the combination of q, s (dashed lines), so that the respective actions cancel. A remaining small total action difference comes from the different behaviour of the trajectories inside the crossing region. In this respect the correlated trajectories of Fig. 2d are very similar to those giving rise to weak-localization effects [11-13], i.e., the methods developed there for various specific systems should allow for a straightforward generalization to shot noise.
In the remainder of this paper we treat one of those systems explicitly, namely the quantum graph shown in Fig. 1b. The closed version of this graph consists of a central vertex with valency $B$ and $b = 1 \ldots B$ attached bonds with incommensurate lengths $L_b$. Following the standard quantization [23], the dynamics of a particle with wavenumber $k = (2mE)^{1/2}/\hbar$ is represented by a $B \times B$ bond-scattering matrix $\Sigma_{bb'}(k) = \sigma_{bb'}\, e^{2ikL_b}$, containing energy-dependent phases $2kL_b$ from the free motion on the bonds and complex amplitudes describing the scattering at the central vertex. A basic requirement is unitarity (current conservation); otherwise $\sigma$ can be chosen according to the physical situation. For simplicity we set $\sigma$ such that all classical transition probabilities are equal, $|\sigma_{bb'}|^2 \equiv 1/B$. This model was first considered by Tanner [25], who showed numerically that its spectral statistics follows RMT. We open the graph by extending $N = N_1 + N_2$ bonds to infinity and model a two-channel geometry by considering $N_1$ ($N_2$) of these leads as the modes in the left (right) contact (Fig. 1b). For fixed $N_1, N_2$ we will consider the limit $B \to \infty$ in order to meet the condition of a long dwell time, which was already mentioned in connection with the diagonal approximation. Within the leading order in $B$ we also neglect lower-order corrections in the mode numbers $N_1, N_2$ in order to compare our result to Eq. (3). For a graph, the $N \times N$ unitary scattering matrix Eq. (1) can be expressed in terms of subblocks of the bond-scattering matrix $\Sigma$ via $S = \Sigma_{LL} + \Sigma_{LG}(I - \Sigma_{GG})^{-1}\Sigma_{GL}$ [23]. Here $L = L_1 \cup L_2$ denotes the set of $N = N_1 + N_2$ leads and $G$ comprises the $B - N$ bonds inside the graph. Expanding the Green's function of the internal part, $(I - \Sigma_{GG})^{-1}$, into a geometric series we arrive at Eq. (4), which is in the case of graphs an identity rather than a semiclassical approximation. The sum is over all trajectories (= bond sequences) $p = [n_1 p_1 \ldots p_t n_2]$ connecting the lead $n_1 \in L_1$ to the lead $n_2 \in L_2$ via an arbitrary number $t \ge 0$ of internal bonds $p_j \in G$. The action is related to the total length of the trajectory, $S_p/\hbar = k L_p$, where $L_p = \sum_{j=1}^{t} L_{p_j}$. Finally, the amplitude is given as $A_p = \sigma_{n_1 p_1} \sigma_{p_1 p_2} \cdots \sigma_{p_t n_2}$, such that the classical probability of the trajectory is $|A_p|^2 = 1/B^{t+1}$. Consequently, summing these classical probabilities over all trajectories between two fixed leads (Eq. (12)), we do indeed recover the diagonal approximation Eqs. (7)-(9). Next we consider trajectories p, q, r, s which are composed of four sequences a, b, c, d as shown in Fig. 2d, and assume that each of these sequences has a length $t \ge 1$. In order to avoid overcounting we have to ensure that for any given set p, q, r, s the definition of a, b, c, d is unique. Potential problems arise if all four trajectories coincide in the crossing region for one (or more) steps: $p = [a\gamma c]$, $q = [b\gamma c]$, $r = [b\gamma d]$, $s = [a\gamma d]$. It is a matter of taste whether $\gamma$ in this situation is considered as part of a and b or of c and d. We use the first representation and enforce it by the restriction $c_i \ne d_i$ (the subscripts i/f are used for the initial/final bond in a, b, c, d).
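As a worked check of this diagonal-approximation step (our own one-line derivation, assuming only the uniform transition probabilities $1/B$ stated above): a trajectory with $t$ internal bonds has probability $1/B^{t+1}$, and there are $(B-N)^t$ such bond sequences between two fixed leads, so

$$\sum_p |A_p|^2 = \sum_{t=0}^{\infty} \frac{(B-N)^t}{B^{t+1}} = \frac{1}{B}\cdot\frac{1}{1-(B-N)/B} = \frac{1}{N},$$

independent of the choice of leads, which is precisely the equidistribution needed for the diagonal approximation.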
Returning to Eq. (6), we note that the actions of p, r and q, s cancel exactly, such that the phase factor is absent [32]. As in Eq. (12), we can perform the summation over all internal bonds of the subsequences a, b, c, d and also over the leads $m_1, n_1, m_2, n_2$. Only the amplitudes from transitions right at the intersection of a, b, c, d do not combine into classical probabilities and must be considered explicitly. We obtain Eq. (14). To perform this calculation we have repeatedly used Eq. (10) in a form which allows us to transfer summations from the graph G to the leads. Moreover, we used Eq. (11), which means that the two sums in the second line of Eq. (14) yield only a negligible correction $O(B^{-1})$, because the numbers of terms are $N^3(N-1)$ and $N(N-1)(B-N)$, respectively. It is the first line in Eq. (14) which gives the dominant contribution. With exactly the same methods we can consider the contributions from special cases of the diagram in Fig. 2d where the length of one of the subsequences a, b, c, d vanishes: for vanishing a or b we obtain an additional contribution, while for vanishing c or d no contribution results. We have checked that our result remains unchanged if we substitute in Eq. (2) $\mathrm{Tr}\, tt^{\dagger}(1-tt^{\dagger}) = \mathrm{Tr}\, tt^{\dagger} rr^{\dagger}$. Quantum mechanically this is just a consequence of unitarity, but within the semiclassical approach it is a nontrivial result, since entirely different trajectories contribute and unitarity is restored only if all relevant correlations between them are properly accounted for.
We are grateful to O. Agam and H. Schomerus for stimulating our interest in the problem.
FIG. 1 :
FIG. 1: (a) A chaotic cavity with two attached waveguides and a classical trajectory contributing to conductance and shot noise.(b) The quantum graph used to model this situation.The transversal modes of the waveguides correspond to the infinite leads attached to the graph (bold lines), while the internal system is represented by many finite bonds.
Finally we have

$$\mathrm{Tr}\,(tt^{\dagger})^2 = \frac{N_1 N_2}{N} - \frac{N_1^2 N_2^2}{N^3},$$

and substitution of this result into Eq. (2) shows that we have indeed reproduced the RMT result from correlated classical trajectories.
"year": 2003,
"sha1": "33b433bde54a43b451087d7ea67c6debe5c31e31",
"oa_license": null,
"oa_url": "https://arxiv.org/pdf/cond-mat/0304265",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "33b433bde54a43b451087d7ea67c6debe5c31e31",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Medicine"
]
} |
159137768 | pes2o/s2orc | v3-fos-license | Growth effect of trade and investment in Sub-Saharan Africa countries: Empirical insight from panel corrected standard error (PCSE) technique
Received: 05 September 2018 Accepted: 07 March 2019 First Published: 17 April 2019 *Corresponding author: Fredrick Ikpesu, Pan-Atlantic University, Lagos, Nigeria E-mail: fikpesu@pau.edu.ng Reviewing editor: Christian Nsiah, Department of Business Administration, Baldwin Wallace University, USA Additional information is available at the end of the article
Abstract: The pre-eminence of trade and investment in the economic prosperity of developed and developing countries cannot be overemphasized. Many studies have shown a strong positive impact of trade on economic growth across developed and emerging markets. However, very little is known about the simultaneous effect of trade and investment on growth in SSA when institutional control variables are introduced into the model. Therefore, this study examines the role of trade and investment in the growth process in SSA, using trade openness (% of GDP), exports (% of GDP), and imports (% of GDP) as measures of trade. We embrace an idiographic perspective that allows a methodology and design sensitive to the nature of the study by deploying the panel corrected standard error (PCSE) technique. In this paper, we draw on 35 countries within SSA. The research outcomes reveal that trade openness, domestic investment, and imports affect growth in the region positively, while exports affect growth negatively. A possible reason for this is the nature of the exports of sub-Saharan African economies, which are mostly affected by price volatility in the global market, among other factors such as low prices, vagaries of weather, etc. We discuss the policy implications of the study.

ABOUT THE AUTHORS

Fredrick Ikpesu is a lecturer at Pan-Atlantic University, Lagos, Nigeria. Prior to entering academia, he worked with a consulting firm, where he rose to group head of finance and administration. His research areas include trade finance, corporate financial distress, and corporate finance.
Olusegun Vincent is a Senior Lecturer in the School of Management and Social Sciences at Pan-Atlantic University in Nigeria. Prior to academic life, he worked as a chartered accountant at Ernst & Young and a few financial institutions, and was one-time Director of the Economic Intelligence Unit in Lagos State, Nigeria. His research interests include sustainability, strategy, corporate governance, and corporate finance.
Olamitunji Dakare (PHD) is a senior lecturer at Pan-Atlantic University, School of Management and Social Sciences (SMSS), Lagos, Nigeria. His research areas are trade finance, competitive strategy, business strategy and management, strategy thinking, business development, and strategic management in small and medium enterprises.
PUBLIC INTEREST STATEMENT
The relevance of trade and investment in the economic prosperity of any nation cannot be neglected. This research has looked at whether trade and investment have aided or discouraged growth in the sub-Saharan African (SSA) economies. We embarked on this study because very few past studies evaluated the simultaneous effects of both trade and investment on economic growth. It is equally important to state that our conceptual model factored in institutional control variables (government effectiveness, rule of law, and regulatory quality), which may equally affect the relationship between the dependent and independent variables. The study results revealed that trade openness and imports positively impact growth, while exports negatively impact growth in SSA. The outcome of the study further revealed that investment affects growth positively in the region.
Introduction
Both trade and investment have been widely acknowledged as catalytic agents in the growth process of developing and developed countries. For instance, much of the prolific literature on international trade, both theoretical and empirical, agrees that trade has become one of the major economic-growth strategies used by nations, especially in sub-Saharan Africa (SSA), to ensure surplus production, enlarge potential markets, foster superior innovation, and promote efficient competition (Ali & Xialing, 2017; Emeka, Frederick, & Peter, 2012; Were, 2015). On the other hand, Mohsen (2015) pointed out that globally inclined countries have seen the need not only to improve their investment goals but also to create an attractive investment climate, as these lead to better output, employment creation, higher income, and economic growth in the country.
Although trade and investment are recognized as strategic engines through which the general economic growth of any country can be achieved, there is relatively little evidence on the connections among trade, investment, and economic growth (Chaudhary & Qaisrani, 2002). Much of the empirical literature has focused on the nexus between trade and growth (Geda & Seid, 2015; Goff & Singh, 2014; Were, 2015; Zahonogo, 2014), while the few formal studies on trade, investment, and economic growth have focused on single-country settings (Champa, Mohammed, & Debasish, 2017; Ali & Xialing, 2017; Paul & Milanzi, 2016). Although the empirical analyses of the studies mentioned above show a significant correlation between trade, investment, and economic growth at the individual-country level, the extant literature on the link among these three variables (i.e., trade, investment, and growth) is still inconclusive, partly because the proxies and methodologies used at the individual-country level may not be best suited to generalizing the multidimensional effects at the cross-country level; this is the main concern of this study. According to Kenen and Voivodas (1972) and MacBean (1976), the impact of trade may differ from country to country, given their volume of trade and dependency on the foreign sector; the same holds for domestic investment, as pointed out by Chaudhary and Qaisrani (2002).
The findings of earlier empirical studies attempting to explain the multidimensional effects of trade, investment, and growth remain inconsistent, owing to factors such as small sample sizes and the nature of the methodologies and data used. This premise underscores the relevance of this study. We therefore re-examine the role of trade and investment in the growth process by empirically investigating the growth effect of trade and investment in sub-Saharan African countries using a panel corrected standard error (PCSE) technique. The empirical evidence is based on a sample of 35 sub-Saharan African economies using annual data covering the period 2000 to 2016. We employ the neoclassical augmented growth model developed by Mankiw, Romer, and Weil (1992) in the specification of the study model. The primary motivations underpinning this specification are the inclusion of human capital, which enhances growth and the productivity of labour, and the objectives which the study aims to achieve.
The remainder of the paper is organised as follows. Part two reviews the empirical literature connecting trade, investment, and growth, while part three presents the empirical model and econometric issues. Part four presents and discusses the empirical results. Part five presents the conclusion and policy insights from the research, while the last part discusses the limitations of the study and areas for further research.
Empirical review of literature
Empirical studies on how trade influences economic growth abound in the literature. Most of these studies have reported a positive and statistically significant relationship between trade and growth, as well as between investment and growth. However, the degrees of causality vary significantly across countries, regional blocs, and continents, and some scholars have raised concerns over the datasets and statistical methods employed in establishing causality. Frankel and Romer (1999) concluded that trade has a significant and positive effect on income, with a one percent increase in trade raising income per person by one to two percent; hence, they concluded that the effect of trade on income is overwhelming.
In overviews of the cross-country empirical investigations of the 1980s and 1990s carried out by Harrison (1996), Giles and Williams (2000), and Lewer and Berg (2003), the relationship between trade and economic growth was found to be statistically significant. The findings were consistent across many empirical investigations in terms of the size of the relationship, which on average showed that a one percent increase in trade (export) was associated with a one-fifth of a percentage point increase in gross national product (GNP). This consistency was robust across the samples and inferential statistical methods deployed. Many studies in the 1990s were unequivocal about the direction of causation between trade and growth (Fosu, 1996; Frankel & Romer, 1999; Greenaway, 1998; Sachs, Warner, Åslund, & Fischer, 1995). Fosu's (1990) study revealed that exports positively impacted economic growth, using a sample of 28 developing countries in SSA.
The empirical inquiry of Onafowora and Owoye (1998) found that exports affect growth positively, using a sample of 12 sub-Saharan Africa (SSA) countries. Sachs et al. (1995), using a speed-of-integration measure to proxy trade, found that the fastest integrators were the East Asian exporting economies, while the weak integrators were mostly the low-income countries of SSA and some middle-income countries of Latin America. Some other studies suggested that trade grows considerably faster after the implementation of trade liberalization (Falvey, Foster, & Greenaway, 2012; Salinas, Gueye, & Korbut, 2014; Wacziarg & Welch, 2008). Despite the response to trade reforms, however, not all reforms have been successful (Singh, 2010).
Recent studies have operationalized trade in the context of trade openness rather than the narrow perspective of trade as export activities (Winters & Masters, 2013). Trade openness gives trade a much wider definition, which includes both the export and import activities of a nation, unlike previous studies where trade was operationalized as export activities only. The literature has shown that trade (export and import) has positively impacted growth and is important for economic progress (Rodrik, 1999). Defining trade from the openness perspective, Savvides (1995) found that trade significantly accounts for growth in Africa. Yanikkaya (2003) reported a significant positive association between trade and growth when trade was proxied by constructs such as technology transfer, economies of scale, and comparative advantage. On the flip side, trade barriers, including excise duties, import duties, and taxes on international trade, also demonstrated a positive association with growth, although Yanikkaya (2003) conceded the inherent limitations in measuring trade barriers. In a study of the effect of trade openness on growth and real income, developing countries experienced a negative impact while developed countries recorded a significant positive association (Kim, 2011).
Despite these scholarly contributions to the trade–growth relationship, very little is known about the combined effect of trade and investment on growth. This obvious gap amounts to a knowledge gap requiring scientific enquiry.
Model and econometric issues
The study utilizes the neoclassical augmented growth model developed by Mankiw et al. (1992) to estimate the growth effect of trade and investment. The main motivation underpinning the choice of the model is the inclusion of human capital, which enhances growth and the productivity of labour, together with the aim of the study, which is to investigate the growth effect of trade and investment. In line with previous empirical studies, the study adopts three measures of trade: Trade Openness (%GDP), Export (%GDP), and Import (%GDP). Following similar studies, and taking into account the heterogeneity of the coefficients, the variables of interest (trade and investment), and the control variables, the study model is expressed as a standard growth regression:

$$Y_{it} = \alpha_i + \beta\,\mathrm{Trade}_{it} + \delta\,\mathrm{INV}_{it} + \theta Z_{it} + \varepsilon_{it}$$

where $Y_{it}$ is GDP per capita for country $i$ at time $t$, and $\alpha_i$ is the country-specific effect. $\mathrm{Trade}_{it}$ denotes the trade measures: Trade Openness (%GDP), Export (%GDP), and Import (%GDP). $\mathrm{INV}_{it}$ is gross domestic investment (%GDP) for country $i$ at time $t$, and $Z$ is the vector of control variables: life expectancy at birth (LE), population growth (POPGR), real exchange rate (REXR), inflation (INF), government effectiveness (GE), regulatory quality (RQ), and rule of law (RL). Life expectancy at birth and population growth were included in the model to capture the impact of human capital, while the real exchange rate and inflation were used as proxies for macroeconomic stability. In addition, government effectiveness, regulatory quality, and rule of law were included to account for institutional factors. $\varepsilon_{it}$ is the error term, while $\beta$, $\delta$, and $\theta$ are the parameter coefficients to be estimated in the study.
The above model is estimated using the panel corrected standard error (PCSE) technique. The technique is employed because it provides estimates that are robust to autocorrelation, yields accurate standard error estimates, and is less sensitive to outliers. Furthermore, the PCSE technique is appropriate when working with dynamic heterogeneous panel data (Bailey & Katz, 2011; Eboiyehi, 2017; Reed & Webb, 2010).
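To make the estimation technique concrete, the following is a minimal sketch of pooled OLS with Beck–Katz panel-corrected standard errors for a balanced panel, written in NumPy. The function name, the unit-major stacking convention, and the synthetic data are illustrative assumptions, not the study's actual dataset or code.

```python
# A hedged sketch of panel-corrected standard errors (Beck & Katz, 1995)
# for a balanced panel. Observations are stacked unit-by-unit: unit 0's
# T rows first, then unit 1's, and so on.
import numpy as np

def pcse_ols(y, X, n_units, n_periods):
    """Pooled OLS coefficients with panel-corrected standard errors."""
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ X.T @ y
    e = (y - X @ beta).reshape(n_units, n_periods)   # one row of residuals per unit
    Sigma = (e @ e.T) / n_periods                    # N x N contemporaneous covariance
    Omega = np.kron(Sigma, np.eye(n_periods))        # block error covariance
    cov = XtX_inv @ (X.T @ Omega @ X) @ XtX_inv      # sandwich estimator
    return beta, np.sqrt(np.diag(cov))

# Toy usage: 35 units and 17 periods, mirroring the study's panel shape.
rng = np.random.default_rng(0)
N, T = 35, 17
X = np.column_stack([np.ones(N * T), rng.normal(size=(N * T, 2))])
y = X @ np.array([1.0, 0.5, -0.2]) + rng.normal(size=N * T)
beta, se = pcse_ols(y, X, N, T)
print(beta, se)
```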
Data and variable definition
Annual data covering the period 2000 to 2016 for 35 sub-Saharan African economies were employed in the study. The coverage period and the selection of countries were based on the availability of data. The dependent variable is GDP per capita (PCY), while the independent variables are trade and investment as a percentage of GDP (INV). The study employs three measures of trade, in line with previous studies: Trade Openness (%GDP), Export (%GDP), and Import (%GDP). A set of control variables usually employed in growth equations was also included in the model. Table 1 shows the definitions and sources of all the variables used in the study.
Empirical result and discussion
Prior to investigating the growth effect of trade and investment, the stationarity properties of the variables were examined as a preliminary test. As shown in Table 2, all the variables became stationary at first difference, i.e., they are integrated of order one. Hence, the null hypothesis of a unit root is rejected for the differenced series.
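As an illustration of this preliminary step, the sketch below runs an augmented Dickey–Fuller test on a series in levels and at first difference using statsmodels; the simulated random walk is an illustrative stand-in for a study variable, not the actual data.

```python
# A hedged sketch of the unit root check: ADF test in levels and after
# first differencing. A random walk is I(1) by construction.
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(1)
level = np.cumsum(rng.normal(size=200))   # simulated I(1) series
diff = np.diff(level)                     # its first difference

for name, series in [("level", level), ("first difference", diff)]:
    stat, pvalue = adfuller(series)[:2]
    print(f"{name}: ADF statistic = {stat:.2f}, p-value = {pvalue:.3f}")
# Expected pattern: fail to reject the unit root in levels,
# reject it for the first-differenced series.
```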
We also conducted a cointegration test using Kao (1999) to investigate whether the variables are cointegrated. The result, shown in Table 3, indicates that the variables are cointegrated and therefore share a long-run relationship.
The PCSE estimation results for the effect of trade and investment on growth are summarized in Tables 4–6. The PCSE estimate in Table 4 revealed that trade openness as a share of GDP affects growth in the region positively, implying that trade openness has significantly influenced the growth of sub-Saharan African economies, consistent with previous empirical studies. The result also revealed that domestic investment affects growth in the region positively, implying that domestic investment has significantly contributed to the growth of sub-Saharan African economies, again consistent with previous empirical studies.
In Table 5, the PCSE estimate revealed that export has a negative and insignificant relationship with growth. This implies that exports as a share of GDP have not contributed substantially to growth in the region. The reason lies in the nature of sub-Saharan African exports, which are mostly primary commodities subject to price volatility and low prices on the global market. Weak institutions, inadequate infrastructure, pests, and vagaries of weather have also contributed to the low competitiveness of sub-Saharan African exports (Were, 2015). The PCSE regression also showed that domestic investment has a positive effect on growth in sub-Saharan African countries, suggesting that domestic investment has been a catalyst for growth in the region.
In Table 6, the PCSE regression estimate showed that import as a share of GDP affects growth in the region positively, suggesting that imports have contributed substantially to growth. The result also indicates that domestic investment affects growth positively in the region, again implying that domestic investment has contributed substantially to the growth of sub-Saharan African economies.
In addition, across the PCSE estimates in Tables 4–6, the effect of the exchange rate on growth in the region is positive while the effect of inflation on growth is negative, which indicates the significance of a stable macroeconomic environment for growth in the region. Furthermore, the PCSE estimates on population growth and life expectancy at birth presented in Tables 4–6 show that population growth affects growth in the region positively, implying that sub-Saharan African economies stand to benefit from positive population growth. Life expectancy at birth has a negative and significant effect on growth, an indication of the economic burden of an ageing population (Were, 2015). The PCSE estimates in Tables 4–6 also indicate that the institutional control variables government effectiveness and rule of law have a positive effect on growth, while regulatory quality has a negative effect.
Concluding remarks
Extant studies have consistently reported a positive impact of trade on economic growth across developed and emerging markets. However, very little is known about the joint impact of trade and investment on growth in SSA. The study investigated the growth effect of trade and investment in sub-Saharan African economies over the period 2000 to 2016 by employing the panel corrected standard error (PCSE) technique. The dependent variable used in the study is growth, measured as GDP per capita, while the independent variables are trade and investment. In line with previous studies, control variables were also included in the model.
The research outcome revealed that Trade Openness (%GDP) and Import (%GDP) affect growth in the region positively, while Export (%GDP) affects growth negatively. A possible reason is the nature of sub-Saharan African exports, which are mostly affected by price volatility in the global market, among other factors such as low prices and vagaries of weather. The research outcome also revealed that domestic investment affects growth positively in the region, an indication that domestic investment has contributed significantly to the growth of SSA economies.
The study recommends that governments in the region create a conducive environment by reducing the cost of doing business, providing infrastructure, and granting tax rebates so as to encourage local producers. Furthermore, the region should deploy technology with a view to increasing the value addition of its primary commodities. For the region to unlock its growth and trade potential, the study recommends effective trade integration at the regional and global levels.
Policy implication
Both trade openness (trade) and capital formation (investment) remain potent factors contributing to aggregate income in SSA. Capital formation in SSA is building up through both domestic investment and foreign direct investment (FDI), and through the encouragement of local manufacturers via various government incentive programmes that aid the purchase of capital stock for running production plants. The positive impact of trade on economic growth is not serendipitous; rather, trade policies have evolved remarkably since 1960 in SSA. Trade policy formulation and implementation in SSA initially favoured protectionist policies in a bid to shield the domestic market from foreign competition and encourage domestic industrial development. Between 1960 and 2015, trade policies such as import substitution, investment incentives, nationalization, guided privatization, import prohibition, exchange rate controls, deregulation of interest rates, export incentives, and foreign exchange restrictions were implemented purposely to boost trade.
However, the study results indicate that the major factor behind the positive impact of trade on aggregate income is import, while export shows an inverse relationship to aggregate income. An important question is why the export activities of SSA countries have not contributed to growth in the region. The answer may be that exporting activities are shallow and amount to heavy reliance on exports of one or two primary commodities (e.g., crops and crude oil). Over-reliance on primary commodities in SSA has been identified as a root cause of the economic problems afflicting developing countries.
The data show that in many developing countries, primary commodity exports account for a very high percentage of total exports. As a consequence, a shortfall in production and/or a decline in commodity prices can plunge the exporting developing countries into economic crisis. According to the Prebisch–Singer thesis (see Prebisch, 1950; Singer, 1950), as a result of a continuous decline in the terms of trade for primary commodities, developing countries are able to import fewer manufactured goods for a given amount of the primary commodities they export. In other words, primary commodity exporters must keep increasing the volume of their exports in order to import the necessary manufactured goods (Todaro, 2000). For several decades, until the 1980s, the Prebisch–Singer thesis generated considerable interest among economists and spurred numerous empirical studies. Under these circumstances, export will negatively impact aggregate income.
This paper posits that governments of SSA countries must re-strategise by shifting their focus from the export of primary products (including cash crops, minerals, and crude oil) to value-added products. A comprehensive study of the entire value chain of each product or mineral should be conducted with the aim of processing primary products into intermediate and final products for export. For instance, crude oil can be processed into eight or more finished products, including butane, diesel fuel, premium motor spirit, gasoline, kerosene, liquefied natural gas, liquefied petroleum gas, and propane. It is telling that Nigeria spends at least fifty percent of its total crude oil export value on the importation of refined petroleum; this amounts to ceding industrialization, investment, and employment opportunities to other countries.
Governments of SSA countries must now create policies that encourage the local conversion of raw materials or primary products into both intermediate and finished products, for real economic prosperity and growth. Export-led growth geared only towards the exportation of primary products is counterproductive in the medium to long run. The current asymmetrical shape of trade, in which booming importation drives aggregate income, is voodoo rather than sustainable economic growth. The World Bank (1993) identified Singapore, South Korea, Taiwan, Thailand, Hong Kong, Indonesia, Japan, and Malaysia as eight East Asian nations with vibrant economic growth because of their export-led growth.
A major policy shift has become inevitable. Policies that encourage the local conversion of primary products into intermediate or finished goods before exportation are needed to promote growth in the region. Governments must therefore be prepared to institute a regime of waivers, grants, and tax incentives (e.g., pioneer status) that encourages the conversion of cash crops, minerals, and metals into at least semi-finished goods, rather than exporting them as primary products.
Limitation of the study and area of further study
The study investigated the growth effect of trade and investment in sub-Saharan African economies. Future research should compare the growth effect of trade and investment between sub-Saharan African countries, developed countries, and Latin America. Furthermore, the dynamic interaction among growth, trade, and investment can also be investigated at both the cross-country and the single-country level. | 2019-05-21T13:05:59.348Z | 2019-01-01T00:00:00.000 | {
"year": 2019,
"sha1": "ac11ee0b46dd3321571433e6e8913bc83e317908",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1080/23322039.2019.1607127",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "ac11ee0b46dd3321571433e6e8913bc83e317908",
"s2fieldsofstudy": [
"Economics"
],
"extfieldsofstudy": [
"Economics"
]
} |
102487538 | pes2o/s2orc | v3-fos-license | Inversion, Iteration, and the Art of Dual Wielding
The humble $\dagger$ ("dagger") is used to denote two different operations in category theory: Taking the adjoint of a morphism (in dagger categories) and finding the least fixed point of a functional (in categories enriched in domains). While these two operations are usually considered separately from one another, the emergence of reversible notions of computation shows the need to consider how the two ought to interact. In the present paper, we wield both of these daggers at once and consider dagger categories enriched in domains. We develop a notion of a monotone dagger structure as a dagger structure that is well behaved with respect to the enrichment, and show that such a structure leads to pleasant inversion properties of the fixed points that arise as a result. Notably, such a structure guarantees the existence of fixed point adjoints, which we show are intimately related to the conjugates arising from a canonical involutive monoidal structure in the enrichment. Finally, we relate the results to applications in the design and semantics of reversible programming languages.
Introduction
Dagger categories are categories in which each morphism $X \xrightarrow{f} Y$ can be assigned an adjoint $Y \xrightarrow{f^\dagger} X$ subject to certain equations. In recent years, dagger categories have been used to capture aspects of inversion in both reversible [27,28,30] and quantum [2,35,12] computing. Likewise, domain theory and categories enriched in domains (see, e.g., [3,14,15,4,6,38]) have been successful since their inception in modelling both recursive functions and data types in programming via fixed points.
A motivating example of the interaction between adjoints and fixed points is found in the reversible functional programming language Rfun [40], as the interaction between program inversion and recursion. In this language, inverses of recursive functions can be constructed in a particularly straightforward way, namely as recursive functions with function body the inverse of the function body of the original function. Previously, the author and others showed that this phenomenon appears in join inverse categories, a particular class of domain-enriched dagger categories suitable for modelling classical reversible computing, as fixed point adjoints [30] to the functionals (i.e., second-order continuous functions) used to model recursive functions.
Several questions remain about these fixed point adjoints, however. Notably: Are these fixed point adjoints canonical? Why do they arise in classical reversible computing, and do they arise elsewhere as well? To answer these questions requires us to develop the art of wielding the two daggers offered by dagger categories and domain-enriched categories at once. We argue that well-behaved interaction between the dagger and domain-enrichments occurs when the dagger is locally monotone, i.e., when f ⊑ g implies f † ⊑ g † . We show that the functionals on C form an involutive monoidal category, which also proves surprisingly fruitful in unifying seemingly disparate concepts from the literature under the banner of conjugation of functionals. Notably, we show that the conjugate functionals arising from this involutive structure coincide with fixed point adjoints [30], and that they occur naturally both in proving the ambidexterity of dagger adjunctions [22] and in natural transformations that preserve the dagger (including dagger traces [36]).
While these results could be applied to model a reversible functional programming language with general recursion and parametrized functions (such as an extended version of Theseus [28]), they are general enough to account for even certain probabilistic and nondeterministic models of computation, such as the category Rel of sets and relations, and the category DStoch ≤1 of finite sets and subnormalized doubly stochastic maps.
Overview: A brief introduction to the relevant background material on dagger categories, (DCPO-)enriched categories, iteration categories, and involutive monoidal categories is given in Section 2. In Section 3 the concept of a monotone dagger structure on a DCPO-category is introduced, and it is demonstrated that such a structure leads to the existence of fixed point adjoints for (ordinary and externally parametrized) fixed points, given by their conjugates. We also explore natural transformations in this setting, and develop a notion of self-conjugate natural transformations, of which †-trace operators are examples. Finally, we discuss potential applications and avenues for future research in Section 4, and end with a few concluding remarks in Section 5.
Background
Though familiarity with basic category theory, including monoidal categories, is assumed, we recall here some basic concepts relating to dagger categories, (DCPO)-enriched categories, iteration categories, and involutive monoidal categories [25,7]. The material is only covered here briefly, but can be found in much more detail in the numerous texts on dagger category theory (see, e.g., [35,2,20,31]), enriched category theory (for which [33] is the standard text), and domain theory and iteration categories (see, e.g., [3,15]).
Dagger categories
A dagger category (or †-category) is a category equipped with a suitable method for flipping the direction of morphisms, by assigning to each morphism an adjoint in a manner consistent with composition. They are formally defined as follows.

Definition 1. A dagger category is a category $\mathcal{C}$ equipped with a functor $(-)^\dagger : \mathcal{C}^{op} \to \mathcal{C}$ that is the identity on objects and satisfies $(f^\dagger)^\dagger = f$ for every morphism $f$.
A given category may have several different daggers which need not agree. An example of this is the groupoid of finite-dimensional Hilbert spaces and linear isomorphisms, which has (at least!) two daggers: One maps linear isomorphisms to their linear inverse, the other maps linear isomorphisms to their hermitian conjugate. The two only agree on the unitaries, i.e., the linear isomorphisms which additionally preserve the inner product. For this reason, one would in principle need to specify which dagger one is talking about on a given category, though this is often left implicit (as will also be done here).
Let us recall the definitions of some interesting properties of morphisms in a dagger category. By theft of terminology from linear algebra, say that a morphism $X \xrightarrow{f} X$ in a dagger category is hermitian or self-adjoint if $f = f^\dagger$, and unitary if it is an isomorphism and $f^{-1} = f^\dagger$. Whereas objects are usually considered equivalent if they are isomorphic, the "way of the dagger" [22,31] dictates that all structure in sight must cooperate with the dagger; as such, objects ought to be considered equivalent in dagger categories only if they are isomorphic via a unitary map.
We end with a few examples of dagger categories. As discussed above, FHilb is an example (the motivating one, even [35]) of dagger categories, with the dagger given by hermitian conjugation. The category PInj of sets and partial injective functions is a dagger category (indeed, it is an inverse category [32,11]) with f † given by the partial inverse of f . Similarly, the category Rel of sets and relations has a dagger given by R † = R • , i.e., the relational converse of R. Noting that a dagger subcategory is given by the existence of a faithful dagger functor, it can be shown that PInj is a dagger subcategory of Rel with the given dagger structures.
DCPO-categories and other enriched categories
Enriched categories (see, e.g., [33]) capture the idea that homsets on certain categories can (indeed, ought to) be understood as something other than sets -or in other words, as objects of another category than Set. A category C is enriched in a monoidal category V if all homsets C (X, Y ) of C are objects of V , and for all objects X, Y, Z of C , V has families of morphisms C (Y, Z) ⊗ C (X, Y ) → C (X, Z) and I → C (X, X) corresponding to composition and identities in C , subject to commutativity of diagrams corresponding to the usual requirements of associativity of composition, and of left and right identity. As is common, we will often use the shorthand "C is a V -category" to mean that C is enriched in the category V .
We focus here on categories enriched in the category of domains (see, e.g., [3]), i.e., the category DCPO of pointed directed complete partial orders and continuous maps. A partially ordered set $(X, \sqsubseteq)$ is said to be directed complete if every directed set (i.e., a non-empty $A \subseteq X$ in which any pair of elements has an upper bound in $A$) has a supremum in $X$. A function $f$ between directed complete partial orders is monotone if $x \sqsubseteq y$ implies $f(x) \sqsubseteq f(y)$ for all $x, y$, and continuous if $f(\sup A) = \sup_{a \in A} \{f(a)\}$ for each directed set $A$ (note that continuity implies monotony). A directed complete partial order is pointed if it has a least element $\bot$ (or, in other words, if also the empty set has a supremum), and a function $f$ between such is called strict if $f(\bot) = \bot$ (i.e., if also the supremum of the empty set is preserved). Pointed directed complete partial orders and continuous maps form a category, DCPO.
As such, a category enriched in DCPO is a category C in which homsets C (X, Y ) are directed complete partial orders, and composition is continuous. Additionally, we will require that composition is strict (meaning that ⊥ • f = ⊥ and g • ⊥ = ⊥ for all suitable morphisms f and g), so that the category is actually enriched in the category DCPO! of directed complete partial orders and strict continuous functions, though we will not otherwise require functions to be strict.
Enrichment in DCPO provides a method for constructing morphisms in the enriched category as least fixed points of continuous functions between homsets; this is commonly used to model recursion. Given a continuous function $\varphi : \mathcal{C}(X,Y) \to \mathcal{C}(X,Y)$, its least fixed point is given by $\operatorname{fix} \varphi = \sup_{n \in \mathbb{N}} \{\varphi^n(\bot)\}$.
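As an illustration of this construction, the following is a minimal sketch, in Python, of computing $\operatorname{fix} \varphi$ by iterating a monotone map from the bottom element; the finite powerset lattice and the concrete map are illustrative assumptions (on a finite poset the ascending chain stabilises, so the supremum is reached after finitely many steps).

```python
# A sketch of fix(phi) = sup_n phi^n(bottom) on the pointed DCPO of
# subsets of {0,...,4} ordered by inclusion, with bottom = empty set.
def fix(phi, bottom=frozenset()):
    x = bottom
    while True:
        y = phi(x)
        if y == x:              # chain has stabilised at the least fixed point
            return x
        x = y

# An illustrative monotone map: add 0, and the successor (mod 5) of
# every element already present.
phi = lambda s: frozenset(s) | {0} | {(n + 1) % 5 for n in s}
print(sorted(fix(phi)))         # [0, 1, 2, 3, 4]
```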
Parametrized fixed points and iteration categories
Related to the fixed point operator is the parametrized fixed point operator, an operator pfix assigning to a morphism $X \times Y \xrightarrow{\psi} X$ a morphism $Y \xrightarrow{\operatorname{pfix} \psi} X$ satisfying equations such as the parametrized fixed point identity and others (see, e.g., [24,14]). Parametrized fixed points are used to solve domain equations of the form $x = \psi(x, p)$ for a given parameter $p \in Y$. Indeed, if for a continuous function $X \times Y \xrightarrow{\psi} X$ we define $\psi^0(x, p) = x$ and $\psi^{n+1}(x, p) = \psi(\psi^n(x, p), p)$, we can construct its parametrized fixed point in DCPO in a way reminiscent of the usual fixed point, as $(\operatorname{pfix} \psi)(p) = \sup_{n \in \mathbb{N}} \{\psi^n(\bot, p)\}$. In fact, a parametrized fixed point operator may be derived from an ordinary fixed point operator by $(\operatorname{pfix} \psi)(p) = \operatorname{fix} \psi(-, p)$. Similarly, we may derive an ordinary fixed point operator from a parametrized one by considering a morphism $X \xrightarrow{\varphi} X$ to be parametrized by the terminal object $1$, so that the fixed point of $X \xrightarrow{\varphi} X$ is given by the parametrized fixed point of $X \times 1 \cong X \xrightarrow{\varphi} X$. The parametrized fixed point operation is sometimes also called a dagger operation [14], and denoted by $f^\dagger$ rather than $\operatorname{pfix} f$. Though this is indeed the other dagger that we are wielding, we will use the phrase "parametrized fixed point" and the notation "pfix" to avoid unnecessary confusion.
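A small sketch of deriving pfix from fix, continuing the previous snippet: $(\operatorname{pfix} \psi)(p) = \operatorname{fix} \psi(-, p)$. The concrete map below is again an illustrative assumption.

```python
# pfix derived from fix by fixing the parameter: (pfix psi)(p) = fix(psi(-, p)).
# Reuses fix() from the previous snippet.
def pfix(psi):
    return lambda p: fix(lambda x: psi(x, p))

# psi(x, p): include the parameter set p and close under successor mod 5.
psi = lambda x, p: frozenset(x) | frozenset(p) | {(n + 1) % 5 for n in x}
print(sorted(pfix(psi)(frozenset({2}))))   # [0, 1, 2, 3, 4]
```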
An iteration category [15] is a cartesian category with a parametrized fixed point operator that behaves in a canonical way. The definition of an iteration category arose from the observation that the parametrized fixed point operators in a host of concrete categories (notably DCPO) satisfy the same identities. This led to an elegant semantic characterization of iteration categories, due to [15]. Note that the original definition defined iteration categories in relation to the category $\mathrm{CPO}_m$ of ω-complete partial orders and monotone functions, rather than to DCPO. However, the motivating theorem [15, Theorem 1] shows that the parametrized fixed point operator in $\mathrm{CPO}_m$ satisfies the same identities as the one found in CPO (i.e., with continuous rather than monotone functions). Since the parametrized fixed point operator of DCPO is constructed precisely as it is in CPO (noting that ω-chains are directed sets), this definition is equivalent to the original.
Involutive monoidal categories
An involutive category [25] is a category in which every object $X$ can be assigned a conjugate object $\overline{X}$ in a functorial way, such that $\overline{\overline{X}} \cong X$. A novel idea by Egger [13] is to consider dagger categories as categories enriched in an involutive monoidal category. We will return to this idea in Section 3.1, and recall the relevant definitions in the meantime (due to [25]; compare also with bar categories [7]).
Borrowing terminology from linear algebra, we call $\overline{X}$ (respectively $\overline{f}$) the conjugate of an object $X$ (respectively a morphism $f$), and say that an object $X$ is self-conjugate if $X \cong \overline{X}$. Note that since conjugation is covariant, any category $\mathcal{C}$ can be made involutive by assigning $\overline{X} = X$, $\overline{f} = f$, and letting $\mathrm{id} \stackrel{\iota}{\Rightarrow} \overline{(-)}$ be the identity in each component; as such, an involution is a structure rather than a property. Non-trivial examples of involutive categories include the category of complex vector spaces $\mathbf{Vect}_{\mathbb{C}}$, with the involution given by the usual conjugation of complex vector spaces; and the category Poset of partially ordered sets and monotone functions, with the involution given by order reversal.
When a category is both involutive and (symmetric) monoidal, we say that it is an involutive (symmetric) monoidal category when these two structures play well together, as in the following definition [25].
Definition 4. An involutive (symmetric) monoidal category is a (symmetric) monoidal category $\mathcal{V}$ which is also involutive, such that the involution is a monoidal functor, and $\mathrm{id} \Rightarrow \overline{\overline{(-)}}$ is a monoidal natural isomorphism.
This specifically gives us a natural family of isomorphisms $\overline{X \otimes Y} \cong \overline{X} \otimes \overline{Y}$, and when the monoidal product is symmetric, this extends to a natural isomorphism $\overline{X \otimes Y} \cong \overline{Y} \otimes \overline{X}$. This fact will turn out to be useful later on when we consider dagger categories as enriched in certain involutive symmetric monoidal categories.
Domain enriched dagger categories
Given a dagger category that also happens to be enriched in domains, we ask how these two structures ought to interact with one another. Since domain theory dictates that the well-behaved functions are precisely the continuous ones, a natural first answer would be that the dagger should be locally continuous; however, it turns out that we can make do with less.
Definition 5. Say that a dagger structure on DCPO-category is monotone if the dagger is locally monotone, i.e., if f ⊑ g implies f † ⊑ g † for all f and g.
In the following, we will use the terms "DCPO-category with a monotone dagger structure" and "DCPO-†-category" interchangeably. That this is sufficient to get what we want (in particular, to obtain local continuity of the dagger) is shown in the following lemma. Lemma 1. In any DCPO-†-category, the dagger is an order isomorphism on morphisms; in particular, it is continuous and strict.
Proof. For $\mathcal{C}$ a dagger category, $f \mapsto f^\dagger$ gives an isomorphism $\mathcal{C}(X, Y) \cong \mathcal{C}(Y, X)$ for all objects $X, Y$; that this isomorphism of hom-objects is an order isomorphism follows directly from local monotony.
Let us consider a few examples of DCPO- †-categories.
Example 1. The category Rel of sets and relations is a dagger category, with the dagger given by $R^\dagger = R^\bullet$, the relational converse of $R$ (defined by $(y, x) \in R^\bullet$ iff $(x, y) \in R$). It is also enriched in DCPO by the usual subset ordering: since a relation $X \to Y$ is nothing more than a subset of $X \times Y$, equipped with the subset order $\subseteq$, we have $\sup(\Delta) = \bigcup_{R \in \Delta} R$ for any directed set $\Delta \subseteq \mathbf{Rel}(X, Y)$. It is also pointed, with the least element of each homset given by the empty relation. To see that this is a monotone dagger structure, let $R, S : X \to Y$ be relations with $R \subseteq S$, and suppose $(y, x) \in R^\bullet$. Then $(x, y) \in R$ by definition of the relational converse, and by the assumption that $R \subseteq S$ we also have $(x, y) \in S$. But then $(y, x) \in S^\bullet$ by definition of the relational converse, so $R^\dagger = R^\bullet \subseteq S^\bullet = S^\dagger$ follows by extensionality.
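A concrete sketch of this example: relations represented as sets of pairs, the dagger as relational converse, and a spot check of monotony on particular relations (an illustration, not a proof).

```python
# Relations as sets of pairs; the dagger is the relational converse.
def dagger(rel):
    return {(y, x) for (x, y) in rel}

R = {(1, 'a'), (2, 'b')}
S = R | {(3, 'a')}                        # R is a subrelation of S
assert R <= S and dagger(R) <= dagger(S)  # the dagger is locally monotone here
assert dagger(dagger(R)) == R             # and involutive
```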
Example 2. We noted earlier that the category PInj of sets and partial injective functions is a dagger subcategory of Rel, with $f^\dagger$ given by the partial inverse (a special case of the relational converse) of a partial injection $f$. Further, it is also a DCPO-subcategory of Rel; in PInj, the subset order becomes the relation that for $x \in X$, if $f$ is defined at $x$ and $f(x) = y$, then $g$ is also defined at $x$ and $g(x) = y$. Like Rel, it is pointed, with the nowhere defined partial function as the least element of each homset. That $\sup(\Delta)$ for a directed $\Delta \subseteq \mathbf{PInj}(X, Y)$ is a partial injection follows straightforwardly, and that this dagger structure is monotone follows by an argument analogous to the one for Rel.
Example 3. More generally, any join inverse category (see [16]), of which PInj is one, is a DCPO-†-category. Inverse categories are canonically dagger categories enriched in partial orders. That this extends to DCPO-enrichment in the presence of joins is shown in [30]; that the canonical dagger is monotone with respect to the partial order is an elementary result (see, e.g., [30, Lemma 2]).
Example 4. The category $\mathbf{DStoch}_{\le 1}$ of finite sets and subnormalized doubly stochastic maps is an example of a probabilistic DCPO-†-category. A subnormalized doubly stochastic map $X \xrightarrow{f} Y$, where $|X| = |Y| = n$, is given by an $n \times n$ matrix $A = [a_{ij}]$ with non-negative real entries such that $\sum_{i=1}^{n} a_{ij} \le 1$ and $\sum_{j=1}^{n} a_{ij} \le 1$. Composition is given by the usual multiplication of matrices. This is a dagger category with the dagger given by matrix transposition. It is also enriched in DCPO by ordering subnormalized doubly stochastic maps entry-wise (i.e., $A \le B$ if $a_{ij} \le b_{ij}$ for all $i, j$), with the everywhere-zero matrix as the least element in each homset, and with suprema of directed sets given by computing suprema entry-wise. That this dagger structure is monotone follows from the fact that if $A \le B$, so $a_{ij} \le b_{ij}$ for all $i, j$, then also $a_{ji} \le b_{ji}$ for all $j, i$, which is precisely to say that $A^\dagger \le B^\dagger$. As such, in terms of computational content, these are examples of deterministic, nondeterministic, and probabilistic DCPO-†-categories. We will also discuss the related category $\mathrm{CP}^*(\mathbf{FHilb})$, used to model quantum phenomena, in Section 4.
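A short NumPy sketch of Example 4 follows: checking that a matrix is subnormalized doubly stochastic, and that the entry-wise order is preserved by transposition. The two matrices are illustrative.

```python
# Subnormalized doubly stochastic maps: non-negative entries with all
# row and column sums at most 1; the dagger is transposition.
import numpy as np

def is_subnormalized_doubly_stochastic(A, tol=1e-9):
    return (A >= -tol).all() \
        and (A.sum(axis=0) <= 1 + tol).all() \
        and (A.sum(axis=1) <= 1 + tol).all()

A = np.array([[0.2, 0.3], [0.4, 0.1]])
B = np.array([[0.3, 0.3], [0.5, 0.2]])
assert is_subnormalized_doubly_stochastic(A)
assert is_subnormalized_doubly_stochastic(B)
assert (A <= B).all() and (A.T <= B.T).all()   # monotone dagger structure
```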
The category of continuous functionals
We illustrate here the idea of dagger categories as categories enriched in an involutive monoidal category by an example that will be used throughout the remainder of this article: Enrichment in a suitable subcategory of DCPO. It is worth stressing, however, that the construction is not limited to dagger categories enriched in DCPO; any dagger category will do. As we will see later, however, this canonical involution turns out to be very useful when DCPO- †-categories are considered.
Let $\mathcal{C}$ be a DCPO-†-category. We define an induced (full monoidal) subcategory of DCPO, call it $\mathrm{DcpoOp}(\mathcal{C})$, which enriches $\mathcal{C}$ (by its definition) as follows: Definition 6. For a DCPO-†-category $\mathcal{C}$, define $\mathrm{DcpoOp}(\mathcal{C})$ to have as objects all objects $\Theta, \Lambda$ of DCPO of the form $\mathcal{C}(X, Y)$, $\mathcal{C}^{op}(X, Y)$ (for all objects $X, Y$ of $\mathcal{C}$), $1$, and $\Theta \times \Lambda$ (with $1$ the terminal object of DCPO, and $- \times -$ the cartesian product), and as morphisms all continuous functions between these.
In other words, $\mathrm{DcpoOp}(\mathcal{C})$ is the (full) cartesian subcategory of DCPO generated by the objects used in the enrichment of $\mathcal{C}$, with all continuous maps between these. That the dagger on $\mathcal{C}$ induces an involution on $\mathrm{DcpoOp}(\mathcal{C})$ is shown in the following theorem.

Theorem 1. For any DCPO-†-category $\mathcal{C}$, the category $\mathrm{DcpoOp}(\mathcal{C})$ is an involutive symmetric monoidal category.

Proof. On objects, define an involution $\overline{(-)}$ with respect to the cartesian (specifically symmetric monoidal) product of DCPO as follows, for all objects $\Theta, \Lambda$ of $\mathrm{DcpoOp}(\mathcal{C})$: $\overline{\mathcal{C}(X, Y)} = \mathcal{C}^{op}(X, Y)$, $\overline{\mathcal{C}^{op}(X, Y)} = \mathcal{C}(X, Y)$, $\overline{1} = 1$, and $\overline{\Theta \times \Lambda} = \overline{\Theta} \times \overline{\Lambda}$. To see that this is well-defined, recall that $\mathcal{C} \cong \mathcal{C}^{op}$ for any dagger category $\mathcal{C}$, so in particular there is an isomorphism witnessing $\mathcal{C}(X, Y) \cong \mathcal{C}^{op}(X, Y)$ in DCPO; that the remaining cases are well-defined follows by an analogous argument.
This is functorial, as $\overline{\mathrm{id}_\Theta} = \mathrm{id}_{\overline{\Theta}}$ and $\overline{\psi \circ \varphi} = \overline{\psi} \circ \overline{\varphi}$ for all continuous functions $\varphi$ and $\psi$ between objects of $\mathrm{DcpoOp}(\mathcal{C})$.
Finally, since the involution is straightforwardly a monoidal functor, and since the natural transformation id ⇒ (−) can be chosen to be the identity since all objects of DcpoOp(C ) satisfy Θ = Θ by definition, this is an involutive symmetric monoidal category.
⊓⊔
The resulting category DcpoOp(C ) can very naturally be thought of as the induced category of (continuous) functionals (or second-order functions) of C .
Notice that this is a special case of a more general construction on dagger categories: For a dagger category C enriched in some category V (which could simply be Set in the unenriched case), one can construct the category V Op(C ), given on objects by the image of the hom-functor C (−, −) closed under monoidal products, and on morphisms by all morphisms of V between objects of this form. Defining the involution as above, V Op(C ) can be shown to be involutive monoidal.
Example 5. One may question how natural (in a non-technical sense) the choice of involution on DcpoOp(C ) is. One instance where it turns out to be useful is in the context of dagger adjunctions (see [22] for details), that is, adjunctions between dagger categories where both functors are dagger functors.
Dagger adjunctions have no specified left and right adjoint, as all such adjunctions can be shown to be ambidextrous in the following way: given $F \dashv G$ between endofunctors on $\mathcal{C}$, there is a natural isomorphism $\mathcal{C}(FX, Y) \xrightarrow{\alpha_{X,Y}} \mathcal{C}(X, GY)$. Since $\mathcal{C}$ is a dagger category, we can define a natural isomorphism $\beta_{X,Y}(f) = \alpha_{Y,X}(f^\dagger)^\dagger$, which then witnesses $G \dashv F$ (as it is a composition of natural isomorphisms). But then $\beta_{X,Y}$ is defined precisely to be $\overline{\alpha_{Y,X}}$ when $F$ and $G$ are endofunctors.
Daggers and fixed points
In this section we consider the morphisms of $\mathrm{DcpoOp}(\mathcal{C})$ in some detail, for a DCPO-†-category $\mathcal{C}$. Since least fixed points of morphisms are such a prominent and useful feature of DCPO-enriched categories, we ask how these behave with respect to the dagger. To answer this question, we transplant the notion of a fixed point adjoint from [30] to DCPO-†-categories; in [30], an answer to this question was given in relation to the more specific join inverse categories: a functional $\varphi^\ddagger$ is fixed point adjoint to a functional $\varphi$ precisely when $\operatorname{fix} \varphi^\ddagger = (\operatorname{fix} \varphi)^\dagger$. Note that this is symmetric: if $\varphi^\ddagger$ is fixed point adjoint to $\varphi$ then $\operatorname{fix} \varphi = ((\operatorname{fix} \varphi)^\dagger)^\dagger = (\operatorname{fix} \varphi^\ddagger)^\dagger$, so $\varphi$ is also fixed point adjoint to $\varphi^\ddagger$. As shown in the following theorem, it turns out that the conjugate $\overline{\varphi}$ of a functional $\varphi$ is precisely fixed point adjoint to it. This is a generalization of a theorem from [30], where a more ad hoc formulation was shown for join inverse categories, which constitute a non-trivial subclass of DCPO-†-categories.

Theorem 2. Every functional is fixed point adjoint to its conjugate.
Proof. The proof applies the exact same construction as in [30], since being a DCPO-†-category suffices, and the constructed fixed point adjoint turns out to be exactly the conjugate: for a functional $\varphi$ on $\mathcal{C}(X, Y)$, one shows $\operatorname{fix} \overline{\varphi} = (\operatorname{fix} \varphi)^\dagger$, and so every functional is fixed point adjoint to its conjugate. ⊓⊔

This theorem is somewhat surprising, as the conjugate came out of the involutive monoidal structure on $\mathrm{DcpoOp}(\mathcal{C})$, which is not specifically related to the presence of fixed points. As previously noted, had $\mathcal{C}$ been enriched in another category $\mathcal{V}$, we would still be able to construct a category $\mathcal{V}\mathrm{Op}(\mathcal{C})$ of $\mathcal{V}$-functionals with the exact same involutive structure.
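The content of Theorem 2 can be checked on a small example in Rel, where conjugation of a functional $\varphi$ is given by $\overline{\varphi}(R) = \varphi(R^\dagger)^\dagger$. The concrete continuous functional below is an illustrative assumption.

```python
# A sketch in Rel: the least fixed point of the conjugate functional
# equals the dagger of the least fixed point of the original.
def dagger(rel):
    return frozenset((y, x) for (x, y) in rel)

def compose(S, R):                        # (x, z) iff (x, y) in R and (y, z) in S
    return frozenset((x, z) for (x, y) in R for (y2, z) in S if y == y2)

STEP = frozenset({(0, 1), (1, 2)})        # a fixed "step" relation
phi = lambda R: STEP | compose(STEP, R)   # a continuous functional on relations
phibar = lambda R: dagger(phi(dagger(R))) # its conjugate

def fix(f, bottom=frozenset()):
    x = bottom
    while f(x) != x:
        x = f(x)
    return x

assert fix(phibar) == dagger(fix(phi))    # fix of conjugate = dagger of fix
```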
As regards recursion, this theorem underlines the slogan that reversibility is a local phenomenon: to construct the inverse of a recursively defined morphism $\operatorname{fix} \varphi$, it suffices to invert the local morphism $\varphi$ at each step (which is essentially what is done by the conjugate $\overline{\varphi}$) in order to construct the global inverse $(\operatorname{fix} \varphi)^\dagger$.
Parametrized functionals and their external fixed points are also interesting to consider in this setting, as some examples of DCPO-†-categories (e.g., PInj) fail to have an internal hom. For example, in a dagger category with objects $L(X)$ corresponding to "lists of $X$" (usually constructed as the fixed point of a suitable functor), one could very reasonably construe the usual map-function not as a higher-order function, but as a family of morphisms $LX \xrightarrow{\operatorname{map} f} LY$ indexed by morphisms $X \xrightarrow{f} Y$. Indeed, this is how certain higher-order behaviours are mimicked in the reversible functional programming language Theseus (see also Section 4).
To achieve such parametrized fixed points of functionals, we naturally need a parametrized fixed point operator on DcpoOp(C ) satisfying the appropriate equations -or, in other words, we need DcpoOp(C ) to be an iteration category.
That DcpoOp(C ) is such an iteration category follows immediately by its definition (i.e., since DcpoOp(C ) is a full subcategory of DCPO, we can define a parametrized fixed point operator in DcpoOp(C ) to be precisely the one in DCPO), noting that parametrized fixed points preserve continuity.
Lemma 2. DcpoOp(C ) is an iteration category.
For functionals of the form $\mathcal{C}(X, Y) \times \mathcal{C}(P, Q) \xrightarrow{\psi} \mathcal{C}(X, Y)$, we can make a similar definition of a parametrized fixed point adjoint: a functional $\psi^\ddagger$ is parametrized fixed point adjoint to $\psi$ if $\operatorname{pfix} \psi^\ddagger = \overline{\operatorname{pfix} \psi}$. We can now show a similar theorem for parametrized fixed points of functionals and their conjugates:

Theorem 3. Every functional is parametrized fixed point adjoint to its conjugate.
Proof. Let $\mathcal{C}(X, Y) \times \mathcal{C}(P, Q) \xrightarrow{\psi} \mathcal{C}(X, Y)$ be a functional. We start by showing $\overline{\psi}^n(g, q) = (\psi^n(g^\dagger, q^\dagger))^\dagger$ for all $g$, $q$, and $n \in \mathbb{N}$, by induction on $n$. For $n = 0$ we have $\overline{\psi}^0(g, q) = g = (g^\dagger)^\dagger = (\psi^0(g^\dagger, q^\dagger))^\dagger$. Assuming now the induction hypothesis for some $n$, we have $\overline{\psi}^{n+1}(g, q) = \overline{\psi}(\overline{\psi}^n(g, q), q) = \psi(\psi^n(g^\dagger, q^\dagger), q^\dagger)^\dagger = (\psi^{n+1}(g^\dagger, q^\dagger))^\dagger$. Using this fact, together with continuity of the dagger, we now get $(\operatorname{pfix} \overline{\psi})(q) = \sup_n \overline{\psi}^n(\bot, q) = \sup_n (\psi^n(\bot, q^\dagger))^\dagger = ((\operatorname{pfix} \psi)(q^\dagger))^\dagger = \overline{\operatorname{pfix} \psi}(q)$, which was what we wanted. ⊓⊔

Again, this theorem highlights the local nature of reversibility, here in the presence of additional parameters. We observe further the following highly useful property of parametrized fixed points in $\mathrm{DcpoOp}(\mathcal{C})$:

Lemma 3. Parametrized fixed points preserve conjugation, i.e., $\operatorname{pfix} \overline{\psi} = \overline{\operatorname{pfix} \psi}$.

Note that a lemma of this form only makes sense for parametrized fixed points, as the usual fixed point of a functional $\mathcal{C}(X, Y) \to \mathcal{C}(X, Y)$ is a morphism of $\mathcal{C}$ rather than a functional, and so has a dagger but no conjugate.
Naturality and self-conjugacy
We now consider the behaviour of functionals and their parametrized fixed points when they are natural. For example, given a family of functionals $\mathcal{C}(FX, FY) \xrightarrow{\alpha_{X,Y}} \mathcal{C}(GX, GY)$ natural in $X$ and $Y$ (for dagger endofunctors $F$ and $G$ on $\mathcal{C}$), what does it mean for such a family to be well-behaved with respect to the dagger on $\mathcal{C}$? We would certainly want such a family to preserve the dagger, in the sense that $\alpha_{X,Y}(f)^\dagger = \alpha_{Y,X}(f^\dagger)$ in each component $X, Y$. It turns out that this, too, can be expressed in terms of conjugation of functionals.
Lemma 4. A natural family of functionals $\alpha$ preserves the dagger, i.e., $\alpha_{X,Y}(f)^\dagger = \alpha_{Y,X}(f^\dagger)$ in all components, precisely when $\alpha_{X,Y} = \overline{\alpha_{Y,X}}$ in all components $X, Y$.

If a natural transformation $\alpha$ satisfies $\alpha_{X,Y} = \overline{\alpha_{Y,X}}$ in all components $X, Y$, we say that it is self-conjugate. An important example of a self-conjugate natural transformation is the dagger trace operator, as detailed in the following example.
Example 6. A trace operator [29] on a braided monoidal category $\mathcal{D}$ is a family of functionals $\mathcal{D}(X \otimes U, Y \otimes U) \xrightarrow{\operatorname{Tr}^U_{X,Y}} \mathcal{D}(X, Y)$
subject to certain equations (naturality in X and Y , dinaturality in U , etc.). Traces have been used to model features from partial traces in tensorial vector spaces [19] to tail recursion in programming languages [1,8,18], and occur naturally in tortile monoidal categories [29] and unique decomposition categories [17,23].
A dagger trace operator on a dagger category (see, e.g., [36]) is precisely a trace operator on a dagger monoidal category (i.e., a monoidal category where the monoidal functor is a dagger functor) that satisfies $\operatorname{Tr}^U_{X,Y}(f)^\dagger = \operatorname{Tr}^U_{Y,X}(f^\dagger)$ in all components $X, Y$. Such traces have been used to model reversible tail recursion in reversible programming languages [27,28,30], and also occur in dagger compact closed categories (see, e.g., [37]). Given the connections between (di)naturality and parametric polymorphism [39,5], one would hope that parametrized fixed points preserve naturality. Luckily, this does turn out to be the case, as shown in the proof of the following theorem.
Theorem 4. If a functional $\alpha_{X,Y}$ on hom-objects is natural in $X$ and $Y$, so is its parametrized fixed point.
Proof. See appendix.
⊓⊔ This theorem can be read as stating that, just like reversibility, a recursive polymorphic map can be obtained from one that is only locally polymorphic. Combining this result with Lemma 4 regarding self-conjugacy, we obtain the following corollary.

Corollary 1. The parametrized fixed point of a self-conjugate natural transformation is again a self-conjugate natural transformation.
Proof. If $\alpha_{X,Y} = \overline{\alpha_{Y,X}}$ for all $X, Y$ then also $\operatorname{pfix} \alpha_{X,Y} = \operatorname{pfix} \overline{\alpha_{Y,X}}$, which is further natural in $X$ and $Y$ by Theorem 4. But then $\operatorname{pfix} \alpha_{X,Y} = \operatorname{pfix} \overline{\alpha_{Y,X}} = \overline{\operatorname{pfix} \alpha_{Y,X}}$, as parametrized fixed points preserve conjugation.
Applications and future work
Reversible programming languages Theseus [28] is a typed reversible functional programming language similar in syntax and spirit to Haskell. It has support for recursive data types, as well as reversible tail recursion using so-called typed iteration labels as syntactic sugar for a dagger trace operator. Theseus is based on the Π-family of reversible combinator calculi [27], which bases itself on dagger traced symmetric monoidal categories augmented with a certain class of algebraically ω-compact functors. Theseus also supports parametrized functions, that is, families of reversible functions indexed by reversible functions of a given type, with the proviso that parameters must be passed to parametrized maps statically. For example, (if one extended Theseus with polymorphism) the reversible map function would have the signature map :: (a ↔ b) → ([a] ↔ [b]), and so map is not in itself a reversible function, though map f is (for some suitable function f passed statically). This gives many of the benefits of higher-order programming, but without the headaches of higher-order reversible programming.
The presented results show very directly that we can extend Theseus with a fixed point operator for general recursion while maintaining desirable inversion properties, rather than making do with the simpler tail recursion. Additionally, the focus on the continuous functionals of C given by the category DcpoOp(C ) also highlights the feature of parametrized functions in Theseus, and our results go further to show that even parametrized functions that use general recursion not only have desirable inversion properties, but also preserve naturality, the latter of which is useful for extending Theseus with parametric polymorphism.
Quantum programming languages An interesting possibility as regards quantum programming languages is the category $\mathrm{CP}^*(\mathbf{FHilb})$ (see [12] for details on the CP*-construction), which is dagger compact closed and equivalent to the category of finite-dimensional C*-algebras and completely positive maps [12]. Since finite-dimensional C*-algebras are specifically von Neumann algebras, it follows (see [9,34]) that this category is enriched in the category of bounded directed complete partial orders; and since it inherits the dagger from FHilb (and is locally ordered by the pointwise extension of the Löwner order restricted to positive operators), the dagger structure is monotone, too. As such, the presented results ought to apply in this case as well, modulo concerns of boundedness, though this warrants more careful study.
Dagger traces in DCPO-†-categories Given a suitable monoidal tensor (e.g., one with the zero object as tensor unit) and a partial additive structure on morphisms, giving the category the structure of a unique decomposition category [17,23], a trace operator can be canonically constructed. In previous work [30], the author (among others) demonstrated that a certain class of DCPO-†-categories, namely join inverse categories, has a dagger trace under suitably mild assumptions. It is conjectured that this theorem may be generalized to other DCPO-†-categories that are not necessarily inverse categories, again provided that certain assumptions are satisfied.
Involutive iteration categories As it turned out that the category DcpoOp(C ) of continuous functionals on C was both involutive and an iteration category, an immediate question to ask is how the involution functor ought to interact with parametrized fixed points in the general case. A remarkable fact of iteration categories is that they are defined to be cartesian categories that satisfy all equations of parametrized fixed points that hold in the category CPO m of ω-complete partial orders and monotone functions, yet also have a complete (though infinite) equational axiomatization [15].
We have provided an example of an interaction between parametrized fixed points and the involution functor here, namely that $\mathrm{DcpoOp}(\mathcal{C})$ satisfies $\operatorname{pfix} \overline{\psi} = \overline{\operatorname{pfix} \psi}$. It could be interesting to search for examples of involutive iteration categories in the wild (as candidates for a semantic definition), and to see if Ésik's axiomatization could be extended to accommodate the involution functor in the semantic category.
Conclusion and related work
We have developed a notion of DCPO-categories with a monotone dagger structure (of which PInj, Rel, and DStoch ≤1 are examples, and CP * (FHilb) is closely related), and shown that these categories can be taken to be enriched in an induced involutive monoidal category of continuous functionals. With this, we were able to account for (ordinary and parametrized) fixed point adjoints as arising from conjugation of the functional in the induced involutive monoidal category, to show that parametrized fixed points preserve conjugation and naturality, and that natural transformations that preserve the dagger are precisely those that are self-conjugate. We also described a number of potential applications in connection with reversible and quantum computing.
A great deal of work has been carried out in recent years on the domain theory of quantum computing, with noteworthy results in categories of von Neumann algebras (see, e.g., [34,9,26,10]). Though the interaction between dagger structure and the domain structure on homsets was not the object of study, Heunen considers the similarities and differences of FHilb and PInj, also in relation to domain structure on homsets, in [21], though he also notes that FHilb fails to enrich in domains as composition is not even monotone (this is not to say that domain theory and quantum computing do not mix; only that FHilb is the wrong category to consider for this purpose). Finally, dagger traced symmetric monoidal categories, with the dagger trace serving as an operator for reversible tail recursion, have been studied in connection with reversible combinator calculi [27] and functional programming [28].
A.1 Proof of Theorem 4
Suppose that $\alpha$ is natural in $X$ and $Y$, i.e., the corresponding naturality square commutes for all $X, Y$.
Under this assumption, we start by showing naturality of $\alpha^n$ for all $n \in \mathbb{N}$, by induction on $n$. For $n = 0$, the naturality square commutes since $Ff \circ \bot_{X,Y} \circ Fg = \bot_{X',Y'}$ by strictness of composition. Assuming the induction hypothesis for some $n$, the naturality square for $\alpha^{n+1}_{X,Y}(\bot_{X,Y}, p)$ commutes as well, so $\alpha^n$ is, indeed, natural for any choice of $n \in \mathbb{N}$. But then the parametrized fixed point $\operatorname{pfix} \alpha_{X,Y} = \sup_n \alpha^n_{X,Y}(\bot_{X,Y}, -)$ is natural, too, as a directed supremum of natural families. | 2019-04-02T21:42:30.000Z | 2019-04-02T00:00:00.000 | {
"year": 2019,
"sha1": "1b39ddf0decee332688381f77dc1eae70cf72dfd",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1904.01679",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "1b39ddf0decee332688381f77dc1eae70cf72dfd",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics",
"Computer Science"
]
} |
119326950 | pes2o/s2orc | v3-fos-license | Bi-periodic Fibonacci matrix polynomial and its binomial transforms
In this paper, we consider the matrix polynomial obtained by using bi-periodic Fibonacci matrix polynomial. Then, we give some properties and binomial transforms of the new matrix polynomials.
Introduction and Preliminaries
The bi-periodic Fibonacci sequence $\{q_n\}_{n \in \mathbb{N}}$ is defined by
$$q_n = \begin{cases} aq_{n-1} + q_{n-2}, & \text{if } n \text{ is even} \\ bq_{n-1} + q_{n-2}, & \text{if } n \text{ is odd,} \end{cases}$$
where $q_0 = 0$, $q_1 = 1$, and $a, b$ are nonzero real numbers. Also, the bi-periodic Fibonacci matrix sequence $\{F_n\}_{n \in \mathbb{N}}$ is given by the analogous matrix recurrence, where $a, b$ are nonzero real numbers and
$$\varepsilon(n) = \begin{cases} 1, & n \text{ odd} \\ 0, & n \text{ even.} \end{cases} \tag{1.3}$$
In addition to these sequences, other related sequences appear in many branches of science and have attracted the attention of mathematicians (see [1]–[4], [8]–[13] and the references cited therein).
Also, polynomials have attracted the attention of some mathematicians [6,7,14]. In [14], the authors gave the bi-periodic Fibonacci polynomial as
$$q_n(a,b,x) = \begin{cases} axq_{n-1}(a,b,x) + q_{n-2}(a,b,x), & \text{if } n \text{ is even} \\ bxq_{n-1}(a,b,x) + q_{n-2}(a,b,x), & \text{if } n \text{ is odd,} \end{cases} \tag{1.4}$$
where $q_0(a,b,x) = 0$, $q_1(a,b,x) = 1$, and $a, b$ are nonzero real numbers, and they obtained some properties of this polynomial. Hoggatt and Bicknell, in [7], defined the Fibonacci, Tribonacci, Quadranacci, and r-bonacci polynomials; they generalized Fibonacci polynomials and their relationship to the diagonals of Pascal's triangle. In [6], the authors gave the k-Fibonacci polynomials and expressed the derivatives of these polynomials as convolutions of k-Fibonacci polynomials.
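For concreteness, a small SymPy sketch generating the first few bi-periodic Fibonacci polynomials from the recurrence (1.4); the naive recursion is illustrative and fine for small $n$.

```python
# Bi-periodic Fibonacci polynomials q_n(a, b, x) from recurrence (1.4).
import sympy as sp

a, b, x = sp.symbols('a b x')

def q(n):
    if n == 0:
        return sp.Integer(0)
    if n == 1:
        return sp.Integer(1)
    coeff = a * x if n % 2 == 0 else b * x   # a on even n, b on odd n
    return sp.expand(coeff * q(n - 1) + q(n - 2))

print([q(n) for n in range(6)])
# [0, 1, a*x, a*b*x**2 + 1, a**2*b*x**3 + 2*a*x, a**2*b**2*x**4 + 3*a*b*x**2 + 1]
```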
While these sequences and polynomials were being defined, various transforms for the given sequences were also introduced. The binomial transform, the k-binomial transform, and the rising and falling binomial transforms are a few of these transforms (see [5,15]).
In this study, we first introduce the bi-periodic Fibonacci matrix polynomial and give some properties of this polynomial. In Section 3, we obtain a new matrix polynomial by using the bi-periodic Fibonacci matrix polynomial, and we define the binomial, k-binomial, rising, and falling transforms for this matrix polynomial for the first time in the literature. Then, we give the recurrence relations, generating functions, and Binet formulas for these generalized binomial transforms.
The bi-periodic Fibonacci matrix polynomial
In this section, we focus on the bi-periodic Fibonacci matrix polynomial and give some properties of this generalized polynomial. We first define it.
Definition 2.1 For n ∈ N and any two nonzero real numbers a, b, the bi-periodic Fibonacci matrix polynomial F_n(a,b,x) is defined by the recurrence (2.1). The matrix F_1 in Definition 2.1 is the analogue of the Fibonacci Q-matrix for Fibonacci numbers.
Theorem 2.2 Let F_n(a,b,x) be as in (2.1). Then the following equalities are valid for all positive integers, where q_n(a,b,x) is the nth bi-periodic Fibonacci polynomial.
Proof. The desired results can be obtained by iteration.
The Cassini identity for bi-periodic Fibonacci polynomials was obtained in [14]. Using the determinant of F_n(a,b,x) in Theorem 2.2, we recover it again.
Theorem 2.3 For the bi-periodic Fibonacci matrix polynomial, we have the generating function G(t).
Proof. Assume that G(t) is the generating function for the polynomial family {F_n(a,b,x)}_{n∈N}. Expanding the series and rearranging terms gives the desired equality.
Theorem 2.4 For every n ∈ N, the Binet formula for the bi-periodic Fibonacci matrix polynomial can be written in terms of α, β, the roots of the equation $r^2 - abx^2 r - abx^2 = 0$.
Proof. Using partial fraction decomposition, we can rewrite G(t) in terms of simple factors. Since the Maclaurin series expansion of a function of the form $\frac{A - Bt}{t^2 - C}$ is known, the generating function G(t) can be expanded as a power series. Combining the sums, for all n ≥ 0 the coefficient of t^n gives, by the definition of the generating function, the desired formula.
Now, for the bi-periodic Fibonacci matrix polynomial, we give some summations obtained from its Binet formula.
Binomial transforms for Fibonacci matrix polynomial
In this section, we mainly focus on the new matrix polynomial obtained from the bi-periodic Fibonacci matrix polynomial.
Definition 3.1 For n ∈ N, the matrix polynomial A_n(a,b,x) obtained from the bi-periodic Fibonacci matrix polynomial is defined in terms of F_n(a,b,x), where a, b are nonzero real numbers and $\varepsilon(n) = n - 2\lfloor n/2 \rfloor$.
In the following, we introduce the binomial transform and the k-binomial transform of this matrix polynomial, where a, b are nonzero real numbers.
Throughout this section, we will take $k = x\sqrt{ab}$.
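The displayed definitions of these transforms are lost in this extraction. The sketch below implements the binomial, k-binomial, rising, and falling k-binomial transforms of a scalar sequence under the standard definitions used in the binomial-transform literature the paper cites (see [5,15]); assuming these are the intended definitions is ours, and the matrix-polynomial case is obtained by applying them entrywise.

```python
from math import comb

def binomial_transform(seq):
    """b_n = sum_{i=0}^{n} C(n, i) * a_i  (ordinary binomial transform)."""
    return [sum(comb(n, i) * seq[i] for i in range(n + 1)) for n in range(len(seq))]

def k_binomial_transform(seq, k):
    """w_n = k^n * sum_{i=0}^{n} C(n, i) * a_i  (k-binomial transform)."""
    return [k**n * sum(comb(n, i) * seq[i] for i in range(n + 1)) for n in range(len(seq))]

def rising_k_binomial_transform(seq, k):
    """r_n = sum_{i=0}^{n} C(n, i) * k^i * a_i  (rising k-binomial transform)."""
    return [sum(comb(n, i) * k**i * seq[i] for i in range(n + 1)) for n in range(len(seq))]

def falling_k_binomial_transform(seq, k):
    """f_n = sum_{i=0}^{n} C(n, i) * k^(n-i) * a_i  (falling k-binomial transform)."""
    return [sum(comb(n, i) * k**(n - i) * seq[i] for i in range(n + 1)) for n in range(len(seq))]

# Spot check: the binomial transform of the Fibonacci numbers 0, 1, 1, 2, 3, 5, ...
# gives the even-indexed Fibonacci numbers F_{2n} = 0, 1, 3, 8, 21, 55, ...
print(binomial_transform([0, 1, 1, 2, 3, 5, 8, 13]))
```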
Now, we give some properties of the binomial transform of the matrix polynomial A_n(a,b,x).

Theorem 3.3 The binomial transform of the matrix polynomial A_n(a,b,x) verifies the following relations:
where r_1(x), r_2(x) are roots of the equation $r^2 - (x\sqrt{ab} + 2)r + x\sqrt{ab} = 0$.

Proof. We will prove the first two equalities; the others can be proved in similar ways.
(i) By considering the properties of binomial coefficients and making the necessary arrangements, we obtain an expression for b_{n+1}(a,b,x). (ii) Using equality (i) together with the definitions of the binomial and k-binomial transforms, we obtain the claim. Thus, for every n ∈ N, the following equalities are true.
where r_3(x) and r_4(x) are roots of the corresponding characteristic equation. Now, we introduce the rising k-binomial transform of the matrix polynomial A_n(a,b,x).
Definition 3.4
For n ∈ N, the rising k-binomial transform of the matrix polynomial A_n(a,b,x) is defined accordingly, where a, b are nonzero real numbers.

Proof. From Theorem 2.4, making the necessary arrangements, we obtain the result.

Theorem 3.6 For every n ∈ N, the rising k-binomial transform of the matrix polynomial A_n(a,b,x) satisfies a recurrence relation, where $r_0 = \sqrt{b}\,x\,F_0(a,b,x)$.

Proof. For the matrix polynomial A_{2n}(a,b,x), the corresponding relation can be written; therefore, from Theorem 3.5, we find the desired result.
In the following, we introduce the falling k-binomial transform of the matrix polynomial A_n(a,b,x), where a, b are nonzero real numbers, and where $f_0(a,b,x) = \sqrt{b}\,x\,F_0(a,b,x)$ and $f_1(a,b,x) = bx\sqrt{a}\,x\,F_0(a,b,x) + \sqrt{a}\,x\,F_1(a,b,x)$.
• If we take a = b = k in Section 2, we get some properties of the k-Fibonacci polynomial.
• If we choose x = 1 in Section 3, then we obtain some properties of binomial transforms of the bi-periodic Fibonacci matrix sequence and bi-periodic Fibonacci numbers.
Also, for different values of a and b, we obtain some properties of binomial transforms of well-known matrix sequences and number sequences in the literature:
• If we choose a = b = 1, we obtain some properties of binomial transforms of the Fibonacci matrix sequence and Fibonacci numbers.
• If we choose a = b = 2, we obtain some properties of binomial transforms of the Pell matrix sequence and Pell numbers.
• If we choose a = b = k, we obtain some properties of binomial transforms of the k-Fibonacci matrix sequence and k-Fibonacci numbers.
"year": 2017,
"sha1": "4a3c8d3cf198a9c1aebadc07575d300189c6e4c7",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "ef0dc3fa459a053dd063b6da95bec0597abaa1bd",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
An Old Solution for a New Problem: Antiserum against Emerging Infectious Diseases
INTRODUCTION
Heterologous neutralizing serums or antiserums consist of neutralizing antibodies produced mainly in horses or sheep and have been effectively used for more than a century. Antiserums were born in the golden age of microbiology when, in 1890, von Behring and Kitasato showed that the serum of a diphtheria-infected animal confers immunity against the same disease on naive animals (1,2). Four years later, antiserum was used in humans. From that point, this method has always been demonstrated to be highly effective in the treatment of both infection and envenoming. However, antiserums did not have good outcomes with respect to safety in their initial applications, causing many life-threatening side reactions (3). Currently, in many applications, heterologous serums have been replaced by other drugs, such as antibiotics or homologous serums. However, in the case of envenoming from snakebites, scorpions and arachnids, antiserums remain the only effective treatment (4). In recent applications, antiserums have demonstrated a good safety profile, with <15% of patients having mild adverse reactions and <1% having severe reactions (4-6). The only weakness antiserums have is that, like most biological products, the induced reactivity in patients generates antibodies against the antiserum (7). This weakness causes the effectiveness and safety to be compromised in successive treatments; in other words, heterologous serums can only be used once.
HETEROLOGOUS SERUMS AS ANTIMICROBIALS
In the first decades of the twentieth century, before the advent of antibiotics, heterologous serums were the best treatment choice against infectious diseases (8,9). Many diseases were treated with heterologous serum with high effectiveness but with variable safety. For example, in 1904, a Neisseria meningitidis epidemic in New York City was controlled with a heterologous specific serum, decreasing the mortality by one-third (10).
Later in the twentieth century, antiserums began to be displaced by drugs with better safety profiles, antibiotics, and vaccination. However, for the treatment of envenomation, tetanus, diphtheria, and rabies, antiserums have seen continued successful use. Currently, the treatment for tetanus and diphtheria has been changed from antiserums to homologous serums obtained from healthy human donors, but in many countries, antiserums remain the only option for such treatment. In the case of snakebite and other envenomations, the antiserum is the only effective treatment.
FROM NOW ONWARD
For many emerging diseases, such as the Ebola virus, the risk-benefit equation for the use of antiserum appears to be highly tilted toward benefit. Additionally, it is necessary to use the antiserum only once because surviving patients demonstrate immunity after first contact. Another benefit is the low production cost, which makes this type of drug affordable for most countries (6,11,12). Unfortunately, while vaccination, monoclonal and homologous antibodies became the most popular solutions, antiserums had less success. Against many emerging diseases and future threats, however, antiserums could have a chance. Dixit and coworkers (6) have proposed antiserum as a possible solution to avian influenza, MERS-CoV, and viral hemorrhagic fevers. Heterologous serum also has the advantage that it can be made using recombinant proteins, which avoids the risk of manipulating the infective pathogen during the production stage.
CONCLUSION
Antiserums are an old drug with more than a century of use. Perhaps for this reason, most pharmaceutical companies and scientists consider this type of drug obsolete and opt for the new generation of antibodies (monoclonal, humanized).
However, antiserums still have some advantages over monoclonal antibodies, such as shorter product development time and, above all, reduced costs in development and production. In summary, antiserums can be a good option for the treatment of emerging infectious diseases when other drugs are unavailable.
AUTHOR CONTRIBUTIONS
The author confirms being the sole contributor of this work and approved it for publication.
"year": 2016,
"sha1": "3691c1e9130028008093d3d62e16a43883631cae",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fpubh.2016.00178/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "3691c1e9130028008093d3d62e16a43883631cae",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
Understanding the science of fungal endophthalmitis - AIOS 2021 Sengamedu Srinivas Badrinath Endowment Lecture
Fungal endophthalmitis is a potentially blinding condition. It is more often reported from Asia, including India. The incidence is lower than bacterial endophthalmitis. But it is relatively more challenging to treat than bacterial endophthalmitis. Many eyes may need therapeutic keratoplasty and/or evisceration. The current mainstays of treatment are vitrectomy irrespective of the presenting vision, intravitreal antifungal agents, and systemic therapy; additionally, the patients could require prolonged treatment with repeat vitreous surgeries and intravitreal injections. Difficulty in clinical diagnosis, delay in microbiological culture, and limited options of antifungal drugs make the treatment more difficult and less rewarding. Three common fungi causing endophthalmitis are Aspergillus, Fusarium, and Candida. The former two are molds, often identified in exogenous endophthalmitis, postoperative and traumatic; the latter is yeast and is more often identified in endogenous endophthalmitis. A faster diagnosis with newer molecular microbiological technologies might help institute treatment earlier than it is currently possible. A target trial using big data from different regions of the world might emulate a randomized clinical trial to design a definite treatment strategy. Given fewer antifungal drugs, one must be mindful of antifungal stewardship to prevent resistance to the existing drugs.
Fungi are eukaryotic organisms and are ubiquitous. Three types of fungi (molds, yeasts, and diphasic fungi) mainly cause ocular infections. The molds are filamentous fungi and could be septate or nonseptate. There are 1.5-5 million species of fungi, which can grow almost anywhere: water, soil, plants, and animals. Some fungi form spores that we inhale (e.g., Aspergillus and Fusarium sp.), and others live as human commensal organisms (e.g., Candida and Malassezia sp.). Despite these close encounters, our immune system recognizes and protects us from fungal infections. The primary pathogenic fungi usually have an environmental reservoir. Opportunistic pathogens take advantage of debilitated or immunocompromised hosts to cause infection. [1] Not all fungi have innate pathogenicity. It is acquired from the environment, or the fungi may be endogenous in the few instances where they are members of the resident flora. The pathogenesis of fungal disease involves an interplay between fungal virulence factors and host immune responses.
Three key factors define the outcome of the infecting fungi: the infectivity, the pathogenicity, and the virulence. Infectivity is an organism's ability to infect a host. Exposure may lead to carriage (colonization) or symptomatic disease, transient or chronic. Pathogenicity is the ability of an organism to cause disease that depends on the human host-pathogen interactions. Virulence is the relative capacity of the microbe to cause damage to the host, reflected by the ability of the pathogen to multiply within the host and the virulence factors.
The primary fungal pathogens can cause disease in noncompromised patients. But most pathogenic fungi are opportunistic and do not usually cause disease unless there are alterations in immune defense. Immunosuppressive drugs, human immunodeficiency virus (HIV) infection, and excessive use of systemic antibiotics could result in such alterations. In these situations, the opportunistic pathogens produce virulence factors that allow the organisms to grow as commensals as long as the host is healthy and to become pathogenic when the host's immune system is lowered. The virulence factors allow the fungi to grow at elevated temperatures (up to 42°C), with increased ability for tissue adherence, tissue penetration, and dissemination. The innate immune system is the first line of defense against pathogens. The innate cells use genetically inherited receptors, the pattern recognition receptors (PRRs), to recognize the conserved pathogen-associated molecular patterns (PAMPs) present in nearly all microorganisms. Signaling downstream, the PRRs activate the cellular killing mechanisms and shape the adaptive immune responses. [2,3] The most important PAMPs in filamentous fungi are mannan, β-glucan, and chitin.
Infection occurs when fungi accidentally penetrate barriers (such as intact skin and mucous membrane linings) or breach the immunological defense (such as the immunocompromised state, debilitating conditions of the host). Fungi also gain access to the host tissues after penetrating trauma or inhalation. The severity of the disease depends upon the size of the inoculum, the magnitude of tissue destruction, the ability of the fungi to multiply in tissues, and the immunologic status of the host. [4] Fungal infection, including endophthalmitis of the eye, occurs more often in tropical climates, including India. It is more challenging to treat than a bacterial infection. One must understand the science of fungal endophthalmitis to recognize early and treat effectively. This review examines various related issues of fungal endophthalmitis.
Current Knowledge

Epidemiology
Common ocular fungal infections are keratitis and endophthalmitis; rarer ones are orbital infection and dacryoadenitis [Table 1]. Fungal endophthalmitis is more challenging to treat. It is reported less frequently from Europe and North America [5,6] than from Asia. [7,8] India has reported more fungal endophthalmitis than other Asian countries (postoperative endophthalmitis 16.7%, and traumatic endophthalmitis 14.4%). [9,10] Aspergillus and Fusarium sp. are commonly isolated in exogenous fungal endophthalmitis; the patients are usually immunocompetent. The risk factors in endogenous infection include a history of recent hospitalization, diabetes mellitus, renal or liver failure, indwelling intravenous lines, catheterization, organ transplantation, intravenous drug use, and immunosuppressive (particularly corticosteroid) treatment. [11,12] Yeasts (Candida sp.) are more often isolated than molds in endogenous endophthalmitis. The common fungi identified in a large series of 723 fungal endophthalmitis cases collected across India were Aspergillus sp. (37.06%), Fusarium sp. (16.87%), and Candida sp. (10.65%). [13]

Pathobiology

The mechanical barriers of the eye that prevent infection are the eyelids, eyelashes, the blink reflex, the tear film, the nonkeratinized squamous epithelium of the conjunctiva and cornea, and the lacrimal excretory system. A breach in these barriers, whether by trauma, surgery, or extension of fungal keratitis (including corneal perforation), causes exogenous endophthalmitis. The endogenous infection spreads from the choroidal capillaries to the vitreous through a disrupted Bruch's membrane. In either situation, there is acute suppurative inflammation and necrosis of the vitreous. Dense infiltration of polymorphonuclear leukocytes forms microabscesses and a foreign body type of multinucleated giant cell reaction with or without granuloma formation [14] [Figs. 1 and 2].
The polymorphonuclear leukocytes can ingest and kill microorganisms by two main pathways: oxygen-dependent and oxygen-independent. The oxygen-dependent pathway is based on the post-phagocytic intracellular production of oxygen radicals. The oxygen-independent pathway is based mainly on the function of antimicrobial proteins, the "defensins." Defensins are peptides that possess broad-spectrum antimicrobial activity in vitro, killing a variety of gram-positive and gram-negative bacteria and some fungi. [15] Aspergillus flavus and Aspergillus fumigatus are the common pathogenic fungi reported from India. Aspergillosis occurs in people with chronic pulmonary diseases, organ transplants (liver, renal, and bone marrow), leukemia, and drug abuse. The fungus usually gains access to the eye as it spreads from the lungs to the choroid and invades the retinal and choroidal vessel walls. Histologically, Aspergillus grows preferentially along the subretinal pigment epithelium and subretinal space; the visual outcome is invariably poor because of the preferred macular involvement. [16] Candida is an opportunistic pathogenic yeast and becomes pathogenic in various conditions. The common species infecting humans are C. albicans, C. tropicalis, C. parapsilosis, and C. glabrata. Fusarium is a filamentous fungus found in soil and on plants. Fusarium keratitis is more common than endophthalmitis. It could occur de novo or as an extension of a nonhealing corneal ulcer; when it happens, it is destructive because the fungus produces extracellular proteases that cause matrix degeneration. [17]

[Table 3 fragment: Fusarium, n=14: 1.0-4.0, 2.0-8.0; Hyderabad, India** [48]: Candida, n=12: 0.047-0.125, 0.032-1.0; *endophthalmitis; **keratitis]
Clinical diagnosis
The symptoms and clinical signs of fungal endophthalmitis often mimic chronic bacterial endophthalmitis. The typical characteristics are longer time to symptoms, lower frequency of hypopyon, and indolent inflammation. A history of long-time topical corticosteroid use to reduce recurrent and persistent redness and inflammation is not uncommon. The signs of more typical fungal endophthalmitis include yellow-white infiltrates at the corneoscleral wound (in cataract surgery), nodular exudates over the iris, and crystalline/intraocular lens surface, and vitreous exudates arranged like a string of pearls and creamy white circumscribed chorioretinal lesion [18] [ Fig. 3].
Time to symptoms is shorter in traumatic endophthalmitis; injury with vegetative matter and/or a retained intraocular foreign body may be a pointer to the diagnosis of traumatic endophthalmitis. Hematogenous seeding from an underlying systemic disease or recent surgery/catheterization is common in patients with endogenous fungal endophthalmitis. [19] Systemic investigations are essential in endogenous endophthalmitis and possibly in other forms of fungal endophthalmitis too. These investigations include complete blood count, serum urea and electrolytes, liver function tests, peripheral blood culture, sputum and urine culture, chest radiogram, liver ultrasonogram, and transthoracic echocardiogram.
Laboratory confirmation
Histopathology of formalin-fixed tissue and microbiology of ocular fluid are the two primary sources of laboratory confirmation. In histopathology, the inflammatory cells are best seen by haematoxylin and eosin (H and E) stains [Figs. 1 and 2]. The fungal hyphae appear as refractile to pale acidophilic filaments, thin or broad, septate or aseptate, and with or without branching. These hyphae are highlighted by special stains like Periodic Acid Schiff (PAS) and Gomori Methenamine Silver (GMS) [Figs. 1 and 2], and Masson Fontana stains. The latter stain helps distinguish pigmented fungi from the rest. However, the characterization of the fungus on histopathology is limited, and hence it needs further ancillary techniques or microbiological culture correlation. [20] The ancillary techniques include immunohistochemistry with antibodies (such as an anti-Aspergillus antibody), fluorescence in-situ hybridization (FISH), and real-time quantitative PCR (qPCR) on formalin fixed paraffin-embedded tissues.
The microbiology confirmation of fungi includes direct microscopy, culture, polymerase chain reaction (PCR), and DNA sequencing. Direct microscopy of the ocular specimen is rapid and is the most commonly employed method. Sabouraud's dextrose agar (SDA) and potato dextrose agar (PDA) are selective media for fungi, while chocolate agar/blood agar/brain heart infusion broth could also be used. SDA and PDA are incubated at 25°C for 2-4 weeks [ Fig. 4].
Molecular microbiology
The value of conventional smear and culture of intraocular fluids is never denied, though it may take a long time, sometimes 1-2 weeks, for fungi to grow in culture. The fungi may also be sequestered under the lens/capsular bag, making the detection of these organisms difficult; [21] it is not surprising that such cases are often reported as culture-negative. Advances in molecular microbiology have improved microbiological diagnosis. Some of these methods include polymerase chain reaction (PCR) and real-time PCR, matrix-assisted laser desorption/ionization-time of flight (MALDI-TOF), and peptide nucleic acid fluorescent in situ hybridization (PNA FISH).
Polymerase chain reaction is based on DNA polymerase; it is an in vitro replication of specific DNA sequences. Pan-fungal primers complementary to the 18S rRNA sequences or the 28S rDNA have been reported previously, but the most commonly used probes are the ITS1 and ITS4 (corresponding to the internal transcribed spacer region, ITS2, of the ribosomal small subunit RNA for fungi) primer sets. [22,23] These tests are highly specific and sensitive. The ITS2-gene-based PCR detects more infecting fungi than conventional culture in fungal endophthalmitis. [24] Various methods are available for species identification following the pan-fungal PCR, such as DNA sequence analysis, amplified fragment length polymorphism (AFLP), and restriction fragment length polymorphism (RFLP). The quantitative real-time PCR quantifies the fungal load in ocular fluids. [25]

MALDI-TOF is an ionization technique that uses a laser-energy-absorbing matrix to create ions from large molecules with minimal fragmentation. Compared to the conventional methods, MALDI-TOF is faster and more sensitive in detecting the infecting microorganism in endophthalmitis. [26] PNA FISH is a technique whereby DNA probes labeled with fluorophores are attached to a target DNA for identification. The FISH technique has been used for over two decades for the detection of genetic disorders. This technique has shown promising application in the detection of fungi and infectious endophthalmitis. [27,28]

Antifungal drugs

Five families of antifungals are extensively used to treat human fungal infections, but only a handful of them are used to treat ocular fungal infection due to the lack of therapeutic concentration obtained in the ocular tissues. These are: (1) polyenes, represented by amphotericin B; (2) azoles, with several derivatives such as imidazoles (miconazole, econazole, ketoconazole) and triazoles (itraconazole, fluconazole, voriconazole, posaconazole); (3) echinocandins (such as caspofungin, micafungin, anidulafungin); (4) flucytosine, a pyrimidine analogue; and (5) allylamines (such as terbinafine). The three general mechanisms of action of the antifungal agents are cell membrane disruption, cell division inhibition, and inhibition of cell wall formation. [29] Most of these drugs inhibit the synthesis of, or directly interact with, ergosterol, which is the predominant component of the fungal cell membrane.
The antifungal agents are usually fungistatic in the concentrations used in clinical practice but are fungicidal in higher concentrations; some exhibit fungicidal action against selective fungi in a dose-dependent manner. Systemic therapy with antifungal agents is also known to cause systemic adverse effects. Therefore, intravitreal injection is the primary route of administration of these agents, often supplemented with systemic (more often) and topical application of antifungal agents [30] [ Table 2].
The choice of antifungal agents for intravitreal therapy is currently confined to only two molecules: amphotericin B and voriconazole. Following intravitreal injection, the half-life (t½) of amphotericin B in the vitreous of noninflamed phakic eyes is 8.9 days and in the aphakic vitrectomized eye 1.8 days. Yeasts and filamentous fungi are susceptible to amphotericin B, but many species of Aspergillus are resistant too. The half-life of voriconazole in the vitreous of noninflamed phakic eyes is 2.5-6.5 h and in aphakic vitrectomized eyes is 2.5 h; it has broad-spectrum activity against molds and yeasts. [29,31] Bioavailability following systemic treatment is superior with voriconazole than with amphotericin B. Voriconazole can be used by all three routes: intracameral, intravitreal, and oral. [32,33] It is most effective against many Candida, Aspergillus, and Cryptococcus species.
Echinocandins (such as caspofungin, micafungin, and anidulafungin) have antifungal activity against Candida and Aspergillus species. [34] They exhibit their antifungal activity by inhibiting 1,3-β-D-glucan synthase, an enzyme specifically involved in fungal cell wall synthesis. Due to this target-specific activity, echinocandins could be an ideal antifungal therapy, though there are reports of both successful and unsuccessful treatment of endophthalmitis after intravitreal caspofungin. [35,36]

Treatment

Two important interventions of proven efficacy in the treatment of fungal endophthalmitis are vitrectomy and intravitreal antifungal drugs. The treatment protocol for fungal endophthalmitis differs from that for bacterial endophthalmitis in early vitrectomy, more than one intravitreal injection, and frequent use of systemic antifungal therapy. [37] In recalcitrant cases of post cataract surgery fungal endophthalmitis, explantation of the intraocular lens along with the lens capsule could be beneficial. [38] The visual outcome of fungal endophthalmitis is poorer than that of bacterial endophthalmitis.
Achieving adequate concentrations of antifungal drugs in the infected tissues is crucial to treatment success. In refractory disease, intravitreal antifungal injections may be repeated after vitrectomy. Systemic voriconazole has good intraocular bioavailability and may need a long treatment course of 6-8 weeks. [39] Topical natamycin could be used in eyes with associated keratitis.
An analysis of 730 consecutive cases of fungal endophthalmitis collected from large tertiary eye care facilities across India (doi: 10.1016/j.oret.2021.09.006) showed that Aspergillus species were the most common infecting microorganisms across the causative events, time to symptoms was longest in postoperative endophthalmitis, less than half of the eyes had hypopyon at presentation, nearly every eye required vitreous surgery, and each eye required multiple intravitreal injections of antifungal agents. Additionally, there was a variable need for therapeutic keratoplasty. Despite treatment, at least a third of the eyes became blind, and up to 6% of eyes required evisceration.
Maximum benefits are derived from two interventions instituted together: intravitreal therapy with antifungal agents and vitrectomy. [37,40] Systemic therapy is required in many instances and, when begun (with renal and liver function tests), must be continued for 4-6 weeks. Based on the recommendation of the international committee of the Intraocular Inflammation Society (IOIS), the Bristol Eye Hospital has proposed systemic antifungals alone for mild vitritis and a combination of intravitreal antifungals and vitrectomy for moderate to severe vitritis. [41] In clinical practice, three situations arise. Situation one: bacterial endophthalmitis is suspected, but the vitreous culture grows fungus; in this situation, the treating physician switches to antifungal drugs with intravitreal and systemic therapy. It might also call for vitrectomy if not performed before, or additional vitreous surgery (vitreous lavage) if it was done earlier. Situation two: fungal infection is suspected primarily and treated with vitrectomy and antifungal drugs from the beginning. Situation three: infective endophthalmitis is suspected clinically and treated either as bacterial or fungal endophthalmitis, but the vitreous culture does not grow any organism.
SARS-CoV2 and Endophthalmitis
Endogenous endophthalmitis is not uncommon in hospitalized patients treated for SARS-CoV2 (COVID-19) viral infection. In our analysis of 24 consecutive patients (33 eyes), time to symptoms was an average of 15 (range 6-72) days after discharge from the designated hospitals; over 90% (n = 22) of patients had multiple pre-COVID-19 systemic co-morbidities, and over 66% of patients were admitted to the intensive care unit (ICU). The commonest systemic disease was diabetes mellitus (87.5%). At presentation, the mean presenting vision was <20/400, and over 69% had a complete vitreous abscess. Corticosteroids are known to cause immunosuppression and increase the risk of bacterial/fungal infection. [42] Broad-spectrum antibiotics kill the bacteria and allow growth and multiplication of the commensals, including the yeasts. [43] The IL-6 inhibitors (such as tocilizumab) impair the function of neutrophils, macrophages, and T cells, thus increasing the risk of fungal infection. [44] In our cohort (doi: 10.4103/ijo.IJO_1474_21) in South India, Candida sp. was the commonest isolated fungus in endogenous endophthalmitis in people with SARS-CoV2 infection; a similar trend was recently reported in five patients (seven eyes) from North India. [45]

Knowledge Gap

Antifungal susceptibility and resistance

Minimum inhibitory concentration (MIC) is the most common test used for antibiotic susceptibility. It is defined as the lowest concentration of an antimicrobial agent that prevents the visible growth of microorganisms. It is determined by the E-test or the microbroth dilution method as per the Clinical and Laboratory Standards Institute (CLSI) guidelines. MIC testing for antifungal agents is not routinely performed. In addition, colorimetric flow cytometry and ergosterol quantitation are available to measure the MIC of antifungal agents. Ergosterol is the major sterol component of the fungal cell membrane and is responsible for maintaining cell integrity and function. MIC breakpoints are available for amphotericin B, fluconazole, itraconazole, voriconazole, and flucytosine against Candida and some species of filamentous fungi. [46] Similar to the MIC is the minimum fungicidal concentration (MFC). MFC is defined as the lowest drug concentration that achieves ≥98%-99.9% killing of particular fungi. MFC correlates better with clinical outcomes. A comparative susceptibility of three common fungi tested at Miami, USA [47] and Hyderabad, India [48] against two commonly used antifungal drugs in ocular fungal infection is shown in Table 3.
Antifungal resistance
Resistance to antifungal drugs is not uncommon. It occurs through a variety of mechanisms, including (1) nonsynonymous point mutations within the gene encoding the target enzyme (leading to an alteration in the amino acid sequence), (2) increased expression of the target enzyme through increased transcription of the gene encoding it, (3) decreased concentrations of the drug within the fungal cells due to drug efflux, and (4) changes in the biosynthetic pathway resulting in reduced production of the target of the antifungal agents. [49]

Biofilm and antifungal resistance

Biofilm is one of the major causes of resistance to various antibiotics. [50] Structurally, a biofilm is a slimy layer of an extracellular matrix made of polymeric substances produced by microorganisms. This forms an architectural colony providing resistance not just against antibiotics but also against the human immune system. The role of biofilm has been studied in various ocular conditions, both implant-associated (such as intraocular lenses, scleral buckles, punctal plugs, and lacrimal intubation devices) and nonimplant-associated pathologies (such as keratitis, chronic dacryocystitis, and endophthalmitis). [51] The potential to form biofilms has been demonstrated in some ocular fungi (such as Aspergillus fumigatus, Candida albicans, Fusarium solani, Cladosporium sphaerospermum, and Acremonium implicatum). [52][53][54][55] Our group has reported one corneal isolate of C. albicans, resistant to three antifungal drugs, as a biofilm producer; the thickness of the biofilm, measured by scanning electron microscopy (SEM), increased from a monolayer/bilayer of cells at 24 h to a layer more than 7 cells thick at 72 h. These biofilm cells were less sensitive than non-biofilm cells, tolerating up to 200× the MIC of the antifungal agents. [56] Similar mechanisms may act in vivo, resulting in poor outcomes in patients with infectious endophthalmitis.
Antifungal stewardship
Antimicrobial stewardship (AMS) is a coordinated program that promotes the appropriate use of antimicrobials, improves patient outcomes, reduces microbial resistance, decreases the spread of infections caused by multidrug-resistant organisms, and finally reduces the cost of care. AMS is defined as "the optimal selection, dosage, and duration of antimicrobial therapy that results in the best clinical outcome for the treatment or prevention of infection, with minimal toxicity to the patient and minimal impact on subsequent resistance." [57] AMS is more relevant now as fewer new antimicrobials are introduced every year, and there is a need to conserve what we have without developing resistance to these drugs. Principally, there are three goals of AMS. [58] These are (1) right treatment (the 4Ds [59]: right Drug, right Dose, right Directed therapy, right Duration); (2) prevention of antimicrobial overuse, misuse, and abuse; and (3) minimization of the development of resistance.

Investigating culture-negative endophthalmitis

A common limitation of conventional microscopy and culture is culture-negativity. This is more of a possibility in a tertiary eye care setting, where the patients are often referred after receiving intravitreal and systemic antibiotics. [69] Additionally, the classic clinical characteristics could be masked by delayed presentation. While direct sequencing and PCR of the ITS region can be applied to clinical specimens to detect the presence of microorganisms, low pathogen loads or polymicrobial infections make it challenging to differentiate ambiguous signals from mixed chromatograms, and the sequences often remain unidentified or misidentified.
Next-generation sequencing (NGS) is a novel platform that can simultaneously detect and independently sequence virtually all the DNA sequences of the infectious agents present in a sample. [70] A culture-free platform using targeted NGS of the ITS2 region would be ideal for bridging the divide between conventional microbiological methods and whole-genome sequencing. The NGS is less complicated than whole-genome sequencing and, because it is also relatively less expensive, could possibly be used in diagnostic laboratories.
Targeted NGS refers to the selective capture or amplification of specific genomic regions of interest before subjecting them to massively parallel sequencing. Targeted NGS provides better sensitivity and specificity, in addition to the ease of downstream analysis; it also lowers the cost by allowing more samples to be tested in one run. [71] We have shown that targeted NGS is a good tool for microbial research in culture-negative endophthalmitis, with a 71.9% rate of detection of fungal pathogens in culture-negative samples. Targeted NGS is also more efficient in detecting polymicrobial infection. The NGS could become the future diagnostic tool in routine ocular microbiological laboratories when the procedures and the bioinformatics are better standardized and validated. A reduced cost will bring additional benefits.
Real-world data and evidence
Real-World Data (RWD) are "data relating to patient health status and/or the delivery of health care collected routinely from various sources." Real-World Evidence (RWE) is the "clinical evidence about the usage and potential benefits or risks of a medical product derived from analysis of RWD." [72] Classically, randomized controlled trials (RCTs) are considered the gold standard for demonstrating product or procedure efficacy for regulatory approval or for creating evidence for clinical care. They provide much-needed information to both treating physicians and patients to make scientific judgments and informed choices. RCTs are always a good investment; they pay back the money spent and improve the quality of life. [73] The evidence generated from observational studies of RWD is often considered inferior because of nonrandomized treatment assignment and less rigorous data collection that could compromise internal validity. But, as personalized medicine becomes increasingly common, patient recruitment into RCTs will be affected, and sometimes it is not possible to include a control arm. Efforts are made to make the observational data collected through RWD address research questions where a traditional RCT may be unfeasible or unethical.
Target Trial is one such approach. [74] It closely emulates RCTs. It navigates through two steps: step 1, a causal question is asked (as in an RCT); step 2, the causal question is answered through a suitable RWD (instead of an RCT) [75] [Fig. 5]. The target trial uses the RWD to answer the causal question and create evidence nearly similar to that of an RCT. The most important application is to use RWD to answer questions about fungal endophthalmitis management, as an RCT is nearly impossible due to the worldwide paucity of cases, the time required to recruit an adequate number of patients, and the cost needed for such a study.
Life-threatening fungal diseases such as invasive Aspergillus and Candida infections are associated with high mortality. [60,61] In addition, indiscriminate use of antifungal agents and widespread agricultural antifungal exposure have resulted in the spread of resistant fungal pathogens for one or more antifungal drugs. Antifungal stewardship (AFS) is responsible for the appropriate usage and conservation of antifungal drugs. The core principles of AFS are similar to AMS; the three principal concerns are: (1) the physicians have less opportunity to switch therapy because susceptibility tests are not done routinely; (2) there is a limited choice of antifungal drugs; (3) there is no well-defined endpoint. [62,63]
Inflammatory markers
Recent studies have evaluated the use of galactomannan (GM) and 1,3 β-D-glucan (BDG) biomarkers as ancillary tests to diagnose invasive fungal endophthalmitis using commercial ELISA Kits.
1,3-BDG is a major polysaccharide cell wall component in many fungal species, including Candida and Aspergillus sp. An elevated BDG level in the vitreous fluid of patients with endogenous fungal endophthalmitis has been reported. [64] It has also been suggested that testing BDG values in the vitreous fluid could be more sensitive than culture methods for diagnosing fungal endophthalmitis. 1,3-BDG is also released into the bloodstream in invasive fungal diseases. [65] Hence, there could be additional value in measuring the serum BDG level to monitor disease progression and prognosis in endophthalmitis. [66,67] GM is a cell wall component mainly of Aspergillus sp., and its detection via enzyme immunoassay is part of the diagnostic criteria for invasive aspergillosis. A recent report has explored its use in the diagnosis of A. fumigatus in the vitreous sample and recommended its assay when the standard mycology is negative and pan-fungal PCR is not available. [68] Our data (unpublished) confirm significantly higher levels of vitreous GM in patients with culture-proven fungal infections than in patients with noninfectious retinal disorders. The area under the ROC curve (AUC) for GM was 0.81, with a sensitivity of 0.88 and a specificity of 0.73 for a cut-off value of 51.36 pg/ml, and the AUC for BDG was 0.93 (95% CI: 0.84-1.0), with sensitivity and specificity of 0.94 and 0.82, respectively, for a cut-off value of 1.19 pg/ml. Therefore, these tests could be considered in conjunction with clinical and microbiological tests. The added advantage is that the results of these tests are available within 2-3 h, compared to several days for conventional culture.
RCTs and RWE generated from RWD are complementary, and each contributes valuable information about patient outcomes. Rigorously collected data is the key. Well-designed and conducted observational studies may offer valuable information that complements the evidence from clinical trials. Advances in statistical approaches to the causal estimation of treatment effects, and experience using RWD and RWE, may increase our confidence in observational studies of treatment effectiveness. [76] An analysis of 7 years of real-world data (May 2014-April 2021) on 256 consecutive patients with culture-proven fungal endophthalmitis, collected from our electronic medical record (EMR), has added some additional information, as follows: (1) the elderly age group (51-70 years) was more prone to fungal infection; (2) the age-specific principal events of fungal endophthalmitis were trauma in the "younger" age group up to 20 years, endogenous infection in the "mid-adult" age group of 21-40 years, and postoperative infection in the "elderly" age group of 71 and above; (3) keratitis progressed to fungal endophthalmitis in 12% to 18% of instances.
Ocular surface, Mycobiome, and fungal endophthalmitis
The microbes on the ocular surface are one of the important sources of exogenous endophthalmitis. The current practice of topical 5% povidone-iodine application on the conjunctival surface aims to reduce the microbial load of the ocular surface. [77] Many investigators have studied the ocular surface bacterial flora in health and disease. In general, more microorganisms reside on the lids than on the conjunctiva. A variation in ocular surface microbiota is expected between individuals and between people from different countries. The most common cultured microorganisms include coagulase-negative Staphylococci (S. epidermidis is most common) and Propionibacterium sp.; less common ones are Micrococcus sp. and Corynebacterium sp., and the least common are the gram-negative bacteria. [78] These bacteria cause endophthalmitis in conducive conditions. The ocular surface fungi are not studied as much as the bacteria. Normally, fungi are not residents of the human eye but are acquired from the surrounding. Fungi usually colonize on the lid margins and conjunctiva. Aspergillus sp. is the most common fungal isolate from the conjunctiva of healthy individuals reported from India. [79] In healthy individuals, the conjunctival fungal flora has been determined using both conventional culture and culture-independent (NGS, using internal transcribed spacer 2, ITS2, sequencing as a proxy for fungi) methods.
These resident conjunctival fungal flora could also cause exogenous fungal endophthalmitis, similar to the resident bacterial flora. Conjunctival disinfection significantly impacts the occurrence of post cataract surgery endophthalmitis, and the contact kill time of the currently used 5% povidone-iodine is less than 30 s for the common infecting fungi. [81]

Conclusion

Globally, over 300 million people are afflicted with a severe fungal infection, and 25 million are at high risk of dying or losing their sight. It is more frequent in South-East and South Asia. [82] Fungal keratitis is more frequent than other fungal infections of the eye, and there is no published data on the global incidence of fungal endophthalmitis. Assessment of the global burden and epidemiologic trends of fungal diseases is critical to prioritizing prevention strategies, diagnostic modalities, and therapeutic interventions. But quantifying the global burden of fungal diseases is challenging. Fungal diseases are often difficult to diagnose because they are not routinely suspected. The difficulty is further accentuated because fungi do not always grow in culture, histopathologic identification is challenging, and fungal antibody tests may cross-react. [83] In the absence of a global incidence report and a randomized clinical trial, there is no universally accepted diagnosis and management protocol for fungal endophthalmitis. This opens new opportunities to test new technologies for managing this difficult disease, including newer molecular techniques, next-generation sequencing, and real-world data. The world also needs newer antifungal drugs, avoidance of the indiscriminate use of antibiotics and immunosuppressives, and antifungal stewardship.
Financial support and sponsorship

TD-Hyderabad Eye Research Foundation (2021).
Conflicts of interest
There are no conflicts of interest.
"year": 2022,
"sha1": "fad5308a221611eb009df1bdf22359f446dcc245",
"oa_license": "CCBYNCSA",
"oa_url": "https://doi.org/10.4103/ijo.ijo_2329_21",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "8b206860fe5b4b022fcbe69e32e95939aefabc7a",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
A Predictive Coding Account for Chaotic Itinerancy
As a phenomenon in dynamical systems allowing autonomous switching between stable behaviors, chaotic itinerancy has gained interest in neurorobotics research. In this study, we draw a connection between this phenomenon and the predictive coding theory by showing how a recurrent neural network implementing predictive coding can generate neural trajectories similar to chaotic itinerancy in the presence of input noise. We propose two scenarios generating random and past-independent attractor switching trajectories using our model.
Introduction
Chaotic Itinerancy (CI) describes the behavior of large non-linear dynamical systems consisting of chaotic transitions between quasi-attractors [14,7]. It was first observed in a model of optical turbulence [4], using globally coupled maps in a chaotic system [6], and in high-dimensional neural networks [14]. From a neuroscientific point of view, this phenomenon is interesting as such systems exhibit complex behaviors that usually require a hierarchical structure in neural networks. Studying CI could help us better understand the mechanisms responsible for the emergence of structure in large populations of neurons.
In cognitive neuroscience, it is believed that attractors or quasi-attractors could represent perceptual concepts or memories, and that cognitive processes such as memory retrieval or thinking would require neural trajectories transitioning between such attractors. CI is also gaining interest in neurorobotics, as it allows designing agents with the ability to autonomously switch between different behavioral patterns without any external commands. Several studies have tried to model CI with learned attractor patterns. [15,10] propose a method where this functional structure emerges from a multiple-timescale RNN. Behavioral patterns are encoded in a rapidly varying recurrent population while another population with a longer time constant controls transitions between these patterns. [5] models CI, using reservoir computing techniques [9], with the interplay
between an input RNN and a chaotic RNN where desired patterns have been learned with innate trajectory training [8].
In this work, we try to model the attractor switching behavior of CI with an RNN implementation taking inspiration from the Predictive Coding (PC) theory. We propose a model performing random and past-independent transitions between stable and plastic limit-cycle attractors.
According to PC [12,2], the brain is hierarchically generating top-down predictions about its sensory states, and updating its internal states based on a bottom-up error signal originating from the sensory level. This view can be implemented by having the generative model intertwined with error neurons that propagate the information in a bottom-up manner through the hierarchy. An online computation of the error at each level of the generative model makes it possible to dynamically infer the hidden states, using only local update rules. The proposed model implements PC using the free-energy formulation [3], providing a variational Bayes frame for the inference mechanisms.
We show how an RNN implementation based on PC can be trained to generate a repertoire of limit cycle attractor trajectories, and how adding noise into the neural dynamics causes random transitions between the learned patterns.
Methods
In this section, we present the proposed RNN model and the corresponding derivations for the free-energy. We then describe the two hypothesized situations in which our model could exhibit attractor transition dynamics, which we label mode A and mode B.

RNN model

Figure 1 represents our proposed RNN model implementing predictive coding. This implementation takes inspiration from several works on RNN modeling [11,13,3].
RNNs can be introduced as directed graphical models forming temporal sequences of hidden states h_t. RNNs can also include a sequence of input variables and a sequence of output variables. The model we present here only considers outputs, which we denote x_t. Such RNNs are parameterized by recurrent weights controlling the temporal evolution of h_t, and output weights translating h_t into outputs x_t.
Taking inspiration from [3], we introduce hidden causes into our generative model. Hidden causes, which we denote c_t, are variables influencing the temporal dynamics of h_t. Contrary to hidden states, this variable is static and does not evolve according to recurrent weights. Hidden causes differ from model parameters, as they are a random variable on which we can perform inference. They also differ from inputs, as they are not an observable variable with a known value. We still use the subscript t on c_t, since our model performs inference at each time step, providing new estimates of the hidden causes variable.

To model the influence of the hidden causes variable c_t on the temporal dynamics of the hidden states h_t, we use a three-way tensor of shape (n, n, p), where n is the hidden state dimension and p is the hidden causes dimension. The outcome of the dot product of this tensor with the hidden causes c_t is a matrix of shape (n, n). We can thus see the three-way tensor as a basis of size p in the space of recurrent weight matrices, and the hidden causes as coordinates in this basis used to select particular temporal dynamics. Following the intuition that different hidden causes will lead to different hidden state dynamics, we choose to have one hidden causes vector for each attractor we want to learn with our model. To make sure these attractors do not interfere with each other during the training phase, we enforce one-hot embeddings for the hidden causes, with the activated neuron corresponding to the index of the attractor we want to learn. It follows that the hidden causes dimension equals the number of attractors we learn with this model.

This three-way tensor comprises a large number of parameters, causing the model to scale poorly as the dimension of the hidden causes (i.e., the number of attractor patterns we learn) increases. To address this issue, [13] proposes to factor the tensor into three matrices such that, for all i, j, k, $W_{ijk} = \sum_{f=1}^{d} (W_p)_{if} (W_f)_{jf} (W_c)_{kf}$. We introduce a factor dimension d that can be set arbitrarily to control the number of parameters. In our experiments, we used d = n/2.
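A minimal sketch of this factored modulation is given below. It assumes the decomposition written above (the factor-matrix names follow the W_p, W_f, W_c used in the training section); the dimensions and initialization scales are illustrative only.

```python
import torch

n, p, d = 64, 3, 32  # hidden state dim, hidden causes dim (number of attractors), factor dim

# Factor matrices replacing the (n, n, p) three-way tensor: n*d + n*d + p*d
# parameters instead of n*n*p.
W_p = torch.randn(n, d) * 0.1
W_f = torch.randn(n, d) * 0.1
W_c = torch.randn(p, d) * 0.1

def recurrent_matrix(c):
    """Effective (n, n) recurrent weight matrix selected by the hidden causes c
    (shape (p,)). Equivalent to contracting the full tensor
    W[i, j, k] = sum_f W_p[i, f] * W_f[j, f] * W_c[k, f] with c along its third mode."""
    gain = c @ W_c                         # (d,) per-factor gains set by the hidden causes
    return W_p @ torch.diag(gain) @ W_f.T  # (n, n)

# One-hot hidden causes select the dynamics of the first learned attractor.
W = recurrent_matrix(torch.tensor([1.0, 0.0, 0.0]))
```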
The top-down, prediction pass through our network can thus be described by the model's forward equations, where we have introduced a time constant τ for the hidden state dynamics.
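The forward equations themselves are not recoverable from this extraction. Continuing the previous sketch, the following shows one plausible discretization: the leaky Euler update with time constant τ and the tanh nonlinearity are our assumptions, not the paper's equations (2)-(4).

```python
import torch

tau = 10.0                       # time constant for the hidden state dynamics
W_out = torch.randn(2, n) * 0.1  # output weights, assuming 2-D target trajectories

def forward_step(h, c):
    """One top-down prediction step: the hidden causes c select the recurrent
    dynamics (recurrent_matrix from the previous sketch), the hidden state h
    leaks toward the recurrent drive, and the output layer reads out the
    prediction x."""
    h_next = (1 - 1 / tau) * h + (1 / tau) * torch.tanh(recurrent_matrix(c) @ h)
    x_pred = W_out @ h_next
    return h_next, x_pred
```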
Free-energy minimization
As explained in the introduction, our model implements PC with a bottom-up error propagation circuitry, represented with green lines in Figure 1. The error neurons compute the difference between predicted and target values at each layer. By propagating these errors, originating from the output layer, onto the upper layers, this architecture is able to perform online inference of the hidden variables (states and causes) of the RNN. Inference in the proposed model can be formulated as a free-energy minimization process. The detailed derivations of our model's equations based on the free-energy principle are provided in Annex A. We obtain an equation for the free-energy E(h, c) in which x and h denote prior predictions, h* denotes the approximate posterior estimation based on bottom-up information, x* denotes the observed value, and C is a constant that does not impact gradient calculations.
The probability p(c) is the prior probability on the hidden causes variable. In this article, we use a Gaussian mixture prior, $p(c) = \sum_{k=1}^{p} \pi_k\, \mathcal{N}(c; \mu_k, \sigma_c^2 I)$. Note that the number of Gaussians in the mixture model is equal to p, which is the number of attractors, itself equal to the dimension of c.
The temporal dynamics of h and c can be found by computing the free-energy gradients with respect to these variables. The bottom-up, inference pass through our network is described by the corresponding update equations; the last term in equation (10) pulls c towards values with high prior probability.
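The update equations are lost in this extraction; the sketch below only illustrates the structure the text describes for the hidden-causes update: a bottom-up term weighted by 1/σ_h² plus the gradient of the log mixture prior. It does not claim to reproduce equations (9)-(10) exactly, and `bottom_up_drive` is a hypothetical stand-in for the projected hidden-state error.

```python
import torch

def gaussian_mixture_prior_grad(c, mus, pis, sigma_c):
    """Gradient of log p(c) under the mixture prior p(c) = sum_k pi_k N(c; mu_k, sigma_c^2 I).
    This is the 'complexity' term that pulls the hidden causes toward the prior means."""
    diffs = mus - c                                       # (p, p): row k is mu_k - c
    resp = pis * torch.exp(-(diffs ** 2).sum(dim=1) / (2 * sigma_c ** 2))
    resp = resp / (resp.sum() + 1e-12)                    # responsibilities of each Gaussian
    return (resp[:, None] * diffs).sum(dim=0) / sigma_c ** 2

def update_hidden_causes(c, bottom_up_drive, mus, pis, sigma_h, sigma_c, lr=0.1):
    """One gradient step on the free-energy with respect to c: the bottom-up
    (likelihood) term is weighted by 1/sigma_h^2, and the prior gradient adds
    the complexity term discussed in the text."""
    grad = bottom_up_drive / sigma_h ** 2 + gaussian_mixture_prior_grad(c, mus, pis, sigma_c)
    return c + lr * grad
```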
Compared to the RNN proposed in [11], our model comprises hidden causes in the generative model. Additionally, the feedback connections perform gradient descent on the free-energy, instead of being additional parameters to be learned.
Training
The model can be trained with gradient descent on the free-energy functional using only local update rules. The output weights W_out can be trained in order to reduce the discrepancy between the observed value x*_t and its prediction x_t. Similarly, all the weights W_p, W_f and W_c, responsible for the temporal dynamics of h, can be trained in order to reduce the error between the posterior estimation h*_t and its prior estimation h_t. However, such learning rules would not consider the delayed influence of the recurrent weight parameters on the trajectory. In this article, we instead use the backpropagation through time algorithm for the training of the model parameters, using only the forward pass described in equations (2) and (4) for gradient computations (all the bottom-up updates are detached from the computation graph).
For each limit cycle attractor (x*_{0,k}, x*_{1,k}, ..., x*_{T,k}) of the p trajectories we want to learn, we initialize the hidden causes to the one-hot encoding of k (all coefficients set to 0 except for the k-th coefficient, which is set to 1). All trajectories start from the same random initial hidden state h_init. The training method is described in Algorithm 1.
Here, I denotes the number of training iterations and T denotes the length of the target trajectories. During training, we used the Adam optimizer with a learning rate of 0.01 and a batch size of p, corresponding to the inner loop in the previous algorithm. In the general case, the prior means μ_k correspond to the one-hot vectors activated on the k-th dimension, and the mixture coefficients π_k are set uniformly: π_k = 1/p.
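A compressed PyTorch sketch of this training loop is given below. It trains only the top-down generative pass with BPTT on fixed one-hot causes, as the text describes; the leaky-integrator update with time constant τ, the tanh nonlinearities and the random stand-in targets are our assumptions, and the bottom-up inference updates (which are detached during training anyway) are omitted.

```python
import torch

n, p, d, T, tau = 100, 3, 50, 60, 5.0

W_p = torch.nn.Parameter(0.1 * torch.randn(n, d))
W_f = torch.nn.Parameter(0.1 * torch.randn(n, d))
W_c = torch.nn.Parameter(0.1 * torch.randn(p, d))
W_out = torch.nn.Parameter(0.1 * torch.randn(2, n))

def step(h, c):
    # Recurrent matrix selected by the hidden causes, one per batch element
    W = torch.einsum('id,jd,bd->bij', W_p, W_f, c @ W_c)
    h = h + (1.0 / tau) * (-h + torch.einsum('bij,bj->bi', W, torch.tanh(h)))
    return h, torch.tanh(h) @ W_out.T

targets = torch.randn(p, T, 2)              # stand-ins for the circle/square/triangle
opt = torch.optim.Adam([W_p, W_f, W_c, W_out], lr=0.01)
h_init = torch.randn(n)

for it in range(1000):                      # I = 1000 training iterations
    c = torch.eye(p)                        # batch of p one-hot hidden causes
    h = h_init.expand(p, n).clone()         # same random initial hidden state
    loss = torch.zeros(())
    for t in range(T):
        h, x = step(h, c)
        loss = loss + ((x - targets[:, t]) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```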
Mode A
Here we describe one way to simulate attractor switching behavior using the proposed model. This method, which we label mode A, varies the parameter σ_c used to dynamically infer hidden causes during the trajectory.
First, we consider a situation where no target x* is provided by the environment; in other words, the RNN performs closed-loop trajectory generation. In this situation, we replace the error at the bottom level by low-amplitude noise. This noise propagates through the RNN via the feedback connections and, in particular, influences the hidden causes variable.
As represented in figure 2, the parameter σ_c determines the shape of the prior distribution on hidden causes. With low values of σ_c, the complexity term in equation (10) will pull the hidden causes variable towards one of the prior means μ_k. These values of c correspond to temporal dynamics that have previously been trained to match each of the desired attractors. With high values of σ_c, the Gaussians merge into a single unimodal function with a global maximum corresponding to the average of all the prior means μ_k. In this situation, the complexity term in equation (10) will pull the hidden causes variable towards this average value, for which no training was performed.
The idea of mode A is to periodically vary σ_c in order to alternate between phases where the hidden causes are pulled towards learned attractor dynamics, and phases where the hidden causes are pulled towards the average of the prior means.
Mode B
We describe a second method to simulate attractor switching behaviors, which we label mode B. In mode B, the parameter σ_c remains constant and equal to 0.4; instead, we vary the parameter σ_h.
We can see from equation (10) that this parameter controls the importance of the bottom-up signal in the hidden causes update. In our case, since the error that is propagated up into the model is pure noise, the parameter σ_h can be seen as controlling the noise level added to the hidden causes at each time step. For high values of σ_h, the additive noise level remains too low to pull the hidden causes outside of the basin of attraction created by the last term of equation (10) and represented in figure 2a. For values of σ_h that are low enough, the additive noise can make the hidden causes c escape from its basin of attraction.
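The escape mechanism can be imitated with a noisy gradient step on c that reuses the mixture-prior gradient sketched earlier; the step size, phase length and the 1/σ_h² scaling of the noise term are illustrative assumptions, not the paper's exact update.

```python
import numpy as np

rng = np.random.default_rng(1)
p, sigma_c = 3, 0.4
mus = np.eye(p)

def grad_log_prior(c):
    sq = np.sum((c - mus) ** 2, axis=1)
    w = np.exp(-(sq - sq.min()) / (2 * sigma_c ** 2))
    g = (w / w.sum())[:, None] * (mus - c)
    return g.sum(axis=0) / sigma_c ** 2

c = mus[0].copy()
for t in range(2000):
    sigma_h = 10.0 if (t // 500) % 2 == 0 else 0.1   # alternate low- and high-noise phases
    c += 0.01 * (grad_log_prior(c) + rng.normal(size=p) / sigma_h ** 2)

print(np.argmin(np.sum((mus - c) ** 2, axis=1)))     # index of the basin c ends up in
```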
Similarly to mode A, the idea behind mode B is to periodically vary σ_h in order to alternate between low-noise phases, where the hidden causes remain close to a value corresponding to learned attractor dynamics, and high-noise phases, where the hidden causes escape their basin of attraction.
Results
In this section, we present the results we obtained with the proposed model. We analyze the simulations of our network in mode A and mode B for the generation of attractor switching trajectories.
Training
We initialize our model with an output dimension of 2, a hidden state dimension of n = 100, and a hidden causes dimension of p = 3, equal to the number of attractor trajectories we want to learn. The network has a time constant of τ = 5. Finally, we set σ_o = 1, σ_h = 10 and σ_c = 0.1 during training. Note that the parameters σ_h and σ_c will vary during the simulations in modes A and B.
The three target trajectories are periodic patterns representing a circle, a square, and a triangle, with a period of 60 time steps, repeated to last for 1000 time steps.
The model was trained for 1000 iterations using the method described in Algorithm 1.
Mode A
We now use the trained network in mode A, with the parameter settings σ_o = 10, σ_h = 0.1, and σ_c varying according to the function σ_c(t) = 0.2 exp(2 sin(t/100)); the results are recorded in figure 3. We can observe that the RNN switches between the three attractors. When σ_c is high, the hidden causes converge towards the center value. This center value corresponds to the hidden state dynamics and output dynamics depicted in gray. This value of the hidden causes seems to correspond to a point attractor, which was not directly enforced by the training procedure. Starting from this configuration, when σ_c decreases, the hidden causes variable falls into one of the three attracting configurations that were trained to correspond to the three limit cycle attractors.
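For reference, the σ_c modulation used in this simulation is reproducible in two lines (the time grid is arbitrary):

```python
import numpy as np

t = np.arange(1000)
sigma_c = 0.2 * np.exp(2 * np.sin(t / 100))   # oscillates between ~0.027 and ~1.48
```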
Mode B
We now use the trained network in mode B, with σ_h varying periodically. We can observe that the RNN again switches between the three attractors. When σ_h is high, the hidden causes remain in a stable position corresponding to the learned limit cycle attractor dynamics. When we decrease σ_h, the noise level applied to the hidden causes at each time step increases to the point where c escapes its basin of attraction, falling back into one of the three stable configurations once the noise level resettles.
Transition matrices
In this section, we want to verify whether the attractor switching behavior follows a uniform probability distribution or whether some transitions are more likely to occur than others. We view the RNN as a Markov chain with three configurations. For modes A and B, we record 2000 attractor transitions that we use to build an estimate of the transition matrix of that Markov chain. The results are displayed in figure 5.
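Estimating such a transition matrix from recorded attractor visits is straightforward; the sketch below counts transitions and row-normalises, with a random state sequence standing in for the actual recordings.

```python
import numpy as np

def transition_matrix(states, n_states=3):
    """Estimate P(next | previous) from a sequence of visited attractor indices."""
    counts = np.zeros((n_states, n_states))
    for a, b in zip(states[:-1], states[1:]):
        counts[a, b] += 1
    return counts / counts.sum(axis=1, keepdims=True)

# e.g. 2000 transitions between circle (0), square (1) and triangle (2)
states = np.random.default_rng(0).integers(0, 3, size=2001)
print(transition_matrix(states))
```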
For mode A, we can see that the probability of switching to a given state seems independent of the previous state. This result can be explained by the fact that the intermediary, neutral configuration that the network reaches before switching to a new configuration corresponds to a fixed point. If enough time is allowed for the hidden state to reach this fixed point, it no longer holds any memory of the previous configuration. Additionally, the probability distribution is not uniform, as the square state occurs more often than the others. For mode B, this bias is still present but, contrary to mode A, the probability of reaching a given state depends on the previous state. The transitions are thus past-dependent.
Conclusion
In this study, we have shown how an RNN model implementing PC can exhibit attractor switching behaviors using an input noise signal. Here, we compare our results with other works that aim to model this behavior.
The approach described in [15] requires training a separate RNN for each primitive. In contrast, we have shown that our model can embed different dynamics within one RNN, and as such should scale better to an increased number of primitives. On the other hand, one limitation of the model presented by [5] is that quasi-attractors have a set duration, and the behavior they yield cannot last longer than this trained duration. In contrast, since our model relies on true trained limit-cycle attractors, any periodic behavior can be maintained for as long as desired.
In this article, we have proposed mechanisms that provide random transitions between attractors, regardless of the past attractor state. However, if we were to model cognitive mechanisms such as memory retrieval, it could be interesting to have such a dependency. Following this idea, we could envision a mode C where we would periodically set the parameter σ_c to a very large value. When σ_c is very high, the prior probability over c converges to a flat function, making the last term of equation 10 negligible. In such a setup, c would evolve following a Gaussian random walk. When σ_c is reset to its initial value, c should converge to the closest mixture mean. Alternating between low values of σ_c and very high values would thus result in a succession of random walk and convergence phases for c, which should maintain information about the previously visited attractor configurations.
A Free-energy derivations
In this section, we provide the derivations for equation 5. We start from a probabilistic graphical model in which f and g correspond to the top-down predictions described in equations 2 and 4, respectively. Note that here, c, h and x denote random variables, and should not be confused with the variables of the computational model presented in the main text. Since the free-energy is used to perform inference on the hidden variables, and since it is not possible to update the past hidden variable h_{t−1}, we treat it as a parameter of the function f and only perform inference on c and h = h_t, where we have dropped the subscript.
We introduce approximate posterior density functions q(h) and q(c) that are assumed to be Gaussian distributions with means m_h and m_c. Given a target for x, denoted x*, the variational free energy is defined as: F = −E_q[log p(c, h, x*)] + E_q[log q(c, h)] (15). The second term of equation 15 is the (negative) entropy of the approximate posterior distribution and, using the Gaussian assumption, does not depend on m_h and m_c. As such, this term is of no interest for the derivation of the update rules of m_h and m_c, and is replaced by the constant C_1 in the remainder of the derivations. Using the Gaussian assumption, we can also find simplified expressions for the first term of equation 15 and, grouping the terms not depending on m_h and m_c under the constant C_2, we have the following result: E(x*, m_h, m_c) = −log p(x*|h) − log p(m_h|c) − log p(m_c) + C_1 + C_2 (16), where C = C_1 + C_2 + C_3 and C_3 corresponds to the additional terms obtained when developing log p(x*|h) and log p(m_h|c).
[1] provides more detailed derivations and deeper insight into the subject.
B Linked videos
Here is the link to a video showing animated example trajectories in modes A and B (https://youtu.be/LRJQr8RmeCY).
"year": 2021,
"sha1": "6e757ba47c24ca4d003a7dae54631f4e9941ffce",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/2106.08937",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "6e757ba47c24ca4d003a7dae54631f4e9941ffce",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
A Radiographic Healing Classification for Osteochondritis Dissecans of the Knee Provides Good Interobserver Reliability
Background: Recent studies have examined radiographic factors associated with healing of osteochondritis dissecans (OCD) lesions of the knee. However, there is still no gold standard in determining the healing status of an OCD lesion. Purpose: We examined temporally associated patterns of healing to (1) evaluate the practicality of a classification system and (2) elucidate any associations between healing pattern and patient age, sex, lesion location, treatment type, and physeal patency. Study Design: Cohort study (diagnosis); Level of evidence, 3. Methods: We retrospectively screened 489 patients from 2006 to 2010 for a total of 41 consecutive knee OCD lesions that met inclusion criteria, including at least 3 consecutive radiographic series (mean patient age, 12.8 years; range, 7.8-17.1 years; mean follow-up, 75.1 weeks). Radiographs were arranged in sequential order for ratings by 2 orthopaedic sports medicine specialists. Healing patterns were rated as boundary resolution, increasing radiodensity of progeny fragment, combined, or not applicable. Repeat ratings were conducted 3 weeks later. Results: Patients were most commonly adolescent males aged 13 to 17 years, with a medial femoral condyle lesion that was treated operatively. Interobserver reliability of the healing classification was good (intraclass correlation coefficient, 0.67; 95% CI, 0.55-0.79). Boundary and radiodensity healing was observed for all ages, sexes, lesion locations, treatment types, and physeal patency states. Conclusion: This study evaluated a valuable radiographic paradigm—boundary resolution, increasing radiodensity of progeny fragment, or combined—for assessment of OCD lesion healing. The proposed system of healing classification demonstrated good inter- and intraobserver reliability. Healing patterns were not significantly associated with any particular age, sex, lesion location, treatment type, or physeal patency status. The development of a classification system for knee OCD may eventually improve clinical assessment and management of OCD lesions.
Osteochondritis dissecans (OCD) is commonly described as an acquired lesion of subchondral bone with potential for secondary alteration of articular cartilage. 6,11,13 Various authors have expressed the view that OCD lesions of the knee are becoming increasingly common in children and adolescents. 11 Although the etiology of OCD continues to be disputed, with discussions suggesting ischemia and genetic predisposition as contributors, 15 numerous authors have proposed repetitive microtrauma as an underlying mechanism; as such, it is possible that the rising incidence may be due in part to increased participation in sports at increasingly younger ages, especially for males. [3][4][5]11 Fortunately, the potential for healing is notable for skeletally immature individuals, with some authors reporting resolution in as many as two-thirds of patients with bracing and activity modification alone. 14,18 Despite extensive investigation, a lack of consensus exists concerning the definition of "healing" or even how radiographic findings may correlate with healing. 7 Healing may also be defined by clinical outcomes, such as resolution of symptoms, as opposed to any particular finding on imaging studies. 18 However, pathology has been correlated with particular markers on imaging, such as the presence of a perilesional sclerotic ring from failure of reparative neovascularization, which leads to loosening of a progeny fragment from the parent bone. 8,13 Various studies have provided illustrations of a healing lesion or have used arbitrary cutoffs for radiographic healing as a study outcome, but minimal consistency exists among authors. 1,10 Recently, however, Wall et al 16,17 and the Research in Osteochondritis of the Knee (ROCK) Study Group have presented substantial work revealing high inter- and intrarater reliability on overall healing and various radiographic parameters and characteristics of each lesion. This work may lead to a consensus on which factors indicate that an OCD lesion of the knee has undergone healing.
Accordingly, a study involving the use of plain radiographs remains particularly relevant. Radiographs are the standard imaging modality of choice during the follow-up period, as findings of instability on magnetic resonance imaging correlate poorly with findings during arthroscopy, calling into question the reliability of this imaging modality for this pathology. 9 The goals of the current study were primarily to evaluate the practicality of a radiographic classification system for evaluating healing of the OCD lesion with plain radiographs and, secondarily, to elucidate any associations between these healing patterns and patient age, sex, lesion location, treatment type, or physeal patency.
METHODS
After approval from an institutional review board, a retrospective analysis was conducted on consecutive patients who were treated for OCD lesions of the knee at a level 1 tertiary care pediatric hospital. Patients were identified by hospital database search and were included if they were <18 years of age, had at least 3 consecutive radiographic image series (including lateral or notch view), and had an OCD lesion of the knee. Although some patients had notch views obtained only after their initial clinical visits, likely for the purpose of follow-up of lesion appearance, all available views for each patient were compiled and provided to the physician rater in the form of blinded PowerPoint slides. In the absence of at least 2 orthogonal radiograph views at each clinical encounter, the patient was excluded from the study. Lesions in patients with bilateral pathology were considered distinct entries. Patients were excluded if they did not meet age criteria or if they lacked sufficient imaging as described. Data collected included age, sex, lesion location, and treatment type, further divided into 2 groups: those receiving operative care within 6 months of diagnosis and those receiving nonoperative treatment for >6 months from diagnosis. A formal informed consent process was not required by the institutional review board, given the retrospective nature of the study.
Radiographic images were collected and arranged in sequential order for rating. Two fellowship-trained orthopaedic sports medicine specialists classified the consecutive images according to lesion location, healing pattern, and physeal patency while blinded to all demographic and treatment details except time from initial presentation. For each included patient, healing patterns were rated as 1 of the following: resolution of the boundary between progeny fragment and parent bone (ie, from distinct boundary to indistinct boundary), increasing radiodensity of the progeny fragment (ie, from radiolucent to the same radiodensity as the parent bone), combined (features of boundary resolution and increasing radiodensity patterns), or not applicable. Results were collected in the form of 1 categorical response from the stated options. Figure 1 presents representative lesions and healing patterns. Physeal patency was rated by examination of the radiograph at initial patient presentation regardless of physeal patency at the time of final follow-up. Both physicians repeated the blinded rating process 3 weeks after the initial reading.
Statistical analysis included primary outcomes of intraclass correlation coefficient (ICC) for inter- and intrarater reliability, overall percentage agreement, and demographic trends. For interpretation of the resulting ICCs, standards for the magnitude of the reliability coefficient were obtained from Altman 2 (Table 1).

Figure 1. Representative lesions and healing patterns: (A) resolution of the boundary between progeny fragment and parent bone, (B) increasing radiodensity of the progeny fragment from radiolucent to similar radiodensity as the parent bone, and (C) combined features of boundary resolution and increasing radiodensity patterns. Arrows indicate areas of the healing OCD lesion. Raters were presented with image series in the same format for classification into 1 of the corresponding healing categories: boundary resolution, increasing radiodensity, and combined.
In addition to these measures, percentage agreement and the Randolph free-marginal multirater kappa were calculated for the agreement between readers on whether "boundary" or "radiodensity" healing patterns were exhibited in a lesion, since a rating of "combined" would qualify if both were present. The Fisher exact test was employed for the secondary outcome examining associations between healing types and age, sex, lesion location, operative versus nonoperative treatment, and physeal patency. For the latter analysis, lesions were assigned a single healing type and physeal status according to combined ratings; this was determined by considering all 4 reader measurements and assigning by majority. In the event of an even discrepancy between readers, a tiebreaker rating from the senior attending was used. Calculations were conducted with SPSS for Windows (v 20; IBM).
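For readers wishing to reproduce the agreement statistics, a minimal sketch of the Randolph free-marginal multirater kappa (chance agreement fixed at 1/q for q rating categories) is shown below; the study's calculations were performed in SPSS, and the toy ratings array here is purely illustrative.

```python
import numpy as np

def randolph_kappa(ratings, n_categories):
    """Randolph's free-marginal multirater kappa for an (items x raters) array
    of categorical ratings, with chance agreement fixed at 1/n_categories."""
    n_items, n_raters = ratings.shape
    p_o = 0.0
    for item in ratings:
        counts = np.bincount(item, minlength=n_categories)
        # proportion of agreeing rater pairs for this item
        p_o += (counts * (counts - 1)).sum() / (n_raters * (n_raters - 1))
    p_o /= n_items
    p_e = 1.0 / n_categories
    return (p_o - p_e) / (1.0 - p_e)

ratings = np.array([[0, 0], [1, 1], [2, 1], [0, 0]])   # 4 lesions, 2 raters, 3 categories
print(randolph_kappa(ratings, n_categories=3))          # 0.625 for this toy example
```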
RESULTS
Of 489 patients screened, 41 consecutive knee OCD lesions were evaluated, representing all cases that met inclusion criteria for a single surgeon from 2006 to 2010. The mean follow-up period was 75.1 weeks (range, 14-276); the mean time between radiograph studies was 22 weeks. The mean patient age was 12.8 years (SD, 2.1; range, 7.8-17.1), and the sex distribution was 35 males and 6 females. Patients were most commonly male adolescents between the ages of 13 and 17, with open physes, receiving operative treatment, with a medial femoral condyle lesion.
All healing types, with the exception of "not applicable," were observed in patients across both sexes and all age groups, lesion locations, treatment types, and physeal patency statuses. The percentage agreement across all healing ratings combined for both attending physician raters was 0.78. The inter- and intraobserver reliabilities (ICCs) of the proposed healing classification were 0.67 (95% CI, 0.55-0.79). These are considered "good" by the Altman 2 standard. For the physeal patency rating, the interobserver reliability was 0.87 (95% CI, 0.81-0.92), and the intraobserver reliability was 0.82 (95% CI, 0.75-0.89), both of which are considered "very good" by the Altman standard (Table 2). Table 3 delineates results for percentage agreement and the Randolph free-marginal multirater kappa between raters according to agreement concerning the presence/absence of individual healing types. The Fisher exact test conducted with the combined ratings revealed no statistically significant associations between any of the healing types and patient age, sex, lesion location, operative versus nonoperative treatment, or physeal patency.
DISCUSSION
The study results suggest the utility of the proposed radiographic classification system for healing of OCD lesions of the knee, as it provides good inter- and intraobserver reliability. As demonstrated by multiple authors, there is a lack of consensus on what "healing" entails in OCD lesions, whether clinically or radiographically. 1,7,9,10,12,18 Indeed, Parikh et al 12 recently examined the reliability of determining healing on radiographs and found that physicians did not consistently agree on the healing status of OCD lesions. Given the aforementioned state of the literature concerning classification discrepancies and inconsistencies in the use of imaging to determine healing status, the primary aim of this study was to examine the feasibility of a framework with which to approach the progression of lesion healing.
Before the current study, the majority of previous studies focused on individual prognostic factors for healing potential or findings on imaging that indicate lesion instability. 4,7,13,16 The need for interprovider agreement on a standardized definition of lesion healing has been established in the literature. 7,12,15 In a study of 47 knees that examined OCD lesion reossification on serial plain films over 6 months of nonoperative treatment, Wall et al 18 found that smaller-sized lesions were more likely to progress to healing. They also found that patient age, lesion sidedness (left knee vs right), and lesion location (lateral vs medial femoral condyle) did not play a role in predicting healing potential. Similarly, in a study of 59 OCD lesions, Edmonds et al 6 found a significant difference in the rate of healing between small and large lesions. These findings support those of the current investigation, which failed to establish any statistically significant correlation between the aforementioned factors and any particular healing pattern, with the exception of lesion size (not examined in the current study). The most significant recent investigations concerning this topic were carried out by Wall et al and the ROCK Study Group. 16,17 In a multicenter study, they found excellent inter- and intrarater reliability using a continuous scale of radiographic images, as well as excellent reliability for what were termed the 5 "subfeatures" involved in healing on radiographs (boundary, sclerosis, size, shape, and ossification). Of note, the ordinal rating of overall healing had significantly lower inter- and intrarater reliabilities of 0.61 and 0.68, respectively. 16 The current study assigned similar radiographic features of boundary and radiodensity/sclerosis to group healing patterns and translated them into a categorical rating system, resulting in inter- and intraobserver ICCs of 0.67, findings similar to the ordinal ratings in that trial. In an earlier study by the same group, the radiographic parameter of progeny bone boundary had greater interrater reliability than that of progeny bone center radiodensity (0.62 vs 0.52). 17 This study had multiple limitations, the most important likely being the small size of the cohort and the relatively small number of physician raters. Applicability of the proposed categorization to a variety of clinical settings is limited, given that both raters were sports fellowship-trained orthopaedic surgeons. Inclusion of professionals from various disciplines would enhance the strength of the findings. A larger cohort may elucidate associations between healing types and factors such as age, treatment type, and so on, as the data would be more likely to approach statistical significance. Additionally, our results lack correlation with clinical outcome measures; this analysis was outside the scope of the current investigation. A future study combining clinical outcome data with the proposed radiographic healing types could augment OCD treatment protocols. However, these limitations did not hinder the course of the study or the conclusions drawn, as the data revealed that the employed system still had substantial reliability despite these factors.
We recognize that this study provides mostly a preliminary, yet important, framework on which future studies can directly build. Correlation of healing patterns with pathological specimens to determine whether these lesions are histologically distinct would prove extremely valuable. Subsequently, if lesions are conceptualized as unique entities, investigations concerning how treatment might be tailored to the patient would be appropriate. Additionally, correlation of healing pattern to clinical outcomes such as pain scores, time to clinical recovery, or return to pretreatment levels of activity would be necessary, perhaps in a prospective analysis.
This study builds on the previous body of work concerning radiographic factors involved in assessment of OCD lesion healing. The employed system of healing classification on radiographs (boundary resolution, increasing radiodensity of progeny fragment, and combined) demonstrated good inter- and intraobserver reliability. Healing patterns were not significantly associated with any particular age, sex, lesion location, treatment type, or physeal patency status. The current study presents an important framework on which future correlations with tissue pathology and clinical outcomes may be based.
"year": 2017,
"sha1": "0a7fe9f4030aee21523036d5aaed9ed84ea11360",
"oa_license": "CCBYNCND",
"oa_url": "https://journals.sagepub.com/doi/pdf/10.1177/2325967117740846",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "0a7fe9f4030aee21523036d5aaed9ed84ea11360",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
SCUBA-2 is an innovative 10000 pixel bolometer camera operating at submillimetre wavelengths on the James Clerk Maxwell Telescope (JCMT). The camera has the capability to carry out wide-field surveys to unprecedented depths, addressing key questions relating to the origins of galaxies, stars and planets. With two imaging arrays working simultaneously in the atmospheric windows at 450 and 850 microns, the vast increase in pixel count means that SCUBA-2 maps the sky 100-150 times faster than the previous SCUBA instrument. In this paper we present an overview of the instrument, discuss the physical characteristics of the superconducting detector arrays, outline the observing modes and data acquisition, and present the early performance figures on the telescope. We also showcase the capabilities of the instrument via some early examples of the science SCUBA-2 has already undertaken. In February 2012, SCUBA-2 began a series of unique legacy surveys for the JCMT community. These surveys will take 2.5 years and the results are already providing complementary data to the shorter wavelength, shallower, larger-area surveys from Herschel. The SCUBA-2 surveys will also provide a wealth of information for further study with new facilities such as ALMA, and future telescopes such as CCAT and SPICA.
INTRODUCTION
The submillimetre waveband, which encompasses the spectral range from 0.3 to 1 mm, contains a wealth of information about the cold Universe. Observations of gas and dust probe the earliest stages in the formation of galaxies, stars and planets. For example, the blackbody emission of a 10 K source (or a 40 K source at redshift ∼3) will peak around 300 µm. The continuum emission from dust is usually optically thin, so observations can probe to the heart of the most crucial processes, with the consequence that, for example, embryonic star-forming core masses and the surrounding structure of their molecular clouds are determined in a less model-dependent way than in the optical and infrared (e.g. Di Francesco et al. 2007;Ward-Thompson et al. 2007). On larger scales, much of the UV/optical light emitted from stars inside young galaxies is trapped within enshrouding dust clouds and re-emitted in the submillimetre. Only by observing at these longer wavelengths can the total energy budgets be determined. This is essential to derive an unbiased census of the star formation rate density with redshift and thus determine the "formation epoch" of galaxies (e.g. Blain et al. 1999;Murphy et al. 2011).
Undertaking submillimetre observations from ground-based observatories has always been fraught with difficulty, since atmospheric transparency is often poor and the high background power and sky emission variability limit the observing sensitivity. Nevertheless, 10-15 m class single-dish telescopes, routinely operating with high efficiency for the past 25 years, have led to enormous advances in our understanding of the formation of galaxies, stars and planets. For example, over the past two decades remarkable discoveries have taken place, including the discovery of ultra-luminous high-redshift galaxies responsible for the majority of the far-IR background (e.g. Smail et al. 1997; Hughes et al. 1998), the pin-pointing of cold dense regions in molecular clouds where new stars are forming (e.g. Motte et al. 1998; André et al. 2010), and the imaging of vast clouds of cold dust around nearby stars believed to be analogues of the Kuiper Belt in our Solar System (e.g. Holland et al. 1998; Wyatt 2008).
The submillimetre revolution began in earnest in the late 1990s with the arrival of the first imaging cameras, SHARC (Wang et al. 1996) on the Caltech Submillimeter Observatory telescope (CSO) and the Submillimetre Common-User Bolometer Array (SCUBA; Holland et al. 1999) on the JCMT. However, with two arrays containing only 91 and 37 bolometers, mapping even moderately-sized areas of sky (tens of arcminutes across) with SCUBA to any reasonable depth was painfully slow. Bolometer cameras with similar total pixel counts also followed on other ground-based telescopes, such as Bolocam (Glenn et al. 1998) and SHARC-II (Dowell et al. 2002) at the CSO, LABOCA (Siringo et al. 2009) on the Atacama Pathfinder Experiment telescope, and the MAMBO cameras (Kreysa et al. 1998) on the Institut de Radio Astronomie Millimetrique 30 m telescope. With existing bolometer technology being non-scalable to more than a few hundred pixels, the next challenge was to develop a way to increase substantially the pixel count by up to a factor of 100. The solution came in the form of new detectors incorporating superconducting transition edge sensors (TES; Irwin 1995), and the ability to adapt techniques such as high-precision silicon micro-machining to produce large-scale array structures (Walton et al. 2005). Furthermore, Superconducting Quantum Interference Device (SQUID) amplifiers could also be chained together to form a complementary, multiplexed readout system (deKorte et al. 2003). These technology advances meant that cameras of many thousands of pixels became conceivable for the first time, and thus formed the major motivation for the SCUBA-2 project.

Figure 1. Layout of a SCUBA-2 focal plane unit showing the major components of the assembly. The sub-arrays are butted together, with an approximate 4 pixel gap, to form a focal plane with an approximate 45 arcmin² field-of-view on the sky. Ceramic printed circuit boards (PCB) are wire-bonded to the arrays and these fan out the signal connections to ribbon cables that run to the 1 K amplifiers and eventually via additional cables to room temperature.
SCUBA-2 is a dual-wavelength camera with 5120 pixels in each of two focal planes. A focal plane consists of 4 separate sub-arrays, each with 1280 bolometers, butted together to give the full field (as shown in Fig. 1). Both focal planes have the same field-of-view on the sky and are used simultaneously by means of a dichroic beam-splitter. The instrument operates at the same primary wavelengths as SCUBA, namely 450 µm for the short and 850 µm for the long waveband. SCUBA-2 was delivered from the UK Astronomy Technology Centre (Edinburgh) to the Joint Astronomy Centre (Hilo, Hawaii) in 2008 April with one engineering-grade sub-array at each waveband. The first two science-grade sub-arrays (one for each focal plane) arrived at the JCMT in late 2009, and a period of "shared risk observing" was undertaken between February and April 2010. The remainder of the science-grade sub-arrays were delivered in summer 2010 and the first astronomical data with the full array complement were taken in early 2011.

Figure 2. Optical layout for SCUBA-2 from the tertiary mirror to the detector arrays inside the cryostat. The beam envelope, shown in red, is a combined ray trace of the on-axis and two extremes of the field-of-view for this projection of the optics. The arrow shows the direction of light propagation. Mirror N3 is located just inside the cryostat window, whilst mirrors N4 and N5 relay the optical beam into the array enclosure ("1-K box") which houses the focal plane units (FPUs).
In this paper, Section 2 gives an overview of the instrument design, including the optics and cryogenics. Section 3 describes in detail the design, manufacture and testing of the superconducting detector arrays. In Sections 4 and 5 we discuss how the instrument takes data and processes the information into astronomical images. Section 6 describes the rudiments of flux calibration, whilst Section 7 presents the initial on-sky performance, including sensitivity and optical image quality. In Section 8 we give an overview of the algorithms used to reduce SCUBA-2 data into publication-quality images. Finally, Section 9 illustrates the scientific potential of SCUBA-2 with a selection of early results.
INSTRUMENT DESIGN
The opto-mechanical design of SCUBA-2 is driven by two principal requirements: (1) to maximise the available fieldof-view; and, (2) to provide an ultra-low detector operating temperature in the 100 mK regime. The re-imaging of a large field onto a relatively small detector array (Section 3.1.1), as well as infrastructure limitations at the telescope, results in a complex optical path necessitating some extremely large mirrors (up to 1.2 m across). Furthermore, to minimise power loading on the detector arrays, the last 3 of the re-imaging mirrors are cooled to temperatures below 10 K. Together with the complex cryogenic system, this leads to a large cryostat, the vacuum vessel of which is 2.3 m high, 1.7 m wide and 2.1 m long, with a pumped volume of 5.3 m 3 and a weight of 3400 kg.
Optical design
Early designs clearly showed that it was not possible to accommodate SCUBA-2 in the JCMT "receiver cabin" close to the Cassegrain focus. The left-hand Nasmyth platform (as viewed from the rear of the telescope), previously home to SCUBA, was a more realistic location. There, the unvignetted field-of-view of the JCMT is ∼11 arcmin in diameter, restricted by the aperture of the elevation bearing. Given that the focal plane has a square geometry (as dictated by the array manufacturing process), a maximum field of 8 × 8 arcmin was possible. Hence the SCUBA-2 optics re-image the field at the Cassegrain focus to a size compatible with the focal plane footprint at the arrays. To maximise the sensitivity of the instrument and provide excellent image quality, this has to be achieved with high efficiency and minimum field distortion. The optics are also designed to ensure that a high-quality pupil image of the secondary mirror is produced at a cold-stop within the cryostat, thereby minimising "stray light" that could potentially degrade detector sensitivity. Subsequent changes to the array size (Section 3.1.1) restrict the final field-of-view to ∼45 arcmin².
The detailed optics design and manufacture of the re-imaging mirrors are described by Atad-Ettedgui et al. (2006). Referring to Fig. 2, the design consists of a tertiary mirror located in the receiver cabin just above the nominal Cassegrain focus. At the exit of the cabin a relay of three mirrors (labelled C1-C3) re-images the telescope focal plane at a point just beyond the elevation bearing on the Nasmyth platform, thereby converting the f/12 telescope beam to f/7. A second relay (N1 and N2) re-images the focal plane at f/2.7 just inside the instrument, thereby allowing for a small cryostat window diameter. The cold optics, consisting of a further 3 mirrors (N3, N4 and N5), forms an approximate 1:1 system that re-images the focal plane at f/2.7 onto the detector arrays.
The mirrors were manufactured by TNO Science and Industry 1 to have complex free-form surfaces that provide sufficient degrees of freedom to optimise the optical design. This proved necessary to maintain a high Strehl ratio across the field as a function of telescope elevation, as well as minimum field distortion. Packaging the optics within the overall structure of the telescope results in a cryostat location just below the existing Nasmyth platform, tilted at an angle of 22° to the vertical. This required a large amount of infrastructural change at the telescope, as documented in Craig et al. (2010). The overall optical path length is ∼20 m from the tertiary mirror to the arrays. An alignment accuracy of ± 0.25 mm, well within acceptable tolerances, is achieved in all axes using an optical datum positioned in the bearing tube (Craig et al. 2010).
Wavelength of operation
Submillimetre observations from ground-based sites are restricted to wavebands within transmission windows in the atmosphere. For a good observing site such as Mauna Kea these windows extend from 300 µm to 1 mm, and throughout the region atmospheric water vapour is the main absorber of radiation from astronomical sources. The selection of observing wavelength is made by a bandpass filter which, as shown in Fig. 3, is carefully tailored to match a particular transmission window. For SCUBA-2, these are multi-layer, metal-mesh interference filters, located just in front of the focal planes, and have excellent transmission (typically peaking around 80 per cent) and very low (< 0.1 per cent) out-of-band power leakage. The half-power bandwidths of the bandpass filters are 32 and 85 µm at 450 and 850 µm, respectively, corresponding to λ/∆λ ∼ 14 and 10.
A decision was made early in the design to conservatively filter the instrument. Hence there is a series of thermal and metal-mesh edge filters to ensure that heat loads and stray light are kept to a minimum in such a large instrument. Fig. 4 shows the position of all the filters within SCUBA-2, including the dichroic that reflects the shorter wavelengths and transmits the longer. The addition of extra low-pass edge filters at 4 K and 1 K is not a large penalty compared with potentially having to track down stray light sources that could mar image quality, or contending with additional heat loads that could degrade sensitivity. For example, at the entrance of the 4 K optics box it is necessary to keep the thermal power to a minimum to prevent heating of the optics and possible subsequent loading of the 1 K stage. To ensure good frequency selection, low-pass edge filters are also used with the bandpass filters. The net transmission of the instrument, in both wavebands and including the cryostat window and the detector absorption efficiency, is ∼40 per cent.
Main instrument
The cryostat is made up of a series of sub-systems (see Fig. 2) and is designed with nested radiation shields and baffles to minimise stray light and magnetic fields. For example, the arrays themselves contain SQUID amplifiers (Section 3.2.2) that are sensitive magnetometers and so must be shielded from magnetic fields. Immediately inside the vacuum vessel is a high magnetic permeability shield, a multi-layer insulation blanket and a radiation shield operating at ∼50 K. These provide radiation shielding for the main optics box, which houses the cold re-imaging mirrors at ∼4 K. The radiation shield and optics box are cooled by a pair of pulse-tube coolers (Section 2.5). The main optics box provides the support for the three cold mirrors and the 1 K enclosure ("1-K box"). Mounted within the 1-K box are the two focal plane units (FPUs) that contain the cold electronics and the detector arrays. The still and the mixing chamber of a dilution refrigerator (DR) cool the 1-K box and arrays, respectively (Section 2.5). The 1-K box and the outer casing of each FPU are also wrapped in superconducting and high magnetic permeability material (Hollister et al. 2008a; Craig et al. 2010).

Figure 4. The arrangement and operating temperature (colour coded) of the bandpass (BP), thermal blocking and metal-mesh edge filters and dichroic in SCUBA-2. Top: the main cryostat; Bottom: the 1-K box and focal plane units. "LP" and "HP" represent low-pass and high-pass filter cut-off edges, respectively. Thermal edge filters reject power shortward of their wavelength cut-off.
1-K box and focal plane units
The removable 1-K box creates the required environment for the detector arrays (Woodcraft et al. 2009). In addition to radiation shielding, it provides a cold-stop aperture at the entrance to help minimise stray light. Furthermore, it gives mechanical support for magnetic shielding, a cold shutter (used to take dark frames), filters, and the dichroic that splits the incoming beam onto the two focal planes. The 1-K box consists of an outer shell with aluminium alloy panels that hold the high permeability material for magnetic shielding (Hollister et al. 2008a). In addition, the box accurately and reproducibly supports and positions the FPUs with respect to the cold-stop. Fig. 5 (left) shows a 3-D CAD drawing of the box, highlighting the main components. There are two separate focal plane units, each containing four sub-arrays (Section 3.1.1). Key elements of the FPU design include the thermal link to the DR, optical filtering and further magnetic shielding. The 1-K box is a separate sub-system and interfaces to the main cryostat assembly via a support frame. Fig. 5 (right) shows a photograph of the fully assembled 1-K box during installation into the instrument.
Thermal design and cryogenics
The overall thermal design of SCUBA-2 is described by Gostick et al. (2004). In summary, two Cryomech 2 PT410 pulse-tube coolers keep the radiation shields and the ∼300 kg of cold optics at 50 and 4 K, respectively. However, since their cooling power is insufficient for the initial cool-down phase on a reasonable timescale, pre-cool tanks are attached to the 50 and 4 K shields. After pre-cooling with liquid nitrogen (LN2) the instrument is kept cold without the need for any liquid cryogens in the main instrument. A modified Leiden Cryogenics 3 dilution refrigerator was commissioned to run with a pulse-tube cooler (PT410) and Joule-Thompson heat exchanger, eliminating the need for a conventional 1 K pot and liquid cryogens. The still of the DR cools the 1-K box, whilst the mixing chamber cools the ∼30 kg of focal planes to around 100 mK. The DR is a key element of the system and has to cope with a substantial thermal load from the arrays themselves, heat leaks down the mechanical array supports and wiring, as well as radiation loading from the warmer parts of the instrument, telescope and sky (Hollister et al. 2008b). The thermal design is complex, with the need to transfer cooling power at temperatures of 1 K and 100 mK over a distance of 1.5 m to various locations in the FPUs, and to support the arrays rigidly whilst keeping sufficient thermal isolation to the 100 mK stage. A large number of thermal links are therefore required, with the added need for several bolted interfaces to allow the FPUs to be removed from the instrument. Nevertheless, with the benefit of extensive thermal modelling, the instrument reached the required cryogenic performance on the first cool-down. Under a total thermal load of 70 µW the mixing chamber of the DR achieves a base temperature of 70 mK in regular operation (Bintley et al. 2012a).
Early operation on the telescope revealed two main problems. The first was that the DR was prone to blocking after only 4-5 weeks of continuous operation. This was due to a gradual build-up of contamination not removed by the LN2 cold traps, which over time causes a blockage, most likely in the flow impedance of the cold insert. An additional external 4 K cold-trap (cooled by liquid helium) significantly extended the run-time, allowing the instrument to remain cold for more than 6 months continuously. The second issue was that a very distinct oscillation (period ∼25 s) was seen in the bolometer output signals. This was traced to a temperature oscillation originating in the still of the DR. The oscillation is a result of both tilting the DR to 22° and the strong interaction between the still and the circulation of ³He gas (this being a consequence of using the still to condense the gas as part of the new DR design; Bintley et al. 2012a). A new temperature control system on both the support structure underneath the arrays and the 1-K box minimised the amplitude of the oscillation. The temperature fluctuations have been reduced by at least a factor of 10, to ± 20 µK at the array supports. Under temperature control the mixing chamber achieves a base temperature of 78 mK.
Pixel count and array geometry
To fully Nyquist sample the sky instantaneously, the detector spacing must be 0.5 f λ, where f is the final focal ratio of the optics. With f/2.7 this corresponds to a spacing (and approximate detector diameter) of 0.61 and 1.14 mm at 450 and 850 µm, respectively. To cover the maximum available field-of-view requires approximately 25,600 and 6,400 bolometers for the two wavebands. Early design work showed that there was an approximate 1.1 mm minimum constraint on the size scale of the multiplexer (MUX) unit cell, rendering a fully-sampled 450 µm focal plane impractical. The detector size and spacing were therefore relaxed to ∼f λ at 450 µm, producing an array that under-samples the sky by a factor of 4, leading to a subsequent reduction in mapping speed (Section 7.3). However, this decision greatly simplified the fabrication process, since the multiplexer wafers became identical at the two wavelengths (Section 3.2.2). Furthermore, fabrication limitations meant that the maximum size of an individual detector or MUX wafer was 50 mm square, and hence the focal planes are populated with four separate quadrants, or sub-arrays. Finally, the need for space on the MUX wafer for wire-bond pads, extra bump-bonds, and the second-stage SQUID configuration means that the size of a sub-array is further restricted to 32 columns by 40 active rows (Section 3.2.2).
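The quoted spacings follow directly from the focal ratio; a quick check with the stated numbers:

```python
f = 2.7                                    # final focal ratio of the re-imaging optics

for lam_um in (450, 850):
    nyquist_mm = 0.5 * f * lam_um * 1e-3   # fully sampled (0.5 f lambda) spacing
    print(f"{lam_um} um: 0.5 f lambda = {nyquist_mm:.2f} mm")
# -> 0.61 mm at 450 um and ~1.15 mm at 850 um, matching the quoted 0.61 and 1.14 mm.
# With a common ~1.1 mm MUX cell, the 450 um plane sits at ~f lambda spacing,
# a factor of 2 coarser per axis, hence the factor of 4 under-sampling.
```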
NEP and power handling requirements
The key performance requirements for the detectors are the bolometer noise (or noise equivalent power, NEP; Section 3.5.2), the power handling capability (saturation power), and the speed of response (time constant). The fundamental requirement is that the overall SCUBA-2 sensitivity be limited by the background photon noise due to sky, telescope and instrument, under all observing conditions. A detailed model, based on the heritage of SCUBA, was constructed to allow the background power levels and performance figures to be established. As shown in Table 1, the 450 µm bolometers have to cope with larger sky power levels than their 850 µm counterparts, and under the driest observing conditions the background power is approximately 10 times less at 850 µm than at 450 µm. The specification of the total power handling capability therefore takes this into account, and also has to include additional margin for electrical TES bias and the calibration heater power (Section 3.4.1). In terms of the NEP, the 850 µm waveband sets the most stringent requirement as the sky background power is considerably lower than at 450 µm. The minimum background-limited NEPs (NEP_bkg) are 2.7 × 10^−16 and 5.6 × 10^−17 W s^1/2 at 450 and 850 µm, respectively. Hence, the intrinsic NEP of a bolometer must be less than these values to be background limited. For an ideal TES bolometer, measured in the absence of background power, phonon noise dominates the NEP at low frequencies (Section 3.2.1). Hence, the specification adopted for SCUBA-2 is that the phonon-noise-limited NEP for an individual bolometer is < 0.5 × NEP_bkg. Given that the NEP will be degraded by additional noise in the readout circuit, the formal specification is that the measured dark NEP (i.e. measured in the absence of background power) is < 0.7 × NEP_bkg. These values are summarised in Table 1.

Table 1. Summary of the predicted power levels under the best and worst atmospheric conditions, and the per-bolometer requirements in terms of power handling, NEP, transition temperature, thermal conductance and time constant. The minimum background power is estimated in the best observing conditions, for which zenith sky transmissions of 40 and 85 per cent at 450 and 850 µm have been adopted. For the maximum power levels, transmission values of 3 and 40 per cent have been used.

Parameter (units) | 450 µm | 850 µm
Minimum background power (pW) | 70 | 7
Maximum background power (pW) | 120 | 16
Total power handling/saturation power (pW) | 230 (± 10 per cent) | 50 (± 10 per cent)
Minimum background NEP, NEP_bkg (W s^1/2) | 2.7 × 10^−16 | 5.6 × 10^−17
Transition temperature, Tc (mK) | 190 | 130
Thermal conductance, G (nW K^−1) | 4.2 | 1.3
Time constant (ms) | < 1.5 | < 2.8
Frequency response
SCUBA-2 is designed to conduct large-area surveys by scanning the telescope in a rapid, overlapping pattern (Section 5.1). If the detector response is too slow, some of the higher spatial frequencies in the science signal will be attenuated. The telescope acts as a spatial filter, since the measured response is the convolution of the response of the astronomical signal and the telescope beam. The maximum frequency present in the system response is given by v_tel/(p f λ), where v_tel is the telescope scanning speed and p the plate scale (5 arcsec mm^−1). Since the telescope can scan at speeds up to 600 arcsec s^−1 with high positional accuracy, the resulting data will have maximum frequencies of 100 and 50 Hz at 450 and 850 µm, respectively. Thus the detector time constants must be < 1.5 and < 2.8 ms to avoid significant attenuation of the signal during fast scanning. More details of the derivation of the detector and array requirements are given in Hollister (2009).
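A quick check of these figures, assuming a simple single-pole detector response with its −3 dB point at the maximum signal frequency (the exact criterion used to set the specification may differ; see Hollister 2009):

```python
import math

v_tel = 600.0        # maximum scan speed, arcsec/s
p_scale = 5.0        # plate scale, arcsec/mm
f = 2.7              # final focal ratio

for lam_mm in (0.45, 0.85):
    f_max = v_tel / (p_scale * f * lam_mm)   # highest frequency in the timestream, Hz
    tau = 1.0 / (2 * math.pi * f_max)        # single-pole time constant with -3 dB at f_max
    print(f"{lam_mm * 1e3:.0f} um: f_max = {f_max:.0f} Hz, tau ~ {tau * 1e3:.1f} ms")
# -> ~99 Hz and ~52 Hz, consistent with the quoted 100 and 50 Hz and with
#    time constants of order 1.5-3 ms.
```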
Design and fabrication
The SCUBA-2 sub-arrays are based on transition edge sensors and time-division SQUID-based multiplexers developed at the National Institute of Standards and Technology in Boulder, Colorado (Irwin & Hilton 2005). The superconducting elements themselves are formed from a molybdenum/copper (Mo/Cu) bilayer of material, the relative thickness of each layer determining the superconducting transition temperature. The geometry of each sub-array is 32 columns by 40 active rows of bolometers. As shown in the schematic diagram of a single bolometer in Fig. 6, each subarray consists of two separate wafers, fabricated separately and then hybridised together.
Detector wafer
The "detector wafer" upper surface is implanted with phosphorus ions to provide an absorbing layer to incoming electromagnetic radiation, whilst the Mo/Cu bilayer on the lower surface forms a highly reflective backshort. For efficient radiation absorption the thickness of the wafer is made equal to an odd number of quarter wavelengths, 3λ/4 at 450 µm and λ/4 at 850 µm (Audley et al. 2004). An underside silicon nitride membrane mechanically supports each TES bolometer on the wafer and provides a weak thermal link to the cold bath. For ease of manufacture the sub-arrays for 450 and 850 µm are identical, except for the aforementioned thickness of the detector wafer. The final part of the detector wafer design is a heater circuit arranged in a thinline geometry around the edge of each bolometer (Section 3.4.2). In terms of fabrication, and to ease handling, a second silicon wafer is fusion bonded to the upper surface of the detector wafer (after the implantation stage). This is eventually removed after hybridisation with the MUX wafer, just prior to the post-processing step that thermally isolates each bolometer via a deep-etched 10 µm wide trench to the silicon nitride layer (Section 3.2.3).
Figure 6. A schematic representation of a single SCUBA-2 bolometer showing the principal components. These include the absorbing resistive layer on the top of the detector wafer and the deep trenches that thermally isolate each bolometer from its neighbours. The TES bilayer sits between the silicon nitride membrane and the bottom of the detector wafer. The heater circuit runs around the edge of each bolometer and is used for calibration. The multiplexer wafer containing the SQUID amplifier circuitry is indium bump-bonded to the detector wafer. Note: diagram is not to scale.

The detector operating temperature and thermal conductance of the link to the cold bath govern the theoretically achievable NEP, according to NEP_phonon = √(4γk_B T²G), where T is the operating temperature (approximated by the superconducting transition temperature, Tc), G is the thermal conductance and γ is a factor that accounts for the temperature gradient across the silicon nitride membrane (assumed to be 0.7 in this case; Mather 1982). Target values of Tc and G are chosen to provide background-limited performance, and also to contend with varying degrees of power as the sky emission changes. As discussed in Section 3.1.2, the 850 µm waveband sets the most stringent requirement in terms of NEP. In addition, to minimise sensitivity to temperature fluctuations, Tc should be roughly twice the expected base temperature. At 850 µm the value of Tc is therefore set to 130 mK. At 450 µm, where the sky power is higher, Tc can also be made higher, in line with the relaxed NEP requirement. Strictly, only a value of 380 mK is needed to ensure the 450 µm waveband is background limited. However, given the desire to keep the array fabrication common to both wavebands, and that a 100 mK operating temperature regime is needed in any case for 850 µm, a Tc of 190 mK was adopted at 450 µm, thereby giving even more margin on the required NEP. The value of G is given by G ∼ (nP)/T, where P is the saturation power and n is a power-law constant, typically 3.5 for silicon nitride membranes. Hence, the required values of G are 4.2 and 1.3 nW K^−1 at 450 and 850 µm, respectively. The target Tc and G values are given in Table 1.
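The quoted values of G and the resulting phonon NEPs can be checked directly from G ∼ nP/T and NEP_phonon = √(4γk_B T²G), taking the saturation powers and transition temperatures from Table 1:

```python
import math

k_B = 1.380649e-23            # Boltzmann constant, J/K
gamma, n = 0.7, 3.5           # membrane gradient factor and power-law index from the text

for label, P_sat, T_c in [("450 um", 230e-12, 0.190), ("850 um", 50e-12, 0.130)]:
    G = n * P_sat / T_c                              # thermal conductance, W/K
    nep = math.sqrt(4 * gamma * k_B * T_c ** 2 * G)  # phonon-noise-limited NEP
    print(f"{label}: G = {G * 1e9:.1f} nW/K, NEP_phonon = {nep:.1e} W s^1/2")
# -> G ~ 4.2 and 1.3 nW/K; NEP_phonon ~ 7.7e-17 and 3.0e-17 W s^1/2, below the
#    0.7 x NEP_bkg dark-NEP requirements (1.9e-16 and 3.9e-17 W s^1/2).
```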
Multiplexer wafer
The bottom or "multiplexer" wafer contains the input coupling coils and SQUID amplifiers of the readout circuit (as shown in Fig. 7). Current flowing through the TES generates a magnetic field at the first-stage SQUID (SQ1) through an input transformer on the wafer (the details of which are not shown in Fig. 7 for clarity). Each column of SQ1s is then coupled by a summing coil to the second stage SQUID (SQ2). The signals are amplified by a SQUID Series Array (SSA) that has 100 SQUIDS in series per channel and is located on the 1 K PCB of the cold electronics module (see Section 3.3). The MUX wafer is designed with 41 rows with the first row being a "dark row", without any corresponding TES element but containing a SQ1. The output of the dark row has been used to investigate common-mode noise on a percolumn basis, but is not currently implemented by default in the data reduction software. It is the SQUID MUX that makes large-scale TES arrays, such as SCUBA-2, practical by vastly reducing the wire count between the detectors and Figure 7. A two-column, two-row schematic representation of the SQUID time-division multiplexed readout scheme for SCUBA-2 (Doriese et al. 2007). Each TES is inductively coupled to its own first-stage SQUID (SQ1). A summing coil carries the signals from the SQ1s in a column to a common second-stage SQUID (SQ2). Rows of SQ1s are sequentially switched on using an address current (I ad ), so the signal from one TES at a time (per column) is passed to the SQ2. Finally, the output of each SQ2 is passed to a 100 SQUID series array amplifer (SSA) and then to the room temperature electronics (MCE). the room temperature electronics (deKorte et al. 2003). The SCUBA-2 MUX design reduces the wire count from 82000 to 2700 -a MUX factor of approximately 30. The MUX wafers are independently tested prior to hybridisation with the detector wafer, using a dedicated facility at the University of Waterloo that measures yield and critical currents (Ic) of the MUX wafers. Testing at this stage allows fabrication faults to be identified and corrected.
Hybridisation and post-processing
The detector and multiplexer wafers are hybridised together using a low-temperature indium bump-bonding process developed at Raytheon Vision Systems. The bump-bonds provide both thermal and electrical contact between the two wafers. There are 74 bumps surrounding each detector element (including 4 bumps that make the electrical connection between wafers for the bias and heater) and a further 100,000 bumps per sub-array around the perimeter of the wafers to give extra mechanical support. The first step of post-processing is to etch away the "handle wafer" to the level of the implanted absorbing layer. This is followed by thermally isolating each individual bolometer by deep etching a trench in the main detector wafer to the silicon nitride membrane (see Fig. 6; Walton et al. 2005). The trenches are 10 µm wide and either 60 or 100 µm deep depending on the thickness of the detector wafer (Section 3.2.1). Maintaining this width at the bottom of the trench across the entire sub-array is critical as this (largely) controls the value of G. At this stage a final electrical continuity check allows any remaining fabrication issues to be repaired (such as electrical shorts that may have been introduced in the hybridisation process). The final step in the array processing is to laser dice the circular wafer assembly into the final rectangular sub-array geometry.
Array integration
Completed sub-arrays are packaged as stand-alone modules that are mounted in the focal plane of the instrument. The sub-array is first epoxy-bonded onto an array support holder. As thermal conduction laterally through the sub-array is poor, the holder needs to make thermal contact with the entire back surface of the sub-array to provide sufficient cooling. The holder must also be made from a metal for effective thermal conduction, but this results in a large mismatch in thermal contraction between the holder and the (largely silicon) sub-array. The array holder is therefore designed in the form of a beryllium-copper block in which individual spark-eroded tines make contact with the underside of the MUX wafer through an epoxy bond, with the pitch of the tines being identical to that of the MUX unit cells. By allowing for differential thermal contraction during cooling, damage to the sub-array is avoided. Once attached to this "hairbrush" array holder the sub-array is integrated into the "sub-array module", making electrical connection to a ceramic PCB (see Fig. 1) through aluminium wire bonds. Phosphor bronze-clad niobium titanium (NbTi) wires, woven into Nomex® cables (manufactured by Tekdata Interconnections), carry the signals from the ceramic PCB to a 1 K PCB that houses the magnetically shielded SSAs. Further woven ribbon cables (monel-coated NbTi) take the signals to the warm electronics on the outside of the cryostat. The design of both sets of cable is critical to minimise any heat leaks from either 1 K or higher temperatures. Fig. 8 shows a photograph of 4 sub-array modules folded into position in a FPU.
Sub-array set up and bias optimisation
Before operation can begin the arrays must be set up in their optimum configuration. This process has three main steps, the first two of which are performed quite rarely as the parameters are fixed and unlikely to vary with time (at least on a per cool-down basis). The first stage is called "full array setup" and refers to the process of determining the optimal SQUID bias for each level of SQUIDs. It sets the SSA bias to Ic(max) for maximum modulation, the second-stage SQUID bias to 1.5-2 Ic(max) for optimal bandwidth, and the first-stage SQUID bias to the mode value of the 32 bias settings that gave maximum modulation of the SQ1 for each row. The second stage or "detector setup" refers to the process of selecting the optimal TES bolometer and heater biases for each array. This involves sweeping out the available parameter space and selecting operating values such that the NEP across a sub-array is minimised. The final step, and the one that is performed regularly, is "fast array setup" and refers to the process of determining the flux offsets for each level of SQUIDs with the SQUID, TES and heater biases set to their nominal operating values. The array setup process is more fully described in Gao et al. (2008).
Heater tracking
One of the innovative features of SCUBA-2 is the inclusion of a resistive heater arranged around the edge of every bolometer. The heaters play a fundamental role during observing in that they are used to compensate for changes in optical power as the sky background changes, enabling the TES bias point to remain constant for a wide range of sky powers. Furthermore, each bolometer is individually calibrated by measuring its responsivity using a small ramp of the heater current (Section 5.2). The optical power from the sky is directly measured using a process called "heater tracking". This involves running a servo loop on the heater to keep the bolometer output constant while opening and closing the cold shutter to the sky. Periodic heater tracking transfers the slow changes in sky power to the heater setting, thereby maintaining the optimal power balance in a bolometer as determined during the array setup. The absolute level of power depends on the heater resistor values. In practice the average heater current from approximately 100 of the most stable bolometers on a sub-array is monitored. Although the resistors are nominally 3 Ω, not all the power from the resistors is necessarily coupled to the TES film. Thus each sub-array has a "heater coupling efficiency factor" (Section 3.5.1) to ensure that the responsivity and hence the NEP (Section 3.5.2) is well-calibrated.
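The power balance maintained by heater tracking can be illustrated with a toy servo loop in which the heater power is adjusted so that the total (sky plus heater) power at the bolometer stays at the level fixed during setup. This is only a schematic of the idea described above; the loop gain and the power values are illustrative assumptions:

    def heater_track(p_sky_series, p_total_setpoint, p_heater0, gain=0.5):
        """Toy servo: adjust the heater power so that p_sky + p_heater stays
        at the setpoint as the sky power drifts. Returns the heater history."""
        p_heater = p_heater0
        history = []
        for p_sky in p_sky_series:
            error = (p_sky + p_heater) - p_total_setpoint
            p_heater -= gain * error   # slow sky drifts end up on the heater
            history.append(p_heater)
        return history

    # Illustrative numbers (pW): sky power slowly rising from 7 to 9 pW
    sky = [7.0 + 0.02 * i for i in range(101)]
    heater = heater_track(sky, p_total_setpoint=25.0, p_heater0=18.0)
    print(f"heater power moved from {heater[0]:.2f} to {heater[-1]:.2f} pW")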
Sub-array operation
The SCUBA-2 TES bolometers are operated in an approximate voltage-biased mode using a small 5 mΩ shunt resistor (R sh ) located on the MUX wafer (as shown in Fig. 7). The advantage of voltage-biasing the TES is that negative electro-thermal feedback (ETF) stabilises the bolometer against thermal runaway. An increase in background power warms the device and causes an increase in resistance, which in turn causes the bolometer current to decrease, thereby cooling the TES. Strong ETF essentially keeps the temperature of the TES constant, while providing a simple and direct relation between any applied power (optical or heater) and the current flowing through the device. Negative feedback also makes the bolometer self-biasing in terms of temperature in the transition. Variations in the incident power are automatically compensated for by changes in the bias current power on timescales shorter than the time-constant of the bolometer and via the heater for longer-term drifts (Section 3.4.2). As with all such devices, with too much applied power (optical, thermal or electrical bias) the TES becomes normal and ceases to work as a bolometer, and with too little applied power the TES becomes superconducting, with the same effect.
The current flowing through each TES element is measured by its own first stage SQUID (SQ1). The output of a SQUID is periodic with magnetic flux from the input coil, the periodicity being given by a flux quantum (deKorte et al. 2003). Since there is no unique output for a given detector current the SQ1 is used as a null detector. Current is applied by the room temperature electronics (Section 4.1) to the SQ1 feedback coil to null the field from the TES current in the input coil. By applying a flux locked loop, the applied feedback current is proportional to the current flowing through the TES. The dynamic range of the detector feedback circuit is limited by the available first stage SQUID feedback current and the mutual inductance of the SQ1 input coil. These parameters are carefully chosen to meet the stringent noise requirements of the instrument.
Sub-array performance
The first two science-grade sub-arrays were tested individually in a dedicated cryostat at Cardiff University (Bintley et al. 2010). All of the sub-arrays were then either re-tested or tested for the first time in the SCUBA-2 instrument at the telescope. This aimed to characterise the sub-array performance initially under dark conditions (i.e. with the shutter closed; as presented in this section) and then on the sky under observing conditions (Section 7.2). The power leakage around the shutter when closed is small (< 0.5 pW) compared with, for example, a minimum sky power of ∼7 pW at 850 µm.
Thermal and electrical characteristics
As discussed in Section 3.2.1 the operating (and transition) temperature and the thermal conductance to the cold bath dictate the achievable detector NEP and control the total power handling capability. The measurement of Tc and G starts with the bolometers in the normal state. The heater current is gradually reduced until the TES passes through its transition, with a small amount of bias power helping to identify the start of the transition. This process is then repeated at different temperatures. The measurement technique requires an accurate calibration of the heater resistance. As discussed in Section 3.4.2, the "effective" heater resistance will be lower than the design value because of the imperfect coupling between the heater and the TES element, and inevitably some heat will flow into the walls between bolometers. The effective resistance is determined from a series of I-V curves at different heater settings. The response of each sub-array is then normalised by a "heater coupling efficiency" factor based on optical measurements performed with ambient and LN2 temperature loads at the window of the cryostat and by observation of standard calibration sources (Section 6.2). This ensures that each sub-array reports equal power when observing the same source. Table 2 gives the mean Tc and G across each of the 8 sub-arrays. The detector time constants are measured by applying a square wave function to the heater and measuring the bolometer response using a fast readout mode available with the room temperature electronics. The measured time-constants are typically 1 ms.
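One standard way to reduce such measurements (a common power-law parameterization for silicon nitride thermal links; whether this is the exact reduction used for SCUBA-2 is an assumption here) is to fit the saturation power against bath temperature and derive G as the derivative at Tc. A minimal sketch with made-up but representative numbers:

    import numpy as np
    from scipy.optimize import curve_fit

    def p_sat(t_bath, k, n, t_c):
        """Power-law model for TES saturation power vs bath temperature."""
        return k * (t_c**n - t_bath**n)

    # Illustrative data: bath temperature (K) vs saturation power (pW)
    t_bath = np.array([0.060, 0.075, 0.090, 0.105, 0.120])
    p_meas = np.array([45.1, 41.3, 35.0, 25.4, 11.8])

    popt, _ = curve_fit(p_sat, t_bath, p_meas, p0=[5e4, 3.5, 0.13])
    k, n, t_c = popt
    g = n * k * t_c**(n - 1)      # thermal conductance dP/dT at Tc (pW/K)
    print(f"Tc = {t_c*1e3:.0f} mK, n = {n:.1f}, G = {g*1e-3:.2f} nW/K")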
Although test "witness" samples were taken during the Tc deposition processes, the measured values of Tc are, in most cases, 8-10 per cent higher than the specification of 190 (± 5) and 130 (± 5) mK for the 450 and 850 µm bolometers, respectively. The higher-than-expected Tc is not well understood, but one possibility is that the wafers suffered from annealing in the processing after the bilayer deposition stage. The variation in Tc on an individual sub-array is mainly radial, with values being lower in the (offset) centre position and typically increasing by 10 per cent towards the edges (Bintley et al. 2012b). This is a consequence of the sputtering process, in which the detector wafer spins as the copper and molybdenum are deposited.
The values of G are much higher than the design, typically by factors of 2-3. G tends to be more uniform across the array, although there is a slight radial dependence similar to Tc, being smaller in magnitude towards the centre. The reason why G is so much higher than the requirement is not well understood. Sub-array s8b, which was the first one fabricated (a year ahead of the others), appears to be somewhat anomalous in terms of having Tc and G much closer to the specification. G is controlled by the geometry of the silicon nitride membrane, and it is known that phonon transport across thin-film membranes at very low temperatures is a complex and poorly understood process that may, for example, depend on factors such as the roughness of the membrane surface. This is a particularly important consideration for the ultra-low NEP detectors needed for ground-based Cosmic Microwave Background experiments (where sensitivity is paramount over number of bolometers) and space-borne instruments of the future (where background power levels will be very low). From Table 2 it can be seen that the expected phonon noise NEP, based on measured values of G and Tc, is significantly higher than the requirement to ensure background limited performance at 850 µm (Section 3.1.2; Table 1).

Table 2. The measured thermal and electrical properties of the SCUBA-2 science-grade sub-arrays, including a comparison of the expected phonon noise limit with the measured NEP in the dark. The naming convention for the sub-arrays is s4a, s4b, s4c and s4d for the 450 µm focal plane, and s8a, s8b, s8c and s8d at 850 µm.
Dark NEPs
The noise equivalent power (NEP) is conventionally defined as the signal power that gives a signal-to-noise ratio (SNR) of unity for an integration time of 0.5 s. The dark NEP per bolometer is calculated from the ratio of the measured dark current noise (determined over a frequency range of 2-10 Hz) to the responsivity (calculated from a ramp of the heater current; Section 5.2). Note that the SCUBA-2 software calculates noise values for an integration time of 1 s, and so the measured values in Table 2 have been multiplied by √2 to allow for a comparison with the theoretical phonon NEP, which assumes a post-detection bandwidth of 1 Hz (equivalent to an integration time of 0.5 s), as given by the equation in Section 3.2.1. Simply taking the mean NEP of every bolometer on a sub-array would be skewed by poorly performing detectors (the distribution of values is non-Gaussian), and so a weighted mean is used, with NEP^-2 weights: NEP_weight = Σ(NEP^-2 · NEP)/Σ(NEP^-2). Since a dark-noise measurement is routinely carried out at the start of every astronomical observation, a huge database of measurements now exists. The values given in Table 2 are a median value of 6,500 dark noise measurements taken between 2012 February and 2012 July. The measured dark NEP is typically 2-4 times higher than the expected phonon noise limited NEP. The high NEPs could be due to excess low frequency noise and/or lower-than-expected responsivities. There are several possible mechanisms to generate excess noise over the 2-10 Hz range, including aliased noise from high frequency sources and effects due to magnetic flux trapped in the SQUIDs during cool-down (there is some evidence from the dark SQUID data that this could be a significant factor). The SCUBA-2 bolometers also exhibit excess noise at frequencies below 1 Hz, with a typical "1/f" knee at around 0.7 Hz. Although excess noise mechanisms are still under investigation, the source of 1/f noise is believed to be largely intrinsic to the detector itself and not associated with the SQUIDs or readout circuit (based on measurements of the dark SQUID data). This fundamental limitation is the main reason why fast scanning modes had to be developed to move the signal frequencies beyond the 1/f knee. Fig. 9 shows typical dark NEP "images" for each of the 8 sub-arrays and histograms of the NEP distribution. As can be seen, there are a number of non-functional bolometers. Some rows, columns and individual bolometers are faulty as a result of an issue during fabrication and show no response at all (e.g. a broken wire bond or non-functional SQ2 can knock out an entire column). Others are deliberately switched off in a "bad-bolometer" mask if, for example, they show signs of instability (e.g. an oscillating output). As well as the higher-than-expected Tc and G, the variation of these properties across a given sub-array has performance and operational implications. There are some sub-arrays (e.g. s8b and s8d) that show distinct gradients or variations in NEP as a result of this. A single TES and heater bias (per sub-array) is insufficient to overcome these variations, resulting in regions of the sub-array where the bolometers are not biased into transition. Furthermore, other bolometers are less-than-optimally biased in terms of minimum noise and maximum responsivity (i.e. minimum NEP). One possible way to smooth out the effects of the variation in Tc is a novel technique called "Tc flattening".
By applying a higher SQ1 bias on selected rows for a short period in the MUX cycle, the SQ1 can be used as a secondary heater, thereby allowing rows of bolometers to be more optimally biased. With reference to Fig. 9 this would particularly benefit sub-arrays s8b, s8d and s4a. However, it is a limited technique in that it can only work on a row of bolometers and cannot correct for any Tc variations across a row. At the time of writing this technique remains under investigation and is not currently implemented.
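The weighted dark-NEP average described above can be sketched as follows (NEP^-2 weights, plus the √2 conversion between the 1 s software convention and the 0.5 s definition noted earlier; the exact weighting used in the SCUBA-2 software is assumed here, and the numbers are illustrative):

    import numpy as np

    def weighted_nep(nep_1s):
        """Weighted mean NEP for one sub-array, down-weighting poor
        bolometers with NEP^-2 weights, and converting from a 1 s
        integration to the 0.5 s convention via a factor sqrt(2)."""
        nep = np.asarray(nep_1s, dtype=float)
        nep = nep[np.isfinite(nep) & (nep > 0)]    # drop dead bolometers
        w = nep**-2
        return np.sqrt(2.0) * np.sum(w * nep) / np.sum(w)

    # Illustrative non-Gaussian distribution: most bolometers near 6e-17
    # W Hz^-1/2, with a tail of noisy ones that would skew a plain mean
    rng = np.random.default_rng(0)
    neps = np.concatenate([rng.normal(6e-17, 1e-17, 900),
                           rng.uniform(2e-16, 1e-15, 100)])
    print(f"plain mean:    {neps.mean():.2e} W Hz^-1/2")
    print(f"weighted mean: {weighted_nep(neps):.2e} W Hz^-1/2")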
Overall yield and stability
The sub-array yields presented in Table 2 are the typical percentages of bolometers that contribute to an observation (these having been through a flat-fielding quality assurance test; Section 5.2). The average yield is about 70 per cent, which was the target goal at the start of the array design and fabrication process. Further quality assurance checks on the bolometers during the map-making process typically reject another 5 per cent of bolometers. The final map yields are therefore typically ∼65 per cent, corresponding to approximately 3700 (out of 5120) bolometers operational in a focal plane. Whilst minimising the NEP at the same time as maximising the yield of a sub-array remains work in progress, the SCUBA-2 working bolometer counts are by far the highest of any submillimetre instrument.

Figure 9. Top: Typical dark NEP images recorded with the shutter closed for each of the SCUBA-2 sub-arrays. These are derived from a "dark noise" measurement of ∼10 sec at the start of each observation. The colour scale on each image represents the NEP in units of W s^1/2. Bottom: Dark NEP histograms for each sub-array, each on the same scale for ease of comparison.
The sub-array stability has significantly improved from the time when the instrument was first installed on the telescope. In the early commissioning phase, bolometers often became unstable during even modest slews of the telescope. This was attributed to pickup in the SQUID summing coil as the sub-arrays bisect the local magnetic field. Additional magnetic shielding in the instrument (Craig et al. 2010) and enhancements to the array setup procedure improved the stability significantly, to such an extent that the majority of bolometers now remain stable during even the largest of scans. As a precaution, regular fast setups are still performed after a lengthy telescope slew. The SCUBA-2 bolometers can show occasional distinct jumps or steps in the time series data, most likely caused by cosmic ray events. The steps are now identified and corrected by an algorithm in the data reduction software (Section 8; Chapin et al. 2012). From repeated measurements it has been shown that the dark performance is usually stable and very repeatable, with less than 5 per cent variation in the dark NEP between successive measurements.
SIGNAL AND DATA PROCESSING
The overall signal and data flow for SCUBA-2 is summarised in Fig. 10. This also includes monitoring of the instrument (temperatures and pressures) as well as temperature and mechanism control (SC2CCS). Each sub-array is read out using room temperature electronics (known as multi-channel electronics, or MCE) which in turn are each controlled by a data acquisition computer (DA). The data from the arrays are transferred as frames at a rate of approximately 180 Hz and are combined by the data reduction pipelines into images. The raw data and reduced images are stored on disk and transferred to the data archive centre. More details on the integration of SCUBA-2 into the JCMT observatory control system can be found in Walther et al. (2010).
Room temperature electronics and data acquisition
The MCE is a self-contained crate that performs a number of functions. It sets the detector and heater bias, the bias (and feedback values as appropriate) for the three SQUID stages, controls the multiplexing rate and reads the DC-coupled signals from a 32 × 41 sub-array. In the standard data readout mode the MCE reports a low-pass filtered feedback value for every bolometer. There is one MCE crate per sub-array and the units are physically located on the outside of the main instrument. An address card in the MCE controls the time-division multiplexing by turning on one row of first-stage SQUIDs at a time (see Fig. 7). Each bolometer is revisited at a rate of 13 kHz (80 µs) during the multiplexing, which far exceeds the bolometer response time. Separate readout cards are coupled to a set of 8 columns. As the current through a bolometer changes, as a result of power changes during an observation, a digital feedback servo (PID loop) is used to calculate the appropriate change to the feedback values sent to the SQ1 stage. Hence, these feedback values represent a measurement of the optical power changes and are the nominal MCE outputs. The SQ1 signals of one column are summed in a coil coupled to one second-stage SQUID (Section 3.4.3). More information on the design and operation of the MCE can be found in Battistelli et al. (2008).
Each sub-array has a dedicated DA computer that sends commands to the MCE and receives data packets in return. The data acquisition software is based on a system running RTAI Linux. Data are packaged by the MCE into frames that consist of a house-keeping block followed by the data. The SCUBA-2 Real Time Sequencer (SC2RTS) coordinates and controls the tasks on each of the DA computers. The SC2RTS is a VME bus crate that takes commands from the main observatory RTS for coordinating instrument data-taking with the telescope actions. The sync box ensures that all sub-arrays clock out their data frames at exactly the same time. The data frames, together with house-keeping information, are packaged by the DA computers into data files that are then subsequently passed to the data reduction pipelines. With SCUBA-2 operating in scan mode these data-taking sequences can last up to 40 min and contain many hundreds of thousands of frames. Since these datasets can be very large they are broken down into smaller sub-files, typically written to disk every 30 sec. The files are written to disk in Starlink NDF format (Jenness et al. 2009) and contain header and house-keeping information. The 180 Hz frame rate for SCUBA-2 translates to a data rate of approximately 4 MB s^-1 (raw, uncompressed data) at each wavelength. In terms of a 12 hr observing night this is equivalent to typically 100 GB of compressed data.
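The quoted rates can be cross-checked with simple arithmetic, assuming (illustratively) 4-byte samples per bolometer; the exact frame word size is an assumption here:

    bolos_per_subarray = 32 * 41     # columns x rows read out by one MCE
    subarrays_per_band = 4
    frame_rate_hz = 180
    bytes_per_sample = 4             # assumed 32-bit feedback values

    rate = (bolos_per_subarray * subarrays_per_band
            * frame_rate_hz * bytes_per_sample)
    print(f"raw rate per waveband ~ {rate/1e6:.1f} MB/s")     # ~3.8 MB/s

    total = rate * 12 * 3600 * 2     # 12 hr night, both wavebands
    print(f"per night, uncompressed ~ {total/1e9:.0f} GB")    # ~330 GB

which is consistent with the quoted ∼100 GB per night once the data are compressed.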
Data reduction pipelines
Data processing pipelines have been developed for SCUBA-2 using the established ORAC-DR pipeline infrastructure (Cavanagh et al. 2008). There are four pipelines running simultaneously at the telescope, two for each wavelength (see Fig. 10), which provide rapid feedback to observers on the quality of the data in real time. The "quality assurance" (QA) pipeline processes data for assessing the instrument performance and produces sensitivity estimates, flat-field updates and sub-array noise performance plots. The "summit pipeline" is designed to produce a quick-look map of the data. For the summit pipeline to run in real time it uses a curtailed version of the data reduction software described in Section 8. The pipeline can also be run in a highly flexible and configurable off-line mode (the "science pipeline"), making use of science data derived from the whole night (or multiple nights), and the optimised data reduction recipes available from the SMURF map-maker (Section 8.1). The data files are transferred to the JCMT Science Archive (JSA; Gaudet et al. 2008) at the Canadian Astronomy Data Centre (CADC) in Victoria (Economou et al. 2011) within a few minutes of their appearance on disk. The primary aim of the JSA is to increase the productivity of the telescope by making science-ready data products available to the JCMT community. Hence, the data are reduced on a daily basis and fully processed images are made available to the project Principal Investigator within 24 hr.
OBSERVING MODES
Since the major goal of SCUBA-2 is to conduct wide-field surveys of the sky, the most efficient way to do this is to scan the telescope. To be able to recover large-scale structures in the presence of slowly-varying baselines (caused primarily by sky emission, extinction, and instrumental 1/f noise) the scan pattern must modulate the sky both spatially and temporally in as many different ways as possible. Spatial modulation is achieved by scanning the same region at a number of different position angles to achieve cross-linking. Temporal modulation is incorporated by visiting the same region on different timescales. A number of scan patterns have been developed giving optimum coverage within the constraints of telescope motion.
Scan modes
The telescope operates in a routine scanning mode for SCUBA-2, with the type of scanning pattern adopted depending on the size of the field to be observed. The scan pattern parameters (primarily the telescope speed and scan spacing) are chosen to ensure the effective integration times across the mapped region are as uniform as possible, as well as making it easy to define the shape of the region.
Small-field observations
For small fields, less than about the array footprint on the sky, constant-speed "daisy" scans are the preferred observing pattern. In this mode the telescope moves in a pseudo-circular pattern that keeps the target coordinate on the arrays throughout the integration. The telescope is kept moving at a constant speed to maintain the astronomical signal at a constant frequency. The pattern on the sky is defined by two parameters: R0, the radius of the requested map, and RT, the turning radius. The optimisation of the daisy observing mode involves identifying the parameters that provide a pattern that: (a) maximises the on-source integration time for a given elapsed time (minimising noise); and (b) gives uniform coverage within a 3 arcmin diameter at the centre of the image. The daisy scan pattern in Fig. 11 (top left and right) is optimised for the case in which R0 and RT are both equal to 0.25 times the array footprint.
The limitation of this mode is that the speed is constrained by the acceleration limit of the telescope (600 arcsec sec^-2 in true azimuth). When 1/cos(elevation) reaches ∼3 (an elevation of ∼70°) this acceleration limit is exceeded and the pattern tends to fail. Fig. 12 (top) shows the image plane and exposure time map for the standard daisy pattern. Although the daisy scan is designed for small and compact sources of order 3 arcmin or less in diameter, there is significant exposure time in the map to more than double this size. The daisy scan maximises the exposure time in the centre of the image. For example, an image in which the output map pixel sizes have been set to 2 and 4 arcsec (at 450 and 850 µm, respectively) has an exposure time in the central 3 arcmin region of ∼0.25 of the total elapsed time of an observation. Fig. 12 (top right) shows how the uniformity of the noise varies as a function of radius for a daisy scan. Given that the noise level increases by 40 per cent at a radius of 3 arcmin, this mode is useful for mapping point-like (unresolved) or compact objects of order 3-6 arcmin in diameter and less. All calibration sources (Section 6) are observed with the daisy scanning mode.
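The actual daisy trajectory is generated by the telescope control software, but a simple rose curve gives the flavour of a pseudo-circular, centre-weighted track; the sketch below is purely illustrative and is not the real JCMT pattern (in particular, it is not constant-speed):

    import numpy as np

    def toy_daisy(r0, n_petals=5, n_points=2000):
        """Toy 'daisy' track: a rose curve r = r0*sin(k*theta), which keeps
        the target near the array centre while sweeping petals around it."""
        theta = np.linspace(0.0, 2.0 * np.pi, n_points)
        r = r0 * np.sin(n_petals * theta)
        return r * np.cos(theta), r * np.sin(theta)

    x, y = toy_daisy(r0=1.5)   # r0 in arbitrary units (e.g. arcmin)
    # Exposure is concentrated at the centre, where all petals cross:
    central = np.hypot(x, y) < 0.3 * 1.5
    print(f"fraction of samples in the central region: {central.mean():.2f}")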
Large-field observations
For fields just larger than the instrument field-of-view up to degree-sized scales, a map pattern called "pong" is used as the scan mode. In this case the map area is defined to be square and the telescope tracks across the defined sky area, filling it in by "bouncing" off the walls of this area. A further innovation is to round off the corners, making the transition at the walls curved and thereby keeping the telescope acceleration more uniform ("curvy pong"). Once a pattern is completed the map is rotated and the pattern repeated at a new angle. This fulfils the criterion of cross-linking scans and provides as much spatial modulation as possible. Fig. 11 (bottom left and right) shows an example telescope track for a pong map with a diameter of 30 arcmin. The parameter space of the telescope speed, the spacing between successive rows of the basic pattern and the number of rotations has been optimised to give the most uniform coverage across the requested field. Fig. 12 (bottom) shows the image plane and exposure time map for a 30 arcmin diameter pong pattern. The pong scan maximises the field coverage and maintains even time uniformity. In this case output map pixel sizes of 2 and 4 arcsec (at 450 and 850 µm, respectively) give an exposure time in the central 3 arcmin region that is ∼0.014 of the elapsed time. Fig. 12 (bottom right) shows how the uniformity of the noise varies as a function of radius for a pong scan. The noise remains uniform across the field, never increasing above 20 per cent relative to the centre of the map, out to the edge of the field.
Flat-fielding
The SCUBA-2 sub-arrays are flat-fielded using responsivity measurements derived from fast heater ramps. The bolometer signal current is determined from a series of different heater outputs consisting of a triangle wave of order a few pW (peak-to-peak) about a reference level. The inverse of a linear fit to the current as a function of heater power is the flat-field solution, with the responsivity (A W^-1) being the gradient. Bolometers are rejected if they do not meet specific responsivity criteria (i.e. are deemed to be physically too low or high in value), if their response is non-linear, or if the signal-to-noise ratio (SNR) of the measurement is poor. A flat-field measurement is performed at the start and end of every observation using a 5-10 s repeating current ramp. The resulting flat-field is applied in the data reduction process for science maps (Section 8.1). The stability of the flat-field is usually excellent, with less than 1 per cent variation in the number of bolometers meeting the acceptance criteria and less than 2 per cent variation in mean responsivity on a sub-array over an entire night of observations.
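The fit at the heart of the flat-field can be sketched in a few lines; the acceptance thresholds and all the numbers below are illustrative assumptions, not the actual SCUBA-2 criteria:

    import numpy as np

    def flatfield_fit(p_heater_pw, i_bolo_ua, min_snr=10.0,
                      resp_bounds=(0.1, 10.0)):
        """Linear fit of bolometer current (uA) against heater power (pW):
        the responsivity is the gradient and the flat-field solution is the
        inverse of the fitted line. Returns (responsivity, accepted?)."""
        coeffs, cov = np.polyfit(p_heater_pw, i_bolo_ua, 1, cov=True)
        resp, err = abs(coeffs[0]), np.sqrt(cov[0, 0])   # uA/pW (= MA/W)
        snr = resp / err if err > 0 else np.inf
        ok = resp_bounds[0] < resp < resp_bounds[1] and snr > min_snr
        return resp, ok

    # Illustrative triangle-wave heater ramp, ~2 pW peak-to-peak
    rng = np.random.default_rng(1)
    p = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 1.5, 1.0, 0.5, 0.0])   # pW
    i = -2.0 * p + 50.0 + rng.normal(0.0, 0.02, p.size)           # uA
    resp, ok = flatfield_fit(p, i)
    print(f"responsivity ~ {resp:.2f} uA/pW, accepted: {ok}")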
Pointing and focussing
The telescope is accurately pointed and focussed using images derived from short daisy scans of a bright, compact source. For pointing, a fitted centroid to the resultant image generates offsets from the nominal position. These are then passed back to the telescope control system to make adjustments in the azimuth/elevation position. For focus, an image is taken for each of 5 different offsets of the secondary mirror (in three axes). A parabolic fit to the peak signal in each image generates an optimum focus offset, which is passed to the secondary mirror controller.
CALIBRATION
The calibration of ground-based submillimetre observations can be particularly problematic because of changes in the atmospheric opacity on short timescales (Archibald et al. 2002). The process of calibrating an observation requires two major steps. Firstly, the attenuation of the astronomical signal by the atmosphere is determined preferably along the line-of-sight. Secondly, astronomical images are calibrated by reference to a flux standard. The companion paper Dempsey et al. (2012) describes the calibration of SCUBA-2 data in more detail.
Extinction correction
The transmission of the atmosphere in the submillimetre is highly wavelength dependent (as shown in Fig. 3) and depends primarily on the level of PWV. At the JCMT weather conditions are categorised in terms of a "weather band" with a scale from 1 to 5, with 1 being the driest and 5 the wettest. The weather band is derived from either direct measurements made at 225 GHz using a radiometer at the nearby CSO, or from a dedicated water vapour monitor (WVM) at the JCMT (Wiedner et al. 2001). The "CSO tau" measurement is derived from a fixed azimuth sky-dip (due south) and reports the zenith opacity every 15 min. However, since the PWV can change on very short timescales at the JCMT, it is monitored at a faster rate using a separate WVM looking directly along the line-of-sight of the observation. The WVM estimates the level of PWV from the broadening of the 183 GHz water line in the atmosphere at intervals of 1.2 s. Scaling the WVM measurement to a zenith opacity value shows a very close correlation to the "CSO tau", particularly during the most stable parts of the night (9 pm until 3 am) (Dempsey et al. 2012).
Over the commissioning period the extinction relationships (at each SCUBA-2 waveband) with PWV, and hence τ225, have been derived by analysing observations of sources of known flux density. The following relationships have been derived between the opacities at the SCUBA-2 wavebands and the 225 GHz scaled measurements from the WVM:

τ450 = 26.0(τ225 − 0.012), (1)

τ850 = 4.6(τ225 − 0.0043). (2)

These relationships are subsequently used in the extinction correction stage during the process of making maps (Section 8.1).
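Applying these relations is straightforward; the sketch below converts a 225 GHz zenith opacity to band opacities and then to a line-of-sight transmission using the standard plane-parallel airmass approximation (the example τ225 value is illustrative):

    import math

    def scuba2_tau(tau_225):
        """Opacities at the SCUBA-2 wavebands from the 225 GHz zenith
        opacity, using relations (1) and (2) above."""
        return {450: 26.0 * (tau_225 - 0.012),
                850: 4.6 * (tau_225 - 0.0043)}

    def transmission(tau_zenith, elevation_deg):
        """Line-of-sight sky transmission, airmass ~ 1/sin(elevation)."""
        airmass = 1.0 / math.sin(math.radians(elevation_deg))
        return math.exp(-tau_zenith * airmass)

    taus = scuba2_tau(tau_225=0.065)   # illustrative mid-grade conditions
    for band, tau in taus.items():
        print(f"{band} um: tau = {tau:.3f}, "
              f"transmission at 60 deg elevation = {transmission(tau, 60):.2f}")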
Flux calibration
Primary calibration is taken from brightness temperature models of Mars (Wright 1976) and Uranus (Moreno, R., "Neptune and Uranus brightness temperature tabulation", ESA Herschel Science Centre, ftp://ftp.sciops.esa.int/pub/hsc-calibration, 2010), and has been extended to include a number of compact "secondary" sources evenly spread over the sky. These secondary calibrators can take the form of late-type stars or compact Hii regions. A flux conversion factor (FCF) is derived from the daisy observation of a standard source and converts the raw bolometer signals into Janskys. The calibration of the bolometer heater (Section 3.4.2) ensures that each sub-array in a focal plane reports the same optical power when observing an astronomical source, and hence only a single FCF is needed at each waveband. The FCF depends on the photometry required for a particular source morphology, and values are derived that are appropriate for estimating either the peak flux (usually applicable for an unresolved, point source) or the integrated flux (for an extended source). A database of secondary calibrators continues to be established to cover as much of the right ascension range as possible (Dempsey et al. 2012).
Typical observing sequence
Each SCUBA-2 observation, based on either a daisy or pong observing pattern, follows an identical sequence. Once the telescope has been slewed to the appropriate source a fast array setup is carried out (Section 3.4.1). An observation then starts with a 10 sec dark-noise measurement undertaken with the shutter closed. As the shutter opens to the sky, the power change is dynamically balanced by the heater tracking process (Section 3.4.2). Once the shutter is fully open and the power balance is stable, a flat-field measurement is carried out (Section 5.2). The heater carries out another small track at the end of the flat-field to compensate for any final sky power change. A science observation is then undertaken and typically lasts 30-40 min, although pointing, focussing and calibration observations are much shorter (typically 5 min). At the end of the observation there is another heater track before a final flat-field is carried out. Finally, the shutter closes and heater tracking restores the power balance to the dark value.
On-sky sensitivity
The sensitivity on the sky is represented by the noise equivalent flux density (NEFD), which is the flux density that produces a signal-to-noise of unity in 1 s of integration time. At the shorter submillimetre wavelengths the NEFD is particularly heavily dependent on the weather conditions. The NEFD values are calculated in a similar way to the dark NEP (see Section 3.5.2). A sky NEP value for each bolometer is calculated from the time series of the first sub-scan of an observation, and the responsivity as before from the flat-field measurement. The NEFD is then given by NEFD = (NEP_sky × FCF_λ)/η, where the FCF is the flux conversion factor determined from a flux calibrator (Section 6.2) and η is the sky transmission. As in the case of the dark NEP, a weighted average is used for the corresponding sky value. Fig. 13 shows how the NEFD varies as a function of sky transmission for both of the SCUBA-2 wavebands. In terms of a direct bolometer-to-bolometer comparison, the SCUBA-2 values are 5-10 per cent better than SCUBA at 450 µm, and about the same at 850 µm. The NEFD values in "good" observing conditions are typically 400 and 90 mJy s^1/2 at 450 and 850 µm, respectively, at least a factor of 2 worse than predicted based on a model of the instrument, telescope and Mauna Kea sky. A major contributor to these sensitivity figures is undoubtedly the higher-than-expected measured dark NEP (Section 3.5.2), although it is also possible that there are contributions from the instrument and/or telescope that are not accounted for. This remains under investigation.
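Putting these pieces together, a minimal sketch of the NEFD calculation follows; the FCF value is an assumed placeholder chosen only to land near the quoted 850 µm figure, and the units are schematic (the FCF must convert the NEP's power units into flux density):

    def nefd(nep_sky, fcf, eta):
        """Noise equivalent flux density (Jy s^1/2) from the weighted sky
        NEP (W Hz^-1/2), a flux conversion factor (Jy/W) and the sky
        transmission eta."""
        return nep_sky * fcf / eta

    nep_sky = 2.0e-16      # W Hz^-1/2, illustrative weighted sky NEP
    fcf = 3.2e14           # Jy per W -- assumed value for illustration
    eta = 0.72             # sky transmission
    print(f"NEFD ~ {nefd(nep_sky, fcf, eta)*1e3:.0f} mJy s^1/2")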
Sensitivity limits and mapping speed
The RMS noise in a map has been shown to integrate down as expected, scaling as t^-1/2, as shown in Fig. 14 for a ∼7 hr observation. In practical terms, a daisy field of 3 arcmin in diameter can reach a level of ∼1 mJy at 850 µm in around 3 hr (in good conditions and including observing overheads), whilst for a 1 deg diameter field a sensitivity limit of 6 mJy can be obtained in about 7 hr. Table 3 lists a selection of detection limits for the SCUBA-2 wavebands for various observing mode configurations.
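The t^-1/2 scaling gives a quick rule of thumb for planning: the time to reach a target RMS is (NEFD/σ)². A one-line sketch, using the quoted good-weather 850 µm NEFD (this per-bolometer estimate ignores overheads and multi-bolometer co-adding, so it only indicates the right order of magnitude):

    def time_to_rms(nefd_mjy, target_rms_mjy):
        """Integration time (s) for a target map RMS, from the standard
        sigma = NEFD / sqrt(t) scaling."""
        return (nefd_mjy / target_rms_mjy) ** 2

    t = time_to_rms(nefd_mjy=90.0, target_rms_mjy=1.0)
    print(f"~{t/3600:.1f} h of effective integration per map point")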
Since the per-bolometer NEFDs are very similar to SCUBA, the SCUBA-2 mapping speed improvement is largely governed by the increase in detector count. Other factors include significantly lower observing overheads for the SCUBA-2 mapping modes than for the scan strategies used by SCUBA (e.g. no sky chopping). This results in mapping speed improvements of 100 and 150 times that of SCUBA at 450 and 850 µm, respectively.

Image quality

Fig. 15 shows high signal-to-noise images of the beam shapes at 450 and 850 µm, based on, respectively, 54- and 80-image mosaics of daisy scans of Uranus (typical disk diameter of 3 arcsec). The beams are fitted using two Gaussian components, namely a narrow main beam and a wider secondary component. The main-beam widths (full-width at half-maximum), after de-convolving the Uranus disc, are 7.9 and 13.0 arcsec at 450 and 850 µm, respectively, whilst the secondary component has widths of 25 and 49 arcsec. It is estimated that the main-beam widths are 6 and 2 per cent higher than expected from a perfect optical system. The two-component fit reveals that the main beam has an amplitude of 94 and 98 per cent at 450 and 850 µm, respectively, which equates to integrated power levels of 60 and 75 per cent (i.e. 40 and 25 per cent of the total power lies in the secondary component). The large ring visible in Fig. 15 is due to scalloping of the telescope panels, since the focal lengths of the primary dish and panels are not exactly the same. This has an amplitude ∼0.1 per cent of the peak at 450 µm. Further details of the beam characterisation can be found in Dempsey et al. (2012).
To reconstruct maps to the highest possible degree of accuracy and image quality, the relative position on the sky of each bolometer in the focal plane must also be determined. This is achieved by scanning every single bolometer in each focal plane across a bright source (such as Saturn or Mars), so that a map can be created from each bolometer. Since the positions of the planet and telescope are known, the relative positions of the bolometers can be determined. The results also demonstrate that there is very low field distortion (∼2 per cent) across each focal plane.
DATA REDUCTION AND MAP-MAKING
SCUBA-2 data are reduced and images constructed using the Submillimetre User Reduction Facility (SMURF; see the companion paper Chapin et al. 2012), a software package written using the Starlink software environment (Jenness et al. 2009). By utilising SMURF within the data reduction pipeline, fully-calibrated, publication-quality images can be obtained.

Figure 15. Measured beam using daisy scanning of Uranus. Both plots have a log colour table to show the detail in the diffraction pattern. Top: 450 µm with contours set at 0.1 (white), 1 (green) and 10 per cent (white) of the peak amplitude. Bottom: 850 µm with contours set at 0.1 (white), 1 (black) and 10 per cent (blue).
Dynamic iterative map-maker
The foundation of map-making within SMURF is an iterative technique that removes most of the correlated noise sources in parallel with a simplified map estimator. To accomplish this an overall model of the observed signal is constructed, breaking down the contributing components as appropriate. For example, the signal will have a time-varying component due to atmospheric extinction, a fixed astronomical source signature and various other sources of noise. The typical map-making algorithm is shown in the flowchart in Fig. 16. The initial step in the map-maker takes the individual sub-scans and combines the data into a contiguous time-series. Pre-processing applies the flat-field correction, re-samples the data at a rate that matches the requested output map pixel scale, and finally cleans the data by repairing spikes/DC steps and subtracting off a polynomial baseline from each bolometer.

Figure 16. The SMURF map-making algorithm presented as a flowchart showing how the raw data are first pre-processed and then iteratively form an output map through the application of a series of model components (Chapin et al. 2012).
The iterative section then commences with estimating and removing a common-mode signal (com), usually dominated by the atmosphere, and scaling it accordingly for each bolometer (gai) so that a common calibration can be applied later for an entire sub-array. The com model component is the average signal from all working bolometers on a sub-array at each time-step, and flags bolometers as bad if their response does not resemble that from the majority of other bolometers. A time-dependent extinction correction factor (ext) is then applied based on measurements from the WVM (Section 6.1). The data are subsequently Fourier transformed and a high-pass filter is applied to remove residual excess low-frequency noise (flt). The resulting cleaned and extinction-corrected data are re-gridded to produce an initial map estimate using nearest-neighbour sampling. Since each map pixel will contain many bolometer samples the noise is significantly reduced compared to the raw time series data. The map is then projected back into the time domain, thus producing the ast model containing the signals that would be produced in each bolometer by the signal represented in the map. This model is then removed from the time-series data, giving a residual signal from which the noise for each bolometer can be determined (noi), with an associated value of χ² used to monitor convergence. Since each signal component is slightly biased by signals from other components the entire process is iterated using a convergence tolerance. If the map has not changed from the previous iteration within this tolerance then the final output map is produced. If the map has changed, then the process is repeated.
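The skeleton of this loop can be written compactly. The sketch below keeps only the com, ext and ast models and omits gai, flt, noi and all flagging, so it is a minimal illustration of the iterative structure rather than the actual SMURF implementation:

    import numpy as np

    def iterative_map(ts, pix, n_pix, ext, n_iter=10):
        """Toy SMURF-style estimator. ts: (n_bolo, n_samp) cleaned time
        streams; pix: integer map-pixel index of each sample; ext: the
        extinction correction factor. Note that the map's DC level is
        degenerate with com and is not recovered."""
        ast_ts = np.zeros_like(ts)          # astronomical model, time domain
        for _ in range(n_iter):
            resid = ts - ast_ts             # remove current sky model
            com = resid.mean(axis=0)        # com: average over bolometers
            cleaned = (resid - com) * ext   # ext: extinction-correct
            # map estimate: many samples per pixel beat down the noise
            num = np.bincount(pix.ravel(), weights=cleaned.ravel(),
                              minlength=n_pix)
            hits = np.bincount(pix.ravel(), minlength=n_pix)
            mapest = np.divide(num, hits, out=np.zeros(n_pix),
                               where=hits > 0)
            ast_ts = mapest[pix] / ext      # ast: project map back to time
        return mapest

    # Toy demo: 3 bolometers, 1200 samples, 16 map pixels, 1 bright pixel
    rng = np.random.default_rng(2)
    pix = rng.integers(0, 16, size=(3, 1200))
    sky = np.zeros(16); sky[5] = 1.0
    ts = sky[pix] / 1.4 + 3.0 + rng.normal(0.0, 0.1, (3, 1200))
    print(np.round(iterative_map(ts, pix, 16, ext=1.4), 2))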
The map-making process is controlled by versatile configuration files that contain all the model settings and user-definable control parameters. For example, the high-pass filter cut-off is one parameter that can be easily adjusted. The convergence tolerance can also be bypassed by setting a fixed number of iterations. However, in reality, there are a small number of standard configuration files that are customised for use with different types of observations. More details can be found in Chapin et al. (2012).
LEGACY SURVEYS AND EARLY SCIENTIFIC RESULTS
The key scientific driver for SCUBA-2 is the ability to carry out large-scale surveys of the submillimetre sky. Six "legacy-style" survey programmes have been developed that are very broad-based, ranging from studies of debris disks around nearby stars to galaxy populations and evolution in the early Universe. These surveys have been approved to run from 2010 February 1 until 2014 September 30 (further information on the survey programme can be found at http://www.jach.hawaii.edu/jcmt/surveys).

Although the main strength of SCUBA-2 is in wide-field mapping, the camera can also image compact sources very quickly. Fig. 17 is a short (2 hour) 850 µm observation of the famous debris disc surrounding the main sequence star Fomalhaut (Holland et al. 2003), which extends to just under 1 arcmin in length. Debris discs arise from collisions amongst planetesimals in which the dusty residue spreads into a belt around the host star. Their study reveals much about the material left over after planet formation, the size of such systems compared with our own, the clearing out of comets that preceded the appearance of life on Earth, and even the detection of distant debris-perturbing planets (e.g. exo-Neptunes) that cannot be found by any other technique. The SCUBA-2 image took just one-fifth of the time of the previous SCUBA map to the same S/N level. Given that the per-bolometer NEFD values are very similar, the gain over SCUBA for compact and point-like sources is largely due to not having to employ sky chopping to remove the atmosphere and "jiggling" of the secondary mirror to produce a Nyquist-sampled image. The disc is comparable to the size of the Kuiper Belt in our own Solar System, and studying such discs therefore gives valuable insight into planetary system formation and evolution in our Galaxy.

Figure 17. SCUBA-2 image of the debris disc around Fomalhaut at 850 µm. Contours start at 3-σ and increase in steps of 2-σ. The "star" symbol shows the position of Fomalhaut with respect to the disc. The diameter of Pluto's orbit in our Solar System is also shown, indicating that the disc may represent a Kuiper Belt-like structure around the star. Image provided courtesy of the SONS Legacy Survey team.
Wide-field imaging of sites of star formation in our own Galaxy is one of the key elements of several of the legacy surveys. A full understanding of the star formation process also requires an appreciation of how the rare, massive stars form and shape the evolutionary history of giant molecular clouds and subsequent star and planet formation. The early stages of high-mass star formation are not well understood, largely because they occur so fast and are consequently rare. A census of high-mass star formation throughout the Galaxy is possible with SCUBA-2. Fig. 18 shows a SCUBA-2 map at 850 µm of the W51 star-forming region, containing a ridge of massive star-forming cores running parallel to the Galactic Plane. Studies such as this will show the rarest of evolutionary phases and allow an understanding of what defines the highest-mass end of the stellar initial mass function. The sensitivity of SCUBA-2 equates to a mass sensitivity of 1 M⊙ at a distance of 3 kpc and 180 M⊙ at 40 kpc, sufficient to detect all the significant high-mass and cluster-forming regions throughout the Galaxy.

Figure 18. The high-mass star-forming region W51 in Aquila as observed by SCUBA-2 at 850 µm. The ridge of compact cores extending to the lower right in the figure runs parallel to the Galactic plane. The field size of this image is ∼1 deg and the dynamic range is such that cores ranging in flux density from 40 Jy to <20 mJy are detected. Image provided courtesy of the SCUBA-2 commissioning team.
Another key area of the survey programme is to image the cold dust in nearby spiral galaxies. The bulk of star formation activity in nearby spirals is often missed by IR studies, since most of the dust mass resides in cold, extended, low-surface-brightness discs, often far from the galactic nucleus. The studies so far have revealed that up to 90 per cent of the total dust mass can be located within galactic discs. Dust temperatures are around 10-20 K, and so the dust radiates strongly in the submillimetre region. Fig. 19 shows a Hubble Space Telescope (HST) image of the famous "Whirlpool galaxy" M51 (and associated companion NGC 5195) overlaid with SCUBA-2 colours (blue for 450 µm; red for 850 µm). SCUBA-2 clearly detects the nuclei of these two interacting galaxies, and the fainter 850 µm emission traces the optically-hidden dust lanes. Furthermore, the imaging power and spatial resolution achievable allow the study of regions of hot star formation in the outer arms of the spiral galaxy.
Figure 19. A composite image of the famous Whirlpool Galaxy with SCUBA-2 colours (blue for 450 µm; red for 850 µm) superimposed on a green-scale HST image. SCUBA-2 traces star formation via the emission from cold dust in the outermost regions of the galaxy. The SCUBA-2 image is provided courtesy of Todd MacKenzie, and the HST image is credited to NASA, ESA, S. Beckwith (STScI) and the Hubble Heritage Team (STScI/AURA).

The final example of the versatility of SCUBA-2 is an observation of one of the most massive known cluster lenses, Abell 1689 at z = 0.18. Rich clusters are nature's telescopes, which can be used to study distant, star-forming galaxies more efficiently. Figure 20 is an 850 µm deep daisy map of the Abell 1689 cluster field. The total field is approximately 13 arcmin in diameter and is known to contain over 50 lensed sources spanning a redshift range from 1-6. When SCUBA observed this field it detected 2 sources with an SNR greater than 4 and another 5 with tentative 3-σ detections (Knudsen et al. 2008). SCUBA-2 imaged the field in a fraction of the time and detects 15 sources at greater than 5-σ, with many dozens at greater than 3-σ, confirming a mapping speed of over 100× SCUBA. SCUBA-2 is clearly a very powerful instrument for studying the distant Universe.
CONCLUSIONS
SCUBA-2 is the world's largest-format camera for submillimetre astronomy. It represents a major step forward in submillimetre instrumentation in terms of the detector and array architecture, observing modes and dedicated data reduction pipelines. The new technologies developed for SCUBA-2 represent a major strategic investment on behalf of the JCMT and instrument funding agencies. The instrument has already shown incredible versatility, with astronomy applications that are very broad-based, ranging from the study of Solar System objects to probing galaxy formation in the early Universe. An imaging polarimeter (Bastien et al. 2011) and a Fourier transform spectrometer (Gom & Naylor 2010) will also be available, allowing the mapping of magnetic field lines and medium-resolution imaging spectroscopy, respectively. SCUBA-2 maps large areas of sky 100-150 times faster than SCUBA to the same depth, and such improved imaging power will allow the JCMT to fully exploit the periods of excellent weather on Mauna Kea. SCUBA-2 is currently undertaking a series of 6 unique legacy surveys for the JCMT community. These are highly complementary to the wider, but shallower, surveys undertaken by Herschel, and are vital to fully exploit the capabilities of the new generation of submillimetre interferometers and future facilities such as ALMA, CCAT and SPICA.

Figure 20. The massive lensing galaxy cluster Abell 1689 observed by SCUBA-2 at 850 µm. The central inset shows the approximate region observed by SCUBA and the top-right inset shows an HST ACS image of the field. The new SCUBA-2 map detects 15 far-IR sources seen through the massive core of this cluster. The foreground mass amplifies the fluxes of the background sources, enabling fainter sources to be detected, below the blank-field confusion limit of the JCMT. Image is provided courtesy of the SCUBA-2 Guaranteed-Time team.
"year": 2013,
"sha1": "e4879e1c1f5bfe7982d9dc6108fe0487a19fab86",
"oa_license": null,
"oa_url": "https://authors.library.caltech.edu/104887/1/sts612.pdf",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "30ccc07053cece8ebb6764ba1c95cbd25383b602",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
Phosphorylation at the disordered N-end makes HuR accumulate and dimerize in the cytoplasm
Abstract Human antigen R (HuR) is an RNA-binding protein mainly involved in maintaining the stability and controlling the translation of mRNAs critical for immune response, cell survival, proliferation and apoptosis. Although HuR is a nuclear protein, its translation-related function occurs in the cytoplasm, where the oligomeric form of HuR is more abundant. However, the regulation of the nucleo-cytoplasmic transport of HuR and its connection with protein oligomerization remain unclear. In this work, we describe the phosphorylation of Tyr5 as a new hallmark for HuR activation. Our biophysical, structural and computational assays using phosphorylated and phosphomimetic HuR proteins demonstrate that phosphorylation of Tyr5 at the disordered N-end stretch induces global changes in HuR dynamics and conformation, modifying the solvent-accessible surface of the HuR nucleo-cytoplasmic shuttling (HNS) sequence and releasing regions implicated in HuR dimerization. These findings explain the preferential cytoplasmic accumulation of phosphorylated HuR in HeLa cells, helping to clarify the mechanisms underlying HuR nucleus-cytoplasm shuttling and its later dimerization, both of which are relevant in HuR-related pathogenesis.
Quantitative PCR (qPCR)
2 μL of cDNA were mixed with specific primers and SYBR Select master mix (4472908, Invitrogen) to a final volume of 7 μL, in MicroAmp Optical 384-Well Reaction Plates (4309849, Applied Biosystems). Each reaction was performed in triplicate using the ViiA 7 Real-Time PCR system (Applied Biosystems). qPCR conditions involved an initial denaturation step (90 s at 95 ºC), followed by 40 cycles of annealing (15 s at 95 ºC and 1 min at 59 ºC), and a final extension phase (15 s at 95 ºC, 1 min at 60 ºC and 15 s at 95 ºC). Ct values were extrapolated from the melt curve, and gene expression levels were normalized to GAPDH housekeeping expression using the 2^-ΔΔCt method. The GAPDH sequence was detected using the oligonucleotide pair GAPDH-fw/-rv. PTMA was detected using PTMA-fw/-rv.
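For reference, the relative quantification step reduces to a few lines; this is a minimal sketch of the Livak 2^-ΔΔCt calculation, and the Ct values below are illustrative rather than data from this study:

    def relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
        """Livak 2^-ddCt relative quantification: the target gene is
        normalized to the reference (here GAPDH) and referred to the
        control condition."""
        d_ct_sample = ct_target - ct_ref
        d_ct_control = ct_target_ctrl - ct_ref_ctrl
        return 2.0 ** -(d_ct_sample - d_ct_control)

    # Illustrative Ct values (triplicate means) for PTMA vs GAPDH
    fold = relative_expression(ct_target=24.1, ct_ref=18.3,
                               ct_target_ctrl=25.6, ct_ref_ctrl=18.4)
    print(f"PTMA fold change vs control: {fold:.2f}")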
Circular dichroism spectroscopy
Thermal stability circular dichroism (CD) spectra were recorded using a Jasco J-815 spectropolarimeter equipped with a Peltier temperature control system. Thermal unfolding was monitored between 10 ºC and 105 ºC, with a heating rate of 1 ºC min^-1, by recording the CD signal in the far-UV (185-250 nm) in a 1 cm quartz cuvette. For these assays, 3 μM HuR protein was diluted in 20 mM sodium phosphate pH 7, supplemented with 50 μM tris(2-carboxyethyl)phosphine (TCEP) (20491, ThermoScientific). The midpoint melting temperature (Tm) values were obtained from fits of the integrated experimental data in the 233-241 nm region. Data were processed and analysed using Origin 2019b (OriginLab).
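The Tm extraction described above amounts to fitting a sigmoid to the integrated CD signal as a function of temperature. A minimal sketch with synthetic data (a simple two-state model; the actual fitting function used in Origin may differ):

    import numpy as np
    from scipy.optimize import curve_fit

    def two_state_melt(t, s_folded, s_unfolded, t_m, width):
        """Two-state sigmoid for a thermal unfolding curve monitored by
        the integrated far-UV CD signal (233-241 nm)."""
        frac_folded = 1.0 / (1.0 + np.exp((t - t_m) / width))
        return s_unfolded + (s_folded - s_unfolded) * frac_folded

    # Synthetic melting data: temperature (deg C) vs integrated CD signal
    temp = np.arange(10.0, 106.0, 5.0)
    signal = two_state_melt(temp, -12.0, -2.0, 62.0, 3.0)
    signal += np.random.default_rng(3).normal(0.0, 0.2, temp.size)

    popt, _ = curve_fit(two_state_melt, temp, signal, p0=[-12, -2, 60, 3])
    print(f"fitted Tm = {popt[2]:.1f} deg C")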
Synthesis of pCMF
Synthesis of pCMF was accomplished as previously described (7). A solution of 2-(4-bromomethylphenyl)-acetic acid (174.62 mmol) in 400 mL anhydrous methanol was stirred with 31.52 mmol of trimethylsilyl chloride for 2 h at RT. Subsequently, the solvent was evaporated under vacuum and the resulting residue was purified using flash column chromatography.

The obtained methyl 2-(4-bromomethylphenyl)-acetate (82.27 mmol) was combined with N-(diphenylmethylene)-glycine ethyl ester (107.07 mmol), potassium iodide (8.43 mmol) and 410 mL dioxane. The mixture was incubated for 10 min at 10 ºC with 112.89 mmol of benzyltrimethylammonium hydroxide in a 40% aqueous solution. It was then diluted with 500 mL ethyl acetate and 500 mL brine, followed by extraction with ethyl acetate (2 × 300 mL) and washing with brine (2 × 1 L), and subsequent evaporation under vacuum. The resulting yellow syrup was resuspended in 210 mL of tetrahydrofuran and cooled on ice.

The reaction mixture was incubated for 1 h at RT with 240 mL of 1 N HCl. The pH was adjusted to 7.5 with NaHCO3, and the reaction mixture was extracted with ethyl acetate (4 × 300 mL). The organic layer was dried over MgSO4, evaporated under vacuum and further purified using flash column chromatography.

To a solution of p-(2-methoxy-2-oxoethyl)-D,L-phenylalanine ethyl ester (131.92 mmol) in 120 mL tetrahydrofuran, 255 mL of 1 N NaOH was added and the reaction was incubated for 12 h. The solvents were then evaporated under vacuum. The resulting product, a lyophilized white solid, was pCMF in the presence of sodium chloride.
Mass spectrometry analysis
Mass spectrometry (MS) assays were performed at the biomolecular mass, proteomics, and metabolomics (BIO-MS) laboratory at the University Pablo de Olavide (UPO).
Acrylamide gel bands were destained with ammonium bicarbonate and acetonitrile. DTT and iodoacetamide were used to break disulphide bonds and carbamidomethylate cysteine residues, respectively. Samples were incubated overnight at 37 ºC with bovine trypsin at a 1:10 (enzyme:substrate) ratio. After extraction with acetonitrile and acidification, samples were desalted and concentrated using C18-filled tips. To detect phosphorylated residues, phosphorylated peptides were enriched with TiO2-filled tips.
To test the incorporation of the non-canonical amino acid pCMF, MS spectra were analysed with a MALDI-TOF Ultraflextreme system (Bruker) operated in positive reflectron mode.
For each spectrum, results of 3000 laser shots were averaged.
For detection of phosphorylated residues, MS spectra were recorded with a Q-Exactive-Plus spectrometer (ThermoScientific) coupled with an Easy n-LC chromatographer (ThermoScientific). Each sample was dissolved in 10 μL of 0.1% formic acid. A non-linear gradient was run between phase A (0.1% formic acid) and phase B (80% acetonitrile, 20% water, 0.1% formic acid) over 120 min in reversed phase. A 50 cm, 100 Å Easy-Spray PepMap C18 nano-column was coupled with a 2 cm C18 PepMap100 pre-column.
Spray voltage was fixed at 2.7 kV and capillary temperature at 300 ºC. Spectra were acquired every 30 ms, and the 10 most intense precursor ions were fragmented for each scan.
Figure 4. Mass spectrometry confirmation of pCMF incorporation. Mass spectrometry (MS) results after trypsin digestion confirming pCMF incorporation in HuR2-99 Y5pCMF (A) and HuR1-326 Y5pCMF constructions (B). pC stands for pCMF.

Supplementary Figure 5. Structural characterization of HuR Y5pCMF. (A) Thermal denaturation profile of the HuR2-99 Y5pCMF mutant. Projections of the integration values in the 233-241 nm wavelength interval (left panel) from circular dichroism (CD) spectra (right panel) measured during a temperature ramp. The vertical dashed line indicates the fitted Tm value. (B) Overlap between 2D 1H-15N HSQC spectra from the HuR2-99 Y5pCMF mutant (orange) and HuR2-99 wild-type (cyan). The inset of a specific NMR spectral region shows the Y5 amide signal exclusively in the HuR2-99 wild-type spectrum. The non-canonical amino acid pCMF in the mutant lacks 15N labelling. (C) CSPs of HuR2-99 Y5pCMF amide signals relative to HuR2-99 wild-type as a function of residue number. α-helices are represented with red rectangles and β-sheets with blue arrows. Residues in secondary structures are shaded in grey. The orange asterisk indicates the Tyr-to-pCMF mutation at position 5.

Supplementary Figure 6. Molecular dynamics simulations for HuR2-99 and HuR1-326 constructions. Root-mean-square fluctuations (RMSF) (left panels), radius of gyration (RG) (middle panels) and root-mean-square deviation (RMSD) (right panels) of HuR2-99 (A) and HuR1-326 (B) wild-type (cyan), pY5 (pink), Y5pCMF (orange) and Y5F (green, only for HuR2-99) constructions. RMSF was calculated for the 800-1000 ns interval for HuR2-99 constructs, aligning residues 20-50 and 70-95, and for the 200-300 ns interval for HuR1-326 constructs, aligning residues 20-95. (C) Contact maps of full-length HuR (HuR1-326) wild-type, Y5pCMF and pY5 from 300 ns molecular dynamics simulations. Black boxes indicate the zooms shown in Figure 3A.

Supplementary Figure 7. Phosphorylation at HuR Y5 alters the ion distribution of HuR. (A) Chemical structures of the analogous paramagnetic probes used in NMR solvent paramagnetic relaxation enhancement (sPRE) assays: the negative probe carboxy-PROXYL in blue, and the neutral probe carbamoyl-PROXYL in pink. (B) sPRE Γ2 rates for the negative (blue) and the neutral (pink) probes of HuR2-99 wild-type (left panel) and HuR2-99 Y5pCMF (right panel) proteins. α-helices are represented with red rectangles and β-sheets with blue arrows. Residues in secondary structures are shaded in grey. (C) Surface representation of the last structure obtained for HuR2-99 wild-type and HuR2-99 Y5pCMF from the 1 μs MD trajectories. Upper panel: surface map of electrostatic potentials for HuR2-99 wild-type and HuR2-99 Y5pCMF. Lower panel: surface map of differences between the PRE Γ2 rates for the negative and neutral probes of HuR2-99 wild-type and HuR2-99 Y5pCMF proteins. Higher Γ2 rates for the negative probe than for the neutral probe indicate anion accumulation (blue); the opposite indicates anion exclusion (pink). Residues that could not be analyzed by sPRE experiments are colored grey. Codes in the left panels indicate the color gradients used.
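The trajectory metrics summarized in Supplementary Figure 6 (per-residue RMSF over a chosen time window after aligning on the structured core, plus radius of gyration) could be reproduced along the following lines. This is a sketch assuming the MDAnalysis package, one frame per nanosecond, and hypothetical topology/trajectory file names, since the simulation files are not part of this text.

import MDAnalysis as mda
from MDAnalysis.analysis import align, rms

# Hypothetical file names (not distributed with the paper).
u = mda.Universe("hur2_99.pdb", "hur2_99_1us.xtc")
ref = mda.Universe("hur2_99.pdb")

# Align each frame on the structured core, mirroring the caption's
# HuR2-99 alignment over residues 20-50 and 70-95.
core = "name CA and (resid 20-50 or resid 70-95)"
align.AlignTraj(u, ref, select=core, in_memory=True).run()

# Per-residue RMSF over the 800-1000 ns interval (frames 800-1000 at 1 frame/ns).
calphas = u.select_atoms("name CA")
rmsf = rms.RMSF(calphas).run(start=800, stop=1000)
print(rmsf.results.rmsf[:5])

# Radius of gyration along the trajectory.
rg = [u.select_atoms("protein").radius_of_gyration() for ts in u.trajectory]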
Supplementary Figure 8. HuR DNA-binding analysis. (A) BLI binding analysis of HuR2-99 wild-type (cyan) and the Y5pCMF mutant (orange) with a 7mer DNA oligonucleotide (5'-ATTTTTA-3'). (B) ITC analysis of the interaction between the 7mer DNA oligonucleotide and HuR2-99 wild-type (cyan) or the Y5pCMF mutant (orange). Thermograms and binding isotherms are shown in the upper and lower panels, respectively. First injections (red points) were not used for the fitting. (C) Superimposed 2D 1H-15N HSQC spectra of 15N-labeled HuR2-99 wild-type free and bound to DNA up to a 1:5 ratio (protein:DNA). Signals corresponding to distinct titration steps are colored according to the code in the panel. Insets of NMR spectral regions show signals of residues L22, A57 and R97. The increasing HuR2-99:DNA ratio is indicated by arrows. (D) Superimposed 2D 1H-15N HSQC spectra of 15N-labeled HuR2-99 Y5pCMF free and bound to the 7mer DNA oligonucleotide up to a 1:5 ratio (protein:DNA), represented as in C. (E) CSPs of free 15N-labeled HuR2-99 wild-type (blue) upon reaching the 1:5 ratio (protein:DNA) as a function of residue number. The dashed lines indicate 1 and 2 standard deviations (σ). Residues exhibiting CSPs higher than 2σ for HuR2-99 wild-type and HuR2-99 Y5pCMF are represented in dark blue/orange, and those with values between 1σ and 2σ in cyan/light orange. The orange asterisk indicates the Tyr-to-pCMF mutation at position 5. α-helices are represented with red rectangles and β-sheets with blue arrows.

Supplementary Figure 9. 15N-HuR2-99 Y5pCMF NMR experiments performed for assignment of lysine side-chain amino group signals. (A) 2D (H2C)N(CCH)-TOCSY experiment for 15Nζ correlations with 1H of Lys, performed at 308 K to improve 13C-13C Hartmann-Hahn cross polarization (1). (B) Left panel: 2D H2CN spectra to monitor the changes of 15N resonance signals across temperature. Right panel: 2D 1H-15N HISQC spectrum used to assign lysine side-chain amino group signals.

TCCACGCGGAACCAGTTTGTGGGACTTGTTGGTTTTGAAGG. Mutation sites are indicated in italic letters. *The 17mer oligonucleotide was designed based on AU17 from Pabis et al. (5).

were incubated with 10 μg of V5 Tag monoclonal antibody (R960-25, Invitrogen) in 150 μl NT2 buffer overnight under rotation at 4 ºC. For sample precleaning tubes, 25 μL of Protein G Sepharose beads slurry were prepared and incubated with 7.5 μg of Purified Mouse IgG1, κ Isotype Control antibody (557273, BD Pharmingen), as explained. Cells were lysed in polysome lysis buffer (PLB) (100 mM KCl, 5 mM MgCl2, 10 mM HEPES pH 7.0, 0.5% NP40) supplemented with 1 mM DTT, 100 U/ml RNaseOUT Recombinant Ribonuclease Inhibitor (10777019, Invitrogen) and Complete Mini EDTA-free Protease Inhibitor Cocktail (11836170001, Roche). Lysates were centrifuged twice at 14,000 × g, 30 min, 4 ºC. Approximately 1/10 of the clarified lysates was reserved for RNA isolation of the input fraction, and the remainder of the sample was incubated with the anti-IgG1 precoated beads for 30 min under rotation at 4 ºC. The supernatant was recovered by centrifugation (10,000 × g, 5 min, 4 ºC) and the presence of protein was confirmed with the Micro BCA Protein Assay Kit. Precleaned samples were next incubated with the anti-V5 precoated beads for 1 h under rotation at 4 ºC. The beads containing the bound ribonucleoprotein complexes were washed five times with NT2 buffer by centrifugation (5,000 × g, 5 min, 4 ºC). | 2024-07-06T06:17:13.030Z | 2024-07-05T00:00:00.000 | {
"year": 2024,
"sha1": "4791ce2366d7144047e88fd9075e49e5b3b68b24",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.1093/nar/gkae564",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "b9cbd59d43a82ba6bc6fbc06f1615d3836c09231",
"s2fieldsofstudy": [
"Biology",
"Chemistry"
],
"extfieldsofstudy": [
"Medicine"
]
} |
266920411 | pes2o/s2orc | v3-fos-license | The Role of Proline-Proline-Glutamic Acid (PPE) Proteins in Mycobacterium tuberculosis Virulence: Mechanistic Insights and Therapeutic Implications
For decades, tuberculosis (TB), caused by Mycobacterium tuberculosis (MTB), has remained a global health challenge. Central to this issue are the proline-proline-glutamic acid (PPE) proteins, which play a pivotal role in the pathogenesis and persistence of MTB. This article explores the molecular mechanisms of PPE proteins and their roles in facilitating MTB’s evasion of the host’s immune system while enhancing virulence and transmission. Focusing on the structural and functional aspects of PPE proteins, this review provides a detailed analysis of antigenic variation, a crucial mechanism allowing MTB to elude immune detection. It also probes the genetic diversity of these PPE proteins and their complex interactions with host immunity, offering insights into the challenges they pose for therapeutic development. This review delves into the potential of targeting PPE proteins in novel therapeutic strategies, discussing the prospects of drug and vaccine development. The evidence reviewed in this article underscores the pressing need for innovative approaches to combat TB, especially in the face of increasing drug resistance. Ultimately, this review article highlights the untapped potential of PPE proteins in revolutionizing TB treatment, paving the way for breakthroughs in drug and vaccine development.
Introduction And Background
The scourge of tuberculosis (TB), caused by Mycobacterium tuberculosis (MTB), continues to be a global health crisis and is responsible for millions of deaths annually [1]. The persistent challenge in combating this disease lies not only in its widespread prevalence but also in the complex pathogenic mechanisms of the causative microorganism. Among the key elements of MTB's virulence macromolecules are the proline-proline-glutamic acid (PPE) proteins, named for their characteristic amino acid sequence [2]. These proteins have garnered significant scientific interest due to their role in the pathogenesis of MTB and their implications for novel therapeutic strategies. Understanding the role of PPE proteins in MTB infection is crucial, given the ongoing challenge TB poses to global health. TB is one of the top 10 causes of death worldwide and the leading cause from a single infectious agent, surpassing even human immunodeficiency virus (HIV)/acquired immunodeficiency syndrome (AIDS) [3]. The impact of the disease is particularly pronounced in low- and middle-income countries and exacerbated by factors such as co-infection with HIV and the emergence of multidrug-resistant MTB strains [4]. These challenges reinforce the urgency of developing a deeper understanding of MTB's pathogenic mechanisms and exploring new therapeutic avenues. The PPE protein family, characterized by a conserved Pro-Pro-Glu motif at the N-terminus, represents a significant proportion of the MTB genome. Initially identified through genome sequencing, these proteins have been implicated in various aspects of MTB's interaction with its host [5]. The diversity and abundance of PPE proteins indicate a complex role in MTB's pathogenicity, potentially involving immune modulation, cell-to-cell spread, and adaptation to different host environments. However, the specific mechanisms through which PPE proteins contribute to these processes remain only partially understood. Despite a reduction in TB-related mortalities following the implementation of short-course directly observed treatment, it remains a leading cause of death globally [6], with approximately 25% of the global population being infected [7]. Equally important, colleges and universities are at-risk populations [8], with prior research reporting TB outbreaks among students in Italy, China, and Northwest Ethiopia, often resulting from repeated exposure to untreated TB cases [9-12]. Research attributes the high TB prevalence in higher educational institutions to inadequate understanding and awareness about this disease [13,14].
Prior studies have uncovered the association between PPE proteins and MTB's evasion of host immune responses [4,5]. These proteins interfere with antigen processing and presentation pathways, ultimately modulating the host's immune response. This immune evasion capability allows MTB to establish latent infections and persist within the host for extended periods. Additionally, PPE proteins are implicated in bacterial cell wall integrity and influence MTB's survival under various stress conditions, including those encountered during infection of the human host [15]. Given their role in MTB virulence, PPE proteins are potential targets for therapeutic interventions and drug development. Inhibitors that disrupt PPE protein functions could weaken MTB's ability to evade the immune system or survive under hostile conditions within the host. Furthermore, PPE proteins are also being explored in vaccine development. To protect against MTB infection, the ideal vaccine should elicit a robust immune response against PPE proteins.
Overview of Mycobacterium tuberculosis
TB primarily affects the lungs but can also affect other body parts, such as the kidneys, spine, and brain [16]. According to the Centers for Disease Control and Prevention (CDC) and World Health Organization (WHO), TB is responsible for claiming 1.5 million lives annually and ranks as one of the foremost infectious disease threats worldwide [17,18]. Despite considerable advancements in its management, TB remains a pressing global health issue.
Symptoms and diagnosis of Mycobacterium tuberculosis
Early diagnosis and timely treatment are imperative in managing MTB infection and preventing its transmission. A combination of clinical evaluation, diagnostic tests, and understanding of common signs and symptoms is crucial for healthcare professionals to identify and address TB effectively [5]. MTB is a rod-shaped bacterium about 3-4 µm long and 0.3-0.6 µm wide. It is a slow-growing bacterium, and it can take up to eight weeks for a single colony to grow on a culture plate [19]. MTB is a hardy bacterium that survives for long periods outside the body. Latent MTB infection remains asymptomatic; however, signs and symptoms of active TB vary depending on the severity [20]. Associated symptoms include persistent cough, coughing up blood, chest pain, fatigue, fever, night sweats, weight loss, and shortness of breath [19,20].
Diagnosis of Mycobacterium tuberculosis
Clinical evaluation: Diagnosis often begins with a comprehensive clinical assessment. Medical history, including risk factors and exposure to TB, is considered along with a physical examination.
Tuberculin skin test (TST): TST, a tuberculosis screening test, utilizes the Mantoux technique. It involves injecting a small amount of TB protein under the skin. A positive reaction suggests exposure to TB but cannot differentiate between latent and active infection [20].
Interferon-gamma release assays: Blood tests, such as the QuantiFERON-TB Gold test, detect MTB infection by measuring the immune response [20].
Chest X-ray: Abnormalities in the lungs may be identified through X-rays. While it can hint at the presence of TB, further tests are needed for confirmation [19].
Sputum smear microscopy: Examining a sputum sample under a microscope helps detect TB bacteria. This test is commonly used for diagnosing pulmonary TB [20].
Culture: MTB is grown and identified from sputum or other bodily fluid samples through culture, confirming the diagnosis and revealing the specific strain [19].
Molecular tests: Polymerase chain reaction and nucleic acid amplification tests identify TB DNA in clinical samples, enabling faster diagnosis [19].
Treatment of Mycobacterium tuberculosis
The management of MTB infection necessitates a prolonged regimen of combined antibiotic therapy [21]. The treatment aims to eliminate the bacteria and prevent the development of drug-resistant strains. The specific treatment regimen varies depending on the type of TB infection (latent or active), drug susceptibility, and individual patient circumstances [21]. According to the CDC, the most common drugs used include isoniazid (INH), rifampin (RIF), ethambutol (EMB), and pyrazinamide (PZA) [22]. The treatment lasts six months on average and may be extended in certain cases. The first two months involve all four drugs, followed by a continuation phase with INH and RIF [23]. Latent TB infection (LTBI) treatment aims to prevent the progression to active TB disease. The most common LTBI drug is INH, which is recommended to be taken daily for six to nine months [20]. To ensure that patients take their medications consistently and complete the treatment, many healthcare providers utilize directly observed therapy [24]. This involves healthcare workers or skilled individuals documenting patient vitals during their medication periods. Regularly monitoring the patient's progress and adherence to the medication is crucial. Non-adherence can lead to treatment failure, the development of drug-resistant TB, and continued transmission of the disease. Fully recovered patients undergo routine follow-up evaluations to confirm complete clearance of MTB [24].
Alveolar macrophage invasion
Individuals who interact with TB patients are mostly asymptomatic and remain healthy as long as they maintain self-care and an environment unsuitable for the disease's growth [25]. MTB is transmitted through the air when an infected person coughs, sneezes, or talks. People can become infected with MTB by inhaling the bacteria into the lungs [26]. Once inside the lungs' alveoli, the bacteria replicate to cause an infection. The outcome of MTB entry and infection varies among individuals [19]. It can be LTBI, where the immune system can control the infection, or it can progress to active TB, characterized by symptoms and the potential for transmission to others. Alveolar macrophages are the most abundant immune cells in the lungs and protect the body from infection and other injuries [27]. MTB invasion of alveolar macrophages is a critical step in the early stages of TB infection [28]. Alveolar macrophages are the first line of defense against inhaled pathogens and play a key role in controlling MTB infection [27]. However, MTB has evolved several strategies to evade and subvert the macrophage immune response [29]. One of the mechanisms through which MTB invades alveolar macrophages is phagocytosis [30]. MTB triggers phagocytosis by interacting with various receptors on the surface of alveolar macrophages. Once inside the macrophage, MTB replicates and survives within a specialized vacuole [31]. Alternatively, MTB invades alveolar macrophages via direct penetration of the macrophage plasma membrane. This process is mediated by the MTB type VII secretion system (T7SS), a protein complex that allows MTB to inject proteins into the host cell. MTB T7SS proteins disrupt the macrophage plasma membrane, allowing the bacteria to enter the cell without being phagocytosed [30]. Once inside the macrophage, MTB subverts the macrophage's immune response in several ways; for example, it inhibits the production of pro-inflammatory cytokines and chemokines, which are important for recruiting other immune cells to the site of infection. MTB also interferes with the macrophage's ability to kill bacteria while promoting apoptosis, as detailed in Figure 1.
Mycobacterium tuberculosis reactivation and immune response
While some individuals exposed to MTB remain asymptomatic, the bacterium can persist in their bodies, resulting in LTBI. Many individuals who inhale MTB do not immediately develop active TB disease; instead, the bacteria remain dormant in the body, often residing within granulomas in the lungs, a state referred to as LTBI [32]. The transition from LTBI to active TB disease is known as reactivation [32]. Several factors can trigger this transition, including a compromised immune system, with conditions such as HIV infection, malnutrition, certain medications (e.g., immunosuppressive drugs), and other illnesses significantly reducing the body's ability to control the latent infection [26]. The immune response to MTB is complex and involves various cells and signaling pathways [31]. The innate immune response is the first line of defense against MTB infection, followed by the adaptive immune response. The adaptive immune response is more specific to MTB and is essential for controlling MTB infection, but MTB has evolved several mechanisms to evade it [28,31]. A study by Afkhami and colleagues investigated the efficacy of a multivalent adenoviral-vectored vaccine against replicating and dormant MTB in conventional and humanized mice [33]. The vaccine was delivered intranasally, a more natural route of infection for MTB. The researchers found that the vaccine was highly effective in protecting mice against both replicating and dormant MTB. The vaccine induced a strong immune response, including both humoral and cellular immunity. The same vaccine enhanced the development of tissue-resident memory T cells, which are critical for long-term protection against TB [33]. Similarly, studies have uncovered a novel technology that detects antibodies associated with active MTB infection using a peptide enzyme-linked immunosorbent assay (ELISA) test, which is valuable for TB serodiagnosis [34]. The test measures levels of IgG antibodies against three peptides from the MTB transketolase enzyme. In this study of 292 subjects, the researchers found that TB patients had significantly higher TKT-specific antibody levels than healthy controls and patients with LTBI [34]. This suggests that the TKT-peptide ELISA test can distinguish between active TB and LTBI.
Granuloma formation
Granuloma formation is an important immune response to MTB infection. Granulomas are walled-off areas of inflammation containing infected macrophages, lymphocytes, and other immune cells. They contain the infection and prevent the spread of MTB to other body parts [31]. Granuloma formation begins when MTB invades alveolar macrophages. The macrophages release pro-inflammatory cytokines and chemokines, which recruit other immune cells to the site of infection [35]. These include T and B lymphocytes and phagocytic cells, such as neutrophils and dendritic cells. The immune cells recruited to the infection site form a closed network around the infected macrophages, creating a granuloma [35]. The granuloma wall consists of epithelioid cells, which are specialized macrophages fused to form a barrier. The lymphocytes within granulomas help coordinate the immune response to MTB. Granulomas can be either active or inactive [36]. Active granulomas contain replicating MTB and are characterized by a high level of inflammation and many infected macrophages [37]. Inactive granulomas are those in which MTB is dormant or dead. These granulomas are characterized by a lower level of inflammation and a smaller number of infected macrophages [37]. Granulomas thus ensure containment and prevent body-wide dissemination of MTB.
Proline-proline-glutamic acid proteins
The PE/PPE (proline-glutamate/proline-proline-glutamate) protein family represents a cluster of proteins in the cell wall of mycobacteria, including the human pathogen MTB [19]. Although the precise functions of most PE/PPE proteins remain elusive, current studies reveal involvement in a range of crucial processes, including interactions between the host and pathogen, virulence, and the development of drug resistance [38]. PPE proteins are secreted to the cell surface through the T7SS. Thus, the T7SS allows PPE proteins to interact directly with host cells and modulate the immune response. PPE proteins are unique in that they are highly glycosylated; glycosylation helps PPE proteins adhere to host cells and resist the host immune response [39]. PPE proteins aid in MTB survival in the host environment by protecting MTB from antibiotics and the host's immune system. Since the discovery of PPE proteins in the early 2000s, there has been a surge in research to uncover details about their structure, function, and role in MTB pathogenesis [39,40].
Cellular location and classification of proline-proline-glutamic acid proteins
PPE proteins form a diverse family of proteins abundant in the MTB cell wall. Over 160 PE/PPE genes have been identified in the MTB genome, and these proteins comprise about 10% of its coding capacity [41]. The abundance and diversity of PPE proteins emphasize their important role in MTB pathogenesis. PPE proteins are involved in many functions, including adhesion to and invasion of host cells, modulation of the host immune response, and survival in the host environment [31]. PPE proteins are also highly polymorphic, which makes it difficult for the immune system to recognize and respond to them effectively during infection [42]. PPE proteins localized within the MTB cell wall are secreted directly into the host cell via the T7SS. This allows PPE proteins to interact directly with host cells and modulate the immune response [42]. Once secreted to the cell surface, PPE proteins are anchored in the cell wall via the PPE domain, a domain highly conserved across all PPE proteins [43]. It anchors PPE proteins to the cell wall and mediates their interactions with host cells, thus enabling MTB to modulate and evade the host's immune system. PPE proteins are classified into several groups based on different criteria [42,43]. They can be categorized based on their sequence, with PPE-PPW proteins involved in adhesion to and invasion of host cells, PPE-MPTR proteins modulating the host immune response, and other PPE proteins whose functions are not yet fully understood [44]. Functional classification comprises adhesion and invasion, immune modulation, and aiding intra-host survival. PPE proteins can be further distinguished by their secretion pathways, glycosylation status, and polymorphism, with some showing significant variation across MTB strains and others remaining relatively conserved. These classifications enhance our understanding of the diverse roles of PPE proteins in MTB pathogenesis, shown in Figure 2, and immune evasion [44].
Mycobacterium tuberculosis
PPE proteins in MTB play a crucial role in creating antigenic variation, a strategy similarly employed by pathogens such as the influenza virus to elude the host's immune defense by continuously altering their surface antigens [45,46]. This ability to induce heightened antibody responses is observed in TB patients compared to healthy individuals vaccinated with Bacille Calmette-Guérin (BCG), indicating a probable upregulation of PPE proteins during active TB infection. In diagnostic applications, the purified protein derivative (PPD), a composite of MTB antigens used in the TST, demonstrates a response nearly equivalent to that elicited by synthetic PPE peptides across diverse TB patient categories, indicating consistency in the immune recognition of these proteins [47]. Moreover, Rv2430c, a specific PPE protein, has been shown to induce robust B-cell immune responses in infected individuals, underscoring its role in the immunological landscape of MTB infection, as documented by Choudhary and colleagues [48]. Together, these findings accentuate the significance of PPE proteins in immune evasion, their applications in TB diagnosis, and understanding of host-pathogen interactions. The PPE protein families in MTB play a significant role, with a substantial number of them being upregulated under stressful conditions, which could potentially enhance the bacterium's resilience and adaptive capabilities within host cells [49-51]. In particular, PPE31 (Rv1807) and PPE68 (Rv3873) have proven to be crucial for the growth of MTB in mouse models [52,53], while PPE44 (Rv2770c) has been found to induce a T-helper 2 cell immune response under stressful conditions [54]. PPE41, conversely, is known to elicit the production of cytokines such as interferon-gamma, tumor necrosis factor-alpha (TNFα), and interleukin 2 (IL-2), playing a pivotal role in the host immune response [55]. Focusing on PPE68 (Rv3873), located in the RD1 region, it has displayed remarkable immunogenicity in mice, and studies by Okkels and associates have identified it as a potent T-cell antigen in individuals infected with MTB [56,57]. Moreover, proteins such as Rv2108 (PPE36), Rv3873 (PPE68), Rv1818c, and Rv1196 (PPE18) have been associated with the cell wall, hinting at their potential roles in mediating host-pathogen interactions [58]. Intriguingly, in the context of active TB infection, patients exhibit a diminished Th1 response to the PPD, and PPE18 has been implicated in this immune modulation, inhibiting the proliferation of anti-PPD T cells and steering the immune response toward a Th2-type profile [59].
The Rv1168c protein can accurately identify cases of pulmonary TB, a task that sometimes proves challenging for conventional diagnostic methods [60]. On a similar note, Rv3347c demonstrates a unique capacity to differentiate patients with latent TB from those showing early signs of active disease. Regarding protein interactions, the synergy observed within PE/PPE protein complexes has garnered significant attention in recent studies [59,61,62]. Experimental immunization of mice using the PE25/PPE41 complex enhanced T-cell proliferation, increasing CD8+ and CD4+ T-cell populations and outperforming the immune response generated when immunizing with PE25 alone [54]. Further investigations into PPE proteins revealed a direct interaction between PPE18 and macrophages through TLR2 receptors, influencing phagocytic activities [59]. This interaction facilitates MTB survival and replication by promoting IL-10 production and suppressing IL-2 and TNFα levels in the host, a process associated with an upregulation of phosphorylated SOCS3 protein [59]. Moreover, PPE18 has been found to form various heterodimeric complexes through its interactions with PE13 and PE31 [58,60]. These specific interactions may play a role in regulating the functions of the PPE18 protein during host-pathogen interactions, adding another layer of complexity and specificity to the immune response against MTB [58].
Proline-proline-glutamic acid proteins as drug targets
The PPE proteins, integral to the pathogenicity of MTB, present a novel therapeutic frontier in the struggle against TB. These proteins, due to their significant representation in the MTB genome and their multifaceted role in pathogenesis, particularly in mechanisms of immune evasion, offer a unique target for drug development [74]. Given the rising challenge of multidrug-resistant MTB strains, the quest for innovative pharmacological interventions targeting these proteins is not just opportune but exigent. Contemporary research has identified a subset of PPE proteins as critical to the virulence and survival of MTB [42]. These discoveries have laid the groundwork for synthesizing novel pharmacological agents to inhibit these specific proteins, thereby impeding the pathogen's lifecycle [42]. The conceptualization of small-molecule inhibitors targeting distinct PPE proteins holds significant promise in disrupting the pathophysiological processes of TB [75]. Emergent research has brought to light several candidate molecules with potent efficacy against specific PPE proteins, with substantial virulence attenuation in both in vitro and in vivo models [75]. The exploration of PPE-targeted drug development, elucidated through various case studies, offers a comprehensive view of both the potential and the challenges inherent in this therapeutic approach.
Proline-proline-glutamic acid proteins in vaccine development
Beyond pharmacological interventions, PPE proteins are also the focus of innovative vaccine research. The inherent immunogenicity and variability of these proteins render them viable candidates for inclusion in vaccine formulations [76]. Current research endeavors are concentrated on identifying PPE proteins that elicit robust immune responses to develop a vaccine that surpasses the protection offered by the current BCG vaccine [77]. Recent advancements have demonstrated that integrating specific PPE proteins into vaccines enhances immunogenicity, thereby conferring improved protection in preclinical models [78]. These findings are pivotal in steering the development of next-generation vaccines against TB. The field of PPE protein research is dynamic, with novel discoveries and methodologies continually advancing the field. Advanced molecular and immunological techniques are currently employed to unravel the intricate interactions between these proteins and the host immune system [45,73]. Notwithstanding the potential of PPE protein-targeted therapies, several challenges impede their clinical translation. The genetic heterogeneity of PPE proteins may limit the effectiveness of therapies aimed at specific variants, posing a significant hurdle in drug and vaccine development [79]. Moreover, the complexity of the host-pathogen interaction raises concerns about unforeseen impacts on host immunity [80]. Additionally, the perennial issue of emerging drug resistance necessitates monitoring and strategic development of new therapeutic agents. The profound challenge is fully dissociating downstream associations between PPE and PE proteins to fully inhibit PPE protein activity [81]. The potential of PPE proteins as targets for drug and vaccine development offers a paradigm shift in TB treatment.
Conclusions
Our investigation into the role of PPE proteins in the pathogenesis of MTB and their therapeutic potential has yielded significant insights, particularly in the context of the ongoing global TB crisis. The critical role of PPE proteins in the MTB genome, aiding in immune evasion and promoting bacterial transmission, emphasizes their influence on the virulence and survival of the pathogen. A pivotal aspect of this research is the exploration of PPE proteins as therapeutic targets for new TB drug development. The discovery of compounds effective against these proteins opens promising avenues for transforming TB treatment, especially amid rising drug resistance. Furthermore, the possibility of integrating PPE proteins into vaccine strategies, potentially enhancing the efficacy beyond that of the current BCG vaccine, presents an avenue for future research and development.
FIGURE 1: Sequential steps by which Mycobacterium tuberculosis evades the host immune system via PPE proteins. The flowchart begins with the pathogen expressing PPE proteins, which modulate the immune response. These proteins interact with host immune cells, such as macrophages and dendritic cells, altering their function and inhibiting the immune response. Consequently, the pathogen survives and proliferates within the host, potentially resulting in chronic infection or latency. The pathogen can be transmitted to new hosts, completing the infectious cycle. The pathogen adapts and develops mechanisms to further evade the immune response. PPE: proline-proline-glutamic acid. Created with BioRender.com.
FIGURE 2: Progression of Mycobacterium tuberculosis infection and the role of PPE proteins. (1) PPE proteins aid in the initial survival of the bacteria. Following alveolar deposition, bacilli encounter and infect macrophages. (2) PPE proteins facilitate immune evasion. The subsequent immune response leads to the formation of granulomas. (3) PPE proteins contribute to a latent infection. (4) The potential reactivation of MTB. (5) Active TB: PPE proteins modify the bacterial phenotype to promote replication and disease progression. Damaged respiratory epithelial cells are depicted in the background, indicating the pathological effect of an active infection. While indirectly related to MTB pathogenesis, the allergen icon alludes to external factors that can exacerbate lung damage and influence the course of the disease. MTB: Mycobacterium tuberculosis; PPE: proline-proline-glutamic acid; TB: tuberculosis. Created with BioRender.com.
Table 1 highlights the functions and cellular characteristics of PPE proteins observed in MTB. | 2024-01-11T16:18:54.868Z | 2024-01-01T00:00:00.000 | {
"year": 2024,
"sha1": "7b739f5aa10b405ff2f7cec3d88ddb293ca0b856",
"oa_license": "CCBY",
"oa_url": "https://assets.cureus.com/uploads/review_article/pdf/219344/20240109-31531-1pbe042.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "cbcf6e652267b5d1e45b8c986a00624a32d4c57e",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
8043057 | pes2o/s2orc | v3-fos-license | Laparoscopy assisted transjejunal endoscopic retrograde cholangiography for treatment of intrahepatic duct stones in a post Roux-en-Y patient
We report a case of a 17-year-old female patient who was operated on for a choledochal cyst with Roux-en-Y hepatojejunostomy. She was admitted to hospital with recurrent attacks of acute ascending cholangitis due to left intrahepatic duct stones. After a failed attempt at conventional endoscopic retrograde cholangiopancreatography through the anatomical route, she was treated successfully with laparoscopy assisted transjejunal endoscopic retrograde cholangiography.
Laparoscopy assisted transjejunal endoscopic retrograde cholangiography for treatment of intrahepatic duct stones in a post Roux-en-Y patient
Salah M. Mansor, MD, Salem I. Abdalla, MD, Rashed S. Bendardaf, MBChB. Endoscopic access to the biliary system can be difficult in patients with surgically altered anatomy of the upper gastrointestinal tract (GIT), such as Roux-en-Y reconstruction. Endoscopic retrograde cholangiopancreatography (ERCP) is particularly challenging in cases such as ours, 1,2 due to the distance that must be traversed and to looping. Our objective in presenting this particular case is to describe and highlight laparoscopy assisted transjejunal ERCP, which permitted successful treatment and removal of intrahepatic duct stones in a post Roux-en-Y patient while minimizing surgical intervention to reduce unnecessary risks to the patient.
Case Report. A 17-year-old female was born with a choledochal cyst, for which a hepatojejunostomy with Roux-en-Y reconstruction was performed at the age of 4. Postoperatively, she was doing well until 2 years ago, when she had multiple hospital admissions due to recurrent attacks of right upper quadrant and epigastric abdominal pain, fever, and jaundice. She was diagnosed with acute ascending cholangitis, which was managed conservatively. The frequency of the cholangitis attacks increased in the last few months; therefore, she was referred to our clinic for further evaluation. On admission, clinically, she was sick and in pain, febrile with a temperature of 38.6 ºC, a heart rate of 105 beats per minute, and a blood pressure of 110/60 mm Hg. She was jaundiced, with mild epigastric tenderness on abdominal examination. Hemoglobin (Hb) was 13.3 g/dL, white blood cells (WBC) 11.1 × 10³/μL, platelets (PLT) 249 × 10³/μL, alkaline phosphatase (ALP) 587 U/L, total bilirubin 3.7 mg/dL, direct bilirubin 0.9 mg/dL, indirect bilirubin 2.8 mg/dL, and international normalized ratio (INR) 1.6. Abdominal ultrasound scan (USS) showed dilatation of the intrahepatic biliary tree with multiple stones. MRCP showed dilated left intrahepatic biliary ducts, which were filled with multiple large stones (Figure 1). Endoscopic access to the anastomosis site (hepatojejunostomy) proved impossible through the conventional route. Therefore, after thorough discussion at the multidisciplinary meeting, a decision was made to carry out ERCP through a laparoscopy approach. She underwent laparoscopy assisted ERCP under general anesthesia, in which 3 trocars were placed: an optic trocar in the infraumbilical position and two 5-mm trocars at the right and left midclavicular lines at the level of the umbilicus. The operation began with complete laparoscopic exploration, which was of moderate difficulty due to massive adhesions secondary to the previous operation. Adhesions were released without complications or bowel injury. At the beginning, it was difficult to determine which limb was the afferent limb; with gentle bowel dissection, we were able to identify the afferent limb, the efferent limb, and the anastomotic portion of the bowel. The bowel was easily drawn up to the abdominal wall through the optic port laparoscopy incision. A longitudinal enterotomy was performed and a therapeutic-channel video endoscope (TJF-160VF, Olympus Corporation, Center Valley, PA, USA) was inserted into the enterotomy and advanced to the level of the hepatojejunostomy (Figure 2), which was approximately 10 cm away from the enterotomy. Under portable C-arm fluoroscopy (Figure 3), the intrahepatic duct was easily cannulated and cholangiography was performed, which showed evident dilatation of the left intrahepatic bile ducts with multiple stones (Figure 4). Balloon dilatation of the anastomotic site and irrigation of both right and left intrahepatic ducts with saline were performed, yielding some pus from the left intrahepatic duct. Some stones were consequently extracted using a 12-15 mm balloon catheter, while a few small stones remained and were expected to pass after the balloon dilatation. The bowel was subsequently freed from the skin, the enterotomy was closed, and one non-suction tube drain was inserted into the peritoneal cavity. The postoperative period passed smoothly; she was kept fasting for 3 days and was maintained on dextrose saline, ceftriaxone 1 g twice daily, and acetaminophen 250 mg four times a day.
She was discharged home 5 days later in a very good general condition. Her investigations on the day of discharge were: Hb 12.1 g/dL, WBC 10.5 × 10³/μL, RBC 4.3 × 10⁶/μL, PLT 171 × 10³/μL, total bilirubin 1.0 mg/dL, direct 0.4 mg/dL, and indirect 0.6 mg/dL. An abdominal USS showed small stones at the site of the anastomosis. She continued to improve, and one month later, a follow-up abdominal USS showed minimal dilatation of the intrahepatic ducts, which were free of stones. She was followed up in the outpatient clinic for one year after the procedure, with no surgical complications reported.
Discussion. Roux-en-Y reconstruction is a surgical procedure that alters the anatomy of the upper gastrointestinal tract. It may be performed in the management of congenital diseases such as choledochal cyst (as in our case); in benign disorders, as in bariatric surgery for obesity, pancreaticoduodenectomy (Whipple procedure) for chronic pancreatitis, liver transplantation, and repair of bile duct injuries with formation of a hepaticojejunostomy; and as part of the management of malignant diseases, such as partial or total gastrectomy for gastric cancer and the Whipple procedure for pancreatic cancer, distal cholangiocarcinoma, and periampullary carcinoma.
Due to the increased popularity of Roux-en-Y operations, we should expect a parallel increase in the prevalence of bile duct diseases that occur in these patients. In this situation, stone extraction and bile duct clearance remain challenging. 3-5 Our patient had a hepatojejunostomy anastomosis, which had been working very well for the previous 13 years, but lately she formed stones in the left intrahepatic ducts, making her susceptible to cholangitis. At this point, we did not want to expose her to a major operation, such as revision of the satisfactorily functioning anastomosis to remove the stones. Furthermore, we did not want to subject her to the risk of operative dissection in an area of massive adhesions from a previous operation, which carries the risk of adjacent organ injury and biliary leak from the new anastomosis. Therefore, we took the decision to attempt a routine per-oral ERCP to access the biliary tree first. Unfortunately, the attempt did not succeed. In patients with complex upper GIT anatomy, per-oral ERCP is challenging 1,2 due to the altered, long-length anatomy.
Intraoperative ERCP is often performed in patients with Roux-en-Y gastric bypass, in whom the papilla is usually not accessible through endoscopy. 6 Intraoperative transjejunal ERCP uses an open approach with a small incision; it was first reported by Mergener et al. 7 In that case, a successful biliary intervention took place in a patient with a Roux-en-Y hepatojejunostomy. With the development of laparoscopy and the clear emergence of its advantages, such as lower rates of wound complications, less postoperative pain, and earlier return to normal activity, we decided to use the laparoscopy approach for this procedure. Using laparoscopy assisted endoscopic retrograde cholangiography, we could achieve a minimally invasive ERCP procedure to remove stones without the need to expose the patient to the major risks of an open operation. Cannulation and dilatation at the site of the anastomosis, washing of the intrahepatic bile ducts with normal saline, and stone extraction were all performed through an easy, short route by opening the efferent jejunal loop near the site of the hepatojejunal anastomosis. In this paper, we described our experience in the diagnosis and treatment of a biliary disease using laparoscopy assisted transjejunal ERCP in a patient who had altered anatomy of the upper GIT due to previous surgical intervention. Access to the Roux limb was easily obtained, a diagnostic cholangiography was carried out, and therapeutic interventions were performed at the same time. These results are in line with recent reports that demonstrated the safety and feasibility of the laparoscopy assisted transjejunal ERCP procedure for this indication, and in selected cases of hepato-biliary-pancreatic lesions, since these procedures require expertise in laparoscopic surgery and ERCP.
By reviewing the literature, Lopes et al 8 concluded that laparoscopy assisted ERCP is a valuable option in patients with Roux-en-Y anatomy. They reported a patient with partial gastrectomy and Roux-en-Y reconstruction who presented with abdominal pain due to sphincter of Oddi dysfunction. After failed conventional ERCP, the procedure was successfully performed by laparoscopic assistance through an enterotomy into the biliopancreatic limb.
Saleem et al 9 also concluded that laparoscopy assisted ERCP is a useful modality in patients with surgically altered anatomy. They treated a patient with a subtotal gastrectomy and Roux-en-Y gastrojejunostomy whose course was further complicated by recurrent left pleural effusion due to a pancreaticopleural fistula. After a failed conventional ERCP, the fistula was managed successfully with laparoscopy-assisted transjejunal ERCP (Table 1). To date, there is no single standard therapeutic method for treating biliary duct stones in post Roux-en-Y patients; however, several techniques have been reported. Double balloon endoscopy techniques were used to examine the entire small bowel and to access the biliary tree. 10 Percutaneous, open surgical, and endoscopic gastrostomy allow antegrade transgastric ERCP, 11 and percutaneous transhepatic cholangiography has been used for accessing the biliary tree and treating choledocholithiasis after laparoscopic gastric bypass surgery. 12 Schreiner et al 13 demonstrated the feasibility of laparoscopy assisted ERCP as a minimally invasive technique for managing biliary stones in Roux-en-Y gastric bypass patients. The indication for each of the above methods depends upon various factors, such as the experience of the managing team, the fitness of the patient for the procedure, fitness for general anesthesia, and the cost of the procedure.
In conclusion, laparoscopy assisted transjejunal endoscopic retrograde cholangiography is a feasible alternative diagnostic and therapeutic option for the treatment of intrahepatic biliary duct disease in patients with a Roux-en-Y operation. | 2016-06-02T03:28:14.305Z | 2015-01-26T00:00:00.000 | {
"year": 2015,
"sha1": "f6ad5a09081d37370032f1da556efff49c22ac80",
"oa_license": "CCBYNCSA",
"oa_url": "https://doi.org/10.15537/smj.2015.1.10404",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "f6ad5a09081d37370032f1da556efff49c22ac80",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
256587843 | pes2o/s2orc | v3-fos-license | A Rare Case of Hepatocellular Carcinoma Presenting as a Massive Abdominal Hematoma and Shock: A Case Report
Hepatocellular carcinoma (HCC) has a rich blood supply stemming from the hepatic artery. Subsequent spontaneous tumor rupture can lead to massive abdominal hematoma and shock, a rare and often fatal gastrointestinal event. The diagnosis of rupture is complicated, with most patients presenting with abdominal pain and shock. Prompt correction of hypovolemic shock is the primary goal of treatment. This rare case presents a 75-year-old male who presented to the emergency department because of abrupt and increasing abdominal pain after a meal. Laboratory data revealed elevated alanine aminotransferase, aspartate aminotransferase, and alpha-fetoprotein levels. Immediate computed tomography demonstrated a defect in the right ventral abdominal wall. The patient underwent an emergency exploratory laparotomy. Despite massive intra-abdominal adhesions, the identified source of bleeding was the left lobe of the liver at the base of the lesser sac above the pancreas. Maximal effort was made to stop the bleeding and minimize blood loss. An ensuing biopsy of the liver revealed HCC. After improving, the patient received instructions to follow up on an outpatient basis. Two months after surgery, the patient reports no complications. The success outlined in this case highlights the importance of prompt action in an emergency and delineates the significance of surgical experience in handling unorthodox patient presentations.
Introduction
Abdominal hematomas are uncommon and often mimic other acute abdominal disorders [1]. Hematomas occur because of an accumulation of blood secondary to an injury to the epigastric vessels, their perforating branches, or a solid organ [2]. Hematomas have been commonly associated with elderly females with a history of anticoagulant use or those who suffered from trauma [1,3]. Despite the established association, some hematomas can occur spontaneously, with no predispositions [4]. For example, a spontaneous hemorrhage can occur because of visceral injuries in the liver, spleen, kidneys, or adrenal glands [5].
Hepatocellular carcinoma (HCC) is an uncommon but fatal cause of abdominal hemorrhage, with an incidence of up to 15% in patients diagnosed with the disease [6]. It is associated with a poor prognosis, especially in the setting of cirrhosis and severe coagulopathy [6]. It requires prompt treatment to prevent dire consequences, like uncontrolled hemorrhage and death [7]. While a hematoma can resolve with no intervention, in rare cases, patients need surgery when the hematoma is large or progressing.
Because of the infrequency and imitative nature, it is challenging to diagnose abdominal hematomas, and clinicians often mistake them for inflammatory diseases [1,3]. Therefore, there is a need for accurate diagnosis. Computed tomography (CT) and other imaging modalities can support the diagnosis by identifying the hallmark features of active bleeding. Naturally, clinicians should emphasize identifying the source promptly and determining the etiology of the bleeding to prevent drastic complications. This report presents an elderly male with a sudden onset of abdominal pain and hypotension who underwent emergent surgical treatment after surgeons discovered an abdominal hematoma secondary to a spontaneous hemorrhage of HCC on CT.
Case Presentation
Here, we describe the case of a 75-year-old male who presented to the emergency department because of abrupt and increasing abdominal pain after a meal. The patient had a past medical history of hypertension and hepatitis. The patient became hemodynamically unstable en route to the hospital, with his systolic blood pressure dropping below 50 mmHg. When he arrived in the emergency room, he was experiencing non-bloody, non-bilious emesis. His blood pressure was 69/47 mmHg, and his heart rate was 84 beats per minute. The patient received immediate fluid resuscitation. There was moderate abdominal distension and diffuse tenderness. Laboratory data revealed elevated alanine aminotransferase (ALT, 232 IU/L), aspartate aminotransferase (AST, 171 IU/L), and alpha-fetoprotein (AFP, 3000 ng/mL) levels.
In addition, the patient tested positive for hepatitis C antibody but negative for hepatitis C RNA.
The patient got an immediate CT of the abdomen. Imaging revealed a defect in the right ventral abdominal wall with multiple metallic fragments consistent with shrapnel that extended into the right iliopsoas muscle. The liver had multiple hypodense masses with a metallic fragment lodged in the caudate lobe. This corresponds with his past medical history. Approximately 50 years prior, the patient sustained a gunshot wound to the abdomen and underwent extensive surgery for an abdominal wall and recurrent small bowel obstruction repair.
There was also an intraperitoneal hematoma without evidence of a ruptured aorta. Upon confirmation of the diagnosis, the patient underwent an emergency exploratory laparotomy (Figure 1).
FIGURE 1: Emergency exploratory laparotomy with massive hematoma.
The surgeons opened the abdomen and found massive blood within the lesser sac. They evacuated the blood from the lesser sac and found that a portion of the liver was bleeding. During this time, the patient became hemodynamically unstable. Maximal efforts to stop the bleeding began; manual pressure, a sterile compressed sponge, and a powdered hemostatic agent proved helpful. Eventually, the surgeons controlled the bleeding after placing liver sutures. He received six units of packed red blood cells, with subsequent improvement. Next, the surgeons biopsied the part of the liver that had been bleeding and sent it for pathological examination. The biopsy revealed atypical hepatocytes and dilated blood vessels, favoring a diagnosis of HCC. The patient's postoperative course was without further complications, with signs of significant improvement; his ALT, AST, and AFP levels improved markedly. He continues to follow up in an outpatient setting and reports feeling well with no nausea or pain.
Discussion
Several factors can lead to abdominal hematomas; however, spontaneous HCC rupture is a very plausible cause of hemorrhage in this patient. The exact pathophysiology is unknown, but researchers have proposed several mechanisms to explain tumor rupture [8]. The first hypothesis suggests that rupture can occur secondary to vascular injury. Increased collagenase expression in injured vessels degrades type IV collagen, which is essential for mechanical stability [9]. These arteries become stiff and can rupture. Second, research suggests that venous congestion caused by a tumor or chronic liver disease could lead to HCC rupture. When the hepatic vein becomes occluded, it can prevent blood from flowing out of the liver. This causes more blood to flow into the tumor, increasing intra-tumoral pressure and making the tumor more prone to rupture [8]. Other hypotheses base the tumor's susceptibility to rupture on the hepatic parenchyma surrounding the HCC, its size, and its location. Normal parenchyma surrounding the HCC can help prevent rupture of the tumor, but thinner parenchyma makes the tumor more prone to rupture. The tumor's location also plays an important role; an HCC in a subcapsular location can rupture earlier than an HCC tumor in the center of the liver. An HCC in the left lobe of the liver has a higher risk of rupture because the left lobe has a smaller area for space-occupying lesions [8].
Spontaneous hepatic hemorrhage secondary to rupture of HCC is challenging to diagnose and treat. Failure to detect and control it early can lead to shock and death [10]. As it has a poor prognosis, it must be included in the differential diagnosis in patients with higher levels of liver dysfunction [11]. Despite the poor prognosis, some studies show that emergency laparotomy can be decisive for patient survival [12]. One study by Chedid et al. [12] outlined three separate cases of spontaneous hemorrhage due to HCC rupture. All patients had a similar background and clinical manifestation, including a history of chronic liver disease, specifically a cirrhotic liver, as in our patient. They admitted patients who presented with abdominal pain and hypotension for emergent laparotomy, with varying degrees of success.
In the first case discussed, the patient's liver had become enlarged and cirrhotic. Subsequent pathological workup revealed HCC and cirrhosis. Similarly, the second patient's imaging revealed a tumor on the right side of the liver with free peritoneal fluid. This patient underwent an elective right trisegmentectomy, and subsequent histology confirmed HCC and cirrhosis. However, 16 months later, the patient returned with diffuse abdominal pain and signs of hypovolemia, chronic liver disease, and a tense abdomen. CT revealed an encapsulated hematoma, and laparotomy revealed a hematoma in the remaining left liver. Biopsy showed that the HCC had spread to the left lobe. The last case involved a patient with a history of viral hepatitis C and cirrhosis. Imaging revealed a large volume of intra-abdominal fluid and an ulcerated lesion in the right hepatic lobe. Sadly, the patient died from hypovolemic shock with external bleeding through the abdominal drain, which developed during the immediate postoperative period of the right hepatectomy.
The patient's acute presentation suggests that an HCC rupture caused his hematoma. Consistent with past literature, our patient's symptoms mirror those of other patients who presented with HCC rupture: he presented with abnormal liver function tests, severe hypotension, and abdominal distension. Diagnosing ruptured HCC is challenging because presentations vary; however, most studies describe shock as the most critical indicator of rupture [13]. Prompt treatment is required, as HCC rupture is a life-threatening complication. Even though there is a plethora of treatment methods, an emergent laparotomy in the present case proved essential for both the treatment of the rupture and the diagnosis of HCC.
Conclusions
In brief, abdominal hematomas have varying presentations and can, rarely, be associated with cancer. HCC has a rich blood supply and can follow a more aggressive course if ruptured. Although trauma and chronic diseases can also cause hematomas, the overall goal is to diagnose and treat presenting patients promptly. Although one might assume that an exploratory laparotomy is the most successful course of action for a hematoma, treatment is patient-dependent. In this patient, an exploratory laparotomy was critical to survival, but clinicians must determine what will provide the best outcome for each patient. Early diagnosis of hematomas will improve outcomes and allow injuries to be managed before they reach stages that require emergent operations.
Additional Information
Disclosures
Human subjects: Consent was obtained or waived by all participants in this study. Conflicts of interest: In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work. | 2023-02-05T16:17:58.457Z | 2023-02-01T00:00:00.000 | {
"year": 2023,
"sha1": "c555352d4f33ba987f27418d01d3b76578542b4c",
"oa_license": "CCBY",
"oa_url": "https://assets.cureus.com/uploads/case_report/pdf/136785/20230203-21122-1hq03lp.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "b31cfb184f54b8573f592634332ff31ac6bd0edd",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
28107215 | pes2o/s2orc | v3-fos-license | Ant queens increase their reproductive efforts after pathogen infection
Infections with potentially lethal pathogens may negatively affect an individual's lifespan and decrease its reproductive value. The terminal investment hypothesis predicts that individuals faced with a reduced survival should invest more into reproduction instead of maintenance and growth. Several studies suggest that individuals are indeed able to estimate their body condition and to increase their reproductive effort with approaching death, while other studies gave ambiguous results. We investigate whether queens of a perennial social insect (ant) are able to boost their reproduction following infection with an obligate killing pathogen. Social insect queens are special with regard to reproduction and aging, as they outlive conspecific non-reproductive workers. Moreover, in the ant Cardiocondyla obscurior, fecundity increases with queen age. However, it remained unclear whether this reflects negative reproductive senescence or terminal investment in response to approaching death. Here, we test whether queens of C. obscurior react to infection with the entomopathogenic fungus Metarhizium brunneum by an increased egg-laying rate. We show that a fungal infection triggers a reinforced investment in reproduction in queens. This adjustment of the reproductive rate by ant queens is consistent with predictions of the terminal investment hypothesis and is reported for the first time in a social insect.
Infection with pathogens has been used to test whether animals can react to a decline in physical condition independently of chronological age. In some species, infections indeed resulted in reinforced investment in reproduction (e.g. [9,10,17]), while in others the activation of the immune system was associated with a decrease in reproduction (e.g. lizards [18]). Here, we use a social insect to investigate whether ant queens are capable of reacting to infection with an increased egg-laying rate, as expected under the terminal investment hypothesis.
Queens of perennial social insects (ants, bees, termites) are exceptional with regard to life-history trade-offs and senescence. First, they show an extreme extension of lifespan compared with most solitary insects [19,20]. Second, queens and also reproductive workers live longer than their non-reproductive worker nest-mates, suggesting the absence of a trade-off between reproduction and lifespan at the level of the individual [21][22][23]. In Cardiocondyla ant queens, weekly egg-laying rates have been shown to be positively associated with longevity and to increase gradually with age [24][25][26], suggesting negative senescence. Here we show that, beyond this, infection with an entomopathogenic fungus triggers a boost in reproduction in queens of Cardiocondyla obscurior, independent of chronological age. Our data from a social insect are therefore in line with the terminal investment hypothesis.
Methods
Cardiocondyla obscurior lives in small colonies with fewer than 100 sterile workers and one or a few queens [27,28], which have a mean lifespan of 26 weeks [29]. We used large stock colonies that had been kept in the laboratory for several years to set up experimental colonies, each with one queen pupa, one pupa of a wingless male, and 20 workers. Worker number was kept constant over time by adding worker pupae from stock colonies or removing surplus workers. Sexuals mated after hatching, and all eggs produced by the mated queen were counted at least twice per week. Queen pupae that developed from the brood were removed to avoid the hatching of a second queen. Colonies were reared in incubators with a 12 h 28°C/12 h 23°C cycle and fed twice per week with cockroaches or fruit flies and honey. The queens were about nine weeks old when the treatment started (high infection: median age 65 days; full age statistics per group are given below). To investigate whether queens increase their reproductive efforts in response to a pathogen, we exposed some of the queens to conidiospores of the entomopathogenic fungus Metarhizium brunneum, an obligate-killing pathogen that requires host death for the completion of its life cycle [30][31][32]. Metarhizium brunneum penetrates the host cuticle within 48 hours after exposure and thereafter grows hyphae in the insect body [33]. Fungal growth and toxins released by the fungus after infection lead to host death, followed by outgrowth of a new generation of infectious conidiospores [34]. It is known from other ants that exposure does not always lead to lethal high-level infections but can also result in asymptomatic low-level infections [35]. To increase the proportion of queens that developed a lethal infection, we performed a preliminary test, which revealed that dipping ant queens in a spore suspension with a concentration of 5 × 10⁷ spores ml⁻¹ leads to 53.3% of C. obscurior queens developing lethal infections within 35 days after exposure, as is required to test the terminal investment hypothesis. Queens from 43 successfully established experimental colonies were subjected to three different treatments: (i) treatment with a 5 × 10⁷ spores ml⁻¹ Metarhizium brunneum spore suspension (strain Ma 275; KVL 03-143; as in [36]; in 0.05% Triton X; n = 31), (ii) treatment with 0.05% Triton X solution (n = 6), and (iii) an untreated control group (n = 6). Queens of the first and second groups were grasped with forceps and completely dipped into the M. brunneum spore suspension or the 0.05% Triton X solution, respectively, for 15 seconds (in a small bowl, approx. 2 ml) or until they had become immobile. Excess liquid was removed from the queen's surface by placing the queen on filter paper directly after the dipping procedure. Treated queens were isolated from their colony for 30 h to avoid grooming and spore removal by workers. Queens of the third group were also removed from their colonies to control for the effects of isolation. Preliminary data had shown that many queens die within a few hours after treatment with the spore suspension; hence, the sample size for this group was much larger than that for the two control groups, 'Triton X' and 'Untreated', in which the survival rate was high.
After the return of the queens, colonies were kept at room temperature (approx. 23°C). All eggs were counted daily, and dead queens were removed and frozen to determine their pathogen load by quantitative real-time PCR targeting the fungal ITS2 rRNA gene region [37,38]. Queens that were still alive after 24 days were frozen for the detection of their pathogen load and for verification that control queens were not infected, respectively. Prior to DNA extraction, individual ants were homogenized using a sterile micropestle. To ensure rupture of the spores, the samples were additionally homogenized with acid-washed glass beads (425-600 µm; Sigma-Aldrich) using a TissueLyzer II (Qiagen, Hilden, Germany). DNA extractions were performed using the Qiagen DNeasy Blood and Tissue Kit (Qiagen, Hilden, Germany) per the manufacturer's instructions, using a final elution volume of 50 µl.
For quantification of the fungal pathogen load by quantitative real-time PCR, we designed primers based on GenBank sequence AY755505.1 to bind to the Metarhizium brunneum ITS2 rRNA gene region (Met-ITS2-F: 5′-CCCTGTGGACTTGGTGTTG-3′, Met-ITS2-R: 5′-GCTCCTGTTGCGAGTGTTTT-3′). Reactions were performed in 20 µl volumes using KAPA SYBR® FAST Bio-Rad iCycler 2X qPCR Master Mix (Kapa Biosystems), 3 pmol of each primer (Sigma-Aldrich), and 2 µl of DNA template. The PCR program used for amplification was 95°C for 5 min, followed by 40 cycles of 10 s at 95°C and 30 s at 64°C. Each sample was run in triplicate, and each run included a negative control. Concentrations were determined using the standard curve method. Standards were obtained by extracting DNA from pure Metarhizium spores. The dilution series for the standard curve spanned DNA concentrations from 1 ng µl⁻¹ to 1 × 10⁻⁶ ng µl⁻¹. Specificity was confirmed by performing a melting curve analysis after each run.
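For illustration, the standard curve method described above amounts to a linear fit of Ct against log10 DNA concentration, from which unknown samples are back-calculated. In this sketch, only the dilution range follows the text; the Ct values are hypothetical placeholders, not data from the study.

```python
import numpy as np

# Ten-fold dilution series of pure-spore DNA (1 ng/ul ... 1e-6 ng/ul), as in
# the text; the Ct values below are hypothetical placeholders.
std_conc = np.array([1e0, 1e-1, 1e-2, 1e-3, 1e-4, 1e-5, 1e-6])  # ng/ul
std_ct = np.array([15.2, 18.6, 22.0, 25.3, 28.8, 32.1, 35.5])

# Fit Ct = slope * log10(conc) + intercept
slope, intercept = np.polyfit(np.log10(std_conc), std_ct, 1)
efficiency = 10 ** (-1.0 / slope) - 1  # amplification efficiency from the slope

def quantify(ct_triplicate):
    """Back-calculate template concentration (ng/ul) from a mean Ct."""
    return 10 ** ((np.mean(ct_triplicate) - intercept) / slope)

print(f"slope {slope:.2f}, efficiency {efficiency:.0%}")
print(f"sample: {quantify([24.9, 25.1, 25.0]):.2e} ng/ul")
```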
Four control queens were killed by freezing 24 days after the treatment to exclude fungal infection. Similarly, nine queens treated with M. brunneum spores were killed to quantify their fungal pathogen load. To verify actual infection in our experiment and to exclude external contamination by attached conidiospores, we determined the amount of fungus on the ants' cuticle directly after exposure. To do so, we exposed nine additional queens from stock colonies as described above and killed them by freezing after 10 min. We quantified this exposure dose by the same quantitative real-time PCR method and used it as a baseline to determine whether the pathogen load had increased relative to the initial exposure dose (see electronic supplementary material, S1). Only successful infection and pathogen replication inside the host body can lead to higher values in the experimental queens than in the exposure baseline. Queens with a higher than baseline value were thus categorized as highly infected ('High infection'). Queens exposed to M. brunneum but showing pathogen loads below the exposure baseline, yet above the negative control threshold, were categorized as having a low-level infection ('Low infection'); exposed queens were thus separated into high and low infection groups according to their pathogen load. Only 19 of the originally 31 exposed queens could be used in the analysis (high infection: n = 10; median age at treatment: 65 days; Q1: 60.8; Q3: 69.3; low infection: n = 9; median age at treatment: 67 days; Q1: 60; Q3: 70), as several queens did not survive the treatment procedure, did not resume egg laying, or their corpses could not be retrieved. For the same reasons, only six of the 12 originally set up control queens (Triton X + untreated control group) could be used in the analysis, so we pooled the two control treatments (Triton X, n = 3; Untreated, n = 3) into a single control group (n = 6 queens; median age at treatment: 63 days; Q1: 54.8; Q3: 71.3), as egg laying did not differ between them (Wilcoxon rank-sum test: before treatment W = 84.5, p = 0.5; after treatment W = 223, p = 1).
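The categorization rule described above can be summarized as a simple threshold comparison; in the sketch below, the numerical values are placeholders, and only the decision logic follows the text.

```python
# Placeholder thresholds: `baseline` stands for the pathogen load measured on
# queens frozen 10 min after exposure; `negative` for the no-template control
# threshold. Only the decision logic follows the text.
def classify_queen(pathogen_load, baseline, negative):
    """Assign an exposed queen to an infection category."""
    if pathogen_load > baseline:
        return "High infection"   # pathogen replicated inside the host
    if pathogen_load > negative:
        return "Low infection"    # below exposure dose, above detection limit
    return "Uninfected"           # comparable to negative controls

print(classify_queen(pathogen_load=0.8, baseline=0.2, negative=0.01))
```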
As the weekly reproduction rate of C. obscurior queens increases with lifespan [25], we compared individual egg-laying rates (daily egg number) during the week before the treatment (Before Treatment, BT; four scans, every second day) and the week after the treatment (After Treatment, AT; seven scans, daily). Data were analysed in R v. 3.2.3 [39], using the package 'ggplot2' [40] for all graphs (boxplots and LOESS curve) and 'survival' [41] for the Kaplan-Meier (KM) survival analysis and graph. Lifespans of queens that had not died during the experimental period of 24 days and had to be frozen for qPCR analysis were included as censored data in the survival analysis. Egg numbers before and after the treatment were not normally distributed (Shapiro-Wilk test, W = 0.97, p < 0.0001, and quantile-quantile plot analysis). Therefore, the Kruskal-Wallis test was used for group comparisons, followed by pairwise Wilcoxon rank-sum tests as post hoc tests. We used the Wilcoxon signed-rank test (paired) for two-sample comparisons. p-values were adjusted using the Benjamini-Hochberg correction to protect against a false discovery rate of 5%, using the library 'fdr' [42].
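The authors ran the analysis in R, as stated above; the following Python sketch with invented egg counts only illustrates the equivalent test sequence (Kruskal-Wallis omnibus test, pairwise Wilcoxon rank-sum tests with Benjamini-Hochberg correction, and a paired Wilcoxon signed-rank test).

```python
from scipy.stats import kruskal, mannwhitneyu, wilcoxon
from statsmodels.stats.multitest import multipletests

high = [4, 5, 6, 5, 7, 6]        # daily egg numbers, hypothetical
low = [5, 6, 6, 7, 5, 6]
control = [3, 4, 3, 4, 3, 4]

_, p_omnibus = kruskal(high, low, control)              # Kruskal-Wallis

raw_p = [mannwhitneyu(a, b).pvalue                      # Wilcoxon rank-sum
         for a, b in [(high, control), (low, control), (high, low)]]
_, p_bh, _, _ = multipletests(raw_p, method="fdr_bh")   # Benjamini-Hochberg

before = [3, 4, 4, 3, 5, 4]      # same queens before vs. after treatment
after = [5, 6, 5, 4, 7, 6]
_, p_paired = wilcoxon(before, after)                   # paired signed-rank

print(f"omnibus p={p_omnibus:.3f}; BH pairwise p={p_bh.round(3)}; "
      f"paired p={p_paired:.3f}")
```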
Results
Of the 19 queens treated with the Metarhizium brunneum spore suspension that were considered for analysis, nine (47.4%) died within 7 days after treatment. All surviving queens were killed after 24 days to check their infection status. Survival time was strongly influenced by infection level: nine of the 10 highly infected queens died within the first week.
Egg-laying rates (daily egg number) did not differ among the three groups before the treatment (Kruskal-Wallis test: χ2 = 1.1, d.f. = 2, p = 0.57, figure 2). However, treatment affected egg-laying rates: independent of infection level, infected queens significantly increased their egg-laying rate after treatment relative to that before treatment (high infection: 1.3-fold increase in egg-laying rate; Wilcoxon signed-rank test: V = 104.5, p = 0.0005; low infection: 1.5-fold increase; V = 73, p = 0.0001; table 1). In contrast, the egg-laying rate did not change in the control group (V = 66; p = 0.25).
In none of the groups did egg-laying rate differ between the day before and the day after the isolation (Wilcoxon signed-rank test: Low infection: V = 2, p = 0.15; High infection: V = 7, p = 0.21; Control: V = 3, p = 0.60), suggesting that the manipulation with forceps and subsequent isolation did not have a negative effect on reproduction. As infection only occurs approximately 48 h after exposure, changes in egg-laying rate can thus be related to changes in infection state, independent of the handling procedure.
Egg-laying rate in the week after treatment did not differ between highly and lowly infected queens, but both groups produced more eggs than the control group (Kruskal-Wallis test: χ2 = 10.5, d.f. = 2, p = 0.005; pairwise Wilcoxon rank-sum tests: high versus control, p = 0.013; low versus control, p = 0.007; high versus low, p = 0.73; Benjamini-Hochberg corrected p-values). Queens with a low infection continued to lay more eggs than control queens throughout the remaining experimental time (Wilcoxon rank-sum test, days 8-24: W = 3788, p < 0.0001, figure 3), whereas all except one highly infected queen had died by then. Infection load of queens treated with M. brunneum spores (estimated by qPCR) and mean egg number per queen were not correlated (Spearman's rank correlation: ρ = 0.08, p = 0.74; for infection loads see electronic supplementary material, table S1).
Discussion
The short lifespan of queens of the ant Cardiocondyla obscurior makes them a useful model to study the interrelation among life-history traits in social insects. We have previously shown that the fecundity of queens of C. obscurior increases gradually with age [25] and that they are capable of adjusting their egg-laying rate to changing social or environmental conditions without a reduction of lifespan [43,44].
Here we document that queens infected with an entomopathogenic fungus increase their egg-laying rate compared with uninfected queens of the same age. Metarhizium brunneum occurs in many parts of the world, e.g. in Europe [45] and Central America [32], and infestation with this and other pathogens may be a permanent threat for ant colonies. Almost half of all exposed queens (47.3%) of C. obscurior died within one week after spore contact. An infection with M. brunneum is therefore associated with a severely reduced lifespan, and the capability of queens to increase their reproductive efforts in response to infection is in line with the predictions of the terminal investment hypothesis [4], which had not previously been tested in social insects. Fungal exposure did not always lead to lethal high-level infection; the behavioural and immunological defences of the ants led to asymptomatic low-level infections in 47% of the exposed queens, which did not induce mortality during the experimental duration of 24 days. These low-level infected queens also showed an increased egg-laying rate, suggesting that the reduction of lifespan in highly infected queens was caused by the infection rather than by reinforced reproduction.
Figure 3. Temporal change in egg-laying rate (displayed by LOESS curve with 95% confidence intervals) for the week before treatment and the entire experimental time after the treatment. All but one highly infected queen had died within the first 7 days, so their curve stops after one week. While egg laying does not differ between the three groups before treatment, infected queens of both the high infection (blue) and low infection (red) groups show increased egg laying over the control group (yellow).
Low-level infections, as well as non-pathogenic injury such as the experimental amputation of parts of a queen's leg, cause immune reactions [35,46]. Neither treatment has immediate lethal effects, but both may still have later effects on the queens. These could not be examined in this study, given the limited experimental time of 24 days, after which queens were frozen to determine their infection status.
In contrast to a pathogenic threat, non-infectious wounding induces a strong but transient decrease of the egg-laying rate [46]. This indicates that infection and injury can have opposite effects on fecundity, despite both involving an immune response. The wounding study reveals that amputation leads to a trade-off between costly wound repair and reproduction in C. obscurior queens, in line with a general trade-off between resource allocation to reproduction and to life-sustaining processes [46,47]. As the egg-laying rate was only decreased temporarily and returned to the level before the injury [46], we suggest that the costs of recovery were only transient. A short increase of investment into the immune system might have accelerated the recovery process, so that queens could resume their normal egg-laying rate quickly. The fact that fungal infection increased the egg-laying rate indicates that it did not trigger a costly immune response. In any case, the net result is increased fecundity, which may compensate at least in part for the prospective decrease of residual reproductive value. Queens of social insects are supplied with food by workers and hence are usually not resource-limited [48]. This might allow them to invest into both immune response and reproduction if necessary, as queens with low infections seem to be able to reduce spore proliferation compared with highly infected queens and are nevertheless able to increase their egg-laying rate.
Interestingly, studies on social insect workers showed a different response to an immune challenge. Rather than staying in the nest, infected or injured workers leave the nest and commence foraging earlier than unmanipulated nest-mates [49][50][51] and later die away from their natal colony [38,52,53]. Similarly, CO2 narcosis affects queen and worker behaviour in opposite directions: it initiates egg laying in queens but inhibits ovary activation and causes precocious foraging and death in isolation in workers [38,52-55]. This suggests an opposing, caste-specific regulation of the physiological and behavioural responses to stressors such as pathogens, injuries, or CO2 (e.g. [54]).
In conclusion, in addition to the previously shown increase of fecundity with age, our results clearly show that queens are able to adjust their egg-laying rate after infection with an obligate-killing fungal pathogen that induces queen mortality when causing high-level infection. Hence, our study strongly supports the predictions of the terminal investment hypothesis. | 2018-01-23T00:09:27.886Z | 2017-07-01T00:00:00.000 | {
"year": 2017,
"sha1": "ed0831bf8f01a229d3f6c9821c95bd66e6b41fab",
"oa_license": "CCBY",
"oa_url": "https://royalsocietypublishing.org/doi/pdf/10.1098/rsos.170547",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "dd87fb4a7423463a16bc1b42aac3309de19a26fa",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
258321609 | pes2o/s2orc | v3-fos-license | Polyhydroxybutyrate Production from Methane and Carbon Dioxide by a Syntrophic Consortium of Methanotrophs with Oxygenic Photogranules without an External Oxygen Supply
Here, a syntrophic process was developed to produce polyhydroxy-β-butyrate (PHB) from a gas stream containing CH4 and CO2 without an external oxygen supply using a combination of methanotrophs with the community of oxygenic photogranules (OPGs). The co-culture features of Methylomonas sp. DH-1 and Methylosinus trichosporium OB3b were evaluated under carbon-rich and carbon-lean conditions. The critical role of O2 in the syntrophy was confirmed through the sequencing of 16S rRNA gene fragments. Based on their carbon consumption rates and the adaptation to a poor environment, M. trichosporium OB3b with OPGs was selected for methane conversion and PHB production. Nitrogen limitation stimulated PHB accumulation in the methanotroph but hindered the growth of the syntrophic consortium. At 2.9 mM of the nitrogen source, 1.13 g/L of biomass and 83.0 mg/L of PHB could be obtained from simulated biogas. These results demonstrate that syntrophy has the potential to convert greenhouse gases into valuable products efficiently.
Introduction
The mitigation of greenhouse gases is essential for developing a carbon-neutral society. Among these gases, methane (CH4) is particularly significant due to its high heat-trapping potential, with a global warming potential around 80 times greater than that of CO2 in the short term [1]. CH4 emissions account for 18% of total GHG emissions, even though methane is emitted into the atmosphere in much smaller quantities than CO2 [2]. This high impact has led to international initiatives such as the Global Methane Pledge. Consequently, the development of efficient processes to convert CH4 into value-added products is crucial.
Biological conversion is one potential approach to CH4 mitigation [3]. Biological routes for CH4 conversion offer the advantage of transforming CH4 into valuable substances under mild physicochemical conditions, unlike thermochemical routes, which are capital- and energy-intensive [4]. Methanotrophs, crucial players in the global methane cycle, are utilized as catalysts for this process. These bacteria are classified into type I, type II, and type X based on their physiological characteristics and methane assimilation pathways. The unique metabolic pathway of methanotrophs involves the initial step of methane oxidation, resulting in methanol production. Subsequently, methanol is further oxidized to formaldehyde, which is assimilated to form intermediates of the central metabolism, leading to the synthesis of various compounds [5]. Several bioproducts have been obtained from CH4 conversion using methanotrophs, including hydrogen [6], alcohols [7,8], organic acids [9], amino acids [10], lipids [11], single-cell proteins [12,13], and biodegradable plastics [14][15][16]. Among these bioproducts, the production of biodegradable plastics using methanotrophs has garnered attention, as it has the potential to address both plastic pollution and climate change, two of humanity's most pressing challenges. Type II methanotrophs can store excess carbon as polyhydroxy-β-butyrate (PHB), a representative biodegradable polymer, when certain growth-essential elements, such as nitrogen, sulfur, and phosphorus, are limited [17,18]. Previous research has indicated that nitrogen deficiency is the most effective method for inducing the accumulation of PHB in type II methanotrophs, including M. trichosporium OB3b [19].
However, there are challenges to the full-scale implementation of biological CH4 conversion for biodegradable polymers, including the inherent properties of the gaseous substrates required by methanotrophs, i.e., CH4 and O2 [20]. Gas transfer attributes are known to dictate overall productivity: the solubility of these gases amounts to only 19 mg/L for water in contact with a 100% CH4 headspace and 38 mg/L for water in contact with a 100% O2 headspace under typical growth conditions at 30 °C and 1 atm [21,22]. The solubility issue is further exacerbated by the dilution of the two gaseous substrates with the nitrogen in air. Additionally, safety concerns arise because CH4 is explosive in air over a flammability range of 5.5 to 14% [13]. While anaerobic CH4 oxidation routes may address some of these drawbacks, the slow growth of the corresponding microorganisms presents obstacles to practical application [23].
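The quoted solubilities can be roughly reproduced with Henry's law, C = H·p·M. The Henry constants in this sketch are approximate literature values near 30 °C, assumed for illustration and not taken from the cited references.

```python
# Approximate Henry constants near 30 C (assumed for illustration, not from
# the cited references); C = H * p * M converts to a mass solubility.
H = {"CH4": 1.2e-3, "O2": 1.2e-3}   # mol/(L*atm)
M = {"CH4": 16.04, "O2": 32.00}     # g/mol

for gas in ("CH4", "O2"):
    c_mg_l = H[gas] * 1.0 * M[gas] * 1000   # 100% headspace at 1 atm -> mg/L
    print(f"{gas}: ~{c_mg_l:.0f} mg/L")      # ~19 mg/L CH4, ~38 mg/L O2
```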
Recently, there have been attempts to mimic the division-of-labor strategy that microbial species employ to overcome natural challenges, for various applications [24]. One such application is the syntrophic culture of methanotrophs and phototrophs [25][26][27][28], in which the O2 required by the methanotrophs is supplied through photosynthesis. This approach could address mass transfer limitations and safety issues while transforming CH4 and CO2 simultaneously. While there are some reports on syntrophic cultivations of methanotrophs and phototrophs, most of these studies have focused on wastewater treatment or greenhouse gas conversion rather than on specific bioproducts such as biodegradable polymers.
Here, we demonstrate a syntrophic process of methanotrophs with phototrophs in oxygenic photogranules (OPGs) to produce PHB from a gas stream containing CH4 and CO2. OPGs consist of phototrophic cyanobacteria interacting syntrophically with heterotrophic bacteria and have recently been described in the context of wastewater treatment [28]. It may be possible to convert this syntrophy into a community in which methanotrophs largely replace the heterotrophic partners. Such a syntrophic community may then be able to transform methane into metabolites of interest. Furthermore, we anticipated that nitrogen limitation in the culture medium under excess carbon might enhance PHB accumulation by the consortium of methanotrophs with OPGs.
Prior to assessing PHB production through syntrophy, the properties of co-cultures of type I and type II methanotrophs were evaluated with respect to their gas intake profiles and population dynamics, to select the most suitable type of methanotroph for the syntrophic process. Subsequently, a series of experiments was conducted to improve PHB production in the syntrophy by adjusting the nitrogen source concentration. To the best of our knowledge, this is the first investigation into the production of PHB through biological CH4 conversion without an external O2 supply.
Microorganisms and Culture Conditions
Methylosinus trichosporium OB3b (NCIMB 11131) and Methylomonas sp. DH-1 strains were cultivated in 500 mL baffled flasks (polycarbonate) with a butyl-rubber septum containing 100 mL of nitrate mineral salts (NMS) medium, composed of KNO3 (26 g per liter) and 1 mL of vitamin stock (calcium pantothenate (50 mg), nicotinamide (50 mg), thiamine HCl (50 mg), riboflavin (50 mg), folic acid (20 mg), biotin (20 mg), and vitamin B12 (1 mg) per liter) [29]. All components of the medium were added before sterilization, except the phosphate and vitamin solutions. Sterilization was performed for 15 min at 121 °C. The phosphate and vitamin solutions were sterilized separately and then added to the medium. A gas mixture of 30% (v/v) CH4 and 70% (v/v) air was supplied by a mass flow controller (IMC1300, ISVT Co., Ltd., Yongin-si, Republic of Korea). The headspace gas was replenished every two days. The flasks were incubated in a shaking incubator at 230 rpm and 30 °C.
The oxygenic photogranules (OPGs) were cultivated in 1 L glass baffled flasks sealed with a butyl-rubber septum using 500 mL of simplified synthetic wastewater (SW) medium containing 139.45 mg CH3O per liter; some components were separately sterilized and then added to the medium. Carbon dioxide and air were supplied at a ratio of 15% (v/v) CO2 and 85% (v/v) air by headspace gas substitution with a mass flow controller (IMC1300, ISVT Co., Ltd., Yongin-si, Republic of Korea). The headspace gas was replenished once every five days. The flasks were incubated on an orbital shaker at room temperature and 100 rpm under LED illumination at a light intensity of 145 µmol/m²/s.
Gas Consumption of the Syntrophic Consortiums
Each strain of methanotrophic bacteria was co-cultured with OPGs separately in 60 mL serum bottles sealed with a butyl-rubber septum. The working volume in each serum bottle was 12 mL of mixed media (NMS medium and SW medium at a ratio of 1:1). All media components were added before autoclaving except urea, KH 2 PO 4 , FeSO 4 ·7H 2 O, phosphate, and vitamin solutions, which were added after autoclaving. The inoculation ratio of methanotrophic bacteria and OPGs was 1:1 (0.1%: 0.1% (wet w/v)). Carbon-rich conditions (30% CH 4 and 20% CO 2 with N 2 balance) and carbon-lean conditions (6% CH 4 and 4% CO 2 with N 2 balance) were employed with both consortiums. Gases were supplied by headspace gas substitution using a mass flow controller (IMC1300, ISVT Co., Ltd., Yongin-si, Republic of Korea). The serum bottles were incubated on an orbital shaker at room temperature and 100 rpm, maintaining a constant light intensity of 145 µmol/m 2 /s. All experiments were performed in duplicate. The gas composition in the headspace was measured periodically to determine gas intake. Samples were taken after cultivation to check the microbial community using 16S rRNA amplicon sequencing.
16S rRNA Amplicon Sequencing
DNA was extracted using a DNeasy PowerSoil Kit (Qiagen, Germany) and quantified by Quant-iT PicoGreen (Invitrogen, Carlsbad, CA, USA). The Illumina standard protocols were used to amplify the 16S rRNA gene (V3 and V4 regions) to prepare the sequencing libraries. PCR amplification was performed using 2 ng of the extracted DNA, 500 nM of each universal primer (F/R), 0.5 U of Herculase II fusion DNA polymerase (Agilent Technologies, Santa Clara, CA, USA), 1 mM of dNTP mix, and 5x reaction buffer. The cycle conditions for the 1st PCR were 95 °C for 3 min, then 25 cycles of 95 °C (0.5 min), 55 °C (0.5 min), and 72 °C (0.5 min), with a final extension for 5 min at 72 °C. Forward and reverse primers with Illumina overhang adapter sequences were employed for the 1st PCR amplification. AMPure beads (Agencourt Bioscience, La Jolla, CA, USA) were used to purify the 1st PCR product. After that, a 2nd PCR was carried out using 2 µL of the purified 1st PCR product and NexteraXT Indexed Primers to construct the final library. The 2nd PCR cycle conditions were analogous to those of the 1st PCR, except that 10 cycles were performed instead of 25. Purification of the 2nd PCR product was conducted with AMPure beads, and quantification was carried out using qPCR following the protocol guide (KAPA Library Quantification Kits for Illumina platforms). TapeStation D1000 ScreenTape (Agilent Technologies, USA) was used to assess the quality of the 2nd PCR product. Paired-end sequencing (2 × 300 bp) was conducted by Macrogen using the MiSeq™ platform (Illumina, San Diego, CA, USA). QIIME (version 1.9.0) was employed to analyze the sequencing data [30]. BLASTn (version 2.4.0) was used to align each operational taxonomic unit (OTU) sequence against the NCBI 16S rRNA database reference sequences to obtain the taxonomic affiliation of each OTU.
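As an illustration of the downstream bookkeeping, relative abundances such as those reported in the Results can be derived from an OTU count table after the QIIME/BLASTn processing. The counts below are invented placeholders, chosen only so that they reproduce the percentages later reported for the M. trichosporium OB3b consortium.

```python
# Placeholder OTU counts for one co-culture sample; only the bookkeeping
# (counts -> percent relative abundance) is illustrated here.
import pandas as pd

otu_counts = pd.Series({
    "Stanieria cyanosphaera": 6510,
    "Flavihumibacter cheonanensis": 1420,
    "Methylosinus trichosporium OB3b": 900,
    "other OTUs": 540,
    "Altererythrobacter aurantiacus": 360,
    "Caedimonas varicaedens": 270,
})
rel_abundance = 100 * otu_counts / otu_counts.sum()  # percent per taxon
print(rel_abundance.round(1).sort_values(ascending=False))
```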
Flask Cultivation of Syntrophic Consortium for PHB Production Using CH 4 and CO 2
Methylosinus trichosporium OB3b and OPGs were co-cultured in 500 mL polycarbonate baffled flasks sealed with a butyl-rubber septum. Various concentrations of N-NO 3 and N-NH 4 as nitrogen sources (11.5 mM, 5.7 mM, 2.9 mM, and 1.4 mM) were employed to investigate the effect of nitrogen concentration. The working volume of each flask was 100 mL of media based on the NMS and SW medium at 1:1, except for the nitrogen source. All media components were added before sterilization except urea, KH 2 PO 4 , FeSO 4 ·7H 2 O, phosphate, and vitamin solutions, which were added after sterilization. The inoculation ratio of methanotrophic bacteria and OPGs was 1:1 (0.1%: 0.1% (wet w/v)). The mixture comprising 30% CH 4, 20% CO 2, and 50% N 2 was supplied by headspace gas substitution using a mass flow controller (IMC1300, ISVT Co., Ltd., Yongin-si, Republic of Korea). The headspace gas was replenished every two days. All flasks were incubated on an orbital shaker at room temperature and 100 rpm, maintaining a constant light intensity of 145 µmol/m 2 /s. The total incubation time was 7 days. All experiments were conducted in duplicate. The gas composition in the headspace was measured every two days before the gas exchange. Total dry cell biomass and PHB production were determined at the end of the experiments.
Analytical Methods
The headspace gases were analyzed using a gas chromatograph (YL6500 GC, YOUNG IN Chromass, Anyang-si, Republic of Korea) equipped with 80/100 Porapak N and 45/60 Molecular Sieve 13X columns (Supelco Inc., Bellefonte, PA, USA) and a thermal conductivity detector (TCD). The carrier gas was argon (15 mL/min flow rate). The temperatures of the oven, injector, and detector were 40 °C, 120 °C, and 120 °C, respectively. All analyses were performed in duplicate.
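For orientation, headspace measurements of this kind can be converted into moles of gas consumed. The sketch below assumes ideal gas behavior and constant total pressure (assumptions, since the paper does not state its calculation) and uses the serum-bottle geometry from the co-culture experiments (60 mL bottle, 12 mL liquid) with hypothetical composition readings.

```python
R = 0.082057  # L*atm/(mol*K)

def headspace_moles(fraction, v_l=0.048, p_atm=1.0, t_k=298.15):
    """Moles of one gas species present at the measured molar fraction,
    assuming constant total pressure and temperature."""
    return fraction * p_atm * v_l / (R * t_k)

# Hypothetical readings: CH4 drops from 30% to 12% between measurements.
ch4_consumed = headspace_moles(0.30) - headspace_moles(0.12)
print(f"CH4 consumed: {ch4_consumed * 1e3:.2f} mmol")
```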
PHB contents were analyzed using the GC-FID method [31,32]. PHB was extracted from dried cell pellet samples by solvent extraction using methanol and chloroform and then trans-esterified with an acid catalyst [32]. The amount of PHB was determined using a gas chromatograph (YL6500 GC, YOUNG IN Chromass, Anyang-si, Republic of Korea) equipped with a J&W DB-WAX 123-7033 GC column from Agilent Technologies and a flame ionization detector (FID), using benzoic acid as an internal standard. The carrier gas was helium (3 mL/min flow rate). The injector and detector were maintained at 280 °C and 300 °C, respectively. The oven temperature was initially held at 85 °C for 5 min and then increased gradually to 200 °C. All analyses were carried out in duplicate.
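Internal-standard quantification of this kind typically scales the analyte peak area by that of the internal standard and a calibrated response factor. In the following sketch, all numbers are placeholders, and the response factor is an assumed calibration value rather than one reported in the paper.

```python
def phb_mass_mg(area_phb, area_istd, mass_istd_mg, response_factor):
    """Mass of PHB in the sample, scaled by the benzoic acid internal standard."""
    return (area_phb / area_istd) * mass_istd_mg / response_factor

mg_phb = phb_mass_mg(area_phb=15200, area_istd=9800, mass_istd_mg=1.0,
                     response_factor=0.85)
content = 100 * mg_phb / 25.0  # mg PHB per 100 mg dried cells (25 mg sample, assumed)
print(f"{mg_phb:.2f} mg PHB; content {content:.1f} mg/100 mg biomass")
```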
Gas Consumption by the Syntrophic Consortiums under Different CH 4 Contents
To verify the implementation of the syntrophic system of methanotrophs and phototrophs for CH4 conversion, we cultivated the consortium with a CH4/CO2 gas mixture in the absence of externally supplied O2. Two conditions were tested, differing in the carbon content of the gas mixture: a carbon-rich condition (50% carbon gases) and a carbon-lean condition (10% carbon gases). Both conditions had a CH4-to-CO2 ratio of 3:2, representing a typical biogas composition [33]. In addition to CH4 and CO2, the gas mixture contained only N2 as an inert gas.
Under the carbon-rich condition, both Methylomonas sp. DH-1 and M. trichosporium OB3b were cultivated with OPGs to achieve simultaneous consumption of the CH4/CO2 mixture. During the first few days of cultivation, CH4 consumption was higher than CO2 consumption in both cases, even though the aqueous solubility of CO2 is known to be higher than that of CH4 [34]. This can be attributed to the higher growth rate of the methanotrophs compared with the phototrophs in the OPGs, resulting in an imbalance between CO2 production by methanotrophs and CO2 consumption by phototrophs; consequently, no significant decrease in the proportion of CO2 in the headspace was observed. Thereafter, the growth rate of the phototrophs increased, resulting in concurrent consumption of the CH4/CO2 mixture (Figure 1a,c). Significantly, the proportions of CH4 and CO2 in the headspace fell below 5%, suggesting that the co-culture strategy can achieve near-complete abatement of both greenhouse gases. Comparing the performances of the two consortiums, the consortium of M. trichosporium OB3b with OPGs showed a higher carbon consumption rate than that of Methylomonas sp. DH-1 with OPGs. Although type I methanotrophs (such as Methylomonas sp. DH-1) generally exhibit superior growth rates [17], the consortium using M. trichosporium OB3b, a type II methanotroph, was better at consuming the gaseous substrate. This may be due to the inferior adaptability of type I methanotrophs, such as Methylomonas sp. DH-1, to growth at the low concentrations of O2 produced in situ by phototrophic cyanobacteria. Type I methanotrophs prefer relatively low CH4 and high O2 concentrations, while type II methanotrophs, including M. trichosporium OB3b, favor the opposite [35]. The accumulation of O2 and the consumption of CO2 during cultivation suggest that the photosynthetic cyanobacteria were more active than the methanotrophic bacteria in the consortium. In the control condition with only OPGs (Figure 1e), CH4 consumption was negligible, indicating that the CH4 in the system was assimilated by the added M. trichosporium OB3b or Methylomonas sp. DH-1. It was also observed that CO2 was consumed more rapidly in the co-culture with OB3b than in the culture using only OPGs (Figure 1c,e). Taking into account the production of CO2 from CH4 metabolism, the difference in performance is likely even more significant, highlighting the synergistic effect of the syntrophic conversion of greenhouse gases. Although the OPGs were initially inoculated as granules, the methanotrophs and OPGs grew individually in suspension without integration during co-cultivation, implying a potential shift in the composition of the syntrophic populations.
Under the carbon-lean condition, the syntrophic consortium using the M. trichosporium OB3b strain consumed the CH4/CO2 mixture (Figure 1d). However, the consortium using the Methylomonas sp. DH-1 strain displayed low efficacy (Figure 1b). While both CH4 and CO2 were consumed after the initial gas injection, the subsequent gas exchange resulted in decreased activity of the methanotrophs, leading to insufficient CH4 consumption. Examination of the pH of the final samples showed that the consortiums with Methylomonas sp. DH-1 and M. trichosporium OB3b had pH values of 6.95 and 6.79, respectively, under carbon-rich conditions; under carbon-lean conditions, the respective pH values were 8.60 and 7.22. The pH increase may be attributed to the low concentration of CO2 and/or the unequal cellular uptake of cations and anions under these conditions. The higher pH observed with the Methylomonas sp. DH-1 consortium impeded the activity of the methanotroph, as most methanotrophic bacteria are neutrophilic microorganisms and methane oxidation occurs optimally under neutral conditions [29].
Figure 1. Gas uptake profiles of the syntrophic system of Methylomonas sp. DH-1 with 50% (v/v) N2 (a) and 90% (v/v) N2 (b), of M. trichosporium OB3b with 50% (v/v) N2 (c) and 90% (v/v) N2 (d), and of OPGs (control) with 50% (v/v) N2 (e). Excluding N2, the only gas species are CH4 and CO2, at a CH4-to-CO2 ratio of 3:2.
Repeated batch cultivations were carried out under the carbon-lean condition by replenishing the headspace of the culture with the CH4/CO2 mixture whenever the CH4 was entirely consumed. The conversion of CH4 and CO2 consistently decreased over successive batches, suggesting that the syntrophic populations changed differently under these conditions than under the carbon-rich condition.
Metagenome Sequencing of the Syntrophic Consortiums after Cultivation with Different Gas Ratios
We sequenced 16S rRNA amplicons of the syntrophic consortiums cultivated at different carbon availabilities and with the two types of methanotrophs. The dominant bacterial species present in the OPGs were Sphingomonas echinoides, Stanieria cyanosphaera, Bradyrhizobium namibiense, and Bosea vaviloviae, accounting for 51.0%, 25.6%, 10.3%, and 5.8%, respectively (Figure 2). The proportion of heterotrophs in the OPGs decreased when co-cultured with the methanotrophs. This effect was more pronounced in the presence of Methylomonas sp. DH-1, a type I methanotroph. In the consortium containing Methylomonas sp. DH-1, the methanotroph was the most abundant sequence in the amplicon, with a relative abundance of 52.0%. The second most abundant strain was Stanieria cyanosphaera, with a relative abundance of 33.8%. Stanieria cyanosphaera is a unicellular, spherical cyanobacterium with cell sizes ranging from 5 to 40 µm [36]. Finding unicellular cyanobacteria in OPGs is unusual, since previous studies mostly identified filamentous cyanobacteria as the keystone species of OPGs [28,37]; this may explain why the OPGs and methanotrophs grew in suspension and the granules disintegrated. Other, less abundant species, such as Pseudomonas veronii, Pseudoxanthomonas mexicana, and Pelomonas puraquae, were found, accounting for 3.4%, 3.0%, and 2.7%, respectively. In the consortium containing M. trichosporium OB3b, Stanieria cyanosphaera was the most dominant strain, accounting for 65.1%. This higher proportion of Stanieria cyanosphaera may have regularly supplied enough O2 to M. trichosporium OB3b, leading to a more efficient CH4 metabolism. This may explain the faster CH4 consumption of M. trichosporium OB3b compared with Methylomonas sp. DH-1 under carbon-rich conditions. The relative abundance of M. trichosporium OB3b was 9.0%. Other bacteria in this consortium included Flavihumibacter cheonanensis, Altererythrobacter aurantiacus, and Caedimonas varicaedens, with relative abundances of 14.2%, 3.6%, and 2.7%, respectively. Under the carbon-lean condition, the relative abundance of Stanieria cyanosphaera decreased to 42.7%, while that of M. trichosporium OB3b increased to 20.9%. The proportions of Altererythrobacter aurantiacus and Caedimonas varicaedens also increased, to 10.1% and 9.7%, respectively. Flavihumibacter cheonanensis was not detected under this condition. These findings suggest that the ratio of cyanobacteria to the remaining bacteria in the consortium is lower under the carbon-lean condition, which may contribute to the decreased gas uptake. The O2 supply plays a crucial role in the consortium's performance under all conditions. Given that the syntrophic system relies on photosynthesis for O2 production, the M. trichosporium OB3b strain, which is robust against variations in carbon availability and O2 concentration, may be more suitable as a workhorse for this system. In addition, M. trichosporium OB3b can synthesize PHB, a biodegradable polymer derived from the acetyl-CoA pool of the serine cycle [38].
Improvement of PHB Accumulation by Adjusting Nitrogen Source Concentrations
We cultivated a syntrophic consortium comprising M. trichosporium OB3b and phototrophs in OPGs without an external O2 supply at various nitrogen source concentrations in the culture media to investigate the effect of nitrogen limitation on PHB accumulation. Under 11.5 mM nitrogen (100% of the NMS and SW medium), the O2 concentration in the flask headspace increased significantly (Figure 3a), while the CO2 concentration decreased. These changes in concentration may be attributed to the higher phototrophic activity of the OPGs under this condition, as they produced O2 from CO2 at a higher rate than its consumption by M. trichosporium OB3b. These results are consistent with those of Rasouli et al. [39], who observed an accumulation of 27% O2 at the end of co-culturing Chlorella sorokiniana and Methylococcus capsulatus. The high level of CH4 consumption under these conditions also suggests active growth of M. trichosporium OB3b, which can utilize CH4 as a sole energy and carbon source [5,40], in conjunction with the abundant O2 produced by the OPGs.
Biomass and PHB concentrations of the syntrophic consortium are presented in Table 1. The maximum biomass (1.30 g/L) was achieved using 11.5 mM nitrogen in the culture medium, with 33.3 mg/L of PHB concentration and 2.6 mg/100 mg of PHB content, respectively. While biomass production was almost comparable at 1.29 g/L with 5.7 mM nitrogen, PHB production (36.6 mg/l) and content (2.8 mg/100 mg biomass) increased. At 2.9 mM nitrogen, biomass concentration decreased to 1.13 g/L, but PHB concentration and content increased to 83.0 mg/L and 7.4 mg/100 mg biomass, respectively. At the lowest nitrogen concentration of 1.4 mM, PHB concentration decreased to 67.3 mg/L even if PHB content increased to 13.7 mg/100 mg biomass, resulting from a significant reduction in total biomass (0.50 g/L). It is likely due to the nitrogen limitation in the culture medium, which stimulates PHB accumulation within M. trichosporium OB3b cells but hinders the growth of the syntrophic consortium. Overall, the best PHB accumulation by the syntrophic consortium in this study was achieved at a nitrogen concentration of 2.9 mM in the culture medium. PHB production and content were lower than pure methanotrophic cultivations. Despite similar methane consumption rates in the syntrophic culture and pure culture of M. trichosporium OB3b (Figure 3c,e), PHB productivity in the co-culture was inferior. This lower performance can be attributed to the higher prevalence of oxygen-producing OPGs relative to PHB-producing methanotrophs within the consortium. Thus, further optimization of microbial populations may be necessary to improve PHB synthesis in co-culture conditions.
Conclusions
In this study, we explored applying a syntrophic community using methanotrophs and phototrophs in OPGs to address the challenges in the biological conversion of CH 4 concerning economic feasibility and effective utilization of gaseous substrates. It was found that the steady supply of O 2 is a crucial factor in the performance of the consortium. Type II methanotrophs that are less sensitive to carbon availability and O 2 levels are a more suitable partner for co-cultivation with photosynthetic bacteria. We confirmed the production of biodegradable polymers from greenhouse gases using the consortium under nitrogen limitation in the culture medium. Additionally, the syntrophic relationship between methanotrophs and phototrophs, involving the exchange of CO 2 and O 2 , allows for the complete removal of CO 2 generated by CH 4 consumption, potentially leading to a zero-carbon emission process. These results demonstrate the potential to transform greenhouse gases, comprising CH 4 and CO 2 , into biodegradable polymers through syntrophy without an external O 2 supply. However, issues such as O 2 accumulation or the remaining CH 4 due to an imbalance between CH 4 and CO 2 consumption must be addressed. Methanotrophs and OPGs grew individually suspended without integration in the co-cultivation. Notably, 16S rRNA amplicon sequencing revealed that unicellular cyanobacteria dominate over filamentous cyanobacteria, which play an essential role in granulation, explaining this suspension growth. For practical applications such as cell immobilization and light transmission, methanotrophs should be fully incorporated into the OPGs to create single methanotrophic photogranules. Further research on optimizing parameters in the culture environment, including controlling populations in the consortium and designing co-culture media, is necessary to address these limitations. | 2023-04-26T15:12:16.332Z | 2023-04-24T00:00:00.000 | {
"year": 2023,
"sha1": "bd09b5724d5a2d07b52ed0b56424a51c72d511da",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2076-2607/11/5/1110/pdf?version=1682321449",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "19e70c5e7027bda01b89e178a7f6d975875fe635",
"s2fieldsofstudy": [
"Engineering",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
4567064 | pes2o/s2orc | v3-fos-license | Development of a Smoking Abstinence Self-efficacy Questionnaire
Background: Self-efficacy beliefs are an important determinant of (changes in) health behaviors. In the area of smoking cessation, there is a need for a short, feasible, and validated questionnaire measuring self-efficacy beliefs regarding smoking cessation.
Purpose: The purpose of this study is to investigate the psychometric properties of a six-item questionnaire to assess smoking cessation self-efficacy.
Methods: We used longitudinal data from a smoking cessation study. A total of 513 smokers completed the Smoking Abstinence Self-efficacy Questionnaire (SASEQ) and questionnaires assessing depressive symptoms and motivation to quit smoking. After that, they set a quit date and attempted to stop smoking. One year after the quit date, the smoking status of participants was assessed by self-report. The psychometric properties of the SASEQ were studied, and we investigated whether SASEQ scores predicted successful smoking cessation.
Results: Factor analysis yielded one factor, with an eigenvalue of 3.83, explaining 64% of variance. All factor loadings were ≥0.73. We found a Cronbach's alpha of 0.89 for the SASEQ and low correlations of the SASEQ with depressive symptoms and motivation to quit, indicating that self-efficacy is measured independently of these concepts. Furthermore, high baseline SASEQ scores significantly predicted smoking abstinence at 52 weeks after the quit date (OR = 1.85; 95% CI = 1.20–2.84).
Conclusions: The SASEQ appeared to be a short, reliable, and valid questionnaire to assess self-efficacy beliefs regarding smoking abstinence. In the present study, this instrument also had good predictive validity. The short SASEQ can easily be used in busy clinical practice to guide smoking cessation interventions.
Introduction
Self-efficacy is defined as the confidence a person has in his or her ability to perform and sustain a certain behavior in a given situation [1,2]. It is an important component of several theories of behavior change. Efficacy expectations are proposed to be better predictors of behavior than are previous or current behaviors alone [3]. Self-efficacy depends on past experience with the behavior, influence of others, physiological state, and outcome expectations [4]. The concept of self-efficacy is particularly relevant for smoking cessation. People with a high confidence in their ability to quit smoking are more often successful in smoking cessation [5][6][7] and relapse less often after a quit attempt [8]. As self-efficacy is an important psychological construct with immediate relevance and practical implications for smoking cessation, it is useful to measure it in routine clinical practice, for example in pregnant women or in heart patients.
In the past, various questionnaires to measure self-efficacy with regard to smoking cessation have been used [8][9][10][11][12][13][14][15]. These questionnaires generally consist of a list of smoking situations for which respondents can rate their confidence in their ability to refrain from smoking [16]. However, these instruments are not always feasible for use in routine clinical care, as they are composed of 12 to 48 items. Therefore, a new, six-item self-efficacy scale was constructed: the Smoking Abstinence Self-efficacy Questionnaire (SASEQ).
In the current study, we investigate the psychometric properties of this six-item self-efficacy scale in a prospective smoking-cessation trial.
As self-efficacy may be influenced by motivation to quit smoking and depression [17][18][19][20], we investigated the association of self-efficacy with measures of motivation and depression. Self-efficacy is a separate concept from depression and motivation to quit smoking, so we hypothesized that correlations between these concepts would be low. Furthermore, we hypothesized that high scores on the SASEQ would predict smoking abstinence at 52 weeks after the quit date.
Participants
Between January 2004 and January 2007, 513 smokers participated in a smoking cessation program (STOPPERS). They were recruited by general practitioners from 15 general practices and by specialists from two departments of the Máxima Medical Centre hospital in Eindhoven and Veldhoven. The inclusion criteria were willingness to discuss smoking behavior and sufficient understanding of the Dutch language. The only exclusion criterion was a severe psychiatric disorder in immediate need of treatment.
Procedure
All smokers received smoking cessation advice from their general practitioner or medical specialist. When patients showed interest in smoking cessation, they were referred to the study project. All participants were from Eindhoven and its surrounding areas, in the South East of the Netherlands. They all signed informed consent forms. The study protocol was approved by the Maxima Medical Centre ethics committee, which is certified by the Central Committee on Research involving Human Subjects in the Netherlands.
Participants were asked to complete several questionnaires. The quit date was set usually approximately 4 weeks after the inclusion in the study. Fifty-two weeks after the quit date, smoking status was assessed.
Measures
Smoking Abstinence Self-efficacy Questionnaire The SASEQ was constructed based on extensive experience with smoking cessation interventions and knowledge of the literature [8,14,15,[21][22][23][24][25][26]. The eight-item self-efficacy subscale developed by Dijkstra, de Vries, and Roijackers [24] was used as a basis and further refined. It consists of two dimensions: four items describing "social" situations and four items describing "emotional" situations. Based on face validity, two items were removed: "going out with friends," because it describes essentially the same situation as "being in a café or at a party," and the emotional item "feeling bored," because it differs markedly from the other emotional items (agitated, angry, and sad) [24]. The remaining six items describe situations for which smokers can indicate on a 5-point Likert scale (0-4) whether they will be able not to smoke (Appendix I).
The higher the score, the higher the level of self-efficacy regarding smoking cessation is. The range of the SASEQ scale is 0-24.
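To make the scoring rule concrete, a minimal sketch in Python is given below; the validation logic simply mirrors the six-item count and 0-4 item range stated above, and the example responses are hypothetical (the actual item wordings appear in Appendix I, not reproduced here).

    def saseq_score(responses):
        """Sum six 0-4 Likert responses into a 0-24 SASEQ self-efficacy score."""
        if len(responses) != 6:
            raise ValueError("the SASEQ has exactly six items")
        if any(r not in (0, 1, 2, 3, 4) for r in responses):
            raise ValueError("each response must be an integer from 0 to 4")
        return sum(responses)

    # Hypothetical respondent with moderate confidence in most situations:
    print(saseq_score([2, 3, 2, 1, 3, 2]))  # -> 13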
Edinburgh Depression Scale
The Edinburgh Depression Scale (EDS) [27][28][29][30] is a ten-item self-report scale that measures depressive symptoms. Respondents rate on a 4-point Likert scale (0-3) to what extent they have had depressive feelings and thoughts over the past 7 days. The higher the score, the more depressive symptoms the respondent has. The range of the EDS is 0-30.
Symptom Checklist-90 Anxiety Subscale
Anxiety was assessed by means of the anxiety subscale of the Symptom Checklist-90 (SCL-90). The SCL-90 is used to assess psychopathology and has been extensively validated in the Netherlands [31]. The anxiety subscale consists of ten items that can be rated on a 5-point Likert scale (1-5). The higher the score, the more anxious the respondent is. The range of the anxiety subscale is 10-60.
Motivation to Quit Smoking
Motivation to quit smoking was assessed with the following question: "How motivated are you to quit smoking completely?" This question is derived from questionnaires of the Mayo Clinic in the USA. Respondents were presented with a 5-point Likert scale: (0) not at all motivated, (1) not very motivated, (2) neutral, (3) a little motivated, and (4) very motivated.
Demographic characteristics of the participants and smoking habits were also registered.
Smoking Status
Smoking status was assessed by self-report. At 52 weeks after the quit date, participants were asked if they had smoked since the original date of quitting. Long-term abstinence was defined as abstinence for at least 6 months. When participants did not provide follow-up data, we assumed they had started smoking again.
Analyses
Explorative factor analysis was used to identify the underlying factors of the questionnaire. We used the principal axis factoring method with Varimax rotation. Prior to this analysis, the Kaiser-Meyer-Olkin measure of sampling adequacy and the Bartlett's test of sphericity were examined to evaluate whether the data fulfilled the assumptions for carrying out a factor analysis. The Kaiser-Guttman criterion (eigenvalue>1) was utilized to decide on the number of factors retained. Homogeneity of factor solution(s) was determined by calculating item-total correlations and internal consistency by Cronbach's alpha. An alpha of ≥0.7 was regarded as sufficient [32].
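As an illustrative sketch of the two computations described above, the Kaiser-Guttman factor count and Cronbach's alpha can be reproduced from a respondents-by-items score matrix with plain NumPy (this is an equivalent calculation, not the SPSS routine the authors used):

    import numpy as np

    def kaiser_guttman(items):
        """Eigenvalues of the item correlation matrix; factors with eigenvalue > 1 are retained."""
        corr = np.corrcoef(np.asarray(items, dtype=float), rowvar=False)
        eigenvalues = np.sort(np.linalg.eigvalsh(corr))[::-1]
        return eigenvalues, int((eigenvalues > 1).sum())

    def cronbach_alpha(items):
        """alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
        x = np.asarray(items, dtype=float)
        k = x.shape[1]
        return k / (k - 1) * (1 - x.var(axis=0, ddof=1).sum() / x.sum(axis=1).var(ddof=1))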
The discriminant validity of the SASEQ was investigated by calculating Pearson correlations with the SCL anxiety subscale, the EDS, and motivation to quit smoking.
To determine the predictive validity of the SASEQ scale as a predictor of successful smoking cessation, we used logistic regression analysis. Successful smoking cessation was defined as not having smoked for the past six months. In the regression analysis, we also included known predictors for smoking cessation [33] in order to determine the role of the SASEQ score. We included the following predictors: gender, duration of longest quit attempt, smoking status of partner, average number of cigarettes per day, and duration of being a smoker. We also conducted a t test to see whether people who had achieved long-term abstinence at 52 weeks, scored higher on the SASEQ at baseline.
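A sketch of this predictive-validity model using statsmodels is shown below; the column names and the synthetic stand-in data are our own illustration, not the study's dataset.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 513
    # Synthetic stand-in data; variable names are hypothetical.
    df = pd.DataFrame({
        "saseq": rng.integers(0, 25, n),
        "gender": rng.integers(0, 2, n),
        "cigs_per_day": rng.integers(5, 41, n),
        "years_smoking": rng.integers(5, 46, n),
    })
    # Toy outcome: higher self-efficacy raises abstinence probability.
    p = 1.0 / (1.0 + np.exp(-(-2.0 + 0.1 * df["saseq"].to_numpy())))
    df["abstinent"] = rng.binomial(1, p)

    model = smf.logit("abstinent ~ saseq + gender + cigs_per_day + years_smoking",
                      data=df).fit(disp=0)
    print(np.exp(model.params))      # odds ratios per unit increase
    print(np.exp(model.conf_int()))  # 95% confidence intervals for the ORs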
Participants
Demographic characteristics are summarized in Table 1. The sample consisted of 52% women. The mean age was 51 years (SD = 11). Most participants had completed medium-level education and were married or living with a partner. They smoked on average 20 cigarettes/day (SD = 10). The mean age when they smoked their first cigarette was 15 (SD = 3.35), and the mean age when they started smoking daily was 17 (SD = 4.19). Participants had undertaken on average 3.6 quit attempts (SD = 4.24). The average score on the item regarding their motivation to quit smoking was 3.4 (SD = 0.9); the mean SASEQ score was 11.7 (SD = 5.5).
Factor Analysis
The Kaiser-Meyer-Olkin measure (0.86) and Bartlett's test of sphericity (p < 0.001) indicated that the assumptions for factor analysis were met. Exploratory factor analysis yielded one factor (eigenvalue > 1), with an eigenvalue of 3.8, explaining 64% of the variance. The second factor had an eigenvalue of 0.79 and was therefore not retained. All factor loadings were ≥0.73 (Table 2). The factor structure was the same for men and women and across different educational levels.
Internal Consistency
The internal consistency of the SASEQ was good: we found a Cronbach's alpha of 0.89, and deleting any item decreased Cronbach's alpha. Item-total correlations for items 1-6 ranged between 0.68 and 0.73.
Discriminant and Predictive Validity
We found a significant, very low, negative correlation for the SASEQ with the EDS depression score (r = −0.145; p = 0.001). Furthermore, we found a low, positive, significant correlation for the SASEQ with motivation to quit (r = 0.205; p < 0.001).
The logistic regression analysis was conducted with smoking status as the dependent variable, and with self-efficacy, gender, duration of longest quit attempt, smoking status of partner, average number of cigarettes per day, and duration of being a smoker as covariates.
We found that only the SASEQ score significantly predicted smoking status at 52 weeks after the quit date. Participants with higher scores on self-efficacy were significantly less likely to start smoking again (OR = 0.95; 95% CI = 0.91~0.99; p = 0.02). We also conducted a t test to see whether people who had achieved long-term abstinence at 52 weeks scored higher on the SASEQ. We found that non-smokers at 52 weeks indeed had significantly higher SASEQ self-efficacy scores (t = 2.68; df = 511; p = 0.008): the mean SASEQ score for smokers was 11.41 (SD = 5.41), whereas the mean SASEQ score for non-smokers was 13.00 (SD = 5.60).
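The t test above can be reproduced from the reported summary statistics alone; since the smoker/non-smoker split is not reported, the group sizes in the sketch below are assumptions chosen only so that n1 + n2 = 513 (df = 511), and the resulting t will therefore differ somewhat from the published value.

    from scipy.stats import ttest_ind_from_stats

    # Reported baseline SASEQ summaries: non-smokers 13.00 (SD 5.60),
    # smokers 11.41 (SD 5.41); the group sizes below are illustrative only.
    t, p = ttest_ind_from_stats(mean1=13.00, std1=5.60, nobs1=128,
                                mean2=11.41, std2=5.41, nobs2=385,
                                equal_var=True)
    print(t, p)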
Discussion
This study investigated the psychometric properties of a six-item self-efficacy scale for smoking abstinence. Factor analysis of the SASEQ showed one factor with an explained variance of 64%. All factor loadings were adequate. The SASEQ had high internal consistency (Cronbach's alpha = 0.89) and good discriminant validity. We found a significant, very low, negative correlation for the SASEQ with depression, and a significant, low, positive correlation with motivation to quit smoking. These findings support the discriminant validity of the SASEQ, indicating that this instrument does not measure depression or motivation to quit smoking, and confirm that self-efficacy is indeed a separate concept from these two.
To investigate the predictive validity of the SASEQ, we analyzed whether our respondents' SASEQ scores predicted smoking status. We found that the SASEQ score before the planned quit date significantly predicted smoking abstinence at 52 weeks after the quit date. The odds ratio of 1.85 indicates that people who score high on the SASEQ have a much higher chance of abstaining from smoking than people who score low on the SASEQ (95% CI = 1.20~2.84). Furthermore, we found that non-smokers at 52 weeks had rated themselves significantly higher on self-efficacy before quitting.
Our results indicate that the SASEQ is a very good questionnaire for use in a clinical setting. It is psychometrically sound and very short: with only six items, it can easily be completed in a waiting room or incorporated into a larger questionnaire booklet without adding too many extra questions. It should be noted that, in the Netherlands, smoking cessation strategies have now been implemented in large chronic health care programs (diabetes, cardiovascular risk management, and COPD) managed by GP nurses in primary care [34]. Unfortunately, within these health care programs, and in contrast to the assessment of concepts such as depression and anxiety, appropriate instruments are lacking to assess patients' capability to change their behavior. This is important not only for smoking cessation strategies but also for motivating diabetic patients to increase their daily activity or obese patients to change their eating behavior. Because the GP nurse is often confronted with chronic patients with a high degree of co-morbidity, and outpatient clinic consultation time is limited, short instruments are needed that can easily be used in daily practice. Moreover, when reliable instruments exist that can discriminate between patients with high and low self-efficacy, it might be speculated that, in view of cost-effectiveness, programs of different intensity could in the future be offered to different patients.
A limitation of the study is that no other self-efficacy measures were available to correlate the SASEQ score with in order to assess convergent validity. Another limitation is the fact that motivation to quit smoking was measured with a single item rather than a full questionnaire. Strong points of the study are the prospective design and the large sample size.
In conclusion, the SASEQ seems to be an instrument that can assess self-efficacy regarding smoking abstinence reliably and validly. SASEQ scores appeared to be significant predictors of successful smoking cessation. We would like to emphasize that this six-item questionnaire can be completed in approximately 1 min and is therefore feasible for use in busy clinical practice. | 2016-05-12T22:15:10.714Z | 2012-02-20T00:00:00.000 | {
"year": 2012,
"sha1": "5fe1db06fc7fa8e16ef5419790f878ccf2dabd71",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s12529-012-9229-2.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "5fe1db06fc7fa8e16ef5419790f878ccf2dabd71",
"s2fieldsofstudy": [
"Medicine",
"Psychology"
],
"extfieldsofstudy": [
"Psychology",
"Medicine"
]
} |
16657977 | pes2o/s2orc | v3-fos-license | Deadly Partners: Interdependence of Alcohol and Trauma in the Clinical Setting
Trauma is the leading cause of death for Americans aged 1 to 45. Over a third of all fatal motor vehicle collisions and nearly eighty percent of completed suicides involve alcohol. Alcohol can be both a cause of traumatic injury as well as a confounding factor in the diagnosis and treatment of the injured patient. Fortunately, brief interventions after alcohol-related traumatic events have been shown to decrease both trauma recidivism and long-term alcohol use. This review will address the epidemiology of alcohol-related trauma, the influence of alcohol on mortality and other outcomes, and the role of prevention in alcohol-related trauma, within the confines of the clinical setting.
Introduction
It has been well established that alcohol use increases the risk of a traumatic event, as well as of overall poor health outcomes. An international study [1] concluded that alcohol contributes to 3.8% of all global deaths and 4.6% of all global disability-adjusted life years (DALYs). Alcohol is involved in 37% of all fatal motor vehicle collisions (MVCs) [2]. Further, the risk of an MVC shows a dose-response relationship with blood alcohol concentration (BAC), rising from five-fold at 80 mg/dl to 25-fold at 150 mg/dl [3].
Alcohol also plays a significant role in the 3.7 million non-fatal falls, 3.6 million non-fatal transport collisions (motorized, bicycle, or pedestrian), and 60,000 non-fatal firearm injuries that occur annually among American adults aged 18-65 [4]. For example, alcohol use increases the risk of falling three to four times among young and middle-aged adults, according to a meta-analysis by Kool and colleagues [5]. According to the Institute for Research, Education, and Training in Addictions, over 20,000 people per day (7.6 million per year) enter emergency departments for alcohol-related injuries and illnesses [6].
Epidemiology
Heavy alcohol use was associated with nearly double the risk of violent injury among trauma patients presenting to an urban trauma center [7], and a Swiss study by Kuendig and colleagues [8] showed that nearly half of all emergency department (ED) patients who sustained non-fatal injuries reported alcohol use prior to admission. This held across all injury types and even at low levels of alcohol consumption. Repeated studies confirm the very high prevalence of either acute or chronic alcohol use among trauma patients [9,10], and there is a very high unmet need for alcohol rehabilitation among these patients. Among a population of orofacial trauma victims reporting any recent use of alcohol or drugs, 58% met criteria for alcohol abuse, although only a very small percentage reported ever receiving alcohol treatment. Penetrating trauma and male gender were the two risk factors significantly associated with the likelihood of testing positive for alcohol or illicit drugs at the time of admission [11].
Patterns of alcohol use and abuse often begin early in life. A parental history of alcoholism is a very strong risk factor for problem drinking in offspring [12]. A survey conducted among over 43,000 members of the general American adult population queried participants about alcohol abuse and traumatic events; 34,653 were re-interviewed three years later. Results showed that the earlier respondents reported starting to drink, the higher the likelihood that they unintentionally injured themselves or someone else while drinking. More than a third of these events occurred in young adults (under age 25), despite this population comprising only 7% of those sampled [13]. Early interventions aimed at preventing alcohol use and abuse in the pediatric and adolescent population may prove to be an effective technique for decreasing alcohol-related traumatic events. However, young adults are not the only population at risk of alcohol-related trauma: alcohol is involved in almost 10% of trauma cases among patients over the age of 65 [14]. Alcohol was most highly associated with fall injuries in this population; therefore, clinicians should have a high suspicion of alcohol involvement in falls involving elderly patients.
Physiologic Outcomes
The role alcohol plays in trauma outcomes, including morbidity, mortality, and length of stay, remains controversial in both human and animal studies, and multiple published studies report directly contradictory conclusions. The largest and most recent study, by Salim and colleagues [15], showed that mortality was significantly lower in moderate to severe traumatic brain injury (TBI) patients with a positive serum alcohol level on admission, although overall complications were higher. However, the retrospective nature of the study, as well as the absence of quantitative blood alcohol levels, calls into question the statistical validity of these conclusions. Despite this, the authors suggested that, in the future, administering ethanol to TBI patients may be considered to improve mortality.
It has been more definitively established that alcohol acts as a confounder in clinical assessment. Intoxicated patients can present with a falsely depressed Glasgow Coma Scale (GCS) score, which may delay appropriate treatment, such as intubation or insertion of an intracranial pressure monitor. Golan and colleagues showed a 151-minute delay in the insertion of such a device in severely intoxicated patients (blood alcohol level (BAL) > 21.7 mmol/L) compared to patients with negative BALs [16].
A study similar to Salim and colleagues' [17] concluded that toxicology screens among TBI patients that were positive for methamphetamines or alcohol were associated with lower mortality, as well. However, when examining the effect of varying levels of serum alcohol among TBI patients, Shandro and colleagues found no difference in either short-term or long-term mortality [18]. Further, a prospective cohort study of TBI patients showed that higher BALs were associated with poorer performance on the Disability Rating Scale (DRS), but there was no association with short-term clinical outcomes or scores on the Functional Independence Measure (FIM) [19].
Psychological Outcomes
Further complicating the treatment of patients who experience alcohol-related trauma is the high rate of subsequent posttraumatic stress disorder (PTSD) and the accompanying increase in alcohol use after the event. Patients with both alcohol dependence and PTSD have significantly worse physical and mental functioning than patients with either affliction alone [20]. Therefore, not only does alcohol use increase the risk of experiencing a traumatic event, the converse is also true (i.e., experiencing a traumatic event can increase the risk of subsequent alcohol use). McFarlane and colleagues found that patients who developed PTSD after a traumatic event had an increased risk of developing an incident alcohol use disorder [21]. Obviously, the compounded relationship of alcohol use begetting violence, which in turn begets heavier alcohol use, complicates treatment for alcohol abuse after a traumatic event. It is important for practitioners to recognize the highly prevalent comorbidity of PTSD with alcohol abuse in patients who have experienced alcohol-related trauma and to focus treatment on both conditions.
Health Care Costs
Caring for intoxicated patients is more expensive than caring for non-intoxicated patients, especially in the trauma setting. This is partially explained by the increased number of required interventions and studies, given the unreliability of histories obtained from, and physical examinations performed on, these patients. In a 2009 study, O'Keefe and colleagues showed that intoxicated trauma patients were more likely to require invasive procedures (including intubation and urinary catheter insertion) and to be admitted to either an inpatient unit or intensive care unit (ICU), compared with non-intoxicated patients with similar clinical characteristics. The team calculated that mean hospital charges were $1,833 greater per patient [22]. When this figure is multiplied by the millions of trauma patients seen in hospitals annually, the burden is substantial.
The Uniform Accident and Sickness Policy Provision Law (UPPL) further complicates reimbursement for hospitals. This punitive law, which is prohibited in only thirteen U.S. states [23], allows insurance companies to deny coverage to individuals who have sustained alcohol-related healthcare charges. This places an unfair financial burden on trauma centers and treats alcohol abuse as a crime instead of a disease. The problem is compounded by the fact that alcohol abuse is more prevalent among the poor and correlates highly with mental illness and drug abuse, populations that are the most likely to have no insurance at all. More than 1% of the gross national product in high- and middle-income countries is attributable to the social and health costs of alcohol [1].
Prevention
By 2007, all 50 states had instituted a legal limit of 0.08% BAC to be considered legally drunk, reduced from 0.1% starting in the late 1990s. This resulted in a statistically significant 5.2% reduction in single-vehicle nighttime fatal traffic crashes in a before-and-after study of 19 jurisdictions [24]. Further, ten states have instituted "zero tolerance" laws for drivers under 21 years of age. The remaining states use 0.01 or 0.02% as the legal limit for drivers under 21, in compliance with the National Highway Systems Designation Act of 1995, as required in order to receive federal highway funds [2]. An analysis of the effect of the zero tolerance law, as well as other underage drinking laws such as purchase and possession laws, suggests that 732 lives per year are saved as a result of its implementation, which supports the argument for all states to institute similar zero tolerance laws [25]. A similar study by the same group confirmed that legislation aimed at curbing drunk driving was effective in reducing fatal MVCs among adults as well.
Another intriguing technique for preventing alcohol-related violence is increasing the price of alcohol. The strong relationship between violence and alcohol use has been repeatedly established [7], so one could hypothesize that increasing the price of alcohol would decrease alcohol consumption and, subsequently, violent injury. A similar premise has been demonstrated in studies showing that increasing cigarette prices decreases youth smoking [26]. In fact, multiple studies discussed by Jonathan Shepherd in his article on public health interventions aimed at decreasing alcohol-related violence [27] showed an inverse relationship between acts of violence (specifically child and intimate partner abuse) and alcohol prices [28,29]. Hindering access to alcohol, either by increasing taxes or by restricting retail sales, could provide an effective technique for preventing alcohol-related traumatic events.
Interventions
Hospital admission after a traumatic event while intoxicated offers an excellent opportunity for therapeutic interventions aimed at rehabilitation. In 2005, the American College of Surgeons' Committee on Trauma (ACSCOT) issued the SBI (screening and brief intervention) mandate, which required all Level I trauma centers to systematically screen for problem drinkers and provide brief interventions for those who screen positive [6]. Work by Gentilello and colleagues suggested that an SBI performed in an inpatient setting could potentially result in a long-term decrease in alcohol intake by the injured patient [30].
Unfortunately, not all studies have shown that interventions by health care providers are effective. One study by Roudsari and colleagues did not detect any decrease in repeat injuries, either alcohol-related or otherwise, six and twelve months after the administration of a brief alcohol intervention to trauma patients, as compared to patients who did not receive the intervention [31]. However, one innovative study involved active Alcoholics Anonymous members visiting patients after alcohol-related trauma that required admission to the hospital. This 30- to 60-minute visit resulted in a statistically significant increase in abstinence from alcohol up to six months after discharge, as well as in the initiation of treatment or self-help. This approach is especially enticing since it involves individuals outside of the treatment team, who could be seen as more approachable and empathetic to the trauma victim. Further, this type of community outreach fulfills the twelfth step of the AA program for alcoholics and does not add to health care costs or time burdens on the care team [32].
One barrier to proper treatment is identifying which patients are at the greatest risk of repeat injury. A study of all Level I trauma centers in the U.S. reported only a 25% screening rate of patients who were deemed to be problem drinkers, despite the American College of Surgeons (ACS) alcohol screening and brief intervention mandate [33]. A study at Los Angeles County Hospital examined a single-item binge-drinking screen that was 76% sensitive in identifying patients who met criteria for alcohol abuse. Risk factors for alcohol abuse were male gender and substance abuse at the time of injury [34]. This simple screen could more efficiently identify those who would benefit from assessments or interventions, in the hope of decreasing trauma recidivism.
Given the conflicting evidence regarding the effectiveness of in-hospital interventions aimed at encouraging abstinence in trauma patients, clinicians may be reluctant to invest time and thus cost in performing these interventions. However, a cost-benefit analysis performed by Gentilello and colleagues confirmed that a brief intervention results in a net cost savings of 89 USD per patient screened and 330 USD for each patient offered an intervention, which is estimated as 1.82 billion USD annually [35].
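As a back-of-the-envelope check on the scale of these figures (the implied screening volume below is our inference, not a number from the cited analysis):

    # Reported: net savings of 89 USD per patient screened and an
    # estimated 1.82 billion USD in annual savings nationwide.
    savings_per_screened = 89.0
    annual_savings = 1.82e9
    implied_screened_per_year = annual_savings / savings_per_screened
    print(f"{implied_screened_per_year:,.0f} patients/year")  # roughly 20 million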
Conclusions
Alcohol use not only doubles the risk of being involved in a traumatic event, both penetrating and blunt, but it also can complicate the initial evaluation and result in higher health care costs per traumatic event. The interdependence between subsequent development of PTSD and either incident or prevalent alcohol abuse further increases the complexity and cost of caring for this population after the event. Although the success of in-hospital interventions has been mixed, given the relative low cost associated with brief interventions by either clinicians or peers, the benefits likely outweigh the costs. A number of tools have been developed that have been helpful at identifying problem drinking and targeting which patients may benefit from such an intervention. However, perhaps the greatest impact in reducing alcohol-related trauma is made via preventative efforts aimed at children and adolescents, and especially via legislation regarding speeding and blood alcohol level limits, aimed at both adult and underage drivers. | 2014-10-01T00:00:00.000Z | 2009-12-01T00:00:00.000 | {
"year": 2009,
"sha1": "e8efa050f9f444f1ae6b3c523133654ad0be05be",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1660-4601/6/12/3097/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "b40424f4301e1bb16eb6b083f294a194ab4d6985",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
266732882 | pes2o/s2orc | v3-fos-license | A 29-Year-Old Patient With Patau Syndrome: A Case Report on Medical Management
Patau syndrome (trisomy 13) is a chromosomal abnormality with multiple malformations due to an additional copy of chromosome 13. This genetic condition has a systemic impact on the development of the human body, which can result in, but is not limited to, microphthalmia, microcephaly, low-set ears, cleft palate, cardiac abnormalities, and abdominal wall defects. It is associated with severe physical and intellectual disabilities and a limited lifespan. Here, we present a 29-year-old female with a high suspicion of the mosaic form of Patau syndrome. She decided to opt for an elective robotic-assisted vaginal hysterectomy (RAVH) due to worsening menorrhagia and recurrent miscarriages. In addition, the importance of medical interventions from surgery to anesthesia is discussed, with their role in improving the quality of life of the patient.
Introduction
Trisomy 13 was first described as a chromosomal aneuploidy by Dr. Patau in 1960 [1]. It is characterized by the presence of three copies of chromosome 13, producing an amalgamation of symptoms such as microphthalmia, microcephaly, low-set ears, cleft lip, cleft palate, holoprosencephaly, polydactyly, cutis aplasia, congenital heart disease, polycystic kidney disease, and omphalocele [1][2][3][4]. The most common cause of Patau syndrome is the nondisjunction of chromosome 13 during meiosis, leading to midline defects that are often incompatible with life [2]. Patau syndrome can also be due to an unbalanced Robertsonian translocation t(13;14) [2]. The least common cause of Patau syndrome is mosaicism, where some cells in the body have three copies of chromosome 13, while others do not [4]. Only 5% of cases have the mosaic form of trisomy 13, which tends to have a better prognosis with a limited impact on intellectual disabilities [4].
Trisomy 13 occurs in one in 10,000-20,000 live births, with the majority of affected fetuses dying in utero [1]. Median survival for individuals with Patau syndrome who survive childbirth is seven to ten days, and 90% die before the age of one [1]. According to a recent study in Japan, intensive management of these patients, such as resuscitation and surgical intervention, can extend their life expectancy to 733 days [3].
Case Presentation
Here, we present a 29-year-old female with a medical history significant for the mosaic form of Patau syndrome, which was diagnosed a few months after birth. She explained that, according to the genetics clinic, she is the longest-surviving trisomy 13 patient in their records. The patient presented to the hospital for a scheduled hysterectomy due to worsening menorrhagia and recurrent miscarriages. She had one spontaneous abortion at 19 weeks and the others between six and 11 weeks. She has had three dilation and curettage (D&C) procedures and is at significant risk of genetically abnormal pregnancies and of pregnancy complications. She has no desire for future fertility and has failed conservative therapy with Nexplanon, oral contraceptive pills, and intrauterine devices. During the pelvic examination, the uterus was anteflexed, no adnexal masses were palpable, and minimal uterine descensus was present. During the operation, there was normal upper abdominal anatomy but a small uterus (Figure 1). She also presented with elongated ovaries bilaterally, but no specific gross lesions were found upon inspection. A robotic-assisted vaginal hysterectomy (RAVH) was conducted, and the ovaries were left in place since there was no abnormality. The cervix displayed mild to moderate dysplasia and cervical intraepithelial neoplasia (CIN) stages 1-2, which was widely excised. The patient tolerated the procedure well and was taken to the recovery room in stable condition. With the patient's history of Patau syndrome, special consideration was given to anesthesiology and securing the airway. A GlideScope was utilized to minimize damage to surrounding anatomy and for direct visualization of the airway. A size 7.0 endotracheal tube with stylet was introduced into the trachea with appropriate placement on the first attempt. The appropriate position was confirmed by direct visualization of vocal cord passage, fogging of the tube, and visualized symmetric chest rise. The patient tolerated the procedure well, and the rest of the surgery proceeded under normal conditions. The patient was transferred to the post-anesthesia care unit (PACU) and recovered well without respiratory complications.
Discussion
Individuals with trisomy 13 are born with congenital anomalies that drastically decrease their survival rate. The correctional surgeries they undergo can be physically and mentally taxing; therefore, the advantages and disadvantages must be thoroughly assessed to optimize their quality of life. In this ethical dilemma, it is critical to acknowledge that a diagnosis of Patau syndrome alone is not enough to make the patient ineligible for correctional procedures [5]. In this case, the patient was constantly dealing with worsening menorrhagia and recurrent miscarriages, which were poorly controlled with medical management, leaving hysterectomy as the only definitive treatment. At the age of five, she had a palatoplasty followed by extensive speech therapy, which improved her ability to communicate with others [5]. At the age of 13, spinal decompression surgery was conducted to remove the T1 vertebra to improve the patient's intense neuropathy [6]. In all these cases, various conservative medical interventions were discussed with the patient, but to improve her day-to-day life, aggressive care was deemed necessary.
With regard to anesthesiology, a GlideScope should be utilized in all intubation procedures, as patients with Patau syndrome have an increased risk of difficult airways, secondary to possible oral-facial-maxillary surgeries and an increased incidence of scoliosis [7]. The patient's history of severe scoliosis, severe cervical stenosis, T1 laminectomy, and an increased risk of a potentially anterior airway warranted the use of a GlideScope for intubation. Additionally, the patient had a history of palatoplasty, which increases the risk of adverse airway events [8]. Therefore, utilizing the GlideScope was essential to protecting the patient's airway from damage while also minimizing movement of her cervical and thoracic vertebrae. Patients with Patau syndrome may have a smaller or more anterior airway, and therefore the smallest adult intubation tube should be utilized, if possible [7]. Providers may consider the utilization of a pediatric tube if deemed medically appropriate or necessary [7]. Special care should be taken during the removal of the endotracheal tube to minimize damage to any surrounding anatomy, especially in patients with any history of oral-facial surgery. After the procedure, the patient should be closely monitored post-anesthesia for any respiratory complications, and quick intervention should be taken if complications are encountered [7].
Conclusions
Trisomy 13 is a chromosomal aneuploidy that has a low survival rate. The mosaic form of the disease is thought to have a better prognosis, which can be further improved with aggressive medical interventions in all dimensions from anesthesia to surgery. It is critical to balance improving the patient's day-to-day life | 2024-01-03T16:06:32.198Z | 2024-01-01T00:00:00.000 | {
"year": 2024,
"sha1": "745f71296e4e38bd106a3348c889b95ca01a7e4f",
"oa_license": "CCBY",
"oa_url": "https://assets.cureus.com/uploads/case_report/pdf/213117/20240101-11517-v5ibe5.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "6130d662272240d34bf8e7231891658849d3c941",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
231789683 | pes2o/s2orc | v3-fos-license | Anti-Inflammatory Effects of Rosmarinic Acid-Loaded Nanovesicles in Acute Colitis through Modulation of NLRP3 Inflammasome
Ulcerative colitis (UC), one of the two main types of inflammatory bowel disease, has no effective treatment. Rosmarinic acid (RA) is a polyphenol that, when administered orally, is metabolised in the small intestine, compromising its beneficial effects. We used chitosan/nutriose-coated niosomes loaded with RA to protect RA from gastric degradation and target the colon, and evaluated their effect on acute colitis induced by 4% dextran sodium sulphate (DSS) for seven days in mice. RA-loaded nanovesicles (5, 10 and 20 mg/kg) or free RA (20 mg/kg) were orally administered from three days prior to colitis induction and during days 1, 3, 5 and 7 of DSS administration. RA-loaded nanovesicles reduced body weight loss and the disease activity index, increased mucus production, and decreased myeloperoxidase activity and TNF-α production. Moreover, RA-loaded nanovesicles downregulated protein expression of inflammasome components such as NLR family pyrin domain-containing 3 (NLRP3), the adaptor protein ASC and caspase-1, with a consequent reduction of IL-1β levels. Furthermore, nuclear factor erythroid 2-related factor 2 (Nrf2) and heme oxygenase-1 (HO-1) protein expression increased after the RA-loaded nanovesicle treatment. However, these mechanistic changes were not detected with the free RA treatment. Our findings suggest that the use of chitosan/nutriose-coated niosomes to increase RA local bioavailability could be a promising nutraceutical strategy for oral colon-targeted UC therapy.
Introduction
Inflammatory bowel disease (IBD), comprising UC and Crohn's disease (CD), is characterised by chronic and relapsing intestinal inflammation. Accumulating evidence indicates that these inflammatory disorders are multifactorial, triggered by interactions between genetic, environmental and immune factors [1]. Although the etiology of IBD is still unclear, an aberrant innate immune response against internal and external threatening factors has been suggested to have a crucial role in IBD pathogenesis [1]. The inflammasome is a cytosolic multiprotein complex involved in the innate immune response, which is stimulated by many types of tissue damage. NLRP3 is one of the most representative inflammasomes and has been associated with UC pathogenesis [2,3]. NLRP3 stimulation leads to proteolytic activation of caspase-1, which is crucial for the cleavage of pro-IL-1β into its active form. Mature IL-1β is a key inflammatory mediator that participates in the inflammation occurring in IBD patients, and its overproduction is related to augmented disease severity [4].
In the inflamed colon, activated macrophages and neutrophils lead to overproduction of reactive oxygen species (ROS). Nrf2 is a key factor in the protection against oxidative stress and inflammation. Upon oxidative stress, Nrf2 induces nuclear transcription of endogenous antioxidant enzymes, such as HO-1 [5]. Activation of the Nrf2/HO-1 pathway has been reported to attenuate the inflammatory response in experimental colitis [6], while suppressing Nrf2 increased inflammation and oxidative stress [7]. Thus, pharmacological induction of this pathway may be a useful strategy for IBD treatment.
It has been reported that dextran sodium sulfate (DSS) administration to mice in drinking water induces a very reproducible acute inflammation limited to the colon. This model is widely used in mice for inducing UC since it morphologically and symptomatically resembles epithelial damage found in human patients suffering from UC [8]. In this line, DSS is a sulphated polysaccharide that acts as a direct toxin leading to the disruption of the colonic epithelium. This results in increased permeability and the entry of luminal pathogens and associated antigens into the mucosa, leading to immune cell activation with the consequent production of proinflammatory cytokines, which, in turn, aggravates epithelial barrier dysfunction [9].
Current IBD treatments include amino-salicylates, corticosteroids, immunosuppressants and, recently, biological agents, which have been effective in minimizing inflammation and inducing prolonged remission. However, these therapies fail to effectively control symptoms and have severe adverse effects [10]. For this reason, there is an urgent need to seek new therapeutic strategies for clinical prevention and treatment of IBD. In this line, dietary supplements such as omega 3 fatty acids, vitamin D and polyphenols, including curcumin and resveratrol, have been shown to have a therapeutic effect on IBD [11]. Rosmarinic acid (RA) is a natural polyphenol found in the Labiatae family of herbs, such as Rosmarinus officinalis, Salvia miltiorrhiza, and Prunella vulgaris, as well as in Zostera marina seagrass beds [12]. RA has been shown to have many biological properties, including antioxidant, anti-inflammatory, anticancer, anti-infectious, antinociceptive and neuroprotective activities [13][14][15]. Nevertheless, when administered orally, its poor water solubility and high gastric degradation may compromise its beneficial effects [16]. In addition, previous papers have reported a high metabolization of RA in the upper gastrointestinal tract. In this regard, in vitro digestion studies have described that RA is hydrolysed by different esterases and metabolised by the gut microbiota before its absorption, which could affect the functionality of this polyphenol, as reviewed in [17]. The oral administration of appropriately formulated RA would improve its bioavailability and allow an adequate RA concentration to reach the inflamed colon.
The use of nanotechnology in medicine has gained extensive attention in the delivery of drugs in inflamed intestinal mucosa. In this regard, niosomes are vesicles mostly formed by cholesterol and nonionic surfactants. Their structure is similar to liposomes and, like them, can contain both hydrophilic and lipophilic molecules. However, niosomal carriers could be used as an interesting alternative to liposomes due to their lower production cost and higher stability [18]. Colon-specific delivery strategies based on nanosystems are being used to increase drug local bioavailability and reduce dosing frequency, minimizing adverse effects. Chitosan is a natural polymer extensively used for coating nanovesicles with interesting properties including biocompatibility, biodegradability and low toxicity. Moreover, chitosan is highly mucoadhesive, which consequently increases drug absorption and promotes persistent drug release [19]. This polysaccharide administered orally can resist gastrointestinal degradation and passes unaltered to the colon where it can be partially degraded by microbiota. A previous paper evaluating the effects of quercetin in 2,4,6-trinitrobenzene sulfonic acid (TNBS)-induced colitis reported that phospholipid vesicles coated with a combination of chitosan and nutriose, a water-soluble dextrin, protected quercetin from degradation in the upper gastrointestinal tract, thus allowing its colonic release [20].
Although RA has been previously encapsulated in a niosomal gel [21], ethosomes and liposomes [22], nanoemulsion-based hydrogels, and polyethylene glycol (PEG)ylated RA-derived nanoparticles [23], there are no reports on RA-loaded niosomes and their use in IBD. Taking into account these findings, in the present study a DSS-induced acute colitis model in mice was established to evaluate the effects of chitosan and nutriose-coated niosomes loaded with RA. Since the potential mechanisms of RA in treatment of UC are not fully understood, we also investigated the effects of this polyphenol on modulation of inflammasome and the Nrf2/HO-1 signaling pathway.
Preparation of Chitosan and Nutriose-Coated Niosomes Loaded with Rosmarinic Acid
Niosomes were purchased from Nanovex Biotechnologies SL (Asturias, Spain), and all reagents, including RA, were purchased from Sigma-Aldrich (St. Louis, MO, USA). A thin-film hydration method was used to prepare niosomes, either loaded with RA or unloaded, and either coated with chitosan and nutriose or left uncoated, as previously described [20]. Briefly, Pronanosome Nio-N (50 mg/mL), rosmarinic acid (5 mg/mL) and vitamin E (2 mM) were dissolved in a methanol:chloroform mixture. The organic solvents were rotary-evaporated to form a dry film, which was hydrated with phosphate buffered saline (PBS) (10 mM) and vitamin C (2 mM) at 60 °C for 20 min and vortexed for 4 min. Subsequently, the obtained nanocarriers were sonicated for 30 min at 60 °C to create small unilamellar nanovesicles. To prepare chitosan-coated niosomes loaded with RA, chitosan (30 mg/mL) was dissolved in 0.1% acetic acid solution (pH 3), and this solution was added dropwise to RA-loaded niosomes under stirring at 25 °C. Later, the resulting dispersion was added to a 5% (w/v) nutriose solution. Empty chitosan and nutriose-coated nanovesicles were prepared following the same procedure but with no RA.
Vesicle Characterisation
Particle size and polydispersity index (PDI) were evaluated by a dynamic light scattering method using a Zetasizer Nano ZS90 (Malvern Instruments, Worcestershire, UK). Zeta potential (ZP) was determined with the Zetasizer Nano ZS90 by means of M3-PALS (Mixed Mode Measurement-Phase Analysis Light Scattering), based on electrophoretic mobility. For encapsulation efficiency (EE) determination, chitosan-coated or uncoated niosomes loaded with RA were dialyzed using a dialysis bag with a cutoff of 10 K (MWCO, Thermo Fisher Scientific, Waltham, MA, USA) for 24 h against PBS (pH 7.4) to completely remove unloaded RA. RA content before and after separation of the free drug from the encapsulated drug was quantified with reverse-phase high-performance liquid chromatography (RP-HPLC) after disruption of the niosomes with methanol. EE percentage was calculated as follows: % EE = (amount of drug retained in the vesicles/total amount of drug in the sample) × 100.
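The stated EE formula reduces to a one-line calculation; a minimal sketch with hypothetical RP-HPLC quantities:

    def encapsulation_efficiency(drug_in_vesicles_mg, total_drug_mg):
        """%EE = (drug retained in the vesicles / total drug in the sample) * 100."""
        return 100.0 * drug_in_vesicles_mg / total_drug_mg

    # Hypothetical example: 3.7 mg of a 5.0 mg total RA load retained in vesicles.
    print(encapsulation_efficiency(3.7, 5.0))  # -> 74.0 (%)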
In Vitro Release Studies
The in vitro release profile of RA from the vesicles was measured in a buffered solution at pH 7.0 to simulate large intestine conditions, using the dialysis bag method [19]. In brief, 100 µL of chitosan-coated or uncoated niosomes loaded with RA, or of a control solution (non-encapsulated RA), were placed in a dialysis bag with a cutoff of 10 K (Slide-A-Lyzer MINI Dialysis units, Thermo Fisher Scientific, Waltham, MA, USA) and immersed in a dissolution medium containing 15 mL PBS (10 mM) with 300 mM NaCl (pH 7.0) at 37 °C under magnetic stirring at 100 rpm. At scheduled time intervals (1, 2, 3, 4, 5, 6, 7 and 8 h), an aliquot of the release medium was collected and replaced with an equal volume of fresh medium. RA content in the samples was analyzed by RP-HPLC. All experiments were performed in triplicate.
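Because sampled medium is replaced with fresh medium, the raw concentrations must be corrected before computing cumulative release. The sketch below uses the standard sample-and-replace correction under our own assumptions, since the paper does not state the sampled volume or the exact formula used.

    def cumulative_release_percent(concs_mg_per_ml, v_sample_ml, v_total_ml, dose_mg):
        """Corrects each measured concentration for previously withdrawn drug:
        C_corr[n] = C[n] + (v_sample / v_total) * sum(C[0..n-1])."""
        percents, withdrawn = [], 0.0
        for c in concs_mg_per_ml:
            c_corr = c + (v_sample_ml / v_total_ml) * withdrawn
            percents.append(100.0 * c_corr * v_total_ml / dose_mg)
            withdrawn += c
        return percents

    # Hypothetical hourly concentrations (mg/mL) in 15 mL medium, 0.5 mg dose:
    print(cumulative_release_percent([0.010, 0.015, 0.020], 0.5, 15.0, 0.5))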
Experimental Animals
A total of 70 seven-week-old male C57BL/6 mice weighing 18-20 g were supplied by Janvier Labs (Le Genest St Isle, France) and acclimated for seven days in the Animal Laboratory Centre (Animal Service, School of Pharmacy, University of Seville). They were allowed free access to a laboratory diet (Panlab, Barcelona, Spain) and water ad libitum.
All experiments were carried out following the guidelines of the European Union in relation to animal experimentation (Directive of the European Council 2010/63/EU). The experimental protocols were approved by the Animal Ethics Committee of the University of Seville (Protocol 06/04/2018/042).
Induction of Acute Colitis and Treatments
Experimental acute colitis was induced by giving mice drinking water ad libitum containing 4% (w/v) DSS (TdB Consultancy AB, Uppsala, Sweden) for seven days [24]. After an acclimation period, mice were randomly divided into seven experimental groups (n = 10/group). Except for the sham group (which consumed water), the remaining experimental groups received 4% DSS solution ad libitum. According to the experimental protocol, the following solutions were administered by oral gavage: (i) sham and (ii) DSS groups, vehicle solution (PBS 10 mM) with empty nanovesicles; (iii) 5-aminosalicylic acid group (5-ASA), 5-ASA at 75 mg/kg/day, used as a positive reference compound; (iv) RA group, free RA at 20 mg/kg/day; (v) RA-N5 group, RA-loaded nanovesicles at 5 mg/kg/day; (vi) RA-N10 group, RA-loaded nanovesicles at 10 mg/kg/day; and (vii) RA-N20 group, RA-loaded nanovesicles at 20 mg/kg/day [25]. Mice received oral pretreatment with all solutions from three days prior to colitis induction and during days 1, 3, 5 and 7 of DSS administration.
Animals were carefully monitored to verify that the consumption of water containing DSS was approximately the same in all groups. Animal body weights were measured daily throughout all the experiments. Mice were sacrificed on the 8th day, and the entire colon was removed, cleaned with physiological saline, weighed and measured. Subsequently, small sections from the middle to distal colon were cut and stored at −80 °C for measurement of all biochemical parameters.
Evaluation of Severity of Colitis
To determine the disease activity index (DAI), the clinical signs of colitis were evaluated during the experiment as previously reported [26]. The presence of diarrhea, rectal bleeding, and weight loss was independently scored by a blinded researcher on a 0 to 3 scale, and the average of the three determinations constituted the DAI.
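A minimal sketch of this composite score follows; the per-symptom grading criteria themselves are in the cited reference [26] and are not reproduced here.

    def disease_activity_index(weight_loss, diarrhea, rectal_bleeding):
        """Average of three 0-3 subscores, as described above for the DAI."""
        subscores = (weight_loss, diarrhea, rectal_bleeding)
        if any(not 0 <= s <= 3 for s in subscores):
            raise ValueError("each subscore is rated on a 0-3 scale")
        return sum(subscores) / 3.0

    print(disease_activity_index(2, 3, 1))  # -> 2.0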
Histopathological Evaluation
For histological examination, sections of approximately 1 cm from the middle to distal colon were fixed in 4% paraformaldehyde in PBS (pH 7.4), dehydrated and embedded in paraffin. Next, samples were sectioned at 5 µm by using a rotary microtome (Leica Microsystems, Wetzlar, Germany) and mounted onto glass slides. Colon sections were dewaxed, hydrated, and stained with Haematoxylin and Eosin or Alcian blue for colonic injury examination or mucus content, respectively [27]. All samples were evaluated in an Olympus BH-2 microscope (GMI, Ramsey, MN, USA). The tissues were analysed by a blinded observer to establish a composite histological score as previously described [28]. Criteria include loss of mucosal architecture (0, absent; 1, mild; 2, moderate; 3, severe), cellular infiltration (0, none; 1, infiltrate around the crypt basis; 2, infiltrate reaching the muscularis mucosae; 3, infiltrate reaching the submucosa) and goblet cell depletion (0, absent; 1, present). The semiquantitative histopathological score of each variable was added to give a total microscopic damage score.
Myeloperoxidase Activity Assay
For myeloperoxidase (MPO) activity determination, colon samples were homogenized in 10 volumes of 50 mM PBS at pH 7.4, following the method of Grisham et al. (1990) [29], and then centrifuged at 20,000 g for 20 min at 4 °C. Next, the pellet was homogenized in 10 volumes of 50 mM PBS at pH 6.0 containing 0.5% hexadecyl trimethylammonium bromide (HETAB) and 10 mM EDTA. Subsequently, samples were exposed to three cycles of freezing/thawing followed by sonication. For the colorimetric assay, 50 µL of homogenate were incubated at 37 °C for three min with a mixture containing 0.067% O-dianisidine dihydrochloride, 0.5% HETAB, and 0.3 mM hydrogen peroxide in a 96-well microplate. The absorbance at 655 nm was read with a microplate reader (Labsystem Multiskan EX, Helsinki, Finland). One unit of MPO activity was defined as the amount of enzyme generating a change in absorbance of 1.0 per min at 37 °C in the final reaction volume. Data were expressed as U/mg tissue.
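Under the unit definition above, converting a kinetic absorbance reading into U/mg tissue reduces to the sketch below; this is a simplified reading that omits dilution and path-length corrections, which would be assay-specific assumptions.

    def mpo_activity_u_per_mg(abs_start, abs_end, minutes, tissue_mg):
        """One unit = a change in absorbance at 655 nm of 1.0 per minute;
        the rate is normalised to milligrams of tissue."""
        rate_per_min = (abs_end - abs_start) / minutes
        return rate_per_min / tissue_mg

    # Hypothetical reading: dA655 of 0.24 over 3 min from 20 mg of tissue.
    print(mpo_activity_u_per_mg(0.05, 0.29, 3.0, 20.0))  # -> 0.004 U/mg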
Determination of Cytokine Levels
Colon tissue samples for cytokine determination (TNF-α and IL-1β) were homogenized in ice-cold lysis buffer (1:5 w/v) containing PBS (pH 7.2), 1% bovine serum albumin (BSA) and protease inhibitors. Next, samples were centrifuged at 12,000 g for 10 min at 4 °C to obtain the supernatants, which were stored at −80 °C until determination. Cytokine concentrations were quantified using specific ELISA kits (Peprotech, London, UK), following the manufacturer's protocol. Cytokine levels were expressed as picograms per milligram of tissue.
Extraction of Cytoplasmic Proteins and Western Blot Analysis
Colon samples were homogenized in lysis buffer as previously described [30], then centrifuged (12,000 g for 15 min at 4 °C), and the supernatants were stored at −80 °C. Protein concentration was quantified by Bradford's method [31]. Next, equal amounts of protein (50 µg) were separated by sodium dodecyl sulphate-polyacrylamide gel electrophoresis and subsequently transferred onto a nitrocellulose membrane at 120 mA for 90 min. The membranes were then blocked with 5% w/v BSA in PBS-Tween 20 for 1 h. After blocking, the membranes were incubated with the following primary antibodies at 4 °C overnight: rabbit anti-ASC (1:1000) and rabbit anti-NLRP3 (1:1000) (Cell Signaling, Danvers, MA, USA), rabbit anti-Nrf-2 (1:500; Santa Cruz Biotechnology, Dallas, TX, USA), rabbit anti-HO-1 (1:500; Enzo Life Sciences, New York, NY, USA), and rabbit anti-Caspase-1 (1:1000; Abcam, Cambridge, UK). All the membranes were also incubated with an anti-β-actin antibody (Sigma-Aldrich, St. Louis, MO, USA) to verify equal loading. Then, the blots were washed three times for 15 min and incubated with the secondary antibody, horseradish peroxidase-linked anti-rabbit (Pierce Chemical, Rockford, IL, USA), for 60 min at room temperature. After the membranes were washed again three times, the bands were visualized using an enhanced chemiluminescence detection kit (SuperSignal West Pico Chemiluminescent Substrate, Pierce, IL, USA). Densitometric analysis was performed after normalisation to the β-actin loading control. The signals were quantified with Scientific Imaging Systems (Biophotonics ImageJ Analysis Software, Bethesda, MD, USA) and plotted as a percentage relative to the DSS group.
Statistical Analysis
All data in the figures and text are presented as arithmetic means with their standard errors. Statistical analysis was carried out using SPSS statistical software (IBM SPSS Statistics version 26.0, SPSS Inc., Chicago, IL, USA). The Shapiro-Wilk test was used to verify the normality of the data. Student's t-test was used to compare the two control groups (sham vs. DSS); the Mann-Whitney U test was chosen for nonparametric values. Statistical differences between multiple groups were compared by one-way ANOVA followed by Bonferroni's post hoc test for parametric data. Nonparametric data were analysed by the Kruskal-Wallis test for multiple comparisons. A p-value less than 0.05 was considered statistically significant. The statistical test used for individual analyses is provided in the figure legends. For the histological study, the results presented are representative of at least five independent experiments performed on different days.
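The decision rule described above (normality check, then a parametric or nonparametric test) can be sketched with SciPy as follows; this mirrors the described workflow rather than the SPSS implementation, and Bonferroni post hoc comparisons would follow a significant ANOVA.

    from scipy import stats

    def compare_two_groups(a, b, alpha=0.05):
        """Shapiro-Wilk on each group, then Student's t or Mann-Whitney U."""
        normal = stats.shapiro(a)[1] > alpha and stats.shapiro(b)[1] > alpha
        return ("t-test", stats.ttest_ind(a, b)) if normal \
            else ("Mann-Whitney U", stats.mannwhitneyu(a, b))

    def compare_many_groups(groups, alpha=0.05):
        """One-way ANOVA for parametric data, Kruskal-Wallis otherwise."""
        normal = all(stats.shapiro(g)[1] > alpha for g in groups)
        return ("one-way ANOVA", stats.f_oneway(*groups)) if normal \
            else ("Kruskal-Wallis", stats.kruskal(*groups))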
Vesicle Characterisation
RA-loaded niosomes were prepared by the thin-film hydration technique, as described in Materials and Methods. Next, vesicles were coated with chitosan and nutriose for the delivery of RA to the colon. The particle size (nm), ZP (mV) and encapsulation efficiency (%) of the formulations are presented in Table 1. The average diameters were 260.7, 429.7 and 480.5 nm for uncoated niosomes loaded with RA, empty chitosan and nutriose-coated niosomes, and chitosan and nutriose-coated niosomes loaded with RA, respectively. The surface charge on both uncoated and chitosan-coated niosomes was evaluated by measuring their ZP. Uncoated niosomes exhibited negative ZP values, which were inverted to positive values after coating with chitosan and nutriose, indirectly confirming the presence of this polymeric complex on the niosome surface.
In Vitro Release Studies
The in vitro release profiles were analysed at pH 7.0, i.e., the large intestine pH, comparing the RA profile from a standard drug solution with those from chitosan-coated or uncoated niosomes loaded with RA (Figure 1). As regards the control, a relatively rapid release of RA was found, with 74% released within the first hour and a nearly complete release (approximately 90%) within 2 h. The amount of RA released from uncoated niosomes was similar to that obtained from chitosan-coated vesicles at each time point, exhibiting around 70% drug released after 8 h. The study was carried out using the dialysis bag method at 37 °C in a dissolution medium containing PBS (10 mM) with 300 mM NaCl (pH 7.0). All data represent mean ± SD (n = 3).
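Cumulative release percentages such as those above can be derived from receptor-medium measurements at each sampling time; all amounts in this sketch are hypothetical (the values are chosen to reproduce the reported ~74% at 1 h and ~90% at 2 h for the control):

```python
# Hypothetical cumulative amounts of RA recovered in the receptor medium (ug).
total_ra_ug = 500.0
released_ug = {0.5: 120.0, 1: 370.0, 2: 450.0, 8: 460.0}  # time (h) -> amount

for t, amount in released_ug.items():
    print(f"{t} h: {100.0 * amount / total_ra_ug:.0f}% released")
```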
Rosmarinic Acid-Loaded Nanovesicles Protected against DSS-Induced Acute Colitis in Mice
It is known that DSS causes damage to the colonic epithelium, mimicking several aspects of UC [24]. As expected, the administration of drinking water ad libitum containing 4% (w/v) DSS for seven days resulted in acute UC, characterised by a marked weight loss in relation to the sham group (Figure 2a). To evaluate the therapeutic effects of RA on acute colitis, RA-loaded nanovesicles were administered at doses of 5, 10 and 20 mg/kg by the oral route. Mice received pretreatment with nanovesicles from three days prior to colitis induction and during days 1, 3, 5 and 7 of DSS administration. The positive control group was given 5-ASA at a dose of 75 mg/kg. Treatment with RA-loaded nanovesicles slightly increased body weight on day seven, the effect being significant only at the dose of 10 mg/kg. Next, to assess the external signs of colitis, the DAI score was evaluated (Figure 2b). This index showed no evidence of symptoms in sham animals. As expected, mice receiving DSS showed a significant increase in DAI score from the 5th day, reaching a peak on the 7th day. Free RA treatment failed to suppress the progression of colitis, resulting in a DAI index similar to that of the DSS group. However, administration of either 5-ASA or RA-loaded niosomes at all the doses assayed produced a marked decrease in DAI score from the 6th day compared with DSS mice (p < 0.001). As shown in Figure 2c, a marked rise in the colonic weight/length ratio, an indicator of colon inflammation, was found in the DSS group when compared with sham mice (p < 0.001). Free RA administration resulted in no significant changes in this parameter. Nevertheless, treatment with either 5-ASA or RA-loaded niosomes significantly attenuated colon inflammation (p < 0.01 and p < 0.05, respectively), as evidenced by the suppression of the weight/length ratio of the colon. In accordance with these observations, macroscopic examination of the colons showed colon shortening in the DSS group, which was reversed following treatment with RA-loaded nanovesicles (Figure 2d). Altogether, these findings demonstrated that RA-loaded niosome administration substantially alleviated DSS-induced colitis.
Rosmarinic Acid-Loaded Nanovesicles Administration Alleviated Microscopic Colon Damage and Increased Mucus Production
The histopathological study of the colon of sham mice revealed a normal colonic appearance (Figure 3a). Consistent with the macroscopic changes, the DSS group exhibited a higher inflammation score with mucosal damage, a massive inflammatory infiltrate (neutrophils, lymphocytes and histiocytes) mostly in the mucosa and submucosa, and ulceration of the mucous epithelium (Figure 3c,k). Alcian blue staining, which reveals acid mucin-positive goblet cells, displayed substantial mucin depletion near the ulcerative areas of DSS-treated mice (Figure 3d) compared with the sham group (Figure 3b). Free RA administration showed a slight improvement in the microscopic signs of colitis and a partial replacement of mucous secretion (Figure 3g,h). However, treatment with 5-ASA (Figure 3e) or RA-loaded nanovesicles revealed evident findings of mucosal reparation and a decrease of the inflammatory infiltrate in the lamina propria at all the doses used in relation to DSS mice (Figure 3i, RA-loaded nanovesicles at 20 mg/kg), as well as a significant reduction of the microscopic damage score (Figure 3k). Furthermore, Alcian blue-positive goblet cells were evident in the preserved areas of the mucosa after 5-ASA (Figure 3f) or RA-loaded nanovesicle treatment (Figure 3j).
Rosmarinic Acid-Loaded Nanovesicles Treatment Reduced Neutrophil Infiltration and Colonic TNF-α Production
Neutrophil infiltration found in histological analysis of colons from DSS mice correlated with increased colonic MPO activity, a marker for inflammatory cell infiltration (p < 0.001 vs. sham group) (Figure 4a). As expected, 5-ASA treatment markedly reduced MPO activity in relation to the DSS group (p < 0.001). Similarly, this parameter was significantly decreased after treatment with free RA (20 mg/kg) or RA-loaded nanovesicles at all doses used (p < 0.05).
In addition, colonic damage by DSS administration was characterised by a marked increase in TNF-α levels in comparison with sham animals (p < 0.001). Administration of 5-ASA significantly reduced this cytokine production in relation to DSS mice (p < 0.01). Similar findings were detected following administration of free RA (20 mg/kg) (p < 0.01) or RA-loaded nanovesicles at 5, 10 and 20 mg/kg (p < 0.01, p < 0.05 and p < 0.01, respectively) (Figure 4b).
Rosmarinic Acid-Loaded Nanovesicles Administration Reduced Inflammasome Activation
To substantiate the beneficial effects of RA-loaded nanovesicles on acute colitis and to investigate the potential mechanisms of action, we studied the expression levels of different inflammasome-related proteins in colon samples. Our results evidenced that DSS administration led to upregulation of NLRP3, ASC and caspase-1 expression (Figure 5a-d) and, consequently, induced a significant increase in IL-1β levels in relation to sham animals (Figure 5e). Treatment with either 5-ASA or RA-loaded nanovesicles at all the doses used significantly downregulated the expression levels of inflammasome-related proteins. However, free RA administration did not induce significant changes in these proteins. Interestingly, significant differences in ASC protein expression were observed between RA-loaded nanovesicles at the doses of 10 and 20 mg/kg and the free RA group (p < 0.05). As regards IL-1β production, administration of 5-ASA, free RA or RA-loaded nanovesicles resulted in a significant suppression of the levels of this cytokine as compared with DSS animals. Moreover, a marked difference was found for RA-loaded nanovesicles at 20 mg/kg in relation to free RA (p < 0.01).
Figure 5. Densitometric analysis of (b) nucleotide-binding domain, leucine-rich-repeat-containing family, pyrin domain-containing 3 (NLRP3), (c) the inflammasome adaptor protein ASC and (d) caspase-1 was performed following normalisation to the control (β-actin housekeeping gene). (e) IL-1β production was evaluated by ELISA. Results are representative of five experiments performed on different samples. Data are expressed as the mean ± SEM. Mean values significantly different from the sham group: ** p < 0.01, *** p < 0.001 (Mann-Whitney U test). Mean values significantly different from the DSS group (+ p < 0.05, ++ p < 0.01, +++ p < 0.001; Kruskal-Wallis test) or the RA group (# p < 0.05, ## p < 0.01; Kruskal-Wallis test).
Treatment with Rosmarinic Acid-Loaded Nanovesicles Increased Nrf-2 Antioxidant Signaling Pathway
To further explore the protective mechanism of RA-loaded nanovesicles, we investigated their ability to activate the Nrf2 pathway, which stimulates the transcription of antioxidant genes and detoxifying enzymes such as HO-1 to protect against DSS-induced oxidative damage [5]. Our results showed that DSS suppressed Nrf2 and HO-1 expression (Figure 6a-c). Administration of 5-ASA or RA-loaded nanovesicles at all the doses assayed significantly upregulated Nrf2 and HO-1 expression levels, reaching higher Nrf2 values than those in sham animals. Nevertheless, treatment with nonencapsulated RA did not induce significant changes in the expression of these antioxidant proteins. Remarkably, significant differences in Nrf2 and HO-1 levels were found between RA-loaded nanovesicles at 10 and 20 mg/kg and free RA (p < 0.05).
Discussion
IBD, including UC and CD, are chronic and recurrent disorders of the gastrointestinal tract [1]. Since current IBD treatments have limited efficacy and many side effects, many researchers have aimed to find new strategies for controlling symptoms and preventing relapses. The use of nutraceuticals with anti-inflammatory properties is gaining considerable attention for IBD treatment due to their safety. The anti-inflammatory effects of RA have been previously evidenced in different experimental models of inflammatory diseases, including arthritis, colitis and atopic dermatitis [15]. As regards colitis, previous papers assayed higher doses of RA than those used in our study (25-200 mg/kg), and none of them evaluated the effects of RA on the modulation of the NLRP3 inflammasome or the antioxidant signaling pathway Nrf-2/HO-1 [25,32,33]. On the other hand, poor water solubility and low bioavailability have limited the clinical use of this polyphenol [23].
The application of nanosystems based on colon-targeted drug delivery is receiving considerable attention for the local treatment of UC since they can decrease drug loss in the proximal intestine and thus increase drug concentration in the colon [34]. Conventional nanovesicles are scarcely used since they can suffer gastric and enzymatic degradation, reducing the drug's oral bioavailability. To overcome this limitation, chitosan is widely used for coating nanovesicles since this polymer resists gastric degradation, thus improving drug bioavailability. In this line, it has been previously reported that coating phospholipid vesicles with chitosan and nutriose protected quercetin from upper gastrointestinal tract degradation, carrying it to the colon where the drug was released in the inflamed tissue [20]. In the present study, we evaluated the effects of RA-loaded niosomes coated with the combination of chitosan and nutriose on DSS-induced colitis.
In terms of niosome characterisation, as expected, our observations revealed that the chitosan and nutriose coating increased the overall vesicle size. As regards the ZP study, a marked difference between uncoated and coated niosomes was observed. This parameter was negative for uncoated vesicles (approx. −17 mV), and an increase in positive charges on the surface of chitosan-coated niosomes was observed (approx. +50 mV, data not shown), suggesting the adsorption of this positively-charged polysaccharide onto the niosomal surface. These positive charges were only partially neutralized by nutriose (approx. +38 mV), showing the formation of the polymeric complex by electrostatic interaction. In relation to the in vitro drug release studies, uncoated and coated niosomes showed similar release profiles, which were significantly lower than that of the RA solution. Nevertheless, coated niosomes were selected for the present studies in order to protect the nanovesicles from gastric degradation and increase colonic release [19].
Currently, DSS is widely used to induce experimental acute colitis since it mimics the clinical and histological characteristics of human UC [35]. In the present study, DSS induced colonic inflammation and mucosal damage, leading to body weight loss, colon shortening and an increased DAI score. Treatment of mice with RA-loaded niosomes attenuated body weight loss and colon shortening, as well as relieving the clinical symptoms of colitis. Furthermore, this formulation prevented histological damage by decreasing neutrophil infiltration and promoting the repair of epithelial injury. Intestinal mucus, synthesised by goblet cells, forms a gel-like layer that fills the crypts and serves as a barrier to protect the intestinal epithelium from the deleterious effects of luminal stimulants. Disruption of goblet cells can lead to intestinal inflammation [36]. It has been reported that IBD patients have reduced numbers of goblet cells and a reduced mucus layer thickness [37]. Our study exhibited mucin-depleted crypts in colitic mice, as evidenced by the loss of Alcian blue-stained goblet cells. Remarkably, RA-loaded nanovesicles administered orally enhanced mucus accumulation inside the goblet cells, suggesting a protective role of this formulation against colonic epithelial damage.
It has been reported that inflammatory cell infiltration into colon tissue plays a pathogenic role in IBD; therefore, its control is very important for the attenuation of this disease. MPO activity serves as a marker for measuring the neutrophil inflammatory response after colitis induction. Once released, this enzyme catalyses the production of ROS, which are involved in IBD development. In addition, immune cells release pro-inflammatory cytokines such as TNF-α, which is involved in the initiation and maintenance of mucosal inflammation in IBD [27]. Our data evidenced that free RA or RA-loaded nanovesicles effectively inhibited polymorphonuclear infiltration into the colon, as shown by reduced colonic MPO activity and, consequently, decreased colonic TNF-α levels. In accordance with these findings, the anti-inflammatory effects of RA, mediated by a reduction of TNF-α release, have been previously reported in experimental colitis [25,32]. Moreover, a recent paper showed that PEGylated RA-derived nanoparticles (10, 20 and 30 mg/kg) dose-dependently inhibited TNF-α production in DSS-induced colitis [23].
The NLRP3 inflammasome is the most extensively investigated inflammasome complex and is composed of the NLRP3 protein, the ASC adaptor and caspase-1; the latter ultimately mediates the maturation and release of IL-1β [3]. In previous studies, RA was shown to inhibit inflammasome activation in experimental models of inflammation in epidermal keratinocytes [38], neuroinflammatory injury [39], atherosclerosis [40] and premature ovarian failure [41]. Despite these findings, there was no previous evidence that RA may exert its anti-inflammatory effects through suppression of inflammasome activation in UC. In the present study, we report for the first time that orally-administered RA-loaded niosomes markedly reduced inflammasome-related proteins such as NLRP3, ASC and caspase-1 in a DSS-induced colitis model in mice. However, this effect was not detected with the parent compound. In terms of IL-1β production, the colonic levels of this cytokine were decreased by both free RA and RA-loaded niosomes. Since nonencapsulated RA did not significantly modify the expression of inflammasome-associated proteins, the reduced levels of IL-1β after RA treatment could be alternatively explained by the inhibitory effect of RA on NF-kB activation, as previously reported [25,42].
Although the mechanism underlying UC is not clear, oxidative stress has been reported to play an important role in colitis pathogenesis. Inflammatory cells produce high ROS levels, which lead to oxidative damage in the colon. Nrf2 is a transcription factor that modulates the cellular antioxidant response. Under oxidative conditions, this cytoplasmic factor is translocated into the nucleus, where it binds the antioxidant response element (ARE) and induces the transcription of antioxidant genes such as HO-1 [5]. RA has previously been reported to activate the Nrf2/HO-1 signaling pathway in different experimental models, including spinal cord injury [43], high-fat diet-induced intestinal damage [44], acute liver injury [45] and streptozotocin-induced diabetes [46]. In the present study, our data evidenced that DSS decreased the expression of the Nrf2 protein and its target gene HO-1 in comparison with the sham group. The levels of both proteins were restored by treatment with the positive control 5-ASA and with RA-loaded niosomes. However, oral administration of nonencapsulated RA did not induce significant changes in the expression levels of these antioxidant proteins. In agreement with these observations, a recent in vitro study by our group reported that the treatment of UVB-exposed HaCaT cells with RA alone had no significant effects on Nrf2/HO-1 pathway regulation, whereas the combination of the carotenoid fucoxanthin and RA protected against UVB-induced oxidative stress through upregulation of the Nrf2 transcription factor and its main target gene HO-1 [47].
In the present study, it is noteworthy that nonencapsulated RA treatment was only able to significantly reduce MPO activity and TNF-α levels, early markers of acute inflammation. However, free RA did not induce significant changes in the regulation of the NLRP3 inflammasome or the Nrf2/HO-1 signaling pathway relative to the niosome treatments. The loss of biological activity of free RA could be explained by its gastric degradation and metabolism in the upper gastrointestinal tract. In this regard, the use of chitosan and nutriose-coated niosomes loaded with RA may be a useful alternative to nonencapsulated RA because these formulations could increase the local bioavailability of RA and allow a controlled release in the inflamed colon. As a consequence, lower doses of RA could be used, which would reduce production costs for a possible future application in the treatment of patients with colitis.
Conclusions
Our findings indicate that chitosan and nutriose-coated niosomes loaded with RA showed a beneficial effect in acute experimental colitis, with all the doses being effective, although a dose-response relation was not detected. Our study is the first to show that these RA nanovesicles could protect the colonic mucosa against DSS-induced damage by attenuating inflammation and oxidative stress through modulation of the NLRP3 inflammasome and reestablishment of the Nrf2/HO-1 signaling pathway. Therefore, this formulation could be a novel nutraceutical approach to oral colon-targeted UC therapy. However, further studies in a chronic model of colitis are needed in order to deepen the dose-response effect and the pharmacokinetic profile of these RA-loaded niosomes.
Institutional Review Board Statement: The study was conducted according to the guidelines of the Declaration of Helsinki, and approved by the Ethics Committee of the University of Seville (06/04/2018/042).
Data Availability Statement:
The data presented in this study are available from the corresponding author (E.T.) upon reasonable request. | 2021-02-04T06:16:21.876Z | 2021-01-26T00:00:00.000 | {
"year": 2021,
"sha1": "f950b0b7675a935a7167bead56bbd2549da5aeb1",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2218-273X/11/2/162/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "b9281f49258d16b1f906f6dd7b1f1fa00fd65828",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
247782655 | pes2o/s2orc | v3-fos-license | Divergence in the Regulation of the Salt Tolerant Response Between Arabidopsis thaliana and Its Halophytic Relative Eutrema salsugineum by mRNA Alternative Polyadenylation
Salt tolerance is an important mechanism by which plants can adapt to a saline environment. To understand the process of salt tolerance, we performed global analyses of mRNA alternative polyadenylation (APA), an important regulatory mechanism during eukaryotic gene expression, in Arabidopsis thaliana and its halophytic relative Eutrema salsugineum with regard to their responses to salt stress. Analyses showed that while APA occurs commonly in both Arabidopsis and Eutrema, Eutrema possesses fewer APA genes than Arabidopsis (47% vs. 54%). However, the proportion of APA genes was significantly increased in Arabidopsis under salt stress but not in Eutrema. This indicated that Arabidopsis is more sensitive to salt stress and that Eutrema exhibits an innate response to such conditions. Both species utilized distal poly(A) sites under salt stress; however, only eight genes were found to overlap when their 3′ untranslated region (UTR)-lengthened genes were compared, thus revealing their distinct responses to salt stress. In Arabidopsis, genes that use distal poly(A) sites were enriched in response to salt stress. However, in Eutrema, the use of poly(A) sites was less affected and fewer genes were enriched. The transcripts with upregulated poly(A) sites in Arabidopsis showed enriched pathways in plant hormone signal transduction, starch and sucrose metabolism, and fatty acid elongation; in Eutrema, biosynthetic pathways (stilbenoid, diarylheptanoid, and gingerol) and metabolic pathways (arginine and proline) showed enrichment. APA was associated with 42% and 29% of the differentially expressed genes (DE genes) in Arabidopsis and Eutrema experiencing salt stress, respectively. Salt-specific poly(A) sites and salt-inducible APA events were identified in both species; notably, some salt tolerance-related genes and transcription factor genes exhibited differential APA patterns, such as CIPK21 and LEA4-5. Our results suggest that adapted species exhibit a more orderly response at the RNA maturation step under salt stress, while more salt-specific poly(A) sites were activated in Arabidopsis to cope with salinity conditions. Collectively, our findings not only highlight the importance of APA in the regulation of gene expression in response to salt stress, but also provide a new perspective on how salt-sensitive and salt-tolerant species perform differently under stress conditions through transcriptome diversity.
Keywords: alternative polyadenylation, salt tolerance, Arabidopsis thaliana, Eutrema salsugineum, PAT-seq, RNA processing
INTRODUCTION
Salt stress is a major global issue for agricultural production. More than 800 million hectares of cultivated land are affected by high salinity (Munns and Tester, 2008). Rising salt concentrations in soil or water can have a significant detrimental effect on crop yields, and excess salt represents a major threat to the germination, growth, and productivity of plants in saline soil. Understanding how plants respond to salt conditions and the molecular mechanisms of salt tolerance is important for stress biology research and also meaningful for the genetic improvement of salt resistance in crops.
Eutrema salsugineum is closely related to Arabidopsis thaliana, but it can grow in naturally harsh environments. Eutrema is widely used as a model system to investigate how plants cope with high salinity, extreme cold, and water shortage (Khanal et al., 2015; Li et al., 2021a). Although the divergence time between Eutrema and Arabidopsis is approximately 43.2 MYA, these plants share over 80% of their genes and exhibit highly homologous orthologs (Yang et al., 2013). How they respond differently to salt stress has been intriguing, and the underlying mechanisms that control salt acclimation at the transcriptional level are not well understood.
Messenger RNA polyadenylation is a pre-mRNA processing event that affects gene expression. It involves two main steps, cleavage of the 3′ end of pre-mRNAs by polyadenylation factors and the addition of a poly(A) tail, and it bridges other transcriptional and post-transcriptional processes, such as splicing (Deng and Cao, 2017) and transcriptional termination (Antosz et al., 2017). It has been reported that plant genes possess multiple polyadenylation sites and that over 70% of genes in Arabidopsis and rice are alternatively polyadenylated (Berkovits and Mayr, 2015; Fu et al., 2016; Kim et al., 2016). Alternative polyadenylation (APA) can enhance the diversity of the transcriptome; affect mRNA stability, export, and localization; and influence translation (Xing and Li, 2011). Genome-wide APA dynamics in development and stress responses have been reported in several plant species, including A. thaliana (Yu et al., 2019), Oryza sativa (Fu et al., 2016), Medicago truncatula, Sorghum bicolor (Abdel-Ghany et al., 2016), and bamboo (Wang et al., 2017), as well as algae such as Chlamydomonas reinhardtii (Zhao et al., 2014) and diatoms (Fu et al., 2019).
Alternative polyadenylation is tightly associated with many environmental responses in plants, including oxidative stress (Zhang et al., 2008), hypoxia (de Lorenzo et al., 2017), drought, heat (Chakrabarti et al., 2020), and heavy metal stresses (Cao et al., 2019). Several studies on polyadenylation factors, including CPSF30, FIP1, and FY, suggested that polyadenylation factor-mediated APA is important for stress responses (Chakrabarti and Hunt, 2015; Tellez-Robledo et al., 2019; Yu et al., 2019). Previous research has indicated that APA is involved in the expression of genes related to salt tolerance. For example, AtSOT12 exhibits salt-inducible expression, and the manner in which its poly(A) site is used has been shown to change under salt stress, thus identifying novel mechanisms of salt-responsive gene regulation (Chen et al., 2015). It was demonstrated that transcripts of AtARK2 and a zinc ion binding protein generated by APA play roles in salt and oxidative stress responses (Yu et al., 2019). In addition, Sorghum showed APA-mediated transcriptome remodeling in response to salt stress (Chakrabarti et al., 2020).
Here, we performed high-throughput poly(A) tag sequencing (PAT-seq) on a salt-sensitive species, A. thaliana, and a salt-tolerant species, E. salsugineum, treated with 200 mM NaCl. We provide a comprehensive map of the poly(A) profiles of the two species under salt conditions, identify differential gene expression patterns and distinct poly(A) profiles, and reveal a new perspective on the potential role of APA in the plant response to salt stress.
Plant Materials and Salt Stress Treatments
Arabidopsis thaliana (ecotype Col-0; CS60000) and Eutrema salsugineum (ecotype Shandong; formerly known as Thellungiella halophila; thus, gene names retain the Thhalv prefix according to its genome annotation files) were used for root growth phenotyping. Seeds were sterilized with sodium hypochlorite for 3 min and rinsed with distilled water five times. Seeds were then synchronized at 4°C in the dark for 3 days (Arabidopsis) or 7 days (Eutrema). Eutrema seeds were sterilized 4 days ahead of Arabidopsis seeds so that they could be sown at the same time. Seeds were sown on 1/2 Murashige and Skoog (MS) medium (with 2% sucrose) and placed vertically in a growth chamber with 16-h-light/8-h-dark cycles at 21 ± 1°C for seedling growth. Five-day-old seedlings were transferred onto 1/2 MS medium containing 0, 50, 150, 200, or 300 mM NaCl, and the positions of the root tips were marked. Photographs were taken 8 days later and tap root elongation was determined with ImageJ. Three biological replicates were performed for each concentration, and each replicate contained five seedlings.
For the short-term treatment, Arabidopsis and Eutrema seeds were sterilized and synchronized as described above, sown on 1/2 MS medium, and grown vertically for 13 days. The seedlings were then transferred to 1/2 MS medium containing 0 or 200 mM NaCl and treated for 3 h. Next, whole seedlings were immediately frozen in liquid nitrogen and stored at −80°C until RNA extraction. Three biological replicates were performed, and six seedlings were pooled into each replicate.
PAT-seq Library Preparation
Total RNA was extracted with a TaKaRa MiniBEST Plant RNA Extraction Kit, and genomic DNA was removed by DNase I (New England Biolabs). PAT-seq libraries were prepared as previously described, with modifications (Lin et al., 2020). Two micrograms of total RNA were fragmented in 5× first-strand buffer (TaKaRa) at 94°C for 4 min. Poly(A) RNAs were then enriched with oligo(dT)25 beads (New England Biolabs). Reverse transcription was performed with oligo d(T)18 primers by SMARTScribe™ Reverse Transcriptase (TaKaRa) for 2 h at 42°C. Then, a modified 5′ adaptor and SMARTScribe Reverse Transcriptase were added for another 2 h at 42°C. The cDNA generated was then purified with AMPure beads and amplified with Phire II (Thermo Fisher Scientific). The amplification products were separated on a 2% agarose gel, and 300-500 bp fragments were purified with a Zymoclean Gel DNA Recovery Kit. The concentration and quality of the libraries were assessed with a Qubit 2.0 and an Agilent Bioanalyzer 2100, and the libraries were then sequenced on an Illumina HiSeq 2500 platform in 100-bp rapid sequencing mode.
Identification of Poly(A) Sites
Raw reads were filtered by the FASTX-Toolkit with a threshold of q = 10, and low-quality reads were discarded. The remaining reads were mapped to the A. thaliana TAIR10 genome and the E. salsugineum genome (Yang et al., 2013) with Bowtie 2 (Langmead and Salzberg, 2012). Poly(A) site analysis was performed as previously described (Lin et al., 2020). Internal priming events were filtered out by a custom Perl script, and poly(A) tags (PATs) within 24 nucleotides (nt) of each other were clustered into one poly(A) cluster (PAC), which represented a poly(A) site. As 70% of the Eutrema poly(A) sites were located within 200 nt downstream of the annotated genes (Supplementary Figure S1), we extended the 3′ untranslated region (UTR) by 200 nt to recover the PACs that fell within this region. In the case of genes that did not have a 3′ UTR annotation, we extended by an extra 218 nt (the average length of 3′ UTRs in Eutrema). PACs with fewer than 10 PATs were filtered out, and DESeq2 (Love et al., 2014) was used to normalize PAT counts and analyze differential expression among the samples; an adjusted p value < 0.05 was set as the threshold for significance. PAT-seq coverage of genes was visualized with Integrative Genomics Viewer (IGV) v2.8.3 (Robinson et al., 2011).
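The clustering and filtering steps described above can be illustrated with a simplified Python sketch (single strand, single chromosome, toy coordinates; the real pipeline additionally handles internal-priming removal and DESeq2 normalization):

```python
def cluster_pats(positions, max_gap=24, min_support=10):
    """Merge sorted PAT 3'-end coordinates into PACs.

    PATs within max_gap nt of the previous tag join the running cluster;
    PACs with fewer than min_support tags are discarded, as described above.
    """
    pacs, current = [], [positions[0]]
    for pos in positions[1:]:
        if pos - current[-1] <= max_gap:
            current.append(pos)       # extend the running cluster
        else:
            pacs.append(current)      # close it and start a new one
            current = [pos]
    pacs.append(current)
    return [(c[0], c[-1], len(c)) for c in pacs if len(c) >= min_support]

pats = sorted([100] * 6 + [105] * 5 + [118] * 2 + [400] * 12)  # toy data
print(cluster_pats(pats))  # -> [(100, 118, 13), (400, 400, 12)]
```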
3′ UTR Length Analysis
The weighted length of the 3′ UTRs in genes was analyzed as described previously (Lin et al., 2020). Genes with at least two PACs in their 3′ UTRs were used to identify shortening and lengthening events in the 3′ UTR. Pearson's correlation coefficient was used to indicate the strength of 3′ UTR shortening (<0) or 3′ UTR lengthening (>0). Adjusted p values from Chi-square tests were used to indicate the significance of changes in the length of the 3′ UTR.
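A minimal illustration of the weighted 3′ UTR length and the chi-square test on per-PAC counts; the two-PAC gene and all counts below are hypothetical:

```python
from scipy.stats import chi2_contingency

utr_lengths = [120, 310]   # proximal and distal PAC positions (nt past the stop codon)
counts_ck = [80, 20]       # normalized PAT counts, control
counts_st = [30, 70]       # normalized PAT counts, salt stress

def weighted_len(lengths, counts):
    # Each PAC contributes its 3' UTR length weighted by its tag count.
    return sum(l * c for l, c in zip(lengths, counts)) / sum(counts)

print(weighted_len(utr_lengths, counts_ck))  # 158.0 nt
print(weighted_len(utr_lengths, counts_st))  # 253.0 nt -> 3' UTR lengthening

chi2, p, dof, expected = chi2_contingency([counts_ck, counts_st])
print(f"chi-square p = {p:.3g}")  # significance of the usage shift
```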
RT-qPCR Analysis
Two micrograms of DNA-free total RNA were used for reverse transcription. RT-qPCR was performed on a CFX96™ Real-Time PCR Detection System (Bio-Rad) with SYBR green PCR master mix. Primers are shown in Supplementary Table S3. AtACTIN2 was used as the reference gene for Arabidopsis while EsTUB6 was used as the reference gene for Eutrema.
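Relative expression from RT-qPCR with a reference gene is commonly computed with the 2^-ddCt model; the paper does not state its exact quantification model, so the sketch below, with hypothetical Ct values, is only illustrative:

```python
# Hypothetical Ct values; the reference gene is ACTIN2 or TUB6, as above.
ct_target_ck, ct_ref_ck = 24.8, 19.5   # control sample
ct_target_st, ct_ref_st = 23.1, 19.6   # salt-stressed sample

dct_ck = ct_target_ck - ct_ref_ck      # normalize to the reference gene
dct_st = ct_target_st - ct_ref_st
ddct = dct_st - dct_ck                 # salt stress relative to control
fold_change = 2 ** (-ddct)
print(f"fold change under salt stress = {fold_change:.2f}")  # ~3.48
```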
Statistical Analysis
SPSS (version 23.0.0) was used for data analysis; one-way ANOVA and the Least Significant Difference test were used to determine statistical significance. The Wilcoxon matched-pairs signed rank test was used to test significance in boxplots. Mean values and SDs were calculated from three biological replicates. Significant differences are indicated as *p < 0.05; **p < 0.01; ***p < 0.001; ****p < 0.0001.
Data Availability
The PAT-seq data generated by this study are available in the NCBI BioProject database under accession number PRJNA782687.
The Growth of Arabidopsis and Eutrema Roots Under Salt Stress
Root elongation under salt conditions was measured to evaluate the salt tolerance of Arabidopsis and Eutrema. Five-day-old seedlings were transferred to 1/2 MS medium containing different concentrations of NaCl (0, 50, 150, 200, and 300 mM), and primary root elongation was measured after 8 days. Under normal conditions or at a relatively low concentration of NaCl (50 mM), Arabidopsis grew longer roots than Eutrema (0 mM: p < 0.001; 50 mM: p < 0.01; Figure 1A). However, at higher concentrations of NaCl (>150 mM), the root growth of Arabidopsis was significantly restricted (p < 0.001, Figure 1A). At a NaCl concentration of 200 mM, the elongation of Arabidopsis roots was reduced to 2.5% of that at 0 mM NaCl (p < 0.001); in comparison, 66% of root growth was maintained in Eutrema (Figures 1A,B). These results suggest that Eutrema performed significantly better than Arabidopsis under salt stress; these findings are consistent with previous studies showing that Eutrema is highly tolerant to salt (Kazachkova et al., 2013). On the basis of these results, we selected 200 mM NaCl for the construction of PAT-seq libraries, as the most significant differences between the two species were seen at this concentration.
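The relative elongation values above follow from a simple normalization to salt-free growth; the millimetre values in this sketch are hypothetical but chosen to reproduce the reported proportions:

```python
# Hypothetical mean tap-root elongation over 8 days (mm) at 0 and 200 mM NaCl.
elongation_mm = {
    "Arabidopsis": {0: 40.0, 200: 1.0},
    "Eutrema": {0: 15.0, 200: 9.9},
}
for species, d in elongation_mm.items():
    rel = 100.0 * d[200] / d[0]
    print(f"{species}: {rel:.1f}% of control elongation at 200 mM NaCl")
# -> Arabidopsis 2.5%, Eutrema 66.0%, matching the proportions reported above
```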
Profiles of the Poly(A) Sites of Arabidopsis and Eutrema Under Salt Stress
To determine the poly(A) site profiles (hence APA events) of Arabidopsis and Eutrema under salt stress, we collected seedlings of the two species under control (CK, 0 mM NaCl) and salt stress (ST, 200 mM NaCl) conditions for PAT-seq. After raw data processing, 44,395 PACs were identified in Arabidopsis; these were dispersed among 20,208 genes. Of these genes, 54% possessed more than one poly(A) site; these were defined as APA genes (Figure 2A). In comparison, 30,226 PACs were identified in Eutrema; these were dispersed among 17,939 genes, 47% of which were classified as APA genes (Figure 2B). These results suggest that APA occurs commonly in both Arabidopsis and Eutrema.
Notably, salt stress induced more than 400 additional APA genes in Arabidopsis, while no significant changes were observed in Eutrema (Figure 2C), indicating that Arabidopsis is more sensitive to salt stress. Furthermore, salt stress reduced the proportion of PACs in the 3′ UTRs of Arabidopsis but increased the proportion in intergenic regions; no such changes were evident in Eutrema (Figure 2D), suggesting that salt stress caused less perturbation of the Eutrema transcriptome.
Arabidopsis and Eutrema Showed Distinct Poly(A) Profiles and Gene Expression Patterns Under Salt Stress
As Arabidopsis and Eutrema are known to respond differently to salt stress, we applied principal component analyses to the samples and then identified PACs that were differentially expressed between CK and ST (DE-PACs; Supplementary Table S2). These DE-PACs were located in 2,566 genes in Arabidopsis and 849 genes in Eutrema, respectively, and these were designated as DE-PAC genes.
Next, we investigated the potential functions of these DE-PAC genes by performing GO enrichment and KEGG pathway analyses. In both species, DE-PAC genes were enriched in a range of biological processes, including hyperosmotic salinity response, hormone-mediated signaling pathways, response to wounding, and response to heat and cold, and in a range of cellular components, including plasmodesma, apoplast, and cell wall (Figure 3). However, several biological process terms differed between the two species: negative regulation of programmed cell death, positive regulation of transcription, and flavonoid biosynthetic process were enriched only in Arabidopsis, whereas response to oxidative stress and the biosynthetic processes of wax and lignin were enriched only in Eutrema (Figure 3). In addition, in both Arabidopsis and Eutrema, DE-PAC genes were enriched in a range of different molecular functions: in Arabidopsis, the DE-PAC genes were associated with transcription factors, whereas in Eutrema they were related to protein heterodimerization activity (Figure 3). Upregulated DE-PAC genes in Arabidopsis were significantly enriched in several KEGG pathways, including plant hormone signal transduction, starch and sucrose metabolism, and fatty acid elongation (Table 1). For downregulated DE-PAC genes, no pathways were significantly enriched. In Eutrema, however, upregulated DE-PAC genes were significantly enriched in biosynthetic pathways (stilbenoid, diarylheptanoid, and gingerol) and metabolic pathways (arginine and proline), and downregulated DE-PAC genes were enriched in protein processing in the endoplasmic reticulum. Collectively, these results revealed that Arabidopsis and Eutrema respond to salt stress differently, with distinct gene expression profiles; it is likely that they also possess different molecular mechanisms.
Gene expression levels were determined by summing the counts of all PATs located within a gene. Compared to CK, 3,681 genes in Arabidopsis and 1,544 genes in Eutrema were differentially expressed (DE) under ST. Venn analysis showed that 68% and 54% of the DE genes in Arabidopsis and Eutrema, respectively, had DE-PACs (Figures 4A,B). DE-PAC genes with more than one poly(A) site were defined as DE-APA genes. We found that a significant proportion of DE genes overlapped with DE-APA genes [42% in Arabidopsis (Figure 4C) and 29% in Eutrema (Figure 4D)], thus highlighting the importance of APA in the regulation of gene expression in response to salt stress.
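A toy sketch of the gene-level aggregation and Venn overlaps described above; all identifiers and counts are hypothetical:

```python
# PAC id -> (gene id, PAT count); gene expression = sum of its PAC counts.
pac_counts = {
    "PAC1": ("geneA", 120),
    "PAC2": ("geneA", 40),
    "PAC3": ("geneB", 15),
}
gene_counts = {}
for gene, n in pac_counts.values():
    gene_counts[gene] = gene_counts.get(gene, 0) + n
print(gene_counts)  # {'geneA': 160, 'geneB': 15}

# Overlap between DE genes and DE-APA genes as a simple set intersection.
de_genes = {"geneA", "geneC"}     # hypothetical DESeq2 calls
de_apa_genes = {"geneA", "geneD"}
overlap = de_genes & de_apa_genes
print(f"{100 * len(overlap) / len(de_genes):.0f}% of DE genes show DE-APA")
```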
Genes Tended to Use Distal Poly(A) Sites in 3′ UTRs Under Salt Stress
The 3′ UTR contains cis-elements that may affect mRNA metabolism, thus fine-tuning mRNA stability, translation, nuclear export, and cellular localization (Xing and Li, 2011). Over 70% of PACs were located in the 3′ UTRs of Arabidopsis and Eutrema genes (Figure 2D); therefore, we investigated APA events in this region and determined the 3′ UTR lengths of genes. These analyses showed that a greater number of genes had longer 3′ UTRs than shorter 3′ UTRs in both Arabidopsis and Eutrema under salt stress. Compared to Arabidopsis, Eutrema possessed fewer genes that exhibited a change in 3′ UTR length (Figure 5A), indicating that 3′ UTR poly(A) sites were less affected in Eutrema under salt stress. Furthermore, we measured the 3′ UTR length of the 3′ UTR-lengthened and -shortened genes in Arabidopsis and Eutrema and found that salt stress caused significant changes in 3′ UTR length in both species (Figure 5B). Of the genes with longer 3′ UTRs, more were upregulated than downregulated in both Arabidopsis (267 vs. 190, p adj < 0.05, Figure 5C) and Eutrema (60 vs. 53, p adj < 0.05, Figure 5D). Of the genes with shorter 3′ UTRs, the numbers of upregulated and downregulated genes were very similar in both Arabidopsis (26 vs. 24) and Eutrema (8 vs. 9). Analysis of homologous genes with significantly longer 3′ UTRs in the two species showed that only eight genes overlapped (Supplementary Figure S3), thus revealing the distinct gene sets that responded to salt stress via APA in 3′ UTRs. Next, we used GO analysis to investigate the functions of the genes undergoing significant changes in 3′ UTR length. No terms were enriched for the genes with shorter 3′ UTRs, most likely owing to their limited number; data for the genes with longer 3′ UTRs are shown in Supplementary Figure S4. We found that the genes with longer 3′ UTRs in Arabidopsis were significantly enriched in GO terms related to salt stress, including response to salt stress and cation transport; such enrichment was not detected in Eutrema. These findings suggest that the regulation of APA in response to salt stress was more pronounced in Arabidopsis in terms of poly(A) site choice in 3′ UTRs.
Differential APA of Genes Related to Salt Tolerance in Arabidopsis and Eutrema
Interestingly, we found that some genes related to salt tolerance exhibited differential APA patterns in Arabidopsis and Eutrema.
For example, MAP3Kδ4 plays an important role in ABA signaling and plant responses to various environmental stimuli, including high salt concentrations; the over-expression of MAP3Kδ4 was previously shown to enhance tolerance to salt stress in Arabidopsis (Shitamichi et al., 2013). Our data further revealed that AtMAP3Kδ4 (AT4G23050) exhibited a longer 3′ UTR under salt stress (from 280 nt in CK to 373 nt in ST). PAT-seq coverage of the gene was visualized by IGV and validated by RT-qPCR (Figures 6A,B). Four poly(A) sites were expressed under control conditions, and the gene mostly used the proximal site (PA1). However, salt stress significantly increased the utilization of the distal poly(A) site (PA4, Figure 6A). The homolog of AtMAP3Kδ4 in Eutrema (Thhalv10024532m) showed only an increased gene expression level, without APA regulation (Figures 6C,D).
When a gene exhibited alternative usage of two or more poly(A) sites (e.g., one PAC was upregulated while another was downregulated), the gene was designated an APA switching gene. This type of APA switching under salt stress was detected in 70 and 23 genes in Arabidopsis and Eutrema, respectively. Table 2 shows the APA switching genes for which a functional role has been described previously. In Arabidopsis, these genes are related to the response to salt stress, mRNA processing, and growth by gravitropism; in Eutrema, they are related to dehydration stress, low temperature, and the ABA response. It was previously reported that ERD14 and ERD10 were alternatively spliced following salt treatment (Ding et al., 2014) and that erd10 mutants exhibited reduced tolerance to dehydration (Kim and Nam, 2010). The homologous gene of Thhalv10008280m in Arabidopsis encodes AtU2AF35a, a small subunit of splicing factor U2. Interestingly, the gene that encodes the conserved subunit AtU2AF35b (AT5G42820) also underwent APA switching under salt stress in Arabidopsis (Table 2). In addition, considering that stress conditions can induce the specific expression of genes, we investigated salt-specific PACs (i.e., PACs that were only expressed in ST samples) and salt-inducible APA (i.e., APA events that were only found in ST samples) in Arabidopsis and Eutrema. In total, 1,021 salt-specific PACs were identified in Arabidopsis, dispersed among 569 genes; 86 of these genes were enriched in GO terms related to transcription factors, and 46 were enriched in GO terms related to the response to salt stress. Notably, 50 genes showed salt-inducible APA; furthermore, some transcription factors that positively regulate drought and salt stress responses only underwent APA under salt stress. For example, AT4G34410 used only one poly(A) site under normal conditions, whereas four PACs were induced by salt stress (Figure 6E), indicating that salt stress changed the poly(A) tailing position of AT4G34410 transcripts. This gene encodes the transcription factor ERF109, which improves the resistance of Arabidopsis to salt: compared with knockout mutants, ERF109-overexpressing lines were shown to possess longer roots, more leaves, and larger rosette leaf areas under salt conditions (Bahieldin et al., 2016). Another gene, AT5G62470, encodes the MYB96 transcription factor; in this gene, only one poly(A) site was used in the absence of salt stress, while two PACs were produced under salt stress (Figure 6F). The MYB96 transcription factor has been shown to improve tolerance to drought in Arabidopsis by regulating the biosynthesis of cuticular wax (Seo et al., 2011).
In Eutrema, we identified 190 salt-specific PACs in 169 genes. Of these genes, 18 were significantly enriched in GO terms related to transcription factor activity and sequence-specific DNA binding, and 14 were enriched in response to water deprivation. Sixteen genes showed salt-inducible APA; likewise, some transcription factors that positively regulate drought and salt stress responses only exhibited APA under salt stress. These included Thhalv10011676m, which encodes a homolog of the Arabidopsis NAC019 transcription factor; this gene was not expressed under normal conditions but produced two PACs following salt treatment (Figure 6G). Thhalv10014897m encodes a homolog of AtLEA4-5, which typically accumulates in response to conditions of low water availability (Li et al., 2021b); this gene exhibited only one PAC in the absence of salt but three PACs under salt stress (Figure 6H). Moreover, we used the salt-specific PAC genes in Eutrema to identify homologous genes in Arabidopsis for comparative purposes. Venn analysis showed that only 28 genes overlapped (Figure 6I); these were significantly enriched in GO terms related to water deprivation, response to abscisic acid, and transcription factor activity. However, most salt-specific PAC genes in Arabidopsis were distinct from those in Eutrema, suggesting that APA plays an important role in both species during the salt stress response but with different patterns of gene regulation; a higher number of salt-specific PACs were activated in Arabidopsis to cope with salt conditions.
Polyadenylation Factors Exhibited Different Expression Levels Under Salt Stress
The differential use of APA sites is normally related to the different functions of poly(A) factors. Changes in the expression of core polyadenylation factors will also lead to global APA events in 3′ UTRs (Thomas et al., 2012). To explore the mechanisms responsible for the modulation of 3′ UTR length, we determined the expression levels of 26 genes that encode polyadenylation factors and compared these data between CK and ST samples. In Arabidopsis, three polyadenylation factor genes (FIPS5, PCFS1, and PCFS5) were significantly upregulated under salt stress (Figure 7A).
The homologous gene of PCFS5 in Eutrema was also upregulated under salt stress (Figure 7B); these data are consistent with previous studies reporting increased expression levels of AtPCFS1 and AtPCFS5 under salt stress. In contrast, CstF50 and PABN3 were significantly downregulated in Arabidopsis under salt stress (Figure 7A), while CstF50 was downregulated in Eutrema (Figure 7B). PCFS factors are homologs of Pcf11p in yeast and CF II in mammals and are essential for pre-mRNA 3′-end processing. Yeast Pcf11p binds to the C-terminal domain of the largest subunit of RNA polymerase II and is involved in transcription termination, and its C-terminal part interacts with the polyadenylation factors Clp1p, Rna14p, and Rna15p (Haddad et al., 2012). In mammals, CstF50 is a subunit of the cleavage stimulation factor complex and interacts with the BRCA1-associated RING domain protein to inhibit polyadenylation in vitro (Kleiman and Manley, 1999). In Arabidopsis, CstF50 interacts with CstF64, PAPS, and CPSF factors. Therefore, polyadenylation factors may play important roles in salt-induced APA by interacting with other polyadenylation factors and by modulating the expression of genes that are responsive to salt stress. The abundance of transcripts of the PCFS factor genes was visualized by IGV, and the gene expression levels were validated by RT-qPCR. Under control conditions, AtPCFS1 (AT1G66500) mainly used the poly(A) site located in the CDS region; however, under salt stress, the use of the distal poly(A) site in the 3′ UTR increased dramatically (Figure 8A). A similar phenomenon was also evident for AtPCFS5 (AT5G43620, Figure 8B). Interestingly, the homologous gene of AtPCFS1 and AtPCFS5 in Eutrema, EsPCFS5 (Thhalv10018488m), also showed an increased expression level of the distal poly(A) site under salt stress (Figure 8C). These results suggest that Arabidopsis and Eutrema might use APA to increase the expression levels of functional transcripts of polyadenylation factors in response to salt stress.
DISCUSSION
In this study, we provide a comprehensive map of the poly(A) profiles of a salt-sensitive species (A. thaliana) and a salt-tolerant species (E. salsugineum) and compare their APA patterns under salt stress. Although APA occurs commonly in both Arabidopsis and Eutrema, Arabidopsis possesses a higher number of APA genes than Eutrema (54% vs. 47%). Furthermore, the proportion of APA genes increased significantly in Arabidopsis under salt stress, but not in Eutrema. Both species tend to use distal poly(A) sites under salt stress, while their 3′ UTR-lengthened genes showed different enrichments in GO terms and KEGG pathways. Salt stress affected the use of poly(A) sites within 3′ UTRs in a larger number of genes in Arabidopsis than in Eutrema (507 vs. 130). Eutrema exhibits an innate response to salt stress; therefore, gene expression was less affected in this species. APA was found to be associated with 42% and 29% of the DE genes in Arabidopsis and Eutrema under salt stress, respectively, suggesting a potential role of APA in the regulation of gene expression in response to salt stress. Salt-specific PACs and salt-inducible APA events were identified in both species; interestingly, some salt tolerance-related genes and transcription factor genes showed differential APA patterns. Our results suggest that the better-adapted species showed less alteration at the transcriptional level under stress, while more salt-specific PACs were activated in Arabidopsis to cope with salt conditions.
Polyadenylation Factors and Wide-Ranging APA Under Stress Conditions
A large group of protein factors is required for the pre-mRNA polyadenylation process in plants. These factors recognize polyadenylation signals and form complexes that control mRNA 3′-end formation. The polyadenylation factor subunits not only show extensive protein-protein interactions but also coordinate with other RNA processing events in the course of gene expression. Previous studies on AtCPSF30, AtCPSF100, and FY suggested that changes in the activity of polyadenylation factors may lead to wide-ranging APA (Thomas et al., 2012; Lin et al., 2017; Yu et al., 2019). In addition, abiotic stress treatments can incite changes in poly(A) site choice in a large number of genes. Some APA patterns have been shown to change extensively under abiotic stresses, including drought, heat, and salt stress in Sorghum (Chakrabarti et al., 2020); oxidative stress (Liu et al., 2014) and hypoxia in Arabidopsis (de Lorenzo et al., 2017); and drought, heat shock, and cadmium stress in rice. Interestingly, abiotic stresses tend to increase the usage of non-canonical poly(A) sites in plants (de Lorenzo et al., 2017; Chakrabarti et al., 2020). In our study, by comparing the expression levels of polyadenylation factors under control and salt stress conditions in Arabidopsis and Eutrema, we found that five polyadenylation factors in Arabidopsis changed significantly in their expression levels in response to salt stress, whereas only two polyadenylation factors in Eutrema showed significant changes (Figure 7). Notably, AtPCFS1 and AtPCFS5 showed highly significant changes. Moreover, the expression levels of many polyadenylation factors in Eutrema were lower than those in Arabidopsis under both stressed and unstressed conditions. Thus, the changes in the expression levels of core polyadenylation factors in Arabidopsis may widely affect the selection and usage of poly(A) sites during the salt stress response, whereas most polyadenylation-related genes in Eutrema responded modestly. This may explain why more APA events were identified in Arabidopsis in response to salt stress than in Eutrema.
Consequences of APA in Different Regions of Genes Under Stress
Alternative polyadenylation in different regions of genes leads to mRNAs with different stabilities. mRNAs generated by polyadenylation in CDS regions, which lack stop codons, are likely to be degraded through the non-stop mRNA decay pathway, while mRNAs ending in introns may be targeted by the nonsense-mediated decay pathway (Frischmeyer et al., 2002). Interestingly, mRNA degradation can be downregulated under stress conditions (Shaul, 2015), thereby promoting the accumulation of non-canonical mRNAs. This offers a possible explanation for the increase in non-canonical isoforms in response to stresses. Among the APA events in 3′ UTRs, we identified more genes with lengthened than with shortened 3′ UTRs, and a larger proportion of the 3′ UTR-lengthened genes were significantly upregulated under salt stress. This is consistent with findings reported previously. UV-induced DNA damage in Saccharomyces cerevisiae led to changes in poly(A) sites along with the extension of transcripts (Graber et al., 2013). Another study reported that osmotic stress caused by KCl in human fibroma cells, as well as dehydration stress in Arabidopsis, resulted in 3′ UTR extension into chromatin-associated regions and long non-coding regions (Vilborg et al., 2015; Proudfoot, 2016). These findings indicate that repression of proximal poly(A) sites and utilization of distal poly(A) sites in 3′ UTRs might be a general mechanism of stress responses. Although 3′ UTR-APA does not change the coding sequence or the total expression level of an mRNA, this process may affect post-transcriptional gene regulation in various ways, including mRNA stability, the modulation of mRNA translation, nuclear export, cellular localization, and the localization of encoded proteins (Berkovits and Mayr, 2015; Tian and Manley, 2017). We did observe expression changes in some translation elongation factors such as TFIIS; this may have had an impact on mRNA translation efficiency by altering the use of polyadenylated transcripts (Cui and Denis, 2003).

[Table: examples of genes with differential APA patterns]
AT1G70940        PIN3      A regulator of auxin efflux; involved in differential growth and gravitropism
Thhalv10019152m  ERD14     Induced early in response to dehydration stress (Kiyosue et al., 1994)
Thhalv10008313m  ERD10     Induced by low temperature and dehydration (Kim and Nam, 2010)
Thhalv10008280m  U2AF35A   U2 auxiliary factor small subunit (Wang and Brendel, 2006)
Thhalv10001457m  CIR       Pre-mRNA splicing factor (Maita et al., 2005)
Thhalv10003590m  NPX1      A nuclear factor regulating abscisic acid responses (Kim et al., 2009)
APA as a Part of the Response to Stresses or a Disorder?
In the current study, the poly(A) profiles of Arabidopsis were widely affected by abiotic stress. This phenomenon also appears in several prior observations of different plant species upon exposure to abiotic stresses (Zhang et al., 2008; de Lorenzo et al., 2017; Ye et al., 2019; Chakrabarti et al., 2020), suggesting a potential role for APA mediated by polyadenylation factors. In addition, many of the APA switching events identified in this study and in previous studies are related to stress response genes. For example, many APA switching genes were found in rice samples from different tissues and developmental stages, and these genes have functions related to salt and drought stress responses (Fu et al., 2016). This indicates that APA regulates not only the developmental processes of plants but also their adaptation to abiotic stresses. Furthermore, it is well established that stresses induce the expression of many stress-related genes (Chen et al., 2015). Similarly, we herein observed a large group of stress-responsive poly(A) sites and APA events. This suggests that APA under salt stress could provide extensive plasticity for plants to adapt to stress conditions.
On the other hand, stresses may reduce the fraction of 3′ UTR poly(A) sites and lead to an increase in non-canonical poly(A) sites, as we observed in Arabidopsis exposed to salt conditions. This is consistent with prior observations showing that the usage of poly(A) sites in CDS, intron, and 5′ UTR regions was promoted by salt, drought, heat treatment, and hypoxia (de Lorenzo et al., 2017; Tellez-Robledo et al., 2019; Chakrabarti et al., 2020). Notably, the isoforms ending in CDSs and introns were less stable and underrepresented in polysomes; conversely, transcripts generated by 5′ UTR poly(A) sites were as stable as canonical isoforms (de Lorenzo et al., 2017). Nevertheless, this re-directing of transcriptional output may represent a form of negative regulation under stress. Some researchers believe that the stress-inducible remodeling of transcripts mediated by APA represents an important part of the regulatory network in plant stress responses (Chakrabarti et al., 2020).

[Figure caption: The PAT-seq coverage of EsPCFS5 (Thhalv10018488m) and the RT-qPCR result. CK, control; ST, salt stress. PA represents poly(A) site. Arrows beside gene names indicate gene orientation. Statistical significance was determined by one-way ANOVA; **p < 0.01, ***p < 0.001, and ****p < 0.0001.]
Collectively, we believe that APA plays a functional role in the regulatory response to stresses. Although the contribution of the genome-wide changes mediated by APA requires further exploration, these changes may need to be considered carefully on a case-by-case basis.
CONCLUSION
Eutrema has adapted to salty environments throughout its evolutionary history, while Arabidopsis has not. In the present study, comparison of their poly(A) site usage (reflecting RNA processing) under salt stress revealed that their responses are distinct: Eutrema is relatively stable, while Arabidopsis shows significant changes in gene expression via APA. These results suggest that innate responses to environmental insults in plants relate to inherited ability. Such ability could be written into the genetic circuits for gene expression in a particular species. Further elucidation of these circuits would be of significant benefit to the genetic engineering of crops.
DATA AVAILABILITY STATEMENT
The datasets presented in this study can be found in online repositories. The names of the repository/repositories and accession number can be found at: https://www.ncbi.nlm.nih.gov/, PRJNA782687.
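For readers who want to locate the raw data programmatically, the following is a minimal sketch using Biopython's Entrez interface to list the SRA records under the BioProject accession given above; the accession is from the statement, while the e-mail placeholder and the choice of Biopython are assumptions.

from Bio import Entrez

# NCBI requires a contact e-mail; this one is a placeholder.
Entrez.email = "your.name@example.org"

# Search the SRA database for records linked to the BioProject accession.
handle = Entrez.esearch(db="sra", term="PRJNA782687", retmax=100)
record = Entrez.read(handle)
handle.close()

print(f"Found {record['Count']} SRA record(s)")
for uid in record["IdList"]:
    print(uid)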
AUTHOR CONTRIBUTIONS
LC and KZ prepared the plant materials and salt treatments, and LC made PAT-seq libraries. HM and LC performed the data analyses and prepared the manuscript. JL participated in the data analyses and revised the manuscript. QL conceived and supervised the project and revised the manuscript. All authors contributed to the article and approved the submitted version.
FUNDING
This research was supported in part by a grant from the Chinese Ministry of Science and Technology (2016YFE0108800). HM received funding support from the China Scholarship Council while visiting Western University of Health Sciences. | 2022-03-30T14:06:19.468Z | 2022-03-25T00:00:00.000 | {
"year": 2022,
"sha1": "9fbf9409f8eefa4c7bbbd1bb9b54466a434165f1",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Frontier",
"pdf_hash": "9fbf9409f8eefa4c7bbbd1bb9b54466a434165f1",
"s2fieldsofstudy": [
"Biology",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
417526 | pes2o/s2orc | v3-fos-license | A Strategy for Screening Monoclonal Antibodies for Arabidopsis Flowers
The flower is one of the most complex structures of angiosperms and is essential for sexual reproduction. Current studies using molecular genetic tools have made great advances in understanding flower development. Due to the lack of available antibodies, studies investigating the localization of proteins required for flower development have been restricted to commercial antibodies against known antigens such as GFP, YFP, and FLAG. Thus, knowledge about cellular structures in the floral organs is limited by the scarcity of antibodies that can label cellular components. To generate monoclonal antibodies that can facilitate molecular studies of the flower, we constructed a library of monoclonal antibodies against antigenic proteins from Arabidopsis inflorescences and identified 61 monoclonal antibodies. Twenty-four of these monoclonal antibodies displayed a unique band in a western blot assay in at least one of the examined tissues. Distinct cellular distribution patterns of epitopes were detected by these 24 antibodies in flower sections by immunofluorescence microscopy. Subsequently, a combination of immunoprecipitation and mass spectrometry analysis identified potential targets for three of these antibodies. These results provide evidence for the generation of an antibody library using total plant proteins as antigens. Using this method, the present study identified 61 monoclonal antibodies, 24 of which efficiently detected epitopes in both western blot experiments and immunofluorescence microscopy. These antibodies can be applied as informative cellular markers to study the biological mechanisms underlying floral development in plants.
INTRODUCTION
In 1975, Köhler and Milstein generated the first monoclonal antibody. Since then, hundreds and thousands of monoclonal antibodies have been widely used in various fields of medical and biological research (Weiner, 2015; Teo et al., 2016). In the 1980s, Aris and Blobel built an antibody library using purified nuclear proteins from yeast (Aris and Blobel, 1988). This library screen directly contributed to the discovery of the nuclear pore complex, promoting further studies of its role in regulating intracellular signaling pathways (Aris and Blobel, 1988). Because monoclonal antibodies can bind to an epitope with a high degree of specificity and sensitivity, they have become increasingly important for detecting protein subcellular localization in many organisms, including plants.
The flower is the reproductive structure of angiosperms, producing the male and female gametes and providing the physical and nutritional environment for seed formation (Soltis et al., 2004).
The basic structure of most flowers consists of four whorls from the outer to the inner whorl: sepals, petals, stamens, and carpels (Smyth et al., 1990; Ma, 2005). In recent decades, molecular genetic studies have made great progress in understanding the regulatory mechanisms of flower development, such as the classic ABC model (Alvarez-Buylla et al., 2010; Bowman et al., 2012). Due to the lack of available antibodies against the proteins underlying the ABC model, information about their subcellular localization remains limited. At the same time, knowledge regarding subcellular structures in floral tissues is very limited due to the lack of cellular markers, especially for structures that are not conserved with animals and fungi, which have been studied far more extensively using cell biological tools.
To generate monoclonal antibodies that can be used as molecular markers for studying cellular structures during flower development, we constructed a library of monoclonal antibodies using total proteins from the inflorescences of Arabidopsis thaliana. Our initial screens using western blot (WB) identified a total of 61 antibodies that displayed bands against Arabidopsis total proteins. Twenty-four of these antibodies detected a single protein band of various sizes in floral protein extracts. We then performed WB using total proteins extracted from different organs, such as stems, leaves, and inflorescences, and grouped these antibodies into three categories: tissue-specific, preferential, and broad expression. Further characterization of these antibodies by immunofluorescence microscopy on Arabidopsis inflorescence paraffin sections revealed distinct localization patterns in the Arabidopsis inflorescence, with some antigens expressed in specific cell layers. Finally, we used immunoprecipitation (IP) to enrich putative antigens (or antigen complexes) and performed mass spectrometry (MS) analysis to discover the target antigens of these antibodies. Taken together, this is the first time that monoclonal antibodies have been generated using total plant proteins as antigens. Furthermore, the identified antibodies could be used as molecular markers for studying floral organ development.
Plant Material and Flower Protein Extraction
The A. thaliana wild-type plant used in this study was the Col ecotype. The plants were grown in the greenhouse with 16 h of light and 8 h of darkness at a constant 22 °C. Flowers from stages 1-12 were collected and ground to a fine powder in liquid nitrogen; the proteins were extracted using extraction buffer [100 mM Tris-HCl, pH = 7.5; 300 mM NaCl; 2 mM EDTA; 10% glycerol; 0.1% Triton X-100; 1x complete protease inhibitor (11697498001, Roche, USA)]. The protein-buffer mixture was centrifuged at 13000 rpm for 10 min at 4 °C. The supernatant was collected, and its protein concentration was measured using a Bio-Rad Protein Assay Kit (Bio-Rad, Berkeley, CA, USA). This extract was then used to immunize mice.
Generation of the Monoclonal Antibody Library toward Proteins from Flower
Total proteins were extracted as above and diluted to a concentration of 1 mg/mL for use as the antigen. The antigen was emulsified with Complete Freund's adjuvant (CFA) at a volume ratio of 1:1 before immunizing the mice. Monoclonal antibodies were generated using standard methods as previously described (Yokoyama et al., 2013; Greenfield, 2014). Briefly, BALB/c mice were immunized with 150 ng of antigen, followed by a 150 ng booster on day 14 and a further injection on day 28. Spleen cells (1.0 × 10^7/mL) were isolated from each mouse and fused with the mouse P3X63Ag8.653 cell line (2.0 × 10^7/mL) to generate hybridoma cells. Polyethylene glycol (PEG) was used as an adjuvant in later immunization steps. The hybridoma cells were screened twice by western blot. Positive cells were picked for sub-cloning by limiting dilution. The hybridoma cell clones were also screened twice by western blot. Positive clones were then collected for expansion culture. The antibody-containing supernatant was harvested and purified using protein A.
Immunoblotting and Immunoprecipitation
The total protein used was the same as described above. For immunoblotting, the proteins were separated on a 4-15% polyacrylamide gradient gel (4561086, Bio-Rad, USA) and transferred onto a nitrocellulose membrane (10600002, Amersham, USA). The membrane was blocked with 5% non-fat milk (9999, Cell Signaling, USA) in TBST and incubated with the monoclonal antibodies (1:500 dilution) overnight at 4 °C. The membrane was washed three times for 5 min each with TBST. HRP-conjugated anti-mouse IgG secondary antibody was added for 1 h at room temperature. The membrane was washed three times again with TBST before being treated with ECL (RPN3243, GE Healthcare, USA) and scanned with a Typhoon scanner (FLA 9500, GE Healthcare, USA). For immunoprecipitation, the antibodies were added to the protein extract at the previously described concentration and incubated for 2 h at 4 °C before incubation with protein A-conjugated beads for another 1 h. The beads were collected by centrifugation at 2000 × g for 2 min at 4 °C and washed three times with TBST before boiling in SDS loading buffer for 10 min. The samples were then analyzed by 4-15% SDS-PAGE and silver staining as described (Chevallet et al., 2006).
Immunofluorescence Microscopy
Immunofluorescence staining was performed as described previously (Wang et al., 2012). The slides were blocked with goat serum (AR0009, Boster Biological Technology, China) at 37 °C for 30 min, followed by incubation with one of the monoclonal antibodies (1:500 dilution) at 4 °C overnight. The slides were then washed three times with PBS for 10 min each before incubation with goat anti-mouse IgG (H+L) secondary antibody, Alexa Fluor 488 conjugate (A-11001, Invitrogen, USA), at a 1:1000 dilution in PBS for 1 h at room temperature. After washing three times with PBS, the slides were stained with 1.5 mg/mL 4′,6-diamidino-2-phenylindole (DAPI) in Vectashield antifade medium (H-1200, Vector Laboratories, USA). The slides were imaged using an AxioCam HRc (Zeiss) camera.
Identification of the Antigens by Mass Spectrometry
After silver staining (Chevallet et al., 2006), the targeted band was excised for in-gel digestion with trypsin as described previously (Shevchenko et al., 2006). After digestion, the extracted peptides were analyzed by a Finnigan LTQ mass spectrometer (Thermo, USA) coupled with a Surveyor HPLC system (Thermo, USA).
Generation of a Monoclonal Antibody Library Using Total Proteins from Arabidopsis Inflorescences as Antigens
Arabidopsis is a widely used model system for plant molecular genetics, and its flower development has been studied extensively over the last three decades (Chang and Meyerowitz, 1986; Alvarez-Buylla et al., 2010; Irish, 2010; Bowman et al., 2012). However, only a few antibodies have been successfully produced to trace floral proteins and to study floral development. To understand the molecular mechanisms underlying flower development, previous studies used transgenic plants expressing a fusion of the target protein with an epitope tag, examining the target protein's level, modification, or localization with commercial tag antibodies (Terpe, 2003). However, this approach is inefficient for detecting the same protein in different genetic backgrounds, especially given the time and effort needed to introduce the transgene into various backgrounds. Therefore, as more proteins are discovered with crucial roles in development, specific antibodies have become increasingly important. In this study, we used the procedure outlined in Figure 1 to generate a monoclonal antibody library against the total proteins extracted from Arabidopsis stage 1-12 inflorescences, as defined previously (Smyth et al., 1990; Alvarez-Buylla et al., 2010). Total proteins were used to immunize the mice, and the spleen cells from each immunized mouse were isolated and fused with myeloma cells to generate hybridoma cells, which were then cultured in HAT (hypoxanthine-aminopterin-thymidine) medium. The culture media of these hybridomas were tested twice by WB using total floral proteins to determine whether the hybridomas produced antibodies that recognized floral proteins. A single clone was then selected from each antibody-positive culture, and the culture media of the single clones were tested again by WB to confirm the production of antibodies recognizing flower proteins. All positive clones were then selected for subsequent culturing. In total, we generated about 1000 individual clones, 61 of which specifically recognized floral proteins in a WB assay. Protein A was then used to purify these antibodies for further study.
FIGURE 1 | Flowchart of systematically generating monoclonal antibodies using Arabidopsis flower total proteins as antigen. Arabidopsis flowers at stages 1-12 were collected. The total protein was extracted and quantified using a Bio-Rad Protein Assay Kit. The proteins were then prepared for mice immunization. The spleen cells from immunized mice were fused with myeloma cells to produce hybridoma cells. The hybridoma cells were screened twice by western blot. Positive clones were kept for antibody production.
Testing Monoclonal Antibodies for Organ Specificity by Western Blot
To validate the ability of the 61 monoclonal antibodies to detect floral proteins and examine their specificity for particular proteins, we performed WB assays using total proteins extracted from Arabidopsis leaves, stems, and inflorescences. According to the protein specificities recognized by the individual antibodies (Figure 2, Supplementary Figure S1, and Table 1), 24 of the 61 antibodies were able to detect a single protein band and were selected for subsequent analyses.

FIGURE 2 | Validation of the monoclonal antibodies by western blot. All the monoclonal antibodies were validated by western blot using total protein extracted from leaves (L), stems (S), or flowers (F). According to tissue specificity, the antibodies were divided into six groups. The antibodies in the first two groups recognized proteins mainly in flowers (A) or stems (B). In the next three groups, the targeted proteins were mainly from leaves and flowers (C), stems and flowers (D), or leaves and stems (E), respectively. All three tissues showed signals in the last group (F).

According to the similarity or difference of their signal profiles, the 24 antibodies were divided into six groups, A to F (for convenience, these antibodies are hereafter referred to as No. 1 to No. 24). Group A contains four antibodies, No. 1-4, which recognized an organ-specific (flower) protein. Group B contains two members, No. 5 and No. 6, which detected proteins at higher levels in stems. The antigen for No. 6 was highly expressed in stems compared to leaves and flowers, whereas No. 7 showed the opposite pattern, recognizing a protein expressed at higher levels in leaves and flowers than in stems (Figure 2C). Group D includes five antibodies (No. 8-12), for which all antigens had relatively high expression levels in both stem and flower but low or undetectable levels in leaves (Figure 2D). In Group E, three antibodies (No. 13-15) detected proteins at higher levels in the leaves and stems than in the flowers (Figure 2E). Finally, Group F includes nine antibodies whose antigens were similarly expressed in all three examined organs (Figure 2F). We also estimated the signal intensity of the WB band for each antibody using ImageJ, graded from 1A to 5A (Table 1). Together, the WB data demonstrate that the 24 antibodies reported here have antigens that are either specific to certain organs or ubiquitously expressed.
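This grading can also be reproduced computationally; the sketch below is a minimal illustration (not the ImageJ workflow actually used) that integrates background-subtracted pixel intensity over a band region of a scanned blot and bins the result onto the 1A-5A scale. The image, region coordinates, and binning bounds are all invented for demonstration.

import numpy as np

def band_intensity(blot, roi):
    # Integrated, background-subtracted intensity inside a band ROI
    # given as (row_start, row_end, col_start, col_end).
    r0, r1, c0, c1 = roi
    band = blot[r0:r1, c0:c1].astype(float)
    background = float(np.median(blot))  # crude global background estimate
    return float(np.clip(band - background, 0, None).sum())

def grade(intensity, weakest, strongest):
    # Map an intensity onto the 1A..5A scale by linear binning.
    span = max(strongest - weakest, 1e-9)
    level = int(np.clip(1 + 4 * (intensity - weakest) / span, 1, 5))
    return f"{level}A"

# Invented stand-in for a grayscale blot scan and a band region.
blot = np.random.default_rng(0).integers(0, 255, (100, 300))
intensity = band_intensity(blot, (40, 60, 50, 90))
print(grade(intensity, weakest=0.0, strongest=2 * intensity))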
Subcellular Localization of Inflorescence Proteins by Immunofluorescence
To investigate the subcellular localization of the proteins recognized by these antibodies, we performed an immunofluorescence assay on floral histological sections. The basic structure of the Arabidopsis flower consists of four whorls of organs from the outer to the inner: sepals, petals, stamens, and carpels (Ma, 2005). Five antibodies (No. 7, 9, 12, 18, and 23) detected cell-type-dependent signals in anthers (Figure 3), three antibodies (No. 19, 21, and 24) showed signals across the whole floral structure (Supplementary Figure S2), while the other antibodies did not show positive signals in anthers. Based on the signal patterns recognized by the antibodies, we grouped the antigens into four categories (Figures 3K-N). The first group consists of antigens that were detected by No. 9 and No. 12 and localized in the sepal veins and anther epidermis (Figures 3A-D,K). In the second group, No. 18 recognized a specific signal in the sepal veins (Figures 3E,F,L). In the third group, No. 23 displayed signals in both the sepal veins and the vascular bundles of anthers (Figures 3G,H,M). The last group includes No. 7, which detected antigens in all cell types within the anther, with slightly stronger signals within the vascular bundles of the anther (Figures 3I,J,N). In summary, five of the antibodies (No. 7, 9, 12, 18, and 23) detected specific signals and could be used as markers in immunofluorescence assays during flower or anther development.
Identification of the Candidate Antigens of the 24 Antibodies by Mass Spectrometry
To identify the antigens for the 24 antibodies, we conducted IP experiments with total proteins extracted from Arabidopsis inflorescences. The antibody-antigen complex was precipitated with protein A/G beads and then detected by WB with the same antibody used in the IP. Subsequently, antibodies No. 9, 18, and 21 detected specific protein bands whose sizes were consistent with those in the input samples (Figures 4A-C), suggesting that the antigens of these three antibodies could be enriched by IP. Prior to analysis of the IP-enriched samples by MS (mass spectrometry), the samples were run on an SDS-PAGE gel followed by silver staining (Figure 4D). Based on the molecular weight detected by WB, we excised the corresponding band (Figure 4D) for subsequent MS analysis. The candidates detected by MS are shown in Supplementary Table S1. The molecular weights of the antigens detected by WB and the peptide sequences revealed by MS allowed us to conclude that the antigen recognized by No. 9 was likely AT5G53170 (Table 2), an FtsH protease 11 (Sakamoto et al., 2003). The No. 18 antigen was probably AT1G11860, the glycine cleavage T-protein, which has aminomethyltransferase activity and is involved in the mitochondrial conversion of glycine to serine during the major photorespiratory pathway (Douce and Neuburger, 1999). Finally, the No. 21 antigen is most likely AT2G25140, a casein lytic proteinase B4 (Lee et al., 2007). The results for the other seven antibodies analyzed by MS and the potential antigens for each corresponding antibody are listed in Supplementary Table S1. Using this approach, we identified 10 candidate antigens that can be recognized by their corresponding monoclonal antibodies. In particular, the antigens for No. 9, 18, and 21 can be efficiently obtained by immunoprecipitation and can serve as positive markers in such experiments.

TABLE 1 | The 24 antibodies were divided into six groups (I-VI) according to tissue specificity. The table also includes the molecular weight of the band from the western blot in Figure 2 and the relative intensity of each band, with the lowest defined as 1A and the highest defined as 5A. The detailed information from the mass spectrometry results is shown in Supplementary Table S1, which lists the possible targets for each antibody according to molecular weight based on the western blot data.

FIGURE 4 | Immunoprecipitation of the antigens using the identified antibodies. Immunoprecipitation was performed with all 24 antibodies. The total protein from flowers was loaded as the input. Three antibodies showed efficient performance in IP. Western blots are shown for antibody No. 18 (A), No. 9 (B), and No. 21 (C); the target band is indicated by a black arrow. The silver staining result is shown in (D); the bands marked with red arrows were excised for mass spectrometry analysis.
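The size-consistency check described above reduces to simple filtering; below is a minimal sketch of that logic. The antibody numbers and the first locus in each candidate list come from the text, while the molecular weights, the decoy candidates, and the 5 kDa tolerance are illustrative assumptions.

# Keep MS candidates whose predicted molecular weight agrees with the
# band size observed on the western blot. All kDa values are invented.
wb_band_kda = {"No. 9": 60.0, "No. 18": 44.0, "No. 21": 100.0}

ms_candidates = {
    "No. 9": [("AT5G53170", 60.8), ("decoy_1", 28.4)],
    "No. 18": [("AT1G11860", 44.5)],
    "No. 21": [("AT2G25140", 98.9), ("decoy_2", 71.1)],
}

tolerance_kda = 5.0  # assumed allowance for WB size estimation error

for antibody, band in wb_band_kda.items():
    hits = [locus for locus, mw in ms_candidates[antibody]
            if abs(mw - band) <= tolerance_kda]
    print(antibody, "->", hits if hits else "no size-consistent candidate")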
Functional Implication of the Three Potential Candidate Antigens
To further investigate the potential relevance of the three identified genes to flower development, we checked the expression data of these three genes in the AtGenExpress Visualization Tool (AVT) and generated a heat map (Supplementary Figure S3). We found that the genes encoding the candidate antigens of antibodies No. 9 and No. 18 showed similar expression patterns, with higher levels in vegetative tissues such as leaves than in reproductive tissues, consistent with their similar subcellular localization in the immunofluorescence assay and suggesting a potential role in vegetative tissues. Indeed, the candidate antigen of No. 9 is FtsH protease 11; the Arabidopsis genome encodes 12 FtsH proteases, which play an important role in the repair cycle of photosystem II in thylakoid membranes (Sakamoto et al., 2003). The candidate antigen of No. 18, the glycine cleavage T-protein, is one of the four proteins required for the glycine decarboxylase reaction in the photorespiratory pathway (Peterhansel et al., 2010). These previous findings further support the localization of both proteins in vegetative tissues. In contrast, the gene encoding the candidate antigen of antibody No. 21 was highly expressed in reproductive tissues such as sepals, stamens, and carpels rather than in vegetative tissues (Supplementary Figure S3), suggesting a special role in these tissues. Consistently, previous studies showed that, in contrast to the other family members, mutation of CLPB4 (No. 21) did not cause any defects in vegetative development (Lee et al., 2007); however, reproductive development in the clpb4 mutant was not tested. We also checked the expression patterns of the other seven potential targets, which showed various expression levels in vegetative and reproductive tissues (Supplementary Figure S4), but this requires further study for confirmation.
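A heat map of this kind is straightforward to regenerate from a tissue-by-gene expression matrix; the minimal sketch below uses matplotlib with the three loci named above, but the tissue panel and expression values are invented stand-ins for the AVT data.

import numpy as np
import matplotlib.pyplot as plt

genes = ["AT5G53170", "AT1G11860", "AT2G25140"]
tissues = ["leaf", "sepal", "stamen", "carpel"]
# Invented expression values standing in for the AVT data.
expr = np.array([
    [9.1, 4.2, 3.8, 4.0],
    [8.7, 3.9, 3.5, 3.7],
    [2.1, 7.8, 8.4, 8.0],
])

fig, ax = plt.subplots()
im = ax.imshow(expr, cmap="viridis", aspect="auto")
ax.set_xticks(range(len(tissues)))
ax.set_xticklabels(tissues)
ax.set_yticks(range(len(genes)))
ax.set_yticklabels(genes)
fig.colorbar(im, ax=ax, label="expression (a.u.)")
fig.savefig("antigen_expression_heatmap.png", dpi=150)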
DISCUSSION
In previous studies, several methods have been used to construct antibody libraries (Iba and Kurosawa, 1997). For example, phage display technology has become a powerful method for building antibody libraries (Winter et al., 1994). For antigen discovery, such approaches make full use of the available genetic information on antigen-encoding genes to screen the antibody pools. Meanwhile, mass spectrometry, with its growing sensitivity, has been used to identify the antigens of antibodies in human systemic autoimmune diseases (Cottrell et al., 2012). Each method has its advantages and disadvantages; however, identifying unknown antigens in a large-scale manner remains a major challenge.
The flower is essential for reproduction and is a structure unique to angiosperms. Flower development has been extensively studied using molecular genetic tools. However, very few antibodies are available for recognizing proteins involved in floral development. Here, we used a systematic procedure to generate a monoclonal antibody library from total protein extracted from Arabidopsis flowers, in the hope that the identified antibodies can be used as cell-type markers to study flower development. The procedure is quite similar to the method described previously for human proteins (Cottrell et al., 2012). By WB screening, we identified 24 antibodies that each detected one specific band in the leaf, stem, or flower. We then performed an immunofluorescence microscopy assay to determine the ability of the antibodies to reveal the subcellular localization of their corresponding antigens. Consequently, five antibodies, as described above, displayed tissue-specific signals in the flower sections. We then performed immunoprecipitation with these antibodies and determined that three of them enriched a specific band on an SDS-PAGE gel. Subsequent MS analysis identified the candidate antigens for these three antibodies. Although the identities of the antigens need to be confirmed by further genetic experiments, the antibodies can certainly recognize cellular biomarkers useful for studying flower development. Moreover, these biomarkers could potentially help in elucidating the process of flower development in other plant species.
Unfortunately, proteins with higher expression levels in the flower total protein always have a higher chance of being recognized by the immune system, and thus a higher chance of stimulating antibody production, than proteins with lower expression levels. We may therefore have developed several antibodies that are specific for ubiquitous housekeeping proteins rather than organ-specific proteins.
To our knowledge, this is the first time that antibodies against plant proteins have been produced using a library screening method. Interestingly, from total protein extracted from the flower, we were able to produce antibodies specific for proteins expressed in distinct plant organs. We also found some antibodies that bound more ubiquitously expressed proteins, which we hypothesize to be prevalent but important housekeeping proteins. Future studies may focus on elucidating the exact functions of these unknown housekeeping proteins. Further optimization of the methods used here may lead to the discovery of more antibodies that specifically recognize proteins important for flower development. This study is therefore potentially of value and can lead to the generation of very useful molecular tools.
AUTHOR CONTRIBUTIONS
QS, YW, and HM designed experiments, QS and YW collected tissues. QS conducted most experiments. QS, LZ, YW, and HM wrote the paper. All authors read and approved the final manuscript.
FUNDING
This work was supported by National Natural Science Foundation of China (31370347 and 31130006), and by funds from the State Key Laboratory of Genetic Engineering and Fudan University.
ACKNOWLEDGMENT
We greatly appreciate the valuable comments on and editing of this manuscript by Yamao Chen and Ji Qi at Fudan University. | 2017-05-05T05:57:47.828Z | 2017-02-28T00:00:00.000 | {
"year": 2017,
"sha1": "96e49dc47cf08197360a10d04869aa0cca3b8ba3",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fpls.2017.00270/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "96e49dc47cf08197360a10d04869aa0cca3b8ba3",
"s2fieldsofstudy": [
"Biology",
"Materials Science"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
221379574 | pes2o/s2orc | v3-fos-license | Views of People with Diabetes Regarding Their Experiences of the Facilitators and Barriers in Type 1 Diabetes Inpatient Care: An Interpretative Phenomenological Analysis
Background: The aim of this study was to comprehend how people with diabetes view their experiences of the possible barriers and facilitators in inpatient care for type 1 diabetes from non-specialized nurses. Design: An interpretative phenomenological analysis (IPA) was conducted. Methods: The sample consisted of people with type 1 diabetes (n = 24) who use the services of the state hospitals in Cyprus. The data were collected in two phases: first, focus groups with people with diabetes (n = 2) were conducted and analysed, and then individual semi-structured interviews with people with diabetes (n = 12) were conducted. Results: It is evident from the findings that people with diabetes experienced several barriers in diabetes inpatient care, which is concerning since this can have adverse effects on patients' outcomes. No facilitators were reported. Conclusion: Significant results were found in relation to the barriers to diabetes inpatient care. Crucially, the findings demonstrate that all these factors can negatively affect the quality of care of patients with diabetes, and most of these factors relate not only to diabetes care but also, more generally, to all patients who receive inpatient care. Interestingly, no participant reported any facilitators to their care, which further reinforced the negative perceptions of the care received.
Introduction
Type 1 diabetes mellitus (T1DM) is a prevalent condition affecting between 21 million and 42 million people globally [1]. Onset of diabetes in childhood and adolescence is associated with numerous complications, including diabetic kidney disease, retinopathy, and peripheral neuropathy, and has a substantial impact on public health resources [2], concurrently increasing the burden on healthcare systems [3]. Hospitalized patients commonly experience a number of complications that are associated with longer admissions, more frequent readmissions, and higher mortality [4].
Inpatient diabetes care is a growing concern because people with diabetes are more frequently admitted to hospital than those without the condition, with diabetes reported to be among the five most prevalent comorbidities in hospitalized and readmitted patients [5]. This worries governments . . . data, and setbacks in being released because of diabetes, particularly when diabetes was not the original reason for admission [7,18].
The above situation is worrying because, during the last decades, governments have taken many initiatives to support those with diabetes, especially by giving extra attention to primary care and prevention, while paying little attention to diabetes inpatient care and to people with T1DM. Furthermore, no evidence has been gathered about the barriers to inpatient care from the perspectives of people with T1DM, who are the recipients of this care. Since nurses have an important role in diabetes care, it is therefore relevant to eliminate any barriers that prevent them from providing adequate care, and to enhance any facilitators that allow them to provide the best quality of care. Taking all the above into consideration, the aim of the current study was to explore and understand the views of people with T1DM regarding the inpatient care they received from non-specialized nurses. If they expressed that the care was good, we sought to find out why they thought so and what factors facilitated that care. If they thought the care was inadequate, we wanted to understand what specifically prevented them from receiving better care.
Materials and Methods
The current study reflects an interpretivist epistemology, is informed by phenomenological social theories, and strictly follows the IPA methodology in order to better understand how people with T1DM, through their experiences, perceive the facilitators and barriers that affect the care they receive. In line with the theoretical underpinnings of IPA, the participant sample for this study was purposive and homogenous. More specifically, 24 people with T1DM participated in the current study. Two sources of data were used, namely focus groups and interviews. The two focus groups had six people each, and twelve individual interviews followed in order to gain a deeper understanding of the experiences of the participants. The study was conducted in accordance with the Declaration of Helsinki, and the protocol was approved by the Cyprus National Bioethics Committee (EEBK EΠ 2012 01.104). The sample covered the entire area of Cyprus, since participants were recruited from each city in Cyprus. This research followed the general principles of research ethics, consent, anonymity, and confidentiality. All the necessary approvals and licences were granted by the responsible bodies, and all participants gave their informed consent for inclusion before they participated in the study.
The analysis followed the four stages suggested by Smith and Osborn [26,27], which helped to identify shared experiences across the group of participants; to ensure the quality of coding, the process was carried out by two researchers independently. At the 1st stage of the analysis (which Smith and Osborn named "Looking for the Themes of the First Case"), the transcripts of the first focus group were read many times and the identified themes were written down in the right-hand margin. The coding process continued in the same way with the next focus group until clusters of themes were generated. In the same way, the themes that emerged were used to formulate the interview schedule for the semi-structured interviews. After the semi-structured interviews were completed, the first interview transcript was read multiple times, finalizing the cluster of themes. The themes from the first interview oriented the following analysis.
At the 2nd stage (the "Connecting Themes" stage), an Excel document was prepared with the themes that emerged in order to look for connections or combinations. After several examinations, some clustering themes were found and a table was prepared with the final themes. Once the results were compiled for the focus groups, the same procedure was then followed for the first interview with people with diabetes.
The "Continuing the Analysis of Other Cases" is the 3rd stage, according to Smith and Osborn (2008) [26]. Since there were two focus groups with people with diabetes, at this stage of the analysis the researcher incorporated the second focus group into the first focus group. This followed the idiographic approach to analysis, which starts with analysing particular examples and then finally arriving at more general themes [26,27]. For the purpose of the current study, we decided to use the themes that emerged from the first focus group to orient the following analysis, while we recognised new issues emerging from the following transcript. This allowed us to identify new and different themes. After analysing both transcripts, a final table with the dominant themes was developed and they were then reduced not according to their prevalence but according to their richness within the particular passages.
And at the 4th stage, the "Writing up the Results" stage, we translated the analytic themes into a narrative account.
Quality Markers
To ensure quality, the current research focused on five criteria: credibility, transferability, dependability, confirmability, and reflexivity. We employed the triangulation method by collecting data through focus groups, and we compared the themes extracted from the focus groups with the new themes that emerged during the interviews. As Constantinou et al. (2017) [28] and Vasileiou et al. (2018) [29] maintained, data saturation is a main quality marker, so we used the comparative method of themes saturation (CoMeTS) to assess theme saturation in the qualitative interviews. Through this process, we confirmed that there were no new data to discover, and data collection stopped.
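The core of a CoMeTS-style check can be expressed as a simple set comparison across successive interviews; the sketch below is a simplified illustration (the published procedure also re-compares interviews in different orders), and the theme labels are shortened versions of themes from this paper used purely as example data.

# Each interview contributes a set of coded themes; saturation is flagged
# once an interview adds no themes beyond those already accumulated.
interviews = [
    {"lack of resources", "availability of time"},
    {"availability of time", "lack of empathy"},
    {"lack of resources", "lack of empathy"},
    {"lack of empathy"},
]

seen, saturated_at = set(), None
for i, themes in enumerate(interviews, start=1):
    new_themes = themes - seen
    seen |= themes
    print(f"Interview {i}: {len(new_themes)} new theme(s)")
    if not new_themes and saturated_at is None:
        saturated_at = i

if saturated_at:
    print(f"Saturation first reached at interview {saturated_at}")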
Views of People with Diabetes about Facilitators and Barriers to Diabetes Care
Participants were asked to describe the factors that, according to their lived experiences, they perceived as facilitating nursing care or as making it difficult for nurses to provide inpatient care to diabetic patients. Interestingly, none of the participants referred to anything that facilitated nurses in caring for people with diabetes.
Resources
People with diabetes referred to a lack of resources, which might prevent nurses from providing effective nursing care to diabetic patients. The main barrier they identified was the state of the health system, as the participants asserted that the government was responsible for the financial constraints. They believed that this caused staff shortages and made medications and other necessary consumables unavailable for diabetes care. The following quote shows that patients tended to highlight the importance of available resources: In the public hospital, the resources they use are made with the cheapest materials . . . because the root of the problem is financial. They use the money for their cars, their limousines, their suits. Two of the participants also referred to the government's responsibility for the financial constraints that limited continuing education for nurses on diabetes care. More specifically, Participant 5 and Participant 1 explained: My perception is that everything is about the money. That's how it is in my mind. From my experience, they do not provide funding for a learning programme or exchange nurses, as they do for student exchanges. (Participant 5) Let me say that as a health ministry they cannot provide these services due to a low amount of money...I believe the health ministry should see these things and do these seminars we said earlier. (Participant 1)
Healthcare System Barriers
Participants from both focus groups claimed that the healthcare system created barriers to diabetes nursing care. They blamed governmental and organizational factors for the inadequate care and also indicated that there were nurses who were well-educated about diabetes care; however, bureaucratic procedures resulted in these nurses being underutilised. Most of the participants in the one-to-one interviews also supported the above view, describing the healthcare system in Cyprus as an "unhealthy system". The words of a participant from Focus Group 1 illustrate these findings quite well: It is the fault of the ministry, the administration...They do not give generously to health. (Focus Group 1) Participant 10 also blamed the general healthcare system in Cyprus, but she correlated this with nurses' lack of continuing education: I have lost my faith in public service since I started working. In Cyprus, we have this system "I am working for the government for the rest of my life, full stop." I assume that in all public services this is done. But I do not know if the nurses are to be blamed for this or if the presidents should be. As doctors are constantly learning, nurses need to do that as well. (Participant 10)
Diabetes Specialist Nurses
Some of the participants referred to the role of the diabetes specialist nurse. They recognised the importance of this role despite the fact that they did not see anyone with this specialization in the wards. Specifically, a participant from Focus Group 2 stated that there are nurses who are trained and ready to undertake such roles; however, the healthcare system does not take advantage of this resource.
There are specialized nurses for diabetes. There are many who have attended one or two seminars. They went to the Diabetes Association and learned a lot of things. Nurses exist, they simply are not being exploited. I believe there are more than 27. Each year, 10 nurses go for training. There are enough nurses ready to work but they are lost in the process in state hospitals. If we recognized the title of diabetes specialist, then they will work on this shift in this position. (Focus group 2) When Participant 9 was asked whether she would have accepted any information from nurses, she replied that from her experience one of the major problems with inpatient diabetes care was the lack of a diabetes specialist nurse.
Interviewer: Would you listen to a nurse if s/he was talking to you? Participant 9: Yeah, surely. But I didn't have to, because they did not have a specialist nurse and I think that was the biggest issue.
Participant 8 expressed a similar view in that he said he would have accepted and trusted any information provided by nurses only if they were specialized: If he was a specialist, yes. It means he studied this, and I would have trusted him. (Participant 8)
Information Provided to the Participants
Some of the participants expressed concerns that health professionals, including nurses, did not provide them with the necessary information. On the other hand, participants complained that the health professionals overloaded them with information regarding their treatment and the lifestyle changes that a person with diabetes needs to make. Most of them considered it as a barrier to effective diabetes management because it sometimes had the opposite effect on a patient's adherence to the treatment.
I want them to explain things to me, and they started: "You have to go on a diet, you have to do sports, you have to control your sugar." It was a traumatic experience for me and I said to myself, "I don't care: as long as I live." (Participant 4) The same participant reported that healthcare professionals provided conflicting information that might cause patients to become confused and to be uncertain about the care they receive.
Because there are many opinions, but sometimes they are contradicting, and we do not know whether they are right. For example, for the insulin dose . . . , there are doctors who will tell you that you have to use as many units as you can, and another doctor may tell you to calculate the units based on your meal, the amount of carbohydrates, or, for example, for the treatment of hypoglycaemia, some say you have to drink juice too... While others say you have to wait for 5 min and redo the analysis.
(Participant 4)
Furthermore, two of the participants from the Larnaca district referred to the repetitive and negative information nurses gave them as a factor that inhibited them from following the nurses' instructions and as their reason for choosing the private sector.
They insist on some issues and they say the same thing again and again. I go to the nurse and she talks, and the next time I go, she talks again. No, I don't go now. I was tired of hearing the nurse for four years telling me the same things. (Focus Group 1) Everything they say is negative. Only negative words you heard . . . Thus I decided to go to the private sector. (Focus Group 1)
Lack of Empathy
Some of the participants revealed difficulties in communicating with nurses, and this made them feel that there were misunderstandings between them. A participant from the focus group explained that the nurses did not understand her very well, and she was frustrated that nurses criticised her regarding her glucose levels and dietary habits. She explained that one day when she was at her scheduled doctor appointment, the on-duty nurse and the lab analyst were upset with her: She also had the woman holding the analyses that will comment on your glucose because it is high, there is the nurse who will comment on why you ate that thing and your glucose is up while you're in insulin . . . They do not understand . . . Yes, I have diabetes, yes. I have the right to eat everything, everything . . . (Focus group 1) The feeling that nurses did not understand them and their need to be understood was also expressed by Participant 5, who stated that currently she felt she did not have somebody to understand her, and this left her feeling insecure: I need to feel secure. I would like to feel that someone understands me, I want to talk to someone and feel that I'm talking at the same levels . . . Now, I feel like I'm talking and no one understands me.
(Participant 5)
Another participant believed that nurses did not respect people with diabetes in relation to their disease because of their lack of knowledge about diabetes, and because they did not pay attention to their condition: They do not respect us because they do not know. It is not something for them. It's not something serious for them. Okay she has diabetes, we will give her white bread, she will eat it and she will not talk. (Focus group 1)
Availability of Time
When people with diabetes were asked about factors that helped or prevented nurses from providing adequate care to their patients, some of the participants recognised that the limited amount of time available to nurses was a barrier. Patients understood that nurses were busy with overcrowded hospitals, which meant that nurses did not have enough time to pay them adequate attention and, as a result, some aspects of patient care were left incomplete. More specifically, patients said: The people don't understand that we all have limits, and there are too many people in the hospital, to a point that it is so full that there is no space for others. (Focus Group 2) In the public hospitals, there are so many people. Doctors are in a hurry; nurses are in a hurry, and they will not give you that importance you need. (Participant 1) Because of the pressed programme and the crowd, they were trying to explain to me what my problem was and they did not have the time to help me. (Participant 2)
Nurses' Interest in Diabetes
Some of the participants maintained that nurses did not have an interest in diabetes. They described nurses as relying on doctors to critically think about how to deal with diabetes cases because they did not care about it. They assumed that nurses considered diabetes a simple disease and that they preferred paying more attention to more critical cases.
He will not go into the process to think, "This patient has 200 mg/dl sugar [glucose], so how much should he eat?" He will just call the doctor and ask him how much insulin he has to administer. So they do not care. (Focus Group 1) I think they see diabetes just as a disease...They are diabetics; a thousand of people have diabetes, so what? They will give more importance to another person whose illness is more severe. (Participant 1) Furthermore, Participant 5 stated that through her inpatient experience she noticed the indifference of the nurses regarding her care, and said that nurses were only involved in typical tasks, such as measuring blood glucose.
They are typically involved only in measuring. Usually, I am admitted to the hospital because of ketoacidosis. When this happens, I vomit. When this happens, I would like someone to take the dirty vomiting bowl and bring me a new one. I saw some indifference. The indifference of the people. (Participant 5)
Lack of Nurses' Autonomy
One important finding that participants mentioned was that nurses did not have the autonomy to take responsibility for their patients. Participant 8 clearly stated: Nurses do not take responsibility. They do not take responsibility to tell you whether you will reduce or increase your insulin. (Participant 8) Participant 10 added that nurses' fear of taking responsibility was a problem, and she described her friend's experience with a nurse who did not take the responsibility to provide him with the necessary equipment for his insulin pump: Another problem is the nurses' fear of responsibility. It's not my own experience, but a friend of mine who has a pump and he had to pick up some sensors from the hospital. They told him that they did not have any. My friend stayed there and pushed them to give some to him, so the nurse went down to the warehouse with my friend and they found two boxes filled with sensors. The nurse who was with him still did not give him the sensors and disagreed. In the end, the nurse had to ask three doctors to take the responsibility and give the sensors to my friend. The nurse could not make a decision on his own. (Participant 10) Participant 12 also said that nurses relied on physicians and that they did not "dare" take responsibilities on their own: They have specific instructions to follow. They will not go into the process of thinking about doing something on their own. For example, I've had high levels of sugar for three days and they didn't go to the trouble of thinking about me needing anything else. They would not dare to find a doctor and ask him or to suggest to him something else. (Participant 12)
Focus on Physicians' Roles
Participants experienced different emotions when they referred to nursing care. Most of the participants claimed that they trusted and relied on their physicians, while sometimes it was apparent that they undervalued nurses' competence.
However, there was variation in participants' views and experiences. More specifically, a participant from the focus group affirmed that diabetes care was a personal issue and that each patient knew his disease better than health professionals did. Therefore, this participant thought there was no need to pay any attention to the nurse but only to pay some attention to the doctor whenever it was needed. More specifically, a patient explained, I do not believe that you need doctors to support you or nurses to explain to you, I will not get support either from the doctor or the nurse or anyone. Another participant from a focus group emphasised that physicians were available and trustworthy. They were there any time a patient needed them and they knew more, while the nurses relied on physicians' orders: But the doctor is there at all times. It happened when I was measuring my glucose level and it was 200, and I told the nurse that is only 4 o'clock, what will happen until 6 o'clock when my meal will come? And the nurse did not know, so she called the doctor. They do not know how to deal with us. But the doctor is there. The doctor will tell her what she has to do. (Focus group 1) The above views were further supported by participants who had one-to-one interviews. They said that they trusted their physicians more because doctors had greater knowledge than nurses, and that nurses needed more education in order to make them feel safe. For example, when Participant 1 was asked why she trusted her physician more, she replied: Because I know that the doctor will know more things. That is what I believe . . . I believe that nurses need to continue their education, especially in diseases like that, so that he can come to help me. (Participant 1) Participant 9 agreed with that opinion in saying that she would have accepted advice from a nurse but she felt she could trust her physician more.
In support, Participant 6 also said that he did not trust nurses' knowledge, specifically regarding insulin pumps, and that he preferred specialist physicians to educate him because of the greater knowledge they have.
I believe that doctors specialized in diabetes have more knowledge in this area. Especially for the pump, I would not trust a nurse to regulate it. (Participant 6) Participant 2 also trusted that physicians knew more and regarded them as the most appropriate professionals to guide him. He believed that nurses could give him general information regarding diabetes, but the responsibility for his medication clearly belonged to his physician. This opinion had been cultivated since he was a child. He viewed the nurse's role as simply performing the more technical actions.
For example, the nurse might also tell me that diabetes is a way of life and that it is a pancreatic insufficiency. I don't need a doctor to tell me this. However, as far as the issue is concerned about how to regulate my insulin units, the doctor has the responsibility because he is more qualified. (Participant 2) From an early age, I was told that the doctor knows better. My parents told me that the doctor is the right person to guide me. Based on this reasoning I never asked for anything from the nurses. My nurses just changed the IV fluids and asked if I felt well. (Participant 2) The same view was shared by Participant 10, who said that, based on her friends' experiences, she did not trust nurses in the state hospitals as they did not give the proper attention to their patients. Furthermore, she stated that she trusted physicians more because nurses' education was limited in comparison to doctors': I don't trust the state hospitals ever since I was diagnosed with diabetes. I would not trust to have an operation and stay there. And from the things I hear from acquaintances and friends who have diabetes and have had to be hospitalized, I have no trust [in the hospitals]. They will not give me the attention they need to give me. (Participant 10) When she was asked if she would accept any suggestions about her medications from the nurses, she clearly stated: . . . Participant 8 also regarded physicians as the only source for getting advice and information regarding his medications, while he did not trust nurses because they were not physicians. When he was asked whether the nurses could undertake this role, he stated that nurses did not have any responsibility to advise about their medications and that this was the physician's job.
Interviewer: Would you accept a nurse telling you to reduce or increase your insulin?

In summary, a range of barriers was reported by our participants. Interestingly, none of the people with diabetes referred to any facilitator of diabetes care, which is of great concern, since this can have adverse effects on patients' outcomes. In the following section, there is an extensive discussion of all the results of the current study.
Discussion
Several barriers were identified by our participants in the provision of diabetes care by nurses in the hospital setting. The lack of resources is well documented in previous studies from both developed and developing countries, and our findings add to this evidence and are in line with earlier work [30-32]. Diabetes is a complex disease that requires specific medicines and equipment, such as insulin, pens, syringes, and pumps, and as the technology is constantly being updated, the lack of resources in diabetes care remains a barrier, according to studies published in both developed and developing countries. However, we consider the lack of resources to be related to the lack of time, mainly because both are factors that lead to patients missing care. For example, a study conducted by Rivaz et al. [33] found that participants acknowledged physical resources, such as sufficient and modern equipment in the workplace, as facilitators of care and medical processes. Participants indicated the lack of sufficient equipment as a significant obstacle that negatively affected their work, because they sometimes had to miss important care activities or were delayed in delivering care, and this resulted in emotional pressures. Blackman et al. [34] also reported that inadequate physical resources and equipment predict missed care, while the accessibility of adequate contemporary equipment has a significant impact in enabling care delivery, decreasing stress levels, and improving patient satisfaction.
This can be considered a healthcare system issue because many countries, including Cyprus, are currently dealing with financial constraints, and our participants also indicated that it is the government's responsibility because of the financial restrictions. Adding to this, Williams et al. [20] confirmed that a lack of money for equipment affects quality of care, and they explained that "priority setting" involves deciding, at various system levels, how to distribute limited funds among groups of patients and available treatments. Therefore, it is important that decision makers at all levels understand that priority setting and the lack of resources are issues that lead to several other negative outcomes, and that they address this scarcity in order to improve both nurses' efficiency and patient outcomes.
Furthermore, our participants referred to the lack of diabetes specialist nurses, in the belief that the development of this role could amend the situation because of such nurses' expert knowledge. However, their roles and work settings differ among countries, with some countries having them available only in primary care settings. Therefore, the care provided in inpatient services by non-specialized nurses who do not have adequate knowledge remains questionable, since there are no universal, announced measures for dealing with this issue. This is supported by the next concern of our participants, who referred to misleading information provided by nurses. The provision of misleading information to patients can stem from nurses' lack of knowledge or from outdated knowledge. However, no research was found in the literature to study or support this finding.
Our participants also referred to the lack of empathy by nurses, which is well documented and explored in the literature. Empathy is a prosocial behaviour that is beneficial to others and is fundamental to ethical nursing practice [35]. However, Jeffrey [36] argued that there is currently a problem in the balance between the scientific-technical and psychosocial elements of patient care, and that the reasons behind the lack of compassion are fatigue, overwork, excess demand, lack of continuity, and a failure to see patients as fellow human beings despite their illness. In the literature, the most frequently reported reason for a lack of empathy from health professionals was burnout [35]. More specifically, studies reported that nurses who experienced burnout could not show empathy to their patients during nursing activities, and this could negatively affect the quality of care [37,38]. This was strongly related to the participants' responses concerning the lack of available time from nurses. Since nurses are struggling in overcrowded hospitals and do not have sufficient time to provide support to patients, this can result in nurses experiencing burnout and, consequently, in a lack of compassionate care.
Another correlated factor that can inhibit the role of nurses in diabetes care is the "lack of autonomy" in their profession, and this was identified by several participants. Over the past 50 years, there have been several studies exploring the value of autonomy in the nursing profession. Our findings do not support the evidence in the literature, which found that approximately 60.4% of non-nursing health workers [39], versus only 6.7% of nurses [40], considered that a nurse has professional autonomy, a finding that is alarming [39].
Autonomy allows nurses to make clinical decisions and exercise judgement about the care provided to their patients, using their own professional knowledge [41-43]. Nurse autonomy should be encouraged and allowed by the workplace, enabling nurses to make active decisions regardless of individual characteristics [44]. The authority to make decisions can occur at three levels: clinical, operational, and professional [41,43,45,46]. At the clinical level, nurses are allowed to make clinical decisions about the type of care they will provide to their patients [41]. Autonomy at the operational level has to do with the decisions nurses make in collaboration with administrators. Autonomy at the professional level refers to the mutual decisions that nurses make according to specific professional practices and policies that guide them within an organization [44].
The literature on nurses' autonomy showed evidence of job satisfaction as a result of autonomy, which is an important element of the work environment that allows nurses to achieve better outcomes [46,47]. The literature also provided evidence that nurses in settings that support nursing autonomy express more satisfaction with their jobs, have lower rates of burnout, and are less likely to leave the profession. In addition, nurses relate their ability to make autonomous decisions with better quality of care and greater teamwork [48].
However, the literature revealed factors that limit the autonomy of the nurse in the hospital environment that are consistent with other findings of our study. These factors include: the influence of the physician on the work of the nurse; the deficiency of technical-scientific knowledge; physical and emotional exhaustion from work overload; inadequate physical structure; scarcity of material; compliance with medical orders; and the nurse's dependence on the physician to perform some care and/or action [49]. Therefore, one could argue that the views of our participants confirmed that nurses lack autonomy, since most of these factors were also identified in our study. It is also notable that this lack of autonomy can negatively impact both the quality of care for patients and nurses' own job satisfaction.
Additionally, patients' "lack of trust" in nurses was one of the most important findings and one that is not consistent with the literature. Our participants showed that they value physicians' roles rather than nurses', whereas studies in the literature that estimated patients' levels of trust in nurses in other fields, and not specifically in diabetes care, indicated that nurses are highly trusted by patients. This inconsistency between our findings and the wider literature might be due to the fact that the medical profession has a privileged position in Cypriot society, while nursing does not. This confirmed Loizou's [50] finding that people with diabetes seem to rely to a great extent on their physicians, not only for their medications but also for psychological support. This is an interesting finding: although general trust in physicians plays a significant role in patient care, evidence in the global literature shows that public and patients' trust in the medical profession has seemingly declined [51].
However, the lack of trust in nurses from the patients' perspective could be related to factors previously reported. For example, some of the preconditions for the development of trust between patients and nurses included nurses' levels of education and length of employment; nurses' availability and accessibility; patients being adequately informed and communicated with respectfully; nurses' technical or pedagogical competence; their experience and good bedside manner; continuity of service; and a holistic approach to caring [52]. Factors that hindered the development of trust in nurse-patient relationships included lacking the necessary knowledge and skill to undertake nursing procedures; using difficult scientific language; not understanding patients' needs; depersonalising the patient by calling them not by their name but by a room number or diagnosis; avoiding nursing care activities; and keeping a distance from patients. Factors related to the job, such as lack of time, demanding workload, and absence of understanding [52], also undermined patients' trust in nurses. Most of these factors were already identified as barriers to diabetes care in our study. Therefore, one could conclude that the lack of trust observed from the patients' perspective is understandable and that it forms a cycle in which these factors affect one another.
Conclusions
Significant results were found in relation to the barriers to diabetes inpatient care. Crucially, the findings demonstrate that all these factors can negatively affect the quality of care of inpatients with diabetes, and that most of these factors are not only related to diabetes care but apply generally to all patients who receive inpatient care. Interestingly, no participant reported any facilitators to his or her care, which further reinforced the negative perceptions of the care received. No other studies have reported on this aspect of inpatient diabetes care, and the evidence generated in this study offers important insight into the diabetes care nurses provide, which may also help address the concerns expressed by other healthcare professionals and groups of patients. Although beyond the scope of the current study, future studies should also evaluate other aspects of diabetes inpatient services, such as doctors' provision of care or the role of other health professionals involved in diabetes care, in order to obtain additional data that would more comprehensively illustrate diabetes inpatient care in general. Further research could also examine the factors affecting people with diabetes, such as the finding that patients are more inclined to trust doctors than nurses, which runs counter to other research evidence.
Author Contributions: M.N.: Made substantial contributions to conception and design, acquisition of data, analysis and interpretation of data; was involved in drafting the manuscript and revising it critically for important intellectual content; agreed to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved. C.S.C.: Made substantial contributions to conception and design; was involved in drafting the manuscript or revising it critically for important intellectual content; gave final approval of the version to be published; agreed to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved. E.A.: Provided academic guidance and critically revised earlier drafts of the paper; gave final approval of the version to be published; agreed to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved; was involved in drafting the manuscript. E.L.: Made substantial contributions to analysis and interpretation of data; agreed to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved; was involved in drafting the manuscript. M.D.: Provided academic guidance and critically revised earlier drafts of the paper; gave final approval of the version to be published; agreed to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved. All authors have read and agreed to the published version of the manuscript.
Declarations: Not applicable.
Funding: This research received no external funding.
Conflicts of Interest:
The authors declare that they have no conflict of interest.
"year": 2020,
"sha1": "e7906e5d2adeebcd4b5b56b8a0759d1422d7053a",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2076-328X/10/8/120/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "2834cf606240551f8de7ed1459f17b9c57671b52",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Ore Genesis of the Chuduoqu Pb-Zn-Cu Deposit in the Tuotuohe Area, Central Tibet: Evidence from Fluid Inclusions and C–H–O–S–Pb Isotopes Systematics
The Chuduoqu Pb-Zn-Cu deposit is located in the Tuotuohe area in the northern part of the Sanjiang Metallogenic Belt, central Tibet. The Pb-Zn-Cu ore bodies in this deposit are hosted mainly by Middle Jurassic Xiali Formation limestone and sandstone, and are structurally controlled by a series of NWW-trending faults. In this paper, we present the results of fluid inclusion and isotope (C, H, O, S, and Pb) investigations of the Chuduoqu deposit. Four stages of hydrothermal ore mineralization are identified: quartz–specularite (stage I), quartz–barite–chalcopyrite (stage II), quartz–polymetallic sulfide (stage III), and quartz–carbonate (stage IV). Two types of fluid inclusions are identified in the Chuduoqu Pb-Zn-Cu deposit: liquid-rich and vapor-rich. The homogenization temperatures of fluid inclusions for stages I–IV are 318–370 °C, 250–308 °C, 230–294 °C, and 144–233 °C, respectively. Fluid salinities range from 2.07 wt.% to 11.81 wt.% NaCl equivalent. The microthermometric data indicate that fluid mixing and cooling are two important mechanisms of ore precipitation. The H and O isotopic compositions of quartz indicate a primarily magmatic origin for the ore-forming fluids, with the proportion of meteoric water increasing over time. The C and O isotopic compositions of carbonate samples indicate that a large amount of magmatic water was still involved in the final stage of mineralization. The S and Pb isotopic compositions of sulfides demonstrate that the ore minerals have a magmatic source. On a regional basis, the most likely source of the metallogenic material was regional potassium-enriched magmatic hydrothermal fluid. Specifically for the Chuduoqu Pb-Zn-Cu deposit, the magmatic activity of a syenite porphyry was the likely heat source, and this porphyry also provided the main metallogenic material for the deposit. Mineralization took place between 40 and 24 Ma. The Chuduoqu deposit is a mesothermal hydrothermal vein deposit and was formed in an extensional environment related to the late stage of intracontinental orogenesis resulting from India–Asia collision. The determination of the deposit type and genesis of Chuduoqu is important because it will inform and guide further exploration for hydrothermal-type Pb and Zn deposits in the Tuotuohe area and in the wider Sanjiang Metallogenic Belt.
Introduction
From north to south, the Tibetan Plateau comprises the Songpan-Ganze flysch complex, the eastern Qiangtang Terrane, the western Qiangtang Terrane, the Lhasa Terrane, and the Himalaya (Figure 1a, [1]), separated by the Jinsha, Longmu Tso-Shuanghu, Bangong-Nujiang, and Indus-Yarlung Zangbo suture zones, respectively. These blocks and terranes represent relicts of Tethyan oceanic material of various ages. The Sanjiang Metallogenic Belt is located between the Jinshajiang and Bangonghu-Nujiang sutures along the eastern and northern margins of the Tibetan Plateau and extends for nearly 1500 km [2]. This belt is an important Pb-Zn-producing region within the Himalayan-Tibetan metallogenic domain. The Sanjiang Metallogenic Belt evolved as part of a Paleozoic-Mesozoic Tethys archipelagic arc basin over which was sequentially superimposed a Tertiary foreland basin, a strike-slip pull-apart basin, and a thrust-nappe structure that formed during Himalayan orogenesis [3].

Figure 1. (a) Map modified after [4], showing the locations of the Chuduoqu deposit and other important Pb-Zn deposits in the Sanjiang Metallogenic Belt; (b) inset map showing the location of Tibet within the eastern Asian continent, modified after Liu et al. [5].
The conditions of metallogenesis in the Sanjiang Metallogenic Belt were favorable for the formation of large-scale deposits [6,7]. A series of Pb-Zn deposits developed during the Cenozoic, mainly along the margins of Mesozoic-Cenozoic continental basins. The main deposits are distributed from southeast to northwest and include the Jinding and Baiyangping super-large Pb-Zn deposits in the Lanping basin [8,9], the Zhaofayong Pb-Zn deposit in the Changdu Basin [10], and the Dongmozhazhua and Mohailaheng Pb-Zn deposits in the Yushu basin (Figure 1a) [11,12]. The Tuotuohe area, located in the northern part of the Sanjiang Metallogenic Belt, hosts several medium- to low-temperature hydrothermal-vein-type, porphyry-type, MVT, and VMS-type deposits [13-15]. Many Pb-Zn deposits and other sites of mineralization have been discovered in the Tuotuohe area, including the Chaqupacha super-large Pb-Zn deposit, the Chuduoqu large Pb-Zn-Cu deposit, and the Basihu medium-sized Pb-Zn deposit, as well as the Nariniya, Nabaozhalong, and Zhalaxiageyong Pb-Zn deposits (Figure 2).
Figure 2. Simplified geological map of the Tuotuohe area, showing stratigraphy, thrust structure, and the locations of the Chuduoqu ore deposit and other ore deposits; the map is simplified from six 1:250,000-scale geological maps, modified after [16].
Exploration of the Chuduoqu Pb-Zn-Cu deposit started in 2007. From 2007 to 2011, the Qinghai Fifth Institute of Geological and Mineral Survey carried out work in the northern part of the deposit, where geochemical data indicate favorable conditions for mineralization, and a set of ~N-S-oriented ore-bearing zones was discovered. These zones show good surface and shallow-subsurface mineralization but weak deep mineralization, and generally unsatisfactory indications for ore prospecting. In 2011, fracture zone SBIII was discovered; this is the major ore-controlling structural zone, formed by the NWW-oriented fault, indicating that ore prospecting should focus on this fracture zone. The Pb-Zn-rich ore body M9 in fracture zone SBIII was the first such body to be discovered in the hanging wall of the main fault, followed by Cu-Ag ore body M10 and Pb-Ag ore body M11 (Figure 3a) [15]. The Chuduoqu Pb-Zn deposit has estimated metal reserves of 402,547 t Pb, 112,672 t Zn, 9197 t Cu, and 593 t Ag [15]. Various studies have reported the geological features, mineralization, fluid inclusions, ore-controlling structures, and results of exploration of the Chuduoqu deposit [13,15]. Most of these studies have shown that this deposit formed in relation to Cenozoic magmatic hydrothermal activity. The deposit is a mesothermal hydrothermal type that is most likely associated with the intrusion of Cenozoic syenite porphyry dykes. However, the detailed characteristics and mechanisms of the mineralization of the Chuduoqu deposit are poorly constrained, especially when compared with the detailed information available for other Cenozoic Pb-Zn deposits in the Tuotuohe area.
The origin, properties, and evolution of the ore-forming fluids, as well as the genesis of the Chuduoqu deposit, are still not fully understood, which limits our overall understanding of the genesis of hydrothermal vein-type Pb-Zn mineralization in the region. Using field observations and petrographic studies, we investigated the ore-controlling structures, the composition and characteristics of fluid inclusions, and the stable (C-H-O-S) and radiogenic (Pb) isotope systematics of the Chuduoqu Pb-Zn-Cu deposit. In this paper, we report the results of our study, discuss the characteristics of the mineralizing fluids and metal sources as well as the mechanisms of mineralization, and constrain the genesis of the deposit. In doing so, we provide an important basis for understanding the Chuduoqu Pb-Zn-Cu deposit and similar deposits in the Tuotuohe area. Our findings should prove valuable for prospecting in the Tuotuohe area and in the Sanjiang Metallogenic Belt.
Geological Background
The Chuduoqu Pb-Zn-Cu deposit is located in the Tuotuohe area in the northern part of the Sanjiang Metallogenic Belt, central Tibet (Figure 1) [17]. The Tuotuohe area is positioned on the margin of the northern Qiangtang Terrane between the Jinsha River suture zone and the Longmucuo-Shuanghu suture zone [6,15,16]. The oldest rocks in the Tuotuohe area are Carboniferous clastic and carbonate sediments that are thought to have formed in a passive continental margin setting (Figure 2) [18]; the Permian to Triassic units consist mainly of marine carbonate, clastic, and volcanic rocks. Recent studies of Permian magmatic rocks in the Yushu area have suggested that these units were deposited in a continental-margin-arc setting associated with northward subduction of the Shuanghu oceanic plate between ca. 275 and 248 Ma [19]. Lower and Middle Triassic rocks are absent from the area, meaning that the Upper Triassic rocks unconformably overlie the underlying units. During the Late Triassic, the Tuotuohe area was in a subduction setting involving the southward subduction of the Jinsha oceanic plate [20].
Lower Jurassic rocks are absent from the study area (Figure 2) [16,21]; Middle to Upper Jurassic units in the area consist of clastic and carbonate rocks of (from bottom to top) the Qumocuo, Buqu, Xiali, Suowa, and Xueshan Formations [22]. During the Cretaceous, the Tuotuohe area entered a continental sedimentary stage, when thick successions of clastic deposits were laid down. The Cenozoic units comprise terrigenous clastic and carbonate rocks of the Eocene Tuotuohe Formation, the Eocene-Oligocene Yaxicuo Formation, and the Miocene Wudaoliang Formation [18,23,24], which are exposed mainly in the northern part of the Tuotuohe area (Figure 2).
The Tuotuohe area contains a large-scale thrust-nappe structure and strike-slip system that resulted from India-Eurasia collisional orogenesis during the Cenozoic [2,9]. The thrust-nappe structural belt comprises a series of NWW-trending thrust faults and folds, most of which dip to the southwest [25]. The large-scale thrust-nappe in the Tuotuohe area underwent two main episodes of thrusting, one at around 52-42 Ma and the other at around 24 Ma [25]. Between these two episodes, strike-slip activity developed with the formation of a series of strike-slip fault systems [26].
Magmatic activity in the Tuotuohe area started during the late Paleozoic and ended during the Cenozoic. The magmatism was characterized by relatively weak intrusive and strong volcanic activity. Volcanic rocks are widespread at a regional scale. These volcanic rocks comprise Permian basaltic andesite interlayered with basalt; Late Triassic andesite, basalt, and pyroclastic rocks; and Cenozoic trachyte. The Cenozoic volcanic rocks are distributed mostly around the locality of Nariniya and dated at . Magmatic intrusions are widely dispersed, with numerous igneous rock outcrops, but the total area covered by these rocks is quite small. Magmatic rocks include those formed during the Indosinian, Yanshanian, and Himalayan periods. Late Permian-early Triassic syenite and diorite bodies are found in the Chaqupacha deposit [30], and Middle Triassic diorite is present in the Basihu mine [31]. Late Cretaceous granites occur in the Longyala and Munai areas of the Tanggula mountains [32]. Paleogene olivine gabbro-diabase has been discovered in the Quemocuo mining area (Figure 2) [33], and Cenozoic porphyry bodies have been discovered in the Zhamuqu, Nariniya, Zhalaxiageyong, and Saiduopugangri areas [13,32,34]. The Cenozoic volcanic rocks and granites mentioned above consist predominantly of shoshonitic to high-K calc-alkaline rocks and were formed in a geodynamic setting of crustal shortening, thickening, and melting [27-29,32,35]. Their occurrence and ages are supporting evidence for Cenozoic crustal shortening and uplift of the plateau in the study area.
Ore Deposit Geology
Rocks exposed in the studied mining area include those of the Middle Jurassic Buqu and Xiali Formations, the Upper Jurassic Suowa Formation, and Quaternary deposits (Figure 3a). The Middle Jurassic Buqu Formation (J2b) is composed predominantly of light-gray to dark-gray limestone, with purple-red and gray argillaceous siltstone and quartz-feldspar sandstone. This formation is rich in marine fossils and is distributed primarily in the southwestern part of the mining area. The Buqu Formation rocks strike at 110-130°, dip at 30-70° to the NNE, and conformably overlie the Xiali Formation (J2x). The Middle Jurassic Xiali Formation (J2x) is an important ore-bearing unit in the study area and consists of the following three lithological sections: (a) the lower section comprises blue-gray crystalline limestone, purple-red muddy siltstone intercalated with purple-red feldspar debris sandstone, and gray feldspathic quartzitic sandstone; (b) the middle section has a lithological association of purple-red feldspathic lithic quartz sandstone interbedded with bioclastic crystalline limestone; and (c) the upper section comprises purple-red feldspar arkose intercalated with gray-green feldspathic quartzitic sandstone. Sedimentary rocks of the Xiali Formation (J2x) are distributed primarily in the central part of the mining area. In the northern-central part of the mining area, they strike at 170-200° and dip at 30-70° to the east, and in the southern-central part, the beds strike at 130-160° and dip at 30-70° to the NE. The Xiali Formation conformably underlies the Upper Jurassic Suowa Formation (J3s). The Upper Jurassic Suowa Formation (J3s) is distributed mainly in the eastern part of the mining area. The lower part of this formation consists of gray-green calcareous siltstone and mudstone intercalated with biological calcareous siltstone, interbedded with thick layers of muddy crystalline limestone, whereas the upper part comprises thick layers of dark-gray muddy crystalline limestone interbedded with thin layers of muddy crystalline limestone. The Suowa Formation rocks strike at 110° and dip at 30-50° to the east.
The studied mining area is characterized by NWW- and N-trending faults. The NWW-trending faults have four associated fracture zones: SBIII, SBIV, SBV, and SBVI. Of these, fracture zone SBIII (Figure 3b) is the main ore-controlling fracture zone in the area; it measures 1000 m in length and 200-300 m in width, strikes at 120-135°, and dips at 65-75° to the SSW. A series of parallel secondary faults is developed in the hanging wall of the main fault and is associated with fracture zones SBIV, SBV, and SBVI. A second group of faults trends N-S and occurs in the footwall of fracture zone SBIII. These N-trending faults have formed several fracture zone structures, including SBI and SBII.
The intrusive rocks in the Chuduoqu mining area include syenite porphyry veins, diabase veins, and fine-grained granite dykes; the syenite porphyry veins are oriented NE to E and NW, the diabase veins are oriented E, and the fine-grained granite dykes are oriented NE (Figure 3a). Syenite porphyry veins have been identified in boreholes. These veins occur in the rocks of the Xiali Formation (J2x) and show strong alteration, mainly baritization and limonitization, as well as carbonation and silicification. The contact zone of the syenite porphyry with the host rocks of the Xiali Formation (J2x) is highly mineralized, with the development of massive, vein-like, and disseminated pyrite, chalcopyrite, and galena (Figure 3b). Locally, crystal lithic tuff overlies the Xiali Formation (J2x) across an unconformity (Figure 3b). The crystal lithic tuff has a U-Pb age of 68.3 ± 0.7 Ma (unpublished data) and was erupted prior to the onset of mineralization.
The mineralization in the ore bodies is most enriched in and around the intersections of secondary faults (including fracture zones SBIV, SBV, and SBVI) with the main fracture zone SBIII. There are six main mineralized alteration zones in the mining area. Eleven polymetallic ore bodies have been identified, including four Pb-Zn-Ag ore bodies (M1, M5, M7, and M9), four Pb-Ag ore bodies (M2, M3, M4, and M11), one Pb-Cu-Ag ore body (M8), one Cu-Ag ore body (M10), and one Pb ore body (M6) (Figure 3a). The main ore-body-hosting rocks of the Chuduoqu deposit are those of the Xiali Formation (J2x), chiefly cataclastic micritic silty limestone and cataclastic quartz-feldspar sandstone. Of the eleven identified ore bodies, six are oriented ~N-S and five are oriented NWW. The six approximately N-S-trending ore bodies (M1-M6) occur in N-oriented fracture zones as layers and veins. These ore bodies have lengths of 150-1350 m and thicknesses of 4-16 m, dip at 42-62° to the SE, and host good-quality surface mineralization but poor, discontinuous mineralization at depth. The five NWW-trending ore bodies are distributed in NWW-oriented fracture zones.
In the Chuduoqu Pb-Zn-Cu deposit, the geology and metal resources of ore bodies M1, M2, M8, M9, and M10 have been investigated (Table S1), whereas the resources of the remaining six ore bodies remain unknown. The main ore body, M9, occurs in the altered fracture zone SBIII, which is the main ore-controlling structure. Ore body M9 is layered, extends for more than 500 m, varies in thickness from 3.0 to 24.7 m, and dips at 20° to the south. This body contains an average Pb grade of 2.22% (locally reaching 21.13%), an average Zn grade of 1.41% (locally up to 8.69%), and an average Ag grade of 49.5 g/t (locally up to 220 g/t) (Table S1), indicating very good prospecting potential. The degree of host-rock fragmentation in the main fracture zone of ore body M9 varies greatly, with the alteration and mineralization being strongest in regions of highly fractured limestone and sandstone, and weakest in regions of weak host-rock fragmentation.
The ore minerals of the Chuduoqu Pb-Zn-Cu deposit include specularite, magnetite, pyrite, chalcopyrite, bornite, tetrahedrite, pearceite, galena, sphalerite, limonite, malachite, and azurite, and the gangue minerals include quartz, calcite, dolomite, barite, sericite, chlorite, and epidote. The ores show mainly xenomorphic granular texture, with subordinate idiomorphic-hypidiomorphic granular texture. In addition, the ores exhibit cataclastic and metasomatic characteristics. The ores show mainly block and vein structures, as well as local disseminated structures. Hydrothermal alteration is widespread in the Chuduoqu Pb-Zn-Cu deposit, with the most intensive alteration occurring in and around the mineralized Pb-Zn-Cu veins. The key alteration types include silicification, chloritization, epidotization, sericitization, carbonatization, and baritization. Distinct episodes of hydrothermal alteration are recognized: an early episode of silicification, three intermediate episodes (baritization, phyllic, and propylitic), and a late carbonatization. Silicification is the most widespread alteration type in the Chuduoqu Pb-Zn-Cu deposit and coexists with minor early precipitated specularite (Figure 4a). Silicification was overprinted by baritization and phyllic alteration, which consist of barite, quartz, and sericite. Baritization and phyllic alteration appear closely related to the deposition of Cu sulfides (Figure 4c,d). Phyllic alteration was overprinted by propylitic alteration, characterized by an assemblage of chlorite, epidote, and quartz. Propylitic alteration appears closely related to base-metal sulfide deposition (Figure 4d,e). The final stage of hydrothermal alteration is carbonatization, which overprinted all the previous alteration types and coexists with minor pyrite. In addition, there is no obvious spatial zonation of the various hydrothermal alteration types; in most cases, the alteration assemblages are superimposed upon one another.
Based on field observations, mineral assemblages, and crosscutting relationships (Figure 4), we divided the mineralization history of the deposit into a hydrothermal mineralization phase (which is subdivided into four mineralization stages) and a supergene phase. The characteristics of the mineral associations in the four hydrothermal stages are as follows (Figure 5).
Quartz-Specularite Ore (Stage I)
In this stage, specularite is the main metallic mineral and occurs as needle-like crystals with idiomorphic-hypidiomorphic texture. The mineral assemblage of this stage is quartz + specularite, cut by late-stage quartz-barite-chalcopyrite veins (Figure 4a).
Quartz-Barite-Chalcopyrite (Stage II)
In this stage, the gangue minerals are mainly quartz and barite, and the ore minerals are mainly pyrite and chalcopyrite (Figure 4c). Small amounts of bornite and tetrahedrite are found, mostly with idiomorphic-hypidiomorphic texture. The mineral assemblage is quartz + barite + pyrite + chalcopyrite + bornite + tetrahedrite (Figure 4g,h). This stage is the main metallogenic stage for Cu.
Quartz-Polymetallic Sulfide Stage (Stage III)
This stage is the main stage of deposit formation. The gangue minerals are dominated by quartz, and the metallic minerals are chiefly galena, sphalerite, and pyrite, with minor pearceite and chalcopyrite (Figure 4d), mostly showing idiomorphic and hypidiomorphic textures. The mineral assemblage is quartz + galena + sphalerite + pyrite + chalcopyrite + pearceite (Figure 4e,i). This stage is the main metallogenic stage of Pb and Zn.
Quartz-Carbonate Stage (Stage IV)
This stage is characterized by quartz-calcite veins with fewer metal sulfides compared with stage III. Some fine-veined disseminated pyrite is found. Numerous calcite veins are present (Figure 4f), with minor quartz veins. The mineral assemblage is calcite + quartz + pyrite.
The supergene phase involved the formation of cerussite, malachite, azurite and limonite.
Fluid Inclusion Measurements
The study of fluid inclusions (FIs), including petrography, micro-thermometry, and laser Raman spectra analyses, was conducted at the Key Laboratory of Geological Fluid, Jilin University, Changchun, China. A total of 27 samples from the four hydrothermal stages of mineralization were prepared as two-sided, 0.2 mm-thick polished sections. FIs were observed under a binocular microscope, following which representative primary FIs were selected for micro-thermometric measurements. Secondary FIs, presenting locally as trails penetrating crystal boundaries, were not analyzed [36].
The petrographic and micro-thermometric studies were performed using a heating-freezing stage (THMS-600, Linkam Scientific Instruments Ltd, Epsom, UK) with a temperature range of −195 to 600 °C. The estimated precision of the measurements is ±0.1 °C for the interval from −120 to 70 °C and ±2 °C for the interval 100-500 °C. International standard samples (synthetic NaCl-H2O FIs) containing pure water and 25% salinity were used for calibration. The heating rate for testing was generally 0.2-5.0 °C/min, reduced to 0.5-1.0 °C/min around phase transformation points.
The compositions of individual FIs were determined using an RM-2000 laser Raman microprobe (Renishaw, New Mills, UK) with an argon ion laser and a laser source of 514 nm. The scanning range of spectra was set between 100 and 4300 cm−1, with an accumulation time of 60 s for each scan. The laser beam width was 1 µm, and the spectral resolution was 0.14 cm−1.
Ion Chromatography Analysis
Five quartz samples were chosen for ion chromatographic analyses of grouped fluid inclusion compositions. These samples were carefully chosen quartz particles with purities of >98% and particle sizes ranging from 0.2 to 0.5 mm, selected under a binocular microscope. The liquid-phase composition analyses of the FIs were conducted at the Institute of Mineral Resources, Chinese Academy of Geological Sciences, Beijing, China. The analyses were performed using an ion chromatograph (HIC-SP Super; Shimadzu Corporation, Kyoto, Japan) with a limit of detection at the µg/g level. Each sample was placed in a quartz glass tube, following which the sample was heated for 15 min at 500 °C. After cooling, 5 mL of water was added to the tube, followed by 10 min of ultrasonic oscillation. Finally, the liquid composition was determined using the ion chromatograph.
Hydrogen and Oxygen Isotope Analyses
Seven representative quartz samples were chosen for H and O isotope analysis. H and O isotopes were analyzed using a MAT253 mass spectrometer at the Analysis and Testing Research Center of Nuclear Industry, Beijing Institute of Geology, Beijing, China, referenced to Standard Mean Ocean Water (SMOW). The procedure used for the H isotope analysis was as follows. Quartz grains were crushed to a grain size of 40-60 mesh and handpicked under a binocular microscope, resulting in a purity of >95%. Water was obtained from the primary inclusions by the heating burst method [37]. Hydrogen was produced using the zinc method [38]. The H isotope composition was then determined using the mass spectrometer, for which the analytical precision was ±1‰. The procedure used for the O isotope analysis was as follows. The selected quartz was crushed to 200 mesh and then dried. Around 10-30 mg of the crushed sample was reacted with BrF5 at 550 °C to liberate oxygen, which was converted to CO2 in a carbon furnace. The O isotope composition was then determined using the mass spectrometer. The analytical precision is better than 2‰ for δD and 0.2‰ for δ18O. The isotope data are reported in standard δ notation (‰) relative to Vienna Standard Mean Ocean Water (V-SMOW) for oxygen and hydrogen.
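For readers unfamiliar with the convention, δ values express the per mil deviation of a sample's isotope ratio from that of a standard; a minimal sketch of the convention (the function name and the worked example are ours, not from the paper):

```python
# Standard delta notation: per mil (1/1000) deviation of a sample isotope
# ratio R (e.g., 18O/16O or D/H) from a reference standard such as V-SMOW.

def delta_per_mil(r_sample: float, r_standard: float) -> float:
    """delta = (R_sample / R_standard - 1) * 1000, in per mil."""
    return (r_sample / r_standard - 1.0) * 1000.0

R_VSMOW_18O = 2005.20e-6  # 18O/16O ratio of V-SMOW (Baertschi, 1976)

# A sample whose 18O/16O ratio is 1% above V-SMOW has delta18O = +10 per mil:
print(delta_per_mil(R_VSMOW_18O * 1.01, R_VSMOW_18O))  # -> ~10.0
```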
Carbon and Oxygen Isotope Analyses
Four calcite samples from quartz-calcite veins were chosen for C and O isotope analysis. The C and O isotope analyses of calcite were performed using the 100% phosphoric acid method [39] with a MAT-251EM mass spectrometer (Thermo Fisher Scientific, Waltham, MA, USA) at the Analysis and Testing Research Center of Nuclear Industry, Beijing Institute of Geology, Beijing, China. The CO2 gas produced by the reaction of phosphoric acid with the sample at 25 °C was analyzed to yield the C and O isotopic compositions of calcite. δ13C was referenced to the Pee Dee Belemnite (PDB) standard, and δ18O was referenced to the SMOW standard. The δ18OSMOW values were calculated using the following equation [40]: δ18OSMOW = 1.03086 × δ18OPDB + 30.86. The analytical precisions were ±0.1‰ for carbon isotopes and ±0.2‰ for oxygen isotopes.
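As a quick check, this conversion can be evaluated directly; a minimal sketch (the leading coefficients are the Friedman and O'Neil values commonly cited for this conversion, since only the final digits of the equation survive in the extracted text):

```python
def d18o_pdb_to_smow(d18o_pdb: float) -> float:
    """Convert delta18O from the PDB scale to the SMOW scale (per mil)."""
    return 1.03086 * d18o_pdb + 30.86

# A carbonate with delta18O(PDB) = -20 per mil corresponds to ~+10.2 per mil
# on the SMOW scale, consistent with the 9.9-12.1 per mil range reported below.
print(round(d18o_pdb_to_smow(-20.0), 2))  # -> 10.24
```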
Sulfur and Lead Isotope Analyses
Fourteen sulfide samples were chosen for S isotope analysis, and four sulfide samples were chosen for Pb isotope analysis. The S and Pb isotopes of the metal sulfides were analyzed at the Analysis and Testing Research Center of Nuclear Industry, Beijing Institute of Geology, Beijing, China. The S isotope analyses were performed using a MAT253 gas isotope mass spectrometer with an analytical precision of better than ±0.2‰. The sulfide reference materials were the GBW-04414 and GBW-04415 Ag sulfide standards, with determined δ34S values of −0.07‰ ± 0.13‰ and 22.15‰ ± 0.14‰, respectively. The Pb isotopes were measured by thermal ionization mass spectrometry using an ISOPROBE-T mass spectrometer, and the analytical precision was better than 0.005% for 1 µg of 208Pb/206Pb.
Petrographic Characteristics
The petrographic investigation revealed that the primary FIs of the various hydrothermal metallogenic stages are distributed mainly in groups, with a few isolated examples, indicating that they were trapped concurrently [36].
On the basis of the phases present, the degree of filling, and the combination relationships of FIs at room temperature, two distinct fluid inclusion types were recognized in the studied quartz, barite, and calcite samples from the Chuduoqu Pb-Zn-Cu deposit, as follows.
Liquid-rich (L-type) FIs are the most abundant fluid inclusion type in the various mineralization stages. These inclusions consist of two phases (vapor and liquid water) at room temperature. They contain a vapor phase occupying 5-20 vol.% of the inclusion volume. The sizes of these inclusions range from 5 to 20 µm, and they exhibit round, sub-rectangular, and irregular shapes (Figure 6a-c,e,g). The L-type inclusions occur in isolation or as clusters along healed crystals and homogenize to liquid during heating.
Vapor-rich (V-type) FIs are identified in the quartz crystals of mineralization stage III. These inclusions consist of two phases (vapor and liquid water) at room temperature, and 60-90 vol.% of the inclusion volume is occupied by vapor bubbles (Figure 6d,f). These FIs are generally round or oval in shape and measure 5-15 µm in size. The V-type FIs occur in isolation or coexist with L-type FIs and homogenize to vapor during heating.
Microthermometry
On the basis of field investigations of the Chuduoqu Pb-Zn-Cu deposit, samples were collected from veins in ore bodies M8-M11. The quartz, barite, and calcite samples obtained from the different mineralization stages were prepared as inclusion wafers. The microthermometric data for the FIs from the four different stages of mineralization are summarized in Table 1 and presented in Figure 7. In addition, on freezing/warming, FIs from stage II and stage III exhibit eutectic temperatures varying from −31.5 °C to −29.5 °C (Table 1), which is clearly below the eutectic temperature of the H2O-NaCl (−21.2 °C) or H2O-NaCl-KCl (−22.9 °C) system [41], indicating the presence of other ions besides Na+ and K+ (Mg2+ and Ca2+ were detected by the ion chromatography method; see Section 5.4). According to the data of Crawford [42], the freezing-point depression curves of many chloride systems in fluid inclusions are very similar at concentrations below 10 wt.%, so we approximate the fluids by the H2O-NaCl system in our salinity estimations, using the equation of Bodnar [43].
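The Bodnar [43] relation can be evaluated directly from the final ice-melting temperatures reported below; a minimal Python sketch (the function name is ours) that reproduces the stage I end-member salinities:

```python
# Salinity from final ice-melting temperature, after Bodnar (1993) [43]:
#   wt.% NaCl equiv = 1.78*theta - 0.0442*theta**2 + 0.000557*theta**3,
# where theta is the freezing-point depression in degC, i.e., theta = -Tm(ice).

def salinity_wt_pct_nacl(tm_ice_c: float) -> float:
    """Return salinity (wt.% NaCl equivalent) from Tm(ice) in degC."""
    theta = -tm_ice_c
    return 1.78 * theta - 0.0442 * theta**2 + 0.000557 * theta**3

for tm in (-5.6, -8.1):  # stage I end-member Tm(ice) values from the text
    print(f"Tm(ice) = {tm:4.1f} degC -> {salinity_wt_pct_nacl(tm):5.2f} wt.% NaCl equiv")
# -> 8.68 and 11.81 wt.%, matching the stage I salinities reported below.
```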
FI characteristics for stage I quartz are based on data for L-type inclusions. All the L-type FIs have homogenization temperatures of 318 to 370 °C, final ice-melting temperatures of −8.1 to −5.6 °C, and calculated salinities of 8.68 to 11.81 wt.% NaCl equivalent (Figure 7a,b). Initial ice-melting temperatures on freezing/warming were not confidently obtained.
FI characteristics for stage II quartz and barite are based on data for L-type FIs. These L-type FIs have homogenization temperatures of 250 to 308 °C for quartz and 250 to 287 °C for barite. The final ice-melting temperatures range from −7.8 to −4.8 °C and from −7.2 to −3.3 °C, respectively. The calculated salinities for FIs in quartz and barite range from 7.59 to 11.46 wt.% and from 5.41 to 10.73 wt.% NaCl equivalent (Figure 7c,d), respectively. On freezing/warming, FIs exhibit eutectic temperatures of −29.5 ± 0.5 °C for quartz and −31.2 ± 0.5 °C for barite, suggesting the presence of NaCl, as well as Mg and Ca chlorides, in solution [42].
FI characteristics for stage III quartz are based on data for L-type and V-type FIs. The L-type FIs have homogenization temperatures of 231 to 294 °C, final ice-melting temperatures of −6.5 to −2.9 °C, and calculated salinities of 4.80 to 9.86 wt.% NaCl equivalent (Figure 7e,f). The V-type FIs have homogenization temperatures of 230 to 259 °C, final ice-melting temperatures of −5.7 to −2.7 °C, and calculated salinities of 4.49 to 8.81 wt.% NaCl equivalent. On freezing/warming, L-type FIs from stage III exhibit eutectic (first melting) temperatures of −31.5 ± 0.5 °C, indicating the presence of NaCl, as well as Mg and Ca chlorides, in solution [42]. For V-type FIs, the liquid phase is too small to allow observation of the eutectic temperature.

FI characteristics for stage IV quartz and calcite are based on data for L-type FIs. The L-type FIs in quartz and calcite have homogenization temperatures of 162 to 233 °C and 144 to 219 °C, and final ice-melting temperatures of −4.4 to −2.6 °C and −3.1 to −1.2 °C, respectively. The calculated salinities for FIs in quartz and calcite range from 4.34 to 7.02 wt.% and from 2.07 to 5.10 wt.% NaCl equivalent (Figure 7g,h), respectively. Initial ice-melting temperatures on freezing/warming were not confidently obtained.
Overall, the microthermometry results show that the FIs record medium temperatures and medium to low salinities.
Laser Raman Spectroscopy
Representative FIs from the Chuduoqu Pb-Zn-Cu deposit were studied using laser Raman spectroscopy to determine their gas compositions. The results suggest that the vapor phases of the L-type FIs, either coexisting with the V-type FIs or as individual assemblages, are dominated by H2O, with trace amounts of CO2 and N2 (Figure 8a). Trace amounts of CO2 and N2 are also found in the vapor phases of the V-type FIs in the quartz-polymetallic sulfide stage (Figure 8b). In summary, the FIs can be described as an H2O-NaCl ± CO2 ± N2 system.
Ion Chromatography
The results obtained by the ion chromatography method are affected by the presence of secondary inclusions and must be interpreted with caution. The samples for ion chromatography were obtained from the chalcopyrite-bearing quartz veins of stage II in ore bodies M8 and M9. The cations in the liquid of the FIs were mainly Ca2+ and Mg2+, with lesser K+ and Na+. The anions were mainly Cl− and SO42−, with lesser F−, NO2−, and NO3− (Table 2).
Oxygen and Hydrogen Isotopes
Seven quartz samples of ore-forming stages I, II, and III obtained from ore bodies M5, M8, and M9 were selected for H-O isotope analysis. The results of the analysis are reported in Table 3. The δ18OV-SMOW values range from 9.5‰ to 15.3‰, the δ18OH2O values from −0.4‰ to 9.1‰, and the δDV-SMOW values from −113.2‰ to −93.8‰.
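The δ18OH2O values are presumably back-calculated from the quartz δ18OV-SMOW values at the measured homogenization temperatures; the text does not state which fractionation equation was used, so the sketch below assumes the widely used quartz-water calibration of Clayton et al. (1972):

```python
# Hedged sketch: delta18O of water in equilibrium with quartz, assuming the
# quartz-water fractionation of Clayton et al. (1972):
#   1000 ln(alpha_quartz-water) = 3.38e6 / T**2 - 3.40   (T in kelvin)

def d18o_water(d18o_quartz: float, t_celsius: float) -> float:
    """Approximate delta18O (per mil, V-SMOW) of water in equilibrium with quartz."""
    t_kelvin = t_celsius + 273.15
    frac = 3.38e6 / t_kelvin**2 - 3.40  # 1000 ln(alpha), in per mil
    return d18o_quartz - frac

# Illustrative stage I case: quartz delta18O = 15.3 per mil at Th ~ 350 degC
print(round(d18o_water(15.3, 350.0), 1))  # -> ~10.0 per mil, near the 9.1 maximum
```

The exact values depend on the temperatures actually paired with each sample, which is why this sketch only approximates the reported range.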
Carbon and Oxygen Isotopes
The C-O isotopic compositions of the carbonate samples from ore bodies M1, M9, M10, and M11 of ore-forming stage IV, together with previously published data, are listed in Table 4. The δ13CPDB values range from −7.5‰ to −5.4‰, and the δ18OSMOW values from 9.9‰ to 12.1‰.
Sulfur Isotopes
Sulfur isotopic compositions were determined for fourteen sulfide samples from ore bodies M9, M10, and M11 of the Chuduoqu deposit (Table 5). The δ34SCDT values of the sulfides range from −3.8‰ to 2.9‰, with a mean value of −0.5‰.
Lead Isotopes
Lead isotopic compositions of the sulfide samples from ore body M9, together with previously published data, are listed in
Origin and Evolution of the Ore-Forming Fluids
The O and H isotopic compositions of selected minerals at different mineralization stages are widely used in the study of the source and evolution of hydrothermal fluids [45]. In the δ18OH2O-δDH2O diagram (Figure 9), the δ18OH2O values decrease from stage I to stage III. The δ18OH2O-δDH2O data from stage I plot close to the field of magmatic water (Figure 9), indicating that magmatic fluid may have played an important role in the development of the ore-forming fluid. However, the δDH2O values of the fluids we sampled are significantly lower than those of typical magmatic water (−85‰ to −50‰) (Figure 9, [46]). Previous studies [47,48] have suggested that continuous degassing of a parent magma in an open system would decrease the δDH2O values of the residual water during the late crystallization phase, whereas the influence on the δ18OH2O value would be smaller. We attribute this isotopic signature in the Chuduoqu Pb-Zn-Cu deposit to continuous degassing of the parent magma in an open system. However, although we attempted to heat treat the samples to remove secondary inclusions prior to isotope analysis, we cannot preclude the possibility that the low δDH2O value from stage I is affected by secondary fluid inclusions.
The differences in δ13C(PDB) values among various carbon pools make C isotope analysis an important method for tracing the origin of ore-forming fluids. The δ13C(PDB) and δ18O(SMOW) values of the eight calcite vein samples of the hydrothermal ore-forming phase of the Chuduoqu deposit range from −7.5‰ to −5.4‰ and from 9.9‰ to 12.1‰, respectively. This range of δ13C(PDB) values lies within the established range of carbon isotope values for mantle or magmatic sources (−9.0‰ to −3‰; [50]). In the δ13C(PDB)-δ18O(SMOW) diagram (Figure 11), the C-O isotope composition data for the calcite vein samples of mineralization stage IV fall in or around the granite field, parallel to the trend of low-temperature alteration of granite, which shows that substantial volumes of magmatic water were still involved in the ore-forming process during late-stage mineralization.
Figure 11. The δ13C(PDB)-δ18O(SMOW) diagram of the Chuduoqu Pb-Zn-Cu deposit (base map modified after Demény et al. [51]). The data for mantle, continental carbonate, marine carbonate and sedimentary organic matter carbon are from [52-56]. This plot provides information about various processes affecting CO2 and carbonate ions, including meteoric water influence, sea water penetration, sediment contamination, high-temperature influence, low-temperature alteration [57-59], decarbonation and carbonate dissolution [60].
FI components also provide information on the sources of ore-forming fluids. The Ca 2+ content in the fluids of mineralization stage II samples has a range of 4.96-48.92 μg/g with a mean of 19.38 μg/g, the Mg 2+ content has a range of 1.15-8.09 μg/g with a mean of 3.54 μg/g, the Na + content has a range of 1.40-3.53 μg/g with a mean of 2.72 μg/g, and the K + content has a range of 1.57-3.69 μg/g with a mean of 2.59 μg/g.
Calcium in the ore-forming fluid has two possible sources: one is mixing with a Ca-rich fluid end member and the other is water-rock reaction [61], namely interaction between the ore-forming fluid and the carbonate wall rock, which can remobilize Ca2+ from the wall rock into the ore-forming fluid. However, the stable isotopic compositions show that the external fluid end member involved is meteoric water, which is characterized by low temperature, low salinity and the absence of Ca. The possibility of mixing with a Ca-rich fluid can therefore be ruled out. In the hydrothermal system of the Chuduoqu Pb-Zn-Cu deposit, water-rock reaction significantly dissolved the carbonate wall rock, remobilizing Ca2+ into the ore-forming fluid. In the same way, the Mg2+ in the ore-forming fluid is also interpreted as a result of water-rock reaction.
The Cl− content of the fluids of stage II samples ranges from 2.73 µg/g to 10.64 µg/g with a mean of 6.32 µg/g, the SO4 2− content ranges from 5.53 µg/g to 55.12 µg/g with a mean of 20.86 µg/g, and the F− content ranges from 0.04 µg/g to 0.33 µg/g with a mean of 0.15 µg/g. These data indicate that the fluids of stage II were S-rich aqueous solutions. The high S content of the fluids provided an essential source of S for metal precipitation as sulfides.
The physical and chemical properties of the fluids in different mineralization stages reflect the complexity of the fluid source and evolution. Based on the H-O isotopic data, the C-O isotopic data and the fluid inclusion characteristics, the ore-forming fluids of stage I in the Chuduoqu Pb-Zn-Cu deposit were initially derived from magmatic water. In addition, from stage I to stage II, the homogenization temperatures of FIs show a decreasing trend, whereas the salinities are very similar, indicating a simple cooling process during the evolution of the ore-forming fluids (Figure 10, [49]). Commonly, evidence for fluid boiling is provided by the coexistence of liquid-rich and vapor-rich fluid inclusion assemblages in the same growth zone or healed fractures [62,63]. Liquid-rich (L-type) and vapor-rich (V-type) fluid inclusions with contrasting salinities (Table 1) coexist in the same petrographic assemblages in stage III (Figure 6d), indicating that fluid boiling may have taken place in stage III. The ore-forming fluids of stage IV, with low temperatures (144-233 °C) and low salinities (2.07-7.02 wt.% NaCl equivalent), are closely associated with the addition of large volumes of meteoric water.
In summary, the ore-forming fluids in the Chuduoqu Pb-Zn-Cu deposit are characterized by medium to low temperatures (144-370 °C) and medium to low salinities (2.07-11.81 wt.% NaCl equivalent), and contain minor CO2 and N2 in the vapor phase. The ore-forming fluids were mixed with a growing amount of meteoric water from stage II to stage IV, resulting in a decrease in fluid temperatures (from 250-308 °C in stage II, to 230-294 °C in stage III, and eventually to 144-233 °C in stage IV) and salinities. FI data from the Chuduoqu Pb-Zn-Cu deposit suggest a simple cooling process from stage I to stage II and a mixing process from stage II to stage IV (Figure 10); small-scale fluid boiling did take place in stage III.
Source of the Ore-Forming Materials
Sulfur is the main mineralizing agent for the sulfophile elements precipitated as sulfides, and S isotopes play an important role in tracing the precipitation and enrichment of the metallogenic material [53]. Of the sulfide samples from the Chuduoqu Pb-Zn-Cu deposit, the chalcopyrite has higher δ34S(CDT) values than the galena. The chalcopyrite formed earlier than the other sulfides, and the δ34S(CDT) values of the sulfides decrease systematically according to the established order of sulfide formation, implying that the main mineralization stage of the Chuduoqu deposit developed in a stable and uniform hydrothermal environment, reflecting fractional crystallization under equilibrium conditions. The δ34S(CDT) values of the sulfides in the Chuduoqu Pb-Zn-Cu deposit range from −3.8‰ to 2.9‰, with a mean value of −0.5‰, showing that the source of S was homogeneous (Figure 12). The narrow range of δ34S(CDT) values for the ores indicates a magmatic signature [64,65]. The same processes were likely responsible for the formation of other deposits in the northern part of the Sanjiang Metallogenic Belt, as inferred from the consistency of the S isotope data for the Quemocuo Pb-Zn deposit (2.3‰ to 3.4‰, [13]), the Nariniya Pb-Zn deposit (−0.1‰ to 1.8‰, [66]), and the Narigongma porphyry Cu-Mo deposit (3.9‰ to 8.0‰, [67]) (Figure 12).
[Figure caption, in part] Base map after [68]; the fields of Cenozoic potassic volcanic rocks in the Sanjiang Metallogenic Belt are from [69-73], the fields of Cenozoic potassium-rich porphyries from [74,75], and the field of Mesozoic-Cenozoic limestone strata from [76].
A discontinuous high-K igneous province, controlled by intracontinental orogenesis [3] and crustal deformation, developed on the Tibetan Plateau during the late stage of the India-Asia collision. These K-rich porphyries and associated K-rich volcanic rocks occur mainly in the central and eastern regions of the plateau, where mineralization is well developed. This high-K magmatism has been dated [26,77], with a peak in activity at 35 ± 5 Ma [3]. Given the above, and regarding the source of metallogenic materials, previous studies have proposed a genetic relationship between the Cenozoic potassic magmatism [13,78] and the occurrence of Pb-Zn deposits in the northern part of the Sanjiang Metallogenic Belt. To investigate this further, we collated Pb isotope composition data for Mesozoic and Cenozoic limestone [76], Cenozoic potassic volcanic rocks [69-73], and Cenozoic K-rich porphyries [74,75] in the Sanjiang region. The Pb isotope compositions of the ore in the Chuduoqu Pb-Zn-Cu deposit fall within the range of the potassic magmatic rocks, and the Mesozoic-Cenozoic limestones in the Tuotuohe Basin have Pb isotope compositions similar to those of the Chuduoqu ore. Therefore, we infer that the dominant metallogenic source of the Chuduoqu deposit was a regional-scale potassic magmatic hydrothermal fluid system, with the ore-bearing Jurassic carbonate rocks providing a lesser contribution of metallogenic material.
The S and Pb isotopic compositions measured in the present study imply that the material source of the Chuduoqu Pb-Zn-Cu deposit was related to deep magmatic activity. Specifically, the formation of the ores was likely controlled by the syenite porphyry dykes, which are exposed in the study area, and possibly by other rocks at deep levels. The dyke and other igneous rocks provided both the heat source and the main metallogenic material for the formation of the Chuduoqu Pb-Zn-Cu deposit.
P-T Conditions of Ore Deposition
The trapping pressure of FIs can only be estimated when the actual trapping temperature is known, or if fluid boiling or immiscibility was occurring in the system at the time of entrapment (i.e., coeval liquid- and/or saline- and vapor-rich inclusions with identical homogenization temperatures) [49,79]. As discussed earlier, in stage III, liquid-rich (L-type) FIs are spatially associated with vapor-rich (V-type) FIs with similar homogenization temperatures and distinct salinities (Figure 6d). Petrographic and microthermometric data suggest that fluid boiling may have taken place in stage III. Under these conditions, the homogenization temperatures are interpreted to closely approximate the trapping temperatures, and they represent the best estimate of the ore-forming conditions in the Chuduoqu Pb-Zn-Cu deposit [79]. Also, Roedder and Bodnar [80] assert that the presence of cations other than Na+ has little effect on the slopes of isochores and vapor pressures as compared with the NaCl-H2O system. According to the formula given by Driesner and Heinrich [81], the trapping pressures during ore-forming stage III are estimated to range from ~3 to 8 MPa and are mostly concentrated at 5 MPa (Figure 14), which would correspond to depths of 0.3-0.8 km assuming hydrostatic conditions [82]. Hence, the initial Pb-Zn mineralization in the Chuduoqu Pb-Zn-Cu deposit mainly occurred at depths of less than 0.8 km, a shallow mineralization depth.
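As a check on the stated pressure-to-depth conversion, the hydrostatic relation can be evaluated directly (the fluid-column density below is our assumption of a roughly pure-water column, not a value given in the text):

\[
h = \frac{P}{\rho g}, \qquad h_{3\,\mathrm{MPa}} = \frac{3\times10^{6}\ \mathrm{Pa}}{1000\ \mathrm{kg\,m^{-3}} \times 9.8\ \mathrm{m\,s^{-2}}} \approx 0.31\ \mathrm{km}, \qquad h_{8\,\mathrm{MPa}} \approx 0.82\ \mathrm{km},
\]

consistent with the quoted depth range of 0.3-0.8 km.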
Ore Precipitation Mechanism
Experimental research has shown that Cu, Pb and Zn are transported as bisulfide complexes at high temperature, but as chloride complexes at low temperature, in both low- and high-salinity fluids [83]. Taking into consideration that the fluid inclusions of the Chuduoqu Pb-Zn-Cu deposit are characterized by medium to low temperatures (144-370 °C), it is likely that the copper, lead and zinc were transported mainly as chloride complexes of Pb2+, Zn2+ and Cu2+ in the ore-forming fluids.
Mineralization stage I: The FIs and H-O isotope compositional data indicate that the fluids were released from magma. As the temperature dropped to 370 °C, specularite and quartz started to precipitate.
Mineralization stage II: The fluids migrated upwards, and the temperature dropped below 308 °C. The ore-forming fluids were mixed with meteoric water from stage II to stage IV, as discussed in Section 7.1. The ore-forming fluids were thus diluted by external meteoric fluids of moderate temperature and salinity, a process related to the deposition of copper-rich sulfide [91]. Fluid mixing between magmatic fluids and meteoric water can play an important role in the deposition of metals from ore-forming fluids [90]. In addition, the precipitation of specularite in stage II increased the Cu/Fe ratio, which reduced the sulfur from S6+ (S4+) to S2− and decreased the solubility of copper in the fluids [92]. This was the main stage of Cu formation.
Mineralization stage III: Small-scale fluid boiling occurred in some quartz from stage III; however, no boiling fluid inclusion assemblage was observed in the other stages. This indicates that fluid boiling was not the key factor controlling the precipitation of lead, zinc and copper in the Chuduoqu Pb-Zn-Cu deposit. Fournier [93] argued that fluid cooling, with the accompanying decreases in temperature, salinity and pressure, increases the activity of S2− and promotes decomposition of the metal complexes [94]; this may have been the most likely ore precipitation mechanism. As shown in Figure 10, the salinities of the ore-forming fluids in the Chuduoqu Pb-Zn-Cu deposit decreased with decreasing homogenization temperatures from stage I to stage IV, suggesting that cooling might be an important factor in the formation of the Pb-Zn ore bodies. The sulfides could form by reactions between the metal-chloride complexes and reduced sulfur; illustrative reactions of this type are sketched after the stage IV description below. This was the main stage of Pb and Zn formation.
Mineralization stage IV: The ore-forming fluids migrated to the shallow subsurface. As the temperature dropped further, low-temperature sulfides such as pyrite were formed. Meteoric water could easily blend into the hydrothermal system because the boiling of the ore fluids in stage III helped open the conduits and increased the permeability of the host rocks.
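As an illustration of the kind of reaction involved in the stage III sulfide precipitation mentioned above, generic, charge-balanced reactions for chloride-complexed Pb and Zn reacting with reduced sulfur can be written as follows (these are textbook-style examples, not necessarily the specific equations used by the original authors):

\[
\mathrm{PbCl_4^{2-} + H_2S \rightarrow PbS\!\downarrow + 2H^+ + 4Cl^-}
\]
\[
\mathrm{ZnCl_4^{2-} + H_2S \rightarrow ZnS\!\downarrow + 2H^+ + 4Cl^-}
\]

Reactions of this type consume H2S, release Cl− back into solution, and acidify the fluid as the sulfides precipitate.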
Metallogenic Model
Many geological and geochemical features of the Chuduoqu Pb-Zn-Cu deposit are consistent with Cordilleran-type vein deposits [95-97]. These features include the following: (1) the deposit formed in extensional faults; (2) the main metallogenic-stage veins are rich in sulfide; (3) vein formation occurred mainly under epithermal conditions at shallow levels (less than 0.8 km for the initial Pb-Zn mineralization, assuming hydrostatic conditions at Chuduoqu); (4) the hydrothermal fluids were characterized by medium to low salinity and temperatures below 370 °C; (5) the deposit has a close spatial, temporal and genetic relationship with porphyry systems; and (6) the ore-forming fluids were derived from a magmatic-hydrothermal system. However, the Chuduoqu Pb-Zn-Cu deposit also has some characteristics that differ from Cordilleran-type vein deposits: (1) Cordilleran-type deposits generally have well-developed metal and alteration zonation at the deposit scale [96], whereas the Chuduoqu Pb-Zn-Cu deposit has no obvious metal or alteration zonation; and (2) Cu-Zn-Pb-Au-Ag-(Bi-Sb) is the metal assemblage of Cordilleran-type deposits, in which Au is a common metal [97], whereas the Chuduoqu Pb-Zn-Cu deposit contains Cu-Pb-Zn-Ag without Au.
Hou et al. [3] analyzed collisional orogenesis on the Tibetan Plateau and identified three stages: primary collision (65-41 Ma), late collision (40-26 Ma), and post collision (25-0 Ma). The India-Eurasia plate collision began at around 65 Ma, and the Indian plate was subducted northward until 41 Ma [2], causing N-S-directed tectonic compression [17,98,99] and accommodating at least 61 km [4] of shortening, as well as forming continental crust of twice the normal thickness. The shortening is ongoing [100].
During the late collision stage from 40 Ma, the thickened orogen was subjected to large-scale extension resulting from differentials in gravity potential and delamination. As a result of gravitational instability, the lower crust and lithospheric mantle were removed and sank together into the asthenospheric mantle. Subsequent asthenosphere upwelling heated the lower-crust and upper-mantle rocks, leading to their partial melting. The resultant magma was a mixture of these partially melted rocks. The upwelling of magma along deep fractures and large-scale extensional faults formed volcanic, sub-intrusive, and intrusive rocks throughout the Sanjiang region. Ore-forming fluids were released from the magma as a result of decompression during ascent to shallower depths in the later stages of intrusive magma evolution. Phase-separated fluids mixed with meteoric water and exchanged material with the surrounding rocks, then migrated to shallower regions to deposit various minerals. Disseminated and veinlet mineralization occurred during hydraulic fracturing when the ore-bearing fluids circulated around the hypabyssal intrusive mass, forming porphyry-type Cu polymetallic mineralization, such as that found in the Zhalaxiageyong, Nariniya, and Zhamuqu deposits. Ore-bearing fluids upwelled to shallower depths along deep fractures, and vein mineralization occurred along shallow extensional fractures, forming the Chuduoqu mesothermal hydrothermal vein Pb-Zn-Cu deposit (Figure 15).
Conclusions
(1) The ores of the Chuduoqu Pb-Zn-Cu deposit in central Tibet are hosted in limestone and sandstone of the Middle Jurassic Xiali Formation (J2x) and are structurally controlled by NWW-trending faults cutting the host sediments. The mineralization of the Chuduoqu Pb-Zn-Cu deposit can be divided into four stages: quartz-specularite (stage I), quartz-barite-chalcopyrite (stage II), quartz-polymetallic sulfide (stage III), and quartz-carbonate (stage IV).
(2) H, O, C, S, and Pb isotope data of samples from the Chuduoqu deposit reveal that the ore-forming fluids had a dominantly magmatic signature but were mixed with meteoric water. The most likely source of metallogenic material was a regional-scale potassic magmatic hydrothermal fluid system, and the mineralization occurred between 40 and 24 Ma. Specifically for the Chuduoqu Pb-Zn-Cu deposit, the magmatic activity of a syenite porphyry intrusion most probably provided the heat source and main metallogenic material for the mineralization.
(3) Fluid mixing and cooling mainly contributed to the ore precipitation. In addition, small-scale fluid boiling did take place in some quartz from stage III.
(4) The Chuduoqu Pb-Zn-Cu deposit is a mesothermal hydrothermal vein deposit that shares many features with Cordilleran-type vein deposits worldwide, and it formed in an extensional environment related to late intracontinental orogenesis caused by the India-Asia collision. | 2019-05-12T15:49:10.046Z | 2019-05-10T00:00:00.000 | {
"year": 2019,
"sha1": "c4fbb97de38b2b61a8efc1207ffe987dd0cb0db7",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2075-163X/9/5/285/pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "6aec76b25ebe167552c977f2e5a28c440e0e9cd0",
"s2fieldsofstudy": [
"Geology"
],
"extfieldsofstudy": [
"Geology"
]
} |
258213970 | pes2o/s2orc | v3-fos-license | “Tool-in-lesion” Accuracy of Galaxy System—A Robotic Electromagnetic Navigation BroncHoscopy With Integrated Tool-in-lesion-Tomosynthesis Technology
Background: The Galaxy System (Noah Medical) is a novel robotic endoluminal platform using electromagnetic navigation combined with integrated tomosynthesis technology and augmented fluoroscopy. It provides intraprocedural imaging to correct computerized tomography (CT)-to-body divergence and novel confirmation of tool-in-lesion (TIL). The primary aim of this study was to assess the TIL accuracy of the robotic bronchoscope with integrated digital tomosynthesis and augmented fluoroscopy. Methods: Four operators conducted the experiment using 4 pigs. Each physician performed between 4 and 6 nodule biopsies for 20 simulated lung nodules created with purple dye and a radiopacifier. Using Galaxy's "Tool-in-Lesion Tomography (TOMO+)" with augmented fluoroscopy, the physician navigated to the lung nodules, and a tool (needle) was placed into the lesion. TIL was defined by the needle in the lesion as determined by cone-beam CT. Results: The lung nodules' average size was 16.3 ± 0.97 mm, and they were predominantly in the lower lobes (65%). All 4 operators successfully navigated to all (100%) of the lesions in an average of 3 minutes and 39 seconds. The median number of tomosynthesis sweeps was 3, and augmented fluoroscopy was utilized in most cases (17/20 or 85%). TIL after the final TOMO sweep was 95% (19/20) and tool-touch-lesion was 5% (1/20). Biopsy yielding purple pigmentation was also 100% (20/20). Conclusion: The Galaxy System demonstrated digital TOMO-confirmed TIL success in 95% (19/20) of lesions and tool-touch-lesion in 5% (1/20), as confirmed by cone-beam CT. Successful diagnostic yield was achieved in 100% (20/20) of lesions, as confirmed by intralesional pigment acquisition.
Current robotic platforms (Ion, Intuitive Surgical and Monarch, Auris Surgical Robotics) respectively utilize either shape sensing (SS) technology, with an embedded fiber optic sensor that measures the shape of the catheter several hundred times per minute, or electromagnetic navigation (EMN) combined with insertion distance and image/airway recognition for guidance. 5 Both SS and EMN bronchoscopy systems are thought to be prone to CT-to-body divergence (CT2BD). CT2BD is the discrepancy between the electronic virtual target and the actual anatomic location of the peripheral lung lesion. 6 A myriad of factors is thought to cause CT2BD, including differences in lung volumes at the time of the preprocedural scan and bronchoscopy. Variations with respiration can lead to movement of the target lesion on average up to 17.6 mm, sometimes larger than the lesion itself. 7 To overcome CT2BD, many bronchoscopists are supplementing with advanced imaging devices such as digital tomosynthesis, cone beams, and O-arms. 8 Pritchett 9 reported that a total of 93 lesions with a median size of 16.0 mm were biopsied in 75 consecutive patients, with a diagnostic yield of 83.7% based on the strict AQuIRE definition. Although cone-beam CT is generally accepted as the gold standard of intraoperative imaging, the access and cost of fixed and mobile cone-beam CT systems limit widespread adoption. 10 The largest prospective single-arm cohort study of navigational bronchoscopy, NAVIGATE, demonstrated a diagnostic yield of 67.4%. 11 The advent of EMN bronchoscopy with digital tomosynthesis, DT-ENB (superDimension, Medtronic), promised an improved diagnostic yield over previous legacy systems. Katsis et al 3 used DT-ENB on 363 peripheral lung lesions and achieved a diagnostic yield of 77.4%.
In a recent single-center study, the diagnostic yield of SS versus DT-ENB was 77% (110/143 peripheral lung lesions) and 80% (158/197 peripheral lung lesions) (P = 0.4). 12 Although this study suggests SS and DT-ENB are comparable and offer a stepwise improvement over previous platforms, there is still significant room for improvement in diagnostic yield. Combining robotic bronchoscopy with integrated intraprocedural image guidance may offer an improvement in diagnostic yield over current robotic and DT-ENB platforms.
The Galaxy System (Noah Medical) is a robotic endoluminal platform that combines EMN with integrated tomosynthesis technology and augmented fluoroscopy (Fig. 1). The Galaxy System is designed to utilize the advantages of robotic bronchoscopy and mitigate CT2BD with a novel confirmation of tool-in-lesion (TIL). TIL is defined as the biopsy needle in the lesion. The primary aim of the "Tool-in-lesion" accuracy of Galaxy System, a robotic electromagnetic navigation bronchoscopy with integrated TIL-tomosynthesis (TILT) technology study (the MATCH Study), was to assess the TIL accuracy of the robotic bronchoscope with integrated digital tomosynthesis and augmented fluoroscopy, as confirmed by cone-beam CT imaging.
Animal Preparation
A porcine model (Sus scrofa domesticus) was utilized. Pig lung anatomy has significant similarities to the anatomy and physiology of the human lung and is deemed an appropriate animal model for many bronchoscopy trials. 13 The study was approved by the Sutter Institute for Medical Research Institutional Animal Care and Use Committee, Protocol NRE.02.22. Animal husbandry, preparation, and euthanasia were performed according to accepted ethical and Institutional Animal Care and Use Committee guidelines.
Each pig was anesthetized with volatile gas and underwent tracheostomy with an 8.5 mm endotracheal tube and bilateral chest tube thoracostomy. Anesthesia was monitored by a veterinarian with invasive hemodynamic monitoring.
Under fluoroscopic guidance, simulated lung nodules were created by percutaneous injection of a gelatinous solution containing purple-colored tracer material and a radiopacifier into the lung periphery. A CT was then performed with a breath hold at inspiration, with an adjustable pressure limit set to 25 cm H2O, for preprocedure planning. All injected lung nodule targets were peripheral lesions in that they were surrounded by normally aerated lung; none were endobronchial, and all were beyond the segmental bronchus so that all biopsies were transbronchial rather than endobronchial. All lesions were at least 8 mm in the largest diameter.
TIL was defined by the needle in the lesion. Tool-touch-lesion (TTL) was defined as a needle that is tangential to or touching the lesion but is not within the lesion. A center strike was defined as the needle in the middle third in 3 orthogonal planes (axial, sagittal, and coronal). TIL definitions can be seen in Figure 2 and Figure 3, with example visualizations on preplanning CTs, tomography (TOMO), and cone-beam CT (CBCT) images. Nodule size was calculated using the average of the longest and shortest dimensions on the preplanning CT scan of the chest. This scan was also used to determine whether a bronchus sign was present and to categorize the nodule's location (middle or peripheral lung zone).
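A minimal sketch of the sizing convention just described; the function name and signature are our own, not from the study:

def nodule_size_mm(longest_mm, shortest_mm):
    # Study convention: nodule size = mean of the longest and shortest
    # dimensions measured on the preplanning CT scan of the chest.
    return (longest_mm + shortest_mm) / 2.0

# e.g., a 20.0 x 12.6 mm nodule is recorded as (20.0 + 12.6) / 2 = 16.3 mm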
Anesthesia Protocol
After tracheostomy, lung recruitment was performed by giving 4 positive end-expiratory pressure (PEEP) recruitment breaths (30 cm H2O over 30 s). Repeated recruitment maneuvers were performed if atelectasis was noted on the helical CT. Pigs were ventilated using volume control ventilation with a high tidal volume (8 to 12 mL/kg) and a high-PEEP strategy set at 15 cm H2O. An apneic breath hold was performed during image acquisition, and the adjustable pressure-limiting valve was set to 25 cm H2O. If apneic breath-hold strategies were used, apnea was maintained no longer than 2 minutes. Continuous end-tidal CO2 monitoring was utilized as a safety measure. Each pig underwent between 4 and 6 navigational bronchoscopies and multiple scans with a portable C-arm and fixed cone-beam CT imaging. The trial conditions were much longer than a typical human navigational procedure, and the I-LOCATE study demonstrated that one of the major factors in atelectasis is the duration of the procedure. 14 Given the prolonged nature of the study, we utilized a modified version of a published lung navigation ventilation protocol with high tidal volume, high PEEP, and breath-hold strategies to mitigate atelectasis. 10
Navigation
Over 4 separate days, 4 operators (the authors) conducted the experiment using 4 pigs. Each physician performed 6, 5, 5, and 4 nodule biopsies, respectively, for a total of 20 lung nodule biopsies.
Galaxy Planning Software was used to identify and mark target lesions on the CTs, as well as to plan pathways on the segmented airway tree. The robotic platform was set up, airway registration was performed, and an individual target lesion was selected. The bronchoscope was guided to the desired target lesion using a handheld controller under electromagnetic guidance (Fig. 4; Galaxy System User Interface). A tomosynthesis sweep was performed utilizing a 2-dimensional fluoroscopic C-arm with a 9-inch image intensifier (OEC 9900 Elite, GE). The C-arm sweep consisted of a limited circular rotation from 30 degrees left anterior oblique to 30 degrees right anterior oblique. The bronchoscope tip was marked in the software. Based on the reconstruction algorithm, the 2-dimensional images were stacked to create a section image, on which the target nodule was marked.
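To make the stacking step concrete, the following is a generic shift-and-add tomosynthesis sketch. The Galaxy System's actual reconstruction algorithm is proprietary and not described in this paper; the parallel-projection geometry, pixel size, and function names below are our own simplifying assumptions:

import numpy as np

def shift_and_add(projections, angles_deg, depth_mm, pixel_mm=0.5):
    # Reconstruct one plane at depth_mm: shift each 2D projection laterally
    # in proportion to tan(acquisition angle), then average. Structures in
    # the chosen plane reinforce; structures at other depths blur out.
    # np.roll wraps at the image edges; a real implementation would pad.
    plane = np.zeros_like(projections[0], dtype=float)
    for img, ang in zip(projections, angles_deg):
        shift_px = int(round(depth_mm * np.tan(np.radians(ang)) / pixel_mm))
        plane += np.roll(img, shift_px, axis=1)
    return plane / len(projections)

# Sweep geometry as described above: -30 (LAO) to +30 (RAO) degrees, e.g.,
# angles = np.linspace(-30.0, 30.0, num_projections)

Repeating the reconstruction over a range of depths yields the stack of section images on which the nodule and needle can each be marked.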
The bronchoscope was then navigated to the corrected target and a needle was placed. If desired, the operator could utilize augmented fluoroscopy to help optimize the bronchoscope and tool position. A repeat tomosynthesis sweep was performed to confirm TIL. Repeated attempts were allowed at the user's discretion until the needle was optimally positioned. The digital tomosynthesis TIL confirmation was based on the "TOMO reconstruction coordinate technique" described below.
Once the final position was confirmed, a CBCT scan was captured with an 8-second sweep (396 projections at 0.5 degrees per projection). An 8-second spin was performed to optimize image quality. CBCT TIL confirmation was defined as needle placement either in the lesion or as TTL, defined as tangential to the lesion, in three orthogonal planes (axial, sagittal, and coronal). The catheter position was not adjusted after the confirmatory CBCT scan (Fig. 5; study workflow). The time to navigation was determined from the start of navigation to the time that biopsies were concluded. Radiation exposure was recorded in milligray (mGy). The number of tomosynthesis sweeps and the fluoroscopy time were recorded. After confirmation with CBCT, needle aspiration was performed. Needle passes were considered diagnostic if purple pigment was visualized on gross inspection or microscopic evaluation.
Tomography Reconstruction Coordinate Technique
A digital tomosynthesis sweep was performed as described. The reconstructed tomosynthesis image carries a coordinate representing the depth of the displayed slice within the reconstruction in the anterior-posterior direction. The distance between the optimal image slice of the needle and the optimal image slice of the lesion was measured. This coordinate feature can help inform whether the tool is in the lesion: the difference between the depth coordinates of the needle and the lesion represents the distance between the needle and the center of the lesion. If the distance was > 4 mm, repeat navigation was attempted at the discretion of the operator. Less than 4 mm was considered optimal for TILT+ confirmation (Fig. 6; an example of the TOMO reconstruction coordinate technique).
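A minimal sketch of this decision rule, with the 4 mm threshold taken from the text; the function and variable names are our own:

def tomo_coordinate_check(needle_depth_mm, lesion_depth_mm, threshold_mm=4.0):
    # Depths are the anterior-posterior coordinates of the reconstruction
    # slices in which the needle and the lesion are each in sharpest focus.
    distance = abs(needle_depth_mm - lesion_depth_mm)
    return distance, distance < threshold_mm  # True -> TILT+ confirmation

# Figure 6 examples: a 1.5 mm separation is accepted as TIL, whereas a
# 7.6 mm separation (> 4 mm) triggers repeat navigation.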
Statistical Analyses
Mean and standard deviation are reported for continuous variables; categorical variables are reported as percentages and counts. The statistical significance of differences among continuous variables was assessed using a t test. Two-tailed P values of ≤ 0.05 were considered statistically significant, and analyses were performed using Google Sheets (Version 2022).
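The authors performed these calculations in Google Sheets; an equivalent computation in Python (a sketch, not the study's actual analysis code) might look like:

import numpy as np
from scipy import stats

def summarize(values):
    # Mean and sample standard deviation, as reported for continuous variables.
    arr = np.asarray(values, dtype=float)
    return arr.mean(), arr.std(ddof=1)

def compare(group_a, group_b, alpha=0.05):
    # Two-tailed independent-samples t test; P <= 0.05 treated as significant.
    t, p = stats.ttest_ind(group_a, group_b)
    return t, p, p <= alpha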
RESULTS
The lung nodules' average size was 16.3 ± 0.97 mm, and they were predominantly in the lower lobes (65%). Only 15% (3/20) had a bronchus sign, and the average distance to the pleura was 6.88 ± 5.5 mm. All 4 operators successfully navigated to all (100%) of the lesions in an average of 3 minutes and 39 seconds. The average procedure time from navigation to the conclusion of biopsies was 32.45 minutes, ranging from 14 minutes to 87 minutes. The median number of tomosynthesis sweeps was 3 (range: 2 to 13), and augmented fluoroscopy was utilized in most cases (17/20 or 85%). Table 1 demonstrates baseline characteristics.
FIGURE 6. Tomosynthesis reconstruction coordinate technique. Top row: the lesion in the left lower lobe of the lung demonstrated a distance of 1.5 mm between the optimal image of the needle and the optimal image of the lung lesion and was considered TIL. Bottom row: the distance between the optimal image of the lung nodule and the optimal image of the needle was 7.6 mm, which was > 4 mm and was considered not tool-in-lesion.
FIGURE 5. Study workflow. The procedure was planned and the airway was registered. In each case, the provider guided the robot to the virtual target, performed a TILT+ tomography (TOMO) sweep, and then corrected the position of the catheter for optimal alignment. A final TOMO sweep was performed to confirm the tool in the lesion, followed by a final CBCT confirmation spin.
Although the intention of the biopsy was TIL confirmation, we had an incidental 60% center strike rate. Biopsy confirmation demonstrated a diagnostic yield, as defined by the presence of intralesional purple pigment dye, of 100% (20/20).
The ACCESS study, utilizing Monarch in a cadaveric model, used artificial tumor targets sized 10 to 30 mm in axial diameter implanted into 8 human cadavers. Sixty-seven nodules were evaluated in the 8 cadavers. The mean nodule size was 20.4 mm. The overall diagnostic yield was 65/67 (97%). 15 The Precision-1 ION study in a cadaveric model demonstrated a rate of successful nodule localization and puncture of 20 pseudotumors of 80%. 16 The authors recognize that porcine and cadaveric experiments with highly controlled conditions typically outperform human trials. Conclusions as to performance cannot be predicted based on animal or cadaveric trials alone. When comparing experiments, the authors believe cadaveric studies are less controlled than animal studies. Animal studies require anesthesia and are prone to bleeding and atelectasis. Both the Monarch and ION platforms performed similarly in human trials, with high rates of lesion localization and lower rates of diagnostic yield. The Monarch BENEFIT multicenter prospective trial demonstrated that lesion localization was successful in 96.2% of patients but had a diagnostic yield of 74.1%. 17 Fielding et al 18 demonstrated an ION SS localization rate of 96% and a diagnostic yield of 79%.
The Monarch and ION robotic platforms utilize a preplanning CT scan to create an electronically generated virtual target. Electronically generated virtual targets are thought to be prone to CT2BD. 6 CT2BD can occur for various reasons, including atelectasis, neuromuscular weakness due to anesthesia, tissue distortion from the catheter system, bleeding, ferromagnetic interference, and perturbations in anatomy such as pleural effusions. Radial probe endobronchial ultrasound (rEBUS), often used to localize lesions and overcome CT2BD, has intrinsic issues as an intraoperative imaging device. rEBUS is only lateral-looking, unable to assess directionality with eccentric views, and prone to false positives due to atelectasis and focal hemorrhage. However, rEBUS can be helpful with concentric lesions, with a higher rate of diagnostic yield. 10,14,19 CT2BD can increase the length of the procedure, frustrate the operator, and ultimately lead to a nondiagnostic procedure. 6 Despite the advancement of the first generation of robotic bronchoscopy systems, CT2BD limits improved diagnostic yield. Lesion localization, although high in BENEFIT and Precision-1 at 96%, may be potentially misleading to the reader, as it is only reflective of the catheter being positioned at a virtual target. In contrast, the Galaxy System provides real-time intraprocedural imaging. In theory, the addition of digital tomosynthesis to a robotic platform may offer the provider TIL confirmation and improve provider confidence.
Digital tomosynthesis algorithms have recently been introduced for the correction of CT2BD. Pritchett et al, 20 in a 2-center trial, demonstrated that the superDimension Fluoroscopic Navigation system (Medtronic) improved 3-dimensional target overlap from 59.6% (28/47) to 83.0% (39/47) before and after location correction, respectively. A prospective single-center study utilizing the first-generation LungVision system (Body Vision Medical Ltd.) demonstrated an average divergence of 14.5 mm (range: 2.6 to 33.0 mm) between preprocedure CT and intraprocedural CBCT images. The average distance between the lesion location as shown by the LungVision augmented fluoroscopy system and the actual location measured by CBCT was 5.9 mm (range: 2.1 to 10.0 mm). 20 The Galaxy System hopes to improve diagnostic yield by combining integrated digital tomosynthesis with the advantages of a robotic catheter system. In the current study, TILT+ with the TOMO reconstruction coordinate technique successfully corrected for CT2BD, with a diagnostic yield of 100%. The authors agree that human trials are required to better assess the performance of the Galaxy System.
COMPLICATIONS
No significant complications occurred.
LIMITATIONS
There are several limitations. All 4 operators had significant bronchoscopy experience with digital TOMO and cone-beam CT imaging, limiting generalizability. The study findings may not apply to less experienced operators. The lesion characteristics (a mixed gelatinous solution) do not reflect the various lesion characteristics encountered in clinical practice. The success rate with a porcine model does not necessarily indicate success in humans.
CONCLUSIONS
The Galaxy System demonstrated digital TOMO-confirmed TIL success in 95% (19/20) of lesions and TTL in 5% (1/20), as confirmed by CBCT. Successful diagnostic yield was achieved in 100% (20/20) of lesions, as confirmed by intralesional pigment acquisition. Additional clinical trials are warranted to see whether these high success rates can be reproduced in human trials.
FIGURE 1 .
FIGURE 1. Image of the robotic endoluminal navigation platform, Galaxy System. Image provided by Noah Medical.
Table 2 demonstrates navigation results.
DISCUSSION
Our study using electromagnetic-guided robotic bronchoscopy with digital tomosynthesis and the TOMO reconstruction coordinate technique for TIL confirmation showed a TIL rate of 95% and a TTL rate of 5%. | 2023-04-20T06:16:20.089Z | 2023-04-19T00:00:00.000 | {
"year": 2023,
"sha1": "6010e182f481e49b6e27ba492b2c6fc6fcf46a3e",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1097/lbr.0000000000000923",
"oa_status": "HYBRID",
"pdf_src": "WoltersKluwer",
"pdf_hash": "536fc6a34f94f90dbc8f66baf1f544377fb6f4ae",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
211555330 | pes2o/s2orc | v3-fos-license | Genomic characterization of newly completed genomes of botulinum neurotoxin-producing species from Argentina, Australia and Africa.
Botulinum neurotoxin-producing clostridia are diverse in the types of toxins they produce as well as in their overall genomic composition. They are globally distributed, with prevalent species and toxin types found within distinct geographic regions, but related strains containing the same toxin types may also be located on distinct continents. The mechanisms behind the spread of these bacteria and the independent movements of their bont genes may be understood through examination of their genetic backgrounds. The generation of 15 complete genomic sequences from bacteria isolated in Argentina, Australia, and Africa allows for a thorough examination of genome features, including overall relationships, bont gene cluster locations and arrangements, and plasmid comparisons, in bacteria isolated from various areas in the southern hemisphere. Insights gained from these examinations provide an understanding of the mechanisms behind the independent movements of these elements among distinct species.
Introduction
Botulism is caused by botulinum neurotoxin, which is known to be produced by any of seven different Clostridium species (Cruz-Morales et al. 2019), namely C. parabotulinum, C. sporogenes, C. botulinum, C. novyi sensu lato, C. argentinense, C. baratii, and C. butyricum (Williamson et al. 2016; Smith et al. 2018). Although C. parabotulinum and C. sporogenes are closely related, they have been identified as separate species (Weigand et al. 2015; Williamson et al. 2016). Both neurotoxigenic and nontoxigenic members are present in each of these species. These bacteria produce a variety of toxin types and are distributed globally (Fernandez 1994). Different bacterial species may produce the same toxin types, and different toxin types may be produced within the same species. For example, C. parabotulinum, C. botulinum, and C. sporogenes strains are known to contain bont/B genes; bont/E genes may be found in both C. botulinum and C. butyricum; and bont/F genes are located in C. parabotulinum, C. botulinum, and C. baratii strains. In some cases, the same bacteria may contain multiple bont genes and simultaneously produce two or more BoNTs. Currently, 7 toxin types and over 40 subtypes have been identified (Peck et al. 2017). Of these, toxin types A, B, E, and F are associated with human botulism and are the most commonly studied. The eight BoNT/A and eight
BoNT/F subtypes show a great deal of diversity. BoNT/A subtypes differ by 1.4-8.4% in nucleotide residues and BoNT/F subtypes differ by 1.1-24.1%, whereas the seven BoNT/B subtypes differ by only 0.9-4.0%.
There are several different manifestations of botulism, which include foodborne botulism, infant botulism, wound botulism, adult toxicoinfections, and iatrogenic botulism. Although the existence of foodborne botulism has been known for centuries, other forms of botulism have only been recently noted. For example, the condition known as infant botulism was formally recognized in 1977 (Arnon et al. 1977) and, although rare individual cases of wound botulism have been previously identified (Hall 1945;Werner et al. 2000), outbreaks of wound botulism in association with the injection of contaminated heroin have only been reported within the past 20 years (Passaro et al. 1998;Werner et al. 2000;Brett et al. 2004;Kalka-Moll et al. 2007).
Infant botulism is the most common type of botulism in the United States, with an average of 100 cases reported annually (CDC 1998). An unusually high prevalence of infant botulism has been mapped to areas in northern California and an area comprising eastern Pennsylvania/southern New Jersey/northern Delaware (CDC 1998). Outside the United States, there has been a high number of infant botulism cases in the Mendoza and Buenos Aires provinces of Argentina (Sagua et al. 2009) and several infant botulism cases have been identified in Australia as well (McCallum et al. 2015). The reason for the high rates of infant botulism in these areas is unclear, but it could be related to the heavier concentrations of BoNT-producing bacterial spores that are found in the soil in these areas. An association of infant botulism with the consumption of contaminated honey led to warnings about feeding honey to infants, which effectively minimized the number of cases associated with ingestion of this food. As the majority of current infant botulism cases do not show an association with particular foods, it is now thought that inhaled bacterial spores in dust may be a major cause of infant botulism via germination and toxin production within the intestines of babies (Koepke et al. 2008). Although this condition is seldom fatal, it can result in the need for prolonged intensive care and extended hospital stays (Opila et al. 2017;Payne et al. 2018).
In order to better understand the prevalence and types of BoNT-producing bacteria in various regions, selected laboratories have conducted soil surveys. The K.F. Meyer Laboratory at the George Hooper Institute, San Francisco, CA, executed a series of soil surveys in the United States and abroad, beginning in the early 1920s (Meyer and Dubovsky 1922), and an additional notable US survey was done by Dr L.DS. Smith of the Anaerobe Laboratory, Virginia Polytechnic Institute in 1977 (Smith 1978). These environmental samples revealed a predominance of BoNT/A strains in the western part of the United States and a predominance of BoNT/B strains in the northeastern and mid-Atlantic states. A similar set of soil surveys was conducted in Argentina in the late 1960s by Dr D. Gimenez and Dr A. Ciccarelli of the Universidad Nacional de Cuyo, Mendoza, Argentina. The characterizations of BoNT-producing bacteria were particularly interesting in the Argentinean soil samples, as they produce a diverse array of toxin types and subtypes (Gimenez and Ciccarelli 1968, 1970, 1978; Raphael et al. 2010, 2012; Luquez et al. 2012; Williamson et al. 2017).
Although the most common toxin type/subtypes in the United States are BoNT/A1 and BoNT/B1, the major toxin subtype identified from Argentinean isolates is BoNT/A2. BoNT/A2-producing strains are commonly isolated from soils in the Mendoza region (Luquez et al. 2005;Williamson et al. 2016) and BoNT/A2 is the major toxin implicated in infant botulism cases there (Sagua et al. 2009). BoNT/B2, BoNT/F4, and BoNT/F5 strains have also been isolated from soil samples, as well as bivalent BoNT/A2F4 and BoNT/A2F5 strains, and rare BoNT/G strains (Gimenez and Ciccarelli 1970;Raphael et al. 2010;Williamson et al. 2016).
The frequency of botulism cases, particularly those related to infant botulism, in North and South America has resulted in intensive study of the responsible bacterial strains, but less is known about the composition of Australian and African BoNT-producing strains. Recent publications (McCallum et al. 2015;Williamson et al. 2016) have identified BoNT/A2 and BoNT/B6 strains associated with infant botulism cases in Australia. Two African isolates, from Mauritius and Uganda, also produce BoNT/A2. It is noteworthy that, although a majority of BoNT/A strains in the northern hemisphere are BoNT/A1, BoNT/A2 predominates throughout the southern hemisphere, to include Australia, Africa, and South America (Luquez et al. 2012;McCallum et al. 2015;Williamson et al. 2016).
Recent sequencing efforts of BoNT-producing bacteria have resulted in the public availability of over 250 genome sequences, of which more than 50 are complete. Genomes from strains originating in Argentina are well represented, with 36 draft and 13 complete genomes. Published genome sequences for Argentinean BoNT-producing strains include C. parabotulinum strains (Smith et al. 2018), C. botulinum strain CDC 66177 (Raphael et al. 2012), and C. argentinense strain 89G (Halpin et al. 2017). Australian strains are less well studied, with reports describing only two draft genomes (Williamson et al. 2016), two complete genomes (this study), and sequence data from four additional strains (McCallum et al. 2015). These Australian strains are associated with BoNT/A2 (C. parabotulinum) or BoNT/B6 (C. sporogenes). The African BoNT/A2 Mauritius strain was also part of this study.
Eleven complete genomes from Argentina were sequenced as part of this study, including three BoNT/A2 strains, one BoNT/A3 strain, four BoNT/B2 strains, and three strains that produce various BoNT/F subtypes (table 1). One strain is a bivalent toxin producer, containing both bont/A2 and bont/F5 genes. Ten of these strains were isolated from soil samples in the Mendoza region of Argentina and one was isolated from contaminated food associated with a foodborne botulism case there. These 11 sequences are compared with 3 newly sequenced Australian strains, 1 African strain, and previously published complete reference genomes to determine relationships among these strains and specific features in the chromosomal and plasmid regions containing the various bont genes.
Materials and Methods
Whole-genome sequencing of isolates was accomplished using the Illumina HiSeq (300 or greater fold coverage) and PacBio RSII (50 or greater fold coverage) platforms. Initial assemblies were separated by data type. Paired-end Illumina data were assembled using Velvet (Zerbino and Birney 2008), and HGAP version 2.3.0 (Chin et al. 2013) was used for the PacBio assemblies. Genome assemblies were merged in Phrap (Gordon 2003) and annotated using the PGAP pipeline (Tatusova et al. 2016). GenBank accession numbers for the genomes and plasmids are listed in table 1.
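As a rough sketch of the hybrid-assembly flow described above (the file names, k-mer size, and wrapper function are our own; HGAP runs inside the SMRT Analysis suite, and the Phrap merge step is indicated only in comments, since their exact invocations are installation-specific):

import subprocess

def assemble_illumina(interleaved_fastq, outdir="velvet_out", k=61):
    # Short-read assembly with Velvet: velveth builds the k-mer index,
    # velvetg builds contigs. k=61 is an arbitrary illustrative choice.
    subprocess.run(["velveth", outdir, str(k),
                    "-fastq", "-shortPaired", interleaved_fastq], check=True)
    subprocess.run(["velvetg", outdir, "-exp_cov", "auto"], check=True)
    return f"{outdir}/contigs.fa"

# PacBio reads -> HGAP v2.3.0 (within SMRT Analysis) -> long-read contigs.
# Velvet contigs + HGAP contigs -> merged with Phrap -> closed genome,
# which is then submitted to the NCBI PGAP pipeline for annotation.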
Complete genome assemblies for C. parabotulinum and C. sporogenes were aligned to a reference genome, C. parabotulinum Kyoto-F (GCA_000022765.1), with NUCmer (MUMmer v3.23) (Delcher et al. 2002; Kurtz et al. 2004), and core-genome single nucleotide polymorphisms (SNPs) were called within NASP (Sahl et al. 2016). Duplicated regions of the reference genome were identified with NUCmer and removed from the analysis. A maximum likelihood phylogeny was generated from the resulting SNP matrix (bestsnps) with IQ-TREE (v1.6.10) (Nguyen et al. 2015) using the best-fit model identified by ModelFinder (Kalyaanamoorthy et al. 2017). The consistency index (parsimony-informative SNPs only) and retention index were calculated with Phangorn (Schliep 2011). Additionally, the LS-BSR tool (Sahl et al. 2014), with the DIAMOND alignment option (Buchfink et al. 2015) and default LS-BSR settings, was used to identify core proteins for the set of analyzed genomes. These core protein sequences were extracted and aligned with MUSCLE (Edgar 2004) using tools available in LS-BSR (the extract_core_genome.py tool), and a maximum likelihood phylogeny was generated with IQ-TREE (v1.6.10) as described above. Trees were viewed and rooted in FigTree v1.4.4 (http://tree.bio.ed.ac.uk/software/figtree/; last accessed March 16, 2020). Arrangements and locations of bont gene clusters were visualized using the Artemis Comparison Tool (ACT) v17.0 (Carver et al. 2005). Plasmid sequence identity and coverage tables, and plasmid synteny plots, were generated using the Basic Local Alignment Search Tool (BLAST, BlastN) (Altschul et al. 1990). Phage sequences were identified using PHASTER (Zhou et al. 2011; Arndt et al. 2016). Botulinum toxin gene sequences (A2, A3, and B2) were aligned with MUSCLE (Edgar 2004), and maximum likelihood phylogenies were generated with IQ-TREE (v1.6.10) as described above.
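A minimal sketch of the alignment-and-tree steps above (file names are placeholders; the NASP matrix step, which has its own pipeline configuration, is shown only as a comment):

import subprocess

def align_to_reference(reference_fa, assembly_fa, prefix):
    # Whole-genome alignment with NUCmer from the MUMmer package; output
    # goes to <prefix>.delta for downstream SNP calling.
    subprocess.run(["nucmer", f"--prefix={prefix}", reference_fa, assembly_fa],
                   check=True)

def maximum_likelihood_tree(snp_alignment_fa):
    # NASP collates the per-genome SNP calls into the 'bestsnps' matrix;
    # the concatenated SNP alignment is then handed to IQ-TREE, with the
    # best-fit substitution model selected by ModelFinder (-m MFP).
    subprocess.run(["iqtree", "-s", snp_alignment_fa, "-m", "MFP"], check=True)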
Results
A total of 15 strains representing different toxin types and subtypes from Argentina (11), Australia (3), and Africa (1) were sequenced, and their complete genomes, including plasmids, were compared. Features of each genome are provided in table 1. The Argentinean genomes are coded according to their source: SU strains are from soil (suelo) samples, and samples isolated from botulism outbreaks (brotes) are listed with Br followed by an abbreviation for the implicated food. Although partial genomes have been previously published for four of these strains, all of the genomes newly sequenced in this study are complete, which allows for verification of genome size and bont gene cluster locations, and identification of plasmids. All of the newly completed genomes were from C. parabotulinum strains, except for Australian strain AM 1195, which is a C. sporogenes strain that produces BoNT/B6. The genomes ranged in size from 3.9 to 4.4 million base pairs, and all contained plasmids of varying sizes except BoNT/A2 Mauritius and BoNT/B2 SU0742 (table 1). Large bont-containing plasmids ranging in size from 241 to 267 kb were identified in four of the newly sequenced strains. Additional small (10-58 kb) and large (196-215 kb) plasmids were discovered that were devoid of bont genes.
A core-genome SNP phylogeny and a phylogeny of concatenated proteins common to all analyzed strains (fig. 1) illustrate the relationships of the 15 newly sequenced strains with each other and with complete reference genomes from C. parabotulinum and C. sporogenes strains. The newly sequenced genomes can be differentiated into three major C. parabotulinum clades with an additional distinctive clade that houses C. sporogenes strains. Most of the newly completed genomes have their bont genes within the chromosome. Exceptions include the bont/F5 genes of strains A2F5 BrDura and F5 SU0634F, and the bont/B6 gene in AM282 and C. sporogenes AM1195. One major clade includes genomes from BoNT/A2 and a BoNT/A3 strain (SU1169) located in Argentina, two BoNT/A2 and BoNT/A2B6 Australian strains, and BoNT/A1(B) strains from the United States, separated into three subclades generally according to BoNT type and source location. Argentinean strains containing bont/A2 and bont/A3 genes are in one subclade, whereas Australian BoNT/A2 and the BoNT/A2F5 BrDura strains are within a second subclade. The third subclade is composed of BoNT/A1(B) strains.
A second major clade contains Argentinean BoNT/B2, BoNT/F3 SU0160, and BoNT/F4 SU1425 strains, and the African BoNT/A2 Mauritius and CDC 53174 strains. This clade also includes multiple reference strains that produce diverse BoNTs, such as HA+ BoNT/A1 strains, BoNT/A5 H04402 065, BoNT/B1 Okra, BoNT/B2 111, and BoNT/F1 strains. This clade contains multiple subclades, and as with the first major clade, the strains tend to sort by source location, but a greater variety of BoNT types is seen. BoNT/B2 SU0742 and BoNT/F4 SU1425 are an example of two very closely related strains containing different toxin types. The similar genetic composition of these strains that encode different toxin types demonstrates the mobility of the bont genes and the independent movements of entire bont gene clusters. The genomes of a second pair of closely related strains, BoNT/A2 CDC 53174 and BoNT/A2 Mauritius, which are associated with Uganda and an island near Africa, respectively, are represented here as a distinctive subclade. It should be noted that, although these strains produce the same A2 toxin as all other BoNT/A2 strains, their genomes differ from those in the other two clades that contain Argentinean BoNT/A2 strains.
A third major clade is composed of reference genomes from bivalent BoNT-producing strains, plus the monovalent HA− BoNT/A1 CDC 297, BoNT/A2 SU0634A, and BoNT/F5 SU0634F strains. Although the Argentinean BoNT/A2 SU0634A gene cluster shows the same conserved ars-related location within the chromosome as other BoNT/A2 strains, its genome differs from the Mauritius strain and other Argentinean strains. Strain SU0634F, as with all categorized BoNT/F5 strains to date, has its bont gene cluster located at a conserved site within a large plasmid. The reference BoNT/A3 Loch Maree strain, isolated in Scotland, is divergent. Particularly striking is its lack of genomic similarity with the other BoNT/A3-producing strain shown here, SU1169. The bont gene in BoNT/A3 SU1169 is chromosomally located, but the gene in the Loch Maree strain is found within a large plasmid.
A distinct clade that includes the genome of the BoNT/B6 AM1195 strain demonstrates a species-level differentiation of C. sporogenes versus C. parabotulinum (Williamson et al. 2016;Smith et al. 2018). Four additional C. sporogenes genomes are shown in figure 1. Two of these genomes are from strains that produce botulinum toxin (BoNT/B1 CDC 1632 and BoNT/B2 Prevot 594) and the other two strains are nontoxic. Examination of the core SNP phylogeny reveals that bont/B1, bont/B2, and bont/B6 genes may be found within either C. parabotulinum or C. sporogenes. Five of the eight BoNT/B6 strains documented in the literature are C. sporogenes strains (Kenri et al. 2014;Sakaguchi et al. 2014;Williamson et al. 2016), whereas BoNT/A2B6 AM282, BoNT/B6 NSW4_B6, and BoNT/B6 CDC 66221 are C. parabotulinum isolates. In all of these cases, the bont/B6 genes are located within large, highly conserved plasmids. BoNT/B2 Prevot 594 and BoNT/B2 It 450 are also C. sporogenes strains and there is evidence that their bont genes are within the same large plasmids as the BoNT/B6 strains.
The movement of these genes between species provides evidence of the mobility of bont/B genes using conjugative plasmids. It further shows an ability of these conjugative plasmids to enter a variety of bacteria and speaks to differing evolution of the toxin genes and the organisms that house them.
BoNT/A2 strains are found within each of the three major SNP phylogeny clades, and BoNT/A3, BoNT/B2, BoNT/B6, and BoNT/F5 strains are scattered among these different clades as well. Phylogenies comparing bont gene sequences from BoNT/A2, BoNT/A3, and BoNT/B2 strains (fig. 2A-C) were generated to provide an understanding of the mechanisms behind the evolution of the toxin gene subtypes.
The bont/A2 phylogeny shows three distinct clades. Although the first bont/A2 clade is highly conserved, with subclades that differ by only one to three nucleotides (0.03-0.08% difference), the bacterial strains that house these genes are found in each of the three phylogenetic clades that are shown in figure 1, emphasizing the differences between toxin gene evolution and that of the bacteria that produce these toxins. The majority of strains within this large bont gene clade are located in the southern hemisphere; in addition, several strains were isolated from honey that was likely sourced from Argentina.
A second clade containing three Italian and Corsican isolates differs from the first by 8-11 nucleotides (~0.25% difference) and appears to exhibit a conserved recombination event at the 3′ part of the sequence, possibly involving the bont/A4 gene. Seven of these nucleotide differences are among the final 69 nucleotides of the sequence (bp 3,744-3,813), a >10% difference in this area.
FIG. 1.-Phylogenies of C. parabotulinum and C. sporogenes strains. (Left) Core-genome SNP phylogeny that includes newly completed genomes from Argentina, Australia, and Africa (consistency index, 0.54; retention index, 0.83). (Right) Phylogeny inferred from an alignment of core protein sequences. Three major C. parabotulinum clades are numbered and C. sporogenes is labeled. Complete genomes generated as part of this study are identified with Argentinean isolates in blue font, Australian isolates in purple font, and the African isolate in orange font.

A final clade contains a single strain from Puerto Rico, CDC 2171, that differs by 36 nucleotides from its closest relative. Sixteen of these differences are found between nucleotides 2,620 and 2,722 (15.7% difference) and 11 are found between nucleotides 3,205 and 3,813 (1.8% difference), which makes it highly unlikely that this was the result of individual mutations. In contrast, there are only five nucleotide differences from bp 1 to 2,600, or a 0.19% difference. In both of these distinctive regions, the sequences are identical or nearly identical to the bont/A8 sequence, indicating a recombination event may have occurred between them. In contrast to the first clade, the strains in these latter clades are from Europe and the United States. The bont/A2 genes shown here are all chromosomally located adjacent to the ars operon, with the exception of one bivalent reference strain (BoNT/A2B5 CDC 1436), which contains both bont/A2 and bont/B5 within the same large plasmid.
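The recombination reasoning above rests on whether nucleotide differences are spread evenly along the gene (consistent with independent point mutations) or clustered in discrete blocks (suggesting recombination). A minimal sketch of that calculation is shown below; the sequences, block coordinates, and window size are illustrative placeholders, not the study's data.

```python
# Sketch: windowed divergence between two aligned bont gene sequences.
# A few windows with high divergence against an otherwise near-identical
# background hint at recombination rather than independent point
# mutations. Sequences here are toy placeholders.

def window_divergence(seq_a: str, seq_b: str, window: int = 100):
    """Yield (start, percent difference) per window of an ungapped alignment."""
    assert len(seq_a) == len(seq_b), "sequences must be aligned to equal length"
    for start in range(0, len(seq_a), window):
        a, b = seq_a[start:start + window], seq_b[start:start + window]
        diffs = sum(1 for x, y in zip(a, b) if x != y)
        yield start + 1, 100.0 * diffs / len(a)

def total_diffs(a: str, b: str) -> int:
    return sum(x != y for x, y in zip(a, b))

# Toy example (real bont/A genes are ~3,813 bp): one divergent block
# placed at bp 2,620-2,722, mimicking the pattern described above.
seq1 = "ACGT" * 1000
seq2 = seq1[:2619] + "T" * 103 + seq1[2722:]

print(f"overall: {100.0 * total_diffs(seq1, seq2) / len(seq1):.2f}%")
for start, pct in window_divergence(seq1, seq2, window=200):
    if pct > 5.0:  # flag windows far above the background rate
        print(f"bp {start}-{start + 199}: {pct:.1f}% divergent")
```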
It has been noted that the bont/A2 gene appears to have been generated as a result of a recombination event between bont/A1 and bont/A3 (Hill et al. 2007). This accounts for the relatively large nucleotide differences between these subtypes (3.6% between bont/A1 and bont/A2; 5.4% between bont/A2 and bont/A3). Apparently, recombination events have also shaped within-subtype differences as well.
Strains that carry bont/A3 genes are rare, with only about five to eight known isolates. All of the BoNT/A3 strains have been isolated from Argentina with the exception of the Loch Maree strain. The CDC 40234 strain is widely believed to be the same as Loch Maree, and one of the CDC strains from Argentina is likely the same as SU0945. The Scottish Loch Maree bont/A3 gene is closely related to but distinct from the Argentinean strains, differing by two to three nucleotides, indicating the differences are the result of individual nucleotide mutations. Two of these strains contain bont/A3 genes within large plasmids, whereas the others are chromosomally located, indicating an ability for the gene to migrate via the conjugative plasmid followed by integration into the chromosome.
As with the bont/A2 genes, the majority of the bont/B2 genes are located within the chromosome at the same location, the oppA/brnQ site. The bont/B2 genes are interesting in that they show a lower between-subtype and a greater within-subtype variability than bont/A2 genes. The bont/B2 genes form four general clades with two composed of single outliers. The first clade can be separated into three subclades. One subclade contains five members whose strains are all sourced from Argentina. The bont/B genes from several of these strains are nearly identical in sequence, with no more than one to two nucleotide differences between any two strains. The strains housing the bont/B2 genes in the second subclade were located in the Middle East and their toxin genes differ from the Argentinean sequences by six to seven nucleotides. The third subclade and the entire second clade contain strains that originated in Europe. The bont/B genes from the European strains differ by no more than 1-3 nucleotides from each other or from the Middle East strains but show nucleotide differences of 9-11 nucleotides when compared against the genes from southern hemisphere strains. Although that appears to be a great deal of variability, an analysis of nucleotide differences indicates that many of these mutations are common to several of the bont/B2 genes. For example, nine to ten nucleotide differences that are common to the northern hemisphere strains are absent in the Argentinean strains.
Exceptions among the European strains include the two strains that compose unique clades. The clade containing It 433 differs from the first clade by ~19 nucleotides, or 0.52% overall. Although there are only 6 nucleotide differences in the initial 3,730 nucleotides of this gene, 13 additional differences are seen within the final 127 nucleotides (10.2% difference between bp 3,742 and 3,869), suggesting a recombination event has occurred. The gene sequence for It 450, which forms another unique clade, contains a sequence similar to It 433 in the region from nucleotides 3,742 to 3,869 (7.9% difference from other bont/B2 genes), with an additional area from nucleotides 3,314 to 3,729 that differs by 21 bp, or 5.1%, versus an overall difference of 1.21% in nucleotide residues. This is an indication of possible multiple recombination events within the final 560 nucleotides of this gene. Similar differences are seen within bont/B3 and bont/B6 genes, indicating the recombination event may have involved interactions between these genes and confirming the close relationship among these three BoNT/B subtypes.
The existence of bont genes within both the plasmid and chromosome suggests a mechanism for the spread of these genes among differing bacterial strains and species using mobile genetic elements. Subsequent integration into the bacterial chromosome provides a more stable environment for the toxin genes, as plasmids may be eliminated from these strains fairly easily, resulting in a reversion to a nontoxic strain.
The completed genome sequences of these strains have also provided the opportunity to thoroughly examine and compare the location and composition of the bont gene clusters among strains, which can be difficult with fragmented draft assemblies. All of the Argentinean and Australian BoNT/A2-producing strains analyzed in this study have their bont gene clusters and surrounding genes arranged in the same manner as BoNT/A2 Kyoto-F, with the bont cluster preceding the ars gene operon (Hill et al. 2009). However, the bont/A2 gene cluster from the Mauritius (African) strain has a similar location pattern to the BoNT/F1 strains, where the ars operon precedes the bont cluster. The BoNT/F1 and Mauritius genomes are also related, being members of the same phylogenetic clade (fig. 1). Figure 3 illustrates the arrangements of the bont clusters in these BoNT/A2 strains.
It is of interest that the bont gene clusters associated with the chromosomally located bont/A2 gene and the plasmid-located bont/F5 gene within the BoNT/A2F5 BrDura strain are highly conserved, sharing 98% identity among the nontoxin gene cluster components. The DNA sequence between these two bont clusters diverges ~50 bp prior to the terminus of the ntnh gene, similar to the divergence seen between BoNT/A2 and BoNT/F1 strains (Hill et al. 2009). This is an indication of a recombination event within the ntnh gene which may have placed the bont/A2 gene within a bont/F5 gene cluster, or vice versa.
Insertion sequences (IS elements) and transposases, or their remnants, are commonly found in regions containing bont gene clusters. These elements may facilitate insertion of bont gene clusters into plasmids and/or the chromosome. Among the newly sequenced genomes, an unusual internal feature that is seen in the bont gene clusters within Argentinean BoNT/A2, BoNT/A3, BoNT/F1, and BoNT/F5 strains is the presence of a 1.2-kb insert that contains a degraded IS6' insertion element (Hill et al. 2009;Luquez et al. 2009) between the orfX1 and botR genes. Although the function of this element is unknown, its presence indicates a relationship among the bont clusters of these strains that is not shared by their genomes. The insert is absent in BoNT/A1(B) strains, Australian BoNT/A2 strains, Italian and French A2 strains, and the Argentinean HA− A1 strain SU0729, as well as strain A2 SU0634A (Luquez et al. 2012).
The bont/B2 gene clusters within the four BoNT/B2 Argentinean strains were all located at the oppA/brnQ site within the chromosome, which was previously published as the insertion location for ha+ bont/A1 and silent bont/(B) gene clusters (Hill et al. 2009). The chromosome is a common location for bont/B2 genes (Franciosa et al. 2009;Hill et al. 2009) and bont/B3 gene clusters are located there as well, whereas bont/B1, bont/B5, and bont/B6 gene clusters are generally found in specific locations within closely related large plasmids (Franciosa et al. 2009;Hill et al. 2009;Williamson et al. 2016) and bont/B4 (nonproteolytic bont/B) gene clusters are located within smaller conserved plasmids (Carter et al. 2014).
The bont gene clusters from four Argentinean BoNT/F strains (BoNT/F3, BoNT/F4, BoNT/F5, and BoNT/A2F5) were compared. As has been previously noted, the bont/F4 gene cluster in SU1425 is chromosomally located, placed within a split pulE gene (Dover et al. 2013;Raphael et al. 2014). Similarly, the bont gene clusters within BoNT/F3 SU0160 and BoNT/F4 SU1425 were also discovered to be located between split pulE gene fragments (fig. 4). The inserted 33.6-kb DNA sequence in the BoNT/F4 SU1425 strain consists of the 16.6-kb bont gene cluster including the lycA gene, immediately preceded by a transposase, three recognized genes, and four hypothetical genes. The hypothetical gene adjacent to the split pulE fragment contains the same gene sequence as the transposase that precedes the lycA gene, suggesting that this sequence may be involved in facilitating the insertion of the bont cluster into the chromosome. In addition to the conserved genetic sequence contained within the pulE gene fragments, strain BoNT/F3 SU0160 contains a novel 57-kb intact prophage sequence that matches Bacilli_Moonbeam_NC_027374 (fig. 4). A similar split pulE gene has been discovered in BoNT/A3 strain Loch Maree (Dover et al. 2013). However, the inserted genetic material in this case does not include bont genes but instead contains a distinct intact prophage sequence with similarity to phiCD38_2_NC_015568. Thus, the pulE location is not strictly specific to the insertion of bont genes but may also facilitate recombination events involving other DNA sequences as well. With the BoNT/F5 SU0634F and BoNT/A2F5 BrDura strains, the bont/F5 gene clusters are located within a plasmid and intact pulE genes are found within the chromosome. Figure 4 details the insertion sites for the bont gene clusters in BoNT/F3 SU0160 and BoNT/F4 SU1425 and their relationship with the intact pulE gene in BoNT/F5 SU0634F.
The presence of one, two, or three plasmids of differing sizes was identified in 13 of the 15 strains. Two strains housed plasmids that contained bont gene clusters in combination with chromosomally located BoNT/A2 genes (BoNT/A2B6 AM282 and BoNT/A2F5 BrDura) and two strains (BoNT/B6 AM1195 and BoNT/F5 SU0634F) contained only plasmid-borne gene clusters. Plasmid-borne bont gene clusters are consistently found at one of two discrete sites. The bont/F5 gene cluster in both the A2F5 BrDura and F5 SU0634F strains was located at the same site, sometimes designated the "A/F" site, within virtually identical 240-kb plasmids (fig. 5A). The bont/B6 gene clusters for the C. parabotulinum BoNT/A2B6 AM282 and C. sporogenes BoNT/B6 AM1195 strains were also located at the same distinct location designated as the "B" site within conserved 266-267-kb plasmids (fig. 5B).
Three additional large plasmids that did not contain bont gene clusters were discovered among the newly sequenced genomes. These related plasmids ranged in size from 197 to 215 kb and showed no similarities to the large bont gene-containing plasmids. Although the bont/A3 gene in the reference BoNT/A3 Loch Maree strain is within a plasmid, BoNT/A3 SU1169 has its bont genes within the chromosome at a unique location adjacent to the HepA/SNF2 gene and it houses a large distinctive plasmid (196,981 bp) that is devoid of bont genes. BoNT/A2 SU0634A and BoNT/F5 SU0634F were originally thought to be the same strain but later it was discovered that they produced different toxins. Both strains contain the same large plasmid that is devoid of toxin genes (fig. 5C), but SU0634F also contains a second large plasmid that houses the bont/F5 gene, whereas the bont/A2 gene in SU0634A is chromosomally located. Figure 5C and D illustrate the relationships among the large plasmids that are lacking bont genes.
Smaller plasmids present in newly sequenced strains showed evidence of mosaicism. Closely related 10.1-kb plasmids were discovered in Argentinean strains BoNT/B2 SU0515, BoNT/B2 SU0609, and BoNT/F4 SU1425 and in Australian strain BoNT/A2 AM1051 (table 2). In addition, 17.4-17.5-kb plasmids were identified in Argentinean BoNT/A2 and BoNT/F5 strains that were closely related and 23.3-kb plasmids were found in BoNT/B2 SU0305, BoNT/B2 SU0515, and BoNT/B2 SU0609. The latter two strains share the presence of both 10.1- and 23.3-kb plasmids. When the smaller plasmids were compared using BLAST analysis (BlastN), it was found that 20-40% of the length of the 17-kb plasmids matched with the 10-kb plasmids, and another 55% matched with the 23-kb plasmid sequences (table 2). However, the 10- and the 23-kb plasmids showed no significant similarity when compared, indicating that they are distinctive plasmids that appear to have been formed from the splitting of a progenitor 17-kb plasmid with subsequent additions of genetic material. Synteny plots illustrating this are shown in figure 6 using pRSJ3_1, a 17-kb plasmid from F3 SU0160, and the 10-kb and 23-kb plasmids pRSJ21_1 and pRSJ21_2 from B2 SU0515.

FIG. 6.-Synteny plots of SU0515 pRSJ21_1 and SU0515 pRSJ21_2 against SU0160 pRSJ3_1 showing the mosaic nature of these plasmids. The portion missing in the 23-kb plasmid but present in the 17-kb plasmid is located in the 10-kb plasmid.
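As a rough illustration of the coverage comparison described above, the sketch below estimates what fraction of one plasmid's sequence is shared with another using exact k-mer matching. This is only a crude stand-in for the BlastN coverage analysis actually used (exact k-mers allow no mismatches, so it understates BLAST coverage), and the FASTA file names are hypothetical placeholders.

```python
# Rough proxy for the BlastN coverage comparison: fraction of one
# plasmid's k-mers found in another. A mosaic progenitor split would
# show the 17-kb plasmid sharing blocks with both smaller plasmids,
# while the 10-kb and 23-kb plasmids share little with each other.
# File names are hypothetical placeholders.

def read_fasta(path: str) -> str:
    with open(path) as fh:
        return "".join(line.strip() for line in fh if not line.startswith(">"))

def kmers(seq: str, k: int = 31) -> set:
    seq = seq.upper()
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def shared_fraction(query: str, subject: str, k: int = 31) -> float:
    """Fraction of query k-mers present in subject (0.0-1.0)."""
    q = kmers(query, k)
    return len(q & kmers(subject, k)) / len(q) if q else 0.0

p17 = read_fasta("pRSJ3_1.fasta")   # 17-kb plasmid from F3 SU0160
p10 = read_fasta("pRSJ21_1.fasta")  # 10-kb plasmid from B2 SU0515
p23 = read_fasta("pRSJ21_2.fasta")  # 23-kb plasmid from B2 SU0515

print(f"17 kb vs 10 kb: {shared_fraction(p17, p10):.0%}")
print(f"17 kb vs 23 kb: {shared_fraction(p17, p23):.0%}")
print(f"10 kb vs 23 kb: {shared_fraction(p10, p23):.0%}")
```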
The presence of these similar mosaic plasmids in multiple distinctive strains is noteworthy. The small plasmids were common in the genomes of BoNT/B2 strains and were also found in BoNT/A2 AM1051, whereas the 17.5-kb plasmid was shared by strains BoNT/A2 SU0634A and BoNT/F3 SU0160. A third, slightly larger plasmid (57.6 kb) was also identified in BoNT/B2 SU0515 and BoNT/B2 SU0609 that shows no significant similarity to the other smaller plasmids. In addition, the presence of the larger conserved plasmids among different bacterial species, such as the conserved plasmid containing the bont/B6 gene cluster in C. parabotulinum strain AM 282 and C. sporogenes AM1195, indicates an ability for these plasmids to be shared across species (Marshall et al. 2010). The mosaicism seen among the smaller plasmids and their presence in phylogenetically distinct strains shows their facility for movement and interaction within a variety of bacteria.
Discussion
The anaerobic clostridia are thought to be among the oldest of bacteria, existing before the earth's atmosphere became aerobic. The ability to form inert spores has enabled these bacteria to survive conditions where actively growing bacteria would not survive. These organisms have evolved from one or a few prototypes into a genus that now contains over 200 distinct species. The vast majority of these are innocuous free-living soil or sedimentary bacteria, but a few have evolved to produce multiple toxins that are responsible for a range of human and mammalian diseases, including food poisoning, cellulitis, fasciitis, gas gangrene, tetanus, and botulism. There is evidence that the genes necessary for the expression of BoNT proteins evolved separately and were introduced via mobile genetic elements such as large plasmids and as part of bacteriophage DNA (Mansfield and Doxey 2018). This evidence includes the recent findings of bont-like genes and gene clusters in distantly related bacteria and fungi (Mansfield and Doxey 2018), as well as the presence of the same bont genes in different clostridial species. Subsequent recombination events have placed these genes within the chromosome, which stabilizes this trait by protecting against loss of the genes through loss of plasmid or curing of phage.
Complete genomic sequencing allows us to identify these mobile elements, note their movements, and understand the mechanisms involved in recombination by tracking the insertion locations of the bont gene cluster DNA.
Botulism is a global problem and BoNT-producing clostridia have been isolated from every continent. Seven species of bacteria are known to be capable of expressing BoNTs and over 40 toxin subtypes are recognized. BoNT production in different species represents a significant dispersal of a single protein. Because of their extreme potency, global geographic range, and diverse nature, BoNTs and BoNT-producing bacteria have been the subject of intensive study. These studies have documented the bacterial strains and toxin types responsible for botulism cases in specific areas and have attempted to link these cases with BoNT-producing strains within the soils and aquatic environments of these places. Of the >250 genomes from BoNT-producing bacteria currently in the NCBI database, nearly three-fourths are from the northern hemisphere, with the majority of the remaining strains coming from Argentina. More than 80% of genomes representing BoNT/A2 strains and all of the known BoNT/F3, BoNT/F4, and BoNT/F5 genomes are sourced from the southern hemisphere. All 15 genomes analyzed as part of this study were sourced from locations in the southern hemisphere, including rare Australian and African strains, providing an opportunity to better understand the relationships among these strains.
Eleven of the 15 strains examined here were sourced from Argentina. BoNT-producing bacteria in Argentina have been found to express a diverse array of toxin types and subtypes, including BoNT/A2, BoNT/A2F4, and BoNT/A2F5, BoNT/F4, BoNT/F5, BoNT/B2 strains, and the rare BoNT/A3, BoNT/E9, BoNT/F3, and BoNT/G strains. The most common toxin type identified, BoNT/A2, is responsible for the majority of Argentinean infant botulism cases and numerous BoNT/A2-producing bacteria have been isolated from the soil, particularly in Mendoza province. These strains have also been isolated from herbs and honey (Dabritz et al. 2014), the latter of which has been implicated in infant botulism cases globally.
The presence of identical bont genes within bacterial strains that are not closely related coupled with the presence of bont genes encoding multiple toxin types and subtypes within closely related bacteria emphasizes the mobility of these genes and suggests these genes must have evolved separately from their host bacteria. The phylogenies presented in figure 1 demonstrate the relationships of strains from distant geographic regions and of strains producing multiple BoNT types. To understand how the strains from the southern hemisphere differ from those of the northern hemisphere, complete genomes from the United States, Europe, and Asia were included as reference strains. It is noteworthy that all of the major C. parabotulinum phylogenetic clades illustrated in figure 1 contain BoNT/A2-producing strains. Surprisingly, despite their geographic separation, BoNT/A2-producing bacteria across the southern hemisphere are closely related phylogenetically. The importance of climate in the dissemination of clostridial spores should be noted. It has been shown that spores from infectious BoNT-producing organisms can be carried in dust particles (Dabritz et al. 2014) and there is evidence of phylogenetically related organisms isolated from both soil and clinical botulism cases. The prevailing winds in Argentina and southern Australia (the westerlies) circulate from west to east, which may have enhanced the west-east spread of particular strains and minimized north-south interactions.
The bont genes themselves exhibit a degree of diversity due to both single nucleotide mutations and recombination events. Although single mutations play a large part in the diversity seen within subtypes, the greater diversity seen between subtypes may be generated more as a result of recombination events. Thus, there is a spectrum of diversity among the bont genes that is the result of multiple types of events. An example of this is the conservation of DNA sequence among bont/A2 genes contrasted with the large between-subtype differences between bont/A1, bont/A2, and bont/A3. Many bont/A2 genes analyzed here differ by no more than three nucleotides, and these differences are likely the result of random mutations. However, it has been shown that recombination events between bont/A1 and bont/A3 have shaped the bont/A2 gene, and further recombination events can be identified that have generated additional diversity within the bont/A2 gene. Similar events are seen among the bont/B2 genes, where the majority of genes differ by <12 nucleotides, but a few genes with differences of 21-40 nucleotides show evidence of recombination events.
Diversity among bont gene clusters has also been generated in multiple ways. Large (~30-90 kb) DNA segments that contain bont cluster genes are found at specific locations within plasmids or the chromosome. These bont gene clusters appear to be inserted via homologous recombination, often facilitated by insertion elements or transposases. In addition, novel gene clusters have been generated through the placement of existing bont genes within the gene clusters of other bont types, such as the placement of bont/A1 genes within ha+ bont/B gene clusters or the placement of a bont/A2 gene within a bont/F gene cluster via a recombination event at the 3′ terminus of the ntnh gene. Within the bont genes themselves, recombination events have also created chimeric toxins, such as bont/CD or bont/DC genes or the mosaic bont/A2 gene. The arrangements and locations of the bont gene clusters within these newly sequenced/completed strains are highly conserved. The arrangement of the Argentinean and Australian bont/A2 gene clusters is constant within the chromosome, located immediately upstream (5′) of the ars operon, whereas the bont gene cluster of BoNT/A2 Mauritius strain is located downstream (3′) of the ars operon. The Argentinean bont/B2 gene clusters are all found within the chromosome at the previously published oppA/brnQ site. The gene location of the newly sequenced BoNT/F3 and BoNT/F4 strains indicates that their bont gene clusters have been inserted by splitting the pulE gene, similar to the bont/F4 gene cluster in BoNT/A2F4F5 strain Af84. BoNT/F3 SU0160 is distinctive in that an additional intact 57-kb prophage sequence is found among the inserted material between the two pulE gene fragments.
Highly conserved plasmids of various sizes were found among these strains, including several small plasmids that appear to be mosaics composed of interactions between existing plasmids with additional genetic material added from unknown sources. Large (~200-270 kb) plasmids that were found among these strains formed two distinct groups: those harboring bont gene clusters and those without bont genes. Although the sizes of these plasmids are similar, they are completely unrelated, confirming that the plasmids devoid of bont genes were not formed as a result of excision of bont gene clusters during chromosomal integration. However, smaller mosaic plasmids were discovered that were formed via exchanges involving excision and integration events. The generation of complete, finished genomes has revealed these previously undetected plasmids, indicating that C. parabotulinum strains, like C. novyi sensu lato strains (Skarin et al. 2011;Skarin and Segerman 2014), contain a varied plasmidome, which may be an important factor in the evolution of such diversity in toxin genes within this and other neurotoxin-producing species.
Although the many whole-genome sequences that are available for BoNT-producing clostridia have enabled us to determine relative frequencies of toxin types and subtypes and to map geographic regions where they are prevalent, the completed genomes have enabled a closer examination of the locations of the bont genes and a better understanding of the mechanisms responsible for the movement of these genes among related bacteria and their integration into bacterial chromosomes. | 2020-02-27T09:07:48.589Z | 2020-02-27T00:00:00.000 | {
"year": 2020,
"sha1": "db4e3fc26296946e981598aad626e88a69061dba",
"oa_license": "CCBYNC",
"oa_url": "https://academic.oup.com/gbe/article-pdf/12/3/229/33030256/evaa043.pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "a20a87ba51eaff12f4ab85874491af0857c8b4d3",
"s2fieldsofstudy": [],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
260123171 | pes2o/s2orc | v3-fos-license | The Consequences of COVID-19 on Breast Cancer Screenings in an Underserved Urban Population and the Screening Access of Value for Essex Program’s Efforts to Control the Damage
Objective: This study aims to examine the impact of the COVID-19 pandemic on breast cancer screening in an underserved population, identify patient barriers, and discuss strategies to promote the importance of screening. Methods/operations: The Rutgers New Jersey Medical School Screening Access of Value for Essex (SAVE) program delivers cancer prevention services to the most vulnerable population in Essex County, New Jersey. The SAVE program was shut down from March 2020 to June 2020 due to COVID-19. The numbers of mammograms performed during the 18 months before the pandemic (September 2018 to March 2020) and during the 18 months after the shutdown of the program (July 2020 to December 2021) were recorded. A calling project was created in response to the pandemic to educate patients about COVID-19 precautions and provide healthcare and social services resources. Results: There was a 15.4% reduction in screening mammograms during the post-shutdown period (1,459 pre-COVID-19 versus 1,234 post-shutdown). The number of diagnostic mammograms increased from 264 to 272. The calling project spoke with 1,548 patients and identified the following concerns: exposure to COVID-19, language barriers, and lack of health insurance. Conclusion: Although COVID-19 had a profound impact on most patients, especially in the realm of breast cancer screening, the implementation of the SAVE program's strategies such as transitioning to an appointment-only system has helped minimize the negative impacts. Reaching out to the patients, partnering with community organizations, and promoting SAVE services have played a vital role in encouraging more patients to have screening done.
Introduction
During the peak of the COVID-19 pandemic, all patients had a substantially harder time accessing various modalities of healthcare including annual follow-ups, elective procedures, screening exams, and other routine health services [1,2]. The fear of contracting COVID-19 coupled with state-mandated lockdowns meant that very few individuals were leaving their homes, let alone seeing their physicians. In an attempt to react to the ongoing pandemic, national health organizations such as the Centers for Medicare & Medicaid Services issued recommendations to postpone screening and preventive visits for the time being in an effort to control the spread of COVID-19 [3]. Retrospective national analyses that aimed to quantify the overall decrease in the number of screening exams report that the number of screening mammograms completed during the peak of the pandemic in April 2020 was only 3.7% of the monthly average before the outbreak of COVID-19 [4].
All the barriers that prevented patients from accessing healthcare during the COVID-19 pandemic were magnified in underserved urban populations where a majority of residents require additional services such as public transportation, language interpreter services, and assistance-based payment programs [5]. Due to government shutdowns, these key services that are essential for ensuring vulnerable populations receive high-quality care were not available. As it stands, racial and ethnic minority groups such as Blacks and Hispanics have been shown to have higher incidences of disease and poorer health outcomes irrespective of the COVID-19 pandemic [6]. Black women in particular have been shown to have higher morbidity and mortality rates from breast cancer [7]. It is especially important to acknowledge that this detrimental phenomenon is exacerbated in places such as Essex County, which house large populations of historically underserved communities due to longstanding barriers to healthcare that have persisted for years on end. Essex County is one of New Jersey's most vulnerable patient populations with a demographic composed of over 60% of residents who identify as either Black or Hispanic [8]. This patient population is not only underserved but was also severely affected by the pandemic. Prior to the COVID-19 pandemic, there were several community-health programs that existed to provide patients in Essex County with a wide variety of health-related services. One of these programs is called the Rutgers New Jersey Medical School Screening Access of Value for Essex (SAVE) program, which is one of the oldest funded New Jersey Cancer Education and Early Detection (NJCEED) programs. The SAVE program delivers cancer prevention services such as breast exams, screening mammograms, pap smear testing, and much more to the most vulnerable population in Essex County. These screening exams are offered at no cost to women who meet all of the NJCEED criteria, such as being between 21 and 64 years old, being New Jersey residents, having no insurance, and having an income at or below 250% of the federal poverty line [9].
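As a compact restatement of the eligibility rules above, the sketch below encodes them as a simple predicate. The federal poverty line figure is a hypothetical placeholder; real values depend on household size and year, so the threshold would need to be looked up for each applicant.

```python
# Sketch of the NJCEED eligibility criteria described above, encoded as
# a predicate. The poverty-line figure is a hypothetical placeholder.
from dataclasses import dataclass

FPL_ANNUAL = 14_580.0  # hypothetical single-person federal poverty line (USD)

@dataclass
class Applicant:
    age: int
    nj_resident: bool
    has_insurance: bool
    annual_income: float

def njceed_eligible(a: Applicant) -> bool:
    """True if the applicant meets the NJCEED criteria listed above."""
    return (
        21 <= a.age <= 64
        and a.nj_resident
        and not a.has_insurance
        and a.annual_income <= 2.5 * FPL_ANNUAL  # at or below 250% of FPL
    )

print(njceed_eligible(Applicant(age=45, nj_resident=True,
                                has_insurance=False, annual_income=30_000)))
```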
The SAVE program was shut down from March 2020 to June 2020, which ultimately meant that patients were unable to obtain many essential preventive health services. To combat the potential harms of the pandemic on breast screenings, the SAVE program developed a calling project to educate patients about COVID-19 precautions and provide healthcare and social services resources. Strategies to minimize COVID-19 exposure such as wearing masks, social distancing, and proper hand hygiene were provided to patients prior to screening visits via phone. Lists of social service resources such as locations of outdoor "Fresh Produce Pop Up Markets" were created and distributed to patients especially those who were concerned about COVID-19 exposure from grocery stores. The objective of this study is to determine how breast cancer screenings were impacted in the underserved population of Essex County due to the COVID-19 pandemic.
This article was previously presented in the form of an electronic poster presentation at the 2022 American College of Radiology Annual Meeting on April 24, 2022.
Pre-COVID-19 operations
The SAVE program is the only NJCEED screening program that operates in Essex County, New Jersey. It provides free breast and cervical cancer screenings for residents, with a primary focus on the underserved population of the county. Traditionally, SAVE operates as a community outreach model. This is accomplished through the use of a mobile mammography van (Figure 1) and more than 55 community partners that include various health departments, churches, and other community organizations throughout Essex County (Figure 2). Operating at least three times a week, the mobile mammography van travels to the various outreach sites to ensure that the health screenings are accessible to patients who do not have transportation. At each screening site, 15-20 patients who meet NJCEED criteria are scheduled on a first-come-first-serve basis. At each appointment, patients receive cancer education as well as breast and cervical cancer screenings.
Post-COVID-19 operations
To minimize contact and stop the spread of COVID-19, the traditional first-come-first-serve basis was transitioned to an appointment-only system. This was done to allow for adequate time between each appointment to reduce potential exposure from one patient to the next and to maximize social distancing. In a similar light, patient information and cancer education were provided via phone prior to the appointment to reduce the amount of time each patient would physically take in person.
Calling project
In response to the COVID-19 pandemic, a calling project was developed to provide COVID-19 precautions and receive direct feedback from our active and newly scheduled patients. Patients were reached via phone call and were provided different strategies to protect themselves from contracting COVID-19, including social distancing, wearing masks, and resources for COVID-19 testing. Prior to their appointments, patients were able to voice over the phone the obstacles they currently faced in acquiring their breast and cervical screening services, and these verbal responses were recorded.
Metrics
The 18 months prior to the shutdown of the SAVE program from September 2018 to March 2020 were defined as the pre-COVID-19 time period, and the 18 months following the restart of the SAVE program after its shutdown from July 2020 to December 2021 were defined as the post-COVID-19 time period. The total number of screening mammograms and diagnostic mammograms were recorded for both the pre-COVID-19 and the post-COVID-19 time periods. Pre-and post-COVID-19 shutdown era screening and diagnostic mammograms were compared via Chi-squared tests. All analyses and descriptive statistics were conducted using Microsoft Excel (Microsoft Corporation, Redmond, Washington). To assess the progress of the calling project, the total number of patients that were contacted as well as the major concerns each patient had on accessing their screening services were recorded.
Subsequent Chi-squared tests sought to evaluate the differences between screening and diagnostic mammograms with respect to the pre- and post-COVID-19 shutdown time frame. Chi-squared analysis for the number of screening mammograms conducted pre-COVID-19 shutdown when compared to post-COVID-19 shutdown yielded a p-value of less than 0.001, while analysis for the number of diagnostic mammograms conducted in the same study time frame yielded a p-value of 0.0622 (Table 1).
TABLE 1: Statistical analysis of pre- versus post-COVID-19 screening and diagnostic mammograms

Chi-squared test          Pre-COVID-19 shutdown   Post-COVID-19 shutdown   p-value
Screening mammograms      1,459                   1,234                    <0.001
Diagnostic mammograms     264                     272                      0.0622
Looking at the pre- and post-COVID-19 shutdown eras in conjunction, it was found that 54.2% of screening mammograms were conducted pre-shutdown, and 49.3% of diagnostic mammograms were conducted post-shutdown.
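For concreteness, the sketch below reruns a comparison of this kind with SciPy, using the screening and diagnostic counts reported here. The paper does not state the exact contingency structure of its tests, so a one-way goodness-of-fit test against equal expected counts is assumed, and the resulting p-values need not reproduce those in Table 1.

```python
# Sketch: chi-squared comparison of pre- vs post-shutdown mammogram
# counts. The exact test design used in the study is not specified, so a
# goodness-of-fit test against equal expected frequencies is assumed.
from scipy.stats import chisquare

screening = [1459, 1234]   # pre- vs post-COVID-19 shutdown
diagnostic = [264, 272]

for label, counts in [("screening", screening), ("diagnostic", diagnostic)]:
    stat, p = chisquare(counts)  # expected counts: equal split by default
    print(f"{label}: chi2 = {stat:.3f}, p = {p:.4f}")
```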
The calling project spoke with 1,548 patients during the post-COVID-19 shutdown period and identified the most prevalent and recurrent themes among patient concerns: exposure to COVID-19, language barriers, and lack of health insurance. All the concerns that were most consistently vocalized by patients had a connection to the aforementioned themes. Subsequent measures taken by the project to combat these issues included the following: education regarding hospital procedures implemented to limit COVID-19 exposure and reassurance to patients in their own language with the use of language interpreter services. Language interpreters for Spanish, Portuguese, Haitian Creole, and French Creole were used. Flyers were created and distributed to patients to promote SAVE services and the development of best practice plans to keep patients safe, such as social distancing, hand hygiene, and the use of masks.
Discussion
The study ultimately observed a 15.4% reduction in screening mammograms during the post-COVID-19 shutdown period. This is extremely problematic given its context within an increase in screening eligibility resulting from the pandemic. The study observed a complex interplay between the increases and decreases in screening mammogram demand due to the pandemic's effect on already existing systemic factors such as insurance coverage, access to transportation, and language barriers. The Chi-squared analysis comparing the number of screening mammograms conducted pre- and post-COVID-19 shutdown demonstrated a statistically significant difference between these two time frames while demonstrating no statistically significant difference in diagnostic mammograms. While there is no single factor that clearly led to the observed increase in the number of diagnostic mammograms in the post-COVID-19 shutdown period, one of the most likely explanations is an accumulation of previously scheduled diagnostic mammograms. The previously scheduled appointments would have been rescheduled to a later date due to the COVID-19 shutdown. The total of these rescheduled appointments in addition to the new diagnostic mammograms that were ordered during the post-COVID-19 shutdown period would ultimately lead to an increase in the observed number of diagnostic mammograms. This observed increase in diagnostic mammograms also represents the success of the SAVE program in matching the increased demand for diagnostic mammograms.
Due to the COVID-19 pandemic, there was an overwhelming increase in unemployment, which subsequently resulted in many individuals losing the insurance plan provided to them by their job [10]. This was then followed by a resulting increase in the number of NJCEED-eligible residents. Small Area Health Insurance Estimates (SAHIE) data indicated that in 2017, 11.5% of residents in Essex County were uninsured. However, by 2020, this estimate had increased by 0.9 percentage points to 12.4% [11,12]. Therefore, as a result of the COVID-19 shutdown period, more patients were now eligible to receive a screening mammogram under the NJCEED eligibility guidelines. This means that the 15.4% reduction in screening mammograms underestimates the severity of the resulting decrease in screenings in the scope of an increased eligible patient pool.
Despite the aforementioned causes for increasing screening eligibility, there were also a few considerable factors causing a decrease in screening demand. Marginalized communities have a longstanding history of public distrust toward the healthcare system, and this is a very significant obstacle in Essex County, which houses a large number of historically underserved and marginalized communities [13]. Historical occurrences of repeated acts of systemic bias, racism, and decreased quality of care have led to an overall sense of distrust within these communities of Essex County [14]. This was further propagated through the COVID-19 pandemic, which introduced another obstacle to accessing care through the forms of misinformation and uncertainties that ran rampant during the beginning of the pandemic [15]. Each individual's perception of the threat from the COVID-19 virus became another obstacle to seeking routine screening mammograms.
The SAVE program quickly adopted a two-pronged approach consisting of patient education regarding COVID-19 best practices and identifying patient concerns to minimize the decrease in screenings, all accomplished through the implementation of the calling project. The program quickly understood the role that COVID-19 played in exacerbating the already existing barriers to care for the underserved community within Essex County, and the goal of the project was to prioritize educational initiatives to reduce the impact of COVID-19 on these at-risk communities. Simultaneously, the program was able to begin assessing the major concerns that alienated these patients from the healthcare system during the post-COVID-19 shutdown period. These included concerns about exposure to COVID-19, language barriers, and lack of health insurance.
The current study parallels others in finding that the COVID-19 pandemic was a very strong determinant of the decrease in screening mammograms over the past few years. It is therefore imperative that other groups, institutions, and organizations follow the example set forth by the SAVE program and the calling project to do all that they can to alleviate the disproportionate decrease in screening mammograms that these specific communities were afflicted with during the peak period of the pandemic. There needs to be a replication of the two-pronged approach employed by the SAVE program, which not only sought to provide screenings to marginalized communities but also sought to communicate with them regarding their unique concerns, obstacles, and challenges to promote community-based education for long-lasting change and impact.
While this is the first study to assess how the Essex County community was impacted by the COVID-19 pandemic in regard to breast cancer screening, the major limitation of this study was the confinement of the pre- and post-shutdown periods to only 18 months. Defining longer pre- and post-shutdown periods would have allowed for the evaluation of more distinct longitudinal trends in breast cancer screenings and ultimately would have provided a more definitive answer to how the number of screening and diagnostic mammograms changed. An additional limitation includes restricted demographic data collection. Collection of the patient's race and ethnicity would allow for the analysis of individual demographic groups to assess if changes in breast cancer screening were more prominent in particular demographics. Marginalized communities are typically composed of minority groups that have been shown to have higher distrust of the healthcare system as mentioned earlier. This would allow for more targeted action plans like more community outreach and patient education toward these specific groups to address the causes of screening deficiencies.
Conclusions
The COVID-19 pandemic negatively impacted the most vulnerable patients in Essex County, New Jersey, in regard to breast cancer screenings. The reduction in screening mammograms can be attributed to multiple factors including fear of illness, lack of transportation, lack of insurance, and other barriers to healthcare. The SAVE program is the only NJCEED screening program that operates in Essex County, New Jersey, and has a mission of providing healthcare services to the underserved community in this county. To mitigate the consequences of the COVID-19 pandemic and government shutdown, the SAVE program implemented a calling project to promote SAVE services to ensure that patients can safely access their screening services while minimizing risks. The success of the program can be demonstrated by the statistically insignificant difference in diagnostic mammograms before and after the COVID-19 shutdown (p = 0.0622). However, the decrease in screening mammograms represents a similar trend that is observed in previous literature regarding COVID-19 and breast cancer screening nationwide. Expansion of SAVE services with additional community sites and more available appointments will be necessary to accommodate additional patients, especially with future emerging infectious diseases.
Additional Information

Disclosures
Human subjects: Consent was obtained or waived by all participants in this study. Animal subjects: All authors have confirmed that this study did not involve animal subjects or tissue. Conflicts of interest: In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work. | 2023-07-25T15:02:59.203Z | 2023-07-01T00:00:00.000 | {
"year": 2023,
"sha1": "85b2461bf139cc4c4be5410e457edc8e9b344b0f",
"oa_license": "CCBY",
"oa_url": "https://assets.cureus.com/uploads/original_article/pdf/168516/20230723-10904-1tmdequ.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "7addcc309d362f4cc4969e56181645237412afa2",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
225025251 | pes2o/s2orc | v3-fos-license | Insights for Air Quality Management from Modeling and Record Studies in Cuenca, Ecuador
On-road traffic is the primary source of air pollutants in Cuenca (2500 m. a.s.l.), an Andean city in Ecuador. Most of the buses in the country run on diesel, emitting high amounts of NOx (NO + NO2) and PM2.5, among other air pollutants. Currently, an electric tram system is beginning to operate in this city, accompanied by new routes for urban buses, changing the spatial distribution of the city's emissions, and alleviating the impact in the historic center. The Ecuadorian energy efficiency law requires that all vehicles incorporated into the public transportation system must be electric by 2025. As an early and preliminary assessment of the impact of this shift, we simulated the air quality during two scenarios: (1) A reference scenario corresponding to buses running on diesel (DB) and (2) the future scenario with electric buses (EB). We used the Eulerian Weather Research and Forecasting with Chemistry (WRF-Chem) model for simulating the air quality during September, based on the last available emission inventory (year 2014). The difference in the results of the two scenarios (DB-EB) showed decreases in the daily maximum hourly NO2 (between 0.8 and 16.4 μg m−3, median 7.1 μg m−3), and in the 24-h mean PM2.5 (0.2 to 1.8 μg m−3, median 0.9 μg m−3) concentrations. However, the daily maximum 8-h mean ozone (O3) increased (1.1 to 8.0 μg m−3, median 3.5 μg m−3). Apart from the primary air quality benefits acquired due to decreases in NO2 and PM2.5 levels, and owing to the volatile organic compounds (VOC)-limited regime for O3 production in this city, modeling suggests that VOC controls should accompany future NOx reduction for avoiding increases in O3. Modeled tendencies of these pollutants when moving from the DB to EB scenario were consistent with the tendencies observed during the COVID-19 lockdown in this city, which offers a unique reference for appreciating the potential for air quality improvement and for identifying insights for air quality management. This consistency supports the approach and results of this contribution, which provides early insights into the effects on air quality due to the recent start of operation of the electric tram and the future shift from diesel to electric buses in Cuenca.
Introduction
On-road traffic is one of the most important sources of air pollutants in cities located in Ecuador [1,2]. Emissions from this source are exacerbated for cities located in the Andean region of the country, owing to their altitude, where the content of atmospheric oxygen is lower compared to at sea level. Therefore, combustion processes emit more primary pollutants in these cities [3].
for studying the influence of six planetary boundary layer schemes for modeling the air quality in Cuenca [18]. Modeling is also a powerful approach for foreseeing the effects on air quality due to changes in the emission inventories [19], which can help define policies, programs, and projects for air quality management.
The Air Quality in Cuenca
The air quality stations are mainly located in the urban area. There is one automatic station located in the historic center (MUN station, Figure 1), which, since 2012, has monitored the short-term air quality (CO, NO2, PM2.5, and O3) and meteorology [20]. Additionally, there are about 20 passive stations for measuring monthly-mean air quality concentrations (NO2 and O3). Measurement of the air quality is based on the methods established in the Ecuadorian air quality regulation under the responsibility of the Municipality of Cuenca, which, for this purpose, is the entity accredited by the National Environmental Authority. The air quality network's current equipment is described in the 2019 air quality report [20]; this equipment corresponds to the directives established by the USA Environmental Protection Agency and European regulations. As part of its operation, quality assurance and quality control activities are performed continuously.
From 2012 to 2019, on fourteen days the PM2.5 concentrations (24-h mean) were higher than the WHO guideline (25 µg m−3). The annual mean PM2.5 concentrations varied between 6.1 and 10.8 µg m−3. During four years in this period, this mean was higher than the WHO guideline (10 µg m−3, [11]). Studying the growing historical dataset of air quality records promotes the understanding of the complex behavior of air pollutants in Cuenca. Based on the records from 2013 to 2015, the weekend effect (WE), a phenomenon characterized by increased concentrations of O3 during weekends even though the emissions of NOx and VOC are typically lower than on weekdays, was identified in the urban area of Cuenca [27], suggesting the presence of a VOC-limited regime for O3 production. This finding provided the first insights into the influence of decreased on-road traffic emissions during weekends.
Actions for Controlling Air Pollutant Emissions
One of the most relevant controls of air pollutant emissions in Cuenca is the technical vehicular revision (RTV, due to its acronym in Spanish). According to the RTV regulation, vehicles running in Cuenca must demonstrate each year that their exhaust emissions are lower than the levels established in the national regulation as a requirement to allow their use. Currently, through the RTV, CO and HC emissions from gasoline cars and the opacity of diesel vehicle emissions are measured. Today, NOx emissions from on-road traffic are not controlled.
Another component currently affecting air pollutant emissions is the operation of an electric tram, conceived as the new core of the public transportation system of Cuenca that aims to solve the problems associated with on-road traffic. Construction of this facility began at the end of 2013. However, different problems and conflicts have appeared related to restricted mobility and effects on local commercial activities [28]. After several years of delay, the electric tram began to operate at the time of writing this manuscript. The electric tram alleviates the air pollutant emissions along its route, which includes the historic center. However, the displaced buses will move their emissions to new routes.
In the future, under the framework of the Ecuadorian energy efficiency law [29], all vehicles incorporated into the public transportation system must be electric by 2025. The shift from diesel to electric buses will eliminate or reduce the exhaust emissions from diesel buses, decreasing, as a consequence, the NOx and PM2.5 emissions.
The Forced Lockdown Owing to COVID-19
To reduce the spread of COVID-19, several measures, such as lockdowns, quarantines, stay-at-home orders, and transportation restrictions, were applied worldwide. The corresponding effects on air quality have begun to be reported from different regions; for example, Nakada and Urban (2020) [30] reported such effects, and reductions in PM 10 , SO 2 , and NO 2 were reported in Salé (Morocco) [33].
In Ecuador, based on the number of infected people and the pandemic declaration by the WHO, the government declared the exception status on 16 March 2020 (decree 1017) [34]. One of the measures of this status was a restriction on mobility. During the following days, on-road traffic and other activities were notably reduced, decreasing the emissions of air pollutants.
Although the exception status was officially maintained in Cuenca until 24 May 2020, some activities restarted on 17 May 2020 [35]. On 25 May 2020, the status was relaxed, alleviating the restrictions on on-road traffic and other activities and allowing buses employed for regional transportation to reactivate their service. Since 01 June 2020, urban buses in Cuenca have returned to service.
The air quality records from the exception status provide a unique opportunity to appreciate the potential for air quality improvement resulting from a decrease in activities such as on-road traffic.
This contribution explores the following issues of the air quality in Cuenca:
• Verification of the presence of the WE in Cuenca after 2015;
• The effects on air quality due to the future shift from diesel to electric buses;
• The air quality during the COVID-19 lockdown, and its comparison to previous weeks and years;
• A holistic analysis of these interrelated components to identify insights for air quality management.
WE in Cuenca
To complete the analysis of 2013 to 2015, and based on the same approach presented in Parra (2017) [27], we obtained mean-daily profiles of CO, NO 2 , PM 2.5 , and O 3 for each year of the period 2016 to 2019, considering weekdays, Saturdays, and Sundays. We used the maximum 8-h mean O 3 concentrations to quantify the variation (percentage) of this pollutant on Saturdays and Sundays compared to weekdays. These concentrations were computed from the hourly values between 9:00 and 16:00, considering that the O 3 concentrations are typically higher during this period. This approach was used in studies of the WE in Santiago (Chile) [36] and Quito (Ecuador) [37].
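As an illustration of this procedure, the following Python sketch (ours, not part of the original study) computes the weekend increase of the maximum 8-h mean O 3 from an hourly series; the synthetic data and variable names are hypothetical stand-ins for the MUN station record.

import numpy as np
import pandas as pd

# Synthetic hourly O3 series (ug/m3) standing in for the measured record.
idx = pd.date_range("2018-01-01", "2018-12-31 23:00", freq="h")
rng = np.random.default_rng(1)
o3 = pd.Series(40 + 10 * rng.standard_normal(len(idx)), index=idx)

# 8-h mean over the 9:00-16:00 window (8 hourly values), when O3 is typically highest.
max8h = o3.between_time("09:00", "16:00").resample("D").mean()

# Mean daily values for weekdays, Saturdays, and Sundays, and the weekend increase.
dow = max8h.index.dayofweek            # 0=Mon ... 5=Sat, 6=Sun
weekday = max8h[dow <= 4].mean()
for name, day in (("Saturday", 5), ("Sunday", 6)):
    change = 100.0 * (max8h[dow == day].mean() / weekday - 1.0)
    print("%s vs weekdays: %+.1f%%" % (name, change))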
Shift from Diesel to Electric Buses
As an early and preliminary assessment of the impact of the shift from diesel to electric buses established in the Ecuadorian energy efficiency law, we simulated the air quality in Cuenca under two scenarios: (1) a reference scenario corresponding to buses running on diesel (DB) and (2) a future scenario with electric buses (EB). We used the Eulerian Weather Research and Forecasting with Chemistry (WRF-Chem V3.2) model [38] to simulate the air quality during September of 2014, based on the most recent emission inventory of Cuenca [4]. WRF-Chem is a state-of-the-art chemical transport model that requires, as input, hourly maps of speciated emissions. The hourly maps were built using activity factors, considering the differences between weekdays and weekends and the influence of meteorology on vegetation emissions. September was selected because its on-road traffic and other activity levels are representative of the other months. Additionally, during specific days of September 2015 (two days) and September 2017 (three days), O 3 concentrations were higher than the WHO guideline (100 µg m −3 , maximum 8-h mean), partly due to the high levels of near-zenith solar radiation reaching the region of Ecuador during this month.
For the DB scenario, all of the emissions sources of Table 1 were included when building the hourly emissions maps. For the EB scenario, as a first assumption, all of the combustion emissions (NO x , CO, NMVOC, SO 2 , PM 10 exhaust, and PM 2.5 exhaust) from buses (Table 2) were eliminated.
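As a rough sketch of how the two scenarios can be encoded, the snippet below derives the EB emission maps from the DB ones by subtracting the bus contribution; the array layout and the bus share are hypothetical placeholders for the actual inventory data.

import numpy as np

species = ["NOx", "CO", "NMVOC", "SO2", "PM10_exh", "PM25_exh"]
rng = np.random.default_rng(0)
hours, ny, nx = 24, 82, 100                      # cells of the third sub-domain
total = {s: rng.random((hours, ny, nx)) for s in species}  # all sources (DB)
buses = {s: 0.2 * total[s] for s in species}     # illustrative bus contribution

db = total                                       # DB scenario: every source kept
eb = {s: total[s] - buses[s] for s in species}   # EB scenario: bus combustion removed

for s in species:
    assert (eb[s] >= 0).all()                    # emissions cannot become negative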
Initial and boundary conditions were generated using the final National Centers for Environmental Prediction (NCEP FNL) Operational Global Analysis data [39]. Meteorological simulations were carried out using a master domain of 70 × 70 cells (27 × 27 km each) and three nested sub-domains. The third sub-domain (100 × 82 cells of 1 km each, and 35 vertical levels) covers the region of the Cantón Cuenca (Figure 1). For the third sub-domain, the WRF-Chem option for the chemical transport of pollutants was activated, selecting the carbon bond mechanism-Z (CBMZ) [40] for gaseous pollutants and the model for simulating aerosol interactions and chemistry (MOSAIC) for aerosols [41]. One crucial feature of WRF-Chem is the possibility of applying an online modeling approach, allowing the simultaneous treatment, with feedback, of meteorological and air quality variables. For this study, the option for direct effects between aerosols and meteorology was activated, using four aerosol bins. Working with direct effects improved the performance when modeling the air quality in Cuenca, compared to modeling without feedback [18]. Table 3 indicates the physics options used for the simulation. We selected the Yonsei University (YSU) planetary boundary layer scheme, one of the options of WRF-Chem, which provided the best performance for modeling the air quality in Cuenca [18].
Air Quality during the COVID-19 Lockdown
We compared the short-term air quality records (maximum 8-h mean CO, maximum 1-h mean NO 2 , 24-h mean PM 2.5 , and maximum 8-h mean O 3 ) obtained from the MUN station during 17 March 2020 to 16 May 2020 (61 days) with records from the following periods:
• Weeks before the exception status (01 January 2020 to 16 March 2020), and
• 17 March to 16 May of previous years, from 2015 to 2019.
Although there is information available from 2012, we selected 2015 onwards because records after this year covered at least 70% of days. We selected this percentage to assure the representativeness of the records.
The short-term air quality concentrations used for comparison are congruent with the WHO guidelines [9,11] and the Ecuadorian air quality regulation. The maximum 1-h mean corresponds to the maximum hourly mean concentration per day. The maximum 8-h mean corresponds to the maximum mean concentration for eight consecutive hours per day. We conducted Wilcoxon tests to establish if the distributions were statistically equal or different.
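A minimal sketch of these daily metrics and of the statistical comparison is given below; the synthetic NO 2 series is hypothetical, and scipy's rank-sum test is used as one standard implementation of the Wilcoxon test for two independent samples.

import numpy as np
import pandas as pd
from scipy.stats import ranksums

idx = pd.date_range("2020-01-01", "2020-05-16 23:00", freq="h")
rng = np.random.default_rng(2)
no2 = pd.Series(30 + 8 * rng.standard_normal(len(idx)), index=idx).clip(lower=0)
no2.loc["2020-03-17":] *= 0.5                    # mimic the lockdown decrease

max1h = no2.resample("D").max()                  # maximum 1-h mean per day
max8h = no2.rolling(8, min_periods=8).mean().resample("D").max()  # maximum 8-h mean

before = max1h["2020-01-01":"2020-03-16"]
lockdown = max1h["2020-03-17":"2020-05-16"]
stat, p = ranksums(lockdown, before)
print("NO2 max 1-h: stat=%.2f, p=%.2g" % (stat, p))  # small p: distributions differ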
WE in Cuenca
In agreement with the results from 2013 to 2015 [27], from 2016 to 2019 the mean-daily profiles of CO, NO 2 , and PM 2.5 showed lower concentrations on Saturdays and Sundays compared to weekdays, whereas the O 3 profiles were higher. Figure 2 depicts the profiles from 2018; the profiles of the other years presented similar configurations.
From 2013 to 2019, the increases in the maximum 8-h mean O 3 concentrations of Saturdays and Sundays compared to weekdays varied between 2.6% and 11.8% and between 5.6% and 15.8%, respectively (Figure 3). Although we limited our analysis to the yearly period, our results confirm the presence of the WE in the urban area of Cuenca, where on-road traffic is the most relevant air pollutant source. More insights can be drawn in the future through an analysis of the historical records per season or month.
Diesel vehicles, representing 10.8% of the total fleet of Cuenca, reduce their activity during weekends. Therefore, significant reductions in NO x and PM 2.5 emissions take place on weekends compared to weekdays. The lower weekend activity of gasoline vehicles, which make up 89.2% of the fleet, mainly decreases the emissions of CO and NMVOC. Therefore, the decrease of CO, NO x , and PM 2.5 emissions during weekends produced, on average, lower concentrations of these pollutants (Figure 2). Other sources, such as small industries, also reduce their emissions during weekends, but the decrease in on-road traffic is more significant.
Shift from Diesel to Electric Buses
At the location of the MUN station, the differences between the simulated scenarios (DB − EB) showed decreases in CO (between 0.00 and 0.14 mg m −3 , median 0.02 mg m −3 ), NO 2 (0.8 to 16.4 µg m −3 , median 7.1 µg m −3 ), and 24-h mean PM 2.5 (0.2 to 1.8 µg m −3 , median 0.9 µg m −3 ) concentrations. However, the maximum 8-h mean O 3 increased (1.1 to 8.0 µg m −3 , median 3.5 µg m −3 ) (Figure 4).
At the passive stations, the results of the EB scenario compared to the DB scenario showed decreases in mean-monthly NO 2 (0.2 to 5.6 µg m −3 , median 3.8 µg m −3 ), although increases in mean-monthly O 3 (0.0 to 5.7 µg m −3 , median 3.9 µg m −3 ) (Figure 5). The smallest differences, for both NO 2 and O 3 , were computed at Ictocruz (ICT) and Escuela Héctor Sempértegui (EHS), passive stations located at the southern and northern edges, respectively, of the consolidated urban area of Cuenca (Figure 1), which are therefore only influenced by on-road traffic emissions to a small degree. Figure 6 shows the modeled maps of NO 2 (1-h mean at 7:00 local time (LT)) and O 3 (maximum 8-h mean) concentrations of the DB and EB scenarios from 12 September 2014. The Supplementary Materials section shows movies of the hourly modeled concentrations of NO 2 and O 3 from 12 September 2014 for both the DB and EB scenarios.
Apart from decreasing their short-term concentrations, lowering the NO 2 and PM 2.5 levels will also reduce their annual means, promoting the attainment of the WHO guidelines and the air quality regulation. We highlight the benefits of reducing air pollution and particulate matter, owing to their carcinogenicity to humans [12,13], and the effects of particulate matter on the brain, which, according to recent literature, appears to be the most concerning component of air pollution [17]. The modeled results and the VOC-limited regime for photochemical O 3 production suggest that VOC controls should accompany future NO x reductions to avoid an increase in O 3 levels in the urban area of Cuenca.
In the future, the RTV should incorporate both NO x and VOC emission controls to verify the proper condition of exhaust catalysts for gasoline cars. Additionally, the RTV should incorporate NO x and PM 2.5 controls for diesel vehicles.
The direction of change in pollutants between the DB and EB scenarios was consistent with that observed during the COVID-19 lockdown. Although other sources reduced their activities, the absence of buses allowed, to a high degree, a reduction in NO x and PM 2.5 in the urban area of Cuenca. This consistency supports the validity of the approach used in this contribution to assess the effects of the future shift from diesel to electric buses in Cuenca.
The modeled results provide a preliminary estimation of air quality benefits based on the assumption that all diesel buses belonging to public transportation in the future will be replaced by electric buses. An updated emission inventory and the proposal of an appropriate future electric bus fleet (electric and hybrid) configuration will refine these results. Another limitation of our study is the period employed for modeling. Although September was considered representative, it is advisable to model other months or even the entire yearly period.
Another scenario deserving exploration is the control of emissions from heavy diesel vehicles, which contributed the largest percentages of on-road emissions of NO x (36.9%) and PM 2.5 (63.4%) in 2014. The effects of NMVOC controls should be explored for gasoline cars, especially for older vehicles. Due to their emissions, other sources deserving dedicated assessments are industrial activities, the power facility, and the handcrafted production of bricks.
The operation of the tram project will produce changes in the public transportation system of Cuenca. The routes of buses need to be appropriately redesigned to define the best way to incorporate them. Emissions from buses will be redistributed, alleviating their magnitude in the historic center, although moving emissions to areas under the influence of the new routes. Table 4 presents a comparison with other assessments of the influence of moving to electric vehicles; among them, Minet et al. [46] and related studies applied an approach based on changes in emissions when assessing the effects in cities such as São Paulo (Brazil) and Cluj-Napoca (Romania). These assessments, summarized in Table 4, reported decreases in NO x emissions.
Although the replacement of diesel buses by electric buses will reduce the emissions along the routes used by these vehicles, the generation of electricity will produce air pollution in the areas of influence of the fossil fuel power facilities belonging to the Ecuadorian mix. From 2001 to 2018, electricity came from renewable sources (43.5% to 73.6%), fossil fuels (26.2% to 52.2%), and imports (0.1% to 11.5%) [47]. Non-renewable sources include the combustion of fuel oil, diesel, naphtha, natural gas, bunker, oil, and liquefied petroleum gas. The impact on air quality due to the electricity produced in Ecuador is a topic deserving further research.
Air Quality during the COVID-19 Lockdown
The concentrations of CO and NO 2 were lower during the COVID-19 lockdown compared to the previous records from 2020 (Figure 7). The maximum 8-h mean CO decreased from 0.74 (median) to 0.60 mg m −3 . The maximum 1-h mean NO 2 decreased from 36.8 (median) to 16.3 µg m −3 . The distributions of CO and NO 2 during the lockdown period were statistically different, with lower levels compared to the distributions from 01 January 2020 to 16 March 2020 (Table 5).
Additionally, the concentrations of PM 2.5 were lower during the restriction (Figure 7). The 24-h mean PM 2.5 decreased from 9.6 (median) to 5.7 µg m −3 . The peak after 17 March 2020 can be associated with the arrival of volcanic ash from the Cayambe, one of the currently active volcanoes in Ecuador [48], which produced light ash fallout in Cuenca on 24 March 2020 [49]. This peak influenced the PM 2.5 records from 17 March to 16 April 2020, which showed a distribution statistically equal to the records from 01 January to 16 March 2020 (Table 4).
During the first days of the lockdown, the O 3 concentrations increased. The maximum 8-h mean O 3 rose from 52.2 to 55.7 µg m −3 , the latter value being the median of the first month after 17 March 2020. The median from 17 March 2020 to 16 May 2020 was 47.1 µg m −3 . The distribution of O 3 from 17 March to 16 April 2020 was statistically different, showing higher values compared to the distribution from 01 January to 16 March 2020 (Table 4). However, the distribution of O 3 from 17 March to 16 May 2020 was statistically equal, showing similar levels to the distribution from 01 January to 16 March 2020.
The seasonal behavior of the maximum 8-h mean O 3 in Cuenca shows a decrease during April and May, with the lowest concentrations during June and July (Figure 8). After this, O 3 increases, typically reaching the highest values during September. Therefore, the O 3 decrease during the second month of the lockdown relates to its seasonal variation.
Figure 8 shows the mean profile (maximum 8-h mean) of the O 3 concentrations deduced from the records of the period 2015 to 2019, together with the concentrations from 2020. The profile from 2020 shows, in general, higher concentrations than the mean of the previous years. From 01 January to 16 March, the O 3 concentrations from 2020 were 10.0 µg m −3 (median) higher than the mean profile from previous years. During the lockdown (17 March to 16 May 2020), this difference increased to 13.9 µg m −3 (median), indicating a net increase of 3.9 µg m −3 , which is consistent with the increase (3.5 µg m −3 , median) obtained by modeling when assessing the air quality effects of moving from DB to EB.
The distributions of CO and NO 2 during the lockdown period from 2020 were statistically different, showing lower concentrations compared to the distributions of the same period from 2015 to 2019 (Table 6). Similarly, the distribution of O 3 was statistically different, showing higher levels compared to the previous years. The distribution of PM 2.5 during the lockdown period from 2020 was statistically equal only to that of 2016, showing similar concentrations.
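The net O 3 increase quoted above can be reproduced with a simple profile comparison, sketched below with synthetic numbers in place of the real daily maximum 8-h mean records.

import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
idx = pd.date_range("2015-01-01", "2020-05-16", freq="D")
o3 = pd.Series(50 + 5 * rng.standard_normal(len(idx)), index=idx)
o3.loc["2020-01-01":] += 10                      # mimic the elevated 2020 levels
o3.loc["2020-03-17":] += 4                       # mimic the extra lockdown increase

past = o3["2015":"2019"]
profile = past.groupby(past.index.dayofyear).mean()   # mean day-of-year profile
y2020 = o3["2020"]
excess = pd.Series(y2020.values - profile.reindex(y2020.index.dayofyear).values,
                   index=y2020.index)

pre = excess["2020-01-01":"2020-03-16"].median()
lock = excess["2020-03-17":"2020-05-16"].median()
print("pre %.1f, lockdown %.1f, net increase %.1f ug/m3" % (pre, lock, lock - pre))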
The low PM 2.5 median from 2015 (4.2 µg m −3 ) can be associated with the reduction in traffic, especially buses, in the historic center, due to the construction of the electric tram. This project's construction activities caused the closing of streets, changed the routes of buses, and limited the use of private vehicles [28]. The operation of this project will produce changes in the public transportation system of Cuenca. At the time of writing this manuscript, the tram was being tested, and it will officially start working during the upcoming weeks.
The decrease in CO, NO 2 , and PM 2.5 from 17 March to 16 May 2020, compared to the previous weeks (01 January to 16 March 2020), was consistent with the decrease of these pollutants compared to previous years (2015 to 2019). Although other sources, such as some industries, probably reduced their activities, these changes can be associated, to a high degree, with reductions in on-road traffic. During the restriction, all types of vehicles reduced their activity. Buses did not operate, and, therefore, there were substantial reductions in NO 2 and PM 2.5 .
On the other hand, the increase in O 3 concentrations is consistent with the hypotheses behind the WE [50]. Among them, the results suggested that the following could have a leading role:
• There is a VOC-limited regime, with a VOC/NO x ratio lower than 8. Under this regime, VOC limits O 3 production, and a NO x reduction promotes O 3 production;
• Less O 3 is titrated because NO x emissions are lower compared to weekdays.
Other mechanisms, such as the reduction in soot, can contribute to higher O 3 concentrations. More studies are required to define the participation of these and other hypotheses behind the WE in Cuenca.
Figure 10 shows the mean profiles of global solar radiation from 2017 to 2020 (MUN station), corresponding to the period 17 March to 16 May. These profiles indicate the mean levels of solar radiation from 9:00 to 16:00, representing the hours when O 3 concentrations are typically higher. The profile of 2020 did not show higher values compared to previous years. The corresponding Wilcoxon tests indicated that the distributions of the global radiation records from 2020 were statistically equal to those of the previous three years. These results indicate that the increase in O 3 concentrations during 2020 is not related to higher solar radiation levels.
Apart from changes in the emissions of precursors, another factor potentially involved is the long-range transport of O 3 . From 1 January to 16 May of 2020, the Terra and Aqua satellites [51] identified forest fires mainly in Colombia and Venezuela, toward the northeast of Ecuador. In addition, forest fires were mostly identified in the north of Peru and the center of Brazil. At the latitude of Cuenca, forest fires were less abundant, mainly at the center and west of South America. Although the influence of forest fires is outside the scope of this study, their occurrence throughout 1 January to 16 May suggests their emissions were not the leading cause of the O 3 increases during the COVID-19 lockdown.
The effects of the COVID-19 lockdown and modeled results presented in this contribution provide an early reference for the potential changes in the air quality of Cuenca during the next few years.
Although we focused our analyses on Cuenca, our results can act as a preliminary reference for other medium-large Ecuadorian cities, which share similar features with regards to their vehicular fleets and emission contributions [1,2].
Conclusions and Summary
Based on records from seven years (2013 to 2019), we confirmed the presence of the WE in the urban area of Cuenca, where on-road traffic is the most relevant air pollutant source. The VOC-limited regime for O 3 production explains, at least in part, the mean increase in O 3 concentrations during weekends, despite the decreased emissions of NO x and VOC in comparison to weekdays. This regime is behind a counterintuitive response of O 3 to variations in NO x emissions: an increase in NO x emissions decreases O 3 concentrations, and a decrease in NO x emissions increases O 3 levels.
Our preliminary assessment, based on the assumption that all diesel buses will be replaced by electric buses, implies the elimination of 1861.2 t y −1 of NO x and 81.7 t y −1 of PM 2.5 exhaust emissions ( Table 2). The modeled results indicated decreases in NO 2 and PM 2.5 but increases in O 3 concentrations. The direction of these changes was consistent with the VOC-limited regime presented in Cuenca.
The effects of the limitation of activities during the COVID-19 lockdown also showed variations (NO 2 decrease and O 3 increase) consistent with the VOC-limited regime for O 3 production.
There was consistency between the WE, the effects on air quality during the COVID-19 lockdown, and the modeled results owing to the future shift in the public transportation system of Cuenca. This consistency supports the modeling approach used in this contribution for assessing future air quality scenarios, owing to changes in the emission inventories. The same approach can be used to assess the effects of reductions in other emission sources, such as old gasoline cars (high NMVOC emissions). Moreover, the modeled results and their consistency with the WE and the effects of the COVID-19 lockdown support the validity of the emission inventory from 2014, which, although not a recent one, is a useful component for modeling purposes. Future emissions inventories should follow the same approach used when building the emission inventory from 2014.
Our findings suggest that VOC emission controls should accompany a future reduction in NO x emissions to avoid an increase in O 3 levels in the urban area of Cuenca. In the future, the RTV should incorporate both NO x and VOC emission controls to verify the proper condition of exhaust catalysts for gasoline cars. Furthermore, the RTV should incorporate NO x and PM 2.5 controls for diesel vehicles.
The operation of the electric tram system will produce changes in the transportation system of Cuenca. Emissions from buses will be redistributed, alleviating their magnitude in the historic center. The effects of the COVID-19 lockdown and the modeled results presented in this contribution provide an early reference for the potential changes in the air quality of Cuenca during the upcoming years, due to the recent start of operation of the electric tram and the future shift from diesel to electric buses. | 2020-09-25T13:10:46.279Z | 2020-09-18T00:00:00.000 | {
"year": 2020,
"sha1": "31a8cfbebbb6bcfeaef01d13f83cd9bf5b08ebf8",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2073-4433/11/9/998/pdf",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "44b3df7a99b56e7d3d4a600d6beba06555c63629",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Environmental Science"
]
} |
15377072 | pes2o/s2orc | v3-fos-license | Decision Theory with Prospect Interference and Entanglement
We present a novel variant of decision making based on the mathematical theory of separable Hilbert spaces. This mathematical structure captures the effect of superposition of composite prospects, including many incorporated intentions, which allows us to describe a variety of interesting fallacies and anomalies that have been reported to characterize the decision making of real human beings. The theory characterizes entangled decision making, non-commutativity of subsequent decisions, and intention interference. We demonstrate how the violation of Savage's sure-thing principle, known as the disjunction effect, can be explained quantitatively as a result of the interference of intentions when making decisions under uncertainty. The disjunction effects observed in experiments are accurately predicted using a theorem on interference alternation that we derive, which connects aversion to uncertainty to the appearance of negative interference terms suppressing the probability of actions. The conjunction fallacy is also explained by the presence of the interference terms. A series of experiments are analysed and shown to be in excellent agreement with a priori evaluations of interference effects. The conjunction fallacy is also shown to be a sufficient condition for the disjunction effect, and novel experiments testing the combined interplay between the two effects are suggested.
Introduction
Decision theory is concerned with identifying what the optimal decisions are and how to reach them. Most of decision theory is normative and prescriptive, and assumes that people are fully informed and rational. These assumptions have been questioned early on with the evidence provided by the Allais paradox (Allais, 1953) and many other behavioral paradoxes (Camerer et al., 2003), showing that humans often deviate from the prescriptions of rational decision theory due to cognitive and emotional biases. The theories of bounded rationality (Simon, 1955), behavioral economics, and behavioral finance have attempted to account for these deviations. As reviewed by Machina (2008), alternative models of preferences over objectively or subjectively uncertain prospects have attempted to accommodate these systematic departures from the expected utility model while retaining as much of its analytical power as possible. In particular, non-additive nonlinear probability models have been developed to account for the deviations from objective to subjective probabilities observed in human agents (Quiggin, 1982; Gilboa, 1987; Schmeidler, 1989; Gilboa and Schmeidler, 1989; Cohen and Tallon, 2000; Montesano, 2008). However, many paradoxes remain unexplained or are sometimes rationalized on an ad hoc basis, which does not provide much predictive power. Various attempts to extend utility theory by constructing non-expected utility functionals (Machina, 2008) cannot resolve the known classical paradoxes (Safra and Segal, 2008). Moreover, extending the classical utility theory "ends up creating more paradoxes and inconsistencies than it resolves" (Al-Najjar and Weinstein, 2009).
Here, we propose a novel approach, developed as a part of the mathematical theory of Hilbert spaces (Dieudonné, 2006) and employing the mathematical techniques that are used in quantum theory. Because of the latter, we call this approach the Quantum Decision Theory (QDT). This approach can be thought of as the mathematically simplest and most natural extension of objective probabilities into nonlinear subjective probabilities. The proposed formalism allows us to explain quantitatively the disjunction and conjunction effects. The disjunction effect is the failure of humans to obey the sure-thing principle of classical probability theory. The conjunction effect is a logical fallacy that occurs when people assume that specific conditions are more probable than a single general one. Our QDT unearths a deep relationship between the conjunction and the disjunction effects. We show that the former is sufficient for the latter to exist.
QDT uses the same underlying mathematical structure as the one developed to establish a rigorous formulation of quantum mechanics (von Neumann, 1955). Based on the mathematical theory of separable Hilbert spaces, quantum mechanics showed how to reconcile and combine the continuous wave description with the fact that waves are organized in discrete energy packets, called quanta, that behave in a manner similar to particles. Analogously, in our framework, the qualifier quantum emphasizes the fact that a decision is a discrete selection from a large set of entangled options. Our key idea is to provide the simplest generalization of the classical probability theory underlying decision theory, so as to account for the complex dynamics of the many nonlocal hidden variables that may be involved in the cognitive and decision making processes of the brain. The mathematical theory of complex separable Hilbert spaces provides the simplest direct way to avoid dealing with the unknown hidden variables, and at the same time reflecting the complexity of nature (Yukalov, 1975). In decision making, the hidden variables can be the many unknown states of nature, the emotions, and subconscious processes.
However, it is necessary to stress that our approach does not require that a decision maker be a quantum object. All analogies with quantum processes have to be understood solely as mathematical analogies, helping the reader grasp why functional analysis is an appropriate tool for modeling decision making. Before presenting our approach, it is useful to briefly mention previous studies of decision making and of the associated cognitive processes of the brain which, superficially, could be considered as related to ours. This exposition will allow us to underline the originality and uniqueness of our approach. We do not touch here on purely physiological aspects of the problem, which are studied in medicine and the cognitive sciences. Concerning the functional aspects of decision making, we focus our efforts on its mathematical modeling.
Two main classes of theories invoke the qualifier "quantum". In the first class, one finds investigations which attempt to represent the brain as a quantum or quantum-like object (Penrose, 1989; Lockwood, 1989; Satinover, 2001), for which several mechanisms have been suggested (Fröhlich, 1968; Stuart et al., 1978, 1979; Beck and Eccles, 1992; Vitiello, 1995; Hagan et al., 2002; Pessa and Vitiello, 2003). The existence of genuine quantum effects and the operation of any of these mechanisms in the brain remain however controversial and have been criticized by Tegmark (2000) as being unrealistic. Another approach in this first class appeals to the mind-matter duality, treating mind and matter as complementary aspects and considering consciousness as a separate fundamental entity (Chalmers, 1996; Atmanspacher et al., 2002; Primas, 2003; Atmanspacher, 2003). This allows one, without insisting on the quantum nature of the brain processes, if any, to ascribe quantum properties solely to the consciousness itself, as has been advocated by Stapp (1993, 1999).
Actually, the basic idea that mental processes are similar to quantum-mechanical phenomena goes back to Niels Bohr. One of the first publications on this analogy is his paper (Bohr, 1929). Later on, he discussed many times the similarity between quantum mechanics and the function of the brain, for instance in Bohr (1933, 1937, 1961). This analogy proposes that mental processes could be modeled by quantum-mechanical wave functions, with all the consequences following from the mathematical properties of these objects. One such immediate consequence would be the appearance of interference effects that are typical of quantum mechanics. The second class of theories does not necessarily assume quantum properties of the brain or that consciousness is a separate entity with quantum characteristics. Rather, these approaches use quantum techniques as a convenient language to generalize classical probability theory. An example is provided by so-called quantum games (Meyer, 1999; Goldenberg et al., 1999; Eisert and Wilkens, 2000; Johnson, 2001; Benjamin and Hayden, 2001; Iqbal and Toor, 2001; Du et al., 2001, 2002; Lee and Johnson, 2003). According to van Enk and Pike (2002), any quantum game can be reformulated as a classical game rigged with some additional conditions. Another example is the Shor (1997) algorithm, which is purely quantum-mechanical, but solves the classical factoring problem. This shows that there is no contradiction in using quantum techniques for describing classical problems.
In any case, whether we deal with a genuine quantum system or with an extremely complex classical system, the language of quantum theory can be a convenient and effective tool for describing such complex systems (Yukalov, 1975). When dealing with genuinely quantum systems, QDT provides natural algorithms that could be used for quantum information processing, the operation of quantum computers, and the creation of artificial quantum intelligence (Yukalov and Sornette, 2008, 2009). In the case of decision making performed by real people, the subconscious activity and the underlying emotions, which are difficult to quantify, play the role of the hidden variables appearing in quantum theory.
It is important to stress that we do not assume that human brain has anything to do with a real quantum object or that consciousness possesses some underlying quantum nature. But we use the theory of complex separable Hilbert spaces as a mathematical language that is convenient for the formal description of complicated processes associated with decision making. What we actually need is just the mathematical theory of Hilbert spaces. We could even avoid the use of the term "quantum", since there is no any quantum mechanics, as a physical theory, in our approach. The sole common thing between our QDT and quantum mechanics is that both employ the theory of Hilbert spaces, characterizing the states as vectors in this space. We use the denomination "quantum" for brevity and because quantum theory is also based on the theory of Hilbert spaces. In that way, we employ the techniques of quantum theory as a convenient mathematical tool, without assuming any genuine underlying quantum processes.
As another analogy, we can mention the theory of differential equations, which was initially developed for describing the motion of planets. Later on, this theory was extended to numerous problems having nothing to do with the motion of planets and employed in a variety of branches of science as a mathematical tool. To emphasize this point, we conclude the section with an important statement that clarifies our position and helps the reader avoid any confusion.
Statement. Quantum Decision Theory is based on the mathematical techniques employed in quantum theory, using the notion of Hilbert spaces as a formal mathematical tool. But QDT does not require that a decision maker be necessarily a quantum object.
Foundations of Quantum Decision Theory
The classical approaches to decision making are based on the utility theory (von Neumann and Morgenstern, 1944;Savage, 1954). Decision making in the presence of uncertainty about the states of nature is formalized in the statistical decision theory (Lindgren, 1971;White, 1976;Hastings and Mello, 1978;Rivett, 1980;Buchanan, 1982;Berger, 1985;Marshall and Oliver, 1995;Bather, 2000;French and Insua, 2000;Raiffa and Schlaifer, 2000;Weirich, 2001). Some paradoxes, occurring in the interpretation of classical utility theory and its application to real human decision processes have been discussed, e.g., by Berger (1985), Zeckhauser (2006), and Machina (2008).
Idea of Quantum Decision Theory
Here we suggest another approach to decision making, which is principally different from the classical utility theory. We propose to define the action probability as is done in quantum mechanics, using the mathematical theory of complex separable Hilbert spaces. This proposition can be justified by invoking the following analogy. The probabilistic features of quantum theory can be interpreted as being due to the existence of so-called nonlocal hidden variables. The dynamical laws of these nonlocal hidden variables could be not merely extremely cumbersome, but even entirely unknown, similarly to the unspecified states of nature. The formalism of quantum theory is then formulated in such a way as to avoid dealing with unknown hidden variables, while at the same time reflecting the complexity of nature (Yukalov, 1975). In decision making, the role of hidden variables is played by unknown states of nature, by emotions, and by subconscious processes, for which quantitative measures are not readily available.
In the following sub-sections, we develop the detailed description of the suggested program, explicitly constructing the action probability in quantum-mechanical terms. The probability of an action is intrinsically subjective, as it must characterize intended actions by human beings. For brevity, an intended action can be called an intention or an action. In compliance with the terminology used in the theories of decision making, a composite set of intended actions, consisting of several subactions, will be called a prospect. An important feature of our approach is that we insist on the necessity of dealing not with separate intended actions, but with composite prospects, including many incorporated intentions. Only then does it become possible, within the frame of one general theory, to describe a variety of interesting unusual phenomena that have been reported to characterize the decision making properties of real human beings.
Mathematically, our approach is based on the von Neumann theory of quantum measurements (von Neumann, 1955). The relation of the von Neumann theory to quantum communication procedures has been considered by Benioff (1972). We generalize the theory to be applicable not merely to simple actions, but also to composite prospects, which is of principal importance for the appearance of decision interference. A brief account of the axiomatics of our approach has been published in recent letters (Yukalov and Sornette, 2008, 2009). The aim of the present paper is to provide a detailed explanation of the theory and to demonstrate that it can be successfully applied to real-life problems of decision making.
Main Definitions
In order to formulate the process of decision making in precise mathematical terms, it is necessary to introduce several definitions. To better understand these definitions, we shall give some very simple examples. The entity concerned with the decision making task can be a single human, a group of humans, a society, a computer, or any other system that is able to make decisions or enables them to be made. Throughout the paper, we shall employ the Dirac (1958) notations widely used in quantum theory.
Definition 1. Intended actions
An intended action which, for brevity, can be called an intention or an action, is a particular thought about doing something. Examples of intentions could be as follows: "I would like to marry" or "I would like to be rich" or "I would like to establish a firm". There can be a variety of intentions $A_i$, which are enumerated by the index $i = 1, 2, 3, \ldots$. Between any two intended actions, A and B, it is possible to define the binary operations of addition and multiplication in the same way as is standardly done in mathematical logic (Mendelson, 1965) and probability theory (Feller, 1970). The sum $A + B$ means that either A or B is intended to be accomplished. The summation of several actions is denoted as $\sum_i A_i \equiv A_1 + A_2 + \cdots$. The product $AB$ implies that both A and B are intended to be accomplished together. The product of several intended actions is denoted as $\prod_i A_i \equiv A_1 A_2 \cdots$. The total set of such intended actions, equipped with these binary operations, is called the action ring.
Definition 2. Action modes
Intention representations, or action modes, are concrete implementations of an intention. For instance, the intention "to marry" can have as representations the following variants: "to marry A" or "to marry B", and so on. The intention "to be rich" can have as representations "to be rich by working hard" or "to be rich by becoming a bandit". The intention "to establish a firm" can have as representations "to establish a firm producing cars" or "to establish a firm publishing books" and so on. We number the representations of an i-intention by the index µ = 1, 2, 3, . . . , M i . The intention representations may include not only positive intention variants "to do something" but also negative variants such as "not to do something". For example, the Hamlet's hesitation "to be or not to be" is the intention consisting of two representations, one positive and the other negative.
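To make Definitions 1 and 2 concrete, here is a toy Python rendering (ours, purely illustrative): an intention is modeled as a set of mutually exclusive action modes, the sum collects the modes of either intention, and the product forms their joint realizations.

class Action:
    # A toy action: a frozen set of mutually exclusive modes.
    def __init__(self, modes):
        self.modes = frozenset(modes)
    def __add__(self, other):          # A + B: either A or B is accomplished
        return Action(self.modes | other.modes)
    def __mul__(self, other):          # AB: both accomplished together
        return Action((a, b) for a in self.modes for b in other.modes)

marry = Action(["marry A", "marry B"])
rich = Action(["work hard", "be a gangster"])
print(sorted(map(str, (marry * rich).modes)))   # the four joint variants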
Definition 3. Mode states
The mode state, or representation state, of an action mode A iµ is denoted as the vector |A iµ > corresponding to the µ-representation of an i-intention. This vector is a member of a linear space to be defined below.
Definition 4. Mode basis
The mode basis, or representation basis, is the set $\{|A_{i\mu}\rangle\}$ of the representation states $|A_{i\mu}\rangle$ corresponding to those intention representations $A_{i\mu}$ which are classified as basic. Here "basic" means the most important and fundamental, in the sense that linear combinations of the vectors $|A_{i\mu}\rangle$ exhaust the whole set of i-intentions. The members of a mode basis are supposed to be well distinguished from each other and also normalized. This can be formalized as saying that the representation basis is orthonormal, which implies that a form, called the scalar product, is defined, such that the scalar product $\langle A_{i\mu} | A_{i\nu} \rangle$ yields the Kronecker delta symbol $\delta_{\mu\nu}$:
$$\langle A_{i\mu} | A_{i\nu} \rangle = \delta_{\mu\nu} . \qquad (1)$$
Definition 5. Mode space
The mode space consists of all possible intention states. It is formed as the closed linear envelope spanning the mode basis $\{|A_{i\mu}\rangle\}$:
$$\mathcal{M}_i \equiv \mathcal{L}\{|A_{i\mu}\rangle\} . \qquad (2)$$
Thus, we can assume that the mode space is a Hilbert space, that is, a complete normed space, with the norm generated by the scalar product.
Definition 6. Intention states
The intention state at time t is a vector corresponding to an i-intention, which can be represented as a linear combination of the states from the representation basis $\{|A_{i\mu}\rangle\}$:
$$|\psi_i(t)\rangle = \sum_\mu c_{i\mu}(t) \, |A_{i\mu}\rangle . \qquad (3)$$
The intention state (3) is a member of the mode space (2). Since the mode space has been assumed to be a Hilbert space, the associated scalar product exists and yields
$$\langle \psi_i(t) | \psi_i(t) \rangle = \sum_\mu |c_{i\mu}(t)|^2 . \qquad (4)$$
The norm of the intention state (3) is generated by the scalar product (4) as
$$\| \, |\psi_i(t)\rangle \, \| = \sqrt{\langle \psi_i(t) | \psi_i(t) \rangle} . \qquad (5)$$
The expansion coefficients in Eq. (3) are assumed to be defined by the decision maker, so that $|c_{i\mu}|^2$ gives the weight of the state $|A_{i\mu}\rangle$ in the general intention state.
Definition 7. Action prospects
A prospect π j is a conjunction of several intended actions or several intention representations. In reality, an individual is always motivated by a variety of intentions, which are mutually interconnected. Even the realization of a single intention always involves taking into account many other related intentions. So, generally, a prospect is an object of the composite type ABC · · ·, where each action can be composed of several modes.
Definition 8. Elementary prospects
An elementary prospect $e_n$ is a simple prospect formed by a conjunction of single action modes $A_{i\nu_i}$, one mode for each intention:
$$e_n \equiv \prod_i A_{i\nu_i} , \qquad n \equiv \{\nu_1, \nu_2, \nu_3, \ldots\} , \qquad (6)$$
so that, with each intention representation marked by the index $\nu_i$, the elementary prospect is labelled by the multi-index n. The elementary prospects are assumed to be mutually disjoint.
Definition 9. Basic states
Basic states are the vectors which are mapped to the elementary prospects labelled in (6). These vectors are the tensor products of the mode states $|A_{i\nu_i}\rangle$:
$$|e_n\rangle = \otimes_i \, |A_{i\nu_i}\rangle . \qquad (7)$$
Definition 10. Prospect basis
The prospect basis $\{|e_n\rangle\}$ is the family of all basic states (7) corresponding to the elementary prospects. Different states belonging to the prospect basis are assumed to be disjoint, in the sense of being orthogonal. Since the modulus of each state has no special meaning, these states are also normalized to one. This can be formalized as the orthonormality of the basis, for which there exists a scalar product such that
$$\langle e_m | e_n \rangle = \delta_{mn} , \qquad (8)$$
where
$$\delta_{mn} \equiv \prod_i \delta_{\mu_i \nu_i} \qquad (9)$$
is the product of the Kronecker symbols.
Definition 11. Mind space
The space of mind is defined as the closed linear envelope over the prospect basis $\{|e_n\rangle\}$:
$$\mathcal{M} \equiv \mathcal{L}\{|e_n\rangle\} = \otimes_i \mathcal{M}_i . \qquad (10)$$
This is a Hilbert space, being the direct product of the mode spaces (2), which can be thought of as a possible mathematical representation of the mind. Note that the closed linear envelope (10) exhausts all possible states that can be expanded over the total basis $\{|e_n\rangle\}$. Mathematically, $\mathcal{L}\{|e_n\rangle\}$ is identical to $\otimes_i \mathcal{M}_i$. Therefore the product $\otimes_i \mathcal{M}_i$ is a direct consequence of the structure of $\mathcal{L}\{|e_n\rangle\}$.
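A small numerical sketch (ours) illustrates Definitions 9 to 12 for two intentions with two modes each: the basic states are Kronecker products of mode states, the resulting basis is orthonormal, and the dimensionality of the mind space is the product of the numbers of modes.

import numpy as np

A, B = np.eye(2)                        # mode states of the first intention
W, G = np.eye(2)                        # mode states of the second intention

# Basic states |AW>, |AG>, |BW>, |BG> as tensor products, eq. (7).
basis = np.stack([np.kron(x, y) for x in (A, B) for y in (W, G)])

assert np.allclose(basis @ basis.T, np.eye(4))   # <e_m|e_n> = delta_mn, eqs. (8)-(9)
print("dim of mind space:", basis.shape[1])      # prod_i M_i = 2 x 2 = 4, eq. (11)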
Definition 12. Mind dimensionality
The dimensionality of the mind space (10), which can be termed the dimensionality of mind, is
$$\dim \mathcal{M} = \prod_i M_i , \qquad (11)$$
where $M_i$ is the number of the i-intention modes.
Definition 13. Prospect states
A prospect state |π j > is a member of the mind space (10). The prospects are enumerated with the index j = 1, 2, . . .. The total set {|π j >} of all prospect states |π j >, corresponding to all admissible prospects, forms a subset of the space of mind. The set {|π j >} ⊂ M can be called the prospect-state set. Note that the vectors |π j > are not necessarily orthogonal with each other and, generally, are not normalized. The normalization condition will be formulated for the prospect probabilities to be defined below.
Definition 14. Strategic state
The strategic state of mind at time t is a given specific vector
$$|\psi_s(t)\rangle = \sum_n c_n(t) \, |e_n\rangle , \qquad (12)$$
which is represented as a linear combination of the prospect basic states $\{|e_n\rangle\}$. The coefficients $c_n(t)$ are given complex-valued functions of time, whose temporal evolution is associated with the particular individual and context. The strategic state (12) belongs to the mind space (10), which is a Hilbert space endowed with the scalar product
$$\langle \psi_s(t) | \psi_s(t) \rangle = \sum_n |c_n(t)|^2 . \qquad (13)$$
The norm of the strategic state (12) is generated by the scalar product (13),
$$\| \, |\psi_s(t)\rangle \, \| = \sqrt{\langle \psi_s(t) | \psi_s(t) \rangle} . \qquad (14)$$
The strategic state of mind is normalized to unity, so that
$$\| \, |\psi_s(t)\rangle \, \| = 1 . \qquad (15)$$
Then, from the definition of the scalar product (13), we have
$$\sum_n |c_n(t)|^2 = 1 . \qquad (16)$$
The strategic state of mind is a fixed vector characterizing a particular decision maker, with his/her beliefs, habits, principles, etc., that is, describing each decision maker as a unique subject. Hence, each space of mind possesses a unique strategic state. Different decision makers possess different strategic states.
Entangled Prospects
Prospect states can be of two qualitatively different types, disentangled and entangled.
Definition 15. Disentangled states
A disentangled prospect state is a prospect state which is represented as the tensor product of the intention states (3):
$$|\pi\rangle = \otimes_i \, |\psi_i\rangle . \qquad (17)$$
We define the disentangled set as the collection of all admissible disentangled prospect states of form (17):
$$\mathcal{D} \equiv \{ \otimes_i \, |\psi_i\rangle \} . \qquad (18)$$
Definition 16. Entangled states
An entangled prospect state is any prospect state that cannot be reduced to the tensor product form of disentangled prospect states (17).
In quantum theory, it is possible to construct various entangled and disentangled states. For the purpose of developing a theory of decision making, let us illustrate the above definitions by an example of a prospect consisting of two intentions with two representations each. Let us consider the prospect of the following two intentions: "to get married" and "to become rich". And let us assume that the intention "to get married" consists of two representations, "to marry A", with the representation state |A >, and "to marry B", with the representation state |B >. And let the intention "to become rich" be formed by two representations, "to become rich by working hard", with the representation state |W >, and "to become rich by being a gangster", with the representation state |G >. Thus, there are two intention states of type (3),
$$|\psi_1\rangle = a_1 |A\rangle + a_2 |B\rangle , \qquad |\psi_2\rangle = b_1 |W\rangle + b_2 |G\rangle . \qquad (19)$$
The general prospect state has the form
$$|\pi\rangle = c_{11} |AW\rangle + c_{12} |AG\rangle + c_{21} |BW\rangle + c_{22} |BG\rangle , \qquad (20)$$
where the coefficients $c_{ij}$ belong to the field of complex numbers. Depending on the values of the coefficients $c_{ij}$, the prospect state (20) can be either disentangled or entangled. If it is disentangled, it must be of the tensor product type (17), which for the present case reads
$$|\psi_1\rangle \otimes |\psi_2\rangle = a_1 b_1 |AW\rangle + a_1 b_2 |AG\rangle + a_2 b_1 |BW\rangle + a_2 b_2 |BG\rangle . \qquad (21)$$
Both states (20) and (21) include four elementary-prospect states (7):
• "to marry A and to work hard", |AW >,
• "to marry A and become a gangster", |AG >,
• "to marry B and to work hard", |BW >,
• "to marry B and become a gangster", |BG >.
However, the structure of states (20) and (21) is different. The prospect state (20) is more general and can be reduced to state (21), but the opposite may not be possible. For instance, the prospect state
$$c_{12} |AG\rangle + c_{21} |BW\rangle , \qquad (22)$$
which is a particular example of state (20), cannot be reduced to any of the states (21), provided that both coefficients $c_{12}$ and $c_{21}$ are non-zero. In quantum mechanics, this state would be called the Einstein-Podolsky-Rosen state, one of the most famous examples of an entangled state (Einstein et al., 1935). Another example is the prospect state
$$c_{11} |AW\rangle + c_{22} |BG\rangle , \qquad (23)$$
whose quantum-mechanical analog would be called the Bell state (Bell, 1964). In the case where both $c_{11}$ and $c_{22}$ are non-zero, the Bell state cannot be reduced to any of the states (21) and is thus entangled.
However, the structure of states (20) and (21) is different. The prospect state (20) is more general and can be reduced to state (21), but the opposite may not be possible. For instance, the prospect state c 12 |AG > +c 21 |BW > , which is a particular example of state (20) cannot be reduced to any of the states (21), provided that both coefficients c 12 and c 21 are non-zero. In quantum mechanics, this state would be called the Einstein-Podolsky-Rosen state, one of the most famous examples of an entangled state (Einstein et al., 1935). Another example is the prospect state whose quantum-mechanical analog would be called the Bell state (Bell, 1964). In the case where both c 11 and c 22 are non-zero, the Bell state cannot be reduced to any of the states (21) and is thus entangled.
In contrast with the above two examples, the prospect states
$$c_{11} |AW\rangle + c_{12} |AG\rangle , \qquad c_{11} |AW\rangle + c_{21} |BW\rangle \qquad (24)$$
are disentangled, since both of them can be reduced to the form (21). Since the coefficients $c_{ij} = c_{ij}(t)$ are, in general, functions of time, it may happen that a prospect state is entangled at a particular time but becomes disentangled at another time or, vice versa, a disentangled prospect state can be transformed into an entangled one as time changes.
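For this two-intention, two-mode example, state (20) factorizes as in (21) exactly when its 2x2 coefficient matrix has rank one, that is, zero determinant. The following check (ours, not part of the original text) applies this criterion to the examples above.

import numpy as np

def is_disentangled(c, tol=1e-12):
    # Rank-one (zero-determinant) test for the 2x2 coefficient matrix of state (20).
    return abs(np.linalg.det(np.asarray(c, dtype=complex))) < tol

print(is_disentangled([[0.0, 0.6], [0.8, 0.0]]))   # EPR-type state (22): False
print(is_disentangled([[0.6, 0.0], [0.0, 0.8]]))   # Bell-type state (23): False
print(is_disentangled([[0.6, 0.8], [0.0, 0.0]]))   # first state of (24): True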
The state of a human being is governed by his/her physiological characteristics and the available information (Bechara et al., 2000;Dickhaut et al., 2003). These properties are continuously changing in time. Hence the strategic state (12), specific of a person at a given time, may also display temporal evolution, according to different homeostatic processes adjusting the individual to the changing environment.
Decision Making
We describe the process of decision making as an intrinsically probabilistic procedure. The first step consists in evaluating consciously and/or subconsciously the probabilities of choosing different actions from the point of view of their usefulness and/or appeal to the choosing agent. Mathematically, this is described as follows.
Definition 17. Prospect set
The total family L ≡ {π j : j = 1, 2, . . .} of all prospects π j , among which one makes a choice, is called the prospect set.
Definition 18. Prospect operators
The prospect operator, corresponding to a prospect π_j with the prospect state |π_j>, is

P̂(π_j) ≡ |π_j><π_j| .   (25)

The prospect operators in decision theory are analogous to the operators of local observables in quantum theory. The prospect probabilities are defined as the expectation values of the prospect operators with respect to the given strategic state. The strategic state of mind of an agent at time t is represented by the state |ψ_s(t)>.
Definition 19. Prospect probabilities
The probability of realizing a prospect π_j, with the prospect state |π_j>, under the given strategic state |ψ_s(t)>, characterizing the agent's state of mind at time t, is the expectation value of the prospect operator (25):

p(π_j, t) ≡ <ψ_s(t) | P̂(π_j) | ψ_s(t)> = |<π_j | ψ_s(t)>|^2 .   (26)

The prospect probabilities defined in (26) are assumed to possess all standard probability properties, with the normalization condition

Σ_j p(π_j, t) = 1 ,  0 ≤ p(π_j, t) ≤ 1 .   (27)

The prospect probabilities are defined in Eq. (26) through the prospect states and the strategic state of mind. The latter is normalized to one, according to Eq. (15). By their definition, the prospect probabilities have to sum to one, as in Eq. (27). But the prospect states themselves do not need to be normalized to one, which means that different prospects can have, and usually do have, different weights, corresponding to their different probabilities. In physics, this situation would be similar to defining the cross-section in a scattering experiment over a system containing elementary particles (elementary prospects) and composite clusters (composite prospects) formed by several particles.
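As a purely illustrative numerical sketch (ours; the amplitudes and prospect states are made up), the probabilities (26) can be evaluated directly from vector representations of the states; the prospect maximizing p(π_j) is then the optimal one in the sense of Definition 20 below.

    import numpy as np

    # Basis: four elementary prospects; all numbers are illustrative assumptions.
    psi_s = np.array([0.6, 0.2, 0.3, 0.5], dtype=complex)
    psi_s /= np.linalg.norm(psi_s)          # strategic state normalized to one, Eq. (15)

    prospect_states = {                     # prospect states need not be normalized
        "pi_1": np.array([1.0, 1.0, 0.0, 0.0], dtype=complex),
        "pi_2": np.array([0.0, 0.0, 1.0, 1.0], dtype=complex),
    }

    # Eq. (26): p(pi_j) = |<pi_j|psi_s>|^2, rescaled so the p's sum to one, Eq. (27)
    p = {k: abs(np.vdot(v, psi_s))**2 for k, v in prospect_states.items()}
    total = sum(p.values())
    p = {k: val / total for k, val in p.items()}

    optimal = max(p, key=p.get)             # the prospect with the largest probability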
In the traditional theory of decision making, based on the utility function, the optimal decision corresponds, by definition, to the maximal expected utility, which is associated with the maximal anticipated usefulness and profit resulting from the chosen action. In contrast, our QDT recognizes that the behavior of an individual is probabilistic, not deterministic. The prospect probability (26) quantifies the probability that a given individual chooses the prospect π_j, given his/her strategic state of mind |ψ_s(t)> at time t. In experiments, this translates into a prediction on the frequency of the decisions taken by an ensemble of subjects under the same conditions. The observed frequencies of different decisions, taken by an ensemble of non-interacting subjects deciding under the same conditions, serve as the observable measure of the subjective probability. It is, actually, a known fact that subjective probabilities can be calibrated by frequencies or fractions (Tversky and Kahneman, 1973; Kaplan and Garrick, 1981).
This specification also implies that the same subject, prepared under the same conditions with the same strategic state of mind |ψ_s> at two different times, may choose two different prospects among the same set of prospects, with relative frequencies determined by the corresponding prospect probabilities (26). Verifying this prediction is a delicate empirical question, because of the possible impact of the "memory" of past decisions on the next one. For the prediction to hold, the two repetitions of the decision process should be independent; otherwise, the strategic state of mind in the second experiment keeps a memory of the previous choice, which biases the results. This should not be confused with the fact that the projection of the strategic state of mind onto the prospect state |π_j>, when the decision is made to realize this prospect, ensures that the individual will in general keep his/her decision, whatever it is, when probed a second time sufficiently soon after the first decision, so that the strategic state of mind realized just after the projection has not yet had time to evolve appreciably.
Definition 20. Optimal prospect
The prospect π* is called optimal if and only if its probability is the largest among the probabilities of all prospects from the considered prospect set L,

p(π*) = max_j p(π_j) .

In QDT, the concept of an optimal decision is replaced by a probabilistic decision: the prospect which makes the probability p(π_j), given by (26), maximal is the one that corresponds best to the given strategic state of mind of the decision maker. In that sense, the prospect which makes p(π_j) maximal can be called "optimal with respect to the strategic state of mind". Using the mapping between the subjective probabilities and the frequentist probabilities observed on ensembles of individuals, the prospect that makes p(π_j) maximal will be chosen by more individuals than any other prospect, in the limit of large population samples. However, other, less probable prospects will also be chosen by smaller subsets of the population.
Remark 1. Entangled decision making
As is explained above, a prospect state |π j > does not have in general the form of the product (17), which means that it is entangled. The strategic state |ψ s > can also be entangled. Therefore, the prospect probability p(π j ), in general, cannot be reduced to a product of terms, but has a more complicated structure, as will be shown below. In other words, the decision making process is naturally entangled.
Consider the example of Section 2 of the specific prospect state (20) associated with the two intentions "to get married" and "to become rich". Suppose that A does not like gangsters, so that it is impossible to marry A and at the same time be a gangster. This implies that the prospect-representation AG cannot be realized, hence c_12 = 0. Assume that B dreams of becoming rich as fast as possible, and a gangster spouse is much more alluring to B than a dull person working hard, which implies that c_21 = 0. In this situation, the prospect state (20) reduces to the entangled Bell state c_11 |AW> + c_22 |BG>. A decision performed under these conditions, resulting in an entangled state, is an entangled decision.
Remark 2. Noncommutativity of subsequent decisions
There exist numerous real-life examples in which decision makers fail to follow their plans and change their minds simply because they experience different outcomes from those on which their intended plans were based. This change of plans after experiencing particular outcomes is the effect known as dynamic inconsistency (Frederick et al., 2002; Barkan et al., 2005). In our language, this is a simple consequence of the non-commutativity of subsequent decisions, resulting from the entanglement between intention representations and caused by the existence of intention interference.
Prospect Interference
As soon as one accepts a description of decision making that invokes the mathematical techniques of quantum theory, as suggested by Bohr (1929, 1933, 1937, 1961), one inevitably meets the effects of interference. The possible occurrence of interference in problems of decision making has been mentioned before on formal grounds (see, e.g., Busemeyer et al., 2006). However, no general theory has been suggested that would explain why and when such effects appear, how to predict them, and how to give a quantitative analysis of them. In our approach, interference in decision making arises only when one takes a decision involving composite prospects. The corresponding mathematical treatment of these interferences within QDT is presented in the following subsections.
Illustration of Interference in Decision Making
As an illustration, let us consider the following situation of two intentions, "to get a friend" and "to become rich". Let the former intention have two representations, "to get the friend A" and "to get the friend B". And let the second intention also have two representations, "to become rich by working hard" and "to become rich by being a gangster". The corresponding strategic state of mind is given by Eq. (12),

|ψ_s> = c_11 |AW> + c_12 |AG> + c_21 |BW> + c_22 |BG> ,

with the evident notation for the basic states and coefficients. Suppose that one does not wish to choose between these two friends in an exclusive manner, but rather envisages being a friend to A as well as to B, with appropriate weights. This means that one deliberates between the intention representations A and B, while the way of life, either to work hard or to become a gangster, has not yet been decided.
The corresponding composite prospects are characterized by prospect states whose coefficients define the weights of the intended actions, among which the choice is yet to be made. One should not confuse the intended actions with actions that have already been realized. One can perfectly well deliberate between keeping this or that friend, in the same way as one would think about marrying A or B in the example above. This means that the choice has not yet been made; before it is made, there exist deliberations involving stronger or weaker intentions toward both possibilities. Of course, one cannot marry both (at least in most Christian communities). But before marriage, there can exist the dilemma between choosing this or that individual. Calculating the scalar products, we find the prospect probabilities

p(π_j) = |<π_j | ψ_s>|^2  (j = 1, 2) .   (31)

Recall that the prospects are characterized by vectors pertaining to the space of mind M, which are not necessarily normalized to one or orthogonal to each other. The main constraint is that the total set of prospect states L = {|π_j>} be such that the related probabilities p(π_j) ≡ |<π_j|ψ_s>|^2 be normalized to one, according to the normalization condition (27).
The probabilities (31) can be rewritten in another form by introducing the partial probabilities p_1(π_j) and p_2(π_j) of the two elementary prospects composing π_j, and the interference terms

q(π_j) ≡ p(π_j) − p_1(π_j) − p_2(π_j) .   (33)

Then the probabilities (31) become

p(π_j) = p_1(π_j) + p_2(π_j) + q(π_j) .

Let us define the uncertainty angles Δ(π_j) and the uncertainty factors

ϕ(π_j) ≡ cos Δ(π_j) .

Using these, the interference terms (33) take the form

q(π_j) = 2 ϕ(π_j) √( p_1(π_j) p_2(π_j) ) .

The interference terms characterize the existence of deliberations between the decisions of choosing a friend and, at the same time, a type of work. This example illustrates the observation that the phenomenon of decision interference appears when one considers a composite entangled prospect with several intention representations assumed to be realized simultaneously. Thus, we can state that interference in decision making appears when one decides about a composite entangled prospect.
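The appearance of a non-zero q(π_j) can be made concrete with a small numerical sketch (ours; the amplitudes are invented, and the prospect state is chosen to span the two elementary prospects AW and AG):

    import numpy as np

    # Basis order: |AW>, |AG>, |BW>, |BG>
    c = np.array([0.5, 0.5, 0.5, 0.5])                  # strategic-state coefficients c_ij
    pi1 = np.array([1.0, 1.0, 0.0, 0.0]) / np.sqrt(2)   # composite prospect over AW and AG

    p_full = abs(np.vdot(pi1, c))**2                    # p(pi_1) = |<pi_1|psi_s>|^2 = 0.5
    partials = [abs(pi1[k] * c[k])**2 for k in (0, 1)]  # partial probabilities, 0.125 each
    q = p_full - sum(partials)                          # interference term q(pi_1) = +0.25

Here the interference is constructive (q > 0), corresponding to the uncertainty factor ϕ(π_1) = 1; flipping the sign of one amplitude in c makes the interference destructive.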
For the above example of decision making in the case of two intentions, "to get a friend" and "to be rich", the appearance of the interference can be understood as follows. In real life, it is problematic, and practically impossible, to be a very close friend to several persons simultaneously, since conflicts of interest often arise between the friends. For instance, a friendly action toward one friend may upset or even harm another friend. Any decision making involving mutual correlations between two persons necessarily requires taking into account their sometimes conflicting interests. This is, actually, one of the origins of interference in decision making. Another powerful origin of intention interference is the existence of emotions, as will be discussed in the following sections.
Conditions for the Presence of Interference
The situations for which intention interferences cannot appear can be classified into two cases, which are examined below. From this classification, we conclude that the necessary conditions for the appearance of intention interferences are that the dimensionality of mind should be not lower than two and that there should be some uncertainty in the considered prospect. These conditions imply that the considered prospect can be entangled.
Case 1. One-dimensional mind
Suppose there are many intentions {A_i}, enumerated by the index i = 1, 2, . . ., whose number can be arbitrary, but each intention possesses only a single representation |A_i>. Hence, the dimension of mind is dim(M) = 1. Only a single basis vector exists:

|e> = ⊗_i |A_i> .

In this one-dimensional mind, all prospect states are disentangled, being of the type

|π> = c |e> .

Therefore, only one probability exists:

p(π) = |<π | ψ_s>|^2 = 1 .

Thus, despite the possibly large number of arbitrary intentions, they do not interfere, since each of them has just one representation. There can be no intention interference in a one-dimensional mind.
Case 2. Absence of uncertainty
Another important condition for the appearance of intention interference is the existence of uncertainty. To understand this statement, let us consider a given mind with a large dimensionality dim(M) > 1, characterized by a strategic state |ψ_s>. Let us analyze a certain prospect with the state

|π_j> = c_j |ψ_s>  (|c_j| = 1) .

Then the corresponding prospect probability is

p(π_j) = |<π_j | ψ_s>|^2 = |c_j|^2 = 1 ,

and no interference can arise. Thus, the necessary conditions for intention interference are the existence of uncertainty and a dimensionality of mind not lower than two.
Interference Alternation
Let us consider two intentions, one composing a set {A_i} of M_1 representations and another forming a set {X_j} of M_2 representations. The total family of intention representations is therefore

{ A_i, X_j :  i = 1, 2, . . . , M_1 ;  j = 1, 2, . . . , M_2 } .   (38)

The prospect basis is the set {|A_i X_j>}. The strategic state of mind can be written as an expansion over this basis,

|ψ_s> = Σ_{ij} c_ij |A_i X_j> ,

with the coefficients satisfying the normalization

Σ_{ij} |c_ij|^2 = 1 .
Let us assume that we are mainly interested in the representation set {A_i}, while the representations from the set {X_j} are treated as additional. A prospect π_i ≡ A_i X, where X = ∪_j X_j, which is formed of a fixed intention representation A_i and which can be realized under the occurrence of any of the representations X_j, corresponds to the prospect state

|π_i> = Σ_j b_j |A_i X_j> .   (41)

The probability of realizing the considered prospect π_i is

p(π_i) = |<π_i | ψ_s>|^2 ,   (42)

according to definition (26). Following the above formalism, used for describing intention interferences, we use the notation

p(A_i X_j) ≡ |b_j c_ij|^2   (43)

for the joint probability of A_i and X_j, and we denote the interference terms as

q(A_i X) ≡ p(π_i) − Σ_j p(A_i X_j) .   (44)

Then the probability of π_i, given by Eq. (42), becomes

p(π_i) = Σ_j p(A_i X_j) + q(A_i X) .   (45)

The interference terms appear due to the existence of uncertainty. Therefore, we may define the uncertainty angles Δ(A_i X), given by the phases of the corresponding interference terms, and the uncertainty factors

ϕ(A_i X) ≡ cos Δ(A_i X) .   (47)

Then the interference terms (44) take the form

q(A_i X) = 2 ϕ(A_i X) Σ_{j<k} √( p(A_i X_j) p(A_i X_k) ) .   (48)

It is convenient to denote the sum of the interference terms associated with a prospect by

q(π_i) ≡ q(A_i X) .   (49)

This allows us to rewrite the prospect probability (45) as

p(π_i) = Σ_j p(A_i X_j) + q(π_i) .   (50)

The joint and conditional probabilities are related in the standard way,

p(A_i X_j) = p(A_i | X_j) p(X_j) .   (51)

In view of the normalization condition (27), we have Σ_i p(π_i) = 1, which means that the family of intended actions (38) is such that at least one of the representations from the set {A_i} has to be certainly realized. We also assume that at least one of the representations from the set {X_j} necessarily happens, that is,

Σ_j p(X_j) = 1 .   (52)

Along with these conditions, we keep in mind that at least one of the representations from the set {A_i} must be realized for each given X_j, which implies that

Σ_i p(A_i | X_j) = 1 .   (53)

Then we see that Σ_i q(A_i X) = 0. By introducing the prospect utility factor

f(π_i) ≡ Σ_j p(A_i | X_j) p(X_j) ,   (54)

conditions (52) and (53) can be combined into the single normalization condition

Σ_j f(π_j) = 1 .   (55)
The above consideration can be generalized into the following statement.
Theorem 1. Interference alternation: The process of decision making, associated with the probabilities p(π j ) of the prospects π j ∈ L, occurring under the normalization conditions (27) and (55), is characterized by alternating interference terms, such that the total interference vanishes, which implies the property of interference alternation j q(π j ) = 0 .
Proof: From the above definitions, it follows that the prospect probability has the form

p(π_j) = f(π_j) + q(π_j) .   (57)

From here, taking into account the normalization conditions (27) and (55), we get the alternation property (56).
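A minimal numeric check of Theorem 1 (our sketch; the amplitudes are made up, and the renormalizations stand in for the exact conditional-probability structure (51)-(55)):

    import numpy as np

    # Basis |A1X1>, |A1X2>, |A2X1>, |A2X2>; illustrative amplitudes.
    c = np.array([0.8, 0.2, 0.4, 0.4])            # strategic state (already normalized)
    b = np.array([1.0, 1.0]) / np.sqrt(2)         # weights b_j of the uncertain X_j

    # Prospect states |pi_i> = sum_j b_j |A_i X_j>, as in Eq. (41)
    pis = [np.array([b[0], b[1], 0, 0]), np.array([0, 0, b[0], b[1]])]

    p = np.array([abs(np.vdot(v, c))**2 for v in pis])
    p /= p.sum()                                  # normalization (27)

    joint = np.array([[abs(b[j] * c[2 * i + j])**2 for j in (0, 1)] for i in (0, 1)])
    joint /= joint.sum()                          # joint probabilities, Eq. (43)

    q = p - joint.sum(axis=1)                     # interference terms, Eq. (44)
    print(q, q.sum())                             # terms of opposite sign, summing to zero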
Equality (56) shows that, if at least one of the terms is non-zero, some of the interference terms are necessarily negative and some are necessarily positive. Therefore, some of the probabilities are depressed, while others are enhanced. This alternation of the interference terms will be shown below to be a pivotal feature providing a clear explanation of the disjunction effect. It is worth emphasizing that the violation of the sure-thing principle, resulting in the disjunction effect, will be shown not to be due simply to the existence of interferences as such, but more precisely to the interference alternation.
For instance, the depression of some probabilities can be associated with uncertainty aversion, which makes less probable an action under uncertain conditions. In contrast, the probability of other intentions, containing less or no uncertainty, will be enhanced by positive interference terms. This interference alternation is of crucial importance for the correct description of decision making, without which the known paradoxes cannot be explained.
Interference Quarter Law
In agreement with the form (57), the prospect probability p(π_j) is the sum of two terms: the utility factor f(π_j) and the interference term q(π_j). The first term defines the utility of the prospect for the decision maker. The second term characterizes the attractiveness of the prospect for this decision maker, or a subjectively defined prospect quality. Therefore, the quantity q(π_j) can be called the attraction factor or quality factor. As has been stressed several times throughout the paper, this reflects the fact that the interference terms embody the subjective feelings and emotions of the decision maker.
The appearance of the interference terms is a consequence of using quantum-theoretical techniques to describe the process of decision making. However, the possible occurrence of interference as such does not yet provide an explanation of paradoxical effects in human decision making. If we simply postulated the existence of the interference terms and fitted them to particular experiments, this would have no scientific value. Our approach may acquire the status of a theory only if (i) it explains the conditions under which the interference terms appear, (ii) it delineates their underlying origin, and (iii) it provides a procedure, even an approximate one, for their quantitative evaluation. The following proceeds to demonstrate these three points.
Aggregate Nature of Quantum Decision Theory
In the previous sections, we uncovered two important properties of the interference terms. First of all, we showed that these terms arise only when the considered prospects are composite. Second, we derived the theorem of interference alternation (Theorem 1). These properties clarify the conditions under which interference can arise. But they are not yet sufficient for estimating the values of the interference terms.
Strictly speaking, being defined to reflect subjective factors embodying subconscious feelings, emotions, and biases, the interference terms are contextual. This means that the values of q can be different for different decision makers. Moreover, they can be different for the same decision maker at different times. These features seem natural when one keeps in mind real humans, whose decisions usually differ, even under identical conditions. It is also known that the same decision maker can vary his/her decisions at different times and under different circumstances. But focusing solely on the contextual character of the interference terms gives the wrong impression of a lack of predictive power, which would make the approach rather meaningless.
Fortunately, there is a way around the problem of contextuality, based on the fact that QDT has been constructed as a probabilistic theory, with the probabilities interpreted in the frequentist sense. This is equivalent to saying that QDT is a theory of the aggregate behavior of a population. In other words, the predictions of the theory are statistical statements concerning a population of individual behaviors: QDT provides the probability for a given individual to take this or that decision, interpreted as the fraction of individuals taking that decision.
Keeping in mind this aggregate nature of QDT, there is no need to discuss the specific values of the factor q appropriate to particular decision makers. But it is necessary to evaluate typical, or expected values of q, corresponding to an ensemble of decision makers under given conditions. In the following subsections, we show how this can be done. Knowing the expected value of q makes it possible to predict the typical behavior of decision makers.
Binary Prospect Set
For concreteness, let us consider the case of two prospects. Suppose one deliberates between the intended actions A and B, under an additional intention with two modes, X = X_1 + X_2, so that one chooses between the two composite prospects

π_A = A X ,  π_B = B X .   (58)

The interference terms (48) can be rewritten as

q(AX) = 2 ϕ(AX) √( p(AX_1) p(AX_2) ) ,  q(BX) = 2 ϕ(BX) √( p(BX_1) p(BX_2) ) .   (59)

The interference-alternation theorem (Theorem 1), which leads to (56), implies that

q(AX) + q(BX) = 0   (60)

and

sign ϕ(AX) = − sign ϕ(BX) .   (61)

This defines the relation between the uncertainty factors. A fundamental, well-documented characteristic of human beings is their aversion to uncertainty, i.e., the preference for known risks over unknown risks (Epstein, 1999). As a consequence, the propensity/utility (and therefore the probability) to act under larger uncertainty is smaller than under smaller uncertainty. Mechanically, this implies that it is possible to specify the signs of the uncertainty factors entering (61).
To find the amplitudes of the uncertainty factors, we may proceed as follows. By the definition of these factors,

−1 ≤ ϕ(AX) ≤ 1 ,  −1 ≤ ϕ(BX) ≤ 1 .   (62)

Without any other information, the simplest prior is to assume a uniform distribution of the absolute values of the uncertainty factors in the interval [0, 1], so that their expected values are

|ϕ(AX)| = |ϕ(BX)| = 1/2 .   (63)

Choosing the average values of the uncertainty factors in this way is equivalent to using a representative agent, while the general approach takes into account a pre-existing heterogeneity. That is, the values (63) should be treated as estimates of the expected uncertainty factors, corresponding to these factors averaged with the uniform distribution over a large number of agents.
To complete the calculation of q(π_A) and q(π_B), given by (59), we write the joint probabilities under the square roots in the form (51), p(AX_j) = p(A|X_j) p(X_j), and assume the noninformative uniform prior for all probabilities appearing below the square roots, so that their expected values are all 1/2, since each varies between 0 and 1. Using these values in Eq. (59) results in the interference-quarter law

|q(π_A)| = |q(π_B)| = 2 × (1/2) × √( (1/2)^4 ) = 1/4 ,

valid for the four-dimensional mind composed of two intentions with two representations each.
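The plug-in arithmetic behind the quarter law can be verified in two lines (our sketch):

    import math

    # 2 |phi| sqrt( p(A|X1) p(X1) p(A|X2) p(X2) ), with all expected values set to 1/2
    q = 2 * 0.5 * math.sqrt(0.5 * 0.5 * 0.5 * 0.5)
    print(q)  # 0.25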
Expected Value of Interference Terms
In the previous subsection, we have shown that, in the case of a binary prospect set, the magnitude of the interference term can be estimated by the value 1/4. Now we extend this result by demonstrating that the expected value of the interference-term magnitude can be estimated as 1/4 for an arbitrary prospect, under quite general conditions. The interference term, or attraction factor, q(π_j), is defined by emotions, subconscious feelings, and other hidden variables. Strictly speaking, it is contextual, depending on the particular decision maker at a given time. For an ensemble of decision makers, the interference term can be treated as a random variable in the interval [−1, 1], so that the modulus |q(π_j)| of the attraction factor is a random variable in the interval [0, 1].
Let the distribution of this random variable be ρ(ξ), with ξ in the interval [0, 1]. The expectation value of the modulus of the attraction factor is

q̄ ≡ ∫_0^1 ξ ρ(ξ) dξ .   (65)
By its definition, the distribution is normalized,

∫_0^1 ρ(ξ) dξ = 1 .   (66)

Since the exact form of this distribution is not known, we consider two limiting cases. One limiting case is provided by a distribution concentrated at the origin, described by the Dirac delta function,

ρ_1(ξ) = δ(ξ) .

Recall that the delta function is defined through the integral

∫_{−a}^{a} h(ξ) δ(ξ) dξ = h(0) ,

where h(ξ) is any smooth function of ξ and a > 0. The delta distribution is normalized. Another limiting case is the uniform distribution on the interval [0, 1],

ρ_2(ξ) = Θ(1 − ξ) ,

expressed through the unit-step function Θ(x), which is also normalized on [0, 1]. Knowing only these two limiting cases, we may model the unknown distribution ρ(ξ) by their average,

ρ(ξ) = (1/2) [ δ(ξ) + Θ(1 − ξ) ] .

This distribution, by construction, is normalized as in (66). Calculating the expected value (65), we obtain

q̄ = (1/2) × 0 + (1/2) × (1/2) = 1/4 .

Thus, the expected value of the modulus of the interference term is again given by the quarter law: q̄ = 1/4. This allows us to quantitatively estimate the influence of emotions in decision making and to predict, at the aggregate level, the average behavior of typical decision makers. It is appropriate to remember that it was Bohr (1929, 1933, 1937, 1961) who advocated throughout his life the idea that mental processes bear close analogies with quantum processes. The analogies should be understood here in the sense of a similar theoretical description, not necessarily in the sense of physiological equivalence. Since interference is one of the most striking characteristic features of quantum processes, the analogy suggests that it should arise in mental processes as well. The existence of interference in decision making disturbs the classical additivity of probabilities. Indeed, we take as evidence of this the nonadditivity of probabilities repeatedly observed in psychology (Tversky and Koehler, 1994; Fox et al., 1996; Rottenstreich and Tversky, 1997), although it had not previously been connected with interference.
It is also important to stress that the mere existence of interference as such does not allow one to make reasonable predictions about human decision making. It is necessary to derive the main general properties of interference in order to make this notion operationally meaningful. The general properties we have derived are: • Interference appears only for composite prospects in the presence of uncertainty.
• Interference terms satisfy the alternation condition formalized in Theorem 1.
• The expected value of the interference-term magnitude can be estimated by the quarter law.
Equipped with the knowledge of these properties, it becomes possible to analyze the influence of interference on human decision making and explain the corresponding paradoxical effects.
Disjunction Effect
The disjunction effect was first specified by Savage (1954) as a violation of the "sure-thing principle", which can be formulated as follows (Savage, 1954): if the alternative A is preferred to the alternative B when an event X_1 occurs, and it is also preferred to B when an event X_2 occurs, then A should be preferred to B when it is not known which of the events, X_1 or X_2, has occurred.
Sure-Thing Principle
For the purpose of self-consistency, let us recall the relationship between the sure-thing principle and classical probability theory. Consider a field of events {A, B, X_j | j = 1, 2, . . .} equipped with classical probability measures (Feller, 1970). We denote the classical probability of an event A by the capital letter P(A), in order to distinguish it from the probability p(A) defined in the previous sections by means of quantum rules. We denote, as usual, the conditional probability of A under the knowledge of X_j by P(A|X_j), and the joint probability of A and X_j by P(AX_j). We assume that at least one of the events X_j from the set {X_j} certainly happens, which implies that

Σ_j P(X_j) = 1 .
The probability of A, when X_j is not specified, that is, when at least one of the X_j happens, is denoted by P(AX), with X = ∪_j X_j. The same notations are applied to B. Following the common reasoning, we understand the statement "A is preferred to B" as meaning P(A) > P(B). Then the following theorem is valid: if

P(A|X_j) > P(B|X_j)  for all j ,   (73)

then

P(AX) > P(BX) .   (74)

Indeed, P(AX) = Σ_j P(A|X_j) P(X_j) and P(BX) = Σ_j P(B|X_j) P(X_j), so that (73), together with the normalization Σ_j P(X_j) = 1, immediately yields (74).
The above proposition is a theorem of classical probability theory. Savage (1954) proposed to use it as a normative statement on how human beings should make consistent decisions under uncertainty. As such, it is no longer a theorem but a testable assumption about human behavior. In other words, empirical tests showing that humans fail to obey the sure-thing principle must be interpreted as a failure of humans to abide by the rules of classical probability theory.
Examples Illustrating the Disjunction Effect
Thus, according to standard classical probability theory, which is held by most statisticians as the only rigorous mathematical description of risks, and therefore as the normative guideline describing rational human decision making, the sure-thing principle should always be verified in empirical tests involving real human beings. However, numerous violations of this principle have been documented empirically (Savage, 1954; Tversky and Shafir, 1992; Croson, 1999; Lambdin and Burdsal, 2007; Li et al., 2007). In order to be more specific, let us briefly outline some examples of the violation of the sure-thing principle, referred to as the disjunction effect.
Example 1. To gamble or not to gamble?
A typical setup for illustrating the disjunction effect is a two-step gamble (Tversky and Shafir, 1992). Suppose that a group of people accepted a gamble in which the player can either win an amount of money (action X_1) or lose an amount (action X_2). After the first gamble, the participants are invited to gamble a second time, being free either to accept the second gamble (A) or to refuse it (B). Experiments by Tversky and Shafir (1992) showed that the majority of people accept the second gamble when they know the result of the first one, whether they won or lost. In the language of conditional probability theory, this translates into the fact that people act as if P(A|X_1) is larger than P(B|X_1) and P(A|X_2) is larger than P(B|X_2), as in Eq. (73). At the same time, it turns out that the majority refuses to gamble the second time when the outcome of the first gamble is not known. This second empirical fact implies that people act as if P(BX) overweighs P(AX), in blatant contradiction with inequality (74), which should hold according to the theorem resulting from (73). Thus, the majority accepted the second gamble after having won or lost in the first gamble, but only a minority accepted the second gamble when the outcome of the first gamble was unknown to them. This provides an unambiguous violation of the Savage sure-thing principle.
Example 2. To buy or not to buy?
Another example, studied by Tversky and Shafir (1992), had to do with a group of students who reported their preferences about buying a nonrefundable vacation following a tough university exam. They could pass the exam (X_1) or fail it (X_2). The students had to decide whether they would go on vacation (A) or abstain (B). It turned out that the majority of students purchased the vacation both when they had passed the exam and when they had failed, so that condition (73) was valid. However, only a minority of participants purchased the vacation when they did not know the results of the examination. Hence, inequality (74) was violated, demonstrating again the disjunction effect.
Example 3. To sell or not to sell?
The stock market example, analyzed by Shafir and Tversky (1992), is particularly telling, since it involves a deliberation taking into account a future event, and not a past one as in the two previous cases. Consider a USA presidential election, in which either a Republican (X_1) or a Democrat (X_2) wins. On the eve of the election, market players can either sell certain stocks from their portfolio (A) or hold them (B). It is known that a majority of people would be inclined to sell their stocks if they knew the winner, regardless of whether the Republican or the Democrat wins the upcoming election, because people expect the market to fall after the election. Hence, condition (73) is again valid. At the same time, a great many people do not sell their stocks before knowing who has really won the election, thus contradicting the sure-thing principle and inequality (74). Investors could have sold their stocks before the election at a higher price but, obeying the disjunction effect, waited until after the election, thereby selling at a lower price after stocks had fallen. Many market analysts believe that this is precisely what happened after the 1988 presidential election, when George Bush defeated Michael Dukakis.
There are plenty of other, more or less complicated, examples of the disjunction effect (Savage, 1954; Tversky and Shafir, 1992; Shafir and Tversky, 1992; Shafir et al., 1993; Shafir, 1994; Croson, 1999; Lambdin and Burdsal, 2007). The common necessary conditions for the disjunction effect to arise are as follows. First, there should be several events, each characterized by several alternatives, as in the two-step gambles. Second, there should necessarily exist some uncertainty, whether with respect to the past, as in Examples 1 and 2, or with respect to the future, as in Example 3.
Several ways of interpreting the disjunction effect have been analyzed. Here, we do not discuss the interpretations based on the existence of particular biases, such as a gender bias, or those invoking the notion of decision complexity, which have already been convincingly ruled out (Croson, 1999; Kühberger et al., 2001). We describe the reason-based explanation, which appears to enjoy a widespread following, and discuss its limits before turning to the viewpoint offered by QDT.
Reason-Based Analysis
The dominant approach for explaining the disjunction effect is the reason-based analysis of decision making (Shafir and Tversky, 1992; Shafir et al., 1993; Shafir, 1994; Croson, 1999). This approach explains choice in terms of the balance between reasons for and against the various alternatives. The basic intuition is that, when outcomes are known, a decision maker can easily come up with a definitive reason for choosing an option. However, in the case of uncertainty, when the outcomes are not known, people may lack a clear reason for choosing an option, and consequently they abstain and make an irrational choice.
From our perspective, the weakness of the reason-based analysis is that the notion of "reason" is too vague and subjective. Reasons are not only impossible to quantify; it is difficult, if at all possible, to even give a qualitative definition of what they are.
Consider Example 1, "to gamble or not to gamble?" Suppose you have already won at the first step. Then you can rationalize that gambling a second time is not very risky: if you now lose, the loss will be balanced by the first win (on which you were not counting anyway, so that you may actually treat it differently from the rest of your wealth, according to the so-called "mental accounting" effect), and if you win again, your profit will be doubled. Thus, you have a "reason" to justify the attractiveness of the second gamble. But it seems equally justified to consider the alternative "reason": if you have won once, winning a second time may seem less probable (the so-called gambler's fallacy), and if you lose, you will keep nothing of your previous gain. This line of reasoning justifies keeping what you have already got and forgoing the second gamble. Suppose now that you have lost in the first gamble and know it. A first reasoning would be that the second gamble offers a possibility of recovering from the loss, which provides a reason for accepting it. However, you may also think that the win is not guaranteed, and that your situation could actually worsen if you lose again. This makes it more reasonable not to risk so much and to refrain from the new gamble.
Consider now the situation where you are kept ignorant of whether you have won or lost in the first gamble. You may then think that there is no reason, and therefore no motivation, for accepting the second gamble, which is the standard reason-based explanation. But one could argue that it would be even more logical to think as follows: I do not know what happened in the first gamble; so why should I care about it, and why not try my luck again? This would provide a clear reason for accepting the second gamble.
This discussion does not pretend to demonstrate anything other than that the reason-based explanation is purely ad hoc, with no real explanatory power; it can be considered, in a sense, as a mere reformulation of the disjunction fallacy. One can multiply the number of examples demonstrating the existence of quite "reasonable" justifications for doing something, as well as of reasons for doing just the opposite. It seems to us that the notion of "reason" is not well defined, and one can always invent in this way a justification for anything. Thus, we propose that the disjunction effect has no direct relation to reasoning. In the following section, we suggest another explanation of this effect based on QDT, specifically the negative interference between the two uncertain outcomes resulting from an aversion to uncertainty (the uncertainty-aversion principle), which provides a quantitative, testable prediction.
Quantitative Analysis within Quantum Decision Theory
The possibility of connecting the violation of the sure-thing principle with the occurrence of interference has been mentioned in several articles (see, e.g., Busemeyer et al., 2006). But these attempts were ad hoc assumptions not based on a self-consistent theory. Our explanation of the disjunction effect differs from these attempts in several respects. First, we consider the disjunction effect as just one of several possible effects within the frame of a general theory. Second, the explanation is based on the theorem of interference alternation, which had not been formulated before, and without which no explanation can be complete and self-consistent. Third, we stress the importance of the uncertainty-aversion principle. Finally, we offer a quantitative estimate of the effect, which is principally new.
Application to Examples of the Disjunction Effect
Let us discuss the first two examples illustrating the disjunction effect, in which the prospect consists of two intentions with two representations each. One intention, "to decide about an action", has the representations "to act" (A) and "not to act" (B). The second intention, "to know the results" (or "to have information"), also has two representations. One (X_1) can be termed "to learn about the win" (gamble won, exam passed), the other (X_2) "to learn about the loss" (gamble lost, exam failed). Given the numbers of these representations, M_1 = 2 and M_2 = 2, the dimension of mind is dim(M) = M_1 M_2 = 4.
For the considered cases, the general set of equations for the prospect probabilities reduces to the two equations

p(AX) = p(AX_1) + p(AX_2) + q(AX) ,  p(BX) = p(BX_1) + p(BX_2) + q(BX) ,   (77)

in which X = X_1 + X_2 and the interference terms are

q(AX) = 2 ϕ(AX) √( p(AX_1) p(AX_2) ) ,  q(BX) = 2 ϕ(BX) √( p(BX_1) p(BX_2) ) .   (78)

Of course, Eqs. (77) and (78) could be postulated, but then it would not be clear where they come from. In QDT, these equations appear naturally. Here ϕ(AX) and ϕ(BX) are the uncertainty factors defined in (47). The normalization conditions become

p(AX) + p(BX) = 1 ,  p(X_1) + p(X_2) = 1 ,   (79)

with conditions (53) being

p(A|X_1) + p(B|X_1) = 1 ,  p(A|X_2) + p(B|X_2) = 1 .   (80)
The uncertainty factors can be rewritten as

ϕ(AX) = q(AX) / [ 2 √( p(AX_1) p(AX_2) ) ] ,  ϕ(BX) = q(BX) / [ 2 √( p(BX_1) p(BX_2) ) ] ,   (81)

with the interference terms being

q(AX) = p(AX) − p(AX_1) − p(AX_2) ,  q(BX) = p(BX) − p(BX_1) − p(BX_2) .   (82)

The principal point is the condition of interference alternation (Theorem 1), which now reads

q(AX) + q(BX) = 0 .   (83)

Without condition (83), the system of equations for the probabilities would be incomplete, and the disjunction effect could not be explained in principle.
With the goal of explaining the disjunction effect, it is not sufficient to merely state that some type of interference is present. It is necessary to determine quantitatively why the probability of acting is suppressed, while that of remaining passive is enhanced. Our aim is to evaluate the expected size and sign of the interference terms q(AX) (for acting under uncertainty) and q(BX) (for remaining inactive under uncertainty). Obviously, it is an illusion to search for a universal value that everybody would strictly use. Different experiments with different people have indeed demonstrated significant heterogeneity among people; in the language of QDT, this means that the values of the interference terms can fluctuate from individual to individual. A general statement should therefore refer to the behavior of a sufficiently large ensemble of people, allowing us to map the observed frequentist distribution of decisions onto the predicted QDT probabilities.
Alternation Theorem and Interference-Quarter Law
Now we shall employ the alternation theorem and the quarter law for describing the disjunction effect. The interference terms are given in (59). The interference-alternation theorem (Theorem 1) yields Eqs. (60) and (61). Hence, in the case where p(A|X_j) > p(B|X_j), which is characteristic of the examples illustrating the disjunction effect, the uncertainty factors must exhibit the opposite property, |ϕ(AX)| < |ϕ(BX)|, so as to compensate the former inequality and ensure the validity of equality (60) for the absolute values of the interference terms. The expected values of the latter can be evaluated from the quarter law as 1/4. The next step is to determine the sign of ϕ(AX), and thus of ϕ(BX), from (61). As emphasized above, the aversion to uncertainty (Epstein, 1999) makes the propensity, and therefore the probability, to act under larger uncertainty smaller than under smaller uncertainty. This specifies the signs of the uncertainty factors,

ϕ(AX) < 0 ,  ϕ(BX) > 0 ,   (84)

since A (respectively B) refers to acting (respectively to remaining inactive). As a consequence of (84) and of the mathematical definition (47), the uncertainty factors vary in the intervals

−1 ≤ ϕ(AX) ≤ 0 ,  0 ≤ ϕ(BX) ≤ 1 .   (85)

Invoking the interference-quarter law, we find the expected values of the interference terms

q(AX) = −0.25 ,  q(BX) = 0.25 .   (86)
As a consequence, the probabilities for acting or for remaining inactive under uncertainty, given by (77), can be evaluated as

p(AX) = p(AX_1) + p(AX_2) − 0.25 ,  p(BX) = p(BX_1) + p(BX_2) + 0.25 .   (87)

The influence of intention interference, in the presence of uncertainty, on the decision-making process at the basis of the disjunction effect can thus be estimated a priori. The sign of the effect is controlled by the aversion to uncertainty exhibited by people (uncertainty-aversion principle). The amplitude of the effect can be estimated, as shown above, from simple priors applied to the mathematical structure of the QDT formulation.
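The estimate (87) translates into a simple aggregate prediction rule. The following sketch is ours: the function name is invented, and equiprobable outcomes p(X_1) = p(X_2) = 1/2 are assumed for illustration.

    def predict_disjunction(p_a_x1, p_a_x2, p_x1=0.5):
        # QDT aggregate prediction under the uncertainty-aversion principle:
        # the utility factor of acting, shifted by the quarter-law attraction factor.
        p_x2 = 1.0 - p_x1
        f_act = p_a_x1 * p_x1 + p_a_x2 * p_x2      # f(AX) = p(AX_1) + p(AX_2)
        return f_act - 0.25, (1.0 - f_act) + 0.25  # p(AX), p(BX) as in (87)

    # Data of Example 2 below (Tversky and Shafir, 1992): predicts p(AX) = 0.305,
    # to be compared with the observed value 0.32.
    print(predict_disjunction(0.54, 0.57))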
Principle of Uncertainty Aversion
The above calculation implies that the disjunction effect can be interpreted as essentially an emotional reaction associated with the aversion to uncertainty. An analogy can make the point: it is widely recognized that uncertainty frightens living beings, whether humans or animals. It is also well documented that fear paralyzes, as in the caricature of the "rabbit syndrome", when a rabbit stays immobile in front of an approaching boa instead of running away. There is much circumstantial evidence that uncertainty may frighten people as a boa frightens rabbits. Being afraid of uncertainty, a majority of human beings may be hindered from acting. In the presence of uncertainty, they do not want to act, so that they refuse the second gamble, as in Example 1, forgo the purchase of a vacation, as in Example 2, or refrain from selling stocks, as in Example 3. Our analysis suggests that it is the aversion to uncertainty that paralyzes people and causes the disjunction effect.
It has been reported that if people who confront uncertainty paralyzing them against acting are presented with a detailed explanation of the possible outcomes, they may change their mind and decide to act, thus reducing the disjunction effect (Croson, 1999). Thus, by encouraging people to think through additional explanations, it is possible to influence their decisions; in such a case, reasoning plays the role of a kind of therapeutic treatment decreasing the aversion to uncertainty. This suggests that it should be possible to decrease the aversion to uncertainty by other means as well, perhaps by distracting people, or through food, drink, or drugs. This provides the possibility of testing the dependence of the strength of the disjunction effect on various parameters that may modulate the aversion response of individuals to uncertainty.
We should stress that our explanation departs fundamentally from the standard reason-based rationalization of the disjunction effect summarized above. Rather than using what we perceive as an ad hoc explanation, we anchor the disjunction effect in a fundamental characteristic of living beings: the aversion to uncertainty. This allows us to construct a robust and parsimonious explanation. But this explanation arises only within QDT, because QDT allows us to account for the complex emotional, often subconscious, feelings as well as the many unknown states of nature that underlie decision making. Such unknown states, analogous to hidden variables in quantum mechanics, are taken into account by the formalism of QDT through the interference-alternation effect, which captures mental processes by means of quantum-theory techniques.
Numerical Analysis of Disjunction-Effect Examples
Let us now turn to the examples described above and suggest their quantitative explanations.
Example 1. To gamble or not to gamble?

For the first example, the data, taken from Tversky and Shafir (1992), read

p(A|X_1) = 0.69 ,  p(A|X_2) = 0.59 ,  p(AX) = 0.36 .

Assuming equiprobable outcomes of the first gamble, p(X_1) = p(X_2) = 1/2, for the interference terms we find

q(AX) = 0.36 − (0.69 + 0.59)/2 = −0.28 ,  q(BX) = +0.28 .   (88)

The uncertainty factors (81) are then ϕ(AX) ≈ −0.44 and ϕ(BX) ≈ 0.79. They are of opposite sign, in agreement with condition (83). The probability p(AX) of gambling under uncertainty is suppressed by the negative interference term q(AX) < 0. Reciprocally, the probability p(BX) of not gambling under uncertainty is enhanced by the positive interference term q(BX) > 0. This results in the disjunction effect, with p(AX) < p(BX).
It is important to stress that the observed amplitudes in (88) are close to the value 0.25 predicted by the interference-quarter law; they are actually indistinguishable from 0.25 within the typical statistical error of 20% characterizing these experiments. That is, even without knowing the results of the considered experiment, we are able to quantitatively predict the strength of the disjunction effect.
Example 2. To buy or not to buy?
For the second example of the disjunction effect, the data, taken from Tversky and Shafir (1992), read

p(A|X_1) = 0.54 ,  p(A|X_2) = 0.57 ,  p(AX) = 0.32 .

Proceeding as in Example 1, with p(X_1) = p(X_2) = 1/2, the interference terms are

q(AX) = 0.32 − (0.54 + 0.57)/2 ≈ −0.24 ,  q(BX) ≈ +0.24 .   (89)
Again, the values obtained in (89) are close to those predicted by the interference-quarter law, being indistinguishable from 0.25 within the experimental accuracy.
Because of the uncertainty aversion, the probability p(AX) of purchasing a vacation is suppressed by the negative interference term q(AX) < 0. At the same time, the probability p(BX) of not buying a vacation under uncertainty is enhanced by the positive interference term q(BX) > 0. This alternation of interferences causes the disjunction effect, with p(AX) < p(BX). It is necessary to stress again that, without this interference alternation, no explanation of the disjunction effect is possible in principle.
In the same way, our approach can be applied to any other situation related to the disjunction effect associated with the violation of the sure-thing principle.
Conjunction Fallacy
The conjunction fallacy constitutes another example revealing that intuitive estimates of probability by human beings do not conform to the standard probability calculus. This effect was first studied by Tversky and Kahneman (1980, 1983) and then discussed in many other works (see, e.g., Morier and Borgida, 1984; Wells, 1985; Yates and Carlson, 1986; Shafir et al., 1990; Tentori et al., 2004). Despite an extensive debate and numerous attempts at interpretation, there seems to be no consensus on the origin of the conjunction fallacy (Tentori et al., 2004).
Here, we show that this effect finds a natural explanation in QDT. It is worth emphasizing that we do not invent a special scheme for this particular effect; rather, we show that it is a natural consequence of the general theory developed above. In order to explain the conjunction fallacy in terms of an interference effect in a quantum description of probabilities, it is necessary to derive the quantitative values of the interference terms, their amplitudes and signs, as we have done above for the examples illustrating the disjunction effect. This had never been done before. Our QDT provides the necessary ingredients, in terms of the uncertainty-aversion principle, the theorem on interference alternation, and the interference-quarter law. Only the establishment of these general laws can provide an explanation of the conjunction fallacy that can be taken as a positive step towards validating QDT, according to the general methodology of validating theories (Sornette et al., 2007). Finally, in our comparison with available experimental data, we analyze a series of experiments and demonstrate that all their data substantiate the validity of the general laws of the theory.
Individual versus Group Decisions
In order to be precise, it is necessary to distinguish the conjunction fallacy observed in decisions made by separate individuals from that in decisions made by groups. Group decisions can differ from those of noninteracting individuals (Baron, 1998; Sheremeta and Zhang, 2009). In particular, the conjunction fallacy, which has been documented for isolated decision makers, practically disappears for decisions taken by groups of interacting individuals: the violation rate characterizing the conjunction fallacy falls significantly when communication between participants is allowed (Charness et al., 2008). The reduction of the strength of the conjunction effect is due to the existence of social interactions. These social interactions play a role analogous to the interaction between particles, which is known to lead to "decoherence" in quantum systems. A study of the decoherence phenomenon in the present context is beyond the scope of this paper, which focuses on the conjunction fallacy for separate individuals, in the absence of social interactions. This corresponds to the setup studied by Tversky and Kahneman (1980, 1983).
Conjunction Rule
Let us first briefly recall the conjunction rule of standard probability theory. Consider an event A that can occur together with one among several other events X_j, where j = 1, 2, . . .. The probability of an event estimated within classical probability theory is again denoted with the capital letter P(A), to distinguish it from the probability p(A) in our quantum approach. According to standard probability theory (Feller, 1970), one has

P(AX) = Σ_j P(AX_j) ,   (90)

where X = ∪_j X_j. Since all terms in the sum (90) are non-negative, the conjunction rule tells us that

P(AX) ≥ P(AX_j)  for all j .   (91)

That is, the probability for the occurrence of the conjunction of two events is never larger than the probability for the occurrence of a separate event.
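As a simple illustration with made-up numbers: for two mutually exclusive events with P(AX_1) = 0.3 and P(AX_2) = 0.2, Eq. (90) gives P(AX) = 0.3 + 0.2 = 0.5, which is indeed not smaller than either joint probability, in accordance with the conjunction rule (91).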
Conjunction Error
Counterintuitively, humans rather systematically violate the conjunction rule (91), commonly making statements such that

p(AX) < p(AX_j)   (92)

for some j, which is termed the conjunction fallacy (Tversky and Kahneman, 1980, 1983). The difference

ε(AX_j) ≡ p(AX_j) − p(AX)   (93)

is called the conjunction error; it is positive under conditions in which the conjunction fallacy is observed. A typical situation is when people judge a person who can possess a characteristic A and also some other characteristic X_j (which can be "possessing a trait" or "not having the trait", since not having a trait is also a characteristic), as in the oft-cited example of Tversky and Kahneman (1980): "Linda is 31 years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in anti-nuclear demonstrations. Which is more likely? (i) Linda is a bank teller; (ii) Linda is a bank teller and is active in the feminist movement." Most people answer (ii), which is an example of the conjunction fallacy (92). Numerous other examples of the fallacy are described in the literature (Tversky and Kahneman, 1980, 1983; Morier and Borgida, 1984; Wells, 1985; Yates and Carlson, 1986; Shafir et al., 1990; Tentori et al., 2004). It is important to stress that this fallacy has been reliably and repeatedly documented, that it cannot be explained by the ambiguity of the word "likely" used in the formulation of the question, and that it appears to involve a failure to coordinate the logical structure of events in the presence of chance (Tentori et al., 2004).
Conjunction Interference
Within QDT, the conjunction fallacy finds a simple and natural explanation. Let us consider a typical situation of the fallacy, when one judges a person who may have a characteristic A, treated as primary, and who may also possess, or not possess, another characteristic, labelled as secondary. Generally, the person could also be an object, a fact, or anything else, which could combine several features. Translating this situation to the mathematical language of QDT, we see that it involves two intentions. One intention, with just one representation, is "to decide whether the object has the feature A." The second intention "to decide about the secondary feature" has two representations, when one decides whether "the object has the special characteristic" (X 1 ) or "the object does not have this characteristic" (X 2 ).
For these definitions, and following the general scheme, we have

p(AX) = p(AX_1) + p(AX_2) + q(AX) ,   (94)

where X = X_1 + X_2. This is a typical situation where a decision is taken under uncertainty. The uncertainty-aversion principle imposes that the interference term q(AX) should be negative.
Taking the perspective of the representation X_1, definition (93) together with Eq. (94) implies that the conjunction error reads

ε(AX_1) = p(AX_1) − p(AX) = − p(AX_2) − q(AX) .   (95)

The condition for the conjunction fallacy to occur is that the error (95) be positive, which requires that the interference term be negative and of sufficiently large amplitude, such that the conjunction-fallacy condition

− q(AX) > p(AX_2)   (96)

be satisfied. QDT thus predicts that a person will make a decision exhibiting the conjunction fallacy when (i) uncertainty is present and (ii) the interference term, which is negative by the uncertainty-aversion principle, has a sufficiently large amplitude, according to condition (96).
Comparison with Experiments
For a quantitative analysis, we take the data from Shafir et al. (1990), who present one of the most carefully executed and thoroughly discussed sets of experiments. Shafir et al. questioned large groups of students in the following way. The students were provided with booklets, each containing a brief description of a person. It was stated that the described person could have a primary characteristic (A), together with another characteristic (X_1) or its absence (X_2).
In total, there were 28 experiments separated into two groups according to the conjunctive category of the studied characteristics. In 14 cases, the features A and X 1 were compatible with each other, and in the other 14 cases, they were incompatible. The characteristics were treated as compatible, when they were felt as closely related according to some traditional wisdom, for instance, "woman teacher" (A) and "feminist" (X 1 ). Another example of compatible features is "chess player" (A) and "professor" (X 1 ). Those characteristics that were not related by direct logical connections were considered as incompatible, such as "bird watcher" (A) and "truck driver" (X 1 ) or "bicycle racer" (A) and "nurse" (X 1 ).
In each of the 28 experiments, the students were asked to evaluate both the typicality and the probability of A and AX 1 . Since normal people usually understand "typicality" just as a synonym of probability, and vice versa, the predictions on typicality were equivalent to estimates of probabilities. This amounts to considering only how the students estimated the probability p(AX) that the considered person possesses the stated primary feature and the probability p(AX 1 ) that the person has both characteristics A and X 1 .
An important quality of the experiments by Shafir et al. (1990) lies in the large number of tests performed. Indeed, any given particular experiment is prone to exhibit a significant amount of variability, randomness, or "noise". Not only did the interrogated subjects exhibit significant idiosyncratic differences, with diverse abilities, logic, and experience, but in addition the questions were quite heterogeneous. Even the separation of characteristics into the two categories of compatible and incompatible pairs is, to some extent, arbitrary. Consequently, no single case provides a sufficiently clear-cut conclusion on the existence or absence of the conjunction fallacy. It is only by carrying out a large number of interrogations, with a variety of different questions, and by then averaging the results, that it is possible to draw justified conclusions on whether or not the conjunction fallacy exists. The set of experiments performed by Shafir et al. (1990) satisfies these requirements well.
For the set of compatible pairs of characteristics, it turned out that the average probabilities were p(AX) = 0.537 and p(AX_1) = 0.567, with statistical errors of 20%. Hence, within this accuracy, p(AX) and p(AX_1) coincide and no conjunction fallacy arises for compatible characteristics. From the viewpoint of QDT, this is easily interpreted as due to the lack of uncertainty: since the features A and X_1 are similar to each other, one almost certainly yielding the other, there is no uncertainty in deciding, hence, no interference, and, consequently, no conjunction fallacy.
However, for the case of incompatible pairs of characteristics, the situation was found to be drastically different. To analyse the related set of experiments, we follow the general scheme of QDT, using the same notations as above. We have a prospect with two intentions: one is to evaluate a primary feature (A) of the object, and another is to decide whether, at the same time, the object possesses a secondary feature (X_1) or does not possess it (X_2). Taking the data for p(X_j) and p(AX_1) from Shafir et al. (1990), we calculate q(AX) for each case separately and then average the results. In the calculations, we take into account that the considered pairs of characteristics are incompatible with each other. The simplest and most natural mathematical embodiment of the property of "incompatibility" is to take the probabilities of possessing A, under the condition of either having or not having X_1, as equal, that is, p(A|X_j) = 0.5. For such a case of incompatible pairs of characteristics, Eq. (94) reduces to p(AX) = 1/2 + q(AX).
The results, documenting the existence of the interference terms underlying the conjunction fallacy, are presented in Table 1, which gives the abbreviated names for the object characteristics, whose detailed description can be found in Shafir et al. (1990). The average values of the different reported probabilities are p(AX) = 0.22, p(X_1) = 0.692, p(X_2) = 0.308, p(AX_1) = 0.346, and p(AX_2) = 0.154.
One can observe that the interference terms fluctuate around a mean of −0.28, with a standard deviation of ±0.06: q(AX) = −0.28 ± 0.06.
There is clear evidence of the conjunction fallacy, with the conjunction error (93) being ε(AX_1) = 0.126. QDT interprets the conjunction effect as due to the uncertainty underlying the decision, which leads to the appearance of the intention interferences. The interference of intentions is caused by the hesitation whether, under the given primary feature (A), the object possesses the secondary feature (X_1) or does not have it (X_2). The term q(AX) is negative, reflecting the effect of deciding under uncertainty, according to the uncertainty-aversion principle. Quantitatively, we observe that the amplitude |q(AX)| is in agreement with the QDT interference-quarter law, actually coinciding with 0.25 within the experimental accuracy.
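As a quick arithmetic check, the short script below (a sketch added here for illustration, not part of the original analysis) recomputes the interference term and the conjunction error from the averaged Shafir et al. (1990) probabilities quoted above.

```python
# Averaged probabilities for incompatible pairs, as quoted in the text
# above from Shafir et al. (1990).
p_AX  = 0.22    # p(AX): estimate of the primary feature alone
p_X1  = 0.692   # p(X1)
p_X2  = 0.308   # p(X2)
p_AX1 = 0.346   # p(A and X1)
p_AX2 = 0.154   # p(A and X2)

assert abs(p_X1 + p_X2 - 1.0) < 1e-9  # the two representations exhaust X

# Incompatibility assumption p(A|Xj) = 0.5, so Eq. (94) reduces to
# p(AX) = 1/2 + q(AX), giving the interference term directly:
q_AX = p_AX - 0.5
print(f"q(AX)    = {q_AX:+.2f}")   # -> -0.28, matching the quoted mean

# Conjunction error, Eq. (95): eps = p(AX1) - p(AX) = -p(AX2) - q(AX).
eps = p_AX1 - p_AX
print(f"eps(AX1) = {eps:.3f}")     # -> 0.126
print(f"check    = {-p_AX2 - q_AX:.3f}")  # same value via Eq. (95)

# Conjunction-fallacy condition, Eq. (96): |q(AX)| > p(A|X2) p(X2).
print("fallacy predicted:", abs(q_AX) > p_AX2)  # -> True
```

Both routes to the error agree (0.126), and |q(AX)| = 0.28 indeed exceeds p(AX_2) = 0.154, so condition (96) is satisfied for the averaged data.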
Conjunction and Disjunction Effects
The QDT predicts that setups in which the conjunction fallacy occurs should also be accompanied by the disjunction effect. To see this, let us extend slightly the previous decision problem by allowing for two representations of the first intention. Concretely, this means that the intention, related to the decision about the primary characteristic, has two representations: (i) "decide about the object or person having or not the primary considered feature" (A), and (ii) "decide to abstain from deciding about this feature" (B). This frames the problem in the context previously analysed for the disjunction effect. The conjunction fallacy occurs when one considers incompatible characteristics (Tversky and Kahneman, 1983; Shafir et al., 1990), such that the probabilities of deciding of having a conjunction (AX_j) or of not guessing about it (BX_j) are close to each other, so that one can set

p(A|X_j) = p(B|X_j) for all j . (100)
The theorem on interference alternation (Theorem 1) implies that the interference term for being passive under uncertainty is positive, and we have

q(BX) = −q(AX) > 0 . (101)
Now, the probability p(BX) of deciding not to guess under uncertainty is governed by an equation similar to Eq. (94). Combining this equation with (101), we obtain

p(BX) = p(AX) + 2|q(AX)| ,

which shows that, despite equality (100), the probability of being passive is larger than the probability of acting under uncertainty. This is nothing but a particular case of the disjunction effect. This example shows that the conjunction fallacy is actually a sufficient condition for the occurrence of the disjunction effect, both resulting from the existence of interferences between probabilities under uncertainty. The reverse does not hold: the disjunction effect does not necessarily yield the conjunction fallacy, because the latter requires not only the existence of interferences, but also that their amplitudes be sufficiently large, according to the conjunction-fallacy condition (96).
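Continuing the same numerical illustration (again a sketch added here, not part of the original analysis), the interference-alternation relation gives a concrete magnitude for the predicted disjunction effect from the averaged data above.

```python
# Same averaged values as before (Shafir et al. 1990, incompatible pairs).
p_AX = 0.22
q_AX = p_AX - 0.5  # -0.28, from p(AX) = 1/2 + q(AX)

# Interference alternation, Eq. (101): q(BX) = -q(AX) > 0, so the
# probability of abstaining is p(BX) = p(AX) + 2|q(AX)|.
p_BX = p_AX + 2 * abs(q_AX)
print(f"p(BX) = {p_BX:.2f}")  # -> 0.78: passivity strongly favoured
```

With these numbers, p(BX) = 0.78 against p(AX) = 0.22, a sizable predicted preference for remaining passive under uncertainty.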
To our knowledge, experiments or situations when the disjunction and conjunction effects are observed simultaneously have not been investigated. The specific prediction coming from the QDT, that the disjunction effect should be observable as soon as the conjunction effect is present, provides a good test of QDT.
We have considered here the case when participants take decisions independently, without consulting with each other. When decisions are taken in groups, the conjunction fallacy becomes much weaker (Charness et al., 2008). In the language of QDT, the social interactions cause the phenomenon of decoherence, which influences the strategic state and destroys the interferences.
Conclusion
In the present paper, we have suggested a quantum theory of decision making. By its nature, it can, of course, be realized by a quantum object, say, by a quantum computer. Or it can be used as a scheme for quantum information processing and for creating artificial intelligence based on quantum laws. This, however, is not compulsory. And the developed theory can also be applied to non-quantum objects with equal success. It just turns out that the language of quantum theory is a very convenient tool for describing the process of decision making performed by any decision maker, whether quantum or not. In this language, it is straightforward to characterize such features of decision making as entangled decision making, non-commutativity of subsequent decisions, and intention interference. These features, although being quantum in their description, at the same time have natural and transparent interpretations in simple everyday language and are applicable to events of real life. To stress the applicability of the approach to the decision making of human beings, we have provided a number of simple illustrative examples.
We have demonstrated the applicability of our approach to the cases when the Savage sure-thing principle is violated, resulting in the disjunction effect. Interference of intentions, arising in decision making under uncertainty, possesses specific features caused by aversion to uncertainty. The theorem of interference alternation that we have derived connects the aversion to uncertainty to the appearance of negative interference terms suppressing the probability of actions. At the same time, the probability of the decision maker not to act is enhanced by positive interference terms. This alternating nature of the intention interference under uncertainty explains the occurrence of the disjunction effect.
We have proposed a calculation of the interference terms, based on considerations using robust assessment of probabilities, which makes it possible to predict their influence in a quantitative way. The estimates are in good agreement with experimental data for the disjunction effect.
The conjunction fallacy, demonstrated by individual decision makers, is also explained by the presence of the interference terms. A series of experiments are analysed and shown to be in excellent agreement with the a priori evaluation of interference effects. The conjunction fallacy is also shown to be a sufficient condition for the disjunction effect and novel experiments testing the combined interplay between the two effects are suggested.
The main features of the Quantum Decision Theory can be summarized as follows.
(1) Quantum Decision Theory is a general mathematical approach that is applicable to arbitrary situations. We do not try to adjust QDT to fit particular cases, but the same theory is used throughout the paper to treat quite different effects.
(2) Mathematically, QDT is based on the theory of Hilbert spaces and techniques that have been developed in quantum theory. However, the use of these techniques serves only as a convenient formal tool, implying no quantum nature of decision makers.
(3) Each decision maker possesses his/her own strategic state of mind, characterizing this decision maker as a separate individual.
(4) The QDT developed here allows us to characterize not a single unusual, quantum-like, property of the decision making process, but several of these characteristics, including entangled decisions and the interference between intentions.
(5) Aversion with respect to uncertainty is an important feeling regulating decision making. We formulate this general and ubiquitous feeling under the uncertainty-aversion principle, connecting it to the signs of the alternating interference terms.
(6) We prove the theorem on interference alternation, which shows that the interference between several intentions, arising under uncertainty, consists of several terms alternating in sign, some being positive and some being negative. These terms are the source of the different paradoxes and logical fallacies presented by humans making decisions in uncertain contexts.
(7) Uncertainty aversion and interference alternation, combined together, are the key factors that suppress the probability of acting and, at the same time, enhance the probability of remaining passive, in the case of uncertainty.
(8) We demonstrate that it is not simply the interference between intentions as such, but specifically the interference alternation, together with the uncertainty aversion, which is responsible for the violation of Savage's sure-thing principle at the origin of the disjunction effect.
(9) The conjunction fallacy is another effect that is caused by the interference of intentions, together with the uncertainty-aversion principle. Without the latter, the conjunction fallacy cannot be explained.
(10) The conjunction fallacy is shown to be a sufficient condition for the disjunction effect to occur, exhibiting a deep link between the two effects.
(11) The general "interference-quarter law" is formulated, which provides a quantitative prediction for the amplitude of the interference terms, and thus of the quantitative level by which the sure-thing principle is violated.
(12) Detailed quantitative comparisons with experiments, documenting the disjunction effect and the conjunction fallacy, confirm the validity of the derived laws.

Table 1. Conjunction fallacy and related interference terms caused by the decision under uncertainty. The average interference term is in good agreement with the interference-quarter law. The empirical data are taken from Shafir et al. (1990). | 2011-02-14T03:39:19.000Z | 2011-02-14T00:00:00.000 | {
"year": 2011,
"sha1": "8938a471b27d2720c77dfc0c18d9f08f64f1f345",
"oa_license": null,
"oa_url": "https://www.research-collection.ethz.ch/bitstream/20.500.11850/29070/2/11238_2010_Article_9202.pdf",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "8938a471b27d2720c77dfc0c18d9f08f64f1f345",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Mathematics",
"Computer Science",
"Physics"
]
} |
15248192 | pes2o/s2orc | v3-fos-license | 4D Modeling of CME expansion and EUV dimming observed with STEREO/EUVI
This is the first attempt to model the kinematics of a CME launch and the resulting EUV dimming quantitatively with a self-consistent model. Our 4D-model assumes self-similar expansion of a spherical CME geometry that consists of a CME front with density compression and a cavity with density rarefaction, satisfying mass conservation of the total CME and swept-up corona. The model contains 14 free parameters and is fitted to the 2008 March 25 CME event observed with STEREO/A and B. Our model is able to reproduce the observed CME expansion and related EUV dimming during the initial phase from 18:30 UT to 19:00 UT. The CME kinematics can be characterized by a constant acceleration (i.e., a constant magnetic driving force). While the observations of EUVI/A are consistent with a spherical bubble geometry, we detect significant asymmetries and density inhomogeneities with EUVI/B. This new forward-modeling method demonstrates how the observed EUV dimming can be used to model physical parameters of the CME source region, the CME geometry, and CME kinematics.
Introduction
It has become a generally agreed concept that the EUV dimming observed during the onset of a coronal mass ejection (CME) manifests the coronal mass loss of the CME, and thus we basically expect a one-to-one correlation between the detections of CMEs and EUV dimmings, unless there exist special circumstances. For instance, the CME could originate behind the limb, in which case the EUV dimming is obscured, or the CME could start in the upper corona, where there is little EUV emission because of the gravitational stratification. The latter case would imply very low masses compared with a CME that originates at the base of the corona, i.e., ≈ 10% at two thermal scale heights. However, there exists a case with an average CME mass that did not leave any footprints behind in EUV (Robbrecht et al. 2009). A statistical study on the simultaneous detection of EUV dimmings and CMEs has recently been performed by Bewsher et al. (2008). This study, based on SOHO/CDS and LASCO data, confirms a 55% association rate of dimming events with CMEs, and vice versa an 84% association rate of CMEs with dimming events. Some of the non-associated events may be subject to occultation, CME detection sensitivity, or incomplete temperature coverage in EUV and soft X-rays. Perhaps the CME-dimming association rate will reach 100% once the STEREO spacecraft arrive at a separation of 180° and cover all equatorial latitudes of the Sun.
A number of studies have been carried out by using the detection of coronal dimming to identify CME source regions, focusing on transient coronal holes caused by filament eruptions (Rust 1983;Watanabe et al. 1992), EUV dimming at CME onsets (Harrison 1997;Aschwanden et al. 1999), soft X-ray dimming after CMEs (Sterling & Hudson 1997), soft X-ray dimming after a prominence eruption (Gopalswamy & Hanaoka 1998), simultaneous dimming in soft X-rays and EUV during CME launches (Zarro et al. 1999;Harrison & Lyons 2000;Harrison et al. 2003), determinations of CME masses from EUV dimming from spectroscopic data (Harrison & Lyons 2000;Harrison et al. 2003) or from EUV imaging data (Zhukov and Auchere 2004;Aschwanden et al. 2009b). All these studies support the conclusion that dimmings in the corona (either detected in EUV, soft X-rays, or both) are unmistakable signatures of CME launches, and thus can be used vice versa to identify the mutual phenomena.
In this study we attempt for the first time to model the kinematics of a CME and the resulting EUV dimming quantitatively, which provides us with unique physical parameters of the CME source region and of the CME kinematics in the initial acceleration phase.
Model Assumptions
The dynamics of a CME can often be characterized by a rapid expansion of a magnetically unstable coronal volume that expands from the lower corona upward into the heliosphere. Different shapes have been used to approximately describe the 3D geometry of a CME, such as a spherical bubble, an ice-cone, a crescent, or a helical flux rope, which expand in a self-similar fashion and approximately maintain the aspect ratio in vertical and horizontal directions during the initial phase of the expansion. Here we develop a four-dimensional (4D=3D+T) model that describes the 3D evolution of the CME geometry in time (T) in terms of 4D electron density distributions n e (x, y, z, t) that allow us also to predict and forward-fit a corresponding EUV intensity image data cube I EUV (x, y, t) in an observed wavelength.
For the sake of simplicity we start in our model here with the simplest case, assuming: (1) spherical 3D geometry for the CME front and cavity; (2) self-similar expansion in time; (3) density compression in the CME front and adiabatic volume expansion in the CME cavity; (4) mass conservation for the sum of the CME front, cavity, and external coronal volume; (5) hydrostatic (gravitational stratification) or superhydrostatic density scale heights; (6) line-tying condition for the magnetic field at the CME base; and (7) a magnetic driving force that is constant during the time interval of the initial expansion phase. This scenario is consistent with the traditional characterization of a typical CME morphology in three parts, including a CME front (leading edge), a cavity, and a filament (although we do not model the filament part). The expanding CME bubble sweeps up the coronal plasma that appears as a bright rim at the observed "CME front" or leading edge. The interior of the CME bubble exhibits a rapid decrease in electron density due to the adiabatic expansion, which renders the inside of the CME bubble darker in EUV and appears as the observed "CME cavity". The assumption of adiabatic expansion implies no mass and energy exchange across the outer CME boundary, and thus is consistent with the assumption of a low plasma β-parameter in the corona with perfect magnetic confinement, while the CME bubble will become leaky in the outer corona and heliosphere, where the plasma β-parameter exceeds unity (not included in our model here).
Analytical Model
A spherical 3D geometry can be characterized by one single free parameter, the radius R of the sphere. The self-similar expansion maintains the spherical shape, so the boundary of the CME bubble can still be parameterized by a single time-dependent radius R(t). The time dependence of the CME expansion is controlled by magnetic forces, e.g., by a Lorentz force or hoop force. For the sake of simplicity we assume a constant force during the initial phase of the CME expansion,

R(t) = R_0 + v_R (t − t_0) + (1/2) a_R (t − t_0)^2 , (1)

which corresponds to a constant acceleration a_R and requires three free parameters (R_0, v_R, a_R) to characterize the radial CME expansion, where R_0 = R(t = t_0) is the initial radius at starting time t_0, v_R is the initial velocity, and a_R is the acceleration of the radial expansion. For the motion of the CME centroid at height h(t) we assume a similar quadratic parameterization,

h(t) = h_0 + v_h (t − t_0) + (1/2) a_h (t − t_0)^2 , (2)

where h_0 = h(t = t_0) is the initial height at starting time t_0, v_h is the initial velocity, and a_h is the acceleration of the vertical motion. This parameterization is consistent with a fit to a theoretical MHD simulation of a breakout CME (Lynch et al. 2004) as well as with kinematic fits to observed CMEs (Byrne et al. 2009). Further, we constrain the CME geometry with a cylindrical footpoint area of radius r_0, which connects from the solar surface to the lowest part of the spherical CME bubble. In order to ensure magnetic line-tying at the footpoints, the CME bubble should always be located above the cylindrical footpoint base, which requires that the initial height satisfies h_0 > r_0 and that the acceleration constants satisfy a_h > a_R. We visualize the model geometry in Fig. 1.

Fig. 1. The self-similar geometry of the CME model, depicted for four different times, consisting of a cylindrical base and a spherical shell. The height of the centroid of the spherical CME volume is h(t), the outer radius of the CME sphere is R(t), and the inner radius of the CME front is r(t); these parameters increase quadratically with time. The circular footpoint area of the CME with radius r_0 stays invariant during the self-similar expansion, in order to satisfy the line-tying condition of the coronal magnetic field at the footpoints.
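To make the kinematic parameterization concrete, the following sketch (written for this discussion; it is not the paper's code) evaluates Eqs. (1)-(2) with the best-fit values quoted in the "Fitting of Kinematic Parameters" section below.

```python
# Minimal sketch: evaluate the quadratic CME kinematics of Eqs. (1)-(2)
# with the best-fit values reported later in this paper.
R0, h0 = 45e3, 45e3   # initial radius and height [km] (45 Mm)
vR, vh = 0.0, 0.0     # initial velocities [km/s], found to be negligible
aR, ah = 0.54, 0.52   # constant accelerations [km/s^2]

def R(dt):            # dt = t - t0 in seconds
    return R0 + vR * dt + 0.5 * aR * dt**2

def h(dt):
    return h0 + vh * dt + 0.5 * ah * dt**2

# Example: 18 minutes after onset (18:38 -> 18:56 UT):
dt = 18 * 60.0
print(f"R = {R(dt)/1e3:.0f} Mm, h = {h(dt)/1e3:.0f} Mm")
# -> R ~ 360 Mm, h ~ 348 Mm, i.e., roughly half a solar radius,
#    consistent with the low coronal heights (< ~0.6 R_sun) noted below.
```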
The time-invariant CME footprint area allows us a simple mass estimate of the CME from the cylindrical volume integrated over a vertical scale height, since the spherical CME bubble will eventually move to large heights with no additional mass gain (at times t_n ≫ t_0; see the right-hand panel in Fig. 1).
Assuming adiabatic expansion inside the CME cavity, the electron density in the confined plasma decreases reciprocally to the expanding volume, i.e.,

n_C(t) = n_0 [R_0 / R(t)]^3 , (3)

so it drops with the third power of the expanding radius as a function of time from the initial value n_0 (the average density inside the CME).
For the mass distribution inside the CME we distinguish between a compression region at the outer envelope, containing the CME front, and a rarefaction region in the inside, which is also called the CME cavity. We define an average width w_0 of the CME front that is assumed to be approximately constant during the self-similar expansion of the CME. While the radius R(t) demarcates the outer radius of the CME front, we denote the inner radius of the CME front, or the radius of the cavity, with r(t), which has an initial value of

r(t_0) = R_0 − w_0 .

The total volume V(t) of the CME is composed of a spherical volume with radius R(t) and the cylindrical volume beneath the CME with a vertical height of (h_0 − r_0), and has an initial value V_0. The volume of the CME front, V_F(t), is essentially the difference between the outer and inner sphere (neglecting the cylindrical base at the footpoint),

V_F(t) = (4π/3) [R(t)^3 − r(t)^3] ,

while the volume V_C of the cavity is

V_C(t) = (4π/3) r(t)^3 .

We now have to define the time-dependent densities in the CME, both for the CME front, which sweeps up plasma during its expansion, and for the CME cavity, which rarefies due to the adiabatic expansion. The total mass m_E(t) of the plasma that is swept up from the external corona corresponds to the total CME volume V(t) minus the initial volume of the CME cavity,

m_E(t) = m_p <n_E> [V(t) − V_C(t_0)] ,

where m_p is the mass of the hydrogen atom and <n_E> is the electron density in the external corona averaged over the CME volume. The same mass has to be contained inside the volume V_F of the CME front,

m_E(t) = m_p <n_F> V_F(t) .

Thus, mass conservation yields a ratio of the average electron density <n_F> in the CME front to the average external density <n_E> of

q_n(t) = <n_F> / <n_E> = [V(t) − V_C(t_0)] / V_F(t) . (11)

This density ratio amounts to unity at the initial time, i.e., q_n(t = t_0) = 1, and monotonically increases with time. The maximum value of the density jump in MHD shocks derived from the Rankine-Hugoniot relations (e.g., Priest 1982) is theoretically q_n,max = 4. Fast CMEs are expected to be supersonic and will have a higher compression factor at the CME front than slower CMEs. Thus we keep the maximum compression factor q_n,max as a free parameter, keeping in mind that physically meaningful solutions should be in the range of 1 ≲ q_n,max ≲ 4. Since we prescribe both the width w_0 of the CME front and a maximum density compression ratio q_n,max, we obtain (using Eq. 11) a definition of the critical value ρ(t) of the cavity radius r(t) at which the prescribed maximum density compression q_n,max is reached. Only plasma outside this critical radius ρ(t) can be compressed in the CME front, while the plasma inside this critical radius dilutes by adiabatic expansion and forms the cavity, yielding an average density ratio q_n,cav inside the cavity according to the adiabatic dilution of Eq. (3).

Our numerical model of a spherical CME expansion has a total of 14 free parameters: 3 positional parameters (the heliographic coordinates (l, b) and height h_0 of the initial CME centroid position), 5 kinematic parameters (starting time t_0, velocities v_h, v_R, accelerations a_h, a_R), 2 geometric parameters (initial radius r_0 and width w_0 of the CME front), and 4 physical parameters (coronal base density n_0, maximum density compression factor q_n,max in the CME front, the mean coronal temperature T_0 at the observed wavelength filter, and a vertical density scale height factor (or super-hydrostaticity factor) q_λ that expresses the ratio of the effective density scale height to the hydrostatic scale height λ_T). Thus, assuming an exponentially stratified atmosphere,

n_E(h) = n_0 exp[−h / (q_λ λ_T)] , (15)

a density compression factor q_n(t) ≤ q_n,max in the CME front (Eq. 12), and adiabatic expansion inside the CME cavity (Eq. 14), we have the following time-dependent 3D density model: the external density n_E(h) for d > R(t), the compressed front density q_n(t) n_E(h) for r(t) < d ≤ R(t), and the rarefied cavity density q_n,cav(t) n_E(h) for d ≤ r(t), where d is the distance of an arbitrary location with 3D coordinates (x, y, z) to the instantaneous center position [x_0(t), y_0(t), z_0(t)] of the CME, which is located at height h(t) vertically above the heliographic position (l, b).
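As a minimal illustration of how such a piecewise density model can be evaluated, the sketch below (our schematic, with assumed functional forms where the extracted text does not reproduce the paper's exact equations; the function name n_e and its arguments are ours) assigns external, front, and cavity densities according to the distance d from the CME center.

```python
import numpy as np

# Schematic sketch of the piecewise 4D density model described above.
n0, q_lam, lam_T = 6.5e8, 1.45, 45e3  # base density [cm^-3], factor, scale height [km]
R_SUN = 696e3                         # solar radius [km]

def n_e(x, y, z, R, r, q_n, q_cav, x0, y0, z0):
    """Electron density [cm^-3] at (x, y, z) [km, Sun-centered] for a
    CME with outer radius R, cavity radius r, and center (x0, y0, z0)."""
    d = np.sqrt((x - x0)**2 + (y - y0)**2 + (z - z0)**2)  # distance to CME center
    h = np.sqrt(x**2 + y**2 + z**2) - R_SUN               # height above photosphere
    n_ext = n0 * np.exp(-h / (q_lam * lam_T))             # stratified corona, Eq. (15)
    if d > R:        # external corona
        return n_ext
    elif d > r:      # compressed CME front
        return q_n * n_ext
    else:            # rarefied cavity
        return q_cav * n_ext

# Example: ambient density 50 Mm above the surface, far from the CME:
print(n_e(0.0, 0.0, R_SUN + 50e3, R=100e3, r=90e3,
          q_n=2.0, q_cav=0.1, x0=0.0, y0=500e3, z0=500e3))
```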
STEREO/EUVI Observations
One CME event observed with STEREO that appears as a spherically expanding shell, and thus is most suitable for fitting with our analytical model, is the 2008-Mar-25, 18:30 UT, event. This CME occurred near the East limb for spacecraft STEREO/Ahead, and was observed on the frontside of the solar disk with spacecraft STEREO/Behind. Some preliminary analysis of this event regarding CME expansion and EUV dimming can be found in Aschwanden et al. (2009a), the CME mass was determined in white light with STEREO/COR-2 (Colaninno and Vourlidas 2009) and with STEREO/EUVI (Aschwanden et al. 2009b), and the 3D geometry was modeled with forward-fitting of various geometric models to the white-light observations (Thernisien et al. 2009; Temmer et al. 2009; Maloney et al. 2009; Mierla et al. 2009a,b). While most previous studies model the white-light emission of this CME, typically a few solar radii away from the Sun, our model applies directly to the CME source region in the lower corona, as observed in EUV. We follow the method outlined in Aschwanden et al. (2009a). Our model also quantifies the geometry and kinematics of the CME, as well as the EUV dimming associated with the launch of the CME. In order to determine the positional parameters of the CME as a function of time we trace the outer envelope of the CME bubble (by visual clicking of 3 points) in each image and each spacecraft and fit a circle through the 3 points in each image. The selected points for fitting the position of the CME bubble were generally chosen in the brightest features of the lateral CME flanks, but could not always be traced unambiguously in cases with multiple flank features. In those cases we traced the features that were closest to a continuously expanding solution. The radii and y-positions of the circular fits are fully constrained from the STEREO/A images, so that only the x-positions of the centroid of the spherical shell need to be fitted in the epipolar STEREO/B images. We note that the fits of the CME bubble roughly agree with the envelopes of the difference flux in the STEREO/B images initially (up to 18:48 UT), while there is a discrepancy later on. Apparently the CME has a more complex geometry than our spherical bubble model, which needs to be investigated further.
Fitting of Positional Parameters
This procedure yields the CME centroid positions [x_A(t_i), y_A(t_i)] and [x_B(t_i), y_B(t_i)] for the time sequence t_i, i = 1, ..., 16. The images in Figs. 2 and 3 are displayed as a baseline difference (by subtracting a pre-CME image at 18:36 UT) to enhance the contrast of the CME edge. The circular fits to the CME outer boundaries are overlaid in Figs. 2 and 3. Both images have been coaligned and rotated into epipolar coordinates (Inhester et al. 2006), so that the y-coordinates of a corresponding feature are identical in the spacecraft A and B images, while the x-coordinates differ according to the spacecraft separation angle α_sep, which amounts to α_sep = 47.17° at the time of the CME event. The epipolar coordinates measured from both spacecraft are then related to the heliographic longitude l, latitude b, and distance r_c from Sun center, and these relations can be solved directly for the spherical (epipolar) coordinates (l_A, b, r_c). Therefore, using stereoscopic triangulation, we can directly determine the spherical coordinates (l_i, b_i, r_c,i), i = 1, ..., 16 for all 16 time frames, as well as obtain the CME curvature radii R(t_i) from the circular fits to the CMEs. We plot the observables so obtained, l_A(t), b(t), R(t), and h(t) = r_c(t) − R_⊙, in Fig. 3 and determine our model parameters l and b from the averages. We obtain a longitude of l_A = −102.4° ± 0.9° (for spacecraft STEREO/A), l_B = l_A + α_sep = −54.9° ± 0.9° (for spacecraft STEREO/B), and a latitude b = −8.8° ± 0.6°. Thus, the CME source region is clearly occulted for STEREO/A. These epipolar coordinates can be rotated into an ecliptic coordinate system by the tilt angle β_AB = 2.66° of the spacecraft A/B plane. Viewed from Earth, the longitude is approximately l_E ≈ −102.4° + α_sep/2 ≈ −78.8°. Thus, the CME source region is 12° behind the East limb when seen from Earth. This explains why the EUV dimming is seen uncontaminated by post-flare loops, which are seen by STEREO/B but hidden for STEREO/A.
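The paper's explicit triangulation relations are not reproduced in the extracted text; the sketch below therefore implements one standard epipolar formulation (an assumption on our part, and the function name triangulate is ours), in which x = r_c cos(b) sin(l) in the plane of sky, y = r_c sin(b) is shared between the two views, and the spacecraft B longitude is offset by the separation angle.

```python
import numpy as np

# Hedged sketch of epipolar stereoscopic triangulation (one standard
# formulation, assumed here; not necessarily the paper's exact relations).
# Inputs: epipolar plane-of-sky coordinates (x_A, x_B, y), same units as r_c.
def triangulate(x_A, x_B, y, alpha_sep_deg=47.17):
    a = np.radians(alpha_sep_deg)
    # Assume x = rho * sin(l) with rho = r_c * cos(b) and l_B = l_A + a:
    #   x_B / x_A = sin(l_A + a) / sin(l_A)  ->  solve for l_A:
    l_A = np.arctan2(x_A * np.sin(a), x_B - x_A * np.cos(a))
    rho = x_A / np.sin(l_A)        # r_c * cos(b)
    r_c = np.hypot(rho, y)         # distance from Sun center
    b = np.arctan2(y, rho)         # heliographic latitude
    return np.degrees(l_A), np.degrees(b), r_c
```

Under this formulation, the shared y-coordinate fixes r_c sin(b), and the two x-coordinates determine the longitude and r_c cos(b), mirroring the statement in the text that the relations "can directly be solved" for (l_A, b, r_c).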
Fitting of Kinematic Parameters
We plot also the observables h(t) and R(t) in Fig. 4 and determine the model parameters h_0, R_0, a_h, a_R by fitting the quadratic functions R(t) (Eq. 1) and h(t) (Eq. 2), for which we obtain the starting time t_0 = 18.64 hrs (18:38 UT), the initial CME radius R_0 = 45 Mm, the initial height h_0 = 45 Mm, and the accelerations a_R = 0.54 km s^−2 for the CME radius expansion and a_h = 0.52 km s^−2 for the height motion. The initial velocity is found to be negligible (v_R ≈ 0 and v_h ≈ 0). We estimate the accuracy of the acceleration values to be of order ≈ 10%, based on the uncertainty of defining the leading edge of the CME. Thus, we have determined 9 of the 14 free parameters of our model so far.
Note that the acceleration measured here refers to the very origin of the CME at low coronal heights of ≲ 0.6 solar radii observed in EUVI data. The acceleration is expected to be initially high and to decline rapidly further out, when the driving magnetic forces decrease at large altitudes. This explains why our values for the acceleration at low coronal heights are significantly higher than those measured further out in the heliosphere, typically of the order of tens of m s^−2 in height ranges of 5-22 solar radii, as measured with SOHO/LASCO. SOHO/LASCO even reported a slightly negative acceleration at altitudes of 5-22 solar radii. The driving magnetic forces that accelerate a CME are clearly confined to much lower altitudes.
Fitting of Geometric Parameters
We model the 3D geometry of the CME bubble with the time-dependent radius R(t) and the width w 0 of the CME compression region. In Fig. 5 we show cross-sectional EUV brightness profiles across the CME in horizontal direction (parallel to the solar surface) and in vertical direction for the EUVI/A 171Å observations (indicated with dotted lines in Fig. 2). These baseline-subtracted profiles clearly show a progressive dimming with a propagating bright rim at the CME boundary, which corresponds to the density compression region at the lateral expansion fronts of the CME. The bright rims are clearly visible in the images during 18:46−18:56 UT shown in Fig. 2. The average width of the observed bright rims is w 0 ≈ 10 Mm, a value we adopt in our model.
Fitting of Physical Parameters
Finally we are left with the four physical parameters T_0, q_λ, n_0, and q_n,max. Since we show here only data obtained with the 171 Å filter, the mean temperature is constrained by the peak temperature of the instrumental EUVI response function, which is at T_0 = 0.96 MK. This constrains the thermal scale height to λ_T = 47,000 × 0.96 ≈ 45,000 km. The remaining free parameters q_λ, n_0, and q_n,max need to be determined from best-fit solutions by forward-fitting of simulated EUV brightness images (or profiles, as shown in Fig. 5) to observed EUV brightness images (or profiles). The EUV emission measure in each pixel position (x, y) can be computed by line-of-sight integration along the z-axis in our 3D density cube n_e(x, y, z) per pixel area dA for each time,

EM(x, y) = ∫ n_e(x, y, z)^2 dz ,

from which the intensity I_171(x, y) in the model image, in units of DN s^−1, can be directly obtained by multiplying with the instrumental response function R_171(T) of the 171 Å filter, where R_171 = 3632 × 10^−44 DN s^−1 cm^3 MK^−1 and the FWHM of the 171 Å filter response is ΔT_171 = 1.25 − 0.51 = 0.74 MK.
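A minimal sketch of this forward-synthesis step is given below (ours, not the paper's pipeline): the emission measure is accumulated along the line of sight and multiplied by the quoted response constant. The exact absolute normalization, including the role of the ΔT_171 factor, is our assumption.

```python
import numpy as np

# Sketch of the 171 A intensity synthesis: EM = int n_e^2 dz, then an
# assumed conversion using the response constant quoted in the text.
R_171 = 3632e-44   # quoted response [DN s^-1 cm^3 MK^-1]
dT_171 = 0.74      # quoted FWHM of the 171 A response [MK]

def intensity_171(n_e_cube, dz_cm):
    """n_e_cube: density [cm^-3] on an (nx, ny, nz) grid along z;
    dz_cm: voxel depth [cm]. Returns a 2D map in (assumed) DN/s."""
    em = np.sum(n_e_cube**2, axis=2) * dz_cm   # emission measure [cm^-5]
    return em * R_171 * dT_171                 # assumed scaling to DN/s
```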
In Fig. 6 we show best-fit solutions of horizontal and vertical brightness profiles. The absolute flux level is proportional to the coronal base density squared, which we obtain by minimizing the mean flux difference between simulated and observed flux profiles. We obtain a best-fit value of n 0 = 6.5 × 10 8 cm −3 . The super-hydrostaticity factor is most sensitive to the vertical flux profile (Fig. 5, right-hand side panels), for which we find a best-fit value of q λ = 1.45. Thus, the average density scale height in the CME region is slightly super-hydrostatic, as expected for dynamic processes. These values are typical for quiet-Sun corona and active region conditions (see Figs. 6 and 10 in Aschwanden and Acton 2001).
The last free parameter, the maximum density compression factor q_n,max, affects mostly the brightness of the CME rims. Fits to the brightness excess at the CME rims, at those times when bright rims are visible, are consistent with a value of q_n,max ≈ 2.
Comparison of Numerical Simulations with Observations
After we constrained all 14 free parameters (listed in Table 1) of our analytical 4D model by fitting some observables, such as measured coordinates (Fig. 4) and cross-sectional horizontal and vertical brightness profiles (Fig. 5), we are now in the position to inter-compare the numerically simulated images with the observed images, as shown in Fig. 6 for 5 selected times, for both the STEREO/A and B spacecraft. The comparison exhibits a good match for the extent of the dimming region and the bright lateral rims, both extending over about 1.5 thermal scale heights above the solar surface. The base-difference images of EUVI/A reveal a fairly symmetric CME (as the model is by design), surrounded by spherical bright rims at the northern and southern CME boundaries (as the model is able to reproduce it). The model, however, is less in agreement with the observed EUVI/B images. The extent of the EUV dimming region matches relatively well, although the observed dimming region is somewhat cluttered with bright post-flare loops that appear in the aftermath of the CME, which are mostly hidden in the EUVI/A observations. The biggest discrepancy between the model and the EUVI/B observations is the location of the brightest rim of the CME boundary. The combination of projection effects and gravitational stratification predicts a brighter rim on the west side, where we look through a longer and denser column depth tangentially to the CME bubble, which is not apparent in the observations of EUVI/B. Instead, there is more bright emission just above the eastern limb that cannot be reproduced by the model. Apparently there exists stronger density compression on the eastern side of the CME bubble than the model predicts. Another inconsistency is the bright loop seen in EUVI/B at 18:51 UT, which does not match the surface of the modeled CME sphere as constrained by EUVI/A. Apparently, there are substantial deviations from a spherically symmetric CME bubble model that are visible in EUVI/B but not in EUVI/A. Perhaps a flux rope model could fit the observations better than a spherical shell model. These discrepancies between the observations and our simple first-cut model provide specific constraints for a more complex model (with more free parameters) that includes inhomogeneities in the density distribution of the CME.
Estimate of the CME Mass
Our model allows us, in principle, to estimate the CME mass by integrating the density n_e(x, y, z, t) over the entire CME sphere, which is of course growing with time, but is expected to converge to a maximum value once the CME expands far out into the heliosphere. A simple lower limit can be obtained analytically by integrating the density in the cylindrical volume above the footpoint area,

M_CME = m_p ∫ n_e(x, y, z) dV_CME ≈ m_p π R_0^2 n_0 λ_T q_λ . (22)

From our best-fit values R_0 = 45 Mm, q_λ = 1.45, n_0 = 6.5 × 10^8 cm^−3, and the thermal scale height of λ_T = 47 Mm, we obtain a lower limit of M_CME ≥ 0.47 × 10^15 g. However, this CME appears to expand in a cone-like fashion in the lowest density scale height, so the total volume and mass are likely to be about a factor of ≈ 2 higher. Moreover, the mass detected in 171 Å amounts only to about a third of the total CME mass (Aschwanden et al. 2009b), so a more realistic estimate of the total CME mass is about a factor of 6 higher than our lower limit, i.e., M_CME ≈ 3 × 10^15 g, which brings it into the ballpark of previous CME mass determinations of this particular event, i.e., m_CME = 2.9 × 10^15 g from STEREO/COR-2 white-light observations (Colaninno and Vourlidas 2009), or m_CME = (4.3 ± 1.4) × 10^15 g from STEREO/EUVI observations (Aschwanden et al. 2009b). | 2009-08-13T14:53:50.000Z | 2009-08-13T00:00:00.000 | {
"year": 2009,
"sha1": "dee38a9b94c3f7e273d76bee65908a338687396e",
"oa_license": "CCBY",
"oa_url": "https://www.ann-geophys.net/27/3275/2009/angeo-27-3275-2009.pdf",
"oa_status": "GOLD",
"pdf_src": "Arxiv",
"pdf_hash": "ae300c6194a4997532e3b092dbcd41cbeaa09ef7",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
263278175 | pes2o/s2orc | v3-fos-license | Beyond mere repetition: On tradition, creativity and theological speech
further illuminate what it means for a tradition to be open to the future in a way that displays vulnerability and vitality.

This article argues for understanding Christian theological speech, including a Reformed engagement with confessions, as 'traditioned creativity'. The argument is introduced by highlighting a theological hermeneutic that underlies the Belhar Confession's accompanying letter. This discussion points towards an account of Christian discourse that is 'traditioned' by the past but also moves beyond the mere repetition of the tradition's authoritative statements. The article, therefore, affirms the need to distinguish between a living tradition and a narrow traditionalism. In addition, the article also interrogates some forms of theological rhetoric in which 'tradition' functions to insert control over spaces and people, often exhibiting totalising discourses and over-triumphant claims.
Tradition and confessing anew
In 1982, a Reformed denomination in South Africa, then still named the Dutch Reformed Mission Church (DRMC), accepted in draft form a confessional document known as the Belhar Confession. Four years later, at the General Synod of 1986, the confession was accepted officially and thus became part of the confessional base of this church, with their other confessions being the Belgic Confession (of 1561), the Heidelberg Catechism (of 1563) and the Canons of Dort (1618-1619). After the transition of apartheid South Africa to democracy in 1994, the DRMC united with the largest part of the Dutch Reformed Church (DRC) in Africa to form the Uniting Reformed Church in Southern Africa (URCSA), and the Belhar Confession became part of this newly established church's confessional base. The DRC (with predominantly white members) was not part of this unification process and has not accepted the Belhar Confession officially as a confession, although there have been several attempts to do so that failed as the conditions determined by some church juridical regulations could not be met (see Plaatjies-Van Huffel 2017:53-66; Vosloo 2017:277-287).

In the first year or two after the acceptance of the draft confession in 1982, the official reaction of the DRC was negative, taking note of the confession 'with great sorrow' and viewing the confession's emphasis that the church must stand with the oppressed as too one-sided and as based on an unacceptable exegesis typical of liberation theology (Nederduitse Gereformeerde Kerk, Handelinge 1982:1403, 1986:26-27). We have the 16th-century confessions, many argued, so why do we need a new confession?
The Belhar Confession should be understood against the backdrop of the theological struggles of the 1970s and 1980s in apartheid South Africa. Given the biblical and theological justification of the policy and practice of racial segregation by the white DRC, the conviction became stronger in black Reformed churches and some ecumenical circles that the justification of apartheid on theological grounds is nothing but a heresy. Hence the growing conviction that in these circumstances a status confessionis had dawned, with the term status confessionis being a technical term for the idea that the moment for a state or stance of confession has arrived (see Smit 1984). The gospel itself, many believed, was at stake. At the synod meeting in 1982, a small committee was therefore tasked to produce a confessional document that states what the church believes. The result was a confession - a confession that in typical Reformed fashion affirms that Jesus is Lord and connects this core belief to the biblical call for unity, reconciliation and justice. I will not go into the details of the history of origin and content of the Belhar Confession (for this, see Botha & Naudé 2011; Naudé 2010; Plaatjies-Van Huffel 2017; Vosloo 2020). I do want, however, to call attention to the accompanying letter that was written as a document to be read together with the confession, also with a discussion of the interrelation between tradition, creativity and Christian theological speech in view. The accompanying letter to the Belhar Confession begins as follows: 'We are deeply conscious that moments of such seriousness can arise in the life of the church that it may feel the need to confess its faith anew in the light of a specific situation. We are aware that such an act of confession is not lightly undertaken, but only if it is considered that the heart of the gospel is at stake. In our judgment, the present church and political situation in the country … calls for such a decision' (URCSA n.d.:1). The accompanying letter also makes it clear that the confession 'is not aimed at specific people or groups of people or a church or churches'; rather, it is proclaimed 'against a false doctrine, against an ideological distortion that threatens the gospel itself' (URCSA n.d.:1). I allude to this letter because I want to underscore how Christian confessional and prophetic speech is understood in it. The first important point is that it expresses the strong need to say something because of the experience that the gospel itself is at stake. Hence the need for Christians and the church to say again and anew what they believe, given their experience of current realities. Of course, the church has scripture and other confessions to guide their faith convictions and practices. Still, many believed that the form of speech and public witness needed at this time in history asks for more than the mere repetition of yesterday's truths and wisdom. What is needed, they proclaimed, is a word for the moment, a moment experienced as a kairos, or a moment of truth. It is also evident from the history of the origin of the Belhar Confession that the confessing of the faith anew was not dislocated from tradition; there was the deep belief that what it expresses is in line with the deepest convictions of the Reformed tradition, and indeed with the heart of the Christian faith itself. Therefore the abundant references in the confession to scripture, the ecumenical creeds, and other Reformed confessions and prophetic statements (cf. Naudé 2010:77-128).
Thus, the Belhar Confession and the accompanying letter consciously aligned themselves with the Reformed faith tradition in which they stood. One should also note that the Belhar Confession and its accompanying letter intimate a hermeneutic which affirms that Christian speech and witness require more than the mere literal repetition of previous theological statements. The times, or the moment, can necessitate the need to confess anew, and in different words and metaphors. One can even recognise a kind of theological creativity at work. But this creativity is not a work of individual genius that neglects or crosses out the past; instead, it is more a matter of saying anew and again what one has heard than standing in total discontinuity with the past.
One can also add that when the DRMC and the DRC in Africa united in 1994 to form the URCSA, it accepted a church order in which the Belhar Confession is included as part of its confessional base. The first article of this new church order specifically acknowledges that circumstances can arise in the future that might call for the adoption of new confessions (URCSA Church Order 1994). This openness in principle affirms the idea that the church is not viewed as merely a church with confessions, but rather as a confessing church - a church that confesses its faith ever anew, not merely through adopting confessions but also through the nature of its ongoing pastoral and prophetic witness.
So far, I have referred to the Belhar Confession and confessional statements, but this is, of course, just one form of Christian speech and performance. What I presented as the theological hermeneutic underlying the Belhar Confession is also applicable to preaching, prophetic statements, pastoral letters, and the embodied public witness of Christians and churches.
This theological logic of Christian speech and action (as it is displayed, according to my reading, in the accompanying letter to the Belhar Confession) moves beyond the mere regurgitation of the truths of the tradition. Rather, faithfulness to the tradition requires that we risk fresh and new articulation, also for the sake of the specific tradition itself. One should therefore guard against understanding a tradition as a fixed entity and its transmission as a static process. Tradition and creativity should not be seen as mutually exclusive. As the Roman Catholic theologian Avery Dulles (1992) reminds us: 'The ideas of "tradition" and "creativity" seem at first glance to be opposed and incompatible. Tradition says continuity; creativity says innovation and hence discontinuity. With the proper distinctions, however, it may be possible to show that the two are not only compatible but mutually supportive' (p. 20). The question remains, though, how to think tradition and creativity together. In grappling with this question, one should keep in mind that innovation, creativity and originality are not to be contrasted uncritically with tradition. Originality and creativity are actually often the result of a particular form of engagement with the past. It is thus false to confuse the concept of tradition with stagnation, as 'the activity of the living transmission of a traditum is a highly dynamic business' (Pieper 2008:15). Tradition is indeed a dynamic process, with conversation, and even argument and conflict, being vital for the life of a tradition. As De Gruchy (2011) observes: 'Traditions stay alive precisely because those who share them are in conversation with the past and in debate with each other … This is how traditions are re-invented from one context to the next, how they break open to appropriate the new, rather than break down' (p. 12).
Tradition and creativity
In his book simply titled Tradition, published in 1981, the sociologist Edward Shils argues that - because of the influence of Enlightenment rationality and scientific knowledge - change and innovation have become coterminous with progress and improvement, while traditionality has become connected with ignorance and superstition. Shils is sceptical of this scepticism towards tradition inherited from the Enlightenment (Shils 1981:4-7). His work is an attempt to revive the notion of tradition against these impulses. Alasdair MacIntyre too is critical of tradition-free reasoning and the modernist legacy that presents itself as a tradition of non-tradition, and he famously defines a 'living tradition' as 'an historically extended, socially embodied argument, and an argument precisely about the goods which constitute a tradition' (MacIntyre 1984:222).
While the scepticism towards the Enlightenment suspicion of tradition, as underlined by Shils and MacIntyre, should be affirmed, one needs to acknowledge simultaneously that any fossilised understanding of tradition needs to be critiqued. It may be a way of legitimising a theological undertaking that displays an ignorant insularity and a docile antiquarianism.

One should, therefore, differentiate carefully between tradition and traditionalism. Jaroslav Pelikan's (1984) oft-quoted remark still provides a helpful way into such a discussion: 'Tradition is the living faith of the dead; traditionalism is the dead faith of the living' (p. 65).
More recently, David Bentley-Hart (2022) has also utilised - in his book Tradition and Apocalypse: An Essay on the Future of Christian Belief - the distinction between tradition and traditionalism, describing traditionalism as: '[A] fretful, even at times neurotic, fixation upon the past configurations of the faith that one remembers from childhood, or remembers one's parents remembering, or remember hearing about those who vaguely remember remembering' (p. 12). Such a traditionalism does not understand the fullness of a living tradition. Over against such a reduction, Bentley-Hart (2022) states: 'A tradition, in its full theological sense, is truly vital to the degree that it is always, in every epoch, in a state of patient but dynamic reconstruction … Here recollection, imagination, and inspired invention must work in inseparable concert' (pp. 111-112). Concerning the Arian controversy and the councils of Nicaea and Constantinople, Bentley-Hart (2022) even argues that Arius and his followers were fierce traditionalists who were unable to grasp the demands of tradition, and thus lacked imagination, whereas the Nicene party 'were daring innovators, willing to break with the past to preserve its spiritual force' (p. 129). The language of the former proved to be sterile, while that of the latter gave the tradition new life.

One should also add that it is the thrust of Hart's argument in Tradition and Apocalypse that traditionalists resent the disruptive vitality of a living tradition, and that one therefore finds within the Christian tradition a struggle between, on the one hand, those who guard religious and social stability and, on the other hand, what Bentley-Hart (2022) calls 'the apocalyptic ferment of the Gospel' (pp. 131-145). Hence the claim that true fidelity to what is most original in a tradition entails the play between stability and disruption; it requires an openness to the future in the light of the past's promise.

Although there is often contestation - including concerning their respective historical analyses and constructive proposals - between the voices (from various theological traditions) that critique a static traditionalism, they nevertheless share the view that faithful and liberating Christian speech should not be equated with the mere repetition of ideas from the past or the statements of authoritative figures. I already mentioned how the Belhar Confession draws on the Reformed confessional tradition but moves beyond mere repetition.
One can also think of the Swiss Reformed theologian Karl Barth's (1995) creative engagement with the 16th-century Reformer John Calvin. As we read in the introduction to his The Theology of John Calvin (1922): '(W)e do not have teaching by repeating Calvin's words as our own or making his views ours … (T)hose who simply echo Calvin are not good Calvinists, that is, they are not really taught by Calvin. Being taught by Calvin means entering into dialogue with him, with Calvin as the teacher and ourselves as the students, he speaking, we doing our best to follow him and then - this is the crux of the matter - making our own response to what he says … For that Calvin wants to teach and not just say something that we will repeat. The aim, then, is a dialogue that may end with the taught saying something very different from what Calvin said but that they learned from or, better, through him' (p. 4). One can discern a similar logic in the work of the Anglican theologian Rowan Williams, and more particularly in his reading of Augustine. Jeffrey McCurry's article 'Towards a Poetics of Theological Creativity: Rowan Williams reads Augustine's De Doctrina after Derrida' is illuminating here.
McCurry (2007) begins this article with the remark that for Williams:

'(T)he choice between faithfulness to received traditions of creedal, scriptural, and theological discourse, on the one hand, and genuine theological creativity, on the other is false. This is because Williams sees the texts of scripture, creed, and tradition not as historical artifacts whose meaning is equated with the original authorial intention behind the texts but rather as scripts for a certain kind of performance, similar to the script of a play' (p. 415).

And McCurry (2007) adds the following pertinent comment concerning theological creativity: 'Under the grammar of this kind of poetics of theological creativity, faithfulness to tradition and genuine creativity is not mutually exclusive, and the Christian sources are always waiting to be creatively re-performed for and in ways inflected by present ecclesial and historical needs. In this way the Christian sources do not serve as the end of theological poesis but the beginning' (p. 415). McCurry sees Williams' reading of Augustine as traditioned but not traditional, as creative but not unconditioned. In this sense, McCurry (2007) speaks of a 'traditioned theological creativity' (p. 430) - a term that aptly captures something of the heart of our argument here.
Many other examples can be added in which we see the hermeneutic of 'traditioned theological creativity' at play, also from South African theologians whose work, in this sense, displays sensitivity for the ages and the moment; for historicity and contextuality; and for tradition and creativity.
One can thus say, or so this article argues, that such a historical hermeneutic is not about mere repetition but about performative and participatory remembering, requiring what Catherine Pickstock calls, in conversation with Kierkegaard, non-identical repetition (2013:xi-xii). In a certain sense, total repetition is of course not possible. Kierkegaard's famous example of his return visit to Berlin, undertaken to see whether identical repetition is possible, confirms this. Even if you try to replicate a previous travel experience as carefully as possible, it is still not the same. So, for Kierkegaard, the kind of real repetition that he pleads for is not linked to the memory of external places but rather refers to an inner quality of life that draws in freedom from the past and is able to act in the present as part of a process of 'remembering forward' (see Kierkegaard 1983:150-176).
In arguing that faithfulness to tradition requires more than mere repetition (or a different kind of repetition in the Kierkegaardian sense), one should also keep in mind that the type of historical (or rather ahistorical) hermeneutic which holds that one need only restate what the Bible or one's tradition says, and in this sense runs no risk of betrayal, is also performing something very pertinent. Often these kinds of sentiments are driven by the urge to affirm the status quo or to resist the challenge that new language and speech might hold for ingrained racist, patriarchal, or colonial attitudes and structures.
Interrogating and cultivating tradition
In arguing for theological speech as 'traditioned creativity', one should understand this being 'traditioned' in a dynamic way. Faithfulness to the tradition does not exclude, but may require, taking the risk of articulating anew, in the light of current realities, what one has internalised from the tradition. In this sense, a living tradition stands over against the sort of 'traditionalism' that has rightly been the target of many critiques. This said, one should add that even if one affirms a dynamic understanding of tradition, a further cautionary remark is needed in reflecting on a responsible hermeneutic of tradition. In this regard, some comments by the Yale theologian Willie Jennings are highly instructive.
In a book symposium on his monograph After Whiteness: An Education in Belonging, published in 2021 in the theological journal Modern Theology, Jennings responded to his interlocutors in an article 'Against the Finished Man'. He observes herein that the title of his book After Whiteness gestures not towards some kind of post-racial future, but that it is a play on Alasdair MacIntyre's influential book After Virtue, a pivotal book in his own theological journey. Jennings (2021) writes as follows about the reception of MacIntyre's thought on virtue and tradition, and it is worth quoting him at length:

'It offered a path toward cultivating a comprehensive theological identity. Yet through its digestion and dissemination … I watched a colonial process of formation assert itself in and through the grasping of something called tradition. What I saw was less a matter of MacIntyre's philosophical project and much more a matter of theological longing. I watched people aim their life towards a vision of maturity that bridged an imagined past to current intellectual postures. But the past was not what was actually brought forward but instead a person held tightly in a dream of coherence and clarity that had merged with the colonial master's dream of the control of spaces' (p. 1057).

What I take from Jennings' comment is not so much a critique of the notion of tradition as such as an exposure of how the rhetoric of 'tradition' can be in service of a type of colonial desire associated with the trope of 'white self-sufficient men' guided by control, possession and mastery. According to such a mentality, we know and control the tradition and become the gatekeepers of a polished and coherent tradition we have mastered through our grasp.
Jennings challenges such a view of tradition, affirming and extending in the process the idea that tradition is best understood as a pneumatological reality in which one participates. It is through living in and with the Spirit that we are connected with others across space and time. 'Yet,' Jennings (2021) adds, 'when the living of the faith are baptized in colonial desire, then the Spirit is thwarted and tradition unfolds within the logic of the plantation' (p. 1057).
These remarks by Jennings challenge a rhetoric of tradition that functions as a totalising discourse in which notions such as comprehensiveness, coherence and clarity are used as tools to exclude, in part because the understanding of tradition does not allow for ambiguity, messiness, contradiction and plurality. As Jennings (2021) writes: 'Coherence, clarity, consistency - these are not bad words, but when executed through colonialist desire they pull scholarly aspirations towards controlling gestures' (p. 1058).
Jennings rightly points to harmful ways in which the notion of tradition can be utilised in our discourses. It can easily slip into a totalising concept in which we imagine that we can fully oversee tradition as a whole and polished entity.
In her book Nothing gained is eternal: A theology of tradition, the Roman Catholic theologian Anne Carpenter too subjects a theology of tradition to decolonial criticism (drawing not only on the work of Bernard Lonergan, Charles Péguy, Maurice Blondel and Hans Urs von Balthasar but also on the writings of M. Shawn Copeland, Willie Jennings and James Baldwin). Carpenter sees tradition as a resource but also makes the concomitant argument that one must not look away from the shadow side of tradition (indeed from its sin and failure). This emphasis challenges any triumphalism in using tradition as a resource for contemporary theological conversations (2022:xii, 169). Carpenter (2022) points out that it is true that 'our Christian past' is much more monstrous than we are accustomed to think. 'But,' she continues, 'it is also true that knowing this past is explanatory of a great deal in our actual present,' giving us the task 'of dealing with the past in its present presence' (p. 174). Christianity should, therefore, concretely confront present injustices and their origins, remembering that 'Christian fidelity is rarely comfortable' (p. 174).
Much more can be said against any romanticising (or indeed any one-sided demonisation) of the Christian past. But for our purposes here, I want to underscore the idea that the lack of an understanding of tradition as complex, ambivalent and messy quickly leads to reductive and often outright false constructions of a coherent and all-comprehensive 'tradition'. One implication hereof is that an emphasis on 'traditioned creativity' - such as this article also underscores - requires an account of fragments, in line with what the Chicago theologian David Tracy calls 'frag-events'. Tracy (2020) comments: Frag-events (a neologism - fragmentary and fragmenting events) negatively shatter or fragment all totalities, even as they are positively open to Infinity. Fragments, therefore, can play an important role in a world still largely trapped in oppressive economic, social, political, and even cultural (including religious) totality systems … Fragments not only shatter all closed systems; they simultaneously open one to difference and otherness. (pp. 1-2) For Tracy (2020), fragments - understood as fragmentary and fragmenting events (or frag-events) - provide a very fruitful way, albeit not the only way, into the liberating aspects of theories and traditions. As he observes: 'Discover the right fragment - in one's own and other traditions, in one's own and other lives - and you will discover an entry into the eventful, infinite character of reality itself' (p. 2).
The theology of tradition that Jennings and Carpenter point towards, and which also resonates in Tracy's language of fragments (or frag-events), challenges the rhetoric in which 'tradition' is put into the service of totalising discourses and any over-triumphant claims. One of the signs of the integrity of theological discourse (also confessional and prophetic speech) might well be that it is not self-congratulatory and does not seek separation. In this sense, the accompanying letter to the Belhar Confession is again instructive. Here we read that the act of confession is 'a two-edged sword': We know that the attitudes and conduct that work against the gospel are present in all of us and will continue to be so. Therefore the confession must be seen as a continuous process of soul-searching together, a joint wrestling with the issues, and the readiness to repent in the name of our Lord Jesus Christ in a broken world. It is certainly not intended as an act of self-justification and intolerance, for that will disqualify us in the very act of preaching to others. (URCSA n.d.:2) Yet this attitude does not deter one from speaking and confessing, even if the speech is painful and can bring sadness. Such speech is, however, at its core not hurtful but hopeful. Thus, the accompanying letter concludes: We know that such an act of confession and process of reconciliation will necessarily involve much pain and sadness. It demands the pain of repentance, remorse and confession … We are only too well aware that this confession calls for the dismantling of structures of thought, of church, and society that have developed over many years. However, we confess, that for the sake of the gospel, we have no other choice … Accordingly, our prayer is that the pain and sadness we speak of will be pain and sadness that lead to salvation. (URCSA n.d.:2)
Conclusion
As an epigraph to this article, I used some words by Hans Urs von Balthasar and, in closing, I want to return to some metaphors he uses that capture well the theological hermeneutic that this article - with its emphasis on Christian speech and performance as 'traditioned theological creativity' (cf. McCurry 2007:430) - argues for.
In the Foreword to his book Presence and thought: Essay on the religious philosophy of Gregory of Nyssa, Von Balthasar (1995) argues that there is no historical situation (and we can add text or figure) that can provide 'a kind of master key capable of solving all the problems that plague us today' (p. 10). Certainly, the theologian can and must appeal for help to tradition. Still, one must be clear about what tradition can and cannot give us. And in this regard, Von Balthasar uses a helpful set of metaphors that can illuminate the argument for Christian theological speech as traditioned creativity. Von Balthasar (1995) writes as follows about tradition: One would be quite mistaken to imagine it as a relay of runners, each of whom, at the end of his segment of the race, hands on the 'witness' or the 'message', or a written work that, through space and time, is preserved of itself in its immovable materiality. If there were indeed a witness and a message to preserve, a more correct image would be that of a torch … For even while it remains identical to itself, a living flame can lay claim to being protected, at every moment, against a constant succession of dangers and being sustained by a substance that is ever new. In very truth, this living Flame is that of the Spirit of love, who, having come down from heaven to the Holy Land, is jealously preserved by her through all generations in order to inflame the world. (p. 11) Like all metaphors, the metaphor of a torch and a living flame has its limitations and cannot fully convey the depth and complexity of the concept of a tradition. Yet it points to an account of tradition with an openness to the future that is not about the transmitting of a dead deposit, but about participating in and sharing something vulnerable, ever-changing and life-giving. Theologically speaking, it affirms an account of tradition as a pneumatological reality sustained by the Spirit of truth and love. | 2023-10-01T15:19:49.735Z | 2023-09-28T00:00:00.000 | {
"year": 2023,
"sha1": "62b95166138ff0477a75eab95eae96bf18887a20",
"oa_license": "CCBY",
"oa_url": "https://hts.org.za/index.php/hts/article/download/9051/25814",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "0d11d99feb61c74457878d8516dc9d2c5726a293",
"s2fieldsofstudy": [
"Philosophy"
],
"extfieldsofstudy": []
} |
8101289 | pes2o/s2orc | v3-fos-license | The efficiency of sonography in diagnosing volvulus in neonates with suspected intestinal malrotation
Abstract This study prospectively evaluated the efficiency of sonography for volvulus diagnosis in neonates with clinically suspected intestinal malrotation. A total of 83 patients with suspected intestinal malrotation who underwent detailed abdominal sonography and an upper gastrointestinal contrast study were included. Malrotation was characterized by inversion of the superior mesenteric artery (SMA) and superior mesenteric vein (SMV) on sonographic examination. The “whirlpool sign” on Color Doppler sonography was recognized as characteristic of malrotation with volvulus. The degree of rotation of the SMV winding around the SMA was also measured by sonography. Surgery was performed in patients with sonography-diagnosed malrotation. A total of 39 patients were sonographically diagnosed with malrotation, which was subsequently confirmed by surgery. The sensitivity and positive predictive value of the sonographic diagnosis were both 100% (39/39). The sensitivity, specificity and accuracy of the “whirlpool sign” for the detection of midgut volvulus were 95.2% (20/21), 88.9% (16/18), and 92.3% (36/39), respectively. Greater degrees of rotation (equal to or greater than 720°) were associated with a higher risk (odds ratio, 5.0; P < .01) of intestinal necrosis. Sonography is more accurate in diagnosing suspected malrotation than the upper gastrointestinal contrast study. The specific sonographic “whirlpool sign” related to volvulus may be used as a potential indicator of intestinal necrosis. In addition, sonography can exclude malrotation and may help in the diagnosis of other diseases, such as annular pancreas and duodenal atresia.
Introduction
Intestinal malrotation is one of the most common embryonic malformations of the digestive tract. Malrotation with volvulus is a potentially life-threatening condition. [1] Sudden onset of bilious vomiting with a flat abdomen in the neonatal period strongly suggests the diagnosis of malrotation. One of the consequences of malrotation is midgut volvulus. Neonates with midgut volvulus usually present with this complication within 7 days after birth, and up to 80% of patients with midgut volvulus develop it within the first month of life. [2] Early diagnosis of malrotation is critical, as delayed diagnosis can lead to necrosis of the midgut and mortality. [3][4][5] In the past, the diagnosis of malrotation relied mainly on gastrointestinal (GI) contrast imaging, but the radiologic results are not always reliable. [6] Sonography is more valuable for the accurate diagnosis of malrotation than an upper gastrointestinal contrast study. [7] Currently, sonography is considered the preferential tool for the evaluation of suspected malrotation in children. [8][9][10] Malrotation can be diagnosed by sonography, on which inverse orientations of the superior mesenteric artery (SMA) and the superior mesenteric vein (SMV) are characteristic signs of malrotation. In addition, the “whirlpool sign” is an imaging characteristic of midgut volvulus and has a high predictive value for volvulus. [8][9][10] Malrotation or malrotation with volvulus can be effectively diagnosed based on these characteristics. [8][9][10] Ultrasound diagnosis has the advantage of no radiation exposure. The rapid and accurate diagnosis of volvulus [11] by ultrasound helps to establish the diagnosis of malrotation in time and thus allows urgent surgical intervention to avoid bowel necrosis. However, not all cases of malrotation have abnormal SMV/SMA orientations on ultrasound images. [12] On the one hand, malrotation can occur even when the mesenteric vessels are oriented normally. [1,12,13] On the other hand, not all patients with a “whirlpool sign” were found to have malrotation. [7] The aim of this study is to evaluate the potential value of sonography in identifying or ruling out malrotation or volvulus in neonates, and to predict intestinal necrosis by the degree of rotation of the superior mesenteric vessels.
Patients
This prospective study recruited 83 infants (male: 45; female: 38) aged from 1 to 31 days with clinically suspected malrotation at Shandong Qianfoshan Hospital between February 2013 and July 2016. All patients presented with acute clinical symptoms suggestive of malrotation, such as irritability, poor feeding, and bilious vomiting. Malrotation was diagnosed immediately after scanning by a pediatric sonographer, who was blinded to other examination findings. Patients with a history of abdominal surgery or pyloric muscle hypertrophy were excluded. The research was approved by the Hospital Institutional Clinical Research Ethics Committee. Informed parental consent forms were signed for all patients.
Ultrasonographic examination and the upper gastrointestinal contrast study
The sonographic examinations of all recruited infants were performed by a pediatric sonographer with 3 years of pediatric sonography experience, using a Philips iU22 scanner (Philips Medical Corporation, Hudson, WI) with a C5-1 MHz curvilinear transducer (Philips Medical Corporation) or an L12-5 MHz linear array transducer (Philips Medical Corporation). Briefly, for clear visualization, 50 mL of water was instilled via a nasogastric tube before the sonographic examination. During water instillation, the patients were placed in the right lateral decubitus position to allow water to flow into the duodenum as soon as possible. The patients were then placed in the supine position for the ultrasonographic examination. The antropyloric region and duodenum were first scanned longitudinally, followed by transverse scanning of the descending duodenum lateral to and below the pancreatic head and of the more distal intestine. The positions of the superior mesenteric vessels, collapse of bowel loops, and other abnormalities were recorded. Color Doppler imaging was used to identify the SMA and SMV and to determine whether a “whirlpool sign” was present. Malrotation was recorded if an inverse orientation of the SMA and SMV was shown; malrotation and midgut volvulus were ruled out if none of the above features was found. The diagnosis of malrotation or midgut volvulus was confirmed by surgical exploration. Intestinal necrosis was also predicted from the degree of clockwise rotation of the SMV winding around the SMA. Finally, the rest of the abdomen was scanned to exclude other abnormalities.
The upper gastrointestinal contrast study of all recruited infants was performed by a radiological specialist with 3 years of experience in pediatric abdominal radiology. Malrotation was indicated by a duodenojejunal flexure to the right of the vertebral bodies on upper GI images.
Other imaging examinations
For infants who did not undergo surgery, other imaging examinations (such as barium enema or computed tomography (CT)) were performed. CT examination was applied to critically ill patients. Malrotation was excluded if no sign indicating malrotation was shown by these examinations.
Statistical analysis
SPSS software (version 13.0) was used for all data analysis. Data were analyzed using the Chi-square test. A P-value < .05 was considered statistically significant.
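To make the described analysis concrete, the following minimal Python sketch shows how a 2×2 Chi-square test of this kind can be run with scipy; the counts are illustrative placeholders, not the study's data:

```python
# Minimal sketch of a 2x2 Chi-square test (an SPSS-free equivalent of the
# analysis described above). The counts are placeholders, not study data.
from scipy.stats import chi2_contingency

table = [[20, 5],   # group 1: outcome present / outcome absent
         [8, 17]]   # group 2: outcome present / outcome absent

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.3f}, p = {p:.4f}, dof = {dof}")
if p < 0.05:
    print("Difference is statistically significant at the 0.05 level.")
```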
Diagnostic results of malrotation
Of the 83 infants, 39 had malrotation as diagnosed by sonographic examination, and the results were subsequently confirmed by surgery. Six infants were eventually excluded because sonography and/or surgery demonstrated other abnormalities, including annular pancreas (2 cases), duodenal atresia (1 case), duodenal stenosis (2 cases), and descending duodenal web (1 case). The remaining 38 infants were finally diagnosed as non-malrotation by barium enema or CT examination (Table 1).
The results of the sonographic examination, upper gastrointestinal barium study, and surgery were compared; the results are shown in Table 2. The ultrasonographic characteristic of inversion of the SMA and SMV identified all 39 patients with malrotation that were later surgically confirmed, with a sensitivity of 100%. As shown in Fig. 1, the SMV (thick arrow) appeared to the left of the SMA (thin arrow). In contrast, a positive upper gastrointestinal contrast study identified only 36 of the 39 patients with malrotation, with a sensitivity of 92.3%. In the remaining 3 infants, the duodenojejunal flexure was not shown clearly because obstruction prevented the contrast agent from passing the horizontal or descending part of the duodenum. The negative predictive value (NPV) of the upper gastrointestinal barium diagnosis was 93.6% (44/47). Nevertheless, the positive predictive value (PPV) and specificity of both the sonographic diagnosis and the upper gastrointestinal barium diagnosis were 100% (36/36) and 100% (44/44), respectively.
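For readers who wish to re-derive these figures, the standard definitions can be applied directly to the counts reported above. The sketch below (Python) uses the upper gastrointestinal contrast results, TP = 36, FN = 3, TN = 44, FP = 0; for sonography the corresponding counts would be TP = 39 and FN = 0:

```python
# Diagnostic-performance formulas applied to the counts reported above.
def diagnostic_metrics(tp, fp, tn, fn):
    return {
        "sensitivity": tp / (tp + fn),                         # true-positive rate
        "specificity": tn / (tn + fp),                         # true-negative rate
        "ppv": tp / (tp + fp) if (tp + fp) else float("nan"),  # positive predictive value
        "npv": tn / (tn + fn) if (tn + fn) else float("nan"),  # negative predictive value
    }

# Upper GI contrast study: sensitivity = 36/39 ~ 0.923, NPV = 44/47 ~ 0.936.
print(diagnostic_metrics(tp=36, fp=0, tn=44, fn=3))
```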
Diagnostic results of volvulus
We investigated the accuracy of sonographic diagnosis of volvulus in the 39 patients with malrotation. Of them, 21 cases were complicated by volvulus, as confirmed by surgery. The “whirlpool sign” on sonography was observed in 22 cases (Table 3), corresponding to a clockwise wrapping of the SMV around the SMA. The SMV (thick arrow) can be seen encircling the SMA (thin arrow; Fig. 2A), and the sonogram showed the “whirlpool sign” with the SMV (thick arrow) surrounding the SMA (thin arrow; Fig. 2B). Dilation of the stomach (STO) and the duodenum (DU) was also observed (Fig. 2C). Two false-positive and 1 false-negative cases of volvulus were detected by the “whirlpool sign.” In the 2 false-positive cases, the sign was caused by an upper jejunal diverticulum in 1 case, and in the other child a postoperative intussusception was noted (Table 3). Additionally, the false-positive rate (9.1%; 2/22) and false-negative rate (5.9%; 1/17) were low.
Table 2. Comparison of sonography, upper gastrointestinal barium, and surgery results for the diagnosis of the 83 neonates with suspected malrotation.
Sonographic diagnosis of the number of twists and intestinal necrosis
Sonography also allowed an accurate estimation of the number of twists of the SMV around the SMA, which ranged from 270° to 1080° (Table 4). The degrees of rotation determined by sonographic diagnosis had high accuracy, and 360° was the most commonly detected twist (diagnostic rate 100%). Although only 1 case had a twist of 1080°, sonography also accurately diagnosed this rotation (Table 4). However, sonography was unable to accurately diagnose the patient with a 270° twist. We found that 21 infants with malrotation had volvulus confirmed by operation; among them, 6 patients had rotations equal to or greater than 720° and 15 patients had rotations less than 720° (Table 5). The incidence of intestinal necrosis in patients with different rotation degrees is also shown in Table 5. The incidence of intestinal necrosis in cases with rotations equal to or greater than 720° was 33.3% (2/6), whereas the incidence in cases with rotations less than 720° was 6.7% (1/15). However, the Chi-square test showed no statistically significant relationship between the degrees of rotation and intestinal necrosis.
Table 3. Whirlpool sign of sonography with or without volvulus in neonates with malrotation (n = 39).
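As a hedged re-check of this comparison, the cell counts quoted above (2/6 necrosis at rotations of 720° or more vs. 1/15 at less than 720°) can be fed to Fisher's exact test, which is often preferred over the Chi-square test for such small counts; note that an unadjusted odds ratio computed from these raw counts need not match the estimate quoted in the abstract:

```python
# Fisher's exact test on the rotation-degree vs. necrosis counts reported
# above (Table 5): >=720 deg -> 2 necrosis / 4 without; <720 deg -> 1 / 14.
from scipy.stats import fisher_exact

table = [[2, 4],    # >=720 deg: necrosis, no necrosis
         [1, 14]]   # <720 deg : necrosis, no necrosis
odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.1f}, p = {p_value:.3f}")
# Neither this test nor the Chi-square reaches p < 0.05 on these counts,
# consistent with the non-significant result reported in the text.
```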
Discussion
Malrotation is a congenital abnormality that induces upper gastrointestinal obstruction in neonates and requires emergent operation to prevent the catastrophic complications of volvulus and bowel necrosis. [14] Thus, early diagnosis of intestinal malrotation is of great importance. In the past, the upper gastrointestinal examination was the preferred examination for the diagnosis of intestinal malrotation, although it had the drawbacks of low sensitivity, radiation exposure, and difficult positioning. [9] In addition, barium enema is no longer a routinely used diagnostic method for malrotation, because approximately 20% of children with malrotation have a normally positioned cecum. [15] In contrast, ultrasonography has the potential benefits of portability, absence of radiation exposure, and applicability to critically ill children who might be too sick to undergo a gastrointestinal contrast study. Therefore, ultrasonography plays an important role in neonatal care in the intensive care unit. In recent years, ultrasonography has been introduced as an alternative method for the diagnosis of malrotation, with an emphasis on the relationship of the SMV and SMA and the so-called “whirlpool sign” in cases of volvulus. Studies have reported that sonographic findings of abnormal relative positions of the SMA and SMV have high sensitivity in diagnosing intestinal malrotation. [16,17] Inversion of the superior mesenteric vessels confirms the diagnosis of malrotation. Studies have suggested that inversion of the SMV (i.e., the SMV to the left of the SMA) is found in 100% of malrotation cases. [1,5] In this study, all surgically confirmed diagnoses had a left-sided SMV. The main advantages of ultrasound are noninvasiveness and simplicity. [13] Alehossein et al [9] reported that sonographic accuracy eliminated the need for further diagnostic tests. In this study, 6 patients with other diseases were also successfully detected by ultrasonography.
The main features of malrotation on an upper gastrointestinal study include a duodenojejunal flexure to the right of the midline. In this study, the upper gastrointestinal contrast study failed to identify malrotation in 3 infants. This failure may have occurred because the contrast agent did not pass the horizontal or descending part of the duodenum due to obstruction.
The sonographic “whirlpool sign” is sensitive in detecting volvulus, [16] with a reported sensitivity of 86%, specificity of 92% and PPV of 89%. [8] The specificity and NPV of the “whirlpool sign” in midgut volvulus detection have been reported as 99% and 97.1%, respectively. [10] The “whirlpool sign” on sonography is therefore considered a reliable indicator of the presence of volvulus. Consistently, our study showed that the “whirlpool sign” could detect volvulus with a sensitivity, specificity and accuracy of 95.2%, 88.9%, and 92.3%, respectively. Our results also showed that this detection method had a high NPV (94.1%). Importantly, midgut volvulus might cause intestinal necrosis, which could be used as an indication for surgical intervention. [5,18,19] It should be noted that not all “whirlpool signs” can be considered cases of volvulus [8,14] and other causes can contribute to a “whirlpool sign” [20]; meanwhile, the absence of a “whirlpool sign” cannot reliably exclude volvulus either. In this study, a whirlpool sign was observed in 2 patients in whom volvulus was excluded by surgery. Besides, although Karmazyn et al [21] suggested that a normal sonographic finding definitively ruled out the danger of midgut volvulus, we found that 1 patient with a normal ultrasound was actually suffering from midgut volvulus, as confirmed by CT scanning and surgery. Sometimes, further radiographic contrast studies are required as a supplement to ultrasound diagnosis. [9,22]
Ultrasonography also allowed an accurate estimation of the number of twists of the SMV. However, sonography was unable to accurately diagnose twists with rotations less than 270°. This may be because twists with rotation degrees lower than 270° are hard to detect, and the experience of the examiner may also play a role. We also found that a twist of 360° was the most frequently observed twist in the 21 patients, and patients with twists equal to or greater than 720° tended to have intestinal necrosis (33.3%; 2/6). In contrast, of the 15 patients with twists less than 720°, only one case of intestinal necrosis was identified (incidence: 6.7%). However, there was no statistically significant relationship between the degrees of rotation and intestinal necrosis. We speculate that this is due to the limited number of patients, and future studies with a larger sample size are needed. Our results indicate that sonography can accurately diagnose degrees of rotation to inform surgical intervention and avoid intestinal necrosis.
Table 4. Diagnosis of the degrees of rotation by ultrasound in neonates with whirlpool sign.
Number | Degrees of rotation (°) | Sonographic diagnosis cases (n = 22) | Surgically proved cases (n = 21) | Accuracy (%)
1 | 270 | 0 | 1 | 0
2 | 360 | 10 | 10 | 100
3 | 540 | 5 | 4 | 80
4 | 720 | 6 | 5 | 83
5 | 1080 | 1 | 1 | 100
The main limitations of this study are the short follow-up period and the small sample size. Thus, we cannot conclude that all negative sonographic findings and upper gastrointestinal contrast findings are true-negative results. In many patients, malrotation is not diagnosed until adulthood; [14,23] thus, false-negative sonographic results may have occurred in some patients.
Conclusions
In conclusion, our study shows that sonography may be more valuable for the accurate diagnosis of suspected malrotation (inversion of the SMA and SMV) than upper gastrointestinal contrast studies. The upper gastrointestinal contrast studies have lower diagnostic accuracy and a slightly higher false-negative rate. The specific sonographic “whirlpool sign” can be used to evaluate degrees of rotation; moreover, sonography can exclude malrotation and provide additional diagnostic information, such as annular pancreas and duodenal atresia. | 2018-04-03T00:00:34.947Z | 2017-10-01T00:00:00.000 | {
"year": 2017,
"sha1": "0a03605ae8c0e22938c1984c1a711fc2d4e77736",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1097/md.0000000000008287",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "0a03605ae8c0e22938c1984c1a711fc2d4e77736",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
6919380 | pes2o/s2orc | v3-fos-license | Physiological effects of autotoxicity due to DHAP stress on Picea schrenkiana regeneration
Picea schrenkiana, one of the most important zonal vegetation types, is an endemic species of Central Asia. The natural regeneration of P. schrenkiana is a long-standing problem troubling scientists. The autotoxicity of 3,4-dihydroxy-acetophenone (DHAP) was found to be a causative factor in the failure of P. schrenkiana natural regeneration. The effects of different concentrations of DHAP on root cell viability, antioxidant enzyme activities and phytohormone levels of P. schrenkiana were examined to disclose the physiological mechanism of DHAP autotoxicity. It was observed that high concentrations of DHAP inhibited seed germination and seedling growth, but had a hormetic effect at low concentrations. Analyses showed that root cells significantly lost viability when treated with high concentrations of DHAP. The enzyme activities of seedlings were significantly stimulated by treatment with 0.5 mM DHAP, giving a transient increase, and then decreased as the DHAP concentration increased to 1.0 mM, except for GR (glutathione reductase), whose activity was little affected by DHAP treatment. Compared with the control, an increase in the levels of the phytohormones ZT (zeatin), GA3 (gibberellic acid) and IAA (indole acetic acid) was induced by treatment with DHAP at low concentrations (0.1–0.25 mM), but a significant deficiency was found at high concentrations (0.5–1.0 mM). In addition, the ABA (abscisic acid) level increased in all experimental observations. These results suggest that DHAP significantly affects growth and physiological indices, and they provide new information about the differential effects of DHAP on P. schrenkiana.
Introduction
Plant recruitment plays a central role in plant populations and community dynamics [1]. Plant recruitment can be influenced by several factors, including light, nutrients, water, understory vegetation and predation [2][3][4], and also by chemically mediated interference (allelopathy) [5]. Higher plants generally release one or more bioactive chemicals into the environment that mediate interactions between plants with either stimulatory or inhibitory influences, a phenomenon known as allelopathy [6], a term first put forward to describe the effect of ethylene on fruit ripening from a physiological perspective [7]. Allelopathy is usually interspecific [8][9], but it may also occur within the same species, which is called autotoxicity [10]. In forest ecosystems, many examples of autotoxicity exist in coniferous trees [11][12][13][14][15][16][17]. Autotoxicity has been identified as a potential functional process that can influence early recruitment, including germination and seedling growth, with emphasis on the natural regeneration of Pinus halepensis [15]. A spruce-specific metabolite, p-hydroxyacetophenone, was isolated from spruce throughfall and the organic layer and showed negative effects on the root elongation of spruce seedlings [14]. Previous studies suggested that autotoxicity could result in inhibition of seedling growth, delayed germination and limited offspring [18]. Such regulation could reduce the intensity of intraspecific competition and the damage to the fitness of the dominant members of a population [19]. For these reasons, autotoxicity has been argued to be a cause of forest regeneration failure [20].
Allelochemicals are excreted from plants as products of secondary metabolism and accumulate in plants, soils and other organisms [21]. In later research, this biotic stress was termed allelochemical stress [22], whereby allelochemicals negatively affect growth and trigger a series of morphological and physiological variations in the target plants.
It has been suggested that plants produce large numbers of reactive oxygen species (ROS) in response to allelochemical stress. In response to such ROS, allelochemicals extracted from cucumber have been proposed to significantly activate antioxidant mechanisms such as peroxidase (POD) and superoxide dismutase (SOD) [5]. The effects of methanolic extracts of Phytolacca latbenia on the activities of antioxidant enzymes such as POD, SOD and catalase (CAT) in germinating seeds of Brassica napus and Triticum aestivum were also examined, indicating that the activities of POD and SOD significantly decreased, whereas CAT activity presented a linear increase in both tested seeds with increasing concentration of allelochemicals [23]. Additionally, plant hormones regulate several aspects of plant growth and development in response to multiple abiotic and biotic stresses [24][25][26][27]. Changes in plant hormone levels caused by allelochemicals have also been reported. For example, methanol extracts of Lepidium draba were found to increase the ABA level and significantly decrease the GA3 level in corn and redroot pigweed [28], and an aqueous leachate of Sicyos deppei caused a higher content of ABA throughout tomato germination [29]. However, the physiological mechanism of autotoxicity remains to be elucidated.
Schrenk spruce (Picea schrenkiana), a typical boreal forest species, is mainly distributed on the northern and southern slopes of the Tianshan Mountains and the northern slope of the western Kunlun Mountains in China. P. schrenkiana has received much attention in ecology because it plays an important role in water and soil conservation and in maintaining ecosystem balance. However, the natural regeneration of P. schrenkiana has been in jeopardy [30]. It has been hypothesized that secondary metabolites released by litter and root secretion accumulate around the rhizosphere owing to fire suppression, causing an autotoxic effect on the regeneration of P. schrenkiana. 3,4-Dihydroxy-acetophenone (DHAP) was proved to be the major allelochemical in P. schrenkiana needles and litter [30][31][32]. Under natural forest conditions, the dry soil of a mature P. schrenkiana forest contained 0.51 mg/g DHAP, and the concentration of DHAP would be 0.224 mM if 0.51 mg DHAP were dissolved in 15 mL of snow or rain water [32]. According to previous studies, DHAP stress can be considered a biotic stress on P. schrenkiana. In this work, therefore, a series of experiments covering root cell viability, antioxidant enzyme activities and plant hormone contents were designed and conducted to explore the physiological mechanism of P. schrenkiana treated with different concentrations of DHAP.
Materials and methods
Chemicals, plant material and reagents
DHAP was isolated in our laboratory [31][32]. Seeds were collected from pure stands of P. schrenkiana located at the forest farm of Xinjiang Agricultural University (2,198 m; 43°22′58″N, 86°49′33″E) in September 2014. All seeds were selected from healthy plants without infection and stored at 4 °C. All other chemicals and solvents, of analytical grade, were purchased from commercial sources.
DHAP treatment on seed germination
One hundred seeds were placed in plastic boxes (12×12 cm) lined with two layers of filter paper, in five replicates, and 10 mL of DHAP solution at a concentration of 0, 0.1, 0.25, 0.5 or 1.0 mM was added to each box. Seeds were incubated in an artificial intelligence simulation incubator under a 16/8 h (day/night) photoperiod with a photon flux density of 40 μmol·m⁻²·s⁻¹ at a day/night temperature of 12/4 °C. Seeds were considered germinated when the radicle emerged. The germination rate was calculated after 15 days, and germination vigor was calculated on the tenth day of DHAP treatment. Seeds undergoing germination at 3 days were selected for the determination of antioxidant enzyme activities and endogenous hormone levels.
DHAP treatment on seedling growth
P. schrenkiana seeds were pre-germinated in plastic boxes lined with filter paper until radicle emergence. Subsequently, 100 successfully germinated seeds were placed in Petri dishes, in five replicates, and 10 mL of DHAP (0, 0.1, 0.25, 0.5 or 1.0 mM) was added to each dish. Seedlings were grown under a 16/8 h (day/night) photoperiod with a photon flux density of 40 μmol·m⁻²·s⁻¹ at a day/night temperature of 14/6 °C. The radicle length of five seedlings randomly sampled from each Petri dish was measured with a vernier caliper (GB/T 1214.2-1996, Measuring Instrument Ltd., Shanghai), and the fresh weight of the P. schrenkiana seedlings was also recorded. These parameters were first measured after five days and then once every five days, for a total of 20 days. After the measurements of radicle length and fresh weight, the seedlings were used to determine antioxidant enzyme activities and endogenous hormone levels.
Root cell viability of seedlings
The viability of P. schrenkiana root cells was assessed by double staining with fluorescein diacetate (FDA) and propidium iodide (PI) [33]. Root tissues (0.1-1 cm in length from the tip) were excised from intact P. schrenkiana seedlings treated with 0, 0.1, 0.25, 0.5 and 1.0 mM DHAP. The root tissues were stained with a mixture of 12.5 μg·mL⁻¹ FDA and 5 μg·mL⁻¹ PI for 10 min at room temperature in the dark and then washed with distilled H2O. The stained root tissues were observed and photographed using a fluorescence microscope (Nikon E600 with a B-2A filter, excitation 450-490 nm, emission at 520 nm; Nikon Corp., Tokyo, Japan).
Evans blue staining was used as a second method to evaluate cell viability [34]. Intact P. schrenkiana seedlings were treated with different concentrations of DHAP for 3, 6 and 9 days. After the roots were washed with distilled H2O, seedling root tips (0.1-1 cm from the tip) were stained in a 0.25% (w/v) aqueous solution of Evans blue for 1 h at 30 °C in the dark. Thereafter, the stained roots were washed with distilled H2O for 10 min and then extracted with N,N-dimethylformamide, without grinding, for 24 h at 30 °C in the dark. Finally, the absorbance of the released Evans blue was measured at 600 nm using a spectrophotometer (Beckman DU 640; Beckman Coulter Inc., Fullerton, CA, USA).
Assay of antioxidant enzyme activities in P. schrenkiana
Tissues (0.1 g) were weighed and ground in 1 mL of phosphate buffer (50 mM, pH 7.8) containing 1 mM EDTA and 2% (w/v) polyvinylpyrrolidone (PVP) using a chilled mortar and pestle. The homogenate was filtered twice and centrifuged at 10,000 r·min⁻¹ for 20 min at 4 °C, and the clear supernatant was then used to determine the antioxidant enzyme activities, with the exception of APX activity. For measuring APX activity, the tissue was homogenized in phosphate buffer (50 mM, pH 7.8) supplemented with 2 mM ascorbate, 1 mM EDTA and 2% (w/v) PVP. A parallel control was run in which distilled H2O was used instead of enzyme extract. All spectrophotometric analyses were conducted at 25 °C on a Shimadzu UV/visible spectrophotometer.
Superoxide dismutase (SOD) activity was determined based on the inhibition of the photochemical reduction of nitroblue tetrazolium (NBT), according to Giannopolitis and Ries [35]. The reaction mixture (6.6 mL) consisted of 3 mL of 50 mM phosphate buffer (pH 7.8), 0.6 mL of 130 mM methionine, 0.6 mL of 750 μM NBT, 0.6 mL of 20 μM riboflavin, 0.2 mL of enzyme extract, 0.6 mL of 0.1 mM EDTA and 1 mL of distilled H2O. The reaction was conducted at 25 °C under 4,000 lx for 15 min. After illumination, the absorbance of the solution was measured at 560 nm. One unit of SOD activity was defined as the amount of enzyme that caused 50% inhibition of NBT reduction.
Peroxidase (POD) activity was detected by the guaiacol method [36]. The reaction mixture (4 mL) included 0.1 mL of enzyme extract, 1.9 mL of 50 mM phosphate buffer, 1 mL of 50 mM guaiacol solution and 1 mL of 2% H2O2. The increase in absorbance at 470 nm due to guaiacol oxidation was recorded at 30 s intervals for up to 2 min. One unit of POD activity was defined as the amount of enzyme causing an increase of 0.01 in absorbance at 470 nm per min [37].
Catalase (CAT) activity was measured from the rate of H2O2 decomposition, indicated by the decrease in absorbance at 240 nm, following the procedure of Lee et al [38]. The reaction mixture (3 mL) contained 100 μL of enzyme extract and 2.9 mL of 50 mM phosphate buffer containing 10 mM H2O2. One unit of CAT activity was calculated as the amount of enzyme causing a decrease of 0.01 in absorbance at 240 nm per min [37].
Ascorbate peroxidase (APX) activity was determined according to Nakano and Asada [39]. The reaction mixture (3 mL) was composed of 2.5 mL of 50 mM phosphate buffer (containing 0.5 mM ascorbate), 50 μL of 6 mM H2O2 and 450 μL of enzyme extract. The hydrogen peroxide-dependent oxidation of ascorbate was followed by the decrease in absorbance at 290 nm. One unit of APX activity was expressed as 1 mM ascorbate oxidized per min.
Guaiacol peroxidase (GPX) was determined by the modified method described by Cakmak and Marschner [40]. The reaction, including 1 mL of 50 mM phosphate buffer, 400 μL of 1 mM guaiacol (containing 2.5 mM NaN3), 200 μL of 1.5 mM H2O2 and 400 μL of enzyme extract, was carried out at 37 °C for 5 min, and the absorbance at 412 nm was recorded. One unit of GPX was defined as the amount of enzyme causing an increase of 1 in absorbance per min.
Glutathione reductase (GR) activity was assayed by following the GSSG-dependent oxidation of NADPH [41]. The reaction mixture (3 mL) contained 450 μL of enzyme extract, 2.34 mL of 50 mM phosphate buffer, 60 μL of 10 mM NADPH and 150 μL of 10 mM GSSG. The decrease in absorbance at 340 nm was monitored for 2 min. One unit of GR activity was expressed as 1 μM NADPH oxidized per min.
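To illustrate how these unit definitions translate raw absorbance readings into activities, the sketch below applies the POD definition given above (1 U = an increase of 0.01 in A470 per minute); the normalization to fresh weight is an assumption added here for illustration and is not specified in the text:

```python
# Converting an absorbance time course into POD activity units.
# Definition used above: 1 U = 0.01 increase in A470 per minute.
def pod_activity(delta_a470, minutes, fresh_weight_g=0.1):
    """Return POD activity in U per gram fresh weight (normalization assumed)."""
    units = (delta_a470 / minutes) / 0.01  # 0.01 dA470/min per unit
    return units / fresh_weight_g

# e.g., A470 rising by 0.24 over 2 min from a 0.1 g tissue extract:
print(pod_activity(delta_a470=0.24, minutes=2))  # -> 120.0 U per g FW
```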
Assay of phytohormone levels in P. schrenkiana
Samples (0.1 g) were frozen in liquid nitrogen and immediately ground to a powder. Then 200 μL of cold 80% methanol (containing 1 mM BHT as an antioxidant) was added, and the homogenate was incubated at 4 °C in the dark for 12 h. After that, the homogenate was centrifuged at 10,000 r·min⁻¹ for 20 min at 4 °C. The supernatants were passed through Chromosep C18 columns prewashed with 80% methanol. The hormone fractions were dried under N2, dissolved in 2 mL of mobile phase and filtered through a 0.22 μm membrane for analysis. Chromatographic analysis was performed on an Agilent 1290 UPLC (ultra-high performance liquid chromatography) system with a C18 reversed-phase column (Eclipse Plus C18, 2.1×150 mm, 1.8 μm; Agilent, Santa Clara, CA, USA) maintained at 30 °C. A diode array detector was monitored at 254 nm. Elution with solvent A (methanol/acetonitrile, 5:95) and solvent B (water/acetonitrile, 5:95) in a step-gradient manner at a flow rate of 0.5 mL·min⁻¹ was carried out as follows: 0-1 min, 25% A; 1-4 min, 25%-45% A; 4-8 min, 45% A. The sample injection volume was 0.3 μL. Phytohormone concentrations (μg·g⁻¹ fresh weight) were automatically calculated from peak areas by the software, using authentic standards run with the samples.
Statistical analyses
All results are presented as the mean ± standard error of five replicates. All data were statistically analyzed using SPSS software (IBM, New York, USA). Relationships were considered significant at p < 0.05. When one-way ANOVA showed significant differences at the 0.05 level, the LSD (least significant difference) test was used for multiple comparisons among the different treatments.
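The ANOVA-plus-LSD workflow described here can also be reproduced outside SPSS. The following Python sketch implements the LSD comparison explicitly from the pooled within-group mean square; the measurement arrays are invented placeholders, not the study's data:

```python
# One-way ANOVA followed by LSD multiple comparisons (placeholder data).
import numpy as np
from scipy import stats

groups = {
    "0 mM":   np.array([10.1, 9.8, 10.4, 10.0, 9.9]),
    "0.1 mM": np.array([12.3, 12.8, 11.9, 12.5, 12.1]),
    "1.0 mM": np.array([5.2, 4.9, 5.5, 5.1, 5.0]),
}

f_stat, p = stats.f_oneway(*groups.values())
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p:.4g}")

if p < 0.05:
    # LSD: pairwise t-tests based on the pooled within-group mean square.
    data, names = list(groups.values()), list(groups)
    n_total, k = sum(len(g) for g in data), len(data)
    mse = sum(((g - g.mean()) ** 2).sum() for g in data) / (n_total - k)
    t_crit = stats.t.ppf(0.975, df=n_total - k)
    for i in range(k):
        for j in range(i + 1, k):
            lsd = t_crit * np.sqrt(mse * (1 / len(data[i]) + 1 / len(data[j])))
            diff = abs(data[i].mean() - data[j].mean())
            verdict = "significant" if diff > lsd else "not significant"
            print(f"{names[i]} vs {names[j]}: |diff| = {diff:.2f}, LSD = {lsd:.2f} -> {verdict}")
```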
Effects of DHAP on seed germination
The effects of DHAP on the germination of P. schrenkiana seeds were measured by germination rate and germination vigor (Fig 1). In comparison with the distilled-water control, DHAP at 0.1 mM had a stimulatory effect on seed germination: the germination rate and vigor remarkably increased by 36% and 24%, respectively. At DHAP concentrations from 0.25 to 1.0 mM, the germination rate and vigor were inhibited; in particular, at 1.0 mM DHAP the germination rate and germination vigor significantly decreased by 65% and 53%, respectively, indicating a strong inhibitory effect.
Effects of DHAP on enzyme activities and phytohormone levels during seed germination
The seeds of P. schrenkiana were treated with different concentrations of DHAP, and the antioxidant enzyme activities and endogenous hormone levels of the seeds were determined after 3 days of treatment. The data in Table 1 show that after treatment with 0.5 mM DHAP, the activities of the antioxidant enzymes SOD and CAT significantly increased, by 58% and 65% over the control, respectively, whereas treatment with DHAP at a high concentration (1.0 mM) reduced SOD and CAT activities by 9% and 19% compared with the control. Unlike SOD and CAT, the activities of the antioxidant enzymes POD, GPX and GR tended to be stimulated as the DHAP concentration increased, with a significant increase of 56% in POD activity under treatment with 1.0 mM DHAP and obvious increases of 42% and 20% in GPX and GR activities at 0.5 mM. However, APX activity increased at low DHAP concentrations (0.1-0.25 mM) but decreased at high concentrations (0.5-1.0 mM). In general, the activities of all the antioxidant enzymes except APX were higher than the control at 0.5 mM DHAP, indicating that P. schrenkiana can mount a positive self-protective response at moderate DHAP concentrations, but shows a negative self-inactivation effect at the high DHAP concentration of 1.0 mM owing to the decay of antioxidant enzyme activities, which in turn affected seed germination.
To analyze the changes in the endogenous hormones of P. schrenkiana seeds, chromatograms of ZT, GA3, IAA, ABA and DHAP were obtained by UPLC; the calibration curves of ZT, GA3, IAA and ABA were linear, with R² values in the range 0.9995-0.9998, indicating good linearity. The UPLC analyses of the plant hormones suggested that DHAP at moderate concentrations significantly increased the levels of ZT, GA3 and IAA, but inhibited these levels at high concentrations. The highest levels of ZT, GA3 and IAA were observed at 0.25 mM DHAP, increased by 15%, 39% and 24% in comparison with the control, respectively, and the lowest levels of ZT, GA3 and IAA were found in the 1.0 mM DHAP treatment, reduced by 37%, 24% and 41%, respectively (Table 1). As an exception, the level of the hormone ABA increased under all DHAP treatments.
Effect of DHAP on seedling growth
Radicle length and fresh weight are commonly used as seedling growth parameters, and they were therefore adopted to determine the effect of DHAP on the seedling growth of P. schrenkiana (Fig 2). In general, DHAP at a low concentration (0.1 mM) had a significant hormetic effect on both radicle length and fresh weight in the early stage of seedling growth. Compared with the control, the increases in radicle length and fresh weight reached maxima of 67% and 58%, respectively, at 12 days of DHAP treatment. However, DHAP showed an inhibitory effect on the development of seedlings at higher concentrations; in particular, DHAP at 1.0 mM decreased the radicle length and fresh weight by 60% and 58% in comparison with the control after 12 days of treatment (Fig 2). These results implied that DHAP at high concentrations (≥0.25 mM) had an inhibitory effect on the development of P. schrenkiana seedlings, but a stimulatory effect at a low concentration (0.1 mM).
Effect of DHAP on cell viability in seedling roots
The cell viability in P. schrenkiana roots was determined by a double-staining method using FDA-PI (Fig 3). The double-staining analysis demonstrated that root tips treated with 0.1 mM DHAP still showed green fluorescence after 9 days, indicating that the cells were viable. Root tips treated with 0.25 mM DHAP showed green fluorescence after 6 days and reddish-brown fluorescence after 9 days, whereas reddish-brown fluorescence in root tips treated with 0.5 mM DHAP after 6 days clearly indicated cell death. Compared with the control, DHAP at 1.0 mM induced cell death after 3 days of treatment. Therefore, the damage to root tip cells caused by 0.1 and 0.25 mM DHAP was less than that caused by 1.0 mM DHAP, which was more serious. For further study, Evans blue staining, which quantifies the rate of cell death, was also adopted to determine the cell viability in P. schrenkiana roots (Table 2). DHAP at 0.5 and 1.0 mM significantly enhanced Evans blue uptake by the roots after 3, 6 and 9 days of treatment compared with the control, while the uptake by roots treated with 0.1 mM DHAP for 3 and 6 days was much lower than that of the control. Evans blue uptake in root cells treated with 0.25 mM DHAP showed a slight increase compared with the control. This indicates that DHAP at 1.0 mM has a higher capacity to induce root cell death than lower DHAP concentrations. In addition, the results of Evans blue staining coincided with those of FDA-PI staining of the root cells.
Effect of DHAP on enzyme activities of seedlings
The activities of the enzymes SOD and POD were monitored at 3, 6, 9 and 12 days of DHAP stress in P. schrenkiana seedlings (Fig 4a and 4b). Overall, the SOD activity of P. schrenkiana seedlings remained close to that of the control. As the degree of DHAP stress increased, SOD activity increased initially during the early days of growth, except at the 1.0 mM DHAP concentration, and reached a maximum at 6 days (a 30% increase) under 0.5 mM DHAP treatment (Fig 4a). Compared with the control, the SOD activity of seedlings treated with 1.0 mM DHAP increased initially and then declined. Like SOD activity, a significant increase of about 24% was also observed in POD activity after 6 days of growth under 0.5 mM DHAP treatment, and the enzyme activity decreased under 1.0 mM DHAP (Fig 4b). CAT activity increased in P. schrenkiana seedlings treated with DHAP during the early days of growth and reached a maximum at 9 days under 0.5 mM DHAP treatment, but decreased thereafter under 1.0 mM DHAP (Fig 4c). Under treatment with 0.5 mM DHAP, an obvious increase of about 1.1 times was observed in enzyme activity at 9 days compared with the control plants. The CAT activity of seedlings treated with 1.0 mM DHAP declined, although a transient increase was found at 3 days of treatment. The activities of SOD, POD and CAT were all increased under 0.5 mM DHAP treatment, indicating that moderate DHAP stress can increase the resistance of P. schrenkiana, whereas 1.0 mM DHAP inhibited the activities of these enzymes, reflecting reduced tolerance to high DHAP stress. APX and GPX play important roles in the H2O2-scavenging system; thus, we examined APX and GPX activities in P. schrenkiana seedlings subjected to DHAP stress during the early days of growth (Fig 4e and 4f). The APX activity of seedlings induced by 0.1 and 0.25 mM DHAP showed a transient increase at 3 days of treatment and then decreased over time compared with the control. Unlike APX activity, GPX activity was slightly increased during the first 12 days of growth. Significant increases in APX and GPX activities were detected under high DHAP concentrations (0.5 and 1.0 mM) after 3 days, reaching maxima after 6 days under 0.5 mM DHAP stress and decreasing thereafter; obvious elevations of about 25% and 29% in APX and GPX activities were observed compared with the control plants. Unlike the other antioxidant enzymes, GR activity changed little in the presence or absence of DHAP (Fig 4d).
Effect of DHAP on phytohormone levels in seedlings
The endogenous ZT, GA3, IAA and ABA levels of P. schrenkiana seedlings treated with different concentrations of DHAP are compared in Fig 5. A time-course study revealed that the phytohormone levels induced by DHAP increased at low concentrations during the first 12 days of growth, but were significantly inhibited at high DHAP concentrations. Obvious elevations of about 40% and 28% in the IAA level were observed in 9-day-grown seedlings at 0.25 and 0.1 mM DHAP, respectively. The levels of GA3 and ZT increased by 26% and 22%, respectively, in 3-day-grown seedlings at 0.25 mM DHAP. After 12 days of exposure to 1.0 mM DHAP, the seedlings exhibited significant decreases of 64%, 42% and 50% in IAA, GA3 and ZT levels. Moreover, seedlings exposed to 0.5 mM DHAP exhibited decreases of 52%, 34% and 28% in comparison with seedlings grown in the absence of DHAP after 12 days. Unlike the levels of IAA, GA3 and ZT, the ABA level increased with the duration of DHAP treatment. In addition, seedlings treated with 1.0 mM DHAP had significantly higher ABA levels than the control, 0.1 mM and 0.25 mM DHAP treatments (13.3-, 3.8- and 2.0-fold, respectively; p < 0.01).
Discussion
Some compounds have been found to exhibit concentration-dependent stimulatory or inhibitory effects on seedling growth [42][43][44]. For example, 4,8-dihydroxy-1-tetralone (4,8-DHT) isolated from Carya cathayensis had a hormetic effect at low concentrations but significantly inhibited the seedling growth of lettuce at high concentrations [44]. Needle-leached DHAP had a similar effect on some of the plants: DHAP promoted the seed germination and seedling growth of P. schrenkiana at low concentrations (≤0.1 mM), but caused significant inhibition at high concentrations (≥0.5 mM). In this investigation, 0.25 mM DHAP produced slight promotion and inhibition of the seed germination and seedling growth of P. schrenkiana, so it was hypothesized that 0.25 mM DHAP might be the inflection point at which the direction of action changes. This is close to the concentration of 0.224 mM found under natural forest conditions [32]. Thus, our investigation provides evidence of the phytotoxic potential of P. schrenkiana.
FDA-PI double staining is a rapid, convenient and reliable procedure for simultaneously determining cell viability [45][46][47]. To study the effect of DHAP on the viability of P. schrenkiana root cells, a double-staining experiment was conducted according to this procedure. FDA readily enters intact cells and undergoes hydrolysis by endogenous esterases to release free fluorescence [48]. PI readily enters cells with injured membranes and can be detected by its red fluorescence [48]; therefore, it is used to detect dead cells [49]. In this study, high concentrations of DHAP induced cell death in root tip cells after 3 days of treatment. Similar findings were reported in studies on the phytotoxic activities of L-DOPA exuded from Mucuna spp., which induced cell death [49].
Allelochemicals as biotic stressors exhibit a wide range of modes of action, from effects on DNA, photosynthesis, ion uptake and water balance to effects on the activities of antioxidant enzymes and plant hormones [5,28,50-52]. SOD, POD, CAT, APX and GPX are the major antioxidant enzymes [23,53-55], and GR plays an important role in maintaining a high GSH/GSSG ratio in plants [56]. Within a cell, SOD is considered the first line of defense against ROS, as well as a key antioxidant enzyme converting O2− into H2O2 and O2 [57]. Subsequently, both CAT and APX act to consume H2O2 [58]. In the present study, DHAP had different effects on the antioxidant enzyme activities of P. schrenkiana seedlings. The activities of all antioxidant enzymes except GR were significantly increased at a DHAP concentration of 0.5 mM during the early days of seedling growth. It can be concluded that a moderate concentration of DHAP increases the activities of antioxidant enzymes to help P. schrenkiana seedlings maintain ROS levels well below deleterious levels and so enhance the resistance of P. schrenkiana. These results agree with other studies that described antioxidant enzymes under allelochemical stress. It has been reported that low and medium doses of ginsenoside, isolated from ginseng, significantly stimulated the activities of SOD, POD and CAT in treated roots of American ginseng [59]. Likewise, ferulic acid increased antioxidant enzymes in maize seedlings [60], and benzoic acid did so in cucumber cotyledons [61]. That is a self-protective mechanism of plants in response to biotic and abiotic stresses. However, the activities of the antioxidant enzymes were decreased at the 1.0 mM toxicity level. A reduction in enzyme activities has also been observed in other studies on allelochemical modes of action; for instance, two allelochemicals isolated from leachates of Ageratina adenophora decreased POD and SOD activities in rice seedlings at high concentrations after 48 h of treatment [52]. It is speculated that the accumulation of ROS induced during severe DHAP stress goes beyond the clearance capacity of the antioxidant enzymes. Excessive ROS can induce cell damage, which in turn can induce the death of P. schrenkiana seedlings. In addition, previous research has shown that various allelochemicals can change the plant hormone levels of crops and weeds [28,62,63]. Evidence from physiological studies indicates that IAA, ZT and GA3 affect cell enlargement and balance plant growth [64][65][66]. The present study showed that different concentrations of DHAP affected the levels of IAA, ZT and GA3. It is probable that low concentrations of DHAP increased the levels of IAA, ZT and GA3 to promote the growth of seedlings, while high concentrations of DHAP inhibited the levels of IAA, ZT and GA3 and subsequently blocked extension growth. The radicle length and fresh weight were related to the contents of IAA, ZT and GA3 as affected by DHAP. This parallels results obtained with other abiotic and biotic stresses [67][68][69]. On the other hand, the level of ABA was significantly higher in DHAP-treated seedlings than in the control, indicating that elevated DHAP stress increased the ABA content, an adaptive process in response to DHAP stress. These results suggest that the endogenous hormones might have interactive effects on P. schrenkiana seedlings in responding and adapting to DHAP stress.
Thus, further study is needed to determine how endogenous hormones regulate the growth of P. schrenkiana seedlings under DHAP stress.
Conclusion
The present investigation suggests that DHAP, as an allelochemical, is one of the many possible factors contributing to the failure of P. schrenkiana natural regeneration. In our experiments, the DHAP isolated from P. schrenkiana was found to inhibit the germination, radicle elongation and fresh weight of P. schrenkiana at high concentrations and to have a hormetic effect at low concentrations. DHAP had distinct effects on the antioxidant enzyme activities and plant hormone levels of P. schrenkiana seedlings. Moderate concentrations of DHAP increased antioxidant enzyme activities, helping to maintain the balance between the production and scavenging of ROS, whereas the excessive ROS induced by high DHAP concentrations could inhibit the growth of P. schrenkiana seedlings. Moreover, DHAP induced significant cellular damage, which played a major role in the inhibition of radicle elongation and in tolerance to DHAP. In the present study, all experiments were carried out in a simulated environment; more research using pot and field experiments is needed for a better understanding of the autotoxic potential of P. schrenkiana under field conditions. In addition, other possible factors involved in the natural regeneration of P. schrenkiana, such as stand structure and the deterioration of soil physicochemical properties, require further evaluation.
Supporting information S1 File. Raw data underlying the findings of this paper. (XLSX) | 2018-04-03T05:01:02.327Z | 2017-05-08T00:00:00.000 | {
"year": 2017,
"sha1": "72a1997f38ef075cc8e70f63a5d25345d9267d02",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0177047&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "72a1997f38ef075cc8e70f63a5d25345d9267d02",
"s2fieldsofstudy": [
"Biology",
"Environmental Science"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
236429101 | pes2o/s2orc | v3-fos-license | Going Deeper into Semi-supervised Person Re-identification
Person re-identification is the challenging task of identifying a person across different camera views. Training a convolutional neural network (CNN) for this task requires annotating a large dataset, and hence, it involves the time-consuming manual matching of people across cameras. To reduce the need for labeled data, we focus on a semi-supervised approach that requires only a subset of the training data to be labeled. We conduct a comprehensive survey in the area of person re-identification with limited labels. Existing works in this realm are limited in the sense that they utilize features from multiple CNNs and require the number of identities in the unlabeled data to be known. To overcome these limitations, we propose to employ part-based features from a single CNN without requiring the knowledge of the label space (i.e., the number of identities). This makes our approach more suitable for practical scenarios, and it significantly reduces the need for computational resources. We also propose a PartMixUp loss that improves the discriminative ability of learned part-based features for pseudo-labeling in semi-supervised settings. Our method outperforms the state-of-the-art results on three large-scale person re-id datasets and achieves the same level of performance as fully supervised methods with only one-third of labeled identities.
Introduction
The person re-identification (re-id) task aims at matching images of the same person captured by non-overlapping surveillance cameras. Convolutional neural networks (CNN) have demonstrated good results on large-scale cross-camera annotated datasets of pedestrians [1,2,3,4,5,6,7]. However, annotating person identities in multiple camera views is a labor-intensive task in practical scenarios. Therefore, a more realistic setting called semi-supervised person re-identification has gained attention in recent years, where a subset of the data is annotated across cameras, and the rest of the data is used without labels.
Pseudo-labeling 1 is one of the key components in the recent semi-supervised learning approaches [8]. However, a critical challenge of pseudo-labeling for semi-supervised person re-id is that the number of identities is unknown in the unlabeled subset. Existing works aim to solve this problem by fixing the number of unlabeled identities during training [9,10], or by utilizing features from multiple CNNs to improve the quality of pseudo-labels [9,10,11]. These approaches are limited in the sense that making an assumption on the number of unlabeled identities is not feasible in practical scenarios. Moreover, ensembling multiple networks is computationally intensive, memory-wise and time-wise, at the training and test stage.
To overcome these limitations, we propose a pseudo-labeling method that does not require the number of unlabeled identities to be known/assumed. In contrast to network ensembling approaches, we propose to use a single model, which significantly reduces the space and time complexity. Motivated by the success of part-based features for supervised learning [12,4,3,13], we propose to employ part-based embeddings for pseudo-labeling in semi-supervised settings. We consider semantic parts of the image, and a pseudo-label is derived based on consensus clustering of parts embeddings. Our method does not require a sophisticated part detector and performs well with coarse part detection such as equal horizontal stripes [13] because images of people are mostly walking pedestrians. To the best of our knowledge, our method is the first to utilize embeddings of semantic parts to compute pseudo-labels.
The key challenge in learning embeddings with the triplet loss is mining hard triplets that contribute non-zero value to the loss [5]. To improve part-based embeddings' learning and discriminative ability, we propose a PartMixUp loss function that minimizes the distances between more difficult training samples compared to the triplet loss. We utilize the observation that for the pair of pedestrian images to be negative (images from different people), it is sufficient that only one semantic part is different. For example, if only the shoes are different, it is considered a different identity in person re-identification. Therefore, we create training pairs that contribute non-zero value by sampling an image of a person and its copy where some semantic parts are replaced with corresponding parts from a different random identity. We perform this operation on the embedding level and do not manipulate input images. The advantage of this technique is that it can be used on both labeled and unlabeled images, provided that identities are disjoint to avoid replacing a part from the same identity.
The main assumptions that we make in this work are as follows: (1) the labeled and unlabeled data comes from the same domain. This represents a real-world use case when only a subset of the collected data is labeled; (2) the labeled and unlabeled subsets have disjoint identities, representing a practical scenario where annotated and unlabeled images come from different time periods.
In summary, the main contributions of the paper are three-fold: • we are the first to conduct an in-depth survey of existing works in person re-id with limited annotations and identify their advantages and limitations; • we propose a novel method for semi-supervised person re-identification based on consensus clustering of embeddings for semantic parts that do not make an assumption about the number of identities in the unlabeled subset; • we introduce a PartMixUp loss for enhancing the learning of discriminative embeddings by mixing up embeddings of semantic parts.
The rest of the paper is organized as follows. We first conduct a comprehensive overview of person re-id from the perspective of the amount of annotated data in Section 2. Then we describe our proposed semi-supervised training method for the re-id task and PartMixUp loss in Section 3. Finally, we demonstrate the results of our experiments, evaluate the method's components and discuss ablation studies in Section 4.
Person re-identification is a wide research area that has been studied from various perspectives, including deep feature learning, ranking optimization and metric learning. Recent surveys [29,30,31] have conducted comprehensive overviews and systematized existing works in person re-id. Different from previous surveys, we specifically focus on the limited-labels scenario, which is a less studied research direction. We highlight the assumptions and limitations of previous methods that constrain their application to practical scenarios, and propose a method that overcomes the limitations of existing works. We review person re-id methods from various perspectives including: the level of supervision (e.g., amount of labeled data), pseudo-labeling strategies, the feature learning process (i.e., features learned from multiple or single networks), and whether or not they generate extra data for training. Finally, we emphasize the place of our work with respect to existing works in Figure 1.
Level of supervision
Due to the performance plateau of fully supervised methods for person re-id, the focus of the recent research has shifted to purely unsupervised, semi-supervised, one-shot per person, and intra-camera labels settings.
Unsupervised person re-id [20] is a challenging task due to the absence of target labels. Existing works utilize clustering to learn pseudo-labels [32], introduce learning soft similarity labels [28] and employ deep asymmetric metric learning [33]. However, purely unsupervised methods show inferior performance to methods with any amount of annotations.
Intra-camera labels (i.e., labels for images of a person within one camera) are an attractive scenario, as intra-camera labels are easy to obtain using tracking algorithms [34], while cross-camera labels require human effort. However, a model trained only on intra-camera labels tends to learn camera-specific features and fails to generalize across cameras [23]. To learn camera-agnostic features, Qi et al. [23] propose progressive learning of cross-camera soft labels, and Zhu et al. [25] design a method targeted at self-discovering the inter-camera identity correspondence. Another scenario is labeling one example per identity [26,27], assuming that a few images per identity are labeled and the rest of the training images are without labels.
Although the aforementioned scenarios eliminate the tedious inter-camera identity labeling process, they require at least one example per identity to be present in the labeled training subset. Thus, we focus our attention on a semi-supervised scenario where a small subset (e.g., 10-30%) of data is annotated with cross-camera and within-camera labels.
Pseudo-labeling strategies
Pseudo-labeling is the process of assigning labels to unlabeled examples, which has been successfully applied in classification tasks where there is a fixed list of class labels. Pseudo-labeling for person re-id is challenging as there exists an unknown number of identities in the unlabeled subset. Existing works utilize either k-means clustering [20,9,10] or kNN graphs [11,21], which work under the assumption that the number of clusters (i.e., the number of identities) is known in the unlabeled data. Unsurprisingly, the best performance is achieved when the number of clusters equals the number of true identities in the unlabeled data. Although the methods are robust to some variations in the number of assumed clusters, this requirement significantly limits their applicability in practical scenarios. In this work, we remove the requirement of prior knowledge about the number of clusters in the unlabeled data.
Feature learning process
Several works utilize ensembles of neural networks to learn multi-view features and obtain pseudo-labels for the unlabeled subset [9,10,11]. However, training multiple CNNs increases the usage of memory and computational resources. Moreover, ensembling methods are usually superior to single models, and as such, it is not clear how much of the performance gain comes from the method itself as opposed to using feature ensembling. Different from previous works, our approach learns embeddings and computes pseudo-labels by training a single CNN. Inspired by the successful usage of part-based features [3,13,7,35] in supervised person re-id, we employ part embeddings for assigning pseudo-labels in semi-supervised settings.
Generative frameworks
A compelling approach is to generate additional labeled data using Generative Adversarial Networks (GANs) [36]: Ding et al. [17] consider feature affinities between GAN-generated samples and labeled data to estimate labels, and Zheng et al. [37] propose an end-to-end joint learning framework for training the re-id and data generation tasks. However, challenges in generating images that depict the same identity in different poses limit the applicability of this method.
Another way to overcome the problem of scarcity of labeled data is employing a deep re-id model pre-trained on a labeled domain and transferring the knowledge to the label-scarce domain by reducing the domain discrepancy between the two domains [38]. While the aforementioned unsupervised domain adaptation strategy yields impressive performance [19,39,40], fully annotated large datasets with similar identities may not be available in many practical scenarios. Therefore, we focus on semi-supervised learning from only one domain to alleviate the need for an external dataset or identity annotation.
Pseudo-labeling via consensus clustering of semantic part embeddings
We employ part-level features and propose a consensus clustering of semantic part embeddings for pseudo-labeling in semi-supervised person re-identification. By clustering embeddings of semantic image parts, each image gets a list of cluster assignments $(c_1, c_2, \ldots, c_Q)$, where $Q$ denotes the number of semantic parts. The list $(c_1, c_2, \ldots, c_Q)$ can be seen as an encoded part description of a person. For example, if we consider a coarse partition into three body parts (head, upper body, legs), the cluster assignment can be interpreted as (dark hair, white top, black bottom). Embeddings of semantic parts are clustered independently, and pseudo-labels for images are determined based on the agreement between the parts' clusters.
Our method can be used with any convolutional model that outputs part-based embeddings, such as PCB [13], DPB [3] or KAE-Net [41]. The output of a model compatible with our method should be an array of part embeddings $[h_1, h_2, \ldots, h_Q]$, with $h_i \in \mathbb{R}^d$, where $Q$ is the number of parts and $d$ is the dimension of the embedding space for semantic parts.
The model is optimized in multiple pseudo-labeling iterations. In particular, the model is retrained at each iteration using both labeled data and a subset of unlabeled data with computed pseudo-labels. The following section explains training steps in one pseudo-labeling iteration.
Training steps in one pseudo-labeling iteration
At each pseudo-labeling iteration, a round of model training, clustering embeddings, and assigning pseudo-labels to the unlabeled subset is performed, as illustrated in Figure 2. Let $(X_L, Y_L)$ be the labeled data with corresponding labels and $X_U$ be the unlabeled data. $(X_{PL}, Y_{PL})$ denotes the pseudo-labeled subset, which is empty at the start of the algorithm.
At the beginning of each pseudo-labeling iteration, the model $f_\theta(x)$ is initialized with ImageNet [42] pre-trained weights. We found experimentally that re-initializing the model at the start of each pseudo-labeling iteration yields better performance than fine-tuning from the previous iteration. The model is optimized on the union of the labeled subset $(X_L, Y_L)$ and the pseudo-labeled subset $(X_{PL}, Y_{PL})$ by minimizing the loss of Eq. (5). Once the model $f_\theta$ has converged, we compute part embeddings for the unlabeled images, and the embeddings for each semantic part $q$ are clustered independently. We explain the clustering step in detail in Section 3.1.2. The clustering result is a list of partitions $[P_1, P_2, \ldots, P_Q]$. Each partition consists of $k_q$ clusters so that the cluster assignment is disjoint, $c^q_i \cap c^q_j = \emptyset$, and covers the whole unlabeled subset, $\bigcup_{j=1}^{k_q} c^q_j = X_U$. Note that the number of clusters $k_q$ is different for each semantic part and is determined during clustering.
The final step in the pseudo-labeling iteration is to aggregate the partitions of the semantic parts to obtain image-level pseudo-labels. We use consensus clustering [43] to aggregate the multiple clustering results in the list of partitions $[P_1, P_2, \ldots, P_Q]$. Consensus clustering aims to find a partition $P^*$ of the unlabeled subset $X_U$ by combining the ensemble members $[P_1, P_2, \ldots, P_Q]$ so that $P^*$ produces better pseudo-labels than each individual partition $P_j$. Details of consensus clustering are covered in Section 3.1.3. We then obtain pseudo-labels from the computed consensus clusters. The pseudo-labeled subset $(X_{PL}, Y_{PL})$ is re-initialized with samples whose pseudo-label has a sufficient number of examples (e.g., five images per identity). Implementation details are given in Section 4.3.
After assigning samples to the pseudo-labeled subset, we proceed with the next pseudo-labeling iteration. The pseudo-code for the whole algorithm is presented in Algorithm 1. In the following sections, we review the clustering algorithm employed to cluster part embeddings. We then describe the consensus clustering used to aggregate the part partitions.
Algorithm 1 (concluding steps of each pseudo-labeling iteration):
Compute the co-association matrix $M$ on the partitions $[P_1, P_2, \ldots, P_Q]$ with Equation (1).
Cluster the matrix $1 - M$ to get a partition $P^*$.
Re-initialize the pseudo-labeled subset $(X_{PL}, Y_{PL}) = (\emptyset, \emptyset)$.
Assign $(X_{PL}, Y_{PL}) \leftarrow (x, P^*(x))$ for all $x \in X_U$ if the number of images in the cluster $P^*(x)$ is greater than or equal to $l$.
Repeat until convergence or the maximum number of iterations is reached.
Return: $f_\theta$.
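Read alongside Algorithm 1, the following minimal sketch (not the authors' code) illustrates one such pseudo-labeling pass with the model stubbed out by random embeddings; the distance threshold, minimum cluster size l, and dimensions are illustrative placeholders, and the strict all-parts-agree consensus via tuples is a simplification of the procedure detailed in Sections 3.1.2-3.1.3.

```python
# Sketch of one pass of the pseudo-labeling loop (Algorithm 1), with the
# model f_theta stubbed out and the clustering done directly in scikit-learn.
# Threshold, minimum cluster size l, and dimensions are illustrative.
import numpy as np
from collections import Counter
from sklearn.cluster import AgglomerativeClustering

rng = np.random.default_rng(0)
Q, n, d, l = 6, 200, 32, 5

# Stub for f_theta: per-part embeddings of the unlabeled images X_U.
part_embeddings = [rng.normal(size=(n, d)) for _ in range(Q)]

# Cluster each semantic part independently (Section 3.1.2).
partitions = [
    AgglomerativeClustering(n_clusters=None, distance_threshold=8.0,
                            linkage="ward").fit_predict(h)
    for h in part_embeddings
]

# Simplified consensus step (Section 3.1.3): strict all-parts-agree rule,
# then keep only images whose consensus cluster has at least l members.
keys = [tuple(p[i] for p in partitions) for i in range(n)]
counts = Counter(keys)
pseudo = {k: lbl for lbl, k in enumerate(counts)}
X_PL = [(i, pseudo[keys[i]]) for i in range(n) if counts[keys[i]] >= l]
print(f"pseudo-labeled {len(X_PL)} of {n} unlabeled images")
```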
Clustering part embeddings
We cluster the embeddings for each semantic part independently using the hierarchical agglomerative ("bottom-up") clustering algorithm [44]. Each observation starts in its own cluster, and pairs of clusters are merged as one moves up the hierarchy. The clusters are linked using Ward's minimum variance method [45], which minimizes the total within-cluster variance. Agglomerative clustering does not require a predefined number of clusters. As the criterion to merge clusters, we provide a maximum distance between clusters, which we empirically set to 2 in the experiments. The resulting number of clusters is different for each semantic part.
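A minimal sketch of this per-part clustering step using scikit-learn is shown below; the distance threshold and the toy embedding arrays are placeholders rather than the paper's exact configuration.

```python
# Sketch of per-part agglomerative clustering, assuming part embeddings are
# numpy arrays of shape (num_images, d), one array per semantic part.
# The distance threshold is a placeholder hyperparameter.
import numpy as np
from sklearn.cluster import AgglomerativeClustering

def cluster_part_embeddings(part_embeddings, distance_threshold=2.0):
    """Cluster each semantic part independently; returns one label array per part."""
    partitions = []
    for h_q in part_embeddings:  # h_q: (num_images, d) embeddings for one part
        clustering = AgglomerativeClustering(
            n_clusters=None,                        # number of clusters not fixed
            distance_threshold=distance_threshold,  # merge criterion instead
            linkage="ward",                         # Ward's minimum variance method
        )
        partitions.append(clustering.fit_predict(h_q))  # k_q differs across parts
    return partitions

# Toy usage: Q=6 parts, 100 images, 32-dim embeddings
rng = np.random.default_rng(0)
parts = [rng.normal(size=(100, 32)) for _ in range(6)]
partitions = cluster_part_embeddings(parts)
print([len(set(p)) for p in partitions])  # clusters found per part
```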
Apart from agglomerative clustering, we analyze the suitability of the Affinity Propagation [46] and DBSCAN [47] clustering algorithms. Previous work [17] uses Affinity Propagation with preference values tuned heuristically for each data point. We favor agglomerative clustering over Affinity Propagation and DBSCAN as it has fewer hyperparameters that require heuristic tuning. Ablation studies for the other clustering algorithms are summarized in Section 4.6.
Consensus clustering
Consensus clustering of the semantic part partitions $[P_1, P_2, \ldots, P_Q]$ is based on a co-association method [43] that is recommended when the number of clusters in each partition is different [48]. The co-association matrix $M$ counts the fraction of partitions in which $x_i$ and $x_j$ are in the same cluster. The matrix $M$ is obtained from:

$$M_{ij} = \frac{1}{Q} \sum_{q=1}^{Q} \delta\big(P_q(x_i), P_q(x_j)\big), \qquad (1)$$

with $P_q(x_i)$ representing the associated cluster of the image $x_i$ in the partition $P_q$, and $\delta(a, b) = 1$ if $a = b$, and $0$ otherwise. Each value in the matrix $M$ is a measure of how many semantic parts of the images $x_i$ and $x_j$ are in the same cluster. The matrix $1 - M$ can be considered a new distance measure between images, with values ranging from 0 (similar) to 1 (different). The consensus partition is obtained by applying a hierarchical clustering algorithm [49] to the $1 - M$ matrix and varying the distance threshold at which two clusters can be merged. We vary the threshold from strict (any value below $1/Q$), meaning that all semantic parts are required to agree on the cluster assignment, to an intermediate value at which a majority (e.g., 75% of semantic parts) agree on the assignment.
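A compact sketch of this co-association consensus step is given below; it is an illustration under the stated definitions, with the agreement level exposed as a parameter rather than fixed to the paper's settings.

```python
# Sketch of consensus clustering via a co-association matrix (Eq. 1),
# assuming `partitions` is a list of Q label arrays, one per semantic part.
# The agreement level is a parameter; agreement=1.0 is the strict rule.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def consensus_partition(partitions, agreement=1.0):
    Q = len(partitions)
    n = len(partitions[0])
    M = np.zeros((n, n))
    for labels in partitions:
        M += (labels[:, None] == labels[None, :]).astype(float)  # delta term
    M /= Q                                      # fraction of agreeing parts
    D = 1.0 - M                                 # dissimilarity in [0, 1]
    np.fill_diagonal(D, 0.0)
    Z = linkage(squareform(D, checks=False), method="average")
    # Strict agreement merges only pairs whose dissimilarity is below 1/Q,
    # i.e., (1 - agreement) plus half a "one part disagrees" step.
    t = (1.0 - agreement) + (0.5 / Q)
    return fcluster(Z, t=t, criterion="distance")

# Usage with the toy partitions from the earlier sketch:
# labels = consensus_partition(partitions, agreement=1.0)
```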
The advantage of our method over previous work [22] is that it does not transfer any bias from the labeled subset because our pseudo-labeling method works solely on the unlabeled subset.
PartMixUp loss function
In order to increase the discriminative ability of part embeddings, we introduce a PartMixUp loss that extends the triplet loss [14] for part embeddings. We briefly review the triplet loss and outline its shortcomings, followed by our proposed PartMixUp loss that addresses the specified drawbacks.
Triplet loss. The triplet loss [14] accepts triplets of images $(x_a, x_{pos}, x_{neg})$, where $x_a$ (anchor) and $x_{pos}$ (positive) are images of the same person and the image $x_{neg}$ (negative) is of a different person. The triplet loss $L_T$ encourages the distances between positive pairs of embeddings to become smaller than the distances between negative pairs of embeddings by a given margin $m$:

$$L_T = \sum_{(x_a, x_{pos}, x_{neg})} \max\Big(0,\; D\big(f(x_a), f(x_{pos})\big)^2 - D\big(f(x_a), f(x_{neg})\big)^2 + m\Big), \qquad (2)$$

where $D$ is a distance metric in the embedding space (e.g., Euclidean or cosine) and $f$ is the model. The squared distance is commonly used to simplify the derivative computations during backpropagation. The strategy for selecting triplets $(x_a, x_{pos}, x_{neg})$ for the triplet loss is important: generating random triplets would result in many triplets that are already in a correct position (the negative sample is further from the anchor than the positive by the margin $m$) and contribute zero value to Equation (2). Batch-hard triplet mining [5] aims to overcome this problem and selects the hardest positive (the furthest example from the same class) and the hardest negative (the closest example from a different class) within a batch for each anchor image.
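The batch-hard mining scheme can be written compactly in PyTorch; the sketch below is illustrative (margin value and Euclidean distance are assumptions, not the paper's exact hyperparameters).

```python
# Sketch of batch-hard triplet mining in PyTorch, assuming `emb` holds
# embeddings of shape (B, d) and `labels` holds identity labels of shape (B,).
import torch

def batch_hard_triplet_loss(emb, labels, margin=0.3):
    dist = torch.cdist(emb, emb, p=2)             # pairwise Euclidean (B, B)
    same = labels[:, None] == labels[None, :]     # same-identity mask
    eye = torch.eye(len(labels), dtype=torch.bool, device=emb.device)
    pos_mask = same & ~eye
    neg_mask = ~same
    # Hardest positive: furthest example of the same identity.
    hardest_pos = (dist * pos_mask).max(dim=1).values
    # Hardest negative: closest example of a different identity.
    inf = torch.full_like(dist, float("inf"))
    hardest_neg = torch.where(neg_mask, dist, inf).min(dim=1).values
    return torch.clamp(hardest_pos - hardest_neg + margin, min=0).mean()
```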
PartMixUp loss. Our proposed PartMixUp loss L PM aims to further improve the triplet loss by taking advantage of semantic part embeddings. Learning discriminative part embeddings is essential for our part-based clustering, where each part contributes to the identity assignment.
PartMixUp loss builds on the observation that two different persons with similar appearances are hard to distinguish and represent useful examples for a learning algorithm. However, it is hard to mine such pairs from the dataset. We take advantage of part-based embeddings and generate such pairs for the PartMixUp loss by replacing some part embeddings of an image with the corresponding part embeddings of another person (Figure 3). The created example corresponds to a new person that differs from the original only by the replaced parts. We formally demonstrate that the PartMixUp loss mines hard pairs that contribute non-zero values to the loss. Let us consider a pair of part-based embeddings from different identities, $z_a = \{z_a^1, z_a^2, \ldots, z_a^Q\}$ and $z_{neg} = \{z_{neg}^1, z_{neg}^2, \ldots, z_{neg}^Q\}$. The semantic parts selected for replacement can be represented as two subsets $U'$ and $U''$ of the set of indices $U = (1, 2, \ldots, Q)$ such that $U' \subseteq U'' \subseteq U$. The part-based embeddings $z'_a$ and $z''_a$ are created from $z_a$ by replacing the parts with indices in $U'$ and $U''$, respectively, with the corresponding parts of $z_{neg}$. More specifically, the embeddings $z'_a$ and $z''_a$ have the same part embeddings for part indices in $U'$ and differ only in the additionally replaced parts. The identical part embeddings contribute zero to the sum, so the distance between a negative pair decreases as the number of shared parts increases:

$$D(z'_a, z''_a)^2 = \sum_{q \in U'' \setminus U'} D\big(z_a^q, z_{neg}^q\big)^2 \;\le\; D(z_a, z_{neg})^2. \qquad (3)$$

In other words, the more semantic parts are shared between the images of two people, the harder it is to distinguish these people. For example, if two individuals are dressed the same and the only difference is in the face and hairstyle, then it is a hard pair to distinguish.
The formula for computing the PartMixUp loss is as follows. The PartMixUp loss $L_{PM}$ on a batch of embeddings $Z$ is computed by selecting the furthest positive $z_{pos}$ within the batch and the closest mixed-up negative $\tilde{z}$ for each anchor $z_a$:

$$L_{PM} = \sum_{z_a \in Z} \max\Big(0,\; \max_{z_{pos}} D(z_a, z_{pos})^2 - \min_{\tilde{z} \in \tilde{Z}} D(z_a, \tilde{z})^2 + m\Big), \qquad (4)$$

where $\tilde{Z}$ is composed from $Z$ by replacing some semantic part embeddings for each anchor $z_a \in Z$ with the corresponding part embeddings of another identity. The number of part embeddings shared in the pair controls the difficulty of the generated negative pairs: the more parts are shared, the harder the negative pair becomes (Figure 3). We will use the notation PM(a) for the PartMixUp loss with the maximum number of shared part embeddings equal to $a$. Replacing parts is performed between part embeddings and not in the image pixels, since cutting and pasting image parts is prone to errors in the part detector.
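A minimal sketch of the embedding-level part replacement is shown below; the loop-based implementation and the number of replaced parts are illustrative, and it assumes the batch contains at least two distinct identities so that a donor of a different identity always exists.

```python
# Sketch of generating mixed-up negatives at the embedding level, assuming
# `parts` is a tensor (B, Q, d) of per-part embeddings and `labels` gives
# identities. Assumes at least two identities are present in the batch.
import torch

def mixup_negatives(parts, labels, num_replaced=2):
    """Replace `num_replaced` random part embeddings of each anchor with the
    corresponding parts of a randomly chosen different identity."""
    B, Q, _ = parts.shape
    mixed = parts.clone()
    for i in range(B):
        # Pick a donor with a different identity (avoids same-person parts).
        candidates = (labels != labels[i]).nonzero(as_tuple=True)[0]
        donor = candidates[torch.randint(len(candidates), (1,))].item()
        idx = torch.randperm(Q)[:num_replaced]     # parts to swap
        mixed[i, idx] = parts[donor, idx]
    return mixed  # each row differs from its anchor only in the replaced parts
```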
The key difference between our PartMixUp loss and the previous variations of the triplet loss [5,50] is that it obtains hard negative pairs, at small distances, by creating new samples rather than searching for them within the batch. The benefit of the PartMixUp loss is the improved discriminative ability of semantic part embeddings. In the ablation study (Sec. 4.6), we show that adding the PartMixUp loss improves the model performance.
Total loss. In addition to the triplet and PartMixUp losses, the model is optimized with a cross-entropy loss widely used for person re-identification [6,16]. A classification layer is added at the beginning of each pseudo-labeling iteration, with the number of outputs equal to the number of known training identities (labeled and pseudo-labeled). The overall objective is a weighted sum of all loss functions, defined as:

$$L = \lambda_{CE} L_{CE} + \lambda_T L_T + \lambda_{PM} L_{PM}, \qquad (5)$$

where $\lambda_{CE}$, $\lambda_T$ and $\lambda_{PM}$ are the weighting factors for each loss.
Market-1501 [1] contains 32,668 images for 1,501 identities captured from 6 cameras placed in front of a campus supermarket. The standard evaluation protocol [1] splits the dataset into fixed subsets: 12,936 images of 751 identities in the training subset, 15,913 images in the gallery subset, and 3,368 images in the query subset from 750 test identities (disjoint with the training identities). During testing, query images are used to retrieve matching images in the gallery set. The bounding boxes are computed using the Deformable Part Model (DPM) [52], making it close to realistic settings.
DukeMTMC-reID [2] contains 36,411 images for 1,404 identities captured from 8 cameras, which is a subset of the pedestrian tracking dataset. The dataset is split into three fixed subsets: 702 identities with 16,522 images are used for training, and 2,228 images from other 702 identities are used for query images retrieving the rest 17,661 gallery images. The semi-supervised settings for DukeMTMC-reID are the same as for Market-1501 dataset.
CUHK03 dataset [51] contains 14,097 images of 1,467 identities. We use the first evaluation protocol [51] (like the majority of existing works) that splits the dataset randomly 20 times, and the gallery for testing has 100 identities each time. We evaluate on bounding boxes automatically detected by DPM [52].
Evaluation protocol
We follow the semi-supervised setting of recent works [22,9], where only 1/3 of the identities are labeled and the identities of the labeled and unlabeled subsets are disjoint.
We assess the performance with two evaluation metrics: Cumulated Matching Characteristics (CMC), which treats the re-identification task as a ranking problem, and mean Average Precision (mAP), which treats it as an object retrieval problem. Similar to previous works [9,10,21,11,22], on Market-1501 and DukeMTMC-reID we report CMC at rank-1 and mAP in single-query mode. The performance on CUHK03 is evaluated at rank-1, rank-5, rank-10 and rank-20 in single-shot mode.
Implementation details
We use the ResNet50 [53] architecture as a backbone, as in most other semi-supervised re-id methods. We build on the part-based convolutional baseline (PCB) proposed in [13], which outputs part-based embeddings by pooling the feature maps computed by the backbone network over regions of interest (ROI). PCB works with a coarse part detection, which is a split into six equal horizontal stripes. We use PCB without the refined part pooling (RPP) proposed in the same work. Utilizing a more sophisticated part detection method may further improve the results.
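A minimal sketch (not the authors' released code) of this PCB-style stripe pooling is given below; the embedding dimension, the per-stripe 1×1 reduction, and the torchvision API version are illustrative assumptions.

```python
# Sketch of PCB-style part embeddings with a ResNet50 backbone: the final
# feature map is split into six equal horizontal stripes and pooled per stripe.
# Uses the torchvision >= 0.13 weights API; dims are illustrative choices.
import torch
import torch.nn as nn
from torchvision.models import resnet50

class PartEmbedder(nn.Module):
    def __init__(self, num_parts=6, dim=256):
        super().__init__()
        backbone = resnet50(weights=None)            # ImageNet weights in practice
        self.features = nn.Sequential(*list(backbone.children())[:-2])
        self.num_parts = num_parts
        # Per-stripe 1x1 conv reducing the 2048-dim pooled features to `dim`.
        self.reduce = nn.ModuleList(
            nn.Conv2d(2048, dim, kernel_size=1) for _ in range(num_parts)
        )

    def forward(self, x):
        fmap = self.features(x)                      # (B, 2048, H, W)
        stripes = fmap.chunk(self.num_parts, dim=2)  # split along height
        embs = []
        for q, stripe in enumerate(stripes):
            pooled = stripe.mean(dim=(2, 3), keepdim=True)  # average pool stripe
            embs.append(self.reduce[q](pooled).flatten(1))  # (B, dim)
        return torch.stack(embs, dim=1)              # (B, Q, dim)
```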
We perform five pseudo-labeling iterations with 100 epochs in each iteration. The model is trained with the Adam optimizer [54]. The initial learning rate is set to 0.001, with decay at the 60th and 80th epochs in each iteration. Each training batch contains 10 labeled identities and 10 pseudo-labeled identities with 6 images per identity. At the first pseudo-labeling iteration, when the data has not been pseudo-labeled yet, the training batch contains only labeled data. Our method is implemented with PyTorch [55] and TorchReId [56]. We train the model on one Tesla M40 GPU with 12 GB of memory.

Table 1 shows the results of the different methods on the Market-1501, DukeMTMC-reID and CUHK03 datasets with 1/3 of labeled data. As a baseline, we train the backbone architecture with image-level embeddings (BIL) in a supervised manner on the labeled subset. Semi-supervised BIL is trained with pseudo-labels computed using agglomerative clustering. A semi-supervised part-based (PB) model is trained with pseudo-labels computed using concatenated part-based embeddings. Our part-based (PB) model is trained with pseudo-labels computed using consensus clustering of part-based embeddings. Finally, the PB model with PartMixUp loss (PB+PM) is trained as the previous model with the additional PartMixUp loss component (Equation 5).
Semi-supervised evaluation
The experimental results on three datasets (Table 1) demonstrate that consensus clustering of part-based embeddings (our PB) outperforms both the clustering of image-level embeddings (semi-supervised BIL) and the clustering of concatenated part-based embeddings (semi-supervised PB). Moreover, learning part-based embeddings with the PartMixUp loss (our PB+PM) further improves both the Rank-1 and mAP metrics. Figures 4a and 4b show the progress of Rank-1 and the number of pseudo-labeled images for the different methods over the pseudo-labeling iterations. The supervised training of the BIL model does not utilize pseudo-labeled images, so it is not shown in Figure 4b. A sharp increase in Rank-1 is observed in the second iteration due to the fact that the bulk of the pseudo-labels is added to the training after the model has been pretrained on the labeled data in the first iteration.
Comparison with state-of-the-art methods
We compare the performance of our method with existing semi-supervised person re-id methods that use similar experimental settings on Market-1501 and DukeMTMC-reID (Table 2) and CUHK03 (Table 3). We select the setting with 1/3 labeled data that most existing works use to report their results. Our approach outperforms previous methods on both the CMC and mAP metrics. Our method achieves 91.5% in Rank-1 and 76.7% in mAP on Market-1501 without any assumption on the number of identities in the unlabeled data. Tables 2 and 3 show that the performance of our method on 1/3 labeled data is close to the performance of the same part-based model [13] on the whole labeled dataset, e.g., Rank-1 of 82.4% for our method versus 82.6% with full supervision on DukeMTMC-reID, and Rank-1 of 91.5% for our method versus 92.3% with full supervision on Market-1501.
Ablation study
In this section, we review and evaluate various components and design choices in the proposed method.
The effect of consensus clustering. To evaluate the importance of consensus clustering, we analyze the Rand index [59] of the cluster assignment on the unlabeled subset using the available ground-truth labels (used only for evaluation). The Rand index is a similarity measure between two cluster partitions, computed as the ratio of pairs that are assigned to the same clusters in the predicted and true assignments. Figure 5 shows the Rand index of clustering for each semantic part, for the concatenation of part embeddings, and for our consensus clustering. We observe that the highest Rand index of 0.85 is achieved with our consensus clustering, compared with the concatenation of parts (Rand index 0.82) and clustering by each semantic part separately (Rand index 0.79-0.80).
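For reference, this evaluation can be reproduced with scikit-learn's rand_score (available in scikit-learn ≥ 0.24; adjusted_rand_score is a long-standing alternative); the labels below are toy placeholders.

```python
# Sketch of evaluating cluster assignments against ground-truth identities
# with the Rand index (used here only for evaluation, as in the ablation).
from sklearn.metrics import rand_score

pred = [0, 0, 1, 1, 2, 2]    # e.g., from consensus clustering
truth = [0, 0, 1, 1, 1, 2]   # held-out ground-truth identities
print(f"Rand index: {rand_score(truth, pred):.3f}")
```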
The influence of agreement in consensus clustering. In Figure 6, we show the results of experiments with varying levels of agreement in consensus clustering on different datasets. We vary the number of semantic parts required for agreement in consensus clustering and perform experiments with strict 100% agreement (6 out of 6 parts), 83% agreement (5 out of 6 parts), 66% agreement (4 out of 6 parts) and 50% agreement (3 out of 6 parts). The best results are achieved when all 6 out of 6 parts agree on the cluster assignment. The performance decreases slightly when 5 out of 6 parts agree (from 91.5%). The experiments confirm that strict agreement in consensus clustering is essential in our semi-supervised method.
The influence of the clustering method. We evaluate our method with two other clustering algorithms that do not require a predefined number of clusters, Affinity Propagation [46] and DBSCAN [47]. Experiments are conducted with the default parameters of the clustering algorithms on the Market-1501 dataset with 1/3 and 1/6 labeled data. Table 4 shows the number of detected clusters versus the number of ground-truth identities at the first pseudo-labeling iteration. Rank-1 and mAP are compared after the first iteration. We observe that DBSCAN failed to identify any clusters (similar results are observed in [23]), so DBSCAN requires tuning its parameters based on heuristics, which we try to avoid. Agglomerative clustering identifies more clusters than Affinity Propagation at the first iteration and yields better performance with the assigned pseudo-labels (Rank-1 88.1% versus 87.4% and mAP 72.6% versus 70.3%).
The influence of PartMixUp loss. We observe that PartMixUp loss improves the re-id performance of the part-based PCB model [13] in the supervised setting and our method in the semi-supervised setting (Table 5).
Conclusion
In this paper, we propose a novel semi-supervised method for person re-id by consensus clustering of part-based embeddings. Our method assigns pseudo-labels without any assumption about the number of identities in the unlabeled subset. The pseudo-labels are assigned based on consensus clustering of part-based embeddings which yields better pseudo-labels than clustering of image-level embeddings or features from multiple CNNs. The developed PartMixUp loss improves the discriminative ability of part-based features and further increases the performance of the model. Our method utilizes only one CNN and is compatible with any CNN architecture as a backbone. Extensive experiments in various settings on multiple person re-id datasets confirm the effectiveness of the proposed approach. | 2021-07-27T01:16:17.978Z | 2021-07-24T00:00:00.000 | {
"year": 2021,
"sha1": "a689c4a8383fb5f751da5d25998779f7d6fe8c5b",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "a689c4a8383fb5f751da5d25998779f7d6fe8c5b",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
4463464 | pes2o/s2orc | v3-fos-license | Histopathological Features of Brain Arteriovenous Malformations in Japanese Patients
Clinical features of high-risk brain arteriovenous malformations (BAVMs) are well characterized. However, pathological evidence about the differences possessed by high-risk patients is still lacking. We reviewed archived routine hematoxylin-eosin specimens from a total of 54 surgically treated BAVMs. The histopathological features of the nidus were semi-quantitatively analyzed. We obtained the pathological differences of the BAVM nidus between several clinical features. Among the analyzed pathological features, significant differences were observed in the degree of venous enlargement and intimal hyperplasia. Juvenile, female, diffuse-nidus, high Spetzler-Martin grade, and low-flow patients had a lesser degree of those parameters compared with adult, male, compact-nidus, low Spetzler-Martin grade and high-flow patients. The high-risk profiles of BAVM patients were well reflected in the nidus pathology. Therefore, juvenile, female, diffuse-nidus, and low-flow Japanese BAVM patients might have a different vascular remodeling process that predisposes them to a higher tendency of hemorrhage.
Introduction
Brain arteriovenous malformations (BAVMs) comprise tangles of abnormally developed arteries and veins without intervening capillaries. 1) As a consequence, an abnormal shunting of arteries and veins occurs and results in high-pressure vascular channels that are at risk of rupturing, often with catastrophic results. 2) Therefore, appropriate management is necessary to reduce the lifetime risk of morbidity and mortality of BAVMs.
BAVMs always present a challenge for neurosurgeons, as the risk of the procedure may outweigh the benefits. 3) Spetzler-Martin grading has been widely used for determining surgery-related morbidity, 4) and subsequently Lawton et al. 5) proposed a supplementary grading for determining the surgical morbidity of BAVMs. However, high-risk patients, such as those with a history of hemorrhage, [6][7][8] young age, 9,10) deep venous drainage, 6,11,12) and female sex, 10) might need more aggressive modalities due to the lifetime risk of hemorrhage. Although high-risk patients are well identified, direct pathological evidence about the differences possessed by high-risk patients is still lacking. The aim of this study is to investigate the histopathological evidence of BAVM features in association with identified risk factors for subsequent hemorrhage in Japanese patients.
I. Patient population
A total of 54 specimens were obtained from the surgical treatment of BAVMs in Japanese patients at Kyoto University Hospital with standard indications. The clinical data of the patients are summarized in Table 1.
II. Sample preparation
All specimens were fixed in 10% formalin overnight and embedded in paraffin the next day. The specimens were stored at room temperature. Specimens were sliced into multiple sequential 6-μm-thick sections, deparaffinized in xylene, rehydrated, and then used for histological studies. All specimens were stained with hematoxylin-eosin and observed with a BX51 microscope (Olympus Optical Co., Ltd., Tokyo).
III. Assessment of infiltrating cells
We searched for foci of inflammation in the nidus using low-power magnification. Then, the number of infiltrating cells was observed using high-power magnification on three adjacent fields. The average numbers of infiltrating cells were classified semi-quantitatively as follows: less than 20 (mild), 20-40 (moderate), and more than 40 (severe).
IV. Assessment of intimal hyperplasia
The thickest tunica intima of the vein in the hematoxylin-eosin-stained samples was recorded. The findings were classified semi-quantitatively as follows: less than 100 μm (mild), 100-200 μm (moderate), and more than 200 μm (severe).
V. Assessment of microvessel accumulation
Initially, we identified the region of highest vascular density in the nidus of the hematoxylin-eosin-stained samples using low-power magnification, and then measured the number of microvessels (< 100 μm) using high-power magnification (10× objective). The findings were classified semi-quantitatively as follows: less than 10 (mild), 10-20 (moderate), and more than 20 (severe).
VI. Assessment of venous enlargement
We carefully determined the largest venous diameter in the hematoxylin-eosin-stained samples. The findings were classified semi-quantitatively as follows: less than 1 mm (mild), 1-2 mm (moderate), and more than 2 mm (severe).
VII. Statistical analysis
The results of all histopathological studies are expressed as the mean ± standard deviation. Statistical analysis was performed with SOFA Statistics 1.4.3 (Paton-Simpson & Associates, Auckland, New Zealand). Clinical data including age, sex, occurrence of hemorrhage, seizure, flow velocity, size of the nidus, and pre-operative embolization were analyzed along with the histological data. P values less than 0.05 were considered statistically significant.
Results
In the present study, we assessed the pathological features of the BAVM nidus, particularly focusing on infiltrating cells, intimal hyperplasia, microvessel accumulation, and venous enlargement (Fig. 1). Among the assessed parameters, only intimal hyperplasia and venous enlargement were predominantly affected by the clinical variables.
Discussion
The present study revealed that intimal hyperplasia and venous enlargement in the BAVM nidus were the pathological factors that differentiated the clinical profiles of BAVM patients. Our result might indirectly indicate that the vascular remodeling process is regulated differently between certain patient profiles. Patients with a younger age of onset and female patients tended to have a thinner intima and smaller draining veins; although the hemorrhagic presentation was not increased in this group, the condition might predispose them to further hemorrhage if untreated. In line with this study, our previous report indicated that children and female patients had a higher risk of subsequent hemorrhage after the initial hemorrhage. 10) Apparently, there is inconsistency regarding patient age and sex as predictors of hemorrhage in BAVMs. 12,13) It is noteworthy that the racial background of a clinical series might contribute to those differences. At least two reports from the Scandinavian region also indicated that younger 7,14) and female 14) patients are more prone to future hemorrhage. Furthermore, fertile female patients may become pregnant; thus, careful consideration is needed in the management of this group. 15)

In this study, angiographically obtained clinical profiles such as nidal diffuseness, flow velocity, and Spetzler-Martin grade influenced the intimal hyperplasia and venous enlargement. Patients with a diffuse nidus, low flow, and higher Spetzler-Martin grade were observed to have thinner intimal hyperplasia and smaller draining veins. We defined low-flow BAVMs by a slower transition from the arterial to the venous phase in angiography, which indirectly reflects venous hypertension (Y.T. and S.M.). The aforementioned conditions are highly consistent as hemorrhagic predictors in BAVMs. 6,7,[10][11][12][13][14]16) Furthermore, in our series, a higher proportion of hemorrhagic presentation was observed in low-flow patients (83.3% in low-flow patients, 33.3% in high-flow patients). Therefore, we assume that the hemodynamic properties of BAVMs play a highly prominent role in the vascular remodeling process in the nidus.
To our surprise, infiltrating cells were not influenced by the clinical profiles. The presence of inflammation in the BAVM nidus is well recognized, 17) and we have previously reported the activation of NF-kappa B and STAT3 in BAVMs. 18,19) It is noteworthy that the precise contribution of inflammation to BAVM pathobiology remains elusive. Moreover, our semi-quantitative scoring might also contribute to this result.
The highly angiogenic environment of the BAVM nidus is well documented; abnormalities in VEGF, 20,21) Tie-2, 20,21) and HIF-1α 22) might contribute to BAVM pathobiology. In the present study, the highly angiogenic environment was well reflected in the microvessel accumulation in the nidus; however, no difference was detectable between the clinical variables. We assume that abnormal angiogenesis is a common feature of BAVMs and is thus insensitive for differentiating between clinical profiles.
Conclusion
In this study, intimal hyperplasia and venous enlargement were distinctly influenced by the clinical features. Younger and female patients, as well as those with the angiographic features of a diffuse nidus, low flow, and high Spetzler-Martin grade, possessed thinner intimal hyperplasia and smaller draining veins. Those conditions might predispose high-risk Japanese patients to a higher tendency of hemorrhage. Future investigation is necessary to elucidate the underlying mechanism.
Conflicts of Interest Disclosure
The authors declare that there is no conflict of interest. | 2018-04-03T04:31:23.709Z | 2016-04-06T00:00:00.000 | {
"year": 2016,
"sha1": "070d393bb54ac897daaf55ec3c00d3b9def4f67f",
"oa_license": "CCBYNCND",
"oa_url": "https://www.jstage.jst.go.jp/article/nmc/56/6/56_oa.2016-0032/_pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "77398b1c26f915f19db08c735b4e274fbe88ca7d",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
207077610 | pes2o/s2orc | v3-fos-license | Lymphoedema in patients with lentigo maligna treated with imiquimod: a long‐term adverse effect
Lentigo maligna (LM) is a potential precursor lesion of lentigo maligna melanoma (LMM). It is treated to prevent progression to LMM. A recent epidemiological study reports a progression rate of 2.0-2.6% over a course of 25 years. 1 The gold standard of treatment is surgical excision with a 5-mm margin. 2 Topical application of imiquimod cream is an off-label alternative.
DOI: 10.1111/bjd.16267 DEAR EDITOR, Lentigo maligna (LM) is a potential precursor lesion of lentigo maligna melanoma (LMM). It is treated to prevent progression to LMM. A recent epidemiological study reports a progression rate of 2·0-2·6% over the course of 25 years. 1 The gold standard treatment is surgical excision with a 5-mm margin. 2 Topical application of imiquimod cream is an off-label alternative. 2,3 Complete clinical response rates for LM treated with imiquimod vary from 37·1% to 100%. [4][5][6] We report three patients with LM who developed lymphoedema following application of topical imiquimod.
Three consecutive patients with LM were treated according to our protocol. Patients were instructed to apply imiquimod once daily to the lesion with a 1- to 2-cm margin, for 12 weeks. The goal was to achieve at least 10 weeks of inflammation. Depending on the inflammatory reaction, the treatment schedule was adapted. If it was too intense, patients were instructed to apply imiquimod three times per week, or if the inflammation was insufficient, patients were instructed to apply imiquimod 2-3 times daily. 7 The first patient was a 66-year-old woman with a 9 × 10 mm pigmented brown macule on the left cheek. The diagnosis of LM was confirmed by a punch biopsy. After 12 weeks of treatment with imiquimod 5%, no residual pigmentation was visible macroscopically or by dermatoscopy. Within days after starting treatment, the patient developed erythema, soreness and oedema at the site of application. The erythema partially subsided, the soreness quickly disappeared, but a nonpitting swelling persisted. A punch biopsy obtained 2 years post-treatment demonstrated fibrosis, with increased numbers of fibroblasts and a mild lymphohistiocytic infiltrate that had replaced the normal subcutaneous tissue (Fig. 1a, b). D2-40 immunostaining showed several compressed lymphatic vessels within this fibrotic tissue. Four years post-treatment, the lymphoedema was still present.
The second patient was a 68-year-old woman with a 14 × 14 mm irregularly pigmented macule on her right cheek. LM was confirmed histopathologically. She applied imiquimod once daily during the first 4 weeks of treatment. As a result of intense inflammation she was instructed to apply the imiquimod three times weekly for the remaining 8 weeks, for a total of 12 weeks. One month post-treatment a biopsy showed postinflammatory hyperpigmentation; no LM was found. In the dermis oedema was observed. Histologically it was unclear if the oedema was lymphoedema or residual oedema because of inflammation. The oedema persisted for 3 years, after which it disappeared. The third patient was a 69-year-old woman, who was referred following excision of a LMM on her right cheek. Histological examination of the excised lesion showed radically excised LMM with a Breslow thickness of 0·6 mm. Several years later, pigmentation measuring 15 × 15 mm appeared around the scar. A biopsy showed LM, without evidence of LMM. The patient declined surgical treatment because she found the potential scarring unacceptable. She was treated with off-label imiquimod. During treatment, the patient developed an inflammatory reaction with erythema, swelling, soreness and crusting. After treatment, no residual pigmentation was present. The erythema and soreness disappeared but lymphoedema persisted. The lymphoedema disappeared gradually after a year.
Topical imiquimod is an off-label option for the treatment of patients with LM who do not qualify for or do not opt for surgical treatment. Imiquimod is applied for a prolonged period of time to achieve a sufficient inflammatory response. 5 We hypothesize that lymphoedema may complicate treatment of patients with LM using topical imiquimod. This adverse effect may be caused by the intense treatment regimen used in our patients, resulting in severe inflammation and significant dermal fibrosis, impairing normal tissue drainage by afferent lymphatic vessels.
In the two patients who had biopsies after imiquimod treatment (2 years post-treatment for one and 1 month after for the other), fibrosis was clearly present in the reticular dermis histologically. We hypothesize that in our patients, similar to the sequence of events during cutaneous wound healing, a late phase of remodelling (maturation) may have followed previous phases of inflammation and proliferation in response to imiquimod. The remodelling phase involves degradation of excess collagen and organization of fibrotic connective tissue, which may take several years. 8 This may explain why lymphoedema persisted and only resolved in two of the three patients. Alternatively, the lymphoedema may have been related to other unknown or unrecognized factors.
In conclusion, topical imiquimod is an off-label alternative treatment option for the treatment of LM, for patients who are ineligible or do not opt for surgical treatment. When prescribing topical imiquimod for a lesion located on the cheek for a prolonged period of time, patients should be informed about the risk of secondary lymphoedema. | 2018-04-03T05:36:14.769Z | 2018-06-01T00:00:00.000 | {
"year": 2018,
"sha1": "0177f014c9dcf78d0b94a79367c95d601942357b",
"oa_license": "CCBYNC",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/bjd.16267",
"oa_status": "HYBRID",
"pdf_src": "Wiley",
"pdf_hash": "0b1998fdd92e828c41ab5a972522a0b449303866",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
9278665 | pes2o/s2orc | v3-fos-license | Single-Chip Fully Integrated Direct-Modulation CMOS RF Transmitters for Short-Range Wireless Applications
Ultra-low power radio frequency (RF) transceivers used in short-range application such as wireless sensor networks (WSNs) require efficient, reliable and fully integrated transmitter architectures with minimal building blocks. This paper presents the design, implementation and performance evaluation of single-chip, fully integrated 2.4 GHz and 433 MHz RF transmitters using direct-modulation power voltage-controlled oscillators (PVCOs) in addition to a 2.0 GHz phase-locked loop (PLL) based transmitter. All three RF transmitters have been fabricated in a standard mixed-signal CMOS 0.18 μm technology. Measurement results of the 2.4 GHz transmitter show an improvement in drain efficiency from 27% to 36%. The 2.4 GHz and 433 MHz transmitters deliver an output power of 8 dBm with a phase noise of −122 dBc/Hz at 1 MHz offset, while drawing 15.4 mA of current and an output power of 6.5 dBm with a phase noise of −120 dBc/Hz at 1 MHz offset, while drawing 20.8 mA of current from 1.5 V power supplies, respectively. The PLL transmitter delivers an output power of 9 mW with a locking range of 128 MHz and consumes 26 mA from 1.8 V power supply. The experimental results demonstrate that the RF transmitters can be efficiently used in low power WSN applications.
Introduction
Short-range wireless transceivers are widely used in emerging low-power applications such as wireless sensor networks (WSNs), wireless body area networks (WBANs), and biomedical implantable electronic systems [1,2]. These applications have very strict requirements on the size, weight, cost and power consumption of the system, and the transceivers for them form a special class of radio-frequency integrated circuits (RFICs) [3][4][5]. Therefore, having a simple design with minimal building blocks becomes an attractive approach for these applications [6]. The conventional architecture of a narrow-band direct-modulation transmitter is shown in Figure 1a. It includes a digital-to-analog converter (DAC), a mixer, a local oscillator, a band-pass filter and a power amplifier (PA). After the digital information is converted to an analog signal through the DAC, it is upconverted by the mixer to the carrier that is generated by the local oscillator. The signal is then filtered to remove harmonics generated by the mixer, after which its power is boosted by the PA. Figure 1b shows a first step to reducing the building blocks of the transmitter by using a voltage-controlled oscillator (VCO) to perform direct frequency modulation (FM). By further simplifying and removing the DAC, direct frequency shift keying (FSK) or Gaussian frequency shift keying (GFSK) can also be applied [7]. Figure 1c further reduces the blocks to only one, which is a power voltage-controlled oscillator (PVCO). Similar power oscillators are described in [8][9][10][11]; however, none of these was fabricated using a standard CMOS process. This paper discusses the feasibility of using CMOS PVCO-based, fully integrated, direct-modulation transmitters in RF transceivers with performance comparable to other technologies. The main drawback of the architecture shown in Figure 1c is the frequency drift, since the PVCO is used in open loop. This problem can be solved by the architecture in [12]; however, it was not a single-block design and required a PA to boost the output signal. Therefore, a new topology based on a phase-locked loop (PLL) is used, as shown in Figure 1d. This improves the architecture shown in Figure 1c by implementing a PLL along with the PVCO to stabilize the frequency against any drift that might occur due to temperature or supply voltage variations, for instance. This paper also presents the design, implementation and measurement results of 2.4 GHz and 433 MHz PVCO-based transmitters and a 2.0 GHz PLL-based transmitter, respectively. Due to the limited availability of research related to PVCO-based transmitters in the open literature, the proposed circuits are compared with previous works at different frequencies and in different technologies.
Design of PVCO Transmitters
Two circuits were designed, one operating at 433 MHz and the other at 2.4 GHz. Both circuits consist of a differential cross-coupled negative-gm VCO. Figure 2 shows the basic schematic of the proposed single-stage direct-modulation transmitter. The tuning network consists of an LC tank using inductor L1 and a number of NMOS accumulation-mode varactors CV that have a variable capacitance of 1 to 3 pF. Inductor L1 is 18 nH with a low quality factor (Q) of 2, which has a major effect on the efficiency of the circuit and its phase noise performance. Since the operating frequency of this design is relatively low (433 MHz), it was possible to use 16 varactors in parallel to obtain a wider tuning range. The varactors were controlled through two different signals to have more flexibility in tuning the circuit and in applying the baseband modulation. Capacitors Cb act as DC blocking capacitors with a value of 12 pF each, and resistors R, each of 50 kΩ, provide a DC ground for biasing of the varactors. Each of the RF outputs goes to the 50 Ω load of the measurement equipment directly, without a buffer. The loads will be replaced by the transmission antenna in the final system. Transistors M1-M4 form the cross-coupled negative-gm differential pair. Each NMOS device has a total width of 200 μm using 80 fingers (finger width = 2.5 μm) and a length of 0.18 μm, while the PMOS devices have double the number of fingers. The sizes of these transistors are chosen based on a tradeoff between the required output power, operating frequency and achievable efficiency. The design equations can be found in [13]. Transistor M5 acts as a current source that can be controlled in order to vary the transmitted power. This is a long-channel device that uses 500 fingers with an aspect ratio of 1.25 mm/1 μm.
PVCO Design (2.4 GHz)
The 2.4 GHz PVCO design utilizes the same architecture as Figure 2; however, the set of varactors used for tuning (the portion within the dashed lines) was removed, and the total number of varactors used was two. Inductor L1 in this design is 2.3 nH with a quality factor (Q) of 8. In transistors M1-M4, the NMOS devices each have a total width of 175 μm using 70 fingers (finger width = 2.5 μm) and a length of 0.18 μm, while the PMOS devices have double the number of fingers. Transistor M5 has the same aspect ratio as in the 433 MHz design.
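As a quick sanity check of the tank dimensioning, the sketch below evaluates the ideal LC resonance f0 = 1/(2π√(LC)) for both designs; the effective capacitances are back-calculated illustrations only, since the real tank capacitance also depends on the varactor configuration and layout parasitics not captured here.

```python
# Sketch: ideal LC-tank resonance check, f0 = 1 / (2*pi*sqrt(L*C)).
# Effective capacitances are illustrative; the actual tank capacitance
# depends on the varactor connection and parasitics.
import math

def f0(L, C):
    return 1.0 / (2.0 * math.pi * math.sqrt(L * C))

def C_for(f, L):
    return 1.0 / (L * (2.0 * math.pi * f) ** 2)

for name, L, f in [("433 MHz design", 18e-9, 433e6),
                   ("2.4 GHz design", 2.3e-9, 2.4e9)]:
    C = C_for(f, L)
    print(f"{name}: L = {L*1e9:.1f} nH needs C_eff ~ {C*1e12:.2f} pF "
          f"(check: f0 = {f0(L, C)/1e6:.0f} MHz)")
```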
Design of PLL Transmitter
The basic block diagram of the PLL-based transmitter is shown in Figure 3. As the PVCO drives the antenna with the required output power at a frequency of 2.0-2.1 GHz, its frequency is divided by the prescaler, which is a divide-by-128 logic circuit implemented as a cascade of seven toggle flip-flops (TFFs). The divided frequency is then compared by the phase-frequency detector (PFD) to the external reference frequency coming from the crystal (Xtal), which should be in the range of 16 MHz. The output of the PFD controls the charge pump together with the loop filter in order to speed up or slow down the PVCO until the output frequency and phase of the PVCO are locked to the Xtal reference. Both FSK and FM modulation can be applied by pulling the resonant frequency of the crystal, which is considered indirect modulation of the VCO [14]. The important blocks of the PLL transmitter are described in the following subsections.
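In an integer-N loop like this one, the locked output frequency is simply the reference multiplied by the division ratio; the sketch below checks this for the stated divide-by-128 prescaler with an assumed 16 MHz crystal (a value within the range quoted above, not a measured specification).

```python
# Sketch: integer-N PLL lock condition, f_out = N * f_ref.
# N = 128 from the seven cascaded toggle flip-flops (2**7); the 16 MHz
# reference is an assumed value within the range stated in the text.
N = 2 ** 7          # divide-by-128 prescaler
f_ref = 16e6        # Hz, crystal reference (assumed)
f_out = N * f_ref
print(f"f_out = {f_out/1e9:.3f} GHz")  # 2.048 GHz, inside the 2.0-2.1 GHz band
```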
PVCO Design (2.0 GHz)
The 2.0 GHz PVCO design utilizes the same architecture as Figure 2; however, the portion within the dashed lines was removed. The NMOS and PMOS devices use the same dimensions as in the 2.4 GHz PVCO design. The simulated gain of the PVCO (KVCO) is −218 MHz/V in the tuning range from 0 to 1.0 V and −77.8 MHz/V in the tuning range from 1.0 to 1.8 V, as shown in Figure 4.
Phase-Frequency Detector and Charge-Pump
The PFD is used to detect phase/frequency differences between the input signal coming from the PVCO and the reference signal coming from the crystal. The difference is translated into a proportional control signal that tunes the PVCO accordingly. There are many techniques used to provide such a function, and the most commonly used is the PFD architecture, since it increases the acquisition range and the lock speed of the PLL. Due to the finite speed of the PFD components, as the phase and frequency difference of the reference and PVCO signals approaches zero, the output pulses do not approach zero linearly. Therefore, when the PLL is in the locked state, the control voltage can wander around the reference value, resulting in a dead-zone, which can produce undesired spurs in the output spectrum. A charge pump is used after the PFD to charge or discharge the loop filter, resulting in an increase or decrease in the output control voltage. The actual current-to-voltage conversion is done by the loop filter that follows the charge pump.
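A behavioral sketch of a standard tri-state PFD may make the dead-zone mechanism concrete; the reset delay is a hypothetical parameter standing in for the finite speed of the reset path:

```python
def pfd(ref_edges, vco_edges, reset_delay=1e-9):
    """Tri-state PFD model: UP is set by a reference edge, DN by a
    (divided) VCO edge, and both are cleared a finite reset_delay after
    both are high.  Returns a list of (time, up, dn) events."""
    events = sorted([(t, "ref") for t in ref_edges] +
                    [(t, "vco") for t in vco_edges])
    up = dn = False
    out = []
    for t, src in events:
        if src == "ref":
            up = True
        else:
            dn = True
        out.append((t, up, dn))
        if up and dn:
            up = dn = False
            out.append((t + reset_delay, up, dn))  # finite reset -> dead zone
    return out

# Reference leading the divided VCO by 200 ns: UP pulses ~200 ns wide
for ev in pfd([0e-6, 1e-6, 2e-6], [0.2e-6, 1.2e-6, 2.2e-6]):
    print(ev)
```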
Loop Filter
A third-order loop filter was designed in order to provide good filtering of the ripples in the VCO tuning voltage, while achieving a wide bandwidth for the PLL. It is very important to select the loop filter component values properly. The design equations for a third-order loop filter can be found in [15]. Figure 5a shows the simulated PLL response to a number of steps in the input frequency, which shows a settling time of less than 1.5 μs. The Nichols chart of the loop is shown in Figure 5b. The loop was designed for a phase margin of 47 degrees, which is a good compromise between stability and settling time.
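The stability check can be reproduced numerically. In the sketch below only K VCO and N are taken from the text; the charge-pump current and the filter component values are illustrative assumptions (not the fabricated values), and the topology is the common C1 in parallel with a series R2-C2 branch, followed by an R3-C3 ripple pole:

```python
import numpy as np

Icp  = 100e-6               # charge-pump current [A] (assumed)
Kvco = 2 * np.pi * 218e6    # VCO gain [rad/s per V], steeper slope of Figure 4
N    = 128                  # feedback division ratio
C1, R2, C2 = 33e-12, 10e3, 1.6e-9   # second-order section (illustrative)
R3, C3     = 5e3, 32e-12            # extra ripple-filter pole (illustrative)

def Z(s):
    """Loop-filter transimpedance: C1 || (R2 + C2), cascaded with an
    R3-C3 pole; loading between the two sections is neglected."""
    z = (1 + s*R2*C2) / (s*(C1 + C2) * (1 + s*R2*C1*C2/(C1 + C2)))
    return z / (1 + s*R3*C3)

def open_loop(f):
    s = 2j * np.pi * f
    return Icp / (2*np.pi) * Z(s) * Kvco / s / N

f = np.logspace(3, 8, 200_000)
g = open_loop(f)
k = np.argmin(np.abs(np.abs(g) - 1.0))   # unity-gain crossover
print(f"crossover ~ {f[k]/1e3:.0f} kHz, "
      f"phase margin ~ {180 + np.degrees(np.angle(g[k])):.0f} deg")
```

With these assumed values the script reports a phase margin in the mid-forties of degrees, in the same region as the 47 degrees designed for.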
Measurement Results of PVCO Transmitters
All the circuits were fabricated in a 6-metal layer, 0.18 μm CMOS technology, with a 2 μm thick top-metal layer. Figure 6 shows a photomicrograph of the fabricated 2.4 GHz transmitter that occupies an area of only 0.6 mm 2 including the pads. The inductors, all major interconnections and the RF pads were laid out using the top metal layer to minimize parasitic effects following the approach in [16]. The fabricated chips were tested on-wafer using RF probes of a ground-signal-ground (GSG) configuration for the RF signals and a single pad for DC connections. Measurements were performed by connecting one side of the differential output to the instrument and dummy-loading the other side with a 50 Ω load.
The output signal was measured using an Agilent-E4440A spectrum analyzer and the baseband modulation was provided using a Wavetek-178 waveform synthesizer. An HP4145B semiconductor parameter analyzer was used to provide the biasing and to measure the DC power; however, a DC battery box was used to bias the circuit for the phase noise measurements. This was done to avoid the increase in phase noise introduced by the output of the semiconductor parameter analyzer. The schematic and photograph of the actual experimental setup used for the measurement of the direct-modulation transmitters are shown in Figure 7a and b, respectively. Figure 8 shows the measured and simulated output power and the measured drain efficiency of the 2.4 GHz design as a function of the supply voltage. The drain efficiency is defined as the ratio of the output power delivered to the load to the DC power consumed from the supply. During this sweep, the tail current source (transistor M 5 in Figure 2) was biased in the linear region to test the maximum effect of the supply voltage variation. The output power follows the expected square-law relationship with the supply voltage; the efficiency, however, increases to a peak value of 27% and then slightly decreases. The increase is due to the varying bias of the MOSFET devices as the supply voltage varies, resulting in a variation in the transconductance of the devices (g m ). The MOSFET devices used in the cross-coupled VCO topology have equal drain and gate biasing, and g m tends to increase rapidly with biasing to a maximum value, after which it begins to decrease. As the supply voltage increases further, the gate biasing of the active devices increases, resulting in a longer conduction period, which drops the efficiency, as in class-A amplifiers [17]. The power-added efficiency (PAE) in such a design is equal to the drain efficiency, since the power gain can be considered infinite. This is due to the fact that the input power, which is required to drive the varactors by the digital baseband signal, is negligible. Figure 8 also shows the spectrum of the output signal. The second and third order harmonics are approximately 30 dB below the fundamental, and no on-chip filter was used to suppress the higher order harmonics. An external narrow-band antenna can be used to suppress the higher-order harmonics [18][19][20], thus avoiding the need for an on-chip filter and maintaining the achieved efficiency. Figure 9 shows the measured output power and drain efficiency of the 2.4 GHz design as a function of the tail current source biasing and the tuning voltage applied to the varactors at a supply voltage of 1.5 V. The gate biasing of the tail current source can be used to control the output power. As the tail current bias increases, the oscillator moves from the current limited regime to the voltage limited regime at a transition gate bias of 0.7 V. Increasing the gate bias of the tail current source no longer has an effect on the output power in the voltage limited regime, since the tail current source moves out of saturation and enters the linear region. The efficiency, however, increases slightly, since the resistance of the tail current source decreases as V gs increases, causing less power loss in the active device. The output power drops slightly as the varactor tuning voltage increases, since the quality-factor of the varactors goes down, which results in a drop in efficiency.
To improve the drain efficiency, several variations were investigated: changing the size of the filtering capacitors applied to the DC supply, using higher Q-factor varactors, using smaller DC-blocking capacitors, using a smaller bias resistor R, and finally using an inductor laid out over a deep n-well isolated substrate [10]. Of these, increasing the DC supply filtering capacitors and reducing the size of the blocking capacitors improved the drain efficiency from 27% to 36% for the 2.4 GHz design.
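As a quick sanity check on these figures (the DC power values below are implied by the reported output power and efficiencies, not separately stated):

```python
def dbm_to_mw(p_dbm):
    return 10 ** (p_dbm / 10.0)

p_out = dbm_to_mw(8.0)            # the 2.4 GHz design delivers 8 dBm (~6.3 mW)
for eff in (0.27, 0.36):          # drain efficiency before / after the changes
    print(f"eff {eff:.0%}: implied DC power ~ {p_out/eff:.1f} mW")
```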
Performance of 2.4 GHz PVCO Transmitter
The output frequency of oscillation of the 2.4 GHz PVCO as a function of the tail current source bias, the supply voltage and the tuning voltage is shown in Figure 10. The voltages were swept over the ranges for which the circuit oscillates. The circuit does not oscillate for supply voltages below 1.2 V due to the low gain and the stacked transistors in this topology. Low gate biases applied to the tail current source also cause the oscillation to stop, since the current flowing into the circuit will be too low, resulting again in a very low gain in the transistors. The output frequency is a strong function of the tail current source bias in the current limited regime; however, it is hardly affected by the tail current bias in the voltage limited regime, since the DC current in the circuit no longer depends on the tail current source, as previously shown in Figure 9. The supply voltage was kept constant at 1.5 V as the tail current bias was varied. In the case of the supply voltage sweep, the tail current bias was kept constant at 1.5 V to keep the circuit operating in the voltage limited regime, so that the effect of varying the supply voltage could be seen; this effect is negligible in the current limited regime. To minimize the effects of supply voltage ripple, the circuit can therefore be operated in the current limited mode. Figure 10 shows that the output frequency is mostly affected by the tuning voltage, which gives a tuning range of 9% at about 0.2 GHz/V. A wide tuning range is not needed in this circuit, since the tuning voltage is only used to modulate the output signal. The tuning curve exhibits poor linearity over the whole range of the sweep, since the device used is a MOS varactor, which is known to be very non-linear [21,22], especially in the depletion-to-accumulation transition region. However, since the modulation input only varies the varactor voltage in the millivolt range, the modulation can be considered reliable within the small ranges (0 V to 0.4 V or 0.5 V to 1 V) where the tuning curve is almost linear.
The phase noise measurements of the 2.4 GHz design are shown in Figure 11 for various control voltages applied to the gate of the tail current source (transistor M 5 ). The phase noise generally decreases as the circuit moves further into the voltage limited regime. Since the oscillation frequency of the circuit in the current limited mode is very sensitive to the value of the biasing current, as previously shown in Figure 10, slow random fluctuations in the tail current cause the frequency to jitter, resulting in phase noise. Low-frequency flicker noise [23,24] generated by the tail current source modulates the biasing current and the amplitude of the output signal and upconverts into 1/f 3 phase noise. The 1/f 2 phase noise is also reduced in the voltage limited mode, since the current source is the main contributor of 1/f 2 noise through down-conversion of white channel noise at the current source at twice the output frequency [25]. The phase noise is not shown in Figure 11 for higher voltage levels, since those curves follow the same trend.
In Figure 11, the curve for a 0.6 V control voltage stands out with an increase in 1/f 3 phase noise. A closer look at the curves shows that the phase noise is low at the bias points both below and above 0.6 V. Based on the results previously shown in Figure 9 and Figure 10, at low biases such as 0.55 V the circuit operates in the current limited mode. At gate biases above 0.7 V, the tail current source enters the linear region and loses its effect on the circuit, which makes the circuit operate in the voltage limited regime. At some point in between, the circuit goes through a transition from one mode to the other, during which the two modes can compete, causing an increase in phase noise. An important practical consequence of this behavior is that such bias points should be avoided to improve the phase noise performance: the tail current source should either be biased in weak inversion or biased with a gate voltage high enough that the device enters the triode mode.
The 2.4 GHz design has a phase noise of −90 dBc/Hz at a 50 kHz offset and a phase noise of −122 dBc/Hz at a 1 MHz offset with a bias of 1.5 V applied to transistor M 5 , hence operating in the voltage limited regime. Even though the proposed PVCO is fabricated in CMOS, it achieves phase noise comparable to low power CMOS designs and high power designs in other technologies [7,8,10,11]. Figure 12 shows the measured output power spectrum of the 2.4 GHz design in FSK mode taken from a single ended output at a supply voltage of 1.5 V. The two peaks are 380 kHz apart, modulated by a 1 MHz, 5 mV square wave applied to the varactors.

Figure 13 shows a photomicrograph of the fabricated 433 MHz transmitter that occupies an area of only 0.9 mm 2 including the pads, with all components fully integrated. The increase in area compared to the 2.4 GHz design is due to the need for a larger inductor (18 nH), which required 7.5 turns in this case. Figure 14 shows the measured and simulated output power and the measured drain efficiency of the 433 MHz design as a function of the supply voltage. The sweep was done over the voltage range for which the circuit oscillates. Since the output voltage swing increases with a higher supply voltage, the output power also increases. The discrepancy between the measured and simulated output power in this design is higher than in the 2.4 GHz design, since the quality factor of the inductor estimated by the equivalent circuit model was not accurate at this frequency. At a supply voltage of 1.2 V, the drain efficiency had already reached its peak value and started to degrade slightly, since larger transistors, which have a higher g m , were used in this design than in the 2.4 GHz design. The output frequency of oscillation of the 433 MHz design is shown in Figure 15 as a function of the supply voltage, the tuning voltage, the modulation voltage and the tail current source bias. Based on the modulation voltage signal alone, the circuit has a tuning range of 16%, which is larger than that of the 2.4 GHz design. The phase noise performance of the 433 MHz design has a similar behavior to that of the 2.4 GHz design, as previously shown in Figure 11. The circuit has a phase noise of −84 dBc/Hz at a 50 kHz offset and a phase noise of −120 dBc/Hz at a 1 MHz offset with a bias of 1.5 V applied to transistor M 5 . The phase noise is slightly degraded compared to the 2.4 GHz design, mainly due to the lower quality factor of the large 18 nH on-chip inductor used in the 433 MHz design.
Performance Comparison of PVCO Transmitters
The well-established figure-of-merit (FoM) used in oscillator designs [3] is not suitable for comparing PVCOs, since it does not include a term for the transmitted output power. A modified FoM was therefore used to compare the performance of PVCOs. In this FoM, P out is the RF output power and P dc is the DC power consumption of the oscillator, normalized to 1 mW. The ratio of P out to P dc is a measure of the drain efficiency of the PVCO. Parameter ω 0 is the angular frequency of oscillation, ∆ω is the offset frequency at which the phase noise L(∆ω) is measured, and Q is the quality-factor of the inductor used in the tank circuit. Table 1 summarizes the performance comparison of the designed PVCO direct-modulation transmitters with previously published VCO implementations using the proposed FoM [7,8,11,[26][27][28]. The work presented in this paper shows that the CMOS PVCO can achieve an acceptable performance level comparable to other technologies, while at the same time providing a high level of integration at low cost.
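A FoM of this kind can be written compactly; the form below is a sketch consistent with the quantities just listed, and the exact grouping of the terms is an assumption rather than a quoted formula:

$$\mathrm{FoM} = 10\log_{10}\!\left[\frac{P_{out}}{P_{dc}}\cdot\left(\frac{\omega_0}{Q\,\Delta\omega}\right)^{2}\cdot\frac{1}{L(\Delta\omega)}\right]$$

with L(∆ω) expressed as a linear power ratio. Dividing by Q 2 removes the advantage that a high-quality tank would otherwise give, so designs forced to use low-Q integrated inductors are not penalized in the comparison.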
Measurement Results of PLL Transmitter

Figure 16 shows a photomicrograph of the fabricated transmitter that occupies an area of only 0.77 mm 2 , including the pads [17]. The inductor and all major interconnections were laid out using the top metal layer to minimize parasitic effects, and only the top metal layer was used for the RF pads to minimize parasitic capacitances. Separate biasing pads and lines were used for the digital and analog circuits, and the logic circuits were laid out in a deep n-well to isolate their substrate from the substrate carrying the high power VCO signals, following the layout considerations from [13,16]. The same experimental set-up as shown in Figure 7 was used for the measurement of the PLL transmitter. The circuit was tested with a supply voltage of 1.8 V for both the digital and analog parts. When applying a 0.55 V rms, 15.8 MHz reference signal, the measured output power is 9 mW with an output frequency of 2.0224 GHz. The DC current consumed by the PLL is 2 mA, while the current consumed by the PVCO is 24 mA. This results in a total drain efficiency of 19%. The PLL has a lock range of 128 MHz and a capture range of 104 MHz.
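These reported numbers are internally consistent; a quick check using only the values above:

```python
f_ref, N = 15.8e6, 128
p_out, v_dd = 9e-3, 1.8
i_pll, i_pvco = 2e-3, 24e-3
print((N * f_ref) / 1e9)                  # 2.0224 GHz output frequency
print(p_out / (v_dd * (i_pll + i_pvco)))  # ~0.19 -> the reported 19% efficiency
```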
An FSK modulated input was applied at the reference point with a modulation rate of 10 kbps and a deviation frequency of 1.172 kHz. The measured output spectrum, averaged over 200 samples, is shown in Figure 17. The figure shows a frequency deviation of 150 kHz from the center, which is equivalent to 128 times the deviation applied at the reference point. Figure 18a shows the measured output spectrum in the best case, with the largest spurs 70 dB below the carrier, whereas Figure 18b shows the worst case, with spurs 40 dB below the carrier. The change from Figure 18a to Figure 18b was produced by increasing the reference frequency from 15.8 MHz to 16 MHz, which corresponds to a change in the output frequency from 2.0224 GHz to 2.048 GHz. These spurs can be attributed to mismatch between the NMOS and PMOS devices in the charge pump, or to substrate coupling caused by the high power signals of the VCO. However, since the spurs are more than 40 dB below the carrier, they will not have a significant effect on the system's performance for such applications. Table 2 summarizes the performance evaluation of the 2.4 GHz and 433 MHz PVCO transmitters together with the 2.0 GHz PLL transmitter.
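The deviation scaling follows directly from the division ratio: any frequency deviation injected at the reference is multiplied by N at the output.

```python
N = 128
dev_in = 1.172e3        # deviation applied at the reference [Hz]
print(N * dev_in)       # 150016 Hz ~ the 150 kHz deviation seen at the output
```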
Conclusions
In this paper, the design and implementation of 2.4 GHz and 433 MHz PVCO-based and 2.0 GHz PLL-based direct-modulation transmitters are presented for short-range wireless applications. All three RF transmitters were fabricated in a standard CMOS technology. The PVCO transmitters are compared with previously reported implementations using the proposed figure-of-merit. Measurement results of the 2.4 GHz transmitter demonstrate an improved drain efficiency of 36%. The 2.4 GHz and 433 MHz PVCO transmitters deliver output powers of 8 dBm and 6.5 dBm with phase noise of −122 dBc/Hz and −120 dBc/Hz at a 1 MHz offset, respectively. The PLL transmitter measurement results demonstrate an output power of 9 mW with a total DC current of 26 mA from a 1.8 V supply. The transmitter die areas are 0.6 mm 2 , 0.9 mm 2 and 0.77 mm 2 for the 2.4 GHz, 433 MHz and 2.0 GHz designs, respectively. The results demonstrate that the proposed circuits can achieve an acceptable performance level with a considerable reduction of transmitter die area and power consumption, leading to simpler and more efficient designs suitable for emerging low-power applications such as WSNs and WBANs. | 2014-10-01T00:00:00.000Z | 2013-08-01T00:00:00.000 | {
"year": 2013,
"sha1": "fb40fe01d5a89703cab082530f5218fc74be0a9e",
"oa_license": "CCBY",
"oa_url": "http://www.mdpi.com/1424-8220/13/8/9878/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "fb40fe01d5a89703cab082530f5218fc74be0a9e",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Computer Science",
"Medicine",
"Engineering"
]
} |
214795185 | pes2o/s2orc | v3-fos-license | Generalization of Kimberling's concept of triangle center for other polygons
In this article we introduce a general definition of the concept of center of an $n$-gon, for $n\geq 3$, generalizing the idea of C. Kimberling for triangles. We define centers associated to functions instead of to geometrical properties. We discuss the definition of those functions in terms of either the vertices of the polygon or the lengths of its sides and diagonals. We explore the problem of characterizing regular polygons in terms of these $n$-gon center functions and we study the relation between our general definition of center of a polygon and other approaches arising from Applied Mathematics.
Introduction
C. Kimberling [8,9] in the second half of the 20th century decided to give a unified definition of triangle center, including the classical centers (incenter, barycenter, circumcenter and orthocenter) and many others (Steiner point, Fermat point, . . . ). Moreover, he created an encyclopedia [10] intended to contain all known triangle centers. His new idea was to consider triangle centers as functions of the sidelengths, instead of loci.
Following this spirit, we provide a definition of center of a polygon as a function (n-gon center functions). We also provide a geometric interpretation of these center functions as points in the plane. Although some works in the literature study "centers" of polygons (see for example [1,7,13]), as far as we know there is no general definition for this concept.
The main obstacle is that n-gons are not determined by their sidelengths for n ≥ 4, so we have defined centers as functions of the vertices (Definition 9). To connect with the definition given by Kimberling for triangles, we also provide an alternative and equivalent definition of the concept of center involving not only the sidelengths but also the lengths of the diagonals, which also seems fruitful for encompassing some of the well-known examples of polygon centers (Definition 12).
In [1] the authors already studied the problem of exploring "the degree of regularity implied by the coincidence of two or more" centers for quadrilaterals. In that article only the center of mass of the four vertices, the center of mass of the four sides, the center of mass of the whole figure considered as a lamina of uniform density and the Fermat-Torricelli center are considered. This problem is related to the fact that for squares those four centers coincide. In our new general setting, all the centers (in the sense defined below) coincide for regular polygons (see Proposition 11).

Basics about triangle centers

As we explained in the introduction, C. Kimberling in his works [8][9][10] decided to define triangle centers as functions (the so-called triangle center functions) instead of loci in the plane. These functions also allow a geometric interpretation as points in the plane, via trilinear coordinates, as described below.
Unlike what happens for n-gons for n ≥ 4, triangles are determined by their sidelengths:

Remark 1 Every triangle T can be identified with the tuple of its three sidelengths (a, b, c) (placed in clockwise order), up to congruences.
According to this identification, we denote the set of all triangles as T = {(a, b, c) ∈ R 3 >0 : a < b + c, b < c + a, c < a + b}. Before proceeding with the definition, let us say that we denote by [a : b : c] the points in the real projective plane P 2 R, with the usual convention [a : b : c] = [λa : λb : λc] for every λ ≠ 0. Then we have:

Definition 2 (Kimberling's definition of triangle center [8][9][10]) A real-valued function f of three real variables a, b, c is a triangle center function if it satisfies the following properties: (i) Homogeneity: there exists some constant n ∈ N such that for all t ∈ R ≥0 we have f (ta, tb, tc) = t n f (a, b, c).
(ii) Bisymmetry in the second and third variables: for all (a, b, c) ∈ T , we have f (a, b, c) = f (a, c, b).
Define also the coordinate map ϕ : T → P 2 R given by ϕ f (a, b, c) = [f (a, b, c) : f (b, c, a) : f (c, a, b)]. The tuple ϕ f (a, b, c) can also be interpreted as trilinear coordinates (the first one corresponding to the side of length a, the second one to the side of length b and the last one to the one of length c). Thanks to these two interpretations, we can think of a center as, actually, a point in the plane.

Figure 1: There is a unique point P in the plane such that the relative distances from P to each one of the sides are [t 1 : t 2 : t 3 ]. [t 1 : t 2 : t 3 ] are then said to be the trilinear coordinates of P .
Example 3 (circumcenter, see [10]) The triangle center function corresponding to the circumcenter is f (a, b, c) = a(b 2 + c 2 − a 2 ), since the trilinear coordinates of the circumcenter are [a(b 2 + c 2 − a 2 ) : b(c 2 + a 2 − b 2 ) : c(a 2 + b 2 − c 2 )] = [cos A : cos B : cos C].

Kimberling's definition of triangle center function ensures that:

Property 4 (the coordinate map is well defined) The trilinear coordinates of the center are associated to the triangle independently of the labelling of the sides (but obviously re-ordered).
Property 5 (the definition is coherent with respect to similarities) Let f be a triangle center function. Let T 1 = (a, b, c), T 2 = (a', b', c') be two similar triangles such that S is the similarity with T 1 = S(T 2 ). Let P 1 , P 2 be the points with trilinear coordinates given by ϕ f (a, b, c) and ϕ f (a', b', c'), with respect to each of the triangles. Then P 1 = S(P 2 ).
In this setting, we see that if we have a triangle (a 1 , a 2 , a 3 ) and a permutation σ of the set {1, 2, 3}, then, writing (a' 1 , a' 2 , a' 3 ) = (a σ(1) , a σ(2) , a σ(3) ), we have that ϕ f (a' 1 , a' 2 , a' 3 ) is the tuple ϕ f (a 1 , a 2 , a 3 ) with its entries re-ordered by σ. There is a correspondence between trilinear coordinates and barycentric coordinates. We are interested in this second setting. Kimberling decided to use the first option, although he also claimed the existence of this correspondence: if [t 1 : t 2 : t 3 ] are the trilinear coordinates of a point P (with respect to the sides of a triangle), then [at 1 : bt 2 : ct 3 ] are the barycentric coordinates of this point (with respect to the vertices of this triangle).
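This correspondence is easy to check numerically. The sketch below converts the trilinear coordinates of the circumcenter of Example 3 into a Cartesian point (the triangle is an arbitrary example) and verifies that it is equidistant from the three vertices:

```python
import math

def trilinear_to_point(t, A, B, C):
    """Convert trilinear coordinates [t1:t2:t3] (w.r.t. sides a, b, c) into a
    Cartesian point via the barycentric correspondence [a*t1 : b*t2 : c*t3]."""
    a, b, c = math.dist(B, C), math.dist(C, A), math.dist(A, B)
    w = [a * t[0], b * t[1], c * t[2]]
    s = sum(w)
    return (sum(wi * P[0] for wi, P in zip(w, (A, B, C))) / s,
            sum(wi * P[1] for wi, P in zip(w, (A, B, C))) / s)

A, B, C = (0.0, 0.0), (4.0, 0.0), (1.0, 3.0)
a, b, c = math.dist(B, C), math.dist(C, A), math.dist(A, B)
f = lambda a, b, c: a * (b*b + c*c - a*a)     # circumcenter center function
O = trilinear_to_point((f(a, b, c), f(b, c, a), f(c, a, b)), A, B, C)
print(O)                                                  # (2.0, 1.0)
print([round(math.dist(O, P), 6) for P in (A, B, C)])     # three equal radii
```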
Polygon centers
The concept of triangle center can be generalized to polygons. First we fix the notation. For us, a polygon is a finite number of straight line segments connected in a closed chain. We say that the polygon is non-degenerate if no two vertices coincide, and that it is simple if those segments intersect only the adjacent elements of the chain, and only at a vertex. We will also introduce the following convention, which we will use throughout the paper:

Remark 7 Any n-gon can be identified with an n-tuple (V 1 , . . . , V n ) after a labelling of its vertices. This labelling is chosen in such a way that for 1 ≤ i ≤ n the segments V i , V (i+1 mod n) are edges of the polygon, and the rest of the segments joining vertices are diagonals.
Consider the dihedral group D n = ⟨ρ, σ : ρ n = id, σ 2 = id, σρσ = ρ −1 ⟩ (it has 2n elements). It can be viewed as a subset of the permutation group of the set {1, . . . , n}, determined by ρ(i) = i + 1 (mod n) and σ(i) = 2 − i (mod n). But it can also be viewed as a relabelling (in the sense explained in Remark 7) of the set of all n-gons: α · (V 1 , . . . , V n ) = (V α(1) , . . . , V α(n) ) for α ∈ D n . As in the case of triangles, we want to define the n-gon center function as a function of the vertices f (V 1 , . . . , V n ) (defined then on (R 2 ) n ) and the coordinate map. Trilinear coordinates are not a good option to provide a geometric interpretation, since they do not extend in a natural way from triangles to n-gons for n ≥ 4. So, we will use barycentric coordinates instead.
For a fixed n, we denote the set of all n-gons by P n ≈ (R 2 ) n . Note that:

Remark 8 (other domains) Sometimes, we may restrict ourselves to convex n-gons (whose vertices satisfy (1) V i ≠ V j if i ≠ j and (2) for any i, all vertices (except V i , V i+1 mod n ) lie on the same side of the line defined by V i , V i+1 mod n ) or to non-degenerate n-gons.

Now we are ready to generalize the definition of triangle center function by C. Kimberling:

Definition 9 (main definition of n-gon center function) We say that a real-valued function f (V 1 , . . . , V n ) is an n-gon center function if it satisfies the following properties: (1) Preservation with respect to relabellings: for the symmetry σ ∈ D n , f (V 1 , . . . , V n ) = f (V σ(1) , . . . , V σ(n) ). (2) Homogeneity: there exists some k ∈ N such that, for all t ∈ R ≥0 , we have that f (tV 1 , . . . , tV n ) = t k f (V 1 , . . . , V n ).
(3) Preservation with respect to motions: for every rigid motion T in the plane, f (T (V 1 ), . . . , T (V n )) = f (V 1 , . . . , V n ).

Define also the coordinate map ϕ : P n → P n−1 R given by ϕ f (V 1 , . . . , V n ) = [f (V 1 , . . . , V n ) : f (V ρ(1) , . . . , V ρ(n) ) : . . . : f (V ρ n−1 (1) , . . . , V ρ n−1 (n) )]. Note that it is not possible to define the coordinate map for n-gons such that the sum f (V 1 , . . . , V n ) + · · · + f (V ρ n−1 (1) , . . . , V ρ n−1 (n) ) vanishes. Coordinates ϕ f (V 1 , . . . , V n ) (when defined) are interpreted as barycentric coordinates with respect to the vertices. Then, the geometric interpretation of the center of a given n-gon (V 1 , . . . , V n ) associated to the n-gon center function f is the point (3.2): the affine combination of the vertices V 1 , . . . , V n whose coefficients are the normalized coordinates of ϕ f (V 1 , . . . , V n ). We want the coordinate map to satisfy an analogue of Properties 4 and 5. The next result ensures that, and may help us to clarify the notation and ideas in this paper.

Theorem 10 Definition 9 and the geometric interpretation described in (3.2) provide an analogue to Properties 4 and 5 for n-gon center functions, i.e., P1 (the coordinate map is well defined) The coordinates given by the coordinate map ϕ f are associated to each polygon independently of the labelling (but obviously re-ordered).
P2 (the definition is coherent with respect to similarities) Let f be an n-gon center function. Let N 1 = (V 1 , . . . , V n ), N 2 = (V' 1 , . . . , V' n ) be two similar n-gons such that S is the similarity with N 1 = S(N 2 ). Let P 1 , P 2 be the points with barycentric coordinates given by ϕ f (V 1 , . . . , V n ) and ϕ f (V' 1 , . . . , V' n ), with respect to each of the n-gons. Then P 1 = S(P 2 ).
Proof: To prove P1, see that for every α ∈ D n the tuple ϕ f (V α(1) , . . . , V α(n) ) is a re-ordering of the tuple ϕ f (V 1 , . . . , V n ). To prove P2, see that any similarity S can be obtained as a composition T • H of a rigid motion T and a homothety H = λ · id fixing the origin. Hence, by properties (2) and (3), ϕ f (S(V 1 ), . . . , S(V n )) = [λ k f (V 1 , . . . , V n ) : . . .] = ϕ f (V 1 , . . . , V n ) as projective tuples, and the corresponding affine combinations of vertices are carried to each other by S. We conclude this section with the following result, which states an important property. Recall that we say that an n-gon is regular if it is equiangular and equilateral. Regular n-gons can be either convex or star.
Proposition 11
For any center function f , a regular n-gon (convex or star) (V 1 , . . . , V n ) satisfies ϕ f (V 1 , . . . , V n ) = [1 : 1 : . . . : 1].

Proof: Let (V 1 , . . . , V n ) be a regular n-gon and f a center function for this n-gon. Then, for each i, the relabelled n-gon (V ρ i−1 (1) , . . . , V ρ i−1 (n) ) is also regular and corresponds to a rotation T i of 2π(i−1)/n rad of (V 1 , . . . , V n ). Since a rotation is a rigid motion in the plane, by property (3) of Definition 9 all the coordinates of ϕ f (V 1 , . . . , V n ) coincide.

The definition of center above is, in some sense, not satisfactory. The first reason is that it may not be immediate to verify condition (3). And the second one is that, traditionally, some of the most useful centers are described in terms of the sidelengths, not in terms of the coordinates of the vertices.
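Before moving to the length-based definition, a quick numerical illustration of Proposition 11; the function f below is a hypothetical example (the distance between the two neighbours of V 1 , which satisfies the σ-symmetry, homogeneity and motion-invariance of Definition 9):

```python
import math

def coordinate_map(f, V):
    """Normalized coordinates [f(V1..Vn) : f(V2..Vn,V1) : ...]."""
    n = len(V)
    w = [f(V[i:] + V[:i]) for i in range(n)]   # f on cyclic relabellings
    s = sum(w)
    return [wi / s for wi in w]

def center(f, V):
    coords = coordinate_map(f, V)
    return (sum(c * P[0] for c, P in zip(coords, V)),
            sum(c * P[1] for c, P in zip(coords, V)))

f = lambda V: math.dist(V[1], V[-1])           # hypothetical center function

pentagon = [(math.cos(2 * math.pi * k / 5), math.sin(2 * math.pi * k / 5))
            for k in range(5)]
print(center(f, pentagon))   # ~(0, 0): for a regular n-gon every center
                             # coincides with the barycenter of the vertices
```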
We again need to establish some conventions. Let (V 1 , . . . , V n ) be an n-gon. We will denote by d ij the length of the segment with endpoints V i , V j . It is obvious that d ij = d ji . If j = i + 1 mod n, then d ij is a sidelength. We will write e ij instead of d ij when we want to emphasize that we are referring to sidelengths.
An n-gon is not completely determined by its sidelengths; some of the lengths of the diagonals are required to determine it up to congruence. The set of all the sidelengths and of the lengths of the diagonals of an n-gon must satisfy some compatibility conditions. For example, consider a quadrilateral with sidelengths e 12 , e 23 , e 34 , e 41 and diagonals d 13 , d 24 . According to the Cayley-Menger determinant formula for the volume of a 3-dimensional tetrahedron (see [15]), the four vertices are coplanar exactly when the corresponding Cayley-Menger determinant, built from the six squared lengths, vanishes. So we will identify an n-gon with the tuple of its sidelengths and diagonal lengths, and sometimes, if it is simpler for the corresponding formulas, with the n · (n − 1)-tuple (d ij ) with i ≠ j, bearing in mind that some of the entries d ij are redundant.
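The compatibility condition is easy to test numerically; the sketch below builds the 5×5 Cayley-Menger matrix for four points and checks that the determinant vanishes for the six lengths of a unit square:

```python
import numpy as np

def cayley_menger(d):
    """5x5 Cayley-Menger determinant for four points with pairwise
    distances d[i][j]; it is proportional to the squared volume of the
    tetrahedron they span, hence zero for a planar quadrilateral."""
    n = 4
    M = np.ones((n + 1, n + 1))
    M[0, 0] = 0.0
    for i in range(n):
        for j in range(n):
            M[i + 1, j + 1] = d[i][j] ** 2
    return np.linalg.det(M)

# Unit square: sides 1, diagonals sqrt(2)
s2 = 2 ** 0.5
d = [[0, 1, s2, 1], [1, 0, 1, s2], [s2, 1, 0, 1], [1, s2, 1, 0]]
print(round(cayley_menger(d), 9))   # 0.0 -> the six lengths are compatible
```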
In this setting we can define the n-gon center function as a real-valued function g depending on the sidelengths and the lengths of the diagonals (defined then in R (n(n−1)) ), instead of the vertices, as follows.
Definition 12 (definition of n-gon center function in terms of lengths) We say that a realvalued function g(d ij ), i, j = 1, . . . , n, is a n-gon center function if it satisfies the following properties: (1') Preservation with respect to relabellings: for the symmetry σ ∈ D n : g(d ij ) = g(d σ(i),σ(j) ).
(2') Homogeneity: there exists some k ∈ N such that, for all t ∈ R ≥0 , we have that g(t·d ij ) = t k ·g(d ij ).
In this context, the geometric interpretation of the center is again (in those cases where the coordinate map is defined) the affine combination of the vertices with coefficients [g(d ij ) : . . . : g(d ρ n−1 (i),ρ n−1 (j) )], where g(d ij ), . . . , g(d ρ n−1 (i),ρ n−1 (j) ) are the normalized coordinates as done in (3.3).
The next result ensures that both definitions of center function are compatible.
Theorem 13 (equivalence between definitions 9 and 12) Given an n-gon center function f (V 1 , . . . , V n ), it is possible to find an n-gon center function g(d ij ) such that the geometric interpretations (if they exist) of the centers corresponding to f and g coincide for every element in dom(f ) ⊂ P n , and vice versa.
Proof: Suppose that we have an n-gon with vertices (V 1 , . . . , V n ). The vertices determine univocally the lengths d ij = |V i − V j |. On the other hand, the lengths d ij determine, modulo congruence, the vertices (V 1 , . . . , V n ). So we can consider the vertices V k as functions V k (d ij ) of the lengths, if we impose V 1 = (0, 0), V 2 = (d 12 , 0) and that the rest of the vertices are ordered clockwise.
Next, see that if g(d ij ) is a center function in the sense of Definition 12, then it is easy to find an associated center function f (V 1 , . . . , V n ) in the sense of Definition 9 of the form f (V 1 , . . . , V n ) = g(|V i − V j |). We just have to prove that if g satisfies properties (1') and (2'), then this f satisfies properties (1), (2), and (3). First see that, for σ ∈ D n as defined in (3.1), we have that f (V σ(1) , . . . , V σ(n) ) = g(d σ(i),σ(j) ) = g(d ij ) = f (V 1 , . . . , V n ) by property (1'). Hence, property (1) holds. Now, see that f (tV 1 , . . . , tV n ) = g(t · d ij ) = t k · g(d ij ) = t k f (V 1 , . . . , V n ) by property (2'). So, property (2) also holds. Finally, the proof of property (3) is immediate: all congruent n-gons have the same sidelengths and diagonals. 2
Some examples
In this section we present some of the more relevant centers for polygons. Most of them arise from important problems in Applied Mathematics, and can be naturally defined as an affine combination of the vertices. The coefficients of this combination are functions of either the vertices or of the sidelengths and lengths of the diagonals. Some of these examples can be found in [1] in the particular case n = 4 (quadrilaterals).
Example 14 (centroid, barycenter or center of mass of the vertices) The barycenter of a polygon with vertices V 1 , . . . , V n , or the center of mass of the vertices (provided that all the vertices have the same weight), is the point G 0 (V 1 , . . . , V n ) = (V 1 + · · · + V n )/n (5.1). So, the associated center function can be chosen to be f 0 (V 1 , . . . , V n ) = 1 and the coordinate map is ϕ f0 (V 1 , . . . , V n ) = [1 : . . . : 1] (recall that the coefficients for the affine combination are the normalized ones).
Example 15 (center of mass of the perimeter of a convex polygon) The center of mass of the perimeter of a convex polygon (provided that all the points in the perimeter have the same weight) with vertices V 1 , . . . , V n is the point (see [7]) G 1 (V 1 , . . . , V n ) = (1/L) Σ n i=1 e i,i+1 (V i + V i+1 )/2, where L is the perimeter. So, the associated center function can be chosen as f 1 (V 1 , . . . , V n ) = e n,1 + e 1,2 .

Example 16 (centroid of the polygonal lamina) The centroid of a polygonal lamina with vertices V 1 , . . . , V n is the point given by the Shoelace Formula (see [4]): G 2 = (1/(6A)) Σ n i=1 (V i + V i+1 )(x i y i+1 − x i+1 y i ), where A is the signed area (5.2). This is not an affine combination, since the coefficients of the vertices in (5.2) do not add up to 1, so this is not the geometric interpretation of a center in our sense.

To include this important center in the setting of our definition we are going to modify (5.2). If the n-gon (V 1 , . . . , V n ) is convex, G 2 (V 1 , . . . , V n ) can be computed via "geometric decomposition" as the area-weighted average of the centroids of the triangles (B, V i , V i+1 ), for B = G 0 (V 1 , . . . , V n ) (see (5.1)), which, according to the Shoelace Formula (see [14]) and to the formula of the centroid of a triangular lamina (the classical centroid of the triangle), can be rewritten as an affine combination (5.3) of the vertices V 1 , . . . , V n . Expression (5.3) does correspond to the geometric interpretation of a center, with an associated center function f 2 .
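The three centroids G 0 , G 1 , G 2 are straightforward to compute and, for a generic polygon, are three different points; a sketch (the quadrilateral is an arbitrary example):

```python
import math

def vertex_centroid(V):                      # G0: equal point masses at vertices
    n = len(V)
    return (sum(x for x, _ in V) / n, sum(y for _, y in V) / n)

def perimeter_centroid(V):                   # G1: uniform wire along the edges
    n = len(V); sx = sy = L = 0.0
    for i in range(n):
        P, Q = V[i], V[(i + 1) % n]
        l = math.dist(P, Q); L += l
        sx += l * (P[0] + Q[0]) / 2          # edge midpoint, weight = length
        sy += l * (P[1] + Q[1]) / 2
    return (sx / L, sy / L)

def lamina_centroid(V):                      # G2: uniform lamina (shoelace)
    n = len(V); a = cx = cy = 0.0
    for i in range(n):
        (x0, y0), (x1, y1) = V[i], V[(i + 1) % n]
        cr = x0 * y1 - x1 * y0               # cross term of consecutive vertices
        a += cr; cx += (x0 + x1) * cr; cy += (y0 + y1) * cr
    return (cx / (3 * a), cy / (3 * a))      # a = 2A, so this is 1/(6A) * sums

quad = [(0, 0), (4, 0), (4, 1), (0, 3)]      # an arbitrary convex quadrilateral
for g in (vertex_centroid, perimeter_centroid, lamina_centroid):
    print(g.__name__, g(quad))               # three generally different centers
```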
To include this important center in the setting of our definition we are going to modify (5.2). G 2 (V 1 , . . . , V n ), if the n-gon (V 1 , . . . , V n ) is convex, can be computed via "geometric decomposition" as for B = G 0 (V 1 , . . . , V n ) (see (5.1)), which, according to the Shoelace Formula (see [14]) and to the formula of the centroif of a triangular lamina (the classical centroid of the triangle), equals to where: Expression (5.3) does correspond to the geometric interpretation of a center with center function Example 17 (medoid) The medoid of the set of vertices V 1 , . . . , V n is the point G 3 such that (see, for example, the recent work [2]): The medoid is not well defined for any n-gon (this minimum may be reached by two or more of the vertices). The medoid can also be considered as a center in our sense. In this case the center function is: 6 Characterization of n-gons using centers (specially mention to quadrilaterals) The idea of characterizing regular polygons using n-gon center functions was one of the main reasons of our interest in this topic, in connection to other geometric problems. This study was already started in [1] for quadrilaterals. We say that: Definition 18 A set of center functions f 1 , . . . , f k with associated coordinate maps ϕ f1 , . . . , ϕ f k characterizes a family F of n-gons if (V 1 , . . . , V n ) ∈ F if and only if: If a family F is characterized by a set of center functions then it must be closed under congruences and it must contain regular n-gons (convex and star, see Proposition 11).
Regular triangles (for triangles, equilaterality and equiangularity are equivalent properties) are characterized by just one center function. Take for example f (V 1 , V 2 , V 3 ) = |V 2 − V 3 |. Equiangular quadrilaterals (rectangles and their non-simple version, called crossed rectangles), provided that they are non-degenerate, are also characterized by one center function, built from the cosines of the interior angles. This is not so trivial: the cosine of two angles being equal does not imply that the angles are equal, only that they are equal or complementary. But in this case this is not a problem, since the sum of the angles of a non-degenerated quadrilateral must be less than or equal to 2π rad.
However, there is no center or set of centers characterizing either equilateral quadrilaterals (rhombi) or regular quadrilaterals (squares). The following results formalize this idea:

Theorem 19 Equiangular n-gons can be characterized by one center function, provided that they are non-degenerate and convex.
Proof: The n-gon center function that characterizes equiangular n-gons (provided that they are convex, so that the angle between two adjacent sides is less than π rad) is again the cosine-based function (6.1) used above for quadrilaterals. In Figure 2 we show a pentagon which, despite not being equiangular, is also included in a family of polygons characterized by the center (6.1).

Figure 2: This pentagon is obtained as the union of two equilateral triangles. So all the angles equal π/3, except the one over V 3 , which equals 2π − π/6.

Theorem 20 For n ≥ 3 an odd number, equilateral n-gons can be characterized by one center function.
For n ≥ 4 an even number, equilateral n-gons cannot be characterized using n-gon center functions. This is a consequence of the fact that any n-gon center function f must give ϕ f (V 1 , . . . , V n ) = [1 : . . . : 1] for every member of S n , where S n is the family of equiangular n-gons (V 1 , . . . , V n ) such that |V ρ(i) − V ρ(j) | = |V σ(i) − V σ(j) | (a generalization of rectangles for n ≥ 6). This family can be characterized by one n-gon center function.
Proof: The n-gon center function that characterizes equilateral n-gons for n odd is f (V 1 , . . . , V n ) = |V 1 − V 2 | + |V 1 − V n | (the sum of the lengths of the two sides meeting at V 1 ): for n odd, all the cyclic values of this function being equal forces all the sides to be equal, while for n even the sidelengths may alternate between two values. The fact that any n-gon center function must characterize S n is immediate from the "preservation with respect to relabellings" property. A center function characterizing S n can be built analogously from the angle and length conditions defining it.

Theorems 19 and 20 together imply:

Corollary 21 For n ≥ 3 odd, regular n-gons can be characterized by two center functions, provided that they are convex. For n ≥ 4 even, they cannot.
Note that Theorems 3.1, 3.2, 3.3, 3.4, 3.5 in [1] are compatible with the results proved here, although those authors are only interested in some particular quadrilateral centers.
Other possible definitions of center
Some of the centers arising from Applied Mathematics are defined by an implicit equation involving the vertices, or as the solution of an optimization problem (see Example 17). Besides, the term "center" appears in a different setting for compact length spaces in [13], from a totally different approach. This leads to the two following alternative definitions of center:

Definition 22 Let F (P, V 1 , . . . , V n ) be a map satisfying the following properties: (a) For every (V 1 , . . . , V n ) in P n (a non-degeneracy property can additionally be required), F (P, V 1 , . . . , V n ) = 0 defines P univocally.
(d) Preservation by motions: for every rigid motion T in the plane, F (T (P ), T (V 1 ), . . . , T (V n )) = F (P, V 1 , . . . , V n ).

We say that the point P ensured by (a) is an implicit center of (V 1 , . . . , V n ).
Definition 23 Let G(P, V 1 , . . . , V n ) be a real function defined on P n (a non-degeneracy property can additionally be required for the domain) such that: (a') For every (V 1 , . . . , V n ) in P n there exists a unique X(V 1 , . . . , V n ) = arg min P ∈R 2 (G(P, V 1 , . . . , V n )).
We will say that the point X(V 1 , . . . , V n ) ensured by (a') is a minimal center of (V 1 , . . . , V n ).
Some examples of well-known points usually called "centers" that could be naturally included in these different definitions are:

Example 25 (geometric median of the vertices) The geometric median of the set of vertices V 1 , . . . , V n of an n-gon is the point X minimizing the sum of the distances to the vertices. Thus, it could be naturally considered as a minimal center defined by (see [6]) G(P, V 1 , . . . , V n ) = |P − V 1 | + · · · + |P − V n |. Provided that X is distinct from any vertex, it can also be described as an implicit center by the formula Σ n i=1 (X − V i )/|X − V i | = 0. It is known that there is no explicit "simple" formula for the geometric median or its coordinates (see [3]).
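Although no closed formula exists, the geometric median is easy to approximate; the sketch below uses Weiszfeld's fixed-point iteration, a standard algorithm not taken from the paper:

```python
import math

def geometric_median(V, iters=200, eps=1e-12):
    """Weiszfeld's iteration for the geometric median of a point set."""
    x = sum(p[0] for p in V) / len(V)        # start at the vertex centroid
    y = sum(p[1] for p in V) / len(V)
    for _ in range(iters):
        wx = wy = w = 0.0
        for px, py in V:
            d = math.hypot(x - px, y - py)
            if d < eps:                      # iterate landed on a vertex
                return (x, y)
            wx += px / d; wy += py / d; w += 1 / d
        x, y = wx / w, wy / w                # distance-weighted average
    return (x, y)

print(geometric_median([(0, 0), (4, 0), (4, 3), (0, 3)]))  # rectangle -> (2, 1.5)
```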
Example 26 (Chebyshev center) The Chebyshev center of a bounded set Q is the center X of the minimal-radius ball enclosing the entire set Q (see [5]). It is described as a minimal center by the formula G(P, Q) = sup q∈Q |P − q|, so that X = arg min P sup q∈Q |P − q|.
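For the finite vertex set of a polygon, the Chebyshev center can be found by brute force over all candidate circles; this sketch is fine for small n (Welzl's algorithm would be the efficient choice):

```python
import math
from itertools import combinations

def chebyshev_center(V):
    """Minimal enclosing circle of a small point set: try every circle
    through two points (as a diameter) or three points (circumscribed)."""
    def cover(c, r):
        return all(math.dist(c, p) <= r + 1e-9 for p in V)
    best = None
    for P, Q in combinations(V, 2):                      # diameter circles
        c = ((P[0] + Q[0]) / 2, (P[1] + Q[1]) / 2)
        r = math.dist(P, Q) / 2
        if cover(c, r) and (best is None or r < best[1]):
            best = (c, r)
    for P, Q, R in combinations(V, 3):                   # circumscribed circles
        d = 2 * (P[0]*(Q[1]-R[1]) + Q[0]*(R[1]-P[1]) + R[0]*(P[1]-Q[1]))
        if abs(d) < 1e-12:
            continue                                     # collinear triple
        ux = ((P[0]**2+P[1]**2)*(Q[1]-R[1]) + (Q[0]**2+Q[1]**2)*(R[1]-P[1])
              + (R[0]**2+R[1]**2)*(P[1]-Q[1])) / d
        uy = ((P[0]**2+P[1]**2)*(R[0]-Q[0]) + (Q[0]**2+Q[1]**2)*(P[0]-R[0])
              + (R[0]**2+R[1]**2)*(Q[0]-P[0])) / d
        c, r = (ux, uy), math.dist((ux, uy), P)
        if cover(c, r) and (best is None or r < best[1]):
            best = (c, r)
    return best

print(chebyshev_center([(0, 0), (4, 0), (4, 3), (0, 3)]))  # ((2, 1.5), 2.5)
```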
Final comments
During the development of this article, some questions have arisen:

Open Question 27 Can regular n-gons, for n odd, be characterized by only one center function?
Open Question 28 What do we know about the characterization problem when we do not have the restriction that the polygons be convex?
Open Question 29 Is every implicit center in the sense of Definition 22 a center in the sense of Definition 9? Note that Example 25 shows that the corresponding center function may not be trivial at all to find.
P n is naturally a D n -space (a topological space endowed with a group of symmetries, see [11]). In this context, coordinate maps can be understood as G-maps. It could be interesting to explore this point of view. In particular, this may connect n-gon centers with interesting problems in Plane Geometry such as the Square Peg Problem and its variants [12].
Finally, we would like to remark that the study of centers for k-dimensional polyhedra (k ≥ 3, and especially k = 3) would be of great interest in different areas (computational geometry and computer vision, for instance), and is a problem still to be explored. | 2020-04-06T01:01:07.565Z | 2020-04-03T00:00:00.000 | {
"year": 2020,
"sha1": "9aa583034e3e4c42b52739da760461ce4a3b0ec9",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/2004.01677",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "9aa583034e3e4c42b52739da760461ce4a3b0ec9",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |