Anomalous dynamics of confined water at low hydration
The mobility of water molecules confined in a silica pore is studied by computer simulation in the low hydration regime, where most of the molecules reside close to the hydrophilic substrate. A layer analysis of the single particle dynamics of these molecules shows anomalous diffusion, with a sublinear behaviour of the mean square displacement at long times. This behaviour is strictly connected to the long time decay of the residence time distribution, analogously to water in contact with proteins.
I. INTRODUCTION
Water plays a major role in many different biological, chemical and physical phenomena 1,2,3 . In most of these cases a large fraction of the water is in contact with different substrates and its motion is restricted to small spaces.
It is expected that both the geometrical confinement and the interaction with the substrate perturb the structural and dynamical properties of water, and some general trends have been found in experiments and computer simulations 4 .
In many phenomena, like those connected with biological matter, the behaviour of the shells of water close to substrates is of great relevance. In particular, the slow dynamics of water close to the surfaces of proteins might play a fundamental role in protein functionality, and evidence of a glasslike behaviour has been found for water in contact with plastocyanin, investigated over a wide temperature range by computer simulation 5 .
Water confined in Vycor glass is a good prototype system for studying the effect of a hydrophilic substrate, since the surface of Vycor pores is well characterized 6 . In particular, experimental studies of confined water with quasi-elastic neutron scattering and neutron resonance spin-echo 7,8 indicated a slowing down of the dynamics with respect to the bulk, and a study focused on the low hydration regime evidenced, upon supercooling, the existence of a low frequency scattering excess typical of strong glass formers 9 .
In computer simulations of water confined in a pore of Vycor glass we found strong layering effects, with the molecules close to the Vycor surface showing very slow dynamics even at ambient temperature 10,11,12 . In a more focused study of the low hydration regime 13,14 we also recently performed a preliminary study of the residence time (RT) of the water molecules. We found that the RT is strongly dependent on the distance from the substrate, and that its distribution shows an anomalous non-Brownian behaviour when the contribution of the molecules close to the substrate alone is considered.
The aim of this paper is to show the connection between the anomalous behaviour of the RT and the long time limit of the mean square displacement (MSD) in the low hydration regime. The paper is structured as follows: in the next section we briefly describe the system, give details of the computer simulation and discuss some structural properties of confined water 13 useful for the characterization of the RT behaviour. In the third section, by considering the residence time distribution and the molecular diffusion, we show the onset of an anomalous behaviour connected to the presence of the solid disordered substrate. The last section is devoted to conclusions.
II. STRUCTURAL PROPERTIES OF CONFINED WATER
The molecular dynamics (MD) calculations have been performed in a cell of silica glass previously obtained by the usual quenching procedure 15 .
Inside the cubic glass cell, of side 71 Å, a cylindrical cavity of 40 Å diameter has been carved. The surface of the cavity has been treated to reproduce the average properties of the surface of the pores of Vycor. Along this line, hydrogen ions (acidic hydrogens) have been added to the oxygen atoms not saturated by silicon, in order to mimic the procedure followed by experimentalists in preparing the Vycor sample before hydration. The surface of silica glass can be considered a prototype model for a disordered hydrophilic substrate. Since we are primarily interested in the dynamics of water, the substrate is kept rigid. The water inserted in the cavity is simulated using the SPC/E potential, in which each molecule is represented by three charged sites. These sites also interact with the silicon and oxygen atoms of the substrate by means of a hydrophilic potential described in previous work 11,15 . The molecular dynamics is performed at different hydrations by varying the number of water molecules contained in the pore. For each hydration the system is equilibrated at different temperatures. The quantities of interest presented in the following are averaged over runs that extend up to 1.2-1.3 ns. We note that we have extended the simulation length at all temperatures with respect to ref. 13 to improve the statistics, especially in the computation of the RT.
The numbers of water molecules considered in this work are N_W = 500, N_W = 1000 and N_W = 1500, which correspond to hydration levels of the pore of 19%, 38% and 56% respectively, since the density corresponding to full hydration in the experiments, ρ = 0.878 g/cm³, is obtained in our geometry for N_W = 2600. The effects of the hydrophilic interaction of the substrate on the water molecules are shown in the bottom panels of Figs. 1, 2 and 3 for the different hydrations. The radial density profiles, normalized to the density of bulk water at ambient conditions, show the formation of two layers of molecules close to the substrate. The positions of the peaks of the double layer structure do not change with hydration: the first layer is located at around R = 17.5 Å and the second layer at R ≈ 15.5 Å, with a minimum in the density at R ≈ 16 Å. The heights of the peaks increase with increasing hydration, and for N_W = 1500 the normalized density profile for confined water in the layers reaches values higher than one at ambient temperature.
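Profiles of this kind can be obtained by histogramming the distances of the molecules from the pore axis in cylindrical shells and normalizing by the shell volumes and by the bulk density. The following is a minimal sketch of that construction (our own illustration, not the authors' code; the array layout, function name and the bulk number density value are assumptions):

```python
# Sketch: normalized radial density profile in a cylindrical pore.
# Assumes the pore axis lies along z, so only the (x, y) coordinates matter.
import numpy as np

def radial_density_profile(xy, pore_length, n_bins=100, r_max=20.0,
                           rho_bulk=0.0334):
    """xy: array of shape (n_frames, n_molecules, 2) with coordinates
    perpendicular to the pore axis, in Angstrom.
    rho_bulk: bulk water number density at ambient conditions (~0.0334 A^-3).
    Returns bin centers R and the density profile normalized to the bulk."""
    r = np.linalg.norm(xy, axis=-1).ravel()            # distances from the axis
    counts, edges = np.histogram(r, bins=n_bins, range=(0.0, r_max))
    # volume of each cylindrical shell: pi * (r_out^2 - r_in^2) * L
    shell_vol = np.pi * (edges[1:]**2 - edges[:-1]**2) * pore_length
    rho = counts / (xy.shape[0] * shell_vol)           # average over frames
    return 0.5 * (edges[1:] + edges[:-1]), rho / rho_bulk
```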
The middle panels of Figs. 1, 2 and 3 show that the intermolecular hydrogen bond (HB) profiles increase and reach a maximum at the position of the density minimum (R ≈ 16 Å), where they start to decrease in correspondence with the increase of the HBs of the water molecules with the atoms of the Vycor surface. The layering effect shown in the bottom panels is due to the formation of the Vycor-water HBs. We found that temperature has little effect on the density and HB profiles at all hydration levels, and for this reason we show here only the results corresponding to ambient temperature.
III. RESIDENCE TIME AND ANOMALOUS DIFFUSION
At the top of Figs. 1, 2 and 3 the residence times (RT) of the water molecules at ambient temperature are reported along the pore radius 13 . The large oscillations of the RT appear modulated by the structure of the density profiles reported in the bottom panels of the same figures. Apart from the molecules attached to the surface, water resides longest inside the shells where the density profile reaches its highest values. The minima of the RT are located close to the minima of the density profiles.
Fig. 4 reports the mean square displacement (MSD) at N_W = 1000 for decreasing temperatures. After the ballistic regime at short times, at around 0.1 ps there is the onset of a cage effect, characterized by the presence of a plateau that extends upon lowering the temperature. The plateau is determined by the transient caging by the nearest neighbours. At longer times the MSD does not appear to reach the usual Brownian diffusion, since the behaviour is sublinear. Analogous results are found for the other hydration levels investigated. At this point further analysis is needed in order to clarify whether this subdiffusive behaviour is just a transient leading to normal Brownian diffusion at longer times, although unreachable with present computers, or whether it can be framed in the context of anomalous subdiffusive phenomena.
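For reference, the MSD behind such plots can be computed from an unwrapped trajectory by averaging squared displacements over molecules and time origins; the long time exponent then follows from a log-log fit of the tail. A minimal sketch (ours, with hypothetical names; it assumes the coordinates are unwrapped and already restricted to the molecules of interest):

```python
# Sketch: mean square displacement averaged over molecules and time origins.
import numpy as np

def mean_square_displacement(pos):
    """pos: array of shape (n_frames, n_molecules, 3), unwrapped coordinates.
    Returns msd[dt] for lag times dt = 0 .. n_frames-1 (in frames)."""
    n = pos.shape[0]
    msd = np.zeros(n)
    for dt in range(1, n):
        disp = pos[dt:] - pos[:-dt]                 # all time origins at lag dt
        msd[dt] = np.mean(np.sum(disp**2, axis=-1))
    return msd

# The long time exponent alpha of <r^2(t)> ~ t^alpha from a log-log fit:
# alpha = np.polyfit(np.log(t[tail]), np.log(msd[tail]), 1)[0]
```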
Anomalous diffusion is generally defined through the time dependence of the MSD, the second moment of the spatial coordinates of the diffusing particles, which at long times has the form

⟨r²(t)⟩ = a t^α .  (1)

Normal Brownian diffusion corresponds to α = 1, whereas anomalous diffusion corresponds to α ≠ 1: in particular, α > 1 is termed superdiffusion and α < 1 subdiffusion. From a theoretical point of view, the origin of anomalous diffusion can be traced back to the analytic form of the distribution of waiting times ψ(t). Under the assumption that the amplitude of the random jumps is constant and finite, anomalous diffusion has been shown to be generated by a waiting time distribution with an inverse power law decay at large times 16,17 ,

ψ(t) ∼ t^(−μ) , 1 < μ < 2 .  (2)

In the case of ordinary Brownian motion, the long time limit of the distribution would instead decay exponentially.
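This mechanism is easy to demonstrate numerically with a continuous-time random walk: fixed-amplitude jumps separated by Pareto-distributed waiting times with the tail of Eq. 2 yield an MSD growing as t^(μ−1). The sketch below is our own illustration (not from the paper); with μ = 1.5 the fitted exponent comes out close to the predicted μ − 1 = 0.5.

```python
# Sketch: 1D continuous-time random walk with waiting times psi(t) ~ t^(-mu),
# 1 < mu < 2, and unit jumps; the MSD then grows as t^(mu - 1).
import numpy as np

rng = np.random.default_rng(0)
mu, t0, n_walkers, n_jumps = 1.5, 1.0, 2000, 400

# Pareto waiting times via inverse transform sampling: P(t > x) = (t0/x)^(mu-1)
u = 1.0 - rng.random((n_walkers, n_jumps))            # uniform in (0, 1]
waits = t0 * u ** (-1.0 / (mu - 1.0))
arrival = np.cumsum(waits, axis=1)                    # time of each jump
steps = rng.choice([-1.0, 1.0], size=(n_walkers, n_jumps))
pos_after = np.cumsum(steps, axis=1)                  # position after each jump

t_grid = np.logspace(1, 4, 30)
msd = np.empty_like(t_grid)
for i, t in enumerate(t_grid):
    n_done = (arrival <= t).sum(axis=1)               # jumps completed by time t
    x = np.where(n_done > 0,
                 pos_after[np.arange(n_walkers), n_done - 1], 0.0)
    msd[i] = np.mean(x ** 2)

alpha = np.polyfit(np.log(t_grid), np.log(msd), 1)[0]
print(f"fitted alpha = {alpha:.2f}, expected mu - 1 = {mu - 1:.2f}")
```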
In our case the observed sublinear diffusion can be connected to the processes which take place close to the substrate and to the interaction of the water molecules with the disordered surface 18 . In this respect, since the oscillations of the RT appear so closely connected to the double layer structure, it is of interest to look at the residence time distribution of the water molecules close to the substrate.
In Figs. 5 and 6 we report the residence time distributions (RTD) ψ(t) at the highest (T = 300 K) and lowest (T = 240 K) temperatures investigated, for N_W = 1000 and N_W = 1500. The RTD, calculated for the molecules in the layer 14 < R < 20 Å, shows the power law behaviour predicted by Eq. 2, while for the rest of the molecules we get an exponential decay, as shown in the insets of the figures. A power law behaviour, related to the temporal disorder of the distribution of the residence times of the molecules, has been observed in computer simulations and experiments on water in contact with proteins 5 , where it is specifically related to the interaction of the solvent with the protein sites.
The power law decay of the RTD of the molecules in the 6 Å layer from the surface is characterized by exponents similar to those obtained for the RTD of water in shells a few Å thick around protein hydration sites 5 . In particular, for N_W = 1000 we have μ = 1.54 ± 0.05 at T = 300 K and μ = 1.50 ± 0.05 at T = 240 K, while for N_W = 1500 the fits yield μ = 1.50 ± 0.05 at T = 300 K and μ = 1.52 ± 0.05 at T = 240 K. We note that the present result for μ at N_W = 1500 and ambient temperature differs slightly from the preliminary one reported in ref. 13 , where the statistics was poorer.
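Operationally, the RTD can be built from the layer occupancy of each molecule along the trajectory, and μ extracted from a log-log fit of the distribution's tail. A minimal sketch (ours; the occupancy-array layout, the logarithmic binning and the fitting window are assumptions, not the authors' procedure):

```python
# Sketch: residence time distribution in a layer and a power-law tail fit.
import numpy as np

def residence_times(in_layer):
    """in_layer: boolean array of shape (n_frames, n_molecules), True when a
    molecule is inside the layer (e.g. 14 < R < 20 A).
    Returns the durations (in frames) of all uninterrupted visits."""
    durations = []
    for col in in_layer.T:                         # one molecule at a time
        padded = np.concatenate(([False], col, [False])).astype(int)
        edges = np.flatnonzero(np.diff(padded))    # alternating entry/exit
        durations.extend(edges[1::2] - edges[0::2])
    return np.asarray(durations)

def fit_mu(durations, dt, t_min):
    """Fit psi(t) ~ t^(-mu) on the tail t > t_min using logarithmic bins.
    dt: simulation time per frame."""
    times = durations * dt
    bins = np.logspace(np.log10(t_min), np.log10(times.max()), 20)
    hist, edges = np.histogram(times, bins=bins, density=True)
    centers = np.sqrt(edges[1:] * edges[:-1])      # geometric bin centers
    keep = hist > 0
    slope = np.polyfit(np.log(centers[keep]), np.log(hist[keep]), 1)[0]
    return -slope                                  # estimate of mu
```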
The sublinear behaviour of the MSD is connected to the power law decay of the RTD through the asymptotic temporal dependence

⟨r²(t)⟩ ∝ t^(μ−1) .  (3)

The fits of the long time behaviour of the MSD for N_W = 1000 and N_W = 1500, reported in Fig. 7, yield exponents consistent with those obtained from the power law behaviour of the RTD seen in Fig. 5 and Fig. 6.
In the lowest hydration case, N_W = 500 (Fig. 9), the RTD decays at long times with an exponent similar to the previous cases, μ = 1.45 ± 0.05 at T = 300 K and μ = 1.55 ± 0.05 at T = 240 K, but the MSD shows, as seen in the inset of Fig. 9, a long time behaviour not in agreement with the one predicted by Eq. 3.
The asymptotic behaviour of the MSD shows a further slowing down with respect to the higher hydrations. This behaviour is likely connected to the fact that the water molecules are arranged in clusters close to the substrate.
IV. CONCLUSIONS
The dynamical properties of confined water are expected to be modified by the interaction with the substrate. We performed computer simulations of water molecules confined in a silica pore in the low hydration regime, where most of the water resides in the shells closest to the hydrophilic surface. We found drastic changes in the diffusion of the water molecules. From a layer analysis, for all the investigated hydrations we find that the long time diffusive regime of the molecules close to the substrate is characterized by a sublinear trend.
Anomalous diffusion phenomena are related to a temporal disorder typical of particles which diffuse close to, and interact with, a disordered surface. Different interaction processes between the water molecules and the sites of the substrate modulate the residence time of the molecules 5 . A dispersive transport regime related to temporal disorder shows up in the power law decay of the residence time distribution, with an exponent which also determines the long time tail of the mean square displacement. In our system the exponent of the long time behaviour of the mean square displacement is related to the long time decay of the residence time distribution of the molecules in the same layer for the cases N_W = 1500 and N_W = 1000, as theoretically predicted. For the lowest hydration case (N_W = 500) the mobility of the molecules is more strongly modulated by the substrate than at the higher hydrations. The formation of clusters of molecules close to the solid surface does not appear to modify the long time decay of the RTD, but it induces a further slowing down of the dynamics, with a violation of the expected behaviour of the MSD.

[Figure caption fragments recovered from the text: MSD of water molecules in the layer 14 < R < 20 Å for N_W = 1500 at temperatures T = 300 K and T = 240 K, from above; the long dashed lines are fits to a sublinear behaviour ⟨r²⟩ ∝ t^α with α = 0.45 ± 0.05 at T = 300 K and α = 0.48 ± 0.05 at T = 240 K, and the inset reports ⟨r²⟩/t. Inset: MSD of water molecules in the layer 14 < R < 20 Å for N_W = 500 at T = 300 K and T = 240 K; the bold line is a fit to ⟨r²⟩ ∝ t^α with α = 0.34 ± 0.05 at both temperatures.]
V. ACKNOWLEDGMENTS
We thank J. Baschnagel for useful and stimulating discussions.
Application of Excimer Laser Coronary Atherectomy Guided by Optical Coherence Tomography in the Treatment of a Severe Calcified Coronary Lesion
To the Editor: A 73‐year‐old female was referred to our hospital for evaluation of unstable angina pectoris. She had a greater than 10‐year history of hypertension. Coronary angiography (CAG) was performed in another hospital in March 2017, which showed diffuse calcific stenosis (about 90%) in the proximal and mid‐left anterior descending (LAD) artery, 70% stenosis in the ostial obtuse marginal branch, and no apparent stenosis in the right coronary artery. The lesion in the LAD could not be intervened because the 2.0 mm × 15.0 mm Sprinter balloon (Medtronic, USA) could not be expanded and the 2.5 mm × 15.0 mm NC Sprinter balloon (Medtronic, USA) could not pass through the lesion.
Correspondence
After obtaining informed consent, percutaneous coronary intervention for the LAD lesion was performed. A 6-Fr EBU 3.75 guiding catheter (Medtronic, USA) was inserted from the radial artery. A Balance Middle Weight Universal II guide wire (Abbott, USA) was successfully introduced into the distal LAD. An optical coherence tomography (OCT) catheter (St. Jude Medical, USA) failed to pass through the proximal lesion. Excimer laser coronary atherectomy (ELCA) was initiated using a 0.9-mm eccentric catheter (Spectranetics, USA) at 45/45 (fluence/Hz), then 45/60, 45/80, and 80/80 (fluence/Hz), but there was no progress. We repeatedly dilated the lesion with a 1.5 mm × 15.0 mm Sprinter balloon at 10-12 atm. Then, we successfully performed ELCA at 45/45 (fluence/Hz). OCT assessment was performed after the laser catheter passed through the lesion, and severe calcifications were noted [Figure 1A1-A3]. The culprit lesion was dilated repeatedly with a 2.5 mm × 15.0 mm NC Sprinter balloon at 12-14 atm. Two stents (2.50 mm × 28.00 mm Xience Xpedition, Abbott, USA, and 2.75 mm × 24.00 mm Endeavor Resolute, Medtronic, USA) were placed from distal to proximal at 12 atm. Finally, postdilation of the stents was performed at 14-20 atm with a 3.0 mm × 15.0 mm NC Sprinter balloon. CAG showed an acceptable result, and the final OCT assessment showed no apparent dissection, malapposition, or underexpansion [Figure 1B1-B3]; the minimum lumen area was 4.17 mm².
The therapeutic effect of excimer laser technology is mainly achieved through three kinds of actions: photochemical, photothermal, and photomechanical effects. The depth of action is 0.1 mm. Excimer laser technology can break the molecular bonds of tissues and produce small debris, including water, gas, and small particles (90% <10 μm). [1] The advantages of the laser include delivery of flexible catheters through tortuous anatomy, precisely controlled penetration into the lesion, and circumferential or eccentric distribution of the laser rays, which create a smooth "pilot channel." In addition, the laser interacts favorably with thrombus and uniquely suppresses platelet activity, thus reducing the risk of thrombosis within the newly revascularized site. [2] Previous studies have shown that ELCA can be used for the treatment of coronary thrombosis in patients with acute coronary syndrome, chronic total occlusion lesions, saphenous vein graft occlusions, stent restenosis, and mild-to-moderate calcifications. [1,3,4]
The 0.9-mm eccentric catheter is a xenon-chloride (excimer) pulsed laser catheter that is capable of delivering a higher energy density with lower heat production (smaller area of ablation) and has been suggested as a treatment option for these calcified lesions. [1] This catheter can deliver excimer laser energy (wavelength, 308 nm; pulse length, 185 nanoseconds) at fluences from 30 to 80 mJ/mm² and pulse repetition rates (frequency) from 25 to 80 Hz, using a 10-s on and 5-s off lasing cycle. This compares with other excimer catheter technology (1.4-, 1.7-, and 2.0-mm catheters) delivering 30-60 mJ/mm² at 25-40 Hz using a 5-s on and 10-s off lasing cycle. These improvements in laser energy delivery were proposed to maximize tissue penetration while keeping complications within acceptable limits. [1]
ELCA is usually performed during intracoronary saline infusion to minimize the risk of vapor bubble formation, which can lead to arterial dissection. [1] However, a case report [5] discussed the injection of contrast before laser activation for treating an underexpanded stent (negating the established instructions for use, which mandate meticulous contrast removal from the guiding catheter before activation). Due to the known effect of contrast on amplification of the laser waves, this maneuver can extend the depth of energy distribution to soften the calcium. Indeed, such laser manipulation may cause complications as well, from spasm and dissections to perforations.
Our case is the first report involving treatment of a severely calcified lesion with a 0.9-mm catheter guided by OCT. Alternating use of ELCA and balloon dilation can alter calcified plaque morphology and achieve better results. ELCA with contrast injection should be used infrequently and with great caution. Severely calcified lesions more easily lead to malapposition and underexpansion. OCT is currently the highest resolution intravascular imaging examination and can provide accurate evaluation. Together, these methods can improve the success rate and optimize the procedure.
Declaration of patient consent
The authors certify that they have obtained the patient's consent form. In the form, the patient has given her consent for her images and other clinical information to be reported in the journal. The patient understands that her name and initials will not be published and that due efforts will be made to conceal her identity, but anonymity cannot be guaranteed.
Financial support and sponsorship
Nil.
Conflicts of interest
There are no conflicts of interest.
The role of intravascular ultrasound scan and thin-sliced coronary computed tomography angiography in diagnosing aortic dissection causing acute myocardial infarction
Introduction: Acute aortic dissection is a disease of high mortality. Its symptoms may mimic those of other conditions, such as acute coronary syndrome, and be misdiagnosed; coronary involvement complicates the clinical scenario and increases mortality. Case report: We herein report a case of an acute myocardial infarction caused by acute aortic dissection. Without noticing the aortic dissection, we performed emergent coronary angiography, which showed severe stenosis of the proximal right coronary artery. Intravascular ultrasound scan led us to suspect aortic dissection. However, we performed balloon angioplasty because the patient's hemodynamic status was unstable. ECG-gated coronary computed tomography angiography provided a definitive diagnosis, and the patient underwent successful surgical repair of the aortic dissection. Conclusion: Acute coronary syndrome associated with acute aortic dissection is not rare. However, the management of these conditions depends on the details of each case. This case demonstrates the difficulty of treating such cases in the real world. Herein, we describe educational imaging findings and briefly discuss the management of cases involving acute coronary syndrome associated with acute aortic dissection.
Authors: Daisuke Nagatomo (MD, Resident Physician), Daizaburo Yanagi (MD), Takeshi Serikawa (MD, Manager of the Catheterization Laboratory), Masanori Okabe (MD, Assistant Director), Yusuke Yamamoto (MD, Director), Department of Cardiology, Cardiovascular and Aortic Center of Saiseikai Fukuoka General Hospital, Fukuoka, Japan. Corresponding author: Daisuke Nagatomo, 1-3-46 Tenjin, Chuo-ku, Fukuoka-shi, Fukuoka 810-0001, Japan; Tel: +81-92-771-8151; Fax: +81-92-716-0185; Email: miserybeatle@me.com. Received: 19 October 2013; Accepted: 23 November 2013; Published: 01 April 2014.
INTRODUCTION
Patients with acute aortic dissection (AAD) may initially present with only signs of acute coronary syndrome (ACS), such as ST elevation on electrocardiograms (ECGs). In such situations, the correct diagnosis may be missed. A diagnosis of acute coronary syndrome may lead to the inappropriate administration of thrombolytic agents, resulting in catastrophic consequences. Transthoracic echocardiography is useful as a simple imaging test. However, its diagnostic capability is sometimes insufficient in the emergency room. Even when the exact diagnosis is reached, the management of these conditions remains controversial, with only a few reports in the literature.
CASE REPORT
A 62-year-old male with sudden onset of severe chest pain was transferred to our emergency room by ambulance. He had a history of aortic valve replacement (AVR), performed four years earlier for aortic regurgitation of a tricuspid aortic valve. The AVR had been performed using a mechanical valve, and the prothrombin time-international normalized ratio at admission was 1.76. He did not suffer from back pain, and no laterality of the blood pressure was observed. An ECG showed ST-segment elevation in leads II, III and aVF (Figure 1), suggesting inferior acute myocardial infarction (AMI). A chest X-ray showed no abnormalities. On transthoracic echocardiography, neither mechanical valve failure nor cardiac tamponade was observed. We performed emergent coronary angiography, which revealed a long, tight lesion in the proximal segment of the right coronary artery (RCA) (Figure 2) and a normal left coronary artery. We planned to perform emergent percutaneous coronary intervention (PCI); however, an intravascular ultrasound (IVUS) scan performed before PCI revealed a hypoechoic mass around the stenotic lesion. The narrowed lumen appeared not to be occupied by thrombi, but rather compressed by the surrounding mass (Figure 3). These findings suggested that an ascending aortic dissection had caused the AMI. Although we considered emergent surgical repair, we decided to perform PCI first because the ST level was still elevated and the patient's hemodynamic status was unstable. Balloon angioplasty improved the flow of the RCA, and the hemodynamics stabilized. We did not implant any stents, because we wished to avoid the use of antiplatelet agents and did not observe acute recoil after balloon angioplasty. Acute aortic dissection was definitively diagnosed following contrast-enhanced computed tomography (CT) angiography (Figures 4 and 5), which clearly showed that the proximal RCA was embedded in and compressed by the intramural hematoma (Figure 5A). The left main trunk was mildly compressed by the communicating false lumen of the dissection (Figure 5B-C). The patient underwent successful surgical repair of the aortic dissection.

[Figure 3 caption, recovered from the text: On intravascular ultrasound scan, a hematoma (arrow) was observed around the coronary artery. The narrowed lumen of the right coronary artery did not appear to be occupied by atheromatous plaque or thrombi, but rather was compressed by the surrounding mass. The mass continued to the aorta on manual pull-back of the intravascular ultrasound catheter.]
DISCUSSION
Acute aortic dissection can occur as one of the most serious late complications after AVR. Predictors of AAD after AVR include fragility and thinning of the ascending aorta, aortic dilatation, aortic regurgitation at the initial AVR (especially with a bicuspid aortic valve) and hypertension [1].
In addition, coronary involvement is a fatal complication of AAD, with a reported incidence of one to two percent [2]. However, AAD itself sometimes fails to demonstrate any of the classical physical findings, such as a widened mediastinum, aortic regurgitation or blood pressure laterality, and up to 30% of patients suffering from AAD are therefore initially suspected of having other conditions [3,4].
In this case, we could not diagnose AAD based on the physical findings, chest X-ray or transthoracic echocardiography in the emergency room, even though the patient had a history of AVR. Therefore, when treating AMI patients in the emergency room, especially those with inferior AMI, clinicians should suspect the existence of aortic dissection behind the AMI [4]. If aortic dissection cannot be diagnosed in the emergency room in such cases, emergent CAG should be performed. Once AAD is identified as the cause of AMI, the question arises as to how the patient should be managed in the catheterization laboratory. It is controversial whether to first perform emergent surgical repair of the aorta, or primary PCI before surgery. Furthermore, whether to implant a stent is a difficult choice. The use of strong antiplatelet therapy can result in surgical difficulties, while the strong radial force of an implanted stent would assure a more stable coronary flow. Therefore, this decision should be made based on whether the patient is hemodynamically stable [5]. The findings of the IVUS scan and coronary CT angiography in the present case are very educational, as they clearly showed AAD involving the coronary artery. In addition, this case highlights the difficulty of treating similar cases in the real world.
CONCLUSION
Acute coronary syndrome associated with acute aortic dissection is not rare. However, the management of these conditions depends on the details of each case, and there are many cases to which the guidelines cannot be applied. This case demonstrates the difficulty of treating similar cases in the real world. We performed balloon angioplasty, refraining from stenting before surgery, and subsequently obtained a good result. We believe that this therapeutic regimen is a potential treatment choice in cases involving a compromised coronary flow.
Author contributions
Daisuke Nagatomo - Conception and design, acquisition of data, analysis and interpretation of data, drafting the article, critical revision of the article, final approval of the version to be published.
Daizaburo Yanagi - Conception and design, acquisition of data, drafting the article, critical revision of the article, final approval of the version to be published.
Takeshi Serikawa - Conception and design, acquisition of data, drafting the article, critical revision of the article, final approval of the version to be published.
Masanori Okabe - Conception and design, critical revision of the article, final approval of the version to be published.
Yusuke Yamamoto - Conception and design, drafting the article, critical revision of the article, final approval of the version to be published.

[Figure 5 caption, recovered fragment: (A) ...showing the right coronary artery embedded in and compressed by the intramural hematoma (yellow arrow); (B) on multiplanar reconstruction imaging, the left main trunk appeared to be mildly compressed by the communicating false lumen (white arrow); (C) volume rendering imaging showed that the proximal segment of the right coronary artery was compressed (yellow arrow) and the left main trunk was embedded in the communicating false lumen (white arrowhead).]
A First Look at the Impact of Electric Vehicle Charging on the Electric Grid in The EV Project
ECOtality was awarded a grant from the U.S. Department of Energy to lead a large-scale electric vehicle charging infrastructure demonstration, called The EV Project. ECOtality has partnered with Nissan North America, General Motors, the Idaho National Laboratory, and others to deploy and collect data from over 5,000 Nissan LEAFs™ and Chevrolet Volts and over 10,000 charging systems in 18 regions across the United States. This paper summarizes usage of residential charging units in The EV Project, based on data collected through the end of 2011. This information is provided to help analysts assess the impact on the electric grid of early adopter charging of grid-connected electric drive vehicles. A method of data aggregation was developed to summarize charging unit usage by means of two metrics: charging availability and charging demand. Charging availability is plotted to show the percentage of charging units connected to a vehicle over time. Charging demand is plotted to show the load placed on the electric grid over time. Charging availability for residential charging units is similar in each EV Project region. It is low during the day, steadily increases in the evening, and remains high at night. Charging demand, however, varies by region. Two EV Project regions were examined to identify regional differences. In Nashville, where EV Project participants do not have time-of-use electricity rates, demand increases each evening as charging availability increases, starting at about 16:00. Demand peaks in the 20:00 hour on weekdays. In San Francisco, where the majority of EV Project participants have the option of choosing a time-of-use rate plan from their electric utility, demand spikes at 00:00. This coincides with the beginning of the off-peak electricity rate period. Demand peaks at 01:00.
Introduction
Concerns with global climate change, United States reliance on foreign oil, increasing global demand for petroleum-based fuels, and increasing gas prices are changing consumer preferences and industry direction toward more fuel-efficient and alternative energy vehicles. Nissan and General Motors have successfully introduced a new generation of plug-in electric vehicles (PEV).
Several other automotive manufacturers plan to launch PEVs in 2012. This illustrates a shift to cleaner and more efficient electric drive systems. These vehicles, which include plug-in hybrid electric vehicles, extended range electric vehicles, and battery electric vehicles, draw some or all of their motive power from onboard batteries, which are charged from the electric grid. In order for PEVs to be commercialized, electric charging infrastructure must be deployed. Charging infrastructure must be safe, financially viable, and convenient. Additionally, electric utilities must be able to manage PEV charging demand on the electric grid.
For years, researchers have worked to model future PEV and charging infrastructure markets to assess the impact of charging PEVs on the electric grid [1][2][3]. Small-scale demonstrations have been conducted to document actual PEV charging behavior and grid impact using aftermarket conversion PEVs [4]. With the recent launch of high-volume PEVs by major automakers, it is now possible to deploy charging infrastructure and assess grid impact with large-scale demonstrations.
In 2009, ECOtality was awarded a grant from the U.S. Department of Energy to embark on such a demonstration, called The EV Project. With matching cost share from ECOtality and its partners, The EV Project's total budget is approximately $230 million. ECOtality is partnering with Nissan North America, General Motors, and several other companies to deploy over 5,000 Nissan LEAF™ battery electric vehicles and Chevrolet Volt extended range electric vehicles and over 10,000 charging systems to support them. These charging systems, referred to as electric vehicle supply equipment (EVSE), are being installed in private and public locations in 18 strategic markets across the United States.
The purpose of this paper is to summarize early EV Project residential EVSE usage and demand on the electric grid, based on data collected through the end of 2011. This information is provided to help analysts assess the impact of early adopter PEV charging on the electric grid.
Project Description
The purposes of The EV Project are to characterize vehicle and EVSE usage in diverse topographic and climatic conditions, evaluate the effectiveness of charging infrastructure, and conduct trials of various revenue systems for commercial and public charging infrastructure. The ultimate goal of The EV Project is to take the lessons learned from the deployment of these first PEVs, and the charging infrastructure supporting them, to enable the streamlined deployment of the next five million PEVs. To accomplish these purposes, ECOtality has partnered with the Idaho National Laboratory to collect and analyze electronic data from EV Project vehicles and charging units.
The EV Project uses the Blink brand of EVSE, which is manufactured by ECOtality. The Blink product line consists of AC Level 2 residential and commercial EVSE and a DC Level 2 commercial fast charger. The AC Level 2 EVSEs are 240-VAC, single-phase units, operating at charge rates up to 7.2 kW. Blink EVSE are networked, enabling data collection, user authentication, and additional functionality. All units have internal energy meters and touchscreen user interfaces and allow user-controlled charge scheduling. Numerous data parameters are collected from the Blink EVSE participating in The EV Project, including the times when the EVSE is connected to a vehicle and transferring power to the vehicle, the energy consumed from the grid, and the 15-minute rolling average power demand.
Vehicles enrolled in The EV Project include the Nissan LEAF and the Chevrolet Volt. Both vehicles connect to AC Level 2 EVSE using SAE J1772®-compliant connectors. Figure 1 shows a Volt connected to a Blink AC Level 2 commercial EVSE unit. Table 1 shows the number of units, by project region, that had been deployed and transferred data to the Idaho National Laboratory.
The EV Project's deployment phase will continue through 2012, during which time additional vehicles and EVSE will be enrolled in the project. Data collection will continue for 1 year following completion of the deployment phase. While still early in the project at the time of this writing, the number of vehicles and EVSE enrolled is significant. This allows a preliminary assessment of the impacts of these PEVs on the grid. This paper focuses on the usage of residential EVSE in households with Nissan LEAFs and their aggregate electricity demand relative to time of day and day of the week. Additional studies will be conducted as The EV Project progresses to include the effects of nonresidential charging, including DC fast charging, and of the localized distribution of EVSE on the electric grid.
Data Aggregation Approach
It is useful to understand how EVSEs are being utilized through time. Specifically, knowing when vehicles are connected to the grid and when power is being drawn from the grid is valuable information. In order to communicate this information, two curves were calculated from EV Project EVSE usage data: the charging availability curve and the charging demand curve. These curves and the associated plots are described below.
Charging Availability
Charging availability at a point in time is the percentage of EVSE in a geographical area that are connected to a vehicle. The charging availability curve depicts the charging availability on a 15-minute interval versus time. The charging availability curve is a good way to observe the collective behavior of a large sample of vehicle owners as they connect and disconnect their PEVs to and from their EVSE. Figure 4 shows the charging availability curve for a 3-week time period during October 2011 for many residential EVSE in The EV Project. The charging availability curve for residential EVSE is a periodic curve with both daily and weekly patterns. The daily peaks and troughs of the curve correspond to the night time and day time, respectively. The peaks are caused as people return to their residences and plug in their vehicles in the evening. The troughs are caused as people unplug their vehicles and (presumably) leave their residences. The weekly pattern revolves around the weekends. The weekend days tend to have lower peaks and higher troughs than the weekdays. Higher troughs during the day result from fewer people unplugging their vehicles on weekend days. Lower peaks result from fewer of the previously disconnected EVSE being reconnected in the evening.
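As a concrete illustration, a charging availability curve of this kind can be assembled on a 15-minute grid from per-event plug-in and plug-out timestamps. The sketch below is ours; the column names and DataFrame layout are assumptions, not The EV Project's actual data schema:

```python
# Sketch: percent of EVSE connected to a vehicle per 15-minute interval.
import pandas as pd

def charging_availability(events, n_units, start, end):
    """events: DataFrame with one row per charging event and 'plug_in' /
    'plug_out' timestamp columns. n_units: number of EVSE in the region."""
    grid = pd.date_range(start, end, freq="15min")
    connected = pd.Series(0, index=grid)
    for t_in, t_out in zip(events["plug_in"], events["plug_out"]):
        connected[(grid >= t_in) & (grid < t_out)] += 1
    return 100.0 * connected / n_units             # availability in percent
```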
The daily and weekly patterns in the charging availability curve can be displayed using a 24-hour time-of-day plot for weekdays and another 24-hour time-of-day plot for weekend days. This kind of time-of-day plot is a concise way to visualize the daily behavior of many calendar days of data simultaneously. To create a time-of-day plot, the charging availability curves for each calendar day are superimposed on the same 24-hour scale. Figure 5 shows this superposition of each day in the weekday charging availability curve depicted in Figure 4.
To reduce the noise caused by the individual calendar day curves, the area between the maximum and minimum curves at each point in time is filled in. This creates an envelope of charging availability (as shown in Figure 6). The maximum and minimum charging availability at each point in time across all calendar days are highlighted with blue and green lines, respectively.
Charging Demand
Charging demand at a point in time is the total amount of power being drawn from the electric grid by a group of EVSE in a geographical area. This is typically shown as a curve of charging demand versus time, which is sometimes referred to as a load profile. Figure 7 shows the charging demand curve during a 3-week time period for many residential EVSE in The EV Project. This curve is based on 15-minute rolling average power measurements collected from the EVSE.
The charging demand curve is a periodic curve, with both daily and weekly patterns similar to the charging availability curve. The daily peaks and troughs of the charging demand curve correspond to the night time and day time, respectively. The demand at night is high, whereas the demand during the day is low. Because the charging demand curve follows the same periodic patterns as the charging availability curve, weekday and weekend time-of-day plots also will be used to visualize the charging demand.
Time-of-Day Plot Variations
Additional information beyond the maximum and minimum curves can be added to a time-of-day plot.Two variations of the time-of-day plots are described below.
Peak Day
In the time-of-day demand plot, it is sometimes helpful to visualize the charging demand for one calendar day. While any calendar day could be chosen, it was decided to show the demand for the "peak day" in some reports for The EV Project [5]. The peak day is defined as the calendar day, within the time period being analyzed, on which the highest demand was experienced. For example, the highest weekday charging demand during the 3-week period in October analyzed above occurred at 23:00 on October 13, 2011. Therefore, the charging demand curve for the entire day of October 13 is shown as the peak day curve on the time-of-day charging demand plot. The time-of-day charging availability plot shows the charging availability curve for October 13 as well. Figures 8 and 9 show the peak day curves on these two plots.
Quartiles
In a time-of-day plot, it also is helpful to see how the data points from each calendar day are distributed between the maximum and minimum at any time of day. This is depicted by dividing the range between the maximum and the minimum into quartiles and displaying the median and the inner quartile range (IQR). The median is a measure of central tendency that corresponds to the 50th percentile. The IQR is the range between the 25th and 75th percentiles. It is used as a measure of the spread of the data. Time-of-day plots with these features are shown in Figures 10 and 11. Figure 10 shows that during the 3-week period in October 2011, there tends to be more variation in charging availability during the night-time hours than during the day-time hours. For example, at 03:00, the IQR is 3% and the overall range is 8%. At 12:00 (noon), the IQR is about 1% and the overall range is 3%. Further, between 22:00 and 06:00, the upper three quartiles are grouped closely together. This means that the wide overall range of charging availability during these hours is due to a relatively small number of days with lower charging availability.
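These per-time-of-day statistics are straightforward to compute by grouping a 15-minute series on its time of day. A minimal sketch (ours, assuming the data live in a pandas Series with a DatetimeIndex; this is not The EV Project's actual tooling):

```python
# Sketch: max/min, median and inner quartile range by time of day,
# mirroring the superposition of calendar days described above.
import pandas as pd

def time_of_day_stats(series):
    """series: values (availability in % or demand in kW) indexed by a
    15-minute DatetimeIndex spanning many calendar days."""
    by_tod = series.groupby(series.index.time)     # one group per 15-min slot
    stats = by_tod.agg(["max", "min", "median"])
    stats["q75"] = by_tod.quantile(0.75)
    stats["q25"] = by_tod.quantile(0.25)
    stats["iqr"] = stats["q75"] - stats["q25"]     # inner quartile range
    return stats
```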
Figure 11: Time-of-day demand plot with median and inner quartile range.
Figure 11 shows that variation in charging demand is similar to variation in charging availability, with higher variability in the night-time hours.
Results
The data aggregation approach described in the preceding section was applied to a set of EV Project EVSE usage data. These data were collected from 2,704 residential EVSE between October 1 and December 31, 2011. The EVSE are located in private households owning Nissan LEAFs in each of the project regions.
Charging Availability
Weekday and weekend time-of-day charging availability plots were first created for all EVSE in the data set.These are shown in Figures 12 and 13.
In general, Figures 12 and 13 show that The EV Project participants tend to have their vehicles connected at home in the evening and late night hours. The percent of EVSE connected begins to increase after 16:00 and reaches a peak between 04:00 and 05:00 on both weekdays and weekend days. The slow but steady increase in the number of EVSE connected between midnight and 04:00 indicates that some participants plug in their vehicles in the early morning hours. Charging availability then begins to drop as vehicles are disconnected from EVSE in increasing numbers after about 05:00, presumably as individuals depart their homes for work or other daily activities. As few as 8% of residential EVSE have vehicles connected during the mid-day hours on weekdays. The minimum percent of EVSE connected on weekend days is higher, at 18%. Fewer EVSE are connected during the early mornings after midnight on Saturdays and Sundays than on weekday early mornings.
Figure 12: Weekday time-of-day charging availability for all EV Project regions.
Figure 13: Weekend time-of-day charging availability for all EV Project regions.
The range of variation in charging availability from calendar day to calendar day during the quarter is significant. In Figure 12, the time with the highest variation on weekdays occurs between 00:00 and 05:00, when the percent of EVSE connected varies from about 30 to over 50%. The range of the top three quartiles from midnight to 05:00 is about 9%, but the range of the bottom quartile over the same time period is about 15%. Also, the range of the bottom three quartiles from 11:00 to 16:00 is about 4%, whereas the range of the top quartile over the same time period is about 7%. In both of these time periods, only one quarter of the data is responsible for about 60% of the spread between the maximum and the minimum.
Inspection of the charging availability curve for all of the fourth quarter (Q4) of 2011 found that the increased size of the lower quartile between 00:00 and 05:00 and the increased size of the upper quartile between 11:00 and 16:00 are due to a change in behavior on the days surrounding Thanksgiving and Christmas 2011. The weekdays from Monday, December 26 through Friday, December 30 saw fewer EVSE connected during the night and more EVSE connected during the day than during other weeks in the quarter. This trend is represented in Figure 12 by a decrease in the minimum charging availability (green line) between 20:00 and 06:00 and an increase in the maximum charging availability (blue line) from 09:00 to 18:00.
The IQR can be examined to focus on common behavior and ignore atypical behavior, such as that seen around the holidays. The tight IQR shown in Figure 12 indicates that common weekday charging availability has little variation from day to day. The largest IQR on weekdays or weekend days occurs on weekends between 20:00 and 06:00, indicating that this is the period of greatest variation in user "plugging-in" behavior from day to day.
Weekend charging availability in Figure 13 follows a pattern similar to weekday behavior around the holidays, in that the median charging availability is lower at night and higher during the day compared to most weekdays. On weekend days, there is very little variation across days between noon and 16:00, when 18 to 23% of EVSE are connected.
Time-of-day charging availability plots also were generated for individual EV Project regions. Figures 14 and 15 show these plots for the Nashville region in Q4 2011.
With the exception of less smooth lines due to the smaller sample size, the patterns in these figures are similar to the patterns in the charging availability plots for all EV Project EVSE in the data set (Figures 12 and 13). The corresponding plots for the San Francisco region again show similar patterns. This indicates that participants in the San Francisco region exhibited the same behavior as their counterparts in the Nashville region and as the overall project population with respect to when they connect their vehicles to their residential EVSE.
Charging Demand
Time-of-day charging demand plots were generated from the data set for all EV Project regions in Q4 2011. The magnitude of the demand was normalized per EVSE by dividing the magnitude of the charging demand curve by the number of EVSE available for use on a given day and time. These plots are shown in Figures 19 and 20.
Figure 19: Weekday time-of-day charging demand for all EV Project regions.
Figure 20: Weekend time-of-day charging demand for all EV Project regions.
At first glance, it may appear that the charging demand magnitude in these figures is too low. After all, a single Nissan LEAF draws about 3.3 kW during steady-state charging, yet the charging demand time-of-day plot never exceeds 1 kW. Note, however, that the percent of EVSE connected to a vehicle never exceeds 60%, as shown in Figure 12. Thus, the normalized charging demand per EVSE will never exceed 60% of the maximum possible demand for one vehicle. Furthermore, not all vehicles that are connected to EVSE are drawing power. At any given time, a fraction of the connected vehicles have full battery packs and have ceased drawing power from the EVSE. The charging demand plots show the resulting demand of EVSE with vehicles connected and drawing power, normalized with respect to all EVSE in the data set.
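As a rough consistency check (our own arithmetic, using only the figures quoted above): even if every connected vehicle were drawing power, the normalized demand could not exceed 0.60 × 3.3 kW ≈ 2.0 kW per EVSE. The observed peak of just under 1 kW per EVSE therefore implies that, at the peak, only about half of the connected vehicles are actually drawing power.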
When comparing charging demand in Figures 19 and 20 to charging availability in Figures 12 and 13, it is immediately apparent that demand is not proportional to the percent of EVSE connected to a vehicle. This occurs for two reasons. First, demand begins to fall off after about 01:00, even though charging availability remains high until about 06:00. Demand falls during this period as battery packs reach full charge and the vehicle control systems stop power flow, even though the vehicles are still connected to the EVSE. This is consistent with analysis of individual charging events. For residential EVSE in all project regions, the average duration of time connected per charging event is 11.5 hours, whereas the average period of time during which the vehicle draws power per charging event is 2.2 hours [5].
Second, charging availability steadily increases in the evening hours, whereas demand increases only slightly in the evening and then increases dramatically at midnight. This difference is due to user charge scheduling. The large spike in demand at midnight, as well as the small jumps on the hour between 20:00 and 01:00, is the result of numerous users programming their EVSE or vehicles to commence charging at these times.
As with charging availability, the minimum charging demand (green lines in Figures 19 and 20) is reduced considerably due to different user behavior around Thanksgiving and Christmas. Otherwise, the upper three quartiles are fairly tightly grouped, indicating that there is consistent demand for electricity from day to day in Q4, excluding the days around the holidays.
Demand on weekdays and weekend days drops to nearly 0 kW per EVSE between 05:00 and 06:00, even though charging availability does not begin to fall off until 06:00 or later. This indicates that the Nissan LEAFs being charged have sufficient time to fully charge during the night. Note that this is a function of the state of charge of the vehicles' batteries prior to charging, which in turn is a function of how much the vehicles have been driven prior to charging. This topic will be studied in future work.
To investigate regional differences in demand, charging demand plots were generated for the Nashville EV Project region. These plots are shown in Figures 21 and 22.
Figure 21: Weekday time-of-day charging demand for Nashville.
In Nashville, an increase in the weekday demand curve from 16:00 to 20:00 corresponds to the increase in the charging availability curve over the same time period shown in Figure 14. In this region, most users do not program their vehicles or EVSE to begin charging at a scheduled time. Instead, the vehicles begin to draw power from the EVSE immediately after they are plugged in. Because people arrive home or otherwise choose to plug in their vehicles at home at different times throughout the evening, charging demand increases gradually. This charging diversity leads to relatively low peak demand and smooth changes in demand.
Figure 22: Weekend time-of-day charging demand for Nashville.
Weekend charging demand in Nashville increases between 08:00 and 12:00 (Figure 22), despite a decrease in charging availability during this time (Figure 15). Demand increases as vehicles are connected to EVSE during this period and begin to charge immediately. Charging availability decreases during this time because the number of vehicles being disconnected from EVSE is greater than the number being connected. However, the vehicles being disconnected had already completed charging prior to being unplugged; therefore, disconnecting them does not reduce charging demand.
Charging demand plots also were generated for the San Francisco EV Project region. These plots are shown in Figures 23 and 24.
Figure 23: Weekday time-of-day charging demand for San Francisco.
In San Francisco, a large increase in demand occurs at 00:00 (midnight). This is depicted in Figures 23 and 24. As mentioned previously, the midnight spike in demand is a result of a large number of users programming their EVSE or vehicles to begin charging at midnight.
Ninety percent of EVSE in the San Francisco region are located in the Pacific Gas & Electric service territory. This electric utility offers its EV-owning customers an experimental residential time-of-use rate for "low emission vehicle refueling." In this rate structure, the cost per kilowatt-hour of electricity is reduced during off-peak hours. On weekdays, the off-peak period is from midnight to 07:00. Weekend off-peak hours start at 21:00 [6]. While it is not possible to determine which customers have signed up for these rates, it is obvious from the plots that many EVSE users schedule the start of vehicle charging at midnight. It is assumed that this behavior is motivated by the desire to take advantage of the reduced electricity price.
The tendency for many EVSE users in the region to begin charging at midnight has consequences. First, the strong increase in charging availability from 16:00 to 22:00 is not accompanied by an equivalent increase in the demand curve. Instead, the demand curve has only a slight increase over this time period. This serves to reduce the demand on the electric grid during the afternoon and evening hours, which is typically the period of peak demand. Second, because many people schedule charging to begin immediately at 00:00, a nearly instantaneous spike in demand occurs. This large spike in demand may pose problems for low-voltage distribution systems.
Peak demand in San Francisco occurs at 01:00. This occurs because a relatively small number of EVSE users schedule charging to begin at this time. The demand from these EVSE augments the demand from EVSE whose vehicles began charging at 00:00 and are still charging.
Because there is less diversity in when vehicles begin charging in San Francisco than in Nashville, the absolute peak in San Francisco is greater. San Francisco's weekday peak median demand is 1.05 kW per EVSE, compared to 0.65 kW in Nashville. It should be noted that the energy consumed per EVSE per day in these two regions is the same, making this comparison possible. These differences in peak demand occur because EV Project participants in San Francisco tend to program their EVSE or vehicles to start charging at a specific time, whereas participants in Nashville do not. It is assumed that this behavior is driven by the availability (or lack thereof) of reduced electricity prices through time-of-use rates.
Conclusion
In general, residential EVSE charging availability is low during the day, steadily increases in the evening, and remains high at night. Charging availability, which is a function of when individuals connect their vehicles to their EVSE, is consistent across EV Project regions.
Day-to-day variation in charging availability and charging demand on weekdays is high during Q4 2011, because user behavior on the weekdays surrounding Thanksgiving and Christmas varies from the other weekdays in the quarter. On the weekdays surrounding the holidays, charging availability was low at night and high during the day, similar to weekend charging availability. To ignore the effect of the holidays and compare common weekday and weekend user behavior, the charging availability IQR was used. The IQR is highest on weekends between 20:00 and 06:00, indicating that this is the period of greatest variation in user "plugging-in" behavior from day to day, excluding days close to or on holidays.
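As a sketch of the screening just described (column names and data layout are assumed), the quartile band by hour of day can be computed with pandas after dropping holiday-adjacent days:

```python
import pandas as pd

def availability_band(df: pd.DataFrame) -> pd.DataFrame:
    """Median and inner quartile range of charging availability by time of day.
    Expects one row per (date, hour) with columns 'hour', 'availability'
    (percent of EVSE connected), and a boolean 'near_holiday' flag that marks
    weekdays surrounding Thanksgiving and Christmas -- all assumed names."""
    d = df.loc[~df["near_holiday"]]
    q = d.groupby("hour")["availability"].quantile([0.25, 0.50, 0.75]).unstack()
    q.columns = ["q1", "median", "q3"]
    q["iqr"] = q["q3"] - q["q1"]  # the IQR band plotted around the median curve
    return q
```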
When EVSE in all regions are examined in aggregate, demand peaks on weekdays and weekend days during the 00:00 hour. Weekday demand is lowest between 06:00 and 12:00, during which time it is nearly 0 kW per EVSE.
In order to identify regional differences in charging demand, EVSE usage in two individual EV Project regions was examined.
In Nashville, where EV Project participants do not have time-of-use electricity rates, demand increases each evening as charging availability increases, starting at about 16:00. Demand peaks in the 20:00 hour on weekdays.
In San Francisco, the majority of EV Project participants have the option of choosing a special time-of-use rate plan from their electric utility. In this region, demand spikes at 00:00, which is the beginning of the off-peak electricity rate period. Demand peaks at 01:00.
In both regions, demand on weekdays and weekend days drops to nearly 0 kW per EVSE between 05:00 and 06:00, even though charging availability does not begin to fall off until 06:00 or later. This suggests that the Nissan LEAFs being charged have sufficient time to fully charge during the night. Note that this is a function of how much the vehicles are driven prior to charging. This topic will be addressed in future works.
In San Francisco, the financial incentive provided by time-of-use rates appears to successfully shift charging demand to off-peak hours. This may benefit the electric utility by preventing an increase in peak system demand. However, a large number of users in this region schedule charging to begin immediately at midnight, which is the beginning of the off-peak period. This low diversity in charging start time creates an unintended demand spike at the beginning of the off-peak period. This may pose a different set of problems to the electric utility.
relations group and leads the EVSE Infrastructure planning consulting efforts.
Don Scoffield
Mr. Scoffield is an engineer in the Energy Storage and Transportation Systems department at the Idaho National Laboratory.
John Smart
Mr. Smart is an engineer in the Energy Storage and Transportation Systems department at the Idaho National Laboratory.
Figure 2: Blink DC fast charger with Nissan LEAF.
Figure 3 shows the major metropolitan areas where The EV Project is deploying charging infrastructure.
Figure 3: EV Project cities.
At the end of 2011, there were approximately 4,000 Nissan LEAFs and 200 Chevrolet Volts enrolled in the project. There were equal numbers of residential EVSE installed, because each participating vehicle owner had a Blink residential AC Level 2 EVSE installed in their residence. Additionally, approximately 950 publicly available AC Level 2 EVSE and 15 DC fast chargers had been installed by the end of 2011. Table 1 shows the number of units, by project region, that had been deployed and transferred data to the Idaho National Laboratory.
Figure 4: Charging availability curve for many residential EVSE in The EV Project.
Figure 5: Weekday time-of-day charging availability from Figure 4 plotted on a single 24-hour scale.
Figure 6: Weekday time-of-day charging availability envelope derived from Figure 5.
Demand during the day is close to zero. This indicates a strong preference among EV Project participants for night-time residential charging. The weekly pattern revolves around the weekends. The lowest demand occurs on the weekend days. Demand increases on each weekday until it reaches a peak on Wednesday or Thursday night. Then demand diminishes again as the weekend approaches.
Figure 7: Charging demand curve for many residential EVSE in The EV Project.
Figure 8: Time-of-day charging availability plot with peak day curve.
Figure 9: Time-of-day demand plot with peak day curve.
Figure 10: Time-of-day charging availability plot with median and inner quartile range.
Figure 14: Weekday time-of-day charging availability for Nashville.
Figure 15: Weekend time-of-day charging availability for Nashville.
Figures 16 and 17 show time-of-day charging availability for the San Francisco region in Q4 2011.
Figure 16: Weekday time-of-day charging availability for San Francisco.
Figure 17: Weekend time-of-day charging availability for San Francisco.
Figure 24: Weekend time-of-day charging demand for San Francisco.
Figures 25 and 26 show the same weekday and weekend charging demand for San Francisco, but with the time scale on the x-axis shifted to the right by 16 hours to allow visualization of the increase in demand at midnight.
Figure 25: Weekday time-of-day charging demand for San Francisco with shifted time scale.
Figure 26: Weekend time-of-day charging demand for San Francisco with shifted time scale.
This paper summarizes early usage of EV Project residential EVSE in households with Nissan LEAFs, based on data collected during Q4 2011 from 2,704 EVSE. A method of data aggregation was developed to summarize EVSE usage by means of two metrics: charging availability and charging demand. Charging availability was plotted relative to time of day and day of the week to show the range of the percentage of EVSE connected to a vehicle over time. Charging demand was plotted to show the range of charging demand of the EVSE on the electric grid over time.
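A minimal version of that aggregation, assuming a hypothetical table of connect/disconnect timestamps and the 2,704-unit fleet mentioned above, could look as follows; the column names are illustrative, not the actual EV Project schema.

```python
import numpy as np
import pandas as pd

# Hypothetical session table: one row per charge event.
sessions = pd.DataFrame({
    "connect":    pd.to_datetime(["2011-10-03 18:05", "2011-10-03 19:40"]),
    "disconnect": pd.to_datetime(["2011-10-04 07:10", "2011-10-04 06:30"]),
})
n_evse = 2704  # reporting EVSE in Q4 2011, per the paper

# Minute-resolution clock spanning the window of interest
clock = pd.date_range("2011-10-03", "2011-10-05", freq="1min")

connected = np.zeros(len(clock))
for _, s in sessions.iterrows():
    connected += (clock >= s.connect) & (clock < s.disconnect)

availability = pd.Series(100 * connected / n_evse, index=clock)

# Collapsing onto a 24-hour scale gives the time-of-day profiles in the plots
profile = availability.groupby(availability.index.time).median()
```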
Which Is More Likely to Achieve Cardiac Synchronization: Left Bundle Branch Pacing or Left Ventricular Septal Pacing?
INTRODUCTION
In advanced heart failure patients with low left ventricular ejection fraction and left bundle branch block (LBBB), cardiac resynchronization therapy (CRT) via stimulation of both the right ventricle (RV) and the left ventricular lateral wall is a recommended therapeutic strategy (1-3). However, conventional biventricular pacing causes a dyssynchronous cardiac contraction due to non-physiological fusion of paced propagation, with a non-response rate of up to 30% (4, 5). In 2016, Mafi-Rad et al. (6) established the viability of left ventricular septal pacing (LVSP) via a trans-interventricular septal approach in 10 patients with sinus node dysfunction, which shortened QRS duration and preserved acute left ventricular contractility compared to RV pacing. Huang et al. refined LVSP and first introduced left bundle branch pacing (LBBP) in 2017 (7), which could restore physiological left ventricular contractility in a patient with LBBB by pacing the left bundle branch (LBB) immediately beyond the conduction blockage with satisfactory pacing parameters. Many studies have demonstrated the feasibility and stability of LBBP in patients with pacemaker indications, and it has been proposed that LBBP is a novel physiological pacing method for delivering CRT to achieve electrical resynchronization in patients with LBBB (8-10).
BRIEF PACING MECHANISMS OF LBBP AND LVSP
Selective LBBP (SLBBP) and non-selective LBBP (NSLBBP) are two subgroups of LBBP. In SLBBP, only the LBB trunk or its proximal fascicles are captured (Figure 1A). In NSLBBP, the LBB and adjacent myocardium are captured concomitantly (Figures 1B,E). If only the left ventricular septal myocardium is captured, it is LVSP (Figure 1D). Both LVSP and LBBP usually present a paced pseudo right bundle branch block (RBBB) pattern in lead V1 (11), with the percentage of direct evidence that LBBP captured the LBB ranging between 60 and 90% (12-14). Therefore, LBBP described in some previous studies was actually LVSP. A method to distinguish LBBP from LVSP with a specificity of 100% has recently been presented: measuring the time from stimulus to left ventricular activation in lead V5 or V6 (Stim-LVAT) at high and low outputs (11). If the Stim-LVAT remains shortest and constant (prolonged ≤ 10 ms) as the pacing output decreases, it must be LBBP, because LBBP directly captures the LBB, resulting in physiological LV excitation; otherwise LVSP can be considered, because LVSP excites the left ventricular septum first, rather than the LBB. SLBBP and NSLBBP can be distinguished by the discrete component and isoelectric interval between the pacing artifact and V wave on the intracardiac electrogram with unchanged Stim-LVAT (11).
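The output-dependent Stim-LVAT criterion amounts to a simple decision rule. A minimal sketch follows, with the 10 ms cutoff taken from the description above; the function and argument names are illustrative, not any published algorithm.

```python
def classify_capture(stim_lvat_high_ms: float, stim_lvat_low_ms: float) -> str:
    """Distinguish LBBP from LVSP by how Stim-LVAT changes as pacing output is
    lowered: if the interval stays short and essentially constant (prolongation
    <= 10 ms), the LBB is captured directly (LBBP); a larger prolongation
    suggests the septal myocardium is captured first (LVSP)."""
    prolongation = stim_lvat_low_ms - stim_lvat_high_ms
    return "LBBP" if prolongation <= 10.0 else "LVSP"

# Example: 74 ms at high output vs 75 ms at low output -> essentially constant
print(classify_capture(74.0, 75.0))  # LBBP
```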
COMPARISON OF LBBP AND LVSP IN INTERVENTRICULAR SYNCHRONY
In the paper published in Frontiers in Cardiovascular Medicine, Curila et al. (15) used ultra-high-frequency electrocardiography to compare ventricular depolarization in SLBBP, NSLBBP, and LVSP in 57 bradycardia patients, rigorously distinguished by Stim-LVAT. They concluded that LVSP preserved interventricular synchrony and had the same or better local depolarization durations than NSLBBP and SLBBP. Furthermore, they investigated two different types of NSLBBP capture, namely, NSLBBP with the LBB and adjacent myocardium captured (Figure 1B), and NSLBBP with the LBB and left septal myocardium captured (Figure 1E). In the first type, NSLBBP converts to SLBBP, with the shortest and constant Stim-LVAT, as the pacing output decreases. In the second type, NSLBBP converts to LVSP, with a prolonged Stim-LVAT, as the pacing output decreases. They evaluated the two types of NSLBBP capture and found no statistical difference in Stim-LVAT between them, but NSLBBP with the LBB and left septal myocardium captured showed greater interventricular synchronization.
Then, which pacing strategy is more physiological, LBBP or LVSP? SLBBP and NSLBBP, unlike LVSP, capture the intrinsic conduction system and rapidly excite the LV to maintain left ventricular synchrony at levels comparable to intrinsic left ventricular activation (16). At the same time, activation propagates slowly from left to right in the interventricular septum to excite the RV, resulting in interventricular dyssynchrony. LVSP, on the other hand, captures left ventricular septal myocardium, resulting in direct left-to-right septal activation and preserving interventricular synchrony. The terminal R′/r′ wave duration in lead V1, which indicates delayed right ventricular excitation, was significantly longer in LBBP than in LVSP (17), also indicating that LBBP caused more pronounced interventricular dyssynchrony than LVSP. However, this interventricular synchrony of LVSP may not be physiological. Instead of using the same stimulation marker, such as the pacing artifact, Curila et al. calculated interventricular dyssynchrony in SLBBP, NSLBBP, and LVSP as the difference between the first and last activation (15). There is no doubt that the Stim-LVAT of LVSP is significantly longer than that of LBBP, implying that LV excitation in LVSP occurs later than in LBBP. As a result, the improved interventricular synchronization of LVSP is attributable to greater overlap of LV and RV activation produced by delayed activation of both the LV and the RV (18).
Curila et al. evaluated LBBP only in the unipolar pacing configuration, not the bipolar configuration (15). Lin et al. developed a bilateral bundle branch area pacing strategy that involves stimulating the cathode and anode in various pacing configurations to capture both the LBB and the right bundle branch (RBB) area, which can diminish the delayed right ventricular activation caused by LBBP and result in more physiological ventricular activation (19). It is essentially LBBP with a bipolar pacing configuration (Figure 1C), with the cathode tip capturing the LBB and the anode ring capturing the RBB area. Shimeno et al. also revealed that the terminal R′/r′ wave duration of LBBP with a bipolar pacing configuration is shorter than that of LVSP, presumably due to the contribution of the anodal capture during bipolar pacing (17). In addition, some previous studies and case reports have shown that LBBP can shorten the QRS duration of intrinsic RBBB or even completely correct RBBB (19-23), while LVSP cannot, but the underlying mechanism remains unclear and needs further study.
CONCLUSION
Compared with LVSP, LBBP is a more suitable pacing strategy for CRT, and many studies have confirmed its safety, stability, and efficacy. Future study will focus on how to diminish the RBBB associated with LBBP in order to obtain better physiological interventricular synchrony; for example, adjusting the atrioventricular delay to combine LV stimulation by LBBP with intrinsic RV excitation in patients with normal RBB conduction, or modifying the interelectrode distance of the pacing lead to better accomplish bilateral bundle branch area pacing in patients with RBBB. Although LVSP in close proximity to the LBB can be an alternative choice, clinically, this is essentially NSLBBP. The pacing output necessary to convert LVSP to NSLBBP, on the other hand, has not been investigated, and it is unknown if this output would have an adverse effect on pacemaker battery longevity. The long-term clinical effects of LVSP and LBBP remain unclear. Current studies solely examine the differences in electrophysiologic characteristics between LVSP and LBBP, such as Stim-LVAT, QRS duration, terminal R′ wave duration, QRS area, etc. In the future, it will be necessary to evaluate the echocardiographic activation of LVSP and LBBP, encompassing not only intraventricular synchronization but also interventricular synchronization.
AUTHOR CONTRIBUTIONS
KZ wrote the original manuscript and conceptualized the idea. DC and QL supervised and wrote and edited the manuscript for publication. All authors contributed to the article and approved the submitted version.
Virtual Care in Undergraduate Medical Education: perspectives beyond the pandemic. How medical education can support a change of culture towards virtual care delivery in Canada
The pandemic has further elevated the importance of telemedicine, teleconsultation, and technology as essential components of the delivery of care in all settings. Prior to the pandemic, instruction surrounding the safe delivery of virtual care in undergraduate medical education was sparse and informal. For care to be delivered to the high standards expected of Canadian physicians, the University of Ottawa undergraduate medical program (UGME) made the decision to define virtual care as a series of tools to facilitate and support the safe delivery of care. By focusing on virtual care as a set of tools, it provides the framework for skill development for future clinicians early in their careers and provides a critical thinking pathway to support the ever-evolving landscape of digital technology in the provision of safe, effective, timely, and patient-centered care. This white paper shares our experience creating a virtual care curriculum and the possible implications for medical education.
Introduction
When the global pandemic emerged in March 2020, medical faculties across Canada faced major challenges to ensure the continuity of medical education. The significant acceleration in telemedicine services required prompt adjustments in medical education training. In March 2020, clerkship students were removed from the clinical environment as it underwent a rapid and significant transformation at both the institutional and community level. Though virtual care has had a role in the delivery of care since the 1970s, the pandemic further elevated the importance of telemedicine, teleconsultation, and technology in all settings. 1 Virtual care will likely remain a permanent fixture in the Canadian health care system post-pandemic. 2 Prior to the pandemic, an important cultural shift towards the delivery of virtual care was already taking place. In 2019, the Federation of Medical Regulatory Authorities of Canada (FMRAC) started to provide guidance frameworks on the minimum regulatory standards to FMRAC members to help inform the development of the medical regulatory authorities' policies and guidance to physicians to promote pan-Canadian consistency in virtual care. 3 In February 2020, a joint virtual task force led by the Canadian Medical Association (CMA), the College of Family Physicians of Canada (CFPC), and the Royal College of Physicians and Surgeons of Canada (RCPSC) released their report on virtual care. In the report, they made recommendations for scaling up virtual medical services and produced a virtual care handbook for Canadian physicians. 4,5 The pandemic precipitated the implementation of this shift. National certifying colleges such as the RCPSC and the CFPC organized resources for their members. They provided guides for the use of telemedicine and for advanced and meaningful use of Electronic Medical Records (EMR). 6,7 This initiative was supported by the Canadian Medical Protective Association (CMPA), which provided guidance and reassurance regarding virtual care as long as due diligence is maintained in professional practice. 8 Shifts in office and virtual primary care during the early COVID-19 pandemic in Ontario were recorded and reported in the Canadian Medical Association Journal, which demonstrated a large shift from office to virtual care in Ontario primary care over the first four months of the COVID-19 pandemic. 9 In May 2020, the CFPC released data from their member survey on COVID-19, which found that 89% of family physicians contacted their patients at home by phone, email, or other methods. Four out of five patient visits were virtual, with most consultations occurring by phone. Many virtual appointments were also using video conferencing tools such as Zoom or telemedicine services either integrated into their EMRs or through the Ontario Telemedicine Network (OTN). 10 In the US, a McKinsey & Company claims-based analysis suggested that up to 20% of emergency room visits could be avoided using virtual care, and almost a quarter of health care office visits and outpatient volume could be delivered virtually. 11 Additionally, a Deloitte pre-pandemic study found that 50% of health care executives thought that at least a quarter of all outpatient care, preventative care, long-term care, and wellbeing services would be delivered through virtual care by 2040. 12 According to Deloitte, virtual care is more than just a trend. Virtual care has far-reaching implications beyond patient-provider interactions and can also affect management at the health system level. 13
For Deloitte, "rapid investments in virtual care solutions in response to the coronavirus crisis have accelerated Canada down a path that will have significant and long-lasting impacts on health care delivery in this country. Continuous investment will be required to sustain the change."

What does this mean for medical education?
According to the CMA's Virtual Care Task Force, virtual care is "any interaction between patients and/or members of their circle of care, occurring remotely, using any forms of communication or information technologies, with the aim of facilitating or maximizing the quality and effectiveness of patient care." 4 Virtual care can be delivered in synchronous (video or phone) and asynchronous (messaging patients via secure portals) methods. For care to be delivered to the high standards expected of Canadian physicians, the University of Ottawa undergraduate medical program (UGME) made the decision to define virtual care as a series of tools to facilitate and support the delivery of care. Virtual care as a set of tools has far-reaching implications beyond the pandemic and the immediate geographical location, supporting the educational concepts of collaborative care, addressing the needs of underserved populations, and providing health care to remote and rural areas where access to care is a significant barrier. Focusing on virtual care as a set of tools provides the framework for clinical skill development unique to the virtual environment and the development of a broader set of competencies for future clinicians, introduced early in their careers. It also provides a critical thinking pathway to support the ever-evolving landscape of digital technology in the provision of safe, effective, timely, and patient-centered care.
Medical education must take the necessary steps to provide physicians in training with the knowledge, skills, critical appraisal tools, and the attitudes required to integrate virtual care as a component in the delivery of safe, effective, and comprehensive patient care.
In 2019, Sharma et al. published an article outlining core competencies for the Medical Virtualist. 14 Supported by recent opinion articles outlining the necessity of, and shift towards, a hybrid model of delivery of care post-pandemic, as well as significant industry investment in virtual care platforms and recruitment of health care professionals, medical education has a socially accountable obligation to ensure that graduates are meeting the emerging standards of virtual care. 13,15,16,17 The Medical Council of Canada (MCC) has an important role to play in solidifying these standards of care.
As outlined by Sharma et al., competencies and standards of care could fit within three domains: digital communication and "webside" manner, scope and standards of care, and virtual clinical interactions. For example, Sharma et al. describe the optimization of visualization, body language, and speech, as well as graphic-assisted communication. 14 The goal would be to incorporate these important communication skills within the clinical skills development programs. By focusing on these types of skill acquisition in order to provide virtual care safely and effectively, further curriculum development and integration into existing clinical skills development programs at both the medical school and residency years in a graduated fashion will be possible. Furthermore, the importance of Personal Health Information Protection Act (PHIPA) compliance, e-prescribing, and virtual care pathways for follow-up and urgent situations can be reinforced during medical school. 18 There is also emerging evidence on the appropriateness of the virtual physical exam, and the diagnostic accuracy or agreement of virtual care appears to be similar to that of in-person visits in primary care. 19,20 These skills can be introduced longitudinally and assessed appropriately.
It is no surprise that the medical learning environment has changed significantly over the last two years with the introduction of virtual care. As such, new adaptations of existing frameworks are being proposed to help health professional learning environments prepare future physicians for this new clinical environment. 21 Because of the complexity of virtual care for both the clinician and the learner, we elected that the foundational concepts and associated clinical skills must be introduced early in a learner's educational journey to support successful and safe virtual care encounters. These foundational competencies are required to critically evaluate the rapidly evolving virtual care and digital health landscape. The CMA's virtual task force report listed the following key components for virtual care development: interoperability, defined as the ability of EMRs, patients' virtual charts, and other stakeholders of medical information, such as pharmacies, to exchange and make use of information; governance; licensure; quality of care; payment models; and medical education at all levels (undergraduate, postgraduate, and continuing professional development). 4 At the University of Ottawa, we decided to focus primarily on the importance of quality of care and its implications for medical education.
As we prepared for the re-integration of the University of Ottawa clerkship students in May 2020 into the changed clinical environment, it was clear that we needed to provide an opportunity, in a structured learning environment, to introduce the concepts associated with quality virtual care. A virtual care curriculum was developed. The goal was to prepare learners for the changed clinical environment and to become familiar with virtual care tools. In return, learners could participate in the care of their patients and play a supportive role in the interdisciplinary care teams. Secondary gains could be anticipated, as this would support change in practice through future training and practice decisions. Like others, we share the vision that virtual care/digital health is not simply a matter of moving to a new platform; it requires a cultural transformation. 22
Curriculum innovation at the University of Ottawa
After a review of our existing programs and learning objectives, a working group of clinical experts in virtual care was formed, drawing from the expertise of practicing interprofessional clinicians with experience in virtual care, medical education experts, experts from the Department of Communications, and learner representatives. Seeing no learning objectives and limited educational activities in our curriculum, we developed a set of learning objectives that were linked to our MD Program objectives and the relevant CanMEDS competencies and roles. At the time of the development, there were no learning objectives in the literature. Considering the short timeline for the reintegration of the clerkship students in May 2020, we focused mainly on educational activities aligned to the following two key themes:
• The history and medical-legal implications of virtual care

From conception, the vision for this program was a longitudinal curriculum, to be further developed using a spiral design, with program assessment and scholarship congruent with its goals, and to be delivered in both French and English. It was first introduced to learners in Year 3 (MD2021), re-integrated into the clinical environment in May 2020. In August 2020, incoming Year 3 learners (MD2022) were started in the program, and finally, in spring 2021, Year 2 and Year 1 learners (MD2023 and MD2024) were started in the program. Though initially the program was intended for clerkship students, the benefit of introducing foundational concepts early in the learner's educational journey allowed the program to be stretched out to cover pre-clerkship.
The following is a list of the University of Ottawa Virtual Care Objectives:
1. Describe the technical requirements, proposed benefits, and challenges of providing health care to patients through telemedicine.
2. Explain the differences between the various categories of telemedicine (teleconsultation, tele-expertise, medical telemonitoring, tele-medical assistance, and emergency telemedicine).
3. Describe the technical requirements that must be in place to provide patient care safely through a telemedicine platform.
4. Describe the patient groups that would benefit from participation in a telemedicine program.
11. Explain how the principles from social-behavioral sciences can assess the impact of accessing or seeking care using telemedicine.
12. Describe how a hybrid model of virtual care and in person visits complement each other and provide value to the patient and the clinician experience.
The curriculum was structured into two components.
Part 1: Theoretical component (learning objectives 1-5)
• Two sessions covering the history of telemedicine, best practices in telemedicine, preparing for a teleconsultation (communication skills, physical exam skills, the physical environment, and follow-up), and finally, challenges encountered in telemedicine with practice tips. These sessions are delivered through online facilitated sessions by interprofessional health care providers and physician experts and include videos, patient testimony, and existing modules.
Part 2: Practical component (learning objectives 6-12)
• Five sessions of teleconsultation with a standardized patient. Cases are developed in-house with an interdisciplinary team and focus on history taking, physical examination skills, management plan development, and case conference/collateral information gathering. Physician facilitators provide feedback and lead a group debriefing session at the end of each teleconsultation.
Our experience to date
To date, 95 francophone and 235 anglophone students from the MD2021 and MD2022 cohorts have completed the virtual care program, while 48 francophone and 121 anglophone students from the MD2023 cohort have partially completed it; their participation is ongoing. All our course material is evaluated through feedback forms; initial feedback from students indicated increased confidence in the use of teleconsultation as a tool to deliver patient care and agreement that virtual care is an important educational topic to be discussed and explored further. Practically speaking, learners can seamlessly integrate into the learning environment when virtual care opportunities are present.
With the recent appointment of the inaugural Faculty lead for the longitudinal virtual care curriculum, we expect further expansion of the current learning objectives to encompass more competency-based language, aligning the curriculum to the national Entrustable Professional Activities (EPAs) and developing an appropriate assessment strategy to ensure students have acquired the knowledge, skills, attitudes, and behaviours required to be entrusted to deliver virtual health care. Furthermore, there will be continued expansion of the program to include faculty development and program evaluation. Research will be conducted to identify emerging themes and educational trends in the evolving learning environment. A working group on curriculum renewal is currently re-assessing the program and making recommendations for true integration within the curriculum. Ideas around artificial intelligence, analytics/machine learning, remote monitoring of patients, collaborative interprofessional care, ePrescribing, digital health tools, electronic triage tools, treatment optimization, and digital communication tools will need further definition and discussion to see how they can best integrate into medical education. Finally, we will continue to explore and expand on the role that virtual care plays in responding to local, provincial, national, and international communities' needs to hopefully help bridge gaps in accessing care.
There is a lack of research at this time on the assessment of medical students and virtual care. However, the use of written examination questions, standardized patients, OSCEs, and simulation in the assessment of learners is well established and accepted as an important part of assessment throughout a learner's medical education. The competencies outlined based on the current curriculum objectives could be demonstrated through written and simulated patient care scenarios. Training in virtual care can be integrated within the physician activities already evaluated by medical schools in areas of patient communication and suitability for patient encounters, as well as professional behaviors demonstrated through EPAs in the learning environment.
Advantages, challenges, and recommendations for a virtual care curriculum
Virtual care adds a new layer of complexity and expands on the competencies currently expected of all physicians for entry into practice.
Virtual care competencies will enable our educational and assessment systems to incorporate digital communication, patient interactions in a digital world, virtual clinical interactions, and adjustment of the physical exam within the existing curriculum structure. Because these are tools that need practice appropriate to the learning environment, Faculty will be well positioned to support the learner in applying these skills to promote safe and appropriate clinical care in a virtual environment. Discussions around virtual care tools also expand on already established areas covered in medical education, such as privacy, security, medico-legal considerations, and patient safety, which are cornerstones to ensure the safe and effective practice of future physicians. Given that professional behaviors are already integrated into most medical programs, such as through EPAs, the direct observation of the application of these tools and skills can help assess these behaviors on a virtual care platform. It is hoped that it will contribute to our collective commitment to excellence, respect, integrity, empathy, accountability, and altruism within the Canadian health care system. 23 Because medical schools provide the foundational training of future physicians, content must reflect the changing landscape of health care delivery in Canada. It will ensure that students are prepared and can further support the cultural transformation of the health care system through learned lived experiences. As learners graduate from programs, we hope that they will have developed the critical reasoning skills necessary to apply technology safely and effectively and to evaluate the resources and tools available to them critically. More importantly, we also hope that learners will put these skills into practice to provide more equitable access to care for patients, improve patient engagement and outcomes, and help decrease existing barriers to care and improve the portability of the health care system.
There are some important high-level barriers to moving forward with virtual care training. As identified by the Virtual Care Task Force, regulatory authorities and industry must follow suit and provide the necessary incentives, support, and engagement to ensure the longevity and portability of the program. 4 It is reassuring to see in the last two years that these organizations are evaluating these issues and calling on governments to support this cultural shift. Most recently in Ontario, virtual care billing codes have been included in the newly ratified physicians' agreement, solidifying the permanence of virtual care as an option for health care delivery. 24 It is important to note that the industry may look to profit from the ease of accessibility of virtual care by charging patients premiums for access. Significant technological barriers also exist for patients, and it is naïve to think that all patients are technologically savvy, have adequate equipment, or have fast enough Wi-Fi. Virtual care should not become an all-or-nothing approach to clinical encounters. It should be an option, when appropriate, and not only reserved for those with financial or technological means. Creative approaches may be needed to ensure health care equity and access. Finally, virtual care tools and technology are ever-changing and rapidly evolving. There is no standardization of digital technology or lexicon, which can be overwhelming for both patients and practitioners. Knowing this, we feel that our role as educators is to focus on the critical appraisal of these technologies and the competencies that give the clinical foundations to provide the best care regardless of the tool used. Hence, medical schools play an important role in assessing the important communication aspects of virtual care and the professional behaviors associated with them.
Conclusion
Virtual care curricula offered longitudinally throughout medical school can train future physicians to offer high-quality virtual care in a generalizable and sustainable way.
Stakeholders such as the AFMC Virtual Care in Medical Education Task Force, curriculum leads and national organizations such as the CMA, CFPC, RCPSC, and patient advocacy groups recognize the value and importance of the clinical application of virtual care tools for the care of patients.
By acknowledging the need to incorporate virtual care training in medical education, we can set expected standards, deliverables, and expected outcomes as learners graduate from medical school. Virtual care competencies can be integrated into current assessment models with emphasis on the communication skills, critical reasoning skills, and physicians' behaviours required to ensure safe, effective, and appropriate virtual care for patients. By doing so, it may help with the development of national virtual care accreditation standards, which can help medical schools not only create programs and learning opportunities to support the shift in the culture of care delivery, but also hold future physicians to the excellence and standards of care expected of Canadian graduates.
The role of exhausted natural killer cells in the immunopathogenesis and treatment of leukemia
The immune responses to cancer cells involve both innate and acquired immune cells. To date, most attention has been drawn to the adaptive immune cells, especially T cells, while it is now well known that the innate immune cells, especially natural killer (NK) cells, play a vital role in defending against malignancies. While immune cells try to eliminate malignant cells, cancer cells try to impair the function of these cells and suppress immune responses. The suppression of NK cells in various cancers can lead to the induction of an exhausted phenotype in NK cells, which impairs their function. Recent studies have shown that the occurrence of this phenotype in various types of leukemic malignancies can affect the prognosis of the disease, and targeting these cells may be considered a new immunotherapy method in the treatment of leukemia. Therefore, a detailed study of exhausted NK cells in leukemic diseases can help both to understand the mechanisms of leukemia progression and to design new treatment methods by creating a deeper understanding of these cells. Here, we will comprehensively review the immunobiology of exhausted NK cells and their role in various leukemic malignancies.
Introduction
Given the increased prevalence of hematopoietic malignancies and the existing difficulties in treatment, it is essential to study the etiology and immunopathogenesis of blood cancers, especially leukemias. Despite all the progress achieved, chemotherapy is the main therapeutic strategy for almost all hematopoietic malignancies, which magnifies the importance of identifying novel and efficacious therapeutic targets [1, 2].
Both innate and adaptive immune responses are critical in the defense against cancer cells. Although it is generally supposed that adaptive immune cells, particularly T cells, are an essential part of the immune response against cancer cells, natural killer (NK) cells also play a critical role in the defense against malignant cells. They are the most critical innate immune lymphocytes in defense against infections and cancers. While impaired cytotoxic function of NK cells is correlated with cancer progression, the upregulation of activating receptors on NK cells is correlated with better disease prognosis [1, 2]. Similarly, the accumulation of functional NK cells in the tumor microenvironment has also been associated with low grades of cancer [3, 4].
Cancer cells can induce exhaustion in NK cells by changing the phenotype and function of NK cells and suppressing their anti-tumor function. The term "exhaustion" was initially used for T lymphocytes, wherein these cells undergo phenotypic changes and functional impairment following repeated exposure to antigens under pathological conditions, such as cancer or chronic infection. Induction of this state is associated with extensive alterations in T lymphocytes, including the induction of inhibitory immune checkpoints, metabolic changes, epigenetic changes, and changes in molecular signaling pathways [5]. It should be noted that the exhaustion process and its features are different in T and NK lymphocytes, which may be due to the fundamental differences between these two cells: while T cells have very diverse antigen receptors and identify antigens depending on MHC molecules, NK cells recognize their targets through a limited set of germline-encoded receptors [6, 7].
The evidence indicates that the occurrence of exhausted NK cells in leukemic patients has significantly increased and that their anti-leukemic activity has been dramatically inhibited [8, 9]. On the other hand, leukemia treatments based on targeting these cells have been associated with promising results, pointing to a new treatment approach. It seems necessary to mention that using NK cells in treating hematopoietic malignancies has been more successful than in solid tumors. This is probably due to the difficulty NK cells have infiltrating the tumor site and their exposure to inhibitory signals in the tumor microenvironment, which suppress NK cell activity and induce their exhaustion [10].
Despite these efforts, many ambiguous and unknown issues in this field still require detailed and comprehensive studies. In this review article, we will try to discuss the immunobiology of NK cells, the exhaustion of NK cells, the role of exhausted NK cells in leukemia, and the targeting of these cells for the treatment of leukemia.
Immunobiology of NK cells
NK cells are the most important components of the innate immune system that originate from the bone marrow and are present in the bloodstream and tissues such as lymph nodes, liver, thymus, and uterus [11].
Different subtypes of NK cells have been identified in humans and mice. The expression of molecules such as NK1.1, CD49b, and NKp46 determines NK cells in mice. In humans, the CD16 + CD56 + phenotype represents these cells [12]. This classification is partly derived from the differentiation stages of these cells. Accordingly, four stages of differentiation have been observed in murine NK cells. Primary immature NK cells have a CD27 − CD11b − phenotype. In the second stage, these cells obtain the expression of molecules such as CD27, NK1.1, NKp46, and NKG2D. The third step starts with the expression of CD27, CD11b, and S1P5 molecules on the cell surface. In the fourth stage, mature cells will have a CD27 − CD11b + KLRG1 + phenotype that exhibits cytotoxic function.
On the other hand, the differentiation of NK cells in humans includes five stages. In the first stage, in the bone marrow, pre-NK cells originate from lymphoid progenitors. In the second stage, these cells express the IL-15 receptor to ensure their survival from this stage through stage four. CD3 − CD56 bright CD16 − cells are the product of the fourth stage of differentiation; they reside mainly in the lymph nodes, produce a high amount of cytokines, and have low cytotoxicity. The outcome of the fifth stage of differentiation is CD3 − CD56 dim CD16 + cells, which are mainly present in the blood circulation or inflammatory tissues. These cells exert cytotoxic function through the production of perforin and interferon (IFN)-gamma [13, 14].
According to another classification, within CD56 dim cells there are two types: conventional NK cells and adaptive NK cells [15]. Adaptive NK cells have a different metabolic profile and epigenetic characteristics similar to effector CD8 + T cells [16]. These cells are long-lived, and many express CD94/NKG2C [17]. These cells, mainly identified in mice, can survive over six months and self-renew. So far, three categories of adaptive NK cells have been identified: liver-resident NK cells, cytokine-induced memory-like NK cells, and cytomegalovirus (CMV)-specific NK cells [18]. Adaptive NK cells exhibit levels of antigenic specificity and memory recall properties. While the lifespan of conventional NK cells is less than ten days, adaptive NK cells can survive for months and even years [19].
Functionally, NK cells have a high ability to identify and kill virus-infected cells and transformed cells. These cells do not need prior exposure to the antigen to recognize the target and identify antigens through germline-encoded receptors. NK cells express various inhibitory and activating receptors, whose interaction with different ligands leads to the activation or inhibition of their activity following target cell recognition. It is essential to mention that in the case of the simultaneous engagement of activating and inhibitory receptors with their ligands, inhibitory signals will dominate, probably to maintain homeostasis and prevent self-directed responses [20]. NK cells express several inhibitory receptors, including T-cell immunoreceptor with immunoglobulin and immunoreceptor tyrosine-based inhibitory motif domains (TIGIT), programmed death protein 1 (PD-1), T-cell immunoglobulin and mucin domain-containing protein 3 (TIM3), CD96, CD112R, interleukin (IL)-1R8, NKG2A, killer-cell immunoglobulin-like receptors (KIRs), and lymphocyte-activation gene 3 (LAG-3). The activating receptors of these cells include NKG2, NKG2D, and CD226 [21]. NK cells express many inhibitory checkpoints of T cells, which has led many researchers to think that it is possible to enhance the activity of NK cells by inhibiting these checkpoints and preventing exhaustion, similar to what has been experienced with T lymphocytes.
NK cells use a variety of mechanisms to activate and kill virus-infected or transformed cells. Identifying cells that lack MHC I molecules can lead to the non-activation of inhibitory checkpoints and, as a result, the dominance of NK cell activating receptors. Antibody-dependent cell-mediated cytotoxicity (ADCC) is another important mechanism of NK cell activation, in which NK cells bind to antibody-opsonized particles through the CD16 receptor. Moreover, some inflammatory cytokines can also activate NK cells [22, 23].
NK cell exhaustion
Exhaustion of immune cells is a state in which immune cells are defective in terms of function and proliferative capacity. Exhaustion is associated with a significant transcriptional profile alteration, which is a consequence of extensive phenotypic, metabolic, and epigenetic changes. Exhaustion primarily transpires in the presence of antigens and through recurrent stimulation, exemplified by chronic infections and malignancies. Considering the lifespan of NK cells, it seems that exhaustion is mainly related to adaptive NK cells, which have long-term survival and can be chronically and repeatedly exposed to infectious or tumor antigens. However, there is evidence indicating that conventional NK cells can also undergo an exhaustion process following repeated stimulation with cytokines or infectious agents [24]. As mentioned earlier, the exhaustion of NK cells is associated with various phenotypic, metabolic, functional, and epigenetic changes, which we describe below (as shown in Fig. 1).
The augmentation of the expression of inhibitory receptors serves as a critical marker for the state of exhaustion in natural killer cells. Accordingly, increased expression of the PD-1 molecule is considered one of the most critical indicators of exhausted cells in both T and NK cells. The PD-1 molecule is not only an indicator of exhausted NK cells, but its signaling plays a significant role in the exhaustion process [25]. The upregulation of PD-1 in CD56 dim NKG2A − KIR + CD57 + NK cells has been observed in both solid tumors and hematopoietic malignancies. These elevated PD-1 levels were accompanied by reduced cytokine secretion, defective degranulation, and decreased proliferation [1, 26, 27].
It should be noted that some studies have suggested that the expression of TIGIT and, to a lesser extent, LAG-3 and TIM-3 are more critical compared to PD-1 as indicators of NK cell exhaustion [28, 29]. Another molecule whose signaling is mentioned as one of the influential factors in inducing NK exhaustion is NKG2A. CD56 dim NKG2A high NK cells were also increased in hepatocellular carcinoma patients, which was associated with a poor prognosis [30]. The KIR receptor family and CD96 are other inhibitory molecules increased in exhausted NK cells.
Moreover, the decrease in the expression of stimulatory receptors is correlated with the exhaustion of NK cells. The diminished presence of the stimulatory receptor NKG2D has been regularly observed in different types of solid tumors and leukemia [31]. The reduced expression of NKG2D was also correlated with decreased DAP10 expression [32]. Furthermore, decreased expression of several other NK cell activating receptors, including NKp30, NKp44, NKp46, CD16, 2B4, and CD226, has also been demonstrated in cancer patients, associated with poor prognosis [1]. Given that the equilibrium between the signals obtained from the inhibitory and activating receptors of NK cells will ultimately dictate the outcome of cellular activity, it appears that the decrease in expression levels of activating receptors in individuals with cancer results in inhibitory signals prevailing in these cells.
From a functional point of view, it has been determined that NK cells have an incomplete function in a tumor environment or chronic infection and lose the ability to perform cytotoxic activities. For example, cancer progression in murine models is associated with a decreased frequency and function of NK cells [33, 34]. Furthermore, NK cells adoptively transferred to leukemic mice also lost their cytotoxic ability after exposure to malignant cells [22]. Also, tumor-infiltrating NK cells exhibit impaired cytotoxic functions, which is in part related to the downregulation of IFN-γ, FasL, perforin, CD107a, granzyme B, and tumor necrosis factor-related apoptosis-inducing ligand (TRAIL) [30, 31].
In addition to phenotypic and functional modifications, there seem to be some transcription profile changes in exhausted NK cells in the tumor area. It has been reported that the transcription factors T-bet and eomesodermin (Eomes), which play an essential role in NK cells' activity, differentiation, and maturation, are significantly reduced in exhausted NK cells [22, 35]. NK cells adoptively transferred to a murine leukemia model exhibited reduced expression of these transcription factors, which was associated with impaired NK cell function. Interestingly, the induction of Eomes expression in these cells partially restored the function of NK cells [22]. This finding shows that the reduction of Eomes is both an indicator of exhaustion in NK cells and part of its induction process.
To understand the induction of NK cell exhaustion, it is necessary to examine the influential factors implicated in this process. One of the influential factors in NK exhaustion is the immunosuppressive microenvironment of the tumor. The most important immunosuppressive cells include regulatory T cells (Tregs) [36], myeloid-derived suppressor cells (MDSCs) [37], tumor-associated fibroblasts [38], tumor-associated neutrophils [39], and tumor-associated macrophages [40]. The upregulation of immunosuppressive cells in cancer patients is associated with a decreased frequency and function of NK cells. These inhibitory cells can inhibit NK cells and induce exhaustion through various mechanisms dependent on cell-cell contact or independent of cell contact (through the secretion of immunosuppressive factors) [41-43]. Immune inhibitory cytokines such as transforming growth factor-β (TGF-β) and IL-10 are also essential in NK cell exhaustion [44, 45]. The hypoxic condition of the tumor microenvironment is an important factor in the suppression and exhaustion of NK cells. Hypoxia downregulates NKG2D, NKp46, NKp30, and NKp44 in NK cells through the expression of hypoxia-inducible factor 1α (HIF-1α) [46, 47].
The upregulation or downregulation of certain factors on cancer cells, which act as ligands for inhibitory or activating receptors of NK cells, is considered to be one of the most crucial elements leading to NK cell exhaustion. As mentioned, the balance between the signals received from activating and inhibitory receptors of NK cells will determine their function. This balance will tilt towards inhibitory signals in the vicinity of malignant cells. Numerous investigations have demonstrated the notable presence of diverse molecules on the surface of malignant cells that serve as ligands for the suppressive receptors of NK cells; consequently, this phenomenon results in the impairment of NK cells and, ultimately, their exhaustion. For example, increased expression of CD200 and galectin-9 on cancer cells (ligands of CD200R and TIM-3 on NK cells, respectively) has been demonstrated in acute myeloid leukemia (AML) [48, 49]. On the other hand, the downregulation of CD48 on leukemic cells (the ligand for the activating receptor 2B4 on NK cells) was observed in AML [50].
Finally, exosomes secreted from cancer cells are also effective factors in the inhibition and exhaustion of NK cells. Through different mechanisms, such as the secretion of inhibitory cytokines like TGF-β, the delivery of microRNAs involved in NK suppression, or the expression of ligands for NK cell inhibitory receptors, exosomes can lead to defects in the cytotoxic activity of these cells and facilitate the exhaustion process [51, 52].
Exhausted NK cells and leukemia
The inactivation of NK cells and their failure to eradicate leukemic cells is a prevailing occurrence noted in nearly all leukemic malignancies. In this section, we will review the studies on the inefficiency of NK cells in various leukemias, the mechanisms of NK exhaustion, and the therapeutic strategies used to strengthen NK cells.
CLL
The ability of ammonium chloride to inhibit the cytotoxic activity of NK cells cultured with the leukemia cell line K562 (chronic myelogenous leukemia) was one of the earliest observations related to the inhibition of these cells in leukemia. This suppression was reversible (after 15 h) and dose-dependent [53].
One of the most common ways to study exhausted NK cells is to examine their frequency in patients and its correlation with disease prognosis. Accordingly, an increased frequency of exhausted NK cells has been reported in chronic lymphocytic leukemia (CLL) patients. By studying the peripheral blood of 24 CLL patients and 19 normal individuals, Hadadi et al. have shown that the frequency of CD56 + CD3 − Tim-3 + cells has increased significantly in these patients, which was associated with the downregulation of CD56 + CD3 − NKp30 + cells. They indicated that these dysregulated frequencies were correlated with poor prognostic factors such as high absolute lymphocyte count, decreased hemoglobin, and elevated serum C-reactive protein (CRP) concentration [54]. Some studies have shown the effect of some common treatments on NK cell exhaustion in leukemia patients. For example, anti-CD20 monoclonal antibodies (ofatumumab or rituximab) could promote NK cell exhaustion by binding to CD16 on NK cells. This interaction could impair the cytotoxic activity of NK cells. Following this interaction, some NK cell activating receptors, such as NKp46, NKG2D, 2B4, and DNAM-1, could not phosphorylate essential signaling molecules involved in the cytotoxic function of NK cells, such as phospholipase C (PLC)γ2, SH2-domain-containing leukocyte protein of 76 kDa (SLP-76), and Vav1. Furthermore, this ligation could also recruit the inhibitory phosphatase Src homology region 2 domain-containing phosphatase-1 (SHP-1) to the cytoplasmic tail of CD16, leading to further suppression of NK cells. These findings interestingly show the dual role of the CD16 receptor in the activation or inhibition of NK cells and provide a new mechanism for the exhaustion of these cells, because the pharmacological blockade of this receptor led to the recovery of cytotoxic activity following binding to anti-CD20 antibody [55]. Although these findings imply that rituximab can inhibit some anti-leukemia responses, further studies should weigh its advantages and disadvantages.
On the contrary, there is evidence suggesting that certain therapies can impede the exhaustion of NK cells and enhance their cytotoxic capabilities. Accordingly, NK cells derived from healthy subjects after treatment with drugs such as sunitinib, sorafenib, or the pan-RAF inhibitor ZM336372 had a high ability to secrete cytokines and kill target cells and resisted exhaustion in a RAS/RAF/ERK signaling pathway-dependent manner [56]. Similarly, treatment of 55 relapsed/refractory and 50 untreated CLL patients with Ibrutinib was associated with an increased frequency of effective NK cells compared to 20 normal subjects [57]. Likewise, during four years of follow-up of 31 CLL patients and 20 normal subjects, Ibrutinib enhanced the frequency and function of NK cells, which was associated with a good prognosis [58].
The expression of checkpoints on NK cells and their engagement with the corresponding ligands on leukemic cells stands as a highly crucial means by which NK cells experience suppression and exhaustion. Likewise, targeting inhibitory checkpoints on NK cells has been recognized as a preeminent immunotherapeutic approach against exhausted NK cells. A study of circulating NK cells in 17 CLL patients showed that leukemic B cells express significantly higher levels of Siglec-7 ligands than cells from normal individuals, which was associated with a poor disease prognosis. Moreover, blockade of the Siglec-7 ligand markedly enhanced the sensitivity of leukemic cells to the cytotoxic effects of NK cells [59]. In another study, the expression of LAG3 was significantly increased in leukemic and NK cells of 61 untreated CLL patients, which was associated with a poor prognosis. In addition, blockade of LAG3 using monoclonal antibodies significantly preserved the cytotoxic function of NK cells against leukemic cells [60]. B and T lymphocyte attenuator (BTLA) is another inhibitory checkpoint whose upregulation was detected on both leukemic and NK cells in 46 untreated CLL patients and was associated with a poor prognosis. Interestingly, ex vivo blockade of BTLA led to the depletion of leukemic cells and enhanced cytotoxic function and cytokine secretion by NK cells [61]. The human inhibitory receptor Ig-like transcript 2 (ILT2) (also known as LIR-1 or LILRB1) is another immune checkpoint investigated in CLL patients regarding its role in suppressing NK cells. While B-CLL cells exhibited low levels of ILT2, it was upregulated in the NK cells of CLL patients (n = 60) compared to normal individuals (n = 25), which was associated with the severity of disease. It has been reported that inhibiting ILT2 with Lenalidomide significantly activates NK cells and eliminates leukemic cells [62]. In contrast, blockade of the PD-1 and TIM-3 checkpoints in NK cells derived from the peripheral blood of 18 early-stage CLL patients did not affect the cytotoxic function and secretion of tumor necrosis factor (TNF)-α and IFN-γ [63].
As reviewed in this section (summarized in Table 1), despite the interesting clues regarding the role of exhausted NK cells in CLL patients, there are still many unknowns in this field. In future studies, different subtypes of exhausted NK cells should be investigated in patients with varying degrees of disease progression.
Future investigations should also comprehensively address the differences between peripheral blood and bone marrow NK cells. Moreover, many other checkpoints have not yet been investigated in CLL patients and should be comprehensively evaluated. Furthermore, the efficacy of targeting exhausted NK cells in CLL patients as a therapeutic strategy should be precisely investigated in preclinical and clinical studies.

Table 1. Main findings of the studies on exhausted NK cells in CLL.

Main findings | Ref
Ammonium chloride can inhibit the cytotoxic activity of NK cells in vitro | [53]
The upregulation of circulating CD56+CD3−Tim-3+ exhausted NK cells in CLL patients was associated with the downregulation of CD56+CD3−NKp30+ cells and with disease progression | [54]
Anti-CD20 monoclonal antibodies (ofatumumab or rituximab) can cause the exhaustion of NK cells in CLL patients through binding to CD16 on NK cells | [55]
Treatment of 55 relapsed/refractory and 50 untreated CLL patients with Ibrutinib was associated with an increased frequency of effective NK cells compared to 20 normal subjects | [57]
The long-term treatment of CLL patients with Ibrutinib increased NK cells in peripheral blood, which was associated with a good prognosis | [58]
Leukemic B cells of CLL patients express significantly higher levels of Siglec-7 ligands than normal individuals, which was associated with a poor disease prognosis | [59]
The expression of LAG3 was significantly increased in leukemic and NK cells and was associated with a poor prognosis; inhibiting this checkpoint increased the cytotoxic activity of NK cells against leukemic cells | [60]
The upregulation of BTLA was detected in both leukemic and NK cells of untreated CLL patients, which was associated with a poor prognosis; ex vivo blockade of BTLA led to the depletion of leukemic cells and enhanced cytotoxic function and cytokine secretion by NK cells | [61]
The expression of ILT2 was increased in the NK cells of CLL patients (n = 60), which was associated with a poor prognosis; inhibiting ILT2 with Lenalidomide significantly activated NK cells and eliminated leukemic cells | [62]
Inhibition of the PD-1 and TIM-3 receptors in circulating NK cells of 18 early-stage CLL patients did not affect the recovery of cytotoxic function and secretion of TNF-α and IFN-γ | [63]
AML
Among the various leukemic malignancies, the majority of research on the significance of exhausted NK cells has been conducted in patients with AML. A particular investigation within this context examined the impact of genetic polymorphisms in the coding sequences of receptors responsible for the inhibition or activation of NK cells, and their potential association with susceptibility to AML. While AML patients (n = 169) exhibited high expression of KIR activating receptors, especially KIR3DS1, normal subjects (n = 167) expressed KIR inhibitory receptors, especially KIR2DL1 and KIR3DL1. It has been proposed that the increased expression of KIR-activating receptors in patients probably leads to the hyperactivation of these cells following the encounter with leukemic cells and, in turn, to the exhaustion of NK cells and the progression of cancer [64]. Interestingly, some fungal infections, such as A. fumigatus, promote NK cell exhaustion in AML patients, leading to reduced cytotoxic function against leukemic cells. Exhaustion was associated with the reduced secretion of inflammatory cytokines such as TNF-α, IFN-γ, regulated upon activation, normal T cell expressed and presumably secreted (RANTES), macrophage inflammatory protein (MIP)-1α, and MIP-1β, decreased expression of activating receptors such as NKG2D and NKp46, and impaired NK cell degranulation [65]. Therefore, some infections in leukemic patients can also be considered one of the causes of NK cell exhaustion.
The frequency and absolute number of peripheral CD56+CD16+ NK cells were also markedly decreased in myelodysplastic syndromes (MDS) and in AML secondary to MDS patients (n = 130) compared to normal subjects (n = 40). It was also shown that NK cells expressing NKG2D, NKp46, and CD161 were significantly reduced in patients compared to normal subjects [66]. Zeng and colleagues also proposed that NK cells in the peripheral blood of AML patients are in an exhaustion state, whereas NK cells in the bone marrow mainly have a terminally differentiated phenotype, which correlates with low patient survival. They showed that the frequency of CD16−CD56dim NK cells is reduced in the peripheral blood of de novo AML patients and AML patients with complete remission after chemotherapy, whereas there was no change in the bone marrow. In addition, while the frequency of killer cell lectin-like receptor G1 (KLRG1)- and TIGIT-expressing NK cells was increased in the peripheral blood of de novo patients, it was recovered in patients after chemotherapy. Moreover, the frequency of terminally differentiated NK cells (CD56dimCD16+CD57+), which have a potent cytotoxic function and low replication capacity, was increased in the bone marrow of de novo AML patients and correlated with a lower survival rate [67]. Tang et al. also demonstrated that peripheral NK cells of AML patients (n = 79) had extensive functional impairments, which were correlated with disease relapse and resistance to treatment; in patients who responded to chemotherapy, the functional responses of NK cells were restored [68]. By studying bone marrow samples of 37 newly diagnosed AML patients, it was found that the expression of the TIM-3 checkpoint on NK cells and blasts can be used as a prognostic marker [69]. The review of these three studies shows that the exhaustion status of NK cells differs among AML patients at various degrees of progression. Another point that can be taken from these studies is the difference between exhausted NK cells in peripheral blood and bone marrow. Finally, it seems that the exhaustion status of NK cells is reversible, and after treatments such as chemotherapy, the activity of these cells returns to normal conditions.
Bou-Tayeh and coworkers demonstrated that the deregulation of cytokine-induced signaling pathways in NK cells of AML patients might constitute one of the mechanisms implicated in the exhaustion process. They provided evidence that the advancement of AML correlated with impairments in the maturation and functionality of NK cells in murine models of AML. While the IL-15/mammalian target of rapamycin (mTOR) and type I interferon (IFN) signaling pathways were constitutively active in NK cells purified from the leukemic murine model, these cells could not respond to stimulation with IL-15 in vitro. Therefore, chronic activation of NK cells through the IL-15/mTOR pathway is one of the NK cell exhaustion mechanisms in AML models. Similar results were observed in NK cells derived from AML patients in vitro; these cells showed low expression levels of IL-2/15Rβ and a decreased response to stimulation with IL-15 [70].
Targeting checkpoint molecules has been introduced as an effective immunotherapy method against exhausted NK cells. Recently, poliovirus receptor-related immunoglobulin domain-containing protein (PVRIG) has been introduced as one of the new checkpoints effectively inhibiting NK cell activity, especially that of CD56bright cells. By studying blasts of 20 AML patients, it has been shown that these cells continuously express the ligand of this checkpoint, poliovirus receptor-related 2 (PVRL2). Interestingly, blockade of the PVRIG/PVRL2 axis using an anti-PVRIG antibody markedly activated NK cells to kill PVRL2-expressing leukemic cells. In contrast to peripheral NK cells, bone marrow NK cells showed no upregulated PVRIG expression. Activation of NK cells following ligation of activating receptors (NKp46 and CD16), leukemic cell recognition, or cytokine stimulation (IL-12 and IL-2) could suppress the expression of PVRIG in NK cells [71].
The PD-1/PD-L1 axis is another checkpoint that has been well investigated in various cancers. An exciting recently published finding showed that NK cells in contact with leukemic cells in the C1498 murine AML model acquire the PD-1 molecule from leukemic cells through trogocytosis in a SLAM receptor-mediated manner. Trogocytosis denotes a biological phenomenon characterized by the physical extraction and ingestion of cellular material, referred to as "bites," from one cell by another [72]. Therefore, PD-1 checkpoint expression on NK cells seems not to be intrinsic and can originate from leukemic cells.
Moreover, exhausted NK cells are worthy targets for the treatment of PD-L1− tumors. Dong and coworkers showed that the ameliorative effect of an anti-PD-L1 antibody in PD-L1− AML (n = 79) is related in part to the targeting of PD-L1+ exhausted NK cells. Malignant cells induce PD-L1 on NK cells through the AKT signaling pathway, while the impact of the anti-PD-L1 antibody on NK cells acts through the p38 pathway. Furthermore, the combined use of an anti-PD-L1 antibody and NK cell-stimulating cytokines had a more significant impact on leukemic cells compared to monotherapy [73]. These findings imply that a rationale for using an anti-PD-L1 antibody in PD-L1-negative tumors can be the targeting of exhausted NK cells. The mouse model of AML has yielded comparable findings with regard to the inhibition of PD-L1, underscoring the significance of checkpoint inhibition in averting NK cell exhaustion and enhancing NK cell efficacy against leukemia [74].
Evaluation of NK cells in the peripheral blood of 100 AML patients showed increased expression of the B7-H3 checkpoint, which was associated with a poor prognosis. Also, inhibition of this checkpoint using a monoclonal antibody had a significant impact on the cytotoxic function of NK cells, both in vitro (in HL-60, Kasumi-1, THP-1, MV4-11, MOLM-13, U937, OCI-AML3, OCI-AML2, and MOLM-14 cells) and in vivo. Treatment of AML patient-derived xenografts with an anti-B7-H3 antibody had similar results [75].
Another checkpoint that plays a role in inhibiting the function of NK cells is TIGIT, which binds its ligands, CD155 and CD112, on leukemic cells. In an in vitro study conducted on AML cell lines (MOLM-13, MV-4-11, NB-4, THP-1, and KG-1), inhibition of CD155/CD112 on leukemic cells by Flt3 inhibitors enhanced the cytotoxic function of NK cells against these cells [76]. TIGIT was also upregulated on NK cells in bone marrow samples of AML patients following allogeneic transplantation, which was associated with the downregulation of NK cells in the bone marrow [77]. Likewise, blockade of the TIGIT, CD39, and A2AR checkpoints on NK cells purified from the peripheral blood (n = 15) and bone marrow (n = 25) of AML patients increased the cytotoxic function of NK cells [78].
Furthermore, the examination of TIM-3 expression on NK cells in 47 newly diagnosed AML patients showed that the expression of this checkpoint is associated with a poor prognosis and has prognostic significance [79]. In a similar study, newly diagnosed AML patients (n = 23) had a higher frequency of TIGIT+PD-1+TIM-3+ NK cells compared to normal subjects, which was associated with a poor prognosis. The expression of these checkpoints was associated with low cytotoxicity, and their inhibition increased the function of NK cells [80]. In contrast, in a study on 150 AML patients, Rakova and coworkers reported that TIM-3 expression on NK cells correlates with high functional capacity and better clinical outcomes. Moreover, in contrast to M1 and M2 patients, M4 and M5 patients had a lower frequency of NK cells compared to normal subjects. They also showed intact TIGIT and significant downregulation of PD-1 in patient-derived NK cells compared to healthy donors. Intriguingly, blockade of TIM-3 (but not PD-1 or TIGIT) inhibited the secretion of IFN-γ by NK cells following stimulation [81].
The use of anti-NKG2A antibodies in the mouse model of AML also led to significant activation of NK cells and tumor regression [82].
In addition to the inhibitory receptors, investigating the activating receptors of NK cells can also be considered a means to prevent the exhaustion of these cells and increase their anti-leukemic activity. Accordingly, by studying 111 AML patients, it has been determined that the interaction of the OX40 molecule on leukemic cells with the OX40L checkpoint causes the activation of NK cells and the elimination of leukemic cells [83].
The design and production of CAR-NK cells is another advanced and innovative method of treating leukemia. Gurney et al., using transposon-engineered CAR-NK cells, could efficiently target and destroy AML cell lines expressing the CLL-1 molecule and primary AML cells, even leukemia stem cells. They further engineered CAR-NK cells by silencing the NK cell cytokine checkpoint, cytokine-inducible SH2-containing protein (CIS), using the CRISPR/Cas9 system, which was associated with increased cytotoxicity of these cells [84].
Taken together, it seems that the progression of AML is associated with an increase in the frequency of exhausted NK cells, accompanied by high expression of inhibitory checkpoints, decreased expression of activating receptors, and functional defects in these cells (Table 2). Also, although the available results are not sufficient and comprehensive, it seems that exhausted NK cells in the peripheral blood and bone marrow of AML patients are not in similar conditions; this field should be studied comprehensively. The results of the studies that have used the inhibition of NK cell checkpoints for treating AML are promising; still, they should be evaluated in more detail in mouse models and for clinical efficacy.

Table 2. Main findings of the studies on exhausted NK cells in AML.

Main findings | Ref
The increased expression of KIR activating receptors in AML patients leads to the hyperactivation of these cells following the encounter with leukemic cells and, further, to the exhaustion of NK cells and the progression of cancer | [64]
Some fungal infections, such as A. fumigatus, cause exhaustion in NK cells derived from AML patients and significantly inhibit their cytotoxic activity against leukemic cells | [65]
The frequency and absolute number of CD56+CD16+ NK cells in the peripheral blood of AML patients significantly decreased compared to normal subjects; NK cells expressing NKG2D, NKp46, and CD161 were also significantly reduced in patients | [66]
NK cells in the peripheral blood of AML patients are in an exhaustion state, while NK cells in the bone marrow mainly have a terminally differentiated phenotype, which correlates with low patient survival | [67]
Circulating NK cells in AML patients had many functional defects associated with excessive maturation and a significant reduction of NKG2D and NKp30 expression; in patients who responded to chemotherapy, the functional responses of NK cells were restored | [68]
The expression of TIM-3 on NK cells and blasts in the bone marrow of AML patients can be used as a prognostic marker | [69]
Chronic activation of NK cells through the IL-15/mTOR pathway is one of the NK cell exhaustion mechanisms in AML models; a similar phenomenon was observed in NK cells derived from AML patients in vitro | [70]
Targeting the PVRIG/PVRL2 axis in AML patients is an efficient immunotherapeutic approach | [71]
NK cells in contact with leukemic cells in the C1498 murine AML model acquire the PD-1 molecule from leukemic cells through trogocytosis in a SLAM receptor-mediated manner | [72]
The success of leukemia treatment using the anti-PD-L1 antibody in PD-L1− leukemia is due in part to the targeting of exhausted NK cells that express PD-L1 | [73]
Blockade of PD-L1 in murine models of AML prevented the exhaustion of NK cells and increased their anti-leukemic activity | [74]
B7-H3-expressing NK cells were increased in the peripheral blood of patients, which was associated with a poor prognosis; inhibition of B7-H3 enhanced the cytotoxic activity of NK cells, both in vitro and in vivo | [75]
Blockade of CD155/CD112 on leukemic cells by Flt3 inhibitors increased the cytotoxic activity of NK cells against these cells | [76]
TIGIT was upregulated on NK cells in bone marrow samples derived from AML patients after allogeneic transplantation, which was associated with the downregulation of NK cells in the bone marrow | [77]
The TIGIT, CD39, and A2AR checkpoints are essential in inhibiting the activities of NK cells in both peripheral blood and bone marrow; blockade of these checkpoints increased the cytotoxic activity of NK cells | [78]
The expression of TIM-3 on NK cells in newly diagnosed AML patients was associated with a poor prognosis and had prognostic significance | [79]
Newly diagnosed AML patients have a higher frequency of NK cells expressing TIGIT, PD-1, and TIM-3 than normal subjects, which was associated with a poor prognosis; the expression of these checkpoints was associated with low cytotoxicity, and their inhibition increased the function of NK cells | [80]
TIM-3 expression on NK cells correlates with high functional capacity and better clinical outcomes in AML patients; TIGIT was intact and PD-1 decreased in patient-derived NK cells compared to healthy donors; blockade of TIM-3 (but not TIGIT and PD-1) inhibited the secretion of IFN-γ by NK cells following stimulation | [81]
The use of anti-NKG2A antibodies causes the activation of NK cells and tumor regression in a mouse model of AML | [82]
The interaction of the OX40 molecule on leukemic cells with the OX40L checkpoint causes the activation of NK cells and eliminates leukemic cells in AML patients | [83]
Transposon-engineered CAR-NK cells could efficiently target and destroy CLL-1-expressing AML cell lines, primary AML cells, and even leukemia stem cells | [84]
Acute lymphocytic leukemia (ALL)
In a study conducted by Duault et al. on a large number of B-ALL and T-ALL patients, it was suggested that the characterization of NK cells can be used to predict patients' clinical outcomes. They showed that although the cytotoxic function of NK cells in leukemic patients is significantly reduced compared to normal controls, activation markers, such as high expression of the CD69 and CD56 molecules, production of cytokines, and calcium signaling, are increased. The results also showed that the incomplete maturation of NK cells into effector cells prevents the lysis of leukemic cells by NK cells. They proposed that chronic activation may lead to NK cell exhaustion in ALL patients. The increase of cytokine-producing activated NK cells was associated with disease progression and independently indicated a poor prognosis for ALL [85].
Interestingly, NK cells in the bone marrow and peripheral blood samples of leukemic patients had a similar status. In a study that Bailur and colleagues conducted on bone marrow samples of 35 B-ALL and 26 AML patients, they showed that the activity of NK cells was significantly impaired compared to normal subjects (n = 11). While CD16+CD57+ NK cells were significantly decreased in AML patients, these cells did not change in B-ALL patients. Also, while granzyme secretion and NKG2D expression were reduced and TIM-3 expression was increased in cells from AML patients, NK cells of B-ALL patients were not different from those of normal individuals [86]. Moreover, by conducting next-generation sequencing and evaluating gene expression profiles and gene polymorphisms, it has been demonstrated that B-ALL and T-ALL might differentially regulate NK cell exhaustion [87].
In ALL patients, the tumor microenvironment in the bone marrow has been mentioned as one of the influential factors in inducing NK cell exhaustion. Ramírez-Ramírez and colleagues have suggested that the molecular CRTAM/Necl-2 interaction in bone marrow niches leads to NK cell exhaustion. They reported that lymphoid progenitor cells in the bone marrow with the phenotype CD34+CD56+CD3+CD19+CRTAM+ are likely to be in the first activation phase and are adjacent to niches with high expression of the ligand nectin-like-2. On the other hand, the marrow of ALL patients had a high abundance of CD56highCRTAM+ NK cells with an exhausted phenotype, which were functionally defective and produced IL-10 and TGF-β [88].
The phenotypic and functional properties of CD56− NK cells have also been investigated by Ishiyama and colleagues in ALL and chronic myelogenous leukemia (CML) patients treated with dasatinib. This study was conducted on 36 CML or Ph+ ALL (Philadelphia chromosome-positive ALL) patients, 26 imatinib- or nilotinib-treated patients, and 15 normal subjects. Their results showed that the frequency of CD56− NK cells was exclusively augmented in dasatinib-treated patients who were cytomegalovirus-seropositive, and the expansion of these cells was accompanied by the upregulation of CMV-specific NK cells. The expression of differentiation and activation markers such as CD57, NKG2D, NKG2C, NKp46, NKp30, perforin, and granzyme B was significantly reduced in CD56− NK cells. A comparison of NK cells showed that the characteristics of CD56− and CD56dim cells were relatively similar, but these two cell groups were completely different from CD56bright cells; it should be noted, however, that the functional properties of CD56− NK cells were significantly lower than those of CD56dim cells. Also, inhibition of PD-1 could increase the activity of NK cells, especially CD56dim cells, in proportion to the expression level of PD-1. This study demonstrated that the expansion of CD56−PD-1+ NK cells indicates chronic activation of NK cells in dasatinib-treated patients who were cytomegalovirus-seropositive. Further, combination therapy by blockade of the PD-1/PD-L1 axis together with dasatinib was suggested as a potential treatment approach for leukemia [89].
A study of leukemic cells from 5 patients with B-cell precursor ALL (BCP-ALL) also showed that microRNA-582 inhibits the cytotoxic effects of NK cells on leukemic cells by inducing the CD276 (B7-H3) checkpoint. Therefore, it is suggested that blockade of CD276 or its ligand on NK cells may be a potential therapeutic approach in leukemia [90]. In addition to the discussed inhibitory checkpoints, Rothfelder et al., by studying 44 ALL patients and ALL cell lines (NALM-16, JURKAT, SD-1, REH, SUP-B15, and TOM-1), have shown that the expression of the OX40L checkpoint on NK cells and its interaction with OX40 on ALL cells causes the activation of NK cells and the depletion of leukemic cells [91].
The use of cytokines that stimulate the activity of NK cells has been proposed as one of the exciting immunotherapy methods that prevent exhaustion by activating the cytotoxic activity of these cells. Accordingly, a study conducted on the 70Z/3 murine pre-B cell leukemia model (a murine model of immune-mediated rejection of acute lymphoblastic leukemia) showed that treating mice by injecting IL-15-secreting leukemic cells induces and activates NK1.1+ cells, accompanied by an increased mouse survival time [92].
Few studies have been conducted regarding the exhaustion of NK cells in ALL patients, which precludes accurate conclusions from the available results (Table 3). More studies are needed regarding the expression and role of different factors in the exhaustion of NK cells. The existing studies suggest that the progression of ALL is associated with the exhaustion of NK cells, and targeting exhaustion markers has been associated with promising results; however, more comprehensive studies are needed.

Table 3. Main findings of the studies on exhausted NK cells in ALL.

Main findings | Ref
The characterization of NK cells can predict ALL patients' clinical outcomes | [85]
The activity of NK cells in ALL patients was significantly impaired compared to normal subjects | [86]
B-ALL and T-ALL might differentially regulate NK cell exhaustion | [87]
The molecular CRTAM/Necl-2 interaction in bone marrow niches leads to NK cell exhaustion | [88]
Expansion of CD56−PD-1+ NK cells indicates chronic activation of NK cells in dasatinib-treated patients who were cytomegalovirus-seropositive; combination therapy by blockade of the PD-1/PD-L1 axis and dasatinib was suggested as a potential treatment approach for leukemia | [89]
MicroRNA-582 inhibits the cytotoxic effects of NK cells on leukemic cells by inducing the CD276 (B7-H3) checkpoint in ALL patients | [90]
The expression of the OX40L checkpoint on NK cells and its interaction with OX40 on ALL cells causes the activation of NK cells and the depletion of leukemic cells in ALL patients | [91]
Treating the 70Z/3 murine pre-B cell leukemia model by injecting IL-15-secreting leukemic cells induces and activates NK1.1+ cells, accompanied by an increased mouse survival time | [92]
Conclusion
The importance of the anti-cancer function of NK cells in innate immunity has encouraged several investigators to examine the phenotypic and functional characteristics of these cells in hematopoietic malignancies and to evaluate their worth as targets for leukemia immunotherapy.
The phenotype and function of NK cells in almost all leukemic malignancies are shifted towards exhausted cells, a state associated with a poor prognosis. Moreover, the presence and expansion of exhausted NK cells were themselves associated with a poor prognosis. Another point that can be made about exhausted NK cells in leukemic malignancies is the importance of targeting them as a new treatment method. With few exceptions, studies that targeted the inhibitory checkpoints of exhausted NK cells showed that this treatment approach is associated with the recovery of the anti-leukemic function of NK cells and the elimination of leukemic cells.
Regarding the commonality of NK cell exhaustion in different subtypes of leukemia, it seems that leukemia occurrence and progression are associated with a reduced frequency and function of NK cells, hallmarks of the exhaustion process. The upregulation of inhibitory checkpoints and the reduced secretion of inflammatory cytokines and cytotoxic mediators are other common NK cell exhaustion characteristics observed in the majority of studies.
Despite the above deductions, there remain unknown issues related to exhausted NK cells in leukemic malignancies, which require comprehensive studies in the future. While most of the studies on exhausted NK cells have been conducted in AML patients, little is known regarding the immunobiology and targeting potential of these cells in other leukemic malignancies, especially CML.
The comparison of exhausted NK cells in the peripheral blood and bone marrow of leukemia patients is another important issue that should be further studied. In the few studies that have evaluated exhausted NK cells in both peripheral blood and bone marrow, comprehensive information has not been obtained; in some cases, the data were even contradictory. Therefore, extensive studies must investigate these cells in various leukemias in both peripheral blood and bone marrow samples. Consideration of the disease stage and previously received treatments are also critical issues that should be taken into account in future studies, since they can significantly affect the exhaustion process. A defect observed in many studies was that either a proper classification of patients was not provided or patients with different disease stages were not studied simultaneously.
A critical question that future studies should seek to answer is whether the development of cancer leads to the creation of exhausted cells or whether the emergence of exhausted NK cells is one of the causes of cancer development. Available evidence supports both theories; it is also possible that the two events merely coincide and that other factors drive both. The observation that chemotherapy or anti-leukemia treatments that target the leukemic cells themselves restore the function of NK cells further strengthens the theory that other factors are involved. Several checkpoints have not yet been investigated in various leukemia diseases and should be considered in future studies. Also, the molecular mechanisms of NK cell exhaustion have not yet been clearly defined and should be addressed in further studies.
Fig. 1. Phenotypic and functional properties of NK cells in cancer. While immunosuppressive cells or cytokines, exosomes, and hypoxia promote the exhaustion of NK cells, chemotherapy, stimulatory cytokines, blockade of immune checkpoint molecules, and stimulation of activating receptors prevent exhaustion.
On the Studies of Dendrimers via Connection-Based Molecular Descriptors
Topological indices (TIs) have been utilized widely to characterize and model the chemical structures of various molecular compounds such as dendrimers, neural networks, and nanotubes. Dendrimers are well-defined, globular, artificially synthesized polymers with a structure of frequently branched units. A mathematical approach to characterize molecular structures by manipulating topological techniques, including numerical graph invariants, is the present-day line of research in chemistry. Among all the defined descriptors, the connection-based Zagreb indices are considered to be more effective than the other classical indices. In this manuscript, we find the general results to compute the Zagreb connection indices (ZCIs), namely, the first ZCI (1st ZCI), second ZCI (2nd ZCI), modified 1st ZCI, modified 2nd ZCI, and modified 3rd ZCI. Furthermore, we compute the multiplicative ZCIs (MZCIs), namely, the first MZCI (1st MZCI), second MZCI (2nd MZCI), third MZCI (3rd MZCI), fourth MZCI (4th MZCI), modified 1st MZCI, modified 2nd MZCI, and modified 3rd MZCI. In addition, we compare the calculated values with each other in order to check their superiority.
Introduction
Dendrimers are compartmentalized, versatile, well-defined, synthetic chemical polymers with numerous attributes that make them advantageous in biological systems. The structure of dendrimers is made up of three components: the multivalent surface, the outer shell, and a core which is protected by the dendritic branches in higher generations of dendrimers. Dendrimers are synthesized by the use of two approaches, divergent and convergent. Nowadays, dendrimers are considered to be notably manufactured macromolecules with applicability in the domain of biomedical science, including gene transfection, tissue engineering, drug delivery, contrast intensification for magnetic resonance imaging, and immunology; for details, see [1][2][3].
They are extensively employed in the formation of chemical sensors, colored glass, nanolatex, nanotubes, and micro-/macrocapsules. Due to their wide-ranging applications in distinct areas, dendrimers are attaining valuable attention from researchers, who are trying to specify these molecular structures by the use of numerical graph descriptors. The numerical graph descriptors, or topological indices, are the trending topological approach in computational and mathematical chemistry to characterize the topology of molecular structures. These graph descriptors, or invariants, have countless utilizations in quantitative structure-activity relationship (QSAR) and quantitative structure-property relationship (QSPR) studies appropriate for hazard analysis of chemicals, the discovery of drugs, and novel molecular designs [4]. A topological index (TI) is a numeric measure which helps to correlate distinct physicochemical properties of molecular structures, like the freezing point, melting point, volatility, density, stability, flammability, and strain energy of molecular compounds. Topological indices (TIs) are classified on the basis of distance, degree, and polynomial. Wiener [5] put forward the innovational conception of a distance-based TI, which is known as the Wiener index. Aslam et al. [6] computed the TIs of some interconnection networks. After the invention of the Wiener index, a large number of other distance-based TIs have been investigated and considered by many analysts in the chemical and mathematico-chemical literature; for details, see [7][8][9].
Gutman and Trinajstić [10] initiated the innovational notion of the first ZI (1st ZI) in 1972. In 1975, Gutman et al. [11] proposed the conception of the second ZI (2nd ZI). These classical ZIs have been utilized broadly in the study of chemical graph theory. Furthermore, the conception of the third ZI (3rd ZI), also called the forgotten index, was explored by Furtula and Gutman [12]. These degree-based TIs have great significance in the field of cheminformatics, as one can see in [13][14][15]. In 2003, Nikolić et al. [16] explored a new index, namely, the modified ZI. Hao [17] compared these introduced ZIs and considered the outcomes concerning these indices in a well-organized way. Das et al. [18] investigated some MZIs of graph operations.
Recently, Ali and Trinajstić [19] explored a new way to study the physicochemical properties of compounds by introducing the connection number (CN) of a vertex and initiated the Zagreb connection indices (ZCIs). The number of vertices at distance two from a certain vertex is said to be the CN of that vertex. They reported that the newly proposed connection-based ZIs have better applicability to forecast the physicochemical properties of various molecular structures than the classical ZIs. After the invention of the CN, many researchers started work to explore new connection-based indices. Multiplicative leap ZIs were investigated by Haoer et al. [20]. Du et al. [21] utilized the connection-based modified FZI to find the extremal alkanes. Recently, Sattar et al. [22] computed general expressions for the MZCIs of dendrimer nanostars. Furthermore, in 2020, Ali et al. [23] worked out the modified ZCIs for T-sum graphs. Javaid et al. [24] calculated multiplicative ZIs for some wheel graphs. In 2019, Nisar et al. [25] computed ZCIs of two types of dendrimer nanostars. Ye et al. [26] calculated ZCIs of nanotubes and the regular hexagonal lattice. Bokhary et al. [27] studied the topological properties of some nanostars. Bashir et al. [28] computed the 3rd ZI of a dendrimer nanostar. Gharibi et al. [29] developed the conception of Zagreb polynomials of nanotubes and nanocones. For further information, we recommend the readers study [30,31]. The motivation for this article is as follows: (1) Topological indices (TIs), the numerical descriptors, are efficient enough to characterize the topology of molecular structures and also assist in correlating their distinct physicochemical properties.
(2) Dendrimers are symmetric, versatile, and well-defined chemical polymers forming a tree-like structure. These nanoparticles are characterized by numerous attributes which make them advantageous for wide-ranging utilizations in various fields of science.
(3) The connection-based ZIs have better applicability to predict the various physicochemical properties of distinct molecular structures in chemistry than the other classical ZIs present in the literature.
In this paper, we present the general expressions to compute the ZCIs and MZCIs of nanostars. This manuscript is organized as follows: in Section 2, the elementary definitions are discussed, which helps the readers to fully understand the main idea of this article. In Section 3, we present the general expressions to compute the ZCIs, namely, the 1st ZCI, 2nd ZCI, modified 1st ZCI, modified 2nd ZCI, and modified 3rd ZCI, as well as the MZCIs, namely, the 1st MZCI, 2nd MZCI, 3rd MZCI, 4th MZCI, modified 1st MZCI, modified 2nd MZCI, and modified 3rd MZCI. Section 4 covers the concluding remarks.
Preliminaries
This section involves some useful primary definitions from the literature to understand the main results of this manuscript.
Definition 1.
Let Ω = (R(Ω), S(Ω)) be a graph, where R(Ω) and S(Ω) are the set of vertices and the set of edges, respectively. Then, the degree-based Zagreb indices are defined as follows:

\[ M_1(\Omega) = \sum_{t \in R(\Omega)} \big(d_\Omega(t)\big)^2, \qquad M_2(\Omega) = \sum_{tx \in S(\Omega)} d_\Omega(t)\, d_\Omega(x). \]

Here, d_Ω(t) and d_Ω(x) denote the degrees of the vertices t and x, respectively. These degree-based indices, discovered by Gutman and Trinajstić [10], are known as the first ZI (1st ZI) and second ZI (2nd ZI), respectively.
Definition 2. For a graph Ω, the connection-based Zagreb indices are given as

\[ ZC_1(\Omega) = \sum_{t \in R(\Omega)} \big(\varphi_\Omega(t)\big)^2, \qquad ZC_2(\Omega) = \sum_{tx \in S(\Omega)} \varphi_\Omega(t)\, \varphi_\Omega(x). \]

Here, φ_Ω(t) and φ_Ω(x) indicate the connection numbers (CNs) of the vertices t and x, respectively. These connection-based indices were discovered by Ali and Trinajstić [19] and are known as the first Zagreb connection index (1st ZCI) and the second Zagreb connection index (2nd ZCI), respectively. Definition 3. For a graph Ω, the modified ZCIs can be given as follows:

\[ ZC_1^{*}(\Omega) = \sum_{t \in R(\Omega)} d_\Omega(t)\, \varphi_\Omega(t), \]
\[ ZC_2^{*}(\Omega) = \sum_{tx \in S(\Omega)} \big[ d_\Omega(t)\, \varphi_\Omega(x) + d_\Omega(x)\, \varphi_\Omega(t) \big], \]
\[ ZC_3^{*}(\Omega) = \sum_{tx \in S(\Omega)} \big[ d_\Omega(t)\, \varphi_\Omega(t) + d_\Omega(x)\, \varphi_\Omega(x) \big]. \]

These modified ZCIs, proposed by Ali [19] and Ali et al. [23], are known as the modified 1st ZCI, modified 2nd ZCI, and modified 3rd ZCI, respectively.
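For illustration, the following short Python sketch (our addition, not part of the cited works; it assumes the networkx package is available) evaluates the above degree- and connection-based indices directly from an arbitrary graph.

```python
# Minimal sketch: degree- and connection-based Zagreb indices of a graph.
import networkx as nx

def connection_number(G, v):
    # CN of v: number of vertices at distance exactly 2 from v.
    dist = nx.single_source_shortest_path_length(G, v, cutoff=2)
    return sum(1 for d in dist.values() if d == 2)

def zagreb_indices(G):
    d = dict(G.degree())
    phi = {v: connection_number(G, v) for v in G}
    return {
        "M1":   sum(d[v] ** 2 for v in G),
        "M2":   sum(d[u] * d[v] for u, v in G.edges()),
        "ZC1":  sum(phi[v] ** 2 for v in G),
        "ZC2":  sum(phi[u] * phi[v] for u, v in G.edges()),
        "ZC1*": sum(d[v] * phi[v] for v in G),
        "ZC2*": sum(d[u] * phi[v] + d[v] * phi[u] for u, v in G.edges()),
        "ZC3*": sum(d[u] * phi[u] + d[v] * phi[v] for u, v in G.edges()),
    }

# Example: a single hexagon (6-cycle), the basic building block of D[k].
print(zagreb_indices(nx.cycle_graph(6)))
```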
ZCIs of Nanostar Dendrimer D[k]
This section involves the expressions to obtain the connection-based ZIs, namely, the 1st ZCI, 2nd ZCI, modified 1st ZCI, modified 2nd ZCI, and modified 3rd ZCI of the nanostar dendrimer. The molecular structure of D[k] for k = 1, 2, 3, together with the connection number of each vertex, is presented in Figures 1, 2, and 3. The molecular structure of D[k] for k = 1, 2, 3, together with the degree of each vertex, is presented in Figures 4, 5, and 6. First, in order to compute all ZCIs, we rewrite the abovementioned ZIs as follows.
Definition 6. For a graph Ω, the 1st ZCI can be rewritten as

\[ ZC_1(\Omega) = \sum_{\alpha} \alpha^2\, |F_\alpha(\Omega)|, \]

where |F_α(Ω)| is the total number of vertices in Ω with CN α. Furthermore, we can rewrite the 2nd ZCI as

\[ ZC_2(\Omega) = \sum_{(\alpha,\tau)} \alpha\tau\, |F_{\alpha,\tau}(\Omega)|. \]

Similarly, the modified 1st ZCI can be rewritten as

\[ ZC_1^{*}(\Omega) = \sum_{(\alpha,\tau)} (\alpha + \tau)\, |F_{\alpha,\tau}(\Omega)|, \]

where |F_{α,τ}(Ω)| is the total number of edges in Ω with CNs (α, τ). The modified 2nd ZCI can be rewritten as

\[ ZC_2^{*}(\Omega) = \sum_{(\mu,\nu),(\alpha,\tau)} (\mu\tau + \nu\alpha)\, |F_{(\mu,\nu)(\alpha,\tau)}(\Omega)|, \]

and the modified 3rd ZCI can be rewritten as

\[ ZC_3^{*}(\Omega) = \sum_{(\mu,\nu),(\alpha,\tau)} (\mu\alpha + \nu\tau)\, |F_{(\mu,\nu)(\alpha,\tau)}(\Omega)|, \]

where |F_{(μ,ν)(α,τ)}(Ω)| is the total number of edges in Ω with degrees (μ, ν) and CNs (α, τ).
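The partition-based rewriting above is what makes the general computations tractable: once the vertex and edge partitions of D[k] by CNs and degrees are known, every index reduces to a weighted count. The sketch below (our illustration; the partition values are hypothetical placeholders, not the actual counts for D[k]) shows the pattern in Python.

```python
# Minimal sketch of Definition 6: indices from partitions, not from the graph.
F_vertex = {1: 4, 2: 6}            # {CN alpha: |F_alpha|}  (placeholder values)
F_edge   = {(2, 2): 3, (2, 3): 4}  # {(alpha, tau): |F_{alpha,tau}|}

ZC1      = sum(a * a * n for a, n in F_vertex.items())
ZC2      = sum(a * t * n for (a, t), n in F_edge.items())
ZC1_star = sum((a + t) * n for (a, t), n in F_edge.items())
print(ZC1, ZC2, ZC1_star)
```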
Before computing the general results of this article, we first categorize D[k] into term-hexagons, pivot-hexagons, primary vertices, and (e, f)-type edges.
(1) Term-hexagon: a hexagon is named a term-hexagon if five of its vertices have degree 2.
(2) Pivot-hexagon: a hexagon which is not a term-hexagon is said to be a pivot-hexagon.
Definition 7. For a graph Ω, the 1st MZCI can be rewritten as

\[ MZC_1(\Omega) = \prod_{\alpha} \big(\alpha^2\big)^{|F_\alpha(\Omega)|}, \]

where |F_α(Ω)| is the total number of vertices in Ω with CN α. The 2nd MZCI is rewritten as

\[ MZC_2(\Omega) = \prod_{(\alpha,\tau)} (\alpha\tau)^{|F_{\alpha,\tau}(\Omega)|}, \]

where |F_{α,τ}(Ω)| is the total number of edges with CNs α and τ. The 3rd MZCI can be rewritten as

\[ MZC_3(\Omega) = \prod_{(c,\alpha)} (c\alpha)^{|F_{(c,\alpha)}(\Omega)|}, \]

where |F_{(c,α)}(Ω)| is the total number of vertices with degree c and CN α. Similarly, the 4th MZCI can be rewritten as

\[ MZC_4(\Omega) = \prod_{(\alpha,\tau)} (\alpha + \tau)^{|F_{(\alpha,\tau)}(\Omega)|}, \]

where |F_{(α,τ)}(Ω)| is the total number of edges in Ω with CNs (α, τ). The modified 1st, 2nd, and 3rd MZCIs are rewritten in the same way as products over the edge partition |F_{(μ,ν)(α,τ)}(Ω)| of Ω by degrees (μ, ν) and CNs (α, τ), with each edge contributing the same factor that appears as the summand in the corresponding modified ZCI of Definition 3.
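As a sketch of the multiplicative pattern (our illustration, using the same hypothetical partition placeholders as in the previous sketch):

```python
# Minimal sketch of Definition 7: multiplicative indices from partitions.
from math import prod

F_vertex = {1: 4, 2: 6}            # {CN alpha: |F_alpha|}  (placeholder values)
F_edge   = {(2, 2): 3, (2, 3): 4}  # {(alpha, tau): |F_{alpha,tau}|}

MZC1 = prod((a * a) ** n for a, n in F_vertex.items())
MZC2 = prod((a * t) ** n for (a, t), n in F_edge.items())
MZC4 = prod((a + t) ** n for (a, t), n in F_edge.items())
print(MZC1, MZC2, MZC4)
```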
Furthermore, we find |F (2,3) ′ (Ω)|. It can be easily seen that the every term-hexagon of Ω contains two vertices having degree 2 and CN 3 while pivothexagon contains 4 vertices. e total amount of such vertices in term-hexagon and pivot-hexagon of Ω are 6(2 k− 1 ) and 24(2 k− 1 ) − 1, respectively. us, Future directions: in future, we are interested to compute the connection-based Zagreb indices for the other type of dendrimers.
Data Availability
The data used to support the findings of this study are included within this article. However, the reader may contact the corresponding author for more details on the data.
Strengthening Polylactic Acid by Salification: Surface Characterization Study
Polylactic acid (PLA) is one of the market's most commonly used biodegradable polymers, with diverse applications in additive manufacturing, specifically fused deposition modeling (FDM) 3D printing. The use of PLA in complex and sophisticated FDM applications is continually growing. However, the increased range of applications requires a better understanding of the material properties of this polymer. For example, recent studies have shown that PLA has the potential to be used in artificial heart valves. Still, the durability and longevity of this material in such a harsh environment are unknown, as heart valve failures have been attributed to salification. Additionally, there is a gap in the field for in situ material characterization of PLA surfaces during stiffening. The present study aims to benchmark different dynamic atomic force microscopy (AFM) techniques available to study the salification phenomenon of PLA at micro-scales using PLA thin films with various salt concentrations (i.e., 10%, 15%, and 20% sodium chloride (NaCl)). The measurements are conducted by tapping mode AFM, bimodal AFM, the force spectroscopy technique, and energy quantity analysis. These measurements showed a stiffening phenomenon occurring as the salt concentration increases, but the techniques were not equally sensitive to the material property differences. Tapping mode AFM provided accurate topographical information, while the associated phase images were not considered reliable. On the other hand, bimodal AFM was shown to be capable of providing both the topographical information and material compositional mapping through the higher eigenmode's phase channel. The dissipated power energy quantities showed that the polymers become less dissipative as the salt concentration increases. Lastly, it was shown that force spectroscopy is the most sensitive technique in detecting the differences in properties. The comparison of these techniques can provide a helpful guideline for studying the material properties of PLA polymers at micro- and nano-scales that can prove beneficial in various fields.
Introduction
The idea of artificial heart valves made of polylactic acid (PLA) is getting closer to reality. However, the extent of degradation of plastic valves by salification is not well understood and has not been extensively investigated. The present study is focused on investigating the impact of salification on the valves through a comprehensive material characterization effort utilizing various AFM methods.
Atomic Force Microscopy Techniques
Atomic force microscopy (AFM), invented in the 1980s, belongs to the family of scanning probe microscopies and provides major advantages in material characterization. AFM is capable of extracting topographical information from a wide range of samples (conductive to nonconductive, metals to polymers) while allowing material property characterization to be performed, such as mechanical, chemical, electrical, and magnetic properties [1][2][3][4][5]. The main component of an AFM is the micro-cantilever. The deflection of the cantilever is measured by shining a laser over the back of the cantilever and measuring the signal through the laser's reflection on the photodiode detector. The analysis and manipulation of this signal differ based on the imaging scheme. Figure 1 depicts a schematic of how dynamic AFM imaging works. One specific mode of AFM is the tapping mode, in which the cantilever oscillates at its resonance frequency (i.e., the first eigenmode frequency) using a piezoelectric driver. Although the excitation frequency is fixed, the distance between the tip and sample is modulated so that the oscillation amplitude remains constant. In this method, named amplitude modulation AFM (AM-AFM), the height (i.e., topography) information is gathered from the amplitude signal [6]. The phase signal, the phase lag between the excitation at the base of the cantilever and the oscillation of the tip, provides information on the compositional mapping of the surface. Tapping mode can be carried out in two different regimes, namely, attractive or repulsive. The attractive mode is when the phase signal is greater than 90°, and the repulsive mode is when the phase signal is less than 90° [7]. In the attractive regime, the cantilever does not physically touch the surface but interacts with long-range interaction forces such as electrostatic or van der Waals forces. The bistability phenomenon will occur if the oscillation setpoint and free oscillation amplitude are set so that the equation of motion of the cantilever can have two roots [8]. Ideally, the repulsive regime creates higher quality images, but the attractive regime can be suitable for soft samples, as it applies smaller forces to the sample. Although tapping mode AFM can provide both topography and compositional mapping, there is no guarantee that the quality of the height and phase channels will be good for all samples. Additionally, the information observed from phase images is not directly related to mechanical properties, rendering them not quantitatively useful. Therefore, there has been significant enthusiasm toward force spectroscopy and the material models needed to fit the data during the past few years. In force spectroscopy techniques, the cantilever is approached to the sample while its deflection is measured, as shown in Figure 1 [9][10][11], going through the attractive and then the repulsive forces. Using the feedback loop, the cantilever is pulled out of the sample after reaching the given maximum deflection. The observed deflection can then be converted into force versus tip-sample separation to extract mechanical properties such as adhesion, stiffness (modulus), and indentation depth. Force spectroscopy only collects a single-point measurement, so force mapping is useful to collect a grid of force curves over an area of a sample and allows the mechanical properties within that area to be compared [5,12].
In the early 2000s, it was found that AFM tapping mode might not provide both topography and material composition at the same time when imaging soft matter [13,14]. Therefore, a branch of imaging techniques known as multifrequency AFM was introduced [15]. Bimodal AFM, the most common multifrequency technique, is accomplished through the simultaneous excitation of two eigenmode frequencies of the AFM cantilever. This technique simultaneously generates multiple additional channels, such as a second phase channel. It can also allow for higher resolution and more detailed information about the sample. Subsequently, it was shown that higher-eigenmode AFM can provide depth penetration through samples and enhance AFM capabilities to image subsurface features.
Many AFM techniques have been developed and used over the years. However, there is still a need to fundamentally understand the strengths and limitations of these techniques. This work aims to fill this knowledge gap when dealing with biodegradable polymers.
3D-Printed Artificial Heart Valves
Cardiovascular disease (CVD) is the current leading cause of death in the United States, resulting in 696,962 deaths in 2020 [16]. Heart valve disease (HVD), which concerns any of the four heart valves, is a major contributor to CVD. Moreover, the aortic valve can thicken and become stiff (aortic valve stenosis), which is the most common cause of the need for surgery. The stiffening causes the valve to be unable to open fully, reducing the blood flow to the body [17]. Aortic valve stenosis is commonly caused by calcium and calcium phosphate buildup on the aortic valve, otherwise known as salification. Therefore, it is critical first to understand the stiffening phenomenon on the polymer surface of an artificial aortic valve and second to visualize this process in order to design better valves.
Generally, two types of heart valves can be used for aortic valve replacement: biological or mechanical. Biological or bioprosthetic valves replace the original aortic valve with a new valve typically made from bovine (cow) or porcine (pig) tissue [17]. This animal tissue is referred to as a xenograft. Similarly, allografts are valves taken from human cadavers or brain-dead patients [17]. However, these valves are less readily available, as they cannot be mass-produced or obtained commercially.
Mechanical valves, generally made of carbon or other sturdy materials, last longer than biological valves and do not need to be replaced. However, they do require the patients to take blood-thinning medications for the rest of their lives. Recent emerging technologies have turned aortic heart valve replacement towards 3D printing. It is believed that the interaction between the artificial heart valve and the patient's native anatomy could be significantly improved through the 3D printing of patient-specific models. Unlike mechanical or biological heart valve replacements, 3D-printed heart valves can be designed specifically for the size and anatomy of the patient. Although salification is a concern for all heart valve types, it can play a major role in the longevity and performance of 3D-printed valves. However, there is little understanding of whether and how the salification process stiffens plastic valves.
Motivation and Objectives
Recent studies in the cardiovascular field are touting artificial heart valves made of polylactic acid (PLA), a biodegradable polymer commonly used in additive manufacturing and extensively in fused deposition modeling. However, 3D-printed heart valves can be negatively impacted by salification, similar to other artificial heart valves. Moreover, the extent of degradation of plastic valves by salification is not well understood and has not been extensively investigated.
In the present study, we undertake a comprehensive surface characterization effort focused on the salification process of biodegradable polymer-based heart valves. More specifically, our research is focused on how salification affects PLA thin films at the microscopic level. Various AFM methods are utilized to help gain a better understanding of how salification affects the material properties and quality of PLA at both quantitative and qualitative levels. The methods include tapping mode images, multifrequency AFM, energy-based AFM analysis, and force spectroscopy. The range of AFM methods in the present study can be used to provide a comprehensive guide to researchers studying salification effects on PLA to generate the most meaningful information using AFM.
The knowledge gained by the present investigation will help to understand if 3D-printed valves are viable for heart valve replacement. The findings can also help determine if salification can strengthen PLA for additive manufacturing applications. If the results show that salification has caused stiffening on the surface of the PLA polymer, one can decide how to treat the polymer to reduce the chance of salt bonding to the surface.
Methods and Materials
Three PLA thin films with increasing salt concentrations were produced to understand how salification affects the material properties of PLA. Although there is a difference between bulk and surface material properties, it is understood that the performance of artificial heart valves is dictated by the changes on their surfaces. Additionally, for experimental purposes, spin-coated thin films provide a more controlled method of measurement for a fundamental understanding of the matter. Initially, small beads of PLA were placed in a glass beaker, and methylene chloride was added to dissolve the polymer. A magnetic stirrer was then added, and the sample was mixed until all the PLA was dissolved. Three sodium chloride (NaCl) solutions were then prepared. The first solution had a salt-to-deionized-water weight ratio of 1:10, the second solution had a ratio of 1.5:10, and the third solution had a ratio of 2:10. With a pipet, equal parts of sodium chloride solution and PLA-methylene chloride were combined in a beaker. This procedure was repeated for each sodium chloride solution. Dime-sized silicon wafers were then prepared through a cleaning process with an ultrasonic cleaner and an isopropyl alcohol and de-ionized water rinse. Each silicon (Si) wafer was then placed on the SCK-300 digital spin-coater device. The PLA-methylene chloride-salt solution was deposited on the Si wafers. The sample was then spun at 3000 rpm for 60 s. This process was repeated for each of the three NaCl concentrations. Each sample was placed on a glass slide and set aside overnight to fully dry. The samples are referenced in Table 1. Once the samples were ready for measurement, the Asylum Research MFP-3D AFM controlled with an ARC2 controller was used to perform the various AFM studies. The same type of cantilever (Multi75-G), manufactured by Budget Sensors, was used throughout the measurements for proper comparison of the AFM results. The cantilever's spring constant was measured by Sader's method [18] and was found to be 2.07 N/m, with a resonance frequency of 78 kHz. These cantilevers are around 225 µm in length, 28 µm in width, and 3 µm in thickness. The tip radius is around 10 nm, and the tip is made of silicon.
The conventional tapping mode was the first method used to measure each sample's topography and phase. Simultaneously, the observables of the conventional tapping mode were analyzed with the energy-based method. Specifically, the amplitude and phase signals collected from tapping mode imaging were converted into the virial (V_ts) and the dissipated power (P_ts), which are convolutions of the tip-sample interactions with position and velocity, respectively. Equations (1) and (2) describe the conversions:

\[ V_{ts,i} = -\frac{k_i A_i A_{0,i}}{2 Q_i}\,\frac{f_{exc}}{f_{0,i}}\,\cos\varphi_i, \tag{1} \]

\[ P_{ts,i} = \frac{\pi k_i f_{exc} A_i^2}{Q_i}\left(\frac{A_{0,i}}{A_i}\sin\varphi_i - \frac{f_{exc}}{f_{0,i}}\right), \tag{2} \]

where the index i specifies the corresponding eigenmode under study, k is the stiffness, A the instantaneous amplitude, A_0 the free amplitude, f_exc the excitation frequency, f_0 the free resonance frequency, φ the phase, and Q the quality factor. Based on these equations, if the imaging mode is simple tapping mode, where the excitation frequency is equal to the resonance frequency, the ratio f_exc/f_0 reduces to one and simplifies the equations [19,20].
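As a worked illustration of Equations (1) and (2), the short Python sketch below (our construction; the quality factor and the amplitude/phase values are assumed for demonstration, while the cantilever stiffness and resonance frequency are the values quoted above) converts a pair of tapping-mode observables into the two energy quantities.

```python
# Minimal sketch: converting tapping-mode amplitude/phase observables into
# the tip-sample virial and dissipated power, following Eqs. (1)-(2).
import numpy as np

def virial(k, Q, A, A0, phase_deg, f_exc, f0):
    """Tip-sample virial V_ts for one eigenmode (Eq. (1)), in joules."""
    return -(k * A * A0) / (2.0 * Q) * (f_exc / f0) * np.cos(np.radians(phase_deg))

def dissipated_power(k, Q, A, A0, phase_deg, f_exc, f0):
    """Mean tip-sample dissipated power P_ts (Eq. (2)), in watts."""
    return (np.pi * k * f_exc * A**2 / Q) * (
        (A0 / A) * np.sin(np.radians(phase_deg)) - f_exc / f0
    )

# Multi75-G values from the text; Q, amplitudes, and phase are assumptions.
k, Q, f0 = 2.07, 200.0, 78e3
A0, A, phase = 50e-9, 35e-9, 60.0   # free amplitude, setpoint amplitude, phase
print(virial(k, Q, A, A0, phase, f0, f0))            # driven at resonance
print(dissipated_power(k, Q, A, A0, phase, f0, f0))
```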
The second AFM method performed on each of the samples was a multifrequency technique called bimodal AFM. In this method, imaging is performed through the simultaneous excitation of two different eigenmodes: the first eigenmode is modulated within the feedback loop, while the second eigenmode runs in an open loop. The technique yields a series of images, including the topography, the first and second eigenmode amplitudes, and the first and second eigenmode phases. Since the two relatively weakly coupled eigenmodes can be optimized separately, the technique delivers both topography and compositional mapping. In this method, the free amplitude of the higher eigenmode is the key parameter governing the intensity of the tip-sample interactions for a given free amplitude and amplitude setpoint of the fundamental eigenmode [21]. The user can also control the drive frequency, which should be selected close to the eigenmode frequencies. The greater indentation capability of higher eigenmodes can be seen from the dimensionless equation of motion of the cantilever,

d²z̄/dt̄² + (1/Q) dz̄/dt̄ + z̄ = (1/Q) cos(ω̄ t̄) + F_ts(z̄_ts)/(k A_0),

where A_0 is the free amplitude, z̄ = z/A_0 is the normalized tip displacement, z̄_ts is the normalized tip-sample distance, t̄ = ω_0 t is the dimensionless time, ω̄ = f_exc/f_0 is the normalized drive frequency, k is the cantilever force constant, and F_ts is the tip-sample force interaction. The free oscillation amplitude is taken to be F_0 Q/k, following the work of Ricardo Garcia et al. [21]. The damping and excitation terms carry the 1/Q factor, and the last term on the right-hand side is normalized by the product of the cantilever force constant and the free oscillation amplitude. Therefore, as this denominator increases, the effect of tip-sample force interactions on the dynamics of the cantilever diminishes. In the bimodal AFM case, the force constant of the higher eigenmode is approximately 36 times that of the first eigenmode, so the cantilever becomes less sensitive to forces when excited with two eigenmodes. Consequently, deeper surface penetration is observed, and if the soft matter under study is compressible, the AFM tip compresses the film and a stiffer surface is measured [22].
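To make the role of the k·A_0 normalization concrete, the following minimal sketch integrates the dimensionless equation of motion above with a DMT tip-sample force. The tip radius and force constant come from the Methods; every other numeric value is an illustrative assumption.

```python
# A minimal sketch (all values other than k and R are assumptions): integrating
# the dimensionless tapping-mode equation of motion with a DMT tip-sample force
# to see how F_ts, normalized by k*A0, shapes the cantilever dynamics.
import numpy as np
from scipy.integrate import solve_ivp

Q = 200.0      # quality factor (assumed)
k = 2.07       # force constant, N/m (from the Methods)
A0 = 50e-9     # free amplitude, m (assumed)
zc = 45e-9     # mean tip-surface separation, m (assumed)
R = 10e-9      # tip radius, m (from the Methods)
H = 1e-19      # Hamaker constant, J (assumed)
a0 = 0.165e-9  # intermolecular distance, m (typical value)
E_eff = 1.0e9  # effective contact modulus, Pa (assumed)
wbar = 1.0     # normalized excitation frequency f_exc/f0

def F_ts(gap):
    """DMT force: van der Waals attraction above contact, Hertzian repulsion in contact."""
    if gap > a0:
        return -H * R / (6.0 * gap**2)
    return -H * R / (6.0 * a0**2) + (4.0 / 3.0) * E_eff * np.sqrt(R) * (a0 - gap)**1.5

def rhs(t, y):
    zbar, vbar = y
    gap = zc + zbar * A0  # instantaneous tip-sample gap
    # The k*A0 product in the denominator is what suppresses F_ts for stiff,
    # large-amplitude modes, as discussed in the text.
    return [vbar, -zbar - vbar / Q + np.cos(wbar * t) / Q + F_ts(gap) / (k * A0)]

t_end = 2.0 * np.pi * 400  # ~400 cycles, enough to pass the transient (~Q cycles)
sol = solve_ivp(rhs, (0.0, t_end), [0.0, 0.0], max_step=0.1)
tail = sol.y[0][sol.t > 0.75 * t_end]  # steady-state portion of the trajectory
print("tapping amplitude / A0 ≈", 0.5 * (tail.max() - tail.min()))
```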
The method of force spectroscopy was carried out to understand how increased salification affects the stiffness of PLA. An approach curve was first captured on a stiff glass slide while the deflection was measured in volts, to calibrate the deflection sensitivity. To collect the appropriate data, an area of 5 × 5 µm was imaged using tapping mode AFM, which was also used for the virial and dissipated power analyses. Next, we zoomed in twice, on both a PLA area and a salt contamination area, and collected a smaller image of around 1 × 1 µm. Once we ensured that the image focused on one material in particular, a force map of 20 contact mode force curves was collected over an area of approximately 200 × 200 nm. The same process was repeated twice for each of the three samples.
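The conversion from raw deflection data to force curves follows the standard optical-lever workflow; a minimal sketch is given below. The deflection-sensitivity value is an assumption standing in for the slope measured on the stiff glass slide, and the function name is hypothetical.

```python
# A minimal sketch (assumed workflow with an illustrative sensitivity value,
# not the authors' code): converting a raw deflection-versus-distance curve
# into the force-versus-separation curve used for model fitting.
import numpy as np

k = 2.07        # spring constant, N/m (from the Methods)
invols = 55e-9  # deflection sensitivity, m/V, taken from the slope of the
                # approach curve on the stiff glass slide (assumed value)

def force_vs_separation(z_piezo, deflection_volts):
    """z_piezo: piezo extension (m); deflection_volts: photodiode signal (V)."""
    deflection = deflection_volts * invols  # volts -> meters of cantilever bending
    force = k * deflection                  # Hooke's law -> newtons
    separation = z_piezo - deflection       # correct the separation for bending
    return separation, force

# Demo on a short synthetic approach segment:
z = np.linspace(0.0, 200e-9, 5)
d = np.linspace(0.0, 0.5, 5)
print(force_vs_separation(z, d))
```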
Contact mechanics models were used to extract the material properties of the samples by fitting the force curves to the appropriate model. The three primary models considered were Hertz, Derjaguin-Muller-Toporov (DMT), and Johnson-Kendall-Roberts (JKR) [23]. The Hertz model neglects adhesion and friction, which works well for nanoindentation and imaging in liquid; however, it is inappropriate for most AFM tip-sample interactions, since adhesion is rarely negligible: neutral atoms and molecules still experience forces between one another. The DMT model is appropriate for stiffer samples with longer-range adhesion, while the JKR model is appropriate for stronger, shorter-range adhesion. Based on this information, the DMT contact model was selected for the force spectroscopy analysis.
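A minimal sketch of a DMT fit is shown below. The data are synthetic and the Poisson ratio is an assumption; only the tip radius is taken from the Methods.

```python
# A minimal sketch (synthetic data; the fitted values are not the paper's
# results): extracting an effective modulus by fitting the DMT contact model
# to the repulsive part of a force-indentation curve.
import numpy as np
from scipy.optimize import curve_fit

R = 10e-9  # tip radius, m (from the Methods)

def dmt(delta, E_eff, F_adh):
    # DMT: F = (4/3) * E_eff * sqrt(R) * delta^(3/2) - F_adh
    return (4.0 / 3.0) * E_eff * np.sqrt(R) * delta**1.5 - F_adh

# Synthetic indentation (m) and force (N) data standing in for one force curve
rng = np.random.default_rng(1)
delta = np.linspace(0.0, 20e-9, 50)
force = dmt(delta, 1.0e9, 2e-9) + rng.normal(0.0, 0.05e-9, delta.size)

(E_eff, F_adh), _ = curve_fit(dmt, delta, force, p0=[1e8, 1e-9])
# Reduced modulus -> sample Young's modulus, assuming a rigid tip and a
# Poisson ratio of ~0.35 for PLA (assumption):
E_sample = E_eff * (1.0 - 0.35**2)
print(f"E_eff = {E_eff/1e9:.2f} GPa, E_sample = {E_sample/1e9:.2f} GPa")
```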
Results and Analysis
AFM tapping mode was the first method utilized to collect information from each of the three calcified PLA samples. Figure 2 compares the results of this method for both topography and phase. Although the topography images show some increase in surface roughness as the salt concentration on the samples increases, the materials are not completely distinguishable from topography alone. The main differences appear in the phase results. It should be noted that when the cantilever was tuned in air (not interacting with the surface), the phase value near or at the resonance frequency was about 90 degrees. Based on the equation of motion of a driven harmonic oscillator, as the cantilever interacts with a surface (stiffer than air), the phase value decreases below 90 degrees. Therefore, a general rule in analyzing phase images is that lower phase values (i.e., darker colors in phase images) represent stiffer surfaces. There were clear regions where salt contamination appeared on the surfaces, visible as islands on samples 2 and 3. More importantly, an overall increase in the stiffness of the samples was observed going from sample 1 to sample 3. Overall lower phase values across samples indicated stiffer surfaces and stronger tip-sample force interactions, while the contrast between the islands reflected the material composition. For sample 1, which had a 10% salt concentration, the salt presented itself as small, scattered circles over 6.74% of the surface. The size and shape of these salt particles appeared very similar to the size and shape of the humidity pores depicted by the dark circles in the topography. Even where those pores appeared in the topography, the phase images showed them to be the same material as the raised surfaces. Therefore, the conclusion can be drawn that at the microscale, the salt in a calcified PLA sample with a 10% salt concentration binds to both the surface of the PLA and the pores.
For sample 2, which had a salt concentration of 15%, the salt presented itself as large, scattered islands over 25.44% of the surface. Compared to samples 1 and 3, the salt was distributed more evenly over the surface of the PLA. Here, the salt adhered to the actual surface of the PLA rather than settling into the pores or binding with itself.
For sample 3, which had a salt concentration of 20%, the salt presented itself as one large island with a few scattered, raised circular patches over 33.44% of the surface. The separation of the PLA and salt was likely caused by an over-concentration of the salt solution relative to the PLA. The solution likely separated even before the spin-coating process occurred, causing the salt to bind to itself and therefore adhere to the surface of the PLA as singular large islands rather than small scattered islands, as seen in sample 2.
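Coverage percentages like those quoted above can be estimated by thresholding the phase images, similar in spirit to the particle analysis used later in this study. The sketch below is purely illustrative: the image is synthetic, and the threshold value and the direction of the contrast are assumptions that should be checked per image.

```python
# A minimal sketch (synthetic image; the threshold is an assumption): estimating
# the percent surface coverage of salt by thresholding a phase image.
import numpy as np

def salt_coverage_percent(phase_deg, threshold_deg):
    """Count pixels on the salt side of the threshold. Salt regions are assumed
    here to appear at lower phase (stiffer) than PLA; the sign of the contrast
    should be verified for each image."""
    mask = phase_deg < threshold_deg
    return 100.0 * mask.sum() / mask.size

# Synthetic 256x256 phase image: PLA background near 35 deg, one mock salt island near 15 deg
rng = np.random.default_rng(2)
phase = 35.0 + rng.normal(0.0, 1.0, (256, 256))
phase[100:140, 100:160] = 15.0
print(f"{salt_coverage_percent(phase, 25.0):.2f}% salt coverage")
```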
The subsequent analysis was a virial and dissipated-power analysis based on the amplitude modulation channels from the tapping mode images collected from each sample. Figure 3 displays the results of this study. By visual inspection, both the virial and dissipated power images yielded contrasts similar to the tapping mode phase images. The dissipated power maps were nearly identical to the phase images, while the virial maps revealed slightly more detail [24,25]. Again, it was visually difficult to infer a change in stiffness from these images alone, but further numerical analysis could show that the results correlate with an increase in stiffness across the samples.
The next method used to analyze the calcified PLA samples was bimodal imaging, where the results were generated through the simultaneous excitation of two eigenmodes of the AFM cantilever. Figure 4 displays the results of this method, including the topography, phase 1 generated through the first eigenmode, and phase 2 generated through the second eigenmode.
The bimodal imaging results shown in Figure 4 compare well with the normal tapping mode results presented in Figure 2, and the topographies appear nearly identical. Similarly, there was some increase in surface roughness as the salt concentration increased, but the materials were not completely distinguishable from topography. The phase 1 images were also nearly identical to the phase images from Figure 2, which clearly distinguished the salt contaminations from the PLA surface. Moreover, areas of the surface became more detailed in the phase 2 images based on the second eigenmode. This change was most notable for samples 2 and 3, as the gaps in the salt contaminations became more apparent. Additionally, the raised area in the top right corner of sample 3 differed between phase 1 and phase 2: the phase 1 image showed that area as PLA, or an area of lower stiffness, while salt contaminations began to appear in that region in the phase 2 image. Overall, comparing these sets of images, bimodal imaging allows for a more detailed understanding of the materials present on the surface than normal tapping mode.
The final analysis performed on the three calcified PLA samples was force spectroscopy, which allowed the Young's modulus, or stiffness, of each sample to be measured. A series of force curves was collected using force mapping over an area of high salt concentration and an area of primarily PLA. For example, for sample 3, force maps were collected in the dark purple outer region and the pink inner region depicted in Figure 5a. Figure 5b shows two example force-versus-separation curves for the different areas on sample 2, converted from the raw deflection-versus-distance curves collected through force spectroscopy. Figure 5c displays the effective Young's modulus values for each sample, which were calculated by fitting the DMT contact model to every curve and performing a particle analysis to determine the percentage of the surface covered in salt. Figure 5d is a physical representation of the DMT contact model, in which the DMT contact is represented as an infinite series of springs. In this representation, more springs are activated as the AFM tip goes deeper into the surface; stiffness therefore depends on both the tip position and the contact area. The 3D representation of the topography superimposed with the phase clearly distinguishes the areas of high salt concentration as well as their impact on the roughness of the samples. Additionally, the slopes of the force-versus-separation curves over the two different areas distinguish their stiffnesses: the orange force curve over the salt contamination area is steeper than the purple curve over the PLA area. This trend was consistent for each sample, and the Young's modulus over the PLA areas also increased from sample 1 to sample 3. Therefore, as the salt contamination covered a greater percentage of the surface and the overall NaCl concentration increased, the effective Young's modulus of the samples also increased.
It is important to note the quantitative and qualitative differences when analyzing and comparing the results from each method, including tapping mode, the virial and dissipative analysis, bimodal imaging, and force spectroscopy. First, examining the phase change across the samples, specifically over the PLA regions, one might conclude that the stiffness of the PLA decreases as the NaCl concentration increases. For example, as shown in Figure 5a, the phases for the PLA regions of samples 1, 2, and 3 were approximately 35°, 30°, and 15°, respectively. However, the force spectroscopy results proved otherwise. The effective Young's modulus was calculated as an accumulation of force spectroscopy results over both the PLA and salt contamination regions of each sample, weighted by their respective percent surface areas. Through this analysis, the force spectroscopy results showed that the average Young's modulus over the PLA regions increased across the samples as the NaCl concentration increased. Specifically, the average Young's modulus in these regions for samples 1, 2, and 3 was 0.774 GPa, 1.062 GPa, and 1.154 GPa, respectively, with standard deviations of 0.53 kPa, 0.79 kPa, and 0.61 kPa. A similar trend was seen for the salt contamination areas, where the phase images indicated that the pink salt areas of sample 3 had a lower stiffness than the yellow salt areas of samples 1 and 2. Once again, the force spectroscopy results proved otherwise: sample 3 had the highest average Young's modulus value of 2.582 GPa, compared to 1.718 GPa and 1.373 GPa for samples 1 and 2, respectively. It is also worth noting that these phase trends from tapping mode were consistent with the phase 1 and phase 2 images from bimodal imaging.
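Based on the description above, the accumulation can be read as an area-weighted average of the region-wise moduli; the weighting scheme in the sketch below is an assumption inferred from that description, while the numbers come from the text.

```python
# A minimal sketch (weighting scheme assumed from the description; values
# from the text): an area-weighted effective Young's modulus.
def effective_modulus(E_pla, E_salt, salt_fraction):
    """Area-weighted average; salt_fraction is the fractional salt coverage (0-1)."""
    return (1.0 - salt_fraction) * E_pla + salt_fraction * E_salt

# Sample 1: PLA 0.774 GPa, salt 1.718 GPa, coverage 6.74%
print(f"{effective_modulus(0.774, 1.718, 0.0674):.3f} GPa")
```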
Theoretically, each AFM method should provide the same information about each sample. However, the results proved otherwise. While there were similarities between the methods, each method offered additional unique information and can provide guidelines for researchers choosing among characterization techniques for samples with varying mechanical properties. Figure 6 displays the data normalization results comparing the outcomes of each AFM method, in order to understand the sensitivity of each characterization technique performed. Sample 2 and sample 3 data were normalized by the sample 1 data for the corresponding method of measurement; for example, in the tapping mode AFM column, the sample 2 and sample 3 average phase values are divided by the sample 1 average phase value. Since sample 1 is the untreated polymer, we used this sample as the reference point in our study. The closer a value is to one in this plot, the smaller the difference observed by the measurement technique; the dotted horizontal line represents this threshold. The average phase from simple tapping mode, the average dissipated power, the average phase 2 from bimodal imaging, and the effective Young's modulus from force spectroscopy were selected for comparison. Overall, the sensitivity study shows that the phase results from simple tapping mode are not reliable: although they accompany a good sample topography, the phase images are not necessarily reliable indicators of sample stiffness. The energy quantities do provide useful information, as the sample surfaces become less dissipative, indicating increased stiffness as the salt concentration increases. Additionally, the bimodal phase 2 images provide useful information about the samples: the higher the salt concentration, the lower the phase values, indicating that stiffness increases across the samples. Finally, the force spectroscopy results follow the trend exactly, as the Young's modulus and stiffness increase across the samples. These results also show that force spectroscopy is the most sensitive to the sample changes.
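The normalization itself is straightforward; a minimal sketch is given below, reusing values quoted in the text (the remaining observables would come from the corresponding measurements).

```python
# A minimal sketch of the normalization described above; the arrays reuse
# values quoted in the text for the PLA regions.
import numpy as np

pla_phase_deg = np.array([35.0, 30.0, 15.0])  # tapping-mode phase, samples 1-3
youngs_GPa = np.array([0.774, 1.062, 1.154])  # force spectroscopy, samples 1-3

for name, vals in [("phase", pla_phase_deg), ("modulus", youngs_GPa)]:
    normalized = vals / vals[0]  # sample 1 (untreated) as the reference point
    print(name, normalized[1:])  # samples 2 and 3 relative to sample 1
```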
Conducting each method showed that applying the same technique across samples of increasing stiffness may not always be viable. It also shows that some techniques, such as bimodal phase imaging and force spectroscopy, provide more useful information than others, such as simple tapping mode phase; in other words, force spectroscopy and bimodal AFM are more sensitive to material differences over a given surface. Finally, the comparison between the normal tapping mode images and bimodal imaging shows that both methods need not be conducted. It would be more useful for the researcher to consider only bimodal imaging, as the addition of phase 2 and the implementation of two eigenmodes provide similar yet more accurate information about the samples. This holds true as long as the product k·A_0 appearing in the equation of motion above does not increase so drastically that the forces applied to the soft matter damage the surface.
Conclusions
In this study, we compared tapping mode AFM, bimodal AFM, energy analysis (dissipated power), and force spectroscopy techniques while characterizing one of the most commonly used biodegradable polymers (PLA). It was shown that as the salt concentration on PLA surfaces increases, AFM techniques are capable of detecting the resulting changes in material properties. However, it was also shown that each technique has its own sensitivity to these property changes. Tapping mode AFM alone is not a reliable characterization technique for material properties. However, using the amplitude and phase signals of tapping mode AFM, we derived the virial and dissipated power, which verified that the dissipated power (a combination of amplitude and phase information) is more sensitive to sample differences. In addition to simple tapping mode AFM, bimodal AFM was shown to be a useful technique that can detect material changes while still providing topographical information; however, its sensitivity is not as good as that of force spectroscopy. This study concluded that, in order to detect differences in material properties, force spectroscopy is the most sensitive technique, although it cannot provide topographical information. Therefore, based on this work, it is recommended that investigators perform bimodal AFM imaging, followed by a force map that can be fit with the contact models discussed in this paper, for a comprehensive analysis.
Theory of Inpatient Circadian Care (TICC): A Proposal for a Middle-Range Theory
The circadian system controls the daily rhythms of a variety of physiological processes. Most organisms show physiological, metabolic and behavioral rhythms that are coupled to environmental signals. In humans, the main synchronizer is the light/dark cycle, although non-photic cues such as food availability, noise, and work schedules are also involved. In a continuously operating hospital, the lack of rhythmicity in these elements can alter the patient’s biological rhythms and resilience. This paper presents a Theory of Inpatient Circadian Care (TICC) grounded in circadian principles. We conducted a literature search on biological rhythms, chronobiology, nursing care, and middle-range theories in the databases PubMed, SciELO Public Health, and Google Scholar. The search was performed considering a period of 6 decades from 1950 to 2013. Information was analyzed to look for links between chronobiology concepts and characteristics of inpatient care. TICC aims to integrate multidisciplinary knowledge of biomedical sciences and apply it to clinical practice in a formal way. The conceptual points of this theory are supported by abundant literature related to disease and altered biological rhythms. Our theory will be able to enrich current and future professional practice.
INTRODUCTION
Inpatient care represents a challenge for health professionals because the requirements of each patient are diverse. Timing is a key element in patient care at hospitals, because almost all patients follow a schedule of interventions such as medication management, food management, laboratory sampling, testing, and visits. This alters the patient's own routine, which is already disturbed by the changes brought about by the disease, sleep disturbances, and family dependency. Replacing the patient's lifestyle and family or work environment with a hospital environment that disengages their vital functions and daily activities from environmental cues can generate great stress. The environmental cues given by the social environment are replaced by the regular hospital characteristics of noise peaks and constant light. Changes in routines and rhythms can alter the patient's recovery process and ability to adapt.
Understanding the presence of biological rhythms in healthy subjects and patients in relationship with their external environment, can contribute and strengthen the scientific rationale for nursing practice. Although there is abundant literature related to the topic of biological rhythms and their role in the homeostasis of the individual and disease, so far there is no middle-range theory that integrates existing knowledge of biological rhythms and patient care. The purpose of this paper is to propose a middle-range theory for inpatient circadian care.
OVERVIEW OF THE THEORY OF INPATIENT CIRCADIAN CARE (TICC)
Time is a phenomenon that affects the biological activities of organisms. Time-bound activities are present at all levels of biological organization, from the cellular to the systemic level, and are known as biological rhythms. Biological rhythms comprise all physiological phenomena that occur in a cyclical manner in living organisms. The importance of biological rhythms in living organisms stimulated the development of chronobiology, a branch of physiology that was born with the congress on biological rhythms at Cold Spring Harbor (1960) directed by Prof. Dr. Franz Halberg [1]. Chronobiology studies the temporal organization of biological processes at the molecular, cellular, tissue, systemic, and individual levels, and the interactions between this organization and the environment, in order to establish a correlation between environmental events and the organization of biological functions.
Biological rhythms are mechanisms that prepare the body for predictable changes in the environment. One predictable event that occurs repeatedly over time at a constant interval is the day-night cycle [2,3]. Biological rhythms are classified into three groups according to their period relative to the 24-hour day: ultradian rhythms are shorter than 24 hours; infradian rhythms last longer than 24 hours (weeks, months, or seasons); and circadian rhythms have a period of about one day (the term circadian rhythm was coined by Franz Halberg from the Latin circa, "around", and dian, "day", which literally means "about a day") [4]. Circadian rhythms are the most often studied rhythms due to their clinical implications and physiology [5][6][7].
Most physiological and behavioral functions in humans change over day and night. These changes allow organisms to anticipate and adapt, in a precise and controlled way, to the changes in light and dark that are linked to the rotation of the Earth [8][9][10]. This rhythmicity is generated by specific structures called endogenous biological clocks, which are genetically encoded to generate internal organic fluctuations that respond to the presence or absence of external signals. These fluctuations allow the individual to develop adaptive processes toward environmental changes [11] and provide the individual with an internal temporal order essential for the survival of the species. The main function of this internal order is to optimize metabolism and ensure the proper use of energy to sustain vital body processes, which requires a command from the central nervous system [6].
In mammals, the temporal control is performed from the suprachiasmatic nucleus (SCN), located in the hypothalamus. The SCN acts as a pacemaker through the expression of at least a dozen genes called clock genes. However, new evidence suggests that peripheral clocks exist in tissues of some organs to modulate many aspects of physiology and behavior, allowing for temporal homeostasis [12,13].
ZEITGEBERS OR "TIME-GIVERS"
In addition to internal factors, there are external factors that synchronize the internal clock to changes in the environment (these external synchronizers are called Zeitgebers, or "time-givers"). The strongest synchronizer is the light/dark cycle, which adjusts the timing of the circadian clock to a 24-hour interval. In addition to the light/dark cycle, various other factors such as food habits, social and working hours, and the administration of drugs for medical purposes may also affect biological rhythms in humans [5,6,14]. The light/dark cycle determines the sleep/wake cycle, one of the vital biological rhythms that is fundamental for life, body homeostasis, and recovery from daily bodily wear. This cycle is determined by endogenous factors, but also by external cues such as light and noise levels and working hours. Sleep/wake patterns can be unique to each individual, and their alteration can cause health problems.
Illness and hospitalization processes constitute an important risk factor for disruption of the rest/activity and sleep/wake rhythms, which are also modified by other factors including age, anxiety, depression, pain, medication management, and hospital environment. Factors such as light and noise, typical of the hospital environment, can alter the beginning of the sleep phase [15,16].
The proposed TICC theory has one key concept: adaptation. This concept is fundamental in the model of Roy, one of the theoretical models that support current nursing practice worldwide [17]. Several middle-range theories derived from the Roy adaptation model apply to all nursing practice and propose that the objective of patient care is to restore balance and conserve energy homeostasis [17]. Hospitalization causes a disturbance in life, requiring a period of compensation followed by adaptation, which can be positive or negative, and complete or incomplete [18]. These changes require the organism to implement strategies to effectively ensure its survival against challenges in everyday situations and during critical periods of environmental change. Adaptation, homeostasis, and survival are the goals of nursing care [19,20].
Currently there is enough scientific evidence showing that rhythmicity and adaptability are key elements in the phenomenon of biological rhythms. Evidence also shows how these elements can determine health or disease. However, there is a widespread lack of understanding among health professionals about this evidence and about the implications of clinical interventions, care, and environmental factors, such as lighting, noise, and sleep disturbances, for hospitalized patients [21].
THEORY DEVELOPMENT PROCESS
The method employed to develop the middle-range theory of Inpatient Circadian Care (TICC) was the same one employed for the development of the middle-range theory in aviation nursing: "Flight Nursing Expertise: towards a middle-range theory" [22].
Initially, key concepts were stated by the authors based on their previous knowledge and professional experience. The following key concepts were established: chronobiology, biological rhythms, circadian rhythms, hospital, and adaptation. These concepts guided an exhaustive literature review using the databases PubMed, SciELO Public Health, and Google Scholar. The search was performed using the key terms biological rhythms, chronobiology, circadian rhythms, nurse care, middle-range theories, and adaptation, and covered a period of 6 decades, from 1950 to 2013. Finally, the information was analyzed and discussed, supported by all the information found and by each author's decades of experience in basic and clinical research in the disciplines of nursing, medicine, chronobiology, neuroscience, and genetics. To strengthen the conceptual frame of this theory, we integrated theoretical elements with clinical practice, examining connections between concepts of chronobiology and the characteristics of care for the hospitalized patient, with a focus on nursing care. To build this new theory, inductive and deductive reasoning processes, supported by empirical evidence and experience, were used.
OPERATIONAL DEFINITION OF CONCEPTS
The middle-range "Inpatient Circadian Care" (TICC) has three basic building blocks:
1) The patient or circadian subject (time subject)
2) The synchronization process of a patient's biological rhythms
3) The temporal environment as a synchronizer
The following concepts arise from this theory: the circadian subject "I clock"; the "synchronization" process of biological rhythms of an individual, and the temporal environment or time giving environment represented in this model by the hospital staff "clockmaker" (Fig. 1).
Circadian Human Being "I Clock"
In the circadian human being, these rhythms are present throughout life. During prenatal life, maternal circadian information adjusts the fetal internal clock, thus emitting signals for labor initiation and birth. Later, in the neonatal stage, circadian cycles depend on the light/darkness periods [23][24][25][26][27][28][29][30]. During the first years of life, the challenge for every individual is to adapt to a sleep/wake cycle appropriate to the surroundings. This sleep/wake cycle adjusts itself progressively, mediated by genetic and cultural factors that modulate the sleep habit throughout life [31]. The development of a sleep pattern coincides with the development of the central nervous system: "infant sleep can be considered as a window to observe the developing brain" [32]. It has been proposed that alterations in normal sleep development could expose problems in the development of the central nervous system and, therefore, imply risks to a person's adaptation to the environment [32].
It is important to note that not all individuals are equal from the point of view of time. Different circadian phenotypes are called chronotypes. A chronotype is defined as the personal preference regarding the wake (activity) and sleep (rest) schedule. Three basic chronotypes have been described: morning (early bird), evening (night owls), and intermediates [33,34].
In childhood, there is usually a preference for being an early riser. This preference may change during adolescence, between the ages of 12 and 16, due in large part to hormonal influences. The individual becomes more of a night owl and increases the number of sleep hours at night, phase delaying such that they prefer to awaken in the late morning, closer to noon. This situation continues until about age 20 [35][36][37]. In adults, the sleep period is gradually reduced, and it is common for the elderly to prefer a more phase-advanced chronotype, with early bed times and early awakenings before dawn [38].
Extreme morning and evening chronotypes can also be considered. These features are mediated by a combination of genetic, demographic (age and gender), individual (personality, lifestyle, working conditions), and environmental factors such as geographic latitude [34,35,39,40].
Circadian rhythms not only modulate normal biochemical, physiological and behavioral variables on a daily basis, but also determine prognosis and response to treatment at the onset of disease. Many diseases also exhibit temporal structure: signs and symptoms vary during the day in conditions such as asthma, peptic ulcer, gastroesophageal reflux disease, hemorrhagic and ischemic stroke, epilepsy, myocardial infarction, hypertension, ventricular arrhythmias, cancer, depression, anxiety, bipolar disorder, and Alzheimer's disease. A common outcome for many of these diseases is death, and even this outcome may have a particular time of presentation [41][42][43][44][45][46][47][48][49][50][51].
"Synchronization" Process
Under disease conditions the circadian human being loses synchronization between the oscillations of the biological clock and the rest of the body, resulting in internal clock desynchronization with periods different from the 24-hour cycle (early or late phase). This can develop in patients with chronic diseases such as diabetes, hypertension, or cancer [52,53]. It may also be present in depressive states and other psychiatric disorders [54]. Given the high incidence and susceptibility of people to suffer from internal desynchronization, research and clinical observations suggest that individual genetic factors play a determinant role in both acute and chronic medical conditions [55,56]. Desynchronization of circadian rhythms (eg, sleep/wake cycle) is promptly manifested as sleep disturbance, persistent fatigue, hypnotic drug dependence, and impaired mood, including depression [57].
Temporal Environment as a Synchronizer
The spatial and temporal environments intervene in each of the concepts of ill person, nursing, and health in a complex person-environment interaction [58]. The circadian human being is subject to continuous adaptation through external time signals (Zeitgebers), with light being the most powerful, followed by feeding time, daily activity and rest, social interaction, and exercise. All these factors keep the individual adapted and oriented to the temporal environment in his daily activities [59][60][61]. Nevertheless, asynchrony can arise between the internal and external clocks and the light/darkness cycle, caused by multiple factors: rapid travel across time zones, night-shift work, and exposure to excessive bright light at night. All of these factors generate desynchronization with the external time-givers [60,62].
This state, called chronodisruption, is defined as a significant disturbance of the internal temporal order of physiological, biochemical, and behavioral circadian rhythms. Chronodisruption is associated with a higher incidence of metabolic syndrome, cardiovascular diseases, cognitive and affective disorders, sleep disorders, some types of cancer, premature aging, and accidents [63,64]. Chronodisruption can be potentiated by the hospital as a new environment for the patient, a place where ambient cues differ from those generally found in daily activities: illumination, noise, infrastructure, temperature, workers, ventilation, security, visiting time, number of patients in each room, clinical assessment, invasive procedures, laboratory tests, and medication intake. All these elements together can alter the circadian synchronization of a patient physiologically, psychologically, and socially.
"Clockmaker" and Circadian Rhythms in Hospitalized Patients
The aim of hospital staff interventions is to allow hospitalized patients to conserve or restore the concordance between their internal clock and their environment. This can be accomplished using the following eight principles.

Fig. (1). Inpatient Circadian Care (TICC). Biological rhythms are regulated by the environment due to the presence of timers or Zeitgebers (1). Disease is characterized by structural and functional changes that can be caused by alterations in biological rhythms and/or interfere with the internal rhythms of the individual (2). Hospitalization represents a new environment for the patient, with new ambient timers that can complicate the recovery of the patient (3). The role of nursing is to intervene in the factors, both external and individual, that contribute to the desynchronization of the internal rhythms, in order to favor the patient's recovery.
Control of Suitable Light/Dark Cycles
Constant exposure to bright light, both during day and night, has negative effects on circadian rhythms [65], creating an inadequate environment for sleep [66]. It is important for nurses to take into account that proper exposure to light is essential for therapeutic stimulation, in order to help maintain a patient's circadian rhythms. In this sense, natural light/dark cycles can be simulated in hospitals, leading to a better distribution of the periods of activity/rest. Parameters such as light intensity, frequency and duration of light exposure, and spatial distribution are already established [67]. For example, the Illuminating Engineering Society recommends daytime light levels of 30 lumens/ft² for hospital rooms [68,69].
Noise Level Regulation
Noise is a major source of environmental pollution and has a harmful effect on health and wellbeing. Sound levels higher than 40-45 decibels (dB) interfere with communication, disrupt sleep, and can lead to increased stress in patients. Recommended sound levels in hospital rooms are 35 dB. Therefore nurses should be alert to perturbing sounds such as alarms from infusion pumps, phone ringing, and the sound of nurse's shoes, in order to reduce disturbance levels and improve quality of care [68,[70][71][72][73].
Temperature Levels Regulation
Although the circadian clock can be entrained by temperature cycles [74], its free-running period is relatively constant within a broad range of physiological parameters, a phenomenon called temperature compensation [75]. Extreme temperatures, however, have the potential to disrupt sleep continuity and therefore may contribute to an adverse care environment, particularly when combined with changes in the level of light (e.g., bright light with low temperature, or darkness with hot temperature). The recommended room temperature threshold for healthy sleep is less than 75 degrees Fahrenheit (°F). Nursing interventions should therefore focus on maintaining the temperature of hospital rooms below 75°F in order to improve sleep efficiency [68,76].
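As an illustration of how the light, noise, and temperature recommendations cited in these principles could be operationalized for monitoring, the sketch below encodes them as simple thresholds; the sensor readings and function names are hypothetical.

```python
# A minimal sketch: flagging hospital-room readings that fall outside the
# environmental thresholds cited above. All readings here are hypothetical.
THRESHOLDS = {
    "daytime_light_lumens_ft2": (30, None),  # IES daytime recommendation (minimum)
    "sound_dB": (None, 35),                  # recommended maximum in hospital rooms
    "temperature_F": (None, 75),             # healthy-sleep ceiling
}

def check_room(readings):
    """readings: dict with the same keys as THRESHOLDS."""
    alerts = []
    for key, (low, high) in THRESHOLDS.items():
        value = readings[key]
        if low is not None and value < low:
            alerts.append(f"{key} below recommended minimum ({value} < {low})")
        if high is not None and value > high:
            alerts.append(f"{key} above recommended maximum ({value} > {high})")
    return alerts

print(check_room({"daytime_light_lumens_ft2": 22, "sound_dB": 48, "temperature_F": 77}))
```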
Adequate Drug Administration
Suitable drug administration is important to align therapy with endogenous rhythms. Chronopharmacology refers to the "design and evaluation of drug delivery systems that release a bioactive agent at a rhythm that ideally matches the biological requirement of a given disease therapy" [77]. The pharmacodynamics and pharmacokinetics of a drug can vary according to the time of day the medication is administered [78], thus controlling the efficacy and toxicity of the drug. Chronotherapy can be delivered through a variety of strategies and may include both time- and site-specific drug application regimens. Chronopharmacological techniques ensure that drug levels in the blood are within therapeutic ranges during periods of maximal disease severity. An example of this is how evening doses of antihypertensive therapy can be used to prevent morning rises in blood pressure [79]. The evening dose of the drug is thus well timed with diurnal changes in blood pressure, preventing diurnal worsening of hypertension. Another example is chemotherapy, for which researchers have found that the toxic effects of more than 30 anticancer drugs vary with the timing of administration [80].
Currently, however, a large number of drugs are administered without considering time of day or adjusting schedules to chronotherapeutic principles. This situation may lead to treatment failure or increased risk for drug toxicity. Furthermore, alterations in the circadian rhythms of biochemical, physiological, and behavioral processes induced by the drug itself can alter homeostatic regulation and exacerbate the disease [81][82][83][84][85][86]. Therefore, nurses must consider chronopharmacology as a means of increasing the efficacy of drugs and improving drug tolerance, paying attention to potential side effects of circadian disruption.
Rationale for Management of Practices and Procedures
Clinical procedures and tests in hospitalized patients often require patient preparation (food deprivation, fluid intake, laxatives, etc.), which induces stress and discomfort. Also, several usual care practices (such as taking vital signs or laboratory samples) are performed unexpectedly, without preparing the patient. Similarly, medium- and high-complexity surgical procedures are often scheduled around hospital staff schedules and availability, regardless of a patient's biological chronotype. For example, vaginal deliveries or caesarean sections are scheduled in accordance with personnel convenience, thus altering the timing of the natural birth process.
Cognitive ability and capacity of the professionals who perform procedures may also vary throughout the day, which in turn may interfere with the procedure's duration and outcome. All these elements generate distress, anxiety, fear and further a patient's internal desynchronization. Additionally, it should be emphasized that several biochemical and physiological variables show a circadian variation [87][88][89][90][91][92].
Control of Feeding Schedules
There is an association between sleep duration and food intake [93]. The timing of food intake and behavior are controlled by the internal circadian clock, and the circadian control of food intake and digestion has important metabolic implications [93,94]. For example, chronic circadian misalignment has been proposed as the underlying cause of the adverse metabolic and cardiovascular health effects of shiftwork [95]. Furthermore, it was recently reported that the timing of food intake influences the success of a weight-loss diet [96], suggesting that novel therapeutic strategies should incorporate not only the classic caloric intake and macronutrient distribution but also the timing of food. Thus "chrononutrition", the appropriate timing of food intake, can prevent metabolic and circadian alterations in the patient by reducing cardiovascular risks and improving the nutrient supply that contributes to the patient's recovery [97][98][99][100][101].
Encourage Social Interaction
A disruption in social rhythms may lead to instability in circadian rhythms [102,103]. The hospitalization process leads to a rupture of social ties, including separation from family, friends and daily activities [104]. Therefore, it is important that hospital nurses encourage social habits and interactions by facilitating visiting hours, providing information about what is happening in their environment, and promoting leisure and recreation. Better communication and interaction among patient, nurses and doctors regarding patient's health status will promote recovery. Taken together, these factors allow patients a proper synchronization with the social environment, reduce stress levels, and transition back into the family setting after hospital discharge.
Promote Adequate Sleep Pattern (Sleep Hygiene)
Disturbed sleep quality is associated with deficits in psychologic, behavioral, and somatic functions and predicts the emergence of deficits in interpersonal and psychosocial functioning [105]. Models used in the investigation of sleep quality emphasize the role of caretakers in ensuring that patients have a regular sleep/wake schedule, a suitable sleep environment, and a bedtime routine that prepares them physiologically, behaviorally, and emotionally for sleep. These practices, commonly referred to as sleep hygiene, influence sleep quality. Nurses will understand that, in addition to controlled environmental factors, there are a number of individual and intrinsic factors that induce and maintain sleep through personal routines [106].
In summary, there is clearly a need for incorporating these eight principles into bedside practice protocols as well as a need for further research to assess the positive outcomes of these efforts.
THE THEORY AS A WHOLE
There is strong scientific evidence demonstrating a reciprocal relationship between a robust endogenous circadian system and the presence of regularly programmed Zeitgebers. During disease, the correct function of the circadian synchronization system can be more dependent on exposure to intentionally programmed stimuli. Light levels, physical activity, temperature, and feeding are key elements for the internal synchronization of the central circadian clock and peripheral circadian oscillators. In such conditions, regularly programmed exposure to Zeitgebers, based on rational and scientific criteria, would be very important to recovery and well-being [107]. Avoiding desynchronization and promoting circadian homeostasis contributes to optimal physiological function, general good health, and reduced susceptibility to comorbidities in persons with an acute or chronic illness [108]. For the health system, paying attention to circadian principles could reduce costs through preventive interventions that diminish patient complications and reduce hospital stays (Fig. 2).

Fig. (2). The circadian nursing care of the hospitalized patient and costs in the health system. A scientific nursing practice that makes accurate interventions to correct the disruption of the patient's biological rhythms can reduce costs by avoiding prolonged hospital stays and severe complications in the patient (1). Inappropriate attention to the patient, without considering the synchronizing and desynchronizing factors of biological rhythms, implies the probability of raising the costs of health services and an inappropriate quality of care (2).
CONCLUSION AND IMPLICATIONS FOR PRACTICE AND RESEARCH
Chronobiology applied to patient care has broad implications for clinical practice, basic clinical research, and the training of new health care professionals. Knowledge of human circadian variations (physiological, psychological, behavioral, and social), as well as knowledge of how a disease behaves during a 24-hour period, considering its signs and symptoms and the desired and undesired side effects of medications, will generate a care plan appropriate to the timing needs of the patient. A care plan should include ensuring a stable external environment for the individual, with suitable light-dark cycles, low noise, constant temperature, and proper drug and food administration. In addition, minimizing isolation and encouraging social interaction should be introduced into the plan of care for any long-term patient. This holistic approach presents an integration of existing scientific knowledge on chronobiology with the art and skill of nursing practice.
The patient is in continuous interaction with the environment (family, group, community, society, and hospital room) and, accordingly, experiences particular lifestyles and responses to health and disease. The need for an updated knowledge base on biological rhythms requires that the nurse interact with other disciplines, both basic biomedical sciences and clinical ones. Given that certain circadian traits are inherited, understanding the genetic architecture and molecular mechanisms of circadian clock regulation can provide the knowledge necessary for individualizing patient care, thus facilitating a rational therapeutic approach to hospital care.
Finally, nursing care based on current scientific knowledge of chronobiology can positively impact the costs of health systems by detecting and preventing elements of the individual or the environment that may disrupt or interfere with the patient's recovery. This can help reduce hospitalization time, medical and surgical complications, and morbidity and mortality. However, more research is required in the area of circadian nursing care to determine the practical utility of this theory and to generate new knowledge that can strengthen the training of new professionals and the quality of current professional nursing practice.
Medicaid enrollment among previously uninsured Americans and associated outcomes by race/ethnicity—United States, 2008‐2014
Objectives To examine the person‐level impact of Medicaid enrollment on costs, utilization, access, and health across previously uninsured racial/ethnic groups. Data Source Medical Expenditure Panel Survey, 2008‐2014. Study Design We pooled multiple 2‐year waves of data to examine the direct impact of Medicaid enrollment among uninsured Americans. We compared changes in outcomes among nonpregnant, uninsured individuals who gained Medicaid (N = 963) to those who remained uninsured (N = 9784) using a difference‐in‐differences analysis. Principal Findings Medicaid enrollment was associated with significant increases in total health care costs and total prescription drug costs and a significant decrease in out‐of‐pocket costs. Among those who gained Medicaid, prescription drug use increased significantly relative to those who remained uninsured. Medicaid enrollment was also associated with a significant increase in reporting a usual source of care, a decrease in foregone care, and significant improvements in severe psychological distress. Changes in total prescription drug costs and total prescription drug fills differed significantly across each racial/ethnic group. Conclusions Among a national sample of uninsured individuals, Medicaid enrollment was associated with substantial favorable changes in out‐of‐pocket costs, prescription drug use, and access to care. Our findings suggest Medicaid is an important tool to reduce insurance‐related disparities among Americans.
Evidence stemming from these experiments suggests Medicaid has positive effects on access to care, health, and financial security. [8][9][10][11][12][13][14] For example, Medicaid expansion under the ACA led to an 8.2 percentage point improvement in insurance coverage, 5 a 12.1 percentage point increase in access to primary care, 15 a 3.4 percentage point decrease in self-reported lifetime depression diagnoses among individuals with chronic conditions, 8 and a decrease in unpaid medical bills of $3.4 billion over 2 years. 10 Although the population-level effects of state Medicaid expansions (i.e., average treatment effects) are well documented, less is known about Medicaid's direct impact among people who gain Medicaid after a period of uninsurance (i.e., the average treatment effect on the treated). The Oregon Health Insurance Experiment (OHIE), the most rigorous study to date to examine the impact of gaining Medicaid at the individual level, found that uninsured individuals who gained Medicaid in Oregon had significantly lower levels of depression and out-of-pocket spending and higher levels of prescription medication use than individuals who were not enrolled in Medicaid. [16][17][18] No other contemporary studies have followed individuals who gain Medicaid after a period of uninsurance. Such studies would help build on the findings of the OHIE and may shed light on whether the identified associations are consistent across time, region, and race/ethnicity. These data are critical because they can inform ongoing policy debates regarding the design and funding of Medicaid, as well as efforts to reduce racial and ethnic disparities in care. 12,[19][20][21][22] We used a nationally representative panel survey to examine the impact of Medicaid enrollment on disparities in health care costs, access to care, and general health measures among previously uninsured Americans who transitioned onto Medicaid, and stratified our analyses by race/ethnicity. Based on findings from the OHIE and population-level studies, we hypothesized that Medicaid enrollment would be associated with lower out-of-pocket costs, higher levels of prescription medication use and usual sources of care, and improvements in mental health.
Data and study population
We used 2008-2014 Medical Expenditure Panel Survey (MEPS) data. The Medical Expenditure Panel Survey is a nationally representative survey that compiles demographic, health insurance, health care cost, utilization, access, and self-reported health data. MEPS has an overlapping panel design that surveys each respondent five times over a period of 2 years; therefore, in any given year, half the sample is in their first year and half in their second year. To create our analytic sample, we restricted analyses to respondents who had 2 years of data, were between the ages of 19 and 64 (inclusive) in their first year of MEPS, were not pregnant in either year, and whose family income was ≤400 percent of the federal poverty level in each year. We excluded pregnant individuals because pregnancy is a categorical eligibility for Medicaid and because patterns of health care in pregnancy are substantively different than for other health circumstances.
Our sample consisted of two groups: (a) those who remained uninsured throughout the 2-year study period and (b) those who gained Medicaid after a period of uninsurance. We defined the latter population as respondents who were uninsured for at least 6 months within their first 9 months in MEPS (Period 1) and had at least 6 months of Medicaid coverage for the remaining 15 months (Period 2). We chose to set our cut-point at 9 months because nearly all individuals who gained Medicaid in our sample would have completed two rounds of surveys while uninsured prior to the fourth quarter of their first year in MEPS, and because this definition is similar to other evaluations of low-income populations who gain Medicaid. 23 To ensure all outcomes derived from round 2 of MEPS occurred in the first 9 months, we excluded individuals who did not complete round 2 by September of their first year in MEPS. Additionally, in sensitivity analyses, we vary each group definition to test the robustness of our results.
Outcome measures
Health care costs, health care utilization, and self-reported general and mental health were obtained in each of the five MEPS survey rounds, but due to how the sample was created, we did not include values from round 3. We used values from rounds 1 and 2 during Period 1 and rounds 4 and 5 during Period 2 to ensure similar followup time across each period and to allow for a brief washout period between uninsurance and Medicaid enrollment. Access measures and psychological distress were only reported in rounds 2 and 4.
Health care costs
We examined total health care costs and total out-of-pocket costs for individuals in Period 1 and Period 2, as well as total and out-ofpocket prescription drug costs. Each cost measure was adjusted to 2014 dollars using the Medical Component of the Consumer Price Index. 24 For inpatient, outpatient, and emergency department (ED) visits and prescription drug costs, MEPS collects data from the participating individual and their medical providers. 25
Health care utilization
We estimated having any ED visit, total number of ED visits per person, any inpatient visit, total number of inpatient visits per person, any prescription drug fill, and total number of prescription drug fills per person. These were obtained through medical provider records.
Health care access
Several health care access measures were assessed, including a usual source of care, foregone medical care (i.e., "unable to get medical care, tests, or treatments a respondent or a doctor believed to be necessary"), delayed medical care (i.e., "delayed medical care, tests, or treatments a respondent or a doctor believed to be necessary"), and inability to get necessary prescription drugs (i.e., "unable to obtain prescription medicines a respondent or a doctor believed to be necessary"). Each of these outcomes was asked about in rounds 2 and 4 and refers to the preceding 12 months.
Health outcomes
Our final outcome measures included several self-reported health measures: general health (fair or poor health in any survey round of each period), mental health (fair or poor mental health in any survey round of each period), and severe psychological distress (i.e., a Kessler index score of 13 or greater). 26
Covariates
We considered several sociodemographic characteristics that are known to be associated with health insurance status. 6
Statistical analysis
We first examined whether there were differences between individuals who gained Medicaid and those who remained uninsured by comparing means of baseline sociodemographic characteristics.
Next, for each of our outcomes, we compared baseline values (i.e., in Period 1) to follow-up values (i.e., in Period 2) for individuals who gained Medicaid vs those who remained uninsured. We also stratified our analysis of Medicaid gainers by race/ethnicity. Due to considerable heterogeneity and a low sample size within "Other, non-Hispanic," we excluded this group from stratified analyses. Significance testing of outcome differences between Period 1 and Period 2 was conducted using multivariable linear regression, incorporating the characteristics identified above.
In our final set of analyses, we estimated multivariable linear regression models to assess how gaining Medicaid affected each of our four sets of outcomes relative to remaining uninsured. In each model, we used a difference-in-differences framework, interacting time period with an indicator of Medicaid enrollment, to compare changes in the outcomes for Medicaid gainers to changes among those who remained uninsured between Period 1 and Period 2. Analyses were conducted among the entire sample and also stratified by race/ethnicity. We accounted for the complex survey design in MEPS using svy commands in Stata. In addition to our primary multivariable regression specifications, we ran a series of sensitivity analyses to examine the robustness of our results. First, we compared linear trends for costs, utilization, and self-reported health outcomes in Period 1 among individuals who gained Medicaid and those who remained uninsured. We did not assess Period 1 trends for access measures because data for these outcomes were only collected once during Period 1. Second, to ensure those who gained Medicaid and those who remained uninsured were well matched, we re-estimated our difference-in-differences regressions using entropy balancing, an approach that directly reweights the control group to match the means (or other moments) of the treatment group. [28][29][30] We estimated two models using entropy balancing: the first weighted with the covariates used in our baseline approach, and the second weighted with round 1 and 2 values of the outcomes, using costs when the outcomes were not measured more than once in a period. In both cases, we used the resulting weights to estimate difference-in-differences regressions similar to our baseline analyses. Third, for cost variables, we re-estimated the models using a two-part model. [31][32][33] Finally, we made a variety of modifications to our definitions of both the Medicaid gainer population and the control group, in each case varying the number of months they were either uninsured or had Medicaid coverage.
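The difference-in-differences specification itself is compact; a minimal sketch follows. The analysis was done in Stata with svy commands, so this is not the authors' code: the variable names and the synthetic data are hypothetical, and the weighting and clustering here only approximate a full complex-survey adjustment.

```python
# A minimal sketch (not the authors' Stata code): a weighted regression that
# interacts time period with Medicaid enrollment, on synthetic person-period data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 500  # persons, each observed in Period 1 and Period 2
df = pd.DataFrame({
    "person_id": np.repeat(np.arange(n), 2),
    "period2": np.tile([0, 1], n),
    "gained_medicaid": np.repeat(rng.integers(0, 2, n), 2),
    "age": np.repeat(rng.integers(19, 65, n), 2),
    "female": np.repeat(rng.integers(0, 2, n), 2),
    "weight": np.repeat(rng.uniform(0.5, 2.0, n), 2),  # stand-in survey weight
})
df["outcome"] = df["gained_medicaid"] * df["period2"] + rng.normal(size=2 * n)

model = smf.wls(
    "outcome ~ gained_medicaid * period2 + age + female",
    data=df, weights=df["weight"],
).fit(cov_type="cluster", cov_kwds={"groups": df["person_id"]})

# The interaction coefficient is the difference-in-differences estimate: the
# Period 1 -> Period 2 change among Medicaid gainers minus the change among
# those who remained uninsured.
print(model.params["gained_medicaid:period2"])
```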
| Health care costs
Next, we examined how each outcome changed from Period 1 to Period 2 among the entire sample. As indicated in Table 3, increases in total prescription drug costs were statistically significant across racial/ethnic groups.
| Health care utilization
We found no significant changes in ED and inpatient visits among individuals who gained Medicaid compared to those who remained uninsured.
| Health care access
We also observed varying changes in access to care.
| Health outcomes
Individuals who gained Medicaid and those remaining uninsured both reported just under a 2.5 percentage point decrease in the probability of reporting fair or poor health, although this was only statistically significant for those remaining uninsured.
| Difference-in-differences estimates
In addition to examining changes in mean values of each outcome, we estimated difference-in-differences models; results were generally consistent across racial/ethnic groups.
We found no significant differences in changes in ED or inpatient utilization patterns between those gaining Medicaid and those remaining uninsured. Finally, we found modest changes in health outcomes (Table 4).
We did not find statistically significant differences in changes in self-reported fair/poor general or mental health. However, there was a
| Sensitivity analyses
In our first sensitivity analysis, we examined linear trends in Period 1 using multivariable linear regression models. We found that trends were generally similar for individuals who gained Medicaid and those who remained uninsured. Statistically significant, though quantitatively modest, differences in linear trends during Period 1 were identified for three measures: total costs, any prescription drug fill, and total prescription drug fills (Table S1). Total cost and total prescription fill differences in Period 1 were <25 percent of our difference-in-differences estimate. Therefore, differences in Period 1 are unlikely to explain the large differences observed between Period 1 and Period 2. Differences in Period 1 are also illuminating: they suggest that individuals who gain Medicaid likely face escalating health costs that may result, through a variety of mechanisms, in enrollment in public health insurance coverage.
Our first entropy-balanced model was weighted on our baseline set of covariates, and our second model was additionally weighted based on round 1 and 2 outcome values, which eliminated Period 1 trend differences (Table S2). Entropy-balanced estimates were substantively similar to each other and to our primary analysis, with one exception. When we weighted on round 1 and 2 outcomes, our difference-in-differences estimates for increases in ED visits and inpatient visits became statistically significant, and the decrease in out-of-pocket spending, while similar in magnitude, was no longer statistically significant.
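For readers unfamiliar with the reweighting step, entropy balancing solves a convex problem for control-group weights proportional to exp(X·lambda), chosen so that the reweighted control covariate means match the treated means. The sketch below (Python on simulated data; our construction, illustrating only the first-moment version, not the authors' implementation) solves the dual problem directly.

```python
import numpy as np
from scipy.optimize import minimize

# Simulated covariates: treated units drawn with shifted means.
rng = np.random.default_rng(0)
X_treat = rng.normal(0.3, 1.0, size=(200, 3))
X_ctrl = rng.normal(0.0, 1.0, size=(500, 3))
target = X_treat.mean(axis=0)   # moments the control group must match

def dual(lam):
    # Convex dual of entropy balancing: log-sum-exp of X.lambda minus lambda.target.
    s = X_ctrl @ lam
    m = s.max()
    return np.log(np.exp(s - m).sum()) + m - lam @ target

res = minimize(dual, np.zeros(3), method="BFGS")
s = X_ctrl @ res.x
w = np.exp(s - s.max())
w /= w.sum()                    # weights proportional to exp(X.lambda)

print("reweighted control means:", X_ctrl.T @ w)
print("treated means:           ", target)
```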
We also re-estimated cost models using a two-part model. We found smaller, though still statistically significant, increases in total costs and total prescription drug costs, as well as significant reductions in both total out-of-pocket costs and out-of-pocket prescription drug costs (Table S3); these results did not substantively alter our main findings.
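As a sketch of the two-part approach (again our construction in Python on simulated data, not the authors' specification): part one models the probability of any spending with a logit, part two models spending conditional on any spending, and expected cost is the product of the two predictions. The Gamma family with log link used below is a common convention for skewed positive costs and is an assumption on our part; the cited references discuss alternatives.

```python
import numpy as np
import statsmodels.api as sm

# Simulated cost data with a mass point at zero.
rng = np.random.default_rng(1)
n = 3000
X = sm.add_constant(rng.normal(size=(n, 2)))
p_any = 1.0 / (1.0 + np.exp(-(X @ [0.2, 0.8, -0.5])))  # P(any spending)
any_cost = rng.binomial(1, p_any)
mu = np.exp(X @ [6.0, 0.3, 0.1])                        # mean positive cost
cost = np.where(any_cost == 1, rng.gamma(shape=2.0, scale=mu / 2.0), 0.0)

# Part 1: logit for any spending; Part 2: Gamma GLM (log link) on positive costs.
part1 = sm.Logit(any_cost, X).fit(disp=0)
pos = cost > 0
part2 = sm.GLM(cost[pos], X[pos],
               family=sm.families.Gamma(link=sm.families.links.Log())).fit()

# Expected cost combines both parts: E[cost] = P(cost > 0) * E[cost | cost > 0].
expected = part1.predict(X) * part2.predict(X)
print(expected.mean(), cost.mean())
```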
In a final sensitivity analysis, we varied the definitions for both study groups. For those gaining Medicaid, we varied the length of uninsured months and time enrolled in Medicaid; for those remaining uninsured, we varied the requirements for the months remaining uninsured. Results were similar to estimates from our primary model specification (Table S4).
| Limitations
Study findings should be considered in light of limitations related to our data source and study design. First, our statistical adjustments may not fully control for selection bias regarding who enrolls in Medicaid. Specifically, somewhat asymmetric trends in Period 1 in comparisons of individuals who remained uninsured vs those who enrolled in Medicaid suggest that uninsured individuals who enroll in Medicaid may do so for reasons related to their need for medical care. While we were unable to fully adjust for these possibilities, we employed entropy balancing in our sensitivity analyses as an additional type of control. Estimates of costs, prescription drug utilization, and access to care from our entropy balancing models were substantively similar to our primary specification.
A second limitation is that only 1 year of data was available after the ACA-sponsored Medicaid expansion occurred in some states (i.e., 2014), and therefore, we could not determine whether gaining Medicaid coverage under the ACA had differential effects compared to gaining Medicaid in prior years. A final limitation is that the access to care outcomes refer to the preceding 12 months. While these measures were generally obtained toward the end of each MEPS data period (i.e., quarter 3 or 4), some responses might refer to previous periods of insurance and bias estimates toward the null. However, we are unaware of other contemporary datasets that could be used to track uninsured populations into Medicaid coverage to examine the outcomes presented in this study at the national level.

An earlier version of this manuscript was presented at the 2018 AcademyHealth Annual Research Meeting in Seattle, WA.
The formation of job- and competency-based human resource management in Japan
This paper examines the formation of job- and competency-based human resource management (HRM) in Japan, drawing on oral histories from the steel industry to trace the path of development. At Nippon Steel and Nippon Kokan, the personnel systems evolved from the prewar academic background-based status system to the postwar academic background-based status system and finally the competency-based grade system. The process of shedding the postwar academic background-based status system required the concept of competency, which established its foundation due to two contributing factors. First, the existence of job-based wages brought the nature of specific jobs into clearer light. Second, recruiting high school graduates for blue-collar jobs created uniformity among the workforce in terms of academic background, and that enabled assessments on competency-based, not academic, criteria. Middle school graduates and university graduates came from altogether different academic backgrounds, but high school graduates came in with similar levels of knowledge, a prerequisite for applying work-oriented criteria. Despite those similar trends, Nippon Steel and Nippon Kokan would then embark on different paths in developing their respective personnel systems. Whereas Nippon Steel essentially perpetuated its job-based wage structure, Nippon Kokan converted its existing job-based wages into competency-based rates, a difference that emanated from the companies' HRM policies. Keywords: job- and competency-based human resource management, academic background-based status system, competency-based grade system, HRM policy, oral history
I. Introduction
This paper examines the formation of job- and competency-based human resource management (HRM) in Japan, drawing on oral histories from the steel industry to trace the path of development.
At Japanese companies in pre-WWII Japan, blue-collar workers and white-collar workers were on opposite sides of a yawning status gap. Status differentiation was common within both camps, as well: certain segments of white-collar workers received better treatment than others, and the same went for blue-collar workers. Academic background was one of the biggest factors in shaping status. Several researchers have probed the dynamics behind the process. A study by Ujihara (1959), a prominent work in the field, explained how four employment tiers at prewar Japanese firms (full-time employees, junior employees, factory workers, and subcontracted day-laborers) corresponded with educational background (university or technical school diploma, middle school diploma, upper primary school diploma, and elementary school diploma). From that standpoint, the prewar personnel framework appears to have operated on an academic background-based status system.
In the aftermath of the war, labor unions began to form in Japan. Contemporary workers, facing the confusion of the postwar socioeconomic climate and the threat of starvation, banded together in hopes of securing sufficient wages to get by and support their families. As inflation ballooned throughout the Japanese economy, the labor unions called for substantial wage hikes. Their demands eventually came to fruition in October 1946, when the "densan" wage system, a setup with an emphasis on providing life security, came into being.
The densan wage system included the component of ability-based pay. Under the agreed framework, ability-based pay varied according to "skill" level and "performance" level. Employers evaluated a worker's "skill" based on the importance and difficulty of the skill in question; the "performance" assessment covered intangibles like the worker's sense of responsibility, processing capacity, compatibility, inquisitiveness, and diligence. The framework took official effect in April 1947 (Labor Dispute Investigation Committee, ed. 1957, 177-180). In the years following the end of the war, then, the term "ability" was a familiar part of the vocabulary on both sides of the employer-employee dynamic.
Besides calls for higher pay, there was another impetus driving the emergence of labor unions: a demand for the democratization of company management. The prevalent status distinctions between blue-collar workers and white-collar workers, which I noted above, created disparities in working conditions ranging from salary and bonuses to promotions and benefits. Labor unions thus criticized the status divide separating the frontline and the back office and insisted on reforms to the existing personnel system, one where academic background essentially reigned supreme. In the effort to work out their differences, both sides of the labor conflict once again turned to the concept of ability as a replacement for academic background at the foundations of various personnel systems.
As previous studies including Nimura (1994) and Hisamoto (1998) have shown, however, the postwar efforts failed to completely bridge the existing status gap between blue-collar and white-collar workers. Saguchi (1990) came to the same conclusion, explaining that the academic background-rooted labor market did change in the postwar context, but the new employment categories of "professional clerk," "engineer," "general clerk/technical assistant," and "factory worker" still correlated with the divisions between university graduates, high school graduates, and middle school graduates. Instead of disappearing, as labor unions were hoping it would, the prewar academic background-based status system simply gave way to a postwar academic background-based status system.
Employers and employees did not create the postwar academic background-based status system of their own volition, it appears. Labor unions had, indeed, made their voices heard in determining the personnel systems in the immediate wake of the war; many of their aims came to fruition. However, the concept of competency failed to take full root until the corporate community began shifting toward competency-based management in the 1960s. In substance, the personnel systems in place at Japanese companies had to maintain their links to academic background.
In a 1993 work, scholar Nitta Michio argued that the "core philosophy behind competency-based management" was, in its simplest terms, "to establish uniform management of all employees on the basis of job and competency" (Nitta 1993, 33). Companies value an employee's competency, which is inherently connected to his or her job; competency is not an independent variable, in other words. The idea of the "job" was originally central to the concept of competency-based management. 1 The personnel systems at Japanese companies followed the same basic evolutionary trajectory, going from the prewar academic background-based status system to the postwar academic background-based status system and then on to the competency-based grade system. Getting past the postwar academic background-based status system hinged on the concept of competency. The competency-based grade system is a practical embodiment of competency-based management, a personnel system that rests on a structure of coherent, consistent competency standards. By ranking all employees within that type of competency-based grade system and implementing both grade promotions and work-role conversions in a systematic fashion, companies were able to establish consistent internal orders that made logical sense to their workforces.
The concept of competency took root thanks in large part to two key background factors. First, companies' use of job-based wages had already given their employees clear ideas of what specific jobs entailed. The second factor had to do with the academic level of incoming employees. Up to that point, companies that hired middle school graduates and university graduates lacked a feasible way to evaluate their employees: the gap in academic knowledge between the two segments was simply too large. By hiring high school graduates to work in their factories, however, companies had much less scholastic disparity to contend with. As their hires had similar academic backgrounds, companies could use the competency yardstick more consistently and effectively.
The following sections look at how job- and competency-based HRM formed in Japan, using the cases of Nippon Steel Corporation (Nippon Steel) and Nippon Kokan Ltd. (Nippon Kokan) to flesh out the analysis. 2 Section 2 introduces various oral histories from the steel industry, which form the basis of my discussion. Sections 3 and 4 then present the cases of Nippon Steel and Nippon Kokan, respectively, and Section 5 concludes the paper with a summary of the findings.
II. Oral histories from the steel industry
The research community has been proactive in collecting and compiling oral histories from the steel industry. With the scope of available resources always growing, scholars have probed the materials to craft numerous reports and analyses. 3 Nippon Steel and Nippon Kokan account for a sizable portion of the oral histories, a preponderance that owes itself to the dominant presence of the two companies in the industry.

Notes:
1. Nōryoku-shugi kanri [Competency-based management], a policy paper by the Japan Federation of Employers' Associations, also states that the "central idea behind competency-based management is managing employees separately, according to their aptitude, from a job-centric perspective." The document defines applying a "job-centric perspective" to individual employee management as "analyzing the competencies requisite to a specific job, assigning employees with said competencies to said job, and determining employee treatment in light of the job and competency in their corresponding duties" (Japan Federation of Employers' Associations Study Group on Competency-based Management, ed., 1969: 20-21).
2. Aoki (2012) took up the case of F Steel Company to examine the development of competency-based management in the steel industry. In his paper, Aoki focused on how companies responded to changes in job types and, secondary to that main question, how they handled imbalances in opportunities for promotions. Other relevant research includes Umezaki (2010) and Umezaki (2014), which use oral histories for a discourse analysis of Nippon Kokan's personnel-system reforms and explore the notion of labor management as a science. For various investigations of job- and competency-based HRM in the electric industry, especially within the Mitsubishi Electric organization, refer to Suzuki (2008), Suzuki (2010), Suzuki (2012), Suzuki (2016), Suzuki (2017a), and Suzuki (2017b).
3. I, personally, have never taken part in creating an oral history from the steel industry. This paper thus uses oral histories from various outside sources.
Nippon Steel oral histories
There are ten existing oral histories on Nippon Steel, comprising interviews with personnel directors, labor-union leaders, and worksite employees.
Nippon Kokan oral histories
The following seven titles are the available oral histories on Nippon Kokan. Like the resources on Nippon Steel, most are interviews with personnel directors and labor-union leaders.
i. Okuda
Japan Federation of Steel Workers Unions oral histories
Authors have also conducted interviews with leaders of the Japan Federation of Steel Workers Unions, an industry-specific labor organization, and compiled the results into the four oral histories below. 4
i. Tekkō
The steel industry, an area that consistently draws considerable interest in the academic community, has inspired many scholars to compile oral histories in the field. Out of that deep reservoir of first-person testimonials comes valuable insight into the industry's various institutions and an understanding of their design, which constitutes the main thrust of this paper.
III. The case of Nippon Steel
The following section examines the formation of job-and competency-based HRM at Nippon Steel.
Growing out of the postwar academic background-based status system
According to Nippon Steel Corporation Company History Editorial Committee, ed. (1981a), the prewar structure at Japan Iron & Steel (the entity that existed before the company dissolved into Yawata Iron & Steel and Fuji Iron & Steel) separated employees into four categories: staffers, junior staffers, factory workers, and contract workers. Like many other firms, then, it would be reasonable to argue that Japan Iron & Steel abided by the prewar academic background-based status system. The company then went on to abandon the existing status divisions after World War II and change gears in 1947, categorizing employees into office workers, engineers, workers, medical staff, and ship workers. When Japan Iron & Steel split into Yawata Iron & Steel and Fuji Iron & Steel in 1950, Yawata Iron & Steel retained the 1947 structure. Problems eventually began to appear, however; the system capped the number of employees in each grade, which prevented some workers, regardless of how superior they may have been in ability, knowledge, skill, or experience, from earning promotions. From the drooping morale among that group of employees to a slackening workplace order and the difficulty of ensuring proper employee treatment, issues abounded. The company ultimately overhauled the personnel system in 1953 to divide employees into another set of categories: clerical workers, technical workers, factory workers, special workers, and field workers. In the postwar context, Yawata Iron & Steel faced the need to build a personnel system around the idea of ability.
According to Komatsu Hiroshi's oral history, however, "Office workers who'd graduated from technical schools or universities got clerical jobs; engineers with technical school or university diplomas went into the 'technical workers' category; and middle school graduates got the factory jobs, which were mostly physical labor," Komatsu recalled. "Your academic background pretty much determined where you'd land" (Komatsu 1982, 110). The concept of competency had yet to instill itself in the Yawata Iron & Steel organization, making it impossible to transcend the prewar academic background-based status system; instead, the structure then in place simply evolved into its postwar equivalent.
It was not until Yawata Iron & Steel merged with Fuji Iron & Steel in 1970, creating Nippon Steel, that the postwar academic background-based status system met its end, if only partially. With the integration came revisions to the personnel systems in place, and the merger agreement stipulated that the new company would work to ensure the fair treatment of its employees by implementing management rooted in job and competency (Komatsu and Tanaka 1982, 74).
Introducing job-based wages
Yawata Iron & Steel instituted a job-based wage system in 1962 (Nippon Steel Corporation Company History Editorial Committee, ed. 1981a, 650-652) for two main reasons: to ensure fair treatment of employees, first and foremost, and streamline its business-operation organization and personnel management. The labor negotiations took a total of 46 meetings and consumed roughly three months before the sides reached an agreement on a setup for job-based wages. Considering that the Yawata Iron & Steel wage system was rife with contradictions and inconsistencies, the labor union chose to fight for corrective measures to the existing framework rather than oppose job-based wages head-on. With the union taking a receptive stance, the company instituted a new job-based wage system that would account for 13.8% of all wages.
Yawata Iron & Steel's job-based wages applied range rates, as Table 1 shows. In addition to reflecting the difference in work quality between new employees and experienced employees, the range-rate approach also served to prevent an employee from spending excessive amounts of time at the same wage point. Under the structure, no job-based wage ever exceeded an advanced wage; the only way for an employee to earn more than the advanced wage was to move up into the next-higher grade. That fact, even despite the range rates, suggests that the job-based wages at Yawata Iron & Steel centered primarily on the job in question.
The year 1962 also saw Fuji Iron & Steel institute job-based wages (Iwabuchi 1964, 64-65). Five years earlier, in April 1957, the company had conducted job evaluations of workers in blue-collar positions and determined some of the evaluation-based bonuses based on the results. That initial foray was a response to young laborers working on the front lines with brand-new equipment; aiming to steer its operations toward the policy of equal pay for equal jobs, the company adopted evaluation-based bonuses as a means of making its wage policy fairer. The process stumbled into a variety of roadblocks, however. First, there was the issue of ensuring company-wide balance: the need to prevent inconsistencies in evaluations for the same jobs at different worksites. Another was the complexity of the evaluation methods. Third, the company had to grapple with opposition from the labor union, which was reluctant to sign on to the new system because executives had failed to provide a thorough explanation of the system when it took effect. The fourth problem revolved around discrepancies with the actual jobs that employees were doing: the tasks and conditions comprising each job changed, creating disconnects in the system. Fifth, technical issues emerged in the job evaluations. To remedy the situation, Fuji Iron & Steel began implementing job evaluations on a company-wide basis in March 1961 and exploring ideas for concise, complication-free evaluation methods that would represent the best fit for the company and satisfy both the management side and the labor side. The company, knowing that input from the employee side would be vital, met with the union to formulate an acceptable setup. After obtaining the union's approval for the new system, Fuji Iron & Steel officially put its job-based wages in place in 1962.
As Table 2 shows, Fuji Iron & Steel's job-based wage structure employed range rates with an adjustment factor that varied according to length of continuous service. Still, the primary determinant in the company's pay scale, like the arrangement at Yawata Iron & Steel, was the job, not duration of service.
[Table note. Source: Komatsu 1963, 14. Note: Grades 1-3 were for jobs in the non-skilled group; 4-6 for jobs in the general group; 7-9 for jobs in the skilled group; 10-12 for jobs in the senior group; 13-17 for jobs in the officer group; and 18-20 for jobs in the leader group.]
Maintaining job-based wages
Yawata Iron & Steel and Fuji Iron & Steel merged to create Nippon Steel in 1970.
Even after the merger, the existing structure for job-based wages stayed firmly in place. For proof, one can simply look at the ratios of job-based wages to total wages across time. In 1963, job-based wages amounted to 15.4% and 13.7% of all wages at Yawata Iron & Steel and Fuji Iron & Steel, respectively. Those same percentages sat at 19.2% and 19.5% in 1969, just a year before the merger. After the integration into Nippon Steel was complete, the composite share of job-based wages actually jumped: the number went from 20.6% in 1970 to 24.6% in 1973 (Nippon Steel Labor Department 1973). While the job-based wages at Yawata Iron & Steel and Fuji Iron & Steel had employed range rates from the outset, job type was the most prescriptive component of both systems. In Fuji Iron & Steel's initial job-based wage setup, the adjustment factor depended on length of service. In 1968, the company then switched the variable for the adjustment factor from service duration to competency. Nippon Steel also created an "additional pay" compensation category, a form of monetary recognition for expertise and skill. The additional pay was entirely separate from job-based wages, however, and applied to a scant few employees: the percentage of job-based wages as of 1973 was 24.6%, but additional pay accounted for just 6.3% of the total (Nippon Steel Labor Department 1973, 4).
Interpreting the oral histories
The personnel system at Nippon Steel evolved from the prewar academic background-based status system to the postwar academic background-based status system and finally on to the competency-based grade system. That the postwar academic background-based status system essentially inherited the legacy of the prewar academic background-based status system, with little substantial change, was the result of a conceptual hole: the idea of competency had not yet lodged itself in the contemporary consciousness. The process of evolving out of the postwar academic background-based status system required the concept of competency, and the concept of competency was able to establish that crucial foundation due to two contributing factors. The first was the presence of job-based wages, which had made the nature of specific jobs more salient and easier to grasp. Second, hiring high school graduates to work in the field gave companies a relatively uniform set of employees in terms of academic background, and that enabled assessments on competency criteria. Whereas the middle school graduates and university graduates that companies employed came from disparate academic backgrounds, which formed the basis of the existing systems, high school graduates came into their jobs with similar levels of knowledge and thereby led employers to apply different evaluation criteria.
Interviews with Fukuoka (2002 and 2010) hint at the importance of that first factor: how implementing job-based wages had helped clarify the concept of the job itself.
"The biggest thing that job-based wages did was create the whole concept of the 'job.'" (Fukuoka 2010, 9) "With job-based wages, people could see the idea of competency through the 'job' filter. . . On the factory floor, the concept of competency gave workers a way of understanding their abilities in the context of their jobs-and that was revolutionary. I'd say it played a big role in driving massive innovation in the industry, in fact." (Fukuoka 2002, 92-93) The other factor was academic background. From 1952 to April 1955, Yawata Iron & Steel implemented a no-new-hire policy due mostly to the company's slagging production after the Korean War. When it resumed recruitment in 1955, however, the company began hiring more and more high school graduates for jobs on the technical side of operations. The relative weight of high school graduates in the entire hiring picture stayed sizable for years: they accounted for 416 of the 589 new recruits in 1955, 1,031 of 1,362 in 1956, and 1,043 of 1,234 in 1957. High school graduates brought an influx of high-caliber talent into the company, with many excelling in their careers (Komatsu and Tanaka 1982, 61).
According to Komatsu (1982), the motivations behind the effort to hire more high school graduates lay largely in a growing need for sophisticated skills on the factory floor.
"Up to the time between the mid-1950s and the mid-1960s or so, we tended to hire elementary school graduates and middle school graduates to fill out our engineer and factoryworker positions. From that point on, though, it got harder and harder to get by on those skill levels; it took employees with a high school background to operate a lot of the electrical equipment, power equipment, and other mechanical facilities. The need for skilled labor was fading, and we switched over to hiring high school graduates." (Komatsu 1982, 112) As Komatsu's words imply, the company's decision to hire high school graduates for on-site positions came about because of changing technical conditions. That shift had direct consequences on how administrators evaluated employees. Given that middle school graduates and university graduates came from decidedly different academic backgrounds, it was virtually impossible to evaluate everyone against identical ability criteria. By focusing its hiring scope to high school graduates, the company gained an inflow of recruits with similar academic backgrounds-and could thus implement evaluations in a consistent fashion around the concept of competency.
IV. The case of Nippon Kokan
For another exploration of how HRM incorporated the elements of job and competency, the next section examines the case of Nippon Kokan.
Growing out of the postwar academic background-based status system
According to the oral history of Orii Hyūga (1973, Chapter 3), Nippon Kokan was aware of the need to do away with the prewar academic background-based status system. However, the traditional split between white-collar workers and blue-collar workers was still in place after the war. Executives, harboring concerns about disrupting the existing order, continued to deny the labor union's demands for an end to the status divide between white-collar and blue-collar workers.
However, a series of facility-rationalization initiatives spawned a variety of complications at Nippon Kokan. The streamlining efforts made the jobs of white-collar workers and the jobs of blue-collar workers increasingly similar in nature, first of all. In terms of academic background, blue-collar workers were also nearing their white-collar counterparts. Third, the imbalanced labor circumstances among blue-collar workers created inequalities in promotion opportunities. Considering these and other issues, dissatisfaction with the existing status differentiation was growing, and survey responses made the sentiment clear. According to the results of a September 1963 questionnaire, 64% of all employees supported the elimination of the "staffer" and "factory worker" denominations. That sentiment was particularly strong in the ranks of factory workers, 73% of whom voiced their displeasure with the categorization. The mood pervading the workforce prompted the company to discontinue its use of the "staffer" and "factory worker" designations in January 1964 and instead refer to all workers as "employees." Just over two years later, in April 1966, Nippon Kokan went on to implement a new, competency-based personnel system. The new personnel system sought to rectify the significant status barriers between "staffer" and "factory worker", first of all, and use competency criteria to ensure both fair employee treatment and effective management of the promotion protocol. Under the framework, the company classified employees not by job type but by competency, and competency became the determining factor for a host of other conditions. The company began using competency level to assign employees to different grades, and the ability to base employee treatment on competency helped the company lay out clearer standards and rules for earning promotions.
As was the case at Nippon Steel, the prewar academic background-based system at Nippon Kokan thus survived despite criticism from its detractors and simply evolved into its postwar counterpart with all its basic components intact. Not until the second half of the 1960s did the company gradually start to grow out of the academic-background mold.
Introducing job-based wages
Nippon Kokan had already installed a single-rate structure for job-based wages in April 1963. Under the system, white-collar employees actually received competency-based rates, not job-based wages. Instead of referring to the compensation as "job-based wages," which would be technically inaccurate, the company therefore used the phrase "work wages" to cover both white- and blue-collar employees. The "work wages" were a comparatively small part of the overall wage system, however: regular wages accounted for 47.5% of the aggregate, performance wages 34.8%, and work wages 13.6%. Rounding out the total were family wages and special-work wages at 2.7% and 1.5%, respectively (Imada 1963, 3).

Orii (1973, Chapter 4) recalled that Nippon Kokan had begun discussing a way of unifying its job-evaluation methods on a company-wide scope in June 1961, when projects to build new factory facilities prompted inter-worksite employee transfers and thereby led to some confusion on the job-evaluation front. Out of those discussions came an idea for a model, which the company then tested out in actual assessments. Based on the results, the company submitted a proposal for the new job-evaluation system to the labor union in March 1962. Management knew that it would have to engage fully with the labor side and win approval for the new system, as there was a diversity of opinions on the matter and the potential for conflicts of interest between different employees was real. The company headed into negotiations ready and willing to incorporate whatever input it could from the labor union representatives.
After receiving the proposal from management, the labor union spent three months mulling over the terms and eventually came up with a response. The official statement from the General Council of Trade Unions in Japan, then a national umbrella organization, raised several issues with Nippon Kokan's proposal. First were concerns that job-based wages would lead to tougher individual control over workers. Another sticking point was Nippon Kokan's plan to weight evaluations in favor of "contributions to the company," which the General Council saw as a possible means of tightening worker bondage to the company, not a reliable way of assessing an employee's actual performance of his or her job. Having cited several shortcomings, the General Council announced that it would continue to oppose the measure. Echoing its upper-tier organization, the General Council, the Nippon Kokan labor union also adopted a critical stance on the plan for job evaluations and job-based wages. The group was not looking for an all-or-nothing battle, however; leaders opted for the more realistic approach of seeking revisions that would benefit union members, if only slightly, rather than calling for a complete overhaul of the proposal.
Subsequent negotiations between management and labor representatives went on for nearly two months. Upon eventually reaching a provisional agreement with the labor union in August 1962, management began making the necessary adjustments to job evaluations across the organization. The sides then came to a final agreement in March 1963.
Converting job-based wages into competency-based rates
In 1967, Nippon Kokan converted the existing work wages into a sliding arrangement that would pay employees according to their internal grades, their competency levels, in other words (Orii 1973, Chapter 4). As I noted earlier, the company eliminated the "staffer" and "factory worker" division in its personnel system to create a system that managed all of its employees in a standardized system of job types and competency levels. When the management and labor sides reached a provisional agreement on the personnel system in the spring of 1966, the company set to renovating the work-wage setup.
The existing work wages had been in place for three years, over which time the system began to show symptoms of a critical flaw: an employee might demonstrate a clear improvement in competency, but his or her work wage could very well stay the same. Obviously, the concern was that the framework might sap workers of their motivation for self-development. The proportion of work wages to total wages had risen since the system took effect, as well. Management knew it had to make adjustments so that the system would reward workers for competency enhancements, regardless of whether or not they moved up to a different job grade. That was the impetus behind the revisions to the work-wage system, which management and labor representatives agreed to in the spring of 1967. Table 3 shows how, within each grade, there were multiple pay points for different job types and competency levels.
In shifting away from its single-rate framework for job-based wages and adopting competency-based rates, the company understood the merits of breaking down the staffer-factory worker barrier and extending the benefits of competency-based wages to blue-collar workers. The blue-collar segment was bringing in employees with higher-level academic backgrounds, which gradually made it impossible to maintain the existing divide in management approaches. The changing demographics exposed a clear need to align the management techniques for blue-collar workers and their white-collar counterparts more closely. For Nippon Kokan, the effort to convert work wages into competency-based rates seems to have aspired to meeting that need.
Interpreting the oral histories
Okuda Kenji, then a personnel staff member at Nippon Kokan, recalled how the system took shape. "When you make systems like our job-based wage structure, you can lay out the competency requirements for each competency level in so much detail: the equipment and machinery an employee needs to be able to operate, for example, and the repair skills an employee needs to have," he said. "That means that if you boost your competency, you get a better competency evaluation; it's simple" (Okuda 2004, 57). As Okuda's statement suggests, the idea of having a "competency" for performing a certain "job" was becoming part of the common consciousness at Nippon Kokan.
Nippon Kokan also began hiring high school graduates to fill factory-floor positions in 1959, thus infusing both the white- and blue-collar segments with workers from the same basic high school background. "We'd bring a high school graduate into the factory one day and give a high school graduate a white-collar job the next," Okuda remembered. "Even though we knew there'd be some confusion in terms of personnel treatment, we did it because we had to. We started seeing problems right away, just as we'd expected. Recruits from the exact same high school would land on both sides: some became blue-collar workers, and some became engineers. They'd see their friends and get to talking, you know, about how it didn't make much sense the way the jobs shook out. When you get to that point, you realize that you're probably going to have to unify the whole employee system. Sometimes, there's just no way around it; the entire setup needs to go. We got to talking about possible solutions and, in 1964, decided to eliminate the 'staffer' and 'factory worker' status distinction altogether" (Okuda 2004, 242). Agitation in the workplace, therefore, opened management's eyes to important issues and put Nippon Kokan on course to grow out of its postwar academic background-based status system.
Behind that development was a series of problems that Nippon Kokan encountered after several facility-rationalization initiatives. The efforts made the staffers' and factory workers' jobs more similar, first of all. Factory workers also demonstrated stronger academic backgrounds, approaching their staffer counterparts. Third, factory workers did their jobs within an imbalanced structure that created an uneven playing field for promotions. "We started hiring high school graduates for blue-collar jobs right around the time we launched a cutting-edge hot strip mill (capable of producing thin sheets at high speeds) at our new Mizue Plant," Okuda recalled. "The technology was sophisticated, much more advanced than what we'd been using, so we needed capable workers. If you didn't have the skills that a high school graduate brought to the table, you simply couldn't run the machinery" (Okuda 2004, 62). Okuda's statement points to the larger significance of recruiting high school graduates for blue-collar jobs: it enabled Nippon Kokan to focus on the idea of competency, which gave the company a consistent conceptual platform for evaluating all its employees across the board. At Nippon Kokan, the presence of blue-collar workers with high school backgrounds created the need to reassemble the company's internal order. By acting on that need, the company moved past the evolutionary step of the postwar academic background-based status system, but doing so, of course, required the concept of "competency" to be firmly in place.
Nippon Kokan, unlike Nippon Steel, transformed its job-based wages into competency-based rates. The seeds of that change, however, had already begun to take root before the job-based wage system emerged. Okuda's oral history again provides an illuminating perspective: "The job surveys we'd been doing for so long were finally starting to have an impact on actual wages. From our internal discussions, we started to understand that we needed a more fluid system, one that made it easier to move people up and down. Sticking to a fixed structure wasn't going to get us anywhere." (Okuda 2004, 222) As Okuda's words suggest, the HRM policy at Nippon Kokan had been leaning in the direction of competency-based rates from the beginning. When worries about single-rate job-based wages began to surface three years after the system went into operation, the company quickly moved to implement competency-based rates in place of the existing job-based wages. Embracing different HRM policies, Nippon Steel and Nippon Kokan would see their personnel systems develop in different directions.
V. Conclusion
This paper traced the formation of job- and competency-based HRM in Japan through the cases of Nippon Steel and Nippon Kokan. At both companies, the personnel systems evolved from the prewar academic background-based status system to the postwar academic background-based status system and finally the competency-based grade system. The process of shedding the postwar academic background-based status system required the concept of competency, which established its foundation due to two contributing factors. First, the existence of job-based wages brought the nature of specific jobs into clearer light. Second, recruiting high school graduates for blue-collar jobs created uniformity among the workforce in terms of academic background, and that enabled assessments on competency-based, not academic, criteria. Middle school graduates and university graduates came from altogether different academic backgrounds, but high school graduates came in with similar levels of knowledge, a prerequisite for applying work-oriented criteria. Despite those similar trends, Nippon Steel and Nippon Kokan would then embark on different paths in developing their respective personnel systems. Whereas Nippon Steel essentially perpetuated its job-based wage structure, Nippon Kokan converted its existing job-based wages into competency-based rates, and the difference emanated from the companies' HRM policies.
Looking ahead, there are numerous ways to deepen and enrich the findings above. Advocates of competency-based management began to make their voices heard in the 1960s, but the idea only started to jell into actual competency-based grade systems in the 1970s. Even then, the systems at many companies simply ignored the element of the "job" itself. As a result, Japanese firms ran with the idea of performance-based pay in the 1990s and set to overhauling their competency-based grade systems, a process that observers tend to see as a failure. Looking at the circumstances from a different angle, however, that failure did not mean that Japanese companies remained in the same framework of competency-based management in the early 2000s. Currently, the role-classification system, a practical embodiment of performance-based management, is becoming the standard framework at Japanese companies. Future research could analyze the development of performance-based management and the role-classification system from a historical perspective.
Nonlinear dispersion in wave-current interactions
Via a sequence of approximations of the Lagrangian in Hamilton's principle for dispersive nonlinear gravity waves we derive a hierarchy of Hamiltonian models for describing wave-current interaction (WCI) in nonlinear dispersive wave dynamics on free surfaces. A subclass of these WCI Hamiltonians admits \emph{emergent singular solutions} for certain initial conditions. These singular solutions are identified with a singular momentum map for left action of the diffeomorphisms on a semidirect-product Lie algebra. This semidirect-product Lie algebra comprises vector fields representing horizontal current velocity acting on scalar functions representing wave elevation. We use computational simulations to demonstrate the dynamical interactions of the emergent wavefront trains which are admitted by this special subclass of Hamiltonians for a variety of initial conditions. In particular, we investigate: (1) A variety of localised initial current configurations in still water whose subsequent propagation generates surface-elevation dynamics on an initially flat surface; and (2) The release of initially confined configurations of surface elevation in still water that generate dynamically interacting fronts of localised currents and wave trains. The results of these simulations show intricate wave-current interaction patterns whose structures are similar to those seen, for example, in Synthetic Aperture Radar (SAR) images taken from the space shuttle.
Introduction
The sea-surface disturbances whose trains of curved wavefronts trace the propagation of internal gravity waves on the ocean thermocline hundreds of meters below the surface may be observed in many areas of strong tidal flow. For example, the passage of the Atlantic Ocean tides through the Gibraltar Strait produces trains of curved sea-surface wavefronts expanding into the Mediterranean Sea. Likewise, the passage of the Pacific Ocean tides through the Luzon Strait between Taiwan and the Philippines produces trains of curved sea-surface wavefronts expanding into the South China Sea. These coherent trains of expanding curved wavefront disturbances are easily observable because they are strongly nonlinear. Their sea-surface signatures in the South China Sea may even be seen from the Space Shuttle [1,33,39], as prominent crests of wave trains move in great arcs hundreds of kilometres in length and traverse sea basins thousands of kilometres across. Figures 1 and 2 show SAR images of the signatures of internal waves on the sea surface (source: https://earth.esa.int/web/guest/missions/esa-operational-eo-missions/ers/instruments/sar/applications/tropical/-/asset_publisher/tZ7pAG6SCnM8/content/oceanic-internal-waves).

Multi-layer modelling of internal waves. The emission of surface effects near the Gibraltar Strait observed in the SAR images seen in figure 1 is a short-term effect. In contrast, the surface effects seen in SAR images near Dong Sha Atoll in the South China Sea in figure 2 are long-term effects involving internal wave propagation over hundreds of kilometers. The short-term behaviour of internal waves has been modelled with some success using the well known multi-layer Green-Naghdi (MGN) equations [30]. However, longer-term modelling of these waves has been problematic, because MGN and its rigid-lid version, the Choi-Camassa (CC) equation [9,10], were both shown in [32] to be ill-posed in the presence of either bathymetry or shear. For example, even the shear induced by a single travelling wave causes the linear growth-rate of a perturbation of MGN or CC solutions to increase without bound as a function of wave number.
Until recently, the ill-posedness of MGN or CC solutions had prevented convergence under grid refinement of the numerical simulations of these waves over long times, because the cascade of energy to smaller scales would eventually build up at the highest resolved wave number. Regularisation was possible by keeping higher-order terms in an asymptotic expansion, as in for example [2]. However, such methods tended to destroy the Hamiltonian property of the system and also degrade its travelling wave properties. Moreover, if one is to consider the problem of wave generation and propagation at sea, one must consider the effects of bathymetry and shear, both of which may induce instability. Thus, the MGN equations had to be modified to make them well-posed. A recent review of the various approaches to regularising the MGN is given in [15]. The analysis in [15] focuses on the Camassa-Holm regime of asymptotic expansion for nonlinear shallow water waves defined in [11]. The present paper also focuses on this asymptotic expansion, as realised in the regularised multi-layer model of [14], denoted ML√D. The ML√D Hamiltonian system remains well-posed in the presence of shear, and its solutions agree with those of the MGN system in the absence of shear [14]. With these properties in mind, we shall choose the ML√D Hamiltonian system as the basis for the present work.
Aims of the present paper. The overall aim of the present paper is to model the internal-wave surface signatures seen by SAR images such as those in figures 1 and 2. For this purpose, the investigations of the present paper will focus on the theoretical and computational simulation properties of the solutions of the single-layer case of ML√D, known as 1L√D. The 1L√D model possesses three well-known variants. These are the two-component Camassa-Holm equation (CH2) and the modified CH2 equation (ModCH2) with either the $H^1$ or the $H_{\mathrm{div}}$ kinetic energy norm. We will derive these variants and then focus computational simulations on the ModCH2 equation with the $H^1$ kinetic energy norm, which relates to previous work in [17,18,26,27,28].
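For orientation, the $H^1$ kinetic energy norm referred to here is of the standard form (this display is our gloss, written with a length scale $\alpha$, and is not one of the paper's numbered equations):
$$ \|\mathbf{u}\|_{H^1}^2 \,=\, \int_{\mathbb{R}^2} \Big( |\mathbf{u}|^2 + \alpha^2\, |\nabla \mathbf{u}|^2 \Big)\, \mathrm{d}^2x \,, $$
so that the associated momentum is obtained from the velocity by a Helmholtz-type operator $1 - \alpha^2\Delta$.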
We are inspired by the Synthetic Aperture Radar (SAR) images of the internal-wave signatures of wavefronts on the sea surface shown in figure 1 and figure 2. As mentioned earlier, these wavefronts are known to be driven by internal waves propagating on the interfaces of the stratified layers lying beneath the sea surface [1,33,39]. However, the SAR data only contains the wavefront signatures of the internal waves on the sea surface, as seen from a distance overhead by the Space Shuttle, for example. This means the below-surface processes of their formation cannot be directly observed. To describe the interactions among these wavefront signatures on the surface, we seek a minimal description of their dynamics which involves only observable quantities. This minimal model is based on the single-layer version of ML√D, which accounts for both kinetic and potential energy. Specifically, we seek to model the formation and dynamics of trains of wavefronts arising from an initial impulse of momentum, or from an initial gradient of surface elevation. We also seek to derive the dynamics of their collisions, including their nonlinear reconnections. In fact, the model we seek would treat the data only as the motion of curves in two dimensions which make optimal use of their kinetic and potential energy over a certain horizontal interaction range. In particular, the minimal model would not attempt to describe the interactions among internal waves beneath the surface which are believed to produce these wavefronts.
To formulate such a minimal model of wavefront dynamics, we will derive a sequence of approximate equations in the so-called Camassa-Holm regime of nonlinear wave dynamics [15]. Starting from the single-layer case (1L√D) we will derive the 2D version of the two-component Camassa-Holm equation (CH2). The 1D version of CH2 is well-known for its completely integrable Hamiltonian properties. However, here we will be working in 2D. From CH2, we will obtain the modified two-component Camassa-Holm equation (ModCH2). In 1D, ModCH2 possesses emergent weak solutions supported on points moving along the real line [28,26]. In the 2D doubly periodic planar case treated here, ModCH2 possesses emergent weak solutions supported on smooth curves embedded in 2D. However, the simulations here do not always capture the singular solutions, indicating that the formation of the singular solution may occur quite slowly. The moving curves in the simulations are meant to model the dynamics of the sea-surface signature wavefronts driven by internal waves interacting below, as seen in the SAR image data.
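For orientation, the singular momentum-map solutions referred to here take the schematic form (our gloss of the EPDiff-type ansatz; the precise statement, including the wave-elevation component, is the one given in Theorem 2.13, equation (2.46)):
$$ \mathbf{m}(\mathbf{x},t) \,=\, \sum_{i=1}^{N} \int \mathbf{P}_i(s,t)\, \delta\big(\mathbf{x} - \mathbf{Q}_i(s,t)\big)\, \mathrm{d}s \,, $$
in which the momentum is supported on $N$ curves $\mathbf{Q}_i(s,t)$ embedded in the plane, each parametrised by $s$ and carrying a momentum density $\mathbf{P}_i(s,t)$.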
Computational simulations of ModCH2 in 2D will be used here to display the various types of interactions among these emergent wave profiles in 2D. These simulations show wave trains with both singular and nonsingular profiles emerging from smooth initial conditions. This emergence is followed by reconnections among the wavefronts during their nonlinear interactions. Some of the intricate patterns seen during these 2D simulations turn out to be strikingly similar to those seen in the SAR images shown in figures 1 and 2.
Plan of the paper. The plan of the paper is as follows.
Section 1 has provided the problem statement and the main goal of the paper. Namely, we aim to formulate a minimal model of the dynamical wavefront behaviour seen in SAR images such as those shown in figures 1 and 2. We have listed the desired aspects of such a model. These desiderata have already been accomplished in deriving the ML√D model, which provides a multi-layer well-posed description of internal waves in [14]. Thus, this section has set the context for what follows in the remainder of the paper's investigation of single-layer wavefront interaction dynamics.

Section 2 begins by showing that a certain approximation of the 1L√D system easily yields the Hamiltonian two-component Camassa-Holm equation (CH2) in 2D. In one spatial dimension (1D), the CH2 equation is known to be completely integrable by the isospectral method [8,16]. Its 2D behaviour will be discussed here, briefly but not extensively, because our main goal is the study of a further approximation. The further approximation yields the modified CH2 system (ModCH2). As discovered in [28], the solution ansatz for the dominant behaviour of ModCH2 is given by the singular momentum map discussed in Theorem 2.13, equation (2.46), in any number of spatial dimensions. Its 1D singular solutions were shown to emerge and dominate the ModCH2 dynamics arising from all smooth, confined, initial conditions discussed in [26]. As we shall see, the 2D ModCH2 solutions simulated here will not always capture the sharp peaks of the singular solutions.

The Euler-Poincaré formulation of the 2D 1L√D equation follows from Hamilton's principle $\delta S = 0$ with $S = \int_0^T \ell(\mathbf{u}, D)\,\mathrm{d}t$ for the 1L√D Lagrangian $\ell(\mathbf{u}, D)$ defined in equation (2.1). Here, we denote fluid velocity as $\mathbf{u}$, constant mean layer thickness as $d$, bathymetry as $b(\mathbf{x})$ with $\mathbf{x} = (x, y)$, and the total depth as $D$, the last of which satisfies the advection (continuity) equation
$$ \partial_t D + \nabla\cdot(D\,\mathbf{u}) = 0 \,. \qquad (2.2) $$
In the 1L√D Lagrangian, the term representing kinetic energy of vertical motion is proportional to the Fisher-Rao metric, which appears in probability theory. See e.g., [3] for a fundamental discussion of the Fisher-Rao metric and other generalised information metrics in probability theory. The Fisher-Rao metric is also important in information geometry [37]. An equivalent form of the 1L√D Lagrangian in terms of spatial gradients is given in (2.4).
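As a brief gloss (ours, not one of the paper's displays): for a density $D$, the Fisher-Rao metric evaluated on a variation $\delta D$ takes the schematic form
$$ g_D(\delta D, \delta D) \,=\, \int \frac{(\delta D)^2}{D}\, \mathrm{d}^2x \,, $$
and the elementary identity $(\partial_t \sqrt{D})^2 = (\partial_t D)^2/(4D)$ shows why a kinetic-energy term built from the material rate of change of $\sqrt{D}$, as the name 1L√D suggests, integrates to a quadratic form of exactly this type.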
In standard form for fluid dynamics, the motion equation for 1L√D is the equation given in (2.3).

Remark 2.1 (An alternative form of the 1L√D Lagrangian and energy conservation).
Upon substituting the continuity equation (2.2) into the 1L√D Lagrangian in (2.1), one finds an equivalent Lagrangian, written as the difference of the kinetic and gravitational potential energies, in which the symmetric, positive-definite operator Q_op(D) is defined by its action on the velocity vector in (2.5). After an integration by parts, the conserved sum of the kinetic and potential energies may be expressed as in (2.6). The conserved total energy in (2.6) can be regarded as a metric on the space of smooth vector fields and densities over R², (u, D) ∈ X(R²) × Den(R²). Hence, one can write the total energy for 1L√D in (2.6) as a squared norm which defines the metric (2.7) on X(R²) × Den(R²). The Lie-Poisson Hamiltonian structure of the 1L√D model in equations (2.2) and (2.3) with energy (2.6) is discussed along with two other models to follow in remark 2.10. Substituting the Lie-derivative relation (2.8) into the motion equation (2.3) yields the Kelvin circulation theorem for any material loop c(u) moving with the fluid flow.
The 1L√D model admits potential flows. The motion equation for 1L√D in (2.3) and the vector calculus relation in (2.8) imply that if curl u = 0 initially, then it will remain so. In this case, the corresponding velocity potential φ(x, t) for curl-free flows, given by u = ∇φ, satisfies the Bernoulli equation (2.9).

Proof. The Euler-Poincaré equation for a Lagrangian functional ℓ(u, D) is given by [4,25]

    ∂_t (δℓ/δu) + £_u (δℓ/δu) = D ∇(δℓ/δD),

where £_u denotes the Lie derivative along the vector field u. The corresponding variational derivatives of the Lagrangian functional ℓ_{1L√D}(u, D) in (2.1) are given in (2.14). We now set D = d in the kinetic energy terms only, to find the CH2 Lagrangian ℓ_CH2 in (2.15), cf. [31]. For a comprehensive survey of the role of the Camassa-Holm equation in the wider context of nonlinear shallow water equations, see [29].
Remark 2.6 (Solving for velocity u from momentum m with the grad-div operator in (2.17)).
In equation (2.17), the momentum m is defined in terms of the velocity u through a grad-div Helmholtz operator, so u may be recovered from m by convolution with the corresponding Green function. However, it is also useful to verify that the solution for the velocity u from the CH2 momentum (m/d) can be implemented directly, without solving for the Green function explicitly.
The velocity u can be obtained from the momentum m via their linear operator relation in equation (2.17). For this, one begins with the Hodge decomposition for the velocity. Namely,

    u = curl A + ∇φ.    (2.18)

Here, the vector potential A is divergence-free, div A = 0, has zero mean, ∫_D A d²x = 0, and satisfies Neumann boundary conditions, ∂_n A|_∂D = 0. The scalar potential φ vanishes at the boundary. With these conditions, the vector and scalar potentials each satisfy the Poisson equations in (2.19). Taking the curl and the divergence of the defining relation in (2.17) for the momentum m in terms of the velocity u then yields the inversion formulas (2.20) for the velocity potentials. Inverting the relations in (2.20) for the vector and scalar velocity potentials A and φ then yields the velocity via the Hodge decomposition in (2.18).
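On the doubly periodic domain used for the simulations below, this inversion can also be carried out directly per Fourier mode. The following is a minimal NumPy sketch of that spectral route (an illustration under the dimension-free relation m = d(1 − α²∇div)u, cf. (2.42) below, not the production code used for the simulations reported here); it uses the Sherman-Morrison identity to invert the 2×2 matrix I + α² k kᵀ at each wavevector k.

```python
import numpy as np

def velocity_from_momentum(mx, my, alpha, d=1.0):
    """Invert m = d*(1 - alpha^2 grad div) u on a doubly periodic [0, 2*pi)^2 grid.

    In Fourier space the operator acts as (I + alpha^2 k k^T) on each mode u_hat(k),
    and (I + alpha^2 k k^T)^{-1} = I - alpha^2 k k^T / (1 + alpha^2 |k|^2).
    """
    n = mx.shape[0]
    k = 2 * np.pi * np.fft.fftfreq(n, d=2 * np.pi / n)   # integer wavenumbers
    kx, ky = np.meshgrid(k, k, indexing="ij")
    mxh, myh = np.fft.fft2(mx) / d, np.fft.fft2(my) / d
    kdotm = kx * mxh + ky * myh
    factor = alpha**2 * kdotm / (1.0 + alpha**2 * (kx**2 + ky**2))
    uxh, uyh = mxh - kx * factor, myh - ky * factor
    return np.fft.ifft2(uxh).real, np.fft.ifft2(uyh).real
```

The same routine extends to 3D after enlarging the wavevector arrays, since the Sherman-Morrison inverse holds in any dimension.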
A geometric way of writing the Euler-Poincaré equation for any Lagrangian functional ℓ(u, D) in n dimensions arises by regarding the fluid velocity as a vector field, denoted u, the depth as an n-form, denoted D, and its dual momentum density as a 1-form density, denoted m := δℓ/δu; in coordinates, u = u^j ∂_j, D = D dⁿx, and m = m_i dx^i ⊗ dⁿx [4,25]. For the Euler-Poincaré variational principle, one also assumes a natural L² pairing, ⟨·,·⟩, defined in (2.22). In this framework, the Euler-Poincaré equation (2.12) for a Lagrangian functional ℓ(u, D) and the auxiliary equation for the advection of the density D are given by [4,25]

    (∂_t + £_u) m = D d(δℓ/δD),    (∂_t + £_u) D = 0,

where D is a density and δℓ/δD is a scalar function, according to our L² pairing in (2.22).
The Kelvin circulation theorem in this framework is then proved as follows,

    d/dt ∮_{c(u)} m/D = ∮_{c(u)} (∂_t + £_u)(m/D) = ∮_{c(u)} d(δℓ/δD) = 0.

In particular, according to (2.16) and (2.17), the Kelvin circulation theorem for CH2 with the Lagrangian ℓ_CH2 in (2.15) is given by (2.25). For CH2 in 2D, applying the Stokes theorem to the Kelvin theorem (2.25) implies conservation along flow trajectories of a potential vorticity σ, stated in (2.26). This means, in particular, that if σ vanishes initially, it will continue to do so.
Moreover, equation (2.26) and the continuity equation in (2.2) imply preservation of the integral quantities (enstrophies) given by

    C_Φ = ∫ D Φ(σ) d²x,    (2.27)

for any differentiable function Φ.
For CH2 in 3D, applying the Stokes theorem to the Kelvin circulation conservation law for the CH2 model (Kel-CH2) implies the advection of a potential-vorticity vector field, σ, whose components are given in (2.28). In 3D, the CH2 equation (2.26) and the continuity equation in (2.2) imply preservation of the integral quantity (helicity) given by Λ = ∫_{B_t} v · curl v d³x in (2.29), where v := m/D is the circulation velocity appearing in the Kelvin theorem. The helicity integral in (2.29) is taken over any volume (blob) B_t = φ_t B_0 of fluid moving with the flow, φ_t, with the outward normal boundary condition curl v · n̂ = 0 on the surface ∂B.
Deriving the Modified 2-component Camassa-Holm equation (ModCH2) in 2D and 3D
To derive the ModCH2 model equations, we modify the potential energy terms in the Lagrangian (2.15) for the CH2 model, as written in (2.30), where convolution with the Green function G_{Q_op(d)} acts as a smoothing operator in the potential energy term of the ModCH2 Lagrangian. The (div u)² term in (2.30) effectively replaces the vertical kinetic energy by the divergence of the horizontal velocity.
Proof. The corresponding variational derivatives of the Lagrangian functional ℓ_ModCH2(u, D) in equation (2.30) are given in (2.32). The Euler-Poincaré equation (2.12) for the variational derivatives in (2.32) yields the ModCH2 equation in (2.31).
In summary, the velocity u in the ModCH2 equation (2.31) is obtained from the momentum m at each time step by inverting the grad-div Helmholtz operator in (2.32), as explained in remark 2.6. This procedure is valid in both 2D and 3D.

Remark 2.9 (Kelvin theorem, conservation laws and an additional property for ModCH2).
In combination with the continuity equation for the total depth D in (2.2), the Kelvin theorem and conservation laws for ModCH2 may be obtained as analogues of those for CH2 in both 2D and 3D in remark 2.7. However, ModCH2 was introduced in [26] to provide an additional structural feature which goes beyond the CH2 equation. Namely, ModCH2 is both a geodesic equation and an Euler-Poincaré equation. In the next section, we will discuss the implications of these dual properties.
In addition to possessing the same Kelvin theorem and all of the corresponding conservation laws for CH2 discussed in remark 2.7, which accompany its derivation as an Euler-Poincaré equation, the Lie-Poisson Hamiltonian formulation of the ModCH2 equation places it into a class of equations which admit singular momentum map solutions in any number of dimensions. This is the subject of the next section.
Singular momentum map solutions for Modified CH2 (ModCH2)
The purpose of this section is to explain how the dual properties of ModCH2 in being both a geodesic equation and an Euler-Poincaré equation endow it with singular momentum map solutions in any number of dimensions. That is, ModCH2 admits singular solutions that are represented as a sum over Dirac deltas supported on curves in the plane, or surfaces in three dimensions, which are advected by the flow of the currents which they themselves induce throughout the rest of the domain.
Specifically, the singular solutions are given in Theorem 2.13 by

    m(x, t) = Σ_{i=1}^{N} ∫_S P_i(s, t) δ(x − Q_i(s, t)) ds,
    D(x, t) = Σ_{i=1}^{N} ∫_S w_i(s) δ(x − Q_i(s, t)) ds,    (2.33)

where s is a coordinate on a submanifold S of Rⁿ, exactly as in the case of EPDiff. For R², the case dim S = 1 yields fluid variables supported on filaments moving under the action of the diffeomorphisms, while for R³ the case dim S = 2 yields fluid variables supported on moving surfaces. The geometric setting of the peakon solutions of the Camassa-Holm equation and its extension to pulson solutions of EPDiff was established in [25]. Following the reasoning in [23,28], one may interpret Q_i in (2.33) as a smooth embedding in Emb(S, Rⁿ) and P_i = P_i·dQ_i (no sum) as the canonical 1-form on the cotangent bundle T*Emb(S, Rⁿ) for the i-th smooth embedding.
In a sense, the singular ModCH2 wave-currents are analogues for nonlinear wave dynamics of point vortices in 2D and vortex lines in 3D for Euler fluid dynamics. However, unlike point vortices and vortex lines, which do not emerge spontaneously in Euler fluid dynamics, the singular ModCH2 wave-currents can emerge spontaneously from smooth, spatially confined initial conditions.
Remark 2.10 (Shared Lie-Poisson Hamiltonian structure). As we have seen, all of the models 1L√D, CH2 and ModCH2 yield semidirect-product Euler-Poincaré equations in the class EP(Diff Ⓢ F) in equation (2.12).
Here, F comprises the smooth scalar functions of the densities D = D dⁿx ∈ Den(Rⁿ), and Ⓢ denotes the semidirect-product action [4,25].
In n dimensions, the corresponding Lie-Poisson Hamiltonian equations can be obtained from the Legendre transformation,

    h(m, D) = ⟨m, u⟩ − ℓ(u, D),    with m = δℓ/δu.    (2.34)

The variational derivatives of the Hamiltonian are given by δh/δm = u and δh/δD = −δℓ/δD. Under the Legendre transformation (2.34), the semidirect-product Lie-Poisson Hamiltonian equations corresponding to the Euler-Poincaré equations in (2.12) can be written in three-dimensional matrix component form as [4,25]

    ∂/∂t ( m_i )  =  − ( ∂_j m_i + m_j ∂_i    D ∂_i ) ( δh/δm_j )
         ( D   )       ( ∂_j D                0     ) ( δh/δD   )    (2.36)

In (2.36), one sums over repeated spatial component indices, i, j = 1, 2, 3, for each of the Lagrangians ℓ_{1L√D}, ℓ_CH2, and ℓ_ModCH2, and all three motion equations share the continuity equation for the total depth D in (2.2). When the Lie-Poisson matrix form (2.36) is extended to n dimensions, the 1L√D equations describe geodesic motion with respect to the metric Hamiltonian in (2.38), in which G_{Q_op(D)} is the Green function for the symmetric operator Q_op(D) in equation (2.5). That is, u = G_{Q_op(D)} * m is the velocity vector for the 1L√D model.
Likewise, the ModCH2 equations describe geodesic motion with respect to the metric Hamiltonian obtained by replacing G_{Q_op(D)} by G_{Q_op(d)} in equation (2.38). The ModCH2 model also has the special feature that its Hamiltonian lies in the class of general metrics (Green functions) defined in (2.41). Importantly for the remainder of the present work, the class of Hamiltonians in (2.41) admits emergent singular solutions supported on advected embedded spaces.
In preparation for displaying the computational simulations of the singular solution behaviour for ModCH2, we write the equations in dimension-free form. In addition, the dimension-free form of the symmetric operator Q_op(σ) is redefined with α² := σ²/12 as

    Q_op(α) u := (1 − α² ∇div) u.    (2.42)

Consequently, the dimension-free form of the Lagrangian for ModCH2 is given in (2.43).
The constants σ² ≪ 1 and Fr⁻² = O(1) here are, respectively, the square of the aspect ratio and the inverse square of the Froude number, which have been obtained in making the expression dimension-free. The final dimension-free number to be defined in the simulations will be the ratio of widths obtained by dividing the width w0 of the initial condition by the filter width, or interaction range, α.

The singular momentum map we shall discuss here arises as part of a dual pair. The rigid body provides a familiar example of a dual pair. In the rigid body, the two legs of the dual pair correspond to the cotangent-lift momentum maps for right and left actions, respectively. The dual pair for Euler fluids implies (from right-invariance) that the momentum map J_R is conserved. For Euler fluids, the conservation of the right momentum map J_R is equivalent to Kelvin's circulation theorem. For Euler fluids, the left momentum map J_L maps Hamilton's canonical equations on T*(SDiff) to their reduced Lie-Poisson form and at the same time implies that the solutions on T*(SDiff) can be defined on embedded subspaces of the domain of flow which are pushed forward by the left action of SDiff [23]. These results for ideal incompressible Euler fluids were generalised to the semidirect-product left action of Diff on embedded subspaces of the domain of flow for ideal compressible fluids in [28]. For the fundamental proofs that these maps satisfy the technical conditions required for verifying them as dual pairs, see [19].
In summary, for the semidirect-product case of EP(Diff Ⓢ F), the weights w_i for i = 1, ..., N in (2.33) are considered as maps w_i : S → R*. That is, the weights w_i are distributions on S, so that w_i ∈ Den(S), where Den := F*. In particular, considering this triple of spaces leads to the following solution momentum map introduced in [28].

Theorem 2.13 (Singular solution momentum map [28]).
The singular solutions of the semidirect-product Lie-Poisson equations in (2.36) for ℓ = ℓ_ModCH2 in (2.30) are given by the expressions in (2.33), restated as

    m(x, t) = Σ_{i=1}^{N} ∫_S P_i(s, t) δ(x − Q_i(s, t)) ds,
    D(x, t) = Σ_{i=1}^{N} ∫_S w_i(s) δ(x − Q_i(s, t)) ds.    (2.46)

The expressions for (m, D) ∈ X*(Rⁿ) × Den(Rⁿ) in (2.46) identify a momentum map J : T*Emb(S, Rⁿ) → X*(Rⁿ) × Den(Rⁿ). The considerations discussed in [28,19] derive the above singular momentum map as the left-invariant leg of a defined dual pair. However, these considerations will not be reviewed here. Instead, the next section will start a series of illustrations by numerical simulations of the dynamical behaviour of the solutions of the ModCH2 equations (2.31) in 2D with periodic boundary conditions.
Background -Euler-Poincaré and Lie-Poisson derivations
This section reports computational simulations of the interaction dynamics of wave fronts. Before embarking on our report of these computational simulations, let us place them into the context of the previous literature, which is based on approximating the Lagrangian in Hamilton's principle for fluid dynamics. Such approximations have been designed before to preserve the transport and topological properties of variational principles [4,20,21,17,18,25,26,27].
Specifically, this section reports simulations in which the Lagrangian functional ℓ_ModCH2(u, D) in equation (2.43) has been augmented to complete the H¹ norm in the kinetic energy of the dimension-free form of the Lagrangian. Namely, we modify the Lagrangian ℓ_ModCH2 in (2.43) to include the full H¹ norm of the velocity; we refer to the resulting system as H1ModCH2, whose motion equation is (3.3) and whose Hamiltonian is given in (3.5). This Hamiltonian also lies in the class of Hamiltonians in (2.41). Consequently, the H1ModCH2 equation will admit emergent singular solutions supported on advected embedded spaces which dominate its asymptotic behaviour. In the absence of the potential energy terms, the H¹ kinetic-energy Lagrangian yields the n-dimensional Camassa-Holm equation, studied numerically in 2D and 3D in [27]. For divergence-free flows, the nD Camassa-Holm equation is also known as the Euler-α model in a class of other α-fluid models [24], and it was the source of the Lagrangian-Averaged Navier-Stokes α model (LANS-α model) of divergence-free turbulence in [6,7,17,18]. In the remainder of the present paper, we will present computational simulations of equation (3.3) in 2D which include the effects of potential energy as well as the vorticity in the interaction of singular wave fronts. Consequently, the results we obtain may be compared with the computational simulations of the Camassa-Holm equation in 2D and 3D in [27], in order to see the differences in solution behaviour due to the presence of gravitational potential energy. The 1D version of these comparisons has already been made for solutions of both the ModCH2 and H1ModCH2 equations in [26]. In 1D, a single run accomplishes the comparisons of the Camassa-Holm solutions with those of both ModCH2 and H1ModCH2, because in 1D the operators div∇ and ∇div are the same. The work here presents solutions of H1ModCH2 in 2D for comparison with corresponding solutions of the Camassa-Holm equation in [27]. The comparisons of ModCH2 in 2D and 3D with corresponding solutions of the Camassa-Holm equation in [27] will be deferred to a later paper, in which we will also present comparisons of Camassa-Holm solutions in 2D and 3D with the corresponding solutions of EPDiff(H_div), as introduced in [31].
Simulations of emergent H1ModCH2 solutions
In the rest of this section, we consider computational simulations of the H1ModCH2 equation (3.3) dynamics in 2D. We will present five initial conditions in the paper and ten initial conditions in the supplementary materials. For each initial condition, we consider the dynamical exchange between kinetic and potential energy. This will be illustrated by starting with only kinetic energy and zero initial elevation in sections 3.2.1-3.2.4, and with only potential energy released from rest in sections 3.2.5 and 3.2.6. The first panel (top left panel) in each figure corresponds to the initial condition. The subsequent panels, reading across the first row and then across the second row, are snapshots at subsequent times. The domain is [0, 2π] × [0, 2π] with doubly periodic boundary conditions. Coordinates are x horizontally and y vertically. We use the colour map shown in figure 3 for the squared magnitude of the velocity, |u|², where the minimal and maximal values appear grey and white, respectively. This is the same approach used in [27], where the black colour at 12.5% intensity exists to show the outlines of spatially confined velocity segments. While the colour map in figure 3 is apt for showing small-scale features of positive-definite fields, it is not suitable for plotting fields that take negative values. Thus, the elevation figures will use the standard colour map turbo. In each figure, the colour map is determined for each panel separately, so that the features of each snapshot are visible. The scales of the 2D plots are included in each figure alongside the colour bar, such that the variation of the intensity across each panel is clear. Four 1D slices of the domain are included in each snapshot in the directions shown in figure 4. Specifically, the solid black line is the profile along the horizontal y = π, the dashed red line is along the vertical x = π, the solid green line is along the upward diagonal y = x − π, and the dashed blue line is along the downward diagonal y = π − x. Similarly to the 2D snapshots, the scales of the 1D plots are also determined per panel for maximal clarity.
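For readers who wish to reproduce the figure layout, the four 1D slices can be extracted from a 2D field sampled on the doubly periodic grid as in the following illustrative snippet (the grid indexing convention here is an assumption; the paper does not specify its plotting implementation).

```python
import numpy as np

def four_slices(field):
    """Extract the four 1D profiles shown in the grey strips from a square
    field sampled on [0, 2*pi)^2 with field[i, j] at (x_i, y_j) = (2*pi*i/n, 2*pi*j/n)."""
    n = field.shape[0]
    mid = n // 2                             # grid index closest to pi
    idx = np.arange(n)
    horizontal = field[:, mid]               # solid black: along y = pi
    vertical = field[mid, :]                 # dashed red: along x = pi
    up_diag = field[idx, (idx - mid) % n]    # solid green: y = x - pi (periodic wrap)
    down_diag = field[idx, (mid - idx) % n]  # dashed blue: y = pi - x (periodic wrap)
    return horizontal, vertical, up_diag, down_diag
```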
The time step is controlled adaptively with an embedded Runge-Kutta pair, using the step-size update in (3.6). At step i, we have the fourth-order solution ū_i and the fifth-order solution û_i, as well as the previous time step h_{i−1}. The value p = 4 is the order of the solution ū_i, and the order of û_i is p + 1. If the L2 norm of ū_i − û_i is less than the tolerance ε, the step size for the next time step is derived from (3.6), which takes the standard controller form

    h_i = γ h_{i−1} (ε / ||ū_i − û_i||)^{1/(p+1)}.    (3.6)

If ||ū_i − û_i|| > ε, the current step is repeated with the step size derived from (3.6). The relative tolerance and safety factor used in this work are ε = 10⁻⁵ and γ = 0.9, respectively.
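The accept/reject logic described above can be summarised in a few lines; the following is a minimal sketch of such a controller (the embedded stepper rk45_step is a placeholder for any explicit 4(5) pair, and is not the solver used in this work).

```python
import numpy as np

def adaptive_march(rk45_step, u, t, t_end, h, eps=1e-5, gamma=0.9, p=4):
    """March u from t to t_end with an embedded RK4(5) pair and the step-size
    update of (3.6). rk45_step(u, t, h) must return the order-p and
    order-(p+1) solutions (u_bar, u_hat) after one step of size h."""
    while t < t_end:
        h = min(h, t_end - t)
        u_bar, u_hat = rk45_step(u, t, h)
        err = np.linalg.norm(u_bar - u_hat)
        h_next = gamma * h * (eps / max(err, 1e-16)) ** (1.0 / (p + 1))
        if err <= eps:            # accept the step and advance in time
            u, t = u_bar, t + h
        # otherwise the step is rejected and retried with the smaller h_next
        h = h_next
    return u
```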
Plate
In figures 5-8, we consider the combined dynamics of the velocity magnitude |u| and elevation (D − b(x)) in the interplay of kinetic and potential energy for different values of α, starting from the same initial conditions in a doubly periodic square domain with a flat bottom topography, so b(x) = const. The Plate initial condition is inspired by the two SAR images in figure 1. The first of these two SAR images shows the surface signature of an internal wave propagating midway through the Gibraltar Strait. The second SAR image shows the train of wavefront surface signatures which develops after the internal wave has propagated into the open Mediterranean Sea. Initially, the momentum m shown in the first panel of the Plate figure is distributed along a line segment whose corresponding velocity falls off exponentially as e^(−|x|/w0) at either end of the segment and also in the transverse direction. Thus, the transverse slice of the fluid velocity profile shown in the rectangular strip below the panel as a black curve has a contact discontinuity, i.e., a jump in its derivative. The name "Plate" also refers to the corresponding case for the 2D CH dynamics simulated in [27]. The advected depth variable D is initially at rest and the elevation is flat, so D(x, 0) = const. Figure 5 shows snapshots of the velocity profile of the initially rightward moving line segment. The support of the velocity solution develops a curvature and "balloons" outward as it moves rightward. It also stretches because the endpoints of its profile are fixed by the imposed exponential fall-off of velocity there. The shapes of the velocity profiles in the transverse direction of travel are shown by the 1D plots beneath the 2D snapshots. The bottom panels of figure 5 show the smoothing of the initial contact curves. Figure 6 shows the snapshots of elevation (D(x, t) − b(x)) accompanying the evolution of velocity in figure 5. Note that the moving peak in elevation is accompanied by a trailing depression. This happens because of conservation of total mass. Namely, mass conservation implies that the moving surface elevation of an initially flat elevation profile must be accompanied by a corresponding moving depression of the elevation. The peak of the elevation follows the motion of the velocity profile. However, the profiles of velocity and elevation do not develop the same shape, because of the trailing depression below the mean elevation. The region of depression formed behind the peak extends from the initial position of the velocity profile to the tail of the current velocity profile.
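As an aside for reproducibility, a "plate"-type initial velocity can be built as follows; the precise functional form used by the authors is not stated beyond the exponential fall-off, so this construction and its parameter values are illustrative assumptions.

```python
import numpy as np

def plate_velocity(n=256, w0=0.3, x0=np.pi / 2, y1=np.pi - 1.0, y2=np.pi + 1.0):
    """Rightward velocity supported near a vertical segment {x0} x [y1, y2],
    falling off as exp(-dist/w0) with distance from the segment, so the
    transverse profile has the jump in derivative described in the text."""
    x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    X, Y = np.meshgrid(x, x, indexing="ij")
    y_near = np.clip(Y, y1, y2)        # nearest point on the segment
    dist = np.hypot(X - x0, Y - y_near)
    ux = np.exp(-dist / w0)            # rightward component
    uy = np.zeros_like(ux)
    return ux, uy
```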
Wavefront emergence. When α < w0, in figure 7, the unstable initial velocity profile produces a train of peakon segments emerging as the initial profile breaks up. Each of the emergent wavefronts is curved because its velocity vanishes at the initial endpoints. The number of wavefronts depends on the size of α. In figures 7-8, the first emitted velocity wavefronts have the highest velocity and subsequent wavefronts have lower velocity. Consequently, they will not overtake each other and a wave train will be formed. The material peaks travelling along with the velocity profiles also have the feature that the first peak is the highest and all subsequent peaks are lower. The depression region is now bounded by the location of the initial velocity profile and the arc defined by the slowest emitted wavefront. The process of velocity wavefront emergence takes time to complete. This is shown in the last panel in figure 7, where the initial condition has evolved into 6 fully formed segments ahead of ramps. As time progresses further, the ramps will develop into a train of wavefront segments. Figure 8 shows the elevation associated with the velocity in figure 7. Panel 1 of figure 8 shows the initially flat elevation. Panel 2 shows the early development of a wavetrain of positive elevation. As expected, the leading wave is the tallest. In panels 2, 3 and 4 one sees the rightward propagation of mass as the wave train moves away from the initial rightward impulse. In the subsequent panels one sees the continued development of a leftward-moving depression of the surface due to the emission of the rightward moving wave train of positive elevation. The grey rectangular strips below the panels show details of the wave-forms along the colour-coded directions in figure 4. Note, however, that the elevation of the surface between the successive wavefronts in the wave train is less than the initial level of the fluid at rest. Indeed, the depression is developing a counterflow in the opposite direction which might eventually cause a large scale oscillation in the wake of the plate. Comparing the properties of the fastest wavefronts for different values of α, we see that both the material and velocity wavefronts are higher for smaller values of α. This is due to conservation of mass and energy, in which α controls the width of the wave profile.
Skew
Skew flows in figures 11 and 12 are initiated with two peakon segments of the same width and with constant elevation. The peakon segment located at the back has 1.5 times the amplitude of the peakon segment moving horizontally. Thus, the waves emerging from the back peakon will overtake the waves emerging from the peakon moving to the right by moving along the negative diagonal. Panel 2 of figure 11 shows the result of collisions of the first emitted curved velocity segments. Here, both overtaking and head-on collisions have occurred along different axes, and the resulting nonlinear transfer of momentum has led to the merging, or reconnection, of the wave segments. The collision has also produced a hotspot of momentum and elevation located at the intersection point. This hotspot expands rapidly outward to form the red region of the rightmost wavefront in panel 3. The appearance of hotspots during the reconnection of wavefronts is also seen in the dynamics of doubly periodic solutions of the Kadomtsev-Petviashvili equation [5], and is observed, for example, in a famous photograph of crossing swells in the Atlantic Ocean [36].
The notion of Lagrangian memory wisps introduced in [27] is particularly visible in panel 3 of the elevation evolution in figure 12, where two wisps can be found connecting the boundaries of the expanding hotspot to edge points of elevation segments. By examining intermediate snapshots, we see that the initial memory wisp connects from the hotspot to the edge of the elevation segment travelling downwards after the collision. Via hotspot expansion and the emission of additional wavefronts from the initial conditions, the wisp splits into two and connects to different elevation segments. In panels 4 to 6, we see the same interaction of subsequently emitted wavefronts with multiple collisions and reconnections. In each of the collisions, memory wisps are produced between the resultant wavefronts, which suggests the hotspots are part of the mechanism creating the wisps. We note that the persistent memory wisps in panel 6, between the most and second most rightward elevation segments, are the same memory wisps seen in panel 3. This suggests that the memory wisps are not produced by the numerical method; instead, they are products of the wavefront collisions, which preserve the reversibility of the evolution.
Wedge
The "wedge" initial condition is a modification of the skew collision in which the initial upper peakon segment travels downward along the negative diagonal and the initial lower peakon segment travels upward along the positive diagonal. The magnitudes of the velocities are the same and there is a reflection symmetry along the horizontal axis in the middle of the domain, along y = π. When the emergent wavefronts meet along y = π, their vertical momentum components collide in opposite directions (head-on). The "wedge" initial condition can be seen on the left of the lower panels of figure 13, emerging from the line y = π. In panel 3 of figure 13, the collision of the velocity segments forms a hotspot along the mid-line, which expands outward away during the reconnection process near the center of panel 4. As these hotspots expand further in the next panels, they leave behind memory wisps in the velocity which are visible in panel 6. These memory wisps are not seen, though, in the snapshots of elevation in figure 14, as they are obscured near the boundaries between the depression regions and the elevation of the material wave segments.
For w0 = 8α in figures 15 and 16, multiple "wedge" collisions occur in the wave train emerging from the initial conditions. Wavefronts from the same wave train also interact with one another due to the elastic collision property. This interaction produces fast, small-scale oscillations which resemble the emergent wavefronts, broken into even shorter "shards", seen in panels 3 to 6 of figure 15 and in figure 16. These broken shards of wave segments arise when the numerical method can no longer resolve the smallest scale behaviour. Lowering the value of α narrows both the velocity and elevation wave segments. It also has the effect of highlighting the presence of memory wisps, as more collisions occur with higher transfer of momentum and thus greater separation of the wavefronts, as seen in panel 6 of figure 16.
Other aspects of head-on collisions will be discussed next in section 3.2.4 for the "parallel" initial conditions.
Parallel
The initial condition for the "parallel" collision comprises two peakon segments of equal and opposite magnitudes moving toward each other along vertically offset parallel horizontal lines, as shown in figure 17. This situation differs from the overtaking (rear-end) collisions seen in the "skew" initial conditions, as the collisions are head-on; so they involve wavefronts with positive and negative velocity components. In 1D, when the wavefronts are peakons, no vertical offset can occur and an antisymmetric initial condition on the real line produces a collision in which the two weak solutions bounce off each other elastically in opposite directions. In the 2D case, the offset initial condition introduces angular momentum into the system. Consequently, the offset head-on collision can access angular degrees of freedom, and thus it will show more complex behaviour than the head-on collision in 1D.
Consider the case where α = w0 in figure 17. The initial velocity segments balloon outwards and their shape is smoothed, as occurs in the "plate" condition. When the wavefronts collide in panel 3, the magnitude of velocity along the collision front vanishes and the velocity profile becomes very steep, as seen also in 1D peakon collisions. In panel 4, we see that the wavefront segments which did not undergo head-on collisions contain hotspots. The hotspots indicate where reconnections have occurred. These hotspots expand in panels 5 and 6 into a velocity profile which balloons outwards at an angle away from the vertical axes. The results of the head-on collisions are the dark segments connecting the upper and lower velocity wavefronts. The scattering angle seen clearly in the third panel of figure 18 is due to the conservation of angular momentum during the offset head-on collision. Figure 18 shows snapshots of the elevation during the offset head-on collision. As the elevation segments are advected with the velocity profile, we see an elevation head-on collision in panel 3. In contrast with the velocity profile, where the velocity tends to zero along the collision front, the elevations rise in the collision to create an elevation segment of large amplitude. This reinforced elevation then decreases in height in panels 4 to 6 as the elevation wavefront emerges from the head-on collisions. This is clearest from the black 1D profile in the grey rectangular strip below panel 6.
When α < w0, the evolution becomes even more complex because entire trains of wavefronts are involved, as seen in figures 19 and 20. In these figures, one sees the reconnections of velocity and elevation segments which had undergone head-on collisions with those segments that had not collided. The complexity builds as the head-on collisions and reconnections recur again and again, while additional wavefronts continue to be emitted from the initial conditions.
Dam Break
A Dam Break, or Lock Release, flow is produced when at time t = 0 a volume of fluid at rest behind a dam, or lock, is suddenly released. Gravity then drives the flow, as potential energy is converted into kinetic energy. Here, we treat the case of a radially symmetric Gaussian distribution of initial depth with constant, nonzero bathymetry, b(x) = const > 0. This corresponds to the case where a radially symmetric Gaussian distribution of initial surface elevation is released into a fluid at rest with a flat surface over a constant bathymetry. Consider the elevation profile D − b in figure 21. The first panel shows the initial condition for the elevation, while the second panel shows the plateauing and lowering of the initial Gaussian elevation peak which, in turn, becomes wider as mass is pushed outward radially by gravity. When the critical width α of the expansion is reached, a wavefront is emitted radially outwards on the left and right hand sides of the domain, as seen in panels 3 and 4. We note that the elevation becomes negative behind the formation of the material wavefront, and it becomes more negative in the panels after panel 3. The leading edge of the wavefront subsides exponentially as it evolves, as does the shape of the leading edge of the velocity wavefront in figure 22. In the velocity profile, we see that the emerging wavefront takes the peakon form in the third panel. As the system evolves, the leading edge of the velocity wavefront is similar in shape to the material wave profiles in panels three, four and five.
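For concreteness, an initial condition of this type can be assembled as follows; the amplitude, width and bathymetry values are illustrative assumptions, not those used in the figures.

```python
import numpy as np

def dam_break_ic(n=256, d0=1.0, b0=0.1, height=0.5, width=0.5):
    """Radially symmetric Gaussian elevation released from rest over a
    constant, nonzero bathymetry b = b0 on the doubly periodic [0, 2*pi)^2 domain."""
    x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    X, Y = np.meshgrid(x, x, indexing="ij")
    r2 = (X - np.pi) ** 2 + (Y - np.pi) ** 2
    b = b0 * np.ones((n, n))                          # flat bottom topography
    D = d0 + height * np.exp(-r2 / (2.0 * width**2))  # total depth; elevation is D - b
    ux = np.zeros((n, n))                             # fluid initially at rest
    uy = np.zeros((n, n))
    return D, b, ux, uy
```

Setting b0 = 0 reproduces the second variation of the Dam Break discussed below.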
For smaller values of α, a train of velocity and material wavefronts rapidly develops, as seen in figures 23-24, where w0 = 8α. Similarly to the "plate" initial condition, the first wavefront has the highest velocity and elevation, while subsequent wavefronts have lower velocity and elevation. The elevation ahead of the front of the first wavefront remains flat, but as the expansion continues the level of the fluid surface drops behind the expanding wave train. If one looks closely, one sees that the level of the surface between the wavefronts in the wave train is lower than the initial level at rest. Perhaps this would eventually produce a counterflow. Now we consider a variation of the Dam Break initial condition that does produce persistent velocity peakon wavefronts. The initial conditions for figures 25-28 are b(x) = 0 and u = 0. This corresponds to the case where the initial surface elevation is released into still water over zero bathymetry. Comparing the velocity profiles for the same value of α in figures 26 and 22, we see that the evolutions are very similar over the first three panels, as peakon wavefronts develop. However, we see in the bottom three panels of figure 26 that the peakons persist and travel outwards radially in a wave train with decreasing amplitude. The elevation profile in figure 25 also starts with the plateauing and lowering of the initial Gaussian elevation in panel two. In panel three, one sees the start of the formation of a material wavefront. Instead of the wavefront being formed and emitted like the velocity wavefront, the material wavefront travels outwards, loses amplitude and vanishes when it reaches the edge of the elevation distribution. The elevation distribution widens in this process, as seen in the bottom three panels.
Similarly, for smaller values of α, a train of velocity and material wavefronts appears from the initial condition, as seen in figures 27-28, where w0 = 8α. The process of formation and annihilation of material peakons persists, and the wavefront shapes resemble the peakon shape for more of the subsequent waves in the wave train. Since the elevation is not negative in this flow, no counterflow would be produced.
From these two variations of the Dam Break problem, we see that the time required for persistent, peakon-shaped wavefronts to develop from smooth, spatially confined initial conditions varies. However, the issue of the formation time of peakon wave profiles is beyond the scope of the present paper.
Dual Dam Break
Here, we treat the Dam Break flow in which the initial condition contains two radially symmetric Gaussian distributions of initial surface elevation. These are simultaneously released into a fluid at rest with a flat surface over a constant bathymetry. To study the interaction of both velocity and elevation wavefronts, we consider the case where the bathymetry b > 0. The emergence of wavefronts proceeds in the same way as for the single Dam Break in section 3.2.5. In figure 30, panel 1 is the initial condition and panel 2 shows the emergence of elevation wavefronts. In the middle of panel 3, one sees the head-on collisions of these emergent wavefronts. In the center of the domain in panels 4 to 6, one sees the head-on collisions of the emitted radial peakons and their reconnections in the form of two rapidly expanding hotspots located along x = π. As one part of the elevation wavefront expands radially away from the center of the domain, it leaves a widening region of depression behind it which creates a counterflow, which one sees developing in panel 6 as the dark purple region. The corresponding velocity profile in figure 29 evolves similarly to the "Parallel" and "Single Dam Break" flows for the head-on collisions and the emergence of wavefronts, respectively.
For smaller values of α, a train of peakon wavefronts rapidly ensues, as seen in figures 31-32, where w0 = 8α. Consider the interaction in the centre of the domain after the initial head-on collision of the first wave in the emergent wave train in panel 3. Panel 4 shows the "rebound" wavefront interacting with the subsequent wavefront in the wave train to create hotspots above and below x = π. This process repeats for every wavefront and creates a checkerboard pattern in the region above and below x = π. The connections between wavefronts are the memory wisps, seen in panels 5 and 6. Thus, this interaction produces a cellular elevation profile which is locally similar to that of the doubly periodic cnoidal waves seen in solutions of the Kadomtsev-Petviashvili equation [36,5].
Conclusions and outlook
Inspired by the SAR images of sea-surface wavefronts regarded as the signatures of the dynamics of internal waves propagating below the surface, we proposed in the introduction to derive a single-layer minimal model of the surface velocity and elevation whose solution behaviour would mimic the dynamics of the curved wavefronts seen in the SAR images in figures 1 and 2. The computationally simulated solutions of the H1ModCH2 minimal model illustrated the emergence of trains of wavefronts which evolved into complex patterns as they propagated away from localised disturbances of equilibrium and interacted nonlinearly with each other through collisions, stretching and reconnection.
To mimic the wave-current interaction that drives the curved wavefronts seen in SAR images, we investigated a variety of computational simulation scenarios which addressed two questions. First, we asked how an initial condition would evolve if there were a current possessing kinetic energy, but the surface were flat and thus had no gravitational potential energy. This question was answered in sections 3.2.1, 3.2.2 and 3.2.3, to which we refer for details. Second, we considered the converse question for initial conditions in which a stationary elevation was released into still water, as discussed in sections 3.2.5 and 3.2.6. In addition, we have also mimicked the reconnection properties of the internal wave signatures in the cases where wavefront collisions occur.
We had hoped that the singular momentum map solutions discussed in section 2.1 would emerge from our simulations of wavefront trains arising from localised disturbances. This would have reduced the problem of wave-current interaction among sea-surface wavefronts to the much simpler problem of mutual advection among curves in the plane. This would have been the case, of course, if we had started with the singular momentum map in equation (2.46) as the solution ansatz which would follow the dynamics in (2.48). However, we hoped to see wavetrains of singular solutions on embedded curve segments emerge from generic smooth confined initial conditions. In fact, we did see that effect in some of the simulations. We saw that wavetrains of peakon curves did form in some cases of our suite of simulated energy exchange dynamics, more specifically, the dam break problem with zero bathymetry in section 3.2.5. However, in some other cases such as the "plate" in section 3.2.1, the wavetrains of peakon curves did not form completely. That is, the singular solutions supported on embedded curves did not always form completely during the time intervals of our simulations. Moreover, in the "dam break" initial condition in section 3.2.5, the leading peaks in the elevation began to form, and then slowly ebbed away and disappeared as other peaks emerged behind them and then disappeared later, as well.
So, the question of emergence of the singular momentum map solutions in (2.46) for ModCH2 from a smooth confined initial condition dynamics remains open. In particular, the question is, under what conditions will a solution of the 2D ModCH2 equations starting from a smooth confined initial condition indeed produce a train of singular peakon curve segments, if ever?
Hydrogen Recovery from Coke Oven Gas. Comparative Analysis of Technical Alternatives
The recovery of energy and valuable compounds from exhaust gases in the iron and steel industry deserves special attention due to the large power consumption and CO2 emissions of the sector. In this sense, the hydrogen content of coke oven gas (COG) has positioned it as a promising source toward a hydrogen-based economy which could lead to economic and environmental benefits in the iron and steel industry. COG is presently used for heating purposes in coke batteries or furnaces, while in high production rate periods, surplus COG is burnt in flares and discharged into the atmosphere. Thus, the recovery of the valuable compounds of surplus COG, with a special focus on hydrogen, will increase the efficiency in the iron and steel industry compared to the conventional thermal use of COG. Different routes have been explored for the recovery of hydrogen from COG so far: i) separation/purification processes with pressure swing adsorption or membrane technology, ii) conversion routes that provide additional hydrogen from the chemical transformation of the methane contained in COG, and iii) direct use of COG as fuel for internal combustion engines or gas turbines with the aim of power generation. In this study, the strengths and bottlenecks of the main hydrogen recovery routes from COG are reviewed and discussed.
INTRODUCTION
The exponential growth of the population in the last century, together with the associated industrial development, has driven a considerable increase in energy demand, which has been mainly supplied by fossil fuels. However, the current carbon-based energy system must cope with the depletion of global fuel reserves and with climate change in the short term, which could lead to an unsustainable situation. Thus, the search for new renewable energy sources and the sustainable use of fossil fuels are the main challenges in the energy supply chain roadmap. 1

With regard to the industrial sector, the iron and steel industry is the largest energy consuming sector, and it accounts for 9% of global carbon dioxide emissions. 2,3 Steel is made from iron ore as the main iron source, oxygen, and other minerals that occur in nature. Nevertheless, since iron ore contains iron oxide, the sinter (an agglomerate of iron oxide fines and other minerals) is first reduced to iron by removal of its oxygen content. Coke has been traditionally used as a fuel and reducing agent in blast furnaces, where hot air is injected into the coke, lime, and sinter. Coke is obtained by burning coal in the absence of oxygen at high temperatures in the coke oven batteries. As a result, a solid fraction (coke) and a gas fraction (coke oven gas) are obtained. The molten iron from the blast furnace is transported to the oxygen furnace, where oxygen is used to decrease the carbon content from 4% to <0.5%. 4

To overcome its high energy consumption, the iron and steel industry has improved its process efficiency, reducing the energy required to produce a ton of steel in 2020 by 61% compared to 1960. 5 This context, together with the rising price of fossil fuels, demands alternatives focused on reducing the energy demand and heat losses and on recovering the valuable compounds contained in waste streams. 6 In this sense, the waste heat and valuable-compound content of exhaust gases such as blast furnace gas (BFG), COG, and Linz-Donawitz converter gas (LDG) could potentially fulfill up to 30% of the energy demand of the iron and steel industry if they are used as fuel. 6,7 Furthermore, COG stands out among waste gas streams due to its high content of valuable compounds (Table 1).
Approximately 50 Nm3 of COG is generated per ton of steel, giving 93,000 million Nm3 of COG produced in 2020. 11,12 Commonly, there are two ways to cope with coke oven gas. On the one hand, raw COG can be directly used for heating purposes in coke oven batteries or blast furnaces. On the other hand, COG can be cleaned and further processed to obtain valuable products by separation or conversion techniques. 9 Hence, promoting COG energy recovery pathways is a step forward toward sustainability in the iron and steel industry. Among the valuable compounds, the outstandingly high content of hydrogen positions COG as a promising source of clean energy. Hydrogen is a feedstock not only in the production of chemicals or in refining processes in large-scale applications but also in healthcare, food, or pharmaceutical small-scale applications. However, its versatility and potential as a fuel source free of greenhouse gas emissions have given rise to a new segment of the market in power generation and the transport sector, where hydrogen acts as an energy vector. So far, the hydrogen demand has been fulfilled by the reforming of fossil fuels, and the obtained product is recognized as "grey hydrogen". Alternatively, green hydrogen, which is being highly promoted, comes from routes such as water electrolysis using energy from renewable sources. A greenhouse gas emissions-free, hydrogen-based economy places hydrogen as a key element with different purposes: i) to balance the grid when needed using a fuel cell (FC) system (power-to-power), ii) to be blended into the natural gas grid or used as feedstock for synthetic natural gas production (power-to-gas), 13,14 iii) to be used as fuel in the transport sector (power-to-fuel), 13,15 or iv) to be employed as a valuable commodity to produce chemical compounds or synthetic fuels (power-to-feedstock). 16,17 The technological research is being supported by the development of hydrogen policies (30 countries had released hydrogen roadmaps in 2021) in many regions such as Asia, Europe, or Canada. 18−20 The total investment in hydrogen spending will exceed $300 billion through 2030, and as a result, the hydrogen economy will continue its expansion, with 5.7% growth forecasted for the period 2021−2030. 15 The future development of hydrogen relies on the reduction of production costs. In this sense, the rapid global scale-up could drop electrolyzer system costs from $1120 kW−1 in 2020 to $230 kW−1 in 2030. Moreover, the cost of renewable energy is falling year-over-year (13% and 9% in solar and wind power, respectively), driven by infrastructure and equipment development. This context suggests that green hydrogen could be produced for $0.7−1.6 kg H2−1 before 2050, being competitive with natural gas and fossil fuels. 21,22 Thus, supplementary sources of hydrogen such as industrial waste streams can contribute to meeting the demand after an appropriate recovery process is applied. In this sense, coke oven gas, which is presently used as additional fuel in coke ovens or even burnt off in flares, is an up-and-coming source of hydrogen. This review discusses the state of the art in hydrogen recovery from COG streams and its further use.

Table 1 notes: (a) The information in this table was adapted from refs 8 (with permission of Elsevier) and 9. (b) Dry basis. Raw COG contains water vapor (up to 30%), which is removed as condensate at the pretreatment stage. 9,10

Figure 1. Schematic diagram of the COG pretreatment process, including the potential uses of minor components (adapted from Razzaq et al. 8 with permission from Elsevier, and Remus et al. 9 ). The three main stages of COG pretreatment are delimited by the dashed lines.
HYDROGEN RECOVERY FROM COKE OVEN GAS
2.1. Pretreatment of Coke Oven Gas. Raw coke oven gas leaving the coke oven batteries contains minor compounds such as ammonia, tar (a semisolid mixture of condensable aromatic hydrocarbons), or hydrogen sulfide, which must be eliminated to prevent fouling and corrosion in pipelines and equipment (see Table 1). Figure 1 illustrates the pretreatment stages (delimited by the dashed lines) that condition COG for further recovery. 8,9 COG is cleaned by the following pretreatment stages:

• Cooling: Raw coke oven gas (1000 °C) is preliminarily cooled by spraying an ammonia solution in gooseneck equipment. Then, the gases are further cooled to a temperature of 28−30 °C in direct or indirect coolers, and the fine tar droplets are removed in an electrostatic precipitator. While indirect coolers are shell-and-tube heat exchangers, direct cooling is performed by direct contact with countercurrent streams of ammonia solution in cooling towers. Subsequently, COG is conveyed to the washing stages by means of exhausters (suction fans).
Since exhausters cause compression of the gas, secondary cooling is necessary to attain the processing conditions for the NH3/H2S removal stage. Furthermore, tar/water separation of the condensate streams from the cooling stages is carried out in a decanter. Finally, tar, which is commonly handled as a residue, can be upgraded by catalytic cracking or reforming reactions to obtain polycyclic aromatic hydrocarbons or hydrogen, while the aqueous solution called "coal water" is fed to the ammonia liquor tank. 23 Nevertheless, the feasibility of tar recovery is determined by economic analysis, considering that only 25−45 kg of tar can be obtained from each ton of coke, which could call the capital investment into question.

• NH3 removal and desulfurization: Ammonia removal and desulfurization stages are carried out by well-known commercial processes. Ammonia can be removed as ammonium sulfate by spraying dilute sulfuric acid solution into the gas, or as ammonia solution by water scrubbing. Hydrogen sulfide can be captured by liquid absorption or oxidized to sulfur by wet or dry oxidative processes. 24−26 The H2S captured by absorption can later be transformed to sulfuric acid or to sulfur by the CLAUS process. 27 Although dry oxidation has been used historically, the development of liquid absorption and wet oxidation has displaced this technique, because it entails high cost and space requirements. Additionally, the NH3/H2S scrubbing-stripping (liquid absorption−desorption) circuit is used with the aim of preventing the production of highly contaminated wastewater from the wet oxidation of H2S and NH3; besides, the ammonia liquor can be recovered as a supplementary source for the cooling stages of the cleaning process. The process sequence has been detailed by Remus et al. 9 in the Best Available Techniques reference documents (BREFs). Ammonia is removed from COG in the first scrubber with water. Then, the aqueous solution with ammonia from the first scrubber is used in a consecutive unit as a scrubbing liquor to remove H2S. Ammonia and hydrogen sulfide are recovered from the scrubber solution in the stripping stage, and they may be further conditioned. Nevertheless, the upgrading of ammonia and hydrogen sulfide streams must be economically feasible, since only 3 kg of NH3 and 2.5 kg of H2S are produced per ton of coke. 9

• Fractioning: The outlet gas from the NH3/H2S scrubbing-stripping circuit contains light oil. The main
Clean coke oven gas could lead to a wide range of valuable compounds. Hydrogen, which is the most promising product, can be purified by means of separation processes, or it can be obtained from chemical transformations, such as reforming or partial oxidation of the methane fraction of COG. In addition, syngas (H 2 + CO), which is a feedstock to produce methanol or ammonia, can be obtained in the chemical conversion routes. The H 2 /CO ratio determines the application of the obtained syngas. While higher ratios from steam reforming are suitable for iron reduction in the iron and steel industry or ammonia production (H 2 /CO ≈ 3) by the Haber-Bosch process, lower ratios from partial oxidation or dry reforming fit the requirements for methanol production (H 2 /CO ≈ 2). Furthermore, the hydrogen/methane ratio has positioned COG as a suitable fuel for internal combustion engines or gas turbines for the cogeneration of power and heat to increase the energy efficiency of the manufacturing process. In addition, the upgrading routes can be coupled to increase the recovery of hydrogen from COG in hybrid separation-reaction systems. In this sense, after pretreatment, the clean COG can be subjected to a separation in membrane modules or the PSA unit, obtaining a hydrogen-rich permeate stream, while the methane-rich retentate stream can be subsequently converted into hydrogen by chemical reactions such as reforming or partial oxidation.
2.2. Hydrogen Purification. High-purity hydrogen is required for its conversion to electrical energy in fuel cell devices or when it is used as feedstock in manufacturing processes. Commonly, gas separation can be carried out by cryogenic distillation, pressure swing adsorption (PSA), and membrane technology. This review focuses on pressure swing adsorption and membranes because of the large energy consumption of cryogenic distillation, although the latter technology could be economically feasible for recovering hydrogen from purge gas streams in other processes. 31 Table 2 shows a comparison of hydrogen purification techniques.
The quality grade required in the produced hydrogen, together with the levels of the specific product impurities, is critical to the selection of the purification technique. The PSA process is the best choice for high-purity hydrogen production (above 99.9 vol %), whereas polymeric membrane technology is a low-cost alternative to obtain hydrogen of 90−98 vol %, and palladium (Pd) and ceramic membranes are able to reach higher purities (>99.9 vol %). Plant capacity and feed/product pressures should also be considered. Membrane systems are modular, and therefore the costs and production rate are closely related, as capital investment and energy demand are proportional to the number of modules. Besides, PSA benefits from the economy of scale; it is applicable throughout a full range of capacities and produces hydrogen at feed pressure (10−40 bar), reducing downstream compression costs. This is an advantage when compared to membrane units, where the product is obtained at lower pressures.
2.2.1. Pressure Swing Adsorption (PSA). Pressure swing adsorption is a mature gas separation technology that has been positioned at the forefront of hydrogen purification (an 85% share of hydrogen purification worldwide) because it allows reaching high purities (>99.99 vol %) and recoveries (70−90%). 34−36 The process is based on the retention of contaminant molecules in an adsorption bed at high pressure, including intermediately adsorbed species (methane) and lightly adsorbed components (nitrogen, carbon monoxide). The separation proceeds until the weakly adsorbed compounds, such as nitrogen or carbon monoxide, are no longer retained in the bed and begin to contaminate the product stream (the breakthrough time). At that moment, the desorption step starts, by means of either decreasing the column pressure or flowing a low-pressure fraction of the hydrogen product stream (purge); the adsorbent is regenerated in this step. 34,37 Thus, it is a cyclic adsorption−desorption operation that gives rise to a H2-rich stream (from adsorption) and a CH4-rich stream (from desorption). Commonly, the adsorption bed is made of different selective layers. Molecular sieves such as zeolites are used to remove nitrogen and carbon monoxide, which are the most concerning contaminants. 38 Nevertheless, alumina (AA) and activated carbon (AC) layers must be placed before the molecular sieve to remove water vapor, methane, and carbon dioxide, since the strong interaction of these compounds with zeolites leads to high energy consumption in the desorption stage. 39,40 Figure 3 shows a schematic representation of the PSA separation technology.
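As a rough, order-of-magnitude illustration of the breakthrough concept (not a design calculation; all parameter values below are assumptions), the ideal breakthrough time follows from a plug-flow mass balance, t_b ≈ q_sat·m_bed divided by the molar feed rate of the contaminant:

```python
def breakthrough_time(q_sat, m_bed, y_feed, flow, p, T=298.15):
    """Ideal breakthrough time (s) of a weakly adsorbed contaminant.

    q_sat : equilibrium loading of the contaminant on the sieve, mol/kg
    m_bed : adsorbent mass, kg
    y_feed: contaminant mole fraction in the feed
    flow  : volumetric feed rate at bed conditions, m^3/s
    p     : feed pressure, Pa
    """
    R = 8.314  # J/(mol K)
    molar_feed = y_feed * p * flow / (R * T)  # mol/s of contaminant, ideal gas
    return q_sat * m_bed / molar_feed

# Illustrative numbers only (loosely inspired by N2 on zeolite 5A at 10 atm)
t_b = breakthrough_time(q_sat=0.5, m_bed=1.0, y_feed=0.05, flow=1e-4, p=10 * 101325)
print(f"estimated breakthrough time: {t_b:.0f} s")
```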
The main operating variables in PSA technology are the adsorption pressure, the purge-to-feed ratio (P/F, referring to the regeneration stage), and the cycle time. Increasing pressure and P/F increases hydrogen purity, since the adsorption mechanism is promoted, but at the expense of lower hydrogen recovery and higher energy costs. Moreover, the productivity can be increased by short operation cycles.
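The qualitative effect of the P/F ratio can be illustrated with a toy mass balance. The sketch below assumes that hydrogen not spent as purge leaves in the product and that impurity slip into the product decreases linearly with P/F; both are crude stand-ins for the real column dynamics, and all parameters are illustrative.

```python
# Hedged sketch of the purity-recovery tradeoff with the purge-to-feed ratio
# (P/F). Idealized mass balance with illustrative parameters only.

def psa_performance(pf, y_h2_feed=0.58, slip0=0.02):
    """pf: purge-to-feed ratio [-]; y_h2_feed: H2 fraction in feed (COG-like).
    Returns (purity, recovery) under the toy assumptions above."""
    slip = max(slip0 * (1.0 - 5.0 * pf), 0.0)   # assumed impurity slip into product
    h2_product = y_h2_feed * (1.0 - pf)         # H2 left after the purge draw
    purity = h2_product / (h2_product + slip)
    recovery = h2_product / y_h2_feed
    return purity, recovery

for pf in (0.0, 0.05, 0.10, 0.15):
    purity, rec = psa_performance(pf)
    print(f"P/F={pf:.2f}  purity={purity:.4f}  recovery={rec:.2f}")
```

The trend reproduces the behavior described above and the advice of Li et al. that P/F should stay below about 0.1: purity rises with P/F while recovery falls linearly.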
Since PSA is carried out in adsorption-regeneration cycles, commercial processes are designed with more than four columns to ensure continuous hydrogen production during the regeneration step. Table 3 summarizes the performance data of well-known commercial PSA technologies for hydrogen purification. Moreover, the PSA purification process has been registered as the standard technology in several patents for the hydrogen production route from COG. 41−44 Commonly, high purity hydrogen (99.9% H2) and a methane-rich stream are obtained. Chen et al. 41 have patented a combined PSA-steam reforming process to increase the hydrogen recovery from COG. The hydrogen extracted from COG by PSA together with the hydrogen obtained by the steam reforming reaction of the methane-rich stream from PSA accounts for a recovery of 40,110 Nm3 h−1 of H2 from 50,000 Nm3 h−1 of COG. The production of high purity hydrogen by PSA still needs to address operational drawbacks such as i) high energy consumption, ii) the removal of weakly adsorbable contaminants such as N2 and CO present in COG, and iii) low productivity.
One approach to reduce energy consumption involves the use of vacuum in the regeneration stage (VPSA). Since a blower operates at lower pressure ratios than the air compressor of a PSA unit, it is a more energy efficient device. Furthermore, additional equipment such as dryers or filters is not necessary in VPSA, which reduces the capital investment. 45 These advantages have positioned VPSA as the best alternative for the regeneration method, overcoming the high energy consumption of conventional PSA. Regarding weakly adsorbable contaminants, the key point is the selection of the molecular sieve layer. In this sense, zeolite 5A and CaX are widely reported in the literature for hydrogen purification from COG. Delgado et al. 48 developed a simulation of hydrogen purification from COG in a four-bed PSA process. Zeolite 5A and CaX were selected as adsorption materials, nearly achieving fuel cell purity requirements (99.7 vol %) and high recoveries (>70%) at a feed pressure of 3 bar with both adsorbent layers. 48 On the other hand, Ahn et al. 49 obtained 99.99 vol % H2 with activated carbon and zeolite 5A as the molecular sieve layer, working at 10 atm feed pressure. The analysis of the adsorption curves of a synthetic gas mixture of COG in the AC/zeolite 5A dual layer was reported by Jee et al. 50 The results confirmed that N2 is the least adsorbable compound of COG, which results in the shortest breakthrough time (300 s) at 10 atm of feed pressure. Although the number of works that report experimental results with synthetic gas mixtures of similar composition to COG is steadily growing, alternative approaches for the recovery of H2 from binary mixtures to increase the separation performance achieved by PSA are also being considered in the open literature.
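The energy argument for VPSA can be made concrete with the ideal single-stage adiabatic compression work. The sketch below compares a feed compressor operating at a large pressure ratio with a vacuum blower operating at a small one; the pressure levels are illustrative assumptions, not values from the cited studies.

```python
# Hedged sketch: ideal single-stage adiabatic compression work per mole of gas,
#   w = (gamma/(gamma-1)) * R * T1 * [(P2/P1)**((gamma-1)/gamma) - 1],
# used to illustrate why a vacuum blower (small pressure ratio) is more energy
# efficient than the feed compressor of a conventional PSA.

R = 8.314  # J mol^-1 K^-1

def compression_work(p_in, p_out, T1=298.0, gamma=1.4):
    """Ideal adiabatic work [J/mol] to compress from p_in to p_out (same units)."""
    return gamma / (gamma - 1.0) * R * T1 * ((p_out / p_in) ** ((gamma - 1.0) / gamma) - 1.0)

w_psa = compression_work(1.0, 10.0)   # e.g. feed compression 1 -> 10 bar
w_vac = compression_work(0.3, 1.0)    # e.g. vacuum regeneration 0.3 -> 1 bar
print(f"feed compressor: {w_psa/1e3:.1f} kJ/mol, vacuum blower: {w_vac/1e3:.1f} kJ/mol")
```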
The effect of vacuum regeneration and short cycle times was studied in H2/CO2 binary mixtures by Lopes et al. 51 Their results showed that a 1-min reduction in the cycle time can increase hydrogen production from 100 to 600 mol H2 kgads−1 day−1. Since the average cycle time in PSA operation is 10−30 min, the reduction of the cycle time could increase the productivity, and the separation could be carried out in smaller columns.
The influence of the P/F ratio was analyzed by Yang et al., 52 working with H2/CO and H2/CH4 binary mixtures (70/30 vol %) in a two-bed process using zeolite 5A as the adsorbent. The results showed that an increase in the P/F ratio results in a higher regeneration yield of the bed, which ultimately leads to an increase in hydrogen purity. Then, the P/F ratio was optimized by Li et al., 53 working with a multicomponent hydrogen stream (72.9 vol % H2, 3.6 vol % CH4, 4.5 vol % CO) and using a dual-layer (AC/zeolite 5A) adsorbent. It was found that the P/F ratio should not exceed 0.1 to prevent a significant decrease in the recovery percentage. Regarding new adsorbents for N2 and CO impurities, attention has been paid to transition metals with the aim of increasing the adsorption capacity of CO. The interaction between transition metals and carbon monoxide by means of a reversible complexation reaction results in higher CO adsorption capacity. 54 Although PSA is a mature technology which has been widely industrialized, there is still room for improvement. The adsorption capacity of the selective layer for weakly adsorbable contaminants and the energy consumption should be further improved to meet fuel cell requirements (99.99 vol %, <0.2 ppm CO, <2 ppm CO2) 57 and ensure the economic feasibility of the process.
2.2.2. Membranes. For many fluid-phase separations, membranes represent a lower investment cost and lower energy consumption option than alternative and more conventional technologies. Commonly, membrane materials are classified as polymeric (organic), ceramic, carbon, and metallic (inorganic) membranes, although in recent years there has been growing interest in the development of mixed matrix membranes. 58,59 Ceramic and carbon membranes are microporous materials, which allow hydrogen purification by the molecular sieve mechanism according to the kinetic diameter of the molecules. Mass transfer in polymer membranes is usually described by the solution-diffusion mechanism, which assumes that the molecules are sorbed on the membrane surface, then diffuse across it, and finally desorb on the downstream side of the membrane. 60,61 Temperature and pressure are the main operating variables in membrane separation, while permeability (related to flux) and selectivity (related to purity) are the main characterization parameters. Since polymers are low-cost materials and provide a high degree of separation, research and development in recent decades has resulted in several commercially available membranes for hydrogen separation and purification. 62 In general, the studies reported in the literature about membranes to separate hydrogen from mixtures classify the membranes in two categories: i) hydrogen-selective membranes, where hydrogen permeates preferentially through the membrane, yielding a hydrogen-enriched permeate stream, and ii) CO2-selective membranes, where impurities such as CO2 permeate preferentially through the membrane, yielding a hydrogen-enriched retentate stream. 61 In the case of hydrogen recovery from COG, H2-selective membranes are preferred, since no methane-selective membranes are available. Tables 4 and 5 summarize the characteristics of commercial hydrogen-selective membranes for gas separation.
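In practice, the solution-diffusion description above reduces to a linear flux law, flux = permeability × partial-pressure difference / thickness, evaluated with the unit conventions of the membrane literature (Barrer for permeability, GPU for permeance). The sketch below uses an assumed permeability, thickness, and pressures purely for illustration.

```python
# Hedged sketch of the solution-diffusion flux through a polymer membrane,
#   J = Permeability * (p_feed - p_permeate) / thickness,
# with the usual unit conversions:
#   1 Barrer = 3.348e-16 mol m / (m^2 s Pa); 1 GPU = 3.348e-10 mol / (m^2 s Pa).
# Permeability, thickness and pressures below are illustrative assumptions.

BARRER = 3.348e-16   # mol m m^-2 s^-1 Pa^-1
GPU    = 3.348e-10   # mol m^-2 s^-1 Pa^-1 (permeance unit)

def h2_flux(permeability_barrer, thickness_m, p_feed_pa, p_perm_pa):
    """Hydrogen flux [mol m^-2 s^-1] from the solution-diffusion model."""
    permeance = permeability_barrer * BARRER / thickness_m
    return permeance * (p_feed_pa - p_perm_pa)

# Example: a 1 um selective layer with 100 Barrer H2 permeability,
# 10 bar feed H2 partial pressure, 1 bar on the permeate side.
J = h2_flux(100.0, 1e-6, 10e5, 1e5)
print(f"H2 flux: {J:.3e} mol m^-2 s^-1 "
      f"(permeance {100.0 * BARRER / 1e-6 / GPU:.0f} GPU)")
```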
Membrane technology can also be found in patented processes for hydrogen production from COG. 73−75 As explained in the subsection dealing with PSA technology, the methane-rich (65 vol % CH4) stream may be converted to hydrogen by gas reforming or partial oxidation to increase the hydrogen recovery, or it can be used as supplementary fuel in the plant. Among membrane materials, palladium membranes are selected to obtain high purity H2 (99.99 vol %) in separation or hybrid reaction-separation systems. Hydrogen permeation in Pd membranes comprises adsorption on Pd active sites, dissociation of the molecule into two hydrogen atoms, diffusion through the membrane, and recombination on the other side. 76 Although Pd membranes deliver high separation factors (H2/CO2: 3147, H2/N2: 2718), their performance is limited by embrittlement phenomena at low temperatures and pressures and by poisoning of the membrane when it comes into contact with H2S, CO, and other compounds. 77 In this sense, Pd is alloyed with other metals such as silver, copper, or gold to ensure stability in long-term operation. 78,79 The influence of the alloying element on the performance was discussed by Al-Mufachi et al. 80 While Pd-Y membranes deliver the highest H2 permeability ((3.7−5) × 10−8 mol m−1 s−1 Pa−0.5 at 350 °C), Pd-Cu exhibits higher mechanical stability and sulfur deactivation resistance. Moreover, the development of membranes with a higher hydrogen flux is necessary to increase the cost-effectiveness of the separation. Thus, research is focused on the production of membranes with a thin layer of palladium on a porous support. Itoh et al. 81 prepared a thin film of Pd (2−4 μm) supported on alumina tubes with a H2/N2 selectivity of 5000. The preparation of Pd membranes by physical vapor deposition was studied by Pereira et al. 82 A thin film (1 μm) of Pd supported on alumina with a H2 permeance of 0.21 × 10−6 mol m−2 s−1 Pa−1 at 300 °C was obtained. Finally, Goldbach et al. 83 obtained a Pd-Au layer supported on a ceramic composite membrane by an electroless plating method. The thin dense layer (3−5 μm) permits a high H2 permeance (1.3 × 10−6 mol m−2 s−1 Pa−0.5 at 300 °C) and H2/N2 selectivity (1100) at 500 °C. 83 Although commercial and patented membrane technologies provide hydrogen at a high purity degree and recovery, a single-stage membrane process can very rarely meet both requirements, except in the case of using high-cost palladium membranes. For that reason, multiple membrane stages, i.e., membrane cascades, are routinely employed, as shown in Figure 4. 84 Numerous studies have been published in the literature on the synthesis and optimization of gas permeation membrane networks, describing various possible configurations for the membrane cascades. 85 However, membrane systems consisting of a series of two or three stages represent the optimum configurations from the techno-economic point of view. 86,87 The selection of the cascade configuration is determined by the feed gas composition, pressure ratio, product purity, and product recovery. Among these, membrane selectivity is the most influential factor. 84 Although the recovery of the components of coke oven gas requires further research, the development of high-performance membranes for hydrogen purification is a topic of enormous interest to the scientific community.
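Because transport in dense Pd membranes is dissociative, the flux follows Sieverts' law rather than a linear pressure dependence, which is why the permeabilities quoted above carry Pa−0.5 units. A minimal sketch, reusing a mid-range Pd-Y permeability from the values above and assuming an illustrative film thickness and pressures:

```python
# Hedged sketch of hydrogen flux through a dense Pd-alloy membrane following
# Sieverts' law (flux proportional to the difference of the square roots of
# the H2 partial pressures, reflecting dissociative transport):
#   J = (Q / L) * (sqrt(p_feed) - sqrt(p_perm))
# Q uses the same units as the Pd-Y permeability quoted in the text
# (mol m^-1 s^-1 Pa^-0.5); thickness and pressures are assumptions.

from math import sqrt

def sieverts_flux(Q, L, p_feed, p_perm):
    """H2 flux [mol m^-2 s^-1] through a Pd membrane of thickness L [m]."""
    return Q / L * (sqrt(p_feed) - sqrt(p_perm))

# Example: Q = 4e-8 mol m^-1 s^-1 Pa^-0.5 (mid-range of the Pd-Y values),
# a 4 um film, 10 bar feed and 1 bar permeate H2 partial pressure.
J = sieverts_flux(Q=4e-8, L=4e-6, p_feed=10e5, p_perm=1e5)
print(f"H2 flux: {J:.2f} mol m^-2 s^-1")
```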
In this sense, polymer blending, pyrolysis (thermal annealing) of polymer precursors to obtain carbon membranes, and doping with inorganic fillers stand out as routes to address hydrogen purification for fuel cell applications. 91 Acharya et al. 92 analyzed the performance of polysulfone (PSF)/polycarbonate (PC) membranes for the separation of H2/CO2 mixtures. H2 permeability increased with PC content (from 13.5 to 25 Barrer at 50 wt % PSF/PC), while the selectivity decreased with PC content (from 2.52 to 1.17 at 50 wt % PSF/PC) relative to pristine PSF. Moreover, Matrimid polyimide membranes have been widely used in the synthesis of polymer blends for hydrogen purification. 93−96 The influence of pyrolysis of Matrimid blends to obtain carbon membranes was reported by Hosseini et al. 94 Results showed that carbon PBI/Matrimid membranes surpass the Robeson upper bound for hydrogen separation from nitrogen, carbon dioxide, and methane.
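A Robeson upper bound is a straight line in log permeability vs log selectivity coordinates, i.e. P = k·α^n for a given gas pair. The sketch below shows how a membrane datapoint is checked against such a bound; the constants k and n are placeholders chosen only to illustrate the calculation and should be taken from Robeson's compilation for the actual H2/N2 bound.

```python
# Hedged sketch: checking a membrane datapoint against a Robeson-type upper
# bound of the form  P = k * alpha**n  (a straight line on log-log axes).
# The constants k and n below are PLACEHOLDERS for illustration only;
# consult Robeson (2008) for the real H2/N2 upper-bound parameters.

def above_upper_bound(permeability_barrer, selectivity, k, n):
    """True if the (selectivity, permeability) point lies above the bound."""
    bound = k * selectivity ** n   # permeability on the bound at this selectivity
    return permeability_barrer > bound

# Example with placeholder bound parameters (k in Barrer, n < 0), applied to
# the PEK-C carbon membrane datapoint quoted below (5260 Barrer, H2/N2 = 142):
k_placeholder, n_placeholder = 1.0e5, -1.5
print(above_upper_bound(5260.0, 142.0, k_placeholder, n_placeholder))
```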
Carbon molecular sieve (CMS) membranes are produced by pyrolysis of polymeric precursors. The degradation of the polymeric chains leads to the formation of porous structures (<0.6 nm) which increase the selectivity through the molecular sieve mechanism. The selection of the polymeric precursor and the operating variables of the thermal treatment determine the membrane structure and the separation performance. Lei et al. 97 studied the separation performance of carbon hollow fibers from a cellulose precursor. The membranes were fabricated by the dry-wet spinning process and carbonized at 550 to 850 °C. The best overall performance was a H2/CO2 selectivity of 83.9 with a H2 permeance of 148.2 GPU (Tpyrolysis: 850 °C); raising the pyrolysis temperature increased the selectivity 4-fold while decreasing the permeance 3.6-fold. Xu et al. 98 prepared CMS by the pyrolysis of phenolphthalein-based cardo poly(arylene ether ketone) (PEK-C) at 700 °C. The membranes showed high H2 permeability (5260 Barrer) and selectivity (H2/N2: 142, H2/CH4: 311, H2/CO: 75). In addition to traditional polymer precursors, graphene-based membranes have gained attention in recent years. Since defect-free graphene is impermeable to all gases, single-layer studies focus on the development of different techniques (UV-oxidative etching or ion beam milling) to create subnanometer pores that can act as gas transport channels. On the other hand, multilayer graphene membranes deliver high performance and simpler manufacturing processes to cope with the bottlenecks of single-layer membranes. 99 Li et al. 100 developed a thin graphene oxide multilayer (9 nm) supported on alumina by vacuum filtration. The membranes were tested with binary hydrogen mixtures (50/50 vol % H2/CO2 and 50/50 vol % H2/N2) and exhibited high H2/CO2 (3400) and H2/N2 (1000) selectivity and flux (H2 permeance 300 GPU) at 20 °C. Moreover, the multilayer configuration allows the manufacturing of hollow fiber membranes, facilitating industrial applications. In this sense, the synthesis of graphene membranes (320 nm selective layer) supported on alumina hollow fibers was studied by Huang et al. 101 However, graphene membranes show a decrease in selectivity in humid atmospheres. Since graphene is a hydrophilic material, the tendency of water vapor to condense on the surface or inside the pores leads to a significant reduction of the separation performance. 102 In this sense, an interesting approach was reported by Huang et al. 103 Positively charged nanodiamonds were incorporated into the graphene oxide layers. The results showed that the graphene/nanodiamond membrane retains up to 90% of its H2 selectivity in an aggressive humidity test. Inorganic fillers such as zeolites and metal organic frameworks (MOFs) have received great attention in the last decades to improve the hydrogen selectivity delivered by pristine polymers. Mixed matrix membranes (MMMs) combine the molecular sieve mechanism due to the filler microstructure with an increase in the polymer free volume, which results in an increase in hydrogen selectivity and permeability while avoiding the pyrolysis treatment. The effectiveness of MMMs relies on the pore size of the filler and its compatibility with the polymer. In this sense, zeolites are microporous crystalline aluminosilicates which have been used in a wide range of applications. 104
Regarding hydrogen purification, zeolites with a pore size intermediate between the kinetic diameters of H2 (2.89 Å) and CO2 (3.3 Å) are highly desirable. Among the studies reported in the literature, the use of zeolites 4A and 3A as fillers provides the highest increase in selectivity. 105−109 Ahmad et al. 108 showed an increase in H2/N2 selectivity of 37% when 25 wt % of zeolite 4A was added to polyvinyl acetate. Khan et al. 109 found a 2.3-fold increase in H2/CO2 selectivity with 40 wt % of zeolite 3A incorporated into polysulfone acrylate membranes. ZIFs (zeolitic imidazolate frameworks), a subset of MOFs which easily interact with polymers and facilitate hydrogen permeation flux, have also been investigated as fillers. 110 Addition of ZIF-8 to different polymers provides higher H2/N2, H2/CH4, and H2/CO selectivity, while the selectivity for binary H2/CO2 mixtures only slightly increases compared to the pristine Matrimid polymer, because the pore size of ZIF-8 (3.4 Å) lies between the kinetic diameters of H2 and CO2 and those of the bulkier compounds N2 (3.64 Å), CO (3.76 Å), and CH4 (3.8 Å). 111−113 Besides, Diestel et al. 111 reported an increase in H2/CO2 selectivity with ZIF-90 in the Matrimid polymer matrix. Overall, according to the reported literature, the use of inorganic fillers can increase both the selectivity and the permeability (a first-order estimate of this combined effect is sketched after this paragraph). Although promising, mixed matrix membranes must face challenges and further development before the technology can be scaled up and industrialized. The filler-to-polymer ratio requires further investigation and optimization. Ratios up to 35 wt % are recommended because higher ratios can lead to weaker structures and lower selectivity due to an excessive increase in the free volume, which results in higher permeabilities of bulky compounds such as N2, CH4, and CO. Moreover, the scale-up of membrane technology is based on the hollow fiber configuration and multistage membrane systems. In this sense, further investigation of the manufacturing of hollow fiber mixed matrix membranes together with the design and optimization of multistage membrane systems for hydrogen recovery from COG is required. In addition, the modularity of membrane technology has resulted in hybrid configurations with PSA with the aim of reducing the cost of producing high-purity hydrogen. In this sense, the selection of the configuration (PSA-Membrane, Membrane-PSA, Membrane-PSA-Membrane) and the optimization of the operating parameters are the main challenges that must be addressed. Li et al. 114 compared the performance of PSA-Mem and Mem-PSA with conventional PSA for the purification of hydrogen from coal gasification syngas (62.57% H2, 31.61% CO2, 4.33% N2, 1.12% CO, and 0.37% CH4). Results showed a 40% increase in the hydrogen recovery of the PSA unit in hybrid configurations for the production of high purity H2 (99.98%). Although hybrid systems allow an increase in the recovery of the process, the selection of the configuration must meet the product specifications and financial profitability. A technical and economic analysis of hybrid separation processes was carried out by Lin et al. 115 The study evaluates the separation of a H2-N2 mixture from the decomposition of ammonia with PSA, membrane, and hybrid processes. Results showed that hybrid configurations with more stages, such as Mem-PSA-Mem, increase the energy consumption.
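The first-order estimate mentioned above is commonly obtained from the Maxwell model, which predicts the effective permeability of a mixed matrix membrane from the polymer and filler permeabilities and the filler volume fraction. The input permeabilities below are assumed values chosen only to illustrate the trend of rising H2 permeability and selectivity with loading, not measurements for any membrane discussed in this review.

```python
# Hedged sketch: the Maxwell model, a common first estimate of the effective
# permeability of a mixed matrix membrane from the continuous polymer phase
# (Pc), the dispersed filler (Pd) and the filler volume fraction phi.
# All input permeabilities are illustrative assumptions.

def maxwell_permeability(Pc, Pd, phi):
    """Effective MMM permeability (same units as Pc, Pd); phi in [0, ~0.35]."""
    num = Pd + 2.0 * Pc - 2.0 * phi * (Pc - Pd)
    den = Pd + 2.0 * Pc + phi * (Pc - Pd)
    return Pc * num / den

# Illustrative H2/CO2 case: a filler that is fast for H2 and slow for CO2
Pc_h2, Pc_co2 = 20.0, 8.0     # Barrer, assumed polymer values
Pd_h2, Pd_co2 = 200.0, 2.0    # Barrer, assumed filler values
for phi in (0.0, 0.2, 0.35):
    Ph2 = maxwell_permeability(Pc_h2, Pd_h2, phi)
    Pco2 = maxwell_permeability(Pc_co2, Pd_co2, phi)
    print(f"phi={phi:.2f}  P_H2={Ph2:.1f} Barrer  selectivity={Ph2/Pco2:.1f}")
```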
On the other hand, since high-purity hydrogen is obtained in the PSA unit, hybrid configurations in which the PSA unit is placed before the membrane unit are recommended from an energy efficiency point of view. The tail gas from PSA is fed to the membrane unit, and the permeate stream is recycled back into the PSA unit. This design decreases the stream flow rate through the membrane module, which reduces the energy consumption in the compression stage. PSA-Mem delivers the lowest cost ($4.31) for the separation of high purity hydrogen (99.97%).
Finally, dense ceramic membranes have become a hot topic as novel membranes for hydrogen purification. The transport mechanism involves the following steps: i) H2 adsorbs onto the membrane surface and dissociates into protons and electrons, and ii) the protons and electrons diffuse to the other side of the membrane, where they recombine to H2. Theoretically, the hydrogen selectivity of mixed proton−electron conducting membranes is 100%, as in the case of Pd membranes. Since ceramic membranes are less expensive and have a greater resistance to H2S, CO, and CO2 atmospheres, they are well-positioned for the purification of hydrogen at high temperatures such as those employed in membrane reactors. Nevertheless, the commercialization of proton−electron conducting membranes is still hampered by insufficient stability in long-term operation, low proton and electron conductivities which lead to low H2 flux, and limited fundamental knowledge of the membrane performance. 116,117 Thus, research focuses on the development of membranes containing electron and proton conducting phases, doping of the membranes, and the investigation of novel materials such as La2Ce2O7 oxides. Since dense ceramic membranes are still in their early days, the open literature focuses on the characterization of hydrogen flux by pure gas experiments; thus, studies on hydrogen separation from multicomponent gas mixtures are lacking. A comprehensive review of future trends and a summary of hydrogen fluxes in dense ceramic membranes can be found in Tao et al. 118

COKE OVEN GAS CHEMICAL CONVERSION

The separation processes discussed above also produce a methane-rich byproduct stream which could be burnt as fuel. In this sense, upgrading techniques such as reforming or partial oxidation of COG provide syngas from the reaction of methane. Then, hydrogen is obtained by means of the water-gas-shift (WGS) reaction and a downstream purification step such as PSA. 119 Thus, hydrogen recovery from COG by separation steps is complemented by the hydrogen obtained from methane conversion. Nevertheless, given the high value of syngas as a feedstock in manufacturing processes, the chemical conversion to H2 in WGS reactors is not always considered an option. Moreover, all the proposed methods are based on catalytic conversion in fixed bed or fluidized bed reactors, which requires a previous cleaning process with the aim of preventing poisoning of the catalyst.

3.1.1. Steam and Dry Reforming. Steam reforming (SR) is the main process for syngas and hydrogen production (Figure 5). The process consists of a heterogeneously catalyzed reaction of the methane fraction of COG with high temperature steam (700−1000 °C, 15−30 bar) to obtain syngas with a H2/CO ratio of ideally 3/1 (reaction 1, CH4 + H2O ⇌ CO + 3H2). 120 Among the catalysts, Ni stands out from the noble metals (Ru, Rh, Pd, Ir, or Pt) due to its lower price. Nevertheless, Ni delivers lower activity (≈94% CH4 conversion) and lower resistance to deactivation by carbon deposition or sulfur poisoning. 121−123 Moreover, the selection of the catalyst morphology depends on the operating conditions. Large particles with thick walls, such as six-hole cylinders, offer high resistance to temperature and mechanical stress. 27 After steam reforming, an additional amount of hydrogen can be obtained from syngas by the water-gas-shift reaction (reaction 2, CO + H2O ⇌ CO2 + H2).
Commonly, the WGS reaction takes place in two reactors. First, the bulk of the carbon monoxide is converted until reaching equilibrium in a high-temperature reactor at 300−350 °C with iron oxide-based catalysts. Then, the outlet stream is cooled down to 200 °C and further converted (90−99% CO conversion) using a copper-zinc catalyst supported on alumina or silica. 122 Temperature, pressure, and the steam-to-carbon ratio (S/C) are the main operating variables of the process. The production of high purity hydrogen from COG requires advanced separation-reaction systems (sorption-enhanced (SE) or membrane-assisted (MA) steam reforming reactors), since the initial content of hydrogen and carbon monoxide in COG induces unfavorable reactions such as the reverse water-gas-shift (RWGS). The main goal of separation-reaction systems is to increase the reactant conversion by removing reaction products from the reactor, which shifts the equilibrium to higher conversions (up to 35% higher than conventional reactors). 125 In this sense, while hydrogen is selectively recovered by membranes, sorption-enhanced systems rely on the capture of the carbon dioxide produced in the WGS reaction on an adsorption bed. In addition, membrane reactors allow operation at lower reaction temperatures, reducing capital and operational costs through lower energy consumption and materials costs. Moreover, this introduces the development of new strategies of heat integration for the off-gases of the processes. 126,127 Membrane reactors generally present a shell-and-tube configuration in cocurrent flow. The catalyst may be placed inside the tube or in the annulus, while the permeate flows in the remaining section. 125,128 A schematic representation of the configuration is shown in Figure 6.
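Since the WGS reaction conserves mole numbers, its equilibrium depends on temperature and composition but not on pressure, which motivates the two-temperature reactor arrangement described above. A minimal sketch using Moe's classic approximation for the equilibrium constant (an assumption; other correlations exist) and an illustrative steam-rich feed:

```python
# Hedged sketch: equilibrium CO conversion of the water-gas-shift reaction,
#   CO + H2O <=> CO2 + H2   (mole numbers conserved, so pressure drops out),
# using Moe's classic approximation for the equilibrium constant,
#   K(T) ~ exp(4577.8 / T - 4.33)   with T in kelvin.
# The feed composition below is an illustrative assumption.

from math import exp

def k_wgs(T):
    """Approximate WGS equilibrium constant (Moe correlation), T in K."""
    return exp(4577.8 / T - 4.33)

def equilibrium_conversion(T, n_co=1.0, n_h2o=2.0, n_co2=0.0, n_h2=0.0):
    """Solve K = (n_co2+x)(n_h2+x) / ((n_co-x)(n_h2o-x)) for x by bisection."""
    K = k_wgs(T)
    lo, hi = 0.0, min(n_co, n_h2o) - 1e-9
    for _ in range(80):
        x = 0.5 * (lo + hi)
        q = (n_co2 + x) * (n_h2 + x) / ((n_co - x) * (n_h2o - x))
        lo, hi = (x, hi) if q < K else (lo, x)   # q grows with x
    return x / n_co

for T in (473.0, 623.0):   # ~200 C (low-T shift) and ~350 C (high-T shift)
    print(f"T={T-273:.0f} C  K={k_wgs(T):.1f}  CO conversion={equilibrium_conversion(T):.3f}")
```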
The selection of the operating variables of MA reactors must meet both the reaction and separation requirements. In this sense, the temperature ranges between 400 and 600 °C, which enhances reactant conversion and hydrogen permeation, reducing the energy consumption compared to conventional SR reactors. Regarding pressure, reaction and separation show competing effects: while the conversion of the reactants is disfavored by an increase in pressure, the driving force for gas transport is enhanced. Thus, mild pressures (1−10 bar) are commonly used in MA reactors. 125 The shift from conventional reforming to new separation-reaction systems can be observed both in patented processes and in the open literature for hydrogen production from COG. Regarding the registered technology, metallic membrane reactors have been patented in the past decade. 129,130 On the other hand, studies of steam reforming of COG are scarce to the best of our knowledge since the process is still at its early stages. The performance of a separation-reaction system for the production of hydrogen from COG was evaluated by Chen et al. 131,132 High purity hydrogen (>99.9 vol %) was obtained in a MA-SE-SR process from COG at 560 °C with an S/C ratio of 4. Calcined dolomite was used as the adsorbent for carbon dioxide capture. In another study, 134 high purity hydrogen was produced with nearly full methane conversion (99%) in a protonic membrane reformer (PMR) at 800 °C, and an almost pure carbon dioxide stream was also obtained. Furthermore, the modeling of the process showed that the PMR requires one-third of the electricity and two-thirds of the natural gas of a traditional MA reactor. Dry reforming (DR), which consists of the reaction of methane and carbon dioxide (reaction 3, CH4 + CO2 ⇌ 2CO + 2H2), can be promoted during steam reforming.
Dry reforming has the advantage of using both greenhouse gases for syngas production with a low H2/CO ratio (1/1). Nevertheless, the reaction requires high temperatures (>800 °C) because of its endothermic character. Thus, the open literature focuses on the enhancement of the catalyst activity. Li et al. 135 reported increased activity over monometallic catalysts and resistance to carbon deposition for a Ni-Co bimetallic catalyst, with 70.36% and 86.46% conversion of methane and carbon dioxide, respectively, at 700 °C. The influence of the catalyst on the reaction was examined by Angeli et al. 136 Their results showed that higher temperatures (1100 °C) are required to carry out the dry reforming of BFG and COG in the absence of a catalyst (78.5% CO2 conversion and 95% CH4 conversion). Combined steam and dry reforming reactions were studied by Kim et al. 137 Lower carbon dioxide (25−34%) and methane conversion (81−87%) were observed compared to dry reforming, while a H2/CO ratio slightly higher than 3 was obtained. Although the reforming reaction requires separation-reaction systems or a downstream hydrogen purification step to meet fuel cell requirements, this alternative is well positioned to increase the recovery of hydrogen from COG. Moreover, a reforming reactor can also be placed after the separation process by membranes or PSA to further transform the methane-rich stream into hydrogen.
Regarding syngas production, the H2/CO ratio is determined by the selection of the reforming process. While the higher ratios obtained from steam reforming (≈3) are suitable when syngas is used as a reducing agent in iron production, the lower ratios required for methanol production (≈2) could be obtained from dry reforming (≈1) combined with steam reforming, or by partial oxidation (PO) of COG.
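Because steam and dry reforming bracket the methanol-grade ratio, the two reactions can in principle be balanced against each other. The following sketch is pure stoichiometry (complete conversion, no water-gas-shift) and solves for the split of methane between the two routes that yields a target H2/CO ratio.

```python
# Hedged sketch: splitting methane between steam reforming
# (CH4 + H2O -> CO + 3 H2) and dry reforming (CH4 + CO2 -> 2 CO + 2 H2)
# so that the combined syngas hits a target H2/CO ratio. Stoichiometry only,
# neglecting the water-gas-shift and incomplete conversion.

def h2_co_ratio(a):
    """a = fraction of methane sent to steam reforming (rest to dry reforming)."""
    h2 = 3.0 * a + 2.0 * (1.0 - a)   # mol H2 per mol CH4
    co = 1.0 * a + 2.0 * (1.0 - a)   # mol CO per mol CH4
    return h2 / co

def split_for_target(R):
    """Solve (2 + a)/(2 - a) = R analytically for the SR fraction a."""
    return 2.0 * (R - 1.0) / (R + 1.0)

a = split_for_target(2.0)            # methanol-grade syngas, H2/CO = 2
print(f"SR fraction: {a:.3f}  ->  H2/CO = {h2_co_ratio(a):.2f}")
```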
3.1.2. Partial Oxidation. The partial oxidation (PO) of methane, unlike steam and dry reforming, is an exothermic process that does not require an external source of energy (reaction 4, CH4 + 1/2 O2 → CO + 2H2). 120 Commonly, Ni-based catalysts are used to promote the reaction rate and selectivity.
According to the stoichiometry of reaction 4, ideally a 2:1 H2/CO ratio is obtained by the partial oxidation reaction; this fulfills the requirements for methanol production. Hydrogen can then also be obtained by means of the water-gas-shift reaction followed by a purification step. The main challenge in partial oxidation is the supply of high purity oxygen. Conventionally, pure oxygen has been produced by the cryogenic distillation of air at the expense of high energy consumption. In this sense, attention has been paid to oxygen-selective ceramic membranes, which integrate oxygen separation and the PO reaction in a single stage; this integration provides a significant reduction in energy demand and capital investment. This approach is found in the open literature on hydrogen production by partial oxidation of COG. 138−144 Furthermore, an oxygen-permeable reactor has been patented for the partial oxidation of COG. 145 Perovskite-based ceramic membranes (BaCo0.7Fe0.2M0.1O3−δ, known as "BCFM"), where M is a transition metal such as Nb, Ta, or Zr and δ is the concentration of oxygen vacancies in the structure, are widely studied. The performance of the membrane reaction system was studied by Yang et al. 143 and Zhang et al. 141 The methane conversion and oxygen flux range from 90 to 95% and 15−17 mL cm−2 min−1, respectively, at 875 °C. Moreover, Cheng et al. 139 studied the influence of the transition metal on the stability of the perovskite membrane. In spite of a slight increase in permeation flux with Zr, it was found that BCFZ membranes have lower structural stability in a CO2 atmosphere. The partial oxidation technology has also been patented for the production of syngas from COG. 146,147 Thus, according to the state-of-the-art literature, research should be focused on the development of oxygen-selective ceramic membranes with higher stability and permeation flux to offer a more competitive process.

3.2. Methanation. Methanation converts carbon monoxide and carbon dioxide with hydrogen into methane; thus, COG can be used to provide the reagents for the methanation reaction. Methanation has recently gained attention in power-to-gas applications, in which excess hydrogen is used for synthetic methane production from CO2 toward the reduction of fossil fuel consumption and carbon dioxide emissions. 149 Conventionally, methanation is a catalytic reaction which is carried out in adiabatic reactors. Although methanation was discovered at the end of the 19th century, it still remains a new alternative among the recovery routes of COG. In this sense, a methanation process has been patented with in-series adiabatic reactors. 150,151 Nevertheless, the literature review shows that there are two main obstacles to be overcome in methanation: i) catalyst performance and ii) temperature control. Since it is a catalytic reaction, many studies focus on increasing the catalyst activity and the deactivation resistance. In this sense, bifunctional Ni-based catalysts have been widely reported. Lu et al. 152 observed the enhancement of the activity and stability of the Ni catalyst with zirconia (Ni-Zr), reaching 100% and 80% conversion of CO and CO2, respectively, at 450 °C. Moreover, Ni-Ce catalysts were tested by Quin et al. 153 The results showed complete conversion of carbon monoxide and carbon dioxide at 260 °C. On the other hand, the exothermic character of the reaction together with the high concentration of reactants results in a significant temperature increase in the reactor.
Thus, heat exchangers should be coupled to the adiabatic reactors to control the temperature of the process. 154 The comparison between conventional adiabatic reactors and nonadiabatic reactors was studied by Quin et al. 148 Nonadiabatic reactors delivered higher production rates (20%) and lower costs (14%) due to the reduction of the necessary equipment. Figure 7 shows an illustration of the methanation process of COG.
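The scale of the temperature-control problem can be seen from the adiabatic temperature rise of CO methanation. The sketch below uses the standard reaction enthalpy (≈ −206 kJ mol−1) with an assumed feed CO fraction and mean heat capacity, and neglects the change in mole numbers.

```python
# Hedged sketch: adiabatic temperature rise of CO methanation
# (CO + 3 H2 -> CH4 + H2O, dH ~ -206 kJ/mol), illustrating why heat
# exchangers must be coupled to adiabatic methanation reactors.
# Feed CO fraction and mixture heat capacity are illustrative assumptions;
# the change in mole numbers during reaction is neglected.

def adiabatic_rise(y_co, conversion, dh_kj=206.0, cp=35.0):
    """Temperature rise [K] per mole of feed gas.
    y_co: CO mole fraction; cp: mean molar heat capacity [J mol^-1 K^-1]."""
    return y_co * conversion * dh_kj * 1e3 / cp

# e.g. a COG-like feed with ~6 vol % CO fully converted:
print(f"adiabatic dT ~ {adiabatic_rise(0.06, 1.0):.0f} K")
```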
COKE OVEN GAS COMBUSTION TO ENERGY
Among nonstandard gaseous fuels, COG has a high heating value (16−20 MJ m−3), which allows the gas to be burnt at a normal temperature, while blast furnace gas, with one-tenth the heating value of natural gas (3−5 MJ m−3), requires higher temperatures. 155,156 In this sense, raw COG, which is sometimes flared off during periods of lower demand, has commonly been fed to furnaces and coke oven batteries, a low-cost reuse standard. However, the hydrogen and methane concentration in COG has given rise to unprecedented recovery routes, such as feedstock for cogeneration or internal combustion engines, with the aim of supplying power and heat to the coke plant in the iron and steel industry and reducing the energy demand (Figure 8). 157 Regarding cogeneration, modeling and simulation studies are focused on the optimization of the allocation of exhaust gases in the plant. 157,158 The optimization of the utilization of COG and LDG in the iron and steel plant was studied by García et al. 157 Mixed integer linear programming (MILP) was used as a tool for the allocation of the streams. Results showed a 16.9% increase in the benefits with the MILP model, since it allows the optimization of the performance of the cogeneration plant, while human decision-making is only focused on the reduction of natural gas consumption. On the other hand, COG can be fueled in two types of internal combustion engine devices: turbines and reciprocating engines. Some modern gas turbines, e.g., the GE 6B gas turbine, are fuel flexible and can be fed by liquid or gaseous fuels, such as COG. 159−161 Gas turbines can burn COG with compressed air, propelling the rotation of the shaft with the combustion gases and producing electricity with a generator connected to the same shaft. To achieve a higher system efficiency, a combined-cycle gas turbine (CCGT) can be used, in which the exhaust gases heat water through a heat recovery steam generator (HRSG). 162,163 The steam produced is then introduced into a steam turbine connected to the same or another generator. Therefore, gas turbines are a very efficient and high-power density technology; however, they are expensive and require highly specialized maintenance. In contrast, reciprocating internal combustion engines (ICEs) are easily scalable to the plant requirements, are cheaper than gas turbines, and require less specialized maintenance. In order to be fueled with gaseous fuels, a preliminary conditioning is required to tackle the combustion differences from conventional liquid fuels (diesel and gasoline), optimizing the operating conditions. The necessary modifications in ICEs are related to design: i) higher capacity injectors, due to the lower density of hydrogen-rich mixtures, which results in larger fuel volumes; ii) spark plugs and better cooling systems able to manage higher combustion temperatures; and iii) other minor instrumentation, such as a wideband lambda sensor to operate at leaner mixtures. 164,165 Two main injection configurations are usually employed. Port-fuel injection, which requires low-pressure injectors, provides a more homogeneous air-fuel mixture and increases the combustion efficiency, but a higher backfire tendency and lower power output due to the lower volumetric efficiency are obtained. 166,167 On the other hand, direct fuel injection into the cylinder increases the power performance because of the higher mass of air induced, and richer air-fuel mixtures can be employed without the risk of backfire.
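The quoted heating value range can be rationalized from the gas composition alone: the mixture LHV is essentially the mole-fraction-weighted sum of the component heating values. In the sketch below the component LHVs are standard handbook values, and the composition is only indicative of cleaned COG.

```python
# Hedged sketch: volumetric lower heating value of a COG-like mixture as the
# mole-fraction-weighted sum of component heating values. Component LHVs are
# standard handbook values (MJ per normal m^3); the composition is an
# illustrative assumption for cleaned COG.

LHV = {"H2": 10.8, "CH4": 35.8, "CO": 12.6, "N2": 0.0, "CO2": 0.0}  # MJ Nm^-3

def mixture_lhv(composition):
    """composition: mole fractions summing to ~1; returns LHV in MJ Nm^-3."""
    return sum(y * LHV[gas] for gas, y in composition.items())

cog = {"H2": 0.58, "CH4": 0.25, "CO": 0.06, "N2": 0.06, "CO2": 0.05}
print(f"COG LHV ~ {mixture_lhv(cog):.1f} MJ/Nm3")   # lands in the 16-20 range
```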
Nevertheless, high-pressure injectors are required, and higher thermal NOx should be controlled as higher combustion temperatures are reached. 166 Studies of ICEs fueled with gaseous fuels have grown exponentially in the last decades. A tradeoff between higher efficiency and lower output power when operating at lean air-fuel mixtures has been found in hydrogen internal combustion engines. 168 In addition, leaner mixtures avoid abnormal combustion and reduce NOx emissions, especially at optimum spark advance. 169 Spark advance also influences the maximum brake torque, becoming an important factor for the optimization of the operating conditions, as observed by Sopena et al. 165 In order to increase the power performance while reducing knocking at richer air-fuel mixtures, blends of H2 and CH4 can be used as gaseous fuels. In this sense, a wider operating range can be employed, limiting the combustion temperature and duration. 170−172 Thus, cleaned COG, which is mainly composed of hydrogen and methane as shown in Table 1, is a very interesting industrial waste stream whose energy content can be harnessed. Different studies of the combustion of COG or similar gas compositions in internal combustion engines are found in the literature. Regarding compression ignition engines, COG with a pilot amount of diesel has been tested and compared with producer gases with different H2 percentages and with pure H2 in a supercharged dual-fuel engine by Roy et al. 173,174 Higher H2 content increased the efficiency but reduced the output power and the emissions, as leaner air-fuel mixtures were required to avoid knock; an important influence of the air-to-fuel ratio and the timing of the pilot diesel injection was observed.
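The air-to-fuel ratio discussion can be anchored with the stoichiometric air requirement of a COG-like fuel, computed from the oxygen demand of each combustible component. The composition below is an illustrative assumption, and lambda is simply the ratio of actual to stoichiometric air.

```python
# Hedged sketch: stoichiometric air requirement and relative air-fuel ratio
# (lambda) for a COG-like fuel, from the oxygen demand of each combustible
# (H2 + 1/2 O2, CH4 + 2 O2, CO + 1/2 O2). Composition is an illustrative
# assumption for cleaned COG; air is taken as 21 vol % O2.

O2_DEMAND = {"H2": 0.5, "CH4": 2.0, "CO": 0.5, "N2": 0.0, "CO2": 0.0}

def stoich_air(composition):
    """m^3 of air per m^3 of fuel for complete combustion."""
    o2 = sum(y * O2_DEMAND[gas] for gas, y in composition.items())
    return o2 / 0.21

cog = {"H2": 0.58, "CH4": 0.25, "CO": 0.06, "N2": 0.06, "CO2": 0.05}
afr_st = stoich_air(cog)
lam = 6.0 / afr_st          # e.g. an engine operated at 6 m3 air per m3 fuel
print(f"stoichiometric AFR ~ {afr_st:.1f} m3/m3, lambda = {lam:.2f} (lean)")
```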
In the case of spark ignition engines, gas mixtures similar to COG were tested and compared with other synthesis gases of different compositions. 175,176 Results showed good combustion stability of COG and suitable antiknock properties of CH4, CO, and CO2. 176 In addition, knocking was reduced similarly by diluting the fuel mixture by means of EGR or by leaning the air-to-fuel mixture with an excess of air. 175 Comparing a methanized COG mixture of 55 vol % H2 and 45 vol % natural gas (NG) with NG and with a mixture of 30 vol % H2 and 70 vol % NG, the methanized COG mixture gave higher efficiency and NOx emissions but lower torque and low emissions of CO and HC. 177 An availability analysis (maximum useful work that can be produced from a system during the interaction to a state of thermal, mechanical, and chemical equilibrium with its environment) for COG, methane, and a mixture of 80 vol % H2 and 20 vol % CH4 was carried out, delivering the highest thermal efficiency and the lowest specific fuel consumption with COG. 178 Additionally, it was found that the irreversibility could be reduced by increasing the compression ratio and delaying the spark timing. On the other hand, Ortiz-Imedio et al. 179 compared hydrogen, methane, and a synthetic COG mixture, observing a widening of the air-fuel ratio operating range with COG and obtaining lower specific NOx, hydrocarbon, CO, and CO2 emissions. Moreover, a computational fluid dynamics (CFD) simulation showed that intermediate spark advance values for COG reduced the combustion pressure and temperature within the cylinder, decreasing NOx emissions and the wall heat transfer. In this way, COG generated the highest power values compared to CH4 and H2 at lean air-to-fuel mixtures. 180 García et al. 181 analyzed the environmental impact of the energy recovery of waste streams in steel production by means of the life cycle analysis tool. Coke oven gas and Linz-Donawitz converter gas were evaluated as supplementary fuels to natural gas in different scenarios that were defined according to the energy contribution of natural gas and off-gases. The authors reported environmental benefits in human toxicity (evaluation of compounds toxic to human health), ionizing radiation (damage to human health and ecosystems associated with the emission of radionuclides), fossil and ozone depletion indicators (depletion of natural fossil fuel resources and emissions to air that cause the destruction of the stratospheric ozone layer, respectively), and natural gas savings (120 Nm3 MWh−1 with 100% of energy production from COG and Linz-Donawitz gas) in all the analyzed scenarios. 182 Furthermore, it was demonstrated that the higher the energy recovery from waste gases, the greater the benefit. In conclusion, the high energy content of coke oven gas can be harnessed in a controlled way through its combustion in both gas turbines and reciprocating internal combustion engines. A wider operating range of air-to-fuel ratios compared to H2 and CH4 can be employed, taking advantage of the individual benefits of its main constituents. High thermal efficiency and output power values are obtained, while lower hydrocarbon emissions compared to conventional fuels and lower NOx emissions than pure H2 are generated. Therefore, COG as an industrial waste stream is a very interesting alternative for energy production in the iron and steel industry, reducing the energy demand from more polluting fossil fuels.
ENVIRONMENTAL ANALYSIS OF THE VALORIZATION ROUTES
Among coke oven gas valorization routes, the production of electricity and heat is positioned as the cheapest alternative. Nevertheless, the sustainability of the valorization routes must be addressed according to economic and environmental aspects. In this sense, the emissions of carbon dioxide are the main bottleneck in the valorization of COG. Since the production of iron and steel is an energy intensive industry, the selection of the upgrading technique should be focused on the reduction of greenhouse gas emissions. A comparison of the environmental performance of the valorization routes of COG was performed by Zhang et al. 183 The study evaluated the energy consumption and carbon dioxide emissions of the alternatives that have been discussed in previous sections (Table 6).
As can be seen in Table 6, the environmental performance of hydrogen purification stands out compared to the cogeneration of heat and electricity, which is currently the most economic option owing to its low energy consumption. Moreover, the recovery of hydrogen from COG has been compared to alternative hydrogen production routes in recent studies. 184,185 The global warming potential of hydrogen production from COG is in the range of natural gas reforming (10−13 kg CO2-eq kg H2−1), and only water electrolysis with renewable energy sources achieves a lower value. Although the recovery of hydrogen from COG must face economic drawbacks, the growth of the hydrogen economy together with its environmental performance could position this alternative at the head of COG valorization techniques in the midterm.
CONCLUSIONS AND FUTURE PROSPECTS
Among the exhaust gases of the iron and steel industry, COG stands out as a promising sustainable hydrogen source. Although raw COG is used as a supplementary fuel, the high production rates in the iron and steel industry result in surplus COG which is usually burnt off in flares. Thus, COG as a hydrogen source, after the appropriate conditioning, has attracted much attention due to its environmental and economic potential toward sustainability and a hydrogen-based economy. In this sense, two main pathways are distinguished in the recovery of hydrogen from COG: i) separation/purification processes and ii) chemical conversion of the methane and carbon dioxide contained in COG combined with separation/purification steps. Furthermore, the hydrogen and methane composition of COG positions it as a suitable fuel for H2-fueled internal combustion engines or gas turbines in stationary applications to supply electricity and heat to the iron and steel plant. Regarding hydrogen recovery, the selection of the alternative route depends on the purity of the hydrogen product, capital investment, and operating costs. According to the literature, hybrid separation-reaction systems are well positioned to maximize the hydrogen recovery from COG. Since the initial hydrogen content of COG disfavors the conversion of methane by shifting the equilibrium of the reaction, membrane technology can be placed prior to the conversion step as a first hydrogen recovery stage. Then, the methane-rich stream can be converted to syngas by reforming or partial oxidation and further processed to hydrogen by the water-gas-shift reaction. Finally, the product stream from the WGS reactor (70−75% H2) should be purified by the PSA process to meet fuel cell purity requirements. Thus, hybrid separation-reaction systems allow an increase in hydrogen production since the initial content in COG is augmented by the chemical transformation of methane to hydrogen. Nevertheless, the separation and chemical transformation routes must overcome operating drawbacks to achieve economic feasibility (Table 7). Regarding separation technologies, lower energy consumption in PSA and higher separation performance are required. In this sense, operating the regeneration stage under vacuum conditions allows a reduction of the energy consumption and the capital investment. Regarding membrane technology, the selection of the membrane material depends on the operating conditions. While Pd and proton conducting membranes are the best alternatives for the recovery of hydrogen at high temperatures, such as those employed in membrane reactors, polymeric materials deliver high separation performance at lower operating temperatures, such as in the initial recovery of hydrogen from COG prior to the chemical conversion route. However, polymeric membranes are not able to meet the high purity requirements, hampered by the difficult separation of hydrogen and carbon dioxide. Thus, studies focus on doping (mixed matrix membranes) or conditioning of the membranes (carbon membranes) to increase the separation grade. On the other hand, increases in catalyst activity and deactivation resistance are required in the chemical conversion routes to hydrogen to ensure long-term operation and a reduction of the energy requirements.
Regarding the increase in catalyst activity, bifunctional Ni-based catalysts are widely found in the open literature, while advanced membrane-reaction integrated systems have shown lower energy requirements and capital investment than conventional reaction systems.
Application of single-cell sequencing in human cancer
Abstract
Precision medicine is emerging as a cornerstone of future cancer care with the objective of providing targeted therapies based on the molecular phenotype of each individual patient. Traditional bulk-level molecular phenotyping of tumours leads to significant information loss, as the molecular profile represents an average phenotype over large numbers of cells, while cancer is a disease with inherent intra-tumour heterogeneity at the cellular level caused by several factors, including clonal evolution, tissue hierarchies, rare cells and dynamic cell states. Single-cell sequencing provides means to characterize heterogeneity in a large population of cells and opens up the opportunity to determine key molecular properties that influence clinical outcomes, including prognosis and probability of treatment response. Single-cell sequencing methods are now reliable enough to be used in many research laboratories, and we are starting to see applications of these technologies for characterization of human primary cancer cells. In this review, we provide an overview of studies that have applied single-cell sequencing to characterize human cancers at the single-cell level, and we discuss some of the current challenges in the field.
Introduction
The clinical importance of comprehensive molecular phenotyping of cancer tumours is increasing with the advent of precision medicine [1], which aims to provide tailored treatment to individual patients based on their molecular phenotype. DNA sequencing and RNA sequencing (RNA-seq), together with many other molecular profiling technologies, enable comprehensive molecular phenotyping of tumours and have been applied to characterize many cancer types in projects like The Cancer Genome Atlas [2,3]. Sequence-based molecular phenotyping reveals quantitative information on a multitude of molecular levels, including data on somatic and germ-line single-nucleotide variation, copy number variation (CNV), gene fusions, DNA methylation and gene expression variability. Conventional molecular profiling is based on an average molecular phenotype from a large population of cells (often described as a 'bulk' sample within the context of single-cell studies), which has proven useful in many applications. However, substantial loss of information occurs through averaging over the molecular phenotype of individual cells. Single-cell molecular phenotyping has the capability to generate high-resolution molecular phenotype information and provides means for quantitative analysis of several key properties of tumours, including intra-tumour heterogeneity, cellular composition (cell types), cellular hierarchies and cell states. It is likely that single-cell molecular phenotyping will replace bulk average molecular profiling in many cancer research and clinical applications in the future.
Cancer is a disease with inherent heterogeneity [4][5][6] caused by multiple factors, including intra-tumour evolution, cellular plasticity [6] and multiple sources of stochastic variability (Figure 1). Chromosomal instability, which leads to intra-tumour heterogeneity, is associated with poor patient outcomes [7], and cancer patients with a larger proportion of subclonal mutations have also been observed to have a higher chance of relapse [8]. A key challenge in cancer treatment is detection of rare subpopulations of cells that have the potential to develop resistance to therapy. Such subpopulations of cells can be either subclones or subpopulations of cells that through stochastic processes or cellular plasticity can adapt to changing selective pressure from treatment or other environmental factors.
Single-cell sequencing of tumour cells is improving our ability to characterize intra-tumour heterogeneity, and it is likely that intra-tumour heterogeneity will prove to be clinically relevant in the context of precision medicine [4,5]. Tumour heterogeneity can arise on multiple levels, through clonal evolution and through heterogeneity in cell states that are the effect of dynamic molecular phenotypes, including epigenetic and transcriptomic effects, or a combination of multiple molecular levels. Quantitative characterization of tumour heterogeneity in particular, including detection of rare subclones of cells with possible drug resistance potential, has the potential to be translated to the clinic in the future. There are multiple clinically relevant applications where single-cell sequencing and information on tumour heterogeneity are likely to be of importance, including prediction of treatment response, prognosis, monitoring of disease progression, prediction of treatment effect and detection of emerging drug resistance (Figure 1). Single-cell sequencing can also be applied for molecular phenotyping of circulating tumour cells (CTCs) [9], i.e. cancer cells disseminated into the bloodstream, with samples collected through minimally invasive liquid biopsies.
Single-cell sequencing of primary cancer cells
Development of technologies for single-cell isolation, wholetranscriptome or whole-genome amplification (WGA) together with next-generation sequencing provides the foundation that has enabled the emergence of single-cell sequencing. Generation of single-cell sequencing data from primary human cancer cells can be described through a set of fundamental process steps ( Figure 2): (1) sample acquisition from patient; (2) creation of single-cell suspension; (3) temporary storage; (4) isolation of single cells and library preparation; (5) sequencing; and (6) bioinformatic and statistical analyses. Owing to logistical challenges when working with clinical samples, there are typically delays in sample processing (Step 3). This logistic delay can often be avoided when working with model systems (animals or cell lines), while it remains a reality in many studies based on patient material (e.g. biopsy) that is collected at a clinic.
Sample handling
In studies based on clinical samples, e.g. biopsies or surgically removed tumours, it is essential to ensure that the molecular integrity of the samples is preserved until molecular phenotyping. To accomplish this, samples either have to be processed immediately at the time of collection or a method that allows preservation of the molecular integrity has to be applied. Immediate single-cell sequencing of fresh samples is often challenging to implement because of the separation in physical location between specialized laboratories and the clinic (Figure 2, Step 3). If samples are collected for later molecular phenotyping, a single-cell suspension is generated followed by application of a preservation method compatible with downstream molecular profiling. Evaluation of a few methods for temporary storage of samples for single-cell sequencing applications, including cryopreservation [22] (DNA- or RNA-seq), methanol fixation [23] (DNA- or RNA-seq) and CellSave [24] (DNA sequencing), has recently been reported. Single-cell sequencing of cryopreserved cells [22], as well as methanol-fixed cells [23], revealed high transcriptomic concordance with fresh cells. Recently, a method for preservation of cells for single-cell RNA-seq without chemical crosslinking or freezing [using CellCover (AL Anacyte Laboratories UG) for DNA and RNA preservation] was also applied in a single-cell RNA-seq study [25]. Clinical samples are routinely prepared as formalin-fixed, paraffin-embedded (FFPE) material, which limits the opportunity for single-cell sequencing, especially with respect to RNA sequencing. Martelotto et al. [26] evaluated a method for single-cell whole-genome copy number profiling in FFPE material based on isolation of intact nuclei using fluorescence-activated cell sorting (FACS). Results of this study suggested that CNV profiles from FFPE material can be comparable with single-cell fresh-frozen material [26]. For CTC analysis, either positive or negative selection, or a combination thereof, has to be applied to isolate the CTCs from blood. Liquid biopsies (e.g. blood samples) have to be kept in a state where RNA and DNA are not degraded before molecular phenotyping. In a study evaluating three different available preservatives [K3EDTA, Cell-Free DNA BCT (BCT) and CellSave (Cellsearch)], BCT and CellSave provided the best preservation of CTCs, while BCT provided better preservation of RNA in comparison with K3EDTA [24]. Further development and evaluation of protocols for sample preservation methods compatible with single-cell DNA- and RNA-seq are necessary to enable wider application of single-cell sequencing to characterize clinical samples. Large collaborative efforts, for example 'the human cell atlas' [27], will most likely contribute to the development and systematic evaluation of improved sample handling protocols, which is essential to enable large-scale application of single-cell profiling.
Single-cell isolation
Single-cell sequencing typically requires a suspension of individual cells as starting material. In situations where single cells from solid tissues are to be profiled, dissociation of the tissue into a cell suspension has to be accomplished as a first step, followed by isolation of the individual cells. Techniques for single-cell isolation from cells in suspension have been reviewed extensively before and include FACS (DNA- or RNA-seq), microfluidics (DNA- or RNA-seq), droplet-based capture (RNA-seq), laser capture microdissection (DNA- or RNA-seq) and manual selection (DNA- or RNA-seq) [14,17,28,29]. More recently, a novel microwell-based approach [25] (RNA-seq) and methods based on combinatorial indexing [30,31] (DNA- or RNA-seq) have also been proposed, offering cost-effective high-capacity methods for single-cell isolation and library preparation. The different methodologies differ with respect to fundamental physical principles and the maximum number of cells that can be captured. The choice of method for single-cell isolation depends on the context and objective of the study. Single-cell analysis of CTCs provides an attractive surrogate biopsy of primary or metastatic tumours, as liquid biopsies can be collected in a minimally invasive procedure through a conventional blood sample [32]. CTCs are present in exceptionally low frequency in the blood (∼1 in 10^9 blood cells), making efficient enrichment and capture methods important. Many methods and strategies have been reported for CTC isolation and reviewed elsewhere [19,[33][34][35]]. Cellsearch (Veridex) is one of the most widely applied platforms for CTC enumeration and capture of CTCs [36]. Cellsearch is based on positive selection using antibodies against EpCAM and cytokeratins (positive markers) and against the leukocyte antigen CD45 (negative marker) together with a nuclear dye (4',6-diamidino-2-phenylindole). Cellsearch enrichment together with single-cell isolation using DEPArray (Silicon Biosystems) has been applied in multiple studies [37,38]. Additional CTC enrichment and capture methods include Magsweeper [39], flow cytometry [40], microfluidic devices [41,42], HD-CTC [43], MINDEC [44], Rosettesep (STEMCELL Technologies Inc.), the EPIC CTC platform [45] and CTC iChip [46].
Single-cell sequencing
There are now multiple methods available for DNA and RNA sequencing of single cells. All single-cell sequencing protocols require amplification of the genomic DNA, or of the complementary DNA in the case of RNA-seq, before preparation of sequencing libraries. Single-cell DNA sequencing has proven more challenging than RNA-seq, as each cell contains many RNA molecules but only two copies of genomic DNA. Currently, single-cell RNA-seq is more established than single-cell DNA sequencing, with a more diverse set of methods available. Studies applying single-cell RNA-seq typically include larger numbers of cells (hundreds or even several thousand cells in recent studies) than those that focus on single-cell DNA sequencing.
WGA of the single genome copy is currently necessary for single-cell DNA sequencing, and ideally the amplification procedure should introduce minimal bias and few sequence errors. There are multiple methods for WGA, with different limitations and performance with respect to genome coverage and uniformity. The most commonly applied methods are polymerase chain reaction (PCR)-based degenerate oligonucleotide-primed PCR (DOP-PCR) [47,48], isothermal multiple displacement amplification (MDA) [49] and hybrid methods such as MALBAC [50], together with proprietary methods including GenomePlex WGA4 (Sigma-Aldrich), which is based on PCR amplification of randomly fragmented genomic DNA. The relative performance of these methods has been evaluated [51–53], and commercial kits for single-cell exome sequencing, including AMPLI1, MALBAC, Repli-G and PicoPlex, were evaluated in [54]. WGA methods have also been reviewed from a comparative perspective [14]. Furthermore, Baslan et al. [55] proposed a modified DOP-PCR method with improved performance and cost-effectiveness for single-cell CNV profiling. Zahn et al. [56] described a direct library preparation method for single-cell genome sequencing for CNV analysis, which displays a higher degree of uniformity compared with WGA-based methods.
Single-cell analysis
Common objectives of the bioinformatic and statistical analyses in single-cell cancer studies are analysis of intra-tumour heterogeneity, molecular subtyping at the single-cell level, detection of rare cell types, mutation detection, CNV profiling and lineage inference. To gain the most from single-cell sequencing studies, specific models and methods should be used in some applications, instead of methods developed for the analysis of conventional bulk average profiles. Single-cell RNA-seq data in particular have distinctly different distributional properties compared with conventional bulk average RNA-seq data, including substantially zero-inflated expression distributions and latent variability because of, e.g., cell cycle effects [72] (RNA). WGA typically leads to data with limited genome coverage, and allelic dropout leads to loss of one or both alleles at some locations during amplification. An increasing number of specialized methods for analysis and modelling of single-cell data are available, including methods for rare cell detection [73–75] (RNA), differential expression [76–78] (RNA), pathway analysis [79] (RNA), imputation [80–82] (RNA), heterogeneity [78,79] (RNA), lineage inference [83,84] (DNA), pseudo-time ordering [85–87] (RNA), clustering [80,88] (RNA), dimensionality reduction [89–91] (RNA), modelling of latent factors [72] (RNA) and quality control [92] (RNA). Reviews of bioinformatic and statistical methods for single-cell analysis are also available [10,20,21].

Figure 2. Overview of the process of applying single-cell sequencing to patient-derived tumour samples.
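To make the zero-inflation point above concrete, the following is a minimal simulation sketch, not taken from any of the cited methods; all parameters (per-gene means, dispersion, dropout rate) are hypothetical and chosen only for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n_cells, n_genes = 500, 1000

# Hypothetical parameters: negative-binomial expression with an
# extra technical dropout layer on top.
mean_expr = rng.gamma(shape=2.0, scale=5.0, size=n_genes)  # per-gene means
r = 2.0                                                    # NB size parameter
p_dropout = 0.6                                            # assumed dropout rate

# Bulk-like negative-binomial counts.
p = r / (r + mean_expr)
counts = rng.negative_binomial(r, p, size=(n_cells, n_genes))

# Zero inflation: each entry is independently zeroed with probability
# p_dropout, mimicking single-cell dropout events.
dropout = rng.random((n_cells, n_genes)) < p_dropout
sc_counts = np.where(dropout, 0, counts)

print(f"zero fraction, bulk-like counts:   {np.mean(counts == 0):.2f}")
print(f"zero fraction, zero-inflated data: {np.mean(sc_counts == 0):.2f}")
```

A dropout layer of this kind is one reason why imputation and zero-inflation-aware models appear among the specialized single-cell methods listed above.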
Applications of single-cell sequencing for molecular phenotyping of human cancer cells

Here, we provide an overview of studies that apply single-cell sequencing to characterize primary human cancer cells; studies based on cell lines, xenograft models and primary cultures are not included in the current survey. Most of these studies are small, particularly with respect to the number of patients. The number of single cells from each patient is also limited in many of the studies, although the more recent studies include larger numbers of cells [93–95], reflecting the ongoing technology development in the field. It is evident that single-cell sequencing has been applied across a range of different cancer diseases using both DNA- and RNA-seq, addressing a range of general and specific research questions, some of which are outlined in the introduction. The studies are summarized in two tables: Table 1 covers single-cell sequencing of primary cancer cells, and Table 2 covers studies focused on single-cell sequencing of CTCs. We comment on some of the key studies below.
Single-cell sequencing of primary cancer cells

CNV profiles of primary single cells from two breast cancer patients were reported in an early landmark study demonstrating feasibility of the method and revealing distinct clonal subpopulations of cells as well as concordant CNV profiles between primary tumour and metastasis [96]. In another study, CNV profiles were generated from breast cancer tumours and disseminated tumour cells (DTCs) from bone marrow [97]; CNV profiles were compared between primary tumour or lymph node metastases and DTCs, revealing concordance of CNV profiles in 53% of identified DTCs and allowing for phylogenetic analysis and determination of the origin of DTCs [97]. Gawad et al. [98] applied targeted single-cell DNA sequencing to study childhood acute lymphoblastic leukaemia (ALL) in 1479 cells from 6 patients, which allowed them to gain insights into clonal evolution and the development of ALL and to determine co-occurring mutations. Single-cell RNA-seq was applied in a study of metastatic melanoma based on 4645 cells from 19 patients using fresh material and single-cell isolation by FACS [95]. The cells profiled in this study included cancer cells as well as stromal, immune and endothelial cells, thus allowing for characterization of the tumour microenvironment in addition to analysis of inter- and intra-tumour heterogeneity. Interestingly, subpopulations of cells expressing genes indicative of resistance to targeted therapies were identified. Interactions between cancer cells and the tumour microenvironment were also investigated [95]. In another study, focused on glioblastoma patients, the RNA of single cells was sequenced, revealing intra-tumour heterogeneity with respect to established molecular subtypes and suggesting possible effects on prognosis [99]. Tirosh et al. [94] used single-cell RNA-seq to characterize fresh single cells from human oligodendroglioma patients. They characterized intra-tumour heterogeneity and found that cancer cells mainly belonged to two subgroups defined by their expression profiles, together with a smaller third subgroup of undifferentiated cells with stem cell-like expression profiles, which also had a high proliferative potential [94].
Single-cell sequencing of CTCs
Liquid biopsies provide a means for minimally invasive collection of material from cancer patients that allows for single-cell sequencing of CTCs. Single-cell sequencing of CTCs has been applied across a range of cancer diseases (Table 2), but most of the sequencing-based CTC studies are too small to draw conclusions regarding clinical outcomes. In a study of CTCs from lung cancer, it was found that the CNV profile was concordant with metastases in the same patients, while single-nucleotide variation was heterogeneous from cell to cell [109]. Analysis of CNV profiles and genomic instability in CTCs from metastatic castrate-resistant prostate cancer demonstrated the ability to detect key alterations with clinical relevance, including loss of PTEN and amplification of the androgen receptor (AR) [117]. Single-cell RNA-seq was also used to profile prostate cancer CTCs, which revealed significant within-patient heterogeneity, including expression of AR splice variants (AR-V7) associated with resistance against anti-androgen treatment [113]. Results from a study based on cultured CTCs from breast cancer patients indicated heterogeneity with respect to HER2 status, including spontaneous and dynamic interconversion (i.e. cell-state plasticity [6]) between HER2+ and HER2− states in cell populations, which may contribute to development of drug resistance [118]. CTCs have been demonstrated to be predictive of treatment response in both breast and prostate cancers [119–121].
Discussion
Single-cell sequencing is revolutionizing cancer research by providing a significant step forward in the resolution at which the molecular phenotype of tumours can be characterized. Intra-tumour heterogeneity is common in many cancer diseases and is related to treatment response, progression and survival outcomes, and it can only be fully characterized at the single-cell level. Methods for single-cell isolation and DNA and RNA sequencing are now well established, and further improvements and novel methodologies are continuously being developed. Single-cell methods for collecting additional molecular levels beyond DNA and RNA are also being developed, as are single-cell multi-omics methods in which multiple molecular levels are profiled in the same cell, providing a unique opportunity to generate comprehensive molecular phenotypes of tumours at single-cell resolution.
Study design is of key importance in single-cell cancer studies. However, few studies to date cover aspects of study design in single-cell studies. The cost is approximately proportional to (number of patients) × (number of cells per patient) × (number of sequencing reads per cell), and the balance among these factors should be considered carefully for each application. Recent developments in single-cell sequencing have increased the number of cells that can be isolated and sequenced [122]. However, in studies with larger numbers of cells, fewer sequencing reads are typically collected from each individual cell, limiting the sensitivity of the molecular phenotype data acquired from the cells. To evaluate associations between molecular data and patient outcomes, larger numbers of patients will have to be included in studies. Detection of rare cell types, or rare cell states, requires profiling of a larger number of cells from each patient. The number of sequencing reads collected from each cell is related to the sensitivity in detecting and quantifying the molecular phenotype; although different cell types can be correctly classified with relatively limited RNA-seq data from each cell (<50,000 reads/cell), more sequencing reads (up to several million reads/cell) will be required to determine the more subtle transcriptomic differences that reflect cell states, which is expected to be relevant in molecular phenotyping of cancers. At the time of writing, no systematic evaluation of these study design factors has been reported, but we expect the trade-off between these factors to be central to the success of future studies. To determine a reasonable study design, it is advisable to apply power calculations, especially when patient outcome analyses (e.g. time-to-event analyses) are a primary objective. The number of single cells to profile from each patient is directly related to the degree of heterogeneity and the desired power to detect and profile rare cells or cell states; a crude version of this calculation is sketched below. However, such information might not be readily available, and in that situation a pilot study can provide the information necessary to determine a suitable study design. Single-cell methods will undoubtedly become an increasingly important tool for basic cancer research using model systems (e.g. cell lines, animal models, xenograft models). However, applications in human cancer research and precision medicine are now emerging and provide an opportunity to understand how heterogeneity and tumour evolution contribute to clinically relevant outcomes, including probability of treatment response, progression and survival. At the moment, liquid biopsies, including single-cell sequencing of CTCs, represent the technology with probably the highest chance of rapid translation to the clinic. Single-cell sequencing has so far generated promising results in relatively small studies of patient-derived cancer cells, and the next step will be to initiate larger studies with more patients to evaluate to what extent molecular phenotype data at single-cell resolution improve prognostication, prediction of treatment response or other relevant outcomes.
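As a rough illustration of the trade-off just described, the following minimal sketch computes the cost proportionality and an expectation-based estimate of the number of cells needed per patient to capture a rare subpopulation. All numbers (sequencing price, detection targets) are hypothetical placeholders, not values from the cited literature, and the sizing rule is a crude expectation bound rather than a formal power calculation:

```python
import math

def study_cost(n_patients, cells_per_patient, reads_per_cell,
               price_per_million_reads=5.0):
    """Cost proportional to patients x cells/patient x reads/cell
    (the per-read price is a made-up placeholder)."""
    total_reads = n_patients * cells_per_patient * reads_per_cell
    return total_reads / 1e6 * price_per_million_reads

def cells_per_patient(rare_fraction, min_rare_cells=10, power=0.95):
    """Enough cells that (a) at least one rare cell is seen with the
    given probability and (b) min_rare_cells rare cells are expected."""
    n_for_one = math.log(1.0 - power) / math.log(1.0 - rare_fraction)
    n_for_target = min_rare_cells / rare_fraction
    return int(math.ceil(max(n_for_one, n_for_target)))

n = cells_per_patient(rare_fraction=0.01)      # a 1% subclone
print(n, "cells/patient to expect ~10 cells of a 1% subclone")
print("approximate cost:", study_cost(50, n, 50_000), "(arbitrary units)")
```

Even this crude calculation shows how quickly the rare-cell requirement drives up the number of cells, and hence the sequencing cost, per patient.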
Many challenges remain to be addressed before single-cell sequencing becomes routine in clinical cancer research and is translated to the clinic. Sample handling and the requirement for fresh cells are major obstacles in studies based on patient-derived tumour cells that have to be overcome. Development of methods and protocols that enable preservation of cells for later single-cell sequencing is essential to scale up studies based on clinical samples. Although currently available methodologies have enabled successful application of single-cell sequencing to a wide range of problems in human cancer genomics, further development of methodologies and technologies in multiple areas is required to advance single-cell profiling in cancer research. Efficient isolation of single cells is still dependent on substantial infrastructure that is rarely available to individual laboratories. Development of efficient yet affordable methods for single-cell isolation would enable wider uptake of single-cell methods, and there are already some advances in this direction [25]. Isolation of CTCs also remains a challenge, and methods with improved capture efficiency would broaden the potential applications of non-invasive liquid biopsies and CTC sequencing. Development of new methodologies for single-cell multi-omics profiling, enabling measurement of multiple molecular levels in the same cell, has the potential to substantially improve our capability to characterize the molecular mechanisms of cancer. With respect to analysis of single-cell RNA-seq data, few available methods and models account for technical noise and zero-inflated distributions in cancer-specific analyses, including detection and classification of cell states or subtypes at the single-cell level. There is also an emerging need for novel methodologies and models that allow for analysis of multi-omics single-cell data, as such data are starting to become available. Currently, the cost of large-scale application of single-cell sequencing can be prohibitive; however, we anticipate that, with the ongoing rapid development of both single-cell and sequencing methodologies, costs will inevitably come down over time and open up many new application areas.
Conclusions
The field of single-cell sequencing is developing rapidly and has over the past few years reached a level of maturity where clinically relevant information (including intra-tumour heterogeneity, development of treatment resistance and tumour evolution) can be collected through profiling of single cells from cancer patients. Many of the fundamental objectives of cancer precision medicine (prediction of treatment response, prognostication, detection of treatment resistance) can be addressed at a higher resolution with single-cell methods than with conventional bulk average molecular phenotyping. It is therefore highly likely that single-cell molecular phenotyping will supersede bulk average profiling in many application areas; we are now just starting to see the beginning of this trend. The next step in the application of single-cell methods to the study of human cancers is to initiate studies that include larger patient cohorts and larger numbers of single cells, and that also consider clinical outcomes.
Key Points
• Intra-tumour heterogeneity is an inherent property of many cancers and may play a central role in clinical outcomes.
• Single-cell sequencing technologies provide means for high-resolution molecular phenotyping of large numbers of individual cancer cells and enable characterization of intra-tumour heterogeneity.
• Most single-cell studies of human cancers to date include few patients, which limits the opportunity to investigate effects on clinical outcomes.
• Larger studies that include more patients are now needed to establish potential associations between the unique information captured by single-cell sequencing and clinically relevant outcomes.
Selected Reference Books of 1998–1999
This article follows the pattern set by the semi-annual series initiated by the late Constance M. Winchell more than fifty years ago and continued by Eugene Sheehy. Because the purpose of the list is to present a selection of recent scholarly and general works, it does not pretend to be either well balanced or comprehensive. A brief roundup of new editions of standard works is provided at the end of the article. Code numbers (such as AH226) have been used to refer to titles in the Guide to Reference Books, 11th ed. (Chicago: ALA, 1996).
does not; there are entries and information in the older set not available in the newer one, and vice versa, of course. For example, the entry on George Arliss in the DAB refers the reader to the holdings in the Harvard Theatre Collection; the entry in the ANB does not, but lists holdings in the Performing Arts Library at Lincoln Center not provided in the DAB.
The ANB "has substantially broadened the criteria for the inclusion of subjects" (Pref.), and the editors made a concerted effort to expand the coverage of women and minorities. The ACLS and Oxford University Press "have established a Center for American Biography, whose charge is to update and enlarge the ANB …" Oxford University Press has announced a Web version available January 2000. All general academic and large public libraries will need this new set, but all will want to keep the older one handy. -M.C.

… concluding with a bibliography and a list of additional archival sources for newspapers.
This catalogue is indispensable for any serious research collection of French historical studies. -J.S.

Danky, James P. and Maureen E. Hady. African-American Newspapers and Periodicals: a National Bibliography. Cambridge: Harvard Univ. Pr., 1998. xxxv, 740p. $125.00 (ISBN 0-674-00788-3). LCCN 98-026099.

James Danky and Maureen Hady have compiled a remarkable list of African-American newspapers and periodicals, from Freedom's Journal of March 16, 1827, to the latest Hip Hop magazine. Their mission was to identify, locate and examine each issue of "literary, political and history journals as well as general newspapers and feature magazines …" (Brief History) in order to compile an alphabetical list of 6,562 numbered entries. For each is given: most recent title and, if applicable, years of publication, frequency, current edition and editorial address, subscription rates, publisher, number of pages in the latest issue or volume examined, indication of line drawings, photos, commercial advertisements, height in centimeters, previous editions, variant titles, where indexed, availability of microfilm, ISSN, OCLC number, LCCN, subject focus and features, and library locations with holdings. The work that went into compiling this list is amazing.
The volume ends with indexes for subjects and features (abolitionists or abortion or Zydeco music), editors, publishers, and geographical area. There are cross-references in the text; they could have been separated from the preceding entries a little more, making them easier to spot. But what is most needed is a title index to encompass all those variant titles. The other disquieting feature is the lack of any indication of the union lists and finding aids used to begin the research; for example, the Boston Guardian is indexed for 1902–1904 in the Campbell index to the Guardian, 1902–1904, yet nowhere in the entry for the Boston Guardian (which is under Guardian with no cross-reference) is that index mentioned. Does this mean that Danky did not see the Campbell book, which could have helped him identify and locate titles and indexes?
Two other wishes: I wish there were a chronological index within the volume which would make the identification and location of primary resources easier, and I wish there were an indication of the newspapers available on CD-ROM.
But I don't mean to disparage this superb work, which will be of great benefit to scholars.

… for persons, including biographical data (date and place of birth, names of parents, spouses and children, educational attainment, and religion where available) and the name of the work for which the prize was awarded. Selected entries include photographs of the winners as well. Additionally, readers are given names of other awards received, a career synopsis, and a list of selected works (when available). Most helpfully, the compilers have tracked down citations to newspaper or magazine articles about the winners or their work, included in the "For More Information" section. The "Commentary" section of the entry includes specific information from either the Pulitzer Prize board or the winners themselves on the work receiving the award. Indexes included in the book list individual winners, newspaper and organization winners, educational institutions, and a year-by-year chronology of award winners. The work is prefaced by a comprehensive history of the prizes and a brief biography of Joseph Pulitzer written by the administrator of the Pulitzer Prizes, Seymour Topping.
The one index that might have been included, which is absent, is one listing the individuals who have won prizes by particular news organization. Answering a question such as, "Who is the Washington Post journalist whose Pulitzer Prize was revoked?" is not possible using the materials available in the Who's Who, since it is a product of research using Pulitzer Prize office materials. A discussion of prize controversies might also have been a useful resource to include.
In all, this is a thoughtfully researched book put together by librarians who have answered questions about Pulitzer winners in the past, and it serves as a good all-in-one resource for collections with either a journalism or humanities focus or a need for biographical information. -D.W.
Mythology

When the English translation of Pierre Grimal's Dictionary of Classical Mythology (CF27) was published, a reviewer for the TLS said that the then-standard dictionary of mythology by Lempriere could now be honorably retired (TLS August 8, 1986: 868). The same could not be said of this new dictionary of mythology in relation to its predecessor.
March does not mention Grimal's work in her bibliography, but the presentation and organization of her work bear resemblance, and one suspects that this new dictionary was inspired by it. Both are illustrated by monochrome photographs, accompanied by charts of family trees and arranged alphabetically. Grimal, however, includes more extensive notes and bibliographic sources.
March's stated aim is to "retell the myths as readably as possible, detailing any major variant and including, where appropriate, translations from ancient writers to give life to [her] narrative" (Introd.). Her effort to be readable unfortunately renders her prose overly "chatty" at times. Libraries which own Grimal should keep it for its scholarly apparatus and the elegant concision of the prose.
The book is illustrated with black-and-white photographs of vase paintings and sculptures and accompanied by maps, genealogical tables of gods, goddesses and heroes, a list of Greek and Roman authors cited in the entries and a short bibliography of works on classical mythology. -J.S.

Auchter's dictionary offers current meanings for almost 600 allusions and eponyms from all eras of history. It includes historical figures, as well as events from the Bible, ancient history and folklore. Not included are allusions from mythology or fictional works. The purpose of the book is to reconnect familiar figures of speech with their original context. Eponyms, derived from people's names, and allusions, indirect references to historical events, are both commonly used in our spoken and written language, but their origins are not always remembered. In well-written, informative entries, the historical background as well as the current usage of a term or phrase is explained. Arranged alphabetically, each entry is followed by a short bibliography of sources. One learns, for example, that Luddite originates with a slow-witted knitting apprentice, Ned Ludd, who in the early 19th century smashed his knitting frame in frustration, leading to the short-lived anti-technology Luddite movement in England. Although other volumes in this genre may include more entries, specifically Common Knowledge (BE81) and Eponyms Dictionaries Index (AC53), this one stands out for its insightful historical descriptions, its inclusion of bibliographic sources after each entry and its convenient subject index, which would have benefited, however, from the addition of page numbers. Language and literature students will find this dictionary to be beneficial as well as quite readable. -A.M.
The Cassell Dictionary of Slang; Jonathon Green, compiler. London: Cassell, 1998. 1,316p. $37.50 (ISBN 0-304-34435-4).

In a crowded field, this work distinguishes itself by its comprehensive coverage of the English-speaking world. This new volume, published in Great Britain, covers slang terms from the UK, the USA, Canada, the anglophone islands of the Caribbean, Ireland (North and South), South Africa, Australia and New Zealand. Only India is excluded, except for some Raj-era entries spoken by Englishmen. In 70,000 entries it includes slang words and phrases from the early 16th century to the present.
Each entry notes the usage period of the term, which may be a century or merely a decade, the geographical use (e.g., S. Afr.) and the social/cultural usage (e.g., teen). There are cross-references to other entries and to words used in other entries, as in the etymologies, which are given where known. For example, stool-pigeon, an informer or one who makes a confession implicating others, comes from "a bird that is tied to a stool in order to lure other birds toward the waiting hunter." The dictionary concludes with an extensive bibliography of books, comics and cartoon strips, newspapers and magazines, records, film and television, and relevant Internet sites. It is a worthwhile volume for those who want a thorough, carefully researched, all-encompassing approach to the subject. -A.M.
Poole, Russell. Old English Wisdom Poetry. Cambridge …

Everything about this bibliography is appealing: the design is pleasing, the layout is clear, the introductory essay provides an excellent discussion of "the salient features of Old English versified wisdom" (Introd.) and the annotations are well written, descriptive and evaluative. It is well organized, with a section of general studies, followed by these sections treating individual poems or groups of poems: the metrical charms, The Fortunes of Men, The Gifts of Men, Homiletic Fragments I and II, Maxims I and II, The Order of the World, Precepts, the metrical proverbs, the Riddles of the Exeter book, Rune Poem, Solomon and Saturn and Vainglory. Within each section there is an "Orientation to Research" discussing manuscripts, dating, literary affiliations, and literary criticism, followed by the Bibliography, listing citations to books, journal articles and essays in collections in chronological order, from the earliest dates to the present. The excellent index ranges from "acorns: as foodstuff for human consumption" to "Yggdrasill: in relation to Riddle 92" and provides subheadings, explanations, and cross-references. This is a comprehensive, scholarly bibliography treating work in all languages.

Although this work was designed with Russian teachers of literature and advanced Russian students in mind, it should be a valuable addition to any reference collection that seeks to provide support for Russian literature in the original language. It is indeed gratifying to see a reference work of this kind, reflecting the changes of the past decade that at last make it possible to bring native Russian scholarship to bear on all of the rich and varied currents of that country's 20th-century literary experience (pre-Revolutionary, Soviet, dissident, and émigré) in a single reference book, and with a full accounting of the often tragic and difficult path that Russian writers have had to follow.
The collective work of a team of established literary scholars working under the editorship of Nikolai Nikolaevich Skatov, this two-volume dictionary offers profiles of more than 500 writers active primarily in the 20th century. Each entry, typically several paragraphs or even pages in length, provides detailed information about an author's life, career and works, along with a basic critical overview. Reflecting the new political climate, biographical sketches can now include much fuller details about the political difficulties and repression suffered by particular individuals. The representatives of official literary orthodoxy are also subject, in a few cases, to some mild criticism, although, with an examination of each school of literary work on its own terms, an overall tone of objectivity is indicative of the promising new climate for serious research. A brief bibliography of the most complete or accessible editions of each writer's publications, including collected works, if available, along with a list of some key Russian-language secondary studies, follows each entry. Over a hundred photographs of some of the most important figures are included in two gatherings of plates, which, oddly enough, are not placed in any kind of alphabetical order, making it very difficult to find the portrait of a given individual. Another slight drawback is the absence of any front or back matter, not even an index, particularly as there seem to be no cross-references whatsoever from variant forms of a given writer's name. This minor flaw in no way diminishes the importance of this reference book, which no serious Russian-language collection should fail to acquire.
As noted, this would appear to be the first such authoritative and comprehensive dictionary of twentieth-century literature to come out of Russia since the collapse of the Soviet Union. Indeed, the only work from anywhere that can be fairly compared to it is Wolfgang Kasack's Lexikon der russischen Literatur des 20. Jahrhunderts vom Beginn des Jahrhunderts bis zum Ende der Sowjetära (2d ed. Munich: Sagner, 1992), which is also available in Russian and (for the smaller first edition only) in English translation (BE1412). While the two titles obviously cover a great deal of the same material, the newer title does not supplant the older. Rather, given their slightly different emphases, the two complement one another. Overall, Kasack provides slightly more entries for authors as well as a few topical subject headings for journals, organizations, movements, and the like. However, both works provide a significant number of unique entries, with the Skatov volume obviously providing more up-to-date coverage as well. Thus, Kasack provides 57 entries (52 of which deal with individual authors) under the letter "A," while Russkie pisateli contains 45 author entries under that letter. Twenty of the "A" author entries in Kasack are unique to that volume, and 13 in the Skatov work. Moreover, the entries in the work under review here are considerably longer and more detailed. They also represent the collective voice of a greater number of different specialists. In terms of bibliographic coverage, too, the works complement one another nicely. The Skatov volume tends to point to the most accessible, authoritative Russian text, whereas Kasack also includes entries tracing the original publishing history. The number of Kasack's secondary references is slightly smaller, but they offer Western-language titles as well, which Russkie pisateli does not.
As the editors of this new work suggest, the creation of a full encyclopedia of twentieth-century Russian literature is a task for the future. Until that time, the Russian literature community can derive considerable benefit from this well-written, informative guide.

The goal of the work is to provide as comprehensive a list as possible of British and Irish writers, or ones closely associated with the British Isles, who produced works in Latin (or are reported to have done so) down to the time of the dissolution of the monasteries in the mid-16th century, a convenient ending point since it coincides with the end of the major medieval libraries in England and Wales. It specifically aims at superseding the first modern (and far less complete) attempt at a listing of this kind: J.H. Baxter, Charles Johnson and J.F. Willards, "Index to Latin Writers of the British Isles," Archivum Latinitatis medii aevi 7 (1932): 110-219.
A well-organized and very readable introduction spells out the principles of inclusion, annotation and arrangement and is followed by an extensive listing of the abbreviations. In the main body of the work, 2,283 authors are listed alphabetically by first name, with dates of their life or known activity and, where available, brief indication of office and/or affiliation with a religious order. An index of surnames at the end provides an additional access point, as do lists of cross-references from the forms of names used in the early catalogs of John Leland and John Bale, of which Sharpe has made extensive use. Each entry clearly indicates whether an author is well established as an author of extant published works (by the use of all upper-case letters for the name) or is known as an author only through secondary or questionable attribution in early sources (by the use of ordinary lettering for the name). A dagger or double dagger next to the name indicates, respectively, those cases where an author is not British but has entered the bibliographic tradition by mistake and those cases where an author of foreign origin was closely associated, for whatever reason, with the British Isles.
Entries then include an indication of the earlier bibliographic works in which the author was listed, and any additional explanatory notes required to indicate aspects of the bibliographic tradition, questions of attribution and identification, and the like. A list of works by title follows, along with an indication of the most important or accessible published version. Where a work exists in manuscript or an early published version and is without adequate listing of the manuscript sources, Sharpe has endeavored to provide the missing information for any he knows. Included here are works of uncertain or spurious attribution, as well as ones known only from secondary references in other sources.
To be sure, this work does not touch on all aspects of the British and Irish Latin manuscript tradition. Authors of administrative, legal, or business-related publications have not been included, nor has there been any systematic effort to indicate the authors of extant Latin-language letters. Nor, since this is a listing of authors, is there any coverage here of the vast corpus of anonymous writing. Moreover, as Sharpe indicates, a work of this kind is inevitably incomplete, and he looks to the possibility of a new edition, once scholars have had an opportunity to digest the rich offering he places before them and to supply their own additions and corrections.
That said, it is clear that this is an essential reference for anyone seriously seeking to address the history and culture of medieval Britain and Ireland. The scholarly community owes a great debt of gratitude to Sharpe. It would be wonderful, too, if he were willing to apply his demonstrated bibliographic skills in the future to some of the areas that could not be included in this initial study. -R.H.S.
Spanish Dramatists of the Golden Age: a Bio-Bibliographical Sourcebook; Mary Parker, ed. Westport, Conn.: Greenwood, 1998. 286p. $89.50 (ISBN 0-313-28893-3). LCCN 97-21976.

This follows the familiar, useful format of the Greenwood Press bio-bibliographical volumes. Nineteen playwrights are presented alphabetically, each with a brief biography, critical discussion and a short bibliography of primary and secondary sources. Each article is written by a specialist. This format has proven to be extremely useful for students needing a concise, moderately detailed yet scholarly introduction to a writer, and any library with a Spanish department should find this useful. -M.C.
Architecture and City Planning

One would have to agree with the editor that his encyclopedia is "eclectic and idiosyncratic at the same time" (Pref.). The topics selected for it clearly reflect the interests of one individual, despite its 300 contributors and 500 entries. Where else would one think to find dumbbell tenements and Humphrey Bogart with an entry apiece? Indeed, rather than being a strength, this sort of serendipity may instead be a drawback to its usefulness. It would be asking a lot of any librarian to have anticipated such a pairing in a specialized work supposedly devoted to urbanism.
Other arrangements pose problems for users as well: there are six entries listed in the Table of Contents under various U.S. presidential administrations, e.g., Carter Administration: Urban Policy; Johnson Administration: Urban Policy; and so on, with the Kennedy, New Deal, Nixon and Reagan administrations also listed (Roosevelt, Truman, Ford, Bush and Clinton are unnamed). Would it not have been more efficient to have one entry, such as U.S. Urban Policy, divided by administration? It would have simplified not only the access points (e.g., president's name as opposed to era, inclusion of some presidents but not others, etc.), but unambiguously listed them in chronological, rather than alphabetical, order.
Still, there are reasons to welcome such a reference work. The two longest articles accurately reflect the book's themes: urbanization and suburbanization. The Introduction goes into some detail to grapple with historical definitions of what makes a place in America a city or town, what determines whether it is urban, rural or metropolitan, and the role of the U.S. Census Bureau in these classifications. The alphabetical entries in the encyclopedia include sizable U.S. cities, notable mayors, architects, city planners, musicians, artists, and public figures and a great variety of topics. Historically significant suburbs such as Levittown, NY, and Reston, VA, also have their own entries (although one must consult the index for Columbia, MD, and Radburn, NJ).
There are entries which serve as definitions for concepts such as "density" and "gentrification," some which describe building types, transit systems and services typically found in U.S. cities, and others which provide narratives on the development of ethnic neighborhoods, on the practice of religion, and on the different forms of municipal government found throughout the country. Each entry is signed and followed by a list of "see also" and bibliographical references. There are black-and-white photographs and illustrations every four or five pages, with a selected bibliography, classified subject list, and full index at the end of volume II.
There are no statistical tables, which might be expected in such a work, e.g., lists of cities ranked by population, geographical area, industrial location, etc., and no list of important suburbs by state or date of construction. Nor are there any maps or street plans, features that urban historians, city planners and urban studies students take for granted. Still, if one is looking to find out what "hoovervilles" were, how (and how much) municipal garbage is collected in the U.S., or the place women have historically occupied in American cities, one can conveniently begin here. -B.S.-A.

You won't find Louis Sullivan's universally quoted maxim "Form follows function" here. Neither will you find the famous "Less is more" of Ludwig Mies van der Rohe. Daniel Burnham's "Make no little plans; they have no magic to stir the blood" isn't here either. Nor is the authors' exalted claim for the book "to increase [students'] understanding of the complexity and richness that exists within and between [sic] these disciplines" likely to occur (Pref.). Both "form" and "function" appear to be outside their comprehension of architecture, at least, and given their credentials in the areas of criminal justice and telecommunications, this is not surprising.
This book mimics the form of Charles Knevitt's 1986 Perspectives: an Anthology of 1001 Architectural Quotations (BF95n), down to the use of cartoons as illustrations (Knevitt employed the popular British architectural cartoonist Louis Hellman). In this instance, 115 terms have been selected, on topics ranging from answer to decision, discovery, genius, inventor, opinion, power, reason, simplicity and weight. Architectural terms include arch, architect and architecture, building, estimates, perspectives, proportion and symmetry. Engineering and technology terms include chaos, data, electrical, engineering, experiment, gravity, inventions, mathematics, solidity, thermodynamics, and tool, among others.
It is always entertaining to peruse a book of quotations; cogent insights and good jokes are a guaranteed reward (e.g., "When all else fails, use bloody great nails," Anonymous). But if one wants to know "who said what" in a hurry, this book could perhaps have made better use of the works cited in its 26-page bibliography. In addition to the obvious omissions mentioned above, there is no comprehensive index to all the important words in the quotations. The reader is given a "Subject by Author Index," meaning that each author (over 600) is listed under the relevant topic, and an "Author by Subject Index," where the authors are listed alphabetically with the topics and page numbers (rather than the common practice of citing entry numbers, of which there are none) entered under them. There is the usual reliance on Shakespeare, Leonardo da Vinci, Sir Francis Bacon, Ralph Waldo Emerson, Mark Twain, and other mainstays of the quotation ranks. Some "authors," like the HAL 9000 computer from Arthur C. Clarke's 2001: a Space Odyssey, should rightly have been listed under Clarke, who has other entries attributed to him.
A feature new to this genre, however, is the use of Uniform Resource Locators (URLs), which cite an address on the World Wide Web as the source of a quotation. One can only guess how long those most ephemeral of access points will last.
There are doubtless many words of wisdom and amusement included here by a wide range of eminent and popular authors, scientists, musicians, and practitioners. Compilations of quotations that attempt to capture the essence of a discipline or profession are nearly always welcome and appreciated, since they have much to offer the term-paper and speech writer, the librarian and anyone who simply can't remember the exact wording of a good line they heard somewhere. This book can be added to the quotation literature of architecture, engineering or technology, but it cannot be relied upon to be the final word, funny or otherwise, on any of the subjects. -B.S.-A.
Political Science

This is a useful encyclopedia for undergraduates and other non-specialists. Some 683 articles survey the history, meaning and application of civil rights issues in the United States, addressing the civil rights struggles of all Americans. Signed entries are clearly written, and longer entries contain brief lists of suggested readings. Nearly half of the entries are illustrated with black-and-white photographs. Useful features in the back of the third volume include a chronological table of court cases, a civil rights chronology, a directory of rights organizations arranged by state, a filmography, a bibliography arranged by topic and an index. Sadly, the index is a disappointment. The entry "Accommodation and Public Facilities" is listed in the index only under "Accommodation and Public Facilities," with no cross-reference under public accommodation, or rest rooms, or any other term. In most cases, the index is little more than a repetition of the alphabetical entries in the book.
The Encyclopedia of Civil Rights in America does not supersede the Encyclopedia of African-American Civil Rights, edited by Charles D. Lowery and John F. Marszalek (Greenwood, 1991). Although the information gathered in this work can easily be found by consulting other encyclopedias, the whole in this case is greater than the sum of its parts.
The editor and his collaborators set for themselves an intellectual task of vast scope: describe and explain, using an encyclopedia format, revolutionary activity around the world since 1500 AD. To narrow their focus they chose to include only events that used "irregular procedures aimed at forcing political change within a society" (Pref.) and that had a lasting effect. Thus there is, for example, an entry for the European Revolutions of 1848, but there is also one for the Women's Rights Movement and for Workers. Essays about events and leaders gain in significance by appearing together with essays on such key concepts as democracy, socialism, and gender and the roles they played in the history of revolution.
The Encyclopedia of Revolutions is a well-conceived and elegantly executed reference work. Its organization is clear; the content, selected in accordance with well-articulated criteria, is made accessible through several access points. The articles are authored by reputable scholars and enhanced with illustrations, maps and bibliographies. It is an appropriate reference tool for any library collection: academic, public and school. -O.dC.
Encyclopedia of Politics and Religion; Robert Wuthnow, ed.-in-chief. Washington: Congressional Quarterly, 1998. 2v. (909p). il. $250.00 (ISBN 1-56802-164-X). LCCN 98-29879.

Not easily classified, this excellent two-volume encyclopedia explores the interrelationships between the institutions of politics and religion, showing how those interconnections have combined to affect social attitudes and influence government policies. Ranging from the inception of modern religions (Islam, Buddhism, Judaism) to the present, the encyclopedia focuses primarily on the 19th and 20th centuries. The 256 signed entries cover an international spectrum encompassing specific countries, major religions, thematic topics, seminal events (the Crusades, the Holocaust), and individual religious and political leaders (Ayatollah Khomeini, Vaclav Havel).
Alphabetical entries, each several thousand words long, are followed by brief bibliographies of relevant sources. Cross-references at the end of entries refer to related topics. Written by a worldwide group of scholars, the knowledgeable, well-written essays are accessible to students as well as scholars and are accompanied by black-and-white photos or small maps.
An alphabetical list of articles and detailed indexes (included in each volume) add to the set's value. Also useful are the extensive appendix materials, for example, excerpts from or complete texts of twenty-one source documents related to politics and religion, ranging from the Ninety-Five Theses to the Irish Peace Accord. Also in the appendix are excerpts from twenty-seven world constitutions with provisions on religion; a glossary of terms; and a compilation of Internet sites, both political and religious, arranged by topic. This significant reference work aims to "describe the historical roots of the relations between politics and religion in the modern world and to explain the web of their global interconnections" (Pref.). It fulfills its mission admirably and is both a timely and relevant reference source. -A.M.
Economics

The authors are aware that technical analysis has not been well treated in academic literature, but that market participants tend to rely heavily on these indicators. An examination of this disparity is what is undertaken here. Sixty different technical indicators are described and tested, and, in this process, the authors have moved away from the traditional chart-based approach to rely instead on the ability of computers to calculate the numbers.
The goal of the work is to examine objectively whether the most commonly used technical indicators of the securities markets do or do not work, and why. This involves a performance evaluation of these indicators using quarterly data points from 1985 through 1996. The concepts underlying the indicators and the calculation methods are explained.
After an introductory chapter describing the current debate on the value of these indicators, there follows a methodological chapter and individual chapters on moving average, oscillator, divergence, and trend indicators. Chapter seven covers patterns, largely candlestick, and the final three chapters are wrap-ups. There is a well-selected bibliography of useful related reading, and the book is indexed.

This is a chronology of the First World War illustrated with period reproductions and six maps. At the end of the chronology there is a section of brief biographies and a selected bibliography of English-language sources.
Though a World War I buff would probably enjoy this work, it does not seem very useful for academic libraries. All the information is more easily obtained in other historical dictionaries, since the detailed day-to-day listings make finding trends and themes difficult. The biographical information, too, is easy to find elsewhere, and the restriction to English-language sources in the bibliography limits its usefulness for an academic library.
-M.C.

Kärcher

This three-part bibliography covers works on revolutionary movements in 1848-1849, first in their European context, secondly in Germany, and thirdly, and more specifically, in Southwest Germany. Each section is then divided between literature since 1900 and earlier literature up to 1899. The nineteenth-century section includes publications during and immediately following the revolutions of 1848-1849.
Part II, on Germany, is the largest segment, with over 4,400 items. The regional history of Southwest Germany during this period is represented by 2,561 works. Thus this bibliography is most suitable for research collections which specialize in German history.
The bibliography collects a variety of materials: contemporary memoirs and treatises, congress proceedings, periodical articles, monographs and academic dissertations. Each section is arranged alphabetically by author or title. The indexes are by author (personal and corporate) and by subject (mainly names of persons, places, institutions and organizations). -J.S.
New Editions and Supplements
The ARBA Guide to Biographical Resources, 1986-1997, edited by Robert L. Wick and Terry Ann Mood (Englewood, Colo.: Libraries Unlimited, 1998. xxxiv, 604p. $60.00) draws from the reviews of bio
MMSE-NP-RISIC-Based Channel Equalization for MIMO-SC-FDE Troposcatter Communication Systems
The impact of intersymbol interference (ISI) on single-carrier frequency-domain equalization with multiple-input multiple-output (MIMO-SC-FDE) troposcatter communication systems is severe, and most channel equalization methods fail to remove it completely. In this paper, given the disadvantages of the noise-predictive (NP) MMSE-based equalization and of residual intersymbol interference cancellation (RISIC) in the single-input single-output (SISO) system, we focus on the combination of both equalization schemes. After extending both of them to the MIMO system for the first time, we introduce a novel MMSE-NP-RISIC equalization method for MIMO-SC-FDE troposcatter communication systems. Analysis and simulation results validate the performance of the proposed method in time-varying frequency-selective troposcatter channels at an acceptable computational complexity cost.
Introduction
Large-capacity troposcatter communication not only plays an important role in military communications but also has great potential in other areas [1]. With further increases in the required bandwidth and in high-speed data transmission, the influence of the multipath delay spread of the troposcatter channel is becoming more and more prominent [2]. The typical delay spread extends over tens or hundreds of bit intervals. Furthermore, the significant time-varying Doppler shift, mainly due to the relative motion of the scatterers, causes not only rapid fluctuation of the fading channel response but also compression or dilation of the signal waveforms. Consequently, the time-varying frequency-selective fading becomes increasingly severe as the delay spread grows.
In order to combat the multipath fading when the delay spread of the channel impulse response (CIR) is large, several schemes have been devised [3]. Traditional single-carrier time-domain equalization (SC-TDE) typically requires a number of multiplications per symbol that is proportional to the maximum expected channel impulse response length, which results in high computational complexity and rather slow convergence [4]. Further increases in data rate impose an even greater challenge on SC-TDE systems. The orthogonal frequency division multiplexing (OFDM) technique [5] has some advantages, such as good anti-fading capability, high spectral efficiency, and low complexity, but it also has disadvantages, such as the high peak-to-average power ratio (PAPR) of its signal and its sensitivity to carrier frequency offset and phase noise [5], so it is not the most suitable choice for troposcatter communication, where transmission power is severely limited.
To achieve a favorable trade-off between performance in severe multipath fading channels and signal-processing complexity, single-carrier frequency-domain equalization (SC-FDE) has recently attracted increased interest, because it offers performance, efficiency, and signal-processing complexity similar to OFDM while being less sensitive than OFDM to radio frequency (RF) impairments such as power amplifier nonlinearities. By performing the main operations in the frequency domain through the discrete Fourier transform (DFT), the processing complexity can be reduced [6].
In SC-FDE systems, linear equalization is simple and practical, but it does not perform well in suppressing noise and intersymbol interference (ISI) [7]. In the literature, decision feedback equalization (DFE) with a hybrid structure (H-DFE) [8,9], where the feedforward (FF) filter is realized in the frequency domain (FD) while the feedback (FB) filter is realized in the time domain (TD), has been derived to cancel the ISI, with the disadvantages of relatively high design complexity and error propagation. There is an equivalent algorithm called noise-predictive DFE (NP-DFE) [10]; it is known as a suboptimal DFE because the FF filter and the noise predictor (NP) are designed independently, whereas the FF filter and the FB filter in the conventional H-DFE structure are designed jointly. Moreover, a residual intersymbol interference cancellation (RISIC) algorithm based on the minimum mean square error (MMSE) criterion can be utilized to mitigate residual intersymbol interference (RISI); it is also a kind of decision feedback equalization and avoids the computation of a matrix inverse [10]. As a matter of fact, the NP-DFE algorithm considers only the channel noise of the system, while the RISIC algorithm takes into account only the RISI. Therefore, nonlinear feedback and an iteration mechanism are needed to improve the performance. Iterative block decision feedback equalization (IBDFE) is an effective nonlinear algorithm, but it has rather high computational complexity [11].
After recalling the general methods of combating severe multipath fading, a novel MMSE-NP-RISIC scheme, with the advantage of alleviating the effects of both noise and RISI, is proposed in this paper. The proposed MMSE-NP-RISIC scheme is the conjunction of the noise-predictive MMSE-based equalization and RISIC algorithms mentioned previously. Applying the FF filters based on the MMSE criterion, which compensate for the amplitude and phase variations of the frequency-selective fading channel, we obtain the elementary time-domain sequence. The following procedure is subdivided into two segments: the noise term is predicted by exploiting the deterministic character of the unique word (UW) and the correlation of the noise at the output of the frequency-domain equalization (FDE), and the RISI term is estimated via fast Fourier transform (FFT) and inverse FFT (IFFT) operations. Simulation results show that the proposed method achieves better performance at a modest cost.
The rest of the paper is organized as follows. In Section 2, we summarize the multipath effects on high-capacity troposcatter communication systems and the solutions for combating the resulting ISI. The multiple-input multiple-output (MIMO) system model and equalization schemes are presented in Section 3, and in Section 4 the novel MMSE-NP-RISIC scheme is proposed. Finally, extensive simulations are presented in Section 5 and conclusions are drawn in Section 6.
Notation. Matrices and column vectors are denoted by bold uppercase and lowercase letters, respectively. E(.) denotes the expectation operator. Superscripts (.)^T and (.)^H stand for the transpose and conjugate-transpose operators. I_M denotes the identity matrix of dimension M. The symbol (x) denotes the Kronecker product.
Multipath Effects on Troposcatter Channel
We know that the specific type of fading seen by the receiver depends on both the transmission scheme and the channel characteristics. The transmission scheme is specified by signal parameters such as signal bandwidth and symbol period. Meanwhile, troposcatter channels can be characterized by two different channel parameters, multipath delay spread and Doppler spread, which cause time dispersion and frequency dispersion, respectively [12].
Note that any received signal in the propagation environment of a troposcatter channel can be considered as the sum of the received signals from an infinite number of scatterers. By the central limit theorem, the received signal can be represented by a Gaussian random variable. The amplitudes, phases, and delays of the different multipath components are time-variant, which is known as the multipath effect.
In terms of time dispersion, a transmitted signal may undergo fading over the frequency domain in either a selective or a nonselective manner, referred to as frequency-selective or frequency-nonselective fading, respectively. Due to the time dispersion caused by multipath, the channel response varies with frequency. A transmitted signal undergoes frequency-selective fading when the troposcatter channel has a constant amplitude and linear phase response only within a coherence bandwidth narrower than the signal bandwidth [2]. In this case, the channel impulse response has a delay spread larger than the symbol period of the transmitted signal. Because the symbol duration is short compared with the multipath delay spread, multiple delayed copies of the transmitted signal overlap significantly with subsequent symbols, incurring ISI.
For the troposcatter channel, the multipath delay spread is usually described by the bilateral multipath delay spread, denoted 2σ, which varies from 10 to 500 nanoseconds with system parameters such as the communication distance and antenna aperture. In particular, for troposcatter communication systems, the empirical engineering formula for 2σ is given by (1) in [13], where d is the communication distance, f denotes the transmitter frequency, and D and a_e are the antenna aperture of the transmitter and the corresponding equivalent earth radius, respectively.
The ratio of the bilateral multipath delay spread to the symbol period of the transmitted signal, denoted 2σ/T_s with T_s the symbol period, is the most effective parameter for measuring the ISI of troposcatter communication systems. In the following, and without loss of generality, we can assume d = 150 km, f = 4.7 GHz, D = 2.4 m, and a_e = 8500 km. Substituting these values into (1), the bilateral multipath delay spread is 2σ = 282 ns. For a troposcatter communication system with a data rate of 8 Mbps, we obtain T_s = 192 ns and hence 2σ/T_s ≈ 1.47. This in turn means that mitigating ISI is a central problem for troposcatter communication systems with high-speed data transmission.
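As a quick arithmetic check, the ratio above can be reproduced directly from the stated values; the empirical formula (1) for 2σ itself is taken as given (a minimal Python sketch):

```python
# Quick check of the ISI severity ratio using the values quoted in the text;
# the empirical formula (1) for 2*sigma itself is taken as given.
two_sigma = 282e-9   # bilateral multipath delay spread, seconds
T_s = 192e-9         # symbol period, seconds
print(f"2sigma/Ts = {two_sigma / T_s:.2f}")  # -> 1.47
```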
As long as 2σ/T_s is small enough, the current symbol does not significantly affect the subsequent symbol over the next symbol period, implying that the ISI is not significant, and straightforward measures such as adaptive predistortion can be taken to combat it. However, when 2σ/T_s is large, more complex equalization technology must be utilized to mitigate the ISI caused by the multipath effects of troposcatter communication. In fact, when the channel fading is not severe, SC-TDE can eliminate the ISI well, but with increasing transmission rate its computational complexity becomes unacceptable. As mentioned above, SC-FDE is therefore applicable to practical systems over troposcatter multipath channels, with a favorable trade-off between performance and complexity.
The System Model.
We consider a spatial multiplexing MIMO system. The baseband equivalent system model of the MIMO-SC-FDE transceiver is shown in Figure 1. Let N_t and N_r be the numbers of transmit and receive antennas, respectively.
At the transmitter, the data stream is subdivided into N_t independent branches by a serial-to-parallel converter, and the branches are then grouped into blocks of length M. At the beginning of the transmitted signal block and at the end of each grouped data stream, a fixed UW sequence known to the receiver is inserted periodically, with length N_UW. To avoid interblock interference (IBI), we assume that N_UW is not less than L, the length of the CIR.
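The block framing can be illustrated with a short sketch; the sizes below are the ones quoted later in Section 5, while the UW pattern and data symbols are placeholder values:

```python
import numpy as np

# Illustrative UW block framing (sizes from Section 5; symbols are placeholders).
N, N_uw = 512, 24
M = N - N_uw                                   # data symbols per block (488)
rng = np.random.default_rng(0)
uw = 2.0 * rng.integers(0, 2, N_uw) - 1.0      # fixed BPSK unique word
qpsk = np.array([1+1j, 1-1j, -1+1j, -1-1j]) / np.sqrt(2)
data = rng.choice(qpsk, M)                     # QPSK data symbols
block = np.concatenate([data, uw])             # UW at the block end also serves
assert block.size == N                         # as the prefix of the next block
```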
Since the CIR matrix h_{i,j} is circulant, it has the eigendecomposition [15]

h_{i,j} = F^H H_{i,j} F,

where F is the normalized DFT matrix of size N x N; that is, its (m, n)th element is given by

F(m, n) = (1/sqrt(N)) e^{-j 2 pi m n / N}.

It is easy to prove that F^H is the corresponding matrix performing the IDFT operation. H_{i,j} is an N x N diagonal matrix with the main diagonal entries given by the channel frequency responses. Define D = I (x) F. With the property D^H D = I, we can obtain the frequency-domain MIMO system model

Y = H X + V,

where the channel frequency-domain response matrix H is composed of the diagonal blocks H_{i,j} and V is the frequency-domain noise vector.
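The diagonalization property can be verified numerically; the sketch below (with assumed sizes) builds a circulant CIR matrix and checks h = F^H H F:

```python
import numpy as np

# Verify numerically that a circulant CIR matrix is diagonalized by the
# normalized DFT matrix: h = F^H diag(fft(col)) F (assumed sizes N=8, L=3).
N, L = 8, 3
rng = np.random.default_rng(0)
col = np.zeros(N, dtype=complex)
col[:L] = rng.standard_normal(L) + 1j * rng.standard_normal(L)  # CIR taps
h = np.stack([np.roll(col, k) for k in range(N)], axis=1)       # circulant matrix

F = np.fft.fft(np.eye(N)) / np.sqrt(N)     # normalized DFT matrix
H = np.diag(np.fft.fft(col))               # diagonal channel frequency responses
assert np.allclose(h, F.conj().T @ H @ F)  # eigendecomposition holds
```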
However, the overall channel matrix H is not block diagonal in this form, which does not lead to computational savings. Hence, we rearrange the input and output vectors by frequency bin as

X_k = [X_1(k), ..., X_i(k), ..., X_{N_t}(k)]^T,  Y_k = [Y_1(k), ..., Y_j(k), ..., Y_{N_r}(k)]^T,

where X_k is the rearranged input and Y_k the corresponding output at the kth bin. Similarly, the channel matrix for the kth frequency bin, H_k, collects the kth diagonal entries of the blocks H_{i,j}. The channel matrix is thereby converted into a block diagonal matrix H = diag[H_1, ..., H_k, ..., H_N], and the channel model for each frequency bin is

Y_k = H_k X_k + V_k,

with V_k the corresponding frequency-domain noise vector. In the subsequent equalization process, Y_k is equalized by a frequency-domain equalization (FDE) scheme, which can be performed separately for each k. The equalized signal is converted from the FD back to the TD by the IFFT, and the resulting signal is finally detected by the receiver. The design of the equalizer is therefore of great importance to detection performance. In the next section, several equalization schemes are discussed in detail at the kth frequency tone in order to simplify the derivation and computation.
NP-DFE Scheme.
We first consider the NP-DFE scheme in MIMO systems without channel coding or interleaving. The coefficients of the feedforward filter and the noise predictor are obtained by minimizing the MSE. We prove that in uncoded systems this scheme has exactly the same performance as the conventional FD-DFE scheme. The advantages of NP-DFE are also discussed.
We concentrate on the receiver structure in Figure 2, which consists of a feedforward FDE processed in the frequency domain and a group of NPs processed in the time domain [16]. For simplicity, we assume that all NPs have the same order B. Thus, the coefficients of NPs for different data streams can be derived together in a matrix and vector form.
By multiplying the received vector by an N_t x N_r matrix W_k^H, the output is given by

Z_k = W_k^H Y_k,

where W_k contains the coefficients of the feedforward FDE. Converting Z_k to the time domain yields the sequence z_n. The data vector before detection can then be represented by

s_n = z_n - sum_{q=1..B} b_q d_{n-q},

where d_{n-q} = s_{n-q} - x^_{n-q} and b_q is an N_t x N_t matrix representing the coefficients of the NPs at the qth tap. Assuming that the feedback symbols are always correct, that is, x^_{n-q} = x_{n-q}, the error vector e_n follows, with its average autocorrelation matrix given in (18). The coefficients of the feedforward FDE and the NP can be obtained by minimizing the MSE, which is the trace of (18). Noting that E{X_k X_k^H} = sigma_x^2 I, closed-form expressions for both sets of coefficients follow. This shows that the proposed NP-DFE is also an optimal design in the sense of MMSE. Furthermore, by taking advantage of the MIMO architecture, different data streams can be reliably detected, and the structure of the NP can be changed dynamically without affecting the feedforward FDE.
MMSE-RISIC Scheme.
Here, when the MMSE criterion is adopted for the FDE, the equalized signal is given by (22). In fact, MMSE-based equalization is a trade-off between the channel noise and the RISI. Given the existence of RISI, FD-DFE and NP-DFE have been proposed to alleviate or eliminate the residual intersymbol interference, but their performance depends greatly on the order of the feedback filter operated in the time domain; consequently, the computational complexity becomes rather high as the order increases.
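A minimal per-bin sketch of such an MMSE FDE is shown below; it assumes the standard MMSE solution W_k = (H_k H_k^H + (sigma_v^2/sigma_x^2) I)^(-1) H_k, while the paper's exact coefficient expression is its Eq. (22):

```python
import numpy as np

# Minimal per-bin MMSE FDE sketch; assumes the standard MMSE solution
# W_k = (H_k H_k^H + (sv2/sx2) I)^(-1) H_k and applies Z_k = W_k^H Y_k.
def mmse_fde(Y, H, sv2, sx2):
    """Y: (N, Nr) received frequency bins; H: (N, Nr, Nt) per-bin channels."""
    N, Nr, Nt = H.shape
    Z = np.empty((N, Nt), dtype=complex)
    for k in range(N):
        Hk = H[k]
        Wk = np.linalg.solve(Hk @ Hk.conj().T + (sv2 / sx2) * np.eye(Nr), Hk)
        Z[k] = Wk.conj().T @ Y[k]
    return Z   # an IFFT of Z then yields the time-domain symbol estimates
```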
MMSE-RISIC equalization is built on MMSE equalization and is a new kind of decision feedback equalizer. However, this equalization has so far been used only in SISO systems; here, we propose an MMSE-RISIC equalization for MIMO systems for the first time. Figure 3 shows the structure of MMSE-RISIC equalization, in which the RISI of the system is estimated and eliminated in the frequency domain. In order to estimate the RISI of the MIMO system, some manipulations are carried out after the feedforward MMSE-FDE. Specifically, (22) can be rewritten as (23), and the observation of (23) indicates that X_k cannot be separated from Z_k by the method used in the SISO system. Instead, the data vector before detection is formed by estimating the RISI term and removing it from the equalized signal. The resulting RISI estimate can be used effectively to combat the residual intersymbol interference of the MIMO system. In addition, the complexity is low because the estimation mainly requires one FFT and one IFFT operation.
The Proposed MMSE-NP-RISIC Scheme
The NP-DFE algorithm takes into account only the channel noise, while MMSE-RISIC equalization focuses mainly on eliminating the residual intersymbol interference of the MIMO system. Based on the above, a novel MMSE-NP-RISIC equalization scheme is proposed; Figure 4 shows its structure. The NP part and the MMSE-RISIC part are both based on the MMSE criterion and operated in the frequency domain. The NP part therefore differs from the NP-DFE scheme mentioned above, where the feedback filter operates in the time domain. We analyze and derive the frequency-domain coefficients of both the feedforward FDE and the feedback FDE in the MMSE sense. As shown in Figure 4, the output vector of the NP part, U_k, is obtained at the output of the feedforward and feedback FDE stages. Supposing S_k = X_k, the detection error of the NP part in the frequency domain is given by (30), and the autocorrelation matrix of the detection error e_k is given by (31). The MSE is the trace of (31). By substituting (30) into (31), differentiating the trace with respect to W_k, and setting the result to zero, we obtain (32), where W_k denotes the frequency-domain coefficients of the feedforward FDE.
By introducing a constraint on the feedback coefficients and carrying out a similar manipulation, we obtain (34), where B_k denotes the frequency-domain coefficients of the feedback FDE.
Here we note that the NP part is different from the traditional NP-DFE scheme: in the MMSE-NP-RISIC scheme, both the FF FDE and the FB FDE operate in the frequency domain.
Then the MMSE-RISIC part is utilized to further eliminate the residual ISI existing in the output vector of the NP part. Here, we refer to the MMSE-RISIC scheme proposed above and the coefficient matrix can be obtained in a similar approach.
Actually, the input vector of the MMSE-RISIC part, U_k, can be rewritten in a form that exposes the residual ISI term, from which C_k, the frequency-domain coefficients of the RISI estimation, can be derived; when calculating C_k, W_k and B_k should first be obtained according to (32) and (34), respectively. The novel MMSE-NP-RISIC equalization scheme can eliminate the influence of both the channel noise and the residual ISI at the same time, an advantage that the existing methods do not have. This is of great importance, especially for high-capacity MIMO-SC-FDE troposcatter communication systems.
Simulation Analysis
In this section, we evaluate the performance of the proposed method in terms of the normalized mean squared error (NMSE) and bit error rate (BER), and compare its computational load with those of the other equalization methods.
In the simulated MIMO-SC-FDE system, the numbers of transmit and receive antennas are N_t = N_r = 2. A standard convolutional code with code rate 1/2, constraint length 5, and octal generator polynomials (23, 35) is applied. The coded bits are mapped to QPSK for data (BPSK for the UW). The block size has been set to N = 512, the UW extension size to N_UW = 24, and the data size is M = N - N_UW = 488.
By reference to a large amount of measured data, we choose a typical 300-kilometer troposcatter communication link in North China, which is subject to frequency-selective fast fading. Table 1 presents its channel parameters [17,18], in which nine multipath components are characterized by their relative delay and average power; the length of the channel impulse response is L = 9. We assume that each channel has a fixed impulse response over each block period and that the receiver has perfect synchronization and channel state information. In the processes of equalization and channel decoding, iterative algorithms can provide only a relatively small improvement in accuracy while multiplying the complexity, so iterations are not used to obtain the simulation results.
Here, for convenience of discussion, the implementation based on traditional MMSE linear equalization is called MMSE-FDE. The NP-DFE scheme is the noise-predictive decision-feedback detection proposed in [10], and MMSE-RISIC denotes the residual ISI cancellation based on the minimum mean square error criterion proposed in [11]. To illustrate the degradation due to the error-propagation phenomenon, the performance of MMSE-NP-RISIC and NP-DFE with correct symbols fed back is also provided. Furthermore, the matched filter bound (MFB) is provided as a useful metric for comparing equalizer structures. Figures 5 and 6 show the average BER and NMSE performance, respectively, of these equalization methods in the troposcatter channel described above.
The ZF-FDE cancels the interference completely without regard to noise amplification; the MMSE-FDE improves on this strategy by finding the optimal balance between interference cancellation and noise reduction that minimizes the total MSE. The NP-DFE consists of a linear detector and a linear prediction mechanism that reduces the noise variance; the noise-predictive implementation makes it easy to upgrade an existing linear detector by appending relatively simple additional processing. The MMSE-RISIC takes into account the fact that residual ISI remains after imperfect channel equalization, which severely degrades system performance: starting from the initial estimates of the MMSE equalizer, the RISIC algorithm is adopted to alleviate the residual ISI. The proposed MMSE-NP-RISIC scheme addresses both the noise term and the RISI term of the MMSE-equalized signal, which further improves the equalizer performance.
For the ideal NP-DFE and ideal MMSE-NP-RISIC, perfect (error-free) decisions are fed back to the FBF. In the presence of noise and residual ISI, however, decision errors are inevitable. The first error induced by noise and residual ISI is known as a primary error; as it is fed back through the FBF, instead of cancelling the post-cursor ISI components it adds additional interference.
As shown in Figure 5, the proposed MMSE-NP-RISIC method outperforms the other schemes mentioned above. For a BER of 10^-4, the performance degradation of the proposed structure is about 1 dB compared with the ideal MMSE-NP-RISIC. Compared with the NP-DFE and MMSE-RISIC, the proposed structure yields SNR gains of about 2 dB and 3.5 dB at BER = 10^-3, respectively. This can be explained as the consequence of two factors: the improved noise-predictive part and the RISIC process. In the proposed structure, the noise-predictive part gradually increases the reliability of the detected data, thus reducing the effects of the error propagation that limits traditional DFE performance, especially at low SNRs; moreover, the RISIC process removes the residual ISI existing in the equalized signal. What is more, at SNR = 10 dB the MMSE-NP-RISIC decreases the average BER by almost one order of magnitude compared with the NP-DFE. In the simulations we also find that the gaps in BER performance grow with increasing average SNR, which illustrates that the advantage of the proposed scheme is more obvious at relatively high SNRs.
However, the proposed MMSE-NP-RISIC still has a gap from the MFB. The performance degradation of the DFE with respect to the MFB can be decomposed into two parts: the error-propagation gap and the inherent gap of the ideal DFE from the MFB. Note that the gap from the MFB increases with increasing SNR, because in high-SNR scenarios ISI is the dominant factor. We can also conclude that the MMSE-RISIC and NP-DFE only slightly improve the BER performance of the system, while the MMSE-NP-RISIC brings a significant improvement.
In addition, Figure 6 illustrates the NMSE performance of the proposed scheme. When the average SNR is below 6 dB, MMSE-NP-RISIC exhibits performance similar to MMSE-RISIC and NP-DFE, although their implementation costs differ significantly; when SNR exceeds 8 dB, the proposed method achieves a smaller NMSE than the others, especially at relatively high SNRs. Moreover, error propagation results in an increased probability of error in subsequent symbol decisions; hence the NMSE degradation of the proposed structure is severe compared with the ideal MMSE-NP-RISIC. The computational complexity of the various schemes is evaluated mainly in terms of the number of complex multiplications (CMULs), for both signal processing and filter design: Table 2 presents the CMULs per output sample needed for signal processing, and Table 3 shows the CMULs for the equalizer design [8]. As shown in the tables, the overall computational load of the proposed MMSE-NP-RISIC is higher than that of the other methods, so the performance improvements come at an increased complexity cost. From the theoretical analysis and the run-time figures collected during simulation, this increase is acceptable, and the frequency-domain equalization can be performed for each frequency bin to further reduce the computational load.
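As a rough illustration of how the FFT stages alone scale with the block size, the sketch below uses the textbook radix-2 count of (N/2) log2(N) CMULs per length-N transform; this shows the scaling only and is not the exact accounting of Tables 2 and 3:

```python
from math import log2

# Back-of-envelope CMULs per output sample contributed by the FFT stages alone,
# using the textbook radix-2 figure (N/2)*log2(N) per length-N transform.
# Illustrates the scaling only; not the exact accounting of Tables 2 and 3.
N = 512
per_fft = (N / 2) * log2(N)
for n_tf, name in [(2, "linear FDE (one FFT + one IFFT)"),
                   (4, "with an extra FFT/IFFT pair (RISIC-style estimate)")]:
    print(f"{name}: {n_tf * per_fft / N:.1f} CMUL/sample")
```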
Conclusion
In this paper, we have presented a novel channel equalization scheme for MIMO-SC-FDE troposcatter communication systems. We first analyzed the well-known methods of combating the ISI existing in large-capacity troposcatter communication, and then focused on the MIMO-SC-FDE system model and several equalization schemes. Based on the MMSE criterion, the NP-DFE and MMSE-RISIC equalization schemes were extended to the MIMO system for the first time. Furthermore, we presented a new MMSE-NP-RISIC channel equalization scheme to combat the ISI of the troposcatter channel; the frequency-domain channel estimation and equalization can be performed separately for each frequency bin to further reduce the computational complexity of the proposed method. Numerical results show that, compared with the existing methods mentioned above, the proposed equalization scheme achieves better performance at an acceptable computational cost in MIMO-SC-FDE troposcatter communication systems.
Urotensin-II and endothelin-I levels after contrast media administration in patients undergoing percutaneous coronary interventions
Turgay Ulas, Hakan Buyukhatipoglu1, Mehmet S. Dal2, Idris Kirhan, Zekeriya Kaya3, Mehmet E. Demir4, Irfan Tursun5, Mehmet A. Eren, Timucin Aydogan, Yusuf Sezen3, Nurten Aksoy6 Department of Internal Medicine, Harran University, Faculty of Medicine, Sanliurfa, 1Division of Medical Oncology, Faculty of Medicine, Gaziantep University, Gaziantep, 2Department of Internal Medicine, Faculty of Medicine, Dicle University, Diyarbakir, 3Cardiology, Faculty of Medicine, 4Medicine, Division of Nephrology, Faculty of Medicine, 5Department of Internal Medicine, Igdir Training and Research Hospital, Igdir, Turkey, 6Biochemistry, Faculty of Medicine, Harran University, Sanliurfa, Urfa
INTRODUCTION
Renal hemodynamics change due to the effects of contrast media (CM) through the action of many mediators, and these mediators are still not clearly known [11,12]. CM-injection-related endothelial damage, assessed on histopathological endpoints, leads to apoptosis and death of endothelial and tubular cells and may be initiated by cell membrane damage. [13] Mechanical shear stress, besides physicochemical properties such as osmolality or viscosity, causes endothelial damage. [14] A reduction in renal perfusion caused by a direct effect of CM on the kidney, together with toxic effects on the tubular cells, is generally regarded as the main factor; however, the pathophysiologic relevance of direct effects of CM on tubular cells is contentious. [15,16] Based upon these relationships between CM, UT-II and ET-I, we investigated UT-II and ET-I levels after CM administration in patients undergoing percutaneous coronary interventions (PCI). The exclusion criteria that would influence UT-II, ET-I and renal functions were as follows: intravascular administration of iodinated CM within 7 days before study entry, or a history of serious reaction to intravascular iodinated CM; administration of theophylline, N-acetylcysteine, or mannitol within 7 days before or after contrast administration; initiation, discontinuation, or change in dose of an angiotensin-converting enzyme inhibitor or angiotensin receptor blocker within 72 h before study entry; initiation of nephrotoxic agents or non-steroidal anti-inflammatory drugs within 72 h of study entry; acute coronary syndromes; any coexisting cardiac disease; any evidence of liver, kidney, or respiratory disease; diabetes mellitus; malignancy; any infectious, inflammatory, or infiltrative disorder; unregulated hypertension; reduced left ventricular ejection fraction or any findings or history of congestive heart failure; pregnancy; and lactation. Just before the PCI, blood and urine samples were obtained to measure baseline UT-II and ET-I.
As contrast material, nonionic CM was used in various quantities (70-480 mL) depending on the clinical indications (Xenetix 300; Guerbet, Roissy, France; contains iobitridol at 300 mg iodine/mL). Adequate hydration was ensured before the procedure by advising all patients to drink at least 1500 mL of water during the preceding 24 h. In addition, just before the procedure, each patient was given 500 mL of isotonic saline. Patients were also hydrated to ensure at least 2000 mL of urine output after the procedure. Blood and urine samples were obtained again to measure UT-II and ET-I at 24 h.
Baseline definitions and measurements
Height and weight were measured according to standardized protocols. Body mass index was calculated as the weight in kilograms divided by the height in meters squared (kg/m^2). Blood pressure was measured using an aneroid sphygmomanometer; the average of three BP measurements was calculated after 15 min of comfortable sitting for each subject.
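For illustration, the BMI computation described above is simply (example values only, not study data):

```python
# BMI as defined above: weight in kilograms divided by height in meters squared.
def bmi(weight_kg: float, height_m: float) -> float:
    return weight_kg / height_m ** 2

print(f"{bmi(80.0, 1.75):.1f} kg/m^2")   # example values -> 26.1 kg/m^2
```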
Biochemical analysis
All blood samples were drawn from a large antecubital vein without interruption of venous flow, using a 19-gauge butterfly needle connected to a plastic syringe. Twenty milliliters of blood were drawn, with the first few milliliters discarded. Ten milliliters were used for baseline routine laboratory tests. The residual content of the syringe was transferred immediately to polypropylene tubes, which were then centrifuged at 3000 rpm for 10 min at 10 to 18°C. Supernatant plasma samples were stored in plastic tubes at -80°C until assayed.
Measurement of UT-II and ET-I
UT-II and ET-I levels were measured by new fluorescent enzyme immunoassay (EIA) kits (Phoenix Pharmaceuticals, Burlingame, CA, USA). For the UT-II immunoreactivity assay, the cross-reactivity with human UT-II was 100%; no cross-reactivity was found with human ET-I, angiotensin II, bradykinin, neurotensin or brain natriuretic peptide. For the ET-I immunoreactivity assay, cross-reactivity with human ET-I was 100%; no cross-reactivity was found with human angiotensin II and [Arg8]-vasopressin. The intra- and inter-assay coefficients of variation for both UT-II and ET-I were <10%.
Other variables
Serum urea, creatinine, fasting blood glucose, aspartate aminotransferase, alanine aminotransferase, triglycerides, total cholesterol, and high-density and low-density lipoprotein cholesterol levels were determined using commercially available assay kits with an auto-analyzer (Abbott, Abbott Park, North Chicago, Illinois, USA).
Statistical analysis
All statistical analyses were performed using SPSS for Windows version 17.0 (SPSS, Chicago, IL, USA). The Kolmogorov-Smirnov test was used to test the normality of the data distribution. The data were expressed as arithmetic means and standard deviations. Paired t-tests and Wilcoxon signed-rank tests were used to analyze changes within each group. Pearson's correlation analysis was used to examine the associations of demographic and biochemical variables. A linear regression analysis was performed to identify the independent predictors of UT-II and ET-I levels. A two-sided P value < 0.05 was considered statistically significant.
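A sketch of these analyses in Python (scipy.stats), run on hypothetical data rather than the study data, would look like the following:

```python
import numpy as np
from scipy import stats

# Sketch of the analyses above on hypothetical data (not the study data).
rng = np.random.default_rng(0)
baseline = rng.normal(0.9, 0.15, 78)             # e.g. baseline creatinine, mg/dL
hour24 = baseline + rng.normal(0.08, 0.05, 78)   # twenty-fourth hour values
cm = rng.uniform(70, 480, 78)                    # CM volume, mL (range from text)

diff = hour24 - baseline
z = (diff - diff.mean()) / diff.std(ddof=1)
print(stats.kstest(z, "norm"))            # Kolmogorov-Smirnov normality test
print(stats.ttest_rel(hour24, baseline))  # paired t-test
print(stats.wilcoxon(hour24, baseline))   # Wilcoxon signed-rank test
print(stats.pearsonr(cm, hour24))         # Pearson correlation with CM amount
```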
RESULTS
Clinical, laboratory and demographic characteristics of all subjects are presented in Table 1. Compared to baseline, twenty-fourth hour creatinine levels were significantly increased (P < 0.001). The twenty-fourth hour serum and urine levels of both UT-II and ET-I were also significantly increased compared to baseline (P < 0.001 for all) [Table 2].
In bivariate analysis, twenty-fourth hour serum and urine UT-II (r = 0.322, P = 0.004 and r = 0.302, P = 0.007, respectively) and ET-I (r = 0.511, P < 0.001 and r = 0.266, P = 0.019, respectively) levels were significantly correlated with the amount of CM [Table 3, Figure 1]. In a linear regression model with UT-II as the dependent variable and the other continuous variables as independent factors, no effect on UT-II levels was observed (r = 0.453, adjusted r^2 = 0.205, P = 0.567). In another model, with the ET-I level as the dependent variable, only the amount of CM was found to affect ET-I levels (r = 0.634, adjusted r^2 = 0.232, P = 0.001).
DISCUSSION
The present study yielded intriguing results. The main findings were that (i) the twenty-fourth hour levels of UT-II and ET-I in both serum and urine samples were significantly increased compared to baseline, and (ii) these levels were significantly correlated with the amount of CM.
The exact pathogenesis of contrast-agent-induced injury is still unclear; it is considered to arise from interactions of several major pathogenetic mechanisms. Researchers have found evidence for direct toxic effects of CM on renal tubular cells. [17-19] CM induces renal vasoconstriction and subsequently causes renal medullary ischemia, leading to tubular injury or even necrosis and eventually reducing the glomerular filtration rate. [20,21] This reduction may reflect direct cytotoxic effects of high tissue osmolality on the renal tubules, which undergo vacuolization and apoptosis, and an increased local release of vasoconstrictive mediators such as ET-I, adenosine, free oxygen radicals, and calcium ions after CM administration. [8,16] Changes of UT-II levels in the plasma and urine of patients with renal dysfunction imply a role for UT-II in renal diseases. [9,12] Plasma and urinary concentrations of UT-II are increased in essential hypertension; plasma UT-II is also increased in patients with renal dysfunction and in type II diabetics with renal nephropathy. [22,23] Nothaker et al. suggested that the kidney is the principal site of UT-II synthesis in humans, [24] while Matsushita et al. proposed that the human UT-II measured in urine is mainly derived from a renal source. [25] Several previous reports showed that ET-I has a pivotal role in the pathogenesis of acute renal failure of various etiologies, including ischemia, CM, glycerol injection, and obstruction, [26] and it has been reported that ET-I is involved in renal injury. [27] It is noteworthy that renal medullary ET-I synthesis is higher than in any other body tissue, and the renal vasculature shows greater sensitivity to ET-I than other vascular beds in the systemic circulation. [10] An involvement of endothelin in contrast-induced nephropathy appears likely given the enhanced endothelin levels in plasma and urine observed after radiocontrast application. [15] In addition, the transcription and release of endothelin from endothelial cells is enhanced by CM; moreover, in patients suffering from impaired renal function, the increase in endothelin after giving radiocontrast is exaggerated. [15]
Abassi ZA et al. reported that large amounts of ET are found in the urine compared with the small amounts present in blood, and proposed that degradation of plasma-filtered ET by neutral endopeptidases in the proximal tubule, together with urinary ET probably being of renal origin, [28] would explain the inconsistency between serum and urine ET levels. Tsau YK et al. likewise suggested that renal production, rather than clearance from the circulation by glomerular filtration, may be the source of urinary ET-I. [29] However, it is important to note that only a very limited number of studies have investigated both UT-II and ET-I levels. Chai SB et al. suggested that plasma levels of UT-II and ET-I increase owing to endothelium injured during percutaneous transluminal coronary angioplasty. [11] In that study, baseline and twenty-fourth hour levels of both UT-II and ET-I were increased compared to healthy subjects; at the third day, UT-II levels remained increased over baseline, whereas ET-I levels were similar to baseline levels.
The UT-II and ET-I levels were found to be decreased at the seventh day after the percutaneous transluminal coronary angioplasty. However, the authors did not report the CM amount, nor did they correlate the amount of CM with UT-II and ET-I levels. [11] Hirose T. et al. investigated possible changes of UT-II expression in cardiovascular tissues in hypertension; they examined and compared the gene expression of UT-II with that of ET-I in the heart, aorta and kidney of hypertensive rats in comparison with control rats. Expression of the UT-II gene was significantly increased in the aorta, similarly to the kidney, in contrast to the significantly decreased expression of the ET-I gene. [30] In our study, we found increased UT-II and ET-I levels in both serum and urine at the twenty-fourth hour after CM administration compared to baseline. These findings raise the possibility that CM injures both endothelial and tubular cells, causing the increased expression of these mediators.
Certain limitations of the present study should be considered. Firstly, it is a single-center study and the sample size was relatively small. Secondly, more detailed information would have been gained by assessing UT-II and ET-I levels on consecutive days; such an investigation would perhaps provide deeper insight into the pathogenesis of the kidney injury and add to the value of our findings.
CONCLUSION
Both UT-II and ET-I were found to be increased after CM administration, which may be a consequence of the hazardous effects of CM on endothelial and tubular cells, and increased UT-II and ET-I might serve as biochemical markers of renal injury after CM. Future large-scale prospective cohort studies are needed to confirm or exclude the findings of the present study and to elucidate the pathophysiological mechanisms of increased UT-II and ET-I levels after CM.
Figure 1: Relationship between twenty-fourth hour serum and urine urotensin-II and endothelin-I levels and the amount of CM
Table 1: Demographic, laboratory and clinical characteristics of the patients
All values are given as mean±standard deviation; BP=blood pressure; BMI=body mass index; AST=aspartate aminotransferase; ALT=alanine aminotransferase; LDL=low-density lipoprotein; CM=contrast media
Table 2: Baseline and twenty-fourth hour comparisons of creatinine, urotensin-II and endothelin-I levels
All values are given as mean±standard deviation; UT-II=urotensin-II; ET-I=endothelin-I; the paired-sample t-test (a) and Wilcoxon signed-rank test (b) were used
Table 3: Correlation analysis of twenty-fourth hour serum UT-II and ET-I levels
BP=blood pressure; BMI=body mass index; AST=aspartate aminotransferase; ALT=alanine aminotransferase; LDL=low-density lipoprotein; CM=contrast media
Performance analysis of superconducting generator electromagnetic shielding
In this paper, the shielding performance of electromagnetic shielding systems is analyzed using the finite element method. Considering the non-iron-core rotor structure of superconducting generators, it is proposed that the alternating stator magnetic field generated under different operating conditions can be decomposed into an oscillating and a rotating magnetic field, so that complex problems are greatly simplified. A 1200 kW superconducting generator is analyzed. The distributions of the oscillating and rotating magnetic fields in the rotor area generated by the stator winding currents, and the distributions of the eddy currents these fields induce in the electromagnetic shielding tube, are calculated without an electromagnetic shielding system and with three different shielding structures. On the basis of the FEM results, the shielding factor of the electromagnetic shielding systems is calculated and the shielding effects of the three structures on the oscillating and rotating magnetic fields are compared. The method and results in this paper can provide a reference for the optimal design and loss calculation of superconducting generators.
Introduction
In recent years, there has been an increasing demand for high-power-density, high-capacity generators. In wind power and other fields, the volume and weight of conventional generators are too large, which restricts further industry development. The magnetic induction intensity of superconducting generators can reach a few tesla. Relative to conventional generators, superconducting generators have small volume and weight, compact structure, high power density and efficiency, large ultimate capacity, and good stability. Therefore, they are regarded as among the most attractive and novel generators with business competitiveness in the near future [1]. The electromagnetic shielding system is one of the unique structures of superconducting generators. When superconducting windings work in an alternating magnetic field, AC losses are produced in the windings. These losses increase the cryogen consumption and refrigeration power and cause a temperature rise, so that the efficiency of the generator is reduced; when serious, the temperature rise will lead to quench of the superconducting tapes. The electromagnetic shield is used to protect the superconducting windings from the alternating magnetic field and to reduce its effect on them, in order to ensure that the superconductor works normally in the superconducting state and to improve generator efficiency. For superconducting generators, the electromagnetic shield is a very important key component.
Since cryogenic superconducting material was first applied to a superconducting machine in the 1960s, electromagnetic shielding has been a research focus in the field of superconducting machines. In 1974, Kirtley at the Massachusetts Institute of Technology proposed that the electromagnetic shielding system must provide a high degree of isolation of the rotor from rapidly time-varying magnetic field components due to space harmonics of armature fields, time harmonics in the system, system imbalance resulting in negative-sequence currents, and faults; while providing adequate shielding of the rotor, it must not be too good a shield, for it must allow the rotor flux changes due to control to pass through [2]. The calculation of the shielding effectiveness of a superconducting machine's electromagnetic shielding system requires solving for the electromagnetic field distribution; the main methods are analytical field solutions and the finite element method. In the paper of T. J. E. Miller et al., the fast Fourier transform was applied to a range of transient-screening problems in the design of a superconducting a.c. generator, bringing out several important characteristics of screens in this type of machine; the integral-transform approach was shown to have important advantages in obtaining solutions to transient-field problems of electromagnetic screens [3]. A study by P. J. Lawrenson et al. on the screening and damping properties of superconducting a.c. generators showed that a single screen is unlikely to provide both adequate screening and damping; the double screen gives improved performance but is subject to degradation of both properties because of interactions between the screens [4]. Based on the finite difference or finite element method, [5] and [6] introduced the T-Ω and A*-φ methods to the electromagnetic field calculation of superconducting generator shielding systems. The above research subjects are low-temperature superconducting machines.
HTS tapes have brought new opportunities for the development of superconducting generators. HTS tape can run more stably in the superconducting state, has a higher critical current, and exhibits smaller AC losses in harmonic magnetic fields [7]. In order to maximize superconducting machine power density, the thickness of a single cylindrical copper shield was optimized analytically in [8]. At present, research on the theory and optimal design of HTS generator electromagnetic shielding systems is still relatively scarce. With the development of HTS generators, new challenges arise for the performance analysis and simulation of electromagnetic shielding systems.
Structure and main parameters of the calculation prototype of the superconducting generator
In order to analyze and calculate the electromagnetic shielding characteristics of the superconducting generator shielding system, we chose a 1200 kW, 6-pole synchronous superconducting generator. The generator stator core is made of silicon steel, the stator winding is a conventional wire armature, and the rotor field winding is made of superconducting tapes. The rotor has a coreless structure. The structure of the generator is shown in Figure 1.
Finite element analysis
When the superconducting generator operates in a steady or transient state, the fundamental or harmonic components of the armature reaction magnetic field produced by the stator winding currents may constitute a transient or alternating magnetic field for the superconducting field windings and other metal components in the rotor, generating AC losses within them. In order to minimize these losses, an electromagnetic shielding tube is installed on the generator rotor, and the shielding system performance must be analyzed and optimized.
The magnetic fields that need to be shielded include: the higher harmonic components of the armature reaction magnetic field and the vibration magnetic field generated by generator structural asymmetry in symmetrical steady-state operation; the zero-sequence and negative-sequence components of the armature reaction magnetic field in asymmetric steady-state operation; the transient magnetic fields generated by transient or short-circuit currents in transient operation and in three-phase or single-phase short circuits; and the oscillating magnetic field generated during rotor oscillation. These electromagnetic processes are usually very complex and cannot be analyzed one by one. In order to simplify the problem, the transient or alternating magnetic fields produced in the different operating states can be decomposed into two categories: oscillating magnetic fields and rotating magnetic fields. Below we analyze separately the shielding effect of the generator's electromagnetic shielding system on these two types of fields, in order to analyze and optimize the shielding system performance.
Because no magnetic core is used in the rotor of the analyzed superconducting generator, the axial component of the magnetic field within the space occupied by the rotor is greater than in a conventional machine with a cored rotor. Considering that the radial and circumferential components of the alternating magnetic field are maximal in the cross-section through the axial center of the generator, and decrease away from this cross-section, we only need to analyze the shielding of the alternating field in this cross-section; on this basis, a reasonable shielding system optimization can guarantee that the shielding performance meets the requirements. Therefore, the analysis of the electromagnetic shielding system is simplified to a two-dimensional field problem.
In order to consider the effect of the structure and size of the electromagnetic shielding system on its shielding performance, we analyzed three different electromagnetic shielding systems: scheme I, a 15 mm thick copper tube; scheme II, a composite structure of a 5 mm thick stainless steel outer tube and a 10 mm thick copper inner tube; and scheme III, a composite structure of a 10 mm thick stainless steel outer tube and a 5 mm thick copper inner tube. In the calculations, the electrical conductivity of copper is taken as the standard value of 5.8 x 10^7 S/m, and the electrical conductivity of stainless steel is 1.1 x 10^6 S/m.
Oscillating magnetic field
The governing equation of the oscillating magnetic field can be expressed as

curl((1/μ) curl A) + σ(∂A/∂t + grad φ) + ε ∂/∂t(∂A/∂t + grad φ) = 0,   (1)

where A is the magnetic vector potential, φ the electric scalar potential, μ the permeability, σ the conductivity, ε the dielectric constant, and ω the angular frequency of the time-harmonic case considered below.
In the calculation of the oscillating magnetic field distribution, the rotor core is not taken into account. The oscillating magnetic field can be decomposed into a series of oscillating fields of different frequencies that vary sinusoidally with time. Since each component varies sinusoidally, equation (1) can be simplified to

curl((1/μ) curl A) + jωσA = J,   (2)

where A is now the complex (phasor) form of the magnetic vector potential and J the complex form of the conduction current density.
In addition to the fundamental, which is the most important component, the 5th and 7th harmonics and the tooth harmonics should be considered when analyzing the magnetic field of superconducting generators. The tooth harmonics of the calculation prototype are the 11th and 13th harmonics. Here, only the fundamental, the 5th harmonic, and the 11th harmonic are considered.
According to the theory of generator windings, when a sinusoidal time-varying current flows in only one phase of the three-phase stator winding, the magnetic field generated by the stator winding is an oscillating magnetic field. This field was calculated using the finite element method for a 110 A, 50 Hz sinusoidal alternating current in one phase of the stator winding. Figure 4 shows the magnetic flux density amplitude distribution curve from the starting point (0, 85) to the end point (0, 265) along the Y axis, that is, from the rotor inner diameter to the shielding tube inner diameter, without an electromagnetic shielding system and with the three different shielding systems. By comparison it can be seen that the electromagnetic shielding system dramatically reduces the oscillating magnetic field entering the rotor, and that the greater the thickness of the copper tube, the better the shielding effect. When the copper tube is thicker, not only is the amplitude of the oscillating field entering the rotor smaller, but its decay with increasing depth into the rotor area is also faster. The reason for this is that the conductivity of copper is greater than that of stainless steel.

Figure 5 shows the eddy-current distribution in the shielding tubes of the three different shielding systems. It can be seen that the eddy currents induced by the oscillating magnetic field are mainly distributed in the copper tube and, due to the skin effect, are concentrated on the surface of the copper tube. The oscillating magnetic field and the eddy currents were also calculated for a 110 A, 250 Hz current (corresponding to the 5th magnetic field harmonic) and a 110 A, 550 Hz current (corresponding to the 11th harmonic) in one phase of the stator winding; those field and eddy-current distributions are not shown here.

On the basis of the above field calculations, the shielding factors were computed for the three shielding structures and the three current frequencies. The shielding factor is defined as [9]

S = H_s / H_0,

where H_0 is the circumferential component amplitude of the oscillating magnetic field strength at a point inside the rotor without the electromagnetic shielding system, and H_s is the corresponding amplitude at the same point with the shielding system. Table 1 shows the shielding factors of the three shielding systems at the coordinate point (0, 205) for 110 A sinusoidal stator currents of 50 Hz, 250 Hz, and 550 Hz. The data show that the higher the frequency of the stator winding current, the smaller the shielding factor, i.e., the better the shielding effect; and that the shielding factor of scheme I is smaller than that of scheme II, which in turn is smaller than that of scheme III, so the shielding effect of scheme I is better than scheme II, and scheme II is better than scheme III.
This shows that the thicker the copper tube of the electromagnetic shielding system, the better its shielding effect on the oscillating magnetic field.
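The frequency and conductivity trends in Table 1 can be rationalized with a simple plane-wave skin-depth estimate; the sketch below is a rough illustration, not the FEM model, and the copper conductivity is the assumed handbook value used above:

```python
import numpy as np

# Rough plane-wave skin-depth estimate rationalizing the trend in Table 1;
# this is not the FEM model, and the copper value is the assumed handbook figure.
mu0 = 4e-7 * np.pi
sigma = {"copper": 5.8e7, "stainless steel": 1.1e6}   # S/m
for f in (50.0, 250.0, 550.0):
    for name, s in sigma.items():
        delta = np.sqrt(2.0 / (2 * np.pi * f * mu0 * s))
        print(f"{f:5.0f} Hz  {name:15s} skin depth = {delta * 1e3:6.1f} mm")
```

At 50 Hz the skin depth in copper is of the order of 10 mm, comparable to the tube thicknesses considered, which is consistent with thicker copper shielding markedly better and with stainless steel contributing little.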
Rotating magnetic field
In calculating the rotating magnetic field of the superconducting generator, a three-phase sinusoidal alternating current with phase differences of 120 electrical degrees in time is applied to the three-phase stator winding, whose phases are displaced by 120 electrical degrees in space. Three-phase sinusoidal currents of different frequencies produce rotating magnetic fields of correspondingly different speeds in the generator. In the analysis and calculation of these rotating fields, the moving-conductor electromagnetic field equation is used as the governing equation; it can be expressed as

curl((1/μ) curl A) + σ(∂A/∂t - v x curl A) = J_S,

where A is the magnetic vector potential, J_S the conduction (source) current density, μ the permeability, σ the conductivity, and v the velocity of the moving part.
In the generator, the rotating magnetic field generated by a 50 Hz three-phase sinusoidal alternating current is the most important. Here, we assume that a 110 A, 50 Hz three-phase sinusoidal current is applied to the three-phase stator winding and that the rotor is stationary. In this case, a rotating magnetic field with a speed of 1000 r/min is generated by the stator currents; since the rotor is stationary, this is also the speed of the rotating field relative to the rotor. The magnetic field distribution in the rotor was calculated without the electromagnetic shielding system and with the three shielding systems described above. Figures 6 and 7 show the magnetic flux line distributions on the generator cross-section without a shielding system and with the 15 mm thick copper tube, respectively. Comparing the two figures, it can be seen that most of the rotating magnetic field generated by the stator windings is kept outside the rotor by the electromagnetic shielding tube.
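The 1000 r/min figure follows directly from the standard synchronous-speed relation n = 60 f / p, with p the number of pole pairs:

```python
# Synchronous speed of the rotating field for the 6-pole (p = 3 pole-pair) prototype.
f_hz, pole_pairs = 50.0, 3
n_sync = 60.0 * f_hz / pole_pairs
print(f"{n_sync:.0f} r/min")   # -> 1000 r/min, as stated above
```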
According to the field calculation results, the shielding factors were obtained: for schemes I, II, and III the values are 0.067, 0.125, and 0.286, respectively. The results show that the shielding effect of scheme I on the rotating magnetic field is better than that of scheme II, and that of scheme II better than scheme III, consistent with the conclusion for the oscillating magnetic field.
Conclusion
For any working condition of a non-iron-core-rotor superconducting generator, the stator winding magnetic field capable of generating losses in the superconducting windings or rotor parts can be decomposed into an oscillating magnetic field and a rotating magnetic field. The shielding effects of the electromagnetic shielding system on these two fields can then be analyzed separately, so that the original complex problem is simplified.
The comparison of the shielding factors of the three electromagnetic shielding tubes shows that both the copper tube and the stainless steel-copper composite tube can shield the oscillating and rotating magnetic fields well. The better the electrical conductivity of the shielding tube, the better the shielding effect.
The calculation method presented here can be used to analyze the performance of electromagnetic shielding systems and to calculate the losses of generator parts in the low-temperature area under various complex operating conditions.
ZM theory I: Introduction and Lorentz covariance
We consider defining time as a function of a cyclical field, an abstraction of a clock. The definition of time corresponds to a novel interpretation of the relationship between space-time coordinates of observers at different locations in space. As a first test of the utility of this definition, we show that it leads to a Lorentz covariant description of space-time. This derivation of Lorentz covariance provides a starting point for considering more general constructions that relate to physical laws. The definition of time couples time to space, making time not orthogonal to space, and making dynamics a result of geometry, providing a vehicle for curved space-time theories that generalize general relativity.
II. INTRODUCTION
We consider a system, Z, with some set of distinctly labeled states that perform sequential transitions in a cyclic pattern, i.e. an abstract clock (similar to a conventional "non-digital" clock consisting of a numbered dial, here with a single moving hand). Discreteness of the clock will not enter into the discussion in this paper; the clock states can therefore be extended to a cyclical continuum, U(1). The state change of the clock constitutes proper time, τ, as defined by the clock. Since the clock is cyclical it is possible to represent the changing state using an oscillator language:

ψ(τ) = e^{imτ},   (1)

where

m = 2π/T   (2)

is the cycle rate in radians. The clock phase is c = mτ modulo 2π, though this expression is not analytic, so derivatives should be defined in terms of ψ. The notation is chosen anticipating that m will become the 'rest mass' of the clock when it is reinterpreted as a particle. A priori there is no difference between clockwise and counter-clockwise rotation; however, once one is identified it is distinct from the other.
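As a small illustration of Eqs. (1) and (2), with an assumed period T:

```python
import numpy as np

# The abstract clock as a phase oscillator, Eqs. (1)-(2); T is an assumed value.
T = 2.0
m = 2 * np.pi / T                   # cycle rate in radians, Eq. (2)
tau = np.linspace(0.0, 3 * T, 13)   # proper time samples over three cycles
psi = np.exp(1j * m * tau)          # clock state on U(1), Eq. (1)
c = np.mod(m * tau, 2 * np.pi)      # clock phase, c = m*tau modulo 2*pi
```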
As in traditional textbooks, we assume the clock time may be observed at any point in the environment. To introduce the space manifold, M, we consider a parameter, x, associated with the environment such that properties of the environment may lead to variation of the clock state with x (we assume x is a real number parameter). The distinctness of clock states is a kind of symmetry breaking, and the variation of clock state with space cannot be neglected as it would be if only topology mattered. The breaking of symmetry implies the reference value (coordinate origin) of clock time cannot be redefined arbitrarily. This treatment is different from the conventional description of space-time and is perhaps the most crucial defining assumption of ZM theory with other assumptions flowing more or less naturally from it. In the following we will be meticulous about providing details of derivations to clarify the assumptions.
As a first case to study, we consider the clock variation with x to be uniform in x. This is not a fundamental assumption, but a convenient simple case, i.e. it will correspond to a single particle in a flat space (as traditionally understood). In general the approach is to develop formalisms that can describe progressively more elaborate variations. In this paper we consider only the case of uniform variation. We write the rate of change of the clock in space x as −k, where the negative sign indicates that for positive k proper time decreases for increasing x values (see Figure 1). In order to properly define the rate of change of the clock in space, the units of distance measuring x must be compared to the units of τ . We assume, for convenience, that it is possible to choose the units measuring x to be the same as the units measuring τ , and thus the units of k are the same as m.
Our primary concern is the definition of time, t, as defined by the observer, which is not the same as the clock state. Significantly, we assume a specific relationship between the observed time, the intrinsic clock state change, and the clock state change in space.
The assumption will only be justified by the implications that are obtained in this and subsequent papers. Specifically, observer time can be obtained from the rate of change of the clock, c, as a local gradient of the clock state in the two dimensional space given by τ̂ and x̂, treated as a Euclidean space, whose direction can be considered as a 'direction of time.' The rates of variation of the clock in the two dimensions are

(∂_τ c, ∂_x c) = (m, −k).   (3)

We assume that time measures Euclidean distance along the time direction in the same units as x and τ. To obtain the rate of change of the clock phase in the direction of time, consider possible directions of time

ŝ = (cos ζ, −sin ζ),

where ζ is the angle of rotation of τ̂ into −x̂, with a clock change

d_s c = ŝ · (∂_τ c, ∂_x c) = m cos(ζ) + k sin(ζ);

maximizing this with respect to the angle ζ gives

(cos ζ, sin ζ) = (m/ω, k/ω)   (7)

and

ω = √(m² + k²),   (4)

where ω is defined to be the magnitude of the rate of change of the clock phase.
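As a quick numerical check (illustrative only; the values of m and k below are arbitrary), the maximizing angle and the maximal rate can be verified to satisfy tan ζ = k/m and ω = √(m² + k²):

```python
import numpy as np

m, k = 1.0, 0.6                             # arbitrary clock rate and spatial variation
zeta = np.linspace(-np.pi, np.pi, 200001)
dsc = m * np.cos(zeta) + k * np.sin(zeta)   # clock change along direction s-hat

i = np.argmax(dsc)
omega = np.hypot(m, k)                      # sqrt(m**2 + k**2), Eq. (4)

assert np.isclose(dsc[i], omega, atol=1e-6)           # maximal rate equals omega
assert np.isclose(np.tan(zeta[i]), k / m, atol=1e-3)  # maximizer satisfies tan(zeta) = k/m
```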
Distances are measured by considering variation in the τ̂ and x̂ axes to be along independent Euclidean dimensions. Care must be taken to avoid confusion between the distance along the τ̂ axis and the value of the τ variable representing the clock rotation. By geometry, from Eq. (7) the triangle in Fig. 2 has a vertical arm along the τ̂ axis of length τ₀, a horizontal arm of length

(k/m) τ₀,

and a diagonal along the time direction of length

(ω/m) τ₀.

Since time measures distance along the time direction this is the value of time at the point at the end of the diagonal:

t = (ω/m) τ₀.

We can specify locations in the two dimensional space by their coordinates along the axes, (τ, x). At the point along the time axis at the end of the diagonal we have

(τ, x) = (τ₀, −(k/m) τ₀).

The value of the clock at this location can be directly calculated from the specification of the rate of its variation in the two dimensions in Eq. (3) as:

c = m τ₀ − k x = m τ₀ + (k²/m) τ₀ = (ω²/m) τ₀.

τ is given by c = mτ, so that

τ = (ω/m)² τ₀ = (ω/m) t.

These expressions hold for τ₀ and τ smaller than a cycle, or by analytic continuation for all real values, for the case of linear variation of τ in space.
This expression, however, does not apply along other directions. Indeed, the definition of time thus far only identifies the variation of time starting from a specific initial location, e.g. x = 0. If we want to relate time starting at different coordinates x, we must add an additional assumption. One possible and reasonable (though not necessary) assumption is that the observer time t is synchronous along the observer defined space coordinate x. Using this assumption, we can then consider time t in the two dimensional space (τ₀, x₀). To express the dependence we can write the value of time as a function of proper clock time and position, t(τ, x), where the variables τ and x are not the same as the coordinate axes due to the variation of the clock along the x axis. Treating τ as extended by analytic continuation, we can write the relationship:

t = (m τ + k x)/ω.   (17)

The addition of the term kx cancels the variation of the clock given by c = mτ, which is −kx along the x direction, so that time has the same value everywhere along a synchronous (constant-t) line. The spatial origin of the observer's coordinates (the point x = 0) follows the time axis, which in the coordinates of the (τ̂, x̂) plane is given by

x = −(k/ω) t.

Since the coordinate axis is translating to the left, we might also say that the observer perceives a reference location to translate to the right with a velocity

v = k/ω,

so that we can write

x = v t.

This identifies a connection between the velocity in special relativity and k in ZM theory.
There are a number of comments that follow from the discussion of the observer clock relationship in ZM theory. These comments help in explaining the relationship of these ideas to special and general relativity and may enable further development. Still, these interpretive comments are not essential to the mathematical formalism that has been developed above.
First, it is important to recognize that the ability to observe a clock throughout space does not mean that time as defined by different observers is the same, or the same as time defined by the clock. Indeed, the focus of ZM theory, similar to special and general relativity, is on relating different observers' concepts of space-time.
Second, the approach of ZM theory in considering the relationships between observers and their clocks is similar to special relativity, in which the clock rates of two observers moving in relation to each other are not the same, so that one observer can observe that the clock of the other observer does not report the same time as his/her own clock, i.e. as the clock that is stationary in his/her reference frame.

Fourth, more specifically, consider two regions of space, A and B. In ZM theory, an observer at any specific location of observation reports that the clock state is ψ = exp(−imτ).
Our concern, however, is how the observer in region A reports the behavior of the clock in region B. In order to report on the behavior of the clock, the observer in region A may extrapolate his own local definition of time and space, which we might denote (t A , x A ), but which we write more simply as (t, x), to the region of space B. Since this is an extrapolation, the behavior of space-time in region B, as defined by the observer in region B, may not be the same as that defined by the observer in region A. We therefore consider various possible clock behaviors in region B and how they are to be understood by the observer in region A.
In particular, a key way that the extrapolated space-time may be different from that defined by the observer at B is that the clock at B may not be synchronous across the extrapolated space x at B. It is the extrapolated time t which is synchronous across the extrapolated space x. Thus, the formal development we have given for space-time (x, t) as defined by an observer can be understood directly as the space-time that is extrapolated from region A to region B.
Fifth, the non-orthogonality of space and the direction of time as seen by the observer gives rise to mathematical and conceptual problems. ZM theory associates their resolution with physical laws, interactions and dynamics. Issues include the difference between observer defined time and a particular clock's time as measured by the observer, and spatial variation in the clock/observer time.
From the perspective of the observer, who considers space and time to be independent variables, the clock performs a space-time rotation that can be represented as:

ψ = exp(−i(ωt − kx)).

At any location this gives the same values as

ψ = exp(−imτ).

In later papers of this series these issues will play a role in quantum phenomena. Assuming we are describing variation within a single cycle, or that analytic continuations are valid, enables the description of classical theoretical frameworks. Because the definition of time is in terms of the clock, the issue of analyticity arises even in the definition of coordinate systems. In this paper, because we are considering only linear variations of the clock in space, such analytic continuations are valid throughout space. We therefore discuss coordinates assuming the cycling of τ can be ignored. The conditions for this assumption will be discussed in a later paper.
III. LORENTZ COVARIANCE
We consider the set of possible representations and how they can be transformed to each other. Since a representation is characterized only by the spatial variation of the clock, we can label two representations by k and k ′ . Alternatively, using the fourth comment in Section II (introducing the association of observers with regions of space), we can consider two observers in regions A and A ′ who have different views of the clock at B and therefore report different values of spatial variation of the clock k and k ′ .
Lorentz covariance is the ability to relate valid descriptions by different observers using Lorentz transformations. To demonstrate Lorentz covariance, we ask what redefinition of coordinates (t, x) will give rise to a change from one clock representation with spatial variation k, to one with k′. A representation is given by the variation of τ in space-time, so we solve Eq. (17) for τ to obtain:

τ = (ω t − k x)/m.   (25)

In order for Lorentz covariance to apply, the same equation must hold after redefinition of coordinates with the spatial variation k′, i.e. for a second observer, with different definitions of space-time:

τ = (ω′ t′ − k′ x′)/m.   (26)

It is necessary to show that the Lorentz transformation is consistent with this relationship.
Writing the new coordinates as linear combinations of the old:

t′ = γ_r (t − v_r x),   x′ = γ_r (x − v_r t).   (27)

Inserting the coordinate transformation into Eq. (26) and equating to Eq. (25) we have:

ω′ γ_r (t − v_r x) − k′ γ_r (x − v_r t) = ω t − k x,   (28)

which must be valid for all (x, t). For completeness we include in detail the solution of this equation, which we obtain by equating the coefficients of x and t, giving:

ω = γ_r (ω′ + k′ v_r),
k = γ_r (k′ + ω′ v_r).   (29)

Dividing the second equation by the first,

k/ω = (k′ + ω′ v_r)/(ω′ + k′ v_r).

Solving for v_r gives

v_r = (v − v′)/(1 − v v′),

which is the usual Lorentz velocity composition formula for velocities in a single dimension.
To solve for γ_r we rewrite Eq. (29):

ω/γ_r = ω′ + k′ v_r,
k/γ_r = k′ + ω′ v_r.

Adding the equations yields

(ω + k)/γ_r = (ω′ + k′)(1 + v_r).

Solving for γ_r gives

γ_r = (ω + k)/[(ω′ + k′)(1 + v_r)].

Using Eq. (4) in the primed coordinate system (ω′² − k′² = m²) to simplify the denominator we have

γ_r = (ω ω′ − k k′)/m².

Finally, defining γ = ω/m and γ′ = ω′/m we obtain:

γ_r = γ γ′ (1 − v v′).

This expression can either be recognized or shown by additional algebra to be equivalent to the usual Lorentz transformation expression for γ composition. It is possible to simplify significantly the above derivation of Lorentz covariance. The well known composition rules of the Lorentz transformation imply that it is sufficient to consider the case where the primed coordinate system has k′ = 0; then the composition properties imply that we can more generally transform between (x, t) and (x′, t′) with arbitrary values of k′. In the simpler case, setting k′ = 0, we have ω′ = m and:

τ = t′.

In this case, the consistency equation (Eq. (28)) becomes

m γ_r (t − v_r x) = ω t − k x.

Equating the coefficients of x and t gives:

γ_r = ω/m = γ,   v_r = k/ω = v.
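The composition rules can be checked numerically; the following sketch (with arbitrary values of m, k and k′) verifies both the coefficient equations (29) and the equivalence of γ_r with the standard special-relativistic form:

```python
import numpy as np

m = 1.0
k, kp = 0.8, 0.3                          # spatial clock variations of the two representations
w, wp = np.hypot(m, k), np.hypot(m, kp)   # omega = sqrt(m**2 + k**2), Eq. (4)

v, vp = k / w, kp / wp                    # velocities v = k/omega
g, gp = w / m, wp / m                     # gamma = omega/m

vr = (v - vp) / (1 - v * vp)              # Lorentz velocity composition
gr = g * gp * (1 - v * vp)                # composed gamma

# gr agrees with the standard special-relativistic form 1/sqrt(1 - vr**2):
assert np.isclose(gr, 1.0 / np.sqrt(1.0 - vr**2))

# The coefficient equations (29) are satisfied:
assert np.isclose(w, gr * (wp + kp * vr))
assert np.isclose(k, gr * (kp + wp * vr))
```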
There are a number of comments that follow from the derivation of Lorentz covariance. These interpretive comments are not essential to the mathematical formalism that has been developed above.
First, just as in special relativity, no mechanism was given to achieve a Lorentz transformation as this would require acceleration. Acceleration is not included at this level of description.
Second, comparing with the conventional treatment of special relativity, the geometry of the transformation between the two coordinate frames appears different, not least because of the non-orthogonality of x and t. Still, the appearance of the Lorentz transformation means that according to some perspective special relativity and ZM theory are equivalent, or have a kind of correspondence, for flat space. Thus, Lorentz covariance suggests that many of the properties of special relativity may also apply to ZM theory, even though they do not appear in the assumptions.
Third, despite the mathematical mixing of τ and x, they play different roles from each other in this formalism and, together, they are different from the conventional abstract notion of space-time frames of observation in special and general relativity. τ is a cyclical field coordinate, while space is introduced in a more conventional way as an extrinsic real valued coordinate, though one whose definition may vary from location to location as in general relativity. Nevertheless, Lorentz covariance follows from the assumptions as given.
Fourth, the use of a transformation that mixes x and τ raises the question as to whether there is a way that they can be treated on equal footing. Since space may also reflect a change of state, similar to τ , despite the distinction in treatment between position and time variables it is possible that we could consider the Lorentz transformation to be a reapportionment of states between field and environment, i.e. between τ and x, in a way that would place them on equal footing. While this approach may be fruitful, it will not be pursued here.
Fifth, the variability of the clock over space implies that space must be reinterpreted if it is defined by simultaneity of this clock. Choosing synchronous space to be defined by the clock state at every location leads to intervals in space including intervals in proper time.
We define x̃ (see Figure 1) as the space for the synchronous clock. The direction of the axis is given by the condition that the clock state is constant along it, i.e. by the unit vector (k/ω, m/ω) in the (τ̂, x̂) plane, which is orthogonal to the direction of time. The values of x̃ can be written in terms of x and τ. Assuming that x̃ measures Euclidean distance along this direction, with the units of length unchanged, implies

x̃ = (k τ + m x)/ω.

As long as we are considering how the observer (at A) describes the behavior of the clock over space (at B) we will continue to use the space x, rather than x̃; x̃ will enter when we consider explicitly the variable metric of space in general relativistic formalizations in a future paper.
Sixth, in ZM theory time should be interpreted, as we experience it, as a sequence of state changes (whether discrete or continuous does not matter at this point) that we can identify as associated with one or more clocks. Time is not, a-priori, an extended dimension. This is consistent with our inability to observe events along time in the same way as we observe events at multiple locations of space. However, it may be possible under some circumstances to associate a dimension with the sequentiality of state changes, as we have done above in the definition of time. Such extensions of time may not, however, be possible in general.
Seventh, in conventional quantum mechanics and quantum field theory [4], space and time are extrinsic parameters. Still, in the Dirac formulation of quantum mechanics [5] the state of the system |ψ⟩ is a distinct entity from the spatial wave function defined as ⟨x|ψ⟩.

While eventually in that formalism x is considered as associated with the particle coordinate rather than as the definition of space, still, as a conceptual framework, we might consider the approach taken here to follow this prescription by separately identifying ⟨x| as a property of the observer/environment rather than of the system, i.e. space is a characteristic of the environment. The observed system has a global (i.e. universal in space) set of degrees of freedom to which the environment may make contact. This is a priori consistent with the non-locality of quantum states and transitions. Causal propagation is not assumed but is presumed to arise in an appropriate limit. Time, which starts as a property of the system rather than of the environment (through the transitions of the clock, i.e. proper time), can become mixed with space. The geometric mixing of time and space (distinct quantities describing environment and system relationships) is then perceived as dynamics.
IV. CONCLUSIONS
In this paper we have discussed how an unusual definition of time can result in a Lorentz covariant description of flat space-times, as considered in special relativity. In subsequent papers of this series we will develop this formalism to show how defining time as a local function of field variables can give rise to a variation of properties in and of space yielding physical laws, including dynamics.
Impact of calcium on N1 influenza neuraminidase dynamics and binding free energy
The highly pathogenic influenza strains H5N1 and H1N1 are currently treated with inhibitors of the viral surface protein neuraminidase (N1). Crystal structures of N1 indicate a conserved, high affinity calcium binding site located near the active site. The specific role of this calcium in the enzyme mechanism is unknown, though it has been shown to be important for enzymatic activity and thermostability. We report molecular dynamics (MD) simulations of calcium-bound and calcium-free N1 complexes with the inhibitor oseltamivir (marketed as the drug Tamiflu), independently using both the AMBER FF99SB and GROMOS96 force fields, to give structural insight into calcium stabilization of key framework residues. Y347, which demonstrates similar sampling patterns in the simulations of both force fields, is implicated as an important N1 residue that can “clamp” the ligand into a favorable binding pose. Free energy perturbation and thermodynamic integration calculations, using two different force fields, support the importance of Y347 and indicate a +3 to +5 kcal/mol change in the binding free energy of oseltamivir in the absence of calcium. With the important role of structure-based drug design for neuraminidase inhibitors and the growing literature on emerging strains and subtypes, inclusion of this calcium for active site stability is particularly crucial for computational efforts such as homology modeling, virtual screening, and free energy methods.
Methods
Both calcium-bound and calcium-free N1 monomer simulations were performed using the GROMOS05 software for biomolecular simulation 1 and the GROMOS96 force field (45A3 parameter set). 2 Parameters for oseltamivir were derived from existing building blocks 3 (Table S2). Amino acid charges were defined to reproduce an apparent pH 7. The systems were solvated in boxes of SPC water molecules, with a 12 Å barrier to the periodic boundary of the cube, and neutralized with sodium ions. For the ion-bound simulations, the calcium was parametrized in the classical force field, with all ion parameters taken from those developed by Åqvist. 4 After 2,000 steps of steepest descent energy minimization, the system was brought gradually to the reference temperature of 300 K in six consecutive 25 ps MD periods (50 K increments). Both monomer complexes of N1-oseltamivir, with and without the bound calcium ion, were simulated in ten independent trajectories, each 4 ns, for a total of 40 ns of simulation for each complex. Comprising the ten simulations were five simulations generated from the chain B monomer in the holo Loop 150 "open" crystal structure (PDBID: 2HU0) and five simulations started from the chain A monomer in the holo Loop 150 "closed" crystal structure (PDBID: 2HU4). 10 As the calcium density was not present in these crystal structures, overlap with the apo N1 structure 2HTY aided in positioning of the ion in the protein. After minimization, these structures were each initialized with random velocities assigned from a Maxwell-Boltzmann distribution at 5 K to generate the independent trajectories.
Calcium-bound 100 ns N1 tetramer simulations were performed with the PMEMD module in AMBER 10 11 and the AMBER FF99SB force field. 12 Atomic coordinates were taken from the holo, open Loop 150 crystal structure (2HU0), with the calcium inserted from overlap with the apo 2HTY structure and parameterized in the classical force field. Protonation states for histidines and other titratable groups were determined at pH 6.5 by the PDB2PQR 13 web server and manually verified. The tetrameric 2HU0 crystal structure has a single oseltamivir molecule bound in the active site of chain B. In order to introduce the oseltamivir within each of the other chains, chain B was aligned to chains A, C, and D using VMD 14 and the resulting transformation matrix was applied to the oseltamivir molecule. Oseltamivir was parameterized according to quantum chemical calculations, which included performing a geometry optimization with Gaussian03 15 at the Hartree-Fock/6-31G* level. The resulting atomic partial charges were then determined according to the RESP method, 16 and the atom types were assigned by the Antechamber module of AMBERTools 1.2. The GAFF 17 force field within AMBER was employed to generate the bond, angle, and dihedral parameters. As no water molecules were reported in the 2HU0 structure, we structurally aligned the 2HTY and 2HU0 systems and kept all crystallographic water molecules that did not clash with oseltamivir in the binding pocket. The system was built using the AMBER9 program Leap and the Amber99SB force field. Each monomer chain contained 8 disulfide bonds, which were properly enforced using the CYX notation in AMBER. A box of TIP3P 18 waters was added to solvate each system, resulting in a rectangular box of dimensions 124 × 127 × 77 Å³. The system was neutralized with the addition of sodium counter ions and a 150 mM NaCl salt bath was introduced.
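The chain-to-chain alignment step can be illustrated in code. The sketch below is a generic Kabsch superposition in Python/NumPy, not the authors' actual VMD session; the coordinate arrays named in the usage comment (chainB_ca, chainA_ca, oseltamivir_in_B) are hypothetical placeholders:

```python
import numpy as np

def kabsch_transform(mobile_ca, target_ca):
    """Rigid-body (rotation + translation) superposing mobile_ca onto target_ca.

    Both arrays have shape (n_atoms, 3) with matched atom ordering,
    e.g. C-alpha coordinates of chain B and of chain A.
    """
    mob_c, tgt_c = mobile_ca.mean(axis=0), target_ca.mean(axis=0)
    H = (mobile_ca - mob_c).T @ (target_ca - tgt_c)   # covariance matrix
    U, S, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))            # guard against improper rotation
    D = np.diag([1.0, 1.0, d])
    R = Vt.T @ D @ U.T                                # optimal rotation
    t = tgt_c - R @ mob_c                             # optimal translation
    return R, t

# Applying the chain-B -> chain-A transformation to the ligand coordinates
# places a copy of oseltamivir in the chain-A active site:
# R, t = kabsch_transform(chainB_ca, chainA_ca)
# oseltamivir_in_A = oseltamivir_in_B @ R.T + t
```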
The constructed N1 ion-bound tetramer complex was first subjected to 2000 steps of steepest descent, followed by 5000 steps of conjugate gradient minimization using 5 kcal·mol⁻¹·Å⁻² harmonic restraints on all non-hydrogen protein atoms. Then, 5000 steps of conjugate gradient minimization with just the backbone atoms restrained cleaned up the initial hydrogenated complex. A further 25,000 conjugate gradient minimization steps were then performed on the entire complex, without restraints, in order to alleviate any steric clashes prior to performing molecular dynamics. Following minimization, the system was linearly heated to 310 K in the NVT ensemble using a Langevin thermostat, with a collision frequency of 1.0 ps⁻¹, and harmonic restraints of 4 kcal·mol⁻¹·Å⁻² on the backbone atoms. Further, three 250 ps periods were run in the NPT ensemble with the restraint force constant being reduced by 1 kcal·mol⁻¹·Å⁻² each time. A final 250 ps of NPT dynamics was run without restraints. Production runs were then made for 100 ns duration in the NVT ensemble; temperature was controlled with a Langevin thermostat (1.0 ps⁻¹ collision frequency), and pressure was controlled using a Berendsen barostat 5 with a coupling constant of 1 ps and a target pressure of 1 atm. The time step used was 2 fs and all hydrogen atoms were constrained using the SHAKE algorithm. 6 Long-range electrostatics were included using the Particle Mesh Ewald algorithm 19 with a 4th order B-spline interpolation, a grid spacing of <1.0 Å, and a direct space cutoff of 8 Å. The trajectories for each monomer of the tetramer were extracted and concatenated to approximate 400 ns of monomer N1 sampling.
The ion-free tetramer system was simulated with atomic coordinates and parameters identical to those described above for the AMBER10 simulations, except for the calcium presence, using the AMBER ff99SB force field and the Desmond MD engine developed by D. E. Shaw Research. 20 The Maestro modeling suite was utilized for system construction in a 128 × 130 × 80 Å³ orthorhombic box, for a minimum distance of 12 Å between protein heavy atoms and box edges.
Sodium and chloride ions were added to neutralize the system charge and create an approximately 150 mM NaCl solution, as in the AMBER10 simulation. Following 10,000 steps of steepest-descent minimization, the systems were equilibrated with restraints of 10 kcal·mol⁻¹·Å⁻² for 50 ps, followed by 200 ps in which the restraints were continuously and slowly removed; then, unrestrained molecular dynamics were performed for 99.75 ns. Numerical integration was performed with a 2 fs time step.
Inferring monopartite projections of bipartite networks: an entropy-based approach
Bipartite networks are currently regarded as providing a major insight into the organization of many real-world systems, unveiling the mechanisms driving the interactions occurring between distinct groups of nodes. One of the most important issues encountered when modeling bipartite networks is devising a way to obtain a (monopartite) projection on the layer of interest, which preserves as much as possible the information encoded into the original bipartite structure. In the present paper we propose an algorithm to obtain statistically-validated projections of bipartite networks, according to which any two nodes sharing a statistically-significant number of neighbors are linked. Since assessing the statistical significance of nodes similarity requires a proper statistical benchmark, here we consider a set of four null models, defined within the exponential random graph framework. Our algorithm outputs a matrix of link-specific p-values, from which a validated projection is straightforwardly obtainable, upon running a multiple hypothesis testing procedure. Finally, we test our method on an economic network (i.e. the countries-products World Trade Web representation) and a social network (i.e. MovieLens, collecting the users’ ratings of a list of movies). In both cases non-trivial communities are detected: while projecting the World Trade Web on the countries layer reveals modules of similarly-industrialized nations, projecting it on the products layer allows communities characterized by an increasing level of complexity to be detected; in the second case, projecting MovieLens on the films layer allows clusters of movies whose affinity cannot be fully accounted for by genre similarity to be individuated.
Introduction
Many real-world systems, ranging from biological to socio-economic ones, are bipartite in nature, being defined by interactions occurring between pairs of distinct groups of nodes (be they authorships, attendances, affiliations, etc) [1,2]. This is the reason why bipartite networks are ubiquitous tools, employed in many different research areas to gain insight into the mechanisms driving the organization of the aforementioned complex systems.
One of the issues encountered when modeling bipartite networks is obtaining a (monopartite) projection over the layer of interest while preserving as much as possible the information encoded into the original bipartite structure. This problem becomes particularly relevant when, e.g. a direct measurement of the relationships occurring between nodes belonging to the same layer is impractical (as gathering data on friendship within social networks [3]).
The simplest way of inferring the presence of otherwise unaccessible connections is linking any two nodes, belonging to the same layer, as long as they share at least one neighbor: however, this often results in a very dense network whose topological structure is almost trivial. A solution which has been proposed prescribes to retain the information on the number of common neighbors, i.e. to project a bipartite network into a weighted monopartite network [3]. This prescription, however, causes the nodes with larger degree in the original bipartite network to have, in turn, larger strengths in the projection, thus masking the genuine statistical relevance of the induced connections. Moreover, such a prescription lets spurious clusters of nodes emerge (e.g. cliques induced by the presence of even a single node connected to all nodes on the opposite layer).
In order to face this problem, algorithms to retain only the significant weights have been proposed [3]. Many of them are based on a thresholding procedure, a major drawback of which lies in the arbitrariness of the chosen threshold [4-6]. A more statistically-grounded algorithm prescribes to calculate the statistical significance of the projected weights according to a properly-defined null model [7]; the latter, however, encodes relatively little information on the original bipartite structure, thus being more suited to analyze natively monopartite networks. A similar-in-spirit approach aims at extracting the backbone of a weighted, monopartite projection by calculating its minimum spanning tree and provides a recipe for community detection by calculating the minimum spanning forest [8, 9]. However, the lack of a comparison with a benchmark makes it difficult to assess the statistical relevance of its outcome.
The approaches discussed so far represent attempts to validate a projection a posteriori. A different class of methods, on the other hand, focuses on projecting a statistically-validated network by estimating the tendency of any two nodes belonging to the same layer to share a given portion of neighbors. All approaches define a similarity measure which either ranges between 0 and 1 [10, 11] or follows a probability distribution on which a p-value can be computed [12-14]. While in the first case the application of an arbitrary threshold is still unavoidable, in the second case prescriptions rooted in traditional statistics can be applied.
In order to overcome the limitations of currently-available algorithms, we propose a general method which rests upon the very intuitive idea that any two nodes belonging to the same layer of a bipartite network should be linked in the corresponding monopartite projection if, and only if, they are significantly similar. To stress that our benchmark is defined by constraints which are satisfied on average, we will refer to our method as a grand canonical algorithm for obtaining a statistically-validated projection of any binary, undirected, bipartite network. A microcanonical projection method has been defined as well [15] which, however, suffers from a number of limitations imputable to its purely numerical nature [3].
The rest of the paper is organized as follows. In the methods section, our approach is described: first, we introduce a quantity to measure the similarity of any two nodes belonging to the same layer; then, we derive the probability distribution of this quantity according to four bipartite null models, defined within the exponential random graph (ERG) formalism [16]. Subsequently, for any two nodes, we quantify the statistical significance of their similarity and, upon running a multiple hypothesis test, we link them if recognized as significantly similar. In the results section we employ our method to obtain a projection of two different data sets: the countries-products World Trade Web and the users-movies MovieLens network. Finally, in the discussion section we comment on our results.
Methods
A bipartite, undirected, binary network is completely defined by its biadjacency matrix, i.e. a rectangular matrix M whose dimensions will be indicated as N_R × N_C, with N_R being the number of nodes in the top layer (i.e. the number of rows of M) and N_C being the number of nodes in the bottom layer (i.e. the number of columns of M). M sums up the structure of the corresponding bipartite network: m_rc = 1 if node r (belonging to the top layer) and node c (belonging to the bottom layer) are linked, and m_rc = 0 otherwise. Links connecting nodes belonging to the same layer are not allowed.
In order to obtain a (layer-specific) monopartite projection of a given bipartite network, a criterion for linking the considered pairs of nodes is needed. Schematically, our grand canonical algorithm works as follows:

A. choose a specific pair of nodes belonging to the layer of interest, say r and r′, and measure their similarity;

B. quantify the statistical significance of the measured similarity with respect to a properly-defined null model, by computing the corresponding p-value;

C. link nodes r and r′ if, and only if, the related p-value is statistically significant;

then repeat the steps above for every pair of nodes.
We will now describe each step of our algorithm in detail.
Measuring nodes similarity
The first step of our algorithm prescribes to measure the degree of similarity of nodes r and r′. A straightforward approach is counting the number of common neighbors V_rr′ shared by nodes r and r′. By adopting the formalism proposed in [16], our measure of similarity is provided by the number of bi-cliques K_{1,2} [17], also known as V-motifs [16]:

V_rr′ = Σ_c V^c_rr′ = Σ_c m_rc m_r′c,

where we have adopted the definition V^c_rr′ ≡ m_rc m_r′c for the single V-motif defined by nodes r and r′ and node c belonging to the opposite layer (see figure 1 for a pictorial representation). From the definition, it is apparent that V^c_rr′ = 1 if, and only if, both r and r′ share the (common) neighbor c. Notice that naïvely projecting a bipartite network corresponds to considering the monopartite matrix defined as A^naive_rr′ = Θ[V_rr′], i.e. linking any two nodes sharing at least one neighbor, whose densely connected structure is characterized by an almost trivial topology.
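In matrix form the V-motif counts are simply the co-occurrence matrix M Mᵀ; a minimal sketch with a toy biadjacency matrix (the matrix below is an arbitrary example, not one of the data sets analyzed here):

```python
import numpy as np

# Toy biadjacency matrix: 4 top-layer nodes (rows) x 5 bottom-layer nodes (columns)
M = np.array([[1, 1, 0, 1, 0],
              [1, 0, 1, 1, 0],
              [0, 1, 1, 0, 1],
              [0, 0, 0, 1, 1]])

V = M @ M.T                 # V[r, r'] = number of common neighbors (V-motifs)
np.fill_diagonal(V, 0)      # self-similarity is not of interest

A_naive = (V > 0).astype(int)   # naive projection: link if at least one shared neighbor
print(V)
print(A_naive)
```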
Quantifying the statistical significance of nodes similarity
The second step of our algorithm prescribes to quantify the statistical significance of the similarity of nodes r and r′. To this aim, a benchmark is needed: a natural choice leads to adopting the ERG class of null models [16, 18-22].

Within the ERG framework, the generic bipartite network M is assigned an exponential probability

P(M) = e^{−θ·C(M)}/Z(θ),

whose value is determined by the vector C(M) of topological constraints [18]. In order to determine the unknown parameters θ, the likelihood-maximization recipe can be adopted: given an observed biadjacency matrix M*, it translates into solving the system of equations

⟨C⟩_θ = C(M*),

which prescribes to equate the ensemble averages ⟨C⟩_θ to their observed counterparts, C(M*) [19]. Two of the null models we have considered in the present paper are known as the bipartite random graph (BiRG) model and the bipartite configuration model (BiCM) [16]; the other ones are the two 'partial' configuration models BiPCM_r and BiPCM_c: the four null models are defined, respectively, by constraining the total number of links, the degrees of nodes belonging to both layers and the degrees of nodes belonging to one layer only (see appendix for the analytical definitions).
The use of linear constraints allows us to write P(M) in a factorized form, i.e. as the product of pair-specific probability coefficients,

P(M) = ∏_{r,c} p_rc^{m_rc} (1 − p_rc)^{1 − m_rc},

the numerical value of the generic coefficient p_rc being determined by the likelihood-maximization condition (see appendix). As an example, in the case of BiRG, p_rc = p_BiRG = L/(N_R N_C) for every r and c, with L being the total number of links in the actual bipartite network.
Since ERG models with linear constraints treat links as independent random variables, the presence of each V^c_rr′ can be regarded as the outcome of a Bernoulli trial:

Pr(V^c_rr′ = 1) = p_rc p_r′c,   Pr(V^c_rr′ = 0) = 1 − p_rc p_r′c.

It follows that, once r and r′ are chosen, the events describing the presence of the N_C single V^c_rr′ motifs are independent random experiments: this, in turn, implies that each V_rr′ is nothing else than a sum of independent Bernoulli trials, each one described by a different probability coefficient.
The distribution describing the behavior of each V_rr′ turns out to be the so-called Poisson-Binomial [23, 24]. More explicitly, the probability of observing zero V-motifs between r and r′ (or, equivalently, the probability for nodes r and r′ of sharing zero neighbors) reads

f_PB(0) = ∏_c (1 − p_rc p_r′c),

the probability of observing only one V-motif reads

f_PB(1) = Σ_c p_rc p_r′c ∏_{c′ ≠ c} (1 − p_rc′ p_r′c′),

etc. In general, the probability of observing n V-motifs can be expressed as a sum of products,

f_PB(n) = Σ_{C_n} [∏_{c ∈ C_n} p_rc p_r′c] [∏_{c ∉ C_n} (1 − p_rc p_r′c)],

where C_n runs over all sets of n nodes of the opposite layer (notice that the second product runs over the complement set of C_n).
Measuring the statistical significance of the similarity of nodes r and r′ thus translates into calculating a p-value on the aforementioned Poisson-Binomial distribution, i.e. the probability of observing a number of V-motifs greater than, or equal to, the observed one (which will be indicated as V*_rr′):

p-value(V*_rr′) = Σ_{n ≥ V*_rr′} f_PB(n).

Upon repeating such a procedure for each pair of nodes, we obtain an N_R × N_R matrix of p-values (see also the appendix). In order to speed up the numerical computation of p-values, a Python code has been made publicly available by the authors 4.
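For moderate N_C the Poisson-Binomial tail can be evaluated exactly by iterated convolution; the sketch below is a straightforward dynamic-programming version of this computation (not the authors' published code), with arbitrary example probabilities:

```python
import numpy as np

def poisson_binomial_pvalue(probs, v_obs):
    """P(X >= v_obs) for X a sum of independent Bernoulli trials.

    probs : success probabilities p_rc * p_r'c of the single V-motifs.
    """
    pmf = np.array([1.0])                     # distribution of a sum of 0 trials
    for p in probs:
        new = np.zeros(len(pmf) + 1)
        new[:-1] += pmf * (1.0 - p)           # trial fails: count unchanged
        new[1:] += pmf * p                    # trial succeeds: count + 1
        pmf = new
    return pmf[v_obs:].sum()

# Example: probabilities of sharing each of 5 possible neighbors
probs = [0.10, 0.25, 0.05, 0.40, 0.15]
print(poisson_binomial_pvalue(probs, v_obs=3))   # P(at least 3 shared neighbors)
```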
As a final remark, notice that this approach describes a one-tail statistical test, where nodes are considered as significantly similar if, and only if, the observed number of shared neighbors is 'sufficiently large'. In principle, our algorithm can be also used to carry out the reverse validation, linking any two nodes if the observed number of shared neighbors is 'sufficiently small': this second type of validation can be performed whenever interested in highlighting the 'dissimilarity' between nodes.
Validating the projection
In order to understand which p-values are significant, it is necessary to adopt a statistical procedure accounting for testing multiple hypotheses at a time.
In the present paper we apply the so-called false discovery rate (FDR) procedure [25].
The FDR procedure prescribes, first, to sort the M p-values characterizing the M hypotheses being tested in increasing order,

p-value₁ ≤ p-value₂ ≤ … ≤ p-value_M;

second, to find the largest integer î satisfying

p-value_î ≤ î t/M,

with t representing the usual single-test significance level (e.g. t = 0.05 or t = 0.01). The third step of the FDR procedure prescribes to reject all the hypotheses whose p-value is less than, or equal to, p-value_î. Notably, FDR allows one to control for the expected number of false 'discoveries' (i.e. incorrectly-rejected null hypotheses), irrespectively of the independence of the hypotheses tested (our hypotheses, for example, are not independent, since each observed link affects the similarity of several pairs of nodes).
In our case, the FDR prescription translates into adopting the threshold p-value_î, with M = N_R(N_R − 1)/2 the number of tested pairs of nodes. In other words, every couple of nodes whose corresponding p-value is validated by the FDR is joined by a binary, undirected link in our projection. In what follows, we have used a single-test significance level of t = 0.01.
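A compact implementation of this thresholding step (a generic Benjamini-Hochberg routine, not the authors' code) might look as follows:

```python
import numpy as np

def fdr_threshold(pvalues, t=0.01):
    """Return the largest p-value passing the FDR (Benjamini-Hochberg) criterion.

    pvalues : flat array of the M p-values (one per tested node pair).
    t       : single-test significance level.
    """
    p_sorted = np.sort(pvalues)
    M = len(p_sorted)
    below = p_sorted <= t * np.arange(1, M + 1) / M
    if not below.any():
        return 0.0                     # no hypothesis is rejected
    return p_sorted[np.nonzero(below)[0].max()]

# Example: validate a symmetric p-value matrix P of shape (N_R, N_R)
# iu = np.triu_indices_from(P, k=1)
# thr = fdr_threshold(P[iu], t=0.01)
# A_validated = (P <= thr).astype(int); np.fill_diagonal(A_validated, 0)
```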
Summing up, the recipe for obtaining a statistically-validated projection of the bipartite network M prescribes to compute, for each pair of nodes of the layer of interest, the p-value induced by the chosen null model nm, and to link the pairs surviving the FDR criterion. Notice that the validation process naturally circumvents the problem of spurious clustering (see also appendix).
The aforementioned approaches providing an algorithm to project a validated network differ in the way the issue of comparing multiple hypotheses is dealt with. While in some approaches this step is simply missing and each test is carried out independently from the other ones [3,14], in others the Bonferroni correction is employed [12,13]. Both solutions are affected by drawbacks.
The former algorithms, in fact, overestimate the number of incorrectly rejected null hypotheses (i.e. of incorrectly validated links). A simple argument can, indeed, be provided: the probability that, by chance, at least one, out of M hypotheses, is incorrectly rejected (i.e. that at least one link is incorrectly validated) is

FWER = 1 − (1 − t)^M,

which rapidly approaches 1 as M grows. The latter algorithms, on the other hand, adopt a criterion deemed as severely overestimating the number of incorrectly retained null hypotheses (i.e. of incorrectly discarded links) [25]. Indeed, if the stricter condition FWER = 0.05 is now imposed, the threshold p-value can be derived as

p-value_th = 1 − (1 − 0.05)^{1/M} ≃ 0.05/M,

which rapidly vanishes as M grows. As a consequence, very sparse (if not empty) projections are often obtained.
Naturally, deciding which test is more suited for the problem at hand depends on the importance assigned to false positives and false negatives. As a rule of thumb, the Bonferroni correction can be deemed appropriate when few tests, out of a small number of multiple comparisons, are expected to be significant (i.e. when even a single false positive would be problematic). On the contrary, when many tests, out of a large number of multiple comparisons, are expected to be significant (as in the case of socio-economic networks), using the Bonferroni correction may, in turn, produce a too large number of false negatives, an undesired consequence of which may be the impairment of, e.g. a recommendation system.
As a final remark, we stress that an a priori selection of the number of validated links is not necessarily compatible with the existence of a level t of statistical significance ensuring that the FDR procedure still holds. As an example, let us suppose we retain only the first k p-values; the FDR would then require the following inequalities to be satisfied:

p-value_k ≤ k t/M   and   p-value_{k+1} > (k + 1) t/M.

The aforementioned condition, however, can be easily violated by imagining a pair of subsequent p-values close enough to each other (e.g. p-value₃ = 0.039 and p-value₄ = 0.040).
Testing the projection algorithm

Community detection
In order to test the performance of our method, the Louvain algorithm has been run on the validated projections of the real networks considered for the present analysis [26]. Since the Louvain algorithm is known to be order-dependent [27, 28], we considered N outcomes of it, each one obtained by randomly reshuffling the order of the nodes taken as input (N being the network size), and chose the one providing the maximum value of the modularity. This procedure can be shown to enhance the detection of partitions characterized by a higher value of the modularity itself (a parallelized Python version of the reshuffled Louvain method is available at the public repository 5).
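A minimal sketch of this best-of-N strategy, using networkx's Louvain implementation as a stand-in for the authors' parallelized reshuffled version (here different random seeds play the role of reshuffling the node order):

```python
import networkx as nx

def best_louvain(G, n_runs=None):
    """Run Louvain n_runs times and keep the partition with maximal modularity."""
    n_runs = n_runs or G.number_of_nodes()
    best, best_q = None, float("-inf")
    for seed in range(n_runs):
        # different seeds change the (random) processing order of the nodes
        parts = nx.community.louvain_communities(G, seed=seed)
        q = nx.community.modularity(G, parts)
        if q > best_q:
            best, best_q = parts, q
    return best, best_q

# G = nx.karate_club_graph()
# partition, Q = best_louvain(G, n_runs=50)
```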
World trade web
Let us now test our validation procedure on the first data set considered for the present analysis: the World Trade Web. In the present paper we consider the COMTRADE database (using the HS 2007 code revision), spanning the years 1995-2010 6. After a data-cleaning procedure operated by BACI [29] and a thresholding procedure induced by the RCA (for more details, see [30]), we end up with a bipartite network characterized by N_R = 146 countries and N_C = 1131 classes of products, whose generic entry m_rc = 1 indicates that country r exports product c above the RCA threshold.
Countries layer. Figure 2 shows three different projections of the WTW. The first panel shows a pictorial representation of the WTW topology in the year 2000, upon naïvely projecting it (i.e. by joining any two nodes if at least one neighbor is shared, thus obtaining the matrix A^naive_rr′ = Θ[V_rr′]). The high density of links (which oscillates between 0.93 and 0.95 throughout the period covered by the data set) causes the network to be characterized by trivial values of structural quantities (e.g. all nodes have a clustering coefficient very close to 1).
The second panel of figure 2 represents the projected adjacency matrix obtained by using the BiRG as a null model. In this case, the only parameter defining our reference model is p_BiRG ≃ 0.13. As a consequence, p_rc = p_BiRG for every pair of nodes and formula (2.7) simplifies to the binomial

P(V_rr′ = n) = binom(N_C, n) (p²_BiRG)ⁿ (1 − p²_BiRG)^{N_C − n}.

The projection provided by the BiRG individuates a unique connected component of countries (notice that the two blocks at bottom-right and top-left of the panel are linked through off-diagonal connections) beside many disconnected vertices (the big white block in the center of the matrix). Interestingly, the latter represent countries whose economy heavily rests upon the presence of raw materials (see also figure 3), in turn causing each export basket to be focused around the available country-specific natural resources. As a consequence, the similarity between these countries is not significant enough to allow the corresponding links to pass the validation procedure. In other words, the BiRG-induced projection is able to distinguish between two extreme levels of economic development, thus providing a meaningful, yet too rough, filter.
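Under the BiRG the p-value of each pair thus reduces to a single Binomial survival probability; for instance, with the WTW numbers quoted above (the observed co-export count below is hypothetical):

```python
from scipy.stats import binom

N_C = 1131          # products
p_birg = 0.13       # BiRG link probability
q = p_birg ** 2     # probability that a given product is exported by both countries

v_obs = 40          # hypothetical observed number of co-exported products
pvalue = binom.sf(v_obs - 1, N_C, q)   # P(V >= v_obs)
print(pvalue)
```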
On the other hand, the BiCM-induced projection (shown in the third panel of figure 2) allows for a definite structure of clusters to emerge. The economic meaning of the detected diagonal blocks can be made explicit by running the Louvain algorithm on the projected network. As figure 3 shows, our algorithm reveals a partition into communities enclosing countries characterized by similar economic development [31]. In particular, we recognize the 'advanced' economies (EU countries, USA and Japan, whose export basket is practically constituted by all products [8, 30, 32-36]), the 'developing' economies (as centro-american countries and south-eastern countries as China, India, Asian Tigers, etc, for which textile manufacturing represents the most important sector) and countries whose export heavily rests upon raw materials like oil (Russia, Saudi Arabia, Libya, Algeria, etc), tropical agricultural food (south-american and centro-african countries), etc. An additional group of countries whose export is based upon sea-food is constituted by Australia, New Zealand, Chile and Argentina, which happen to be detected as a community of their own in partitions with comparable values of modularity.
Our algorithm is also able to highlight the structural changes that have affected the WTW topology across the temporal period considered for the present analysis. Figure 4 shows two snapshots of the WTW, referring to the years 2000 and 2008. While in 2000 EU countries were split into two different modules, with the north-european countries (as Germany, UK, France) grouped together with USA and Japan and the south-eastern european countries constituting a separate cluster, this is no longer true in 2008. Furthermore, the structural role played by single nodes is also pointed out. As an example, Austria and Japan emerge as two of the countries with highest betweenness, indicating their role as bridges between, respectively, western and eastern european countries and western and eastern world countries. A second example is provided by Germany, whose star-like pattern of connections clearly indicates its prominent role in global trade.
The block diagonal structure of the BiCM-induced adjacency matrix reflects another interesting pattern of the world economy self-organization: the detected communities appear to be linked in a hierarchical fashion, with the 'developing' economies seemingly constituting an intermediate layer between the 'advanced' economies and those countries whose export heavily rests upon raw materials. Interestingly, such a mesoscopic organization persists across all years of our data set, shedding new light on the WTW evolution. As shown in figure 5, the results obtained by running the BiPCM_r (defined by constraining only the degrees of countries) are, although less detailed, compatible with the ones obtained by running the BiCM. In this case, the BiPCM_r constitutes an approximation to the BiCM, providing a computationally faster, yet equally accurate, alternative to it. On the other hand, the BiPCM_c induces a projection which is close to the BiRG one, thus adding little information with respect to the latter.
Products layer. While the BiCM provides an informative benchmark to infer the presence of significant connections between countries, this is not the case when focusing on products. For this reason, we consider the BiPCM_c, i.e. the null model defined by constraining only the products degrees: figure 6 shows the BiPCM_c-induced projection of the WTW on the layer of products 7. Several communities appear, the larger ones being machinery, transportation, chemicals, electronics, textiles and live animals (a partition that seems to be stable across time).
The detected communities seem to be organized into two macro-groups: 'high-complexity' products (on the left of the figure), including machinery, chemicals, advanced electronics, etc, and 'low-complexity' products (on the right of the figure), including live animals, wooden products, textiles, basic electronics, etc. This macroscopic separation reflects the level of economic development of the countries trading these products. As figure 7 clarifies, the 'advanced' economies focus their trading activity on products characterized by high complexity, while 'developing' economies are preferentially active on low-complexity products [30, 35, 36].
MovieLens
Let us now consider the second data set: MovieLens 100k. MovieLens is a project by GroupLens [38], a research lab at the University of Minnesota. Data (collected from 19th September 1997 through 22nd April 1998) consist of 10⁵ ratings, from 1 to 5, given by N_C = 943 users to N_R = 1559 different movies 8; information about the movies (date of release and genre) and about the users (age, gender, occupation and US zip code) is also provided. We binarize the data set by setting m_rc = 1 if user c rated movie r with at least a 3, providing a favorable review.
In what follows we will be interested in projecting this network on the layer of movies. Figure 8 shows the three projections already discussed for the WTW. As for the latter, the naïve projection is still a very dense network, whose connectance amounts to 0.58. Similarly, the projection induced by the BiRG provides a rather rough filter, producing a unique large connected component, to which only the most popular movies (i.e. the ones with a large degree in the original bipartite network) belong.
While both the naïve and the BiRG-induced projections only allow for a trivially-partitioned structure to be observed, this is not the case for the BiCM. By running the Louvain algorithm, we found a very composite community structure (characterized by a modularity of Q ≃ 0.58), pictorially represented by the diagonal blocks visible in the third panel of figure 8. The BiCM further refines the results found by the BiRG, allowing for the internal structure of the blocks to emerge: in our discussion, we will focus on the bottom-right block, which shows the richest internal organization. Figure 9 shows the detected communities within the aforementioned block, beside the genres (provided together with the data) 9: Action, Adventure, Animation, Children's, Comedy, Crime, Documentary, Drama, Fantasy, Horror, Musical, Mystery, Noir, Romance, Sci-Fi, Thriller, War, Western 10. Since some genres are quite generic and, thus, appropriate for several movies (e.g. Adventure, Comedy and Drama), our clusters are often better described by 'combinations' of genres, capturing the users' tastes to a larger extent: the detected communities, in fact, partition the set of movies quite sharply, once appropriate combinations of genres are considered.
As an example, the orange block on the left side of our matrix is composed by movies released in 1996 (i.e. the year before the survey). Remarkably, our projection algorithm is able to capture the peculiar 'similarity' of these movies, not trivially related to the genres to which they are ascribed (which are quite heterogeneous: Action, Comedy, Fantasy, Thriller, Sci-Fi) but to the curiosity of users towards the yearly new releases.
Proceeding clockwise, the violet block next to the orange one is composed by movies classified as Animation, Childrenʼs, Fantasy and Musical (e.g. 'Mrs. Doubtfire', 'The Addams Family', 'Free Willy', 'Cinderella', 'Snow White'). In other words, we are detecting the so-called 'family movies', a more comprehensive definition accounting for all elements described by the single genres above.
The next purple block is composed by genres Action, Adventure, Horror, Sci-Fi and Thriller: examples are provided by 'Stargate', 'Judge Dredd', 'Dracula', 'The Evil Dead'. This community encloses movies with marked horror traits, including titles far from 'mainstream' movies. This is the main difference with respect to the following blue block: although characterized by similar genres (but with Crime replacing Horror and Thriller) movies belonging to it are more popular: 'cult mass' movies, in fact, can be found here. Examples are provided by 'Braveheart', 'Blade Runner' and sagas as 'Star Wars' and 'Indiana Jones'.
The following two blocks represent niche movies for US users. The module in magenta is, in fact, composed by foreign movies (mostly European-French, German, Italian, English-which usually combine elements from Comedy and elements from Drama), as well as US independent films (as titles by Jim Jarmush); the yellow module, on the other hand, is composed by movies inspired by books or theatrical plays and documentaries.
The last block encloses 'classic' Hollywood movies. As in the WTW case, running the BiPCM_r (defined by constraining only the degrees of movies) leads us to obtain a coarse-grained (i.e. still informative, although less detailed) version of the aforementioned results. Only three macro-groups of movies are, in fact, detected: 'authorial' movies (as 'classic' Hollywood movies, Hitchcock's, Kubrick's, Spielberg's movies), recent mainstream 'blockbusters' (as 'Star Trek', 'Star Wars', 'Indiana Jones', 'Batman' sagas) and independent/niche movies (as Spike Lee's and European movies).
As a final remark, we point out that projecting on the users layer with the BiCM indeed allows several communities to be detected. However, interestingly enough, none of them seems to be accurately described by the provided indicators (age, gender, occupation and US zip code), thus suggesting that users' tastes are correlated with hidden (sociometric) variables yet to be identified.
Discussion
Projecting a bipartite network on one of its layers poses a number of problems for which several solutions have been proposed so far [3, 8-10, 12-14, 32], differing from each other in the way the information encoded into the bipartite structure is dealt with.
The present paper proposes an algorithm that prescribes to, first, quantify the similarity of any two nodes belonging to the layer of interest and, then, link them if, and only if, this value is found to be statistically significant. The links constituting the monopartite projection are, thus, inferred from the co-occurrences observed in the original bipartite network, by comparing them with a proper statistical benchmark.
Since the null models considered for the present analysis retain a different amount of information, the induced projections are characterized by a different level of detail. In particular, the BiRG represents a very rough filter which employs the same probability distribution to validate the similarity between any two nodes, thereby preferentially connecting nodes with large degree rather than nodes with small degree. By enforcing stronger constraints (increasing the amount of retained information), stricter benchmark models are obtained.
The two partial configuration models constitute the simplest examples of benchmarks retaining also the information on the nodes degrees. However, it should be noticed that the two BiPCMs perform quite differently. In fact, the BiPCM constraining the degrees of the layer opposite to the one we are interested in projecting on provides a homogeneous benchmark as well (i.e. the same Poisson-Binomial distribution for all pairs of nodes; see also the appendix), whence the expected little difference with respect to the BiRG performance; on the other hand, the BiPCM constraining the degrees of nodes belonging to the same layer we are interested in projecting on provides a performance which is halfway between the BiRG one and the BiCM one. The reason lies in the fact that a (Binomial) pair-specific distribution is now induced by the constraints, i.e. a benchmark properly taking into account the heterogeneity of the considered nodes. As shown in the results section, this often allows one to obtain an accurate enough approximation to the BiCM, i.e. the null model constraining the whole degree sequence.
As also suggested in [3], the use of a benchmark which ensures that the heterogeneity of all nodes is correctly accounted for is recommended: in other words, any suitable null model for projecting a network on a given layer should (at least) constrain the degree sequence of the same layer. The use of partial null models is allowed in case of constraints redundancy, e.g. when node degrees are well described by their mean (as indicated by the coefficient of variation, for example; see also the appendix): in cases like these, specifying the whole degree sequence is actually unnecessary.
As a final remark, we explicitly notice that implementing the BiCM can be computationally demanding: this is the reason why several approximations to the Poisson-Binomial distribution have been proposed so far. However, the applicability of each approximation is limited and, whenever employed to find the projection of a real, bipartite network, they may even fail to a large extent (see the appendix). With the aim of speeding up the numerical computation of the p-values induced by any of the null models discussed in the paper-while retaining the exact expression of the corresponding distributions-a Python code has been made publicly available by the authors at [25].
Remarkably, our method can be extended in a variety of directions, e.g. to analyze directed and weighted bipartite networks, and generalized to account for co-occurrences between more than two nodes, a study that constitutes the subject of future work.

Appendix A. The Poisson-Binomial distribution

The number of V-motifs shared by two nodes is a sum of N independent Bernoulli trials with (generally different) success probabilities $p_i$, i.e. a Poisson-Binomial random variable X. Since every event is supposed to be independent, the expectation value of X is simply

$$\mu = \sum_{i=1}^{N} p_i \qquad (A.1)$$

and the higher-order moments read

$$\sigma^2 = \sum_{i=1}^{N} p_i (1 - p_i), \qquad \gamma = \frac{1}{\sigma^3} \sum_{i=1}^{N} p_i (1 - p_i)(1 - 2 p_i) \qquad (A.2)$$

where $\sigma^2$ is the variance and γ is the skewness.
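For concreteness, the moments (A.1) and (A.2) can be computed directly from the list of single-trial probabilities; the following minimal Python sketch (our own illustration, unrelated to the code released at [25]) does exactly that:

```python
import numpy as np

def pb_moments(p):
    """Mean, variance and skewness of a Poisson-Binomial variable,
    i.e. a sum of independent Bernoulli(p_i) trials -- eqs. (A.1)-(A.2)."""
    p = np.asarray(p, dtype=float)
    mu = p.sum()                                                # (A.1)
    var = (p * (1.0 - p)).sum()                                 # sigma^2
    skew = (p * (1.0 - p) * (1.0 - 2.0 * p)).sum() / var**1.5   # gamma
    return mu, var, skew
```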
In the problem at hand, we are interested in calculating the probability of observing a number of V-motifs equal to or larger than the measured one, i.e. the p-value corresponding to the observed occurrence of V-motifs. This translates into requiring the knowledge of the survival distribution function (SDF) for the Poisson-Binomial distribution, i.e.

$$S_{PB}(X^*) = P(X \geq X^*) = \sum_{X \geq X^*} f_{PB}(X).$$
Ref. [39] proposes a fast and precise algorithm to compute the Poisson-Binomial distribution, which is based on the characteristic function of the Poisson-Binomial distribution. Let us briefly review the main steps of the algorithm in [39]. If we have observed exactly $X^*$ successes, then

$$f_{PB}(X^*) = \sum_{C_{X^*}} \left[ \prod_{i \in C} p_i \prod_{j \notin C} (1 - p_j) \right],$$

where summing over $C_{X^*}$ means summing over each set of $X^*$-tuples of integers (i.e. over all the ways of choosing $X^*$ successful trials out of the N available ones). The problem lies in calculating $C_{X^*}$. In order to avoid explicitly considering all the possible ways of extracting a number of X integers from a given set, let us consider the inverse discrete Fourier transform of $f_{PB}(X)$, i.e.

$$f_{PB}(X) = \frac{1}{N+1} \sum_{l=0}^{N} \chi_l \, e^{-\frac{2\pi i\, lX}{N+1}}, \qquad \chi_l = \prod_{i=1}^{N} z_i(l), \qquad z_i(l) = 1 - p_i + p_i\, e^{\frac{2\pi i\, l}{N+1}},$$

where each factor can be conveniently rewritten as $z_i(l) = |z_i(l)|\, e^{i\,\mathrm{arg}(z_i(l))}$: here $\mathrm{arg}(z_i(l))$ is the principal value of the argument of $z_i(l)$ and $|z_i(l)|$ represents its modulus. Once all terms of the discrete Fourier transform of $\chi_l$ (i.e. the coefficients $f_{PB}(X)$) have been derived, $S_{PB}(X)$ can be easily calculated. To the best of our knowledge, the approach proposed by [39] does not suffer from the numerical instabilities which, instead, affect [40].
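A compact implementation of this DFT-based computation is sketched below (numpy only; the publicly released code at [25] may differ in details such as the numerically stable modulus/argument decomposition):

```python
import numpy as np

def pb_pmf(p):
    """Poisson-Binomial pmf via the DFT of the characteristic function,
    following the structure of the equations above."""
    p = np.asarray(p, dtype=float)
    n = p.size
    l = np.arange(n + 1)
    # z_i(l) = 1 - p_i + p_i * exp(2 pi i l / (n+1)); chi_l = prod_i z_i(l)
    w = np.exp(2j * np.pi * l / (n + 1))
    chi = np.prod(1.0 - p[:, None] + p[:, None] * w[None, :], axis=0)
    # f_PB(X) = (1/(n+1)) * sum_l chi_l * exp(-2 pi i l X / (n+1))
    pmf = np.real(np.fft.fft(chi)) / (n + 1)
    return np.clip(pmf, 0.0, 1.0)          # clip tiny numerical noise

def pb_pvalue(p, x_star):
    """SDF / p-value: probability of at least x_star successes."""
    return pb_pmf(p)[x_star:].sum()
```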
Appendix B. Approximations of the Poisson-Binomial distribution
Binomial approximation. Whenever the probability coefficients of the N Bernoulli trials coincide (i.e. $p_i = p$, as in the case of the BiRG; see later), each pair-specific Poisson-Binomial distribution reduces to the usual Binomial distribution. Notice that, in this case, all distributions coincide since the parameter is the same.
However, the Binomial approximation may also be employed whenever the distribution of the probabilities of the single Bernoulli trials is not too broad (i.e. $\sigma/\mu < 0.5$): in this case, all events can be assigned the same probability coefficient $\bar{p}$, coinciding with their average $\bar{p} = \mu/N$. In this case, $S_{PB}(X) \simeq S_{Bin}(X)$, where $S_{Bin}(X)$ is the SDF for the random variable X following a Binomial distribution with parameter $\bar{p}$. Whenever the aforementioned set of probability coefficients can be partitioned into homogeneous subsets (i.e. subsets of coefficients assuming the same value), the Poisson-Binomial distribution can be computed as the distribution of a sum of Binomial random variables [13]. Such an algorithm is particularly useful when the number of subsets is not too large, a condition which translates into requiring that the heterogeneity of the degree sequences is not too high. However, when considering real networks this is often not the case and different approximations may be more appropriate.
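Under the stated homogeneity condition, the approximation amounts to a single scipy call; a minimal sketch (names are ours):

```python
import numpy as np
from scipy.stats import binom

def pb_pvalue_binomial(p, x_star):
    """Binomial approximation: replace every p_i by the average p_bar.
    Reasonable when the coefficient of variation of the p_i is small."""
    p = np.asarray(p, dtype=float)
    p_bar = p.mean()
    return binom.sf(x_star - 1, p.size, p_bar)   # P(X >= x_star)
```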
Poissonian approximation. According to the error bound provided by Le Cam's theorem (stating that $\sum_{X} |f_{PB}(X) - f_{Pois}(X)| < 2 \sum_{i=1}^{N} p_i^2$), the Poisson-Binomial distribution can be safely approximated by a Poisson distribution with parameter μ whenever the single-trial probabilities are small.

Gaussian approximation. In the opposite regime, the central limit theorem suggests the approximation

$$S_{PB}(X) \simeq 1 - \Phi\!\left(\frac{X - \mu - 0.5}{\sigma}\right),$$

where Φ is the cumulative distribution function of the standard normal distribution and μ and σ have been defined in (A.1) and (A.2). The value 0.5 represents the continuity correction [39].
Since the Gaussian approximation is based upon the central limit theorem, it works in a complementary regime with respect to the Poissonian approximation: more precisely, when the expected number of successes is large.

Skewness-corrected Gaussian approximation. Based on the results of [41,42], the Gaussian approximation of the Poisson-Binomial distribution can be further refined by introducing a correction based on the value of the skewness. Upon defining

$$g(x) = \Phi(x) + \frac{\gamma\,(1 - x^2)}{6}\, f_{Gauss}(x),$$

where $f_{Gauss}(x)$ is the probability density function of the standard normal distribution and γ is defined by (A.2), then

$$S_{PB}(X) \simeq 1 - g\!\left(\frac{X - \mu - 0.5}{\sigma}\right). \qquad (B.4)$$

The refinement described by formula (B.4) provides better results than the Gaussian approximation when the number of events is small.
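Our reading of the refinement (B.4) translates into the following sketch, which consumes the moments (A.1)-(A.2); it is an illustration of the standard skewness-corrected normal approximation, not the authors' exact code:

```python
from scipy.stats import norm

def pb_pvalue_refined_gauss(mu, sigma, gamma, x_star):
    """Skewness-corrected Gaussian approximation of the SDF,
    with the 0.5 continuity correction."""
    x = (x_star - mu - 0.5) / sigma
    g = norm.cdf(x) + gamma * (1.0 - x**2) * norm.pdf(x) / 6.0
    return 1.0 - g
```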
However, upon comparing the WTW projection (at the level t = 0.01, for the year 2000) obtained by running the skewness-corrected Gaussian approximation with the projection based on the full Poisson-Binomial distribution, we found that 20% of the statistically-significant links are lost in the Gaussian-based validated projection. The limitations of the Gaussian approximations are discussed in further detail in [42,43].
Appendix C. Null models

C.1. BiRG model

The BiRG model is the random graph model solved for bipartite networks. It is defined by a single probability coefficient, equal for all pairs of nodes belonging to different layers, for any two such nodes to connect. More specifically,

$$p_{BiRG} = \frac{L}{N_R N_C},$$

where L is the observed number of links and $N_R$ and $N_C$ indicate, respectively, the number of rows and columns of our network. Since all probability coefficients are equal, the probability of a single V-motif (defined by the pair of nodes r and r′ belonging to the same layer and node c belonging to the second one) reads

$$P(V_{rr'}^{c}) = p_{BiRG}^2.$$

Thus, the probability distribution of the number of V-motifs shared by nodes r and r′ is simply a Binomial distribution defined by a probability coefficient equal to $p_{BiRG}^2$.
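Since, under the BiRG, the overlap of two rows is Binomial with $N_C$ trials of probability $p_{BiRG}^2$, the corresponding p-value is one line of scipy; a sketch under the reconstruction above:

```python
from scipy.stats import binom

def birg_pvalue(n_links, n_rows, n_cols, v_obs):
    """p-value of an observed V-motif count between two row-nodes
    under the BiRG null model."""
    p = n_links / (n_rows * n_cols)
    return binom.sf(v_obs - 1, n_cols, p**2)     # P(V >= v_obs)
```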
C.2. Bipartite configuration model
The BiCM [16] represents the bipartite version of the configuration model [18-20]. The BiCM is defined by two degree sequences. Thus, our Hamiltonian is

$$H = \sum_{r} \alpha_r k_r + \sum_{c} \beta_c h_c,$$

where $k_r$ and $h_c$ are the degrees of nodes on the top and bottom layer, respectively; $\alpha_r$ and $\beta_c$, instead, are the Lagrangian multipliers associated with the constraints.
The probability of the generic matrix M thus reads

$$P(M) = \frac{e^{-H(M)}}{Z(\theta)},$$

where $Z(\theta)$ is the grand canonical partition function. It is possible to show that the linking probabilities factorize as

$$p_{rc} = \frac{x_r y_c}{1 + x_r y_c}, \qquad x_r \equiv e^{-\alpha_r}, \quad y_c \equiv e^{-\beta_c},$$

with the multipliers fixed by the likelihood conditions $\langle k_r \rangle = k_r^*$ and $\langle h_c \rangle = h_c^*$, where $k_r^*$ and $h_c^*$ are the observed degree sequences.
C.3. Bipartite partial configuration models
Dealing with bipartite networks allows us to explore two 'partial' versions of the BiCM (hereafter BiPCM), defined by constraining the degree sequences of, say, the top and bottom layer separately. Let us start with the null model BiPCM_r, defined by the following Hamiltonian:

$$H = \sum_{r} \alpha_r k_r.$$

In order to estimate the values for $x_r \equiv e^{-\alpha_r}$, let us maximize the likelihood function $\mathcal{L} = \ln P(M^*)$ again [19]. It is thus possible to derive the Lagrangian multipliers,

$$x_r = \frac{k_r^*}{N_C - k_r^*}, \qquad \text{whence} \qquad p_{rc} = \frac{x_r}{1 + x_r} = \frac{k_r^*}{N_C}.$$

Notice that, in this case,

$$P(V_{rr'}^{c}) = \frac{k_r^*}{N_C} \cdot \frac{k_{r'}^*}{N_C} \quad \forall\, c,$$

i.e. each V-motif defined by r and r′ has the same probability, independently from c. This, in turn, implies that the probability distribution of the number of V-motifs shared by nodes r and r′ is again a Binomial distribution, defined as

$$V_{rr'} \sim \mathrm{Bin}\!\left(N_C,\; \frac{k_r^* k_{r'}^*}{N_C^2}\right).$$

Let us now move to considering the second partial null model, BiPCM_c, defined by the Hamiltonian

$$H = \sum_{c} \beta_c h_c.$$

In this case, each V-motif defined by r, r′ and c has a probability which depends exclusively on c. As a consequence, the probability distribution of the number of V-motifs shared by any two nodes r and r′ is the same one, i.e. a Poisson-Binomial whose single Bernoulli trial is defined by a probability reading

$$P(V_{rr'}^{c}) = \left(\frac{h_c^*}{N_R}\right)^2.$$
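The two partial models translate into equally compact validation routines; the sketch below (our own naming) pairs the Binomial case of BiPCM_r with the Poisson-Binomial trial probabilities of BiPCM_c, which can be fed to the pb_pvalue routine given earlier:

```python
import numpy as np
from scipy.stats import binom

def bipcm_r_pvalue(k_r, k_rp, n_cols, v_obs):
    """BiPCM_r: every V-motif between rows r, r' has the same probability
    (k_r/N_C)*(k_r'/N_C), so the overlap is Binomial."""
    q = (k_r / n_cols) * (k_rp / n_cols)
    return binom.sf(v_obs - 1, n_cols, q)        # P(V >= v_obs)

def bipcm_c_trial_probs(h, n_rows):
    """BiPCM_c: per-column trial probabilities q_c = (h_c/N_R)^2, defining
    the Poisson-Binomial distribution of the overlap of any two rows."""
    return (np.asarray(h, dtype=float) / n_rows) ** 2
```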
Appendix D. Comparing different projection algorithms
Available procedures suffer from a number of limitations that our method aims at overcoming. In what follows we compare, in greater detail, the performance of some of them in projecting the WTW on the countries layer for the year 2000; see figure D1 for the results of the comparison.
The method proposed in [12] outputs an empty network for all years of our dataset: we suspect the reason to lie in the very large number of hypotheses tested at a time, leading to a too-severe correction. A similar result is obtained when applying the recipe proposed in [7]: only a tenth of links (among the group of advanced economies) are validated.
Although similar-in-spirit to ours, the method proposed in [13] prescribes to implement the Bonferroni correction as well. All links validated by applying this kind of correction are always a subset of the links validated when controlling for the FDR: this is the reason underlying the less informative community structure obtained when this algorithm is run on the WTW.
The third comparison we have explicitly carried out is the one with the forest-inducing method proposed in [9]. Links validated by such a method are characterized by the largest overlap (82%) with the ones validated by our procedure. This may be due to the selection of those events which have the highest chance of being significant (i.e. the largest number of shared co-occurrences): in any case, no statistical control is explicitly provided (e.g. the forest-like topology is not per se guaranteed to encode the most significant events).
As a final remark, we explicitly notice that the problem of spurious clustering does not affect our method, by definition. In fact, the presence of a node simultaneously connected to several nodes on the opposite layer does not imply the latter to be connected in the projection: this is the case if, and only if, the similarity between the involved nodes passes the test of statistical significance. An extreme example is provided by a network having a node c (on one layer) which is connected to every other node (on the opposite layer), projected by employing the BiCM: since the fully-connected node is, actually, a 'deterministic' node (its links are described by probability coefficients which are 1), any V-motif having it as a vertex (e.g. $V_{rr'}^{c}$) is deterministic as well. Thus, $P(V_{rr'} = 0) = 0$ (one V-motif is surely present) and the distribution describing the overlap between r and r′ is shifted, as a whole, by one. In other words, the set of events which determine the presence of a link between r and r′ does not include the deterministic V-motif (even more so, deterministic nodes can be discarded from the validation process carried out by the BiCM from the very beginning).

Figure D1. Comparison between different projection methods, tested on the WTW in the year 2000. The method proposed in [12] (top panel) outputs an empty projection: this may be due to the large number of hypotheses tested at a time, accounted for by the Bonferroni correction. On the other hand, the links validated by the method proposed in [13] (middle panel) constitute a subset of ours (as apparent from the partial overlap of the detected communities): in fact, applying the Bonferroni correction means selecting part of the links validated by FDR-controlling procedures. Last, links validated by the forest-inducing method proposed in [9] (bottom panel) are characterized by the largest overlap with the ones validated by our procedure (82%; this large overlap may be due to the selection of those events having a high chance to be significant, even if an explicit control is missing).
|
2016-07-08T18:22:10.000Z
|
2016-07-08T00:00:00.000
|
{
"year": 2017,
"sha1": "541e77395f1124f14efacebea1713e236ec36e63",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1088/1367-2630/aa6b38",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "80b8abb79a1a275b5ab175576ffea5ee25354b2a",
"s2fieldsofstudy": [
"Computer Science",
"Mathematics"
],
"extfieldsofstudy": [
"Physics",
"Computer Science"
]
}
|
9543035
|
pes2o/s2orc
|
v3-fos-license
|
The Curious Case of Metonymic Verbs: A Distributional Characterization
Logical metonymy combines an event-selecting verb with an entity-denoting noun (e.g., The writer began the novel), triggering a covert event interpretation (e.g., reading, writing). Experimental investigations of logical metonymy must assume a binary distinction between metonymic (i.e. event-selecting) verbs and non-metonymic verbs to establish a control condition. However, this binary distinction (whether a verb is metonymic or not) is mostly made on intuitive grounds, which introduces a potential confounding factor. We describe a corpus-based approach which characterizes verbs in terms of their behavior at the syntax-semantics interface. The model assesses the extent to which transitive verbs prefer event-denoting objects over entity-denoting objects. We then test this “eventhood” measure on psycholinguistic datasets, showing that it can distinguish not only metonymic from non-metonymic verbs, but that it can also capture more fine-grained distinctions among different classes of metonymic verbs, putting such distinctions into a new graded perspective.
Motivation
Logical metonymy, an instance of enriched composition (Jackendoff, 1997), consists of a combination of an event-selecting metonymic verb and an entity-denoting direct object (e.g., The writer began the novel). Its interpretation involves the recovery of a covert event (reading, writing). Metonymy interpretation is generally explained in terms of a type clash between the verb's selectional restrictions and the noun's type, and extensive psycholinguistic work (McElree et al. (2001) and Traxler et al. (2002), among others) has demonstrated extra processing costs for metonymic constructions. For example, Traxler et al. (2002) combine metonymic and non-metonymic verbs with entity-denoting and event-denoting nouns (The boy [started/saw]V [the puzzle/fight]NP) and report significantly higher processing costs for the "coercion combination" (metonymic verb plus entity-denoting object: The boy started the puzzle).
While there has been much debate in theoretical linguistics on individual verbs that may or may not give rise to logical metonymy (for example, on enjoy, see Pustejovsky (1995); Fodor and Lepore (1998); Lascarides and Copestake (1998)), work in psycholinguistics (McElree et al., 2001; Traxler et al., 2002; Pylkkänen and McElree, 2006) and computational modeling (Lapata and Lascarides, 2003) seems to have agreed on a small set of "metonymic verbs" which is used when looking for empirical correlates of logical metonymy. However, this set of metonymic verbs is semantically rather heterogeneous, as it is selected based on intuition only. It includes not only aspectual verbs (begin, complete, continue, end, finish, start) but also psychological verbs (enjoy, hate, like, love, regret, savor, try), as well as others that elude straightforward categorization (attempt, endure, manage, master, prefer).
This semantic heterogeneity calls into question a homogeneous notion of metonymic verbs. Indeed, recent work by Katsika et al. (2012) notes that "the hypothesis that eventive inferences must be attributed to the same mechanism of building meaning (coercion + type-shifting) [for all metonymic verbs] is too strong". Their eye-tracking study supports the hypothesis that aspectual verbs trigger coercion and processing cost, while psychological predicates (e.g. enjoy) do not. This gives rise to a key question: Are all metonymic verbs alike?
A second potential methodological risk arises from the fact that experiments need to pair metonymic verbs with a control group of non-metonymic verbs. Verbs that are typically used as non-metonymic include forget, recall, remember, describe, praise, prepare, shelve, see, and unpack. The demarcation of metonymic vs. non-metonymic verbs is rarely motivated explicitly and in some cases even seems rather arbitrary. This raises an evident risk of circularity: the definition of logical metonymy relies on the notion of metonymic verbs, but this class is often characterized only in terms of their triggering metonymic shifts. What is needed is a set of independent and principled criteria to approach what we feel is a second crucial question: What is a metonymic verb?
In this paper, we make some progress towards answering these questions by proposing a corpus-based measure of eventhood that captures the degree to which verbs expect objects that are events rather than entities. This measure is able to: (a) distinguish between aspectual metonymic verbs, non-aspectual metonymic verbs, and non-metonymic verbs, lending support to Katsika et al.'s (2012) argument; (b) provide empirical evidence for or against the choice of materials in psycholinguistic studies of metonymy; (c) serve as a necessary (although not sufficient) indicator of new verbs that might show metonymic behavior.
Plan of the paper. Section 2 describes the definition of our eventhood measure and uses it for data exploration. Section 3 characterizes the data used in two psycholinguistic studies on metonymy. Our results show that our measure can distinguish verb classes, reliably predicting participants' behavior in the experiments.
Measuring the Event Expectations of Verbs
Our starting point is that metonymic verbs should be statistically more associated with event-denoting objects, while non-metonymic verbs should mainly co-occur with entity-denoting objects. We move on to define a measure of "eventhood" of a verb's object slot and to use it to distinguish between verb classes. Our hypotheses are that (a) aspectual verbs have a higher eventhood score than entity-selecting verbs and (b) aspectual verbs have a higher eventhood score than non-aspectual metonymic verbs.
Selection of typical objects from corpus data
There has been much work on modeling the various fillers of verbs, i.e. their selectional preferences, using explicit or implicit generalizations of the fillers. These rely primarily on a lexical hierarchy (Resnik, 1996), on distributional information (Rooth et al., 1999; Erk et al., 2010), or on both (Schulte im Walde et al., 2008). While such computationally-intensive approaches have proven effective in modeling selectional preferences in general, we are interested in learning about only one aspect of a verb's argument, namely how 'event-like' it is.
We use the WordNet (Fellbaum, 2010) lexical hierarchy to discover whether a noun has an event sense. We also use Distributional Memory (DM, Baroni and Lenci (2010)) as a source of distributional information that allows us to determine how strongly a noun is associated with a given verb as an object filler. DM is a general distributional semantic resource which allows the generation of vector-based semantic models (Turney and Pantel, 2010) from the distribution of words in context. In general, distributional semantic models are two-dimensional, relating a word with other words in its context, giving a 'bag-of-words' model (Schütze (1993), cf. Table 1 (a)), or with particular syntactic patterns, giving a 'structured vector space' (Padó and Lapata (2007), cf. Table 1 (b)).

Table 1: Examples of a two-dimensional bag-of-words space (a), and a two-dimensional structured vector space (b).

DM is a three-dimensional extension of such a two-dimensional matrix which includes the syntactically derived relation between the two words as an extra dimension. It is derived from the concatenation of the ukWaC, the English Wikipedia, and the BNC, resulting altogether in a 2.83 billion-token corpus. We use the TypeDM variant of DM, which contains over 130M links between nouns, verbs and adjectives, covering generic syntactic relations as well as lexicalized relations (see Baroni and Lenci (2010) for details). In DM, each triple of words $w_1$, $w_2$ and relation r, $\langle w_1\, r\, w_2 \rangle$, is scored by the Local Mutual Information (LMI, Evert (2005), Equation 1) between its three elements. LMI contains two factors, (i) the point-wise mutual information, which indicates how strongly their co-occurrence deviates from chance, and (ii) the raw co-occurrence frequency:

$$\mathrm{LMI}(w_1\, r\, w_2) = O_{w_1 r w_2} \cdot \log \frac{O_{w_1 r w_2}}{E_{w_1 r w_2}} \qquad (1)$$

where $E_{w_1 r w_2}$ is the MLE-expected frequency of the triple, and $O_{w_1 r w_2}$ its actually observed frequency in the corpus. For example, since the LMI score for ⟨meeting obj postpone⟩ is greater than that for ⟨breakfast obj postpone⟩, we can say that breakfast is a less typical object for postpone than meeting. Defined in this manner, typicality is not only a function of the co-occurrence frequency between an object and a verb but of the significance of this co-occurrence compared to chance. The format of DM allows for the simple extraction of highly informative fillers for a given verb by selecting those tuples whose relation is the one of interest and sorting by score. In the following sections we will use the standard matricization of DM (W×LW) as a semantic space, which defines as dimensions the pairs of links and context words as in Table 1 (b).
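As an illustration, the LMI score of Equation (1) is trivial to compute once observed and expected frequencies are available; the base of the logarithm is not specified above, so the base-2 choice in this sketch is an assumption:

```python
import math

def lmi(observed, expected):
    """Local Mutual Information of a <w1, r, w2> triple:
    raw frequency times the PMI of observed vs. expected counts."""
    return observed * math.log2(observed / expected)

# With these scores, the top-k object fillers of a verb are simply the
# k triples <noun, obj, verb> with the highest LMI.
```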
Defining event nouns in a lexical hierarchy
In order to determine how event-like the typical object is for a given predicate, we have to distinguish which objects have an event sense. We define an event noun as a noun with at least one WordNet synset (Fellbaum, 2010) that is dominated in the synset hierarchy by one of the top nodes shown in Table 2. This is a simple approximation of the degree to which the noun denotes an event. A more informed measure could e.g. include distributional information of the noun's senses. It is important to note that a particular noun can have more than one event-dominated synset. There are in fact eight nouns whose synsets generalize to all of the designated event nodes: control, culture, differentiation, elimination, inspiration, pleasure, reproduction, rumination; that is, they all have an action, cognitive process, and biological process reading. This definition leads to a set EV of 14K event nouns (out of WordNet's 170K nouns), which we can use to determine to what extent 'the typical object' of a verb is event-like. First we take the k most strongly associated object fillers from DM, $obj_k(v)$, for the verb v and then define the eventhood to be the percentage of these fillers that have an event sense. In other words, the eventhood $\epsilon_k$ for a verb v is defined as:

$$\epsilon_k(v) = \frac{|obj_k(v) \cap EV|}{k} \qquad (2)$$

Selecting the top k scored fillers as prototypical arguments has proven a reliable method to characterize the expectations for the argument slot, which allows, e.g., the modeling of selectional preferences (cf. Baroni and Lenci (2010); Erk et al. (2010); Lenci (2011)). For the present analysis, we fix k at 100 (i.e. ε := ε₁₀₀); we thereby also eliminate the issue of using words from DM which are not covered in WordNet. The following section investigates the range of eventhood scores across the verbs in DM.

Figure 1 shows the distribution of eventhood across verbs in DM. Verbs with ε ≈ 0, i.e. verbs with low eventhood, include unfrock, detain, marry, and behead, while verbs with high eventhood, i.e. those which rank the highest with respect to ε (i.e. ε ≈ 1), include expedite, undergo, halt, and delay. While this 'linearization' of the space of verbs given by their eventhood scores does not in and of itself suggest semantic coherence (given a particular range [α; β], the class of verbs with α < ε(v) < β will in general be a heterogeneous class), we find that in the fringe ranges, i.e. where ε ≈ 0 and ε ≈ 1, the verbs appear to be coherent with respect to their object fillers. For instance, the verbs in the leftmost bar of the histogram, corresponding to the range 0 < ε < 0.05, typically have people as the experiencers of the action denoted by the verb. In a sense, things that happen to or with people (e.g. marry or behead) do not typically happen to or with events. On the other side of the spectrum we have only 13 verbs with 0.9 < ε < 1 (e.g. commence, cease, halt, delay), most of which concern the temporal unfolding of an event. The most frequent range (0.3 < ε < 0.35), covering 640 verbs, contains a very diverse group of verbs: from prance, fluoridate, emaciate (ε ≈ 0.3) to exorcise, downsize, muddy (ε ≈ 0.35). To determine the semantic coherence across eventhood scores, we computed the pairwise cosine semantic similarity between the verbs within each eventhood range (Figure 1). Figure 2 shows the semantic similarities among verbs for each bin in the range [α; β]. The similarities for each set of n verbs {v | α < ε(v) < β} were then contrasted with the pairwise similarities for n randomly drawn verbs.
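A minimal sketch of the eventhood computation using NLTK's WordNet interface is given below; the EVENT_TOPS set is a placeholder, since the actual top nodes are listed in the paper's Table 2, which is not reproduced here (requires nltk.download('wordnet')):

```python
from nltk.corpus import wordnet as wn

EVENT_TOPS = {"act", "event", "process"}   # hypothetical top-node lemmas

def has_event_sense(noun):
    """True if at least one synset of `noun` is dominated by one of the
    designated event-like top nodes in the hypernym hierarchy."""
    for synset in wn.synsets(noun, pos=wn.NOUN):
        for path in synset.hypernym_paths():
            if any(s.name().split(".")[0] in EVENT_TOPS for s in path):
                return True
    return False

def eventhood(object_fillers, k=100):
    """Equation (2): fraction of the k most strongly associated object
    fillers (sorted by LMI) that have an event sense."""
    top_k = object_fillers[:k]
    return sum(has_event_sense(n) for n in top_k) / len(top_k)
```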
In 19 out of the 20 bins the actual verb similarities were statistically higher than the random ones (p < .001). This means that the verbs within each range form a semantically coherent group, suggesting that the eventhood score can identify semantically related verbs.
Figure 1: Histogram of eventhood.
Towards either end of the eventhood spectrum (Figure 2), the verbs are semantically much more similar to one another, while in the mid-range this semantic coherence is lost.
Evaluation on Psycholinguistic Datasets
We test our model on the experimental datasets from two metonymy interpretation studies (Traxler et al., 2002; Katsika et al., 2012). Each of these studies makes use of a classification according to which participants' behavior is expected to differ. More precisely, they expect more difficulty in processing when 'metonymic' verbs are combined with non-event-denoting objects than when 'less metonymic' verbs are.
If, as those studies claim, event-selecting verbs give rise to higher processing costs when combined with entity-denoting objects, then we expect our eventhood measure to be able to distinguish between the classes used in the psycholinguistic studies of Traxler et al. (2002) and Katsika et al. (2012).
The Datasets
The two datasets used are:

Traxler et al. (2002) dataset: composed of the 24 verbs used in Experiments 2 and 3 of Traxler et al. (2002). Verbs are divided into metonymic and non-metonymic verbs (event verbs and neutral verbs, according to the terminology of the study). Higher processing costs were observed for metonymic verbs combined with entity-denoting objects than for all remaining conditions (metonymic verbs combined with event-denoting objects, and non-metonymic verbs combined with entity- and event-denoting objects).

Katsika et al. (2012) dataset: composed of the 38 verbs used in Katsika et al. (2012), taken mostly from previous psycholinguistic experiments on type-shifting. As mentioned above, Katsika et al. (2012) make a point of distinguishing between three sets of verbs: here metonymic aspectual, metonymic psychological and non-metonymic verbs (according to the terminology of the study: aspectual, psychological and entity-selecting). Readers spent more time re-reading the verb in the metonymic aspectual condition than in the metonymic psychological or non-metonymic condition.
Evaluation
A direct correlation between eventhood and reading times is not feasible, because the psycholinguistic studies do not report reading times for each verb, but rather the average per condition (and even if they did, the number of measurements per verb would be too small). Thus, we resort to two alternative evaluation methods: testing whether the eventhood scores differ significantly between the verb classes of each study (rank-sum tests), and pairwise comparisons, checking for each metonymic/non-metonymic verb pair whether the metonymic verb receives the higher eventhood score.
For the Traxler et al. (2002) dataset, the difference between metonymic verbs and non-metonymic verbs is close to significance, with p just above 0.05 (W = 100.5, p < 0.053). The fact that this difference is less significant is compatible with the observations in Katsika et al. (2012), namely that the set of verbs typically used in studies on logical metonymy is heterogeneous and includes verbs which are less event-selecting than aspectual verbs. In fact, if we remove the four metonymic verbs that are not aspectual (endure, enjoy, expect, prefer), we find a significant difference between the non-metonymical and metonymic (now aspectual-only) classes (W = 67.5, p < 0.01).
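A class-level comparison of this kind boils down to a rank-sum test; the sketch below uses scipy with purely hypothetical eventhood scores, since the per-verb values are not reproduced here:

```python
from scipy.stats import mannwhitneyu

# Hypothetical eventhood scores for two verb classes (illustration only)
metonymic     = [0.91, 0.85, 0.78, 0.66, 0.74]
non_metonymic = [0.32, 0.45, 0.51, 0.28, 0.60]

# One-sided test: do metonymic verbs receive higher eventhood scores?
w_stat, p_value = mannwhitneyu(metonymic, non_metonymic, alternative="greater")
print(f"W = {w_stat}, p = {p_value:.4f}")
```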
On the Traxler et al. (2002) dataset, the model scores 23/32 in the pairwise comparisons. In other words, metonymic verbs receive higher eventhood scores for 72 % of the pairs. Table 4 shows some examples of the pairwise comparisons. We find that errors tend to occur for metonymic psychological verbs more often than for metonymic aspectual verbs. The reason is that the most event-affine nonmetonymic verbs (recall, report) prefer events to a higher degree than the least event-affine metonymic verbs (enjoy, prefer). This again suggests that Traxler et al.'s (2002) set of metonymic verbs is not clearly distinct from their non-metonymic verbs. This point is reinforced by Figure 3 which visualizes the eventhood distributions for the verb classes in both datasets as density plots and boxplots. The more homogeneous three-class distinctions in Katsika et al. (2012) seems justified as it clearly identifies three different selection behaviors (metonymic aspectual, metonymic psychological, non-metonymic), while the two-class distinction in Traxler et al. (2002) shows substantial overlap.
Discussion
Our results indicate that eventhood is a good indicator of 'metonymicity' and can even distinguish between classes of metonymic verbs. This raises the question of how strong the correlation between metonymicity and eventhood really is.
A first question is whether verbs need to have an (almost) perfect eventhood score to be metonymic. This is not plausible: if a verb is metonymic, we expect it to allow for entity-denoting objects, even if they will occur less frequently. For instance, begin is, arguably, a 'true' metonymic verb (metonymic aspectual). However, occurrences of begin in metonymical context (with entity objects) are indeed attested in the corpus. Consequently, it obtains an eventhood score of 0.91. Generally, we expect metonymic verbs to be placed at the high end of the eventhood spectrum, but not at the extreme (cf. Figure 3). A second question is whether all verbs at the upper end of the eventhood range are (or at least can be) metonymic. Our inspection shows that verbs with an extremely high eventhood tend to disprefer metonymic constructions. Among the top eventhood-scoring verbs are, for instance, perform, undergo, protest, conduct, spearhead, facilitate, undertake, witness. All of these verbs clearly prefer events and occur infrequently in metonymic constructions. However, occasional metonymic productivity occurs, as in the following examples from American discussion forums on the web: There's a huge connection between Prematurity and GBS morbidities and mortalities and I too would be more then willing to undergo the antibiotics if such a risk factor was involved.
[The Adventures of Tom Sawyer] is called the first real work of the American Literature movement, which in general spawned the Hemingways and Faulkners I would later undertake.
Taking an IPD approach, we collaborated with Zeemac using 3D modeling known as "real time design" to facilitate the floor plan.
In sum, the correlation between eventhood and metonymicity is strong but not perfect. It remains a question for further investigation which other factors are involved in determining whether a high-eventhood verb features prominently in metonymic constructions (begin) or not (conduct). One factor that we want to investigate is specificity, following the intuition that only verbs that refer to general properties of as many events as possible (like aspectual properties) rather than specific scenarios are suitable as metonymic verbs.
Conclusions
In this paper, we have introduced a simple data-driven measure of eventhood, that is, the degree to which verbs prefer events over entities as their direct objects. Our eventhood measure allows us to characterize and separate verb classes relevant for logical metonymy that were so far hand-picked on the basis of intuitive considerations.
The fundamentally graded nature of our measure suggests that there is no clear-cut binary distinction between metonymic verbs on one end and non-metonymic verbs on the other. Instead, there is a continuum with a sequence of classes (named in decreasing order of eventhood): First, verbs with an extremely high eventhood such as undergo strongly disprefer entity-denoting objects, but in some creative uses they may still combine with them, giving rise to metonymic interpretation. Next, metonymic aspectual verbs strongly prefer event-denoting objects but are (albeit less frequently) attested with entity-denoting objects and form "classic" cases of metonymy. Psychological verbs have a less strong bias for event-denoting objects, but can still be considered as metonymic (although, as Katsika et al. (2012) argue, with their own behavioral patterns). Finally, there is the wide range of non-metonymic verbs that are either neutral or entity-preferring.
This picture indicates that the question of how to select verbs for the control condition against which metonymical verbs are compared is by no means trivial. We believe that our depiction of the metonymic behavior as a graded range suggests that eventhood can be used to inform and guide the design of further materials in this area.
In closing, we note that expectations for the semantic types (event/entity) of verbal arguments can be understood as a very coarse variant of selectional preferences, and our model as a much simplified version of ontological models of selectional preferences (Resnik, 1996). On the other hand, the existence of classes with graded preferences indicates that eventhood differences may not be binary distinctions, but that we might rather be dealing with a graded range of behaviors. This has clear consequences for type-clash accounts of logical metonymy: given the existence of many verbs which exhibit intermediate behavior, it seems unlikely that there are two exclusive classes (metonymic vs. non-metonymic). Within this graded picture, the function of the type clash may be taken over by mismatches between preference (expectation) for an object and the actually encountered object.
The preliminary investigations presented in this paper thus show that corpus data can be used to provide independent empirical grounding to theory-loaded notions such as the one of metonymic verbs. This can be extremely useful for future experimental work as well as to evaluate experimental results.
|
2014-10-01T00:00:00.000Z
|
2013-03-01T00:00:00.000
|
{
"year": 2013,
"sha1": "3d1504e49e82654d1f98fa076645de7ad1945b18",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "ACL",
"pdf_hash": "3d1504e49e82654d1f98fa076645de7ad1945b18",
"s2fieldsofstudy": [
"Linguistics"
],
"extfieldsofstudy": [
"Computer Science"
]
}
|
11382753
|
pes2o/s2orc
|
v3-fos-license
|
Association between the rapid shallow breathing index and extubation success in patients with traumatic brain injury
Objective To investigate the association between the rapid shallow breathing index and successful extubation in patients with traumatic brain injury. Methods This study was a prospective study conducted in patients with traumatic brain injury of both genders who underwent mechanical ventilation for at least two days and who passed a spontaneous breathing trial. The minute volume and respiratory rate were measured using a ventilometer, and the data were used to calculate the rapid shallow breathing index (respiratory rate/tidal volume). The dependent variable was the extubation outcome: reintubation after up to 48 hours (extubation failure) or not (extubation success). The independent variable was the rapid shallow breathing index measured after a successful spontaneous breathing trial. Results The sample comprised 119 individuals, including 111 (93.3%) males. The average age of the sample was 35.0±12.9 years old. The average duration of mechanical ventilation was 8.1±3.6 days. A total of 104 (87.4%) participants achieved successful extubation. No association was found between the rapid shallow breathing index and extubation success. Conclusion The rapid shallow breathing index was not associated with successful extubation in patients with traumatic brain injury.
INTRODUCTION
Traumatic brain injury (TBI) is a non-degenerative, non-congenital lesion caused by trauma or triggered by high-energy acceleration or deceleration of the brain inside the skull that causes anatomical damage to or has functional effects on the scalp, skull, meninges or encephalon. (1,2) A study conducted in 2001 found that 555 of 11,028 TBI victims admitted to the emergency department of a public hospital in Salvador (BA) required hospitalization for specialized care. (3) Traumatic brain injury victims commonly require mechanical ventilation (MV) to maintain ventilation, optimize oxygenation, and protect their airways. Such patients might require MV for long periods of time and develop complications such as ventilator-associated pneumonia or lung injury, diaphragmatic dysfunction, barotrauma, and volutrauma. (4-7) Weaning patients off MV involves two different processes: (1) ventilator discontinuation; and (2) removal of the orotracheal tube (extubation).
The decision regarding whether a patient can tolerate the removal of the orotracheal tube is crucial because both extubation delay and failure are associated with adverse effects and increased mortality.(4,8-12) Therefore, accurate knowledge of the risk factors and predictors of extubation failure is needed.
The rapid shallow breathing index (RSBI) is one of the most widely investigated predictors of extubation failure. Values ≤105 cycles/min/L are considered predictive of extubation success.(8-10,12) The aim of the present study was to establish the association between the RSBI and extubation success in patients with TBI.

METHODS

Individuals with TBI aged ≥18 years of both genders who underwent MV via orotracheal tube for at least two days, had Glasgow coma scale (GCS) scores ≥8 at the time of extubation, whose guardians signed informed consent, and who passed a spontaneous breathing trial (SBT) were included in the study. Patients with spinal cord injury, unplanned extubation, or MV lasting <48 hours were excluded.
As a function of the observational nature of the present study, decision-making on weaning, extubation, reintubation and the use of noninvasive ventilation (NIV) was left to the staff of each participating unit, without interference by the investigators. The participants were considered fit to start SBTs by the healthcare staff when the event that led to the MV was reversed or controlled, gas exchange was adequate (PaO2 ≥60mmHg with a fraction of inspired oxygen (FiO2) ≤0.4 and a positive end-expiratory pressure (PEEP) of 5 to 8cmH2O), the patients were not subjected to continuous sedation, were hemodynamically stable (signs of satisfactory tissue perfusion, no need for or low doses of vasopressors, absence of coronary insufficiency or arrhythmias with hemodynamic repercussion) and were able to perform inspiratory efforts. The ICU staff interrupted the SBTs whenever any of the following signs of intolerance appeared: respiratory rate >35 cycles/minute, arterial oxygen saturation ≤90%, heart rate >140bpm, systolic arterial pressure >180mmHg or <90mmHg, agitation, sweating, or altered level of consciousness. The ICU staff initiated NIV whenever post-extubation respiratory failure occurred (defined by the presence of clinical signs suggestive of respiratory muscle fatigue and/or increased respiratory effort and increased respiratory rate). The patients were extubated when they could tolerate the SBT over 30 to 120 minutes and exhibited neurological stability with a GCS score ≥8.
The RSBI was measured with the headboard raised to 30º to 45º and with the patient in the dorsal decubitus position while monitoring the patients' vital signs. Tracheal aspiration was performed, and 15 minutes later, the artificial airway was connected to a properly calibrated ventilometer (Ferraris Mark 8 Wright Respirometer ® , United Kingdom) for one minute under spontaneous breathing. The respiratory rate and minute volume were measured over one minute, and these data served to calculate the tidal volume by dividing the minute volume by the respiratory rate as well as the RSBI by dividing the respiratory rate by the tidal volume (in liters). The RSBI was expressed as cycles/min/L. The decision to extubate was not influenced by the RSBI because the ICU staff were unaware of the results. The volunteers were divided in two groups: RSBI ≤105 cycles/min/L and >105 cycles/min/L.
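The index itself is simple arithmetic; a small sketch (with illustrative numbers, not patient data) makes the unit handling explicit:

```python
def rsbi(respiratory_rate, minute_volume):
    """Rapid shallow breathing index (cycles/min/L).
    Tidal volume (L) = minute volume (L/min) / respiratory rate (cycles/min);
    RSBI = respiratory rate / tidal volume."""
    tidal_volume = minute_volume / respiratory_rate
    return respiratory_rate / tidal_volume

# Example: RR = 25 cycles/min and VE = 8 L/min give VT = 0.32 L and
# RSBI = 78.1 cycles/min/L, below the traditional 105 cycles/min/L threshold.
print(round(rsbi(25, 8.0), 1))
```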
For the purposes of the analysis, the extubation outcome was considered the dependent variable and was rated successful when reintubation was not needed within 48 hours. The RSBI measured after a successful SBT was considered the independent variable.
When reintubation was needed within 48 hours after extubation, its cause was registered under one of the following categories: upper airway obstruction, respiratory failure (tachypnea, use of the accessory muscles of respiration, paradoxical breathing, or hypoxemia), reduced level of consciousness, bronchospasm, aspiration of lung secretions, excessive lung secretions, and other causes.
The results were expressed as the mean and standard deviation, median and interquartile range, or proportions, as appropriate. Student's t test or the Mann-Whitney U test was used for the comparison of the RSBI between the groups with successful and failed extubation, and the chi-squared or Fisher's exact test was used to compare proportions. The area under the ROC (receiver operating characteristic) curve of the RSBI was analyzed. Statistical analysis was performed using the software Statistical Package for the Social Sciences (SPSS) version 12.0, with p<0.05 as the significance level.
RESULTS
Of 129 consecutive patients with TBIs who were eligible for extubation during the study period, 119 passed the SBT and were included in the study. Ten patients were excluded due to self-extubation in 3 cases, MV lasting <48 hours in 4 cases, and spinal cord injury in 3 cases. Most of the volunteers were male (93.3%), the average age of the sample was 35.0±12.9 years old, and the MV lasted 8.1±3.6 days on average. With respect to treatment, 86 volunteers (72.3%) underwent surgery, and 33 volunteers (27.7%) received conservative treatment. Upon admission to the ICU, volume and pressure assist-control were the ventilation modes most frequently used in 71 (59.6%) and 46 (38.7%) participants, respectively. The demographic data, duration of MV, and clinical characterization of the 119 analyzed volunteers are described in table 1.
Extubation was successful in 104 patients (87.4%), while 15 patients (12.6%) required reintubation. The causes for reintubation were as follows: respiratory failure in 7 cases (46.7%), laryngospasm in 4 cases (26.7%), reduced level of consciousness in 2 cases (13.3%), excessive secretions in 1 case (6.7%), and sepsis in 1 case (6.7%). Comparison between the successful and failed extubation groups did not find any differences with respect to age, type of accident, GCS score upon admission to the hospital, duration of MV, or ventilation mode upon admission. The successful extubation group comprised a larger proportion of males (96.2% versus 73.33%; p=0.009) ( Table 1).
Following extubation, 8 of the 119 patients (6.7%) needed NIV, of whom 6 (5.8%) were in the group with successful extubation, and 2 (13.3%) were in the group that required reintubation; the intergroup difference was not statistically significant (p=0.26). No difference was found in the RSBI between patients who needed NIV and patients who did not: 86.1±23.7 cycles/min/L and 74.0±32.4 cycles/min/L, respectively; p=0.3. The average RSBI of the sample was 74.8±32.0 cycles/min/L. The RSBI did not exhibit a difference between the groups with successful and failed extubation; the average values were 73.5±33.1 cycles/min/L and 83.8±21.3 cycles/min/L, respectively (p=0.25). Among the 15 patients that needed reintubation, only 2 patients (13.3%) exhibited an RSBI >105 cycles/min/L. In the present study, no association was found between the RSBI categories (≤105 cycles/min/L and >105 cycles/min/L) and extubation success (Table 1).
The ROC curve constructed based on the graphical representation of the sensitivity and specificity values is depicted in figure 1. The area under the ROC curve corresponding to the RSBI was 0.64 (95% confidence interval (CI): 0.52-0.76; p=0.08), which indicated a lack of clinical value of the RSBI to predict successful extubation in patients with TBI. In the present study, an association was found between male gender and extubation success. Among authors who have reported on the association between gender and extubation outcomes in patients with TBI, some found more favorable results in males,(17) while others found better outcomes in females,(18) which indicated the need for further studies to clarify this matter.
The study by Yang and Tobin (19) included a cohort comprising 100 patients and found that an RSBI ≤105 cycles/min/L was predictive of extubation success. That result disagreed with the result of the present study. A possible explanation for that discrepancy is that the sample population in the study by Yang and Tobin was heterogeneous.
In a study with a prospective cohort comprising 92 neurosurgical patients, an RSBI ≤105 cycles/min/L was not associated with successful extubation, and among the 15 participants who needed reintubation, only 1 patient exhibited an RSBI >105 cycles/min/L.(12) Those findings agreed with the findings of the present study, which did not find an association between the traditional RSBI cutoff point and extubation success in individuals with TBI. In addition, other studies have found this threshold to be unable to predict extubation success, and thus the identification of a more accurate cutoff point is needed.(20-22) A prospective study was conducted with 80 patients subjected to general anesthesia to assess the use of the RSBI as a predictor of extubation outcomes. The results showed that participants with an RSBI of 80 to 100 cycles/min/L exhibited clinical complications following extubation. However, they did not need reintubation.(23) In another case series with 73 heterogeneous patients, the traditional RSBI cutoff point failed to detect 80% of the participants who needed reintubation, which further pointed to the inefficacy of that threshold.(24) Authors who assessed heterogeneous samples from two surgical ICUs were not able to find an association between the RSBI and extubation failure.(25) One study conducted with clinical and surgical patients that included serial measurements of the RSBI at the first minute and then at 30, 60, 90, and 120 minutes later found that the RSBI percent variation during an SBT was a better predictor of extubation success compared with any single measurement.(26) According to the literature, successful extubation might be influenced by factors such as age, duration of MV, hemoglobin concentration, arterial carbon dioxide partial pressure, amount of endotracheal secretions, and, particularly among neurological patients, airway protection and patency parameters,(9,10,21-23,25,26) which might have interfered with the results of the present study.
DISCUSSION
The present study assessed a cohort comprising 119 individuals to investigate the association between the RSBI and extubation success in patients with TBI. No association was found in the investigated population.
The investigated sample mostly comprised young male individuals, which agreed with the literature.(13-16) With respect to the cause of the trauma, motorcycle accidents were the most frequent, followed by physical aggression and being run over by a car, which also agreed with the reports by other authors.(3,13,14)
The present study had some limitations. First, because it was an observational study, its results should only be used to formulate hypotheses. In addition, it was conducted in a single center, the sample size was small, and the sample size was not calculated before the initiation of the study. Nevertheless, the multidisciplinary staff that assisted the patients was not informed about the data resulting from the present study, and the staff's decisions with respect to extubation were not influenced by the latter. Then, as a function of the lack of clinical value of the RSBI to predict success of extubation in patients with TBI, no cutoff point was selected in the present study for that variable. For that reason, further studies are needed to establish and validate a cutoff point for the RSBI in patients with TBI. In addition, other parameters that can influence extubation outcomes might not have been assessed in the present study. Finally, scores to establish the prognosis based on severity were not used. However, the GCS score upon admission was used as indicator of the severity of the neurological state. In spite of the abovementioned issues, the present study supplied data with respect to the association between the RSBI and extubation success in a specific sample of individuals with TBI at the ICUs of a public hospital that is a referral center for trauma patients.
CONCLUSION
As indicated by these results, the rapid shallow breathing index was not associated with successful extubation in the present sample of patients with traumatic brain injury.
|
2016-05-04T20:20:58.661Z
|
2013-07-01T00:00:00.000
|
{
"year": 2013,
"sha1": "40f3c84e3cb73df11163fb86a94085602457602a",
"oa_license": "CCBYNC",
"oa_url": "https://europepmc.org/articles/pmc4031850?pdf=render",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "4e535dac51d68c15ae7f5a5d89f30c1bfee81586",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
119245085
|
pes2o/s2orc
|
v3-fos-license
|
Characterization of Micro-Roughness Parameters and Optical Properties of Obliquely Deposited HfO2 Thin Films
Oblique angle deposited oxide thin films have opened up new dimensions in fabricating optical interference devices with a refractive index profile tailored along the thickness, by tuning the microstructure through the angle of deposition. The microstructure of thin films strongly affects surface morphology as well as optical properties. Since surface morphology plays an important role in the qualification of thin film devices for optical or other applications, it is important to investigate morphological properties. In the present work, HfO2 thin films have been deposited at several oblique angles. Morphological statistical parameters of such thin films, viz., correlation length, intrinsic roughness, fractal spectral strength, etc., have been determined through suitable modelling of the extended power spectral density function. The intrinsic roughness and fractal spectral strength show an interesting behaviour with deposition angle, and the same has been discussed in the light of atomic shadowing, re-emission and diffusion of ad-atoms. Further, the refractive index and thickness of such thin films have been estimated from transmission spectra. Refractive index and grain size depict opposite trends with deposition angle, and their variation has been explained by the varying column slanting angle and film porosity with deposition angle.
Introduction
Oblique angle deposition has been attracting researchers due to its applications in interference devices, micro sensors, microelectronics, photonic crystals, and rugate-structure based devices. Nowadays, it is being used for fabricating precision interference filters [1,2] in which the refractive index is varied by varying the angle of deposition, resulting in varying porosity due to atomic shadowing and limited ad-atom diffusion [3,4] during growth. It generally works at angles greater than 60° with respect to the substrate normal. When the angle reaches around 80°, it is termed glancing angle deposition (GLAD). Oblique angle deposition results in special nano- and microstructures of thin films. By employing substrate rotation and varying deposition angle, different geometries like pillars, helices, zigzags, erect columns etc. have been achieved successfully [5-8]. Zhu et al. have also reported related applications of such structures [11], and Gasda et al. have fabricated nano-rod proton exchange membrane fuel cell cathodes by glancing angle deposition of carbon [12].
It is well understood that surface morphology affects the functionality of thin film and multilayer devices for optical and other applications [18]. Surface morphology strongly perturbs the amount and distribution of light scattered from optical components, and such scattering is a performance-limiting factor for optical devices. Hence, it is of high importance to characterize the micro-roughness parameters of such obliquely deposited HfO2 thin films to assess their surface morphological properties. Generally, the RMS roughness of a surface is taken as the parameter to characterize surface morphology. However, a surface roughness parameter computed from the RMS distribution of heights alone does not take into account the lateral distribution of surface features. The power spectral density function (PSDF) provides a more complete description of surface topography. The PSDF describes two aspects of surface roughness, viz., the distribution of heights from a mean plane, and the lateral distances over which height variations occur [19-21]. Moreover, the PSDF also provides useful information on superstructures and fractals of surfaces. Fractal geometry and scaling concepts can concisely describe rough surface morphology [22,23]. Surface morphology at different scales is believed to be self-similar and related to fractal geometry. Fractal analysis can extract many different kinds of information from measured surface morphology, which makes the fractal approach very attractive and useful in describing the surface statistics of thin films.
Hafnium oxide exhibits a high refractive index, high band gap [20,24], high laser-induced damage threshold [25-27] and transparency from the ultraviolet to the mid-infrared (0.20-10µm) [28]. It is a widely used high-index coating material for the fabrication of multilayer interference devices. In the present work, the 2-D extended PSDF has been computed from measured AFM data for the entire set of obliquely deposited HfO2 thin films by combining the PSDFs of three different scan sizes. Different PSD models in combination have been fitted to the computed PSDF to extract fractal parameters, correlation length, intrinsic RMS roughness and the contribution of aggregates or superstructures to surface roughness. Further, the refractive index and film thicknesses have been computed from transmission measurements. We have found very interesting correlations among micro-roughness parameters, refractive index and angle of deposition.
Experimental details

Thin films were deposited on fused silica substrates at 200ºC by reactive electron beam (EB) evaporation. A schematic of oblique angle deposition is shown in Fig. 1. Before deposition, the entire batch of substrates was cleaned in an ultrasonic cleaner and vapour degreaser to achieve good quality films. α was defined as the angle between the normal to the substrate plane and the incident vapour flux, as shown in Fig. 1. Different values of α were set by tilting the substrate, whereas the direction of the incoming vapour flux was held fixed. The distance between substrate and vapour source was kept ~45 cm. The base pressure prior to deposition was ~1×10⁻⁵ mbar. During deposition, high purity (99.9%) oxygen was supplied to the deposition chamber through a mass flow controller to maintain the stoichiometry of the HfO2 thin films. The rate of deposition and film thicknesses were monitored and controlled by an Inficon 'XTC2' quartz crystal monitor. An optimized oxygen partial pressure of 1×10⁻⁴ mbar and a deposition rate of 5Å/s were maintained during deposition. An NT-MDT P47H AFM system was used for morphological measurements of the obliquely deposited HfO2 thin films. A super sharp diamond-like carbon (DLC) coated Si probe having tip curvature 1-3 nm, resonance frequency 198 kHz and force constant 8.8 N/m was used. The length and width of the DLC cantilever probe were 125 and 35 µm, respectively. The DLC coated AFM probe, being very hard and anti-abrasive, was chosen to obtain consistency in the measurements [29]. Three different measurements having scan sizes 2.5x2.5, 5x5 and 10x10 μm² with a spatial resolution of 512x512 points have been taken for all the films. For optical characterization, transmission measurements were performed from 300-1200 nm with a wavelength resolution of 1 nm on a Shimadzu UV-VIS-NIR 3101PC spectrophotometer.
Computation and analysis of power spectral density function
The PSD function can be derived from many different measurements, such as morphological measurement by surface profilometer, the bi-directional reflectance distribution function and AFM measurement of the surface profile [19,21]. Among these, AFM is widely used and is an excellent tool for characterizing rough surfaces whose height irregularities do not exceed a few microns. A large number of publications describe surface statistics thoroughly [30][31][32]. In the present paper, we have adopted the formulation described in refs. [33,34] for the computation of the PSDF:

S_2(f_x, f_y) = (1/L²) | Σ_{m=1}^{N} Σ_{n=1}^{N} Z_{mn} exp[−2πi ΔL (f_x m + f_y n)] (ΔL)² |²    (1)

Here S_2 is the 2-dimensional PSDF, L² is the scanned surface area, N is the number of data points in both the X and Y directions of the scanned area, Z_{mn} is the surface profile height at position (m,n), and f_x and f_y are the spatial frequencies in the X and Y directions respectively. ΔL (= L/N) is the sampling interval.
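Equation (1) amounts to a discrete Fourier transform of the height map, so it maps directly onto an FFT. The sketch below is a minimal illustration, assuming a square N x N height array; the function name and the exact normalization convention are ours, and conventions differ between instruments and references.

```python
import numpy as np

def psd_2d(z, L):
    """2-D PSDF of an N x N AFM height map z over a scan of side L,
    following the discrete formulation of equation (1)."""
    N = z.shape[0]
    dL = L / N                                    # sampling interval ΔL
    # DFT of the height map; the (ΔL)^2 factor turns the discrete sum
    # into an approximation of the continuous Fourier transform.
    Z = np.fft.fftshift(np.fft.fft2(z)) * dL**2
    S2 = np.abs(Z)**2 / L**2                      # units of (length)^4
    f = np.fft.fftshift(np.fft.fftfreq(N, d=dL))  # spatial frequency axes
    return S2, f
```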
The computation of the PSDF is followed by a transition to polar coordinates (f, φ) in frequency space and angular averaging over the polar angle φ:

S(f) = (1/2π) ∫_0^{2π} S_2(f, φ) dφ,  with  f = (f_x² + f_y²)^{1/2}    (2)

As the angularly averaged PSDF depends on only one parameter, it will be plotted in all our figures as a 'slice' of the 2-D PSDF, with units of (length)⁴.
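A sketch of the angular average of equation (2), assuming the S2 and f arrays produced by a routine like the one above; binning the 2-D PSDF into radial-frequency annuli stands in for the continuous integral.

```python
import numpy as np

def angular_average(S2, f):
    """Average the 2-D PSDF over the polar angle, leaving a function
    of the radial spatial frequency f_r = sqrt(fx^2 + fy^2)."""
    fx, fy = np.meshgrid(f, f)
    df = f[1] - f[0]
    k = np.rint(np.hypot(fx, fy) / df).astype(int).ravel()  # annulus index
    S_sum = np.bincount(k, weights=S2.ravel())
    counts = np.bincount(k)
    S1 = S_sum / np.maximum(counts, 1)            # mean PSD per annulus
    f_r = np.arange(S1.size) * df
    return f_r, S1
```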
The PSDF obtained from a single AFM scan contains roughness information in a limited band of spatial frequencies, and the bandwidth depends on the scan area and sampling interval. Artefacts can also constrain the frequency bandwidth of the PSDF. Fortunately, such bandwidth limitations can be eliminated by combining topographical measurements performed at different scan sizes, provided the following conditions are satisfied: (1) the spatial frequency ranges of the different scan-size measurements should partially overlap;
(2) the different PSDFs should be of the same order of magnitude in the overlap region.
With the conditions mentioned above, the combined PSDF at a given frequency is obtained by geometrical averaging:

S_combined(f) = [ ∏_{j=1}^{M} S_j(f) ]^{1/M}    (3)

Here M is the number of PSDFs overlapping at the frequency concerned.
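The geometric mean of equation (3) can be sketched as follows; each curve is interpolated onto a common frequency grid, which is our implementation choice rather than something prescribed above.

```python
import numpy as np

def combine_psd(curves, f_common):
    """curves: list of (f, S) pairs, one per scan size. Returns the
    geometric mean of the M PSDFs covering each common frequency."""
    log_sum = np.zeros_like(f_common)
    M = np.zeros_like(f_common)
    for f, S in curves:
        inside = (f_common >= f.min()) & (f_common <= f.max())
        log_sum[inside] += np.log(np.interp(f_common[inside], f, S))
        M[inside] += 1
    return np.exp(log_sum / np.maximum(M, 1))
```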
In the present work, PSD functions have been computed separately for the scan areas 2.5x2.5, 5x5 and 10x10 μm² and combined in a suitable manner, taking care of all the conditions mentioned above. The experimental PSDF computed for the morphologies of the obliquely deposited HfO2 thin films needs appropriate analysis models, so that an extensive interpretation of the PSDF can provide deep insight into the morphological statistical parameters. Several mathematical models, alone or in combination, have been proposed and used by researchers to interpret experimental PSDFs. The most used extended model for the PSDF of thin films is the sum of Hankel transforms of the Gaussian and exponential autocorrelation functions [35,36]. However, such a model fails when a wide spatial frequency range is considered. To describe roughness over a large range of spectral frequencies, the PSD model should comprise contributions from the substrate, the pure thin film, and aggregates or superstructures.

The PSDF of substrates generally follows an inverse power law in spatial frequency (assuming fractal-like surfaces) and is given as

S_fr(f) = K / f^γ    (4)

Here K is the spectral strength of the fractal and γ is the fractal spectral index. This PSD formulation applies to self-affine surfaces only, and the fractal dimension Fd follows directly from the spectral index γ. When γ = 0, i.e. Fd = 2, the surface is an extreme fractal; for Fd = 1.5 the surface is a Brownian fractal; and for Fd = 1 it is a marginal fractal. Apart from the substrate fractal contribution to the total roughness, thin films also exhibit strong fractal character, especially at higher spectral frequencies.

The PSDF of the pure thin film is conventionally characterized by the ABC or k-correlation model [38,39]:

S_ABC(f) = A / (1 + B²f²)^{C/2}    (5)

where A, B and C are model parameters. The equivalent RMS roughness (the contribution from the pure thin film) and the correlation length, which depicts the grain size, are related to the A, B and C parameters as follows:

σ = [2πA / (B²(C − 2))]^{1/2}  (for C > 2)    (6)

ξ = B    (7)

where the correlation length is identified with the knee parameter B of the model. The models discussed so far are monotonically decreasing functions of spatial frequency and do not account for any local maxima in the PSDF, while the experimental PSD functions of our films exhibit one or more local maxima due to contributions from aggregates. Such peaks in the experimental PSD can be accounted for by a Gaussian function with its peak shifted to a non-zero spatial frequency, as described in ref. [40]. For our thin films, which exhibit one or more peaks in the PSD function, we have used the combination of all three PSD models, giving

S(f) = K/f^γ + A/(1 + B²f²)^{C/2} + Σ_i a_i exp[−(f − f_i)²/(2w_i²)]    (8)

where a_i, f_i and w_i denote the amplitude, centre frequency and width of the i-th shifted Gaussian peak.
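Fitting equation (8) to a combined PSDF can be sketched as below. Parameter names mirror equations (4)-(8); the single Gaussian peak (a second can be appended for films that need it), the log-space least squares and any starting values are our simplifications, not the exact fitting procedure used for Table 1.

```python
import numpy as np
from scipy.optimize import curve_fit

def combined_psd(f, K, gamma, A, B, C, a1, f1, w1):
    fractal = K / f**gamma                          # equation (4)
    film = A / (1.0 + (B * f)**2)**(C / 2.0)        # equation (5)
    peak = a1 * np.exp(-(f - f1)**2 / (2.0 * w1**2))
    return fractal + film + peak                    # equation (8), one peak

def fit_psd(f, S, p0):
    # Fit in log space so the many-decade dynamic range of a PSD does
    # not let the low-frequency points dominate the residuals.
    log_model = lambda f, *p: np.log(combined_psd(f, *p))
    popt, _ = curve_fit(log_model, f, np.log(S), p0=p0, maxfev=20000)
    K, gamma, A, B, C = popt[:5]
    sigma = np.sqrt(2.0 * np.pi * A / (B**2 * (C - 2.0)))  # equation (6)
    return popt, sigma, B                           # ξ = B, equation (7)
```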
Determination of optical constants from transmission spectra
Prior to determining the optical constants of the thin films, the substrate transmission spectrum was fitted with its theoretical expression [41] using a suitable dispersion relation to estimate the substrate refractive index and extinction coefficient. The procedure for deriving the refractive index and thickness of thin films from the measured transmission spectrum is detailed in ref. [42]. The theoretical transmission of a single-layer thin film was generated using the Sellmeier dispersion model. χ² (chi-square) minimization [41] was carried out to determine the fitting parameters. The refractive index (n) and film thickness were computed from the fitting parameters.
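As a schematic of this procedure, the sketch below couples a one-term Sellmeier model to a Swanepoel-type transmission expression for a transparent single layer on a thick substrate; the actual expression and procedure are those of refs. [41,42], so the transmission formula, the assumed substrate index and the starting values here are illustrative stand-ins.

```python
import numpy as np
from scipy.optimize import minimize

def sellmeier(lam_um, B1, C1):
    """One-term Sellmeier dispersion: n^2 = 1 + B1*λ²/(λ² − C1)."""
    l2 = lam_um**2
    return np.sqrt(1.0 + B1 * l2 / (l2 - C1))

def transmission(lam_um, n, d_um, s=1.45):
    """Swanepoel-type transmission of a transparent film (k ≈ 0) on a
    thick substrate of index s (fused silica assumed here)."""
    A = 16.0 * n**2 * s
    B = (n + 1.0)**3 * (n + s**2)
    C = 2.0 * (n**2 - 1.0) * (n**2 - s**2)
    D = (n - 1.0)**3 * (n - s**2)
    phi = 4.0 * np.pi * n * d_um / lam_um        # interference phase
    return A / (B - C * np.cos(phi) + D)

def chi2(p, lam_um, T_meas):
    B1, C1, d_um = p
    n = sellmeier(lam_um, B1, C1)
    return np.sum((T_meas - transmission(lam_um, n, d_um))**2)

# res = minimize(chi2, x0=[1.9, 0.03, 0.45], args=(lam, T),
#                method="Nelder-Mead")
```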
Results and discussion
Generally, a PSDF computed from an AFM scan at a single place is very noisy, and hence the morphological parameters deduced from it may be erroneous. One way to eliminate the noise is to carry out data smoothing, but smoothing introduces artefacts in the data and may lead to a wrong estimation of the micro-roughness parameters. The other way, which we have chosen in the present paper, is to perform many scans of the same size at different places and then average them. We have performed scans of the same size at 8 different places over the thin film surface. In Fig. 2, the PSDFs of thin film S-3 computed from equation (1) for scan size 2.5x2.5 μm² at different places are shown along with their average. It is worth noticing that after averaging, the fluctuations in the PSDF are reduced to a great extent.

The extended PSDFs for the entire set of obliquely deposited HfO2 thin films, computed from the AFM measurements using equations (1), (2) and (3), are plotted in Fig. 3. The extended PSDF of thin film S-1, deposited at 80°, shows the highest roughness over the full range of spectral frequencies among all the films. The combined PSD model described by equation (8) has been used to fit the experimental extended PSDF for the entire set of thin films. The fitting parameters, intrinsic film roughness and correlation length are listed in Table 1. The experimental and fitted extended PSDFs for thin films S-1 and S-7 are shown in Fig. 4(a) and Fig. 4(b) respectively; the fitting quality justifies the use of the combined PSDF model. The contributions of the different model components to the total extended PSDF, or spectral roughness, have also been computed and are plotted in Fig. 4(a) and Fig. 4(b) for films S-1 and S-7 respectively. It can be noted from Table 1 that the entire set of films, except S-1, has been fitted using two shifted Gaussian peaks; the PSDF of film S-1 fits very well using a single shifted Gaussian peak only.

The variation of surface roughness with deposition angle (Fig. 5) can be understood from the growth kinetics. At glancing incidence, the momentum of an ad-atom perpendicular to the plane of the substrate is the lowest, and hence the sticking probability of ad-atoms to the substrate is the lowest. A low sticking probability leads to high re-emission of ad-atoms, which has a smoothing effect on the surface of a slanted-angle columnar growth film [43]. On the other hand, at glancing angles, surface roughening due to the atomic shadowing effect is very strong and dominates the surface smoothing due to re-emission and diffusion of ad-atoms [43,44]. Consequently, the GLAD film shows the highest surface roughness amongst all the films. As the angle α decreases, i.e. θ increases, the shadowing effect diminishes very quickly [44]. For angles α below 70°, shadowing offers only a very small roughening effect. Since the re-emission of ad-atoms also decreases with angle θ due to the increase in sticking probability, smoothing due to re-emission and diffusion of ad-atoms starts to dominate the roughening due to shadowing, and effective smoothing of the film surface takes place.

The diameter of the slanted columns is very large (~50-100 nm) at glancing angles, which ultimately leads to a bigger grain size on the surface. For reference values of the column diameter, inter-columnar distance and column slanting angles, we have reported the cross-sectional FESEM characterization of such films in our earlier work [29]. As the angle α decreases, the atomic shadowing effect decreases, which leads to a decrease in the surface grain size because of the decrease in the column slanting angle. As listed in Table 1, the contribution of the aggregates to the total PSDF, or spectral roughness, dominates at lower spatial frequencies, while their contribution at higher frequencies is negligible.
The aggregate sizes and their contributions to the spectral roughness show no definite trend with α, and hence no correlation can be established between the shifted Gaussian peak parameters and the angle α.

Fig. 7(a) presents the transmission spectra of all the thin films. The transmission spectra show a decrease in the visibility of the interference fringes with increasing angle α. This outcome may be attributed to the increase in film porosity with α. Again, the variation in film porosity is governed by the film microstructure, which changes due to the varying atomic shadowing effect with deposition angle. It may also be noted from Fig. 7(a) that for film S-1 (GLAD), the absorption for wavelengths below 350 nm increases sharply; this may be the contribution of multiple reflections of light between the columns inside the film and of high diffuse scattering from the rough surface [45,46]. Such loss of light due to scattering or multiple reflections inside the thin film becomes visible and dominant for wavelengths ≤ 300 nm, the reason being the large inter-columnar distance in the GLAD thin film. As the angle α decreases, the inter-columnar distance reduces steeply and becomes very small compared to the wavelengths of interest. Hence, the films deposited at lower oblique angles do not show any additional absorption. In Fig. 7(b), the experimental and fitted transmission curves are shown for film S-3 (α = 62°). The suitability of the Sellmeier dispersion model is justified by the fitting quality depicted in Fig. 7(b). The film thicknesses determined through modelling are 622, 487, 390, 478, 398, 426 and 421 nm for films S-1, S-2, S-3, S-4, S-5, S-6 and S-7 respectively.

Fig. 8 presents the dispersive values of the refractive index computed from modelling the transmission spectra of the thin films. Finally, the variations of correlation length and refractive index with deposition angle are plotted together in Fig. 9. The variation of the correlation length shows a trend opposite to that of the refractive index with deposition angle. Such behaviour indicates a strong correlation between grain size and refractive index. The refractive index at a wavelength of 600 nm varies between 1.37 and 1.93 as the angle α varies from 80° to 0°. The lowest refractive index of 1.37 is exhibited by the GLAD film and is less than the refractive index of the fused silica substrate (n = 1.45 at λ = 600 nm). Consequently, thin film S-1 renders an antireflection effect to the fused silica substrate, as shown in Fig. 7(a). It is therefore concluded that the variation in the microstructure of obliquely deposited HfO2 thin films has a great impact on their optical and morphological properties.
Conclusion
Several HfO2 thin films have been deposited at different oblique angles varying from 0° to 80° by reactive electron beam evaporation. Such thin films possess special microstructures due to atomic shadowing and limited ad-atom diffusion during growth. The varying microstructure with deposition angle also affects the morphological and optical properties of the thin films. The effect of the deposition angle on the morphological and optical properties has been studied extensively through the extended power spectral density function and the transmission spectra. Among all the thin films, the GLAD film exhibits the largest grain size and intrinsic RMS surface roughness.
The intrinsic roughness and the fractal spectral strength obtained from the analysis of the extended power spectral density function follow a similar trend with deposition angle. The behaviour of the surface morphological statistical parameters and of the refractive index with deposition angle has been explained by the combined effects of atomic shadowing, re-emission of ad-atoms and diffusion of ad-atoms.

Fig. 1: Schematic of the oblique angle deposition and the mechanism of slanted columnar growth due to shadowing effects.
Chocolate scents and product sales: a randomized controlled trial in a Canadian bookstore and café
We report the results of a 31-day trial on the effects of chocolate scent on purchasing behavior in a bookstore. Our study replicates and extends a 10-day randomized controlled trial in order to examine the generalizability of the original finding. We first introduce the study of store atmospherics and highlight the importance and dearth of replication in this area. In the next section, we describe the original study and discuss the theory of ambient scent effects on product sales, and the role of scent-product congruity. We then describe our design and methods, followed by presentation and discussion of our results. We find no evidence that chocolate scent affects sales. These findings indicate the importance of replication in varied settings. Contextual factors and the choices available to customers may moderate the effects of ambient scent on purchasing behavior. Our study highlights the value of examining the generalizability of experimental findings, both for theory and practice.
Background
Careful composition of a store's atmosphere, according to a stream of literature in consumer research, can tip the balance between success and failure for a business (Turley and Milliman 2000). Experimental investigation of optimal store atmospherics has a long history: 50 years ago Smith and Curnow ran a field experiment manipulating the volume of music in a supermarket (Smith and Curnow 1966). A large menu of controllable atmospheric variables (e.g., music, lighting, wall color, ambient scent) each of which can take on innumerable values (e.g., chocolate scent, citrus scent, sea breeze scent), and a continuously regenerating array of product-types on which to test the effects has allowed the development of a large experimental literature in store atmospherics. With such a wealth of potential stimuli-Turley and Milliman list 57 different categories of atmospheric variable-there is always a new atmospheric effect to investigate, and replication studies are rare.
Turley and Milliman's excellent review of the experimental literature makes no reference to replication studies. In a more recent, critical review focused on the effects of ambient scent, Teller and Dennis (2012) conduct a replication of Chebat and Michon (2003), to our knowledge the only published replication study on ambient scent atmospherics. The study of store atmospherics represents a realm of research with clear substantive implications and potentially meaningful economic effects. The generalizability and replicability of experiments like these matter both for scientific reasons and for economic reasons.
In recent years, a crisis of replicability in many scientific disciplines has drawn greater attention to the importance of replication in establishing a reliable understanding of a stimulus and behavioral response (Schooler 2014). And replication and extension take on an additional layer of import when findings carry actionable implications for practitioners, as in a field like consumer research. Because experimentation is widely recognized as the gold-standard, publicized experimental findings in consumer research are likely to be taken as recommended practice. In hopes of achieving the results suggested by experimental findings, business owners may invest scarce resources to adopt practices that may prove unreliable or unsuitable upon repeated testing.
This study reports results from an ambient scent field experiment that replicates and extends the experiment conducted by Doucé et al. (2013). Doucé et al. conducted a randomized 10-day trial in a Belgian chain bookstore examining the effects of ambient chocolate scent on consumer behavior. Doucé et al. note that total sales increased 5.07 % on days in which the chocolate scent was administered, and the reported increase in sales was publicized widely in the popular press (see, e.g., Dooley 2013) and promoted in trade publications (see, e.g., Abrams 2013). The experiment reported here aimed to assess the generalizability of the Doucé et al. results by extending the design to a common variant of the original setting: a bookstore with an adjoining café area. We test whether the finding of increased sales holds when products in the same domain as the chocolate scent (coffee and food items) are offered for sale alongside books. Our experiment found no effect of chocolate scent on either bookstore or café sales.
Ambient scent and bookstore purchasing behavior
The primary findings of Doucé et al. pertain to customers' "approach behavior", or engagement with products (e.g., "closely examining multiple books"). The authors categorized two genres of books as congruent with chocolate smell (Food & Drink [Cook] Books and Romance Novels & Romantic Literature), and two genres as incongruent (History and Crime, Thrillers & Mystery). Doucé et al. recorded an increase in both incongruent book sales (22 % increase) and congruent book sales (40 % increase) on days in which the chocolate scent was administered. The authors do not report measures of the uncertainty around these estimates. In their main finding, the authors report an increased likelihood of purchasing a congruent-category book over an incongruent-category book. Doucé et al. caution that countervailing effects on congruent and incongruent product types may pose a challenge for stores that sell a variety of products: "Retailers offering more than one product type should be aware of the possible negative effects of a pleasant scent that is thematically incongruent with part of the store offerings" (Doucé et al. 2013, p. 69). Schifferstein and Blok (2002) describe the potential mechanisms through which the thematic associations of an odor may influence consumer behavior. If a stimulus scent is unconsciously detected, odor priming will increase the accessibility of knowledge related to the stimulus. If the scent is consciously detected but unidentified, then the processes of trying to place the scent will generate associations-some correct and related to the stimulus, others incorrectly associated with the stimulus, as the actual identity of the scent remains unknown. Finally, a scent may be consciously detected and identified, in which case the scent activates only one's true associations with the stimulus, which in turn may generate search behavior for related products.
The behavioral result of each of these three modes of influence depends on whether the stimulus scent is congruent with the products offered for sale (i.e., the thematic associations of the scent are related to the products) or incongruent (the thematic associations are unrelated to the products in question). Foreshadowing Doucé et al.'s concluding caveat, Schifferstein and Blok propose countervailing effects dependent on congruency: "…if some products in a store are thematically congruent with the ambient smell in that store whereas others are not, sales of the congruent products may benefit from the smell, whereas sales of the incongruent products may be hampered by it" (p. 541). Testing this hypothesis on magazine sales in three bookstores, Schifferstein and Blok found no evidence of countervailing effects: ambient scents neither increased sales of magazines deemed congruent with the scent nor decreased sales of magazines deemed incongruent.
Doucé et al. note that they selected a bookstore with no café area, and no nearby "shops associated with scents (e.g., a coffeehouse)" (p. 67). Bookstores frequently include an area serving coffee and pastries, making the bookstore-with-café arrangement a natural context in which to test the generalizability of Doucé et al.'s finding of an ambient scent effect on consumer behavior. We test whether a chocolate scent released on randomly assigned days had an effect on either product-type in the consumer's choice set: books or café items. This experiment thus examines whether the contextual factors of the store and the particular choice set available to customers moderate the effect of a chocolate scent on purchasing behavior. Extending the study to a slightly different setting, we gain insight into the generalizability of an effect of chocolate scent on bookstore sales.
Methods
We conducted a trial over 31 days in an 800-squarefoot independent bookstore in Canada. The bookstore adjoined a café; in addition to books, the store sold greeting cards, espresso drinks, pastries, and loose coffee and tea. Apart from the owners of the bookstore (who administered the trial), the staff were not made aware of the trial, nor were the customers.
The trial used a between-subjects design in which the 31 experimental days were assigned one of two conditions: chocolate scent dispersion (treatment) and no scent (control). Condition was determined by Bernoulli (coin flip) random assignment with probability .5 independently across days. Chocolate scent was dispersed for a full day on randomly assigned treatment days (from 9:30 a.m. until 5:30 p.m. on weekdays, 9:30 a.m. until 5:00 p.m. on Saturdays, and 11 a.m. until 5:00 p.m. on Sundays), and was not dispersed on control days.
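The assignment mechanism is simple to express in code; this sketch is purely illustrative (the generator and seed are our choices, not the study's).

```python
import numpy as np

# Bernoulli (coin flip) assignment of the 31 experimental days with
# probability .5, independent across days; 1 = chocolate scent day.
rng = np.random.default_rng(20150101)   # illustrative seed
treatment_days = rng.binomial(1, 0.5, size=31)
```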
The aroma was dispersed by two methods, one at each end of the bookstore. Chocolate essential oil (Theobroma cacao 100 % Pure Extract) was obtained from Ananda Aromatherapy in Boulder, CO. Near the entrance of the bookstore, an electric scent diffuser designed for continuous scent release was used to warm and disperse the essential oil throughout operating hours, diffusing approximately .25-.30 ml of the liquid scent over the course of a treatment day. To intensify the treatment and ensure that a chocolate scent was present through the entire store, melted dark chocolate was maintained over a low heat source in an exposed metal pan at the other end of the bookstore. Both the scent diffuser and the pan-diffused chocolate were out of the customers' line of vision. No other alterations to the store atmosphere, personnel, or layout were made during the course of the 31 experimental days, nor were any special promotions run during the experiment.
The primary goal of the present study was to test whether Doucé et al.'s finding extended to a setting in which a café area adjoined the bookstore. In addition to this variation in the environment, which is the main focus of the study, two differences between the original study and this replication and extension should be noted. First, the setting in our study was an independent bookstore rather than a chain, with a total size of around one-third the area of the chain bookstore in the original study. Our replication study is based on three times as many observations (experimental days) as the original study, but the volume of traffic in our site is smaller: an average of 21 customers per day during the experimental period, as opposed to roughly 100 customers per day in the original study. Second, for cost considerations, the independent bookstore owner chose to use pan-diffused chocolate as a secondary scent source rather than purchasing a second electric scent diffuser.
Sales data were recorded in three categories: book sales, café sales (pastries, coffee, and tea), and bulk sales (primarily coffee beans, but also loose tea and spices). Approval for obtaining sales records was granted by the bookstore and authorized by the Yale University Institutional Review Board. Full replication data and code are available at the Yale Institution for Social and Policy Studies data archive (http://isps.yale.edu/research/data).
Results and discussion
A manipulation check modeled after Doucé et al. was conducted to evaluate four aspects of the scent: spontaneous detection, prompted detection, spontaneous identification, and recognition of the scent. Fifteen customers were asked to respond to questions about the store atmosphere. To assess spontaneous detection, customers were asked whether they noticed something special in the store atmosphere, with responses coded as 1 if the customer mentioned an ambient scent, 0 otherwise. If the customer did not volunteer that a scent was present, the ambient scent was mentioned and the subject was asked whether, now that it had been mentioned, they could detect a scent. Subjects were then asked if they could identify the scent. Subjects who did not spontaneously identify the scent as chocolate were asked if they recognized the scent as chocolate.
We maintained the intensity of the treatment scent at a stronger level than in the original study to avoid the possibility of the treatment scent being overwhelmed by scents from the café. As noted above, the theory behind the signal function of ambient scent holds that a consciously detected and identified scent should activate one's true associations with the stimulus and thus promote search behavior for related products (Schifferstein and Blok 2002). In Doucé et al., the intensity of the scent was lowered until the researchers obtained a sub-sample of surveyed customers in which no respondents spontaneously detected a scent, but all recognized the scent as chocolate once it was mentioned as such (the size of this sub-sample is not reported in Doucé et al.). Our objective was to maintain a detectable ambient scent; we did not seek to eliminate spontaneous detection of the chocolate scent.
In our sample of 15 customers, six spontaneously detected a scent. Once prompted to consciously attend to scent, all but one of the respondents were able to discern the presence of an ambient scent. Six respondents spontaneously identified that ambient scent as chocolate, and another five reported recognizing the scent as chocolate once the enumerator identified the scent as such. The total proportion of our sample reporting a detectable ambient scent was .93 (95 % CI: .81, 1.06). One limitation of the small sample size in our manipulation check is that the confidence intervals for our estimates are relatively wide. However, since the lower bound of our estimate indicates that at least 80 % of patrons would discern the presence of an ambient scent, we remain confident that the scent was sufficiently detectable.
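The reported proportion and interval are consistent with a normal-approximation confidence interval for 14 detections out of 15, as this quick check (our reconstruction, not code from the study) shows; such an interval can exceed 1 near the boundary, which is why the upper bound is 1.06.

```python
import math

p_hat = 14 / 15                                 # ≈ .93
se = math.sqrt(p_hat * (1 - p_hat) / 15)
lo, hi = p_hat - 1.96 * se, p_hat + 1.96 * se   # ≈ (.81, 1.06)
```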
We present two regression-based analyses of the trial. The first is a simple ordinary least squares regression of our four outcomes: total sales, bulk sales, café sales, and book sales in Canadian dollars (CAD). This strategy is logically equivalent to taking a simple mean difference between treatment and control. To improve efficiency, the second strategy uses ordinary least squares adjusting for a mean-centered linear time trend as well as mean-centered fixed effects for the day of week, following Lin (2013). In all cases, robust standard errors are used to estimate standard errors, with p values computed under a normal approximation.
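Both strategies are straightforward to express with statsmodels; in this sketch the column names are illustrative placeholders (not those of the archived replication files), HC2 stands in for whichever robust variance estimator was used, and the treatment-by-covariate interactions follow Lin's (2013) fully interacted estimator.

```python
import pandas as pd
import statsmodels.api as sm

def treatment_effect(df, outcome, adjusted=False):
    """df: one row per experimental day with columns 'treat' (0/1),
    'day' (1..31), 'dow' (day-of-week label) and the sales outcomes."""
    X = df[["treat"]].astype(float)
    if adjusted:
        # Lin (2013): mean-centered covariates plus treat x covariate terms
        cov = pd.get_dummies(df["dow"], drop_first=True).astype(float)
        cov["day"] = df["day"].astype(float)
        cov = cov - cov.mean()
        inter = cov.mul(df["treat"].astype(float), axis=0).add_suffix("_x_treat")
        X = pd.concat([X, cov, inter], axis=1)
    fit = sm.OLS(df[outcome], sm.add_constant(X)).fit(cov_type="HC2")
    return fit.params["treat"], fit.bse["treat"]
```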
We present the complete raw results of the trial in Fig. 1. This figure shows the movement of sales over time and the relationship between sales and the randomized administration of the chocolate scent. Gray bars indicate the randomly assigned treatment days. The markers show sales per day, with sales type indicated by the marker symbols: circles show total sales, Xs show café sales, diamonds show book sales, and triangles show bulk sales.
Visually, there is little evidence to suggest that the chocolate scent has any effect on outcomes.
In Table 1, we present our estimates of the control means and average treatment effects of the chocolate scent, as computed with ordinary least squares. The direction of the treatment effect on total sales was positive but not statistically significant. None of our estimates of treatment effects are statistically significant at the two-tailed p < .10 level. Our findings are not substantively altered by adjusting for time trends.
It is possible that when the consumer's choice set includes food items such as pastries and coffee beans, these food items should be considered a product-type thematically congruent with a chocolate scent, and books a product-type thematically incongruent with a chocolate scent. In this case, countervailing effects of the sort Schifferstein and Blok test for and Doucé et al. suggest as an implication of their findings may be present, increasing the likelihood of purchasing a scent-congruent product type (food items) over a scent-incongruent product type (books). More precisely, a café on the premises offers consumers an opportunity to purchase a product in the same domain as the appetizing scent (Li 2008). 2 The presence of within-domain products from a café area (food items) could result in devaluation of the primary but out-of-domain retail products offered by the bookstore (books) (Brendl et al. 2003).
To test for such countervailing effects, we examine café sales and bulk sales relative to book sales. In all three outcome sub-categories (café sales, bulk sales, and book sales), the ambient chocolate scent treatment produced null results, providing neither evidence that a chocolate scent increased sales of a within-domain product (food items), nor evidence that the chocolate scent induced a preference for within-domain products over out-of-domain products (books). The direction of the treatment effect on café sales was slightly negative, though not statistically significant (average estimated decrease of 8.9 CAD from a control average of 336.5 CAD, two-tailed p = .59), and the direction of treatment on bulk coffee bean/loose tea/spice sales was slightly positive, though not statistically significant (average estimated increase of 23.2 CAD from a control average of 71.2 CAD, two-tailed p = .29).
2 Following Li's definition of "domain", within-domain products are those that can satisfy the specific physiological need targeted by the appetitive stimulus.
Conclusions
The aim of this study was to assess the generalizability of an effect of chocolate scent on product sales in a bookstore. With a slight alteration to the experimental setting-a bookstore with an adjoining café area-we find no effect of an ambient chocolate scent on sales. We find no effect within subset categories of sales, nor evidence of a countervailing effect on purchases of within-domain and out-of-domain products.
The lack of a detectable effect in our study could result from a number of different root causes that are inseparable without further testing. Our finding could indicate that the association between chocolate scent and increased book sales is spurious. On the other hand, these null results could indicate that book sales in the baseline condition have already been boosted by scents from the café area, rendering the chocolate scent treatment ineffectual in a setting where pleasant food-related scents naturally occur. In addition to the primary variation of interest in this extension-the presence of a café area adjoining the bookstore-a number of secondary features that differed between the two studies could account for the lack of a detectable effect in our study. For example, perhaps the effect of ambient scent on book sales is limited to larger, chain bookstores and does not affect sales in small, independent bookstores; or perhaps the effect is brought on only by atomized diffusion of a liquid scent and is diminished by the presence of real melted chocolate. Finally, it is possible that the effect of chocolate scents was small, and that our study is not well powered enough to detect it. Further interventions, in varied settings, could contribute to a meta-analytic understanding of the effects of chocolate scents on product sales.
Our study illustrates the value of examining the generalizability of experimental findings. These results do not preclude the existence of an effect of chocolate scent on purchasing behavior, but rather suggest the need for additional study in varied contexts.
Table 1 Estimated effects of chocolate scent on bookstore sales
Estimates of control means and average treatment effects of chocolate scent on total sales, bulk coffee bean/loose tea/spice sales, café sales, and book sales in Canadian dollars. All estimates computed using ordinary least squares regression with robust standard errors. Unadjusted estimates computed without covariates. Adjusted estimates computed using a linear control for the day of experiment and fixed effects for the day of week, all mean-centered.

Treatment effect: 20.4 (SE = 32.0) | −11.6 (SE = 32.5)
The relationship between subjectivity in managerial performance evaluation and the three dimensions of justice perception
This paper examines the relationship between subjectivity in performance evaluation and the three dimensions of justice perceptions in an emerging economy; prior research on this topic has primarily focused solely on the advanced capitalist economies of Western nations. The paper also aims to expand on existing research by focusing on the role of interactional justice perceptions in relation to subjective evaluation (Byrne et al. in Hum Resour Manag J 22(2):129–147; Folger and Cropanzano, in Organizational justice and human resource management, Sage, Thousand Oaks, 1998). Results from a survey of 160 middle managers in Vietnam indicate that subjective evaluation is associated predominantly with negative effects. We found that, in an emerging economy like that of Vietnam, subjective evaluation reduces interactional justice perception, which in turn decreases the perception of procedural and distributive justice. The mediating effects suggest that the reason subjective evaluation influences employee procedural/distributive justice perceptions lies in the interactional justice perceived from supervisors. This research clarifies the effects of subjective evaluation on the dimensions of justice perception and contributes to the literature on performance evaluation and organizational justice in a non-Western context. It also highlights the importance of respect and communication for fairness perception in both theory and practice.
Introduction
Fairness or justice perception is not only a pillar of a healthy organizational culture but also essential for employee well-being (Ashkanasy, 2011). As such, it is one of the prevailing moral standards (Whiteside & Barclay, 2016) and core elements of ethical leadership (Brown et al., 2005; Xu et al., 2016). Prior literature states that a control perceived as fair or enabling can positively affect individual behaviours and organizational outcomes (Mahama & Cheng, 2013). Specifically, positive justice perception can promote organizational commitment (Lau & Moser, 2008) and job satisfaction (Viswesvaran & Ones, 2002), and enhance performance (Zainuddin & Isa, 2019). However, low levels of justice perception are associated with negative outcomes such as stress, retaliatory intentions or disruptive behaviour (Lau & Oger, 2012; Silva & Caetano, 2014; Virtanen & Elovainio, 2018). Justice in organizational settings typically includes three forms: distributive justice, the fairness of employee outcomes; procedural justice, the fairness of the procedures used to make the outcome decisions; and interactional justice, the fairness of interpersonal treatment (Choon & Embi, 2012; Colquitt et al., 2001).
Subjective judgement, which is a component of performance evaluation practices, is one of the most crucial sources of organizational justice perceptions (Folger & Konovsky, 1989); at the same time, it can entail ethical issues. On the one hand, supervisors have an ethical and legal obligation to give accurate assessments of their subordinates' performance (Sherman & Bohlander, 1988). On the other hand, external factors can arise and render performance evaluation a not entirely rational process (Gomez-Mejia et al., 2007). For example, supervisors' assessments can be lenient or untruthful in order to avoid conflicts, limit complaints, or win the favour of some employees (Bol, 2011). Subjective evaluation brings with it the dilemma that the judgement of superiors is not always aligned with subordinates' performance and contributions, which consequently affects the subordinates' perceived justice. Given the important role of justice perceptions in determining employee behaviour and organizational performance, the management accounting literature has dedicated increased attention to the effectiveness of subjectivity in performance evaluation and its relation to employee perceived justice (e.g. Bellavance et al., 2013;Hartmann et al., 2010;Voußem et al., 2016).
Despite the growing empirical research on subjectivity in performance evaluation and justice perceptions, we know little about the issue in non-Western settings. Researchers have revealed that the effectiveness of performance evaluation practices may differ across cultures (Brockner et al., 2001; Chang & Hahn, 2006; Lam et al., 2002; Stammerjohan et al., 2015; Wu & Chaturvedi, 2009). There could also be profound differences in how people of different national origins react to or perceive the justice of the same sets of controls (Chow et al., 2001; Kim & Leung, 2007), even if justice is considered universally important (Greenberg, 2001). Hence, this study aims to fill this research gap by examining performance evaluation systems and justice perceptions in a non-Western emerging context. Vietnam was chosen as our setting because it has typical cultural characteristics of many other emerging economies, such as a high level of power distance and collectivism (Hofstede et al., 2010; Power et al., 2010; Walumbwa & Lawler, 2003). The results of the study, therefore, can be applied to understanding the phenomena of management practices and justice perceptions in many other emerging markets, especially in Asia.
In the study, we examine the relationship between subjectivity in performance evaluation and justice perceptions in the Vietnamese context, as an example of emerging economies. We also aim to extend previous work by developing a model in which interactional justice is a mediator between subjective evaluation and the other two justice dimensions.
We expect that employees tend to attribute any (in)justice related to subjective evaluation primarily to their supervisors, who are directly responsible for assessing them. Interactional justice (rather than procedural or distributive justice) is the most relevant to subjective evaluation because interactional justice concerns how individuals treat and communicate with one another in the workplace (Bies & Moag, 1986). Our model is consistent with Colquitt's suggestions (2001) that interactional justice is more related to a supervisor's evaluation than procedural and distributive justice perceptions. It also adopts the arguments from Cohen-Charash and Spector (2001) that interactional justice is an antecedent to distributive and procedural justice. The model is tested from the perspective of middle managers in their roles as subordinates, as it is important and challenging to ensure a high-quality managerial team (Sokol & Oresick, 1986).
Our empirical analysis shows that the negative effects of subjective evaluation on employees' perceived justice are generally stronger than the positive ones, which is consistent with our predictions. Perception of interactional justice is found to be an antecedent of the other forms of justice perceptions: distributive justice and procedural justice. It significantly mediates the relationship between subjective evaluation and perceived procedural/distributive justice.
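As a schematic of the kind of mediation test behind this result, an indirect effect a x b with a bootstrap confidence interval can be computed as below; the variable names stand in for the survey scales, and this generic regression-based sketch is not the authors' exact analysis.

```python
import numpy as np
import statsmodels.api as sm

def indirect_effect(x, m, y, n_boot=5000, seed=0):
    """x: subjective evaluation; m: interactional justice (mediator);
    y: procedural or distributive justice. Returns the bootstrap mean
    and 95% percentile CI of the indirect effect a*b."""
    rng = np.random.default_rng(seed)
    n, est = len(x), np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, n)
        a = sm.OLS(m[idx], sm.add_constant(x[idx])).fit().params[1]
        b = sm.OLS(y[idx], sm.add_constant(
                np.column_stack([x[idx], m[idx]]))).fit().params[2]
        est[i] = a * b
    return est.mean(), tuple(np.percentile(est, [2.5, 97.5]))
```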
Our research provides the following contributions to the management accounting literature. First, it contributes to the growing literature on the effects of subjective evaluation on employees' justice perceptions (e.g. Bol, 2011;Hartmann et al., 2010). To the best of our knowledge, our research is the first broad assessment of the effects of subjective evaluation on all three aspects of justice perceptions. While prior literature has mainly examined the effects of subjective evaluation on perceived justice related to evaluation procedures and outcomes, we take a different perspective and focus on the effects on justice perceptions regarding supervisor treatment.
Furthermore, previous research suggests that interactional justice is an important form of justice because it is strongly associated with supervisor-related outcomes such as leader-member exchange, motivation, and commitment (Gupta & Kumar, 2013;Masterson et al., 2000). Libby (1999) emphasizes that communication and explanations, as components of interactional justice perceptions, have a significant effect on employee performance. Nevertheless, there is still limited research on this form of justice. Our paper answers the call for more research into interactional justice perceptions (see Cohen-Charash & Spector, 2001;Colquitt et al., 2001). We highlight the importance of interactional justice perceptions that connect subjective performance evaluation with the perceived justice of procedures and outcome distribution. Our results also indicate the importance of studying organizational justice concepts as three distinct but related dimensions.
Finally, the study provides insights into the phenomena of performance evaluation practices and justice perceptions in a different economy to those typically described as 'advanced capitalist economies'. It contributes to the generalizability of organizational justice theories (Greenberg & Colquitt, 2013). This meets calls for more research into accounting in emerging economies and the convergence of accounting practices worldwide (e.g. Ezzamel & Xiao, 2011).
Given the role of interactional justice perceptions, these findings have practical implications for applying discretion so that evaluated employees are treated with respect and are communicated with effectively, in the pursuit of positive perceived fairness and other desired outcomes. In other words, subjectivity in performance evaluation should be considered part of organizational social controls, to show consideration and concerns towards employees. For instance, a formal system could be implemented to provide and monitor feedback (such as formal appraisals, performance reviews, routine and periodic formal reporting) on a detailed and frequent basis (Libby, 1999;Pitkänen & Lukka, 2011). Employees should also receive explanations about their performance ratings and outcomes (both rewards and punishments). Furthermore, it is vital to have management teams that can communicate competently and provide feedback effectively (Pitkänen & Lukka, 2011).
The remainder of the paper is structured as follows. Section 2 presents our study's theoretical background and Sect. 3 develops our hypotheses about the relationship between subjectivity in performance evaluation and the three dimensions of justice perception. In Sect. 4, we describe the design of the empirical study and present our results in Sect. 5. Finally, we discuss the findings and their implications in Sect. 6.
Subjectivity in performance evaluation
According to Baker et al. (1994), a good performance metric should be accurate, informative, and timely, and should not expose those evaluated to undue risk. Objective performance measures linked to quantitative and verifiable targets can hardly meet all these criteria because they can be too aggregate, narrow, and retrospective (Baker et al., 1994; Bol, 2008; Ittner et al., 2003; Prendergast & Topel, 1993). Furthermore, objective measures are generally unavailable for certain functions (such as HR, accounting, or legal) and/or frequently changing environments (Frederiksen et al., 2017). Thus, in their role as the subject of an evaluation, middle managers might put more effort into job aspects that are more easily measured and well compensated, and avoid other tasks for which they are not rewarded (Bol, 2008; van der Kolk & Kaufmann, 2018). In addition, managerial performance and target achievements can be affected by uncontrollable and unforeseeable events, such as economic factors or competitors' actions. Using purely objective measurements can introduce noise and reduce the effectiveness of the performance assessments (Bol, 2008). It can also reduce employees' effort if the outcomes they receive are only linked to the target achievements. In sum, objective measures are inadequate and insufficient for measuring middle managers' multifaceted tasks and their contribution to the value of the organization (Baker et al., 1994; Bol, 2008; Ittner et al., 2003; Prendergast & Topel, 1993).
Subjective components have been described as a valuable complement to objective measures in a performance evaluation system (Gibbs et al., 2004;Golman & Bhatia, 2012). Subjectivity refers to the discretion that superiors display in evaluating their subordinates' performance and determining their salaries and bonuses (Chow et al. 2006;Moers, 2005). It can be derived from their judgement of the subordinates' qualitative performance, such as knowledge-sharing and communication skills (Bellavance et al., 2013;Chow et al., 2006;Moers, 2005). Subjectivity can take the form of ex-post adjustment in the weighting of objective performance measurements. It can also relate to flexibility in adjusting evaluations and bonuses based on factors other than pre-specified criteria (Ittner et al., 2003;Woods, 2012). These forms of adjustments normally involve considering uncontrollable events and making adjustments to ex-ante targets to filter out the effects of those events and modify performance targets (Höppe & Moers, 2011;Murphy & Oyer, 2001). Subjectivity in performance evaluation, in any form, is heavily influenced by the personal perceptions, beliefs, and experiences of the person doing the evaluation (Choon & Embi, 2012).
Previous studies have taken different approaches to the concept of subjectivity in performance evaluation. One research stream investigates subjectivity in determining salaries and rewards (Gibbs et al. 2004;Voußem et al., 2016), while another focuses on performance evaluation as a process distinct from bonus systems (Bol & Smith, 2011;Van Rinsum & Verbeeten, 2012). An additional stream of research examines supervisor discretion in evaluating performance and translating the observed performance into rewards (Bol, 2011;Höppe & Moers, 2011). We follow this last approach because it is necessary to consider both the evaluation and reward systems to understand their impacts on all three aspects of justice perceptions. Our use of the concept refers to any subjective judgement by superiors in evaluating middle managers' performance and determining their bonuses.
Subjective evaluation is informative with regard to the qualitative aspects of employees' performance (Baiman & Rajan, 1995;Baker et al., 1994). It can provide a broader view, beyond measuring just a few narrow aspects of performance (Lau & Sholihin, 2005). Since the subjective components are often non-financial, they can be relevant to both short-and long-term objectives. They are not only outcomebased measurements derived from past efforts, but also drive future performance (Lau & Sholihin, 2005). Ex-post adjustments as forms of subjectivity can filter out the effects of uncontrollable events and enhance individual performance (Kelly et al., 2015). As a result, subjectivity in performance evaluation can benefit firms by reducing incentive costs and risks (Gibbs et al., 2004;Ittner et al., 2003;Prendergast & Topel, 1993).
Perceptions of organizational justice
Organizational justice relates to perceptions of fairness in a workplace, which leads individuals to conclude that they are being treated fairly or unfairly (Folger & Cropanzano, 1998;Fortin, 2008). It is a crucial issue in both theory and practice, because of its influence on organizational outcomes and the attitudes of organization members (Cropanzano & Greenberg, 1997;Hopwood, 1972;Lind & Tyler, 1988). Positive perceptions of justice encourage cooperation, improve employee satisfaction and performance, and promote the acceptance of organizational change (Lau & Oger, 2012;Zainuddin & Isa, 2019), whereas the absence of justice can lead to negative effects on employees' well-being and various forms of disruptive behaviour (Silva & Caetano, 2014). As with most papers in this research stream, the current paper uses the terms 'justice' and 'fairness' interchangeably.
The organizational justice theory introduced by Folger and Cropanzano (1998) generally classifies justice into three main dimensions: procedural justice, distributive justice, and interactional justice. Procedural justice is the fairness of the rules and processes employed to decide outcome allocations (Cropanzano & Greenberg, 1997;Folger & Konovsky, 1989). It is 'the judgement that procedures and social processes are fair' (Lind & Tyler, 1988). Distributive justice is the fairness of those outcome allocations themselves (Cropanzano & Greenberg, 1997;Folger & Konovsky, 1989;Greenberg, 1987). It is usually based on the principle of equity between individual benefits and their proportional inputs (e.g. education, intelligence and experience) (Leventhal, 1980;Lindquist, 1995). The third type of justice, interactional justice, relates to the quality of communication between superiors and subordinates (Bies & Moag, 1986). This involves the extent to which superiors treat their subordinates with respect and provide candid, sufficient explanations for their evaluation decisions (Simons & Roberson, 2003). The three dimensions of organizational justice have positive and varying roles in employees' behaviour, satisfaction, and performance (Maaniemi & Hakonen, 2008).
It should be noted that some previous studies consider interactional justice to be part of procedural justice, while many others maintain the distinction between these concepts (Bies, 2005; Colquitt, 2001). Interactional justice distinguishes itself from procedural justice by being about the communication and explanations provided to employees during evaluation processes, instead of about the procedures themselves. Conceptual reviews and meta-analytic evidence from Colquitt and Greenberg (2003) also separate these concepts because of their different antecedents and consequences. Interactional justice perception is more strongly associated with supervisor-related outcomes such as leader-member exchange and commitment, while procedural justice perception is more strongly related to organization-relevant outcomes such as organizational commitment (Masterson et al., 2000). Prior studies show that those who perceive high interactional justice are more likely to show high motivation and put extra effort into their work (Gupta & Kumar, 2013). In this study, we examine interactional justice as a separate form of justice to better understand different aspects of justice perceptions.
National culture
National culture is defined as 'the collective programming of the mind which distinguishes the members of one group or society from those of another' (Hofstede, 1980, p. 25), and is approached in different ways by researchers. In the study, we focus on two dimensions: power distance and individualism/collectivism (Hofstede, 1980), since they are considered the most relevant to employees' reactions to various controls in organizations. They are the most common dimensions used by researchers in considering cultural impacts on management and accounting practices (Chow et al., 2001;Cohen & Avrahami, 2006). Because of their frequent appearance in accounting research, we do not present their detailed descriptions in the current study.
Emerging markets are typically different from advanced markets in terms of cultural values. Individuals in emerging countries generally show stronger collectivism and power distance, while advanced markets are characterised by high levels of individualism and low power distance (Hofstede et al., 2010;Power et al., 2010;Walumbwa & Lawler, 2003). Vietnam was chosen as representative of emerging economies because Vietnamese culture, characterised by high collectivism and power distance, reflects the typical cultural values and orientation of many emerging markets (Du & Choi, 2010). In Vietnam, management and performance evaluation practices emphasise harmony in social relationships, are characterised by seniority preference and limited feedback (Hempel, 2001;Shen, 2004;Warner, 2010).
Empirical studies have found similarities and differences in the effectiveness of management practices and employee attitudes across cultures (e.g. Chang & Hahn, 2006;Stammerjohan et al., 2015;Wu & Chaturvedi, 2009). For instance, Chang and Hahn (2006) indicate that certain types of control practices affect employees from Korea and the US in a consistent manner, regardless of their cultural differences. On the other hand, some research indicates that some management practices may be only effective in certain national settings, but ineffective or even dysfunctional in others (Chow et al., 2001). Ng et al. (2011) examine the impacts of culture on how employees react to rating biases (rating leniency and halo), and find that employees with high power distance values are more susceptible to rating bias than those with low values.
As argued by many researchers (Folger & Skarlicki, 2008;Greenberg, 2001), justice is universally important in interpersonal relations. This is because beyond sociocultural contexts, a positive justice perception has crucial impacts on organizational outcomes and employee behaviour (Lind & Tyler, 1988). Lam et al. (2002), Leung et al. (2001) and Morris and Leung (2000) find that the positive effects of perceived justice on employee outcomes (performance, absenteeism, and job satisfaction) are similar in all cultural contexts. At the same time, other research shows differences across cultures in certain aspects of justice perceptions. For example, the degree to which justice perceptions influence individual outcomes may differ in different cultures. Erdogan and Liden (2006) find that the relation between interactional justice and leader-member exchange (LMX) is weaker for individuals high in collectivism than those low in collectivism. Research by Kim and Leung (2007) indicates that organizational injustice has a stronger negative impact on job satisfaction and intention to leave in America than in China, Korea, and Japan. They also find that distributive justice matters more in the formation of overall fairness for Chinese and Koreans than for Americans and Japanese. Interactional justice, by contrast, shapes overall fairness more strongly for Americans and Japanese than for Chinese and Koreans.
Hypothesis development
Based on the existing literature, we propose hypotheses for testing the relationships between subjectivity in performance evaluations and the three dimensions of organizational justice perception.
Subjectivity in performance evaluation and interactional justice perceptions
Subjective evaluation usually involves the adoption of broad and varied non-financial performance measures. It enables assessment of job aspects that are value-adding but cannot be measured in an objective manner, such as work attitudes, communication and knowledge-sharing (Voußem et al., 2016). For middle managers as evaluatees, subjective evaluation can help to assess their performance on management tasks, which relate to leading their teams and increasing the quality of their teams' work (Sherf, 2016). A supervisor who includes such aspects in his/her assessments is likely to increase subordinates' sense of interactional justice, because their effort on these tasks is recognised and appreciated.
In addition, subjectivity in performance evaluation, which includes flexibility based on factors other than performance measures specified ex-ante, can enable evaluators to correct noisy objective targets and better capture employee effort. Specifically, they can neutralise the effects of unforeseeable events that are not under employees' control but influence their performance and rewards (Bol & Smith, 2011;Kelly et al., 2015). Supervisors can show benevolence and concern for their subordinates by being considerate in giving performance ratings (Lau & Tan, 2006). Applying subjective evaluation can also provide supervisors and subordinates with opportunities for open communication, giving feedback and explanations, because only the subordinates are fully aware of their circumstances. Previous studies show that favourable subjective allocations can signal support and benevolent intentions (Voußem et al., 2016). This suggests that the adoption of subjective evaluation can be positively related to perceived interactional justice.
However, subjective performance evaluation, being based mainly on human judgement, can be biased in several ways. First, when facing high workloads with scarce personal resources, evaluators may prioritise core organizational targets and the completion of incentivised tasks (Schmidt & DeShon, 2007). Other tasks, such as subordinate evaluation and justice, are neglected, as they are considered less important to the success of the employees and their organizations (Sherf, 2016). Supervisors tend not to put enough effort into giving feedback, and not to provide sufficient explanations or clarifications of misunderstandings to their subordinates regarding their evaluations.
Second, supervisors' assessments can be unconsciously influenced by cognitive bias, meaning that they may treat their subordinates differently from one another. According to attribution theory (Feldman, 1981), a supervisor makes attributions about his/her subordinates and assigns them to categories. This categorisation produces a bias in performance evaluation because some information is irrelevant, yet salient and influences subjective assessments. In addition, subjective assessments can be influenced by favouritism and likeability (Bol & Smith, 2011;Fisher et al., 2005;Moers, 2005). A supervisor may have more positive/negative sentiments towards more/less charismatic subordinates (Scott et al., 2007). Positive sentiments lead to respectful and courteous treatment and appropriate feedback, while negative sentiments can cause prejudicial communication. When employees receive less equal and respectful treatment from their supervisors than their co-workers, they are likely to feel that their supervisors do not respect or act benevolently enough towards them.
Finally, the opportunities for communication and feedback that come with subjective evaluation do not always have positive effects on perceived justice. The feedback process can be perceived as another task to be done, which leads to anxiety for many employees (Baker et al., 2013). It may also focus on how one is not performing well or not reaching predetermined goals (Selden & Sowa, 2011). Overall, the feedback process can lead to undesirable outcomes such as one-sided conversations, misunderstanding, greater stress levels, and competition, which subsequently reduces the perceived interactional justice (DeGregorio & Fisher, 1988;Gravina & Siers, 2011;Mulder, 2013).
The above discussion suggests that employees are only likely to have favourable interactional justice perceptions if supervisors show truthfulness, concern, and goodwill in their assessments. This happens when supervisors are considerate of subordinates' needs and interests and act benevolently towards them. On the contrary, if subjective evaluation comes with bias and favouritism, it is a great source of interactional unfairness.
Given the cultural values of Vietnam, we predict that a greater level of subjective evaluation may be associated with lower perceived interactional justice. Because power distance is high, with a great degree of inequality between supervisors and subordinates, supervisors are generally the decision-makers and subordinates are not expected to offer their opinions (Kirkman et al., 2006). In such a context, the feedback process, if applicable, is most likely considered a one-sided, critical conversation rather than an opportunity for information exchange. Hence, subjective assessments are given with little information-sharing or explanation, which decreases the chance that employees feel respected and heard. A lack of explanations and communication may increase the level of information asymmetry, opening the door even wider for favouritism and bias. The collectivistic culture may also amplify the negative effects of biased ratings on perceived interactional justice, because individuals who seek common interests and collective goals tend to build greater resentment towards their supervisors' misjudgment and disrespect. Correspondingly, we form the following hypothesis for the emerging context: H1 Subjectivity in performance evaluation practices is negatively related to perceptions of interactional justice.
Interactional justice and procedural justice
People develop their perceptions of justice based on the sequence of information they receive (Van den Bos et al., 1997). Because evaluation procedures are not always transparent and straightforward, employees are unable to perceive those procedures directly. Meanwhile, they can directly observe the communication and treatment from their supervisor, who is responsible for the performance assessment.
They also tend to routinely identify the person(s) responsible for injustice if it happens (Folger & Cropanzano, 1998; Liu et al., 2013). Put differently, people perceive interactional justice on a daily basis in virtually any supervisor-subordinate encounter, including communication during the evaluation process (Bies, 2005). Hence, we predict that perceptions of interactional justice come before perceived procedural justice. Furthermore, we argue that how employees think about the procedures is influenced by how supervisors enact resource allocation procedures and treat the employees (Liu et al., 2013). Employees perceive positive interactional justice when their supervisors use communication as an opportunity for information sharing, reasonable explanations, and clarification of doubts and misunderstandings (Lau & Tan, 2006). According to expectancy theory, adequate communication can enhance employees' certainty about the mission and goals to be achieved (Rosen et al., 2006). In addition, employees who are given a chance to communicate their views to their supervisors tend to believe that they can influence the process. As a result, the communication will make the evaluation process appear more transparent, thus increasing perceived accuracy and reducing potential biases (Hartmann & Slapničar, 2012). In addition, respectful treatment can enhance the sense of self-worth and group standing, thereby promoting employees' perceived procedural justice (Tyler, 1989; Tyler & Lind, 1992). Overall, and consistent with Cohen-Charash and Spector (2001), we argue that interactional justice is an antecedent to procedural justice.
Prior research provides evidence about the positive role of interactional justice perceptions across cultures (Leung et al., 2001; Morris & Leung, 2000). Kim and Leung (2007) find that interactional justice positively shapes overall fairness perceptions across their sample of Americans, Japanese, Chinese and Koreans. A lack of interactional justice perceptions, by contrast, is linked to workplace deviant behaviours, regardless of different levels of power distance. Accordingly, we expect to find a positive relationship between interactional justice and procedural justice perceptions in the context of emerging economies, in the same manner as in Western cultures. Positive perceived interactional justice is likely to enhance perceived accuracy and foster a favourable perception of procedural justice in emerging economies, and vice versa. We propose the following hypothesis: H2 Interactional justice perception is positively related to procedural justice perception.
Interactional justice and distributive justice
In line with Cohen-Charash and Spector (2001), we argue that interactional justice perception is also an antecedent to distributive justice perception for the following reasons. First, fairness judgements are more strongly influenced by information received at an earlier stage of interaction with the authority figure than by information received subsequently (Van den Bos et al., 1997). Since information about supervisors' treatment is usually available before any rewards are given out, interactional justice perceptions may be perceived earlier than distributive justice perceptions.
Second, perceived interactional justice influences employees' trust in their supervisors, which in turn affects their distributive justice perceptions. If the supervisors show consideration for their subordinates' needs and interests and refrain from exploiting others, the subordinates tend to see the supervisors as being trustworthy (Whitener et al., 1998). The high level of trust allows the employees to have confidence in the supervisors' knowledge and competencies to make better decisions regarding evaluation outcomes (Yang et al., 2009). By contrast, a low level of trust due to low perceived interactional justice might lead to anxiety and suspicion about any outcome decisions the superiors make.
Third, employees who perceive a greater level of interactional justice by having respectful and sufficient communication are more likely to be satisfied with their outcomes. According to expectancy theory, employees who get more feedback can better understand good performance standards and can use the feedback to improve their performance (Rosen et al., 2006). As a result, they can have more favourable outcomes, thus more positive justice perceptions. Even when the outcomes are unfavourable to the employees, the dissatisfaction can be alleviated by receiving adequate feedback and explanations, such as those related to claims of incompetence, budgetary constraints, restrictions due to company policy, and inconsistent company norms (Beugré, 2007;Libby, 1999).
We have discussed in the previous section that the links of justice forms should be similar across cultures. Kim and Leung (2007) describe that interactional justice perception is positively related to overall perceived justice in multiple countries of diverse cultures. Consistent with this, Leung et al. (2001) indicate positive effects of perceived interactional justice on decision outcomes in China. Hence, we expect to find a similar relationship between interactional justice perceptions and distributive justice perceptions across contexts.
From the above discussion, we develop the following hypothesis: H3 Interactional justice perception is positively related to distributive justice perception.
Subjectivity in performance evaluation and procedural justice perceptions: mediated by interactional justice perceptions
Thus far, we have hypothesised that subjectivity in performance evaluation influences employee interactional justice perceptions. We also suggested that interactional justice perceptions come before procedural justice perceptions. Therefore, we expect that the relationship between subjective evaluation and procedural justice perceptions is indirect through interactional justice perceptions. Subjectivity in performance evaluation is enacted and influenced by the personal judgement of a supervisor. Hence, it should be the most relevant to interactional justice which concerns the fairness of the treatment and communication towards employees (rather than procedural or distributive justice). We expect that employees tend to attribute the fairness of subjective evaluation primarily to their supervisors, who have significant decision-making roles in the assessments. If subjective judgements can enable feedback and show the supervisors' consideration and benevolence, the employees are likely to feel respected and heard, and perceive positive interactional justice perceptions. This, in turn, makes the performance evaluation procedures appear more transparent and less biased, which increases perceived procedural justice. On the contrary, if the supervisors' assessments are influenced by bias and favouritism, the employees tend to attribute the injustice to the supervisors responsible (Folger & Cropanzano, 1998). A biased evaluation can signal disrespect and interpersonal malevolence from the supervisors, which degrade employees' sense of interactional justice. Consequently, decreased interactional justice is related to decreased procedural justice perception, as the subordinates become less convinced that performance evaluation procedures are applied consistently and appropriately. Our prediction is consistent with suggestions by Colquitt (2001) and Moorman (1991) that a supervisor's evaluation may be even more related to perceived interactional justice than to the other forms of justice.
In the case of Vietnam and other emerging economies, as described in Sect. 3.1, we predict interactional justice perceptions to be lower when there is greater subjectivity in evaluation, which in turn brings about decreased procedural justice perceptions. Accordingly, we propose the following hypothesis: H4 Interactional justice perception mediates the relationship between subjectivity in performance evaluation and procedural justice perceptions. Specifically, a higher level of subjectivity significantly decreases perceived interactional justice, which in turn relates to decreased perceived procedural justice.
Subjectivity in performance evaluation and distributive justice perceptions: mediated by interactional justice perceptions
We have suggested that subjectivity in performance evaluation affects interactional justice perceptions. In addition, interactional justice perception is expected to be an antecedent to distributive justice perception. Thus, we argue that interactional justice could explain the mechanism through which subjective evaluation is related to perceptions of distributive justice. Specifically, if the subjective evaluation shows truthfulness, benevolence, and goodwill from supervisors and enhances employee interactional justice perceptions, the employees tend to see their supervisors as trustworthy (Whitener et al., 1998). This, in turn, allows the employees to be confident of the supervisors' ability to make better decisions related to evaluation outcomes; as a result, they will perceive high levels of distributive justice (Yang et al., 2009). By contrast, if the employees observe biases from their supervisors' treatments and judgements, they are inclined to perceive a low level of interactional justice. That lowers the employees' perceived distributive justice because they become convinced that their outcomes are relatively undervalued compared to their effort and that of their peers (Bol et al., 2016). Given the Vietnamese nationals' high power distance and collectivism, the latter is likely to apply, as discussed earlier in Sect. 3.1. We expect a greater level of subjectivity in performance evaluation to relate to a lower level of perceived interactional justice, which leads to lower distributive justice perceptions. Formally, we hypothesise: H5 Interactional justice perception mediates the relationship between subjectivity in performance evaluation and distributive justice perceptions. Specifically, a higher level of subjectivity negatively affects perceived interactional justice, and in turn relates to decreased perceived distributive justice.
Sample
The sample for this study was obtained from the Vietnam Chamber of Commerce and Industry (VCCI), a national entity that brings together and represents the business community in Vietnam. The VCCI database was selected because it included firms of various sizes and industries. The large size of the database necessitated limiting the survey to three selected cities: Hanoi, Ho Chi Minh City, and Danang, the country's main economic hubs.
Following prior work, we addressed organizations with more than 20 employees, because they were more likely to utilise proper performance evaluation practices and an associated reward system. The respondents to our study were middle-level managers (such as divisional or department managers), whose tasks were usually multi-dimensional and subject to diverse performance evaluation practices, including subjective evaluation.
Survey design and data collection
We conducted a survey to collect data from our targeted participants. Before being launched, the questionnaire was pre-tested to assess the length and the comprehensibility of the English-Vietnamese translation (Dillman & Groves, 2011;Morgan, 1990; Van der Stede et al., 2005). We also assessed the validity of the measurement instruments in the Vietnamese context. For this preliminary stage, we invited academics and practitioners from various business fields to take part, in order to receive comments from various perspectives. The questionnaire was continuously improved and modified during the pre-test process.
The survey instrument was web-based. An initial invitation email containing the relevant link was sent to the targeted participants in June 2017. A cover letter was included to provide a brief introduction to the main purposes of the research. It also gave assurances regarding the confidentiality of the responses, emphasising that only aggregate results would be published and used solely for academic purposes. We sought to make the recipients feel comfortable with taking part in the survey.
As suggested by Dillman and Groves (2011), we sent reminder emails three weeks after the initial email, and a second reminder was sent three weeks after that. In all, 163 managers completed the survey form from the 700 invitations that we sent. Following the removal of three disengaged responses that featured the same answers to almost all the questions, we had a final sample of 160 (for a response rate of 22%).
Subjectivity in performance evaluation (Subj)
We measured subjectivity using a two-item scale adapted from Kruis (2010).
Respondents were asked about the degree to which they thought their supervisors used subjective judgement to evaluate their performance. The respondents also rated the degree to which the supervisors determined salaries and bonuses using their discretion. Responses used a five-point Likert scale from 'not at all' (coded 1) to 'very much' (coded 5). Higher scores indicate higher levels of subjectivity in evaluating a middle manager's performance.
Perceptions of procedural justice of the performance evaluation system (ProcJ)
We assessed perceptions of procedural justice using direct measures, as recommended by Greenberg and Colquitt (2013). This direct approach involved asking respondents explicitly about their justice perceptions, rather than judging from implicit principles or elements of justice. Perceived procedural justice was measured using a three-item Likert scale adapted from Hartmann and Slapničar (2012) and McFarlin and Sweeney (1992). The scale addressed the extent to which the participants trusted the fairness of the evaluation system. Higher scores indicate that middle managers considered the performance evaluation process to be fairer.
Perceptions of the distributive justice of the performance evaluation outcomes (DistrJ)
A three-item instrument developed by Colquitt (2001) and Moorman (1991) was used to measure distributive justice perception. The respondents indicated whether they had received fair salaries and rewards for their effort, experience, and competence. Responses were rated on a five-point Likert scale from 'strongly disagree' (coded 1) to 'strongly agree' (coded 5). Higher scores indicate higher levels of distributive justice perception.
Perceptions of the interactional justice of performance evaluation system (InterJ)
The perception of interactional justice was measured with a five-item instrument developed by Colquitt (2001) and Moorman (1991). The participants addressed the extent to which their supervisors treated them with respect and politeness, and showed concern for their rights as employees. The other three items addressed the communication between participants and their supervisors. Respondents rated the extent to which they were provided with reasonable explanations and feedback about the procedures, and the degree to which their supervisors were frank and honest in their communication. A five-point Likert scale from 'strongly disagree' (coded 1) to 'strongly agree' (coded 5) was used, with higher scores reflecting greater levels of interactional justice perception.
Control variables
We controlled for three demographic variables: age, firm size and type of firm ownership. Age was indicated in years. Firm size was measured by a dummy variable, with '1' for small and medium-sized firms (20-50 employees) and '2' for large firms (more than 50 employees). Ownership type was measured by a dummy variable, with '1' for state-owned enterprises and '2' for other ownership types. Our statistical tests indicated that none of the control variables were significantly associated with our dependent variables.
Descriptive analysis
As recommended by Van der Stede et al. (2005), we tested for non-response bias by comparing early and late respondents in terms of the mean values of various demographic variables: age, gender, work and management experience, number of employees in the organization, and type of ownership. The four main variables (Subj, ProcJ, DistrJ, InterJ) were also compared. The results of the non-response bias test revealed no systematic differences for any of the variables, so there is no risk of non-response bias in the study. Table 1 presents the mean, standard deviation and percentage details for the demographic variables.

We used a covariance-based structural equation model (SEM) to test the hypotheses (Hair et al., 2014; Schumacker & Lomax, 1996). This method is preferable over others (e.g. partial least squares, PLS) because of its ability to estimate multiple and interrelated dependence relationships and deal with inherent errors (Hair et al., 2014, p. 547). It produces better estimates of the population parameters and is considered the best choice for obtaining accurate estimates, even when the sample is small (Chumney, 2013; Goodhue et al., 2012). The quality of the measurement model was assessed by exploratory factor analysis (EFA) and confirmatory factor analysis (CFA) for item reliability and validity. Tables 2 and 3 present the EFA results and the assessment from these tests.
The reliability of the constructs was assessed by Cronbach's alpha values. Reliability is how well 'individual items of a scale measure the same construct and thus are highly inter-correlated' (Hair et al., 2014, p. 123). As shown in Table 2, all these values are greater than 0.7, which is a satisfactory indicator of reliability.
Convergent validity assesses 'the degree to which two measures of the same concept are correlated' (Hair et al., 2014, p. 123). The condition is satisfied when items load significantly on their corresponding latent variable (Sila, 2010). In Table 3, all standardised loading estimates are higher than 0.7 (except for 'InterJ1', whose loading is close to 0.6), which is satisfactory. Convergent validity was also evaluated by the average variance extracted (AVE), which should be higher than 0.5, and the construct reliability (CR), which should be 0.7 or above. Table 2 shows that all CR and AVE values indicate good convergence or internal consistency; thus, the convergent validity is satisfactory.
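To make these reported statistics concrete, the sketch below computes Cronbach's alpha, composite reliability (CR) and average variance extracted (AVE) from the standard formulas (Hair et al., 2014; Fornell & Larcker, 1981). It is only an illustration: the loading values are hypothetical, not those of the study.

import numpy as np

def cronbach_alpha(items):
    # items: (n_respondents, k_items) matrix of Likert scores
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

def composite_reliability(loadings):
    # CR = (sum lambda)^2 / ((sum lambda)^2 + sum(1 - lambda^2))
    lam = np.asarray(loadings, dtype=float)
    return lam.sum() ** 2 / (lam.sum() ** 2 + (1 - lam ** 2).sum())

def average_variance_extracted(loadings):
    # AVE = mean of the squared standardized loadings
    lam = np.asarray(loadings, dtype=float)
    return (lam ** 2).mean()

lam = [0.78, 0.82, 0.74]                 # hypothetical loadings, three-item scale
print(composite_reliability(lam))        # ~0.82, above the 0.7 benchmark
print(average_variance_extracted(lam))   # ~0.61, above the 0.5 benchmark

The Fornell-Larcker check discussed next then compares each construct's AVE against its squared correlations with the other constructs.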
The second form of validity, discriminant validity, is 'the degree to which two conceptually similar concepts are distinct' (Hair et al., 2014, p. 124). Fornell and Larcker (1981) state that the condition of discriminant validity is satisfied when a construct's AVE is greater than all squared pairwise correlations between that construct and the other constructs in the model. Table 4 presents the correlations of the variables, and shows that the square root of the AVE for each variable is greater than the off-diagonal elements. Taken together, these results provide evidence of the convergent and discriminant validity of our scales.
It should be noted from Table 4 that the three types of justice are highly correlated, which may raise issues about discriminant validity. However, the high correlations are quite predictable, because they are sub-categories within organizational justice theory. The analysis has shown strong support for discriminant validity, suggesting that the high correlations among the justice dimensions do not substantially affect our subsequent analysis and conclusions.
Results
As stated earlier, our study proposes that interactional justice perception mediates the relationship between subjective evaluation and procedural justice perception. It also proposes that interactional justice perception is the mediator in the relationship between subjective evaluation and distributive justice perception. We tested the mediation hypotheses following the mediation model of Baron and Kenny (1986). In the first step, the correlation between the independent variable (Subj) and the mediating variable (InterJ) was examined (path a). We also examined the associations between the mediating variable InterJ and the dependent variables (ProcJ and DistrJ) in paths b1 and b2, respectively. The relationships between the independent variable and the dependent variables were also tested (paths c1 and c2). In the second step, we tested paths c1 and c2 again while controlling for the mediating variable. If the correlation between the independent variable and the dependent variable previously found to be significant is no longer statistically significant, this is referred to as full mediation. It implies that the relationship between the independent variable and the dependent variable unfolds through the mediator (Baron & Kenny, 1986). When both indirect and direct effects exist, the mediation effect is partial. The results are shown in Figs. 1 and 2.
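As an illustration of the Baron and Kenny (1986) procedure just described, the following sketch estimates path a, the total effect c1, and the mediated model with OLS regressions via statsmodels. The data are simulated; the variable names mirror the paper's constructs but are otherwise hypothetical.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 160                                      # same sample size as the study
subj = rng.normal(size=n)                    # simulated construct scores
interj = -0.23 * subj + rng.normal(size=n)   # path a (simulated)
procj = 0.80 * interj + rng.normal(size=n)   # path b1 (simulated)

def ols(y, *xs):
    # OLS with an intercept; returns a fitted statsmodels results object
    X = sm.add_constant(np.column_stack(xs))
    return sm.OLS(y, X).fit()

path_a = ols(interj, subj)        # step 1: Subj -> InterJ
path_c1 = ols(procj, subj)        # step 1: total effect Subj -> ProcJ
full = ols(procj, subj, interj)   # step 2: Subj -> ProcJ controlling for InterJ
# Full mediation: the Subj coefficient in `full` loses significance
# while the InterJ coefficient remains significant.
print(full.summary())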
The first phase of the analysis shows statistical significance for all the direct effects (Fig. 1). Subj has a significant negative effect on InterJ (the coefficient of path a is −0.23, p < 0.01). There is also a significant positive correlation between InterJ and ProcJ (the coefficient of path b1 is 0.80, p < 0.01). The direct relationship between Subj and ProcJ is significant and negative (the coefficient of path c1 is −0.24, p < 0.01). These significant results confirm H1 and H2, and offer initial support for H4.
As for the relationship paths between Subj and DistrJ, the coefficients between Subj and InterJ (path a, β = −0.23), InterJ and DistrJ (path b2, β = 0.70), and Subj and DistrJ (path c2, β = −0.29) are all significant (at p < 0.01) (see Fig. 1). The results confirm H3 and give initial support to H5. Even though we did not form a hypothesis regarding the relationship between procedural justice and distributive justice, we tested and found a positive association between them. Figure 2 shows the results from the full model when InterJ (the mediator) is controlled for (second step). Mediation effects are interpreted as the strength of the indirect relationship between Subj and the two dependent variables (ProcJ and DistrJ) when InterJ serves as the mediator. The sizes of the indirect effects (see Table 5) are based on the path coefficients in Fig. 2 (the coefficient of path a multiplied by the coefficient of path b1 or b2). We also examined adjusted R² values to obtain an estimate of the mediator's effect size. In the full model, Subj has a significant correlation with InterJ, but it no longer has a significant effect on ProcJ (p > 0.05) (see Fig. 2). This indicates that InterJ fully mediates the relationship between Subj and ProcJ, since it eliminates their previously significant correlation (the indirect effect path coefficient is −0.18, p < 0.01; R² increased from 10% to 64.5%) (see Table 5). Subj does not intrinsically affect ProcJ; it acts through the link of InterJ, thus H4 is supported. According to Zhao et al. (2010), this result implies that no mediating variable is omitted between Subj and ProcJ.

[Table 3 (not reproduced here): results of the exploratory factor analysis (EFA) performed separately for each measurement instrument and for all items; the values in bold show the factor loadings after promax rotation for the distinct factors of the study (subjectivity in performance evaluation, procedural justice, distributive justice and interactional justice).]
InterJ also acts as the mediator in the relationship between Subj and DistrJ. Subj still has a significant direct effect on DistrJ in the full model (the path coefficient is −0.12, p < 0.1), even though the effect is not as strong as previously in path c2 of Fig. 1. It can be stated that InterJ has a partial mediating effect on the relationship between Subj and DistrJ (the indirect effect path coefficient is −0.10, p < 0.01; R² increased from 12% to 54.6%) (see Table 5). This is defined by Zhao et al. (2010) as complementary mediation, in which the mediated effect (a × b2) and the direct effect (c2) both exist and point in the same direction; both are negative in this case. Hence, H5 is confirmed. When a greater level of Subj is applied, middle managers may doubt the consistency of treatment by their supervisors and perceive a lower level of interactional justice. The decreased interactional justice perception relates to decreased distributive justice; in other words, they feel they are not fairly compensated for their performance.
To check the robustness of our model specification, we re-performed our analysis including only participants from large organizations (with more than 50 employees). The results are almost identical, with similar significance levels for the paths, and qualitatively equivalent to those reported earlier. This provides assurance as to the robustness of our findings with respect to the size of the organizations in the sample.

Discussion and conclusion
One of the outcomes desired from any performance evaluation system is positive justice perceptions among employees, which, in turn, enhance performance and inspire favourable work attitudes such as motivation and commitment (Kernan & Hanges, 2002; Lau & Moser, 2008). Positive perceived justice is also an important component of ethical management practices in organizations (Brown et al., 2005; Xu et al., 2016). This study investigates the effect of one component of a performance evaluation system, subjective evaluation, on three forms of justice perception. We aim to extend the literature by analysing the role of interactional justice perceptions as a critical mechanism underlying the impacts of subjective evaluation on procedural and distributive justice perceptions. The setting of the study involves performance evaluation systems in Vietnam as a representative of emerging economies, since there has been limited research in these contexts. We specifically examine the impact of two culture dimensions, power distance and individualism/collectivism (Hofstede, 1980), because of their relevance to employees' reactions to control practices in organizations (Chow et al., 2001).
Results from our data supported our hypotheses that the negative effects of subjectivity in performance evaluation are more prominent than positive ones. We find that subjective evaluation is negatively associated with three dimensions of justice perception. This can be explained by the limitations in human judgements and individuals' personal resources. Subjectivity in performance evaluation practices, in any form, is heavily influenced by private 'mental' aspects (LaFave, 2008) or personal perception, beliefs, or experiences (Choon & Embi, 2012). Therefore, it is subject to numerous biases and favouritism, which negatively affect the perceived justice of those middle managers being evaluated. In addition, supervisors' personal resources are limited and scarce. When they are expected to engage in multiple tasks including core technical and managerial tasks, they tend to prioritise core responsibilities, which usually result in their own desired outcomes (bonuses, promotions, and recognition). Management tasks related to interacting with subordinates, providing them with information and giving performance evaluations are often neglected (Sherf, 2016).
The results can be justified in the context of Vietnam as an emerging economy, a research setting that has been largely neglected in the management accounting literature. In a high power distance culture, subjective assessments are given with little information sharing or explanation, which may increase information asymmetry and the opportunity for favouritism. The collectivistic culture may also amplify the negative impact of biased ratings, since individuals who seek common interests and collective goals tend to build greater resentment towards misjudgment and disrespect. Nevertheless, our results are consistent with findings in advanced markets regardless of cultural differences (e.g. Bellavance et al., 2013; Van Rinsum & Verbeeten, 2012). More importantly, we find that interactional justice perception is a mediator in the relationship between subjective evaluation and procedural justice perceptions. People tend to attribute the fairness of subjective evaluation to supervisors, who have significant decision-making roles in the assessments. Since procedural justice is not always straightforward to observe, middle managers perceive it through the way their supervisors treat them. Similarly, the relationship between subjectivity in performance evaluation and perceptions of distributive justice is partially indirect, through interactional justice perceptions. Subjectivity in performance evaluation has a negative effect on perceptions of interactional justice, which lowers employees' confidence in their supervisors' ability to make fair decisions on their evaluation outcomes.
Overall, our results advance the understanding of the complex interrelationships between subjective performance evaluation and the three dimensions of justice perception (e.g. Hartmann et al., 2010; Voußem et al., 2016). Furthermore, our research is one of very few analyses of the effects of subjective evaluation on all three aspects of justice perceptions, as prior studies have mostly focused on perceived procedural and distributive justice. We take a different perspective and focus on the role of interactional justice perceptions of supervisors' treatment. Second, the study answers the call of Cohen-Charash and Spector (2001) for an in-depth investigation of interactional justice perceptions. Our findings highlight that subjective evaluation influences justice perceptions through the interactional justice perceived from supervisors. The results also indicate the importance of studying organizational justice concepts as three distinct yet related dimensions. Finally, our study expands the growing literature on performance evaluation practices and justice perceptions in emerging economies. It meets the call from Ezzamel and Xiao (2011) for richer literature across countries and contributes to the generalizability of organizational justice theories worldwide (Greenberg & Colquitt, 2013).
Our study has managerial implications for how subjective evaluation should be applied to improve justice perceptions and encourage ethical behaviour in organizations. First, the negative effect of subjective evaluation on justice perception does not automatically imply that subjectivity should be dropped from performance evaluation systems. Rather, it suggests that top management should consider the trade-off between the benefits and drawbacks of subjective elements in performance evaluation; excessive subjectivity should perhaps be avoided. More importantly, the mediating role of interactional justice perceptions suggests that subjective evaluation could be designed to enhance respectful treatment and the quality of communication, so that it is deemed fairer by the evaluated employees. In particular, an organizational feedback-oriented culture could be implemented to provide detailed and timely feedback along with rewards or punishments to employees (Levy & Williams, 2004; London & Smither, 2002).
Superiors should explain performance assessments and outcomes to employees in a respectful and transparent manner (Gupta & Kumar, 2013). Adequate communication is meant to enhance the quality of feedback given in the organization, to increase the acceptance of feedback, and to make both evaluators and evaluatees feel comfortable in the process (London, 2003, p. 231; Rosen et al., 2006). Additionally, it may be worthwhile to place a greater emphasis on the competence of management teams so that they can communicate and produce feedback effectively (Pitkänen & Lukka, 2011).
There are several limitations to our study. First, it was carried out in Vietnam, so any generalisation of our results to other settings should be performed with caution. Second, the measurement instruments were based on studies in English. Despite great efforts to ensure the accuracy and relevance of the English-Vietnamese translation, it is nevertheless possible that differences still exist between the English and Vietnamese versions. Such differences could introduce bias to the survey. Third, we did not ask the respondents to identify their organizations because we wanted to ensure their confidentiality and obtain a higher response rate. Hence, there might have been more than one respondent from some organizations, as the questionnaire was sent to several middle managers of each firm. This too may have led to some minor bias in the analysis.
Future studies could further examine the specific organizational circumstances that affect the relationship between subjectivity in performance evaluation and perceived interactional justice, as the relationship is prone to be influenced by contextual factors and the characteristics of those involved. In addition, since we only examined subjectivity as a whole, it might be fruitful to explore particular aspects of subjectivity in-depth, such as the use of subjective measures, flexibility in the weighting of certain performance measures, and ex-post discretional judgement. Finally, perceived interactional justice only has a partial mediating effect on the relationship between subjectivity and perceived distributive justice. This suggests the possible existence of some omitted mediators, which further studies can address.
Acknowledgements The authors acknowledge with thanks the valuable comments and guidance from editor Thomas Günther and two anonymous reviewers. We are grateful to Anne-Marie Kruis for her suggestions and comments on the questionnaire we used. We would also like to thank James Gaskin for his very useful online guidelines and video tutorials of AMOS. In addition, the authors acknowledge with gratitude the comments of participants in the Finnish Accounting Tutorial (2016), Management Control Association Doctoral Colloquium (2016), ACMAR Doctoral Colloquium (2018), the Management Control Association Symposium in Nice (2018) and research seminars at Oulu Business School (Finland). Thuy-Van Tran gratefully acknowledges the research funding support from Oulu University Scholarship Foundation.
Funding Open access funding provided by University of Oulu including Oulu University Hospital.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Maximum Match Subsequence Alignment Algorithm Finely Grained (MMSAA FG)
Sequence alignment is common nowadays, as it is used in many fields to determine how closely two sequences are related and, at times, to see how little they differ. In computational biology / bioinformatics, many algorithms have been developed over the course of time, not only to align two sequences quickly but also to obtain good laboratory results from these alignments. The first algorithms developed were based on a technique called dynamic programming; they were very slow but optimal when it comes to sensitivity. To improve speed, most algorithms today are based on a heuristic approach, sacrificing some sensitivity. In this paper, we improve on a heuristic algorithm called MASAA (Multiple Anchor Staged Local Sequence Alignment Algorithm) and on MASAA Sensitive, both of which we published previously. The new algorithm is appropriately called Maximum Match Subsequence Alignment Algorithm Finely Grained. Like our previous algorithms, it is based on the suffix tree data structure, but to improve sensitivity it employs adaptive seeds, and finely grained perfect match seeds, in between the already identified anchors. We tested this algorithm on randomly generated sequences and on the Rosetta dataset, where the sequence length ranged up to 500 thousand.
I. SEQUENCE ALIGNMENT
In computational biology or bioinformatics, a sequence is an RNA, DNA or protein string made up of its representative character set. DNA (A, C, G, T), RNA (A, C, G, U) and protein molecules (A, R, N, D, C, Q, E, G, H, I, L, K, M, F, P, S, T, W, Y, V) can thus be represented as strings of letters from their alphabet sets [1] [2] [3] [25].
A sequence alignment is a way of arranging these sequences with the objective of finding regions of 'similarity'. These similarities provide additional information on the functional, structural, evolutionary and other relationships between the sequences under study. Aligned sequences are represented in rows, stacked one on top of the other, as shown in Figure 1 [25]. In Fig. 1 there are regions where the two sequences align perfectly; these regions are called 'similar regions'. In some regions, special characters such as '-', known as indels, are present. These indels represent a mutation (change), or can be viewed as a deletion from the other sequence's point of view [25].
Pairwise sequence alignment is used to find conserved regions in two sequences, while multiple sequence alignment is used to find common regions in more than two sequences [25]. Pairwise sequence alignment is often the first step in many bioinformatics solutions; in many multiple sequence alignment algorithms, especially the linear ones, it remains the first step.
Pairwise alignment, as in Fig. 1, is an alignment between two sequences of the same kind: DNA, RNA or protein. The alignment then sheds light on the divergence of one sequence from the other in some cases, or on their similarity in others.
Pairwise sequence alignment can be classified into local sequence alignment and global sequence alignment. Local sequence alignment finds the best approximate subsequence match within two given sequences while the global sequence alignment takes the entire sequence into consideration [25].
Local sequence alignments are therefore designed to search subregions within the two sequences. For finding similar (biologically conserved) regions, which may or may not be preserved in order or orientation, local sequence alignment is very useful. It is typically used to find similarity between two divergent sequences and for fast database searches for similar sequences [25]. Since, it is trying to find subregions and not the sequence in its entirety, local sequence alignment usually takes less computation time when compared to global sequence alignment algorithms [25].
The performance of alignment algorithms in the literature is assessed against several key measurements. This is a subjective topic, and the criteria can vary from algorithm to algorithm: the type of sequence (some work only for DNA or RNA), length (some algorithms handle shorter sequences better than others), measure of accuracy (since there is no standard here, this is controversial and subjective), speed of alignment (time taken to align the sequences in study) and, lastly, memory efficiency (how much memory is needed to find the alignment) [25]. Today, AI is being applied to pairwise sequence alignment, as in other fields [26], but that is outside the scope of this paper.
II. LITERATURE
In this section we discuss the popular algorithms in local sequence alignment. Our algorithm is greatly influenced by previous algorithms; it has taken cues from them and at times questioned their approaches and ideas. We first cover the optimal algorithm and then the heuristic ones.
A. Local Sequence Alignment algorithms
The Smith-Waterman algorithm is an optimal local sequence alignment algorithm employing a technique called dynamic programming, in which a problem is broken into smaller subproblems that are solved recursively [25]. The solutions to the subproblems are saved and combined to find the solution to the entire problem.
This optimal algorithm produces an optimal local sequence alignment between two sequences S1 and S2 of lengths m and n, respectively, in time and space O(mn) [25]. As the sequence lengths increase, the cost therefore grows quadratically.
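A minimal, score-only sketch of the Smith-Waterman recurrence follows; the scoring parameters are illustrative defaults, not those of any specific tool.

def smith_waterman(s1, s2, match=2, mismatch=-1, gap=-2):
    # O(mn) local alignment score via dynamic programming:
    # H[i][j] is the best score of an alignment ending at s1[i-1], s2[j-1],
    # floored at 0 so a local alignment can restart anywhere.
    m, n = len(s1), len(s2)
    H = [[0] * (n + 1) for _ in range(m + 1)]
    best = 0
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            diag = H[i - 1][j - 1] + (match if s1[i - 1] == s2[j - 1] else mismatch)
            H[i][j] = max(0, diag, H[i - 1][j] + gap, H[i][j - 1] + gap)
            best = max(best, H[i][j])
    return best

print(smith_waterman("ACACACTA", "AGCACACA"))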
To overcome the shortcomings of the optimal algorithm, heuristic algorithms were developed later. Heuristic algorithms find near-optimal solutions, sacrificing a little sensitivity for speed. Since they are near-optimal and can handle long sequences running into billions of characters, heuristic algorithms are preferred. All heuristic algorithms run in stages: they find maximal matching subsequences and then examine the regions around them to obtain the local sequence alignment. These subsequences are called seeds or anchors, depending on whether the subsequence is one large word (anchor) or a small word made of a few characters (seed).
The first heuristic algorithm we discuss is FASTA, which stands for FAST-ALL, developed by Lipman and Pearson [5] [25]. FASTA uses a look-up table to find perfect subsequence matches of size l and hashes them, saving the time of searching for seeds of length l. It then proceeds to find such seeds along a diagonal path, since the final alignment is most likely to lie along this diagonal. It then uses a directed weighted graph over the regions in between the seeds along the best diagonal found to stitch the seeds together. The advantage of FASTA over the optimal algorithm is speed, but if there are two optimal diagonals, or if the true seeds are smaller than the size deployed in FASTA, the algorithm loses considerable sensitivity [25].
The Basic Local Alignment Search Tool, BLAST [6], uses a look-up table to identify seeds and is faster than FASTA [25]. A sliding-window technique is employed to find all good neighbour seeds for each seed it finds, in both directions [25]. When all seeds are found, it proceeds to identify high-scoring pairs (HSPs) and extend them until the score falls below a threshold k. These HSPs are then stitched together using a restricted dynamic programming, a variant of the Smith-Waterman algorithm [25].
BLAT, the BLAST-like alignment tool [17], is much faster than BLAST. BLAT differs from previous algorithms in the way sequences are indexed: "non-overlapping seeds of S2 are run through the database of sequence and then a new scan is run linearly through the S1, whereas BLAST builds an index of S1 and then scans linearly through the database" [17]. This saves time. After this stage, it searches for seeds with up to n mismatches around the seeds it found earlier. The HSPs of seeds and mismatch seeds are then extended as in BLAST to form the final alignment. As with BLAST, BLAT cannot find smaller homologous regions, as the seeds used are not small enough.
BLASTZ [8], the fastest among the BLAST family of algorithms, employs a different method. All repeats in the sequence are first removed [25]. It then looks for seeds of length l allowing at most one character transition. All seeds are extended on both sides. For the regions in between the seeds, it employs smaller seeds and uses optimal alignment to stitch them into the final alignment. Since matched or repeat seeds are not used again, and at most one transition is allowed per seed, the algorithm may perform poorly on divergent sequences, for example a Drosophila genome versus a pig genome.
PatternHunter [9] introduced a seed called the spaced seed to further improve sensitivity and speed. It uses a combination of priority queues (a variation of red-black trees), a queue and a hash table to achieve speed [9]. A spaced seed is a binary pattern, e.g. 1010101, in which positions marked 1 must match while the remaining positions are free to mismatch; the number of 1s is the seed's weight [25]. It then finds the best diagonal, as in FASTA, to form the final alignment [25]. The algorithm is written in Java and encounters memory problems for long sequences.
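A small sketch of how a spaced-seed hit can be checked, using the 1010101 pattern from the example above (the helper name is ours, not PatternHunter's):

def spaced_seed_hit(s1, s2, i, j, seed="1010101"):
    # A hit requires equality at every '1' position of the pattern;
    # '0' positions are free to mismatch. The seed's weight is its
    # number of 1s (four, for this pattern).
    if i + len(seed) > len(s1) or j + len(seed) > len(s2):
        return False
    return all(s1[i + k] == s2[j + k] for k, c in enumerate(seed) if c == "1")

print(spaced_seed_hit("ACGTACG", "AGGAACG", 0, 0))  # True: positions 0, 2, 4, 6 match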
UBlast [12] introduces a new technique of finding fewer, better hits: subsequences that occur rarely but are long. The aim is to improve speed over BLAST and MEGABLAST [16], an algorithm from the BLAST family. The technique targets speed more than sensitivity.
LAST [13] is a recent algorithm. It uses adaptive alignment seeds, which vary in length and in the number of indels they contain. Adaptive seeds can thus be of different lengths and weights, where the weight is a score associated with the seed. The rest of the algorithm is very similar to BLAST. ALLAlign [15] is a newly developed algorithm; however, the literature on this AWS-based web algorithm is very limited.
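The adaptive-seed idea can be sketched as follows: grow a match until it occurs rarely enough in the reference. LAST does this with suffix arrays; the linear-scan version below is only illustrative, and the max_hits threshold is a hypothetical parameter.

def adaptive_seed(query, ref, pos, max_hits=10):
    # Grow the seed starting at query[pos] until it occurs at most
    # max_hits times in ref (LAST-style adaptive seeding, simplified;
    # real implementations use a suffix array, not ref.count).
    length = 0
    while pos + length < len(query):
        length += 1
        if ref.count(query[pos:pos + length]) <= max_hits:
            break
    return query[pos:pos + length]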
LAMBDA [11] is a new algorithm for protein sequence alignment. It implements a technique in which more than one protein sequence can serve as the target sequences to be aligned against a pre-indexed database of all other known sequences [25]. It is optimized for big or large biological data and, like our algorithm in this paper, uses a suffix tree to obtain the maximal common subsequences or maximal unique subsequences, and then aligns these subsequences against a pre-indexed database (indexed via a suffix array) [25].
MASAA [1] [3], introduced in 2008, is based on Ukkonen's suffix tree [25]. The algorithm uses double indexing and backtracking to identify maximum match subsequences (MMSSs) [25]. In the subsequent stages, it finds perfect and near-perfect seeds, and it stitches the local alignment together in the last stages.
MASAA-S [25], introduced in 2019, is similar to MASAA but uses adaptive seeds in between the MMSSs; in the later stages it uses perfect seeds to improve sensitivity. The algorithm is more sensitive than MASAA but comparable to it in speed. This raises a question: if adaptive seeds do not incur a speed penalty and improve sensitivity, how much further can sensitivity be pushed without sacrificing speed? The algorithm in this paper addresses this question.
III. MOTIVATION
The motivation for this paper is our previous paper on MASAA-S, which is based on the same technique but uses a different seed structure. That paper asked how far we can go in terms of sensitivity without sacrificing speed. This paper further enhances the sensitivity, although we think we have hit a threshold and have pushed this technique to its boundary. In this paper, we introduce an algorithmic technique which extends MASAA and MASAA-S [1], [3], [25], algorithms we introduced in 2008 and 2019, by making it more sensitive and, at the same time, relatively fast compared to others.
IV. MMSAA-FG (MAXIMUM MATCH SUBSEQUENCE ALIGNMENT ALGORITHM - FINELY GRAINED)
MMSAA-FG is like MASAA and MASAA-S in the first two of its five stages, but it completely differs in the kind of seeds selected in between the maximum match subsequences (MMSSs), in the anchor extension stage and in the final stage, where the whole alignment is stitched. We explain each stage in detail in the coming sections.
A. Finding MMSSs
In this stage the two sequences are merged into one long sequence by introducing a special separator character between them. A suffix tree is built for this merged sequence. Only MMSSs whose length l is greater than an arbitrary threshold are kept; the threshold is set at 1/3rd of the length of the longest MMSS. Using Ukkonen's suffix tree with pointers, we employ backtracking to find all MMSSs between the sequences. Since these are MMSSs and not seeds, there is no question of noise in this selection.
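The paper's implementation uses Ukkonen's suffix tree; as a compact stand-in, the quadratic dynamic-programming sketch below finds the same maximal exact matches above a length threshold (illustrative only, and far slower than the suffix-tree approach).

def maximal_exact_matches(s1, s2, min_len):
    # Quadratic DP: cur[j] holds the length of the common substring
    # ending at s1[i-1], s2[j-1]. A run is reported when it cannot be
    # extended further to the right and meets the length threshold.
    m, n = len(s1), len(s2)
    prev = [0] * (n + 1)
    hits = []
    for i in range(1, m + 1):
        cur = [0] * (n + 1)
        for j in range(1, n + 1):
            if s1[i - 1] == s2[j - 1]:
                cur[j] = prev[j - 1] + 1
                right_maximal = i == m or j == n or s1[i] != s2[j]
                if right_maximal and cur[j] >= min_len:
                    hits.append((i - cur[j], j - cur[j], cur[j]))  # (start1, start2, length)
        prev = cur
    return hits

print(maximal_exact_matches("GATTACA", "TTACAGA", 3))  # [(2, 0, 5)]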
B. MMSS anchors within 60% of the length
All non-overlapping and non-crossing MMSSs are chosen, as in MASAA [1][3][25]. In this step we select all anchors that fall in our neighbourhood: a good neighbour distance is the distance from the previous MMSS within which the next MMSS must be found. We always start from the longest subsequence found in the previous stage, look for the next MMSS within 60% of its length, move to that MMSS, look for the next one within 60% of its length, and so on. This differs from the corresponding step of MASAA.
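One possible reading of this anchor-selection step, as a sketch; the interpretation of the 60% neighbourhood is ours and the helper is hypothetical.

def chain_anchors(mmss):
    # mmss: list of (pos1, pos2, length) maximal matches.
    # Start from the longest anchor; repeatedly add the longest anchor
    # that does not cross the chain and starts within 60% of the previous
    # anchor's length from its end.
    pool = sorted(mmss, key=lambda a: -a[2])
    if not pool:
        return []
    chain = [pool.pop(0)]
    added = True
    while added:
        added = False
        p1, p2, plen = chain[-1]
        radius = 0.6 * plen
        for k, (q1, q2, qlen) in enumerate(pool):
            non_crossing = q1 >= p1 + plen and q2 >= p2 + plen
            close = (q1 - (p1 + plen)) <= radius
            if non_crossing and close:
                chain.append(pool.pop(k))
                added = True
                break
    return chain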
C. Finding adaptive seeds in between MMSSs
This step can be broken down into two sub-steps.
I. First, we find all the adaptive seeds, which are located using the suffix tree. These adaptive seeds are set at a size of 20 with up to 6 mismatches. This sub-step finds more seeds while keeping the sensitivity, and also aids speed.
II. Next, we find perfect match seeds of length 4, and then of length 2, in between the adaptive seeds found in the previous sub-step. These seeds must lie within 1/3rd of the distance from the adaptive seeds found. The seed size is fixed in the algorithm primarily because a k-mer of size 8 or less is more likely to find more seeds than a k-mer of size in the range 12-48 [16] [13]; the authors there clearly established that the propensity to get more hits is greatest when the size is between 8 and 12 [25]. Both sub-steps are shown in the figure; a sketch of the exact-seed search appears after Sect. D below.

D. Finding the non-criss-crossing adaptive and perfect seeds
The algorithm now finds all non-overlapping and non-crossing matches among the adaptive seeds and perfect seeds selected in the previous step. To identify anchors from overlapping and crossing matches, we use the heuristic of 'closeness'. The question of overlapping does not arise for adaptive seeds, because overlapping seeds would amount to a bigger seed, which would already have been picked up by the suffix tree in sub-step I. The closeness heuristic arises only when selecting perfect seeds of length 4 and 2, either in between MMSSs and mini-anchors or in between MMSSs. Hence this is the intensive and relatively time-consuming part compared to our previous algorithms.
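As flagged above, here is a sketch of the exact-seed search of sub-step II: a k-mer index over one inter-anchor region queried with the other. The optional max_dist cutoff is a hypothetical stand-in for the 1/3rd-distance constraint.

from collections import defaultdict

def exact_seeds(r1, r2, k=4, max_dist=None):
    # Build a k-mer index over region r1, then query it with every
    # k-mer of region r2; Sect. D then filters the hits for closeness
    # and non-crossing.
    index = defaultdict(list)
    for i in range(len(r1) - k + 1):
        index[r1[i:i + k]].append(i)
    hits = []
    for j in range(len(r2) - k + 1):
        for i in index.get(r2[j:j + k], ()):
            if max_dist is None or abs(i - j) <= max_dist:
                hits.append((i, j))
    return hits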
E. Final Stitching
The final stitching is like that of our previous algorithms: the MMSS anchors and mini-anchors found in the previous steps form most of the final alignment [25]. The anchors are extended on both sides, proceeding from left to right, and the algorithm terminates when all MMSSs are extended and included in the final alignment.
V. IMPLEMENTATION
The algorithm is implemented in the C language, and the core of the program is the Ukkonen suffix tree with pointers. This suffix tree gives the ability to backtrack and find all MMSSs without sacrificing speed; it is at once simple and robust. The adaptive seeds are found the same way in between the MMSSs, and only their start and end positions are noted. The algorithm is shown pictorially in the figure. The metrics against which we evaluate this algorithm are speed and sensitivity. Although speed is straightforward, sensitivity is subjective, based on our experience and the literature: there is no standard for sensitivity, and different programs use different sensitivity metrics, which can influence the results [4]. Some use the number of correctly aligned residue pairs divided by the number of residue pairs in the reference sequence; others use the total column score (TCS), the number of correctly aligned columns divided by the number of columns [25].
In this algorithm we use the fraction of exon length aligned to the corresponding exon as our sensitivity measure. As in MASAA and MASAA-S, we randomly generated sequences up to 500 thousand in length. To check the sensitivity of the algorithm, we used the same ROSETTA dataset [3][23] for homologous sequences, and we used our own set of different genes from different animals to compare our algorithm with the rest of the algorithms.

VII. EXPERIMENTAL ANALYSIS

For the experimental results, we randomly generated sequences whose lengths range from 100k to 500k and compared alignment speed. For smaller sequences, the speed is much faster than BLASTZ and compares well with our previous algorithms. However, as the sequences grow, the speed of our new algorithm falls below that of the previous algorithms, approaching BLASTZ territory, and we believe it will slow further as the sequence length increases. We did not compare BLASTZ, MASAA-S, and MMSAA-FG with LAMBDA or AllAlign because those tools need an index database first, which was challenging for randomly generated sequences [25]. For AllAlign, we could not find source code to download and compare, and checking performance on a server was not a controlled comparison. Although we know LAMBDA is 500x faster than BLASTZ, we assume here that LAMBDA is faster than MMSAA-FG too.

For sensitivity, we compared the exon coverage of all four algorithms on the Rosetta dataset. Table 1 shows the percentage of exon coverage; MMSAA-FG performs better than MASAA and MASAA-S, which we attribute to the smaller anchor seeds being chosen once the large anchors are already selected in the first step. Smaller seeds of length <= 4 also play a role. This step is somewhat performance-heavy in terms of speed, as seen in Fig. 5. Because MMSAA-FG performed better than BLASTZ and our previous algorithms on divergent sequences, we also compared the four algorithms on divergent sequences from different genes of the ROSETTA dataset by percentage of exon coverage. MMSAA-FG performed better than MASAA, MASAA-S, and BLASTZ. We attribute this performance to the mini-anchors and the small perfect-match seeds in the inter-anchor and inter-mini-anchor regions. This is shown in Fig 6.

VIII. CONCLUSION

In this paper, we have proposed a new algorithm that is not only faster than BLASTZ for smaller sequences but also more sensitive than BLASTZ and our previous algorithms on divergent sequences. On the Rosetta dataset, we found that the algorithm performs close to BLASTZ. We attribute this to the smaller seeds between the MMSSs and mini-anchors. In the future, we would like to extend the algorithm with a faster stitching algorithm and perhaps parallelize it.
ESTIMATING 3D LAND SUBSIDENCE FROM MULTI-TEMPORAL SAR IMAGES AND GNSS DATA USING WEIGHTED LEAST SQUARES
Analysis of multi-temporal synthetic aperture radar (SAR) satellite images using persistent scatterer interferometry is an effective approach for monitoring land subsidence, which is a serious issue in some urban areas. However, a drawback to this approach is that it is limited to displacement along the radar line-of-sight direction. An accurate understanding of land subsidence requires estimation of 3D displacement. One solution is to combine observations from multiple sources and directions, such as multi-temporal SAR images acquired on ascending and descending orbits, with global navigation satellite system (GNSS) data. While such approaches can estimate 3D displacement, existing methods do not account for differences in accuracy among the data sources. Therefore, in this paper, we propose a method for estimating 3D land subsidence from multi-temporal SAR images and GNSS data by using the weighted least squares method. The weights for the data sources are calculated from the PSI results and GNSS data. We apply the method to Kansai International Airport, using 13 ALOS-2/PALSAR-2 ascending images from 2014 to 2018 and 17 ALOS-2/PALSAR-2 descending images from 2015 to 2018. Root mean squared errors in the east–west, north–south and vertical directions are 6, 13, and 10 mm/year, respectively. These results demonstrate that combining PSI and geodetic results is effective for monitoring land deformation accurately with high spatial resolution.
INTRODUCTION
Monitoring environmental changes in urban areas is essential for local governments to maintain quality of life. Land subsidence can hinder urban management and may occur as a result of excessive extraction of groundwater or natural gas; it can then lead to flooding and damage to infrastructure. Thus, it is important to detect subsidence early and take appropriate countermeasures. Leveling surveys using a global navigation satellite system (GNSS) such as the Global Positioning System (GPS) are a conventional approach for such monitoring. These point-based measurement approaches can measure subsidence with high accuracy but are not practical for wide-area monitoring. In contrast, satellite synthetic aperture radar (SAR) is an effective tool for wide-area monitoring. Differential interferometric SAR (DInSAR) is a technique for observing displacement with high resolution; in particular, permanent scatterer interferometry (PSI) (Ferretti et al., 2000) estimates displacement with high accuracy using dozens of SAR images. However, with this technique, displacement can be estimated only in the direction along the radar's line of sight (LOS), and it is impossible to distinguish between horizontal and vertical displacement. Therefore, it is necessary to separate the displacement in the radar LOS direction into 3D components. Ito et al. (2019) proposed a method for estimating 3D displacement from SAR images. First, the method estimates the radar LOS velocity using PSI. Next, it estimates the 3D velocity from GPS and leveling survey data by interpolation (ordinary kriging). Finally, assuming equal weights for each observation, the ordinary least squares method (OLS) is used to combine the radar LOS velocity and the 3D velocity to estimate the 3D velocity with high accuracy and high resolution. However, using OLS to combine different data sources with equal weights may lower the accuracy of the displacement results, because the final estimate will be strongly affected by any lower-accuracy data that are present. The weighted least squares method (WLS) can resolve this issue, but it is necessary to determine the optimal weights to apply WLS. In this paper, we propose a method for determining the weights to be used in WLS and estimate 3D displacement by combining the PSI velocity and the interpolated velocity with WLS.
Concept of the Proposed Method
A flowchart of the proposed method is shown in Figure 1. The proposed method is assumed to use multi-temporal SAR images and GNSS data. After estimating the displacement velocities by PSI, we apply OLS to both velocities from the PSI and GNSS data. The estimated 3D displacement field is regarded as an initial estimate. The weights required for WLS are derived from this initial estimate and the GNSS data. Once we derive the weights, we apply WLS to the velocities with the weights. Finally, we estimate the final 3D displacement field. We can calculate the displacement value by integrating the velocities from a designated reference to the permanent scatterer (PS) of interest.
Figure 1. Flowchart of the proposed method for estimating 3D displacement.
Persistent Scatterer Interferometry
PSI involves selecting PSs as phase-stable pixels and using them alone to reduce the main errors (e.g., temporal and geometrical decorrelation, atmospheric artifacts) in conventional processing methods. In the present study, we used PSI software developed by the infrastructure monitoring group at the Earth Observation Research Center (EORC) of the Japan Aerospace Exploration Agency (JAXA) (JAXA/EORC., n.d.). Once the user has selected the region of interest, the software processes the PSI data from the raw data. The displacement is estimated relative to a certain point known as the zero-displacement point. Thus, we convert the relative values to absolute values by adjusting the average of the PSI results to that of the reference points, connecting PSI to a reference frame. Specifically, we calculate the velocity difference between the leveling points and the nearest PS point and add the average of the differences to the PSI results.
Interpolation of Geodetic Data
We assume that temporal GNSS data are available. We generate the velocities along three directions, namely, the east-west, north-south, and vertical axes, consistent with the velocities estimated from SAR images by PSI. Because the point data are spatially sparse, a majority of PSs have no neighboring GNSS data around them. Therefore, the GNSS velocities must be interpolated to match the spatial resolution of the PSI results. Several approaches can be used for interpolation; for example, inverse distance weighting (IDW) interpolates using weights inversely proportional to the distance to the available point data. In this research, we use ordinary kriging (Brooker, 1991; Deutsch et al., 1998), an interpolation method that allows for easy processing and intuitive understanding.
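For illustration only, the sketch below interpolates with inverse distance weighting rather than the ordinary kriging actually used in this research (kriging additionally fits a semivariogram model); the function name, coordinates, and velocities are invented example values.

```python
import numpy as np

def idw(points, values, query, power=2.0, eps=1e-9):
    """Interpolate a velocity component at `query` from sparse GNSS points.

    points : (n, 2) east/north coordinates of GNSS stations
    values : (n,)   velocity component at each station (e.g., vertical, mm/year)
    query  : (2,)   coordinate of the PS where a value is needed
    """
    d = np.linalg.norm(np.asarray(points, float) - np.asarray(query, float), axis=1)
    w = 1.0 / np.maximum(d, eps) ** power     # weight falls off with distance
    return float(np.sum(w * np.asarray(values, float)) / np.sum(w))

# Example: three stations around a PS (coordinates in metres, illustrative).
pts  = [(0.0, 0.0), (100.0, 0.0), (0.0, 100.0)]
vals = [-30.0, -35.0, -28.0]
print(idw(pts, vals, (40.0, 40.0)))
```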
Estimation of Three-dimensional Displacement
The initial 3D displacement field is estimated from the PSI and interpolated results by applying OLS. The observation equations of the OLS model are as follows:

$$V_{\mathrm{ASC}} = u_e^{\mathrm{ASC}} v_e + u_n^{\mathrm{ASC}} v_n + u_u^{\mathrm{ASC}} v_u, \qquad V_{\mathrm{DES}} = u_e^{\mathrm{DES}} v_e + u_n^{\mathrm{DES}} v_n + u_u^{\mathrm{DES}} v_u \quad (1)$$

Here, $V_{\mathrm{ASC}}$ and $V_{\mathrm{DES}}$ are the estimated velocities of a PS along the radar LOS from images acquired on ascending and descending orbits, respectively; $\alpha$ is the heading angle of the satellite, and $\theta$ is the incidence angle of the radar. $u_e$, $u_n$, and $u_u$ are the components of the unit vector pointing from the PS toward the satellite, which are functions of $\alpha$ and $\theta$, and $v_e$, $v_n$, and $v_u$ are the three unknown components of the velocity vector.

Stacking the LOS observations together with the interpolated GNSS velocities, Equation (1) can be rewritten as follows:

$$\mathbf{v}_{\mathrm{obs}} = A\,\mathbf{v}_{\mathrm{dis}} \quad (2)$$

Now we define the residual $\mathbf{r}$ for Equation (2):

$$\mathbf{r} = \mathbf{v}_{\mathrm{obs}} - A\,\mathbf{v}_{\mathrm{dis}} \quad (3)$$

Applying OLS to the residual minimizes the sum of the squared residuals. The optimal estimate of $\mathbf{v}_{\mathrm{dis}}$ is given by

$$\hat{\mathbf{v}}_{\mathrm{dis}} = \left(A^{\top}A\right)^{-1} A^{\top}\,\mathbf{v}_{\mathrm{obs}} \quad (4)$$

Whereas Equation (4) assumes equal weights, WLS assumes non-uniform weights, expressed as $P$ in the following equation:

$$\hat{\mathbf{v}}_{\mathrm{dis}} = \left(A^{\top}PA\right)^{-1} A^{\top}P\,\mathbf{v}_{\mathrm{obs}} \quad (5)$$

where $P$ is the following diagonal matrix:

$$P = \mathrm{diag}\!\left(1/\sigma_1^{2},\; 1/\sigma_2^{2},\; \ldots,\; 1/\sigma_m^{2}\right) \quad (6)$$

Here, $\sigma^{2}$ represents the variance of the velocities estimated from PSI or obtained from GNSS data.
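A minimal sketch of the per-PS weighted solve of Equations (2)-(6), assuming NumPy; the design-matrix rows, LOS unit vectors, and variances below are illustrative placeholders, not values from this study.

```python
import numpy as np

def solve_wls(A, v_obs, variances):
    """Estimate (v_e, v_n, v_u) from stacked observations.

    A         : (m, 3) design matrix (LOS unit vectors + identity rows for GNSS)
    v_obs     : (m,)   observed velocities (LOS and interpolated GNSS)
    variances : (m,)   variance assigned to each observation
    """
    P = np.diag(1.0 / np.asarray(variances, float))   # Equation (6)
    N = A.T @ P @ A                                   # normal matrix
    return np.linalg.solve(N, A.T @ P @ v_obs)        # Equation (5)

# Example: two LOS rows (ascending, descending) and three interpolated
# GNSS rows (east, north, up). The LOS unit vectors are placeholders.
A = np.array([
    [-0.61, 0.11, 0.78],   # ascending LOS unit vector (assumed)
    [ 0.60, 0.12, 0.79],   # descending LOS unit vector (assumed)
    [ 1.0,  0.0,  0.0 ],   # GNSS east
    [ 0.0,  1.0,  0.0 ],   # GNSS north
    [ 0.0,  0.0,  1.0 ],   # GNSS up
])
v_obs = np.array([-25.0, -28.0, 3.0, -2.0, -34.0])    # mm/year (illustrative)
var   = np.array([40.0, 40.0, 4.0, 4.0, 9.0])         # PSI noisier than GNSS
print(solve_wls(A, v_obs, var))
```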
Determination of Weights
We now explain how to determine the weights in Equation (6). We assume that errors are included in the displacement velocities estimated by PSI and obtained from the GNSS data, and that the former errors are larger than the latter. In addition, we also assume that no data other than temporal SAR images and GNSS data are available. Under these assumptions, it is impossible to determine the weights for the PS and GNSS displacement rates independently. Instead, our approach is to utilize the GNSS data to determine the weights of the PSs. For these weights, we start by matching each GNSS point with the nearest PS, as shown in Figure 2. The distance between the two points is measured as the Euclidean distance.
Then, we calculate the variance of the PS velocities as the mean squared difference between each matched PS velocity and the corresponding GNSS velocity:

$$\sigma_{\mathrm{PS}}^{2} = \frac{1}{n}\sum_{i=1}^{n}\left(v_i^{\mathrm{PS}} - v_i^{\mathrm{GNSS}}\right)^{2}$$

Here, $n$ denotes the number of GNSS data points.
Next, we explain the weights for the GNSS data. Figure 3 shows an example of taking out a GNSS data point to compute its weight.

Figure 3. Taking out a GNSS data point to calculate the weight of the GNSS data.
The velocity of the GNSS data point of interest is compared with the velocity interpolated from the surrounding GNSS data. The difference is defined as

$$d_i = v_i^{\mathrm{GNSS}} - v_i^{\mathrm{interp}}$$

As with the variance of the PS velocities, we calculate the variance of the GNSS velocities:

$$\sigma_{\mathrm{GNSS}}^{2} = \frac{1}{n}\sum_{i=1}^{n} d_i^{2}$$
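The weight determination can be sketched as follows, assuming the matched PS velocities and the leave-one-out interpolated velocities have already been computed; all names and numbers are illustrative.

```python
import numpy as np

def ps_variance(v_ps_matched, v_gnss):
    """Variance of PS velocities: mean squared PS-vs-GNSS difference."""
    d = np.asarray(v_ps_matched, float) - np.asarray(v_gnss, float)
    return float(np.mean(d ** 2))

def gnss_variance(v_gnss, v_interp):
    """Variance of GNSS velocities from the leave-one-out differences d_i."""
    d = np.asarray(v_gnss, float) - np.asarray(v_interp, float)
    return float(np.mean(d ** 2))

v_gnss       = np.array([-33.0, -36.0, -29.0, -31.0])   # mm/year (illustrative)
v_ps_matched = np.array([-28.0, -40.0, -25.0, -35.0])   # nearest-PS velocities
v_interp     = np.array([-32.0, -35.5, -30.0, -30.5])   # leave-one-out estimates
weights = 1.0 / np.array([ps_variance(v_ps_matched, v_gnss),
                          gnss_variance(v_gnss, v_interp)])
print(weights)   # entries of the diagonal weight matrix P
```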
STUDY AREA AND DATA USED
We selected Kansai International Airport (KIX; Figure 4) in western Japan as the study area for this research. It has been experiencing serious land subsidence (Kansai Airports., n.d.). KIX is located on two islands, Islands I and II, and in 2015 the subsidence velocities of those two islands were 6 cm/year and 34 cm/year, respectively.
We used level 1.1 (L1.1) satellite SAR images from the Phased Array type L-band SAR (PALSAR-2) instrument onboard the Advanced Land Observing Satellite 2 (ALOS-2). Table 1 lists the images used in this research: 13 acquired on the ascending orbit and 17 acquired on the descending orbit.
For the PSI analysis, we downloaded the 10-m-grid digital elevation model published by the Geospatial Information Authority of Japan (GSI, n.d.) to remove the effect of topography. We selected Island II because it has exhibited larger 3D displacement than Island I. Figure 4b shows the 54 leveling and 26 GPS locations measured at Kansai Airport in an annual survey, for a total of 80 GPS data points. The leveling data measured using GPS has only vertical displacement whereas the GPS data has 3D displacement. Because the leveling was conducted using GPS, we assumed that the accuracy of the leveling data was almost the same as that of the GPS data available in this research. Below, we incorporate the leveling data into the GPS data.
RESULTS
We divided the 80 GPS data points into two groups, one for interpolation (50 data points) and the other for validation (30 data points). Figure 5 shows the locations of the geodetic data used for interpolation and validation. We used 32 leveling and 18 GPS locations (Figure 5a) for interpolation and the other 22 leveling locations and 8 GPS locations (Figure 5b) for validation of the land-subsidence results. We used ordinary kriging to interpolate the data. Figure 6 shows the semivariogram curves of the interpolation from the GPS data. Curves were obtained for the east-west, north-south, and vertical displacement components, and the interpolated results from the GPS data were generated. Following the method described in Section 2, the variance of the velocities derived from PSI and obtained from the GPS data was calculated (Figure 8). Figure 9a-c shows the velocity field in all three directions in units of mm/year, estimated by combining the results from PSI with the terrestrial measurements at KIX. We compared the final vertical deformation velocity of the nearest PS point and the GPS data classified as validation data to assess the validity of the estimated results. Figure 9d-f shows the combined estimation versus the validation data. The validation was conducted by matching PS and GPS locations within a radius of 100 m (Euclidean distance). Whereas eight GPS locations were available for validating the east-west and north-south displacement, only seven data points were used for the actual validation of the east-west displacement (Figure 9d) because one location had no PS point within a radius of 100 m. The root mean squared errors (RMSEs) of the combined results were 6, 13, and 10 mm/year for the east-west, north-south, and vertical components, respectively.
DISCUSSION AND CONCLUSION
In this paper, we estimated the 3D land-subsidence components at KIX by integrating the results estimated from SAR images observed on ascending and descending orbits with GPS data. We used WLS to account for the difference in accuracy between the SAR-derived estimates and the GPS data. Figure 10 shows the validation results of the vertical components obtained by OLS. The RMSEs of the results using OLS were 14, 16, and 14 mm/year for the east-west, north-south, and vertical components, respectively, whereas the RMSEs of the results using WLS were 6, 13, and 10 mm/year. The accuracies of the vertical results are summarized in Table 2.
For reference, the results from the PSI ascending and descending images are relatively worse than those usually obtained using PSI. This was due mainly to the mean displacement velocity in the study area being relatively large, at 34 cm/year, which accordingly resulted in a larger RMSE. In addition, the RMSE of the 2D fusion of the PSI ascending and descending results was 20 mm/year; this estimate was obtained by assuming no north-south displacement and applying OLS. It was confirmed that combining the results from PSI and geodetic deformation measurements using WLS is more effective for land-subsidence monitoring with high spatial resolution than using PSI alone, interpolation alone, or combining the results using OLS. It has been reported that Islands I and II are contracting toward their respective centers because of unequal settlement of the seawall, which means that there is a load on the reclaimed side but not the seaward side (Furudoi et al., 2009). Next, we discuss the determination of the weights used in Equation (6). We resolved the observation equations using WLS and obtained the 3D displacement velocities. Several methods have been proposed for obtaining 3D displacements, such as combining DInSAR and GPS using WLS (Hu et al., 2012; Samsonov et al., 2007; Shi et al., 2015). The method proposed in this paper determines the weights by comparing the PSI velocities with the GNSS data. As shown in Figure 8, the variance of the vertical component was much larger than those of the other components. This is a reasonable finding because the vertical displacement is much larger than the horizontal displacement. Compared with the results obtained using OLS, the proposed method using WLS improved the accuracy of the 3D displacement.
Finally, we discuss the transferability of the proposed method in terms of the number of GPS data points. In the study area, we had 54 leveling and 26 GPS locations. As previously mentioned, because the leveling data were measured using GPS, we had a total of 80 GPS data points. In actual application, it is not feasible to have this many GPS locations in such a relatively small area. As a result, it may be difficult to calculate the weight matrix P given by Equation (6). Therefore, a possible approach is to apply the weights obtained in this research to the area of interest. In the future, we aim to apply this method to areas with sparser GPS points and examine whether such an approach is effective.
Recent advance in bioactive hydrogels for repairing spinal cord injury: material design, biofunctional regulation, and applications
Functional hydrogels show potential for repairing spinal cord injury (SCI) owing to their unique chemical, physical, and biological properties and functions. In this comprehensive review, we present recent advances in the material design, functional regulation, and SCI repair applications of bioactive hydrogels. Unlike previously released reviews on hydrogels and three-dimensional scaffolds for SCI repair, this work focuses on strategies for the material design and biologically functional regulation of hydrogels, specifically aiming to show how these significant efforts can promote the repair performance for SCI. We demonstrate various methods and techniques for the fabrication of bioactive hydrogels with biological components such as DNA, proteins, peptides, biomass polysaccharides, and biopolymers to obtain unique biological properties, including cell biocompatibility, self-healing, anti-bacterial activity, injectability, bio-adhesion, bio-degradation, and other multi-functions for repairing SCI. The functional regulation of bioactive hydrogels with drugs/growth factors, polymers, nanoparticles, one-dimensional materials, and two-dimensional materials for highly effective SCI treatment is introduced and discussed in detail. This work offers new viewpoints and ideas on the design and synthesis of bioactive hydrogels informed by state-of-the-art knowledge of materials science and nanotechnology; it will bridge materials science and biomedicine and further inspire the clinical potential of bioactive hydrogels in biomedical fields.
Introduction
Spinal cord injury (SCI) is a spinal surgical disease with serious symptoms and poor prognosis. The annual incidence is about 10.4-83.0 per million, and SCI carries a high disability rate and brings a heavy economic burden to patients' families and society [1]. Traditional methods, including hormone shock therapy, surgical decompression, spinal fixation, and rehabilitation, have not shown satisfactory performance for treating SCI, and there is still no successful clinical treatment that stimulates the regeneration of the human central nervous system (CNS) [2]. Therefore, how to promote the recovery of nerve function after SCI is currently a challenging topic for both fundamental and clinical studies.
Exploiting the characteristics of neural stem cells (NSCs), such as self-renewal and multipotent differentiation, clinical applications that add functional nerve cells have been carried out by inducing endogenous NSCs or transplanting exogenous NSCs to treat SCI [3]. However, the local inflammatory microenvironment after SCI is an important factor affecting cell behavior [4], and it is therefore particularly important to construct a suitable microenvironment that promotes the survival, proliferation, and differentiation of endogenous stem cells so as to promote the regeneration of the injured spinal cord [5]. Many controllable drug release systems that support the regeneration of stem cells and deliver a variety of bioactive factors or drugs to construct a microenvironment suitable for CNS regeneration have been developed previously [6,7], and they are of great significance in biomedicine and tissue engineering.
In pre-clinical SCI treatment, hydrogels have not only been used to promote tissue repair but have also served as bioactive carriers (of cells, drugs, or bioactive molecules) for local treatment [8,9]. Clinically, the condition of SCI is very complicated owing to differences in size, shape, and injury degree [10]. In complex clinical cases, surgical manipulation of the spinal cord by implanting a preformed stent or drug delivery device may result in further damage to the spinal cord tissues [11]. Therefore, the targeted injection of hydrogels into SCI sites is very consistent with clinical personalized therapy. After injection, hydrogels can combine well with the SCI tissue, slowly release stem cells/drugs/bioactive molecules, and show special functions, such as electrical conductivity, anti-inflammatory activity, adhesion, absorbability, thermal responsiveness, and self-healing [12,13], making hydrogels attractive materials for SCI repair and regeneration. However, preparing multifunctional hydrogels with injectable, anti-inflammatory, conductive, adhesive, absorbable, thermotropic, and self-healing properties for SCI repair remains a great challenge.
Hydrogels have three-dimensional (3D) porous structures with high water content, constructed by physical connection or chemical cross-linking. According to the distance between entanglements, hydrogels can be divided into three types: macroporous, microporous, and non-porous. By resembling the extracellular matrix (ECM), hydrogels can mimic natural human tissues [14]. Therefore, multifunctional hydrogels have high therapeutic potential for the treatment of SCI, and their clinical applications in the delivery of stem cells, drugs, or bioactive molecules are promising [15]. In addition, the transfer of biomaterials is thought to be a more effective alternative strategy to mediate NSC transplantation. Loading stem cells, drugs, or different bioactive growth factors (GFs) into hydrogels could promote the functions of the ECM, supporting the survival, proliferation, and differentiation of transplanted stem cells into nerve cells [16]. A good delivery system can greatly improve the therapeutic effectiveness of stem cells, drugs, and different bioactive substances. Neural tissue engineering with multifunctional hydrogels in combination with stem cells, drugs, or different bioactive factors provides a promising strategy for the recovery from SCI [17,18]. However, owing to the limitations of multifunctional hydrogels, such as the amount of loaded stem cells, the number of bioactive molecules, and limits on functional transformation, using functional hydrogels to load stem cells and deliver a variety of substances or bioactive factors at the same time is still a challenge.
Several important reviews on treating SCI using hydrogels have been released previously. For instance, Wang et al. summarized the pathophysiology and clinical manifestation of SCI [19]; in their work, the composition of polymer hydrogels, cross-linking methods, treatment strategies, and the effects of injected hydrogels on SCI repair were introduced and discussed. Walsh et al. described the link between successfully delivered cells or bioactive molecules and the immune response, introduced the latest advances in the treatment of SCI with immune agents, and demonstrated both the physical and chemical properties of hydrogels [14]. Silva and co-workers reviewed advances in hydrogel-based delivery systems for repairing SCI, in which the flow characteristics of hydrogels, mesh size, swelling, degradation, gelation temperature, and surface charge in treating SCI were introduced and analyzed in detail [20]. Peng and co-workers summarized the current status of various hydrogel-based delivery systems used for the treatment of secondary SCI and also discussed the functional modification of these hydrogels to obtain better therapeutic results [21]. However, the above-mentioned reviews did not clearly explain the effects of material design and the regulation of hydrogel functions and biological properties on the treating efficiency of hydrogels toward SCI. We believe the regulation of the bioactivity and bio-properties of hydrogels is of great importance for promoting the applications of hydrogels in repairing SCI, and there is still space to fill in addressing these promising applications.
Therefore, in this review we focus on recent advances in the material design and synthesis of functional bioactive hydrogels for repairing SCI, specifically from the viewpoints of optimal material design and the regulation of the bioactivity and bio-functions of hydrogels (Scheme 1). First, we introduce the SCI repair mechanisms and the corresponding physical, chemical, and biological SCI repair methods. Second, we demonstrate the fabrication of bioactive hydrogels incorporating various biological components, including DNA, proteins, peptides, biomass polysaccharides, biopolymers, and others, via various synthesis strategies. After that, methods for tailoring the biological properties of hydrogels, including cell biocompatibility, self-healing, anti-bacterial/anti-inflammatory activity, injectability, bio-adhesion, biodegradation, and other multi-functions, are presented. Finally, the functional regulation of bioactive hydrogels through functionalization with drugs/GFs, polymers, nanoparticles (NPs), one-dimensional (1D) materials, and two-dimensional (2D) materials for SCI repair applications is introduced and discussed in detail, in order to show the great effect of the functional regulation of hydrogels on treating SCI. We suggest that this comprehensive review, which analyzes the importance of the functions and properties of bioactive hydrogels for SCI repair, could be useful for promoting the bridging of materials science and biomedicine from a different viewpoint and could create potential effects on the clinical therapy of SCI.
Mechanisms and methods of SCI repair
The spinal cord consists of both gray matter and white matter, with gray matter in the center and white matter in the periphery. Gray matter consists of interneuron, afferent neuron and efferent neuron fibers. White matter consists mainly of myelinated axons. The spinal cord provides a very efficient connection between the brain and peripheral nerves. Axons run lengthwise through the spinal cord, passing information from the brain to peripheral nerves via efferent nerves, and messages received by peripheral nerves to the brain via afferent nerves. Spinal cord neurons differentiate into axons and form synapses with dendrites, forming extensive and huge connections in the body. The effective connection of neurons can ensure the integrity and timeliness of information when the nervous system transmits signals.
Scheme 1. Model of the design and functional regulation of bioactive hydrogels for SCI repair
Extensive progress has been made in the nerve regeneration of SCI. However, existing studies have not yet achieved clinically meaningful regeneration of the adult CNS (i.e., restoration of motor, sensory, and autonomic nervous function), as the mechanisms underlying the recovery of spinal cord function and the regeneration of the CNS are not yet fully clear. After reviewing the latest literature, we summarize several research mechanisms of SCI below.
Mechanisms of SCI repair
SCI can be either primary or secondary. The initial mechanical injury leads to a primary injury stage of the spinal cord that can last up to 24 h, resulting in the death of nerve and glial cells [22,23]. Primary SCI cannot be treated clinically and can only be prevented; secondary SCI includes the breakdown of the blood-spinal cord barrier, the influx of peripheral inflammatory cells, and the activation of endogenous microglia, among other processes [24].
Secondary SCI can cause the activation of inflammatory cells and changes in the immune microenvironment, and can further aggravate a series of pathophysiological events, such as neuron injury and apoptosis of glial cell populations, eventually leading to the degeneration of the ECM and the formation of cystic cavities and glial scars in the injured area [25,26]. Cystic cavities and glial scars impede the electrical conduction of the spinal cord and the regeneration of axons, leading to severe dysfunction of the limbs below the injured level, such as permanent loss of movement (weakness or paralysis), sensory impairment, and autonomic (defecation and urination) dysfunction [27,28]. Neurons extend axons and form synapses with dendrites, establishing wide and extensive connections in the body that ensure the integrity and function of the signaling system. However, the regeneration ability of axons and dendrites is often inhibited to a large degree, owing to the loss of nerve functions and the effects of the inhibitory microenvironment (glial scar formation, inflammatory stimulation, and oxidative stress) [29].
Many other studies have explored the mechanisms of SCI repair. For instance, it has been reported that the mammalian target of rapamycin (mTOR) signaling pathway plays a crucial role in synaptogenesis and in neuron growth, differentiation, and survival after injury of the CNS [30]. The modulation of the mTOR signaling pathway is a potential treatment for SCI. After SCI, astrocytes become hypertrophic and proliferative, forming astrocyte-rich borders, and then overreact to form glial scars, which are the main obstacles to neuronal regeneration and axon recovery [31]. It has previously been reported that down-regulation of the PI3K/Akt/mTOR signaling pathway reduced the formation of glial scars, promoted the autophagy of neuronal cells after SCI, inhibited apoptosis, and improved functional recovery in SCI rats [32][33][34]. Several studies have proved that the activation of the PI3K/Akt/mTOR pathway was beneficial to SCI repair. For example, Sun and co-workers reported that the combination of bone marrow mesenchymal stem cells (BMSCs) with exercise therapy restored motor function after SCI by activating the PI3K/Akt/mTOR pathway [35]. Zhan and co-workers found that moderate-intensity treadmill exercise activated the mTOR pathway, in a manner dependent on the expression of neurotrophic factors in the motor cortex, and promoted functional recovery in SCI mice [36]. In addition, previous studies [37,38] have also suggested that ATP could promote the functional recovery of SCI rats by activating the mTOR signaling pathway. Therefore, the mTOR signaling pathway plays an important clinical role in the formation of glial scars; in the survival, proliferation, and differentiation of NSCs; and in the growth, differentiation, and survival of neurons after SCI.
Glial scars, which are formed mainly by reactive astrocytes, play a dual role in SCI [39]. In the acute stage of SCI, astrocytes secrete various GFs and renew their numbers, which not only has direct effects on the damaged nerve cells but also reduces the concentration of toxic substances, such as glutamate, in the external environment. These efforts remove harmful substances from the extracellular fluid and mobilize energy to the injured area, so that the living environment of nerve cells is repaired [40,41]. In the chronic phase, however, the hypertrophic glial scars formed by reactive astrocytes act as physical and chemical barriers, which are the key culprits hindering neuron regeneration and functional recovery [42,43]. The complexity of reactive glial scar formation in spinal axon regeneration and functional recovery has been investigated previously [44]. The results indicated that there was no significant difference in the recovery of animals with and without glial scar resection in a dorsal hemisection model of experimental animals. However, the Basso-Beattie-Bresnahan (BBB) score of the contusion-model animals was lower in the early postoperative glial scar resection group, which confirmed the duality and complexity of the glial cell response after SCI.
Besides, emerging research is elucidating the mechanism of neural circuit recombination after SCI to improve the functional recovery of SCI. Researchers are trying to understand how the subsets of neurons from the brain stem and spinal cord interact to regulate the motor and autonomic functions. Their study also explained the response and recombination of these subsets of neurons after SCI, and presented an effective strategy to improve the function of SCI through the neuromodulation technique [45].
Methods of SCI repair
Current treatment strategies for SCI include the protection of nerve cells and the regeneration of nerve cells [46]. The former strategy is mainly used to avoid secondary SCI and plays a positive role in the early stage of SCI. There are two common therapeutic measures for acute SCI: one is releasing the continuous mechanical compression of the spinal cord, for example by early surgical spinal decompression and spinal fixation, and the other is reducing acute inflammatory reactions [23]. For example, high-dose methylprednisolone has been used to treat acute SCI within 48 h after injury, but its side effects are serious and its therapeutic performance is limited [31]. Other strategies have been developed to repair and regenerate nerve tissue and restore its function. For example, the transplantation of stem cells and the stimulation of the proliferation and differentiation of endogenous NSCs for SCI repair have been reported, and clinical achievements have been obtained in protecting and repairing damage to the CNS [27,47]. Transplanted stem cells or activated endogenous NSCs help to repair damaged spinal cord nerve cells and play an important role in promoting SCI repair through immune regulation or cell regeneration. However, the success rate of stem cell transplantation at the clinical stage is very low, mainly because of the poor viability of the cells and their poor integration into spinal cord tissue [48].
A successful clinical method for the treatment of patients with chronic SCI is biomimetic epidural electrical stimulation (EES). For instance, Andreas and co-workers used biomimetic EES to restore standing, walking, cycling, swimming, and torso control in three patients with chronic paralysis within one day [43]. Two of the participants were able to regulate leg movement during EES treatment, indicating that the stimulation increased the signal of the remaining descending pathways. Biomimetic EES also achieved positive and continuous motion in the early stages of SCI and made full use of natural repair mechanisms to enhance the recovery of the nervous system. This technique opens a practical avenue for applying clinical therapies for the effective treatment of patients with severe SCI.
Hydrogel materials for SCI repair
The spinal cord is a soft, watery biological structure whose stiffness ranges from 3 to 300 kPa. As a kind of biological nanomaterial, hydrogels have unique advantages for repairing SCI owing to their high hydrophilicity and other physical properties. A previous study indicated that neurons were more mature and axon length was increased after using hydrogels, which made hydrogels more suitable for implantation after SCI and conducive to the regeneration of spinal cord tissue [49].
Hydrogels are highly hydrated materials composed of water molecules and hydrophilic polymer networks. Their injectability, inherent biocompatibility, cell interaction, hydrophilicity, permeability, and biodegradability make them suitable substrates for simulating natural molecular microenvironments. As shown in Fig. 1a, b, a recent review indicated that injectable hydrogels could be used for stem cell transfer, and that the selection of hydrogel materials is mainly based on the spatial structure as well as the tissue and cell reactions with the nanomaterials [50].
Hydrogels can not only be used as ideal scaffolds for nerve tissue engineering but can also provide biological microenvironments for electrical stimulation [51]. The injection of hydrogels into the injured sites of SCI has been proved to be a facile approach for drug delivery and the repair of SCI. In the case of SCI, the injectable nature of hydrogels provides a clinical advantage over other traditional treatments and is especially suitable for clinical minimally invasive SCI therapy [52]. Specific gels that simulate the CNS microenvironment have been utilized to improve the transplantation of exogenous stem cells and to support the survival of endogenous NSCs [53]. With good biocompatibility, hydrogels can form scaffolds in situ to fill the irregular shape of the defect tissue, eliminate the cavity after SCI, guide stem cell infiltration and matrix deposition, and create a complete implant-tissue interface to restore the continuity of the SCI tissue and achieve SCI repair [54,55].
Hydrogels with unique physical, chemical, and biological properties can be used for repairing SCI by delivering cells and drugs to the injured sites [14]. As shown in Fig. 2, porous and aligned structured hydrogels with high biocompatibility and biodegradability can support molecular mobility and the regeneration of linear axons within the hydrogels for SCI repair. In addition, their adjustable mechanical properties and minimally invasive delivery of cells and drugs make them attractive carriers for the pharmaceutical treatment of SCI, by which cells, drugs, and GFs can be loaded into hydrogels and then released at the SCI site. Compared with traditional drug delivery carriers, using hydrogels as drug carriers can promote the sustainable release of drugs or GFs and bypass the blood-spinal cord barrier [56,57]. Besides, owing to the doping of active GFs/drugs into a cross-linked hydrogel matrix via electrostatic interactions or chemical binding, the formed bioactive hydrogels exhibit better protection from enzymatic biodegradation and rapid deactivation [58].
Although hydrogels have many properties suitable for the repair of spinal cord injury, they still have some defects. Low mechanical stability, high cost, variability, and unfavorable immunogenicity remain obstacles to the application of hydrogels in SCI [59]. Therefore, the development of hydrogels with better properties and the continuous optimization of their biomedical applications are important steps toward broadening the application of hydrogels in the repair of spinal cord injury [60].
Fabrication of bioactive hydrogels
Bioactive hydrogels can be synthesized by cross-linking various biological components or by modifying polymer hydrogels with various biomolecules. In this section, we introduce the fabrication of bioactive hydrogels based on DNA, proteins, peptides, and biomass polysaccharides.
DNA hydrogels
DNA hydrogels have become a widely studied type of bioactive nanomaterial in biomedicine owing to their high biocompatibility, controllable properties, and packaging and delivery abilities [61]. For example, DNA hydrogels have shown excellent performance in drug/gene delivery, bone tissue engineering, and healthcare sensors. In particular, DNA hydrogels have been proved to be effective drug delivery platforms, as they can encapsulate and release drugs in a continuous and controlled manner [62].
Basu and co-workers reported the preparation of DNA-nSi nanocomposite hydrogels for applications in tissue engineering and drug delivery. The DNA-nSi hydrogels were prepared using simple heating and mixing techniques through a physical cross-linking network formed between DNA and silicate nanodisks (nSi) [63]. As shown in Fig. 3a, the gelation process consists of two steps. In the first step, DNA denaturation and re-hybridization formed hydrogen bonds between complementary base pairs of adjacent DNA chains. In the second step, nSi created an additional network through attractive electrostatic interactions with the DNA backbone, thereby enhancing the mechanical elasticity of the created DNA hydrogels. The thermal stability and mechanical properties of the formed DNA hydrogels could be adjusted by changing the concentration of nSi. The hydrogel exhibited good biocompatibility and sustained drug release properties, and it was shown that the hydrogels could regulate the release of the model drug dexamethasone (Dex). In a rat skull defect model, the DNA-nSi hydrogels proved effective in enhancing the osteogenic differentiation and bone formation of human adipose stem cells. This study presents a new method for the preparation of injectable hydrogels and provides a new choice for the applications of hydrogels in tissue engineering, medical device coating, and drug delivery.
Injectable self-healing hydrogels were introduced in another similar study, in which the hydrogels were fabricated from DNA, oxidized alginate (OA), and nSi [64]. As shown in Fig. 3b, the DNA-OA chains are connected by covalent bonds formed via the Schiff base reaction between the aldehyde groups of OA and the amino groups of the DNA nucleotides. The reversibility of this cross-linking reaction provided shear-thinning and self-healing properties for the formed DNA-OA network. In addition, the addition of nSi induced the formation of additional physical cross-linking sites, thereby enhancing the mechanical strength of the DNA hydrogels without affecting their self-healing properties or biocompatibility. The fabricated DNA-OA-nSi hydrogels acted as injectable carriers for the continuous delivery of a hydrophobic drug, with a release half-life of about 5 days, and showed no cytotoxicity. The results confirmed the bioactivity of the released drugs by testing their ability to induce osteogenic differentiation in vitro and the migration of human adipose-derived stem cells.

In addition, some DNA molecules with special functions can also be designed and prepared into hydrogels. For instance, Yata et al. designed a composite immunostimulatory DNA hydrogel consisting of a mixture of specific DNA sequences containing cytosine (C) and guanine (G) separated by a phosphate group (CpG) and gold nanospheres (AuNS) modified with DNA (hPODNA) [65]. As shown in Fig. 3c, ODN-modified AuNS were first synthesized, named AuNS-ODN (cg) and AuNS-ODN (gc), by adsorbing CpG or GpC oligodeoxynucleotides (ODN) onto the surface of the AuNS. Then, AuNS-ODN (cg) and hPODNA (cg) were mixed to form the AuNS-DNA composite hydrogels. In the experiment, EG7-OVA tumor-bearing mice were treated with the formed AuNS-DNA hydrogels under 780 nm laser irradiation, which significantly inhibited the growth of tumor cells and prolonged the survival time of the mice. The composite hydrogels had high biocompatibility and safety and could be cleared from the blood by the mononuclear phagocytic system. After laser irradiation, the hydrogels released DNA, stimulated immune cells to release pro-inflammatory cytokines, and induced a strong anti-tumor immune response.
In another study, Zhang et al. designed an injectable DNA hydrogel with chemotherapy function to address the problem of tumor recurrence [66]. As shown in Fig. 3d, camptothecin (CPT) was grafted onto the backbone of phosphorothioate DNA to form DNA-drug conjugate (DDC) chains, which were then assembled into Y-shaped, drug-loaded DNA hydrogels. Compared with traditional systemic chemotherapy, this drug-containing DNA hydrogel exhibited sustained and responsive drug release, which significantly inhibited the regeneration of tumor cells and prevented tumor recurrence [66]. Meanwhile, its local administration as a minimally invasive treatment can also avoid the organ damage caused by the toxicity of systemic chemotherapy. The designed hydrogel showed continuous and responsive drug release behavior, could infiltrate well into residual tumor tissue, and was absorbed effectively by cells. The design and preparation of this drug-containing DNA hydrogel provide a promising solution for local adjuvant therapy of tumors.
Protein hydrogels
Various protein hydrogels show good mechanical properties and high biocompatibility, both of which can be finely regulated by adjusting the synthesis conditions of the hydrogels [67,68]. The preparation of protein hydrogels is simple and feasible, providing functional biomaterials for tissue regeneration and stem cell therapy. In addition, protein hydrogels are injectable and self-healing, which makes them promising for various applications [69]. At present, a variety of proteins can be used as raw materials for the preparation of hydrogels, such as silk fibroin, zein, gelatin, elastin, and keratin [70,71]. This section mainly introduces hydrogels prepared from silk fibroin (SF) and its derivatives, as well as some protein hydrogels with special functions.

Fig. 3 The preparation process and structure diagram of bioactive DNA hydrogels: a DNA-nSi hydrogels. Reprinted from Ref. [63], Copyright 2018, American Chemical Society. b DNA-OA-nSi hydrogels. Reprinted from Ref. [64], Copyright 2020, Elsevier. c AuNS-DNA and AuNR-DNA hydrogels. Reprinted from Ref. [65], Copyright 2017, Elsevier. d CPT-DNA hydrogels. Reprinted from Ref. [66], Copyright 2020, American Chemical Society
For example, Wang et al. reported a method of introducing inert silk fibroin nanofibers (SFN) to form SF hydrogels in an enzymatic cross-linking system for regenerated silk fibroin (RSF) [72]. The mechanical properties of the formed SF hydrogel were tunable and could guide the differentiation behavior of stem cells. During the preparation process, RSF formed dityrosine bonds in the presence of horseradish peroxidase (HRP) and then cross-linked to form a hydrogel, in which SFN was embedded in the RSF hydrogel matrix to improve its mechanical properties. By adjusting the amount of added SFN, the stiffness of the SF hydrogel was regulated to about 9-60 kPa, much higher than that of the hydrogel without SFN (about 1 kPa).
Protein hydrogels prepared by combining SF as the main component with other bioactive materials exhibit enhanced biological functions. The Buitrago team studied a hybrid protein hydrogel composed of SF and collagen, which showed improved flexibility and tunability that the individual protein materials did not have (Fig. 4a) [73]. The mechanical and biological properties of the formed hydrogel were tailored by adjusting the ratio and concentration of SF and collagen, with stiffness ranging from 0.017 to 6.81 kPa. Biological tests with cells indicated that the hydrogel promoted cell growth, differentiation, and muscle cell formation. Besides, the hydrogel regulated the synthesis and distribution of the ECM, thereby better promoting cell regeneration and tissue repair. In a previous study, Raia and co-workers reported the development of composite hydrogels of SF and hyaluronic acid (HA) for tissue engineering applications [74]. SF and HA were covalently cross-linked under enzymatic reaction to form composite hydrogels, which revealed tunable mechanical properties and degradation ability. By adjusting the concentrations of SF and HA, the formed hydrogels exhibited a wide range of stiffness, from 10 kPa to slightly below 1 MPa. In addition, the designed SF-HA hydrogels revealed promising degradation ability, cytocompatibility, and elasticity, making them good candidates for long-term tissue engineering applications.

Fig. 4 Synthesis and structures of bioactive protein hydrogels: a SF-collagen composite hydrogels. Reprinted from Ref. [73], Copyright 2017, Elsevier. b Metal sulfide-protein hybrid hydrogels. Reprinted from Ref. [75], Copyright 2017, Wiley-VCH. c TA-PVA/BSA hydrogels. Reprinted from Ref. [76], Copyright 2018, American Chemical Society. d Mfp3 hydrogels formed by photochemical gelation. Reprinted from Ref. [78], Copyright 2018, American Chemical Society
In addition to SF, other proteins with special functions can also be constructed into bioactive hydrogels. Wang et al. proposed a method to construct composite hydrogels with injectable and self-healing properties through the formation of a dynamic protein-metal ion network [75]. As shown in Fig. 4b, metal ions were mixed with the protein under alkaline conditions to form a complex network based on the interactions between the metal ions and the cysteine residues of the protein. Nanocomposite hydrogels were then synthesized by the in-situ reduction of the metal ions into small-sized metal sulfide NPs. In the experiment, Bi3+ was added to bovine serum albumin (BSA) to form a Bi2S3-BSA hydrogel for the photothermal therapy of tumors. The Bi2S3-BSA hydrogel exhibited injectable and self-healing properties as well as high photothermal efficiency. The designed injectable, self-healing, and adaptable hydrogel showed promise for several biomedical applications, especially in tissue regeneration and stem cell therapy.
In another case, BSA protein was also used to build high-strength protein hydrogels through non-covalent interactions [76]. As shown in Fig. 4c, tannic acid (TA), BSA, and polyvinyl alcohol (PVA) were combined to form a TA-PVA/BSA hydrogel via physical cross-linking. A pre-hydrogel was first prepared from BSA and PVA by repeated freezing and thawing, and was then soaked in TA solution to form the cross-linked TA-PVA/BSA hydrogel. Compared with traditional hydrogels, the TA-PVA/BSA hydrogel revealed an ultrahigh tensile strength of up to 9.5 MPa, good water retention, and a layered structure similar to that of human skin. Furthermore, the hydrogel possessed tunable mechanical properties and anisotropy. These unique properties promote the biological applications of the designed protein hydrogels.
When stimulated by external or internal factors, such as metabolite concentration, pH value, light/UV sources, enzymes, osmotic pressure, magnetic/electric fields, temperature, redox reactions, and ultrasound irradiation, stimulus-responsive hydrogels exhibit significant changes in their swelling, degradation, rheological properties, release behavior, and mechanical performance. Therefore, by achieving and controlling these stimulus conditions, researchers are able to fabricate stimulus-responsive hydrogels with adjustable properties. Additionally, the use of protein precursors with stimulus-responsive functionality can also confer stimulus-responsive properties on hydrogels [77]. In a typical case, Liu et al. [78] presented the design of a protein hydrogel by photochemical cross-linking of recombinant mussel foot protein-3 (Mfp3), as shown in Fig. 4d. The mechanical properties of the designed protein hydrogel could be regulated by adjusting the protein concentration, the co-oxidant concentration, and the intensity of the light used for cross-linking during the preparation process. The protein hydrogel had good biocompatibility, supporting cell adhesion and proliferation, and could covalently immobilize leukemia inhibitory factor to activate the JAK/STAT3 pathway and induce neuronal growth. This material design, with folded protein domains and photochemical gelation, is beneficial for constructing bioactive materials for regenerative neurobiology [78].
Peptide hydrogels
Peptide hydrogels show high potential for biomedicine and are excellent bioactive materials for wound repair, cell culture, and drug/gene delivery [79]. To achieve better remote and precise control of hydrogel properties, researchers have proposed different strategies, including using peptides with special bioactive functions to construct multifunctional hydrogels, using photosensitive peptides to construct hydrogels, and using self-assembled biomimetic hydrogels [80].
For instance, Cheng et al. introduced a new type of peptide-protein hydrogel formed by cross-linking BSA, the K2(SL)6K2 polypeptide (KK), and silver ions (Ag+) [81]. The hydrogel was formed through S-Ag coordination cross-linking between the BSA protein, the thiol polypeptide K2(SL)6K2 (KK), and Ag+ (Fig. 5a). The formed KK-BSA hydrogel revealed good gelation, a rich porous structure, and self-healing properties. For wound healing, the Ag+ provided antibacterial function, and the KK peptide endowed the hydrogel with the ability to promote blood vessel growth. In vivo experiments in mice indicated that the KK-BSA hydrogel promoted considerable collagen deposition and vascularization in the early stage of wound healing, favoring the generation of newly emerging hair follicles. This peptide-protein hybrid hydrogel with antibacterial and vascularizing properties helped regenerate and heal infected wounds through the synergistic effects of a few components.
The self-assembly of photoactivatable peptides is a general approach to construct peptide hydrogels with spatial and temporal control. In a recent report, Xiang et al. proposed a new strategy of using photosensitive peptides to construct bioactive hydrogels, which were triggered under light irradiation to achieve remote and precise control of hydrogel properties. This strategy involved designing peptide molecules with high aggregation ability, charged amino acid sequences to prevent self-assembly in water, and photocleavable linkers to activate peptide self-assembly upon light irradiation [82]. As shown in Fig. 5b, a photo-responsive peptide modified with the gelling agent, a charged amino acid sequence, and a 2-nitrobenzyl (NB) ester photocleavage group was designed to activate peptide self-assembly under light irradiation. The designed peptide formed bioactive hydrogels in neutral aqueous solutions under UV irradiation, which opened up the possibility of mimicking the ECM and showed potential applications in cell culture and tissue engineering.
Self-assembled peptide hydrogels are useful for drug delivery. Nguyen et al. used self-assembling peptides to prepare biomimetic hydrogels, which promoted the regeneration of dental pulp stem cells [83]. As shown in Fig. 5c, the self-assembling peptide mainly contains a β-sheet-forming segment and an ECM phosphoglycoprotein-mimic sequence at the C-terminus. The presence of hydrophilic and hydrophobic residues enabled the peptide to self-assemble into β-sheet-stacking nanofibers. The biodegradability and injectability of the formed peptide hydrogels could be tailored by adjusting the solution pH. Meanwhile, the fabricated hydrogels revealed favorable rheological properties, making them easy to inject into the injured sites to promote the survival and proliferation of autologous stem cells and the formation of dental bone.

Fig. 5 Peptide hydrogels: b Photosensitive peptide hydrogel via self-assembly. Reprinted from Ref. [82], Copyright 2023, American Chemical Society. c ECM protein-mimic peptide hydrogel. Reprinted from Ref. [83], Copyright 2018, American Chemical Society. d Self-assembly and gelation pathways of β-sheet-forming peptides. Reprinted from Ref. [84], Copyright 2022, Royal Society of Chemistry
In another work, Elsawy and co-workers explored the potential application of self-assembled peptide hydrogels for drug delivery using five β-sheet peptides (F8, FK, FE, F8K, and KF8K) with different physicochemical properties [84]. As shown in Fig. 5d, the self-assembly pathways and the doping of the drug (Dox) into the hydrogels are presented. Their results indicated that the ion-π and π-π interactions between the drug and peptide nanofibers affected the release of Dox. In addition, the created peptide hydrogels exhibited broad susceptibility to enzymatic degradation, which could be exploited to control the degradation rate. Moreover, the Dox released from the hydrogels was pharmaceutically active and could affect cell growth. Their study demonstrates the potential of self-assembled peptide hydrogels as a platform for drug delivery.
Biomass polysaccharide hydrogels
Biomass polysaccharides can also be used to construct hydrogel materials with a wide variety of types and diverse structures, which have attracted great attention in the fields of drug delivery and wound repair [85,86]. In the past few years, various types of polysaccharide hydrogels have been prepared through different methods, and their properties and applications in various fields have been explored. This section introduces the preparation methods, physicochemical properties, bioactivity, and applications of polysaccharide hydrogels.
Dutta et al. utilized 3D printing technology to fabricate a biodegradable hybrid hydrogel for bone tissue engineering by using alginate (Alg), gelatin (Gel), and cellulose nanocrystals (CNC), as shown in Fig. 6a [87]. In their experiment, the Alg/Gel/CNC hydrogel-based bioink was prepared by physical and Ca²⁺-induced chemical cross-linking, which showed enhanced mechanical properties compared with pure polymer scaffolds. The biocompatibility, cell differentiation, and bone regeneration ability of the printed scaffolds were evaluated using various assays, and the results showed that the 1% Alg/Gel/CNC hydrogel scaffolds revealed enhanced cell adhesion and proliferation, as well as mineralization and osteogenesis, compared to the control group. Their study provides a new approach to develop bioactive hydrogel materials for tissue engineering.
In another work, Fiorati et al. regulated the mechanical properties of 2,2,6,6-tetramethyl-1-piperidinyloxy (TEMPO)-oxidized cellulose nanofibers (TOCNFs) by adding inorganic nanoparticles, while keeping the injectability and bioactivity of the cellulose hydrogel (Fig. 6b) [88]. In their study, calcium phosphate (CaP) NPs were embedded into the injectable TOCNF hydrogel to induce mineralization and form hydroxyapatite layers for bone tissue regeneration. The formed CaP-TOCNF hybrid hydrogel exhibited good stability, high injectability, and biological activity, as well as excellent biocompatibility, providing valuable insights into the design and synthesis of natural polymer-based hydrogels for tissue engineering applications.
Shah and co-workers developed the synthesis of an injectable hydrogel from chitosan (CTS), carboxymethylcellulose (CMC), and PF127 (Pluronic® F127) using the solvent casting technique, which was further loaded with curcumin (Cur) to promote diabetic wound healing [89]. The fabricated injectable CTS-CMC-g-PF127 hydrogel exhibited good mechanical properties, rheological properties, and thermal responsiveness. In addition, the biotests indicated that the created hybrid biomass hydrogel revealed a better ability for diabetic wound healing by promoting tissue regeneration, inhibiting inflammatory cells, and increasing angiogenesis. In a similar case, Rakhshaei and co-workers used citric acid as a cross-linking agent to fabricate a flexible nanocomposite hydrogel of CMC, ZnO-modified mesoporous silica (MCM-41), and tetracycline (TC) for wound dressing (Fig. 6c) [90]. Due to the use of the antibiotic TC and the sustained delivery ability of MCM-41, the created hydrogel relieved wound pain and promoted wound healing.
Composite hydrogels
Besides the above-mentioned biomolecules used for the fabrication of bioactive hydrogels, composite hydrogels are also widely used in the field of biomedicine [91]. In recent years, researchers have conducted in-depth studies on the preparation and functionality of composite hydrogels, which has continuously promoted the development of their applications [92].
Xu and co-workers reported the design and synthesis of a functional hybrid polydopamine (PDA) hydrogel by conjugating PDA and copper-doped calcium silicate (Cu-CS), forming the PDA/Cu-CS composite hydrogel [93]. As shown in Fig. 7a, Cu-CS was synthesized using a sol-gel method; it further oxidized DA to PDA, while PDA complexed with the Cu²⁺ released from Cu-CS. The created hydrogel exhibited multiple functions, including photothermal reactivity, antibacterial ability, angiogenesis mediation, cell proliferation, bio-adhesion, and self-healing. In another study, Liu et al. developed an injectable PEGylated-chitosan (PEG/CTS) hydrogel loaded with TiO₂ NPs (Fig. 7b) [94]. The addition of TiO₂ NPs improved the physicochemical and biological properties of the PEG/CTS hydrogel. The synthesized composite hydrogel exhibited an improved compression modulus, better swelling performance, enhanced adhesion to cardiomyocytes, and tissue repair function. Therefore, their study provides a promising approach for the development of highly efficient patch repair materials for cardiac tissue with superior bioactivity and mechanical properties.
Composite hydrogels based on natural polymers have been widely used in the repair and regeneration of biological tissues due to their high structural similarity to biological tissues. Li et al. developed HA-based hybrid hydrogels using sodium hyaluronate and CNCs as the linking substrates, which showed sufficient strength and self-healing ability to accelerate skin wound healing [95]. As shown in Fig. 7c, aldehyde-modified sodium hyaluronate (AHA), hydrazide-modified sodium hyaluronate (ADA), and aldehyde-modified cellulose nanocrystals (oxi-CNC) were dynamically cross-linked via a double-barreled syringe. The hydrazone bonds promoted the in-situ formation of hydrogels. Their study provides a good example for the development of drug-loaded self-healing hydrogels.
In another study using hydrogels to repair biological tissues, Han et al. used methacrylic anhydride (MA) to chemically modify Gel to obtain photo-cross-linkable GelMA, which was then further mixed with polyacrylamide (PAM) to form the GelMA-PAM composite hydrogel under 360 nm UV irradiation (Fig. 7d) [96]. The synthesized composite hydrogel showed good mechanical properties and thermal stability, and could be applied for cartilage repair in organisms. In addition, in vitro cell culture tests proved that the hydrogel had good biological activity and could promote the proliferation and growth of chondrocytes.

Fig. 7 Composite hydrogels: a PDA/Cu-CS composite hydrogel. Reprinted from Ref. [93], Copyright 2020, American Chemical Society. b PEG/CTS hydrogels loaded with TiO₂ NPs. Reprinted from Ref. [94], Copyright 2018, Elsevier. c Self-healing HA nanocomposite hydrogel. Reprinted from Ref. [95], Copyright 2022, American Chemical Society. d GelMA-PAM hybrid hydrogel. Reprinted from Ref. [96], Copyright 2017, Royal Society of Chemistry
To make this clearer, the fabrication of bioactive hydrogels used for SCI repair is described in detail above, and the contents are summarized in Table 1.
Functional regulation of bioactive hydrogels
In this section, the regulation of the biological functions of hydrogels, including cell differentiation, self-healing, anti-bacterial activity, injectability, bio-adhesion, biodegradation, and other multi-functions, via various strategies is introduced and discussed.
Cell tissue behaviors
The speed of tissue repair after SCI is determined by the differentiation and regeneration of cells. The differentiation and regeneration of spinal cord cells can be induced by adding GFs or bioactive drug molecules into the hydrogels. Especially in the process of vascular and nerve cell regeneration in the spinal cord, good coating by hydrogels can guide the differentiation and regeneration of nerve cells in all directions. Because of their good infiltration, permeability, and biocompatibility, hydrogels play an important role in vascular regeneration, guiding nerve differentiation, and promoting cartilage formation [97,98].
Hydrogels with high mechanical strength have a strong pressure-bearing capacity and swelling ability, which allows them to play a supporting role. Using this property of hydrogels, Zhao et al. developed a hydrogel with the ability to increase bone mass through self-expansion. In their study, a gelatin-hyaluronic acid hydrogel (GH) was prepared by double cross-linking of oxidized hyaluronic acid (HA-CHO) and tyramine-modified gelatin (GA-tyramine). A swelling-enhanced GHNbBG hydrogel was then prepared by adding niobium-doped bioactive glasses (NbBG) into the as-prepared hydrogel. The expansion of the GHNbBG hydrogel was beneficial to bone elevation, and new bone formed after the degradation of the hydrogel. Meanwhile, NbBG effectively promoted angiogenesis in the process of hydrogel expansion (Fig. 8a) [99].
The cells at the sites of SCI are often accompanied by inflammation. The reactive oxygen species (ROS) released by inflammatory immune cells not only cause the apoptosis of normal cells around the spinal cord, but also inhibit the regeneration of neural cells. Therefore, the removal of ROS produced by inflammatory cells is also a very important strategy for the repair of SCI. For example, Li and co-workers proposed the synthesis of a hydrogel that can encapsulate BMSCs and scavenge ROS [26]. As shown in Fig. 8b, the neuro-specific peptide (IKVAV) is covalently linked to the hydrogel formed by the cross-linking of a thioacetal-containing hyperbranched polymer (HBPAK) and methacrylated hyaluronic acid (HA-MA). Based on the good coverage and flexibility of the formed hydrogel, rat epidermal growth factor (EGF) and basic fibroblast growth factor (bFGF) were encapsulated by purely physical methods. This kind of hydrogel could promote the polarization of M2 macrophages, protect BMSCs from oxidation by ROS during bone marrow interstitial transfer, and accelerate axonal regeneration.
In the preparation of hydrogels for physiological tissue repair, the addition of therapeutic metal ions can accelerate tissue repair and treatment [100,101]. For example, in the work of Zhang et al., the introduction of Mg²⁺ into the formed hydrogels not only regulated cell behavior, but also promoted local bone tissue regeneration and repair [102]. Complexation between Mg²⁺ and acrylated bisphosphonate (Ac-BP) drove the co-assembly of Mg²⁺ and Ac-BP into Ac-BP-Mg²⁺ NPs. A photo-initiator was added to the mixed solution of methacrylated HA (MeHA) and Ac-BP-Mg²⁺ NPs to form hybrid hydrogels under photo-induced stimulation. In physiological tissue, the hydrogels exhibited the ability to release Mg²⁺ continuously, resulting in enhanced performance for bone regeneration and osteogenesis at the expected sites.
In the repair of SCI, nerve repair is one of the most important steps in the whole repair process. In the work of Zhou et al., a hydrogel for spinal cord repair was developed to suppress the differentiation of NSCs into astrocytes and to drive the differentiation of as many neurons as possible. As shown in Fig. 8c, gelatin methacrylamide (GelMA) hydrogels containing BMSCs (1 × 10⁷ mL⁻¹) and NSCs (1 × 10⁷ mL⁻¹) were synthesized through photo-encapsulation. The formed GelMA hydrogels showed enhanced neuronal differentiation in vitro, and promoted the differentiation of NSCs into neurons during in vivo SCI repair. Their results proved that the designed GelMA hydrogels loaded with BMSCs and NSCs significantly promoted neuronal differentiation and the recovery of motor function, exhibiting high application potential in SCI repair [103].
Self-healing property
Filling the SCI cavity with self-healing materials can provide bridges and carriers for the regeneration of NSCs, axons, and myelin sheaths, and create channels for the transmission of electrical signals in the spinal cord. Therefore, the regenerative microenvironment created by self-healing materials is beneficial to the repair of SCI [104,105]. In the process of SCI repair, self-healing hydrogels can effectively avoid the damage and wear caused during transportation and in harsh environments, and their self-repair ability ensures that the hydrogels retain their maximum therapeutic value throughout the treatment process.
Meanwhile, such hydrogels can better promote the repair of SCI [106,107]. The self-healing of hydrogels is often realized by dynamic chemical bonds. For example, a new type of xanthan gum-polyethylene glycol (XG-PEG) hydrogel was prepared through dynamic, pH-responsive, and biodegradable binding reactions in the work of Singh and co-workers [108]. As shown in Fig. 9a, under the action of dynamic covalent binding between PEG and XG, the created hydrogel exhibited excellent self-healing ability.
In the work of Luo et al., the dynamic π-π interaction between aromatic groups was used to obtain the self-healing ability of hydrogels [109]. The peptide IKVAV is a laminin-derived peptide that can promote the growth of axons in the spinal cord, and the fluorenylmethoxycarbonyl (Fmoc) group contains three fused rings with a strong π-π interaction. The π-π interaction of the peptide chain is enhanced by modifying the end of the peptide molecule with the Fmoc group. As shown in Fig. 9b, the FC/FI-Cur hydrogel was synthesized by adding curcumin (Cur) to the Fmoc peptide (FI) and Fmoc-grafted chitosan (FC) during the co-assembly process [109]. The dynamic and reversible π-π interactions gave the created FC/FI-Cur hydrogel good self-healing ability. More importantly, the Cur coated in the hydrogel could be released slowly and continuously, which helped to resist inflammation at the sites of SCI and promoted SCI repair.
In another study, Li and co-workers demonstrated the fabrication of self-healing AHA/DTP hydrogels by in-situ cross-linking of aldehyde-modified HA (AHA) and 3,3′-dithiobis(propionyl hydrazide) (DTP) through double syringes (Fig. 9c). The several dynamic covalent bonds in the AHA/DTP network enable the self-healing of the synthesized hydrogels. Meanwhile, the AHA/DTP hydrogels could bridge the injured sites of the spinal cord and promote its healing and repair through their self-healing ability, creating a favorable microenvironment for the growth of nerves and axons to promote the functional repair of SCI [110].

Fig. 9 Self-healing hydrogels for SCI repair: a XG-PEG self-healing hydrogel. Reprinted from Ref. [108], Copyright 2018, American Chemical Society. b Self-healing FC/FI-Cur hydrogel for treating SCI. Reprinted from Ref. [109], Copyright 2021, Elsevier. c Self-healing AHA/DTP hydrogel for repairing SCI. Reprinted from Ref. [110], Copyright 2022, Elsevier
Anti-bacterial and anti-inflammatory properties
Injured spinal cord is more prone to infection due to the destruction of the microenvironment and tissue exposure, which leads to other complications or slows down the repair and regeneration of SCI [111]. Therefore, the development of anti-inflammatory and anti-bacterial hydrogels for the repair of SCI helps to reduce the occurrence of various complications in the repair process [7]. In the preparation of anti-bacterial SCI repair hydrogels, the addition of anti-bacterial factors can greatly improve the antibacterial activity of hydrogels. Chitosan (CTS), polydopamine (PDA), metal nanoparticles, as well as graphene and its derivatives all have good anti-bacterial properties, revealing their potential importance for preparing functional hydrogels [112,113]. For instance, Gallardo et al. successfully introduced PDA into guanosine-boric acid (GB) to form a PGB hydrogel using 3D printing technology, which greatly increased the content of PDA in the hydrogel [114]. The fabricated PGB hydrogel exhibited an obvious fiber network structure, and the incorporation of PDA greatly improved the osteogenic activity and biocompatibility of PGB. In addition, the PGB hydrogel revealed good anti-bacterial activity. Compared with the GB hydrogel alone, PGB reduced bacterial adhesion and biofilm formation, and fundamentally inhibited bacterial growth.
In another case, Ou et al. reported combining the bone immunomodulatory and anti-bacterial abilities of hydrogels for accelerated bone tissue regeneration. In their study, a silver nanoparticles/halloysite nanotubes/gelatin-methacrylic acid (nAg/HNTs/GelMA) hybrid hydrogel was prepared by photopolymerization, as shown in Fig. 10a [115]. GelMA provides an environment similar to the natural extracellular matrix with good biocompatibility. nAg reveals excellent broad-spectrum anti-bacterial activity and low toxicity, and shows strong anti-bacterial and anti-inflammatory effects in the process of wound healing. Halloysite nanotubes (HNTs) are a kind of naturally occurring silicate nanotube with great potential in drug transport and bone tissue regeneration. Due to the synergistic effects of all components, the injured spinal cord was tightly wrapped after the introduction of the nAg/HNTs/GelMA hydrogel into the injured sites. The presence of HNTs strengthened the electrostatic interactions between the hydrogel and nAg, which maintained long-term and comprehensive antibacterial activity. Meanwhile, HNTs regulated the bone immune system and promoted bone tissue regeneration. Therefore, the designed nAg/HNTs/GelMA hydrogel greatly relieved the inflammation at the SCI sites, effectively prevented bacterial infection, and accelerated the repair of SCI.
SCI can produce a very serious inflammatory microenvironment, which affects cell survival and proliferation and reduces the efficiency of SCI repair. In the work of Yuan et al., stem cells were used to enhance the adaptability and dynamics of the hydrogel and to repair SCI by remodeling the microenvironment of the injured sites [116]. As shown in Fig. 10b, a cell-adaptable neurogenic (CaNeu) hydrogel was developed as the carrier of adipose-derived stem cells (ADSCs) (1 × 10⁷ cells mL⁻¹); the CaNeu hydrogel loaded with ADSCs formed a dynamic permeable network and solved the problem of ADSC apoptosis in the inflammatory environment by inducing the polarization of macrophages to form an anti-inflammatory microenvironment. In a recent work [117], Li and co-workers developed a spinal cord hydrogel patch, which revealed good anti-bacterial, anti-inflammatory, and analgesic effects. The fabricated hydrogel patch effectively inhibited the expression of tumor necrosis factor, and its good biocompatibility expanded its broad applications in SCI repair and the inhibition of postoperative infection.

Fig. 10 Anti-bacterial and anti-inflammatory properties of hydrogels: a nAg/HNTs/GelMA for preventing bacterial infection and promoting bone tissue regeneration. Reprinted from Ref. [115], Copyright 2020, Elsevier. b ADSCs-loaded CaNeu hydrogel for the formation of an anti-inflammatory microenvironment. Reprinted from Ref. [116], Copyright 2021, Elsevier
Injectable ability
In the injured spinal cord, the cavity shape of the wound is usually irregular. The shape and strength of hydrogels can accommodate SCI of different traumatic depths, and in particular the injectability of hydrogels allows them to tightly fill the SCI cavity. Whether in drug release, adhesion, or promoting cell regeneration, injectable hydrogels can deliver personalized treatment [118-120]. In addition, injectable hydrogels are also well suited for minimally invasive surgery, and bioactive hydrogels with good fluidity and injectability can enter and infiltrate the injured sites through a syringe, which benefits the repair of SCI in an easy way [121].
For instance, Zhou and co-workers synthesized a hydrogel using a peptide and poly(ethylene glycol) diacrylate (PEGDA) by an in-situ Michael addition reaction [122]. First, the amino terminus of the peptide KYIGSRK was coupled with ibuprofen to form Ibuprofen-KYIGSRK, in which the lysines at both ends of the peptide connected two PEGDA polymer chains through the Michael addition reaction to form an injectable PEGDA hydrogel network. In the KYIGSRK sequence, YIGSR promoted cell adhesion and nerve terminal growth. The ibuprofen on the peptide played an anti-inflammatory role and promoted the regeneration of neurons. In addition, ibuprofen combined with the peptide reduced random diffusion at the SCI sites through their synergistic effects. This injectable hydrogel was synthesized by the in-situ reaction without adding any catalyst, showing the advantages of good biocompatibility, anti-inflammation, and controllable drug release, which provides a facile strategy and new idea for treating irregular SCI in a minimally invasive way.
In the process of repairing SCI, persistent inflammation is the root cause hindering cell regeneration, so solving the problem of inflammation at the SCI sites is helpful for rapid SCI repair. In the study of Wang et al., extracellular vesicles (EVs) were compounded into poly(D,L-lactide)-poly(ethylene glycol)-poly(D,L-lactide) (PLEL) to form a PLEL/EVs hybrid hydrogel. EVs derived from M2 microglia can reduce inflammation and promote nerve regeneration, and the formed hydrogel was useful for resolving inflammation in the process of SCI repair. The synthesized PLEL/EVs hydrogel showed sufficient fluidity to enter the injured sites of the spinal cord through a syringe, as indicated in Fig. 11a. In addition, the PLEL/EVs bioactive hydrogel exhibited a sensitive temperature response; it rapidly gelated and wrapped the injured sites at body temperature, promoting nerve regeneration and accelerating SCI repair [8]. In another work, Chen et al. reported the design and synthesis of injectable SF/DA composite hydrogels by the auto-polymerization of silk fibroin (SF) and dopamine (DA). As shown in Fig. 11b, the fabricated SF/DA bioactive hydrogels had good injectability, and could be used as potential materials for tissue adhesion, hemostasis, and other medical applications. Meanwhile, the addition of DA into the hydrogels provided the possibility for the repair of SCI cells, and played a good role in promoting axon growth and cell differentiation [123].
Biological adhesion
The complete covering of damaged tissues at the injured sites and close contact with the broken ends of nerves, blood vessels, and muscles are still big problems for repairing SCI [124]. Usually, a free repair material cannot attach to and repair the injured sites well in the complex microenvironment of SCI. Therefore, the preparation of repair materials with a certain adhesion ability to the SCI sites is a key step in promoting the repair and treatment of SCI [125,126]. The good coating ability of bioactive hydrogels can achieve close contact with the SCI sites to accelerate the repair of the injured spinal cord. Through the modification of hydrogels to increase their biological adhesion ability, the injured spinal cord can be wrapped more closely and repaired continuously and stably [127].
By enhancing the adhesion of hydrogels, it is possible to create a favorable environment for the proliferation, differentiation, and growth of cells in the injured spinal cord, which can effectively shorten the time of tissue repair and accelerate healing. For instance, Cai et al. successfully improved the adhesion and proliferation of NSCs in the spinal cord by photo-fixation, which provided a good site for neuronal regeneration, produced neuronal tissue, and sped up the repair of SCI [128]. The smooth surface of hydrogels often fails to provide adhesion sites for cells or proteins. In the work of Staubitz et al., the problem of poor adhesion of hydrogels was solved by adding adhesion proteins into the hydrogels [129]. The thiol groups of the protein combined with the maleimide groups of poly(hydroxyethyl methacrylate) (pHEMA) through the Michael reaction, and the addition of the protein into the hybrid hydrogels realized the biological functionalization of pHEMA and increased the biological adhesion ability of pHEMA hydrogels.
Liu and co-workers demonstrated a strategy to form bio-adhesive hydrogels in-situ at the SCI site. As shown in Fig. 12a, glycidyl methacrylated SF (SF-GMA), laminin-acrylate (LM-AC), and the photoinitiator LAP were injected into the SCI site, and the cross-linking ability of LAP was triggered by UV light irradiation, forming a SF-GMA/LM-AC hydrogel network entangled with the spinal cord tissue and stably wrapping the SCI site. Different from other physical adhesions, the SF-GMA/LM-AC hydrogels revealed strong adhesion and infiltration at the SCI sites, in which LM-AC promoted the differentiation and growth of spinal cord axons and enhanced the biological activity of the materials [130].
While ensuring the adhesion of hydrogels, elasticity and stretchability are also important for repairing SCI. In the work of Chen et al., bioactive hydrogels with good stretchability and adhesion were prepared to deliver GFs and drugs for SCI repair, which ensured close contact with the injured sites and promoted the differentiation of neurons and the repair of SCI [131]. As indicated in Fig. 12b, in the presented repair process, an oriented collagen-fibrin (Col-FB) hydrogel with an interacting network structure was prepared by electrospinning and an in-situ sequential cross-linking method. The fibrin network had good elasticity, and the formed hydrogel exhibited enhanced mechanical properties after conjugation with collagen. After that, stromal cell-derived factor-1α (SDF1α) and paclitaxel (PTX) were injected into the as-prepared Col-FB hydrogels by an electrodynamic fluid jet printing technique to form a concentration gradient from the middle to both sides. It was found that the Col-FB hydrogels exhibited excellent adhesion and tightly connected the ends of the SCI. The excellent tensile and mechanical properties of the hydrogels ensured a lasting connection between the Col-FB hydrogels and the injured sites. In addition, the concentration gradients of SDF1α and PTX in the Col-FB hydrogels showed continuous release in the injured spinal cord. Meanwhile, differentiated neurons migrated with the help of the Col-FB hydrogels and accelerated the repair of SCI nerves.
Previously, Yan and co-workers reported a new type of hydrogel as a biomimetic matrix to promote cell proliferation and adhesion [132]. As shown in Fig. 12c, peptides containing RGD ligands were co-assembled with SF to form SF-RGD hydrogels. The presence of RGD not only adhered the BMSCs to the hydrogels, but also realized the adhesion of the hydrogels to the SCI sites. Therefore, the designed bioactive hydrogels promoted the adhesion and proliferation of mBMSCs, and provided a biomimetic microenvironment for osteogenic differentiation.

Fig. 11 Injectable ability of hydrogels: a Injectable thermosensitive PLEL/EVs hydrogel for SCI repair. Reprinted from Ref. [8], Copyright 2022, Elsevier. b Injectable SF/DA hydrogel for SCI repair. Reprinted from Ref. [123], Copyright 2020, Elsevier
Biodegradation ability
Through the addition of biomolecules such as dopamine, polyvinyl alcohol, hyaluronic acid (HA), and others during the preparation of hydrogels, it is possible to synthesize hydrogels with good biocompatibility and biodegradation ability [133]. Biodegradable hydrogels solve the problem of what becomes of the material after the repair of physiological tissue. The degradation of such hydrogels can be triggered by pH, heat, light, and other stimuli [134,135]. In addition, the targeted release of drugs can be achieved through the degradation of hydrogels, enabling accurate treatment of local damage. Therefore, the development of biodegradable hydrogels is of great significance in the repair of SCI [136]. In the work of Shi et al., a biodegradable PEG-based hydrogel was designed and synthesized; in in vivo experiments in mice, the hydrogel was degraded within 2-8 weeks and excreted through the spleen and liver [137]. In another work, Xu et al. used the degradation of hydrogels to achieve drug release. The biodegradable hydrogel could be completely degraded in 7-8 weeks, with the drug released slowly during degradation [138]. In the work of Xu and co-workers, PDA-modified germanium phosphide (GeP) nanoparticles (GeP@PDA) were incorporated into DA-grafted HA hydrogels (HA-DA) to prepare degradable hydrogels (HA-DA/GeP@PDA) with good electrical conductivity [139]. GeP@PDA formed a good electronic network in the HA-DA/GeP@PDA system, which enhanced the electrical conductivity of the composite hydrogels. The synthesized hydrogels promoted immune regulation, endogenous angiogenesis, and the neurogenesis of neural stem cells.
Li and co-workers also synthesized a biodegradable conductive hydrogel scaffold for the repair of SCI. The synthesized degradable hydrogel realized a sol-gel transformation under the control of temperature, which was beneficial for injection and in-situ gelation at the site of SCI. Based on this design, bioactive substances, cells, and drugs can be loaded into the hydrogels by simple injection. As shown in Fig. 13a, cabazitaxel (Cab)-loaded micelles (Cab-M) were mixed into the thermosensitive hydrogels through in-situ synthesis. The Cab-M/H hydrogel gelated in-situ at the site of SCI in mice. After 8 weeks of treatment, it was found that the injured site had healed obviously and the Cab-M/H had degraded. The presence of Cab effectively promoted the growth of neurons. In addition, the degradable Cab-M/H revealed less invasiveness and could continuously release Cab to achieve effective SCI repair [140].
Multi-functions and coordination
Usually, the microenvironment of SCI is very complex, so various factors should be considered in the process of SCI repair, such as inflammation, nerve repair, cell regeneration, anti-bacterial activity, tissue healing, and others [141]. Hydrogels with a single treatment parameter can hardly achieve a satisfactory repair effect, and the comprehensive regulation and treatment of the microenvironment in SCI is a very promising treatment method [15]. For instance, in the work of Liu et al., conductive hydrogel scaffolds (ICH/NSCs) loaded with exogenous NSCs were assembled from amino gelatin (NH₂-Gelatin) and aniline tetramer-grafted oxidized hyaluronic acid (AT-OHA). As shown in Fig. 13b, the ICH/NSCs showed good injectability and electrical signal conduction, which effectively induced the differentiation of NSCs and inhibited the formation of scar tissues. At the same time, the good degradability and self-repair ability also accelerated the efficiency of SCI repair [142]. Mesenchymal stem cells (MSCs) can promote the repair of SCI by guiding neuronal differentiation, inhibiting scar tissue formation, and promoting axon growth [143]. EVs derived from MSCs can improve the spinal cord microenvironment by mimicking cell paracrine secretions and have a better regulatory effect than MSCs [144,145]. Therefore, Wang et al. used MSC-derived EVs instead of MSC transplantation to regulate the microenvironment of SCI to promote cell regeneration and differentiation. In order to achieve long-term preservation and controlled release of EVs in SCI tissue, an anti-inflammatory F127-polycitrate-polyethyleneimine (FE) hydrogel with cell adhesion and injectability was developed. The FE hydrogel achieved long-term and sustained release of EVs into the spinal cord [25]. As shown in Fig. 13c, F127 and polycitrate-polyethylene glycol-polyethyleneimine (PCE) were connected by hydrogen bonds between the polymers to form FE hydrogels. Positively charged PCE was then combined with EVs to form the FE/EVs hydrogel network through electrostatic interactions. Due to its good adhesion, the FE/EVs hydrogel could be injected into the SCI sites to form a dense package. With its good biocompatibility, FE not only promoted skeletal muscle regeneration and inhibited the production of inflammation in the injured environment, but also provided a good carrier for EVs. Therefore, the designed FE/EVs hydrogel was useful for controlling the continuous release of EVs, promoting neuronal differentiation and axon formation, and contributing to the recovery of motor function. In this case, the FE/EVs hydrogel exhibited the advantages of good injectability, anti-inflammatory activity, high adhesion, and regeneration ability, which effectively promoted the repair of SCI.

Fig. 13 Multi-functions of bioactive hydrogels: a Biodegradable Cab-M/H hydrogel for healing the injured site. Reprinted from Ref. [140]
In the injured spinal cord, the loss of electrical signal transmission is one of the important factors inhibiting spinal cord regeneration. For this reason, Wang et al. developed a multi-functional polycitrate-based nanocomposite (PMEAC) hydrogel scaffold, which had biomimetic mechanical and electrical properties matching the spinal cord and could enhance the transmission of electrical signals in the injured spinal cord to promote its repair and regeneration. As shown in Fig. 13d, the PMEAC hydrogel was prepared by a simple self-crosslinking method using poly(citric acid-maleic acid)-ε-polylysine (PME) and multi-walled carbon nanotubes (MWCNTs) as precursors. The created PMEAC hydrogel scaffolds revealed multi-functional properties, such as injectability, self-healing, tissue adhesion, broad-spectrum antibacterial properties, and UV light shielding. A transmission channel for electrical signals was built across the SCI, which was beneficial to the repair of motor nerves and the recovery of motor function. In addition, the natural antibacterial activity of polylysine could effectively resist the invasion of bacteria and reduce the occurrence of inflammation in the injured spinal cord. Therefore, the PMEAC hydrogel scaffolds regulated the microenvironment of nerve regeneration by inhibiting the inflammatory response and through anti-bacterial activity, which effectively promoted motor function recovery and myelin/axon regeneration after SCI. This is a safe and effective biomimetic electrical signal recovery strategy to promote SCI repair and regeneration [146].
To make this clearer, the applications of functional hydrogels in SCI repair are described in detail in this part, and the contents are summarized in Table 2.
Material-based bioactive hydrogels for SCI repair applications
In this section, bioactive hydrogels classified by their material components, including growth factor/drug-loaded, polymer-modified, nanoparticle-functionalized, and 1DM-incorporated hydrogels, are introduced and discussed for SCI repair applications.
Growth factors/drugs-loaded hydrogels for SCI repair
Among the available biomaterials for SCI repair, injectable hydrogels offer enormous advantages. They can act as an ECM at the damaged sites, provide a 3D scaffold for cell proliferation and migration, and provide a suitable local microenvironment for nerve tissue regeneration [147-149]. In recent years, the use of neurotrophic factors in the repair of SCI has received considerable attention, but the barriers to the delivery of such factors remain unresolved. Hassannejad et al. designed an amphiphilic peptide hydrogel for the delivery of brain-derived neurotrophic factor (BDNF) [150]. This hydrogel achieved a controllable and slow release of BDNF within 21 days while retaining the biological activity of BDNF, which solved common problems in the delivery of GFs. In addition, the hydrogel self-assembled from amphiphilic peptides containing the IKVAV sequence, which effectively promoted neurite outgrowth and induced cell function and neural tissue regeneration. The obtained results showed that, 6 weeks after implantation of the functionalized hydrogel, the injury site not only produced no inflammatory response, but also showed attenuated astrogliosis after SCI, enhanced axonal preservation, and a permissive environment for cell migration and growth.
Alizadeh and co-workers designed a CTS-based injectable hydrogel. In their work, nerve growth factor (NGF)-overexpressing mesenchymal stem cells (hADSCs) (1 × 10⁵ cells mL⁻¹) were encapsulated in a chitosan/β-glycerophosphate/hydroxyethyl cellulose (CTS/β-GP/HEC) hydrogel [151]. hADSCs alone and hADSC-encapsulated hydrogels were injected into mice, and the recovery of the injured parts was observed. The study showed that the hydrogel, as a 3D scaffold, effectively inhibited the migration of hADSCs, promoted the expression of NGF, and provided a suitable local microenvironment for the survival and proliferation of hADSCs, thus effectively improving the recovery of motor function in mice. In another similar case, Wu et al. developed injectable peptide-based hydrogel microspheres for loading and delivering platelet-derived growth factor BB (PDGF-BB) to the injured sites (Fig. 14a). The PDGF-BB mimic peptide hydrogel not only had the advantages of good biocompatibility and high water content, but also effectively activated PDGF receptor β and promoted axon regeneration [152]. In addition, the hydrogel maintained the proliferation of NSCs, protected neurons, and improved the inflammatory response in the presence of myelin extract. Besides, Xu et al. utilized decellularized tissue matrix (DTM) as a natural biomaterial and prepared a spinal cord-derived DTM hydrogel (DSCM) for spinal cord repair in piglets. It was found that DSCM retains many specific ECM components of the natural spinal cord, and its hydrogel has a nanofibrous structure that mimics the ECM, providing a regeneration-promoting microenvironment for the treatment of spinal cord injuries. The results demonstrated slightly improved functional recovery at the site of spinal cord injury in just four weeks after DSCM hydrogel treatment. Therefore, the DSCM hydrogel can be used as an ECM-mimicking microenvironment to promote the enrichment, proliferation, and differentiation of NSPCs [153]. Nonhuman primates and humans share many features in terms of neural architecture and the organization of physiological processes, so nonhuman primate models of SCI can provide predictions of the safety and application potential of human SCI repair treatments.
In the work of Rao et al., a chitosan hydrogel was used as a bioactive material matrix to load neurotrophic factor 3 (NT3) and achieve a slow and sustained release of the neurotrophic factor into the environment after implantation. In a therapeutic experiment in a rhesus monkey spinal cord hemisection SCI model, it was found that the implantation of the NT3-chitosan hydrogel was able to induce robust axon regeneration, and long-distance axon regeneration was also observed. In addition, enhanced neurogenesis and the formation of nascent relay neural networks by endogenous stem cells were also involved in the recovery of sensory and motor functions, and the anti-inflammatory function of chitosan inhibited secondary lesions. The synergistic effect of NT3 and chitosan may be the key to promoting the robust regeneration of axons [154]. Reducing the formation of glial scars and promoting neuron regeneration are effective ways to achieve breakthroughs in the treatment of SCI [155,156]. In addition to GFs, nerve regeneration and protective drugs are also commonly used for repair after SCI, such as Cur, minocycline hydrochloride (MH), chondroitinase ABC, and VX-210 (Cethrin). In the work of Nazemi et al., a hydrogel for dual-drug delivery was constructed. The nerve regeneration drug PTX was encapsulated into poly(lactic-co-glycolic acid) microspheres and embedded into the hydrogel simultaneously with the neuroprotective agent MH [157]. Their study showed that, with the slow sustained release of the two drugs over eight weeks, the dual drug-loaded hydrogel treatment system effectively suppressed the inflammatory response of damaged tissue, increased neuronal regeneration, and reduced the degree of fibrotic scarring. A reduction in scar tissue was observed after 28 days, suggesting potential performance for SCI treatment.
To address tissue regeneration in completely transected SCI, Qi et al. developed another therapeutic system with dual drug delivery [10]. Two drugs, cetuximab and FTY720, were combined with NSCs (NSCs-cfGel) and delivered to the damaged area via an injectable hydrogel (Fig. 14b). The results showed that, in this therapeutic system, the two drugs synergistically promoted the proliferation and neuronal differentiation of NSCs, and reduced aberrant cell differentiation around the damaged area. In turn, the system reduced scar formation, promoted the reconstruction of the nerve fiber network, and also played a positive role in improving the recovery of hindlimb movement. This combination of therapy and an injectable hydrogel delivery system provides a reliable idea for the in-situ repair of complete SCI [10]. In another case, Wang and co-workers reported a combined therapeutic system for multi-drug delivery to treat SCI [158]. Docetaxel (DTX) was combined with GFs to achieve the function of targeting damaged cells in the spinal cord. To improve the drugs' ability to penetrate the blood-spinal cord barrier (BSCB), liposomes were further added to a cold solution of a novel heparin-modified poloxamer (HP) with highly specific binding to acidic fibroblast GF. The in-situ self-assembly induced the formation of hydrogels with a 3D network structure in response to temperature, which not only enabled the targeted delivery of various drugs, but also realized the controllable release of drugs, improved the local microenvironment, and promoted the reconstruction of the ECM. The hydrogel was beneficial for axon regeneration and conducive to the recovery of signal conduction, providing an effective way for the clinical treatment of SCI.
Polymer-modified hydrogels for SCI
In SCI repair, a hydrogel system with excellent mechanical properties and good rheology fills the post-injury cyst cavity and supports cell migration and axon outgrowth by replicating the complex structure of the ECM [159]. Polymer systems with excellent rheological properties have received widespread attention; they can transition between sol and gel states in response to stimuli or shear force, and are thereby able to fill irregular and multi-shaped cavities. This method is more direct and effective than the direct implantation of a hydrogel [160]. In addition, such responsive hydrogels can also be used as carriers to deliver therapeutic agents to promote the rapid repair of SCI.
Thermosensitive hydrogels of poly(N-isopropylacrylamide) (PNIPAAm) exhibit good potential for SCI repair due to their ability to undergo a phase transition at a lower critical solution temperature [161,162]. Based on the temperature-induced phase transition of PNIPAAm, Bonnet et al. added PEG to increase the hydrophilicity of the hydrogel. The physically cross-linked copolymer was injected into the area of the SCI, and within minutes the copolymer formed a hydrogel. The PNIPAAm-g-PEG hydrogel did not induce a significant inflammatory response in vivo and produced significant locomotor improvement [163]. In another case, Zhang et al. reported a thermosensitive poly(D,L-lactide)-poly(ethylene glycol)-poly(D,L-lactide) (PDLLA-PEG-PDLLA) hydrogel as a carrier of EVs derived from M2 microglia, which provided sufficient fluidity for hydrogel injection [8]. After entering the body, it quickly gelated as the temperature increased. This thermosensitive hydrogel system slowly and continuously released the loaded EVs at body temperature, thereby inducing M2 polarization to reduce local inflammation and promote BSCB repair. In another case, An et al. successfully prepared hydrogels combining the natural polysaccharide agarose with thermosensitive PNIPAAm [164]. Au NPs and bone marrow stem cells (MSCs) (3 × 10⁵ cells mL⁻¹) were embedded into the hydrogel matrix; the incorporation of Au NPs and the porous morphology of the composite hydrogel significantly improved cell proliferation and adhesion, contributed to neural regeneration, suppressed inflammatory responses, and played an active role in postoperative motor recovery. Compared with chemically cross-linked hydrogels, polymer hydrogels cross-linked by physical interactions not only have excellent biocompatibility and do not produce inflammation during SCI repair, but can also exhibit unique self-healing properties and shear-thinning behavior, which undoubtedly provides an attractive strategy for injectable SCI repair hydrogels [165-167].
Luo et al. reported an injectable, self-healing, conductive polymer hydrogel for enhanced tissue repair after SCI. Borax-functionalized oxidized chondroitin sulfate (BOC), BOC-doped polypyrrole (BOCP), and gelatin (Gel) were mixed under physiological conditions. Negatively charged BOC and positively charged BOCP were closely connected by electrostatic interactions, and the amino groups in the Gel were conjugated with the aldehyde groups in BOC to form reversible Schiff base bonds [168]. The presence of such dynamic covalent and non-covalent cross-links endowed the hydrogel with excellent self-healing and injectable properties. In addition, in vivo experiments demonstrated that the presence of polypyrrole (PPy) enabled the BOCPG hydrogel to act as a conductive bridge to promote endogenous neural stem cell migration and neuronal differentiation. It also induced the regeneration of myelinated axons at the injury sites by activating the PI3K/AKT and MEK/ERK pathways, and promoted the repair of the SCI sites.
To increase the rheological properties of 3D printing materials, Song et al. developed a composite bioink with good shear-thinning and self-healing properties. In the synthesis process, the in-situ redox polymerization of 3,4-ethylenedioxythiophene (EDOT) was carried out to form PEDOT in the presence of chondroitin sulfate methacrylate (CSMA) and tannic acid (TA)-doped gelatin methacrylate (GelMA), which was then mixed with PEGDA to form a precursor solution for the synthesis of the Gel/PEG bioink (Fig. 15a). The 3D hydrogel scaffold printed with this bioink provided a good biological microenvironment for NSCs [169]. Its good electrical conductivity effectively inhibited astrocyte generation in the scaffold, providing a promising strategy for fabricating engineered neural tissue scaffolds.
In addition, some natural small molecules have excellent antioxidant capacity and can eliminate ROS in cells, and polymer hydrogels prepared with them as components also have excellent SCI-repairing capacity [170,171]. TA is a natural polyphenol present in many plants with excellent antioxidant and ROS-scavenging properties [172]. In addition, TA can form a rich hydrogen bond network with various polymers to form elastic hydrogels through physical cross-linking. In the work of Wang et al., the phenolic groups of TA were connected with the oxygen atoms in the polyethylene oxide (PEO) chains of Pluronic F-127 (PF127) through hydrogen bonding interactions, so the two components formed a viscous and elastic gel structure after mixing [173]. This hybrid hydrogel based on natural small molecules and polymers exhibited good sealing and anti-oxidation properties, which effectively reduced the generation of ROS, thereby inhibiting the inflammatory response. α-Lipoic acid (LA) is a natural antioxidant capable of forming polymers through ring-opening polymerization. Conductive poly(lipoic acid-co-sodium lipoate) (PLL) hydrogels were prepared by one-pot ring-opening polymerization using LiCl as a conductive filler (Fig. 15b) [174]. When the molar ratio of LA to sodium lipoate (SL) was 1:0.475, the formed hydrogel revealed a loose and porous 3D network structure, which could accelerate nutrient penetration and promote cell growth. This PLL hydrogel with good adhesion and conductivity could effectively prolong the retention time of implanted materials, promote electrical signal transfer, reduce oxidative stress, regulate inflammation, inhibit glial scar formation, and promote axon growth and the rebuilding of synaptic structures, showing great application potential for the repair of damaged tissues. In addition, Sofloud et al. exploited Schiff-base bonds and particle interactions to develop an antioxidative and conductive hydrogel composed of polyaniline-grafted gelatin, oxidized alginate, and polyethyleneimine [175]. The composite hydrogel exhibited excellent electrical conductivity and antioxidant activity, and could effectively induce neural differentiation and promote tissue repair. In addition, the physical properties of the hydrogel can be changed by adjusting the ratio of components, so as to achieve injectability.

Fig. 14 Growth factor/drug-loaded hydrogels for SCI treatment: a PDGF-MPHM + NSCs hydrogel for SCI repair. Reprinted from Ref. [152], Copyright 2023, American Chemical Society. b Dual-drug NSCs-cfGel system for repairing SCI. Reprinted from Ref. [10], Copyright 2022, Elsevier
NP-functionalized hydrogels
Stem cell transplantation using biomaterial hydrogel scaffolds can serve as a very promising strategy for SCI repair. Bioactive hydrogels have excellent biocompatibility and serve as a bridge between injured tissue and normal tissue, providing a suitable microenvironment for the growth of endogenous and exogenous cells. In addition, in order to prevent the generation of ROS from hindering nerve regeneration during the treatment process, functional nanoparticles can be added into the hydrogel to inhibit the damage and promote regeneration [176,177].
Nanozymes are nano-biological materials with enzyme-like activity, which have excellent application advantages as ROS scavengers [178-180]. Using a BSA incubation strategy, Liu et al. successfully synthesized CeNPs with a uniform and small size, and dispersed them in GelMA to obtain hydrogels with ROS-scavenging ability (CeNP-Gel) (Fig. 16a) [181]. This injectable hydrogel system could effectively induce the integration and differentiation of NSCs, increasing the survival rate of cells by about 3.5 times.
MnO₂ NPs can catalyze the decomposition of H₂O₂ into H₂O and O₂ in the tumor microenvironment [182]. In the work of Li et al., albumin was used as a biotemplate to biomimetically mineralize MnO₂ NPs with good biocompatibility, which were dispersed in PPFLMLLKGSTR peptide-modified HA to obtain a bioactive hydrogel with good cell adhesion and neural tissue-bridging ability [183]. Experiments showed that HA can effectively inhibit the formation of neurotic scars. The incorporation of MnO₂ NPs, which can be excreted by circulating metabolism, induced the formation of longer nerve fibers and alleviated the generation of glial fibrillary acidic protein (GFAP)-positive astrocytes. CD31 labeling showed that the hydrogel system with MnO₂ NPs exhibited better angiogenesis 28 days after surgery, suggesting that MnO₂ NPs have a good synergistic effect in inhibiting glial scar formation and promoting nerve fiber regeneration.
Bifunctional PPy, with good conductivity and oxidation resistance as well as good biocompatibility as a conductive polymer, can be used as a conductive scaffold for promoting nerve tissue regeneration [184-187]. Wu et al. prepared PPy NPs with good dispersity based on a water-soluble PVA/iron ion system. The introduction of PPy NPs into the HA-collagen system successfully yielded conductive nanohydrogels with abundant cell adhesion sites (Fig. 16b) [188]. The incorporation of PPy NPs not only inhibited the increase of ROS, but also protected BMSCs from oxidative damage. In addition, through its excellent electrical conductivity, the hydrogel promoted the transmission of intercellular electrical signals (ES) and external ES to BMSCs, and further promoted the neuronal differentiation of BMSCs through the PI3K/Akt and MAPK signaling pathways to achieve enhanced tissue repair capabilities.
Similarly, Zhang et al. used a HA/collagen hydrogel as a substrate and endowed it with good magnetoelectric capability by decorating the hydrogel with Fe₃O₄@BaTiO₃ NPs. The hybrid hydrogel exhibited enhanced spinal nerve repair in the presence of an applied pulsed magnetic field [189]. Separately, Gao and colleagues investigated a hydrogel composed of CTS and HA, in which a conjugated complex of Au NPs and ursodeoxycholic acid (UDCA) was incorporated into the hydrogel to treat locally damaged areas under the irradiation of 808 nm NIR light [190]. The study showed that, in the center of the injured spinal cord, the Au NP-UDCA in the injectable hydrogel generated heat under NIR light irradiation, and effectively inhibited the production of inflammatory cytokines by macrophages when the local temperature reached 40 °C. This thermogenic approach exerts a pronounced anti-inflammatory effect on the damaged sites.
1DM-incorporated hydrogels for SCI
1DM-based functional hydrogels have been utilized in various fields due to their porous structure and enhanced mechanical properties. For the fabrication of 1DM hydrogels, nanoscale materials with 1D morphology, such as polymer fibers, CNTs, protein nanofibers, and peptide nanofibers, can serve as precursors to improve the properties and functions of hydrogels, which have exhibited great potential for SCI repair in recent years.
Hydrogels offer advantages for treating SCI by delivering drugs and genes to the injured sites to promote direct axon regeneration. Chew et al. reported the fabrication of 3D aligned nanofibrous hydrogels for controllable drug/gene delivery to treat SCI [191,192]. In their work, aligned poly(ε-caprolactone-co-ethyl ethylene phosphate) (PCLEEP) nanofibers were incorporated into the as-prepared collagen hydrogel to form composite hydrogels with an ordered architecture. It was found that the fabricated PCLEEP/collagen hydrogels not only imitated the size and architecture of the natural ECM, but also provided a versatile nanoplatform for the loading and delivery of drugs (such as neurotrophin-3) and genes (such as microRNA), allowing the regeneration of several axons in the process of SCI repair. In this biomimetic 3D architecture, both PCLEEP and collagen exhibited synergistic effects on SCI repair, in which the aligned PCLEEP mediated robust cell penetration in vivo and neurite infiltration, while collagen promoted cell adhesion and growth.

Fig. 15 Polymer hydrogels for SCI repair: a 3D bioprinted conductive composite hydrogel for SCI repair. Reprinted from Ref. [169], Copyright 2023, Elsevier. b PLL hydrogel and its potential application in SCI repair. Reprinted from Ref. [174], Copyright 2023, Elsevier
In a similar case, Li and co-workers demonstrated the fabrication of an injectable PCL-doped hyaluronic acid (HA) hydrogel for neural tissue repair and regeneration in the spinal cord [193]. As shown in Fig. 17a, the design of the hybrid hydrogel and the interfacial bonding between its components are as follows: thiolated HA, maleimide-modified PCL fiber (MAL-PCL), and PEG diacrylate were mixed together to form the hydrogel, and the cross-linking of HA to the fibers and of HA to HA stabilized the nanofibrous network structure. Owing to the addition of electrospun PCL fibers, it was possible to fabricate a composite hydrogel with a shear storage modulus of 210 Pa, similar to that of native spinal cord nervous tissue (50-600 Pa), making the created hydrogels potential candidates for repairing SCI. After injection of the composite hydrogel into the contused spinal cord, the designed hydrogels showed multiple functions for SCI repair, including inhibition of spinal cord collapse, mediation of the macrophage shift, and promotion of cell invasion, blood vessel formation, axon growth, and neuro-regeneration. Their study demonstrated a facile strategy to fabricate functional biocompatible hydrogels that mimic the spinal cord segment and microenvironment to facilitate the repair and regeneration of nervous tissues.
In the tissues, neurons are electrically responsive and can guide the transmission of electrical signals. Conductive hydrogels are beneficial for creating electrical signals in the tissues and promote the repair of SCI [142]. Due to their high mechanical strength and good conductivity, CNT scaffolds have been applied to improve the reconstruction of nerves by attracting stem cells to the injured tissues. Previously, Sang et al. reported the synthesis of a conductive, thermo-responsive poly(N-isopropylacrylamide) (PNIPAAM) hydrogel modified with self-assembled CNTs [194]. The formed CNT-PNIPAAM hydrogel was injectable and highly conductive, and could potentially promote the regeneration of nerve tissues and decrease the likelihood of scar tissue formation in the process of SCI repair. In another case, Liu and co-workers fabricated conductive hydrogels with tunable conductivities for SCI repair by using PEG-modified CNTs (CNTpega) and oligo(poly(ethylene glycol) fumarate) (OPF), as shown in Fig. 17b [195]. Due to its high conductivity, good biocompatibility, and composite structure, the fabricated OPF-CNTpega hydrogel promoted the attachment, proliferation, and neuronal differentiation of PC12 cells. In addition, by producing nerve conduits from the formed hydrogels through injection molding, several types of nerve conduits were prepared, which exhibited practical applicability in SCI repair for guiding the regeneration of axons, as the molds can easily be handled with tweezers. It was found that axons grew across the injured sites and reached the distal ends under the guidance of the hydrogel-based conductive conduits. This study provided a feasible biocompatible and conductive product that could be useful for clinical SCI repair.
In addition, 1D nanofibers with biocompatible properties, such as protein nanofibers and peptide nanofibers, are highly useful for the repair of SCI, owing to their natural compatibility, bioactivity, and multiple easily modified functional groups. Wang and co-workers have carried out a series of studies on the fabrication of aligned fibrin nanofiber hydrogels (AFG) with tailorable structures and functions for treating SCI [196][197][198][199]. In a typical case, they proved that hierarchical AFG with soft stiffness and an aligned, ordered structure could promote the regeneration of nerves both in vitro and in vivo [198]. Induced by the fabricated AFG, white matter with consecutive, compact, and aligned nerve fibers was regenerated, resulting in clear restoration of motor function after T12 SCI. In a very recent study, they further tailored the functions of the as-prepared AFG with N-cadherin to enhance central nervous system function and axon regeneration [199]. The modified AFG provided specific binding to NSCs and could direct NSC functions and nerve regeneration; the designed hydrogels could therefore carry exogenous NSCs for repairing SCI through cell retention, immunomodulation, neuronal differentiation, and integration with inherent neurons. Besides, silk fibroin nanofiber (SFN) hydrogels modified with nerve growth factor (NGF) have been presented for scarless SCI repair [200]. As indicated in Fig. 17c, aligned SFN hydrogels were formed under the action of an applied electric field and were then doped with NGF to form hybrid NGF-SFN hydrogels for enhanced neural differentiation and improved cell orientation and distribution. The created bioactive hydrogels provided a biomimetic microenvironment in vivo to guide the regeneration of scarless spinal cord, with a microstructure similar to that of the natural spinal cord, owing to synergistic physical and biological effects.
Fig. 17. 1DM-incorporated hydrogels for SCI repair applications: a electrospun MAL-PCL fiber-doped PEG-HA hydrogel for SCI repair. Reprinted from Ref. [193], Copyright 2020, Elsevier. b CNT-doped conductive PEG hydrogels for SCI repair. Reprinted from Ref. [195], Copyright 2018, Royal Society of Chemistry. c NGF-functionalized SFN hydrogels for scarless spinal cord repair. Reprinted from Ref. [200], Copyright 2022, American Chemical Society. d peptide nanofiber (PNF)-PEO AFG for spinal cord regeneration. Reprinted from Ref. [204], Copyright 2021, Elsevier
Peptide nanofibers obtained via motif design and molecular self-assembly are versatile nanoscale building blocks for various materials science and biomedical applications, including the repair of SCI [201]. For instance, NSCs and neural progenitor cells have been embedded into self-assembled peptide nanofiber (IKVAV/RGD) hydrogels for nerve regeneration [202]. In addition, brain-derived neurotrophic factor and drugs (chondroitinase ABC) [203] have also been applied to functionalize peptide nanofiber hydrogels for treating SCI. Recently, Man et al. presented a multi-modal delivery strategy for repairing SCI using an AFG/functional self-assembling peptide (AFG/fSAP) composite hydrogel. The fabrication strategy of the composite hydrogel and the SCI repair mechanism are shown in Fig. 17d. The application of AFG/fSAP for rat SCI repair indicated that the hydrogels could improve motor function recovery, facilitate axonal regrowth and angiogenesis, guide astrocyte migration, and promote remyelination [204]. In another study, Cao et al. prepared a hierarchical AFG with both aligned nanostructures and low elasticity, which can effectively promote nerve fiber regeneration in a rodent SCI model. In a follow-up application, AFG was also explored for the repair of a canine lumbar 2-segment hemisection spinal cord injury. The results showed that, after AFG implantation, its nanometer-to-millimeter-scale hierarchical arrangement endowed it with a unique guiding effect, enabling axonal regrowth in an oriented pattern connecting the rostral and caudal stumps. This significantly improved the recovery of motor function in dogs with SCI [197]. Although these hydrogel materials exhibited clear advantages and good performance in SCI repair, the mechanisms by which the individual components regulate the bioprocesses of SCI repair should be further investigated. In addition, more serious effort should go into adapting nanofiber hydrogels for clinical applications.
2DM-incorporated hydrogels for SCI
2DMs, such as MoS 2 , GO, rGO, MXene, and MOFs, have been widely used for the synthesis of various functional bioactive materials. 2DMs are potential candidates for SCI repair because of their large specific surface area, good biocompatibility, easy modification, potential conductivity, and catalytic activity [205].
Marques and co-workers raised the question "Is graphene shortening the path toward spinal cord regeneration?" in a recent review [206]. Based on a detailed analysis of numerous studies using graphene-based materials for SCI, they concluded that graphene-based materials play important roles in developing complementary SCI therapeutic approaches and in promoting neuroregeneration through enhanced neural cell-material interactions. In addition, they proposed that the use of graphene materials could shorten the path to clinical translation. Based on the unique physical and chemical properties of MoS 2 and GO, Chen et al. demonstrated a fabrication strategy for MoS 2 - and GO-functionalized PVA (MoS 2 /GO/PVA) composite hydrogels for repairing SCI [207]. As shown in Fig. 18a, MoS 2 /GO nanohybrids were first prepared by conjugating MoS 2 onto the surface of GO nanosheets, which were then mixed with PVA to form composite hydrogels via repeated freezing and thawing. Due to the addition of MoS 2 and GO, the created composite hydrogels exhibited good suppleness, high mechanical strength, and high electrical conductivity. After injection of the hydrogels into the SCI site, the differentiation of NSCs into neurons was induced and ROS were scavenged. Meanwhile, the composite hydrogels shifted macrophage polarization away from the M1 and toward the M2 phenotype, which effectively reduced inflammatory cytokines. In this case, the unique properties of the fillers, such as the high conductivity of MoS 2 and the good mechanical properties of GO, extended the potential of PVA hydrogels for repairing SCI.
In another case, Zhang and co-workers synthesized GO-functionalized, diacerein-terminated PEG hydrogels by exploiting the strong interactions between GO and diacerein, and further applied the formed hydrogels for repairing SCI [208]. They also showed that the electrical conductivity and 3D porous structure of the hydrogels accelerated the formation of functional neural networks and neural activity by mediating the migration of neural system cells and the remyelination of axons. Although GO-based hydrogels have been presented as effective candidates for repairing SCI, the signaling pathways involved in the repair process should be studied further at the molecular level.
Although GO has satisfactory biocompatibility for SCI repair, its low electrical conductivity can be improved by reducing GO to reduced GO (rGO) for enhanced neuroregeneration. In a typical study, Xue et al. demonstrated the design and fabrication of an electroconductive and highly porous rGO/xanthan gum (rGO/XG) hydrogel for repairing SCI, as shown in Fig. 18b. Benefiting from the use of rGO, the formed hydrogel exhibited high electroconductivity, which promoted the transmission of electrical signals, guided the ordered growth of regenerated nerve fibers, and inhibited the formation of glial scars [209]. In addition, in vivo tests with rats indicated that injection of the hydrogels into SCI sites helped restore the motor function of the rats. After reduction to graphene, however, the material exhibits lower biocompatibility, which can impair cell adhesion and growth. Therefore, combining graphene with highly biocompatible components to fabricate composite hydrogels is necessary. For instance, cross-linking graphene with collagen [209,210] or silk proteins [211] to synthesize biocompatible graphene-based hydrogels for repairing SCI has been reported.
Besides MoS 2 and graphene materials, other 2DM-based hydrogels have also been utilized for SCI repair applications, although only very few studies have been reported. For instance, Kong and co-workers reported the design and synthesis of multifunctional hydrogels by modifying GelMA hydrogels with MXene and AuNPs for SCI repair [212]. As indicated in Fig. 18c, GelMA hydrogels were first prepared, and MXene and AuNPs were then added into the as-prepared hydrogels to form MAu-GelMA composite hydrogels. After loading NSCs into the formed composite hydrogels, the hydrogels enabled a combined treatment for repairing SCI. First, the loaded NSCs can promote the recovery of SCI after injection of the hydrogels into the injured sites. On the other hand, the good electrical conductivity of both MXene and AuNPs promoted NSC differentiation and myelin regeneration, resulting in functional recovery after SCI. Such combined therapy with functional hydrogels could be an effective strategy for repairing SCI. In another study, conductive and biodegradable germanium phosphide (GeP) nanosheets were utilized to modify hyaluronic acid-dopamine (HA-DA) hydrogels for enhanced SCI repair, as shown in Fig. 18d. In vitro experiments indicated that the formed HA-DA/GeP@PDA hydrogels accelerated the differentiation of NSCs into neurons, and in vivo tests with rat models further proved that the hydrogels effectively improved the recovery of locomotor function in rats [11]. In addition, hydrogels based on other 2DMs, such as ZIFs, are also beneficial for repairing SCI, owing to their versatile molecular design, tailorable functions, high biocompatibility, and large potential in biomedicine and tissue engineering [213].
Fig. 18. 2DM-incorporated hydrogels for SCI repair applications: a MoS 2 /GO/PVA hydrogel for repairing SCI. Reprinted from Ref. [207], Copyright 2022, Springer Nature. b rGO/XG gel for repairing SCI. Reprinted from Ref. [209], Copyright 2022, Elsevier. c MXene and AuNPs-modified GelMA hydrogel for the recovery of SCI. Reprinted from Ref. [212], Copyright 2023, Elsevier. d 2D GeP@PDA-doped HA-DA hydrogel for enhanced repair of SCI. Reprinted from Ref. [11], Copyright 2021, Wiley-VCH
Clinical development of bioactive hydrogels
It is well known that currently no effective medical treatments can reverse the damage of SCI, as the repair of SCI is a complex process involving chemical, physical, and biological aspects. Traditional methods, such as surgery, drug therapy, and rehabilitation, have only limited effects on the repair of SCI [206]. Previous small-animal tests and clinical studies indicated that neuroprotective drugs, NSCs, and neuromodulatory stimulation are beneficial for promoting neuroregeneration in the SCI area, which has guided the direction of clinical SCI treatment development.
3D hydrogels with high bioactivity and biocompatibility can act as excellent vehicles for delivering NSCs and drugs for SCI treatment, and hold great potential for the clinical development of SCI repair. From the introduction and discussion above, we find that bioactive hydrogels have been widely used in preliminary laboratory studies on repairing SCI, and in most cases the designed bioactive hydrogels exhibited great performance for treating SCI, not only in in vitro cell tests but also in in vivo tests with small animals such as rats. However, very few efforts have advanced bioactive hydrogels into real clinical human trials. Several factors may be crucial obstacles to the clinical development of bioactive hydrogels.
First, the bioactivity, biocompatibility, and physical properties of bioactive hydrogels in large animals are currently unknown. Pre-fabricated hydrogels can mediate the regeneration of axons, but carry a high risk of damaging the spared neural tissue; injectable hydrogels are versatile and less invasive in filling the irregularly shaped lesion, but they can inhibit axonal regeneration [14]. In addition, the conditions in large animals are much more complicated than those in rats, and the chemical and biological reactions in the two cases may differ substantially [214].
Second, clinical trials usually require effective drugs. For SCI drug therapy with drug-loaded bioactive hydrogels, the cost will be very high: the average time and cost for developing a useful drug that can be approved by the FDA are about 10 years and 1 billion dollars. At present, it is not easy to find a highly effective, approved drug for repairing SCI. Previously, one bioactive drug, anti-NogoA, was tested as a potential candidate for the treatment of SCI in a phase I clinical trial [215], but it is still very far from real clinical success.
Third, the mechanisms by which bioactive hydrogels act on SCI repair are numerous and complex. As discussed in Section 2.1.1, four common mechanisms are involved, and in some cases multiple actions are jointly responsible for SCI repair. Without a fully clear understanding of these mechanisms, it is hard to bring hydrogels into real clinical trials.
Conclusion and perspectives
In summary, we have presented a comprehensive review of the design, synthesis, and functional regulation of bioactive hydrogels for repairing SCI. Based on the above introductions and discussions, several key conclusions can be drawn. Firstly, the development of materials science and nanotechnology has provided good opportunities for SCI repair studies. Various biomaterials are widely utilized in preliminary and pre-clinical studies of SCI repair, among which bioactive hydrogels show advantages such as a 3D porous structure, high biocompatibility, injectability, easy operation, and properties similar to those of the ECM. Secondly, bioactive hydrogels can be fabricated through chemical and physical cross-linking of various biomolecules, including DNA, proteins, peptides, biomass polysaccharides, and other types of biopolymers. These natural and synthetic biomolecules provide potential bioactivity and biofunctions to the synthesized hydrogels for repairing SCI. In addition, loading NSCs, drugs, GFs, and molecular active factors into the hydrogels adds further functions to the composite hydrogels, improving the repair efficiency of injected hydrogels at SCI sites. Thirdly, we demonstrated various methods for regulating the biological properties of bioactive hydrogels, such as cell biocompatibility, self-healing, antibacterial activity, bioadhesion, and biodegradation, which play crucial roles in promoting neuroregeneration at SCI sites and mediating the recovery of motor function. Finally, in SCI repair studies, the functions of bioactive hydrogels can be further regulated by introducing drugs/GFs, NPs, stimuli-responsive polymers, 1D materials, and 2D materials. These efforts provide additional electrical, optical, thermal, and enzymatic functions or properties to the fabricated bioactive hydrogels, greatly advancing SCI repair applications from preliminary toward clinical studies.
According to the above analysis, bioactive hydrogels have shown great potential for the repair of SCI in recent years. Here we would like to offer our viewpoints on the further development of bioactive hydrogels for treating SCI. First, more effort should be devoted to understanding the signaling pathways and the corresponding repair mechanisms of bioactive hydrogels in the process of SCI repair. Such theoretical studies can guide the design and synthesis of hydrogels with specific properties and functions for SCI. Second, the embedding and differentiation of stem cells in hydrogels remains challenging, although it could be one of the most effective ways to promote intrathecal transplantation and neuroregeneration. Therefore, new techniques for loading stem cells into hydrogels and creating suitable cell proliferation conditions should be developed. Third, in terms of material design, functional 2D materials such as black phosphorus, MOFs, COFs, and MXenes could be combined with bioactive hydrogels to provide catalytic, enzymatic, and electrical functions, enabling specific applications in repairing SCI. Fourth, new fabrication techniques, such as 3D and 4D printing, could be developed to fabricate hydrogels with hierarchical structures similar to the natural human spinal cord; meanwhile, the created hydrogels should maintain good injectability and flexibility to fill the injured sites. Fifth, the treatment of SCI with bioactive hydrogels should be combined with advanced therapy and monitoring techniques, such as biosensing, bioimaging, and physical diagnostics (for instance MRI and CT), for which functional NPs and molecular imaging agents should be applied. Finally, we suggest that it is necessary to develop safe and effective drugs for treating SCI in further clinical trials, as nearly all current SCI repair studies focus on pre-clinical work with traditional drugs and GFs.
On the geometry of standard subspaces
A closed real subspace V of a complex Hilbert space H is called standard if V intersects iV trivially and V + iV is dense in H. In this note we study several aspects of the geometry of the space Stand(H) of standard subspaces. In particular, we show that modular conjugations define the structure of a reflection space and that the modular automorphism groups extend this to the structure of a dilation space. Every antiunitary representation of a graded Lie group G leads to a morphism of dilation spaces Hom_gr(R^×, G) → Stand(H). Here dilation invariant geodesics (with respect to the reflection space structure) correspond to antiunitary representations U of Aff(R), and they are decreasing if and only if U is a positive energy representation. We also show that the ordered symmetric spaces corresponding to euclidean Jordan algebras have natural order embeddings into Stand(H), obtained from any antiunitary positive energy representation of the conformal group.
Introduction
A closed real subspace V of a complex Hilbert space H is called standard if V ∩ iV = {0} and V + iV is dense in H ( [Lo08]). We write Stand(H) for the set of standard subspaces of H. The main goal of this note is to shed some light on the geometric structure of this space and how it can be related to geometric structures on manifolds on which Lie groups G act via antiunitary representations on H.
Standard subspaces arise naturally in the modular theory of von Neumann algebras. If M ⊆ B(H) is a von Neumann algebra and ξ ∈ H is a cyclic separating vector for M, i.e., Mξ is dense in H and the map M → H, M ↦ Mξ, is injective, then the closure V_M of {Mξ : M* = M, M ∈ M} is a standard subspace of H. Conversely, one can associate to every standard subspace V ⊆ H in a natural way a von Neumann algebra in the bosonic and fermionic Fock space of H, and this assignment has many nice properties (see [NÓ17, §§4,6] and [Lo08] for details). This establishes a direct connection between standard subspaces and pairs (M, ξ) of von Neumann algebras with cyclic separating vectors. Since the latter objects play a key role in Algebraic Quantum Field Theory in the context of Haag-Kastler nets ([Ar99, Ha96, BW92]), it is important to understand the geometric structure of the space Stand(H). Here a key point is that it reflects many important properties of von Neumann algebras related to modular inclusions and symmetry groups quite faithfully in a much simpler environment ([NÓ17, §4.2]). We refer to [Lo08] for an excellent survey on this correspondence. In QFT, standard subspaces provide the basis for the technique of modular localization, developed by Brunetti, Guido and Longo in [BGL02].
Every standard subspace V determines, by the polar decomposition of the closed operator S defined on V + iV by S(x + iy) = x − iy, a pair (∆_V, J_V) of so-called modular objects, i.e., ∆_V is a positive selfadjoint operator and J_V is a conjugation, with S = J_V ∆_V^{1/2}. In Section 1 we discuss Loos' concept of a reflection space, which is a generalization of the concept of a symmetric space. Although symmetric spaces have played a central role in differential geometry and harmonic analysis for more than a century, reflection spaces never received much attention. As we shall see below, they provide exactly the right framework to study the geometry of Stand(H). Reflection spaces are specified in terms of a system (s_x)_{x∈M} of involutions satisfying s_x(x) = x and s_x s_y s_x = s_{s_x y} for x, y ∈ M.
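For later use, the following display collects the modular objects attached to a standard subspace V; these are standard facts of modular theory (cf. [Lo08], [NÓ17]) and merely fix our notation.

```latex
S_V(x + iy) = x - iy \ (x, y \in V), \qquad S_V = J_V \Delta_V^{1/2},
\qquad J_V \Delta_V J_V = \Delta_V^{-1}, \qquad \Delta_V^{it} V = V,
\qquad J_V V = V' := \{ w \in \mathcal{H} \colon \operatorname{Im}\langle v, w\rangle = 0 \text{ for all } v \in V \}.
```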
One sometimes has even more structure, encoded in a family (r_x)_{r∈R^×, x∈M} of R^×-actions on M satisfying r_x(x) = x, r_x s_x = (rs)_x and r_x s_y r_x^{-1} = s_{r_x y} for x, y ∈ M, r, s ∈ R^×. This defines the structure of a dilation space, a concept studied in the more general context of Σ-spaces by Loos in [Lo72]. For r = −1, we obtain a reflection space, so that a dilation space is a reflection space with additional structure. Other important classes of dilation spaces are the ruled spaces discussed in [Be00, Ch. VI] that arise naturally in Jordan theory.
In Section 2 we turn to the space Stand(H) of standard subspaces and show that it carries a natural dilation space structure. This corresponds naturally to dilation space structures on the sets Mod(H) and Hom(R^×, AU(H)). The underlying reflection space structure on Stand(H) is given by V_1 • V_2 := J_{V_1} J_{V_2} V_2, and the map q : Stand(H) → Conj(H), V ↦ J_V, onto the symmetric space Conj(H) of antiunitary involutions on H is a morphism of reflection spaces. Here an interesting point is that Conj(H) does not carry a non-trivial dilation space structure, so that the weaker notion of a reflection space on Stand(H) actually leads to the much richer dilation space structure. If (G, ε_G) is a graded topological group, i.e., ε_G : G → {±1} is a continuous homomorphism, then, for every antiunitary representation U : G → AU(H), the natural map U_* : Hom_gr(R^×, G) → Hom_gr(R^×, AU(H)) defines a morphism of dilation spaces V_U : Hom_gr(R^×, G) → Stand(H), which is the Brunetti-Guido-Longo (BGL) map V_U from [NÓ17, Prop. 5.6] and [BGL02, Thm. 2.5].
A morphism of reflection spaces γ : R → Stand(H) is called a geodesic. In Proposition 2.9 we describe, in terms of unitary one-parameter groups (U_t)_{t∈R}, those geodesics for which q ∘ γ is continuous. On the other hand, for each V ∈ Stand(H) we have the corresponding dilation group, implemented by the unitary operators (∆_V^{it})_{t∈R}. Both structures interact nicely for geodesics invariant under the dilation group. In Proposition 2.11 we show that, if (U_t)_{t∈R} does not commute with the dilations (∆_V^{is})_{s∈R}, the geodesic is an orbit of Aff(R)_0 in Stand(H), where the action is given by an antiunitary representation.
A particularly intriguing structure on Stand(H) is the order structure defined by set inclusion, to which we turn in Section 3. This structure is trivial if H is finite dimensional, and it is also trivial on the subspace of Lagrangian standard subspaces. But if H is infinite dimensional, non-trivial inclusions can be obtained from antiunitary positive energy representations of Aff(R), which actually lead to monotone dilation invariant geodesics (Theorem 3.3). This is a direct consequence of the Theorems of Borchers and Wiesbrock (cf. [Lo08], [NÓ17]), and the dilation space structure thus provides a new geometric perspective on these results, which were originally formulated in terms of inclusions of von Neumann algebras ([Bo92], [Wi93]). In view of this characterization of the monotone dilation invariant geodesics, it is an interesting open problem to characterize all monotone geodesics in Stand(H). To get some more information on the ordered space Stand(H), one natural strategy is to consider finite dimensional submanifolds, resp., orbits O_V := G_1.V obtained from antiunitary representations. Then S_V := {g ∈ G_1 : U_g V ⊆ V} is a closed subsemigroup of G_1 with G_{1,V} = S_V ∩ S_V^{-1}, and S_V determines an order structure on G_1/G_{1,V} by gG_{1,V} ≤ g′G_{1,V} if g ∈ g′S_V, for which the inclusion G_1/G_{1,V} ↪ Stand(H) is an equivariant order embedding. Of course, the most natural cases arise if V corresponds to some γ ∈ Hom_gr(R^×, G) under the BGL construction; then G_{1,γ} ⊆ G_{1,V}, so that O_V is a G_1-equivariant quotient of G_1/G_{1,γ} ≅ G_1.γ ⊆ Hom_gr(R^×, G). We conclude this note by showing that, if G is the conformal group of a euclidean Jordan algebra E and γ : R^× → G corresponds to scalar multiplication on E, the ordered homogeneous spaces G_1.V ⊆ Stand(H), V := V_U(γ), obtained from antiunitary positive energy representations (U, H) of G, are mutually isomorphic, and the order structure can be described by showing that the semigroup S_V coincides with the well-known Olshanski semigroup S_{E_+} of conformal compressions of the open positive cone E_+ ([HN93], [Ko95]). This result is based on the maximality of the subsemigroup S_{E_+} in G_1, which is proved in an appendix.
Acknowledgment: We are most grateful to Wolfgang Bertram for illuminating discussions on the subject matter of this note and for pointing out several crucial references, such as [Lo67]. We also thank Jan Möllers for suggestions to improve earlier drafts of the manuscript.
Reflection spaces
In this first section we review some generalities on reflection spaces ([Lo67]) and introduce the notion of a dilation space by specialization of Loos' more general concept of a Σ-space ([Lo72]). A key feature of these abstract concepts is that they work well in many categories, in particular in the category of sets and the category of topological spaces, and not only in the category of smooth manifolds. Only when it comes to the finer geometric points related to the concept of a symmetric space is a smooth structure required. In Section 2 this will be crucial for the space Stand(H), which carries no natural smooth structure but which is fibered over the topological space Conj(H), endowed with the strong operator topology.
Definition 1.1. (a) Let M be a set and let µ : M × M → M, (x, y) ↦ x • y =: s_x(y), be a map with the following properties:
(S1) x • x = x,
(S2) x • (x • y) = y,
(S3) x • (y • z) = (x • y) • (x • z)
for all x, y, z ∈ M, i.e., s_x ∈ Aut(M, •). Then we call (M, µ) a reflection space ([Lo67, Lo67b]).
(b) If M is a smooth manifold and µ : M × M → M is a smooth map turning (M, µ) into a reflection space, then it is called a smooth reflection space. If, in addition, each x is an isolated fixed point of s_x, then it is called a symmetric space (in the sense of Loos).
If M is a topological space and µ is continuous, we call it a topological reflection space.
Example 1.2. (a) Any group G is a reflection space with respect to the product x • y := xy^{-1}x. Note that left and right translations λ_g(x) = gx and ρ_g(x) = xg are automorphisms of the reflection space (G, •). The subset Inv(G) of involutions in G is a reflection subspace on which the product takes the form s_g(h) := ghg = ghg^{-1}.
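As a quick sanity check (our own unwinding of the definitions), the group product x • y := xy^{-1}x does satisfy (S1)-(S3):

```latex
x \bullet x = xx^{-1}x = x, \qquad
x \bullet (x \bullet y) = x(xy^{-1}x)^{-1}x = xx^{-1}yx^{-1}x = y,
```
```latex
x \bullet (y \bullet z) = x(yz^{-1}y)^{-1}x = xy^{-1}zy^{-1}x
 = (xy^{-1}x)(xz^{-1}x)^{-1}(xy^{-1}x) = (x \bullet y) \bullet (x \bullet z).
```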
(b) Suppose that G is a group and τ ∈ Aut(G) is an involution. For any subgroup H ⊆ G^τ := Fix(τ), we obtain on the coset space M := G/H the structure of a reflection space by gH • g′H := gτ(g^{-1}g′)H.
(c) Any set F carries the trivial reflection space structure x • y := y.
(d) Products of reflection spaces are reflection spaces with respect to the componentwise product.
(e) If (M, •) is a reflection space and q : M → N is a surjective submersion whose kernel relation is a congruence relation with respect to •, i.e., q(x) = q(x′) and q(y) = q(y′) implies q(x • y) = q(x′ • y′), then q(x) • q(y) := q(x • y) defines on N the structure of a reflection space.
In fact, that the product on N is well-defined is our assumption. That it is smooth follows from the smoothness of the map M × M → N, (x, y) ↦ q(x • y), and the fact that q × q : M × M → N × N is a submersion. Now the relations (S1-3) for N follow immediately from the corresponding relations on M.
(f) In addition to (b), we consider a smooth action α : H → Diff(F) of H on a manifold F and form the associated space M := G ×_H F := (G × F)/H. Then (1.4) defines on M the structure of a smooth reflection space on which G acts by automorphisms. That (1.4) is a well-defined smooth binary operation is clear. That we obtain a reflection space is most naturally derived from (b), (c), (d) and (e). First we note that the product manifold G × F carries a natural reflection space structure by (d), where we use the reflection space structure from (a) on G and the trivial one from (c) on F. Next we note that the quotient map q(g, f) := [g, f] is a submersion and that, for g, g′ ∈ G, h, h′ ∈ H, and f, f′ ∈ F, the image of the product in G/H × F does not change along the H-orbits. Therefore our claim follows from (e).
One of the main results of [Lo67, Lo67b] asserts that every finite dimensional connected reflection space (M, •) is of this form, where
• G := ⟨s_x s_y : x, y ∈ M⟩_grp ⊆ Diff(M) carries a finite dimensional Lie group structure, and
• H = G^τ for τ(g) = s_e g s_e, where e ∈ M is a base point.
Typical examples with discrete spaces F arise for F := π 0 (H) on which H acts through the quotient homomorphism H → π 0 (H) by translations.
We also note that G acts transitively on G ×_H F if and only if H acts transitively on F. For any f ∈ F we then have G ×_H F ≅ G/H_f as a homogeneous space of G.
(g) If V is a K-vector space (char(K) ≠ 2) and β : V × V → K is a symmetric bilinear form, then the subset V^× := {v ∈ V : β(v, v) ≠ 0} is a reflection space with respect to s_x(y) := (2β(x, y)/β(x, x))x − y. Note that s_{λx} = s_x for every λ ∈ K^× and, conversely, that s_x = s_z implies z ∈ K^× x because z ∈ ker(s_x − 1) = Kx. For K = R and V a locally convex space, we thus obtain on V^× the structure of a real smooth reflection space, and each level set of v ↦ β(v, v) becomes a symmetric space. The same holds for the image V^×/K^× in the projective space P(V).
An easy induction then shows that the powers x_n satisfy (1.5). We also note that, if f : (M, •) → (M′, •) is a morphism of reflection spaces and e′ = f(e), then f maps the sequence (x_n)_{n∈Z} based at e to the corresponding sequence (f(x)_n)_{n∈Z} based at e′. Note that (1.5) means that the map (Z, •) → (M, •), n ↦ x_n, is a morphism of reflection spaces if Z carries the canonical reflection space structure (Example 1.2(a)).
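For concreteness, the canonical reflection space structure on the additive group R from Example 1.2(a), which is used in the proof below, reads as follows (our own unwinding of the definitions):

```latex
t \bullet s = t - s + t = 2t - s, \qquad t_n = nt \ \text{in } (\mathbb{R}, \bullet, 0),
\qquad \text{so a geodesic } \gamma \colon \mathbb{R} \to M \text{ satisfies } \gamma(2t - s) = s_{\gamma(t)}(\gamma(s)).
```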
Theorem 1.6. (Oeh's Theorem, [Oe17]) Let G be a topological group. Then the geodesics γ : R → G with γ(0) = g are the curves of the form γ(t) = γ_0(t)g, where γ_0 : R → G is a continuous one-parameter group.
Proof. Since right multiplication with g^{-1} is an automorphism of the reflection space (G, •), we may w.l.o.g. assume that g = e and show that in this case the geodesics are the continuous one-parameter groups.
Clearly, every one-parameter group γ : R → G is also a morphism of reflection spaces, hence a geodesic. Suppose, conversely, that γ is a geodesic with γ(0) = e. From (1.6) and the relation t_n = nt in the pointed reflection space (R, •, 0), it follows that γ(nt) = γ(t)^n for t ∈ R, n ∈ Z. It follows, in particular, that the restriction of γ to any cyclic subgroup Zt ⊆ R is a group homomorphism. Applying this to t = 1/n, n ∈ N, we see that γ|_Q : Q → G is a group homomorphism. Now the continuity of γ implies that γ is a homomorphism.
The second assertion is trivial.
Dilation spaces.
Although the reflection space structure is a key bridge between manifold geometry and transformation groups ([Lo67]), there are natural situations where one has additional structure encoded by a family of actions µ_x : Σ → Diff(M) of a given Lie group Σ on M such that x is fixed under µ_x and the family (µ_x)_{x∈M} satisfies a certain compatibility condition similar to (S3). This leads to the notion of a Σ-space introduced by O. Loos in [Lo72]. Here we shall need only the special case Σ = R^×, so that we shall speak of dilation spaces. Restricting to the subgroup {±1} ⊆ R^×, we then obtain a reflection space, so that dilation spaces are reflection spaces with additional point symmetries encoded in R^×-actions parametrized by the points of M.
Definition 1.7. Let M be a set and suppose we are given a map µ : R^× × M × M → M, (r, x, y) ↦ r_x(y), with the following properties:
(D1) r_x(x) = x,
(D2) r_x ∘ s_x = (rs)_x,
(D3) r_x s_y r_x^{-1} = s_{r_x y}
for x, y ∈ M and r, s ∈ R^×. Then (M, µ) is called a dilation space. In [Be00, Ch. VI] the notion of a ruled space is defined as a smooth dilation space (M, µ) with the additional property that, for every r ∈ R^×, the tangent map T_x(r_x) is diagonalizable with eigenvalues r and r^{-1}. This ensures that (M, µ_{−1}) is a symmetric space. We refer to [Be00, Thm. VI.2.2] for a characterization of the ruled spaces among symmetric spaces.
For r = −1, (1.7) specializes to the reflection space structure.
Example 1.9. (a) The set Hom(R^×, G) of homomorphisms into a group G carries a natural dilation space structure (see the sketch below for an explicit formula).
(b) For every γ ∈ Hom(R^×, G), the conjugacy orbit G.γ = {γ^g : g ∈ G} with γ^g(t) := gγ(t)g^{-1} is a dilation subspace. This follows directly from the formula in (a).
(c) If G is a Lie group with Lie algebra g, we represent any smooth γ ∈ Hom(R^×, G) by the pair (γ′(0), γ(−1)) ∈ g × Inv(G). The corresponding set of pairs is {(x, σ) ∈ g × Inv(G) : Ad(σ)x = x}, and the reflection structure and the additional dilation space structure from (a) take an explicit form on this set. For a pair (x, σ), the G-orbit is the set {(Ad(g)x, gσg^{-1}) : g ∈ G}, whose first component simply is an adjoint orbit in g.
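A natural candidate for the dilation space structure on Hom(R^×, G) referred to in (a) is conjugation with values of the base homomorphism; the following sketch is our own reconstruction, consistent with (b) above and with Theorem 2.3 below, possibly up to normalization, and it also verifies the axioms:

```latex
(r_\gamma \eta)(t) := \gamma(r)\,\eta(t)\,\gamma(r)^{-1} \qquad (r, t \in \mathbb{R}^{\times}).
% (D1): (r_\gamma \gamma)(t) = \gamma(r)\gamma(t)\gamma(r)^{-1} = \gamma(t).
% (D2): (r_\gamma s_\gamma \eta)(t) = \gamma(r)\gamma(s)\,\eta(t)\,\gamma(s)^{-1}\gamma(r)^{-1} = ((rs)_\gamma \eta)(t).
% (D3): (r_\gamma s_\eta r_\gamma^{-1})(\zeta)(t)
%        = (r_\gamma\eta)(s)\,\zeta(t)\,(r_\gamma\eta)(s)^{-1} = (s_{r_\gamma \eta}\zeta)(t).
```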
On homogeneous spaces, dilation space structure can sometimes be constructed along the lines of Example 1.2(b) if one considers subgroups with a central oneparameter group: Proof. Here (D1) is trivial and (D2) follows from r g = λ g α r λ −1 The second assertion follows immediately from the first one because the binary operations • r are obviously well-defined on G/H and (D1-3) for G/H follows immediately from the corresponding relations for G.
By specialization we immediately obtain: if V is a vector space and α : R^× → GL(V) is a homomorphism, then r_v(w) := v + α(r)(w − v) defines on V the structure of a dilation space.
If (M, µ) is a dilation space and e ∈ M is a base point, then α_e(r)(x) := e •_r x defines a homomorphism α_e : R^× → Aut(M, •_{−1})_e whose range is central. Conversely, we have: if G is a group acting transitively by automorphisms on the reflection space (M, •), e ∈ M, and β : R^×_+ → G is a homomorphism fixing e whose range centralizes the stabilizer G_e, then g.e •_r y := gβ(r)g^{-1}.y for g ∈ G, r ∈ R^×_+, y ∈ M, defines on the reflection space (M, •) the structure of a dilation space on which G acts by automorphisms.
Proof. Let H := G_e, so that we may identify M with G/H. Since the elements β(r), r ∈ R^×_+, fix e, they commute with the reflection s_e. Therefore α_r(g) := β(r)gβ(r)^{-1} for r ∈ R^×_+ and α_{−1}(g) := s_e g s_e define a homomorphism α : R^× → Aut(G), and H = G_e is fixed pointwise by each α_r. With Proposition 1.10 we now obtain on G/H the structure of a dilation space. For r = −1 this recovers the given reflection space structure, and for r > 0 we find r_{gH}(uH) = gβ(r)g^{-1}.uH, which coincides with (1.10).
The λ-dilation structure on R is given by t •_r s := t + r^λ(s − t).
Proof. The equivalence of (a) and (b) is by definition. Further, (c) follows from (b) by specializing to t = 0. If (c) is satisfied, then we obtain (b).
Remark 1.14. The main difference between Examples 1.11(a) and (b) is that, for 0 ≠ v ∈ V, the geodesic γ(t) = tv is a morphism of dilation spaces for the λ-dilation structure on R if and only if γ(r^λ t) = α(r)γ(t) for all r ∈ R^×_+ and t ∈ R. This is equivalent to α(r)v = r^λ v for all r ∈ R^×_+. Therefore the geodesics which are morphisms of dilation spaces are generated by the elements of the common eigenspaces of the operators α(r), r ∈ R^×_+.
The space of standard subspaces
We now apply the general discussion of reflection and dilation spaces to the space Stand(H) of standard subspaces and its relatives, the space Mod(H) of pairs of modular objects (∆, J) and the space Hom_gr(R^×, AU(H)) of continuous antiunitary representations of R^×. The operator S is closed, so that ∆_V := S*S is a positive selfadjoint operator. We thus obtain the polar decomposition S = J_V ∆_V^{1/2}, where J_V is an antilinear involution, and the modular relation J_V ∆_V J_V = ∆_V^{-1}. To see more geometric structure on Stand(H), we have to connect its elements to homomorphisms R^× → AU(H). This is best done in the context of graded groups and their antiunitary representations.
Definition 2.2. (a) A graded group is a pair (G, ε_G) consisting of a group G and a surjective homomorphism ε_G : G → {±1}. We write G_1 := ker ε_G and G_{−1} := G \ G_1, so that G = G_1 ∪ G_{−1}. Often graded groups are specified as pairs (G, G_1), where G_1 is a subgroup of index 2, so that we obtain a grading by ε_G(g) := 1 for g ∈ G_1 and ε_G(g) := −1 for g ∈ G \ G_1.
If G is a Lie group and ε G is continuous, i.e., G 1 is an open subgroup, then (G, ε G ) is called a graded Lie group.
If G is a topological group with two connected components, then we obtain a canonical grading for which G_1 is the identity component.
Theorem 2.3. We obtain the structure of a dilation space
• on Hom_gr(R^×, AU(H)) by (γ_1 •_r γ_2)(t) := γ_1(r)γ_2(t)γ_1(r)^{-1},
• and on Stand(H) by V_1 •_r V_2 := γ_{V_1}(r)V_2, where γ_V ∈ Hom_gr(R^×, AU(H)) corresponds to V.
Proof. First we observe that, for each r ∈ R^×, we obtain by •_r a binary operation on the spaces Hom_gr(R^×, AU(H)), Mod(H) and Stand(H), respectively. In particular, Hom_gr(R^×, AU(H)) is a dilation subspace of Hom(R^×, AU(H)) (Example 1.9), hence a dilation space. It therefore remains to show that Φ and Ψ are compatible with all binary operations •_r; this implies in particular that (D1-3) are satisfied on Mod(H) and Stand(H).
Corollary 2.4. For every antiunitary representation U : G → AU(H) of a graded topological group (G, ε_G), the BGL map V_U : Hom_gr(R^×, G) → Stand(H) is a morphism of dilation spaces.
Proof. Since Φ and Ψ are isomorphisms of dilation spaces, it suffices to observe that U_* : Hom_gr(R^×, G) → Hom_gr(R^×, AU(H)), γ ↦ U ∘ γ, is a morphism of dilation spaces. But this is a trivial consequence of the fact that U is a morphism of graded topological groups.
In addition to the dilation structure, the space Stand(H) carries a natural involution θ, which on Stand(H) is given by θ(V) = V′ (the symplectic complement, see below).
Remark 2.5. (The canonical involution) (a) On Hom_gr(R^×, AU(H)) the involution θ(γ) = γ^∨, γ^∨(t) := γ(t^{-1}), defines an isomorphism of reflection spaces which is compatible with the dilation space structure in the sense that θ intertwines the operations •_r and •_{r^{-1}}. The fixed points of θ correspond to
• graded homomorphisms γ : R^× → AU(H) with R^×_+ ⊆ ker γ, and
• elements V ∈ Stand(H) which are Lagrangian subspaces of the symplectic vector space (H, ω), where ω(v, w) = Im⟨v, w⟩. As the symplectic orthogonal space V′ := V^{⊥_ω} coincides with iV^{⊥_R}, where ⊥_R denotes the orthogonal space with respect to the real scalar product Re⟨v, w⟩, the Lagrangian condition V = V′ is equivalent to V = iV^{⊥_R}, which is equivalent to V ⊕ iV = H being an orthogonal direct sum.
Remark 2.6. For H = C^n, the space Stand(C^n) ≅ GL_n(C)/GL_n(R) carries a natural symmetric space structure corresponding to the complex conjugation τ(g) = \bar{g} on GL_n(C) and given by gGL_n(R) ♯ hGL_n(R) := gτ(g^{-1}h)GL_n(R), resp., gR^n ♯ hR^n = gτ(g^{-1}h)R^n.
This reflection structure is different from the one defined above by s_{V_1}(V_2) = V_1 • V_2 = J_{V_1}J_{V_2}V_2. In fact, if J_{V_1} = J_{V_2}, then J_{V_1}J_{V_2}V_2 = V_2 shows that V_2 is a fixed point of s_{V_1}, and thus V_1 is not isolated in Fix(s_{V_1}), whereas this is the case for the reflections defined by ♯ (cf. Example 1.2(b)).
In particular, Stand_J(H) := {V ∈ Stand(H) : J_V = J} is a trivial reflection subspace of Stand(H), on which the product is given by V_1 • V_2 = JJV_2 = V_2.
We thus obtain a "normal form" of the reflection space Stand(H) similar to the one in Example 1.2(f).
Geodesics in Stand(H)
In a reflection space we have a canonical notion of geodesics. Although we do not specify a topology on Stand(H), the space Conj(H) ⊆ AU(H) carries the strong operator topology, and this immediately provides a natural continuity requirement for geodesics in Stand(H).
Proposition 2.9. (Geodesics in Stand(H)) Let γ : R → Stand(H) be a geodesic with γ(0) = V such that the corresponding geodesic (J_{γ(t)})_{t∈R} in Conj(H) is strongly continuous. Then there exists a strongly continuous unitary one-parameter group (U_t)_{t∈R} through which γ is expressed as in (2.4). The U_t are uniquely determined by this relation, and the relation (2.5) follows immediately from (2.4).
Definition 2.10. We call a geodesic γ : R → Stand(H) with γ(0) = V dilation invariant if it is invariant under the corresponding one-parameter group (∆_V^{it})_{t∈R} of modular automorphisms, i.e., if there exists an α ∈ R such that ∆_V^{is}γ(t) = γ(e^{αs}t) for s, t ∈ R.
Proposition 2.11. Let γ be a dilation invariant geodesic with γ(0) = V, given by the unitary one-parameter group (U_t)_{t∈R}, and write W_s := ∆_V^{is}. If (U_t)_{t∈R} does not commute with (W_s)_{s∈R}, then (2.6) defines an antiunitary representation of a group G_α whose identity component is isomorphic to Aff(R)_0, so that γ(R) is an orbit of Aff(R)_0 in Stand(H).
Proof. The dilation invariance of γ implies the existence of an α ∈ R with W_s γ(t) = γ(e^{αs}t) for s, t ∈ R. For the corresponding unitary one-parameter group U, this leads to the commutation relation W_s U_t W_s^{-1} = U_{e^{αs}t}. Therefore (2.6) defines an antiunitary representation of G_α. The converse is clear.
The order on Stand(H)
As a set of subsets of H, the space Stand(H) carries a natural order structure defined by set inclusion. We shall see below that non-trivial inclusions V_1 ⊂ V_2 arise only if both modular operators ∆_{V_1} and ∆_{V_2} are unbounded. Therefore inclusions of standard subspaces appear only if H is infinite dimensional. Here a natural question is to understand when a non-constant geodesic γ : R → Stand(H) is monotone with respect to the natural order on R. In general this seems to be hard to characterize, but for dilation invariant geodesics, Proposition 2.11 can be combined with the Borchers-Wiesbrock Theorem ([NÓ17, Thms. 3.13, 3.15]), which provides a complete answer in terms of the positive/negative energy condition on the corresponding antiunitary representation of Aff(R).
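For orientation, we recall the commutation relations underlying the theorems of Borchers and Wiesbrock (standard facts, cf. [Lo08], [NÓ17]): if (U_t)_{t∈R} is a unitary one-parameter group with positive selfadjoint generator and U_t V ⊆ V for all t ≥ 0, then

```latex
\Delta_V^{is}\, U_t\, \Delta_V^{-is} = U_{e^{-2\pi s} t}, \qquad
J_V\, U_t\, J_V = U_{-t} \qquad (s, t \in \mathbb{R}).
```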
Lemma 3.1. A proper inclusion V_1 ⊊ V_2 of standard subspaces can only exist if the modular operators ∆_{V_1} and ∆_{V_2} are unbounded.
From [Lo08, Prop. 3.10] we recall:
Lemma 3.2. If V_1 ⊆ V_2 for two standard subspaces and V_1 is invariant under the modular automorphisms (∆_{V_2}^{it})_{t∈R}, then V_1 = V_2.
Theorem 3.3. A dilation invariant geodesic γ is decreasing if and only if the corresponding antiunitary representation U of Aff(R) from Proposition 2.11 is a positive energy representation.
Proof. First we assume that (W_s)_{s∈R} acts non-trivially on γ(R), which means that α ≠ 0 in Proposition 2.11, so that we obtain an antiunitary representation of Aff(R). Now the assertion follows from [NÓ17, Thm. 3.13] or [Lo08, Thm. 3.17].
If α = 0, i.e., (W_s)_{s∈R} commutes with (U_t)_{t∈R}, then each γ(t) is invariant under (W_s)_{s∈R}. Therefore γ cannot be monotone by Lemma 3.2.
Problem 3.5. Find a characterization of the monotone geodesics in Stand(H). By Lemma 3.1 it is necessary that ∆_V is unbounded. For dilation invariant geodesics, Theorem 3.3 provides a characterization in terms of the positive/negative spectrum condition on U. In this case the representation theory of Aff(R) even implies that, apart from the subspace of fixed points, the operator ∆_V must be equivalent to the multiplication operator (Mf)(x) = xf(x) on some space L^2(R^×, K), where K is a Hilbert space counting multiplicity ([NÓ17, §2.4.1], [Lo08, Thm. 2.8]). Note that the fixed point space H_0 := ker(∆_V − 1) leads to an orthogonal decomposition H = H_0 ⊕ H_1 and V = V_0 ⊕ V_1 with ∆_V = 1 ⊕ ∆_{V_1} such that ∆_{V_1} has purely continuous spectrum.
Since it seems quite difficult to address this problem directly, it is natural to consider subspaces of Stand(H) which are more accessible. Such subspaces can be obtained from an antiunitary representation (U, H) of a graded Lie group (G, ε_G) and a fixed γ ∈ Hom_gr(R^×, G) as the image O_V := G_1.V of the conjugacy class G_1.γ under the G_1-equivariant morphism V_U : Hom_gr(R^×, G) → Stand(H) of dilation spaces, where V := V_U(γ) (Corollary 2.4). Then S_V := {g ∈ G_1 : U_g V ⊆ V} is a closed subsemigroup of G_1 defining an order on G_1/G_{1,V} (see [HN93, §4] for background material on semigroups and ordered homogeneous spaces) for which the inclusion G_1/G_{1,V} ↪ Stand(H) is an order embedding. Note that O_V is a G_1-equivariant quotient of G_1/G_{1,γ} ≅ G_1.γ ⊆ Hom_gr(R^×, G). In the following subsection we explain how these spaces and their order structure can be obtained quite explicitly for an important class of examples, including the case where γ is a Lorentz boost associated to a wedge domain in Minkowski space.
Conformal groups of euclidean Jordan algebras.
Definition 3.6. A finite dimensional real vector space E endowed with a symmetric bilinear map E × E → E, (a, b) → a · b is said to be a Jordan algebra if x · (x 2 · y) = x 2 · (x · y) for x, y ∈ E.
If L(x)y = xy denotes the left multiplication, then E is called euclidean if the trace form (x, y) → tr(L(xy)) is positive definite.
Example. On Λ_n := R × R^{n−1} with the Jordan product (t, x)·(s, y) := (ts + ⟨x, y⟩, ty + sx), the trace form is a Lorentz form, so that we can think of Λ_n as the n-dimensional Minkowski space, where the first component corresponds to the time coordinate.
Let E be a euclidean Jordan algebra. Then C_+ := {v^2 : v ∈ E} is a pointed closed convex cone in E whose interior is denoted E_+. The Jordan inversion j_E(x) = x^{-1} acts by a rational map on E. The causal group G_1 := Cau(E) is the group of birational maps on E generated by the linear automorphism group Aut(E_+) of the open cone E_+, the map −j_E, and the group of translations. It is an index two subgroup of the conformal group G := Conf(E) generated by the structure group H := Aut(E_+) ∪ −Aut(E_+) of E ([FK94, Prop. VIII.2.8]), j_E, and the translations. For any g ∈ G and x ∈ E in which g(x) is defined, the differential dg(x) is contained in H. This specifies a group grading ε_G : G → {±1} for which ker ε_G = G_1 = Cau(E) (see [Be96, Thm. 2.3.1] and [Be00] for more details on causal groups). It also follows that an element of G defines a linear map if and only if it belongs to the structure group H.
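A second standard example to keep in mind (our addition; all stated facts are classical, cf. [FK94]) is the Jordan algebra of real symmetric matrices:

```latex
E = \mathrm{Sym}_n(\mathbb{R}), \qquad a \cdot b := \tfrac{1}{2}(ab + ba), \qquad
\operatorname{tr} L(x \cdot y) \ \text{positive definite (proportional to } \operatorname{tr}(xy)\text{)},
```
```latex
C_+ = \{v^2 : v \in E\} = \{\text{positive semidefinite matrices}\}, \qquad
E_+ = \{\text{positive definite matrices}\}, \qquad j_E(x) = x^{-1} \ \text{(matrix inversion)}.
```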
The conformal completion E^c of E is a compact smooth manifold containing E as an open dense submanifold on which G acts transitively. By analytic extension, it can be identified with the Shilov boundary of the corresponding tube domain T_{E_+} := E + iE_+ ([Be96, Thms. 2.3.1, 2.4.1]). The Lie algebra g of G has a natural 3-grading g = g_1 ⊕ g_0 ⊕ g_{−1}, where g_1 ≅ E corresponds to the space of constant vector fields on E (generating translations), g_0 = h is the Lie algebra of H (the structure algebra of E), which corresponds to linear vector fields, and g_{−1} corresponds to certain quadratic vector fields which are conjugate under the inversion j_E to constant ones ([FK94, Prop. X.5.9]).
To determine the homogeneous space G_1/G_{1,γ} ≅ G_1.γ = G.γ ⊆ Hom_gr(R^×, G), we first determine the stabilizer group G_γ and derive some information on related subgroups.
Lemma 3.7. The following assertions hold:
(i) G_h = H for the grading element h = γ′(0);
(ii) −j_E = exp(Z) for a central element Z of the Lie algebra k of a maximal compact subgroup K ⊆ G_1; in particular, −j_E lies in the identity component of G_1.
Proof. (i) Since the inclusion ⊇ is obvious, we have to show that every element g ∈ G_h acts by a linear map on E ⊆ E^c. Then the assertion follows from dg(x) ∈ H for every g ∈ G and x ∈ E.
For every v ∈ E ⊆ E c we have lim t→∞ exp(−th).v = 0, and this property determines the point 0 as the unique attracting fixed point of the flow defined by t → exp(−th) on E c . We conclude that G h fixes 0. Likewise ∞ := j E (0) ∈ E c is the unique attracting fixed point of the flow defined by t → exp(th), and so G h fixes ∞ as well. This implies that G h acts on E by affine maps fixing 0, hence by linear maps ([Be96, Thm. 2.1.4]).
(ii) Let e ∈ E be the unit element of the Jordan algebra E. Then −j_E(z) = −z^{-1} is the point reflection in the base point ie of the hermitian symmetric space T_{E_+} = E + iE_+ with holomorphic automorphism group G_1 ≅ Aut(T_{E_+}) ([FK94, Thm. X.5.6]). Let K := G_{1,ie} denote the stabilizer group of ie in G_1. Then K is maximally compact in G_1 and its Lie algebra k contains a central element Z with exp(Z) = −j_E. This follows easily from the realization of T_{E_+} as the unit ball D ⊆ E_C of the spectral norm by the Cayley transform p : T_{E_+} → D, p(z) := (z − ie)(z + ie)^{-1}, which maps ie to 0 ([FK94, p. 190]). Now the connected circle group T acts on D by scalar multiplications and the assertion follows.
Let θ := Ad(−j_E) ∈ Aut(g) be the involution induced by the map −j_E ∈ G_1 = Cau(E); it is a Cartan involution of g. It satisfies θ(h) = −h for the element h = γ′(0) defining the grading and thus θ(g_j) = g_{−j} for j = −1, 0, 1. The compression semigroup S_{E_+} := {g ∈ G_1 : g.E_+ ⊆ E_+} is a closed subsemigroup of G_1 which defines on G_1/H_1, H_1 := H ∩ G_1, a natural order structure by gH_1 ≤ g′H_1 if g ∈ g′S_{E_+}; this order is invariant under the action of G_1 and reversed by elements g ∈ G \ G_1. The definition of G_1 = Cau(E), combined with Lemma 3.7(i) and (ii), yields explicit descriptions of the groups involved. In view of the fact that, in Quantum Field Theory, standard subspaces are associated to domains in space-time, it is interesting to observe that the ordered space (G_1/H_1, ≤) can be realized as a set of subsets of E^c ≅ G_1/P^−, where P^− = H_1 exp(g_{−1}) is the stabilizer of 0 ∈ E ⊆ E^c in G_1 ([Be96, Thm. 2.1.4(ii)]).
Corollary 3.9. The map Ξ : G_1/H_1 → {A : A ⊆ E^c}, gH_1 ↦ g.E_+, is an order embedding.
Proof. Because of the G_1-equivariance of Ξ, this follows from S_C = S_{E_+}, which implies that g_1E_+ ⊆ g_2E_+ is equivalent to g_2^{-1}g_1 ∈ S_C, i.e., to g_1H_1 ≤ g_2H_1 in G_1/H_1.
Finally, we connect the ordered symmetric space G_1/H_1 to Stand(H) by using antiunitary positive energy representations.
Definition 3.11. We call an antiunitary representation (U, H) of (G, ε G ) a positive energy representation if there exists a non-zero x ∈ C + ⊆ g 1 for which the selfadjoint operator −idU (x) has non-negative spectrum.
Remark 3.12. The cone W_U := {x ∈ g : −i dU(x) ≥ 0} is a closed convex invariant cone in g which is invariant under the adjoint action of G_1, and any g ∈ G \ G_1 satisfies Ad(g)W_U = −W_U.
Theorem 3.13. Let (U, H) be an antiunitary positive energy representation of G and V := V_U(γ). Then S_V = S_{E_+}.
Proof. By assumption, W_U is a proper closed convex invariant cone in g, and in Remark 3.12 we have seen that W_U ∩ g_1 ∈ {±C_+}, so that the positive energy condition leads to W_U ∩ g_1 = C_+. For x ∈ C_+ we have [h, x] = x and −i dU(x) ≥ 0, so that Rx + Rh is a 2-dimensional Lie algebra isomorphic to aff(R). Therefore Theorem 3.3 implies exp(R_+x) ⊆ S_V, and we even see that exp(C_+) ⊆ S_V. As θ = Ad(−j_E) ∈ Ad(G_1) (Lemma 3.7), it leaves W_U invariant. We conclude with Koufany's Theorem 3.8 that S_{E_+} = S_C = exp(C_+)H_1 exp(θ(C_+)) ⊆ S_V. Finally, we use the maximality of the subsemigroup S_{E_+} ⊆ G_1 (Theorem A.1) to see that S_V = S_{E_+}.
Open problems
Problem 4.1. Let (G, ε G ) be a graded Lie group with two connected components, γ : R × → G be a graded smooth homomorphism and (U, H) be an antiunitary representation of G. Then the G 1 -invariant cone W U ⊆ g can be analyzed with the well-developed theory of invariant cones in Lie algebras (see [HN93, §7.2] and also [Ne00]).
• Let V := V_U(γ). Is it possible to determine when the order structure on the subset G_1.V = V_U(G_1.γ) ⊆ Stand(H) is non-trivial? Theorems 3.3 and 3.13 deal with very special cases. • Is it possible to determine the corresponding order, which is given by the subsemigroup S_V ⊆ G_1, intrinsically in terms of γ? Here the difficulty is that G_1.γ ≅ G_1/G_{1,γ} carries no obvious order structure.
Problem 4.2. In several papers Wiesbrock develops a quite general program for generating Quantum Field Theories, resp., von Neumann algebras of local observables, from finitely many modular automorphism groups ([Wi93, Wi93b, Wi97, Wi98]). This contains in particular criteria for three modular groups, corresponding to three standard subspaces (V_j)_{j=1,2,3}, to generate groups isomorphic to the Poincaré group in dimension 2 or to PSL_2(R) ([NÓ17, Thm. 3.19]). On the level of von Neumann algebras there are also criteria for finitely many modular groups to define representations of SO_{1,3}(R)↑ or the connected Poincaré group P(4)↑_+ ([KW01]). It would be interesting to see how these criteria can be expressed in terms of the geometry of finite dimensional totally geodesic dilation subspaces of Stand(H).
Appendix A. Maximality of the compression semigroup of the cone
In this appendix we prove the maximality of the semigroup S E+ in the causal group Cau(E) of a simple euclidean Jordan algebra E.
Theorem A.1. If E is a simple euclidean Jordan algebra and E + ⊆ E the open positive cone, then the subsemigroup S E+ of G 1 = Cau(E) is maximal, i.e., any subsemigroup of G 1 properly containing S E+ coincides with G 1 .
Step 2: We want to derive the assertion from [HN95, Thm. V.4]. In [HN95] one considers a connected semisimple Lie group G, a parabolic subgroup P and an involutive automorphism τ of G. In loc. cit. it is assumed that the symmetric Lie algebra (g, τ) is irreducible (there are no non-trivial τ-invariant ideals) and that, for a τ-invariant Cartan decomposition g = k ⊕ p and the τ-eigenspace decomposition g = h ⊕ q, the center of the Lie algebra h_a := (h ∩ k) ⊕ (q ∩ p) is non-trivial (condition (A.1)). Then the conclusion of [HN95, Thm. V.4] is that, if
• the subsemigroup S(G^τ, P) := {g ∈ G : gG^τP ⊆ G^τP} has non-empty interior, and
• G = ⟨exp_{G_C} g⟩ in the simply connected complex group G_C with Lie algebra g_C,
then S(G^τ, P) is maximal in G. We next explain how the assumption that G_C is simply connected can be weakened. Suppose that G injects into its universal complexification G_C (which is always the case if it has a faithful finite dimensional representation). Let q_C : G̃_C → G_C denote the simply connected covering group; as G is connected, the integral subgroup G^♯ := ⟨exp_{G̃_C} g⟩ ⊆ G̃_C satisfies q_C(G^♯) = G, and ker q_C is a finite central subgroup of G̃_C. Consider the covering map q := q_C|_{G^♯} : G^♯ → G. Then P^♯ := q^{-1}(P) is a parabolic subgroup of G^♯ and G/P ≅ G^♯/P^♯. Let τ also denote the involution of G^♯ obtained by first extending τ from G to a holomorphic involution of G_C, then lifting it to G̃_C and then restricting to G^♯. Now H′ := q((G^♯)^τ) ⊆ G^τ is an open subgroup satisfying q((G^♯)^τP^♯) = H′P ⊆ G^τP. As ker(q) ⊆ P^♯ and q is surjective, we even obtain (A.2) (G^♯)^τP^♯ = q^{-1}(H′P).
Now the maximality of S_1 := S((G^♯)^τ, P^♯) in G^♯ immediately implies the maximality of S_2 := {g ∈ G : gH′P ⊆ H′P} in G.
Step 3: (Application to causal groups of Jordan algebras) First we verify the regularity condition (A.1). Here the Lie algebra g is simple, which implies in particular that (g, τ ) is irreducible.
A natural Cartan involution of g is given by θ := Ad(−j_E), which satisfies θ(h) = −h, hence θ(g_j) = g_{−j} for j ∈ {−1, 0, 1}. Then h = g^τ = g_0 inherits the Cartan decomposition h = str(E) = aut(E) ⊕ L(E), where L(x)y = xy for x, y ∈ E and aut(E) is the Lie algebra of the automorphism group Aut(E) of the Jordan algebra E, which coincides with the stabilizer group H_e of the Jordan identity e in H. This shows that h_a = aut(E) ⊕ {x − θ(x) : x ∈ g_1}.
To verify (A.1), it remains to show that the element e − θ(e) is central in h_a, i.e., that it commutes with all elements of the form u − θ(u), u ∈ g_1. As g_{±1} are abelian subalgebras of g, we have [e − θ(e), u − θ(u)] = −[e, θ(u)] − [θ(e), u] = −(X + θ(X)) for X := [e, θ(u)]. Since X is given by a Jordan multiplication, it belongs to h^{−θ} ([FK94, Prop. X.5.8]), so that X + θ(X) = 0 and the bracket vanishes. This proves that (g, τ) satisfies the regularity condition (A.1).
Step 4: (The maximality of S_{E_+}) Let G_0 denote the identity component of G. Then the stabilizer P := G_{0,e} of the Jordan identity e ∈ E_+ ⊆ E ⊆ E^c is a parabolic subgroup and G_0/P ≅ E^c is a flag manifold of G_0.
The subgroup H′ ⊆ (G_0)^τ from Step 2 consists of elements which are images of elements in the simply connected complex group G̃_C fixed under the involution τ. The group G̃_C acts by birational maps on the complex Jordan algebra E_C, and since G̃_C is simply connected, the subgroup G̃_C^τ of τ-fixed points in G̃_C is connected ([Lo69, Thm. IV.3.4]). Therefore elements of G̃_C^τ act on E_C by elements of the complex group exp_{G̃_C}(g_{0,C}), hence by linear maps. This shows that H′ acts on E by linear maps, and thus H′ ⊆ Aut(E_+) follows from (A.3). Further, H′ contains (G^τ)_0 = Aut(E_+)_0 and thus H′.e = E_+, as a subset of E^c. This means that H′P/P corresponds to E_+, and therefore the maximality of S_{E_+} = S_2 follows from Step 2.
Unsupervised Abbreviation Detection in Clinical Narratives
Clinical narratives in electronic health record systems are a rich resource of patient-based information. They constitute an ongoing challenge for natural language processing, due to their high compactness and abundance of short forms. German medical texts exhibit numerous ad-hoc abbreviations that terminate with a period character. The disambiguation of period characters is therefore an important task for sentence and abbreviation detection. This task is addressed by a combination of co-occurrence information of word types with trailing period characters, a large domain dictionary, and a simple rule engine, thus merging statistical and dictionary-based disambiguation strategies. An F-measure of 0.95 could be reached by using the unsupervised approach presented in this paper. The results are promising for a domain-independent abbreviation detection strategy, because our approach avoids retraining of models or use case specific feature engineering efforts required for supervised machine learning approaches.
Introduction
Free text narratives are a main carrier of unstructured patient-based information in clinical information systems. Clinical texts differ significantly from, e.g., newspaper or scientific articles. The following snippet demonstrates the high degree of compactness, which is typical for clinical narratives:

3. St.p. TE eines exulz. sek.knot.SSM (C43.5) li Lab. majus. Level IV, 2,42 mm Tumordurchm.
As much as such highly condensed text is understandable by specialists, it poses severe problems to natural language processing (NLP) and subsequent semantic interpretation (Meystre et al., 2008), due to idiosyncrasies of telegram style language like word and term-level ambiguities, acronyms, abbreviations, single-word compounds, derivations, spelling variants and misspellings. In addition, the broad range of clinical specialties with different vocabularies and recording traditions account for a high variation of sub-language characteristics (Patterson et al., 2010). This paper deals with the disambiguation of the period character (".") in clinical narratives. In many Western languages like German, periods are used as abbreviation markers. Therefore, in a first tokenization step it is not recommended to consider trailing period characters as token delimiters, in order to identify tokens that end with a period. Three cases can be distinguished: (i) The period character marks an abbreviation and does not act as sentence delimiter. (ii) The period character marks an abbreviation and also delimits the sentence. (iii) The period does not belong to the token and therefore delimits the sentence.
Our approach is purely data-driven, which distinguishes it from recently published work (Wu et al., 2016; Griffis et al., 2016; Vo et al., 2016), predominantly based on supervised machine learning. In contrast, we avoid extensive manual annotation of training data as well as classification-task-specific feature engineering, even though good results were obtained in a previous study (Kreuzthaler and Schulz, 2015). Another requirement is that the method should be easily adaptable to other clinical sub-language domains without model retraining or exhaustive dictionary or terminology management, and that classification results should be understandable in detail and traceable back to core decision rules.
Data
Corpus: A sample of 1,696 de-identified German-language clinical in- and outpatient discharge letters was obtained from the dermatology department of an Austrian university hospital. The documents were randomly assigned to a training and a test corpus, with 848 documents each.
Gold standard: From both corpora a list of word types followed by a period character was extracted by applying the following two regular expressions sequentially: (i) \b\p{Graph}+\.(?=(\p{Punct}|\s|$)) matches any word type character sequence ending with a period character, and (ii) ([a-z]+\.|[A-Z][a-z]*\.) filters the resulting types from step one, keeping only those built from word characters without digits. About 2,300 word types ending with a period finally constitute the training and test set. Each type was manually annotated as to whether the period character belongs to the word type or not. The inter-annotator agreement was very high, with a Cohen's kappa of 0.98 (Di Eugenio and Glass, 2004; Hripcsak and Heitjan, 2002).
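As an illustration, the two-pass extraction can be sketched in Python as follows (our own sketch, not the original pipeline code; the Unicode classes \p{Graph} and \p{Punct} of the paper's regexes are approximated here with \S and an explicit punctuation class of the standard re module, and the corpus variable text is hypothetical):

```python
import re
import string

# Pass 1: a non-whitespace sequence ending in a period, followed by
# punctuation, whitespace, or end of input (approximating \p{Graph}/\p{Punct}).
PASS1 = re.compile(r"\b\S+\.(?=[" + re.escape(string.punctuation) + r"]|\s|$)")
# Pass 2: keep purely alphabetic types ("word." or "Word.") without digits.
PASS2 = re.compile(r"^(?:[a-z]+\.|[A-Z][a-z]*\.)$")

def candidate_types(text):
    """Return the set of word types ending with a period character."""
    return {t for t in PASS1.findall(text) if PASS2.match(t)}
```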
Dictionary: An abbreviation-free medical dictionary (~1.45 million unique word types) was built using (i) a free contemporary German dictionary, (ii) a German medical dictionary (Pschyrembel, 1997), and (iii) texts from a consumer health Web portal. All tokens ending with a period character were excluded from this resource, as a highly sensitive approach to keep it free of abbreviations. In addition, German abbreviations harvested from Web resources (~5,800 acronym and abbreviation tokens) were excluded from the overall dictionary to make the final resource as abbreviation-free as possible, also accounting for potential punctuation errors in the three dictionaries, such as missing abbreviation period markers. The resulting resource was used in our abbreviation detection strategy, as described in the following section.
Methods
Statistical approach: For the statistical classification approach we built a fourfold observed co-occurrence table O(k_nm) for every word type ending with a period character:
The schema of the table, together with the two example types "Pat" and "auf", is as follows (rows: token with (•) or without (¬•) a trailing period; columns: the word type under inspection vs. all other types):

        Type    ¬Type
  •     k_11    k_12
  ¬•    k_21    k_22

Table 1: Two examples of observed corpus-based frequency counts, viz. the two word types "Pat" and "auf", with and without a period as rightmost character (symbolized by •).
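For illustration, such a table could be accumulated from a tokenized corpus as in the following sketch (assuming a tokenization that leaves trailing periods attached; the function name is ours, not the paper's):

```python
def observed_counts(tokens, word):
    """Observed fourfold table O(k_nm) for `word` vs. a trailing period.

    Rows: token ends with "." (k_1m) or not (k_2m);
    columns: the type equals `word` (k_n1) or is any other type (k_n2).
    """
    k11 = k12 = k21 = k22 = 0
    for tok in tokens:
        has_period = tok.endswith(".")
        stem = tok[:-1] if has_period else tok
        if stem == word:
            if has_period:
                k11 += 1
            else:
                k21 += 1
        elif has_period:
            k12 += 1
        else:
            k22 += 1
    return k11, k12, k21, k22
```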
With the observed frequency counts O(k_nm) we calculate the log-likelihood ratio (LLR) (Dunning, 1993) of a word type and its ending period character by use of Shannon's entropy (Shannon, 1948):

LLR = 2N · (H(row sums) + H(column sums) − H(k_11, k_12, k_21, k_22)),

where N is the sum of all four cells and H denotes Shannon's entropy of the respective relative frequencies. For the cases mentioned in Table 1, LLR values amount to 579.11 for Example A and 571.56 for Example B. This has the advantage that the relevance of each co-occurrence can be assessed assuming a χ² distribution (with one degree of freedom) at different significance levels. Example A and Example B have a very high LLR, which allows the conclusion that the occurrence of the word type left of the ending period character has a significant influence on the presence or absence of the final period character. In order to determine whether there is significant evidence for the presence or for the absence of the final period character we calculate, in a next step, the expected values E(k_nm) of the fourfold Table 1 via

E(k_nm) = (k_n1 + k_n2)(k_1m + k_2m) / N.

This leads to the following fourfold expected co-occurrence table E(k_nm):

        Type      ¬Type
  •     E(k_11)   E(k_12)
  ¬•    E(k_21)   E(k_22)

Table 2: Two examples of expected corpus-based frequency counts, again with the word types "Pat" and "auf", with and without a period as rightmost character (symbolized by •).
The final decision function is now straightforward, considering the fact that the expected values E(k_nm) can be interpreted as the distribution within the table if there were no divergence from randomness: if O(k_11) − E(k_11) > 0, the period character belongs to the word type and marks an abbreviation; if O(k_11) − E(k_11) ≤ 0, the period marker does not belong to it and can be interpreted as a sentence delimiter. We apply this decision function regardless of the LLR level of the token-period co-occurrences, but its influence is inspected in the Combined approach described below.
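The computation can be sketched as follows (our own illustrative code, not the authors' implementation; the x·ln x form below is algebraically equivalent to the entropy formulation of the LLR given above):

```python
from math import log

def _s(values):
    # sum of x * ln(x) over positive cells
    return sum(v * log(v) for v in values if v > 0)

def llr(k11, k12, k21, k22):
    """Dunning's log-likelihood ratio G^2 for a fourfold table."""
    n = k11 + k12 + k21 + k22
    return 2 * (_s((k11, k12, k21, k22))
                - _s((k11 + k12, k21 + k22))   # row sums
                - _s((k11 + k21, k12 + k22))   # column sums
                + n * log(n))

def expected_k11(k11, k12, k21, k22):
    """E(k11) = (first row sum) * (first column sum) / N."""
    n = k11 + k12 + k21 + k22
    return (k11 + k12) * (k11 + k21) / n

def statistical_is_abbreviation(k11, k12, k21, k22):
    """O(k11) - E(k11) > 0: the period belongs to the word type."""
    return k11 - expected_k11(k11, k12, k21, k22) > 0
```

Note that this rule can decide in favor of an abbreviation even when k_21 > k_11, as in the "Meta." example discussed in the results below.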
Dictionary approach
The dictionary-based approach to period character classification is a simple dictionary lookup of the token under inspection. If the token (without the trailing period) is found in the dictionary, we decide that it is not an abbreviation; otherwise the period character is considered as belonging to the token, which is therefore classified as an abbreviation. This strategy requires an abbreviation-free dictionary, as described in Section 2.1.
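In code, the dictionary rule amounts to a set lookup (a sketch; `dictionary` stands for the abbreviation-free word list of Section 2.1):

```python
def dictionary_is_abbreviation(token, dictionary):
    """`token` ends with a period; its stem decides the classification."""
    stem = token[:-1]
    return stem not in dictionary  # unknown stem => treat as abbreviation
```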
Combined approach
Our decision function in the combined approach is motivated by the fact that the tokens ending with a period have a distribution pattern as depicted in Figure 1. This has a fundamental influence on our decision function: (i) for a certain proportion of the token-period co-occurrences the statistical approach has enough frequency information to give valid classification results, (ii) but there is a relevant long tail of co-occurrences where the statistical method is no longer stable. We therefore addressed these cases with the dictionary-based approach and prioritized it in the decision function: wherever the left context of the period is in the dictionary we decide in favor of a non-abbreviation; otherwise we take the decision of the statistical approach, taking into account different significance levels (LLR_1 > 10.83, p < 0.001; LLR_2 > 3.84, p < 0.05; LLR_3 > 0, p-value not considered).
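Putting the two strategies together, the combined decision can be sketched as below, reusing the helpers from the statistical sketch above; the default for sparse unknown types below the chosen significance level is our assumption, not spelled out in the text:

```python
def combined_is_abbreviation(token, dictionary, counts, llr_threshold=0.0):
    """Dictionary first; statistical O-E decision for unknown stems.

    `llr_threshold` corresponds to LLR_1 (10.83), LLR_2 (3.84), or LLR_3 (0).
    """
    if token[:-1] in dictionary:
        return False  # known full word: the period delimits the sentence
    k11, k12, k21, k22 = counts
    if llr(k11, k12, k21, k22) > llr_threshold:
        return statistical_is_abbreviation(k11, k12, k21, k22)
    return True  # sparse unknown type: default to abbreviation (assumption)
```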
Results and Discussion
The evaluation results show that the Statistical approach on its own tends to find all abbreviations but lacks precision. The Dictionary approach returns an F-measure of 0.94, and the top performance result of F_1 = 0.95 is obtained with the Combined approach. The evaluation results of the Combined approach also reflect the fact that the LLR information can be neglected in that case and the outcome of O(k_11) − E(k_11) should always be used regardless of the significance of the token-period co-occurrence. The investigation of false positives shows, e.g., a noticeable amount of token-period co-occurrences like "Lymphknotenstatus." (in English "lymph node status.") which very commonly appear at the end of a sentence, but which are not in our dictionary (search term: "Lymphknotenstatus"), and have O(k_11) − E(k_11) > 0. False negative results typically appear with abbreviated tokens, such as "morph." (abbreviation for "morphologisch", in English "morphological"), which are erroneously found in our dictionary (search term: "morph") and are therefore classified as non-abbreviations. Kiss and Strunk (2002a) tried to reduce the amount of false positives and false negatives by applying different scaling factors to the resulting LLR. A final threshold was manually chosen, with F-measures of 0.92 and higher on newspaper corpora. Kiss and Strunk (2002b) performed an intermediate evaluation of their idea of re-scaling the LLR also for sentence boundary detection. Here, they obtained a minimum F-measure of 0.91. Both preliminary approaches finally led to the Punkt system (Kiss and Strunk, 2006), a multilingual unsupervised approach rigorously tested and evaluated. Kreuzthaler and Schulz (2014) applied an extended version of the Kiss and Strunk (2002a) approach to clinical texts and achieved an accuracy of 0.93 for abbreviation and sentence detection based on the interpretation of the period character. A supervised machine learning approach using a support vector machine with a linear kernel and thorough feature engineering led to an F-measure of 0.95 for abbreviation detection and an F-measure of 0.94 for sentence delineation (Kreuzthaler and Schulz, 2015). Studies have also focused on the detection, normalization, and context-dependent mapping of abbreviations/acronyms to long forms. This is also part of works such as CLEF 2013 (Suominen et al., 2013), which included a task for acronym/abbreviation normalization, using the UMLS as target terminology. An F-measure of 0.89 was reported by Patrick et al. (2013). Four different methods for abbreviation detection were tested by Xu et al. (2007). A decision tree classifier, which additionally used features from knowledge resources, performed best with a precision of 0.91 and a recall of 0.80. Wu et al. (2011) compared machine learning methods for abbreviation detection. Word formation, vowel combinations, related content from knowledge bases, word frequency in the overall corpus, and local context were used as features. A random forest classifier performed best with an F-measure of 0.95, and an ensemble of classifiers achieved the highest F-measure of 0.96. Wu et al. (2012) compared different clinical natural language processing systems for abbreviation handling in clinical narratives: MedLEE (Friedman et al., 1995b; Friedman et al., 1995a) performed best with an F-measure of 0.60. A prototypical system meeting real-time constraints is described in Wu et al. (2013).
Wu's journey finally ended in the CARD system (Wu et al., 2016), achieving an F-measure of 0.76 for finding and disambiguating abbreviations in clinical narratives. Very recently, Vo et al. (2016) obtained very high results, with a minimum F-measure of 0.94, on abbreviation detection in clinical notes, applying supervised machine learning methods with a rich feature engineering process.
The main difference between the work presented here and the unsupervised approach of Kiss and Strunk is that we refrained from re-scaling the LLR and avoided setting an experimental threshold for the abbreviation classification task. The statistical decision function we employed proved to be solid and robust even in cases where k_21 > k_11 (e.g. "Meta." with k_11 = 28, k_21 = 82, but nevertheless correctly classified as an abbreviation), which had also been one motivation for introducing scaling factors by Kiss and Strunk (2006). In contrast to much of the related work, our approach is unsupervised and does not require the training of a machine learning model or a rich feature engineering effort (Vo et al., 2016; Wu et al., 2016; Kreuzthaler and Schulz, 2015). Therefore we hypothesize that our approach is especially suited to be deployed to other clinical domains, which was a main driver of our investigations. Table 3 shows that with the dictionary approach alone we obtained F-measure values greater than 0.93, whereas the performance by word types was much lower. For the time being, we consider this acceptable because we concentrate on high token-based evaluation measurements and do not want to misclassify frequently occurring abbreviations. The statistical approach is not applicable in isolation, because we have found many cases where a word type followed by a period occurs only once or twice in the corpus (see Figure 1). In such cases the statistical approach is no longer robust, so we have to rely on dictionaries. The combined approach was satisfactory, as both training and test yielded token-based F-measure values for period character disambiguation greater than 0.94.
Conclusion and Outlook
In this paper we presented an unsupervised approach for period character disambiguation in German clinical narratives, which we evaluated for the task of abbreviation detection. We motivated and introduced both a data-driven statistical approach and a dictionary-based method. Based on the analysis of the frequency distribution of token-period character co-occurrences we also presented a hybrid methodology. This hybrid approach put emphasis on the dictionary-based method, which was then supported by a statistical decision rule. A dermatology corpus was used for initial evaluation. For the training and test set, we obtained F-measures of 0.95 and 0.94, respectively. This supports the hypothesis that unsupervised approaches are well suited for abbreviation and sentence boundary detection in clinical narratives, which are known to abound with ad-hoc abbreviations. Furthermore, the system presented here needs no adjustment to the sublanguage, which makes it easy to reuse for other text genres and subject-matters. This consideration together with the ability to trace back decision results to their core classification logic and the avoidance of manual training data annotations were major drivers for this investigation.
We mention the following limitations: (i) Periods after digits are currently not considered despite their importance as markers of ordinals in many languages, as well as their importance in many data formats. Kreuzthaler and Schulz (2015) took this into account in a supervised rich feature engineering approach using support vector machines; (ii) the methodology presented in this paper cannot resolve cases where periods play a double role, viz. as both abbreviation markers and sentence delimiters. This can be addressed by including in-depth context information regarding the period character under investigation; (iii) we applied this method to only one kind of text, viz. medical discharge summaries of melanoma patients. Therefore, we plan to demonstrate domain independence by applying the same approach to cardiology reports; (iv) we only used German texts, so that we can say little about the generalizability to other languages. Although we have found that ad hoc abbreviation is a very common phenomenon also in other languages and text genres, it cannot always be taken for granted that the period character is used as a marker. Future investigations will address these problems. Our goal is to create a specific UIMA component for abbreviation detection and resolution with an unsupervised core, which could be integrated in a clinical NLP pipeline like cTAKES (Savova et al., 2010), the Leo framework (the VINCI-developed NLP infrastructure) (Meystre et al., 2008; Patterson et al., 2014), or MedKATp.
Rare variants of cutaneous leishmaniasis presenting as eczematous lesions
Cutaneous leishmaniasis may present with clinical pictures such as zosteriform, sporotrichoid, and erysipeloid forms. The eczematous variant has rarely been reported. We report a 27-year-old man with atypical cutaneous leishmaniasis resembling eczema on the hand, from Yazd province in central Iran.
Introduction
Cutaneous leishmaniasis is a common protozoan disease, caused by Leishmania, and it is an important public health problem in Iran (1). In its most common clinical picture it presents as nodules, papules, or nodulo-ulcerative lesions. Unusual clinical presentations have been reported occasionally and include annular, sporotrichoid, palmoplantar, erysipeloid, whitlow, paronychial, and impetigo-like forms (2-4). We present a patient with the eczematous form, a very rare and chronic variant of cutaneous leishmaniasis.
Case Report
A 27-year-old man was referred to our clinic with a 3-month history of an exudative lesion on the hand (Fig. 1). It had started as a small insect-bite-like lesion and progressed slowly. He denied any history of burns, trauma, drug intake, or allergic disorders.
The patient was an army soldier. There was no history of a similar disease in the patient or his family. He also had no history of tuberculosis or contact with tuberculosis patients. The patient denied risk factors associated with HIV and reported no chills, fever, pain, or constitutional symptoms.
The total blood count, CRP, erythrocyte sedimentation rate (ESR), FBS, intradermal purified protein derivative (PPD) skin test, and HIV serology were all normal.
On examination, there was a crusted plaque on the posterior surface of the hand. The plaque had a dirty-brown crust and multiple papulopustules. There were no palpable lymph nodes or lymphatic cords. The clinical picture was consistent with eczema. Previous treatments, including steroids, antihistamines, and antibiotics, had failed to heal the lesion or to stop its slow progression.
Special stains and cultures were negative for acid-fast bacteria, fungi, and other bacteria. Because the patient resided in an area endemic for the disease, he was asked to undergo testing for leishmaniasis. The patient was treated with meglumine antimoniate (Glucantime), a pentavalent antimonial, at a dosage of 20 mg/kg per day intramuscularly for 20 days (treatment administered by the center for disease control of Iran). After completion of therapy, the lesion had partially healed, and after 3 months, the ulcer had healed completely. Side effects of meglumine antimoniate were mild arthralgias, myalgias, and pain at the injection site; otherwise, the patient tolerated the medication well.
Discussion
Cutaneous leishmaniasis is caused by obligate intracellular protozoa of the genus Leishmania. Rodents and canids serve as common reservoir hosts, and humans as incidental hosts (1). The vectors are sandflies of the genus Phlebotomus in the Old World. The incubation period ranges from a week to many months. Lesions typically appear on exposed areas of the body. The first manifestation is usually a papule at the site of the sandfly bite, which progressively increases in size and eventually ulcerates. Multiple primary lesions, regional adenopathy, and sporotrichoid, zosteriform, impetigo-like, erysipeloid, and whitlow forms are variably present (4-7). Here, we described a case of cutaneous leishmaniasis with an unusual presentation, resembling eczema on the hand, that responded to treatment with meglumine antimoniate (8). The precise pathogenesis of the eczematous form of cutaneous leishmaniasis has been poorly documented. The clinical manifestation in cutaneous leishmaniasis depends on the infecting Leishmania species and the host immune response, which is largely mediated through cellular immunity. Other factors include the site of infection, the number of parasites inoculated, and the nutritional status of the host. However, in the eczematous form of cutaneous leishmaniasis, one factor could be epidermal invasion by Leishmania causing an intense cell-mediated immune response leading to severe inflammatory and eczematous changes (7,9).
In our report, the large size and the eczematous appearance of the lesion were in themselves very rare, because there was no primary nodule or plaque. Our patient was an otherwise healthy young adult with no history of other skin or systemic disease or atopy. Whether the eczematous appearance resulted from an atypical Leishmania strain, from a lack of response, or from a specific immune response is not clear (10).
In another report, a very rare case of bilateral and symmetrical cutaneous leishmaniasis presented as eczema-like eruptions localized exclusively on the dorsal aspect of both hands (11). In another study, a 60-year-old man presented with ulcerated infiltrative plaques over his face; the diagnosis of cutaneous leishmaniasis with eczema-like eruptions was confirmed by histological examination and polymerase chain reaction assay of the skin biopsy (12). In our case, the eruptions were on the hand.
Conclusion
In endemic areas, or in patients with recent travel to endemic areas, physicians should be aware of atypical skin lesions, which should be investigated for cutaneous leishmaniasis.
4-(((4-Methoxyphenyl)amino)methyl)- N , N -dimethylaniline and 2-Methoxy-5-((phenylamino)methyl)phenol
Molecular structures of 4-(((4-methoxyphenyl)amino)methyl)-N,N-dimethylaniline and 2-methoxy-5-((phenylamino)methyl)phenol, synthesized via the Schiff base reduction route, are reported. The compounds consist of asymmetric units of C16H20N2O (1) and C14H15NO2 (2) in orthorhombic and monoclinic crystal systems, respectively. Compound 1 exhibits intermolecular C11—H11···N21 hydrogen bonding with C11···N21 = 3.463(4) Å. The hydroxyl group in 2 is also involved in intermolecular O2—H2···O2 and O2—H2···O21 hydrogen bonding, with O2···O11 = 2.8885(15) Å and O1···O21 = 2.9277(5) Å. The molecular structures of the compounds are stabilized by secondary intermolecular interactions of C1—H1B···O11 and C5—H···(C41, C51, C61, C71) for 1, and H···C, C—H···O, and N—H···C for 2. The reported compounds are important starting materials for the synthesis of many compounds, such as azo dyes and dithiocarbamates.
Synthesis of the Compounds
The compounds were synthesized by condensation of the primary amines with the corresponding aldehydes in methanol and sequential reduction of the resulting Schiff bases with sodium borohydride in methanol and dichloromethane at room temperature (Scheme 1).
Materials and Methods
All solvents and chemical reagents such as p-anisidine, aniline, 4-(dimethylamino) benzaldehyde, 3-hydroxy-4-methoxybenzaldehyde were obtained from Sigma Aldrich and used as obtained without further purification. The 1 H and 13 C NMR spectra were recorded on a Bruker (Billerica, MA, USA) Avance III 400 MHz spectrometer. The proton and carbon shifts are quoted in ppm relative to the solvent signals. FTIR spectra were recorded in the region 4000 to 650 cm −1 using a Cary 630 FTIR spectrometer (Agilent Technologies, Santa Clara, CA, USA). Single mass analysis was carried out using the Waters Micromass LCT Premier TOF-MS (Waters, Milford, MA, USA). The spectra are presented in Supplementary Figures S1-S8. Single crystal X-ray crystallography of the compounds were recorded on a Bruker (Billerica, MA, USA) APEX-II CCD diffractometer.
Synthesis of 4-(((4-Methoxyphenyl)amino)methyl)-N,N-dimethylaniline (1)
P-anisidine (1.1084 g, 0.009 mol) dissolved in 20 mL methanol was placed in a two-neck flask and 4-(dimethylamino)benzaldehyde (1.4919 g, 0.01 mol) was added; the resulting mixture was refluxed at 80 °C for 8 h. The solvent was then removed under vacuum to give a yellow oily product. The yellow oily product was dissolved in 1:1 dichloromethane:methanol (20 mL) and added in portions to sodium borohydride (0.7566 g, 0.02 mol) at room temperature and stirred for 20 h. The solvent was removed under vacuum and the product was extracted with dichloromethane and washed with water. The whitish solid product obtained was recrystallized from methanol to give single crystals suitable for X-ray crystallography. Yield: 1.7995 g (78%).
Synthesis of 2-Methoxy-5-((phenylamino)methyl)phenol (2)
Aniline (1.46 mL, 0.016 mol) dissolved in 20 mL methanol was placed in a two-neck flask and 3-hydroxy-4-methoxybenzaldehyde (2.7387 g, 0.018 mol) was added; the resulting mixture was refluxed at 80 °C for 8 h. The solvent was then removed under vacuum to give a yellow oily product. This was dissolved in 1:1 dichloromethane:methanol (20 mL), and sodium borohydride (1.3619 g, 0.036 mol) was added in portions at room temperature and stirred for 20 h. The solvent was removed under vacuum, after which the product was extracted with dichloromethane and washed several times with water. The solvent was removed to give a whitish solid product that was recrystallized from methanol to obtain single crystals suitable for X-ray crystallography. Yield: 2.9347 g.
Single Crystal X-ray Crystallography
Single colorless block- and plank-shaped crystals of 1 and 2 were obtained by slow evaporation of methanolic solutions of the compounds. Suitable crystals of (0.78 × 0.34 × 0.32) mm³ and (0.38 × 0.21 × 0.14) mm³ for 1 and 2 were selected and mounted on a MITIGEN holder in paratone oil on a Bruker APEX-II CCD diffractometer [22], and data were collected using Olex2 [23] with the crystal temperature kept at T = 100(2) K. The structures were solved in the space groups Pca2_1 and P2_1 for 1 and 2, respectively, with the ShelXS-2013 [24] structure solution program, using the direct method. The models were refined with version 2016/6 of ShelXL [25] using least-squares minimization.
Conclusions
The molecular structures of the compounds 4-(((4-methoxyphenyl)amino)methyl)-N,N-dimethylaniline (1) and 2-methoxy-5-((phenylamino)methyl)phenol (2) are reported. The compounds crystallized as monomeric entities in orthorhombic and monoclinic crystal systems for 1 and 2, respectively. Each compound is held together in the unit cell by a combination of intramolecular covalent bonds and intermolecular secondary interactions. The compounds are useful starting materials for the synthesis of many important organic compounds.
Supplementary Materials: The following are available online, including copies of 1H, 13C NMR, FTIR, and TOF mass spectra for compounds 1 and 2 (Figures S1-S8).
Dorzagliatin: A Breakthrough Glucokinase Activator Coming on Board to Treat Diabetes Mellitus
Dorzagliatin, an innovative dual-acting allosteric oral glucokinase activator that targets glucose homeostasis and insulin resistance, has gained approval for treating type 2 diabetes mellitus (T2DM). The effectiveness of existing antidiabetic treatments in enhancing beta cell (β-cell) activity is limited. Currently, there are no satisfactory medications available to address the fundamental deficiency in glucose sensing in glucokinase-maturity-onset diabetes of the young (GCK-MODY), which is caused by mutations in the glucokinase gene, so researchers have embarked on glucokinase activators. Dorzagliatin enhances the affinity of glucokinase for glucose and its glucose-sensing capacity, improves β-cell function, and reduces insulin resistance. Two phase 3 studies, an adjunct trial of dorzagliatin with metformin for T2DM patients and a monotherapy trial for drug-naïve T2DM patients, are key clinical trials that have shown a favorable safety and tolerability profile. They also demonstrated a rapid, sustained reduction in glycated hemoglobin (HbA1c) and a significant decrease in postprandial blood glucose. This review summarizes the substantial clinical evidence supporting the safety and efficacy of dorzagliatin in treating diabetes mellitus (DM) and clarifies the molecular mechanisms underlying its action.
Introduction And Background
Diabetes mellitus (DM) is a globally prevalent metabolic disorder characterized by high blood glucose levels and associated vascular complications. It develops due to the improper functioning of pancreatic beta cells (β-cells), resulting in insufficient insulin production or insulin resistance. India has often been referred to as the diabetes capital of the world, with a diabetes prevalence of 9.3%, corresponding to almost 77 million affected people, second only to China. By the year 2045, it is expected that 134 million people will be affected by diabetes. Besides causing an increased risk of cardio- and cerebrovascular complications, diabetes also leads to renal failure and visual impairment. It has been reported that there were 3 million deaths due to diabetic renal complications in 2019, despite the available medical treatment [1].
The quest for a revolutionary antidiabetic medication commenced with the discovery of insulin by Banting and Best in 1921. This breakthrough paved the way for many antidiabetic drugs targeting various insulin and glucose metabolic pathways, culminating in the approval of various distinct drugs. Despite the plethora of drugs available, there still exists a need for improvement of glycemic control and prevention of diabetic complications and drug-related adverse effects in the diabetic population. Additionally, it has been revealed that the global diabetes population had higher death rates, with a twofold (95% confidence interval: 1.37-2.64) increase, during coronavirus disease 2019 (COVID-19) infections, underscoring the need for more effective antidiabetic medications [2]. It might also be pertinent to understand that the presentation of diabetes and its complications differs between Asians and Europeans, possibly due to genetic and epigenetic variations.
Individuals with glucokinase-maturity-onset diabetes of the young (GCK-MODY) exhibit decreased β-cell glucose sensitivity and compromised alpha cell (α-cell) glucose sensing due to inactivating mutations in the glucokinase (GCK) gene [3,4]. There is not yet a glucose-lowering drug that effectively addresses the fundamental problem of insufficient glucose sensing in GCK-MODY. Dorzagliatin is a groundbreaking, dual-acting allosteric activator of GCK, the first of its kind, which binds directly to a pocket distal to the active site of GCK [3]. This reduces the threshold for glucose-stimulated insulin secretion and enhances GCK's affinity for glucose, causing earlier and more robust insulin release in response to glucose and aiding overall glucose homeostasis [5,6]. This review focuses on the mechanism of action, trial evidence, and safety outcomes from the recently published literature on dorzagliatin.
FIGURE 1: Physiological role of glucokinase in insulin secretion
Developed by the authors using the BioRender website. Reference: [9].

Conformational changes modulate the functional properties of GCK, with the super-open state being inactive and the open and closed states being active. GCK can be kept in the two high-affinity conformations, the closed and open forms, by the binding of a GCKA to GCK, which prevents GCK from changing to the super-open form [8,9]. This sustains the depolarization of the pancreatic β-cells, facilitating the opening of calcium channels and promoting insulin release [6].
GCK regulates insulin secretion and release by controlling glycolytic and oxidative adenosine triphosphate (ATP) production, thus closing the potassium (K+) channels and causing gradual depolarization of the pancreatic cell. Upon reaching the threshold membrane potential, the L-type calcium (Ca2+) channels open, leading to insulin release through the activation of various signaling pathways, including those involving Ca2+, cyclic adenosine monophosphate (cAMP), inositol-3-phosphate, and protein kinase C, as depicted in Fig. 2 [6]. It is believed that high glucose concentrations independently trigger GCK expression in β-cells, making them more sensitive to glucose-stimulated insulin release and biosynthesis [10]. Based on their mechanistic role, GCKAs have been explored as potential antidiabetic medications since 2001. Over 20 GCKAs have been tried, and among them one reached its culmination in 2022, namely dorzagliatin [11].
In the Pancreas
It was first discovered in 1927 that D-glucose plays a significant role in controlling blood sugar levels. It was shown that injecting minimal amounts of D-glucose into a dog's pancreaticoduodenal artery might lower its overall blood sugar levels [12,13]. Significant advancements were made when it was discovered, using the isolated perfused rat pancreas and portions of the rabbit pancreas, that D-glucose stimulates insulin secretion and that insulin secretion is coupled with glucose metabolism [14,15]. GCK, a hexokinase 4 enzyme, was identified by Matschinsky and Ellerman in 1968 [16]. The discoveries by Dean and Matthews in 1970 and Meissner and Schmelz in 1974 contributed to the understanding that GCK activity is linked to insulin release, as glucose reduces the β-cell membrane potential [17,18]. Ashcroft and colleagues have shown that glucose triggers the closure of K+ ATP channels in pancreatic β-cells, indicating that the cellular energy state is crucial for linking glucose stimulation to insulin secretion [13,19].
In one study, transgenic mice exhibiting a twofold increase in hexokinase activity, specifically in pancreatic β-cells, were developed [4]. This enhancement causes isolated pancreatic islets to secrete insulin more quickly in response to glucose, increases serum insulin levels in vivo, and lowers the blood glucose levels of these transgenic animals by 20-50% compared to controls [20]. Mice with one functioning GCK allele have a reduced β-cell response to glucose and become hyperglycemic when the mouse GCK gene is inactivated. On the other hand, mice that lack GCK entirely are born with severe diabetes and usually die. Mice with limited expression of GCK in their β-cells survive, with glucose levels ranging from normal to mildly diabetic [21-24]. GCKAs enhanced hepatic glucose uptake, decreased blood glucose levels, and improved glucose tolerance test outcomes. GCKA drugs were discovered by Grimsby and colleagues [25]. The important roles of magnesium ATP and 5'-AMP as metabolic coupling factors in the glucose-induced stimulation of insulin secretion were highlighted in a simple computational model for GCK-based β-cell glucose sensing [13,26,27].
In the Liver
Cre, the recombinase of the standard Cre-Lox (Cre-loxP) system, is a tyrosine site-specific recombinase that recognizes and binds to specific DNA sequences known as loxP sites [28]. It deletes DNA segments between two loxP sites. This is a commonly utilized technology for gene editing in mammals. A method for reducing GCK expression in mouse livers was examined. The mice appeared normal at birth but developed elevated fasting blood glucose levels as they aged. After six weeks, they exhibited impaired glucose tolerance and hyperglycemia. Higher levels of intracellular glucose 6-phosphate, glycogen, and L-pyruvate kinase activity were seen in a different group in which hepatic GCK expression was increased. These findings suggest that GCK overexpression may directly increase glycolysis and glycogen synthesis in vivo [28-31].
Evidence From Clinical Trials

Phase 1: Single Ascending Dose Study
The randomized phase 1 placebo-controlled clinical trial of HMS5552 (NCT01952535) was conducted at Zhongshan Hospital, China, to establish the safety profile; the absorption, distribution, metabolism, and excretion (ADME) properties; and the target effects in healthy volunteers (HV). Participants aged 18 to 45 years, with a body mass index between 18 and 24 kg/m² and normal physical condition and laboratory parameters, including electrocardiogram, serology, and urinalysis, were recruited. Six doses of HMS5552 (5, 10, 15, 25, 35, and 50 mg) were tested among 60 participants, including 31 males and 29 females. Pharmacokinetic and pharmacodynamic (PD) assessments were performed under fasting and fed conditions. PD parameters were the percentage change in glucose, the average glucose area under the curve (AUC), and radio-immuno-based estimation of insulin. None of the participants encountered any serious adverse events, apart from six mild drug-related side effects such as dizziness, palpitation, sweating, and proteinuria. Fifty milligrams was determined to be the highest tolerated dose with the best safety profile, assessed and graded using the Common Terminology Criteria for Adverse Events version 4.02. This medication demonstrated a dose-dependent decrease in blood glucose over the 5-50 mg dosage range [32].
Phase 2: Clinical Trial
A multicenter phase 2 randomized clinical trial (NCT02561338) was conducted with dorzagliatin at four different regimens, 75 mg once daily (OD), 100 mg OD, 50 mg twice daily (BD), and 75 mg BD, for 12 weeks, along with an oral placebo, among T2DM patients. This study was conducted at 22 trial sites in China, recruiting patients of both genders. Study participants were either treatment-naïve or on treatment with oral antidiabetic agents, with a glycated hemoglobin (HbA1c) level within the range of 7.5-10.5%. Patients with glomerular filtration rates less than 60 ml/min, or with elevated systolic and diastolic blood pressure of more than 160 and 100 mmHg, respectively, were excluded. Improvement in HbA1c was considered the primary endpoint. Patients were allocated into the groups in a 1:1:1:1 ratio through permuted block randomization. During the four-week run-in phase, other antidiabetic medications were withdrawn and replaced with a placebo. Testing for insulin, fasting plasma glucose (FPG), and HbA1c was performed at baseline, every two weeks, and at the end of the therapy period. β-Cell function and postprandial plasma glucose were assessed at the beginning and conclusion of the treatment phase. Out of 619 screened patients, 258 were randomized into five groups: 53 in the placebo group, 53 in the 75 mg OD group, 50 in the 100 mg OD group, 51 in the 50 mg BD group, and 51 in the 75 mg BD group. The study found that after 12 weeks, dorzagliatin treatment led to a significant reduction in HbA1c levels of 0.44% (95% CI: -0.78 to -0.10) in the 50 mg BD group and 0.77% (-1.11 to -0.43) in the 75 mg BD group. Additionally, 22 participants (44%) in the 50 mg BD group (odds ratio: 3.70 (95% CI: 1.46-9.38)) and 22 (45%) in the 75 mg BD group (odds ratio: 4.33 (95% CI: 1.54-12.19)) achieved an optimal response with HbA1c levels below 7%. Those on dorzagliatin 50 mg BD and 75 mg BD had a 2.99 (95% CI: 0.97-9.25) and 4.33 (95% CI: 1.43-13.13) times greater likelihood of reaching optimal HbA1c levels without experiencing hypoglycemia or weight gain. In the homeostatic model assessment for insulin resistance (HOMA-IR), a significant improvement over placebo was documented only in the 75 mg BD group. It was evident that drug-naïve patients showed considerable change in HbA1c with all four regimens of dorzagliatin compared to patients previously on standard antidiabetic medications. Treatment-related mild side effects were noted. Around 6%, 4%, 6%, and 6% of patients in the 75 mg OD, 100 mg OD, 50 mg BD, and 75 mg BD groups, respectively, reported hypoglycemia, and all of these episodes were transient. It was concluded from the study that dorzagliatin exhibited dose-dependent efficacy and limited side effects during 12 weeks of therapy, and 75 mg BD was declared the minimum effective dose for further clinical evaluation. Extrapolation of the study results to different ethnic groups might be restricted, as the study included only a Chinese population [33].
Phase 3 Clinical Trial: Study of Early and Exploratory Development (SEED Study)
Drug-naïve Chinese patients with T2DM were assessed for the safety and effectiveness of dorzagliatin in the phase 3 SEED trial involving 40 sites. Within the trial, there was a 24-week double-blind, placebo-controlled phase; participants then received 75 mg dorzagliatin BD during the 28-week open-label phase; after that, there was a one-week treatment-free follow-up. Participants were randomly assigned 2:1 to receive dorzagliatin or a placebo after a two-week screening period. At week 24, the dorzagliatin group showed an HbA1c decrease of 1.07% (estimated treatment difference vs. placebo: -0.57%; 95% CI: -0.79 to -0.36; P<0.001). An estimated treatment difference of 3.28 (95% CI: 0.44-6.11; P<0.05) was observed in β-cell function when dorzagliatin was compared to placebo (homeostatic model assessment of β-cell function, HOMA2-β, change: -0.72). From week 8 to week 24, subgroup analysis showed higher glycemic control rates with dorzagliatin (odds ratio: 3.60; 95% CI: 1.81-7.14; P<0.001), particularly in patients with baseline HbA1c ≤8.0%. Dorzagliatin significantly reduced postprandial and fasting glucose levels throughout the 52-week period. In addition to acting quickly and not causing weight gain, the medication was well-tolerated. During the double-blind phase, there was only one case of mild hypoglycemia in each group, and few adverse events were seen. The results showed that dorzagliatin improves early-phase insulin release and β-cell function, resulting in rapid and stable glycemic control without raising the risk of dyslipidemia or liver damage [34].
Diabetes Remission Clinical Trial (DREAM) Extension Study
A nonpharmacologic observational clinical trial extending the SEED project, the DREAM study, included 69 patients in 2023 who had met investigator-assessed glycemic objectives after 52 weeks of dorzagliatin medication. The study's primary goal was to determine the probability of diabetes remission following the cessation of dorzagliatin, without the need for hypoglycemic medications. The Kaplan-Meier approach was employed to determine this outcome. The results demonstrated that at 52 weeks, the probability of diabetes remission had reached 65.2%. The primary endpoint showed that patients with a baseline HbA1c of less than 6.5% had a remission probability of 80.1% at week 52, greater than that of patients with an HbA1c of >6.5%. After 46 weeks of dorzagliatin therapy, individuals with a time in range (TIR) of 80% or above had a greater probability of remission than those with a TIR of less than 80% [35].
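For orientation, the week-52 remission probability reported above can be reproduced in outline with a standard Kaplan-Meier estimator; the sketch below uses the third-party lifelines package and entirely hypothetical follow-up data, not the study's dataset:

```python
from lifelines import KaplanMeierFitter

# Hypothetical per-patient data: weeks of medication-free follow-up, and
# whether loss of remission (the event) was observed (1) or censored (0).
weeks    = [12, 30, 45, 52, 52, 52]
relapsed = [1,  1,  1,  0,  0,  0]

kmf = KaplanMeierFitter()
kmf.fit(durations=weeks, event_observed=relapsed)

# Survival at week 52 = estimated probability of still being in remission.
print(f"Remission probability at week 52: {kmf.predict(52):.1%}")
```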
Phase 3 Clinical Trial: Dorzagliatin Assessment for Weekly Normoglycemia (DAWN) Study
Patients with T2DM that was not adequately controlled with metformin alone (HbA1c levels between 7.5% and 10%) were recruited for a phase 3 randomized, double-blind, placebo-controlled study to assess the safety and effectiveness of dorzagliatin as an adjuvant medication. Patients were randomized in a 1:1 ratio to receive metformin (1500 mg/day) plus dorzagliatin (75 mg BD) or a placebo for 24 weeks, followed by a 28-week open-label phase. The study was carried out at 73 sites in China. HbA1c, FPG, and two-hour postprandial glucose (2h-PPG) were the primary endpoints. HOMA2-β and HOMA2-IR indices were utilized to evaluate β-cell function and insulin resistance. After 24 weeks, the group that received dorzagliatin plus metformin demonstrated a substantial decrease in HbA1c of 1.02 percentage points, while the placebo group experienced a reduction of 0.36 percentage points. With an estimated treatment difference of -0.66 percentage points (95% CI: -0.79 to -0.53; P<0.001), the dorzagliatin group had a significantly greater HbA1c response rate of 44.4%. Furthermore, greater reductions in FPG and 2h-PPG levels were observed with dorzagliatin. There were also improvements in β-cell function and insulin resistance; the dorzagliatin group had a HOMA2-β change of 3.82 compared to 1.40 in the placebo group (estimated treatment difference: 2.43; 95% CI: 0.59-4.26; P<0.01). In the dorzagliatin group, the HOMA2-IR change was -0.17, while in the placebo group it was -0.09 (estimated treatment difference: -0.08; 95% CI: -0.15 to -0.01; P<0.05). Through week 52, there was no increase in HbA1c. The coadministration of metformin and dorzagliatin resulted in only mild hypoglycemia and was well-tolerated. Further studies with larger sample sizes and longer-term safety data would be needed to establish the use of dorzagliatin as an add-on therapy to metformin for improving glycemic control in T2DM [36].
Dorzagliatin in Kidney Disease
In a study with 17 Chinese participants, eight of whom had end-stage renal disease (ESRD) and nine of whom were HVs, the pharmacokinetics of a single oral dose of dorzagliatin, 25 mg, were investigated. Part 2 was required if the part 1 results showed a geometric mean ratio of the AUC, that is, the AUC from the administration time to the last measurable concentration (AUClast) or the AUC extrapolated to infinity (AUCinf), between ESRD patients and HVs surpassing 100%. Dialysis-free ESRD patients and healthy controls participated in part 1, while part 2 involved patients with renal impairment (RI) varying from mild to severe. Eligibility was assessed using the Modification of Diet in Renal Disease formula specific to China, by which the estimated glomerular filtration rate (eGFR) categorizes the severity of RI. The eGFR categories were normal renal function (eGFR ≥90 ml/min/1.73 m²), mild RI (eGFR 60-89 ml/min/1.73 m²), moderate RI (eGFR 30-59 ml/min/1.73 m²), severe RI (eGFR 15-29 ml/min/1.73 m²), and ESRD not yet on dialysis (eGFR <15 ml/min/1.73 m²). The participants were given dorzagliatin following an overnight fast and a predetermined meal. High-performance liquid chromatography with tandem mass spectrometry was employed. Side effects were relatively few. The pharmacokinetics of dorzagliatin were expected to be largely unaffected by RI, according to physiologically based pharmacokinetic (PBPK) modeling. As the HV-to-ESRD ratios of the PBPK-predicted and the observed clinical renal clearance (CLr) were 1.13 and 0.74, respectively, PBPK modeling could predict even the small renal excretion fraction of dorzagliatin with accuracy. This prediction was confirmed by the part 1 results, which showed that patients with diabetic kidney disease (DKD) did not require dose adjustments. Over a dosage range of 5-150 mg BD, dorzagliatin demonstrated a sizable safety margin, linear pharmacokinetics, and a predictable dose-response relationship. Consequently, at a dosage of 75 mg BD, a 30% increase in exposure (AUC) is not anticipated to have an impact on effectiveness or safety [37]. The major clinical aspects of the significant dorzagliatin trials are presented in Table 1.
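The eGFR banding used for staging renal impairment in this study is simple enough to state as a rule; the following is a minimal sketch (our own helper, with thresholds taken from the categories listed above):

```python
def renal_impairment_category(egfr):
    """Classify renal impairment from eGFR in ml/min/1.73 m^2."""
    if egfr >= 90:
        return "normal renal function"
    if egfr >= 60:
        return "mild renal impairment"
    if egfr >= 30:
        return "moderate renal impairment"
    if egfr >= 15:
        return "severe renal impairment"
    return "ESRD (not yet on dialysis)"
```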
Dorzagliatin Approval
While most GCKAs faced challenges in phase 2 clinical trials despite their antidiabetic potential, dorzagliatin stands out as an exception. MK-0941 was unsuccessful because of high hypoglycemia rates and limited efficacy. The hepatic GCK stimulator PF-04991532 showed a 0.7% reduction in HbA1c over 12 weeks but was halted due to toxic metabolites, a problem that also affected the dual activator piragliatin. In contrast, Hua Medicine reported in December 2020 that two phase 3 trials of dorzagliatin, SEED (HMM0301) for drug-naïve T2DM patients and DAWN (HMM0302) for those tolerant to metformin, resulted in significant decreases in 2h-PPG and HbA1c levels. Additionally, dorzagliatin enhanced β-cell function, decreased insulin resistance, and was well-tolerated with a good safety profile. It also exhibited synergistic effects when used with sitagliptin and empagliflozin in phase 1 studies. With its approval in China in September 2022, dorzagliatin is now prescribed both as a monotherapy and in combination with metformin for adults with T2DM. Considering the genetic and dietary parallels, dorzagliatin is viewed as a hopeful option for glycemic management in the Indian diabetic population.
Merits of Dorzagliatin
Dorzagliatin reduces the risk of hypoglycemia, liver damage, and weight gain while improving glycemic control by using the glucose-sensing capabilities of GCK. Dorzagliatin has successfully and safely decreased HbA1c levels in patients who were previously unresponsive, in contrast to other oral antidiabetic medications like thiazolidinediones, dipeptidyl peptidase-4 (DPP-4) inhibitors, α-glucosidase inhibitors, and sodium-glucose cotransporter-2 (SGLT2) inhibitors. It can be used safely in DKD, including ESRD, without dose adjustment. Dorzagliatin can be used in conjunction with other diabetes drugs to offer prolonged glycemic control, which is essential for managing chronic diabetes. Its specific mechanism targets GCK, lowering the risk of both hypoglycemia and hyperglycemia. Also, by maintaining the function of pancreatic β-cells, it may lessen the long-term effects of T2DM and slow its progression.
Disadvantages With GCKAs
Excess fat deposition in the liver and elevated triglycerides and blood pressure place a further burden on T2DM patients, who are already prone to metabolic complications. Fading of GCK activity over the years decreases the antidiabetic efficacy of GCKAs [38]. Although dorzagliatin exhibited adverse event rates similar to those of the control groups and appears relatively safe, extensive studies and real-world treatment experience are still needed to characterize the adverse reactions of the drug.
Future scope of research
Further research is needed to support the theory that glucose homeostasis and hormone release may also be influenced by the ability of GCK to sense glucose at the level of neurons, adrenal glands, the gut, and the anterior pituitary [13].
Neurons
According to study reports, pro-opiomelanocortin, neuropeptide Y, and γ-aminobutyric acid-containing hypothalamic neurons, along with the norepinephrine neurons of the locus ceruleus, express GCK mRNA.Four selective GCK inhibitors were found to inhibit intracellular Ca2+ oscillations in glucose-excited neurons and to stimulate them in glucose-inhibited neurons.These findings led researchers to explore the mechanism of GCK action further [13,39].
Hypoglycemia in the nucleus tractus solitarius of the vagus nerve activates excitatory neurons that express glucose transporters (GLUT2), and GCK mediates this process. An increase in AMP and a decrease in blood glucose levels promote the inhibition of the K+ leak current and the activation of AMP-dependent protein kinase. These processes result in cellular hyperpolarization and enhanced afferent electrical activity. Consequently, the importance of GCK as a glucose sensor expands the scope of this fascinating field of study [13,40].
Hypothalamus
GCK-mediated energy homeostasis is centered in the arcuate nucleus of the hypothalamus. A study in male Wistar rats with elevated GCK activity in the arcuate nucleus demonstrated an increase in food intake and excess weight gain. Similar effects were replicated when the K+ ATP channel was blocked by glibenclamide administered intra-arcuately. Additionally, intra-arcuate injection of diazoxide, a K+ ATP channel activator, decreased GCK activity. It has also been demonstrated that the binding of GCK to its inhibitor is reduced during fasting, when GCK activity is accelerated, and increased by hyperglycemia, which suggests an option to evaluate the role of GCK in healthy patients with low body mass index and in low-weight patients [41].
According to the hypothesis, GCK is important for glucose signaling in the neurons of the ventromedial hypothalamic nucleus (VMN), the center for feeding and glucose homeostasis regulation. Researchers examined mice whose GCK gene in the VMN's Sf1 neurons was genetically inactivated to learn more about the function of GCK in VMN glucose sensing and physiological regulation. This study used whole-cell patch clamp recordings in brain slices to study the function of GCK in glucose sensing. The results demonstrated that loss of GCK expression prevented glucose sensing by both glucose-excited and glucose-inhibited Sf1 neurons in a gender-specific manner [42]. Further studies would be required to confirm the role of GCK expression in the glucose-sensing ability of VMN neurons.
Confirmatory studies regarding the safety and efficacy of dorzagliatin have yet to be carried out in patients with cardiovascular disease, in obese patients, and in combination with other standard treatment regimens. Dorzagliatin can also be tested as an early intervention and evaluated for its effects on liver enzymes, drug-drug interactions, and real-world effectiveness.
Adherence and pharmacoeconomic evaluation
A prospective follow-up study of patients who achieved stable glucose levels and HbA1c after long-term treatment with dorzagliatin found, interestingly, a diabetes remission rate over 52 weeks of 52% (by American Diabetes Association guidelines 2021) to 65% (by Kaplan-Meier remission probability) among Chinese T2DM patients [35]. If these results are replicated in large-scale studies of five years or longer duration, the benefit to the diabetic population could be tremendous. The annual expenditure on antidiabetic treatment for an adult was USD 35,219 in 2022 for the most cost-effective regimen [43]. A systematic review found that medication adherence varied between 38.5% and 93.1%; non-adherence to antidiabetic medication was attributed to depression and medication costs, the two factors consistently associated with non-adherence [44]. Through significant time-in-range and disease remission, dorzagliatin could have huge pharmacoeconomic implications for the 537 million adults with diabetes worldwide, provided its safety is established [1].
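For context on the Kaplan-Meier remission probability cited above, the estimator itself can be sketched in a few lines. The following is an illustration of the method only, not the trial's analysis code; the follow-up times and event flags are hypothetical.

```python
# Minimal Kaplan-Meier sketch. events: 1 = event observed, 0 = censored.
def kaplan_meier(times, events):
    """Return [(t, S(t))], the probability of remaining event-free past t."""
    paired = sorted(zip(times, events))
    n_at_risk = len(paired)
    surv = 1.0
    curve = []
    i = 0
    while i < len(paired):
        t = paired[i][0]
        d = sum(1 for tt, e in paired if tt == t and e == 1)  # events at time t
        c = sum(1 for tt, _ in paired if tt == t)             # all leaving the risk set at t
        if d > 0:
            surv *= (n_at_risk - d) / n_at_risk
            curve.append((t, surv))
        n_at_risk -= c
        i += c
    return curve

# Hypothetical follow-up data (weeks, event flag):
print(kaplan_meier([12, 24, 24, 36, 52, 52], [1, 0, 1, 0, 0, 1]))
```

Whether remission onset or remission loss is coded as the event determines whether a figure like the 65% above is read from 1 − S(t) or from S(t).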
Conclusions
Dorzagliatin is a novel and potent drug for the treatment of T2DM. In hepatocytes and pancreatic β-cells, GCK is the primary target, as it is necessary for glucose sensing and homeostasis. By modulating GCK activity, dorzagliatin improves insulin sensitivity. Although early studies indicate it is generally well tolerated, certain research gaps and challenges remain, such as the insufficiency of long-term safety data. Moreover, subsequent clinical evaluations need to address the efficacy of combination therapy and its cost-effectiveness in various global populations.
[Table fragment (dose-ranging trial summary): dorzagliatin at one of four doses (75 mg OD, 100 mg OD, 50 mg BD, or 75 mg BD) vs. placebo gave a dose-dependent reduction in HbA1c, with 75 mg found to be the least effective dose for further study; as an add-on to metformin 1500 mg vs. placebo, it improved HOMA2-β and HOMA2-IR and reduced FPG.]
Continence Rate and Oncological Feasibility after Total Transurethral Resection of the Prostate as an Alternative Therapy for the Treatment of Prostate Cancer: A Pilot Study
Purpose The value of total transurethral resection of prostate cancer (TURPC) as an alternative therapy was first recognized by Hans J. Reuter. Thus, we conducted a study of prospectively collected data to verify total TURPC as an alternative therapy for localized prostate cancer. Methods From January 2008 to July 2011, 14 patients with a mean age of 76.1 years (range, 66 to 89 years) with clinically localized prostate cancer were treated by prostatic resection by the corresponding author with curative intention. Results The mean duration of TURPC was 51.7 minutes (range, 30 to 120 minutes) and the mean amount of prostatic tissue resected was 21.2 g (range, 5 to 66 g). An intra- and/or postoperative blood transfusion was necessary in 2 cases. Hyponatremia was found in 7 patients. Six months after TURPC, 3 cases of grade 1 and 1 case of grade 2 incontinence were observed. Three patients in the high-risk group did not achieve a prostate specific antigen (PSA) nadir of ≤0.2 ng/mL. PSA recurrence occurred in one case in our series. Newly developed lymph node or distant metastases were not observed during the follow-up period. Conclusions According to our results, transurethral resection for prostate cancer can be performed with reasonable oncological results. The PSA nadir levels, and rates of biochemical failure and postoperative complications, including incontinence, were comparable with the published results for other procedures. TURPC is also inexpensive and non-invasive, and requires short hospitalization and a short surgical time without vesicourethral anastomosis.
INTRODUCTION
The "gold standard" of prostate cancer treatment is now open, laparoscopic, or robot-assisted laparoscopic prostatectomy with seminal vesiculectomy and, if indicated, staging lymphadenectomy. However, these procedures are not suitable for all patients due to several factors, including their high surgical invasiveness, risk for aged patients and those with comorbidities, and expense. Several alternative treatments, such as high-intensity focused ultrasound (HIFU), transurethral microwave thermotherapy, cryotherapy, and brachytherapy, have been developed. However, the existence of a satisfactory alternative method with the potential to achieve complete cancer control was not conclusively proven prior to this study [1-7].
As with open, laparoscopic, or robot-assisted laparoscopic prostatectomy, a complete resection of the prostate gland including cancerous tissues can be achieved using total transurethral resection of prostate cancer (TURPC), whose value as an alternative therapy was first recognized by Hans J. Reuter. This can be complemented by simultaneous laparoscopic staging lymphadenectomy and seminal vesiculectomy. Thus, we conducted this study to verify total TURPC, with its related complications, as an alternative therapy for localized prostate cancer.
MATERIALS AND METHODS
The study began in January 2008 and ended in July 2011, and was approved by the Institutional Review Board. A total of 14 patients with a mean age of 76.1 years (range, 66 to 89 years) treated by TURPC were included in this study. Neoadjuvant hormonal therapy was administered in 5 cases to reduce total prostate volume for a shorter operating time. The patients included in this study had proven localized prostate cancer of clinical stage T1-T3a by the 2002 American Joint Committee on Cancer tumor, node, metastasis (TNM) staging, based on histological analysis and preoperative staging by digital rectal examination, International Prostate Symptom Score, maximal flow rate, transrectal ultrasound, prostate-specific antigen (PSA) levels, bone scan, abdominal and pelvic computed tomography (CT), and chest X-ray. Preoperatively detected metastases or treatment by radiotherapy resulted in exclusion from the study. This is a pilot study conducted for the first time in Korea. Therefore, most of the patients were aged over 70 years, with or without comorbidity, and thus were not suitable candidates for conventional radical prostatectomy. The patients were stratified into the following 3 risk groups: low risk (clinical stage, T1a-T2a; preoperative PSA level, ≤10 ng/mL; and biopsy Gleason score, ≤6), intermediate risk (clinical stage, T2b-T2c; preoperative PSA level, 10 to 20 ng/mL; and biopsy Gleason score, 7), and high risk (clinical stage, T3a; preoperative PSA level, >20 ng/mL; or biopsy Gleason score, 8 to 10) [8].
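The risk stratification described above can be written as a short classification rule. The sketch below is an interpretation: the text lists stage, PSA, and Gleason criteria per group, and treating any single high-risk criterion as sufficient (per the "or" in the high-risk definition) is our assumption for how the lower tiers resolve as well.

```python
# Hedged sketch of the risk grouping in [8]; function and argument names are ours.
def risk_group(clinical_stage: str, psa_ng_ml: float, gleason: int) -> str:
    """clinical_stage like 'T1a'..'T3a'; psa in ng/mL; gleason is the biopsy score."""
    stage = clinical_stage.upper()
    if stage.startswith("T3") or psa_ng_ml > 20 or gleason >= 8:
        return "high"
    if stage in ("T2B", "T2C") or 10 < psa_ng_ml <= 20 or gleason == 7:
        return "intermediate"
    return "low"

print(risk_group("T2a", 8.5, 6))  # -> low
print(risk_group("T3a", 6.0, 6))  # -> high (stage alone suffices)
```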
We administered prophylactic antibiotics by intravenous injection 30 minutes before surgery. All patients were operated on under spinal or epidural anesthesia. Irrigating fluid level did not exceed 60 cm above the symphysis of the patient in the lithotomy position. Five patients had been receiving neoadjuvant hormonal treatment with bicalutamide and luteinizing hormone releasing hormone. Monopolar transurethral resection of prostate (TURP) was performed with a 26-Fr continuous flow resectoscope (Karl Storz GmbH & Co. KG, Tuttlingen, Germany) using Urosol (CJ, Seoul, Korea). The maximum electrical output of the resectoscope was limited to 140 W for cutting and 80 W for clotting. At the end of the monopolar TURPC, a 22-Fr 3-way urethral Foley catheter with an inflated 50 mL balloon was inserted, and gentle traction was maintained at about 250 g for 4 hours. Continuous saline irrigation was performed until the urine draining from the urethral Foley catheter became clear in the absence of irrigation. Patients were usually discharged on the second postoperative day. The transurethral catheter was removed on the seventh postoperative day in our outpatient department.
Follow-up assessments, with serum PSA testing, rectal examination, and transrectal ultrasound, were given every 3 months in the first and second year, then every 6 months with CT and bone scans once a year for the first 5 years. Subsequent follow-up visits took place once a year.
RESULTS
The 14 patients undergoing total TURPC were aged 66 to 89 years (mean age, 76.1 years). The mean duration of TURPC was 51.7 minutes (range, 30 to 120 minutes), and the mean amount of resected prostatic tissue was 21.2 g (range, 5 to 66 g). The number of patients in each risk group, based on pretreatment PSA level, biopsy Gleason score, and clinical stage (Table 1), were as follows: low risk, 1; intermediate risk, 6; and high risk, 7.
Perioperative complications are listed in Table 2. An intra- and/or postoperative blood transfusion was necessary in 2 cases. Endoscopic examination for bleeding was not required. There was no need in any of the cases to perform open surgery due to complications of the total TURPC. Hyponatremia was found in 7 patients. Bladder neck contracture requiring sound dilatation occurred in 1 case. Incontinence was classified into 3 grades according to the Stamey scale. Six months after TURPC, 3 cases of grade 1, 1 case of grade 2, and 0 cases of grade 3 incontinence were observed.
Three patients in the high-risk group did not achieve a PSA nadir of ≤0.2 ng/mL. The Prostate Cancer Guidelines Panel of the American Urological Association recommends defining biochemical recurrence after radical prostatectomy as an initial serum PSA level of 0.2 ng/mL or greater, with a second confirmatory PSA level greater than 0.2 ng/mL [9]. PSA recurrence occurred in 1 case in our series.
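The cited AUA definition reduces to a two-measurement rule, sketched below; the function name and interface are ours.

```python
# AUA biochemical recurrence after radical prostatectomy [9]: an initial
# PSA of 0.2 ng/mL or greater, confirmed by a second value greater than 0.2.
def biochemical_recurrence(psa_initial: float, psa_confirmatory: float) -> bool:
    return psa_initial >= 0.2 and psa_confirmatory > 0.2

print(biochemical_recurrence(0.25, 0.30))  # True
print(biochemical_recurrence(0.25, 0.15))  # False (not confirmed)
```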
Postoperatively, 7 patients, including the 3 who did not achieve a PSA nadir of ≤0.2 ng/mL and 1 with biochemical recurrence, received adjuvant hormonal therapy. Newly developed lymph node or distant metastases were not observed during the follow-up period.
DISCUSSION
Open, laparoscopic, or robot-assisted laparoscopic prostatectomy is unsuitable for many patients, for reasons including age, general high-risk factors, prior prostate surgery, obesity, and socioeconomic factors such as religious restrictions or poverty. Analyses of alternative methods such as HIFU, transurethral microwave thermotherapy, cryotherapy, and brachytherapy have been published, but with follow-up times of <10 years and low case numbers; these assessments were therefore of limited value. A PSA nadir of 0.2 ng/mL may or may not be achieved after 6 months, which does not necessarily indicate the radical character of these procedures [1-7,10].
As with conventional surgery, a complete resection of the prostate gland, including cancerous tissues, can be achieved using total TURPC under any circumstances. The boundaries of the prostate capsule, the bladder neck, and the membranous urethra can be identified with the aid of video image magnification (Fig. 1).
Reuter et al. [11,12] performed a laparoscopic staging lymphadenectomy in patients at potential risk of cancer and with a life expectancy of >10 years. Patients who have tumors in the preserved seminal vesicle or lymph nodes may not experience a PSA nadir of ≤0.2 ng/mL postoperatively and will require adjuvant hormonal therapy with or without laparoscopic seminal vesiculectomy or lymphadenectomy. However, we gave adjuvant hormonal therapy in only 3 cases with a PSA nadir >0.2 ng/mL, because of patient age, comorbidities, and refusal of invasive intervention.
For prostate cancer focal therapy, 4 modalities appear to have the most clinical promise: HIFU, cryotherapy, radiation therapy, and photodynamic therapy [13]. If patients with localized prostate cancer are suitable candidates for focal therapy, we can choose partial TURPC as an alternative procedure. In our experience, we performed right-side TURPC as a focal therapy in a case of prostate cancer confined within the right lateral lobe. Impotence and incontinence are the major side effects of surgical treatment of prostate cancer, due to the vulnerability of the nerve plexus and blood vessels supplying the periprostatic tissue, sphincter, and penis [14]. Accordingly, the transurethral approach to the prostate is optimal because the periprostatic tissue, containing the neurovascular bundles, remains intact. The electrical current through the tissue is limited to 140 W for cutting and 80 W for clotting. The tissue is coagulated and cut without the formation of necrosis and with less depth, thus protecting the neurovascular bundle.

Irrigation is needed to clarify the surgical field, and since the cutting is done by electricity, the irrigating fluid should be free of electrolytes. However, transurethral resection carries the risk of TUR syndrome caused by irrigation fluid absorption. The increase in dynamic and static pressure increases the risk of fluid absorption, which must be avoided to prevent TUR syndrome and the spread of prostate cancer cells [14,15]. Reuter et al. reported that the key to circumventing such problems is to use low-pressure irrigation with an irrigation fluid level less than 20 cm, and preferably 10 cm, above the pubic region using a suprapubic trocar. Thus, the capsule can be resected without being limited by the need for a short surgical time or by prostate weight; the absorption of fluid through capsular perforations is prevented; and blood loss is reduced by better and more spontaneous control of arterial and venous bleeding [11-15]. TUR syndrome was defined as a serum sodium level of 125 mmol/L or less after TURP with 2 or more symptoms or signs of TUR syndrome, such as nausea, vomiting, bradycardia, hypotension, hypertension, chest pain, mental confusion, anxiety, paresthesia, and visual disturbance. TUR syndrome did not occur, but hyponatremia was seen in 7 cases in our series; it was treated easily using intravenous furosemide injection with or without 3% NaCl solution. Due to inexperience, we used irrigation fluid at 60 cm above the symphysis of the patient without a suprapubic trocar, which may increase fluid absorption into the venous plexus. Our policy was to keep the resection time as short as possible and not to exceed 60 minutes. We consider that a prostate volume of less than 50 mL, as determined by transrectal ultrasound, was sufficient to ensure safe, fast operation of this procedure.
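The TUR syndrome definition used above is an explicit rule and can be written as a short check. A minimal sketch, with the symptom list taken from the text; names are ours.

```python
# TUR syndrome as defined above: serum sodium <= 125 mmol/L after TURP
# plus two or more of the listed symptoms or signs.
TUR_SYMPTOMS = {
    "nausea", "vomiting", "bradycardia", "hypotension", "hypertension",
    "chest pain", "mental confusion", "anxiety", "paresthesia",
    "visual disturbance",
}

def tur_syndrome(serum_na_mmol_l: float, observed: set) -> bool:
    return serum_na_mmol_l <= 125 and len(observed & TUR_SYMPTOMS) >= 2

# Hyponatremia alone, as seen in the 7 cases above, does not meet the definition:
print(tur_syndrome(124, {"nausea"}))  # False
```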
Reuter et al. [11,12] conducted a second surgical session at 8 to 12 weeks to reduce the risk of positive margins in the remnants of the prostate, similar to the re-transurethral resection of bladder tumor procedure for bladder cancer. In general, local recurrences can also be resected with a new biopsy as the primary option before radiation or antiandrogen therapy. We did not have a second surgical session in this series. In 3 cases where a PSA nadir of ≤0.2 ng/mL was not achieved until 12 weeks after the operation, adjuvant hormonal therapy was administered because of older age and comorbidity.
Reuter et al. [11,12] performed pathological staging during surgery, taking samples in the following order: the 3 lobes one by one, the verumontanum, the 2 dorsal quadrants of the true capsule (at 6 to 9 and 3 to 6 o'clock on the endoscopic clock), the seminal vesicles, the terminal portion of the vas deferens, the prostatic pedicles at 5 and 7 o'clock, and the 2 ventral quadrants of the capsule (at 9 to 12 and 12 to 3 o'clock on the endoscopic clock). However, we chose not to perform frequent fractional extraction of the tissue in order to shorten the operation time. Therefore, the exact histological stages of tumors in this case series could not be determined.
Incontinence is caused by damage to the nerves and/or is a result of a preoperatively existing pelvic floor weakness. A direct violation of the external sphincter muscle is unlikely with careful surgery [14,15]. Although the mean age of patients and the risk of pelvic floor insufficiency were relatively high, the grade 2 incontinence rate 6 months after surgery was only 7.1%. No patient complained of nocturnal incontinence. The risk of erectile impotence was not investigated in this series. A scarred bladder neck is the most common postoperative complication [13,14]. It should be treated by early dilatation with a 24-Fr sound, or an electrical or laser incision. In addition, the urine is alkalinized with citrate to prevent scarring of the bladder neck. The prognosis of bladder neck stricture is usually good; in our case series, it was treated easily using 24-Fr sound dilatation. A PSA nadir of ≤0.2 ng/mL can be achieved in 95% of cases, which is proof of the efficacy of total TURPC [11,12]. The 5-year PSA recurrence rate was 18% for stage pT2 cancer, and the 10-year survival rate was 85% for stage pT3. These results are equivalent to those of similar surgery reported in recent studies [16-30]. PSA recurrence occurred in one case in our series, although the follow-up duration was short. We examined the PSA nadir to predict relapse after TURPC [10]. A PSA nadir of ≤0.2 ng/mL was not achieved in 3 cases of the high-risk group. Postoperatively, 7 patients, including 3 with a PSA nadir >0.2 ng/mL and 1 with biochemical recurrence, received adjuvant hormonal therapy. Newly developed lymph node or distant metastasis was not observed during the follow-up period.
In conclusion, using total TURPC in all 14 cases in this series, we achieved a complete resection of the prostate gland including cancerous tissues, but excluding the seminal vesicle and lymph nodes, which was comparable to that achieved using other surgical procedures. This is reflected in the PSA nadir, which in the majority of cases is ≤0.2 ng/mL. TURPC avoids the typical risks of extraprostatic access, because the periprostatic tissue is not severed in order to reach the prostate. Therefore, injury to the nerves in the periprostatic tissue can be avoided, reducing the risk of impotence and incontinence. TURPC is also inexpensive and non-invasive, and requires short hospitalization and a short surgical time without vesicourethral anastomosis.
Functional Characterization of Fission Yeast Transcription Factors by Overexpression Analysis
In Schizosaccharomyces pombe, over 90% of transcription factor genes are nonessential. Moreover, the majority do not exhibit significant growth defects under optimal conditions when deleted, complicating their functional characterization and target gene identification. Here, we systematically overexpressed 99 transcription factor genes with the nmt1 promoter and found that 64 transcription factor genes exhibited reduced fitness when ectopically expressed. Cell cycle defects were also often observed. We further investigated three uncharacterized transcription factor genes (toe1+–toe3+) that displayed cell elongation when overexpressed. Ectopic expression of toe1+ resulted in a G1 delay while toe2+ and toe3+ overexpression produced an accumulation of septated cells with abnormalities in septum formation and nuclear segregation, respectively. Transcriptome profiling and ChIP-chip analysis of the transcription factor overexpression strains indicated that Toe1 activates target genes of the pyrimidine-salvage pathway, while Toe3 regulates target genes involved in polyamine synthesis. We also found that ectopic expression of the putative target genes SPBC3H7.05c, and dad5+ and SPAC11D3.06 could recapitulate the cell cycle phenotypes of toe2+ and toe3+ overexpression, respectively. Furthermore, single deletions of the putative target genes urg2+ and SPAC1399.04c, and SPBC3H7.05c, SPACUNK4.15, and rds1+, could suppress the phenotypes of toe1+ and toe2+ overexpression, respectively. This study implicates new transcription factors and metabolism genes in cell cycle regulation and demonstrates the potential of systematic overexpression analysis to elucidate the function and target genes of transcription factors in S. pombe.
Transcriptional regulatory networks establish the gene expression programs responsible for normal growth and disease states. These networks are composed of direct interactions between transcription factors and the promoters of their target genes. Deletion mutant collections in model organisms have the potential to rapidly map transcriptional regulatory networks by systematic characterization of transcription factors. However, in Saccharomyces cerevisiae, almost 90% of transcription factor deletion strains do not exhibit growth defects in rich medium, complicating the use of this approach (Chua et al. 2004; Yoshikawa et al. 2011). One explanation for this occurrence is that most transcription factors are not active under optimal growth conditions. Transcriptome profiling of more than half of transcription factor deletion strains in rich medium has not been productive in identifying their direct target genes (Chua et al. 2004). Moreover, condition-specific transcription factors do not occupy promoters of their target genes when ChIP-chip experiments are conducted in rich medium (Lee et al. 2002; Chua et al. 2004; Harbison et al. 2004). Chemical genetic profiling has uncovered environmental perturbations that reduce the growth rate of deletion mutants, thereby identifying conditions in which gene activity may be required (Winzeler et al. 1999; Giaever et al. 2002; Hillenmeyer et al. 2008). However, the correlation between reduced fitness of the deletion strain and increased messenger RNA expression of the gene in wild type under the same conditions is surprisingly low, suggesting that growth phenotypes of deletion mutants may not indicate gene activity (Winzeler et al. 1999; Giaever et al. 2002). Alternatively, the lack of obvious phenotypes of transcription factor deletion strains in optimal conditions could be caused by a high level of functional redundancy among transcription factors. This is not likely the primary reason, as the frequency of negative genetic interactions among transcription factor genes appears substantially lower than for genes encoding other types of proteins (Costanzo et al. 2010; Zheng et al. 2010).
Systematic gene overexpression circumvents the difficulties associated with deletion studies and with identifying the activating conditions of Saccharomyces cerevisiae transcription factors. Global analysis revealed that genes causing reduced fitness when overexpressed resulted mostly in gain-of-function phenotypes and were functionally enriched in transcription factor genes (Gelperin et al. 2005; Sopko et al. 2006; Yoshikawa et al. 2011). The reduced fitness was attributed to the induction of transcription factor activity by ectopic expression and the inappropriate expression of their target genes (hence the term "phenotypic activation"). Transcriptome profiling of 55 overexpression strains with reduced fitness identified putative target genes and binding specificities for most known and several uncharacterized transcription factors. These results reveal the potential of systematic overexpression to characterize transcription factors in organisms amenable to transgenic technologies.
The transcriptional regulatory network of the fission yeast Schizosaccharomyces pombe consists of approximately 100 sequence-specific DNA-binding transcription factors regulating the roughly 5000 genes in the genome. Despite being an extensively studied model organism, its transcriptional regulatory network remains substantially incomplete. Approximately two-thirds of S. pombe transcription factors have been characterized to some degree, with biological roles focused mainly on cell cycle control, meiosis, mating, iron homeostasis, stress response, and flocculation (Fujioka and Shimoda 1989; Miyamoto et al. 1994; Sugiyama et al. 1994; Nakashima et al. 1995; Takeda et al. 1995; Watanabe and Yamamoto 1996; Ribar et al. 1997; Horie et al. 1998; Labbe et al. 1999; Ohmiya et al. 1999, 2000; Abe and Shimoda 2000; Mata et al. 2002; Buck et al. 2004; Cunliffe et al. 2004; Alonso-Nunez et al. 2005; Mata and Bahler 2006; Mercier et al. 2006, 2008; Mata et al. 2007; Rustici et al. 2007; Aligianni et al. 2009; Prevorovsky et al. 2009; Ioannoni et al. 2012; Matsuzawa et al. 2012). However, for many of these, few bona fide target genes have been identified. The remaining one-third of transcription factors are poorly characterized, with unknown functions, target genes, and binding specificity.
In this study, we constructed transcription factor deletion and overexpression strains to advance the mapping of the S. pombe transcriptional regulatory network. Most transcription factor deletion strains did not exhibit defects in generation time when grown in rich medium. Consequently, we constructed and characterized an array consisting of 99 strains, each overexpressing a unique transcription factor gene. Sixty-four of 99 S. pombe transcription factor genes caused a decrease in fitness when ectopically expressed with the nmt1 promoter. Of these transcription factor overexpression strains, 76.6% exhibited an elongated cell morphology relative to the control strain, with some displaying various cell cycle defects. We further investigated three previously uncharacterized genes encoding fungal-specific Zn(2)-Cys(6) transcription factors that exhibited reduced fitness and cell elongation when ectopically expressed. These genes were named toe1+-toe3+ (toe1+/SPAC1399.05c, toe2+/SPAC139.03, toe3+/SPAPB24D3.01) for transcription factor overexpression elongated. Ectopic expression of toe1+ caused a G1 delay while overexpression of toe2+ and toe3+ resulted in an accumulation of septated cells with aberrant septum deposition and nuclear missegregation, respectively. Transcriptome profiling and ChIP-chip analysis of HA-tagged Toe1-3 under control of the nmt41 promoter revealed that Toe1-regulated genes were involved in the pyrimidine-salvage pathway, while Toe3 target genes likely functioned in polyamine synthesis. Ectopic expression of several putative target genes could recapitulate the phenotype of toe2+ and toe3+ overexpression, while the deletion of certain putative target genes could suppress the phenotypes of toe1+ and toe2+ overexpression.
Materials and Methods
Yeast strains, media, and general methods

Strains were grown on rich (YES) or minimal (EMM) medium and supplemented with G418, nourseothricin, and thiamine hydrochloride at concentrations of 150 mg/liter, 100 mg/liter, and 15 µM, respectively. Chlorpromazine hydrochloride (Sigma Aldrich, St. Louis) was added to YES medium at 100 and 300 µg/ml for hypersensitivity assays and transcriptome profiling, respectively. The strains used in this study are listed in Supporting Information, Table S1. Matings were performed on sporulation medium (SPA). For EMM minus nitrogen supplemented with uracil medium (EMM−N+U), NH4Cl was substituted with 200 mg/liter of uracil. ORFs driven by nmt1/41 promoters were ectopically expressed by culturing the overexpression strains in EMM lacking thiamine medium for 18-24 hr unless indicated otherwise. Standard genetics and molecular and cell biology techniques were carried out as described in Moreno et al. (1991).
Construction of deletion and overexpression strains
The oligonucleotides used to construct the transcription factor deletion and overexpression strains are listed in Table S2. Genes regulated by the nmt1 promoter were cloned into the pREP1 vector. For ChIP-chip experiments, toe1+, toe2+, and toe3+ were cloned into pSLF272 to generate C-terminal triple HA fusions (Forsburg and Sherman 1997). All clones were confirmed by sequencing, and lithium acetate transformation was used to generate the overexpression strains. Western blotting with anti-HA F-7 antibody (Santa Cruz Biotechnology, Santa Cruz, CA) was used to verify the expression of the HA-tagged transcription factors. For deletion of putative target genes, the open reading frame was deleted by a PCR stitching method as described in detail in Kwon et al. (2012). The gene deletions were confirmed by colony PCR.
Fitness and cell-length scoring of transcription factor overexpression strains

All transcription factor overexpression strains were induced on solid EMM medium without thiamine for 48 hr and then microscopically examined. Each strain was initially patched on EMM medium supplemented with thiamine and incubated overnight at 30°. The strains were then transferred to EMM medium lacking thiamine, incubated for 24 hr at 30° to induce the nmt1 promoter, and then transferred again onto EMM medium lacking thiamine. After 24 hr at 30°, the strains were examined for colony and cell morphologies with a Zeiss Axio-Scope A1 tetrad microscope (Zeiss, Thornwood, NY). Because the nmt1 promoter does not reach maximum induction until 18 hr, the second transfer of the strains onto EMM medium lacking thiamine was required to accurately observe the colony and cell morphologies caused by overexpression of the transcription factor. Reduced fitness was identified by a decrease in colony size and scored as slight (1), moderate (2), and severe (3), consisting of approximately 30-100 cells/colony, 10-30 cells/colony, and <10 cells/colony, respectively, relative to the empty vector control strain (>100 cells/colony). Cell elongation was scored as mild (1), moderate (2), and severe (3) with cell lengths 1.5, 2, and 3 times that of the control strain, respectively. A score of −1 was assigned to cells that appeared shorter than the control strain. The fitness and cell length of the control strain were scored as 0.
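The scoring rules above map directly onto two small functions. The sketch below is ours; the behavior at exact threshold values is an assumption, since the text gives ranges only.

```python
# Fitness scored from colony size; cell length scored relative to the control strain.
def fitness_score(cells_per_colony: int) -> int:
    if cells_per_colony > 100:
        return 0   # comparable to the empty-vector control
    if cells_per_colony >= 30:
        return 1   # slight
    if cells_per_colony >= 10:
        return 2   # moderate
    return 3       # severe

def cell_length_score(length_ratio: float) -> int:
    """length_ratio: mean cell length divided by that of the control strain."""
    if length_ratio < 1.0:
        return -1  # shorter than control
    if length_ratio >= 3.0:
        return 3   # severe
    if length_ratio >= 2.0:
        return 2   # moderate
    if length_ratio >= 1.5:
        return 1   # mild
    return 0       # comparable to control
```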
Fluorescence microscopy
Transcription factor overexpression strains were grown in liquid EMM lacking thiamine medium for 24 hr at 30°. Cells were methanol-fixed and stained with DAPI (1 µg/ml) and calcofluor white (50 µg/ml) to visualize nuclei and cell-wall material, respectively. Images were acquired with a Zeiss Axioskop 2 microscope (Zeiss) and Scion CFW Monochrome CCD Firewire Camera (Scion, Frederick, MD). Cell cycle defects detected in transcription factor overexpression strains were classified as aberrant septal deposition and/or multisepta, abnormal nuclear morphology reminiscent of condensed chromosomes, and chromosome missegregation.
Microarray expression profiling and ChIP-chip experiments
Strains containing nmt41-driven HA-tagged Toe1-3 were cultured and induced in 200 ml EMM medium lacking thiamine for 20-24 hr at 30°. Half of the culture was utilized for microarray expression profiling, in which the transcriptome of the transcription factor overexpression strain was compared to an empty vector control, while the other half was subjected to ChIP-chip analysis. Culturing, sample preparation, hybridization, normalization, and data analysis of the transcriptome and ChIP-chip experiments were carried out as described in detail in Kwon et al. (2012). Labeled complementary DNA samples were hybridized to Agilent S. pombe 8X15K expression and 4X44K Genome ChIP-on-chip microarrays and washed according to the manufacturer's instructions (Agilent Technology, Santa Clara, CA). The microarrays were scanned with a GenePix4200A scanner (Molecular Devices, Sunnyvale, CA), and the transcriptome and ChIP-chip data were normalized with the R Bioconductor Limma package. The ChIP-chip data were analyzed with the ChIPOTle peak finder Excel macro. Cluster 3.0 (Eisen et al. 1998) and Java Treeview 1.1.6r2 (Saldanha 2004) were used to create heatmap images of microarray expression and ChIP-chip data. The microarray expression and ChIP-chip data have been submitted to the NCBI Gene Expression Omnibus Database (GSE46811).
Quantitative PCR
Strains containing nmt41-driven HA-tagged Toe1-3 were cultured and induced in 100 ml EMM medium without thiamine for 20-24 hr at 30°. The expression levels of putative target genes in the nmt41-driven toe-HA strains were compared against an empty vector control. Culturing and total RNA extractions were performed as in the expression microarray experiments. Reverse transcription was performed on total RNA using SuperScript II Reverse Transcriptase (Life Technologies, Carlsbad, CA) and Oligo(dT)23 anchored primers (Sigma-Aldrich, St. Louis), following the manufacturers' instructions. Quantitative PCR (qPCR) reactions were set up in MicroAmp Fast Optical 48-Well Reaction Plates using 5-50 ng cDNA, 1.2 µl of 0.5 µM forward and reverse primers, and 10 µl SYBR Green master mix (Life Technologies). The act1+ gene was used as a reference for determining the relative expression of putative target genes. qPCR was performed on a StepOne Real-Time PCR system (Life Technologies) using the following program: 95° for 10 min, 40 cycles of 95° for 15 sec and 58° for 1 min, followed by a melting curve program of 58°-95° with a heating rate of 0.3°/sec. Three replicates were carried out for each combination of query gene and strain. Fold changes were determined by the ΔΔCt method according to the manufacturer's recommendations (Life Technologies).
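For readers unfamiliar with the ΔΔCt calculation cited above, a minimal sketch follows. The Ct values are hypothetical; act1+ is the reference gene, as stated in the text, and the 2^(−ΔΔCt) form assumes roughly 100% amplification efficiency.

```python
# Relative expression of a target gene (overexpression strain vs. empty-vector
# control), normalized to the act1+ reference gene.
def fold_change_ddct(ct_target_exp, ct_ref_exp, ct_target_ctrl, ct_ref_ctrl):
    dct_exp = ct_target_exp - ct_ref_exp     # delta-Ct, experimental strain
    dct_ctrl = ct_target_ctrl - ct_ref_ctrl  # delta-Ct, control strain
    ddct = dct_exp - dct_ctrl                # delta-delta-Ct
    return 2 ** (-ddct)

# A target amplifying 4 cycles earlier (relative to act1+) in the
# overexpression strain corresponds to ~16-fold induction:
print(fold_change_ddct(20.0, 15.0, 24.0, 15.0))  # 16.0
```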
Flow cytometry
A strain containing chromosomally integrated pREP1-toe1+ was cultured in 100 ml EMM medium with and without thiamine for 20-24 hr at 30°. This strain was used to reduce the phenotypic heterogeneity caused by variations in plasmid copy number. Approximately 1 × 10⁷ cells were fixed in 1 ml of 95% EtOH, resuspended in 50 mM sodium citrate (pH 7.0), and treated with 250 µg/ml RNase A (Roche Applied Science, Indianapolis) at 50° for 2 hr and 2 mg/ml Proteinase K (Promega, Madison, WI) at 37° for 1 hr. Cells were then washed and resuspended in 50 mM sodium citrate (pH 7.0) containing propidium iodide (8 µg/ml) and sonicated briefly to minimize doublets. Flow cytometry was carried out with a FACSCalibur Flow Cytometer and FACSDiva 6.0 software (BD Biosciences, Franklin Lakes, NJ).
Construction and phenotypic characterization of the transcription factor overexpression array
The transcription factors were derived from a list of 129 S. pombe candidate proteins that contained bona fide DNA-binding domains and other domains known to be associated with transcriptional regulation (Beskow and Wright 2006). This list was reduced to 99 candidate sequence-specific transcription factors after removal of proteins involved in chromatin remodeling, general transcription, and nontranscriptional roles. Among the 99 genes that encode these transcription factors, 62 have gene names and are primarily implicated in cell cycle control, meiosis, mating, iron homeostasis, stress response, and flocculation (Fujioka and Shimoda 1989; Miyamoto et al. 1994; Sugiyama et al. 1994; Nakashima et al. 1995; Takeda et al. 1995; Watanabe and Yamamoto 1996; Ribar et al. 1997; Horie et al. 1998; Labbe et al. 1999; Ohmiya et al. 1999, 2000; Abe and Shimoda 2000; Mata et al. 2002; Buck et al. 2004; Cunliffe et al. 2004; Alonso-Nunez et al. 2005; Mata and Bahler 2006; Mercier et al. 2006, 2008; Mata et al. 2007; Rustici et al. 2007; Aligianni et al. 2009; Prevorovsky et al. 2009; Ioannoni et al. 2012; Matsuzawa et al. 2012). The remaining 37 transcription factors have not been characterized, and most contain the fungal-specific Zn(2)-Cys(6) DNA-binding domain. This transcription factor family is the most predominant in S. pombe and S. cerevisiae, containing 32 and 56 members, respectively, and has been implicated in diverse functions such as metabolism, meiosis, and flocculation (Todd and Andrianopoulos 1997; Kwon et al. 2012; Matsuzawa et al. 2013). We measured the generation times of 91 nonessential transcription factor haploid gene deletions in rich medium and found that only 10 displayed significant differences in their generation times compared to wild type (L. Vachon and G. Chua, unpublished data). The remaining eight transcription factor genes were either essential or previously published as nonessential but could not be deleted in our study. We next constructed an overexpression array containing 99 strains of nmt1-driven transcription factor genes and microscopically examined their colony morphology to detect reduced fitness. Most transcription factor genes (64/99) resulted in a fitness defect when ectopically expressed (Figure 1). Among these 64 strains, the relative fitness decrease compared to the empty vector control was scored as mild (32.8%), moderate (50.0%), and severe (17.2%). Additionally, cell elongation and reduced fitness appeared to be correlated in the transcription factor overexpression strains (Figure 1). In fact, 76.6% of the strains with a fitness defect also displayed increased cell lengths relative to the empty vector control. Seven transcription factor overexpression strains displayed an abnormal cell length but no fitness defect (Figure 1). The remaining transcription factors (28.3%) did not exhibit reduced fitness or abnormal cell lengths when ectopically expressed.
The cell elongation phenotype suggested that ectopic expression of these transcription factors may cause defects in the cell cycle. Microscopic examination of these overexpression strains revealed that several exhibited cell cycle phenotypes such as multiseptation, multinucleation, nuclear missegregation, and aberrant septum deposition (Figure 1). We proceeded to investigate three uncharacterized Zn(2)-Cys(6) transcription factor genes that exhibited cell elongation when ectopically expressed. These three transcription factor genes were named toe+ for transcription factor overexpression elongated. Additional cell cycle phenotypes were detected from the ectopic expression of toe2+/SPAC139.03 (abnormally heavy septum deposition that often appeared lengthwise) and toe3+/SPAPB24D3.01 (nuclear missegregation). In contrast, the single-deletion strains of all three toe+ genes did not exhibit any detectable mutant phenotype in rich medium (data not shown).

Figure 1 Phenotypic characterization of the S. pombe transcription factor overexpression array. Graph showing the phenotypes associated with ectopic expression of transcription factors in S. pombe. Strains containing an nmt1-driven transcription factor gene were scored for fitness defects (y-axis) and cell elongation (x-axis) on EMM lacking thiamine plates after 48 hr. To observe cell cycle phenotypes, transcription factor overexpression strains were grown in EMM lacking thiamine liquid medium for 24 hr and stained with DAPI and calcofluor white to visualize nuclei and cell-wall material, respectively. Transcription factors that did not result in a phenotype when ectopically expressed were not included. Fitness defects were scored as the following: (1) slight (30-100 cells per colony), (2) moderate (10-30 cells per colony), and (3) severe (<10 cells per colony). Cell elongation was scored as the following: (1) mild (1.5 times longer than control), (2) moderate (about twice the length of control), (3) severe (about three times longer than control), and (−1) short (shorter than control). Cell cycle phenotypes were classified as (red) aberrant septal deposition and/or multisepta, (green) abnormal nuclear morphology reminiscent of condensed chromosomes, and (blue) chromosome missegregation. The proportion of transcription factor overexpression strains with no cell cycle phenotypes is shown as gray sectors. The relative fitness and cell length of the empty vector control were scored as 0. Transcription factor overexpression strains that did not exhibit any fitness and cell length defects are not shown.
Toe1 is a novel transcriptional regulator of the pyrimidine-salvage pathway

The ectopic expression of toe1+ causes a cell elongation phenotype (Figure 2A). Transcriptome profiling of S. cerevisiae transcription factor overexpression strains that exhibit reduced fitness has successfully identified their target genes and binding specificity (Chua 2009). We took a similar approach to characterize transcription factors in S. pombe, but also incorporated ChIP-chip experiments to better distinguish the target genes. An nmt41-driven toe1-HA strain was grown in medium lacking thiamine for 20-24 hr to induce the transcription factor gene, and then the culture was divided in two for transcriptome and ChIP-chip analyses.

Figure 2 Identification of Toe1 putative target genes by phenotypic activation. (A) Overexpression of toe1+ by the nmt1 promoter produces elongated cells. The toe1OE and empty vector strains were grown for 24 hr in EMM lacking thiamine medium at 30°. Cells were fixed with methanol and stained with DAPI and calcofluor white to visualize nuclei and cell-wall material, respectively (top panels). Cells are shown with Nomarski in the bottom panels. (B) Putative target genes of Toe1 involved in pyrimidine salvage are downregulated in the toe1Δ strain, induced in the nmt41-toe1OE-HA strain, and bound by Toe1 at their promoters. The heat map shows the relative expression of seven putative target genes in the toe1Δ strain compared to wild type (left column) and the nmt41-driven toe1-HA strain compared to an empty vector control (middle column) by transcriptome profiling with dye reversal. The right column shows promoter occupancy of the putative target genes by Toe1 with ChIP-chip analysis of an nmt41-driven toe1-HA strain. The color bars indicate the relative expression and ChIP enrichment ratios between experimental and control strains. (C) Loss of toe1+ and its putative target gene SPAC1399.04c prevents growth in medium containing uracil as the sole nitrogen source (EMM−N+U). Strains were spot-diluted on EMM and EMM lacking ammonium chloride with uracil (200 mg/liter) and incubated for 4 days at 30°. (D) Ectopic expression of toe1+ causes a G1 delay. Flow cytometric analysis of a chromosomally integrated nmt1-driven toe1-HA strain under inducing (thiamine absent) and non-inducing (thiamine present) conditions. The histograms depict an increase in the percentage of cells in G1 and a reduction of cells in G2 in the toe1OE strain under inducing conditions compared to non-inducing conditions. (E) The cell elongation phenotype of the toe1OE strain is suppressed by the single deletion of the putative target genes urg1+ and SPAC1399.04c. An nmt1-driven toe1+ was ectopically expressed in each of the two corresponding deletion backgrounds. These strains were prepared and stained as described above. The presence of the pREP1-toe1+ vector in these strains was confirmed by growth on selective medium as well as by PCR. (F) A putative DNA motif resembling the binding specificity of Zn(2)-Cys(6) transcription factors was retrieved by promoter analysis of the Toe1 putative target genes found in the heat map. The promoter regions (1000 bp upstream of the start codon) of the Toe1 putative target genes were analyzed by MEME (Bailey et al. 2006).
The moderate-strength nmt41 promoter was chosen over the strong nmt1 promoter to reduce secondary transcriptional effects in the microarray experiments. Similar to the toe1OE (nmt1) strain, cells containing the nmt41-driven toe1-HA construct were elongated after growth for 24 hr in medium lacking thiamine (data not shown).
Transcriptome profiling of the nmt41-driven toe1-HA strain revealed that 97 genes were induced at least twofold (Table S3). Gene ontology analysis of the top 50 most induced genes with the Princeton GO Term Finder (http://go.princeton.edu/cgi-bin/GOTermFinder) showed functional enrichment for the pyrimidine-salvage pathway (P = 4.6e-5).

Figure 3 Response of toe1+ to chlorpromazine treatment. (A) Loss of toe1+ results in sensitivity to chlorpromazine. Exponentially growing toe1Δ and wild-type strains were spot diluted on YES medium lacking or containing chlorpromazine (100 µg/ml) and incubated for 3 days at 30°. (B) Transcriptome profiling of toe1Δ and wild-type strains treated with chlorpromazine. Toe1 putative target genes were induced in the wild type but not in the toe1Δ strain upon chlorpromazine treatment (left and middle columns, respectively). As a result, the expression of the putative target genes is lower in the toe1Δ strain relative to wild type when treated with chlorpromazine (right column). The microarray experiments were performed with dye reversal. Relative expression ratios are indicated in the color bar. Chlorpromazine treatment for the transcriptome profiling experiments was 300 µg/ml for 1.5 hr.
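GO enrichment P-values like the one quoted above (P = 4.6e-5 for the pyrimidine-salvage pathway) are conventionally computed with a hypergeometric test. The sketch below shows that calculation with illustrative gene counts only; GO Term Finder's exact procedure (background set, multiple-testing correction) may differ.

```python
from math import comb

def hypergeom_pval(N: int, K: int, n: int, k: int) -> float:
    """P(X >= k): N genes total, K annotated to the term,
    n genes in the induced list, k of those annotated."""
    return sum(
        comb(K, i) * comb(N - K, n - i) for i in range(k, min(K, n) + 1)
    ) / comb(N, n)

# Illustrative numbers, not the paper's actual counts:
print(hypergeom_pval(N=5000, K=10, n=50, k=4))
```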
Figure 4 Identification of Toe2 putative target genes by phenotypic activation. (A) Overexpression of toe2+ by the nmt1 promoter produces elongated cells that exhibit aberrant septal deposition. The toe2OE and empty vector strains were grown for 24 hr in EMM lacking thiamine medium at 30°. Cells were fixed with methanol and stained with DAPI and calcofluor white to visualize nuclei and cell-wall material, respectively (top panels). Cells are shown with Nomarski in the bottom panels. (B) Putative target genes of Toe2 are induced in the nmt41-toe2OE-HA strain and are bound by Toe2 at their promoters. The heat map shows the relative expression of six putative target genes in the toe2Δ strain compared to wild type (left column) and the nmt41-driven toe2-HA strain compared to an empty vector control (middle column) by transcriptome profiling with dye reversal. The right column shows promoter occupancy of the putative target genes by Toe2 with ChIP-chip analysis of an nmt41-driven toe2-HA strain. The color bars indicate the relative expression and ChIP enrichment ratios between experimental and control strains. (C) Ectopic expression of the sequence orphan SPBC3H7.05c results in a similar aberrant septal deposition phenotype as seen in the toe2OE strain. The SPBC3H7.05cOE strain (nmt1-regulated) was cultured and prepared as described above. (D) The aberrant septal deposition phenotype of the toe2OE strain is abrogated by the single deletion of the putative target genes SPBC3H7.05c, rds1+, and SPACUNK4.15c. An nmt1-driven toe2+ was ectopically expressed in each of the three corresponding deletion backgrounds. These strains were prepared and stained as described above. The presence of the pREP1-toe2+ vector in these strains was confirmed by growth on selective medium as well as by PCR. Percentages indicate the proportion of cells exhibiting the septal defect.
Strikingly, the four most highly induced genes (ranging from 35.5- to 113.8-fold relative to the empty vector control) consisted of the uracil-regulatable genes urg1+, urg2+, and urg3+ and an uncharacterized gene (SPAC1399.04c) predicted to encode a uracil phosphoribosyltransferase (Figure 2B). Moreover, these four genes were the most downregulated in the toe1Δ strain (ranging from 3.0- to 76.4-fold relative to wild type) (Figure 2B). These four genes contained protein sequence homology to the URC genes of Saccharomyces kluyveri, which function in the pyrimidine-salvage pathway through degradation of uracil (Andersen et al. 2008). Loss-of-function alleles of the URC genes result in growth inhibition on medium containing uracil as the sole nitrogen source (Andersen et al. 2008).
Interestingly, one of the URC genes encodes a Zn(2)-Cys(6) transcription factor, suggesting that Toe1 could be a putative regulator of the homologous genes in S. pombe. To determine if this was the case, we tested whether the toe1Δ strain and deletions of its putative target genes would be sensitive to medium containing uracil as the sole nitrogen source. Indeed, loss of toe1+ and SPAC1399.04c prevented growth under this condition (Figure 2C). In addition, several putative genes functioning in the pyrimidine-salvage pathway, such as SPBC1683.06c (uridine ribohydrolase), SPCC162.11c (uridine kinase), and SPCC1795.05c (uridylate kinase), were upregulated (10.7-, 2.8-, and 2.5-fold, respectively) in the toe1OE strain (Figure 2B).
ChIP-chip analysis of the nmt41-driven toe1-HA strain showed Toe1 association with 15 promoters (Table S4). Of the seven highly upregulated pyrimidine-salvage pathway genes in the toe1+ overexpression data, five were detected with ChIP-chip, indicating that these genes are likely direct target genes of Toe1 (Figure 2B). Because urg2+ and urg3+ are adjacent divergent genes, Toe1 binding in the intergenic region may result in the regulation of both genes. The seven most highly induced putative target genes were also validated by qPCR (Table S5).
The cell elongation phenotype of the toe1OE strain suggests a defect in the cell cycle. Examination of the septation index between toe1OE and wild-type strains revealed no significant difference (data not shown). However, overexpression of toe1+ appeared to cause an accumulation of cells in G1, indicating a delay in this cell cycle phase (Figure 2D). We also constructed single overexpressions of the pyrimidine-salvage pathway genes and examined the strains for cell elongation. None of these overexpression strains resulted in cell elongation (data not shown). Interestingly, single deletions of urg2+ and SPAC1399.04c could suppress the cell elongation phenotype of ectopic toe1+ expression (Figure 2E).

Figure 5 (A) Overexpression of toe3+ by the nmt1 promoter produces elongated cells that exhibit a nuclear missegregation phenotype. The toe3OE and empty vector strains were grown for 24 hr in EMM lacking thiamine medium at 30°. Cells were fixed with methanol and stained with DAPI and calcofluor white to visualize nuclei and cell-wall material, respectively (top panels). Cells are shown with Nomarski in the bottom panels. (B) Putative target genes of Toe3 are induced in the nmt41-toe3OE-HA strain and are bound by Toe3 at their promoters. The heat map shows the relative expression of 10 putative target genes in the toe3Δ strain compared to wild type (left column) and the nmt41-driven toe3-HA strain compared to an empty vector control (middle column) by transcriptome profiling with dye reversal. The right column shows promoter occupancy of the putative target genes by Toe3 with ChIP-chip analysis of an nmt41-driven toe3-HA strain. The color bars indicate the relative expression and ChIP enrichment ratios between experimental and control strains. (C) Ectopic expression of either SPAC11D3.06 or dad5+, which encode a MATE family transporter and a DASH complex subunit, respectively, results in a nuclear missegregation phenotype that is similar to the toe3OE strain. The SPAC11D3.06OE and dad5OE strains (both nmt1-regulated) were cultured and prepared as described above. The presence of the pREP1-toe3+ vector in these strains was confirmed by growth on selective medium as well as by PCR. Percentages indicate the proportion of cells exhibiting the nuclear missegregation phenotype.
By screening our transcription factor deletion array against several drug compounds, we discovered that the toe1Δ strain was hypersensitive to the phenothiazine antipsychotic drug chlorpromazine (Figure 3A; L. Vachon and G. Chua, unpublished data). Chlorpromazine may inhibit uridine kinase, a key enzyme in pyrimidine salvage (Tseng et al. 1986). The hypersensitivity could indicate that the activity of toe1+ is required for adapting to chlorpromazine, and thus Toe1 target genes may be induced by chlorpromazine treatment. Indeed, most of the Toe1 putative target genes functioning in the pyrimidine-salvage pathway were induced in chlorpromazine-treated wild type, but not in the toe1Δ strain (Figure 3B; left and middle columns, respectively). Consistently, the transcript levels of these target genes were lower in the toe1Δ strain relative to wild type when both strains were treated with chlorpromazine (Figure 3B; right column). We also investigated whether overexpression and deletion of the putative target genes could confer resistance and sensitivity, respectively, to chlorpromazine. However, none of these strains exhibited altered responses to chlorpromazine treatment, possibly because many of the enzymes in the pyrimidine-salvage pathway are encoded by multiple genes with overlapping function (data not shown). Altogether, these results indicate that Toe1 transcriptionally activates genes functioning in the pyrimidine-salvage pathway and has a role in regulating cell cycle progression.
Putative target genes of Toe2 are required for proper septum formation

The ectopic expression of toe2+ under control of the nmt1 promoter causes defects in septum formation with abnormally heavy and often longitudinal septal deposition (Figure 4A). The proportion of cells exhibiting this aberrant phenotype was 36%. Ectopic expression of toe2+ under control of the nmt41 promoter also caused similar defects, although to a lesser degree (data not shown). In addition, the percentage of septated cells in the toe2OE strain was significantly higher than in the empty vector control (58.8% vs. 9.5%; two-tailed t-test; P-value < 0.002), indicating a stage-specific defect in the cell cycle. The nmt41-driven toe2-HA strain was analyzed by transcriptome profiling and ChIP-chip to uncover putative target genes. We found 114 genes that were upregulated at least twofold and 71 genes whose promoters were associated with Toe2 (Table S6 and Table S7). The application of the Princeton GO Term Finder to the 114 genes showed functional enrichment for amino acid catabolism (P = 3.9e-4), while no functional enrichment was observed with the ChIP-chip data. Only 11 genes in the ChIP-chip data showed upregulation of at least twofold in response to toe2+ overexpression (Table S6 and Table S7). These genes appeared to primarily function in metabolism and ion transport, and their involvement in septum formation was not obvious.
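The septation-index comparisons above (58.8% vs. 9.5% septated cells) are reported as two-tailed t-tests. A minimal sketch of such a comparison follows; the per-replicate percentages are hypothetical values chosen only to match the reported means.

```python
from scipy import stats

toe2_oe = [57.1, 60.2, 59.1]  # % septated cells, toe2OE replicates (hypothetical)
control = [9.0, 10.1, 9.4]    # % septated cells, empty-vector replicates (hypothetical)

t_stat, p_value = stats.ttest_ind(toe2_oe, control)  # two-tailed by default
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```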
Of these 11 genes, we decided to focus on the 6 most induced genes (3- to 21-fold induction) when toe2+ was overexpressed (Figure 4B). The induction of these 6 genes in the nmt41-driven toe2-HA strain was validated by qPCR (Table S5). These 6 genes appeared not to be differentially expressed in the toe2Δ strain (Figure 4B). Ectopic expression of these 6 genes singly revealed that only SPBC3H7.05c, which encodes a membrane-bound O-acyl transferase, resulted in aberrant septal deposition similar to the toe2OE strain, although a lower proportion of cells exhibited this phenotype (Figure 4C). In addition, a few cells showing multiseptation and nuclear missegregation were observed in the SPBC3H7.05cOE strain (data not shown). The putative target gene SPAC23H4.01c, which encodes a sterol-binding ankyrin repeat protein, did not replicate the septal phenotype of the toe2OE strain when overexpressed, but produced elongated multiseptated cells (data not shown). To further validate the Toe2 putative target genes, toe2+ was overexpressed in strains containing single deletions of these genes. We found that loss of SPBC3H7.05c, as well as of SPACUNK4.15 and rds1+, which encode a predicted 2′,3′-cyclic-nucleotide 3′-phosphodiesterase and a conserved fungal protein, respectively, could suppress the septal phenotype of the toe2OE strain (Figure 4D). These results identify several putative target genes of Toe2, including SPBC3H7.05c, that appear to play a role in septation in S. pombe.
Toe3 activates putative target genes involved in arginine catabolism and nuclear segregation
The ectopic expression of toe3+ under control of the nmt1 promoter results in a defect in nuclear segregation, where 20% of cells are observed with a septum and a single nucleus positioned distally (Figure 5A). The nmt41-driven toe3-HA strain exhibited a similar phenotype, although with reduced penetrance (data not shown). The percentage of septated cells in the toe3OE strain was also significantly higher than the empty vector control (20.6% vs. 9.5%; two-tailed t-test; P-value < 0.03), indicating a stage-specific defect in the cell cycle. Among the septated cells, over 80% exhibited the nuclear missegregation phenotype. To identify the Toe3 target genes, we performed transcriptome and ChIP-chip analyses on the nmt41-driven toe3-HA strain. We found that 95 genes were induced at least twofold relative to the control strain, while the promoters of 174 genes were associated with Toe3 (Table S8 and Table S9). The 95 genes induced at least twofold by toe3+ overexpression were subjected to the Princeton GO Term Finder and found to be functionally enriched in the arginine catabolic process (P = 2.4e-5). The same functional enrichment was observed in the 10 genes identified by ChIP-chip and upregulated at least twofold when toe3+ was ectopically expressed (P = 4.8e-6). The genes implicated in arginine catabolism and potentially influencing intracellular polyamine levels included car1+, car2+, SPAPB24D3.03, and SPAC11D3.09 (Figure 5B). SPAC11D3.06 may have a role in polyamine transport, as MATE transporters have been reported to transport agmatine in human embryonic kidney (HEK293) cells (Winter et al. 2011). In addition, Toe3 bound to its own promoter, suggesting the possibility of autoregulation (Figure 5B). The top 10 highly induced putative target genes identified by microarray expression profiling and ChIP-chip of the nmt41-driven toe3-HA strain were validated by qPCR (Table S5). Among these putative target genes, only alr2+ and urg1+ were downregulated at least twofold in the toe3Δ strain (Figure 5B).
We next determined whether overexpression of the putative target genes could produce the nuclear missegregation phenotype of the toe3OE strain. Eight of the 10 putative target genes were overexpressed singly with the nmt1 promoter (aat1+, alr2+, car1+, car2+, dad5+, SPAC11D3.06, SPAPB24D3.03, and SPBC1773.13). Among these genes, ectopic expression of dad5+ and SPAC11D3.06 resulted in a nuclear missegregation phenotype with penetrance comparable to the toe3OE strain (Figure 5C). These results were consistent with the known essential role of Dad5 as a component of the Dam1/Duo1, Ask1, Spc34/Spc19, Hsk1 (DASH) complex in chromosome segregation (Sanchez-Perez et al. 2005). However, we did not observe suppression of the nuclear missegregation phenotype caused by toe3+ overexpression when these putative target genes were deleted singly (data not shown). Altogether, these results suggest that Toe3 may play a role in nuclear segregation by regulating dad5+, SPAC11D3.06, and potentially other genes involved in polyamine biosynthesis.
Discussion
The transcriptional regulatory network in S. pombe remains substantially incomplete. The target genes have not been identified for the majority of sequence-specific transcription factors and over one-third of them have not been investigated at all. Here, we employed systematic genetics to analyze all the transcription factors by overexpression.
Systematic overexpression analysis revealed that 65% of S. pombe transcription factors exhibited reduced fitness, approximately twice the frequency in S. cerevisiae. This difference could be attributed to variations in scoring for reduced fitness and promoter strength. Interestingly, 75% of S. pombe transcription factor overexpression strains that showed reduced fitness also exhibited cell elongation, suggesting a potential role in the cell cycle. Approximately 8-15% of S. pombe genes exhibit moderate-to-strong periodic expression during the cell cycle, and thus a considerable number of transcription factors would probably be required for their transcriptional control (Rustici et al. 2004; Oliva et al. 2005; Peng et al. 2005). Moreover, approximately one-third of S. pombe transcription factors have been detected to display strong periodic expression during the cell cycle (Bushel et al. 2009). Furthermore, in S. cerevisiae, genes causing reduced fitness when ectopically expressed were functionally enriched for transcription factor and cell cycle regulator genes, which could be similar in S. pombe (Gelperin et al. 2005; Sopko et al. 2006; Yoshikawa et al. 2011).
Another possible explanation for transcription factor overexpression toxicity is the occurrence of transcriptional squelching (Gill and Ptashne 1988). Ectopic expression of a strong transcriptional activator has been shown to sequester general transcription factors of RNA polymerase II (Liu and Berk 1995; Tavernarakis and Thireos 1995; McEwan and Gustafsson 1997). The inhibition of cell growth usually associated with squelching is likely caused by the transcriptional repression of essential genes or a lethal combination of nonessential genes. These genes could potentially encode ribosomal proteins and cell cycle activators, which are found to be predominantly repressed in a hypomorphic allele encoding the RNA polymerase II component Rpb11p (Mnaimneh et al. 2004). Although we cannot rule out squelching, downregulated genes in our toeOE strains were not enriched for ribosomal and cell cycle genes.
We discovered that the transcription factor Toe1 activates genes implicated in the pyrimidine-salvage pathway. The putative target genes urg1+, urg3+, and urg2+/SPAC1399.04c appear to be homologous to URC1, URC4, and URC6, respectively, in S. kluyveri, while toe1+ is probably the homolog of URC2 (Andersen et al. 2008). The URC genes function in the catabolism of uracil in S. kluyveri (Andersen et al. 2008). Similar to the URC genes, deletion of toe1+ and SPAC1399.04c prevented growth on medium containing uracil as the sole nitrogen source (Figure 2C). Moreover, several other genes involved in the pyrimidine-salvage pathway, such as SPBC1683.06c and SPCC162.11c, which encode a uridine ribohydrolase and a uridine kinase, respectively, were induced by toe1+ overexpression (Figure 2B). We also detected chlorpromazine sensitivity in the toe1Δ strain, suggesting that Toe1 activity and activation of its target genes may be required for the proper cellular response to this drug (Figure 3A). Chlorpromazine has been reported to possibly inhibit uridine kinase activity in murine sarcoma cells (Tseng et al. 1986). If this is also the case in S. pombe, then inhibition of uridine kinase by chlorpromazine treatment could compromise overall pyrimidine-salvage capacity, thereby triggering a compensatory response by activating other genes of similar function. Indeed, the uracil catabolic genes were induced in the chlorpromazine-treated wild type but not in the chlorpromazine-treated toe1Δ strain (Figure 3B). Furthermore, we discovered that toe1+ overexpression causes a G1 delay (Figure 2D). It may be that induction of pyrimidine-salvage genes represents a signal for insufficient levels of nucleotides, thus preventing cells from undergoing a round of DNA replication.
The toe2OE strain exhibits a delay in cytokinesis with thickened and misplaced septa, indicating that this transcription factor functions in the proper formation of the division septum for cytokinesis. The uncharacterized gene SPBC3H7.05c is most likely a target gene of Toe2. Ectopic expression of SPBC3H7.05c replicated the septal phenotype of the toe2OE strain, while the septal phenotype of toe2+ overexpression was rescued in the SPBC3H7.05c deletion background. The SPBC3H7.05c gene encodes a membrane-bound O-acyl transferase (MBOAT), suggesting a function in lysophospholipid synthesis, but its exact role in septation remains unclear (Benghezal et al. 2007; Riekhof et al. 2007; Matsuda et al. 2008). In S. cerevisiae, loss of the MBOAT-encoding gene GUP1 causes defects in the cell wall and bipolar budding, while loss of the homologous gene in Candida albicans resulted in misplaced septa and compromised hyphae formation (Ni and Snyder 2001; Ferreira et al. 2006, 2010). In addition, the single deletion of the putative target genes rds1+ and SPACUNK4.15, which encode a conserved fungal protein and a predicted 2′,3′-cyclic-nucleotide 3′-phosphodiesterase, respectively, could also suppress the septation phenotype of the toe2OE strain. The rds1+ gene appears to be stress-responsive and a putative target gene of the iron and copper starvation transcription factor Cuf1, while the SPACUNK4.15 product has been implicated in transfer RNA splicing in other organisms (Culver et al. 1994; Ludin et al. 1995; Rustici et al. 2007; Schwer et al. 2008). How these genes actually function in septation remains unknown.
Ectopic expression of toe3+ results in an accumulation of septated cells containing a single nucleus in one compartment. The putative target genes of Toe3 were functionally enriched in arginine catabolism, including five that are likely to play a direct role in influencing polyamine levels. These include genes encoding agmatinase (SPAC11D3.09 and SPAPB24D3.03), arginase (Car1), ornithine transaminase (SPBC1773.13), and a MatE transporter (SPAC11D3.06), which may be involved in transporting polyamines (Winter et al. 2011). These results indicate a possible role for toe3+ in proper nuclear segregation through the regulation of polyamine levels in the cell. Indeed, we observed that ectopic expression of SPAC11D3.06 recapitulates the nuclear missegregation phenotype of the toe3OE strain. In addition, the nuclear missegregation phenotype was also seen when another putative target gene, dad5+, was ectopically expressed. Dad5 is a subunit of the DASH complex, involved in sister-chromatid segregation during anaphase by linking spindle fibers to the kinetochore (Miranda et al. 2005; Sanchez-Perez et al. 2005). Increased expression of dad5+ in the toe3OE strain might perturb the DASH complex by altering the stoichiometry of its components, thereby resulting in nuclear missegregation. However, deletion of dad5+ and SPAC11D3.06 singly could not suppress the nuclear missegregation phenotype of the toe3OE strain. This may be due to functional redundancy in nuclear segregation between dad5+ and SPAC11D3.06.
In summary, we have utilized systematic overexpression to characterize transcription factors in S. pombe. Our analyses of three Zn(2)-Cys(6) transcription factors, which are commonly associated with metabolic regulation, have implicated several metabolites in cell cycle regulation. Metabolism genes are periodically expressed in the fission yeast cell cycle during maximal growth (Rustici et al. 2004). Because the majority of transcription factor genes cause reduced fitness when ectopically expressed, further analysis of these overexpression strains with approaches from this study has the potential to significantly contribute to the complete mapping of the transcriptional regulatory network in S. pombe.
QCD Sum Rule Approach to the New Mesons and the $g_{D_{sJ}DK}$ Coupling Constant
We use diquark-antidiquark currents to investigate the masses and partial decay widths of the recently observed mesons $D_{sJ}(2317)$, $D_0^{*0}(2308)$ and $X(3872)$, considered as four-quark states, in a QCD sum rule approach. In particular we investigate the coupling constant $g_{D_{sJ}DK}$. We find that the $g_{D_{sJ}DK}$ obtained in this four-quark scenario is smaller than the coupling constant obtained when $D_{sJ}(2317)$ is considered as a conventional $c\bar{s}$ state.
I. INTRODUCTION
The constituent quark model provides a rather successful description of the meson spectrum in terms of quark-antiquark bound states, which fit into the suitable multiplets reasonably well. Therefore, it is understandable that the recent observations of the very narrow resonances $D_{sJ}^+(2317)$ by BaBar [1], $D_{sJ}^+(2460)$ by CLEO [2], X(3872) by BELLE [3], and the very broad scalar meson $D_0^{*0}(2308)$ by BELLE [4], all of them with masses below quark model predictions, have stimulated a renewed interest in the spectroscopy of open charm and charmonium states. The difficulties in identifying the mesons $D_{sJ}^+(2317)$ and $D_{sJ}^+(2460)$ as $c\bar{s}$ states are rather similar to those appearing in the light scalar mesons below 1 GeV (the isoscalars $\sigma(500)$ and $f_0(980)$, the isodoublet $\kappa(800)$ and the isovector $a_0(980)$), which can be interpreted as four-quark states [5,6]. In the case of X(3872), besides its small mass, the observation reported by the BELLE collaboration [7] that the X decays to $J/\psi\,\pi^+\pi^-\pi^0$ with a strength compatible to that of the $J/\psi\,\pi^+\pi^-$ mode establishes strong isospin-violating effects, which cannot be explained if the X(3872) is interpreted as a $c\bar{c}$ state. Due to these facts, these new mesons were considered good candidates for four-quark states by many authors [8]. In refs. [9,10] the method of QCD sum rules (QCDSR) [11,12,13] was used to study the two-point functions for the mesons $D_{sJ}^+(2317)$, $D_0^{*0}(2308)$ and X(3872), considered as four-quark states in a diquark-antidiquark configuration. The results obtained for their masses are compatible with the experimental values and, therefore, the authors of refs. [9,10] concluded that it is possible to reproduce the experimental value of the masses using a four-quark representation for these states.
Concerning their decay widths, the study of the three-point functions related to the decays $D_{sJ}^+(2317) \to D_s^+ \pi^0$, $D_0^{*0} \to D^+ \pi^-$ and $X(3872) \to J/\psi\,\pi^+\pi^-$, using the diquark-antidiquark configuration for $D_{sJ}$, $D_0^{*0}$ and X, was done in refs. [14,15,16]. The results obtained for their partial decay widths are given in Table I, from which we see that the partial decay widths obtained in refs. [14,15], supposing that the mesons $D_{sJ}^+(2317)$ and $D_0^{*0}$ are four-quark states, are consistent with the experimental upper limits on the total decay widths.
However, in the case of the meson X(3872), the partial decay width obtained in ref. [16] is much bigger than the experimental upper limit on the total width.
In ref. [16] some arguments were presented to reduce the value of the X(3872) decay width, by imposing that the initial four-quark state must have a non-trivial color structure. In this case, its partial decay width can be reduced to $\Gamma(X \to J/\psi\,\pi^+\pi^-) = (0.7 \pm 0.2)$ MeV. However, that procedure may appear somewhat unjustified and, therefore, more study is needed before one can arrive at a definitive conclusion about the structure of the meson X(3872).
Concerning the meson $D_{sJ}^+(2317)$, although its mass and decay width can be explained in a four-quark scenario, they can also be reproduced in other approaches [8], and it is not yet possible to discriminate between the different structures proposed for this state. Therefore, it is important to find experimental observables that could be used to discriminate between the different quark structures of these mesons. As pointed out in ref. [17], a signal could be obtained by the analysis of certain heavy-ion collision observables. In particular, the meson $D_{sJ}^+(2317)$ can be produced in reactions induced by photons on kaon targets in the nuclear medium formed in a heavy-ion collision. Therefore, if the coupling constant $g_{D_{sJ}DK}$ is found to be very different depending on the structure of $D_{sJ}^+(2317)$, then the photoproduction of $D_{sJ}^+(2317)$ can be used as a signal to discriminate its structure.
II. THE $g_{D_{sJ}DK}$ COUPLING CONSTANT
The coupling $g_{D_{sJ}DK}$, supposing that the meson $D_{sJ}^+(2317)$ is a conventional $c\bar{s}$ state, was evaluated in ref. [18]; their result is given in Eq. (2). Here, we extend the calculation done in refs. [14,15] to study the hadronic vertex $D_{sJ}DK$. The QCDSR calculation for this vertex centers on the three-point function of Eq. (3), where $j_{D_{sJ}}$ is the interpolating field for the scalar $D_{sJ}$ meson [9], $a, b, c, \ldots$ are colour indices and $C$ is the charge conjugation matrix. In Eq. (3), $p = p' + q$, and the interpolating fields for the kaon and for the D meson are the standard pseudoscalar quark currents (with $q$ standing for the light quark $u$ or $d$).
The calculation of the phenomenological side proceeds by inserting intermediate states for D, K and $D_{sJ}$, and by using the standard definitions of the meson-current couplings. We obtain the relation in Eq. (6), where the coupling constant $g_{D_{sJ}DK}$ is defined by the on-mass-shell matrix element $\langle DK|D_{sJ}\rangle = g_{D_{sJ}DK}$. The continuum contribution in Eq. (6) contains the contributions of all possible excited states.
In the case of the light scalar mesons, considered as diquark-antidiquark states, the study of their vertex functions using the QCD sum rule approach at the pion pole [12,13,19] was done in ref. [20]. It was shown that the decay widths determined from the QCD sum rule calculation are consistent with existing experimental data. Here, we follow ref. [21] and work at the kaon pole. The main reason for working at the kaon pole is that one does not have to deal with the complications associated with the extrapolation of the form factor [22]. The kaon-pole method consists in neglecting the kaon mass in the denominator of Eq. (6) and working at $q^2 = 0$. In the OPE side one singles out the leading terms in the operator product expansion of Eq. (3) that match the $1/q^2$ term. Since we are working at $q^2 = 0$, we take the limit $p^2 = p'^2$ and apply a single Borel transformation $p^2, p'^2 \to M^2$. In the phenomenological side, in the structure $q_\mu$, we obtain Eq. (7), where $A$ and $\rho^{cc}(u)$ stand for the pole-continuum transitions and pure continuum contributions, with $s_0$ and $u_0$ being the continuum thresholds for $D_{sJ}$ and D respectively [14,15]. For simplicity, one assumes that the pure continuum contribution to the spectral density, $\rho^{cc}(u)$, is given by the result obtained in the OPE side; therefore, one uses the ansatz $\rho^{cc}(u) = \rho^{OPE}(u)$. In Eq. (7), $A$ is a parameter which, together with $g_{D_{sJ}DK}$, has to be determined by the sum rule.
In the OPE side we single out the leading terms proportional to $q_\mu/q^2$. Transferring the pure continuum contribution to the OPE side, the sum rule for the coupling constant, up to dimension 7, is given by Eq. (8), with the quantities entering it defined in Eq. (9).
III. RESULTS AND CONCLUSIONS
In the numerical analysis of the sum rules, the values used for the meson masses, quark masses and condensates are: $m_{D_{sJ}} = 2.317$ GeV, $m_D = 1.87$ GeV, $m_c = 1.2$ GeV, $m_s = 0.13$ GeV, $\langle\bar{q}q\rangle = -(0.23)^3$ GeV$^3$, $\langle\bar{s}s\rangle = 0.8\,\langle\bar{q}q\rangle$. For the meson decay constants we use $F_K = 160$ MeV and $f_D = 0.22$ GeV [23]. We use $u_0 = 6$ GeV$^2$ and, for the current-meson coupling $\lambda$, the result obtained from the two-point function in ref. [9]: considering $2.6 \le \sqrt{s_0} \le 2.8$ GeV we get $\lambda = (2.9 \pm 0.3) \times 10^{-3}$ GeV$^5$.
In Fig. 1 we show, through the dots, the right-hand side (RHS) of Eq. (8) as a function of the Borel mass. To determine $g_{D_{sJ}DK}$ we fit the QCDSR results with the analytical expression on the left-hand side (LHS) of Eq. (8). Using the inputs of Eq. (9) and $\lambda = 2.9 \times 10^{-3}$ GeV$^5$ (the value obtained for $\sqrt{s_0} = 2.7$ GeV) we get $g_{D_{sJ}DK} = 2.8$ GeV. Allowing $s_0$ to vary in the interval $2.6 \le \sqrt{s_0} \le 2.8$ GeV, the corresponding variation obtained for the coupling constant is $2.5~\mathrm{GeV} \le g_{D_{sJ}DK} \le 3.8$ GeV.
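Since both unknowns multiply known functions of the Borel mass, the fit behind Fig. 1 reduces to linear least squares. A minimal sketch with assumed exponential stand-ins for the two $M^2$-dependent factors and synthetic data points; the actual expressions come from Eq. (8) and are not reproduced here:

```python
import numpy as np

# Hypothetical stand-ins for the two Borel-mass functions multiplying
# g_{DsJ DK} and A on the LHS of the sum rule (assumed forms only).
M_DSJ, SQRT_S0 = 2.317, 2.7                          # GeV

def f_g(m2):
    return np.exp(-M_DSJ ** 2 / m2)                  # pole term (assumed form)

def f_a(m2):
    return np.exp(-SQRT_S0 ** 2 / m2)                # pole-continuum term (assumed form)

# Borel window roughly matching Fig. 1, with synthetic "QCDSR" points.
m2 = np.linspace(1.5, 1.9, 20)                       # GeV^2
rng = np.random.default_rng(0)
rhs = 2.8 * f_g(m2) + 0.5 * f_a(m2) + rng.normal(0.0, 1e-4, m2.size)

# Both unknowns enter linearly, so linear least squares determines them.
design = np.column_stack([f_g(m2), f_a(m2)])
(g_fit, a_fit), *_ = np.linalg.lstsq(design, rhs, rcond=None)
print(f"g = {g_fit:.2f} GeV, A = {a_fit:.3f}")
```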
Fixing $\sqrt{s_0} = 2.7$ GeV and varying the quark condensate, the charm quark and the strange quark masses in the intervals $-(0.24)^3 \le \langle\bar{q}q\rangle \le -(0.22)^3$ GeV$^3$, $1.1 \le m_c \le 1.3$ GeV and $0.11 \le m_s \le 0.15$ GeV, we get results for the coupling constant still between the lower and upper limits given above. It is important to mention that the agreement between the RHS and LHS of the sum rule in Fig. 1 is not as good, in this case, as it was for the couplings $g_{D_{sJ}D_s\pi}$ and $g_{D_0^{*0}D\pi}$ evaluated in refs. [14,15]. One possible reason is the fact that the kaon mass is much bigger than the pion mass; therefore, neglecting the kaon mass in Eq. (6) is not as good an approximation as it is in the case of the sum rule at the pion pole.
We have presented a QCD sum rule study of the vertex function associated with the hadronic vertex $D_{sJ}DK$, where the $D_{sJ}(2317)$ meson was considered as a diquark-antidiquark state. Comparing the results in Eqs. (11) and (2) we see that when the meson $D_{sJ}(2317)$ is considered as a $c\bar{s}$ state one gets a $g_{D_{sJ}DK}$ coupling constant much bigger than when $D_{sJ}(2317)$ is considered a four-quark state. This result can be useful for experimentally investigating the quark structure of the meson $D_{sJ}(2317)$ through its photoproduction in a nuclear medium.
FIG. 1: Dots: the RHS of Eq. (8) as a function of the Borel mass. The solid line gives the fit of the QCDSR results through the LHS of Eq. (8).
TABLE I: Numerical results for the resonance decay widths.
Impact of bariatric surgery on non-alcoholic fatty liver disease.
INTRODUCTION
Up to 300 million people have a body mass index (BMI) greater than 30 kg/m². Obesity is the cause of many serious diseases, such as type 2 diabetes, hypertension, and non-alcoholic fatty liver disease (NAFLD). Bariatric surgery is the only effective method of achieving weight loss in patients with morbid obesity.
OBJECTIVES
The aim of the study was to assess the impact of bariatric surgery on non-alcoholic fatty liver disease in patients operated on due to morbid obesity.
MATERIAL AND METHODS
We included 20 patients who were qualified for bariatric procedures based on BMI > 40 kg/m² or BMI > 35 kg/m² with the presence of comorbidities. The average body weight in the group was 143.85 kg, with an average BMI of 49.16 kg/m². Before the procedure, we evaluated the severity of non-alcoholic fatty liver disease in each patient using the Sheriff-Saadeh ultrasound scale. We also evaluated the levels of liver enzymes. Follow-up evaluation was performed twelve months after surgery.
RESULTS
Twelve months after surgery, the average weight was 102.34 kg. The mean %WL was 33.01%, %EWL was 58.8%, and %EBMIL was 61.37%. All patients showed remission of fatty liver disease. Liver damage, evaluated with ultrasound imaging, decreased from an average of 1.85 on the Sheriff-Saadeh scale, before surgery, to 0.15 twelve months after surgery (p < 0.001). As regards liver enzymes, the level of alanine aminotransferase decreased from 64.5 (U/l) to 27.95 (U/l) (p < 0.001), and the level of aspartate aminotransferase decreased from 54.4 (U/l) to 27.2 (U/l).
CONCLUSIONS
Bariatric procedures not only lead to a significant and lasting weight loss, but they also contribute to the reduction of fatty liver disease and improve liver function.
INTRODUCTION
Worldwide, it is estimated that there are approximately 1 billion overweight people, and more than 300 million suffer from obesity (BMI > 30 kg/m²) [1]. Adipose tissue is a highly active metabolic and endocrine organ that contributes to the development of diabetes, metabolic syndrome, non-alcoholic fatty liver disease (NAFLD), and other conditions. NAFLD is a broad term that encompasses many different disorders, ranging from fatty liver disease to inflammatory disease with fibrosis and cirrhosis. In NAFLD, the etiology of liver changes is not associated with alcohol consumption, despite a similar appearance to alcoholic liver disease. It is hypothesized that the disease is possibly related to lifestyle and genetic factors [2]. The disease was first diagnosed in the 1930s, described in the 1950s, and characterized histopathologically in the 1980s. However, only now has it been recognized as an important clinical problem [3]. NAFLD is characterized by lipid accumulation in the hepatocytes; in NAFLD, lipids comprise more than 5% of the liver. NAFLD is believed to result from an increased flow of free fatty acids (FFA) through the liver. It may be caused by increased lipolysis, increased fat intake, mitochondrial dysfunction associated with insulin resistance, or by de novo lipogenesis [7]. NAFLD is considered to be one of the main causes of chronic liver dysfunction in the developed world, afflicting 9-30% of the general population. There is a well-established association between NAFLD and excessive caloric intake that leads to obesity. Correlation between the severity of obesity and the degree of NAFLD is found in 90% of biopsies performed during bariatric procedures [4]. Currently, there are no unequivocal guidelines regarding the treatment of NAFLD. Weight loss, achieved through lifestyle changes and exercise, offers some improvement. In patients with morbid obesity, if these methods are ineffective, bariatric surgery seems to be the most appropriate treatment. Even though weight loss is the most visible effect of bariatric surgery, its most important goal is the treatment of life-threatening comorbidities. The influence of surgical procedures on NAFLD is poorly documented compared to other comorbidities, like diabetes or hypertension.
AIM OF THE STUDY
To estimate the influence of bariatric procedures on the natural course of non-alcoholic fatty liver disease.
MATERIAL AND METHODS
As regards the indications for surgical treatment, we used the recommendations of the Section of Metabolic and Bariatric Surgery of the Polish Surgeon Society, as follows: body mass index (BMI) ≥ 35 kg/m² with comorbidities, or BMI ≥ 40 kg/m² with or without comorbidities. The inclusion criteria were as follows: informed consent to participate in the study, age between 18-65 years, and fulfilment of the eligibility criteria for bariatric treatment [laparoscopic sleeve gastrectomy (LSG) or laparoscopic Roux-en-Y gastric bypass (LRYGB)]. We excluded patients who were lost to follow-up after 12 months, patients diagnosed with mental diseases, patients diagnosed with alcohol or drug dependence, and patients who had undergone different bariatric procedures. In total, 20 patients were included in the study. All patients underwent laparoscopic bariatric procedures in the 2nd Department of General Surgery, Jagiellonian University Medical College. Among the patients, there were 12 women (60%) and 8 men (40%). The mean age was 39.55 years. The mean body mass was 143.85 kg, and the mean BMI was 49.16 kg/m². Sixteen patients had either diabetes or glucose tolerance impairment. Currently, there are guidelines regarding the choice of surgical method in patients with diabetes, and it is believed that LRYGB is more effective in patients with a long history of diabetes. Five patients underwent LSG, and 15 patients underwent LRYGB.
Out of all 20 patients, 14 (70%) had diabetes, and 2 (10%) had impaired glucose tolerance. Four patients required insulin administration, while the remaining patients took oral anti-diabetic drugs. Seventeen patients (85%) were diagnosed with hypertension, and 17 patients (85%) also had hyperlipidemia. None of the patients had obstructive sleep apnea.
In terms of concomitant diseases, we observed an improvement in diabetes control. Of all patients who were initially treated with anti-diabetic medications, 16 (80%) went into remission and did not require further diabetic treatment. Three patients continued to require insulin injections; however, their daily insulin intake dropped from a mean of 102.0 units per day to 37.6 units per day. We achieved normalization of blood pressure in 4 patients. Thirteen patients (65%) continued to require antihypertensive drugs. Ten patients (50%) had normalization of the lipid profile one year after bariatric surgery.
Before surgery, the mean Sheriff-Saadeh score in patients undergoing the procedure was 1.85 ± 1.08. One year after surgery, the mean score on the Sheriff-Saadeh scale dropped to 0.15, which was statistically significant (p < 0.001).
The mean AST level was 60.8 ± 36.5 U/l, and the average ALT level was 49.05 ± 47.6 U/l. We observed a statistically significant reduction in both ALT and AST levels, to 27.7 U/l for ALT (p < 0.001) and 54.4 U/l for AST (p = 0.02).
DISCUSSION
Over the past years, surgery has gained acceptance as a treatment method for morbid obesity. It leads to body weight reduction to an extent that is unobtainable with dietary modifications alone. It has been proven that both laparoscopic sleeve gastrectomy and laparoscopic Roux-en-Y gastric bypass surgery are efficient and safe [6]. NAFLD is a disease that is very strongly associated with obesity [21,22]. The Dallas Heart Study also suggests that the prevalence of NAFLD varies with ethnicity. In that study, NAFLD was diagnosed in 45% of Latinos, 33% of Caucasians, and 24% of African Americans [8]. Among patients with NAFLD, 10-20% suffer from non-alcoholic steatohepatitis (NASH), and 8-26% of patients with NASH develop liver cirrhosis [9]. It has been proven that some genetic defects related to VLDL synthesis may have an influence on morbidity [10]. There are several conditions that may contribute to the development of this disease, e.g., type 2 diabetes, metabolic syndrome, obesity, dyslipidemia, hypogonadism, hypothyroidism, polycystic ovarian syndrome, and even specific bacterial flora in the intestine [11].
In order to diagnose NAFLD, a history of alcohol use must be ruled out, along with other chronic disorders that may lead to chronic liver disease. In the course of the disease, elevated liver function tests (LFT) and a decreased level of adiponectin in peripheral blood may be observed. Other tests that may be useful in diagnosing NAFLD include ultrasound (US), magnetic resonance imaging (MRI), computed tomography (CT) of the abdomen, and liver elastography.
All patients included in this study underwent an evaluation of liver function and structure in order to document the presence and severity of NAFLD. In order to assess liver function impairment, a standard set of blood liver enzymes was assessed, including aspartate aminotransferase (AST) and alanine aminotransferase (ALT). In order to evaluate liver structure and its impairment in the course of NAFLD, we performed ultrasound examinations. The Sheriff-Saadeh scale was used to assess the severity of steatohepatitis. The details of the Sheriff-Saadeh scale are shown in Table 1 [5]. All ultrasonographic examinations were performed by the same experienced physician. The General Electric LOGIQ 7 system with a 3.5-5.5 MHz convex probe was used for all examinations.
Follow-up evaluations were performed one year after surgery. During follow-up visits, we evaluated the effectiveness of bariatric procedures in terms of weight loss and reduction of comorbidities. A detailed medical history was taken, and physical examinations were conducted. Body weight was determined using the Tanita BC-420S MA device. The percentage weight loss (%WL), the percentage loss of excessive weight (%EWL), and the percentage loss of excessive BMI (%EBMIL) were used to evaluate the reduction in body mass. To evaluate NAFLD regression, follow-up ultrasound examinations were performed by the same person, according to the Sheriff-Saadeh scale, and the same laboratory tests as before surgery, including AST and ALT levels, were carried out. The patients were referred for psychological or dietary advice if they reported any problems in those areas.
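For reference, %WL, %EWL and %EBMIL follow standard definitions; a small sketch, assuming the conventional reference BMI of 25 kg/m² for "excess" weight and BMI (the paper does not state its reference):

```python
def weight_loss_metrics(weight_pre, weight_post, height_m, ref_bmi=25.0):
    """%WL, %EWL and %EBMIL; ref_bmi = 25 kg/m^2 is the usual (assumed)
    reference defining 'excess' weight and excess BMI."""
    bmi_pre = weight_pre / height_m ** 2
    bmi_post = weight_post / height_m ** 2
    ideal_weight = ref_bmi * height_m ** 2
    pwl = 100.0 * (weight_pre - weight_post) / weight_pre
    pewl = 100.0 * (weight_pre - weight_post) / (weight_pre - ideal_weight)
    pebmil = 100.0 * (bmi_pre - bmi_post) / (bmi_pre - ref_bmi)
    return pwl, pewl, pebmil

# Applied to the cohort means (height back-solved from mean weight 143.85 kg
# and mean BMI 49.16 kg/m^2); this only approximates the paper's figures,
# which average the per-patient percentages.
height = (143.85 / 49.16) ** 0.5
print(weight_loss_metrics(143.85, 102.34, height))
```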
The Statistica 10 software was used for statistical analysis; the Student's t-test was used to compare the differences in AST and ALT levels before surgery and on follow-up, and the Wilcoxon signed-rank test was performed for the Sheriff-Saadeh scale evaluation.
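A minimal illustration of both tests with scipy, on hypothetical paired pre/post values (the per-patient data are not reported in the paper):

```python
import numpy as np
from scipy import stats

# Hypothetical paired pre/post values for 20 patients (illustration only).
rng = np.random.default_rng(1)
alt_pre = rng.normal(49.0, 20.0, 20)
alt_post = alt_pre - rng.normal(21.0, 8.0, 20)        # post-operative ALT
grade_pre = rng.integers(1, 4, 20)                    # Sheriff-Saadeh 1-3
grade_post = np.maximum(grade_pre - rng.integers(1, 3, 20), 0)

# Paired Student's t-test for the (roughly normal) enzyme levels ...
t_stat, p_t = stats.ttest_rel(alt_pre, alt_post)
# ... and the Wilcoxon signed-rank test for the ordinal ultrasound grades.
w_stat, p_w = stats.wilcoxon(grade_pre, grade_post)
print(f"paired t-test p = {p_t:.2e}, Wilcoxon p = {p_w:.2e}")
```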
All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and national research committee and with the 1964 Helsinki declaration and its later amendments or comparable ethical standards. The study was approved by the Bioethics Committee of the Jagiellonian University. The study was registered under NCT02828579 (ClinicalTrials.gov).
RESULTS
We observed a reduction in the body mass index in all patients who underwent surgery. The mean BMI one year after the procedure was 36.42 kg/m² (down from 49.16 kg/m²). The mean body mass was 102.34 kg (down from 143.85 kg). The %WL was 33.01%, %EWL was 58.8%, and %EBMIL was 61.37%.

NAFLD may be found in up to 98% of patients undergoing bariatric surgery. However, it is not associated with a higher risk of perioperative morbidity, even if the patient suffers from non-alcoholic steatohepatitis (NASH), which is an advanced phase of NAFLD that is defined by the presence of inflammatory infiltration in the liver tissue.
Our study suggests that bariatric surgery promotes regression of hepatic steatosis. We observed significant improvement not only in liver function tests but also in Sheriff-Saadeh scores, which dropped significantly, proportionally to the reduction in BMI. Our findings are similar to those from other trials that have evaluated NAFLD status after bariatric surgery. Vargas et al. revealed that the Roux-en-Y gastric bypass not only leads to body weight reduction but also improves liver function through regression of liver steatosis [18]. Hady et al., who investigated the influence of laparoscopic sleeve gastrectomy on the metabolic status of patients, proved that it causes changes in the levels of AST and ALT; however, these changes were not statistically significant. Nevertheless, the surgery was associated with a significant decline in lipid levels. Haafez et al. compared vertical band gastroplasty and adjustable gastric banding in terms of regression of liver steatosis in patients who underwent bariatric surgery [19]. Moreover, according to Weingarten et al., NAFLD and NASH are not contraindications to bariatric surgery and do not increase the perioperative complication rate [20]. A limitation of our study is its small sample size; however, this is one of the few available studies in a European population and the first study in the Polish population. Moreover, to our knowledge, this is the only study that has investigated the feasibility of ultrasound scales as the sole diagnostic method, without the need for liver biopsy.
CONCLUSIONS
Lifestyle modification should be the first-line treatment in NAFLD; however, bariatric surgery should be considered as a treatment option in patients with severe and complex obesity.
Ethical approval: The study was approved by the Bioethics Committee of the Jagiellonian University.
Competing interest: No benefits in any form have been received or will be received from a commercial party related directly or indirectly to the subject of this article.
With respect to ultrasound, its low cost, safety, and lack of radiation exposure make it the first-line method for the diagnosis and follow-up of NAFLD.
Liver biopsy remains the gold standard in the diagnosis of NAFLD [12]. It is the most conclusive method that can be used to exclude steatohepatitis, which is a condition that can lead to liver fibrosis and eventually cirrhosis. However, the procedure is invasive and carries a risk of complications, which may affect up to 20% of patients [13]. The rising incidence of NAFLD in Western countries underscores the necessity of developing a less invasive diagnostic test for distinguishing NAFLD from steatohepatitis. The use of liver elastography allows for the evaluation of increased liver stiffness in hepatic fibrosis [14]. This technique is very difficult to apply in bariatric patients, since in patients with a BMI greater than 28 kg/m² there is a high possibility of misdiagnosis [15]. According to the authors, liver biopsy can be avoided in 75% of patients.
There are several ultrasound scales to assess the severity of NAFLD. The ultrasonographic scale developed by Sheriff and Saadeh is an easy tool for detecting and monitoring the disease. However, this scale, by itself, does not distinguish between steatosis and steatohepatitis. Nevertheless, we are convinced that this simple tool is very useful in the monitoring of liver status and indicating which patients should undergo liver biopsy.
There is strong evidence that the most effective way to diminish liver steatosis is body mass reduction. Promrat et al., who studied lifestyle modification as a treatment option, have shown that a minimum body weight loss of 7% causes improvement in liver histology. Dietary modification, together with physical activity, improves lipid levels and aminotransferases and mitigates insulin resistance. [16] Research on the pharmacological treatment of NAFLD also offers some hope. A few drugs seem to be potentially useful, including metformin, alpha-tocopherol, vitamin C, and thiazolidinediones. However, none of these agents has been proven effective in decreasing the level of liver steatosis. [16] Surprisingly, regular consumption of coffee provides some protection against liver fibrosis. [17] Although lifestyle modification should be the treatment of choice in all cases of NAFLD, it is not a practical solution in patients with morbid obesity. The long-term results of conservative treatment of obesity and related comorbidities in this study group are very disappointing. So far, surgery is the only method that has well-documented and lasting results.
Table 1. The Sheriff-Saadeh ultrasound scale of hepatic steatosis.
Grade 0: Normal echogenicity.
Grade 1: Slight, diffuse increase in fine echoes in liver parenchyma with normal visualization of diaphragm and intrahepatic vessel borders.
Grade 2: Moderate, diffuse increase in fine echoes with slightly impaired visualization of intrahepatic vessels and diaphragm.
Grade 3: Marked increase in fine echoes with poor or non-visualization of the intrahepatic vessel borders, diaphragm, and posterior right lobe of the liver.
REGIONAL CORRELATION AND CYCLOSTRATIGRAPHY IN THE MID-CRETACEOUS FORMATIONS OF THE IONIAN ZONE
The mid-Cretaceous period is characterized by the widespread deposition of organic carbon-rich horizons, documenting oceanic anoxic events (OAEs), which correspond to episodes of major disturbances in the carbon cycle. The causes of these events are still widely debated. In this study, the role of orbital variations in the deposition of black shales in the Ionian Zone (Western Greece), an area already known for documenting OAEs, is examined. Cyclostratigraphic data for the Lower Aptian interval are interpreted in a climate change context, and specific mechanisms for the deposition of organic carbon-rich horizons are hypothesized. Field observations, stable isotope and total organic carbon analyses, as well as biostratigraphic data, enable a correlation between the Gotzikas section and the Paliambela section, also in the Ionian Zone (Epirus, Greece). In addition, the Gotzikas section is correlated with the Poggio le Guaine - Fiume Bosso composite sequence of the Umbria - Marche region (Italy). Lower Aptian sedimentation in the Ionian Basin seems to be controlled by orbital forcing. The short eccentricity and obliquity rhythms are most prominently recorded, whilst the presence of amplitude modulation cycles indicates a lesser control by the long eccentricity and precession periodicities.
Introduction
The mid-Cretaceous period is characterized by the widespread deposition of organic carbon-rich horizons, documenting oceanic anoxic events (OAEs), which correspond to episodes of major disturbances in the carbon cycle (Jenkyns 1980). The causes of these events are still widely debated. According to the preservation model, decreased ventilation of the sea floor, due to increased detrital input, led to enhanced carbon burial, as a result of low organic matter remineralisation (Tyson 1995). Alternatively, very high primary productivity levels and overwhelming oxic remineralisation of organic matter may have been the cause of anoxic conditions on the sea floor, through expansion of the oxygen minimum zone (Parrish 1995). Though both result in enhanced preservation of organic matter, it appears that the mid-Cretaceous black shales can be separated into two classes, having either a detrital (D-OAEs) or a productivity-driven (P-OAEs) origin (Erbacher et al. 1996).
Recent integrated studies of the mid-Cretaceous basin deposits in the Umbria - Marche region in Italy (Galeotti et al. 2003) have confirmed an orbital control on the deposition of bedded cycles and organic carbon-rich facies in this basin. According to these authors, climatic changes modulated by orbital forcing induced changes in the hydrological and atmospheric cycles, leading to enhanced fluxes and preservation of continental-derived organic matter in the black shale horizons.
In this study, the well-known Gotzikas section, documenting OAE1b and OAE2 (Tsikos et al. 2004, Karakitsios et al. 2007), is correlated with the Paliambela section, also in the Ionian Zone of Western Greece, and with the Poggio le Guaine - Fiume Bosso composite sequence of the Umbria - Marche region (Italy). In addition, the role of orbital variations in the deposition of black shales in the Ionian Zone (Western Greece) is examined. Furthermore, cyclostratigraphic data are interpreted in a climate change context, and specific mechanisms for the deposition of organic carbon-rich horizons are hypothesized.
Geological Setting
During the Early Jurassic, the Ionian Zone was an area of platform carbonate sediment deposition (Pantokrator limestones) on a base comprising primarily Triassic evaporites. A period of extensional stress associated with the opening of the Neotethys, from the Pliensbachian to the Tithonian, led to a differentiation of the Ionian realm into discrete palaeogeographic units recording early regional subsidence (Karakitsios 1992). The Lower Cretaceous (Berriasian - Turonian) pelagic Vigla Limestone Formation represents the post-rift formation. This sequence is defined by a Lower Berriasian unconformity at the base of the Vigla Limestone Formation, which largely obscures pre-existing syn-rift structures. Minor off- and on-lap movements along the Ionian Basin margin continued until the Late Eocene, when flysch sedimentation commenced. The evolution of the Ionian Basin constitutes a good example of inversion tectonics in a basin with an evaporitic base (Karakitsios 1995).
The Vigla Limestone Formation s.l. comprises a thick succession of thin-layered (5-10 cm), sublithographic pelagic limestones with radiolaria, which are rhythmically interbedded with centimetre- to decimetre-thick radiolarian chert beds. The upper part also contains a series of intercalated organic carbon-rich marlstones and shales, termed the Vigla Shale Member (Karakitsios 1995, Rigakis and Karakitsios 1998). A previous examination of the Gotzikas section, south of the village of Tsamandas (NW Epirus; Fig. 1), has shown that the Vigla Limestone Formation s.l. has an approximate thickness of 100 m (Tsikos et al. 2004b). The upper ~80 m of this consist of a rhythmically alternating, thinly bedded limestone/black chert succession (Vigla limestones s.s.). Towards the lower 15-20 m of this interval, the limestones become more massive, pinkish-grey in colour and increasingly silicified. Within this portion, Tsikos et al. (2004b) have identified an isolated black shale horizon (~15 cm thick) as corresponding to the "Paquier event" (OAE1b). These silicified Vigla limestones continue stratigraphically for another 8-10 m below this black shale (Fig. 2). The sediments then pass downwards to the Vigla Shale Member.
According to Tsikos et al. (2004b), the Vigla Shale Member in the Gotzikas section spans the Aptian - Lower Albian interval; however, its upper portion is not visible in the field. In the lowermost 13.22 m of the succession, 20 individual organic carbon-rich marly horizons are seen, ranging in thickness from 10 to 40 cm (Fig. 2). These layers are dark grey to black, well laminated and free of any evidence of bioturbation. They are interbedded with reddish-grey, 20-50 cm thick marly limestone beds, silicified in places and containing common intercalations of dark chert layers (5-10 cm thick). The stratigraphically higher parts of the Vigla Limestone Formation s.l. in this section lack organic carbon-rich sediments, except for an isolated black shale horizon about 40 m above the "Paquier" level. This horizon is ~35 cm thick, comprising two distinct, finely laminated subunits separated by a thin cherty layer. It has been shown to correspond to the Cenomanian - Turonian "Bonarelli" event (OAE2) (Karakitsios et al. 2007).
Biostratigraphy
Biostratigraphic analyses of planktonic foraminifera and calcareous nannofossils (Tsikos et al. 2004b, Karakitsios et al. 2007) suggest that the Vigla limestones s.s. in the Gotzikas section span the Albian - Turonian interval. The lowermost 13.22 m of the section, comprising the Vigla Shale Member (i.e. 26.8-14.9 m below the OAE1b level), may be assigned to the Lower Aptian, based on the presence of the nannofossils Assipetra infracretacea larsonii, Hayesites irregularis, and Rucinolithus terebrodentarius youngii, as well as the absence of Upper Aptian species (Tsikos et al. 2004b). The black shale corresponding to the "Paquier" event in the lower portion of the section marks the Lower to Middle Albian, as shown by the presence of the calcareous nannofossil Hayesites albiensis from 3 m below to 10 m above this horizon. The ~20 m-thick interval above the OAE1b level is assigned to the Middle to Upper Albian. This is suggested by the first occurrence of the calcareous nannofossil Quadrum eneabrachium and the presence of the planktonic foraminifer Biticinella breggiensis, followed by the presence of the planktonic foraminifera Rotalipora appenninica and Planomalina buxtorfi, and the nannofossil Eiffellithus turriseiffelii (Premoli Silva and Sliter 1995, Karakitsios et al. 2007).
A Middle Cenomanian age is given to the 6 m of sediments overlying the Upper Albian interval, due to the first occurrence of Rotalipora cushmani and the continued presence of R. appenninica. Praeglobotruncana gibba also occurs 4.5 m higher in the succession. These data have been used to correlate this black-shale unit with the Cenomanian - Turonian boundary "Bonarelli" level of the Umbria - Marche region (Tsikos et al. 2004a, Karakitsios et al. 2007). Finally, the first occurrence of Marginotruncana gr. pseudolinneiana constrains the uppermost part of the succession as Turonian in age (Premoli Silva and Sliter 1995).
Sedimentation rate
An age model (Fig. 3) was constructed, based on the first and last occurrences shown in Table 2, with reference to the biostratigraphic scheme of Bralower et al. (1995) and the Geologic Time Scale 2004 (Gradstein et al. 2004). The mean sedimentation rate for the upper part of the Vigla limestones s.s. (Albian - Turonian) in the Gotzikas section can be approximated at 0.27 cm/ky (R² = 0.97). There also appears to be a substantial relative increase in the sedimentation rate, to 1.10 cm/ky, about 5.5 m (500 ky) prior to the "Bonarelli" event. Accumulation of sediment returns to a lower rate (0.30 cm/ky) after this event horizon. The mean sedimentation rate for the lower part of the Vigla Shale Member in this section is 0.66 cm/ky.
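The quoted rates follow from a linear fit of stratigraphic height against tie-point age. A minimal sketch with hypothetical tie points (the actual Table 2 occurrences are not reproduced here), chosen so the slope lands near the quoted 0.27 cm/ky:

```python
import numpy as np

# Hypothetical biostratigraphic tie points (stratigraphic height in m, age
# in Ma) standing in for the first/last occurrences of Table 2; the values
# are illustrative only.
height = np.array([0.0, 10.0, 25.0, 40.0, 55.0])   # m above section base
age = np.array([112.0, 108.3, 102.7, 97.1, 91.6])  # Ma

# Linear age model: the slope of height against age is the sedimentation
# rate in m/Ma; 1 m/Ma = 0.1 cm/ky.
slope, intercept = np.polyfit(age, height, 1)
rate_cm_per_ky = abs(slope) * 0.1
r2 = np.corrcoef(age, height)[0, 1] ** 2
print(f"mean rate = {rate_cm_per_ky:.2f} cm/ky (R^2 = {r2:.2f})")
```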
Regional correlation
The field observations, the stable isotopic and total organic carbon analyses, as well as the biostratigraphic information derived from the samples, enabled the correlation between the Gotzikas section and the Paliambela section (Danelian et al. 2004), and with the Poggio le Guaine - Fiume Bosso composite sequence of the Umbria - Marche region (Italy), an area with a sufficiently comparable palaeogeographic and structural evolution to the Ionian realm (Alvarez 1989, Karakitsios 1995). All the correlations can be seen in Figure 4.

Figure 4 - Regional correlation between the Gotzikas section (Tsikos et al. 2004b, Karakitsios et al. 2007), the Paliambela section (based on Danelian et al. 2004), and the Poggio le Guaine - Fiume Bosso composite sequence (based on Galeotti et al. 2003). 1: limestone, 2: cherty layer or nodule, 3: shale & siliceous mudstone, 4: black shale, 5: marl, 6: marly limestone, 7: cherty limestone. Note that in the Gotzikas section the zero-metre level coincides with the level of the lowermost sample in Table 2 (i.e. V64 at -13.22 m).
The lower part of the Gotzikas section, comprising the Vigla Shale Member, records rhythmic alternations of organic carbon-rich/poor horizons, which can be correlated with the Selli event (OAE1a). In the lowermost 4.5 m, cherty/marly limestones alternate with black shale horizons. This is probably an equivalent of the Fourcade siliceous level described by Danelian et al. (2004) (Fig. 4). However, deposition appears to have taken place in a different palaeogeographical setting, allowing for the deposition of carbonate as well as siliceous sediments. This is in agreement with previous studies conducted in this area, which provide evidence for intense sea-bottom topography within the Ionian Basin well beyond the syn-rift period, due to the continuation of halokinesis (Karakitsios 1995). It is very probable that carbonate/black shale deposition took place in shallower parts of the basin (e.g. the Gotzikas section, especially the upper Lower Aptian part), and chert/black shale deposition in the deeper parts of the Ionian Basin (e.g. the Paliambela section).
A single black shale horizon of Early Albian age, ~15 m below the Paquier level (OAE1b), can be correlated with the Mt. Nerone level described in the Umbria - Marche region. The OAE1b level is correlated with the Urbino level described in the same sequence (Galeotti et al. 2003).
The sediments immediately above the Paquier level (22-33 m; Fig. 4) are most likely equivalent to the Dercourt member, described by Danelian et al. (2004) in the Paliambela section. The lithology of this part of the section comprises limestone and chert interbeds and cherty limestones, within which three black shale horizons have been observed, whereas field observations of the Paliambela section show shale and siliceous mudstones alternating with cherty layers. The biostratigraphic age of this portion of the Vigla Formation is Middle Albian in both cases.
The upper three black shale horizons below the Bonarelli level can be correlated with the OAE1c, which, according to Bralower et al. (1993), spans the entire planktonic foraminiferal Biticinella breggiensis Zone. This event includes the Amadeus Segment described in detail by Galeotti et al. (2003), during which black shale deposition appears to be orbitally controlled.
Cyclostratigraphy
A cyclostratigraphic methodology was applied to the Lower Aptian Vigla Shale Member deposits (lowermost 13.22 m of the section), using field observations and bulk δ13C data (Table 2). In order to identify periodic patterns in the isotopic content of the deposits, spectral analysis of the δ13C time series was applied, using AnalySeries (Paillard et al. 1996; Figs 5-7).
Processing of the data prior to spectral analysis involved detrending (incorporating mean subtraction), pre-whitening (coeff. = 0.7), and arcsine transformation (Weedon 2003). Linear interpolation was also necessary, since the section was not evenly sampled. Following this, a check for stationarity (i.e. whether the statistical characteristics of the time series change significantly over the length of the studied interval; Weedon 2003) was conducted by comparing the spectra of several overlapping segments of the dataset.
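A minimal sketch of this preprocessing chain on a synthetic, unevenly sampled δ13C series; the pre-whitening coefficient 0.7 is taken from the text, while the grid length and the rescaling inside the arcsine transform are assumptions:

```python
import numpy as np

# Synthetic, unevenly sampled delta13C series standing in for the Table 2
# data (illustration only).
rng = np.random.default_rng(2)
depth = np.sort(rng.uniform(0.0, 13.22, 120))                  # m
d13c = (2.0 + 0.05 * depth
        + 0.3 * np.sin(2 * np.pi * depth / 0.8)                # ~0.8 m cycle
        + rng.normal(0.0, 0.05, depth.size))

# 1. Linear interpolation onto an even grid (the estimators require it).
grid = np.linspace(depth.min(), depth.max(), 256)
x = np.interp(grid, depth, d13c)

# 2. Detrend: remove the best-fit line (this also removes the mean).
x = x - np.polyval(np.polyfit(grid, x, 1), grid)

# 3. Pre-whiten: first difference with coefficient 0.7.
x = x[1:] - 0.7 * x[:-1]

# 4. Arcsine transform (after rescaling into [-1, 1]) to stabilise variance.
x = np.arcsin(x / np.max(np.abs(x)))
```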
The spectral analysis employed three different methods: the maximum entropy method (Percival and Walden 1993), the multi-taper method (Thomson 1982), and the Blackman - Tukey method (Priestley 1981) (Figs 5-7). The maximum entropy method (MEM), with a Burg algorithm, assumes the presence of multiple autoregressive processes (AR processes). An AR process involves a dependence of each successive value in the dataset on previous values (Weedon 2003). This is probably true for mid-Cretaceous sediments recording oceanic anoxic events: the processes that seem to trigger and control these events (gas-hydrate release, primary productivity increase, or high detrital input) are evolutionary through geological time. Thus the parameters that record the phenomena (e.g. δ13C, δ18O) can indeed be thought to incorporate a "memory" of sorts, which in essence defines AR processes. As a result, the MEM can be considered a valid method of spectral estimation for this type of cyclostratigraphic data. In order to avoid the slight shift of frequency that often occurs with this method, the actual valid peak frequencies were here determined by the use of the multi-taper method (MTM) and the Blackman - Tukey method (BTM). These two methodologies are non-parametric and therefore do not involve any assumptions about the nature of the dataset (Weedon 2003). Also, the spectra produced by each method have a different y-axis scale. Normalization was not considered important, since in cyclostratigraphic studies it is the form of the spectra that is significant, not the power.
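The peak-identification step can be illustrated with a plain periodogram, used here only as a stand-in for the MEM, MTM and BTM estimators (none of which ships with scipy), on a synthetic series whose input periods are known:

```python
import numpy as np
from scipy.signal import periodogram

# Synthetic record sampled every 5 ky with ~100 ky (short eccentricity) and
# ~41 ky (obliquity) components plus noise -- illustration only.
dt = 5.0                                   # ky per sample
t = np.arange(0.0, 3000.0, dt)
rng = np.random.default_rng(3)
x = (np.sin(2 * np.pi * t / 100.0)
     + 0.5 * np.sin(2 * np.pi * t / 41.0)
     + rng.normal(0.0, 0.3, t.size))

# Plain periodogram as a stand-in for the MEM/MTM/Blackman-Tukey estimates.
freq, power = periodogram(x, fs=1.0 / dt, detrend='linear')
for i in power.argsort()[::-1][:2]:
    print(f"peak at {freq[i]:.5f} cycles/ky -> period ~ {1.0 / freq[i]:.0f} ky")
```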
Examination of the power spectra produced by all three methods has enabled an estimation of the regular cycles recorded in the sedimentary record observed in the Gotzikas section. The long eccentricity signal (E) is somewhat distorted and exhibits non-linear higher-frequency combination tones, probably due to the control of long eccentricity over the sedimentation rate (Weedon 2003). However, the long eccentricity rhythm modulation appears consistently through all three spectra. In addition, amplitude modulation cycles of 727 ky period, which appear in both the MEM and the BTM spectra, and of 353 ky period, in the MTM spectrum, probably result from the added effect of the precession and eccentricity periodicities (Grippo et al. 2004).
As exhibited consistently by all three spectral estimation methods, black shale deposition during the Lower Aptian interval in the Ionian Basin was strongly controlled by the short eccentricity signal of ~123.9 ky (e1) and ~94.8 ky (e2) (periodicities follow the calculations of Laskar (1999) for the last 19 My). The e1 mode results from the interactions between the Jupiter-Mars and Jupiter-Earth systems, whilst the e2 mode is controlled by the Mars-Venus and Earth-Venus systems (Laskar 1999).
The "precession of the equinoxes", which derives from the wobble of the Earth's rotational axis, has an influence on climate depending on its interaction with the ellipticity of the Earth's orbit (essentially eccentricity).Earth and Mars' gravitational attraction forms the precessional periodicity at 19 ky which, combined with the mode of 23 ky formed by the Jupiter and Mercury gravitational fields, provides the precessional cyclicity at 21 ky.
Earth's interactions with Mercury, Venus, Mars and Jupiter also affect the axial inclination, because of the pull exerted on the Earth's equatorial bulge (Laskar 1999). This effect results in the obliquity cycles, the main period of which is 41 ky; Saturn additionally induces a lesser cycle at 54 ky. Because these periodicities of the orbital rhythm are closely spaced, they produce several amplitude modulations. The size of the sampling interval in the lower part of the Gotzikas section did not allow for observation of the high-frequency components of obliquity in the spectra produced. However, we can observe the 54 ky-period mode in the MEM and the MTM spectra. According to Grippo et al. (2004), this peak may be suppressed (as in our case) when observed at low and mid latitudes, because its presence in these records is strongly dependent on the degree of degradation of this signal during transmission (since it is originally generated in the polar region). In all three spectra the 62.5 ky-period amplitude modulation mode of obliquity can be clearly observed. Note that Laskar (1999) estimates that the Earth's obliquity variation, the most unstable of the orbital parameters, is not expected to have remained stable over longer geological times. Nevertheless, the consistency of our spectral analysis suggests an indirect estimate of the control of the regular obliquity cycles on sedimentation in the lower part of the Gotzikas section.
Slight shifts in the periodicities of the orbital cycles, as shown in the spectra (Figs 5-7), relative to those predicted by Laskar (1999), are observed and may be explained in a number of ways. These power spectra are based on a hypothesized age model (assuming constant sedimentation rates), formulated according to the biostratigraphic and field data, and in conjunction with the regional correlation presented. Therefore the expected discrepancies between the above age model and the actual age-depth relationship would tend to shift the regular cyclicity frequencies in the spectra. Furthermore, the calculations of the orbital parameters produced by Laskar (1999) range back only over the last 19 My; it is highly improbable that exactly the same parameters hold for mid-Cretaceous time as well. Finally, general limitations of the pre-processing and spectral estimation methods applied herein may have contributed to the shifts in spectral frequencies.
Discussion
Orbital eccentricity is responsible for changes in Earth's insolation, especially expressed in seasonality. In general, higher eccentricity intensifies the seasons in the hemisphere in which the perihelion occurs during the summer, and the amplitude of these climatic deviations is proportional to the amount of eccentricity. The four eccentricity rhythms, introduced by the gravitational interactions between the planets, in conjunction with the precessional cycles, have a combined effect on climate. This is maximized at the frequencies corresponding to the "difference tone" or the "combination tone" between them (Laskar 1999).
The precession-eccentricity syndrome is thought to be responsible for a decrease in deep-water oxygenation, dysoxia intensifying, at certain times, to the point of anoxia.At these periods, dark marly beds, with elevated organic content and iron sulfide, were widely deposited (Herbert et al. 1986b, Galeotti et al. 2003, Grippo et al. 2004).These thin precessional anoxic pulsations (PAPs) segment the stratal sequence into organic-carbon rich and organic-carbon poor horizons.It is proposed here that the lithological alternations observed in the Lower Aptian part of Gotzikas section are caused by such precessional anoxic pulsations, modulated by the precession-eccentricity syndrome.
Comparison between the orbital cyclicities and amplitude modulations calculated by Laskar (1999) and the age model hypothesized for this part of Gotzikas section allows the conclusion that, moving backwards within this Lower Aptian segment, the deposition of the black shale horizons was modulated by longer-period precessional amplitude modulations.It can be observed that the duration of time (according to the age model) between consecutive black marls and shales (PAPs) generally increased from ~14-15 ky to ~53 ky through the lowermost part of the section.This phenomenon may reflect an actual change of the modulation mechanism.However, as sedimentation of the lowermost part of the section (equivalent to the Fourcade level of Danelian et al., 2004) took place in a deeper, more siliceous environment, black shale deposition may have been controlled by lower frequency orbital periodicity.
According to Herbert et al. (1986a, 1986b), like the precession-eccentricity syndrome, obliquity cycles drove variations in carbonate production and bottom redox conditions.Grippo et al. (2004) report thicker and more abundant PAPs and relatively pure limestones in the Piobbico core (Scisti e Fucoidi Fm - Scaglia Bianca sequence, Umbria-Marche, Italy) coinciding with strong obliquity signals.According to these authors this observation suggests that primary productivity increased during these times.
There are two main theories regarding the mechanisms triggering black shale formation and PAPs.
According to de Boer (1983) carbonate production is a function of primary productivity, meaning that episodes of exceptional organic carbon preservation triggered black shale deposition.On the other hand, Herbert et al. (1986a) provided evidence to support increased primary productivity in the carbonate-rich phase, thus suggesting organic hyperproduction as the mechanism responsible for PAPs.Grippo et al. (2004) reason that the biotic record favors de Boer's model, and thus assign the PAPs in the Scisti e Fucoidi Formation to the perihelial winter phase of the precessional cycle of low seasonality.Similarly, Galeotti et al. (2003) proposed a model for black shale deposition during the Upper Albian, in the Umbria -Marche region.According to these authors, black shale sedimentation occurred during precession minima, in times of increased precipitation, enhanced continental runoff and stratification of the upper water column.
Regarding the Lower Aptian interval in the Ionian Zone, Danelian et al. (2004) indicate that the highly siliceous Fourcade level of Paliambela section constitutes a local record of a global ocean eutrophication event during the OAE1a (Baudin et al. 1998, Hochuli et al. 1999).This is in agreement with observations for the same interval of Gotzikas section.We therefore propose that the Lower Aptian portion of this section (lowermost 13.22 m) is equivalent to the OAE1a event, and the lowermost, more siliceous part of these sediments to the Fourcade level of Paliambela section.Orbital forcing in conjunction with increased upwelling due to the opening of new Mediterranean Tethys gateways, as suggested by Danelian et al. (2002, 2004), may have indeed been the cause of the high productivity episode recorded as the OAE1a event in this region.
Figure 1 -(a) The zones of NW Hellenides; (b) location of the study area; (c) simplified geological map of the study area (Tsikos et al. 2004b)
Figure 3 - Age model for the upper part of the Vigla limestones in Gotzikas section, Ionian Zone (Epirus, Greece).The mean sedimentation rate is 0.2693 cm/ky; the trend line was obtained using a linear fit, based on the correlation of the study section with the Poggio le Guaine - Fiume Bosso composite sequence (Galeotti et al. 2003) of the Umbria-Marche region (Italy).
Figure 7 - Power spectrum (δ13C time-series) for the lower part of the section, produced through the Blackman-Tukey method (BTM), with 80% confidence interval.Numbers on the peaks indicate the period (1/frequency).Number of lags = 50.
Some statistical remarks on GRBs jointly detected by Fermi and Swift satellites
We made a statistical analysis of the Fermi GBM and Swift BAT observational material accumulated over 15 years. We studied how GRB parameters (T 90 duration, fluence, peak flux) observed by only one satellite differ from those observed by both. In the latter case, it was possible to directly compare the values of the parameters that both satellites measured. The GRBs measured by both satellites were identified using the knn() k-nearest neighbour algorithm in the FNN library of the R statistical package. In the parameter space we determined the direction in which the jointly detected GRBs differ most from those detected by only one of the instruments, using lda() in the MASS library of R. To quantify the strength of the relationship between the parameters obtained from the GBM and BAT, a canonical correlation was performed using the cc() procedure in the CCA library of R. The GBM and BAT T 90 distributions were fitted with a linear combination of lognormal functions. The optimal number of such functions required for the fit is two for GBM and three for BAT. Contrary to the widely accepted view, we found that the number of lognormal functions required for fitting the observed distribution of GRB durations does not allow us to deduce the number of central engine types responsible for GRBs.
INTRODUCTION
GRBs have been known for decades since their discovery by Klebesadel et al. (1973).Nevertheless, there is still no generally accepted theory for their origin which would fully and satisfactorily explain all the observational facts.The first measurements of the BATSE instrument on board the CGRO satellite (Fishman et al. 1992) already showed that there are two characteristic maxima in the distribution of the T 90 duration: one in the 0.1 − 1 s, the other in the 10 − 100 s time frame (Kouveliotou et al. 1993).In our case, as we see later, these two characteristic peaks appear in both the Fermi and Swift T 90 distributions, although the times of the peaks are different.In the case of short bursts this peak is at 0.6 s for Fermi and at 0.3 s for Swift.
To explain these two peaks, researchers generally agree that the models can be divided into two large groups.One of them assumes that GRBs originate from the merging of two compact objects (neutron star, black hole, or possibly a white dwarf).Analyzing GRB light curves one can find those that best fit one of these mechanisms (Rueda et al. 2018b).
GRBs with long (> 10 s) durations are typically caused by collapsing high mass (> 10 solar mass) stars.In some cases, however, merging two compact objects may also produce long GRBs (Rueda et al. 2018a).King et al. (2007) also presented arguments for producing long GRBs by merging a massive white dwarf with a neutron star.Of course, all these models are theoretical possibilities.
There are also ideas that cannot be fitted to any of the above models.Huang et al. (2003), for example, believe that GRBs may also be formed by a neutron star kick.Bombaci & Datta (2000) studied the conversion of a neutron star to a strange star as a possible energy source for GRBs.
Naturally, all these models can be realized, however, not necessarily with the same frequency in a GRB sample, collected from observations.The two well-defined peaks in the T 90 distribution may indicate that one of the models is dominant for the short and long GRBs, respectively.Of course, this is just a statistical argument.In the case of some specific GRBs, it is necessary to carefully analyze whether one of the options has been realized or whether we are facing a new case that has not been studied in theory so far.
The distribution of T 90 observed by BATSE could be approximated by the superposition of two lognormal distributions.However, Horváth (1998) and Mukherjee et al. (1998) showed that supposing a third, intermediate lognormal group fits the T 90 distribution much better.Many authors (Hakkila et al. 2000;Balastegui et al. 2001;Horváth 2002;Borgonovo 2004;Horváth et al. 2004;Chattopadhyay et al. 2007;Zitouni et al. 2015) have since confirmed the existence of this intermediate GRB class in the same database using different techniques.
Analyzing the T 90 distribution obtained by the Swift satellite also resulted in the existence of a third, intermediate group between the short and long GRBs (Horváth et al. 2008;Huja et al. 2009;Horváth et al. 2010;Zitouni et al. 2015;Horváth & Tóth 2016;Deng et al. 2022).
Whether we look at merging or collapsar models, a very compact object is created for both types.This is the fireball model by Meszaros & Rees (1993).The energy condensed in this extremely small volume is released in a very short-lived explosion and creates the GRB phenomenon observed (Piran 2004;Mészáros 2006;Pe'er 2015).
The compact objects created in the models outlined above, from which the compressed energy is released in the form of a GRB, differ in the energy compressed into the extremely small volume and in the time scale of the burst dynamics.A lognormal peak in the T 90 distribution supports the dominance of one of these.
The question arises: does the third lognormal peak suggest the presence of a third type of central engine for intermediate T 90 duration GRBs?We get closer to the answer if we look at the GRBs that both Fermi and Swift detected.
Differences in observational strategies
The Neil Gehrels Swift Observatory and the Fermi Gamma-ray Space Telescope have different technical layouts and observational strategies.Swift has three major observational facilities: a coded mask telescope for gamma-ray detection (Burst Alert Telescope, BAT), and two telescopes for the X-ray and Ultra-Violet/Optical ranges (XRT and UVOT, respectively) (Gehrels et al. 2004;Barthelmy et al. 2005).
Swift operates in observatory mode, which means that, after a burst alert, the spacecraft slews to point towards the burst's direction in the sky (Barthelmy et al. 2000, 2005).BAT covers a large fraction of the sky (over one steradian fully coded, three steradians partially coded; by comparison, the full sky solid angle is 4π, or about 12.6 steradians).It locates the position of each event with an accuracy of 1 to 4 arc-minutes within 15 seconds.The BAT is sensitive in the 15 − 150 keV energy range.
The XRT can take images and perform spectral analysis of the GRB afterglow.This provides a more precise location of the GRB, with a typical error circle of approximately 2 arcseconds radius.The XRT is also used to perform long-term monitoring of GRB afterglow lightcurves for days to weeks after the event, depending on the brightness of the afterglow (Burrows et al. 2000;Hill et al. 2000;Citterio et al. 1996;Holland et al. 1996;Short et al. 1998;Wells et al. 1992, 1997).The XRT is sensitive in the 0.2 − 10 keV energy range.
The UVOT is used to detect optical afterglows.The UVOT provides a sub-arcsecond position and makes optical and ultra-violet photometry (Roming et al. 2005;Mason et al. 2001;Fordham et al. 1989).
The Swift strategy is to reach all new GRB positions as soon as possible and follow all the GRB afterglows as long as the signal can be distinguished from the background noise of the detector.Swift typically slews to a new target in less than about 90 seconds, so XRT and UVOT observations begin while the burst is still in progress.When Swift is prevented from promptly pointing at the most recent bursts, it follows a schedule uploaded from the ground.This schedule makes it possible to follow up the GRB afterglows whenever they are in the line of sight of the detectors, until the observable brightness of the burst becomes fainter than the sensitivity threshold of the detector.
The LAT is an imaging gamma-ray detector (a pair-conversion instrument) which detects photons with energy from about 20 MeV to 300 GeV, with a field of view of about 2.5 steradian (20% of the whole sky) (Atwood et al. 2009).
The GBM consists of 14 scintillation detectors (twelve sodium iodide crystals for the 8 keV to 1 MeV range and two bismuth germanate crystals with sensitivity from 150 keV to 30 MeV), and can detect gamma-ray bursts in that energy range across the whole 4π sr of the sky not occluded by the Earth (Bissaldi et al. 2009;Bhat et al. 2009).
For the first few years of the Fermi mission the default observation mode was an all-sky survey, optimized to provide relatively uniform coverage of the entire sky with the LAT instrument every three hours.More than 95% of the mission was carried out in this observation mode.However, Fermi's flexible survey mode is also capable of modified scanning patterns and inertially pointed observations, which allow for increased coverage of selected parts of the sky.
Due to the different energy response characteristics, technical layout, and observational strategy, the GRBs detected by Swift are not necessarily detected by Fermi and vice versa.It is therefore an important problem to study which part of the GRB population is observed by both of the satellites and which is observed by only one of them.Furthermore, it is also important to know whether there are physical differences between these classes (Racz et al. 2018b,a).
To compare the physical parameters of the GRB detected by BAT and GBM, we used the physical quantities obtained from measurements of both satellites.These parameters are the following: duration (T 90 ), fluence, 1024 ms peak flux.
Comparison of BAT and GBM GRB triggering
The BAT burst trigger algorithm looks for count rates over the estimated background and constant sources.The algorithm constantly examines the criteria that determine the pre-burst background; the BAT processor continuously follows hundreds of such criteria at the same time.The burst trigger threshold is adjustable between 4 − 11 sigma above background noise, with a typical value of 8 sigma.One of the most important features of BAT is its imaging ability.
After the burst triggers, the onboard software checks that the trigger comes from a point source, thus eliminating many background sources.This yields a GRB sensitivity of ≈ 10⁻⁸ erg cm⁻² s⁻¹ (in the 15-150 keV range), corresponding to ≈ 0.1 ph cm⁻² s⁻¹ at 75 keV, the middle of the energy range of BAT sensitivity.
A GBM burst trigger occurs when the onboard software detects an increase in the count rate of two or more NaI detectors above an adjustable threshold, expressed in units of the background count rate standard deviation (4.5 − 7.5).The trigger algorithms use four BATSE-compatible energy ranges (25 − 50 keV, 50 − 300 keV, 100 − 300 keV, and > 300 keV) and ten different timescales between 16 ms and 8.192 s.There are 120 distinct trigger algorithms available, with approximately 75 of them typically operating concurrently.Fermi GBM's burst sensitivity (the peak 50 − 300 keV flux for a 5σ detection) is less than 0.5 ph cm⁻² s⁻¹.
Hence, the background estimation differs between the two satellites, and therefore the integrated T 90 values are calculated with different methods.The BAT's coded mask restoration algorithm inherently includes the background subtraction, leaving only the statistical fluctuation in the lightcurve data.In the GBM case the value of T 90 from a lightcurve relies on the background estimation.To estimate the background during a GBM trigger, a common technique is to select background intervals on either side of the trigger and interpolate using a polynomial function.Another approach involves acquiring background spectra from orbits on preceding and subsequent days when the spacecraft occupied a similar geomagnetic position in its orbit (Fitzpatrick et al. 2012).For precise background determination one can also take the detailed positional information of the satellite and the celestial objects (Earth, Sun, Moon) into account (Szécsi et al. 2013), or use a physically motivated detailed background model for the GBM (Biltzinger et al. 2020).Other information maximization techniques can also be used, e.g. the Automatized Detector Weight Optimization, which maximizes the signal's peak over the background's peak over the search interval (Bagoly et al. 2016).
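As a concrete illustration of the polynomial background-interpolation step, the R sketch below fits a second-order polynomial to the off-source intervals of a toy binned lightcurve and extracts T 90 as the interval containing 5% to 95% of the background-subtracted counts. All numbers, the off-source windows, and the polynomial order are illustrative assumptions, not either mission's pipeline.

```r
set.seed(2)
t    <- seq(-50, 150, by = 0.5)                    # bin centres (s)
rate <- 100 + 0.05 * t +                           # drifting background
        400 * exp(-((t - 10) / 8)^2) * (t > 0)     # toy burst profile
d    <- data.frame(t = t, counts = rpois(length(t), rate))

off  <- d$t < -10 | d$t > 100                      # off-source intervals
bfit <- lm(counts ~ poly(t, 2), data = d[off, ])   # polynomial background
net  <- cumsum(d$counts - predict(bfit, newdata = d))

tot  <- tail(net, 1)                               # total net counts
t05  <- d$t[which(net >= 0.05 * tot)[1]]           # 5% accumulation time
t95  <- d$t[which(net >= 0.95 * tot)[1]]           # 95% accumulation time
T90  <- t95 - t05                                  # duration estimate (s)
```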
Although the energy sensitivity range of GBM is much wider than that of BAT, the higher sensitivity of BAT may result in GRBs triggering BAT but not GBM.It can happen that only one of BAT or GBM is triggered; when both are, the observed physical parameters of the GRBs will still differ due to the different spectral characteristics of BAT and GBM.
DATA & METHODS
Our main database consists of all GRBs of the Swift and Fermi detections from the beginning of their missions (December 17th, 2004 for Swift and July 14th, 2008 for Fermi) until April 14th, 2023.We used only those GRBs in our analysis for which both satellites were observing simultaneously, i.e. from the first observation of Fermi onwards.
First we assigned an angular position-trigger time frame to the GRBs detected by the Swift and Fermi satellites, respectively.For the detailed procedure see Racz et al. (2018a,b).Then we identified the closest Fermi-Swift pairs in this coordinate frame using the knn() procedure in the FNN library of the R statistical program (R Core Team 2017; Beygelzimer et al. 2019;Ripley 1996;Venables & Ripley 2002a).The results can be seen in Fig. 1.
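A minimal sketch of this matching step is given below, in R with synthetic catalogues. The text cites knn() from FNN; here we use get.knnx() from the same library, which performs the cross-catalogue nearest-neighbour search directly. The coordinates, scalings and distance cutoff are illustrative assumptions, and a toy Euclidean metric stands in for proper angular separation on the sphere.

```r
library(FNN)                                   # nearest-neighbour search
set.seed(3)

# Toy catalogues: sky position (deg) and trigger time (days).
fermi <- data.frame(ra  = runif(300, 0, 360),
                    dec = runif(300, -90, 90),
                    t   = runif(300, 0, 5000))
idx   <- sample(300, 120)                      # bursts seen by both
swift <- data.frame(ra  = fermi$ra[idx]  + rnorm(120, sd = 0.2),
                    dec = fermi$dec[idx] + rnorm(120, sd = 0.2),
                    t   = fermi$t[idx]   + rnorm(120, sd = 1e-3))

# Standardise jointly so angle and time enter on a comparable footing.
both <- rbind(fermi, swift)
mu   <- colMeans(both); s <- apply(both, 2, sd)
fz   <- scale(fermi, center = mu, scale = s)
sz   <- scale(swift, center = mu, scale = s)

nn <- get.knnx(data = fz, query = sz, k = 1)   # nearest Fermi burst
hist(nn$nn.dist, breaks = 50)                  # cf. the bimodality in Fig. 1
couples <- which(nn$nn.dist[, 1] < 0.05)       # illustrative cutoff
```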
Comparing the physical properties of "couples" and "widows" GRBs in BAT and GBM
We have already mentioned in the introduction that the technical layouts of the Swift and Fermi satellites and, consequently, their observational strategies are different.So the question arises: on what does it depend whether a burst is detected by both satellites or by only one of them?It may be a simple geometric effect, i.e. the burst is not in the observed region of the sky for one of the satellites.
Alternatively, the burst may be in the field of view of both satellites but below the detection limit of one of them; in that case the statistical distributions of the physical parameters of the bursts could differ.Namely, in the case of a simple geometric selection effect the statistics of the physical properties of both the "couples" and "widows" bursts should be the same.In the second case, however, when the successful observation depends on the detection limit of the instrument, this is not necessarily true.
Motivated by these facts, it is worth comparing the statistical properties of the "widows" and "couples" GRBs detected by both or only one of the satellites.In the following we discuss these issues in the case of Swift BAT and Fermi GBM, separately.
Creating "couples" and "widows" frames
The results of computing k-nearest neighbour distances enabled us to create three data frames: Swift-Fermi "couples", Swift "widows", and Fermi "widows".These names refer to GRBs detected by both Swift and Fermi satellites, or detected only by Swift or only by Fermi, respectively.Of course, only the time interval in which both Swift and Fermi were operating simultaneously should be taken into account in identifying the "widows".In identifying the "couples" this condition is fulfilled automatically.For the differences between the basic parameters of the "couples" group observed by the different satellites see Fig. 2.
To compare the GRBs detected by the Swift and Fermi satellites, we used the T 90 duration, fluence and peak flux, physical parameters derived from the measurements of both satellites.The parameters were determined from photons incoming in the 15 − 150 keV energy range for Swift (Gehrels et al. 2004) and 10 − 1000 keV for Fermi (Meegan et al. 2009).
Fig. 2 shows even at first glance that the relationship between the quantities measured by BAT and GBM cannot be characterized simply by the y = x line.For duration, the slope is different, while for fluence and peak flux the values measured by Fermi are systematically higher.These differences can be explained by the fact that the energy range of GBM includes the energy range of BAT, but GBM also detects photons with much higher energies, i.e. those that are already outside the energy range of BAT.In the following, we study the differences in the values of the physical variables characterizing the "couples" and "widows" GRBs detected by BAT and GBM.The linear discriminant method was used for this purpose.We also study how the GRBs detected by both satellites differ in the observed variables; for this purpose, canonical correlation was used.In both procedures, the linear (Pearson) correlation plays an important role.This type of correlation is sensitive to outliers in the data; a usual way to suppress their effect in the analysis is to use logarithmic variables.We proceeded in this way in our computations.
Linear discriminant analysis basics
Linear discriminant analysis (LDA) is a method used in statistics to find a linear combination of features that characterizes or separates two or more classes of objects or events.
Suppose we have a set of $p$ measured variables on $n$ cases, each assigned to one of $K$ classes ($K = 2$ in our case).We look for the linear combination of the $\{x_1, x_2, \dots, x_p\}$ variables which gives maximal separation between the groups of cases.That is, we seek the variable $LD1 = a_1 x_1 + a_2 x_2 + \dots + a_p x_p$ with suitably chosen $\{a_1, a_2, \dots, a_p\}$ coefficients ensuring a maximal separation between the classes.
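A hedged R sketch of this step follows, using lda() from the MASS package on synthetic stand-ins for the three log-variables; the group means and spreads are invented, so only the mechanics, not the numbers, mirror the analysis.

```r
library(MASS)                                # lda()
set.seed(4)

n <- 200                                     # per-group sample size
mk <- function(mu, grp)                      # synthetic log-variables
  data.frame(lgT90 = rnorm(n, mu[1], 0.8),
             lgFlu = rnorm(n, mu[2], 0.6),
             lgPkF = rnorm(n, mu[3], 0.4),
             grp   = grp)
d <- rbind(mk(c(1.3, -5.3, 0.5), "couples"),
           mk(c(1.5, -5.8, 0.1), "widows"))

fit <- lda(grp ~ lgT90 + lgFlu + lgPkF, data = d)
ld1 <- predict(fit)$x[, 1]                   # scores along LD1
cor(d[, 1:3], ld1)                           # loadings of each variable on LD1
```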
LDA of "couples" and "widows" in BAT and GBM data
To find the best performing direction we performed linear discriminant analysis (LDA) in the parameter space (Fisher 1936; McLachlan 2004; Yu & Yang 2001; Martinez & Kak 2001).LDA is available as the lda() procedure in the MASS library of the R project (Venables & Ripley 2002b; Racz et al. 2018b).Performing LDA on BAT data we got a very pronounced difference between the "couples" and "widows" GRBs detected by the Swift satellite.
Similarly to the analysis of Swift BAT data, we can look for the most discriminating direction between the "couples" and "widows" in the parameter space of the observed Fermi GBM data.The analysis demonstrated that the difference between "couples" and "widows" is much less pronounced than for GRBs detected by the Swift satellite.
Canonical correlation basics
Canonical correlation (CC) assumes we have two sets of variables, X and Y.The first set, X, contains the $\{x_1, x_2, \dots, x_p\}$ variables, and the second one, Y, the $\{y_1, y_2, \dots, y_q\}$ variables.We make $n$ observations of each variable.We form the linear combinations $U = a_1 x_1 + \dots + a_p x_p$ and $V = b_1 y_1 + \dots + b_q y_q$ and ask: how can one select the $\{a\}$ and $\{b\}$ sets of coefficients so that the correlation between $U$ and $V$ has the maximum value?
In our case, for the Swift and Fermi "couples" we have Swift (denoted with X) and Fermi (denoted with Y) data for the same GRBs, observed by both satellites.In this case $p = q = 3$.The BAT and GBM data from the two sets of variables represent the input of the canonical correlation.For performing canonical correlations we used the cc() procedure in the CCA library of the R statistical package (González & Déjean 2021).We tested the significance of the variables obtained by applying Wilks' lambda test, implemented in the p.asym() procedure in the CCP library of R (Menzel 2012).
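The sketch below shows the mechanics in R with cc() from CCA and the Wilks test from p.asym(); CCP is our assumption for the package behind the Menzel (2012) citation. The paired data are synthetic: a shared latent signal plays the role of the common GRB properties seen by both instruments.

```r
library(CCA)                              # cc()
library(CCP)                              # p.asym(); assumed package for
                                          # the Menzel (2012) citation
set.seed(5)

n <- 300
z <- matrix(rnorm(3 * n), n, 3)           # shared latent burst properties
X <- z + 0.5 * matrix(rnorm(3 * n), n, 3) # "BAT" block (3 variables)
Y <- z + 0.5 * matrix(rnorm(3 * n), n, 3) # "GBM" block (3 variables)

fit <- cc(X, Y)
fit$cor                                   # the three canonical correlations
# fit$xcoef and fit$ycoef hold the direction vectors a and b of the text.
p.asym(fit$cor, n, ncol(X), ncol(Y), tstat = "Wilks")
```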
Canonical correlations between BAT and GBM "couples" data
Maximizing the correlation between the $U$ and $V$ variables yields a unit vector $\vec{a} = (a_1, a_2, a_3)$ in the parameter space of the BAT variables and $\vec{b} = (b_1, b_2, b_3)$ in the parameter space of those in GBM.The vectors $\vec{a}$ and $\vec{b}$ denote the directions in the space of the BAT and GBM variables along which the correlation between $U$ and $V$ is maximal.The components of the vectors $\vec{a}$ and $\vec{b}$, respectively, indicate how strongly the variables of BAT and GBM participate in this correlation.
However, the direction thus obtained does not necessarily characterize all relationships between the BAT and GBM variables.The directions perpendicular to $\vec{a}$ and $\vec{b}$ form subspaces in the parameter spaces of BAT and GBM, respectively, in which we can find another $(\vec{a}, \vec{b})$ pair denoting the directions along which the correlation between $U$ and $V$ is maximal.Repeating this procedure, we get the variables (U1, U2, U3) and (V1, V2, V3) in the BAT and GBM spaces, respectively.The components of their $\vec{a}$ and $\vec{b}$ vectors indicate the contributions of the physical variables of the GRBs.The whole process is coded in the cc() procedure.However, the correlation between the $U$ and $V$ variables thus obtained is not necessarily significant; significance is obtained from the p.asym() procedure.
Remarks to LDA on Swift BAT and Fermi GBM data
The discriminant analysis between Swift "couples" and "widows" revealed that the joint distributions of the Swift "couples" and "widows" T 90 , fluence and peak flux variables differ at a very high level of significance (Fig. 3).The LD1 variable describing the highest discrimination between Swift "couples" and "widows" has the highest correlation with peak flux, closely followed by fluence.The highest contribution (correlation) to the LD1 discriminant variable is given in absolute value by the peak flux (0.95), followed by the fluence (0.56), with the T 90 duration at the end (0.15).(The difference between the mean values in the "couples" and "widows" groups is given in Table 1.)
The mean values of these variables are higher in the "couples" than in the "widows" group.The means of T 90 duration are higher in the "widows" group.The smaller mean value of T 90 is caused by a slight surplus of short GRBs in the "couples" group.
An interesting result of the LDA is an apparent deficit of intermediate duration GRBs in the T 90 distribution of the "couples" and, on the contrary, somewhat fewer short duration GRBs among the "widows" (Fig. 3).
These results are consistent with that obtained by Burns et al. (2016) finding that BAT detects weaker short duration GRBs than GBM.
In the case of BAT, the largest difference between "couples" and "widows" is in peak flux, but there is also a significant difference in fluence values.
Apparently, the distributions of the Fermi GBM "couples" and "widows" variables (Table 2, Fig. 4) differ much less than those of the Swift BAT.This phenomenon may be partly explained by the fact that Swift sees a much smaller part of the sky than Fermi.So, at any given event, there could also be GRBs in the Fermi "widows" category that would belong to the "couples" group had they fallen into Swift's field of view.
In the case of GBM, the most significant difference appears in the distributions of fluences and durations.The highest contribution (correlation) to the LD1 discriminant variable is given in absolute value by the fluence (0.86), followed by the T 90 (0.85) and, at the end, the peak flux (0.62).
The duration of the "couples" bursts appears to be significantly longer.As the longer duration bursts are softer, a higher percentage of incoming photons fall within the range of energy detected by BAT.The "couples" bursts' fluence is also larger than the "widows" due to the correlation with duration.
Remarks to canonical correlation between BAT and GBM "couples"
Using the canonical variables obtained in the analysis we computed their correlations (canonical loadings) with the original ones.The canonical correlations resulted in three canonical variable pairs representing significant relationships between the BAT and GBM data.The results of the canonical correlation are summarized in Figs. 5 and 6 for the BAT and Figs. 7 and 8 for the GBM variables.
The strongest (U,V) pair, (U1,V1), is dominated by the fluences in both the Swift and Fermi data.Since T 90 and peak flux are correlated with fluence, they also have strong correlations with the (U1,V1) pair.
Both of them also strongly correlate with the pair (U2, V2).Since the canonical variables are perpendicular to each other, this does not result from a correlation with fluence, but from a direct relationship between the BAT and GBM duration and peak flux.The third canonical variable pair, (U3, V3), shows a weak but significant relationship between the BAT and GBM durations.As Figs 5, 6, 7, 8 and 9 demonstrate, both the BAT and GBM durations have some, but decreasing, level of correlation with all the canonical variables.
The average peak energy of GBM bursts is around 200 keV, which is outside the sensitivity range of BAT (Pe'er 2015).Therefore, a significant fraction of the photons detected and used in GBM durations is not detected by BAT, possibly causing a nonlinear relationship between the BAT and GBM durations.Canonical correlation is a linear theory and therefore requires a system of several orthogonal vectors to account for nonlinear relationships.
CLASSIFICATION OF SWIFT AND FERMI GRBS
According to Fig. 10, the durations of bursts detected jointly by Fermi and Swift are systematically longer in the Fermi measurements for the short bursts, while the opposite is true for the long ones.If the durations obtained from the measurements of the two satellites were the same, the distribution in Fig. 2 could be fitted with a line with a slope of 1.However, the slope of the line that best fits the points is 0.746 ± 0.025, which differs from 1 at more than the 5σ significance level.We also mentioned in the introduction that the burst triggering procedure and the spectral range of detection are different for the two satellites.Since the two satellites see the same burst, the actual physical duration of the phenomenon must be the same.However, changes in the physical parameters of the outburst as a function of time register differently due to the different technical designs of BAT and GBM (see Lien et al. 2016; von Kienlin et al. 2020b, for the determination of burst durations for BAT).Short bursts are generally harder, so they trigger GBM earlier and stay longer above its detection level.For the long ones, since they are softer, it is just the opposite, in particular at the last stage of their spectral evolution.This is reflected in the deviation of the points from the line of equality.We fitted the T 90 distributions of the jointly detected GRBs with linear combinations of lognormal components, selecting the number of components giving the highest BIC value.The result is given in Fig. 11 for Fermi (red colour) and Swift (cyan colour), respectively.We found that the number of best-fit distribution components was different for the Fermi and Swift measurements, although in both cases the GRBs were the same.It is worth mentioning that Salmon et al. (2022) made a two-dimensional clustering of Swift/BAT and Fermi/GBM gamma-ray bursts and also found two groups for GBM and three for BAT.
As we mentioned above, in Fig. 10 Swift is stronger at the edges of the T 90 range, while Fermi is stronger in the middle.Since both distributions are given by the same GRBs, we have to conclude that the T 90 distribution obtained from the observations cannot be used directly to infer the number of physical engine types operating in the background.As we pointed out, the effect can be explained by considering the different energy sensitivity ranges used by the Fermi and Swift satellites to calculate the physical parameters.As we mentioned, the Fermi parameters are calculated from the photons in the energy range of 10 − 1000 keV and those of Swift in the 15 − 150 keV range.Since bursts are initially harder and then gradually become softer during the eruption (see, e.g., Rácz & Hortobagyi 2018), Fermi may notice them earlier than Swift.Although the 15 − 150 keV range is detected by both satellites, Swift is more sensitive there; therefore, bursts can be followed for a longer time period.
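As a sketch of the component-counting step, the R fragment below fits Gaussian mixtures to log10(T 90), which is equivalent to lognormal mixtures in T 90, and lets BIC choose the number of components. The mclust package is an assumption for the fitting tool, since the text does not name one, and the sample is synthetic.

```r
library(mclust)                             # Gaussian mixtures with BIC
set.seed(6)

# A lognormal mixture in T90 is a Gaussian mixture in log10(T90).
t90 <- c(rlnorm(150, meanlog = log(0.6), sdlog = 0.9),   # short bursts
         rlnorm(600, meanlog = log(30),  sdlog = 0.9))   # long bursts
x   <- log10(t90)

fit <- Mclust(x, G = 1:4, modelNames = "V") # 1..4 components, free variances
summary(fit)                                # the G with the highest BIC wins
plot(fit, what = "BIC")                     # analogue of Fig. 11
```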
CONCLUSIONS
We examined how the technical properties of the Swift and Fermi satellites affect the observable properties of the GRBs they detect.In our study, we examined the data obtained from the Swift BAT and Fermi GBM instruments.These data were the T 90 , fluence and peak flux for both the BAT and the GBM GRBs.
In order to identify GRBs detected jointly by Swift and Fermi we looked for coincidences in the GRB angular position-trigger time parameter space of both satellites.For this purpose we used the knn() procedure available in the FNN library of the R statistical package.
Based on these identifications we separated the "couples" and "widows" GRBs, detected simultaneously by both satellites or by only one of them.In the case of the "couples" the values of T 90 are satisfactorily the same for medium durations, while the data of the Fermi GBM are systematically higher in the case of the short bursts and the data of the Swift BAT in the case of the long ones.For fluence and peak flux, the Fermi satellite measured a systematically larger value for the same GRB.
Using linear discriminant analysis (LDA) we compared the physical properties of the "couples" and "widows" GRBs in BAT and GBM.For this purpose we utilised the lda() procedure available in the MASS library of the R statistical package.LDA yielded a direction in the parameter space of the observed variables along which the difference between the "couples" and "widows" groups is greatest.We found that peak flux has the highest discriminant power in the case of Swift, and fluence in the case of Fermi.
Using canonical correlation we studied the strength of the relationship between the GRB parameters measured by Swift and Fermi, respectively.This relationship is represented by three orthogonal canonical variable pairs.The strongest of these receives its largest contribution from fluence for both Swift and Fermi.
We tested the hypothesis that the number of lognormal distributions needed to fit the T 90 distribution can be used to infer the physical mechanisms responsible for the eruptions.For this purpose, we compared the T 90 distributions of the GRBs jointly detected by the two satellites in the Swift and Fermi data, separately.Since the GRBs used for this analysis are the same at both satellites, one expects the same number of lognormal components to be necessary to fit the T 90 distributions.
In contrast, we obtained that the number of lognormal components required is three for Swift, while it is only two for Fermi.Since the GRBs used for the analysis were the same in both cases, we concluded that it is not possible to infer the number of physical mechanisms responsible for GRBs from the T 90 distribution alone.
Figure 1. Upper panel shows the frequency distribution of Euclidean nearest neighbour distances between Fermi and Swift GRBs, in angular position-trigger time parameter space.The red dashed line marks the boundary between real and random coincidences.Lower panel shows the distribution of nearest neighbour GRBs in the angular position (measured in degrees)-trigger time difference (measured in days) plane.Light red points (status coinc) indicate real coincidences.The small lower bump at the left of the image represents GRBs not having durations estimated independently in BAT and GBM data.
Figure 2. Comparison of T 90 [s] duration (top), fluence [erg cm⁻²] (middle), and peak flux [ph cm⁻² s⁻¹] (bottom) of Fermi and Swift.The X coordinate corresponds to the Swift and the Y to the Fermi data.The energy range is 10 − 1000 keV for Fermi GBM and 15 − 150 keV for Swift BAT.At medium T 90 the values obtained from the measurements of the two satellites are almost the same, while at the shorter and longer T 90 durations the values of Fermi and Swift are systematically higher, respectively.For fluence and peak flux the values obtained from Fermi measurements are systematically higher.The dashed red line indicates identical values obtained by the two satellites.
Figure 3. Separation of Swift BAT "couples" and "widows" GRBs along the best discriminating direction (LD1) obtained by the LDA (upper panel), and the degree of differences of each measured variable.Apparently, the highest difference in measured variables is given by the peak flux.See text for the definition of the LD1 dimensionless variable.
Figure 4. Separation of Fermi GBM "couples" and "widows" GRBs along the best discriminating direction (LD1) obtained by the LDA (upper panel) and the degree of contribution of each measured variable.The units of variables are as before.Apparently, the difference between "couples" and "widows" is much less pronounced than in BAT.The greatest contribution to the "couples"-"widows" difference is given by the fluence.
Figure 5. Matrix plot of BAT data and the canonical variables obtained from BAT (U1, U2, U3).Lower panels show 2D densities, the upper ones the correlations between the variables.The significance level of the correlation is indicated with stars at the right side of the numbers.Seemingly, the first, strongest canonical variable (U1) has the tightest correlation with fluence.
Figure 7. Matrix plot of GBM data and the canonical variables obtained from GBM (V1, V2, V3).Lower panels show 2D densities, the upper ones the correlations between the variables.The significance level of the correlation is indicated with stars at the right side of the numbers.Seemingly, the first, strongest canonical variable (V1) has the tightest correlation with fluence.
Figure 11. BIC values of the fitted multi-component lognormal models.The maximum value of BIC for Fermi (light red) is at two components, while for Swift (cyan) it is at three components, although both satellites measured the same GRBs.
Table 1. Differences between the "couples" and "widows" groups in the BAT LDA.The error probability for rejecting the null hypothesis, i.e. that the groups differ only by chance, is less than 2 × 10⁻¹⁶.
Table 2. Differences between the "couples" and "widows" groups in the GBM LDA.The probability that the groups differ only by chance is less than 3.23 × 10⁻⁵.It is still significant, but much less pronounced than for Swift BAT.
Strong coupling constant of negative parity nucleon with $\pi$ meson in light cone QCD sum rules
We estimate the strong coupling constant between negative parity nucleons and the $\pi$ meson within the light cone QCD sum rules. A method for eliminating the unwanted contributions coming from the nucleon--nucleon and nucleon--negative parity nucleon transitions is presented. It is observed that the value of the strong coupling constant of the negative parity nucleon $N^\ast N^\ast \pi$ transition is considerably different from the one predicted by the 3--point QCD sum rules, but is quite close to the coupling constant of the positive parity $N N \pi$ transition.
Introduction
The strong coupling constants of hadrons with mesons are the key quantities for understanding the dynamics of the existing hadron-hadron, hadron-meson, and photoproduction experiments. Among many couplings, only the nucleon-pion coupling constant has been measured accurately in experiments. With increasing experimental information there appears the necessity for an accurate determination of the strong coupling constants of hadrons with pseudoscalar mesons. These coupling constants have so far been estimated within various approaches (relevant references can be found in [1]). The strong coupling constants of the octet baryons with pseudoscalar mesons are calculated in the framework of the light cone QCD sum rules in [1].
In the present note we calculate the $N^* N^* \pi$ coupling constant in light cone QCD sum rules. Compared to all other sum rules approaches, which take only one positive parity baryon into consideration, the main novelty of the present calculation is that the contributions to the sum rules coming from the two positive parity N(938) and N(1440) states are taken into account. This fact makes the analysis of the sum rules more complicated in the determination of the $N^* N^* \pi$ coupling constant. In the present work a new method is explored for eliminating the unwanted contributions coming from the $N(938) \to N(938)$, $N(1440) \to N(1440)$, $N^* \to N(938)$ and $N^* \to N(1440)$ transitions. In the following discussion we shall customarily denote N(1440) as $N'$. It should be noted here that the $N^* N^* \pi$ coupling constant is determined in [2] in the framework of the 3-point QCD sum rules.
The paper is organized as follows. In section 2 we derive the sum rules for the N * N * π coupling constant. In section 3 we present the numerical analysis of the sum rules for this parameter, and compare our result with the prediction of 3-point QCD sum rules.
2 Sum rules for the $N^* N^* \pi$ coupling

In this section the sum rules for the $N^* N^* \pi$ coupling constant within the light cone QCD sum rules method are derived. In determining this coupling constant we consider the correlation function
$$\Pi(p,q) = i \int d^4x\, e^{ipx}\, \langle \pi(q) | \mathcal{T}\{\eta(x)\bar{\eta}(0)\} | 0 \rangle\,,$$
where
$$\eta = 2\epsilon^{abc}\left[\left(u^{aT}(x) C d^b(x)\right)\gamma_5 u^c(x) + \beta \left(u^{aT}(x) C \gamma_5 d^b(x)\right) u^c(x)\right]$$
is the interpolating current of the nucleon, $a$, $b$, $c$ are the color indices, $C$ is the charge conjugation operator, and $\beta$ is an arbitrary parameter. The sum rules for the $N^* N^* \pi$ coupling can be obtained by following the QCD sum rules procedure. On the one hand, the correlation function is calculated in terms of hadrons. On the other hand, it can be calculated in the deep Euclidean domain $p^2 \ll 0$, $(p+q)^2 \ll 0$ by using the operator product expansion (OPE) over twist. By matching these two representations, the sum rules for the $N^* N^* \pi$ coupling are obtained.
The hadronic part of the correlation function is obtained by inserting a complete set of baryons and isolating the ground state contributions, where $i$ and $j$ run over $N$, $N(1440)$ and $N^*(1535)$, and the dots denote higher state contributions. The matrix elements entering this expansion are defined as $\langle 0|\eta|N_i(p)\rangle = \lambda_{N_i} u_{N_i}(p)$ for the positive parity states and $\langle 0|\eta|N^*(p)\rangle = \lambda_{N^*}\gamma_5 u_{N^*}(p)$ for the negative parity state; using these definitions, the hadronic part of the correlation function follows. The correlation function can be calculated from the QCD side by using Wick's theorem. In the calculation of the correlation function from the theoretical side, the expressions of the light quark operators in the presence of an external field are needed, which are calculated in [3]. It should be noted here that the quark propagator gets contributions from the three-particle $\bar{q}Gq$ and the four-particle $\bar{q}G^2 q$ and $\bar{q}q\bar{q}q$ nonlocal operators. In further calculations we take into account only the three-particle $\bar{q}Gq$ operator and neglect contributions coming from four-particle operators. It is demonstrated in [3] that neglecting these contributions can be justified on the basis of an expansion over conformal spin. Under this approximation the light quark propagator in the background field is given in terms of $\Lambda$, the parameter separating the perturbative and nonperturbative domains, whose value is estimated to be $\Lambda = (0.5 - 1.0)$ GeV in [4]. Using the expression of the light quark propagator, the correlation function can be calculated from the QCD side straightforwardly in the deep Euclidean region $p^2 \to -\infty$, $(p+q)^2 \to -\infty$ by using the operator product expansion over twist. In this calculation the matrix elements of the nonlocal operators $\bar{q}(x)\Gamma q(0)$ and $\bar{q}(x)\Gamma G_{\mu\nu}(ux) q(0)$ between the vacuum and pion states appear, where $\Gamma$ corresponds to matrices from the full set of Dirac matrices. The matrix elements up to twist-4 are parametrized in terms of the pion distribution amplitudes [5-9], with the integration measure $\mathcal{D}\alpha = d\alpha_q\, d\alpha_{\bar{q}}\, d\alpha_g\, \delta(1-\alpha_q-\alpha_{\bar{q}}-\alpha_g)$. In these expressions $\varphi_\pi(u)$ is the leading twist-two DA; $\phi_P(u)$, $\phi_\sigma(u)$ and $\mathcal{T}(\alpha_i)$ are the twist-three DAs; and the remaining ones are twist-four DAs, whose explicit expressions are given in the following section. It follows from Eq. (5) that we have four independent Lorentz structures, $\slashed{p}\slashed{q}\gamma_5$, $\slashed{q}\gamma_5$, $\slashed{p}\gamma_5$ and $\gamma_5$, for the problem under consideration. In principle, the relevant sum rules can be obtained by performing a double Borel transformation over the variables $-p^2$ and $-(p+q)^2$ on the theoretical and hadronic parts, and matching the coefficients of the corresponding Lorentz structures. At this point we face the following problem. Among the nine coefficients given in Eq. (7), only the coefficient $E$ describes the $N^* N^* \pi$ coupling constant. However, as has already been noted, we have only four independent Lorentz structures, and we need five additional equations to determine the $N^* N^* \pi$ coupling constant uniquely. Four of these additional equations can be obtained by taking derivatives of the four equations with respect to the inverse Borel mass squared. The fifth equation is obtained by taking the second derivative of the coefficient of the structure $\slashed{p}\slashed{q}\gamma_5$ with respect to the inverse Borel mass squared.
From the numerical solution of these nine equations we extract the coupling, where $\Pi_1^B$, $\Pi_2^B$, $\Pi_3^B$ and $\Pi_4^B$ are the coefficients of the structures $\slashed{p}\slashed{q}\gamma_5$, $\slashed{q}\gamma_5$, $\slashed{p}\gamma_5$ and $\gamma_5$ after the Borel transformations with respect to the variables $-(p+q)^2$ and $-p^2$ are performed; $\Pi_i^{B\prime}$ stands for the first derivative of $\Pi_i^B$ with respect to $1/M_1^2$, i.e. $d\Pi_i^B/d(1/M_1^2)$, and $\Pi_1^{B\prime\prime}$ is the second derivative of $\Pi_1^B$ with respect to $1/M_1^2$. Here we set $m_\pi^2 = 0$, and the dots correspond to contributions of the continuum and higher states. These contributions can be calculated by using hadron-quark duality, i.e. above some threshold in the $(s_1, s_2)$ plane the hadronic spectral density is equal to the quark spectral density. Note that after taking the derivatives of the invariant functions we set $M_1^2 = M_2^2 = 2M^2$. The expressions of the functions $\Pi_1^B$, $\Pi_2^B$, $\Pi_3^B$ and $\Pi_4^B$ are quite lengthy, and for this reason we do not present them here. Once the Fourier and Borel transformations are carried out, continuum subtraction can be performed using the standard formula. For the higher twist terms that are proportional to negative powers of $M^2$ the subtraction procedure is not performed, since their contributions are negligibly small (for more details, see [5]).
Our final remark in this section is about the residues of the negative parity baryons, which are determined from the two-point correlation function $\Pi(p) = i \int d^4x\, e^{ipx}\, \langle 0|\mathcal{T}\{\eta(x)\bar{\eta}(0)\}|0\rangle$, where $\eta$ is given in Eq. (2). This correlation function is saturated with the positive and negative parity baryons on the hadronic side, and calculated in terms of the invariant functions $\Pi_i$ on the theoretical side. Performing the Borel transformation over $p^2$ and equating the coefficients of the structures, we obtain two equations; the expressions of the corresponding invariant functions $\Pi_1^B$ and $\Pi_2^B$ are given in [10]. As can easily be seen, there are six unknowns, namely $m_N$, $\lambda_N$, $m_{N'}$, $\lambda_{N'}$, $m_{N^*}$ and $\lambda_{N^*}$, and therefore we need six equations to be able to solve for these unknown parameters. Two of these equations are given in Eq. (13), and the remaining four can be obtained from Eq. (13) by taking the first and second derivatives with respect to $(-1/M^2)$. Solving these six equations we can determine $\lambda_{N^*}$. Our numerical analysis shows that $|\lambda_{N^*}|^2$ is positive in the regions $-1.0 \le \cos\theta \le -0.8$ (where $\beta = \tan\theta$) and $0.8 \le \cos\theta \le 1.0$, and is unphysical for all other values of $\cos\theta$; therefore, in further numerical analysis we will use these domains in the determination of the $N^* N^* \pi$ coupling constant.
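For reference, the hadronic saturation referred to above takes, in the standard treatment of negative parity baryons (cf. [10]), the form sketched below. This is a plausible reconstruction under the usual conventions $\langle 0|\eta|N\rangle = \lambda_N u_N(p)$ and $\langle 0|\eta|N^*\rangle = \lambda_{N^*}\gamma_5 u_{N^*}(p)$, not the authors' verbatim expression:

```latex
\Pi(p) \;=\; \frac{\lambda_N^2\,(\slashed{p}+m_N)}{m_N^2-p^2}
      \;+\; \frac{\lambda_{N'}^2\,(\slashed{p}+m_{N'})}{m_{N'}^2-p^2}
      \;+\; \frac{\lambda_{N^*}^2\,(\slashed{p}-m_{N^*})}{m_{N^*}^2-p^2}
      \;+\; \cdots
```

The relative sign between the $\slashed{p}$ and mass terms is what distinguishes the negative parity contribution, and it is exploited when the coefficients of the $\slashed{p}$ and unit structures are matched.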
Numerical analysis
In this section the numerical analysis of the sum rules for the strong coupling constant $g_{N^* N^* \pi}$ obtained in the previous section is performed. In this analysis the values of the input parameters, as well as the expressions of the pion distribution amplitudes (DAs), are needed, the latter being the main ingredients of the light cone QCD sum rules [5-9]. The leading twist DA has the form $\varphi_P(u) = 6u\bar{u}\left[1 + a_1^P C_1^{3/2}(2u-1) + a_2^P C_2^{3/2}(2u-1)\right]$, where $C_n^k(x)$ are the Gegenbauer polynomials. The values of the parameters $a_1^P$, $a_2^P$, $\eta_3$, $\eta_4$, $w_3$ and $w_4$ entering Eq. (15) are listed in Table 1 for the pseudoscalar $\pi$, $K$ and $\eta$ mesons. Table 1: Parameters of the wave function calculated at the renormalization scale $\mu = 1$ GeV. The sum rules for the $N^* N^* \pi$ coupling constant contain three auxiliary parameters, namely the Borel mass $M^2$, the continuum threshold $s_0$, and the arbitrary number $\beta$. Obviously, the result for the $N^* N^* \pi$ coupling constant should be independent of these parameters, which leads to the necessity of finding regions of these parameters where the strong coupling constant does not depend on them. This issue can be handled by the following procedure. The first step is to find a region of $M^2$, at several predetermined fixed values of $s_0$ and $\beta$, in which the $N^* N^* \pi$ coupling constant is independent of its variation. The lower bound of $M^2$ is determined from the condition that the higher twist contributions are less than the leading twist contributions. The upper bound is obtained by requiring that the higher states and continuum contributions constitute less than, say, 40% of the perturbative contribution. These conditions are both satisfied if the Borel mass parameter varies in the region $1.5~\mathrm{GeV}^2 \le M^2 \le 2.5~\mathrm{GeV}^2$. Note that this working region of $M^2$ is also obtained from the analysis of the magnetic moments of negative parity baryons [10].
In Figs. (1) and (2) we present the dependence of the strong coupling constant $g_{N^* N^* \pi}$ on the Borel parameter $M^2$ at the fixed values of the auxiliary parameter $\beta = -0.5, -0.3, 0.0, 0.3, 0.5$ and at two fixed values of the continuum threshold, $s_0 = 4.0~\mathrm{GeV}^2$ and $s_0 = 4.5~\mathrm{GeV}^2$, respectively. It follows from these figures that $g_{N^* N^* \pi}$ shows rather stable behavior under the variation of $M^2$ in its working region.
The continuum threshold is the other arbitrary parameter of the sum rules. This parameter is related to the energy of the first excited state. Analysis of various sum rules shows that $\sqrt{s_0} = m_{\rm ground} + \Delta$, where $m_{\rm ground}$ is the ground state mass and $\Delta$ is the energy difference between the ground and first excited states, which varies in the domain $0.3~\mathrm{GeV} \le \Delta \le 0.8~\mathrm{GeV}$. In the present analysis we use the average value $\sqrt{s_0} = (m_{\rm ground} + 0.5)~\mathrm{GeV}$.
We also studied the dependence of the $N^* N^* \pi$ coupling constant on $s_0$, at five different values of the auxiliary parameter $\beta = -0.5, -0.3, 0.0, 0.3, 0.5$, and at two fixed values of the Borel mass parameter, $M^2 = 2.0~\mathrm{GeV}^2$ and $M^2 = 2.5~\mathrm{GeV}^2$. We observe that $g_{N^* N^* \pi}$ is practically insensitive to the variations in $s_0$; the total result changes by about 5-6%. The final stage of the sum rules analysis is to find a region of $\beta$ where $g_{N^* N^* \pi}$ is independent of the variation in $\beta$. The arbitrary parameter varies in the domain $-\infty \le \beta \le +\infty$. This infinitely large region can be mapped into a more restricted domain by introducing the definition $\beta = \tan\theta$ and running $\theta$ in the region $0 \le \theta \le \pi$.
In Figs. (3) and (4) we present the dependence of the coupling constant $g_{N^* N^* \pi}$ on $\cos\theta$, at two fixed values of the continuum threshold, $s_0 = 4.0~\mathrm{GeV}^2$ and $s_0 = 4.5~\mathrm{GeV}^2$, and at the fixed values of the Borel mass parameter $M^2 = (1.5, 2.0, 2.5)~\mathrm{GeV}^2$, respectively. We find that the coupling constant $g_{N^* N^* \pi}$ depends only weakly on the variation of $\cos\theta$ in the region $-1.0 \le \cos\theta \le -0.85$. We also performed a similar analysis at two more fixed values of the continuum threshold, $s_0 = 4.2~\mathrm{GeV}^2$ and $s_0 = 4.8~\mathrm{GeV}^2$, which shows that the result for $g_{N^* N^* \pi}$ changes by at most 7-8%.
Taking into account the uncertainties coming from the input parameters entering the pion DAs, as well as from the quark condensates, the residues of $N^*$, and the parameters $M^2$ and $s_0$, we finally get the following result: $g_{N^* N^* \pi} = 10 \pm 2$.
Note that our prediction for the $N^* N^* \pi$ coupling is about 50% larger than that obtained in 3-point QCD sum rules [2]. This can be explained by the fact that in the limit $q \to 0$ the result predicted by the 3-point QCD sum rules is not reliable (for more details, see [16]).
Finally, we compare our result for the $N^* N^* \pi$ strong coupling constant with the predictions for the $NN\pi$ coupling constant of the positive parity baryons. The $g_{NN\pi}$ coupling constant has been calculated in various works; the results obtained include $9.76 \pm 2.04$ [13], $13.3 \pm 1.2$ [14], $14 \pm 4$ [1] and $13.5 \pm 0.5$ [15], in addition to the determinations of [11,12].
When we compare our result for the strong coupling constant of the negative parity baryons with the pion to the similar coupling constants for the positive parity baryons, we observe that our prediction is quite close to the results existing in the literature for the positive parity nucleon-pion coupling constant. The small difference in the results can be attributed to the different values of the input parameters, the value of the residue, and the continuum threshold $s_0$.
In summary, we have calculated the strong coupling constant of negative parity baryons with the pion in the framework of the light cone QCD sum rules. The unwanted contributions coming from positive-to-positive and positive-to-negative parity transitions are eliminated by constructing combinations of the sum rules corresponding to different Lorentz structures. In the case of nucleons the situation becomes more challenging due to the second positive parity baryon $N'(1440)$ in addition to the ground state $N(938)$. Our prediction for the $N^* N^* \pi$ coupling constant is in good agreement with the results for the positive parity baryons existing in the literature, but is considerably different from the value predicted by the 3-point QCD sum rules method.
Transduced PEP-1-FK506BP ameliorates corneal injury in Botulinum toxin A-induced dry eye mouse model
FK506 binding protein 12 (FK506BP) belongs to a family of immunophilins and is involved in multiple biological processes. However, the function of FK506BP in corneal disease remains unclear. In this study, we examined its protective effects on dry eye disease in a Botulinum toxin A (BTX-A) induced mouse model, using a cell-permeable PEP-1-FK506BP protein. PEP-1-FK506BP efficiently transduced into human corneal epithelial cells in a time- and dose-dependent manner, and remained stable in the cells for 48 h. In addition, we demonstrated by immunohistochemistry that topically applied PEP-1-FK506BP transduced into mouse cornea and conjunctiva. Furthermore, topical application of PEP-1-FK506BP to the BTX-A-induced mouse model markedly inhibited the expression levels of pro-inflammatory cytokines such as interleukin-1β (IL-1β), tumor necrosis factor-α (TNF-α) and macrophage inhibitory factor (MIF) in corneal and conjunctival epithelium. These results suggest PEP-1-FK506BP as a potential therapeutic agent for dry eye diseases. [BMB Reports 2013; 46(2): 124-129]
INTRODUCTION
Dry eye disease is well known as one of the most prevalent ocular surface diseases among the elderly, and it leads to potential damage to the ocular surface (1). In the United States, approximately 5 million people are affected by the condition, being estimated to have moderate to severe dry eye (2). Although the pathogenesis of dry eye disease is not fully understood, various risk factors, including aging, hormonal change, environmental factors, and inflammation, contribute. Treatments involving various drugs have been applied in cases of dry eye disease (3-7); however, there is no ideal or satisfactory therapeutic treatment at present. Of the various risk factors, inflammation is considered to play an important role in dry eye diseases (8,9). Several studies have shown that those with dry eye disease exhibit increased expression levels of pro-inflammatory cytokines, such as interleukin-1β (IL-1β), tumor necrosis factor-α (TNF-α) and macrophage inhibitory factor (MIF), in corneal and conjunctival epithelium. Additionally, it is well known that cyclooxygenase-2 (COX-2) and matrix metalloproteinase-9 (MMP-9) expression levels are increased in conjunction with dry eye syndrome (9-13). Other studies have shown that the increased pro-inflammatory cytokines were reduced in dry eye syndrome when treated with various potential drugs, such as cyclosporine, corticosteroids and doxycycline (14-17). FK506 binding proteins (FK506BPs) belong to a family of immunophilins that were named for their ability to bind immunosuppressive drugs. FK506 binding protein 12 (FK506BP) is a small peptide with a single FK506BP domain that is involved in multiple biological processes (18). In a previous inflammation animal model, we showed that transduced PEP-1-FK506BP protein inhibits the inflammatory response of cytokines and enzymes by blocking NF-κB and MAPK kinase in Raw 264.7 cells. In addition, transduced PEP-1-FK506BP protein inhibits the inflammatory response in HaCaT cells. Furthermore, topical application of PEP-1-FK506BP to atopic dermatitis in NC/Nga mice markedly inhibited the disease by reducing the expression levels of cytokines and chemokines (19,20). Protein transduction domains (PTDs) can deliver various exogenous molecules into living cells and tissues. Although the exact mechanism of transduction is unclear, many studies have demonstrated that protein transduction allows the delivery of therapeutic proteins both in vitro and in vivo (21-26).
In the present study, we demonstrate that a PEP-1-FK506BP protein can be directly transduced into human corneal epithelial cells (HCE-2) as well as into mouse corneal and conjunctival tissue. Also, topical application of PEP-1-FK506BP to Botulinum toxin A (BTX-A)-induced dry eye mice significantly inhibits dry eye disease and cytokine expression levels. Therefore, we suggest that topical application of the PEP-1-FK506BP protein might be a suitable therapeutic treatment for dry eye diseases.
Transduction of PEP-1-FK506BP into HCE-2 cells and cornea tissue
The construction of PEP-1-FK506BP, as well as its expression and purification, have been previously described. Purified PEP-1-FK506BP was efficiently transduced into Raw 264.7 cells (19). PEP-1-FK506BP was also transduced into animal skin tissue (19,20). However, protein transduction efficiency by a protein transduction domain (PTD) depends on various factors, such as the target protein, the cell type, and the nature of the PTD (27,28).
To assess the transduction ability of PEP-1-FK506BP into HCE-2 cells, the cells were incubated with various concentrations of PEP-1-FK506BP (0.5-5 μM) for 1 h, or were treated with PEP-1-FK506BP (5 μM) for various durations (5-60 min). As shown in Fig. 1A, PEP-1-FK506BP transduced into the cells in a dose- and time-dependent manner. However, control FK506BP did not transduce into the cells. In addition, the intracellular stability of PEP-1-FK506BP was evaluated after treatment with 5 μM PEP-1-FK506BP. The transduced PEP-1-FK506BP was maintained in the cells for nearly 48 h. We also attempted to confirm the intracellular transduction of PEP-1-FK506BP into HCE-2 cells using fluorescence microscopy. Cells were treated with PEP-1-FK506BP, and the intracellular localization of PEP-1-FK506BP was visualized using Alexa and DAPI fluorescence staining (Fig. 1B). In cells treated with PEP-1-FK506BP, fluorescence was clearly detected in the cytoplasm. However, the fluorescent signals of FK506BP-treated cells were similar to those of control cells.
To examine whether PEP-1-FK506BP transduced into the cornea and conjunctiva of mice, we performed immunohistochemistry on cornea sections of PEP-1-FK506BP treated mice. As shown in Fig. 2, transduced PEP-1-FK506BP levels were markedly increased throughout the cornea and conjunctiva of PEP-1-FK506BP treated mice. However, control FK506BP did not transduce into the cornea and conjunctiva (data not shown). These results indicate that PEP-1-FK506BP can be efficiently transduced into HCE-2 cells, as well as into mouse cornea and conjunctiva.
Inhibitory effect of PEP-1-FK506BP against corneal injury
Corneal injury is a common ophthalmologic condition, and mice with corneal injury have been reported to show markedly increased fluorescein staining scores and inflammatory cytokine expression (15)(16)(17)(29). Furthermore, we previously showed that transduced PEP-1-FK506BP has anti-inflammatory effects in macrophage cells and in animal inflammation models (19,20). However, little is known about the potential application of PEP-1-FK506BP in corneal injury. Thus, we investigated the effect of PEP-1-FK506BP on corneal injury in a BTX-A-induced animal model using fluorescein staining and immunohistochemistry.
As shown in Fig. 3, BTX-A injected mice groups showed markedly increased amounts of corneal fluorescein staining throughout the cornea. The extent of corneal fluorescein staining in FK506BP treated groups was similar to that in the BTX-A treated control. By comparison, PEP-1-FK506BP treated mice showed a significantly decreased amount of corneal fluorescein staining compared with BTX-A or FK506BP treated mice. In addition, there were few differences in the corneal fluorescein staining between PEP-1-FK506BP treated mice and the control group.
We also determined whether PEP-1-FK506BP affected the levels of the pro-inflammatory cytokines TNF-α, IL-1β and MIF. Hematoxylin and eosin (H&E) staining showed that the PEP-1-FK506BP-treated group had significantly preserved corneal epithelial cell layers, as well as a preserved thickness of the corneal stroma, compared with the BTX-A- or FK506BP-treated groups (Fig. 4A). Furthermore, PEP-1-FK506BP significantly reduced the levels of TNF-α, IL-1β and MIF in the cornea (Fig. 4B) and conjunctiva (Fig. 4C). However, FK506BP failed to suppress the elevated expression of cytokines in the BTX-A-induced dry eye mouse model. These results indicate that PEP-1-FK506BP transduced into the cornea has anti-inflammatory effects and potent therapeutic efficacy against dry eye disease. Although further studies are needed to understand the exact mechanism, the present study revealed that PEP-1-FK506BP has anti-inflammatory effects, as it inhibited the expression of cytokines in BTX-A-induced dry eye mice. The major feature of BTX-A-induced dry eye disease is well known to be elevated levels of inflammatory cytokines, such as TNF-α and IL-1β, in the corneal and conjunctival epithelia. Also, mitogen-activated protein kinase (MAPK) signaling pathways play an important role in the inflammatory response (8,9,30). Previously we showed that transduced PEP-1-FK506BP and other PTD-fusion proteins inhibit the production of COX-2 and inflammatory cytokines, as well as the activation of NF-κB and MAPKs, in LPS-stimulated macrophage cells and in TPA-treated inflammation animal models (19,20,24,31). Furthermore, Kubo et al. (2008) demonstrated that transduced TAT-peroxiredoxin 6 (PRDX6) protects against eye lens epithelial cell death and delays lens opacity, suggesting that transduced TAT-PRDX6 can prevent or delay the progression of cataractogenesis and could provide an effective approach towards delaying cataracts (32). In summary, it has been demonstrated that PEP-1-FK506BP can be efficiently transduced into corneal cells and tissues. Furthermore, topical application of PEP-1-FK506BP in a dry eye mouse model markedly inhibited corneal injury and the expression of cytokines. Therefore, PEP-1-FK506BP may be relevant for clinical use against dry eye diseases, including eye inflammation and cataracts.
Materials
A Ni²⁺-nitrilotriacetic acid Sepharose Superflow column was purchased from Qiagen (Valencia, CA, USA). Human corneal epithelial cells (HCE-2) were obtained from the American Type Culture Collection (ATCC, Manassas, VA, USA). Primary antibodies against interleukin-1β (IL-1β), tumor necrosis factor-α (TNF-α) and macrophage inhibitory factor (MIF) were obtained from Santa Cruz Biotechnology (Santa Cruz, CA, USA). All other chemicals and reagents, unless otherwise stated, were obtained from Sigma-Aldrich (St. Louis, MO, USA) and were of the highest analytical grade available.
Expression and purification of PEP-1-FK506BP proteins
Expression and purification of the PEP-1-FK506BP protein were carried out as previously described (19,20). To produce PEP-1-FK506BP, the plasmid was transformed into E. coli BL21 cells. The transformed bacterial cells were grown in 100 ml of LB media at 37 °C to an OD600 of 0.5-1.0 and were then induced with 0.5 mM IPTG at 37 °C for 4 h. Harvested cells were lysed by sonication, and the recombinant PEP-1-FK506BP was purified using a Ni²⁺-nitrilotriacetic acid Sepharose affinity column (Qiagen) and PD-10 column chromatography (Amersham, Braunschweig, Germany). To remove endotoxin, purified PEP-1-FK506BP was treated using Detoxi-Gel™ endotoxin removing gel (Pierce, Rockford, IL, USA). Endotoxin levels for PEP-1-FK506BP were below the detection limit (< 0.1 EU/ml) as tested using a Limulus amoebocyte lysate assay (BioWhittaker, Walkersville, MD, USA). The purified protein concentration was estimated by the Bradford procedure, using bovine serum albumin as a standard (33).
To detect the transduction of PEP-1-FK506BP proteins into HCE-2 cells, the cells were grown to confluence in 6-well plates and were incubated with PEP-1-FK506BP proteins at various concentrations (0.5-5 μM) and for various times (5-60 min). The cells were harvested, and cell extracts were prepared for Western blot analysis.
Fluorescence microscopy
HCE-2 cells were seeded on glass coverslips and were then incubated with PEP-1-FK506BP proteins (5 μM) at 37 °C for 1 h. The cells were washed with PBS twice and were then fixed with 4% paraformaldehyde at room temperature for 10 min. The anti-histidine primary antibody (Santa Cruz Biotechnology, Santa Cruz, CA, USA) was diluted 1:2,000 and incubated for 3 h at room temperature. Alexa Fluor 488-conjugated secondary antibody (Invitrogen, Carlsbad, CA, USA) was then diluted 1:15,000 and incubated for 45 min at room temperature in the dark. Nuclei were stained for 30 min with 1 μg/ml 4',6-diamidino-2-phenylindole (DAPI; Roche Applied Science, Basel, Switzerland). The fluorescence was analyzed using an Eclipse 80i fluorescence microscope (Nikon, Tokyo, Japan).
Western blot analysis
Equal amounts of proteins from each cell lysate were resolved by 15% sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE). The resolved proteins were electrotransferred to a nitrocellulose membrane, which was then blocked with 5% non-fat dry milk in PBS. The membrane was probed with a rabbit anti-histidine polyclonal antibody (1:1,000; Santa Cruz Biotechnology, Santa Cruz, CA, USA), followed by detection with horseradish peroxidase-conjugated goat anti-rabbit immunoglobulins (dilution 1:10,000; Sigma-Aldrich). The bound antibody complexes were then visualized with enhanced chemiluminescence reagents, according to the manufacturer's instructions (Amersham, Franklin Lakes, NJ, USA).
Transduction of PEP-1-FK506BP into mouse corneal and conjunctival epithelium
Male C57BL/6 mice (6-8 weeks; 20-25 g) were obtained from the Experimental Animal Center at Hallym University. The animals were housed at a constant temperature (23 °C) and relative humidity (60%) with a fixed 12 h light/dark cycle and were provided free access to food and water. All experimental procedures involving animals and their care were in accordance with the Guide for the Care and Use of Laboratory Animals of the National Veterinary Research and Quarantine Service of Korea and were approved by the Hallym Medical Center Institutional Animal Care and Use Committee.
To determine whether PEP-1-FK506BP was transduced into the corneal and conjunctival epithelium, we administered the proteins topically. The eyes of each mouse were treated with PEP-1-FK506BP (20 μg of protein in 10 μl of saline) or control FK506BP, applied once. After 30 minutes of treatment, corneas and conjunctivas were isolated from the mouse eyes and photographed, after which the level of transduced protein was determined by immunohistochemistry using an anti-His antibody.
Botulinum toxin A-induced dry eye model
The mice (male C57BL/6; 6-8 weeks; 20-25 g) were divided into four groups, each containing seven mice. Group 1 was used as a control without any injection into lacrimal glands. Group 2 was injected with BTX-A (30 μl, 20 mU) into the lacrimal glands. Group 3 was treated with BTX-A + control FK506BP (10 μg). Group 4 was treated with BTX-A + PEP-1-FK506BP (10 μg). Control FK506BP and PEP-1-FK506BP were topically applied on the cornea four times at intervals of 30 min per day. The treatments were repeated every two days for 10 days.
Measurement of corneal injury and immunohistochemistry
Fluorescein staining (1 μl of 2% sodium fluorescein, Sigma-Aldrich, St. Louis, MO, USA) was performed as previously reported (17,34). The area of punctate staining was determined using a grading system: grade 0, no punctate staining; grade 1, less than one eighth stained; grade 2, less than one fourth stained; grade 3, less than one half stained; and grade 4, greater than one half stained (35). For histological analysis, the cornea and conjunctiva from each group were collected by dissection, and biopsy samples were fixed, embedded in liquid OCT compound (Sakura FineTek, Torrance, CA, USA), and sectioned at a thickness of 4 µm. Sections were stained with hematoxylin and eosin (H&E), and immunofluorescent staining for the pro-inflammatory cytokines (TNF-α, IL-1β and MIF) was applied with the indicated specific antibodies.
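The grading rubric maps stained-area fractions to integer grades; the following minimal Python sketch (thresholds taken directly from the text above; the function name is ours) makes the cutoffs explicit.

```python
def fluorescein_grade(stained_fraction):
    """Corneal punctate-staining grade from the stained area fraction (0-1)."""
    if stained_fraction == 0:
        return 0          # grade 0: no punctate staining
    if stained_fraction < 1 / 8:
        return 1          # grade 1: less than one eighth stained
    if stained_fraction < 1 / 4:
        return 2          # grade 2: less than one fourth stained
    if stained_fraction < 1 / 2:
        return 3          # grade 3: less than one half stained
    return 4              # grade 4: greater than one half stained

assert [fluorescein_grade(f) for f in (0, 0.1, 0.2, 0.4, 0.6)] == [0, 1, 2, 3, 4]
```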
Mendelian randomization analysis does not reveal a causal influence of mental diseases on osteoporosis
Introduction Osteoporosis (OP) is primarily diagnosed through bone mineral density (BMD) measurements, and it often leads to fracture. Observational studies suggest that several mental diseases (MDs) may be linked to OP, but the causal direction of these associations remains unclear. This study aims to explore the potential causal association between five MDs (Schizophrenia, Depression, Alzheimer's disease, Parkinson's disease, and Epilepsy) and the risk of OP. Methods First, single-nucleotide polymorphisms (SNPs) were filtered from summary-level genome-wide association studies using quality control measures. Subsequently, we employed two-sample Mendelian randomization (MR) analysis to indirectly analyze the causal effect of MDs on the risk of OP through bone mineral density (in total body, femoral neck, lumbar spine, forearm, and heel) and fractures (in leg, arm, heel, spine, and osteoporotic fractures). Lastly, the causal effect of the MDs on the risk of OP was evaluated directly through OP. MR analysis was performed using several methods, including inverse variance weighting (IVW)-random effects, IVW-fixed effects, maximum likelihood, weighted median, MR-Egger regression, and penalized weighted median. Results The results did not show any evidence of a causal relationship between MDs and the risk of OP (almost all P values > 0.05). These results proved robust in sensitivity analyses. Discussion In conclusion, this study did not find evidence supporting the claim that MDs have a definitive impact on the risk of OP, which contradicts many existing observational reports. Further studies are needed to determine the potential mechanisms of the associations observed in observational studies.
Introduction
Osteoporosis (OP) is the most common systemic bone disease, characterized by a decrease in bone mineral density (BMD) and fragility fractures caused by deterioration of the bone microstructure (1), and it can easily lead to disability or even death in elderly patients (2). The standard diagnostic method for OP involves measuring BMD through dual-energy X-ray absorptiometry at the same skeletal sites from childhood to old age. The femoral neck, lumbar spine, and forearm are the most commonly used skeletal sites for diagnosing OP (3,4). Recently, the heel site has also been used to estimate OP (5). Moreover, total body BMD (TB-BMD) measurement is also an appropriate method for an unbiased assessment of BMD. Fractures are another feature of OP (6)(7)(8), with the leg, arm, heel, and spine being the most representative sites. According to the latest report of the International Osteoporosis Foundation, one in three women and one in five men over the age of 50 will experience OP worldwide (6)(7)(8)(9). This disease not only impacts the patient's quality of life but also poses a significant burden on public health and the national economy.
Mental diseases (MDs) are becoming increasingly prevalent in modern populations and can be classified into primary and secondary psychosis. Primary psychosis includes schizophrenia (SCH), depression (MDD), mood disorders, split personality, and other related conditions. Secondary psychosis, on the other hand, is caused by somatic organ diseases, neurological diseases, or substance abuse, and includes conditions such as Alzheimer's disease (AD), stroke, Parkinson's disease (PD), and epilepsy (EP) (10). As chronic diseases, MDs have been linked to abnormal bone metabolism, with patients suffering from some of the most common psychiatric disorders, such as SCH (9), MDD (11), AD (12), PD (13), or EP (14), being more likely to have lower BMD and higher fracture risk, including OP, compared to the general population. However, most of these reports are observational in nature and may be subject to confounding factors, making it difficult to establish a definitive etiological link between MDs and OP.
Mendelian randomization (MR) analysis, which mimics the design of randomized controlled trials, is an epidemiological research method that uses genetic variants (typically single-nucleotide polymorphisms, SNPs) to assess the causal association between modifiable exposures (or risk factors) and an outcome (15,16). MR analysis has advantages over clinical trials in terms of financial resources, material resources, and time. It is extensively applied in various studies (17,18), particularly in research related to COVID-19 (19)(20)(21).
The aim of this study is to explore the potential causal relationship between MDs and the risk of OP by leveraging genetic variation through the use of two-sample MR analysis (22). Our investigation seeks to contribute novel insights and empirical evidence to the field of research on the association between MDs and OP.
Outcome genome-wide association studies summary statistics
In this study, we estimated the causal effect of MDs on BMD using five genome-wide association study (GWAS) summary datasets (total body BMD, TB-BMD; femoral neck BMD, FN-BMD; lumbar spine BMD, LS-BMD; forearm BMD, FA-BMD; and heel BMD, eBMD), as well as the causal effect of MDs on fracture using an additional five GWAS summary datasets (leg fracture, LF; arm fracture, AF; heel fracture, HF; spine fracture, SF; and osteoporotic fractures, OPF). These data indirectly evaluated the causal impact of MDs on the risk of OP via MR analysis. Finally, we directly assessed the causal effect of MDs on the risk of OP by utilizing a GWAS summary dataset of OP.
GWAS summary statistics for BMD were downloaded from the GEnetic Factors for OSteoporosis Consortium website (GEFOS, http://www.gefos.org/). GWAS summary statistics for fracture were downloaded from the Medical Research Council Integrative Epidemiology Unit website (MRC-IEU, http://www.bristol.ac.uk/integrative-epidemiology/). GWAS summary statistics for OPF and OP were downloaded from the FinnGen website (https://www.finngen.fi/en/access_results). In addition, GWAS summary statistics for MDs, BMD, fractures, and OP were downloaded from the GWAS Catalog website (https://www.ebi.ac.uk/gwas/downloads/summary-statistics). All study participants were of European descent. More detailed information can be found in Table 1.
To derive a reliable and valid inference regarding the correlation between MDs and OP, we opted for the most substantial GWAS datasets available.
Selection of genetic instrumental variants
We employed stringent criteria to select SNPs as genetic instrumental variables from the GWAS summary data of MDs, including SCH, MDD, AD, PD, and EP. Initially, SNPs with genome-wide significance (p < 5 × 10⁻⁸) for the MDs were selected. Subsequently, a clumping process (R² < 0.001, window size = 10,000 kb) was executed with the clump_data function to ensure that the retained SNPs were not in linkage disequilibrium (LD). Thirdly, if an SNP was not present in the outcome GWAS, it was also excluded. Fourthly, SNPs with nonconcordant alleles (e.g., A/G vs. A/C) and palindromic SNPs with an ambiguous strand (i.e., A/T or G/C) were excluded. Finally, using the PhenoScanner tool (http://www.phenoscanner.medschl.cam.ac.uk/) (30-32), we excluded any SNPs associated with confounders of the outcome, and we used the F-statistic to indicate the strength of the genetic instrumental variants.
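As a concrete illustration of the first and fourth filters, the sketch below applies the p-value threshold and the palindromic-SNP exclusion to a hypothetical table of exposure summary statistics; the column names and values are invented for illustration, and clumping and the PhenoScanner lookup require external resources, so they are omitted.

```python
# Sketch of the instrument-selection filters described above, applied to an
# invented data frame of exposure GWAS summary statistics.
import pandas as pd

snps = pd.DataFrame({
    "SNP": ["rs1", "rs2", "rs3", "rs4"],
    "effect_allele": ["A", "A", "G", "C"],
    "other_allele": ["G", "T", "C", "T"],
    "pval": [1e-9, 4e-10, 2e-8, 3e-12],
})

# 1) genome-wide significance threshold
snps = snps[snps["pval"] < 5e-8]

# 2) drop palindromic SNPs (A/T or G/C), whose strand is ambiguous
palindromes = {frozenset("AT"), frozenset("CG")}
alleles = snps.apply(
    lambda r: frozenset((r["effect_allele"], r["other_allele"])), axis=1)
snps = snps[~alleles.isin(palindromes)]
print(snps)   # keeps rs1 and rs4 in this toy example
```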
Two-sample MR analysis
The instrumental SNPs were utilized to carry out a two-sample MR analysis for the purpose of evaluating the causal effect of MDs on the risk of OP. The summary statistics (OR and standard error) of BMD and fracture enabled an indirect assessment of the causal association between MDs and the risk of OP, whereas the summary statistics of OP facilitated a direct evaluation. The MR analysis methods included inverse variance weighting (IVW)-random effects, IVW-fixed effects, maximum likelihood, weighted median (WM), MR-Egger regression, and penalized weighted median, which were applied to estimate the effects. Bonferroni correction (p-value = 0.05/11 outcomes) was used to adjust for multiple testing (p = 0.0045) in this MR. All of these analyses were conducted in R v4.2.0 using the R package "TwoSampleMR" (https://mrcieu.github.io/TwoSampleMR/reference/clump_data.html) (33), and p-values < 0.05 were considered statistically significant. The detailed steps are shown in the flowchart (Figure 1).
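For reference, the fixed-effects IVW estimator at the core of this pipeline can be written in a few lines. The sketch below uses invented per-SNP summary statistics purely for illustration; the actual analysis was run with the TwoSampleMR package.

```python
# Minimal fixed-effects IVW estimator on harmonized summary statistics.
# beta_exp: SNP -> exposure effects; beta_out: SNP -> outcome effects;
# se_out: standard errors of the outcome effects (all values invented).
import numpy as np
from scipy.stats import norm

beta_exp = np.array([0.041, 0.055, 0.032, 0.060, 0.048])
beta_out = np.array([0.004, -0.002, 0.006, 0.001, -0.003])
se_out = np.array([0.005, 0.006, 0.004, 0.007, 0.005])

w = beta_exp**2 / se_out**2                  # inverse-variance weights
beta_ivw = np.sum(w * beta_out / beta_exp) / np.sum(w)
se_ivw = 1.0 / np.sqrt(np.sum(w))            # fixed-effects standard error
z = beta_ivw / se_ivw
p = 2 * norm.sf(abs(z))

print(f"IVW estimate = {beta_ivw:.4f} (SE {se_ivw:.4f}), p = {p:.3f}")
# Bonferroni threshold used in the paper: 0.05 / 11 outcomes ~= 0.0045
```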
Robust analysis
IVW (random effects and fixed effects) and MR-Egger regression were used to assess the potential horizontal pleiotropic effects of the SNPs. Cochran's Q-test statistics were used to quantify heterogeneity. Furthermore, we performed a "leave-one-out" sensitivity analysis to identify potentially influential SNPs: we excluded each SNP in turn and checked whether it was responsible for the association. We also applied the MR Steiger filtering method to verify the direction of causality between MDs and OP.
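The leave-one-out procedure is simply the IVW estimator recomputed with each SNP removed in turn; a minimal sketch follows, reusing the same invented arrays as in the IVW example above.

```python
# Leave-one-out sensitivity sketch: recompute the IVW estimate with each
# SNP removed in turn (values invented for illustration).
import numpy as np

beta_exp = np.array([0.041, 0.055, 0.032, 0.060, 0.048])
beta_out = np.array([0.004, -0.002, 0.006, 0.001, -0.003])
se_out = np.array([0.005, 0.006, 0.004, 0.007, 0.005])

def ivw(bx, by, sy):
    w = bx**2 / sy**2
    return np.sum(w * by / bx) / np.sum(w)

full = ivw(beta_exp, beta_out, se_out)
for i in range(len(beta_exp)):
    keep = np.arange(len(beta_exp)) != i
    loo = ivw(beta_exp[keep], beta_out[keep], se_out[keep])
    print(f"without SNP {i}: IVW = {loo:+.4f} (full estimate: {full:+.4f})")
```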
Causal effect of MDD on the risk of OP
The MR analysis conducted to estimate the causal effect of MDD on the risk of OP is outlined in Supplementary Material 2. In the primary IVW analyses, MDD showed no MR association with the risk of OP (Figure 2). Additional methods also confirmed that MDD was not associated with the risk of OP using both indirect (BMD and fractures) and direct (OP) approaches (all p > 0.05) (Supplementary Material 2, 4).
Causal effect of PD on the risk of OP
In the primary IVW analyses, PD was found to have no association with the risk of OP. In terms of the indirect aspect, PD was also found to have no MR association with BMD (Figure 2). Additional methods also demonstrated that PD had no MR association with the risk of OP (all p > 0.05) (Supplementary Material 2, 6). Likewise, in the primary IVW analyses, EP showed no association with the risk of OP (Figure 2). Other methods also indicated that EP had no MR association with the risk of OP (all p > 0.05) (Supplementary Material 2, 7).
Robustness
Cochran's Q-test did not reveal any sign of heterogeneity during the sensitivity analyses (all p > 0.05) (Supplementary Material 8). The MR-Egger regression method examined the possibility of horizontal pleiotropy between SNPs and outcome, and the results indicated no evidence of such pleiotropy (all p > 0.05) (Supplementary Material 8). Additionally, the funnel plots suggested no observable horizontal pleiotropy for any of the outcomes (Supplementary Material 3-7). Furthermore, the leave-one-out sensitivity analysis plots demonstrated that no single SNP was likely to have influenced the causal association and that our conclusions were therefore robust (Supplementary Material 3-7). Taken together, all of the findings suggest that the null association between genetic predisposition to MDs and the risk of OP was not significantly impacted by any individual SNP (Supplementary Material 3-7).
Discussion
To our knowledge, this is the first study to evaluate the causal association between five kinds of MDs and the risk of OP using two-sample MR analysis. This study extensively mined the largest GWAS datasets and other relevant databases to investigate the causal association between the five most prevalent MDs (SCH, MDD, AD, PD, and EP) and the risk of OP, as well as its clinical manifestations (BMD and fracture). Our findings suggest that there is no clear causal relationship between mental disorders and the risk of OP.
Existing reports are limited to observational studies. These studies have consistently found that MDs increase the risk of OP (11, 13, 37-39). Specifically, Gomez and Stubbs discovered that individuals with SCH and MDD tend to have lower BMD at the hip and lumbar spine compared to healthy individuals (37,40). Liu et al. studied PD and observed that PD patients have lower BMD in the total body, arm, and femoral neck (13). Zhao et al. investigated the link between MDs and OP and found that individuals with AD have a greater propensity for hip fractures (38). It is important to note, however, that these were all observational studies.
The aforementioned findings are in contrast to our own results, which may be attributed to the limitations of observational studies. Firstly, the studies cited were all based on case-control and cross-sectional designs, leaving it uncertain whether MDs are prospectively linked to an increased risk of OP. Secondly, many patients with MDs take long-term medications such as corticosteroids, which could potentially introduce bias in the results. Thirdly, some of the original studies lacked access to raw data, which could affect the accuracy of the findings. Lastly, certain assessment criteria may lower the reliability of the results, and the BMD and fracture risk of multiple body parts should be taken into account in the investigation.
Due to the limitations of conventional observational studies (41), we used two-sample MR analysis (16, 42) to ascertain the causal effects between MDs and the risk of OP, proceeding from the indirect aspect (BMD and fracture) to the direct aspect (OP). As is well known, MDs are frequently accompanied by other illnesses and substance abuse, resulting in the traditional misconception that psychiatric disorders can result in OP. Our analysis, however, which returns to the genetic level, indicates that MDs do not lead to OP directly. Meanwhile, the randomness and fixedness of alleles preclude reverse-causation bias (43). The use of heterogeneity and sensitivity analyses with various Mendelian tools augments the stability of the results, and the extensive sample size and single-ancestry population diminish the bias of population stratification, supporting an accurate estimate of the causal effect between MDs and the risk of OP.
Undoubtedly, this study is not exempt from some constraints. First and foremost, the study's participants are exclusively of European descent, necessitating further data collection and analysis to determine whether the findings apply to other populations. Second, the absence of publicly available source data precludes us from determining the potential sample overlap bias. Finally, even though we took measures to eliminate any confounding factors, we cannot fully rule out the potential influence of horizontal pleiotropy on our findings.
Moreover, given the constraints of our current research, our findings should be viewed as preliminary and warrant further investigation. Additionally, given the intricate nature of confounding factors, it is still advisable to consider various intervention methods and prevention strategies (44) for patients with MDs. These methods could include modifications to their lifestyle, such as promoting physical activity and improving their diet and exercise regime, increasing their intake of calcium and vitamin D (45), and minimizing the risk of falls.
Conclusion
The results of this study did not provide conclusive evidence to support the notion that MDs (including SCH, MDD, AD, PD, and EP) have a direct causal effect on the risk of OP. This is in contrast to the findings of numerous observational studies. It is possible that the relationship between MDs and OP risk reported in these observational studies is confounded by other risk factors. With the availability of more sophisticated approaches and a larger sample size of OP patients, the estimates can become less biased and the results can be more accurate. It is important to acknowledge that further research is needed to fully understand the relationship between MDs and OP risk.
Data availability statement
The original contributions presented in the study are included in the article/Supplementary Material. Further inquiries can be directed to the corresponding authors. The R code to perform the MR analysis is detailed in the presentation 1 (Supplementary Material 9).
Ethics statement
This study used publicly available GWAS summary databases. Thus, no ethical committee approval was required.
Author contributions

XD, FT, DX, SW, and HZ conceived the idea for the study. FT obtained the genetic data and performed the data analyses. FT, DX, SW, and HZ interpreted the results of the data analyses. All authors contributed to the article and approved the submitted version.
Funding
This work was supported by the National Key R&D Program of China (No. 2021YFC2100201).
The Improvement of Picric Acid Synthesis Methodology
Explosives are of the greatest importance in human practical activities, not only in wartime but in peacetime as well. Nowadays, a huge number of different types of explosives are synthesized and fabricated for military and civilian applications. Nevertheless, this fact does not eliminate the need to synthesize new explosives in order to optimize their characteristics, such as prime cost, power, and safety during production, storage, transfer, etc. Picric acid is a fairly strong and energetic explosive; at the same time, besides its explosive properties, it has an antibacterial nature and is an excellent yellow dye, especially for dyeing animal and plant tissues. The synthesis of structural analogues of picric acid is the main purpose of this research. One reason for synthesizing picric acid and further preparing its structural analogues is the relatively safe nature of substances of this group, which makes them safe to handle in various manipulations. On the other hand, it is well known that the synthesis and production of explosives is classified as a high-risk and costly technology. Therefore, even a small increase in production productivity is interesting from the point of view of economic effect. By changing the reaction conditions (temperature, concentrations, and dosage of reagents), an improved method for the synthesis of picric acid was developed. As a result, a significant increase in the practical yield of picric acid, from 46% to 86%, was achieved. The synthesized picric acid was placed in a steel tube and tested for initiation of detonation in an explosion chamber. A description of the modified method, comparisons with the conventional technology, and the explosion testing results are given in this paper.
Introduction
A vast majority of explosives are produced by chemical synthesis, including picric acid, trotil (TNT), octogene (HMX), nitroglycerine, and other well-known explosives. At the end of the XX century, powerful explosives with the highest technical characteristics, octanitrocubane and hexanitrohexaazaisowurtzitane (CL-20), were synthesized in the USA [1][2]. Despite their powerful explosive nature, these substances have a serious drawback: their high prime cost. Picric acid is a well-known, quite powerful, and relatively cheap explosive. During the XX century, millions of shells charged with this explosive were used in the World Wars and other conflicts.
The choice of picric acid as the main "working substance" is conditioned by the fact that phenol and its chemical analogues (anisole, cresols, resorcinol, ...) are cheap and, at the same time, valuable raw materials for the synthesis of numerous products, including well-known explosives: picric acid, isomeric dinitrophenols, methyl picrate, cresolite, styphnic acid, and others. Besides, we have some experience in the synthesis of various alkyl phenols [3][4][5], which may be considered interesting substrates for the synthesis of structural analogues of picric acid, presumably retaining their explosive nature.
Because of its acidic nature, picric acid reacts with the metal casing of a shell. This is its serious drawback, and in this respect trotil has an undeniable advantage. In spite of this, in wartime the importance of picric acid may increase. In this case, of course, before charging a shell with picric acid it is necessary to place the explosive in a non-metal casing.
To avoid the negative consequences of the acidic nature of picric acid, a chemical method may also be used: changing the molecular structure or synthesizing its structural analogues.
At the same time, world experience considers such modification the most successful methodology for the synthesis of new explosives. On the other hand, chemical avoidance of the mentioned acidity may become a stimulus for the synthesis of new explosives on the basis of picric acid.
Such avoidance concerns exclusively the hydroxyl, as an acidic group, and may be realized by the corresponding reactions: alkylation, acylation, etc. In an earlier period, we used similar transformations of the acetylene phenols synthesized by us [3][4][5], as already mentioned above.
Theoretical background of phenol nitration
Usually, picric acid is synthesized by nitration of phenol, according to the scheme summarized below. The role of the sulphuric acid is the easy formation of 2,4-disulphophenol, the first intermediate product of the nitration (I). At the following step of the reaction, the two sulpho groups of I are easily exchanged for nitro groups from nitric acid, forming 2,4-dinitrophenol, the second intermediate product (II). Finally, substitution of a nitro group from a third molecule of nitric acid at the 6th carbon atom of the phenol ring leads to picric acid. In the reaction mixture an ionic fragment called the nitronium ion, NO₂⁺, is formed. This particle attacks the aromatic ring of phenol and replaces a hydrogen atom, which is removed as an H⁺ ion. This is an electrophilic substitution in the aromatic ring, which proceeds through the intermediate formation of π- and σ-complexes and ends with the formation of the reaction products.
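The original reaction scheme is not reproduced in this text; its overall stoichiometry, which is standard (one mole of phenol consuming three moles of nitric acid, with sulphuric acid acting through the disulphonated intermediate), is

$$\mathrm{C_6H_5OH} + 3\,\mathrm{HNO_3} \xrightarrow{\;\mathrm{H_2SO_4}\;} \mathrm{C_6H_2(NO_2)_3OH} + 3\,\mathrm{H_2O}.$$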
The mechanism of this process, illustrated in the original for benzene nitration, follows the same π- and σ-complex pathway (the mechanism scheme is not reproduced here).
Methodology of picric acid synthesis
3.1. The well-known methodology [6] of picric acid synthesis

In a porcelain basin, 12.5 g of phenol is mixed with 34 ml of concentrated sulphuric acid and the mixture is heated on a water bath until a limpid solution of phenolsulphonic acid forms. Formation of the phenolsulphonic acid usually ends in 30-40 min.
The phenolsulphonic acid solution is poured, with stirring, into a 1 L flask containing 50 ml of cool water. The flask is cooled with water. With stirring, 25 ml of concentrated (d = 1.4) nitric acid is added dropwise. The reaction mixture becomes red, the temperature increases, and red fumes appear (do not inhale! carry out the work in a fume hood!).
Afterwards, the flask is placed on the water bath, the remaining 10 ml of nitric acid is added, and the mixture is heated for 1.5-2 h.
After cooling, yellow crystals of picric acid precipitate. Water is then poured into the flask, the contents are mixed, and the crystals are filtered and washed. Picric acid is crystallized from 50% ethyl alcohol: m.p. 122 °C, yield 14 g.
3.2. The improved methodology of picric acid synthesis

20 g of phenol is placed into a flat-bottomed flask and carefully melted on a water bath, and at 50 °C 33 ml of concentrated (d = 1.84) sulphuric acid is added dropwise. The temperature is raised to 96-97 °C and heating of the mixture is continued for 1 hour.
After cooling, the temperature is brought back to 50 °C and the addition of 35 ml of nitric acid (d = 1.42) is started. Self-heating (to 76-77 °C) and darkening of the mixture occur, accompanied by nitrous gases. After the addition, the reaction is continued at the same temperature until the emission of nitrous gases ceases. Then the temperature is lowered to 50 °C, 24 ml of nitric acid is added dropwise, and heating of the reaction mixture is begun. At 70 °C the gas emission is intensive; at 85 °C the intensity is at its maximum. Then the mixture is boiled for 1.5 h. After cooling, picric acid crystals appear and are removed by filtration. The crystals are washed with cool, diluted nitric acid and dried. 27 g of yellowish crystals are obtained.
The dark red filtrate is returned to the flask, 12 ml of nitric acid is added dropwise, and the mixture is heated at 96-97 °C for 1.0 h. After cooling, 300 ml of ice-cold water is poured into the flask, the temperature is lowered to 0 °C, and the crystals are filtered and washed. 15 g of picric acid is obtained. The total amount is thus equal to 42 g, a practical yield of 86%. According to the well-known methodology, from 12.5 g of phenol, 14 g of picric acid is obtained (46%).
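The reported yields can be checked against the 1:1 phenol-to-picric-acid stoichiometry given above; a short Python sketch (molar masses from standard atomic weights) reproduces both figures.

```python
# Percent-yield check for both methodologies (1 mol phenol -> 1 mol picric acid).
M_PHENOL = 94.11       # g/mol, C6H5OH
M_PICRIC = 229.10      # g/mol, C6H2(NO2)3OH

def percent_yield(phenol_g, picric_g):
    theoretical = phenol_g / M_PHENOL * M_PICRIC
    return 100.0 * picric_g / theoretical

print(f"well-known method: {percent_yield(12.5, 14.0):.0f}%")   # ~46%
print(f"improved method:   {percent_yield(20.0, 42.0):.0f}%")   # ~86%
```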
Comparison of the two methodologies

Let us try to explain such a low yield. The well-known methodology specifies that phenol and sulphuric acid are mixed together all at once, not dropwise, and heated for only 30-40 min.
In our method, phenol is melted and then, at 50 °C, sulphuric acid is added dropwise with stirring. After the addition, the reaction mixture is heated on a steam bath (96-97 °C) for at least 1 h. Thus, the formation of disulphophenol is almost complete.
In "well-known methodology": before addition of nitric acid, 50 ml of cool water is added to the flask, which dilutes disulphophenol. It may diminish the concentration of added nitric acid, which is not desirable.
Before the nitric acid addition, the temperature of the mixture is raised to 50 °C. During the addition of the first portion of nitric acid, the temperature usually self-increases to 76-77 °C and an active emission of nitrous gases starts, whose intensity is synchronized with the temperature increase.
At the end of the reaction, after filtration of the crystals, the consistency of the filtrate shows that it is not a mixture of only unreacted acids but probably contains unreacted dinitrophenol too. It is noteworthy that, at high temperatures, nitric acid evaporates, so it needs compensation. Consequently, the filtrate is poured back into the flask and one more portion of nitric acid is added, amounting to 15-20% of its reaction dosage. After heating, a renewed emission of nitrous gases confirms the continuation of the reaction.
Often, nearly one third of the full amount of the synthesized picric acid is formed in this additional step.
Blast testing of the synthesized picric acid
Blast testing of the picric acid was carried out in a chamber integrated into the tunnel system of the underground experimental explosive base of the Georgian G. Tsulukidze Mining Institute.
The views of the base's portal and the chamber are shown in Figure 1. To test the ability to induce detonation, a standard test scheme with the charge localized in the solid state was used. In particular, to obtain longitudinal charges, 15 g of picric acid was loaded into a low-carbon steel tube. One end of the tube is closed with a stopper of the same material, while the other side (in the area of the detonator) is not charged and is in a free state. The pipe diameter was selected conventionally, by analogy with tests of brisant explosives. A capsule detonator KD-8 with a fuse was used to initiate the detonation. The diagram (b) and the view (c) of the testing tube are shown in Figure 2. The explosion of the picric acid occurred in full, without residue. It caused complete fragmentation of the casing, which is typical for explosives with high brisance and working capacity. The results of the blasting are shown in Figure 3.
Conclusions
• The temperature conditions of the reaction were almost completely changed.
• Considering the evaporation of nitric acid at high temperatures, its reaction dosage was increased.
• All of this increased the practical yield of picric acid to 86%.
In the future, the improved methodology for increasing practical yields described in this article will be used in the synthesis of other analogues of picric acid: methyl picrate (from anisole), ethyl picrate (from phenetole), styphnic acid (from resorcinol), etc.
The interaction of ice and law in Arctic marine accessibility
Sea ice levies an impost on maritime navigability in the Arctic, but ice cover diminution due to anthropogenic climate change is generating expectations for improved accessibility in coming decades. Projections of sea ice cover retreating preferentially from the eastern Arctic suggest key provisions of international law of the sea will require revision. Specifically, protections against marine pollution in ice-covered seas enshrined in Article 234 of the United Nations Convention on the Law of the Sea have been used in recent decades to extend jurisdictional competence over the Northern Sea Route only loosely associated with environmental outcomes. Projections show that plausible open water routes through international waters may be accessible by midcentury under all but the most aggressive of emissions control scenarios. While inter- and intraannual variability places the economic viability of these routes in question for some time, the inevitability of a seasonally ice-free Arctic will be attended by a reduction of regulatory friction and a recalibration of associated legal frameworks.
Arctic | sea ice | law of the sea | climate change | marine navigation

Of the world's oceans, the critical trends and alternative futures wrought by climate change are most intensely captured in the Arctic. Historically, human activities on the Arctic Ocean were focused on fish, seals, whales, and bears. The new "Race to the North" is for exploitation of hydrocarbon and mineral wealth, strategic advantage, tourism opportunities, and cargo transport. Navigability is the critical condition that enables all of these activities, and a key component of Arctic navigability is sea ice cover. The temporal and geographic distribution of navigability is a critical determinant of the evolving applications of international maritime law.
With ice cover in retreat, Arctic routes for destination shipping present a plausible alternative to the Suez Canal. Whether by the Northern Sea Route, the Northwest Passage, or the Transpolar Route, Arctic routes are 30 to 50% shorter than those through the Suez or Panama Canals (1), with transit time reduced by 14 to 20 d assuming the same sailing speed (2). The slower sailing speeds typically adopted in the Arctic could reduce this advantage, but worldwide "slow steaming" is a candidate short-term strategy identified by the International Maritime Organization to achieve greenhouse gas emissions reductions (3). In this context, emissions reductions for viable Arctic routes are around 24% (4). Furthermore, Arctic routes are not subject to the kinds of single-vessel blockages recently exposed by the stranding of the Ever Given container ship in 2021. This 6-d incident was estimated by the shipping journal Lloyd's List to have cost around $400 million per hour. However, Arctic shipping is not as active as might be expected. The Arctic remains risky because of high spatial and temporal variability, limited satellite navigation coverage and ice forecasting capacity, challenges for emergency management, and inexperienced crews. The Arctic remains expensive due to the cost and limited size of vessels permitted under the Polar Code, as well as regulatory requirements that include ice-breaker escort on the Russian-controlled Northern Sea Route.
Russia accounts for more than 24,000 km of Arctic Ocean coastline, and under anthropogenic climate change sea ice has retreated most rapidly from its coasts (5). This has enabled the expansion of the Russian Arctic presence. Since 2000, satellites have detected new infrastructure covering hundreds of square kilometers associated with oil and gas, mining, fishing, and military activities (6). Russian law describes the Northern Sea Route as the "historically established national transport communication route." (7) Significantly, Russia employs straight baselines such that segments of the route lie within internal waters. The official Russian view appears to have evolved to characterize the entire Northern Sea Route as internal waters (8).
In contrast to the Antarctic and its single-treaty regime, the constitutive process of the Arctic comprises multiple transnational legal instruments and institutions (9). The prevailing legal regime is the customary and codified law of the sea which balances "the special exclusive demands of coastal states, and other special claimants, and the general inclusive demands of all states in the world arena" (10). The widely applicable codified instrument is the United Nations Convention on the Law of the Sea of 1982, supplemented by the international code of safety for ships operating in polar waters, or "Polar Code," the International Convention for the Safety of Life at Sea, and the International Convention for the Prevention of Pollution from Ships. The five Arctic littoral states have affirmed their commitment to the Convention on the Law of the Sea through the Ilulissat Declaration. This legal framework is now unstable owing to climate change. We project that key Convention on the Law of the Sea provisions pertaining to baselines, ice-covered waters, navigation, and straits will be acutely affected by climate change. Here we focus on navigation.
Results
Projections for transit routes are constructed using scenarios (11) that span the range from very high emissions to policies that constrain average warming below around 1.5°C. All realizations demonstrate substantial interannual variability, and the projected navigable season varies widely from one model to another: Some models project no navigability by 2065, while others suggest a reliable season now. The ensemble of realizations indicates that the start of the shipping season trends earlier at almost 3 d per decade in all of the emissions scenarios considered here. There is significant extension in the close of the shipping season of 4 d per decade for the medium- and high-emissions scenarios.
Using these data, we assess the likelihood of a viable open-water shipping season that avoids Russian territorial waters (Fig. 1; the level of confidence for this assessment corresponds to the Intergovernmental Panel on Climate Change convention, that is, very likely expressed as 90 to 100% probability, likely 66 to 100%, as likely as not 33 to 66%, unlikely 0 to 33%, and very unlikely 0 to 10%). These routes require transits through the Bering Strait but would not require icebreaker transport or, indeed, Polar Class ice-strengthened vessels. Consistent with the earlier retreat of sea ice from the Russian Arctic, the projections suggest that the likelihood of viable open-water shipping outside Russian regulatory reach will increase over time. The likelihood of these routes increases faster in the high-emissions scenarios, as expected. However, interannual variability, and thus uncertainty, remains high in all scenarios to midcentury. Further, it is unlikely that access in the lowest-emissions scenario will be realized with any reliability by midcentury. Fig. 2 shows the alternative routes that are generated for the highest-emissions scenario (SSP5-8.5). A viable route passing from Norwegian waters to the central Arctic just north of Svalbard is frequently available, and an even more westerly route through Danish waters emerges midcentury. This second route is significantly further west than the currently identified Transpolar Route. A significant open-water season for the Northwest Passage also becomes viable.
Discussion
The Convention on the Law of the Sea provision that will be most affected by climate change is Article 234, which allocates coastal states broad prescriptive and enforcement jurisdiction in ice-covered areas for "the prevention, reduction and control of marine pollution from vessels." Some Arctic littoral states, notably Canada and Russia, assert broad claims under Article 234. During the 1990s the Russian Federation, invoking Article 234, adopted regulations applicable to the entire Northern Sea Route. These now include mandatory insurance, navigation rules including authorization procedures, the requirement to carry a state pilot, and mandatory ice-breaker pilotage paid for in accordance with a published schedule of charges. They apply to all vessels, purportedly including warships and government vessels, which under the international law of the sea are accorded immunity and thus should fall beyond the scope of Article 234 (12). Indeed, Article 234 itself calls out the requirement for "due regard" for navigation.
The key question for climate change projections flows from the Article 234 provision that ice cover persist "for most of the year." Changing ice phenology suggests that fewer states will be able to rely on Article 234 over less marine space. At present interannual variability of ice remains high (13), raising questions pertaining to the scope of Article 234. What extent of ice coverage over what period is required for application of this provision?
Flexibility and realism will be required for recalibrating the international law of the sea in the face of the geographic and temporal distribution of ice retreat. Alternative routes generate increased shipping choice and reduced economic friction, but substantial financial risk remains in the face of low ice predictability (14). Nevertheless, sea traffic will be able to traverse the Russian Exclusive Economic Zone seaward of coastal islands and thus be subject to fewer navigational servitudes. Pursuant to the Convention on the Law of the Sea, the Exclusive Economic Zone navigational encumbrances are minimal. As a result, disagreements over the legal status of the Northern Sea Route as a strait used for international navigation will be moot, and our projections suggest the gradual termination of Article 234. This requires attention by governments, shipping owners, and lawyers based on available science. The consequences for Arctic shipping and global maritime trade will be profound.
Materials and Methods
Projected sea ice and associated route accessibility were calculated from four Tier 1 scenarios (SSP5-8.5, SSP3-7.0, SSP2-4.5, and SSP1-2.6) (11) using ensemble members from each of 14 models as part of the Coupled Model Intercomparison Project phase 6 (CMIP6) (15). The daily ice concentration and thickness from each realization is extracted for use in the marine accessibility model. Our approach (13) builds upon the Polar Operational Limit Assessment Risk Indexing System (POLARIS) (16). Within the POLARIS framework, the viability of passage is quantified by the Risk Index Outcome (RIO):

$$\mathrm{RIO} = \sum_i C_i \, \mathrm{RIV}_i,$$

where $C_i$ is the ice concentration and $\mathrm{RIV}_i$ is the risk index value (RIV) of a particular ice category and vessel class. For the results shown here, we assume open-water vessels to simulate alternative routes that do not require icebreaker escort. A positive RIO indicates acceptable risk and a corresponding travel speed to safely navigate the given ice regime. The resulting travel speeds are used to identify the optimal least-cost route using Dijkstra's algorithm. The optimal route and its travel time are only recorded when a transit can be realized from Rotterdam to a destination at the Bering Strait. Routes can be generated that conform to any of the Northern Sea Route, the Northwest Passage, and the Central Arctic Route. For each model realization a navigable season is flagged if at least 32 continuous days of viable routes are available.
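To make the gating concrete, the sketch below shows how a POLARIS-style navigability mask for a single grid cell might be computed. The RIV values and the speed rule are illustrative placeholders (the official POLARIS RIV tables in IMO guidance should be used in practice); the conversion of concentration fractions to tenths follows the RIO definition above.

```python
# Sketch of a POLARIS-style navigability check for an open-water vessel.
# Ice data: concentration (fraction 0-1) per ice category in one grid cell.
# The RIV values below are illustrative placeholders, not the official table.
RIV = {"open_water": 3, "new_ice": 1, "thin_first_year": -2, "thick_first_year": -4}

def rio(concentrations):
    """RIO = sum_i C_i * RIV_i, with C_i expressed in tenths of coverage."""
    return sum(10 * c * RIV[cat] for cat, c in concentrations.items())

def safe_speed_knots(rio_value, open_water_speed=12.0):
    """Positive RIO -> navigable; speed reduced as RIO falls (assumed rule)."""
    if rio_value < 0:
        return 0.0                      # cell not navigable for this vessel
    return open_water_speed * min(1.0, rio_value / 30.0)

cell = {"open_water": 0.7, "new_ice": 0.2, "thin_first_year": 0.1}
r = rio(cell)
print(f"RIO = {r:.0f}, safe speed = {safe_speed_knots(r):.1f} kn")
# The per-cell speeds define edge costs (transit time) for Dijkstra routing.
```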
Data Availability. CMIP6 climate scenarios are freely available for download from esgf-node.llnl.gov/search/cmip6/. The algorithms described here are available from ref. 13. The resulting route shapefiles are available from Zenodo (https://zenodo.org/record/6539994). All other study data are included in the article.
One-dimensional model of inertial pumping
A one-dimensional model of inertial pumping is introduced and solved. The pump is driven by a high-pressure vapor bubble generated by a microheater positioned asymmetrically in a microchannel. The bubble is approximated as a short-term impulse delivered to the two fluidic columns inside the channel. Fluid dynamics is described by a Newton-like equation with a variable mass, but without the mass derivative term. Because of smaller inertia, the short column refills the channel faster and accumulates a larger mechanical momentum. After bubble collapse the total fluid momentum is nonzero, resulting in a net flow. Two different versions of the model are analyzed in detail, analytically and numerically. In the symmetrical model, the pressure at the channel-reservoir connection plane is assumed constant, whereas in the asymmetrical model it is reduced by a Bernoulli term. For low and intermediate vapor bubble pressures, both models predict the existence of an optimal microheater location. The predicted net flow in the asymmetrical model is smaller by a factor of about 2. For unphysically large vapor pressures, the asymmetrical model predicts saturation of the effect, while in the symmetrical model net flow increases indefinitely. Pumping is reduced by nonzero viscosity, but to a different degree depending on the microheater location.
INTRODUCTION
In a recent paper [1] we reported numerical modeling and experimental demonstration of inertial pumping. In this effect [2][3][4], fluid is confined within a thin channel connecting two large reservoirs and pumped by a high-pressure vapor bubble that repeatedly expands and collapses. The bubbles can be created by a number of actuation methods including thermal resistors [1,4], electrical currents passing through the fluid [3,5], localized laser pulses [6][7][8], and acoustic actuation [9]. If the bubbles are generated away from the geometric center of the channel, the dynamics of the two fluidic columns are asymmetric, which results in a net flow from the short toward the long arm of the channel.
The pumping action has been attributed to unequal inertia of the two fluidic columns inside the channel [1,2,4,9]. While several numerical studies have been done in realistic three-dimensional geometries [1,8,10], the main effect can be captured within a simple one-dimensional model derived from momentum balance [1][2][3][4][8]. The model has been used to prove nonzero net flow [4] and to illustrate the main qualitative features of the pump through numerical analysis [1].
The model is nonlinear and in general not solvable analytically. However, for realistic values of pressure (several atmospheres), channel width (tens of microns), and fluid properties (density, viscosity, and surface tension of water), the dynamics is dominated by the balance between inertia and pressure forces [1]. If only those two factors are left in the dynamic equation, the latter simplifies so much that an analytical solution becomes possible. The goal of this paper is to present this solution, which provides insight into the mechanism of inertial pumping and becomes a useful tool for studying this effect.
II. THE MODEL
Consider the geometry of Fig. 1(a). A thin channel of cross-sectional area A is connected to two much wider reservoirs. The reservoirs and the channel are filled with an incompressible fluid of density ρ. The pressures in the reservoirs far from the channel are p_1b and p_2b, respectively. In the most common case, both bulk pressures are equal to the atmospheric pressure p₀, but more general cases may be considered. Inside the channel is a resistive microheater that locally boils the fluid and creates high-pressure vapor bubbles. Each bubble expands and collapses, resulting in a nonzero net flow through the channel [see Figs. 1(b)-1(g)]. The microheater must be positioned asymmetrically relative to the channel's ends to achieve pumping.
The simplest approach to flow dynamics is to completely neglect transverse motion of the fluid in the channel, three-dimensional features of the bubble shape, and curvature of the vapor-fluid interface. These assumptions are approximate; however, they do not change the qualitative picture of inertial pumping. Through full three-dimensional computational fluid dynamics (CFD) modeling we showed that pumping is most efficient when the bubble occupies the entire cross section of the channel, pushing the fluid primarily along the channel's axis [1]. Under these assumptions, the channel is approximated by a one-dimensional line and a vapor-fluid interface by a single point on this line. Thus a single bubble is fully described by the time-dependent positions of two interfaces, x₁ and x₂. The case of more than one bubble can be formulated in the same manner. The goal of the present paper is to analyze the single bubble case.
A. Momentum balance and forces
Consider the left column of fluid (see Fig. 1). Its length is x₁(t), its velocity ẋ₁, and its total momentum Q₁(t) = ρA x₁ẋ₁. In a time interval dt the momentum changes because of two factors: (i) the force F₁(t) acting on the fluid and (ii) momentum lost to or supplied by the left reservoir. During the expansion phase ẋ₁ < 0 and the column loses a negative momentum ρAẋ₁² dt to the reservoir. This element must be added to the momentum balance as a positive increment: dQ₁ = F₁ dt + ρAẋ₁² dt. During the collapse phase ẋ₁ > 0, the column acquires a positive momentum ρAẋ₁² dt from the reservoir, and the same equation applies. Substituting Q₁, the ρAẋ₁² terms cancel, and one derives the dynamic equation [1-4] ρA x₁ẍ₁ = F₁. This is a variable-mass Newton equation, but without the mass-derivative term ρAẋ₁²: the latter has been dissipated by the reservoir. A similar momentum balance analysis yields, for the right fluidic column, ρA(L − x₂)ẍ₂ = F₂, where L is the total channel length. In this case, the right reservoir absorbs or supplies the extra mechanical momentum.
The major forces at play during the expansion-collapse cycle are (i) pressure-difference forces, (ii) viscous forces, and (iii) surface tension forces. The surface tension forces are about two orders of magnitude smaller than the other two for typical microfluidic conditions [1] and can be safely neglected. The viscous force is typically smaller than the pressure force, but is not always negligible. The simplest way to account for the viscous force is to assume that it is proportional to the velocity and also to the length of the column, because it acts along the fluid-surface boundary, i.e., F_visc = −κ x₁ẋ₁ for the left column and similarly for the right one. The resulting one-dimensional model, Eqs. (4) and (5), adds this friction term to the pressure forces; here κ characterizes the relative strength of the viscous force. Under further assumptions, κ can be linked to other parameters of the system: for example, within Poiseuille flow it is proportional to the bulk viscosity η of the fluid. This relationship, Eq. (6), is derived in the Appendix. In this work κ will be treated as an independent phenomenological parameter.
B. Impulse bubbles and boundary conditions
The next step is to specify the pressures in Eqs. (4) and (5). There p_v is the vapor pressure inside the vapor bubble, while p₁ and p₂ are the fluid pressures inside the reservoirs near the channel's ends [see Fig. 1(a)]. All pressures are time dependent and require separate considerations. Let us begin with the vapor bubble.
A typical pumping event starts with a microheater boiling the surrounding fluid within a fraction of a microsecond. This creates a thin layer of superheated vapor with a pressure of about 8-10 atm [11]. This high-pressure phase lasts about 1 µs. Then the pressure quickly drops to subatmospheric levels because of bubble expansion and heat transfer losses. The residual bubble pressure (the saturation pressure of the vapor at the ambient temperature) is about p_vr = 0.3 atm. Since the entire pumping cycle lasts dozens of microseconds, the initial high-pressure phase can be approximated as an instantaneous impact on the fluid, as a result of which the fluid acquires mechanical momentum q₀ [12]. This leads to the impulse bubble model of Eq. (7), in which the high-pressure phase is replaced by an instantaneous impulse on top of the residual pressure p_vr. In this approximation, the bubble strength is characterized by the initial fluid momentum q₀. Since the impulse has negligible time duration, q₀ drops out from the model equations and enters the dynamics only via the initial conditions. This greatly simplifies the analysis. If a detailed physical model of the vapor bubble is known, q₀ can be estimated from Eq. (7) by integrating the vapor pressure over the bubble's lifetime. In this work, q₀ will be treated as an independent phenomenological parameter.

Consider now the pressures at the channel's ends, p₁ and p₂. In general, they are different from the respective bulk pressures p_1b and p_2b. The fluid dynamics in the reservoirs is complex. First, the formation of a jet during outward flow and the formation of a sink during inward flow result in an asymmetry between outflow and inflow. Second, the pumping event is inherently transient. The fluid expelled into the reservoirs forms three-dimensional vortices that persist long after the bubble collapse [1]. One consequence of the vortices is nonhomogeneous inward flow: the fluid fills up the channel along the walls while emptying the channel near its axis at the same time. All of this makes the task of finding a representative one-dimensional boundary condition nontrivial. However, a detailed analysis of the reservoir flow is beyond the scope of the present work. The main goal here is to elucidate the physics behind inertial pumping by using simplified one-dimensional dynamics as the analysis tool. From this perspective, replacing the complex reservoir dynamics with a deterministic boundary condition at the end of the channel is acceptable, as long as it is physically justified and consistent with net pumping.
The easiest approach is to simply neglect all the above-mentioned effects and set the end pressures equal to the bulk values: p₁ = p_1b and p₂ = p_2b. This choice will be referred to as the symmetrical model because it does not distinguish between outflow and inflow. The main advantage of the symmetrical model is simplicity; it is the easiest way to derive the main qualitative features of inertial pumping.
A different model was proposed by Yuan and Prosperetti [2], who tried to account for the asymmetry between expansion and collapse. During bubble expansion, the fluid is injected into the reservoir as a jet that separates from the rest of the fluid. The pressure near the entry point is close to the bulk pressure of the reservoir, p_i = p_ib. During collapse, however, part of the pressure difference is expended on accelerating the fluid inside the reservoir. This is a sink flow [13]. According to the Bernoulli equation, p_i = p_ib − ½ρẋ_i². As mentioned above, vortex formation near the channel ends disturbs the sink flow, and it is unclear to what extent the Bernoulli correction can be applied to the situation at hand. In the absence of a detailed analysis of the pressure distribution, the proposal of Yuan and Prosperetti will be adopted here as the second reasonable choice of boundary condition. Hereafter, it will be referred to as the asymmetrical model. Compared with the symmetrical model, it provides additional insight into the inertial pumping mechanism, but is more complex mathematically.
It is of benefit to keep both options available for analysis. To this end, we introduce the discrete index m = {0, 1} that will distinguish between the models: m = 0 corresponds to the symmetrical model and m = 1 to the asymmetrical model. With this notation the pressure boundary conditions can be written as p₁ = p_1b − (m/2)ρẋ₁² H(ẋ₁) and p₂ = p_2b − (m/2)ρẋ₂² H(−ẋ₂), so that the Bernoulli correction acts only during inflow at the corresponding end. Here H(z) is the Heaviside step function: H(z > 0) = 1 and H(z < 0) = 0.
C. Dimensionless pump equations
The number of model parameters in Eqs. (4)-(9) can be reduced by transforming the equations to a dimensionless form. In a similar procedure in Ref. [1], the channel diameter was taken as the unit of length and the duration of the high-pressure phase as the unit of time. In the present case, both these parameters are zero, so a different pair of units is required. The choice of unit length is obvious: the total channel length L is the only model parameter of that dimensionality. To choose a unit time one can proceed as follows. In the most typical case, the bulk reservoir pressure is equal to the atmospheric pressure p₀. The two fluid columns will be evolving under the negative pressure difference −(p₀ − p_vr). That sets a characteristic fluid velocity √((p₀ − p_vr)/ρ) and a characteristic time L√(ρ/(p₀ − p_vr)). The characteristic time thus defined is of the order of the total bubble lifetime, and the characteristic velocity is approximately the mean fluid velocity over the same period. Accordingly, one introduces the dimensionless time τ and interface positions ξ_i: ξ_i = x_i/L and τ = t/[L√(ρ/(p₀ − p_vr))]. Upon substitution of Eqs. (7)-(11), the pump equations (4) and (5) become the dimensionless equations of motion (12) and (13), where 0 < ξ₁(τ) ≤ ξ₂(τ) < 1 and the primes denote differentiation with respect to τ. The equations of motion contain one discrete model parameter m = {0, 1} and three positive continuous dimensionless parameters: the friction β and the pressures γ₁ and γ₂.
In the most common case of bulk pressures equal to the atmospheric pressure, γ₁ = γ₂ = 1, and only one continuous parameter β remains. Consider now the initial conditions. The expansion phase starts with a zero-size bubble located at x₀, hence ξ₁(0) = ξ₂(0) = ξ₀ ≡ x₀/L. The microheater location ξ₀ is a critical parameter of the inertial pumping effect. For a symmetrically placed microheater, ξ₀ = 1/2, the net flow must be zero. The net flow increases as ξ₀ deviates from 1/2, but in a nonmonotonic manner. This behavior will be analyzed in the following sections. Another important parameter is the bubble strength. Within the impulse-bubble approximation, at t = 0 the two columns of fluid acquire momentum q₀. Converting this to initial velocities, one obtains ξ₁′(0) = −α/ξ₀ and ξ₂′(0) = α/(1 − ξ₀), where α = (q₀/ρAL)√(ρ/(p₀ − p_vr)) is the dimensionless bubble strength. The initial conditions (17) assume that expansion begins from a state of rest. One may consider a more general case of fluid already moving as a whole at τ = 0 (it occurs, for example, with high-frequency repeated pumping or when pumping against an imposed pressure gradient); this case is not included in the present work.
To summarize, bubble kinematics is completely defined by the positions of the vapor-fluid interfaces, satisfying 0 < ξ₁ ≤ ξ₂ < 1. Bubble dynamics is governed by the equations of motion (12) and (13) and the initial conditions (16) and (17). There are five continuous dimensionless parameters, α, β, γ₁, γ₂, and ξ₀, and one discrete model index m. Typical parameter values are given in Table I; Eqs. (14) and (6) yield β ≈ 2.0, which is adopted as a typical value of the friction coefficient.
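Since the displayed equations (12) and (13) did not survive extraction, the sketch below integrates a reconstructed form of them, inferred from the stated initial velocities ±α/ξ₀, α/(1 − ξ₀) and from the √2 limit velocity quoted for m = 1. Treat it as an illustration of the model's structure under those assumptions, not as the authors' code:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Reconstructed dimensionless pump equations (assumed forms):
#   xi1*xi1''     =  g1 - (m/2)*H(+xi1')*xi1'**2 - beta*xi1*xi1'
#   (1-xi2)*xi2'' = -g2 + (m/2)*H(-xi2')*xi2'**2 - beta*(1-xi2)*xi2'
def pump_rhs(tau, y, beta, g1, g2, m):
    x1, v1, x2, v2 = y
    a1 = (g1 - 0.5 * m * (v1 > 0) * v1**2 - beta * x1 * v1) / x1
    a2 = (-g2 + 0.5 * m * (v2 < 0) * v2**2 - beta * (1 - x2) * v2) / (1 - x2)
    return [v1, a1, v2, a2]

def collision(tau, y, beta, g1, g2, m):   # bubble collapse: xi1 meets xi2
    return y[2] - y[0]
collision.terminal = True
collision.direction = -1

def pump_cycle(xi0, alpha, beta=0.0, g1=1.0, g2=1.0, m=0):
    v1, v2 = -alpha / xi0, alpha / (1 - xi0)
    eps = 1e-9                # nudge off xi1 == xi2 so the terminal event
    y0 = [xi0 + eps * v1, v1, xi0 + eps * v2, v2]   # fires only at collapse
    sol = solve_ivp(pump_rhs, (0.0, 50.0), y0, args=(beta, g1, g2, m),
                    events=collision, rtol=1e-10, atol=1e-12)
    xi_c = sol.y[0, -1]                                  # collapse coordinate
    eta_c = sol.y[1, -1] * xi_c + sol.y[3, -1] * (1 - xi_c)  # total momentum
    return xi_c - xi0, eta_c  # primary effect, postcollapse velocity

print(pump_cycle(0.3, alpha=0.5))        # symmetrical model, inviscid
print(pump_cycle(0.3, alpha=0.5, m=1))   # asymmetrical model
```

For ξ₀ < 1/2 both returned numbers come out positive, i.e., net transport toward the long arm; this is a structural illustration only, since the exact coefficients of the original Eqs. (12)-(14) were lost from the text.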
D. Postcollapse phase

The expansion-collapse cycle ends when the two interfaces collide at a point x_c; the displacement of this collapse point relative to the starting point x₀ constitutes the primary pumping effect. In addition, the momenta of the two flows are in general different. (This is despite the fact that motion started with equal momenta q₀. Reasons for the eventual inequality are analyzed in the following section.) By momentum conservation this implies that after collapse the fluid will continue to flow as a whole (i.e., with total length L and mass ρAL) until it is brought to a complete stop by the pressure difference and viscous forces. The velocity v_c at the beginning of this phase will be referred to as the postcollapse velocity, and the net flow during the postcollapse phase as the secondary pumping effect. Flow kinematics can still be described by the time evolution of the collapse point x(t). The acting forces no longer include the bubble vapor pressure but only the end-channel pressures p₁ and p₂. The equation of motion follows from the same momentum balance, with initial conditions x(t_c) = x_c and ẋ(t_c) = v_c. Substituting the pressure model (8) and (9) and converting to dimensionless units, one obtains Eq. (20), with ξ(τ_c) = ξ_c and ξ′(τ_c) = η_c. The parameters in Eq. (20) have the same meaning as during the expansion-collapse cycle.
III. INERTIAL PUMPING EFFECT
Quite generally, inertial pumping happens because the shorter fluidic column, with its smaller mass, turns around faster than the longer one under the same pressure difference. The shorter column returns to the starting point earlier and has extra time before the collision to keep moving in the long arm's direction. This results in the primary effect. For the same reason, the shorter arm has more time to accelerate during collapse. Consequently, it arrives at the collision with a larger momentum than the longer arm, leading to the secondary effect. Qualitative features of inertial pumping can be understood by analyzing return times and interface trajectories near the collision point. [Figure 2: the Dawson integral (25), its derivative, and the function F(z)/z.]
In this section we study the basic inviscid pumping event, with β = 0 and γ₁ = γ₂ = 1.
A. Expansion phase
It is sufficient to follow the dynamics of only one vapor-fluid interface; the math is slightly more transparent for ξ₁. The initial value problem reads [cf. Eq. (12) for m = 0, β = 0, γ₁ = 1] ξ₁ξ₁″ = 1, with ξ₁(0) = ξ₀ and ξ₁′(0) = −α/ξ₀. Integrating once, one obtains a first integral, ½(ξ₁′)² − ln ξ₁ = C: the interface undergoes potential motion in a profile −ln ξ, with C being analogous to a total energy. At a turning point ξ₁′ = 0, so −ln ξ = C, from where the coordinate of the turning point is ξ_t = ξ₀ exp(−α²/2ξ₀²). Integrating a second time from the start to the turning point gives the total expansion time. Introducing the Dawson integral F(z) = e^(−z²) ∫₀^z e^(t²) dt (Fig. 2), the expansion time can be written as τ_1e = √2 ξ₀ F(α/(√2 ξ₀)). The expansion time of the right interface is obtained from here by the substitution ξ₀ → 1 − ξ₀: τ_2e = √2 (1 − ξ₀) F(α/(√2 (1 − ξ₀))). A crucial observation now is that the function F(z)/z monotonically decreases with its argument, as shown in Fig. 2. Clearly, the expansion times are the same for the symmetrical microheater position ξ₀ = 1/2. However, for asymmetric positions (and the same bubble strength) the expansion times will be different: τ_1e < τ_2e for ξ₀ < 1/2 and τ_1e > τ_2e for ξ₀ > 1/2. Thus, due to its smaller mass, the shorter arm slows down faster and reaches the turnaround point earlier than the longer one. This forms the basis for inertial effects of both types. In the symmetrical model, collapse dynamics is the inverse of expansion dynamics: acceleration can be described as sliding along the same potential profile −ln ξ with a zero starting velocity. The return times (i.e., the times after which the interfaces return to the starting point ξ₀) are simply twice the expansion times, τ_1r,2r = 2τ_1e,2e. Both the primary and secondary pumping effects can be derived analytically for weak and strong asymmetries. These cases are considered first. After that a complete numerical solution of the pump equations is presented.
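As a quick numerical illustration of the closed form reconstructed above (and of the ordering τ_1e < τ_2e for ξ₀ < 1/2), the Dawson integral is available in SciPy:

```python
import numpy as np
from scipy.special import dawsn

# tau_e(xi0) = sqrt(2)*xi0*F(alpha/(sqrt(2)*xi0)) equals alpha*F(z)/z with
# z = alpha/(sqrt(2)*xi0); monotonic decrease of F(z)/z orders the times.
def expansion_time(xi0, alpha):
    z = alpha / (np.sqrt(2.0) * xi0)
    return np.sqrt(2.0) * xi0 * dawsn(z)

xi0, alpha = 0.3, 0.5
t1 = expansion_time(xi0, alpha)          # short (left) arm
t2 = expansion_time(1.0 - xi0, alpha)    # long (right) arm
print(t1, t2, t1 < t2)                   # expect True for xi0 < 1/2
# The large-argument asymptote F(z) ~ 1/(2z) reproduces tau_e ~ xi0**2/alpha,
# the strong-asymmetry scaling used later in this section.
```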
Weak asymmetry
Under weak asymmetry, the microheater is close to the channel's midpoint: ξ₀ = ½ + ψ with |ψ| ≪ 1. We are interested in the pumping effects that are of first order in ψ. Substituting ξ₀ into the return times and expanding to first order, one obtains expressions whose last line follows from the identity F′(z) = 1 − 2zF(z).
The limit values for weak and strong bubbles follow from the small- and large-argument asymptotes of F(z) (τ_e → α for weak bubbles and τ_e → ξ₀²/α for strong ones), from where asymptotes of the return times can be deduced. Of primary interest are the time and coordinate of the collision point. They can be found by linearizing the interface trajectories near the starting point just prior to collision. At τ = τ_1r, the left interface is at ξ₁ = ξ₀ and has velocity ξ₁′ = α/(½ + ψ); accordingly, its linearized trajectory is ξ₁(τ) ≈ ξ₀ + [α/(½ + ψ)](τ − τ_1r). Similarly, the right interface's trajectory linearized around the collision time is ξ₂(τ) ≈ ξ₀ − [α/(½ − ψ)](τ − τ_2r). Equating ξ₁(τ_c) = ξ₂(τ_c) yields the collision time and position to first order in ψ. The primary pumping effect is the displacement of the collapse point relative to the starting point of expansion; according to Eq. (37), it is a sharp function of the bubble strength (proportional to α⁴) for weak bubbles. The secondary pumping effect originates from the imbalance of mechanical momenta at collision. The velocities can also be found by linearizing them around the collision point, with accelerations following from the original dynamic equations. This yields the total momentum, or velocity, after collapse to first order in ψ; since the total mass after collapse is equal to 1, the postcollapse momentum and velocity are given by the same expression. The first term in Eq. (41) originates from the additional velocity that the short arm acquires between the return time and the collision time. The second term comes from the additional mass that the short arm acquires during the same time interval. Depending on the bubble strength α, either the first or the second factor dominates the secondary effect.
Strong asymmetry
Consider now the opposite case of strong asymmetry, when the microheater is very close to one edge of the channel, say, the left one. Then the starting point ξ₀ ≪ 1 is a small parameter. Of interest are the primary and secondary pumping effects to lowest order in ξ₀.
Let us start with the return times. Clearly, it takes the short arm a much shorter time to return than the long arm. Referring to Eq. (26) and using the large-argument asymptote of the Dawson integral, F(z) ≈ 1/(2z), one finds τ_1r = 2τ_1e ≈ 2ξ₀²/α. Thus the short arm's return time is second order in ξ₀: one power comes from the short travel distance and another from the large starting velocity. In contrast, the return time of the long arm is O(1).
The collision time and position can again be found from linearized trajectories around the collision point. At τ = τ_1r, the short arm has velocity α/ξ₀ and acceleration 1/ξ₀, so its trajectory is ξ₁(τ) ≈ ξ₀ + (α/ξ₀)(τ − τ_1r) + (τ − τ_1r)²/(2ξ₀). The right arm moves comparatively slowly; it is therefore sufficient to expand its trajectory around the starting point and time, ξ₂(τ) ≈ ξ₀ + ατ. By setting ξ₁(τ_c) = ξ₂(τ_c), after some algebra one obtains the time and position of the collision point, and thus the primary pumping effect Δξ ≈ 2ξ₀², which is quadratic in ξ₀. It is shown in Fig. 3 by the dashed line. The velocities needed for the secondary effect can be found from the same trajectories (43) and (44). The postcollapse velocity is, to leading order, (2 + ξ₀)α; the linear ξ₀ term comes from the increased mass that the short arm picks up between the return time τ_1r and the collision time τ_c. Notice that the postcollapse momentum can exceed twice the initial fluid momentum α, as can also be seen from the exact numerical solution presented in Fig. 4. The reason for this excess of momentum is as follows. In the symmetrical model, the collapse dynamics is simply the reversal of expansion. The short arm returns to the starting position with a momentum equal in magnitude but opposite in sign to its initial value. If ξ₀ ≪ 1, the return time is very small, so the long arm has lost very little of its initial momentum. By the time the short arm returns to ξ₀, the combined momentum is almost (slightly less than) 2α. However, since the long arm has moved away from ξ₀, the short arm has a little more time to accelerate before collision. During this extra time the short arm picks up more momentum than the long one loses, which results in an excess. Ultimately, the momentum is provided by the left reservoir.
With just two dimensionless parameters, the system can easily be analyzed numerically to completion. The numerical solution is presented in Figs. 3-6. Figures 3 and 4 show both effects in the practically relevant interval 0.1 < α < 1. The most notable feature of both sets of graphs is an optimal microheater location at which the effects are maximal. Note that, although the optimal ξ₀ for the two effects are not exactly equal, they are close and share a similar trend: the optimal location is close to the channel's edge for weak bubbles and shifts toward the channel center as bubbles grow stronger. The existence of an optimal microheater location is a welcome feature for practical applications of the effect. Clearly, fabricating microheaters right at the channel's edge would be challenging; the findings imply that this is unnecessary. Figures 5 and 6 show both inertial effects for the entire parameter space 0 < ξ₀ < 1 and 0.01 ≤ α ≤ 3.0 as color maps. Bubble strengths α > 1 are considered impractical, but are still of interest from the fundamental standpoint and are included here as such. Several features are worth noting. (i) For both effects, the optimal microheater location converges to ξ₀ ≈ 0.35, 0.65 at α > 1. (ii) The primary effect saturates with bubble strength, although the α dependence is not monotonic. (iii) The secondary effect does not saturate with bubble strength; it grows roughly linearly at large α, in agreement with Eqs. (41) and (50). We have also verified that the numerical solution agrees with the analytical asymptotes derived earlier in this section.

In the asymmetrical model, collapse is no longer the mirror image of expansion: the interface accelerates from the turning point according to the equation of motion (51) with a zero initial velocity. The Bernoulli correction results in a portion of the pressure difference being spent on accelerating fluid within the reservoir. The implications are more pronounced whenever high collapse velocities are encountered, i.e., for strong asymmetries and strong bubbles. Indeed, when the mass at the turning point is small, it accelerates quickly to a velocity of ∼1. At this moment the Bernoulli correction kicks in, the pressure difference within the channel drops, and the acceleration slows down. The velocity approaches the limit value of √2. The new features of the asymmetrical model discussed below are consequences of this fact.
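For reference, the √2 limit follows from a one-line force balance. The sketch below assumes the collapse equation of the asymmetrical model has the form inferred earlier in this rewrite (an inference from the stated limits, not a quoted formula):

```latex
\xi_1\,\xi_1'' \;=\; \gamma_1 \;-\; \tfrac{1}{2}\,\xi_1'^{\,2}
\qquad (m = 1,\ \beta = 0,\ \xi_1' > 0),
```

so the acceleration vanishes when ξ₁′ = √(2γ₁), i.e., ξ₁′ → √2 for γ₁ = 1.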
In the equation of motion (51) both integrations are elementary, with the results (52) and (53); in particular, the first integral can be written as ξ₁′² = 2(1 − ξ_t/ξ₁), where ξ_t is the turning point. The left edge's return velocity and time are obtained from here by setting ξ₁ = ξ₀: the return velocity is ξ′_1r = √(2(1 − ε)), where ε = exp(−α²/2ξ₀²). Notice that the return velocity ξ′_1r is equal to the initial velocity α/ξ₀ only in the case of weak bubbles, α ≪ 1.
The weak-asymmetry and strong-asymmetry limit cases are discussed next.
Weak asymmetry
Using ξ₀ = ½ + ψ [Eq. (28)] and expanding for ψ ≪ 1, one obtains after some algebra the return times, where ε₀ = exp(−2α²). The return velocities follow from Eq. (54) for small ψ. With the return times and velocities at hand, the collision point can be determined by linearizing trajectories around ξ₀. Applying the same method as in the symmetrical model, one finds the collision time and coordinate, and thus the primary pumping effect. Notice that the transition from the weak-bubble to the strong-bubble regime is not monotonic: the coefficient of ψ passes through a minimum of approximately −3.04 at α = 1.12 before converging to its large-α limit of −1.
The postcollapse velocity can be derived from the combined mechanical momentum of the two arms at the time of collision. With the return times and velocities given above, and with the accelerations of both arms known from the original dynamic equations, the velocities near the return times can be approximated by linear functions of time. Substituting the collision time τ = τ_c, multiplying the velocities by the respective column lengths ξ_c and (1 − ξ_c), and adding them together, one obtains the total postcollapse momentum, or velocity, to leading order in ψ; here g, h₀, and h₁ are given by explicit expressions in Eqs. (58), (60), and (61). The weak-bubble and strong-bubble limits of the secondary pumping effect follow accordingly. Notice that, unlike in the symmetrical model [compare with Eq. (41)], here there is an optimal bubble strength (α = 1.05) for any small microheater asymmetry ψ.
Strong asymmetry
Under strong asymmetry, the microheater is close to a channel end: ξ₀ ≪ 1. In this section the condition α/ξ₀ ≫ 1 will also be assumed, meaning that bubbles cannot be arbitrarily weak. The physics in this regime is dominated by the fact that the short left arm accelerates fast during collapse and quickly reaches the limit velocity of √2; after that the motion is uniform. Referring to the general expressions (54) and (55) and taking into account that ε is exponentially small, one obtains the return velocity and time, ξ′_1r ≈ √2 and τ_1r ≈ ξ₀/√2. Thus the return time is linear in ξ₀ rather than quadratic as in the symmetrical model [compare with Eq. (42)]. Linearized trajectories near τ_1r are ξ₁(τ) ≈ ξ₀ + √2(τ − τ_1r) and ξ₂(τ) ≈ ξ₀ + ατ; for the purposes of this section, it is not necessary to include the quadratic term in the right edge's trajectory. Equating ξ₁ = ξ₂ yields the time and position of the collision point, τ_c ≈ ξ₀/(√2 − α) and ξ_c ≈ √2 ξ₀/(√2 − α). These formulas assume α < √2. The singularity is physical: if the initial velocity of the right edge, α/(1 − ξ₀) ≈ α, is close to the limit velocity of the left edge, √2, it takes a long time for the left edge to catch up. Under this condition, the collision time and displacement are no longer small and higher-order terms must be taken into account. Equation (74) yields the primary pumping effect Δξ ≈ αξ₀/(√2 − α). The secondary pumping effect derives from the total mechanical momentum at collision. Its limit value at ξ₀ → 0 is α, and not 2α as in the symmetrical model [cf. Eq. (50)], because the left arm has a limited velocity. The first-order correction in ξ₀ is still positive, which suggests the existence of an optimal microheater location. This conclusion is valid only for not very strong bubbles, α < √2. [Figure: solution of Eqs. (12) and (13) for β = 0, m = 1, and γ₁ = γ₂ = 1, showing the primary pumping effect Δξ = ξ_c − ξ₀ as a function of microheater location ξ₀ and bubble strength α; the step between contour lines is Δξ = 0.05.]
Complete numerical solution
A full numerical solution of the asymmetric pump equations is presented in Figs. 7-10. Referring to Figs. 7 and 8, several differences from the symmetrical model are apparent. The primary effect is linear at small ξ₀ rather than quadratic. At the same time, the optimal microheater locations and primary maxima are roughly the same as in the symmetrical model. In contrast, the secondary effect (Fig. 8) is roughly half of the corresponding symmetrical-model values. Peaks are less pronounced, but evolve with bubble strength in the same manner. Thus the asymmetrical model predicts smaller net flows.
Examination of the all-parameter solution of Figs. 9 and 10 reveals further differences at large bubble strengths. Specifically, the optimal microheater location shifts back toward the channel edge. In addition, both effects feature maxima as functions of α, implying an optimal bubble strength for a given microheater location. Finally, the secondary effect does not grow indefinitely at large α, unlike in the symmetrical model, but saturates instead. One should note that although these qualitative differences between the two models are intriguing and deserve additional scrutiny, they occur in the regime of very strong vapor bubbles, which is hard to realize in practical devices. Summarizing Sec. III, one concludes that both models studied describe robust inertial pumping. The relative simplicity of the models has allowed many results to be derived analytically, which has elucidated the physics behind the effect. There are enough differences in predictions that a comparison with CFD calculations is likely to suggest the most realistic model and the underlying channel-reservoir boundary condition. Insights gained from this analysis will be helpful in designing practical inertial pumps.
IV. EFFECTS OF VISCOSITY
So far, viscous forces have been neglected. It is intuitively clear that for a mechanism that relies on fluid inertia, viscosity should have a detrimental effect. Within the one-dimensional dynamical model the viscosity is included via the dimensionless parameter β defined in Eq. (14). Here β is directly proportional to the bulk viscosity of the participating fluid [cf. Eq. (6)]. To get a sense of the physically relevant β interval, a fluid with η = 1.3 mPa s in a microchannel 200 µm long and 20 µm wide and tall would have β ≈ 2. These parameters correspond to Reynolds numbers Re ∼ 100.
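A quick back-of-the-envelope check of the quoted numbers, assuming water-like values and the pressure scale p₀ − p_vr ≈ 0.7 atm from Sec. II C:

```python
rho = 1000.0          # fluid density, kg/m^3 (water-like)
eta = 1.3e-3          # bulk viscosity, Pa*s (value quoted in the text)
dp = 0.7 * 101325.0   # p0 - pvr, Pa (assumed pressure scale)
L, w = 200e-6, 20e-6  # channel length and width/height, m

u = (dp / rho) ** 0.5     # characteristic velocity, ~8.4 m/s
T = L / u                 # characteristic time, ~24 us (dozens of us)
Re = rho * u * w / eta    # ~130, i.e., Re ~ 100 as quoted
print(f"u = {u:.1f} m/s, T = {T*1e6:.0f} us, Re = {Re:.0f}")
```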
A nonzero β introduces additional nonlinearities into the equation of motion. Although some useful results can still be obtained analytically (particularly in the postcollapse phase; see Sec. V), the final pumping effects are difficult to derive. We therefore resort to numerical analysis. In selecting a representative bubble strength, we chose α = 0.5, which roughly corresponds to a vapor bubble pressure of p_v = 8-10 atm (cf. the caption of Table I).
As expected, the finite viscosity systematically reduces both inertial effects. However, the reduction is not uniform across all microheater locations. Reductions are stronger for 0.1 < ξ₀ < 0.9 but much weaker for locations close to the channel ends, as can be observed in the plots. For the primary effect (Figs. 11 and 13) such a nonuniform change results in a systematic shift of the optimal location closer to the end of the channel. For the secondary effect, the maximum is eliminated altogether. This behavior is another indication of the complexity of inertial pumping.
Our results suggest that the inertial pump can operate down to Re ∼ 10 or even below, but its efficiency drops quickly. This is also consistent with the experimental data reported in Ref. [1]. Pumping through 1-µm-wide channels should still be possible; however, the microheater will have to be positioned very close to the channel's end.
V. POST-COLLAPSE PHASE
For realistic conditions, most of the net flow happens in the postcollapse phase, when fluid moves by inertia against friction forces. It is therefore important to know the details of the postcollapse flow. The equation of motion was derived in Sec. II D, Eq. (20). Since this equation is simpler than its counterparts of the expansion-collapse cycle, it is possible to obtain analytical solutions even for nonzero friction β and pressure head γ₁ − γ₂. In this section the most common case of a zero pressure head is solved; the more complex γ₁ ≠ γ₂ case will not be considered herein. For the symmetrical model m = 0, the equation of motion reduces to a linear-drag form with an obvious exponential solution. In the zero-viscosity limit of the asymmetrical model, where the Bernoulli term alone brakes the flow, the velocity decays to zero as 2η_c/[2 + η_c(τ − τ_c)], and the overall displacement diverges logarithmically. Again, the analytical formulas for ξ_c and η_c from Sec. III C can be used at small β to estimate a total pumped volume.
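The displayed postcollapse solutions were lost in extraction; the reconstruction below reproduces the quoted decay law and is offered as an inference from the description of Eq. (20), not as a quotation:

```latex
% Symmetrical model (m = 0), zero pressure head: linear drag
\xi'' = -\beta\,\xi'
\quad\Longrightarrow\quad
\xi'(\tau) = \eta_c\, e^{-\beta(\tau-\tau_c)},
\qquad \xi(\infty) - \xi_c = \frac{\eta_c}{\beta}.
% Asymmetrical model (m = 1) at beta = 0: Bernoulli braking only
\xi'' = -\tfrac{1}{2}\,\xi'^{\,2}
\quad\Longrightarrow\quad
\xi'(\tau) = \frac{2\eta_c}{2 + \eta_c(\tau-\tau_c)},
\qquad \xi(\tau) - \xi_c = 2\ln\!\Bigl[1 + \tfrac{\eta_c}{2}(\tau-\tau_c)\Bigr].
```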
VI. SUMMARY
A micropump is required for any active fluid-handling system. Inertial micropumps do not contain moving parts and can be made in large quantities by batch fabrication processes. As such, they are an excellent candidate for the universal integrated pump of chip-scale fluidics. The physics behind the pump operation is defined by a subtle balance between the pressure, viscous, and inertial forces. A high-pressure vapor bubble generated by a microheater expels fluid from the channel to reservoirs, but after that flow reverses under a negative pressure difference. Because of a smaller mass and inertia the shorter arm reverses flow direction earlier and then has more time to accelerate during inflow. By the time of collision, the shorter arm acquires a larger velocity and momentum than the longer arm. The two columns collide at a point that is shifted from the initial point of expansion, which constitutes the primary pumping effect. A nonzero total momentum ensures postcollapse flow from the short toward the long side of the channel, which is the secondary pumping effect. Total net flow is the sum of the primary and secondary contributions.
In this paper the inertial pumping action has been studied within a simplified one-dimensional dynamic model. Transverse motion within the channel has been neglected and the entire dynamics reduced to that of a fluid-vapor interface treated as a mathematical point. Despite these simplifications, the one-dimensional model captures the main features correctly, while using scaled units reduces the number of independent parameters. The model becomes an efficient tool for analyzing different pumping regimes. A major challenge lies in understanding the pressure at the channel-reservoir interface. The transient nature of the flow, vortex formation in the reservoir, nonuniform velocity across the channel cross section during flow reversal, and other factors make the selection of a single boundary condition nontrivial. Until a full understanding is achieved via additional experimental and numerical work, the current approach is to assume a physically reasonable boundary condition and analyze its consequences. In this paper two possible choices have been studied in detail. In the simplest symmetrical model the interface pressure is assumed constant and set to be equal to the bulk reservoir pressure.
In the more complex asymmetrical model, during inflow the interface pressure is reduced from the bulk value by a Bernoulli-type correction.
A key finding of the paper is that both models contain pumping effects and their properties are similar for weak to intermediate bubble strengths (which is the most realistic situation). In particular, the two models predict an optimal microheater location in the 0.2-0.3 range for both the primary and secondary effects. A nonzero viscosity dampens net flow in both models in a similar fashion. The predictions begin to diverge for very strong bubbles α > 1, as can be seen by comparing Figs. 5 and 6 with Figs. 9 and 10. The approach adopted in this paper can be extended in a number of ways, including sequential firing, nonzero pressure heads, and branched channels.
Identification of Coinfections by Viral and Bacterial Pathogens in COVID-19 Hospitalized Patients in Peru: Molecular Diagnosis and Clinical Characteristics
The impact of respiratory coinfections in COVID-19 is still not well understood, despite growing evidence that coinfections are more frequent than expected. A total of 295 patients older than 18 years of age, hospitalized with a confirmed diagnosis of moderate/severe pneumonia due to SARS-CoV-2 infection (according to definitions established by the Ministry of Health of Peru), were enrolled during the study period. A coinfection with one or more respiratory pathogens was detected in 154 (52.2%) patients at hospital admission. The most common coinfections were Mycoplasma pneumoniae (28.1%), Chlamydia pneumoniae (8.8%) and both bacteria together (11.5%), followed by Adenovirus (1.7%), Mycoplasma pneumoniae/Adenovirus (0.7%), Chlamydia pneumoniae/Adenovirus (0.7%), RSV-B/Chlamydia pneumoniae (0.3%) and Mycoplasma pneumoniae/Chlamydia pneumoniae/Adenovirus (0.3%). Expectoration was less frequent in coinfected individuals than in non-coinfected ones (5.8% vs. 12.8%). Sepsis was more frequent among coinfected patients than among non-coinfected individuals (33.1% vs. 20.6%), and 41% of the patients who received macrolides empirically were PCR-positive for Mycoplasma pneumoniae and Chlamydia pneumoniae.
Introduction
Coronavirus disease 2019 (COVID-19), caused by the SARS-CoV-2 virus, was declared a pandemic on 11 March 2020 [1]. COVID-19 represents a major public health threat to Latin America, given that it is considered the most inequitable region in the world according to international indexes [2]. Thus, the pandemic has exposed the income inequalities and lack of access to appropriate health care services in Latin American countries [1]. For instance, the spread of COVID-19 in Peru overwhelmed the unprepared, precarious and fragmented health system [3].
The still unknown impact of coinfections between SARS-CoV-2 and other respiratory pathogens, added to the rapid global expansion of the virus and its variants, requires establishing an efficient and sustainable diagnostic strategy over time [4]. Coinfection rates may be higher than expected, which may pose a great challenge for clinicians in the diagnosis and management of patients [5,6]. Several studies have reported a wide variance of coinfection rates in SARS-CoV-2 patients, ranging from 3% to more than 20% [5].
The most frequent pathogens identified among coinfections are group A Streptococcus [7], Mycoplasma pneumoniae [8], influenza A [9], parainfluenza [10], rhinovirus, enterovirus, respiratory syncytial virus (RSV), and other coronaviruses [5,11]. Current evidence suggests that coinfections with other respiratory viruses may complicate the disease course, leading to increased disease severity and mortality. Therefore, studies that identify the pathogens that coinfected COVID-19 patients and the evaluation of their impact on the clinical outcome are crucial. This data may guide clinicians to establish a directed antimicrobial therapy, decrease the irrational use of antibiotics, and improve the clinical outcome [5].
This study sought to identify the respiratory pathogens causing coinfections in patients with moderate/severe SARS-CoV-2 pneumonia from a hospital in Peru and determine the clinical characteristics and clinical outcome of coinfected and non-coinfected patients.
Results
A total of 295 consecutive patients with a confirmatory diagnosis of COVID-19 were enrolled during the study period. Among them, 288 (97.6%) had a confirmatory diagnosis by PCR techniques validated by the Peruvian National Institute of Health. The remaining seven patients (2.4%) were diagnosed with a positive IgM result by ELISA in addition to suggestive symptoms. Figure 1 shows the coinfections reported in our study: 141 (47.8%) patients had SARS-CoV-2 as their only infecting pathogen. The most common coinfections were identified in 83 (28.1%) patients with Mycoplasma pneumoniae, 26 (8.8%) patients with Chlamydia pneumoniae, and 34 (11.5%) patients with both bacteria. Adenovirus was identified in five (1.7%) patients, Mycoplasma pneumoniae + Adenovirus in two patients, Chlamydia pneumoniae + Adenovirus in two patients, and RSV-B + Chlamydia pneumoniae in one patient. Finally, a combination of Mycoplasma pneumoniae, Chlamydia pneumoniae, and Adenovirus was present in one patient. Table 1 shows the demographic and baseline characteristics of the patients included, according to the pathogens identified. The mean age of the patients was 58 ± 14.0 years and 209 (70.9%) were male. Regarding past medical history, the two most common comorbidities found were hypertension (26.8%) and diabetes mellitus (22.3%). The most common clinical signs and symptoms on admission were cough (72.9%), dyspnea (69.8%) and fever (61.0%), which had a similar frequency in the different coinfection groups. Patients with any coinfection had expectoration less frequently than those with no coinfection (5.8% vs. 12.8%).

[Table 1 notes: the median days since symptom onset was 7 (IQR approximately 5-10) in all groups. "Others" includes Mycoplasma pneumoniae + Adenovirus (n = 2), Chlamydia pneumoniae + Adenovirus (n = 2), RSV-B + Chlamydia pneumoniae (n = 1) and Mycoplasma pneumoniae + Chlamydia pneumoniae + Adenovirus (n = 1). CKD = chronic kidney disease; CURB-65 = severity score for community-acquired pneumonia; SD = standard deviation. For each qualitative variable, the percentage and its respective 95% confidence interval are reported.]
Table 2 shows the laboratory parameters and the treatments that the included patients received during hospitalization. No differences were observed in laboratory parameters among the different study groups.
The clinical outcomes of the patients were evaluated in all study groups. Coinfected patients as a group were more likely to develop sepsis than patients without coinfection. Among the most relevant findings, coinfected patients had more superinfection events than non-coinfected patients (6.5% vs. 3.6%), more cases of heart failure (11.0% vs. 5.7%), and longer mean stays in the ICU (16 vs. 8 days) and on mechanical ventilation (16 vs. 9 days). The coinfection of SARS-CoV-2 with Mycoplasma pneumoniae, the most frequent one, showed the highest frequency of sepsis (37.4%). The frequency of ARDS in SARS-CoV-2 monoinfection was 17.7%. Mortality was similar among all study groups, as shown in Table 3.

[Table 3 notes: "Others" includes Mycoplasma pneumoniae + Adenovirus (n = 2), Chlamydia pneumoniae + Adenovirus (n = 2), RSV-B + Chlamydia pneumoniae (n = 1) and Mycoplasma pneumoniae + Chlamydia pneumoniae + Adenovirus (n = 1). ARDS = acute respiratory distress syndrome; ICU = intensive care unit. For each qualitative variable, the percentage and its respective 95% confidence interval are reported.]
Finally, an evaluation of the antibiotics prescribed was carried out. The majority of patients were administered an antibiotic (69.5%). The most frequently prescribed antibiotics were ceftriaxone in 143 patients, azithromycin in 95 patients and imipenem in 36 patients. Nearly half of the antibiotic prescriptions were given to patients who were not infected by any bacterial pathogen (Figure 2), while 41% (n = 39) of the patients who received macrolides empirically were PCR-positive for Mycoplasma pneumoniae and Chlamydia pneumoniae.
Discussion
In this study, more than 50% of the patients evaluated with COVID-19 upon admission presented coinfection with other respiratory pathogens. These findings differ from those reported in the meta-analysis by Lansbury et al. [12] and Langford et al. [13], in which lower frequencies of coinfection were obtained in hospitalized patients (7% and 5.9%) and in critical patients (14% and 8.1%), respectively. The estimated proportion of coinfection in patients with COVID-19 varied according to the study site, season, clinical condition and diagnostic assays used [12][13][14][15].
Data on coinfections with SARS CoV-2 come mainly from studies carried out in China, United States and Spain [12][13][14]. We present the largest study in Peru including patients with moderate/severe COVID-19 pneumonia and coinfection with other respiratory pathogens. There are few reports in South America of cases of coinfection in patients upon admission. For example, Vial et al. [16] reported one case of coinfection with Streptococcus pneumoniae in Chile. In Brazil, it was documented that one patient presented Lautropia, Prevotella, and Haemophilus [17]. Finally, Orozco-Hernández et al. reported a case of coinfection with rhinovirus and enterovirus in Colombia [18].
The most common pathogen identified causing coinfections was Mycoplasma pneumoniae. In the current study, we used PCR to identify this microorganism, since it is highly sensitive and specific during the initial phase of infection [19][20][21], while serological techniques that detect IgM antibodies against Mycoplasma pneumoniae used in other reports in COVID-19 [12,21] may have less sensitivity in adult patients. In this age group, there is a weak antibody response and there is a need to take paired samples with documentation of elevated IgG titers to determine their clinical significance [20]. On the other hand, a study that evaluated IgM against M. pneumoniae determined 56.8% coinfection with SARS-CoV-2, a value above our findings [22].
In the current study, we found a total of 34 cases with simultaneous coinfection of Mycoplasma pneumoniae and Chlamydia pneumoniae, which was higher than in previous reports [23]. These bacteria have been reported to cause coinfections with other viruses; for example, a high frequency of bacterial coinfections is observed in patients with influenza [24], and it is also noteworthy that both Mycoplasma pneumoniae and Chlamydia pneumoniae were identified as coinfecting microorganisms in patients with SARS and MERS [25,26]. The impact of these findings in the adult population is not clear; however, coinfected patients presented a lower proportion of expectoration upon admission compared to non-coinfected patients. According to the score used by the Japanese Respiratory Society (JRS), cough without expectoration is one of the six criteria used to predict atypical pneumonia, with a sensitivity that reaches 83% [27,28]. We did not find differences in the leukocyte count between the overall coinfected and non-coinfected groups; nonetheless, it has been reported that leukopenia can be considered another diagnostic criterion to identify infections by atypical bacteria [28].
The majority of patients with COVID-19 presented fever, cough and dyspnea. These symptoms were similar among all study groups, which made it difficult to clinically differentiate between COVID-19 monoinfections and coinfections with other pathogens [23]. In addition, the differentiation can be challenging in patients older than 60 years, in whom any respiratory infection may resemble typical bacterial pneumonia [29]. It has been proposed that pathogens such as Mycoplasma pneumoniae can exacerbate clinical symptoms, increase morbidity and prolong the stay in the ICU [30]. Our results also showed some differences between patients with monoinfection and coinfections, such as admission to the ICU, days in the ICU, and days on mechanical ventilation. Mortality was similar among study groups. Another study found that patients with coinfection (COVID-19 and Mycoplasma) had higher mortality compared to patients with only COVID-19 disease [22].
The proportion of coinfections with other respiratory viruses was low, similar to other reports [12,31]. The most common viruses identified in our study were Adenovirus (HAdv) and only one case of respiratory syncytial virus B. According to an analysis carried out by the Pan American Health Organization, the distribution of other respiratory viruses did not exceed 5% in Peru during the pandemic [32]. Previous studies in Peru have reported a lower percentage of respiratory infections due to HAdv in people over 18 years of age, without specific characteristics that differentiate their presentation from other respiratory viruses [33].
We did not detect cases of coinfection with influenza viruses despite conducting the study during the winter months, during which this virus increases its incidence. This can be explained by social distancing and confinement orders that reduced the transmission of other respiratory viruses, including influenza and respiratory syncytial virus [34]. Another study in Peru reported cases of coinfection between SARS CoV-2 and influenza A (n = 4) and B (n = 1). While SARS-CoV-2 was identified by RT-PCR, influenza A and B were identified by indirect immunofluorescence (IFI) [35]. This fact represents a limitation of the study, given the lower diagnostic performance of IFI compared to PCR in the identification of respiratory pathogens [36].
In these coinfections, additional symptoms such as odynophagia and nasal congestion were described, with no additional complications [35]. A "synergistic effect" has been documented between influenza virus and COVID-19 that may increase the risk of mortality by almost two times, mainly in the elderly [37]. However, in the current study, we could not conclude that patients with another concurrent viral infection had a worse prognosis than patients with only SARS CoV-2 detection.
Although it was not possible to document the proportion of patients who had received the seasonal influenza vaccine, we consider that this proportion should be low, since at the study site the vaccines became available only at the end of April and the beginning of May, and their application was not mandatory [38]. Social distancing measures, the massive use of masks, the closure of schools and other established biosafety measures may have reduced the transmission of other respiratory viruses.
We observed a higher proportion of sepsis in patients with coinfections compared to those with monoinfection. SARS-CoV-2 has been reported to induce viral sepsis associated with secondary organ dysfunction in 25% and 83% of COVID-19 patients hospitalized in general services and critical care units, respectively [39,40]. Proposed mechanisms include increased bacterial adherence, cellular destruction by viral enzymes, reduced mucociliary clearance, reduced chemotaxis, reduced surfactant levels, dysbiosis of the microbiome, dysregulation of the immune response and bacterial-viral synergism, among others [41].
We evaluated the use of antibiotics in our study and found that 205 (69.5%) patients received antibiotics upon admission. A previous meta-analysis found a similar proportion of antibiotic prescriptions among the studies (71.8%), in which the predominant antibiotics were quinolones and third-generation cephalosporins, comprising approximately 74% of the total antibiotics administered [13]. In our study, the most commonly used antibiotics were ceftriaxone in 143 patients, azithromycin in 95 patients and imipenem in 36 patients. This was because the so-called "respiratory" fluoroquinolones are restricted for the treatment of tuberculosis in Peru; therefore, third-generation cephalosporins and macrolides predominated in our study. We observed that nearly 50% of the patients who received antibiotics did not have a bacterial coinfection, and 41% of the patients who received azithromycin during hospitalization were coinfected with an atypical bacterium; however, the impact on the persistence of symptoms after antibiotic treatment could not be determined [42].
Routine administration of antibiotics is currently not indicated in the context of COVID-19 infection and may only be considered in the case of high clinical suspicion [43,44]. In addition, recommendations against the empirical use of azithromycin in mild COVID-19 have recently been published [45]. The future impact of the massive use of antibiotics in our setting after the pandemic remains unknown.
Although it is necessary to document the prevalence and possible resistance mechanisms of Mycoplasma pneumoniae and Chlamydia pneumoniae in Peru [46], the local rate of resistance to macrolides in Streptococcus pneumoniae strains obtained in hospitalized patients in Lima was higher than 30% [47]. Future studies are required to determine the role of antibiotics in inpatient COVID-19 care, as well as resistance rates, following the pandemic era.
Our study had limitations. First, the percentage of coinfections at the beginning of hospitalization could be higher in relation to the number of respiratory pathogens evaluated if other multipathogenic molecular platforms were used (e.g., FilmArray); however, our institution does not have these tests for routine use. Despite this, the percentage of coinfections obtained exceeded that reported in the literature and the frequency of bacterial infections by atypical microorganisms that was obtained in an adult population occurred in established severity groups. Second, although a high frequency of coinfections was found, longitudinal studies must be carried out throughout the course of the disease to identify possible mixed infections through methods such as whole genome sequencing and to identify possible resistance mechanisms in pathogens such as Mycoplasma pneumoniae and Chlamydia pneumoniae. Third, the results obtained could not be extrapolated to other centers in Peru; however, it is possible that clusters of M. pneumoniae could circulate during the pandemic in Peru, for which it is necessary to use molecular and strain typing techniques to characterize these events.
Study Design
A descriptive study was conducted on hospitalized patients with a confirmed diagnosis of moderate/severe pneumonia due to SARS-CoV-2 infection (molecular test or confirmation according to definitions established by the Peruvian Ministry of Health). The selection criteria included patients older than 18 years who were admitted to the Guillermo Almenara Irigoyen Hospital in Lima, Peru during the period of July-November 2020. The total number of hospitalized patients with a confirmed diagnosis of moderate/severe pneumonia due to SARS-CoV-2 infection in the hospital during the enrollment period was 660. Selection was consecutive until 295 patients were enrolled and coincided with the highest peak of the first wave of the pandemic in Peru. Informed consent was signed upon admission to hospitalization; only patients who agreed to be enrolled in this study and signed an informed consent were included. Patients not enrolled in the study were minors, pregnant women, patients who refused to participate and patients admitted during shifts when the personnel in charge of enrollment for the study were not present.
Definitions
Moderate pneumonia was defined as follows: an adult with clinical signs of pneumonia (fever, cough, dyspnea, respiratory distress) but no signs of severe pneumonia, including an oxygen saturation ≥90% on room air. Severe pneumonia included patients with clinical signs of pneumonia (fever, cough, dyspnea, respiratory distress) plus one of the following: respiratory rate >30/min, severe respiratory distress, or oxygen saturation <90% on room air [48]. The radiological severity score was assessed according to the study by Ho Yuen et al. [49].
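As an illustration, the two categories reduce to a one-line rule; a minimal sketch (variable names are ours, not from the study protocol):

```python
# Transcription of the severity definitions quoted above; assumes clinical
# signs of pneumonia (fever, cough, dyspnea, respiratory distress) are present.
def classify_pneumonia(resp_rate: float, spo2_room_air: float,
                       severe_distress: bool) -> str:
    """Classify COVID-19 pneumonia as moderate or severe."""
    if resp_rate > 30 or severe_distress or spo2_room_air < 90:
        return "severe"
    return "moderate"

# Example: a patient breathing at 24/min with SpO2 93% on room air
print(classify_pneumonia(resp_rate=24, spo2_room_air=93, severe_distress=False))
```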
Sampling and Nucleic Acids Extraction
Nasopharyngeal swab samples were collected from patients hospitalized in COVID-19 hospitalization wards and in the intensive care unit (ICU) within 48 h of hospital admission. RNA/DNA extraction was performed from 140 µL of the aliquoted samples. The QIAGEN QIAamp Genetic Material Isolation Kit was used according to the manufacturer's instructions; 80 µL of eluted RNA/DNA was obtained after extraction, and the amplification process then followed.
Different viral and bacterial pathogens were evaluated by polymerase chain reaction (PCR), including influenza A and B, respiratory syncytial virus (RSV), adenovirus, Mycoplasma pneumoniae and Chlamydia pneumoniae. The molecular diagnostic methods were carried out in the molecular biology laboratory of the Universidad Peruana de Ciencias Aplicadas (UPC).
Reverse Transcriptase Polymerase Chain Reaction (RT-PCR) for the Analysis of Respiratory Viruses
For the analysis of influenza A and influenza B, the primers and probes used were described by Carra et al. [50] and Selvaraju et al. [51], respectively. For the analysis of RSV-A and RSV-B, the primers and probe were as described by Liu et al. [52], and for adenovirus, as described by Heim et al. [53]. For RNA viruses, a one-step RT-PCR was performed using a TaqMan probe with a BHQ quencher at 125 nM and primers at 250 nM in a final volume of 20 µL. Then, 5 µL of the extracted RNA was combined with 15 µL of the Ready RNA Virus Master (Roche Diagnostics, Mannheim, Germany). The amplification conditions for influenza A and influenza B were 50 °C for 10 min, followed by 45 cycles of 95 °C for 5 s, 57 °C for 15 s and 72 °C for 15 s; in the case of RSV-A and RSV-B, they were 50 °C for 10 min, followed by 50 cycles of 95 °C for 5 s, 51 °C for 20 s and 72 °C for 20 s. In the case of adenovirus, the Fast Start DNA Master enzyme (Roche Diagnostics, Mannheim, Germany) was used and the amplification conditions were 50 °C for 10 min followed by 60 cycles of 95 °C for 5 s, 64 °C for 5 s and 72 °C for 15 s. All procedures were performed in a LightCycler 2.0 instrument and data were analyzed with LightCycler software 4.1 (Roche Diagnostics, Mannheim, Germany).
Conventional Polymerase Chain Reaction (PCR) for Atypical Bacteria Mycoplasma pneumoniae and Chlamydia pneumoniae
For the amplification of the atypical bacteria, the primers and conditions previously described by Valle et al. [46] were used. The amplification consisted of an initial incubation at 95 °C for 2 min, followed by 40 cycles of 95 °C for 30 s, 58 °C for 30 s and 72 °C for 30 s, with a final extension at 72 °C for 5 min. Amplified sequences of 275 and 126 base pairs were detected for Mycoplasma pneumoniae and Chlamydia pneumoniae, respectively, visualized under agarose gel electrophoresis with nucleic acid staining (SYBR Green, Promega).
Data Analysis
For data and variable collection, the hospital electronic clinical charts were used. The data were obtained upon discharge of each patient, and the information obtained was compiled in a database stored in the Excel v.2016 program. For the data analysis, no personal identifiers were considered. Descriptive statistics were performed, and for the analysis of clinical results, the group of patients with COVID-19 monoinfection was compared with the group that encompassed all evaluated coinfections. In addition, specific coinfections with the different pathogens were described separately. All calculations were performed using STATA Software version 15.0 for Windows (College Station, TX, USA). Graphics were created with GraphPad Prism 9.0.0.
Conclusions
Our study identified Mycoplasma pneumoniae and Chlamydia pneumoniae as the main microorganisms associated with coinfections in COVID-19 patients admitted to a referral hospital. Regarding respiratory viruses, Adenovirus and RSV-B were identified less frequently than atypical bacteria. Furthermore, the presence of multiple coinfections could be described in some patients. In the hospital setting, a higher proportion of sepsis, superinfections, stay in the ICU and mechanical ventilation was found in coinfected patients. Finally, a high proportion of patients received antibiotics, even in the absence of bacterial infections. Future studies are required to determine the role of other respiratory pathogens in COVID-19 and guide the rational use of antibiotics.
|
2021-11-10T16:21:50.545Z
|
2021-11-01T00:00:00.000
|
{
"year": 2021,
"sha1": "a305b28c643d6f4ce6afbd3b924e9dde9a749cf6",
"oa_license": "CCBY",
"oa_url": "https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8615059",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "fedf8be9f7353ab8b9f32657a2a476c6f22ed7f2",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
54532797
|
pes2o/s2orc
|
v3-fos-license
|
Design and Research for the Water Low-pressure Large-flow Pilot-operated Solenoid Valve
INTRODUCTION
In hydraulic systems, a pure water hydraulic system is a new development direction in the fluid transmission and control field [1] and [2]. Compared with conventional mineral hydraulic oil, water as a medium has certain unique advantages, such as being clean/non-polluting, easily available and resistant to explosion [3]; it has been used increasingly widely in food, fire, high-pressure cleaning, reactor, mining and other industries [4] and [5]. Recent research on water electromagnetic valves has mostly focussed on high-pressure valves [6], rather than on low-pressure large-flow valves. Meanwhile, most of the research is on the flow field characteristics of existing valves, but little has been done on the structural design of valves.
Low-pressure large-flow valves with fast response characteristics are highly valued in industrial applications, such as high-power valve-control coupling, in which low-pressure large-flow solenoid valves are a core component. They help to achieve gentle starts of the valve-control coupling by controlling the liquid-filling processes. The working liquid can be replaced in a timely manner according to the liquid temperature and pressure in the chamber to control the speed and limit the working temperature. Valve-control coupling has been widely used in heavy scraper conveyors of coal mines, belt conveyors, pumps, draught fans and other heavy equipment; it plays an important role in improving working conditions and saving energy [7] and [8]. The low-pressure large-flow valve group is one of the key factors of high-power valve-control coupling.
The control valve for valve-control coupling can be classified as a solenoid valve. Low-pressure large-flow solenoid valves with piston lifts and with diaphragms both use a pilot valve structure. Some research on solenoid valves for valve-control coupling provides a good basis for this study [9] to [11]. Other related research on valve design [12], simulation [13] and improvement [14] is also encouraging. However, the application requirements for the solenoid control valve group are stricter in some difficult operating environments.
This article describes low-pressure large-flow pure-water pilot-operated solenoid valves (the flow is larger than 240 L/min) designed according to the working requirements of the valve-control coupling. The effect of key parameters on the valve group characteristics is analysed using AMESim in order to seek reasonable parameters. A pure water hydraulic test platform is set up to carry out experimental verification.
Working Principle of Differential Pressure Type Pilot Operated Solenoid Valve
The differential pressure type pilot-operated solenoid valve consists of a main valve and a solenoid pilot valve, as seen in Fig. 1.
In the pilot hydraulic half-bridge, R1 and R2 represent the throttle orifice fluid resistance and the pilot valve fluid resistance, respectively. Both are connected in series. The supply fluid pressure is p1,
and the upper chamber pressure of the main spool is p2.
The relationship between them is:

p2 = p1 · R2 / (R1 + R2). (1)

The effective area of the control chamber (upper chamber) is A2, and the effective area of the high-pressure chamber (lower chamber) is A1, with A1 < A2. The spring stiffness in the upper chamber is k.
Fig. 1. Working principle of the pilot-operated solenoid valve
When the pilot valve is closed, the working medium flows into the upper chamber through the orifice, and it cannot continue because of the resistance. The fluid resistance R2 is infinite at this time. From Eq. (1) it can be seen that p1 is equal to p2. The greater the working pressure, the tighter the seal.
When the pilot valve is opened, pressure drops form at the orifice and at the pilot valve, so R2 is small, and from Eq. (1) it can be seen that p1 is greater than p2. When p1 can overcome the sum of p2, the spring force and the friction caused by spool movement, the main spool will open.
Spool Motion Equation
The spool motion equation is:

m·(d²x/dt²) = p1·A1 − p2·A2 − k·(x + x0) − D1·(dx/dt) − Ff − Fs. (2)

Here m is the spool's mass, x is the spool displacement, and x0 is the spring precompression. D1 is the damping coefficient, Ff is the friction force and Fs is the flow force.
The static equilibrium condition before the main valve opens (x = 0) is:

p1·A1 ≤ p2·A2 + k·x0 + Ff + Fs. (3)

Ignoring the spring force and the frictional force, the equilibrium reduces to:

p1·A1 = p2·A2, (4)

and the opening condition on the parameters follows:

p1·A1 > p2·A2. (5)

Expressed in terms of pressure with Eq. (1), this requires A1/A2 > R2/(R1 + R2). If the pilot valve flow capacity is strong, which means R2 is 0, the opening pressure condition is:

p1 ≥ k·x0/A1. (6)

As can be seen from Eq. (6), the spring mainly affects the opening pressure. In consideration of the reset, a smaller pre-tightening force is required in order to reduce the opening pressure.
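The opening logic of Eqs. (1) to (6) is simple enough to check numerically. The following minimal Python sketch evaluates the half-bridge pressure and the opening test; all parameter values are illustrative assumptions, not design values from this paper.

def upper_chamber_pressure(p1, r1, r2):
    """Series half-bridge mid-node pressure, Eq. (1): p2 = p1*R2/(R1+R2)."""
    return p1 * r2 / (r1 + r2)

def main_spool_opens(p1, r1, r2, a1, a2, k, x0):
    """Opening test: p1*A1 must exceed p2*A2 plus the spring preload,
    cf. Eqs. (5) and (6); friction and flow force are neglected."""
    p2 = upper_chamber_pressure(p1, r1, r2)
    return p1 * a1 > p2 * a2 + k * x0

# Hypothetical values in SI units. With the pilot valve open (small R2)
# the spool opens; with it closed (R2 -> infinity) p2 -> p1, and the
# larger area A2 keeps the valve sealed.
print(main_spool_opens(p1=0.3e6, r1=5.0, r2=0.5,
                       a1=8e-4, a2=12e-4, k=1e4, x0=2e-3))   # True
print(main_spool_opens(p1=0.3e6, r1=5.0, r2=1e9,
                       a1=8e-4, a2=12e-4, k=1e4, x0=2e-3))   # False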
Characteristics of Throttle Orifice
The diameter d and length l are two basic parameters of the throttle nozzle. When l/d (the length-to-diameter ratio) is between 0.5 and 4, it is called a 'short orifice'. When l/d is greater than 4, it is called a 'thin long orifice'. The flow calculation equation for the short orifice is similar to that of thin-walled holes:

q = Cd·a0·√(2Δp/ρ). (7)
In the equation, a0 is the cross-sectional area of the orifice, Cd is the flow coefficient, which is approximately 0.8 when the Reynolds number is large, ρ is the density of the liquid, and Δp is the pressure difference across the orifice.
Regarding laminar flow in the thin long orifice, the calculation equation for flow through a circular tube is:

q = π·d⁴·Δp / (128·μ·l), (8)

where μ is the dynamic viscosity of the liquid. Therefore, the flow rate equations of the orifice, Eqs. (7) and (8), can be summarized as:

q = C·Δp^m. (9)

The coefficient C is determined by the shape and dimensions of the orifice and the general nature of the liquid. The exponent m (0.5 < m < 1) is determined by the length-to-diameter ratio of the orifice. The orifice is at a transitional state between turbulent flow and viscous laminar flow.
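To make Eqs. (7) and (8) concrete, the following Python sketch evaluates both flow laws for water; the density, viscosity and the example operating point are assumed textbook values, not measurements from the paper.

import numpy as np

RHO = 1000.0   # water density [kg/m^3]
MU = 1.0e-3    # water dynamic viscosity [Pa*s]

def q_short_orifice(dp, d, cd=0.7):
    """Short-orifice flow, Eq. (7): q = Cd * a0 * sqrt(2*dp/rho)."""
    a0 = np.pi * d**2 / 4.0
    return cd * a0 * np.sqrt(2.0 * dp / RHO)

def q_thin_long_orifice(dp, d, l):
    """Laminar (Hagen-Poiseuille) flow in a thin long orifice, Eq. (8)."""
    return np.pi * d**4 * dp / (128.0 * MU * l)

# A 1.2 mm nozzle at 1.9 MPa: the short-orifice value comes out near
# 2.9 L/min, larger than the measured 2.15 L/min quoted later in the
# text, consistent with the observation that the theoretical value
# exceeds the experimental one.
q = q_short_orifice(1.9e6, 1.2e-3)      # [m^3/s]
print(q * 6.0e4, "L/min")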
Fluid Resistance Characteristic
According to the working principle of the pilot solenoid valve, the pressure distribution test system is shown in Fig. 2. The pilot valve and the throttle nozzle are connected in series to simulate the relationship between them. The 'pressure difference-flow' characteristics and the pressure distribution rule for the throttle nozzle and pilot valve are investigated using this test system. The solenoid pilot valve is turned on while electrified. Fig. 3 shows the throttle nozzle structure.
Flow Characteristics of Throttle Nozzle
The diameters of the throttle nozzles are 1.2 mm, and the lengths are 3.5, 5, 6 and 7 mm, respectively. Fig. 4 shows the pressure-flow curves of the different nozzle lengths; with increasing length of the throttle nozzle, the flow capacity of the orifice is weakened. However, with further increases in length, the rate of flow decrease becomes slower. This means that the sensitivity of the flow capacity to the length is reduced.
When the flow coefficient Cd is 0.7, the theoretical value for the throttle nozzle calculated using Eq. (7) is plotted in Fig. 4. It shows that the theoretical value is significantly larger. Although the effect of length on the flow characteristics is not considered in Eq. (7), it can be seen that the overall trend of each experimental curve substantially follows an exponent distribution law, which means that the flow of the water medium in the orifice is a turbulent-state flow.
As the viscosity of water is small, the resistance loss along the orifice is also low. The flow through the throttle nozzle is hardly affected by the viscosity in the test range, which means it is insensitive to changes in water temperature. The temperature of water in the valve-control coupling changes tremendously under start and overloading conditions. Therefore, this feature is particularly suitable for valve-control coupling.
Fig. 4. Pressure-flow curves of different nozzle lengths
When the diameters of the throttle nozzle are 1.2, 1.5, 1.8 and 2 mm, the length-to-diameter ratio is 3.5, and the corresponding lengths are 4.2, 5.2, 6 and 7 mm, respectively. The flow characteristics of each throttling nozzle are shown in Fig. 5. It can be seen that, for the same length-to-diameter ratio, the larger the diameter, the stronger the flow capacity. Fitting every pressure-flow curve to the exponent function by the least-squares method gives an exponent of approximately 0.5, which is close to Eq. (9) and meets the flow characteristics of a short orifice. Adopting a 1.2 mm orifice, when the system pressure is 1.9 MPa, the flow is 2.15 L/min. The pressure drop at the orifice accounts for the majority of the total pressure. As a result of the small flow, the pressure drop at the pilot valve is very small. When the pressure is 2 MPa, the flow is 3.8 L/min for a 1.5 mm orifice, 5.5 L/min for a 1.8 mm orifice and 6 L/min for a 2 mm orifice. Therefore, when the orifice diameter is increased, the pressure drop of the pilot valve at the same flow rate also increases; thus, the corresponding upper chamber pressure of the solenoid valve is increased. If the diameter of the orifice is further increased, the opening condition of Eq. (5) will no longer be satisfied.
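The least-squares fit mentioned above can be reproduced with standard tools. The Python sketch below fits the exponent law of Eq. (9) to synthetic pressure-flow data; the data points are stand-ins with an assumed coefficient, not the paper's measurements.

import numpy as np
from scipy.optimize import curve_fit

def exponent_law(dp, C, m):
    """Eq. (9): q = C * dp**m."""
    return C * dp**m

rng = np.random.default_rng(0)
dp = np.linspace(0.2e6, 2.0e6, 10)                       # pressure drop [Pa]
q = 2.7e-9 * dp**0.5 * (1.0 + 0.02 * rng.standard_normal(10))  # flow [m^3/s]

(C, m), _ = curve_fit(exponent_law, dp, q, p0=(1e-9, 0.6))
print(f"fitted C = {C:.3e}, exponent m = {m:.2f}")        # m close to 0.5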
In comparing Fig. 7 with Fig. 6, the pressure drop brought by the 1.5 mm diameter pilot valve accounts for nearly 1/3 of the total pressure drop with the 1.2 mm throttle nozzle. However, the proportion of the total pressure drop brought by the 3 mm diameter pilot valve is almost negligible. Considering the working pressure, the bigger diameter pilot valve should be selected. The control valve group works according to the pressure-difference pilot principle. It consists of the filling valve (8-2), the liquid-discharged valve (8-6) and the circulating valve (8-4). The pilot valve (8-3) controls the filling valve (8-2): the solenoid valve (8-3) is used as a pilot valve, and the filling valve (8-2) is used as the main valve. There is a damping orifice on the main spool. When the solenoid valve (8-3) is opened, a pressure difference arises between the upper and lower cavities, the main valve opens under the action of this pressure difference, and the coupling is then filled with liquid. The two-position three-way solenoid pilot valve (8-5) can control both the on-off state of the circulating valve (8-4) and the on-off state of the discharging valve (8-6). The circulating valve and the liquid-discharged valve have the same principle as the liquid-filled valve (8-2): the movement of the main spool is likewise caused by the pressure difference between the upper and lower cavities. The circulating valve mainly controls the on-off state of the coupling circuits, and the liquid-discharged valve is used to discharge the liquid inside the coupling. Only one of the two valves can open under the action of the pilot valve (8-5).
DESIGN OF THE MAIN VALVE
The motor rotating speed is 1490 r/min. The essence of the valve-controlled coupling catheter is a rotary jet pump [15]. Therefore, the theoretical maximal pressure that the coupling catheter can provide is approximately 0.6 to 0.88 MPa [16].
For a pure plane structure (Fig. 9a), during the fast reversing of the spool, a sudden blockage causes a sharp pressure rise inside the tube and leads to strong shock and noise. In order to relieve the shock and noise, a cone is set at the outlet surface of the plane valve. The fixed damping orifice is put on the main spool. The parameters of the tail cone structure are shown in Table 1.
SIMULATION OF THE CHARACTERISTICS OF SOLENOID VALVE BASED ON AMESIM
The liquid-filled valve is studied as a sample in order to investigate the influences of the throttle nozzle, spring and other parameters on the dynamic characteristics, static characteristics and opening pressure. Following that, the optimized parameter combination of the solenoid valve group can be determined, and whether the spool structure satisfies the functional requirements of the valve-control coupling can also be checked.
Building a Simulation Model
The simulation model of the liquid-filled valve is established in AMESim, as shown in Fig. 10. It is composed of a pilot-operated solenoid valve and an external fluid supply system. The fluid supply system is fed by a fixed-displacement pump, and the fluid supply pressure is regulated by a relief valve. The static fluid resistance represents the characteristics of the damping orifice. The combination of the plane valve and poppet valve represents the main spool with the tail cone structure.
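The AMESim model itself is graphical, but its lumped-parameter content can be sketched in code. The following Python sketch couples a control-chamber continuity equation to the spool motion of Eq. (2); the bulk modulus, volumes, masses and stop stiffness are all assumed values chosen only to make the sketch run, so it illustrates the modelling idea rather than reproducing the AMESim model.

import numpy as np
from scipy.integrate import solve_ivp

RHO, BETA = 1000.0, 1.5e9            # water density [kg/m^3], bulk modulus [Pa]
P1 = 1.6e6                           # supply pressure [Pa]
A1, A2 = 8e-4, 12e-4                 # lower/upper spool areas [m^2]
M, K, X0, D1 = 0.2, 1.0e4, 2e-3, 200.0   # mass, spring, precompression, damping
V2 = 2e-5                            # control-chamber volume [m^3]
A_ORI = np.pi * (1.5e-3)**2 / 4.0    # 1.5 mm throttle nozzle
A_PIL = np.pi * (3.0e-3)**2 / 4.0    # 3 mm pilot valve
X_MAX, K_STOP = 5e-3, 1e8            # stroke limit and stop-penalty stiffness

def q_orifice(dp, area, cd=0.7):
    """Short-orifice law, Eq. (7), with sign handling."""
    return cd * area * np.sign(dp) * np.sqrt(2.0 * abs(dp) / RHO)

def rhs(t, y):
    x, v, p2 = y
    q_in = q_orifice(P1 - p2, A_ORI)           # filling through the nozzle
    q_out = q_orifice(p2, A_PIL)               # pilot valve open (energized)
    dp2 = BETA / V2 * (q_in - q_out - A2 * v)  # chamber continuity
    f = P1 * A1 - p2 * A2 - K * (x + X0) - D1 * v   # Eq. (2); Ff, Fs neglected
    f += -K_STOP * min(x, 0.0) - K_STOP * max(x - X_MAX, 0.0)  # seat and stop
    return [v, f / M, dp2]

sol = solve_ivp(rhs, (0.0, 0.2), [0.0, 0.0, P1], method="LSODA", max_step=1e-4)
print(f"final spool lift: {sol.y[0, -1] * 1e3:.2f} mm")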
Opening Pressure
A signal source is set so that the opening pressure of the relief valve increases linearly from 0 to 1.6 MPa. In addition, the valve remains open while the pump flow rate is set to 240 L/min. The static fluid resistance is given by the pressure difference-flow data of the 3 mm pilot valve. The main spool spring stiffness is 10 N/mm. The pressure difference-flow data (obtained by the test) of the 1.2, 1.5, 1.8 and 2 mm throttle nozzles are assigned to the liquid resistance.
It can be seen from Fig. 11 that, before opening, the pressure at position A increases synchronously with the relief valve setting pressure. However, once opened, it declines sharply. The value at the inflection point is the minimum opening pressure. The opening pressure is about 0.16 MPa when using a 1.2 mm throttle nozzle, 0.19 MPa with a 1.5 mm throttle nozzle and 0.225 MPa with a 1.8 mm throttle nozzle. The minimum opening pressure increases to 0.23 MPa when using a 2 mm throttle nozzle. If the throttle nozzle is further enlarged, the opening pressure rises further until the valve cannot be opened. In order to obtain a smaller opening pressure while retaining anti-blocking ability, a 1.5 mm diameter throttle nozzle is selected.
Fig. 11. Comparisons of opening pressure of different nozzle diameters
As a result of the main spool having a tail cone, it is necessary to observe the pressure at inlet B. Fig. 12 shows the pressure curves of positions A and B when using a 1.5 mm throttle nozzle. When the pressure is 0.13 MPa, the pressure at B begins to rise, and the pressure at A continues to increase with the increasing relief valve setting. When the pressure exceeds 0.13 MPa, the spool is slightly opened while the relief valve is still working; however, the flow through the main spool is small, so the valve is in a non-effective open state. The setting pressure of the relief valve continues to rise. When the spool opening pressure of 0.19 MPa is reached, the relief valve closes. Thus, all the liquid passes through the main spool, the spool travel is further enlarged, and a pressure drop appears at the valve orifice, which decreases until stability is attained. The pressure difference between positions A and B is the pressure drop produced by the plane valve. When the spool is slightly opened, the pressure drop at the plane valve is greater. When the relief valve is closed, the pressure drop at the plane circular throttle orifice is less than 0.02 MPa.
Stable Pressure
The pump flow is set to 240 and 400 L/min, the relief valve pressure to 1.6 MPa and the spring stiffness to 10 N/mm. The simulation time is 2 s. Then, the steady pressure at position A is observed.
Fig. 13 shows that the greater the flow and the orifice, the greater the pressure drop at position A. When the orifice is 1.2 mm and the flow is 240 L/min, the pressure drop is a minimum of 0.051 MPa. When the orifice is 2 mm and the flow is 400 L/min, the pressure drop is a maximum of 0.095 MPa. The pressure drop changes within a small range for each combination of parameters, which satisfies the requirements of low pressure and large flow.
The Dynamic Response Characteristics
The pressure of the relief valve is set to 1.6 MPa, the flow to 240 L/min and the spring stiffness to 10 N/mm. An on-off signal is given to the reversing valve, and the opening and closing times of the spool are studied. In order to measure and compare conveniently, the pressure at position A is taken as a reference (the valve is considered closed when it reaches 1.6 MPa). The response times of throttle nozzles of different diameters (1.2, 1.5, 1.8 and 2 mm) are compared in Fig. 14; the blue line represents the input signal of the pilot valve, with the high position representing 'on' and the zero position representing 'off'. The smaller the throttle nozzle, the longer the closing time: the closing time is 2.6 s with a 1.2 mm throttle nozzle and reduces to 0.75 s with a 2 mm throttle nozzle. The opening time is within 0.5 s and is hardly affected by the throttle nozzle.
The effects of different fluid supply pressures on the opening and closing characteristics are shown in Fig. 15. The throttle nozzle is 1.5 mm in diameter.
In Fig. 15, it can be seen that the response curves of the opening and closing processes overlap. The closing time is about 1.2 s and the opening time is less than 0.5 s, and the opening (closing) times remain close to each other at different supply pressures. Furthermore, the fluctuation of the fluid supply pressure has little effect on the overall response characteristics, i.e., this valve has good stability at low pressure. The principle of the valve group test system is shown in Fig. 16. The system pressure can be adjusted by the manual throttle valve (2). The flow is 20 m³/h, the delivery head is 162 m, the rotation rate is 2900 r/min, and the power is 22 kW.
The pressure, flow and voltage signals are all collected with a data acquisition card at a sampling frequency of 1000 Hz. In order to prevent impurities from blocking the main valve and pilot valve orifices, a filter (3) is installed in the system.
Opening Pressure
After adjusting the opening of the throttle valve (16-2) to maximum, the supply pressure reaches its minimum; the pilot valve is kept in the energized state and is therefore open. The throttle valve is then adjusted slowly so as to gradually increase the pressure, and the change of pressure and flow is observed. When the liquid flows out from the discharging valve, the opening pressure is at its minimum. When the valve opens, the pressure decreases; therefore, the opening pressure can be judged from the recorded inflection point of the pressure curve. The entrance pressure change of the filling valve is shown in Fig. 17. When the pressure is between 0.22 MPa and 0.23 MPa, a significant pressure drop appears, which means the valve is opening. Therefore, the opening pressure of the filling valve is approximately 0.22 MPa.
Response Characteristics
The inlet pressure is adjusted to 0.8, 1.2 and 1.5 MPa. The pressure change during the opening and closing of the spool is studied under these different supply pressures; these are the response characteristics. The opening and closing processes are shown in two sets of images: Fig. 18 shows the opening (left) and closing (right) processes under the different inlet pressures.
In the opening process, the solenoid pilot spool opens immediately after the solenoid pilot valve is electrified. The inlet of the main spool shows small pressure fluctuations following the sudden change of the solenoid valve; this fluctuation is the stabilization process of the pilot spool. As the main spool opens, the pressure declines rapidly. When the spool attains its maximum opening, an inflection point occurs, after which the pressure declines gently. This gentle decline is the process by which the pump re-establishes a new pressure balance. Therefore, the opening process runs from the electrified point to the pressure inflection point in Fig. 18a, b and c.
In the closing process, with the pilot valve closed, a pressure pulse appears first, and then the pressure rises after a flat section. A rapid decline and fluctuation appear after the high point, and then the pressure rises gently. After the spool is completely shut down, the pressure cannot immediately return to its value before opening; the gentle rise after the pressure decline is a new pressure balance being re-established by the pump. The fluctuation at the high point is the impact and liquid return caused by the spool closing completely. Thus, the closing process runs from the switching point to the highest pressure point in Fig. 18a, b and c.
The opening process takes between 0.3 and 0.4 s. The bigger the inlet pressure, the shorter the opening time; however, the change in amplitude is small, which is basically consistent with Fig. 15. The closing process takes around 1 s and contains two stages, flat and rising, whose trend and duration are essentially identical to the AMESim simulation results in Fig. 15.
In Fig. 18, once the spool opens, it remains open for a long period. Before closing, the pressure is in a stable condition, which reflects the low-pressure flow characteristic. At the different supply pressures of 0.8, 1.2 and 1.5 MPa, the stable flows are, respectively, 140, 160 and 190 L/min. Before closing, the pressure is maintained at approximately 0.1 MPa. When the flow is 190 L/min, the pressure loss is a minimum of about 0.07 MPa; this value is close to the pressure drop at 240 L/min in the simulation. When the flow is low, the flow force is also small, so a larger pressure difference is needed to overcome the spring force. There will then be a larger pressure drop, but the pressure remains in a low state.
The opening pressure is 0.22 MPa. When the flow is 190 L/min, the stable pressure is 0.07 MPa. Meanwhile, the opening time is about 0.3 to 0.4 s.
The closing time is about 1 s. The circulating liquid entrance response time is about 1.1 s.
The test results show that the designed solenoid valve possesses good low-pressure characteristics and rapid response characteristics.
CONCLUSIONS
Based on the logic relations of the filling valve, circulating valve and discharging valve, we designed the pilot-operated solenoid valve group. The radial seal of the main valve adopts a co-axial seal (Glyd ring), and the end-face seal adopts a plane soft seal, in order to adapt to the characteristics of the water medium. Two-stage throttling with the plane and cone structure can reduce the impact during the opening and closing processes. The simulation model of the filling valve is established in AMESim, and the influence of the liquid damping on the static and dynamic characteristics of the control valve is studied. The simulation results show that the response time is decided by the diameter of the throttle nozzle and the spring stiffness (the bigger the throttle nozzle diameter and the spring stiffness, the faster the response), with little influence from the supply pressure. The opening pressure and the stable working pressure of the single valve are both small, which satisfies the demand for low pressure and high flow.
Fig. 2. Water hydraulic system for the pressure-dividing test. Four throttle nozzles of 2, 1.8, 1.5 and 1.2 mm were matched with a 3 mm pilot valve, and a 1.2 mm diameter throttle nozzle was matched with a 1.5 mm pilot valve; the matching effects were then compared.
Fig. 8. Hydraulic system of the valve-control hydrodynamic coupling. It comprises a control system with an electro-hydraulic valve group as the main part.
Fig. 9. Structure of the main spool: a) pure plane structure; b) tail cone structure.
Fig. 12. Comparison of opening pressures at the two stages.
ACKNOWLEDGMENTS
This work is supported by the Specialized Research Fund for the Doctoral Program of Higher Education (20130095110012).
Table 1. List of parameters of the tail cone structure.
|
2018-12-02T08:24:52.762Z
|
2014-10-15T00:00:00.000
|
{
"year": 2014,
"sha1": "9772f3caea1aa07f89f375b73f62c524e9d46d0d",
"oa_license": "CCBY",
"oa_url": "https://www.sv-jme.eu/?id=3143&ns_articles_pdf=/ns_articles/files/ojs/1688/public/1688-10489-1-PB.pdf",
"oa_status": "HYBRID",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "9772f3caea1aa07f89f375b73f62c524e9d46d0d",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Engineering"
]
}
|
119178040
|
pes2o/s2orc
|
v3-fos-license
|
Non-perturbative approach for curvature perturbations in stochastic-$\delta N$ formalism
In our previous paper, we proposed a new algorithm to calculate the power spectrum of the curvature perturbations generated in the inflationary universe with use of the stochastic approach. Since this algorithm does not need a perturbative expansion with respect to the inflaton fields on super-horizon scales, it works even in highly stochastic cases. For example, when the curvature perturbations are very large or the non-Gaussianities of the curvature perturbations are sizable, the perturbative expansion may break down, but our algorithm still enables the calculation of the curvature perturbations. We apply it to two well-known inflation models, chaotic and hybrid inflation, in this paper. Especially for hybrid inflation, where the potential is very flat around the critical point and the standard perturbative computation is problematic, we successfully calculate the curvature perturbations.
Introduction
Recently, the BICEP2 collaboration has discovered the primordial B-mode polarization of the cosmic microwave background (CMB) [2]. This strongly supports the inflationary paradigm, the accelerating expansion in the early universe. In the standard inflationary paradigm, all of the observed fluctuations including the temperature anisotropy of CMB and the seeds of the large scale structures are assumed to originate in the quantum fluctuations of the inflaton, the scalar field which drives inflation.
Though the primordial curvature perturbations generated during inflation are quite small, ∼ 10^{-5} on the CMB scale [3], they are not necessarily so small on smaller scales, and there may exist large curvature perturbations which lead to the formation of curious astronomical objects like primordial black holes (PBHs) [4][5][6] and ultracompact minihalos (UCMHs) [9][10][11]. Moreover, as we will mention later, hybrid inflation can generate large curvature perturbations with a peak profile around the scale corresponding to the critical point, because the inflaton potential is very flat around that point. Therefore, it is quite interesting to consider large curvature perturbations.
The generated curvature perturbations are often calculated by a perturbative approach with respect to the inflaton field φ. However, this approximation is not good for analyzing large curvature perturbations, because the effects of the higher-order perturbations may not be negligible in such a case. In fact, in multi-field inflation, a field fluctuation can become so large compared to its homogeneous value that the perturbative expansion breaks down. In such a case, we need a non-perturbative approach.
In our previous paper [1], we proposed a non-perturbative method which combines the stochastic approach [12][13][14][15][16][17][18][19][20][21][22][23] and the δN formalism [24][25][26][27][28]. In the stochastic approach, one considers the inflaton field coarse-grained over a super-horizon scale. Since fluctuation modes of the inflaton which cross the horizon and are classicalized continuously contribute to the coarse-grained field, their effects enter the equation of motion (e.o.m.) as statistical random noise. The duration of inflation, namely the e-folds N, fluctuates because of this noise, and our method connects it to the gauge-invariant curvature perturbation ζ with use of the δN formalism in a non-perturbative manner. In this paper, in order to show the validity of our method (which we call "the stochastic-δN formalism"), we apply it to two inflation models, chaotic inflation [29] and hybrid inflation [30,31]. In the latter case, multiple fields are involved and high stochasticity is realized. In particular, we successfully calculate the curvature perturbations generated around the critical point and during the waterfall phase in hybrid inflation for parameters where the waterfall phase continues for more than 10 e-folds.
It should be noted that the original type of hybrid inflation is rejected by the Planck observation of the CMB [3] because this model predicts a blue-tilted spectrum. Moreover, the recent report of the B-mode detection by the BICEP2 collaboration suggests large-field (or super-Planckian-field) inflation models, though hybrid inflation is generally a small-field model. However, we do not describe these topics in detail in this paper. Instead, we consider hybrid inflation just as a toy model of multi-field inflation to show how to use the stochastic-δN method.
The rest of the paper is organized as follows. In section 2, we quickly review the standard linear perturbation theory. In section 3, we explain the stochastic-δN formalism briefly. In section 4, we demonstrate the stochastic-δN in chaotic inflation, and then, in section 5, we calculate the power spectrum of the curvature perturbations in hybrid inflation. Finally, section 6 is devoted to the conclusion.
Linear perturbation theory
First let us review the standard linear perturbation theory for comparison with the stochastic-δN.
According to the Einstein equation, an accelerating expansion of space-time can be brought about by the potential energy of a homogeneous scalar field (called the "inflaton field"). If the inflaton field slowly rolls down its potential, inflation can continue for a long time. In the isotropic and homogeneous FLRW space-time, the e.o.m. of the scalar field φ is given by

φ̈ + 3Hφ̇ + V_φ = 0. (2.2)

Here H = ȧ/a is the Hubble parameter, a dot represents a time derivative and V_φ denotes the partial derivative ∂V/∂φ. For a slow-rolling (φ̈ ≪ V_φ) and homogeneous scalar field, one can approximate the e.o.m. as

3Hφ̇ ≃ −V_φ. (2.3)

If the inflaton rolls down so slowly that the kinetic energy φ̇²/2 can be neglected compared to the potential energy V, the Friedmann equation leads to a nearly constant Hubble parameter, H² ≃ V/(3M_p²), and then the scale factor a(t) grows exponentially, a(t) ∝ e^{∫H dt}. The exponent N = ∫H dt is called the e-folds and is often used as a dimensionless time variable.
To make the slow-roll condition clear, the following slow-roll parameters are often used:

ε_φ = (M_p²/2)(V_φ/V)²,  η_φφ = M_p² V_φφ/V, (2.4)

where M_p denotes the reduced Planck mass (8πG)^{-1/2} ≃ 2.4 × 10^{18} GeV. Then the slow-roll condition is given by

ε_φ ≪ 1,  |η_φφ| ≪ 1. (2.5)

The inflaton field is decomposed into the homogeneous part and the perturbation part:

φ(t, x) = φ_0(t) + δφ(t, x).

Assuming the perturbation δφ is much smaller than the zero mode φ_0, the linearized e.o.m. for the Fourier mode δφ_k is obtained from eq. (2.2) as

δφ̈_k + 3Hδφ̇_k + (k²/a² + V_φφ) δφ_k = 0. (2.6)

By approximating V_φφ by a constant mass m² and adopting the Bunch-Davies vacuum as the initial condition of inflation, one finds the solution of this equation as

δφ_k ∝ (−τ)^{3/2} H^{(1)}_ν(−kτ), (2.7)

where τ is the conformal time, H^{(1)}_ν is the Hankel function of the first kind and ν is defined as

ν² = 9/4 − m²/H². (2.8)

Here the inflaton mass m should be negligible compared to the Hubble parameter for slow-roll inflation. One can obtain the power spectrum, which is the two-point correlator of the inflaton field in Fourier space, as

P_φ(k) = (k³/2π²) |δφ_k|². (2.9)

With use of the asymptotic form of the Hankel function,

H^{(1)}_ν(x) → −(i/π) Γ(ν) (x/2)^{−ν}, Re ν > 0 and x → +0, (2.10)

it is shown that the power spectrum gets frozen to a constant on the super-horizon scale,

P_φ ≃ (H/2π)² 2^{2ν−3} (Γ(ν)/Γ(3/2))² (k/aH)^{3−2ν}, (2.11)

which reduces to (H/2π)² for ν = 3/2. The perturbations of the duration of inflation due to these frozen quantum fluctuations cause the metric curvature perturbations. In fact the scale factor, which is the spatial part of the metric, is proportional to e^N, and therefore the fluctuation of the e-folds, δN, is nothing but the metric perturbation. According to the δN formalism [24][25][26][27][28], the gauge-invariant curvature perturbation ζ can be calculated up to first order in the perturbation of φ as

ζ ≃ δN = (dN/dφ) δφ, (2.12)

where N(φ) denotes the e-folds taken from φ to the value φ_f at the end of inflation and can be obtained from the slow-roll eq. (2.3) as

N(φ) = (1/M_p²) ∫_{φ_f}^{φ} (V/V_φ) dφ̃. (2.13)

Thus we obtain the standard result for the power spectrum of the curvature perturbations:

P_ζ = (dN/dφ)² P_φ = V³/(12π² M_p⁶ V_φ²). (2.14)

In this paper, we demonstrate the numerical calculations of the stochastic-δN approach, which is a more general and efficient algorithm, especially when the perturbative expansion (2.12) breaks down.
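As a quick numerical illustration of the linear-theory result (2.14), the short Python snippet below evaluates P_ζ for the quadratic potential used later in section 4; the mass m = 0.01 M_p is the value quoted there, and units of M_p = 1 are assumed.

import numpy as np

M_P = 1.0  # reduced Planck mass (we work in units of M_p)

def P_zeta_linear(phi, m=0.01):
    """Linear-theory spectrum (2.14) for V = m^2 phi^2 / 2."""
    V = 0.5 * m**2 * phi**2
    V_phi = m**2 * phi
    return V**3 / (12.0 * np.pi**2 * M_P**6 * V_phi**2)

phi_50 = np.sqrt(4 * 50 + 2) * M_P   # field value 50 e-folds before the end
print(P_zeta_linear(phi_50))          # ~ 4e-3 for m = 0.01 M_p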
Stochastic-δN formalism
We briefly describe the stochastic formalism [12][13][14][15][16][17][18][19][20][21][22][23] and our algorithm [1] in this section. In the stochastic formalism, not the homogeneous field but the field coarse-grained over a super-horizon scale is treated as the background field. In this paper, we call this coarse-grained field the IR part, which can be defined as

φ_IR(t, x) = ∫ (d³k/(2π)³) θ(ǫaH − k) φ_k(t) e^{ik·x}. (3.1)

Here θ denotes the step function and ǫ is a positive constant parameter. Due to the step window function in eq. (3.1), the IR part contains only the k < ǫaH modes. With tiny ǫ, the wavelengths in the IR part are much longer than the horizon scale (aH)^{-1}. In this paper, we set this ǫ parameter to 0.01. The IR part is assumed to be a classical field, and since the horizon scale (aH)^{-1} becomes shorter and shorter, the sub-horizon modes come into the IR part and get classicalized successively. At that time, the field value of the classicalized mode follows a Gaussian distribution whose variance is equal to the power spectrum. Because of this effect, the IR part follows the Langevin equation, which is the equation of motion with white noise. Taking account of only the mass term in the potential for the sub-horizon modes, the e.o.m. of the IR part is written as [40]

φ̇ = π + √(H P_φ) ξ_R ,
π̇ = −3Hπ − V_φ(φ) − H √(H P_φ) (q_R ξ_R − q_I ξ_I) , (3.2)

where ξ_R and ξ_I represent the white noise and q_R and q_I are the real and imaginary parts of the following function q_ν(ǫ):

q_ν(ǫ) = 3/2 − ν + ǫ H^{(1)}_{ν−1}(ǫ)/H^{(1)}_ν(ǫ). (3.3)
We will describe these terms in detail below. Note that we omit the subscript IR for simplicity. The terms with ξ_R and ξ_I denote the effect that the mode crossing the horizon joins the IR part; without these terms, eqs. (3.2) coincide with eq. (2.2). ξ_R and ξ_I correspond to the classicalizations of φ and its momentum conjugate. However, since the true conjugate is not φ̇ but the conformal time derivative of aφ, both ξ_R and ξ_I contribute to the dynamics of π. ξ_R and ξ_I are independent zero-mean Gaussian random variables, and their amplitudes are renormalized as follows:

⟨ξ_R(t, x) ξ_R(t′, x′)⟩ = ⟨ξ_I(t, x) ξ_I(t′, x′)⟩ = δ(t − t′) sin(ǫaHr)/(ǫaHr), (3.4)
⟨ξ_R(t, x) ξ_I(t′, x′)⟩ = 0, (3.5)

where r = |x − x′|.
The reason why there is no correlation over different times is as follows. Since we choose the step function as the window function, only the mode k = ǫaH joins the IR part at each time. Therefore, for example, ξ_R can formally be written as

ξ_R(t, x) ∝ ∫ (d³k/(2π)³) δ(k − ǫaH) Re[φ_k(t) e^{ik·x}].

The correlator of ξ_R is proportional to ⟨φ_k φ_{k′}⟩ ∝ δ(k − k′), but due to the delta function δ(k − ǫaH), it is proportional to δ(ǫa(t)H(t) − ǫa(t′)H(t′)) ∝ δ(t − t′). Similarly, ξ_I also has no correlation over different times. The spatial correlation decreases by the factor sin(ǫaHr)/(ǫaHr). Since this factor is oscillating and we are interested only in the coarse-grained field, it can be approximated by the step function θ(1 − ǫaHr). In other words, the noise approximately has no correlation beyond the horizon scale. P_φ is evaluated at the horizon exit k = ǫaH in eq. (3.2), and the value (H/2π)² is often used, though one should be careful in the massive scalar case, as we will mention in section 5. q_R and q_I are the real and imaginary parts of the function q_ν(ǫ) (3.3), as mentioned above. They represent the time variation of P_φ. Indeed, with the slow-roll approximation ν ≃ 3/2 + η_φφ, it is shown from eq. (2.10) that q_ν(ǫ) is first order in η_φφ and second order in ǫ, and hence q_ν is negligible. Moreover, in a highly massive case, namely ν < 3/2, the power spectrum will be suppressed to O(ǫ^{3−2ν}) by the steep potential, so the q_ν terms are small in either case and we omit them in the numerical calculation.
Figure 1. At first, φ_IR at the three points develops in the same way, because they are in the same Hubble patch and they receive the same white noise. However, φ_IR at the magenta point first, and then φ_IR at the blue point, deviates from φ_IR at the yellow point. This is because when these spatial points exit from the Hubble patch of the yellow point, their white noises, and hence their time evolutions of φ_IR, become independent of those at the yellow point.

In summary, the super-horizon coarse-grained field is treated as the background field in the stochastic formalism, and it follows the Langevin eqs. (3.2) including the white noise ξ_R and ξ_I. They have white spectra in time and no correlation beyond the horizon. The
comoving horizon scale (aH)^{-1} decreases as time goes on (in other words, the physical distance a/k increases), and therefore the background fields at two spatial points evolve together until their horizon crossing k = ǫaH and then develop independently (see figure 1). In principle, solving these Langevin eqs. over all spatial points, we can obtain the coarse-grained curvature perturbation ζ. However, it is hard to solve the Langevin eqs. for all space points simultaneously, considering the branching off of each point mentioned above, both analytically and numerically. Therefore, we use the stochastic-δN algorithm proposed in ref. [1].
The stochastic-δN is an algorithm to calculate the curvature perturbations without a perturbative expansion with respect to the inflaton field, taking advantage of the δN formalism [24][25][26][27][28]. In the stochastic formalism, the background field evolves, receiving the horizon-scale noise. Therefore, the duration of inflation for each Hubble patch, namely the e-folds N, is automatically fluctuated. According to the δN formalism, these fluctuations of the e-folds, δN, are nothing but the gauge-invariant curvature perturbations ζ. The power spectrum of the curvature perturbations is just the correlator of δN.
The key obstacle to calculating the perturbations is, as mentioned above, the difficulty of solving the Langevin eqs. over all spatial points. On the contrary, the evolution at one space point can be calculated easily by a numerical simulation. The stochastic-δN formalism cleverly extracts the information on correlations from the one-point evolutions. Let us describe our algorithm below. 1. Choose an "initial" value φ_i for the inflaton field, from which the calculation is started. 2. Integrating the Langevin equations from that "initial" value numerically, obtain the e-folds N which the inflaton field takes until it reaches the value φ_f where inflation ends. Since the Langevin equations include random noise, the e-folds vary in each calculation. Each e-folds value corresponds to the duration of inflation in some Hubble patch, and their fluctuations represent the super-horizon coarse-grained curvature perturbations. Therefore, reiterating the calculations, we can get the spatial mean and variance of the e-folds, namely ⟨N⟩ and ⟨δN²⟩.
3. Next, reiterate the above calculations changing φ_i and obtain other sets of ⟨N⟩ and ⟨δN²⟩. Thus, we finally obtain ⟨δN²⟩ as a function of ⟨N⟩.
4. Here, recall that the power spectrum of the curvature perturbations is defined as the Fourier mode of the correlator of δN as follows:

⟨δN(x) δN(x + r)⟩ = ∫ (dk/k) P_ζ(k) sin(kr)/(kr). (3.6)
Inversely, the variance of the e-folds can be described as the inverse Fourier mode of the power spectrum in the limit of r → 0:

⟨δN²⟩ = ∫_{k_i}^{k_f} (dk/k) P_ζ(k), (3.7)
with the integration between the Hubble scale at the beginning of inflation, k_i = ǫaH|_i, and that at the end of inflation, k_f = ǫaH|_f, under the assumption that every fluctuation is made during inflation. Here we also used the approximation that

ln(k_f/k_i) ≃ ⟨N⟩.

This approximation is good if the curvature perturbation does not exceed unity and the Hubble scale ǫaH does not fluctuate much spatially. Since the left-hand side of eq. (3.7) is already obtained as a function of ⟨N⟩ in step 3, we can get the power spectrum by differentiating both sides with respect to ⟨N⟩:

P_ζ(k) = (d/d⟨N⟩) ⟨δN²⟩ |_{⟨N⟩ = ln(k_f/k)}. (3.8)

In the single-field case, this procedure is enough to obtain the power spectrum, and we showed analytically that the result is consistent with that of the standard linear perturbation theory in the slow-roll limit in the previous paper [1]. However, we should be careful in extending it to the multi-field case, like hybrid inflation. If there is only one inflaton, the "initial" value φ_i and ⟨N⟩ have a one-to-one correspondence, and ⟨δN²⟩ is determined once φ_i is given. Thus ⟨δN²⟩ is uniquely given as a function of ⟨N⟩. However, when the inflaton field space becomes multi-dimensional, the one-to-one correspondence between a set of "initial" field values {φ_i, ψ_i, ...} and ⟨N⟩ no longer exists, because different sets of "initial" values can lead to the same value of ⟨N⟩. Then, although ⟨δN²⟩ can still be calculated for each "initial" value, the functional form of ⟨δN²⟩(⟨N⟩) is not unique but depends on the trajectory in the inflaton field space along which the inflatons go. We can also rephrase it as follows. Both ⟨N⟩ and ⟨δN²⟩ can be computed if an arbitrary set of "initial" values of the inflatons is given. Therefore, one can consider that a pair of ⟨N⟩ and ⟨δN²⟩ values is assigned to every point in the inflaton field space, like potentials. In single-field cases, the field space is one-dimensional and the same pairs of ⟨N⟩ and ⟨δN²⟩ are always chosen. However, in multi-field cases, the trajectory in the field space is diverse and different pairs of ⟨N⟩ and ⟨δN²⟩ can be selected depending on the trajectory. Remembering that the power spectrum is obtained from ⟨δN²⟩(⟨N⟩) (see eq. (3.8)), one can see that P_ζ depends on the trajectory. Furthermore, it should be noted that many different trajectories are actually realized, depending on the spatial points within our observable universe. Therefore, we should take a statistical average over the various trajectories to obtain P_ζ.
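Steps 1-4 can be sketched compactly in code for the single-field case, where the trajectory dependence just described does not arise. In the slow-roll limit, eqs. (3.2) reduce to the familiar dφ/dN = −M_p²V_φ/V + (H/2π)ξ_N with ⟨ξ_N(N)ξ_N(N′)⟩ = δ(N − N′); the example potential, step size and ensemble size in the Python sketch below are illustrative choices, not the settings used in this paper.

import numpy as np

M_P = 1.0
def V(phi):   return 0.5 * 0.01**2 * phi**2   # example quadratic potential
def V_p(phi): return 0.01**2 * phi

def hubble(phi):
    return np.sqrt(V(phi) / (3.0 * M_P**2))

def efolds_one_patch(phi_i, phi_f, dN=0.02, rng=np.random):
    """Step 2: integrate the slow-roll Langevin eq. until phi reaches phi_f."""
    phi, N = phi_i, 0.0
    while phi > phi_f:
        drift = -M_P**2 * V_p(phi) / V(phi)
        noise = hubble(phi) / (2.0 * np.pi) * np.sqrt(dN) * rng.standard_normal()
        phi += drift * dN + noise
        N += dN
    return N

def mean_and_var(phi_i, phi_f, n_real=500):
    Ns = np.array([efolds_one_patch(phi_i, phi_f) for _ in range(n_real)])
    return Ns.mean(), Ns.var()           # <N> and <dN^2>

# Steps 3-4: scan phi_i, then P_zeta = d<dN^2>/d<N>, Eq. (3.8).
phi_f = np.sqrt(2.0) * M_P
means, variances = zip(*[mean_and_var(np.sqrt(4 * N + 2) * M_P, phi_f)
                         for N in (40, 45, 50, 55)])
P_zeta = np.gradient(np.array(variances), np.array(means))
print(P_zeta)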
Because of these issues, to obtain ⟨δN²⟩ as a function of ⟨N⟩, one needs to take a statistical average over the solutions of the Langevin equations, namely over the trajectories of the inflatons which are realized in the observable universe. Since our observable universe was in one Hubble patch at about 60 e-folds before the end of inflation, the diverse solutions should share the same set of field values at that time. Specifically, we propose the following procedure.
i. Set the initial condition corresponding to the time of N ∼ 60.
ii. Solving the Langevin equations numerically from this initial value repeatedly, obtain many solutions φ^I(N), where the superscript I labels the different inflatons. These solutions are used as trajectories in the inflaton field space and are called sample paths.
iii. For one sample path, taking an "initial" value φ^I_i on that path, one power spectrum can be obtained by the algorithm mentioned above. Other power spectra can be obtained for other sample paths, and the true power spectrum averaged over our observable universe is obtained by averaging these power spectra.
Let us describe in more detail why we should take an average over the sample paths. Since these sample paths branch off at N ∼ 60, their corresponding spatial points are in the same current Hubble patch, namely the observable universe. The variance ⟨δN²⟩ for a sample path is the value averaged over the spatial region just around the spatial point corresponding to that sample path, because the ⟨δN²⟩ are computed from solutions branching off from the sample path. Therefore, the variance averaged over the observable universe is well approximated by averaging these variances again over many sample paths. See also figure 2.
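The averaging of steps i-iii can be illustrated with a toy model. The Python sketch below uses two decoupled quadratic fields with assumed masses (deliberately not the hybrid potential of section 5) purely to show the structure: sample paths branch from a common condition, a variance is computed at a point on each path, and the results are averaged.

import numpy as np

M_P, M1, M2 = 1.0, 0.01, 0.02        # assumed toy-model masses
rng = np.random.default_rng(1)

def V(f):   return 0.5 * M1**2 * f[0]**2 + 0.5 * M2**2 * f[1]**2
def V_p(f): return np.array([M1**2 * f[0], M2**2 * f[1]])

def step(f, dN):
    """One Euler-Maruyama step of the slow-roll multi-field Langevin eq."""
    H = np.sqrt(V(f) / (3.0 * M_P**2))
    return f - M_P**2 * V_p(f) / V(f) * dN \
             + H / (2.0 * np.pi) * np.sqrt(dN) * rng.standard_normal(2)

def efolds_from(f, dN=0.02):
    N = 0.0
    while np.sum((M_P * V_p(f) / V(f))**2) / 2.0 < 1.0:   # run until eps = 1
        f, N = step(f, dN), N + dN
    return N

def variance_from(f, n_real=200):
    Ns = [efolds_from(f.copy()) for _ in range(n_real)]
    return np.var(Ns)

# Step ii: paths branch from a common condition; step iii: average the
# variances evaluated at a point reached along each path.
f0 = np.array([12.0, 8.0])
vars_on_paths = []
for _ in range(5):                        # a few sample paths
    f = f0.copy()
    for _ in range(500):                  # evolve 10 e-folds along the path
        f = step(f, 0.02)
    vars_on_paths.append(variance_from(f))
print(np.mean(vars_on_paths))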
Figure 2. Each sample path corresponds to the inflaton dynamics at some spatial point in the observable universe. The variance as a function of ⟨N⟩ can be obtained from the algorithm 1.-4. for each sample path, but it is averaged only around the corresponding sample path. Therefore, the variance averaged over the observable universe is approximated by averaging these variances again.

The power spectrum of the super-horizon coarse-grained curvature perturbations can be calculated by the above algorithms even in a case where the perturbative expansion with respect to some inflaton field is invalid, because the above algorithm does not need a perturbative expansion with respect to the inflaton field on super-horizon scales. Since the result depends on the initial condition at N ∼ 60, it is required that the inflaton dynamics is on some attractor at that time to make the model predictive, but it is not needed that the inflatons always
trace an attractor from the beginning to the end of inflation. In the subsequent two sections, we will apply the stochastic-δN to two well-known inflation models, chaotic inflation and hybrid inflation.
Chaotic inflation
In this section, we apply the stochastic-δN to chaotic inflation as a demonstration. Chaotic inflation [29] is the simplest single-large-field inflation. The potential is given by the mass term V(φ) = ½m²φ² or the quartic term V(φ) = (λ/4)φ⁴. More generally, the case where the potential is described as V(φ) = λφⁿ/(n M_p^{n−4}) is also called chaotic inflation. Here we consider the mass-term type chaotic inflation model with V(φ) = ½m²φ². To see the dynamics of chaotic inflation, let us calculate the slow-roll parameters, eq. (2.4). For the mass-term potential, these parameters read

ε_φ = η_φφ = 2M_p²/φ². (4.1)

Therefore, if the inflaton field has a super-Planckian value φ ≫ M_p, inflation takes place. Just about a Planck time after the beginning of the universe (which may be called the "chaotic" phase), the inflaton can have approximately the Planck energy density through quantum effects:

V(φ) = ½m²φ² ∼ M_p⁴. (4.2)

Accordingly, if the inflaton mass m is sufficiently smaller than the Planck mass, the inflaton field naturally gets a super-Planckian value, φ ∼ √2 M_p²/m ≫ M_p.

Figure 3. In the plot in the left panel, each point corresponds to one "initial" value φ_i, and we make 800,000 realizations for each "initial" value. In the right panel, the red line represents the result of the standard linear perturbation theory (2.14). It can be read from this figure that the stochastic-δN is quite consistent with the linear perturbation theory in single-field inflation, as we proved in the previous paper [1]. Note that we evaluate the errors for both the variance and the power spectrum, but that for the variance is so small that the error bars cannot be seen.
Thus chaotic inflation is free from the initial condition problem. When the inflaton rolls down below φ_f = √2 M_p, the slow-roll parameters (4.1) exceed unity and inflation ends. The solution of the slow-roll eq. (2.3) is given by

φ(N) = √(4N + 2) M_p. (4.3)

Here N denotes the e-folds taken from φ(N) to φ_f = √2 M_p. Then let us calculate the power spectrum of the curvature perturbations with use of the stochastic-δN. Since chaotic inflation is a single-field model, there is no difficulty, and all we have to do is obtain the variance of the e-folds ⟨δN²⟩ as a function of the mean of the e-folds ⟨N⟩. In the left panel of figure 3, we show the relation between ⟨N⟩ and ⟨δN²⟩ with the inflaton mass m = 0.01M_p.
Differentiating the plot in the left panel, we can obtain the power spectrum shown in the right panel of figure 3. The red line represents the result of the standard linear perturbation theory (2.14). As we showed in ref. [1], the result of the stochastic-δN is quite consistent with that of the linear perturbation theory in single-field inflation. That is because, in single-field inflation, the second and higher-order terms in the perturbative expansion of ζ are suppressed by the slow-roll parameters and then the linear approximation of ζ is good enough.
Note that the errors to the power spectrum are relatively large, even though those to the variance are so small that the error bars cannot be seen in figure 3. In the stochastic-δN , we do not directly calculate the power spectrum but obtain the variance first, then the errors ∆ δN 2 are proportional to the variance, ∆ δN 2 ∝ δN 2 . 7 Since the power spectrum is connected to the variance by differentiation, which is a linear operator, the errors are propagated linearly and those of the power spectrum are also proportional to the variance. Indeed the power spectrum which is obtained by the finite difference of the variance is given by where ∆ δN 2 i denotes the error of δN 2 i and we set N i+1 − N i = 1. Therefore the error of the power spectrum is ∆ δN 2 i+1 +∆ δN 2 i ≃ 2∆ δN 2 i . On the other hand, if the power spectrum is a nearly scale-invariant, the variance can be approximated by δN 2 ∼ N P ζ from eq. (3.7). Hence the errors of the power spectrum are relatively sizable for a large N . Because of this fact, the stochastic-δN approach is not so adequate to calculate the largescale power spectrum. In contrast, to calculate the small-scale power spectrum, it is quite useful. As we will see in the next section, the stochastic-δN approach enables the calculation of the large peak profile on small scales.
Overview of the original type
Hybrid inflation [30,31] is an intriguing inflation model combining chaotic inflation and new inflation. This model does not need a super-Planckian field value, like new inflation, and moreover, the initial condition problem is softened compared to new inflation, in a similar way to chaotic inflation. Extensions to supersymmetric (SUSY) types have also been studied well [34][35][36][37].
In hybrid inflation, there exist two scalar fields: one is the inflaton φ and the other is a waterfall field ψ. The potential of the original type is given by

V(φ, ψ) = Λ⁴[(1 − ψ²/M²)² + φ²/µ² + 2φ²ψ²/(φ_c²M²)], (5.1)

with the model parameters Λ, µ, M and φ_c. The dynamics of this inflation is as follows. For an appropriate initial condition, the ψ field settles down to ψ = 0 due to the ψ⁴ term. Then inflation is driven by the constant potential V₀ = Λ⁴, with φ rolling down slowly due to its mass term.
The key point is that the waterfall mass squared,

m²_ψ = (4Λ⁴/M²)(φ²/φ_c² − 1),

becomes negative when φ rolls down below φ_c. Then the waterfall field rolls down rapidly to the potential minimum (φ, ψ) = (0, ±M) and slow-roll inflation is over. The point (φ, ψ) = (φ_c, 0) is called the "critical point", and the phases before and after the critical point are called the "valley phase" and the "waterfall phase", respectively. Even though the waterfall phase usually ends rapidly, for some parameters it is possible that the waterfall phase lasts for more than 10 e-folds. We consider such a case in this paper, since we are interested in a peak profile in the power spectrum caused by the flatness of the potential around the critical point.
The power spectrum of the curvature perturbations during the valley phase can be calculated easily because, in this phase, ψ settles down to zero and inflation is effectively single-field. At this epoch, the slow-roll parameter ǫ_φ (2.4) reads

ǫ_φ ≃ (M_p²/2)(V_φ/V)² = 2M_p²φ²/µ⁴.  (5.2)

Therefore, the power spectrum (2.14) is written as

P_ζ ≃ V/(24π²M_p⁴ǫ_φ) = Λ⁴µ⁴/(48π²M_p⁶φ²).  (5.3)

On the other hand, the power spectrum around the critical point and during the waterfall phase cannot be solved fully analytically. After φ approaches the critical point, not only φ but also ψ contributes to the inflationary dynamics and the curvature perturbations. Around the critical point (φ_c, 0), the potential is extremely flat in the direction of ψ, as is easily checked from eq. (5.1). Therefore the quantum fluctuations of ψ surpass the zero mode, namely ⟨δψ²⟩ ≫ ψ₀², and the perturbative expansion with respect to ψ breaks down. Many authors have calculated the power spectrum during the waterfall phase for special cases, and Lyth provided a more general treatment for when the linear approximation of the e.o.m. is good [38,39]. This paper gives the full solution using the stochastic formalism. Naively speaking, because of the flat potential, the curvature perturbations will rapidly grow and show a peak profile. In subsection 5.3, we will see that the calculated curvature perturbations indeed show such a peak profile.
Amplitude of noise
Before calculating the power spectrum, we should mention the amplitude of the noise terms. As shown in section 3, the noise terms are proportional to the power spectra of the scalars evaluated at horizon crossing, k = ǫaH. Let us then consider the evolution of the sub-horizon modes. Similarly to eq. (2.6), we linearize the e.o.m. with respect to φ^I_k as

φ̈^I_k + 3H φ̇^I_k + (k²/a²) φ^I_k + V_IJ φ^J_k = 0,  (5.4)

where the superscripts I, J denote φ and ψ. This linearization requires φ^I_IR ≫ φ^I_UV, where φ^I_UV is the sum of the sub-horizon modes k > ǫaH, namely the Fourier integral of φ^I_k over k > ǫaH. If the homogeneous zero mode is much larger than the fluctuations of the inflaton, φ^I_0 ≫ δφ^I, as usual, the linearization is valid because the IR part includes the zero mode. Moreover, even if the zero mode is smaller than the fluctuations, the IR part is generally larger than the UV part because the IR part receives the white noise and has a field value of at least about the Hubble parameter. Therefore, the above linearization is valid in many cases.⁸ Note that the Hubble parameter and the derivatives of the potential are functions of the IR fields φ_IR and ψ_IR evaluated around the spatial point under consideration. However, φ_IR and ψ_IR themselves depend on the past amplitude of the noise terms, and thus it is hard to solve eqs. (5.4) in an exact manner. In this paper, we approximate P_φ and P_ψ by the solution with a constant Hubble parameter and constant scalar masses, eq. (2.9), where the Hubble parameter and the scalar masses m²_I = V_II are evaluated at horizon crossing, k = ǫaH. When m²_I/H² exceeds 9/4 and ν_I = √(9/4 − m²_I/H²) becomes imaginary, the corresponding noise terms are suppressed by ǫ³ and negligible, so we omit them in the numerical calculation.

⁸ Actually, in our calculation, ψ_IR remains zero for a little while because we neglect the noise of ψ while ψ is highly massive, as mentioned below. Therefore, the IR part of ψ is smaller than the UV part, and the linearization of the e.o.m. does not seem to be valid. However, since we consider the case where the field value of φ_IR is much larger than the Hubble parameter (see eq. (5.6)), the higher-order term in V_ψ, ψ³_UV, is negligible compared to the φ²_IR ψ_UV term; the other higher-order term proportional to Λ⁴ is likewise negligible.
It should also be noted that the different scalar fields interact with each other through the mixing term V_IJ, and then the noise terms for the different scalar fields can have non-zero correlations. However, with the parameters considered in this paper, the effective masses of the inflaton and the waterfall field are quite different, so we can neglect the effect of the mixings.⁹

⁹ To take the effect of mixings into account, see appendix B.
Dynamics and power spectrum
Let us then move to the calculation of the power spectrum. In this paper, we consider the original type of hybrid inflation, whose potential is represented by eq. (5.1) with the parameter values listed in eq. (5.6). With these values, it takes about 15 e-folds from the critical point to the end of inflation. The energy scale Λ is determined so that the amplitude of the curvature perturbations during the valley phase satisfies the observed value (P_ζ ∼ (5 × 10⁻⁵)²).¹⁰

Since the q_ν terms in the Langevin equation are neglected, the e.o.m. is written as

dφ/dN = −V_φ/(3H²) + √(P_φ) ξ_φ(N),
dψ/dN = −V_ψ/(3H²) + √(P_ψ) ξ_ψ(N),

where ξ_φ and ξ_ψ are zero-mean independent white noises:

⟨ξ_φ(N)ξ_φ(N′)⟩ = ⟨ξ_ψ(N)ξ_ψ(N′)⟩ = δ(N − N′),  ⟨ξ_φ(N)ξ_ψ(N′)⟩ = 0.

Note that we use the dimensionless e-folds dN = H dt as the time variable instead of the cosmic time t; ξ_R(t) and ξ_{φ,ψ}(N) are connected by the change of variables of the delta function, δ(t − t′) = H δ(N − N′).

The time evolutions of the inflaton and the waterfall field on one sample path with the initial condition (φ_i/M_p, ψ_i/M_p) = (0.1305, 0) are shown in figure 4 as "stochastic". For the sake of comparison, we also plot the solution without noise as "classical", with a tiny but non-zero initial ψ, because a ψ field with ψ_i = 0 would remain zero forever without noise. N = 0 corresponds to the beginning of inflation. The waterfall field ψ remains zero at first, because the mass of ψ is as large as the Hubble parameter (i.e., ν_ψ is imaginary) and we omit the quantum noise, as mentioned in the previous subsection. Subsequently, due to the noise, ψ grows much more rapidly than the classical solution. The inflaton field φ seems not to be affected by the noise term, and its dynamics is almost the same in both the stochastic and the classical case. However, with the parameters of this paper, the end of inflation is determined by the value of ψ. In fact, inflation without noise continues about 20 e-folds longer than stochastic inflation, because the field value of ψ does not grow fast without noise. This shows the importance of the stochastic effects not only for the calculation of the curvature perturbations but also for the background dynamics. Note that, in both cases, the critical point φ_c = 0.13 M_p is reached about 20 e-folds after the beginning of the calculation. In addition, let us check the time developments of the slow-roll parameters. In two-scalar inflation, there are the following five slow-roll parameters:

ǫ_φ = (M_p²/2)(V_φ/V)²,  ǫ_ψ = (M_p²/2)(V_ψ/V)²,
η_φφ = M_p² V_φφ/V,  η_φψ = M_p² V_φψ/V,  η_ψψ = M_p² V_ψψ/V.

¹⁰ Since our goal is not to construct an inflationary model, as mentioned in the footnote of the previous section, there is actually no need to set P_ζ ∼ (5 × 10⁻⁵)².
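Returning to the Langevin dynamics above, a minimal sketch of one Euler-Maruyama step of these coupled equations follows, assuming M_p = 1; the callables V, dV and noise_amp are hypothetical placeholders for the potential (5.1), its gradient, and the noise power spectra P_φ, P_ψ evaluated at horizon crossing:

```python
import numpy as np

def langevin_step(phi, psi, dN, V, dV, noise_amp, rng):
    """One Euler-Maruyama step of d(phi^I)/dN = -V_I/(3H^2) + sqrt(P_I) xi_I
    with independent unit white noises xi_phi, xi_psi."""
    H2 = V(phi, psi) / 3.0                   # H^2 = V / (3 M_p^2), M_p = 1
    Vphi, Vpsi = dV(phi, psi)
    Pphi, Ppsi = noise_amp(phi, psi)         # spectra at k = eps * a * H
    xi = rng.standard_normal(2)
    phi_new = phi - Vphi / (3.0 * H2) * dN + np.sqrt(Pphi * dN) * xi[0]
    psi_new = psi - Vpsi / (3.0 * H2) * dN + np.sqrt(Ppsi * dN) * xi[1]
    return phi_new, psi_new
```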
The time developments of the five slow-roll parameters defined above, for the sample path shown in figure 4, are illustrated in figure 5. |η_ψψ| exceeds unity first, at N ∼ 30, and then slow-roll inflation ends.
In multi-field cases, the point where the slow-roll condition is violated is unsuitable as the end point of the field trajectory in the δN formula, as we mentioned in the footnote of the previous section. That is because, in multi-field inflation, the slow-roll violating points do not lie on an equipotential line, though the end slice of the δN formula should be a uniform density slice. Therefore we use a uniform Hubble slice as the end. In figure 6, we show the contour plot of the potential with the sample path of figure 4 (red line) and uniform η_ψψ lines (dashed lines). It shows that the equipotential lines indeed do not correspond to the uniform η_ψψ lines.
Parameters on the contour lines represent the value of √(V/(3M_p⁴)) × 10⁹ ≃ (H/M_p) × 10⁹. In this paper, we use H = 8.3138508 × 10⁻⁹ M_p (black thick line) as the end slice of the δN formula, where the slow-roll condition is sufficiently violated, as can be seen in figure 6.
Since the above discussion concerns just one sample path, one may suspect that the conclusion depends on the chosen path. However, we checked that realizations which deviate strongly from this path and violate the above discussion almost never occur. Therefore it is valid to use this uniform Hubble slice as the end of inflation. (Figure 7 caption: one point on the plot in the right panel corresponds to one "initial" value on that sample path. For each "initial" value, we make 10000 realizations from that value to the end of inflation and take the average and variance of their e-folds.)
Let us calculate the power spectrum of the curvature perturbations. Recalling the algorithm described in section 3, we make many sample paths from the fixed initial condition, which we have set at (φ_i/M_p, ψ_i/M_p) = (0.1305, 0).¹¹ For each sample path, we obtain a ⟨N⟩ vs. ⟨δN²⟩ plot by taking "initial" values along that path; such a plot for one sample path is shown in figure 7. Repeatedly solving the Langevin e.o.m. from one "initial" value on that sample path to the end of inflation, we obtain the e-folds and from them the mean ⟨N⟩ and the variance ⟨δN²⟩. Then, varying the "initial" value, the full ⟨N⟩ vs. ⟨δN²⟩ plot, such as the right panel of figure 7, can be obtained. In this paper, we reiterate the calculation 10000 times for each "initial" value. Similarly, we can obtain many ⟨N⟩ vs. ⟨δN²⟩ plots for various sample paths. The true ⟨N⟩ vs. ⟨δN²⟩ relation of our observable universe is then just the average of these plots. In the left panel of figure 8, we show the average of the ⟨N⟩ vs. ⟨δN²⟩ plots for 10000 sample paths. Finally, differentiating this plot, we obtain the power spectrum of the curvature perturbations P_ζ = d⟨δN²⟩/d⟨N⟩, as shown in the right panel of figure 8. The horizontal axis ⟨N⟩ corresponds to a wavenumber k through the relation ⟨N⟩ = ln(k_f/k), where k_f is the horizon scale at the end of inflation, k_f = ǫaH|_f; the scale corresponding to the critical point is ⟨N⟩ ∼ 17, for example. Figure 8 shows the peak of the power spectrum in the waterfall phase after the critical point, caused by the tachyonic instability of the waterfall field ψ.
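The nested structure of this algorithm (sample paths, "initial" values along each path, realizations from each "initial" value) can be summarized in a short sketch; the function names are illustrative, and it assumes each path is sampled at the same number of "initial" points so the plots can be averaged elementwise:

```python
import numpy as np

def stochastic_deltaN(sample_path, evolve_to_end, n_paths=100, n_real=1000):
    """For each sample path, estimate (<N>, <dN^2>) at every 'initial'
    state from n_real realizations, then average the plots over paths."""
    plots = []
    for _ in range(n_paths):
        states = sample_path()               # 'initial' states on one path
        rows = []
        for state in states:
            Ns = np.array([evolve_to_end(state) for _ in range(n_real)])
            rows.append((Ns.mean(), Ns.var(ddof=1)))
        plots.append(np.array(rows))
    return np.mean(plots, axis=0)            # averaged <N> vs <dN^2> plot
```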
Not only in the hybrid case but in any highly stochastic case, we can calculate the power spectrum by applying the stochastic-δN formalism shown here. Note that, for any inflation model or parameters one adopts, the variance of e-folds ⟨δN²⟩ should not exceed unity in our observable universe, namely for ⟨N⟩ ≲ 60. For ⟨δN²⟩ > 1, the universe becomes too inhomogeneous to account for the observed universe. Indeed, the constraints from PBHs suggest P_ζ ≲ 10⁻¹·⁵ for a wide range of k and therefore ⟨δN²⟩ = ∫ P_ζ d(log k) ≲ 1 [7,8].
Conclusion
In this paper, we applied the non-perturbative method that we proposed in the previous paper [1], the stochastic-δN formalism, to chaotic inflation and hybrid inflation. In particular, for hybrid inflation we chose parameters for which the waterfall phase lasts more than 10 e-folds, and we calculated the power spectrum of the curvature perturbations including the region around the critical point. The results are shown in figures 3 and 8 for chaotic and hybrid inflation, respectively. Notably, it is the first time that the power spectrum in hybrid inflation has been calculated fully, from the phase before the critical point to the end of inflation. The resultant power spectrum shows a peak profile due to the tachyonic instability of the waterfall field during the waterfall phase, ⟨N⟩ ≲ 17. Though the recent CMB observations by the Planck and BICEP2 collaborations [2,3] favor a simple single-large-field inflationary model, several multi-field models may be worth considering, and they can have highly stochastic regions. In such cases, the method we demonstrated in this paper is needed to obtain the curvature perturbations.
A Numerical calculation of stochastic processes
In this appendix, we comment on the numerical calculation method for stochastic processes. There are many numerical integration methods with excellent convergence properties, like the Runge-Kutta method, for ordinary differential equations, while the methods for stochastic differential equations such as the Langevin equation are still developing. Since directly applying a method for ordinary differential equations to stochastic ones generally violates the desired properties of the stochastic process, specific methods must be constructed. We first give the terminology of stochastic calculus, and then describe the numerical integration method used in this paper. Note that this appendix is based on refs. [48, 49].
A.1 Stochastic calculus
In the first place, let us define Brownian motion, which is the simplest stochastic process. Brownian motion is the continuous-time limit of a random walk and is mathematically defined as follows.
Definition A.1. Assume some continuous stochastic function W(t), t ≥ 0, satisfying W(0) = 0. If, for all 0 = t₀ < t₁ < ··· < t_m, the increments W(t₁) − W(t₀), ..., W(t_m) − W(t_{m−1}) are independent, Gaussian distributed and satisfy the condition

⟨W(t_{i+1}) − W(t_i)⟩ = 0,  ⟨(W(t_{i+1}) − W(t_i))²⟩ = t_{i+1} − t_i,  (A.2)

then W(t) is called Brownian motion.

The zero-mean white noise ξ(t) in the Langevin equation is formally defined as the derivative of Brownian motion:¹² ξ(t) = dW(t)/dt. Indeed, if ξ(t) has a white spectrum, ⟨ξ(t)ξ(t′)⟩ = δ(t − t′), this definition satisfies condition (A.2), since ⟨(W(t_{i+1}) − W(t_i))²⟩ = ∫ dt ∫ dt′ ⟨ξ(t)ξ(t′)⟩ = t_{i+1} − t_i, with both integrals running from t_i to t_{i+1}.
Next, to integrate a Langevin equation, let us define the stochastic integral

∫₀ᵗ b(s) dW(s).  (A.6)

Here the integrand b(t) can in general depend on the past stochastic process. To define this integral, we first approximate the integrand by a simple process, and then take a limit.
In the first place, Π_n = {t₀, t₁, ···, t_n} is defined as a partition of [0, t], namely 0 = t₀ < t₁ < ··· < t_n = t. Then, in each sub-period [t_i, t_{i+1}), the integrand b(t) is approximated by the constant function b_n(t) = b(t_i). In other words, b(t) is approximated by its initial value in each sub-period (see also figure 9). Generally, we can choose the partition Π_n such that the approximation function b_n(t) comes closer to the integrand b(t) in the mean-square sense, lim_{n→∞} ⟨∫₀ᵗ |b_n(s) − b(s)|² ds⟩ = 0.
Finally, with use of this approximation function, we define the integral (A.6) as

∫₀ᵗ b(s) dW(s) = lim_{n→∞} Σ_{i=0}^{n−1} b(t_i) [W(t_{i+1}) − W(t_i)].

The integral defined by approximating the integrand by its initial value in each sub-period, as above, is called the "Ito integral". The point to notice is that the value of a stochastic integral can depend on where the integrand is approximated, unlike an ordinary integral. The integral approximating the integrand by the midpoint value b((t_i + t_{i+1})/2) is called the "Stratonovich integral", and its value can differ from that of the Ito integral. However, we should use the Ito integral for the noise during inflation, because the noise amplitude P_φ(N) should be evaluated just before the inflaton receives the noise; otherwise causality is broken.
In this paper, all Langevin equations of the form dX(t) = a(t, X)dt + b(t, X)dW(t) are defined in the Ito sense,

X(t) = X(0) + ∫₀ᵗ a(s, X(s)) ds + ∫₀ᵗ b(s, X(s)) dW(s).

In the next subsection, we introduce numerical integration schemes of the Ito type.
A.2 Numerical method
Among the numerical integration methods, the most standard one is time discretization. For the Ito process dX(t) = a(t, X)dt + b(t, X)dW(t), the simplest finite difference approximation is given by the Euler-Maruyama method:

Y_{n+1} = Y_n + a(t_n, Y_n)∆_n + b(t_n, Y_n)∆W_n,  (A.13)

where the initial value is Y₀ = X₀ and the step sizes are

∆_n = t_{n+1} − t_n,  ∆W_n = W(t_{n+1}) − W(t_n).  (A.14)

From the viewpoint of the numerical integration, ∆W_n is a Gaussian distributed random variable whose expectation and variance are zero and ∆_n, respectively:

⟨∆W_n⟩ = 0,  ⟨∆W_n²⟩ = ∆_n.  (A.15)

Next, let us mention the index of strong convergence: an approximation Y is said to converge strongly with order γ if the expected error satisfies ⟨|Y_n − X(t_n)|⟩ ≤ C ∆^γ for some constant C as the maximal step size ∆ tends to zero.
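A direct transcription of the Euler-Maruyama scheme (A.13)-(A.15), assuming scalar drift and diffusion functions a(t, y) and b(t, y):

```python
import numpy as np

def euler_maruyama(a, b, x0, t_grid, rng):
    """Euler-Maruyama: Y_{n+1} = Y_n + a dt + b dW, dW ~ Normal(0, dt);
    strong convergence order 0.5 for stochastic differential equations."""
    y = np.empty(len(t_grid))
    y[0] = x0
    for n in range(len(t_grid) - 1):
        dt = t_grid[n + 1] - t_grid[n]
        dW = np.sqrt(dt) * rng.standard_normal()   # <dW> = 0, <dW^2> = dt
        y[n + 1] = y[n] + a(t_grid[n], y[n]) * dt + b(t_grid[n], y[n]) * dW
    return y
```

Note that evaluating a and b at the left endpoint t_n is exactly the Ito convention discussed above; evaluating them at the midpoint would instead approximate the Stratonovich integral.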
The convergence of stochastic processes is generally bad, and the order of an approximation for a stochastic differential equation is often smaller than that for the corresponding ordinary differential equation. In fact, though the order of the Euler-Maruyama approximation for an ordinary differential equation is 1.0, it has been proved that its order for a stochastic differential equation is 0.5. Finally, we introduce the Runge-Kutta method for stochastic differential equations. The Runge-Kutta method is quite practical, since it can give a stable solution even with a large step size. For an Ito process depending on an independent m-dimensional Brownian motion, the s-staged Runge-Kutta scheme is parameterized, in general, by stage coefficients analogous to those of the deterministic Runge-Kutta method; for independent noise, the correlation matrix C_lk reduces to the Kronecker delta δ_lk. The parameters are often listed with use of the extended Butcher table (table 1). In table 2, we show the parameters of the 3-staged order 1.0 strong Runge-Kutta [50] which we use in this paper.
B Mixing
In this paper, we use the solutions with the constant masses (5.5), assuming that the effect of the mass change on sub-horizon scales is negligible. However, the mixing term V_IJ remains even in this case. Though we neglect this term in this paper, the effect of mixing can be taken into account as follows. (Table 2 caption: the parameters of the 3-staged order 1.0 strong Runge-Kutta; in this paper, we use this method.) Here we treat V_IJ as a constant and evaluate it at horizon crossing k = ǫaH for each mode, even though in reality it varies in time. Taking the diagonalizing matrix P of V_IJ,

(P⁻¹ V P)_IJ = λ_I δ_IJ,  (B.1)

the field P⁻¹_IJ φ^J_k has no mixing. Therefore, the correlator of the original field reads

⟨φ^I†_k φ^J_k⟩ = Σ_L P_IL P_JL ⟨|(P⁻¹φ_k)^L|²⟩,  (B.2)

and the following power spectrum should be used as the amplitude of the noise ξ_I:

P_{φ^I} = Σ_L |P_IL|² P_{(P⁻¹φ)^L}.  (B.3)
Note that the summation with respect to I is not taken in eq. (B.3). The noises ξ_I also have non-zero cross-correlations ⟨ξ_I ξ_J⟩ for I ≠ J. Here P_{φ^I} denotes the power spectrum given by eq. (B.3).
Automated Distance Estimation for Wildlife Camera Trapping
The ongoing biodiversity crisis calls for accurate estimation of animal density and abundance to identify the sources of biodiversity decline and the effectiveness of conservation interventions. Camera traps together with abundance estimation methods are often employed for this purpose. The necessary distances between camera and observed animals are traditionally derived in a laborious, fully manual or semi-automatic process. Both approaches require reference image material, which is difficult to acquire and not available for existing datasets. We propose a fully automatic approach, called AUtomated DIstance esTimation (AUDIT), to estimate camera-to-animal distances. We leverage existing state-of-the-art relative monocular depth estimation and combine it with a novel alignment procedure to estimate metric distances. AUDIT is fully automated and requires neither the comparison of observations in camera trap imagery with reference images nor the capture of reference image material at all. AUDIT therefore relieves biologists and ecologists of a significant workload. We evaluate AUDIT on a zoo scenario dataset unseen during training, where we achieve a mean absolute distance estimation error over all animal instances of only 0.9864 meters and a mean relative error (REL) of 0.113. The code and usage instructions are available at https://github.com/PJ-cs/DistanceEstimationTracking
Introduction
The biodiversity crisis requires accurate monitoring of animal density and abundance. Such estimates can then be used to identify causes of biodiversity loss and to quantify the effects of conservation efforts. This is often achieved by employing camera traps, which capture images or video upon detection of an animal by a passive infrared sensor. Capture-recapture models can be used to estimate animal abundance by (re-)identifying individual animals over multiple images (O'Connell et al., 2011), which is, however, difficult for species without individual markings.
Within a joint project on building up a network of Automated Multisensor stations for Monitoring Of species Diversity (AMMOD) (Wägele et al., 2022), one sub-project is devoted to abundance estimation using Camera Trap Distance Sampling (CTDS) (Howe et al., 2017). Since CTDS relies on the manual and laborious evaluation of reference images, Haucke et al. (2022) presented a first approach to overcoming the distance estimation bottleneck in estimating animal abundance by proposing a semi-automatic calibration procedure. Experiments have shown that this semi-automated approach reduces the manual effort for calibration by reference images by a factor greater than 21. However, the semi-automated approach still requires reference image material, which is both difficult to acquire and not available for existing datasets. In a proof-of-concept study, we now propose a fully automatic approach to estimate camera-to-animal distances and evaluate it in our CTDS framework.
Problem Statement & Contributions
From the application perspective, there are two obstacles to using automated approaches to abundance estimation efficiently:
• Demand for reference imagery. Previous works require reference images (Howe et al., 2017; Haucke et al., 2022), which are costly to obtain and often not available for existing datasets.
• Demand for local placement of reference objects. Although the process of manually comparing observation images with reference images has been automated, reference objects must still be located by hand.
Therefore, in a proof-of-concept study within our project-related CTDS framework, we show a methodologically new approach to AUtomated DIstance esTimation (AUDIT) that overcomes both obstacles (cf. Fig. 7):
• No need at all for reference imagery, thanks to a fully automated processing pipeline with a novel alignment procedure that derives, from the pure camera trap-based observation images alone, metric depth images capturing the absolute distances in meters between camera traps and observed animals.
• No need at all for local placement of reference objects, since the observed animals themselves are automatically detected and localized with per-animal distance estimations.
Related Work and Background
Here we present the most relevant related work with respect to our proof-of-concept study. This goes along, at least partially, with a generally understandable, concise introduction of essential methodological background and terminology originating from the field of computer vision.
Abundance Estimation
Several abundance estimation methods for unmarked animal populations have been proposed which do not require the identification of individuals: the random encounter model (REM) (Rowcliffe et al., 2008), the random encounter and staying time model (REST) (Nakashima et al., 2018), the time-to-event model (TTE), the space-to-event model (STE), the instantaneous estimator (IS) (Moeller et al., 2018) and camera trap distance sampling (CTDS) (Howe et al., 2017). While these methods do not require the re-identification of individual animals, they do require an estimation of the effective area surveyed by the camera trap. The effective surveyed area depends on the field of view (FOV) of the camera trap and its effective detection distance. The effective detection distance is the distance below which as many individuals are missed as are seen beyond it (Hofmeester et al., 2017). Estimating the effective detection distance generally requires estimating the distance between the camera trap and the detected animals. So far, three different approaches have been available to derive such camera-animal distances. However, they rely either on the manual and laborious evaluation of reference images (Howe et al., 2017), on even more time-consuming on-site distance measurements (Rowcliffe et al., 2011), or on semi-automatic calibration of relative depth images for the specific sequence. While our proof-of-concept study focuses on our project-related application within the framework of camera trap distance sampling (CTDS), we conjecture that our basic algorithms to estimate camera-to-animal distances transcend to the reported broader range of abundance estimation approaches.
Image-based distance estimation
Computer stereo vision is the traditional and well-established approach to image-based distance estimation. By comparing information about an observed scene from two differing camera perspectives (mostly two cameras, displaced horizontally from one another), depth information can be derived. (In this contribution, the two terms distance and depth refer in the same way to the distance between the camera and the points observed in a scene.) Computer stereo vision can be seen as the technical analogue to human stereopsis, that is, the human perception of depth and three-dimensional structure by combining visual information from our two eyes.
However, currently deployed camera traps do not utilize two cameras for stereo vision but use just one camera, yielding a single view of the observed scene. This is called monocular vision.
Recent developments have shown that detailed distance estimations can be derived from images of conventional monocular cameras based on deep learning approaches (Facil et al., 2019). Meanwhile, various deep learning approaches have shown their effectiveness in addressing this so-called monocular depth estimation (MDE), where depth is a synonym for the distance to the camera. In this way, monocular vision via deep learning can be seen as the technical analogue of a one-eyed human who learns to estimate distances by experience.
In this proof-of-concept study in the application framework of abundance estimation, we decided on the DPT (Dense Prediction Transformer) approach, which has shown superior quantitative and qualitative results in MDE (Ranftl et al., 2021).
However, most MDE approaches estimate only relative depth information, where the distance-wise order and the relative distances between objects in the scene are known (e.g., "point A is closer to the camera than point B"), but not absolute depth information in meters (e.g., "points A and B are at distances of 1.25 meters and 2.45 meters from the camera, respectively"), which is decisive for distance estimation in the framework of abundance estimation. In this proof-of-concept study, we propose a novel alignment procedure to derive absolute depth information from relative depth information.
Relative depth information as well as absolute depth information will be visualized in this contribution by so-called heatmaps, where a color-based encoding depicts the depth information (cf. Fig. 1).
Visual animal detection
Early so-called region-based deep learning approaches to visual object detection delivered, for each object detected in an input image, a so-called bounding box as output, where a bounding box is just a rectangle containing the detected object. Methods such as Mask R-CNN (He et al., 2017) predict for each detected object not only a bounding box but also a so-called segmentation mask. A segmentation mask shows the exact visual appearance of a detected object, that is, all pixels that belong to it (cf. Fig. 2). Segmentation masks of detected animals are important for behavioral studies of individual animals and animal herds based on their poses and actions captured by video clips from camera traps (Schindler and Steinhage, 2021).
In this proof-of-concept study, we utilize the MegaDetector for visual animal detection in terms of bounding boxes. The MegaDetector is an animal detection method for camera-trap footage developed by Beery et al. (2019) and was trained on several hundred thousand animal detections from camera trap videos recorded in diverse biospheres and covering a large variety of animals. Based on the bounding boxes of detected animals, we introduce a new so-called multi-instance DINO foreground segmentation to derive the segmentation masks of detected animals.
Materials and Methods
Our processing pipeline for fully automated distance estimation (AUDIT) is based on deep learning methods. Deep learning methods form a class of machine learning algorithms that have, since 2012, led to a breakthrough in computer vision and visual recognition, especially in the fields of object recognition and detection in images and video clips (Krizhevsky et al., 2012).
Since training data is used for the training of machine learning approaches, we first introduce the data material that has been used for the training of AUDIT. Then we explain each module of AUDIT and its functionality.
Data Material
The data material was selected according to the following criteria:
• Outdoor and wildlife scenarios. The training data for processing camera-trap imagery should cover image data from as many outdoor and wildlife environments as possible.
• Absolute depth information. To train the estimation of metric distances between observed animals and camera traps, the training data must also provide so-called RGB-D imagery. In RGB-D images, every image pixel shows not only the color information in terms of its red, green, and blue color components but also the depth information, where the depth value gives the distance between the camera and the observed scene part depicted in the pixel.
• Known field of view. For the estimation of the real animal-camera distances, AUDIT has to create an internal three-dimensional representation of the observed wildlife scenario, a so-called 3D point cloud. For this purpose, the opening angle or field of view of the camera trap must be available in the training data.
Following these criteria, we settled on the following data material. An overview of their characteristics can be found in Table 1.
• UASOL (Bauer et al., 2019) is a stereo dataset recorded from pedestrian perspectives at the campus of the University of Alicante (Spain). We selected the five scenarios, out of the 33 available, that contain the most outdoor components and visible vegetation, i.e., the scenarios EPS4, Garden, Nursery, Optics, and Philosophy 1.
• TartanAir (Wang et al., 2020) is a photo-realistic synthetic dataset captured from the perspective of a flying drone and rendered with the Unreal Engine. We decided to use five of the 30 given scenes, recorded in the outdoor environments that contain the most vegetation: Gascola, Neighborhood, Seasons Forest and Seasons Forest Winter.
• DIML (Cho et al., 2021) is an RGB-D dataset consisting of more than 200 different indoor and outdoor scenes recorded with a Microsoft Kinect V2 and a ZED stereo camera. We decided to use the scenes Field 1 and Field 2, as the depicted scenes are comparable to camera trap videos.
• LVPD (Niu et al., 2020) is a forest environment dataset collected in woodland areas in Southampton Common (Hampshire, UK). The camera was mounted 15 cm above the ground on a broom-like contraption to simulate the perspective of a robotic ground rover. The images provided by this dataset were the most similar to real-world camera trap videos of a camera mounted to a tree in a dense forest biosphere.
• Lindenthal is, to our best knowledge, the only outdoor dataset that provides depth as well as tracking information of observed animals (Haucke and Steinhage, 2021). It was recorded by an Intel RealSense D435i stereo camera mounted above an animal enclosure at the Lindenthal Zoo (Cologne, Germany). The near-infrared camera of the Intel RealSense D435i was used during day- and nighttime to capture grayscale video at 15 frames per second. At nighttime, an infrared lamp was used for active illumination. The animals observed are geese, goats, donkeys and deer.
There is a total of 14 scenes, which we enumerate from S00 to S13 (Table 2).
From a more technical viewpoint: the DIML dataset has been used as the validation dataset in training, i.e., for an unbiased tuning of the model hyperparameters. The Lindenthal dataset has been used as the test dataset, i.e., for the unbiased final evaluation.

AUDIT (Fig. 3) takes video clips from conventional camera traps as input. These video clips can be color video clips taken at daytime or gray-value video clips taken at dawn or nighttime using infrared cameras and infrared illumination.
AUDIT comprises two parallel processing branches. The depth estimation branch (left) derives absolute depth information for the complete observed scene, i.e., all observed animals and the background (all visible plants, rocks, trees, etc.). The localization branch (right) derives the complementary information: where are the visible animals in the observed scene? The depth estimation branch first derives the relative depth for every video frame using the DPT-Monodepth model (Dense Prediction Transformer, Section 2.2.1, (Ranftl et al., 2021)). The relative depth information without a metric scale is then aligned by the PVCNN module (Point-Voxel Convolutional Neural Network, Section 2.2.2, (Liu et al., 2019)) to absolute depth information with distance values in meters. In the localization branch, animals are visually detected in each video frame using the MegaDetector framework (Section 2.2.3, (Beery et al., 2019)), which outputs a bounding box for every animal detection (red rectangles). For every bounding box, we employ a newly adapted DINO method (self-DIstillation with NO labels, Section 2.2.4, (Caron et al., 2021)) in a multi-instance approach to extract a segmentation mask for every detected animal.
We now have to combine the results of both branches to obtain the desired animal-camera distances. We achieve this by applying the segmentation mask of every animal detection derived in the right branch to the corresponding absolute depth information derived in the left branch, and by taking the median of the depth values so selected as the absolute animal-camera distance in meters.
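This combination step is deliberately simple; a minimal sketch (function name illustrative), assuming a metric depth image and a boolean segmentation mask of the same shape:

```python
import numpy as np

def animal_distance(depth_m, mask):
    """Median of the aligned metric depth values inside an animal's
    segmentation mask; robust against stray background pixels."""
    return float(np.median(depth_m[mask]))
```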
Deriving Relative Depth in the Left Branch: DPT
For the relative depth estimation, we use the DPT (Dense Prediction Transformer) model developed by Ranftl et al. (2021). Combined with a large collection of diverse depth datasets, the authors achieve new state-of-the-art performance in evaluations on unseen datasets and thus create a robust model for a wide variety of scenes. However, the model only estimates relative depth and not absolute metric depth in meters, to avoid instability due to the wide range of possible depth scales in the training data.
Adapting DPT: Technically, DPT derives for each input frame a so-called disparity image d̂. Such a disparity image encodes the relative depth information via the differences in coordinates of corresponding image points; its values are inversely proportional to the scene depth at the corresponding pixel location. To obtain depth information, we convert such a disparity image d̂ to a first approximation of a depth image d:

d = 1 / (m · d̂ + c).  (1)

We determine the necessary conversion parameters, scale m and shift c, by aligning the DPT disparity output of every image in the training dataset to its disparity ground truth via RANSAC (Random Sample Consensus, Section 3.1) and averaging across the resulting scales and shifts.
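The per-image alignment can be sketched with scikit-learn's RANSAC regressor (an assumption about tooling; the paper does not prescribe a library), fitting m and c on the valid pixels and then applying Eq. (1):

```python
import numpy as np
from sklearn.linear_model import RANSACRegressor

def align_disparity(disp_pred, disp_gt, valid):
    """Robustly fit m, c so that m * disp_pred + c matches the ground
    truth disparity on the valid pixels."""
    X = disp_pred[valid].reshape(-1, 1)
    ransac = RANSACRegressor().fit(X, disp_gt[valid])
    return float(ransac.estimator_.coef_[0]), float(ransac.estimator_.intercept_)

# m, c = align_disparity(...); depth = 1.0 / (m * disp_pred + c)   # Eq. (1)
```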
Absolute Distance Estimation in the Left Branch: PVCNN
Finally, we want to calculate a metric depth estimation for each input image. To this end, we have to align the approximated depth output of DPT, d, again with a scale m and a shift c to a metric depth image d_m, such that

d_m = m · d + c.  (2)

This time, the scale parameter m determines the visible range of depth values, while the shift parameter c determines the distance of the closest object to the camera and the lowest value of the depth range.
To derive these two parameters, we adapt and modify an approach by Yin et al. (2021) for recovering the 3D shape of an observed scene from just a single image. Thereby, Yin et al. (2021) estimate a relative depth image, convert it to a point cloud representation (i.e., a set of three-dimensional points, where in this case all points originate from the pixels of the approximate depth image), and then utilize a Point-Voxel CNN (PVCNN) (Liu et al., 2019) to estimate the focal length and the shift needed to create a three-dimensional reconstruction of the observed scene.
Adapting PVCNN: Technically, we extend the PVCNN architecture of Liu et al. (2019) to estimate both scale m and shift c, and we introduce extensive data augmentation and a novel training regime.
Data Augmentation. Generally, data augmentation techniques are used in machine learning to increase the amount of training data by adding slightly modified copies of already existing data or newly created synthetic data derived from existing data. We apply the following augmentation steps to the approximated depth images d to improve generalization in training with respect to unseen resolutions, unseen scenes and different focal lengths:
• Flipping: random horizontal flips of given depth images from the training data with a probability of 0.5.
• Cropping: random crops of given depth images from the training data to a 16:9 or 4:3 aspect ratio.
• Scaling: select a random factor s ∈ [0.75, 1] and multiply a centered crop of a given depth image and its depth ground truth by s; then resize this scaled copy to the original resolution and finally multiply the corresponding focal length by 1/s.
Training Regime. The new training regime (Fig. 4) comprises the following steps. First, we unproject the relative depth image back to 3D space, similar to Yin et al. (2021). In more detail, we assume a pinhole camera model for the point cloud reconstruction and convert 2D image coordinates (u, v) to 3D by

x = (u − u₀) · d / f,  y = (v − v₀) · d / f,  z = d,  (3)

where (u₀, v₀) is the optical center of the camera, f is the focal length, and d is the approximated depth.
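A vectorized sketch of this unprojection for a whole depth image, assuming the optical center lies at the image center if not specified otherwise:

```python
import numpy as np

def unproject(depth, f, u0=None, v0=None):
    """Pinhole unprojection of an HxW depth image to an Nx3 point cloud:
    x = (u - u0) d / f, y = (v - v0) d / f, z = d."""
    h, w = depth.shape
    u0 = w / 2.0 if u0 is None else u0
    v0 = h / 2.0 if v0 is None else v0
    v, u = np.mgrid[0:h, 0:w].astype(float)
    x = (u - u0) * depth / f
    y = (v - v0) * depth / f
    return np.stack([x.ravel(), y.ravel(), depth.ravel()], axis=1)
```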
Contrary to Yin et al. (2021), we presume that the focal length of the camera is known, since the focal length of commercial camera traps can usually be found in the specifications provided by the manufacturer or can easily be calculated from the opening angle of the camera (field of view, FOV) and the image width w in pixels:

f = w / (2 · tan(FOV/2)).  (4)

The point cloud is then given to the PVCNN as input. From there on, the PVCNN estimates the needed scale and shift for the input image. In the training of a deep learning model, the training loss is a metric used to assess how well the model fits the samples of the training data, which provide the correct output (ground truth) for every sample. The training loss is then minimized to improve the model performance. To calculate this training loss, we align the initial approximated depth input with the scale and shift and apply our loss function (Eq. 5) to the output and the ground truth. It is important to note that simply calculating a common pixel-wise loss (difference between the derived result and the ground truth) would have the disadvantage that training would aim to minimize the loss for all pixels in the same way and would be susceptible either to outliers in the depth estimation of DPT (especially at large distances) or to errors in the ground truth.
Instead, we propose a weighted loss function that shifts the learning objective to pixels closer to the camera. Let d_m be the aligned metric depth image, g the depth ground truth image, n_valid the number of valid pixels in the ground truth, exp the exponential function, and α the weight factor; then we define the weighted loss L_w as

L_w = (1 / n_valid) · Σ_i exp(−α · g_i) · |d_m,i − g_i|.  (5)

The factor α controls how strongly closer pixels, with lower depth values, influence the overall loss. We set α to 0.04 during training, as it achieved the best results on the validation data. We train our model for seven epochs on batches of 50 images, employ a learning rate of 0.0001 with a decay factor of 0.1 applied every fourth epoch, and train with a dropout probability of 0.3 for the classifier layer.
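An illustrative numpy version of this loss follows; the exponential down-weighting mirrors the description above, while the absolute-difference form inside the sum is an assumption on our part (an actual training setup would use an autodiff framework such as PyTorch):

```python
import numpy as np

def weighted_loss(d_m, g, alpha=0.04):
    """Distance-weighted loss: exp(-alpha * g_i) shrinks the contribution
    of far pixels so the objective focuses on pixels close to the camera."""
    valid = g > 0                              # mask out invalid ground truth
    w = np.exp(-alpha * g[valid])
    return float(np.sum(w * np.abs(d_m[valid] - g[valid])) / valid.sum())
```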
For comparison: estimating the scale and shift parameters directly with the well-established RANSAC method (Fischler and Bolles, 1981) yields inferior results during training. Furthermore, we decided against an approach that estimates the two parameters directly from images or image features. As Yin et al. (2021) observed, "the domain gap is significantly less of an issue for point clouds than for images" for this kind of task, which requires an accurate 3D reconstruction of the scene.
Animal Detection by Bounding Boxes in the Right Branch: MegaDetector
The MegaDetector is an animal detection model for camera trap footage proposed by Beery et al. (2019), trained on several hundred thousand animal detections from camera trap videos recorded in diverse biospheres and covering a large variety of animals. We decided on the MegaDetector because of its robustness: it is able to localize animals and species not seen during training, and it reliably detects animals in unseen ecosystems and weather conditions as well.

Animal Detection by Segmentation Masks in the Right Branch: Multi-Instance DINO

The DINO approach by Caron et al. (2021) stands for self-DIstillation with NO labels and describes a method to learn class features, used to classify the detected animals as deer, boar, etc., together with so-called attention maps (Fig. 5 (c)). In simple words: an attention map indicates which image locations are important for each animal detection. These attention maps can thereby be used to derive the segmentation masks of the detected animals by identifying the pixels belonging to a detected animal inside its bounding box (Fig. 5 (d)). We decided on DINO as the segmentation model because it is an unsupervised machine learning approach, i.e., DINO requires no additional training. Furthermore, DINO correctly handles partial occlusion by vegetation and provides a precise segmentation result.
The original DINO was trained on the ImageNet dataset (Deng et al., 2009), which mainly contains images showing only one target object (car, truck, cat, etc.) to detect and identify. Therefore, we do not apply DINO to the complete camera trap images but instead apply it separately to the bounding box of each animal detected by the MegaDetector.
We call this adapted version Multi-Instance DINO; it comprises the following steps (cf. Fig. 5):
1. Input: the bounding box of a detection (a).
2. Increase the bounding box size by doubling its height and width (b) to mimic the format of the ImageNet dataset and thereby create optimal input images for DINO to operate on.
3. Generate the attention map of the image crop within this extended bounding box using DINO (c).
4. Create a segmentation mask by thresholding the attention map at 10% of the maximum attention value (d).
The segmentation mask is then used to determine the distance of the animal by taking the median of the corresponding depth pixel values in the aligned depth image; a sketch of steps 2 and 4 is given below.
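The two geometric steps are easy to state precisely; a minimal sketch with illustrative function names:

```python
import numpy as np

def enlarge_box(x, y, w, h, img_w, img_h, factor=2.0):
    """Step 2: double the width and height of a detection box around its
    center, clipped to the image bounds."""
    cx, cy = x + w / 2.0, y + h / 2.0
    x0 = max(0, int(cx - factor * w / 2.0))
    y0 = max(0, int(cy - factor * h / 2.0))
    x1 = min(img_w, int(cx + factor * w / 2.0))
    y1 = min(img_h, int(cy + factor * h / 2.0))
    return x0, y0, x1, y1

def attention_to_mask(attention, rel_thresh=0.10):
    """Step 4: threshold the DINO attention map of the crop at 10% of its
    maximum value to obtain a boolean segmentation mask."""
    return attention >= rel_thresh * attention.max()
```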
Evaluation and Discussion
We examine the performance of the distance estimation module by a zero-shot evaluation, i.e., by evaluating the distance estimation module on test samples from a location that was not used during training. For this zero-shot evaluation, we chose the Lindenthal dataset, the only outdoor dataset that provides depth as well as tracking information of observed animals (Haucke and Steinhage, 2021) (cf. Section 2.1). The evaluation mirrors the two branches of AUDIT, that is, the depth estimation branch and the localization branch.
Evaluation of Distance Estimation
The proposed distance estimation module consists of two steps: the DPT-based relative distance estimation and the alignment of relative distances to absolute distance estimations via the PVCNN step. Here, it is important to note that the first step, relative distance estimation via DPT, is not an original contribution of this proof-of-concept study; a comparative evaluation of DPT is reported by Ranftl et al. (2021).
Consequently, we do not evaluate the DPT module against other depth estimation approaches. Instead, we compare the subsequent alignment step using the adapted PVCNN module with the Random Sample Consensus (RANSAC) (Fischler and Bolles, 1981) alignment method. We evaluate the alignment quality by comparing a transformed DPT depth image with its corresponding ground truth image. We consider the complete depth image and the median depth value within the segmentation mask of each detection separately.
For using RANSAC on every image to align the DPT-based relative disparity d̂ with the ground truth g, we invert g and estimate the unknown scale m* and unknown shift c* using RANSAC such that the parameters minimize the absolute disparity error:

(m*, c*) = argmin_{m,c} Σ_i | m · d̂_i + c − 1/g_i |.

Next, we convert our relative disparity d̂ to a metric depth image d using m* and c*:

d = 1 / (m* · d̂ + c*).

To select one distance value for every detection, the segmentation mask of an animal (Fig. 2) is applied to the corresponding depth image and the median of the selected values is taken. While RANSAC processes the DPT disparity image and the ground truth depth (Section 2.1) for alignment, PVCNN only takes the approximated depth as input.
Afterwards, the resulting aligned depth images of the two approaches, i.e., RANSAC-based alignment and PVCNN-based alignment, are compared to the ground truth of the Lindenthal depth images for distance values smaller than 25 meters, as this is the realistic application range for camera trap videos (Capelle et al., 2019; Corlatti et al., 2020). The animal enclosure observed in the Lindenthal dataset, as well as the annotated animals, are located at a distance smaller than 20 meters from the camera.
A training epoch on an Intel Xeon 4215, an Nvidia P5000, and 30 GB of RAM took approximately 2.5 hours, with the loading and augmentation of the ground truth, as well as of the precomputed DPT images, responsible for most of the processing time. Table 3 depicts the spatial depth metrics that are commonly applied. N denotes the total number of valid pixels (invalid pixels are masked out during evaluation), and d_i and g_i are the estimated and ground truth depths of pixel i, respectively:

RMS = √( (1/N) Σ_i (d_i − g_i)² ),
MAE = (1/N) Σ_i |d_i − g_i|,
REL = (1/N) Σ_i |d_i − g_i| / g_i,
ME = (1/N) Σ_i (d_i − g_i).

Results: Table 4 depicts the results of the comparative evaluation of the adapted PVCNN-based alignment method against the RANSAC-based alignment method for animal-camera distances of 25 m maximum. It is important to note that the RANSAC algorithm has the advantage of using the ground truth of the Lindenthal dataset as the alignment goal, while the PVCNN-based alignment method has never seen the Lindenthal dataset during training and only uses the relative depth images of DPT. Nevertheless, the PVCNN-based alignment method is not far behind RANSAC in the REL and MAE metrics, namely by only 19 cm in MAE and 0.05 in REL, while PVCNN outperforms RANSAC with respect to RMS and ME.
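The four metrics are straightforward to compute in one pass over the valid pixels; a minimal sketch restricted to the 25 m evaluation range:

```python
import numpy as np

def depth_metrics(d, g, max_range=25.0):
    """Spatial depth metrics over valid ground truth pixels within range."""
    m = (g > 0) & (g < max_range)
    err = d[m] - g[m]
    return {
        "MAE": float(np.mean(np.abs(err))),
        "RMS": float(np.sqrt(np.mean(err ** 2))),
        "REL": float(np.mean(np.abs(err) / g[m])),
        "ME":  float(np.mean(err)),          # signed error, i.e. the bias
    }
```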
For methods such as CTDS, the accuracy of the depth estimation on animal instances is most relevant. Therefore, we additionally evaluate the PVCNN performance on the provided ground truth bounding boxes. For each such bounding box, we apply DINO (Section 2.2.4) to separate animal from background pixels and then use the median value of the corresponding depth pixels as an estimation of the animal distance. We compare this value to the ground truth distance of the animal, extracted from the depth images as the median over the annotated pixel mask. The metrics display an additional improvement, with an MAE of only 0.99 m, a significantly lower RMS of only 1.68 and a REL of 0.113, suggesting a higher precision of the distance estimation for closer and non-background objects. Figure 6 further visualizes the distance estimation error averaged over all detected animals for each scene in the Lindenthal dataset (Table 2). We generally see a low median error (orange line) in most scenes, except for S10 and S13. These two scenes show, at the top of the image, a part of the roof under which the camera was mounted. This object introduces a new reference point close to the camera without any context to the rest of the scene, which causes DPT to output highly variable value ranges. Consequently, this leads to a high spread of estimated alignment parameters and to higher errors (Fig. 6).
Comparison with other automatic methods:
In the DeepChimpact competition, organized by DrivenData (Bull et al., 2016), the goal was to estimate animal distances from camera trap images. Training and testing were performed on mutually exclusive subsets of a single dataset. In other words, no zero-shot evaluation was used, and the models might first need to be re-trained when applied to new datasets. The winning entry achieved a mean absolute error of 1.6203 m (DrivenData Inc., 2022). Another, semi-automated approach achieves a mean absolute error of 1.8527 m.
Comparison with manual distance estimations: Traditionally, distance estimations in ecology have been carried out by humans. Here, distances are not estimated in a continuous fashion but instead assigned to intervals of at least 1 m. For example, Howe et al. (2017) assign animals to 1 m intervals out to 8 m, and then increase the interval size for larger distances. This illustrates that the resulting manual distance estimations are inherently coarse. They are also not objective, as shown in the user study of Haucke et al. (2022). This study resulted in a mean standard deviation between five participants of 0.62 m, a pairwise MAE of 0.7796 m, and a mean relative error of 0.2189. In comparison, our method achieves a mean absolute error of MAE_instance = 0.9864 m, a mean error (bias) of ME_instance = 0.1754 m and a mean relative error of REL_instance = 0.1130 (Table 4). Errors may be influenced by factors such as the distribution of distances present in the image (errors tend to grow with distance) and animal visibility (the distance of poorly visible animals is harder to estimate). As we evaluate our method on the novel Lindenthal dataset, some of these factors might influence the above comparison. However, the lower mean relative error suggests that our method is overall more accurate at larger distances than the participants in the user study conducted by Haucke et al. (2022).
Degree of Automation
We compare the traditional workflow and our novel AUDIT in figure 7. The traditional workflow requires capturing reference footage, e.g. by placing a measuring tape in the scene and then holding up a paper sign with the respective distance at 1 m intervals. In contrast, our method does not need any reference footage, significantly reducing the effort required during camera setup. In the next step of the traditional workflow, researchers need to (1) watch the observation videos, (2) localize animals appearing in the video, (3) compare the animal locations with the reference material to obtain a distance estimation, and (4) document the measurement. This process takes an experienced individual roughly 10 minutes per 1 minute of video (Kühl, 2022). In contrast, our method is fully automatic and only requires images or video depicting animals, together with the focal length specification of the corresponding camera trap. On a computer with an Intel Xeon 4215 CPU, 30 GB of RAM and an Nvidia P5000 GPU, our method takes about 0.5 seconds per image or video frame to estimate animal distances. This process can be left unattended. By saving this manual effort, the complete automation of the process enables large-scale animal abundance studies and could hence accelerate biodiversity research.
Applicability and Application Potentials
Distance sampling relies much more on low bias than on the magnitude of random errors (Buckland et al., 2004). As our bias is relatively low (ME_instance = 0.1754 m), we argue that our distance estimations are well-suited for CTDS in real-world scenarios.

(Figure 7 caption: crossed-out arrows designate steps which are no longer needed in our approach. The traditional workflow needs reference footage, e.g. in the form of paper sheets designating the respective distance. To estimate distances, animals must be localized (solid red arrows) and their position manually matched with the reference material (dashed red arrows). In contrast, our AUDIT fully automates the animal localization and distance estimation.)

While our proof-of-concept study focuses on the application to CTDS, we conjecture that our methodology transcends to other abundance estimation methods such as the random encounter model (Rowcliffe et al., 2008), the random encounter and staying time model (Nakashima et al., 2018), the time-to-event model, the space-to-event model, and the instantaneous estimator (Moeller et al., 2018). This is because the estimation of the detection probability is required for all the mentioned methods. As a challenging example, we demonstrate the application of AUDIT to the visual tracking of animals in video clips captured by camera traps, which is important for behavioral studies of individual animals and animal herds based on their movements and actions (Schindler and Steinhage, 2021). Additionally, in the context of this study, reliable tracking is important for one other approach to abundance estimation, namely the random encounter model (Rowcliffe et al., 2008), which requires velocity estimations of the observed animals.
For this demonstration, we decided on the SORT (Simple Online and Realtime Tracking) approach proposed by Bewley et al. (2016). SORT takes the bounding boxes of animal detections (as derived in the localization branch of AUDIT) as input and connects these animal detections over all frames into cohesive tracks, based on a Kalman filter framework (Chen, 2012) and the association of two visual detections via the Intersection over Union (IoU) metric (Jaccard, 1912). That is, IoU measures how well the bounding box of an animal detection in a frame fits a bounding box of an animal detection in the previous frame, for continuing the track of this detected animal.
We adapted SORT to include the depth information in the Kalman filter predictions and to replace IoU with a new customized association metric, SimScore, that combines the traditional IoU with a distance similarity metric DIST_Z:

SimScore = α · IoU + (1 − α) · DIST_Z,  (8)

where α controls the weight of each metric. DIST_Z depends on the hyperparameter DIST_max: if the depth distance between the tracker prediction z_T and the detection z_DET is larger than DIST_max, DIST_Z is clipped to zero; otherwise the difference is subtracted from DIST_max and then normalized,

DIST_Z = max(0, DIST_max − |z_T − z_DET|) / DIST_max.

Going forward, we will refer to the adapted SORT version as SORT 2.5D (due to adding and processing depth information).
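A sketch of this association score; the default values of α and DIST_max are illustrative placeholders, not the values tuned in the paper:

```python
def sim_score(iou, z_track, z_det, alpha=0.5, dist_max=5.0):
    """SimScore (eq. 8): weighted blend of IoU and the depth similarity
    DIST_Z, which is clipped to zero beyond dist_max and normalized."""
    dz = abs(z_track - z_det)
    dist_z = max(0.0, (dist_max - dz) / dist_max)
    return alpha * iou + (1.0 - alpha) * dist_z
```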
The evaluation employs two established multi-object tracking metrics that were developed for the KITTI dataset benchmark, the CLEAR MOT metrics (Bernardin and Stiefelhagen, 2008): Multiple Object Tracking Accuracy (MOTA) and Multiple Object Tracking Precision (MOTP). MOTA is defined by

MOTA = 1 − (FN + FP + IDS) / GT,

where FN, FP and TP are the false negatives, false positives and true positives, GT is the number of ground truth detections, and IDS is the number of identity switches of predicted tracks; MOTP is the average localization error over the true positive matches. A detection is considered a true positive if its distance in 3D space to the corresponding ground truth track is smaller than 2.2 meters, and a false positive if it is larger than 2.2 meters. We decided to use this approximate threshold because, in traditional CTDS via reference images, the distance measurements are assigned to intervals of, for example, 1 meter for 0-8 meters, 2 meters for 8-12 meters, and 3 meters for 12-15 meters (Capelle et al., 2019). While MOTA relies entirely on the fraction of correctly identified individuals, MOTP quantifies the localization precision of the predicted bounding boxes against the ground truth bounding boxes. The SORT 2.5D version achieves a MOTA score of 56.3%, an average localization precision for correct detections of only 0.648 meters (MOTP) and a high precision of 90.3%. Figure 8 demonstrates a qualitative tracking result.
Conclusion
We propose AUtomated DIstance esTimation (AUDIT), a fully automated processing pipeline for estimating animal distances in video and still images from camera traps. We derive absolute distances in metric values based on monocular relative depth estimation, exploiting a novel 3D point cloud-based alignment model that is trained on a diverse collection of outdoor datasets and thus entirely eliminates the need for reference images. We detect and localize animals using our multi-instance DINO method. We evaluate the optimized approach in a zero-shot evaluation on the Lindenthal zoo scenario dataset, which was not seen during training. On the Lindenthal dataset, we achieve a mean absolute error over all animal instances of only 0.9864 meters and a mean relative error of 0.113. In contrast, the previous semi-automated and automated approaches have much higher mean absolute errors of 1.8527 m and 1.6203 m (DrivenData Inc., 2022), respectively. Manual estimations in the user study by Haucke et al. (2022) have a higher mean relative error of 0.2189. By comparing AUDIT with the traditional workflow (Fig. 7), we show that we relieve ecologists of a significant workload by requiring neither the time-consuming comparison of observation and reference material nor the capture of any reference material in the first place. Although we focused on the application to CTDS, we conjecture that our methodology transcends to other abundance estimation methods such as the random encounter model (Rowcliffe et al., 2008), the random encounter and staying time model (Nakashima et al., 2018), the time-to-event model, the space-to-event model, and the instantaneous estimator (Moeller et al., 2018).
Effects of chlorpyrifos-methyl, chlormequat, deltamethrin, glyphosate, pirimiphos-methyl, tebuconazole and their mixture on oxidative stress and toxicity in HUVEC cell line
Background and Aims: Humans and animals have daily contact with various chemicals, including food additives, pesticides, antibiotics, other veterinary drugs, and other xenobiotics. Pesticide exposure causes many health disorders. Mixed exposure to pesticides is an important issue for human and environmental health. Methods: In this study, we have determined the cytotoxicity of chlormequat, pirimiphos-methyl, glyphosate, tebuconazole, chlorpyrifos-methyl, deltamethrin, and the mixture of these six pesticides. We further investigated the role of oxidative stress, total oxidant status (TOS), lactate dehydrogenase (LDH), and the antioxidant defense mechanisms (total antioxidant status (TAS) and total glutathione (GSH) levels) in the observed cytotoxicity. Results: In this study, the pesticides and their mixture reduced total antioxidant status (TAS) and GSH levels and increased reactive oxygen species (ROS) generation in HUVECs. The results also showed a significant contribution of oxidative stress to cytotoxicity during pesticide mixture exposure. Conclusion: The findings indicate that pesticide mixture exposure might have an impact on human health risk at contaminated sites and under occupational exposure conditions.
INTRODUCTION
Pesticides are commonly used compounds in farming that span a wide range of classes, such as insecticides, herbicides, fungicides, nematicides, acaricides, rodenticides, avicides, wood preservatives, and antifoulants. A remarkable amount of these pesticides spreads into the environment, and this brings about immunotoxicity, carcinogenesis, and endocrine and developmental toxicity. Authorities such as EFSA, EPA, and the Food, Drug, and Cosmetic Act attach importance to assessing the effects of pesticide compound mixtures on the environment and living organisms (Laetz, Baldwin & Collier, 2009; Oyesola, Iranloye & Adegoke, 2019). Risk assessment of pesticide mixtures is a highly important issue due to environmental and human exposure. People are constantly exposed to pesticide combinations due to protective strategies against pests in farming. However, there is still no strict rule for risk assessment of pesticide compound mixtures. Different combination risk assessment models can be seen in the literature. The most common risk assessment model is the dose addition model found in the European Food Safety Authority (EFSA) guidelines (Staal et al., 2018).
Conazole fungicides are widely used in agriculture and also in the treatment of mycosis and candida infections in humans and animals. Conazole fungicides cause disarrangement of fungal membranes via inhibition of the cytochrome P450 enzyme lanosterol 14α-demethylase (CYP51). CYP51 is one of the main elements of ergosterol biosynthesis and normal fungal membrane integrity. Conazole fungicides have side effects such as endocrine disruption. These compounds exert their endocrine disrupting effects via CYP51 and CYP19 inhibition, which play a significant role as steroidogenic enzymes (Roelofs, Temming, Piersma, van der Beg & van Duursen, 2014). Tebuconazole is a widely used fungicide that acts as a sterol demethylation inhibitor. It has been reported that the tebuconazole plasma half-life is approximately 600 days, and that it causes toxic effects on the thyroid, liver, nervous system, and reproductive organs. In addition, it also shows developmental and genetic toxicity.
Chlormequat chloride is a quaternary ammonium compound used in floriculture as a plant growth regulator through its effect of reducing longitudinal shoot growth (Vijitharan, Warnasekare, Lokunarangoda, Farah & Siribaddana, 2016). Human exposure is quite common. According to the WHO, chlormequat is excreted 98% unchanged and its acceptable daily intake (ADI) level is 0.05 mg/kg bw. It is totally eliminated from the body within 46 h. It is important to investigate the potential toxic effects of chlormequat on cellular mechanisms due to the wide range of species that are exposed to it (Xiagedeer, Wu, Liu, & Hao, 2016). Organophosphorus pesticides (OPs) are the most common pesticide type used worldwide. OPs are absorbed via inhalation, skin, eyes, and ingestion. The main acute toxic effect is inhibition of the acetylcholinesterase enzyme, which results in a cholinergic crisis. Pirimiphos-methyl belongs to the OP group of pesticides, which are non-cumulative and broad-spectrum compounds. Even though pirimiphos-methyl is classified as "no reprotoxin, no teratogen" by the OECD, its harmful effects on development and on the reproductive and immune systems have been shown in studies (Oyesola et al. 2019; Olsvik, Berntssen & Søfteland, 2017; Anogwih, 2014). Glyphosate is a non-selective organophosphorus herbicide with bioaccumulation potential in the environment. Glyphosate exposure causes toxic effects in mammals by increasing oxidative stress levels. Due to its common usage, residues of this pesticide can be found in food and environmental samples. The main toxicity mechanism of glyphosate is increased ROS generation, through which ROS disrupt cellular macromolecules (Cai et al., 2020; Odetti et al., 2020; Zhang, Yang, Ma, Shi & Chen, 2020). Chlorpyrifos is another widely used broad-spectrum organophosphorus insecticide. It has been reported that chlorpyrifos might have toxic effects on several organs and systems. Its most common toxic effect is on the nervous system via acetylcholinesterase (AChE) inhibition. Its main toxic effect mechanism is associated with increased cellular oxidative stress (Deng, Zhang, Lu, Zhao & Ren, 2016; Dokuyucu, et al., 2016). Deltamethrin is part of the potent pyrethroid class of insecticides and acaricides. Deltamethrin has also been used for controlling human diseases caused by mosquitos that are vectors of Zika virus and Dengue virus. It has been reported that deltamethrin exerts its toxic effects by increasing cellular ROS and reactive nitrogen species (RNS). Oxidative stress contributes remarkably to the deltamethrin toxicity mechanism (Lu et al., 2019).
Oxidative stress is a cellular imbalance condition between free radical production and the antioxidant defense system. Several chemicals may cause free radical production, and through this, these radicals interact with important cellular molecules and pathologically impair their functions (Abdollahi, Ranjbar, Shadnia, Nikfar & Rezaie, 2004). Pesticide exposure may cause reactive oxygen species production in the cell that exceeds the antioxidant defense system. The cellular antioxidant system includes several molecules and enzymes, including reduced glutathione (GSH), non-protein thiols, catalase (CAT), glutathione-S-transferases (GST), superoxide dismutase (SOD), and glutathione reductase (GR) (Ferreira, et al., 2010). The molecular mechanism of pesticide-induced oxidative stress has still not been clarified and is under scrutiny by scientists seeking to evaluate risk factors and understand related pathologic diseases (Agrawal & Sharma, 2010).
There are limited studies in the literature on these six pesticides' effects on cellular oxidative stress in endothelial cells. HUVECs are cell models that are frequently used to investigate the underlying mechanisms of endothelial diseases (Medina-Leyte, Domínguez-Pérez, Mercado, Villarreal-Molina, & Jacobo-Albavera, 2020). In this study, we have evaluated the toxicity of chlorpyrifos-methyl, chlormequat, deltamethrin, glyphosate, pirimiphos-methyl, and tebuconazole singly and as a mixture. Their oxidative stress-inducing potential was further evaluated in HUVECs. This is the first study of the effects of this pesticide mixture on HUVECs.
Cell culture and cell viability assays
The Human Umbilical Vein Endothelial (HUVEC) cell line was obtained from the American Type Culture Collection (ATCC® PCS-100-010™). Cells were grown in RPMI 1640 (GIBCO, Uxbridge, UK) including 10% fetal bovine serum (GIBCO, Uxbridge, UK), 100 U/ml penicillin, and 100 µg/ml streptomycin (GIBCO, Uxbridge, UK), with incubation conditions of 37°C and a humidified atmosphere with 5% CO2. Cell passage was performed with 0.025% trypsin-0.02% EDTA after the cells reached confluency. Cell passage was performed twice a week. HUVEC cells are a suitable model for investigating the molecular mechanisms of cardiovascular diseases under different chemical exposure conditions (Medina-Leyte et al., 2020).
Oxidative stress quantification assays
Lactate Dehydrogenase (LDH) Assay: The LDH assay was performed with an LDH assay kit (ab102526, Abcam). This assay is based on quantifying LDH activity via the reduction of NAD to NADH; NADH then interacts with a specific kit reagent that produces a color which can be detected at 450 nm. Cell lysates were mixed with the reaction reagent and the colored sample was read at 450 nm with a spectrophotometer (Epoch, Biotek).
Total Antioxidant Status (TAS): The TAS assay was performed with a TAS reagent kit (Rel Assay Diagnostics, Turkey) according to the manufacturer's service bulletin. This assay is based on antioxidants in the sample reducing the kit's ABTS component, and the absorbance change was detected by spectrophotometer at 660 nm. The assay is calibrated with Trolox equivalents. Cell lysates and standards were mixed with reagent 1 and this mixture was measured spectrophotometrically. After reagent 2 was added to the mixture, the measurement was repeated at 660 nm after a 5-minute interval.
Total Oxidant Status (TOS): The TOS assay was performed with a TOS reagent kit (Rel Assay Diagnostics, Turkey) according to the manufacturer's instructions. This assay is based on oxidants in the sample oxidizing ferrous ions. The oxidized ferric ions form a colored complex that can be measured by spectrophotometer at 530 nm. The assay is calibrated with hydrogen peroxide. Cell lysates and standards were mixed with reagent 1 and the first spectrophotometric measurement was done at 530 nm. Then reagent 2 was added to the mixture and the absorbance measurement was repeated after a 5-minute interval.
Total Glutathione (GSH) Assay: The GSH assay was performed with the Glutathione Detection Assay Kit (Fluorometric) (Abcam, ab65322). This kit detects both reduced and oxidized glutathione. The assay is based on monochlorobimane (MCB), which forms an adduct with GSH in a reaction catalyzed by GST. Bound MCB emits fluorescent blue light (Ex/Em = 380/461 nm), which is detected by fluorometer (CARY ECLIPSE, USA).
Cell viability
According to the MTT assay results, the IC50 values of the pesticides and the effects of their combined exposure are listed in Table 2 (Figure 1). LDH levels after exposure to 50 and 200 mM concentrations of each pesticide are shown in Table 3 (Figure 2).
Total antioxidant status
TAS levels decreased in a concentration-dependent manner, but not significantly, in the chlorpyrifos-methyl group. TAS levels decreased in the pirimiphos-methyl and deltamethrin groups. TAS levels decreased in the glyphosate, chlormequat chloride, and tebuconazole groups only at the 25 and 200 mM exposure concentrations compared to the control group. TAS levels decreased in the mixed groups compared to the control group (Figure 3). TAS levels after pesticide exposure are shown in Table 3.
Total oxidant status
TOS levels increased significantly in the pirimiphos-methyl, glyphosate, chlormequat chloride, deltamethrin, and tebuconazole single exposures and in the mixture exposure compared to the control group (Figure 4). TOS levels after pesticide exposure are shown in Table 3.
Glutathione levels
GSH levels decreased significantly at all exposure concentrations in the chlormequat chloride, deltamethrin, tebuconazole, and mix groups. At the glyphosate 50, 100, and 200 mM concentrations, GSH levels decreased significantly compared to the control group. At the pirimiphos-methyl 100 and 200 mM concentrations, GSH levels decreased significantly compared to the control group. However, for the 25 mM group, the GSH level increased dramatically. For the chlorpyrifos-methyl group, at 100 and 200 mM concentrations, GSH levels decreased significantly compared to the control group (Figure 5). GSH levels after pesticide exposure are shown in Table 3.
DISCUSSION
Pesticides are commonly used chemicals that protect agricultural products from weeds, insects, fungi, and rodents. However, pesticides are mostly toxic for the environment and living organisms. Also, pesticides are commonly used for controlling malaria and dengue disease vectors and for controlling plant growth in public places, such as parks and gardens, which triggers important risk factors for public health. Pesticide exposure has different acute and chronic effects, including cancer, asthma, diabetes, and neurodegeneration (Kim, Kabir & Jahan, 2017). One of the main mechanisms underlying pesticide toxicity is oxidative stress. The relation between pesticide-induced oxidative stress mechanisms and these diseases is still a question of debate. Oxidative stress is a cellular homeostatic imbalance between ROS and antioxidants. While ROS increases in the cell, ROS products interact with important cellular molecules and cause detrimental effects in the cell that bring about several different diseases. Antioxidants are very important cellular defenses against ROS products. There are several different studies on the oxidative stress induction effects of pesticides in the literature (Agrawal & Sharma, 2010).
In our study, we have investigated the cytotoxicity and oxidative stress inducing effects of six different pesticides (chlorpyrifos-methyl, pirimiphos-methyl, glyphosate, chlormequat chloride, deltamethrin, and tebuconazole) and their mixture. It has been reported that LDH levels increase during oxidative stress induction in cells (Jovanovic et al., 2010). In conclusion, living organisms constantly encounter different types of pesticides and their combinations in daily life through different exposure pathways. It is very important to clarify their toxic effect mechanisms and relations with diseases to make effective risk assessments and set regulations based on this information. Endothelial cells are situated in the first phase of exposure to xenobiotics, and they are affected more. This study has confirmed the oxidative stress inducing effects of exposure to different types of pesticides and their mixture (chlorpyrifos-methyl, pirimiphos-methyl, glyphosate, chlormequat chloride, deltamethrin, and tebuconazole) and the resulting reduction in the antioxidant capacity of the cells. Further in vitro and in vivo studies are needed to clarify the molecular interactions between pesticide-induced oxidative stress and endothelial cell related diseases. When these mechanisms are understood, protective and therapeutic strategies for treating the toxic effects of pesticide exposure can be developed.
Bayesian Spillover Graphs for Dynamic Networks
We present Bayesian Spillover Graphs (BSG), a novel method for learning temporal relationships, identifying critical nodes, and quantifying uncertainty for multi-horizon spillover effects in a dynamic system. BSG leverages both an interpretable framework via forecast error variance decompositions (FEVD) and comprehensive uncertainty quantification via Bayesian time series models to contextualize temporal relationships in terms of systemic risk and prediction variability. Forecast horizon hyperparameter $h$ allows for learning both short-term and equilibrium state network behaviors. Experiments for identifying source and sink nodes under various graph and error specifications show significant performance gains against state-of-the-art Bayesian Networks and deep-learning baselines. Applications to real-world systems also showcase BSG as an exploratory analysis tool for uncovering indirect spillovers and quantifying systemic risk.
INTRODUCTION
We consider the task of learning temporal interactions and important components over time in a dynamic network. Many real-world systems can be described by a multivariate time series (MTS) and a natural framework for analyzing temporal relationships is Granger causality [Granger, 1969], which tests for whether one time series is useful for forecasting another one. Network Granger causality (NGC) [Basu et al., 2015] extends this concept into the multivariate setting. NGC is useful for identifying one-step ahead predictive relationships within a system, and may be considered causal under very specific conditions [Pearl et al., 2000].
Many methods have been developed to estimate NGC. Vector Autoregression (VAR) [Sims, 1980] and its variants [Lütkepohl, 2005] remain a standard-bearer for macroeconomics and financial forecasting. Bayesian networks [Pearl, 2011;Ben-Gal, 2008] are also a powerful collection of probabilistic graph models for learning NGC, usually via a directed acyclic graph (DAG). Dynamic Bayesian Networks (DBN) [Murphy, 2002] are particularly useful for modeling state changes and temporal structure learning, although it is restricted by acyclic representations. Alternative methods for estimating NGC adjacency matrices use deep learning variants, e.g., attention networks [Nauta et al., 2019], Statistical Recurrent Units (SRU) [Khanna and Tan, 2019], and sparse RNNs [Tank et al., 2018]. Recently, Generalized Vector Autoregression (GVAR) [Marcinkevičs and Vogt, 2021], which utilizes Self-explaining Neural Nets (SENN), also proposed aggregating model coefficients over lagged time series to estimate signs of NGC in addition to edge detection.
However, NGC has several drawbacks. First, it is not designed to capture cumulative interactions or multi-step ahead effects that evolve over longer forecast horizons [Marcinkevičs and Vogt, 2021], which may be particularly important in forecasting or inference for real-world systems [Diebold and Yılmaz, 2014; Billio et al., 2012]. Spillovers, in particular, are an interesting subset of temporal relationships (graph edges) that can materialize beyond 1-step ahead forecasts [Diebold and Yilmaz, 2015] in the context of forecast variability and network connectivity. Furthermore, indirect spillovers between components can also manifest via intermediary nodes despite having no direct link via NGC. Estimating NGC via DAG constraints is hence not representative of true network interactions, which can be self-directed, bi-directional, or cyclic over time. Prior NGC methods also do not quantify strengths of temporal relationships [Marcinkevičs and Vogt, 2021] nor provide ample interpretation for related graph measures. Identification of important nodes relies on standard graph theory metrics [Kramer et al., 2009; Yusoff and Sharif, 2016] such as eigen-centrality [Bonacich, 1987] or in/out degrees [Freeman, 1978]. These metrics are also static point estimates based on NGC graphs.
And although methods such as GVAR offer sign estimation for temporal relationships, the actual coefficient values (edge weights) are not necessarily meaningful.

Figure 1: Comparison of BSG vs. Prior NGC Methods. BSG combines Bayesian VAR estimation with an interpretable FEVD framework over forecast horizons h to quantify the strength of temporal interactions (BSG edge weights) and systemically important nodes over time.
To summarize, the major drawbacks of current methods are (1) lack of flexibility for observing network interactions over multiple forecast horizons, (2) lack of interpretable network measures that are contextualized, (3) and lack of uncertainty quantification for strength of temporal relationships and node influence. To this end, a promising solution is to leverage forecast error variance decomposition (FEVD) from classic time series forecasting, which estimates the temporal effect of shocks to individual nodes in the system [Barbaglia et al., 2020;Tsay, 2013;Diebold and Yilmaz, 2015], and Bayesian VAR models [Rossi et al., 2012;Koop and Korobilis, 2010] which provide comprehensive uncertainty quantification.
In particular, the formulae behind FEVD are a cornerstone of classic multivariate time series analysis when we are interested in relationships between time series components. It is commonly cited as (generalized) impulse response functions in statistical literature and multiplier analysis in economic literature [Tsay, 2013], and key applications include quantifying the effect of one time series component over forecast horizons, a key advantage over NGC. Under careful assumptions and conditions, it can also be a viable causal inference tool to analyze the impact of specific policies [Swanson and Granger, 1997]. The idea of standardizing FEVD as a measure of risk and connectivity has been motivated by macroeconomic and financial applications [Diebold and Yilmaz, 2015; Barbaglia et al., 2020].
Formally, we define spillovers as the predicted impact of one component on all other components in a dynamic network with respect to forecast variability and forecast horizon h. Intuitively, we are learning how unexpected shocks in one component cascades throughout the network to all other components, as well as examining how this impact evolves over time. Statistically, we can estimate h-step ahead spillovers based on normalized FEVD for one-step ahead forecasts and beyond after parameter estimation via Bayesian VAR; interpretation of resulting spillover effects is then contextualized by the input time series while also accounting for parameter estimation variability.
Motivation. We present Bayesian Spillover Graph (BSG) for analyzing temporal interactions over multiple forecast horizons, identification of systemically influential and at-risk nodes, and uncertainty quantification for novel network measures with interpretation beyond simple NGC. BSG is both a powerful exploratory data analysis and inference tool; key contributions include:
1. We model temporal relationships in a dynamic system based on a single observed MTS; forecast horizon hyperparameter h allows for flexibility in learning short-term vs. long-term spillover effects.
2. We propose interpretable network measures for contextualizing spillovers with respect to prediction variability and identifying sink and source nodes within a dynamic network. We demonstrate the robustness of these measures across various graph and error dependency specifications.
3. We provide uncertainty quantification for BSG measures through functionals of model parameter posterior distributions via Bayesian estimation, compared to point-estimates from baseline VAR and NGC retrieval methods. We showcase how BSG can quantify strengths of temporal interactions (including spillovers) and identify systemically vulnerable nodes in a wildfire risk application.
We emphasize the distinction between Bayesian DAGs versus BSG, which models temporal, bi-directional relationships that can potentially amplify spillovers over multi-step horizons. DAG structure is a popular assumption in causal inference and can be viewed as a special case of BSG. BSG learns important edges (temporal interactions) and nodes (time series components) directly from estimated statistical network metrics. It also accounts for various dependencies in error terms that deviate from standard Gaussian noises, which are more descriptive of real-world systems. A brief overview of BSG vs. prior methods is shown in Figure 1.
VECTOR AUTOREGRESSION (VAR)
Let z_t be a stationary d-dimensional multivariate time series, and {z_jt} be the j-th component of this time series at time t. A VAR(p) model with order p is defined as

z_t = φ_0 + φ_1 z_{t-1} + ... + φ_p z_{t-p} + a_t,   a_t ~ N(0, Σ_a),

or in matrix form Z = Xβ + A, where Z and A are (T − p) × d matrices whose i-th rows are z'_{p+i} and a'_{p+i}, β is a (dp + 1) × d coefficient matrix, and X is a (T − p) × (dp + 1) design matrix with i-th row (1, z'_{p+i−1}, ..., z'_i). The likelihood function for the data is

f(Z | β, Σ_a) ∝ |Σ_a|^{−n/2} exp( −(1/2) tr[ Σ_a^{−1} (Z − Xβ)'(Z − Xβ) ] ),

where n = T − p is the effective sample size. We utilize Normal-inverse-Wishart conjugate priors f(β, Σ_a) = f(Σ_a)f(β|Σ_a):

Σ_a ~ IW(V_0, n_0),   vec(β) | Σ_a ~ N( vec(β_0), Σ_a ⊗ C^{−1} ),

where hyperparameters V_0 is a d × d matrix, n_0 is some real number, C is a (dp + 1) × (dp + 1) matrix, and β_0 is a (dp + 1) × d matrix. The posterior distribution is then

Σ_a | Z, X ~ IW(V_0 + S, n + n_0),   vec(β) | Z, X, Σ_a ~ N( vec(β̃), Σ_a ⊗ (X'X + C)^{−1} ),

where β̃ = (X'X + C)^{−1}(X'X β̂ + Cβ_0) and S = (Z − Xβ̃)'(Z − Xβ̃) + (β̃ − β_0)'C(β̃ − β_0); β̂ is the least-squares estimate of β. Usually, V_0 is set to identity I_d and n_0 is a small number; as sample size n increases, the choice of n_0 has very little effect on the final posterior. Similarly, we can choose vague priors for vec(β) by letting vec(β_0) = 0 and C^{−1} = c_0 I_{dp+1}, where c_0 is some large real number, and hence the posterior distribution f(vec(β)|Z, X, Σ_a) is also mainly updated via the data X.
Although Σ_a is unknown, we can draw M i.i.d. samples from the joint posterior distribution by iteratively sampling from f(Σ_a | Z, X) and f(vec(β) | Z, X, Σ_a), replacing Σ_a with the posterior sample Σ_a^(m).
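A compact sketch of this sampling step in Python/NumPy (our illustration, not the authors' code); inverse-Wishart degrees-of-freedom bookkeeping varies across texts, so treat df = n + n0 here as an assumption:

import numpy as np
from scipy.stats import invwishart

def sample_bvar_posterior(Z, X, beta0, C, V0, n0, M=1000, seed=0):
    """Draw M samples of (beta, Sigma_a) from the conjugate NIW posterior (sketch).

    Z: (n, d) responses, X: (n, k) design with k = d*p + 1,
    beta0: (k, d) prior mean, C: (k, k) prior precision, V0/n0: IW prior.
    """
    rng = np.random.default_rng(seed)
    n, d = Z.shape
    K = X.T @ X + C                               # posterior precision for beta
    beta_hat = np.linalg.solve(X.T @ X, X.T @ Z)  # least-squares estimate
    beta_tilde = np.linalg.solve(K, X.T @ X @ beta_hat + C @ beta0)
    resid = Z - X @ beta_tilde
    S = resid.T @ resid + (beta_tilde - beta0).T @ C @ (beta_tilde - beta0)
    L_row = np.linalg.cholesky(np.linalg.inv(K))  # row covariance factor
    samples = []
    for _ in range(M):
        Sigma = invwishart(df=n + n0, scale=V0 + S).rvs(random_state=rng)
        L_col = np.linalg.cholesky(Sigma)
        # Matrix-normal draw: beta ~ MN(beta_tilde, K^{-1}, Sigma)
        beta = beta_tilde + L_row @ rng.standard_normal((K.shape[0], d)) @ L_col.T
        samples.append((beta, Sigma))
    return samples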
BAYESIAN SPILLOVER GRAPHS
In brief, we adopt Bayesian estimation for Vector Autoregressions (VAR) to estimate posterior distribution for model parameters [β , Σ a ] from a single realized MTS. We then construct G h (β, Σ a |Z), the BSG for forecast horizon h, with components of MTS as nodes and temporal interactions as directed, weighted edges. Specifically, we can estimate BSG edge weights by computing h-step ahead normalized spillovers between two nodes via FEVD for M posterior samples of {β , Σ a }, and taking averages over M . Consequentially, BSG is an interpretable graph where both magnitude and specific values of edges are meaningful.
We also introduce three network measures based on functionals of BSG: the spillover index, vulnerability score, and influence score. These measures describe systemic-wide behavior over time and are useful for monitoring influential and at-risk nodes for a dynamic network. With a Bayesian framework, we can quantify uncertainty for both BSG edges and network measures. Under stationarity assumptions, estimated normalized spillovers are finite after some fixed forecast horizon h.
Interpretable BSG Edges from Forecast Error Variance Decomposition. We adapt generalized FEVD for analyzing h-step ahead spillover effects [Diebold and Yılmaz, 2014; Diebold and Yilmaz, 2015]; the accuracy of a forecast can be measured by its forecast error. Let σ_kk be the k-th diagonal of Σ_a, and ψ_i be the coefficient matrix for a non-orthogonalized VAR under an infinite moving-average representation. The jk-th entry of the h-step ahead forecast error variance decomposition is

w_{h,jk} = ( σ_kk^{−1} Σ_{i=0}^{h−1} (e_j' ψ_i Σ_a e_k)² ) / ( Σ_{i=0}^{h−1} e_j' ψ_i Σ_a ψ_i' e_j ),

which measures the amount of information of the h-step ahead forecast error variance for variable j accounted for by innovations/exogenous shocks to variable k. The h-step ahead normalized spillover from component k to j is

s_{k→j}^h = 100 × w̃_{h,jk},   w̃_{h,jk} = w_{h,jk} / Σ_{k=1}^d w_{h,jk},

where w̃_{h,jk} is the normalized variance decomposition. s_{k→j}^h is the proportion of the h-step ahead forecast error variance for node j attributed to changes in node k, and becomes the weight for a directed edge from node k to j for BSG, G_h(β, Σ_a | Z). This definition makes BSG an interpretable graph with respect to forecast errors, with a direct explanation of edge weight meaning. Prior methods such as GVAR would only estimate the sign of a temporal relationship [Marcinkevičs and Vogt, 2021]. See Algorithm 1 for details on estimating BSG edges from posterior distributions of Bayesian VAR parameters.
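The edge computation for a VAR(1) precursor can be sketched as follows (our illustration of the generalized FEVD formulas above; for VAR(p), the ψ_i would come from the companion-form recursion instead of matrix powers):

import numpy as np

def bsg_edges(phi1, Sigma, h):
    """h-step normalized spillover matrix for a VAR(1) (generalized FEVD sketch).

    Returns S where S[j, k] is the percent of node j's h-step forecast error
    variance attributed to shocks in node k (each row sums to 100).
    """
    d = phi1.shape[0]
    psi = [np.linalg.matrix_power(phi1, i) for i in range(h)]  # MA coefficients
    sigma_kk = np.diag(Sigma)
    theta = np.zeros((d, d))
    for j in range(d):
        denom = sum(psi_i[j] @ Sigma @ psi_i[j] for psi_i in psi)
        for k in range(d):
            num = sum((psi_i[j] @ Sigma[:, k]) ** 2 for psi_i in psi)
            theta[j, k] = num / (sigma_kk[k] * denom)
    return 100.0 * theta / theta.sum(axis=1, keepdims=True)   # row-normalize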
BSG Network Measures as Systemic Risk Indicators.
We propose novel BSG network measures based on functionals of BSG edges over forecast horizon h that can describe system-wide behavior and node importance over time. The goal is to quantify cumulative temporal interactions and spillovers within a system, as well as identify strongly influential or vulnerable nodes.
We define the h-spillover index as the magnitude of h-step normalized spillovers across all components, which describes the total spillover effect experienced over the full graph. The h-spillover index can be viewed as a measure of cumulative risk within the system after h time periods; the higher it is, the more fragile the system is to innovations in any individual node.
We may then be interested in identifying specific nodes at high risk over the full graph. For example, say we wanted to rank the individual nodes by the magnitude of spillovers experienced. We define the total spillover effect from all other components to a specific component j as

s_{*→j}^h = Σ_{k≠j} s_{k→j}^h,

which can be viewed as the vulnerability score for a specific node at h-steps ahead, and can theoretically take on values between [0, 100]. The vulnerability score for node j can be interpreted as the proportion of FEVD not attributed to innovations to j itself. In particular, nodes with higher vulnerability are more susceptible to shocks and cascading effects from other components within the system. Alternatively, we may be interested in pinpointing the sources of risks to the system. We define the influence score for a specific node, s_{k→*}^h, as

s_{k→*}^h = 100 × ( Σ_{j≠k} s_{k→j}^h ) / ( Σ_j s_{*→j}^h ).

Note that the numerator of this expression quantifies the total spillover effect on the graph originating from component k, which is then standardized by the h-spillover index. This allows us to interpret the influence score for node k as the proportion of total spillover effect on the entire system attributed to innovations in k, which again takes on values between [0, 100] and is comparable across different networks. In particular, nodes with higher influence lead to greater impact on the entire system if there is a shock or change to the node.
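Given a normalized spillover matrix S from the bsg_edges sketch above, the three measures reduce to simple aggregations; treating the h-spillover index as the raw sum of off-diagonal spillovers is our reading, since the excerpt omits its formula:

import numpy as np

def bsg_measures(S):
    """Systemic measures from a normalized spillover matrix S (S[j, k] = k -> j, in %)."""
    off = S - np.diag(np.diag(S))                    # drop self-spillovers
    spillover_index = off.sum()                      # total spillover S^h
    vulnerability = off.sum(axis=1)                  # s_{* -> j}: received by node j
    influence = 100.0 * off.sum(axis=0) / off.sum()  # share of total from node k
    return spillover_index, vulnerability, influence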
Each s_{k→j}^h is a weighted directed edge from node k to node j. BSG nodes are the individual components of z_t. BSG network measures can also be computed directly by averaging over M samples, e.g., the influence score for node k would be estimated by averaging s_{k→*}^h over the M posterior draws. See Algorithm 1. This process also allows for uncertainty quantification for any BSG edge or network measure by constructing credible intervals over M estimates. We can also leverage the simplicity of Highest Posterior Density Intervals (HPDI) or Bayes Factors [Kass and Raftery, 1995]. See Section 5 for an example with California wildfire data.
Stationarity and Optimal h* for Equilibrium BSG. A mean-adjusted VAR(1) model can be written with an infinite sum as

z_t = Σ_{i=0}^∞ φ_1^i a_{t−i}.

If the series is stationary, then the absolute values of the eigenvalues of φ_1 will be strictly less than 1. Various transformations, including detrending, removing seasonality, or differencing the series [Granger and Newbold, 2014] are recommended to ensure stationarity before parameter estimation. MTS with DAG temporal network structures can be viewed as a subset of VARs with restrictive assumptions on β. In the special case of a VAR(1) model where the temporal network structure of z_t can be described by a DAG, z_t is stationary; see Theorem 1 and proof in Appendix B.
Theorem 1. If φ 1 is a DAG, then (1) no component-wise autocorrelation exists, (2) φ 1 can be specified by a strictly triangular matrix, (3) all eigenvalues of φ 1 are 0 and hence z t is stationary.
Under stationarity, BSG can reliably model cumulative response functions if shocks are not persistent and the system will return to equilibrium. See Algorithm 1 for choosing the optimal h * -step. The horizon h can be interpreted as a tuning parameter that controls the trade-off between learning immediate versus cumulative effects for BSG.
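A sketch of the corresponding checks, reusing bsg_edges from above; h_max and tol are hypothetical stopping parameters, not values from the paper:

import numpy as np

def is_stationary(phi_list):
    """Check VAR(p) stationarity via eigenvalues of the companion matrix."""
    d, p = phi_list[0].shape[0], len(phi_list)
    F = np.zeros((d * p, d * p))
    F[:d, :] = np.hstack(phi_list)
    F[d:, :-d] = np.eye(d * (p - 1))
    return np.all(np.abs(np.linalg.eigvals(F)) < 1.0)

def equilibrium_h(phi1, Sigma, h_max=50, tol=1e-3):
    """Smallest h after which the spillover matrix stops changing (sketch)."""
    prev = bsg_edges(phi1, Sigma, 1)
    for h in range(2, h_max + 1):
        cur = bsg_edges(phi1, Sigma, h)
        if np.max(np.abs(cur - prev)) < tol:
            return h
        prev = cur
    return h_max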
BSG FOR QUANTIFYING INDIRECT SPILLOVERS
We showcase how BSG models temporal spillovers that materialize after multiple periods. Consider a 5-dimensional VAR(1) time series represented by the directed graph of temporal interactions (φ_1) in Figure 3, with true parameters φ_1 as shown there and Σ_a = diag(5).

Figure 2: Normalized spillover evolution from Node 3 to 5 (red) over h. Arrow width is prop. to BSG edge strength.
Eigen-decomposition of φ_1 indicates that all eigenvalues have magnitude less than 1 and this network is stationary with standard independent error terms. Nodes 3 and 1 are analogous to source nodes with high out-degree centrality, and 5 and 3 to sink nodes with high in-degree centrality [Borgatti, 2005; Bollobás, 2012; Goldberg et al., 1989]. Node 5 will experience spillovers from Node 3 via Node 4 after multiple time periods, but this relationship is omitted in a simple NGC. This limitation is suitably addressed with a BSG with h > 1; see Figure 2, where the indirect spillover (red arrow from 3 to 5) becomes stronger as h increases.
In Figure 4, we plot average BSG directed edge weights (h-step ahead normalized spillover) from Nodes 1-4 into Node 5. The indirect spillover effect through intermediary Node 4 manifests after the 2-step ahead forecast and significantly amplifies as the forecast horizon increases (turquoise line) before flattening after h = 17. We can directly interpret this edge: the posterior mean for s_{3→5}^{20} is 80.1% with a 95% HPDI of (71.9%, 87.7%), which predicts that after 20 periods, roughly 80.1% of forecast variability for node 5 can be attributed to changes in node 3. In contrast, the edge from Node 4 to Node 5 rapidly declines past h = 4. With prior methods of only estimating static NGC, we would not be able to observe nor quantify these spillover effects that evolve over longer forecast horizons.
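To make the mechanism concrete, the following toy run (reusing the bsg_edges sketch) uses a hypothetical chain φ_1 with 3 → 4 → 5; the 0.8 weights are illustrative and are not the paper's true parameters, which are not reproduced in this excerpt:

import numpy as np

# Hypothetical phi_1 with a chain 3 -> 4 -> 5 (0-based indices 2 -> 3 -> 4).
phi1 = np.zeros((5, 5))
phi1[3, 2] = 0.8        # node 3 drives node 4
phi1[4, 3] = 0.8        # node 4 drives node 5
Sigma = np.eye(5) * 5.0

for h in (1, 2, 3, 20):
    S = bsg_edges(phi1, Sigma, h)
    print(f"h={h:2d}  spillover 3->5: {S[4, 2]:.1f}%")
# The 3 -> 5 edge is zero until h is large enough for the shock to traverse
# intermediary node 4, then grows with the forecast horizon.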
BSG FOR IDENTIFYING NETWORK SOURCE & SINK NODES
We illustrate how BSG network measures accurately ranks and identifies nodes of interest compared to baselines with simulated MTS. Since relative order matters, this is a ranking instead of prediction task. Performance is evaluated by Normalized Discounted Cumulative Gain (NDCG) [Valizadegan et al., 2009]. NDCG measures ranking quality of a node ordering by BSG network measures or other graph measures, e.g., source nodes are ranked highly influential. NDCG is between [0, 1] and directly comparable across methods; see Appendix C.
Identifying Nodes Across Network Specifications. Three stationary network specifications (φ_1) are used for simulating 5 MTS replicates: (1) a DAG, (2) a directed cyclic graph with autocorrelation = 0.5, and (3) a bi-partite graph. The first set of baselines are 4 standard graph measures on an NGC graph: in/out degree distributions, eigen-centrality, betweenness centrality, and closeness centrality. NGC is constructed from a VAR(1) model fitted via the MTS package, and significant edges are identified via multiple testing with the Benjamini-Hochberg procedure [Benjamini and Hochberg, 1995]. Another set of baselines is DBN and GVAR combined with the 4 graph measures above, because these methods are designed only to retrieve NGC graphs. For fairness of comparison, GVAR lag is restricted to 1 and run with default hidden units/layer (50), hyperparameters λ = 0.1 and γ = 0.01, and 500 epochs in PyTorch. DBN uses default settings with the dbnR package.
Average NDCG is reported in Table 1 for each combination of baseline NGC graph-recovery method and network measure. Out- and in-degree centralities (Degree) are used for source and sink nodes, respectively. BSG with h = 10 yields the highest accuracy for both node types across all three network specifications.
Effect of Forecast Horizon h and Error Covariance Σ a
We perform an ablation experiment to answer two questions: (1) How does choice of hyper-parameter h impact BSG quality and accuracy? (2) How well does BSG perform across different error dependency structures?
We utilize Network (2), which allows for bi-directional temporal relationships and cycles. Each component has unit variance (σ_kk = 1), and the pairwise covariance is {0.1, 0.3, 0.5, 0.7, 0.9}, corresponding to the strength of dependencies in Σ_a. d = 24 with 8 source and sink nodes; for each Σ_a specification, we generate 5 replicates and estimate the corresponding BSG for 20 values of h, then compute accuracy (NDCG) for source node identification. Figure 5 shows that good choices of h range between 5-10, and BSG performance quickly stabilizes after a few forecast periods while successfully identifying the proper source nodes. Good choices for h depend mostly on φ_1 and are influenced by the speed at which the system reaches equilibrium (mean-reverts), not necessarily the size of the network. Lower h values yield higher accuracy for identifying sink nodes; a good BSG should select h that maximizes both quantities.
In Table 2 of Appendix D.1, we report NDCG for identifying sink and source nodes in networks with weak, medium, and strongly correlated Σ_a, using the same VAR, DBN, and GVAR specifications as previous experiments. Results show that BSG influence and vulnerability scores outperform all benchmarks even under strongly correlated error terms. When σ_jk is moderately or strongly correlated, standard VAR breaks down and produces a degenerate graph (i.e., multiple testing results in zero significant edges); benchmark network measures collapse in this case. DBN performs mostly consistently, while for GVAR, the corresponding in/out-degrees do not distinguish between influential nodes. BSG avoids these pitfalls since it inherently accounts for error dependencies and is more applicable for real-world dynamic networks with strong correlations.
Non-Linear Dynamic Systems
Recent works have also focused on dynamic systems with non-linear or higher-order temporal relationships. A prime example is the Lotka-Volterra predator-prey model [Bacaër, 2011]. Four parameters {α, β, γ, δ} correspond to prey → itself, predator → prey, predator → itself, and prey → predator interaction strengths. We generate 5 MTS replicates using the same parameter specifications ({1.2, 0.2, 1.1, 0.05}) as Marcinkevičs and Vogt [2021], with T = {50, 200, 1000}. We compare BSG influence/vulnerability scores vs. benchmarks for correctly identifying nodes as predator (source) and prey (sink). Results and an example MTS simulation are reported in Table 3 and Figure 8 in Appendix D.2; BSG at all forecast horizons outperforms baselines for T = 50 and T = 200. For T = 1000, BSG performs consistently well for identifying source nodes, but has lower accuracy for identifying sink nodes, likely due to long-range dependence for a longer MTS. GVAR-Closeness has marginally higher accuracy (+0.014) for identifying predators compared to BSG (h = 1) but very low accuracy (0.554) for identifying prey. Meanwhile, standard VAR after FDR adjustment produces degenerate graphs. On average, BSG still performs well on both source and sink node identification; in practice, it may be useful to first difference MTS with higher-order autocorrelation.

Figure 6: BSG for Kincade Fire, h=12 hours ahead. Red indicates source and blue indicates sink nodes. Arrow width is prop. to BSG edge weight. See Figure 11 in Appendix E for 95% HPDI of spillovers.
BSG FOR UNDERSTANDING REAL-WORLD SYSTEMS
Inferring Spillovers from California Wildfires. The Kincade Fire was the largest California wildfire in 2019, burning a total of 77,758 acres. It originated in Sonoma County and dangerous PM10/PM2.5 particles in the air posed a serious public health risk spillover for nearby counties with high population density. We use BSG to investigate spillovers and rank at-risk nodes (counties) as measured by hourly PM 2.5 particle concentrations from Oct 22-Nov 7. We have a reasonable ground-truth for underlying network structure with Sonoma County as the single source node. Therefore, any strong BSG edges detected between Sonoma and nonadjacent counties, or two counties that does not include Sonoma, can be considered indirect spillover effects.
Data Description. Using public data from the EPA (Environmental Protection Agency), hourly PM 2.5 concentrations are extracted for 10 counties within 50 miles of Sonoma County in Northern California; Yolo, Sutter, and Lake counties had no data available. See Figure 9 in Appendix E for the MTS plot. No visible trend or seasonality effects are observed; autocorrelation plots show evidence of long memory for some counties and we also observe prominent spikes, particularly initially in Sonoma and later with time lag in other counties. To ensure stationarity, we proceed with the first order difference of the MTS.

Figure 7: 12-hour normalized spillover for Kincade Fire. Blue arrows indicate direct risk for adjacent counties, and orange arrows indicate spillovers for non-adjacent counties.
Quantifying Spillover & At-risk Nodes. In Figure 6, we illustrate all BSG edges (h = 12) greater than the 80th percentile in magnitude for simplicity, with arrow width proportional to edge weights. The top source node Sonoma (by BSG influence score) is shaded in red, and top sink nodes (by vulnerability score) is shaded in blue. The BSG neatly captures the Kincade Fire in that Sonoma has the majority of all outgoing edges, while further away, non-adjacent counties (sink nodes) such as Colusa and Alameda have strong spillovers both directly from Sonoma and indirectly via other counties as well. In particular, note the cycle from Sonoma → Contra Costa ↔ Alameda where sink nodes also interact and amplify spillover effects. We can further quantify downstream spillovers via BSG edge weights for counties to the southeast of Sonoma; see Figure 7 for county map with spillovers. Roughly 10% of FEVD for each county can be attributed to changes in Sonoma's PM 2.5 concentration. One possible explanation is downsloping winds from the north [Mass and Ovens, 2019], which is particularly concerning due to the far higher population density of impacted counties. Two other notable indirect spillovers not involving Sonoma include those from San Mateo to Contra Costa (12.3%) and Alameda (9.3%).
BSG influence and vulnerability scores for each county are reported in Figure 10 in Appendix E. Sonoma County is the most influential node, accounting for more than 40.9% of total spillover effect across all 10 counties on average, with the 95% HPDI as (17.9%, 62.7%). BSG accurately identifies the origin of the Kincade Fire while also showing Sonoma itself is the least vulnerable node. Locations most at risk to the fire, by vulnerability score, are Alameda and Contra Costa followed by San Francisco, Solano, and Colusa. None of these 5 counties are adjacent to Sonoma; they incur higher risk via spillovers from intermediary Marin and Napa counties, accumulated over multiple time periods. These risk quantifications from BSG have practical implications for policies with respect to wildfire relief and public health. For example, although FEMA allocated nearly 60 million dollars in federal relief [FEM, 2019], the funds were strictly designated for Sonoma County. Meanwhile, BSG as an exploratory tool clearly identifies much broader spillovers and at-risk counties.
DISCUSSION
BSG is a novel framework for modeling temporal interactions and identifying important nodes within a dynamic system based on a single realized multivariate time series. BSG combines interpretable forecast error based network measures with uncertainty quantification via sampling from posterior graph distribution, and demonstrates robust performance across various graph specifications and error dependency structures. The hyperparameter h allows for custom learning of both short and long-term temporal relationships, including indirect spillovers, which are better suited for understanding how real-world systems evolve over time.
Careful choice of horizon h can help model equilibrium state of systems and optimize proper ranking of sink and source nodes.
A key application of BSG could be for analyzing spillover impact in response to new regulations and economic policies. For example, consider when a significant event occurs in a particular city, e.g., a new tax policy is passed or a local manufacturer is shut-down and off-shored. Prior works have utilized impulse response functions to analyze policy interventions [Sims, 1980;Ericsson et al., 1998;Lütkepohl, 2005]; we propose leveraging BSG to examine and quantify both positive and negative externalities (spillover effects) in terms of employment statistics, traffic congestion, local rent, wages, etc., for neighboring cities or counties. Inference via BSG can be for both short-term and long-term impact based on forecast horizon, and used to inform both the public and policymakers.
Another potential BSG application is in time series analysis of fMRI data in healthcare and medicine [Penny et al., 2005]; for example, we can examine individual brain fMRI time series where each component is an atlas-based region of interest, i.e., aggregated behavior from a set of voxels, which represent smaller unit regions in the brain. The time series could measure brain activity in response to some stimuli or treatment, and a BSG can illustrate the cumulative effect of temporal interactions between different brain regions over time. The novel BSG network measures (influence score, vulnerability score) can also pinpoint critical components of brain connectivity, analogous to sink or source nodes.
Future work can dive deep into applying BSG for some of these datasets aforementioned, as well as extending the BSG framework for Bayesian networks with time-varying coefficients [Kowal et al., 2019] or latent state-space representations.
A MOVING AVERAGE REPRESENTATION OF VAR(1)
We can rewrite a VAR(1) model with a moving average representation [Tsay, 2013] using the mean-adjusted model, which is useful for computing variances of forecast errors.
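For completeness, the standard representation (following Tsay [2013]; reconstructed here since the equations did not survive extraction) is

\tilde{z}_t = z_t - \mu, \qquad
\tilde{z}_t = \sum_{i=0}^{\infty} \phi_1^{\,i}\, a_{t-i}, \qquad
\psi_i = \phi_1^{\,i},

so that the h-step ahead forecast error and its covariance are

e_T(h) = \sum_{i=0}^{h-1} \psi_i\, a_{T+h-i}, \qquad
\operatorname{Cov}[e_T(h)] = \sum_{i=0}^{h-1} \psi_i \Sigma_a \psi_i^{\top}.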
B PROOF OF THEOREM 1

Theorem 1 (restated). If φ_1 is a DAG, then (1) no component-wise autocorrelation exists, (2) φ_1 can be specified by a strictly triangular matrix, (3) all eigenvalues of φ_1 are 0 and hence z_t is stationary.
Proof: By definition of a DAG, no cycles can exist in the adjacency matrix, in this case, φ_1. Hence, the diagonal entries, which indicate the dependency of z_{i,t+1} on z_{i,t}, are necessarily 0, thereby proving point (1).
Note that, by definition, a graph has a topological ordering on the vertices if and only if it has no directed cycles. Because φ_1 is a DAG, we can relabel the d vertices (time series components) as v_1, v_2, ..., v_d. If v_{i'} → v_i is a directed edge into i from i' (indicating Granger-causality), then i > i'. Hence, all entries above the main diagonal are also 0, because these are entries for which i < i'. Combined with point (1), where the main diagonal entries are also 0, this satisfies the definition of a strictly lower-triangular matrix (2).
We have shown that the adjacency matrix of a DAG is strictly lower-triangular via permutation, and note that the order of the individual time series components does not matter, although in this case the d vertices are ordered from source to sink nodes. The eigenvalues of any lower-triangular matrix are just its diagonal components [Axler, 1997], meaning that all eigenvalues of φ_1 are 0. Since these are strictly less than 1 in magnitude, we can conclude that z_t is stationary (3).
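A quick numerical check of point (3), assuming the nodes are already in topological order:

import numpy as np

# A strictly lower-triangular phi_1 (DAG in topological order) has all-zero
# eigenvalues, so the VAR(1) is stationary regardless of the edge weights.
phi1 = np.tril(np.random.default_rng(0).normal(size=(5, 5)), k=-1)
print(np.linalg.eigvals(phi1))          # all (numerically) zero
print(np.linalg.matrix_power(phi1, 5))  # nilpotent: phi1^d = 0 exactly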
C EVALUATING ACCURACY FOR SOURCE & SINK NODE IDENTIFICATION
First, define Discounted Cumulative Gain (DCG) at position d, for d nodes arranged in a particular order:

DCG_d = Σ_{i=1}^d rel_i / log2(i + 1),

where rel_i is the graded precision score of the node at position i, e.g., {1, 0.5, 0} for {source, intermediary, sink} nodes respectively. Greater penalty is given for source or sink nodes ranked in lower positions. NDCG [Valizadegan et al., 2009] then equals DCG divided by the Ideal Discounted Cumulative Gain (IDCG):

NDCG_d = DCG_d / IDCG_d,   IDCG_d = Σ_{i=1}^{|rel_d|} rel_i / log2(i + 1),

where |rel_d| represents the optimal order of nodes, which is given by the ground truth labels of each node.
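These definitions translate directly into code (our sketch; the graded relevances {1, 0.5, 0} follow the text):

import numpy as np

def ndcg(relevances):
    """NDCG for a node ordering given graded relevances, e.g. 1 / 0.5 / 0 for
    source / intermediary / sink nodes when ranking by influence score."""
    rel = np.asarray(relevances, dtype=float)
    discounts = np.log2(np.arange(2, len(rel) + 2))   # log2(i + 1), i = 1..d
    dcg = np.sum(rel / discounts)
    idcg = np.sum(np.sort(rel)[::-1] / discounts)     # ideal (sorted) ordering
    return dcg / idcg

# Example: a ranking that places the true source node first is perfect.
print(ndcg([1.0, 0.5, 0.5, 0.0]))   # -> 1.0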
Advances in Understanding the Alkali-Activated Metallurgical Slag
This paper summarizes and reviews the mechanism and macro-performance of alkali-activated metallurgical slags, including steel slag, copper slag, ferronickel slag, and lead-zinc slag. Better activation methods and alkali-activators still need to be developed to improve the performance of metallurgical slags with low reactivity. Besides, the variation in chemical components of these metallurgical slags from different regions can lead to unpredictable performance, which needs further study.
Introduction
It is widely accepted that alkali-activated material (AAM) is a potential alternative for ordinary Portland cement (OPC) [1]. These materials are commonly generated from an aluminosilicate precursor, which can be obtained from solid industrial waste, such as granulated blast furnace slag (GBFS), fly ash, mineral processing tailings [2], catalyst residues, waste glass, waste ceramic, coal bottom ash [3], rice husk ash [4], palm oil fuel ash, etc. [5]. In 2016, approximately 1.45 Gt CO2, which is about 8% of total CO2 emissions from human activities, was released from the cement industry [6]. About 50%-60% of the cement industry's CO2 emission comes from the calcination of limestone. Therefore, the replacement of OPC by AAM is recognized as one potential way to reduce carbon emission [7]. Life-cycle analysis (LCA) of alkali-activated materials has been thoroughly discussed by Habert. According to the statistics of different studies, LCA of AAM shows a reduction of approximately 40%-80% in CO2 emissions compared to an OPC baseline [1]. It needs to be added that the baselines of OPC are specified inconsistently among various reports because the mix design, local conditions (such as transport distances and cost of electricity generation), as well as industry and environmental policy significantly affect the baseline [1]. The gel formed in alkali-activated fly ash (AAF) has a highly polymerized structure (mainly Q4 with few Q3) [12]. Blast furnace slag is more reactive than fly ash, as a higher pH and temperature is needed to activate the fly ash [5]. This means alkali-activated slag (AAS) has a wider range of activators compared to AAF, such as sodium carbonate and sodium sulfate. Besides the different gels in these two systems, a wide range of secondary phases such as hydrotalcite and AFm-like crystals were observed in AAS [15].
Recently, various metallurgical slags have been generated in metal production processes, such as steel slag, copper slag, ferronickel slag, and lead-zinc slag. Steel slag is a solid waste generated during the conversion of iron into steel, amounting to about 15% of the crude steel output [16,17]. Copper slag is an industrial by-product of the copper-making process, whose yield is 2-3 times that of the copper output [18,19]. Ferronickel slag is an industrial waste from the production of nickel-iron alloy [20,21]. Approximately 12-14 tons of ferronickel slag are produced per ton of nickel [22,23]. The main solid waste generated during lead and zinc production is lead-zinc slag. According to statistics, the extraction of 100 tons of lead and zinc generates 71 tons of lead slag and 96 tons of zinc slag [24,25]. It is estimated that the annual worldwide production of steel slag, copper slag, ferronickel slag, and lead-zinc slag is 200 million tons, 70 million tons, 150 million tons, and 25 million tons, respectively [17,23]. Although the yield of these slags is high, the utilization rate is low. For example, until now, the steelmaking industry has produced nearly 1.2 billion tons of steel slag in China, and less than 30% of the slag is recycled and applied, mostly in low value-added fields [26,27]. Most metallurgical slags are stockpiled in open fields. Dealing with such a large amount of industrial waste is a severe challenge for global environmental governance. The best way to solve this problem is to transform metallurgical slags into new materials with high added value, which will also bring huge economic benefits to society.
Most metallurgical slags are used as potential alternative materials in civil engineering. They are usually used as aggregates or fillers in place of conventional sand and stone materials due to their low activity [28,29]. Powdered slag has a higher market value for construction than granular slag. A possible application for these solid wastes is to produce alkali-activated materials, because they are high-quality aluminosilicate resources. With the further implementation of the concept of sustainable development, research on cement with less clinker or no clinker has received more attention. Alkali-activated material, as a kind of inorganic polymer material, has great potential and is expected to be an alternative to cement and concrete. Meanwhile, in recent decades, with the development of cemented backfill technology for underground mine backfill, more and more mines at home and abroad use cementitious materials and alternative binders to replace conventional hydraulic backfilling [30]. Compared to building materials, the quantity of mine backfill material is large and the strength requirement is easily met. When alkali-activated materials are used to replace other conventional materials for mine backfill, they help to effectively deal with the solid wastes, substantially preserve natural resources and energy, and create the conditions for reducing potentially harmful waste disposal costs. Hence, alkali-activated material is suitable for mine backfill.
Within this context, the purpose of this paper is to review steel slag, copper slag, ferronickel slag, and lead-zinc slag as precursors in alkali-activated materials. The challenges and opportunities of using these slags in alkali-activated materials are also discussed.
Physical Properties and Chemical Composition.
The type of modern steel determines the elimination of different impurities in the steelmaking process. Carbon steel can be produced in a ladle furnace (LF), an electric arc furnace (EAF), or a basic oxygen furnace (BOF) in different countries [30-32]. Thus, depending on the type of furnace, steel slag can be broadly classified into three categories, i.e., BOF steel slag, EAF steel slag, and LF steel slag [33]. Steel slag in alkali-activated material usually refers to BOF steel slag, which is also called converter steel slag [34]. Today, in China and the United States, BOF steelmaking makes up approximately 70% and 40% of steel production, respectively [35,36]. BOF steel slag is rock-like and dark. Its density is 3-3.6 g/cm3, which is higher than that of natural aggregate [37]. The water absorption rate of steel slag is 0.4%-3.5% [38,39]. BOF steel slag is very hard and not easy to grind due to its high Fe content, so BOF steel slag and its products have good abrasion resistance [37,38].
The chemical composition is heavily affected by the steel slag type. BOF slag has more FeO than EAF steel slag and less SiO2 than LF steel slag [35,37]. The main chemical compositions of BOF steel slag are presented in Figure 1. In general, BOF steel slag primarily consists of 35%-50% CaO, 15%-35% Fe2O3, 10%-20% SiO2, 2%-10% MgO, 0%-5% MnO, 1%-7% Al2O3, 1%-3% P2O5, and 0%-2% TiO2. It is worth noting that there is a great difference in Fe2O3 content. High Fe2O3 content in steel slag plays an important role in the grinding and application quality of steel slag. However, with the improvement of the magnetic separation technology for BOF slag, the Fe2O3 content in BOF slag has been effectively reduced [26,37]. Chemical composition analysis of newly produced slag has shown that the total amount of Fe2O3 is less than 20%.
Steelmaking slag is usually air-cooled to ambient conditions, and so BOF steel slag is highly crystallized [38,39].
Those oxides in BOF steel slag form different mineralogical compositions. Essential mineral phases in BOF steel slag are tricalcium silicate (C3S), dicalcium silicate (C2S), CaO-FeO-MnO-MgO solid solution (RO phase), dicalcium ferrite (C2F), tetracalcium aluminoferrite (C4AF), merwinite (Ca3MgSi2O8), lime (free CaO), and periclase (free MgO) [16,26,37,40]. The Fe mainly exists in forms such as the RO phase, C4AF, and C2F, and these phases have no sufficient reactivity. During cooling, C2S undergoes polymorphic transformations, where β-C2S transforms to γ-C2S at approximately 500°C, resulting in a volumetric expansion of 12% [37-39]. The small amounts of C3S and β-C2S, with dense structure and large crystal size, have low reactivity, while γ-C2S is considered to have a negligible cementitious capability [41]. Therefore, BOF steel slag powder shows poor hardening reaction even after prolonged curing at room temperature, but shows better cementitious properties under the action of a chemical activator. The content of free CaO increases with the alkalinity of steel slag, and can even reach 10% in BOF steel slag, which has a negative impact on the stability of steel slag products. Although the contents of free CaO and MgO in BOF steel slag have become very low with the improvement of heat and vapour processing, the stability of steel slag should be considered when it is used as aggregate [37,42,43].
Reaction Mechanism of Alkali-Activated Steel Slag.
Compared to amorphous GBFS, fly ash, and metakaolin, the biggest challenge of steel slag as a precursor is its high degree of crystallization. Therefore, the inorganic polymerization mechanisms of alkali-activated steel slag materials should differ from those of alkali-activated amorphous slag materials. Although few in number, some publications exploring the reaction mechanism of steel slag in alkali-activated materials do exist in the literature. The hydration sensitivity, and even the mechanical behavior, of the material depends on several factors, such as the phase composition and fineness of the precursor, the curing conditions, the alkaline conditions including the initial alkalinity, and the type and concentration of the activator used [37]. Wang et al. [44] varied the pH value of the NaOH solution in NaOH-activated steel slag and studied the effects on the kinds and morphologies of hydration products. They found that although increasing the initial alkalinity could promote the early hydration of active components like C2S, C3S, and C12A7, it had little effect on their late-age hydration degree [44]. As for inert components like the Fe phases, the hydration degree of steel slag remained very low even under strongly alkaline conditions [44]. They also found that changing the alkaline conditions did not change the type of hydration products [44].
The alkaline activator has a very important function. According to research findings, compared to sodium sulfate, sodium hydroxide, and sodium carbonate as activators, liquid sodium silicate (water glass) can activate steel slag more efficiently and is an appropriate activator for alkali-activated steel slag materials [34,37,45,46]. Sun et al. [47] investigated the hydration properties and microstructure characteristics of an alkali-activated steel slag binder. According to their findings, both the hydration processes and the products of water glass-activated steel slag and Portland cement were similar: (i) five hydration stages, including the rapid exothermic stage, the dormant stage, the acceleration stage, the deceleration stage, and the steady stage, and (ii) C-(A)-S-H gel and crystalline Ca(OH)2 as the main hydration products [47]. Increasing the modulus of the water glass solution from 0.5 to 2.0 led to a finer pore structure and higher mechanical strength [48]. Meanwhile, the additional silicate had a retarding effect on the development of the hydration process and the formation of hydration products [48]. However, increasing the modulus had a negligible impact on the type of products of alkali-activated steel slag [48]. In addition, they also conducted detailed comparisons between the alkali-activated steel slag binder and Portland cement at the same water/binder ratio of 0.45 to ensure similar reaction conditions [47]. They found that alkali-activated steel slag showed a faster reaction, fewer hydration products, poorer crystallization of Ca(OH)2, a lower Ca/Si ratio, and a similar Al/Si ratio of the gels compared with Portland cement [47]. Meanwhile, in terms of microstructure, the alkali-activated steel slag hardened paste had more pores and a looser microstructure, with a long-term adverse impact on strength development [47].
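Since several of the studies cited here tune the water glass modulus (Ms, the molar SiO2/Na2O ratio) between 0.5 and 2.0, a minimal stoichiometric sketch of lowering the modulus by adding NaOH is given below; the stock-solution composition and batch size are placeholder values, not quantities from the cited studies.

# Minimal sketch: grams of NaOH needed to lower the modulus of a commercial
# water glass solution, Ms = n(SiO2)/n(Na2O), to a target value.
M_SIO2, M_NA2O, M_NAOH = 60.08, 61.98, 40.00  # g/mol

def naoh_to_reach_modulus(mass_wg, w_sio2, w_na2o, ms_target):
    """2 mol of NaOH supply 1 mol of Na2O equivalent."""
    n_sio2 = mass_wg * w_sio2 / M_SIO2
    n_na2o = mass_wg * w_na2o / M_NA2O
    n_na2o_needed = n_sio2 / ms_target - n_na2o
    if n_na2o_needed < 0:
        raise ValueError("stock modulus is already below the target")
    return 2 * n_na2o_needed * M_NAOH

# Example: 1000 g of a stock solution (27.3% SiO2, 8.5% Na2O, Ms ~ 3.3)
# brought down to Ms = 1.5, a modulus used in several studies cited above.
print(f"{naoh_to_reach_modulus(1000, 0.273, 0.085, 1.5):.0f} g NaOH")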
Liu et al. [49] investigated the early-age evolution, including microstructure and reaction degree, of alkali-activated steel slag from multiple perspectives under a high curing temperature. They used ground steel slag with a specific surface area of 440 m2/kg, an activator with a SiO2/Na2O molar ratio of 2.42, and a curing temperature of 60°C [49]. Their most important contribution is the identification of the type of gel product [49]. According to their findings, nano-C-S-H and nano-C-A-S-H gels first condensed owing to the dissolution of the Si and Al phases, and the formation of C-A-S-H gel then continued at longer curing times as Si-O-Si bonds transformed into Si-O-Al bonds [49]. Kang et al. [50] synthesized a novel CeO2-loaded porous NaOH-activated steel slag-silica fume catalyst for photocatalytic water splitting for hydrogen production, and they found that a three-dimensional polymeric C-S-H gel (Ca1.5SiO3.5·xH2O) was the main phase in the alkali-activated steel-slag-based material.
Properties of Alkali-Activated Steel Slag.
The potential utilization of alkali-activated steel slag as an alternative binder has been drawing much attention recently. Unfortunately, however, its strength is very low even under strongly alkaline conditions. The reason is that the scarcity of active components limits the amount of hydration products formed from steel slag, even though the activating effect of alkaline conditions on steel slag hydration is evident. Wang et al. [44] and Sun et al. [47] found that the strength of alkali-activated steel slag is far below that of cement. The compressive strength of alkali-activated steel slag is only 30%-40% of that of cement slurry [47]. However, adding 20% GBFS can increase the 28-day compressive strength by 40% [51]. Steel slag as a sole precursor is therefore not an ideal material for the production of alkali-activated materials. In most studies on alkali-activated steel slag materials, better cementitious properties are achieved by blending with other materials like GBFS, fly ash, and metakaolin.
When blended with blast furnace slag, alkali-activated GBFS-steel slag material shows significant cementitious properties in the presence of an alkaline activator. You et al. [52] systematically studied the effect of steel slag on the properties of alkali-activated GBFS material at room temperature.
The Na2O content was 4% by total weight of precursors and the modulus of the water glass was 1.5 in all the alkali-activated mortars [52]. The content of steel slag was 50% by mass in the precursor [52]. The hydration process, strength, autogenous and drying shrinkage, pore structure, water absorption, and chloride ion penetration resistance of the mortars were investigated [52]. They found that adding steel slag decreased the hydration heat, prolonged the setting time, and improved workability [52]. Furthermore, incorporating steel slag increased water absorption and reduced autogenous shrinkage, drying shrinkage, and chloride ion penetration resistance [52]. The reason was that the replacement by steel slag significantly increased the total porosity of the matrix because of its lower activity and the consequently smaller amount of reaction products [52]. You et al. [53] also investigated the corrosion behavior of low-carbon steel reinforcement in alkali-activated GBFS and alkali-activated GBFS-steel slag under a simulated marine environment. They found that the corrosion products were hematite and goethite [53]. The addition of steel slag had a beneficial influence on corrosion resistance owing to an improved interfacial transition zone between the reinforcement and the mortar [53].
Several studies have investigated the effects of steel slag on the hydration properties of alkali-activated fly ash materials. Song et al. [54] used steel slag at various replacement levels (0, 10%, 20%, 30%, 40%, and 50% by mass) to replace fly ash in an alkali-activated binary composite material. They evaluated the influence of steel slag on setting times, flowability, viscosity, strength, absorptivity, and microstructural properties under standard curing conditions [54]. Adding steel slag markedly increased the setting times and flowability but decreased the viscosity [54]. The optimum steel slag content was found to be 20%, giving negligible 28-day compressive strength loss and the best flexural strength, elastic modulus, and absorptivity [54]. The strength development was attributed to the formation and coexistence of C-S-H gel and C-A-S-H gel, which exhibited better bonding [54]. Guo and Yang [55] synthesized an engineered cementitious composite from fly ash-steel slag activated by water glass with a modulus of 1.5 and reinforced with polyvinyl alcohol fibers. They likewise concluded that C-S-H gel and N-A-S-H gel, as self-healing products, had a positive effect on the self-healing property [55]. However, Niklioć et al. [56] reached different conclusions about the type of reaction product and the development of compressive strength, owing to the high curing temperature of 65°C at early age. They proposed that the main products were N-(C)-A-S-H gel along with N-A-S-H gel [56]. They found that steel slag contents of up to 30%, within the range of 0%-40%, positively affect strength evolution [56]. The 28-day compressive strength of alkali-activated fly ash mortar containing 30% steel slag exceeded 35 MPa, and the studies by Guo et al. reached the same conclusion [56-58]. Niklioć et al. [56] also evaluated the thermal resistance of alkali-activated fly ash-steel slag materials. They found that steel slag had a negative effect on the thermal resistance, i.e., on the mechanical and dimensional stability above 600°C [56].
Bai et al. [59] and Furlani et al. [60] investigated the effect of the content and fineness of steel slag as a precursor on the properties of alkali-activated metakaolin material. Bai et al. [59] applied three curing conditions (exposed curing at room temperature, sealed curing, and moist curing) and four substitution rates (0, 10%, 20%, and 40%). Mechanical properties, resistance to acid and alkali erosion, and microstructure were investigated [59]. They found that adding 10% steel slag gave the optimum properties and that moist curing was the best curing method [59]. The highest compressive strength and bending strength reached 70 MPa and 8 MPa, respectively [59]. Moreover, the microstructure was enhanced by beneficial physical and chemical reactions between the active components of the steel slag and the metakaolin [59]. In the study by Furlani et al. [60], steel slag with two maximum particle sizes (250 µm and 125 µm) was used to replace metakaolin (0%, 20%, 40%, 60%, 80%, and 100% by mass). They found that the finer steel slag performed better and that 40% steel slag was the best dosage [60]. They attributed the increase in compressive strength to the formation of stronger mechanical bonds replacing part of the original N-A-S-H gel [60].
Besides the binary systems above, steel slag is commonly mixed with slag to form ternary and other composite systems, which is also expected to be an effective way to use steel slag. In the alkali-activated fly ash-GBFS-steel slag ternary system of Song et al. [61], water glass with a modulus of 1.6 was used as the activator, and the GBFS-steel slag composite addition varied from 10% to 50%. The optimum content of GBFS-steel slag was found to be 40% [61]. The setting time, initial flow, and early and later compressive strength of the paste increased in the presence of steel slag [61]. In addition, the brittleness decreased with the addition of steel slag [61]. The additional gel products formed by hydration of the GBFS-steel slag refined the pore structure, which was the main reason for the improvement in strength [61]. In an alkali-activated ultrafine palm oil fuel ash-steel slag composite system, Yusuf et al. [62] evaluated the contributions of steel slag to the compressive strength and shrinkage of pastes and mortars. The dosage of steel slag varied from 0% to 80% for pastes and from 0% to 60% for mortars [62]. They found that steel slag reduced shrinkage by refining pores, eliminating microcracks, and increasing the density and strength of the microstructure [62].
Properties of Copper Slag.
Copper slag (CS) is a by-product generated from the refining of copper. About 2.2 tons of copper slag are produced for each ton of copper produced [63], and about 40 million tons of CS are produced annually in the world [64]. Depending on the cooling process, CS can be divided into two groups, namely, granulated water-cooled slag and air-cooled slag [64]. Granulated CS (GCS) contains an amorphous phase, which mainly consists of iron oxides, silicon dioxide, and calcium oxide [65]. Air-cooled slag, with its slower cooling process, mainly contains crystalline phases consisting of similar chemical components [66]. Figure 2 shows typical XRD patterns of GCS and air-cooled CS. Table 1 shows the chemical composition and mineral composition of copper slag cooled by different processes in other studies. The mineral compositions of granulated water-cooled CS and air-cooled CS usually contain the same components, namely, magnetite (Fe3O4) and fayalite (Fe2SiO4) [67]. Figure 3 shows the statistical chemical content of copper slag from other studies. The common utilization options for copper slag are recovery of the metal and production of value-added products, such as abrasives and cutting tools, tiles, glass, road-base construction, pavement, as well as cement and concrete [74]. Due to its amorphous nature, granulated CS is more active in hydration than air-cooled CS [73], which means granulated CS is more suitable as a supplementary cementitious material, while air-cooled CS is more suitable for use as aggregate in concrete [75,76].
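The two production figures quoted above allow a quick consistency check, sketched below; the result is an implied order of magnitude, not a production statistic.

# Back-of-the-envelope scale of the copper slag stream, using only the two
# figures quoted above (2.2 t slag per t copper; ~40 Mt slag per year).
slag_per_copper = 2.2          # t slag / t copper [63]
annual_slag = 40e6             # t/year [64]
implied_copper_output = annual_slag / slag_per_copper
print(f"Implied world copper output: ~{implied_copper_output / 1e6:.0f} Mt/year")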
Using granulated CS as a supplementary cementitious material involves an optimal dosage of only 5%-15% [77]; a higher dosage of GCS decreases the strength of the cementitious material [78]. Thus, this utilization route alone cannot consume the available GCS. Alkali activation of CS has therefore been investigated by several researchers. CS can be used as a filling material activated by sodium hydroxide [70]. Compressive strengths of about 20-30 MPa have been achieved for alkali-activated CS binders [79-81], which shows that alkali-activated granulated CS is a potential environment-friendly material for replacing cement.
Mechanism of Alkali-Activated Copper Slag.
The mechanism of alkali-activated granulated copper slag differs depending on the activator used [82]. Compressive strength results show that the activation effect of sodium silicate (SS) is better than that of sodium hydroxide (SH) [82]: the compressive strength of the SS-activated binder is 5-6 times higher than that of the SH-activated binder. Mineralogical characterization of alkali-activated CS by XRD shows that different products form when different activators are used. In SS-activated GCS, a weak peak representing poorly crystalline C-S-H was observed. In SH-activated GCS, a sharp peak occurs at a similar position, representing plombierite (tobermorite 14 Å) [82]. The reaction products in CS pastes activated with SS are mostly amorphous C-S-H gels with higher degrees of polymerization, which bind the matrix together with fewer pores. In contrast, the products formed in CS pastes activated with SH contain some highly crystalline plombierite, and the matrix is loose and porous [82]. Moreover, quantitative XRD shows that the original crystalline phases in GCS, especially fayalite and monticellite, are reduced in SS-activated GCS. This can be interpreted as the original crystals in GCS dissolving and participating in the formation of the product [82]. The reaction degree of alkali-activated CS with different activators is consistent with the compressive strength and XRD results: the CS reaction degrees with SH and SS are 37.8% and 47.8%, respectively. The different mechanisms of SH and SS are determined by the pH and the [SiO4]4- concentration. The CS surface dissolves under the attack of OH-, and Ca2+ and [SiO4]4- are released into the solution to form the product. Compared to SH, although the pH of SS is lower, more [SiO4]4-, which dissolves more slowly than Ca2+, is supplied by SS.
This explains why, although the initial reaction rate of SS is slower than that of SH, the total heat release of SS is higher than that of SH [73].
The precipitation of the product might also be hindered when OH- is excessive [82].
Performance of Alkali-Activated Copper Slag.
Alkali-activated GCS has the potential to be used as a construction material. The compressive strength of GCS varies with the activator used. SS is more effective than SH, and a higher SS modulus increases the compressive strength of alkali-activated CS mortar [75]. The 28-day compressive strength of alkali-activated CS can reach 20 MPa, and the strength continues to develop up to 90 days [73].
SH is not a suitable activator for GCS: the 28-day compressive strength of SH-activated GCS is lower than 5 MPa [73,82], and no further strength develops at later ages (90 days) [73]. This might be due to the products of SH-activated CS containing highly crystalline plombierite, which has a small specific surface area and thus loosens the matrix [82].
The shrinkage of alkali-activated CS has also been investigated [81]; the drying shrinkage of alkali-activated CS is higher than that of Portland cement owing to the refined pore structure of alkali-activated CS. Increasing either the alkali content or the modulus increases the shrinkage of alkali-activated CS. The porosity results show that increasing the alkali dosage refines the pore structure [81]. As with Portland cement, the shrinkage was smaller after 14 days. Therefore, a lower alkali content and modulus are suggested as more effective for controlling the shrinkage of alkali-activated CS.
Raw Material Properties of Ferronickel Slag.
Ferronickel slag is an industrial waste obtained from ferronickel alloy production. The ferronickel industry uses two main smelting technologies: the electric furnace method and the blast furnace method. The electric furnace method is currently the main method of ferronickel alloy production, while the blast furnace method is used only in parts of eastern China. According to the differences in raw material and manufacturing technology, ferronickel slag can be categorized as electric furnace ferronickel slag (EFFS) and blast furnace ferronickel slag (BFFS), with different chemical and mineralogical compositions. In addition, the cooling method of the molten slag has an important influence on its composition. The chemical compositions of ferronickel slag obtained from different sources are presented in Table 2. In general, BFFS is composed of SiO2, Al2O3, and CaO and has a large amount of amorphous phase [83,84]. EFFS is mainly composed of SiO2, MgO, and Fe2O3, and its mineral composition consists mainly of crystalline phases, such as enstatite, forsterite, and diopside. EFFS can be divided into air-cooled slag and water-cooled slag depending on the cooling method. Generally, EFFS generated from laterite ore contains a high FeO/Fe2O3 content and low MgO, whereas that from garnierite ore contains low Fe2O3 and high MgO [85-89]. The chemical composition of air-cooled EFFS differs only slightly from that of water-cooled EFFS. However, the glassy phase content of water-cooled EFFS is higher than that of air-cooled EFFS. It can be seen that the nature of ferronickel slag depends on its source and treatment process.
Reaction Mechanism of Alkali-Activated Ferronickel Slag.
The reaction process of alkali-activated ferronickel slag is similar to that of alkali-activated slag/fly ash. The reaction of alkali-activated BFFS can be simplified as in Figure 4. After the addition of the activator solution, the structure of the BFFS is first attacked by the alkaline solution; the BFFS is then depolymerized into low polymers or silicate and aluminate tetrahedral units. Finally, the depolymerized species polymerize to form the crystalline product strätlingite and the amorphous product C-A-S-H, which is responsible for the high strength of alkali-activated BFFS [83]. BFFS tends to form strätlingite rather than hydrotalcite, mainly because of the high aluminum content of BFFS and the small amount of dissolved magnesium, which is mainly present as spinel and forsterite with stable structures that hardly react in alkaline solutions. The Ca/Si and Al/Si ratios of the C-A-S-H are 0.64 and 0.57-1.44, respectively. The difference in composition between EFFS and BFFS results in different hydration products. Hydroxysodalite was found in the reaction products of alkali-activated low-calcium, low-magnesium EFFS blended with kaolinite, which exhibited higher strength [89]. Maragkos et al. [88] found that increasing the OH- concentration enhances the dissolution of silicon and aluminum in EFFS, and a high SiO2/Na2O ratio promotes the condensation reaction. The presence of alkali metal cations plays a catalytic role and has an important influence on gel hardening and crystallization. Compared to NaOH, KOH provides more inorganic polymer precursors, as the larger K+ ion contributes to the formation of larger silicate oligomers, to which Al(OH)4- tends to bind; thus, better solidification and higher compressive strength are obtained [95]. The main product of alkali-activated high-Mg EFFS/fly ash is N-A-M-S-H, in which the magnesium dissolved from the EFFS participates in the reaction and is incorporated into the N-A-S-H molecular structure [99,100]. Three typical phases have been identified in water-quenched high-Mg EFFS, namely, the FNS I, II, and III phases, which are a Mg-Si phase, a Si-Ca-Al phase, and a Cr-Fe phase, respectively [94]. The FNS I phase (Mg-Si phase) dissolves more readily and participates preferentially in the reaction compared with the other two phases. The Mg dissolved from the FNS is mainly involved in the formation of hydrotalcite and N-M-S-H gels. The heavy metals in ferronickel slag mainly include Mn, Cr, and Ni. Wang et al. [83] reported that the alkali-activated matrix has a good stabilization effect on heavy metals, which greatly reduces the risk of heavy metal leaching. Cao et al. [94] found that Cr exists in EFFS in the form of the Cr-Fe phase, which remains stable under alkali activation; Cr could not be detected during leaching. According to Komnitsas et al. [101], alkali-activated EFFS encapsulated heavy metals such as Pb, Cr, and Ni; therefore, the heavy metals could not leach out from the concrete, and the structural integrity was maintained. In summary, alkali-activated ferronickel slag is an environmentally friendly material without heavy metal leaching problems.
Properties of Alkali-Activated Ferronickel Slag.
Wang et al. [83] found that alkali-activated BFFS showed compressive strength comparable to, and 7-day autogenous shrinkage lower than, alkali-activated slag, and that BFFS activated by water glass with Ms = 0.5 reached the highest compressive strength (70 MPa) at 90 days. Xu et al. [92] investigated the effect of the type and content of solid activators on the compressive strength of alkali-activated BFFS. The results showed that alkali-activated BFFS with combined Na2SiO3/Na2CO3 activators has a denser microstructure, lower porosity, and smaller pore size than alkali-activated BFFS with Na2SiO3 or NaOH alone. The compressive strength of the Na2SiO3/Na2CO3 sample can reach 96 MPa at a Na2O content of 0.107 mol. It has also been shown that low-calcium EFFS can be used to prepare alkali-activated materials with superior properties. According to Maragkos et al. [88], the properties of alkali-activated EFFS depend on the solid-to-liquid ratio (S/L). The optimum S/L and NaOH concentration were 5.6 g/mL and 7 M, respectively. Under optimum conditions, the alkali-activated EFFS exhibited a very high compressive strength of 118 MPa and a very low water absorption of about 0.8%. Komnitsas et al. [89] investigated the performance of sodium-silicate-activated EFFS/kaolin-based materials. They found that only the aging period had a significant effect on the final compressive strength, while the heating time and temperature had a negligible effect on strength development. The alkali-activated samples showed excellent resistance to freeze-thaw cycles. However, the strength declined in an acidic environment owing to the formation of halite, magnesium calcite, calcite, aragonite, and akermanite on the surface of the immersed samples. Sakkas et al. [87] evaluated the effect of fire exposure on alkali-activated EFFS materials. The results showed that these samples had a low thermal conductivity and high fire resistance, like commercial fire-resistant materials. It is even possible to prepare alkali-activated ferronickel slag concrete in the ultrahigh-performance category, with strengths of up to 120 MPa. Komnitsas et al. [101] reported that NO3- or SO42- ions reduced the strength of alkali-activated EFFS because they consumed most of the available alkali activator, hindering the polymerization reaction and therefore limiting gel formation.
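As a first-pass illustration of the optimum reported by Maragkos et al. [88], the sketch below converts the 7 M NaOH concentration and the S/L ratio of 5.6 g/mL into batch quantities; the batch volume is arbitrary and solution density effects are ignored, so the numbers are estimates only.

# Sketch of batch quantities for the reported optimum (S/L = 5.6 g/mL,
# 7 M NaOH). The 500 mL batch is a hypothetical choice.
M_NAOH = 40.00                    # g/mol
solution_volume_mL = 500
naoh_g = 7 * M_NAOH * solution_volume_mL / 1000   # 7 mol/L over 0.5 L
slag_g = 5.6 * solution_volume_mL                 # S/L ratio in g/mL
print(f"NaOH: {naoh_g:.0f} g, EFFS: {slag_g:.0f} g per {solution_volume_mL} mL")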
According to the literature, EFFS with a high MgO content is not well utilized because of its low reactivity and because the high magnesium oxide content may lead to volume stability problems [8]. Cao et al. [102] used EFFS with a high MgO content as a partial replacement for blast furnace slag to prepare alkali-activated cements (AACs). The results showed that incorporating less than 40% FNS has no significant effect on the setting time, whereas incorporating 60% FNS not only prolongs the setting time but also reduces the compressive strength. For Na2SiO3-activated AACs, increasing the EFFS content in the mixture increases the autogenous shrinkage, drying shrinkage, and total porosity. For NaOH-activated AACs, the autogenous and drying shrinkage decrease with the addition of EFFS, while the total porosity increases. Kuri et al. [103] reported that replacing part of the fly ash with EFFS reduced workability and shortened the setting time but increased the compressive strength, with 75% EFFS being the optimum content. They found that Mg was involved in the formation of N-M-A-S-H and therefore did not cause volume stability problems, whereas Komnitsas et al. [89,95] argued that magnesium acts as a chemically inert component in alkali-activated EFFS materials; the outcome is closely related to the source of the ferronickel slag. Yang et al. [100] further suggested that replacing some of the fly ash with EFFS can improve the thermal stability of alkali-activated materials and effectively reduce their shrinkage below 600°C.
Characterization of Lead-Zinc Slag.
Lead-zinc slag is a by-product of the lead and zinc production industry, generated from the ores during smelting [104,105]. It is reported that the production of 1 t of lead and 1 t of zinc discharges 7,100 kg and 9,600 kg of slag, respectively [24]. These lead-zinc slags are generally landfilled, not only occupying large areas of arable land but also polluting the environment through the leaching of heavy metals and the radiation of nuclides. The chemical composition of lead-zinc slag changes depending on the ores, the fluxes, and the smelting process; the compositions reported in [24,104-123] are summarized in Table 3. Table 3 shows that the major constituents of lead-zinc slag are FeOx, SiO2, CaO, Al2O3, and ZnO. As shown in Figure 5, the major constituents, in decreasing order of wt%, are FeOx (34%, ranging from 27% to 37%), SiO2 (28%, ranging from 21% to 33%), CaO (17%, ranging from 14% to 23%), ZnO (8%, ranging from 3% to 10%), and Al2O3 (5%, ranging from 2% to 8%). Figure 5 also shows that the contents of the major constituents of lead-zinc slag differ significantly between areas and smelting plants. Some heavy and toxic elements, such as Pb, Zn, Cd, Cr, Mn, and Cu, can be found in lead-zinc slag, which limits its utilization owing to the large leaching risk. The density of lead-zinc slag ranges from 3.6 g/cm3 to 3.9 g/cm3 [105,118,124], larger than that of traditional aluminosilicate wastes because of the high iron oxide content. The phase composition of lead-zinc slag also changes greatly depending on the ores, the fluxes, the smelting process, and the cooling method. Lead-zinc slag has been reported to consist mostly of an iron-silica-lime glass matrix, with a glass phase content generally larger than 80% [24,115,117]. The kinds of crystal phases in lead-zinc slag likewise depend on the ores and fluxes. Figure 6 provides an example of the XRD pattern of lead-zinc slag, which confirms that most components are amorphous. As shown in Figure 6, the crystal phases in lead-zinc slag are ZnS, FeS, FeO, Fe3O4, and Pb metal, consistent with the results reported by Weeks [105].
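The summary statistics quoted above are collected below in one structure so that the spread between sources is easy to inspect; the values are the review's own figures, not new data.

# Mean and range (wt%) of the major lead-zinc slag constituents quoted above.
composition = {  # oxide: (mean, min, max)
    "FeOx":  (34, 27, 37),
    "SiO2":  (28, 21, 33),
    "CaO":   (17, 14, 23),
    "ZnO":   (8,  3, 10),
    "Al2O3": (5,  2,  8),
}
for oxide, (mean, lo, hi) in composition.items():
    print(f"{oxide:>5}: {mean:2d} wt% (range {lo}-{hi})")
print(f"Major oxides account for ~{sum(v[0] for v in composition.values())} wt%")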
Reaction Mechanism of the Alkali-Activated Lead-Zinc Slag.
Similar to blast furnace slag [127], fly ash [128], and rice husk ash [129], most of the lead-zinc slag is composed of amorphous aluminosilicate phases. The reaction process of alkali-activated lead-zinc slag is therefore similar to that of traditional alkali-activated materials: dissolution and dispersion of raw materials, rearrangement and exchange of the dissolved species, gelation and solidification, and continuous gel evolution toward crystallization [130,131]. The leaching risk of the heavy metals in lead-zinc slag is the key factor limiting its application in construction materials. Alkali-activated material is an effective system for the solidification of heavy metals. However, the high Si content and low Al content of lead-zinc slag make it difficult to form a rigid Al-Si network structure [117]. The lack of AlO4- units tends to decrease the capacity of alkali-activated materials for element immobilization, since heavy metals bond mainly to the alumina tetrahedra [132]. Consequently, some studies have mixed lead-zinc slag with fly ash and alkalis to form a geopolymer [113,117,119]. The morphology of the fly ash-zinc slag composite geopolymer was reported by Nath [117], who found that the reaction products were refined with increasing zinc slag content (see Figure 8). Zhang [24] also studied the self-cementation of lead-zinc slag through alkali activation and found that the solidification efficiency was larger than 90% for most of the heavy metals. Physical encapsulation was found to be the main mechanism of heavy metal solidification in alkali-activated materials.
Properties of the Alkali-Activated Lead-Zinc Slag Material.
The performance of alkali-activated materials containing lead-zinc slag also depends on the properties of the raw materials. Xia et al. [122] found that the compressive strength of the hardened alkali-activated material decreases with increasing lead-zinc slag content, i.e., the lead-zinc slag has a negative effect on the performance of the alkali-activated material. It was suggested that the high iron content in lead-zinc slag tends to be oxidized, and the oxidation may increase the porosity and volume of the solidified body, ultimately decreasing the compressive strength. In contrast, Nath [117] found that the relatively high CaO content in lead-zinc slag results in the formation of a Ca-rich dense gel and the development of a compact microstructure: the 28-day compressive strength of alkali-activated zinc slag reached 71 MPa, and the highest compressive strength even reached 96 MPa. This shows that the scatter in the chemical composition of the raw materials significantly affects the mechanical performance of alkali-activated lead-zinc slag.
Conclusion and Outlook
The main mineral phases in steel slag are tricalcium silicate, dicalcium silicate, the RO phase, tetracalcium aluminoferrite, etc. The phase composition of steel slag is similar to that of cement, so steel slag has potential cementitious properties. However, the low activity of steel slag is the biggest obstacle and challenge to the proper utilization of alkali-activated steel slag materials. Moreover, there is great uncertainty about the hydration mechanism of steel slag as a sole precursor.
Copper slag is mainly composed of the crystal phases magnetite (Fe3O4) and fayalite (Fe2SiO4). The utilization of copper slag in alkali-activated materials depends on its cooling process: granulated water-cooled copper slag, with a relatively higher amorphous phase content, is more suitable for alkali activation. In terms of the activator, sodium hydroxide is less effective than water glass owing to the formation of highly crystallized products. Although many studies have evaluated the mechanical properties of alkali-activated copper slag, heavy metal leaching assessment should be considered in future studies, as copper slag usually contains heavy metals.
BFFS has good reactivity owing to its large amount of amorphous phase; therefore, alkali-activated BFFS has superior mechanical properties, with strätlingite and C-A-S-H as the reaction products. The reactivity of EFFS is closely related to its source and treatment process. EFFS generated from laterite ore contains low MgO, whereas that from garnierite ore contains high MgO. The amorphous phase content of low-Mg EFFS is high, while that of high-Mg EFFS is low, the latter containing a large number of Mg-bearing crystals as its main mineral phases, namely, forsterite (Mg2SiO4), enstatite (MgSiO3), and clinoenstatite (MgSiO3). Good properties can be obtained in fly-ash-based alkali-activated materials by incorporating EFFS, as the generation of the amorphous phase N-M-A-S-H leads to a denser microstructure. There is no negative environmental impact from the utilization of ferronickel slag in alkali-activated materials. The chemical composition of lead-zinc slag changes significantly depending on the ores, the fluxes, the smelting process, and the impurities in the coke and the iron. Thus, the phase composition and reaction activity of lead-zinc slag, and the mechanical performance of alkali-activated lead-zinc slag, reported in different papers are quite inconsistent. Generally, lead-zinc slag is composed of iron-silica-lime amorphous phases. In order to improve the mechanical properties and the solidification efficiency for heavy metals, lead-zinc slag is usually mixed with fly ash or granulated blast furnace slag to form a geopolymer.
There are few studies that fully characterize the properties of alkali-activated metallurgical slag composite systems, and in-depth research on durability is still an open topic. Better activation methods and alkali activators are still needed to improve the performance of metallurgical slags with low reactivity. Acid activation may be an alternative route, given the high contents of iron, calcium, and magnesium in metallurgical slags. The chemical composition of a metallurgical slag depends closely on the ores, the fluxes, and the smelting process.
Thus, the relationship between the chemical composition and the reactivity of metallurgical slags needs to be established. If these problems are solved, great environmental benefits for slag yards and enormous economic benefits for the steel industry will follow.
Data Availability
Previously reported data were used to support this study. These prior studies are cited at relevant places within the text as references.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
Domain Walls in Strongly Coupled Theories
Domain walls in strongly coupled gauge theories are discussed. A general mechanism is suggested which automatically leads to massless gauge bosons localized on the wall. In one of the models considered, outside the wall the theory is in the non-Abelian confining phase, while inside the wall it is in the Abelian Coulomb phase. The confining property of non-Abelian theories is a key ingredient of the mechanism, which may be of practical use in the context of dynamical compactification scenarios. In supersymmetric (N=1) Yang-Mills theories the energy density of the wall can be exactly calculated in the strong coupling regime. This calculation presents a further example of non-trivial physical quantities that can be found exactly by exploiting specific properties of supersymmetry. A key observation is the fact that the wall in this theory is a BPS-saturated state.
Introduction
Domain walls are inherent to field theories with spontaneously broken discrete symmetries. This phenomenon, the occurrence of domain walls, is quite common in solid-state physics. In high energy physics such symmetries are present in many models too, for instance the discrete symmetry associated with CP. Domain walls then naturally appear in the course of the evolution of our Universe and must be taken into account in cosmological considerations. Discussion of the issue started over twenty years ago [1]. Recently, domain walls in supersymmetric (SUSY) theories (along with other topological or non-topological defects) were proposed as a possible mechanism for dynamical compactification which, simultaneously, ensures spontaneous SUSY breaking [2]. Within this approach, the matter our world is built from is nothing but the zero modes localized on the wall.
In this Letter we consider new classes of domain walls which appear in strongly coupled gauge theories. The first model is an example of a domain wall which traps massless gauge particles on the wall. Localizing the gauge bosons in the core of the defects is important for building realistic phenomenology based on dynamical compactification scenarios [2]. Spinless bosons and spin-1/2 fermions whose interactions are properly arranged admit localized zero modes on the wall, a well-known fact. To the best of our knowledge, no fully satisfactory mechanism for localizing massless vector gauge fields has been suggested so far, although attempts in this direction have been reported in the literature (e.g. [3]).
The second example of an unusual domain wall is specific to SUSY gauge theories. This particular wall is not suitable for dynamical compactification. Nevertheless, we find this object extremely interesting since it has a remarkable property: although the wall appears in a strongly coupled theory, its energy density per unit area ε is exactly calculable. In a sense, the situation is reminiscent of the monopole mass in the Seiberg-Witten solution [4], although we deal with N = 1 supersymmetry, not N = 2. The explicit calculation of ε, to be carried out below, helps clarify an old question in supersymmetric Yang-Mills theory which is the focus of ongoing polemics. It is known that supersymmetric gluodynamics possesses a Z_{2T(G)} symmetry, a remnant of the anomalous U(1) [5]. (Here T(G) is the Dynkin index of the adjoint representation of the gauge group G, normalized in such a way that for SU(N) the index T(SU(N)) = N.) A controversy continues as to whether this Z_{2T(G)} is spontaneously broken in the standard way, implying conventional domain walls, or whether there is an additional superselection rule, implying that the vacuum angle θ varies not from 0 to 2π, as usually believed, but rather from 0 to 2πT(G). Arguments pro and contra were given; they are summarized in Ref. [6]. Our result, confirming a finite value of ε, favors the first option.
Massless gauge fields on the wall
The mechanism we suggest does not depend on whether or not the theory at hand is supersymmetric. To elucidate the idea we consider a simple (non-supersymmetric) example. Assume we have an SU(2) Yang-Mills theory whose matter sector contains: (i) one left-handed and one right-handed fermion doublet field, (ψ_L)^α and (ψ_R)^α, in the fundamental representation of SU(2); (ii) one scalar field χ^a in the adjoint representation; and (iii) one real scalar field η carrying no color indices. The interaction Lagrangian is built from the gluon field strength tensor G^a_{μν}; v and κ are positive parameters of dimension of mass, assumed to be much larger than the scale parameter Λ of the SU(2) gauge theory at hand (footnote 1), λ and λ' are (small) dimensionless coupling constants, and g is the gauge coupling constant.
The theory is obviously Z_2 invariant under the transformation η → −η, ψ_L → iψ_L, ψ_R → −iψ_R. In the true vacuum of the theory this symmetry is spontaneously broken, and the field η develops a vacuum expectation value (VEV), ⟨η⟩ = ±v (Eq. (2)). Correspondingly, the self-interaction potential for χ is stable, and the gauge SU(2) is not spontaneously broken. The theory is in the confining phase. All observable degrees of freedom are bound states of gluons and/or matter fields, with masses (footnote 2) of order Λ. The mass of the η quantum is of order m ∼ √λ v. The theory has a stable domain wall interpolating between the two vacua of Eq. (2). For definiteness the wall is placed in the {x, y} plane; the width of the wall in the z direction is of order m^{-1}.
Let us consider gauge-non-singlet massless modes localized on the wall. First, there are two massless fermion doublet modes localized on the membrane, built from a spinor ν that depends only on x, y and t and satisfies γ_z ν = ν. Localization of these modes is due to the index theorem [7] and has nothing to do with the gauge dynamics. This is simply because of the topologically non-trivial boundary conditions on the fermion mass, m_ψ(−∞) = −m_ψ(+∞) = −hv. The localization scale is governed by h and becomes infinite if h → 0. One of the crucial observations of the present work is that the gauge-charged fermions (or scalars), as well as massless gauge fields, can stay localized on the wall even in the limit h = 0 due to the confining gauge dynamics outside the wall (see below).
Footnote 1: More exactly, we will assume that λv² ≪ Λ² but √λ v² ≫ Λ². The latter requirement is introduced for simplicity. If this requirement is imposed we can ignore the shift of the vacuum energy due to the gluon condensate outside the wall.
Footnote 2: In the SU(2) gauge theory with one quark flavor there is an "accidental" global SU(2) symmetry due to the fact that an anti-doublet in SU(2) is the same as a doublet. If this global symmetry is spontaneously broken we might get massless "pions". This is not a serious problem, however. To avoid massless "pions" it suffices to consider gauge SU(3). Another possibility is to assume that hv ∼ Λ, in which case the would-be pions acquire a mass of order Λ.
To see that this is indeed the case, consider the behavior of χ in the classical wall background. As mentioned, far away from the wall, where η is close to v, the self-interaction potential for χ is stable and there is no spontaneous breaking of the gauge SU(2). However, inside the wall η ≈ 0, and the self-interaction potential for χ becomes unstable. It is not difficult to check that for a wide range of parameters χ becomes tachyonic in the core and develops a vacuum expectation value. Indeed, consider the linearized equation for small perturbations χ^a = δ^{3a} χ_0 e^{−iωt} in the kink background of Eq. (4). This equation, say for κ = 0, is a one-dimensional Schrödinger equation with a negative-definite potential, which is known to have a normalizable bound-state solution with negative ω². By continuity, this bound-state solution persists for a finite range of non-vanishing κ. Thus, χ becomes tachyonic, marking an instability of the χ = 0 solution in the core of the defect. This means that inside the wall the SU(2) gauge symmetry is spontaneously broken down to U(1). Two out of three gluons acquire very large masses of order v. The third gluon becomes a photon. Two degrees of freedom of the χ^a field are eaten up by the Higgs mechanism; the remaining degree of freedom is neutral. The "quarks" ψ^α have charges ±1/2 with respect to the surviving photon.
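The bound-state argument can be checked numerically. The sketch below diagonalizes the operator H = -d²/dz² + U(z) for small χ fluctuations on a finite grid, with U(z) = c1 tanh²(z) - c2 standing in for the (model-dependent) χ mass term in the kink background; the couplings and the unit-width tanh profile are illustrative placeholders, not the parameters of Eq. (4).

import numpy as np

n, L = 1201, 40.0                       # grid points, box size
z = np.linspace(-L / 2, L / 2, n)
dz = z[1] - z[0]
c1, c2 = 2.0, 1.5                       # placeholder couplings
U = c1 * np.tanh(z) ** 2 - c2           # negative well in the wall core,
                                        # asymptotically positive (+0.5)

# Finite-difference Hamiltonian for -d^2/dz^2 + U with Dirichlet boundaries
H = (np.diag(2.0 / dz**2 + U)
     - np.diag(np.ones(n - 1) / dz**2, 1)
     - np.diag(np.ones(n - 1) / dz**2, -1))
omega2_min = np.linalg.eigvalsh(H)[0]
print(f"lowest omega^2 = {omega2_min:.3f}")  # ~ -0.5: localized tachyonic mode

A negative lowest eigenvalue with a normalizable eigenfunction signals exactly the tachyonic core mode described above: χ condenses inside the wall while remaining stable outside, where U is positive.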
Let us have a closer look at the theory emerging in this way. Outside the wall the theory has a wider gauge invariance, SU(2), and is in the non-Abelian confining phase. The U(1) gauge invariance is maintained everywhere, inside and outside the wall. The light degrees of freedom inside the wall are massless "quarks" interacting through photon exchange. We disregard the non-interacting degrees of freedom. The theory inside the wall is in the Abelian Coulomb phase. The photon and the light "quarks" cannot escape into the outside space because there they become part of the SU(2) theory, which has no states lighter than Λ. A three-dimensional observer confined inside the wall needs energies of order Λ to be able to feel that his/her Universe is actually embedded in a four-dimensional world.
Let us note parenthetically that the Abelian Coulomb phase in 2+1 dimensions confines electric charges, since the potential grows logarithmically with distance. The three-dimensional electromagnetic coupling constant α will be of order m; this is also the typical mass of the neutral bound states, whose size will be of order L ∼ (µm)^{−1/2}. (Recent work [9] discusses a related issue: the behavior of fermion zero modes trapped in a (2+1)-dimensional wall in a delocalized electromagnetic field dispersed in four dimensions.) Needless to say, a similar mechanism will work for trapping, say, SU(2) gluons inside a wall submerged into, say, an SU(3) environment, or in any other problem of this type.
Domain wall in supersymmetric gluodynamics
Consider the simplest SUSY gauge theory, supersymmetric SU(2) gluodynamics: gluons coupled to the gluino field λ (in the Weyl representation). Witten's index of this theory is 2, which means that it has two degenerate supersymmetric vacua [5]. The vacua are marked by the order parameter λλ (note that the order parameter is λ² or λ^{†2}, not λλ^†). One can always adjust the vacuum angle θ in such a way that the corresponding VEV is ⟨Tr λλ⟩ = ±Λ³, where Λ is the scale parameter of SUSY gluodynamics. The θ dependence of the condensates can be found on general grounds [10,11]; at θ = 2π the two vacua interchange. In this Letter we limit ourselves to θ = 0. The Z_4 symmetry of the model is spontaneously broken by the gluino condensate (8) down to Z_2. If this is a conventional discrete symmetry breaking, there must exist a wall interpolating between the two vacua. The explicit construction of the wall, routinely done in weakly coupled theories, is impossible here, however, since λ² is a composite operator and we are in the strong coupling regime. A way out can be indicated.
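For orientation, a standard form of the supersymmetric gluodynamics Lagrangian and of the θ-dependence of the two SU(2) condensates can be written as follows; normalization conventions vary between papers, so the coefficients should be read as indicative rather than as the exact expressions of the original equations:

\mathcal{L} \;=\; \frac{1}{g^{2}}\left[-\frac{1}{4}\,G^{a}_{\mu\nu}G^{a\,\mu\nu}
  + i\,\lambda^{\dagger a}\,\bar{\sigma}^{\mu}(D_{\mu}\lambda)^{a}\right],
\qquad
\langle \mathrm{Tr}\,\lambda\lambda \rangle_{k}
  \;=\; \Lambda^{3}\, e^{\,i(\theta + 2\pi k)/2}, \quad k = 0, 1,

so that at θ = 0 the two condensates are ±Λ³, and the shift θ → θ + 2π interchanges the vacua, as stated above.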
The key point is as follows. The superalgebra of the model under consideration has a very peculiar central extension which reveals itself only in wall-like situations. How N = 1 SUSY can have a central extension is explained in detail in Ref. [2]. Briefly, there exists a trivially conserved "current" J_{μνα} = ε_{μναβ} ∂^β λ²; the corresponding charge is an antisymmetric tensor assuming non-vanishing values only in the presence of walls. More specifically, in Eq. (9), Q^† is the supercharge, and Σ_i = σ_i σ_2, where σ_i stands for the Pauli matrices. Equation (9) is a quantum anomaly which will be discussed in more detail in subsequent publications. For completeness, the anomaly can be written for the general gauge group SU(N_c) rather than for SU(2).
In the true vacuum, when all excitations are localized, the integral on the right-hand side vanishes identically. On the wall it reduces to a surface term (for SU(2)), i.e. a non-vanishing central charge emerges. In the sector with a given (non-zero) value of the central charge, the masses of all states can be shown to satisfy a quantum Bogomol'nyi bound: the energy per unit area is bounded from below by the central charge. The lower bound is achieved on BPS-saturated states [12,13]. The domain wall in supersymmetric gluodynamics has to be such a state. Although we have no rigorous proof of this statement, we see no reason which could prevent BPS saturation. Moreover, in weakly coupled models with a non-vanishing central charge one can easily verify that the wall is a BPS-saturated state; see e.g. the so-called minimal wall in Ref. [2]. For the minimal wall the equality between ε and the central charge is explicitly derived in [2]. (A similar relation was first observed in a two-dimensional model in [12].) For BPS-saturated states one half of the supersymmetry transformations act trivially, i.e. supersymmetry is preserved.
The only scenario with no BPS-saturated states is when supersymmetry is completely broken by the solution. Although we mention this possibility, it is hard to imagine that it is realized in SUSY gluodynamics. Additional arguments in favor of the BPS-saturated wall are provided by the consideration of weakly coupled SQCD plus holomorphy arguments; see below.
If the wall in SUSY gluodynamics is a BPS-saturated state (with half of the SUSY transformations acting trivially), the energy density of the wall is nothing but the value of the central charge q, ε = |q|. It remains to be added that the gluino condensate in supersymmetric gluodynamics was exactly calculated in Ref. [10] for unitary and orthogonal groups, and in Ref. [14] for all other groups. Moreover, the wall profile of the order parameter λλ is related to the Lagrangian density. For an arbitrary gauge group there are (1/2)T(G)[T(G) − 1] different walls, labeled by integers k and ℓ running from 0 to T(G) − 1, whose energy densities are set by the corresponding condensates. The fact that ε is proportional to the gluino condensate is not surprising by itself, since both quantities are of order Λ³. What is non-trivial is the exact proportionality coefficient. Inside the wall there live a massless composite boson and a massless composite fermion; half of the supersymmetry is preserved. If one does not want to rely on the previous results on the gluino condensate, one can calculate ε directly, using the same strategy as was first suggested in Ref. [10] for the exact calculation of the gluino condensate. Namely, one additional quark flavor is introduced, with mass term m. Thus, the original supersymmetric gluodynamics is replaced by SQCD with one flavor. If the scale parameter of SQCD with one flavor is Λ and m ≪ Λ, the theory turns out to be in the weakly coupled Higgs phase [15]. Then the construction of the domain wall can be carried out, and ε calculated. The result depends on the bare mass parameter m in a holomorphic way; therefore, the exact m dependence can be found, much in the same way as in Refs. [10,11]. Then we can take m → ∞, making the matter fields very heavy. If they are very heavy, they can be integrated out, and we return to SUSY gluodynamics. The result for ε will still be valid. The last step is to express the parameters of SQCD with one light flavor in terms of the parameters relevant to SUSY gluodynamics.
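A worked form of this statement, using the tension normalization commonly quoted in later literature (the 1/8π² coefficient is an assumption about conventions, not taken from this source), is:

\varepsilon_{k\ell} \;=\; \frac{T(G)}{8\pi^{2}}
  \left|\langle \mathrm{Tr}\,\lambda\lambda\rangle_{k}
      - \langle \mathrm{Tr}\,\lambda\lambda\rangle_{\ell}\right|
  \;=\; \frac{T(G)}{8\pi^{2}}\,\Lambda^{3}
  \left|e^{2\pi i k/T(G)} - e^{2\pi i \ell/T(G)}\right|,
\qquad 0 \le k < \ell \le T(G)-1 .

For SU(2), with T(G) = 2, the single wall connecting the two vacua then carries ε = Λ³/(2π²).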
Let us outline some basic elements of the procedure. The structure of the SU(2) model with one flavor is exhaustively described in the review [16]; the reader is referred to this paper for details and definitions. One flavor comprises two chiral superfields S^{αf}, where α is the SU(2) index and f is the subflavor index, f = 1, 2. Classically the model has a one-dimensional vacuum valley (flat direction) parametrized by the VEV of the composite operator S² = S^{αf} S_{αf}. Quantum-mechanically, a superpotential is generated along the valley [15]; this is the superpotential of Eq. (12), in which we have also included the (tree-level) mass term. The latter stabilizes the theory, eliminating the run-away vacuum. Note that the mass term explicitly breaks the original continuous R symmetry of the theory down to Z_2, under which S² → −S². It is the spontaneous breakdown of this discrete subgroup in the vacuum that gives rise to a domain wall solution. If m is small, the fields residing in S² are light, the vacuum expectation value of S² is large, the SU(2) symmetry (as well as Z_2) is spontaneously broken, and the gluons acquire a large mass (so that they are actually W bosons) and can be integrated out. As a result of this integration, the superpotential in Eq. (12) is generated through instantons. Equation (12) is exact: it has neither perturbative nor non-perturbative corrections [15,10]. The vacuum expectation values of S², Eq. (13), come in a pair of opposite signs. The low-energy theory is that of one chiral superfield; it resembles the Wess-Zumino model [17]. The only difference is a slightly unusual form of the superpotential, but its particular form is unimportant for our purposes. The domain wall in the Wess-Zumino model was discussed in detail in Ref. [2] (the "minimal wall").
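A sketch of the expected structure of Eqs. (12) and (13), written up to scheme-dependent numerical coefficients (the instanton piece is the Affleck-Dine-Seiberg superpotential for SU(2) with one flavor; the 1/4 in the mass term is a placeholder choice):

W(S^{2}) \;=\; \frac{\Lambda^{5}}{S^{2}} + \frac{m}{4}\,S^{2},
\qquad
\frac{\partial W}{\partial S^{2}} = 0
\;\Longrightarrow\;
\langle S^{2} \rangle \;=\; \pm\, 2\left(\frac{\Lambda^{5}}{m}\right)^{1/2},

so that the difference of W between the two vacua scales as Λ^{5/2} m^{1/2}, matching the m and Λ dependence of the central charge found below.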
In the case at hand it is possible, in principle, to find an explicit solution interpolating between the two vacua of Eq. (13), as a function of z, which will be valid almost everywhere. A small interval near the origin, of size of order Λ^{-1}, where the value of S² is small, is a strong coupling region. The semiclassical description of the wall in terms of the single superfield S² is invalid here, since in this region the excitations corresponding to composite gauge-invariant operators generically have masses of order Λ. A correct description requires many degrees of freedom. Outside this narrow region the wall is properly described semiclassically by a profile of S². The width of the wall in the z direction is of order m^{-1}. Thus, our wall is a two-component construction. Our ignorance of the small central region does not preclude us from calculating the wall energy density ε exactly, provided that the description is continuous. Indeed, the central charge q is related to the integral of the z-derivative of the superpotential W(S²) across the wall.
Conclusions
The peculiar features of the domain wall discussed above are due to intricacies of the non-Abelian gauge dynamics.The confining property of gluodynamics and the dynamical mass gap generation is the basic element of the mechanism we suggested for trapping massless gauge bosons inside the wall.A task that lies ahead is using this mechanism in the context of the dynamic compactification scenarios outlined in [2].
As it happened more than once in the past, miracles of supersymmetric (N = 1) gauge dynamics allowed us to exactly calculate the energy density of the supersymmetric wall in the strong coupling regime.This new example of non-trivial physical quantity that can be found exactly by exploiting specific properties of supersymmetry is interesting by itself.Moreover, it settles the issue of spontaneous breaking of Z 2T (G) in supersymmetric gluodynamics versus a new superselection rule, which was debated over a decade, in favor of the first option.
Genes Involved in Human Ribosome Biogenesis are Transcriptionally Upregulated in Colorectal Cancer
Microarray gene expression profiling comprising 168 colorectal adenocarcinomas and 10 normal mucosas showed that over 79% of the genes involved in human ribosome biogenesis are significantly upregulated (log2 > 0.5, p < 10^-3) when compared to normal mucosa. Overexpression was independent of microsatellite status. The promoters of the genes studied showed a significant enrichment for several transcription factor binding sites. There was a significant correlation between the number of binding site targets for these transcription factors and the observed gene transcript upregulation. The upregulation of rRNA processing genes points towards a coordinated process enabling the overproduction of matured ribosomal structures.
Introduction
Protein translation and ribosome biogenesis are essential cellular processes that are tightly regulated at multiple levels. Ribosome biogenesis processes and assembles precursor rRNA into mature rRNA, which together with ribosomal proteins forms the mature ribosome. Individual components of this machinery are deregulated in cancer. Increased cellular growth or proliferation requires an enhanced protein content and protein synthesis [1].
In colorectal cancer (CRC), the third most frequent form of cancer worldwide [2], overexpression and differential expression of several ribosomal proteins have been reported [3,4].
The contribution of the PES1-BOP1 complex, involved in ribosomal biogenesis, has been studied, showing that BOP1 is upregulated in CRC. BOP1 upregulation is associated with increased gene copy number, suggesting that BOP1 overexpression may be one of the main oncogenic consequences of 8q24 amplification in CRC [5].
In order to have a more general vision of the rRNA maturation process we have analyzed the expression pattern of the genes that comprise Coute's [6] human ribosome biogenesis dynamics model and found a significant (p < 10⁻³) upregulation of more than 79% of the genes associated with this model when CRC adenocarcinomas are compared to matching normal mucosa.
There are two major molecular subgroups in CRC: microsatellite stable (MSS) [7] and microsatellite unstable (MSI) adenocarcinomas, the latter representing approximately 15% of the total incidence and associating with a better prognosis [8]. In this report we show that the upregulated transcript profile was evenly distributed among the MSS and MSI subgroups, except for one gene, DDX27, that differs from this pattern. Correlation of expression to chromosomal gain showed that only 20% of the transcriptional upregulation in MSS specimens was linked to a possible gene dosage effect.
We identified several transcription factors (TFs) with putative binding sites overrepresented in the promoters of the ribosome biogenesis genes when compared to all genes.
In conclusion, the observed transcriptional upregulation of multiple rRNA processing genes may favour the enhanced production of fully processed rRNA that in combination with the protein components may enable the overproduction of mature ribosomal structures in CRC.
Materials and Methods
In a previous microarray transcript profiling study [9] we analyzed 168 colorectal tissue samples and ten normal mucosas. Microsatellite status determination and sample processing were carried out as described [9]. Biotin labeled cRNA was prepared from 10 μg of total RNA and hybridized to the Human Genome U133plus2.0 GeneChip (Affymetrix) containing >55,000 probe sets. The readings from the quantitative scanning were analyzed by the Affymetrix software MAS5.0. The resulting cell files for all 178 samples were imported into ArrayAssist version 3.3 (Stratagene) and data were normalized using GC-RMA. The expression profile of the human model of ribosome biogenesis genes [6] was analysed. Probes that did not correctly match the NCBI database sequences were excluded from further analysis. Median log2 values and standard deviations from CRC samples and normal matching mucosa biopsies were calculated for the set of studied genes. When two or more probes from the same gene were available, a mean value was used.
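As a minimal illustration of the summary statistics just described (the data layout and numbers below are hypothetical placeholders, not the original analysis pipeline), one can compute per-group median log2 values for each probe and average probe-level ratios when a gene has several probes:

    import numpy as np

    # expr[probe] = (log2 values in tumours, log2 values in normal mucosas),
    # already GC-RMA normalized; values are made up for illustration only.
    expr = {
        "probe_A1": (np.array([9.1, 9.4, 9.0]), np.array([8.2, 8.3])),
        "probe_A2": (np.array([9.3, 9.5, 9.2]), np.array([8.4, 8.2])),
    }
    probes_per_gene = {"GENE_A": ["probe_A1", "probe_A2"]}

    for gene, probes in probes_per_gene.items():
        # median per group, then mean over the gene's probes
        ratios = [np.median(expr[p][0]) - np.median(expr[p][1]) for p in probes]
        print(f"{gene}: median log2 ratio = {np.mean(ratios):.2f}")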
The 170 genes cited in the human model of ribosome biogenesis dynamics [6] were used for chromosomal distribution analysis and transcriptome correlation map studies and 166 genes were analyzed in the microarray study due to probe restrictions.
Transcription Factor Analysis.
An analysis for enrichment of transcription factor binding sites in the promoters of the 166 genes mentioned above was performed using the Java-based tool Expander4.0.1 (Expression Analyzer and DisplayER). Expander utilizes the PRIMA (PRomoter Integration in Microarray Analysis) software to identify transcription factors whose binding sites are significantly overrepresented in a given set of promoters. All genes were used as background in the analysis. The promoter region analysed for each gene started 1000 bases upstream of the transcription start position and ended 200 bases downstream of it. The threshold P-value was set to .0001. The program was run with no multiple test correction (default setting). Bonferroni correction and a threshold P-value of .001 retrieved the same transcription factors.
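For readers who want a feel for what such an enrichment analysis computes, the following sketch applies a one-sided hypergeometric test, the statistic underlying many binding-site enrichment tools; this is an illustration only, not the actual PRIMA implementation, and all counts are hypothetical:

    from scipy.stats import hypergeom

    def tfbs_enrichment_p(n_background, n_background_hits, n_target, n_target_hits):
        """P(X >= n_target_hits) when drawing n_target promoters from a
        background of n_background promoters, of which n_background_hits
        contain at least one binding site for the TF."""
        # sf(k - 1) gives P(X >= k) for the hypergeometric distribution
        return hypergeom.sf(n_target_hits - 1, n_background,
                            n_background_hits, n_target)

    # hypothetical counts: 166 target promoters, ~20,000 background genes
    print(f"p = {tfbs_enrichment_p(20000, 2500, 166, 60):.2e}")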
Correlation Study.
For the 106 genes that contain at least one putative binding site for NRF1, HIF1A or ELK1, we summed the number of transcription factor binding sites (TFBSs) for these three TFs in each promoter. A regression analysis was done to study the correlation between the overexpression of these 106 genes and the number of TFBSs in their promoter regions.
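A minimal sketch of this regression, with placeholder arrays standing in for the real per-gene data, could look as follows:

    import numpy as np
    from scipy.stats import linregress

    # summed NRF1/HIF1A/ELK1 binding-site counts and observed log2
    # upregulation per gene; both arrays are illustrative placeholders
    n_tfbs = np.array([0, 1, 1, 2, 3, 4, 5])
    log2_up = np.array([0.4, 0.6, 0.8, 0.9, 1.1, 1.2, 1.5])

    fit = linregress(n_tfbs, log2_up)
    print(f"slope={fit.slope:.3f}, R^2={fit.rvalue**2:.3f}, p={fit.pvalue:.4f}")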
Results
We have studied the transcription profile of the genes involved in ribosome biogenesis dynamics, as described in Coute's model [6], in CRC adenocarcinomas. In our study 168 CRC specimens, mostly stage II, were compared to 10 normal matching mucosas (Table 1(a)). Strikingly, 78.9% of the 166 genes studied were significantly (p < 10⁻³) upregulated with a log2 ratio >0.5. This upregulation greatly contrasts with the overall tendency of all probes tested, as only 13.2% of the almost 55,000 probes showed the same upregulation (Table 1(b)). Moreover, when we extended the study to genes upregulated with a log2 ratio >0.1, up to 93.4% of the genes studied fulfilled this requirement, compared to 23% of all probes studied (Table 1(b)).
Interestingly, the gene expression upregulation profile described above was almost identical between the two major subgroups of CRC, MSS and MSI (see Supplementary Table 3 in the Supplementary Material available online at doi:10-3814/2009/657042). Only DDX27, the human counterpart of yeast drs1, was significantly (p < 10⁻⁷) upregulated in MSS compared to matching normal mucosa (log2 ratio 1.2), but not in MSI specimens (p = 0.1).
In MSS specimens the observed transcript upregulation could be caused by a gene dosage effect derived from chromosomal amplification. We mapped the studied genes to a chromosomal position in a transcriptome correlation map (unpublished results), and found that only 20% could be located in areas susceptible of chromosomal gain.
We also investigated the genes' chromosomal distribution. The genes involved in ribosome biogenesis dynamics were distributed along the 23 human chromosomes with no obvious clustering. The highest number of genes (15) mapped to chromosome 2. No genes were mapped to chromosome 18.
We then analysed the promoters of the studied genes for transcription factor binding sites; putative binding sites for NRF1, HIF1A, ELK1 and MYCN were found significantly overrepresented (Table 3). Our transcript microarray analysis showed that NRF1 and ELK1 were significantly upregulated, while HIF1A was highly expressed and slightly upregulated (Table 3). MYCN was not detected in our data set, and therefore it was not used in the subsequent correlation study.
Interestingly, there was a significant linear correlation (p = 0.03) between the number of targets for NRF1, HIF1A and ELK1 in the promoters of the ribosome biogenesis genes and the level of transcript upregulation observed (Figure 1). The genes with more putative TFBSs (either several targets for the same TF or targets for more than one TF) showed a higher log2 ratio value than those with none (Figure 1 and Table 2).
Discussion
The role of protein translation in the cancer process is far from well understood. In cancer cells, with enhanced cell division rates and abnormal cellular activity, the amount of protein should increase and, concomitantly, the number of ribosomes. However, translation and ribosome biogenesis are very well controlled processes.
Individual ribosomal proteins Sa, S8, S12, S18, S24, L13a, L18, L28, L32 and L35a have been shown to be upregulated in CRC [3], and S3, S6 and L5 have also been observed upregulated at both the protein and mRNA levels at the early stages of tumorigenesis [4]. We have considered all genes involved in ribosome biogenesis in a transcript gene profile. Our results clearly showed that in CRC the majority of genes involved in ribosomal biogenesis are overexpressed at the transcriptional level.
It is tempting to argue that this upregulation could be a consequence of the abnormal growth, as pointed out above. However, there are already reports indicating that some of these genes are involved in tumorigenesis. For instance, besides BOP1, WRN depletion in cancer cells inhibits tumor growth, RUVBL1 modulates transformation and apoptosis with a functional role in MYC-mediated oncogenesis, NOLA2 is associated with MYC-induced tumorigenesis, NPM1 can contribute to oncogenesis through many mechanisms, and RRP1 is related to metastasis [10-14]. Even more compelling is the fact that the regulation of rDNA transcription is critically altered in cancer. Conditions that harm metabolism, such as starvation, toxic lesions, aging, cancer or viral infections, downregulate rDNA transcription [15]. However, the regulatory mechanism that would impose this downregulation is abrogated in cancer cells. The upregulation of genes that activate RNA pol I could be one of the mechanisms that help the rDNA dysregulation [15]. Our microarray analysis indicates that the transcripts of RNA pol I subunits and of proteins that directly modulate its activity, such as UBF, TIF1A, TIF1B and TBP, are also transcriptionally upregulated (Supplementary Table 1). In addition, the UBF-related activating kinases casein kinase II (CKII), CDK4, cyclin D1, CDK2 and cyclin E [16] are also upregulated in CRC (Supplementary Table 1), thus allowing a possible increase in rRNA synthesis. Interestingly, a possible activation of UBTF via ERK1/2 does not seem to be favored, as both kinases seem to be downregulated in colon cancer [17].
Moreover, the mechanism behind this general upregulation seems to be independent of the microsatellite status: of the studied genes, only 20% mapped to chromosomal areas susceptible of chromosomal gain, and of the 20 most upregulated genes (Table 2), only BOP1, TBL3, WDR74, EXOSC4, MRTO4 and NOC2L were located in chromosomal areas that can be subjected to gene dosage alteration. The same genes were also found highly upregulated in MSI specimens, which are chromosomally stable, suggesting that the detected upregulation is accomplished by mechanisms other than amplification. These data are consistent with published data indicating that BOP1 contributes to colorectal carcinogenesis and that its overexpression is associated not only with a dosage increase of the individual gene but also with other mechanisms [5]. Apparently, not individual genes, but the whole machinery responsible for transcription of rDNA and its processing, including associated and regulatory factors, RNA polymerase I, the processome, the exosome, and several processing factors, is upregulated at the transcriptional level in CRC.

(Table 2: The 20 most upregulated ribosome biogenesis genes in CRC compared to matching normal mucosa, listing gene name and chromosomal location. Values represent log2 differences between the median of tumor samples and normal mucosa. Genes that, in MSS specimens, localized to chromosomal areas with a possible gene-dosage contribution to overexpression are marked in italic. The number of binding sites per TF is indicated in brackets. *MYCN is not detected in our CRC microarray profiling analysis.)
The upregulation of genes described in CRC was analysed in a similar bladder cancer gene expression profile carried out on Affymetrix U133A arrays [10-12]. The results were comparable (Supplementary Table 2). Interestingly, out of the 15 most upregulated genes in the bladder profile analysis, only two, DDX56 and DKC1 (Supplementary Table 2A), were also found among the 20 highest upregulated genes in the CRC study. Moreover, the TFs that were upregulated in CRC (NRF1 and ELK1) were downregulated in bladder (Supplementary Table 2B). In contrast to CRC, where eight of the ten highest upregulated genes (Table 2) showed putative TFBSs for the studied TFs, in the bladder study only four of the genes showed such TFBSs (Supplementary Table 2A).
Interestingly, ELK-1 antisense oligonucleotide is capable of suppressing hepatocellular carcinoma cells [18], and the combined expression of HIF1A and EPAS1 (HIF2A) may play an important role in tumor progression and prognosis of CRC carcinomas [19]. Notably, even though NMYC was not expressed in our CRC assay and thus was excluded from further analysis, the ubiquitous and well-known cancer-related MYC shares its E-box binding site with NMYC. This relation was studied in neuroblastoma cells transfected with N-myc [20]. The report revealed the upregulation of a number of genes, most of which were ribosomal proteins, translational factors and genes controlling rRNA maturation. Moreover, N-myc induced rRNA content rather than protein synthesis rate. Since MYCN can replace MYC in transgenic mice, the authors also studied whether NMYC downstream targets were induced by c-myc: eight out of twenty targets were indeed induced, including ribosomal proteins but also NCL and NPM1 [20].
In conclusion, c-MYC could also be responsible for the activation of some genes bearing an NMYC binding site.
The overall transcript upregulation of the genes involved in ribosome biogenesis seems to be shared by both CRC and bladder cancer.
It could be interesting to see whether other cancers also show this tendency and whether cancers with slower proliferation rate, such as prostate cancer, also share a similar pattern of expression.
Molecular modelling of the mass density of single proteins
Using molecular dynamics (MD) simulations, the density of single proteins and its temperature dependence was modelled starting from the experimentally determined protein structure and a generic, transferable force field, without the need of prior parameterization. Although all proteins consist of the same 20 amino acids, their density in aqueous solution varies up to 10% and the thermal expansion coefficient up to twofold. To model the protein density, systematic MD simulations were carried out for 10 proteins with a broad range of densities (1.32–1.43 g/cm³) and molecular weights (7–97 kDa). The simulated densities deviated by less than 1.4% from their experimental values that were available for four proteins. Further analyses of protein density showed that it can be essentially described as a consequence of amino acid composition. For five proteins, the density was simulated at different temperatures. The simulated thermal expansion coefficients ranged between 4.3 and 7.1 × 10⁻⁴ K⁻¹ and were similar to the experimentally determined values of ribonuclease-A and lysozyme (deviations of 2.4 and 14.6%, respectively). Further analyses indicated that the thermal expansion coefficient is linked to the temperature dependence of atomic fluctuations: proteins with a high thermal expansion coefficient show a low increase in flexibility at increasing temperature. A low increase in atomic fluctuations with temperature has been previously described as a possible mechanism of thermostability. Thus, a high thermal expansion coefficient might contribute to protein thermostability.
Introduction
Proteins are versatile molecules with interesting material properties. Based on sequence information only, they fold into a well-defined structure with high mechanical (Gosline, Demont, & Denny, 1986) and thermal stability (Singleton & Amelunxen, 1973), and have an amazing degree of flexibility (Huber, 1979). A delicate balance between structure and flexibility is essential for their biological function. All these material properties are coded in a linear sequence of a few hundred amino acids. Since the first description of the sequence (Sanger & Tuppy, 1951a, 1951b) and structure (Kendrew et al., 1958) of proteins, understanding the properties of proteins on the basis of their sequence and structure has been of utmost interest to biochemists and biotechnologists.
The density of a single protein (protein mass divided by protein volume) is a material property that has long attracted attention, not only because it can be easily and precisely measured, but also because protein density and packing of amino acid side chains are the basis of many biophysical and biochemical properties such as stability towards heat (Goldstein, 2007) or organic solvents (Kawata & Ogino, 2010), flexibility and dynamics (Haliloglu, Bahar, & Erman, 1997; Halle, 2002) and aggregation (Meersman & Dobson, 2006). The densities of many proteins have been experimentally determined in the dry state and solvated in water. At 20-25°C, the majority of proteins have a density of about 1.35 g/cm³ in aqueous solution (Fischer, Polikarpov, & Craievich, 2004) and slightly less in the dry state (Berlin & Pallansch, 1968; Brill, Siegel, & Olin, 1962), but some proteins have been identified with densities that are increased by almost up to 10% (Squire & Himmel, 1979).
To understand the molecular basis of the observed dependency of density on sequence and structure, different models have been suggested. For proteins with experimentally determined structure (Berman et al., 2000), the density has been evaluated from their volume. However, the volume sensitively depends on the method and parameters used, and therefore the absolute values of protein density evaluated by different methods might differ considerably (Andersson & Hovmoller, 1998; Quillin & Matthews, 2000; Tsai, Taylor, Chothia, & Gerstein, 1999). For example, the protein density values of pyruvate kinase determined by different volume calculation methods ranged from 1.18 g/cm³ (Andersson & Hovmoller, 1998) to 1.47 g/cm³ (Quillin & Matthews, 2000). In a second approach, the densities of proteins were related to their molecular weight. By systematically comparing the densities of proteins with different molecular weights, it was observed that the densities are similar for all proteins with a molecular weight above 20 kDa (1.35 g/cm³) and increase gradually for low-molecular-weight proteins (Fischer et al., 2004). Therefore, a molecular-weight-dependent function with an exponentially increasing protein density for decreasing molecular weights was proposed. However, this function cannot explain the observed large variations of density for proteins with similar low molecular weight. In a third approach, the density of proteins was correlated with their sequence, and the amino acid composition of proteins was analysed. While for proteins in their dry state the protein volumes could not be calculated from the sum of the volumes of their amino acids (Berlin & Pallansch, 1968), the experimentally determined density of proteins in solution could be well reproduced from their amino acid composition (Cohn & Edsall, 1943; Iqbal & Verrall, 1988; Kharakoz, 1997; Makhatadze, Medvedkin, & Privalov, 1990; Zamyatnin, 1972, 1984). While the densities of folded proteins vary by less than 10%, their experimentally determined thermal expansion coefficients vary by a factor of almost two (Chalikian, Totrov, Abagyan, & Breslauer, 1996). To account for thermal effects, the temperature dependence of the density of amino acid residues was determined (Amend & Helgeson, 2000; Hedwig & Hinz, 2003; Lee, Tikhomirova, Shalvardjian, & Chalikian, 2008; Makhatadze et al., 1990) and combined to model the temperature dependence of protein densities. However, the reproducibility of the experimental values using this method is limited, as this strategy is primarily designed for unfolded proteins (Hedwig & Hinz, 2003). Furthermore, some approximations are made for the calculations. Therefore, the values determined over a wide temperature range are less reliable than those calculated for a temperature of 25°C (Hackel, Hinz, & Hedwig, 1999). Thus, a predictive model of protein density should describe properly the dependency of density not only on sequence and structure of the protein, but also on temperature. Therefore, we have developed a general molecular model to predict the density of a protein and its temperature dependence by using molecular dynamics (MD) simulations of protein-solvent systems. The method uses only the experimentally determined structure of a protein and a generic, transferable force field to evaluate the absolute value of protein density, and no prior parameterization is needed.
Materials and methods
Setup of the simulations

Ten proteins were selected which cover a wide range of molecular weights from 7 to 97 kDa (Table 1). For the setup of the simulations, crystal water molecules and ligands were removed from the protein structure. For ubiquitin, two series of simulations with and without crystal water were performed to estimate the effect of crystal water on the simulated protein density. Subsequently, each protein was solvated in four water boxes of different size, according to protein concentrations of 8, 16, 24 and 32 mg/cm³. The box sizes were in a range of 365-20,062 nm³, depending on the molecular weight of the simulated protein and the protein concentration. For Candida antarctica lipase B, a series of simulations was additionally performed with one and eight proteins in a box at concentrations of 8, 12, 16, 20, 24, 28 and 32 mg/cm³. Moreover, the density of pure water was determined by simulation of a large box of 46,657 water molecules. The negatively charged proteins at pH 7 (Pseudomonas glumae lipase (−1), Candida antarctica lipase B (−1), RP2 lipase (−1), malate dehydrogenase (−4), glycogen phosphorylase B (−7)) were neutralized by protonating the respective number of histidines (the catalytic histidine or histidines at the protein surface). The positively charged proteins (ribonuclease-A (+4), lysozyme (+8), adenylate kinase (+2)) were neutralized by adding chloride ions. For these three proteins, control simulations were carried out with water boxes of corresponding size containing water and the respective number of ions.
MD simulations were performed using the software package GROMACS (Groningen Machine for Chemical Simulations), version 4.0.7 (Van Der Spoel et al., 2005). The simulations were performed at 298 K using the optimized potentials for liquid simulations (OPLS) all-atom force field (Jorgensen & Tirado-Rives, 1988) and the extended simple point charge (SPC/E) water model (Berendsen, Grigera, & Straatsma, 1987). For ribonuclease-A and adenylate kinase, the simulations were performed at 293 K, the temperature at which the density was determined experimentally.
For an analysis of the temperature dependence of the protein density, additional simulations of scorpion toxin variant-3, ribonuclease-A, lysozyme, P. glumae lipase and C. antarctica lipase B were done at 288 K and at 308 K.
Energy minimization, equilibration and production phase
The systems were minimized using the steepest descent algorithm with a maximum of 100,000 steps. Subsequently, all systems were equilibrated. The equilibration was started at an initial temperature of 10 K. Within 200 ps, the systems were heated up to 100 K, followed by 100 ps of simulation at constant temperature. Within the following 450 ps, the temperature was increased to 288, 293, 298 or 308 K and equilibrated for another 250 ps at the respective temperature. Temperature coupling was done using velocity rescaling with a stochastic term, with a temperature coupling time constant of .1 ps. The pressure was kept constant at 1 bar with a pressure coupling time constant of .5 ps (NPT ensemble). A time step of 1 fs was used.
After equilibration, the simulations were continued for 10 ns with a time step of 2 fs. Temperature coupling was performed using the Berendsen thermostat (Berendsen, Postma, van Gunsteren, DiNola, & Haak, 1984) with a temperature coupling time constant of .1 ps and a reference temperature of 298 K. Pressure coupling was done using the Berendsen barostat (Berendsen et al., 1984) with a pressure coupling time constant of 2 ps and a constant pressure of 1 bar (NPT ensemble). A simulation of 10 ns was sufficient to allow for complete relaxation of the solution density, which was constant during the last 5 ns of the simulation.
Sensitivity to simulation parameters
Four additional simulations of lysozyme were performed with a different thermostat, barostat, water model or protein force field: (1) temperature coupling by the velocity-rescaling thermostat (Bussi, Donadio, & Parrinello, 2007) was used instead of the Berendsen thermostat; (2) pressure coupling by the Parrinello-Rahman barostat (Parrinello & Rahman, 1981) was used instead of the Berendsen barostat; (3) the water model TIP4P (Jorgensen & Madura, 1985) was used instead of the water model SPC/E; (4) the protein force field GROMOS96 43a1 (Christen et al., 2005) was used instead of the OPLS all-atom force field.
Evaluation of protein density by MD simulation
The density of the protein solution ρ_total, consisting of protein with mass m_prot and volume V_prot and water with mass m_wat and volume V_wat, is calculated by

ρ_total = (m_prot + m_wat) / (V_prot + V_wat).   (1)

With the component densities

ρ_prot = m_prot / V_prot   and   ρ_wat = m_wat / V_wat,   (2)

inserting Equation (2) into Equation (1) shows that the solution density ρ_total depends linearly on the protein concentration c_prot = m_prot / (V_prot + V_wat) (Kupke, 1973):

ρ_total = ρ_wat + (1 − ρ_wat/ρ_prot) c_prot.   (3)

For each protein, simulations were performed for at least four different protein concentrations c_prot. For each simulation, the box volume was averaged over the last 5 ns of the simulation using the GROMACS tool g_energy, and the solution density ρ_total, the protein concentration c_prot and their standard deviations were calculated. By plotting the densities of pure water and protein-water systems for at least four protein concentrations against the protein concentration, the protein density ρ_prot was calculated from the slope, and the water density ρ_wat from the intercept with the y-axis.
Thus, the calculation of protein density does not require an evaluation of protein surface or volume, and reflects amino acid packing and thermal motion.
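The fitting step itself is elementary; a minimal sketch (with placeholder densities in place of simulation output) recovers ρ_wat from the intercept and ρ_prot from the slope via ρ_prot = ρ_wat/(1 − slope), following Equation (3):

    import numpy as np

    c_prot = np.array([0.0, 8.0, 16.0, 24.0, 32.0]) * 1e-3  # g/cm^3
    rho_total = np.array([0.9960, 0.9981, 1.0002, 1.0023, 1.0044])  # g/cm^3, placeholder

    slope, intercept = np.polyfit(c_prot, rho_total, 1)
    rho_wat = intercept
    rho_prot = rho_wat / (1.0 - slope)       # from Equation (3)
    print(f"rho_wat = {rho_wat:.4f} g/cm^3, rho_prot = {rho_prot:.4f} g/cm^3")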
Evaluation of protein density by amino acid composition
As an alternative method, the density was calculated based on the amino acid composition, using an additive scheme for the apparent specific volume of proteins according to Cohn and Edsall (1943) (Table SI).
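A minimal sketch of such an additive scheme is shown below; only the two residue specific volumes quoted later in the text are included, and the residue masses are standard values, so this illustrates the bookkeeping rather than reimplementing Table SI:

    # apparent specific volume = mass-weighted mean of residue specific
    # volumes; density = reciprocal of the apparent specific volume
    SPECIFIC_VOLUME = {"ILE": 0.90, "ASP": 0.60}   # cm^3/g (Cohn & Edsall)
    RESIDUE_MASS = {"ILE": 113.16, "ASP": 115.09}  # g/mol, residue masses

    def protein_density(residue_counts):
        mass = sum(RESIDUE_MASS[r] * n for r, n in residue_counts.items())
        volume = sum(RESIDUE_MASS[r] * n * SPECIFIC_VOLUME[r]
                     for r, n in residue_counts.items())
        return mass / volume                       # g/cm^3

    print(protein_density({"ILE": 10, "ASP": 10}))  # toy composition, ~1.34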
Calculation of thermal expansion coefficients
For the analysis of the temperature dependence of the protein density, protein density values were calculated at 288, 298 and 308 K, plotted against the corresponding temperature, fitted by a straight line, and the thermal expansion coefficient was defined as the negative ratio of the slope of the straight line to the protein density at 298 K.
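In code, this amounts to a straight-line fit; the densities below are placeholders chosen to give a realistic coefficient:

    import numpy as np

    T = np.array([288.0, 298.0, 308.0])          # K
    rho = np.array([1.3868, 1.3809, 1.3745])     # g/cm^3, placeholder values

    slope, intercept = np.polyfit(T, rho, 1)
    alpha = -slope / (slope * 298.0 + intercept)  # -(d rho/dT) / rho(298 K)
    print(f"alpha = {alpha:.2e} K^-1")            # of order 4-7e-4 K^-1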
Protein structure and flexibility
For analysis of protein structure, secondary structure elements were assigned according to the dictionary of secondary structure of proteins (Kabsch & Sander, 1983). To analyse the flexibility of the simulated proteins, the root mean square fluctuation (RMSF) was calculated over the last 3 ns for all protein atoms using the GROMACS tool g_rmsf. For each protein atom, the RMSF value was converted into the corresponding temperature factor B using the standard isotropic relation

B = (8π²/3) · RMSF².

For each simulation, the median of the B-factor values of all protein atoms was calculated, and the median values were averaged over the four simulations at different protein concentrations. To analyse the variation of flexibility with temperature, median B-factors were calculated at 288, 298 and 308 K and plotted against the corresponding temperature. The correlation was fitted by a straight line, and the slope of the straight line was used as a measure of the increase in protein flexibility with temperature.
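Assuming the conversion stated above, the per-atom step is a one-liner; the RMSF values here are placeholders, not simulation output:

    import numpy as np

    rmsf = np.array([0.45, 0.52, 0.60, 0.75, 1.10])   # Angstrom, per atom
    b = (8.0 * np.pi**2 / 3.0) * rmsf**2              # B-factors, Angstrom^2
    print(f"median B-factor: {np.median(b):.1f} A^2")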
Quality of the protein simulations
To confirm the quality of the water model for different temperatures, the density of a water box consisting of 46,657 water molecules at 298 K was determined by MD simulations. The calculations resulted in a value of .9960 ± .0005 g/cm³, which deviates by only .1% from the experimental value (.9970 g/cm³; Lide, 2009). Also at temperatures of 288 and 308 K, the simulations led to density values that differed by only .1 and .3% from the experimental values, respectively. Thus, the SPC/E parameters are sufficiently accurate, and the system size and simulation time are large enough to reproduce experimental data.
To evaluate the quality of the protein simulations, the density of the 10 protein-water systems was followed as a function of simulation time. For all concentrations of all proteins, the density quickly relaxed to its equilibrium value during the first nanosecond of the simulation (Figure S1). Reproducible values with standard deviations of less than .1% could be obtained by averaging the solution density over a time interval of 5 ns. Thus, a simulation time of 10 ns was sufficient to determine the solution density. Furthermore, one simulation (the simulation of eight proteins of Candida antarctica lipase B at a concentration of 20 mg/cm³) was carried out for 20 ns instead of only 10 ns. Calculation of the solution density of this box over the time range of 5-10 and 15-20 ns led to identical values.
To evaluate the effect of simulating multiple proteins in a box as compared to a single protein, two series of simulations were carried out for C. antarctica lipase B: a single protein in a box and a system of eight proteins in a box (Figure S2). The protein densities calculated from the two series deviated by .07% (Table 1). Thus, simulating a single protein in a box was sufficiently accurate, and protein-protein interactions in a realistic protein solution had no influence on the density. However, the observed relative standard deviations of the box volumes with one protein are slightly increased due to the smaller number of particles in the small boxes.
To evaluate whether variations in the density occur between the monomeric and dimeric form of a protein, simulations of malate dehydrogenase were done for the monomer as well as for the dimer (Figure S8). The resulting densities of the monomeric and dimeric malate dehydrogenase differed by .04% (Table 1). Therefore, monomers and dimers of the same protein do not show different protein densities.
To evaluate the number of simulations at different protein concentrations that are necessary to reliably define a straight line, the simulations with C. antarctica lipase B were done for three additional protein concentrations. The resulting data points lay exactly on the straight line defined by the four simulations at different concentrations (Figure S2); thus it was concluded that four protein concentrations plus the value for pure water were sufficient.
To evaluate the effect of crystal water on the evaluation of protein density, two series of simulations of ubiquitin (with and without crystal water) were compared (Figure S3). The protein densities of the two series differed by .1% (Table 1); thus the absence or presence of crystal water molecules influences neither the equilibration time of the system nor the resulting protein density.
For three positively charged proteins, chloride counterions were added to neutralize the protein system. To examine the influence of the ions on the solution density, boxes of corresponding sizes containing water and ions but without protein were simulated (Figure S5). The effect was analysed for lysozyme, for which eight chloride ions were added. Calculation of the protein density with and without consideration of the effect of ions on the solution density led to a protein density of 1.3809 and 1.4029 g/cm³, respectively. Thus, the influence of counterions should not be neglected in the calculation of the protein density.
To study whether the modelled density is sensitive towards changes in the thermostat, barostat, water model or force field, control simulations of lysozyme were performed. Replacing the SPC/E water with TIP4P water increased the simulated density by .17% (Table SII). Using the GROMOS force field instead of the OPLS all-atom force field increased the simulated density by .41%. Changes in the temperature coupling and pressure coupling decreased the simulated density by 1.2%. Using the Berendsen thermostat and barostat, the SPC/E water model and the protein OPLS all-atom force field, the simulated density of lysozyme deviated by −1.4% from the experimental value. Thus, when the v-rescale thermostat or the Parrinello-Rahman barostat was used, the experimental density was underestimated by 2.6%, with the TIP4P water model by 1.2%, and with the GROMOS force field by .95%.
Modelling the protein density by MD simulations
The densities of aqueous solutions of 10 proteins with molecular weights ranging from 7 to 97 kDa were determined by simulations of protein-water systems at four protein concentrations (8, 16, 24 and 32 mg/cm³ for each protein) (Figure 1 and Figures S2-S8). The simulations were performed at 293 or 298 K, corresponding to the temperature of the experimental density measurement. From the slope of these graphs, the density of the 10 proteins was determined (Table 1). Four proteins (C. antarctica lipase B, glycogen phosphorylase B, P. glumae lipase and malate dehydrogenase) had a low protein density (1.3257, 1.3259, 1.3330 and 1.3418 g/cm³, respectively), slightly lower than the average (1.35 g/cm³) of the majority of proteins. Four proteins (ubiquitin, RP2 lipase, adenylate kinase and lysozyme) had a medium protein density (1.3603, 1.3623, 1.3664 and 1.3809 g/cm³, respectively). Two proteins (ribonuclease-A and scorpion toxin variant-3) had a high density (1.4050 and 1.4290 g/cm³, respectively). As described previously (Fischer et al., 2004), the simulated density of a protein is related to its molecular weight: the molecular weight of the four proteins with low density is in the range 33-97 kDa, of the four proteins with medium density in the range 8.6-48 kDa, and of the two proteins with high density in the range 7-14 kDa (Figure 2). However, large differences in the protein densities are observed for the two small proteins, scorpion toxin variant-3 and ubiquitin.

(Figure 1. Plot of the solution density against the protein concentration for scorpion toxin variant-3 (R² = .9999), RP2 lipase (R² = 1) and glycogen phosphorylase B (R² = 1), each at 298 K. Figure 2. Plot of protein densities against the molecular weight: values calculated by Tsai et al. (1999), values measured experimentally (Squire & Himmel, 1979) and protein densities determined by MD simulations; numbering of the proteins with simulated protein density according to Table 1.)

For four proteins, experimental data on protein density in solution have been published previously. The simulated densities were in excellent agreement with their experimental values: for ribonuclease-A, the simulated density deviated by −1.2% from the experimental value (1.422 g/cm³ at 20°C; Richards, 1971); for lysozyme, the simulated density deviated by −1.4% from the experimental value (1.40 g/cm³ at 25°C) determined using a precision density meter (Gekko & Noguchi, 1979). This deviation is considerably smaller than in a previous evaluation of the density of lysozyme (1.43 g/cm³; Tsai et al., 1999), corresponding to a deviation of +2.1% from the experimental value. For adenylate kinase, the simulated density was slightly larger (+1.2%) than the experimental value (1.35 g/cm³ at 20°C) determined by pycnometry (Schirmer, Schirmer, Schulz, & Thuma, 1970). For malate dehydrogenase, the simulated and the experimental density (1.348 g/cm³ at 25°C; Gerding & Wolfe, 1969) deviated by less than .5%.
Evaluation of protein density by amino acid composition
To investigate the molecular reason for the variation in protein density by about 10%, the amino acid composition of the proteins was analysed, and the density was calculated from the experimentally determined specific volumes of the amino acids (Table SI) (Cohn & Edsall, 1943). According to the differences in density of the amino acids, proteins with a high content of hydrophobic residues have a lower density than proteins with a high content of hydrophilic residues (Figure 3). These calculations resulted in values for the protein densities that differed by only .7 and 1.3% on average from the values determined experimentally and by MD simulations, respectively (Table 1). Thus, the amino acid composition is the dominant factor that describes the protein density.
Analysing thermal expansion of proteins
To evaluate the temperature dependence of the simulated density, simulations of five proteins (scorpion toxin variant-3, ribonuclease-A, lysozyme, P. glumae lipase and C. antarctica lipase B) were performed at three different temperatures (288, 298 and 308 K). In this temperature range of 20 K, the density depends linearly on the temperature (the R² values for the linear fit of protein density and temperature are .9788 for scorpion toxin variant-3, .8005 for ribonuclease-A, .8462 for lysozyme, .9968 for P. glumae lipase and .9891 for C. antarctica lipase B), and the thermal expansion coefficients of the five proteins vary nearly twofold, between 4.3 and 7.1 × 10⁻⁴ K⁻¹ (Table 2). For ribonuclease-A and lysozyme, the two proteins for which the thermal expansion coefficient was determined experimentally (Chalikian et al., 1996), the simulated and experimental values agree well (deviations of 2.4 and 14.6%, respectively).
Modelled protein density
To assess the quality of evaluating the protein density by MD simulations, the effects of simulation time, multiple proteins in a box, monomeric and dimeric forms, number of protein concentrations, crystal water and counterions were analysed, and the simulated densities were compared to the experimental values. The analysis revealed that simulation of 10 ns at four different protein concentrations is sufficient for a reliable determination of protein density. The number of proteins in a box, monomeric or dimeric forms of a protein and crystal water had no effect on the simulation result, whereas the effect of counterions on the solution density has to be considered in the calculation of protein density. The results of the MD simulation are in good agreement with the experiments. For four of the 10 simulated proteins (ribonuclease-A, lysozyme, adenylate kinase and malate dehydrogenase), experimental data on protein density are available. These four proteins cover a wide range of molecular weights (14-73 kDa) and densities (1.348-1.422 g/cm³). For these four proteins, the deviations between the simulated and the experimentally determined densities were less than 1.4%; thus we consider the strategy of modelling protein density by simulation of protein solutions at different concentrations a reliable method to model protein density at high accuracy. Furthermore, modelling of solvated proteins by MD simulations is a highly generic strategy, because protein force fields are based on a small set of force field parameters which are assigned by amino acid type and are independent of the environment of the amino acid. Thus, the same set of force field parameters applies to all proteins and a high transferability is guaranteed. With the exception of an initial choice of a force field, no further parameterization is needed to model the physical properties of molecular systems such as density. Modelling of protein density by MD simulation was insensitive towards the choice of the protein force field and the water model (<.5% deviation) and depended only slightly on the coupling method (1.2% for a different thermostat or barostat). This is an advantage and in contrast to previously suggested methods to calculate the density. In widely used methods to determine the protein density, the protein volume is calculated by defining the protein surface. Using the GROMACS tool g_sas (Van Der Spoel et al., 2005), the density of hen egg white lysozyme varied between 1.05 and 1.43 g/cm³ upon changing the probe radius from 1.4 to .7 Å. Similarly, for the same set of representative proteins, two different algorithms resulted in different values for the average protein density: 1.43 and 1.47 g/cm³ as calculated by the Connolly and the Voronoi algorithm, respectively (Quillin & Matthews, 2000). These values deviate by 6 and 9%, respectively, from the average protein density of 1.35 g/cm³ determined experimentally (Fischer et al., 2004).

(Figure 3. Plot of the protein density against the content of hydrophobic amino acids; numbering of the proteins according to Table 1.)

Table 2. Thermal expansion coefficients (in 10⁻⁴ K⁻¹) of scorpion toxin variant-3, ribonuclease-A, lysozyme, P. glumae lipase and C. antarctica lipase B determined by MD simulation, with experimental values (Chalikian et al., 1996) in parentheses:
Scorpion toxin variant-3: 7.1
Bovine ribonuclease-A: 4.3 (4.2)
Hen egg white lysozyme: 4.7 (4.1)
P. glumae lipase: 6.1
C. antarctica lipase B: 5.8
Molecular reasons for the variation of protein density
While all proteins consist essentially of the same 20 amino acids, their densities deviate by up to almost 10%. It was observed that proteins with a molecular weight of less than 20 kDa tend to have an increased density (Fischer et al., 2004). This tendency was essentially confirmed by analysing experimentally determined protein densities (Squire & Himmel, 1979), values calculated by Tsai et al. (1999) and densities determined by MD simulations as a function of molecular weight (Figure 2). However, for proteins of similar molecular weight, large deviations of the protein density may occur. The two proteins, scorpion toxin variant-3 and ubiquitin, have a similar molecular weight below the 20 kDa threshold (7.1 and 8.6 kDa, respectively), but have significantly different simulated protein densities (1.43 and 1.36 g/cm³, respectively). The density of ubiquitin is similar to that of the considerably larger RP2 lipase (48 kDa). Therefore, a simple dependence of the protein density on the molecular weight is not adequate to describe the molecular basis of protein density.
To explain the observed differences in protein densities, it was suggested that the amino acid composition determines protein density (Cohn & Edsall, 1943; Iqbal & Verrall, 1988; Kharakoz, 1997; Makhatadze et al., 1990; Zamyatnin, 1972, 1984). Indeed, proteins with high density have a low percentage of hydrophobic amino acids (scorpion toxin variant-3: ρ = 1.4290 g/cm³, 17% hydrophobic residues), while proteins with low density have a high percentage of hydrophobic amino acids (ubiquitin: ρ = 1.3603 g/cm³, 33% hydrophobic residues; RP2 lipase: ρ = 1.3623 g/cm³, 32% hydrophobic residues). This observation is in accordance with the low density and high specific volume υ of hydrophobic amino acids as compared to hydrophilic amino acids (Ile: υ = .90 cm³/g, Asp: υ = .60 cm³/g; Cohn & Edsall, 1943). Thus, the observation that small proteins in general have higher densities than large proteins is a consequence of their low content of hydrophobic amino acids. As small proteins have higher surface to volume ratios than large proteins and as the surface is hydrophilic, small proteins have a higher content of hydrophilic amino acids and thus a higher density. Therefore, the density of a protein can be predicted from its sequence (Iqbal & Verrall, 1988; Kharakoz, 1997; Makhatadze et al., 1990; Zamyatnin, 1984), and the results obtained by MD simulations are in good agreement with the calculations from amino acid composition (maximum deviation of 2.7%).
Temperature dependence of protein density
Simulations of scorpion toxin variant-3, ribonuclease-A, lysozyme, P. glumae lipase and C. antarctica lipase B in a temperature range of 288-308 K were performed to determine the thermal expansion coefficients of these five proteins. The thermal expansion coefficients of ribonuclease-A and lysozyme determined by MD simulation are in good agreement with the experimental values (deviations of 2.4 and 14.6%, respectively), thus modelling of protein density by MD simulations is valid for a range of temperatures.
In contrast to protein densities, which vary by less than 10%, the thermal expansion coefficients of the five proteins investigated here vary nearly twofold (4.3-7.1 × 10⁻⁴ K⁻¹). This high variation of the thermal expansion coefficients was confirmed by experimental data on proteins such as hen egg white lysozyme (4.1 × 10⁻⁴ K⁻¹) and bovine serum albumin (7.1 × 10⁻⁴ K⁻¹) (Chalikian et al., 1996). Considering the fact that proteins consist of the same material, the variation is surprisingly large. Therefore, it would be interesting to understand the molecular basis of the thermal expansion of proteins. Interestingly, there is no correlation between the thermal expansion of a protein and its protein density at 298 K (Figure S9). The two proteins with a low density (P. glumae lipase and C. antarctica lipase B) have similar thermal expansion coefficients (6.1 and 5.8 × 10⁻⁴ K⁻¹, respectively), while the two proteins with high density (ribonuclease-A and scorpion toxin variant-3) have different thermal expansion coefficients (4.3 and 7.1 × 10⁻⁴ K⁻¹, respectively). Lysozyme has a medium density and a thermal expansion coefficient (4.7 × 10⁻⁴ K⁻¹) similar to that of ribonuclease-A.
As another approach to analysing molecular determinants of thermal expansion, correlations between thermal expansion and secondary structure composition and its variation with temperature were analysed. However, no correlation between thermal expansion and secondary structure composition could be shown (Figure S10), and no change in secondary structure composition in the analysed temperature range was observed (Figure S11).
Because the thermal fluctuations increase with temperature, the correlation between protein flexibility and density at different temperatures was investigated. Therefore, the median B-factor at 298 K and the variation of protein flexibility with temperature were analysed. As the median equals the maximum RMSF of the 50% most rigid protein atoms, it measures the flexibility of the core of the protein and therefore, in contrast to the average value, is independent of individual, highly flexible loop regions. Intuitively, we expected a positive correlation between the thermal expansion coefficient and the temperature dependence of the protein flexibility. In contrast, we found a negative correlation between thermal expansion coefficient and flexibility, and between thermal expansion coefficient and temperature dependence of protein flexibility: proteins with a high thermal expansion coefficient have a low median B-factor at 298 K (Figure S12) and a low increase in protein flexibility with temperature (Figure 4). This surprising observation could be explained by the difference between RMSF and configurational entropy (Missimer et al., 2007). It has been shown that the configurational entropy increases with temperature (Missimer et al., 2007), and the configurational entropy of glasses can be expressed in terms of molar thermal expansion (Casalini, Capaccioli, Lucchesi, Rolla, & Corezzi, 2001). However, the RMSF does not strictly follow the configurational entropy: while the RMSF increased strongly with temperature for a thermolabile protein, it increased less for a more thermostable protein complex, indicating that the intramolecular interactions in thermostable proteins restrict the increase in fluctuations at increasing temperatures despite an increase in configurational entropy (Missimer et al., 2007). A broad variation in the temperature dependence of the protein flexibility was also observed for the five proteins investigated here: for two proteins (scorpion toxin variant-3 and C. antarctica lipase B), the protein flexibility varied only slightly with temperature (.07 Å²/K), while the atomic fluctuations of ribonuclease-A increased strongly with temperature (.27 Å²/K). Thus, a high thermal expansion coefficient results in a slow increase in atomic fluctuations at increasing temperature. Because thermal fluctuations decrease the kinetic barrier to unfolding, a high thermal expansion coefficient is expected to contribute to protein stabilization. Increasing the thermal expansion coefficient would then be an appropriate design strategy for creating thermostable proteins. However, it is not yet clear how the thermal expansion coefficient is encoded in the sequence and structure of a protein, and it would be highly interesting to study in more detail the molecular basis of protein expansion.
Conclusion
Using MD simulations, protein density and its temperature dependence were modelled based only on the sequence and structure of a protein and a generic, transferable force field without the need of prior parameterization. The method was applied to evaluate two basic material properties of proteins, density and thermal expansion, which were in good agreement with experimental values. The protein density is determined by the amino acid composition of the protein and is independent of molecular weight. The thermal expansion coefficient is linked to the temperature-dependent increase in protein flexibility: proteins with a high thermal expansion coefficient show a small increase in flexibility at increasing temperature, indicating that a high thermal expansion coefficient might contribute to protein thermostability.
Nanocrystal growth via the precipitation method
A mathematical model to describe the growth of an arbitrarily large number of nanocrystals from solution is presented. First, the model for a single particle is developed. By non-dimensionalising the system we are able to determine the dominant terms and reduce it to the standard pseudo-steady approximation. The range of applicability and further reductions are discussed. An approximate analytical solution is also presented. The one particle model is then generalised to $N$ well dispersed particles. By setting $N=2$ we are able to investigate in detail the process of Ostwald ripening. The various models, the $N$ particle, single particle and the analytical solution are compared against experimental data, all showing excellent agreement. By allowing $N$ to increase we show that the single particle model may be considered as representing the average radius of a system with a large number of particles. Following a similar argument the $N=2$ model could describe an initially bimodal distribution. The mathematical solution clearly shows the effect of problem parameters on the growth process and, significantly, that there is a single controlling group. The model provides a simple way to understand nanocrystal growth and hence to guide and optimise the process.
Introduction
Nanoparticles (NPs) are small units of matter with dimensions in the range 1-100 nm. They exhibit many advantageous, size-dependent properties such as magnetic, electrical, chemical and optical, which are not observed at the microscale or larger [17,9,4,30]. Consequently the ability to produce monodisperse particles that lie within a controlled size distribution is critical.
There exist a number of NP synthesis methods, including gas phase and solution based synthesis techniques. Although the first method can produce large quantities of nanoparticles, it produces undesired agglomeration and nonuniformity in particle size and shape. Precipitation of NPs from solution avoids these problems and is one of the most widely used synthesis methods [16]. The typical strategy is to cause a short nucleation burst in order to create a large number of nuclei in a short space of time; the seeds generated are then used for the subsequent particle growth stage. Thus the temporal separation of nucleation and growth, as proposed by La Mer and Dinegar [11,12], is applied. The resulting system consists of particles of varying size. Small NPs are more unstable than larger ones and tend to grow or dissolve faster. Thus at relatively high monomer concentrations size focussing occurs (leading to monodispersity). When the monomer concentration is depleted by the growth, some smaller NPs shrink and eventually disappear while larger particles continue to grow, thus leading to a broadening of the size distribution (Ostwald ripening).

A key quantity in the growth process is the particle solubility s*, specified by the Ostwald-Freundlich condition (OFC). This condition can be written as

s* = s*_∞ exp(α/r*_p),   (1)

where s*_∞ is the solubility of the bulk material, σ the interfacial energy, R_G the universal gas constant and T the absolute temperature. The capillary length α = 2σV_M/(R_G T) defines the length scale below which curvature-induced solubility is significant [29]. This equation shows that the particle solubility increases as the size decreases (which promotes Ostwald ripening). One approximation to the OFC is to assume that the exponential term in (1) can be linearised to give the two-term expression s* ≈ s*_∞ (1 + α/r*_p) [13,14,28,35]. Obviously this expansion, which is based on α/r*_p being small, is invalid for nanoparticles, where the capillary length is of the same order of magnitude as the particle radius [19]. Mantzaris [16] used an expansion for the exponential term in the OFC with n terms and showed that increasing n led to higher average growth rates and a narrowing of the particle size distribution (PSD). However, when comparing his simulation to experimental data for CdSe nanoparticles from [23], he applied the linearised version for the solubility. Talapin et al. [29], noting that for nanoparticles of the order 1-5 nm the linearised OFC may be incorrect, applied the full condition.
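A quick numerical comparison makes the point; using the 6 nm capillary length quoted later for CdSe, the full and linearised solubilities already differ enormously at small radii (illustrative script, not from the paper):

    import numpy as np

    alpha = 6.0                                   # nm, capillary length
    r_p = np.array([1.0, 2.0, 5.0, 20.0, 100.0])  # nm, particle radii

    full = np.exp(alpha / r_p)                    # s*/s*_inf, full OFC
    linear = 1.0 + alpha / r_p                    # two-term expansion

    for r, f, l in zip(r_p, full, linear):
        print(f"r_p = {r:6.1f} nm   full = {f:8.3f}   linearised = {l:6.3f}")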
In the following we begin by analysing the growth of a single particle. This is the basic building block for more complex models. The treatment leads to equations similar to those of standard LSW theory; however, we arrive at them following a non-dimensionalisation which highlights dominant terms and those which may be formally neglected. In this way we can ascertain which standard assumptions are appropriate and, more importantly, which are not. Under conditions which appear easily satisfied for nanocrystal growth the governing ordinary differential equation has an explicit solution, in the form r_p = r_p(t), and also shows that the growth is controlled by a single parameter which may be calculated by comparison with experiment. This section closely follows the work described in [19]. The single particle model is obviously incapable of reproducing Ostwald ripening, where larger particles grow at the expense of smaller ones. Consequently we then generalise the model to deal with a large number of particles. In the results section we compare the analytical solution with that of a full numerical solution and experimental data for the growth of a single particle and show excellent agreement between all three. By setting the number of particles to two in the general model we are able to clearly demonstrate Ostwald ripening. Simulations with N = 10 and 1000 particles demonstrate that increasing N leads to increasingly good agreement between the average radius and that predicted by the single particle model. The single particle model may thus be considered as a viable method for predicting the evolution of the average radius of a group of particles.
2 Growth of a single particle

As shown in Figure 1, we initially focus on a single, spherical nanoparticle with radius r*_p in a system of particles. The * notation represents dimensional quantities. The assumption is that particles are separated by large but finite distances compared to their radius. Their morphologies remain nearly spherical and particle aggregation is neglected. Thus, the mass flow from each particle can be represented as a monopole source located at the center of the particle [32] and the problem becomes radially symmetric. We assume the standard La Mer model [11], such that there has been a short nucleation burst and the system is now in the period of growth.
The monomer concentration, c*, is described by the classical diffusion equation in spherical coordinates,

∂c*/∂t* = (D/r*²) ∂/∂r* (r*² ∂c*/∂r*).   (2)

This holds in the diffusion layer [r*_p, r*_p + δ*], where r* is distance from the centre of the particle, t* is time and D is the constant diffusion coefficient. To conform with standard literature (see for example [16,28,31]), we have included a diffusion layer of length δ* around the particle, where the concentration adjusts from the value at the particle surface to the value in the far-field. Equation (2) is subject to

c*(r*_p, t*) = c*_i,   c*(r*_p + δ*, t*) = c*_b(t*),   c*(r*, 0) = c*_b,0,   (3)

where c*_i is the concentration adjacent to the particle surface, c*_b is the concentration in the far-field and c*_b,0 is a constant describing the initial concentration when the solution is well-mixed. The monomer concentration in the far-field, c*_b(t*), will be derived via mass conservation. The value at the particle surface, c*_i, is very difficult to measure [28], hence it is standard to work in terms of the particle solubility.

(Figure 1: Schematic of a single nanoparticle with radius r*_p and the surrounding monomer concentration profile, where s*, c*_i and c*_b are the particle solubility, the concentration at the surface of the particle and the far-field concentration, respectively.)
The particle solubility $s^*$ (with the same dimensions as concentration) is given by the Ostwald-Freundlich condition (1). If $s^* < c^*_b$ then monomer molecules diffuse from the bulk towards the particle to react with the surface and the particle grows, whereas if $s^* > c^*_b$ the particle shrinks. In order to determine an expression for the concentration at the particle surface, we consider two equivalent expressions for the mass flux at the particle surface, $J$. Firstly, Fick's first law states that the flux of monomers passing through a spherical surface of radius $r^*$ is

\[ J = 4 \pi r^{*2} D \frac{\partial c^*}{\partial r^*}. \qquad (4) \]

At the surface of the sphere the flux must also follow a standard first-order reaction equation,

\[ J = 4 \pi r^{*2}_p k \left( c^*_i - s^* \right), \qquad (5) \]

where $k$ is the reaction rate, which is assumed to be constant for both growth and dissolution contributions. Equating (4) with (5) gives

\[ c^*_i = s^* + \frac{D}{k} \left. \frac{\partial c^*}{\partial r^*} \right|_{r^* = r^*_p}, \qquad (6) \]

which defines the concentration $c^*_i$ for the surface condition of (3). To complete the boundary conditions in the system, we require an expression for the time-dependent bulk concentration, $c^*_b(t^*)$. The mass of monomer in the system is constant because the bulk material is assumed well-mixed during the entire process. Mass conservation of the monomer atoms in the particle and surrounding solution is then

\[ \frac{4\pi}{3} r^{*3}_p N_0 \frac{\rho_p}{M_p} + c^*_b \left( 1 - \frac{4\pi}{3} N_0 r^{*3}_p \right) = \frac{4\pi}{3} r^{*3}_{p,0} N_0 \frac{\rho_p}{M_p} + c^*_{b,0} \left( 1 - \frac{4\pi}{3} N_0 r^{*3}_{p,0} \right), \qquad (7) \]

where $\rho_p$ is density, $M_p$ is molar mass and $N_0$ the population density. Since the particles occupy only a small fraction of the total volume, $4\pi N_0 r^{*3}_p/3 \ll 1$, and introducing the molar volume $V_M = M_p/\rho_p$, this reduces to

\[ c^*_b(t^*) = c^*_{b,0} - \frac{4\pi N_0}{3 V_M} \left( r^{*3}_p - r^{*3}_{p,0} \right), \qquad (8) \]

which will be used for the far-field concentration in (3). As stated earlier, the diffusion equation must be solved on a domain $r^* > r^*_p$, where the particle radius is an unknown function of time. The flux of monomer to the particle is responsible for the particle growth,

\[ \frac{4\pi r^{*2}_p}{V_M} \frac{d r^*_p}{d t^*} = J. \qquad (9) \]

Eliminating $J$ between (9) and (4) yields

\[ \frac{d r^*_p}{d t^*} = V_M D \left. \frac{\partial c^*}{\partial r^*} \right|_{r^* = r^*_p}. \qquad (10) \]

This is subject to the initial condition $r^*_p(0) = r^*_{p,0}$, where $r^*_{p,0}$ is the initial particle radius. The governing system is now fully defined and consists of equation (2), subject to the initial and boundary conditions (3), where $c^*_i$ is defined by (6) and $c^*_b$ by (8), and the unknown particle radius satisfies (10). As there is no analytical solution to the system, we proceed to simplify the problem and use numerical approximations in order to understand the behaviour of the solution.
Nondimensionalisation
A complex mathematical model can often be reduced to a simpler form by estimating the relative magnitude of terms. A standard way to achieve this is by writing the system in non-dimensional form.
Here the model is nondimensionalised via

\[ r = \frac{r^*}{r^*_{p,0}}, \qquad r_p = \frac{r^*_p}{r^*_{p,0}}, \qquad t = \frac{t^*}{\tau^*}, \qquad c = \frac{c^* - s^*_0}{\Delta c}, \qquad s = \frac{s^* - s^*_0}{\Delta c}, \qquad (11) \]

where $\Delta c = c^*_{b,0} - s^*_0$ represents the driving force for particle growth and $s^*_0 = s^*_\infty \exp(\alpha/r^*_{p,0})$ is the initial particle solubility. The concentration and growth equations yield two possible time scales, $\tau^*_D \sim r^{*2}_{p,0}/D$ and $\tau^*_G \sim r^{*2}_{p,0}/(V_M D \Delta c)$, respectively. To focus on particle growth we choose the growth time scale, $\tau^* = \tau^*_G$, and the governing system is then transformed to

\[ \varepsilon \frac{\partial c}{\partial t} = \frac{1}{r^2} \frac{\partial}{\partial r} \left( r^2 \frac{\partial c}{\partial r} \right), \qquad (12) \]

with the Stefan condition

\[ \frac{d r_p}{d t} = \left. \frac{\partial c}{\partial r} \right|_{r = r_p}, \qquad (13) \]

subject to

\[ c(r_p, t) = c_i, \qquad c(r_p + \delta, t) = c_b(t), \qquad (14, 15) \]

\[ c(r, 0) = 1, \qquad r_p(0) = 1, \qquad (16) \]

where $c_i$ and $c_b$ are the dimensionless forms of (6) and (8), and

\[ \varepsilon = V_M \Delta c, \qquad \mathrm{Da} = \frac{D}{k r^*_{p,0}}, \qquad \omega = \frac{\alpha}{r^*_{p,0}}. \qquad (17) \]

The above system contains a number of nondimensional groups. The first, $\varepsilon$, is generally very small for nanoparticle growth. For example, Peng et al. [23] studied Cadmium Selenide nanoparticles with a capillary length of 6 nm and initial radii in the range 1-100 nm, so that $\varepsilon = O(10^{-3})$. In general it should be expected that $\varepsilon \ll 1$. Comparing the time scales, $\tau^*_D/\tau^*_G = V_M \Delta c = \varepsilon \ll 1$. Physically, this indicates that growth is orders of magnitude slower than diffusion; that is, the concentration adjusts much faster than growth occurs and so the system can be considered pseudo-steady. In terms of the mathematical model, this means that the time derivative can be omitted from the concentration equation, but since time also enters the problem through the definitions of $r_p$ and $c_b$, this is a pseudo-steady-state situation rather than a true steady state.
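The relative sizes of these groups are easy to check numerically. A minimal sketch follows, assuming illustrative parameter values only; the numbers below are placeholders chosen to give the quoted orders of magnitude, not the fitted values of Table 1:

```python
# Sketch: evaluate the dimensionless groups of (17) for assumed values.
D = 1e-12      # monomer diffusion coefficient, m^2/s (assumed)
k = 1e-6       # surface reaction rate, m/s (assumed)
alpha = 6e-9   # capillary length, m (order quoted for CdSe)
r_p0 = 2e-9    # initial particle radius, m (assumed)
V_M = 3e-5     # molar volume of the solid, m^3/mol (assumed)
dc = 30.0      # driving force c*_{b,0} - s*_0, mol/m^3 (assumed)

eps = V_M * dc           # ratio of diffusion to growth time scales
Da = D / (k * r_p0)      # inverse Damkohler number
omega = alpha / r_p0     # dimensionless capillary length

print(f"eps = {eps:.1e}  (pseudo-steady if << 1)")
print(f"Da = {Da:.1e}, omega = {omega:.1f}")
```

With these numbers $\varepsilon \approx 10^{-3}$, consistent with the estimate above, while Da and ω are both O(1) or larger, so neither the surface reaction nor the solubility variation can be discarded a priori.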
The parameter Da is an inverse Damköhler number measuring the relative magnitude of diffusion to surface reactions [16]. In the past, similar models have been simplified by considering diffusion-limited growth (Da ≪ 1) or surface-reaction-limited growth (Da ≫ 1). In practice both mechanisms play a role. In [19] it is shown that the diffusion-limited case requires either $k \to 0$, which results in zero growth, or that the concentration adjacent to the particle surface matches the solubility, $c_i \simeq s$, throughout the process. Similarly, reaction-driven growth requires $D \to 0$, which again indicates no growth, or $c_i \simeq c_b$ throughout the process. Therefore, we will place no restrictions on Da.
A common simplification is to assume ω ≪ 1, which reduces the OFC, (1), to a constant, $s^* = s^*_\infty$, or a linear approximation is used, see [14,28]. This significantly simplifies the analysis. However, for particles that have just nucleated, or very small nanoparticles, ω is not small and the simplification is not appropriate. Despite the large errors in the prediction of $s^*$ caused by the small-ω assumption, authors obtain good matches to data. In [19] it is shown that this is because the pseudo-steady model is not valid at early times, when the particle is small; by the time the model is valid, so is the linearisation. Essentially, the variation of $s^*$ plays a minor role in the study of the growth of a single nanoparticle. However, this is not the case with multiple particles, where Ostwald ripening is driven by the delicate balance between the bulk concentration and the particle solubility. The reason why the pseudo-steady model is invalid at small times is the thickness of the boundary layer δ(t). The model involves the assumption δ(t) ≫ $r_p$, yet initially, when the fluid is well-mixed, δ(0) = 0. Only when the boundary layer is sufficiently thick is it reasonable to apply the pseudo-steady model. In [19] it is shown, through comparison with experiments, that this initial stage can last of the order of 100 s. The shift to the pseudo-steady regime can often be identified simply by looking at the trend in the data. In the following we will present the model with the full OFC and then an approximation where it is neglected. We will also neglect early data points when matching to experimental data.
Pseudo-steady state solution
In the pseudo-steady limit ($\varepsilon \to 0$) equation (12) reduces to $(r^2 c_r)_r = 0$. After integrating and applying the boundary conditions we obtain

\[ c = c_b + \Lambda \left( \frac{1}{r} - \frac{1}{r_p + \delta} \right), \qquad (19) \]

where

\[ \Lambda = \frac{r_p^2 (s - c_b)}{\mathrm{Da} + r_p \delta/(r_p + \delta)}. \qquad (20) \]

There is no way to calculate δ(t) in the pseudo-steady approach; a time-dependent treatment, such as that described in [18], is required. Hence the standard method is to assume $r_p \ll \delta$, which reduces (19)-(20) to

\[ c = c_b + \frac{r_p^2 (s - c_b)}{(\mathrm{Da} + r_p)\, r}. \qquad (21) \]

Substituting (19) into the Stefan condition (13), in the same limit, leads to

\[ \frac{d r_p}{d t} = \frac{c_b - s}{\mathrm{Da} + r_p}, \qquad (22) \]

where $s = s(r_p)$ is the dimensionless solubility, which varies with $e^{\omega/r_p}$. Hence, the problem has been reduced to the solution of a single first-order ordinary differential equation for $r_p$. It is a highly nonlinear equation which must be solved numerically. The assumption that $r_p \ll \delta$ means it only holds for relatively large times. Approximate solutions, in various limits, may be found in the literature. For example, if we take $c_b$ constant and ω sufficiently small for the linear approximation to the exponential to hold, then equation (22) may be integrated in the limits of large and small Da.
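Equation (22), together with the dimensionless form of the mass balance (8), is straightforward to integrate numerically. The following sketch uses SciPy in place of the Matlab ode15s solver employed later in the paper; the parameter values and the specific dimensionless forms of the solubility and bulk concentration are assumptions for illustration only:

```python
# Sketch: integrate the pseudo-steady growth law (22) for one particle.
import numpy as np
from scipy.integrate import solve_ivp

Da, omega = 1.0, 0.5   # assumed dimensionless groups
sigma = 0.2            # s*_inf / Delta_c (assumed)
beta = 0.05            # depletion coefficient 4*pi*N_0*r*_{p,0}^3/(3*V_M*dc) (assumed)

def solubility(r_p):
    # Dimensionless Ostwald-Freundlich solubility; zero at r_p = 1
    return sigma * (np.exp(omega / r_p) - np.exp(omega))

def bulk_conc(r_p):
    # Mass conservation: the bulk is depleted as the particle grows
    return 1.0 - beta * (r_p**3 - 1.0)

def rhs(t, y):
    r_p = y[0]
    return [(bulk_conc(r_p) - solubility(r_p)) / (Da + r_p)]

sol = solve_ivp(rhs, (0.0, 50.0), [1.0], rtol=1e-8)
print("final radius:", sol.y[0, -1])
```

Growth slows and stops when the depleted bulk concentration meets the falling solubility, reproducing the saturating behaviour discussed below.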
In [19] it is shown that, for sufficiently large times, for a single particle the variation of $e^{\omega/r_p}$ does not affect the solution, in which case equation (22) may be integrated analytically to find an implicit solution of the form $t = t(r)$. By identifying negligible terms this may be inverted to find an explicit solution, $r = r(t)$, equation (23), which depends on a single parameter $G$. Here $r_m$ is the experimental maximum radius, $t_0$ is the time at which the second growth stage is judged to have begun, $r_{p0}$ the radius at this time and

\[ f(r_{p0}) = \frac{r_m^2 + r_m r_{p0} + r_{p0}^2}{(r_m - r_{p0})^2}. \]

If $t_0$ is greater than the true value, this should not affect the results; this is discussed in further detail in [19]. The unknown parameter $G$ is defined in equation (24); its value is obtained by comparison with experimental data. Once $G$ is determined, the diffusion coefficient ($D$), the reaction rate ($k$), the solubility of the bulk material ($s_\infty$) and the population density ($N_0$) may be systematically retrieved. In [19] it is stated that $ak \approx bD$, hence $G \approx 1/(3a^2 b k) = 1/(3 a b^2 D)$, where $a$ and $b$ are constants arising in the integration of (22). Further, since $c^*_0 \gg c^*_{eq}$ a reasonable approximation is $a^3 = V_M c^*_0$. Growth stops when the maximum radius $r^*_m = a/b$ is achieved.
Evolution of a system of N particles
We now extend the single particle model to an arbitrarily large system of particles. The particle radii, initial radii and solubilities are denoted $r^*_i$, $r^*_{i,0}$ and $s^*_i$, respectively, where $i$ represents the $i$th particle and $i = 1, \ldots, N$. We nondimensionalise via (11), with the only difference being that the mean initial radius $\bar{r}^*_{p,0}$ replaces the length scale $r^*_{p,0}$. Note that this also affects the concentration scale, $\Delta c$, through the initial solubility. Hence, in what follows all dimensionless parameters are the same as those defined in (17), except with $r^*_{p,0}$ and $s^*_0$ replaced by $\bar{r}^*_{p,0}$ and $\bar{s}^*_0$, respectively. Under the pseudo-steady approximation, and assuming that there are no interparticle diffusional interactions, the growth of each particle is described by an equation of the form (22), with $r_i$ and $s_{i,0}$ substituting for $r_p$ and $s_0$, respectively. The bulk concentration equation must account for all particles, that is

\[ c_b(t) = 1 - \frac{\beta}{N} \sum_{i=1}^{N} \left( r_i^3 - r_{i,0}^3 \right), \]

where $\beta = 4\pi N_0 \bar{r}^{*3}_{p,0}/(3 V_M \Delta c)$ and $N$ may decrease with time due to Ostwald ripening. Assuming that the solution is sufficiently dilute for interparticle interactions to be neglected, in dimensionless form the problem is then governed by the system of differential equations

\[ \frac{d r_i}{d t} = \frac{c_b - s_i}{\mathrm{Da} + r_i}, \qquad (27) \]

for each $i = 1, \ldots, N$. Equation (27) represents a system of $N$ non-linear ODEs which must be solved numerically.
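A minimal numerical sketch of this system is given below, again with SciPy; all particles share a single bulk concentration computed from the average excess particle volume, and the dimensionless groups repeat the assumptions of the single-particle sketch above:

```python
# Sketch: the N-particle system (27) with a shared bulk concentration.
import numpy as np
from scipy.integrate import solve_ivp

Da, omega, sigma, beta = 1.0, 0.5, 0.2, 0.05   # assumed dimensionless groups

def rhs(t, r, r0):
    s = sigma * (np.exp(omega / r) - np.exp(omega))  # solubility of each particle
    c_b = 1.0 - beta * np.mean(r**3 - r0**3)         # shared bulk concentration
    return (c_b - s) / (Da + r)

def evolve(r0, t_end=50.0):
    r0 = np.asarray(r0, dtype=float)
    return solve_ivp(rhs, (0.0, t_end), r0, args=(r0,), rtol=1e-8)
```

Because the right-hand side is fully vectorized, the same function handles N = 2 and N = 1000 without modification.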
Comparison of model with experiment
The accuracy of the various forms of the mathematical model will now be ascertained through comparison with the experiments on CdSe nanocrystal synthesis reported by Peng et al. [23]. Certain parameter values concerning the experiment and CdSe are provided in that paper; others, such as $D$, $k$, $s^*_\infty$, $N_0$, must be inferred. Here they will be determined through fitting to equation (23). Since this only contains one free parameter, the fitting is a very simple process. We then show that the pseudo-steady state model (PSS model) gives virtually identical results to equation (23). Once it has been established that the analytical solution and the PSS model give such good correspondence and match to experimental data, we move on to comparing with the numerical results. The PSS is an approximation to the full system defined by equations (12)-(16); the analytical solution, equation (23), is an approximation to the PSS. Using the full system in the N particle model would be extremely computationally expensive; for this reason the PSS model is the basic component of the N particle model. Consequently, we demonstrate that the PSS closely matches the numerical solution of the full model (and consequently so does equation (23)). The N particle model is then examined. First, by setting N = 2 we are able to demonstrate Ostwald ripening. We go on to show that as N increases the prediction for the average radius tends to the single-particle analytical solution.
Parameter estimation via the analytical solution
In Figure 2 we show the first eleven data points from [23]. As discussed earlier, not all data points correspond to the pseudo-steady regime; here it is clear that the first three points follow a linear trend, so these will be neglected. In the experiment extra monomer was added after three hours, so we have ignored all data beyond the eleventh point. Using the remaining eight data points in the nonlinear least-squares Matlab solver lsqcurvefit to fit to equation (23), we obtain G ≈ 958. The result of equation (23), with G = 958, is shown as the solid line in Figure 2.
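Since the explicit solution (23) is not reproduced above, the sketch below illustrates the one-parameter fitting workflow with an assumed saturating growth law, in which the particle volume relaxes exponentially towards $r_m^3$ on a time scale set by the single free parameter. The data arrays are placeholders, not the measurements of [23], and SciPy's least_squares stands in for lsqcurvefit:

```python
# Sketch: one-parameter fit of a saturating growth law to placeholder data.
import numpy as np
from scipy.optimize import least_squares

t_data = np.array([0.5, 0.75, 1.0, 1.5, 2.0, 2.5, 3.0])   # hours (placeholder)
r_data = np.array([2.6, 2.9, 3.1, 3.35, 3.5, 3.6, 3.65])  # nm (placeholder)
r_m, r_p0, t0 = 3.8, 2.4, 0.3                              # nm, nm, h (placeholder)

def radius_model(G, t):
    # Assumed form: particle volume relaxes exponentially towards r_m^3
    return (r_m**3 - (r_m**3 - r_p0**3) * np.exp(-(t - t0) / G))**(1.0 / 3.0)

fit = least_squares(lambda p: radius_model(p[0], t_data) - r_data, x0=[1.0])
print(f"fitted parameter: {fit.x[0]:.3f} (placeholder units)")
```

The fit has a single unknown, so, as in the paper, the procedure is robust and essentially instantaneous.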
To determine the necessary parameters for the other models we first note that the maximum radius attained during this part of the experiment is $r^*_m \approx 3.8$ nm $= a/b = D/k$. The experimental concentration at the end of the growth process is known; this defines $c^*_{eq} = s^*_\infty e^{\alpha/r^*_m}$. This is enough information to determine $D$, $k$, $s^*_\infty$ and $N_0$. The values taken from [23] are shown in the first ten rows of Table 1; the final four (in italics) are those calculated after G has been determined. These values may be used for the pseudo-steady state (PSS) model or the arbitrary-N model. The dashed line in Figure 2 represents the result predicted by the PSS model. Clearly there is excellent agreement between this and the analytical solution, thus verifying the claim that the solubility may be set to a constant without greatly affecting the solution (provided the early-time data are neglected).

[Table 1: Parameter values taken from [23]. The parameters in italics are not given explicitly and are obtained via a fitting approach.]
Validating the pseudo-steady state approximation
The PSS model is described by equations (19)-(22). Since this forms the basis for the N particle model it is important to verify its accuracy. We do this by comparison with the numerical solution of the full system (12)-(16) (referred to as the full model). Although we have already shown that the PSS is very well approximated by the analytical solution, and that for a single crystal the solubility variation may be neglected, we must employ the PSS in the N particle model. This is because Ostwald ripening occurs when an individual particle's solubility drops below the bulk concentration; the analytical solution neglects variation in solubility, so cannot capture this behaviour. Problems similar to the full model frequently occur in studies of phase change, where they are termed one-phase Stefan problems (one-phase because the temperature is neglected in one of the phases; this is analogous to neglecting the concentration in the crystal). Examples of one-phase problems occur in laser melting and ablation, Leidenfrost evaporation of a droplet and in supercooled materials. At the nanoscale there are many studies on nanoparticle melting and growth, see [8,7,20,21]. The nanoparticle studies are particularly relevant, since they deal with a spherical geometry and at the nanoscale the melt temperature varies in a manner similar to the variation of the solubility in the current problem. For this reason we follow the numerical scheme outlined in these studies. It requires a standard boundary immobilization transformation, after which a semi-implicit finite difference scheme is applied to the resulting equations; for further details see [8,7]. The PSS model requires the solution of a single nonlinear ordinary differential equation, (22). To do this we simply use the Matlab ODE solver ode15s. Once $r_p$ is determined, the concentration is given by equation (21).

[Figure 3: Solution of the full and the PSS models (represented by circles and by a solid line, respectively) for the growth of a single particle. Panel (a) shows the evolution of the particle radius and panel (b) the concentration of monomer around the particle at five different times.]
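The immobilization step can be sketched as follows; this is a generic form, assuming the diffusion layer is mapped onto a fixed interval (the precise transformation and discretization of [8,7] may differ in detail):

\[ \xi = \frac{r - r_p(t)}{\delta(t)}, \qquad u(\xi, t) = c(r, t), \]

so that the moving layer $r_p \le r \le r_p + \delta$ becomes $0 \le \xi \le 1$, and the chain rule gives

\[ \left. \frac{\partial c}{\partial t} \right|_{r} = \frac{\partial u}{\partial t} - \frac{\dot{r}_p + \xi \dot{\delta}}{\delta} \frac{\partial u}{\partial \xi}, \qquad \frac{\partial c}{\partial r} = \frac{1}{\delta} \frac{\partial u}{\partial \xi}. \]

The advection-like term generated by the moving mesh is then treated explicitly and the diffusion term implicitly, which is what makes the scheme semi-implicit.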
In Figure 3 we compare the numerical solution of the full and PSS models using the parameters of Table 1: panel (a) shows the evolution of the particle radius and panel (b) the concentration profile at five different times. In both cases the agreement between the full and PSS models is excellent, thus justifying the use of the simpler PSS model in the N particle system. Panel (a) shows how the particle grows rapidly until around t ≈ 1 hr, when the growth rate decreases; subsequently the radius slowly approaches the maximum value of $r_p \simeq 3.8$ nm. This behaviour can be understood by analysing the concentration profiles presented in panel (b). The growth rate is proportional to the concentration gradient, see equation (10). From Figure 3(b) it is clear that the concentration gradient between the particle surface and the far field is relatively large at small times, leading to rapid growth. After t ≈ 1 hr the concentration profile is practically flat, leading to a slow growth rate.
Ostwald ripening with N = 2
Ostwald ripening (OR) occurs when the bulk concentration falls below a given particle's solubility. With a single particle the growth rate tends to zero as the solubility and bulk concentration approach each other; hence OR never occurs. However, with a group of particles of different sizes OR must, in theory, always occur, although in practice it could take a very long time and so be difficult to observe.
To demonstrate that the current model can predict OR we now investigate the simplest possible case, with two particles.
The system is defined by equation (27) with N = 2. We take parameter values from Table 1 and choose initial radii of 2 nm and 2.5 nm. The governing equations may again be solved using the Matlab ODE solver ode15s. Results are presented in Figure 4. Figure 4(a) shows the evolution of the radii for more than 25 hours; the solid line represents the evolution of the 2.5 nm particle, the dashed line the 2 nm one. As can be seen, for small times both particles grow rapidly; however, after around 1.7 hours the smaller particle starts to shrink, while the larger one grows linearly. In Figure 4(b) the variation of the particle solubilities and the bulk concentration is shown: the solid and dashed lines correspond to the solubilities of the 2.5 nm and 2 nm particles, respectively, while the dotted line is the bulk concentration. With reference to the variation of the radius it is clear that the rapid growth phase corresponds to a sharp decrease in the bulk concentration. Initially the solubility of each particle is below the bulk concentration and decreases as the radius increases. Ostwald ripening begins when the solubility of the smaller particle crosses the $c_b$ curve, $c_b = s_1$ at $t \simeq 1.7$ hr; subsequently its size decreases. The solubility of the larger particle keeps slowly decreasing, in keeping with its slow growth, and remains below the bulk concentration until the end of the simulation. If we continued the simulation the smaller particle would eventually disappear.
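This two-particle case can be reproduced qualitatively with the evolve() sketch given in the previous section; the radii are scaled by their mean, and the parameter values remain the illustrative ones assumed there, so only the qualitative crossover, not the 1.7 hr timing, should be expected:

```python
# Sketch: two-particle Ostwald ripening with the earlier evolve() helper.
import numpy as np

r0 = np.array([2.5, 2.0]) / 2.25   # initial radii scaled by their mean
sol = evolve(r0, t_end=60.0)
print("final radii:", sol.y[:, -1])
# For long runs a terminal event should stop the integration before the
# smaller radius approaches zero, where the solubility term blows up.
```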
N particle system
To simulate the experiments of [23] we consider a distribution of N nanoparticles, where the initial distribution is generated by random numbers with an initial mean radius $\bar{r}^*_{p,0}$ of 2.92 nm and a standard deviation of $\sigma_0 = 8.9\%$. In the numerical solution, if a particle's radius decreases below 2 nm it is assumed to break up and all its monomer re-enters the bulk concentration.
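A sketch of the initial distribution and the break-up rule is given below; freezing a dissolved particle and returning its monomer to the bulk through the conservation sum is one simple implementation choice among several, and the dimensionless groups repeat the earlier assumptions:

```python
# Sketch: random initial radii and the 2 nm break-up rule.
import numpy as np

Da, omega, sigma, beta = 1.0, 0.5, 0.2, 0.05   # as in the earlier sketches

rng = np.random.default_rng(0)
N = 1000
mean_r = 2.92                                   # nm
r0_nm = rng.normal(mean_r, 0.089 * mean_r, N)   # 8.9% standard deviation
r0 = r0_nm / r0_nm.mean()                       # radii scaled by the mean
r_min = 2.0 / r0_nm.mean()                      # dimensionless break-up radius

def rhs_breakup(t, r, r0):
    alive = r >= r_min
    r_eff = np.where(alive, r, 0.0)               # dissolved particles hold no mass
    c_b = 1.0 - beta * np.mean(r_eff**3 - r0**3)  # their monomer re-enters the bulk
    s = sigma * (np.exp(omega / r[alive]) - np.exp(omega))
    rate = np.zeros_like(r)
    rate[alive] = (c_b - s) / (Da + r[alive])     # broken-up particles are frozen
    return rate
```

Passing rhs_breakup to the same solve_ivp call used by evolve() then gives an N-particle run of the kind compared in Figure 5.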
In Figure 5 we compare the prediction for the average radius of 10 and 1000 CdSe particles (dashed lines) with the corresponding data of Peng et al. [23]. The single-particle analytical solution for $r(t)$, equation (23), is shown as a solid line. The inset shows the difference between the N particle and analytical solutions. In Figure 5(a) the maximum difference between the two solutions is of the order of 1.5%, and decreases rapidly with time. The solution with N = 1000, shown in Figure 5(b), has a maximum difference of the order of 0.15% from the analytical solution. In practice N would be much higher: in Table 1 the population density is given as $N_0 = 8.04 \times 10^{21}$ crystals/m³, so in a volume $V \approx 7 \times 10^{-6}$ m³ we would expect around $10^{16}$ crystals. The figure demonstrates that as N increases the solution tends to the analytical solution. Given that N is typically very high, it is clearly not necessary to solve the large system, and the analytical solution is obviously easier to understand and implement than a $10^{16}$ particle model. However, it is important to note that in the present example there is no significant Ostwald ripening. From Figure 4 we observe that defocussing starts around 1.7 hours, and after nearly 30 hours the radius of the smaller particle has only decreased by 7%. In the experimental data used here extra monomer is added to the solution after three hours, and we stop our calculations then. So, in the absence of significant OR, we may assume that the analytical solution may be used to predict the average evolution of nanocrystal growth. If OR is to be modelled, an N particle model should be used, since this accounts for the solubility of each particle.
Conclusions
We have developed a model for the growth of a system of N particles, where N may be arbitrarily large. The model involves a system of first-order nonlinear ordinary differential equations, which are easily solved using standard methods. The basis of the N particle model is the pseudo-steady approximation presented in [19]. This incorporates the particle solubility variation, which then permits the model to capture Ostwald ripening.
It has been shown that the pseudo-steady model for a single particle has an accurate approximate explicit solution. This was verified by comparison with the full pseudo-steady model. The explicit solution shows that there is a single main parameter controlling nanocrystal growth.
The main drawback of the single particle model is that it cannot capture Ostwald ripening, whereby larger particles grow at the expense of smaller ones. By studying the system with N = 2 we were able to emulate Ostwald ripening in a very simple setting. By allowing N to become large and calculating the average particle radius we showed that the results approach the single-particle explicit solution, which may thus be considered to represent the average growth of a large distribution of particles. A consequence of this is that the N = 2 model can equally well represent the average radii of an initially bimodal distribution of nanocrystals, while an N > 2 model can represent a much larger distribution of particles.
The main advantage of the current method is that, since the single particle model may be solved analytically and accurately describes the average radius of a distribution, the controlling parameters are apparent. This allows us to adjust them and so optimise the growth process, paving the way for efficient large-scale production. Since we only have to deal with a single particle, the numerical solution is rapid (almost instantaneous), as opposed to previous large-scale, time-consuming calculations.
The role of foreign MNEs in China's twin transition: a study on the organization of green and digital innovation processes
Purpose – The purpose of this study is to shed light on the twin transition in China in the organization of innovation processes in artificial intelligence (AI) and green technology (GT) development and to understand the role of foreign multinationals in Chinese innovation systems.

Design/methodology/approach – A qualitative research approach is used by interviewing executives from German multinationals with expertise in AI and GT development and the organization of innovation processes in China. In total, 11 semi-structured interviews were conducted with companies, and the data were analysed with a thematic qualitative text analysis.

Findings – The findings show that AI applications for GT are primarily developed in cross-company projects that are led by local and regional authorities through the organization of industrial districts and clusters. German multinationals are either being integrated, remaining autonomous or being excluded from these twin transition innovation processes.

Originality/value – This paper aims to fill the gap in the literature by providing one of the first qualitative approaches towards twin transition innovation processes in China and exploring the integration of foreign MNEs into these processes.

© Chris Brueck. Published by Emerald Publishing Limited. This article is published under the Creative Commons Attribution (CC BY 4.0) licence. Anyone may reproduce, distribute, translate and create derivative works of this article (for both commercial and non-commercial purposes), subject to full attribution to the original publication and authors. The full terms of this licence may be seen at http://creativecommons
Introduction
Two technology paradigms will reshape future economic development at the regional level: green technology (GT), which mitigates the negative environmental impacts of innovations and whose regional dimensions are explored within the geography of sustainability transitions literature (Hansen and Coenen, 2015; Truffer and Coenen, 2012), and digital technology, which is digitizing innovation activities and transforming economic processes through new technologies and applications such as artificial intelligence (AI) (Balland and Boschma, 2021; Capello and Lenzi, 2021; Corradini et al., 2021). The emerging concept of a "twin transition" suggests that these transformative processes are closely linked and should complement each other (Muench et al., 2022). In this context, a twin transition might be achieved through the coupling of green and digital technologies, with the goal that digital applications can facilitate and accelerate GT, whereas the green transition can, in turn, shape the priorities and objectives of digital technological innovation. Existing research on the twin transition primarily focuses on the relationship, similarities and potentials of green and digital technologies, often from a quantitative perspective, investigating twin transition processes through patent data analysis. These studies indicate that many digital technologies can be combined to support the development of GT at the regional level (Kopka and Grashof, 2022; Cicerone et al., 2022; Montresor and Vezzani, 2023). Furthermore, the research highlights numerous potentials of digital technologies for sustainability, with general-purpose technologies, particularly AI, emerging as the most transformative, holding the greatest potential to drive the digital transformation of the economy and offering several sustainability applications (Cockburn et al., 2019; Mouthaan et al., 2023).
However, two major research gaps remain, which are addressed in this paper. Firstly, there is a gap in how these technologies can be systematically combined to unlock their full potential and develop digital green applications. Previous research has primarily focused on how regional knowledge of green and digital technologies mutually influences each other and has not yet investigated innovation modes within company-level internal innovation processes or collaborations (Montresor and Quatraro, 2020; Santoalha et al., 2021; Montresor and Vezzani, 2023). Secondly, the role of the twin transition in local innovation agglomerations such as clusters or industrial districts is still a relatively under-explored perspective. Recent research has shed light on how clusters undergo digital or green transition processes. Götz and Jankowska (2017) already suggested that clusters might facilitate digital transformation, requiring a certain level of expertise in the field, despite the common assumption that digital technology contradicts local and regional aspects due to its globally interconnected nature. Additionally, Bettiol et al. (2021) demonstrated that companies in industrial districts tend to invest more in technologies related to the fourth industrial revolution. Hervás-Oliver (2021) also examined Industry 4.0 adoption in industrial districts and showed that collaborations are essential for digital transformation. Conversely, the sustainability transformation through clusters is also gaining attention (Lis and Mackiewicz, 2023). Although research on the green and digital transition of clusters is growing, there is still a lack of knowledge on how cluster organizations are systematically used to combine digital and green technologies, thus triggering twin transition processes. Overall, twin transition research has primarily focused on Western economies, with research on twin transition dynamics in emerging economies still being relatively new. This is surprising given the potential of emerging economic systems, such as China, to adapt to new technologies. On the one hand, this is due to China's comprehensive promotion of AI and its commitment to both the sustainability transition through GT and the pursuit of twin transition ideas (Filiou et al., 2023). On the other hand, it is related to the potential of the fast-acting state-led innovation system, which grants local innovation actors experimental freedom (Heilmann et al., 2013). These conditions show the potential for rapid innovation development in new technology domains, making China a crucial study area.
In this paper, the two research gaps are addressed by examining the twin transition in China, with a specific focus on AI technology used for GT, e.g. machine learning models regulating and efficiently managing energy distribution from renewable energies. The paper particularly focuses on the combined utilization of these technologies, as they can be classified as radical twin innovations and hold the greatest potential for a twin transition (Mäkitie et al., 2023). Furthermore, it is investigated whether and how companies internally develop these twin innovations and how collaborative development through state-led cluster organizations is promoted and managed. Specifically, this matter is examined from the perspective of German multinational enterprises (MNEs) operating an R&D centre in China. In doing so, insights are gained from the angle of foreign multinationals, which have been instrumental in China's innovation system in the past decades. However, this role has become increasingly uncertain due to geopolitical shifts since the COVID-19 pandemic and China's shift in innovation policy that affects the integration of MNEs in clusters. Hence, the paper aims to emphasize the role of multinationals in twin transition technology domains in China. The following two research questions were formulated to address the two research gaps from the perspective of MNEs in China:

RQ1. How do foreign multinationals develop AI, GT and twin innovations (AI/GT) in China?

RQ2. How are innovation processes combining AI/GT organized and administered in clusters, and how are multinationals involved in the innovation processes and integrated into Chinese innovation systems?
For the empirical part of the paper, 11 expert interviews were conducted with executives from German multinationals in China. The interview data were analysed using thematic qualitative text analysis. The remainder of the paper is structured as follows: Section 2 delves into background literature concerning twin transition perspectives in China, the organization of innovation and the role of multinationals. Section 3 outlines the methodological approach, explaining data collection, analysis and evaluation. The results are presented in Section 4, and the paper concludes with a discussion and conclusion in Sections 5 and 6.
Background literature

2.1 Green, digital and twin innovation in China
In Western nations, various policy programs have been launched in recent years to exploit and systematically promote certain potentials of a twin transition. For example, the Joint Research Centre of the European Commission summarizes the key requirements for a successful twin transition in the EU (Muench et al., 2022). However, research on the twin transition is still relatively new and not yet comprehensively understood. For now, there are only a few studies on the impact, effects and potentials of a twin transition, and there is a current focus on Western economies in twin transition research. Cicerone et al. (2022) observe that AI knowledge positively influences green-tech specialization in EU-28 regions, although certain constraints need to be considered, such as the prerequisite of existing green knowledge within regions. Kopka and Grashof (2022) presented similar findings and examined the link between AI and sustainability in German regions. They demonstrate that regional industrial structure needs to be understood to establish the link between AI and sustainability. Bianchini et al. (2023) examined the impact of digital and green technologies on greenhouse gas emissions in European regions. They find that while digital technologies can contribute to negative environmental impacts, linking them with existing green technologies can help reduce these impacts. Benedetti et al. (2023) discussed European features of a twin transition by investigating the impact of digitalization on energy efficiency and found a positive impact of digitalization across EU member states. Additionally, Almansour (2022) explored the twin transition from a qualitative perspective, indicating that digital features influence consumer adoption of electric vehicles. Scholars also show that the combination of digital and green technologies at the firm level can positively contribute to the twin transition, for example by increasing green competitive advantage (Rehman et al., 2023), or that urban firms can better exploit the potential of digital technologies compared to rural firms (Cattani et al., 2023). Furthermore, Collini and Hausemer (2023) took an agency-based approach to understanding twin transition pathways. They conceptualize that systemic change agents, such as clusters, influence twin transition pathways.
Overall, twin transition research in Western nations indicates that the combination of digital and green technologies at the regional, local and firm levels has great potential for harnessing twin innovations. Although these research findings demonstrate the necessity of a local perspective in comprehensively understanding the twin transition in industrialized countries, the regional and local factors contributing significantly to the twin transition in emerging economies remain largely unknown. To understand these local features of the twin transition, China makes an excellent study area, shaping the global innovation landscape in both digital and green technologies and increasingly affecting the digital and green transformation worldwide.
China is establishing a comprehensive AI strategy focusing on development and implementation across several industries and is thus building on AI as a pivotal factor for digital transformation (Pan, 2016; Wu et al., 2020; Yu and Zhang, 2021). In the Chinese policy context, AI is defined from three perspectives: the basic perspective (infrastructure and hardware), the technological perspective (e.g. machine learning) and the application perspective (e.g. smart city). This paper deliberately concentrates on AI technology that can contribute to sustainability or facilitate the development and improvement of GT. Consequently, technological and application perspectives of AI are considered, and this approach is also supported by AI definitions in the economics literature (Agrawal et al., 2019). Apart from the AI strategy, China has become a leading innovation nation in several GT domains in the past years (Huang and Lema, 2021). Due to massive investments in GT to address pollution and environmental crises, there is a growing amount of scholarly interest in how GT emerged and diffused in China (Horbach, 2014; Losacker and Liefner, 2020b). In this paper, the terms GT, eco-innovation and environmental technology are treated as synonyms. Therefore, definitions provided by Kemp et al. (2019) and Barbieri et al. (2020) are used, which consider GT as new or improved products or practices that lower environmental impacts or mitigate or reverse the negative effects of human action on the environment.
Research on GT and AI in China appears to be extensive; however, the systematic combination of these technologies remains largely obscure in the scholarly literature, although initial studies on the twin transition in China are emerging. For instance, Zhang and Du (2023) showed how the digital economy in Chinese cities reduces urban carbon emissions, highlighting regional variations in the potential of digital technology for green applications. Gao et al. (2023) delved into the role of big data in green innovation and demonstrated its positive effects. Furthermore, Ahmad et al. (2023) asserted that China's technological innovation fosters sustainable development, a view shared by Chen et al. (2023), who determined that fiscal science and technology expenditure can lower CO2 emissions, albeit with regional disparities. Li et al. (2023) discussed tangible applications of machine learning for urban sustainability in a review paper. The comprehensive political AI strategy also aligns with sustainability objectives. Xu et al. (2023) demonstrated that China's smart city policy positively influences green technological innovation. Furthermore, Filiou et al. (2023) explored the joint impacts of green and digital policies, analysing their influence on the emergence of eco-innovation. They assert that city-based AI policies significantly contribute to the increase in green patents. Collectively, a number of studies in recent years have examined the relationship between green and digital technologies in China. Nonetheless, a distinct perspective on innovation processes from a spatial standpoint remains notably absent.
Organization of innovation and the role of multinational enterprises in Chinese innovation systems
Innovation development in China is mainly characterized by a state-led innovation system that combines top-down processes with bottom-up dynamics (Heilmann et al., 2013; Lauer and Liefner, 2019; Fischer et al., 2021). Therefore, China's policy aims to establish cluster-based organizations of innovation actors, which facilitate innovation and are guided by authorities. This organization of innovation processes has historically revolved around pilot zones, which offer innovation actors experimental freedoms and are designed to initiate transformation and technological change in specific industries. Prominent examples include technology parks and science cities, as well as special economic zones (SEZ), which have been instrumental in attracting foreign companies' investment in the past (Teng et al., 2020; Zeng et al., 2011). Through this approach to inducing innovation, local and regional authorities can specifically address the economically heterogeneous nature of China and stimulate transformation processes in regional innovation systems (Xue et al., 2021; Liefner et al., 2021). This paper mainly refers to cluster organizations, which describe the state-led organization of innovation processes in clusters and industrial districts. The clusters are primarily created through pilot and demonstration zones, which establish networks between participating actors. In recent years, China has extended these cluster organizations to the development of technological solutions for green or digital domains, e.g. eco-cities for GT and sustainability applications (Chang et al., 2016; Wu et al., 2023) and AI pilot zones for AI applications (Arenal et al., 2020; Yang and Huang, 2022). Although certain processes and structures (especially in eco-cities) have already been studied, research on AI clusters is still in its initial stage and has not yet been able to show exactly how innovation processes take place and how AI/GT applications are developed.
In this context, it is also unclear what role foreign MNEs play in the innovation process in twin technologies. MNEs and foreign direct investment (FDI) have played an important role in these Chinese innovation systems. Starting in SEZs, regions attempted to spatially cluster FDI and attract MNEs, resulting in rapid economic growth and the establishment of well-functioning innovation systems. Hereby, MNEs have not only acted as conduits for technology transfer and knowledge spillover, importing managerial expertise, advanced technologies and best practices, but have also contributed to China's technological capabilities by driving industry advancement. The establishment of R&D centres by MNEs in China has adapted products to the local market and initiated technological progress and innovation. Therefore, MNEs have been crucial in fostering innovation through knowledge diffusion and technology upgrading (Blomstrom and Kokko, 1998; Liefner et al., 2013; Hayter and Han, 1998).
Many MNEs are essential components of existing innovation capacities that have been in place for a long time and are also involved in organizations and innovation dynamics, especially in several technology domains (Du and Krusekopf, 2023). However, since the Xi Government took office, there has been a strategic realignment of the national innovation strategy (Fischer et al., 2021). Starting with the "Made in China 2025" strategy, the country is increasingly relying on its own innovation activities and promoting indigenous innovations (Losacker and Liefner, 2020a). The involvement of foreign MNEs in new technology domains in which China aims to establish itself as a leader (especially in AI and GT) is becoming increasingly uncertain. Research on this topic is still relatively scarce, particularly in the area of twin transition technology. In this context, it seems important to understand the international linkages and entanglements in the organization of innovation processes.
Data collection and analysis
To answer the research questions, this paper uses a case study approach to examine the twin transition in China (Yin, 2017). The paper studies both internal innovation processes within the companies and the organization of innovation processes in AI/GT led by state actors. In doing so, the matter is examined from the perspective of German multinationals in China that are involved in innovation development, either by having an R&D centre or by being well-versed in the field. Contact was initiated with the support of business representatives and committees that specifically approached various German companies with an R&D presence in China. The prerequisite for these companies was that they possessed expertise in either AI, GT or both. The interviewees were required to be knowledgeable about the innovation processes within the company and capable of assessing organizational structures and collaborations in China. The interview participants consist of 10 companies from several industries with an R&D centre in China and one management consulting company that is engaged in and advises on digital and green innovation projects. The participants all hold top-level executive positions. The companies are predominantly major listed corporations that possess extensive global market shares in their respective industries and have contributed a large volume of investment to China. For this study, German companies were selected as the research object because they offer two advantages: firstly, German companies are among the most active FDI drivers, especially in China, where they have been an established part of the economy and innovation development for decades; secondly, the features of German multinational firms' participation in FDI activities do not differ significantly from those of other developed countries. German MNEs can therefore provide new information on how exactly innovation processes take place in China, and they can also serve as a model for investments by other industrialized countries in China (Chen and Reger, 2006).
To carry out the exploratory approach, semi-structured interviews were conducted using interview guides (Meuser and Nagel, 2009). The interviews took place from February to June 2023 via video calls. Subsequently, the interviews were transcribed and anonymized. The research approach and the three sections of the interview guide are illustrated in Figure 1. Various descriptive data of the interview sample are summarized in Table 1.
The interview data were analysed using a thematic qualitative text analysis, as outlined by Kuckartz (2014). Based on the research questions and theoretical considerations, the main categories were formed through a deductive approach and divided into development, organization and participation. Building on this, an inductive approach was used to form subcategories on the material, thus identifying specific approaches of different companies. This approach ensures that both the theoretical framework and the exploratory content provided by the experts are included in the analysis. The data analysis was conducted with the assistance of the qualitative data analysis tool MAXQDA.
Artificial intelligence and green technology innovation development of German multinational enterprises
The approaches of German MNEs with R&D centres in China to developing AI innovations for environmental protection and sustainability are highly heterogeneous. Across all companies, there was a consensus on the importance of GT and on how AI and other digital technologies can contribute to achieving the objective of becoming carbon neutral. However, the implementation of these twin innovations is still at an early stage. Therefore, while most of the companies interviewed have engaged with both technology fields, they have not yet developed comprehensive integrated solutions. In fact, the companies interviewed were either more familiar with GT, researching sustainable alternatives to comply with environmental regulations in China and to offer more sustainable alternatives in the market, or more acquainted with digital technologies, having closer connections to digital solutions, products or processes and thus previous experience with AI technology. The interviews reveal that companies with more experience in environmental technology development are more actively seeking ways to apply AI to GT. In the interviews, these were mainly self-learning applications for increasing energy efficiency in core products (I9), complex process flows in material extraction and allocation (I3, I7) or waste management coordination (I1, I4):

"We use AI explicitly for our product development and optimization, as it helps us to operate the energy processes within our applications" (I9).
By contrast, companies from the digital sector are investing significantly in AI research and its implementation in corporate processes. Nevertheless, it becomes evident that the explicit exploration of AI/GT, i.e. active research aimed at connecting these technologies, is not of utmost importance. It tends to be more of a positive side-effect that often accompanies these efforts, but is not the primary motivation:

"Sustainability is in our identity; however, our aim is not to develop use cases of AI for environmental technology, it is more of a byproduct of AI which often comes along due to positive effects of the technology" (I6).
"For us, AI is integral, which means that we do not look top-down for sustainable application strategies, but apply it everywhere and thus naturally also within the scope of our environmental technologies" (I2).
In addition to these diverse approaches to AI/GT, the interviews reveal that the two technologies are perceived as spatially distinct. According to the interviewees, innovation development in GT primarily occurs in localized and streamlined manners, while innovation development in AI originates from the national level in China. From the perspective of some companies, regional factors are particularly responsible for the funding and development of GT:

"The promotion of AI strongly originates from the central government at the national level and is then adopted or expanded by regions or local governments. [...] In the sustainability and GT domain, this is much more locally nuanced. For example, in Northern China, waste recycling aligns with agriculture; they have a lot of straw and are considering how to transform it into chemicals and utilize it intelligently" (I3).
Organization of innovation and participation of German multinational enterprises
Chinese organization of innovation processes: The interviews show that new cluster organizations arise from the specific AI pilot and demonstration zone approach. These are managed and operated differently, and involve different innovation actors, than former high-tech parks and science cities, with a new focus on green and digital technologies. The clusters consist of different industrial districts, each with a specific thematic focus determined by the regional government. There are multiple levels to these clusters, and a network is actively built between the companies and other actors, controlled and monitored by state-owned enterprises (SOEs). Moreover, being included in such a cluster organization brings numerous incentives. However, the orientation here also depends on regional factors. For instance, a local company can significantly influence the strategic orientation of the regional government:

"The parks, as part of the pilot zones, are organized to provide very comprehensive assistance (talent, infrastructure, service) to all companies and research institutions [...], the management of the parks is controlled or directed by the regional or municipal governments through state-owned companies, [...] we are in close contact with the park management, they help us to solve any issue" (I5).
The role of the local and regional government is particularly significant in the case of AI clusters. The government plays a steering role by setting clear project objectives that affect the companies supported, which must fulfil these requirements no matter what. The government then takes on a supportive role, giving companies space to operate. There is consensus among AI-oriented companies regarding the future role of data, which is an essential foundation for the self-improvement capability and functionality of AI. Through monitoring tasks, the local and regional authorities have access to this data and can use or provide knowledge in projects. The new data protection laws in China require that this data remains in the country and allow local and regional governments to access it by taking on monitoring and review tasks. Through this access, the data and knowledge can be used and provided in cross-cutting projects, resulting in a significant advantage, especially when data is seen as a production factor for AI (Cockburn et al., 2019):

"Regional authorities set a target and provide massive support for companies to achieve this target, how this is done doesn't really matter, but often the path leads to AI and often AI helps to be more sustainable" (I7).
"There are several laws in China that prohibit the transport of data out of China. [...] the innovation processes are kept in the country by monitoring tasks of the regional governments; this allows regional and local governments to review the data. [...] so data can actually be seen as a production factor and this means that China has a massive advantage in terms of the access to data and the possibilities to train AI applications" (I2).
Through the interviews, it becomes clear that the role of regional governments is also crucial for the combination of AI/GT, as they often set targets for innovation projects within the organizational structures (clusters and networks). These projects are encouraged by high funding amounts and state support for process flows, prompting participants to leverage their respective entrepreneurial capabilities and potentials. As a result, AI technologies are much more likely to be used for GT or sustainability, as different actors from different backgrounds collaborate on a larger scale, with the regional government providing the framework:

"We have been part of an innovation project where we contributed GT and a large Chinese software company contributed AI applications which improved our product. [...] in the project the regional government brought us all together and explained what they wanted from us, we then collaborated with the other companies to develop an inter-city waste disposal system which was based on a self-improving AI application and therefore helped us to be much more efficient since it could coordinate and redirect the waste disposal within the city" (I1).
Participation of German MNEs: The involvement of German multinationals in clusters and their inclusion in the innovation system for AI/GT technology development is highly diverse. The situation of German companies can be described from three directions. Firstly, companies that are important for the innovation system and developments in China have access to and are actively included in these cluster organizations. These companies sometimes receive state subsidies, including for the development of AI/GT. They collaborate with local or regional authorities and cooperate closely with Chinese companies, research institutes, startups or universities to develop products or processes. Furthermore, they are involved in large-scale innovation projects that bring together various innovation actors. When collaborating in the field of AI/GT in large-scale projects, these companies contribute significantly, although large Chinese software companies mostly undertake the AI development:

"We collaborate with other companies, but in the field of AI implementation mainly or actually only with the Chinese tech giants, they have the expertise in the field and it is only possible to work with them when we want to work on smart city projects which are led by the government."
Secondly, autonomous companies that exclusively conduct research in their fields in China serve the local and regional markets. They do not wish to be involved in the clusters and are not included. These companies have primarily come to China because the market for their technologies is particularly attractive, and they conduct research that serves the local market. However, they continue to pursue global innovation processes within the company and thus do not want to be involved in regional innovation processes:

"We don't want to work closely with other companies or in these state clusters, we want to serve the local market first and foremost and do research for the market by localizing here, to do this we get ideas from academia or startups into the company, with the goal of outside-in" (I2).
Thirdly, companies that would like to benefit from the innovation system but cannot access the cluster organizations are systematically excluded from these local innovation processes. These companies feel systematically marginalized and would like to collaborate with other institutions to bring together AI/GT, but they lack connections to decision-makers and are not actively included in the network:

"We would like to participate more in digitization funding programs, but we are excluded as a foreign company, we don't have access to the clusters or local funding sources by the regional government" (I8).
Discussion
The results show that multinationals more focused on GT are more inclined to explore AI applications for GT innovations. Conversely, multinationals more familiar with AI are not necessarily seeking GT applications for their AI advancements. Although this outcome was not entirely anticipated, given the AI/GT innovation potential of companies within China and research indicating the interconnectedness of these processes, particularly in the Chinese context (Wang et al., 2023; Gao et al., 2023), it is not entirely unexpected either. Some of the companies aligned with GT mentioned that while they aim to apply AI within their organizations for innovation processes, they might not yet have reached a stage where AI is comprehensively integrated and used for operations. Furthermore, it is intriguing that companies more oriented towards AI specifically come to China to explore the digital market and seem to largely overlook the potential applications of GT within their AI endeavours, even if not every company is willing and able to develop twin innovations. Even though AI/GT twin innovations are still in their early stages, several companies are already systematically researching internal applications. However, these twin innovations are primarily implemented through state-led collaborations within innovation projects. These large-scale projects are initiated by regional and local governments, which establish a framework of goals and assemble a network of companies, startups and research institutions from various domains to jointly develop solutions. This approach enables faster and more effective utilization of both AI and GT in innovations. This aspect has been less emphasized in the existing literature, yet it becomes clear that regional governments play a pivotal role, particularly in driving the twin transition at the regional level. Thus, the modes of innovation in the Chinese twin transition rely more on collaboration than on intra-company innovation processes.
Furthermore, the findings indicate that the role of local and regional governments is not only crucial for initiating twin innovation but also for organizing the clusters, networks and innovation agglomerations of actors. The cluster organizations emerging from pilot and demonstration zones are primarily established by the local and regional authorities. The management and control of cluster organizations are then handled by SOEs. Moreover, it has become apparent that the role of data, especially as a production factor for AI applications, is becoming increasingly vital. While this is already a consensus in AI research (Roberts et al., 2022; Cockburn et al., 2019), the interview observations provide a novel perspective. Due to the active involvement of the government and SOEs, data is often exchanged with authorities or SOEs within organizational structures, granting them considerable data sovereignty. This raises questions about data handling and utilization that could not be answered within the scope of this study.
Finally, the paper demonstrates that the role of German companies is multifaceted. German multinational engagements fall into three types: integrated, autonomous and excluded. This result builds on the previous understanding that, particularly for the combination of AI/GT, it is crucial to be integrated into the cluster organizations. Once access to networks and clusters is established and contributions in specific technology domains are possible, advanced innovation projects can be initiated, often led by local or regional authorities and involving several interdisciplinary actors. If a company is not part of these network structures, decoupling processes seem to take place. This pattern is also consistent with the revisited Uppsala internationalization process model by Johanson and Vahlne (2009). According to this model, the liability of outsidership leads to systematic disadvantages and uncertainties for companies arising from network structures. Moreover, it appears that certain technology fields remain predominantly reserved for Chinese companies, even when foreign multinationals participate in larger innovation projects and networks. In such cases, large Chinese software companies often take on the tasks of developing and applying AI models. This development can be discussed within the framework of technology sovereignty. China's mission-oriented innovation policy aims to develop technologies according to its goals, designating AI (and partially also GT) as critical to future economic competitiveness (Edler et al., 2023). This may explain why foreign multinationals are excluded from the innovation processes in certain domains.
The companies also differ in their focus on twin innovations. Integrated companies are incorporated into cluster organizations and thereby contribute to the development of local twin innovations, which arise within the framework of innovation projects. They are deeply integrated into existing innovation processes and have strong connections with various innovation actors from different domains, as well as with local authorities. Autonomous companies rely on internal innovation processes and partially develop internal AI/GT solutions. They are not very engaged in knowledge exchange and primarily aim to bring in external knowledge through unidirectional innovation cooperation with universities or startups. Excluded companies hardly develop AI/GT applications and rather focus on one technology domain. On the one hand, this is due to the limited R&D exchange with other innovation actors and the lack of integration into existing Chinese innovation systems. On the other hand, it is also due to the absence of other global innovation activities and individual goals, which mainly differentiate them from autonomous companies. Figure 2 visualizes the connections between the type of foreign MNE and various collaborative innovation actors. Table 2 illustrates the types of foreign multinationals in China and reveals the role they take within Chinese cluster organizations and their collaborative actors. The findings suggest that integration into cluster organizations can indeed facilitate the twin transition. The role of foreign companies in China is thus greatly dependent on their integration into the Chinese innovation system, especially in the domains of green and digital technologies. Therefore, foreign companies in China should consider the role they play within the Chinese innovation system and the implications this has for their twin innovation processes. Local circumstances can significantly influence both the role within their technology fields and the integration into cluster organizations.
The interviews also revealed that these different approaches are influenced by various factors. Formal and informal institutions, as well as informal connections to regional authorities or other key actors, play a role in participation in AI clusters or other cooperation endeavours. Nonetheless, decoupling processes can be observed in the interviews. On the one hand, decoupling occurs through the systematic exclusion of German multinationals that seek involvement in the innovation processes of new twin technology domains. On the other hand, decoupling also involves German companies contemplating diversification of their economic activities in China due to the geopolitical situation.
It is important to acknowledge the limitations of this work. Firstly, the study primarily examined MNEs headquartered in Germany. These companies may be subject to formal or informal differences at the national level that distinguish them from MNEs from other countries. Nevertheless, German companies and their international operating strategies exhibit characteristics that are rather typical of investments from other industrialized nations. Furthermore, the number of expert interviews is relatively modest. However, the analysis of these interviews revealed that theoretical saturation was reached, which underlines the richness of the insights obtained. This saturation is attributed to the study's deliberate focus on German MNEs with R&D centres in China, which provides a highly specific perspective that enhances the depth and relevance of the findings.
Additionally, the field in which German MNEs operate in China is strongly shaped by geopolitical tensions and developments between the EU and China. Although FDI often remains very long-term, short-term developments, and thus the participation of foreign multinationals in China, are partly dependent on the current political situation. Future research could dive deeper into the innovation processes of cluster organizations in the twin transition through AI/GT and compare the role of foreign multinationals with that of indigenous companies.
Conclusion
This paper has investigated the development and organization of the twin transition in China through interviews with German multinationals. To do so, two research questions addressed the research gaps identified. The findings related to the first research question, "How do foreign multinationals develop AI, GT and twin innovations (AI/GT) in China?", indicate that internal AI/GT development within companies is more often carried out by companies closely aligned with GT in their core business. Across companies, there are innovation projects guided by local or regional authorities, which involve various stakeholders and lead to the development of twin innovations. Regarding the second research question, "How are innovation processes combining AI/GT organized and administered in clusters, and how are multinationals involved in the innovation processes and integrated into Chinese innovation systems?", the organization continues to function through China-specific cluster organizations, where the role of local and regional government gains significance due to new data processes. Additionally, various involvements of German multinationals in twin innovation processes can be observed, with the companies either being integrated, remaining autonomous or being excluded. Integrated companies contribute to the development of twin innovation in cross-company projects, even if the task of AI development remains primarily with large Chinese software companies. Autonomous companies partially develop AI/GT innovations on their own and stay out of collaborative innovation. Initial decoupling processes in twin innovations can be observed through excluded companies, which mostly stay out of AI/GT innovation development. Foreign companies should be aware of their role in the Chinese innovation system (integrated, autonomous or excluded) and what this means for their innovation processes.
Companies that want to participate in twin transition innovations in China in the future must be aware of the local characteristics. The findings offer new insights into achieving a twin transition through twin innovations (AI/GT applications) and can provide context for the prospective participation of foreign multinationals in these innovation processes. Consequently, the paper contributes valuable insights into the development and organization of innovation processes within the twin transition in an emerging economy, demonstrating the importance of being involved in local cluster organizations.

Interview guide

Opening question (fragment): [...] role for your company and the company's economic activities in China.

AI/GT innovation development
(1) What role does GT play for your company and how do you develop GT innovations?
(2) What role do digital innovation and AI play for your company and how do you develop AI innovations?
(3) How are AI and GT innovations combined in your company and how do you develop AI/GT innovations (e.g. AI applications for GT/sustainability)?
(4) What special conditions apply to China in terms of AI/GT development in your company?
(5) What potentials and challenges do you see for the future development of AI and/or GT in your company and in China?

Organization of AI/GT innovation processes
(6) How are twin innovation processes (AI/GT) organized and directed by the government (e.g. political support, incentives and state-led innovation projects)?
(7) How is your company involved in AI or GT cluster organizations (AI, GT clusters and networks) and how are the clusters organized (what processes take place)?
(8) How does your company collaborate with other innovation actors (companies, universities, startups and government) to develop AI, GT and AI/GT innovations (involvement in specific cluster organizations or projects)?
(9) What do you see as the advantages and disadvantages of the cluster organizations in developing twin innovations?

Participation of German multinationals in AI/GT innovation processes
(10) What are the differences between German and Chinese companies in innovation development?
(11) What challenges do you see for your company in the field of AI and GT in China in the future?
(12) What kind of support for AI/GT innovations and participation in innovation projects would be desirable for your company in China?

Closing question
Would you like to add anything else to the interview?

Figure 1. Research approach and sections of the interview guide
Figure 2. Collaborative innovation processes of foreign MNE types

Corresponding author
Chris Brueck can be contacted at: brueck@wigeo.uni-hannover.de
|
2024-04-15T15:09:07.963Z
|
2024-04-16T00:00:00.000
|
{
"year": 2024,
"sha1": "6a854a3ade71ce4deefe79644f97cdaa6a8cae92",
"oa_license": "CCBY",
"oa_url": "https://www.emerald.com/insight/content/doi/10.1108/CR-08-2023-0207/full/pdf?title=the-role-of-foreign-mnes-in-chinas-twin-transition-a-study-on-the-organization-of-green-and-digital-innovation-processes",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "9c15786ca43bd79729efe30c0ba31dc98b44cc05",
"s2fieldsofstudy": [
"Environmental Science",
"Business"
],
"extfieldsofstudy": []
}
|
7628546
|
pes2o/s2orc
|
v3-fos-license
|
Complete genome sequence of the actinomycete Actinoalloteichus hymeniacidonis type strain HPA 177T isolated from a marine sponge
Actinoalloteichus hymeniacidonis HPA 177T is a Gram-positive, strictly aerobic, black pigment producing and spore-forming actinomycete, which forms branching vegetative hyphae and was isolated from the marine sponge Hymeniacidon perlevis. Actinomycete bacteria are prolific producers of secondary metabolites, some of which have been developed into anti-microbial, anti-tumor and immunosuppressive drugs currently used in human therapy. Considering this and the growing interest in natural products as sources of new drugs, actinomycete bacteria from the hitherto poorly explored marine environments may represent promising sources for drug discovery. As A. hymeniacidonis, isolated from the marine sponge, is a type strain of the recently described and rare genus Actinoalloteichus, knowledge of the complete genome sequence enables genome analyses to identify genetic loci for novel bioactive compounds. This project, describing the 6.31 Mbp long chromosome, with its 5346 protein-coding and 73 RNA genes, will aid the Genomic Encyclopedia of Bacteria and Archaea project.
Introduction
Strain HPA 177T is the type strain of the species Actinoalloteichus hymeniacidonis. It was isolated from the marine sponge Hymeniacidon perlevis at the intertidal beach of Dalian, Yellow Sea, North China, during an investigation of its actinomycete diversity [1].
Members of the diverse order Actinomycetales are a major source of a variety of novel bioactive and potentially pharmaceutically important compounds and drugs, such as anticancer agents [2][3][4], antibiotics [5,6], and other industrially relevant molecules and enzymes with diverse biological activities [5,7]. Marine actinomycetes in particular have become a focus of research, since they have evolved the greatest genomic and metabolic diversity and are promising sources of novel secondary metabolites and enzymes [5,[7][8][9].
Classification and features
The genus Actinoalloteichus was established by Tamura et al. (2000) on the basis of morphological, physiological, chemotaxonomic and phylogenetic criteria. The genus contains Gram-positive, non-acid-fast, aerobic organisms with branching vegetative hyphae [20]. The aerial mycelium of Actinoalloteichus develops straight spore chains [20]. According to 16S rDNA gene sequence analysis Actinoalloteichus is part of the family Pseudonocardiaceae, suborder Pseudonocardineae, order Actinomycetales, class Actinobacteria [20,21] (Table 1). It differs from other genera of its family by its morphological characteristics, fatty acid components and its nonmotility [20].
The genus Actinoalloteichus currently contains only five known species. Besides Actinoalloteichus hymeniacidonis HPA 177 T the other currently known members are the halophilic Actinoalloteichus hoggarensis [22], Actinoalloteichus nanshanensis, isolated from the rhizosphere of a fig tree [23], the soil bacterium Actinoalloteichus spitiensis [24] and Actinoalloteichus cyanogriseus, the type species of the genus isolated from a soil sample collected from the Yunnan province of China [20].
A representative 16S rRNA sequence of A. hymeniacidonis HPA 177T was compared to the Ribosomal Database Project database [25], confirming the initial taxonomic classification. On the basis of the 16S rDNA, A. hymeniacidonis shows highest similarity to A. hoggarensis AH97T (99.2%) and A. nanshanensis NEAU119T (98.3%). Together with A. spitiensis DSM 44848T (96.8%) and A. cyanogriseus IFO 14455T (96.4%), they form a distinct clade within the family Pseudonocardiaceae. Figure 1 shows the phylogenetic neighborhood of A. hymeniacidonis in a 16S rRNA gene based tree.

Table 1 (classification, excerpt): Phylum 'Actinobacteria' TAS [48]; Class Actinobacteria TAS [21]; Order Actinomycetales TAS [49,50]; Suborder Pseudonocardineae TAS [51]; Family Pseudonocardiaceae TAS [51,52]; Genus Actinoalloteichus TAS [20]. Evidence codes: TAS, Traceable Author Statement (i.e., a direct report exists in the literature); NAS, Non-traceable Author Statement (i.e., not directly observed for the living, isolated sample, but based on a generally accepted property for the species, or anecdotal evidence). These evidence codes are from the Gene Ontology project [53].

Fig. 1 (caption): The tree is built with the RDP Tree Builder, which utilizes Weighbor [54] with an alphabet size of 4 and a length size of 1000. The tree building also involves a bootstrapping process repeated 100 times to generate a majority consensus tree [55]. Streptomyces albus DSM 40313T was used as the root organism. Species for which a complete or draft genome sequence is available are underlined.

Fig. 2 (caption): Colony of A. hymeniacidonis HPA 177T grown at 28 °C for 8 days on ISP2 agar medium prepared with artificial sea water.

A. hymeniacidonis HPA 177T forms branching vegetative hyphae (Fig. 2), which are grey to black in color and tend to fragment after 3 weeks of cultivation [1]. The aerial hyphae develop spores with dimensions of 0.6 × 0.8 μm [1]. HPA 177T is strictly aerobic and non-motile [1]. Growth of A. hymeniacidonis was shown at temperatures between 15 and 45 °C (optimal growth between 20 and 37 °C) [1]. HPA 177T can utilize fructose, glucose, maltose, mannitol, mannose, xylose, rhamnose, sucrose, sorbitol, sodium citrate, casein, or starch as carbon sources, but not arabinose, inositol, or raffinose [1] (Table 1). It grows well on yeast extract/malt extract agar or oatmeal agar and produces a black soluble pigment when growing on yeast extract/malt extract agar as well as on peptone/yeast extract/iron agar [1]. It has been shown that the strain grows faster on ISP2 agar media prepared with 50% artificial sea water, which, considering the source of isolation, probably reflects an adaptation to the marine environment. Urea is not decomposed by A. hymeniacidonis, and this strain shows neither hydrolysis of aesculin or hippurate, nor utilization of calcium malate, sodium oxalate, or sodium succinate, nor reduction of nitrate [1].
The diagnostic polar lipids were shown to be mainly composed of phosphatidylethanolamine, phosphatidylglycerol, phosphatidylinositol, and phosphatidylinositol mannoside, as well as some other glucosamine-containing phospholipids of unknown structure [1]. A. hymeniacidonis does not contain mycolic acids [1].
Genome sequencing information
Genome project history

Due to the increasing interest in exploiting new and rare actinomycetes as sources of novel secondary metabolites [5], Actinoalloteichus hymeniacidonis HPA 177T, a member of the rare genus Actinoalloteichus [20], was selected for sequencing. Although it is not part of the GEBA project [26], sequencing of the type strain will aid the GEBA effort. The genome project is deposited in the Genomes OnLine Database [27] and the complete genome sequence is deposited in GenBank. A summary of the project information is shown in Table 2.
Growth conditions and DNA isolation
A. hymeniacidonis HPA 177T was grown aerobically in 50 ml of 3% TSB medium (Oxoid, UK) in 250 mL baffled flasks at 28 °C and 250 rpm. Genomic DNA was isolated from ~2 g of mycelium (wet weight) using the Wizard Genomic DNA Purification Kit (Promega, USA) according to the manufacturer's protocol, with the following modification: prior to precipitation of the DNA with isopropanol, the clarified lysate was extracted once with ½ volume of a 1:1 mixture of phenol/chloroform (pH 8.0).
Genome sequencing and assembly
Two libraries were prepared: a WGS library using the Illumina-compatible Nextera DNA Sample Prep Kit (Epicentre, WI, USA) and a 6 kb mate-pair library using the Nextera Mate Pair Sample Preparation Kit, both according to the manufacturer's protocol. Both libraries were sequenced in a 2 × 250 bp paired read run on the MiSeq platform, yielding 4,594,541 total reads and providing 159.00× coverage of the genome. Reads were assembled using the Newbler assembler v2.8 (Roche). The initial Newbler assembly consisted of 31 contigs in five scaffolds, with a total of 50 contigs larger than 100 bp. Analysis of the five scaffolds revealed that three make up the chromosome, with the remaining two containing the three copies of the rRNA (RRN) operon. The Phred/Phrap/Consed software package [28][29][30][31] was used for sequence assembly and quality assessment; in the subsequent finishing process, gaps between contigs (repetitive elements) were closed by manual editing in Consed.
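As a rough cross-check of the reported sequencing depth, coverage can be estimated with the Lander-Waterman relation (total sequenced bases divided by genome size). The sketch below uses the read count and chromosome length quoted in this paper; the nominal 250 bp read length is an assumption (an upper bound), since the reported 159× presumably reflects trimmed or mapped bases rather than raw read length.

```python
# Lander-Waterman depth estimate:
# coverage = (number of reads x read length) / genome size.
# Values are taken from the text; READ_LENGTH is the nominal, pre-trimming
# maximum, so the result is an upper bound on the reported 159x coverage.

TOTAL_READS = 4_594_541      # reads from the 2 x 250 bp MiSeq run
READ_LENGTH = 250            # nominal read length in bp (assumption)
GENOME_SIZE = 6_306_386      # chromosome length in bp

def estimated_coverage(n_reads: int, read_len: int, genome_bp: int) -> float:
    """Total sequenced bases divided by genome size."""
    return n_reads * read_len / genome_bp

print(f"Estimated depth: {estimated_coverage(TOTAL_READS, READ_LENGTH, GENOME_SIZE):.1f}x")
```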
Genome properties
The genome includes one circular chromosome of 6,306,386 bp (68.08% G+C content) (Fig. 3). Among a total of 5425 predicted genes, 5346 are protein-coding genes. Of the protein-coding genes, 4068 (74.90%) were assigned a putative function; the remainder were annotated as hypothetical proteins. The properties and statistics of the genome are summarized in Tables 3 and 4, and the circular plot is shown in Fig. 3. (Table note: totals are based on either the size of the genome in base pairs or the total number of genes in the annotated genome.)
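For illustration, the headline genome statistics (sequence length and G+C content, reported above as 6,306,386 bp and 68.08%) can be recomputed directly from the assembled sequence. The following is a minimal sketch; the FASTA file name is hypothetical.

```python
# Recompute genome length and G+C fraction from a FASTA file.

def genome_stats(fasta_path: str) -> tuple[int, float]:
    """Return (total length, G+C fraction) over all sequences in a FASTA file."""
    length = 0
    gc = 0
    with open(fasta_path) as fh:
        for line in fh:
            if line.startswith(">"):   # skip record headers
                continue
            seq = line.strip().upper()
            length += len(seq)
            gc += seq.count("G") + seq.count("C")
    return length, gc / length

size, gc_frac = genome_stats("A_hymeniacidonis_HPA177.fasta")  # hypothetical path
print(f"{size} bp, G+C = {gc_frac:.2%}")
```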
Insights from the genome sequence

Gene clusters for biosynthesis of secondary metabolites
So far, there have been no reports on the isolation of secondary metabolites from A. hymeniacidonis HPA 177T. However, keeping in mind that all actinomycete genomes sequenced so far contain secondary metabolite biosynthesis gene clusters (SMBGCs), the genome of strain HPA 177T was analyzed for their presence using the online version of the software antiSMASH 3.0.4 [44]. The results of the analysis were manually curated to confirm or edit the borders of the clusters, to identify the closest homologues in the databases based on BLAST searches (Table 5), and to gain a more detailed insight into the biosynthesis of the corresponding compounds. In total, 25 SMBGCs were identified, 11 of which appeared to be unique at the time of analysis, based on searches of the public databases. This conclusion was based on the unique composition of the core genes in the clusters encoding scaffold-building enzymes and, in some cases, such as stand-alone terpene cyclase or type III polyketide synthase genes, on low (below 60%) identity of their products to proteins in the NCBI database. Based on this analysis, it seems possible that A. hymeniacidonis HPA 177T has the genetic capacity to produce novel compounds, some of which, e.g. peptide-polyketide hybrids, terpenoids, and unique lassopeptides, may represent bioactive metabolites suitable for drug development. Given its habitat, A. hymeniacidonis might be the real source of secondary metabolites that are thought to originate from its host sponge, comparable to, e.g., Theonella swinhoei and Entotheonella sp. [45]. The knowledge of the SMBGCs and their putative products will assist in the identification of the corresponding compounds and may pave the way to biosynthetic engineering toward the generation of new analogues.
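The novelty screen described above can be summarized as a simple filter: clusters whose core biosynthetic genes show below 60% identity to their best database hits are flagged as candidate-novel. The sketch below assumes a hypothetical list of (cluster, product class, best-hit identity) tuples; antiSMASH itself reports clusters in its own GenBank/JSON output, which would need to be parsed first.

```python
# Flag candidate-novel SMBGCs by the identity of their core gene products
# to the best BLAST hit, using the 60% cut-off mentioned in the text.
# The cluster list is illustrative, not from the actual analysis.

clusters = [
    # (cluster id, product class, best BLAST hit identity in %)
    ("cluster_01", "terpene", 42.0),
    ("cluster_02", "type III PKS", 55.5),
    ("cluster_03", "NRPS-PKS hybrid", 88.0),
]

NOVELTY_IDENTITY_CUTOFF = 60.0  # percent identity below which a cluster is flagged

novel = [(cid, prod) for cid, prod, ident in clusters
         if ident < NOVELTY_IDENTITY_CUTOFF]
for cid, prod in novel:
    print(f"{cid} ({prod}): candidate novel SMBGC")
```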
Conclusion
The genome sequence of A. hymeniacidonis HPA 177T represents the first genome of the A. hoggarensis/A. hymeniacidonis/A. nanshanensis subgroup, the first complete genome of this genus, and the first of a marine species of this genus. As such, it will be a useful basis for future genome comparisons. The presence of 25 SMBGCs indicates a great potential for secondary metabolite production, either by heterologous expression in suitable hosts or by activating the clusters through genetic engineering.

Authors' contributions

LS prepared and wrote the manuscript, AA and AW performed library preparation and sequencing, JK coordinated the study, SZ isolated genomic DNA, analyzed the genome for the presence of secondary metabolite biosynthesis gene clusters, and contributed to writing the manuscript, and CR assembled and analyzed the genome sequence. All authors read and approved the final manuscript.
Competing interests
The authors declare that they have no competing interests.
|
2016-12-30T08:36:52.502Z
|
2016-12-01T00:00:00.000
|
{
"year": 2016,
"sha1": "28360f120e5d4f83332341d8ab2870de01112679",
"oa_license": "CCBY",
"oa_url": "https://environmentalmicrobiome.biomedcentral.com/track/pdf/10.1186/s40793-016-0213-3",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "28360f120e5d4f83332341d8ab2870de01112679",
"s2fieldsofstudy": [
"Biology",
"Engineering"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
}
|
12206529
|
pes2o/s2orc
|
v3-fos-license
|
Predicting Potential Global Distributions of Two Miscanthus Grasses: Implications for Horticulture, Biofuel Production, and Biological Invasions
In many regions, large proportions of the naturalized and invasive non-native floras were originally introduced deliberately by humans. Pest risk assessments are now used in many jurisdictions to regulate the importation of species and usually include an estimation of the potential distribution in the import area. Two species of Asian grass (Miscanthus sacchariflorus and M. sinensis) that were originally introduced to North America as ornamental plants have since escaped cultivation. These species and their hybrid offspring are now receiving attention for large-scale production as biofuel crops in North America and elsewhere. We evaluated their potential global climate suitability for cultivation and potential invasion using the niche model CLIMEX and evaluated the models’ sensitivity to the parameter values. We then compared the sensitivity of projections of future climatically suitable area under two climate models and two emissions scenarios. The models indicate that the species have been introduced to most of the potential global climatically suitable areas in the northern but not the southern hemisphere. The more narrowly distributed species (M. sacchariflorus) is more sensitive to changes in model parameters, which could have implications for modelling species of conservation concern. Climate projections indicate likely contractions in potential range in the south, but expansions in the north, particularly in introduced areas where biomass production trials are under way. Climate sensitivity analysis shows that projections differ more between the selected climate change models than between the selected emissions scenarios. Local-scale assessments are required to overlay suitable habitat with climate projections to estimate areas of cultivation potential and invasion risk.
Introduction
Plant species are often introduced to new regions through human intervention. Plants that were introduced historically for medicinal, agricultural, or horticultural uses compose a large proportion (>60%) of the currently naturalized angiosperms in the United States and elsewhere [1]. Once established, these species have the potential to become invasive, with subsequent negative ecological and economic effects [2]. Many jurisdictions have introduced weed risk assessment methods to evaluate the risk that deliberately introduced plant species will become invasive in the future (e.g., [3], [4], [5], [6]). However, for species that were introduced prior to the widespread use of such methods, their proliferation through increasing horticultural sales or industrial cultivation could increase their risk of escape. Assessing the risks and potential invasive outcomes of such species is therefore important in developing best management practices [7], [8], [9].
Several species of Miscanthus have been introduced to novel ranges in North America, Europe, and Scandinavia for both horticultural and agricultural purposes. These are tall, perennial, rhizomatous C4 grasses native to temperate, humid subtropical, and tropical savannah climates of Asia [10]. Miscanthus sinensis Andersson was first introduced to North America as an ornamental plant in the 1890s; it has since escaped cultivation in the northeastern United States and is considered invasive in some states [11]. Less is known about the first introduction of M. sacchariflorus (Maxim.) Franch. as an ornamental plant in North America, but escaped specimens were noted in the mid-western United States by 1950 [12], and the first escaped specimens in Ontario, Canada, were collected in 1952 [13]. These two species produce a sterile hybrid, M. x giganteus J.M. Greef & Deuter ex Hodkinson & Renvoize, that has been encountered infrequently in the wild [14]. M. sinensis has become a very popular ornamental plant in areas of the United States and Canada [15], and all three species are of interest as potential biofuel crops (e.g., [16], [17], [18]). Breeding programs for horticultural and agricultural improvement could enhance the potential of these species to be invasive [19], [11].
The role of weed risk assessment is to evaluate the risk of escape from cultivation and the extent of possible economic and environmental damage [4]. Thus, many assessments include estimating the potential climate suitability of the risk assessment area for candidate species for import (e.g., [3], [4], [5], [6]) because climate is a good predictor of plant distributions [20], [21]. Climate suitability provides a coarse-scale estimate of a species' distribution while ignoring factors such as substrate geology, biotic interactions, and infrastructure, which affect regional habitat suitability.
Various methods have been used to estimate potential species distributions, for example, plant hardiness zones [22], [6], climate regions occupied [23], and a wide range of bioclimatic and niche models (e.g., [24], [25], [9]). The latter use detailed information on temperature and moisture in the species' native range to determine areas with potentially suitable climate for population persistence. Estimating species' preferences for certain growing conditions can be difficult, leading to uncertainty in model parameterization. Model sensitivity to the choice of parameter values is seldom evaluated (but see [26], [27]), yet it is important for understanding which parameters have the greatest effect on model output and therefore should be estimated most carefully, or where results should be treated most cautiously. Such information can be used in interpreting model output and in directing future research to improve model reliability.
Estimating species' potential ranges under projected climate change is also a priority. This work is of particular concern for species of agronomic and horticultural importance, as well as for invasive and potential pest species [28]. Shifts in distributions of agronomic and horticultural species are likely to keep pace with climate change if people continue to plant them where conditions are suitable [29]. Distribution shifts of pests and invaders could also be favoured under climate change because these species tend to have wide physiological tolerances and traits that allow them to take advantage of long-distance (human-assisted) dispersal vectors [30]. Previous assessments of climatic suitability indicate that the modelled distributions differ depending on the choice of climate change model and scenario (e.g., [31], [32]). Comparing multiple models and climate change scenarios therefore allows assessment of the sensitivity of results to these choices.
The purpose of this analysis was threefold. First, we estimated the current potential global distributions of M. sacchariflorus and M. sinensis using a commonly employed niche model. Second, we evaluated the model sensitivity to estimate which parameters are most critical and whether this differs between the species. Third, we examined how the potentially suitable area is projected to change under future climates, as well as how sensitive these results are to the selection of climate model and emissions scenario. The resulting potential distributions indicate areas that might be suitable for horticultural or agricultural cultivation of these species, but also susceptible to their invasion, should they escape.
Methods
We estimated the current global native and introduced distributions of M. sacchariflorus and M. sinensis using species occurrence data from the Global Biodiversity Information Facility (GBIF; www.gbif.org), botanical garden records, and a literature search of agronomic trials. All data were sorted to remove duplicate records and were separated based on whether geolocation information (latitude and longitude) was provided or whether we could geocode the record using information such as street name, city, county, or region ("inferred" location). Records that did not provide location information below the country level were omitted.
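As a minimal illustration of this cleaning step, the sketch below drops exact duplicates and splits records into "given" (coordinates supplied) and "inferred" (to be geocoded from place names); the record structure is hypothetical.

```python
# Toy occurrence records: (species, lat, lon, place description).
records = [
    ("M. sinensis", 35.68, 139.69, None),         # given coordinates
    ("M. sinensis", 35.68, 139.69, None),         # exact duplicate, dropped
    ("M. sinensis", None, None, "Tokyo, Japan"),  # to be geocoded ("inferred")
]

# Remove exact duplicates, then split by whether coordinates were supplied.
unique = sorted(set(records), key=str)
given = [r for r in unique if r[1] is not None and r[2] is not None]
inferred = [r for r in unique if r[1] is None or r[2] is None]
print(len(unique), len(given), len(inferred))  # 2 1 1
```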
We used the native distribution data to model the potential climatic range of each species using the CLIMEX Compare Locations model [33], [34]. We used the 0.5° world grid climate data provided with the software (from the Climatic Research Unit at Norwich, UK [35]). CLIMEX assumes that the geographical distribution of the species is limited by climate; it does not generally account for biotic interactions [36] or substrate type. It calculates an annual growth index based on the species' fitted temperature and moisture response functions, as well as four stress indices (hot, cold, dry, wet, and their combinations), to calculate an ecoclimatic index (EI). The EI is an estimate of a location's climatic suitability to support a persistent population of the species being modelled [34]. EI ranges from 0 to 100, with 0 meaning the area is not suitable for species persistence and 100 meaning the climate is optimal for the species year-round. In practice, an EI of 100 is only attained for species in a stable and ideal climate [34].
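For reference, the EI classes used throughout the analysis can be expressed as a simple classifier. The class boundaries below are inferred from how the classes are reported later in the paper (marginal up to EI = 10, favourable 20-30, highly favourable above 30) and should be read as assumptions rather than CLIMEX definitions.

```python
# Map an ecoclimatic index (0-100) to the five EI classes used in the paper.
# Boundary placement at the class edges is an assumption.

def classify_ei(ei: float) -> str:
    if ei <= 0:
        return "unsuitable"
    if ei <= 10:
        return "marginal"
    if ei <= 20:
        return "suitable"
    if ei <= 30:
        return "favourable"
    return "highly favourable"

for ei in (0, 5, 15, 25, 40):
    print(ei, "->", classify_ei(ei))
```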
For each species, we began with parameter values from the default temperate template and adjusted them iteratively based on the species' biology until the modelled distributions approximated the native distributions [34]; these included humid subtropical (M. sacchariflorus, M. sinensis) and tropical savannah (M. sinensis) climates of Asia. The model was then validated by comparison with the observed distribution records in the species' introduced ranges in Europe and North America. Given the satisfactory fit, no further adjustments were required at this stage for either model.
Parameter Fitting
Temperature and cold/hot stress. Based on their native distributions, both M. sacchariflorus and M. sinensis are well suited to cold-temperate regions (e.g., [37], [38]). Their percentage shoot emergence at experimental low temperatures ranges from 10 to 100% at 7 to 15 °C [39]. Their hybrid M. x giganteus shows only minor reduction in leaf photosynthetic capacity when grown at 10 °C or 14 °C compared to 25 °C [40], [41]. Therefore, the limiting low temperature (DV0) and lower optimal temperature (DV1) were set at 5 °C and 15 °C, respectively, for both species (Table 1). This and the cold stress (below) accounted for the northernmost occurrences of both species in far northeastern China and the southeastern Primorsky Krai region of Russia.
The upper optimal temperature (DV2) and limiting high temperature (DV3) were adjusted based on the maximum temperatures occurring in the native region [42] and observations that the experimental optimal temperature for photosynthesis of their hybrid is between 30 °C and 35 °C [43]. The native distribution of M. sinensis extends much further south into tropical regions than does that of M. sacchariflorus [37]. Therefore, both DV2 and DV3 for M. sinensis were greater than those for M. sacchariflorus (Table 1).
For the cold stress index, we used parameters related to overwinter survival (lethal temperatures: cold stress temperature threshold, TTCS, and cold stress temperature rate, THCS) and cold stress affecting metabolism (cold stress degree-day threshold, DTCS, and cold stress degree-day rate, DHCS). Both species have strong cold tolerance and overwinter underground as rhizomes. The lethal temperature at which 50% of rhizomes were killed in a freezing experiment ranged from −3.4 °C to −6.3 °C for both species [44]. Therefore, TTCS was set to −5 °C and THCS to a very low accumulation rate below this threshold air temperature, because of the expectation that rhizomes would be insulated by plant litter and snow pack in colder areas. DTCS was adjusted downward slightly from the temperate template (15 °C) to 12 °C for M. sacchariflorus and 14 °C for M. sinensis, with a very low accumulation rate (DHCS; Table 1), because both species have high cold tolerance. DTCS was lower for M. sacchariflorus than for M. sinensis because native distribution records show M. sacchariflorus persisting at slightly higher latitudes.
There has been little investigation of heat stress effects on the growth of Miscanthus. Therefore, heat stress parameters (heat stress threshold, TTHS, and heat stress rate, THHS) were adjusted based on native distribution records for the species. TTHS was set to begin accumulating at or above the upper optimal growth temperature for M. sacchariflorus and M. sinensis, respectively, with a high accumulation rate (THHS). TTHS was higher and THHS was slightly lower for M. sinensis than for M. sacchariflorus to account for the former species' more tropical native distribution.
Soil moisture and dry/wet stress. Limiting low soil moisture (SM0) and lower optimal soil moisture (SM1) were set according to the temperate template. Upper optimal soil moisture (SM2) and limiting high soil moisture (SM3) were set according to the wet-tropical template for M. sinensis. In comparison, SM2 and SM3 were reduced slightly for M. sacchariflorus, in conjunction with the wet stress parameters, to prevent its potential distribution from occurring widely in wet tropical areas.
The dry stress threshold (SMDS) and dry stress rate (HDS) were set to accommodate the high drought tolerance of both species [45]. Both species were assigned an SMDS of 0.1, which is close to the minimum moisture content at which plants can extract water from the soil [46], [47]. HDS was set at a moderate rate, given that the hybrid can recover from short-term (30 days) but not long-term (60 days) drought [48].
Both M. sacchariflorus and M. sinensis grow well in saturated areas such as drainage ditches (HAH, personal observation), but not when submerged such as in streams [49], [50]. Therefore, the wet stress threshold (SMWS) and wet stress rate (HWS) were set so that the species would grow in wet areas but would experience a high rate of stress accumulation. Parameters for M. sacchariflorus were adjusted to restrict its southerly distribution.
Hot-wet stress: A hot-wet stress was added to the model for M. sacchariflorus to exclude its distribution from tropical equatorial areas of southeastern Asia-Pacific [37]. No hot-wet stress was used for M. sinensis.
Sensitivity to Model Parameters
Once the models were validated, we performed a sensitivity analysis for each species to determine the response of the modelled distribution to changes in parameter values [51], [34]. EI values from each iteration of the sensitivity analysis were compared with those of the original model to determine the change in area for each EI class. To do this, the EI data generated by CLIMEX were reprojected to a cylindrical equal-area projection in ArcGIS 10.1, masked to land area, and a nearest neighbour, inverse distance weighting interpolation was performed so that each grid cell represented an equal amount of area (50 × 50 km). The resulting grid cells were then reclassified into the five EI classes and the number of cells counted to determine the total area in each class. We then computed the proportional change in area from the original model for each EI class. Sensitivity was evaluated as a greater proportional change in area than in parameter value.
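The sensitivity metric reduces to a per-class relative change in area, as in the hedged sketch below; the cell counts are hypothetical, and each cell stands for 50 × 50 km of the equal-area grid.

```python
# Proportional change in area per EI class between a perturbed model run
# and the original model, from per-class cell counts on an equal-area grid.

CELL_AREA_KM2 = 50 * 50  # each grid cell covers 2500 km^2

def proportional_change(base_counts: dict, perturbed_counts: dict) -> dict:
    """Relative area change per EI class; classes absent from baseline are skipped."""
    changes = {}
    for ei_class, n_base in base_counts.items():
        if n_base == 0:
            continue
        n_new = perturbed_counts.get(ei_class, 0)
        changes[ei_class] = (n_new - n_base) / n_base
    return changes

baseline = {"marginal": 1200, "suitable": 800, "favourable": 500, "highly favourable": 300}
perturbed = {"marginal": 1100, "suitable": 900, "favourable": 450, "highly favourable": 330}
# A class is "sensitive" if its relative area change exceeds the 10% parameter change.
print(proportional_change(baseline, perturbed))
```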
Climate Change Scenarios
To compare the species' potential distributions under future climate change predictions, and to assess the sensitivity of these potential distributions to model selection and emissions assumptions, we used two general circulation models with two emissions scenarios (A2 and B1). We used the Bergen Climate Model 2.0 (BCM) from the Bjerknes Center for Climate Research, and the Coupled Global Climate Model 3 (CGCM3-T63) from the Canadian Center for Climate Modeling and Analysis [52]. The selected emissions scenarios represent comparatively low (B1) and high (A2) future greenhouse gas concentrations, thus spanning the likely range of probable future conditions [53].
Climate projection data for each model-scenario combination were input into CLIMEX as the 30-yr mean for three standard time periods: baseline, 2050s (2041-2070), and 2080s (2071-2100). First, monthly means of maximum temperature, minimum temperature, and relative humidity, and daily means of precipitation for the world (excluding Greenland, Antarctica, and the Arctic) were downloaded from the Canadian Climate Change Scenarios Network (CCCSN: http://www.cccsn.ec.gc.ca/?page=dd-gcm) for each model-scenario combination. The 30-yr means were then calculated to obtain a single value for each climate variable for each 2.75° × 2.75° grid cell. Because CLIMEX requires monthly climate data, the average daily precipitation data were multiplied by the number of days in the month to produce monthly means. CLIMEX also requires two relative humidity values, RH% at 0900 h and at 1500 h. Although only mean monthly values are available from the CCCSN, there is a strong diurnal cycle in RH%, with maximum values at night and minimum values in mid-afternoon. Daily mean values are therefore reasonably close to observed values at 0900 h and were taken as RH% at 0900 h. RH% at 1500 h was estimated using the CLIMEX method of multiplying the 0900 h value by 0.85 [34], [31].
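The two unit conversions described above are simple enough to show directly. The sketch below is a minimal rendering of them (monthly precipitation from a mean daily value, and the CLIMEX 0.85 factor for afternoon relative humidity); the example inputs are illustrative only.

```python
import calendar

def monthly_precip(daily_mean_mm: float, month: int, year: int = 2001) -> float:
    """Monthly precipitation total from a mean daily value (mm/day)."""
    # monthrange returns (first weekday, number of days in the month)
    return daily_mean_mm * calendar.monthrange(year, month)[1]

def rh_1500(rh_0900: float) -> float:
    """CLIMEX approximation of mid-afternoon RH% from the 0900 h value."""
    return 0.85 * rh_0900

print(monthly_precip(2.4, month=6))  # 2.4 mm/day in June -> 72.0 mm
print(rh_1500(80.0))                 # -> 68.0 %
```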
Using these model-scenario data, the CLIMEX bioclimatic model was run for each species. EI values were obtained for a total of 20 model runs ([circulation model baseline + two scenarios × two future time periods] × two models × two species). The EI data were reprojected into a cylindrical equal-area projection, masked to land area, and interpolated using inverse distance weighting of the three nearest points in ArcGIS 9.3. The areas of suitable climate under baseline and future conditions were calculated by reclassifying the raw EI data into the five EI classes. Projected changes in the potential distributions of each species were determined by overlaying future and baseline projections.
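The overlay step can be sketched as a cell-wise comparison of boolean suitability grids (suitability here meaning EI > 10); the numpy arrays below are hypothetical stand-ins for the reclassified rasters.

```python
import numpy as np

def overlay(baseline_suit: np.ndarray, future_suit: np.ndarray) -> dict:
    """Count equal-area cells that stay suitable, become suitable, or are lost."""
    return {
        "stable":      int(np.logical_and(baseline_suit, future_suit).sum()),
        "expansion":   int(np.logical_and(~baseline_suit, future_suit).sum()),
        "contraction": int(np.logical_and(baseline_suit, ~future_suit).sum()),
    }

base = np.array([[True, True, False]])
fut = np.array([[True, False, True]])
print(overlay(base, fut))  # {'stable': 1, 'expansion': 1, 'contraction': 1}
```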
For each species and time period, areas of agreement between models or scenarios were determined by overlaying the projected distributions based on the two models for a given scenario or the two scenarios for a given model, respectively. To quantify the similarity between pairs of projected potential distributions, we calculated a simple index of agreement by dividing the area where the two projections agree about the potential presence of Miscanthus (all area classified as suitable, favourable, or highly favourable, i.e., EI > 10) by the area where the two projections disagree. The higher the value, the more similar the two projections are. Values > 1.0 indicate that the models agree more than they differ; values < 1.0 indicate more disagreement than agreement.
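Because the index has an explicit definition, it is easy to state in code. The sketch below assumes boolean grids of equal-area cells marking EI > 10; the division is undefined when the projections disagree nowhere, which the sketch handles by returning infinity.

```python
import numpy as np

def agreement_index(suit_a: np.ndarray, suit_b: np.ndarray) -> float:
    """Ratio of cells where both grids mark EI > 10 to cells where they differ."""
    agree = np.logical_and(suit_a, suit_b).sum()
    disagree = np.logical_xor(suit_a, suit_b).sum()
    return float(agree) / float(disagree) if disagree else float("inf")

a = np.array([[True, True, False], [False, True, False]])
b = np.array([[True, False, False], [False, True, True]])
print(agreement_index(a, b))  # 2 agreeing vs 2 disagreeing cells -> 1.0
```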
Results
Occurrence records indicate that both the native and introduced distributions of M. sacchariflorus are less widespread than those of M. sinensis (Fig. 1). In the native range, both species occur throughout Japan, Korea, and south-central and eastern China, and in parts of northern and northeastern China. However, M. sinensis extends further south and east into Taiwan, the Philippines, and Vanuatu [37] (Fig. 1). In the introduced range, M. sacchariflorus occurs mainly in northwestern Europe, Denmark, Sweden, northeastern United States, and southeastern Canada. M. sinensis has been introduced to these areas as well as southeastern and parts of western United States, Mexico, Puerto Rico, Colombia, Chile, Argentina, Uruguay, southern Australia, Tasmania, and New Zealand.
For the M. sacchariflorus model, 100% of given native, 90.1% of inferred native, 89.1% of given introduced, and 97.7% of inferred introduced occurrence records are in areas deemed suitable, favourable, or highly favourable (EI > 10). Zero and 9.4% of given and inferred native, and 10.9% and zero of given and inferred introduced occurrences are in marginal areas. One inferred native record (0.6%) and one inferred introduced record (2.3%) are in areas deemed unsuitable. For the M. sinensis model, 100% of given native, 94.2% of inferred native, 99.7% of given introduced, and 97.5% of inferred introduced occurrences are in areas deemed suitable, favourable, or highly favourable. Zero and 5.8% of given and inferred native, and 2.0% and 2.5% of given and inferred introduced occurrences are in marginal areas. No observed occurrences are in areas deemed unsuitable. The models indicate that M. sinensis has a wider potential global distribution than does M. sacchariflorus (Fig. 1).
Sensitivity to Model Parameters
The modelled distributions differed in their sensitivity to a ±10% or 1 °C change in parameter values (Table 2). Eight of 22 (36%) parameters for M. sacchariflorus and 5 of 19 (26%) parameters for M. sinensis showed sensitivity in at least one EI class. Models for both species were very sensitive to changes in the heat stress temperature threshold (TTHS) and the upper optimal and limiting temperatures (DV2, DV3), and were mildly sensitive to changes in the upper optimal and limiting moistures (SM2, SM3). In addition, the M. sacchariflorus model was highly sensitive to an increase in the dry stress rate (HDS) and a decrease in the hot-wet temperature threshold (TTHW), and mildly sensitive to a decrease in the wet stress moisture threshold (SMWS).
The sensitive temperature parameters all related to upper temperatures and heat tolerance thresholds (Table 2).
Range Shifts Under Climate Change Projections
Projected changes in the potential area occupied by M. sacchariflorus and M. sinensis are generally large, with global decreases in potential area by 2080 of 4 to 6%, depending on the species, model, and scenario chosen (Table 3, World; Fig. 2). In all cases, the area of climatically suitable locations (EI > 10) continues to decrease over time.
Limiting this analysis to North America reveals some important regional differences (Table 3, North America). Projected percent changes in climatically suitable area are smaller for North America than for the world and are often different in sign. The direction of change also differs between the two models: for both species by 2080, regardless of scenario, the BCM projects reductions in the total suitable area and the CGCM projects increases in suitable area. There are also large changes in the relative proportions of habitat categories. In all projections, including those in which the area of climatically suitable habitat increases, highly favourable area (EI > 30) decreases and favourable area (EI 20-30) increases.
Comparing the future potential distributions of the Miscanthus species to their baseline potential distributions provides an indication of how rapidly shifts in suitable range might occur. This analysis indicates that the projected suitability shifts are moderately large for both species and under both emissions scenarios (Table 4). The smallest projected shifts in suitable range occur under the B1 emissions scenario, with M. sacchariflorus projected to have larger shifts than M. sinensis. The mean overlap between the baseline and 2080s potentially suitable areas is only 61% for M. sacchariflorus and 78% for M. sinensis. Very little range contraction is projected to occur in the native range, except for some projections for M. sacchariflorus. In contrast, range expansion is projected in northern parts of the native range for both species. In the non-native range, contraction is projected mainly in areas where the species have not yet been introduced, such as South America, Africa, and parts of Australia for both species, as well as islands of Southeast Asia for M. sacchariflorus. Contraction of the potential range is also projected to occur in southern parts of the United States, but more so for M. sinensis than for M. sacchariflorus. The majority of range expansion is projected to occur in northern areas of North America, eastern Europe, and Scandinavia. By overlaying the results for the different models and scenarios, we can identify patterns of agreement and disagreement between the various projections (Fig. 3). This analysis helps to identify changes in the suitable range that are highly likely (e.g., robust to the selection of model or scenario) versus those that are more speculative (e.g., those that differ depending on the model or scenario chosen). This analysis also allows us to identify whether the projections differ because of differences between the climate models or between the scenarios. These results indicate that the choice of climate model accounts for more of the difference in the results than the choice of scenario, and that the two Miscanthus species differ in their sensitivity to the selection of model and scenario (Table 5). For the model sensitivity analysis, agreement values are all either < 1.0 (greater area of disagreement than agreement between models; five out of eight comparisons) or very slightly > 1.0 (maximum value 1.22). For the scenario sensitivity analysis, all agreement values are substantially > 1.0. Different scenarios are more similar within models than are the same scenarios between models for these species.
Potential Global Distribution
Niche and bioclimatic envelope models in general provide a coarse-scale indication of areas where the climate might be suitable for the species of interest to establish. The limitations of such models are well discussed elsewhere [34], [54], [55], [56]. Here, we note two associated factors that require consideration in interpreting our models: the species' ecology and their likely nonequilibrium distribution in the introduced ranges.
Little is known of the species' comparative ecology in the native and introduced ranges because most research to date has focused on determining optimal conditions for agricultural production (but see [57], [42], [50]). Parameterizing a model based solely on the native range distribution assumes similar biotic interactions in the introduced and native ranges. However, potential release from suppressive interactions such as competition, herbivory, parasitism, and disease (e.g., [58], [59], [60]), or a lack of mutualist organisms (e.g., [61]) could result in different realized distributions or niche shifts in the introduced compared to the native range (e.g., [62], [63], but see [64], [65]). We parameterized our model using both the native range distribution and some physiological data, which could improve model estimations compared to strictly correlative methods [66], [67]. Model validation using occurrences in the introduced range indicates that the models fit the current introduced distributions well. Determining whether an introduced species will establish beyond the modelled range requires further fine-scale assessment and/or field experiments [54], [68].
It is highly likely that M. sinensis and M. sacchariflorus are still spreading in the introduced ranges. Indeed, the species only became naturalized in North America in the mid-1900s [12], [11], and time since introduction is a well-known correlate of plant escape and abundance in the introduced range (e.g., [69], [70]). The bioclimatic models suggest that there are additional moderate to large amounts of climatically suitable area in Central America, South America, and Africa, as well as small parts of Australia and New Zealand, where these species have not yet established. The likelihood of spread beyond the modelled potential distributions is unknown, but given the increase in cultivation of these species as both horticultural and agricultural materials, which reduces dispersal limitations, and the potential for plant breeding programs to introduce new genetic material with a wider range of trait variation, proliferation within and beyond the introduced area should be monitored closely.
According to the native distributions and our models, M. sinensis has a wider range and greater climatically suitable area than M. sacchariflorus [37] (Fig. 1). Although these characteristics could make M. sinensis attractive for cultivation across a wide area, species that have wider native distributions and occur in more habitats and climate zones are likely to be more successful as invaders, regardless of other biological traits [71]. Thus, M. sinensis might also have greater potential to become a weedy or invasive plant than does M. sacchariflorus. Rather than developing one or two cultivars suitable for widespread production, a best management practice might be to develop regionally restricted cultivars to minimize widespread escape and invasion of novel species from the agriculture and horticulture trades.
Consequences of Parameter Sensitivity
The smaller native distribution and therefore narrower environmental tolerances of M. sacchariflorus likely contribute to its greater sensitivity to changes in model parameters than for M. sinensis. If the extent of the native distribution is correlated with model sensitivity across a range of species, this would have implications for modelling and interpreting models for both invasive species and rare species of conservation concern. For potential invaders, this might mean that some priority is given to species with wide distributions. For rare species, i.e., those with small native distributions that are not due to anthropogenically caused local extinctions, obtaining accurate model results could require greater accuracy in parameter estimation.
The two Miscanthus species models showed sensitivity to similar parameters, which might not be surprising, given that their distributions overlap in temperate areas. However, the main sets of parameters exhibiting sensitivity were those for which there are the least data. The most sensitive parameters were related to upper temperatures and heat tolerance, but most studies of temperature-related growth for these species have examined cold tolerance because of interest in their cultivation at northern latitudes (e.g., [39], [38]). Physiological heat thresholds remain to be explored for these species to improve confidence in lower-latitude thresholds for growth in the northern hemisphere, where they have been introduced, as well as in potential range contractions at lower latitudes under climate change.
Similarly, although weakly sensitive and thus potentially of lesser importance than the upper temperature parameters, the upper soil moistures and moisture tolerance have rarely been examined. Most studies of soil moisture effects for these species examine drought rather than saturation (e.g., [45], but see [50]). The accuracy of these parameter estimates could be important in predicting potential invasion of these species into drainage ditches, riparian areas, and wetlands.
Most stress rate parameters were relatively insensitive to changes in value. CLIMEX determines stress as the annual exponential accumulation of weekly population reduction when a stress threshold is exceeded [34]. Stress accumulation rate is difficult to estimate empirically without extensive field or laboratory trials under various stress thresholds, and the magnitude of the accumulation rate could depend on the threshold value chosen; for many species, few data of this type likely exist [72]. However, the minimal sensitivity of the stress rate parameters implies that their accuracy is less influential than that of other, more easily estimated parameters.
Two previous tests of sensitivity using CLIMEX have some similarities. A study of the invasive tropical/subtropical shrub Lantana camara identified model sensitivity to limiting low and high temperatures and limiting low soil moisture [27]. A study of the invading pathogen Phytophthora ramorum identified model sensitivity to optimal high temperature and limiting and optimal low soil moisture [26]. However, neither study tested model sensitivity to the stress rate parameters. Nevertheless, both our and their models show high sensitivity to some of the limiting upper or lower temperature and moisture parameters. Sensitivity analyses should be performed for additional species to determine whether some parameters are consistently more sensitive than others. If sensitivity to specific parameters is consistent within biomes and species types (forb, shrub, etc.), researchers could focus their efforts on measuring those specific environmental tolerances to maximize model estimation accuracy.
Future Climate Projections
Although the climatically suitable area for the two Miscanthus species is projected to decrease globally with climate change, areas of North America, eastern Europe, and Scandinavia are projected to experience some future increase in suitable climate. This could be beneficial for cultivating these species as bioenergy crops in these regions if suitable habitat is available. However, it could also place these regions at greater risk of invasion through increases in the area of suitable climate outside of cultivation. These regions, in particular, are projected to be future hotspots of invasion for 99 of the worst invaders globally [73]. Areas of range contraction for the two Miscanthus species also coincide with areas where future invasion is projected to decrease [73]. Additionally, climate niche projections do not account for the potential that rapid evolution in introduced species and their recipient communities could allow species to become invasive beyond their current tolerances (e.g., [74], [75], [76], [77]). In these Miscanthus species, rapid evolution could be aided by the introduction of new horticultural genotypes from widely separated populations in the native range [78]. These species are obligate out-crossers [79], so isolated populations composed of a single clone do not produce seed. Introduction of different genotypes could increase the probability of sexual reproduction and long-distance spread via the wind-dispersed seed [80]. Similarly, if these plants are developed for biomass production, intensive plant breeding programs will aim to improve their performance under a variety of conditions, including resistance to pests and disease, drought and heat tolerance, cold tolerance, and possibly salinity tolerance [81]. These efforts will potentially expand the plants' realized distributions as well as their invasive potential.

Table 5 (caption): The index of agreement between different models run under the same scenario, and between different scenarios run within the same model, for projected distributions of M. sacchariflorus and M. sinensis in 2050 and 2080.
Variation in the area of future projected climate was greater among climate models than among emissions scenarios. This has also been found previously, when quantified, for both plant (e.g., [82], [83], [84]) and insect (e.g., [31], [32]) species under a number of different model-scenario combinations. Coupled with the observation that some currently observed climate changes might be greater than those predicted by even the highest emissions scenarios [85], this result suggests that future bioclimatic envelope model projections should focus efforts on increasing the number of models compared using one high-emissions scenario to develop composite projections of future suitable climate areas.
|
2018-02-05T15:54:57.682Z
|
2014-06-19T00:00:00.000
|
{
"year": 2014,
"sha1": "f6e127041303d35f415d8296b27d45b2c54a5ae2",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0100032&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "f6e127041303d35f415d8296b27d45b2c54a5ae2",
"s2fieldsofstudy": [
"Biology",
"Environmental Science"
],
"extfieldsofstudy": [
"Geography",
"Medicine"
]
}
|
268464637
|
pes2o/s2orc
|
v3-fos-license
|
Incremental Value of Biventricular Strain in Patients with Severe Aortic Stenosis
(1) Background: Left ventricular global longitudinal (LVGLS) and right ventricular free wall strain (RVFWS) demonstrated separate prognostic values in patients with severe aortic stenosis (AS). However, studies evaluating the combined assessment of LVGLS and RVFWS have shown contradictory results. This study explored the prognostic value of combining LVGLS and RVFWS in a large group of severe AS patients referred for transcatheter aortic valve implantation. (2) Methods: Patients were classified into three groups: preserved (LVGLS ≥ 15% AND RVFWS > 20%), single-ventricle impaired (LVGLS < 15% OR RVFWS ≤ 20%), or biventricular-impaired strain group (LVGLS < 15% AND RVFWS ≤ 20%). The cut-off values were based on previously published data and spline analyses. The endpoint was all-cause mortality. (3) Results: Of the 712 patients included (age 80 ± 7 years, 53% men), 248 (35%) died. The single-ventricle impaired and biventricular-impaired (vs. preserved) strain groups showed significantly lower 5-year survival rates (68% and 55% vs. 77%, respectively, p < 0.001). Through multivariable analysis, single-ventricle impaired (HR 1.762; 95% CI: 1.114–2.788; p = 0.015) and biventricular-impaired strain groups (HR 1.920; 95% CI: 1.134–3.250; p = 0.015) were independently associated with all-cause mortality. These findings were confirmed with a sensitivity analysis in patients with preserved LV ejection fraction. (4) Conclusions: In patients with severe AS, biventricular strain allows better risk stratification, even if LV ejection fraction is preserved.
Introduction
Aortic valve stenosis (AS) is a common valvular heart disease characterized by chronic left ventricular (LV) pressure overload, which leads to LV hypertrophy and remodeling, and ultimately results in myocardial fibrosis and dysfunction [1][2][3]. The occurrence of LV dysfunction in patients with AS has been associated with a significantly worse prognosis, and the current guidelines therefore recommend aortic valve replacement in the case of reduced LV ejection fraction (LVEF < 50%), even in asymptomatic patients [4]. However, several studies have demonstrated that in AS patients, LV global longitudinal strain (LV GLS) is also a more sensitive indicator than LVEF to detect subclinical LV dysfunction and, when impaired, is associated with reduced survival [5][6][7][8][9]. Additionally, when the hemodynamic effects of chronic pressure overload extend to the right ventricle (RV), RV remodeling and dysfunction may occur, which have also been shown to be associated with worse outcomes after aortic valve intervention [10,11]. The role of conventional echocardiographic parameters of RV function for risk stratification in AS has been debated, while RV free wall strain (RV FWS) has consistently been shown to be more sensitive in the earlier detection of RV dysfunction and to be associated with higher mortality rates [12][13][14][15][16].

A combined assessment of LV and RV strain in patients with severe AS has been performed in only a few studies, which have shown contradictory results, with LV GLS and RV FWS not always independently associated with outcomes [17][18][19].

The aim of this study was therefore to assess the prognostic value of both LV GLS and RV FWS in a large cohort of patients with severe AS referred for transcatheter aortic valve implantation (TAVI).
Patient Population and Data Collection
Patients referred for TAVI between November 2007 and December 2019 were included from an ongoing registry of patients with severe AS at the Leiden University Medical Centre, the Netherlands. Severe AS was defined as an aortic valve area < 1 cm2 (or indexed aortic valve area < 0.6 cm2/m2) and/or a mean aortic valve gradient ≥ 40 mmHg, and/or a peak aortic jet velocity ≥ 4 m/s [20,21]. Patients with previous aortic valve surgery or incomplete clinical and/or echocardiographic data were excluded. Baseline demographic and clinical variables, including cardiovascular risk factors, comorbidities, New York Heart Association (NYHA) functional class, and medications, were collected from the medical records. Chronic kidney disease was defined as an estimated glomerular filtration rate < 60 mL/min/m2. The Institutional Review Board approved this retrospective analysis and waived the need for written informed consent.
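The "and/or" inclusion logic above can be written down directly; a minimal R sketch, with hypothetical variable names, is:

# Severe AS if ANY criterion is met (aortic valve area in cm2, indexed area in
# cm2/m2, mean gradient in mmHg, peak jet velocity in m/s).
is_severe_as <- function(ava, ava_index, mean_gradient, peak_velocity) {
  (ava < 1.0) | (ava_index < 0.6) | (mean_gradient >= 40) | (peak_velocity >= 4)
}

is_severe_as(ava = 0.8, ava_index = 0.45, mean_gradient = 42, peak_velocity = 4.2)
# TRUE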
From the parasternal long-axis view, LV dimensions were assessed, and LV mass was calculated and indexed for body surface area (estimated using the Du Bois formula). LV volumes were obtained from the apical two- and four-chamber views and indexed for body surface area, and LVEF was calculated using the biplane Simpson's method [20]. The left atrial end-systolic volume was obtained from the apical two- and four-chamber views using Simpson's method of disks and was indexed for body surface area [20].
LV filling pressures were estimated using the E/e' ratio, with e' representing the average value of both septal and lateral sides obtained from tissue Doppler imaging of the mitral annulus on the apical four-chamber view [22].
Pulmonary artery systolic pressure was calculated according to the Bernoulli equation, derived from the tricuspid regurgitation jet peak velocity plus the estimated right atrial pressure, which was derived from inferior vena cava diameter and collapsibility. To assess right ventricular systolic function, M-mode was used to measure tricuspid annular plane systolic excursion [20].
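The Bernoulli estimate referred to here is the standard simplified form: the transtricuspid gradient is four times the squared tricuspid regurgitation jet peak velocity, to which the estimated right atrial pressure is added. A one-line R sketch with hypothetical example values:

# PASP ~ 4 * v^2 + RAP (v in m/s, pressures in mmHg), simplified Bernoulli
pasp <- function(tr_vmax, rap) 4 * tr_vmax^2 + rap
pasp(tr_vmax = 3.0, rap = 8)  # 4 * 9 + 8 = 44 mmHg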
Finally, aortic, mitral, and tricuspid regurgitation severity were graded as none/mild, moderate, or severe according to current recommendations, using an integrative approach that includes qualitative, semi-quantitative, and quantitative parameters [23]. In patients with atrial fibrillation, the measurements were averaged over three consecutive cardiac cycles [23].
Speckle-Tracking Echocardiographic Examination
The LV GLS and RV FWS were measured offline by two-dimensional speckle-tracking echocardiography using dedicated LV and RV software (EchoPac V.204, GE-Vingmed Ultrasound, Horten, Norway).
The LV GLS was calculated using images from the apical four-, three-, and two-chamber views zoomed on the LV at a frame rate of ≥50 frames/s. The LV endocardial border was automatically traced (with manual corrections if necessary) and tracked by the software through the cardiac cycle. The LV GLS was derived by averaging all segmental peak strain values from all apical views and was expressed as absolute values [20].

RV FWS was calculated using images from the RV-focused apical four-chamber view at a frame rate of ≥50 frames/s. The RV endocardial border was traced using the automatic RV wall-detection algorithm. Tracing (with manual corrections if necessary) and tracking quality during the cardiac cycle were verified. The RV FWS was derived by averaging the three segments of the RV free wall and expressed as absolute values [24,25].
Follow Up and Outcome
The primary outcome of this study was all-cause mortality. Data on all-cause mortality were obtained from the departmental cardiology information system (EPD-Vision 12.9.9.3), which is directly linked to the governmental death registry database and is therefore complete for all patients.
Statistical Analysis
Categorical data are presented as absolute numbers and percentages. Continuous data are presented as mean ± standard deviation (SD) if normally distributed or as median (inter-quartile range, IQR) if not normally distributed. An analysis of variance with Bonferroni's post hoc analysis or the Kruskal-Wallis test, for normally and non-normally distributed variables respectively, was used to compare continuous variables between groups. The Pearson chi-square test was used to compare categorical variables.
The thresholds for dichotomizing LV GLS and RV FWS were based on previously published data, with cut-off values of LV GLS < 15% and RV FWS ≤ 20% to define impaired LV and RV systolic function, respectively [5,25]. Values above these cut-offs were defined as preserved chamber function. In addition, the representativeness of these cut-off values in the current study population was tested with a fitted spline curve analysis: the estimated hazard ratio (HR) for all-cause mortality was computed across the range of LV GLS and RV FWS values, and the values associated with an increased risk of all-cause mortality (i.e., predicted HR > 1) were used to define impaired LV and RV function, respectively.
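The paper does not report the exact spline specification, but the analysis can be sketched with the R survival package's penalized spline: fit the spline term in a Cox model, extract the fitted log-hazard curve, and read off where the hazard ratio crosses 1. The simulated data frame, the df = 4 choice, and the reference point used to anchor HR = 1 are all assumptions for illustration.

library(survival)

# Hypothetical stand-in data: follow-up (days), death indicator, LV GLS (%)
set.seed(1)
d <- data.frame(time   = rexp(712, 1 / 1500),
                status = rbinom(712, 1, 0.35),
                lvgls  = round(rnorm(712, 14, 3), 1))

fit <- coxph(Surv(time, status) ~ pspline(lvgls, df = 4), data = d)

# Fitted spline term on the log-hazard scale, extracted without plotting
tp  <- termplot(fit, se = TRUE, plot = FALSE)$lvgls
ref <- tp$y[which.min(abs(tp$x - 15))]  # anchor HR = 1 near the published cut-off
plot(tp$x, exp(tp$y - ref), type = "l", log = "y",
     xlab = "LV GLS (absolute %)", ylab = "Hazard ratio")
abline(h = 1, lty = 2)  # values with HR > 1 define "impaired"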
The patients were divided into one of the following three groups, according to the presence of impaired LV GLS (cut-off value < 15%) or impaired RV FWS (cut-off value ≤ 20%): (1) preserved strain group: patients with preserved LV GLS and preserved RV FWS; (2) single-ventricle impaired strain group: patients with either impaired LV GLS or impaired RV FWS; (3) biventricular-impaired strain group: patients with both impaired LV GLS and impaired RV FWS.
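This three-way classification is a direct mapping from the two cut-offs; counting the number of impaired ventricles gives the group. A small R sketch with hypothetical strain values:

# Absolute strain values (%); in the study these come from speckle tracking
d <- data.frame(lvgls = c(18, 13, 10, 16), rvfws = c(28, 24, 15, 18))

d$strain_group <- with(d, factor(
  (lvgls < 15) + (rvfws <= 20),  # number of impaired ventricles: 0, 1, or 2
  levels = 0:2,
  labels = c("preserved", "single-ventricle impaired", "biventricular impaired")))
table(d$strain_group)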
Cumulative event-free survival was estimated using the Kaplan-Meier survival analysis with log-rank test, stratified by the three strain-based groups.
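A minimal sketch of this step in R's survival package, assuming a hypothetical data frame with follow-up time in days, an event indicator, and the strain group from above:

library(survival)

set.seed(2)
d <- data.frame(time   = rexp(712, 1 / 1500),   # hypothetical follow-up, days
                status = rbinom(712, 1, 0.35),  # 1 = death
                strain_group = sample(c("preserved", "single", "biventricular"),
                                      712, replace = TRUE))

km <- survfit(Surv(time, status) ~ strain_group, data = d)
summary(km, times = c(3, 5) * 365.25)                  # 3- and 5-year survival
survdiff(Surv(time, status) ~ strain_group, data = d)  # log-rank test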
Cox proportional hazards regression analysis was performed to investigate the association of clinical and echocardiographic parameters with all-cause mortality. Variables with p < 0.05 in the univariable Cox regression analysis were considered statistically significant and were included in the multivariable Cox regression analysis. The baseline model included clinical and conventional echocardiographic parameters. Additionally, the strain-based groups were added to the baseline model and the association with outcomes was evaluated.

For uni- and multivariable analyses, the HR and 95% confidence interval (CI) are reported. Collinearity between all pairs of continuous variables included in the multivariable analysis was tested by correlation analysis (correlation coefficient < 0.7).

To investigate the incremental value of the strain-based groups over the baseline model in association with outcome, a likelihood ratio test was performed and the change in global χ2 values was calculated and reported.
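This incremental-value test amounts to comparing nested Cox models with and without the strain-based groups; the change in global chi-square is the likelihood ratio statistic. A sketch with a reduced, hypothetical set of baseline covariates:

library(survival)

set.seed(3)
n <- 712
d <- data.frame(time = rexp(n, 1 / 1500), status = rbinom(n, 1, 0.35),
                age = rnorm(n, 80, 7), male = rbinom(n, 1, 0.53),
                smoking = rbinom(n, 1, 0.2), ckd = rbinom(n, 1, 0.4),
                strain_group = factor(sample(0:2, n, replace = TRUE),
                                      labels = c("preserved", "single",
                                                 "biventricular")))

base <- coxph(Surv(time, status) ~ age + male + smoking + ckd, data = d)
full <- update(base, . ~ . + strain_group)

anova(base, full)                           # LRT: change in global chi-square
exp(cbind(HR = coef(full), confint(full)))  # hazard ratios with 95% CI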
Additionally, a sensitivity analysis was performed in patients with LVEF ≥ 50%. Similar to the previously described approach, a Kaplan-Meier survival analysis stratified by the strain-based groups was performed, and the estimated five-year survival was reported. Furthermore, the association with all-cause mortality was tested by Cox regression analysis. The multivariable analysis included statistically significant variables (p < 0.05) from the univariable analysis.

Twenty patients were randomly selected for the evaluation of the intra-observer and inter-observer variability of LV GLS and RV FWS. Excellent agreement was defined by an intra-class correlation coefficient > 0.90, whereas good agreement was defined by a value between 0.75 and 0.90. All hypothesis tests had a two-sided significance level of <0.05. Statistical analysis was performed using SPSS for Windows, version 29.0 (IBM, Armonk, NY, USA) and R version 4.2.1 (R Foundation for Statistical Computing, Vienna, Austria).
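The reproducibility step can be sketched as a two-way intraclass correlation, for example with the irr package in R; the 20 paired readings below are simulated stand-ins for the repeated strain measurements:

library(irr)

set.seed(4)
first_read  <- rnorm(20, mean = 15, sd = 3)      # hypothetical LV GLS readings
second_read <- first_read + rnorm(20, sd = 0.8)  # re-read with small error
icc(cbind(first_read, second_read),
    model = "twoway", type = "agreement", unit = "single")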
Patient Population
A total of 712 patients (mean age 80 ± 7 years, 53% men) were included from a cohort of 1064 patients who underwent TAVI for severe AS at the Leiden University Medical Center, the Netherlands (Figure 1). The majority of patients had several cardiovascular risk factors, including arterial hypertension (74%) and dyslipidemia (63%). More than half of the patients (59%) had coronary artery disease, of whom 18% had previous coronary artery bypass graft surgery (Table 1). Table 2 displays the baseline echocardiographic characteristics of the total population, including valvular and ventricular abnormalities.
A spline curve was fitted to evaluate the association of LV GLS and RV FWS with all-cause mortality. With decreasing values of LV GLS and RV FWS, the HR for the primary endpoint increased. The HR exceeded the threshold of >1 for LV GLS < 15% and RV FWS ≤ 20% (Figure 2). These thresholds were concordant with previously published data and were used to stratify the population into the three strain-based groups [5,25].
The correlation coefficient between LV GLS and RV FWS was 0.38. There were 191 patients (27%) in the preserved strain group, 314 patients (44%) in the single-ventricle impaired group, and 207 patients (29%) in the biventricular-impaired strain group. Of the patients in the single-ventricle impaired strain group, 81% had impaired LV GLS only, while 19% had impaired RV FWS only. Figure 3 demonstrates an example of a patient in the biventricular-impaired strain group, with both reduced LV GLS and RV FWS, who died during follow-up.
Regarding the clinical characteristics (Table 1), significant differences were observed between groups for sex (with males more represented in the biventricular-impaired strain group), body surface area, coronary artery disease and coronary artery bypass graft surgery, renal function, atrial fibrillation, severe symptoms (i.e., NYHA class III-IV), and use of diuretics.

Regarding the echocardiographic characteristics (Table 2), patients in the single-ventricle impaired strain and biventricular-impaired strain groups had higher LV volumes and more hypertrophic remodeling as compared to the preserved strain group. Approximately one in four patients (27%) had LVEF < 50% in the single-ventricle impaired strain group, while 64% of patients in the biventricular-impaired strain group had LVEF < 50%. According to the group definition, LV GLS was progressively lower in the single-ventricle impaired strain and biventricular-impaired strain groups (as compared to the preserved strain group: 13 ± 3% and 10 ± 3% vs. 18 ± 2%, p < 0.001). Similarly, for RV FWS, progressively lower values were observed in the single-ventricle impaired strain and biventricular-impaired strain groups (as compared to the preserved strain group: 24 ± 5% and 15 ± 4% vs. 28 ± 5%, p < 0.001, respectively).

The parameters representing LV diastolic dysfunction were affected in all groups. However, values for left atrial volume index, filling pressures, and systolic pulmonary arterial pressure were significantly higher in the biventricular-impaired strain group as compared to the preserved strain group.

Finally, aortic valve area did not differ between the groups, and concomitant severe aortic, mitral, or tricuspid regurgitation was observed in only a few patients (2%, 6%, and 5%, respectively).
Survival Analysis According to Ventricular Functions
The Kaplan-Meier survival analysis showed that patients in the single-ventricle impaired strain and biventricular-impaired strain groups had significantly lower estimated cumulative survival rates at three- and five-year follow-up, as compared to the preserved strain group (79% and 68% for the single-ventricle impaired strain group; 73% and 55% for the biventricular-impaired strain group; vs. 87% and 77% for the preserved strain group, respectively, p < 0.001, Figure 4). In the single-ventricle impaired strain group, no difference in survival was noted between patients with impaired LV GLS and preserved RV FWS vs. patients with preserved LV GLS and impaired RV FWS (Supplementary Figure S1).
At the univariable Cox regression analysis (Table 3), several clinical characteristics were significantly associated with all-cause mortality. Among the echocardiographic characteristics, a significant association with the primary endpoint (p < 0.05) was observed for LVEF < 50%, severe mitral regurgitation, severe tricuspid regurgitation, and the strain-based groups (Table 3).

For the multivariable analysis, a baseline model was built with the following clinical and echocardiographic variables, significant on univariable regression analysis: age, sex, smoking, diabetes mellitus, coronary artery disease, peripheral artery disease, chronic kidney disease, NYHA functional class III-IV, LVEF < 50%, severe mitral regurgitation, and severe tricuspid regurgitation. From this model, only male sex, smoking, chronic kidney disease, and severe tricuspid regurgitation remained independently associated with outcome. After adding the strain-based groups to this baseline model, an independent association between the strain-based groups and all-cause mortality was observed, together with male sex, smoking, and chronic kidney disease. In particular, there was an increasing HR for the single-ventricle impaired strain group (HR: 1.716; 95% CI: 1.084-2.117; p = 0.021) and the biventricular-impaired strain group (HR: 1.902; 95% CI: 1.116-3.241; p = 0.018) as compared to the preserved strain group (reference group, overall p-value = 0.040) (Table 4).

Additionally, a likelihood ratio test was performed to determine the incremental value of the strain-based groups over the baseline model. The addition of the strain-based groups to the baseline model resulted in a significant increase in the χ2 value (χ2 difference = 7, p = 0.030), demonstrating the incremental value of this biventricular assessment to classify patients with severe AS undergoing TAVI (Figure 5).
Sensitivity Analysis in Preserved Left Ventricular Ejection Fraction
Further sensitivity analysis was performed in patients with preserved LVEF (i.e., LVEF ≥ 50%). Of the 494 patients, 155 (31%) died during a median follow-up of 52 months (IQR: 34-73 months). The Kaplan-Meier survival analysis showed a significant difference in estimated five-year survival rates between the single-ventricle impaired strain and biventricular-impaired strain groups as compared to the preserved strain group (67% and 58% vs. 79%, overall log-rank test p = 0.009) (Figure 6). On the uni- and multivariable Cox regression analysis, the single-ventricle impaired strain and biventricular-impaired strain groups (with the preserved strain group as reference) remained significantly and independently associated with the primary endpoint (Table 5).
Discussion
In this large cohort of patients with severe AS referred for TAVI, the prognostic importance of biventricular strain assessment was evaluated. The main findings are as follows: (1) both LV and RV strain measurements were superior to conventional echocardiographic measurements and were independently associated with all-cause mortality, (2) mortality risk increased progressively when the strain of one or both ventricular (LV/RV) chambers was impaired, and (3) similar results were observed in patients with preserved LVEF.
LV GLS and RV FWS as Markers of Subclinical Dysfunction and Prognosis in Patients with Severe AS
In severe AS, increased afterload induces concentric LV remodeling in order to compensate for the increased LV wall stress. However, over time, especially when progressive myocardial fibrosis occurs, this adaptive mechanism may fail and lead to LV dysfunction. LV function deterioration typically first affects LV longitudinal contraction, reflected by an impairment of the LV longitudinal strain. In AS patients, LV GLS has been associated with the severity of myocardial fibrosis and has been shown to be a more sensitive marker for LV dysfunction than LVEF, since its impairment precedes the reduction in LVEF [26,27].
A recent meta-analysis showed that an impaired baseline LV GLS was associated with a significantly higher post-TAVI risk for all-cause mortality and with an incremental value over the conventional echocardiographic parameters [5]. Even in asymptomatic or only mildly symptomatic patients with preserved LVEF and severe AS, LV GLS showed prognostic importance for risk stratification [28,29].
The hemodynamic effects of chronic pressure overload due to AS are not limited to the LV. Post-capillary pulmonary hypertension due to elevated LV filling pressures, and possibly concomitant mitral regurgitation, can lead to secondary tricuspid regurgitation, RV dilatation, and eventually RV dysfunction [10,12,30]. Because of the complex RV geometry and physiology, conventional echocardiographic parameters are limited in the assessment of RV remodeling and function [16]. Medvedofsky et al. showed that in patients with severe AS, the degree of RV function as assessed by RV FWS, rather than conventional RV function parameters, was a major determinant of 1-year mortality post TAVI [13].
Incremental Value of Biventricular Strain for Risk Stratification in Patients with Severe AS
Few studies have evaluated the incremental prognostic value of a biventricular strain assessment in patients with severe AS [17][18][19]. In a cohort of 128 patients with severe low-flow, low-gradient AS, and after the exclusion of more than mild left-sided valve disease, Dahou et al. demonstrated that both LV GLS and RV FWS are independent predictors of mortality. Furthermore, in this high-risk subgroup of low-flow, low-gradient AS patients, both LV GLS and RV FWS showed incremental prognostic value over known demographic and echocardiographic predictors of outcomes [18]. These findings were confirmed and extended in the present study with a larger population that, importantly, included the complete spectrum of AS subtypes.
Similarly, Ye et al. implemented a multi-chamber, strain-based staging model including left atrial, LV, and RV strain in patients with more than moderate AS in whom aortic intervention (surgical or transcatheter) was performed, with 56% of patients having tricuspid and 85% of patients having bicuspid AS. Multi-chamber, strain-based staging was independently associated with all-cause mortality with increasing risk per stage, and provided additional value in risk stratification compared to the conventional echocardiographic staging approach [1,6,19,31]. The present study confirmed the incremental value of biventricular strain and its association with all-cause mortality in a homogeneous population with severe AS who underwent TAVI.
Conversely, in a selected cohort of 100 patients with severe AS referred for TAVI, only RV FWS but not LV GLS was associated with cardiovascular mortality [17]. In comparison with the present study, those patients probably presented with a more advanced stage of AS disease since they had significantly lower values of LV GLS and RV FWS (11% and 18% vs. 13% and 22%, respectively) [17].
Of interest, in the current study, a group of patients with preserved LV GLS but impaired RV FWS was identified, possibly due to underlying primary RV pathology or pulmonary vascular disease. This subgroup was characterized by higher mortality rates as compared to the preserved strain group, but lower mortality rates as compared to patients with biventricular impairment.
Of note, biventricular strain measurement may be influenced by sex differences, as men and women have shown different chamber remodeling in response to aortic stenosis [32,33]. In the current study, (male) sex was significantly and independently associated with outcomes together with the strain-based groups. Further research could explore the potential implications of sex differences in biventricular strain.
Clinical Implications
The current study shows that assessing both LV and RV strain may help detect subclinical myocardial dysfunction and may improve risk stratification in patients with severe AS referred for TAVI. Since current guidelines recommend interventions only in symptomatic patients or in asymptomatic patients with reduced LVEF, assessment of biventricular strain could be considered to improve selection of patients at higher risk for adverse events, who may require close follow-up and may benefit from earlier valve intervention.
Limitations
This study was limited by its retrospective design, and the findings need to be confirmed in a prospective, multi-center setting. Patients with incomplete echocardiographic biventricular strain data were excluded, which may have created selection bias. However, as shown in the Supplementary Table S1, patients included in the study had similar clinical and echocardiographic characteristics as compared to the ones excluded. Also, one echocardiographic vendor was used for strain assessment, and the current cut-off values applied to define strain impairment might not be applicable to other echocardiography vendors.
Conclusions
In patients with severe AS undergoing TAVI, biventricular strain impairment is associated with an increased risk of all-cause mortality post TAVI, and its assessment may improve risk stratification, particularly in patients with preserved LVEF.
Supplementary Materials:
The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/jcdd11030090/s1. Figure S1: Kaplan-Meier estimated survival curves for the single-ventricle impaired strain group according to LV GLS or RV FWS impairment. Table S1: Baseline clinical and echocardiographic characteristics of the included and excluded patients.
Figure 1. Flowchart of the study population.
Figure 2. Spline curves for all-cause mortality according to LV GLS (A) and RV FWS (B). The spline curves describe the HR change for the primary endpoint with 95% CI (shaded blue areas) across the range of values of LV GLS (A) and RV FWS (B). The HR starts to increase and exceeds an HR of one for LV GLS < 15% (A) and for RV FWS ≤ 20% (B). LV GLS: left ventricular global longitudinal strain; RV FWS: right ventricular free wall strain; HR: hazard ratio; CI: confidence interval.
Figure 3. Example of LV GLS and RV FWS measurement in a patient with severe AS who died during follow-up. (A) Echocardiographic images of LV GLS from four-, two-, and three-chamber views with bull's eye plot. The bull's eye plot demonstrates an impairment of LV GLS (11.7%), particularly of the basal LV segments. (B) Echocardiographic images of RV FWS from the RV-focused apical four-chamber view. The absolute value of 13.6% demonstrates an impaired RV FWS. LV: left ventricle; LV GLS: LV global longitudinal strain; RV: right ventricle; RV FWS: RV free wall strain; AS: aortic stenosis. Strain values are expressed as absolute values. 4-ch: 4-chamber view; 2-ch: 2-chamber view; APLAX: apical long axis view. TAPSE: tricuspid annular plane systolic excursion; ANT: anterior; ANT_SEPT: antero-septal; INF: inferior; LAT: lateral; POST: posterior; SEPT: septal.
Figure 5. Likelihood ratio test for the incremental value of adding the strain-based groups to the baseline model to evaluate the association with all-cause mortality. The baseline model included: age, sex, smoking, diabetes mellitus, coronary artery disease, peripheral artery disease, chronic kidney disease, history of atrial fibrillation, New York Heart Association functional class III or IV, left ventricular ejection fraction < 50%, severe mitral regurgitation, and severe tricuspid regurgitation.
Author Contributions:
Conceptualization, C.S., X.G., T.D.B., J.J.B. and N.A.M.; methodology, C.S. and X.G.; software, C.S.; validation, X.G., M.C.M., S.C.B., K.H. and R.M.; formal analysis, C.S. and X.G.; investigation, C.S. and X.G.; resources, C.S., M.C.M., S.C.B., K.H. and R.M.; data curation, C.S., M.C.M., S.C.B., K.H. and R.M.; writing-original draft preparation, C.S. and N.A.M.; writing-review and editing, X.G., M.C.M., S.C.B., K.H., R.M., T.D.B., J.J.B. and N.A.M.; visualization, C.S., T.D.B., J.J.B. and N.A.M.; supervision, T.D.B., J.J.B. and N.A.M.; project administration, C.S.; resources and data curation, F.v.d.K.; funding acquisition, N.A.M. All authors have read and agreed to the published version of the manuscript.

Funding: This research received no external funding.

Institutional Review Board Statement: This study was conducted according to the guidelines of the Declaration of Helsinki and approved by the Institutional Review Board of Leiden University Medical Center (date of approval: 18 January 2024).

Informed Consent Statement: Patient consent was waived due to the retrospective character of the study, the large study population, and the estimated mortality rate exceeding 30% at the time of ethical approval.

Data Availability Statement: Data are available upon reasonable request.

Conflicts of Interest: The Department of Cardiology, Heart Lung Centre, Leiden University Medical Centre has received unrestricted research grants from Abbott Vascular, Alnylam, Bayer, Biotronik, Bioventrix, Boston Scientific, Edwards Lifesciences, GE Healthcare, Medtronic, and Novartis. J.J.B. received speaker fees from Abbott Vascular and Edwards Lifesciences. N.A.M. received speaker fees from Abbott Vascular, Philips Ultrasound and GE Healthcare. The remaining authors have nothing to disclose.
Table 1. Baseline demographic characteristics of the total study population and per strain-based group.
Table 2. Baseline echocardiographic characteristics of the total study population and per strain-based group.
Table 3. Univariable Cox proportional hazard analysis for all-cause mortality.
Abbreviations: HR: hazard ratio; CI: confidence interval; NYHA: New York Heart Association; LVEF: left ventricular ejection fraction; TAPSE: tricuspid annular plane systolic excursion; PASP: pulmonary artery systolic pressure. Age is expressed per 5-year increase. Chronic kidney disease was defined as an estimated glomerular filtration rate < 60 mL/min/m2.
Table 4. Multivariable Cox proportional hazard analysis for all-cause mortality.
Table 5. Cox proportional hazard analysis for all-cause mortality in patients with LVEF ≥ 50%.
|
2024-03-17T16:14:00.176Z
|
2024-03-01T00:00:00.000
|
{
"year": 2024,
"sha1": "8b13911f563010ab5ef5a980a73e3b9d101c676c",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2308-3425/11/3/90/pdf?version=1710313007",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "a7c85247ca899c13eca457ee799aa9e7e1c4ee6b",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
96443822
|
pes2o/s2orc
|
v3-fos-license
|
Health Disparities in Nonreligious and Religious Older Adults in the United States : A Descriptive Epidemiology of 16 Common Chronic Conditions
In this paper, we compute prevalence estimates for nonreligious and religious people in relation to 16 common chronic conditions in contemporary American society. Using survey data from the National Social Life, Health, and Aging Project, we speak to current debates concerning potential relationships between religion, nonreligion and health in older adult populations with two key findings. First, we show no consistent relationships between religion or nonreligion and chronic condition prevalence. Second, we demonstrate race, sex, and class variations within nonreligious people’s health outcomes consistent with patterns noted in previous analyses of religious populations. In conclusion, we draw out implications for future research concerning the importance of (1) using caution when interpreting correlations between religion (i.e., a privileged social location) and health; (2) developing intersectional approaches to religion, nonreligion, and health; and (3) building a diverse base of scholarship concerning nonreligion and health.
As studies of nonreligious populations and experiences have expanded in recent years, an intriguing debate has emerged concerning health. Many studies published in the last few decades found negative correlations between lower levels of religiosity and specific health-related outcomes, and used these correlations to argue that religion had positive benefits for overall health and well-being (see, e.g., Brennan 2004; Koenig et al. 2001; Vance et al. 2008). At the same time, other scholars have pointed out that these studies often have significant methodological flaws (i.e., asserting causal possibilities from correlations that could be explained in many ways), sampling limitations (i.e., many of these studies rely on entirely religious-identified samples of people and thus cannot compare to nonreligion or establish any concrete benefit from religion itself), and pro-religious bias (i.e., definitions of health that assume religion as a positive force from the outset) embedded within them (see, e.g., Hwang et al. 2009; Levin 1994; Sloan & Bagiella 2002). These review articles suggest any relationship between religion and health is likely tenuous at most, and that identification of consistent positive relationships between the two may stem partly from confirmation bias related to attitudes about religion itself.

In fact, recent studies beginning to actually compare religious and nonreligious respondents support these articles (see, e.g., Cragun et al. 2015; Cragun et al. 2016).

Reviewing existing literature claiming associations between religion and health, for example, Cragun and associates (2015) note that most studies only compare respondents who are more or less religious, and generally ascribe religious explanatory power to outcomes that could be accomplished via secular organizations, relationships, and resources just as easily (see also Hwang et al. 2009). Further, comparing religious and nonreligious respondents on multiple mental, social, and physical health measures in two samples, Cragun and associates (2016) found that religion had little to no effect on health. Rather than demonstrating relationships between religion and health, such studies suggest that existing studies are granting religion explanatory power it does not empirically deserve by conflating it with social activities and resources - like social support - that may be gained from secular and religious experiences, organizations, and networks (see also Galen & Kloet 2011).
As Hwang and associates (2009) suggest, one reason previous researchers have granted religion false explanatory powers may result from the use of correlations to assert broader and causal relationships (see also Cragun et al. 2016). While this issue is generally only mentioned in passing or left unsaid in studies claiming relationships between religion and health (likely due to the amount of faith placed in correlation-based research in the social sciences at present), a casual glance at established epidemiological practice reveals the importance of first establishing the prevalence of a given health issue before turning to correlational methods to tease out nuances within such phenomena (Gordis 2004). The prevalence of a health condition is the proportion of a population currently living with that condition (Gordis 2004). Without knowing the prevalence of a given health outcome within and between specific populations, researchers sometimes extrapolate the meaning of a given correlation based on their own assumptions and experiences (Link & Phelan 2010). Because literature on religion and health does not yet offer prevalence estimates for diverse conditions among religious and nonreligious populations simultaneously, asserting that religion does or does not influence health may be premature.
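As a concrete illustration of this definition, prevalence is simply the mean of a 0/1 condition indicator, overall and per group. A toy R example with made-up data:

has_condition <- c(1, 0, 0, 1, 1, 0, 0, 0, 0, 1)  # 1 = diagnosed with condition
mean(has_condition)  # prevalence = 4 / 10 = 0.40

# Group-specific prevalence, e.g., by religious identification
group <- c("none", "religious")[c(1, 1, 2, 2, 2, 1, 2, 1, 2, 2)]
tapply(has_condition, group, mean)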
For readers less familiar with epidemiological methods and standards, an example of the pitfalls of correlational interpretation devoid of prevalence estimates may be useful. A classic example concerns correlations between lower educational attainment and poorer health outcomes (Link & Phelan 2010). For years, scholars relied on this correlation to promote health education programs for populations with less education, but these efforts rarely resulted in any concrete benefits. In fact, practitioners learned that many people with lower education already knew how to live healthy lifestyles. When researchers and practitioners turned to prevalence estimates of varied health conditions among those with less education, however, they realized quickly that such groups typically had less access to social resources (i.e., stoves, parks, transportation, grocery stores, disposable income, and health care access to name a few) that rendered their knowledge or ignorance about how to sustain good health irrelevant. In fact, prevalence estimates revealed that even those with more education who lacked such social resources had poorer health outcomes. Taken out of context, the correlation between education and health lost meaning, instead masking the more likely mechanisms producing lower health status among less educated people.
Findings from recent studies comparing religious and nonreligious respondents suggest a similar misunderstanding of correlational relationships may have been at work in past years (Cragun et al. 2016). When studies demonstrated correlations between religious service attendance (the most common measure used in such studies, see Musick et al. 2004 for review) and health outcomes, they missed the context that would suggest what these relationships actually meant (i.e., prevalence estimates), and lacked the comparisons to nonreligious people that could have shown whether or not religion itself actually mattered (Cragun et al. 2015). At the same time, religious service attendance could - as many studies have argued (Koenig 2012) and more recent studies contradict (Cragun et al. 2016) - facilitate better health outcomes. However, it is equally likely - given the structure of healthcare in the United States (Sloan 2006) - that religious attendance might simply be one of the scant few health-promoting social resources in a given community. In that case, a secular community center could provide the exact same results (Cragun et al. 2016). Similarly, it could be that people with more social resources - who are also already more likely to be healthier (Link & Phelan 2010) - are simply more able to attend services (religious and/or secular) as a result of these resources. Put simply, the correlations between attendance and health do not suggest religion itself is either beneficial or harmful to health overall, but rather that there may be some relationship between attending community events (religious or secular) and health outcomes. The potential benefits may or may not stem from actually being religious or from anything related explicitly to religion (Cragun et al. 2016).
Considering debates within the field about whether or not religion matters to health, simultaneously generating prevalence estimates for a wide variety of health outcomes among religious and nonreligious people represents a key step forward (see also Cragun et al. 2016). For example, if such estimates reveal that the prevalence of negative health outcomes is higher or lower for religious or nonreligious people, we can describe a relationship in detail using raw data. We can then use inferential mathematical models with multiple covariates accounting for a variety of social and contextual factors to refine and elaborate our understanding of identified relationships. By the same token, if prevalence estimates from raw data reveal no apparent relationships between religion and health, we should instead investigate the possible influence of other social factors on health outcomes of interest in religious and nonreligious populations. For example, well-documented social determinants of health (see Phelan, Link & Tehranifar 2010) such as race, sex, class, marginalization, resources, and healthcare access may be creating the correlations we find when we examine religious and nonreligious variables.
In short, descriptive epidemiological analysis can help to mitigate potential pitfalls in secondary data and correlational research on religion and health by exploring nuances of identity and well-being in depth. Two specific advantages emerge: (1) a means of meaningfully using what may be very small group-specific samples for populations that are glossed over or explicitly marginalized in research, and (2) comparing and contrasting results with findings from pooled inferential analysis. A direct corollary to both of these benefits is that detailed descriptive epidemiological investigation can indicate specific gaps in data collection and management that can subsequently be addressed through more sophisticated and thorough primary data gathering and coding in the future.
By establishing initial prevalence estimates and using them to contextualize complementary inferential analyses given mathematically adequate sample sizes, we can thus begin to, as suggested by recent studies (Cragun et al. 2015; Cragun et al. 2016; Hwang et al. 2009), illuminate unique contributions to health status from religion and nonreligion versus other social factors with demonstrated influences on health. Such efforts follow the insights of intersectional scholars by critically examining the concrete experiences of people in varied social locations instead of expecting specific relationships a priori or simply repeating dominant cultural discourses (i.e., religion is good, see Barton 2012) that say more about societal power relations than actual health experience (see Grollman 2012; Schultz & Mullins 2006 for examples). This process should begin with assessment of baseline variation in health status between and within populations, and proceed to exploration of the ways in which health may be influenced by intersecting social locations.
In this research report, we begin this process of exploring and mapping the prevalence of common health outcomes among religious and nonreligious populations. Rather than assuming a relationship a priori, we examine a series of chronic health conditions among both religious and nonreligious respondents from a large survey dataset to gain a picture of variation (or lack thereof) in such conditions between these populations. Following intersectional recognitions that health outcomes often vary along lines of social privilege and oppression (see Nowakowski & Sumerau 2015; Grollman 2012), we then compute the prevalence of such conditions among nonreligious respondents occupying different race, class, and sex locations in contemporary American society. In so doing, we seek to (1) provide health outcome prevalence estimates for religious and nonreligious people to aid in evaluating and contextualizing prior research and (2) offer a framework for comparative analyses and further exploration using a wide variety of data sources.
Research Questions
Rather than assuming any relationship between religion, nonreligion, and health outcomes, we began our study with a foundational epidemiological question (Gordis 2004): how are common chronic conditions distributed among nonreligious people and religious people? To further map the prevalence of chronic conditions within and between such populations, we then asked a core question in intersectional studies of health (Grollman 2012): how do chronic physical condition frequencies vary in relation to intersecting social locations among nonreligious people? We defined "chronic conditions" as any diagnosed health condition capable of producing consistent and recurrent symptoms. We focus on these conditions because they generally influence large portions of the life course, and thus allow health researchers and practitioners to gauge potential influences that go beyond specific or discrete events or outcomes (see Elder & Giele 2009). We defined "nonreligious people" as individuals expressing no religious identity or behavior, and "religious people" as those who did express religious identity or behavior. Although this is a simplistic way of separating these populations for analysis, data limitations do not allow us to further disaggregate these groups. Further, following Cragun & associates (2015), we utilize this limited measurement option to effectively compare respondents who identify as religious to those who do not, in order to avoid patterns in existing literature wherein studies often only compare more religious respondents to less religious respondents (see also Hwang et al. 2009). As such, studies that build on our endeavors here should seek ways to unpack the nuances of nonreligious and religious distinction over the course of people's lives.
Data and Subject Selection
We explored these questions using data from Wave I of the National Social Life, Health, and Aging Project (NSHAP). Developed between 2005 and 2006, this biosocial dataset provides information on physical, mental, and social health among cisgender United States residents aged 57 to 85. Data for the NSHAP are collected via a combination of questionnaires (administered during home visits), in-home interviews, and basic clinical techniques such as using cotton swabs to collect small amounts of saliva (performed during home visits).
NSHAP data documentation describes the study sample as "a nationally representative probability sample of community-dwelling individuals" (Waite et al. 2007). Certain groups within the study population (African Americans, Latinos, men, and persons 75 to 85 years of age) are oversampled to boost statistical power (Waite et al. 2007). Several key demographic groups are also not captured explicitly; we comment on this in our discussion of study limitations (for limitations of cisgender samples also see Westbrook & Saperstein 2015).
We used NSHAP data capturing religious preference and attendance; diagnosed chronic conditions; and sex identity, ethnoracial background, and educational attainment. While gender is often significantly related to chronic and other health experience (see Nowakowski & Sumerau 2015), the NSHAP - like most other "representative" surveys - currently has no measure of gender, but rather only collects cissex (i.e., female/male) responses from subjects (see also Nowakowski et al. 2016; Westbrook & Saperstein 2015). The NSHAP dataset includes 3,005 individual cases in total. After dropping any cases with missing values on our variables of interest, we retained an analytic sample of 2,966 people, accounting for 98.7% of the total NSHAP population at Wave I. Of these individuals, 189 reported no religious preference and 573 reported no religious attendance. Our study sample is described fully in Table 1. Among these individuals, we were able to assess the distribution of 16 different chronic conditions, as well as the frequency of not having any of those conditions. We were also able to assess the overall relationship between burden of chronic disease and each of our predictor constructs by computing basic count regression models for inferential analysis.
We sought to achieve a high level of detail in our description of chronic condition prevalence estimates across religious and demographic groupings of older adults. We did this both because such estimates appear absent from current religious and nonreligious studies, and because very few explicitly health science surveys contain measures on religiosity at present. Seeking to capture an epidemiological map of (non)religion and health as these aspects intersect with other social locations, we thus chose to represent the full range of characteristics assessed by the NSHAP on our measures of interest, instead of collapsing any of the categories for variables with wide ranges of response options. In the case of ethnoracial background, we actually used data from two different NSHAP variables to create our own diversified measure of heritage including information about Hispanic ethnicity in the dataset's large White population. In all other cases, we simply recoded real and missing values of single NSHAP variables to facilitate analysis.
Strategies for Analysis
We used descriptive epidemiology techniques to analyze our data (see Hajat 2011 for epidemiological methodological instructions and techniques). To create our "prevalence" tables, we computed frequencies of each chronic condition in each religious category and sociodemographic group of interest. We used Stata 12 Special Edition to create and describe our analytic variables as outlined above, and to drop any cases from the full NSHAP sample that lacked real data on one or more measures of interest. These cases were dropped after recoding all included variables using a unified operator for missing values to ensure that no cases with missing data were erroneously included in the study sample.
We then continued working in Stata to compute counts of people with each included chronic condition across any categories we were interested in for each of our two research questions. Using Stata's "summarize" and "bysort" commands with "if" statements, we obtained counts of prevalent cases of each chronic condition within each religious category and sociodemographic group. We also used "summarize" and "bysort" commands to compute the group-specific sample sizes (e.g., number of people with no religious preference identifying as Black) that we would need for the next phase of analysis.
To compute chronic condition frequencies in each group of interest, we next transferred our counts of prevalent cases to Microsoft Excel, along with our overall counts of people with specific (non)religious and sociodemographic characteristics from the full analytic sample. Using "product" functions in Excel, we proceeded to compute the percentage of people in each social location of interest diagnosed with a given chronic condition. These functions multiplied the number of prevalent cases by one over the number of people in the relevant risk pool (e.g., people with Bachelor's degrees who never attend religious services). These computations in Excel yielded contingency values for Tables 2, 3, 4a-c, and 5a-c. Overall sample sizes for these tables were total numbers of people with no religious preference (Tables 2 and 4a-c, n = 189) or no service attendance (Tables 3 and 5a-c, n = 573). We used a similar process to describe our overall study population, using the full sample size (Table 1, n = 2,966) as the denominator for product functions. Outputs from each product function were expressed as percentages for ease of interpretation across disciplines. We thus refer to these values as "frequency" rather than "prevalence" estimates, as the latter are usually expressed in cases per 100,000 population (Gordis 2004).
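For readers who want to reproduce this style of computation outside Stata and Excel, a minimal sketch in Python follows; the column names and values here are hypothetical stand-ins, not actual NSHAP variables.

```python
# A minimal sketch of the group-wise frequency computation described above,
# using pandas in place of Stata counts and Excel "product" functions.
# Column names and values are hypothetical, not actual NSHAP variables.
import pandas as pd

df = pd.DataFrame({
    "attendance": ["never", "never", "monthly", "never", "weekly"],
    "race":       ["Black", "White", "Black", "White", "Black"],
    "arthritis":  [1, 0, 1, 1, 0],  # 1 = diagnosed with the condition
})

# The mean of a 0/1 indicator within each group equals prevalent cases
# divided by the group-specific risk pool; x100 expresses it as a percentage.
freq = df.groupby(["attendance", "race"])["arthritis"].mean() * 100
print(freq)
```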
After conducting our descriptive analyses and submitting our manuscript for review, we received feedback from reviewers affirming our concerns about doing inferential analysis with relatively small samples for some of our predictor categories, but also encouraging us to provide a couple of basic count regression models for purposes of comparison. We thus went back and computed negative binomial regression models for relationships between chronicity and religiosity.
For our outcome variable in these models, we generated a measure of total chronic disease burden by adding together indicator variable values for each of the 16 conditions we assessed independently. We also created a binary variable for religious preference to use as a predictor in the first set of inferential models, given that our original preference variable was nominal rather than ordinal. The religious attendance variable remained unaltered for inferential analysis. Finally, we recoded the nominal variable for race into a binary indicator of whether or not a person identified as a racial minority. Variables for sex (already binary due to lack of attention to intersex physiology in the NSHAP) and education (ordinal) were left unaltered for inferential analyses.
In constructing our regression models for each predictor construct, we first computed raw models using a negative binomial framework to assess apparent net effects from religiosity on chronic disease burden. After computing these models we checked significance test results to see if negative binomial regression was required for these data due to violation of data dispersion assumptions for standard Poisson models. Having verified that negative binomial regression was indeed the appropriate modeling framework, we proceeded to compute two models per predictor construct: one illustrating apparent net effects from religiosity; and one expanded to include covariates for sex, race, and education.
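As a rough illustration of this two-model setup, the sketch below fits both negative binomial models with statsmodels on synthetic data; the variable names (chronic_count, relig_pref, female, minority, educ) are our own placeholders, not the authors' NSHAP codings.

```python
# A self-contained sketch of the raw and expanded negative binomial models.
# All data below are synthetic and the variable names are illustrative
# placeholders rather than the authors' actual codings.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 200
df = pd.DataFrame({
    "relig_pref": rng.integers(0, 2, n),  # 1 = expresses a religious preference
    "female":     rng.integers(0, 2, n),  # binary sex indicator
    "minority":   rng.integers(0, 2, n),  # 1 = identifies as a racial minority
    "educ":       rng.integers(1, 5, n),  # ordinal education level
})
# Outcome: total chronic disease burden (sum of 16 condition indicators).
df["chronic_count"] = rng.negative_binomial(3, 0.5, n)

# Model 1: raw net effect of religiosity on chronic disease burden.
m1 = smf.negativebinomial("chronic_count ~ relig_pref", data=df).fit(disp=False)
# Model 2: expanded with covariates for sex, minority status, and education.
m2 = smf.negativebinomial(
    "chronic_count ~ relig_pref + female + minority + educ", data=df
).fit(disp=False)
print(m2.summary())
```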
Results
We first examined the prevalence of the 16 common chronic conditions captured in the NSHAP in relation to religious identification. Results from these descriptive analyses are shown in Table 2. Prevalence estimates presented in Table 2 suggest at most a minor (or tenuous, see Hwang et al. 2009) relationship between (non)religious identification and health. Specifically, these frequency statistics indicate that nonreligious and religious NSHAP participants typically have roughly the same prevalence of chronic conditions overall. In some cases (arthritis and emphysema), nonreligious older adults have lower frequencies; in others (enlarged prostate, asthma, and forms of cancer not noted explicitly), religious older adults have lower frequencies. These relationships also have complex nuances in some instances. For example, in some cases nonreligious and Jewish respondents show lower chronic condition frequencies (ulcers, stroke, hypertension, diabetes, Alzheimer's or dementia, and poor kidney function) than Christian people; in others (cirrhosis, leukemia, skin cancer), there is no clear relationship. Finally, participants identifying as nonreligious appear most likely to have none of the 16 common chronic conditions assessed by the NSHAP (i.e., "none of the above"). These findings suggest that whatever relationships exist between (non)religion and health among older adults are likely nuanced and intersectional in nature rather than direct, concrete, or significant. Using religious service attendance as our measure of religiosity helps to elucidate these nuances. Results from these analyses are shown in Table 3. Chronic condition frequency estimates in Table 3 suggest that no uniformly positive relationships exist between religious attendance and health. Rather, it appears that the NSHAP participants who are least likely to have chronic conditions by the time they reach later life are those who attend religious services no more than once a month. In fact, the only column where the highest religious attendance matches the lowest condition frequency (i.e., lymphoma) suggests people may achieve the exact same frequency if they attend services less than once a year. Further, the most common chronic health condition in America at present (i.e., arthritis) appears least frequently among those older adults who never attend religious services or only attend about once or twice a year. Echoing reviews of religious studies of health (Hwang et al. 2009), our findings in Table 3 suggest that asserting a clear, uniform relationship between service attendance and health outcomes oversimplifies what is actually an inconsistent and nuanced association. As intersectional scholars of health have noted (Grollman 2012), such oversimplifications potentially mask important social resources and processes that directly influence health.
Because comparisons between religious and nonreligious respondents do not reveal clear relationships in terms of health outcomes, we next sought to ascertain whether examining health outcomes among nonreligious older adults - as noted among religious people in other health research capturing such populations (see Koenig et al. 2001 or studies reviewed in Sloan & Bagiella 2002) - would reveal nuanced variations or uniform trends in chronic condition prevalence. Results from these analyses are shown in Tables 4a through c. Following intersectional scholarship on health to date (Schultz & Mullins 2006), we expected that if religious and nonreligious NSHAP respondents accomplished similar outcomes (i.e., religion is not driving health disparities), then nonreligious people would echo their religious counterparts by experiencing varied health outcomes in relation to sex, race, and educational status in society.
Table 4a presents chronic condition frequency estimates among nonreligious identified older adults of different binary sex categories (i.e., female and male). Since many studies have noted sex differences - in cissex (see Nowakowski et al. 2015 for reviews of this literature), intersex (see Davis 2015 for review) and transitioning between sexes (see Miller & Grollman 2015 for reviews of this literature) populations - we sought to see how nonreligion intersects with sex to gauge whether or not nonreligious females and males would show similar variation. As demonstrated in Table 4a, nonreligious NSHAP respondents appear to experience variations in health related outcomes by sex that are very similar to those observed in their religious counterparts (see, e.g., Calasanti & Slevin 2001 for reference). Specifically, nonreligious male identified people exhibit lower prevalence estimates for most chronic conditions than their female peers do as their cohorts age.
Similar patterns appear for race in Table 4b, and for education in Table 4c. The lack of a uniform positive association between religion and health becomes even clearer when examining relationships between service attendance and chronic condition frequency among varied sex, race, and education related social locations in our older adult study population. Results from these analyses are shown in Tables 5a through c. All three tables once again demonstrate substantial variation in the frequency of specific chronic conditions across diverse social locations adults may occupy in late life. These findings mirror our results from Tables 4a through c, which used religious preference rather than attendance as a marker of religiosity. Rather than any consistent overall relationship between religion and health, we again wind up with a nonreligious population that appears similar to both religious populations and US society as a whole. While prior correlational studies suggest consistently higher negative health outcomes among those who never attend religious services, our analyses of such respondents in the NSHAP instead reveal tremendous variation by sex, education, and race.
The inferential models we computed are shown in Tables 6a and 6b. These are negative binomial regression models representing total number of diagnosed chronic conditions as the outcome, and each of our two religiosity measures as a predictor. Each table presents Model 1 (raw model of the relationship between chronic disease burden and religiosity) and Model 2 (expanded model of the above relationship with covariates for sex, racial minority status, and education level). Overall, our inferential models revealed little evidence to suggest that having a religious preference or attending religious services substantially impacts burden of chronic disease one way or another. This is generally consistent with our findings from descriptive analysis. Our inferential models did identify a marginally significant association between having a religious preference and burden of chronic disease. However, contrary to findings from much of the literature discussed in the front matter, our own models actually show that people who express a religious preference have a slightly *higher* burden of disease in both a raw model incorporating only religious preference as a predictor and an expanded one incorporating covariates for sex, racial minority status, and education level.
Discussion
Despite the proliferation of both studies examining religion and health and studies of the nonreligious, many gaps and controversies persist between these lines of scholarship. Researchers have often based their arguments for whether or not religion and/or nonreligion benefits health on correlations removed from any concrete social context, and devoid of comparisons to the prevalence of health outcomes among religious or nonreligious populations. As Hwang and colleagues (2009) note, little is known about what these correlations might actually mean beyond theoretical assertions and mathematical postulations, and even less is known about the overall health of nonreligious people in society, either in general or among older adults specifically.
Table 6b :
Negative Binomial Regression Models of Chronic Disease Burden by Religious Attendance (n = 2,966). † p < 0.10, * p < 0.05, ** p < 0.01, *** p < 0.001.
Our study contributes to these conversations by outlining the prevalence of chronic conditions among religious and nonreligious respondents. In addition, it draws on intersectional insights to provide variations in prevalence rates among nonreligious people occupying disparate sex, race, and educational positions within society. The combination of these endeavors reveals an absence of any clear religious influence upon 16 common chronic health conditions within a diverse sample of older adults who, given their advanced ages, would be most likely to have experienced the benefits and/or pitfalls of religious or nonreligious beliefs and practices over the course of their lives. Most strikingly, religious NSHAP participants appear to be those least likely to have none of the 16 included conditions later in life. This finding offers a counterpoint to extant scholarship suggesting clear and consistent positive ties between religion and health. Building on this observation, our study offers three key insights researchers may extend to better understand potential dynamics concerning religion, nonreligion and health.
First, our descriptive and inferential analyses both call into question ongoing assertions concerning religious benefits for health. Additionally, our elaboration of prevalence estimates suggests that in many cases nonreligious people may have better long term health outcomes or lower likelihood of major health issues. In addition, our exploration of the most common measure used to argue for religious health benefits revealed that service attendance could actually be negative if people attended more than once a month, and that this measure overall did not suggest any direct or consistent relationship to health outcomes. As much of social scientific scholarship on religion, nonreligion, and the relation of these phenomena to health and other social experiences currently depends heavily on the interpretation of correlations, these observations reveal the need for developing baseline prevalence estimates that will allow us to judge such interpretations against concrete, data-based outcomes in the concrete world. In so doing, we may catch instances - like the current religion and health literature - where apparent correlations in aggregated data may lead us in unproductive directions. More detailed assessment of relationships between religion and health can enable us to direct our efforts toward understanding social resources and processes that explain these associations.
Second, results reveal the importance of attending to power and intersectionality in studies of religion and nonreligion. Although it may seem counterintuitive at first that decades of studies have focused on a relationship that does not appear in prevalence estimates of religion/nonreligion and health, this makes a lot more sense when we think about the power and privilege granted to religion in contemporary American society (see, e.g., Cragun & Sumerau 2015; Edgell et al. 2006; Hammer et al. 2012). Rather than neutral categories of existence, intersectional scholars have long noted that people are trained to see the world in relation to dominant assumptions, patterns, and power structures (Collins 2000). We currently live and work in a society where religion is typically defined as good, moral, beneficial, and useful for all people (see Barton 2012). Within such a context, it is not surprising that researchers would see or seek correlations suggesting a potential positive effect, and then uncritically interpret such correlations as "evidence" that religion is in fact good (see also Cheng & Powell 2015). It may thus be the case that social training protocols or dominant discourses (Collins 2000) promoting the "benefits" of religion have overshadowed or outright contradicted the data themselves in many prior studies of religion and health.
Researchers can build a better foundation for nuanced understanding of relationships between religion and health by beginning with the assumption that no relationship necessarily exists between religion and health in one direction or the other, and instead examining their data first from an exploratory perspective. This may lead to different conclusions that more accurately capture the diversity of possible associations between religion and health, as well as any uniform trends therein (see also Hwang et al. 2009).
Our descriptive analyses support theoretical evidence (see Magyar-Russell & Pargament 2006) that religion can be good, bad, or ineffectual in relation to many health outcomes over the life course. Likewise, our findings suggest that nonreligion can be interpreted as equally good, bad, or ineffectual in relation to health outcomes. This possibility appears to be especially strong when having no common chronic conditions in late life is included as a key outcome.
The ability or inability of particular indicators of religiosity (rather than a general "umbrella" measure that likely captures a broad range of experiences related to spiritual life) may offer some clues as to why we did not find positive associations between religion and health (i.e., negative associations between religiosity and chronic disease burden) in either our descriptive analyses or our inferential ones. Specifically, one of our measures (religious belief) says little on its own about what types of practices a person might engage in as a result of their beliefs that would in turn yield opportunities for social support. Indeed, many people with extremely devout beliefs focus their energies on cultivating strong personal relationships with deity, rather than participating in organized worship activities. Our other religiosity measure (religious attendance) may yield better insight into opportunities for social support, but remains limited in its predictive value for this potential mediator because it cannot independently capture the nature or tenor of specific activities in which people engage when attending services. Magyar-Russell & Pargament (2006) explain that organized worship that encourages anxiety about punishment in the afterlife can actually foment social anomie and harm health, whereas worship services that encourage personal empowerment and secure attachment to deity are likely to do the opposite.
Rather than a clear relationship that would support dominant discourses within a society where religion is privileged above other ideological and interpretive forms (see Cragun & Sumerau 2015), our research reveals a nuanced, intersectional set of relationships and variations that suggest religion (and even nonreligion) may not matter at all for health except in cases where it diverts our scholarly attention away from social forces that catalyze health outcomes more directly. Scholars of religion and nonreligion alike may do well to pay close attention to the ways religion - as a privileged system of power in contemporary America (see Barton 2012; Edgell et al. 2006) - intersects with other systems of power and inequality in the course of people's lives and the reporting of scientific results.
Third and finally, our research also reveals the importance of establishing studies of nonreligious health and well being (see also Hwang et al. 2009). While religious aspects of these phenomena have received thorough attention in the last few decades, studies concerning nonreligious people's health are fairly rare at present (see also Brewster et al. 2014). Yet in our own analyses, we found considerable variation among nonreligious people in relation to sex, race, and education status. Our findings echo intersectional studies in health specifically (Grollman 2012) and social life generally (Collins 2000) in suggesting that we can learn at least as much about how intersecting social forces influence nonreligious people's health as we can from similar analyses of religious people. As is common in epidemiological practice (see Gordis 2004), our prevalence estimates of nonreligious health variation can provide a foundation for systematic analyses of the prevalence of various mental and physical health conditions among nonreligious populations, correlational studies seeking to tease out nuances and influences in nonreligious health outcomes or experiences of chronic conditions, and qualitative and quantitative analyses of the ways intersectional statuses play out in the health experiences and behaviors of nonreligious people.
While these insights may dramatically expand research on religion, nonreligion and health in the years to come, our study does have some important limitations, and thus opportunities for further examination of these dynamics. First, as with any current data set called "nationally representative," several key demographic groups (especially in relation to health) are not captured explicitly (Nowakowski et al. 2016; Westbrook & Saperstein 2015). For example, people transitioning between sexes, intersex, transgender of any type, gender nonbinary, same sex attracted, bisexual, asexual, and nonsexual people are often not represented in such surveys and they are not in the data set used for this report (see, for example, Ivankovich et al. 2013; Wentling et al. 2008; Westbrook & Saperstein 2015). Although people with these characteristics may be included in the total participant pool (Westbrook & Saperstein 2015), we cannot comment meaningfully on their experiences at present using this dataset. As calls continue for more truly representative data, it may thus be useful to re-estimate prevalence rates including these and other often unrepresented populations to gain a better picture of overall population health (Ivankovich et al. 2013).
Speaking in detail to the above limitations of NSHAP as a whole, in this specific study we also cannot offer insight into the human immunodeficiency virus (HIV) status of marginalized sex, gender, and sexuality groups. HIV status was also assessed in the original data collection effort for Wave I, but later pulled from the restricted use dataset due to confidentiality concerns. These data were never released for use by other researchers, and thus represent a lost opportunity for assessing the nuances of well-documented inequalities in HIV prevalence (Gorman et al. 2015) in older adult populations. It would thus be wise for researchers to examine what (if any) relationship exists between HIV prevalence and experience among religious and nonreligious populations.
We also cannot comment substantively on prevalence patterns for chronic mental and behavioral health conditions. Our analytic sample did include data on one cognitive condition group (Alzheimer's or dementia) included in the NSHAP's assessment of commonly diagnosed chronic conditions in older adults. The remaining common conditions with reported outcomes only offered information about physical health. The NSHAP does collect data on mental and behavioral health experiences (e.g., depression) and practices (e.g., smoking). However, it does not capture diagnosis status, and thus does not offer a meaningful opportunity to compare findings across different condition categories in a single study. We intend to follow up with separate studies engaging the available NSHAP data for experiences and practices indicative of chronic mental and behavioral health conditions among nonreligious and religious respondents.

Finally and perhaps most importantly, our study deals solely with older adults. Literature on the etiology and dynamics of health in late life has long acknowledged that volunteer work and other forms of organized civic activity in both the secular and religious spheres appear to exert a substantial positive impact on both physical and mental health. Social support is one of several hypothesized mediation mechanisms in this research. Indeed, a recent study (Cragun et al. 2016) suggests that social support may play a key role in any potential positive associations between religion and health. Although we have no means of directly comparing our population in the NSHAP to themselves at younger ages, extant literature certainly suggests that these individuals may have increased their civic engagement across the board as they grew older. So among older adults, it may be more difficult to distinguish unique influences on health from social support stemming from secular versus religious activities.
Our emphasis on older adult populations also introduces some notable strengths, such as the fact that chronic conditions remaining latent in earlier portions of the life course are more likely to manifest and progress to clinical diagnosis in later years. Likewise, in the United States most adults over the age of 65 - a substantial portion of the total NSHAP study sample - are eligible for health insurance coverage via Medicare. Adults in this age bracket may thus be more likely to obtain clinical diagnoses for their chronic conditions due to expanded access to health care if they did not previously have consistent ability to pay for office visits.
Indeed, our report also offers significant other strengths (especially in relation to epidemiological methods and prevalence estimates, see Gordis 2004). First, we began our study with substantial samples of people with no religious preference (n = 189) and no service attendance (n = 573). Although these sample sizes are often regarded as adequate even for basic inferential analysis, good epidemiological practice requires thorough description of a study population prior to attempting inference (Gordis 2004). We present both perspectives here for comparison and contrast. We were able to achieve a high level of detail in our analysis by breaking each sample of nonreligious people into smaller contingency groups by sociodemographic characteristics. This allowed us to illuminate potential variations with implications for academic and applied health practice alike.
By contextualizing these observations with a basic inferential analysis for both predictors, our findings from the descriptive epidemiology open doors for many possibilities in scholarship. As noted regarding small sample sizes, it is very possible that these inferential analyses (especially the one for religious preference, where the sample of people in the "none" category contained only 189 cases) would miss a significant "true" effect because not enough people responded that they had no religious preference to provide adequate power in that sample. They also offer opportunities to explore the inconsistency in statistical associations and reliability we found here as further evidence that the impact of religiosity on health may be much more nuanced than previously believed.
Because we only used data from Wave I of the NSHAP to assess condition frequencies, we avoided potential issues with cohort inversion - the phenomenon in which certain groups in an analytic sample appear to become gradually healthier over time relative to their peers because members with profound health challenges die prior to subsequent waves of data collection (Noymer, Beckett & Elliott 2001). That said, we caution against over-interpretation of our findings for the "none of the above conditions" measure. People in this category did not necessarily have no chronic conditions at all, only none of those captured by the 16 commonly diagnosed condition variables in NSHAP.
We also feel confident that we captured religiosity meaningfully in our study population (especially considering the tendency for health data sets to have little or no measures of religiosity). Although the NSHAP includes additional response options for the services question that indicate attending rarely or occasionally, we focused our analysis only on the 573 people who said that they never attended. Using the NSHAP also gave us access to what extant literature suggests are two of the three most common measures of religiosity (see also Hwang et al. 2009). We did not have access to the last of these three measures - belief in a higher power - because the NSHAP (like many health surveys) does not ask this question. Despite limitations on religious variables in health data, we were thus able to utilize both commonly accepted religious measurements and diverse collections of health information in this report.
Conclusion
Our study sheds light on some ways in which descriptive epidemiological approaches may help scholars make sense of evolving controversies and debates concerning religion, nonreligion and health. Considering that health outcomes are facilitated by multiple, interlocking systems of social power and privilege (Schultz & Mullins 2006), fully understanding such controversies requires establishing baseline portraits of diverse health outcomes in people who identify as religious and nonreligious, as well as those who do or do not attend religious services regularly. Further, such understanding requires attention to how nonreligious people's health - like that of their religious counterparts - is shaped by intersections of sex, race, and class inequalities in the broader social world.
To this end, we first explored the frequency of common chronic health conditions among religious and nonreligious populations simultaneously, and then stratified our frequency estimates by sex, race, and education characteristics within nonreligious populations. Our findings offer little reason to believe religion or nonreligion plays a major positive or negative net role in health outcomes across the life course. We thus echo Hwang and colleagues (2009) in suggesting that prior studies indicating otherwise based on simple aggregate correlations may oversimplify what is actually a complex and nuanced causal landscape. Our analyses also revealed considerable variation in the health outcomes of nonreligious respondents. This suggests that developing an intersectional field of nonreligious health scholarship may be an important step for scholars seeking to illuminate intersections between religion, nonreligion, and health embedded within the broader social world and influenced by other systems of power and privilege.
Table 1 :
Characteristics of Study Population at NSHAP Wave I (n = 2,966).
Table 3 :
Chronic Condition Frequency by Religious Attendance (n = 2,966).
Table 4b :
Chronic Condition Frequency by Race Among People with No Religious Preference (n = 189).
Table 4a :
Chronic Condition Frequency by Sex Among People with No Religious Preference (n = 189).
Table 4c :
Chronic Condition Frequency by Education Among People with No Religious Preference (n = 189).
Table 5a :
Chronic Condition Frequency by Sex Among People with No Religious Attendance (n = 573).
Table 5b :
Chronic Condition Frequency by Race Among People with No Religious Attendance (n = 573).
Table 5c :
Chronic Condition Frequency by Education Among People with No Religious Attendance (n = 573).
Chemical composition of clupeids fish as alternative replacement for foreign fish meal in fish feeds in West Africa region
Sun-dried clupeid fish were acquired from fishermen in the Kainji Dam basin, and Danish fish meal was also procured in order to compare their nutrient composition. The clupeid fish were milled into fish meal, and both meals were analysed for chemical composition to determine their nutrient profiles using standard procedures. The results of proximate composition showed that clupeid fish meal has 70.68% crude protein while Danish fish meal has 72.6% crude protein; these values were not significantly different (P>0.05). The results of amino acid profile analysis showed that both contain all the essential amino acids (EAA), and the methionine and cysteine contents in clupeid fish meal were significantly higher than those in Danish fish meal (P<0.05). The results also showed that both contain minerals, and the calcium and phosphorus levels were higher in clupeid fish meal than in Danish fish meal. It was concluded that clupeid fish meal shows the potential to replace Danish fish meal in fish diets on a 1:1 basis, which will be confirmed in a feeding experiment. The success of clupeids replacing foreign fish meal will help the West Africa region to reduce importation. *Correspondence to: Ibiyo LMO, Fish Nutrition and Health Programme, National Institute for Freshwater Fisheries Research, PMB, New Bussa, Niger State, Nigeria, Tel: 08059241879; E-mail: oniviemercy@yahoo.com
Introduction
Fish is one of the most important sources of animal protein available in the tropics and has been widely accepted as a good source of protein and other elements for the maintenance of a healthy body [1]. In several Nigerian communities, a substantial percentage of the protein needs of the population of villages and towns is supplied through fishing [2]. The less developed countries capture 50% of the world fish harvest, and a large proportion of the catch is consumed internally [3]. In Asian countries over 50% of the animal protein intake comes from fish, while in Africa the proportion is 17.50% [4]. In Nigeria fish constitutes 40% of the animal protein intake [5]. To boost the availability of fish and its intake, aquaculture development has been encouraged in recent times as capture fisheries supply continues to decline. However, the success of aquaculture production is largely dependent on fish feed supply, and feed production requires fish meal.
Fishmeal is an essential component of fish feed and is important for good growth of fish, thereby leading to a profitable output in fish production. A large quantity of the fishmeal used for feed in Nigeria and the West African region is imported, and it is very expensive. A small quantity of local fishmeal is available in the market, but it is poor in quality, produced from trash fish, heads and crumbs gathered from fish sellers. Sixty to seventy percent (60-70%) of the cost of a kilogram of fish feed is borne by fishmeal, so lowering the cost of fishmeal will reduce the unit cost of fish feed by 20%. Research work on the replacement of some quantities of fishmeal in feed formulation with soybean [6], poultry feather meal [7], live maggot [8], Moringa oleifera leaves [9] and some other by-products has only succeeded in reducing the level of inclusion. There is a need for a local source of fishmeal that can totally replace foreign fish meal while retaining the palatability, acceptability, enticing aroma and good growth performance that fishmeal imparts to fish feed for fish production. Lantern fish (a deep-sea fish) is abundant in our coastal waters and is yet to be exploited; it can be assessed by the Nigerian Institute of Oceanography. Also, the clupeid fishes reproduce abundantly in Nigerian freshwater bodies and are yet to be fully exploited. The nutritional composition of fish varies greatly from one species and individual to another, depending on age, feed intake, sex and sexual changes connected with spawning, the environment and season [10]. There is a dearth of information on the availability of good quality local fishmeal in Nigeria and West Africa in general. This necessitated the initiation of this project on local fishmeal development from clupeid and lantern fish, which seem very suitable for the purpose if harnessed. This particular work harnesses the abundance and prolificacy of clupeid fish species to produce local fish meal and reduce the importation of fish meal. Fish feed will be cheaper, employment will be generated, more fish will be produced, farmers' income will increase, and the contribution of aquaculture production to the national GDP will improve.
Materials and methods
Danish fish meal was procured from feed ingredient marketers in Lagos. Clupeid fish caught by fishermen with a trawl net of 3 mm codend mesh size in the Kainji Dam basin were procured. The caught population comprises the two species of clupeids, namely Pellonula afzeluisi and Sierrathrissa leonensis [11]. The sample was processed by sun drying on a rack of mosquito wire net supported by a wooden frame, which is the common method adopted by the fishers of clupeid fish. No effort was made to separate the fish into species groups because the fishermen will always catch both species and process and bag them together. This was deliberate, in order to assess what will eventually be available to fish farmers or feed producers. The sun-dried sample was milled and taken for nutritional analysis, which comprised proximate composition, amino acid profile, mineral content and fatty acid analysis.
The analyses of clupeid fish meal were carried out by the University of Ilorin Analytical Laboratory under the supervision of the collaborating staff (Dr. R.M.O. Kayode). Proximate composition was measured following the procedures of [12]. Nitrogen was measured following the micro-Kjeldahl method and multiplied by 6.25 to estimate crude protein content.
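As a minimal sketch of this conversion (assuming only the conventional nitrogen-to-protein factor of 6.25 mentioned above), the arithmetic can be expressed as:

```python
# Crude protein estimation from Kjeldahl nitrogen, using the conventional
# 6.25 nitrogen-to-protein conversion factor cited above.
def crude_protein(nitrogen_pct: float, factor: float = 6.25) -> float:
    """Estimate crude protein (%) from measured nitrogen (%)."""
    return nitrogen_pct * factor

# Illustrative input: ~11.31% N corresponds to roughly the 70.68% crude
# protein reported for clupeid fish meal (11.31 x 6.25 = 70.69).
print(crude_protein(11.31))
```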
Amino acids analysis
Amino acids were measured after acid hydrolysis of protein using the Pico Tag method and high-pressure liquid chromatography (HPLC). The sulphur amino acids (cysteine and methionine) were measured separately using similar methods after oxidation with performic acid.
Mineral analysis
The digested sample was sub-sampled into pre-cleaned borosilicate glass containers for Atomic Absorption Spectrophotometer analysis.
Phosphorus
The sample was digested with nitric acid. The content was boiled for about a minute to ensure complete conversion of phosphorus pentoxide to orthophosphate. The solution was allowed to pass through a 10 cm resin-packed column, and the filtrate was collected in a 10 ml Pyrex test tube, to which 2.0 ml of the colour development reagent was added. The absorbances of both the standards and the sample were measured at 650 nm.
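A hypothetical sketch of the calibration arithmetic behind such a colorimetric assay follows; the standard concentrations and absorbance readings are illustrative values, not the laboratory's actual data.

```python
# Back-calculating phosphorus concentration from 650 nm absorbance via a
# linear standard curve (approximately Beer-Lambert behaviour is assumed).
# All numbers below are illustrative, not the laboratory's actual readings.
import numpy as np

std_conc = np.array([0.0, 0.5, 1.0, 2.0, 4.0])       # standards, mg/L P
std_abs  = np.array([0.00, 0.11, 0.22, 0.45, 0.90])  # absorbance at 650 nm

slope, intercept = np.polyfit(std_conc, std_abs, 1)  # fit the standard curve

def conc_from_abs(absorbance: float) -> float:
    """Invert the standard curve to recover concentration in mg/L."""
    return (absorbance - intercept) / slope

print(round(conc_from_abs(0.33), 2))  # ~1.47 mg/L with these standards
```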
Statistical analysis
The data obtained were subjected to Student's t-test of significance, after which values were represented in graphs.
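A small sketch of this comparison is given below; the replicate values are invented solely to show the mechanics of the test.

```python
# Student's t-test comparing a nutrient value between the two meals.
# The replicate values are invented for illustration only.
from scipy import stats

clupeid_cp = [69.5, 70.7, 71.9]  # crude protein replicates (%)
danish_cp  = [71.2, 72.6, 74.0]

t, p = stats.ttest_ind(clupeid_cp, danish_cp)
# With these illustrative numbers p > 0.05, consistent with the reported
# conclusion of no significant difference in crude protein.
print(f"t = {t:.2f}, p = {p:.3f}")
```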
Chemical analysis of reference and test ingredients
Proximate composition: The results of the proximate composition analysis of the reference (Danish fish meal) and the test (clupeid fish meal) ingredients are presented in Figure 1. The clupeids' crude protein content (C.P.) of 70.68% is not significantly different from that of Danish fish meal (imported fish meal), which is 72.60% C.P. (P>0.05). A large portion of the lipids in clupeid fish is located mainly under the skin, with some in the peritoneum.
Amino acid profile of clupeid and Danish fish meals: Figure 2 shows the amino acid profiles of the two fish meals. The amino acid values of both are close to each other. However, the sulphur amino acid (methionine and cysteine) values are higher in clupeid fish meal than in Danish fish meal, despite the limiting nature of these amino acids.
Mineral composition of clupeid and Danish fish meals:
The mineral composition, shown in Figure 3, indicated higher quantities of phosphorus and potassium in clupeid fish meal.
Discussion
Generally, the analytical results give an indication that clupeid fish meal is able to replace foreign fish meal in fish feed at a ratio of 1:1. This will be confirmed by the results of the feeding experiments. The lipid content from the proximate composition is higher than the 1.32% and 1.04% reported for adults and fingerlings of Clarias gariepinus, respectively [13], and within the range of 1. It can be concluded from the results of the studies that clupeid fish meal is an important local resource that can totally replace foreign fish meal in feed production in Nigeria and the West Africa sub-region, due to the fact that it has a well-balanced nutrient composition. It will be necessary to invest in stocking the water bodies where they are not presently available, for accessibility and availability to fish farmers all over the country and the West Africa sub-region.
Efficacy of the Rotary Instrument XP-Endo Finisher in the Removal of Calcium Hydroxide Intracanal Medicament in Combination with Different Irrigation Techniques: A Microtomographic Study
Objectives: This study aims to evaluate the efficacy of the rotary instrument XP-endo Finisher for the removal of Ca(OH)2 aided by different irrigation regimens. Methods: Sixteen double-rooted upper premolar human teeth were selected for the study. Thirty-two canals were prepared using a ProTaper Next rotary system up to X3. Then, the canals were filled with Ca(OH)2. The volume of Ca(OH)2 inside the canals was measured by microcomputed tomography (micro-CT). After that, the teeth were randomly allocated into two experimental groups, i.e., A and B (n = 16 canals). In group A, Ca(OH)2 was removed using the master apical file (X3). In group B, Ca(OH)2 was removed using an XP-endo Finisher. In half of both groups (n = 8), syringe irrigation (SI) was used, while passive ultrasonic irrigation (PUI) was used for the other half. After removal, the remaining volume of Ca(OH)2 was measured. All data were statistically analyzed using two-way ANOVA with Tukey's post hoc test. Results: The percentages of remaining Ca(OH)2 in the apical thirds of all canals were significantly higher as compared with the middle and coronal thirds in all groups (p < 0.05). There was no significant difference between different files and techniques (p > 0.05). Clinical Significance: This study presents a new method for the removal of Ca(OH)2 from root canals.
Introduction
Previous and recent endodontic technologies have focused on the reduction and eradication of microbes and microorganisms from the root canal system [1,2]. There is no available instrumentation method that can thoroughly disinfect the root canal system [2]. However, the placement of intracanal medication has been implemented to facilitate the disinfection procedure. Calcium hydroxide is extensively used as an intracanal medicament as it has antibacterial properties and its use is suggested for the treatment of infected canals or between root canal treatment visits.
Before root canal obturation, Ca(OH) 2 medicament must be removed completely to allow adaptation of the obturation materials to the root canal walls [3]. Several studies have shown that it is difficult to completely remove Ca(OH) 2 from root canals [4]. Remnants of Ca(OH) 2 on the root canal walls can react with the endodontic sealer and change its properties [5]. Such changes include increasing its viscosity, reducing its flow, and affecting its setting time, and thus preventing sealer penetration and adhesion to dentinal tubules [1,6,7]. These changes affect the bond strength between the sealer and dentine [8]. A study by Kim and Kim, also found that residual Ca(OH) 2 resulted in increased post-obturation apical leakage when a zinc oxide-eugenol root canal sealer was used [9]. The remnants could also react chemically with the sealer and affect the hermetic seal of the permanent root canal filling [10]. Furthermore, the solubility of Ca(OH) 2 inside the root canal can cause voids on the dentine-filling interface that can enhance bacterial growth [11,12]. Therefore, the predictable and complete removal of Ca(OH) 2 medicament before root canal obturation is crucial and is probably directly related to a successful treatment and favorable prognosis [13].
The removal of calcium hydroxide has been investigated using several products and protocols. The most commonly used protocol for the removal of Ca(OH) 2 is the mechanical instrumentation using a master apical file (MAF) combined with sodium hypochlorite (NaOCl) irrigation [14]. In addition, several protocols have been recommended to improve the removal of Ca(OH) 2 , which include the use of NaOCl in combination with the chelating agent ethylenediaminetetraacetic acid (EDTA) and mechanical agitation provided by rotary files instrumentation or ultrasonic/sonic activation together with irrigation [14][15][16][17][18]. Margelos et al. reported that the best technique for calcium hydroxide removal from the root canal was flushing with NaOCl plus EDTA and filing, but even this method was unable to remove the material completely [7]. The use of rotary instruments or passive ultrasonic irrigation (PUI) has been found to remove more intracanal medicament as compared with conventional irrigation [14,19,20].
However, studies on different Ca(OH) 2 removal protocols have shown that residual volumes ranging from 3% to 20% remained, mainly in the apical region [14]. Silva conducted a microtomographic study to assess the efficacy of passive ultrasonic irrigation (PUI) for the removal of calcium hydroxide medication with or without an additional file (F5). The results showed that the use of PUI was more efficient for the removal of Ca(OH) 2 paste regardless of the use of the additional file. The highest residual volume in all techniques was in the apical region [20].
Wiseman showed that sonic and ultrasonic irrigation could not eliminate Ca(OH) 2 from the mesial root canals of mandibular molars. Microcomputed tomography (micro-CT) scanning of the root canal system showed that the combination of rotary instrumentation and passive ultrasonic irrigation of 20 s for three periods significantly reduced the remnants of Ca(OH) 2 as compared with sonic irrigation [21].
The XP-endo Finisher (FKG Dentaire, La Chaux de Fonds, Switzerland) is a rotary root canal instrument that has been introduced into the market. It is a universal NiTi-based instrument of ISO size 25 with zero taper, indicated for the instrumentation of canals with complex morphology and inaccessible areas. The file is highly flexible and can expand: during use, it can reach an area up to 100-fold that of an equivalent-sized file, corresponding to up to 6 mm in diameter. These features help in dentine preservation. The technology behind the manufacturing of XP-endo Finisher files is based on the shape-memory principles of the NiTi alloy. When the file is cooled, it becomes straight (M-phase). When the file is exposed to body temperature inside the canal, its shape changes to the A-phase, caused by its molecular memory. In rotation mode, the A-phase shape enables the file to access areas that are inaccessible with conventional instruments. According to the manufacturer's guidelines, it can remove the medication inside the canal and the residual obturation material during retreatment.
The purpose of this study is to evaluate the efficacy of the finishing rotary instrument XP-endo Finisher in combination with different irrigation regimens for the removal of Ca(OH) 2 from root canal dentin walls using computerized microtomography evaluation as compared with a master apical file (X3).
Experimental Teeth Selection
Approval from the research ethics committee (090-10-17) was granted. Sixteen double-rooted human teeth (providing 32 root canals) with completely formed apices were selected from a pool of teeth. The patients' gender and age were unknown. Tooth selection criteria included teeth free of visible root caries, cracks, or fractures, with completely formed roots upon visual examination. The teeth were randomly coded and allocated blindly into two experimental groups. The external root surface was sealed with nail polish [22]. Buccolingual and mesiodistal periapical radiographs were taken to confirm the teeth had non-calcified canals.
Teeth Preparation
All teeth preparations were carried out at the Faculty of Dentistry, King Abdulaziz University, Jeddah, Saudi Arabia. Selected teeth were mounted in plastic tube holders with a radiolucent rubber base impression material. Access cavity preparation was done using round and tapered fissure carbide burs. The working length (WL) was determined by introducing a size 15 K-file into the canal until the file tip extruded from the apical foramen; 1 mm was then subtracted from this length. Canal preparation was performed using a ProTaper Next rotary system (PTN; Dentsply Maillefer, Ballaigues, Switzerland) according to the manufacturer's instructions, after forming a glide path to the full working length using a size 15 K-file. The files were powered by an electric motor (X-Smart Plus, Dentsply Maillefer) at the manufacturer's recommended rotational speed of 300 rpm and 200 g·cm torque. The MAF (X3) was used to form a minimal apical preparation with a dimension of 0.30 mm at the working length. After the preparation of each root canal, each file was carefully cleaned of debris.
Standard Irrigation Protocol
A standardized protocol was followed to irrigate all canals. The protocol included using two milliliters of 5.25% NaOCl to irrigate the canal after each instrument. After completing the instrumentation, 10 mL of 5.25% NaOCl was used to irrigate the canals using a 30-gauge side-vented needle (Ultradent, South Jordan, UT, USA), with the cannula placed 2 mm short of the working length, followed by 3 mL of 17% EDTA (Sigma Lab Chem. Inc., Pittsburgh, PA, USA) for one minute. A final rinse with 3 mL of 5.25% NaOCl was performed using a 30-G endodontic needle at 2 mm from the working length.
Placement of Ca(OH) 2
All canals were dried using absorbent paper points (Dentsply Maillefer), and Ca(OH)2 (Ivoclar Vivadent) was inserted into the canals using a mechanically driven lentulo-spiral carrier (size 25, Dentsply Maillefer), adjusted to 3 mm from the WL. Radiographs were taken at two angulations, mesiodistal and buccolingual, to confirm that the canals were filled with Ca(OH)2. Access cavities were temporarily sealed with cotton pellets and Cavit (3M ESPE, Germany). The samples were then stored in vials containing gauze saturated with saline at 37 °C for one week, after which the specimens were scanned using microcomputed tomography (micro-CT).
Experimental Groups
After seven days, root canals were randomly distributed into two groups (n = 16), according to the procedure used for Ca(OH)2 removal (Figure 1).

Group A: Ca(OH)2 Removal Using the MAF (X3)
The medicament was removed using the MAF (X3). This group was divided into two subgroups. In subgroup A1, the root canals were irrigated with 5.25% NaOCl (10 mL), and the MAF (X3) was inserted in rotary motion up to the WL. During its removal from the canal, the NaOCl solution was renewed. This procedure was repeated three times, and then the canal was filled with 17% EDTA (3 mL) for one minute, which was replaced every 15 s, followed by a final rinse with 5.25% NaOCl (3 mL).
In subgroup A2, after the use of the MAF (X3) file and regular irrigation, ultrasonic irrigation was performed for one min with an intracanal ultrasonic tip (Irri-Safe 20-25 mm thin intracanal, Satelec: Acteon group, Mérignac, France) 2 mm short from the WL. Ultrasonic activation was delivered for 20 s, twice during NaOCl irrigation and once during EDTA irrigation (mini Endo, SybronEndo, CA, USA). The total activation time was 60 s. The device was adjusted to 80% of maximum power. Then, the canals were dried with absorbent paper (35/4%; Dentsply Maillefer).
Group B: Ca(OH)2 Removal Using the XP File
The medicament was removed using 5.25% NaOCl and the XP-endo Finisher. A contra-angle handpiece Element Motor (SybronEndo, Orange, CA, USA) was used. Root canals were each filled with 0.5 mL of 5.25% NaOCl. The instrument was adjusted to the WL and then cooled down with Endo-Frost (Roeko, Langenau, Germany) according to the manufacturer's instructions. After removal of the plastic tube, the instrument was inserted into the canal without rotation. Afterward, the instrument was used in continuous rotation at 800 rpm and 1 N·cm torque. The instrument was used inside the canal for 1 min with slow 7-8 mm up-and-down movements along the entire length of the canal. This procedure was repeated three times. Upon the instrument's removal from the canal, the NaOCl solution was renewed. After removal of the instrument, a final irrigation protocol was performed using a 30-gauge side-vented needle with 10 mL of 5.25% NaOCl. Then, the canal was irrigated with 17% EDTA (3 mL) for one minute, followed by a final rinse with 5.25% NaOCl (3 mL). This group was divided into two subgroups. In subgroup B1, the XP file was used with the regular irrigation technique. In subgroup B2, after the use of the XP file and regular irrigation, ultrasonic irrigation was performed using 5.25% NaOCl for one minute with an intracanal ultrasonic tip 2 mm short of the WL. The device was adjusted to 80% of maximum power. Then, the canals were dried with absorbent paper points (35/4%; Dentsply Maillefer).
Micro-CT Scanning Procedures and Evaluation Protocol
All micro-CT scanning and analysis were carried out at the College of Dentistry, Imam Abdulrahman Bin Faisal University, Dammam, Saudi Arabia. Two microtomographic scans were performed on each sample: the first one week after placement of the intracanal medicament, and the second after medicament removal. The teeth were mounted in a plastic tube holder with a radiolucent rubber-base impression material. Scanning was performed using a SkyScan 1172 micro-CT machine (Belgium) with the following parameters: source voltage = 90 kV, source current = 112 µA, image pixel size = 13.73 µm, filter = Al + Cu, image format = TIFF, exposure = 2900 ms, rotation step = 0.500°, frame averaging = 3, random movement = 10, and 360° rotation. Raw TIFF images were then reconstructed using NRecon version 1.6.4.8 (SkyScan, 2011) with the following settings: smoothing = 6, smoothing kernel = 2 (Gaussian), ring artifact correction = 6, beam hardening correction = 20%, and result file type = BMP. CTAn version 1.11.10.0+ (64-bit; SkyScan, 2003-11) was used for the calculation of the remaining calcium hydroxide in cubic millimeters, and CTVol version 2.2.1.0 was used for realistic three-dimensional (3D) visualization.
Outcome Assessment
The mean volume of Ca(OH)2 before removal was calculated. The higher greyscale value of Ca(OH)2 compared with that of dentine allowed its identification by a manual segmentation procedure. The percentage of Ca(OH)2 removed from the canals (removal efficacy) was calculated as: (mean volume of Ca(OH)2 before removal − mean volume of Ca(OH)2 after removal) × 100 / mean volume of Ca(OH)2 before removal.
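To make the computation concrete, a minimal sketch follows; the two volumes are hypothetical illustrative values, not measurements from this study.

```python
# Removal efficacy as a percentage of the initial medicament volume.
def removal_efficacy(volume_before_mm3, volume_after_mm3):
    return (volume_before_mm3 - volume_after_mm3) * 100 / volume_before_mm3

# Hypothetical example: 5.00 mm^3 of Ca(OH)2 before removal, 0.20 mm^3 remaining.
print(removal_efficacy(5.00, 0.20))  # -> 96.0 (% of the medicament removed)
```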
The calculation of the Ca(OH)2 volume in each specimen was performed using the micro-CT software. Each dataset was also segmented using a uniform grayscale threshold to visualize and calculate the volume of residual Ca(OH)2 material. The volume of Ca(OH)2 is expressed in mm3.
Statistical Analysis
The Shapiro-Wilk normality test was used to test the distribution of the Ca(OH)2 data for the different groups. Two-way ANOVA with Tukey's post hoc test was used for statistical analysis, with statistical significance set at p < 0.05. Prism 8 software (Version 8, GraphPad Software, La Jolla, CA, USA) was used for the analysis.
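A minimal sketch of this analysis in Python (an open alternative to Prism; all column names and values are hypothetical illustrations, not the study's data):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols
from scipy.stats import shapiro

# Hypothetical long-format data: one row per specimen.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "file":       np.tile(["X3", "X3", "XP", "XP"], 8),    # instrument factor
    "irrigation": np.tile(["SI", "PUI", "SI", "PUI"], 8),  # irrigation factor
    "residual":   np.tile([2.1, 1.4, 1.8, 0.2], 8) + rng.normal(0, 0.3, 32),
})

# Shapiro-Wilk normality test on the outcome variable.
stat, p_norm = shapiro(df["residual"])
print(f"Shapiro-Wilk p = {p_norm:.3f}")

# Two-way ANOVA (file x irrigation) with interaction, type II sums of squares.
model = ols("residual ~ C(file) * C(irrigation)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```

Tukey's post hoc comparisons could then follow, e.g. with statsmodels' pairwise_tukeyhsd on the group labels.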
Descriptive Analysis
Three-dimensional rendered images were constructed from the micro-CT scans of root canals filled with Ca(OH)2.
Quantitative Analysis
The efficacy of Ca(OH)2 removal from the whole root canals after using the XP-endo Finisher and X3 files showed some differences when combined with either syringe irrigation (SI) or PUI. With X3/SI, the efficacy was 92.3% ± 11.2%, whereas with X3/PUI it was 95.7% ± 4.5% (Table 1), a slight but non-significant increase when PUI was used. With the XP-endo Finisher, the efficacy was 94.3% ± 5.4% for XP/SI and 99.8% ± 0.002% for XP/PUI (Table 1). Calculation of the percentage of the remaining volume of Ca(OH)2 in the different thirds of the root canals showed significant differences in the apical third as compared with the middle and coronal thirds (p < 0.05). All techniques completely removed Ca(OH)2 from the coronal and middle thirds of all root canals (Table 2).
Discussion
This study aimed to compare the efficacy of different protocols using X3, XP, PUI, and SI for the removal of Ca(OH)2 from cleaned and shaped root canals. The widely used protocol for Ca(OH)2 removal is mechanical instrumentation using the MAF combined with SI. Mechanical agitation provided by rotary file instrumentation or ultrasonic/sonic activation along with irrigation has been proven superior to SI in removing Ca(OH)2 from the root canal space. However, all studies have shown that these methods fail to completely remove Ca(OH)2 residues, especially from the apical third.
Complete removal of Ca(OH)2 from the middle and coronal thirds of the root canals was observed in all experimental groups. However, complete removal of Ca(OH)2 from the apical third was not achieved in any of the experimental groups; the percentage of residual Ca(OH)2 volume in the apical third ranged from 0.18% to 7.69%. This finding can be attributed to the normal anatomical morphology of the conical root canal system: the larger coronal diameter, as compared with the middle and apical diameters, facilitates irrigation and Ca(OH)2 removal from the coronal third [14,18]. Moreover, Ca(OH)2 tends to accumulate apically during the removal procedure, especially when apical anatomical variations are present [4]. In addition, placing the irrigation cannula and the ultrasonic tip 2 mm short of the working length leaves this area without the direct effect of ultrasonic activation and limits the irrigation effect [13,21].
We noticed that the combined use of the XP-endo Finisher with PUI resulted in almost complete removal of Ca(OH)2 from the root canals, with the highest efficacy of 99.8% ± 0.002%. These results can be explained by the higher velocity and volume of irrigant flow created by passive ultrasonic irrigation, along with the increased flexibility of the XP-endo Finisher and its ability to expand, which make it more efficient for the removal of Ca(OH)2 from root canals [31].
The results of our study are in agreement with those reported by Wiseman, in that ultrasonic irrigation was more effective than sonic irrigation after instrumentation for the removal of calcium hydroxide from the mesial root canals of mandibular molars, with reported residual volumes ranging from 14% to 28% [21]. However, the residual volumes of Ca(OH)2 reported in our study were lower, ranging from 0.18% to 7.69%, demonstrating a more efficient protocol for calcium hydroxide removal.
Another microtomographic study, conducted to assess the efficacy of PUI for the removal of calcium hydroxide medication with or without an additional file (F5), showed that PUI was more effective for the removal of Ca(OH)2 paste regardless of the use of the additional file [20]. The reported residual volumes of Ca(OH)2 were in the same range as in our study, from 2.9% to 8.8%, with the highest residual volume in the apical region for all techniques, similar to our findings.
Another study compared the effectiveness of five different instruments for the removal of Ca(OH) 2 combined with irrigant agitation from simulated internal root resorption cavities, under scanning electron microscopy analysis; none of the instruments used was able to completely remove the Ca(OH) 2 paste. However, the EDDY ® and XP-endo ® Finisher were more effective for the removal of Ca(OH) 2 residues than the Ultrasonic, EndoActivator ® , and XP-endo ® Shaper, which was in agreement with the results of our study [32].
In a similar study, in which only optical microscopy was used for Ca(OH)2 residue analysis, the XP-endo Finisher file and PUI removed significantly more Ca(OH)2 than conventional needle irrigation from artificial standardized grooves in the apical third of root canals, with no significant differences between them [33].
Our results are also consistent with those of Kfir et al., who showed that the XP-endo Finisher is a superior method for the removal of Ca(OH)2 from the apical third [34]. The XP, which was intended to be used as a finishing file, also failed to completely remove the Ca(OH)2 from the canal. This could be due to insufficient contact time between the file and the canal wall during the one-minute window indicated by the manufacturer's instructions. Further testing would be worthwhile, keeping the XP running longer or using it for multiple cycles, to check whether it could perform more effectively in Ca(OH)2 removal [33,34].
More research studies should be carried out to identify an irrigation protocol that could effectively remove Ca(OH) 2 residues from root canal spaces.
Conclusions
For the removal of Ca(OH)2, there were no significant differences between the different method combinations. However, removal of Ca(OH)2 from the coronal and middle thirds was more efficient than from the apical third in all experimental groups. The XP-endo Finisher showed efficacy comparable to the Master Apical File for the removal of Ca(OH)2, regardless of the irrigation system used.

Funding: This research received no external funding.
Acknowledgments:
The authors would like to acknowledge Mariya Jameel for doing statistical analysis for this study.
Conflicts of Interest:
The authors declare no conflict of interest.
Active learning in annotating micro-blogs dealing with e-reputation
Elections unleash strong political views on Twitter, but what do people really think about politics? Opinion and trend mining on micro-blogs dealing with politics has recently attracted researchers in several fields, including Information Retrieval and Machine Learning (ML). Since the performance of ML and Natural Language Processing (NLP) approaches is limited by the amount and quality of available data, one promising alternative for some tasks is the automatic propagation of expert annotations. This paper develops a so-called active learning process for automatically annotating French-language tweets that deal with the image (i.e., representation, web reputation) of politicians. Our main focus is on the methodology followed to build an original annotated dataset expressing opinion about two French politicians over time. We therefore review state-of-the-art NLP-based ML algorithms to automatically annotate tweets, using a manual initiation step as bootstrap. The paper focuses on key issues of active learning while building a large annotated dataset: noise introduced by human annotators, abundance of data, and the label distribution across data and entities. In turn, we show that Twitter characteristics such as the author's name or hashtags can serve as bearing points not only to improve automatic systems for Opinion Mining (OM) and Topic Classification but also to reduce noise in human annotations. However, a later thorough analysis shows that reducing noise might induce the loss of crucial information.
I INTRODUCTION
In the last decade, there has been a historic change in the way we express our opinions. In a world of online networked information, people are getting used to talking about anything and everything on a multitude of participative social media: forums, reviews, blogs, micro-blogs, etc. User-generated content in the form of reviews, ratings and any other form of opinion should be dealt with by OM, Pak and Paroubek (2010). Usually, it is a positive or negative judgment towards a product, formulated by an explicit vote score between one and five stars and/or implicitly by means of natural language (e.g., "I like the speed of this printer."), Hu and Liu (2004). Recently, using human-labeled datasets, the SemEval challenges included tasks on Aspect-Based Sentiment Analysis, Pontiki et al. (2015), using words, terms and sentences as they are naturally expressed in reviews and tweets.
Since information control has moved to users, OM on micro-blogs such as Twitter has also become very popular for predicting future trends; every act of a public entity is now scrutinized by a powerful global audience, Jansen et al. (2009). OM has therefore been used in broader and more difficult contexts such as reputation and politics, Wang et al. (2012). This led to an emerging research trend towards Online Reputation Monitoring, Burton and Soboleva (2011). However, analyzing the reputation of companies and individuals is a challenging task requiring a complex modeling of these entities (e.g., company, politician). Moreover, in the case of tweets, there are no explicit ratings to be directly used in opinion processing. This explains the need for new Reputation Monitoring tools and strategies, which also become an interesting way to process large amounts of opinions about various kinds of entities, Malaga (2001).
Currently, market research employing user surveys is typically performed, and traditional Reputation Analysis, Glance et al. (2005); Hoffman (2008), is a costly task when done manually. Processing large amounts of reputation data is a real challenge, not only to deal with specific requirements in Information Retrieval or OM, but also to understand important issues in political science, Gerlitz and Rieder (2013); Boyadjian (2014). Politics has already been addressed in previous works, but mostly in English, German or Spanish, Kato et al. (2008); O'Connor et al.; Hendricks and Schill (2014); Pla and Hurtado (2014), and more recently in Bulgarian, Smailović et al. (2015). As far as we know, nothing in French has been done from a machine learning perspective until now.
The work presented in this paper is oriented towards the extraction of opinions together with their target aspects on French political tweets focusing on the two main candidates in the last presidential election in France, in May 2012. This work involved academics as well as industrial partners, including end users (politics researchers) who have been involved in the whole process (from design to evaluation). In contrast to previous research, the scientific contribution is threefold.
- Firstly, we collaborate with experts in political science in order to design a full annotation framework and usage scenarios. This leads to an annotated seed dataset built with the involvement of specialists in political science. The annotations are aspect-oriented polarity for reputation: the opinion expressed on a specific aspect is linked to a dedicated attribute of the entity.
- Secondly, we develop dedicated automated classification techniques able to deal with short texts and aspect-oriented opinion statements related to French politics. Our approach relies on the automatic propagation of the reduced set of expert annotations we just described among larger collections of tweets.
- Thirdly, we intend to study the impact of automatic label proposal on the annotator assessments and investigate the classification performances. Our propagation approach deals with three key issues of active learning while building a large annotated data set:
  - identify and remove noise introduced by human annotators,
  - use data abundance,
  - harmonize the label distribution across data and entities, Xu et al. (2007).
The rest of the paper is organized as follows. Section II provides an overview of related works. In Section III we detail the annotation platform and give basic statistics of the first annotated set. We then study the main characteristics of crowd-sourced annotations about politics in Section IV. In Section V we propose a new pseudo-active learning algorithm for bias correction to improve the quality of annotations and the automatic annotation procedure to increase the final amount of labeled data. Section VI introduces use cases evaluation of our algorithm. Finally, we conclude and give some research directions.
Tweets mining
Previous works on reputation monitoring in tweet collections and streams have aimed to extract sets of messages requiring particular attention from a reputation manager, Amigó et al. (2013). For example, recent contributions to this issue on Twitter data have been made in the context of the 2013-14 editions of the RepLab and TASS challenges, where the lab organizers provide a framework to evaluate Online Reputation Management systems on Twitter.
Reputation polarity is substantially different from standard sentiment analysis, since the author, facts and opinions all have to be considered. The goal is to find what implications a piece of information has on the reputation of a given entity, regardless of whether the message contains an opinion or not (e.g., news just factually reporting a wrong governance decision). To illustrate, if ten humans disagree on the sentiment of a given text, the issue is then whether what is acceptable or relevant for one individual is the same for others. Multilingual aspects, cultural factors and context awareness are among the main challenges of sentiment natural language text classification when dealing with reputational micro-blogs.
Furthermore, topic detection is used to guess the topic of the text or the aspect linked to the opinion, with two possibilities. One assigns a category among a predefined set of classes, so as to report the reputation of the entity along different facets, axes or points of view of analysis. The other employs user networks and text similarities to build message groups, and considers the topic as the concept expressed by the key features (extracted terms) of each group. Nevertheless, in micro-blogging, due to the 140-character limit, messages are often allusive, and having few words makes both tasks harder.
Data building
Crowd-sourcing is an increasingly popular and collaborative approach for acquiring annotated research corpora, with the idea of collecting annotations from volunteer contributors; this is an advantage over expert-based annotation. Although designing such a dataset of training examples has proven quite an interesting challenge, Amigó et al. (2013); Villena Román et al. (2013), it is still expensive and relatively inaccurate. The background literature, Walter and Back (2013), focuses on central points which describe a current research issue. Indeed, although the use of the paid-for crowd-sourcing approach is intensifying, the reuse of annotation guidelines, task designs and user interfaces between projects is still problematic, since these are usually not available to the community despite their important role in result quality. Moreover, the cost of defining a single annotation task remains quite a substantial challenge for crowd-sourcing projects.
The literature is also full of innovative approaches to defining crowd-sourcing success, especially on how to evaluate the results and apply text mining approaches. Much recent research has focused on the reliability and applicability of crowd-sourced annotations for NLP, Wang et al. (2013). Previous works using so-called active learning, Settles (2012), have been done to automatically build high-quality annotated datasets for Twitter monitoring, Carrillo-de-Albornoz et al. (2014). Most research projects leave behind them a small annotated corpus and a large amount of unlabeled data. The small dataset can be used as bootstrapping for systems, Di Fabbrizio et al. (2004), but how can we make use of the remaining unlabeled set? The idea of utilizing unlabeled examples by adding labeled data has been well studied in the last decade, Blum and Mitchell (1998); McCallum and Nigam (1998).
In our case, as manual annotation is costly work, we use state-of-the-art approaches to build and improve a dataset. Text mining is then applied not only to handle the issue of semi-supervised annotation but also to fulfill an optimal semi-supervised selection of the messages we want to submit for manual annotation. To answer these key issues, we have designed a protocol which aims to automatically annotate tweets and extract semantic relationships between the expressed polarity and the aspect. In addition to a dataset, we also provide a full open-source annotation platform and its design. This design comprises different processes such as data selection, formal definition and instantiation of the reputation.
Annotation platform for E-Reputation Analysis of tweets in French
To analyze the public image of French politicians on Twitter, we designed an annotation platform where users are given tweets and are asked first to identify the opinion passage, then to assign it a polarity, and finally to identify its specific aspect target. Our Web architecture, shown in Figure 1, is based on the three-tier model, which allows a quick adaptation to any annotation need because mostly only the top-most level source code must be modified.
A system demo can be tested at http://dev.termwatch.es/~molina/sentaatool/info/systeme_description.html. Figure 2 shows the interface used during the annotation of tweets and its main components:
1. Tweet area: allows selection but not modification.
2. Polarity buttons: assign the polarity of a selected passage and make a target text bar appear when pressed.
3. Targets section: contains one editable target text bar for each selected passage, showing the color depending on the polarity.
4. Restart button: restores the interface to its initial conditions.
5. Send button: sends the annotations to the database and displays the next tweet to analyze.
6. Confidence radio buttons: allow annotators to indicate if the tweet is out of context; useful if the corpus was extracted automatically.
Annotation design
Designing the set of appropriate aspects is a key element of the whole annotation process. This step has been done under the supervision of experts in political science. The following 9 aspects have finally been selected to describe French politicians: attribute (poll results and comments), assessment, skills, ethic, injunction (call for voting), communication, person, political line and project, plus the entity itself and the case of no aspect belonging to this list. The aspects are moreover decomposed into sub-aspects, such as polls and support in the case of attribute, which signify the entity's features expressed in polls and supports. In all, 23 sub-aspects have been created for this fine-grained description and reporting. The polarity levels vary from very positive (positive) to very negative (negative) opinions, with a neutral opinion (used for reports of facts). We also considered an ambiguous opinion for undecidable cases.
First annotated dataset, descriptive Statistics
Here we provide some statistics about the first dataset (more detailed statistics are available in Velcin et al. (2014)). This dataset (available at http://mediamining.univ-lyon2.fr/velcin/imagiweb/dataset.html) consists of 11,527 manual annotations expressing opinions about two French politicians over time: 5,286 annotations for François Hollande (FH) and 6,241 for Nicolas Sarkozy (NS).
The data has been annotated by 20 academics from various fields; Table 1 provides some additional details. It is interesting to notice that NLP researchers and people from industry focus on terms or N-grams with shorter annotations (in terms of selected passages), probably following, respectively, algorithm schemes and keyword extraction for dashboards, while engineers and politics researchers tend to select larger parts of text. To handle the subjectivity of annotators, we allowed a tweet to be annotated at most three times by different annotators. It also happens that the same content (in the case of retweets) has been annotated several times by the same annotator, which allows us to evaluate the annotator's consistency (details are given below). 7,283 unique tweets (6,369 unique contents) are annotated, of which 48% are annotated only once, 46% twice, and 6% three times or more. But is this enough? How informative are these examples really?
Opinions
For a reasonable analysis, as observed in the literature for comparable annotation tasks, Carrillo-de-Albornoz et al., opinions on the whole dataset are biased to the negative, with a slight difference between the two entities; for example, 47% of the opinions about NS are negative and 20% positive, while around 53% are negative for FH and 14% positive. The neutral class distribution is equivalent for both candidates, at 32%. However, in the period just before the election (mid-May 2012), the negativity about FH decreases to 41% while that of NS increases to 52%. After the election (June to December 2012), the negativity about FH increases dramatically to 72% as the positivity collapses to 5%. Per-month distributions are summarized in Table 2. This justifies the necessity of temporal analysis related to the image, with well-split time periods.
Aspects
The 9 aspects are globally well distributed. As a global class, the entity aspect dominates with 23%, followed by political line and ethic with 13% and 11%, respectively. The evolution of the frequency of each aspect over time is interesting. Some aspects are much more time-dependent, such as injunction and communication, obtaining very high frequencies just before the election and disappearing after it. Both candidates obtained positive opinions for the injunction aspect, because it is dedicated to clear encouragement (or, rarely, warning) about voting for an entity. On the contrary, for communication, FH obtained a better score compared with his competitor.
Annotator bias and disagreements
The manual annotations may reflect the subjectivity of each annotator because of the granularity of the labels. Despite the task's difficulty, the annotators' low-confidence indicator was only used for 10% of the annotations and was related to the "ambiguous" polarity level. As it was not properly used, we reconsidered the quality of the mainly non-expert annotations on different aspects. While for a machine a word sequence will match a unique model or a weighted number of models, the language acquisition skills of humans result in a multidimensional experience; annotating thus depends on the annotator's language acquisition skills. Analyzing the annotation disagreement among annotators for each tweet provides us with a better understanding of the opinion properties. Considering this, we analyze the problem at the content level by taking a closer look at the annotations from the text level. We can observe more severe disagreement for a unique content, since annotators may have different backgrounds and points of view on the same document. We assume here that we do not need to explore the idea of recalibrating annotator judgments to more closely match expert behavior, or of excluding some annotators from the process. While polarity disagreement is less than 20%, disagreements on aspects (including sub-aspects) exceed 60% with a basic analysis. Things get worse when considering the cascade: disagreements increase dramatically on the polarity-aspect pair. This is explained, on one hand, by natural language variability and the background knowledge of the annotator, which may make him interpret a hidden meaning of the message while others did not notice the irony; on the other hand, it comes from concept variability. This can be illustrated with the 'Sarkozy-Kadhafi' case, which has been correctly tagged as ethic by both annotators, but the chosen sub-aspect differs (ethic:honesty vs. ethic:case). A typical example for polarity, despite the guidelines, is a tweet that describes the result of a public poll: if the poll is in favor of a candidate, some annotators give a positive (resp. negative) polarity, while others give a neutral polarity since they consider this information as a fact.
Things become interesting when looking at how annotators labeled a repeated content. For each content annotated more than five times, we observe that there is on average one annotation different from the others; the annotator's consistency is thus estimated to be around 80%. This illustrates the fact that different aspects can be selected depending on the individual point of view, but it also offers us the possibility to see the trends of an annotator. As the annotation stage lasted several weeks, it is subject to variation.
IV INTELLIGENT ANNOTATION FRAMEWORK
We described within this study how we exploit text mining techniques to analyze a real-world data sample from Twitter. As mentioned before, one difficulty is having enough data and information to build models for machine-learning approaches. Despite recent advances and good practical results, improvements remain to be achieved: "How much is enough?" is still an open question. Our main objective is to bootstrap machine learning techniques using limited annotated data to detect how a given entity is perceived. Then, following the spirit of Active Learning to enhance data informativeness over time, we also experiment with approaches that adapt recommendation-system ideas to rebuild models on the fly.
An important fact is that this public perception is not static and may change over time, which requires adapting the models. However, experts in political science need a huge amount of tweets to release a deep, complete and reliable analysis over time; committing to such an annotation campaign is therefore not possible in financial terms. In our case, it has been decided that NLP and political researchers would work jointly in a pseudo-active learning process. To achieve this objective, we set up a semi-automated step which aims at evaluating the quality of text-mining technique submissions. These automatic suggestions are then compared to real-world results, that is to say expert committee decisions, to validate our algorithms. These choices have been submitted to validation through a couple of experiments (see Section 6.2).
Data Diversity
Our objective is then to train machine learning techniques using human behaviors in order to propagate their knowledge and automatically label forthcoming data. Something important before handling a large unannotated data set is to be sure of the training set's reliability. As reported in the literature, Artstein and Poesio (2008), and as we have just seen, human annotations of language features and concepts are prone to human errors. These errors need to be considered in the model learning process, since it is well known that the quality of manual annotation is critical when it comes to training automatic methods. We assume that the objective is not to build the most reliable dataset in the meaning of a particular aspect, but to build a consistent dataset regarding the upcoming analysis we want (Algorithm 1).

Algorithm 1: Annotation process.
    Data: large amount of unlabeled tweets
    Result: large amount of labeled tweets
    A small amount of tweets is manually labeled;
    while not enough labeled data or insufficient classifier performance do
        build models with the labeled data;
        classify a subset of unlabeled data;
        select and send a sample of automatic classification outputs for manual confirmation;
        if automatic classification is sufficient then
            annotate the whole dataset;
        else
            go back to the beginning with more labeled data for learning;
        end
    end
For the training step, instead of misleading the automatic algorithms, we can consider that this situation reflects the diversity of interpretation. We could consider that a message conveys two messages (e.g., two topics, each with a different opinion) through multi-labeling, Tsoumakas and Katakis (2006), but we made the questionable assumption that one tweet = one opinion and one topic, because when it comes to the evaluation set, it is critical to agree on only one reference. When working with learning and data mining on text contents, we have to keep a high variability in the data distribution (in terms of contents and labels) to prevent falling into a biased distribution that would lead us to over-training and then to overvaluing our systems. It is also difficult to distinguish the really informative examples from the non-informative ones, and the fact that it will only be possible to annotate content similar to labeled ones is an important drawback. Regarding the cost of such an annotation stage, we need to maximize the effectiveness of each annotation by having certified labels on as large a vocabulary as possible. This step can be seen as text processing, since each content is cleaned in order to detect duplicate messages and ignore them in the further annotation steps. For a more focused work on aspects (e.g., statistics about voting), we keep track of all duplicates in order to propagate the annotation, as sketched below.
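A minimal sketch of this duplicate-tracking step; the normalization choices (lowercasing, stripping URLs, mentions and the RT marker) are illustrative assumptions, not a prescription from the paper:

```python
import re

def clean(text):
    """Normalize a tweet so that near-identical copies (e.g. retweets) collide."""
    text = text.lower()
    text = re.sub(r"https?://\S+|@\w+|^rt\s+", "", text)  # strip URLs, mentions, RT marker
    return re.sub(r"\s+", " ", text).strip()

def dedupe(tweets):
    """Keep one representative per cleaned content; remember all duplicates
    so that a manual label can later be propagated to every copy."""
    representatives, duplicates = {}, {}
    for tweet in tweets:
        key = clean(tweet)
        duplicates.setdefault(key, []).append(tweet)
        representatives.setdefault(key, tweet)
    return list(representatives.values()), duplicates
```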
Annotators-based decisions
It is still possible to estimate the task difficulty with inter-annotator agreement measures such as Kappa, Cohen (1960); Cohn et al. (1994); Koehn and Knight (2003); Sabou et al. (2014), but once disagreements have been identified, what can be done? In our case, each tweet has been annotated from one to three times, and as we noted severe disagreements at the text level, we have chosen a majority-based rule system. For each annotated content with divergent annotations, we selected, whenever possible, the human annotation that has a relative majority according to:

Label frequency > 1 / Number of labels    (1)
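A minimal sketch of this relative-majority rule (Eq. 1):

```python
from collections import Counter

def majority_label(annotations):
    """annotations: labels given to one content, e.g. ["NEG", "NEG", "NEU"].
    Returns the label with a relative majority (Eq. 1), or None if there is none."""
    counts = Counter(annotations)
    label, freq = counts.most_common(1)[0]
    # Relative majority: the label's share must exceed 1 / number of distinct labels
    # (a single unanimous label is kept as-is).
    if len(counts) == 1 or freq / len(annotations) > 1.0 / len(counts):
        return label
    return None  # no relative majority -> fall back to profile-based decisions

print(majority_label(["NEG", "NEG", "NEU"]))  # -> NEG (share 2/3 > 1/2)
```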
Profiles-based decisions
For a given tweet, when none of the labels has the majority, we have chosen to work at the user level (as shown in Figure 3). An important aspect of social networks is the possibility for users to answer each other, thus building their own network. We can consider as an extra feature that a user belongs to a group or has the same opinion (or aspect) as the person to whom he or she responds or whom he or she retweets. Moreover, considering the political dimension of the dataset, we assume that over a short time period, gossipers expressing their revulsion about one candidate cannot find something positive in only one message. We then need to pay attention to these annotations. For instance, we can consider users having more than 100 negative messages related to a given entity: we can hardly imagine their next tweet being positive, and even if it has been annotated as such, it may be withdrawn or submitted to a new validation. This process can be seen as smoothing the user's point of view, even if we know that this assumption is not always verified. Some NLP analysis has also been considered with a few nicknames such as nainportekoi (dwarf + anything, with bashing) or hollandouillette (a contraction between Hollande and sausage, which also means stupid).
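The profile-based check described above (questioning a positive label from a heavily negative user) could be sketched as follows; the threshold of 100 negative messages follows the heuristic in the text:

```python
def needs_revalidation(label, user_negative_count, threshold=100):
    """Flag a positive annotation from a user with a strongly negative history
    toward the entity, so that it can be withdrawn or re-validated."""
    return label == "POSITIVE" and user_negative_count > threshold

print(needs_revalidation("POSITIVE", 150))  # -> True: send back for validation
```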
Although this method might be a first step towards specific processing for polarity, we are not able to apply it to the aspect classification task, since tweets' authors do not only talk about one specific aspect. A similar method can then be considered for hashtags, since it has been proven that hashtags often carry specific topic information, Brun and Roux (2014).
Then, in addition to hashtags, we considered statistical NLP, Sparck Jones (1972); Salton and Buckley (1988), with N-grams to compose the tweet's discriminant bag-of-words (BOW) representation using normalized inverse term frequencies (tf-idf), Robertson (2004), and the Gini criterion, Cossu et al. (2015); Torres-Moreno et al. (2012). We consider that a tweet requires additional attention when the most discriminant terms it contains do not correspond to its label. For instance, we used this statistical information to correct annotations containing terms such as "au-secours sarko revient" (Help, Sarko is coming back) or "sarkocasuffit" (Sarko, that is enough), directly and negatively related to NS. Rather than considering French domain-specific lexicons such as those mentioned by Smeaton (1999); Pla and Hurtado (2014) for English and Spanish, this approach is more flexible and requires fewer resources.
V SETTING-UP MACHINE LEARNING FRAMEWORK : ISSUES AND CHALLENGES
Machine Learning Committee-based correction
Unlike Dagan and Engelson (1995), we consider a different committee-based validation composed of several classifiers, described below, under very light supervision. Domain non-specialists check different random samples of system outputs to validate the process. Some studies worked well in this direction, such as Liere and Tadepalli (1997), where the authors obtained 2- to 30-fold reductions in the amount of human annotation needed for text categorization.
After the rule-based corrections, for all remaining cases we resort to several classifiers used to "self-annotate" the training corpus. A wide number of methods have already been explored to correct the bias of annotators. Having multiple annotators is a case that we allow; however, an important fact here is that we do not consider the annotations as a gold-standard reference, and we can question them, especially if none of them matches the label the systems agreed on. We assume that classifier outputs can be considered as additional referees for a committee-based validation, at the same level as human annotators (as described in Figure 4), in different ways such as a leave-one-out process. In self-annotating the corpus, we observed that, on the original set, classifiers are unable to find the correct label for a part of the set; for instance, with the cosine distance, Accuracy and µ F-Score were respectively .84 and .87 for FH, and .84 and .83 for NS. From these classification errors we distinguished several cases:
1. all systems agreed on a label different from the human annotations;
2. a majority agreed on a label different from the human annotations;
3. no agreement.
Based on the majority rule expressed above, we now consider, for the first two cases (around 60%), the prediction of the classifiers as the new "reference" annotation for the tweet. In the last case, tweets are submitted again for human verification. It is interesting to notice that, except for some ironic tweets, after the correction the classifiers are able to find the correct label for a very high majority of tweets, obtaining more than .98 on each measure.
Classifiers
For the purpose of this experiment, and following the background literature, Cossu et al. (2015), we investigated statistical NLP, Sparck Jones (1972); Salton and Buckley (1988). N-grams compose the tweet's discriminant bag-of-words (BOW) representation, using normalized term frequencies (tf), inverse term frequencies (tf-idf) and the Gini criterion, Cossu et al. (2015); Torres-Moreno et al. (2012). The statistical BOW approach is used to compute the similarity of a given tweet to each class BOW and to rank tweets according to the Jaccard index, the cosine distance and the scores provided by several classifiers (a Poisson-based classifier, a Hidden Markov Model), Cossu et al. (2013). We also propose a kNN-based classification method that uses the same discriminant factor as the one used in the BOW representation: we match each document d from the test collections to the K most similar documents in the training set, using the Jaccard index and the cosine distance to measure document similarity. The K most similar tweets vote for their class according to their similarity with the tested tweet.
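A minimal sketch of the cosine-based kNN voting, using scikit-learn's tf-idf vectorizer as a stand-in for the discriminant BOW weighting described above; the training tweets and labels are hypothetical:

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical labeled tweets (the real seed set comes from the platform).
train_texts = ["bravo pour ce projet", "quel fiasco ce quinquennat", "meeting ce soir"]
train_labels = ["POSITIVE", "NEGATIVE", "NEUTRAL"]

vectorizer = TfidfVectorizer(ngram_range=(1, 2))
X_train = vectorizer.fit_transform(train_texts)

def knn_vote(text, k=2):
    """The K most similar training tweets vote for their class,
    each vote weighted by its cosine similarity to the tested tweet."""
    sims = cosine_similarity(vectorizer.transform([text]), X_train).ravel()
    votes = {}
    for i in np.argsort(sims)[::-1][:k]:
        votes[train_labels[i]] = votes.get(train_labels[i], 0.0) + sims[i]
    return max(votes, key=votes.get)

print(knn_vote("un projet bravo"))  # -> POSITIVE
```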
Rather than selecting the best hypothesis, we considered all output scores provided by the classifiers for each class. All scores were then normalized between 0 and 1 so that they could be merged, considering linear combination, weighted linear combination and multi-criterion optimization methods, Lamontagne and Abi-Zeid (2006); Batista and Ratte (2012). The combination procedure follows two rules:
1. maximize the confidence of the automatic annotation by using combined classifier scores;
2. follow the label distribution observed in the training set.
We consider a specific combination for each entity and sub-task (polarity or aspect).
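A minimal sketch of the normalization and weighted linear combination; the weights would be tuned per entity and sub-task, and the scores below are hypothetical:

```python
def fuse_scores(systems, weights=None):
    """systems: one {label: raw_score} dict per classifier.
    Min-max normalize each system's scores to [0, 1], then combine linearly."""
    labels = sorted({label for scores in systems for label in scores})
    weights = weights or [1.0] * len(systems)
    fused = dict.fromkeys(labels, 0.0)
    for weight, scores in zip(weights, systems):
        values = [scores.get(label, 0.0) for label in labels]
        lo, hi = min(values), max(values)
        for label, value in zip(labels, values):
            norm = (value - lo) / (hi - lo) if hi > lo else 0.0
            fused[label] += weight * norm
    return max(fused, key=fused.get)

# Two hypothetical classifiers scoring a tweet; the second is trusted more.
print(fuse_scores([{"NEG": 3.0, "POS": 1.0}, {"NEG": 0.2, "POS": 0.9}], [0.4, 0.6]))
# -> POS (0.6 after fusion, vs. 0.4 for NEG)
```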
Metrics
The absolute values from the confusion matrix are used to calculate usual text-mining metrics such as Accuracy, which, although easy to interpret, is easily cheated on unbalanced test sets. For instance, a non-informative method returning all tweets in the same class (all "NEGATIVE" in our case) may have high accuracy. We also compute an average F-Score, based on the Precision and Recall of each class, typical of categorization tasks, calculated as follows:

F-Score = (1 / Number of classes) × Σ_c [ 2 × Precision_c × Recall_c / (Precision_c + Recall_c) ]    (2)
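A minimal sketch computing Accuracy and the averaged F-Score of Eq. (2) from a confusion matrix; the 3-class matrix below is a hypothetical illustration:

```python
import numpy as np

def accuracy_and_macro_f(conf):
    """conf[i, j]: number of tweets of true class i predicted as class j."""
    conf = np.asarray(conf, dtype=float)
    accuracy = np.trace(conf) / conf.sum()
    f_per_class = []
    for c in range(conf.shape[0]):
        tp = conf[c, c]
        precision = tp / conf[:, c].sum() if conf[:, c].sum() else 0.0
        recall = tp / conf[c, :].sum() if conf[c, :].sum() else 0.0
        denom = precision + recall
        f_per_class.append(2 * precision * recall / denom if denom else 0.0)
    return accuracy, sum(f_per_class) / len(f_per_class)  # Eq. (2)

# Hypothetical 3-class (NEG/NEU/POS) confusion matrix.
print(accuracy_and_macro_f([[50, 5, 5], [10, 20, 5], [5, 5, 15]]))
```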
Datasets
We divided the corpus into two parts, chronologically sorted: training (Tr) and development (D). D was built with the last 3 months (approx. 800 unique contents associated with each entity).
This initial subset has been extended with more unlabeled tweets extracted from Jan. 2012 to Dec. 2014:
- a first set concerning FH, containing 240k tweets (around 6,700 tweets per month);
- a second set concerning NS, containing 81k tweets (around 2,500 tweets per month).
This new data is used for the validation process, and the experts need it for drawing conclusions at large scale by using the prototype. Around 3,000 tweets were randomly selected each month over 21 months from January 2012 to December 2013, which led to 51,020 unique contents for FH (and 16,050 for NS) to provide background context for the systems. All tweets from 2014 form our validation set, which is reviewed by experts (see below).
Integrating users information
For the users concerned by profile-based annotation corrections, we considered a smoothing in the machine learning approaches (as summarized in Figure 5). We first added a class tag to the bag-of-words of future tested tweets, representing the main polarity the user was associated with by the classifiers over the BOW of their tweets. Nevertheless, this tag implies that the user will not change his mind. To prevent this bias, and to accept that people can change their minds without breaking the BOW robustness, we then added the user identifier with its associated class probabilities, Li et al. (2011). This way, by looking at the past of this user, we penalize the contribution of the non-majority classes without closing the door to a further change in the user's mind. Since we are in an active process, as time goes on it will automatically return to the premise that one user has only one opinion.

Table 3 summarizes the corrections made. Although NS only has 17% more raw annotations than FH, it concentrates many more corrections regarding the opinions, while conversely the trend is reversed with respect to the aspects. We can mainly explain this with the label distribution: since the positive classes barely exist for FH, this lowers the task's complexity. ML and content-based approaches did not help much to improve the annotation-correction process for the opinion detection issue, while profile statistics appeared to play a key role. In addition, it is interesting to notice that for NS, even after a committee statement, it was still impossible to agree on a label for some messages, which were finally rejected. Finally, in many cases regarding aspects, neither the rules, the ML approach nor the committee were able to agree, and an additional referee was asked to provide a supplementary annotation. Table 4 summarizes the corrections with regard to the annotator groups. Several points are worth noting. We can observe a major difference between NS and FH: in the first case (NS), there are more opinion corrections (since it is a 3-class problem, while FH holds two polarity levels, having only poll and injunction as positive examples), whereas in the second case, FH holds many more mistakes on aspects, mainly concentrated between assessment, political line and project, but also between skills and communication. For NS, aspect mistakes appear to be limited to ethic and person. Annotations on tweets concerning NS present more stability between aspect and opinion, with similar error rates.
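The user-level smoothing described at the beginning of this subsection can be sketched as follows: class probabilities estimated from a user's past tweets are attached to the user identifier, penalizing non-majority classes without ruling out a change of mind (a minimal, hypothetical sketch):

```python
from collections import Counter, defaultdict

user_history = defaultdict(Counter)  # user id -> counts of past predicted labels

def observe(user, label):
    user_history[user][label] += 1

def user_class_probabilities(user):
    """Per-user class probabilities, usable as extra features alongside the BOW."""
    history = user_history[user]
    total = sum(history.values())
    return {label: count / total for label, count in history.items()} if total else {}

observe("@foo", "NEGATIVE"); observe("@foo", "NEGATIVE"); observe("@foo", "NEUTRAL")
print(user_class_probabilities("@foo"))  # -> {'NEGATIVE': ~0.67, 'NEUTRAL': ~0.33}
```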
Wrap-up
Concerning the groups of annotators, there are several tiers. For aspects with FH, engineers are leading, while politics and IR researchers missed something; conversely, with regard to opinion, IR researchers made fewer mistakes, doing even better than politics researchers. The situation is quite different with NS: error rates for opinions are quite similar between groups, with a lead for engineers, while for aspects the IR researchers group still obtains the lowest results and politics researchers fall just short of the lead.
After all the changes introduced by the process described above, the polarity distribution of the original set can be altered. In the period after the election (June to December 2012), the negativity about FH increases dramatically to 79% near the end of the year, while only 5% positivity is left. We then study the impact of the harmonization process described above on the results of the classifiers on the last tweets of the dataset, considered here as a test set (as if we were simulating incoming data or temporal expansion). In other words, we consider the improvement in the polarity-assignment output to evaluate the gain offered by the harmonization process. Cosine performances then increased for both FH and NS, from an F-Score and Accuracy of respectively .37 and .60 to .44 and .69, and from .40 and .46. We then automatically annotated unlabeled tweets, considered these newly annotated data as new training material, and retried polarity assignment on our small test set. Regardless of its size, this training set may not be completely reliable; performances for FH respectively reached an F-Score and Accuracy of .46 and .66. Moreover, the observed improvements for positive and neutral tweets prove that our propagation does contain relevant information that improves the polarity classification and that was missing in the original set.
Expansion, temporal propagation
Now that the training set has been corrected, we can use our classifiers to annotate a large set of unlabeled messages (as summarized in Figure 6). The unlabeled examples can be used with unsupervised or supervised learning methods to improve the classification performance and the correction of the labeled examples, by applying the above rules according to a principle of homogeneity at the content and user levels.
Additionally, we considered 'outliers', which are examples that differ from the rest of the data, in our case in terms of agreement or content. We first considered excluded-outliers: tweets on which neither systems nor annotators agreed on the same label. These tweets are ignored because they cannot be reliably understood. We also excluded unique contents with no common words with other contents or with the labeled set. A second interpretation, reliable-outliers, consists in considering tweets for which every system agreed on the same label and adding them to the labeled set before iterating, Spina et al. (2015). These 'reliable-outliers' were verified by a human annotator, who agreed with the automatically chosen labels. After this step, as we consider them reliable enough to be used as models, these tweets are no longer candidates for a next manual annotation step (as shown in Figure 7).
Evaluation data
We consider as test data a selection of 5,200 tweets from 2013 (430 each month) for NS and 3,600 tweets (March and April 2013) for FH. These selected tweets were automatically annotated with the workflow presented above and were also manually reviewed by an expert in political science following the annotation guidelines (as summarized in Figure 8). Note that, for the entity NS, we divided the set in two parts: a first one where the automatic label was completely hidden from the annotator (similarly to raw tweet annotation), and a second one where the automatic label was shown to the annotator (a validation/correction stage if it was wrong). The test set of entity FH was validated following this second scheme. Below, we compare the expert annotations with the automatically produced hypotheses. The goal of this setup is twofold: first, we intend to evaluate the performance of the machine learning approach in an operating scenario; secondly, we want to estimate how much an annotator can be influenced (or not) by automatic suggestions during the validation step.
Results
As preliminary experiments, we first report in Table 5 the system performances for the classification tasks (polarity and aspect, respectively) on the two studied entities on our test sets. To keep things simple, we only report the performance of a cosine-based approach and of the combination of all machine learning techniques used during the annotation process. Besides the significant improvement in classification evaluation, most importantly the combination of classifiers also appears to be robust enough to handle the large variety of hypotheses.
Then, regarding whether the annotator was able to see the automatic label (or not, for half of the NS tweets) while annotating the tweets, the differences are not significant for the polarity classification (Accuracy between .62 and .63 for the combination of classifiers). As the task of annotating a tweet according to only one aspect is difficult, we can consider that the annotator validated the proposed aspect by convenience, because it was not so wrong even if there could have been another possible choice. The F-Score for the combination was .25 when the annotator was not able to see the automatic label, and .33 in the second case; given the number of aspects, the difference is quite significant. In additional experiments, we tried to switch and combine entity models, that is to say, predict NS polarity using the NS, FH or FH+NS training set. The aim of these experiments is to test how well the method can perform without proper training material or with opposite sentiment. For the polarity classification, the results were obviously lower with combined and switched models than with the entity-specific models. Classifying FH tweets with FH models leads to an F-Score of .52 and an Accuracy of around .66, while FH+NS models stay a bit lower with respectively .48 and .64; considering only NS models, performances collapse to .41 for both metrics. Indeed, in terms of, for example, the political balance sheet and project, what can be seen as a positive statement about one candidate may be rather negative for the opposite side, whereas it is expressed with the same words. Conversely, we have considered that aspects do not depend on a specific entity but have a consistent cross-entity behavior. Consequently, we considered both entities altogether to address the aspect-oriented classification issue. Combined models appeared to be a semantic enrichment and show a slight improvement in classification performance for both entities. This led us to consider and report only combined-model performances.
VII CONCLUSION AND PERSPECTIVES
Sentiment detection is a difficult task whose difficulty depends on the domain it is applied to. The task is even more difficult when it comes to combining it with specific aspects. In this paper, we presented an approach to annotate a French political opinion dataset, from annotation design to machine learning experiments.
First, we have shown that we can improve our dataset and obtain good classification performances even though the statistical methods work without linguistic or domain-specific processing, which makes our approach easily applicable to other languages and datasets. Instead of addressing a more complex modeling, the experiments reported in this paper have shown that considering additional Twitter features combined with light knowledge can provide robust support to improve both annotation quality and classification performance.
We employed methods known to remain simple but also reported to obtain results as good as the ones proposed so far by state-of-the-art approaches on comparable issues, Cossu et al. (2015). We demonstrated the efficiency of our approach by comparing the automatic aspect-oriented opinion annotation of tweets to labels proposed by experts in political science.
As the need for in-domain annotated data persists, we hope that the methods and tools presented here will help researchers in their quest for bigger and better datasets. Solving this problem could help prevent annotator bias and errors and minimize human oversight, by implementing more sophisticated computer-based annotation workflows, coupled with built-in control mechanisms and low supervision. Such an infrastructure needs to be reusable. Further on, we would like to extend our approach to simultaneously predicting the polarity and the aspect it is associated with.
Land Use Changes and Their Effects on Soil Physical and Chemical Properties in Abol Woreda, Gambella Regional State
In Abol woreda, continuous cultivation, intensive grazing and investment are the major practices; without intensive care, the woreda's soil declines in its chemical, biological and physical properties. The present research is therefore intended to determine land use changes and their consequences on soil physico-chemical properties in Abol woreda, Gambella Regional State. A general visual field observation and survey were carried out, and the research site was partitioned into three major land use types (cropped, grazing and forest). Accordingly, a total of three major land uses (cultivated, grazing and forest land use types) were identified in the field based on soil texture and land use systems. From each major site, composite soil samples were taken from thirteen soil sub-samples (spots) at a depth of 0-20 cm using a soil auger. The physico-chemical properties of the soils were analyzed following standardized soil laboratory procedures on the soil samples augered from the three major land use systems. The results obtained from this study show that the soils did not differ in textural class among the three land use systems; all showed a clay loam texture. The soil bulk density was highest in cropped land, followed by grazing land, and lowest in the forest (reserved) land use system, with values of 1.37, 1.24 and 1.13 g/cm3, respectively. The total porosity of the soils averaged 48.3% under cultivated land, 53.2% under grazing land and 57.3% under forest land. All soil chemical properties of the present research site were numerically influenced by the land use system and soil textural class. For example, the highest values of the basic cation Ca (10.02 cmol(+)/kg), exchangeable Mg (5.46 cmol(+)/kg), exchangeable K (5.45 cmol(+)/kg) and CEC (28.17 cmol(+)/kg) were observed under the forest land, as compared to the lowest values (5.64, 2.05 and 1.97 cmol(+)/kg, respectively) in the cultivated land. The present study showed that soil fertility status decreases as the land use type goes from forest to pasture and farming lands. Hence, it is possible to infer that continuous and intensive cultivation strongly depletes essential plant nutrients, which urges action to improve the fertility status of the agricultural soils of the Abol district.
INTRODUCTION
The rapid increase of the world population demands greater production of food, fodder, fiber, and fuel from the existing land. To meet this demand, integrated land management systems are being practiced in Ethiopia to protect the land from the intensive cultivation and free grazing that cause overgrazing and degradation.
In Ethiopia, population growth and environmental factors lead to the conversion of natural forestland and grassland into cultivated farmland (Tesfahunegn, 2016). Land use changes are regarded as important components and a primary cause of global environmental change (Turner et al., 1995; Li, 1996).
As a result, land use change has become a major focus of global change research, as its impacts on global biogeochemical cycles and on climatic and hydrologic processes are profound.
These changes are driven by the interaction in space and time between biophysical and human dimensions (Turner, 1995). In most developing countries, land use research has evolved out of efforts to identify, predict, and manage ecologically damaging land use changes such as deforestation, as the implications for human livelihood systems are immense.
Hence, adequate knowledge of the soils of such areas, especially in Abol woreda, is mandatory in order to conserve and use the resources according to their potentials and limitations, and thereby maximize crop production and conserve the soils for future use. The present study examined land use changes and their consequences for the chemical, physical, and biological properties of soils in the Abol area of the Gambella region. It will also provide useful experience for future soil-related research and for studies of the effects of agricultural farming on the fertility status of the area. Therefore, this study was conducted with the general objective of determining land use types and their effects on soil physico-chemical properties in Abol woreda, Gambella Regional State.
Specific objectives
- To investigate the effects of agricultural cultivation on selected physico-chemical properties of the soil in the Abol area
- To evaluate the impacts of land use change on the chemical and physical properties of the soil
MATERIALS AND METHODS
2.1. Description of the Study Area
The study was conducted in Abol woreda, Gambella National Regional State, western Ethiopia, located about 820 km west of Addis Ababa. The average altitude of the research site is about 520 meters above sea level.
Figure 1. Map of the study area
Soil Sampling and Preparation
Representative composite soil samples were augered from the three major land use systems (agricultural, pasture, and forest). Land use systems were chosen based on the current land management activities practiced in the study area. Firstly, a general visual field observation was carried out to obtain a general view of the physical differences within the researched area. Based on past land management history, the cropland selected for this study had been under cultivation for a long time, whereas the adjacent grazing and forest land use types had previously been used as grazing and forest land.
From the forest and grazing lands, natural forest and communal grazing land were used, while from the agricultural lands, cereal cropland under rain-fed conditions was selected. Representative soil sampling sites were chosen randomly from each land use type according to slope position. A total of three major land use types were considered, each replicated four times, giving a total of twelve composite soil samples, each augered from thirteen sub-samples at a depth of 0-20 cm. During soil sampling, furrows, dead plants, and old manure were carefully excluded to avoid variation among the spot areas. The representative composite soil samples taken from each site with four replications were air dried, ground, and sieved to pass through a 0.5 mm sieve for total nitrogen and organic carbon and a 2 mm sieve for the analysis of the other selected soil physico-chemical properties.
Soil Physicochemical Analysis
The soil samples collected from each land use system were air dried, crushed, and passed through a 2 mm sieve for the determination of soil physical and chemical properties, except for total N and OC, for which a 0.5 mm sieve was used. Total porosity and soil bulk density were obtained from undisturbed core samples. Particle-size analysis was done using the Bouyoucos hydrometer method. Total porosity was estimated from the bulk and particle densities as: Total pore space (%) = (1 - BD/PD) x 100, where BD = bulk density and PD = particle density. Laboratory analyses were carried out for the chemical properties of the soil, including soil pH, total nitrogen (TN), soil organic carbon (SOC), available phosphorus, exchangeable bases (Ca, Mg, Na, and K), and electrical conductivity (EC), on samples collected from the field. The pH of the soil was measured potentiometrically in a suspension of 1:2.5 soil to liquid ratio of distilled water and 1M KCl solution with a combined glass electrode pH meter (Thomas, 1996). Organic carbon (OC) content was determined following the wet digestion method described by Walkley and Black (1934), and percent organic matter (OM) was calculated by multiplying OC by 1.724. Total nitrogen (N) content in the soil samples was determined titrimetrically following the Kjeldahl procedure described by Jackson (1958). Available P was determined using the Olsen method (Olsen et al., 1954) as outlined by Sahlemedhin and Taye (2000); the absorbance of the P extracted by the Olsen method was measured using a spectrophotometer after color development. Exchangeable bases (Ca, Mg, Na, and K) were extracted with 1N NH4OAc at pH 7 (Van Reeuwijk, 1992). Exchangeable Ca2+ and Mg2+ were determined from the extract with EDTA, whereas K+ and Na+ were determined from the same extracts with a flame photometer.
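For clarity, the porosity calculation can be reproduced in a few lines. The sketch below assumes the standard mineral-soil particle density of 2.65 g/cm3, a value the text does not state explicitly; with that assumption, the reported bulk densities reproduce the porosities given later for the three land use systems.

```python
# Total porosity from bulk density: porosity (%) = (1 - BD/PD) * 100.
# PD = 2.65 g/cm^3 is the standard mineral-soil particle density (an
# assumption here; the text does not state the value used).
PARTICLE_DENSITY = 2.65  # g/cm^3

def total_porosity(bulk_density, particle_density=PARTICLE_DENSITY):
    """Percent total pore space from bulk and particle densities."""
    return (1.0 - bulk_density / particle_density) * 100.0

# Bulk densities reported for the three land use systems (g/cm^3).
for land_use, bd in [("forest", 1.13), ("grazing", 1.24), ("cultivated", 1.37)]:
    print(f"{land_use}: {total_porosity(bd):.1f}%")
# Prints ~57.4%, 53.2% and 48.3%, matching the porosities reported below.
```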
Extractable micronutrients (Fe, Mn, Zn, and Cu) were extracted with diethylene tri-aminepentaacetic acid (DTPA) and all were quantified by atomic absorption spectrophotometer at their respective wavelengths (Lindsay and Norvell, 1978).
Statistical Methods
Descriptive statistics were used to examine the effects of and relationships among the three land use systems. Land use types were compared with each other against critical values for the physico-chemical properties of Ethiopian soils.
RESULTS, DISCUSSION AND EXPLANATION
3.1. Site Description of the Study Area
The study was conducted in Abol district, Gambella Regional State. The area is characterized by a level slope and was divided into three major land use systems according to the physical soil property of texture (Table 1). The clay loam texture dominated the level slope position (0.5-2%) with moderately suitable soils, which occupied more than 65% of the agricultural land use system in this area.
Soil Physical Properties
The soils of the study area were similar in textural class across the adjacent land use systems, showing only numerical variation within land use type; all three land use types were clay loam in texture. The highest sand fraction (37%) was observed in the agricultural land use system, whereas the highest clay percentage (38%) was obtained in the forest land use system, and the lowest clay percentage was recorded in the cultivated fields, probably due to the removal of clay by water, i.e., the leaching of clay particles down the profile in the agricultural fields. The texture of the soil determined in the field (feel method) was similar in most cases to the determinations carried out in the laboratory (Table 1). The textural class of all land use systems was the same, clay loam. This result shows that the different land use types did not affect the soil texture of the study area, since texture is a permanent, intrinsic soil property that is not influenced over a short period by management practice.
Because texture is a permanent soil property, human management practices cannot readily alter the particle size distribution (Table 1).
The lower clay and silt and higher sand contents in the soils of agricultural lands and grasslands are attributed to the selective removal of clay particles by erosion, which leaves the sand fraction behind in situ.
Cultivated land and overgrazed grasslands are more vulnerable to erosion as they have little or no protective vegetation cover; thus, when erosion occurs, the finer and lighter materials are selectively removed.
The absence of a protective vegetation cover on the surface soil of cultivated land and grasslands directly contributes to the removal of the finer soil particles, as it reduces the organic matter that flocculates soil aggregates, and it increases soil loss through erosion (Abbasi et al., 2007).
Soil bulk density and total porosity
In general, the bulk densities of the soils were low at all research sites; the soils belong to the fine textured soils known to have lower bulk densities (Table 2). Soils tend to be organized into porous grains or granules, especially if adequate organic matter is present (Brady, 1990), which is the case here. In line with this, Larson et al. (1980) reported a bulk density rise from 0.95 g/cm3 in the plow layer to 1.00 g/cm3 in the lower subsoil of Oxisol clays in Brazil. Bulk density was lowest in the soils under forest land and in surface soils, and highest in the cultivated land use types. The present study observed the lowest bulk density in forest land (1.13 g/cm3), followed by grazing land (1.24 g/cm3), and the highest in cultivated land (1.37 g/cm3) (Table 2). This might be due to heavy trampling by animals on the grazing and cultivated lands and the high OM content of the forest land.
Another reason that can explain the change in bulk density is intense tillage. Although tillage practices may temporarily loosen the tilled soil layer, in the long term they increase soil bulk density through compaction. The absence of a soil surface cover, which exposes the soil to the direct impact of raindrops in fields under long periods of continuous cultivation, might also have contributed to the increase in bulk density, as raindrop impacts cause soil compaction through disintegration of the soil structure. This creates an unfavorable environment for plant growth by limiting root growth and air circulation, which in turn has implications for agricultural productivity.
As the total porosity values were derived from the bulk and particle densities, this characteristic showed almost the same pattern of differences as the bulk density values. The total porosity of the study area soils ranged between 48.3% and 57.3% (Table 2). The lowest (48.3%) and highest (57.3%) total porosities were observed in the cultivated and forest land use types, respectively. According to London (1991), sands with a total pore space of less than 40% are liable to restrict root growth due to excessive strength, whereas in clay soils the limiting total porosities are higher, and less than 50% can be taken as the corresponding value. Hence, the total porosity values lie almost entirely within the usual range (30% to 70%). The results obtained from the current study area are in agreement with the findings reported by other researchers (Singh et al., 1995; Maddonni et al., 1999).
3.4. Responses of Soil Chemical Properties to Land Use Changes
3.4.1. Soil pH and electrical conductivity
In all of the land use systems described, the soil pH values measured in a suspension of 1:2.5 soil to water ratio (pH in H2O) were greater than the pH values measured in the same ratio of soil to KCl solution (pH in KCl). Changing from the pasture to the crop land use system was associated with a slight decline in soil pH in the study area. For example, the highest (8.56) and lowest (7.64) soil pH values were obtained under the forest and cultivated land use systems, respectively (Table 3).
The lowest soil pH, under the agricultural land use system, might be due to low exchangeable bases, the absence of household refuse application to the farmland, and crop nutrient mining.
The higher soil pH values under the natural forest and grazing land use systems are probably due to the better content of exchangeable bases and the low level of human disturbance in those areas. This is evident from the positive relation between soil pH and the exchangeable bases in both land use systems. Generally, the land use system produced only slight variations in soil pH in the study area. Considering soil pH (H2O), the forest land soils of the present study area are rated as strongly alkaline and the grazing and cultivated land soils as moderately alkaline (Table 3), according to the classification set by Tekalign (1991).
Soil EC at the research site exhibited the same trend as pH, showing small variation among land uses. The lowest electrical conductivity was found in the agricultural land use system, followed by the pasture land, with the highest value obtained in the forest land use type.
The lowest EC value, registered under the agricultural fields, can be related to the loss of Ca and Mg through crop mining and of other soluble salts after deforestation and cultivation. In addition, the decline in electrical conductivity in the agricultural fields reflects the decrease in the basic cations that form the soluble salts that ultimately enhance electrical conductivity.
Organic matter
The amount of soil organic matter (OM) was relatively high in the forest land use, followed by the grazing land use, and lowest in the cultivated land use: the highest OM content (5.32%) was measured in the forest soils, followed by 4.27% in the grazing land, with the lowest (2.56%) in the cultivated land.
Most cultivated soils of Ethiopia are poor in organic matter content due to the low amounts of organic materials applied to the soil and the complete removal of biomass from the field (Yihenew, 2002), and due to severe deforestation, steep relief conditions, intensive cultivation, and excessive erosion hazards (Eylachew, 1999).
Organic matter content registered its highest value in the natural forest land use system and medium values under the pasture land use type, which might be due to the contribution of vegetation cover and less disturbance by humans and animals. Tate (1987) also reported similar findings: agricultural management of cropped land induces a drastic change in the equilibrium of soil OM attained under undisturbed conditions, thereby affecting its quantity and quality, especially in the near-surface soil.
3.4.3. Total nitrogen
The nitrogen content of the study area soils showed a numerical decline among the major land use systems, following the same order as SOM (Table 3). The soils in the forest, pasture, and agricultural land use systems had total nitrogen (N) contents of 0.46%, 0.28%, and 0.22%, respectively. Overall, the total nitrogen content of the agricultural fields was depleted by 52% compared with the forest land and by 21.24% compared with the adjacent grazing land use system.
The relatively better vegetation cover on the grazing and forest land resulted in higher OM content, which might contribute to the higher total nitrogen content in the respective land use systems. These findings are supported by many authors (Jaiyeoba, 2003; Heluf and Wakene, 2006; Abbasi et al., 2007).
Available phosphorus
The available P content in the current study area registered high values in all land use types: the forest soil registered 30 mg/kg, followed by the grazing land with 23 mg/kg, and the lowest value, 21 mg/kg, under the cultivated land use system (Table 3).
This shows that the available P content of the soils changed numerically from the forest to the cultivated land use system.
The available P content of the agricultural fields decreased by 30% relative to the forest and by 8.69% relative to the grazing land use system; this decline could be due to crop mining and the removal of crop residues from the field.
The highest value of available phosphorus, in the protected forest soil, could arise from the high content of soil organic matter, which results in the release of organic phosphorus. Probably for this reason, available P is highly associated with SOM content.
Based on the rating set by Carrow et al. (2004), P-Olsen between 12 and 18 mg kg-1 is categorized as sufficient; by this rating, the soils of all land uses are within or above the sufficient range. This finding is in line with Hartz (2007). It has been suggested that soil P is more dominant in warm climates than in cool climates; therefore, the phosphorus content at the current research site might have been favored by the hot climatic conditions of the study area, along with the convenient pH range. This result is in line with the research results reported by two authors (Yacob, 2012; Teshome, 2013), who found that the available P content of soils in the Gambella region is in the high range.
Exchangeable bases
The average exchangeable calcium and magnesium of the soils of Abol district were 10.02 and 5.46 cmol(+)/kg in the forest land use and 7.07 and 2.96 cmol(+)/kg in the grazing land, whereas the cultivated land use registered the lowest contents compared with the adjacent land use types, 5.64 and 2.05 cmol(+)/kg of calcium and magnesium, respectively (Table 5). According to FAO (2006a), these exchangeable calcium and magnesium contents are rated as high.
The highest concentrations of exchangeable bases were registered in the forest and grazing land use systems, which could be due to the addition of farmyard manure (FYM) and the presence of high organic matter content (Appendix 2). Consistent with this finding, many researchers have reported that exchangeable bases are highly influenced by soil organic matter content, soil texture, and management practice (Taye et al., 2003; Heluf and Wakene, 2006).
The minimum values of exchangeable bases were registered under the agricultural land use system and higher values under the adjacent forest land use system; this might be the result of land mismanagement, with less organic matter content and fewer exchangeable bases due to the complete removal of crop biomass from agricultural fields, as reported by Singh et al. (1995) and He et al. (1999). This finding is also consistent with Taye et al. (2003) and Heluf and Wakene (2006): exchangeable bases are highly influenced by the OM content of the soil, whether maintained under virgin land management or added to the soil of cultivated land. According to FAO (2006), the exchangeable calcium (Ca) and magnesium (Mg) contents under the forest land use system are categorized as high and the K and Na contents as very high; the grazing land use type falls in the medium range for exchangeable Ca and Mg and the high range for K and Na; and the agricultural land use type is rated as medium for all exchangeable bases.
Generally, exchangeable bases were slightly influenced by land use type (Table 5). Higher exchangeable bases were registered under the forest land and lower values in the agricultural fields, indicating that frequent farming and grazing mismanagement, through crop mining and animal grazing, deplete the exchangeable bases in the study area.
The cation exchange capacity of the soils at the research site was influenced by land use type and soil texture. The highest value (51.08 cmol(+)/kg soil) was registered under the forest land use type, while the lowest (46.74 cmol(+)/kg) was obtained under the agricultural land use type (Table 5).
The soil cation exchange capacity values in the agricultural land use showed a declining trend, mainly due to the reduction in organic matter content compared with the adjacent land use systems. Basically, cation exchange capacity is the capacity of the soil to hold and exchange cations; it provides a buffering effect against changes in pH, available nutrients, calcium levels, and soil structure. Organic matter plays a particularly important role in the exchange process because it provides more negatively charged surfaces than clay particles do. Likewise, percent base saturation was influenced by land use type.
Percent base saturation and cation exchange capacity
The exchangeable bases of the soil that determine the extent of bases also affect PBS (Table 5). It declined from 33.94 in the forest soils to 21.11 and 27.47 in the cropped and grazing soils, respectively. It is not surprising that deforestation and conversion to agricultural land use result in numerical changes in percent base saturation; this is expected as long as exchangeable bases are lost, since percent base saturation is a direct function of the exchangeable bases. Therefore, the reduction in PBS from 33.94 in the forest soil to 21.11 in the agricultural soil can be attributed to the observed general reduction of bases with increasing depletion of organic matter content.
Available micronutrients
In terms of soil chemical fertility, one cannot speak of the complete fertility of soils in the absence of micronutrients. Though they are required only in small amounts, they are as essential as the macronutrients. Therefore, their adequate presence and availability in the soil, like that of their counterpart primary nutrients, is highly necessary for the productivity of soils. Unfortunately, there is not enough information on the micronutrients of Ethiopian soils in general, and of the current study site in particular.
High numerical variation was registered in the micronutrients Zn, Cu, Fe, and Mn across the three land use systems. For extractable iron, values of 17.05, 14.52, and 10.22 mg kg-1 were registered under the forest, grazing, and cropped land use systems, respectively; the corresponding values in the three land use types were 15, 12.3, and 10.8 mg kg-1 for Mn, 2.84, 1.44, and 1.22 mg kg-1 for Zn, and 3.53, 2.23, and 2.16 mg kg-1 for Cu. The high variability of micronutrients among land use types could be due to differences in cultural soil management practices, land use, and OM application in the area. Fisseha (1996) reported a similar phenomenon when exploring the micronutrient status of three Ethiopian Vertisol landscapes. Heluf and Wakene (2006) also reported that micronutrients were highly influenced by different land use systems, with significant variation observed among them.
CONCLUSIONS
A sustainable agricultural system requires a high investment of effort to use the soil resource wisely, because soil is a non-renewable and highly precious resource that determines the agricultural potential of a given area. Hence, the study and understanding of soil physical, chemical, and biological properties and behavior is crucial for developing soil management plans for the efficient utilization of the land resource.
The findings of this research suggest that most soil chemical properties were slightly reduced in the cropped land use types compared with the adjacent forest land use types, owing to crop mining and intensive grazing without management. The textural class of the study area soils did not differ among land use types; all three land use systems showed clay loam, indicating similarity in the parent material from which the soils formed. Moreover, soil texture is an inherent soil property that cannot be changed over a short period of time.
The results of this study are evidence of significant changes in the quality attributes of the soils in the study area following the removal or destruction of vegetative cover and frequent tillage, which lead to soil erosion and thereby declining soil fertility.
RECOMMENDATION
The information obtained from this study will help in developing sustainable and ecologically sound soil conservation and management strategies in the Abol area. Moreover, governmental and non-governmental organizations, people living in the area, and other stakeholders who intend to invest in the land should collaborate to maintain and conserve this precious resource so that it benefits the current and future generations without being depleted.
Spatiotemporal Exploration of Chinese Spring Festival Population Flow Patterns and Their Determinants Based on Spatial Interaction Model
Large-scale population flow reshapes the economic landscape and is affected by unbalanced urban development. The exploration of migration patterns and their determinants is therefore crucial to reveal unbalanced urban development. However, low-resolution migration datasets and insufficient consideration of interactive differences have limited such exploration. Accordingly, based on 2019 Chinese Spring Festival travel-related big data from the AMAP platform, we used social network analysis (SNA) methods to accurately reveal population flow patterns. Then, with consideration of the spatial heterogeneity of interactive patterns, we used spatially weighted interactive models (SWIMs), which were improved by the incorporation of weightings into the global Poisson gravity model, to efficiently quantify the effect of socioeconomic factors on migration patterns. These SWIMs generated the local characteristics of the interactions and quantified results that were more regionally consistent than those generated by other spatial interaction models. The migration patterns had a spatially vertical structure, with the city development level being highly consistent with the flow intensity; for example, the first-level developments of Beijing, Shanghai, Chengdu, Guangzhou, Shenzhen, and Chongqing occupied a core position. A spatially horizontal structure was also formed, comprising 16 closely related city communities. Moreover, the quantified impact results indicated that migration pattern variation was significantly related to the population, value-added primary and secondary industry, the average wage, foreign capital, pension insurance, and certain aspects of unbalanced urban development. These findings can help policymakers to guide population migration, rationally allocate industrial infrastructure, and balance urban development.
Introduction
Population flow refers to the short-term, repetitive, and cyclical movement of populations in geographical space. By 2016, China's floating population had reached 245 million. Large-scale population flow has been a significant phenomenon in China's social development.
Study Area
There is large-scale population flow among cities in China during the Spring Festival. As portrayed in Figure 1, our study area focused on 299 prefecture-level administrative units and some county-level units in mainland China. In general, these administrative units are cities. Due to limitations in data availability, some prefecture-level cities in Hainan province, Taiwan, Hong Kong, Macao, and some ethnic minority autonomous prefectures in western China were excluded from the study area. Ultimately, 352 cities formed the research focus.
Study Data
Location-based services (LBS) technology pinpoints the geographic location of a mobile user via wireless communication networks or the external positioning methods of network operators. When users allow various mobile applications to call LBS, their movement trajectories are accurately recorded in real time from positioning information. Thus, every smartphone user is a mobile sensor, reflecting social characteristics and allowing an enormous amount of individual movement data to be collected efficiently in real time. These movement data are used to calculate intercity migration indices [27]. The use of travel-related big data with such high spatiotemporal resolution is more accurate and effective than the use of census data [28]. In this study, we used the population flow dataset from the AMAP Migration Map ("https://trp.autonavi.com/migrate/page.do"). Tencent and Baidu migration data have been used in similar studies because they provide migration indices of daily population inflows and outflows, with a city as the basic unit (i.e., the intensity of inflows, source, and outflows limited to the destination of a single city on a certain day). However, longer historical data for population migration, such as during the 2019 Spring Festival, are currently available only from the AMAP platform. Table 1 shows an example population flow dataset.
As shown in Table 1, the population migration intensity index (PMII; provided by the AMAP Migration Map) represents the migration intensity from the origin to the destination cities. In this study, the inflow and outflow migration indexes are both representative of the intensity of population flow.
In addition, to explore the effects of associated factors on the patterns of population flow during the Spring Festival, several socioeconomic factors were selected for analysis, as shown in Table 2. Population is a basic factor in population flow; gross regional product, value added by primary industry (VAPI), value added by secondary industry (VASI), and value added by tertiary industry (VATI) represent the economic level of cities; the average wage, in terms of the income differential between two cities, is a main driver of migration; foreign capital investment increases the number of jobs and thus attracts employees; mobile phone users create a record of population movement, with their number closely related to the intensity of a population flow; and the number of insured pensions and insured persons (IPIP) represents the social security system for city workers and is an important indicator of the effect of social security policy on population flow.

Table 2. Selected socioeconomic factors:
- Population (P) [29,31]
- Gross regional product (GRP): annual gross regional product (100 million yuan) [29,32]
- Value added by primary industry (VAPI): annual value added by primary industry (100 million yuan) [29,33]
- Value added by secondary industry (VASI): annual value added by secondary industry (100 million yuan) [29,33]
- Value added by tertiary industry (VATI): annual value added by tertiary industry (100 million yuan) [29,33]
- Average wage (AW): average wage of employees on duty (yuan/person) [34-36]
- Foreign capital (FC): actual utilization of foreign investment (10 million dollars) [37]
- Mobile phone users (MPU): number of mobile phone users at year end (10 thousand persons) [29,30]
- Insured pension and insured persons (IPIP): number of basic pension and related insurance policies available for urban employees [29,38]

For each city, we established indices to express the intensity of population flow (i.e., daily inflow and outflow) and the flows for holidays, returning to a hometown (re-hometown), and returning to work (re-work). From the spatiotemporal changes of these indices over the four periods, we determined the spatiotemporal trends and patterns of population flow. Because ChunYun began on January 21 during the Spring Festival of 2019, the average PMII distribution (DPMII) from January 15 to 20 was taken as a proxy for the daily distribution of population flow before the Spring Festival. Similarly, the average PMII distribution (RHPMII) from January 21 to February 2 was taken as a proxy for the re-hometown distribution of population flow before the Spring Festival. The Spring Festival holiday ended on February 10; thus, the average PMII distribution (HPMII) from February 3 to 9 was taken as a proxy for the holiday distribution of population flow during the Spring Festival, and the average PMII distribution (RWPMII) from February 10 to 12 was taken as a proxy for the re-work distribution of population flow after the Spring Festival. The basic statistical information on the intensity of population inflow and outflow during the four periods is shown in Table 3 (in Table 3, "Variable" denotes the population migration intensity index of a given period; Std. Dev. = standard deviation; Min = minimum value; Max = maximum value).
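To make the construction of these period proxies concrete, here is a minimal pandas sketch; it is not the authors' pipeline, and the file name and the columns date, origin, destination, and pmii are hypothetical.

```python
import pandas as pd

# Hypothetical daily flow table: one row per origin-destination pair per day.
flows = pd.read_csv("amap_spring_festival_2019.csv", parse_dates=["date"])

# The four Spring Festival periods used in the paper.
periods = {
    "DPMII":  ("2019-01-15", "2019-01-20"),  # daily, before ChunYun
    "RHPMII": ("2019-01-21", "2019-02-02"),  # returning hometown
    "HPMII":  ("2019-02-03", "2019-02-09"),  # holiday
    "RWPMII": ("2019-02-10", "2019-02-12"),  # returning work
}

proxies = {}
for name, (start, end) in periods.items():
    mask = flows["date"].between(start, end)
    # Average migration intensity per origin-destination pair over the period.
    proxies[name] = (flows.loc[mask]
                          .groupby(["origin", "destination"])["pmii"]
                          .mean()
                          .rename(name))

table3 = pd.concat(proxies.values(), axis=1)
print(table3.describe())  # mean, std, min, max per period, as in Table 3
```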
Methods
We illustrate the methodology used in this study with the example of population flow between the cities of Beijing and Shanghai. First, we used the city-level population flow dataset collected from the AMAP LBS platform and the socioeconomic factors dataset collected from the Urban Statistical Yearbook of China in 2019; preprocessing comprised null-value and error-value handling, data standardization, dataset partition, spatialization, and other steps. Second, we used SNA methods and spatial interaction models to explore the patterns and quantify the effects of population flow. Specifically, we performed the following tasks. (1) We used the PageRank model for city classification and the CNM model for community detection in the daily population flow; the PageRank model quantifies which cities are more important for Beijing and Shanghai, while the CNM model determines which urban communities Beijing and Shanghai belong to. (2) We used the spatiotemporal variation of flow intensity to reveal the trends of population flow. (3) We used a family of global interaction models (the global Poisson gravity model, the origin-specific gravity model, and the destination-specific gravity model) to quantify the global effect of selected socioeconomic factors on the returning-work flow; these global models assume that population flow between any pair of cities, including Beijing and Shanghai, conforms to the same pattern. (4) We used an origin-focused SWIM and a destination-focused SWIM to quantify the local effect, with consideration of spatial heterogeneity: when Beijing is the origin city and Shanghai the destination city, the origin-focused SWIM accounts for the influence of the cities around Beijing on the population flow between the two cities, and the destination-focused SWIM accounts for the influence of the cities around Shanghai. Figure 2 shows a flowchart for this study.
Figure 2. Flowchart of this study (re-hometown means returning hometown; re-work means returning work; CNM means Clauset-Newman-Moore algorithm; GWPR means geographically weighted Poisson regression model; SWIM means spatially weighted interactive models).
City Classification and Community Detection
A population flow network is a small-world, scale-free network, an intermediate between a fully regular network and a completely random network [13]. We considered the network of population flow formed during the Spring Festival to be similar to the Internet and thus considered that cities of greater importance attract more people and routes. Taking cities as the network nodes and the intensity of population flow among cities as the weight, a directional weighting matrix P was constructed for each of the four periods of population flow, where P_ij represents the intensity of population flow from city i to city j.
To study the network characteristics of population flow, we used the PageRank algorithm and community detection methods, which are often used to measure node importance and community structure in SNA. The PageRank algorithm was originally designed by Google to rank web pages [39,40]. In addition to considering degree, betweenness, and closeness, like the other centrality indices used to evaluate nodes in a network, the PageRank algorithm also considers the number and quality of connections. Thus, a node may have fewer connections yet still be important if its connections are with important nodes. The PageRank algorithm has therefore been applied to network analysis in many fields, such as bibliometrics, SNA, and road networks [13]. We used it to rank the importance of city nodes, classifying cities according to their importance, which revealed the hierarchical structure of population flow. The PageRank algorithm is as follows,

$$\mathrm{PageRank}(p_i) = \frac{1-q}{N} + q \sum_{p_j \in M(p_i)} \frac{\mathrm{PageRank}(p_j)}{L(p_j)},$$

where PageRank(p_i) is the PageRank value of city i, q is a damping parameter (usually set to 0.85), N is the number of all city nodes, M(p_i) is the set of cities with population flow into city i, and L(p_j) is the number of links from city j, weighted by the intensity of the population flow.
Community detection is used to identify city communities in a population flow network. A range of methods are used for community detection, such as the Fluid Communities algorithm, the Girvan-Newman algorithm, and the CNM algorithm [41-43]. We used the CNM algorithm, which is based on CNM greedy modularity maximization and is weighted by the intensity of the population flow [43].
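As a concrete illustration of this step, the following is a minimal sketch (not the authors' code) using the networkx library; the edge list and its weights are hypothetical.

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Hypothetical directed, weighted flow edges: (origin, destination, PMII).
edges = [("Beijing", "Shanghai", 12.3), ("Shanghai", "Beijing", 11.8),
         ("Beijing", "Tianjin", 9.1), ("Chengdu", "Chongqing", 8.4),
         ("Chongqing", "Chengdu", 8.0), ("Guangzhou", "Shenzhen", 10.2)]

G = nx.DiGraph()
G.add_weighted_edges_from(edges)

# Weighted PageRank with the usual damping parameter q = 0.85.
pr = nx.pagerank(G, alpha=0.85, weight="weight")
print(sorted(pr.items(), key=lambda kv: -kv[1]))

# CNM (greedy modularity) community detection; networkx's implementation
# expects an undirected graph, so the flow network is symmetrized first.
communities = greedy_modularity_communities(G.to_undirected(), weight="weight")
for k, c in enumerate(communities):
    print(f"community {k}: {sorted(c)}")
```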
Global Poisson Gravity Model
Spatial interaction is broadly defined as the movement or communication of objects such as people, goods, and information over geographic space that results from a decision-making process [44,45]. Thus, spatial interaction covers a wide variety of behaviors and movements such as migration, shopping trips, commuting, commodity or communication flows, trips for educational purposes, and airline passenger traffic [23]. The most general form of a spatial interaction model can be formulated as follows [46],

$$T_{ij} = f(V_i, W_j, C_{ij}),$$

where the interaction between any pair of origins i and destinations j is specified as T_ij, V_i represents a vector of origin factors measuring the propulsiveness of origin i, W_j represents a vector of destination attractiveness factors, and C_ij represents a vector of separation factors, with the separation between cities i and j (usually) measured in terms of distance, cost, or travel time. For example, T_ij is the population flow between Beijing and Shanghai; V_i represents a vector of factors of Beijing, such as population and industry; W_j represents a vector of factors of Shanghai, such as average wage and foreign investment; and C_ij represents a vector of separation factors between Beijing and Shanghai, such as distance and transportation cost. The gravity frameworks for spatial interaction were the first to be developed and are the most widely used [47]. The gravity model and its relationships assume that greater flows will occur between larger and closer places than between smaller and more distant places, ceteris paribus. It is usually formulated as follows,

$$T_{ij} = k \, P_i^{\alpha} \, N_j^{\gamma} \, d_{ij}^{-\beta},$$

where P_i and N_j represent the repulsiveness and attractiveness factors of origin i and destination j, respectively, d_ij is the distance between i and j, and k, α, γ, and β are parameters to be estimated empirically that reflect the nature of the relationship between spatial flows and each of the explanatory variables [23]. Considering Poisson regression, a global Poisson gravity calibration of the spatial interaction model is formulated as follows,

$$T_{ij} \sim \mathrm{Poisson}(\lambda_{ij}), \qquad \lambda_{ij} = \exp\!\left(k + \alpha \ln P_i + \gamma \ln N_j - \beta \ln d_{ij}\right),$$

where all parameters are as defined above.
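A global Poisson gravity calibration of this kind can be sketched with statsmodels; the sketch below is illustrative, and the file name and the columns origin, dest, T, P_origin, N_dest, and dist are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical flow table: one row per origin-destination pair, with the
# re-work flow (T), an origin factor, a destination factor, and distance.
df = pd.read_csv("rework_flows.csv")  # origin, dest, T, P_origin, N_dest, dist

# Log-linearized gravity covariates: ln(lambda) = k + a ln P + g ln N - b ln d.
X = sm.add_constant(np.log(df[["P_origin", "N_dest", "dist"]]))
y = df["T"]

# Global Poisson gravity calibration (one parameter set for all flows).
model = sm.GLM(y, X, family=sm.families.Poisson()).fit()
print(model.summary())  # the 'dist' coefficient is -beta, the distance decay
```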
Origin-Specific and Destination-Specific Models
Population flow is a spatial interaction between the populations of the origin and the destination. Its intensity is affected by both origin and destination attributes; e.g., population mobility between Beijing and Shanghai is affected not only by the attributes of Beijing but also by those of Shanghai. However, the global calibration of spatial interaction models such as the gravity model, which assumes the same pattern of population flow between any origin and destination, may not capture the spatial variation in relationships and thus may not represent the fact that the effects of Beijing and Shanghai differ.
Local parameter estimates may provide more useful disaggregated information. These estimates are obtained for each separate origin or destination by calibration of origin-specific and destination-specific models. For example, we only consider the flow from Beijing to any city in the origin-specific model, and we only consider the flow from any city to Shanghai in the destination-specific model.
An origin-specific model is formulated as follows,

$$T_{ij} = k_i \, N_j^{\gamma_i} \, d_{ij}^{-\beta_i},$$

where T_ij represents the flow intensity between the specific origin city i and destination city j; k_i, γ_i, and β_i are the parameters of the specific origin city i; N_j represents a vector of destination attractiveness factors; and d_ij is the distance between i and j.
A destination-specific model is formulated as follows,

$$T_{ij} = k_j \, P_i^{\gamma_j} \, d_{ij}^{-\beta_j},$$

where T_ij represents the flow intensity between origin city i and the specific destination city j; k_j, γ_j, and β_j are the parameters of the specific destination city j; P_i represents a vector of origin factors measuring the propulsiveness of origin i; and d_ij is the distance between i and j.
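One way to calibrate these local variants, sketched below under the same hypothetical flow table as above, is simply to fit the Poisson gravity model separately on the subset of flows sharing an origin (or, symmetrically, a destination).

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("rework_flows.csv")  # origin, dest, T, P_origin, N_dest, dist

# Origin-specific calibration: one Poisson gravity fit per origin city,
# using only the flows leaving that origin. Destination-specific models are
# obtained symmetrically by grouping on 'dest' and using origin factors.
origin_params = {}
for city, sub in df.groupby("origin"):
    Xi = sm.add_constant(np.log(sub[["N_dest", "dist"]]))
    fit = sm.GLM(sub["T"], Xi, family=sm.families.Poisson()).fit()
    origin_params[city] = fit.params  # const=k_i, N_dest=gamma_i, dist=-beta_i

print(origin_params["Beijing"])
```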
Origin-Focused and Destination-Focused Models
The origin-specific and destination-specific models only consider flows from a specific origin city to different destination cities, or from different origin cities to a specific destination city; flows emanating from other origins or arriving at other destinations are ignored. For example, in the origin-specific model we only considered the flows from Beijing and ignored the flows from other origin cities. In fact, the flow between an origin and a destination city is affected by the other cities that surround them, an effect that the origin-specific and destination-specific models ignore. Cities in different geographical locations have different population mobility patterns, whereas the mobility patterns of neighbouring cities tend to be similar; population flow is therefore spatially heterogeneous.
In the GWR model, a specific city is the research object, and the model generally performs better than traditional regression models because it includes geographically varying parameters. By using geographic weighting, it avoids the global parameter estimation that renders traditional regression models unsuitable for the analysis of spatially heterogeneous population flow patterns. The expression of the GWR model is as follows,

$$y_i = \beta_0(u_i, v_i) + \sum_k \beta_k(u_i, v_i)\, X_{ik} + \varepsilon_i,$$

where (u_i, v_i) are the coordinates of city i and β_k(u_i, v_i) is the regression coefficient of independent variable X_ik at city i; the regression coefficients are the quantified results of the impact of each factor. A weighted least-squares method is used to estimate the coefficients of the GWR model, with the estimates given by

$$\hat{\beta}(u_i, v_i) = \left(X^{T} W(u_i, v_i)\, X\right)^{-1} X^{T} W(u_i, v_i)\, y,$$

where W(u_i, v_i) = diag(w_{i1}, w_{i2}, ..., w_{in}) is the spatially weighted matrix, and its diagonal elements w_ij (1 ≤ j ≤ n) are the weights given to observation city j adjacent to observation city i.

The calculation of the weights has a great influence on parameter estimation for the GWR model. A Gaussian kernel function is often used to calculate the spatially weighted matrix, which models the spatial effects of the surrounding observations by Gaussian distance decay within the bandwidth:

$$w_{ij} = \exp\!\left(-\frac{1}{2}\left(\frac{d_{ij}}{b}\right)^{2}\right),$$

where d_ij is the spatial distance measuring the closeness between city i and city j, and b is a parameter called the bandwidth, which controls the radial influence range. Bandwidth selection is therefore critical for the calculation of the weights. There are two major categories of weighting methods: one uses a fixed bandwidth and the other an adaptive bandwidth, which is larger where the data are sparse and smaller where the data are plentiful. Moreover, a corrected Akaike information criterion (AIC) is used to evaluate the fit and select the optimum bandwidth [48].

GWR was initially developed for linear regression modelling, where the dependent variable is assumed to follow a Gaussian (normal) distribution. It was then extended to a geographically weighted logistic regression method, based on the generalized linear modelling framework for the binomial (logistic) distribution, and to a geographically weighted Poisson regression (GWPR) method, based on the Poisson distribution [49]. The expression of the GWPR model is as follows,

$$y_i \sim \mathrm{Poisson}\!\left(\exp\!\left(\beta_0(u_i, v_i) + \sum_k \beta_k(u_i, v_i)\, X_{ik}\right)\right).$$

We used a geographically weighted likelihood principle to estimate the GWPR parameters. This is a variant of the local likelihood principle that is consistent with the geographically weighted least-squares approach of conventional Gaussian GWR. Thus, the model parameters at location i were estimated by maximizing the geographically weighted log-likelihood function.

With reference to the geographical weighting approach used in the GWR model and the above models, SWIMs comprising origin-focused and destination-focused models were constructed [23]. These also take focused cities as their research objects. In the origin-focused model, flows with origins closer to the calibration point have a greater weight and thus a larger effect during the model calibration; the weights continuously decrease as the distance between the calibration point and the observed origin increases. A simplified illustration of the origin-focused and destination-focused spatial interaction is shown in Figure 3.

The general formulation of the SWIM is as follows,

$$T_{ij\{u,r\}} = k_{\{u,r\}} \, P_i^{\alpha_{\{u,r\}}} \, N_j^{\gamma_{\{u,r\}}} \, d_{ij}^{-\beta_{\{u,r\}}},$$

where T_ij generally represents the flow intensity between origin city i and destination city j. When r = i, the formulation is an origin-focused model, where u represents the location of the calibration point (one of the existing origins or any other point within the study region); when r = j, the formulation represents a destination-focused model, where u represents the location of the calibration point (one of the existing destinations or any other point within the study region). The notation {u, r} indicates that the data for the covariates used to estimate the parameters at u are geographically weighted by the distances between u and each r. P_i, N_j, and d_ij are the model variables (i.e., the origin propulsiveness, the attractiveness of the destination, and the distance between origin i and destination j), and k, α, γ, and β are the parameters specific to u.
When the spatial interaction model follows a Poisson distribution, the SWIM is formulated as follows,

$$T_{ij} \sim \mathrm{Poisson}(\lambda_{uij}), \qquad \lambda_{uij} = \exp\!\left(k_{\{u,r\}} + \alpha_{\{u,r\}} \ln P_i + \gamma_{\{u,r\}} \ln N_j - \beta_{\{u,r\}} \ln d_{ij}\right),$$

where λ_uij denotes the flow between origin i and destination j weighted according to the distance between u and r, and the other variables are defined as before.
The parameter estimation for the SWIM is similar to that used for the GWPR model, being based on a geographically weighted likelihood principle with pointwise-calibrated parameter estimates. A set of equations, obtained by setting the first derivative of the weighted log-likelihood to zero, is solved to maximize it:

$$\sum_{i}\sum_{j} \left(T_{ij} - \lambda_{uij}\right) x_{ij}\, W_{uij} = 0,$$

where x_ij denotes the covariate vector of flow ij and W_uij indicates the weight of flow ij according to the proximity of its r to the calibration point u.
The spatial weighting function and optimal bandwidth selection criteria of the SWIM are similar to those of the GWPR model.
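To make one such calibration concrete, here is a minimal sketch (not the authors' implementation) of a single origin-focused calibration point, reusing the hypothetical flow table from above; the coordinate file and the bandwidth value are assumptions, and passing the kernel weights as freq_weights is one way to weight each flow's log-likelihood contribution as the geographically weighted likelihood principle requires.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("rework_flows.csv")   # origin, dest, T, P_origin, N_dest, dist
xy = pd.read_csv("city_coords.csv", index_col="city")  # x, y per city (km)

def origin_focused_fit(u_city, bandwidth_km=300.0):
    """Calibrate the SWIM at calibration point u (here, an origin city).

    Each flow is weighted by a Gaussian kernel on the distance between u
    and the flow's origin, then a weighted Poisson gravity model is fitted.
    """
    du = np.hypot(xy.loc[df["origin"], "x"].values - xy.loc[u_city, "x"],
                  xy.loc[df["origin"], "y"].values - xy.loc[u_city, "y"])
    w = np.exp(-0.5 * (du / bandwidth_km) ** 2)   # Gaussian distance decay
    X = sm.add_constant(np.log(df[["P_origin", "N_dest", "dist"]]))
    # The weights multiply each flow's log-likelihood contribution,
    # matching the geographically weighted likelihood principle.
    return sm.GLM(df["T"], X, family=sm.families.Poisson(),
                  freq_weights=w).fit()

print(origin_focused_fit("Beijing").params)  # local k, alpha, gamma, -beta
```

In a full calibration this fit would be repeated at every calibration point, with the bandwidth chosen by the corrected AIC as described above.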
Variable Selection
If there is multicollinearity among the regressors, the results will be highly unreliable. Thus, before modelling, we determined whether multicollinearity existed between variables. We calculated the variance inflation factor (VIF) of each independent variable and discarded from the final model any independent variables with VIFs > 7.5, namely the gross regional product of the origin and of the destination, VATI_origin, VATI_destination, and the mobile phone users of the origin and of the destination. The selected independent variables are shown in Table 4.
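This screening step can be sketched with the VIF utility in statsmodels; the factor column names below are hypothetical and stand in for the origin and destination variants of the factors in Table 2.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

df = pd.read_csv("rework_flows.csv")   # hypothetical factor columns below
factors = ["P_origin", "P_dest", "VAPI_origin", "VAPI_dest",
           "VASI_origin", "VASI_dest", "AW_origin", "AW_dest"]

# VIF of each candidate regressor (the constant at index 0 is skipped).
X = sm.add_constant(df[factors])
vifs = pd.Series(
    [variance_inflation_factor(X.values, i) for i in range(1, X.shape[1])],
    index=factors)

print(vifs)
kept = vifs[vifs <= 7.5].index.tolist()  # the paper's screening threshold
print("kept:", kept)
```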
Spatiotemporal Patterns of Population Flow
Daily population flow exhibits spatiotemporality. As can be seen from Figure 4, the daily population flow is concentrated in the southeast of China, with little in the northwest of China. Furthermore, the deep red areas are four major city agglomerations, with Beijing, Shanghai, Guangzhou, and Chengdu as their respective core cities. These are known as Beijing-Tianjin-Hebei, the Yangtze River Delta, the Pearl River Delta, and Chengdu-Chongqing. In addition, the higher a city's development level, the greater its population flow, as shown by the flow of Shanghai being greater than that of Chengdu. To verify this apparent hierarchical structure, we first established a directed weighted matrix of daily population inflow and outflow between cities, then used the PageRank algorithm to rank the importance of cities in the daily population flow network.
Figure 5 shows the PageRank value distribution of important cities in different spatial locations, and Table 5 summarizes the levels of PageRank value in different cities by the natural break classification (NBC). The following trends can be seen: (1) the importance of first-level cities is consistent with that of the core cities of the four major city agglomerations mentioned above; (2) nearly all second-level cities are first-tier cities or provincial capitals, which are important nodes in the population flow network; (3) third-level cities surround a second-level city, showing that the intensity of population flow radiates from core cities to their surrounding cities, as mentioned above; and (4) the fourth-level cities are mainly distributed in northwestern China, which shows that the daily population flow is mainly concentrated in southeastern China. Thus, there is a vertical hierarchy, with the population flow showing a high consistency with city development level.
The low-PageRanked cities surrounded high-level cities in geographical space; for example, Tianjin was one of the cities surrounding Beijing. This suggested a possible community structure. Thus, community detection was used to reveal any community relationship hidden in the population flow network. Figure 6 gives a distribution map of the community structure in the network, and Table 6 summarizes the community structure of all cities.
Table 6 reveals 16 different community structures and the following trends: (1) The core city of each community is a provincial capital city or a municipality directly under central-government control; for example, the core city of the Beijing-related community is under central-government control. (2) The four major city agglomerations play an important role in the community structure, as they comprise the largest number of provinces and cities. (3) In the community structure, most communities are cross-regional, such as the Beijing-related community that encompasses Tianjin, Shandong, Shanxi, Hebei, and Henan provinces. During the Spring Festival, as Table 3 shows, the mean PMII outflow increased from 4.505 to 10.82 and the mean PMII inflow increased from 4.496 to 10.75. Clearly, there was an overall increase in population flow. Further, Figure 7 is an outflow trend map of re-hometown before the Spring Festival, obtained by subtracting DPMII outflow from RHPMII outflow. The deep-red areas show a significant increase in outflow in the four major city agglomerations. This is commonly known as "returning hometown flow" and represents migrant laborers returning to their hometowns to be with their families for the Spring Festival. Similarly, Figure 8 shows the inflow trend map of re-work after the Spring Festival, obtained by subtracting HPMII inflow from RWPMII inflow. The deep-red areas show an inflow tendency in the four major city agglomerations, which represents migrant laborers returning to work after the Spring Festival (also denoted "returning work flow"). These data show that workers are concentrated mainly in the four major city agglomerations but that their hometowns are elsewhere. People therefore tend to flow from low-development cities to high-development cities, which have more employment opportunities. Overall, it was found that the spatiotemporal patterns of daily population flow had a hierarchical structure. Population flow intensity and city development were highly correlated and exhibited a community structure, indicating that the intensity of population flow radiated from core cities to surrounding cities. In terms of the hierarchical structure, the nationwide network level comprised the core cities (Beijing, Shanghai, Guangzhou, Chengdu, and Chongqing) of the four major city agglomerations; the regional network level comprised the second-level cities (e.g., Xi'an, Kunming, and Guiyang). In addition, there were more important and densely connected cities in eastern China than in western China, indicating that city development levels in China rise from west to east.
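The community-detection step can be sketched in the same spirit. The paper does not name its algorithm, so weighted modularity maximization is used below purely as a stand-in, on a hypothetical flow list.

    import networkx as nx
    from networkx.algorithms.community import greedy_modularity_communities

    flows = [  # hypothetical (origin, destination, intensity) records
        ("Beijing", "Tianjin", 7.5), ("Tianjin", "Beijing", 7.1),
        ("Shanghai", "Suzhou", 8.3), ("Suzhou", "Shanghai", 8.0),
        ("Beijing", "Shanghai", 2.1),
    ]

    # Collapse the directed network to an undirected one by summing both directions.
    U = nx.Graph()
    for u, v, w in flows:
        if U.has_edge(u, v):
            U[u][v]["weight"] += w
        else:
            U.add_edge(u, v, weight=w)

    # Each returned set is one community of closely linked cities.
    for i, members in enumerate(greedy_modularity_communities(U, weight="weight"), 1):
        print(f"community {i}: {sorted(members)}")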
Cities in the same community tended to be more closely linked, indicating that they were connected by population flows more frequently than other cities. Moreover, most communities were cross-regional, illustrating that spatiotemporal constraints will be severely compressed in the future: large-scale, cross-regional, and high-density population mobility will be a development trend. During the Spring Festival, the spatiotemporal patterns of population flow were "returning hometown flow" and "returning work flow". This verified the regional differences in city development and population flow. It also showed that the difference in developmental levels between two regions was the driving force of population flow. Large-scale population flows similar to "returning hometown flow" and "returning work flow" promote the dissemination of information, capital, culture, and technology, which aids the development of cities.
Results of SWIMs
The above analysis revealed that the unbalanced development of a city was an influential factor contributing to "returning hometown flow" and "returning work flow" during the Spring Festival. The migration purpose of "returning work flow" is to return to work. To account for the effect of multipurpose migration during daily and holiday periods, we used 13 explanatory variables to explore only the relationship between the intensity of population flow and the development level of a city during "returning work flow". The dependent variable RWPMII and the independent variables are shown in Table 3.
Results from the Global Poisson Gravity Model
The parameter estimation result from the global Poisson gravity model is shown in Table 7. It represents only the average interaction behavior across the entire study area. From this preliminary exploration, the following relationship can be seen: (1) the estimated value of α for total population of origin is 0.7154, and that of γ for total population of destination is 0.1036, which shows that a population increase at origin and destination cities has positive effects on population flow.
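As a rough illustration of how such a model can be estimated, the sketch below fits a Poisson gravity model with statsmodels on a hypothetical origin-destination table. It is an assumed minimal form (log link, log-transformed population and distance), not the authors' exact specification with all 13 variables.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    # Hypothetical OD table; the real inputs would be RWPMII flows plus the
    # socioeconomic variables of Table 3.
    od = pd.DataFrame({
        "flow":    [120, 80, 45, 200, 60, 95],
        "pop_o":   [2154, 2424, 1530, 2154, 1633, 2424],  # total population, origin
        "pop_d":   [2424, 2154, 2424, 1633, 2154, 1530],  # total population, destination
        "dist_km": [1067, 1067, 1213, 1518, 1518, 1213],
    })

    # ln(mu_ij) = k + alpha*ln(P_i) + gamma*ln(P_j) + beta*ln(d_ij)
    X = sm.add_constant(np.log(od[["pop_o", "pop_d", "dist_km"]]))
    fit = sm.GLM(od["flow"], X, family=sm.families.Poisson()).fit()
    print(fit.params)  # const = k, pop_o = alpha, pop_d = gamma, dist_km = beta (< 0)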
Results of Origin-Specific and Destination-Specific Interaction Models
Although average trends at the global level were seen in the results of the global Poisson gravity model, spatial heterogeneity was seen in the interaction of population flow. Thus, to further verify whether our interpretation of the global model results was reasonable, we used origin-specific and destination-specific interaction models that considered the specific origin or destination cities separately to further quantify the effects of socioeconomic factors on population flow. Tables 8 and 9 and Figure 9 show the regression results of these two models. From these regression results, the following conclusions were drawn. (1) The estimated coefficients of total population in these two models differed from those of the global results. In the destination-specific model, the α values for total population of origin were positive in the first- and second-level cities (except for those in northeastern China) and in the cities surrounding the four major city agglomerations. In contrast, in a few cities in southwestern and central China and in most cities in northeastern and northern China, the estimated coefficients of total population were negative. In the origin-specific model, the γ values for total population of destination were positive in the first- and second-level cities (except for the first- and second-level cities of northeastern China) and in most cities of southwestern and central China. However, in most cities in southeastern and southern China, the γ values for total population of destination were negative. The positive values of total population in most first- and second-level cities show that population growth promoted population inflow and outflow. However, most northeastern and northern cities and a few southwestern cities showed negative values of total population, demonstrating that these cities had experienced population loss. (2) In the destination-specific model, the α values for VAPI_origin were negative for western and northern cities but positive for northeastern and coastal cities (e.g., the Yangtze River Delta had high positive values). In the origin-specific model, the γ values of VAPI_destination were positive in some coastal, northern, and northeastern cities but negative in central cities. Thus, the estimated coefficients of VAPI in some coastal cities and in southwestern and northeastern cities of China were all positive, illustrating that the population flow among these areas comprised primary-industry workers. (3) In the destination-specific model, the α values for VASI_origin were positive for most cities of southwestern China but negative for northern and southeastern coastal cities. In the origin-specific model, the γ values of VASI_destination were positive in Chongqing and in Jiangsu, Anhui, Hubei, Sichuan, Yunnan, and Shanxi, whereas cities in northeastern China, the Yangtze River Delta, and the Pearl River Delta had negative values. Thus, the estimated coefficients of VASI were positive in most cities in southwestern China, indicating that these cities have gradually transformed into centers of secondary industry. In contrast, the negative estimated coefficients of VASI in most cities of northeastern China, the Yangtze River Delta, and the Pearl River Delta showed that tertiary industries dominate in these developed coastal cities and that few secondary-industry jobs are available. Conversely, although northeastern China is a long-established industrial area, it has a low attraction level for populations because of its severely decreased population. (4) In the destination-specific model, the α values for foreign capital of origin were positive for cities in northeastern China, southwestern China, and coastal areas, whereas in cities elsewhere they were negative. In the origin-specific model, the γ values for foreign capital of destination were positive for cities in northeastern China, southwestern China, and the Pearl River Delta, whereas in cities elsewhere they were negative. Thus, when cities in northeastern and southwestern China gain attraction as destinations, this is a result of increased investment of foreign capital. For example, the Pearl River Delta was the earliest reform and opening-up zone, and an enormous investment of foreign capital created a large number of jobs and attracted more workers to the area via population inflow.
(5) In the southern and southeastern regions dominated by the Yangtze River Delta and the Pearl River Delta, the α values for IPIP_origin were negative, and the γ values for IPIP_destination were positive. This is in line with the actual situation: these areas mostly contain coastal cities with high development levels and are thus major sites of population inflow.
Results of Origin-Focused and Destination-Focused Interaction Models
Although the origin-specific and destination-specific models consider spatial heterogeneity separately, they do not consider the effect of surrounding cities. Thus, the origin-focused and destination-focused models, which do consider the effect of surrounding cities, were used for this section of the work. The results are shown in Figure 10. The regression results of the origin-focused and destination-focused interaction models were largely the same as the results of the origin-specific and destination-specific models, but they differed in a few areas. These differences were as follows. (1) In Chongqing and some cities of Henan province, the α values for total population of origin in the destination-focused model were greater than those in the two specific models. Because Henan province and southwestern regions (where Chongqing is located) are the main areas of population outflow, this increase of α was in line with the actual situation. However, for some cities in the Yangtze River Delta, the α values for total population of origin were negative. This shows that these cities are becoming saturated with people. (2) The estimated γ values for VAPI_destination were negative in Henan and Anhui provinces, distinct from their positive values in the specific models. (3) The estimated α values for VASI_origin were positive in some cities of Anhui, Henan, and Hubei provinces, distinct from their negative values in the specific models. (4) The estimated α values for average wage of origin were negative in some cities of Shanxi province, distinct from their positive values in the specific models. Similarly, the estimated γ values for average wage of destination were positive in some cities of the Pearl River Delta, distinct from their negative values in the specific models. This is in line with the actual situation, as the increased income that is obtainable in these destination cities attracts more migrant workers, especially to large city agglomerations such as the Pearl River Delta. (5) The estimated α values for foreign capital of origin were negative in some cities of Anhui province, whereas the estimated γ values for foreign capital of destination were negative in some cities of Henan province and positive in Chongqing, all of which were opposite in sign to their values in the specific models. Thus, by increasing foreign investment in Chongqing, its population attractiveness has been improved. (6) The estimated α values for IPIP_origin were positive in some cities of Zhejiang province and Yunnan province, distinct from their negative values seen in the specific models. The estimated γ values for IPIP_destination were positive in some cities of Jiangsu province and negative in some cities of Anhui and Henan province, opposite from their signs in the specific models.
It can be seen that these differences were mainly concentrated in Henan, Anhui, Hubei, and Chongqing. This was attributable to the enormous variation in socioeconomic environments in these regions. The actual pattern in these regions could not be fitted by simple local-weighting approaches. The overall trend of parameter values in the results of focused and specific models was consistent. However, the results of focused models tended to be regionally consistent, e.g., the estimated parameters for the cities that are near the Pearl River Delta region were similar to the overall trend of the Pearl River Delta region. The results of specific models also tended to be discrete. For instance, in some individual cities in southwestern and northeastern regions, such as Chongqing and Shenyang, the estimated parameters differed depending on the surrounding cities or provinces. This clearly illustrated that the results of the two specific models were one-sided but that the results of the two focused models were regionally consistent.
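To illustrate the local-weighting idea behind the focused models, the sketch below refits the Poisson gravity model around one focal city with Gaussian kernel weights, so that flows involving nearby cities count more. The kernel form, bandwidth, and column names are assumptions, not the paper's exact SWIM specification.

    import numpy as np
    import statsmodels.api as sm

    def origin_focused_fit(od, coords, focal_city, bandwidth_km=500.0):
        """Locally weighted Poisson gravity fit around `focal_city`.

        od     : DataFrame with columns origin, flow, pop_o, pop_d, dist_km
        coords : dict mapping city name -> (x, y) position in km
        """
        fx, fy = coords[focal_city]
        # Distance from each record's origin city to the focal city.
        d = np.array([np.hypot(coords[o][0] - fx, coords[o][1] - fy)
                      for o in od["origin"]])
        w = np.exp(-0.5 * (d / bandwidth_km) ** 2)  # Gaussian kernel weights

        X = sm.add_constant(np.log(od[["pop_o", "pop_d", "dist_km"]]))
        res = sm.GLM(od["flow"], X, family=sm.families.Poisson(),
                     var_weights=w).fit()
        return res.params  # local alpha, gamma, beta for this focal city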
Comparison of Spatial Interaction Models
We compare the SWIMs with other spatial interaction models, as shown in Table 10. All of these models take the re-work dataset as input to obtain their fit results, and all results satisfy the statistical hypothesis tests. As shown in Table 10, the SWIMs (the origin-focused and destination-focused models) have the best goodness-of-fit, with the highest mean value of McFadden's pseudo R 2. This verifies that the SWIMs significantly outperform the other models, indicating that the weighted interaction models performed better by considering local characteristics. The mapping of the McFadden pseudo R 2 values in Figure 11, using the destination-based models as an example, illustrates the reasonableness of these models in more detail.
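For reference, McFadden's pseudo R 2 used in this comparison is one minus the ratio of the fitted model's log-likelihood to that of an intercept-only model; a small sketch, assuming a statsmodels GLM as above:

    import numpy as np
    import statsmodels.api as sm

    def mcfadden_pseudo_r2(y, X, family=sm.families.Poisson()):
        full = sm.GLM(y, X, family=family).fit()
        null = sm.GLM(y, np.ones((len(y), 1)), family=family).fit()  # intercept only
        return 1.0 - full.llf / null.llf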
Figure 11. The goodness-of-fit of destination-based models: (a) pseudo R 2 of the destination-specific model; (b) pseudo R 2 of the destination-focused model.
As shown in Figure 11, the pseudo R 2 values vary significantly across cities in different locations, indicating spatial heterogeneity in population flow. The pseudo R 2 values were higher in the city agglomerations with first- and second-level core cities, especially the four major city agglomerations that have been circled. This showed that cities in the same city agglomeration had similar patterns of population flow and that city agglomerations with a higher level of development had stronger radiation capacity (circled area in Figure 11b). In conclusion, the spatial distribution of the pseudo R 2 values in the results of these two models is consistent, which also validates the reasonableness of the SWIMs.
In addition, as stated in the methodology, the gravity model assumes that, ceteris paribus, greater flows occur between larger and closer places than between smaller and more distant places; that is, the intensity of population flow decreases with increasing distance between two places, with a relatively steep distance deterrence. Mapping the value of the distance-decay parameter β therefore illustrates the reasonableness of the SWIMs in more detail, using the origin-based models in Figure 12 as an example. The estimated value of the global distance-decay parameter β is −1.9758, as shown in Table 10, indicating the negative effect of distance on population flow, which is consistent with distance decay. As shown in Figure 12, the distance-decay coefficient β in these two models has a similar spatial distribution, with negative coefficients throughout. The distance-decay effect was strongest in the northern cities, followed by the southern coastal cities, and weakest in the central cities. Remarkably, the β values of some cities of Henan, Anhui, and Hubei provinces (among the six provinces in central China) were larger in the origin-focused model than in the origin-specific model (circled area in Figure 12b), because these areas are the buffer zone of the Yangtze River Delta and the Beijing-Tianjin-Hebei region, with large populations and congested traffic; population flow within these areas is thus relatively more affected by distance. Therefore, on the one hand, the fact that all β values are negative conforms to distance decay and confirms that the SWIMs are reasonable. On the other hand, the distinctive finding for Henan, Anhui, and Hubei provinces with respect to the distance factor is consistent with the findings for the other factors mentioned above; this consistency further shows that the SWIMs are reasonable.
In summary, comparing the goodness-of-fit of the models shows that the SWIMs significantly outperform the other spatial interaction models. At the same time, the reasonableness of the SWIMs is verified by the spatial distributions of the distance-decay parameter and the goodness-of-fit.
Uncertainty Analysis
Although the above highly spatiotemporally detailed data provided new support for the study of population distribution and population flow, the intensity index of population migration was calculated from the mobility information recorded by people's mobile terminals. Because not all users use the AMAP application, data deviation, data discontinuity, and data loss were inevitable. Moreover, privacy requirements prevented the accurate assessment of the purpose of the population flow; most is migrant worker flow, but there is some student and tourism flow. Furthermore, we only used an intensity index for population flow, rather than actual flow counts. All of these aspects mean that there is uncertainty in the data.
To obtain a more accurate population flow pattern and verify the results, we first divided the dataset into four subsets according to the time nodes of the Spring Festival and then analyzed the spatial and temporal trends of population mobility. Even though we used a different platform for dataset collection and different SNA methods to examine the same population flow during the Spring Festival of 2019, the results of our pattern exploration were consistent with the previous findings of Yang et al. [13], which illustrates that our results are reasonable.
Furthermore, because population flow is restricted and influenced by many complex factors, only selected socioeconomic factors devoid of multicollinearity problems were explored with the help of spatial interaction models. We used a family of spatial interaction models to quantify the effect of socioeconomic factors on population flow, and consistent consensus conclusions were obtained. Although different results explained the improved performance of each model, the uncertainty of the results, due to the limitations of the data, was not ignored.
To better consider the effect of surrounding cities in spatial interaction models, we applied a SWIM that incorporated the local weighting approach used in the GWR model into a spatial interaction model. Both the advantages and the weaknesses of spatially weighted regression models were inherited by this approach. The advantage was that the SWIM results were more regionally consistent than the one-sided results of the specific models, which confirmed that the SWIM better considered the local characteristics of interactive processes. However, there were differences between the regression results of the SWIM and the specific models for the Henan, Anhui, Hubei, and Chongqing regions. Because these regions are large-scale population-focused and outflow areas, their population flow patterns are complex and multipatterned; their actual patterns are therefore difficult to fit with simple local-weighting approaches. Indeed, the spatially weighted regression models were only adapted to regions with similar patterns of population flow. Bandwidth is an important parameter that determines the range over which a city has an effect. The optimal bandwidth should be such that the larger the urban agglomeration (B and C in Figure 13), the greater the bandwidth and the greater its effects. However, in the northeastern regions (A in Figure 13), because of the sparse population, vast area, and lower level of sampling, the regression error was large, with a large bandwidth. Thus, when incorporating the local weighting approach of the GWR model into a SWIM, these ubiquitous problems must be noted; we believe they will be addressed in future work.
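One common way to choose the bandwidth discussed above is leave-one-out cross-validation. The sketch below is a generic version of that idea; the candidate grid, error metric, and the `fit_predict` callback are assumptions, not the paper's procedure.

    import numpy as np

    def select_bandwidth(fit_predict, od, candidates_km=(200, 500, 1000, 2000)):
        """Pick the bandwidth whose leave-one-out squared error is smallest.

        fit_predict(train_df, test_row, bw) -> predicted flow for test_row
        """
        best_bw, best_err = None, np.inf
        for bw in candidates_km:
            errs = [(fit_predict(od.drop(od.index[i]), od.iloc[i], bw)
                     - od.iloc[i]["flow"]) ** 2 for i in range(len(od))]
            err = float(np.mean(errs))
            if err < best_err:
                best_bw, best_err = bw, err
        return best_bw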
Comparison with Related Research
Recent years have seen the emergence of a series of articles that attempted to comprehensively analyze the spatiotemporal patterns and influencing factors of population mobility. Compared with these related studies, this study has two innovations. First, we used population flow data, which are more highly spatiotemporally detailed. Second, we used advanced SNA methods and spatial interaction models to analyze spatiotemporal patterns and to quantify their effect. In particular, the SWIM is better at considering the local characteristics of an interactive process and was first implemented to study large-scale population flow. Compared with other spatial interaction models, the SWIM results are more detailed and meaningful.
Conclusions
In previous studies, the shortcomings of low spatiotemporally detailed data and the insufficient consideration of interactive differences in traditional spatial analysis models limited detailed study. In response to these problems, based on the population flow dataset collected from the AMAP Migration Map, we used a combination of SNA methods and spatial interaction models to explore the spatiotemporal patterns of population flow, and their determinants, during the Spring Festival in China. First, the SNA methods revealed that a hierarchy and a community structure existed in the spatiotemporal pattern of daily population flow. The hierarchical structure showed that the developmental level of a city was highly consistent with the intensity of its population flow and that the different network levels of population flow correlated with different developmental levels of cities. Thus, the nationwide network level was composed of the core cities (Beijing, Shanghai, Guangzhou, Chengdu, and Chongqing) of the four major city agglomerations, whereas the regional network level was composed of second-level cities (e.g., Xi'an, Kunming, and Guiyang). The community structure showed obvious correlations between city agglomerations and population flow in China, with the four major city agglomerations in China occupying core positions in these communities. Most communities were cross-regional, and the population flow within the same community was relatively similar. In addition, most core cities of city agglomerations were the capital cities of their province.
Then, by using a family of spatial interaction models to reveal the effects of socioeconomic factors on re-work population flow, consistent conclusions were obtained. The results of these models showed that the population flow pattern was in line with the distance-decay effect, which was closely related to regional traffic development. Thus, population, as the determinant factor of the intensity of population flow, mainly flowed to the first-and second-level urban agglomerations, and population loss occurred in some cities of southwestern, northeastern, and northern China. The overall trend of value-added primary industry showed that most migrant workers were employed in primary industry. Moreover, primary-industry workers mainly flowed from the cities in southwestern and northwestern China to coastal areas. Furthermore, even though these cities were saturated with primary-industry workers, there was still a demand for secondary-industry workers; for example, in southwestern China, secondary industry was gradually increasing and attracting more workers. Income and foreign capital trends conformed to neoclassical theory, with an increase in income and foreign capital increasing the attractiveness of southwestern and northeastern China. In addition, the overall trend of pension insurance showed that attractiveness could be improved by improving the social security system.
Finally, these conclusions showed that there are obvious problems in China, such as unbalanced regional development, with population loss and unreasonable industrial allocation in some areas, which have led to differences in regional development conditions. Thus, our findings and conclusions may assist policymakers to control population loss, rationally allocate industrial structure, and balance development and will also promote progress in studies on population flow. In addition, these spatially weighted interactive models used in this study can be further applied to other large-scale population mobility issues or other spatial interaction issues, such as Thanksgiving in the United States. However, these spatially weighted interactive models suffer from some ubiquitous problems. Effectively selecting the optimal bandwidth and addressing the problem of under-sampling remain key challenges.
|
2020-11-19T09:14:34.316Z
|
2020-11-12T00:00:00.000
|
{
"year": 2020,
"sha1": "bef29634954941ab55cc88a0890dded7623cc1bc",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2220-9964/9/11/670/pdf",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "3e4090434f7f4b314a4705cb31495861135edf63",
"s2fieldsofstudy": [
"Economics"
],
"extfieldsofstudy": [
"Computer Science",
"Geography"
]
}
|
261545423
|
pes2o/s2orc
|
v3-fos-license
|
Editorial: Women in neurotoxicology: 2021
Department of Molecular Pharmacology and Experimental Therapeutics, Mayo Clinic, Rochester, MN, United States, Department of Physical Medicine and Rehabilitation, Mayo Clinic, Rochester, MN, United States, Department of Medical Education, Texas Tech Health Sciences Center-School of Medicine, Lubbock, TX, United States, Addiction Centre, Biella, Italy, Italian Society of Toxicology (SITOX), Milan, Italy, Ser.D Biella-Drug Addiction Service, Biella, Italy
Editorial on the Research Topic Women in neurotoxicology: 2021
Women's participation in neurotoxicity research is fundamentally important to provide diverse perspectives during study design, execution, analysis, and interpretation of research. This includes a myriad of considerations, from sex-specific vulnerabilities, environmental exposures, and reproductive and developmental considerations to underrepresentation in clinical trials, among others. Despite this, fewer than 30% of researchers worldwide identify as women. This Research Topic was inspired to highlight accomplished women researchers who wished to contribute their first- or last-author publications across the field of neurotoxicology. By highlighting women in neurotoxicology, we also provide an opportunity for female researchers to serve as role models for the next generation.
First, the manuscript by Carolina Caroso dos Santo Durão et al. examined exposure to environmental tobacco smoke (ETS) during the embryonic stage to see how it affected neuroinflammation in the adult mouse later in life. Multiple sclerosis (MS) is a neurological disorder involving inflammation and demyelination of the central nervous system that has a higher prevalence in women. A common mouse model of MS, called experimental autoimmune encephalomyelitis (EAE), was used in this study. In cell culture and EAE models, ETS exposure resulted in increased neuroinflammatory markers when compared to compressed air, including increased levels of the proinflammatory cytokines IL6 and TNFα. Functionally, the ETS exposure resulted in significant worsening of EAE mouse clinical scores.
Next, a paper by Yaghoobi et al. investigated the developmental neurotoxicity potential of specific polychlorinated biphenyls (PCBs) in a zebrafish model. The researchers confirmed that different PCB congeners exhibited varying potency in sensitizing ryanodine receptors (RYR) in zebrafish muscle. They found that a subset of these PCB congeners altered photomotor behavior in larval zebrafish, and the pattern of behavioral effects corresponded to the pattern of RYR sensitization, providing in vivo evidence supporting the hypothesis that RYR sensitization contributes to the developmental neurotoxicity of PCBs.
A review paper authored by Albrecht et al. summarizes the effects of developmental lead (Pb) exposure on ethanol (EtOH) responses in Caenorhabditis elegans (C. elegans), a powerful model organism used to elucidate toxicant mechanisms. The authors describe morphological changes in dopamine synapses and dopamine-dependent behaviors, providing insights into the neurobiological mechanisms underlying the relationship between these neurotoxicants, and highlight the utility of C. elegans as a model for studying combined neurotoxicant effects.
Finally, a rat study by Reyes-Bravo et al. explored the role of chronic exposure to the herbicide atrazine in the GABAergic and glutamatergic systems. After 1 year, there were changes in vertical activity episodes and in several genes within the glutamatergic and GABAergic systems in the brain regions explored, which included the striatum, nucleus accumbens, ventral midbrain, prefrontal cortex, and hippocampus. Of these, the striatum had the most changes, followed by the hippocampus. These results spark interest in performing neurochemical and neurobehavioral analyses, especially in models of neurodegenerative disorders affecting the basal ganglia.
Overall, the research studies uncover associations between environmental exposures and neurodevelopmental outcomes across several model systems and stages of development. The impact of environmental factors on neurological disease development has implications for the health of women across their lifespan, in addition to their families, and also provides research communities valuable insight into the direction of future scientific endeavors (Figure 1).
|
2023-09-06T15:18:16.824Z
|
2023-09-04T00:00:00.000
|
{
"year": 2023,
"sha1": "c4a151b9aebc1a5eb9305912ba425e92bb4b3715",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/ftox.2023.1248748/pdf?isPublishedV2=False",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "16ef9afcea31a7aaf077d06011b081b8b3b38ee3",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
12136193
|
pes2o/s2orc
|
v3-fos-license
|
The Microbial Ecosystem Distinguishes Chronically Diseased Tissue from Adjacent Tissue in the Sigmoid Colon of Chronic, Recurrent Diverticulitis Patients
Diverticular disease is commonly associated with the older population in the United States. As individuals age, diverticulae, or herniations of the mucosa through the colonic wall, develop. In 10–25% of individuals, the diverticulae become inflamed, resulting in diverticulitis. The gut ecosystem relies on the interaction of bacteria and fungi to maintain homeostasis. Although bacterial dysbiosis has been implicated in the pathogenesis of diverticulitis, associations between the microbial ecosystem and diverticulitis remain largely unstudied. This study investigated how the cooperative network of bacteria and fungi differs between a diseased area of the sigmoid colon chronically affected by diverticulitis and adjacent non-affected tissue. To identify mucosa-associated microbes, bacterial 16S rRNA and fungal ITS sequencing were performed on chronically diseased sigmoid colon tissue (DT) and adjacent tissue (AT) from the same colonic segment. We found that Pseudomonas and Basidiomycota OTUs were associated with AT while Microbacteriaceae and Ascomycota were enriched in DT. Bipartite co-occurrence networks were constructed for each tissue type. The DT and AT networks were distinct for each tissue type, with no microbial relationships maintained after intersection merge of the groups. Our findings indicate that the microbial ecosystem distinguishes chronically diseased tissue from adjacent tissue.
herniation of mucosa and muscularis mucosa through the wall of the colon, usually in areas where mural blood vessels pierce the muscle layer of the bowel wall 8,9 . In Western countries, 75-90% of diverticulosis occurs in the sigmoid colon [10][11][12] . In approximately 10-25% of diverticulosis patients, the diverticulae become inflamed, leading to diverticulitis 8 . Diverticulitis is empirically treated with broad-spectrum antibiotics, suggesting that bacteria may contribute to its pathogenesis. In some cases, individuals with chronic, recurrent episodes of diverticulitis may require surgical resection of the sigmoid colon.
There is limited data suggesting that bacterial dysbiosis may be an important determinant in the pathogenesis of diverticulitis 13,14 . However, research on this topic is challenged by the difficulty in identifying an appropriate control group since, for reasons that are still unclear, many persist with asymptomatic diverticulosis. Additionally, the microbial ecosystem relies on relationships between bacteria and fungi, so the exclusion of fungal organisms in prior studies on this subject omits a potential key causal factor in this disease. In the present study, both bacteria and fungi were investigated separately as well as through transkingdom interactions to determine whether microbial differences in tissue chronically affected by diverticulitis (DT) versus adjacent non-affected tissue (AT) can distinguish these two tissue types.
Using 16S rRNA and internal transcribed spacer (ITS) gene sequencing, we assigned bacterial and fungal sequencing reads to operational taxonomic units (OTUs) and analyzed how the microbial ecosystem differed between patient-matched DT from the sigmoid colon and AT from the same colonic segment. Underlying the high similarity of commensal organisms, we hypothesized that a distinctive subset of mucosa-associated pathogenic microbes would be associated with DT relative to AT, and that the microbial network of bacteria and fungi would differ between the two tissue types. Inferred metagenomic analyses were performed to evaluate the predictive metagenomes of the microbial communities associated with DT and AT as further explanation of an association between the microbiome and diverticulitis. Our study describes distinct microbial ecological networks which distinguish DT and AT, and these data suggest that a dysbiotic network of microbes is associated with the mucosa of chronically diseased sigmoid colon.
Results
Unique bacterial taxa colonize chronically diseased and adjacent tissue. Prior to assessing the mucosa-associated bacterial community structure, we evaluated the cellular architecture and inflammatory cell infiltrate present in DT and AT. Hematoxylin and eosin stained tissue sections were examined from nine patients whose clinical demographics are described in Table 1. DT is an area chronically affected by diverticulitis that demonstrated a thickened bowel wall, while AT is obtained from an area adjacent to DT and is unaffected by disease (Fig. 1A,B). The cellular architecture for both DT and AT was normal-appearing, with an intact epithelium and crypts. AT showed negligible mucosal neutrophilic inflammation (Fig. 1C,D), whereas DT also showed minimal inflammation but demonstrated increased numbers of neutrophils within the lamina propria (Fig. 1E,F).
We then performed 16S rRNA gene sequencing to analyze the microbiome of patient-matched DT and AT and found that bacterial communities in both tissue types were dominated by proteobacterial taxa (Fig. 2A). Alpha diversity measures did not reveal differing trends between DT and AT, likely due to the use of patient-matched samples resulting in high similarity of commensal organisms (Supplemental Fig. S1A). Overall clustering based on weighted UniFrac distance was not significant when comparing tissue type (ANOSIM test P = 0.521) (Supplemental Fig. S1B) or patient identification (ANOSIM test P = 0.110) (Supplemental Fig. S1C).
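As an illustration of how such a test can be run, here is a minimal sketch with scikit-bio; the sample IDs, distances, and grouping vector are hypothetical placeholders, not the study's data.

    from skbio import DistanceMatrix
    from skbio.stats.distance import anosim

    # Hypothetical weighted-UniFrac distances between four samples.
    ids = ["P1_DT", "P1_AT", "P2_DT", "P2_AT"]
    dm = DistanceMatrix([[0.00, 0.40, 0.50, 0.60],
                         [0.40, 0.00, 0.55, 0.45],
                         [0.50, 0.55, 0.00, 0.35],
                         [0.60, 0.45, 0.35, 0.00]], ids)

    grouping = ["DT", "AT", "DT", "AT"]  # tissue type of each sample
    print(anosim(dm, grouping, permutations=999))  # reports R statistic and p-value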
We next analyzed the core microbiome, which typically refers to the population of bacteria comprising two or more habitats 15 . In our study, the core microbiome was defined as common bacteria with OTUs present in ≥80% of all samples. This analysis revealed that subsets of bacterial taxa were distinct between DT and AT (Fig. 2B). Although a majority of OTUs (66.0%) were shared among the two tissue types, 18.9% and 15.1% of OTUs were specific to DT and AT, respectively (Supplemental Table S2).
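The ≥80% prevalence rule used here reduces to a simple column filter on a samples-by-OTUs count table; a short sketch, with table and variable names hypothetical:

    import pandas as pd

    def core_otus(otu_table: pd.DataFrame, prevalence: float = 0.80) -> pd.Index:
        """Return OTUs present (count > 0) in at least `prevalence` of samples."""
        present = (otu_table > 0).mean(axis=0)  # fraction of samples with each OTU
        return otu_table.columns[present >= prevalence]

    # Shared and tissue-specific core sets then follow from set operations, e.g.
    # set(core_otus(dt_counts)) & set(core_otus(at_counts)) for the shared core.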
To further define which bacterial taxa were enriched in each tissue type, Linear discriminant analysis Effect Size (LEfSe) analysis was used (Fig. 3A). A total of 33 taxa (25 being taxonomically identified) were enriched in both groups, with a linear discriminant analysis (LDA) score >1.5 (Supplemental Table S3). In general, bacteria within the phyla Proteobacteria and Actinobacteria were most represented in the LEfSe plot for DT and AT. The OTU Pseudomonas (P = 0.038) was most predictive of AT while Microbacteriaceae (P = 0.019) were enriched in DT. We then used Phylogenetic Investigation of Communities by Reconstruction of Unobserved States (PICRUSt) to infer the potential mechanistic function from virtual metagenomics data collected from 16S rRNA sequencing (Fig. 3B). NSTI scores were calculated on all 23 samples included in PICRUSt metagenome prediction analysis and yielded an average score of 0.108 ± 0.013. Five pathways were identified with an LDA score >1.5.
In accordance with the observed association between methane metabolism and DT, methanogenic archaea, such as Thermoplasmata, were found to be predictive of DT (Fig. 3A).
Pathogenic fungal species are found in chronically diseased tissue. Fungal organisms also constitute a large part of the intestinal ecosystem and interact with bacteria to influence their community structure, so we next analyzed the mycobiome of DT and AT. At the fungal class level, we detected large abundance profile differences for within-subject variation and between-subject variation among tissue types (Fig. 4A). Similar to the bacterial analysis, alpha diversity measures for fungal communities did not reveal significant differences between DT and AT (Supplemental Fig. S2A). While weighted UniFrac distance analysis showed no significant clustering of overall fungal community structure based on tissue type (ANOSIM test P = 0.635) (Supplemental Fig. S2B), analysis by patient identification did reveal significant clustering (ANOSIM test P = 0.010) (Supplemental Fig. S2C). Core mycobiome analysis (OTUs present in ≥80% of all samples) revealed that a majority of OTUs (75.0%) were shared between DT and AT (Fig. 4B). Only 8.3% and 16.7% of OTUs were specific to DT and AT, respectively (Supplemental Table S5). Using LEfSe to identify fungal taxonomic biomarkers for DT and AT with an LDA score >1.5, the OTU Exophiala found in the division Ascomycota was enriched in DT (P = 0.037) while three Basidiomycota OTUs identified to Pluteaceae (P = 0.039), Pluteus (P = 0.039), and Agaricales (P = 0.039) were correlated with AT (Fig. 4C) (Supplemental Table S6). Because examining the mycobiome did not identify sufficient evidence to suggest that fungal organisms alone may be involved in diverticular disease, we analyzed the interactive network of bacterial-fungal relationships.
Distinct bipartite co-occurrence networks describe chronically diseased tissue and adjacent tissue. The gut ecosystem thrives on relationships between microbes to promote a healthy environment and physiological homeostasis, and in turn, disruption of this homeostatic ecosystem may promote disease states 2-6 . To examine how bacteria and fungi co-exist to potentially maintain the ecosystem of DT and AT, bipartite co-occurrence networks were constructed and analyzed. Positive and negative bacterial-fungal relationships were determined for each tissue type identifying co-existence or competitive exclusion, respectively 16 . The bipartite co-occurrence network plot for DT is presented (Fig. 5). As predicted, both positive and negative correlations (Spearman's rho >0.80) were observed, confirming the presence of an interactive network of microorganisms.
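A hedged sketch of this construction: Spearman's rho is computed between every bacterial-fungal OTU pair across samples, and only strong correlations (|rho| > 0.80) become edges; the exact filtering details in the study may differ.

    import networkx as nx
    from scipy.stats import spearmanr

    def bipartite_cooccurrence(bact, fung, threshold=0.80):
        """bact, fung: DataFrames (samples x OTUs) for one tissue type."""
        G = nx.Graph()
        G.add_nodes_from(bact.columns, kingdom="bacteria")
        G.add_nodes_from(fung.columns, kingdom="fungi")
        for b in bact.columns:
            for f in fung.columns:
                rho, _ = spearmanr(bact[b], fung[f])
                # Positive rho = co-existence; negative rho = competitive exclusion.
                if abs(rho) > threshold:
                    G.add_edge(b, f, rho=float(rho))
        return G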
Using the intersection merge method to evaluate the bacterial-fungal relationships that are similar and different between the DT and AT bipartite networks, we found that each tissue type exhibited differential microbial interactions, suggesting distinct microbial ecologies. For example, enrichment of the OTU Pseudomonas was identified as a taxonomic biomarker of AT (Fig. 3A), and this OTU had a positive relationship with the fungal OTU Aspergillus in AT; however, in DT, this relationship was not observed. Thus, this analysis identified differences in the microbial ecosystem and bacterial-fungal relationships between DT and AT. Such transkingdom interactions allow for a broad overview of the ecological niche that may play a role in the inflammatory processes seen in areas of chronic disease.
Discussion
Our multi-faceted approach of analyzing the microbiome, the mycobiome, and the ecological relationship between bacteria and fungi allowed us to glean several potentially important insights into the differences between chronically diseased diverticular tissue and adjacent non-inflamed sigmoid colon. LEfSe analysis found that AT was associated with enrichment of Pseudomonas and Basidiomycota OTUs while Microbacteriaceae and Ascomycota were enriched in DT. Unique microbial ecological networks distinguished the two tissue types, with no relationships maintained upon merge of the two bipartite co-occurrence network plots. These data suggest that distinct microbial ecosystems may have a role in the inflammatory process associated with diverticular disease.
In line with our results, a previous study found higher diversity of Proteobacteria, with Pseudomonas as one of the predictive OTUs for diverticulitis patients compared to IBD and colorectal cancer patients 14 . However, whether this enrichment is associated with non-specific intestinal inflammation 17 or the diverticular disease process will require further research. Pseudomonas is an organism which in healthcare settings is known to have a broad range of behavior, but with a potential for antibiotic resistance 18 . Animal models of intestinal anastomoses have shown that Pseudomonas can release collagenases in response to tissue injury 19 . This may have a correlation with diverticular disease, where an initial mechanical insult from biochemical stimuli or stool within the diverticulae may promote the release of degradative enzymes by this opportunistic pathogen. PICRUSt allowed the inference of mechanistic pathways potentially associated with different bacterial communities. One of the enriched pathways predicted by PICRUSt was methane metabolism, which is consistent with the LEfSe finding that Thermoplasmata are predictive of DT, as archaea within this class have been identified as methylamine-degrading gut methanogens [20][21][22] . Previous studies similarly found an enrichment of methanogenic archaea, such as Methanobrevibacter smithii, in patients with diverticulosis 23 and with constipation 24 . A predominance of methanogenic microorganisms may, therefore, be associated with gut motility patterns which lend to the development of diverticulae. It should be emphasized that PICRUSt is a computational method used to analyze virtual metagenomic data, and due to its inferred approach, physiological interpretations of these results should be treated with caution as they require experimental confirmation. Future work should include shotgun metagenomics and meta-transcriptomics sequencing to elucidate the genetic potential and activities within these gut microbial communities.
Beta diversity analyses revealed that overall fungal community structure was more strongly driven by inter-individual variation than tissue type, suggesting that diverticulitis is perhaps not associated with fundamental differences in mycobiome composition, but shifts in a subset of taxa and their accompanying interactivity with the rest of the community. Fungal sequencing data found an enrichment of OTUs associated with Ascomycota in DT, while AT was associated with Basidiomycota OTUs. A prior study that correlated diet with gut ecology noted an inverse relationship in enrichment between these two taxa, and a correlation between these fungi and certain bacterial OTUs such as Prevotella and Bacteroides 25 . How these fungal organisms might potentially contribute to diverticulitis is unknown at this time, although in IBD, a gut disease known to be associated with a chronic intestinal dysbiosis and immune dysregulation, at least one previous study reported an increased Basidiomycota/ Ascomycota ratio from fecal samples 5 . The variable size of the ITS region along with relatively poor sequencing of reverse reads in our study prevented analysis of paired-end sequence data. However, a previous report indicates that the analysis of high quality single reads provides robust representation of present communities 26 . Nevertheless, how different fungal organisms may contribute to varying bacterial ecologies and host immune responses represents a currently unstudied aspect of diverticulitis as well as most other gut diseases.
Using tissue samples to assess the mucosa-associated microbiome rather than stool limits our study to patients who require surgery for their disease. Additionally, these patients demonstrate a more severe phenotype by virtue of their need for surgery. All such patients required antibiotics pre-operatively; however, only four patients were on antibiotics in the three months prior to surgery. Antibiotic usage (Table 1) did not influence bacterial species richness (cephalexin P = 0.192, neomycin P = 0.175, amoxicillin and clavulanic acid P = 0.151, and doxycycline P = 0.192) or fungal species richness (cephalexin P = 0.080, neomycin P = 0.162, amoxicillin and clavulanic acid P = 0.085, and doxycycline P = 0.101). By utilizing tissue, we are able to examine the mucosa-associated population of microbes rather than cataloguing those excreted in the stool. This yields a more complete analysis of mechanistically relevant taxa for a disease whose hallmark is changes to the colon wall. Given the shape of diverticulae, we believe that using mucosa-associated organisms is of particular importance. Diverticulae can have narrow necks, limiting communication with the lumen of the colon and helping to create a physically partitioned, distinct microbial environment.
While perhaps at the cost of a larger sample size, another priority in experimental design was to limit the study population to well-matched and carefully selected cohorts of diverticulitis patients. To overcome inter-individual differences, each subject was used as their own control, by assaying AT collected from a macroscopically non-diseased section of sigmoid colon at the time of surgical resection. The resulting small sample size also hindered the ability to incorporate clinical data as a correlate to microbial findings, and larger studies will be necessary in this regard. Nevertheless, this is the first diverticulitis study to examine the microbiome using a control group that does not harbor a confounding bacterial community structure, such as in IBD patients who harbor an intestinal dysbiosis at baseline 14 .
In summary, we provide a further description of the microbial communities associated with diverticulitis. The inclusion of fungal organisms in microbial analyses of gut diseases has been lacking, and this study represents the first investigation of the mycobiome in diverticulitis. The finding of unique communities of both bacteria and fungi indicates the need to incorporate both kingdoms in future microbiome analyses. Our study additionally demonstrates potential differences between DT and AT in terms of microbial functionality. Future directions in this aim could incorporate matched metagenomics and meta-transcriptomics to glean possible roles of gut microbes in shaping diverticular etiology and progression, as well as host cell transcriptomics to define host-pathogen interactions. Development of a currently unavailable animal model for diverticulitis would also aid in characterizing pertinent microorganisms and potential treatment therapies. In total, examining the interactions between bacterial and fungal species in diverticular disease, evaluating the role anti-fungal agents may have in the treatment of diverticulitis, and exploring microbial metabolic activity would help to further understand the impact of these organisms on disease pathophysiology.
Materials and Methods
Study design and specimen collection. This retrospective cohort study was performed at the Penn State Hershey Medical Center with Institutional Review Board (IRB) approval. Informed consent was obtained from all subjects, and all methods were performed in accordance with the approved guidelines. Between April 2010 and August 2014, chronic, recurrent diverticulitis patients were consented for collection of colonic tissue into the Penn State Hershey Colon and Rectal Diseases Biobank at the time of elective sigmoid resection. Surgical specimens were immediately transported from the operating room to the surgical pathology laboratory, where Biobank staff obtained several full-thickness sections of tissue. Chronically diseased tissue (DT) was an area of chronic inflammation that demonstrated a thickened bowel wall. Adjacent tissue (AT) sections were taken from an area of sigmoid colon with normal-appearing bowel wall thickness, as far away from DT as possible (Fig. 1A,B). This section comprised our patient-matched control that was not influenced by chronic, recurrent diverticulitis. Tissue was flash-frozen and stored at −80 °C until processing for analysis. Confirmation of diverticulitis was based on preoperative CT scans and surgical pathology. Patients with IBD, cancer, or dysplasia were excluded.
Analysis of the microbiome. To analyze populations of bacteria associated with the mucosa as opposed to those present in the stool, DNA was extracted from approximately 250 mg of colonic tissue using the Qiagen DNeasy Blood and Tissue Kit (Qiagen, Frederick, MD). DNA was quantified using a Nanodrop 2000 (ThermoFisher Scientific, Waltham, MA), aliquoted, and shipped on dry ice to Dr. Lamendella at Juniata College. DNA concentrations were quantified with the Qubit 2.0 Fluorometer High Sensitivity dsDNA kit (Life Technologies, Carlsbad, CA) according to the manufacturer's instructions. PCR was performed using the Illumina-barcoded 806R reverse primer and 515F forward primer as previously described 4 . Pooled PCR products were gel purified using the Qiagen Gel Purification Kit (Qiagen, Frederick, MD), quantified using the Qubit 2.0 Fluorometer (Life Technologies, Carlsbad, CA), and samples were combined in equimolar amounts. Prior to submission for sequencing, libraries were quality checked using the 2100 Bioanalyzer DNA 1000 chip (Agilent Technologies, Santa Clara, CA). Pooled libraries were stored at −20 °C until they were shipped on dry ice to California State University (Northridge, CA) for sequencing.
Library pools were size verified using the Fragment Analyzer CE (Advanced Analytical Technologies Inc., Ames, IA) and quantified using the Qubit High Sensitivity dsDNA kit (Life Technologies, Carlsbad, CA). After dilution to a final concentration of 1 nM and addition of a 10% spike of PhiX V3 library as an internal control (Illumina, San Diego, CA), pools were denatured for 5 minutes in an equal volume of 0.1 N NaOH, then further diluted to 12 pM in Illumina's HT1 buffer. The denatured and PhiX-spiked 12 pM pool was loaded on an Illumina MiSeq V2 300 cycle kit cassette with 16S rRNA library sequencing primers and set for 150 base, paired-end reads.
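As an illustrative aside (not part of the published protocol), the loading step above follows the standard C1V1 = C2V2 dilution relation. The Python sketch below works through the arithmetic from a 1 nM pool to the 12 pM loading concentration; the volumes are hypothetical, and the equal-volume NaOH denaturation step is ignored for simplicity.

```python
# Minimal sketch of the C1*V1 = C2*V2 dilution arithmetic behind library
# loading; volumes are hypothetical and the NaOH denaturation step,
# which itself halves the concentration, is ignored here for simplicity.

def dilution_volume(c_stock_pm, c_target_pm, v_final_ul):
    """Volume of stock so that c_stock * v_stock = c_target * v_final."""
    return c_target_pm * v_final_ul / c_stock_pm

v_stock = dilution_volume(c_stock_pm=1000, c_target_pm=12, v_final_ul=600)
print(f"Use {v_stock:.1f} uL of 1 nM pool in 600 uL total to reach 12 pM")
# -> 7.2 uL of pool, topped up with 592.8 uL of HT1 buffer
```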
Forward and reverse reads were merged using VSEARCH version 1.9.10 with a minimum overlap set to 40 bp 26 . Using USEARCHv7, paired sequences were quality filtered at a maximum expected error of 0.5% and truncated at a length of 253 bp. Filtered reads maintained an average Phred Q score of 37.9. Chimeric sequences were identified and removed using the UCHIME algorithm with default settings 27 . A total of 172 out of 4,308 OTUs were removed after chimera checking and a total of 391,649 paired sequences were used in downstream analyses. OTUs were picked de novo using the UPARSE pipeline 28 within USEARCHv7 using a 97% ID setting. Taxonomy was assigned using the assign_taxonomy.py script in QIIME 1.9.0 29 with default parameters using the Greengenes 16S rRNA gene database (13-5 release, 97%) 30 . Results were compiled into a biological observation matrix (biom) format OTU table in which singleton sequences were removed.
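For readers unfamiliar with expected-error filtering, the following Python sketch illustrates the idea behind the maxEE criterion used by USEARCH/VSEARCH-style filters. The function names, the interpretation of 0.5 as an absolute expected-error cutoff, and the example read are assumptions for illustration, not the tools' actual internals.

```python
# Minimal sketch of expected-error (maxEE) quality filtering. Each base's
# error probability is derived from its Phred score, P(err) = 10^(-Q/10),
# and the read passes if the summed expected errors stay under a cutoff.

def expected_errors(phred_scores):
    """Sum of per-base error probabilities for one read."""
    return sum(10 ** (-q / 10) for q in phred_scores)

def passes_filter(phred_scores, max_ee=0.5, trunc_len=253):
    """Discard short reads, truncate the rest, then apply the EE cutoff."""
    if len(phred_scores) < trunc_len:
        return False
    return expected_errors(phred_scores[:trunc_len]) <= max_ee

# A read of uniform Q30 bases has EE = 253 * 0.001, roughly 0.25: it passes.
print(passes_filter([30] * 300))  # True
```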
Analysis of the mycobiome. The ITS region between the 18S and 5.8S rRNA genes was amplified in 25 µL PCR reactions using the same concentrations as previously mentioned, with the exception of adding 5 µL of undiluted template DNA and the use of the ITS1 forward primer (5′-AATGATACGGCGACCACCGAGATCTACACGGCTTGGTCATTTAGAGGAAGTAA-3′) and the Illumina-barcoded ITS2 reverse primer (5′-CAAGCAGAAGACGGCATACGAGATTACCGCTTCTTCCGGCTGCGTTCTTCATCGATGC-3′), designed to avoid common PCR-related biases in generating fungal amplicons of the variable target region 31 . The protocol described by Smith and Peay is one of the only one-step PCR assays compatible with Illumina sequencing; ITS1F is highly fungal-specific, whereas other ITS2 primers are often less specific and can co-amplify host tissue 32 . A total of 6,116,765 forward reads and 419,207 reverse reads were retrieved from ITS sequencing. Because of the higher read counts, and because more samples could be retained in the fungal dataset, only the forward reads were analyzed. This approach was used by Nguyen et al., who found that analyzing forward reads for ITS region 1 yielded a more robust analysis in a mock community 33 . Forward ITS sequences were quality filtered using VSEARCH 1.11.1 with a maximum expected error of 0.5% and truncated at a length of 150 bp to retain an average Phred Q score of 35.6 throughout the entire read length. The sequences were trimmed using Trimmomatic to remove low-quality regions 32 . OTUs were picked using the open-reference UCLUST algorithm in QIIME 1.9.0 29 at the default OTU threshold of 0.97, and singleton sequences were discarded. Taxonomy was assigned using the BLAST option in the assign_taxonomy.py script against version 7 of the UNITE fungal ITS database 34,35 with the maximum e-value set to the default of 0.001. Taxa with no BLAST hits were removed from the OTU table for downstream analysis.
Diversity and Statistical Analyses. Alpha diversity rarefaction curves were created within the QIIME 1.9.1 package 29 using the untransformed OTU table. Multiple rarefactions were performed on the 16S rRNA OTU table from all samples using a minimum depth of 0 sequences to a maximum depth of 7000 sequences, with a step size of 700 for 20 iterations. Multiple rarefactions were performed on the ITS OTU table from all samples using a minimum depth of 0 sequences to a maximum depth of 6000 sequences, with a step size of 600 for 20 iterations. Rarefactions were then collated and plotted using observed species, Chao1, PD Whole Tree, and Heip's evenness diversity metrics. Alpha diversity was compared between disease states, as well as age, sex, BMI, smoking history, and antibiotic administration. Richness plots were made within Phyloseq using the untransformed OTU table against the Observed OTUs, Chao1, and ACE metrics with lines connecting each patient's diverticular and normal samples.
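As a hedged illustration of what the multiple-rarefaction step computes (the study used QIIME's own scripts, not this code), the Python sketch below subsamples a toy OTU count vector without replacement at increasing depths and averages observed richness over repeated iterations; all names and numbers here are illustrative.

```python
# Minimal sketch of multiple rarefaction for an alpha-diversity curve:
# subsample each depth many times without replacement and average the
# observed OTU richness, mirroring QIIME's collated rarefaction output.
import random

def rarefy(counts, depth):
    """Draw `depth` reads without replacement from an OTU count vector."""
    pool = [otu for otu, n in enumerate(counts) for _ in range(n)]
    sub = [0] * len(counts)
    for otu in random.sample(pool, depth):
        sub[otu] += 1
    return sub

def observed_otus(counts):
    return sum(1 for n in counts if n > 0)

sample = [500, 120, 30, 3, 1]                 # toy OTU counts, one sample
for depth in range(100, 601, 100):            # cf. steps of 700 up to 7000
    reps = [observed_otus(rarefy(sample, depth)) for _ in range(20)]
    print(depth, sum(reps) / len(reps))       # mean richness, 20 iterations
```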
Both 16S and ITS OTU tables were normalized using metagenomeSeq's Cumulative Sum Scaling (CSS) algorithm 36 . Beta diversity analyses were performed using weighted UniFrac distance matrices and visualized as 3-dimensional principal coordinate analysis (PCoA) plots in EMPeror. To assess within-group variation in DT and AT samples, average weighted UniFrac distances within each tissue grouping were calculated and compared using a two-sample t-test. Core microbiome analyses were also completed within QIIME 1.9.1 29 . The ANOSIM method within QIIME was employed on the weighted UniFrac distance matrices to test whether beta diversity differed between DT and AT groups. LEfSe analysis was used to identify bacterial taxa whose sequences are differentially abundant between DT and AT groups 37 . LEfSe uses a Kruskal-Wallis test followed by pairwise Wilcoxon rank sum tests, with both alpha values set to 0.05. The effect size cutoffs to determine significantly differentiating taxa between DT and AT groups were set at a genus-level LDA score >1.5 for both 16S rRNA gene and ITS data.
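To make the CSS step concrete, the Python sketch below reproduces the gist of cumulative sum scaling: each sample is scaled by the sum of its counts up to a chosen quantile, which damps the influence of a few dominant OTUs. The fixed 0.5 quantile and the toy matrix are assumptions for illustration; metagenomeSeq selects the quantile adaptively.

```python
# Minimal sketch of cumulative sum scaling (CSS) normalization in the
# spirit of metagenomeSeq; not the package's actual implementation.
import numpy as np

def css_normalize(counts, quantile=0.5, scale=1000):
    """counts: OTU-by-sample matrix; returns CSS-normalized values."""
    out = np.zeros_like(counts, dtype=float)
    for j in range(counts.shape[1]):
        col = counts[:, j]
        cutoff = np.quantile(col[col > 0], quantile)
        s_j = col[col <= cutoff].sum()   # cumulative sum up to the quantile
        out[:, j] = col / s_j * scale
    return out

table = np.array([[900, 10],
                  [ 50, 40],
                  [ 30, 30],
                  [ 20, 20]])
print(css_normalize(table).round(1))
```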
Co-occurrence networks were built and visualized in Cytoscape 3.3.0 using the CoNet plugin 38 . As created in QIIME 1.9.1, untransformed OTU tables consisting of exclusively chronically diseased tissue and adjacent tissue were uploaded. In a preprocessing step, any taxa appearing in <50% of samples were discarded and taxa had to appear as non-zero values in at least two samples to be considered for correlations to account for read count sparsity. Two correlational measures, Pearson and Spearman, and two dissimilarity measures, Bray-Curtis and Kullback-Leibler, were used to calculate correlations between the remaining taxa. The use of all four measures reduces the chance of spurious correlations due to outliers, matching zeroes, or data compositionality. The Benjamini-Hochberg-Yekutieli multiple testing correction was used to adjust P-values in the final step of CoNet processing.
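As an illustrative sketch of the correlation-and-correction core of such a network analysis (CoNet itself adds dissimilarity measures and permutation/bootstrap steps not shown here), the Python example below scores taxon pairs with Pearson and Spearman correlations and adjusts the P-values with the Benjamini-Hochberg-Yekutieli procedure; the toy abundance profiles and the conservative consensus rule are assumptions.

```python
# Minimal sketch of a CoNet-style co-occurrence step: correlate taxon
# pairs with two measures, then apply Benjamini-Hochberg-Yekutieli (BY)
# multiple-testing correction before keeping edges.
from itertools import combinations
from scipy.stats import pearsonr, spearmanr
from statsmodels.stats.multitest import multipletests

taxa = {
    "Pseudomonas":       [12, 30, 45, 60, 75, 90],
    "Microbacteriaceae": [80, 60, 50, 35, 20, 10],
    "Ascomycota_OTU1":   [ 5, 10, 20, 25, 35, 50],
}

pairs, pvals = [], []
for a, b in combinations(taxa, 2):
    _, p_pearson = pearsonr(taxa[a], taxa[b])
    _, p_spearman = spearmanr(taxa[a], taxa[b])
    pairs.append((a, b))
    pvals.append(max(p_pearson, p_spearman))  # conservative consensus

reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="fdr_by")
for (a, b), p, keep in zip(pairs, p_adj, reject):
    print(f"{a} -- {b}: adjusted P = {p:.3g}, edge kept: {keep}")
```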
Unassigned taxa were removed from networks. Remaining taxa were then labeled to the furthest identified taxonomic rank. Green lines were used to connect positive correlations, while red lines showed negative correlations. Chronically diseased tissue and adjacent tissue networks were additionally merged using both difference and intersection parameters. There were no shared correlations between DT and AT networks, suggesting differential correlations between bacteria and fungi in the different tissue types.
Predicted metagenomes were calculated from the 16S rRNA gene data for the DT and AT sample groups using PICRUSt software 39 . A closed-reference 16S rRNA gene OTU table was imported into PICRUSt version 1.1.0, mapped against the Kyoto Encyclopedia of Genes and Genomes (KEGG) database, and summarized at the Level 3 functional annotation.
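For intuition, the Python sketch below captures the conceptual core of a PICRUSt-style prediction: OTU abundances are divided by predicted 16S copy number and multiplied by a per-genome gene-family table to yield a functional profile. All numbers and the tiny KEGG-ortholog table are illustrative; PICRUSt uses precomputed Greengenes-linked tables and an ancestral-state reconstruction step not shown here.

```python
# Minimal conceptual sketch of PICRUSt-style metagenome prediction; the
# toy copy numbers and KEGG-ortholog (KO) table are made up for clarity.
import numpy as np

otu_abundance = np.array([100, 40, 10])    # one sample, three OTUs
copies_16s    = np.array([4, 1, 2])        # predicted 16S copies per OTU
ko_per_genome = np.array([[2, 0, 5],       # OTU1 genome: KO1 x2, KO3 x5
                          [0, 3, 1],       # OTU2 genome
                          [1, 1, 0]])      # OTU3 genome

cell_counts = otu_abundance / copies_16s   # correct for 16S copy number
ko_profile = cell_counts @ ko_per_genome   # predicted KO abundances
print(ko_profile)                          # functional profile per sample
```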
Power analysis. The micropower R-package was used to calculate the PERMANOVA power of our study design 40 . The input for this analysis was the distance matrix made using weighted UniFrac distances from the 16S rRNA OTU table. PERMANOVA power was calculated with five, nine, and 15 subjects per treatment group specified. Each sample size was observed at alpha cutoffs of 0.01, 0.05, and 0.1 performed over 100 bootstrap iterations. The PERMANOVA power calculated from our study design of nine subjects per treatment group at an alpha cutoff of 0.05 was found to be 0.89 (Supplemental Table S1). Varying the alpha cutoffs to 0.01 and 0.1 changed our power calculations as described in Supplemental Table S1.
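For readers interested in how such simulation-based power figures arise, the Python sketch below estimates power in the spirit of the micropower approach: repeatedly simulate two groups with a fixed effect size, run a permutation test, and report the rejection rate at a chosen alpha. The one-dimensional normal data and the simple mean-difference statistic stand in for the study's weighted UniFrac distance matrices and PERMANOVA, so this is a conceptual sketch only.

```python
# Minimal sketch of power estimation by simulation: the rejection rate
# of a permutation test over many simulated datasets approximates power.
import numpy as np

rng = np.random.default_rng(0)

def permutation_p(x, y, n_perm=199):
    """Permutation test on the absolute difference of group means."""
    data = np.concatenate([x, y])
    labels = np.array([0] * len(x) + [1] * len(y))
    obs = abs(x.mean() - y.mean())
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(labels)
        hits += abs(data[labels == 0].mean() - data[labels == 1].mean()) >= obs
    return (hits + 1) / (n_perm + 1)

def power(n_per_group=9, effect=1.5, alpha=0.05, n_sim=200):
    rejections = sum(
        permutation_p(rng.normal(0, 1, n_per_group),
                      rng.normal(effect, 1, n_per_group)) < alpha
        for _ in range(n_sim))
    return rejections / n_sim

print(power())  # fraction of simulations rejecting H0 at alpha = 0.05
```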
COVID-19: pathophysiology, diagnosis, complications and investigational therapeutics
The novel coronavirus disease 2019 (COVID-19) outbreak started in early December 2019 in Wuhan, the capital city of Hubei province, People's Republic of China, and caused a global pandemic. The number of patients confirmed to have this disease has exceeded 9 million in more than 215 countries, and more than 480 600 have died as of 25 June 2020. Coronaviruses were identified in the 1960s and were more recently identified as the cause of a Middle East respiratory syndrome (MERS-CoV) outbreak in 2012 and a severe acute respiratory syndrome (SARS) outbreak in 2003. The current SARS coronavirus 2 (SARS-CoV-2) is the most recently identified. Patients with COVID-19 may be asymptomatic. Typical symptoms include fever, dry cough and shortness of breath. Gastrointestinal symptoms such as nausea, vomiting, abdominal pain and diarrhoea have been reported; neurologically related symptoms, particularly anosmia, hyposmia and dysgeusia, have also been reported. Physical examination may find fever in over 44% of patients (documented in over 88% of patients after admission), increased respiratory rate, acute respiratory disease and possibly decreased consciousness, agitation and confusion. This article aims to present an up-to-date review of the pathogenesis, diagnosis and complications of COVID-19 infection. Currently no therapeutics have been found to be effective. Investigational therapeutics are briefly discussed.
Introduction
On 31 December 2019, the Chinese authorities reported to the World Health Organization an emerging novel coronavirus in patients from Wuhan, Hubei province [1]. Currently the virus is known as severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), and the disease is named coronavirus disease 2019 (COVID-19). This virus has a higher degree of lethality than other endemic viruses; in contrast to the earlier emerging outbreaks of SARS-CoV-1 and MERS-CoV, there is not yet strong evidence for an intermediate host.
The current pandemic is caused by SARS-CoV-2. It shares with the earlier two coronaviruses the features of the Coronaviridae family. Coronaviruses have large (~30 kb) single-stranded, positive-sense RNA genomes; the SARS-CoV-2 genome is roughly 80% identical to other coronaviruses at the nucleotide level. A closely related virus (sharing 90% of its nucleotide sequence) is RaTG13-2013, which was identified in bats [2]. The complete genome of SARS-CoV-2 isolated from Wuhan Hu-1 is available online (https://www.ncbi.nlm.nih.gov/nuccore/NC_045512). Genetic epidemiology of hCoV-19 and data submitted since December 2019 are available from the GISAID database (https://www.gisaid.org/). SARS-CoV-2 is composed of at least 11 open reading frames (ORFs), with a full length of 29 903 bp. Four major structural protein-coding genes have been identified in the coronaviruses: spike protein (S), envelope protein (E), membrane protein (M) and nucleocapsid protein (N) [3]. The spike protein of SARS-CoV-2 utilizes angiotensin-converting enzyme 2 (ACE2) as its cell surface receptor, and this receptor utilization influences the tropism of the virus.
COVID-19 infects people of all ages. However, there are two main groups at a higher risk of developing severe disease: older people, and people with underlying comorbidities such as diabetes mellitus, hypertension, cardiorespiratory disorders, chronic liver diseases and renal failure. Patients with cancer and those receiving immunosuppressive medication as well as pregnant people are also thought to be at a higher risk of developing severe disease when infected [4].
Pathophysiology

Transmission of infection
The transmission of infection is mainly person to person through respiratory droplets. A faecal-oral route is also possible. The presence of the virus has been confirmed in sputum, pharyngeal swabs and faeces [5]. Vertical transmission of SARS-CoV-2 has been reported [6] and confirmed by a positive nasopharyngeal swab for COVID-19. The median incubation period of COVID-19 is 5.2 days; most patients will develop symptoms within 11.5 to 15.5 days. It has therefore been recommended to quarantine those exposed to infection for 14 days.
Pathogenesis mechanisms
SARS-CoV-2 enters host cells through the S spike protein, which binds ACE2 for internalization, aided by the TMPRSS2 protease. The high infectivity of the virus is related to mutations in the receptor binding domain and the acquisition of a furin cleavage site in the S spike protein. The virus interaction with ACE2 may downregulate the anti-inflammatory function and heighten angiotensin II effects in predisposed patients [7]. Given the challenge posed by COVID-19, some have been advocating the use (or cessation) of angiotensin II receptor type 1 (AT1 receptor) blockers and ACE inhibitors during the treatment of COVID-19 in patients with hypertension. Currently the recommendation of the Council on Hypertension of the European Society of Cardiology is that patients should continue their antihypertensive treatment with no changes, because there is no evidence supporting cessation [8]. However, further research is needed to back these recommendations with more evidence.
The invasion of the virus into lung cells, myocytes and endothelial cells of the vascular system results in inflammatory changes including oedema, degeneration and necrosis. These changes are mainly related to proinflammatory cytokines, including interleukin (IL)-6, IL-10 and tumor necrosis factor α, granulocyte colony stimulating factor, monocyte chemoattractant protein 1 and macrophage inflammatory protein 1α, and to increased expression of programmed cell death 1 and T-cell immunoglobulin and mucin domain 3 (Tim-3) [9]. These changes contribute to lung injury, hypoxia-related myocyte injury, the body's immune response, increased damage to myocardial cells, and intestinal and cardiopulmonary changes. Infection with SARS-CoV-2 has also been shown to cause hypoxaemia. These changes lead to accumulation of oxygen free radicals, changes in intracellular pH, accumulation of lactic acid, electrolyte disturbances and further cellular damage.
Body systems and organs affected

The respiratory system is the primary system affected in SARS-CoV-2 infection, and multiple infiltrates of both lungs may be present. Real-time PCR (RT-qPCR) amplification of SARS-CoV-2 nucleic acid from nasopharyngeal swabs or sputum is needed to confirm the diagnosis. However, the test may be negative in the early days of presentation. The clinical picture, including shortness of breath, increased respiratory rate, decreased oxygen saturation and raised C-reactive protein, is nonspecific. Other tests, such as IgG and IgM antibodies against SARS-CoV-2 and CD4+ and CD8+ counts, should be ordered. Both CD4+ and CD8+ counts are substantially lowered in SARS-CoV-2 infection. The pathology of the lungs shows microscopic bilateral diffuse alveolar damage, cellular fibromyxoid infiltrates and interstitial mononuclear inflammatory infiltrates dominated by lymphocytes [10].
The cardiovascular system is usually involved in COVID-19 infection. Biomarkers such as elevated high-sensitivity troponin-T, natriuretic peptides and IL-6 are prognostic, and their progressive rise is associated with poor outcomes. Inflammation of the vascular system results in diffuse microangiopathic thrombi, inflammation of cardiac muscle (myocarditis), cardiac arrhythmias, heart failure and acute coronary syndrome. These cardiovascular complications may cause death [11,12]. The lymphocytopenia observed during the infection potentially involves CD4+ and some CD8+ T cells. These changes disturb the innate and acquired immune responses, causing delayed virus clearance and hyperstimulated macrophages and neutrophils. Notch signaling is known to be a major regulator of cardiovascular function, and it is also implicated in several biological processes mediating viral infections. Recently it has been debated whether targeting Notch signaling can prevent SARS-CoV-2 infection and interfere with the progression of COVID-19-associated heart and lung disease [13].
The reported gastrointestinal manifestations of COVID-19 include diarrhoea, nausea, vomiting and abdominal pain. SARS-CoV-2 RNA has been isolated from stool specimens and from swabs that sampled the anus/rectum [14]. ACE2 has been found to be expressed in the epithelial cells of the gastrointestinal tract, suggesting virus entry through ACE2 receptors, with replication causing inflammatory changes and the patient's symptoms. SARS-CoV-2 also causes liver injury, which manifests as elevated serum alanine aminotransferase and aspartate aminotransferase levels [15]. Mild elevation of serum bilirubin and γ-glutamyl transferase has also been reported in some patients with COVID-19 infection [16]. In most cases, the liver injury was transient and mild. However, severe liver dysfunction or injury has been reported in patients with severe disease. Alanine aminotransferase levels of over 7500 U/L have been reported in a Chinese study [17]. Microscopically, microvesicular steatosis of the liver and mild lobular injury have been found in COVID-19-infected patients [16]. It is not clear whether the observed SARS-CoV-2-associated liver injury is caused by direct viral injury or is related to hepatotoxic drugs, coexisting systemic inflammatory changes, sepsis, respiratory distress syndrome-induced hypoxia or multiple organ failure [18].
There is clinical evidence that SARS-CoV-2 has potential neuropathic properties. Several neurologic-related symptoms have been reported, including headaches, dizziness, seizure, decreased level of consciousness, acute haemorrhagic necrotizing encephalopathy [19], agitation and confusion.
Patients with comorbidities
In patients with type 2 diabetes mellitus who are infected with COVID-19, it is important to remember that two receptor proteins, ACE2 and dipeptidyl peptidase 4, are implicated in the pathogenesis of COVID-19 infection. These two receptors are also transducers involved in normal physiological processes, including metabolic signals regulating glucose homeostasis, renal and cardiovascular physiology, and pathways regulating inflammation.
History and physical examination
History and physical examination are extremely important for the diagnosis of COVID-19 infection. Common related symptoms are: fever (in 44% of patients on presentation and up to 88% of admitted patients); dry cough; shortness of breath, which may be severe and progressive, particularly when the patient develops pneumonia; myalgia and tiredness; sore throat; and nausea, vomiting and diarrhoea [20].
Patients may have neurologically related symptoms, including acute cerebrovascular disease, headaches, dizziness, seizure, decreased level of consciousness, encephalopathy, agitation and confusion [40]. Recently, anosmia, hyposmia and dysgeusia have been reported [21]. Physical signs include raised body temperature, increased respiratory rate and decreased oxygen saturation; auscultation of the lungs may be normal or reveal crackles and signs of heart failure. Cardiac arrhythmias, myocarditis, acute coronary syndrome, shock and death may occur.
Evaluation
In patients with clinical evidence of COVID-19 infection, laboratory tests may reveal lymphocytopenia, thrombocytopenia, elevated liver transaminases, elevated C-reactive protein and erythrocyte sedimentation rate, elevated serum lactate dehydrogenase and decreased or normal serum albumin. Elevated serum troponin-T may be present, indicating myocardial injury. The following tests are used in patients with symptoms suggestive of COVID-19 infection.
Viral testing

Viral testing is performed by the RT-qPCR test, used for qualitative detection of SARS-CoV-2 nucleic acid. Swabs are usually taken from nasal, nasopharyngeal or oropharyngeal sites, or from sputum or lower respiratory tract aspirates or washes. Positive tests indicate the presence of SARS-CoV-2 RNA and, together with the clinical picture, support the diagnosis. Negative test results do not preclude SARS-CoV-2 infection and should be interpreted in light of the clinical picture and epidemiologic information [22].
Serology
Serology testing for SARS-CoV-2 is now available. The test can assess prior exposure to the virus and cannot be used to diagnose current infection. Cross-reactivity with other human coronaviruses may occur. The serology test is particularly useful (i) when the viral test is not available, in which case serology together with the clinical picture can guide decision making; (ii) when patients with late disease complications and their physicians need to make immediate decisions (the viral test takes more time to return results); and (iii) when virus shedding is reduced in some patients, making RT-qPCR results falsely negative. The serology test can detect IgM and IgG antibodies against SARS-CoV-2 in serum, plasma and whole blood [23].
Rapid antigen testing
Rapid antigen testing is a monoclonal antibody test against the SARS-CoV-2 nucleocapsid protein (N). This protein is abundantly expressed in infected cells. Monoclonal antibodies are specifically directed against the nucleocapsid protein, and by using enzyme-linked immunosorbent assay it is possible to detect SARS-CoV-2. The test has a reported sensitivity of 84.1% and a specificity of 98.5%. No cross-reaction with human and animal coronaviruses in the assay was reported. There are no reports yet about applying this test to SARS-CoV-2 [24].
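To illustrate how the reported operating characteristics translate into clinical interpretation, the Python sketch below applies Bayes' rule to compute positive and negative predictive values; the 10% prevalence is an assumed figure for illustration only.

```python
# Minimal sketch: predictive values from the reported sensitivity (84.1%)
# and specificity (98.5%) at an assumed, illustrative 10% prevalence.

def predictive_values(sens, spec, prevalence):
    tp = sens * prevalence              # true positives
    fp = (1 - spec) * (1 - prevalence)  # false positives
    fn = (1 - sens) * prevalence        # false negatives
    tn = spec * (1 - prevalence)        # true negatives
    ppv = tp / (tp + fp)                # P(infected | positive test)
    npv = tn / (tn + fn)                # P(not infected | negative test)
    return ppv, npv

ppv, npv = predictive_values(sens=0.841, spec=0.985, prevalence=0.10)
print(f"PPV = {ppv:.1%}, NPV = {npv:.1%}")  # about 86% and 98%
```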
Ultrasonography
Whole-body point-of-care ultrasonography has been provided to COVID-19 patients. Ultrasonography is considered an essential modality to guide treatment in patients with cardiorespiratory failure. Current recommendations are to extend its use to multisystem and whole-body ultrasonography: thoracic, cardiac, abdomen and deep venous thrombosis [25].
Chest computed tomographic scan
Earlier studies during the outbreak in China suggested that patients with and without SARS-CoV-2 can be differentiated by chest computed tomographic imaging together with clinical presentation and the presence of pneumonia. The authors proposed that radiologic images and clinical features are excellent diagnostic tools for COVID-19 [26]. Predictors of severe disease may include high virus load, elevated neutrophil-to-lymphocyte ratio, chest changes or extent of lesions on computed tomography, patient age and presence of comorbidities [27]. Older age and the neutrophil-to-lymphocyte ratio are reported to be independent biomarkers for poor clinical outcomes [28].
Complications
Age and sex have been shown to affect the severity of complications of COVID-19. The rates of hospitalization and death are less than 0.1% in children but increase to 10% or more in older patients. Men are more likely to develop severe complications compared to women as a consequence of SARS-CoV-2 infection [29]. Patients with cancer and solid organ transplant recipients are at increased risk of severe COVID-19 complications because of their immunosuppressed status.
The main complications reported in patients with SARS-CoV-2 may include: [34]. Because of the rapid spread of SARS-CoV-2, anti-HIV and anti-hepatitis C virus medications have been tried in patients admitted to the intensive care unit with severe pneumonia. Table 1 summarizes these drugs, including their possible mechanisms of action, adverse effects, precautions and recommendations, and lists ongoing registered clinical trials.
Summary
The COVID-19 pandemic represents the most significant public health crisis humans have faced since the pandemic influenza outbreak of 1918. To date (25 June 2020), over 9 million people have been infected, 480 600 have died and over 5 million have recovered. The outbreak originated in China, but larger numbers of infections and deaths are reported from Europe and the United States. SARS-CoV-2 belongs to the betacoronaviruses and is highly similar to bat coronaviruses. The virus uses the ACE2 receptor for cell entry, causing pathophysiologic changes in the respiratory, cardiovascular, gastrointestinal and nervous systems. Human-to-human transmission is evident, with a reproduction number ranging from 2.24 to 3.58, indicating high transmissibility. Clinical symptoms include fever, cough and shortness of breath. Symptoms related to the gastrointestinal, cardiac and nervous systems have also been reported. Patients at a higher risk of infection include the elderly, those with comorbidities and those who are immunocompromised. Currently no specific therapeutics have proven able to prevent or treat COVID-19. Several drugs have been tried, including antimalarials, antiviral agents, immunomodulators and plasma-neutralizing antibody transfusion. These therapeutics are currently being investigated in clinical trials.
Conflict of interest
None declared.
Current status of ethnobiological studies in Merauke, Papua, Indonesia: A perspective of biological-cultural diversity conservation
Simbiak M, Supriatna J, Walujo EB, Nisyawati. 2019. Review: Current status of ethnobiological studies in Merauke, Papua, Indonesia: A perspective of biological-cultural diversity conservation. Biodiversitas 20: 3455-3466. Ethnobiology is a scientific study that examines the dynamic relationship between humans, biota and the environment. In this dynamic relationship, holistic notions that integrate humans and their cultural and biological diversity give greater responsibility to ethnobiological studies. This research approach stimulates insights that integrate scientific research with awareness of political and ecological issues and the loss of biological resources, including indigenous peoples' struggles over land and resources and identity degradation due to loss of culture and language. Ethnobiological studies undertaken in Merauke, Papua between 2000 and 2017 were reviewed from the perspective of biological-cultural diversity conservation. The aims and results of the published ethnobiological studies were analyzed, and we found that these studies failed to account for the linguistic diversity of the region while documenting ethnobiological knowledge. Most were oriented toward economic botany, focusing on recording the potential uses of plants utilized by each ethnic group in the Marind language-culture area of Merauke District, especially those belonging to the domain of medicinal plants. Some studies also used artificial community terminology that treated several ethnic groups as uniform and did not mention which language was used for the local names of plants. Future ethnobiological research in the area would benefit from: (i) adopting a cognitive ethnobiology orientation; (ii) applying appropriate ethnolinguistic standards of research to document the languages; and (iii) using a quantitative approach to analyze the distribution of ethnobiological knowledge within the communities studied. The latter approach is especially important given the extreme and rapid environmental changes in this region.
INTRODUCTION
The Society of Ethnobiology defines ethnobiology as the scientific study of the dynamic relationship between humans, biotas, and environments (Wolverton 2013). The field of ethnobiology has become a bridge for documenting various aspects of human knowledge in relation to the biophysical environment (Lepofsky 2009; Anderson et al. 2011; Albuquerque et al. 2014). The ecological knowledge and beliefs of indigenous peoples and the ways they attempt to preserve their inherited environment from generation to generation are vitally important to the cultural traditions of indigenous and rural communities around the world (Menzies and Butler 2006; Negi 2010). Indigenous peoples' knowledge about the nature of their environment is formed through a series of progressive cognitive adaptations and produces an inseparable relationship with their cultural landscape (Woldeamanuel 2012; Bergamini et al. 2013; Sutton and Anderson 2014; Plieninger et al. 2018). These communities do not want to lose traditional knowledge, especially when they need to manage key resources and ecosystems.
Conducting and publishing ethnobiological research has become an important way of preserving knowledge about medicine, food crops, farming techniques, conservation and management, and much more (Anderson 2011). However, accessing ethnobiological information can be difficult, because it tends to be scattered across publications in various scientific fields, each of which adopts slightly different terminologies (Wolverton 2013; D'Ambrosio 2014). This is in the nature of a growing discipline finding its definition and orientation, its research methods, and its relationships with other scientific fields that overlap in questions and areas of interest, owing to the fusion of researchers from various theoretical and epistemological backgrounds (Albuquerque and Medeiros 2013; Albuquerque et al. 2015). Nevertheless, the insights provided by ethnobiologists (e.g., Martin 2001; Anderson 2010; Wolverton 2013) indicate a great desire to establish ethnobiology as an interdisciplinary field, a scientific umbrella for the disciplines concerned with the relations among people, biota, and environment from various angles of study.
Ethnobiology has been practiced since the dawn of human civilization but is relatively new as a discipline (Martin 2001). Clement (1998) classified the historical development of the discipline into three eras: pre-classical, classical, and post-classical. Martin (2001) stated that this historical perspective provides an appropriate basis for considering current trends in basic and applied ethnobiology. Hunn (2007) extended the history of ethnobiology's development into four phases. Phase 1, the first step: ethnobotany and ethnobiology were formally introduced academically, and studies in this phase focused on the documentation of useful plants and animals. Phase 2, cognitive ethnobiology or ethnoscience (1954-1970s): cognitive ethnobiology studies with strong links to psychology and linguistics dominated the field. Phase 3, ethnoecology (1970s-1980s): recognition of the ecological knowledge systems of indigenous peoples became the spirit of ethnobiological studies. Phase 4, indigenous ethnobiology (the 1990s): ethnobiology moved away from the practice of 'exploitation' of indigenous peoples' knowledge and resources through various initiatives to give indigenous peoples a broader role. One important development in this phase was the rise of the biocultural concept in ethnobiological studies (Hidayati et al. 2015), promoted by Luisa Maffi and many other researchers (Pretty et al. 2009; Cocks 2010; Maffi and Woodley 2010; Sterling et al. 2010; Wyndham et al. 2011; Arts et al. 2012; Davidson-Hunt et al. 2012; Agnoletti and Rotherham 2015; Buizer et al. 2016). Hunn's four phases then became the foundation of a fifth phase of ethnobiology, in which ethnobiologists are encouraged to take a more significant role in facing the ecological and humanitarian crises of the 21st century and global changes in the economy and in ethnobiological knowledge systems (Wyndham et al. 2011; Wolverton 2013).
The spirit of phase 5 is an appropriate approach to implement in ethnobiological studies in Indonesia, a country rich in biological, cultural and linguistic diversity (Harmon 1996; Maffi 2001; Loh and Harmon 2005; Gorenflo et al. 2012), yet one where ethnobiological studies are not well developed (Walujo 2008; Hidayati 2015). Although the field of ethnobiology has developed sluggishly in Indonesia, the wide variety of ethnicities and cultures encompassed by this archipelago nation presents many opportunities for research (Walujo 2008), especially in Western New Guinea (Tanah Papua).
Tanah Papua is the part of New Guinea that is politically part of Indonesia. New Guinea is a fantastic island, unique and fascinating. It is an area with an incredible variety of geomorphology, biota, peoples, languages, history, traditions and cultures. Diversity is its prime characteristic, whatever the subject of interest (Gressit 1982). Yet ethnobiological studies in Tanah Papua have not reflected this diversity. This paper reviews ethnobiological studies conducted between 2000 and 2017 in Merauke District, Papua Province, Indonesia and aims in part to integrate ethnobiological research scattered across many fields and publications. Since language is the most important cultural tool for transmitting and preserving all aspects of traditional knowledge, the current authors also reflect on language issues in examining the status of ethnobiological research in Merauke District and develop suggestions for future research.
BIOCULTURAL DIVERSITY CONSERVATION: A BRIEF OVERVIEW
The concept of biocultural diversity, which refers to the interconnection between biological diversity and cultural diversity (Pretty et al. 2009), emerged approximately one decade after the term "biological diversity" appeared in the 1968 book titled A Different Kind of Country by scientist and conservationist Raymond F. Dasmann. According to Article 2 of the Convention on Biological Diversity (CBD), biological diversity is defined as follows: The variability among living organisms from all sources including, inter alia, terrestrial, marine, and other aquatic ecosystems, and the ecological complexes of which they are part; this includes diversity within species, between species, and of ecosystems (CBD 2019). In 1980, this term became a part of the scientific jargon when Thomas E. Lovejoy promoted the concept in order to remind the scientific community about the negative impact of human activities on Earth's biological systems (Franco 2013). In September 1986, this concept was reintroduced as "biodiversity" by Walter G. Rosen at the National Forum on BioDiversity in Washington, D.C.; selected papers from the forum were eventually published in the 1988 book titled Biodiversity, edited by Edward O. Wilson (Wilson 1988; Lousley 2012). Thus, the 1980s can be considered the decade in which the term "biodiversity" helped draw attention to the crisis in which the diversity of life in nature was being constantly threatened by humans (Maffi 2005).
In the late 1980s, a new awareness emerged in which the erosion of biological diversity became interconnected with the disruption and destruction of the culture of indigenous peoples around the world, which resulted in the Declaration of Belem at the First International Congress of Ethnobiology in 1988 (Posey and Dutfield 1996). Although the idea of a biocultural system actually emerged at the UNESCO World Heritage Convention in 1972, which aimed to unite the research on socio-ecological systems and human-centered cultural landscapes (Bridgewater and Rotherham 2019), it was subsequently incorporated into the CBD's international conservation policy in 1992, which formally stated the need to recognize the value of biodiversity for indigenous peoples and local communities (Cocks and Wiersum 2014). Therefore, the late 1980s to the early 1990s might be considered as the time period in which the concept of an intimate relationship among biological, cultural, and linguistic diversities was put forward, along with its implications for life in nature and culture (Maffi 2005).
The idea of bridging the concept of biological diversity and cultural diversity in an integrative manner has been discussed in several studies. For example, Harmon (1996), Loh and Harmon (2005), and Stepp et al. (2004, 2005) showed the co-occurrence between biological richness and language richness as a representation of cultural elements. More recent studies indicated that such co-occurrence is still a central issue in global nature conservation (e.g., Gorenflo 2012; Hidayati 2015; Skutnabb-Kangas and Harmon 2015; Brundu et al. 2017; Upadhyay and Hasnain 2017).
Since it is conceptually rooted in different disciplines (e.g., natural science and social science), which has produced various difficulties in collaborative interdisciplinary efforts (Cocks 2010), biocultural diversity has been the subject of numerous discussions regarding its actual definition (Bridgewater and Rotherham 2019). First, the initial concept of biocultural diversity based on cartography, which highlighted the centers of wildlife, was criticized in favor of a broader and more dynamic perspective on the role of humans in relation to biological and cultural diversity (Brosius and Hitchner 2010). In this regard, Maffi (2005) provided a conceptual framework by defining biocultural diversity as "the diversity of life in all its manifestations: biological, cultural, and linguistic, which are interrelated within a complex socio-ecological adaptive system." Considering that this definition was too broad, a more detailed definition was later synthesized: Biocultural diversity is the total variety exhibited by the world's natural and cultural systems. It may be thought of as the sum total of the world's differences, no matter what their origin. It includes biological diversity at all its levels, from genes to populations to species to ecosystems; cultural diversity in all its manifestations (including linguistic diversity), ranging from individual ideas to entire cultures; the abiotic or geophysical diversity of the Earth, including that of its landforms and geological processes, meteorology, and all other inorganic components and processes (e.g., chemical regimes) that provide the setting for life; and, importantly, the interactions among all of these.
At its 2018 conference on "Nature and Culture" in Egypt, the CBD produced two terms related to biocultural concepts, with their respective definitions: (i) biocultural diversity, which is "considered as biological diversity and cultural diversity, and the links between them," and (ii) biocultural heritage, which reflects "the holistic approach of many indigenous peoples and local communities." The cultural landscape inscribed under the aforementioned World Heritage Convention is an example of biocultural heritage. This holistic and collective conceptual approach also recognizes knowledge as "heritage," thereby reflecting its custodial and intergenerational character. Overall, both definitions elucidate the biocultural concept from a global diversity perspective and a cultural landscape perspective, respectively.
While the debate regarding biocultural concepts and biocultural diversity is ongoing, theoretical and empirical studies on the dynamic relationship among biological, cultural, and linguistic diversities are still being conducted. From the theoretical perspective, several studies have focused on basic principles and approaches, including policy directions that can be implemented in conservation programs through an integrated biocultural approach (e.g., Sterling et al. 2010; Hill et al. 2011; Carroll et al. 2017; Davidson-Hunt et al. 2012; Grant 2012; Swiderska 2013; Poe et al. 2014; Gavin et al. 2015; Dunn 2017). Meanwhile, other researchers have explored human creativity in natural and cultural hybrid systems, including the incorporation of biodiversity in the human domain through human landscape modification and agrobiodiversity (e.g., Rahu et al. 2013; Cocks and Wiersum 2014; Temudo et al. 2014; Agnoletti and Rotherham 2015; Molnar et al. 2015; Ekblom et al. 2018; Mastretta-Yanes et al. 2018).
Efforts to document traditional ecological knowledge systems have also received the attention of researchers, not only as a form of conservation but also as an adaptation strategy to changes in both climate and socio-ecological systems (e.g., Gyampoh and Asante 2011; Andrachuk and Armitage 2015; Budiharta et al. 2016; Makondo and Thomas 2018; Hong et al. 2018). This finding indicates that the concept of biocultural diversity is based on two fundamental considerations. First, throughout human history, people have interacted with nature (Pretty et al. 2009; Cocks and Wiersum 2014; Si and Agnihotri 2014; Bennett et al. 2017), which has produced worldviews, cosmology, and narratives that reflect the relationships among plants, animals, humans, and the supernatural (Cocks and Wiersum 2014). Second, human interactions with nature have resulted in unique cultural practices that ensure the continued existence and expression of locally respected biodiversity elements (Persic and Martin 2008; Cocks and Wiersum 2014).
As a notion promoted in the nature conservation approach, debates regarding the biocultural concept continue, both at the conceptual level (Bridgewater and Rotherham 2019) and at the economic level. One fundamental issue that has sparked heated debates in nature conservation is the relationship among human culture, heritage, and nature, considered as ecology or biodiversity (Bridgewater and Rotherham 2019). Although many studies have revealed that nature and culture intersect at various levels, ranging from values, beliefs, and norms to practices, livelihoods, knowledge, and language (e.g., Adams 2010; Newing 2010; Tyrrel 2010; Gonzales and Gonzalez 2010; Howard 2010; Agnoletti 2014; Albo 2018), many conservation researchers and practitioners believe that a biocultural approach to conservation can produce equitable and sustainable conservation solutions (e.g., Díaz et al. 2015; Gavin et al. 2015; Caillon et al. 2017; Sterling et al. 2017; Eriksson 2018; Gavin et al. 2018). Finally, in order to emphasize the need for pluralistic, partnership-based dynamic approaches to conservation, Gavin et al. (2015, 2018) formulated the following eight principles: (i) acknowledge that conservation can have multiple objectives and stakeholders; (ii) recognize the importance of intergenerational planning and institutions for long-term adaptive governance; (iii) recognize that culture is a dynamic that influences resource use and conservation; (iv) tailor interventions to a socio-ecological context; (v) devise and draw upon novel, diverse, and nested institutional frameworks; (vi) prioritize the importance of partnerships and relation-building for conservation outcomes; (vii) incorporate the distinct rights and responsibilities of all parties; and (viii) respect and incorporate different worldviews and knowledge systems into conservation planning.
STUDY REGION AND PROCEDURE
The culture of the south coast of New Guinea extends from the Asmat tribe in the west, within the territory of the Republic of Indonesia, to the Elema tribe in the east, within the nation of Papua New Guinea. Anthropologists have classified the ethnic groups that span the region into seven language-culture areas. Several different tribal languages are grouped into each language-culture area; each area is usually named according to the dominant language used to communicate inter-tribally within the area. The indigenous ethnic groups in Merauke District, Papua Province, Indonesia fall into two language-culture areas. The Marind language-culture group covers the plains, while the Kolopom group is located on Yos Sudarso Island (Kolopom Island) (Knauft 1993). While the dominant language in the plains region is Marind, other ethnic groups in the area speak Moraori (Marori), Kanum, Yei, Yonggom, Kaeti, Bian Marind, Meklew, and Yelmek. The Kolopom area encompasses ethnic groups speaking Kimaghama, Riantana, Ndom, and Koneraw. On Komolom Island, only a single language, Mombum, is spoken. This review compares ethnobiological studies conducted in the plains, that is, the Marind language-culture area, since 2000. The diversity of languages in Merauke District and its geographical distribution is shown in Figure 1. The word "Marind" also appears in the variant form "Malind" in some references. The indigenous groups comprising the Marind language-culture area share a local totemic belief system called Mayo (hence, Marind people are sometimes referred to as "Mayo Man") (Wattimena 2013). The Mayo philosophy incorporates a cosmology that shapes the perceptions of Marind tribe people as being integrated into their natural environment (Warib 1996). Totemism positions both physical and biological environments as entities that have horizontal relationships with Marind people. The Mayo totemic worldview treats animals, plants, certain places, and even humans as manifestations of Dema, a supernatural being involved in the evolution of Marind society and the life histories of Marind people (Corbey 2010). This belief system is the basis for people's understanding of, and customary provisions related to, the utilization of resources in their natural environment (Wattimena 2013; Sofyandy 2014).
The historical dominance of the Marind tribe (henceforth, Marind anim) in this region of Papua put them in the limelight of classical ethnographic studies. The attention on the Marind anim seems to have influenced outsider understanding of all the other native groups in the area. This situation can be seen in several research reports in which some ethnobiologists assume that the "Marind" identity applies to the indigenous society of Merauke District as a whole (e.g., Haryanto et al. 2009; Wattimena 2013; Sofyandy 2014; Suharno et al. 2016). It should be understood, however, that each ethnic group (especially the tribes of Kanum, Marind, and Yei) within the Marind language-culture area speaks a different dialect or language (Van Baal 1966). A recent Summer Institute of Linguistics (SIL) study demonstrated that each of these ethnic groups currently has difficulty understanding the languages spoken by the others (Sohn et al. 2009). The artificiality of the language-culture area designation has led ethnobiology researchers to ignore the diversity of the languages used among the indigenous tribes in Merauke District. Thus, important ethnobiological data may have been inadvertently neglected or eliminated from some studies. This problem is discussed further below.
In examining the development and tendencies of ethnobiology in Merauke District, we analyzed published papers and research reports on this theme, focusing on contemporary studies. We compiled studies concerning human-animal, human-plant, and human-land relations in Merauke that had been published in academic journals and other periodicals or included in chapters in textbooks and various reports, including postgraduate theses. We used the following search keywords: ethnobiology, ethnoecology, ethnobotany, ethnozoology, ethnomedicine, biocultural, traditional knowledge, traditional ecological knowledge, traditional medicine, traditional wisdom, and socioecological. Accessing all the published studies was not possible because some journals do not provide online access, and other journals restrict content. Therefore, our survey was limited to the most recent studies published between 2000 and 2017. The search only included studies that directly investigated the relationship between human groups and different types of resources. Once all the publications were collected, they were subdivided into primarily ethnobiological, ethnobotanical, ethnozoological, or ethnoecological studies for purposes of comparison. As an academic standard, we refer to a new synthesis in the ethnobiological perspective by Martin (2001), in which ethnobiology is seen as an integrative discipline encompassing all the different approaches to gathering empirical data about the interaction between humans and biological organisms, carried out under terms such as ethnobotany, ethnozoology, and ethnoecology. From this perspective, ethnobiology combines conventional studies conducted by ethnobotanists, ethnozoologists, and ethnoscientists, each of which presents a limited vision of the interaction of local communities with the natural environment. This notion of unification is based on the central theory that the systematic knowledge held by local communities, and their management of organisms and biological ecosystems, can be classified within the biological sciences and investigated with qualitative and quantitative research methods. On this view, ethnobotany, ethnozoology, and ethnoecology are ethnobiological sub-disciplines used as empirical study approaches to examine the dynamic relationships between humans and plants, humans and animals, and humans and the environment from a cultural perspective. This conceptual framework is analogous to the definition of ethnobiology by the Society of Ethnobiology mentioned above.
The first works
Our search shows that studies related to ethnobiological knowledge have been conducted since the colonial era. A review of the ethnobotanical aspects of these classical studies was reported by Powell in the chapter "Ethnobotany" of the book New Guinea Vegetation, edited by K. Paijmans (1976). Powell (1976) inventoried and evaluated over 60 ethnobotany studies in New Guinea, but only a small amount of the research originated from western New Guinea. These early works were criticized by Powell for not providing sufficient ethnobotanical data, typically containing only local names without clear species identification. This likely occurred because ethnobotany was not a major part of these authors' work as anthropologists or geographers. Only two of these studies were specifically reported from the area currently known as Merauke District: a study of food sources related to the nutrition of the Marind people (Luyken and Luyken-Koning 1955) and Serpenti's 1965 publication on the farming systems of local communities adapted to the swampy environment of Frederik-Hendrik Island (now Yos Sudarso Island) (Barrau and Scheffler 1966).
In addition to contributing to the ethnobotanical record reviewed by Powell (1976), some anthropologists also contributed to a wider area of ethnobiological knowledge. Some of the more accessible information is included in Kooijman's (1960) discussion of the Marind anim's (anim means man) use of a lunar calendar and Van Baal's (1966) description and cultural analysis of the Marind anim. Retracing earlier ethnographic reports, Van Baal found that plants and animals were primary subjects of Marind mythology. Van Baal's fairly comprehensive study thus explored human-biota relationships in the Marind anim belief system. In addition to the Marind anim, Van Baal (1982) also re-analyzed Pastor Jan Verschueren's report on Yei nan (nan means man) culture, which provides interesting information about the food ecology of the tribe. One interesting finding from these ethnographic studies is the impressive land-use adaptation technology shown by the indigenous peoples of Kolopom Island for gardening on their swampy land. Yams and taro were planted in man-made garden islands reclaimed from swamps by stacking layers of clay and grass on a stretch of floating grass that had been cut. With persistent effort, the gardeners kept moisture within the tolerance range of the various plants across different seasonal conditions. Likewise, soil temperature and water content were always maintained, and all areas were cleaned regularly. Their fertilization was also very specific: the garden area was coated with a thin layer of mud fertilizer, sifted with a clean sieve, and then coated first with humus and then with dry grass compost (Knauft 1993).
Contemporary ethnobiological research
Starting in the 2000s, local Indonesian and Papuan researchers began to pay more attention to ethnobiological research in the region, beginning with Susiarti in 2000 (Hide 2017). Some of the more intensive research has been conducted by Susiarti (2005), Kameubun (2003, 2013), and Winara and colleagues (2015, 2016). In fact, contemporary ethnobiological study in this area by local Indonesian and Papuan researchers was initiated by Warib in 1993, focusing on kava (Piper methysticum) in the Marind anim tradition. That report is inaccessible, however, and Warib (1996) provides only very limited information on the local naming of kava and on useful plants, with inadequate botanical detail. Because of its cultural value, kava then gained the attention of Kameubun (2003, 2013), who explored in depth the knowledge of ethnic groups in Merauke about these plants, including cognitive aspects related to determination and classification. Although local researchers' attention to this field began to grow, the imbalance of interest across ethnobiological study areas is striking: academics and functional researchers from research institutes are more interested in ethnobotanical studies than in other study areas, as the existing body of work demonstrates (Figure 2).
Existing reports also indicate that the studies conducted so far have come from a limited geographical area, generally focusing on the Wasur National Park (WNP) and its surroundings, and thus involve only a limited set of ethnic groups. Figure 1 shows that the indigenous peoples of Merauke District consist of various ethnic groups with their respective languages, so it is likely that each holds a wealth of its own local wisdom. But the existing studies focus only on the Kanum, Marind, and Marori tribes, so there is a gap in terms of ethnic and linguistic diversity (Table 1).

As mentioned above, ethnobotany dominates the existing ethnobiological studies in Merauke; moreover, almost all of these ethnobotanical studies are oriented toward economic botany (Table 2). That is, researchers have only reported knowledge of plant species that are of economic value to humans. The documentation of local knowledge practices in utilizing plant resources focuses primarily on the use of plants as medicine (Susiarti 2000; Haryanto et al. 2009; Lobo 2012; Widya 2015; Winara 2015; Suharno et al. 2016; Winara and Mukhtar 2016). Research on other ethnobotanical domains covers food plants (Hariadi 2005; Paay 2005; Susiarti 2005; Hisa et al. 2017) and dye plants (Harbelubun et al. 2005). More general uses of plants in the Marori-Men Gey community at WNP were also documented by Winara and Suhaendah (2016). The results of our evaluation indicate that some of the accounts in the ethnobotanical area present repetitive information; the study of indigenous medicinal plants of WNP, for example, has involved at least four studies: Susiarti (2000), Haryanto et al. (2009), Winara (2015), and Winara and Mukhtar (2016). Similarly, in the study of kava, although Suharno et al. (2016) stated their focus on the medicinal value of kava, their discussion of cultural values repeats the work of Kameubun (2003, 2013). These findings suggest that the studies were not designed with the scientific requirement of novelty in mind: whether a study is entirely new or a development of previous work is not always made clear.

Various reports (e.g., Wattimena 2013; Sofyandy 2014) show that the tribes in Merauke District have a close relationship with the fauna, which is characteristically Australasian, but we did not find a scientific report on ethnozoological studies according to the criteria we determined. Meanwhile, some studies on the topic of ethnoecology were uncovered; all emphasize local wisdom (based on the indigenous belief system) in the management of biological and environmental resources (e.g., Kosmaryandi 2012; Muliyawan et al. 2013; Wattimena 2013; Sofyandy 2014; Wambrauw 2015). Both the ethnozoological reports and the ethnoecological studies seek to emphasize to policymakers and stakeholders that local people in this district have a strong relationship with their biophysical environment. The values embodied in this relationship should be taken into account in development planning at the appropriate scale. Some of these concepts have been implemented, such as mapping important places to preserve the cultural and archaeological sites of indigenous tribes. This important-place concept has been applied in the zoning of Wasur National Park, which places many sacred areas within the core zone (Kosmaryandi 2012; Muliyawan et al. 2013), and has informed the Spatial Planning Regulation of Merauke District (Wattimena 2013; Sulistyawan et al. 2018).
Nevertheless, the studies also reveal that the sustainability of local wisdom is under threat. There has been a shift in perceptions of the values of local wisdom and culture among young Marind people, such that the goal of commercialization has overridden the application of customary norms in the utilization of local resources.
Perspectives on future ethnobiological research in Merauke
The above brief comparison of ethnobiological studies conducted in Merauke District suggests specific avenues for future research. The usefulness of some of these studies to future researchers is limited by the failure of a few researchers to specify the ethnic identities of their local consultants. Other reports (e.g., Kameubun 2013; Muliyawan 2013) employed artificial ethnic designations such as Marind Sendawi anim, which treat the Kanum, Marind, Marori, and Yei tribes as a homogeneous study subject even though the four tribes have different languages. Similarly, Haryanto and colleagues (2009) (Posey and Dutfield 1996). Such knowledge was formerly transmitted through oral narration, so much of it remains undocumented.
For example, Walujo (2011) suggests that ethnobotany should encompass the study of how a society understands and perceives plants, in addition to how it uses them, in the context of the human relationship with the environment. Perceptions and conceptions form two axes of ethnobotanical studies, and ethnobotany serves as a bridge to deepen both in relation to the vegetal resources of the environment. These terms relate more to cognition than to pure utilitarianism and suggest that cognitive ethnobiological studies among the different ethnic groups in the region would be fruitful (Ross and Revilla-Minaya 2011). The cognitive approach has been partly illustrated in research by Warib (1996), Muliyawan et al. (2013), Wattimena (2013), Sofyandy (2014), and Hisa et al. (2017), but only limited data were provided in these studies. Documenting perceptual knowledge of the natural environment and its components, as well as language and other cultural aspects of each community within the Marind language-culture area, should thus be undertaken as part of the biocultural conservation effort. Another fact is that recent ethnobiological studies have all been conducted around Merauke city. In the interest of conservation, future research should expand northward and westward, where many places are undergoing rapid environmental change due to oil palm plantations and other agricultural industries (Wattimena 2013). Adopting a quantitative approach to analyze the distribution of ethnobiological knowledge within the varied communities studied would be especially important given the extreme and rapid changes in the environment of the Merauke District region.
We argue that a biocultural conservation approach can be a bridge to address aspects that previous ethnobiological studies have overlooked, particularly with regard to cognitive ethnobiology. Cognitive ethnobiology may include the study of how knowledge is acquired, transmitted, and transformed across cultures and generations, of knowledge loss, and of behavior related to resource management and conflict over resources (Ross and Revilla-Minaya 2011; De Vette 2012; Kansky and Knight 2014; Madden and McQuinn 2014; McCarter et al. 2014; Teel et al. 2014; Norrman 2015; De Pourcq et al. 2015; Baynham-Herd et al. 2018). This area also includes folk taxonomy (Keil 2013; Poncet et al. 2015; Berlin 2014), which has not been a concern of ethnobiologists in Indonesia today. Folk taxonomy studies emphasize the exploration of the semantic aspects of the languages of existing indigenous peoples to uncover how landscape elements are conceptualized from the various perspectives of indigenous knowledge (Abraao et al. 2010; Hunn and Meilleur 2010; Johnson 2010; Johnson and Hunn 2010; Johnson and Davidson-Hunt 2011). This deserves attention because biocultural conservation encompasses the diversity of life in all its manifestations, biological and cultural, including language (Maffi and Woodley 2010), and losing one can cause the loss of another (Harmon 1996; Pretty et al. 2009; Si 2011). The biocultural conservation framing is very appropriate in this area because the region has already lost one language, Men Ge (Sohn et al. 2009), and another, Marori, is threatened with extinction because it has very few fluent speakers (Arka 2013).
Ethnobiological research is currently in its fifth phase, in which expanded networks across disciplines are needed to address the challenges of rapid ecological change and shifts in political economy (Wyndham et al. 2011; Wolverton 2013). This is confirmed by the survey of ethnobiological studies in Southeast Asia by Hidayati et al. (2015), which concludes that ethnobiology must advance through studies of biocultural and socio-ecological diversity. The biocultural approach runs in parallel, and it has generated many contributions, theoretical ones to the development of science as well as practical ones for the benefit of humans and the natural environment. The contribution of the biocultural approach is not only to save cultures and languages; it also plays a role in other domains such as food security, biological diversity, and ecosystem functions (Tauli-Corpuz 2009; McGregor et al. 2010; Ros-Tonen 2012; Barthel et al. 2013; Boillat et al. 2013; Hong 2013; Gavin et al. 2015; Sujarwo et al. 2015; Barthel et al. 2017; Lemke and Delormier 2017; Morales et al. 2017; Moura et al. 2017; Danarto et al. 2019), human health (Worthman and Costello 2009), and multi-sector development (Davidson-Hunt et al. 2012; McCarter et al. 2018; Sterling et al. 2017), including special attention to the sustainable economic empowerment of indigenous peoples affected by various policies (e.g., Xu et al. 2009; Abebe and Bongers 2012; Schure 2012; Mustafa and Hajdari 2014; Carr et al. 2016). Given the flexibility of the biocultural approach, it can serve more comprehensive ethnobiological research through a forum involving researchers from the many relevant disciplines. Such a forum would be expected to produce an intellectual agenda for the imperative studies of ethnobiology to international standards, scientifically setting the direction, coherence, and methodological development needed to produce research and publications of high quantity and quality across all aspects of ethnobiology as an interdisciplinary field.
Re-Establishment Techniques and Transplantations of Charophytes to Support Threatened Species
Re-establishment of submerged macrophytes and especially charophyte vegetation is a common aim in lake management. If revegetation does not happen spontaneously, transplantations may be a suitable option. Only rarely have transplantations been used as a tool to support threatened submerged macrophytes and, to a much lesser extent, charophytes. Such actions have to consider species-specific life strategies. K-strategists mainly inhabit permanent habitats, are perennial, have low fertility and poor dispersal ability, but are strong competitors and often form dense vegetation. R-strategists are annual species, inhabit shallow water and/or temporary habitats, and are richly fertile. They disperse easily but are weak competitors. While K-strategists easily can be planted as green biomass taken from another site, rare R-strategists often must be reproduced in cultures before they can be planted on-site. In Sweden, several charophyte species are extremely rare and fail to (re)establish, though apparently suitable habitats are available. Limited dispersal and/or lack of diaspore reservoirs are probable explanations. Transplantations are planned to secure the occurrences of these species in the country. This contribution reviews the knowledge on life forms, dispersal, establishment, and transplantations of submerged macrophytes with focus on charophytes and gives recommendations for the Swedish project.
Introduction
To protect threatened macrophyte species in Sweden, an action plan was started in 2017. The main aim of this program is to build the knowledge considered necessary before actions are taken (Zinko 2017 [1]). The program includes 10 charophyte species (Chara filiformis, C. subspinosa, C. braunii, Nitellopsis obtusa, Nitella translucens, N. mucronata, N. gracilis, N. syncarpa, N. confervacea, Tolypella canadensis) and five angiosperm species (Potamogeton acutifolius, P. compressus, P. friesii, P. rutilus, P. trichoides). The selected charophyte species are rare in Sweden, which is surprising considering the high number of seemingly suitable sites. Lack of knowledge about their occurrence in the country was and is one possible reason. Intensive monitoring was therefore the main activity of several former action plans for threatened charophytes (Blindow 2009a-e [2-6]) and is still one main activity of the ongoing program. Except for Tolypella canadensis, however, lack of knowledge does not sufficiently explain the low number of sites for rare species. Oospores of these species are expected to be very rare in the diaspore reservoirs of lakes and small water bodies, which may prevent spontaneous (re)establishment. Transplantations of these species are therefore a second main activity of the ongoing action plan.
Experience with transplantations (e.g., translocations, see IUCN 2013 [7]) to protect threatened charophytes is still very limited. Fortunately, a number of threatened aquatic macrophytes have already been transplanted successfully, and experiences from these projects may be transferred to charophytes. Moreover, there is extensive literature on re-establishment of submerged macrophytes for other purposes such as lake restoration, because of the positive impact of these plants on lake ecosystems and water quality (Hilt et al., in press [8]); re-establishment can be achieved directly (plantations) and/or indirectly by improving the habitat conditions for this vegetation. Submerged macrophytes act as sediment traps, store nutrients, retard shore erosion, and reduce phytoplankton densities by excretion of allelopathic substances, impacts which all increase water clarity. Together with their associated epiphyton, they offer a well-structured habitat, food, and oxygen and thereby favor species richness and biomass of macroinvertebrates. Both plants and macroinvertebrates are important food sources for fish and waterfowl. The vegetation further serves as a predation refuge for zooplankton, macroinvertebrates, and fish fry (Hilt et al., 2017 [9]).
Establishment success is dependent on dispersal and fertility but also on competition with other plants. These abilities vary considerably among the different life forms and species of submerged macrophytes. Detailed knowledge of these properties is essential to enable successful establishment and transplantation of submerged macrophytes.
This paper consists of three different parts: a review of ecological characteristics and life strategies of macrophytes (Sections 2-4) is followed by a review of management techniques to promote submerged macrophytes (Sections 6-9). Both parts first summarize knowledge about submerged macrophytes generally and then narrow to charophytes specifically. The third part (Section 10) describes the "Swedish example", which aims at protection and especially transplantations of threatened charophytes and is based on the experiences reviewed in the first two parts.
Dispersal, Fertility, and Hibernation
Submerged macrophytes (re)establish from vegetative parts and/or diaspores that are transported to the water body or are already present on the site. Wind transport of diaspores (anemochory) is common in emergent plants but unusual in submerged plants, which mainly use water (hydrochory) but also different animals (zoochory) as transport vectors. Exozoochorous transport of green parts or turions is restricted to short distances, often within the same catchment area (Lacoul and Freedman 2006 [10], Soons et al., 2008 [11], Bakker et al., 2013 [12]). To reach remote water bodies and distant catchment areas, endozoochorous transport by waterfowl is the product of a co-evolutionary process (Clausen et al., 2002 [13], Figuerola and Green 2002 [14], Santamaria 2002 [15]). This transport requires the production of hard-shelled diaspores, which withstand the gut passage and often show improved germination after this passage (Clausen et al., 2002 [13], Figuerola and Green 2002 [14], Santamaria 2002 [15]). Such diaspores also tolerate harsh environmental conditions such as drying and freezing and serve as hibernacles, especially in temporary water bodies (Bonis and Grillas 2002 [16], Green et al., 2002 [17]).
Charophytes hibernate as green plants or by means of specific vegetative hibernacles (bulbils) or oospores. As in vascular plants, hibernation modes vary considerably among species but also within species, dependent on conditions such as water depth (Wang et al., 2015 [26]). For example, Chara aspera can hibernate as a green plant in deeper permanent habitats, by means of bulbils and oospores in shallow water, or exclusively by means of oospores, especially in temporary habitats (Blindow and Schütte 2007 [27]). In this species, oospores are assumed to serve mainly as a long-term diaspore reservoir because they can survive long time periods but have only low annual germination rates; in contrast, bulbils germinate almost completely during spring but survive just a few years and therefore are assumed to serve as a short-term diaspore reservoir (van den Berg et al., 2001 [28]). Generally, charophytes use oospores for long-distance dispersal and for re-establishment from sediments after disturbances, while bulbils are used to maintain local populations (de Winton and Clayton 1996 [29], van den Berg et al., 2001 [28], Bonis and Grillas 2002 [16], Asaeda et al., 2007 [30], Brochet et al., 2010 [24]). Charophytes use three different modes to form dense vegetation, with high interspecific differences in the relative importance of these modes: (A) vegetatively from omnipotent node cells, which can successfully be dispersed by means of fragments containing at least one node (Skurzyński and Bociąg 2011 [31]), (B) vegetatively from bulbils (Asaeda et al., 2007 [30], Wang et al., 2015 [26]), or (C) by germination of oospores (Skurzyński and Bociąg 2009 [32]).
Interspecific Competition
Along a eutrophication gradient, submerged macrophytes are the dominating primary producers at low to moderate nutrient loadings, while phytoplankton dominates in highly eutrophic conditions. A shift from macrophyte to phytoplankton dominance occurs at a certain nutrient-related critical turbidity. This shift can happen rapidly in shallow lakes, which were assumed to occur in two different alternative stable states (Scheffer et al., 1993 [42]).
More recently, three different states of primary producer dominance were postulated to occur during progressive eutrophication: a macrophyte-dominated state with bottom-dwellers, a second macrophyte-dominated state with tall macrophytes, and a phytoplankton-dominated turbid state (Verhofstad et al., 2017 [43]). While the bottom-dweller state, often characterized by dense charophyte vegetation, is assumed to be rather stable, the tall macrophyte state, dominated by various angiosperms, is characterized by somewhat higher turbidity and lower stability (Meijer 2000 [44], Hilt et al., 2018 [45], Blindow et al., 2016 [46], Phillips et al., 2016 [47]) and therefore was called the "crashing" state (Sayer et al., 2010 [48]). Vice versa, tall macrophytes are sometimes the first submerged vegetation to establish in a turbid lake and to increase light availability in the water column far enough to enable a subsequent establishment of charophytes (Meijer 2000 [44], van den Berg et al., 2001 [28], Hargeby et al., 2007 [49]). Additionally, feedback mechanisms are assumed to differ between the two macrophyte-dominated states. While the refuge function for zooplankton seems to be of major importance in the state dominated by tall macrophytes, dense charophyte vegetation stabilizes the clearwater state mainly due to reduction of sediment resuspension, nutrient accumulation, and favoring of macroinvertebrates (Blindow et al., 2014 [50]).
Dominance patterns and interspecific competition among these different life forms of submerged plants (Figure 1) are mainly determined and affected by access to light and inorganic carbon. "Bottom-dwellers", such as isoetids and charophytes, but also some low-growing vascular plants, form more or less dense vegetation close to the sediments, which prevents their occurrence in deeper, turbid water and therefore restricts them to less eutrophic environments (Barko and Smart 1981 [51], Blindow 1992a [52]). Most isoetids are adapted to soft-water conditions with low concentrations of inorganic carbon in the water column and have developed several adaptations to this deficiency, such as carbon dioxide uptake from sediments and CAM metabolism. Generally, they lack the ability to assimilate bicarbonate (Madsen and Sand-Jensen 1991 [53], Keeley 1998 [54], Smolders et al., 2002 [55]). Apart from several Nitella species growing in soft-water environments, charophytes occur mainly in calcium-rich water with higher pH values and bicarbonate as the main form of inorganic carbon. Here, they are highly competitive due to their efficient bicarbonate assimilation (van den Berg et al., 2002 [56], Ray et al., 2003 [57]). Charophytes therefore dominate the submerged vegetation in many oligo- to mesotrophic calcium-rich lakes, which were therefore called "Chara-lakes" by Samuelsson (1925 [58]).
Many vascular plants such as Potamogeton spp. and Myriophyllum spp. are tall and often form a canopy along the water surface, thus concentrating most of their photosynthetic biomass in regions with better light availability. These plants have a competitive advantage in turbid, more eutrophic environments, facilitated by often large hibernacles such as turions and tubers, which allow high growth rates during spring, even in turbid conditions (Blindow 1992a [52]). Most of these "canopy-formers" are able to assimilate bicarbonate but less efficiently than charophytes (van den Berg et al., 2002 [56]).
Charophytes
Experiments confirmed the different preferences observed in the field: charophytes are competitive at moderate nutrient concentrations, while tall angiosperms are superior competitors at higher nutrient conditions. van den Berg et al. (2002 [56]) demonstrated that the outcome of competition between Chara aspera and Stuckenia pectinata is dependent not only on light but also on bicarbonate availability. Chara globularis outcompeted Myriophyllum spicatum at low nutrient concentrations (Richter and Gross 2013 [59]). In another experiment, C. globularis developed far higher biomasses than angiosperms at low nutrient concentrations but far lower biomass at higher nutrient concentrations, while the growth rate of Stuckenia pectinata was not affected by the experimental condition (Bakker et al., 2010 [60]). In still another experiment, Stuckenia pectinata was outcompeted by charophytes at low nutrient concentrations, probably because of the efficient assimilation of nutrients and/or bicarbonate by the latter; in the same experiment, Stuckenia pectinata inhibited charophytes when it developed a "canopy", i.e., dense biomass close to the water surface (Hidding et al., 2010a [61]). In a system with experimental ponds, Chara globularis dominated at lower and Elodea nuttallii at higher nutrient concentrations (Bakker and Nolet 2014 [62]). In a newly created oligo- to mesotrophic lake dominated by charophytes, tall angiosperms were favored by the removal of Chara sp. and Vaucheria sp. in experimental plots (Vejřiková et al., 2018 [63]).
Different Life Strategies in Charophytes
Among charophytes, both extreme R-strategists ("permanent pioneers") and extreme K-strategists with a strong impact on the whole ecosystem ("ecosystem engineers") can be identified (Schubert et al., 2018 [64]).
Typical R-strategists are annuals producing large quantities of oospores. These oospores are dispersed by waterfowl, can survive both drying and freezing, and stay dormant for a long time, at least several decennia, in dry sediments (Krause 1997 [20], de Winton et al., 2000 [65], Rodrigo et al., 2015 [66]). In many newly created small water bodies, charophytes are the first submerged plants to establish but often disappear after several years due to competition from other, "late-coming" submerged plants (Casanova and Brock 1990 [41], Krause 1997 [20], Rodrigo et al., 2015 [66], Schubert et al., 2018 [64]). Chara vulgaris, C. contraria, C. aspera, and several Nitella species belong to these R-strategists, but most extreme are species such as Tolypella intricata, T. glomerata, and Nitella capillaris, which can also show up "spontaneously" in very small and temporary water bodies (see Figure 2). Already Olsen (1944 [67]) and Hasslow (1931 [68]) mentioned their "meteoric" nature, while Allen (1950 [69]) and Fitzgerald (1985 [70]) called Tolypella spp. "vegetable comets". Oospores are most probably far more widespread than the sporadic records of these species, which spend only a very small part of their life cycle as green plants. Abundances are hard to estimate, which causes problems during red list assessments (Blindow 2009e [6]). In Sweden, N. capillaris was found in two small water bodies close to a former site more than 100 years after the last record of the species in the country (Blindow 2019 [71]).
Extreme K-strategists also belong to the charophyte group. Such species are perennial, produce only moderate numbers of oogonia, and therefore have a restricted ability to reach distant catchment areas. Under suitable conditions, however, they can form dense vegetation and outcompete other submerged macrophytes, acting as "nasty neighbors" (Figure 2). Because of their high biomasses, they act as "keystone organisms" in shallow-water ecosystems and affect not only a number of physical and chemical factors but the whole food web structure (Hargeby et al., 1994 [72], Kufel and Kufel 2002 [73]). Nitellopsis obtusa, Chara tomentosa, C. hispida, and C. subspinosa belong to this group.
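The contrast between the two extremes can be made concrete as a small data model. The sketch below is only an illustration of the trait sets named in the text; the field names and qualitative values are our paraphrase, not a classification scheme taken from the cited literature.

from dataclasses import dataclass

@dataclass
class CharophyteStrategy:
    """Traits distinguishing the two extreme life strategies described above."""
    name: str
    life_cycle: str           # "annual" or "perennial"
    oospore_output: str       # "large" or "moderate"
    dispersal: str            # "easy (waterfowl)" or "restricted"
    competitive_ability: str  # "weak" or "strong (dense vegetation)"
    examples: tuple

R_STRATEGIST = CharophyteStrategy(
    name="R-strategist ('permanent pioneer')",
    life_cycle="annual",
    oospore_output="large",
    dispersal="easy (waterfowl)",
    competitive_ability="weak",
    examples=("Chara vulgaris", "C. contraria", "C. aspera",
              "Tolypella intricata", "T. glomerata", "Nitella capillaris"),
)

K_STRATEGIST = CharophyteStrategy(
    name="K-strategist ('ecosystem engineer')",
    life_cycle="perennial",
    oospore_output="moderate",
    dispersal="restricted",
    competitive_ability="strong (dense vegetation)",
    examples=("Nitellopsis obtusa", "Chara tomentosa",
              "C. hispida", "C. subspinosa"),
)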
(Re)establishment of Submerged Vegetation
(Re)establishment of submerged vegetation is therefore a major aim in many lake restoration projects. (Re)establishment can be achieved by improving the conditions for this vegetation, often without any plantations. Since some functions of this vegetation, such as increased habitat structure and substrate and predation refuge for smaller animals, are not dependent on living plants, even "plantations" of artificial plants have been applied in lake restorations (Schou et al., 2009 [74], Boll et al., 2012 [75], Balayla et al., 2017 [76], Jeppesen et al., 2017 [77]).
Sometimes, the opposite situation occurs, and "too dense" macrophytes are regarded as a nuisance. Dense vegetation clogs fishing nets and other fishing equipment, turbines, and other installations, impedes boat traffic and bathing, retards the water flow-through in channels, and causes high oxygen consumption during the night (Jellyman et al., 2009 [78]).
Many publications investigate reasons for expansion and decline of submerged plants and deal with the restoration of this vegetation, including a strikingly high number of reviews. Bakker et al. (2013 [12]) summarized "case studies" of lake restorations which caused an expansion of submerged macrophytes, often combined with improved water clarity. Blindow et al. (2014 [50]) discussed differences in the feedback mechanisms between angiosperms and charophytes. Hussner et al. (2014 [79]) and Hilt et al. (2006 [80]) described the effect of single management measures on submerged macrophytes and gave detailed recommendations for macrophyte restoration. Phillips et al. (2016 [47]) discussed causes for the disappearance of submerged vegetation from shallow lakes and asked what we have learned during the past 40 years. van Katwijk et al. (2016 [81]) and Zhang et al. (2021 [82]) presented a global analysis of seagrass restoration projects. Jeppesen et al. (2017 [77]) treated the development of submerged vegetation after biomanipulations. Verhofstad et al. (2017 [43]) summarized the knowledge about the development of dense submerged vegetation after restorations, including the importance of sediments, light, and diaspore reservoirs in this process. Hilt et al. (2018 [45]) clarified the relationships between nutrient load and dominating vegetation type with and without biomanipulation. Two regional reviews summarized global experiences and case studies concerning transplantations of submerged macrophytes (van de Weyer et al., 2021 [83]) and submerged macrophytes with focus on charophytes (Blindow 2019 [71]). Finally, Rodrigo (2021 [84]) reviewed revegetation with submerged macrophytes, including charophytes, as a restoration tool for natural and constructed wetlands.
This extensive literature provides a good knowledge basis about which environmental conditions favor submerged macrophytes and shows that nutrient level and grazing pressure are the most important factors to be considered. High nutrient levels disfavor submerged plants because of poor light availability in the water column. A reduction of nutrient concentrations by means of (external) precipitation of phosphorus or by so-called "flushing" therefore has a positive impact on submerged vegetation (Meijer 2000 [44], van den Berg et al., 2001 [28]). Reduction of internal fertilization generally has a positive effect as well but may carry a risk of (mechanically) damaging the vegetation. Besides a decrease of overall nutrient concentrations, sediment removal reduces resuspension, allows a better anchorage of plants in the sediments, and exposes formerly covered seed banks, but may remove a major part of the diaspore reservoir. Covering of sediments reduces resuspension but also covers the seed banks and therefore can impede re-establishment. Oxidation of the sediment surface and (internal) phosphorus precipitation can be harmful due to mechanical disturbance and rapid pH changes (Hussner et al., 2014 [79]). Additionally, repeated mowing can favor submerged vegetation, as nutrients are removed and the ecosystem is maintained at a lower nutrient status (Kuiper et al., 2016 [85], see below).
Table 1. Case studies for transplantations of charophytes, sorted country-wise. Methods specify if plants were planted in pots, on textile mats, as green plant biomass, as oospores, or as sediment containing oospores, and if areas were covered with sheets to impede competing species. Accompanying measures (Accomp): C-cutting of competing macrophytes; F-fish reduction; N-nutrient reduction; imp-implementation of Anodonta and Salvelinus, species assumed to favour submerged vegetation. Success/problems: + full success, ± some success, − no success of transplantations; C-competition; E-eutrophication; H-herbivory.

Grazing pressure differs highly among different plant species. Thus, the highly "palatable" Stuckenia pectinata was favored by protection against grazing, while Myriophyllum spicatum grew better in open, unprotected plots (Vejřiková et al., 2018 [63]). Grazing effects also interact with nutrient conditions. An experimental study showed that grazing pressure was higher at higher nutrient concentration, which was explained by higher plant palatability (Bakker and Nolet 2014 [62]). Verhofstad et al. (2017 [43]) described the intricate interactions among nutrients, fish, and macrophyte composition: high densities of herbivorous fish or waterfowl give rise to a lake ecosystem without submerged vegetation but with dominance of phytoplankton. Biomanipulation can cause a re-establishment of submerged vegetation with dominance of bottom-dwellers at lower nutrient conditions and tall species at high nutrient concentrations, the latter of which can be replaced by phytoplankton if nutrient loading increases further.
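The coding scheme of the Table 1 caption lends itself to a simple record structure, which may help when compiling or extending such case studies. The sketch below is an assumption-laden illustration: the class, its fields, and the example record are hypothetical and do not reproduce any actual row of Table 1.

from dataclasses import dataclass, field

# Enumerations mirror the coding scheme of the Table 1 caption.
METHODS = ("pots", "textile mats", "green biomass", "oospores",
           "sediment with oospores", "covering sheets")
ACCOMP = {"C": "cutting of competing macrophytes", "F": "fish reduction",
          "N": "nutrient reduction",
          "imp": "implementation of Anodonta and Salvelinus"}
SUCCESS = {"+": "full success", "±": "some success", "−": "no success"}
PROBLEMS = {"C": "competition", "E": "eutrophication", "H": "herbivory"}

@dataclass
class CaseStudy:
    country: str
    site: str
    species: list
    methods: list                                      # subset of METHODS
    accompanying: list = field(default_factory=list)   # keys of ACCOMP
    success: str = "±"                                 # key of SUCCESS
    problems: list = field(default_factory=list)       # keys of PROBLEMS

# Hypothetical example showing how a row would be encoded:
example = CaseStudy(
    country="Sweden", site="example lake", species=["Chara subspinosa"],
    methods=["green biomass"], accompanying=["N"], success="+", problems=[],
)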
Moreover, water level and water level fluctuations have a high impact on submerged vegetation (Mäemets et al., 2018 [115]). In large, wind-exposed lakes, sediment resuspension can cause high turbidities, which can prevent (re)establishment of submerged vegetation even if nutrient concentrations are rather low (Schutten et al., 2005 [116]). Artificial islands, enclosures, and other protecting installations have been applied to locally reduce resuspension and allow an establishment of macrophytes (Hussner et al., 2014 [79]). Restoration success can be substantially improved if several measures are combined (Kozak and Gołdyn 2016 [117]).
Restorations of nutrient-rich lakes sometimes aim at favoring angiosperms such as Stuckenia pectinata, which are well adapted to higher turbidity (Coffey 2001 [124], Jellyman et al., 2009 [78]). Often, however, charophyte vegetation is preferred over tall macrophytes (Moss and van Donk 1990 [125]). Charophytes form dense vegetation with high biodiversity and a high biomass per lake surface unit and therefore have a stronger impact on phytoplankton and light availability than angiosperms. The share of rare species is high. Many species are winter-green or have a long growth period, which gives a more permanent effect on phytoplankton and light. Finally, these "bottom-dwellers" do not hamper bathing and boating as much as tall macrophytes, which reach up to the water surface (Blindow 1992b [126], Verhofstad et al., 2017 [43], Zinko 2017 [1]).
Transplantations of Submerged Vegetation
"Direct" establishment of submerged macrophytes by means of transplantations (e.g., translocations, see IUCN 2013 [7]) has been applied during lake restorations, often combined with other measures such as nutrient reduction and biomanipulation (Hussner et al., 2014 [79]) but also in running water to increase habitat quality (Riis et al., 2009 [129]).Once established, submerged vegetation contributes to the stabilization of a clearwater state and therefore causes a more sustainable effect of lake restorations.Transplantations have also been applied to increase the biodiversity of aquatic macrophytes (Muller et al., 2013 [130], Rodrigo and Carabal 2020 [108]) and to create habitats for fish (Slagle and Allen 2008 [131], Fleming et al., 2011 [132]).Transplantations are time consuming (Jeppesen et al., 2017 [77]) and can be successful only if environmental conditions are suitable for submerged macrophytes (e.g., Hussner et al., 2014 [79], Hilt et al., 2006 [80], van de Weyer et al., 2021 [83]).Time and money are wasted if the warning given by Bakker et al.
(2013 [12]) is not considered: "Subsequently one should wonder why macrophytes are not spontaneously returning to the restored water body.This may indicate that growing conditions are still not good enough and in that case transplanting will be unsuccessful".
Transplantations may be a suitable option if submerged plants do not (re)establish spontaneously in spite of suitable ecological conditions, which indicates that sufficient diaspores of native species are lacking. Based on experiences from a number of case studies, Hussner et al. (2014 [79]), Hilt et al. (2006 [80]), and van de Weyer et al. (2021 [83]) gave detailed recommendations regarding conditions and how such transplantations should be performed. Project aims should be defined, necessary permits from owners and nature conservation authorities should be obtained, threat factors should be reduced, ecological conditions and the colonization potential should be investigated, suitable plantation areas and methods as well as suitable species and donor sites should be selected, and, finally, experiences should thoroughly be documented (see Figure 3).
Knowledge about which conditions and procedures favor submerged vegetation and which influences should be avoided is therefore essential. Data on nutrients, light, depth profile, sediment structure, exposition, as well as occurrence and abundance of herbivorous animals such as fish, crayfish, and waterfowl should be available if transplanting is considered (Grodowitz et al., 2009 [133], Hussner et al., 2014 [79]). Exceedingly high nutrient concentrations and/or high densities of cyprinid fish or grass carp are the main reasons for failures (see references in Table 1).
Project aims, environmental conditions, and colonization ability are factors to be considered when suitable species are selected for transplantations. Hussner et al. (2014 [79]) presented a list of species suitable for transplantations in Central European lakes and recommended transplantation of Chara spp. in alkaline, calcium-rich lakes. Vice versa, Jellyman et al. (2009 [78]) advised against plantations of species adapted to low-nutrient conditions, such as charophytes, in eutrophicated lakes and recommended the use of Stuckenia pectinata for such environments. In China, Vallisneria natans is often planted, which is relatively tolerant of eutrophication (Li et al., 2008 [134]), but transplantations of this species fail at high fish densities and elevated nutrient concentrations, especially when both effects are combined (Gu et al., 2018 [135]). Rodrigo and Carabal (2020 [108]) recommended transplantation of Myriophyllum spicatum, Stuckenia pectinata, and C. vulgaris, as these species are widely available, easy to cultivate, and in experiments turned out to be rather grazing-resistant, while species such as Ceratophyllum demersum, Nitella hyalina, and Tolypella glomerata could be established to increase biodiversity once a vegetation cover has developed.
There are various techniques to plant aquatic macrophytes. The plants can be taken directly from a suitable donor site or transplanted after pre-culture. Green plants or plant parts, tubers, and rhizomes can be transferred to the target site. In laboratory experiments, some submerged plants such as Myriophyllum spicatum could easily be established from fragments, while in other species such as Potamogeton pusillus only few fragments survived after plantation (Barrat-Segretain et al., 1998 [136], 1999 [137], Vári 2013 [138]). Different kinds of substrates have been used, preferably decomposable ones such as jute mats, wood, wool, or decomposable pots (Rott 2005 [139], Hoffmann et al., 2013 [140], Hussner et al., 2014 [79], van de Weyer et al., 2021 [83]). Substrates and techniques differ considerably in costs and especially in labor input. Establishment success, however, seems generally to be less dependent on substrate type and planting technique but is severely jeopardized by unsuitable conditions such as strong currents, unconsolidated sediments, and low light availability. Sediments also should have a sufficiently high share of organic material and may not contain toxic substances. Protection against grazing is especially important as long as plant biomasses and expansion on the target site are low (Lauridsen et al., 1993 [94]).

Transplantations often start with so-called "founder colonies". These plantations, usually in protected exclosures, can be increased in the following years until the plants can expand by themselves and outside of the enclosures in the lake (Smart et al., 1998 [143], Smart and Dick 1999 [144], Jellyman et al., 2009 [78], Hussner et al., 2014 [79]). A sufficiently high share of the lake surface (around 30%) should be shallow enough to allow establishment by submerged vegetation (Jeppesen et al., 2017 [77]). In smaller lakes, the total area has been planted (van de Weyer et al., 2014 [99]) after a complete fish removal (see also Moss et al., 1996 [145]). Seagrass investigations demonstrate the advantages of transplanting large intact patches rather than dispersed plots (Zhang et al., 2021 [82]).
Few attempts to (re)establish submerged macrophytes have been made in warmer regions, where this vegetation often is seen as a nuisance, except for China, where submerged plants have been planted in large quantities during lake restorations (Jeppesen et al., 2017 [77]). In smaller lakes, plantations were often successful when protected against herbivorous fish but failed in some cases due to expansion of floating-leaved plants (Chen et al., 2009 [146], Jeppesen et al., 2017 [77]).
Transplantations of Charophytes
Charophytes are rather commonly selected for transplantations for various reasons. Most common are transplantations connected to lake restorations. A number of charophyte species form dense and sometimes winter-green vegetation, which can store substantial quantities of nutrients and has a stronger and more sustainable impact on water quality than angiosperms (Blindow 1992b [126], Kufel and Kufel 2002 [73]).
All available case studies on transplantations of charophytes are described in Table 1. For these transplantations, green plants, preferably protected by enclosures, and/or sediments rich in oospores were used. A number of these projects failed, often due to (sometimes illegal) fish implantations or nutrient loadings.
Other transplantation projects prefer charophytes, as they are bottom-dwellers and therefore are less disturbing for activities such as boating and swimming than tall macrophytes (Hilt et al., 2006 [80]); they also provide valuable habitats for fish (Dick et al., 2004 [113], Dick and Smart 2004 [114]). A mixture of aquatic macrophytes including charophytes is sometimes transplanted to increase biodiversity (Rodrigo and Carabal 2020 [108], Rodrigo 2021 [84]; see Figure 5). Charophytes were also transplanted as agents to accumulate radioactive substances ("biological polishing"; Smith and Kalin 1992 [97]). Rarely, threatened charophytes are transplanted as a measure to protect these species (see below). Zinko (2017 [1]) advised never to implement crayfish in habitats with threatened macrophytes.
Transplantations of Threatened Aquatic Vascular Plants
While there are a number of experiences with both indirect and direct establishments (transplantations), plantations aiming at the protection of threatened species (e.g., population restorations, see IUCN 2013 [7]) have given rise to different kinds of projects (see Jeppesen et al., 2017 [77]). Prior to transplantations, the presence of viable diaspores should be investigated in the transplantation site (Bakker et al., 2013 [12], Verhofstad et al., 2017 [43], Holzhausen et al., 2017 [36]). If an establishment from the present diaspore reservoir is not possible, transplantations may be a suitable option to support the regional population. Therefore, necessary permits and potentially negative consequences such as damage to the donor population, gene pool contamination, and introduction of neophytic species attached to the donor plant material have to be considered (Barrett and Kohn 1991 [151], Foster Huenneke 1991 [152], Hussner et al., 2014 [79], Holzhausen et al., 2017 [36]).
There are few guidelines or recommendations for transplantations of rare aquatic plants. Guidelines for transplantations of rare terrestrial plants were developed in several countries, such as Germany (Sukopp and Trautmann 1981 [153]), the USA (Falk et al., 1996 [154]), and Sweden (Wetterin 2008 [155]). The IUCN (2013 [7]) provided guidelines for transplantations (translocations) of rare animals and plants. These publications agree in their main points:
• A species should be transplanted only if it does not establish spontaneously;
• Laws have to be followed and necessary permits must be obtained;
• Species may only be planted within their (recent or historic) distribution area;
• Donor plants should be obtained from a site close by and be genetically similar to the original population;
• The donor population may not be damaged;
• Transplantation sites must correspond to the species' environmental demands;
• All transplantations have to be monitored and documented scientifically over a longer time period;
• Protection and appropriate management of the transplantation site has to be guaranteed.
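Because the publications agree on these points, the shared requirements can be read as a pre-transplantation checklist. The following sketch encodes them as boolean preconditions; the field names and the gating logic are our own illustrative rendering of the guidelines, not a procedure given by any of the cited sources.

from dataclasses import dataclass

@dataclass
class TransplantationPlan:
    # Each flag corresponds to one of the shared guideline points above.
    establishes_spontaneously: bool
    permits_obtained: bool
    within_distribution_area: bool
    donor_nearby_and_genetically_similar: bool
    donor_population_unharmed: bool
    site_matches_species_demands: bool
    monitoring_and_documentation_planned: bool
    site_protection_guaranteed: bool

def ready_to_transplant(plan: TransplantationPlan) -> bool:
    """A species should be transplanted only if it does not establish
    spontaneously and every other requirement is met."""
    if plan.establishes_spontaneously:
        return False
    return all((plan.permits_obtained,
                plan.within_distribution_area,
                plan.donor_nearby_and_genetically_similar,
                plan.donor_population_unharmed,
                plan.site_matches_species_demands,
                plan.monitoring_and_documentation_planned,
                plan.site_protection_guaranteed))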
Falk et al. (1996 [154]) warned of failure: "A replacement population can be established only if the original causes of decline have been eliminated".
There are some experiences with transplantations of rare aquatic vascular plants. Among isoetids, the endemic Isoetes malinverniana was successfully transplanted in Italian small water bodies (Abeli et al., 2017 [156]). Transplantations of Littorella uniflora, Isoetes lacustris, and Lobelia dortmanna succeeded in German lakes, especially if the plants were protected against grazing (Lenzewski 2019 [157]).
Schwarzer and Wolff (2005 [162]) used both living plants and sporangia for the re-establishment of Salvinia natans in Germany. Ibars and Estrelles (2012 [163]) described the successful transplantation of soil spore banks to recover a lost population of Marsilea quadrifolia in Spain.
Transplantations of Threatened Charophytes
Indirect and direct establishment of charophyte vegetation have been part of a number of restoration projects (see above and Table 1). These experiences provide extensive knowledge about suitable environmental conditions for charophytes (Stewart 2008 [164]), which is an important prerequisite for successful transplantations (see Bakker et al., 2013 [12]). Together with transplantations of other threatened aquatic macrophytes (see above), these activities provide knowledge essential for transplantations of threatened charophytes, which until now have hardly been applied. Bakker et al. (2013 [12]) and Jeppesen et al. (2017 [77]) mention the need for transplantations of threatened submerged macrophytes, including charophytes, to maintain biodiversity. For Swedish wetlands, Ekologgruppen (2009 [165]) recommended transplantations of threatened charophytes such as Chara papillosa, Nitella gracilis, and N. mucronata. Becker (2014 [166]), however, did not include transplantations among the numerous actions suggested to protect threatened charophytes in Germany.
According to our knowledge, the Swiss action plan for Nitella hyalina was the first time a threatened charophyte species was planted aiming to re-establish the species in its (former) Swiss distribution area (Schwarzer 2017 [111]). Fresh plant material was collected in France during 2017 and pre-cultured outdoors. These pre-cultures were successful. The plants hibernated and produced richly fertile biomass during 2018, when Nitella hyalina was planted in suitable sites close to Lake Zürich. During the following years, the species was stable in six out of 10 sites and expanded in these sites (see Figure 6). Plantations in additional sites are planned (A. Schwarzer, pers. comm.).
Both fresh plant material and oospores can be used for transplantations, depending on the life strategy of the species in question.
Establishment from Shoot Fragments
Many species can easily be established from shoot fragments. Shoot apices containing at least two nodes are used, with the lowest node pushed down into the sediment. Node cells are omnipotent (Skurzyński and Bociąg 2011 [31]) and, in most cases, readily develop rhizoids and new growth. For such precultures, glass beakers with low-nutrient water (tap water or water from the donor site) can be used, and sediments with a moderately high organic content provide nutrients. Sediment from the donor site, possibly mixed with sand, is often most suitable. A number of charophyte species from temperate regions have been cultured from shoot fragments, in most cases successfully (see Table 2). Bociąg and Rekowska (2012 [167]) successfully cultivated shoot fragments from a number of species. Of these, Chara globularis had the highest growth rates, followed by C. subspinosa; the lowest rates were found in C. tomentosa and C. aspera. Most Chara spp. can easily be cultured, often for many years, but cultivation generally seems to be more difficult for species without cortex such as Nitella spp. and Nitellopsis obtusa (Blindow, own data; van de Weyer, own data). Species without cortex and with long internodes, such as Nitellopsis obtusa and Nitella translucens, have been cultured for physiological experiments, either in outdoor ponds or (more frequently) in the laboratory, but growth rates were not published for such cultures. Nitellopsis obtusa was transferred from the field to aquaria with tap water or site water at room temperature and under lamps, and thus kept alive until the start of the experiments (Kurtyka et al., 2011 [171], Kisnieriene et al., 2012 [172]). In a laboratory of the University of Valencia, Spain, a number of charophyte species are kept in culture in small pots containing a sand/sediment substrate mixture, which are placed in larger beakers with tap water (Rodrigo et al., 2017 [170], Rodrigo 2021 [84]).
A new culture method was developed by Wüstenberg et al. (2011 [169]). Charophyte shoot fragments are planted in sand enriched with K₃PO₄ and covered with pure sand without nutrient addition. The overlying water consists of a nutrient solution without phosphorus. A bicarbonate reservoir enclosed in a polyethylene membrane provides a permanent supply of inorganic carbon. The advantage of this method is that the growth rates of microalgae are kept low, while the charophytes can take up phosphorus from the sediment. Growth rates of charophytes are very high in such cultures.
Establishment from Oospores
Some charophyte species cannot be established from shoot fragments (see above). Annual species with rich oospore production, in particular, can be easier to establish from oospores. Establishment from oospores is complicated by the generally low germination success (see above) and the demand for species-specific germination conditions. Oospores of Chara globularis only germinate at low redox potential (Forsberg 1965 [173]), while other species do not share this requirement (Stross 1989 [35]). Germination has sometimes failed in autoclaved sediments and has been successful only if the sediment contained a certain organic share (Holzhausen et al., 2017 [36]). Temperature probably acts as an indicator of the most suitable season (spring) for germination, while summer temperatures indicate that it is too late. Some species such as Nitella furcata and Chara zeylanica, however, only germinate during a so-called "germination window" in spring, which seems to open independently of temperature (Sokol and Stross 1986 [174], Stross 1989 [35]). Additionally, the presence of toxic substances can inhibit germination, as shown for Chara hispida in the presence of microcystin (Rojo et al., 2013 [175]). Fe₂(SO₄)₃, which is sometimes used to immobilize phosphorus in lake restoration, was shown to inhibit charophyte oospore germination (Rybak et al., 2017 [176]). Oospore germination of both Chara sp. and Nitella sp. was reduced by high concentrations of Cu (Kelly et al., 2012 [177]), and oospores of Chara vulgaris showed lower germination after exposure to high concentrations of Ni (Kalin and Smith 2007 [39]), sulfide, or Fe²⁺ (Sederias and Colman 2009 [178]).
Generally, oospores should be stratified, and sediments should be dried and provided with a certain share of organic matter before germination experiments are started. The specific germination demands of the species in question, such as light requirements, must be known (Holzhausen et al., 2017 [36]). The viability of oospores collected from sediments should be investigated. So-called "crush tests" give a first indication: viable oospores show a "resistance to crushing" when pressed. Additionally, triphenyltetrazolium chloride (TTC) staining is a good indicator of viability (Holzhausen et al., 2017 [36]).
Precultures
Charophyte species which do not form dense vegetation but occur as single plants on their sites often have to be precultured to obtain sufficient biomass for transplantations. Many species can easily be propagated in larger or smaller containers with suitable sediments and water (see above), possibly with transplantations to other containers. The plants can be cultured indoors with artificial light or outdoors in larger containers or mesocosms. The latter alternative is assumed to be more promising, as the plants are already adapted to the on-site climate when transferred to their target sites. A good example is the Swiss action plan for Nitella hyalina, with precultures in a market garden engaged by the canton of Zürich to culture aquatic macrophytes (Schwarzer 2017 [111]; Schwarzer, pers. comm.).

Accompanying Techniques

eDNA Analyses

eDNA analyses of water samples are already widely applied to detect a large range of aquatic organisms (see reviews by Thomsen and Willerslev 2018 [179] and Ruppert et al., 2019 [180]). In Sweden, eDNA analyses have been applied successfully for several years, with a focus on fish, mussels, and crayfish (Bohman 2018 [181], von Proschwitz and Wengström 2021 [182]). Aquatic plants are, however, largely under-represented in such analyses compared to aquatic animals (Thomsen and Willerslev 2018 [179]). In a Canadian investigation, eDNA analyses identified more species belonging to the genera Potamogeton and Zannichellia than "traditional" methods (Kuzmina et al., 2018 [183]). Muha et al. (2018 [184]) detected invasive aquatic plants by means of eDNA analysis.
The method has not yet been tested systematically for charophytes but seems promising. Charophytes are assumed to release larger DNA quantities than vascular plants: when damaged by, e.g., grazing, the content of the large internodal cell, which contains a high number of nuclei and chloroplasts, is released into the water column. First investigations confirmed that charophytes are easily detected in water samples. Markers based on both nuclear and chloroplast genes are applied (Nowak, pers. comm.).
Diaspore investigations are important if transplantations of rare species are considered in sites where these species are absent from the vegetation. Such plantations should be avoided if viable oospores are still present in the sediment. Instead, re-establishment from the site's "own" diaspores should be promoted (Bakker et al., 2013 [12], Verhofstad et al., 2017 [43], Zinko 2017 [1], Holzhausen et al., 2017 [36]). "Classical" diaspore reservoir investigations are suitable for quantifying and identifying oospores and checking their viability (Holzhausen et al., 2017 [36]) but are labor-intensive and carry a high risk of missing rare species. eDNA analyses of sediment samples are less expensive and may be more suitable to detect rare species in the diaspore reservoir, especially Nitella spp. and Tolypella spp. Species belonging to these genera often have high oospore production (see below), and species-specific primers already exist (P. Nowak, pers. comm.). Sediment samples down to 10 cm could be analyzed, which corresponds to the layer containing viable oospores (van Onsem and Triest 2018 [91]). In terrestrial habitats, eDNA analyses have already been applied to identify diaspores in soil samples (Fahner et al., 2016 [185]).
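To illustrate the screening step of such an eDNA workflow, the sketch below shows a minimal exact-match screen of species-specific marker fragments against sequencing reads. All sequences in it are invented placeholders rather than published markers, and a real workflow would rely on curated primer databases and proper alignment tools.

# Minimal sketch: flag charophyte species in eDNA reads by exact marker match.
# All sequences below are invented placeholders, not published markers.

SPECIES_MARKERS = {
    "Nitella translucens": "ATCGGCTAAGT",   # hypothetical marker fragment
    "Tolypella canadensis": "GGATCCTTAGC",  # hypothetical marker fragment
}

def reverse_complement(seq: str) -> str:
    """Return the reverse complement so both strands are screened."""
    pairs = {"A": "T", "T": "A", "C": "G", "G": "C"}
    return "".join(pairs[base] for base in reversed(seq))

def detect_species(reads: list) -> dict:
    """Count reads containing each species marker on either strand."""
    hits = {name: 0 for name in SPECIES_MARKERS}
    for read in reads:
        for name, marker in SPECIES_MARKERS.items():
            if marker in read or reverse_complement(marker) in read:
                hits[name] += 1
    return hits

sample_reads = ["TTATCGGCTAAGTCC", "ACGTACGTACGT"]  # toy reads
print(detect_species(sample_reads))
# -> {'Nitella translucens': 1, 'Tolypella canadensis': 0}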
Harvesting
Harvesting of submerged vegetation is a very old technique, traditionally applied to fertilize arable fields and still used for this purpose in many countries (Roger and Watanabe 1984 [186]). Recently, the technique was recommended as a means of achieving complete phosphorus recycling (Quilliam et al., 2015 [187]).
In Sweden, a general strategy for transplantations of native threatened aquatic species was implemented (Wetterin 2008 [155]). On a regional level, the county administration of Östergötland developed a strategy for the cultivation and translocation of threatened species (Antonsson 2012 [200]). A national strategy for translocations of aquatic plants and animals is in preparation. For red-listed species (which is the case for all program species), permits may be necessary for transplantations according to the national environmental law (Miljöbalken 12 kap 6 §).
The 10 program species differ widely in rareness/number of sites and especially in life strategies. Consequently, different actions with different priorities are recommended to secure the species within the country (Table 3). Surveys are recommended for some species, either by "classical" methods and/or by means of eDNA analyses of sediment or water samples. Transplantations are recommended for species which are assumed to be hampered from expansion by rareness and a lack of oospores in the diaspore reservoirs, not by a lack of suitable sites. Some species are highly competitive (K-strategists) and can form dense and extensive biomass once they have reached a new site but have only restricted dispersal abilities. Biomass of such species can be collected from donor sites without jeopardizing the population. To test suitable techniques, these species should first be transplanted on-site. Prior to transplantations to new sites, the occurrence of rare species which could potentially be outcompeted by the "newcomers" has to be investigated, and the transplantation material has to be checked for contamination with undesired species such as neophytes. Species with only low biomass on their current sites, mainly weak competitors (R-strategists), may have to be precultured. Methods have not been tested for any of these species, but the method developed for Nitella hyalina has been very successful (see Table 1) and could be applied. Cutting tall macrophyte vegetation may additionally support the establishment of these weak competitors, and indicator species may help to identify suitable sites. In an initial stage, all transplants need to be protected against grazing and followed by detailed monitoring and, if necessary, by actions to improve water quality and reduce herbivorous/benthivorous fish.

Table 3. Number of sites (records after 2000), life strategy, and recommended actions for the 10 charophyte species included in the Swedish action plan for threatened macrophytes (Zinko 2017 [1]). Strategy: r = R-strategist; k = K-strategist; int = intermediate. eDNA: specified if analysis of sediment (sed.) and/or water samples is recommended. Transplantations (Tr.), direct and/or after precultivation (precult.): 1 = high priority; 2 = lower priority. ? = strategy may deviate in the Swedish populations. Cutting: harvesting of tall macrophytes to improve establishment. Indicator: indicator species are used to identify suitable habitats. For further explanations, see text.

Swedish authorities, similar to authorities in other countries, also include taxa of doubtful taxonomic rank in conservation efforts. Consequently, both C. filiformis and C. subspinosa were included in the recent action plan to protect threatened macrophyte species (Zinko 2017 [1]), although they cannot be genetically separated from C. contraria and C. hispida, respectively (Nowak et al., 2016 [201], Nowak, pers. comm.). The reason for this decision is that, as in other taxonomic groups, the delimitation of species in charophytes is "man-made" rather than corresponding to the biological species concept. Genetic analyses are of limited support in, e.g., the so-called "Hartmania complex" within the genus Chara (which includes C. subspinosa and C. hispida) because of the generally close clustering of all taxa belonging to this group (see Nowak et al., 2016 [201]).
Lake Levrasjön in Scania is the only Swedish site of C. filiformis. The species was found there for the first time in 1860 and seems to have occurred in the lake ever since (Wahlstedt 1862 [205], Hasslow 1931 [68], Blindow 2009a [2], own observations). The species should be transplanted to other calcium-rich lakes close to Lake Levrasjön, preferably as green plants after test transplantations in Lake Levrasjön. Cutting of tall macrophytes is recommended in Lake Levrasjön to stabilize the occurrence of C. filiformis in the lake.
Bulbils of N. obtusa germinated readily under both high- and low-light conditions, while oospore germination failed. Cultivation of green plants was successful in natural sediments but not in sand, and was less easy than for Chara spp. (Holzhausen et al., 2017 [36]). Krautkrämer (pers. comm.) failed to culture the species. For physiological experiments, the species was kept in laboratory cultures for extended periods, but no information on growth rates was given (Kurtyka et al., 2011 [171], Kisnieriene et al., 2012 [172]).
In Sweden, Chara subspinosa and N. obtusa occur in 16 and 17 sites, respectively, all of them calcium-rich lakes (see Figure 2). C. subspinosa is difficult to investigate, as it is hard to distinguish from C. hispida. C. subspinosa and N. obtusa have disappeared from a number of their former sites, probably because of eutrophication (Kyrkander 2007 [213], Zinko 2017 [1], Herbst et al., 2018 [214], Artportalen: accessed 7 May 2021). In N. obtusa, however, this decline was compensated for by the colonization of new sites as the distribution area extended into northern regions (Blindow 2009a [2]).
Transplantations are recommended to secure the occurrence of both species in the country and to counteract their assumed poor dispersal abilities, preferably on sites from which they have disappeared, given that the on-site conditions are favorable. Preculture is not necessary, as dense vegetation is present on the current sites (Kyrkander 2007 [213], Zinko 2017 [1], own observations). Lake Krankesjön in southern Sweden shifted to a clear-water state during the 1980s, and charophytes expanded (Hargeby et al., 1994 [72]). C. subspinosa was observed for the first time in 1995 (Blindow 2009a [2]); Nitellopsis obtusa was observed in 2009 (Artportalen: accessed 7 May 2021). Both species have expanded since then, thereby reducing the formerly dense vegetation of Chara tomentosa (own observations). In North America, where Nitellopsis obtusa is an invasive plant, it has likewise outcompeted other submerged macrophytes (Brainard and Schulz 2017 [215], Cahill 2017 [212]). Because of the high competitive strength of C. subspinosa and N. obtusa, there is a certain risk that other submerged macrophytes will be outcompeted after plantations of (one of) these target species. A detailed investigation of the submerged vegetation, including a search for rare species, is therefore necessary before transplantations (Zinko 2017 [1]). Both species are, however, especially suitable for transplantations in the context of lake restorations because of their ability to form dense vegetation. They could be planted in enclosures in their former site, Lakes Ringsjöarna, combined with other measures to improve water quality. This option is already being discussed by the local administration (Richard Nilsson, Ringsjöns vattenråd, Höörs kommun, pers. comm.), especially as the water quality of the lakes has recently improved (Ekologigruppen Ekoplan AB 2019 [216]).
In Sweden, Nitella translucens occurs in six current sites in the southern part of the country and has disappeared from five (Artportalen: accessed 7 May 2021). There may be a rather high number of unknown sites (Zinko 2017 [1], Å. Widgren, pers. comm.). Apart from field investigations, possibly supported by eDNA analyses (P. Nowak, pers. comm.), transplantations are planned for some of the species' former sites if water quality seems appropriate, after test plantations within one of its current sites. On some current sites, biomass seems to be sufficient for plantations, which removes the need for precultures. A pilot study with transplantations within Lake Älmtasjön, one of the current sites, is planned for the summer of 2021 (Å. Widgren, pers. comm.).
Nitella mucronata is both annual and perennial, with hibernation as a green plant (Wahlstedt 1875 [217], Migula 1897 [202], Olsen 1944 [67], Forsberg 1960 [224]). Little is known about the dispersal abilities of the species. Both fertile and sterile plants are common (Olsen 1944 [67], Korsch 2014a [225]). The species can form monospecific vegetation and is therefore assumed to be a rather good competitor (Blindow 2009b [3]). It occurs in a broad range of habitats such as lakes, small water bodies, and running water, in both calcium-rich and soft water, ranging from oligotrophic to eutrophic conditions with varying conductivities, and it seems to be less sensitive to eutrophication than many other charophytes (Simons and Nat 1996 [226], Doege et al., 2014 [227], Korsch 2014a [225]). In the laboratory, oospores only germinated under high-light, not low-light, conditions (Holzhausen et al., 2017 [36]). The species can rather easily be kept in culture (V. Krautkrämer, pers. comm.).
Intensive field investigations during the former action plan (Blindow 2009b [3]) increased the number of known sites in Sweden to around 50 (Artportalen: accessed 5 May 2021). Plantations seem promising and have been successful in Lake Phoenix, Germany (see Figure 5; Table 1), but are not considered necessary to secure the species' occurrence in Sweden. Plantations could, however, be applied during lake restorations (Zinko 2017 [1]). Cutting of tall macrophytes is recommended to favor the species on its current sites.
Nitella syncarpa occurs in lakes and small water bodies, including temporary ones, in subneutral to alkaline water and under oligo- to eutrophic conditions, mainly in shallow water but occasionally down to 8 m depth (Vesić et al., 2011 [235], Korte et al., 2014 [229]). Zherelova (1989a,b [236,237]) probably cultivated the species in the laboratory but did not specify any methods. In Sweden, N. syncarpa only occurs in two current sites and seems to have disappeared from a number of its former sites (Blindow 2009b [3], Artportalen: accessed 7 May 2021). The occurrence on one of its current sites is threatened by eutrophication (Kyrkander and Örnborg 2012 [238]). The species is one of the most threatened charophytes in Sweden, and actions to secure its occurrence in the country have a high priority (see Table 3).
Transplantations seem important to secure all three species in Sweden. As they only have low biomasses on their current sites, precultivation is probably necessary. N. confervacea has a rather high biomass in Lake Möckeln (own observations), which can potentially be used for a direct transfer. In Lake Limsjön, the biomass of N. syncarpa is rather large (Kyrkander and Örnborg 2012 [238]), and the removal of part of this population for transplantations was therefore suggested (Zinko 2017 [1]). As the three species are typical pioneer plants, transplantations should not be focused on former sites but on suitable habitats within their current distribution area, such as lake shores with sparse vegetation and newly created small water bodies (Zinko 2017 [1]). Indicator species may help in selecting such habitats. Cutting of taller macrophytes could support the establishment. The species may be overlooked on many sites. N. confervacea in particular is hard to find because of its small size and the risk of confusion with Nitella wahlbergiana, which is rather abundant in the country (Langangen 2007 [208], Zinko 2017 [1]). Resting oospores may be far more common than green plants and could be tracked by means of eDNA.
Chara braunii is mainly annual and hibernates by means of oospores, but occasionally also as a green plant (Wahlstedt 1864 [239], Migula 1897 [202], Langangen 1974 [240], Franke and Doege 2014 [241]). The species is richly fertile and has been assumed to have good dispersal ability (Migula 1897 [202], Krause 1997 [20], Langangen et al., 2002 [242], Zhakova 2003 [243], Franke and Doege 2014 [241], Blindow 2009c [4]). It has been characterized as a poor competitor (Migula 1897 [202], Krause and Walter 1985 [244]) but can dominate in sites where competing vegetation is eliminated during winter, such as fish ponds that are drained in winter (Krause and Walter 1985 [244]). The species occurs mainly in small water bodies but also in permanent habitats such as springs (Krause 1997 [20]) and even in the deep-water zones of larger lakes down to 33 m (Blindow et al., 2018 [245]). It can be found under oligotrophic to eutrophic conditions, in hard and soft water, and in freshwater and brackish water. Mass development in a fish pond which was dried and frozen during winter (Migula 1897 [202]) indicates that oospores not only survive drying and freezing but that germination may even be stimulated by such conditions. Schmidt et al. (1996 [246]) characterized C. braunii as a "permanent pioneer" in fish ponds. In its Swedish Bothnian Bay sites, the species occurs at a depth of 0.1 to 0.7 m (Artportalen: accessed 14 October 2018), where ice action during winter is strong and any hibernation as green plants is hardly possible (Idestam-Almqvist 2000 [247]).
The species has often been cultured. In Japan, it was kept outdoors in containers with tap water and a sand/soil mixture (Amirnia et al., 2019 [248]). Imahori and Iwasa (1965 [249]) and Sato et al. (2014 [250]) obtained axenic cultures after surface sterilization of oospores with sodium hypochlorite (see Forsberg 1965 [173]) in containers with a sand/soil mixture, distilled water, and artificial light at 23 °C. The cultivation method developed by Wüstenberg et al. (2011 [169]) was successfully applied at the University of Marburg, Germany (S. Rensing, pers. comm.). Foissner et al. (1996 [251]) and Schmölzer et al. (2011 [252]) described successful cultivation and high growth rates in aquaria containing a peat/sand mixture and distilled water with artificial light at around 20 °C. Cultures failed, however, at the University of Valencia, Spain (M. Rodrigo, pers. comm.).
In Sweden, the species occurs in around 20 current sites in the Bothnian Bay (Pekkari 1953 [253], Tolstoy and Österlund 2003 [254], Artportalen: accessed 7 May 2021). For a long time, these brackish-water sites were the only ones known in the country after the species disappeared from two former freshwater sites, probably because of eutrophication (Blindow 2009c [4]). During 2018 and 2019, C. braunii was detected in three larger freshwater lakes, one of which (Lake Finjasjön) was heavily eutrophic (Artportalen: accessed 7 May 2021). Freshwater and brackish-water occurrences are clearly separated from each other, not only geographically but also ecologically. While the brackish-water plants are typical R-strategists, the hibernation, reproduction, and competitive behavior of the freshwater plants are largely unknown. The genetic diversity of C. braunii is unusually large, indicating that the species may consist of several taxonomic clusters (P. Nowak, pers. comm.).
Transplantations are not planned for the Bothnian Bay, as the occurrence in this area is assumed to be secure, but are recommended to support the occurrence in freshwater. Transplantations from one of the two freshwater lakes to suitable sites close by may be considered, after preculture if the on-site biomass is too limited.

Tolypella canadensis is an arctic charophyte with a circumpolar distribution (Romanov and Kopyrina 2016 [255]) and low on-site temperatures throughout (Langangen 1993 [256], Romanov and Kopyrina 2016 [255]). In Scandinavia, both fertile and sterile plants have been found. Oospores sometimes seem not to ripen before the end of the short growing period (Langangen 1993 [256], Langangen and Blindow 1995 [257]). The species is perennial and hibernates as green plants or by means of bulbils (Romanov and Kopyrina 2016 [255]). Nothing is known about its dispersal or competitive abilities, but it has often been found in dense monospecific vegetation (Langangen 1993 [256], Krause 1997 [20], Artportalen: accessed 14 October 2018). The species has been found in lakes and slowly running water; it prefers deeper water and soft-water conditions with low Ca concentrations and neutral pH (Langangen 1993 [256], Langangen and Blindow 1995 [257], Romanov and Kopyrina 2016 [255]). In a culture experiment, the plants died when exposed to temperatures exceeding 15 °C (Langangen 1993 [256]).
In Sweden, there are six current sites, all in the county of Norrbotten (Artportalen: accessed 7 May 2021). During field investigations, the species was relocated at most of its former sites (Pettersson et al., 2008 [258], Blindow 2009d [4], Zinko 2017 [1], Artportalen: accessed 8 October 2018). The occurrence in Sweden seems to be secure despite the low number of known sites. The species is assumed to have been widely overlooked, as field investigations in this part of the country are difficult and expensive. eDNA analyses of water samples have been tested successfully (P. Nowak, pers. comm.) and can help to reduce the costs of these investigations.
Final Remarks
The Swedish Action Plan (Zinko 2017 [1]) is an ambitious project. The extensive literature reviewed in this paper shows that successful re-establishments and transplantations have to consider life strategies, which vary considerably among charophytes, and that management techniques have to be adapted to the different species and life strategies. Existing experiences with re-establishments and transplantations of charophytes provide a sound basis for the transplantations planned. The successful transplantation of Nitella hyalina in Switzerland is especially promising. By starting this action plan, Sweden has taken a pioneering role in the protection of threatened charophytes. A thorough documentation of the results and experiences is of utmost importance.
Figure 1. Different systematic groups and life forms of submerged plants, schematically.
Figure 2. Different life strategies in charophytes: (a) Nitella capillaris, an extreme R-strategist, was rediscovered in this small water body near Kristianstad about 100 years after the last record in the country. Photo by Bertil Möllerström. (b) The K-strategists Chara subspinosa and C. tomentosa form dense vegetation in Lake Levrasjön. Photo by Silke Oldorff.
Figure 5. Lake Phoenix, Germany. (a) Charophytes (green plants) are collected by divers in the donor lake; (b) planting of charophytes in L. Phoenix; (c) collection of water and sediment containing oospores by divers using a pump in the donor lake; (d) implementation of donor-lake water and sediment in L. Phoenix. Photos by Klaus van de Weyer.
Figure 6. Transplantation of Nitella hyalina in Switzerland. (a) Precultivation in different tanks in a garden. (b) Target site during 2019: transplanted N. hyalina (red circle) within vegetation consisting of different Chara species. (c,d) Target site during 2020. (d) Some N. hyalina had hibernated (red circle); establishment of N. hyalina outside of the original plantation is indicated by red arrows. Photos by A. Schwarzer.
Table 2. References for successful culture of single charophyte species from shoot fragments. Swedish program species are shown in bold.
Protein Interaction and Na/K-ATPase-Mediated Signal Transduction
The Na/K-ATPase (NKA), or Na pump, is a member of the P-type ATPase superfamily. In addition to pumping ions across the cell membrane, it is engaged in the assembly of multiple protein complexes in the plasma membrane. This assembly allows NKA to perform many non-pumping functions, including signal transduction, that are important for animal physiology and disease progression. This article focuses on the role of protein interaction in NKA-mediated signal transduction and its potential utility as a target for developing new therapeutics.
Introduction
The Na/K-ATPase (NKA) was discovered by Skou 60 years ago as the molecular machine for pumping Na⁺ and K⁺ across the cell membrane [1]. In the early 1970s, several studies revealed the regulatory effects of ouabain on cell growth and gene expression. At that time, these regulatory effects of ouabain were all ascribed to pump inhibition and the resulting change in intracellular ion concentration [2][3][4]. About 20 years ago, a series of studies conducted first in neonatal cardiac myocytes and subsequently in renal epithelial cells showed that ouabain could activate a number of cell growth-related pathways, many of which are independent of changes in intracellular ion concentration. These studies prompted a great effort by many laboratories and the subsequent demonstration that the NKA actually has many non-pumping functions [5,6]. In this review, we will first look back at our evolving view of NKA in cell biology. We will then give an in-depth discussion of NKA-mediated signal transduction; its role in animal physiology and disease progression; theoretical considerations and experimental evidence of direct protein interactions as the molecular mechanism; and the possibility of targeting such interactions for developing new therapeutics.
Na/K-ATPase and Active Ion Transport
NKA belongs to the P-type ATPase family. Before Skou discovered NKA in 1957, cell biologists had speculated about the existence of such transmembrane machinery for over 100 years. One of the most important early studies was conducted by Carl Schmidt, who demonstrated the existence of a Na⁺/K⁺ concentration gradient across the cell membrane [7]. This led Rudolf Heidenhain to propose a "microscopic steamship" lying within the membrane that is capable of maintaining this gradient [8]. Subsequently, several key discoveries paved the way and convinced cell biologists of a principle responsible for the transmembrane movement of ions against their concentration gradients. Most notable were the studies by Ernest Overton, showing that muscle cells have an active transport mechanism allowing them to move Na⁺ and K⁺ across the cell membrane via the consumption of energy [9,10]. This was confirmed by Heppel and Steinbach in muscle cells using isotopes [11][12][13] and by several other groups. The discovery of inhibitors of such active transport in red blood cells [17], and the requirement of ATP for K⁺ uptake in these cells, further supported this view and linked the transport system to a membrane-bound ATPase sensitive to cardiac glycosides [18,19].
At the time Skou discovered NKA, Robert Post had found that the ATPase is responsible for the active transport of three Na⁺ and two K⁺ across the plasma membrane in red blood cells. His subsequent work on the reaction mechanism led to the Albers-Post scheme, which holds not only for the NKA but also for other members of the P-type ATPase family [20][21][22]. Ion pumping is linked to a cycle of conformational changes. Around the same time, cell biologists and renal physiologists developed a kidney NKA purification protocol and generated a large number of important mechanistic and cell biological data that refined our picture of the structure, reaction mechanism, and cellular regulation of NKA [23][24][25][26]. Importantly, we now understand that NKA exists in a dynamic conformational equilibrium, which is important for its ability to convert ATP hydrolysis into the binding and movement of ions across the plasma membrane, as illustrated in the Albers-Post reaction scheme (Figure 1). It also allows the binding of many ligands (chemicals such as cardiotonic steroids that can bind to NKA with high affinity) to the NKA in a conformational state-dependent manner.

NKA, as a large and highly expressed membrane protein complex (most cells contain over one million surface pumps per cell), consists of two noncovalently linked subunits, α and β [27,28]. The α subunit contains the ATP and other ligand binding sites and is considered the catalytic subunit. The scaffolding function of the β subunit is essential for the membrane targeting and full function of the NKA. Four isoforms of NKA have been identified. The existence of different isoforms was first suggested by Michael Marks and Nicholas Seeds in 1987 [29]. They found that ouabain exhibited two distinct inhibition phases of NKA in preparations made from the mouse brain. Subsequently, Sweadner identified at least two isoforms of NKA in membrane preparations from rat brain [30]. A further breakthrough came from the molecular cloning of NKA, first from sheep kidney in 1985 [31], and then from the identification of four isoforms in different rat tissues [31][32][33]. Furthermore, studies showed that the different isoforms are expressed in a tissue-specific manner [32,33]. The α1 isoform is found in all cells and is prevalent in all epithelial cells. The α2 and α3 isoforms are expressed in skeletal muscle, neuronal tissue, and cardiac myocytes. The α4 isoform is expressed in the testis and regulates sperm motility [27,34]. The sequence identity is about 87% among α1, α2, and α3, while α1 and α4 are 78% identical. Nevertheless, the overall tertiary structure appears to be identical among all isoforms [35].
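The identity figures just quoted come from straightforward pairwise comparisons of aligned sequences. The sketch below shows this arithmetic on short invented placeholder fragments; the real α-subunit sequences run to roughly a thousand residues and are not reproduced here.

# Pairwise percent identity between pre-aligned sequences, the arithmetic
# behind isoform comparisons such as those quoted above. The fragments are
# invented placeholders, not real alpha-subunit sequences.

ISOFORM_FRAGMENTS = {
    "alpha1": "ALWSRQKTGVDEMNPH",  # placeholder fragment
    "alpha2": "ALWTRQKTGIDEMNAH",  # placeholder fragment
    "alpha4": "SLWTRQETGIDGMNAH",  # placeholder fragment
}

def percent_identity(a: str, b: str) -> float:
    """Share of identical residues between two equal-length aligned strings."""
    if len(a) != len(b):
        raise ValueError("sequences must be pre-aligned to equal length")
    matches = sum(x == y for x, y in zip(a, b))
    return 100.0 * matches / len(a)

ref = ISOFORM_FRAGMENTS["alpha1"]
for name, seq in ISOFORM_FRAGMENTS.items():
    print(f"alpha1 vs {name}: {percent_identity(ref, seq):.0f}% identical")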
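Stepping back to the transport cycle itself, the Albers-Post scheme (Figure 1) can be pictured as a simple walk through four conformational states. The sketch below is a qualitative, textbook-level illustration only; it deliberately omits rates, occluded intermediates, and ligand affinities.

# Illustrative sketch: the Albers-Post scheme as a four-step state machine.
# Qualitative only; kinetics, occluded intermediates, and ligand affinities
# are omitted.

CYCLE = [
    ("E1",  "binds 3 Na+ and ATP on the cytoplasmic side"),
    ("E1P", "autophosphorylated by ATP; Na+ transiently occluded"),
    ("E2P", "releases 3 Na+ outside and binds 2 K+ outside"),
    ("E2",  "dephosphorylated; K+ occluded, then released inside"),
]

def run_cycle(n_turnovers: int) -> None:
    """Print the conformational states visited over n pump turnovers."""
    for turnover in range(1, n_turnovers + 1):
        for state, event in CYCLE:
            print(f"turnover {turnover}: {state:<3} {event}")

run_cycle(1)  # one full turnover moves 3 Na+ out and 2 K+ in per ATP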
3D structures of several P-type ATPases, including the α1 NKA, have been resolved [36][37][38][39]. The overall structure of NKA is composed of ten transmembrane helices important for ion binding, occlusion, and movement, and three cytosolic domains, called the N-domain (nucleotide binding), P-domain (phosphorylation), and A-domain (actuator), that confer the ATP-hydrolyzing activity. Overall, the crystal structures are in agreement with the structures deduced from biochemical studies of the past 60 years. Interestingly, the recent resolution of several different CTS-bound NKAs also reveals that, although these compounds all inhibit ATPase activity, they can actually produce different structural perturbations [40,41].
In short, a lot has been learnt about the structure and function of NKA as an ion pump over the last 60 years. Moreover, we have gradually recognized that the NKA may be engaged in dynamic interactions with other membrane and cytosolic proteins because of the dynamic nature of the NKA conformational equilibrium and the large number of NKA molecules in the plasma membrane. Such interactions play at least three different roles in cell biology: (1) dynamic regulation of ionic concentrations, including Na⁺, K⁺ and consequently Ca²⁺, by regulating the pumping activity of NKA, the focus of the early years of investigation; (2) cellular signal transduction, in which NKA is a key player because of its direct interactions with signaling proteins; and (3) signal integration, with NKA organizing specific membrane microdomains and bridging different affecters and effectors together through its scaffolding function.
Na/K-ATPase and Signal Transduction
Cardiotonic steroids (CTS) include plant-derived digitalis compounds such as digoxin and ouabain, and vertebrate-derived aglycones such as bufalin and marinobufagenin (MBG) [42,43]. Digoxin has been used to manage congestive heart failure for over 200 years, and bufalin/MBG are active components of the traditional Chinese medicine "Chan Su". However, the digitalis-specific inhibition of NKA was not recognized until the discovery of NKA in the 1950s [1,17]. In cardiac myocytes, inhibition of NKA activity by CTS increases the intracellular Na⁺ concentration, which leads to the accumulation of intracellular Ca²⁺ through functional coupling to the Na⁺/Ca²⁺ exchanger (NCX). The resulting increase in Ca²⁺ concentration enhances the contractility of cardiac muscle and causes positive inotropy [42]. In patients with heart/kidney diseases, an increase in endogenous CTS has also been observed [44][45][46].
Na/K-ATPase: More Than a Pump
In addition to its effect on cardiac contraction, CTS were long ago recognized to play a role in cell growth regulation. In the 1970s, for instance, ouabain at low nM concentrations was found to regulate gene expression and the mitogen-induced differentiation and proliferation of lymphoblasts [3,47,48]. Taking into consideration that the IC₅₀ of ouabain is around 50-100 nM for the human α1 NKA [49][50][51], the effects of such low concentrations of ouabain on cell growth are unlikely to be due to substantial inhibition of the transmembrane movement of ions via the NKA, suggesting that an additional mechanism is responsible for CTS-induced changes in cell growth. Beyond cell growth regulation, picomolar concentrations of MBG can stimulate the synthesis of collagen in human dermal fibroblasts [52].
A series of studies from our laboratory published in the late 1990s and early 2000s revealed that CTS could stimulate protein tyrosine phosphorylation and a number of growth-related pathways in a cell type- and tissue type-dependent manner [53][54][55][56][57], which has now been largely confirmed by studies from other laboratories around the world [49,[58][59][60][61][62][63][64][65][66][67][68][69][70][71][72][73][74][75]. These findings suggest that NKA is an important signal transducer and that protein kinase cascades, rather than inhibition of ATPase activity, underlie the regulation of cell growth by CTS. At the time, the importance of NKA-mediated signal transduction had not been fully appreciated and was considered a "moonlighting" function. This was in part because of the following two unresolved albeit important issues. First, since NKA has both pumping and signaling functions, it was difficult to study signaling independently of pumping, especially in cardiac myocytes, where NKA is tightly coupled to other membrane transporters such as NCX [42,[76][77][78][79][80]. Second, the realization that NKA has no tyrosine kinase or phosphatase activity raised the question of how binding of ouabain to NKA stimulated protein tyrosine phosphorylation, which was required for the regulatory actions of ouabain on cell growth. Although direct protein interaction was speculated at the time, no experimental evidence, especially of the involvement of a tyrosine kinase/phosphatase, had been reported in the literature. These issues drove the next ten years of investigations and led to our current appreciation of the molecular basis of NKA-mediated signal transduction in cells.
Protein Interaction in Signal Transduction
It is well established that regulated protein interaction is a key to cellular signal transduction. The best-studied examples are those of G protein-coupled receptors and receptor tyrosine kinases (RTKs). Extracellular ligand binding to RTKs stabilizes receptor dimerization and causes trans-phosphorylation of the receptor. The phosphorylated tyrosine residues provide binding sites for Src homology 2 (SH2) domain- and phosphotyrosine binding domain-containing proteins, and propagate the downstream signaling [81,82].
Interestingly, early studies identified several NKA-interacting proteins such as ankyrin, adducin, and the FXYD family of proteins [83][84][85]. However, most of these, as well as some more recent studies, have focused on the role of such interactions in the regulation of NKA activity and trafficking. For example, FXYD family proteins are expressed in a tissue-specific manner and appear to act as a third subunit of the enzyme [86]. Although they are not required for the functional expression of the α/β NKA, FXYD proteins interact with and regulate NKA pumping activity [36,[87][88][89]. Interestingly, some interactions between FXYDs and NKA are regulated by membrane receptors. For example, FXYD1, also known as phospholemman (PLM), is a principal phosphorylation substrate of cAMP-dependent protein kinase A, at Ser68 (PKA), and of Ca²⁺/phospholipid-dependent protein kinase C, at Ser63, Ser68, and Thr69 (PKC) [90][91][92]. Unphosphorylated FXYD1 inhibits NKA through direct protein interaction [93][94][95]. In addition, a number of signaling proteins have been identified during studies of the hormonal regulation of NKA trafficking in kidney epithelial cells. For example, dopamine stimulates the recruitment of arrestin, spinophilin, GPCR kinase, and 14-3-3ε to the α1 NKA. In another example, the association of 14-3-3ζ with the α1 subunit facilitates the binding of PI3K to the α1 subunit, which subsequently leads to the endocytosis of the NKA [96,97]. Conversely, in response to angiotensin II, adaptor protein-1 attaches to the α1 subunit and facilitates the recruitment of the NKA to the plasma membrane [98]. Bcl-2 proteins have also been reported to interact directly with NKA [99]. These interactions are critical for the control of cell survival and apoptosis, and the ratio of pro-survival to pro-apoptotic proteins interacting with NKA may determine NKA function.
In view of our demonstration that NKA plays a role in signal transduction, the fact that protein interaction is a key to signal transduction prompted us to ask whether NKA is capable of regulating the function of its interacting proteins and, if so, whether NKA ligands can regulate such interactions and consequently activate cellular signaling events. The concerted efforts of many over the last ten years have yielded solid evidence supporting this hypothesis. Specifically, these studies have led to the discovery that the α1 NKA/Src complex is an important receptor through which CTS and other NKA ligands activate protein/lipid kinase cascades, generate ROS, and stimulate Ca²⁺ oscillations in a cell-specific manner (Figure 2). Moreover, new findings have suggested that α1 NKA may regulate such interactions in a conformation-dependent manner. We further speculate that the receptor NKA can actually adopt both active and inactive conformations, and that NKA ligands may stabilize either the active (agonists) or the inactive (inverse agonists) conformation to exert their regulatory effects on cells (Figure 2). Finally, recent studies have also revealed NKA as a potential signal integrator important for assembling cellular signalosomes and for the effective coupling of affecters and effectors.
Src Kinase in NKA-Mediated Signal Transduction
The first clue that Src kinase is important for NKA-mediated signal transduction came from the studies by Haas in early 2000 [56]. Src family kinases are membrane-associated non-receptor tyrosine kinases, and they play an essential role in the signal transduction pathways provoked by many extracellular stimuli such as growth factors and ligands of G protein-coupled receptors [100]. We and others have shown that α1 NKA regulates Src activity through a conformation-dependent interaction. It also plays an important role in Src targeting through a phosphorylation-dependent mechanism. Binding of CTS to this NKA/Src receptor complex leads to the activation of the associated Src, the recruitment of additional Src, and the initiation of the signal transduction processes (Figure 2) [101].
Evidence of NKA/Src Interaction
The following evidence supports the hypothesis that NKA and Src form a functional receptor. First, ouabain and other CTS stimulated protein tyrosine phosphorylation in many different types of cells, including cardiac myocytes, smooth muscle cells, and renal epithelial cells, to name a few [54,56,59,[102][103][104][105][106]. Second, upon ouabain stimulation, Src activation, as evidenced by its translocation from the cytosolic fraction to a Triton-insoluble fraction and an increase in Y418 phosphorylation (pY418) (but not a decrease in Y529 phosphorylation), was one of the earliest events [56]. Moreover, Src inhibitors blocked the ouabain-induced tyrosine phosphorylation and the ouabain-activated downstream signal pathways such as ERK. The CTS-induced cell growth effect could also be substantially attenuated by Src kinase inhibitors [56,103]. Third, genetic evidence also supports the requirement of Src in ouabain-induced signal transduction, because ouabain failed to increase protein tyrosine phosphorylation in SYF cells, in which the Src family kinases are knocked out. On the other hand, rescuing these cells with Src fully restored ouabain-induced signal transduction [107]. Fourth, NKA and Src were co-enriched in caveolar fractions in many cell types [107]. While immunofluorescence imaging analyses confirmed the co-localization of these two proteins, FRET analyses suggested a direct interaction between them [101]. Further evidence of direct interaction came from co-immunoprecipitation experiments, first reported by Haas et al. [56] and then confirmed by many others [59,[103][104][105][108][109][110]. However, Kaplan et al. reported a failure to co-immunoprecipitate NKA with Src in breast cancer cells and questioned whether NKA interacts with Src [111]. Although this remains to be resolved, it is important to note that the Kaplan lab conducted the immunoprecipitation using a different anti-α1 antibody than other labs. Interestingly, the polyclonal antibody used by the Kaplan lab was raised against the fragment of α1 NKA where the putative Src binding site resides, which ironically could provide further support for a direct interaction. In addition to immunoprecipitation analyses, GST-fused fragments of the intracellular domains of α1 NKA also pulled down Src from cell lysates, indicating the existence of a direct and specific interaction between these two proteins [101]. Finally, many groups have demonstrated that the interaction between these two proteins is actually regulated by CTS [56,59,101,[103][104][105][108][109][110]. For example, ouabain increased the co-immunoprecipitation of these two proteins, and this increase was sensitive to Src inhibitors [56]. The direct interaction between α1 NKA and Src was further demonstrated by co-precipitation studies using purified dog/pig kidney NKA and fully active but unphosphorylated Src [101]. Functionally, this interaction prevented the Src Y418 phosphorylation that is required for full activation of Src kinase activity. Using GST pull-down analyses of different cytosolic domains of α1 NKA and functional domains of Src, two putative Src binding sites have been mapped. One lies between the second cytosolic domain (CD2) of the α1 subunit and the Src SH2 domain; the other lies in the N domain of α1, which interacts with the Src kinase domain. The latter interaction inhibits Y418 phosphorylation. Again, the polyclonal antibody used in the co-immunoprecipitation by the Kaplan lab was directed against this large fragment of α1 NKA.
Importantly, ouabain was shown to release the interaction between purified α1 NKA and the Src kinase domain without affecting the binding of SH2 [101,112]. Further mapping of this interaction identified the 20-amino-acid NaKtide sequence in the N domain of α1 NKA as responsible for the direct interaction between α1 and the Src kinase domain [112]. The synthesized NaKtide mimics NKA and is capable of interacting with and inhibiting Src. Moreover, when NaKtide is converted into pNaKtide by adding the 13-amino-acid TAT sequence to its N-terminus, it becomes cell permeable. Functional studies demonstrate that pNaKtide also mimics NKA and is effective in inhibiting the NKA-interacting pool of Src in an ATP concentration-independent manner. Consequently, it blocks ouabain-induced activation of Src and ERK and hypertrophic growth in cardiac myocytes [112]. It also specifically suppresses cell proliferation in cancers in which Src activity is not efficiently inhibited by α1 NKA [113].
Identification of CD2 as an Important Src SH2 Ligand
Although the interaction between the Src SH2 domain and α1 NKA is not reduced upon ouabain binding, it is important in the recruitment and targeting of Src. In LLC-PK1 cells transfected with α1 CD2, the exogenously expressed CD2 competitively bound Src kinase and prevented Src from being targeted to various effectors. Therefore, as an exogenous Src SH2 ligand, α1 CD2 increased global Src activity but blocked Src-mediated pathways, including ouabain-induced signal transduction [114]. In short, we and others have generated strong evidence of an α1 NKA/Src interaction and demonstrated the importance of this interaction in CTS-induced signal transduction. However, it is important to note that Karlish and his colleagues have questioned this interaction based on their work with purified recombinant human α1 NKA expressed in yeast and Src kinase expressed in bacteria [115]. A major difference is noted between the study by Karlish and our investigation. We used non-phosphorylated Src in our studies, whereas Karlish used bacteria-expressed Src, which is known to be phosphorylated at both the Y418 and Y529 sites. Phosphorylation of these sites affects the activity of Src and its binding to other proteins [116]. For example, it is known that SH2 plays an important role in directing the interaction between Src and its partners [117]. This is also true for the NKA/Src interaction, as we reported recently [114]. The heterogeneous nature of bacteria-expressed Src in both Y418 and Y529 phosphorylation makes it difficult to measure the potential interaction involving the SH2 domain, especially if the interaction has low affinity. To this end, we and others have demonstrated that inhibition of Src by PP2 abolished the ouabain-induced increases in Src binding, reaffirming an important role of Src-mediated phosphorylation and SH2-directed interaction.
The Identification of a Mutant α1 NKA That Pumps but Is Null in Src Interaction
To further verify that the NaKtide sequence is engaged in a direct interaction with Src, we performed mutagenesis studies [118]. These studies indicated that the N-terminal helical structure of NaKtide is important for Src binding and inhibition. Moreover, W423, L424, and R427 appear to be in direct contact with the Src kinase domain. This conclusion was further supported by mutations (e.g., A420P and A425P) that disrupt the formation of the helical structure. Significantly, when A420P or A425P was introduced into the full-length α1 NKA, we found that the mutated NKA retained full pumping capacity but failed in Src interaction and regulation [118]. Thus, we now have two mutant α1 NKAs that work as pumps but not as signaling receptors.
NKA/Src Interactions Are Isoform-Specific
Sequence comparison shows that the NaKtide sequence is highly conserved in mammalian α1 NKA. However, the corresponding NaKtide sequences in the α2 and α3 isoforms are different from that in α1. Interestingly, these differences are also conserved within both the α2 and α3 sequences (Figure 3). In view of the importance of the NaKtide sequence in α1 NKA-mediated Src interaction, we generated α2- and α3-expressing mammalian cells using a knock-down and rescue protocol, and demonstrated that both the α2 and α3 NKA isoforms lack Src-interacting capacity [119,120]. As such, they do not carry out Src-dependent signal transduction upon ouabain binding. However, the α3 isoform differs from α2 because it does signal in a Src-independent manner [119]. These new findings reveal a major functional difference among the three NKA isoforms and reinforce the importance of the Src-binding capacity of α1 NKA in the regulation of cell signal transduction. In addition, the observed difference in their ability to conduct signal transduction provides strong evidence that CTS-induced activation of protein kinases is unlikely to be due to changes in intracellular ATP secondary to the inhibition of pumping, as advocated by some [121,122]. Finally, the Lingrel lab and others have generated strong evidence that different NKA isoforms do exert distinct regulation of animal physiology (e.g., muscle contraction). Therefore, our new findings also suggest the importance of the lack of Src binding for α2- and α3-specific cellular signaling functions.
NKA/Src Complex as a Receptor
Mechanistic investigations over the last fifteen years have revealed a novel molecular mechanism of NKA-mediated signal transduction. As illustrated in Figure 2, the receptor NKA interacts with several proteins to perform cell-specific signal transduction, including Raf/MEK/ERK, PLC/PKC, PI3K/Akt, and Ca2+ signaling and the generation of ROS. One of the most important signaling partners trans-activated by the NKA/Src receptor complex is the EGF receptor, which is recruited and phosphorylated at several phosphorylation sites other than its major phosphorylation site Y1173 when cells are exposed to CTS [56]. The activated EGF receptor then recruits the adaptor protein Shc, which in turn binds the protein complex of Grb2 and SOS. SOS is a guanine nucleotide exchange factor that activates Ras by exchanging GDP for GTP. Activated Ras then stimulates the Raf/MEK and p42/44 ERK cascade [56,57]. Activation of this cascade by CTS appears to occur in most cell types [57,59,104,109,123]. It is of interest to note that activated EGFR is also a critical element in the signal transduction networks of cytokines, H2O2, and pathways utilizing G protein-coupled receptors [51]. However, the number of membrane α1 NKA molecules in most cells is at least 100 times that of G protein-coupled receptors. Thus, it is reasonable to speculate that α1 NKA may regulate the Src-dependent signaling pathways of G protein-coupled receptors. Moreover, in view of the critical role of EGFR in cancer, it would be of great importance to further dissect α1 NKA-mediated regulation of EGFR and its potential role in cancer biology.
Several important features of this newly appreciated signaling mechanism are worthy of further discussion. First, the activation of protein and lipid kinase cascades and the generation of second messengers ensure the formation of a positive feed-forward loop that could amplify CTS-provoked signal transduction, and also allow signal diversification and transcriptional and translational regulation of gene expression [52,53,57]. This is best exemplified by the recruitment of additional signaling partners into the receptor complex [57,107,124], and by ROS-induced signal amplification (Figure 2) [55,125,126]. In accordance, it explains how endogenous CTS could exert profound physiological effects at concentrations well below 1/100th of the IC50 [127]. For example, it has been reported that ouabain at 10 to 100 nM was sufficient to stimulate mouse or rat cardiac fibroblasts, resulting in increased collagen production [52,128]. Similarly, such low concentrations of CTS were found to elicit Ca2+ oscillation in both mouse and rat kidney epithelial cells where only the ouabain-resistant α1 NKA is expressed [129]. It is also important to point out that the fetal bovine serum we all use in cell culture may contain a sufficient amount of CTS to promote cell growth [130]. Finally, signal amplification similar to that in rodents has also been observed in human cells [52].
Second, because NKA contains a large number of motifs both intracellularly and extracellularly, it would not be a surprise if NKA performed many more regulatory functions than those outlined in the scheme (Figure 2). Moreover, it is likely that many of these pathways cross-talk with each other and exert cell-specific regulation depending on the context of the available signaling constituents. This is exemplified by the fact that NKA could regulate PI3K signaling in Src-knockout cells, whereas inhibition of Src also attenuates ouabain-induced PI3K signaling in normal cells [131–134]. Similarly, Src is also involved in ouabain-induced Ca2+ oscillation by affecting the interaction between the IP3 receptor and α1 NKA in renal epithelial cells [129].
Third, this scheme provides a framework to begin addressing the role of NKA-mediated signal transduction in animal physiology. To this end, recent animal studies have demonstrated the importance of this signaling mechanism in a wide array of physiological processes, including renal salt handling, vascular activity, cardiac growth, and embryonic development, to name a few [106,135–143]. These new findings call for a re-examination of CTS physiology and an exploration of the potential new pharmacology of exogenous CTS [144]. In the past, most pharmacological studies of CTS were focused on their ability to inhibit NKA. As such, they were used as NKA inhibitors to increase myocardial contractility. Even in this application, clinical studies have demonstrated that the use of lower, but not higher, doses of digoxin is associated with a decrease in mortality in patients with congestive heart failure [145]. Interestingly, recent studies have shown that the activation of NKA signaling, but not inhibition of cellular pump capacity, by CTS is capable of protecting the heart from ischemia/reperfusion injury in rats [146–148]. Furthermore, CTS at doses lower than 1/100th of the IC50 for NKA activity are effective stimuli of collagen synthesis, suggesting the potential use of these compounds in skin care and wound healing [52,128]. It is equally important to recognize that CTS can also inhibit cell growth in a wide variety of cancer cell lines, such as prostate, lung, and colon cancer cells and neuroblastoma cells, by stimulating several different pathways, including apoptosis- and autophagy-related processes [61,62,66,70,71,104,149–151]. On the other hand, endogenous CTS may play an important role in the pathogenesis of autosomal dominant polycystic kidney disease (ADPKD) by activating the Src/EGF receptor/ERK pathways [109,152,153].
Finally, it has been reported that the endocytosis of the NKA/Src receptor complex, like that of many other membrane receptors, is stimulated by its ligands such as CTS [131,154,155]. This occurs via clathrin-coated pits and early and late endosomes, and depends on the activation of Src and PI3K. Although it remains to be further investigated, it is conceivable that CTS-induced endocytosis of the receptor NKA/Src could represent a pathway of signal termination. Of course, it might also provide an effective way of communicating with intracellular compartments during the signal transduction process [156].
Conformation-Dependent Regulation of Src by α1 NKA, a New Hypothesis
The essence of receptor-mediated signal transduction is the intrinsic ability of a receptor to adopt both active and inactive conformational states [157,158]. Several important but seemingly unrelated studies have led us to test this important concept (hypothesis) in NKA-mediated signal transduction. The first clue actually came from studies of the purified NKA/Src interaction. As reported by Tian et al. [101], purified kidney α1 NKA inhibited Src Y418 phosphorylation, and the addition of ouabain restored Y418 phosphorylation only in the presence of α1 NKA. Interestingly, while vanadate also inhibited the ATPase activity of α1 NKA, it showed minimal effect on Y418 phosphorylation at concentrations that produced a similar degree of NKA inhibition as ouabain. Most significantly, ouabain was able to further stimulate Y418 phosphorylation in the presence of vanadate that caused complete inhibition of α1 NKA. Because it is known that vanadate facilitates ouabain binding to α1 NKA, these findings suggest that α1 NKA may interact with and regulate Src activity in a conformation-dependent manner. The second line of evidence came from studies of xanthone derivatives. These compounds are potent and specific inhibitors of α1 NKA. However, they show no ouabain-like effect on the α1 NKA/Src interaction [159]. This led to the studies of Ye et al., testing whether the α1 NKA/Src interaction can be modeled on the Albers-Post scheme [134]. By using well-characterized conformation-stabilizing chemicals as well as an α1 NKA mutant defective in conformational transitions, we found strong evidence that α1 NKA, like G protein-coupled receptors, can adopt both active and inactive conformations to interact with and regulate Src. This new framework, taken together with our appreciation of the Albers-Post scheme, has led us to deduce that α1 NKA may represent a broad cell signaling mechanism. As such, many ligands of α1 NKA, including CTS and intracellular/extracellular ions, may alter cellular signal transduction through a Src-dependent process (Figure 2).
NKA/Src/ROS Loop and Disease Progression
ROS participate in various cellular activities [160–165]. Our early studies demonstrated that ouabain stimulates ROS generation in a Ras-dependent way via NKA/Src signaling [55,166]. On the other hand, modification of α1 NKA by ROS has been well documented [125,167–172], and such modification can directly alter the conformational states of α1 NKA [173]. Our new appreciation of the α1 NKA/Src signaling mechanism has prompted several studies of whether ROS can act similarly to CTS on the NKA/Src complex. These studies have led to the following observations. First, an increase in H2O2 generation is sufficient to cause the activation of Src and ERK and to stimulate α1 NKA endocytosis in LLC-PK1 cells. Disruption of the NKA/Src interaction by either pNaKtide or the expression of the Src-interaction-null mutant (A420P) abolishes H2O2-induced Src/ERK activation [126]. On the other hand, ouabain stimulates the generation of ROS, which results in direct carbonylation of Pro222 and Thr224 in the α1 subunit of NKA [125]. Moreover, inhibition of this carbonylation by antioxidants attenuates ouabain-induced activation of protein kinase cascades. Thus, it is proposed that NKA/Src and ROS form a signal amplification loop, allowing not only CTS but also ROS to generate signals from the NKA.
In view of the well-established role of ROS stress in the progression of many chronic diseases, we and others have recently explored whether the newly appreciated NKA/Src/ROS loop is essential for unregulated ROS signaling. These studies have demonstrated that this signaling loop is indeed activated and plays an important role in the development of atherosclerosis, renal inflammation-induced tissue damage, and metabolic syndrome, as well as uremic cardiomyopathy [136,139,174].
NKA/Src Interaction as a Drug Target
The rationale for targeting NKA-mediated signal transduction to develop new therapeutics has been discussed [6,43,144]. Recent in vitro and in vivo studies have demonstrated the feasibility of targeting the NKA/Src interaction, and the effectiveness of an inhibitor of this interaction, pNaKtide, as a potential therapeutic for cardio-renal diseases and metabolic syndrome. As discussed above, pNaKtide is composed of the NaKtide sequence (a 20-amino-acid peptide) from human α1 NKA and a TAT leader (a 13-amino-acid peptide). The TAT leader is a so-called cell-penetrating peptide that helps large molecules cross the cell membrane. pNaKtide not only readily passes the cell membrane; it also resides, like α1 NKA, in the plasma membrane, making it highly specific as an inhibitor of the NKA/Src complex. Moreover, it is potent, as 0.1 to 1 µM is sufficient to completely block CTS- or ROS-induced signal transduction in cell cultures [112,126]. It also has a good safety profile, as no cellular toxicity was observed up to 20 µM in three different cell lines. Remarkably, it is readily taken up in vivo by the heart, kidney, liver, and fat tissues, and shows a plasma membrane distribution as well [140,175,176]. Significantly, pNaKtide was effective in blocking ROS amplification and α1 NKA-mediated signal transduction in animals fed a high-fat diet, and consequently attenuated metabolic syndrome [140]. Moreover, recent studies have further demonstrated its effectiveness as an inhibitor of ROS amplification and NKA/Src signaling in animal models of chronic kidney failure, Western diet-induced liver damage, and atherosclerosis [175,176]. For example, in animal models of uremic cardiomyopathy induced by 5/6 nephrectomy, it not only prevented cardiac hypertrophy and fibrosis but also improved cardiac function and hematocrit. Remarkably, it was also capable of reversing cardiac lesions in a dose-dependent manner [140,175].
Conclusions and Perspectives
Studies from many laboratories over the past 20 years have documented that NKA has an ion-pumping-independent receptor function that confers a ligand-like effect of CTS on protein/lipid kinases, intracellular Ca2+ oscillation, and ROS generation. Direct protein interactions between NKA and its partners are responsible for this newly appreciated signaling mechanism. Meanwhile, our appreciation of this signaling mechanism has also evolved from "moonlighting" to an essential pathway in animal physiology and disease progression. It is important to recognize that the aforementioned investigations only mark the beginning of a fascinating field. In addition to the continued effort of many in defining the molecular mechanism of NKA-mediated signal transduction and isoform specificity and in identifying cell/tissue-specific signalosomes, the following two areas of research may further advance our understanding of NKA. First, efforts have been and will continue to be made to generate new animal models, including transgenic animals with specific defects in NKA signaling, and tool drugs targeting NKA/Src or other NKA/partner interactions. These new animal models and tool drugs will help advance our understanding of NKA-mediated signal transduction in animal physiology and disease progression, provide further validation of NKA-mediated protein interaction as a druggable target, and generate lead candidates for the development of clinically useful drugs.
Second, NKA, as discussed, interacts with many proteins. Unlike other membrane receptors, it is highly expressed in most cells. As such, it may also work as an important scaffold. As an example, it is of interest to look at the interaction of α1 NKA with caveolin-1, and the Src-dependent interplay among α1 NKA, caveolin-1, and cholesterol. Both caveolin-1 and cholesterol are important structural components of caveolae, which are flask-shaped vesicular invaginations of the plasma membrane [154,177–179]. Caveolae are known to play an important role in cellular signal transduction. The α1 subunit of NKA contains a highly conserved caveolin-binding motif at the N-terminus and brings caveolin-1 to its regulatory kinase Src [107,179]. Reduction in the expression of α1 NKA stimulates Src, resulting in an increase in caveolin-1 Y14 phosphorylation. This leads to a reduction of membrane caveolin-1 and cholesterol, and consequently a decrease in the number of caveolae [179,180]. On the other hand, reduction of membrane cholesterol can activate Src in an α1 NKA-dependent manner, leading to an increase in the endocytosis of α1 NKA. Thus, this Src-dependent interplay may establish a highly efficient feed-forward mechanism that detects changes in cellular cholesterol and/or α1 NKA and then alters the structure and function of the plasma membrane [107,123,178,179,181–185]. In accordance, studies have shown that the α1 NKA/caveolin-1 interaction is essential not only for CTS-induced activation of protein/lipid kinase cascades but also for α1 NKA to interact with other signaling proteins. One of these is the IP3 receptor (Figure 2). This latter interaction may also depend on other scaffolding proteins such as ankyrin. Nevertheless, the interaction between caveolar α1 NKA and the ER IP3R allows the formation of an efficient Ca2+ signaling machine by tethering affectors (e.g., membrane receptors), signal transducers (e.g., Src and phospholipase C), and effectors (e.g., Ca2+ channels) together. Consequently, α1 NKA is necessary not only for CTS- but also for purinergic stimulation of Ca2+ oscillation [107,124,186]. Clearly, much remains to be learned about the potential role of α1 NKA as a scaffold and its interplay with other receptors in animal physiology and disease progression, which could open up new opportunities for the discovery of other NKA-specific drug targets.
Evaluating Gelatin-Based Films with Graphene Nanoparticles for Wound Healing Applications
In this study, gelatin-based films containing graphene nanoparticles were obtained. The nanoparticles were taken from four commercial graphene nanoplatelets with different surface areas (150 m²/g, 300 m²/g, 500 m²/g, and 750 m²/g) obtained under different conditions. Their morphology was observed using SEM in STEM mode; porosity, Raman spectra, and elemental composition were determined; and biological properties, such as hemolysis and cytotoxicity, were evaluated. Then, the selected biocompatible nanoparticles were used to modify gelatin films at 10% concentration. As a result of solvent evaporation, homogeneous thin films were obtained. The surface properties, mechanical strength, antioxidant activity, and water vapor permeation rate were examined to select the appropriate film for biomedical applications. We found that the addition of graphene nanoplatelets had a significant effect on the properties of the materials, improving surface roughness, surface free energy, antioxidant activity, tensile strength, and Young's modulus. As the most favorable candidate for wound dressing applications, we chose a gelatin film containing nanoparticles with a surface area of 500 m²/g.
Introduction
The development of civilization, a sedentary lifestyle, poor nutrition, and the accompanying stress definitely have a negative impact on the quality of life in today's society. As our health deteriorates, humans become more vulnerable to various diseases and other problems. Wounds, often resulting from mechanical or thermal injuries, are defined as damage or tears to the skin's surface, and are treated with various dressings [1].
Natural polymers are biodegradable, biocompatible, and non-toxic [2,3], which is of great importance to their use in medicine and as dressings for damaged skin. Biopolymer dressings are necessary to heal skin injuries and reconstruct damaged tissues [4]. Among the various dressings used in the biomedical area, biopolymers have shown significant potential for application in effective wound healing by providing a moist environment at the injury interface and enabling oxygen exchange between tissues and the external environment [5]. Additionally, this type of dressing is often used as a carrier of active compounds, such as antibacterial agents, anti-inflammatory substances, etc., which additionally support tissue regeneration [6].
Graphene is an emerging material in electronic and energy applications, including acting as an electrode material in batteries, supercapacitors, and fuel cells. Despite the growing interest in graphene research, some specific domains still need to be adequately explored. Bio-oriented (medicine, cosmetics, etc.) applications of graphene belong to such underestimated fields. According to some previous studies [7], 3D-structured graphene flakes exhibited biocompatibility with blood cells (DPPH tests, blood compatibility), which opens the field of potential application in medicine. Graphene in its pristine form is a 2D material; however, this geometrical form is not preferred in some experiments, such as drug delivery or biofiltration. For such purposes, a well-developed pore structure is needed, emphasizing the importance of all measures converting loose graphene flakes into a porous material with a permanent pore structure. Some methods have been established [8,9] which result in the surface area and pore structure reaching more desirable values (ca. 1000 m²/g and above 1 cm³/g, respectively).
Gelatin is a well-known biopolymer of animal origin, obtained by partial hydrolysis of collagen [10]. Due to its biocompatibility, biodegradability, and non-toxicity (the material is recognized as safe by the US Food and Drug Administration (US FDA) [11]), it is widely used in various fields of medicine, among others in wound dressings, tissue engineering, and surgical adhesives. As knowledge has developed, scientists have always looked for techniques that accelerate the tissue regeneration process, which is why adding bioactive nanoparticles to biopolymer constructs is becoming more and more common. For example, biopolymers with added selected metallic nanoparticles significantly improve the effectiveness of antibacterial agents by inhibiting the possibility of wound infection, significantly improving the healing process of damaged tissue [4]. Furthermore, the use of graphene nanoparticles can be extensive. The potential use of graphene in modern technology is influenced by aspects such as the filtering properties, strength, and flexibility of a material with a two-dimensional structure. Graphene-based materials are nanomaterials that exhibit good biocompatibility and broad-spectrum antibacterial activity, and they can interact with other biological molecules, such as proteins, enzymes, and other factors [12]. However, there are also some reports of their cytotoxicity, which depends mainly on size, concentration, and exposure duration [13]. This mechanism is attributed to the generation of reactive oxygen stress, which can cause DNA damage or disturb cell signaling [14]. For example, Shvedova et al. [15] reported that carbon derivatives may result in skin irritation and disease after cutaneous exposure. Further, it is generally accepted that graphene shows higher cytotoxicity than graphene oxide, related mainly to its aggregation tendency [16].
The aim of this study was to obtain and characterize novel gelatin-based materials in thin-film form, modified with selected graphene nanoparticles, for application as wound dressings. The characterization of the graphene nanoparticles was carried out, and their biocompatibility with human blood and cells was determined. Nanoparticles without a toxic effect were selected and added to gelatin to fabricate thin films.
Materials
Commercial materials for further investigation, i.e., graphene-type powder materials and porcine-derived gelatin, were delivered by Sigma-Aldrich (Poznań, Poland). Unless otherwise noted, reagents for hemo- and cytocompatibility studies came from Merck KGaA (Darmstadt, Germany). For better understanding, symbolic names for the obtained samples were proposed according to the general formula X-Y, which describes the type of graphene material: low surface area GF15 (150 m²/g), medium surface area GF30 (300 m²/g) and GF50 (500 m²/g), and high surface area GF75 (750 m²/g).
Graphene Nanoparticle Characterization
A scanning electron microscope (SEM, 1430 VP, LEO Electron Microscopy Ltd., Oberkochen, Germany), capable of working in STEM mode (detecting BF and DF), was applied to determine the structure of the investigated materials. Surface area and porosity studies were performed by means of the widely approved method of low-temperature (−196 °C) nitrogen adsorption. An automatic sorptometer, ASAP 2010 (Micromeritics, Norcross, GA, USA), was used for this purpose. Each analysis was preceded by high-temperature desorption in a vacuum at 200 °C for 12 h. All determined nitrogen adsorption isotherms were considered type II according to the IUPAC. In such a case, it is assumed that the nitrogen adsorption follows the monolayer-multilayer mechanism.
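To make the surface-area determination concrete, the following is a minimal sketch of the standard BET analysis of such an isotherm. The isotherm points, the helper name bet_surface_area, and the use of the conventional linear range 0.05 < p/p0 < 0.30 are illustrative assumptions, not details taken from this study.

```python
import numpy as np

def bet_surface_area(p_rel, v_ads):
    """Estimate the BET surface area (m^2/g) from a nitrogen adsorption
    isotherm: p_rel = p/p0 values, v_ads = adsorbed volume in cm^3(STP)/g.
    Only the conventional linear BET range 0.05 < p/p0 < 0.30 is used."""
    p_rel, v_ads = np.asarray(p_rel), np.asarray(v_ads)
    mask = (p_rel > 0.05) & (p_rel < 0.30)
    x = p_rel[mask]
    # Linearized BET equation: x/(v(1-x)) = (c-1)/(vm*c) * x + 1/(vm*c)
    y = x / (v_ads[mask] * (1.0 - x))
    slope, intercept = np.polyfit(x, y, 1)
    vm = 1.0 / (slope + intercept)     # monolayer capacity, cm^3(STP)/g
    n_m = vm / 22414.0                 # moles of N2 per gram of sample
    sigma = 0.162e-18                  # m^2, cross-section of one N2 molecule
    return n_m * 6.022e23 * sigma      # m^2/g

# Hypothetical isotherm points for demonstration only
p = [0.06, 0.10, 0.15, 0.20, 0.25, 0.29]
v = [40.0, 44.0, 48.0, 51.5, 55.0, 58.0]
print(f"S_BET ~ {bet_surface_area(p, v):.0f} m^2/g")
```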
In Vitro Biocompatibility
The in vitro studies on hemo- and cytocompatibility of nanoparticles were conducted on red blood cells (RBCs) and fetal osteoblast cells (hFOB 1.19, ATCC, Manassas, VA, USA) of human origin. To determine the number of cells, a hemocytometer Superior CE (Marienfeld, Lauda-Königshofen, Germany) was used. Before testing, powders were sterilized through 30 min of UV light exposure.
Hemocompatibility
RBCs were isolated and fractionated according to the standard protocol [14], as a by-product from buffy coats obtained during blood donation from healthy volunteers at the Regional Centre in Gdańsk (under the approval of the Regional Bank Review Board, with institutional permission M-073/17/JJ/11). RBCs (3 × 10⁹ cells/mL) were incubated with the nanoparticle powders (n = 3; 100 mg/3 mL) at 37 °C for up to 24 h. Then, the suspensions were centrifuged for 3 min at 100× g at room temperature to obtain supernatants. Hemolysis (expressed as a percentage) was measured using an Ultrospect 3000pro spectrophotometer (Amersham-Pharmacia-Biotech, Cambridge, UK) at a 540 nm wavelength. For a positive control, RBCs were treated with 0.2% Triton (i.e., 100% hemolysis), while for a negative control, RBCs were incubated without nanoparticles. According to the literature, materials resulting in hemolysis below 2% are nonhemolytic [17].
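As an illustration of how such a hemolysis percentage is conventionally derived from the 540 nm readings, here is a minimal sketch; the absorbance values and sample names are hypothetical.

```python
def hemolysis_percent(abs_sample, abs_negative, abs_positive):
    """Percent hemolysis of a sample relative to untreated RBCs (negative
    control) and 0.2% Triton-lysed RBCs (positive control, 100% hemolysis)."""
    return 100.0 * (abs_sample - abs_negative) / (abs_positive - abs_negative)

# Hypothetical A540 readings for demonstration only
neg, pos = 0.05, 1.80
for name, a540 in [("GF15", 0.42), ("GF30", 0.12), ("GF75", 0.08)]:
    h = hemolysis_percent(a540, neg, pos)
    # Materials below 2% hemolysis are classified as nonhemolytic [17]
    print(f"{name}: {h:.1f}% -> {'nonhemolytic' if h < 2 else 'hemolytic'}")
```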
Cytocompatibility
For the study, extracts from the tested nanoparticles (n = 3; 100 mg/1.5 mL) were prepared through a direct extraction method, according to ISO 10993-5 [18]. The osteoblast cells (hFOB 1.19) were grown in a culture medium based on Ham's F12 Medium and Dulbecco's Modified Eagle's Medium (without phenol red) in the proportion 1:1, containing L-glutamine (1 mmol/L), geneticin (G418; 0.3 mg/mL), and 10% fetal bovine serum. The cell culture was carried out at 37 °C in a humidified atmosphere with 5% CO2. Then, cells at a density of 12 × 10³ were seeded on a 96-well plate and incubated until a confluent layer was obtained. Next, the culture medium was exchanged for media containing the tested extracts. The viability of hFOB cells was evaluated after 24 h using the MTT assay (3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide; 0.60 mmol/L), measured spectrophotometrically at a 570 nm wavelength. The results were presented as a percentage of change relative to living cells grown on the tissue culture plate (TCP, 100%). Further, the LDH assay, which determines cell death during culture, was performed by directly measuring the NAD oxidation of lactate dehydrogenase (LDH) spectrophotometrically at a 340 nm wavelength. The results were presented as a percentage of the total LDH released from the cells grown on TCP. According to the ISO standard, reducing cell viability by more than 30% is considered a cytotoxic effect [18].
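For illustration, the percent-of-control normalization behind these readouts, together with the ISO 10993-5 threshold, can be sketched as follows; the absorbance values are hypothetical.

```python
def percent_of_control(values, control_mean):
    """Express raw assay readings as a percentage of the TCP control."""
    return [100.0 * v / control_mean for v in values]

# Hypothetical MTT absorbances (570 nm); the TCP control defines 100% viability
tcp_mean = 0.95
viability = percent_of_control([0.93, 0.97, 0.90], tcp_mean)
mean_viability = sum(viability) / len(viability)
# ISO 10993-5: a viability reduction of more than 30% (i.e., below 70% of
# the control) is considered a cytotoxic effect [18]
verdict = "cytotoxic" if mean_viability < 70 else "non-cytotoxic"
print(f"viability = {mean_viability:.0f}% -> {verdict}")
```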
Film Preparation and Characterization
Gelatin was dissolved in distilled water at 1 w/w% concentration. Selected graphene nanoparticles were added to the gelatin solution at 10 w/w% concentration, which is the lowest concentration that allows the creation of homogeneous films. The mixture was stirred with a magnetic stirrer for 1 h (400 rpm) and then placed in plastic holders (40 mL per 10 cm × 10 cm) to evaporate the solvent (room conditions, 72 h). Thin films with 0.017 mm (±0.003) thickness, measured with a gauge (Sylvac, Valbirse, Switzerland), were obtained. A gelatin film without graphene nanoparticles was studied as a control. Further, the films are denoted Gel_X-Y.
Scanning Electron Microscope (SEM)
A scanning electron microscope (SEM; LEO Electron Microscopy Ltd., Cambridge, UK) was used to observe the surface and cross-section morphology of the obtained films.
Mechanical Properties
The mechanical properties were tested using a universal testing machine (Shimadzu EZ-Test EZSX, Kyoto, Japan) in stretching mode (initial force at 0.1 MPa, crosshead speed fixed at 5 mm/min, n = 10) [19]. The samples were cut using a paddle-shaped stencil and a hand press. The mechanical parameters, such as Young's modulus, maximum tensile strength, and elongation at break, were calculated using the Trapezium X Texture program.
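For illustration, a minimal sketch of how these three parameters can be extracted from a stress-strain curve is shown below; the function name, the assumed linear-region limit, and the synthetic curve are illustrative, not the Trapezium X implementation.

```python
import numpy as np

def tensile_parameters(strain, stress, linear_limit=0.01):
    """Young's modulus (slope of the initial linear region), maximum tensile
    strength, and elongation at break from a stress-strain curve.
    strain is dimensionless, stress in MPa."""
    strain, stress = np.asarray(strain), np.asarray(stress)
    lin = strain <= linear_limit
    modulus = np.polyfit(strain[lin], stress[lin], 1)[0]   # MPa
    tensile_strength = stress.max()                         # MPa
    elongation_at_break = 100.0 * strain[-1]                # % at failure
    return modulus, tensile_strength, elongation_at_break

# Hypothetical curve: linear rise to a plateau, then fracture at the last point
eps = np.linspace(0, 0.04, 50)
sig = np.minimum(900 * eps, 25) * (eps < 0.04)
E, uts, eb = tensile_parameters(eps, sig)
print(f"E = {E:.0f} MPa, UTS = {uts:.1f} MPa, elongation = {eb:.1f}%")
```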
Antioxidant Activity
The antioxidant properties of the films were determined using the 2,2-diphenyl-1-picrylhydrazyl reagent (DPPH, free radical, 95%; Alfa Aesar, Karlsruhe, Germany). Samples (1 cm × 1 cm) of each film were placed in a 24-well plate, filled with 2 mL of DPPH solution (250 µM solution in methyl alcohol), and left without exposure to light for 0.5 h. The absorbance of the samples (AbsPB) and the control (AbsDPPH) was measured spectrophotometrically at 517 nm (UV-1800, Shimadzu, Reinach, Switzerland). The radical scavenging assay (RSA) was calculated from the standard formula:

RSA (%) = ((AbsDPPH − AbsPB) / AbsDPPH) × 100
Roughness of Surface
The surface roughness of the films (1 cm × 1 cm) was analyzed at room temperature using a microscope with a scanning SPM probe of the NanoScope MultiMode type (Veeco Metrology, Inc., Santa Barbara, CA, USA), which operated in tapping mode. Two parameters, the root-mean-square roughness (Rq) and the arithmetic mean roughness (Ra), were measured (n = 5) using the Nanoscope v6.11 software (Bruker Optik GmbH, Ettlingen, Germany).
Surface Free Energy
In this experiment, the contact angles of glycerin or diiodomethane were measured at a constant temperature using a goniometer equipped with a drop shape analysis system (DSA 10 Control Unit, Krüss, Hamburg, Germany). The surface free energy IFT(s) and its polar IFT(s,P) and dispersive IFT(s,D) components were calculated using the Owens-Wendt method.
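As a sketch of the Owens-Wendt calculation, the following solves the two-liquid system for the dispersive and polar components of the solid. The liquid surface-tension components are typical literature values and the contact angles are hypothetical, so this is an illustration rather than the exact procedure used here.

```python
import numpy as np

# Assumed literature surface-tension components (mN/m) of the test liquids:
# (total, dispersive, polar). Typical textbook values, not from this study.
LIQUIDS = {
    "glycerol":      (63.4, 37.0, 26.4),
    "diiodomethane": (50.8, 50.8, 0.0),
}

def owens_wendt(theta_deg):
    """Solve for the dispersive and polar components of the solid surface
    free energy from contact angles (degrees) of the two test liquids."""
    A, b = [], []
    for liquid, theta in theta_deg.items():
        g_total, g_d, g_p = LIQUIDS[liquid]
        # Owens-Wendt: g_l(1+cos(theta))/2 = sqrt(g_s^d g_l^d) + sqrt(g_s^p g_l^p)
        A.append([np.sqrt(g_d), np.sqrt(g_p)])
        b.append(g_total * (1 + np.cos(np.radians(theta))) / 2)
    x = np.linalg.solve(np.array(A), np.array(b))  # x = [sqrt(g_s^d), sqrt(g_s^p)]
    g_s_d, g_s_p = x[0] ** 2, x[1] ** 2
    return g_s_d + g_s_p, g_s_d, g_s_p

# Hypothetical contact angles for demonstration only
total, disp, polar = owens_wendt({"glycerol": 75.0, "diiodomethane": 42.0})
print(f"IFT(s) = {total:.1f}, IFT(s,D) = {disp:.1f}, IFT(s,P) = {polar:.1f} mJ/m^2")
```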
Water Vapor Permeation Rate (WVPR)
Dried anhydrous calcium chloride (m0), used as a desiccant, was placed in a plastic container (5 cm diameter). The films were placed onto the desiccant, and the container was sealed tightly. After three days, the calcium chloride was weighed (mt) and the change in its weight was determined, which was considered the water vapor absorbed by the desiccant. Then, the WVPR was calculated in mg/cm²/h.
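A minimal sketch of this calculation, assuming a 5 cm dish exposed for 72 h and hypothetical desiccant masses:

```python
import math

def wvpr(m0_g, mt_g, diameter_cm=5.0, hours=72.0):
    """Water vapor permeation rate in mg/cm^2/h: mass gained by the CaCl2
    desiccant divided by the exposed film area and the elapsed time."""
    area = math.pi * (diameter_cm / 2.0) ** 2   # ~19.6 cm^2 for a 5 cm dish
    return (mt_g - m0_g) * 1000.0 / (area * hours)

# Hypothetical desiccant masses for demonstration only
print(f"WVPR = {wvpr(m0_g=10.000, mt_g=10.850):.3f} mg/cm^2/h")
```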
Statistical Analysis
Obtained results were expressed as the mean plus standard deviation (x ± SD) and were statistically analyzed using commercial software (SigmaPlot 15.0, Systat Software, San Jose, CA, USA). The normal distribution of the data was checked using the Shapiro-Wilk test. One-way ANOVA analysis was performed, with multiple comparisons to the control using the Bonferroni t-test, with p < 0.05.
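For illustration, an equivalent analysis could be scripted as below (using SciPy rather than SigmaPlot); the data are hypothetical.

```python
from scipy import stats

def compare_to_control(control, *groups, alpha=0.05):
    """Shapiro-Wilk normality check, one-way ANOVA, then Bonferroni-corrected
    t-tests of each group against the control."""
    for g in (control, *groups):
        if stats.shapiro(g).pvalue <= alpha:
            print("warning: non-normal data; consider a nonparametric test")
    f_stat, p_anova = stats.f_oneway(control, *groups)
    print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")
    for i, g in enumerate(groups, start=1):
        p_corr = min(1.0, stats.ttest_ind(control, g).pvalue * len(groups))
        flag = "*" if p_corr < alpha else ""
        print(f"group {i} vs control: Bonferroni-corrected p = {p_corr:.4f} {flag}")

# Hypothetical triplicate measurements for demonstration only
compare_to_control([1.00, 1.10, 0.95], [2.00, 2.20, 1.90], [3.10, 2.90, 3.00])
```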
Graphene Nanoparticle Characterization
All material characterization was multidirectional. Figure 1A,B depict the SEM images of the GF15 and GF75 samples. As shown, the graphene flakes were clean, without loose particles on the surface. Figure 2 presents the SEM/STEM mode images. In particular, the image for the GF15 sample shows very thin graphene layers; so thin that the copper mesh can be seen. STEM also confirms the purity of the material, which is especially important for biocompatibility testing. The remaining images in Figure 2B-D show agglomerates of graphene flakes; they are also free of impurities.
All material characterization was multidirectional.Figure 1A,B depict the SEM images of the GF15 and GF75 samples.As shown, the graphene flakes were clean, without loose particles on the surface.Figure 2 presents the SEM/STEM mode images.In particular, the image for the GF15 sample shows very thin graphene layers; so thin that the copper mesh can be seen.STEM also confirms the purity of the material, which is especially important for biocompatibility testing.The remaining images in Figure 2B-D show agglomerates of graphene flakes; they are also free of impurities.Table 1 presents the content of three key elements, i.e., N, C, and H, and their porosity measurements (pore volume).C carbon C is the main component in all the samples under investigation.It ranges from 89.3 wt.% to 98.0 wt.%, which is typical for materials considered as pristine graphene.The rest of the content may be ascribed mainly to oxygen, the content of which is very low (far below the level typically occurring in the case of Table 1 presents the content of three key elements, i.e., N, C, and H, and their porosity measurements (pore volume).C carbon C is the main component in all the samples under investigation.It ranges from 89.3 wt.% to 98.0 wt.%, which is typical for materials considered as pristine graphene.The rest of the content may be ascribed mainly to oxygen, the content of which is very low (far below the level typically occurring in the case of graphene oxide).The raw materials do not contain heavy metals, which has been proven in our previous works [20,21].The results show that the materials are free of unnecessary impurities.Nitrogen adsorption-desorption isotherms (Figure 3) for all samples belong to type II (IUPAC standard), which represents the unrestricted monolayer-multilayer adsorption process.It is probable that multilayer adsorption occurs in mesoporous materials, which contributes to the total pore volume.The surface area increased from 145 m 2 /g to 750 m 2 /g.Usually, an increase in surface area results from a diminishing of graphene plate size [21].Also, the percentage of the mesopore volume V me in the total pore volume V t increased from 44 to 87%.Moreover, the sample with the highest surface area (GF75) does not have the best biocompatibility features.The Raman spectra of the investigated samples (Figure 4A) show the typical shape for graphene peaks.G peak intensity corresponds to the degree of graphitization.The graphene nanoplatelets' Raman spectra are characteristic because of two specific peaks at 1340 cm −1 (D band) and 1580 cm −1 (G band) [22].However, despite all structural imper- The Raman spectra of the investigated samples (Figure 4A) show the typical shape for graphene peaks.G peak intensity corresponds to the degree of graphitization.The graphene nanoplatelets' Raman spectra are characteristic because of two specific peaks at 1340 cm −1 (D band) and 1580 cm −1 (G band) [22].However, despite all structural imperfections and irregularities, some similarly agglomerated graphene domains are present in all materials under investigation.It may be concluded that a few-layered graphene (FLG) is a dominating form, in which graphene flakes are self-organized.
In turn, Figure 4B shows the intensity ratios of the D to G bands (ID/IG), which depend mainly on the level of disorder. The ID/IG ratio is 0.23 for GF15, 0.45 for GF30, 0.52 for GF50, and 0.62 for GF75. These ratios indicate that the amount of defects in the graphene nanoparticles is small.
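As an illustration of how such ratios can be estimated, the sketch below picks peak intensities in windows around the D and G bands of a synthetic spectrum; the window width and the spectrum itself are assumptions, not the processing applied in this study.

```python
import numpy as np

def d_to_g_ratio(shift_cm1, intensity):
    """Estimate I_D/I_G from a Raman spectrum by taking the peak intensities
    in windows around the D (~1340 cm^-1) and G (~1580 cm^-1) bands."""
    shift_cm1, intensity = np.asarray(shift_cm1), np.asarray(intensity)
    def peak(center, half_width=60):
        win = np.abs(shift_cm1 - center) < half_width
        return intensity[win].max()
    return peak(1340) / peak(1580)

# Hypothetical spectrum: two Lorentzian-like bands for demonstration only
x = np.linspace(1100, 1800, 1401)
y = 0.5 / (1 + ((x - 1340) / 30) ** 2) + 1.0 / (1 + ((x - 1580) / 25) ** 2)
print(f"I_D/I_G ~ {d_to_g_ratio(x, y):.2f}")   # ~0.5 for this synthetic spectrum
```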
Hemocompatibility
All tested nanoparticles damaged the integrity of human erythrocytes (Figure 5), causing a significant increase in the hemolysis rate. A negative trend with decreasing surface area can be found. Further, it was observed that all the analyzed groups significantly increased the percentage of hemolysis compared to the control condition. To sum up, nanoplatelets with a 150 m²/g surface area had a hemotoxic effect, the medium ones (300 and 500 m²/g) were slightly hemolytic, and only GF75 was classified as nonhemolytic [17]. The effect of graphene and its derivatives on hemolysis has been previously tested and discussed. Research has shown that several factors may affect the hemocompatibility of particles, such as their size and shape; purity; concentration and dispersion; surface charge and functionalization; stability; and, finally, exposure time and environmental conditions [23,24]. Further, Sasidharan A. et al. [25] found that graphene nanomaterials at doses of up to 75 µg/mL did not elicit hemolysis; however, a negative trend with increasing particle concentration was observed. Here, the applied concentration was much higher (~33.3 mg/mL), and the particles' shape was also different (here, we applied nanoplatelets), which may explain the differences in results. Further, it is assumed that various nanoparticles might affect erythrocyte membrane integrity through mechanical damage or the generation of reactive oxygen species [26].
Cytocompatibility
Nanoparticles did not negatively affect the viability of osteoblasts (tested through extract exposure for 24 h), which was confirmed by comparable MTT results (Figure 6). All groups showed high cytocompatibility (close to 100%), although a slightly significant increase of LDH release was noticed in the hFOB cells treated with extracts, especially for the nanoplatelets with the smallest surface area (GF15). Carey et al. [27] also confirmed the cytocompatibility of graphene flakes with human umbilical vein endothelial cells (up to 1 mg/mL), which is consistent with our results. Further, Chang et al. [28] reported no cytotoxic effect on lung carcinoma epithelial cells caused by graphene oxide (up to 0.2 mg/mL). Also, Guo et al. [29] developed a GO coating, which showed good compatibility with MC3T3-E1 osteoblasts and even promoted osteogenic differentiation. However, some reports have been made regarding the cytotoxic effects of graphene and its derivatives. For example, Wang et al. observed that a concentration of graphene oxide above 50 µg/mL had a cytotoxic effect on human fibroblast cells [30], and Ricci et al. [31] found that graphene nanoribbons were toxic above 200 µg/mL for MG-63. In conclusion, based on the literature [26], it can be assumed that the shape and size of particles and their concentration significantly impact cytocompatibility. Further, the size-dependent toxicity between erythrocytes and human cells was previously noted in some reports regarding various nanoparticles [32,33]. Also, Liao et al. found that various graphene types showed different biotoxicity results, probably due to surface area and hydrophobic surfaces [34]. Moreover, the differences between our hemo- and cytocompatibility results may also be related to the applied method. In the hemolysis study, the nanoparticles were in direct contact with the cells, while in the cytocompatibility study, extracts were used.
Film Characterization
Gelatin-based films were obtained through solvent evaporation and were modified with the addition of GF30, GF50, and GF75 (Figure 7). We decided not to use GF15, as this group showed the greatest hemolysis rate and an increase of released LDH in the cytocompatibility study.
Scanning Electron Microscope (SEM)
Scanning electron microscope images of the gelatin-based films with graphene nanoparticles are shown in Figures 8 and 9. It is observed that the graphene is totally embedded in the matrix and well distributed throughout the whole volume of the gelatin. However, the morphology of the surface changes and is rougher than that of the gelatin film without graphene.
Mechanical Properties
Gelatin-based films containing 10% graphene nanoplatelets showed better mechanical properties than the unmodified films (Figure 10). The Young's modulus for Gel_GF75 was twice the value for the pure gelatin film, while the maximum tensile strength was triple. This suggests that the mechanical strength of the obtained films was significantly improved after the modification, as they are much stronger than they were beforehand. These properties are essential for appropriately applying wound dressing materials to the injury. If the film does not have adequate resistance to the mechanical stresses occurring during handling, it will be unsuitable for medical use. Hence, the modification with nanoplatelets is very beneficial in this respect. The positive effect of the addition of nanoparticles has been previously noted in the literature. For example, Wang et al. reported that adding graphene oxide to gelatin increased the mechanical parameters of the obtained films [8].
Antioxidant Activity
In the literature, attention is increasingly paid to nanoparticles in the context of compounds with a strong antioxidant effect. Graphene-modified materials have an antioxidant effect thanks to scavenging DPPH radicals, which can release free radicals and form non-radical species [35]. The antioxidant results of the obtained gelatin films with graphene nanoplatelets are presented in Table 2. In the films modified with specific types of graphene, a significant increase in the RSA parameter was observed compared to the control unmodified gelatin film, which does not show any markers of antioxidant activity. It is worth noting that the antioxidant effect increases with a higher surface area of nanoplatelets, and the RSA parameter was the greatest for the Gel_GF75 film.

Roughness of Surface

Both Ra and Rq increased after the addition of graphene nanoparticles (Table 3). The roughness of the films' surface increases with increasing surface area of GF. The surface roughness can be classified as nanoroughness (less than 100 nm). The morphology of the obtained gelatin-based films is shown in Figure 11. To consider the material's biomedical applicability, a rough surface should be characterized as a requirement, as it improves the cell's adhesion to the film due to its flexible cell membrane. Moreover, the bacteria's attachment to the film surface is also dependent on roughness [36]. Therefore, a positive effect of film modification on surface differentiation was found in this study.
Surface Free Energy
The results presented in Table 4 show that increasing the surface area of the graphene in the obtained films reduces both the surface free energy and the dispersive component, while increasing the polar component. Lowering the surface free energy parameter may result in improved cell-material interactions, which is essential from the point of view of using the materials as dressings for wound treatment. In the literature, it can be found that the hydrophobicity of the graphene surface significantly affects the decrease in surface energy, and that at room temperature the surface energy is about 46.7 mJ/m² [37]; this is a value close to the surface energy values shown in Table 4.
Water Vapor Permeation Rate (WVPR)

Analyzing the results in Table 5, it is noticeable that the water vapor permeability gradually decreased with the increase in the graphene content of the obtained materials. Gelatin is a material with a hydrophilic nature [38], which allows water molecules to bind and, as a result, allows water to penetrate through the created film. However, the addition of graphene causes a decrease in the WVPR parameter because graphene has hydrophobic properties.
Figure 4. (A) Raman spectra of the D and G bands for GF15, GF30, GF50, and GF75; (B) the intensity ratios of the D and G bands for GF15, GF30, GF50, and GF75.

Figure 5. The effect of tested nanoparticles on the hemocompatibility of human erythrocytes (percentage hemolysis rate) after 24 h exposure (n = 3; data are expressed as the mean ± SD; * significantly different from the negative control (p < 0.05); # measurement outside the device's range, above 5%).

Figure 6. The effect of tested nanoparticles on the cytocompatibility of hFOB 1.19 cells (cell viability and lactate dehydrogenase release) after 24 h of exposure to sample extracts (n = 3; data are expressed as the mean ± SD; * statistical significance compared to the control (TCP), p < 0.05).

Table 1. The content of C, H, N, and surface parameters of the used carbons.
Sex differences in laterality of motor unit firing behavior of the first dorsal interosseous muscle in strength-matched healthy young males and females
Purpose: The purpose of this study was to compare laterality in motor unit firing behavior between females and males. Methods: Twenty-seven subjects (14 females) were recruited for this study. The participants performed ramp-up and hold isometric index finger abduction at 10, 30, and 60% of their maximum voluntary contraction (MVC). High-density surface electromyography (HD-sEMG) signals were recorded in the first dorsal interosseous (FDI) muscle and decomposed into individual motor unit (MU) firing behavior using a convolution blind source separation method. Results: In total, 769 MUs were detected (females, n = 318 and males, n = 451). Females had a significantly higher discharge rate than males at each relative torque level (10%: male dominant hand, 13.4 ± 2.7 pps vs. female dominant hand, 16.3 ± 3.4 pps; 30%: male dominant hand, 16.1 ± 3.9 pps vs. female dominant hand, 20.0 ± 5.0 pps; and 60%: male dominant hand, 19.3 ± 3.8 pps vs. female dominant hand, 25.3 ± 4.8 pps; p < 0.0001). The recruitment threshold was also significantly higher in females than in males at 30 and 60% MVC. Furthermore, males exhibited asymmetrical discharge rates at 30 and 60% MVC and recruitment thresholds at 30 and 60% MVC, whereas no asymmetry was observed in females. Conclusion: In the FDI muscle, compared to males, females exhibited different neuromuscular strategies, with higher discharge rates and recruitment thresholds and no asymmetrical MU firing behavior. Notably, the finding that sex differences in neuromuscular activity also occur in healthy individuals provides important information for understanding the pathogenesis of various diseases.
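As an illustration of the discharge-rate measure reported above, here is a minimal sketch of how the mean rate in pulses per second (pps) might be computed from the decomposed firing instants of one MU; the firing times and the use of the mean inter-spike interval are illustrative assumptions, not the exact pipeline of this study.

```python
def mean_discharge_rate(firing_times_s):
    """Mean discharge rate (pps) of one motor unit from its decomposed
    firing instants (in seconds), using the mean inter-spike interval."""
    if len(firing_times_s) < 2:
        raise ValueError("need at least two firings")
    isis = [t2 - t1 for t1, t2 in zip(firing_times_s, firing_times_s[1:])]
    return 1.0 / (sum(isis) / len(isis))

# Hypothetical firing train: roughly 16 pps during a low-force hold
train = [0.000, 0.061, 0.124, 0.187, 0.249, 0.312]
print(f"{mean_discharge_rate(train):.1f} pps")
```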
Introduction
Sex differences in physical performance have been well documented across a variety of activities (Lewis et al. 1986; Conkright et al. 2022). The mechanisms of sex differences are multifactorial, and differences in strength, muscle volume, and muscle fiber composition (Bishop et al. 1987; Staron et al. 2000) have been documented. Less is known about sex differences in the central nervous system that underlie force regulation (Nishikawa et al. 2017a; Inglis and Gabriel 2020; Taylor et al. 2022). Understanding differences between male and female motor unit (MU) firing behavior is important because knowledge of sex-specific neuromuscular control improves health and well-being by providing insights into aging, disease, and training.
A recent review reported that several muscles (vastus lateralis (VL), vastus medialis (VM), tibialis anterior (TA), and first dorsal interosseous (FDI) muscles) exhibit different MU firing behavior between females and males on intramuscular electromyography (EMG) and high-density surface EMG (HD-sEMG) (Lulic-Kuryllo and Inglis 2022). A common sex difference identified in these muscles was that females exhibited a higher discharge rate of MUs than males (Peng et al. 2018; Parra et al. 2020; Inglis and Gabriel 2020; Guo et al. 2022). Specifically, females have shown a higher discharge rate of MUs at lower intensities (10-40% MVC) for the TA (Inglis and Gabriel 2020) and VL (Guo et al. 2022) muscles, but a higher discharge rate of MUs at higher intensities (> 60% MVC) for the VM (Peng et al. 2018) and FDI (Parra et al. 2020) muscles during isometric contraction compared to males. These findings suggest that there are potential sex-related differences in neural drive that contribute to force output.
Another important factor that may influence sex differences in MU firing behavior is the difference in maximal muscle strength between females and males (Boccia et al. 2019). In several studies that have examined sex differences in MUs, participants with sex-related differences in maximal muscle strength have been recruited (Nishikawa et al. 2017a; Parra et al. 2020; Inglis and Gabriel 2020; Taylor et al. 2022). Musculoskeletal differences between females and males may play an important role in sex differences in MU firing behavior (Oliveira et al. 2022). Therefore, it is important to match females and males by maximal muscle strength. A recent study reported that sex differences in the properties of MUs in the TA muscle detected using intramuscular EMG were more apparent when the MVC values were matched (Inglis and Gabriel 2020). However, no study has been conducted to support a sex difference in MUs in the FDI muscle in strength-matched subjects.
Dominance is another known factor that influences MU firing behavior, especially in the upper extremities (Adam et al. 1998). Using the intramuscular EMG method, Adam et al. reported that the dominant FDI muscle exhibited lower discharge rates and recruitment thresholds than the nondominant FDI muscle during 30% submaximal isometric contraction in males (Adam et al. 1998). This finding may be influenced by differences in adaptations of the muscle in response to preferential use. The lateral dominance of the cerebral cortex indicates functional specialization within the left or right cerebral hemisphere of the brain. An essential principle of human brain organization is functional cerebral asymmetry (FCA), which is thought to result from interhemispheric inhibition by the dominant hemisphere. It has been reported that FCA is sex specific: males have more stable FCA (greater asymmetry) than females (Weis and Hausmann 2010).
The aim of this study was to examine the sex and laterality differences in MU firing behavior during submaximal isometric contractions of the FDI muscle in strength-matched healthy young males and females using the HD-sEMG method. We hypothesized that, compared to males, females would exhibit higher MU discharge rates and lower dominant vs. nondominant hand asymmetry of MU firing behavior. HD-sEMG is a noninvasive method for assessing the behavior of MUs and can be applied to a wide range of subjects. Recently, its accuracy for tracking MUs was reported (Goodlich et al. 2023). Given the high applicability of this method, including the ability to track activity changes in MUs over time, we believe that identifying sex differences in the FDI muscle, which is commonly used in the assessment of several neurodegenerative diseases, is important.
Participants
Twenty-seven subjects (females, n = 14 and males, n = 13) were enrolled in this study after written informed consent was obtained (Table 1). The inclusion criteria were independence in activities of daily life and the ability to give informed consent. The exclusion criteria were a reported history of orthopedic, neuromuscular, or cardiovascular diseases or diabetes mellitus. This study was approved by the Research Ethics Committee of Kanazawa University (approval no. 2020-220 (83)) and was performed in accordance with the Declaration of Helsinki.
Measurements
A total of two visits to the laboratory were made by participants. As part of the first visit, the participants were introduced to the experimental procedures by performing a series of maximal and submaximal isometric finger abductions.
Participants underwent the main experimental session during the second visit, which occurred 24 h after the familiarization session. During this examination, ultrasound was used to assess the muscle cross-sectional area (CSA) and the thickness of subcutaneous tissue, as well as to record voluntary isometric finger abduction force and HD-sEMG signals from the FDI muscle.
Maximum voluntary contraction (MVC)
After placement of the surface electrodes, the participants were asked to perform finger abduction at MVC, and the finger testing order was randomized. The subject's hand rested palm down on the examination table with the thumb in a 90-degree radial abduction position. A dynamometer (Takei Scientific Instruments Co., Ltd., Niigata, Japan) was used to measure the MVC (Fig. 1B). The force signal was detected by a force amplifier (TSA-110, Takei Scientific Instruments Co., Ltd., Niigata, Japan) with a 190 Hz sampling rate. The subjects were instructed to maintain a sitting posture during the MVC measurement. All participants performed two MVC trials after a warm-up period of ten minutes that included upper limb stretching and indoor walking. The target torque for the submaximal isometric ramp-up contractions was calculated from the peak MVC torque. During MVC measurements, the subject was asked to keep the upper arm in a 90-degree flexed position at the shoulder and elbow joints. The subject's palms were placed on the table to prevent wrist and finger flexion.
Ultrasound
Ultrasound images of the FDI muscle were taken bilaterally to determine the muscle cross-sectional area (CSA) and the thickness of subcutaneous tissue using an ultrasound imaging device (FAMUBO, SEIKOSHA, Tokyo, Japan). During the examination, participants sat in the same chair as for the MVCs with the tested hand open. For each scan, a 2-cm scan depth was used, and the transducer frequency was 7.5 MHz in brightness mode (B-mode). A longitudinal scan of the muscle was used to identify the FDI's origin and insertion. We measured and marked the origin and insertion, and the CSA measurement was conducted at the midway point between the two. The probe head was oriented perpendicular to the second metacarpal once the midway point was determined. The FDI runs along the lateral side of the second metacarpal, which was used to guide the orientation of the probe. To create uniform pressure on the skin, we ensured that enough gel was used and that the probe was perpendicular to the surface. After properly focusing the muscle, an image was captured and saved. For subsequent analysis, each image was saved in jpg format and exported to a personal computer. ImageJ (National Institutes of Health, Bethesda, MD, USA) was used to determine the muscle CSA (in cm^2) and the thickness of subcutaneous tissue (in mm). A centimeter mark was inlaid in each image to calibrate the scale. Using the muscle's cross-sectional center as a reference point, the thickness of subcutaneous tissue was measured. The polygon tool was used to outline the entire muscle for the CSA measurement.
HD-sEMG
A grid of 64 electrodes (GR04MM1305, OT Bioelettronica) was used to record HD-sEMG signals from the FDI muscle (electrode diameter of 1 mm, inter-electrode distance of 4 mm; Fig. 1B). Using a bioadhesive foam (KIT04MM1305, OT Bioelettronica) and conductive paste (Elefix ZV-181E, NIHON KOHDEN, Tokyo, Japan), the electrode grid was attached to the muscle surface (Nishikawa et al. 2017b, 2018). An electrode was placed at the wrist as a reference. We recorded monopolar HD-sEMG signals using an
Protocol
First, participants were evaluated using ultrasound for muscle shape and subcutaneous tissue thickness. Second, the electrode grid was placed on the muscle belly of the FDI muscle, after which the MVC was measured. After recording the MVC measurements, all participants performed submaximal isometric finger abduction at 10, 30, and 60% MVC in a random order (Fig. 1B). To calculate the discharge rate and recruitment threshold of MUs and thereby characterize MU firing behavior, a submaximal isometric contraction task was performed, for which a trapezoidal motor task was chosen based on the methods applied in previous studies (Nishikawa et al. 2017b, 2018, 2022). Contractions at 10 and 30% MVC were sustained for 15 s, whereas those at 60% MVC lasted for 5 s. In each trial, the subjects received visual feedback of the torque applied to the dynamometer, which was displayed as a trapezoid (ramp up and ramp down): torque increasing by 1% MVC/s up to 10% MVC, which was maintained for 15 s (Nishikawa et al. 2018); by 2% MVC/s up to 30% MVC, which was maintained for 15 s (Nishikawa et al. 2022); and by 10% MVC/s up to 60% MVC, which was maintained for 5 s (Nishikawa et al. 2017b). HD-sEMG data were collected during the MVC assessment and during the submaximal ramp-up contraction tasks.
Data processing
In this study, 59 bipolar EMG signals were calculated from adjacent electrodes. Convolutive blind source separation was used to separate the HD-sEMG recordings into individual MU discharges (Holobar and Zazula 2007; Holobar et al. 2009; Holobar and Farina 2014) (Fig. 2). To identify individual MUs, we used DEMUSE software (v.6.0; University of Maribor, Slovenia). Data were excluded from the analysis if the discharge rate fell below 4 Hz (Holobar et al. 2009; Nishikawa et al. 2022) or the pulse-to-noise ratio was less than 30 dB (Holobar et al. 2014). The coefficient of variation (CV) of the interspike interval was defined as the ratio between the standard deviation and the mean value of the interspike interval. Next, the mean discharge rate of the identified MUs was calculated during the sustained contractions (Fig. 1B). The MU recruitment thresholds were defined as the level of force (%MVC) at the first firing of each MU. A wide range of MUs were recruited in the 60% MVC task. The characteristics of MUs include a phenomenon called "onion skin", in which earlier recruited MUs generally have a higher discharge rate than later recruited MUs (De Luca and Hostage 2010), and a phenomenon called "reverse onion skin", in which later recruited MUs have a higher discharge rate than earlier recruited MUs (Inglis and Gabriel 2021a). According to these characteristics, we classified the detected MUs into three subgroups by recruitment threshold (MU20, < 20% MVC; MU40, 20-40% MVC; MU60, > 40% MVC). In addition, the CV of force (standard deviation/mean × 100; CV force) at each level of sustained submaximal contraction was calculated.
Statistical analysis
Stata version 17 (Stata Corp LLC, Texas, USA) was used for all analyses, while GraphPad Prism version 8 (GraphPad Software Inc., California, USA) was used to generate graphs. Data normality was confirmed using the Shapiro-Wilk test. Based on the normality of the data, parametric analysis was performed to compare age, height, and weight between females and males using an unpaired t test. Furthermore, the MVC value was analyzed using a two-way (sex (female and male) and side (dominant and nondominant)) analysis of variance (ANOVA). Nonparametric analysis (generalized linear mixed-effects model with random slopes) was performed to compare CSA, subcutaneous tissue, CV of force, mean discharge rate, and recruitment threshold. The explanatory variables were as follows: for CSA and subcutaneous tissue, sex (female and male) and side (dominant and nondominant); for discharge rate and recruitment threshold, sex (female and male), side (dominant and nondominant), and contraction level (10, 30, and 60% of MVC). Furthermore, an analysis of the mean discharge rate in the 60% MVC task was conducted using a generalized linear mixed-effects model with a random intercept and a random slope. There were two explanatory variables: side (dominant and nondominant) and MU_subgroup (MU20, MU40, and MU60). The Bonferroni correction was applied to account for the effects of multiple comparisons. A bivariate correlation analysis was conducted using Spearman's correlation coefficients to assess bivariate correlations between subcutaneous tissue thickness and the yield of MUs and between CSA and MVC. Effect size was calculated from the generalized linear mixed-effects model.
Results
The general characteristics of the participants are presented in Table 1. Males and females were similar in age and body mass index (p = 0.6895, 95% CI = −0.169 to 0.670 years and p = 0.2192, 95% CI = −2.432 to 0.591 kg/m^2).
Fig. 3 Comparison of the discharge rate between the dominant side and nondominant side in females and males at 10% (A), 30% (B), and 60% MVC (C). Females' dominant side showed a significantly higher discharge rate than males' dominant side at 10, 30, and 60% MVC. Females' nondominant side showed a significantly higher discharge rate than the males' dominant side at 10% MVC. Furthermore, the males' dominant side showed a significantly higher discharge rate than the males' nondominant side at 30 and 60% MVC. Data are shown as the median and 95% CI. * p < 0.05.
There was no significant sex × side interaction effect on the MVC (F = 0.007, p = 0.9334, η^2 = 0.001) and no main effect of sex (F = 0.072, p = 0.7893, η^2 = 0.002) or side (F = 0.2668, p = 0.6084, η^2 = 0.007). Furthermore, there was no correlation between CSA and MVC in either males (r = −0.2925, p = 0.2108) or females (r = 0.04766, p = 0.8250).
Discussion
This study compared sex- and dominance-specific MU firing behavior in young adults using HD-sEMG. We found that males showed (1) significant asymmetry in discharge rates and recruitment thresholds according to dominance at moderate and high force output and (2) significantly lower discharge rates and recruitment thresholds than females. On the other hand, females did not show significant asymmetry in the above outcomes at any investigated force output.
These results support our hypothesis that females and males have different MU control strategy characteristics.
We found that females exhibited a significantly higher discharge rate and recruitment threshold than males. In previous studies using HD-sEMG, females exhibited a higher discharge rate than males in various muscles (i.e., the VL muscle (Guo et al. 2022) and TA muscle (Taylor et al. 2022)). Furthermore, several previous studies reported that females showed a significantly higher recruitment threshold than males in the TA muscle (Martinez-Valdes et al. 2020) and VL muscle (Boccia et al. 2019). These studies were not strictly sex-specific, as there were also sex differences in muscle strength, and the effects of muscle fiber type and other factors must be considered. This study found sex differences in discharge rate and recruitment thresholds even when sex differences in muscle strength were accounted for in a strength-matched population. A recent study using intramuscular EMG reported that sex differences in the properties of MUs in the TA muscle were more apparent when the MVC values were matched (Inglis and Gabriel 2020). Furthermore, Herda et al. also reported sex differences in MU firing behavior (discharge rate and recruitment threshold) in a strength-matched subject group (8-10 years old) for the FDI muscle, as in the present study (Herda et al. 2019). Their findings support our results indicating sex differences in potential MU characteristics, including the discharge rate and recruitment threshold, in strength-matched subjects. The sex differences in MU firing behavior may be due to sex differences in corticospinal tract excitability associated with differences in brain anatomy, such as gray matter and white matter (Hanlon and McCalley 2022). Interestingly, previous studies reported that males are better at visual motor tracking tasks than females (Carey et al. 1994; Mathew et al. 2020). Anatomical and functional sex differences in the cerebellum (Raz et al. 2001), a structure important in eye-hand coordination (Miall et al. 2001), may be responsible for the sex differences in the visual coordination task. Our study also found that the CV of force in females was significantly higher than that in males. Similarly, a previous study reported higher values of the CV of force and the CV of the ISI in females than in males (Inglis and Gabriel 2021b). The CV of force has also been reported to affect MU firing behavior (Jakobi et al. 2018), and it is likely that the difference in eye-hand coordination while performing the motor task was also a factor in the sex differences in MU firing behavior of the FDI muscle. Hormonal influences have been implicated in these sex differences in the characteristics of MU firing behavior. The female sex hormone progesterone is known to affect neurotransmitter function (Callachan et al. 1987), and it has been reported that the MU discharge rate is higher after ovulation when the progesterone level is high (Tenan et al. 2013). On the other hand, a recent study reported that higher testosterone levels were associated with reduced MU action potential complexity (Guo et al. 2022). These findings indicate that sex hormones influence MU firing behavior and may be one of the reasons for the observed sex differences in MU characteristics. However, this study did not investigate hormonal dynamics. Future studies should clarify this point by analyzing the association of MU firing behavior with the circadian rhythm of hormones and the ovulatory cycle.
Fig. 6 Comparison of muscle cross-sectional area between females and males (A) and subcutaneous tissue between the dominant side and nondominant side (B). Females showed a significantly lower cross-sectional area than males. The dominant side showed significantly less subcutaneous tissue than the nondominant side.
In this study, we found that females did not exhibit laterality in firing behavior, in contrast to males. Lateral dominance of brain activity suggests functional specialization in either the left or right cerebral hemisphere (Steenhuis and Bryden 1989; Hebbal and Mysorekar 2006), resulting in the preferential use of the dominant limb to manipulate objects or initiate a movement (Peters 1988). Accordingly, an association between the characteristics of MU firing behavior and the dominant side has been well established (Kamen et al. 1992; Schmied et al. 1994; Adam et al. 1998). Specifically, the dominant hand has been reported to have lower discharge rates and recruitment thresholds than the nondominant hand in the FDI muscle as recorded with intramuscular EMG (Adam et al. 1998). These findings are consistent with the results of this study, in which the dominant side exhibited a lower discharge rate and lower recruitment threshold compared with the nondominant side during the isometric contraction task in males. On the other hand, females did not show a significant difference between the dominant and nondominant sides. This finding is consistent with the assumption of reduced asymmetrical organization in females (Shaywitz et al. 1995; Hausmann and Güntürkün 1999), possibly due to sex differences in cerebral function. FCA is thought to be generated by interhemispheric inhibition of the nondominant hemisphere by the dominant hemisphere. Several studies that identified sex differences have found that FCA is more pronounced in males than in females (Hausmann et al. 1998; Hausmann and Güntürkün 1999). Many researchers have reported sex differences in brain structure and function (Allen et al. 1991; Ingalhalikar et al. 2014; Björnholm et al. 2017). In adulthood, males exhibited higher fractional anisotropy (FA) and lower mean diffusivity than females in many regions (Westerhausen et al. 2003; Hsu et al. 2008; Abe et al. 2010; Ritchie et al. 2018). In contrast, females may have higher FA than males in parts of the corpus callosum (Schmithorst et al. 2008; Kanaan et al. 2014). FA is related to axonal packing and myelination (Beaulieu 2002). The corpus callosum is the region responsible for neurotransmission between the left and right cerebral hemispheres (Wahl and Ziemann 2008). Differences in neural networks in this region may lead to sex differences in information transfer between the hemispheres and influence asymmetries in motor nerve function between the dominant and nondominant hands.
The results of this study showed that CSA was significantly higher in males than in females, but there was no significant sex difference in muscle strength, although there was a relative difference of 60%.
Previous studies have reported that males have higher CSA and muscle strength in the FDI muscle compared to females (Sars et al. 2018; Parra et al. 2020), but our results were not consistent with these studies. We analyzed the relationship between CSA and muscle strength in the FDI muscle and found no significant correlation between CSA and MVC of the FDI muscle. According to this finding, the muscle strength of the FDI muscle is influenced more by neuromuscular factors than by muscle mass. Although the subjects were different (amyotrophic lateral sclerosis), Jenkins et al. reported no correlation between muscle weakness and changes in CSA in the FDI muscle (Jenkins et al. 2013). This finding supports our hypothesis that muscle strength in the FDI muscle is influenced more by neural factors than by muscle mass. Furthermore, the possibility that the menstrual cycle affects muscle strength must be considered. Several studies on the effects of the menstrual cycle on muscle strength found no effects in the lower limb muscles (Kubo et al. 2009; Romero-Moraleda et al. 2019), but an effect has been reported for the hand muscles (Phillips et al. 1996). Our study and other studies targeting the FDI muscle (Sars et al. 2018; Parra et al. 2020) did not clearly account for menstrual cycles, which may have contributed to the variability in these results. Therefore, we believe that further studies addressing the menstrual cycle are needed in future investigations.
This study has several limitations. First, although both right- and left-handed individuals were recruited, the percentage of left-handed individuals was negligible for both females and males. A previous study noted that right- and left-handed individuals have different nerve conduction velocities (Patel and Mehta 2012); thus, it is desirable to analyze right- and left-handed individuals separately. Second, this study recruited only young adults. MU firing behavior changes with age (Watanabe et al. 2016), and there are sex differences in these changes (Piasecki et al. 2021). Therefore, the results may not generalize to older adults. Third, this study only included tasks up to 60% MVC. Previous studies reported that the majority of MUs in the FDI muscle are recruited prior to 50% MVC, with a few MUs recruited up to 70% MVC (Thomas et al. 1986; Kamen et al. 1995). De Luca et al. reported an increase in the MU firing rate in the FDI muscle up to 80% MVC with increasing torque exerted (De Luca et al. 1982). These findings indicate that MU yield decreases with increasing exercise intensity, and data at exercise intensities up to 80% MVC may provide further information on motor control. Fourth, we only performed HD-sEMG and did not directly assess whether there were sex differences in brain function; therefore, we can only speculate about neural network asymmetries during motor tasks. In the future, simultaneous measurements with HD-sEMG and electroencephalography (EEG) should be performed to provide more detailed data on asymmetry of brain function through analysis of EEG signals and MU activity during motor tasks. Fifth, only subjects with similar muscle strength and subcutaneous tissue thickness were included in this study. There are several factors known to influence EMG recordings, such as muscle strength, subcutaneous tissue thickness, and muscle size; thus, we cannot conclude that there is a causal relationship between those factors and the behavior of MUs according to HD-sEMG based on this study alone. Future studies that include a wide range of subjects can clarify these relationships to enable better interpretation of HD-sEMG results. Finally, this study examined only the FDI muscle. A recent study analyzing MUs in the TA muscle reported that no asymmetry of MUs was observed in males (Petrovic et al. 2022). Since the upper and lower extremities are used differently, it is likely that the motor control mechanisms are also different, but sex differences in other muscles need to be clarified.
Conclusions
We identified sex-specific laterality of MU firing behavior in young adults. Females exhibited higher discharge rates and MU recruitment thresholds than males. Furthermore, there was asymmetry in MU firing behavior in males, whereas no asymmetry was observed in females. Sex differences have been observed not only in motor function but also in disease severity, and clarification of neurophysiological sex differences in healthy individuals is important for rehabilitation medicine as well as for sports science. In the future, further details on sex differences in MUs can be elucidated by investigating the relationships of MUs with sex hormone dynamics and aging effects.
Fig. 1
Fig. 1 Placement of the electrode grid and study protocol. A An electrode grid was placed on the FDI muscle belly. The force sensor of the dynamometer was placed to touch the outside of the basal phalanx of the index finger. B Participants performed three submaximal voluntary contractions
Fig. 2
Fig. 2 Representative images of high-density surface electromyogram (EMG) decomposition and definition of the recruitment threshold. A HD-sEMG signal for the 8 channels. B Motor unit action potentials (MUAPs) were identified by HD-sEMG decomposition. C Representative images of HD-sEMG decomposition in males (left side) and females (right side) (upper panel is the dominant side, lower panel is the nondominant side)
Fig. 4
Fig. 4 Comparison of the discharge rate of the 60% maximum voluntary contraction task between the side and MU_subgroup in males (A-C) and females (D-F). There were no significant side × MU_subgroup interactions in males and females (A and D). Males and females (B and E) had a main effect of MU_subgroup, with MU20 and MU40 discharge rates significantly higher than those of MU60 for males, while MU20 discharge rates were significantly higher than those of MU60 for females. Males' nondominant side showed a significantly higher discharge rate than males' dominant side (C), while there was no significant difference between the dominant and nondominant sides in females (F)
Fig. 5
Fig. 5 Comparison of recruitment thresholds between the dominant side and nondominant side in females and males at 10% (A), 30% (B), and 60% MVC (C). Males' nondominant side showed a significantly lower recruitment threshold than females' dominant side at 30% and 60% MVC and a significantly lower recruitment threshold than
Table 1
Characteristics of participants
Table 2
Motor Unit yield
Recommended Thermal Rate Coefficients for the C + H$_3^+$ Reaction and Some Astrochemical Implications
We have incorporated our experimentally derived thermal rate coefficients for C + H$_3^+$ forming CH$^+$ and CH$_2^+$ into a commonly used astrochemical model. We find that the Arrhenius-Kooij equation typically used in chemical models does not accurately fit our data and use instead a more versatile fitting formula. At a temperature of 10 K and a density of 10$^4$ cm$^{-3}$, we find no significant differences in the predicted chemical abundances, but at higher temperatures of 50, 100, and 300 K we find up to factor of 2 changes. Additionally, we find that the relatively small error on our thermal rate coefficients, $\sim15\%$, significantly reduces the uncertainties on the predicted abundances compared to those obtained using the currently implemented Langevin rate coefficient with its estimated factor of 2 uncertainty.
Introduction
Interstellar astrochemistry is mostly organic in nature. Of the 194 molecules identified to date in the interstellar medium (ISM) and circumstellar shells, approximately three-quarters are carbon bearing (Müller et al. 2005). Formation of these molecules begins with atomic carbon becoming bound into hydrocarbons (van Dishoeck & Blake 1998; Herbst & van Dishoeck 2009). This represents one of the first links in the chain of astrochemical reactions leading to the synthesis of complex organic molecules (COMs). A key reaction in this network is the proton transfer process (Wakelam et al. 2012)

C + H3+ → CH+ + H2. (1)

In dense clouds, the resulting CH+ is predicted to rapidly undergo sequential hydrogen abstraction with the abundant H2 in the cloud to form CH3+, which has been identified as a bottleneck species in the chain of reactions leading to the formation of interstellar COMs (Smith & Spanel 1995).
Current astrochemical models use the Langevin rate coefficient for Reaction (1).
However, our recent laboratory work has shown that the Langevin rate coefficient agrees poorly with the experimentally derived thermal rate coefficient (O'Connor et al. 2015).
Similarly, calculations by Talbi et al. (1991) and Bettens & Collins (1998, 2001), using a combination of quantum mechanical potential energy surfaces and classical trajectories, do not agree with our experimental results for Reaction (1). Additionally, we find that the reaction

C + H3+ → CH2+ + H (2)

is open, despite the lack of its inclusion in current astrochemical databases. We also find poor agreement between our results for Reaction (2) and semi-classical calculations.
Moreover, our rate coefficient for this channel is larger than that of Reaction (1) for temperatures below ∼ 50 K. The resulting CH2+ then undergoes hydrogen abstraction to form CH3+.
Reducing the uncertainty of the rate coefficient for Reaction (1) has been identified by Wakelam et al. (2009, 2010) as being critically important in order to more reliably predict the abundances of a large number of species observed in dense molecular clouds. Similarly, Vasyunin et al. (2008) have shown that the uncertainty in this rate coefficient hinders our ability to reliably predict chemical abundances in protoplanetary disks. Our recent laboratory studies have reduced the uncertainty on this reaction from a factor of 2 down to ∼ 15%.
Additionally, our work has demonstrated that Reaction (2) is open at molecular cloud temperatures and should be included in the chemical network.
In this paper, we explore the astrochemical impact of our new rate coefficients for Reactions (1) and (2). Using the kida.uva.2014 chemical database (Wakelam et al. 2015) and the gas-phase astrochemical code Nahoon (Wakelam et al. 2012), we have investigated the impact of our new data on astrochemical models.
Below, we briefly discuss the O'Connor et al. (2015) results in Section 2. We present the functional fits to the experimental data in Section 3. In Section 4, we briefly review the astrochemical model. Some astrochemical implications of our new thermal rate coefficients are discussed in Section 5, and a summary is presented in Section 6.
Experimental Work
O'Connor et al. (2015) and de Ruette et al. (2016), respectively, measured reactions between H3+, with an internal energy of ∼ 2500 K, and ground-term atomic C and O, with the fine-structure levels statistically populated. In those works, we detail how we derived thermal rate coefficients from our data and discuss in detail their validity for astrochemical models.
To summarize those results for our carbon work, we found good agreement between our data and the mass-scaled results of Savic et al. (2005), who studied C with statistically populated fine-structure levels reacting with D3+ with an internal energy of 77 K. Their work was carried out at a kinetic temperature of ∼ 1000 K. We found good agreement in the rate coefficients for both the CH+ and CH2+ outgoing channels and for the sum of both channels. Additionally, in our oxygen work, we found good agreement between the thermal rate coefficient summed over both the OH+ and H2O+ outgoing channels and flow tube work at kinetic temperatures of ≈ 300 K, which used H3+ with a corresponding level of internal temperature (Fehsenfeld 1976; Milligan & McEwan 2000). We refer the reader to those works for further details. Clearly, the optimal laboratory situation would be experimentally derived thermal rate coefficients involving thermally populated levels in C and H3+. However, such measurements appear to be beyond current experimental capabilities. Reliable calculations also appear to be just beyond current capabilities of quantum mechanical approaches. For now, the results of O'Connor et al. (2015) represent the state of the art for Reactions (1) and (2).
For the rest of this paper, we follow the standard practice of extrapolating state-of-the-art laboratory results to the temperatures needed for molecular cloud studies.
Fitting the Experimental Results
Astrochemical databases typically store thermal rate coefficients using the Arrhenius-Kooij formula

k(T) = α (T/300 K)^β exp(−γ/T). (3)

However, our experimentally derived thermal rate coefficients for Reactions (1) and (2) cannot be accurately reproduced by this formula. This can be seen in Figure 1, which presents the fit using Equation (3), normalized to the data. In the absence of any deep theoretical understanding of Reactions (1) and (2), it is not clear what fitting formula to use. We have opted to use the versatile fitting function recommended by Novotný et al. (2013) for astrochemical modeling (Equation 4). We have used this equation to fit the thermal rate coefficient data of O'Connor et al. (2015) for Reactions (1) and (2) to better than 1% over the 1-10,000 K temperature range. The relevant fitting parameters are listed in Table 1. In that table, we also give a fitted rate coefficient for the sum of both channels.
Astrochemical Model
In order to study the impact of the experimentally derived thermal rate coefficients from O'Connor et al. (2015) on astrochemical models of dark molecular clouds, we have used the Nahoon code along with the kida.uva.2014 astrochemical database (Wakelam et al. 2012, 2015). The database currently contains 489 species and 7509 reactions. These include Reaction (1) but not Reaction (2). We have modified KIDA so that we can run the model using either our fitted rate coefficient data for both of these channels or for the sum of the two channels.
The input parameters used were typical values for dark molecular clouds (Nummelin et al. 2000; Rodríguez-Fernández et al. 2010). For each run, the cosmic ray ionization rate ζ was taken to be 10^-17 s^-1 and the visual extinction A_v was set to 30. The initial chemical abundances were taken from Wakelam et al. (2015) and are reproduced in Table 2.
Each simulation used a fixed cloud temperature, T, between 10 and 300 K and a fixed total number density of hydrogen nuclei, n_H, in a range from 10^3 to 10^7 cm^-3.
Astrochemical Implications
We first look at how the differences in the temperature dependence and magnitude of our rate coefficients relative to those of the Langevin rate coefficient affect predicted chemical abundances. Next, we investigate how the ∼ 15% error reported in O'Connor et al. (2015) reduces the uncertainty in the predicted abundances compared to the uncertainties resulting from the estimated factor of 2 error in the Langevin value.
In order to test the sensitivity of our results to the experimentally derived branching ratio for forming CH+ or CH2+, we have run the model treating both reactions separately and with both reactions summed together. For the results presented below, we find no significant difference in the model output for either assumption. We attribute this to the high H2 abundance in the dark clouds, resulting in rapid hydrogen abstraction reactions. Figure 2 shows the fractional difference in the predicted abundances for all 489 species using our new rate coefficients relative to those from the unmodified model. Specifically, we have plotted the new abundances normalized by the old abundances. Calculations were carried out for n_H = 10^4 cm^-3 and T = 10, 50, 100, and 300 K.
Predicted Abundances
At 10 K, we find no significant differences except for CH+. The abundances of all other species are essentially unchanged because, as noted by O'Connor et al. (2015), any CH+ and CH2+ formed rapidly undergo hydrogen abstraction, leading to CH3+, and the summed rate coefficient at 10 K for Reactions (1) and (2) is basically equal to the Langevin value currently used in the database for Reaction (1). Hence the absence of Reaction (2) in the databases appears not to be an issue at this temperature. The decreased abundance of CH+ in the new model is due to the decreased rate coefficient for Reaction (1). Naively, one would expect the CH2+ abundance to increase due to the addition of Reaction (2) to the network. However, this is compensated for by a reduction in the hydrogen abstraction rate for CH+ forming CH2+ due to the decreased CH+ abundance. Hence the CH2+ abundance remains unchanged.
At 50 K, our summed rate coefficient is ∼ 30% smaller than the Langevin value. For species that depend on the products of Reactions (1) and (2), this means their abundances decrease in the modified model using our data. Conversely, for species whose formation depends on C and H3+, the abundance of those species increases with the new model, as C and H3+ are destroyed less rapidly using our new rate coefficients. The new predicted abundances range from a factor of 2 smaller than the old to a factor of 1.5 larger; however, this spread decreases dramatically between 10^5 and 10^6 years. This appears to be due to a large increase in the abundance of O2 during this epoch, which enables the reaction of C with O2 (Reaction 5) to become important. This leads to a dramatic decrease in the atomic C abundance, thereby reducing the importance of Reactions (1) and (2).
At 100 K, the summed rate coefficient is ∼ 40% smaller than the Langevin value, and this leads to correspondingly larger variations in the predicted abundances plotted in Figure 2. The new abundances range from a factor of 2 smaller to a factor of 2 larger. As before, these variations decrease dramatically between 10^5 and 10^6 years. This is again due to a large increase in the O2 abundance, an increase in the importance of Reaction (5), and an accompanying decrease in the C abundance.
At 300 K, the total rate coefficient is ∼ 45% smaller than the Langevin value, leading to new abundances that range from a factor of 2 smaller than the old to a factor of 1.5 larger. Also, at this higher temperature a new set of chemical reactions becomes important, leading to a dramatic decrease in the atomic C abundance at around 10^3.5 years. This decrease appears to be due, in large part, to an increase in the abundance of neutral hydrocarbons, which react with and incorporate much of the atomic C in the cloud. As a result of the decreased atomic C abundance, at this temperature, Reactions (1) and (2) are less important for cloud ages above 10^3.5 years.
Abundance Uncertainties
Langevin rate coefficients have estimated uncertainties of a factor of 2, though our previous work indicates that the actual uncertainties in Langevin rate coefficients may be even larger (Kreckel et al. 2010; O'Connor et al. 2015; de Ruette et al. 2016). O'Connor et al. (2015) report uncertainty factors of ≈ ±13% and ≈ ±18% for their experimentally derived thermal rate coefficients for Reactions (1) and (2), respectively. To track the resulting decrease of uncertainty throughout the network, we first ran the model using the Langevin rate coefficient at the upper limit of the estimated factor of 2 uncertainty, and then at the lower limit of its uncertainty. For each species, the abundances from those runs (χ_upper and χ_lower, respectively) were obtained as a function of time. The difference of the logarithms of the two abundances, log(χ_upper/χ_lower), was used as a heuristic for the level of uncertainty in the predicted abundances. Then, we replaced the Langevin value with our new coefficients and ran the model again using the new upper and lower uncertainty limits.
By tracking the uncertainty statistic log(χ_upper/χ_lower) for the old and new models, we were able to track the reduction in the abundance uncertainties throughout the network.
Following Wakelam et al. (2015), a "significantly" uncertain species was taken to mean that |log(χ_upper/χ_lower)| was greater than or equal to 0.3 (i.e., a factor of 2 difference).
We find that for every temperature and density in our model, there was a reduction in the number of significantly uncertain species. In the old model, the significantly uncertain species include C2H4, CH3CHO, CH2CHCN, HCOOCH3, CH3CH2OH, CH3OCH3, CH3C4H, CH3COCH3, CH3C5N, CH3C6H, and C6H6; no species in the new model are significantly uncertain.
Additionally, Figure 5 shows the relationship between the number of significantly uncertain species and n_H for T = 10, 50, 100, and 300 K. The number of significantly uncertain species has been reduced over the full range of temperatures and densities.
Summary
In this work, we have fit the experimentally derived thermal rate coefficients of O'Connor et al. (2015) for Reactions (1) and (2) to the functional form given by Equation (4).
Fig. 1 Comparison of the fits using Equation (3) and the actual data, normalized to their data. The solid curve represents the CH+ formation channel, the dashed curve is for CH2+, and the dotted curve is for the sum of these two channels.
Embolization of Inferior Pancreaticoduodenal Artery Aneurysm with Celiac Stenosis or Occlusion: A Report of Three Cases and a Review of Literature
True pancreaticoduodenal artery aneurysms are relatively rare; approximately 50% of them are associated with stenosis or occlusion of the celiac axis. It is imperative to treat the condition immediately after diagnosis, considering that its rupture has a mortality rate of approximately 50%. The most commonly used method to treat pancreaticoduodenal artery aneurysms is currently transcatheter arterial embolization. Here, we report three cases of embolization of inferior pancreaticoduodenal artery aneurysm with celiac stenosis or occlusion, along with a literature review.
INTRODUCTION
Pancreaticoduodenal artery (PDA) aneurysms are rare, representing only 2% of all visceral artery aneurysms (1). In almost 50% of reported cases, formation of a PDA aneurysm is associated with celiac axis stenosis or occlusion (2,3). Similar to other aneurysms, rupture is the major complication in PDA aneurysms as well. The overall rupture rate is approximately 40%, and the mortality rate of rupture is up to 50% (1,4,5).
Therefore, it is important to treat PDA aneurysm immediately on detection.
With advances in imaging and interventional techniques, transcatheter arterial embolization of the aneurysm has replaced surgery as the treatment of choice (2). In this article, we report three cases of embolization of inferior PDA (IPDA) aneurysm with celiac stenosis or occlusion. We also include a literature review of IPDA aneurysm associated with celiac artery stenosis or occlusion.
CASE 1
Refer to Table 1 for the patient's information and the characteristics of disease and treatment. A 54-year-old female presented with acute-onset epigastric pain and had no significant underlying disease such as hypertension or connective tissue disease. For evaluation, she underwent abdominopelvic CT (AP CT). On AP CT, there was a retroperitoneal hematoma with a small aneurysm in the IPDA and celiac artery stenosis (Fig. 1A). The surgery department suggested endovascular treatment because of the high failure rate of surgery, so she was scheduled to undergo endovascular treatment. Access was achieved via right femoral artery puncture with a 5-Fr vascular sheath (Terumo, Tokyo, Japan). Selective celiac and superior mesenteric artery (SMA) angiography was performed with a cobra catheter (Terumo), which revealed multiple aneurysms in the anterior IPDA, the largest approximately 11.7 mm (Fig. 1B). The celiac axis showed a hook-like appearance, suggesting median arcuate ligament (MAL) compression. The aneurysms, along with the efferent and afferent arteries, were embolized using fourteen metallic detachable coils (IDC; Boston Scientific, Marlborough, MA, USA) (Fig. 1C). The angiogram obtained after embolization showed no definite evidence of contrast filling in the aneurysm and persistent blood flow to the common hepatic artery (Fig. 1D). Further treatment was recommended, if needed. In the immediate follow-up, no hepatic ischemic symptoms or laboratory abnormalities were seen. At the 6-month follow-up, CT showed no recurrent aneurysm and no infarction of visceral organs.
CASE 2
She underwent AP CT. On AP CT, there was a retroperitoneal hematoma with a suspicious small aneurysm in the IPDA. The celiac axis showed a hook-like appearance, which indicated MAL compression (Fig. 2A, B). The surgery department suggested endovascular treatment because of the high failure rate of surgery, so she was scheduled to undergo endovascular treatment. Access was achieved via right femoral artery puncture with a 5-Fr vascular sheath. Selective celiac and SMA angiography was performed with a cobra catheter. It showed a saccular aneurysm at the anterior IPDA with diffuse dilatation of the PDA (Fig. 2C). Severe stenosis at the proximal celiac artery was also seen. Transarterial embolization of the IPDA aneurysm was performed with seven metallic detachable coils (IDC, Boston Scientific) (Fig. 2D).
The angiogram obtained after embolization showed no definite evidence of contrast filling in the aneurysm (Fig. 2E). Further treatment was recommended, if needed. In the immediate follow-up, no hepatic ischemic symptoms or laboratory abnormalities were seen. However, 5 days later, the patient complained of dizziness. Brain MRI showed acute infarction in the cerebellum.
CASE 3
Refer to Table 1 for the patient's information and the characteristics of disease and treatment. A 76-year-old male came to our hospital due to an uncontrolled fever and had no significant underlying disease such as hypertension or connective tissue disease. For evaluation of the fever focus, he underwent AP CT. AP CT revealed a suspicious large aneurysm in the IPDA without
DISCUSSION
In 1973, Sutton and Lawton first suggested that celiac axis occlusion or stenosis was the cause of PDA aneurysm formation. The underlying cause of celiac axis stenosis or occlusion is frequently unknown, although atherosclerosis, fibromuscular dysplasia, aortic dissection, and MAL compression are possible etiologies (2). Among these etiologies, entrapment by the MAL (MAL compression) at the aortic hiatus has been cited as the cause more often than the others (4). All three cases in this report were suspected to have an aneurysm due to celiac axis stenosis or occlusion.
Furthermore, a hook-like appearance of the celiac axis, indicating MAL compression, was seen in two cases.
Clinical presentation of PDA aneurysm is variable and non-specific. Incidental diagnoses of PDA aneurysm are becoming more frequent as CT and ultrasonography are increasingly used. Duplex ultrasound, CT angiography, and magnetic resonance angiography often yield this incidental diagnosis (2). However, conventional catheter angiography remains the gold standard (6).
Unlike other aneurysms of the visceral arteries, there seems to be no correlation between the size of true PDA aneurysms and their propensity to rupture (3,7). There are no predictive factors for rupture (2). Because rupture is unpredictable and carries a mortality rate of up to 50%, it is important to treat this aneurysm when detected, regardless of its size (1).
In the past, surgery was the standard treatment, including ligation, resection, exclusion, and endoaneurysmorrhaphy (2). However, the mortality rate for open repair has been reported to be as high as 50% (1,3,8). As the field of interventional radiology has continued to advance, endovascular treatment is now considered the first-line therapy (2).
Until now, no definite guidelines have been established for treatment of PDA aneurysm.
In some reported cases, patients treated only for celiac stenosis with a stent experienced spontaneous occlusion of the PDA aneurysm (2,4,7). However, given that the aneurysm ruptured in two of our cases and that aneurysm rupture is unpredictable, we concluded that treating the PDA aneurysm itself with coil embolization is the more reasonable approach.
There are two major concerns with endovascular embolization for PDA aneurysm with celiac stenosis or occlusion. The first is aneurysmal recurrence after embolization; without celiac axis treatment, aneurysm recurrence is postulated to occur due to remaining blood flow via the PDA. However, in a recent review, no cases of recurrence were reported in patients who did not undergo treatment of celiac axis stenosis (1). The other concern is that interruptions in the arterial circulation to the liver may contribute to the development of hepatic failure. Generally, occlusion of the proximal hepatic artery from embolization is tolerated well in the presence of intact portal blood flow. In our cases, the hepatic arterial flow was observed to be normal after embolization and the portal venous flow was intact. Therefore, we presumed a low possibility of hepatic ischemia and infarction (9).
Treatment of concomitant celiac stenosis or occlusion is controversial (6,8). Based on the pathophysiology of PDA aneurysms, coil embolization of a PDA aneurysm without celiac axis treatment may leave increased flow via the peri-pancreatic circulation and thus a possibility of aneurysm recurrence (1,3,6). However, no recurrences have been reported in patients with an untreated celiac axis (1,2,3). Also, endovascular treatment of the celiac axis has many limitations. Percutaneous transluminal angioplasty and insertion of a stent for MAL compression do not solve the underlying problem of extrinsic compression of the celiac trunk, and they often require further open procedures due to stent occlusion by thrombus formation or neo-intimal hyperplasia, stent fracture, or hemorrhage induced by systemic antiplatelet therapy (7,10). Therefore, treatment of the celiac axis is considered when the patient's anatomy (on angiography) generates concern about potential hepatic or duodenal ischemia, the patient develops ischemic symptoms after initial definitive treatment, or the patient has continued symptoms similar to those on initial presentation (1).
Author Contributions
Writing-original draft, all authors; and writing-review & editing, all authors.
Experimental assessment of the sensitivity of an estuarine phytoplankton fall bloom to acidification and warming
We investigated the combined effect of ocean acidification and warming on the dynamics of the phytoplankton fall bloom in the Lower St. Lawrence Estuary (LSLE), Canada. Twelve 2600 L mesocosms were set to initially cover a wide range of pHT (pH on the total proton scale) from 8.0 to 7.2, corresponding to a range of pCO2 from 440 to 2900 μatm, and two temperatures (in situ and +5 °C). The 13-day experiment captured the development and decline of a nanophytoplankton bloom dominated by the chain-forming diatom Skeletonema costatum. During the development phase of the bloom, increasing pCO2 influenced neither the magnitude nor the net growth rate of the nanophytoplankton bloom, whereas increasing the temperature by 5 °C stimulated the chlorophyll a (Chl a) growth rate and maximal particulate primary production (PP) by 76% and 63%, respectively. During the declining phase of the bloom, warming accelerated the loss of diatom cells, paralleled by a gradual decrease in the abundance of photosynthetic picoeukaryotes and a bloom of picocyanobacteria. Increasing pCO2 and warming did not influence the abundance of picoeukaryotes, while picocyanobacteria abundance was reduced by the increase in pCO2 when combined with warming in the latter phase of the experiment. Over the full duration of the experiment, the time-integrated net primary production was not significantly affected by the pCO2 treatments or warming. Overall, our results suggest that warming, rather than acidification, is more likely to alter phytoplankton autumnal bloom development in the LSLE in the decades to come. Future studies examining a broader gradient of temperatures should be conducted over a larger seasonal window in order to better constrain the potential effect of warming on the development of blooms in the LSLE and its impact on the fate of primary production.
Introduction
Anthropogenic emissions have increased atmospheric carbon dioxide (CO2) concentrations from their pre-industrial value of 280 ppm to 412 ppm in 2017, and concentrations of 850-1370 ppm are expected by the end of the century under the business-as-usual scenario RCP 8.5 (IPCC, 2013). The global ocean has already absorbed about 28% of these anthropogenic CO2 emissions (Le Quéré et al., 2015), leading to a global pH decrease of 0.11 units (Gattuso et al., 2015), a phenomenon known as ocean acidification (OA). The surface ocean pH is expected to decrease by an additional 0.3-0.4 units under the RCP 8.5 scenario by 2100, and by as much as 0.8 units by 2300 (Caldeira and Wickett, 2005; Doney et al., 2009; Feely et al., 2009). The accumulation of anthropogenic CO2 in the atmosphere also results in an increase in the Earth's heat content that is primarily absorbed by the ocean (Wijffels et al., 2016), leading to an expected rise of sea surface temperatures of 3 to 5 °C by 2100 (IPCC, 2013). Whereas the effect of increasing atmospheric CO2 partial pressures (pCO2) on ocean chemistry is relatively well documented, the potential impacts of OA on marine organisms, and how their response to OA will be modulated by the concurrent warming of the ocean surface waters, are still the subject of much debate (Boyd and Hutchins, 2012; Gattuso et al., 2013).
Over the last decade, there has been increasing interest in the potential effects of OA on marine organisms (Kroeker et al., 2013). The first experiments were primarily conducted on single phytoplankton species (reviewed in Riebesell and Tortell, 2011), but subsequent mesocosm experiments highlighted the impact of OA on the structure and productivity of complex plankton assemblages (Riebesell et al., 2007, 2013). Due to their widely different initial and experimental conditions, these ecosystem-level experiments generated contrasting results (Schulz et al., 2017), but some general patterns nevertheless emerged. For example, diatoms generally benefit from higher pCO 2 through stimulated photosynthesis and growth rates since the increase in CO 2 concentrations compensates for the low affinity of RuBisCO towards CO 2 (Giordano et al., 2005; Gao and Campbell, 2014). Although most phytoplankton species have developed carbon concentration mechanisms (CCMs) to compensate for the low affinity of RuBisCO towards CO 2 , CCM efficiencies differ between taxa, rendering predictions of the impact of a CO 2 rise on the downregulation of CCMs rather difficult (Raven et al., 2014). For example, some studies unexpectedly reported no significant or very modest stimulation of primary production under elevated CO 2 concentrations (Engel et al., 2005; Eberlein et al., 2017). OA can ultimately affect the structure of phytoplankton assemblages. Small cells such as photosynthetic picoeukaryotes can benefit directly from an increase in pCO 2 as CO 2 can passively diffuse through their boundary layer (Beardall et al., 2014), and the smallest organisms within this group could benefit most from the increase (Brussaard et al., 2013). Accordingly, OA experiments have typically favoured smaller phytoplankton cells (Yoshimura et al., 2010; Brussaard et al., 2013; Morán et al., 2015), although the proliferation of larger cells has also been reported (Tortell et al., 2002). Hence, generic predictions of phytoplankton community responses to OA are challenging.
Few recent studies have investigated the combined effects of OA and warming on natural phytoplankton assemblages (Hare et al., 2007; Feng et al., 2009; Maugendre et al., 2015; Paul et al., 2015, 2016). Laboratory experiments have shown that OA and warming could together increase photosynthetic rates, but at the expense of species richness, the reduction of diversity predominantly imputable to warming (Tatters et al., 2013). Results of an experiment conducted with a natural planktonic community from the Mediterranean Sea showed no effect of a combined warming and decrease in pH on primary production, but higher picocyanobacteria abundances were observed in the warmer treatment (Maugendre et al., 2015). Shipboard microcosm incubations conducted in the northern South China Sea displayed higher phytoplankton biomass, daytime primary productivity, and dark community respiration under warmer conditions, but these positive responses were cancelled at low pH (Gao et al., 2017). In contrast, a mesocosm experiment carried out with a fall planktonic community from the western Baltic Sea led to a decrease in phytoplankton biomass under warming, but combined warming and increased pCO 2 led to an increase in biomass (Sommer et al., 2015). Results from experiments where the impacts of pCO 2 and temperature are investigated individually may be misleading as multiple stressors can interact antagonistically or synergistically, sometimes in a nonlinear, unpredictable fashion (Todgham and Stillman, 2013; Boyd et al., 2015; Riebesell and Gattuso, 2015; Gunderson et al., 2016).
The Lower St. Lawrence Estuary (LSLE) is a large (9350 km 2 ) segment of the greater St. Lawrence Estuary (d'Anglejan, 1990). From June to September, the LSLE is characterized by a dynamic succession in the phytoplankton community, mostly driven by changes in light and nutrient availability through variations in the intensity of vertical mixing (Levasseur et al., 1984). The spring and fall blooms are mostly comprised of diatoms, with simultaneous nitrate and silicic acid exhaustion ultimately limiting primary production (Levasseur and Therriault, 1987; Roy et al., 1996). How OA and warming may affect these blooms and primary production has never been investigated in the LSLE. The OA problem is complex in estuarine and coastal waters where freshwater runoff, tidal mixing, and high biological activity contribute to variations in pCO 2 and pH on different timescales (Duarte et al., 2013). The surface mixed-layer pCO 2 in the LSLE varies spatially from 139 to 548 µatm and is strongly modulated by biological productivity (Dinauer and Mucci, 2017). Surface pH T has been shown to vary from 7.85 to 7.93 in a single tidal cycle in the LSLE, nearly as much as the world's oceans have experienced in response to anthropogenic CO 2 uptake over the last century (Caldeira and Wickett, 2005; Mucci et al., 2018).
The main objective of this study was to experimentally assess the sensitivity of the LSLE phytoplankton fall assemblage to a large pCO 2 gradient at two temperatures (in situ and +5 °C). Whether lower trophic-level microorganisms thriving in a highly variable environment will show higher resistance or resilience to future anthropogenic forcings is still a matter of speculation.
Mesocosm setup
The mesocosm system consists of two thermostated full-size ship containers, each holding six 2600 L mesocosms (Aquabiotech Inc., Québec, Canada). The mesocosms are cylindrical (2.67 m × 1.40 m) with a cone-shaped bottom within which mixing is achieved using a propeller fixed near the top (Fig. 1). The mesocosms have opaque walls and all lie on the same plane level so as not to shade each other. Light penetrates the mesocosms only through a sealed Plexiglas circular cover at their uppermost part. The cover allows the transmission of 90 % of photosynthetically active radiation (PAR; 400-700 nm), 85-90 % of UVA (315-400 nm), and 50-85 % of solar UVB (280-315 nm). The mesocosms are equipped with individual, independent temperature probes (AQBT-Temperature sensor, accuracy ±0.2 °C). Temperature in the mesocosms was measured every 15 min during the experiment, and the control system triggered either a resistance heater (Process Technology TTA1.8215) located near the middle of the mesocosm or a pump-activated glycol refrigeration system to maintain the set temperature. The pH in each mesocosm was monitored every 15 min using Hach ® PD1P1 probes (±0.02 pH units) connected to Hach ® SC200 controllers, and positive deviations from the target values activated peristaltic pumps linked to a reservoir of artificial seawater equilibrated with pure CO 2 prior to the onset of the experiment. This system maintained the pH of the seawater in the mesocosms within ±0.02 pH units of the targeted values by lowering the pH during autotrophic growth, but could not increase the pH during bloom senescence when the pCO 2 rose and pH decreased.
Setting
The water was collected at 5 m depth near Rimouski harbour (48°28′39.9″ N, 68°31′03.0″ W) on 27 September 2014 (indicated as day −5 hereafter), and the experiment lasted until 15 October 2014 (day 13). In situ conditions were salinity = 26.52, temperature = 10 °C, nitrate (NO − 3 ) = 12.8 ± 0.6 µmol L −1 , silicic acid (Si(OH) 4 ) = 16 ± 2 µmol L −1 , and soluble reactive phosphate (SRP) = 1.4 ± 0.3 µmol L −1 . On day −5, the water was filtered through a 250 µm mesh while simultaneously filling the 12 mesocosm tanks by gravity with a custom-made "octopus" tubing system. The initial pCO 2 was 623 ± 7 µatm and the in situ temperature of 10 °C was maintained in the 12 mesocosms for the first 24 h (day −4). After that period, the six mesocosms in one container were maintained at 10 °C, while temperature was gradually increased to 15 °C over day −3 in the six mesocosms of the other container. To avoid subjecting the planktonic communities to excessive stress due to sudden changes in temperature and pH while setting up the experiment, the mesocosms were left to acclimatize on day −2 before acidification was carried out over day −1. One mesocosm from each temperature-controlled container was not pH-controlled to assess the community response to the freely fluctuating pH. These two mesocosms were labelled "Drifters" as the initial in situ pH was allowed to fluctuate over time with the development of the phytoplankton bloom. The other mesocosms were set to cover a range of pH T of ∼ 8.0 to ∼ 7.2, corresponding to a pCO 2 gradient of ∼ 440 to ∼ 2900 µatm after acidification was carried out. To attain the initial targeted pH, CO 2 -saturated artificial seawater was added to the mesocosms that needed a pH lowering, while mesocosms M2 (8.0), M4 (7.8), M6 (Drifter), M9 (8.0), M11 (Drifter), and M12 (7.8) were openly mixed to allow the degassing of the supersaturated CO 2 . Once the mesocosms had reached their target pH, the automatic system controlled the sporadic addition of CO 2 -saturated water to stop the pH from rising. Only the Drifters were not controlled throughout the experiment. Incident light was variable during our experiment, with only a few sunny days (Fig. 2).
Seawater analysis
The mesocosms were sampled between 05:00 and 08:00 Eastern Daylight Time (EDT) every day. Seawater for carbonate chemistry, nutrients, and primary production was collected directly from the mesocosms as close to sunrise as possible. Seawater was also collected in 20 L carboys for the determination of chlorophyll a (Chl a), taxonomy, and other variables. The total volume sampled every day was 24 L or less. Samples for salinity were taken from the artificial seawater tanks and in the mesocosms on days −3, 3, and 13. The samples were collected in 250 mL plastic bottles and stored in the dark until analysis was performed using a Guildline Autosal 8400B salinometer during the following months.
Carbonate chemistry
Carbonate chemistry parameters were determined using methods described in Mucci et al. (2018). Briefly, water samples for pH (every day) and total alkalinity (TA, every 3-4 days) measurements were, respectively, transferred from the mesocosms to 125 mL plastic bottles without headspace and 250 mL glass bottles. A few crystals of HgCl 2 were added to the glass bottles before sealing them with a ground-glass stopper and Apiezon ® Type-M high-vacuum grease. The pH was determined within hours of collection, after thermal equilibration at 25.0 ± 0.1 °C, using a Hewlett-Packard UV-Visible diode array spectrophotometer (HP-8453A) and a 5 cm quartz cell with phenol red (PR; Robert-Baldo et al., 1985) and m-cresol purple (mCP; Clayton and Byrne, 1993) as indicators. Measurements were carried out at the wavelength of maximum absorbance of the protonated (HL) and deprotonated (L) indicators. Comparable measurements were carried out using a TRIS buffer prepared at a practical salinity of 25 before and after each set of daily measurements (Millero, 1986).
The pH on the total proton concentration scale (pH T ) of the buffer solutions and samples at 25 °C was calculated according to the equation of Byrne (1987), using the salinity of each sample and the HSO − 4 association constants given by Dickson (1990). The TA was determined on site within 1 day of sampling by open-cell automated potentiometric titration (Titrilab 865, Radiometer ® ) with a pH combination electrode (pHC2001, Red Rod ® ) and a dilute (0.025 N) HCl titrant solution. The titrant was calibrated using Certified Reference Materials (CRM Batch #94, provided by Andrew Dickson, Scripps Institution of Oceanography, La Jolla, USA). The average relative error, based on the average relative standard deviation on replicate standard and sample analyses, was better than 0.15 %. The carbonate chemistry parameters at in situ temperature were then calculated using the computed pH T at 25 °C in combination with the measured TA using CO 2 SYS (Pierrot et al., 2006) and the carbonic acid dissociation constants of Cai and Wang (1998).
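The conversion from measured pH T and TA to pCO 2 rests on standard carbonate equilibrium relations. As a rough illustration of the principle only, the sketch below implements a deliberately simplified speciation that treats TA as carbonate alkalinity (neglecting the borate and water terms that CO 2 SYS accounts for); the constants k0, k1 and k2 are order-of-magnitude placeholders, not the Cai and Wang (1998) values used in the study.

```python
import numpy as np

def pco2_from_ta_ph(ta_molkg, ph_total, k0, k1, k2):
    """Toy carbonate speciation: pCO2 (atm) from total alkalinity and pH_T.

    Simplified sketch: TA is treated as carbonate alkalinity only,
    TA ~ [HCO3-] + 2[CO3--]; k0, k1, k2 are the CO2 solubility and
    carbonic acid dissociation constants at the sample's S and T
    (placeholders here, not the constants used in the study).
    """
    h = 10.0 ** (-ph_total)                 # [H+] on the total scale
    hco3 = ta_molkg / (1.0 + 2.0 * k2 / h)  # from [CO3--] = k2*[HCO3-]/[H+]
    co2_aq = h * hco3 / k1                  # [CO2*] from the first dissociation
    return co2_aq / k0                      # Henry's law: pCO2 = [CO2*]/k0

# Illustrative call with constants of plausible magnitude (assumed values):
# TA = 2057 umol/kg, pH_T = 7.5 -> pCO2 on the order of 10^3 uatm
print(pco2_from_ta_ph(2057e-6, 7.5, k0=3.5e-2, k1=1.2e-6, k2=9e-10) * 1e6)
```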
Nutrients
Samples for NO − 3 , Si(OH) 4 , and SRP analyses were collected directly from the mesocosms every day, filtered through Whatman GF/F filters, and stored at −20 °C in acid-washed polyethylene tubes until analysis by a Bran and Luebbe Autoanalyzer III using the colorimetric methods described by Hansen and Koroleff (2007). The analytical detection limit was 0.03 µmol L −1 for NO − 3 plus nitrite (NO − 2 ), 0.02 µmol L −1 for NO − 2 , 0.1 µmol L −1 for Si(OH) 4 , and 0.05 µmol L −1 for SRP.
Plankton biomass, composition, and enumeration
Duplicate subsamples (100 mL) for Chl a determination were filtered onto Whatman GF/F filters. Chl a concentrations were measured using a 10-AU Turner Designs fluorometer, following a 24 h extraction in 90 % acetone at 4 °C in the dark without grinding (acidification method: Parsons et al., 1984). The analytical detection limit for Chl a was 0.05 µg L −1 .
Pico- (0.2-2 µm) and nanophytoplankton (2-20 µm) cell abundances were determined daily by flow cytometry. Sterile cryogenic polypropylene vials were filled with 4.95 mL of seawater to which 50 µL of glutaraldehyde Grade I (final concentration = 0.1 %, Sigma Aldrich; Marie et al., 2005) were added. Duplicate samples were flash frozen in liquid nitrogen after standing 15 min at room temperature in the dark. These samples were then stored at −80 °C until analysis. After thawing to ambient temperature, samples were analyzed using a FACS Calibur flow cytometer (Becton Dickinson) equipped with a 488 nm argon laser. The abundances of nanophytoplankton and picophytoplankton, which include photosynthetic picoeukaryotes and picocyanobacteria, were determined by their autofluorescence characteristics and size (Marie et al., 2005). The biomass accumulation and nanophytoplankton growth rates were calculated by the following equation: µ = ln(N 2 /N 1 )/(t 2 − t 1 ), where N 1 and N 2 are the biomass or cell concentrations at given times t 1 and t 2 , respectively. Microscopic identification and enumeration of eukaryotic cells larger than 2 µm were conducted on samples taken from each mesocosm on three days: day −4, the day when maximum Chl a was attained in each mesocosm, and day 13. Samples of 250 mL were collected and preserved with acidic Lugol solution (Parsons et al., 1984), and then stored in the dark until analysis. Cell identification was carried out at the lowest possible taxonomic rank using an inverted microscope (Zeiss Axiovert 10) in accordance with Lund et al. (1958). The main taxonomic references used to identify the phytoplankton were Tomas (1997) and Bérard-Therriault et al. (1999).
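The growth-rate equation above reduces to a one-line computation. A minimal sketch, with illustrative values of the magnitude reported below for the Chl a build-up, is:

```python
import numpy as np

def net_growth_rate(n1, n2, t1, t2):
    """Net growth rate mu [d^-1] between two samplings,
    mu = ln(N2/N1) / (t2 - t1), applied to biomass or cell counts."""
    return np.log(n2 / n1) / (t2 - t1)

# Example: Chl a rising from ~5.9 to ~27 ug L^-1 between day 0 and day 3
print(net_growth_rate(5.9, 27.0, 0.0, 3.0))  # ~0.51 d^-1
```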
Primary production
Primary production was determined daily using the 14 C-fixation incubation method (Knap et al., 1996; Ferland et al., 2011). One clear and one dark 250 mL polycarbonate bottle were filled from each mesocosm at dawn and spiked with 250 µL of NaH 14 CO 3 (80 µCi mL −1 ). One hundred µL of 3-(3,4-dichlorophenyl)-1,1-dimethylurea (DCMU; 0.02 mol L −1 ) was added to the dark bottles to prevent active fixation of 14 C by phytoplankton (Legendre et al., 1983). The total amount of radioisotope in each bottle was determined by immediately pipetting 50 µL subsamples into a 20 mL scintillation vial containing 10 mL of scintillation cocktail (Ecolume ™ ) and 50 µL of ethanolamine (Sigma). Bottles were placed in separate incubators, at either 10 °C or 15 °C, under reduced (30 %) natural light for 24 h, which corresponds to the light transmittance at mid-mesocosm depth.
At the end of the incubation periods, 3 mL was transferred to a scintillation vial for determination of the total primary production (P T ), and 3 mL was filtered through a syringe filter (GD/X 0.7 µm) to estimate the daily photosynthetic carbon fixation released into the dissolved organic carbon pool (P D ). The remaining volume was filtered onto a Whatman GF/F filter to measure the particulate primary production (P P ). Vials containing the P T and P D samples were acidified with 500 µL of HCl 6 N, allowed to sit for 3 h under a fume hood, and then neutralized with 500 µL of NaOH 6 N. The vials containing the filters were acidified with 100 µL of 0.05 N HCl and left to fume for 12 h. Fifteen mL of scintillation cocktail was then added to the vials, which were stored pending analysis using a Tri-Carb 4910TR liquid scintillation counter (PerkinElmer). Rates of carbon fixation into particulate and dissolved organic matter were calculated according to Knap et al. (1996) using the dissolved inorganic carbon concentration computed for each mesocosm at the beginning of the daily incubations and multiplied by a factor of 1.05 to correct for the lower uptake of 14 C compared to 12 C.
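The carbon fixation calculation can be summarized as follows; this is a hedged sketch of the Knap et al. (1996)-style computation described above, with variable names chosen here purely for illustration.

```python
def primary_production(dpm_sample, dpm_dark, dpm_total, dic_umol_l, t_d=1.0):
    """Carbon fixation rate [umol C L^-1 d^-1] from 14C incubations (sketch).

    dpm_sample, dpm_dark: counts for the light and DCMU-poisoned dark bottles;
    dpm_total: total label added to the bottle (from the 50 uL subsamples);
    dic_umol_l: dissolved inorganic carbon computed for the mesocosm;
    the factor 1.05 corrects for the lower uptake of 14C relative to 12C.
    """
    return (dpm_sample - dpm_dark) / dpm_total * dic_umol_l * 1.05 / t_d
```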
Statistical analysis
All statistical analyses were performed using R (nlme package). A general least squares (gls) model approach was used to test the linear effects of the two treatments (temperature, pCO 2 ) and of their interaction on the measured variables (Paul et al., 2016; Hussherr et al., 2017). The analysis was conducted independently on two different time periods: Phase I (day 0 to the day of maximum Chl a concentration) was defined individually for each mesocosm, whereas Phase II (from the day after maximum Chl a concentrations) corresponded to the declining phase of the bloom (Table 1). Averages (or time integration in the case of primary production) of the response variables were calculated separately over the two phases and were plotted against pCO 2 . Separate regressions were performed with pCO 2 as the continuous factor for each temperature when a temperature effect or interaction with pCO 2 was detected in the gls model. Otherwise, the model included data from both temperatures and the interaction with pCO 2 . Normality of the residuals was determined using a Shapiro-Wilk test (p > 0.05) and data were transformed (natural logarithm or square root) if required. As explained by Havenhand et al. (2010), the gradient approach, instead of treatment replication, is particularly suitable when few experimental units are available, such as in large-volume mesocosm experiments. In addition, squared Pearson's correlation coefficients (r 2 ) with a significance level of 0.05 were used to evaluate correlations between key variables.
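For readers who want to reproduce the design of the analysis outside R, an equivalent gradient-based test of the temperature and pCO 2 effects and their interaction can be sketched in Python with statsmodels; the data frame below is synthetic and merely stands in for the phase-averaged mesocosm responses.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import shapiro

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "pco2": np.tile([440, 750, 1100, 1850, 2400, 2900], 2),  # uatm (illustrative)
    "temp": np.repeat([10.0, 15.0], 6),                      # the two treatments
})
# Synthetic phase-averaged response standing in for, e.g., mean Chl a
df["chla"] = 20 + 0.002 * df["pco2"] + 1.5 * (df["temp"] - 10) + rng.normal(0, 2, 12)

fit = smf.gls("chla ~ pco2 * temp", data=df).fit()  # linear effects + interaction
print(fit.summary())
print(shapiro(fit.resid))  # Shapiro-Wilk normality check on the residuals
```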
Seawater chemistry
Water salinity was 26.52 ± 0.03 on day −4 in all mesocosms and remained constant throughout the experiment, averaging 26.54 ± 0.02 on day 13. The TA was practically invariant in the mesocosms, averaging 2057 ± 2 µmol kg −1 sw on day −4 and 2058 ± 2 µmol kg −1 sw on day 13. Following the filling of the mesocosms, the pH T in all mesocosms decreased from an average of 7.84 to 7.53. Throughout the rest of the experiment after treatments were applied, the pH remained relatively stable in the pH-controlled treatments, but decreased slightly during Phase II by an average of −0.14 ± 0.07 units relative to the target pH T (Fig. 3a). Given a constant TA, pH variations were accompanied by variations in pCO 2 , from an average of 1340 ± 150 µatm on day −3, ranging from 564 to 2902 µatm at 10 °C and from 363 to 2884 µatm at 15 °C on day 0 following the acidification (Fig. 3b; Table 1). The pH T in the Drifters (M6 and M11) increased from 7.896 and 7.862 on day 0 at 10 and 15 °C, respectively, to 8.307 and 8.554 on day 13, reflecting the balance between CO 2 uptake and metabolic CO 2 production over the duration of the experiment. On the last day, pCO 2 in all mesocosms ranged from 186 to 3695 µatm at 10 °C, and from 90 to 3480 µatm at 15 °C. The temperature of the mesocosms in each container remained within ±0.1 °C of the target temperature throughout the experiment and averaged 10.04 ± 0.02 °C for mesocosms M1 through M6, and 15.0 ± 0.1 °C for mesocosms M7 through M12 (Fig. 3c; Table 1).
Dissolved inorganic nutrient concentrations
Nutrient concentrations averaged 9.1 ± 0.5 µmol L −1 for NO − 3 , 13.4 ± 0.3 µmol L −1 for Si(OH) 4 , and 0.91 ± 0.03 µmol L −1 for SRP on day 0 (Fig. 3d, e, f). Within individual mesocosms, concentrations of nitrate, silicic acid, and soluble reactive phosphate displayed similar temporal patterns following the development of the phytoplankton bloom. Overall, NO − 3 depletion was reached within 5 days in all mesocosms at 10 °C, except for the Drifter, which became nutrient-depleted by day 3. Nutrient depletion was reached slightly earlier within the 15 °C mesocosms, all of them displaying exhaustion within 3 days of the experiment. Accordingly, bloom development and primary production within each mesocosm were eventually limited by the supply of nutrients, irrespective of the temperature or pH treatment. Likewise, Si(OH) 4 fell below the detection limit between days 1 and 5 in all mesocosms except for those whose pH T was set at 7.2 and 7.6 at 10 °C (M5 and M3), in which Si(OH) 4 depletion occurred on day 9. Variations in SRP concentrations followed closely those of NO − 3 in all mesocosms, except again for those set at pH 7.2 and 7.6, in which undetectable values were reached on day 9.
Phytoplankton biomass
Chl a concentrations were below 1 µg L −1 just after the filling of the mesocosms, and averaged 5.9 ± 0.6 µg L −1 on day 0 (Fig. 4a). They then quickly increased to reach maximum concentrations around 27 ± 2 µg L −1 on day 3 ± 2, and decreased progressively until the end of the experiment, reaching 11 ± 1 and 2.4 ± 0.2 µg L −1 at 10 and 15 °C on day 13. During Phase I, results from the gls model show no significant relationships between the mean Chl a concentrations and pCO 2 , temperature, and the interaction of the two factors (Fig. 4b; Table 2). During this phase, the accumulation rate of Chl a was positively affected by temperature, increasing by ∼ 76 %, but was not affected by the pCO 2 gradient at either temperature (Fig. 5a; Table 3). The maximum Chl a concentrations reached during the bloom were not affected by the two treatments (Fig. 5b; Table 3). During Phase II, we observed no significant effect of pCO 2 , temperature, and the interaction of those factors on the mean Chl a concentrations following the depletion of NO − 3 (Fig. 4c; Table 4).

Table 2. Results of the generalized least squares model (gls) tests for the effects of temperature, pCO 2 , and their interaction during Phase I (day 0 to the day of maximum Chl a concentration). Separate analyses with pCO 2 as a continuous factor were performed when temperature had a significant effect. Chl a concentration, nanophytoplankton abundance, picoeukaryote abundance, picocyanobacteria abundance, particulate and dissolved primary production, and Chl a-normalized particulate and dissolved primary production. Significant results are in bold.
Phytoplankton size class
Nanophytoplankton abundance varied from 8 ± 1 × 10 6 cells L −1 on day 0 to an average maximum of 36 ± 10 × 10 6 cells L −1 at the peak of the bloom (Fig. 4d). At both temperatures, nanophytoplankton abundance increased until at least day 2 or 4 and decreased or remained stable thereafter. The correlation between the nanophytoplankton abundance and Chl a (r 2 = 0.75, p < 0.001, df = 166) suggests that this phytoplankton size class was responsible for most of the biomass build-up throughout the experiment. As observed for the mean Chl a concentration, the mean abundance of nanophytoplankton was not significantly affected by the pCO 2 gradient at the two temperatures investigated during Phase I, but showed higher values at 15 °C (26 ± 2 × 10 6 cells L −1 ) than at 10 °C (14 ± 1 × 10 6 cells L −1 ) (Fig. 4e; Table 2). Likewise, the growth rate of nanophytoplankton during Phase I was not influenced by the pCO 2 gradient at the two temperatures, but was significantly higher in the warm treatment (Fig. 5c; Table 3). During Phase II, no relationship was found between the mean nanophytoplankton abundance and the pCO 2 gradient, the temperature, and the pCO 2 × temperature interaction (Fig. 4f; Table 4).
Initial abundance of photosynthetic picoeukaryotes was 10 ± 2 × 10 6 cells L −1 , accounting for more than 80 % of total plankton cells in the 0.2-20 µm size fraction. The abundance of this plankton size fraction decreased slightly through Phase I and their numbers remained relatively stable at 4 ± 3 × 10 6 cells L −1 throughout Phase II (Fig. 4g). We found no relationship between the abundance of picoeukaryotes and the pCO 2 gradient at the two temperatures investigated during both Phases I and II, and no temperature effect was observed either (Fig. 4h, i; Tables 2 and 4). Picocyanobacteria exhibited a different pattern than the nanophytoplankton and picoeukaryotes (Fig. 4j). Their abundance was initially low (1.7 ± 0.3 × 10 6 cells L −1 on day 0), remained relatively stable during Phase I, and increased rapidly during Phase II, accounting for ∼ 50 % of the total picophytoplankton cell counts toward the end of the experiment. During Phase I, the mean picocyanobacteria abundance was not influenced by the pCO 2 gradient or temperature (Fig. 4k; Table 2). During Phase II, the mean picocyanobacteria abundance was not significantly affected by pCO 2 at in situ temperature. However, mean picocyanobacteria abundances were higher at 15 °C, with the pCO 2 gradient responsible for a ∼ 33 % reduction of picocyanobacteria abundance from the Drifter to the most acidified treatment (4.4 ± 0.2 × 10 6 cells L −1 vs. 3.0 ± 0.3 × 10 6 cells L −1 ) (Fig. 4l; Table 4).
Phytoplankton taxonomy
The taxonomic composition of the planktonic assemblage larger than 2 µm was identical in all treatments at the beginning of the experiment, and was mainly composed of the cosmopolitan chain-forming centric diatom Skeletonema costatum (S. costatum) and the cryptophyte Plagioselmis prolonga var. nordica (Fig. 6). At the peak of the blooms (maximum Chl a concentrations), the species composition did not vary between the pCO 2 treatments and between the two temperatures tested. S. costatum was the dominant species in all mesocosms (70-90 % of the total number of eukaryotic cells), except for one mesocosm (M3, pH 7.6 at 10 °C) where a mixed dominance of Chrysochromulina spp. (a prymnesiophyte of 2-5 µm) and S. costatum was observed (Fig. 6a). S. costatum still dominated the cell counts in all mesocosms at the end of the experiment carried out at 10 °C. At 15 °C, the composition of the assemblage had shifted toward a dominance of unidentified flagellates and choanoflagellates (2-20 µm) in all mesocosms, with these two groups accounting for 55-80 % of the total cell counts, while diatoms showed signs of loss of viability as indicated by the presence of empty frustules (Fig. 6b).

Table 3. Results of the generalized least squares model (gls) tests for the effects of temperature, pCO 2 , and their interaction. Separate analyses with pCO 2 as a continuous factor were performed when temperature had a significant effect. Accumulation rate of Chl a (day 0 to maximum Chl a concentration), maximum Chl a concentration, growth rate of nanophytoplankton (day 0 to maximum nanophytoplankton abundance), and maximum nanophytoplankton abundance. Significant results are in bold.
Primary production
P P increased in all mesocosms during Phase I of the experiment, in parallel with the increase in Chl a (Fig. 7a). P P maxima were attained on days 3-4, except for the 15 °C Drifter (M11), where P P peaked on day 1. We found no significant effect of the pCO 2 gradient, temperature, and pCO 2 × temperature interaction on the time-integrated P P during both Phases I and II (Fig. 7b, c; Tables 2 and 4). Similarly, the absence of significant treatment effects remained when normalizing P P per unit of Chl a (Fig. 7g, h, i). Initial Chl a-normalized P P values were 3.3 ± 0.5 µmol C (µg Chl a) −1 d −1 and reached maxima between 3.7 ± 0.3 µmol C (µg Chl a) −1 d −1 and 5.7 ± 0.6 µmol C (µg Chl a) −1 d −1 at 10 and 15 °C, respectively. These values then decreased to 2.2 ± 0.6 µmol C (µg Chl a) −1 d −1 and 0.9 ± 0.2 µmol C (µg Chl a) −1 d −1 on the last day of the experiment. During Phase I, the mean Chl a-normalized P P was not significantly affected by the pCO 2 gradient or warming, as observed for the mean Chl a concentrations and time-integrated P P over that phase (Fig. 7h; Table 2). During Phase II, the log of the mean Chl a-normalized P P was not significantly affected by the pCO 2 gradient, the temperature, or the interaction of these factors (Fig. 7i; Table 4). P D was low at the beginning of the experiment, averaging 1.5 ± 0.4 µmol C L −1 d −1 , increased progressively during Phase I to reach maximum values of 6-48 µmol C L −1 d −1 between days 4 and 8, and decreased thereafter (Fig. 7d). Time-integrated P D was not significantly affected by the pCO 2 gradient, the temperature, and the pCO 2 × temperature interaction during the two phases (Fig. 7e, f; Tables 2 and 4). Temporal variations in the Chl a-normalized P D are shown in Fig. 7j. During Phase I, the mean Chl a-normalized P D was affected by neither the pCO 2 gradient, nor the temperature, nor the interaction between those factors (Fig. 7k; Table 2). During Phase II, the log of the mean Chl a-normalized P D was not affected by pCO 2 at either temperature tested, but significantly increased with warming (Fig. 7l; Table 4).
Figure 8 shows the influence of the treatments on maximum P P and P D as well as on the time-integrated P P and P D over the full length of the experiment. We found no effect of the pCO 2 gradient on the maximum P P values at the two temperatures tested, but warming increased the maximum P P values from 66 ± 13 µmol C L −1 d −1 to 126 ± 8 µmol C L −1 d −1 (Fig. 8a; Table 5). The time-integrated P P over the full duration of the experiment was not affected by the pCO 2 gradient or the increase in temperature (Fig. 8b; Table 5). The maximum P D values were significantly affected by the treatments (Fig. 8c; Table 5). Maximum P D decreased with increasing pCO 2 at in situ temperature, but warming cancelled this effect (antagonistic effect). Nevertheless, the time-integrated P D over the whole experiment did not vary significantly between treatments, although a decreasing tendency with increasing pCO 2 at 10 °C and an increasing tendency with warming can be seen in Fig. 8d (Table 5).
Table 5. Results of the generalized least squares model (gls) tests for the effects of temperature, pCO 2 , and their interaction. Separate analyses with pCO 2 as a continuous factor were performed when temperature had a significant effect. Maximum particulate and dissolved primary production, and time integration over the full duration of the experiment (day 0 to day 13). Natural logarithm transformation is indicated in parentheses when necessary; significant results are in bold.
General characteristics of the bloom
The onset of the experiment was marked by an increase in pCO 2 on the day following the filling of the mesocosms. This phenomenon often takes place at the beginning of such experiments, when pumping tends to break phytoplankton cells and larger debris into smaller ones. We attribute the rapid fluctuations in pCO 2 to the release of organic matter following the filling of the mesocosms, with a stimulating effect on heterotrophic respiration and hence CO 2 production. Then, a phytoplankton bloom, numerically dominated by the centric diatom S. costatum, took place in all mesocosms, regardless of treatments (Fig. 6). S. costatum is a common phytoplankton species in the St. Lawrence Estuary and in coastal waters (Kim et al., 2004; Starr et al., 2004; Annane et al., 2015). The length of the experiment (13 days) allowed us to capture both the development and declining phases of the bloom. The exponential growth phases lasted 1-4 days depending on the treatments, but maximal Chl a concentrations were reached only after 7 days in 2 of the 12 mesocosms (Fig. 4a; Table 1). The suite of measurements and statistical tests conducted did not provide any clues as to the underlying causes of the lower rates of biomass accumulation measured in these two mesocosms. Since statistical analyses conducted with or without these two apparent outliers gave similar results, they were not excluded from the analyses.
In situ nutrient conditions prior to the water collection were favourable for bloom development. Based on previous studies, in situ phytoplankton growth was probably limited by light due to water turbidity and vertical mixing at the time of water collection (Levasseur et al., 1984). Grazing may also have played a role in keeping the in situ biomass of flagellates low prior to our sampling. However, a natural diatom fall bloom was observed in the days following the water collection in the adjacent region (Gustavo Ferreyra, personal communication, 2014). The increased stability within the mesocosms, combined with the reduction of the grazing pressure (filtration on 250 µm), likely contributed to the fast accumulation of phytoplankton biomass. During the development phase of the bloom, the concentration of all three monitored nutrients decreased, with NO − 3 and Si(OH) 4 reaching undetectable values. This nutrient co-depletion is consistent with results from previous studies suggesting a co-limitation of diatom blooms by these two nutrients in the St. Lawrence Estuary (Levasseur et al., 1987, 1990). Variations in P P roughly followed changes in Chl a, and, as expected, the maximum Chl a-normalized P P (5 ± 2 µmol C (µg Chl a) −1 d −1 ) was reached during the exponential growth phase in all mesocosms. Decreases in total phytoplankton abundances and P P followed the bloom peaks and the timing of the NO − 3 and Si(OH) 4 depletions. A clear succession in phytoplankton size classes characterized the experiment. Nanophytoplankton cells were initially present in low abundance and became more numerous as the S. costatum diatom bloom developed. The correlation (r 2 = 0.83, p < 0.001, df = 34) between the abundance of nanophytoplankton and S. costatum enumeration suggests that this cell size class can be used as a proxy of S. costatum counts in all mesocosms throughout the experiment. Nanophytoplankton cells accounted for 79 ± 7 % of total counts of cells < 20 µm on the day of the maximum Chl a concentration. Accordingly, nanophytoplankton exhibited the same temporal trend as Chl a concentrations. During Phase II, nanophytoplankton abundances remained roughly stable at in situ temperature, but decreased at 15 °C.

Figure 7. Temporal variations and time-integrated or averaged ± SE during Phase I (day 0 to day of maximum Chl a concentration) and Phase II (day after maximum Chl a concentration to day 13) for (a-c) particulate primary production, (d-f) dissolved primary production, (g-i) Chl a-normalized particulate primary production, and (j-l) Chl a-normalized dissolved primary production. For symbol attribution to treatments, see legend.
Picoeukaryotes were originally abundant and decreased throughout the experiment, whereas picocyanobacteria abundances increased during Phase II. This is a typical phytoplankton succession pattern for temperate systems, where an initial diatom bloom growing essentially on allochthonous nitrate gives way to smaller species growing on regenerated forms of nitrogen (Taylor et al., 1993).
Phase I (diatom bloom development)
Our results show no significant effect of increasing pCO 2 /decreasing pH on the mean abundance and net accumulation rate of the diatom-dominated nanophytoplankton assemblage during the development of the bloom (Figs. 4e and 5c). These results suggest that S. costatum, the species accounting for most of the biomass accumulation during the bloom, neither benefited from the higher pCO 2 nor was negatively impacted by the lowering of pH. Assuming that S. costatum was also responsible for most of the carbon fixation during the bloom development phase, the absence of an effect on P P and Chl a-normalized P P following increases in pCO 2 brings additional support to our conclusion. S. costatum operates a highly efficient CCM, minimizing the potential benefits of thriving in high CO 2 waters (Trimborn et al., 2009). This may explain why the strain present in the LSLE did not benefit from the higher pCO 2 conditions. Likewise, a mesocosm experiment conducted in the coastal North Sea showed no significant effect of increasing pCO 2 on carbon fixation during the development of the spring diatom bloom (Eberlein et al., 2017).
In addition to the aforementioned insensitivity to increasing pCO 2 , our results point towards a strong resistance of S. costatum to severe pH decline. During our study, surprisingly constant rates of Chl a accumulation and nanophytoplankton growth (Fig. 5a, c), as well as maximum P P (Fig. 8a), were measured during the development phase of the bloom over a range of pH T extending from 8.6 to 7.2 (Fig. 3a). In a recent effort to estimate the causes and amplitudes of short-term variations in pH T in the LSLE, Mucci et al. (2018) showed that pH T in surface waters was constrained within a range of 7.85 to 7.93 during a 50 h survey over two tidal cycles at the head of the Laurentian Channel. It is notable that even the upwelling of water from 100 m depth or of low-oxygen LSLE bottom water would not decrease pH T beyond ∼ 7.75 and ∼ 7.62, respectively (Mucci et al., 2018, and references therein). Our results show that the phytoplankton assemblage responsible for the fall bloom may tolerate even greater pH T excursions. In the LSLE, such conditions may arise when the contribution of the low pH T (7.12) freshwaters of the Saguenay River to the LSLE surface waters is amplified during the spring freshet. However, considering that comparable studies conducted in different environments have reported negative effects of decreasing pH on diatom biomass accumulation (Hare et al., 2007; Hopkins et al., 2010; Schulz et al., 2013), it cannot be concluded that all diatom species thriving in the LSLE are insensitive to acidification.
In contrast to the pCO 2 treatment, warming affected the development of the bloom in several ways. Increasing temperature by 5 °C significantly increased the accumulation rate of Chl a and the nanophytoplankton growth rate during Phase I of the bloom. The positive effects of warming on maximum P P during the development phase of the bloom most likely reflect the sensitivity of photosynthesis to temperature (Sommer and Lengfellner, 2008; Kim et al., 2013). It could also be related to optimal growth temperatures, which are often higher than in situ temperatures in marine phytoplankton (Thomas et al., 2012; Boyd et al., 2013). In support of this hypothesis, previous studies have reported optimal growth temperatures of 20-25 °C for S. costatum, which is 5-10 °C higher than the warmer treatment investigated in our study (Suzuki and Takahashi, 1995; Montagnes and Franklin, 2001). Extrapolating results from a mesocosm experiment to the field is not straightforward, as little is known of the projected warming of the upper waters of the LSLE in the next decades. In the Gulf of St. Lawrence, positive temperature anomalies in surface waters have varied from 0.25 to 0.75 °C per decade between 1985 and 2013 (Larouche and Galbraith, 2016). In the LSLE, warming of surface waters will likely result from a complex interplay between heat transfer at the air-water interface and variations in vertical mixing and upwelling of the cold intermediate layer at the head of the estuary (Galbraith et al., 2014). Considering current uncertainties regarding future warming of the LSLE, studies should be conducted over a wider range of temperatures in order to better constrain the potential effect of warming on the development of the blooms in the LSLE.
Picoeukaryotes showed a more or less gradual decrease in abundance during Phase I, and our results show that this decline was not influenced by the increases in pCO 2 (Fig. 4g, h; Table 2). Picoeukaryotes are expected to benefit from high pCO 2 conditions even more so than diatoms, as CO 2 can passively diffuse through their relatively thin boundary layer, precluding the necessity of a costly uptake mechanism such as a CCM (Schulz et al., 2013). This hypothesis has been supported by several studies showing a stimulating effect of pCO 2 on picoeukaryote growth (Bach et al., 2016; Hama et al., 2016; Schulz et al., 2017, and references therein). On the other hand, in nature, the abundance of picoeukaryotes generally results from a delicate balance between cell division rates and cell losses through microzooplankton grazing and viral attacks. The few experiments, including the current study, reporting the absence or a modest effect of increasing pCO 2 on the abundance of eukaryotic picoplankton attribute their observations to an increase in nano- and microzooplankton grazing (Rose et al., 2009; Neale et al., 2014). During our experiment, the biomass of microzooplankton increased with increasing pCO 2 by ca. 200-300 % at the two temperatures tested (Gustavo Ferreyra and Mohamed Lemlih, unpublished data). Thus, it is possible that a positive effect of increasing pCO 2 and warming on picoeukaryote abundances might have been masked by higher picoeukaryote losses due to increased microzooplankton grazing.
Phase II (declining phase of the bloom)
The gradual decrease in nanophytoplankton abundances coincided with an increase in the abundance of picocyanobacteria (Fig. 4j). At in situ temperature, the picocyanobacteria abundance during Phase II was unaffected by the increase in pCO 2 over the full range investigated (Fig. 4l; Table 4). The lack of a positive response of picocyanobacteria to elevated pCO 2 was somewhat surprising considering that they have less efficient CCMs than diatoms (Schulz et al., 2013). Accordingly, several studies have reported a stimulation of the net growth rate of picocyanobacteria under elevated pCO 2 in different environments (coastal Japan, Mediterranean Sea, and Raunefjorden in Norway) and under different nutrient regimes, i.e. bloom and post-bloom conditions (Hama et al., 2016; Sala et al., 2016; Schulz et al., 2017). However, studies have also shown no direct effect of elevated pCO 2 on the net growth of picocyanobacteria during studies conducted in the subtropical North Atlantic and the South Pacific (Law et al., 2012; Lomas et al., 2012). In our study, picocyanobacteria abundance was even reduced when high CO 2 was combined with warming. Similar negative effects of CO 2 on picocyanobacteria (particularly Synechococcus) have also been observed under later stages of bloom development, i.e. nutrient depletion, caused by either competition or grazing (Paulino et al., 2008; Hopkins et al., 2010). A potential increase in grazing pressure, following the rise in heterotrophic nanoflagellate abundance (e.g. choanoflagellates; Fig. 6b) measured under high pCO 2 and warmer conditions, could explain the ostensible negative effect of increasing pCO 2 on picocyanobacteria abundance in our experiment. Despite the absence of grazing measurements during our study, our results support the hypothesis that the potential for increased picocyanobacteria population growth under elevated pCO 2 and temperature is partially dependent on different grazing pressures (Fu et al., 2007).
Neither warming nor acidification affected the net particulate carbon fixation during the declining phase of the bloom. In our study, the time-integrated P P and Chl a-normalized P P were not significantly affected by the increase in pCO 2 during Phase II at the two temperatures tested (Fig. 7; Table 4). This result is surprising since nitrogen-limited cells have been shown to be more sensitive to acidification, resulting in a reduction in carbon fixation rates due to higher respiration (Wu et al., 2010; Gao and Campbell, 2014; Raven et al., 2014). Although our measurements do not allow us to discriminate between the contributions of the different phytoplankton size classes to carbon fixation, we can speculate that diatoms, which were still abundant during Phase II, contributed a significant fraction of the primary production. If so, these results suggest that S. costatum remained insensitive to OA even under nutrient stress. However, in contrast to Phase I, increasing the temperature by 5 °C during Phase II significantly increased the Chl a-normalized P D . The warming-induced increase in fixed carbon being released in the dissolved fraction likely stems from increased exudation by phytoplankton, or sloppy feeding/excretion following ingestion by grazers (Kim et al., 2011). The increase in fixed carbon released as dissolved organic carbon (DOC) measured during Phase II may also result from greater respiration by the nitrogen-limited diatoms during periods of darkness of the incubations, as dark phytoplankton respiration rates generally increase with temperature (Butrón et al., 2009; Robarts and Zohary, 1987). Moreover, the enclosures do not permit the sinking and export of particulate organic carbon (POC), allowing a further transformation into DOC by heterotrophic bacteria, a process that could be exacerbated under warming (Wohlers et al., 2009).
Effect of the treatments on primary production over the full experiment
As mentioned above, increasing pCO 2 had no effect on time-integrated P P during the two phases of the bloom, and warming only affected the maximum P P . As a result, primary production rates integrated over the whole duration of the experiment were not significantly different between the two temperatures tested. Although not statistically significant, the time-integrated P D over the full experiment displays a slight decrease with increasing pCO 2 at 10 °C and overall higher values in the warmer treatment (Fig. 8d; Table 5). Previous studies have reported increases in DOC exudation (Engel et al., 2013), but also decreasing DOC concentrations at elevated pCO 2 under nitrate limitation (Yoshimura et al., 2014). The increase in DOC exudation is attributed to a stimulation of photosynthesis resulting from its sensitivity to higher pCO 2 (Engel et al., 2013), but the causes of a decrease in DOC concentrations at high pCO 2 are less clear and potentially attributable to an increase in transparent exopolymer particle (TEP) production (Yoshimura et al., 2014). Elevated TEP production under high pCO 2 conditions has been measured at both the peak of a bloom in a mesocosm study (Engel et al., 2014) and in post-bloom nutrient-depleted conditions (MacGilchrist et al., 2014). However, during our study, TEP production decreased under high pCO 2 (Gaaloul, 2017). Thus, the apparent decrease in P D cannot be attributed to a greater conversion of exuded dissolved carbohydrate into TEP. The apparent rise in P D under warming is consistent with previous studies reporting similar increases in phytoplankton dissolved carbon release with temperature (Morán et al., 2006; Engel et al., 2011). Although these apparent changes in P D with increasing pCO 2 and warming require further investigation, they suggest that a larger proportion (∼ 15 % of P T at 15 °C compared to 10 % at 10 °C) of the newly fixed carbon could be exuded and become available for heterotrophic organisms under warmer conditions.
Implications and limitations
During our study, we chose to keep the pH constant during the whole experiment instead of allowing it to vary with changes in photosynthesis and respiration during the bloom phases. This approach differs from previous mesocosm experiments, where generally no subsequent CO 2 manipulations are conducted after the initial targets are attained (Schulz et al., 2017, and references therein). Keeping the pH and pCO 2 conditions stable during our study allowed us to precisely quantify the effect of the changing pH/pCO 2 on the processes taking place during the different phases of the bloom. Such control was not exercised in two of our mesocosms (i.e. the Drifters). In these two mesocosms, the pH T increased from 7.9 to 8.3 at 10 °C, and from 7.9 to 8.7 at 15 °C. Since the buffer capacity of acidified waters diminishes with increasing CO 2 , the drift in pCO 2 and pH due to biological activity would have been even greater in the more acidified treatments (Delille et al., 2005; Riebesell et al., 2007). Hence, allowing the pH to drift in all mesocosms would likely have ended in an overlapping of the treatments, where acidification effects would have been harder to detect. Thus, our experiment could be considered an intermediate one between strictly controlled small-scale laboratory experiments and large-scale pelagic mesocosm experiments in which only the initial conditions are set. By limiting the pCO 2 decrease under high CO 2 drawdown due to photosynthesis during the development phase of the bloom, we minimised confounding effects from pCO 2 treatments potentially overlapping during periods of high biological activity in the mesocosms. Hence, the experimental conditions could be considered extreme examples of acidification conditions, owing to the extent of the pCO 2 values studied. However, the absence of OA effects on most biological parameters measured during our study, even under these extreme conditions, strengthens the argument that the phytoplankton community in the LSLE is resistant to OA.
Conclusion
Our results reveal a remarkable resistance of the different phytoplankton size classes to the large range of pCO 2 /pH investigated during our study. It is noteworthy that the plankton assemblage was subjected to decreases in pH far exceeding those that they are regularly exposed to in the LSLE. The resistance of S. costatum to the pCO 2 treatments suggests that the acidification of surface waters of the LSLE will not affect the development rate and the amplitude of fall blooms dominated by this species. Photosynthetic picoeukaryotes and picocyanobacteria thriving alongside the blooming diatoms were also insensitive to acidification. In contrast to the pCO 2 treatments, warming the water by 5 °C had multiple impacts on the development and decline of the bloom. The 5 °C warming hastened the development of the diatom bloom (albeit with no increase in total cell number) and increased the abundance of picocyanobacteria during Phase II despite a reduction under high pCO 2 . These temperature-induced variations in the phytoplankton assemblage were accompanied by an increase in maximal P P and suggest a potential increase in P D under warming, although no significant changes in time-integrated P P and P D were observed over the phases or the full temporal scale of the experiment. Overall, our results indicate that warming could have more important impacts than acidification on phytoplankton bloom development in the LSLE in the next decades. Future studies should be conducted and specifically designed to better constrain the potential effects of warming on phytoplankton succession and primary production in the LSLE.
Figure 1. Schematic drawing including mesocosm dimensions and placement within the containers (Aquabiotech Inc., Québec, Canada). The whole setup includes a second container holding six more mesocosms not depicted here.
Figure 2. Changes in incident photosynthetic active radiation (PAR) at the top of the mesocosm level during the experiment as measured by a Satlantic HyperOCR hyperspectral radiometer and integrated into the 400-700 nm range. Local sunrise and sunset times (EDT) are indicated with the corresponding days of the experiment.
Figure 3. Temporal variations over the course of the experiment for (a) pH T , (b) pCO 2 , (c) temperature, (d) nitrate, (e) silicic acid, and (f) soluble reactive phosphate. For symbol attribution to treatments, see legend.
Figure 4. Temporal variations and averages ± SE during Phase I (day 0 to day of maximum Chl a concentration) and Phase II (day after maximum Chl a concentration to day 13) for (a-c) chlorophyll a, (d-f) nanophytoplankton, (g-i) picoeukaryotes, and (j-l) picocyanobacteria. For symbol attribution to treatments, see legend.
Figure 5. (a) Accumulation rate of Chl a (day 0 to maximum Chl a concentration), (b) maximum Chl a concentrations, (c) growth rate of nanophytoplankton (day 0 to maximum nanophytoplankton abundance), and (d) maximum nanophytoplankton abundance during the experiment. For symbol attribution to treatments, see legends.
Figure 6. Relative abundance of 10 groups of protists at the beginning of the experiment (day −4), on the day of maximum Chl a concentrations in each mesocosm, and at the end of the experiment (day 13) for (a) 10 °C and (b) 15 °C mesocosms. The group "others" includes dinoflagellates, Chlorophyceae, Dictyochophyceae, Euglenophyceae, heterotrophic groups, and unidentified cells. Each bar plot represents a mesocosm at a given time. The bar plot on day −4 represents the initial community assemblage before temperature manipulation and acidification, and is therefore the same for each temperature treatment. For symbol attribution to treatments, see legend.
Figure 8. (a) Maximum particulate primary production, (b) time-integrated particulate primary production, (c) maximum dissolved primary production, and (d) time-integrated dissolved primary production over the full course of the experiment (day 0 to day 13). For symbol attribution to treatments, see legend.
Table 1. Day of maximum Chl a concentration, the associated average pH T (total hydrogen ion scale), and average pCO 2 over each individually defined phase. Phase I is defined from day 0 until the day of maximum Chl a for each mesocosm, while Phase II is defined from the day after maximum Chl a until day 13. Average temperature over day 0 to day 13 is also presented for each mesocosm. Average values are presented with ± standard errors.
Table 4. Results of the generalized least squares model (gls) tests for the effects of temperature, pCO 2 , and their interaction during Phase II (day after maximum Chl a to day 13). Separate analyses with pCO 2 as a continuous factor were performed when temperature had a significant effect. Chl a concentration, nanophytoplankton abundance, picoeukaryote abundance, picocyanobacteria abundance, particulate and dissolved primary production, and Chl a-normalized particulate and dissolved primary production. Significant results are in bold.
|
2018-12-05T06:56:23.524Z
|
2018-08-17T00:00:00.000
|
{
"year": 2018,
"sha1": "547f8573bd29c02627a153b80fd163d162a73bbd",
"oa_license": "CCBY",
"oa_url": "https://bg.copernicus.org/articles/15/4883/2018/bg-15-4883-2018.pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "547f8573bd29c02627a153b80fd163d162a73bbd",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Environmental Science"
]
}
|
55802575
|
pes2o/s2orc
|
v3-fos-license
|
A Spectral Element Reduced Basis Method in Parametric CFD
We consider the Navier-Stokes equations in a channel with varying Reynolds numbers. The model is discretized with high-order spectral element ansatz functions, resulting in 14 259 degrees of freedom. The steady-state snapshot solutions define a reduced order space, which allows the steady-state solutions for varying Reynolds number to be evaluated accurately with a reduced order model within a fixed-point iteration. In particular, we compare different aspects of implementing the reduced order model with respect to the use of a spectral element discretization. It is shown how a multilevel static condensation [1] in the pressure and velocity boundary degrees of freedom can be combined with a reduced order modelling approach to enhance computational times in parametric many-query scenarios.
Introduction
The use of spectral element methods in computational fluid dynamics [1] allows highly accurate computations by using high-order spectral element ansatz functions. Typically, an exponential error decay can be observed under p-refinement. See [2], [3], [4], [5], [6] for an introduction and overview of the applications.
This work is concerned with the reduced basis method (RBM, [7]) applied to a channel flow governed by the Navier-Stokes equations and discretized with the spectral element method into 14 259 degrees of freedom. In particular, we are interested in computing the steady-state solutions for varying Reynolds number with a reduced order model, while guaranteeing competitive computational performance.
Section 2 introduces the governing equations and used fixed-point iteration algorithm. Section 3 introduces the spectral element discretization, while section 4 describes the model reduction approach. Numerical results are provided in section 5, while section 6 summarizes and concludes the work by also providing new perspectives.
Problem Formulation
Let Ω ⊂ R 2 be the computational domain. Incompressible, viscous fluid motion in Ω over a time interval (0, T) is governed by the incompressible Navier-Stokes equations with velocity u, pressure p, kinematic viscosity ν and a body forcing f, (1)-(2):

∂u/∂t + (u · ∇)u − ν∆u + ∇p = f in Ω × (0, T),    (1)

∇ · u = 0 in Ω × (0, T).    (2)

Boundary and initial conditions are prescribed as

u = d on Γ_D × (0, T),    (3)

ν(∇u)n − pn = g on Γ_N × (0, T),    (4)

u(·, 0) = u_0 in Ω,    (5)

with d, g and u_0 given and ∂Ω = Γ_D ∪ Γ_N. The Reynolds number Re depends on the viscosity ν through the characteristic velocity U and characteristic length L via Re = UL/ν [12]. In particular, we are interested in computing the steady states for varying viscosity ν, such that ∂u/∂t = 0. A solution u(ν_1) for a parameter value ν_1 can be used as an initial guess for a fixed-point iteration to obtain the steady-state solution u(ν_2) at a parameter value ν_2, provided that the solution u(ν) depends continuously on ν in the interval [ν_1, ν_2].
Oseen-Iteration
The Oseen-iteration is a secant modulus fixed-point iteration, which in general exhibits a linear rate of convergence [8]. Given a current iterate (or initial condition) u^k, the linear system

$$-\nu \Delta u + (u^k \cdot \nabla) u + \nabla p = f \quad \text{in } \Omega, \qquad (6)$$

$$\nabla \cdot u = 0 \quad \text{in } \Omega,$$

is solved for the next iterate u^{k+1} = u. A typical stopping criterion is that the relative change between iterates in the H¹ norm falls below a predefined tolerance. An initial solution u_0(ν_0) is computed by time-advancement of (1)-(2) from zero initial conditions at a parameter value ν_0, and the whole parameter domain is then explored by using a continuation method with the Oseen-iteration.
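A schematic of the iteration and continuation logic might look as follows; `solve_oseen` and `h1_norm` are hypothetical placeholders for the full-order linearized solve and the discrete H¹ norm, not Nektar++ API calls:

```python
# A minimal sketch of the Oseen fixed-point iteration with parameter
# continuation; the solver and norm callables are hypothetical stand-ins.
def oseen_fixed_point(u0, nu, solve_oseen, h1_norm, tol=1e-8, max_iter=100):
    """Iterate u^{k+1} = solve_oseen(u^k, nu) until the relative H^1
    change between iterates falls below `tol`."""
    u_k = u0
    for _ in range(max_iter):
        u_next = solve_oseen(u_k, nu)   # linear solve with advection (u_k . grad) u
        if h1_norm(u_next - u_k) / h1_norm(u_next) < tol:
            return u_next
        u_k = u_next
    return u_k

def continuation(u_init, nus, solve_oseen, h1_norm):
    """Sweep the parameter domain: the converged steady state at nu_i
    seeds the fixed-point iteration at nu_{i+1}."""
    states, u = {}, u_init
    for nu in nus:
        u = oseen_fixed_point(u, nu, solve_oseen, h1_norm)
        states[nu] = u
    return states
```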
Spectral Element Discretization
The Navier-Stokes problem is discretized with the spectral element method. The spectral/hp element software framework used is Nektar++ in version 4.3.5 [9]. The discretized system to solve in each step of the Oseen-iteration is given by (10) as

$$\begin{pmatrix} A & D_{bnd}^T & B \\ D_{bnd} & 0 & D_{int} \\ \tilde{B} & D_{int}^T & C \end{pmatrix} \begin{pmatrix} v_{bnd} \\ p \\ v_{int} \end{pmatrix} = \begin{pmatrix} f_{bnd} \\ 0 \\ f_{int} \end{pmatrix}, \qquad (10)$$

where v_bnd and v_int denote velocity degrees of freedom on the boundary and in the interior, respectively. Correspondingly, f_bnd and f_int denote forcing terms on the boundary and interior, respectively. The matrix A assembles the boundary-boundary coupling, B the boundary-interior coupling, B̃ the interior-boundary coupling and C assembles the interior-interior coupling of elemental velocity ansatz functions. In the case of a Stokes system, it holds that B = B̃^T, but this is not the case for the Oseen equation, since the linearization term (u^k · ∇)u is present in (6). The matrices D_bnd and D_int assemble the pressure-velocity boundary and pressure-velocity interior contributions, respectively.
The linear system (10) is assembled in local degrees of freedom, resulting in block matrices A, B, B̃, C, D_bnd and D_int, each block corresponding to a spectral element. In particular, this means that the system is singular in this form. To solve the system, the local degrees of freedom need to be gathered into the global degrees of freedom [1]. Since C contains the interior-interior contributions, it is invertible and the system can be statically condensed into

$$\begin{pmatrix} A - B C^{-1} \tilde{B} & D_{bnd}^T - B C^{-1} D_{int}^T & 0 \\ D_{bnd} - D_{int} C^{-1} \tilde{B} & -D_{int} C^{-1} D_{int}^T & 0 \\ \tilde{B} & D_{int}^T & C \end{pmatrix} \begin{pmatrix} v_{bnd} \\ p \\ v_{int} \end{pmatrix} = \begin{pmatrix} f_{bnd} - B C^{-1} f_{int} \\ -D_{int} C^{-1} f_{int} \\ f_{int} \end{pmatrix}. \qquad (11)$$

Taking the top left 2 × 2 block of (11) and reordering the degrees of freedom such that the mean pressure mode of each element is inserted into the corresponding block of velocity boundary degrees of freedom results in a system in which the remaining interior-pressure block D̂ is invertible, so that a second level of static condensation can be employed. This yields the final condensed system (14) for the vector b, which contains the velocity boundary degrees of freedom and the mean pressure modes; once b is computed, the remaining solution components are obtained by reverting the steps of the static condensations [1]. The main computational effort lies in solving the final system (14). Additionally, the matrices C and D̂ need to be inverted, which due to the elemental block structure requires inverting, for each submatrix, a matrix of the size of the degrees of freedom per element.
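A minimal dense numpy sketch of the first condensation level follows; the blocks are random stand-ins rather than SEM matrices, and in practice C is inverted block-elementwise:

```python
# One level of static condensation: eliminate the interior velocity dofs
# through the invertible interior-interior block C (Schur complement).
import numpy as np

rng = np.random.default_rng(0)
nb, ni, npr = 6, 8, 3                      # boundary / interior / pressure dofs
A  = rng.standard_normal((nb, nb))
B  = rng.standard_normal((nb, ni))         # boundary-interior coupling
Bt = rng.standard_normal((ni, nb))         # B-tilde: interior-boundary coupling
C  = rng.standard_normal((ni, ni)) + 10 * np.eye(ni)   # invertible stand-in
Db = rng.standard_normal((npr, nb))        # D_bnd
Di = rng.standard_normal((npr, ni))        # D_int
f_bnd = rng.standard_normal(nb)
f_int = rng.standard_normal(ni)

Cinv = np.linalg.inv(C)                    # elementwise block inverses in practice
S = np.block([[A - B @ Cinv @ Bt,        Db.T - B @ Cinv @ Di.T],
              [Db - Di @ Cinv @ Bt,      -Di @ Cinv @ Di.T]])
rhs = np.concatenate([f_bnd - B @ Cinv @ f_int, -Di @ Cinv @ f_int])

x = np.linalg.solve(S, rhs)                # condensed solve for (v_bnd, p)
v_bnd, p = x[:nb], x[nb:]
v_int = Cinv @ (f_int - Bt @ v_bnd - Di.T @ p)   # back-substitution
```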
Reduced Order Modelling
The reduced order model (ROM) aims to represent the full order solution accurately in the parameter domain of interest. Two ingredients are essential to RB modelling: a projection onto a low order space of snapshot solutions and an offline-online decomposition for computational efficiency [16]. A set of snapshots is generated by solving (14) over a coarse discretization of the parameter domain and used to define a projection space U of size N. The proper orthogonal decomposition (POD) computes a singular value decomposition of the snapshot solutions and retains the most dominant modes capturing 99.9% of the energy [7], which defines the projection matrix U ∈ R^{N_δ × N} used to project system (14) and obtain the reduced order solution b_N.
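A minimal numpy sketch of this POD truncation follows; the snapshot matrix is a random stand-in for the (N_δ × n_snapshots) collection of steady-state solutions:

```python
# POD basis construction: keep the leading left singular vectors until
# 99.9% of the squared singular-value energy is captured.
import numpy as np

def pod_basis(snapshots, energy=0.999):
    U_full, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    cum = np.cumsum(s**2) / np.sum(s**2)        # cumulative energy fraction
    N = int(np.searchsorted(cum, energy)) + 1   # smallest N reaching the target
    return U_full[:, :N]

snapshots = np.random.default_rng(1).standard_normal((500, 20))
U = pod_basis(snapshots)
print(U.shape)  # (N_delta, N) projection matrix
```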
Offline-Online Decomposition
The offline-online decomposition [7] allows fast input-output evaluations independent of the original model size N_δ. It is a crucial part of an efficient reduced order model, but since the static condensation includes the inversion of the parameter-dependent matrix C, an intermediate projection is introduced. The reduced order model considers the top left 2×2 block of (11), i.e., one level of static condensation [1]. During the offline phase, full-order solutions have been computed over the parameter domain of interest, which now serve as a projection space to define the reduced order setting. This projection space incorporates the transformation of local velocity boundary degrees of freedom to global velocity boundary degrees of freedom and the reordering of mean pressure degrees of freedom. The projection space then takes the form U = PMV with a permutation matrix P to reorder the degrees of freedom and a transformation M from local to global degrees of freedom. The collected offline data V contain the gathered velocity and mean pressure modes as well as interior pressure modes.
The projected system (15) is obtained by a Galerkin projection of the condensed system onto U, and upon its solution, the interior velocity dofs can be computed by resubstituting into (11) at the reduced order level. To achieve fast reduced order solves, the offline-online decomposition expands (15) in the parameter of interest and computes the parameter-independent projections offline, to be stored as small matrices of order N × N. Since in an Oseen-iteration each matrix depends on the previous iterate, the submatrices corresponding to each basis function are assembled offline and then combined online using the reduced basis coordinate representation of the current iterate. This is analogous to the reduced order assembly of the nonlinear term in the Navier-Stokes case [16].
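The following sketch illustrates this assembly strategy; `assemble_advection` and `A_visc` are hypothetical stand-ins for the full-order advection assembly and the viscous matrix, not Nektar++ calls:

```python
# Offline-online split for the iterate-dependent Oseen matrix. Offline:
# for each basis function U[:, i], assemble the full-order advection
# matrix once and project it. Online: combine the small projected blocks
# with the reduced coordinates c of the current iterate u_N = U @ c.
import numpy as np

def offline(U, assemble_advection, A_visc):
    """Precompute parameter- and iterate-independent N x N matrices."""
    A_visc_N = U.T @ A_visc @ U                        # multiplied by nu online
    A_adv_N = [U.T @ assemble_advection(U[:, i]) @ U   # one block per basis fn
               for i in range(U.shape[1])]
    return A_visc_N, A_adv_N

def online_matrix(nu, c, A_visc_N, A_adv_N):
    """Assemble the reduced system matrix for the current iterate."""
    return nu * A_visc_N + sum(ci * Ai for ci, Ai in zip(c, A_adv_N))
```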
Model and Numerical Results
We consider a channel flow in the domain shown in Fig. 1, similar to the model considered in [13]. The rectangular domain Ω(x, y) = [0, 36] × [0, 6] is decomposed into 32 spectral elements. The spectral element expansion uses modal Legendre polynomials of order p = 12 in the velocity. The pressure ansatz space is chosen of order p − 2 to fulfill the inf-sup stability condition [10], [11]. The inflow is defined for y ∈ [2.5, 3.5] as u_x(0, y) = (y − 2.5)(3.5 − y). At x = 36 is the outflow boundary; everywhere else zero-velocity walls are prescribed. Note that the velocity boundary degrees of freedom lie along the boundaries of the spectral elements and not only along the domain boundary, resulting in 3072 local degrees of freedom for this problem. This is a simplified model of a contraction-expansion channel [12], where flow occurs through a narrowing of variable width. Variations in the width have been moved to variations in the Reynolds number, and only the section after the narrowing comprises the computational domain. The relation to the Reynolds number is established with U = 1/4 as the maximum inflow velocity and L = 1 as the width of the narrowing, giving Re = 1/(4ν). Consider a parametric variation in the viscosity ν, ranging from ν = 0.0075 to ν = 0.0025, which corresponds to Reynolds numbers between 33 and 100. The solution for ν = 0.0075 is shown in Fig. 2. It is slightly unsymmetrical, which marks the onset of the Coanda effect [14], [15], a known phenomenon characterized as a 'wall-hugging' effect occurring at these Reynolds numbers. The solution for ν = 0.0025 is shown in Fig. 3. Here, the Coanda effect is fully developed as the flow orients itself along the boundaries. Using model reduction with the form (11), which allows the offline-online decomposition, or using form (14), which has the lowest full-order system size, resulted in similar computational results. Shown in Fig. 4 is the relative H¹_0(Ω) error in the velocity between the full order and reduced order model.
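As a quick sanity check of the stated parameter range, the snippet below (a sketch, not part of the original implementation) evaluates Re = 1/(4ν) at the endpoints of the viscosity interval:

```python
# Verify the stated mapping Re = U*L/nu with U = 1/4, L = 1, i.e. Re = 1/(4*nu).
for nu in (0.0075, 0.0025):
    print(f"nu = {nu}: Re = {1.0 / (4.0 * nu):.1f}")  # -> 33.3 and 100.0
```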
While the full-order solves were computed with Nektar++, the reduced-order computations were done in a separate python code. To compare computational gains, compute times between a full order solve and a reduced order solve, both implemented in python, are taken. The compute times reduce by a factor of 50, i.e., for a single iteration step from about 40 s to under 1 s. Current work also aims to extend the software to make it available as a SEM-ROM software framework within the AROMA-CFD project (see Acknowledgment) as ITHACA-SEM.
Conclusion and Outlook
It has been shown that the reduced basis technique generates accurate reduced order models of small size for a channel flow discretized with spectral elements up to a Reynolds number of 100. The use of basis functions obtained by the spectral element method suggests a potentially important synergy between high-order and reduced basis methods, see also [6]. Due to the multilevel static condensation used here, particular care must be taken to achieve an offline-online decomposition. The domain decomposition into spectral elements shows a resemblance to reduced basis element methods (RBEM), [17], [18]. A comparison of both approaches could be the subject of further investigation.
Effect of variation of the central-hole depth and the axial anisotropy on the AB oscillations in a wide nanoring
The effect of the external magnetic field on the spectral properties of a one-electron non-uniform quantum ring with radially directed hills is analysed. The corresponding one-particle wave equation is separable in the adiabatic limit, when the layer thickness is essentially smaller than its lateral dimension. Our calculations show that the presence of a single axially directed hill, as well as an increase of the central-hole thickness, produces a quenching of the Aharonov-Bohm (AB) oscillations of the lower energy levels and of the magnetic moment. However, as the number of radially directed hills is increased, the system exhibits oscillations again, resulting from an enhancement of tunnelling circular currents.
Introduction
The development of new semiconductor growth techniques has made possible the fabrication of self-assembled quantum rings (QRs), which in the presence of a magnetic field exhibit an interesting quantum-interference phenomenon known as the AB effect [1]. Theoretical analysis of this phenomenon reveals its delicate nature, related to destructive interference induced by the possible appearance of multiple spatially separated paths in the QR when the central-hole dimension is diminished or any non-uniformity breaks the axial symmetry [2,7]. This destructive interference disappears if the ring is sufficiently narrow and uniform. However, further numerical analysis of a more realistic 3D model of non-uniform crater-like QRs shows that, although the real QR shape differs strongly from an idealized circular-symmetric narrow ring structure, AB-type oscillations in the magnetization survive [8,10].
A 3D exactly solvable model of a crater-like one-electron QR, proposed recently in references [11,12], provides an explanation of the stability of the AB oscillations with respect to structural non-uniformities. In the presence of a magnetic field applied along the symmetry axis, the most probable electron paths in this model are clustered close to the crater's rim, similar to those in quasi-one-dimensional quantum rings, independently of the crater width. However, any slight non-uniformity produced by a single radially directed valley or a single hill suppresses the oscillations of several lower levels due to the localization of the corresponding rotational states. Nevertheless, when the non-uniformity becomes substantial due to the presence of multiple valleys and hills, the AB oscillations generated by the external magnetic field become possible again owing to electron tunnelling through thin potential barriers.
It is possible to extend the model considered in references [11,12] in order to additionally analyse the effect of the variation of the width and the depth of the central hole on the AB oscillations. In this extended model, proposed in this paper, the crater's height between the central hole and the outer border grows linearly, with different slopes in different radial directions in the case of a non-uniform structure, while the corresponding slopes inside the central hole are zero in all radial directions. By varying the depth and the radius of the central hole in this model, one can analyse the successive change of the spectral and magnetic properties during a transformation of the nanostructure morphology from disk-like to ring-like.
Theoretical model
We consider a model of a crater-like non-uniform QD as a thin layer whose thickness d depends on the polar coordinates (ρ, φ) according to the profile defined in Eq. (1): the thickness is constant inside the central hole and grows linearly towards the outer border, with slopes modulated in the angular direction by the radially directed hills. As the thicknesses of actual self-assembled QDs manufactured up to now are much smaller than their lateral dimensions, one can take advantage of the adiabatic approximation, assuming a model with infinite-barrier confinement. In the framework of this approximation, the fast movement of the electron along the Z-axis inside the layer is supposed to be independent of its in-plane slow displacements [13,15]. Therefore, for each in-plane electron position given by the polar coordinates (ρ, φ), the ground state energy of the fast motion is given by the well-known expression for an infinite-barrier quantum well of width d(ρ, φ); in dimensionless units it reads

$$V(\rho, \varphi) = \frac{\pi^2}{d^2(\rho, \varphi)}. \qquad (2)$$

Following the adiabatic approximation procedure, one can then consider the in-plane electron motion as a 2D problem with the additional adiabatic potential given by this function. The renormalized 2D Hamiltonian describing, in the effective-mass approximation, the in-plane slow motion of the electron inside the layer with the profile given by Equation (1), in the presence of the adiabatic potential (2) and of the magnetic field applied along the Z-axis, has the following form in dimensionless units:

$$H = \left(-i\nabla_{2D} + \mathbf{A}\right)^2 + V(\rho, \varphi), \qquad \mathbf{A} = \frac{\gamma}{2}\,\rho\,\hat{\varphi}, \qquad (3)$$

where γ is the dimensionless magnetic field in the symmetric gauge. For the uniform crater the Hamiltonian is axially symmetric and the angular and radial variables separate. In what follows, we present the results of the calculation of some lower energy levels.
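To illustrate the adiabatic construction, the sketch below evaluates V = π²/d² on a polar grid for an illustrative thickness profile consistent with the verbal description (flat central hole, linear radial growth modulated by hills); it is not the exact profile of Eq. (1), and all parameter values are illustrative:

```python
# Illustrative evaluation of the adiabatic potential V(rho, phi) = pi^2/d^2
# (dimensionless units) on a polar grid; the profile is a stand-in, not Eq. (1).
import numpy as np

R_in, R_out = 5.0, 20.0      # inner and outer radii
h_a, d_out = 2.0, 6.0        # central-hole thickness and border thickness
p = 2                        # gives 2p = 4 radially directed hills

rho = np.linspace(0.1, R_out, 200)[:, None]
phi = np.linspace(0.0, 2 * np.pi, 180)[None, :]

slope = (d_out - h_a) / (R_out - R_in) * (1.0 + 0.5 * np.sin(2 * p * phi))
d = np.where(rho < R_in, h_a, h_a + slope * (rho - R_in))  # layer thickness
V = np.pi**2 / d**2          # adiabatic confinement potential V(rho, phi)
```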
Results and discussion
The geometrical parameters used below are a typical value of the effective Bohr radius for the InAs/GaAs material, a_0* ≈ 10 nm, and the outer radius R = 20 nm. For a central-hole depth h_a less than 3.5 nm, one can observe AB oscillations of the energy levels, typical of a 1D ring-like structure, which generate multiple crossovers between the curves with different magnetic quantum numbers m. As is well known, the period Δγ of the AB oscillations in a 1D QR of radius R is, in dimensionless units, inversely proportional to R². As h_a grows from 2 nm to 3 nm, the maximum of the density is displaced slightly toward the border, the confinement is increased, and the averaged value of the electron rotation radius is enlarged.
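For orientation, the AB period in physical units for a 1D ring of the quoted radius can be estimated from one flux quantum h/e through the ring area; the snippet below is a worked example, not part of the original calculation:

```python
# AB period in magnetic field for a 1D ring of radius R = 20 nm:
# one oscillation per flux quantum Phi_0 = h/e, i.e. Delta B = Phi_0 / (pi R^2).
import math

h = 6.62607015e-34   # Planck constant, J s
e = 1.602176634e-19  # elementary charge, C
R = 20e-9            # ring radius, m

dB = (h / e) / (math.pi * R**2)
print(f"Delta B = {dB:.2f} T")  # ~3.3 T
```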
A rise of the energy band bottom and an increase of the period of the AB oscillations in Figure 3 are associated with this change of the charge distribution. For h_a ≥ 3.5 nm the electron configuration becomes rather similar to that of a disc-like structure, for which there are no AB oscillations of the energy levels. In Figure 4, similar curves are presented for wide axially non-uniform QRs, whose morphology is given by the layer thickness dependency specified by relation (1). Figure 4 shows the similarity of the energy dependencies on the magnetic field for the upper energy levels, which correspond to rotational states. The electron density distribution in these states is practically insensitive to the structural non-uniformity, because the electron energy in these states is larger than the heights of the barriers generated by the axial non-uniformity. On the contrary, the magnetic field dependencies of the five lower levels and of the magnetization in the single-hill structure (second column of Figure 4) do not exhibit any oscillation. This effect is attributed to a localization of the lower rotational states induced by the single hill-type non-uniformity, and to the consequent insensitivity of these states to the external magnetic field. It is also seen that these lower energies in the third and fourth columns, for the cases of non-uniform craters with two (p = 1) and four (p = 2) hills, are clustered inside separated bands with two and four levels each, respectively. The presence of a single radially directed hill in the crater produces an effective adiabatic potential with a wide barrier that impedes a cyclic displacement of a low-energy electron along the rotational path. With a growing number of hills, the potential barriers along circular paths between adjacent hills become narrower.
The bigger the number of hills, the narrower are the barriers along the circular paths and the larger is the tunnelling current generated by the external magnetic field. An increase of the tunnelling current provides a growth of the amplitude of the AB oscillations and a clustering of the vibrational levels in separated non-crossing bands, while inside each band one can see in Figure 4 multiple crossovers and reorderings of the levels, typical of AB oscillations. The period of the AB oscillations in the first column of Figure 4, for a uniform crater-like wide QR with inner and outer radii of 5 nm and 20 nm, respectively, coincides with the corresponding value in a 1D QR of radius 20 nm. This means that the lines of the circular current induced by the magnetic field in the uniform crater go along the external border. It is interesting that the AB oscillations in a non-uniform crater with multiple hills and valleys, restored by the induced tunnelling current, have the same period of oscillation, as one can observe in the last column of Figure 4. This means that the lines of the circular tunnelling current induced by the magnetic field in the non-uniform crater also go along the outer border.
Conclusions
This work presents a theoretical analysis of the effect of the variation of the depth of the central hole in uniform and non-uniform crater-like quantum dots on the dependencies of their energies and induced magnetic moments on the external magnetic field applied in the Z-axis direction. To this end, a simple model of a one-electron crater-like quantum dot with ideally circular inner and outer borders, but with non-uniform thickness due to the presence of radially directed hills, is used. It is shown that in the adiabatic limit, when the thickness of the crater grows linearly in all radial directions while remaining significantly smaller than the outer radius, the energies and wave functions of the electron can be found analytically.
The dependencies of the energies and induced magnetic moment on the magnetic field in uniform craters reveal the presence of AB oscillations typical of a 1D quantum ring, independently of the size of the central hole of the crater, until the layer thickness of the central-hole region becomes almost equal to the thickness of the outer border. This result is attributed to a strong confinement that retains the electron in the crater-like structure inside a very narrow circular region close to the exterior frontier, independently of the width of the crater and the value of the external magnetic field, in contrast to the case of nanorings with rectangular cross-sections. A similar analysis for crater-like non-uniform nanostructures with radially directed hills shows that the barriers in the effective adiabatic potential at the valleys between adjacent hills along the circular paths provide a localization of the electron in the vicinity of the tops of the hills, if the electron energy is smaller than the barrier height. It has been shown that in the case of a crater-like structure with a single hill, a small non-uniformity of the crater thickness can suppress the electron rotation induced by the external magnetic field in several lower-energy states, producing a quenching of the AB oscillations of the corresponding energy levels. The higher the non-uniformity, the bigger is the number of separated levels whose energies do not depend on the external magnetic field. We consider these states, whose energies do not exhibit AB oscillations, to be vibrational states localized close to the hill.
It was also found that an increase of the number of hills in a non-uniform crater produces an assembling of the localized states into independent sub-bands. The number of levels inside each of them coincides with the number of hills in the nanostructure, while the dependencies of the energies inside the sub-bands on the magnetic field exhibit AB oscillations with crossovers between them, similar to those for extended states. The period of these oscillations coincides with that of the extended states. These oscillations are attributed to the tunnelling currents generated by the external magnetic field. The bigger the number of valleys, the larger are the tunnelling currents, the wider are these bands, and the higher are the amplitudes of the corresponding AB oscillations. This is due to the fact that an increase of the number of valleys produces a reduction of the width of the potential barriers between adjacent hills, making the tunnelling along circular paths through narrower barriers easier.
Measuring X-ray anisotropy in solar flares. Prospective stereoscopic capabilities of STIX and MiSolFA
During the next solar maximum, two upcoming space-borne X-ray missions, STIX on board Solar Orbiter and MiSolFA, will perform stereoscopic X-ray observations of solar flares at two different locations: STIX at 0.28 AU (at perihelion) and up to inclinations of ~25°, and MiSolFA in a low-Earth orbit. The combined observations from these cross-calibrated detectors will allow us to infer the electron anisotropy of individual flares confidently for the first time. We simulated both instrumental and physical effects for STIX and MiSolFA, including thermal shielding, background and X-ray Compton backscattering (albedo effect) in the solar photosphere. We predict the expected number of observable flares available for stereoscopic measurements during the next solar maximum. We also discuss the range of useful spacecraft observation angles for the challenging case of close-to-isotropic flare anisotropy. The simulated results show that STIX and MiSolFA will be capable of detecting low levels of flare anisotropy, for M1-class or stronger flares, even with a relatively small spacecraft angular separation of 20-30°. Both instruments will directly measure the flare X-ray anisotropy of about 40 M- and X-class solar flares during the next solar maximum. Near-future stereoscopic observations with Solar Orbiter/STIX and MiSolFA will help distinguish between competing flare-acceleration mechanisms, and provide essential constraints regarding collisional and non-collisional transport processes occurring in the flaring atmosphere for individual solar flares.
Introduction
Solar flares are the most powerful explosive events in the solar system, and large flares can release up to 10^32 erg of energy in a few minutes (Benz 2008; Holman et al. 2011). A large fraction of this energy, stored in coronal magnetic fields and released by magnetic reconnection, goes into the acceleration of particles. However, the mechanisms transforming magnetic energy into kinetic energy are still not clearly understood. Flare-accelerated electrons emit a continuous spectrum of bremsstrahlung X-rays that can span a wide energy range up to gamma rays. Hard X-rays (HXR, ≳ 20 keV) are a direct link to flare-accelerated electrons and a vital probe of the flare physical processes occurring at the Sun (e.g. Brown et al. 2003; Kontar et al. 2011; Holman et al. 2011). Current X-ray observations are performed by the space-borne Reuven Ramaty High Energy Solar Spectroscopic Imager (RHESSI; Lin et al. 2002), using nine rotation modulation collimators and bulk germanium (Ge) detectors to perform indirect imaging and spectroscopy from 3 keV to 17 MeV with an angular resolution of a few arcseconds. The brightest X-ray sites are often found at the footpoints of newly reconnected magnetic loops that link coronal acceleration regions with the much denser chromosphere. Here, the bulk of the accelerated electrons interact, losing energy via electron-electron Coulomb collisions and emitting bremsstrahlung X-rays by interacting with the ambient ions. Solar flare X-ray observations typically show a superposition of two distributions. The first is a thermal component emanating from the corona, with flare temperatures of a few tens of million degrees, dominating up to 10-20 keV. The second is a non-thermal power law extending to higher energies, with a spectral index in the range from ∼2 to 5 for HXR footpoint sources and, when detected, from ∼3 to 8 for the coronal non-thermal emission (Kašparová et al. 2005; Krucker & Lin 2008; Simões & Kontar 2013; Chen & Petrosian 2013).
Although the flare X-ray energy spectrum is well observed by RHESSI, the angular distribution is poorly constrained. The X-ray spectrum is dependent on the angular distribution of the parent electrons (e.g. Massone et al. 2004; Kontar et al. 2011), and uncertainty regarding the electron pitch-angle distribution can also lead to changes in inferred plasma parameters, e.g. an overestimation of the coronal density from X-ray imaging (Jeffrey et al. 2014). Thus, knowing the directivity of both the injected and radiating electron distributions is essential for understanding the type of acceleration mechanism(s) and the transport and interactions of solar flare electrons. Often, in an oversimplified collisional thick-target model, the injected and emitting electrons are assumed to be beamed along the guiding field lines (e.g. Brown 1971). However, acceleration (for example by a second-order Fermi process) might produce an isotropic distribution of accelerated electrons (e.g. Melrose 1994; Miller et al. 1996; Petrosian 2012). Furthermore, electron transport through the surrounding solar plasma ultimately broadens the electron distribution, increasing the isotropy by collisional or non-collisional pitch-angle scattering. Therefore, even if the injected distribution is strongly beamed, the angular distribution of radiating electrons is expected to isotropise as they are transported from the corona to the chromosphere.
Several approaches can be used to infer the electron anisotropy: statistical analysis of flare spectra observed at different positions on the solar disk (method 1), separation of the X-ray albedo component (method 2), X-ray polarization measurements (method 3), and direct stereoscopic observations (method 4). Method 2 suggests that the HXR emitting electron distribution is close to isotropic, and not beamed as in a simple standard flare model, at least for the few events published. This method uses the X-ray albedo effect (e.g. Tomblin 1972; Santangelo et al. 1973; Bai & Ramaty 1978), where sunwards emitted X-rays are Compton backscattered in the photosphere into the observer direction. The directivity is then determined by separating the directly emitted and reflected components of the HXR flux that contribute to the observed X-ray spectrum. Kontar & Brown (2006) studied two flares and their analysis showed that both flares were close to isotropic. A follow-up study of eight events by Dickson & Kontar (2013) again found a lack of electron anisotropy below 150 keV.
Recently Kašparová et al. (2007) studied 398 flares using method 1, accounting for the albedo component. Although they found changes in spectral index that were consistent with the presence of an albedo component, the statistical study gave no clear conclusion regarding average flare directivity.
Method 3 uses the direct link between X-ray linear polarization and electron anisotropy. Electron directivity and X-ray polarization have been extensively modelled (with and without albedo) with different scenarios (e.g. Leach & Petrosian 1983;Bai & Ramaty 1978;Emslie et al. 2008;Jeffrey & Kontar 2011). Nevertheless, observations with past instruments and non-dedicated polarimeters, such as RHESSI, have proved inconclusive, owing to instrumental issues (small effective area etc.) inducing large uncertainties associated with the measurements. The recently launched POLAR (Hajdas 2015), a wide field-of-view X-ray polarimeter installed on the Chinese space station Tiangong-2 in September 2016, should allow for confident detection of X-ray polarization during large flares at suitable heliocentric angles away from the solar disk centre. Importantly, POLAR has an effective area of 200 cm 2 , i.e. two orders of magnitude larger than the polarimeter on board RHESSI, and a low minimum detectable polarization of 5%, that should remove some of the previous issues (e.g. high background levels and large uncertainties). However, POLAR will be operational during a period of decreasing solar activity.
Unambiguous measurements of solar flare electron anisotropy can be obtained through X-ray directivity measurements made by cross-calibrated detectors looking at the same source from two separate points of view (method 4). Previous stereoscopic studies (e.g. Kane et al. 1998) found no clear evidence for directivity at large X-ray energies. However, past direct measurements by multiple spacecraft suffered greatly from calibration issues, owing to the use of different types of detectors, and therefore the results were questionable at best. Thus, it is fundamental that the two instruments have a well-known energy cross-calibration.
A concrete possibility of obtaining simultaneous stereoscopic observations will be provided by two instruments which will operate at the next solar maximum. The "Spectrometer Telescope for Imaging X-rays" (STIX) (Krucker et al. 2013) is an instrument to be flown on board the ESA/NASA Solar Orbiter mission (Müller et al. 2013) within the ESA Cosmic Vision programme. Solar Orbiter will be launched in October 2018 and it will start its science programme after a three-year cruise phase. The "Micro Solar-Flare Apparatus" (MiSolFA) (Casadei 2014) is a compact X-ray detector being developed in Switzerland, in collaboration with the French STIX team and the Italian Space Agency, to be operated in a low-Earth polar orbit at the next solar maximum. Thus, during the next solar maximum period, simultaneous observations will be jointly performed by STIX and MiSolFA, the first orbiting around the Sun and the second around the Earth. Importantly, both instruments will adopt the same type of photon detectors, overcoming the calibration issues of past instruments.
The purpose of this paper is to discuss the prospective stereoscopic observations with STIX and MiSolFA, highlighting the capabilities of both instruments working in tandem, and estimating the number of observable flares which are suitable for measuring the anisotropy of the X-ray emission. The two instruments are illustrated in Section 2. Simulated solar flares with small differences in electron anisotropy are studied, and we consider both instrumental and physical effects such as the X-ray albedo. In Section 3 it is shown that, even with a spacecraft angular separation as small as 20-30°, a mildly anisotropic distribution will produce detectable differences in the observed X-ray spectra, for a bright enough solar flare. The expected number of flare observations is estimated in Section 4, where it is found that several flares per year will be suitable for an energy-dependent directivity measurement with STIX and MiSolFA. In Section 5, all the main results are summarized.
Dual observations with STIX and MiSolFA
The STIX instrument will provide X-ray imaging spectroscopy from 4 to 150 keV with 32 Caliste-SO units equipped with CdTe crystals (Meuris et al. 2012) and an energy resolution better than 1 keV (FWHM) from 14 to 60 keV. The imaging is performed with an indirect technique based on the Moiré effect, achieving an angular resolution of about 7 arcseconds. At perihelion, STIX will observe flares from a distance three times closer to the Sun than instruments orbiting around the Earth, hence achieving an effective spatial resolution similar to RHESSI (which has ∼2 arcsecond resolution). In order to complement STIX observations with minimal differences in the energy response, MiSolFA will adopt the same photon detectors, i.e. Caliste units equipped with CdTe crystals of 1 mm thickness. The orbital inclination of STIX will increase over time up to 25° or more (depending on the mission duration). Hence, the two instruments will be able to observe the same flare stereoscopically from two different points of view.
The STIX and MiSolFA instruments both need to exploit indirect imaging techniques because they cannot accommodate grazing-incidence focussing optics, which require large focal distances of several metres. For example, astronomical direct X-ray imagers such as the Nuclear Spectroscopic Telescope Array (NuSTAR) have a 10 m focal length (Harrison et al. 2013). Like RHESSI, STIX and MiSolFA rely on the Moiré effect, which is produced by a pair of parallel grids placed in front of each photon detector. The STIX instrument has 30 pairs of tungsten grids providing Fourier components with 9 different directions (at 20° steps) and 10 angular scales (from 7 to 950 arcseconds, with constant-ratio steps of √2). Hence, STIX will be able to precisely locate the flare on the Sun and study the morphology of the X-ray emitting sources. On the other hand, the main purpose of the imaging system of MiSolFA is to separate the flare HXR footpoints, relying on other observations to locate the source on the Sun. For this purpose, it is sufficient to sample a relatively narrow angular range (from 10 to 60 arcseconds). MiSolFA will cover this range with 12 subcollimators sampling two orthogonal directions with frequencies following a Fourier series. Therefore, although only in a very limited angular range, MiSolFA will have a better point spread function than STIX, whereas the latter will be able to cover a much wider angular range and will be able to locate the source with very high precision (of the order of 1 arcsecond).
In the rest of this paper, we focus on X-ray spectroscopy alone, hence the imaging performance of the two instruments is ignored, apart from taking into account the reduction of effective area (because of the grids, only about 25% of the photons reach the detectors). The effective area of the MiSolFA detectors is 40% of the STIX area. In addition, the time average over the high-eccentricity orbit of STIX gives it almost a factor 3 increase in intensity, owing to the closer distance from the Sun. This gives a ratio of 7.5 between the STIX and MiSolFA acceptances. We also account for additional details in the following simulations. Arriving photons "see" the thermal shield of each instrument, which is designed to stop radiation up to the X-ray range. The STIX instrument has two beryllium (Be) windows with a total thickness of 3 mm, whereas a 0.3 mm thick Be layer has been considered for MiSolFA.
In order to have sufficient counts up to about a hundred keV, flares of class M or above are considered. This allows for the precise determination of the thermal and non-thermal contributions to the total flux. To avoid large dead times owing to the high flux of low-energy photons, STIX employs a movable aluminium attenuator with 0.6 mm thickness. On the other hand, MiSolFA has no movable part, as this would compromise the pointing stability of such a light satellite. Nevertheless, the low-energy flux is also attenuated by MiSolFA, which plans to adopt golden grids fabricated onto a silicon or carbon substrate. Here a Si layer is considered (a conservative assumption), which absorbs most photons below 8 keV. Hence the MiSolFA transmission is not very different from what STIX achieves with the attenuator in front of the photon detectors (see figure 1).
Furthermore, although STIX and MiSolFA will adopt the same photon detectors, their performance cannot be expected to be identical, since Solar Orbiter will only start collecting science data after a cruise of three years. During this initial phase, the CdTe crystals will experience some ageing owing to incident radiation. The radioactive sources installed on these detectors will provide continuous calibration data, which is fundamental to ensure a good cross-calibration. Here it is conservatively estimated that the STIX resolution will worsen by a factor of two (a larger effect than what was found by Eisen et al. 2002; Zanarini et al. 2004), whilst no ageing is considered for MiSolFA.
Finally, the background counts will have different distributions for the two detectors. One example is shown in figure 2, where the MiSolFA background distribution is taken to be similar to the RHESSI background at the beginning of the mission, and for STIX a flat component is superimposed with a bump that mimics the background X-ray radiation (Marshall et al. 1980). The actual background distributions will be measured once the two instruments start operating. Here what matters is that they are expected to have different shapes, affecting the measured photon spectra in different ways.
Flare spectra are rapidly falling energy distributions, while the background counts have a more uniform distribution. Therefore, at high energies (e.g. well above 100 keV) the measurement will be background dominated. On the other hand, the low-energy part is dominated by the thermal emission, which is not expected to show significant anisotropy, although a thermal source can show low levels (a few percent) of directivity and polarization (e.g. see Emslie & Brown 1980). Hence the useful energy range for directivity studies extends from the end of the thermal region (∼20 keV) up to the energy bins in which the background counts are of the same order of magnitude as the actual flare photon rate. For example, for M-class flares this energy region roughly goes from 20 to 100 keV.
In order to estimate the flare viewing angles of STIX and MiSolFA, the flare position on the Sun was simulated according to the measured distribution seen by RHESSI over the past decade. Similar to the sunspot distribution, flares are uniform in solar longitude, which implies a higher density of flares approaching the limb when looking from the Earth, and bimodal in latitude, with peaks at about ±13.5° and a root mean square of 6.3°. With MiSolFA in a low Earth orbit and STIX taken approximately uniform in latitude (within ±0.5 rad) and in longitude (within ±π rad), there is about 50% probability (ignoring beyond-the-limb flares) that a flare is visible by one instrument; hence the integral of the distribution in figure 3 is 0.25, which is the fraction of flares visible by both instruments. There is a low probability for small viewing angles, with a peak at θ_S ∼ 30-50° for STIX and θ_M ∼ 20-30° for MiSolFA; this is about 40% higher than a broad plateau reaching 90°, which corresponds to limb flares. Accounting for a 20% live time for STIX, corresponding to the fraction of time it will spend in science mode during the main phase of Solar Orbiter, and for a fraction of flares visible by both instruments equal to 50%, one expects MiSolFA to observe the same event as STIX for 10% of all flares. The MiSolFA instrument has a two-year nominal science mission duration, and this amounts to 73 days of net observing time. Assuming 20% dead time, caused by some issue for at least one of the instruments, one ends up with two months of total simultaneous observing time.
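The joint-visibility fraction quoted above can be reproduced with a simple Monte Carlo sketch; the distributions follow the stated assumptions, while the numerical details (sample size, seed) are illustrative:

```python
# Monte Carlo estimate of the fraction of flares visible by both
# instruments: flares uniform in longitude and bimodal in latitude
# (peaks +-13.5 deg, rms 6.3 deg); MiSolFA at the Earth direction; STIX
# uniform in latitude within +-0.5 rad and in longitude within +-pi.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

lon = rng.uniform(-np.pi, np.pi, n)                        # flare longitude
lat = np.deg2rad(rng.choice([-13.5, 13.5], n) + rng.normal(0.0, 6.3, n))

def unit(lon, lat):
    """Unit vectors from longitude/latitude, shape (3, n)."""
    return np.stack([np.cos(lat) * np.cos(lon),
                     np.cos(lat) * np.sin(lon),
                     np.sin(lat)])

flare = unit(lon, lat)
earth = unit(np.zeros(n), np.zeros(n))                     # MiSolFA direction
stix = unit(rng.uniform(-np.pi, np.pi, n), rng.uniform(-0.5, 0.5, n))

cos_m = np.einsum('ij,ij->j', flare, earth)                # cos(theta_M)
cos_s = np.einsum('ij,ij->j', flare, stix)                 # cos(theta_S)
both = (cos_m > 0) & (cos_s > 0)                           # flare faces both
print(f"fraction visible by both: {both.mean():.2f}")      # ~0.25
```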
Simulation of flare measurement
Starting from different electron distributions, Jeffrey & Kontar (2011) computed the X-ray bremsstrahlung emission as a function of the electron energy and estimated the total X-ray flux along all directions, for different degrees of electron anisotropy. Importantly, this work also included the X-ray albedo component, where photons emitted towards the Sun are Compton backscattered in the photosphere towards the observer with changed photon properties. The albedo component produces a bump in the energy spectrum between ∼10-100 keV, and this bump must be included since it changes all the measured electron and photon properties, including anisotropy. In our simulation, we only modelled the non-thermal electron distribution. The resulting X-ray bremsstrahlung (including the albedo component) is calculated for a HXR footpoint source located at a chromospheric height of 1 Mm above the photosphere (for more information see Kontar & Jeffrey 2010; Jeffrey & Kontar 2011). The energy spectrum of the emitting electrons is a single power law with spectral index of 2, hence the injected electron distribution has a spectral index of δ ∼ 4 and the emitting photon distribution has a spectral index of γ ∼ 3. Here we make a comparison between an isotropic and a mildly anisotropic case with a Gaussian pitch-angle distribution with approximately 0.4 rad standard deviation.
Fig. 4. Polar plot of the simulated electron angular distributions for the completely isotropic (blue) and mildly anisotropic (red) cases. In the simple model considered here, the energy dependence of the electron anisotropy is neglected.
In the simulation, we use a simple model where the angular distribution of the electrons does not change with energy, since more complicated electron distributions are not required to show the prospective stereoscopic capabilities of both instruments. Here we focus on the physically relevant observable differences in the detected X-ray flux, which arise from different lines of sight, albedo, and instrumental effects. The latter are independent of the details of the parent electron distribution, while both direct and reflected emissions are expected to be functions of the electron energy. However, while the energy dependence of the albedo component is well understood (Jeffrey & Kontar 2011), the "true" electron distribution is not known and likely changes in different flares. Hence there was no attempt to model the full complexity of real flares, as this is not necessary to assess the capability of the two instruments to perform joint measurements from which the anisotropy can be inferred, whatever the underlying electron model is. The angular distributions of the X-ray emitting electrons used in our analysis are shown in Figure 4. The non-isotropic distribution is only slightly sunwards beamed compared to the isotropic case.
The two corresponding photon spectra have a different energy dependence along different directions. Figure 5 shows their binwise ratio as a function of the energy and µ ≡ cos θ, where θ = θ_S or θ_M is the viewing angle with respect to the local flare vertical direction (equal to the local heliocentric angle on the solar disk). This means that, unless STIX and MiSolFA are symmetrically located with respect to the flare direction (which is very unlikely, see figure 3), they will measure different energy spectra. Figure 6 shows the difference |cos θ_S − cos θ_M| between STIX and MiSolFA: most flares will be observed with angular separations large enough to measure sizeable differences in the photon spectra. Here we take cos θ_S = 0.7-0.8 for STIX and cos θ_M = 0.9-1.0 for MiSolFA, to make a concrete example. We used this conservative example since a larger difference in the viewing angles will generally produce a greater difference in the resulting energy spectra observed by both instruments. The most effective way of detecting deviations from a fully isotropic electron distribution is to look for differences in the shape (not only in the normalization) of the energy spectrum of the emitted X-rays. A sizeable difference in the absolute flux reaching the two detectors is expected. However, it is very difficult to establish a standard candle for the calibration of absolute fluxes. On the other hand, shape differences can be compared against spectra which can be safely assumed to have the same shape, for example when the same source is viewed at about the same angle. This is why checking for shape differences is a more robust approach from the experimental point of view. The example under consideration, i.e. minimal variations in anisotropy, represents a challenging case, because it implies choosing adjacent slices in figure 5, with minimal shape differences in the energy spectrum. Figure 7 shows the corresponding "true" energy spectra of the simulated photon flux towards STIX and MiSolFA, using the same energy binning for both.
After having considered the passage through all materials in front of the photon detectors, the different energy resolutions and the background distributions, one obtains the expected distributions shown in figure 8. Next, they are taken as the input for a pseudo-experiment, in which the expected (real) value in each bin is taken as the parameter of a Poisson distribution, which is adopted to generate a random (integer) number of counts in 1 minute of observation time for both detectors. The result, whose statistical fluctuations depend on the size of the detected sample, is shown in figure 9. The main goal is to assess how well these two cases can be distinguished by comparing the observations performed with the two instruments. In order to quickly verify that the two models can indeed be disentangled, we take here the binwise ratios between the distributions measured by MiSolFA and STIX and compare these ratios in figure 10. These ratios are clearly different, which implies that the two models can be distinguished by the simultaneous measurement performed with the two instruments. Hence, any greater level of flare anisotropy, if present, will also be measured by both instruments. Actually, a better approach would be to compare the unfolded distributions, which are obtained after taking into account all detector effects and represent our best inference on the "true" input of each instrument, and then take the ratio of the unfolded distributions. However, the accuracy of the unfolding procedure is ultimately limited by the statistical uncertainty, which is shown by the error bars in figure 10. Hence the latter provides sufficient information to verify that the two models produce shape differences in the measured energy spectra of STIX and MiSolFA.
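A minimal sketch of such a pseudo-experiment follows; the expected spectra are illustrative power laws standing in for the simulated fluxes of figure 8, and only the Poisson-sampling and binwise-ratio logic is shown:

```python
# Pseudo-experiment: treat the expected counts per energy bin as Poisson
# means, draw one minute of counts per instrument, compare binwise ratios.
import numpy as np

rng = np.random.default_rng(2)
edges = np.linspace(20.0, 100.0, 17)         # energy bin edges, keV
Ec = 0.5 * (edges[:-1] + edges[1:])          # bin centres

mu_stix = 2e7 * Ec**-3.0                     # expected counts/bin (illustrative)
mu_misolfa = mu_stix / 25.0                  # conservative acceptance ratio

n_stix = rng.poisson(mu_stix)                # one minute of STIX counts
n_misolfa = rng.poisson(mu_misolfa)          # one minute of MiSolFA counts

ratio = n_misolfa / np.maximum(n_stix, 1)
err = ratio * np.sqrt(1 / np.maximum(n_misolfa, 1) + 1 / np.maximum(n_stix, 1))
for ec, r, s in zip(Ec, ratio, err):
    print(f"{ec:5.1f} keV  ratio = {r:.4f} +- {s:.4f}")
```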
Expected number of good flares
Based on the statistical study of three years of RHESSI solar flare observations performed by Battaglia et al. (2005), a simulation was performed with the purpose of understanding the range of possible scenarios to be encountered by STIX. Two functional relationships based on this study have been heuristically obtained from the simulation. The first relates the energy threshold E_thr, at which the non-thermal contribution equals the thermal component, to the flare intensity I, taken as the logarithm in base 10 of the GOES class normalized to X1 (i.e. I = 0 for X1, I = −1 for M1, I = −2 for C1, etc.). A quadratic fit with the function E_thr = p_0 + p_1 I + p_2 I² provides a very good description of this relationship, with best-fit parameters p_0 = 36.3 ± 0.07 keV, p_1 = 3.57 ± 0.06 keV, and p_2 = −0.301 ± 0.015 keV. The fit quality is very good, with χ² = 0.072 over 4 degrees of freedom. However, one must be aware that the flare-to-flare variations are so big that the good fit quality is mostly because of the large spread among the data.
The second heuristic relationship connects the base-10 logarithm of the photon rate R in Hz/cm² above E_thr to the flare intensity I. A parabolic fit with the function R = q_0 + q_1 I + q_2 I² provides a very good description of this relationship, with best-fit parameters q_0 = 3.107 ± 0.009, q_1 = 1.154 ± 0.010, and q_2 = 0.059 ± 0.005. The fit quality is very good, with χ² = 0.008 over 12 degrees of freedom. Again, this is mostly a result of the large spread of flare characteristics.
The same relationships can also be exploited for MiSolFA, after accounting for the instrumental differences mentioned above. To put ourselves in the worst case, we considered the detection of a flare with STIX at perihelion (where it spends only a very short fraction of its orbit). With respect to the orbit average, the closest distance gives a significant intensity magnification, bringing the ratio between the STIX and MiSolFA acceptances to the very conservative factor of 25, which is used below. Thus, the rate logarithm R for MiSolFA is taken to be 1.4 units smaller than for STIX, and the expected non-thermal counts for a power law model can be computed for any given spectral index. Here we take as an example a photon spectral index γ = 4 for each instrument, which is steeper than the photon spectrum considered above, to get a conservative estimate of the photon counts. The result is shown in table 1 for MiSolFA.
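The two heuristic fits can be evaluated directly; the sketch below reports E_thr and the photon rates for a few GOES classes, with the MiSolFA rate taken 1.4 dex below STIX as stated. Absolute counts per minute additionally require the effective areas and are not reproduced here:

```python
# Evaluate the heuristic fits at a given GOES intensity I = log10(class/X1).
import math

def e_thr(I, p=(36.3, 3.57, -0.301)):
    return p[0] + p[1] * I + p[2] * I**2            # crossover energy, keV

def log_rate(I, q=(3.107, 1.154, 0.059)):
    return q[0] + q[1] * I + q[2] * I**2            # log10(Hz/cm^2) above E_thr

for name, cls in [("M1", 1e-5), ("M5", 5e-5), ("X1", 1e-4)]:
    I = math.log10(cls / 1e-4)                      # GOES flux normalized to X1
    print(f"{name}: E_thr = {e_thr(I):.1f} keV, "
          f"log10 rate: STIX {log_rate(I):.2f}, MiSolFA {log_rate(I) - 1.4:.2f}")
```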
For an M1-class flare, STIX should collect about 6000 non-thermal counts in one minute at the peak, while MiSolFA in the same time is expected to see 240 counts. For an M3-class flare, the expected counts per minute are 21000 for STIX and 840 for MiSolFA. For the M5 and X1 classes, the expectation is 38k and 96k counts per minute for STIX, and 1.5k and 3.8k for MiSolFA. As STIX will collect many more events in each energy bin, the relative uncertainty in the ratio with MiSolFA is dominated by the counts of the latter instrument. This uncertainty (ignoring any systematic effect, which might be discovered in the future) is also reported in table 1.
All M- and X-class flares will be suitable for directivity measurements. For the weaker M-class flares it might be beneficial to adopt a coarser energy binning, to decrease the statistical uncertainty in each bin. However, shape differences like those expected from the isotropic and mildly anisotropic models considered above can be measured without rebinning from class M3 upwards.
The last step is to estimate the expected number of flares of class M1 or higher, at the next solar maximum. The statistical distribution of solar flare classes is well described by a power law behaviour, with spectral index of about 2.1 and percent-level variations in slope across different solar cycles. According to the recent review by Winter & Balasubramaniam (2015), one would expect to see at least 20 solar flares with class M1 or above per month during a solar maximum period. Together with our conservative estimate of the total overlapping live time of two months for STIX and MiSolFA, this implies that there should be at least 40 observations suitable for directivity measurement. Observing even one suitable flare, in which non-isotropic emission is detected and its dependence on energy is studied, would be a definite step forwards in our understanding of electron anisotropy in solar flares, and hence the expected number of good flares is very encouraging.
Summary
In this work, dual X-ray observations of the same solar flare from two upcoming instruments, STIX and MiSolFA, at different viewing angles are considered, in the context of prospective electron directivity measurements. A number of instrumental effects have been taken into account by performing a simulation of the response of STIX and MiSolFA to the photon fluxes computed in two models, in which the accelerated electrons are either fully isotropic or close to isotropic, helping us to determine the capabilities of the instrumentation for electron directivity measurements. The X-ray albedo component is also taken into account, as described in Jeffrey & Kontar (2011).
Our study focussed on the energy range of the non-thermal component, up to the energy bins in which the background counts are no longer negligible compared to the photon rate. Depending on the flare, this region goes from 20-30 keV to about 100 keV. In order to have enough counts in MiSolFA, which is the instrument with the smaller acceptance, flares of class M1 or higher are required to be able to distinguish between the two considered models. For such flares, we find that even for a mildly anisotropic case, and for spacecraft separations as small as 20-30°, STIX and MiSolFA will be able to detect shape differences in the X-ray spectra. Hence, higher levels of X-ray anisotropy should be easily detectable. Given the rate of flares as a function of the GOES class, and conservatively assuming an 8% net overlapping time for STIX and MiSolFA, one expects to observe at least 40 flares of M1 class or above during solar maximum, and it will be possible to perform quantitative estimates of the X-ray intensity along different directions, as a function of energy.
Therefore, the result of this study is that there will be at least 40 flares suitable for directivity studies, thanks to the stereoscopic measurements by STIX and MiSolFA at the next solar maximum. The combined use of these two instruments will allow for a quantitative measurement of solar flare electron anisotropy for the first time in solar flare physics, a vital diagnostic tool for understanding and constraining fundamental solar flare models of particle acceleration and transport.
Frequency of craniofacial pain in patients with ischemic heart disease
Background Referred craniofacial pain of cardiac origin might be the only symptom of ischemic heart accidents. This study aimed to determine the frequency of craniofacial pain in patients with ischemic heart disease. Material and Methods This cross-sectional study was accomplished on 296 patients who met the criteria of having ischemic heart disease. Data regarding demographics, medical history and referred craniofacial pain were recorded in data forms. In addition, patients underwent oral examination to preclude any source of dental origin. The Chi-square test, Student's t-test and a backward regression model were used to analyze the data by means of SPSS software version 21. P<0.05 was considered significant. Results A total of 296 patients were studied, comprising 211 men (71%) and 85 women (29%) with a mean age of 55.8 years. Craniofacial pain was experienced by 53 patients out of 296, 35 (66%) of whom were male and 18 (34%) female. None of the patients experienced craniofacial pain solely. The most common sites of craniofacial pain were the occipital region and posterior neck (52.8%), the head (43.3%), and the throat and anterior neck (41.5%), respectively. We found no relationship between craniofacial pain of cardiac origin and age, diabetes, hypertension, or family history. On the other hand, there was a significant relationship between hyperlipidemia and smoking and craniofacial pain of cardiac origin. Conclusions Radiating pain to the face and head can be expected quite commonly during a cardiac ischemic event. Dental practitioners should be thoroughly aware of this symptomatology to prevent misdirected dental treatment and delay of medical care. Key words: Craniofacial pain, ischemic heart disease, myocardial infarction, angina pectoris, referred pain.
Introduction
Ischemic heart disease is a major cause of death in adults (1). The cardinal symptom of ischemic heart disease is chest pain, characteristically induced by activities such as walking, climbing stairs, eating, or stress. Convergence of the vagus, trigeminal, and cervical (C2, C3) nerves may cause the pain to radiate to other areas such as the right or left shoulder, the scapular region, the neck and the lower jaw (2)(3)(4)(5)(6). On rare occasions, pain is perceived solely in the aforementioned areas instead of the chest, which leads to increased mortality due to misdiagnosis. Kreiner et al. (6) reported the occurrence of craniofacial pain during ischemic heart accidents to be nearly 40%, and it was the sole symptom of ischemic heart disease in 6% of cases. Ischemic heart disease can be manifested as angina pectoris or myocardial infarction (MI), the former having two clinical types, stable and unstable angina. Patients with stable angina have a good prognosis, whereas those with unstable angina experience episodes of chest pain even at rest and are likely to progress to MI soon (7). Pain of dental, periodontal, sinus and musculoskeletal origin is among the most common types of orofacial pain. However, pain in these areas may also originate from other regions, in which case it is referred to as heterotopic pain (5). Cardiac pain can present as heterotopic pain, which in the orofacial region leads to unnecessary dental procedures and delays the diagnosis and treatment of the cardiac disease. There are several reports of inappropriate dental treatment due to misdiagnosis of the pain source (8). On the other hand, in developed countries missed diagnosis of MI has been described in 2-27% of cases (9,10). As demonstrated in one study, one fourth of the missed diagnoses of MI resulted in lethal or potentially lethal complications (8), lack of chest pain and of ST-segment elevation being the most important causes (10). Patients suspected of acute myocardial infarction (AMI) with no chest pain have a three times higher risk of death compared with those having chest pain (11). Another study revealed that in the absence of chest pain, the one-year mortality rate of cardiac patients was twice as high as in those experiencing chest pain. Orofacial pain was recorded as the sole symptom of ischemic heart disease in 6% of patients, while 32% had orofacial pain accompanied by pain in other regions. The frequency of craniofacial pain of cardiac origin is higher in women than in men, as shown in two studies (12,7). Many studies have addressed the frequency of cardiac pain in different parts of the body (13)(14)(15)(16)(17)(18); however, few studies have addressed the referral of cardiac pain to the head and neck areas, and most of them are case reports (19)(20)(21)(22)(23)(24)(25)(26)(27). Therefore, this study aimed to determine the frequency of craniofacial pain in patients with ischemic heart disease.
Material and Methods
This cross-sectional study was accomplished on 296 hospitalized patients who met the criteria of having ischemic heart disease (angina pectoris or myocardial infarction) verified by means of angiography at Shahid Rajaie Cardiovascular, Medical & Research Center, Tehran, Iran. History of previous cardiovascular disease such as hospitalization in a coronary care unit (CCU), consumption of cardiovascular medications and history of severe chest pain, as well as risk factors of coronary heart disease such as diabetes mellitus, hypertension, smoking, hyperlipidemia, and family history of heart disease, were all recorded in data forms. Patients with a history of chronic headache, earache, severe psychiatric disorders, pain in the temporomandibular joint (TMJ) region, surgery or presence of a diagnosed mass in the jaws, or recent odontogenic pain were excluded from the study. Meanwhile, all eligible patients underwent oral examination by means of a dental mirror and flashlight while lying on their beds, and those who were found to have dental problems (severe or complex dental caries and any suspected tooth with pulp exposure) were also excluded. On the day after angiography, patients were fully informed about the details of the study and then requested to fill out the data forms, which were subdivided into two parts covering demographics and pain characteristics (head and neck pain before or during the heart attack, pain in other parts of the body, and seeking or receiving dental care due to craniofacial pain). Thereafter, patients were shown anatomic illustrations representing the chest, abdomen, back, shoulders, arms, face, neck, and mouth, and asked to mark the site of their pain on the picture (6) (Fig. 1). To ensure the accuracy of the data, all questionnaires were reviewed by the researcher, and patients were asked once more about having pain and pointing to the site of pain.
The study protocol was approved by the Oral Medicine Department of Shahid Beheshti University of Medical Sciences and Shahid Rajaie Cardiovascular, Medical & Research Center. Informed consent to participate in the study was obtained from all patients.
-Data analysis: To analyze the data, SPSS software version 21 was used. Descriptive statistics were used to report the results. Moreover, the chi-square test was used to examine differences between the two genders in terms of symptoms, and Student's t-test to assess the distribution of age between the two groups (with pain and without pain). In order to analyze the effect of gender, age, type of cardiac disease, and coronary risk factors on the chance of having craniofacial pain, a backward regression model was utilized. P<0.05 was considered significant.
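The analysis was run in SPSS; purely as an illustration, a minimal Python sketch of the same pipeline (chi-square, t-test, and the full-model logistic regression from which a backward procedure would iteratively drop predictors) could look as follows. All variable names and values here are invented, not the study's data.

```python
# Hedged sketch of the reported analysis pipeline on hypothetical data.
import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 296
df = pd.DataFrame({
    "craniofacial_pain": rng.integers(0, 2, n),   # 0 = no, 1 = yes (hypothetical)
    "male": rng.integers(0, 2, n),
    "age": rng.normal(55.8, 10, n),
    "smoking": rng.integers(0, 2, n),
    "hyperlipidemia": rng.integers(0, 2, n),
})

# Chi-square: pain frequency by sex
table = pd.crosstab(df["male"], df["craniofacial_pain"])
chi2, p_chi, dof, expected = stats.chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p_chi:.3f}")

# Student's t-test: age in patients with vs without pain
t, p_t = stats.ttest_ind(df.loc[df.craniofacial_pain == 1, "age"],
                         df.loc[df.craniofacial_pain == 0, "age"])
print(f"t = {t:.2f}, p = {p_t:.3f}")

# Logistic regression; a backward procedure would repeatedly refit after
# removing the predictor with the largest p-value.
X = sm.add_constant(df[["male", "age", "smoking", "hyperlipidemia"]])
model = sm.Logit(df["craniofacial_pain"], X).fit(disp=False)
print(model.summary())
```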
Results
The present study aimed to determine the frequency of craniofacial pain of cardiac origin in patients with ischemic heart disease hospitalized at Shahid Rajaie Cardiovascular, Medical & Research Center in 2014-2015. A total of 296 patients entered the study, comprising 211 men (71%) and 85 women (29%) with ages ranging from 29 to 85 years and a mean age of 55.8 years. Of the total sample, 133 patients were hospitalized for MI, 71 for unstable angina, and 92 for stable angina. With regard to medical records, 166 patients had a history of hospitalization in a CCU, 209 were taking cardiovascular medications, and 150 had experienced severe chest pain. Meanwhile, medical history taking revealed diabetes in 95 patients, hypertension in 134, smoking in 106, hyperlipidemia in 148, and presence of ischemic heart disease in first-degree relatives in 149. Craniofacial pain was experienced by 53 patients out of 296, 35 (66%) of whom were male and 18 (34%) were female. The mean age of the men was 57.9 years and that of the women 62.8 years. Thirty-six patients (66%) were hospitalized because of MI, and 17 (34%) for unstable angina. None of the patients experienced craniofacial pain solely; all had concomitant pain in the mid-chest or left chest. In cases of craniofacial pain, the most common sites of pain were the occipital region and posterior neck (52.8%), head (43.3%), and throat and anterior neck (41.5%), respectively. Pain in the left mandible was recorded in 28.3% of patients. The right neck was the least frequent site of referred craniofacial pain (3.7%) (Fig. 2). There was no report of referred pain in the maxilla or teeth among our patients. The most prevalent sites of pain in patients without craniofacial pain were the mid-chest, left side of the chest, back, and left shoulder. On the other hand, those with craniofacial pain had cardiac pain most commonly in the left side of the chest, mid-chest, and back, respectively. We found no relationship between age and craniofacial pain of cardiac origin. However, there was a statistically significant age difference between men and women with craniofacial pain (p=0.02). Meanwhile, no association was detected between craniofacial pain of cardiac origin and diabetes, hypertension, or family history. There was a significant relationship between hyperlipidemia (p=.001) and smoking (p=.003) and craniofacial pain of cardiac origin (Fig. 3).
Discussion
Ischemic heart disease is considered one of the major fatal events among adults (1). Patients with ischemic heart disease may experience referred pain in the head and neck areas. Induction of pain radiation after physical activity and relief following rest are indicative of a cardiac origin (7). There are few studies regarding the frequency of cardiac pain referred to the head and neck in patients with ischemic heart disease. The objective of the present study was to determine the frequency of craniofacial pain of cardiac origin in patients hospitalized at Shahid Rajaie Cardiovascular, Medical & Research Center. In this study, patients were selected after confirmation of severe coronary stenosis based on angiographic interpretation by an experienced cardiologist. In other similar studies, the method of patient selection was not mentioned (27,28). Danesh-Sani et al. (28) found 34% of patients to have craniofacial pain of cardiac origin, which was the only presentation of ischemic heart disease in 13.3% of them. In the study of Kreiner et al. (27), these values were 38% and 15%, respectively. In the present study, 17.9% of patients reported referred craniofacial pain accompanied by pain in other parts of the body, and none of them presented with craniofacial pain as the sole symptom during the heart attack. Due to low public awareness of craniofacial pain as a symptom of cardiac ischemia, the frequency of craniofacial pain of cardiac origin found in our study is likely to be an underestimate. In accordance with our results, no association between age and craniofacial pain was found in similar studies (p=0.2). We found a significant difference between men and women with craniofacial pain with respect to age, in that the mean age of the women was significantly higher than that of the men (p=0.025). Previous studies have demonstrated significant differences between men and women in terms of pain referred to the craniofacial region: Kreiner et al. (27) showed that women experienced craniofacial pain more frequently than did men, whereas Danesh-Sani et al. (28) reported a significantly higher frequency of craniofacial pain among men compared to women. In the present study, there was no significant difference between men and women with regard to craniofacial pain (p=0.352). Kreiner et al. (27) found that the most common sites of craniofacial pain were the upper throat and anterior neck (81.7%), left mandible (45.1%), and right mandible (40%). However, Danesh-Sani et al. (28) found the left mandible to be the most common site of referred craniofacial pain. In this study, the occipital region and back of the neck (52.8%), head (43.3%), and anterior neck and throat (41.5%) were found to be the most prevalent sites of craniofacial pain. Pain in the left mandible was perceived by 28.3% of our patients, and the left mandible was affected significantly more often than the right mandible (p=0.02). Contrary to our results, Danesh-Sani et al. (28) and Kreiner et al. (29) reported that the most common site of pain in the absence of chest pain was the maxillofacial region. In this study, patients without chest pain reported the back and lower back as the most prevalent sites of pain. We were not able to take panoramic views of the hospitalized patients; in addition, patients were examined by observation with a dental mirror in the supine position. Danesh-Sani et al. (28) ordered panoramic views for all patients, which might have excluded more patients due to non-cardiac pain. It is noteworthy that the previous studies neither addressed craniofacial pain after angiography nor considered coronary risk factors such as diabetes, hypertension, hyperlipidemia, smoking, and family history. Our study found no relationship between craniofacial pain of cardiac origin and diabetes, hypertension, or family history. However, there was a significant relationship between craniofacial pain and hyperlipidemia (p=0.01) and smoking (p=0.03). In the study of Kreiner et al. (6), three cardiac patients experienced bilateral toothache in the mandible, and one had left maxillary odontogenic pain; in addition, the ratio of bilateral to unilateral craniofacial pain was 6:1, whereas this ratio was 1:1 in the arms. However, there was no report of referred pain in the maxilla or teeth in the present study. Regarding the crucial risk of ischemic heart disease and the possibility of pain referral solely to the craniofacial area, it is recommended that further studies be conducted with larger sample sizes, more elaborate oral examination including radiographic imaging as well as detailed inspection of facial, muscular, and temporomandibular structures, and recording of the pattern of pain. Radiating pain to the face and head areas can be expected quite commonly during a cardiac ischemic event. Since dental practitioners may play an important role in detecting such atypical symptoms of cardiac origin, they should be thoroughly aware of this symptomatology in order to prevent misdirected dental treatment and delay of medical care.
Anti- Candida Activities and GC Mass Analysis of Seeds Hydroalcohlic Extract of Rumex obtusifolius
Background: Nowadays, the use of herbal medicines in the prevention and treatment of diseases has increased around the world, especially in Iran. Given the developing resistance of some Candida species to chemical drugs and the side effects of those drugs, research to find new resources, especially medicinal plants, is of prime importance.
Objectives: The present study aimed at evaluating the anti-Candida activities and antioxidant function of a hydroalcoholic extract of seeds of Rumex obtusifolius, and at analyzing the formulated extract by GC-Mass.
Methods: The Rumex obtusifolius seeds were extracted using ethyl acetate: methanol: distilled water (6:3:1) in a Soxhlet system. The antifungal activity as well as the total free phenolics and flavonoids content were examined. The extract was tested against 40 isolated Candida strains, including Candida albicans and C. glabrata, by the well diffusion method. In addition, the hydroalcoholic extract of R. obtusifolius was evaluated for its antioxidant capacities using 1,1-diphenyl-2-picrylhydrazyl (DPPH) radical scavenging. The components of the extract were analyzed with a Gas chromatography-Mass spectrometry (GC-Mass) instrument.
Results: The minimum inhibitory concentration (MIC) values were 100-150 µg/µL for C. albicans and 150 µg/µL for C. glabrata. The hydroalcoholic extract can strongly scavenge the DPPH radical, and its antioxidant capacities may be correlated with the total free phenolics and total flavonoids. This study revealed the highest antioxidant capacity in the seeds of R. obtusifolius compared to the control groups.
Conclusions: The extract contained a high amount of phenolic compounds, and its antioxidant activity was significant. The seed of R. obtusifolius has strong anticandidal and antioxidant activities, which may be due to the presence of high levels of phenolic compounds, particularly pyrogallol.
Background
The Rumex species, belonging to Polygonaceae family, includes approximately 200 species, which are distributed all over the world. Rumex obtusifolius is a perennial plant widely distributed in North America, Europe, and Iran. It is native to these regions and is commonly known as "Torshak" in Iran and its local name is "toopa" in Mahdishahr (Sangsar), Semnan, Iran. The leaves of this plant are used in foods and soups and they are also used as a raw vegetable. The seeds are used as flour (1). Leaves, the root, and seeds, which have medicinal properties, are used as an herbal medicine.
Rumex obtusifolius is used in the treatment of different diseases. The seeds of R. obtusifolius are used to treat coughs, colds, bronchitis, cancer, burns, and wounds, and have been applied in the treatment of different types of tumors such as epidermal carcinoma, melanoma, and ovarian carcinoma (2). Moreover, R. obtusifolius is used by Indian tribes to treat diseases such as constipation, diarrhea, dysentery, jaundice, skin problems and stomachache, and they also use it as a contraceptive (2,3). Rumex species are known to be rich in anthraquinones, with R. obtusifolius containing emodin, chrysophanol, and physcion (2). Butylated hydroxytoluene (BHT), Folin-Ciocalteu reagent, and trichloroacetic acid (TCA) were obtained from Sigma-Aldrich; all other reagents and chemicals were of analytical grade. Spectrophotometric measurements were performed using a Spectronic Spekol 2000 spectrophotometer (Analytik Jena, Germany).
Plant Collection
Rumex obtusifolius was collected from Jashloobar garden, located in the northwest of Semnan, Iran. Experienced botanists of the University of Applied Science and Technology (UAST) education center, Semnan branch, identified the seeds of the plant. A voucher specimen of R. obtusifolius was kept in the medicinal plants research herbarium at UAST. Seeds of the plants were washed with running tap water, dried in the shade for a week, then crushed into small pieces and finally powdered using an electric blender. The plant seed powder was then stored in plastic bags for further use.
Preparation of the Plant Extracts
Plant extracts were obtained by the following procedure: the extract was prepared by incubating 50 g of powdered seeds with 500 mL of ethyl acetate: methanol: distilled water (6:3:1) overnight at room temperature (RT) before adjusting to 40°C for 8 hours in a Soxhlet system. The extracts were centrifuged (3000 rpm) for 15 minutes, and the supernatants were harvested and filtered through Whatman paper No. 1. Then, the solvents were evaporated by incubation at RT, and the powder of the extract was stored in a glass bottle for the experiments. The extraction yield was calculated as a percentage of the powder used.
The commercial antifungal drug fluconazole (10 µg/disc; Hi-Media, Mumbai, India) and DMSO were applied as the positive control and solvent control, respectively.
Broth Microdilution Method for Determination of Minimum Inhibitory Concentration (MIC)
To assess the antifungal effect, the agar well diffusion method was used: 100 µL of yeast inoculum (10⁶ cells/mL) was cultured onto Sabouraud dextrose agar (SDA). Then, wells were punched into the SDA medium with a Pasteur pipette and filled with 100 µL of serial dilutions of the plant extract; sterile DMSO was used as the negative control. Cultures were incubated at 35°C for 24 hours. The zone of inhibition was measured to determine anti-Candida activity. Tests were performed in duplicate (4).
Determination of Total Phenolic Content
Phenolic compounds of the R. obtusifolius hydroalcoholic extract were estimated using the Folin-Ciocalteu method described previously (6,7). Briefly, the reaction mixture included 800 µL of freshly prepared diluted Folin-Ciocalteu reagent, 200 µL of diluted R. obtusifolius seed extract and 2 mL of 7.5% sodium carbonate, and the final mixture was brought to 10 mL with deionized water. The optical density (OD) of the blue color resulting from the reaction was measured at 765 nm with a Spectronic Spekol 2000 spectrophotometer (Analytik Jena, Germany) after incubation for 2 hours in darkness at ambient conditions to complete the reaction. A gallic acid standard curve was used for quantification (8).
Determination of the Total Flavonoid Contents
In the present study, 1 mL of the extract in methanol and different dilutions of a standard solution of quercetin were brought to 5 mL with distilled water. Then, 0.3 mL of 5% sodium nitrite and 0.3 mL of 10% aluminum chloride were added, with incubation times of 5 and 6 minutes, respectively. Finally, 2 mL of 1 M sodium hydroxide was added, and the total volume was made up to 10 mL with distilled water. The OD of the pink-colored mixture was measured at 510 nm against a freshly prepared reagent blank. The total flavonoid content of the extracts was expressed as mg quercetin equivalents (QE) per gram of sample (mg/g) (9)(10)(11).
Concentration values of the extracts were taken from the quercetin standard curve by interpolating to the X-axis. Total flavonoid contents were calculated using the following formula:
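The formula itself did not survive extraction; the calculation commonly used with this assay, given here as an assumed standard form rather than the authors' exact expression, is:

```latex
% Assumed standard form (the original formula was lost in extraction):
% C = concentration read from the quercetin standard curve,
% V = volume of extract solution, m = mass of sample.
\[
\text{Total flavonoids (mg QE/g)} = \frac{C \cdot V}{m}
\]
```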
DPPH Radical Scavenging Assay
DPPH radical scavenging activity was estimated according to Burits et al. (12), Cuendet et al. (13), and others (14)(15)(16). In this assay, 50 µL of various dilutions of the R. obtusifolius hydroalcoholic extract solution (stock concentration 1 mg/mL) was added to 5 mL of 0.004% methanolic DPPH, shaken vigorously, and incubated for 30 minutes at RT. The scavenging effect on the DPPH radical was read at 517 nm with a spectrophotometer. Ascorbic acid and butylated hydroxytoluene (BHT) were used as positive controls, and a DPPH solution without sample was used as the control. The IC50 value is the concentration of the sample required to scavenge 50% of the DPPH free radical, calculated from the plotted graph of radical scavenging activity versus extract concentration.
The radical scavenging activity was expressed as the radical scavenging percentage using the following equation:

% DPPH radical scavenging = ((A_c − A_s) / A_c) × 100

where A_c = absorbance of the control and A_s = absorbance of the sample solution.
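A minimal Python sketch of these two calculations — percent scavenging from the equation above, then IC50 by linear interpolation on the inhibition curve. The absorbance values are invented for illustration; the study's raw readings are not reported at this granularity.

```python
# Percent DPPH scavenging and IC50 by interpolation (hypothetical data).
import numpy as np

def scavenging_percent(a_control: float, a_sample) -> np.ndarray:
    """% DPPH scavenging = (A_c - A_s) / A_c * 100."""
    return (a_control - np.asarray(a_sample)) / a_control * 100.0

conc = np.array([62.5, 125.0, 250.0, 500.0, 1000.0])   # µg/mL (hypothetical)
a_c = 0.95                                             # control absorbance
a_s = np.array([0.90, 0.82, 0.66, 0.41, 0.18])         # sample absorbances

inhibition = scavenging_percent(a_c, a_s)

# IC50: concentration at 50 % inhibition, interpolated along the curve
# (inhibition must be increasing for np.interp, as it is here).
ic50 = np.interp(50.0, inhibition, conc)
print(f"IC50 ≈ {ic50:.1f} µg/mL")
```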
Gas Chromatography-Mass Spectrometry Analysis
The extract was analyzed using a Gas chromatography-Mass spectrometry instrument (GC-MS, Agilent) equipped with a DB-35 ms capillary column (30 m × 250 µm × 0.15 µm). A constant flow (1 mL/min) of helium gas (99.99%) was used as the carrier gas for GC-MS detection. The injector and mass transfer line temperatures were set at 200°C and 240°C, respectively. The oven temperature was programmed from 60°C to 280°C at 10°C/min. 1 µL of diluted sample was injected, with a split ratio of 1:10 and a mass scan range of 50-600 amu.
The total running time of the GC-MS analysis was 50 minutes. The percentage of each extract constituent was expressed with peak area normalization. The identity of the ingredients in the extract was established by comparing their retention times and mass spectral fragmentation patterns with those stored in the computer library and with the results of published articles. The Wiley library was used to match the identified components from the plant material.
Statistical Analysis
Statistical analysis was conducted by analysis of variance (ANOVA) at a confidence level of 95% using SPSS version 16 software. Linear regression to correlate the antioxidant activity with the total phenolic and total flavonoid contents was performed using Excel 2003. P values less than 0.05 were considered significant for anti-Candida activity.
Results
The seed extract of Rumex obtusifolius was found to have a marked antioxidant effect; its total antioxidant activity and DPPH scavenging activity were determined using spectrophotometry.
Extraction Yield, Total Polyphenolics, and Flavonoid Contents
The phenolics and flavonoids contents, as well as the antiradical activity, vary depending on the extraction method, genetic factors, and climatic/growing conditions; we therefore preferred to use a different and more effective solvent system. As the results of one study (17) revealed, the ethyl acetate: methanol: distilled water (60:30:10) system is a well-known system for extracting antioxidant compounds. The yield of the Rumex obtusifolius seed hydroalcoholic extract was 5.3 g (from 50 g of powder).
Anti-Candida
The anti-Candida activity of the hydroalcoholic extract of R. obtusifolius against C. albicans and C. glabrata is presented in Table 1. The hydroalcoholic extract of R. obtusifolius showed anti-Candida activities against C. albicans (n = 34) and C. glabrata (n = 6). The highest inhibition zone, 18 ± 2.0 mm, was observed against C. albicans, with a minimum inhibition zone of 15 ± 2.0 mm and MIC values of 100-150 µg/µL.
DPPH Radical Scavenging Activity
The antiradical activity of the hydroalcoholic extract, prepared from R. obtusifolius collected in Semnan, Iran, was examined. The extract showed high antioxidant activities in the three tested assays. The DPPH method is known as a primary test: the extract quenches the odd electron of DPPH, which is associated with decreased absorption at 517 nm. The R. obtusifolius seed extract showed a high level of antiradical activity in scavenging the DPPH radical (comparable to BHT as a standard), with a maximum inhibition of about 18.10 at a concentration of 1000 µg/mL (Table 2, upper panel).
Total Free Phenolics
Phenolic compounds are the most common secondary metabolites found in plants, including flavonoids, tannins, and phenolic acids. The total phenolic content (TPC) of R. obtusifolius was determined spectrophotometrically according to the Folin-Ciocalteu method, and the results were expressed as gallic acid equivalents. The standard curve equation used was y (absorbance) = 0.038 × (µg gallic acid) − 0.0132, R² = 0.9859 (Figure 1). The OD value was inserted in the above equation, and the total amount of phenolic compounds was calculated. As displayed in Table 2, lower panel, the total phenolic content of the R. obtusifolius seeds was 97.49 ± 5.36 (in gallic acid equivalents).
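Reading a concentration back from such a linear calibration curve is a one-line inversion; the sketch below uses the two curve equations reported in the text, with hypothetical OD readings.

```python
# Invert a linear calibration curve y = slope*x + intercept to recover
# the concentration x from a measured OD y. Curve coefficients are the
# ones reported in the text; the OD inputs are hypothetical examples.
def conc_from_od(od: float, slope: float, intercept: float) -> float:
    return (od - intercept) / slope

# Gallic acid curve (total phenolics): y = 0.038x - 0.0132
print(conc_from_od(0.50, 0.038, -0.0132))   # µg gallic acid equivalents

# Quercetin curve (total flavonoids): y = 0.0005x - 0.014
print(conc_from_od(0.15, 0.0005, -0.014))   # µg quercetin equivalents
```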
Determination of Total Flavonoid Content
Flavonoids are polyphenolic molecules that protect plants against stress effects. The flavonoids are divided into six major subtypes: chalcones, flavones, isoflavonoids, flavanones, anthoxanthins, and anthocyanins. The total flavonoid content of R. obtusifolius was determined spectrophotometrically by the aluminum chloride colorimetric method described above, and the results were expressed as quercetin equivalents. The standard curve equation used was y (absorbance) = 0.0005 × (µg quercetin) − 0.014, R² = 0.9615 (Figure 2). The OD value was inserted in the above equation, and the total amount of flavonoid compounds was calculated. The overall mean ± SD of the total flavonoids was 332 ± 5.66 mg/g (Table 2, lower panel).
Gas Chromatography-Mass Spectrometry Analysis
The GC-MS analysis of the extract showed the presence of phytocomponents; in total, 9 constituents were identified in the R. obtusifolius seed extract (Table 3). The major chemical constituents were 1,2,3-benzenetriol (pyrogallol, 62.821%), octadecenoic acid (oleic acid, 11.976%), hexadecanoic acid (palmitic acid, 10.095%), and linoleic acid (8.5%). Pyrogallol occurs naturally in many plants and was the predominant phenolic compound in the seed extract.
Discussion
In recent decades, the increased use of immunosuppressive drugs and long-term antibiotic treatment has led to an increase in infectious diseases and candidiasis. At the same time, an increase in the use of chemical antifungal drugs and in the prevalence of drug-resistant C. albicans has been observed. Researchers have therefore paid attention to pharmaceutical compounds from natural plant sources, and traditional medicine has proven the benefits of herbal medicines. Antioxidants play an important role in preventing human diseases (6). Compounds from natural resources with antioxidant activity may function as free radical scavengers and reducing agents, protecting the body from degenerative diseases (18). The antioxidant activity of the seed extract was examined through its ability to scavenge the stable DPPH free radical. The results showed that the seed extract of R. obtusifolius gave the highest inhibition zone against C. albicans, 18 ± 2.0 mm, with a minimum inhibition zone of 15 ± 2.0 mm and MIC values of 100-150 µg/µL.
On the other hand, the seed extract was able to reduce free radicals significantly, which is related to the presence of high amounts of flavonoids and phenolic acids in the seeds of R. obtusifolius. These findings are in agreement with other studies (18,19). Many studies have shown that pyrogallol displays a broad spectrum of pharmacological and health-promoting effects in animal models of disease. Pyrogallol has been reported to possess the ability to scavenge free radicals, to act as an antioxidant, and to inhibit the formation of carcinogenic metabolites (20)(21)(22)(23)(24). Pyrogallol has also been evaluated as a potential inhibitor of the acetylcholinesterase enzyme (25). Baruah et al. (26) reported that the potency of pyrogallol to induce protective immunity makes it a probable natural protective agent that might prevent various infections, and that this compound is a potential antimicrobial agent. Kang et al. (27) reported that the antioxidant activities of pyrogallol were higher than those of the commercial antioxidant and ascorbic acid; moreover, they found that pyrogallol effectively inhibited DNA damage induced by H2O2 (21,28). Our results revealed that the seed of R. obtusifolius has strong antioxidant and anticandidal activities, which may be due to the presence of high levels of phenolic compounds, particularly pyrogallol. These findings are in agreement with other studies (21,22). In addition, Sadeghi Nejad et al. (4) reported the anti-Candida activity of a Heracleum persicum hydroalcoholic extract, and Mahdavi Omran et al. (29) reported the anti-Candida activity of the essential oils of some medicinal plants in Iran, such as thyme, pennyroyal, and lemon.
In conclusion, the natural antioxidative compound pyrogallol can be used in the natural food industry as well as in medicinal plant-based drugs such as antioxidative and anticancer agents, because of its efficacy, availability, low cost, and low toxicity. Phenolic and flavonoid compounds with great antioxidant and antifungal potential are available in this plant. However, further research is needed to introduce such natural medicine for the treatment of different diseases and to establish it as a therapeutic and natural agent with antioxidant and fungicidal compounds.
|
Thermodynamics of moisture sorption in tobacco ( Nicotiana tabacum L.) seeds
In this study, the thermodynamic characteristics of tobacco seeds were investigated, based on experimental data from equilibrium moisture isotherms at desorption. An empirical exponential relationship was found for the decrease of the net isosteric heat and the differential entropy, based on the Clausius-Clapeyron equation, in response to the increase in moisture content; the values of the two parameters varied from 31.09 to 1.46 kJ.mol⁻¹ and from 91.53 to 4.29 J.mol⁻¹.K⁻¹, respectively. The linear relationship between the enthalpy and the entropy in the moisture desorption of tobacco seeds supported the validity of the enthalpy-entropy theory. The value of the Gibbs free energy was positive and the harmonic temperature (297.5 K) was lower than the isokinetic temperature (339.6 K). The outcomes of the study provide deeper insight into the process of moisture desorption of tobacco seeds.
Introduction
Desorption isotherms are an essential element of the analysis and control of different processes related to the preservation, drying, packaging, and blending of food products [1]. The thermodynamic properties of various biological objects, i.e. the heat of sorption and the differential entropy, can be characterized on the basis of sorption data analysis [2,3]. These properties provide important evidence about the mechanism of water sorption and its bonding to the solid skeleton; they are also useful for energy valuation in accomplishing the process of drying of various food and agricultural products.
A number of studies have validated the applicability of the Clausius-Clapeyron equation for the determination of the heat of sorption of various objects based on the data derived from the integral interpretation of sorption isotherms at different temperature levels [4,5]. In turn, the enthalpy-entropy compensation theory (EECT) has been found highly relevant in the exploration of desorption reactions in various experimental matrices [6,7,8].
Tobacco seeds (Nicotiana tabacum L.) can be regarded an underestimated plant biomass with potential for use in diverse value-added products; moreover, they are a waste from tobacco leaf production available in large amounts in many countries [9][10][11]. Tobacco seeds are nicotine-free and rich in nutritive and bioactive compounds (vitamins, minerals, fiber, protein, glyceride oil, etc.), with a number of benefits in human and animal nutrition [12][13][14].
A critical moment in the collection and storage of tobacco seeds with food and feed functions is the compulsory prevention of detrimental processes, which are essentially related to the availability of water that is bound in the biological matrix. Therefore, all aspects of the hygroscopic properties of tobacco seeds are equally important in the overall assessment of seed quality, of proposed processing technologies or of seed use potential benefits.
According to a preceding study (own unpublished data) the modified Chung-Pfost model [15] was the best fitting for the description of the desorption isotherms of Oriental tobacco seeds. Therefore, current study applies the obtained modified model for the determination of the basic thermodynamic properties of tobacco seeds, the net isosteric heat of desorption and the differential entropy, with the vision of a deeper insight into the process of moisture desorption of tobacco seeds. The outcomes from the study provide new details about the hygroscopic behavior of tobacco seeds, which might be of practical importance in seed safe storage and usability design choices.
Plant material
Authentic seeds of "Kroumovgrad 90" variety of the Oriental type tobacco (N. tabacum L.) were the primary raw material in the study. According to the objectives of this work, the seeds were producer-supplied and not intended for planting purposes [16].
Desorption isotherms
The basis for the determination of the thermodynamic properties of tobacco seeds in this study were their desorption isotherms, obtained at three temperatures, namely 10°C, 25°C and 40°C, and water activities (aw) ranging from 0.113 to 0.823. The modified Chung-Pfost model was defined as the most adequate for the description of the obtained desorption isotherms. The Chung-Pfost model, correlating the equilibrium moisture content (M) of tobacco seeds with water activity, was used in its standard modified form:

M = −(1/C) ln[−(T + B) ln(aw)/A]  (1)

where A, B and C are model coefficients and T is the temperature.
Net isosteric heat of desorption
The total isosteric heat of desorption is the sum of the latent heat of evaporation of pure water and the net heat of desorption. The latter can be obtained from the model describing the respective desorption isotherms at a series of temperatures and the Clausius-Clapeyron equation [17,18]:

Qst,n = qst − Lv = −R [∂ln(aw)/∂(1/T)]  (2)

where: Qst,n is the net isosteric heat of desorption, J.mol⁻¹; qst — the total isosteric heat of desorption, J.mol⁻¹; R — the universal gas constant (8.314 J.mol⁻¹.K⁻¹); Lv — the latent heat of vaporization of water, J.mol⁻¹; aw — the water activity; and T — the absolute temperature, K. The Clausius-Clapeyron equation is transformed into its linear form [19]:

ln(aw) = −(Qst,n/R)(1/T) + const  (3)

This equation provides the linear correlation between ln(aw) and (1/T) at a constant equilibrium moisture content (Meq = const).
Then, by applying the Chung-Pfost model (eq. 1), the values of aw were calculated corresponding to moisture contents varying from 5% to 15% in steps of 2% (on a dry weight basis, d.b.) at each of the three temperatures (10ºC, 25ºC and 40ºC). Qst,n values were obtained from the slope of the line in the graphical dependency between ln(aw) and (1/T).
Differential entropy
The differential entropy of desorption (ΔS) can be derived from the Gibbs-Helmholtz equation [20]:

ΔS = (Qst,n − ΔG)/T  (4)

where the Gibbs free energy (ΔG) is calculated as:

ΔG = −R T ln(aw)  (5)

The interrelation between the shift in water sorption capacity and ΔG is paralleled by resultant variations in the enthalpy and the entropy. When equation (5) is substituted in equation (4), the following model is obtained:

ln(aw) = −(Qst,n/R)(1/T) + ΔS/R  (6)

Then, ΔS values can be obtained from the intercept of the linearized experimental dependency of ln(aw) on (1/T).
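In practice, eqs. (3) and (6) reduce to a linear fit of ln(aw) against 1/T at fixed Meq; a minimal Python sketch follows, where the aw values are placeholders rather than the paper's data.

```python
# Linear fit of ln(a_w) vs 1/T at fixed M_eq: by eq. (3) the slope is
# -Q_st,n / R, and by eq. (6) the intercept is dS / R.
import numpy as np

R = 8.314                                     # J mol^-1 K^-1
T = np.array([283.15, 298.15, 313.15])        # 10, 25, 40 °C in kelvin
a_w = np.array([0.30, 0.35, 0.40])            # hypothetical, M_eq = const

slope, intercept = np.polyfit(1.0 / T, np.log(a_w), 1)
Q_st_n = -slope * R                           # net isosteric heat, J/mol
dS = intercept * R                            # differential entropy, J/(mol K)
print(f"Q_st,n = {Q_st_n / 1000:.2f} kJ/mol, dS = {dS:.2f} J/(mol K)")
```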
In order to describe the relationship between Qst,n and the moisture content, the following empirically-derived exponential correlation [21] can be used:

Qst,n = qo exp(−Meq/Mo)  (7)

where: Meq is the equilibrium moisture content, % d.b.; Mo — the initial moisture content, % d.b.; qo — the net isosteric heat of desorption of the first molecular layer, J.mol⁻¹.
Enthalpy-entropy compensation theory (EECT)
According to the postulates of the EECT, the relationship between the enthalpy and the entropy of the desorption process in many food products is linear [3,5,6]:

Qst,n = Tβ ΔS + ΔGβ  (8)

where: Tβ is the isokinetic temperature and ΔGβ is the free energy at that temperature.
In this equation, Tβ is a highly informative indicator, suggestive of the temperature at which all interactions within the solid body progress at an equal rate [4]. The mathematical sign of the ΔGβ value (negative or positive) shows whether the process of desorption is spontaneous or non-spontaneous. Therefore, Tβ and ΔGβ were computed by equation (8), providing the dependency of Qst,n on ΔS. Then a test for the existence of true compensation was applied, as recommended in [22], which involves the comparison between Tβ and the harmonic mean temperature (Thm):

Thm = n / Σ(1/Ti)  (9)

where n is the number of experimental isotherms and Ti are their temperatures.
The EECT postulate is valid on the condition that Tβ ≠ Thm. Depending on the numerical correlation between the two parameters, the process may be governed either by enthalpy (Tβ > Thm) or by entropy (Tβ < Thm) [23].
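The harmonic mean temperature for the three experimental isotherms can be checked in a couple of lines:

```python
# Harmonic mean temperature (eq. 9) for the three isotherm temperatures.
temps_K = [283.15, 298.15, 313.15]            # 10, 25, 40 °C
T_hm = len(temps_K) / sum(1.0 / T for T in temps_K)
print(f"T_hm = {T_hm:.1f} K")   # ~297.6 K, in line with the reported 297.5 K
```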
According to [24], the parameter ΔGβ has limited impact on enthalpy variation and can be excluded from the model; thus, the following equation is obtained:

Qst,n = Tβ ΔS  (10)

The following equation, proposed in [24,25], discloses the application of the EECT in the assessment of the relationship between the temperature and the adsorption capacity:

ln(aw) = (1/T − 1/Tβ) f(M)  (11)

where f(M) is an empirical function of Meq.
Therefore, the experimental sorption data can be presented on a graph with coordinates ln[(1/T − 1/Tβ)⁻¹ ln(aw)] and Meq, respectively. In this context, [24] and [26] recommend an adequate exponential function for the water equilibrium in foods that takes into consideration the influence of temperature, in the following form:

f(M) = exp(K1 + K2 M)  (12)

Then

ln[(1/T − 1/Tβ)⁻¹ ln(aw)] = K1 + K2 M  (13)

After plotting ln[(1/T − 1/Tβ)⁻¹ ln(aw)] versus M, K1 and K2 are defined as the coefficients of the linear equation.
Isosteric heat of desorption and differential entropy
The precise characterization of the scale of the heat of sorption, i.e. the energy of the bonds between water molecules and the solid skeleton, plays a decisive role in equipment design and in mode selection for the drying of a given product [25]. In order to eliminate water from a moist material by heat drying, water has to evaporate and vapors must migrate outside the product. The phase liquid-to-vapor conversion involves certain energy consumption. The isothermal evaporation of bound water requires an additional amount of energy, which is to be used for the disruption of water-material bonds; namely, that represents the heat of sorption.
The Chung-Pfost model (eq. 1), previously defined as the most adequate for the description of the obtained desorption isotherms of tobacco seeds, was applied for the determination of the isosteric heat of desorption and the entropy change in the desorption process. The results are shown in Fig. 1; each straight line approximating the calculated data sets the pattern of the change in the amount of heat necessary for water elimination at a given moisture content. From the slope of each line, determined by the least-squares method, the respective values of the isosteric heat of desorption were calculated.
Fig. 1. Desorption isosterics of tobacco seeds.
The correlation between the two parameters derived for tobacco seeds from the Clausius-Clapeyron equation, Qst,n and Meq, is given in Fig. 2. As seen in this figure, Qst,n decreased significantly with the elevation of Meq, from 31.09 to 1.46 kJ.mol⁻¹ over the specified variation range of Meq between 5% and 15% d.b.
Fig. 2. Isosteric heat of desorption Qst,n of tobacco seeds
The observation of high Qst,n values at low Meq levels is associated with the availability of active sites on the object's surface (tobacco seeds), which are the attractive spots for the formation of water monomolecular layers [21,27]. These layers require greater amounts of energy in order to remove water molecules from the material [27]. With the advance of moisture incorporation within the solid body (increased Meq of tobacco seeds), the number of accessible water-bonding sites decreases, and subsequently lower values of Qst,n are established [28]. These observations are fully supported by previous findings for other solid plant fractions, such as lime seeds [29] and coffee beans [19]. The Tsami equation, as announced in [21], was applied to examine the impact of the change in Meq on Qst,n in the drying of tobacco seeds, in the following form:

Qst,n = qo exp(−Meq/Mo)  (14)

In this equation, the qo and Mo values for the processed desorption data were 143.43 kJ.mol⁻¹ and 3.3% d.b., respectively, with R² equal to 1.0.
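Assuming eq. (14) has the standard Tsami form shown above (the displayed equation did not survive extraction), the fitted coefficients reproduce the reported endpoints of Fig. 2 reasonably well:

```python
# Check of the fitted Tsami relation against the reported Q_st,n range
# (31.09 -> 1.46 kJ/mol over M_eq = 5 -> 15 % d.b.), assuming the
# standard form Q = q0 * exp(-M / M0).
import math

q0, M0 = 143.43, 3.3          # kJ/mol and % d.b., as reported
for M in (5.0, 15.0):
    print(f"M = {M:4.1f} % d.b. -> Q_st,n ≈ {q0 * math.exp(-M / M0):.2f} kJ/mol")
# prints ~31.5 and ~1.5 kJ/mol, close to the reported values
```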
The Tsami model was also used to assess the Qst,n/Meq interrelation for tobacco seeds, considering analogous observations for other plant matrices, for example coffee [19] and red algae [30].
The respective levels of ΔS corresponding to each of the regarded Meq variations were determined by relating the equilibrium data with equation (6). Fig. 3 displays the functional dependency between ΔS and Meq resulting from these calculations.
Fig. 3. Differential entropy (ΔS) of tobacco seeds
The results suggested a strong inverse relationship between the two characteristics of the water transfer process in tobacco seeds, ΔS and Meq; the relationship profile was similar to the observations regarding the Qst,n response to Meq variation. The values of ΔS ranged from 4.29 to 91.53 J.mol⁻¹.K⁻¹ at Meq levels decreasing from 15% to 5% d.b. Our findings complied well with the statement that the differential entropy is reduced when molecular movements are more restricted, i.e. when the material has a higher water content [27].
Comparable data are found in the studies by [4] for the desorption entropy of garlic, by [31] for melon seed and cassava, and by [32] for sesame seeds.
The variation of ΔS with Meq for tobacco seeds was adequately expressed by an exponential equation of the same form as eq. (14):

ΔS = so exp(−Meq/Mo)  (15)
Enthalpy-entropy compensation
A marked straight line related Qst,n to ΔS for tobacco seeds, within the integrated temperature range (Fig. 4). The presence of pronounced and proportionate interrelation between the two thermodynamic indicators provides evidence in support of the EECT supposition, as described above [23].
Fig. 4. Enthalpy-entropy relationship for water desorption in tobacco seeds
Further, the isokinetic temperature (Tβ) and the free energy (ΔGβ) were defined (eq. (8)), and the characteristic parameters of the enthalpy-entropy relationship for tobacco seeds were:

Qst,n = 339.6 ΔS + 3.3  (16)

(with Qst,n in J.mol⁻¹ and ΔS in J.mol⁻¹.K⁻¹). The positive sign of the ΔGβ value is indicative of an energy-absorbing, non-spontaneous reaction, which obliges energy provision from the surrounding environment into the material [3]. The value of ΔGβ for the studied tobacco seeds, averaged over the entire water content domain, was found to be positive but low (3.3 J.mol⁻¹), thus indicating a non-spontaneous desorption process.
The isokinetic temperature value (Tβ) was 339.6 K and the harmonic temperature (Thm) was found to be 297.5 K. It was observed that Tβ ≠ Thm and this difference corroborates the enthalpy-entropy compensation theory. In particular, our results showed that Tβ > Thm, therefore the desorption process of tobacco seeds is an enthalpy-driven mechanism [23]; this suggests that tobacco seeds are a stable solid matrix and are not susceptible for structural alterations accompanying the moisture removal within the 10ºC ÷ 40ºC temperature zone.
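Tβ and ΔGβ come from a straight-line fit of Qst,n against ΔS (eq. 8); as a sanity check, the reported pair (339.6 K, 3.3 J.mol⁻¹) reproduces both ends of the data: 339.6 × 91.53 + 3.3 ≈ 31.1 kJ.mol⁻¹ and 339.6 × 4.29 + 3.3 ≈ 1.46 kJ.mol⁻¹. A minimal fitting sketch, with points generated from the reported parameters since the full data set is not listed:

```python
# Fit Q_st,n = T_beta * dS + dG_beta (eq. 8): the slope is the isokinetic
# temperature and the intercept is dG_beta.
import numpy as np

dS = np.array([4.29, 20.0, 50.0, 91.53])      # J/(mol K); endpoints as reported
Q = 339.6 * dS + 3.3                          # J/mol, exact line for demonstration

T_beta, dG_beta = np.polyfit(dS, Q, 1)
print(f"T_beta = {T_beta:.1f} K, dG_beta = {dG_beta:.2f} J/mol")
```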
The next step in the study was the application of the EECT for demonstrating the temperature influence on the sorption of tobacco seeds, using equations (12) and (13). Fig. 5 shows the graph of the temperature-related factor ln[(1/T − 1/Tβ)⁻¹ ln(aw)] (eq. 13) against the Meq of tobacco seeds; as seen, the approximation was that of a straight line. Therefore, the EECT (the isokinetic theory) could be effectively applied to the water desorption by tobacco seeds, identified in this study as an enthalpy-controlled process.
Conclusions
The net isosteric heat of desorption and the differential entropy of tobacco seeds (variety "Kroumovgrad 90", Oriental type) were determined based on experimental desorption data obtained at 10°C, 25°C and 40°C and water activity levels from 0.113 to 0.823. Both thermodynamic indices showed an inverse, exponential in form, relationship with the equilibrium moisture content. The value of the Gibbs free energy had a positive sign and the isokinetic temperature (339.6 K) exceeded the harmonic temperature (297.5 K), thus revealing that moisture desorption from tobacco seeds is an endergonic and enthalpy-controlled process. Therefore, the EECT fitted the water desorption characteristics of tobacco seeds adequately. These findings contribute to a deeper insight into the process of moisture desorption of tobacco seeds.
This study did not receive any specific funding.
Product of invariant types modulo domination–equivalence
We investigate the interaction between the product of invariant types and domination–equivalence. We present a theory where the latter is not a congruence with respect to the former, provide sufficient conditions for it to be, and study the resulting quotient when it is.
Mathematics Subject Classification 03C45
To a sufficiently saturated model of a first-order theory one can associate a semigroup, that of global invariant types with the tensor product ⊗. This can be endowed with two equivalence relations, called domination-equivalence and equidominance. This paper studies the resulting quotients, starting from sufficient conditions for ⊗ to be well-defined on them. We show, correcting a remark in [3], that this need not be always the case.
Let S(U) be the space of types in any finite number of variables over a model U of a first-order theory that is κ-saturated and κ-strongly homogeneous for some large κ. For any set A ⊆ U, one has a natural action on S(U) by the group Aut(U/A) of automorphisms of U that fix A pointwise. The space S inv (U) of global invariant types consists of those elements of S(U) which, for some small A, are fixed points of the action Aut(U/A) ↷ S(U). Each of these types has a canonical extension to bigger models U_1 ≻ U, namely the unique one which is a fixed point of the action Aut(U_1/A) ↷ S(U_1), and this allows us to define an associative product ⊗ on S inv (U). This is the semigroup which we are going to quotient.
We say that a global type p dominates a global type q when p together with a small set of formulas entails q. This is a preorder, and we call the induced equivalence relation domination-equivalence. We also look at equidominance, the refinement of domination-equivalence obtained by requiring that domination of p by q and of q by p can be witnessed by the same set of formulas. These notions have their roots in the work of Lascar, who in [9] generalised the Rudin-Keisler order on ultrafilters to types of a theory; his preorder was subsequently generalised to domination between stationary types in a stable theory.
Equidominance reached its current form in [3], where it was used to prove a result of Ax-Kochen-Ershov flavour; namely, that in the case of algebraically closed valued fields one can compute the quotient of the semigroup of global invariant types by equidominance, and it turns out to be commutative and to decompose in terms of value group and residue field. It was also claimed, without proof, that such a semigroup is also well-defined and commutative in any complete first-order theory. The starting point of this research was to try to fill this gap by proving these claims. After trying in vain to prove well-definedness of the quotient semigroup, the author started to investigate sufficient conditions for it to hold. Eventually, a counterexample arose: Theorem There is a ternary, ω-categorical, supersimple theory of SU-rank 2 with degenerate algebraic closure in which neither domination-equivalence nor equidominance are congruences with respect to ⊗.
The paper is organised as follows. In Sect. 1 we define the main object of study, namely the quotient Inv(U) of the semigroup of global invariant types modulo domination-equivalence, provide some sufficient conditions for it to be well-defined and investigate its most basic properties. In Sect. 2 we prove the theorem above, which shows that Inv(U) need not be well-defined in general; we also show (Corollary 2.12) that in the theory of the Random Graph Inv(U) is not commutative. In Sect. 3 we prove that definability, finite satisfiability, generic stability (Theorem 3.5) and weak orthogonality to a type (Proposition 3.13) are preserved downwards by domination. This is useful in explicit computations of Inv(U) and yields as a by-product (Corollary 3.11) that another, smaller object based on generically stable types is instead well-defined in full generality. In Sect. 4 we explore whether and how much Inv(U) depends on U; we show (Corollary 4.7) that its independence from the choice of U implies NIP. Section 5 gathers some previously known results from classical stability theory and explores their consequences in the context of this paper (e.g. Theorem 5.11). Sections from 2 to 4 depend on Sect. 1 but can be read independently of each other; Sect. 5 contains references to all previous sections but can in principle be read after Sect. 1.
Set-up
Notations and conventions are standard, and we now recall some of them.
We work in an arbitrary complete theory T, in a first-order language L, with infinite models. As customary, all mentioned inclusions between models of T are assumed to be elementary maps, and we call models of T which are κ-saturated and κ-strongly homogeneous for a large enough κ "monster" models; we denote them by U, U_0, etc. Saying that A ⊆ U is small means that U is |A|⁺-saturated and |A|⁺-strongly homogeneous, and is sometimes denoted by A ⊂⁺ U, or A ≺⁺ U if additionally A ≺ U. Large means "not small". The letters A and M usually represent, respectively, a small subset and a small elementary substructure of U.
Parameters and variables are tacitly allowed to be finite tuples unless otherwise specified, and we abuse the notation by writing e.g. a ∈ U instead of a ∈ U |a| . Coordinates of a tuple are indicated with subscripts, starting with 0, so for instance a = (a 0 , . . . , a |a|−1 ). To avoid confusion, indices for a sequence of tuples are written as superscripts, as in a i | i ∈ I . The letters x, y, z, w, t denote tuples of variables, the letters a, b, c, d, e, m denote tuples of elements of a model.
A global type is a complete type over U. "Type over B" means "complete type over B". We say "partial type" otherwise. We sometimes write e.g. p x in place of p(x) and denote with S x (B) the space of types in variables x.
When mentioning realisations of global types, or supersets of a monster, we implicitly think of them as living inside a bigger monster model, which usually goes unnamed. Similarly, implications are to be understood modulo the elementary diagram ed(U*) of an ambient monster model U*, e.g. if c ∈ U* \ U and p ∈ S(Uc), then (p ↾ U) ⊨ p is a shorthand for (p ↾ U) ∪ ed(U*) ⊨ p. We sometimes take deductive closures implicitly, as in "{x = a} ∈ S_x(U)".
If we define a property a theory may have, and then we say that a structure has it, we mean that its complete theory does. When we say "L-formula", we mean without parameters; for emphasis, we sometimes write L(∅), with the same meaning as L. In formulas, (tuples of) variables will be separated by commas or semicolons. The distinction is purely cosmetic, to help readability, and usually it means we regard the variables on the left of the semicolon as "object variables" and the ones on the right as "parameter variables", e.g. we may write ϕ(x, y; w) ∈ L, ϕ(x, y; d) ∈ p(x) ⊗ q(y).
If p(x), q(y) ∈ S(B) and A ⊆ B, we write S_pq(A) for the set of types r(x, y) ∈ S_xy(A) such that r ⊇ (p ↾ A) ∪ (q ↾ A). In situations like the one above, we implicitly assume, for convenience and with no loss of generality, that x and y share no common variable.

Proposition 1.2 [15, p. 19] Let A be small. Given an A-invariant type p ∈ S_x(U) and a set of parameters B ⊇ U, there is a unique extension p | B of p to an A-invariant type over B, and it is given by requiring, for all ϕ(x; y) ∈ L and b ∈ B, that ϕ(x; b) ∈ p | B if and only if ϕ(x; d) ∈ p for some (equivalently, every) d ∈ U with d ≡_A b.

Also note that if A_1 ⊇ A is another small set then p | B is also the unique A_1-invariant extension of p. All this ensures that the following operation is well-defined, i.e. does not depend on the choice of b ⊨ q and on whether we regard p as A-invariant or A_1-invariant: for p ∈ S_x^inv(U, A) and q ∈ S_y(U), the product p(x) ⊗ q(y) is defined as tp(a, b/U), where b ⊨ q and a ⊨ p | Ub.
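As a standard worked instance of this product (not taken from the paper, though consistent with Example 1.11 below): in DLO, let p be the global type at +∞; if b ⊨ p and a ⊨ p | Ub, then a > b, so:

```latex
% Worked instance of \otimes in DLO, with p the type at +infinity:
% b realises p, and a realises p | Ub, hence a lies above b.
\[
p(x) \otimes p(y) \;\supseteq\; \{\, x > d,\ y > d : d \in \mathcal{U} \,\} \cup \{\, x > y \,\}
\]
```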
Domination
Definition 1.6 Let p ∈ S x (U) and q ∈ S y (U).
1. We say that p dominates q, and write p ≥_D q, iff there are some small A and some r ∈ S_xy(A) such that
• r ∈ S_pq(A), and
• p(x) ∪ r(x, y) ⊨ q(y).
2. We say that p and q are domination-equivalent, and write p ∼_D q, iff p ≥_D q and q ≥_D p. 3. We say that p and q are equidominant, and write p ≡_D q, iff there are some small A and some r ∈ S_xy(A) such that
• r ∈ S_pq(A),
• p(x) ∪ r(x, y) ⊨ q(y), and
• q(y) ∪ r(x, y) ⊨ p(x).
So p ≡_D q if and only if both p ≥_D q and q ≥_D p hold, and both statements can be witnessed by the same r. To put it differently, a direct definition of p ∼_D q can be obtained by replacing, in the last clause of the definition of p ≡_D q, the small type r with another small type r′, possibly different from r. That the last two relations are in general distinct can be seen for instance in DLO together with a dense-codense predicate; see Example 1.11. Note that we are not requiring p ∪ r to be complete; in other words, domination is "small-type semi-isolation", as opposed to "small-type isolation". The finer relation of semi-isolation, also known as the global RK-order,¹ was studied for instance in [16]. Proposition 1.7 ≥_D and ≡_D are respectively a preorder and an equivalence relation on S_<ω(U). Consequently, ∼_D is an equivalence relation as well.
Proof The only non-obvious thing is transitivity. We prove it for ≡_D first, as the proof for ≥_D is even easier. Suppose that r(x, y) ∈ S_{p_0 p_1}(A_r) witnesses p_0(x) ≡_D p_1(y) and that s(y, z) ∈ S_{p_1 p_2}(A_s) witnesses p_1(y) ≡_D p_2(z). Up to taking a larger A and then completing r, s to types with parameters from A, we can assume A_r = A_s = A. By hypothesis and compactness, for every formula ϕ(z) ∈ p_2 there are formulas ψ(y, z) ∈ s, θ(y) ∈ p_1 and χ(x, y) ∈ r such that p_0 ∪ {χ(x, y)} ⊨ θ(y) and {θ(y) ∧ ψ(y, z)} ⊨ ϕ(z). If we let σ_ϕ(x, z) := ∃y (χ(x, y) ∧ ψ(y, z)), then p_0(x) ∪ {σ_ϕ(x, z)} ⊨ ϕ(z). Moreover, we have σ_ϕ(x, z) ∈ L(A). Analogously, for each δ(x) ∈ p_0 we can find ρ_δ(z, x) ∈ L(A) such that p_2(z) ∪ {ρ_δ(z, x)} ⊨ δ(x), obtained in the same way mutatis mutandis. It is now enough to show that the set p_0(x) ∪ r(x, y) ∪ s(y, z) is consistent, as this will in particular entail consistency of p_0(x) ∪ {σ_ϕ(x, z) | ϕ ∈ p_2} ∪ {ρ_δ(z, x) | δ ∈ p_0}, which will therefore have a completion to a type in S_{p_0 p_2}(A) witnessing p_0 ≡_D p_2. To see that p_0 ∪ r ∪ s is consistent, in a larger monster U_1 let (a, b) ⊨ p_0 ∪ r and (b̃, c) ⊨ p_1 ∪ s. Since b and b̃ both realise the complete type p_1 ↾ A, there is f ∈ Aut(U_1/A) with f(b̃) = b; then (b, f(c)) ⊨ s, and so (a, b, f(c)) ⊨ p_0 ∪ r ∪ s. The proof for ≥_D is exactly the same, except we do not need to consider the ρ_δ formulas.
As we are interested in the interaction of these notions with ⊗, we restrict our attention to quotients of S inv (U). Note that, by the following lemma, whether or not p ∈ S inv (U) only depends on its equivalence class.
Lemma 1.8 If p ∈ S_x^inv(U, A) and r ∈ S_xy(B) are such that p ∪ r is consistent and p ∪ r ⊨ q ∈ S_y(U), then q is invariant over AB.
Proof The set of formulas p∪r is fixed by Aut(U/AB) and implies q. As q is complete, the conclusion follows.
Anyway, q will not in general be A-invariant: for instance, by the proof of point 3 of Proposition 1.19, for every p and every realised q we have p ≥_D q, and it is enough to take q realised in U \ dcl(A) to get a counterexample. Note that, if p ∪ r ⊨ q, by passing to a suitable extension of r there is no harm in enlarging its domain, provided it stays small. This sort of manipulation will from now on be done tacitly.
Remark 1.10
In [3], the name domination-equivalence is used to refer to ≡_D (no mention is made of ≥_D and ∼_D). The reason for this change in terminology is to ensure consistency with the notions with the same names classically defined for stable theories, which coincide with the ones just defined (see Sect. 5). As Inv(U), the quotient of S inv (U) by ∼_D, carries a poset structure, and is in some sense better behaved than the quotient Ĩnv(U) of S inv (U) by ≡_D, we mostly focus on the former. Example 1.11 1. It is easy to see that, in any strongly minimal theory, two global types are domination-equivalent, equivalently equidominant, precisely when they have the same dimension over U. 2. In DLO, if p(x) is the type at +∞, then p(x) ≡_D p(y) ⊗ p(z), as can be easily seen by using some r containing the formula x = z. 3. The two equivalence relations differ in the theory DLOP of a DLO with a dense-codense predicate P. In this case, if p(x) is the type at +∞ in P, and q(y) is the type at +∞ in ¬P, then p(x) ≥_D q(y) (resp. p(x) ≤_D q(y)) can be witnessed by any r containing y > x (resp. y < x). To show p ≢_D q, take any r ∈ S_pq(A).
It follows from quantifier elimination that, if for instance r ⊨ x > y, then p ∪ r ⊭ y > b for b ∈ U larger than every element of A, and a fortiori p ∪ r ⊭ q. The reason the two equivalence relations may differ is, simply, that even if there are r_0 and r_1 such that p ∪ r_0 ⊨ q and q ∪ r_1 ⊨ p, we may still have that the union r_0 ∪ r_1 is inconsistent. 4. The two equivalence relations may differ even in a stable theory, as shown by [17, Example 5.2.9] together with the fact (Proposition 5.4) that the classical definitions via forking (see Definition 5.3) in stable theories coincide with the ones in Definition 1.6.
Interaction with ⊗
We start our investigation of the compatibility of ⊗ with ≥ D and ≡ D with two easy lemmas. While the first one will not be needed until later, the second one will be used repeatedly.
Lemma 1.12 If r ∈ S_pq(A) witnesses p ≥_D q and B ⊇ A is small, then (p ↾ B) ∪ r ⊨ q ↾ B.

Proof Let ψ(y) ∈ q ↾ B. By hypothesis and compactness there is χ(x, y) ∈ r such that p ⊨ ∀y (χ(x, y) → ψ(y)). As A ⊆ B, this formula is in p ↾ B.
Lemma 1.13 If p_x, q_y ∈ S inv (U, A) and r ∈ S_pq(A) is such that p ∪ r ⊨ q, then for all sets of parameters B ⊇ U we have (p | B) ∪ r ⊨ q | B.

Proof Let ϕ(y; w) be an L(∅)-formula and b ∈ B be such that ϕ(y; b) ∈ q | B. Pick any b̃ ∈ U such that b̃ ≡_A b. By definition of q | B we have ϕ(y; b̃) ∈ q, so by hypothesis and compactness there is an L(A)-formula ψ(x, y) ∈ r(x, y) such that p ⊨ ∀y (ψ(x, y) → ϕ(y; b̃)). But then, by definition of p | B and the fact that b̃ ≡_A b, we have p | B ⊨ ∀y (ψ(x, y) → ϕ(y; b)), whence (p | B) ∪ r ⊨ ϕ(y; b).

Notation We adopt from now on the following conventions. The letter A continues to denote a small set. The symbols p, q, possibly with subscripts, denote global A-invariant types, and r stands for an element of, say, S_pq(A) witnessing domination or equidominance.
In the special case where the same r also witnesses p 1 ≥ D p 0 , for the same s we also obtain p 1 ⊗ q ≥ D p 0 ⊗ q; in other words, multiplication on the right preserves ≡ D as well. One may expect a similar result to hold when multiplying on the left by p a relation of the form q 0 ≥ D q 1 , and indeed it was claimed (without proof) in [3] that ≡ D is a congruence with respect to ⊗. Unfortunately, this turns out not to be true in general: we will see in Sect. 2 that it is possible to have q 0 ≡ D q 1 and p ⊗ q 0 ≱ D p ⊗ q 1 simultaneously. For the time being, we assume this does not happen as a hypothesis and explore some of its immediate consequences.
Definition 1.15 For a theory T , we say that ⊗ respects ≥ D (resp. ≡ D ) iff whenever q 0 ≥ D q 1 (resp. q 0 ≡ D q 1 ) and p is any global invariant type, we have p ⊗ q 0 ≥ D p ⊗ q 1 (resp. p ⊗ q 0 ≡ D p ⊗ q 1 ).
In this case ∼ D is a congruence with respect to ⊗, and the latter induces on ( Inv(U), ≥ D ) the structure of a partially ordered semigroup.
Proof Everything follows at once from Lemma 1.14.
Lemma 1.17
Suppose that p, q ∈ S inv (U) and p is realised. The following are equivalent:
Proof The implications 1 ⇒ 2 ⇒ 3 are true by definition, even when p is not realised.
For 4 ⇒ 1 suppose that for some b ∈ U we have q = tp(b/U) and let A be any small set containing a and b.
Proof The first part is clear. It follows that, if q is A-invariant and a ∈ A, in order to show that p(x) ⊗ q(y) ≡ D q(z) it suffices to take as r any type containing the formulas x = a and y = z.

Notation When quotienting by ∼ D or ≡ D we denote by p the class of p, with the understanding that the equivalence relation we are referring to is clear from context. We write 0 for the class of realised types.

Proposition 1.19 Suppose that ⊗ respects ≥ D (resp. ≡ D ). Then:
In particular, p is realised as well.
3. We have to show that for every p(x) and every realised q(y) we have p ≥ D q. If q is realised by b ∈ U, it is sufficient to put in r the formula y = b.
Some sufficient conditions
We proceed to investigate sufficient conditions for ⊗ to respect ≥ D and ≡ D . These conditions are admittedly rather artificial, but we show they are a consequence of other properties that are easier to test directly, such as stability.
In what follows, types will usually be assumed to have no realised coordinates and no duplicate coordinates, i.e. we will assume, for all i < j < |x| and a ∈ U, that p ⊢ x i ≠ a and p ⊢ x i ≠ x j . Up to domination-equivalence, and even equidominance, no generality is lost, as justified by Lemma 1.18 and by the fact that, for example, if p(x 0 ) is any 1-type and q(y 0 , y 1 ) is the type containing p(y 0 ) together with the formula y 1 = y 0 , then p ≡ D q. We usually abuse the notation and indicate e.g. ( Inv(U), ⊗, ≤ D ) simply with Inv(U).
We say that T has stationary domination (resp. stationary equidominance) iff when-
Proposition 1.21
If T has stationary domination, then ⊗ respects ≥ D . If T has stationary equidominance, then ⊗ respects ≡ D .
Proof Immediate from the definitions.
Definition 1.22
We say that q 1 is algebraic over q 0 iff there are b ⊨ q 0 and c ⊨ q 1 such that c ∈ acl(Ub). We say that T has algebraic domination iff, for all p and q, p ≥ D q if and only if q is algebraic over p.
and this is witnessed by a type r [ p] as in the definition of stationary domination. In particular, algebraic domination implies stationary domination.
Proof Let b, c ∈ U 1 ≻ + U witness algebraicity of q 1 over q 0 . Suppose ψ(y, z) is an L(U)-formula such that ψ(b, z) isolates tp(c/Ub), and let s : . This means that ϕ(w, z) ∈ L(U) and ϕ(w, c) ∈ tp(a/Uc) = p | Uc.
By hypothesis, there are only finitely many c̃ ≡ Ub c, which must be contained in any model containing Ub and, by invariance of p | U 1 , for all such c̃ ∈ U 1 we have
Proposition 1.25 Let T be stable. Then T has stationary domination and stationary equidominance. Moreover the quotients of S inv (U) by ∼ D and by ≡ D are both commutative.
and T is stable if and only if ⊗ is commutative, we have stationary domination and commutativity of Inv(U). For stationary equidominance and commutativity of Inv(U), argue analogously starting with any r witnessing q 0 (y) ≡ D q 1 (z).
Lemma 1.27
If T is weakly binary and tp(a/U), tp(b/U) are both invariant, then so is tp(ab/U).
Proof If (1) holds and tp(a/U) and tp(b/U) are B-invariant then the left-hand side of (1) is fixed by Aut(U/AB). As tp(a, b/U) is complete, it is AB-invariant.
Example 1.28 Every binary theory T , i.e. one where every formula is equivalent modulo T to a Boolean combination of formulas with at most two free variables, is weakly binary. This follows from the fact that T is binary if and only if for any B and tuples a, b
An example of a weakly binary theory which is not binary is the theory of a dense circular order, or any other non-binary theory that becomes binary after naming some constants. A weakly binary theory which does not become binary after adding constants can be obtained by considering a structure (M, E, R) where E is an equivalence relation with infinitely many classes, on each class R(x, y, z) is a circular order, and R(x, y, z) → E(x, y) ∧ E(x, z). The generic 3-hypergraph and ACF 0 are not weakly binary.
We thank Jan Dobrowolski for pointing out the relationship between binarity and weak binarity, therefore also implicitly suggesting a name for the latter.
Lemma 1.29 T is weakly binary if and only if for every n ≥ 2 we have the following.
If a 0 , . . . , a n−1 are such that for all i < n we have tp(a i /U) invariant, then
Proof For the nontrivial direction, assume T is weakly binary. For notational simplicity we will only show the case n = 3, and leave the easy induction to the reader. Let a, b, c be tuples with invariant global type. By Lemma 1.27 tp(bc/U) is still invariant, so we can let A witness weak binarity for b, c and for a, bc simultaneously, where bc is considered now as a single tuple.
and by applying weak binarity to a, bc we get
Corollary 1.30 Every weakly binary theory has stationary domination and stationary equidominance.
Proof Let p(x), q 0 (y), q 1 (z) be A 0 -invariant and r ∈ S q 0 q 1 (A 0 ) be such that q 0 ∪ r ⊢ q 1 . In some U 1 ≻ + U choose (b, c) ⊨ q 0 ∪ r, then choose a ⊨ p | U 1 . By the case n = 3 of (2) there is some A ⊂ + U, which without loss of generality includes A 0 , such that
Combining this with (3), and observing that tp
This proves stationary domination. For stationary equidominance, start with an r witnessing q 0 ≡ D q 1 and prove analogously that in addition

We now give some examples of ( Inv(U), ⊗, ≥ D ). These characterisations can be proven with easy ad hoc arguments but, as such computations are made almost immediate by results like Proposition 3.13 or Theorem 5.11, we state them without proof. We postpone the investigation of further examples to a future paper.
Example 1.32
Let T be the theory of an equivalence relation E with infinitely many classes, all of which are infinite. Since T is ω-stable, by Proposition 1.25 and Proposition 1.21 ⊗ respects ≥ D , and moreover by [13, Theorem 14.2] for every κ there is a κ-saturated U ⊨ T of size κ. For such U we have ( Inv(U), ⊗, ≥ D ) ∼ = ⊕ κ N, where each copy of N is equipped with the usual + and ≥, and ⊕ is the direct sum of ordered monoids.
To spell this out and give a little extra information on Inv(U) for T , fix a choice of representatives b i | 0 < i < κ for U/E and let π E : U → U/E be the projection to the quotient. Then an element p ∈ Inv(U) corresponds to a κ-sequence (n i ) i<κ of natural numbers with finite support where, for any c ⊨ p, n 0 = |π E c\π E U| and, for positive i,
As we will see in Sect. 5, the fact that Inv(U) has the previous forms follows from the stability-theoretic properties of the theories above: Theorem 5.11 applies to both and, in the case of Example 1.31, Corollary 5.19 tells us directly that Inv(U) ∼ = N.
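Spelled out as a sketch (our own rendering of the isomorphism just described; the particular sequences are hypothetical illustrations), classes combine coordinatewise:

```latex
% Sketch for Example 1.32: under the isomorphism with \bigoplus_{\kappa} \mathbb{N},
% \otimes becomes coordinatewise addition and \geq_D the coordinatewise order.
\[
(1,0,2,0,\dots) + (0,1,1,0,\dots) = (1,1,3,0,\dots),
\qquad
(1,1,3,0,\dots) \geq (0,1,1,0,\dots),
\]
% so the class encoded by (1,1,3,0,\dots) dominates the one encoded by (0,1,1,0,\dots).
```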
Example 1.33
As DLO is binary, ⊗ respects ≥ D . We have already seen an example of two domination-equivalent types in this theory in Example 1.11. To describe Inv(U), call a cut in U invariant iff it has small cofinality on exactly one side, and let IC U be the set of all such. The domination-equivalence class of an invariant type in DLO is determined by the (necessarily invariant) cuts in which it concentrates and, writing P fin (X ) for the set of finite subsets of X , we have ( Inv(U), ⊗, ≥ D ) ∼ = (P fin (IC U), ∪, ⊇).
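As a small worked illustration of this isomorphism (our own; the cuts c 0 , c 1 below are hypothetical invariant cuts), products of classes correspond to unions of the underlying sets of cuts:

```latex
% If p is a 1-type concentrating in the invariant cut c_0 and q a 2-type
% concentrating in c_0 and c_1, then under the isomorphism with P_fin(IC U):
\[
f(p \otimes q) = C_p \cup C_q = \{c_0\} \cup \{c_0, c_1\} = \{c_0, c_1\} = f(q),
\]
% so p \otimes q \sim_D q, and q \geq_D p since \{c_0, c_1\} \supseteq \{c_0\}.
% This also recovers item 2 of Example 1.11: f(p \otimes p) = C_p \cup C_p = f(p).
```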
Counterexamples
In [3, p. 18] it was claimed without proof that Inv(U) is well-defined and commutative in every first-order theory. This section contains counterexamples to the statements above.
Well-definedness
This subsection is dedicated to the proof of the following result.
Let L := {E 2 , R 2 2 , R 3 3 }, where arities of symbols are indicated as superscripts, and define ϕ := ϕ 0 ∧ ϕ 1 , where
Note that in particular R 2 is still symmetric irreflexive on the quotient by E. We do not add an imaginary sort for this quotient; it will be notationally convenient to mention it anyway but, formally, every reference to the quotient by E, the relative projection, etc, is to be understood as a mere shorthand.
Proposition 2.3 1. K is a Fraïssé class with strong amalgamation.
Let T be the theory of the Fraïssé limit of K .
2. T is ω-categorical, eliminates quantifiers in L and has degenerate algebraic closure, i.e. for all sets X ⊆ M ⊨ T we have acl X = X. 3. T is ternary, i.e. in T every formula is equivalent to a Boolean combination of formulas with at most 3 free variables.
T can be axiomatised as follows:
(I) E is an equivalence relation with infinitely many classes, all of which are infinite. (II) Whether R 2 (x 0 , x 1 ) holds only depends on the E-classes of x 0 , x 1 ; moreover, the structure induced by R 2 on the quotient by E is elementarily equivalent to the Random Graph.
between the x i there are precisely two R 2 -edges and their E-classes are pairwise distinct.
3. T eliminates quantifiers in a ternary relational language.
4. Easy back-and-forth between the Fraïssé limit of K and any model of (I)-(IV).
5. Denote by π the projection to the quotient by E. A routine application of the Kim-Pillay Theorem (see [8, Theorem 4.2]) shows that T is simple and forking is given by
from which we immediately see that the SU-rank of any 1-type in T is at most 2; finding a 1-type of SU-rank 2 is easy.
Definition 2.4 In T , define the global types
These three types are complete by quantifier elimination and the axioms of T : for instance, in the case of q 1 , the condition E(z 0 , z 1 ) together with the restriction of q 1 to z 0 decides all the R 2 -edges of z 1 , and for all a, b ∈ U we have ¬ϕ 0 (z 1 , a, b), hence ¬R 3 (z 1 , a, b). Moreover, it follows easily from their definition that p, q 0 and q 1 are all ∅-invariant.

Proposition 2.5 q 0 ≡ D q 1 and in particular q 0 ∼ D q 1 . Nonetheless, p(x) ⊗ q 0 (y) ≱ D p(w) ⊗ q 1 (z).
Proof Let A be any small set and let r (y, z 0 , z 1 ) ∈ S q 0 q 1 (A) contain the formula y = z 0 . Clearly, q 1 (z) ∪ r (y, z) ⊢ q 0 (y). Moreover, since E(z 0 , z 1 ) ∧ z 0 ≠ z 1 ∈ (q 1 ↾ ∅) ⊆ r we have the first part of the conclusion.
Note that p(x) ⊗ q 0 (y) is axiomatised by
and similarly p(w) ⊗ q 1 (z) is axiomatised by
Let A be any small set and r (x, y, w, z) ∈ S p⊗q 0 , p⊗q 1 (A), then pick any a ∈ U\A and i < 2 such that ( p(x) ⊗ q 0 (y)) ∪ r ⊬ y = z i . By genericity of R 2 , the set
is consistent, and by genericity of R 3 so is its union with {R 3 (w, z i , a)} (as well as with {¬R 3 (w, z i , a)}). This shows that
As an aside, note that anyway p(x) ⊗ q 0 (y) ≤ D p(w) ⊗ q 1 (z) by Corollary 1.24, the map f being the projection on the coordinates (w, z 0 ).
Question 2.7
Is Inv(U) well-defined in every NIP theory?
Commutativity
In this subsection we prove that in the theory of the Random Graph the quotients by ∼ D and by ≡ D coincide and are not commutative. To begin with, note that this theory is binary, hence Inv(U) is well-defined by Corollary 1.30 and Proposition 1.21. This also follows from the characterisation of domination we are about to give in Proposition 2.11.

Definition 2.8 Let L 0 be the "empty" language, containing only equality. We say that T has degenerate domination iff whenever p(x) ≥ D q(y) there is a small set r 0 of L 0 (U)-formulas with free variables included in x y and consistent with p such that p ∪ r 0 ⊢ q.
Remark 2.9
It is easy to see that, if there is r 0 as above, then q is included in p up to removing realised and duplicate coordinates and renaming the remaining ones.
Lemma 2.10 Suppose T has degenerate domination. Then T has algebraic domination, and in particular ⊗ respects ≥ D . Moreover for global types p and q the following are equivalent:
1. There is a small set r 0 of L 0 (U)-formulas consistent with p ∪ q such that p ∪ r 0 ⊢ q and q ∪ r 0 ⊢ p.
In particular, ⊗ respects ≡ D too.
Proof By Remark 2.9 degenerate domination implies algebraic domination. The implications 1 ⇒ 2 ⇒ 3 are trivial and hold in any theory. To prove 3 ⇒ 1 suppose p(x) ∼ D q(y), and let r 1 and r 2 be small sets of L 0 (U)-formulas with free variables included in x y and consistent with p ∪ q such that p ∪ r 1 ⊢ q and q ∪ r 2 ⊢ p. It follows easily from Remark 2.9 that we may find r 0 satisfying the same restrictions as r 1 and r 2 and such that p ∪ r 0 ⊢ q and q ∪ r 0 ⊢ p hold simultaneously.
Proposition 2.11 The Random Graph has degenerate domination.
Proof Suppose that r ∈ S pq (A) witnesses p(x) ≥ D q(y) and assume that q has no realised or duplicate coordinates. Up to a permutation of the y j , assume that r identifies y 0 , . . . , y n−1 with some variables in x and for all j such that n ≤ j < |y| and all i < |x| we have r ⊢ x i ≠ y j . If n = |y| then we can let r 0 be a suitable restriction of r and we are done, so assume that n < |y|, hence for every i < |x| we have r ⊢ y n ≠ x i . Pick any b ∈ U\A; by the Random Graph axioms p ∪ r is consistent with both E(y n , b) and ¬E(y n , b), contradicting p ∪ r ⊢ q.

Other easy consequences of Proposition 2.11 are that in the theory of the Random Graph:
1. Inv(U) is not generated by the classes of the n-types for any fixed n < ω,
2. Inv(U) is not generated by any family of classes of pairwise weakly orthogonal types (see Definition 3.12), and
3. for any nonrealised p the submonoid generated by p is infinite.
Question 2.13 Let T be NIP and assume Inv(U) is well-defined. Is it necessarily commutative?
The analogous question for Inv(U) has a negative answer. We are grateful to E. Hrushovski for pointing out the following counterexample and allowing us to include it.
Let DLOP be as in Example 1.11. It eliminates quantifiers in {<, P}, it is NIP, and it is binary, hence Inv(U) and Inv(U) are well-defined by Corollary 1.30 and Proposition 1.21.
Proposition 2.14 (Hrushovski) In DLOP, Inv(U) is not commutative.
Proof Let p be the type at +∞ in the predicate P and q the type at +∞ in ¬P, and note that both types are ∅-invariant. Let r ∈ S p⊗q,q (∅) contain the formula y = z. Then r witnesses p x ⊗ q y ≡ D q z , and similarly one shows that q ⊗ p ≡ D p. As shown in Example 1.11, p and q are not equidominant, and therefore p ⊗ q ≢ D q ⊗ p.

This counterexample exploits crucially ≡ D , as opposed to ∼ D . In fact, in DLOP the quotient by ∼ D is the same as in the restriction of U to {<}, and in DLO Inv(U) is commutative. A further analysis also shows that (Inv(U), ⊗) cannot be endowed with any order ≤ compatible with ⊗ in which 0 is the minimum. In fact, if p and q are as above, then we have already shown that ( p ⊗ q) ≡ D q ≢ D p ≡ D (q ⊗ p). If we had an order ≤ as above, then from 0 ≤ q we would get p ⊗ 0 ≤ p ⊗ q, whose class is that of q, and symmetrically q ⊗ 0 ≤ q ⊗ p, whose class is that of p; hence the classes of p and q would coincide, contradicting p ≢ D q.
Properties preserved by domination
In this section we show that some properties are preserved downwards by domination. These invariants also facilitate computations of Inv(U) and Inv(U) for specific theories; an immediate consequence is for instance Corollary 3.8, that such monoids may change when passing to T eq . The next results are related to the ones in [16], which contains a study of weak orthogonality and the global RK-order (similar to domination) in the case of generically stable regular types. Of particular interest are [16, Proposition 3.6], to which Theorem 3.5 is related, and [16, Theorem 4.4].

Definition 3.1 Let p ∈ S inv (U, A). A Morley sequence of p over A is an A-indiscernible sequence a i | i ∈ I , indexed on some totally ordered set I , such that for any i 0 < · · · < i n−1 in I we have tp(a i n−1 , . . . , a i 0 /A) = p (n) ↾ A.
Definition 3.2
Let M ≺ + U and A ⊂ + U.
A partial type π is finitely satisfiable in M iff for every finite conjunction ϕ(x) of formulas in π there is m ∈ M such that ⊨ ϕ(m).
A global type p ∈ S x (U) is definable over A iff it is A-invariant and for every ψ(x; y) ∈ L the set d p ψ is clopen, i.e. of the form {q ∈ S y (A) | ϕ ∈ q} for a suitable ϕ ∈ L(A).
A global type p ∈ S x (U) is generically stable over A iff it is A-invariant and for every ordinal α ≥ ω and Morley sequence (a i | i < α) of p over A, the set of formulas ϕ(x) ∈ L(U) true of all but finitely many a i is a complete global type.
We say that p is definable iff it is definable over A for some small A, and similarly for the other two notions.
The definition of generic stability we use is that of [1, Definition 1.6]. It is well-known (see [13,Lemma 12.10]) that every partial type which is finitely satisfiable in M extends to a global type still finitely satisfiable in M, and that if p ∈ S(U) is finitely satisfiable in M then p is M-invariant (see [13,Theorem 12.13]). Moreover all the notions above are monotone: for instance if p is generically stable over A and A ⊆ B, then p is generically stable over B, as Morley sequences over B are in particular Morley sequences over A.
Lemma 3.4 Suppose p ∈ S inv x (U) is finitely satisfiable in M and r ∈ S xy (M) is consistent with p. Then p ∪ r is finitely satisfiable in M.
Proof Pick any ϕ(x) ∈ p and ρ(x, y) ∈ r . As p ∪ r is consistent, we have p ⊢ ∃y (ϕ(x) ∧ ρ(x, y)), and as p is finitely satisfiable in M there is m 0 ∈ M such that ⊨ ∃y (ϕ(m 0 ) ∧ ρ(m 0 , y)). In particular ⊨ ∃y ρ(m 0 , y), and since ρ(m 0 , y) ∈ L(M) and M is a model there is m 1 ∈ M such that ⊨ ρ(m 0 , m 1 ), so (m 0 , m 1 ) ⊨ ϕ(x) ∧ ρ(x, y).
We can now prove the main result of this section. Part 3 can be seen as a generalisation of [16, Proposition 3.6]; the missing step to formally call it a generalisation would be to know that for a regular type p the equivalence p ⊥ w q ⇔ p ≰ D q held. To the best of the author's knowledge, this is currently only known for strongly regular generically stable types, or under additional assumptions such as stability. See [16] for the definitions of regularity and strong regularity in this context, and the next subsection for ⊥ w .
r̃ ∈ S(U) which is, again, finitely satisfiable in M, and in particular M-invariant; take a Morley sequence (a i , b i ) | i ∈ I of r̃ over M, let f ∈ Aut(U/M) be such that
Note that p, q, r and ψ(y; w) are fixed by f . Now let J be a copy of ω disjoint from I and let a j | j ∈ J realise a Morley sequence of p over Md{a i | i ∈ I }. We want to show that the concatenation of a i | i ∈ I with a j | j ∈ J contradicts generic stability of p over M. By construction this is a Morley sequence over M, and if we find χ(x; d) such that χ(a i ; d) holds for i ∈ J but for no i ∈ I then we are done, since I and J are infinite.
As ψ(y; d) ∈ q by M-invariance of q, there is by hypothesis ϕ(x, y) ∈ r such that p(x) ⊢ ∀y (ϕ(x, y) → ψ(y; d)). Let χ(x; d) be the last formula. By hypothesis, for i ∈ J we have ⊨ χ(a i ; d). On the other hand, for i ∈ I we have ⊨ ¬ψ(b i ; d), and since ⊨ ϕ(a i , b i ), in particular for all i ∈ I we have ¬χ(a i ; d).
Remark 3. 6 We are assuming that p, q are A-invariant. It is not true that if p is finitely satisfiable/definable/generically stable in/over some B ⊆ A then q must as well be such, for the same B. Even when B = N ≺ M = A are models, a counterexample can easily be obtained by taking q to be the realised type of a point in M\N .
Question 3.7
Is it true that in the setting of Remark 3.6 q is domination-equivalent to a type finitely satisfiable/definable/generically stable in/over N ?
Corollary 3.8 There is a theory T where Inv(U) changes when passing to T eq .
Proof As generic stability is preserved by domination, this happens in any theory where T does not have any nonrealised generically stable type but T eq does, as such a type cannot be domination-equivalent to any type with all variables in the home sort. An example of such a theory is that of a structure (M, <, E) where (M, <) ⊨ DLO and E is an equivalence relation with infinitely many classes, all of which are dense.
Such a thing cannot happen when passing from a stable T to T eq ; see Remark 5.6.
Proposition 3.9 Generically stable types commute with every invariant type.
Proof The proof of [15, Proposition 2.33] goes through even without assuming NIP provided the definition of "generically stable" is the one above.
Even if ( Inv(U), ⊗) need not be well-defined in general, a smaller object is.

Proof It follows immediately from Lemma 1.14 and Proposition 3.9 that, when restricting to the set of products of generically stable types, ∼ D is a congruence with respect to ⊗. As the generators of Inv gs (U) commute, so does every pair of elements from it.
The reason we defined Inv gs (U) as above is that generic stability is not preserved under products: the type p in [1, Example 1.7] is generically stable but p ⊗ p is not. Inv gs (U) may be significantly smaller than Inv(U), and even be reduced to a single point; this happens for instance in the Random Graph, or in DLO.
Weak orthogonality
Another property preserved by domination is weak orthogonality to a type. This generalises (by Proposition 5.4) a classical result in stability theory, see e.g. [10, Proposition C.13'''(iii)].

Definition 3.12 We say that p ∈ S x (U) and q ∈ S y (U) are weakly orthogonal, and write p ⊥ w q, iff p ∪ q is a complete global type.
Note that if p is invariant then p ⊥ w q is equivalent to p ∪ q ⊢ p ⊗ q, or in other words to the fact that for any c ⊨ q in some U 1 ≻ + U we have p ⊢ p | Uc. In the literature the name orthogonality is sometimes (e.g. [15, p. 136] or [16, p. 310]) used to refer to the restriction of weak orthogonality to global invariant types. We will not adopt this convention here.
Proposition 3.13
Suppose that p 0 , p 1 ∈ S inv (U) are such that p 0 ≥ D p 1 and p 0 ⊥ w q. Then p 1 ⊥ w q.
This entails the following slight generalisation of [13,Theorem 10.23]. Corollary 3.14 Let p x , q y ∈ S inv (U). If p ≥ D q and p ⊥ w q, then q is realised.
Proof From p ≥ D q and p ⊥ w q the previous proposition gives q ⊥ w q. But this can only happen if q is realised, otherwise q(x) ∪ q(y) ∪ {x = y} and q(x) ∪ q(y) ∪ {x ≠ y} are both consistent.

Remark 3.15 Tanović has proved in [16, Theorem 4.4] that if p is strongly regular (see [16, Definition 2.2]) and generically stable then p is ≤ RK -minimal among the nonrealised types, and for all invariant q we have p ⊥ w q ⇐⇒ p ≰ RK q. An immediate consequence of his result and of the previous corollary is that such types are also ≤ D -minimal among the nonrealised types.
We conclude this section by remarking that a lot of properties are not preserved by domination-equivalence, nor by equidominance. For instance, there is an ω-stable theory with two equidominant types of different Morley rank, namely T eq where T is the theory of an equivalence relation with infinitely many classes, all of which are infinite. Another property that is not preserved is having the same dp-rank, a counterexample being DLO, where if p is, say, the type at +∞ we have p ≡ D p ⊗ p even if the former has dp-rank 1 and the latter has dp-rank 2.
Dependence on the monster model
In strongly minimal theories (see Example 1.31) Inv(U) ∼ = N regardless of U while in, say, the Random Graph, Inv(U) is very close to S inv (U) by Proposition 2.11 and the subsequent discussion: the former is obtained from the latter by identifying types that only differ because of realised, duplicate, or permuted coordinates. It is natural to ask whether and how much the quotient Inv(U) depends on U, and the question makes sense even when ⊗ does not respect ≥ D . This section investigates this matter.
Theories with IP
The preorder ≥ D is the result of a series of generalisations that began in [9] with starting point the Rudin-Keisler order on ultrafilters. It is not surprising therefore that some classical arguments involving the latter object generalise as well. We show in this subsection (Proposition 4.6) that, in the case of theories with IP (see [15,Chapter 2]), one of them is the abundance of pairwise Rudin-Keisler inequivalent ultrafilters on N; the classical proof goes through for ∼ D as well, and shows that even the cardinality of Inv(U) depends on U.
In this subsection p stands for the ∼ D -class of p. Even if we state everything for ∼ D and its quotient Inv(U), the same arguments work if we replace ∼ D by ≡ D , Inv(U) by Inv(U) and interpret p as the class of p modulo ≡ D .
The following result is classical, see e.g.
For the rest of this subsection, let U be λ + -saturated and λ + -strongly homogeneous of cardinality at most 2 λ , let σ be the least cardinal such that U is not σ + -saturated, and let κ = |U|. Thus λ + ≤ σ ≤ κ ≤ 2 λ .
Lemma 4.2 In the notations above, for every p ∈ S inv (U) we have | p | ≤ κ <σ .
Proof Clearly p ⊆ {q | q ≤ D p}. For every q ≤ D p, there is some small r q such that p ∪ r q ⊢ q. If r q = r q ′ then q = q ′ , and therefore |{q | q ≤ D p}| is bounded by the number of small types. As "small" means of cardinality strictly less than σ , the number of such types is at most the size of ⋃ A⊂U,|A|<σ S(A), which cannot exceed κ <σ · 2 <σ = κ <σ .

Proof If ϕ(x; y) witnesses IP, then over a suitable model of cardinality λ, which we may assume to be embedded in U, there are 2 λ -many ϕ-types, and a fortiori types. This gives the first equality, and the same argument with any μ such that λ ≤ μ < σ gives the second one. The third one follows by cardinal arithmetic.
Recall the following property of theories with IP.
Fact 4.5
If T has IP, then for every λ ≥ |T | there is a type p over some M ⊨ T such that |M| = λ and p has 2 2 λ -many M-invariant extensions. Moreover, such extensions can be chosen to be over any λ + -saturated model.
Proof This is [13, Theorem 12.28]. The "moreover" part follows from the proof in the referenced source: in its notation, it is enough to realise the f -types of the b w over {a α | α < λ}.

Proposition 4.6 If T has IP and U is λ + -saturated and λ + -strongly homogeneous of cardinality 2 λ , then Inv(U) has size 2 |U| .
The map e
Let U 1 ≻ + U 0 . The map p → p | U 1 shows that, for every tuple of variables x, a copy of S inv x (U 0 ) sits inside S inv x (U 1 ); for instance, if T is stable, this is nothing more than the classic identification of types over U 0 with types over U 1 that do not fork over U 0 .
Proposition 4.11
The map e is well-defined and weakly increasing. If moreover ⊗ respects ≥ D , then e is also a homomorphism of monoids.
Stable theories
The domination preorder we defined generalises a notion from classical stability theory. For the sake of completeness, we collect in this section what is already known in the stable case. From now on, we will assume some knowledge of stability theory from the reader, and T will be stable unless otherwise stated; we repeat this assumption for emphasis. References for almost everything that follows can be found in e.g. [2,12,13]. In this section, we mention orthogonality of types, denoted by ⊥, which is a strengthening of weak orthogonality that can be defined in a stable theory for stationary types (see [12, Section 1.4.3]). For global types, it coincides with weak orthogonality.
The classical definition
In the following definition A is allowed to be a large set, e.g. we allow A = U. More conceptual proofs of the first and last point can be obtained from the classical results that p dominates q if and only if q is realised in the prime a-model containing a realisation of p, and that prime a-models are a-atomic (see [12, Lemma 1.4.2.4]). Note that a consequence of this equivalence is that in a stable theory semi-a-isolation (i.e. ≥ D by point 3 of the previous Proposition) is the same as a-isolation: if p ∪ r ⊢ q then r can be chosen such that p ∪ r is complete, despite r being small.
Corollary 5.5
If T is stable, then e is injective and e( p ) ≥ D e( q ) implies p ≥ D q .
Proof By point 3 of Proposition 5.4 we can apply Lemma 4.12.
Remark 5.6
While studying Inv(U) in a stable T , there is no harm in passing to T eq , which we see as a multi-sorted structure, for the following reason. Even without assuming stability, every type p ∈ S(U) in T eq is dominated by, and in particular (if it is nonrealised) not weakly orthogonal to, a type q ∈ S(U) with all variables in the home sort via the projection map. Suppose now that T is stable and let M be such that p and q do not fork over M. By (the proof of) [13,Lemma 19.21] there is a (possibly forking) extension of q M which is equidominant with p. Trivially, this extension has all variables in the home sort. We would like to thank Anand Pillay for pointing this out.
Remark 5.7 Definition 5.1 makes sense also in simple theories, and more generally in rosy theories if we replace forking by þ-forking (see [11]). One can then give a definition of domination even for types that are not stationary but, in the unstable case, even for global types the relation need not coincide with ≥ D . For instance, in the notation of Definition 2.4, let (b, c) ⊨ q 1 and a ⊨ p | Ubc, and recall that in T forking is characterised as
It follows that, for all B ⊇ U such that abc is independent from B over U, and for all d such that ab is independent from d over B, we have that abc is independent from d over B, and therefore ab dominates abc over U. Since tp(a, b/U) = p ⊗ q 0 and tp(a, bc/U) = p ⊗ q 1 this shows that p ⊗ q 0 dominates p ⊗ q 1 in the classical sense, but by Proposition 2.5 p ⊗ q 0 ≱ D p ⊗ q 1 .
Thin theories
Recall that a stable theory is thin iff every complete type has finite weight. This hypothesis provides a structure theorem for Inv(U), namely Theorem 5.11. This result is implicit in the literature (see [12, Proposition 4.3.10]), but we need to state it as done below for later use.

Proof Since weight is additive over ⊗ ([2, Proposition 5.6.5 (ii)]) we have w( p (n) ) = n and we conclude by Fact 5.9 that the map n → p (n) is an isomorphism between N and the monoid generated by p .
Theorem 5.11
If T is thin, then there are a cardinal κ, possibly depending on U, and an isomorphism f : Inv(U) → ⊕ κ N. Moreover, p ⊥ q if and only if f ( p ) and f ( q ) have disjoint supports.
Proof Let p i | i < κ be an enumeration without repetitions of the ∼ D -classes of types of weight 1. For such classes, define f ( p i ) to be the characteristic function of {i}, then extend f to classes of products of weight-one types by sending p ⊗ q to f ( p ) + f ( q ) and 0 to the function which is constantly 0. It is easy to show using Fact 5.8 and Corollary 3.14 that f is well-defined, i.e. does not depend on the decomposition as product of weight-one types, and that f is injective. By [12,Proposition 4.3.10] in a thin theory every type is domination-equivalent to a finite product of weight-one types, so f is defined on the whole of Inv(U). By Lemma 5.10 if w( p) = 1 then the monoid generated by p is isomorphic to N and this easily entails that f is surjective. It is also clear that f is an isomorphism of ordered monoids. Since two types of weight 1 are either weakly orthogonal or domination-equivalent by Fact 5.8 and, by [10, Proposition C.5(i)], in stable theories p ⊥ q 0 ⊗ q 1 if and only if p ⊥ q 0 and p ⊥ q 1 , the last statement follows.
Remark 5.12
Weight, which is preserved by domination-equivalence (Fact 5.9), can, in the thin case, be read off f ( Inv(U)) by taking "norms". Specifically, if f ( p ) = (n i ) i<κ , then w( p ) = Σ i<κ n i (recall that every (n i ) i<κ ∈ ⊕ κ N has finite support).
Proposition 5.13
If T is thin, then ≡ D and ∼ D coincide.
Proof By [7, Theorem 4.4.10] every type is in fact equidominant with a finite product of types of weight 1. The conclusion then follows from Fact 5.8 and the fact that, as T is stable, ⊗ respects both ∼ D and ≡ D .
Dimensionality and dependence on the monster
At least in the thin case some classical results imply that independence of Inv(U) from the choice of U is equivalent to dimensionality of T , also called nonmultidimensionality.
Definition 5.14 Let T be stable. We say that T is dimensional iff for every nonrealised global type p there is a global type q that does not fork over ∅ and such that p is non-orthogonal to q. We say that T is bounded iff | Inv(U)| < |U|.
If T is thin, then T is dimensional if and only if it is bounded, as follows e.g. from Theorem 5.11 (alternatively, see the proof of [2, Lemma 7.1.2], but replace "superstable" with "thin" and "regular types" with "weight-one types"). In this case the number of copies of N required is bounded by 2 |T | , and by |T | if T is totally transcendental, see e.g. [2, Corollary 7.1.1]. In fact, some sources define boundedness only for superstable theories, essentially as boundedness of the number of copies of N given by Theorem 5.11.
Conjecture 5.15
Let T be stable. The following are equivalent: 1. T is bounded. 2. T is dimensional. 3. e is surjective.
1 ⇒ 2 follows from [2, Proposition 5.6.2] and 3 ⇒ 1 is trivial, so it remains to prove 2 ⇒ 3, namely that if there is a type over U 1 not domination-equivalent to any type that does not fork over U 0 , then there is a type orthogonal to every type that does not fork over ∅.
Proposition 5.16
If T is thin then Conjecture 5.15 holds.
Proof Suppose U 0 ≺ + U 1 and let f j : Inv(U j ) → ⊕ κ j N, for j ∈ {0, 1}, be given by Theorem 5.11. Let g := f 1 ∘ e ∘ f 0 −1 . Since weight is preserved by nonforking extensions (e.g. by [2, Definition 5.6.6 (iii)]), e sends types of weight 1 to types of weight 1. Therefore by Remark 5.12 we may decompose the codomain of g as
where the direct summand ⊕ i<κ 0 N may be assumed to coincide with Im g. It then follows that if e is not surjective then we can find p ∉ Im e such that p has weight 1. Again by Theorem 5.11, such a p needs to be orthogonal to every type in the union of Im e, which is the set of types that do not fork over U 0 . In particular, p is orthogonal to every type that does not fork over ∅.
A possible attack in the general case could be, assuming e is not surjective, to try to find a type of weight 1 outside of its image. This will be either orthogonal to every type that does not fork over U 0 , or dominated by one of them by [2, Corollary 5.6.5]. If we knew a positive answer to Question 4.13 at least in the stable case, and if we managed to find a type as above, then we would be done.
A possibly related notion is the strong compulsion property (see [6, Definition 2]); it implies that every type over U 1 ≻ + U 0 is either orthogonal to U 0 or dominates a type that does not fork over it. Whether all countable stable T eq have a weakening of this property is [6, Conjecture 18].
We conclude with two easy consequences of some classical results.
Definition 5.17
A stable theory T is unidimensional iff whenever p ⊥ q at least one between p and q is algebraic.
If T is totally transcendental then unidimensionality is the same as categoricity in every cardinality strictly greater than |T | (see [2, Proposition 7.1.1]). Unidimensional theories may still fail to be totally transcendental, e.g. Th(Z, +) is such. Anyway, the following classical theorem by Hrushovski (see [5,Theorem 4]) tells us that the situation cannot be much worse than that.
Corollary 5.19 A stable T is unidimensional if and only if Inv(U) ∼ = N.
Proof If T is unidimensional, by Hrushovski's result we have the hypothesis of Theorem 5.11, and the conclusion then follows easily from the definition of unidimensionality. In the other direction, the hypothesis yields that any two types are ≥ D -comparable, but if p ⊥ w q and p ≥ D q then q is realised by Corollary 3.14.
Compare the previous corollary with [9,Proposition 5]. Note that the hypothesis that T is stable is necessary: in the random graph if p ⊥ w q then one between p and q must be algebraic, but Inv(U) is not commutative by Corollary 2.12.
Proposition 5.20 If T is stable then N embeds in Inv(U).
Proof By [13, Lemma 13.3 and p. 336] in any stable theory there is always a type p of U-rank 1, and in particular of weight w( p) = 1 (see [13,before Theorem 19.9]). The conclusion follows from Lemma 5.10.
|
2022-11-20T14:17:00.546Z
|
2019-05-09T00:00:00.000
|
{
"year": 2019,
"sha1": "ad0ded5b99e1097e5b6f154b369d3811002188d3",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s00153-019-00676-9.pdf",
"oa_status": "HYBRID",
"pdf_src": "SpringerNature",
"pdf_hash": "ad0ded5b99e1097e5b6f154b369d3811002188d3",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": []
}
|
119274245
|
pes2o/s2orc
|
v3-fos-license
|
Quantum phase transition in an atom-molecule conversion system with atomic hopping
The quantum phase transition in an atom-molecule conversion system with atomic hopping between different hyperfine states is studied. In mean field approximation, we give the phase diagram whose phase boundary only depends on the atomic hopping strength and the atom-molecule energy detuning but not on the atomic interaction. Such a phase boundary is further confirmed by the fidelity of the ground state and the energy gap between the first-excited state and the ground one. In comparison to the mean field approximation, we also study the quantum phase transition with the full quantum method, where the phase boundary can be affected by the particle number of the system. Nevertheless, with the help of the finite-size scaling behaviors of the energy gap, the fidelity susceptibility and the first-order derivative of the entanglement entropy, we show that one can obtain the same phase boundary by the MFA and full quantum methods in the limit of $N\rightarrow \infty$. Additionally, our results show that the quantum phase transition can happen at a critical value of the atomic hopping strength even if the atom-molecule energy detuning is fixed at a certain value, which provides a new way to control the quantum phase transition.
I. INTRODUCTION
The quantum phase transition (QPT) describes an abrupt change in the ground state of a many-body system as some system parameter goes across a critical point (at zero temperature). Quantum phase transitions (QPTs) in quantum Hall systems, superconductors and ultracold atoms have been studied extensively [1][2][3]. Ultracold atomic systems, especially Bose-Einstein condensates (BECs), provide us a good platform to study the QPT. The experiment on ultracold atoms in an optical lattice by Bloch et al. has given a good example of the QPT from a superfluid to a Mott insulator (SF-MI) [4]. The SF-MI transition in ultracold atomic systems was then discussed widely [5,6]. Besides the SF-MI transition, the transition from non-entangled to entangled states in two-mode BECs [7], from degenerate to non-degenerate ground states in the extended boson Josephson-junction model [8], and from a pure molecule state to a mixed atom-molecule one in an atom-molecule model [9][10][11][12][13] have been investigated. Note that in the above QPTs, one can control the QPT by changing different system parameters such as the atom-pair tunneling strength and the energy detuning between the atomic and the molecular states.
Molecular BECs are versatile in physical studies because, in comparison to atomic BECs, they have more degrees of freedom to be controlled. For example, ultracold polar molecules have been used to study ultracold chemistry, quantum many-body physics and quantum information science. The polar molecules with dipole moment may be either heteronuclear molecules or homonuclear molecules. The heteronuclear molecules with large electric-dipole moment have been investigated widely both in theory and experiment [14][15][16][17]. The homonuclear molecules with large dipole moment, such as Rydberg molecules, have been produced by T. Pfau et al. in experiments [18] and studied widely [19][20][21]. One Rydberg molecule consists of a single kind of atoms in different states, and the atoms can jump between the ground state and Rydberg states, which makes it possible to study the QPT in an atom-molecule conversion system with atomic hopping. Although the quantum phase transition of the ground state has been studied in the atom-molecule system [9][10][11][12][13], the atoms in that system were in the same state and the hopping between different hyperfine atomic states was not considered. It is therefore necessary to consider the atom-molecule conversion system with atomic hopping and study the effect of the hopping strength between different hyperfine atomic states on the quantum phase transition.
In this paper, we study the quantum phase transition of an atom-molecule conversion system with atomic hopping between different hyperfine atomic states with both mean field approximation (MFA) and full quantum methods. It is interesting to find that the QPT can still appear in the system as the hopping strength increases even if the atom-molecule energy detuning is fixed at a certain value, which is different from Ref. [12] where the QPT was induced by changing the energy detuning. In MFA, we show that the QPT exists in the thermodynamic limit (i.e., N → ∞) and give the phase diagram whose phase boundary only depends on the atomic hopping strength and the atom-molecule energy detuning but not on the atomic interaction. In the full quantum method, we characterize the QPT with the help of the energy gap, the fidelity susceptibility and the first-order derivative of the entanglement entropy, which give the same phase boundary. We also show that one can obtain the same phase boundary by the MFA and full quantum methods in the limit of N → ∞ through studying the finite-size scaling behaviors of the energy gap, the fidelity susceptibility and the first-order derivative of the entanglement entropy, which further confirms the existence of the QPT in the atom-molecule conversion system with atomic hopping.
In the next section, we give the model of the system and the general phase diagram in the mean field approximation. The energy gap and the fidelity are also studied to characterize the QPT. In Sect. III, we study the QPT from a pure molecule state to a mixed atom-molecule state with the full quantum method. The energy gap, the fidelity susceptibility, the entanglement entropy and its first-order derivative, and their scaling behaviors are investigated. In the last section, we give a brief summary.
II. MODEL AND PHASE DIAGRAM
We consider a three-component atom-to-molecule conversion system where the atoms can jump between two hyperfine atomic states.
where â i and â † i (i = 1, 2) annihilate and create an atom in the ith hyperfine atomic state, and b̂ and b̂ † annihilate and create a molecule, respectively. Here J refers to the hopping strength between the two atomic components, g ′ the atom-molecule coupling strength, U a (U ′ a ) the strength of the atomic intracomponent (intercomponent) interaction, δ a the energy detuning between the two atomic components, and δ b the energy detuning between the atomic and molecular components.
In order to study the properties of the system, we need the dynamical equations of the system in the mean-field approximation (MFA). With the help of the Heisenberg equation of motion for operators, one can easily write down the evolution equations for the operators â i and b̂. In the mean-field approximation, the operators can be replaced by their average values. Then we can obtain the evolution equations for α 1 , α 2 and β with the help of the aforementioned evolution equations for â i and b̂. Here α i and β satisfy the conservation law |α 1 | 2 + |α 2 | 2 + 2|β| 2 = 1 and N is the total particle number. For convenience in studying the properties of the fixed points, we assume α 1 = √ ρ 1 e iθ 1 , α 2 = √ ρ 2 e iθ 2 and β = √ ρ b e iθ b , where ρ 1 + ρ 2 + 2ρ b = 1. The mean-field dynamical equations of the system can then be written in terms of these variables.

Since the fixed points correspond to the eigenvalues of the nonlinear system [22], we can study the static properties of the system by solving for the fixed points. The fixed points require that d dt φ a = 0, d dt φ = 0, d dt z = 0, d dt ρ b = 0, which can give many fixed-point solutions. In the following discussion, we only focus on the kind of fixed points with z = 0 since they contain the ground and the first excited state. Such fixed points read as Eq. (4), where ∆ = 3δ b /2 + 2J cos(2φ a ), '+' is taken for φ = 0, and '−' is taken for φ = π. Equation (4) is obtained under the assumption δ 12 = 0 and Ũ = U + U ′ = 0. Note that for the case of Ũ ≠ 0, we cannot give the analytical solution of the fixed points. However, we find that the presence of Ũ does not change the boundary between the pure molecule (PM) phase and the mixed atom-molecule (AM) phase.

In Fig. 1(a), we plot the dependence of the density of molecules on the hopping strength for the ground and first excited states. In this figure, the three quantities in the legend denote the values of Ũ, 2φ a and φ. The lines with (Ũ, 0, π) correspond to the ground states for different Ũ, and the line with (0, π, π) corresponds to the first excited state for Ũ = 0. Note that the first excited state is not influenced by the value of Ũ, so the line with (0, π, π) also denotes the first excited state for any interaction strength Ũ. From Fig. 1(a), we can find that the ground state is a pure molecule state when J < J c , and it is a mixed atom-molecule state when J > J c . Here the critical value J c = (3 − √ 2)g/2, which depends on the energy detuning δ b (here δ b = −2g) but not on the interaction strength Ũ. Meanwhile, we give the general phase diagram of the ground state in the parameter space of δ b and J in Fig. 1(b). The boundary between the PM phase and the AM phase is 3δ b /2 + 2J = − √ 2g with g = 1 (corresponding to the blue solid line in Fig. 1(b)). Note that for the case of J = 0, the critical point δ b = −√(8/9)g agrees well with the result in Ref. [12].
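As a quick numerical illustration (a minimal sketch of our own, with g = 1 as in the text), the stated boundary condition can be used to classify points of the (δ b , J) plane; the function name and the consistency check against J c are ours:

```python
import numpy as np

G = 1.0  # atom-molecule coupling g, set to 1 as in the text

def phase(delta_b, J):
    """Mean-field ground-state phase at (delta_b, J), using the quoted
    boundary 3*delta_b/2 + 2*J = -sqrt(2)*g: pure molecule (PM) below
    the line, mixed atom-molecule (AM) above it."""
    return "PM" if 1.5 * delta_b + 2.0 * J < -np.sqrt(2.0) * G else "AM"

# Consistency check: delta_b = -2g should give J_c = (3 - sqrt(2)) g / 2.
J_c = (3.0 - np.sqrt(2.0)) * G / 2.0
print(phase(-2.0, 0.9 * J_c), phase(-2.0, 1.1 * J_c))  # expected: PM AM
```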
In order to further confirm the phase boundary, we plot profiles of the energy gap between the first excited state and the ground state (∆E MF ) and of the fidelity of the ground state for a fixed energy detuning δ b in MFA. From Fig. 2, we find that the energy gap between the ground and first excited states is zero for J < J c and nonzero for J > J c for any interaction strength Ũ. This implies that the transition from the PM phase to the AM phase happens at the point where the energy degeneracy between the ground and first excited states is lifted [8]. We know that the fidelity of eigenstates can also be used to characterize the phase transition. Here the fidelity of the ground state is defined as F MF = ⟨Ψ GS (J)|Ψ GS (J + δJ)⟩, where |Ψ GS (J)⟩ and |Ψ GS (J + δJ)⟩ are the ground states of the system for hopping strengths J and J + δJ, with δJ being a very small quantity. From Fig. 2, we also find that the fidelity has a sudden decrease from one at the critical point J = J c , where the phase transition from a pure molecule phase to a mixed atom-molecule phase takes place. In a word, both the energy gap and the fidelity show striking features at the critical point.
III. QUANTUM PHASE TRANSITION
To get insight into the QPT, we study the system with the full quantum method. For a finite particle number N, where one molecule is counted as two particles, the Hamiltonian can be exactly diagonalized on the basis of the Fock states |n, N − n − 2m, m⟩. Here n is the number of atoms in the first atomic component and m is the number of molecules. The dimension of the Fock basis is (N/2 + 1) 2 . For convenience, the Fock basis is abbreviated as |n, m⟩. Then the eigenstates of the system can be written as |Ψ(J)⟩ = Σ n,m C n,m (J)|n, m⟩, where C n,m (J) are complex coefficients depending on the parameter J. The eigenvalues and the eigenstates of the system as a function of J can be easily obtained by exact diagonalization of Hamiltonian (1). In Fig. 3, we plot the total population of atoms for the ground state of the system with a finite number of particles and in MFA. From this figure, we can see that the QPT from a pure molecule state (the total population of the two atomic states is zero, i.e., ρ a = 1 − 2ρ b = 0) to a mixed atom-molecule state (the atomic population is nonzero, i.e., ρ a = 1 − 2ρ b > 0) happens in the system as the atomic hopping strength reaches the critical value J N . Note that the critical value J N is very close to the J c obtained in MFA, and J N → J c in the limit of N → ∞. In the following discussion, we will further characterize the quantum phase transition with the help of the concepts of energy gap, fidelity susceptibility and entanglement entropy. Additionally, we will study the scaling behaviors of the system near the critical point J C .
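The diagonalization step can be sketched as follows. Since the displayed Hamiltonian did not survive in this copy, the matrix elements below are an assumed second-quantized form built from the couplings J, g and δ b named in the text (with interactions switched off); it is a sketch of the method, not the paper's exact Hamiltonian.

```python
import numpy as np

def hamiltonian(N, J, g, delta_b):
    """Sketch of H on the Fock basis |n, m> (n atoms in mode 1, m molecules,
    N - n - 2m atoms in mode 2). The three terms below -- hopping J between
    the atomic modes, conversion g (b^dag a1 a2 + h.c.), and molecular
    detuning delta_b -- are assumed forms; interactions are omitted (U = 0)."""
    states = [(n, m) for m in range(N // 2 + 1) for n in range(N - 2 * m + 1)]
    index = {s: k for k, s in enumerate(states)}
    H = np.zeros((len(states), len(states)))
    for (n, m), k in index.items():
        n2 = N - n - 2 * m                      # atoms in mode 2
        H[k, k] += delta_b * m                  # molecular detuning (assumed sign)
        if n2 >= 1:                             # hopping a1^dag a2 + h.c.
            kk = index[(n + 1, m)]
            t = -J * np.sqrt((n + 1) * n2)
            H[kk, k] += t
            H[k, kk] += t
        if n >= 1 and n2 >= 1:                  # conversion b^dag a1 a2 + h.c.
            kk = index[(n - 1, m + 1)]
            t = g / np.sqrt(N) * np.sqrt(n * n2 * (m + 1))
            H[kk, k] += t
            H[k, kk] += t
    return H, states

# Ground state, energy gap and atomic fraction rho_a = 1 - 2 rho_b for N = 20:
N = 20
H, states = hamiltonian(N, J=0.5, g=1.0, delta_b=-2.0)
E, V = np.linalg.eigh(H)
rho_a = sum(abs(V[k, 0]) ** 2 * (N - 2 * m) for k, (n, m) in enumerate(states)) / N
print(E[1] - E[0], rho_a)
```

Note that the basis size generated this way is (N/2 + 1)^2, matching the dimension quoted in the text.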
A. Energy gap
Now we are in a position to study the QPT with the help of the energy gap, which has been used to characterize the QPT in Refs. [11][12][13][15]. Here the energy gap is defined as the energy difference between the first excited state E 1 and the ground state E 0 , i.e., ∆E = E 1 − E 0 . In Fig. 4(a), the energy gap ∆E is shown as a function of J for different particle numbers. For a fixed energy detuning δ b , the avoided level-crossing between the ground and the first excited state appears near the critical hopping strength J c , which is the phase transition point given by the mean-field approximation. From Fig. 4(a), we can see that for a finite particle number N, the energy gap reaches its minimum value ∆E min at the critical point J N where the QPT from a pure molecule state to a mixed atom-molecule state happens. We find that the critical hopping strength becomes closer to J c with the increase of the particle number N.
To further characterize the finite-size effect present in Fig. 4(a), we show the scaling behaviors of ∆E min and ∆J = J N − J C with the particle number N for different interaction strengths in Fig. 4(b) and (c). In these two panels, the discrete points denote the numerical results for finite particle numbers N, while the solid lines are the fitting functions, which show clearly how the quantum results tend to the MFA ones with the increase of the particle number. From Fig. 4(b) and (c), we find that both ∆J and (∆E) min converge to zero with different slopes for different Ũ as N −δ G and N −µ approach zero. Thus both ∆J and (∆E) min approach zero in the limit of N → ∞, i.e., ∆E min converges to zero and J N converges to J C , in good agreement with the results in MFA. Additionally, the critical exponents δ G and µ, which differ slightly for different Ũ, are shown in Table I.
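The power-law fits behind Table I can be reproduced by ordinary least squares in log-log coordinates; in the sketch below the arrays Ns and ys are assumed inputs (e.g., particle numbers and gap minima obtained from the diagonalization sketch above).

```python
import numpy as np

def power_law_exponent(Ns, ys):
    """Fit y ~ c * N**(-mu) by least squares on log-log data and return mu."""
    slope, _ = np.polyfit(np.log(np.asarray(Ns, float)),
                          np.log(np.asarray(ys, float)), 1)
    return -slope

# e.g. mu for the minimal gap, delta_G for the drift J_N - J_C:
# mu = power_law_exponent(Ns, gap_minima)
# delta_G = power_law_exponent(Ns, np.abs(J_N - J_C))
```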
B. Fidelity susceptibility
We know that the quantum fidelity can be used to characterize the quantum phase transition [11,13,23-25], where the quantum fidelity is defined as the absolute value of the overlap between two ground states with an infinitesimal variation of the control parameter. In order to study the effect of the hopping strength J on the QPT of the ground state, the fidelity can be written as F(J) = |⟨Ψ 0 (J)|Ψ 0 (J + δJ)⟩|, where |Ψ 0 (J)⟩ and |Ψ 0 (J + δJ)⟩ are two ground states of the system with a small parameter difference δJ. We find that the value of the fidelity depends on the value of δJ, although there is a sudden drop of the fidelity near the critical point J c , which is the phase transition point given in MFA. In order to make up for this deficiency (i.e., the dependence of the fidelity on the value of δJ), we make use of the concept of fidelity susceptibility [8,23,26]. In first-order perturbation theory, we consider the hopping term as the perturbation (i.e., H = H 0 + JH J ). Then the fidelity susceptibility is defined as χ F = Σ n≠0 |⟨Ψ n |H J |Ψ 0 ⟩| 2 /(E n − E 0 ) 2 , which does not depend on the value of δJ. It has also been proved to be related to the correlation function [27], which can be used to detect phase transitions.
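Numerically, the susceptibility is conveniently estimated from overlaps at small δJ; the sketch below uses this standard finite-difference proxy (equivalent to the perturbative definition in the δJ → 0 limit) and assumes the hamiltonian() sketch given earlier in this section.

```python
import numpy as np

def ground_state(J, N, g=1.0, delta_b=-2.0):
    """Ground state from the hamiltonian() sketch above (an assumption)."""
    H, _ = hamiltonian(N, J, g, delta_b)
    _, V = np.linalg.eigh(H)
    return V[:, 0]

def fidelity_susceptibility(J, N, dJ=1e-3):
    """chi_F ~ 2 (1 - |<psi(J)|psi(J+dJ)>|) / dJ^2; the abs() also removes
    the arbitrary sign of the numerically computed eigenvectors."""
    F = abs(ground_state(J, N) @ ground_state(J + dJ, N))
    return 2.0 * (1.0 - F) / dJ**2
```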
The numerical results of the fidelity susceptibility versus J for N = 10, 50, 110, 190 are shown in Fig. 5(a). We can see that the fidelity susceptibility is about zero when J is far away from the critical point J N , while it increases suddenly and reaches its maximum value at the critical point J N , which depends on the particle number N. Meanwhile, we can find that the maximum value of the fidelity susceptibility becomes larger and the critical point J N becomes closer to J C as N increases. In order to study the effect of the particle number on the critical behavior, we show the finite-size power-law scaling behaviors of ∆J and (χ max ) −1 for different Ũ in Fig. 5(b) and (c).
We can find clearly that both ∆J and (χ max ) −1 converge to zero for various Ũ with different slopes as N −δ F and (N ν ) −1 approach zero, respectively. In other words, ∆J is close to zero and χ max approaches infinity when N → ∞. We show the critical exponents δ F and ν for different Ũ, which do not show good convergence [8,28], in Table II.

C. Entanglement entropy

If the system can be viewed as a bipartite system, the entanglement entropy of its ground state is physically meaningful and has been used to characterize the quantum phase transition [9,11,13,23,29,30]. The von Neumann entropy, one typical entanglement entropy, of a bipartite system AB in a pure state |Ψ⟩ is defined as

S = −Tr(ρ A log ρ A ) = −Tr(ρ B log ρ B ),    (8)

where ρ A(B) = Tr B(A) (|Ψ⟩⟨Ψ|) is the reduced density matrix of the system with two subsystems A and B. In this paper, we consider the two atomic modes as subsystem A and the molecular mode as subsystem B. From the Schmidt decomposition of a pure state, we know that the two reduced density matrices share the same nonzero eigenvalues, so the entropy does not depend on which subsystem is traced out.

With the help of the definition of the entanglement entropy, we plot the numerical results of the entanglement entropy between the two subsystems for the exact ground state in Fig. 6(a) with different particle numbers N = 10, 50, 110, 190. In Fig. 6(a), the maximum value of the entanglement entropy becomes larger as N increases. However, unlike the behavior of the transverse Ising model [31,32], the entanglement entropy does not reach its maximum value at the critical point J C even if N → ∞. It is nevertheless interesting that the sudden rise of the entanglement entropy takes place near the critical point J C = (3 − √ 2)/2. In order to describe the quantum phase transition, the first-order derivative of the entanglement entropy with respect to J is introduced. The numerical results of the first-order derivatives are shown in Fig. 6(b). The maximum value (dS/dJ) max of the first-order derivative of the entanglement entropy appears at the critical point J N , which becomes closer to J C as N increases. To find the dependence of (dS/dJ) max and ∆J on the particle number N, we show their scaling behaviors for different Ũ in Fig. 6(c) and (d). We find that both ∆J and (dS/dJ max ) −1 converge to zero for various Ũ with different slopes as N −δ S and (N ω ) −1 approach zero. Thus ∆J tends to zero and (dS/dJ) max diverges when N → ∞. Meanwhile, the critical exponents δ S and ω for different Ũ are shown in Table III.
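For this particular bipartition the reduced density matrix of the molecular mode is diagonal in m, because the atom number N − 2m labels the A-sector; a minimal sketch (assuming the Fock-basis conventions of the diagonalization sketch above):

```python
import numpy as np

def entanglement_entropy(psi, states, N):
    """Von Neumann entropy between the two atomic modes (subsystem A) and
    the molecular mode (subsystem B) for a state psi on the Fock basis
    |n, m>. Since N - 2m fixes the A-sector, rho_B is diagonal with
    eigenvalues lambda_m = sum_n |C_{n,m}|^2."""
    lam = np.zeros(N // 2 + 1)
    for k, (n, m) in enumerate(states):
        lam[m] += abs(psi[k]) ** 2
    lam = lam[lam > 1e-15]          # drop numerical zeros before the log
    return float(-(lam * np.log(lam)).sum())
```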
IV. SUMMARY
In this paper, we have studied the phase transition in an atom-molecule conversion system where the atoms can jump between two atomic hyperfine states. In the mean-field approximation, we have given the phase diagram for the ground state, and shown that the phase boundary between the pure molecule phase and the mixed atom-molecule one depends only on the hopping strength J and the energy detuning δ b but not on the atomic interaction Ũ. With the help of the fixed points, we have studied the fidelity of the ground state and the energy gap between the first-excited state and the ground one. We have found that the ground state changes from degenerate to non-degenerate and the fidelity decreases suddenly at the phase boundary J C = − 3 4 δ b − √ 2 2 g with g = 1, which implies that the energy gap and the fidelity can well characterize the phase transition.
In comparison to the mean-field approximation, we have investigated the quantum phase transition of the system with the full quantum method. Taking the total population of atoms as the order parameter, we have shown that the QPT from a pure molecule phase to a mixed atom-molecule phase happens at the critical point J N for different N with a fixed energy detuning δ b . Note that the value of J N depends on both the value of δ b and the particle number N of the system. We have further characterized the QPT with the help of the energy gap between the first-excited state and the ground state, the fidelity susceptibility of the exact ground state, and the entanglement entropy and its first-order derivative between the atomic subsystem and the molecular subsystem, respectively, which give the same critical point J N . We have also shown that the critical point J N approaches J C with the increase of the particle number N through studying the finite-size scaling behaviors of the energy gap, the fidelity susceptibility and the first-order derivative of the entanglement entropy. Thus, in the limit of N → ∞, the MFA and full quantum methods give the same phase boundary. Our results enrich the phenomena of QPTs in multiple-component systems, especially atom-molecule conversion systems, and indicate that one can control the QPT of the atom-molecule conversion system by changing the hopping strength between the two atomic hyperfine states as well as the atom-molecule energy detuning.
Maker-Breaker-Crossing-Game on the Triangular Grid-graph
We study the $(p,q)$-Maker Breaker Crossing game introduced by Day and Falgas-Ravry in 'Maker-Breaker percolation games I: crossing grids'. The game described in their paper involves two players, Maker and Breaker, who take turns claiming p and q as yet unclaimed edges of the graph respectively. Maker aims to make a horizontal path from a leftmost vertex to a rightmost vertex, and Breaker aims to prevent this. The game is a version of the more general Shannon switching game and is played on a square grid graph. We consider the same game played on the triangular grid graph $\Delta_{(m,n)}$ (m vertices across, n vertices high) and aim to find, for given $(p,q,m,n)$, a winning strategy for Maker or Breaker. Using a strategy similar to that of Day and Falgas-Ravry, we show that: $\bullet$ For sufficiently tall grids and $p\geq q$, Maker has a winning strategy for the $(p,q)$-crossing game on $\Delta_{(m,n)}$. $\bullet$ For sufficiently wide grids and $4p\leq q$, Breaker has a winning strategy for the $(p,q)$-crossing game on $\Delta_{(m,n)}$.
1 Introduction
We consider the Maker-Breaker crossing game as described by Day and Falgas-Ravry on a triangular lattice grid as opposed to the square lattice grid. The results are obtained through a method derived from their paper [1].
The results obtained from paper [1] are applied in Day and Falgas-Ravry's follow up paper [2] to prove results on the (p, q)-percolation game on the infinite square grid graph in which Breaker wins if they can completely enclose the central vertex in a dual cycle.
The proof of one of their results, that Breaker wins the (p, q)-percolation game for q ≥ 2p on the square lattice, involves building concentric square annuli, which are then split into 4 crossing games for the sides and 4 corner sections. The results from [1] show that Breaker can prevent any top-bottom path (or in-out path) on each side of these square annuli. A separate 'box-game' strategy means that, for some annulus, Breaker can claim all edges connecting the corner sections of the annulus with the side sections, hence completely enclosing the centre and winning the game. A similar result could be proven for the (p, q)-percolation game on the triangular or hexagonal lattice for q > p or q ≥ 6p respectively.
The results may also apply to other combinatorial games played on triangular or hexagonal grid-graphs or lattices.

Definition 1.1 ((p, q)-crossing game). The Maker-Breaker (p, q)-crossing game played on a grid graph G(V, E) is a game played by Maker and Breaker, who on each turn alternately claim p and q as yet unclaimed edges respectively. G(V, E) has a set of leftmost and rightmost vertices specified; in the case of grid graphs these are the obvious leftmost and rightmost vertices. Maker's goal is to claim edges which form a path from a leftmost vertex to a rightmost vertex. Breaker's goal is to stop this from happening.

We will use the notation G*(V*, E*) for the dual graph of a planar graph G, whose vertices are the faces of G and whose edges represent the adjacency of the faces of G. Hence each edge e ∈ G has a respective dual edge e* ∈ G*.
We carry over assumptions concerning the behaviour of Maker and Breaker from that paper, including:

• There exists a unique winner for all finite boards.
• For a planar graph G(V, E), Breaker's goal is equivalent to claiming all edges that are dual to the edges of a top-bottom path on the dual graph G* of G.
• Maker never makes a cycle or arch (from left to left or right to right).
• Breaker never makes a dual-cycle or dual-arch (from top to top or bottom to bottom).
• Claiming an edge is never disadvantageous to either player.

Something we cannot carry over from Day and Falgas-Ravry's paper is the grid being self-dual, as is the case with the square grid lattice Λ_{n+1,n}. The dual of the triangular grid graph ∆_{m,n} is in fact the hexagonal grid graph H_{n,m}, which we will address in Section 4. This makes determining an exact threshold for c in the (p, cp)-crossing game challenging. We know from our two main results that the value of c lies somewhere between 1 and 4.

Definition 1.2. We denote by ∆_{m,n} the graph formed by a triangular lattice with n rows, alternating between m and m − 1 vertices per row going upward. We number the rows upward from 1 to n; vertices are labelled horizontally in a similar way, with two horizontally adjacent vertices differing by 2 in their horizontal label. Observe that the dual ∆*_{m,n} of this graph is the hexagonal lattice, and that Breaker's objective of stopping Maker creating a left-right crossing path is equivalent to creating a top-bottom dual crossing path on this dual graph.
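For readers who want to experiment, the networkx package ships triangular and hexagonal lattice generators that can stand in for ∆_{m,n} and its dual. Note that networkx's (rows, columns) convention counts rows and columns of triangles and does not coincide with the labelling of Definition 1.2, so the sketch below is only an illustrative construction.

import networkx as nx

def crossing_board(rows, cols):
    # Hypothetical helper: build a concrete triangular board and report
    # the leftmost/rightmost vertices that Maker must join by a path.
    g = nx.triangular_lattice_graph(rows, cols, with_positions=True)
    pos = nx.get_node_attributes(g, "pos")
    xs = [p[0] for p in pos.values()]
    left = [v for v, p in pos.items() if p[0] == min(xs)]
    right = [v for v, p in pos.items() if p[0] == max(xs)]
    return g, left, right

g, left, right = crossing_board(4, 6)
print(g.number_of_nodes(), g.number_of_edges(), len(left), len(right))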
We will label the dual-vertices intuitively in the horizontal direction, and give all dual-vertices between standard vertex rows k and k + 1 the vertical label k + 1/2. (We will later define a 'dual vertex level' to distinguish dual-vertices representing the faces of upward and downward oriented triangles with the same vertical label.)

Definition 1.3 (External boundary edges). We define the external boundary edges of a connected component C of Maker edges to be the unclaimed edges from the set of vertices in the component C, V(C), to the set of vertices not in C, V(G)\V(C). We denote this β(C).
Lemma 1.1. The number of edges in the external boundary of a connected component C on the dual graph ∆*_{m,n} is at most 3 more than the number of edges in C:
|β(C)| ≤ |E(C)| + 3.

Proof. We proceed by induction. A single dual edge has at most 4 external boundary edges. Consider a dual component C of n edges. If there is a cycle in the dual component, then there exists a dual edge e such that C\e is a connected component of n − 1 dual edges. Adding e back in makes no difference to the set of external boundary edges, so by the inductive hypothesis |β(C)| ≤ (n − 1) + 3 ≤ n + 3.
If there is no cycle in C, then the component is a tree. Let e be adjacent to a leaf of C. Then C\e is a tree with n − 1 edges, and by the inductive hypothesis |β(C\e)| ≤ (n − 1) + 3. Adding e adds at most 2 dual edges to the set of external boundary edges but removes e itself. Hence |β(C)| ≤ n + 3.
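The bound of Lemma 1.1 is easy to sanity-check numerically. The following brute-force sketch (assuming Python with networkx; the small patch size and the cutoff k = 4 are arbitrary choices) enumerates connected dual-edge sets on a hexagonal patch and verifies |β(C)| ≤ |E(C)| + 3; it is a check, not a substitute for the induction.

import networkx as nx
from itertools import combinations

G = nx.hexagonal_lattice_graph(2, 2)   # small dual-grid patch
edges = list(G.edges())

def beta(edge_set):
    # edges from V(C) to V(G)\V(C), excluding edges of C (Definition 1.3)
    verts = {v for e in edge_set for v in e}
    return [e for e in edges
            if e not in edge_set and ((e[0] in verts) ^ (e[1] in verts))]

def connected(edge_set):
    return nx.is_connected(nx.Graph(list(edge_set)))

k = 4
for size in range(1, k + 1):
    for c in combinations(edges, size):
        if connected(c):
            assert len(beta(set(c))) <= size + 3
print("|beta(C)| <= |E(C)| + 3 holds for all connected edge sets up to size", k)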
We see from this lemma that, with an allowance of 3 edges per component, a component of Breaker edges can be 'contained' by Maker. Hence, given a sufficient strategy to manage these 3 extra edges, a winning strategy for Maker follows for the (p, q)-crossing game when p ≥ q.
The following is the first main result we aim to prove.
Maker has a winning strategy for the (p, q)-crossing game on the triangular grid graph ∆_{m,n} for p ≥ q and n ≥ q + 2. Since it is never disadvantageous for either player to claim a free edge, we only need to prove this for the case p = q.
2 The q-response game, Brackets and Security
We define a more general variant of the (p, p)-crossing game between Maker and Breaker.

Definition 2.1 (q-response game). The q-response game is played on ∆_{∞,n} by players Horizontal (H) and Vertical (V). On each turn t, V picks an integer r_t ∈ [q] and claims r_t unclaimed edges in ∆_{∞,n}, marking them red. H responds to each of V's turns by also claiming r_t unclaimed edges, marking them blue. The goal of Vertical (V) is equivalent to that of Breaker. Horizontal (H) wins if she can prevent V from getting a top-bottom path.
The (q, q)-crossing game is the special case of the response game where V chooses r_t = q for all t and H forfeits her first turn. Note, therefore, that any winning strategy for H in the q-response game is also a winning strategy for Maker in the (p, p)-crossing game, where Maker may make an arbitrary first move.
Type 6: the edges {(x + 1, y), (x + 3, y), (x + 4.5, y + 0.5)} form a bracket of Type 6 if none are red. The interior dual-vertices of this bracket are defined as {(x + 1, y + 0.5), (x + 3, y + 0.5), (x + 4, y + 0.5)}.

For the dual graph ∆*_{∞,n} we label the vertical 'dual vertex level' of v ∈ V(∆*_{m,n}) intuitively from bottom to top, 1 to 2n, with the dual-vertex at the centre of each upward pointing triangle with base on the k-th line being on dual vertex level 2k, and the dual-vertex at the centre of a downward pointing triangle with point on the k-th line being on dual vertex level 2k + 1.

Lemma 2.1. Let n ≥ q + 2. If the grid is secure at the start of V's turn in the q-response game played on ∆_{∞,n}, then V cannot win in a single turn.

Proof. Suppose V claims l ≤ q edges e_1, e_2, ..., e_l whose duals form a bottom-top crossing path of red edges, and let the order of the edges be that of their appearance in the path from bottom to top. We define v^-_i and v^+_i to be the 'dual vertex level' of the first and second vertex adjacent to e_i in order of appearance in the bottom-top path.
We show by induction on k that v^+_k ≤ 2k + 1. For k = 1, either e_1 is adjacent to a bottom dual-vertex (in which case v^+_1 = 2) or it meets a bottom component. As the bottom component is secure, e_1 must lie across the gate; hence e_1 = (x, 1.5) and consequently v^+_1 ≤ 3. Hence the statement holds for k = 1.

Now suppose v^+_k ≤ 2k + 1 and consider the dual edge e_{k+1}. If e_k and e_{k+1} share a vertex, then, since any two adjacent vertices are at most 1 dual vertex level apart, v^+_{k+1} ≤ v^+_k + 1 ≤ 2(k + 1). If e_k and e_{k+1} do not share a vertex, then there must be a secure floating component C between them. If C is secured by a bracket B, consider the dual of each edge in B; these are the possible edges for e_k and e_{k+1}. We consider the greatest dual vertex level reached by any of these dual edges relative to the lowest interior dual-vertex, as this is the highest possible value of v^+_{k+1} − v^+_k. By inspection, bracket Type 3 has the largest such difference, of 2 vertex levels. Therefore v^+_{k+1} ≤ v^+_k + 2 ≤ 2(k + 1) + 1, completing the induction.

It follows that v^+_l ≤ 2l + 1 ≤ 2q + 1. Since the interior dual-vertex of the gate of a secure top component is at dual vertex level at least 2n − 2 ≥ 2q + 2, the edge e_l reaches neither a top vertex nor a top component.
Therefore V cannot win in a single turn from a secure grid.
3 The secure game

In the secure game, the position at the end of each of H's turns must satisfy:

1. the grid is 'secure' with respect to the q-response game;
2. for distinct red components C and C′ secured by blue paths P and P′, any edge in P ∩ P′ is a blue double edge.
H wins if she can ensure the grid is secure at the end of each of her turns; V wins otherwise.
Lemma 3.1. H has a winning strategy for the secure game on ∆_{∞,n} for n ≥ 2.
Proof. Suppose the grid is secure before the beginning of V's turn, and let V claim edge e, where e* is adjacent to dual-vertices v_1 and v_2. There are several cases to check; as an example we consider bracket Type 1 with end vertices (x, y) and (x + 2, y):

• If e = (x − 0.5, y − 0.5), then H claims edge f = (x − 1, y) to secure the component with P ∪ f and a bracket of Type 4 with end vertices (x − 2, y) and (x + 2, y).
The cases of the other bracket types are handled analogously by H.

Case 3. Let v_1 and v_2 be part of existing components C_1 and C_2. Let C_i be secured by blue path P_i and either a bracket of non-red edges B_i (if C_i is floating) or a gate of non-red edge g_i (if it is a top or bottom component). Clearly C_1 cannot equal C_2, else e* creates a dual cycle in that connected component, which breaks rule (i).

Case 3b. C_1 and C_2 are both floating components. If e is part of both P_1 and P_2, then player H has 3 moves and uses them to claim all edges of bracket B_1. If e is part of P_1 and B_2, then player H has 2 moves and uses them to claim the remaining 2 edges of bracket B_2. Finally, if e is part of B_1 and B_2, then we consider cases on brackets. The cases are easy to check, since many brackets are not compatible with one another. We check the possible instances for a bracket of Type 1:

• Alignment is possible with another bracket of Type 1, where the third and first edges are shared respectively. H can claim the first edge of the left bracket, f, to secure the new component with P_1 ∪ P_2 ∪ f and a bracket of Type 6.
• Alignment is possible between brackets of type 1 and 3, sharing first and second edges respectively. H claims the third edge of the type 3 bracket f to secure the new component with P 1 ∪ P 2 ∪ f and bracket of type 4.
• Alignment is possible between brackets of type 1 and 4, sharing first and third edges respectively. H claims the first edge of the type 4 bracket f to secure the new component with P 1 ∪ P 2 ∪ f and bracket of type 6.
• Alignment is possible between brackets of type 1 and 6, sharing first and third edges respectively. H claims the first edge of the type 6 bracket f to secure the new component with P 1 ∪ P 2 ∪ f and bracket of type 6.
• Alignment is possible between brackets of type 1 and 7, sharing first and second (/or third) edges respectively. H claims the third edge of the type 7 bracket (/or third of the type 1 bracket) f to secure the new component with P 1 ∪ P 2 ∪ f and bracket of type 4 (/or 8).
• Alignment is possible between brackets of type 1 and 8, sharing first and second edges respectively. As the third edge of bracket type 8 is now not an external boundary edge the new component is secured by P 1 ∪ P 2 and a bracket of type 4.
• No alignment is possible between brackets of type 1 and brackets of types 2 and 5.
Hence, at the end of player H's turn, the board is always secure, and so H has a winning strategy for the secure game on ∆_{∞,n} for n ≥ 2. We now apply the strategy for the secure game to the q-response game to obtain our final result.
Theorem 3.2. Horizontal has a winning strategy for the q-response game on ∆_{∞,n} for n ≥ q + 2.
Proof. We aim to prove that player H can keep the board secure at the end of her turn. Suppose the grid is secure at the start of V's turn and he claims r ≤ q edges A = {e_1, e_2, ..., e_r} on his turn. By Lemma 2.1, V cannot win on this single turn.
Player H responds to each of these edges one by one as if she were playing the secure game. If the edge H would claim in response to some e_j was already claimed by her in response to a previous edge e_i, then H may use that move elsewhere, as in the secure game. After responding to all edges in A, player H has a set B of at most r unspent moves. The grid remains secure.
However, we must check what happens if player V's moves would contradict conditions (i)-(iii) of the secure game: (i) We may assume player V never creates a dual cycle or dual arch in the q-response game, so condition (i) is satisfied.
(ii) For the second condition, V cannot claim a dual edge if doing so connects a top and a bottom component: this would mean that V wins, and by Lemma 2.1 V cannot win in one turn from a secure grid. So condition (ii) is satisfied.
(iii) Finally, there is condition (iii), which states that if C is a floating component and P a blue path that 'secures' C, then V cannot claim an edge of P if doing so turns C into a top or bottom component. For this we choose the order in which H responds to A. By conditions (i) and (ii), all top and bottom components are rooted trees, with the root being the unique top or bottom dual-vertex. Let A be ordered such that if e, e′ ∈ A are dual edges of the same top or bottom component after V's turn, and e is strictly closer in graphical distance to the root, then e appears before e′ in A. Suppose, using this ordering, that there exists an edge e_i ∈ A such that V claiming e_i violates condition (iii): that is, before e_i is played, a floating component C is secured by blue path P and bracket B, and C becomes a top or bottom component after e_i is played. Given the ordering of A, no other edge of A adjacent to C could have been played before e_i in the secure game, as e_i is what joins C to the bottom or top component and therefore has a strictly smaller graphical distance to the root. As none of the dual edges adjacent to C were played before e_i, all edges of P were present before V started his turn. Therefore in the q-response game V could not claim e_i, a contradiction.
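The ordering used above is simply "respond to edges in increasing distance from the root of their component". A minimal sketch of that bookkeeping, with hypothetical inputs, might look as follows.

import networkx as nx

def response_order(component: nx.Graph, root, claimed_edges):
    # Respond to V's edges within a top/bottom component in increasing
    # graphical distance from the root (the unique top/bottom dual-vertex).
    dist = nx.shortest_path_length(component, source=root)
    return sorted(claimed_edges, key=lambda e: min(dist[e[0]], dist[e[1]]))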
As player H can hence ensure the grid is always secure at the start of V's turn, V cannot win in one turn from a secure grid, and the grid is finite, eventually H will achieve a left-right crossing path. This is a winning strategy for the Horizontal player.
4 A similar strategy for Breaker
We wish to prove that Breaker has a winning strategy in the (p, 4p)-crossing game.
Theorem 4.1. Breaker has a winning strategy for the (p, q)-crossing game on the triangular grid graph ∆_{m,n} for q ≥ 4p and m ≥ q + 1.

Using the fact that Breaker's objective on a planar grid graph G is equivalent to Maker's objective on the dual graph G* rotated 90°, we can replicate the method used to obtain a strategy for Maker.
It is important to note, however, that as the Maker of the original game has the first move, Breaker is the first to move in this 'dual game'. To distinguish the two games, Maker and Breaker on the dual grid will be referred to as 'DualMaker' and 'DualBreaker'.
We will call the dual graph generated as above from ∆_{m,n} by the name H_{n,m}, since the dual of the triangular grid is hexagonal. In this new game, DualMaker plays on the hexagonal grid graph and DualBreaker plays on the triangular grid graph.
Again we consider a response game though this time the Horizontal player may respond with 4 times the moves of Vertical.
Definition 4.1 (q-4response game). The q-4response game is played on H_{∞,m} by players Horizontal (H) and Vertical (V). On each turn t, V picks an integer r_t ∈ [q] and claims r_t unclaimed edges in H_{∞,m}, marking them red. H responds to each of V's turns by claiming 4r_t unclaimed edges, marking them blue. The goal of Vertical (V) is equivalent to that of Breaker. Horizontal (H) wins if she can prevent V from getting a top-bottom path.
Again, the crossing game is the specific version of the q-4response game where r_t = q for all t; a winning strategy for Horizontal in this game can be used directly by DualMaker as a winning strategy for the (p, 4p)-crossing game on H_{m,n} for any m ≥ 1.
Secure floating components are defined as they are for the triangular grid graph, but with dualbrackets used instead of the original brackets.

Remark. Every connected component C of dual edges has a leftmost lowest dual-vertex. The dualbrackets are formed by taking the 3 lowest edges whose duals are adjacent to this lowest dual-vertex, together with one of the 5 possible 3-edge-paths along the exterior dual edges of C.
Before defining secure top and bottom components, we define the top-securing vertices and bottom-securing vertices of H_{n,m} to be the vertices of the form (x, 2m − 2) and (x, 2) respectively. We do this because edges above these vertices are never required to secure a top/bottom component, as if V chose these edges he would create a dual arch.

Lemma 4.2. Let m ≥ (1.5)q. If the grid is secure at the start of V's turn in the q-4response game, then V cannot win in a single turn.

Proof. Suppose the grid is in a secure position before V's turn and he selects q edges.
Suppose he claims l ≤ q edges e_1, e_2, ..., e_l to create a bottom-top crossing path of red edges, ordered by their use from bottom to top in the path.
Define y^-_i and y^+_i to be the vertical labels of the vertices adjacent to edge e_i, in order of appearance in the bottom-top path. We prove by induction that y^+_i ≤ 3i. The first edge is either adjacent to a bottommost dual-vertex, in which case y^-_1 = 1 and y^+_1 = 3, or it is adjacent to a bottom component and is dual to the gate helping secure that component, in which case, by the definition of a bottom component, y^+_1 = 2. Therefore the statement holds for i = 1. Suppose the statement is true for i = k, so y^+_k ≤ 3k. If e_{k+1} is adjacent to e_k, then clearly y^+_{k+1} ≤ y^+_k + 2 ≤ 3(k + 1). If not, then e_k is adjacent to an existing component. The component cannot be a bottom or a top component, else e_{k+1} would not be part of the path; so it is a floating component, and e_k and e_{k+1} are dual edges of its dualbracket. The value of y^+_{k+1} − y^+_k is bounded by the largest difference in vertical label between an interior dual-vertex of the component and a vertex not in C adjacent to the dual of the bracket securing C. For dualbrackets it is easy to check that for Types 1, 2, 3, 4 and 5 these differences are 3, 2, 2, 1 and 0 respectively. Hence y^+_{k+1} ≤ y^+_k + 3 ≤ 3(k + 1). Since the lower top dual-vertices have vertical label 2m − 1 ≥ 3q + 3, and any top component must be reached through a gate to a vertex with vertical label 2m − 3 ≥ 3q + 1 > 3q ≥ y^+_l, this contradicts V creating a top-bottom crossing path from a secure board in one turn.
We define the secure game on the hexagonal grid graph in the same way as on the triangular grid graph, except that H may claim 3 additional edges.
Definition 4.5 (Secure game on H_{n,m}). The secure game on H_{n,m} is played similarly to that on ∆_{∞,n}. At any point an edge may be unclaimed, red, blue, or a blue double edge (claimed by H twice). On each turn V claims ANY edge and makes it red, subject to the same conditions (i)-(iii) as in the secure game on the triangular grid.

Lemma 4.3. H has a winning strategy for the secure game on H_{n,m}.

Proof. Suppose the grid is secure before the beginning of V's turn, and let V claim edge e, where e* is adjacent to dual-vertices v_1 and v_2.
Case 1. Let v_1 and v_2 not be part of any existing red components. If neither v_1 nor v_2 is a top or bottom vertex, then with a dualbracket of Type 1, 2 or 3 (depending on the orientation of v_1 and v_2 as the interior dual-vertices) H can secure the new component with the 4 other edges whose duals are adjacent to v_1. If, WLOG, v_1 is a top (/bottom) dual-vertex, then either:

• v_1 and v_2 have different horizontal labels, in which case the dual edge can be extra-secured by the 4 edges adjacent to v_2,
• or v_1 and v_2 have equal horizontal labels, in which case the dual edge can be secured by the 4 leftmost edges whose duals are adjacent to v_2, with the last one as the gate.

Case 2d. If e is part of the dualbracket B securing an existing red component, then we consider the external boundary edges of the component: C ∪ e has at most 10 external boundary edges. A new dualbracket can be found by taking the leftmost lowest dual-vertex among the interior dual-vertices of B, say v, including the lower 3 external boundary edges of this vertex, and then selecting the next three external boundary edges to the right of this lowest dual-vertex. By the remark following the dualbracket definition this is a dualbracket. H selects the other 4 external boundary edges to secure the component.
Case 3. Let v_1 and v_2 be part of existing components C_1 and C_2. Let C_i be secured by blue path P_i and either a dualbracket of non-red edges B_i (if C_i is floating) or a gate of non-red edge g_i (if it is a top or bottom component).
Clearly C_1 cannot equal C_2, else e* creates a dual cycle in that connected component, which breaks rule (i). C_1 and C_2 cannot both be top or bottom components, else e either creates a dual arch or connects a top and a bottom component, contradicting conditions (i) and (ii) respectively of the secure game.
Case 3a. C_1 is a top or bottom component and C_2 is a floating component. By restriction (iii), e must be in the dualbracket B_2. If e is part of the blue path P_1, then player H has 5 moves and chooses the other 5 edges of B_2 to make C_1 ∪ C_2 ∪ e* a secure top or bottom component. If e is the gate g_1, then by inspection e cannot be part of dualbracket B_2 for any dualbracket type.
Case 3b. C_1 and C_2 are both floating components. If e is part of both P_1 and P_2, then player H has 6 moves and uses them to claim all edges of dualbracket B_1, re-securing the component C_1 ∪ C_2 with P_1 ∪ P_2 ∪ B_1\e and dualbracket B_2. If e is part of P_1 and B_2, then player H has 5 moves and uses them to claim the remaining 5 edges of bracket B_2, re-securing the component C_1 ∪ C_2 with P_1 ∪ P_2 ∪ B_2\e and B_1. Finally, if e is part of B_1 and B_2, then the set B_1 ∪ B_2\e consists of at most 10 non-red edges in one connected component. Consider the lower of the lowest interior dual-vertices of B_1 and B_2, say v; without loss of generality let v belong to C_1. Then, by the structure of the grid, e is not among the 3 lower edges of B_1. Following the remark after the definition of dualbrackets, we take the new bracket to be these 3 lower edges together with the 3-edge-path from the right endpoint of those lower edges. By construction this is a dualbracket. H claims the other 4 edges to re-secure the component.
Hence, at the end of player H's turn, the board is always secure, and so H has a winning strategy for the secure game on H_{n,m}.
Theorem 4.4. DualMaker has a winning strategy for the q-4response game on H_{n,m} for m ≥ q + 1.

We apply exactly the same method to convert the secure-game strategy on H_{n,m} into a strategy for the q-4response game as was used to convert the secure game on ∆_{∞,n} into the q-response game; the proof is hence analogous.
Modification by isolevuglandins, highly reactive γ-ketoaldehydes, deleteriously alters high-density lipoprotein structure and function
Cardiovascular disease risk depends on high-density lipoprotein (HDL) function, not HDL-cholesterol. Isolevuglandins (IsoLGs) are lipid dicarbonyls that react with lysine residues of proteins and phosphatidylethanolamine. IsoLG adducts are elevated in atherosclerosis. The consequences of IsoLG modification of HDL have not been studied. We hypothesized that IsoLG modification of apoA-I deleteriously alters HDL function. We determined the effect of IsoLG on HDL structure–function and whether pentylpyridoxamine (PPM), a dicarbonyl scavenger, can preserve HDL function. IsoLG adducts in HDL derived from patients with familial hypercholesterolemia (n = 10, 233.4 ± 158.3 pg/mg protein) were found to be significantly higher than in healthy controls (n = 7, 90.1 ± 33.4 pg/mg protein). Further, HDL exposed to myeloperoxidase had elevated IsoLG-lysine adducts (5.7 ng/mg protein) compared with unexposed HDL (0.5 ng/mg protein). Preincubation with PPM reduced IsoLG-lysine adducts by 67%, whereas its inactive analogue pentylpyridoxine did not. The addition of IsoLG produced apoA-I and apoA-II cross-links beginning at 0.3 molar eq of IsoLG/mol of apoA-I (0.3 eq), whereas succinylaldehyde and 4-hydroxynonenal required 10 and 30 eq. IsoLG increased HDL size, generating a subpopulation of 16–23 nm. 1 eq of IsoLG decreased HDL-mediated [3H]cholesterol efflux from macrophages via ABCA1, which corresponded to a decrease in HDL–apoA-I exchange from 47.4% to only 24.8%. This suggests that IsoLG inhibits apoA-I from disassociating from HDL to interact with ABCA1. The addition of 0.3 eq of IsoLG ablated HDL's ability to inhibit LPS-stimulated cytokine expression by macrophages and increased IL-1β expression by 3.5-fold. The structural–functional effects were partially rescued with PPM scavenging.
Numerous epidemiological studies show that HDL-C is inversely correlated with CVD risk (1-4). However, pharmacological interventions that raise HDL-C have failed to reduce risk (5). Recent evidence suggests that risk for CVD is more closely linked to HDL function than to HDL-C levels (6). Risk factors for CVD, including obesity, hypercholesterolemia, hypertension, and chronic kidney disease, create an environment of high oxidative stress, generating oxidized lipid species that modify HDL and alter its functional properties (7-9). Because HDL possesses several anti-atherogenic functions, including transport of excess cholesterol from peripheral cells to the liver for excretion, efflux of cholesterol from macrophage foam cells, anti-inflammation, and more (10), a loss of any of these functions would probably contribute to disease pathogenesis.
IsoLGs (also known as isoketals) are a family of lipid γ-ketoaldehydes that resemble prostaglandins and are generated both enzymatically by cyclooxygenases and nonenzymatically by lipid peroxidation, in parallel to F2-isoprostanes, during oxidative stress (Fig. 1). F2-isoprostanes are enriched in HDL, not LDL (11), and are considered the most reliable biomarker of oxidative damage (12), especially of lipid peroxidation (13). Whereas F2-isoprostanes are chemically stable, IsoLGs are extremely unstable due to the reactivity of the 1,4-dicarbonyl moiety with primary amines such as the ε-amino groups of lysine residues of proteins as well as the headgroups of phosphatidylethanolamines (PEs). The initial reaction of the IsoLG aldehyde forms a Schiff base, which undergoes a secondary reaction with the 4-keto group to form irreversible pyrrole adducts. These pyrrole adducts easily oxidize in the presence of oxygen to form stable lactam and hydrolactam adducts. IsoLGs also react with multiple proteins to form pyrrole-pyrrole cross-links (Fig. 1). The reaction rate of IsoLG with proteins greatly exceeds that of 4-hydroxy-2(E)-nonenal (HNE) (14) and malondialdehyde (15). Lipid peroxidation has long been postulated to play a critical role in the pathogenesis of atherosclerosis due to oxidative modification of LDL. Modifications of apoB of LDL by lipid-derived oxidation products lead to unregulated endocytosis of modified LDL, resulting in macrophage foam cells. IsoLG-lysine pyrrole adducts are present in oxidized LDL and in human atherosclerotic lesions (16). IsoLG-modified LDL induces macrophage uptake through the same receptor that recognizes oxidized LDL but not acetylated LDL (16). However, removal of lipoproteins containing apoB from plasma only decreases total plasma IsoLG-protein adducts by 20–22% (17), suggesting that most IsoLG adducts in plasma are not associated with LDL or very low-density lipoprotein. Importantly, IsoLG-protein adducts are increased 2-fold in patients with atherosclerosis or end-stage renal disease compared with healthy controls (17) and correlate more closely with cardiovascular disease risk than classical risk factors, including LDL and total cholesterol levels (18).
The potential contribution of oxidized HDL to atherogenesis has recently received attention, as HDL is not only more oxidizable than LDL (19,20) but also the major acceptor of lipid peroxides in plasma, including isoprostanes (19). A consequence is that HDL may be exposed to decomposition products of these oxidized lipids. Once HDL is modified, it not only loses important protective functions but also acquires pro-atherosclerotic properties. Another important pathway of oxidative modification involves reactive intermediates produced by phagocytic white blood cells, the cellular hallmark of inflammation. One potent oxidative enzyme is MPO, which is expressed by activated phagocytes and is found in high levels in human atherosclerotic tissues. MPO uses hydrogen peroxide to generate reactive oxygen and nitrogen species that severely impair HDL function. Because MPO complexes with apoA-I on HDL (21), these impairments are probably due to oxidative targeting of apoA-I (22). We have previously shown that MPO generates IsoLG, which can adduct to HDL proteins as well as PEs (23). However, the structural and biological consequences of IsoLG modification of HDL have not been explored.
Because IsoLG is extremely reactive and cross-links proteins, we hypothesize that modification of HDL proteins (particularly cross-linking of its structural proteins apoA-I and apoA-II) by IsoLG generated under the oxidative conditions of atherosclerosis would have deleterious consequences for HDL particle structure and function. To assess the contribution of IsoLG to HDL dysfunction, our laboratory has developed small-molecule scavengers of 1,4-dicarbonyls, including 5-O-pentyl-pyridoxamine (PPM), which react with IsoLG nearly 2000 times faster than lysines react with IsoLG, thereby inhibiting lysine modification (24). In the current study, we demonstrate that IsoLG mimics the effect of MPO on HDL cross-linking and loss of function and that PPM can prevent MPO-mediated HDL dysfunction. We examine the consequences of IsoLG for cross-linking of HDL proteins, particle morphology, and various HDL functions, including apoA-I exchange, cholesterol efflux, and protection against inflammation. Further, we test the ability of PPM to preserve HDL and thus protect against dysfunction.
IsoLG-HDL adducts are elevated in familial hypercholesterolemia
Whereas IsoLG-protein adducts were previously found in oxidized LDL and in human atherosclerotic lesions (16), and ~80% of all IsoLG-protein adducts in plasma were not associated with apoB-containing lipoproteins (17), we sought to determine the levels of IsoLG-protein adducts in HDL isolated from patients with hypercholesterolemia and atherosclerosis. We isolated HDL using density-gradient ultracentrifugation from plasma of familial hypercholesterolemic (FH) patients (n = 10) and from healthy volunteers (n = 7). Two of the FH patients had homozygous FH, and eight had severe heterozygous FH, with six of the patients (two homozygous FH and four severe heterozygous FH) undergoing regular LDL apheresis. From these patients, plasma was collected before LDL apheresis. Fig. 2A shows total plasma cholesterol levels of controls (189.1 ± 31.6 mg/dl) versus FH (298.6 ± 13.1 mg/dl) before HDL isolation. We found that IsoLG-protein adducts were significantly higher (p < 0.05) in FH (233.4 ± 158.3 pg/mg protein) than in controls (90.1 ± 33.4 pg/mg protein) (Fig. 2B). These results demonstrate that IsoLG-adducted HDL is increased in conditions associated with hypercholesterolemia and atherosclerosis.
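As a sanity check on the reported comparison, the group difference can be recomputed from the summary statistics alone, assuming the ± values are standard deviations (if they are standard errors, they must first be multiplied by √n) and that a Welch two-sample t-test is appropriate; this is an illustration, not the statistical test actually used in the study.

from scipy import stats

t, p = stats.ttest_ind_from_stats(
    mean1=233.4, std1=158.3, nobs1=10,  # FH patients, pg/mg protein
    mean2=90.1,  std2=33.4,  nobs2=7,   # healthy controls
    equal_var=False,                    # Welch's t-test
)
print(f"t = {t:.2f}, p = {p:.3f}")  # p comes out below 0.05 under these assumptions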
PPM prevents the generation of IsoLG-protein adducts and apoA-I cross-linking by MPO
We previously showed that ex vivo oxidation of HDL by MPO (in the presence of glucose oxidase/glucose/sodium nitrite) generates IsoLG-protein adducts (23). We therefore examined the effects of the 1,4-dicarbonyl scavenger PPM on this MPO-mediated oxidation. We found that PPM, but not its inactive analogue pentylpyridoxine (PPO), significantly reduced IsoLG-protein adducts formed when HDL was exposed to MPO (Fig. 3A). PPM also dose-dependently inhibited MPO-mediated cross-linking of apoA-I (at 50 and 250 eq relative to apoA-I), whereas PPO did not (Fig. 3B). These data demonstrate that IsoLG contributes to MPO-mediated cross-linking and modification of HDL.
IsoLG cross-links HDL structural proteins, resulting in HDL of larger size
To further characterize the effects of IsoLG on HDL cross-linking, we exposed HDL to increasing concentrations of IsoLG, from 0.1 molar eq of IsoLG/mol of apoA-I (0.1 eq) to 3 eq of IsoLG. This range of IsoLG concentrations yielded approximately the expected level of IsoLG-lysine adducts seen in vivo. Modification of HDL by 0.3 eq of IsoLG resulted in 242 ± 120 pg/mg IsoLG-lysine, which approximates the adduct levels seen in HDL derived from human FH patients (Fig. 2B). Modification of HDL by 3 eq of IsoLG resulted in 1936 ± 509 pg/mg IsoLG-lysine, which is below the level produced by ex vivo modification by MPO (Fig. 3A). We also confirmed that even at the highest concentration of IsoLG used (3 eq), no unreacted IsoLG was present in the flow-through when the HDL preparation was filtered through a 10-kDa MWCO filter (Fig. S1A).
We found that IsoLG dose-dependently cross-linked proteins in HDL starting at ~0.3 eq (Fig. 4A). 0.3 eq of IsoLG produced apoA-I immunoreactive bands with molecular weight higher than that of the apoA-I monomer and consistent with possible apoA-I dimers and trimers. Higher IsoLG concentrations produced high-molecular-weight bands that would be consistent with apoA-I cross-linking to additional proteins. PPM blocked cross-linking, but the inactive analogue PPO did not (Fig. 4B). The higher-molecular-weight bands are seen in Coomassie Blue-stained protein gels of IsoLG-modified HDL as well as of IsoLG-modified synthetic apoA-I particles containing only recombinant apoA-I as the protein (Fig. S2A). That the apoA-I antibody detects higher-molecular-weight bands in the synthetic apoA-I particles strongly supports the notion that the higher-weight bands in native HDL modified by IsoLG represent apoA-I oligomers as well as apoA-I cross-links to other proteins (Fig. S2B).
Examination of HDL by transmission EM to quantify particle size showed that IsoLG modification produced larger HDL particles (Fig. 4, C and D). Unmodified control HDL consisted of small round particles with mean diameter 9.50 ± 2.91 nm. At 3 eq of IsoLG, HDL particles fell into two size distributions, 5–13 and 15–23 nm, which was significantly different from unmodified HDL (p < 0.0001). These results show that IsoLG modification increases HDL particle size as well as cross-linking its structural proteins.
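One way the two diameter subpopulations could be resolved from per-particle measurements is with a two-component Gaussian mixture; the sketch below uses simulated placeholder diameters, since the raw EM measurements are not reproduced here.

import numpy as np
from sklearn.mixture import GaussianMixture

np.random.seed(0)
diameters = np.concatenate([
    np.random.normal(9.5, 2.9, 70),   # placeholder small-particle mode
    np.random.normal(19.0, 2.0, 30),  # placeholder large-particle mode
]).reshape(-1, 1)

gm = GaussianMixture(n_components=2, random_state=0).fit(diameters)
for mean, var, w in zip(gm.means_.ravel(), gm.covariances_.ravel(), gm.weights_):
    print(f"mode {mean:.1f} nm, sd {np.sqrt(var):.1f} nm, weight {w:.2f}")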
IsoLG is more reactive than other lipid aldehydes in adducting to lysine residues and cross-linking HDL apolipoproteins
To assess to what extent the structural features of IsoLG uniquely contributed to its effect on HDL, we compared it with two other related lipid aldehydes: 4-hydroxynonenal (HNE; a widely studied α,β-unsaturated aldehyde) and succinylaldehyde (a 1,4-dicarbonyl lacking the alkyl and carboxylate tails present in IsoLG) (Fig. 5A). We compared the ability of these three lipid aldehydes to modify lysine residues on HDL using o-phthalaldehyde (OPA) to detect available lysines. OPA also detects the headgroups of PEs, but these are in much lower abundance than lysyl residues. Significantly lower molar equivalents of IsoLG than of HNE or succinylaldehyde were required to modify HDL (Fig. 5B). For example, 10 eq of IsoLG modified ~50% of the available lysines of HDL, whereas 10 eq of succinylaldehyde modified only ~20% of available lysines, and 10 eq of HNE failed to modify HDL.
Furthermore, IsoLG cross-links HDL at 10–30 times lower concentrations than HNE and succinylaldehyde (Fig. 5C). Similar to its effects on purified HDL, the addition of IsoLG to plasma resulted in cross-linked apoA-I at a much lower concentration (10 μM) than other lipid aldehydes (0.6 mM for acrolein; 5 mM for HNE) (25) (Fig. 5D). These data demonstrate that IsoLG is far more potent than α,β-unsaturated aldehydes or nonsubstituted 1,4-dicarbonyls at modifying lysines and cross-linking HDL proteins.
IsoLG-modified HDLs have lower HDL-apoA-I exchange (HAE) and cholesterol efflux from macrophages
ApoA-I exchanges freely with HDL, but not with very low-density lipoprotein, LDL, or albumin. Reduced exchangeability of apoA-I on HDL is associated with atherosclerosis in animal models and in acute coronary syndrome patients (26). Oxidation of apoA-I by MPO also reduces the rate of HAE, concomitant with diminished cholesterol efflux capacity (26). Because MPO produced IsoLG (Fig. 3A), which heavily cross-linked HDL proteins such as apoA-I (Fig. 4A), we examined whether IsoLG modification altered the exchangeability of apoA-I from HDL using the method of Borja et al. (26). Whereas unmodified HDL had an HAE rate of 47.4 ± 1.6%, IsoLG dose-dependently decreased HAE, so that HDL exposed to 1 eq of IsoLG had an HAE rate of 28.9 ± 7.6% (p < 0.01) (Fig. 6A).
With the dramatic decrease in HAE in IsoLG-modified HDL, we speculated that HDL with IsoLG-modified apoA-I would be less efficient in mobilizing cholesterol from macrophages. Thus, we examined cholesterol efflux using apoE−/− macrophages (27). The use of apoE−/− macrophages, rather than WT macrophages, provides a strict assessment of the effect of IsoLG modification on HDL-dependent efflux, as the apoE endogenously produced by WT macrophages promotes some cholesterol efflux even in the absence of acceptors such as HDL or apoA-I (28-30). IsoLG modification dose-dependently decreased the ability of HDL to efflux cholesterol (Fig. 6B) at the same concentrations of IsoLG that cross-linked HDL apolipoproteins and decreased HAE. 30 μM PPM (100-fold excess) prevented the decrease in cholesterol efflux induced by 3 eq of IsoLG (Fig. 6B).
IsoLG modification and cross-linking of HDL can potentially affect proteins involved in promoting cholesterol efflux, such as lecithin:cholesterol acyltransferase (LCAT) (31). Immunoblotting of IsoLG-modified HDL did not reveal significant changes in the molecular weight of LCAT immunoreactive bands, suggesting that IsoLG does not cross-link LCAT (Fig. S3A). However, it may be that the anti-LCAT antibody does not recognize cross-linked proteins. We therefore measured LCAT activity and found that IsoLG modification of HDL dose-dependently decreases phospholipase A2 activity, but not LCAT activity (Fig. S3B). Thus, IsoLG modification of LCAT does not appear to contribute significantly to the changes in cholesterol efflux seen in our experiments.
To assess whether IsoLG modification primarily disrupted ABCA1-mediated cholesterol efflux to HDL, we employed probucol, an inhibitor of ABCA1-mediated cholesterol efflux that does not interfere with ABCG1- or SR-BI-mediated efflux (32). The efflux capacity of IsoLG-modified HDL was only 78.2 ± 9.4% that of unmodified HDL (Fig. 6C). Control HDL efflux capacity was reduced to 55.4 ± 10.9% of original capacity in the presence of probucol, and IsoLG modification of HDL did not further reduce efflux capacity (56.7 ± 11.4%) (Fig. 6C). This suggests that IsoLG modification of HDL predominantly inhibits the ABCA1-mediated pathway. This is further supported by experiments examining cholesterol efflux to lipid-poor apoA-I, which promotes macrophage cholesterol efflux mainly through ABCA1. IsoLG-modified apoA-I had significantly reduced cholesterol efflux capacity (31.3 ± 10.3%) compared with unmodified apoA-I (Fig. 6D). Probucol reduced efflux of unmodified apoA-I to 27.4 ± 9.8%, and IsoLG modification did not further reduce efflux (27.3 ± 10.9%) (Fig. 6D). Taken together, the results indicate that IsoLG modification predominantly impairs ABCA1-mediated cholesterol efflux rather than other pathways such as ABCG1 or SR-BI.
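The probucol experiment implicitly defines ABCA1-specific efflux as the probucol-sensitive fraction (total efflux minus efflux with ABCA1 blocked). A trivial sketch of that arithmetic, using the percentages reported above, is shown below; treating the difference this way is our reading of the design, not a formula stated by the authors.

def abca1_specific(total_pct, probucol_pct):
    # probucol-sensitive fraction = efflux attributable to ABCA1
    return total_pct - probucol_pct

print(abca1_specific(100.0, 55.4))  # unmodified HDL: ~44.6 points via ABCA1
print(abca1_specific(78.2, 56.7))   # IsoLG-modified HDL: ~21.5 points via ABCA1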
IsoLG-modified HDL induces a synergistic pro-inflammatory phenotype with LPS in macrophages
HDL serves various anti-inflammatory functions, including inhibiting the Toll-like receptor-induced pro-inflammatory cytokine response in macrophages at a transcriptional level (33). We therefore tested whether IsoLG would impair HDL's ability to prevent the LPS-induced inflammatory response in apoE−/− macrophages. ApoE−/− macrophages were used in these studies to allow direct comparison with the efflux studies of the concentrations of IsoLG that altered effects. Coincubation of unmodified HDL with LPS resulted in 85.5 ± 10.3, 54.9 ± 21.3, and 43.6 ± 20.1% inhibition of the LPS-induced expression of Tnf, Il-1β, and Il-6, respectively (Fig. 7A). Preincubation of macrophages with unmodified HDL, followed by removal of this HDL and subsequent exposure to LPS, did not result in inhibition of LPS-induced Tnf, Il-1β, and Il-6 expression, consistent with previous observations (34) and with the concept that concurrent interaction with macrophages is needed for HDL to inhibit these effects of LPS (Fig. S4A). Coincubation of LPS with HDL modified with 0.3 eq of IsoLG ablated its ability to prevent Tnf expression and even induced greater expression of Il-1β and Il-6 than LPS activation alone (365.8 ± 184 and 255.8 ± 90.9%, respectively) (Fig. 7A). This pro-inflammatory phenotype was not seen with HNE- or succinylaldehyde-modified HDL (Fig. 7B). These results show that IsoLG-modified HDL induces a pro-inflammatory phenotype at a concentration of IsoLG much lower than that needed to invoke cross-linking of HDL apolipoproteins. Inclusion of PPM completely eliminated the loss of HDL's inhibitory effect on LPS that is seen when HDL is incubated with IsoLG (Fig. 7C).
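The percentages above are consistent with normalizing each co-treatment to the LPS-only condition; a minimal sketch of that calculation (our assumption about how the percent inhibition was computed) follows.

def percent_inhibition(expr_with_hdl, expr_lps_alone=100.0):
    # expression normalized to the LPS-only condition; negative values
    # mean the co-treatment amplified expression beyond LPS alone
    return (1.0 - expr_with_hdl / expr_lps_alone) * 100.0

print(percent_inhibition(14.5))    # ~85.5% inhibition (cf. Tnf with native HDL)
print(percent_inhibition(365.8))   # ~-265.8%: Il-1b amplified by IsoLG-HDL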
The augmentation in cytokine release seen during co-incubation of LPS and IsoLG-HDL raised the possibility that IsoLG modification not only disrupted the ability of HDL to inhibit LPS signaling, but that IsoLG-HDL independently stimulated cytokine signaling. Previous studies found that MPO-oxidized HDL activates pro-inflammatory signaling in endothelial cells, with its effects largely attributed to the modified apoA-I protein (35), and that IsoLG-modified PEs induce pro-inflammatory signaling in macrophages in the absence of LPS (36). However, in the absence of LPS, IsoLG-modified HDL did not induce Tnf, Il-1β, or Il-6 expression (Fig. S4B). Furthermore, priming macrophages with IsoLG-modified HDL, followed by removal of the HDL and subsequent LPS stimulation, did not augment the cytokine response (Fig. S4A). Therefore, IsoLG-modified PEs in HDL are unlikely to be responsible for the pro-inflammatory effect of IsoLG-modified HDL, because IsoLG-modified PEs can induce inflammatory signaling even in the absence of LPS. Taken together, the data demonstrate that IsoLG-modified HDL is not only dysfunctional in preventing LPS-induced macrophage activation, but also synergizes with LPS to induce a more significant inflammatory response.
Discussion
Growing evidence supports the notion that modifications of HDL proteins play a major role in the pathogenesis of atherosclerosis (8, 37). Elevated IsoLG-protein adducts have been shown in atherosclerosis, but their formation on HDL and the resulting consequences had not been studied. In the present study, we demonstrate for the first time that IsoLG-protein adducts are elevated in HDL derived from patients with hypercholesterolemia compared with healthy controls, indicating that significant IsoLG adduct formation on HDL occurs in conditions that promote atherosclerosis. We also show that the cross-linking of HDL induced when MPO associates with HDL can be blocked by a dicarbonyl scavenger. Low concentrations of IsoLG can cross-link apoA-I, the major structural protein of HDL; generate an HDL subpopulation of larger size; impair HDL remodeling, cholesterol efflux, and HDL's anti-inflammatory function; and further augment the inflammatory response of activated macrophages. In addition, we demonstrate the potential of dicarbonyl scavengers such as PPM as an anti-atherosclerotic strategy to preserve the HDL particle and prevent HDL dysfunction.
MPO participates in direct protein oxidation, nitration, or chlorination as well as in initiating lipid peroxidation in vivo (38-40). Elevated levels of MPO are present in patients with angiographic evidence of CVD (41) and predict risk for myocardial infarction, revascularization, and cardiac death in subjects presenting with chest pain or acute coronary syndrome (42, 43). MPO binds to apoA-I and thus directly targets HDL within the human atheroma (44). We verified that IsoLG-protein adducts form in HDL upon MPO oxidation, which probably contributes to MPO-mediated cross-linking of HDL proteins. The fact that PPM can prevent IsoLG-protein adduct formation and protein cross-linking, as detected by our MS analyses and immunoblots, illustrates the contribution of IsoLG to MPO-mediated oxidation events within the atherosclerotic lesion.
In human plasma, HDL is a heterogeneous collection of particles ranging from 7 to 12 nm in diameter and from 1.063 to 1.21 g/ml in density. Mass spectrometry has identified up to 204 different proteins that associate with HDL (45). Approximately 70% of total HDL protein is apoA-I, a 28-kDa apolipoprotein associated with essentially every HDL particle. The second most abundant protein is apoA-II, which comprises 15–20% of total HDL protein but is not present in all HDL particles. ApoA-I and apoA-II are the scaffold proteins of HDL that primarily determine particle structure. We demonstrate that IsoLG at very low concentrations (0.3 molar eq to apoA-I) cross-links apoA-I to produce dimers and trimers, which can be prevented by PPM. Modification of HDL by >1 eq of IsoLG produces multimers of apoA-I/apoA-II and probably cross-links with other proteins in the HDL proteome.
Concomitant with protein cross-linking, IsoLG modification increased the size of HDL, from an average size range designated "medium" to "large" HDL (8.3–10.2 nm) to the appearance of "very large" HDL particles (10.3–13.5 nm). An increase in discoidal HDL particle diameter beyond 10 nm is associated with incorporation of more apoA-I molecules (46). The presence of apoA-II in discoidal apoA-I/A-II-containing HDL has been reported to alter the conformation of apoA-I in a site-specific manner (47), which could potentially hinder the remodeling of the HDL particles (48). It is likely that the extensive cross-linking of apoA-I and apoA-II by IsoLG caused apolipoprotein aggregation and therefore particle fusion. Cross-linking of apoA-II to apoA-I may also hinder HDL remodeling. These perturbations to HDL remodeling and the formation of very large particles have detrimental consequences, such as the inability to mobilize intracellular cholesterol depots (49) or to interact with macrophage ABCA1 (50) to promote cholesterol efflux.
When we measured the rate of HDL-apoA-I exchange, we found that IsoLG dose-dependently reduced the conformational adaptability of apoA-I and thus inhibited HDL remodeling. Decreases in HAE have previously been observed in atherosclerotic animal models as well as in human subjects with acute coronary syndrome and metabolic syndrome (26), type I diabetes (51), metabolic syndrome (52), sickle cell anemia (53), and HIV (54). The decrease in HAE appears to be linked to oxidative damage to HDL and is sometimes correlated with loss of other HDL functions, such as cholesterol efflux, because the ability of apoA-I to exchange between lipid-associated and lipid-free states is critical for efficient cholesterol efflux via ABCA1 (55).
We found that IsoLG modification of HDL dose-dependently decreased cholesterol efflux from cholesterol-loaded apoE−/− macrophages. The use of apoE−/− macrophages in our studies allowed us to strictly measure HDL-dependent efflux, because apoE endogenously produced by WT macrophages promotes ABCA1-dependent cholesterol efflux even in the absence of HDL (28-30). The concentration of IsoLG needed to cause a significant decrease in efflux correlated with apolipoprotein cross-linking as well as with the decrease in HDL-apoA-I exchange. These observations support the notion that cross-linking of HDL scaffold proteins alters their conformational adaptability, probably impairing the ability of lipid-free/poor apoA-I to exchange off HDL particles, which is required to elicit efflux of cholesterol via ABCA1 (56, 57). Indeed, the fact that probucol (which blocks ABCA1- but not ABCG1-mediated efflux) did not further inhibit efflux in the presence of IsoLG modification supports the notion that the main pathway of cholesterol efflux affected by IsoLG modification of HDL is via ABCA1. The association between HDL protein cross-linking by oxidative modifications and defects in cholesterol efflux has been reported previously in HDL exposed to copper (58, 59), modified by malondialdehyde (58), or exposed to MPO (44, 60, 61). However, not all endogenous cross-linkers of HDL proteins impair function: HDL apolipoproteins cross-linked by exposure to peroxidase-generated tyrosyl radicals appear to have an enhanced ability to facilitate cholesterol efflux (62), which is mediated by apoA-I/apoA-II heterodimers (63).
In addition to its cholesterol efflux functions, HDL also protects against infection and inflammation (64). One of the key defense functions of HDL is its ability to neutralize the toxic effects of LPS and other bacterial products, which in turn inhibits inflammatory responses in atherosclerosis (65-67). Presumably, modifications of apoA-I or HDL would result in a decrease in function, such as loss of the ability to bind LPS (68). We found that modification of HDL at concentrations insufficient to induce cross-linking was nevertheless sufficient to render it unable to protect against the LPS-stimulated inflammatory cytokine response in apoE−/− macrophages. Interestingly, expression of Il-1β and Il-6 was dramatically higher than with LPS induction alone, suggesting that IsoLG modification of HDL did not simply disrupt the neutralization ability of HDL but synergized with LPS to produce a greater pro-inflammatory phenotype. IsoLG has previously been shown to exert potent inflammatory effects, particularly in activating endothelial cells (69), macrophages (36), and dendritic cells (70). IsoLG-modified PE has recently been shown to induce pro-inflammatory signaling in macrophages in the absence of LPS (36). The observation that IsoLG-modified HDL did not induce an increase in pro-inflammatory cytokine expression in the absence of LPS suggests that IsoLG-modified PE in HDL was not responsible for the effect. However, IsoLG modification of individual HDL components using reconstituted HDL systems will be studied in the future.
The low level of IsoLG modification needed to promote HDL protein cross-linking, structural and morphological changes, and changes in HDL function (especially compared with other known reactive lipid aldehydes) demonstrates that minor lipid peroxidation events in atherosclerosis are sufficient to significantly reduce the levels of functional HDL particles. The ability of scavengers such as PPM to block modification of proteins by 1,4-dicarbonyls, including IsoLG, and thereby preserve the HDL particle demonstrates the therapeutic potential of these scavengers in the treatment of atherosclerosis.
Materials
Chemicals required for the synthesis of HNE and succinylaldehyde were purchased from Aldrich (Milwaukee, WI). Reagents for SDS-PAGE and immunoblotting were from Novex by Life Technologies (Carlsbad, CA). Materials used for cell culture were from Gibco by Life Technologies, Inc. OPA reagent was purchased from Thermo Scientific (Rockford, IL). [1,2-3H]cholesterol was purchased from PerkinElmer Life Sciences. ApoA-I mouse/human (5F4) monoclonal antibody was purchased from Cell Signaling Technology (Danvers, MA). ApoA-II human (EPR2913) monoclonal antibody was purchased from Abcam (Cambridge, MA). The RNeasy mini kit was purchased from Qiagen (Hilden, Germany). iQ SYBR Green Supermix and the iScript cDNA synthesis kit were purchased from Bio-Rad.
Plasma from FH patients and healthy controls
EDTA plasma was isolated from the blood of FH patients (n = 10), of whom eight had heterozygous FH and two had homozygous FH. The two homozygous FH patients and four of the heterozygous FH patients underwent regular LDL apheresis, and blood was collected before LDL apheresis. Control plasma was isolated from the blood of healthy volunteers (n = 7). The study was approved by the Vanderbilt University institutional review board, and all participants gave their written informed consent.
Animals
Breeding pairs of homozygous apoE−/− mice on a C57BL/6J background (strain 002052) were purchased from Jackson Laboratories (Bar Harbor, ME) at 12 weeks old and housed in the Vanderbilt University animal facility on a 12-h light/12-h dark cycle. The animals were maintained on standard rodent chow (LabDiet 5001) with free access to water. Progeny of the breeding pairs were at least 8 weeks of age before harvest of macrophages (described below). All procedures were approved by the Vanderbilt University institutional animal care and use committee.
Chemical synthesis of IsoLG, 4-HNE, and succinylaldehyde
15-E2-IsoLG was synthesized by organic synthesis as described previously (71). 15-E2-IsoLG is one of eight regioisomers potentially generated by peroxidation of arachidonic acid. The 15- and 5-series of IsoLGs are expected to form in greater abundance than the 8- or 12-series. 15-E2-IsoLG is also chemically identical to levuglandin E2 formed nonenzymatically from prostaglandin H2. For these reasons, 15-E2-IsoLG is the most widely used regioisomer of IsoLG for studies. 4-HNE was synthesized using the procedure of Gardner et al. (72). Both carbonyls were dissolved in DMSO, prepared as 10 mM stocks, and stored as small aliquots at −80°C until use. Fresh succinylaldehyde was synthesized before each experiment from 2,5-dimethyltetrahydrofuran, as described previously (73). Fresh working solutions were prepared before each assay and diluted in water to appropriate concentrations.
MPO oxidation of purified human HDL and measurement of IsoLG
HDL obtained from fasting healthy subjects was isolated by density gradient ultracentrifugation and dialyzed into PBS to eliminate residual Tris buffer or other primary amines that would react with the lipid aldehydes and/or their protein adducts. HDL was oxidized by MPO as described previously (23, 74). Briefly, HDL was incubated at 37°C in 50 mM sodium phosphate (pH 7.4), 200 μM diethylenetriaminepentaacetic acid, 57 nM MPO, 100 μg/ml glucose, 20 ng/ml glucose oxidase, and 0.05 mM NaNO2 overnight. Scavenger PPM and its inactive precursor PPO were synthesized as described (75) and solubilized in water. HDL was incubated for 30 min at 37°C before the addition of MPO. Quantitation of lysine modification of HDL by IsoLG was performed by first subjecting an aliquot of the preparation to proteolysis with Pronase and aminopeptidase M and then measuring the amount of IsoLG-lysyl-lactam (the most prominent species of IsoLG modification generated under these conditions) by stable isotope dilution LC/MS/MS as previously described (76).
Lipid aldehyde modification of HDL and the use of scavengers
HDL was exposed to various concentrations of lipid aldehydes at 37°C overnight to guarantee a complete reaction to form a stable end product. Control HDL was treated similarly in the absence of aldehydes. HDL preparations were diluted with DMEM for incubation with the macrophages. For experiments involving the use of scavengers, PPM and PPO solubilized in water were incubated with HDL for 30 min at 37°C before the addition of IsoLG.
Characterization of apolipoprotein cross-linking of modified HDL
HDL apolipoprotein cross-linking was assessed by SDS-PAGE performed under reducing conditions with Invitrogen's gel electrophoresis and transfer system. 4–20% Tris gradient gels were used. Western blotting analyses were carried out using polyclonal antibodies specific for human apoA-I and apoA-II.
Characterization of lysine adduction
OPA is a primary amine-reactive fluorescent detection reagent that is used to detect free lysines in HDL (77, 78). The procedure was performed according to the manufacturer's instructions (Thermo Scientific) using HDL modified by lipid aldehydes as described above and adapted to 96-well plates. The percentage of lysine adduction was calculated as (fluorescence of modified HDL / fluorescence of unmodified HDL) × 100.
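A minimal sketch of this ratio, with hypothetical plate readings (OPA reacts with free lysines, so heavily adducted HDL retains fewer reactive lysines and reads lower):

```python
def relative_opa_fluorescence(f_modified: float, f_unmodified: float) -> float:
    """(fluorescence of modified HDL / fluorescence of unmodified HDL) x 100."""
    return f_modified / f_unmodified * 100.0

# Hypothetical readings: a modified sample at 420 RFU against a 1000 RFU control
print(relative_opa_fluorescence(420.0, 1000.0))  # -> 42.0 (% of control signal)
```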
Measurement of HDL morphology and size
Negative stain preparations were prepared from suspensions of particles. The particles were adhered to Formvar/carbon-coated grids by floating the grids on top of a drop of the suspension for 45 s to 1 min. The grid was removed from the drop, and excess fluid was wicked away with filter paper. The particles were then negatively stained by floating the grid with particles on a drop of 1% phosphotungstic acid at pH 5.0 for 45 s. Excess stain was removed by wicking with filter paper. The negatively stained particles were imaged by EM using an FEI T-12 (ThermoFisher) electron microscope operated at 100 keV. For quantitation, 100 particles for each condition were arbitrarily chosen using an unbiased sampling scheme. The particles were chosen from at least three separate preparations for each condition. The diameters were measured from the two-dimensional images using an unbiased algorithm that arbitrarily selected a different angle for each measurement.
HDL-apoA-I exchange
HDL samples were prepared by adding 15 μl of 3 mg/ml spin-labeled apoA-I probe to 45 μl of 1 mg/ml HDL and drawn into an EPR-compatible capillary tube (VWR) (26). EPR measurements were performed using a Bruker eScan EPR spectrometer outfitted with a temperature controller (Noxygen). Samples were incubated for 15 min at 37°C and then scanned at 37°C. The peak amplitude of the nitroxide signal from the apoA-I probe in the sample (3462–3470 Gauss) was compared with the peak amplitude of a proprietary internal standard (3507–3515 Gauss) provided by Bruker. The internal standard is contained within the eScan spectrometer cavity and does not contact the sample. Because the y axis of the EPR spectrometer is measured in arbitrary units, measuring the sample against a fixed internal standard facilitates normalization of the response. HAE activity represents the sample/internal standard signal ratio at 37°C. The maximal percentage of HAE activity was calculated by comparing HAE activity with a standard curve ranging in the degree of probe lipid-associated signal. Experiments were repeated two times. All samples were read in triplicate and averaged.
Cell culture
Male and female apoE−/− mice (C57/BL genetic background) were injected intraperitoneally with 3% thioglycolate, and the macrophages were harvested by peritoneal lavage after 4 days. Cells were maintained in 24-well plates in DMEM with 10% (v/v) fetal bovine serum and penicillin-streptomycin at 100 units/ml and 100 μg/ml, respectively.
Cholesterol efflux
Efflux was assessed by the isotopic method (79). Loading medium was prepared to consist of DMEM containing 100 μg/ml acetylated LDL with 6 μCi of [3H]cholesterol/ml. After equilibration for 30 min at 37°C, loading medium was added to cells for 48 h. After 48 h, the cells were incubated for 1 h with DMEM containing 0.1% BSA so that surface-bound acetylated LDL was internalized and processed. Cells were washed and incubated with efflux medium, which contained DMEM with 35 μg/ml HDL samples. Experiments involving probucol followed the same procedure except that 10 μM probucol was added to the cells 1 h before treatment with HDL samples. After a 4-h incubation, supernatants were collected, vacuum-filtered, and prepared for β-scintillation counting.
A parametric level-set method for partially discrete tomography
This paper introduces a parametric level-set method for tomographic reconstruction of partially discrete images. Such images consist of a continuously varying background and an anomaly with a constant (known) grey-value. We represent the geometry of the anomaly using a level-set function, which we represent using radial basis functions. We pose the reconstruction problem as a bi-level optimization problem in terms of the background and coefficients for the level-set function. To constrain the background reconstruction we impose smoothness through Tikhonov regularization. The bi-level optimization problem is solved in an alternating fashion; in each iteration we first reconstruct the background and consequently update the level-set function. We test our method on numerical phantoms and show that we can successfully reconstruct the geometry of the anomaly, even from limited data. On these phantoms, our method outperforms Total Variation reconstruction, DART and P-DART.
Introduction
The need to reconstruct (quantitative) images of an object from tomographic measurements appears in many applications. At the heart of many of these applications is a projection model based on the Radon transform. Characterizing the object under investigation by a function u(x) with x ∈ D = [0, 1]², tomographic measurements are modeled as

p_i = \int_D u(\mathbf{x})\, \delta(\mathbf{x}\cdot\mathbf{n}(\theta_i) - s_i)\, \mathrm{d}\mathbf{x},

where s_i ∈ [0, 1] denotes the shift, θ_i ∈ [0, 2π) denotes the angle and n(θ) = (cos θ, sin θ). The goal is to retrieve u from a number, m, of such measurements for various shifts and directions.
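As a hedged illustration of this projection model, one crude discretization replaces the delta by a thin strip of one pixel width; the names and phantom below are illustrative, not from the paper:

```python
import numpy as np

def radon_sample(u, s, theta, width):
    """Approximate one line integral p(s, theta) of a pixel image u on [0,1]^2.

    Pixels whose centre projects onto n(theta) within width/2 of the shift s
    contribute; dividing by the strip width turns the strip area integral
    into an approximate line integral."""
    n = u.shape[0]
    xs = (np.arange(n) + 0.5) / n                    # pixel-centre coordinates
    X, Y = np.meshgrid(xs, xs, indexing="ij")
    proj = X * np.cos(theta) + Y * np.sin(theta)     # x . n(theta)
    mask = np.abs(proj - s) < width / 2              # thin strip around the ray
    return u[mask].sum() * (1.0 / n) ** 2 / width

u = np.zeros((64, 64)); u[20:40, 20:40] = 1.0        # a square phantom
# Vertical ray at s = 0.49 crosses the square over a length of 20/64 = 0.3125
print(radon_sample(u, s=0.49, theta=0.0, width=1.0 / 64))  # -> 0.3125
```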
If the shifts and angles are regularly sampled, the transform can be inverted directly by Filtered back-projection or Fourier reconstruction [9]. A common approach for dealing with non-regularly sampled or missing data is to express u in terms of a basis,

u(\mathbf{x}) = \sum_{j=1}^{n} u_j\, b(\mathbf{x} - \mathbf{x}_j),

where b are piece-wise polynomial basis functions and {x_j}_{j=1}^{n} is a regular (pixel) grid. This leads to a set of m linear equations in n unknowns, p = Wu, with W_{ij} = \int_D b(\mathbf{x} - \mathbf{x}_j)\,\delta(\mathbf{x}\cdot\mathbf{n}(\theta_i) - s_i)\,\mathrm{d}\mathbf{x}. Due to noise in the data or errors in the projection model the system of equations is inconsistent, so a solution may not exist. Furthermore, there may be many solutions that fit the observations equally well because the system is underdetermined. A standard approach to mitigate these issues is to formulate a regularized least-squares problem,

\min_u \tfrac{1}{2}\|Wu - p\|_2^2 + \lambda\|Ru\|_2^2,

where R is the regularization operator. Such a formulation is popular mainly because very efficient algorithms exist for solving it. Depending on the choice of R, however, this formulation forces the solution to have certain properties which may not reflect the truth. For example, setting R to be the discrete Laplace operator will produce a smooth reconstruction, whereas setting R to be the identity matrix forces the individual coefficients u_i to be small. In many applications such quadratic regularization terms do not reflect the characteristics of the object we are reconstructing. For example, if we expect u to be piecewise constant, we could use a Total Variation regularization term \|Ru\|_1, where R is a discrete gradient operator [14]. Recently, a lot of progress has been made in developing efficient algorithms for solving such non-smooth optimization problems [6]. If the object under investigation is known to consist of only two distinct materials, the regularization can be formulated in terms of a non-convex constraint u ∈ {u_0, u_1}^n. The latter leads to a combinatorial optimization problem, solutions to which can be approximated using heuristic algorithms [3].
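A minimal sketch of the quadratic regularized least-squares step, assuming a forward matrix W and data p are already available (the standard trick of stacking the scaled regularizer under W and calling an iterative solver; names are placeholders):

```python
import numpy as np
from scipy.sparse import csr_matrix, identity, vstack
from scipy.sparse.linalg import lsqr

def tikhonov_solve(W, p, lam):
    """Solve min_u ||W u - p||^2 + lam^2 ||u||^2 (R = identity) via LSQR
    on the stacked system [W; lam*I] u = [p; 0]."""
    n = W.shape[1]
    A = vstack([csr_matrix(W), lam * identity(n)])
    b = np.concatenate([p, np.zeros(n)])
    return lsqr(A, b, iter_lim=200)[0]
```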
In this paper, we consider tomographic reconstruction of partially discrete objects that consist of a region of constant density embedded in a continuously varying background. In this case, neither the quadratic, Total Variation nor non-convex constraints by themselves are suitable. We therefore propose the following parametrization,

u(\mathbf{x}) = u_0(\mathbf{x})\,[1 - \chi_\Omega(\mathbf{x})] + u_1\,\chi_\Omega(\mathbf{x}),

where χ_Ω denotes the indicator function of the anomaly Ω. The inverse problem now consists of finding u_0(x), u_1 and the set Ω. We can subsequently apply suitable regularization to u_0 separately. To formulate a tractable optimization algorithm, we represent the set Ω using a level-set function φ(x) such that

\Omega = \{\mathbf{x} : \phi(\mathbf{x}) > 0\}.

In the following sections, we discuss how to formulate a variational problem to reconstruct Ω and u_0 based on a parametric level-set representation of Ω and assuming we know u_1. The outline of the paper is as follows. In section 2 we discuss the parametric level-set method and propose some practical heuristics for choosing various parameters that occur in the formulation. A joint background-anomaly reconstruction algorithm for partially discrete tomography is discussed in section 3.
The results on a few moderately complicated numerical phantoms are presented in Section 4. We provide some concluding remarks in Section 5.
Level-set methods
In terms of the level-set function, we can express u as

u(\mathbf{x}) = u_0(\mathbf{x})\,[1 - h(\phi(\mathbf{x}))] + u_1\, h(\phi(\mathbf{x})),

where h is the Heaviside function and the latter term represents the anomaly.
Level-set methods have received much attention in geometric inverse problems, interface tracking, segmentation and shape optimization, the reason being their ability to handle topological changes. The classical level-set method, introduced by Sethian and Osher [12], solves the Hamilton–Jacobi equation, also known as the level-set equation,
\phi_t + v\,|\nabla\phi| = 0,

where φ : R² × R⁺ → R denotes the level-set function as a time-dependent quantity for representing the shape and v denotes the normal velocity. In the inverse-problems setting, the velocity v is often derived from the gradient of the cost function with respect to the model parameter [5], [7]. There are various numerical issues associated with the numerical solution of the level-set equation, e.g., reinitialization of the level-set function. We refer the interested reader to a seminal paper on the level-set method [11] and its application to computational tomography [10]. Instead of taking this classical level-set approach, we employ a parametric level-set approach, first introduced by Aghasi et al. [1]. In this method, the level-set function is parametrized using radial basis functions:

\phi(\mathbf{x}) = \sum_{j=1}^{n} \alpha_j\, \Psi(\beta_j\,\|\mathbf{x} - \boldsymbol{\chi}_j\|),

where Ψ(·) is a radial basis function, {α_j}_{j=1}^{n} and {χ_j}_{j=1}^{n} are the amplitudes and nodes respectively, and the parameters {β_j}_{j=1}^{n} control the widths. Introducing the kernel matrix A(χ, β) with elements

A_{ij} = \Psi(\beta_j\,\|\mathbf{x}_i - \boldsymbol{\chi}_j\|),

we can now express u as

u = u_0 \odot [1 - h(A(\chi,\beta)\,\alpha)] + u_1\, h(A(\chi,\beta)\,\alpha),

where h is applied element-wise to the vector A(χ, β)α and ⊙ denotes the element-wise (Hadamard) product. By choosing the parameters (χ, β, α) appropriately we can represent any (smooth) shape. To simplify matters and make the resulting optimization problem more tractable, we consider a fixed regular grid {χ_j}_{j=1}^{n} and a fixed width β_j ≡ β. In the following we choose β in accordance with the grid spacing ∆χ as β = 1/(η∆χ), where η determines the influence of an RBF on its neighbors.
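A minimal sketch of this parametrization (a Gaussian Ψ and a tanh Heaviside are stand-ins here; the paper uses compactly supported RBFs and its own smooth Heaviside):

```python
import numpy as np

def kernel_matrix(x, chi, beta, psi):
    """A[i, j] = psi(beta * ||x_i - chi_j||) for grid points x (m,2), nodes chi (n,2)."""
    d = np.linalg.norm(x[:, None, :] - chi[None, :, :], axis=-1)
    return psi(beta * d)

psi = lambda r: np.exp(-r ** 2)                 # Gaussian stand-in for Psi
h = lambda t: 0.5 * (1 + np.tanh(t / 0.1))      # generic smooth Heaviside

# 32x32 image grid, 8x8 RBF node grid, fixed width beta = 1/(eta * dchi)
g = (np.arange(32) + 0.5) / 32
x = np.stack(np.meshgrid(g, g, indexing="ij"), -1).reshape(-1, 2)
c = (np.arange(8) + 0.5) / 8
chi = np.stack(np.meshgrid(c, c, indexing="ij"), -1).reshape(-1, 2)
eta, dchi = 2.0, 1.0 / 8
A = kernel_matrix(x, chi, 1.0 / (eta * dchi), psi)

alpha = np.random.randn(chi.shape[0])           # level-set coefficients
u0, u1 = np.zeros(x.shape[0]), 1.0              # background and anomaly grey-values
u = u0 * (1 - h(A @ alpha)) + u1 * h(A @ alpha) # image assembled as above
```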
Example
To show that the level-set can be reconstructed with finitely many radial basis functions, we consider the level-set shown in Figure 1(a). With n = 196 RBFs, it is possible to reconstruct a smooth shape discretized on a grid with n = 256 × 256 pixels. Finally, the discretized reconstruction problem for determining the shape is now formulated as

\min_\alpha f(\alpha) = \tfrac{1}{2}\,\big\|W\big(u_0 \odot [1 - h_\epsilon(A\alpha)] + u_1\, h_\epsilon(A\alpha)\big) - p\big\|_2^2,

where h_ε is a smooth approximation of the Heaviside function. The gradient and Gauss-Newton Hessian of f(α) are given by

\nabla f(\alpha) = J^T r, \quad H(\alpha) = J^T J, \quad J = W\,\mathrm{diag}\big((u_1 - u_0) \odot h'_\epsilon(A\alpha)\big)\,A, \qquad (4)

where the diagonal matrix contains the element-wise derivative of the Heaviside approximation and r = Wu − p is the residual vector. Using a Gauss-Newton method, the level-set parameters are updated as

\alpha^{(k+1)} = \alpha^{(k)} - \mu_k\, H(\alpha^{(k)})^{-1}\,\nabla f(\alpha^{(k)}),

where μ_k is a suitable stepsize and α^(0) is a given initial estimate of the shape. From equation (4), it can be observed that the ability to update the level-set parameters depends on two main factors: 1) the difference between u_0 and u_1, and 2) the derivative of the Heaviside function. Hence, the support and smoothness of h_ε play a crucial role in the sensitivity. More details on the choice of h_ε are discussed in section 2.1.
Example
We demonstrate the parametric level-set method on a (binary) discrete tomography problem. We consider the model described in Figure 2(a). For a full-angle case (0 ≤ θ ≤ π) with a large number of samples, Figure 2(c) shows that it is possible to accurately reconstruct a complex shape.
Approximation to Heaviside function
The update of the level-set function primarily depends on the Heaviside function. Various approximations have been mentioned earlier [1]. These approximations suffer from the variation of the Dirac-delta function near its peak (δ|_{x=0}), which amplifies the gradient disproportionally. This sometimes results in poor updates for the level-set parameter α, ruining the reconstructions. To solve this issue, we propose a new formulation of the Heaviside function, constructed from the piecewise Dirac-delta function given in equation (5). This new approximation is plotted in Figure 3. The formulation provides three main benefits: 1) constant sensitivity in the boundary region, controlled by the parameter μ, 2) a smooth transition part, and 3) compact support.
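Since equation (5) itself is not reproduced here, the following is only an illustrative compactly supported delta having the three stated properties, not the authors' exact formula:

```python
import numpy as np

def delta_eps(t, eps=0.1, mu=0.5):
    """Illustrative compactly supported Dirac-delta approximation: a constant
    plateau of half-width mu*eps (uniform boundary sensitivity), a smooth
    cosine taper, and compact support |t| < eps. A stand-in, not eqn (5)."""
    a = np.abs(np.asarray(t, dtype=float))
    out = np.zeros_like(a)
    out[a <= mu * eps] = 1.0
    taper = (a > mu * eps) & (a < eps)
    out[taper] = 0.5 * (1 + np.cos(np.pi * (a[taper] - mu * eps) / (eps - mu * eps)))
    return out / (eps * (1 + mu))   # plateau + taper area equals eps*(1+mu)

# The matching smooth Heaviside is the running integral of delta_eps:
t = np.linspace(-0.2, 0.2, 401)
h = np.cumsum(delta_eps(t)) * (t[1] - t[0])   # rises from ~0 to ~1 across the support
```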
Definition 2.1. In accordance with the compact approximation of the Heaviside function with width ε, the level-set boundary, denoted by ∂Ω, is defined as the set of all points x ∈ R² satisfying the condition h′_ε(φ(x)) > 0. Proof. From a Taylor series expansion of φ(x) near the level-set point x_0, we get h′_ε(φ(x)) > 0 if and only if |φ(x)| < ε. Neglecting higher-order terms, we get |(x − x_0)^T ∇φ(x_0)| ≤ ε. This implies the above relation.
From Lemma 2.1, it is important to choose the Heaviside width ε in such a way that the level-set boundary exists on the model grid. For simplicity, we crudely approximate the gradient of the level-set function using upper and lower bounds [8]. The Heaviside width ε is then tied to these gradient bounds and the grid spacing, where κ controls the number of gridpoints a level-set boundary can span. This formulation of ε resolves the re-initialization issue associated with the level-set method. Steepness of the level-set function near the boundary (|∇φ(x)| ≫ 1) can be handled by this formulation as well, as it adapts the level-set boundary to global changes in the level-set function.
Joint reconstruction algorithm
Reconstructing both the shape and the background can be cast as a bi-level optimization problem,

\min_{\alpha,\, u_0}\; f(\alpha, u_0) = \tfrac{1}{2}\,\big\|W\big(u_0 \odot [1 - h_\epsilon(A\alpha)] + u_1\, h_\epsilon(A\alpha)\big) - p\big\|_2^2 + \lambda\,\|L u_0\|_2^2, \qquad (7)

where L is of the form [L_x^T\; L_y^T]^T, and L_x and L_y are the second-order finite-difference operators in the x and y directions respectively. This optimization problem is separable; it is quadratic in u_0 and non-linear in α. In order to exploit the fact that the problem has a closed-form solution in u_0 for each α, we introduce a reduced objective

\bar{f}(\alpha) = f(\alpha, \bar{u}_0(\alpha)), \qquad \bar{u}_0(\alpha) = \arg\min_{u_0} f(\alpha, u_0).

The gradient and Hessian of this reduced objective are those of f with respect to α evaluated at \bar{u}_0(\alpha) [2]. Using a modified Gauss-Newton algorithm to find a minimizer of \bar{f} leads to the following alternating algorithm,

u_0^{(k)} = \arg\min_{u_0} f(\alpha^{(k)}, u_0), \qquad \alpha^{(k+1)} = \alpha^{(k)} - \mu_k\, H^{-1}\,\nabla_\alpha f(\alpha^{(k)}, u_0^{(k)}),

where the expressions for the gradient and Gauss-Newton Hessian are given by (4). Convergence of this alternating approach to a local minimum of (7) is guaranteed as long as the step-length satisfies the strong Wolfe conditions [16]. The reconstruction algorithm based on this iterative scheme is presented in Algorithm 1.
We use the LSQR method in step 3, with a pre-defined maximum number of iterations and a tolerance value. A trust-region method is applied to compute α^(k+1) in step 4, restricting the inner conjugate-gradient loop to only 10 iterations.
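A sketch of the alternating variable-projection loop under the assumptions above; the names W, p, A, L, u1, h are placeholders for the discretization, and where the authors use a trust-region step for α we approximate with a few L-BFGS-B iterations:

```python
import numpy as np
from scipy.sparse.linalg import lsqr
from scipy.optimize import minimize

def alternate(W, p, A, L, u1, h, alpha0, lam, outer_iters=20):
    """Alternating scheme: linear Tikhonov solve for u0 (LSQR), then a few
    quasi-Newton steps on alpha. All inputs are dense NumPy arrays."""
    alpha = alpha0.copy()
    for _ in range(outer_iters):
        Hphi = h(A @ alpha)
        # Background step:
        # min_u0 ||W diag(1-Hphi) u0 - (p - W(u1*Hphi))||^2 + lam^2 ||L u0||^2
        Waug = np.vstack([W * (1 - Hphi)[None, :], lam * L])
        baug = np.concatenate([p - W @ (u1 * Hphi), np.zeros(L.shape[0])])
        u0 = lsqr(Waug, baug, iter_lim=200)[0]
        # Shape step: reduced objective in alpha (gradient by finite differences)
        f = lambda a: 0.5 * np.sum(
            (W @ (u0 * (1 - h(A @ a)) + u1 * h(A @ a)) - p) ** 2)
        alpha = minimize(f, alpha, method="L-BFGS-B", options={"maxiter": 10}).x
    return alpha, u0
```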
Numerical Experiments
The numerical experiments are performed on the 4 phantoms shown in Figure 4. Each phantom has an anomaly with a constant grey-value of 1. For the first two phantoms, the background varies from 0 to 0.5, while for the next two, it varies from 0 to 0.8. In order to avoid an inverse crime, the data is generated using a line kernel, while the forward model uses a Joseph kernel. We use the ASTRA toolbox to compute the forward and backward projections [4]. First, we show the results on noiseless full-view data, and later we compare various methods to the proposed method in the limited-data case with additive Gaussian noise of 10 dB SNR. For the parametric level-set method, we use compactly supported radial basis functions. RBF nodes are placed on a rectangular grid with a grid spacing 5 times the computational (model) grid spacing. The grid extends two points beyond the model grid to compensate for boundary effects. The Heaviside width parameter κ is set to 0.01 and its inclination parameter μ is set to 0.1.
The level-set parameter α is optimized using the fminunc function (trust-region algorithm) in MATLAB. A total of 50 iterations are performed for estimating α, while 200 LSQR iterations are performed for estimating u_0(x) at each step.
Regularization parameter selection
The reconstruction with the proposed algorithm is influenced by the Tikhonov regularization parameter. In general, there are various strategies to choose this parameter, e.g., [15]. As our problem formulation involves a non-linearity in the level-set parameter, the applicability of such strategies is unclear. Instead, we analyze various residuals, introduced below, with respect to the regularization parameter.
We define three measures (all in the least-squares sense) to quantify the residuals: 1) the data residual (DR), which measures the fit between the true and reconstructed data; 2) the model residual (MR), which measures the fit between the reconstructed and true model; and 3) the shape residual (SR), which measures the fit between the reconstructed and true anomaly shape. In practice, only the data residual is available for determining the regularization parameter λ. From Figure 5, it is evident that there exists a sizeable region of λ for which the reconstruction stays almost constant. This region is easily identifiable from the data-residual plot for various λ.
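A sketch of the three measures, assuming the reconstructions and binary shape masks are available as flattened arrays (names are placeholders):

```python
import numpy as np

def residuals(W, u_rec, u_true, p, shape_rec, shape_true):
    """Least-squares residuals used to assess the regularization parameter."""
    dr = np.linalg.norm(W @ u_rec - p)        # data residual: observable in practice
    mr = np.linalg.norm(u_rec - u_true)       # model residual: needs ground truth
    sr = np.linalg.norm(shape_rec.astype(float) - shape_true.astype(float))
    return dr, mr, sr                         # sr: shape residual on binary masks
```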
Benchmark test
For the full-view (benchmark) case, the projection data is generated on a 256 × 256 grid with 256 detectors and 180 projections with 0 ≤ θ ≤ π. The noise is assumed to be zero in this case. The results on the phantoms with the full-view data are shown in Figure 6. Anomaly geometries in all of these models are reconstructed almost perfectly with the proposed method, although the background has been smoothed out by the Tikhonov regularization.
Limited-angle test
In this case, we use only 5 projections with θ restricted to the range 0 to 2π/3. The data is thus reduced to almost 3% of the benchmark test. We also add Gaussian noise of 10 dB SNR to this synthetic data. To check the performance of the proposed method, we compare it to the Total Variation method [4], DART [3] and its modified version for partially discrete tomography, P-DART [13]. A total of 200 iterations were performed with the regularization parameter determined from the shape-residual curve. In DART, the background was modeled using 20 discrete grey-values between its bounds for models A and B, and 30 discrete grey-values for models C and D; 40 DART iterations were performed in each case. For P-DART, a total of 150 iterations were performed. The results on the noisy limited-angle, limited-data case are presented in Figure 7. The proposed method is able to capture most of the fine details (evident from the shape residual) in the phantoms even with very limited data and moderate noise. The P-DART method achieves the lowest data residual in all cases, but fails to capture the complete geometry of the anomaly.
Conclusions and Discussion
We discussed a parametric level-set method for partially discrete tomography. We model such objects as a constant-valued shape embedded in a continuously varying background. The shape is represented using a level-set function, which in turn is represented using radial basis functions. The reconstruction problem is posed as a bi-level optimization problem for the background and level-set parameters. This reconstruction problem can be efficiently solved using a variable projection approach, where the shape is iteratively updated. Each iteration requires a full reconstruction of the background. The algorithm includes some practical heuristics for choosing various parameters that are introduced as part of the parametric level-set method. Numerical experiments on a few numerical phantoms show that the proposed approach can outperform other popular methods for (partially) discrete tomography in terms of reconstruction error. As the proposed algorithm requires repeated full reconstructions, future research is directed at making the method more efficient.
Olfactory and Gustatory Dysfunctions in COVID-19 Patients: From a Different Perspective
Purpose: The prevalence of sensory disorders (smell and/or taste) in affected patients has shown a high variability of 5% to 98% during the COVID-19 outbreak, depending on the methodology, country, and study. Loss of smell and taste occurring in COVID-19 cases are now recognized by the international scientific community as being among the main symptoms of the disease. This study investigates loss of smell and taste in outpatients and hospitalized patients with laboratory-confirmed COVID-19 infection. Methods: Enrolled in the study were patients with a positive PCR test for COVID-19. Excluded were patients with chronic rhinosinusitis, nasal polyposis, common cold, influenza, and olfactory/gustatory dysfunction predating the pandemic. Patients were asked about changes in their sense of smell and taste by structured questionnaire. Their status was classified according to severity of the symptoms. Results: A total of 217 patients were included in the study, of whom 129 received outpatient treatment, whereas 88 were hospitalized; mean age was 41.74 years (range 18–76), and 59.4% were male. At evaluation for olfactory dysfunction, 53.9% of the patients were found to be normal, whereas 33.2% were anosmic. No gustatory dysfunction was found in 49.8% of patients, whereas in those with loss of taste, the most commonly recorded symptom was ageusia. Anosmia was significantly more common in outpatients (P = 0.038). Presentation of chemosensory symptoms in women was higher than in men (P = 0.009). No correlation was found between olfactory and gustatory dysfunction and age (P = 0.178). Conclusions: About one-half of our patients presented olfactory and/or gustatory deficits, and loss of smell was more common in mild cases. It should be considered that a sudden, severe, and isolated loss of smell and/or taste may also be present in COVID-19 patients who are otherwise asymptomatic. We suggest that identification of persons with these signs and early isolation could prevent spread of the disease in the community.
Coronavirus disease 2019, caused by the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), emerged in December 2019 in the Chinese city of Wuhan and spread worldwide, causing a pandemic. 1 The US Centers for Disease Control list new loss of smell or taste among the main symptoms of COVID-19. 2 A loss of smell developing after upper respiratory tract infections (like the common cold or influenza) is seen in 11% to 40% of cases. 3 The coronavirus causing severe acute respiratory syndrome (SARS) is known to be neurotropic. 4 The neuroinvasive potential of SARS-CoV-2 is assumed to be partly responsible for the symptoms of respiratory failure in COVID-19 patients. 5 Loss of smell related to neuropathy with acute onset after SARS infection and a duration of 2 years has been reported. 6 As the target cells for the virus are located in the lower respiratory tract, COVID-19 patients show fewer upper respiratory tract symptoms. 7 In infections with the highly contagious SARS-CoV-2, olfactory and gustatory dysfunctions emerge as the first signs of the disease, which explains their great importance from the angle of disease control. Several publications demonstrate that SARS-CoV-2 may cause loss of smell and taste, a symptom of disease onset that develops suddenly during the course of the illness. 5,8-15 Rates of patients' self-reported subjective smell and taste loss range from 40% to 80%. A meta-analysis of 19 studies with a total of 10,818 patients calculated that 8823 patients had ageusia (81.6%; range 5.6%-88%) and 8088 had anosmia (74.8%; range 5.1%-85.6%). 16 Both findings occurred jointly in 85% of patients. 17 In addition, cases without fever and cough, with loss of smell or taste as the only complaint, have also been reported. 18,19 These symptoms can be considered as the first signs of the infection. 16 Objective smell tests found that 98% of COVID-19 inpatients participating in the study showed some olfactory dysfunction, whereas 25% of the sample suffered anosmia. 11 Another objective test used for chemosensory assessment showed that 73.6% of 72 patients exhibited olfactory or gustatory dysfunction; most of them were found to be hyposmic, and in one third, gustatory dysfunction was found. 20 However, very few data are available regarding the effect of disease severity on those findings. In a more seriously affected patient group in intensive care, loss of smell and taste amounted to 19%. 21 A study comparing the frequency of anosmia between hospitalized patients and outpatients found a higher rate of anosmia in the outpatients (26.9% versus 66.7%, P < 0.001). 10 However, in a study by Mao et al. 14 with a rate of smell or taste impairment of 5% in patients with peripheral nervous system (PNS) manifestations, no significant difference was found between mild and severe infections.
In our study, we aimed to determine the occurrence and duration of smell and taste loss among COVID-19 patients and to investigate whether a difference exists between outpatients and inpatients.
MATERIALS AND METHODS
Between April and July 2020, we evaluated consecutive adult patients examined in our COVID-19 outpatient clinic with diagnoses of mild to moderate COVID-19 who either went on to receive outpatient treatment or were hospitalized subsequently. Thus, we mainly included mild-to-moderate COVID-19 patients, defined as patients without need of intensive care. Reported COVID-19 cases were classified as mild to moderate (i.e., nonpneumonia and mild pneumonia). Enrolled in the study were patients fulfilling the following inclusion criteria: being adult (≥18 years of age); having a laboratory-confirmed COVID-19 infection (nasopharyngeal swab, reverse transcription polymerase chain reaction, RT-PCR); being clinically able to answer survey questions; not having suffered from olfactory or gustatory dysfunction before the pandemic; and not suffering from chronic rhinosinusitis, nasal polyps, common cold, or influenza. At the time of evaluation by the ENT specialist, the otolaryngological examinations of all patients included in the study were normal. Those with abnormal findings in their ENT examination were excluded from the study.
Evaluation of olfactory and gustatory function was carried out face-to-face twice at different times based on patients' self-reports (in the first week of clinical disease symptoms and 21 days after the end of treatment). Patients answered the comparison question "How would you evaluate your ability to identify odors or tastes compared to the non-COVID period of your life?". Patients not reporting any change in their sense of smell or taste in either of the interviews were grouped as normosmic-normogeusic. Those with changes in smell were classed as follows: some difficulty smelling (mild hyposmia); considerable difficulty smelling (moderate hyposmia); cannot smell anything (anosmia); perceiving smells differently (parosmia). Those with changes in taste were classed as follows: some difficulty tasting (mild hypogeusia); considerable difficulty tasting (moderate hypogeusia); cannot taste anything (ageusia); perceiving tastes differently (parageusia).
Statistical Analysis
Data were analyzed using IBM SPSS Statistics for Windows, Version 24.0 (Armonk, NY: IBM Corp.). In descriptive data analysis, frequency, percentage, mean, standard deviation, and minimum/maximum values were used. Categorical variables were analyzed with chi-square and advanced chi-square tests. A value of P < 0.05 was considered statistically significant.
RESULTS
The most commonly reported complaints were cough, fever, and myalgia/fatigue (in this order); the distribution of the main complaints is presented in Figure 1A. About half the patients (53%) showed chemosensory symptoms (olfactory or gustatory dysfunction). Around one third of the patients were anosmic or ageusic (33.2% and 33.1%, respectively). Rates of mild-to-moderate hyposmia and parosmia were far lower, and a relatively low fraction of patients had mild-to-moderate symptoms of hypogeusia and parageusia. The detailed distribution of patients' olfactory and gustatory dysfunction is presented in Figure 1B and C.
This study found no statistically significant correlation between the presence of chemosensory symptoms and patients' complaints or severity of thorax computed tomography findings (P > 0.05 each). On the other hand, the presence of chemosensory symptoms in women was higher than in men (P = 0.009). Of all study participants, 47% had neither gustatory nor olfactory dysfunction, whereas 43.32% had both.
A further analysis of olfactory dysfunction comparing outpatients and hospitalized patients determined a significantly greater frequency of anosmia in outpatients (P = 0.038).
Although no statistically significant correlation was observed between chemosensory impairment and other clinical characteristics, both gustatory and olfactory dysfunction were higher in women than in men (P = 0.023). There was no correlation between olfactory and gustatory dysfunction and age (P = 0.178).
Clinically, the median onset time for chemosensory symptoms was 3 (1–13) days and the median recovery time 13 days. At the 2-month follow-up, sensory impairment continued in 5 patients. Onset and recovery times for chemosensory dysfunction are presented in Figure 1D and E. The study found no statistically significant difference between hospitalized patients and outpatients for the presence of gustatory dysfunction symptoms (P = 0.105 and P = 0.513, respectively).
DISCUSSION
Our study qualitatively evaluated olfactory and gustatory functions in patients diagnosed with SARS-CoV-2 infection, comparing outpatient and hospitalized patient groups, and investigated the recovery of these functions during follow-up post-treatment. The international scientific community has quickly included newly occurring anosmia or ageusia, accumulating in case reports and studies all over the world, among the main symptoms of COVID-19 infection. As contagion may occur early in the course of the infection, recognizing these initial symptoms could help identify SARS-CoV-2 infection in its initial stages. 22 According to our study results, anosmia and ageusia were present in around half the members of the COVID-19 group. Although anosmia was a significantly more common symptom in milder cases of COVID-19 in outpatients, no difference between the 2 groups was found for gustatory dysfunction.
In many viral infections of the upper respiratory tract, smell and taste dysfunctions occur with mucosal congestion; by contrast, only very few COVID-19 patients exhibit mucosal congestion. 8-14,23 The pathophysiological mechanisms causing impairment of smell and taste in COVID-19 are still unknown. The pathogenesis of these dysfunctions is being explained with hypotheses generated on the basis of studies with other coronaviruses. 15 Although anosmia in COVID-19 may be caused directly by invasion of the olfactory pathway, it is thought that the accompanying ageusia may be related to diminished retronasal olfaction rather than direct impact on the taste receptors, as objective taste tests have confirmed. 24,25 In cases of olfactory impairment, damage occurs at the level of chemosensory receptors. 26,27 Cellular destruction in the olfactory neuroepithelium may lead to inflammatory changes that in turn damage neuronal function. Subsequently, olfactory receptor neurons may be damaged and neurogenesis disrupted. 24 The penetration of SARS-CoV-2 via the olfactory bulb is thought to be facilitated by the copious expression of the ACE-2 receptor in the epithelial cells of the oro-nasal cavity that is needed for the virus to be able to enter the cell. 28 Viral pathogens infecting the nasal epithelium can be transported to the central nervous system (CNS) by retrograde axonal transport along the olfactory pathway. 29 Giacomelli et al. were able to interview 59 out of 88 hospitalized patients with COVID-19; in 20 of them (33.9%), at least one of the chemosensory modalities (smell or taste) was affected, and in 11 (18.6%) both. 30 Females experienced these complaints more often. A case-control study employing quantitative smell testing found that almost everybody who had come in contact with COVID-19, irrespective of severe nasal congestion or inflammation, exhibited loss of smell. 11 Patients with olfactory or gustatory dysfunction can be asymptomatically contagious. In our patient group, about half of the individuals (53%) displayed chemosensory symptoms. Although 53.9% were normosmic, 33.2% were anosmic. Almost half of the 217 patients (49.8%) were evaluated as normogeusic, and 33.1% as ageusic.
In the study by Mao et al. 14 with 214 inpatients, 36.4% showed neurologic symptoms: symptoms of the CNS were found in 24.8%, of the PNS in 8.9%, and of the skeletal muscles in 10.7%. The most common complaints in patients with CNS symptoms were dizziness (16.8%) and headache (13.1%), whereas the most frequently seen complaints with PNS symptoms were hypogeusia (5.6%) and hyposmia (5.1%). In another study with 59 COVID-19 patients, the rates of olfactory and gustatory dysfunction were 68% and 71%, respectively. This rate was found to be significantly lower in 203 COVID-19-negative patients with flu-like complaints, at 16% for smell and 17% for taste. In the latter study, olfactory and gustatory dysfunction showed a significant and strong correlation with testing positive for COVID-19, and a correlation between sore throat and COVID-negativity was also found. 12 Although the outpatients participating in our study mainly presented with complaints like fatigue/myalgia and headache, hospitalized patients mainly displayed more severe symptoms like cough, fever, and dyspnea. According to our results, there was no significant correlation between clinical complaints and chemosensory symptoms. Compared with hospitalized patients, anosmia was significantly more common in outpatients. No difference between the groups could be seen regarding ageusia. Although the pathophysiological mechanisms causing olfactory and gustatory dysfunction in COVID-19 are not clear, severity of the disease and olfactory dysfunction appear to be inversely correlated.
In our patient group, gustatory as well as olfactory impairment was higher in women than in men. No correlation was found between smell and taste impairment and age. Although some studies reported female sex and young age as being risk factors for chemosensory loss, others did not find any difference. 8,11,31 The onset time for chemosensory symptoms in our patients was on average 3 days, and the mean recovery time, in accordance with the literature, was 2 weeks. 8,20 Furthermore, olfactory and gustatory dysfunction could begin before the onset of general symptoms as well as simultaneously or subsequently. In 5 of our patients, chemosensory dysfunction is still ongoing; on detailed inquiry, these patients reported recovery of only up to 10%. At present, it is not possible to know whether SARS-CoV-2 infections may cause permanent olfactory impairment. 22 When olfactory dysfunction resolves spontaneously, specific treatment is not needed. However, if the condition continues beyond 2 weeks, therapy is to be considered. 31 The effectiveness of available treatment for loss of smell and taste in COVID-19 patients is not known. However, systemic and nasal drugs used to treat postinfectious olfactory dysfunction might potentially be beneficial for COVID-19 as well. Olfactory training should be recommended in COVID-19 patients who experience loss of smell and have not yet recovered, this being strongly recommended after 1 month from dysfunction onset. 35 Another aspect to be discussed is the risk, should SARS-CoV-2 become a latent virus in the nervous system, that patients might be affected by neurological diseases in the long term after therapy. 32,33 A high degree of vigilance should be kept for the hypothetical role of this virus in neurodegenerative processes in recovered COVID-19 patients. Therefore, long-term follow-up of these patients is of great importance, particularly for those with persisting hyposmia. 34 A definitive diagnosis should not be made based on subjective assessment of chemosensory function. 24 Since the virus can reach surfaces in the form of an aerosol, infection via surfaces should be considered. 36 Due to a high risk of contagion and technical limitations, unfortunately, we were unable to perform an objective olfactory function test on patients. Vaira et al. showed that, in quarantined patients, olfactory and gustatory evaluation by self-administered test can be considered a valid tool, fundamental for remotely obtaining qualitative and quantitative data on the extent of chemosensory disorders. 37 This pandemic has created a need and opportunity for telemedicine. Video visits may be preferable to telephone visits because they allow for better interaction and data gathering. Virtual visits have been widely accepted by patients and represent a key component of providing timely and safe health care during this pandemic. 38 Telemedicine not only can be valuable for patient monitoring of SARS-CoV-2 infection, but may also be a helpful tool for ongoing COVID-19 olfaction research. 39,40 To conclude, olfactory and gustatory dysfunctions are common symptoms in COVID-19 patients not displaying signs of nasal congestion. Newly developing anosmia or ageusia have been recognized by the international scientific community as important symptoms of COVID-19. Early isolation and testing of persons showing these symptoms can help prevent further contagion.
COVID-19-related anosmia is a new definition in medicine. Half of COVID-19 patients present with anosmia, but its pathogenesis is not well understood. In many patients, anosmia is associated with dysgeusia. These symptoms may be considered as the first indication of the infection. In the absence of other respiratory disorders, such as allergic or acute rhinitis or chronic rhinosinusitis, anosmia, hyposmia, and dysgeusia should alert doctors to the possibility of COVID-19 infection and warrant serious consideration of self-isolation and testing of these individuals. Recovery of olfaction and taste after the infection lasts less than 28 days.
Finally, understanding the sensorineural mechanisms of smell and taste loss in coronavirus infection might open new perspectives on viral pathogenesis.
Two Schiff base iodide compounds as iodide ion conductors showing high conductivity
Here, we reported the crystal structures, dielectric and conducting properties of two Schiff base iodide compounds, [m-BrBz-1-APy]I3 (1) and [o-FBz-1-APy]I3 (2). The Schiff base cations build irregular channel frameworks, and the polyiodide anions are located in the channels. The impedance spectra demonstrated that the two compounds show intrinsic iodide ion conductance, with a higher conductivity of 1.03(4) × 10−4 S cm−1 at 343 K for 1 and 4.94(3) × 10−3 S cm−1 at 353 K for 2. The dielectric modulus analysis further confirmed that the conductance arises from the migration of iodide ions.
Liquid electrolytes exhibit good ion-transport characteristics and high power conversion efficiency. For example, the power conversion efficiency is 11.3% in I−/I3− electrolytes 1 and 12.3% in cobalt-based electrolytes. 2 However, liquid electrolytes have some disadvantages compared to solid electrolytes: it is difficult to achieve hermetic sealing and build flexible devices, which limits their commercialization. 3 From a dye-sensitized solar cell (DSSC) standpoint, high ionic conductivity, which allows fast charge transport from anode to cathode, and low cost of the solid-state electrolytes are required. 4-6 Solid-state electrolytes show a wide range of practical applications in batteries and fuel cells. 7 Ionic polymers, such as Nafion, are typical solid-state electrolytes, but their ion conduction decreases as the temperature increases, which can be attributed to small volatile molecules evaporating out of the electrolyte matrix. 8,9 In contrast, a series of solid-state electrolytes based on ionic conductors have shown high ionic conductivity without some of the disadvantages of polymer electrolyte conductors. 10-12 One of the classical conduction mechanisms within ionic conductors is "ion hopping", whereby open vacancies in the crystalline lattice promote ion hopping. Thus, some compounds composed of larger cations and smaller anions, and vice versa, have been reported, where enough space and channels are formed for the "ion hopping" of the smaller counterions. 13,14 A typical class of electrolytes is organic ionic plastic crystals, which exhibit long-range crystalline order and short-range crystalline disorder. 15,16 The disorder involved is typically associated with rotational or orientational changes in molecules or ions. As a consequence of this disorder, not only fast-ion transport, such as of Li+, 17 H+, 18 or I−/I3−, 19 but also plastic mechanical properties are conferred. Both of these properties are highly favorable for solid-state electrolytes. Among those solid-state electrolytes, imidazolium iodides are usually reported as promising candidates for application in solid-state DSSCs. 20-23 However, their conductivity is low without any additives. A useful method for improving conductivity and photovoltaic performance is the doping of iodine or iodide compounds into ionic conductors. 24 Unfortunately, the doping of iodine leads to some disadvantages, such as charge recombination and incident light filtering. 25 Furthermore, multiple components may influence long-term stability because of phase separation. It is therefore very important to synthesize single-component solid-state electrolytes with high conductivity for solid-state DSSCs. 26 In our previous study, a series of 1-aminopyridinium-based derivative ionic liquids was reported, where the cations were larger in size compared to imidazolium. Herein, we explored a type of single-component solid-state electrolyte, [m-BrBz-1-APy]I3 (1) and [o-FBz-1-APy]I3 (2) (Scheme 1), for DSSCs. The two compounds show a higher conductivity of 1.03(4) × 10−4 S cm−1 at 343 K for 1 and 4.94(3) × 10−3 S cm−1 at 353 K for 2.
Compound 1 was prepared using 2-fluorobenzaldehyde with 1-aminopyridinium iodide and iodine in ethanol. The solution was slowly evaporated, and red block crystals of 1 were obtained after 7 days (see ESI†). The synthesis method is similar to our previously reported study. 27 A similar procedure was used for the preparation of compound 2.
1 crystallizes in a triclinic system with space group P1̄. An asymmetric unit contains two formula units (Fig. 1a). Parallel to the (1 0 0) plane, an irregular channel framework is constructed from anti-parallel arranged Schiff base cations, with a channel size of about 5.3 × 6.2 Å² considering the van der Waals radii; the polyiodide anions are located in the channel (Fig. 1b).
2 crystallizes in the monoclinic space group C2/c. Its asymmetric unit contains two halves of the I3− anion together with one cation (Fig. 1c).
The conductivity of 1 was measured by alternating-current impedance spectroscopy from 273 to 353 K under dry N2, and the Nyquist plots are shown in Fig. 2a, b, and S1.† In the temperature range of 273–293 K, no typical semicircle was observed for 1 in the Z′−Z″ plot (Fig. S1†), indicating that the conductivity of 1 is low. Above this temperature, conductivity tends to increase, revealing the existence of a thermally activated conduction mechanism. The fitting model for 1 at the selected temperatures comprised a series of two RC circuits, in which the capacitances were replaced by constant phase elements (CPE). This allowed us to tentatively evaluate the conductivity of the sample as calculated from the value of Z′ in the low-frequency end of the semicircle, where Z″ supposedly reaches the abscissa axis. As shown in Fig. 2a and b, the radius of the semicircle decreases, corresponding to an increase in conductivity, as the temperature increases. The best fit gave a conductivity of 1.53(5) × 10−7 S cm−1 at 293 K, reaching 1.03(4) × 10−4 S cm−1 at 343 K (Fig. S2†), which is higher than that of the iodide ion conductor [Mn(en)3]I2 (σ = 1.37 × 10−6 S cm−1 at 423 K). 13 For a typical iodide ion conductor, CuPbI3, the conductivity is on the order of 10−8 S cm−1 at 298 K. 28 The conductivity of 1 is about three orders of magnitude higher than that of CuPbI3. The temperature-dependent conductivity is plotted in the form of ln σ versus 1000/T, as shown in Fig. 2c, which shows a linear relationship in the temperature range of 273–343 K, and the activation energy (Ea) was fitted by the Arrhenius equation (eqn (1)):

σ = A exp(−Ea/(kB T))    (1)

where Ea is the ion migration activation energy, A represents the pre-exponential factor, and kB is Boltzmann's constant. The fitted activation energy is 0.26(2) eV. The value is smaller than that of the compound [Mn(en)3]I2 (ref. 13) and is comparable to the perovskite-type iodide ion conductor CuPbI3 (0.29 eV). 28 The Z′−Z″ plots of 2 are similar to those of 1, as shown in Fig. 2d, e, S3 and S4.† From 253 to 353 K, the graphs turn from pitch to semicircles, and the radius of the semicircle decreases as the temperature increases. This is due to the decrease in bulk resistance with increasing temperature, revealing that 2 also follows a thermally activated conduction mechanism. The conductivity can be simulated using an equivalent circuit (EC), and the values are 3.11(5) × 10−8 S cm−1 at 293 K, 8.90(5) × 10−4 S cm−1 at 343 K, and 4.94(3) × 10−3 S cm−1 at 353 K (Fig. S5†). The conductivity of 2 is slightly higher than that of 1 at high temperatures. The temperature-dependent conductivity is plotted in Fig. 2f, and the fitted activation energy is 0.23(2) eV, slightly less than that of 1.
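A brief sketch of this fit: with σ = A exp(−Ea/(kB T)), ln σ is linear in 1/T and Ea follows from the slope. The conductivity values below are synthetic, generated from an assumed Ea, not the measured data:

```python
import numpy as np

kB = 8.617333262e-5                      # Boltzmann constant, eV/K

# Synthetic data from an assumed Ea of 0.26 eV; the fit simply recovers it,
# illustrating how Fig. 2c and eqn (1) are used.
Ea_true, A = 0.26, 1.0                   # eV, S/cm (illustrative)
T = np.array([283.0, 303.0, 323.0, 343.0])
sigma = A * np.exp(-Ea_true / (kB * T))

slope, intercept = np.polyfit(1.0 / T, np.log(sigma), 1)
print(f"Ea = {-slope * kB:.3f} eV, A = {np.exp(intercept):.2f} S/cm")
```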
The variation of the dielectric loss (tan δ) with temperature at various frequencies is shown in Fig. S6.† Relaxation peaks were observed at 343 K and 10 kHz. Dielectric relaxation is the result of the reorientation of dipoles or ion migration, which is intrinsic to the materials; however, electrode polarization and the space-charge injection effect can also induce dielectric relaxation. In the low-frequency region, tan δ gradually increases with temperature owing to electrode polarization in compound 1 (Fig. S7†). To reduce the electrode polarization and space-charge injection effects at low frequency, dielectric modulus analysis was used. The electric modulus (M*) is calculated from the complex permittivity ε* = ε′ − iε″ as

M* = 1/ε* = M′ + iM″ = ε′/(ε′² + ε″²) + i ε″/(ε′² + ε″²),

where M′ and M″ are the real and imaginary parts of the complex modulus M*, respectively. From Fig. 3a, clear relaxation peaks were observed. The dielectric peak shifts to the high-frequency region as the temperature increases. The relaxation process is analyzed according to the empirical Arrhenius equation

τ = τ0 exp(Ea/(kB T)),

where τ0 represents the characteristic macroscopic relaxation time, Ea is the activation energy or potential barrier required for dielectric relaxation, and kB is Boltzmann's constant. The Ea value obtained by fitting Fig. 3b is 0.28(2) eV for 1 in the temperature range of 273–343 K, very close to the ion-migration activation energy obtained from the temperature-dependent conductivity of 1. These results further confirm that the conductivity mechanism arises from the migration of ions. The dielectric modulus plots and the dielectric relaxation fitted line for 2 are shown in Fig. 3c and d. The Ea value is 0.25(2) eV for 2 in the temperature range of 273–343 K.
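A minimal sketch of the modulus conversion, with a generic Debye-type permittivity as a stand-in input (all parameter values are illustrative, not from the measurements):

```python
import numpy as np

def modulus(eps1, eps2):
    """Electric modulus M* = 1/eps* from permittivity eps* = eps1 - i*eps2."""
    denom = eps1 ** 2 + eps2 ** 2
    return eps1 / denom, eps2 / denom     # M', M''

# A Debye-like relaxation produces a peak in M'' versus frequency
w = np.logspace(0, 6, 200)                # angular frequency (rad/s), illustrative
tau, eps_inf, d_eps = 1e-3, 3.0, 7.0      # assumed relaxation parameters
eps1 = eps_inf + d_eps / (1 + (w * tau) ** 2)
eps2 = d_eps * w * tau / (1 + (w * tau) ** 2)
M1, M2 = modulus(eps1, eps2)
print(w[np.argmax(M2)])                   # frequency of the modulus relaxation peak
```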
In summary, we presented two Schiff base iodide compounds and explored their dielectric properties and conduction behavior. The linear I3− ions are stacked into differently shaped polyiodide chains, and irregular channel frameworks are constructed from the Schiff base cations in the two compounds, with the polyiodide anions located in the channels. The analysis of dielectric relaxation and impedance spectra disclosed that the two compounds show intrinsic iodide ion conductance, with a higher conductivity of 1.03(4) × 10−4 S cm−1 at 343 K for 1 and 4.94(3) × 10−3 S cm−1 at 353 K for 2. The conductivity mechanism can be attributed to the migration of iodide ions. This study opens a way to synthesize single-component solid-state electrolytes with high conductivity for solid-state DSSCs.
Conflicts of interest
There are no conflicts to declare.
Protective Effects of Glucose-Related Protein 78 and 94 on Cisplatin-Mediated Ototoxicity
Cisplatin is a widely used chemotherapeutic drug for treating various solid tumors. Ototoxicity is a major dose-limiting side effect of cisplatin, which causes progressive and irreversible sensorineural hearing loss. Here, we examined the protective effects of glucose-related protein (GRP) 78 and 94, also identified as endoplasmic reticulum (ER) chaperone proteins, on cisplatin-induced ototoxicity. Treating murine auditory cells (HEI-OC1) with 25 μM cisplatin for 24 h increased cell death resulting from excessive intracellular reactive oxygen species (ROS) accumulation and caspase-involved apoptotic signaling pathway activation with subsequent DNA fragmentation. GRP78 and GRP94 expression was increased in cells treated with 3 nM thapsigargin or 0.1 μg/mL tunicamycin for 24 h, referred to as mild ER stress condition. This condition, prior to cisplatin exposure, attenuated cisplatin-induced ototoxicity. The involvement of GRP78 and GRP94 induction was demonstrated by the knockdown of GRP78 or GRP94 expression using small interfering RNAs, which abolished the protective effect of mild ER stress condition on cisplatin-induced cytotoxicity. These results indicated that GRP78 and GRP94 induction plays a protective role in remediating cisplatin-ototoxicity.
Introduction
Hearing loss, also known as hearing impairment, is generally classified into conductive and sensorineural hearing loss. The latter is caused by several risk factors, including acoustic trauma, aging, ototoxic drug use, autoimmune disease, infection, and genetic disorders. Hearing loss is commonly associated with the loss of auditory hair cells in the cochlea, which is irreversible because cochlear hair cells cannot regenerate once damaged [1]. In particular, various commonly used drugs have ototoxic properties that damage the cochlea or auditory nerve and vestibular system, a condition referred to as drug-induced hearing loss (DIHL). The ototoxic side effects of drugs, such as salicylates, aminoglycosides, and cisplatin, are bilaterally symmetric or asymmetric, with one ear being affected later. DIHL may arise during or after the end of therapy and may occasionally be recoverable if the drug is immediately discontinued or if the initial damage is allowed to repair. However, further accumulation of ototoxic medication may lead to permanent destruction of the sensory hair cells and, concomitantly, permanent hearing loss [2].
Cell Culture
The HEI-OC1 cell line was kindly provided by Dr. Federico Kalinec (Dept. of Cell and Molecular Biology, House Ear Institute, Los Angeles, CA, USA). The HEI-OC1 cells were cultured in high-glucose Dulbecco's modified Eagle's medium (DMEM) with 10% fetal bovine serum (FBS) at 33 °C in a humidified 10% CO2 atmosphere.
Cytotoxicity Assay
Cell viability was evaluated using a colorimetric D-Plus™ CCK cell viability assay kit (Dongin LS, Seoul, Korea), according to the manufacturer's instructions. The cells were seeded on 96-well plates at a density of 4 × 10³ cells/well and grown for 24 h under standard conditions. These cells were exposed to different concentrations of cisplatin, TG, and TM for 24 h. For inducing GRP expression, the cells were pretreated with 3 nM of TG or 0.1 µg/mL of TM for 24 h, followed by treatment with 25 µM of cisplatin for 24 h. The amount of formazan dye generated was determined by measuring the absorbance at 450 nm using a microplate spectrophotometer (Molecular Devices Corp., Sunnyvale, CA, USA). The absorbance values were converted to percentages for comparison with untreated controls.
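A minimal sketch of the percentage conversion described above, with hypothetical absorbance readings:

```python
import numpy as np

def percent_viability(a450, a450_control):
    """Convert 450 nm absorbances to percent viability relative to the
    untreated control, as described in the assay protocol."""
    return np.asarray(a450) / a450_control * 100.0

treated_wells = np.array([0.92, 0.55, 0.41])   # hypothetical cisplatin-treated wells
print(percent_viability(treated_wells, a450_control=1.05))  # -> [87.6 52.4 39.0] %
```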
Immunoblotting
The cells were washed with ice-cold PBS and lysed with RIPA buffer (Sigma-Aldrich) supplemented with complete protease inhibitor cocktail on ice for 30 min. The supernatants were collected by centrifugation at 13,000× g for 20 min, and protein concentrations were determined using a BCA Protein Assay kit (Thermo Fisher Scientific, Waltham, MA, USA). Total soluble proteins (10-30 µg) were separated on 12% sodium dodecyl sulfate polyacrylamide gel and transferred to nitrocellulose membranes (GE Healthcare Biosciences, Uppsala, Sweden). The membranes were blocked using 5% skim milk in TBS-T (10 mM Tris-HCl, pH 7.4, 100 mM NaCl, and 0.1% Tween 20) for 1 h at room temperature. The membranes were probed with their corresponding primary antibodies, followed by the appropriate HRP-conjugated secondary antibodies. Then, immunoreactive bands were detected using enhanced chemiluminescence assay technique (ECL; Dongin LS) and quantified using the ImageQuant LAS 500 biomolecular imager (GE Healthcare Biosciences).
Measurement of Intracellular ROS Production
Intracellular ROS level was measured using a fluorescent dye, 5-(and-6)-chloromethyl-2′,7′-dichlorodihydrofluorescein diacetate acetyl ester (CM-H2DCFDA; Molecular Probes, Inc., Eugene, OR, USA). The cells grown on 96-well plates were pre-incubated with 3 nM TG or 0.1 µg/mL TM for 24 h and then treated with 25 µM cisplatin for another 24 h. The cells were then washed twice with Hank's balanced salt solution (HBSS) and incubated with 5 µM CM-H2DCFDA for 20 min at 33 °C in the dark. After washing twice with HBSS, the samples were immediately read at 485 nm excitation and 535 nm emission using a PerkinElmer VICTOR 3 luminescence spectrometer (Perkin-Elmer, Waltham, MA, USA).
Detection of Apoptosis Using TUNEL Assay
Apoptosis was detected using both the LIVE/DEAD Viability/Cytotoxicity Kit (Molecular Probes, Inc., Eugene, OR, USA) and the In Situ Cell Death Detection Kit, TMR red (Roche Diagnostics, Indianapolis, IN, USA), according to the manufacturer's instructions, with slight modifications. The cells grown on glass coverslips in 6-well culture dishes were pretreated with 3 nM TG or 0.1 µg/mL TM for 24 h and further incubated with 25 µM cisplatin for 24 h. Live cells were labeled with calcein AM, briefly washed with PBS, and fixed with 4% paraformaldehyde. Then, the cells were permeabilized with 0.1% Triton X-100 in 0.1% sodium citrate for 5 min and incubated with the TUNEL reaction mixture containing terminal deoxynucleotidyl transferase and tetramethyl-rhodamine-dUTP. The cells were examined using the appropriate filters of an Olympus IX71 fluorescence microscope: green fluorescence (ex/em ≈495/≈515 nm) for live cells and red fluorescence (ex/em ≈495/≈635 nm) for apoptotic cells. The percentage of TUNEL-positive cells was determined by counting ≈1000 cells selected from 3-4 randomly chosen fields of the coverslip.
Transfection with siRNA
The siRNAs of GRP78 and GRP94, and a scrambled oligonucleotide as a negative control, were obtained from Genolution Pharmaceuticals, Inc. (Seoul, Korea). The cDNA sequences of GRP78 (GenBank accession number NM_001163434.1) and GRP94 (GenBank accession number NM_011631.1) used to design the respective siRNAs were as follows: 5′-GAAUGAAUUGGAAAGCUAUUU-3′ for GRP78 and 5′-CUGGAAAUGAGGAGUUAACUU-3′ for GRP94. The scrambled siRNA was 5′-CCUCGUGCCGUUCCAUCAGGUAGUU-3′. The cells were seeded on 24-well culture plates and transiently transfected with each siRNA (60 nM) using G-fectin (Genolution Pharmaceuticals, Inc., Seoul, Korea), according to the manufacturer's instructions. Each transfection procedure was performed in quadruplicate. After 24 h, the transfection mixture was replaced with fresh culture medium and further incubated for 2 d. Each transfectant was treated with TG (3 nM) or TM (0.1 µg/mL) for 24 h, followed by incubation with 25 µM cisplatin for another 24 h. Cell viability and ROS accumulation level were evaluated as described above.
Statistical Analysis
Data were expressed as means ± standard error (SE) of three independent experiments. Differences between groups were evaluated using Student's t-test or one-way analysis of variance (ANOVA), as appropriate. A p value of <0.05 was considered statistically significant.
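Assuming a layout like that described above, the snippet below shows how such comparisons could be run with SciPy. The viability numbers are invented placeholders, not data from this study.

```python
# A minimal sketch of the statistical comparisons described above.
# The arrays are placeholder data, not measurements from this study.
import numpy as np
from scipy import stats

control = np.array([100.0, 98.5, 101.2])       # % viability, untreated
cisplatin = np.array([44.8, 46.1, 43.5])       # % viability, 25 uM cisplatin
tg_cisplatin = np.array([72.6, 74.0, 71.1])    # % viability, TG + cisplatin

# Two-group comparison: Student's t-test
t, p = stats.ttest_ind(control, cisplatin)
print(f"t-test p = {p:.4f}")

# Three or more groups: one-way ANOVA
f, p = stats.f_oneway(control, cisplatin, tg_cisplatin)
print(f"ANOVA p = {p:.4f}")
```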
Cisplatin-Induced Apoptosis in HEI-OC1 Cells
The HEI-OC1 cells were exposed to different concentrations of cisplatin (5-100 µM) for 24 h to determine an adequate cytotoxic cisplatin concentration, and cell viability was monitored using the CCK assay. The cisplatin treatment decreased cell viability in a dose-dependent manner, with a lag between 10 and 15 µM. At 25 µM cisplatin, cell viability was 44.8% of that of the untreated control (Figure 1A). As a result, a cisplatin concentration of 25 µM was used in our subsequent studies, since this concentration and timepoint were within the range of the estimated half-maximal cytotoxic dose (IC50).
It is well established that cisplatin-induced cytotoxicity is closely associated with excessive generation of ROS and activation of apoptosis-related proteins [18,19]. Therefore, we initially measured the levels of cisplatin-induced intracellular ROS using a peroxide-sensitive fluorescent probe, CM-H2DCFDA. As shown in Figure 1B, DCF fluorescence intensity from the cisplatin-treated cells was five-fold higher than that of the untreated control. Next, we evaluated the changes in expression levels of proteins involved in apoptotic pathways to investigate whether cisplatin-induced cytotoxicity was associated with apoptosis. Immunoblot analyses showed that the expression levels of two mitochondrial proteins, namely Bcl-2 (anti-apoptotic protein) and Bax (pro-apoptotic protein), were contrasting in cisplatin-treated cells; that is, the ratio of Bcl-2/Bax was 1.0 in the untreated control versus 0.24 in cisplatin-treated cells. Moreover, catalytically activated (cleaved) forms of caspase-3 and caspase-7 showed a six-fold increase in cisplatin-treated cells, with concomitant accumulation of the cleaved (inactivated) PARP fragment (Figure 1C). Taken together, these results indicated that excessive ROS accumulation and apoptosis contributed to cisplatin-mediated ototoxicity in HEI-OC1 cells.

Figure 1 (caption, partial). Protein bands were quantified using densitometry, and their abundances were expressed relative to β-actin band density. The ratio of each protein to β-actin is presented as a fold change relative to the untreated control. Values are expressed as means ± SE of three independent experiments. * p < 0.05, ** p < 0.01, *** p < 0.001; compared with the untreated control.
Effects of ER Stress Inducers on GRP78 and GRP94 Expressions in HEI-OC1 Cells
The induction of GRP78 and GRP94 expression during ER stress is reported to function in maintaining ER homeostasis, assisting in proper protein folding, and degrading misfolded proteins through chaperone formation [20,21]. The involvement of GRPs in cell survival prompted us to examine the protective roles of GRP78 and GRP94 in cisplatin-mediated ototoxicity. Cell viability was evaluated in TG- or TM-treated cells at various concentrations. The 24-h exposure revealed that the cytotoxicity of both inducers increased dose-dependently. At 3 or 5 nM TG, cell viability decreased to 92.6% or 90.1% of control, whereas at 0.05 or 0.1 µg/mL TM, cell viability was 91.2% or 89.2% (Figure 2A,B), indicating that the cytotoxic effects of TG and TM are relatively mild at these concentrations. Treatment with 3 or 5 nM TG induced significant increases in GRP78 and GRP94 expression, that is, three-fold and six-fold for GRP78 and GRP94, respectively, at both concentrations. Treatment with 0.05 µg/mL TM resulted in a two-and-a-half-fold increase in GRP78 expression, but not in that of GRP94. The expression of both proteins was significantly increased at 0.1 µg/mL (Figure 2C). Therefore, the aforementioned concentrations of TG and TM (3 nM and 0.1 µg/mL, respectively) were used to examine the protective effects of GRP78 and GRP94 on cisplatin-induced ototoxicity.
Protection of GRP78 and GRP94 Induction from Cisplatin-Mediated Ototoxicity
To further examine whether the upregulation of GRP78 and GRP94 attenuated cisplatin-induced cytotoxicity, these proteins were induced by pre-incubating HEI-OC1 cells with 3 nM TG or 0.1 µg/mL TM for 24 h, followed by exposure to 25 µM cisplatin for another 24 h. As shown in Figure 3A, the CCK assay showed that pretreatment with TM or TG increased cell viability by 29.4% or 27.8%, respectively, over that of cisplatin alone. We then evaluated the changes in cisplatin-induced intracellular ROS generation in cells pretreated with TM or TG. Cisplatin-triggered ROS accumulation was decreased by 2.9 or 2.2 times in cells pretreated with TM or TG, respectively, as determined by DCF fluorescence intensity analysis (Figure 3B). When the cells were treated with TM or TG alone for 24 h, the ROS levels were slightly increased, but the values were significantly lower than that of cisplatin treatment (data not shown). These results indicated that GRP induction attenuated cisplatin-triggered intracellular ROS accumulation. Next, we investigated changes in the expression levels of proteins involved in apoptotic pathways using immunoblot analysis (Figure 3C). Cisplatin treatment by itself did not cause any changes in GRP78 and GRP94 expression. The ratio of Bcl-2/Bax was 0.13 in cisplatin-treated cells, whereas this ratio was elevated in cells pretreated with TM or TG (0.6 or 0.67, respectively). In addition, the augmented activation of caspase-3 and caspase-7, as well as cisplatin-induced PARP inactivation, dramatically declined in cells pretreated with TM or TG. Finally, the protective effects of GRP overexpression on cisplatin-induced apoptosis were confirmed through calcein AM staining (green) of viable cells, followed by the detection of DNA fragmentation using the TUNEL assay (red), which allowed us to calculate the percentage of apoptotic cells over viable cells. As shown in Figure 4, the percentage of TUNEL-positive cells was 35% in cisplatin-treated cells, whereas pretreatment with TM or TG dramatically reduced this percentage to 8% or 13%, respectively. Taken together, these results indicate that GRP pre-induction inhibits cisplatin-mediated apoptotic events in HEI-OC1 cells, such as oxidative stress, caspase-dependent pathway activation, and dysregulation of apoptosis-regulating mitochondrial proteins.
Effect of GRP78 or GRP94 Knockdown (KD) on Cisplatin-Mediated Ototoxicity
To further validate the protective roles of GRP78 and GRP94 against cisplatin-induced ototoxicity, the pre-induction of GRP78 and GRP94 in TM- or TG-treated cells was inhibited through small interfering (si) RNA transfection, and the changes in cisplatin-mediated cytotoxicity and ROS accumulation were then evaluated. After 72 h of transfection, GRP78 and GRP94 expression levels in the respective KD transfectants were markedly decreased, to about 0.4 times those in the scrambled siRNA transfectant, as per the results of the immunoblot analysis. The slight reduction of GRP94 or GRP78 expression observed in GRP78 or GRP94 KD cells, respectively, was not statistically significant (Figure 5A). Cisplatin-induced cytotoxicity was not changed in either the GRP78 or the GRP94 KD transfectant, whereas the rescue effect of the TG or TM pretreatment on cell viability was markedly decreased, by 30%, compared with that in the scrambled siRNA transfectant (Figure 5B). Concomitantly, each pretreatment further increased cisplatin-triggered ROS accumulation in KD cells (Figure 5C). These results demonstrated that GRP overexpression plays a crucial role in attenuating cisplatin-mediated ototoxicity.
Figure 5. Cisplatin-mediated cytotoxicity and intracellular ROS accumulation in GRP78 or GRP94 KD cells. Cells were transfected with GRP78, GRP94, or scrambled siRNAs and then treated with cisplatin as described in Section 2. (A) GRP78 and GRP94 expression levels after the 72-h transfection. Protein bands were quantified densitometrically and normalized to the density of the β-actin band. The ratio of GRP78 or GRP94 to β-actin in each group is presented as its fold-change relative to the scrambled siRNA transfectant. * p < 0.05, compared with the scrambled siRNA transfectant. After the 48-h incubation, cells were treated with TM or TG, and then cisplatin, as previously described. (B) Cell viability was determined through CCK assay. The graph represents the relative viability percentage, compared with the untreated control. (C) The levels of ROS accumulation were determined through DCF fluorescence intensity spectrofluorometry. The graph represents the relative ROS accumulation fold, compared with untreated controls. Values are expressed as means ± SE of three independent experiments. * p < 0.05, compared with the untreated control; #, cisplatin-only versus TM plus cisplatin or TG plus cisplatin.
Discussion
Excessive free radical formation in the cochlea caused by aging, noise exposure, and ototoxic compounds results in sensory hair cell injury, which subsequently leads to hearing loss. Potential free radical generators in the ear include mitochondria, enzymatic reactions, NOX3, and increased intracellular calcium concentration that leads to overproduction of neurotransmitters, such as nitric oxide (NO) and glutamate [2,22,23]. In this respect, maintaining redox homeostasis is crucial in protecting the cochlea and central auditory system against oxidative stress-mediated acoustic trauma. In the present study, we found that pre-induction of GRP78 and GRP94 attenuated the cisplatin-induced ROS accumulation, which protected the HEI-OC1 cells from oxidative injury.
Cisplatin is an effective, widely used anticancer drug; however, its major side effect is ototoxicity with subsequent sensorineural hearing loss after high-dose treatment. Cisplatin ototoxicity is known to be associated with at least two mechanisms, DNA adduct formation and ROS accumulation, in both the cochlea and the vestibular system, leading to the death of sensory cells through apoptosis or necrosis [24]. For example, cisplatin was found to induce apoptosis in HEI-OC1 cells and in neonatal rat organ of Corti explants, mediated by ROS generation and lipid peroxidation [25]. Intraperitoneal cisplatin evoked a hearing threshold shift and an intrinsic apoptotic pathway within rat cochleae, which involved the activation of caspase-3 and caspase-7 and the modulation of two mitochondrial proteins (increased Bax and decreased Bcl-2 levels) [26]. It has also been reported that cisplatin ototoxicity in HEI-OC1 cells is mainly associated with the mitochondrial apoptotic pathway through activation of the ROS/JNK signaling cascade [27]. Consistent with these findings, the present study showed that intracellular ROS accumulation and intrinsic apoptotic pathways mainly contributed to cisplatin-induced cell death in HEI-OC1 cells (Figure 1). Activation of caspase-3 and caspase-7, inactivation of PARP, and altered expression of Bcl-2/Bax were found to be associated with increased levels of DNA fragmentation (Figure 4).
GRP78 and GRP94 induction ensures proper protein folding in the ER, thereby protecting cells from ER dysfunction caused by nutrient deprivation, chemical toxicity, changes in calcium mobilization, oxidative stress, or glycosylation disturbances [28,29]. Here, these proteins were induced by treating the cells with a specific inhibitor of the ER Ca2+-ATPase (TG) or an N-linked glycosylation inhibitor (TM), which disrupt ER calcium homeostasis or prevent post-translational protein maturation, respectively. The protective mechanisms of GRPs involve suppressing intracellular ROS accumulation and stabilizing mitochondrial function [11,30]. In the present study, dose-dependent cytotoxicity was observed in HEI-OC1 cells exposed to different concentrations of TG or TM for 24 h. At 3 nM TG or 0.1 µg/mL TM, cytotoxicity was relatively low and GRP78 and GRP94 expression levels were significantly increased (Figure 2). Moreover, there was no obvious change in intracellular ROS accumulation and apoptosis at the same concentrations (data not shown). Similar ranges of TG or TM concentration and similar timepoints have been used to induce ER stress proteins without serious toxicity in various cell lines [13,14,31]. However, prolonged exposure resulted in decreased GRP78 and GRP94 levels, leading to a consequent loss of cell viability. Taken together, these findings suggest that GRP78 and GRP94 may be induced in cells prior to the development of more severe cytotoxicity.
ER stress exposure results in either the activation of protective ER stress responses or ER-associated apoptotic pathways, the latter occurring when ER stress exceeds the capacity of the UPR system. The protective response restores cellular homeostasis and induces adaptive reactions that potentiate protection against a later, more injurious stress. The beneficial effect of mild ER stress, assimilated to ER stress preconditioning, has been reported in the liver and brain of TM-injected rats, which were protected from later hepatic ischemia/reperfusion injury or lipopolysaccharide-induced neuroinflammation and memory impairment, respectively [32,33]. Additionally, ER stress preconditioning in cultured cells pretreated with TG alleviated toxicant-mediated cell damage through the upregulation of ER stress-related proteins, including GRP78 and GRP94 [14,34]. In the present study, pre-incubation of HEI-OC1 cells with TG or TM prior to adding cisplatin induced GRP78 and GRP94 expression, attenuated intracellular ROS accumulation, and inhibited the caspase-dependent apoptotic pathway, resulting in increased cell viability (Figure 3), which correlated with a reduction in TUNEL-positive cells (Figure 4). These results define a novel mechanism whereby mild ER stress may be beneficial for auditory cells in defending against the ototoxic side effect of cisplatin, as it can alleviate cell injury, including excessive ROS accumulation and apoptosis.
GRP78 and GRP94 induction in ER stress-preconditioned cells plays cytoprotective roles under various cytotoxic conditions. For example, increased GRP78 expression during the ER stress response attenuated H2O2-induced renal epithelial cell injury by inhibiting the increase of intracellular Ca2+ concentration and the activation of the ERK1/2 signaling pathway [35]. Tolerance to various cytotoxins was conferred by GRP78 and GRP94 overexpression in several cell lines [31]. Furthermore, the co-downregulation of GRP78 and GRP94 expression in prostate cancer cells by their specific siRNAs suppressed cell migration and promoted caspase-9-dependent apoptosis [36]. In the present study, transient transfection of HEI-OC1 cells with siRNA targeted against GRP78 or GRP94 abolished the induction of these proteins during ER stress preconditioning and failed to reduce cisplatin-mediated ROS accumulation, thus sensitizing cells to cisplatin-induced cytotoxicity (Figure 5). This finding indicated that GRP78 and GRP94 induction is integral to ER function in promoting a protective mechanism against cisplatin-triggered ototoxicity. This is supported by recent findings that the decreased expression of ER stress-related proteins, including GRP78, in the cochleae of aged mice was associated with age-related hearing loss [16], and that intense noise exposure upregulated GRP78 expression in hair, lateral wall, and spiral ganglion cells of guinea pigs, thereby protecting cochlear cells from noise-induced injury [17]. It has also been reported that cisplatin binds to GRP78 and GRP94 from cochlear and kidney cell lysates, suggesting that this interaction may attenuate cisplatin ototoxicity [37]. It will be of interest to examine the signaling pathways of ER stress sensors, including inositol-requiring enzyme 1 (IRE1), PKR-like endoplasmic reticulum kinase (PERK), and activating transcription factor 6 (ATF6). This may help in understanding the protective mechanism of ER stress preconditioning against the ototoxicity of cisplatin.
Conclusions
In summary, we showed that TG or TM pretreatment before cisplatin exposure attenuated cisplatin ototoxicity in auditory hair cells. The protective effects of these ER stress inducers were achieved through increased GRP78 and GRP94 expression, leading to the inhibition of cisplatin-induced intracellular ROS accumulation and intrinsic apoptotic signaling. Our findings add to the knowledge of the beneficial effects of ER stress preconditioning on cisplatin-induced ototoxicity and provide new insight for designing approaches to prevent or treat environment-related hearing loss.
Sleep, Circadian Rhythms, and Interval Timing: Evolutionary Strategies to Time Information
Sleep, circadian rhythms, and interval timing are evolutionarily well-conserved functions that are widely shared throughout the animal kingdom. A crucial property of the brain is to make use of internal clock mechanisms (e.g., circadian) to locate events in time. From a neurobiological and computational point of view, clock mechanisms can be seen as strategies that involve information processing of internal biological states at different time scales (Tucci, 2011; Tucci et al., 2014). The coordination between temporal information processing and internal physiological responses maintains homeostasis in many biological domains. For example, sleep is a genetically and epigenetically regulated phenomenon that can be mathematically modelled by at least two fundamental processes, a homeostatic process and a circadian process. The homeostatic process of sleep depends on previous wakefulness, representing the pressure for sleep since the last sleep episode. The circadian process dictates the timing of sleep; it is a self-sustained periodic mechanism that develops cell-autonomous oscillations of approximately 24 hours. Thus, the distribution of sleep over 24 hours results from the combination of these two processes. Moreover, the ability to understand and perform in time is also realised within seconds-to-minutes intervals (e.g., interval timing). Interval timing represents a crucial cross-species property of many cognitive processes; interestingly, this short timing ability varies with the time of day, and it has been shown that sleep enhances the consolidation of timing learning. Furthermore, we have observed that mouse performance is modulated by a sleep inertia-like effect (Maggi et al., 2014). To understand the interplay between sleep homeostasis, circadian biological rhythms, and interval timing, an integrated investigation of these mechanisms is required.
A crucial property of the brain is to integrate temporal information with accurate physiological responses (Hinton and Meck, 1997; Buhusi and Meck, 2005; Coull et al., 2011). Evolution has favored biological clocks that dictate homeostatic processes (e.g., the circadian timing of sleep) and, on a smaller time scale, timed behavioral responses (e.g., interval timing). The interplay between such time-keeping mechanisms is intriguing but biologically complex. Moreover, in biology, analogous problems can be successfully solved by multiple computations. In this article I will discuss sleep, circadian rhythms, and interval timing by delineating several aspects that suggest a common evolutionary role in providing neurobiological mechanisms for temporal information processing.
Neither interval timing nor the homeostatic regulation of sleep is currently as well understood, at the molecular level, as the circadian clock. This has triggered a scientific interest in linking these phenomena to the circadian molecular machinery. On one hand, sleep homeostasis has been investigated in genetic and lesion studies of the circadian "master" clock (Franken and Dijk, 2009) to test whether the two processes (homeostatic and circadian) are independent. On the other hand, it has been questioned whether interval timing and the circadian clock share similar oscillatory mechanisms, albeit on different time scales (Crystal, 2001, 2006a; Crystal and Baramidze, 2007), and whether circadian rhythms affect interval timing. Whilst research in sleep and the circadian clock over the last few years has resulted in some interesting positive associations (reviewed in Tucci and Nolan, 2010), the question of whether interval timing is related to the circadian clock has not yet reached a clear, unanimous conclusion.
Sleep and Interval Timing
As in conditioning behaviors, in which a brief time interval itself may embody the proper information to be learned, sleep behavior is phase-locked to the 24-h (circadian) rhythms of the organism. However, while the circadian process is self-sustained, sleep is homeostatically regulated and varies according to previous wakefulness. Back in evolution, single-cell organisms became entrained to environmental rest-activity stimuli (light and temperature changes caused by the earth's rotation) in order to set the time of their metabolic processes (Krueger, 2010). Sleep evolved from rest-activity cycles, and it is still debated whether a sleep-like phenotype occurs within single neurons and how this is affected by environmental (extracellular) stimuli. Nevertheless, it is reasonable to envisage that sleep has developed, in multicellular organisms, based on the same biology as a primordial rest state.
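The interplay between the self-sustained circadian process and the homeostatic process can be made concrete with a toy implementation of the classic two-process model of sleep regulation. This is a generic illustration under our own assumed thresholds and time constants, not a model fitted in any of the cited studies.

```python
# A minimal sketch of the classic two-process model (Borbely-style):
# homeostatic pressure S rises during wake and discharges during sleep,
# while a ~24 h circadian process C modulates the switching thresholds.
# All constants are illustrative choices, not fitted values.
import math

TAU_RISE, TAU_DECAY = 18.0, 4.0   # hours; build-up / discharge of S
DT = 0.1                          # hours per step

def circadian(t):
    """Circadian process C: a self-sustained ~24 h oscillation."""
    return math.sin(2 * math.pi * t / 24.0)

s, asleep, t = 0.5, False, 0.0
for _ in range(int(72 / DT)):             # simulate three days
    upper = 0.6 + 0.12 * circadian(t)     # wake -> sleep threshold
    lower = 0.2 + 0.12 * circadian(t)     # sleep -> wake threshold
    if asleep:
        s -= (s / TAU_DECAY) * DT         # homeostatic discharge
        if s < lower:
            asleep = False
    else:
        s += ((1 - s) / TAU_RISE) * DT    # homeostatic build-up
        if s > upper:
            asleep = True
    t += DT
print("final state:", "asleep" if asleep else "awake", f"S={s:.2f}")
```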
In understanding the brain mechanisms of interval timing it is becoming pivotal to define whether the coding of interval timing depends on targeted neuronal mechanisms and their associated sub-cellular signaling pathways or on the states of neuronal networks. There is no doubt that the two mechanisms (single neuronal signaling and networks) are intimately connected but it is fundamental to understand what sets the time for timed responses such as a motor action or an endocrine oscillation.
It appears that sleep and interval timing share a common destiny, as they are often studied as emerging properties of neuronal networks. For this reason, it is not surprising that the function of sleep is thought to be associated with synaptic plasticity, as theorized by the latest functional model of sleep (Tononi and Cirelli, 2006). In both sleep (Tononi and Cirelli, 2006) and interval timing (Buonomano and Laje, 2010), the output units of the network rely on the development and stabilization of proper synaptic weights. Moreover, the synaptic interplay between the frontal cortex and subcortical neurons, such as striatal neurons, is important in both sleep-related memory consolidation and interval timing. Besides, these phenomena rely on specific patterns of "slow" oscillatory firing (Diekelmann and Born, 2010).
A number of studies have shown that sleep benefits memory consolidation (see Diekelmann and Born, 2010 for an extensive review). Beneficial effects of sleep on memory occur after a few minutes (Lahl et al., 2008), a few hours (Mednick et al., 2003; Tucker et al., 2006a,b; Korman et al., 2007; Nishida and Walker, 2007), or after a proper night of sleep (Pace-Schott et al., 2005; Stickgold, 2005; Stickgold and Walker, 2005a,b; Walker et al., 2005a,b; Marshall and Born, 2007). Of all the memory tasks that have been used in investigating the learning and memory aspects of sleep, almost none manipulated temporal variables, until recently. Lewis and Meck (2011), by using a combination of psychophysics and neuroimaging techniques in human participants, tested the hypothesis that sleep promotes consolidation of temporal information. The authors differentiated motor and perceptual timing in their task. Interestingly, they reported that brain sleep-wake states during retention modulate motor timing learning in motor areas, such as the supplementary motor area, the striatum, and the cerebellum, while perceptual timing activates the posterior hippocampus zone. This new evidence is in agreement with an influential model of a sleep-dependent memory mechanism that involves slow-wave sleep (SWS; Diekelmann and Born, 2010). Cortical slow oscillations (<1 Hz) provide a timed electrophysiological mechanism of up- and down-states that pass memories from hippocampal temporary storage to neocortical long-term storage (Cheng et al., 2008, 2009; Molle and Born, 2011).
Circadian Clock and Interval Timing
The circadian clock is represented, at the cellular level, by a well-known negative transcription/translation-based feedback loop that sets the oscillation of the so called "core clock genes." The brain contains a pacemaker-like structure, the suprachiasmatic nuclei (SCN) of the hypothalamus, that provides circadian rhythmicity to other peripheral organs.
Core clock elements in the cell are transcription factors that regulate interlocking positive/negative feedback loops (Ko and Takahashi, 2006; Reppert, 2006). In mammals, the positive loop starts when CLOCK and BMAL1 (two members of the bHLH-PAS transcription factor family) form heterodimers and then translocate to the nucleus. Here the CLOCK:BMAL1 complex binds to E-box enhancers and promotes the transcription of the Period (PER1-3) and Cryptochrome (CRY1 and CRY2) genes. The resulting proteins form PER:CRY heterodimers that initiate the negative feedback loop. These PER:CRY complexes move back to the nucleus and inhibit the activity of CLOCK:BMAL1, resulting in the repression of their own transcription. However, these are not the only regulatory loops. CLOCK:BMAL1 also activates the transcription of two retinoic acid-related orphan receptors (REV-ERBs and RORs), which regulate Bmal1 positively (RORs) and negatively (REV-ERBs). Increasing the complexity of the circadian clock, a number of post-translational mechanisms have been shown to regulate clock genes (Lopez-Molina et al., 1997). This whole clock mechanism of activators and repressors represents an oscillatory mechanism that is necessary to coordinate circadian rhythms. However, a remarkable causal relation exists between molecular oscillations and neural activity. An important property of neurons in the SCN of the hypothalamus is their ability to generate, endogenously, action potentials that oscillate throughout day and night (Albus et al., 2002; Schaap et al., 2003; Yamaguchi et al., 2003; Kuhlman and McMahon, 2006; Ko et al., 2009). Neurons are active for 4-6 h during the day and inactive during the night. During the daily active state their responses to excitatory inputs are remarkably reduced, while at night they become responsive again. A combination of ion channels regulates the membrane currents that maintain the spontaneous circadian activity of SCN neurons (Colwell, 2011). The silencing of SCN neuronal firing at night depends on a difference in membrane potential, and this is mainly mediated by a hyperpolarizing potassium mechanism (Kuhlman and McMahon, 2006).
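As a minimal computational caricature of such a delayed negative-feedback loop, a Goodwin-type oscillator (a standard textbook model, substituted here for the full CLOCK:BMAL1/PER:CRY network) already produces self-sustained oscillations. All parameters below are our own illustrative choices.

```python
# A minimal Goodwin-type negative-feedback oscillator: mRNA (x) ->
# protein (y) -> nuclear repressor (z) that inhibits its own
# transcription. A generic textbook model, not the mammalian clock
# network itself; parameters are illustrative values that are known
# to give sustained oscillations (Hill coefficient n > 8).
def goodwin_step(x, y, z, dt=0.01, n=10):
    dx = 1.0 / (1.0 + z**n) - 0.2 * x   # transcription repressed by z
    dy = x - 0.2 * y                    # translation
    dz = y - 0.2 * z                    # repressor accumulation
    return x + dx * dt, y + dy * dt, z + dz * dt

x, y, z = 0.1, 0.1, 0.1
trace = []
for i in range(200_000):
    x, y, z = goodwin_step(x, y, z)
    if i % 10_000 == 0:
        trace.append(round(z, 2))
print(trace)  # the sampled repressor level rises and falls repeatedly
```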
A series of important studies investigating the relation between neuronal activity and the circadian molecular clock (Sheeba et al., 2008; Choi et al., 2009) has shown that synaptic activity and membrane potential are responsible for the oscillation of core clock genes. This has dramatically changed the general assumption that a molecular clock drives the activity of clock neurons. One of these studies showed, in Drosophila, that chronic hyperpolarization silenced circadian neurons and interrupted circadian behavioral rhythmicity and the expression of PER and TIM proteins. This testifies that neural activity regulates the circadian molecular clock. Another component that plays a crucial role in clock gene expression is the level of Ca2+. The resting levels of Ca2+ during the day, when the SCN neurons are more active, are doubled compared with the inactive night phase (Colwell, 2000). Ca2+ is thought to be responsible also for another important aspect of circadian rhythms: the phase-response curve (Colwell, 2011). Indeed, the response to an environmental input (e.g., light stimulation) differs according to the specific phase of the circadian cycle.
There are several reasons to investigate associations between interval timing and circadian rhythms. A series of studies contravenes the assumption that interval timing depends on a linear accumulator and instead indicates oscillatory-like mechanisms (Crystal, 1999, 2003; Crystal and Baramidze, 2007; Gu et al., 2011). Thus, it is reasonable to investigate whether shared mechanisms between short (seconds-to-minutes) and long (circadian) timed responses occur. Several studies have concluded for a close relationship between interval timing and circadian rhythms. For example, it has been shown that circadian rhythms change the perception of short intervals (Pfaff, 1968; Aschoff, 1998a,b; Nakajima et al., 1998; Morofushi et al., 2001) and that, in conditions of temporal isolation (Aschoff, 1998), subjects' time estimation co-varies with their circadian period. Furthermore, Drosophila circadian mutants present a deficit in short-interval timing (Kyriacou and Hall, 1980). Moreover, dopamine (DA) mechanisms and motivated behaviors are strongly associated with both interval timing (Meck, 2006a,b; Agostino et al., 2011) and the circadian clock (Albrecht, 2011). The SCN projects toward brain areas that lie within the DA circuitries and mediate reward-related behaviors.
Other studies, instead, suggested that circadian clock mechanisms are independent of interval timing. This conclusion was driven by SCN lesion studies (Lewis et al., 2003) and by investigations of interval timing in circadian mutants (Cordes and Gallistel, 2008;Papachristos et al., 2011). I shall argue that conclusions driven by lesions restricted to SCN should not be generalized to the molecular level. For example, studies in mice of SCN lesions that lead to arrhythmic circadian behaviors, but did not affect sleep homeostasis (Easton et al., 2004;Larkin et al., 2004), have supported, for a long time, the idea that the two processes (circadian and homeostatic) were independent. However, at a molecular level, we now know that several circadian genes play a role in sleep (Tucci and Nolan, 2010).
Regarding the phenotyping of interval timing in circadian mouse mutants, Cordes and Gallistel (2008) reported that Clock deficiency has no abnormal consequences in the peak-interval procedure in mice. In this study CLOCK-KO mice were used instead of the CLOCK mutants that carry single point mutations. Similar negative results in interval timing were obtained by Papachristos et al. (2011) in Cry1 and Cry2 KO mice. Our critical argument concerning these studies is that, due to functional genomic redundancy, gene deletion models may not be able to reveal all the important functions of a gene. Paralog compensation among several functional clock genes has been reported (DeBruyne et al., 2007). It was shown that CLOCK-deficient mice present only mild circadian alterations and, thus, that CLOCK is not essential for circadian rhythms. A paralog of CLOCK, NPAS2, has been proposed to dimerize with BMAL1 and to work in the mouse forebrain as the clock molecular loop (Reick et al., 2001). NPAS2 is particularly expressed in the cortex, hippocampus, striatum, amygdala, and thalamus (Garcia et al., 2000) and exerts an important role in sleep and behavior (Dudley et al., 2003). For all these reasons, I believe that further investigation of interval timing and circadian rhythms is required before we can accept the hypothesis of independence among these phenomena.
Epigenetics and Biological Clocks
Ultimately, the investigation of the genetic determinants that regulate biological clocks is growing in complexity. Epigenetic mechanisms set a number of temporal determinants for gene expression. The action of the genome seems to respond to a principle of modularity (Litvin et al., 2009), which refers to a functional model in which cellular states, determined by genetic variations and by extracellular stimulations, affect transcriptional responses (Litvin et al., 2009). In particular, a primary locus controls the states of the cell and then a secondary locus has an effect only in particular states. This is a very interesting model that implies a timed expression of specific gene-driven phenotypes. The temporal coding of genetic information depends on chromatin states that regulate gene transcription and functioning. Epigenetic marks can vary over periods of minutes to hours, and they constitute fundamental mechanisms for learning and memory consolidation (Akbarian and Huang, 2009). Since approximately 10% of all mammalian transcripts present a circadian rhythmicity (Panda et al., 2002), an efficient chromatin remodeling must exist to ensure this rhythmic gene expression (Borrelli et al., 2008). A number of studies have demonstrated that methylation oscillates at clock gene promoters (Etchegaray et al., 2006) and that rhythmic histone modification arises at promoters of clock-controlled genes (Etchegaray et al., 2003; Naruse et al., 2004; Ripperger and Schibler, 2006). Although these studies do not prove that specific epigenetic mechanisms, such as those involved in chromatin remodeling, are necessary for clock control, they demonstrate that transcription-permissive chromatin states occur at specific circadian times (Borrelli et al., 2008).
Conclusion and Future Directions
Molecular elements play an important role in sleep, circadian rhythms, and interval timing. Thus, timing is coded at the behavioral, physiological, genetic, and epigenetic level. An integrated investigation of these mechanisms represents the next frontier in developing models of the coding and retention of temporal information. The choice of animal models (e.g., the mouse) that carry single-nucleotide mutations translating into abnormal circadian phenotypes is preferred to deletion models. In the future, it could be envisaged that interval-timing phenotypes will be widely employed in phenotype-based mutagenesis programs (Nolan et al., 2000) and would have the potential to promote a new era of molecular discoveries in interval timing, as we had for the circadian clock. However, cognitive tests in mice present a number of restraints that make them unfeasible for such large-scale functional genomics enterprises. An easy solution to this impasse is the development of automated tests in the home-cage. They have the advantage of increasing the sample of observations, reducing the time for training, and leaving the animals undisturbed. Last, but not least, 24-h home-cage screens allow the integrated investigation of timing phenotypes at different time scales.

Acknowledgments

I thank Glenda Lassi for critical reading of the manuscript and for discussions. Support (to Valter Tucci) was provided by the European Commission FP7 Programme under project 223263 (PhenoScale).
Epidemic mitigation via awareness propagation in communication networks: the role of time scales
The participation of individuals in multi-layer networks allows for feedback between network layers, opening new possibilities to mitigate epidemic spreading. For instance, the spread of a biological disease such as Ebola in a physical contact network may trigger the propagation of information related to this disease in a communication network, e.g. an online social network. The information propagated in the communication network may increase the awareness of some individuals, resulting in them avoiding contact with their infected neighbors in the physical contact network, which might protect the population from the infection. In this work, we aim to understand how the time scale γ of the information propagation (the speed at which information is spread and forgotten) in the communication network, relative to that of the epidemic spread (the speed at which an epidemic is spread and cured) in the physical contact network, influences such mitigation using awareness information. We begin by proposing a model of the interaction between information propagation and epidemic spread, taking into account the relative time scale γ. We analytically derive the average fraction of infected nodes in the meta-stable state for this model (i) by developing an individual-based mean-field approximation (IBMFA) method and (ii) by extending the microscopic Markov chain approach (MMCA). We show that when the time scale γ of the information spread relative to the epidemic spread is large, our IBMFA approximation is better than MMCA near the epidemic threshold, whereas MMCA performs better when the prevalence of the epidemic is high. Furthermore, we find that an optimal mitigation exists that leads to a minimal fraction of infected nodes. The optimal mitigation is achieved at a non-trivial relative time scale γ, which depends on the rate at which an infected individual becomes aware. Contrary to our intuition, information spread too fast in the communication network can reduce the mitigation effect. Finally, our finding has been validated in a real-world two-layer network obtained from the location-based social network Brightkite.
Introduction
Epidemic spreading models in complex networks have been extensively studied in order to understand the spreading processes of epidemics, worms, failures and information [1-3]. Significant efforts have been devoted to understanding the epidemic spread in a single network, especially the influence of the network topology [4-7]. More than one disease could coexist and interact [8,9]. However, real-world networks are not isolated but instead are interconnected and interdependent on each other [10-12]. The function and behavior of nodes not only depend on the network they are located in, but also rely on other networks. Epidemic spreading, as well as other stochastic processes such as opinion interaction, in interconnected and interdependent networks has been explored extensively since 2010 [13-20]. In the susceptible-infected-susceptible (SIS) epidemic model, each node is at any time either susceptible or infected, and can infect its neighbors only if it is infected. The recovery (curing) process of each infected node is an independent Poisson process with a recovery rate δ. Each infected agent infects each of its susceptible neighbors with a rate β, which is also an independent Poisson process. The ratio τ = β/δ is the effective infection rate. A phase transition has been observed around a critical point τ_c in a single network. When τ > τ_c, a non-zero fraction of agents will be infected in the meta-stable state, whereas if τ < τ_c, the infection rapidly disappears [27,28].
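To make this phase transition concrete, the sketch below simulates discrete-time SIS on an Erdős-Rényi graph and compares the meta-stable prevalence below and above the classical mean-field threshold estimate τ_c ≈ 1/λ_max(A). This is our own toy code, not the simulation used in this paper.

```python
# SIS phase transition sketch: discrete-time simulation on an ER graph.
# Per-step infection probability beta*dt per infected neighbor is a
# small-dt approximation of the continuous-time Poisson processes.
import numpy as np
import networkx as nx

rng = np.random.default_rng(1)
G = nx.gnp_random_graph(500, 8 / 499, seed=1)      # ER, mean degree ~8
A = nx.to_numpy_array(G)
tau_c = 1.0 / np.linalg.eigvalsh(A).max()          # mean-field threshold

def sis_prevalence(tau, delta=1.0, dt=0.02, steps=5000):
    beta = tau * delta
    infected = rng.random(len(A)) < 0.1            # 10% initially infected
    for _ in range(steps):
        pressure = A @ infected                    # infected neighbors
        infect = (~infected) & (rng.random(len(A)) < beta * dt * pressure)
        recover = infected & (rng.random(len(A)) < delta * dt)
        infected = (infected | infect) & ~recover
    return infected.mean()

for tau in (0.5 * tau_c, 2.0 * tau_c):
    print(f"tau = {tau/tau_c:.1f} tau_c -> prevalence ~ {sis_prevalence(tau):.2f}")
```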
Interacting epidemic and awareness spread model
We consider a two-layer network which describes two types of relations among the same set of individuals. The bottom layer is a physical contact network, in which an epidemic spreads according to the SIS model. The infection rate along each link is β₂ and the recovery rate of each node is δ₂. The upper layer is a communication network in which awareness information propagates in the same way as in the SIS model. An aware node informs each of its unaware neighbors, who become aware with rate β₁, whereas an aware node becomes unaware with rate δ₁. We call this awareness spreading process the unaware-aware-unaware (UAU) process, analogous to the SIS model in real epidemics.
The relative time scale of the UAU information propagation with respect to the SIS epidemic spread can be controlled by scaling both the spreading rate and the recovery rate of the UAU process as γβ₁ and γδ₁, respectively. The relative time scale of UAU is thus characterized by the scaling parameter γ. The larger the time scale γ is, the faster the UAU process is, which represents the case when individuals are more frequently involved in the information propagation in the communication network. The two spreading processes, SIS and UAU, on the two layers interact with each other. When an individual is infected, he or she becomes aware of the epidemic with rate γ^b, specified below. An aware individual is alert to the epidemic to the extent that (s)he would like to inform his/her friends via the communication network, such as by posting a message or calling friends. Specifically, we assume that the time for an infected node to become aware of the epidemic by itself is an exponential random variable with rate γ^b. This process of becoming aware of the epidemic due to the node itself getting infected is called the 'injection' of information from the physical contact layer to the communication network layer. The rate of this injection is intuitively related to the relative time scale γ of the information propagation: frequent usage of a communication network allows fast injection of information. For example, if people use an online social network every day, the delay of the information injection is smaller compared to the case when people use another communication network once per week. Moreover, frequent usage of a communication network implies significant social impact of the network, which motivates possibly even faster injection of information. Without loss of generality, we consider here the injection rate as a polynomial function γ^b of the time scale γ, where b denotes the polynomial exponent. Note that each time a node gets infected, it may introduce maximally one injection. An injection may happen only if (a) the injection happens before the node recovers from the epidemic, i.e. the injection delay is smaller than the recovery time for the node to become susceptible; (b) the node is unaware of the epidemic at the moment when it gets infected; and (c) the injection happens before the node becomes aware due to any of its aware neighbors. Conversely, when an individual is aware of the epidemic, (s)he takes precautions which reduce the infection rate of this individual to αβ₂, where 0 < α < 1. The state transition diagram of our Markovian interacting epidemic and awareness spread (IEAS) model is shown in figure 1. Our model differs from the one proposed in [26]: (i) We introduce the relative time scale γ of the information spread in the communication network with respect to the epidemic spread in the physical contact network. (ii) Our model is more general in the sense that it may take time (the injection delay) for an infected node to become aware (post information), in contrast to becoming aware immediately upon getting infected, as assumed in [26]. The special case of our model where each node becomes aware immediately after getting infected is studied in section A.4 using the two analytical approaches. (iii) In our model, the recovery process for an aware node to become unaware starts immediately once the node becomes aware, independently of whether the node is infected or not, whereas [26] assumes that an aware node can start the recovery process to become unaware only after the node becomes susceptible.
Our model is motivated by the fact that an individual may lose awareness of the epidemic and stop informing others via the communication network after some time, even though (s)he is still infected, because (s)he may get bored and/or no longer have the energy to continuously inform others.
Two-layer network construction
In this paper, we consider two-layer networks where both layers are generated from the same network model, either the Erdős-Rényi random network model or the scale-free network model. The Erdős-Rényi random network is one of the most studied random network models that allows many problems to be treated analytically [29,30]. To generate an Erdős-Rényi random network with N nodes and average degree E[D], we start with N nodes and place each link between two nodes chosen at random among the N nodes, until a total number L = N·E[D]/2 of links has been placed.

Figure 1 (caption). Activities along the links that could trigger the transition of states are: infected (a node gets infected by a neighbor in the physical contact network; the infection rate depends on whether the node is aware of the disease or not; the rate indicated is the infection rate per infected neighbor), recover, informed (a node becomes aware due to the contagion of an aware neighbor in the communication network; the rate is the contagion rate per aware neighbor), forget (an aware node becomes unaware of the disease), inject (a node becomes aware because of its own infection by the disease).
We use the configuration model to generate scale-free networks having a power-law degree distribution Pr[D = k] = c·k^(−λ), as observed in many real-world networks [31-34]. Firstly, we generate a degree sequence for N nodes following the power-law degree distribution c·k^(−λ). Given the degree sequence, we generate a random network according to the configuration model: we assign each node as many 'stubs' as its degree; afterwards, we randomly choose two spare stubs from two different nodes which are not yet connected, and connect them with a link until all stubs are connected. In this paper, we consider N = 1000 and λ = 2.5.
Besides the degree distribution of the two layers, the overlap in links between the two layers may affect the epidemic spreading [35,36]. Hence, we control the extent of overlap when generating the multi-layer networks. In order to generate a two-layer ER network with a fraction f of overlapping links, we first generate f·L random links that exist on both layers, then randomly generate the rest of the links on the two layers separately under the constraint that links which exist in one layer do not appear in the other layer. We consider the two extreme cases, i.e. f = 0 and f = 1. A two-layer SF network with overlap f = 1 can be constructed by generating a one-layer scale-free network with the configuration model and copying all its links to the other layer. A two-layer SF network with overlap f = 0 can be obtained by generating the degree sequence for each layer independently and afterwards constructing each layer using the configuration model. Since the scale-free networks are sparse, the two independently generated layers hardly overlap, leading to f ≈ 0.
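A possible implementation of this overlap-controlled ER construction is sketched below. The function name and defaults are ours; the recipe follows the description above.

```python
# Sketch: two ER-like layers sharing a fraction f of links, with
# non-shared links forbidden from appearing in the other layer.
import random
import networkx as nx

def two_layer_er(n, mean_degree, f, seed=0):
    random.seed(seed)
    L = n * mean_degree // 2                       # links per layer
    def rand_edge():
        i, j = random.sample(range(n), 2)
        return (min(i, j), max(i, j))
    shared = set()
    while len(shared) < round(f * L):              # links placed on both layers
        shared.add(rand_edge())
    layer = [set(shared), set(shared)]
    for a, b in ((0, 1), (1, 0)):                  # fill the remaining links,
        while len(layer[a]) < L:                   # rejecting non-shared links
            e = rand_edge()                        # that sit in the other layer
            if e in layer[b] and e not in shared:
                continue
            layer[a].add(e)
    g1, g2 = nx.Graph(), nx.Graph()
    for g, edges in ((g1, layer[0]), (g2, layer[1])):
        g.add_nodes_from(range(n))
        g.add_edges_from(edges)
    return g1, g2

g1, g2 = two_layer_er(1000, 8, f=0.0, seed=42)
print(len(set(g1.edges()) & set(g2.edges())), g1.number_of_edges())  # 0 overlap
```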
Simulating the IEAS model
For each simulation with a given set of parameters with respect to the two spreading processes and the multi-layer network structure, we generate 200 realizations. Within each realization, we first construct a two-layer network with the specified network model (ER or SF) and the overlap percentage f = 0 or f = 1. Initially, 10% of the nodes are randomly selected to be infected. Afterwards, we simulate the two continuous-time interacting spreading processes of the epidemic and the awareness in discrete time, with time steps of small interval Δt = 0.01. Within each time step, the probability that a susceptible node gets infected by an infected neighbor is β₂Δt if it is unaware (αβ₂Δt if it is aware), and the probability that an infected node recovers is δ₂Δt. A similar discretization holds for the UAU process. Once an unaware node gets infected, this node has a probability γ^b·Δt in each following step to become aware due to the injection, as long as the node has not recovered to be susceptible and has not yet become aware due to its aware neighbors in the UAU process. The two interacting processes continue until both reach their meta-stable states, where the fraction of infected nodes and the fraction of aware nodes remain constant. For each set of parameters, the fraction of infected nodes and the fraction of aware nodes in the meta-stable state are obtained as averages over all the realizations.
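To make the discretization concrete, the sketch below implements one synchronous time step Δt of the coupled dynamics. It is our own illustration (states stored as Python lists of booleans, adjacency lists a1/a2), not the authors' simulation code.

```python
# One Delta-t step of the coupled IEAS dynamics. Per node we track:
# infected (SIS state), aware (UAU state), and can_inject (True while
# an infected node may still self-inject awareness).
import random

def ieas_step(infected, aware, can_inject, a1, a2,
              beta1, delta1, beta2, delta2, alpha, gamma, b, dt):
    n = len(infected)
    new_inf, new_aw, new_ci = infected[:], aware[:], can_inject[:]
    for i in range(n):
        # --- SIS on the physical contact layer (a2) ---
        if infected[i]:
            if random.random() < delta2 * dt:            # recover
                new_inf[i], new_ci[i] = False, False
        else:
            rate = alpha * beta2 if aware[i] else beta2  # precaution if aware
            for j in a2[i]:
                if infected[j] and random.random() < rate * dt:
                    new_inf[i] = True
                    new_ci[i] = not aware[i]             # injectable only if
                    break                                # unaware when infected
        # --- UAU on the communication layer (a1), sped up by gamma ---
        if aware[i]:
            if random.random() < gamma * delta1 * dt:    # forget
                new_aw[i] = False
        else:
            for j in a1[i]:
                if aware[j] and random.random() < gamma * beta1 * dt:
                    new_aw[i], new_ci[i] = True, False   # informed by neighbor
                    break
            # injection: infected, unaware, and not yet injected/informed
            if not new_aw[i] and infected[i] and can_inject[i]:
                if random.random() < (gamma ** b) * dt:
                    new_aw[i], new_ci[i] = True, False
    return new_inf, new_aw, new_ci
```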
We focus on one representative set of parameters with respect to the IEAS model throughout the paper to illustrate our results, with β₂ being the control parameter in the range [0, 1]. Note that many other parameter sets have also been tested and lead to similar observations. The relative awareness spreading rate is chosen here to be above the critical spreading threshold of the communication network layer, such that the epidemic spreading can possibly be reduced via awareness. Various values of the time scale γ will be considered within the range [0.125, 8], and the injection-rate scaling exponent κ will be varied within [0.5, 3]. The complexity of simulating interacting processes operating at different time scales is significantly higher than that of simulating a single process. Our simulations contain three types of processes at different time scales: the SIS epidemic spread in the physical contact network layer, the UAU information propagation in the communication network layer, and the information injection between the two layers. Take the case γ ≪ 1 as an example, i.e. the information propagation is far slower than the epidemic spread. The sampling time step Δt has to be selected based on the fastest dynamics, i.e. the epidemic spread, such that within each time step no multiple events happen. The time for the simulation to converge to the meta-stable state is long due to the slow dynamics of the information propagation and the possibly even slower information injection between the two layers.
Theoretical analysis: individual-based mean field approximation
The probability that each node is infected by the SIS epidemic at any time, in a single network as well as in interconnected networks, has been derived via the N-intertwined mean-field approximation, which takes the network structure into account [16,37,38]. In order to take into account not only the two-layer network structure but also the interacting epidemic and awareness spread, we develop here the individual-based mean field approximation (IBMFA) to derive the probability that each node is infected or aware in our IEAS model. Recall that in the SIS epidemic spread, the rate to infect an unaware and an aware node is β₂ and α·β₂ respectively, and the recovery rate is δ₂. In the UAU information propagation, the spreading rate is β₁ and the recovery rate is δ₁, both scaled by the time scale γ, and the rate for an infected node to become aware through injection is γ^κ. We thus suggest the following IBMFA equations:

du_i(t)/dt = −γδ₁·u_i(t) + γβ₁·(1 − u_i(t))·Σ_j a_ij·u_j(t) + γ^κ·v_i^II(t)

dv_i^S(t)/dt = −β₂·[1 − (1 − α)·u_i(t)]·v_i^S(t)·Σ_j b_ij·(v_j^II(t) + v_j^IN(t)) + δ₂·(v_i^II(t) + v_i^IN(t))

dv_i^II(t)/dt = β₂·(1 − u_i(t))·v_i^S(t)·Σ_j b_ij·(v_j^II(t) + v_j^IN(t)) − v_i^II(t)·(δ₂ + γ^κ + γβ₁·Σ_j a_ij·u_j(t))

where a_ij and b_ij are the adjacency matrices of the communication layer and the physical contact layer respectively, u_i(t) is the probability that node i is aware, v_i^S(t) is the probability that node i is susceptible, v_i^II(t) is the probability that node i is infected and could inject information to the communication network, and v_i^IN(t) = 1 − v_i^S(t) − v_i^II(t) is the probability that node i is infected but could not inject the information.
The time derivative of the probability u_i(t) for a node being aware of the epidemic is determined by three processes: (a) when the node is aware of the epidemic, it recovers to unaware with rate γδ₁; (b) when the node is unaware, it gets aware via any of its aware neighbors in the communication network with rate γβ₁; and (c) when the node is in state II, it injects the awareness information with rate γ^κ.
The derivative of the probability v_i^S(t) that a node is susceptible depends on the following two competing processes: (a) when the node is susceptible, it gets infected by any of its infected neighbors in the physical contact network with rate β₂ if the node is unaware of the epidemic and with rate α·β₂ if the node is aware; and (b) if the node is infected, i.e. in either state II or IN, it recovers to susceptible with rate δ₂.
Similarly, the probability v_i^II(t) that a node is infected and able to inject the information into the communication network decreases through the following processes: when the node is in state II, (a) it recovers to susceptible, (b) it injects the awareness information, or (c) it becomes aware because of the information propagation from its aware neighbors. The probability v_i^II(t) increases when the node is susceptible and unaware and gets infected by any of its infected neighbors in the physical contact network.
We are interested in the meta-stable state, in which the time derivatives above are zero. The exact steady state is the susceptible and unaware state for all nodes, which is the only absorbing state of the Markovian IEAS process. However, this absorbing state is reached only after an unrealistically long time for realistic network sizes [39]. We are thus interested in the meta-stable state, in which the system stays for a long time, which is reached faster, and which better characterizes real epidemics. In general, mean-field approximations assume that random variables are uncorrelated [40]. The IBMFA is derived under the assumption that the infection states (infected or susceptible) of two neighboring nodes in the physical contact network are uncorrelated, that the awareness states (aware or unaware) of two neighboring nodes in the communication network are uncorrelated, and that the infection state and the awareness state of the same node are uncorrelated, although the injection has been taken into account. These types of correlations, especially the correlation between the infection state and the awareness state of the same node, do exist, which explains why IBMFA is not exact.
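A numerical sketch of driving the IBMFA equations to their meta-stable state by forward-Euler integration; the equation form follows our reconstruction above, and the seeding and run-length choices are ours, not the paper's.

import numpy as np

def ibmfa_metastable(A, B, p, t_max=2000.0, dt=0.01):
    """Forward-Euler integration of the reconstructed IBMFA equations.
    A, B: adjacency matrices of the communication / physical-contact layers."""
    N = A.shape[0]
    u = np.zeros(N)                 # P(aware)
    vII = np.full(N, 0.1)           # P(infected, can inject): 10% seeding
    vIN = np.zeros(N)               # P(infected, cannot inject)
    g, inj = p['gamma'], p['gamma'] ** p['kappa']
    for _ in range(int(t_max / dt)):
        vS = 1.0 - vII - vIN
        awareness_pressure = A @ u            # expected aware neighbors
        infection_pressure = B @ (vII + vIN)  # expected infected neighbors
        du = (-g * p['delta1'] * u
              + g * p['beta1'] * (1 - u) * awareness_pressure
              + inj * vII)
        dvII = (p['beta2'] * (1 - u) * vS * infection_pressure
                - vII * (p['delta2'] + inj + g * p['beta1'] * awareness_pressure))
        dvS = (-p['beta2'] * (1 - (1 - p['alpha']) * u) * vS * infection_pressure
               + p['delta2'] * (vII + vIN))
        u = np.clip(u + dt * du, 0.0, 1.0)
        vII = np.clip(vII + dt * dvII, 0.0, 1.0)
        vS = np.clip(vS + dt * dvS, 0.0, 1.0)
        vIN = np.clip(1.0 - vS - vII, 0.0, 1.0)
    return u.mean(), (vII + vIN).mean()       # rho_1 (aware), rho_2 (infected)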
Theoretical analysis: microscopic Markov chain approach
The microscopic Markov chain approach (MMCA) was proposed in [41,42] and has been applied to the interacting processes on two-layer networks proposed in [26]. Here we derive the MMCA for our IEAS model. Later, we will compare MMCA and IBMFA, the two seemingly most advanced analytical approaches so far, with numerical simulations.
The MMCA examines the discrete-time evolution of the probability that each node i is in each of the five possible states: unaware and susceptible (US); unaware, infected and unable to inject the awareness information (UIN); unaware, infected and able to inject the awareness information (UII); aware and susceptible (AS); and aware and infected (AI), in which case there is no need to inject the information since the node is already aware of it.
Within any time step t of interval Δt, we define the probability that node i is NOT informed by any neighbor in the communication network as r_i(t), the probability that an unaware node i is NOT infected by any neighbor in the physical contact network as q_i^U(t), and the probability that an aware node i is NOT infected by any neighbor as q_i^A(t). They follow

r_i(t) = Π_j [1 − a_ij·p_j^A(t)·β₁*]

q_i^U(t) = Π_j [1 − b_ij·p_j^I(t)·β₂*]

q_i^A(t) = Π_j [1 − b_ij·p_j^I(t)·α·β₂*]

where p_j^A(t) is the probability that node j is aware at time t, p_j^I(t) is the probability that node j is infected at time t, and the probability that a node gets informed by an aware neighbor within a time step of interval Δt follows β₁* = γβ₁·Δt; similarly, we define the per-step infection probabilities β₂* = β₂·Δt and α·β₂*, the forgetting and recovery probabilities δ₁* = γδ₁·Δt and δ₂* = δ₂·Δt, and the injection probability γ* = γ^κ·Δt. The time interval Δt should be small so that this discrete-time approach well approximates the continuous-time IEAS model. For consistency with the simulations, we use the interval time Δt = 0.01. The MMCA equations describe the time evolution of the probability that each node is in each of the five possible states (footnote 5), where γ* is the probability that the awareness information is injected from the physical contact network into the communication network layer within a time step, i.e. that the node becomes aware because it gets infected. The normalization condition that the five state probabilities of node i sum to one should be satisfied at any time t for any node i. In the meta-stable state, the probability that each node stays in each state remains the same over time. The meta-stable-state fraction of aware nodes ρ₁ and fraction of infected nodes ρ₂ can then be derived as

ρ₁ = (1/N)·Σ_i [p_i^AS(t) + p_i^AI(t)]

ρ₂ = (1/N)·Σ_i [p_i^UIN(t) + p_i^UII(t) + p_i^AI(t)]
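A vectorized sketch of the per-node non-exposure probabilities just defined, with dense 0/1 adjacency matrices; the names and shapes are our choices.

import numpy as np

def not_exposed_probs(A, B, pA, pI, p, dt=0.01):
    """r_i, q_i^U, q_i^A for one MMCA step; A/B are (N, N) 0/1 arrays,
    pA/pI are the current awareness/infection marginals of the nodes."""
    beta1_star = p['gamma'] * p['beta1'] * dt
    beta2_star = p['beta2'] * dt
    # products over neighbors j: prod_j (1 - a_ij * pA_j * beta1*), etc.
    r = np.prod(1.0 - A * (pA * beta1_star), axis=1)
    qU = np.prod(1.0 - B * (pI * beta2_star), axis=1)
    qA = np.prod(1.0 - B * (pI * p['alpha'] * beta2_star), axis=1)
    return r, qU, qA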
Simulation, IBMFA and MMCA comparison
We compare the average fraction of infected nodes ρ₂ obtained by IBMFA, MMCA and simulations, respectively, to estimate the precision of the two analytical approaches. Figures 2 and 3 show that the two theoretical approaches approximate the simulation results relatively well. Moreover, IBMFA approximates the simulation results slightly better around the epidemic threshold. For infection rates β₂ above the epidemic threshold, MMCA approximates the simulations better than IBMFA when γ is small, and IBMFA approximates the simulations better as γ increases. These observations are more evident in two-layer ER networks and in larger networks, e.g. N = 10000, as shown in the appendix (figures A1 and A2). Cai et al in [43] already noticed the weak performance of MMCA around the epidemic threshold. Hence, IBMFA and MMCA may serve as complementary approaches. In the following sections, we will use the simulation results for discussion instead of the two approximations.

Footnote 5: In the MMCA approach, we consider that once a node gets infected in a time step, there is a chance that it injects the information starting from this time step instead of the next time step. In this case, a large injection rate γ^κ corresponds to the immediate injection scenario as assumed in [25,26].
Effect of time scale γ
In this section, we explore how the time scale γ influences the mitigation effect, i.e. the fraction of infected nodes in the meta-stable state. As seen in figure 4, the average fraction of infected nodes in the meta-stable state can indeed be significantly reduced with the help of awareness information compared to the case when no awareness information is propagated (see the dotted line, the so-called upper bound). This upper bound corresponds to the case when people do not use the social communication network. Moreover, the effect of γ is non-trivial. When κ = 1, the fraction of infected nodes decreases monotonically with decreasing γ for a given β₂, which implies that a smaller γ better mitigates the epidemic. However, this is not the case when κ = 2, where there seems to be an optimal γ that mitigates the epidemic spreading, although determining its value is not straightforward.
Hence, we explore further, for a given β₂, which relative time scale γ of information propagation best mitigates the epidemic for various values of κ. We consider the specific case of β₂ = 1, where the effect of γ differs most evidently, as suggested in figure 4. Figure 5 suggests the existence of a non-trivial optimal γ that minimises the average fraction of infected nodes for a given κ. For κ = 2 the optimal γ seems to be close to 0.5, while for κ = 3 the optimal γ is close to 1. We observe similar results in two-layer ER networks.
We would like to understand how and why the optimal γ changes with κ, i.e. with the injection rate γ^κ. This would provide essential insight into the question: operating at which time scale could the information spread best mitigate the epidemic for a given κ that characterizes the relation between the injection rate and the time scale? We find (figure 5) that the optimal γ tends to decrease as κ decreases, even though the precision of the optimal γ is limited here due to the complexity of simulating interacting processes operating at different time scales, as discussed in section 3.2. The same trend can be captured by both analytical approximations, IBMFA and MMCA. Furthermore, we explain this phenomenon using analytical and physical interpretations. Without injections from the physical layer, the UAU process corresponds to the SIS model in a single network layer; in this case, the average fraction ρ₁ of aware nodes in the meta-stable state is solely determined by β₁/δ₁ for a given network topology. Injections are triggered by the infections of nodes in the physical contact network. Consider the epidemic spreading alone in the physical contact network without the awareness information. In this case, the average fraction of infected nodes depends on β₂/δ₂. The frequency at which nodes get infected relative to the information spreading is proportional to 1/γ. An injection occurs if a node is unaware at the moment it gets infected and it gets the injection before it recovers to be susceptible again and before it becomes aware due to its aware neighbors. When an unaware node gets infected, the probability that an injection happens, i.e. before it recovers and before it gets aware via its neighbors, is approximately γ^κ/(γ^κ + δ₂ + γβ₁·c), where c is the average number of aware neighbors of a node. Hence, the frequency of injection relative to the information spread is approximately proportional to γ^(κ−1)/(γ^κ + δ₂ + γβ₁·c), the maximum of which is obtained at smaller γ as κ decreases and is obtained at γ = 0 when κ ≤ 1. A large injection frequency leads to a higher fraction of aware nodes, which in turn results in a lower fraction of infected nodes. Hence, the optimal γ that best mitigates the epidemic decreases as κ decreases. As the time scale γ decreases, the relative frequency at which nodes get infected with respect to the information spread increases; however, the probability that an infected node could inject this awareness information, i.e. get aware before it recovers and before it gets aware via aware neighbors, becomes smaller. Both effects contribute to the non-trivial optimal time scale when κ > 1.
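A quick numeric check of the relative injection frequency reconstructed above; the parameter values are hypothetical (not from the paper) and serve only to show how the optimum moves with κ.

import numpy as np

delta2, beta1, c = 1.0, 0.4, 3.0   # hypothetical rates / aware-neighbor count

def rel_injection_freq(gamma, kappa):
    """gamma**(kappa-1) / (gamma**kappa + delta2 + gamma*beta1*c)."""
    return gamma ** (kappa - 1) / (gamma ** kappa + delta2 + gamma * beta1 * c)

gammas = np.linspace(0.01, 8.0, 2000)
for kappa in (1.0, 2.0, 3.0):
    g_opt = gammas[np.argmax(rel_injection_freq(gammas, kappa))]
    print(f"kappa = {kappa}: optimal gamma ~ {g_opt:.2f}")
# For kappa <= 1 the maximum sits at the smallest gamma in the grid,
# mirroring the claim that the optimum moves to gamma -> 0 when kappa <= 1.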
It takes on average 11.4 days (the incubation period) for a susceptible individual to develop the infectious disease Ebola [44]. Information propagation in online social networks is fast, since more than 70% of users use, e.g., Facebook daily [45]. However, the information spread in other communication networks, like the mobile phone network, could be relatively slower because these networks are used less frequently. Which communication network is best for epidemic mitigation depends on the speed at which information is injected from the physical contact network into the communication network. Our result shows that whether the information is propagated at a faster time scale (γ > 1) or a slower time scale (γ < 1), a fast information injection from the physical contact network to the communication network (e.g. a small κ when γ < 1 and a large κ when γ > 1) is beneficial for the epidemic mitigation.
Validation in real-world network
Finally, we explore the effect of the time scale γ on the epidemic spreading in a real-world two-layer network. We consider the two-layer network obtained from the location-based social network Brightkite, where users shared their locations by checking in. One layer is the online friendship network and the other is the physical contact network. We consider the users in the dataset that have been to New York at least once during the observation period April 2008-October 2010 [46]. Two users are assumed to be connected in the physical contact network layer if there is at least one day on which their physical distance was less than 200 meters. The largest connected component of the physical contact network, with 1967 nodes, is considered, and the friendship relations among these nodes form the communication network layer where information propagates. Both layers follow a power-law degree distribution. The communication network has 9284 links, the physical contact network contains 11857 links, and the two layers overlap in 767 links.
Our IEAS model is deployed on this real-world network with various parameters. As shown in figure 6, we observe in the real-world two-layer network results similar to those in the network models: a non-trivial optimal time scale of the information propagation may exist, and the optimal γ increases as the injection-rate scaling exponent κ increases.
Conclusions
The participation of individuals in several networks, such as the physical contact network and the communication network, allows the dynamic processes deployed on these networks to interact, introducing new possibilities for epidemic mitigation. In this work, we propose a generalised interacting epidemic and awareness spreading model in which becoming infected may make an individual aware of the epidemic, whereas an individual aware of the epidemic reduces its rate of becoming infected by, e.g., avoiding contact with infected friends or wearing masks. We find that the epidemic spreading can indeed be mitigated by using the awareness information propagated in the communication network. Importantly, we discovered how the performance of the mitigation is influenced by the time scale of the awareness propagation relative to the epidemic spreading. Depending on how fast an infected node becomes aware, the optimal mitigation is achieved at a time scale γ that is not necessarily zero nor infinity, which contradicts the intuition that faster information spread always better mitigates the epidemic. We developed the IBMFA and MMCA to theoretically analyze such interacting processes on a two-layer network. Our observation is explained using both analytical and physical interpretations and is validated in a real-world physical contact-communication network. Our results imply that optimal mitigation is achieved when the time scale of the information spreading is neither too fast, so that the awareness information injected due to nodes' infections is not diluted, nor too slow, so that the awareness information can still be injected successfully before the infected nodes recover or become aware via aware neighbors. Given a communication network and its corresponding time scale, a somewhat faster information injection from the physical contact network to the communication network, i.e. an infected node becoming aware quickly, is in general beneficial for the mitigation.
The effect of the various features of the two-layer network topology on the epidemic spreading is explored in the appendix. We find that the mitigation tends to perform better when the two layers of the network overlap more, i.e. when a larger fraction of node pairs are connected in both layers. The optimal time scale decreases as the density, or equivalently the average degree, of the two-layer networks increases. This initial work points out the importance of further exploring real-world user behavior: how long it takes for a user to share information about an epidemic after (s)he gets infected, and whether this time delay depends on the social network used.
Our analytical approaches can also be applied to the special, and actually simpler, case in which the awareness spreads infinitely fast (γ → ∞) in the communication network. Since γ is infinitely fast, the UAU process reaches its steady state instantly. The IBMFA then reduces to a pair of equations in u_i(t), the probability that node i is aware of the epidemic at time t, and v_i(t), the probability that node i is infected at time t. At any time t, the rate at which node i gets infected by an infected neighbor is α·β₂ with probability u_i and β₂ with probability 1 − u_i. In this case, the upper bound, i.e. the worst possible mitigation, can be obtained via the standard single-layer SIS equations on the physical contact network alone.

Figure A1. Comparison of the average fraction of infected nodes in the meta-stable state obtained by IBMFA, MMCA and simulations, respectively, in two-layer Erdős-Rényi random networks. The injection rate is γ^κ with κ = 2.
Inhibition of age-related cytokines production by ATGL: a mechanism linked to the anti-inflammatory effect of resveratrol.
Ageing is characterized by the expansion and the decreased vascularization of visceral adipose tissue (vAT), disruption of metabolic activities, and decline of the function of the immune system, leading to chronic inflammatory states. We previously demonstrated that, in vAT of mice at an early stage of ageing, adipocytes mount a stress resistance response consisting in the upregulation of ATGL, which is functional in restraining the production of inflammatory cytokines. Here, we found that, in the late phase of ageing, such an adaptive response is impaired. In particular, 24-month-old mice and aged 3T3-L1 adipocytes display impaired expression of ATGL and its downstream PPARα-mediated lipid signalling pathway, leading to upregulation of TNFα and IL-6 production. We show that the natural polyphenol compound resveratrol (RSV) efficiently suppresses the expression of TNFα and IL-6 in an ATGL/PPARα-dependent manner. Indeed, adipocytes downregulating ATGL do not show restored PPARα expression and display elevated cytokine production. Overall, the results highlight a crucial function of ATGL in inhibiting age-related inflammation and reinforce the idea that RSV could represent a valid natural compound to limit the onset and/or the exacerbation of age-related inflammatory states.
Introduction
Immunometabolism is an emerging field of investigation including immunology and the biochemical pathways that govern metabolism. Accelerating interest in this area is being fuelled by increased lifespan and the relatively recent knowledge that ageing affects the immune system and promotes inflammation that is associated with metabolic dysfunctions [1]. Ageing is associated with an increase in visceral obesity in men and women, which has been claimed as the prominent cause of systemic metabolic perturbations and chronic inflammation [2]. In particular, with ageing, vAT expands and becomes hypovascularized, and resident adipocytes quickly release proinflammatory cytokines in response to such a stressful condition [3][4][5]. Among the produced proinflammatory molecules, tumour necrosis factor α (TNFα) and interleukin 6 (IL-6) have been most intensively studied for their involvement in inducing systemic metabolic perturbations [4].
Polyphenols are among the most promising natural compounds to combat metabolic syndromes including age-related inflammatory states [6]. Many polyphenols are efficient antioxidant and anti-inflammatory molecules by virtue of their ability to directly scavenge inflammation-derived radicals, to increase antioxidant expression, and to block inflammatory cytokine production by modulating the activity of specific transcription factors [6]. In adipose tissue, the polyphenol resveratrol (RSV) suppresses both systemic and adipose tissue inflammation and has the potential to improve age-associated metabolic disorders and to increase insulin sensitivity [7,8]. Moreover, RSV inhibits triglyceride accumulation by suppressing adipocyte differentiation [9] and by stimulating lipid catabolism via the induction of adipose triglyceride lipase (ATGL) [10]. ATGL is one of the key factors involved in adipose tissue function. It was first identified in white adipose tissue and represents the rate-limiting enzyme of triglyceride lipolysis [11]. Moreover, fatty acids (FAs) liberated by ATGL have been implicated in lipid signalling mediated by the family of peroxisome proliferator-activated receptors (PPARs) [12]. PPARs are ligand-activated nuclear receptors, which can be activated by FAs and have been mainly studied in the transcriptional regulation of genes involved in glucose and lipid metabolism [13]. ATGL-mediated FA/PPAR signalling was demonstrated to be essential to maintain mitochondrial oxidative metabolism and, in vAT, orchestrates the stress-resistance adaptation of adipocytes to limited nutrient delivery, thus counteracting cell death and the onset of the inflammatory response [5]. ATGL activity changes during ageing, and it has been suggested that its expression levels are directly related to the inflammatory status [5,14,15]. More precisely, ATGL downregulation is described in several age-related metabolic disorders (i.e., insulin resistance-related states) characterized by an increased level of inflammatory mediators [16,17]. Importantly, PPARs, including PPARα, play a very important role in the regulation of inflammatory responses. In particular, PPARα transactivates or transrepresses transcription factors including NF-κB [18]. However, even though cell metabolism and inflammation are known to be tightly regulated by lipid signalling, the mechanism by which ATGL modulates the production of inflammatory mediators is unclear. In particular, whether FA/PPARα signalling is involved in the anti-inflammatory effect of ATGL has not been fully addressed yet. In the present work we have investigated whether ATGL has a role in orchestrating proinflammatory cytokine production in aged adipocytes and whether the anti-inflammatory activity of RSV is associated with modulation of ATGL.
Mice Treatment.
We housed and sacrificed all mice in accordance with accepted standards of humane animal care and with the approval of the relevant national (Ministry of Welfare) and local (Institutional Animal Care and Use Committee, Tor Vergata University) committees. C57BL/6 male mice were purchased from Harlan Laboratories Srl (Urbino, Italy). For the experiments, 1-, 7-, 14-, and 24-month-old mice were used (n = 3 mice/group). Mice were killed by cervical dislocation; vAT was explanted immediately, frozen on dry ice, and stored at −80 °C.
Cell Lines, Treatments, and Transfection.

3T3-L1 murine preadipocytes and C2C12 murine myoblasts were purchased from the American Type Culture Collection (ATCC) and grown in DMEM supplemented with 10% newborn serum or 10% fetal bovine serum and 1% pen/strep mix (Lonza Sales, Basel, Switzerland). 3T3-L1 and C2C12 cells were seeded at a density of 2 × 10^5 cells per well in 6-well plates and differentiated into adipocytes and myotubes, respectively, as previously reported [9,19]. RSV (Sigma-Aldrich, St. Louis, MO, USA) was solubilized in DMSO and added at a concentration of 100 μM for up to 48 h, a condition that was demonstrated to be effective in selectively inducing ATGL expression in adipocytes [20]. ATGL and scramble siRNAs (Santa Cruz Biotechnology, Dallas, Texas, USA) were transfected by using the DeliverX Plus kit (Affymetrix, Santa Clara, CA, USA) as previously described [21].
RT-qPCR Analysis.
RT-qPCR analysis was carried out as previously described [22]. Briefly, total RNA was extracted using TRI reagent (Sigma-Aldrich). 3 μg of RNA was used for retrotranscription with M-MLV (Promega, Madison, WI, USA). qPCR was performed in triplicate by using validated qPCR primers (BLAST), Ex Taq qPCR Premix (Lonza Sales), and the Real Time PCR LightCycler II (Roche Diagnostics, Indianapolis, IN, USA). mRNA levels were normalized to β-actin mRNA, and relative mRNA levels were determined by using the 2^(−ΔΔCt) method.
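For readers unfamiliar with the 2^(−ΔΔCt) method, a minimal sketch of the calculation follows; the Ct values are hypothetical, not the study's measurements.

def relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Fold change of a target gene, normalized to a reference gene
    (here beta-actin) and expressed relative to a control sample."""
    delta_ct_sample = ct_target - ct_ref              # delta-Ct of the sample
    delta_ct_control = ct_target_ctrl - ct_ref_ctrl   # delta-Ct of the control
    delta_delta_ct = delta_ct_sample - delta_ct_control
    return 2 ** (-delta_delta_ct)

# Hypothetical Ct values: ATGL vs beta-actin in old and young vAT
print(relative_expression(24.1, 17.9, 22.6, 18.0))    # ~0.33-fold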
Statistical Analysis.
The results are presented as means ± S.D. Statistical evaluation was conducted by ANOVA, followed by the Student-Newman-Keuls post hoc test. Differences were considered significant at p < 0.05.
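A sketch of the group comparison described above using SciPy; the expression values are hypothetical, and SciPy offers no Student-Newman-Keuls routine, so only the ANOVA step is shown.

from scipy.stats import f_oneway

# Hypothetical normalized ATGL mRNA levels per age group (n = 3 each)
young = [1.00, 1.12, 0.91]
middle_aged = [2.35, 2.10, 2.62]
old = [1.05, 0.88, 1.10]

f_stat, p_value = f_oneway(young, middle_aged, old)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")  # significant if p < 0.05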
Age-Related Inflammatory Cytokines Are Produced via an ATGL-Dependent Mechanism in Adipocytes.
During ageing, vAT expands and undergoes hypovascularization concomitantly with immunometabolic perturbations [3,5]. In particular, vAT mass peaks at middle age or early old age and then declines substantially in advanced old age [24,25]. The inflammatory state that commonly accompanies ageing, also called "inflammaging", has been proposed to be causative of systemic metabolic perturbation [26]. vAT has been placed at the nexus of the mechanisms and pathways involved in the genesis of age-related inflammatory disorders. We have recently demonstrated that during early ageing ATGL is significantly increased in vAT. ATGL upregulation represents a stress-adaptive response of adipocytes to hypovascularization that is crucial to buffer energetic catastrophe and to prevent cell death and tissue inflammation [5]. The present study was designed to investigate whether the dramatic inflammatory picture typically observed during late ageing [27] could be triggered by the failure of this ATGL-mediated adaptive response. As shown in Figure 1, ATGL protein increased in vAT at the early stages of ageing with respect to 1-month-old mice. However, a decline of ATGL was observed at a later stage of ageing (Figure 1(a)). In particular, the oldest mice (24 months old) had an ATGL protein level comparable to that of 1-month-old mice (Figure 1(a), bottom panel). Moreover, RT-qPCR analysis demonstrated a significant reduction of ATGL mRNA in the oldest mice with respect to young ones, suggesting impaired ATGL expression (Figure 1(b)). We then assessed the degree of inflammation in vAT of the oldest mice with respect to the youngest and found a stronger production of IL-6 mRNA. Notwithstanding, we did not reveal any change in the macrophage marker CD-14 (Figure 1(c)), suggesting that the production of inflammatory cytokines was independent of cell-mediated immune response. This finding is supported by our previous evidence showing that macrophages were not infiltrated in vAT of old mice [5].
To confirm the ability of adipose cells residing in vAT of 24-month-old mice to produce inflammatory cytokines independently of cell immunity, we set up an in vitro "ageing" model of adipocytes by culturing differentiated 3T3-L1 adipocytes for 21 days and compared the mRNA levels of ATGL, TNFα, and IL-6 to those of 3T3-L1 adipocytes after 8 days of differentiation. As shown in Figure 2(a), we detected reduced mRNA levels of ATGL and its downstream target PPARα. This event was associated with an upregulation of TNFα and IL-6 expression in 21-day-old adipocytes compared with 8-day-old adipocytes, thus nicely recapitulating the in vivo results obtained with 24-month-old mice (Figure 1(b)). Therefore, on the basis of these data we can postulate that the failure of the ATGL-mediated stress response observed in 24-month-old mice and 21-day-old adipocytes triggers the production of proinflammatory cytokines. In agreement with this idea, higher levels of inflammatory markers were observed upon ATGL downregulation in vitro and in vAT of ATGL KO mice [5]. The modulatory action of ATGL on tissue inflammation has recently been reported also in cardiac muscle [15]. In particular, a prominent upregulation of different inflammatory markers (e.g., TNFα and IL-6) was observed in steatotic hearts of ATGL KO mice. Thus, we asked whether downregulation of ATGL in cultured skeletal muscle myotubes could also result in enhanced inflammation markers. To this end, we downregulated ATGL in fully differentiated C2C12 myotubes [ATGL(−)] through RNAi. Coherently, ATGL(−) myotubes displayed decreased PPARα and a greater mRNA expression level of TNFα than controls (Figure 2(b)).
FAs liberated by ATGL function as lipid signalling molecules leading to activation of PPARα, which favours the expression of lipid oxidative genes. Moreover, it has been demonstrated that PPARα functions as a repressor of inflammation [12]. Impaired FA/PPARα signalling was observed during ATGL deficiency in vAT and adipocytes [5], and this may initiate a sequela of events that eventually leads to the induction of proinflammatory genes. In support of this assumption, PPARα KO mice show a prolonged inflammatory response [28]. The mechanism involved is that PPARα is a strong repressor of the NF-κB transcription factor and of its downstream inflammatory cytokines in different cell types including adipocytes [28]. Moreover, PPARα is able to inhibit inflammation in several pathological conditions including hepatic steatosis and obesity [29][30][31]. The anti-inflammatory role of PPARα in our system is strongly supported by the analysis of PPARα mRNA in 21-day-old adipocytes and in vAT of 24-month-old mice, which evidenced a significant reduction of PPARα concomitant with the production of inflammatory mediators.
RSV Inhibits the Production of Age-Related Proinflammatory Cytokines in Adipocytes by Upregulating ATGL.
Several studies have suggested that the health benefits of RSV are mediated by its antioxidant capacity [32]. In the context of adipocyte physiology, RSV strongly inhibits adipogenesis [9] by inducing the synthesis of the main nonenzymatic intracellular antioxidant, that is, glutathione [33]. In doing so, RSV buffers the onset of a prooxidant milieu, which is mandatory for adipocyte differentiation [9]. Other findings support RSV efficacy also in reducing the inflammatory response in several tissues [34], such as brain during neurodegenerative processes [35] and adipose tissue upon a high-fat and high-sugar diet [8]. The main anti-inflammatory mechanism of RSV seems to be related to the inhibition of NF-κB-mediated pathways, including the transcription of TNFα and IL-6 [36]. In accordance with Lasa et al. [20], we found that treatment of 8-day-old adipocytes with RSV resulted in strong ATGL protein accumulation at 24 h, which was accompanied by a decrease of phosphoactive NF-κB (Figure 2(c)). To dissect whether the ATGL/FA/PPARα axis could be involved in the anti-inflammaging action of RSV, we analysed the levels of ATGL and PPARα after RSV administration in 21-day-old adipocytes. We found a significant upregulation of both ATGL and PPARα expression (Figure 2(a)). Coherently, a simultaneous decrease of TNFα and IL-6 mRNA expression was observed after RSV treatment (Figure 2(a)). As reported in the literature, stress stimuli such as LPS or nutrient starvation upregulate ATGL [5,37]. Moreover, ATGL KO mice challenged with LPS display enhanced inflammation in liver compared to WT, and show increased mortality and torpor, events that have been attributed to impaired PPARα activity [14].
Next, given that RSV has an anti-inflammaging action that nicely correlates with the upregulation of ATGL and PPARα, we asked whether RSV could also restrain the increase of TNFα and IL-6 caused by ATGL downregulation through RNAi in 8-day-old adipocytes. As reported in Figure 3(a), ATGL-lacking cells [ATGL(−)] displayed impaired PPARα expression and higher expression of TNFα and IL-6 than controls, in line with what we observed in C2C12 myotubes (Figure 2(b)) and previously revealed in primary adipocytes and mouse embryonic fibroblasts [5]. However, contrary to what we observed in 21-day-old adipocytes, RSV was unable to revert the induction of TNFα and IL-6 in ATGL(−) adipocytes; rather, it unexpectedly further upregulated their mRNA expression, indicating an enhancement of NF-κB activity (Figure 3(a)). To confirm the proinflammatory role of RSV in ATGL(−) adipocytes, we carried out an immunoblotting analysis of the transcriptionally phosphoactive form of NF-κB (pNF-κB) and its inhibitory partner IκB. Figure 3(b) shows an increased pNF-κB level in ATGL(−) adipocytes treated with RSV that was paralleled by a decrease of IκB (Figure 3(b)).
RSV has been proposed as a plausible gerosuppressant natural compound to overcome age-related metabolic perturbations and chronic inflammatory states [8,38,39]. Interestingly, in rhesus monkeys, RSV administration reduces NF-κB activation in high-fat-diet-fed animals, suppressing inflammation in vAT with beneficial action on the metabolic profile [8]. Our data point out that, in conditions of irreversible ATGL inhibition (silenced ATGL expression), RSV does not function as an anti-inflammatory agent, the ATGL-mediated FA/PPARα signalling axis being strongly affected. Thus, we can speculate that the anti-inflammatory potential of RSV is strongly dependent on the ATGL/FA/PPARα pathway (Figure 4).
Overall, these findings further support the anti-inflammatory role of ATGL in adipocytes and suggest that RSV, being a powerful enhancer of ATGL expression/activity, is able to reinforce the anti-inflammatory FA/PPARα signalling [5,15]. Given that RSV is ineffective in inhibiting NF-κB when ATGL is lacking, we suggest that this lipase also efficiently modulates NF-κB activity by (i) restraining its phosphorylation and (ii) stabilizing the inhibitory partner IκB. These hypotheses remain to be tested and are currently under investigation in our laboratory. Importantly, recent papers demonstrate that RSV worsens the clinical symptoms in mouse models of multiple sclerosis, exacerbating inflammation and neuronal demyelination [40]. Moreover, RSV has been found to activate NF-κB and to increase inflammatory cytokines in cardiac cells [41]. These findings and our data collectively indicate that caution should be exercised in using RSV against inflammatory states.
Conclusions
Here we provide additional support for the role of ATGL as a stress-responsive protein with the capacity to suppress the production of age-related proinflammatory cytokines in adipocytes. ATGL being an important node in the promotion of lipid signalling, its anti-inflammaging effect seems to proceed via the induction of the FA/PPARα-mediated pathway, in that PPARα functions as a valid suppressor of inflammatory cytokines at the level of gene transcription. Importantly, we have also pointed out that RSV can act as a powerful anti-inflammatory agent thanks to its ability to restore ATGL expression, which is heavily compromised by ageing, thus allowing FA/PPARα signalling to proceed towards the repression of cytokine production. Drugs able to boost the activity of ATGL in adipose tissue are not currently available. Therefore, on the basis of our findings, we can state that RSV could act as a powerful enhancer of the ATGL/FA/PPARα pathway, thus representing a valid natural tool to limit the onset and/or the exacerbation of age-related metabolic disorders and inflammatory states.
Combined effect of cabozantinib and gefitinib in crizotinib‐resistant lung tumors harboring ROS1 fusions
The ROS1 tyrosine kinase inhibitor (TKI) crizotinib has shown dramatic effects in patients with non‐small cell lung cancer (NSCLC) harboring ROS1 fusion genes. However, patients inevitably develop resistance to this agent. Therefore, a new treatment strategy is required for lung tumors with ROS1 fusion genes. In the present study, lung cancer cell lines, HCC78 harboring SLC34A2‐ROS1 and ABC‐20 harboring CD74‐ROS1, were used as cell line‐based resistance models. Crizotinib‐resistant HCC78R cells were established from HCC78. We comprehensively screened the resistant cells using a phospho‐receptor tyrosine kinase array and RNA sequence analysis by next‐generation sequencing. HCC78R cells showed upregulation of HB‐EGF and activation of epidermal growth factor receptor (EGFR) phosphorylation and the EGFR signaling pathway. Recombinant HB‐EGF or EGF rendered HCC78 cells or ABC‐20 cells resistant to crizotinib. RNA sequence analysis by next‐generation sequencing revealed the upregulation of AXL in HCC78R cells. HCC78R cells showed marked sensitivity to EGFR‐TKI or anti‐EGFR antibody treatment in vitro. Combinations of an AXL inhibitor, cabozantinib or gilteritinib, and an EGFR‐TKI were more effective against HCC78R cells than monotherapy with an EGFR‐TKI or AXL inhibitor. The combination of cabozantinib and gefitinib effectively inhibited the growth of HCC78R tumors in an in vivo xenograft model of NOG mice. The results of this study indicated that HB‐EGF/EGFR and AXL play roles in crizotinib resistance in lung cancers harboring ROS1 fusions. The combination of cabozantinib and EGFR‐TKI may represent a useful alternative treatment strategy for patients with advanced NSCLC harboring ROS1 fusion genes.
| INTRODUCTION
The discovery of oncogenic driver genes and corresponding targeted drugs has changed the clinical treatment of non-small cell lung cancer (NSCLC) over the past 15 years. 1-3 Fusions in c-ros oncogene 1 (ROS1) have been identified in approximately 1%-2% of patients. 3,4 ALK tyrosine kinase inhibitors (TKI, including crizotinib), which have been approved for clinical use, show inhibitory activity against ROS1 because the ROS1 and ALK proteins share 49% amino acid sequence identity in their kinase domains. 5 The MET/ALK/ROS1 inhibitor crizotinib has been approved for clinical use as an inhibitor of ROS1 in several countries because it confers excellent benefits and shows acceptable tolerance in patients with ROS1 fusion-positive lung cancer. 6,7 Similar to other oncoprotein inhibitors, however, lung tumors with ROS1 fusion genes inevitably acquire resistance to crizotinib, and further improvements in treatment strategies are, thus, required. 8 Many groups have explored the resistance mechanisms in lung tumors with ROS1 fusion genes in attempts to develop new treatment strategies. Similar to the mechanisms of resistance in lung cancers with ALK fusion genes, secondary mutations in the ROS1 kinase domain (eg, G2032R, S1986Y, S1986F, D2033N or L2155S) have been reported. [8][9][10][11] However, the resistance mechanisms of ROS1 inhibitors have not been fully clarified in NSCLC harboring ROS1 fusion genes.

| MATERIALS AND METHODS

Cell lines were cultured in medium supplemented with 10% heat-inactivated FBS and 1% penicillin/streptomycin in a tissue culture incubator at 37°C with 5% CO2.
To establish a crizotinib-resistant cell line, HCC78 cells were treated with gradually increasing concentrations of crizotinib, starting at 0.2 μmol/L (lower than the IC50 of HCC78 cells). After 4 months, the cells grew in the presence of 2 μmol/L crizotinib and were designated HCC78R cells. HCC78R cells were maintained in culture medium containing 1 μmol/L crizotinib. The resistant cell lines were authenticated using the PowerPlex 16 STR System (Promega Corporation, Madison, WI, USA).
| MTT assay
Growth inhibition was determined using a modified MTT assay. 12 Cells were plated on 96-well plates at a density of 2000-4000 cells per well and continuously exposed to each drug for 96 hours. Absorbance values were expressed as percentages relative to those of untreated cells. The drug concentration required to inhibit the growth of tumor cells by 50% (IC50) was used to evaluate the effect of each drug. Each assay was performed in triplicate or more.
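A sketch of how an IC50 can be estimated from MTT viability data with a four-parameter logistic fit; the concentrations and responses below are hypothetical, not the study's data.

import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, bottom, top, ic50, hill):
    """4PL dose-response curve: viability as a function of drug concentration."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

conc = np.array([0.001, 0.01, 0.1, 0.5, 1.0, 5.0])   # μmol/L
viability = np.array([98, 95, 72, 41, 22, 8])        # % of untreated cells

params, _ = curve_fit(four_pl, conc, viability,
                      p0=[0, 100, 0.2, 1.0], maxfev=10000)
print(f"IC50 ~ {params[2]:.3f} μmol/L")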
| ELISA
HB-EGF levels were determined by Human HB-EGF DuoSet ELISA (R&D Systems) according to the manufacturer's instructions.
| Fluorescence in situ hybridization
FISH was performed on formalin-fixed, paraffin-embedded samples using a custom ROS1 break-apart probe set in the laboratory of SRL (Tokyo, Japan). 13 The probe set hybridizes with the neighboring 5′ telomeric (RP11-48A22, labeled with SpectrumGreen) and 3′ centromeric (RP11-1036C2, labeled with SpectrumOrange) sequence of ROS1. Cases with >15% split signals in cells were defined as FISH-positive.
| Statistical analysis
All experiments were performed 3 times, and statistical analyses were performed using STATA software (ver. 13; StataCorp, College Station, TX, USA). Group differences were compared using a 2-tailed unpaired t test. In the box plots, the center line is the median and whiskers show minimum to maximum values. In all analyses, P < 0.05 was considered to indicate statistical significance.
| Epidermal growth factor receptor activation in crizotinib-resistant HCC78R cells
To explore the mechanisms of resistance, we first examined the sensitivity of HCC78R cells to other ALK/ROS1 inhibitors (Table 1). The ALK/EGFR inhibitor brigatinib exhibited a relatively good inhibitory effect in the resistant cells (Figure 1E, Table 1). None of the HCC78R cell lines possessed acquired resistance mutations, such as ROS1 S1986Y, S1986F, D2033N or G2032R, in the ROS1 kinase domain [8][9][10][11] (Figures S1D, S5A and B). Morphological examination yielded no obvious evidence of epithelial-mesenchymal transition, and western blotting experiments confirmed that the expression patterns of E-cadherin and vimentin were similar between parental HCC78 and HCC78R cells (Figure S1B).

FIGURE 1 Establishment of the crizotinib-resistant cell line, HCC78R. A, Cell proliferation assay of HCC78 and HCC78R cells treated with the indicated concentrations of crizotinib. Error bars: SD. All experiments were performed in triplicate. B, FISH analysis of the ROS1 fusion gene in HCC78 and HCC78R cells. Red probes hybridized to the 5′ region of ROS1 and green probes to the 3′ region. In the presence of ROS1 rearrangement, the 2 colors are observed separately. HCC78R cells maintained ROS1 fusion genes (98% ROS1 fusion FISH-positive) to the same extent as the parental HCC78 cells (94% ROS1 fusion FISH-positive). C, Detection of SLC34A2-ROS1. Complementary DNA (cDNA) derived from PC-9 (negative control), HCC78 and HCC78R was examined by PCR. D,F, Immunoblots and phospho-receptor tyrosine kinase (RTK) arrays in HCC78 and HCC78R cells. These cells were cultured in normal medium for 4 d, after which both types of cells were exposed to 0.5 μmol/L crizotinib for 6 h. E, Cell proliferation assay of HCC78 and HCC78R cells treated with the indicated concentrations of the ALK/ROS1 inhibitors ceritinib, alectinib, brigatinib and lorlatinib. Error bars: SD. All experiments were performed in triplicate.
Subsequently, we comprehensively assessed the phosphorylation of receptor tyrosine kinases (RTK) in HCC78 cells and HCC78R cells using an RTK array. The results indicated that EGFR phosphorylation was relatively well maintained in HCC78R cells under crizotinib exposure compared with phosphorylation in the parental HCC78 cells (Figures 1F and S1E). The levels of ERBB2 and MET phosphorylation markedly decreased in HCC78R cells compared with those in parental HCC78 cells. In contrast, phosphorylation of AXL increased in HCC78R cells compared with that in parental HCC78 cells (Figures 1F and S1E). Western blotting analysis showed that phosphorylation of EGFR and the downstream signaling protein ERK1/2 was maintained in HCC78R cells under crizotinib exposure (Figure 1D).
Taken together, these observations suggested that EGFR or AXL may play a role in the mechanism of resistance in HCC78R cells.
| Effects of epidermal growth factor receptor inhibitors in HCC78R
We performed an MTT assay using the EGFR-TKI gefitinib in HCC78R cells to examine whether activation of EGFR is responsible for crizotinib resistance. Gefitinib monotherapy had little effect on cell proliferation, but co-treatment with gefitinib and crizotinib showed a superior effect in inhibiting the proliferation of parental HCC78 cells (Figure S4B). The results indicated that the EGFR pathway plays an important role in intrinsic sensitivity to crizotinib in HCC78 cells. This is consistent with previous reports. 15,16 In contrast to parental HCC78 cells, crizotinib showed no inhibitory effect on HCC78R cells (Figure 1A), but gefitinib inhibited their proliferation (IC50 ± SD: 0.136 ± 0.022 μmol/L) (Figure 2A). Next, we assessed the effects of EGFR-TKI on the EGFR signaling pathway in each cell line. Phosphorylation of EGFR and its downstream signaling protein ERK1/2 was not suppressed in HCC78R cells treated with crizotinib, while these signaling pathways were suppressed upon gefitinib exposure (Figure 2B).
Adding gefitinib to crizotinib monotherapy strongly inhibited the proliferation of HCC78R cells (Figure S4C). In contrast, combination therapy with gefitinib plus crizotinib was not superior to gefitinib monotherapy with regard to the proliferation of HCC78R cells (Figure 2C). These results suggested that the resistant HCC78R cells were no longer addicted to the oncogenic ROS1 fusion protein but were instead addicted to EGFR. To confirm the effect of EGFR-TKI, we performed the same experiments using other EGFR-TKI, erlotinib and afatinib. As expected, similar results were observed with both of these agents in HCC78R cells (Figures S2A,B). Third, we examined the inhibitory effect of the anti-EGFR antibody cetuximab in vitro. Interestingly, cetuximab inhibited the proliferation of the resistant cell lines to a significantly greater extent than that of the parental cells in vitro (mean IC50 ± SD: >20 μg/mL for HCC78 cells and 11.2 ± 1.33 μg/mL for HCC78R cells; Figure 2D). Western blotting analysis showed that 1 or 5 μg/mL of cetuximab inhibited the phosphorylation of EGFR and its downstream signaling proteins (Figure 2E). Taken together, these observations suggested that the EGFR signaling pathway plays an important role in the mechanism of resistance in HCC78R cells.
| Heparin-binding epidermal growth factor-like growth factor/epidermal growth factor receptor axis signaling confers resistance to crizotinib in lung cancer cells harboring ROS1 fusion genes
Next, we investigated the mechanisms of EGFR activation in HCC78R cells. The level of EGFR protein expression was not significantly increased in HCC78R cells (Figure 1D), and no activating EGFR mutations were detected. Western blotting analysis indicated that the expression of total EGFR protein decreased, but the phosphorylation of EGFR and its downstream signaling protein ERK1/2 was maintained under crizotinib exposure upon the addition of HB-EGF to HCC78 cells (Figure 3D). We also examined the effect of conditioned medium prepared by mixing equal parts fresh medium and the supernatant of HCC78R cells. As expected, the conditioned medium rendered the parental HCC78 cells resistant to crizotinib in vitro (Figure S3E). In addition, we investigated the impact of other growth factors on the sensitivity of HCC78 cells to crizotinib. Consistent with the results for HB-EGF (Figure 3C), EGF (100 ng/mL) stimulation rendered HCC78 cells resistant to crizotinib in vitro (Figure S3A). Similar to the effects of HB-EGF, EGF maintained phosphorylation of EGFR and ERK1/2 in HCC78 cells treated with crizotinib (Figure S3B). In contrast to the results for the EGFR ligands, neither insulin-like growth factor (IGF) nor fibroblast growth factor (FGF) rescued the proliferation of HCC78 cells during crizotinib exposure (Figure S3F). Finally, another lung cancer cell line, ABC-20, harboring the CD74-ROS1 fusion gene (Figure S3C), was investigated to reconfirm the roles of HB-EGF and EGF. Similar to HCC78 cells, HB-EGF or EGF stimulation rendered ABC-20 cells resistant to crizotinib (Figures 3E and S3D), and the phosphorylation of both EGFR and ERK1/2 was maintained in ABC-20 cells under crizotinib treatment (Figure 3F).
| AXL upregulation and resistance to crizotinib in HCC78R cells
To explore the mechanisms of resistance in more detail, we performed RNA-targeted sequence analysis using next-generation sequencing in HCC78 and HCC78R cells. The samples were collected from dishes in which HCC78 cells (n = 4) or HCC78R cells (n = 4) were independently cultured. The mRNA expression level of 612 human kinome genes and kinase-related genes was comprehensively compared ( Figure 4A). The raw data are shown in Table S1.
The mRNA expression of 9 genes, PLK1 and 2, PBK, AURKA, AURKB, TTK, CDK1, NEK2 and AXL, showed changes of more than 8-fold between HCC78 and HCC78R cells ( Figure 4A). In contrast, the mRNA levels of KDR, INSR, SBK1, EPHA4 and TNIK decreased 8-fold between the cell lines ( Figure S6). Among these genes, we focused on AXL, because the read frequency of AXL was among the highest and phosphorylation of AXL was increased in HCC78R ( Figures 1F and S1E). Consistent with the NGS data, western blotting analysis showed that expression of the AXL protein significantly increased ( Figure 4B). The effects of 2 clinically relevant AXL inhibitors, 17 cabozantinib 18 and gilteritinib, 19 were assessed in HCC78R cells.
FIGURE 2 Effects of epidermal growth factor receptor (EGFR) inhibitor treatment in HCC78R cells. A, Cell proliferation assays in HCC78 and HCC78R cells treated with the indicated concentrations of gefitinib. Error bars: SD. All experiments were performed in triplicate. B, Effects of combined treatment with crizotinib and gefitinib on the EGFR signaling pathway in HCC78R cells. Cells were exposed to gefitinib at 0.5 μmol/L for 6 h, and to crizotinib at 0.1 and 0.5 μmol/L for 6 h. C, Inhibitory effects of crizotinib and gefitinib on HCC78R cell proliferation upon the addition of 0.5 μmol/L crizotinib. Error bars: SD. All experiments were performed in triplicate. D, Cell proliferation assays in HCC78 and HCC78R cells treated with the indicated concentrations of cetuximab. Error bars: SD. All experiments were performed in triplicate. E, Effects of cetuximab on the EGFR pathway in HCC78R cells. Cells were exposed to cetuximab at 1 and 5 μg/mL for 6 h.

Compared with the effects of the EGFR-TKI gefitinib, the effects of cabozantinib and gilteritinib were relatively limited in HCC78R cells (Figure 5A). Furthermore, the combination of crizotinib with cabozantinib showed an antagonistic effect. In contrast, the combination of gefitinib with cabozantinib or gilteritinib inhibited cell proliferation to a significantly greater extent than gefitinib monotherapy (Figure 5A). Consistent with these observations, combination therapy more strongly inhibited phosphorylation of ERK1/2 than monotherapy (Figure 5B). In addition, we assessed the effect of combining gefitinib with knockdown of AXL using siRNA. As expected, the combination showed a superior inhibitory effect on cell proliferation compared with AXL inhibition alone in HCC78R cells (Figure S7). Finally, we examined the effects of these drugs in vivo. Xenograft tumors treated with vehicle or crizotinib showed similar growth (Figure 5C), while cabozantinib showed a moderate effect on tumor growth in vivo. In contrast, gefitinib led to a significant inhibition of tumors in the mice. Combination therapy with gefitinib and cabozantinib also showed a better inhibitory effect than each of the monotherapies alone in mouse tumors bearing HCC78R cells, although this effect was not statistically significant (Figure 5C). No differences in body weight were observed among any of the mouse groups (Figure 5C).
| DISCUSSION
The development of resistance to targeted therapy is critical for patients with lung cancers harboring driver oncogenes. We demonstrated that HB-EGF/EGFR and AXL play roles in the mechanism of resistance of lung cancers harboring ROS1 fusions treated with crizotinib. Using a cell line-based model, we also found that dual inhibition of EGFR and AXL has the potential to overcome crizotinib resistance in vitro and in vivo. These findings could be clinically relevant, as EGFR inhibitors and AXL inhibitors are clinically available. 16,17 We also showed that IGF and FGF did not affect crizotinib sensitivity in HCC78 cells (Figure S3F). Therefore, the EGFR pathway may be especially important for persister cells to survive drug exposure in lung cancer cells harboring ROS1 fusion genes.
In this study, we found that the AXL RNA expression level significantly increased in crizotinib-resistant HCC78R cells. AXL is thought to play a role in acquired resistance to oncoprotein inhibitors, [31][32][33] but its role regarding crizotinib resistance has not yet been reported.
We examined the effects of 2 clinically relevant AXL inhibitors in HCC78R cells. Monotherapy with AXL inhibitors showed only moderate effects, but combining AXL inhibitors with EGFR-TKI resulted in a superior inhibitory effect compared with monotherapies. This suggested that AXL plays some role, in concert with EGFR, in resistance to crizotinib.
"Oncogene swap" has been reported as a resistance mechanism in lung cancer with EGFR mutations. 34 In this situation, the activation of other oncogenes acts not as a "bypass", but rather as a "main" oncoprotein. In our study, HCC78R cells maintained ROS1 fusion genes ( Figure 1C)
ACKNOWLEDGMENTS
We are grateful to Hiromi Nakashima and Kyoko Maeda for the technical support. We also thank Dr Takehiro Matsubara (Division of Biobank, Center for Comprehensive Genomic Medicine, Okayama University Hospital) for analyzing next-generation sequencing data, and our laboratory colleagues for the useful discussions. This work received a poster award, ESMO 2014 (Madrid, Spain).
CONFLICTS OF INTEREST
All authors declare no conflicts of interest regarding this study.

FIGURE 5 Beneficial effects of combination therapy with an AXL inhibitor and an epidermal growth factor receptor (EGFR) inhibitor in HCC78R cells. A, Inhibitory effects of combining an AXL inhibitor, cabozantinib or gilteritinib, with gefitinib on the proliferation of HCC78R cells. Cells were exposed to crizotinib or cabozantinib at 0.5 μmol/L, to gilteritinib at 0.2 μmol/L, and to gefitinib at 0.1 μmol/L, all for 96 h. Data are presented as the means ± SD of 3 independent experiments. ***P < 0.001. Cabo, cabozantinib; Criz, crizotinib; Gefi, gefitinib; Gilt, gilteritinib. B, Effects of combined treatment with AXL inhibitors and gefitinib on EGFR pathway signaling in HCC78R cells. Cells were exposed to all drugs for 6 h. C, Effects of combined treatment with cabozantinib and gefitinib on tumor growth and body weight in HCC78R cell xenograft models. Mice were treated with 100 mg/kg/d crizotinib, 5 mg/kg/d gefitinib, 30 mg/kg/d cabozantinib, or 5 mg/kg/d gefitinib with 30 mg/kg/d cabozantinib. Statistical analysis of the data from the vehicle and treated groups was performed on day 14. Tumor volume (top) and body weight (bottom) curves.
COMMUNITY EMPOWERMENT STRATEGY BASED ON THE TEMAN PROGRAM
People on the north coast of Cirebon Regency depend for their livelihood on catching sea products. When the weather is bad they cannot go to sea, and so they earn no income. The community's lack of a reliable income motivates the aim of this study: how to create productive business groups for fishermen's wives. People who can run small-scale businesses are helped to obtain assistance from academia, industry, and government, an arrangement known as the Triple Helix. The strategy for building synergy among these three elements is applied through the TEMAN program (Economic Order of the Fishermen's Community), a collaboration between university community service and industry partners in villages around industrial areas. Through community empowerment efforts, business groups were formed that produce crackers from leftover processed fish and lemi (small-crab fat), turning them into fish-paste (petis) crackers and crab crackers. The method used is descriptive qualitative, because the study concerns social problems in the community: how to build motivation in the fishermen's wife groups to run a business so that they can earn a small but sustainable (generic) income. The findings on community empowerment strategies based on the TEMAN program, examined through access to natural resources, access to community participation, access to markets, and access to information and knowledge, show that community empowerment programs can make small business groups independent and help sustain the economy of fishing families on the coast.
A. INTRODUCTION
Community development activities take place in the northern coastal area, especially the Mundu coastal area of Cirebon Regency, which covers approximately 15,525 km2 and is inhabited by a population of 5,875 people, 2,939 men and 2,936 women, in a total of 1,340 households. The majority of people, 2,450 in all, work as fishermen. Community empowerment efforts aimed at reducing poverty and encouraging society to be productive must pay attention to the role of women. In Mundu Village, the number of women of productive age is 1,631, of whom 1,490 have a high school education or below. With education so limited, and some residents unable to read and write, many people work odd jobs without a regular income. The livelihood of the fishing community, which relies solely on the sea's catch, yields an uncertain income, so fishers often borrow money from the bakul (fish brokers) or boat owners, to be repaid when they return from sea, which lets the bakul set the price for each catch. Observations in the coastal environment of Mundu also show that raw materials left over from processed fish or crabs are wasted even though they could be used as raw material for making crackers. The community thus has the potential to run small-scale businesses, and it needs Triple Helix assistance, from academia, industry, and government, that recognizes the role of fishermen's wives, who have the potential to be empowered.
Empowerment activities in Mundu Village resemble other studies that focus on empowerment: "It is necessary to develop a community empowerment strategy that will help them be more empowered. This is because, so far, no real results or changes have been found to show whether this program is sufficient to empower the community. The success of the program cannot be separated from the strategy applied in the implementation process. To find this out, an assessment is needed to describe the process. From the description of the program implementation process, it can be seen whether the program follows the community empowerment strategy" (Hadiyanti, 2008). The community empowerment program in Mundu village, Cirebon Regency, is called the Fishermen Community Economic Arrangement Program, shortened to the acronym TEMAN. This empowerment program aims to improve the welfare of the fishing community in Mundu Village. Given that the Mundu community faces various social and economic problems, an analytical study of efforts to overcome fishermen's socio-economic issues, including strategies for building synergy among the three Triple Helix elements, is implemented through the TEMAN program (Economic Order of the Fishermen Community). This research was conducted not only to analyze the implementation of the empowerment program but, more importantly, to examine the relationship between the program's paradigm and the urgency of community empowerment for the socio-economic problems of fishing communities, which must be addressed with the right strategy. "At every level of the organization, strategies are made based on the scope of their authority, depending also on the centralized or decentralized pattern that the organization follows. Strategy is the art of using the skills and resources of an organization to achieve its goals through an effective relationship with the environment under the most favorable conditions" (Salusu, 2015). TEMAN is a model of empowerment that originates from studying the economic structure of the fishing community. It is hoped that the people of Mundu village can feel the benefits of this program, especially in the empowerment of organizations on the north coast of Cirebon Regency. The formulation of this program is:
1. How to increase the capacity of TEMAN program members to fulfill their daily needs, marked by increased family income; improved quality of food, clothing, shelter, health, and education; the ability to carry out religious activities; and the growth of other social needs.
2. How to improve TEMAN members' ability to overcome problems in their families and social environment, marked by unity in family and community decision making, including accepting differences of opinion that may arise between husband and wife or between parents and children.
3. How to improve the ability of TEMAN members to perform their social roles, both in the family and in the social environment, marked by growing awareness, responsibility, and participation of members in social welfare efforts; more options opening up for group members in profitable business development; and opportunities to utilize the social welfare resources and potential available in the environment (Ria Adriyani et al., 2014).
Every program has a goal to be achieved, and TEMAN is no exception. How effective the program is, and how well it matches the community's needs, is a challenge for the implementers, so careful thinking in the form of a strategy is required. The notion of strategy has been discussed by several experts in the field of economics, including the view that "Strategy is a way to mobilize human resources, funds, power, and equipment to achieve the goals set." The strategy for community empowerment has several stages so that activities can be realized well. The stages are: (1) selection of program target areas, (2) socialization of community empowerment, (3) implementation of community empowerment programs, and (4) monitoring and evaluation of the performance of community empowerment programs (Hadiyanti, 2008). Furthermore, a good plan makes the community receiving the program prosperous: the program that is carried out can change the community's perspective so that it becomes independent and does not depend on the program implementation team.
The TEMAN program's main subjects are group members receiving assistance or empowerment, consisting of fishermen's wives in coastal Mundu Village. The goal is for wives to become independent and help the family economy sustainably, in line with the observation that "... fishermen's wives generally handle domestic affairs, but can also carry out economic functions, both in fishing activities in shallow waters (beach seine), fish processing, as well as service and trade activities" (Satria, 2015: 20). This indicates that women are an essential factor in stabilizing the family economy. Empowering fishermen's wives is necessary to raise the standard of living and help husbands earn a living. Therefore, fishermen's wives need to be more creative and to form communities, social groups, or economic groups as an organized effort to obtain a generic income.
B. CONCEPT
Empowerment Strategy Concept
The strategy for empowerment of coastal communities is a concept of efforts to change the behavior of specific community groups according to plan through an empowerment program. According to Satria (2015: 129-130), there are four forms of access for empowering coastal communities:
1) Access to natural resources: the ability of coastal communities, either individually or in groups, to utilize coastal, fishery, and marine resources.
2) Access to participation: coastal communities share in the information, inputs, processes, outputs, and outcomes of participation equitably and fairly.
3) Access to markets: coastal communities, mostly fishermen, can sell their catch and obtain information on developing market dynamics.
4) Access to information and knowledge: transformation of information and smooth learning between the community and the government, including environmentally friendly fishing techniques and methods, government assistance and empowerment programs, the dynamics of market demand and supply, weather conditions for fishing, and access to fuel for going to sea.
These four forms of access become dimensions in the study of community empowerment strategies for the TEMAN Program.
C. METHOD
The TEMAN program research addresses coastal communities' problems with a qualitative descriptive method, using qualitative analysis of interviews based on a prepared list of questions to describe the object of research in the field. It begins with collecting primary data in society, in line with the theory that "... this is done by gathering a sufficient amount of knowledge which leads to an effort to understand or explain the related factors" (Basrowi & Suwandi, 2008: 67). The object of this research is the TEMAN Program as a community empowerment strategy. The data sources are the community groups involved in the TEMAN program, namely 30 fishermen's wives, 4 TEMAN program activists, and three informants to complement the data. This research is qualitative, in line with the definition of qualitative research according to Sugiyono (2013: 9): "The qualitative research method is a research method used to examine the conditions of natural objects. The researcher is the key instrument. The data collection technique is done by triangulation (combined), the data analysis is inductive, and the results of qualitative research emphasize meaning over generalization." Data collection draws on primary and secondary data. According to Sugiyono (2013: 225): "Primary sources are sources of data that provide data directly to data collectors, while secondary sources are sources that do not directly provide data to data collectors, for example through other people or documents." The technique for selecting informants or respondents in this study is purposive sampling (Sugiyono, 2013: 216): "Purposive sampling is a technique of sampling data sources with certain considerations. The considerations referred to relate to the informant's depth of knowledge, the informant's broad access to power and authority, and the informant's strategic position and influence on the social situation under study." Data analysis was carried out to interpret the data obtained in the field. The stages of qualitative data analysis, according to Miles & Huberman in Sugiyono (2013: 246-253), are as follows:
1. Data Reduction. Data reduction means summarizing, selecting the main points, focusing on the essential things, and looking for themes and patterns to guide further data collection.
2. Presentation of Data. In qualitative research, the data are presented in the form of narrative text.
3. Conclusion Drawing and Verification. The conclusion in qualitative research is a new finding: a description of an object or situation that was previously unclear and becomes clear. Verification means testing the data for accuracy and consistency.
In implementing this research, data were collected through interviews based on interview guidelines; the responses were tabulated as percentages and then analyzed descriptively, complemented by data from observations and documentation of activities.
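The percentage tabulation used here is simple enough to express compactly. The following is a minimal sketch (not part of the original study) in Python, using the respondent counts reported later in the Results and Discussion section, all out of the 30 fishermen's wives interviewed:

```python
# Minimal sketch of the percentage tabulation used in this study.
# Counts are the respondent tallies reported in the Results section;
# the denominator is the 30 fishermen's wives who were interviewed.

TOTAL_RESPONDENTS = 30

agreement = {
    "use leftover fish/crab lemi as raw material": 26,
    "ready to participate in the TEMAN program": 25,
    "concerned about marketing of the products": 28,
    "need more information and knowledge": 20,
}

for item, count in agreement.items():
    share = count / TOTAL_RESPONDENTS
    print(f"{item}: {count}/{TOTAL_RESPONDENTS} = {share:.0%}")
```

Run as-is, this reproduces the 87%, 83%, 93%, and 67% figures quoted in the findings.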
D. RESULTS AND DISCUSSION
Strategy formulation is carried out to ensure that targets are achieved accurately; a strategic plan can be made to close gaps or assist in achieving goals. A previous study noted that "regarding strategy formulation, the organization can examine what factors influence it through a matrix of strengths, weaknesses, opportunities, and threats (SWOT). SWOT analysis is an analysis of the situations and conditions in the policy's internal and external environment. It is based on the logic of maximizing strengths and opportunities while reducing weaknesses and threats. In this way, an organization can see strengths, weaknesses, opportunities, and threats as an integral whole to identify the strategic issues it has and is likely to face" (Aisah, 2015). The essence of strategy is determining the leaders' plans that focus on the long-term goals of the organization, together with preparing the means or efforts to achieve those goals. The community empowerment strategy in implementing the TEMAN program utilizes four forms of access for the empowerment of coastal communities, namely:
1. Access to natural resources.
2. Access to participation.
3. Access to markets.
4. Access to information and knowledge.
The explanation based on the research results for these four forms of access is as follows.
1. Access to natural resources
Coastal areas provide many natural resources from sea catches, so the ability of coastal communities, both individually and in groups, to utilize coastal, fishery, and marine resources greatly helps empowerment programs. In the process of picking crabs, only the crab meat is used, while the roe and lemi (crab fat) are discarded; likewise, when fresh fish is processed into pindang, the remaining boiling water, which still contains nutrients, is wasted. These discarded processing residues can be used as raw materials for making crackers. The study results showed that 26 of 30 respondents, or 87%, agreed to use the leftover processed fish and crab lemi as the primary raw material. This indicates that the ease of obtaining raw materials from the natural surroundings gives TEMAN program participants the enthusiasm to start a cracker-making business.
2. Access to participation
An important aspect of empowerment is how coastal communities participate, from information, inputs, and processes through to outputs and outcomes, evenly and fairly, as a form of community involvement in the TEMAN program. The community in question consists of group members recruited at a meeting of fishermen's wives initiated by the wife of the hamlet head. The process ran from outreach on plans to start a joint venture as a group, through discussion of the ideal type of business to pursue together and of the steps required of those wishing to become members of the TEMAN group, until an agreement was reached to join the TEMAN program. At the initial meeting with fishermen's wives, interviews showed that 25 people, or 83%, were ready to participate. The empowerment strategy focuses on raising the economy of fishing families by empowering wives through a structured community empowerment program. The program's main objective is to create an independent and innovative fishing-family community by forming fishermen's wife groups under the TEMAN Program, developing the potential of fishermen's wives by processing the unsold and otherwise useless remainder of the catch into crackers. Its products are petis crackers and crab crackers, popular snack foods, so they can be a source of income that is small but sustainable (generic). One of the starting points for success is independence: "The formation of independence begins with participation, and the community will be encouraged to participate if they understand the benefits that will be obtained from a program to improve their welfare. For this reason, the community needs to be involved from the beginning of the activity. It is also essential to foster a sense of ownership of the program, which further encourages them to continue developing it" (Zuliyah, 2010).
3. Access to markets
The coastal communities, most of whom are ordinary fishermen, sell their catch to the bakul or boat owners. They are generally reluctant to seek information on developing market dynamics because debt agreements already bind them. The fish-paste cracker and small-crab cracker business is an effort to mobilize fishermen's wives willing to help the family economy. The research results show that most respondents, 28 people or 93%, expressed concern about the marketing of the products they produce. The crackers produced by TEMAN members are marketed from their homes and are also distributed to stalls and sold in traditional markets at Rp 40,000/kg. Some consumers deliberately order crackers as souvenirs, showing that the members have started to feel the results of selling the crackers they have made as a joint venture (Source: TEMAN Program Report). To increase the confidence and experience of TEMAN members, Kerupuk Teman was also included in a bazaar, since the cracker business they have pioneered has begun to produce crackers that are attractive in shape, delicious in taste, and made continuously.
Image: LPPM documentation, Bazaar Activities
For the time being, the crackers are marketed in traditional markets, and an occasional bazaar is held as a promotional event to introduce the products. Production has increased gradually, calculated per quarter (3 months), as shown in the graph below.
Graph: Increase in production of both types of crackers
It is hoped that community empowerment under the TEMAN program can continue to the stage of arranging PIRT certification and halal labels for the petis cracker and crab cracker products that have been successfully pioneered, to increase their selling value and bring a significant increase in sales turnover.
4. Access to information and knowledge
The TEMAN program, once socialized, was implemented to help smooth the transformation of information and knowledge between the community, industry, and government, including government assistance and empowerment programs, as well as aid from industry bridged by universities in relation to empowering communities around industrial zones. For the TEMAN program, the provision of information was carried out during two months of coaching aimed at improving members' expertise in making crackers. The TEMAN team therefore brought in a cracker entrepreneur from the East Cirebon area as a mentor, someone already proficient in managing the manufacture and production of crackers. This effort was also made to motivate the members to stay enthusiastic and never tire of making petis and crab crackers. The need for access to information and knowledge in the empowerment strategy is quite critical: the research results show 20 respondents, or 67%, who feel they need information and want to increase their knowledge, since most respondents cannot read and write, graduated only from elementary school, or did not complete junior high school.
E. CONCLUSION
The conclusion of this study is that the TEMAN program can run well even though it has not implemented a comprehensive community empowerment strategy. Group members can feel the benefit of the program, which provides additional family income. In terms of the process, with reference to the four forms of access, access to information and knowledge was not maximal because of constraints on the group members' academic abilities and insight. Within 10 months, by the fourth quarter, there had been significant success: the 30 fishermen's wives in the TEMAN group have been able to work together independently in creating productive businesses to increase family income, with a positive impact on the economy and the level of community welfare. The TEMAN group was formed because the Team had a great desire to improve family welfare and provided technical assistance to the group in material and non-material forms. It is hoped that the business will continue to develop, with monitoring of the production of petis crackers and small-crab crackers so that the quality and quantity of the crackers are maintained and increased. Community empowerment based on the TEMAN program can have a positive impact and give hope to the community in Mundu Pesisir Village, Cirebon Regency. Fishermen's wives can be independent and at the same time help their family's economy with a generic income through the use of leftover fish and crabs. The synergy between the government, industry, and related parties that are serious about fostering coastal communities, oriented toward small business development, can survive amid the conditions of the Covid-19 pandemic by utilizing the right community empowerment strategy.
Impact of Pretreated Switchgrass and Biomass Carbohydrates on Clostridium thermocellum ATCC 27405 Cellulosome Composition: A Quantitative Proteomic Analysis
Background Economic feasibility and sustainability of lignocellulosic ethanol production requires the development of robust microorganisms that can efficiently degrade and convert plant biomass to ethanol. The anaerobic thermophilic bacterium Clostridium thermocellum is a candidate microorganism as it is capable of hydrolyzing cellulose and fermenting the hydrolysis products to ethanol and other metabolites. C. thermocellum achieves efficient cellulose hydrolysis using multiprotein extracellular enzymatic complexes, termed cellulosomes. Methodology/Principal Findings In this study, we used quantitative proteomics (multidimensional LC-MS/MS and 15N-metabolic labeling) to measure relative changes in levels of cellulosomal subunit proteins (per CipA scaffoldin basis) when C. thermocellum ATCC 27405 was grown on a variety of carbon sources [dilute-acid pretreated switchgrass, cellobiose, amorphous cellulose, crystalline cellulose (Avicel) and combinations of crystalline cellulose with pectin or xylan or both]. Cellulosome samples isolated from cultures grown on these carbon sources were compared to 15N labeled cellulosome samples isolated from crystalline cellulose-grown cultures. In total from all samples, proteomic analysis identified 59 dockerin- and 8 cohesin-module containing components, including 16 previously undetected cellulosomal subunits. Many cellulosomal components showed differential protein abundance in the presence of non-cellulose substrates in the growth medium. Cellulosome samples from amorphous cellulose, cellobiose and pretreated switchgrass-grown cultures displayed the most distinct differences in composition as compared to cellulosome samples from crystalline cellulose-grown cultures. While Glycoside Hydrolase Family 9 enzymes showed increased levels in the presence of crystalline cellulose, and pretreated switchgrass, in particular, GH5 enzymes showed increased levels in response to the presence of cellulose in general, amorphous or crystalline. Conclusions/Significance Overall, the quantitative results suggest a coordinated substrate-specific regulation of cellulosomal subunit composition in C. thermocellum to better suit the organism's needs for growth under different conditions. To date, this study provides the most comprehensive comparison of cellulosomal compositional changes in C. thermocellum in response to different carbon sources. Such studies are vital to engineering a strain that is best suited to grow on specific substrates of interest and provide the building blocks for constructing designer cellulosomes with tailored enzyme composition for industrial ethanol production.
Introduction
Plant cell walls consist of several intertwined heterogeneous polymers, primarily composed of cellulose, hemicellulose (substituted xylan), pectin, and lignin. Therefore, the action of several enzymes with diverse catalytic activities is needed in order to efficiently break down and unravel this inherently complex polymer network. The anaerobic, thermophilic, Gram-positive bacterium Clostridium thermocellum possesses this diversity in catalytic capability [1], thus making this organism an attractive candidate for lignocellulosic biomass deconstruction and conversion for cellulosic ethanol production [2].
C. thermocellum has one of the fastest known growth rates on crystalline cellulose, the major component in plant biomass [3]. High efficiency cellulose hydrolysis is aided by the cell surface attached multienzyme protein complex termed the cellulosome [4,5,6]. The cellulosome consists of a primary non-catalytic scaffoldin unit (CipA) that can accommodate as many as nine catalytic units [7]. The catalytic units are non-covalently attached to the scaffoldin via the high affinity Type I interaction between dockerin domains borne by the catalytic units with the cohesins on the scaffoldin [8,9]. In turn, the entire scaffoldin with bound subunits is attached to the cell surface via the high affinity Type II interaction between the dockerin domain of CipA and the cohesin(s) borne by the anchoring proteins (OlpB, SdbA, Orf2p) [10]. The scaffoldin and several of the catalytic units also have carbohydrate-binding modules that aid in attachment of the cellulosome directly to the growth substrates to form a cell-cellulosome-substrate tri-complex (see schematic in Figure 1).
With the genome sequence available, three recent studies have utilized mass spectrometry-based methods to identify and experimentally confirm the expression of a number of new cellulosomal proteins. Zverlov et al. [27] used two-dimensional electrophoresis to separate the cellulosomal proteins isolated from cellulose-grown C. thermocellum F7 and identified 13 proteins, using Matrix Assisted Laser Desorption and Ionization-Time of Flight (MALDI-TOF) mass spectrometry. More recently, the same team used MALDI-TOF/TOF mass spectrometry to identify 32 components across four different cellulosomal samples isolated from C. thermocellum 27405 cultures grown on cellulose, cellobiose, cellulose+xylan, and barley beta-glucan [30].
The most comprehensive proteomic study to date, by Gold and Martin [31], employed a metabolic isotope labeling strategy in conjunction with liquid chromatography-tandem mass spectrometry (LC-MS/MS) to estimate quantitative changes in the expression patterns of C. thermocellum 27405 cellulosomal subunits during growth on cellulose and cellobiose. Qualitatively, 41 cellulosomal components were identified between the two samples, the highest number of experimentally verified subunits thus far. Quantitatively, the authors reported increased expression of the anchor protein OlpB, the exoglucanases CelS and CelK, and the GH9 endoglucanase CelJ, and lowered expression of endoglucanases from glycoside hydrolase families GH8 (CelA) and GH5 (CelB, CelE, CelG) and of hemicellulases (XynA, XynC, XynZ and XghA) during growth on cellulose, as compared to cellobiose-grown cellulosomes. Based on these results, the authors suggested a novel distinction in the regulation of GH5 and GH9 endoglucanases.
Other studies have also demonstrated a growth rate and/or carbon source dependent regulation of cellulolytic activity and cellulosomal gene expression in C. thermocellum [8,32,33]. Specifically, transcript levels of the major exoglucanase, celS and endoglucanase genes from GH9 (celD) and GH5 (celB, celG) families have been shown to increase at either low growth rates or in the presence of crystalline cellulose [32,33,34,35,36,37]. A similar trend in expression has also been reported for the scaffoldin, CipA and cell-surface anchoring proteins OlpB and Orf2p but not SdbA. However, the underlying regulatory mechanisms for these observations are not well understood. It has been hypothesized that C. thermocellum down-regulates the expression of energy-intensive cellulases in the presence of alternate readily metabolizable substrates such as cellobiose via catabolite repression [33,38]. In support of this idea, recently the sugar laminaribiose was identified as an inducer (by inhibiting binding of the negative regulator GlyR3) of the celC gene cluster encoding non-cellulosomal enzymes in C. thermocellum [39].
While the above studies have provided valuable insights into cellulosomal gene regulation and contributed significantly to identifying the cellulosomal composition, the field would benefit from further detailed investigations. For example, more than 20% of these cellulosomal proteins have domains with no assigned function [2]. Most cellulosomal composition and expression studies have only investigated growth on two model substrates (crystalline cellulose and cellobiose), with the exception of the work by Zverlov and Schwarz [30], which also included barley beta-glucan and xylan in combination with cellulose. While the recent quantitative proteomics study by Gold and Martin has offered a comprehensive look at the cellulosome composition and subunit expression profile, the comparison was only between cellulose- and cellobiose-grown cultures. Additional research is needed for experimental verification of the more than 40% of cellulosomal proteins that remain hypothetical, as their expression may require the presence of other substrates in the medium for induction. Therefore, in this study, we investigated qualitative and quantitative changes in the cellulosome composition of C. thermocellum during growth on a wide variety of substrates, ranging from crystalline cellulose (Avicel), amorphous cellulose (Z-TrimH), and cellobiose to combinations of cellulose with pectin and xylan. Most importantly, we investigated the cellulosomal expression profile during growth on dilute-acid pretreated switchgrass, a natural biomass substrate for cellulosic ethanol production. Quantitative proteomics (15N metabolic labeling coupled with LC-MS/MS) was used to measure substrate-specific changes in cellulosome composition in controlled replicate fermentations. By examining cellulosome expression of C. thermocellum during growth on real biomass and multiple combinations of model substrates, we aimed to uncover regulation patterns of cellulosome catalytic subunits by comparing and correlating their expression across these substrates. We hypothesize that the expression and functions of many candidate cellulosomal genes could potentially be ascertained under these complex substrate conditions.
Materials and Methods
Fermentation
C. thermocellum ATCC 27405 was a gift from Prof. Herb Strobel at the University of Kentucky, Lexington, KY. Fermentations were conducted in 3 L BioStat B jacketed glass fermentors (Sartorius BBI, Inc.) with a 2 L working volume of MTC medium at 58°C [33]. Fermentors with media containing only the carbon source were sparged with ultra-high-purity nitrogen and vigorously agitated overnight. On the next day, the rest of the media components were added and sparged for an additional 2-3 h with nitrogen. A 10% v/v inoculum of cultures pre-adapted on the various substrates in bottles was used to inoculate the fermentors, and the gas inlet and exhaust were clamped after inoculation. Samples were taken at regular intervals for protein analysis of pellet and supernatant fractions and HPLC analysis of metabolites. The supernatant protein was estimated using the Bradford assay. Growth was monitored based on the increase in pellet protein concentration. Briefly, cells were lysed in NaOH/SDS solution, cell debris was pelleted and removed, and the protein concentration in the supernatant was estimated using the BCA assay. Metabolite analysis was performed using a LaChrom Elite system (Hitachi High Technologies America, Inc.) equipped with a refractive index detector (Model L-2490). Metabolites were separated at a flow rate of 0.5 mL/min in 5 mM H2SO4 using an Aminex HPX-87H column (Bio-Rad Laboratories, Inc.).
Cellulosome Isolation
Cellulosomes were isolated from the cell-free broth of fermentations using the affinity digestion method [41]. Briefly, cultures were spun down and the cell-free broth was incubated with phosphoric-acid-swollen cellulose (100 mg per liter of cell-free broth) overnight at 4°C for cellulase binding to cellulose. On the following day, the amorphous cellulose with bound enzymes was spun down and resuspended in 20 mL dialysis buffer (50 mM Tris, 50 mM CaCl2, 50 mM DTT, pH 7.0). The amorphous cellulose suspension with bound cellulases was dialyzed in membrane bags (regenerated cellulose, SpectraPor, 6-8 kDa cut-off) at 60°C against 2 L of deionized water to initiate amorphous cellulose degradation by the enzymes. The deionized water was changed every ~60 min to avoid inhibition of the cellulases by the degradation product, cellobiose. The suspension cleared within 2-4 h, and a purified cellulase fraction was obtained after further centrifugation of the clarified solution. The total protein concentration of the isolated cellulosome samples was determined with the Lowry assay [42].
MS/MS Sample Preparation
Every cellulosome sample grown in 14N medium was mixed with the reference 15N-labeled cellulose cellulosome sample in equal proportions based on total protein concentration. All mixtures were digested using the following protocol. The proteins were denatured and reduced with 6 M guanidine and 10 mM dithiothreitol (DTT) (D9163, Sigma Chemical Co., St. Louis, MO) at 60°C for 1 h. The samples were then diluted 6-fold with 50 mM Tris, 10 mM CaCl2 (pH 7.6), and sequencing-grade trypsin (Promega, Madison, WI) was added at 1:100 (wt:wt). The first digestion was run for 5 h at 37°C and, after adding additional trypsin, the second digestion was run overnight at 37°C. Finally, the samples were reduced with 20 mM DTT for 1 h at 60°C and desalted using C18 solid-phase extraction (Sep-Pak Plus, Waters, Milford, MA).
Quantitative Proteomics Measurement
All samples were examined with liquid chromatography-tandem mass spectrometry (LC-MS/MS) using a five-step, nine-hour, split-phase MudPIT technique [43,44]. MudPIT measurements were repeated for every sample as technical replication. The samples were loaded via a pressure cell (New Objective, Woburn, MA) onto a 250-µm-I.D. back column packed with 3 cm of C18 reverse-phase resin (Jupiter 3 µm, Phenomenex, Torrance, CA) and 3 cm of strong cation exchange resin (Luna, Phenomenex). The back column was connected to a 15-cm-long, 100-µm-I.D. C18 reverse-phase PicoFrit column (New Objective) and placed in-line with an Ultimate quaternary HPLC (LC Packings, a division of Dionex, San Francisco, CA). The two-dimensional LC separation was performed with five salt pulses, each of which was followed by a reverse-phase gradient elution. The LC eluent was directly electrosprayed into an LTQ linear ion trap mass spectrometer (ThermoFinnigan, San Jose, CA). Each full scan (400-1700 m/z) was followed by three data-dependent MS/MS scans at 35% normalized collision energy with dynamic exclusion enabled. The full scans were averaged from five microscans, and the MS/MS scans were averaged from two microscans.
Quantitative Proteomics Data Analysis
All MS/MS scans were searched with the SEQUEST program [45] against a Clostridium thermocellum ATCC 27405 protein sequence database (http://genome.ornl.gov/microbial/cthe) that contained common contaminants as well as sequence-reversed analogs of each protein for estimation of peptide false discovery rates [46]. The light isotopologs of peptides from the sample proteins were identified using normal amino acid masses in the SEQUEST parameter file, and the heavy isotopologs from the reference proteins were identified using 15N-labeled amino acid masses (enzyme type: trypsin; parent mass tolerance: 3.0; fragment ion tolerance: 0.5; up to four missed cleavages allowed; fully tryptic peptides only). The SEQUEST search results for the two technical replicate measurements of a cellulosome mixture were merged and analyzed by DTASelect 1.9 [47] to yield confident protein identifications. Peptide identifications were filtered by DTASelect based on Xcorr and delCN (Xcorr > 1.8 for singly charged parent ions, > 2.5 for doubly charged, and > 3.5 for triply charged; delCN > 0.08) and assembled into proteins, retaining duplicate MS/MS scans of a peptide (DTASelect option: -t 0). The abundance ratios for the identified proteins in a cellulosome mixture were estimated with the program ProRata 1.1 [48,49]. Default parameters in ProRata were used, including a minimum profile signal-to-noise ratio cutoff of 2 for peptide quantification, and protein quantification with at least two quantified peptides and a maximum confidence interval width cutoff of 4. Finally, the biological replicate cellulosome mixtures of each comparison were combined with the Combine module of the ProRata program. To filter out proteins with poor quantification reproducibility between the two replicates, only proteins with overlapping confidence intervals in the two biological replicates were retained (Table S1). Proteins with log2 abundance ratios greater than 0.4 or less than −0.4 and confidence intervals excluding 0 were considered significantly differentially expressed.
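As a rough illustration of the filtering logic just described, the sketch below applies the same criteria (overlapping replicate confidence intervals, |log2 ratio| > 0.4, interval excluding 0) to hypothetical values; the protein names and numbers are made up, and the replicate-combining step is a simple stand-in for ProRata's Combine module, which was used in practice:

```python
# Minimal sketch of the differential-expression filter described above.
# Each entry holds a hypothetical log2(14N/15N) abundance ratio and its
# confidence interval for two biological replicates.

def overlaps(ci_a, ci_b):
    """True if two (low, high) confidence intervals overlap."""
    return ci_a[0] <= ci_b[1] and ci_b[0] <= ci_a[1]

def is_significant(log2_ratio, ci):
    """|log2 ratio| > 0.4 and confidence interval excluding 0."""
    return abs(log2_ratio) > 0.4 and (ci[0] > 0 or ci[1] < 0)

proteins = {
    "CelS": {"rep1": (-1.2, (-1.6, -0.8)), "rep2": (-1.0, (-1.4, -0.6))},
    "CelB": {"rep1": (0.1, (-0.3, 0.5)),   "rep2": (0.2, (-0.2, 0.6))},
}

for name, reps in proteins.items():
    r1, ci1 = reps["rep1"]
    r2, ci2 = reps["rep2"]
    if not overlaps(ci1, ci2):
        continue  # poor reproducibility between replicates: protein is dropped
    # Simple stand-in for ProRata's Combine module, which merges the replicates:
    combined_ratio = (r1 + r2) / 2
    combined_ci = (min(ci1[0], ci2[0]), max(ci1[1], ci2[1]))
    status = "differential" if is_significant(combined_ratio, combined_ci) else "unchanged"
    print(f"{name}: log2 ratio {combined_ratio:+.2f} -> {status}")
```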
NSAF Calculation
To estimate relative amounts of the various cellulosomal proteins, Normalized Spectral Abundance Factor (NSAF) values [50] were calculated for the proteins identified in each LC-MS/MS measurement. For a given protein, the NSAF is the spectrum count for that protein divided by the number of amino acid residues in the protein, with this quotient then divided by the sum of the same quantity over all detected proteins. The spectrum count for a protein is the number of tandem mass spectra assigned to tryptic peptides resulting from digestion of that protein. NSAF calculations included contributions to the spectrum count only from unique peptides (those appearing in only a single protein in the predicted C. thermocellum proteome). Only peptides identified as 14N (light) isotopologs of the sample proteins were counted.
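For readers who prefer a concrete form of this definition, the following minimal sketch (with hypothetical spectrum counts and protein lengths, not the study's actual code) computes NSAF values and the CipA-weighted NSAF used in Figure 4:

```python
# Minimal sketch of the NSAF calculation described above (hypothetical inputs).
# spectrum_counts: tandem-MS spectra assigned to unique peptides of each protein;
# lengths: number of amino acid residues per protein.

spectrum_counts = {"CipA": 310, "CelS": 540, "CelA": 120}  # hypothetical
lengths = {"CipA": 1853, "CelS": 741, "CelA": 488}         # hypothetical

# SAF: spectrum count divided by protein length, then normalized over all
# detected proteins to give the NSAF.
saf = {p: spectrum_counts[p] / lengths[p] for p in spectrum_counts}
total = sum(saf.values())
nsaf = {p: s / total for p, s in saf.items()}

# Weighted NSAF as used in Figure 4: each NSAF divided by the CipA NSAF.
weighted_nsaf = {p: v / nsaf["CipA"] for p, v in nsaf.items()}
for p, v in weighted_nsaf.items():
    print(f"{p}: weighted NSAF = {v:.2f}")
```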
Results and Discussion
In order to elucidate substrate-induced changes in cellulosome composition, we grew C. thermocellum on pretreated switchgrass and several other model substrates and analyzed the changes in the subunit profile of cellulosomes isolated from cell-free broth using quantitative proteomics (15N metabolic labeling and LC-MS/MS). Specifically, duplicate C. thermocellum fermentations were conducted on dilute-acid pretreated switchgrass (50% glucan, 8% xylan), crystalline cellulose (separate cultures grown in 14N- and 15N-containing media), cellobiose, Z-TrimH dietary fiber (60% amorphous cellulose, 16% hemicellulose), and combinations of cellulose-pectin (3:2 wt ratio), cellulose-xylan (3:2), and cellulose-pectin-xylan (3:1:1). Cellulosomes were isolated by the affinity digestion method from the cell-free broth of late stationary phase cultures during growth on these various substrates. Each cellulosome sample was mixed with an equal proportion of the reference 15N-labeled cellulosome sample isolated from cellulose cultures for differential comparison analysis by mass spectrometry.
Fermentation
In fermentations containing crystalline cellulose, either alone or in combination with pectin, xylan, or both, the overall biomass yield, based on total cellular protein levels, was proportional to the amount of cellulose present in the medium (Figure 2). This is not surprising, as C. thermocellum cannot grow on xylan or pectin monomers. The 14N- and 15N-labeled cellulose fermentations yielded similar protein levels, while the mixed substrates supported lower growth rates and total yield, with a growth lag. Growth on a blend of 40% pectin and 60% cellulose (w/w) was significantly delayed compared to the other mixed-substrate cultures (Figure 2). HPLC analysis of metabolite production (Table 1) revealed an average combined ethanol+acetate yield of 0.37-0.50 g per g of starting glucan during growth on the various substrates, with the lowest yield on pretreated switchgrass. However, wet chemistry analysis of spent biomass from switchgrass fermentations revealed the presence of unconsumed glucan; hence, the metabolite yield calculated on a starting-glucan basis is low due to incomplete conversion of the glucan in switchgrass fermentations. The acetate to ethanol ratio ranged from 0.9 on crystalline cellulose to 1.88 on pretreated switchgrass (Table 1). The increased acetate concentrations in pretreated switchgrass and Z-TrimH fermentations (acetate:ethanol = 1.5; Table 1) may be due to their hemicellulose content (8% and 16% xylan, respectively), since acidic deacetylation of hemicellulose releases additional acetic acid into the medium.
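The yield arithmetic behind Table 1 is straightforward; the sketch below illustrates it with hypothetical end-point titers (the actual concentrations are not reproduced here):

```python
# Minimal sketch of the yield arithmetic behind Table 1. The titers below are
# hypothetical placeholders; the study's actual concentrations are not shown.

ethanol_g_per_l = 1.6        # hypothetical end-point ethanol titer
acetate_g_per_l = 1.5        # hypothetical end-point acetate titer
glucan_loaded_g_per_l = 7.0  # hypothetical starting glucan in the medium

# Combined yield on a starting-glucan basis; note this understates conversion
# when glucan is left unconsumed, as observed for pretreated switchgrass.
combined_yield = (ethanol_g_per_l + acetate_g_per_l) / glucan_loaded_g_per_l
acetate_to_ethanol = acetate_g_per_l / ethanol_g_per_l

print(f"ethanol+acetate yield: {combined_yield:.2f} g per g starting glucan")
print(f"acetate:ethanol ratio: {acetate_to_ethanol:.2f}")
```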
Under each growth condition, there was a significant increase over time in the protein amount present in the cell-free growth medium (Figure 2), beginning at approximately 7.5 hours when grown on cellulose and later for the slower growing cultures. To demonstrate that this increase was due to cellulosomes being released into the medium, as reported earlier [51], cellulases present in the supernatant of late stationary phase cultures for each substrate were captured and isolated using the affinity digestion method. The isolated proteins were examined by SDS PAGE separation which yielded a very consistent profile representative of cellulosome components [32] between the biological replicates for each of the different substrate growth conditions. Substrate-related changes in cellulosomal subunit profiles were not readily apparent based on the SDS-PAGE gel pattern (Figure 3).
Proteomics Analysis
The metabolic labeling strategy coupled with LC-MS/MS technology was used to identify and quantify the cellulosomal proteins, using 15N-labeled cellulosomes isolated from cellulose cultures as the reference. 14N-labeled cellulosomes isolated from the cell-free broth of duplicate fermentations on seven different substrates were mixed in a 1:1 ratio with 15N-labeled cellulosomes isolated from cellulose cultures. In total, 14 cellulosome sample mixtures were prepared for analysis by mass spectrometry in technical replicate runs. Protein data from biological and technical replicate LC-MS/MS MudPIT runs were combined, and expression differences were normalized to the CipA scaffoldin protein across the different comparisons (Figures 4, 5, 6). The affinity digestion-based cellulosome isolation procedure in effect captures the whole complex, which is built on the scaffoldin protein, thus justifying this normalization of the data on a per-scaffoldin basis [27,31].
Subunits Identification
In total, mass spectrometry analysis detected 67 cellulosomal proteins between the seven samples (Figures 4, 5, 6), which includes 80% (59/73) of the dockerin module containing proteins and 100% (8/8) of the cohesin containing proteins in C. thermocellum, based on genome analysis [27]. Among the different cellulosomes analyzed, cellobiose samples yielded the highest (64) and switchgrass the lowest (53) number of protein identifications; the latter is likely due to the complexity of the growth substrate affecting the quality of the cellulosomal preparation.
We identified 16 new cellulosomal components in this study ( Figures 5, 6, highlighted in blue). This represents a 30% (16/53) increase in the total number of subunits identified and biochemically verified to date, including the two recent comprehensive studies by Gold/Martin and Zverlov/Schwarz [30,31]. Out of the 16 newly detected cellulosomal subunits, 7 proteins were detected under all conditions tested, while others were observed only in a subset of the samples. While many of the newly detected subunits were low abundant proteins, two proteins (Cthe0435 and Cthe0452) appeared to be fairly abundant, based on NSAF (Figure 4). In fact, Cthe0452, a potential anchor protein containing one type I cohesin, was among the 20 proteins with the highest spectral abundance (weighted NSAF) during growth on cellobiose and Z-TrimH (Figure 4).
We did not detect 14 out of 81 predicted cellulosome-related structural and catalytic proteins in C. thermocellum under any of the conditions, including the only representative proteins from glycoside hydrolase families GH81, GH2 and GH39 encoded in C. thermocellum genome. Among the undetected proteins, Cthe2360 (CelU, GH9) and Cthe3136 (S8, S53 peptidase) have been observed earlier by Zverlov et al [30]. Many of the undetected proteins are encoded by contiguous genes (e.g. Cthe2137-2138, Cthe2194-2195-2196-2197, and Cthe2949-2950) suggesting that these are likely inducible 'operons' and, hence, were not expressed due to the lack of their potential 'inducers' under the growth conditions tested.
Subunit Abundance Distribution
Weighted NSAF values were used to determine a rough estimate of the relative abundance of cellulosomal components within the different samples ( Figure 4). NSAF values [50] for each protein within a sample were divided by the NSAF for the scaffoldin CipA protein in that sample to yield weighted NSAF values.
The majority (49/67) of the identified proteins were detected under all growth conditions tested although their relative amounts within each sample were different depending on the growth substrate. Based on weighted NSAF data, the 20 most abundant proteins were similar across all the substrates containing crystalline cellulose, but not during growth on Z-TrimH or cellobiose ( Figure 4). It appears that the cellulosomal proteins follow the 10-60 or 20-80 law, i.e., the top 10 or 20 most abundant proteins (based on weighted NSAF) account for 60% or 80% of the total cellulosomal protein fraction, respectively. The exoglucanase CelS had the highest spectral abundance under all growth conditions including cellobiose. However, this is in contrast to a previous study which reported xylanases as the most abundant components in cellobiose-grown cellulosomes [31]. This may be due to differences between the NSAF approach that we used and the Protein Abundance Index method used by Gold and Martin.
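The "10-60 or 20-80" observation can be checked directly from the weighted NSAF table. A minimal sketch follows; the handful of values shown is hypothetical and truncated, so the demo uses small N, but run on the full 67-protein table, the top-10 and top-20 shares would approach the 60% and 80% figures quoted above:

```python
# Minimal sketch of the top-N abundance share check (hypothetical values).

weighted_nsaf = {"CelS": 0.21, "CipA": 0.12, "CelA": 0.08, "CelK": 0.06}

def top_n_share(abundances, n):
    """Fraction of total abundance contributed by the n most abundant proteins."""
    ranked = sorted(abundances.values(), reverse=True)
    return sum(ranked[:n]) / sum(ranked)

print(f"top 2 share: {top_n_share(weighted_nsaf, 2):.0%}")
print(f"top 3 share: {top_n_share(weighted_nsaf, 3):.0%}")
```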
Quantitative Proteomics
Relative quantitative expression data for the cellulosomal proteins during growth on pretreated switchgrass and the other biomass carbohydrates, as compared to growth on cellulose, are reported in Figures 5, 6. The abundance ratios in the quantitative expression data were normalized to the CipA protein to examine changes in expression of the cellulosomal proteins on a per-scaffoldin basis across the different growth conditions. The comparison between 14N- and 15N-labeled cellulose-grown cellulosome samples served as a control to evaluate the biological and technical reproducibility of the cellulosomal protein expression analysis and to determine criteria for differential expression. Based on this comparison, proteins with relative log2 abundance ratios greater than 0.4 or less than −0.4 and confidence intervals excluding 0 were considered significantly differentially expressed.
On average, 49 proteins were quantified in each of the seven comparisons for the different growth substrates (Figures 5, 6). Cellulosome samples isolated from cultures grown on cellobiose, amorphous cellulose (Z-TrimH) and pretreated switchgrass showed the most distinct differences in cellulosomal protein levels (as compared to crystalline cellulose), with 40 (78%), 28 (56%) and 25 (61%) of the quantified proteins, respectively, showing significant differential expression (Figures 5, 6). Relatively fewer proteins (13-15) were differentially expressed in the case of cellulosomes isolated from cultures grown on cellulose in combination with xylan or pectin, or both. The control comparison between 14N- and 15N-cellulose-grown cellulosome samples showed minimal technical and biological variability in cellulosomal protein expression, with only 6 of the quantified proteins showing significant differential expression based on the criteria described above. Grouping the proteins based on their structural function or catalytic activity identified growth substrate-related trends in cellulosomal protein expression for structural proteins, exoglucanases, endoglucanases belonging to the GH5 and GH9 families, xylanases, and other hemicellulases, as outlined below.
Structural Proteins
All known proteins containing type I and type II cohesin (Coh I, Coh II) modules in C. thermocellum (see Figure 1) were detected in this study, including two proteins, Cthe0452 and Cthe0735, that have not been observed experimentally prior to this study.
Among the Coh I-containing proteins, OlpA (with 1 Coh I domain) has been suggested to play an intermediary role in the assembly of the cellulosome complex by binding the catalytic units prior to their transfer and assembly on the scaffoldin CipA (Figure 1) [52,53]. Another protein with a single Coh I, Cthe0452, was detected with a significantly higher weighted NSAF than OlpA (Figure 4) under all growth conditions. This observation may suggest a yet unknown but potentially important role for this protein in the cellulosome assembly process, as has been hypothesized for OlpA. We also observed increased expression of the Cthe0452 protein during growth on cellobiose (Figures 5, 6), as compared to cellulose-grown cellulosomes, in our quantitative proteomics analysis. This may be related to a similar pattern in expression observed for Cthe0435, a subunit of unknown function, as the dockerin module on the latter is known to interact specifically with the cohesin on Cthe0452 (Carlos Fontes, personal communication).
The affinity digestion method of cellulosome isolation used in this study is targeted towards the capture of the subunit-laden CipA scaffoldin via its cellulose-binding module. Therefore, the detection of type II cohesin containing proteins that anchor the scaffoldin CipA to the cell surface (Figure 1) supports the hypothesis that the detachment of intact cellulosomes from the cell surface in mature cultures of C. thermocellum is possibly achieved by proteolytic cleavage of the anchor proteins [10,51].
Among the five proteins with type II cohesin domains in C. thermocellum (Figure 1), except Cthe0735 (with 1 Coh II domain) which was detected only during growth on cellobiose, all other proteins were detected under all growth conditions (Figures 4, 5, 6). In general, the relative spectral abundance of Coh II containing anchor proteins within each sample was inversely proportional to the number of cohesin modules borne by them, with SdbA (with 1 Coh II domain) being the most spectrally abundant and OlpB (with 7 Coh II domains) or Cthe0736 (with 7 Coh II domains), the least abundant across the different cellulosomal preparations (based on weighted-NSAF data, Figure 4). These results are in contrast to earlier reports of OlpB or Orf2p being the most prominent anchor protein during growth on cellulose or cellobiose, respectively [31,35]. Dror et al., based on transcript levels, reported a 10-fold excess in the number of cohesins available on the anchor proteins for attaching the scaffoldin CipA to the cell surface under conditions of low growth rate [35]. In this study, we estimated a 3.5-6 fold excess of cohesins over CipA at the protein level, based on the spectral abundance, during growth on different substrates (Figure 4).
However, it should be noted that the spectral abundance of a protein is influenced by several factors including the detectability of its peptides. It is known that the structural proteins of the cellulosome, namely the anchors and scaffoldin, are often glycosylated to protect the complex against proteolysis [54]. This leads to differences in their cleavability by trypsin and also the detectability of the resulting peptides due to mass-shifts resulting from glycosylation, thus influencing the spectral abundance. Therefore, studies involving absolute quantitation of these proteins are needed to investigate further the observed trends in type II cohesion-dockerin ratios, which would also provide insight into the plasticity or elasticity of the cellulosome complex.
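The 3.5-6 fold cohesin excess quoted above is essentially an abundance-weighted sum over the anchor proteins, subject to the detectability caveats just noted. A minimal sketch follows, with hypothetical weighted NSAF values (normalized so that CipA = 1) and the Coh II module counts named in the text; the Orf2p module count is an assumption here:

```python
# Minimal sketch of the Coh II : CipA excess estimate (hypothetical abundances).
# coh2_modules: number of type II cohesin modules per anchor protein.
# weighted_nsaf: spectral abundance per CipA scaffoldin (CipA = 1.0).

coh2_modules = {"SdbA": 1, "OlpB": 7, "Orf2p": 2, "Cthe0736": 7}  # Orf2p count assumed
weighted_nsaf = {"SdbA": 2.1, "OlpB": 0.2, "Orf2p": 0.6, "Cthe0736": 0.15}  # hypothetical

# Type II cohesins available per CipA scaffoldin = sum over anchors of
# (abundance relative to CipA) x (Coh II modules carried by that anchor).
excess = sum(weighted_nsaf[p] * coh2_modules[p] for p in coh2_modules)
print(f"estimated Coh II : CipA ratio ~ {excess:.1f}")  # ~5.8 with these values
```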
SdbA exhibited the second highest weighted NSAF in mass spectrometry analysis of cellobiose-grown cellulosomes (Figure 4). Correspondingly, quantitative proteomics analysis also showed a >5-fold increase in expression of SdbA during cellobiose growth as compared to cellulose-grown cellulosomes (Figures 5, 6), consistent with a previous study [31]. Higher levels of SdbA were also observed under conditions of relatively slow growth on pretreated switchgrass and fast growth on Z-Trim (Figures 5, 6), so SdbA is presumably regulated in a growth-rate-independent manner, as reported earlier [35]. On the other hand, a growth rate and/or carbon source dependent regulation has been reported for the genes encoding the anchor proteins OlpB and Orf2p at the transcript level [35]. Consistent with these results, we observed lower levels of the OlpB protein under fast-growing conditions on cellobiose (Figures 5, 6), as also reported by Gold and Martin [31]. However, no such correlation was observed for Orf2p (Figures 2, 5, 6).
Interestingly, recent genome sequencing revealed the presence of another type II cohesin-containing protein, Cthe0736 (with 7 Coh II domains), but the protein lacks the surface layer homology (SLH) domain needed for cell surface anchoring. This suggests the potential presence of "free", non-cell-attached cellulosomes, formed via the type II interaction between the scaffoldin CipA and the Cthe0736 protein, in C. thermocellum. The "free" cellulosomes could aid in targeting catalytic subunits involved in the hydrolysis of non-cellulosic material to the surfaces of complex substrates, exposing the preferred substrate, cellulose, for hydrolysis and consumption. Alternatively, these "free" cellulosomes may yet remain cell-associated (if not cell-attached) through their integration in the glycocalyx matrix of the polycellulosomal complexes. C. thermocellum is known to form protuberant structures of polycellulosomes on the cell surface consisting of several hundred cellulosomes with masses up to 100 MDa [51]. In this study, Cthe0736 showed increased expression during growth on all substrates except pretreated biomass, as compared to cellulose (Figures 5, 6). Further research is needed to understand the observed trends in expression and to unravel the function and regulation patterns of this novel protein.
Exoglucanases
All four known cellulosomal exoglucanases in C. thermocellum belonging to families GH48 (CelS), GH9 (CelK, CbhA) and GH5 (CelO) were detected in this study. CelS was the most spectrally abundant protein, while CelO was a relatively minor component, in the cellulosomal preparations irrespective of the growth substrate (based on weighted-NSAF data; Figure 4). Our results confirm earlier reports that CelS is the major component in C. thermocellum cellulosomes [27,55,56,57]. The four exoglucanases accounted for 18-30% of the total spectral abundance in the cellulosomal fractions under the different growth conditions, with the least proportion during growth on cellobiose and Z-Trim ( Figure 4).
Correspondingly, quantitative proteomics showed lower expression of all four exoglucanases during growth on Z-Trim (60% amorphous cellulose) than on cellulose. CelS and CelK also showed decreased expression during growth on cellobiose, as compared to growth on cellulose, with no significant difference in the expression of CbhA or CelO (Figures 5, 6). Previous studies have reported a growth rate-dependent regulation of the celS gene, with reduced expression at both the transcript [34,37] and protein level [34] in cellobiose-grown cells as compared to crystalline cellulose-grown cells. In this study, the observed trend of lower CelS protein expression under fast-growing conditions on Z-Trim and cellobiose (data not shown) is consistent with this type of regulation. Moreover, CelS has higher activity on amorphous than on crystalline cellulose [57], which might explain the need for lower levels of CelS during growth on Z-Trim. Recently, Gold and Martin also reported a similar trend in expression of the CelS and CelK proteins, with decreased levels during growth on cellobiose as compared to cellulose [31].
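The substrate-to-substrate comparisons in this section reduce to fold changes between normalized abundance values. A minimal sketch of that computation is shown below; the abundance values are hypothetical placeholders, and the study's actual normalization and significance pipeline is not reproduced.

```python
import math

# Hypothetical normalized abundances (e.g. NSAF) for two growth conditions.
abundance_cellulose = {"CelS": 0.080, "CelK": 0.030, "CbhA": 0.025, "CelO": 0.004}
abundance_cellobiose = {"CelS": 0.030, "CelK": 0.012, "CbhA": 0.024, "CelO": 0.004}

for protein in abundance_cellulose:
    ratio = abundance_cellobiose[protein] / abundance_cellulose[protein]
    log2fc = math.log2(ratio)  # negative log2FC = lower on cellobiose
    direction = "down" if log2fc < 0 else "up"
    print(f"{protein}: {ratio:.2f}-fold ({direction}, log2FC = {log2fc:+.2f}) "
          f"on cellobiose vs cellulose")
```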
On the other hand, the duplicated family 9 cellobiohydrolases, CelK and CbhA, were both expressed at higher levels during growth on pretreated switchgrass, as compared to cellulose (Figures 5, 6). Among the four cellulosomal exoglucanases in C. thermocellum, the GH9 exocellulases attack the cellulose chain from the non-reducing end, whereas CelS and CelO attack from the reducing end [19,26]. Therefore, the increased expression of CelK and CbhA suggests an enhanced need for exo-exo synergy between these two classes of exocellulases, with different specificities, in cells grown on natural plant biomass, allowing the cellulose in this complex substrate to be attacked from both reducing and non-reducing ends [58].
Endoglucanases
All known cellulosomal endoglucanases in C. thermocellum belonging to glycoside hydrolase families GH5 (9 proteins), GH8 (1 protein) and GH9 (12 proteins) were detected in this study, with the exception of the GH9 subunit CelU (Cthe2360) (Figures 4, 5, 6). We also experimentally confirmed a new GH5 subunit (Cthe3012) as a cellulosomal component. Multimodular cellulosomal proteins containing other catalytic modules in addition to GH5 and GH9 domains are grouped separately in Figures 5, 6. The following discussion assumes endoglucanase activity for all previously uncharacterized GH5 and GH9 cellulosome components.
In general, CelA (GH8) and the recently discovered GH5 subunit Cthe0821 were the two most spectrally abundant endoglucanases (based on weighted-NSAF data, Figure 4) in cellulosomal preparations during growth on the various substrates. This is consistent with previous reports of CelA being the major endoglucanase in C. thermocellum cellulosomes [3]. However, cellulosomes isolated from cultures grown on pretreated switchgrass were an exception and contained Cthe0821 and CelF (Cthe0543, GH9) as the most spectrally abundant endoglucanase components (Figure 4). In general, the GH5 endoglucanases Cthe0821, CelB, CelG and CelE and the GH9 endoglucanases CelQ, CelF, CelT, CelR, CelW and CelJ were among the top 20 catalytic components with the highest weighted-NSAF in all cellulosomal preparations, irrespective of the growth substrate.
Quantitative proteomics analysis revealed a general trend toward decreased expression of GH5 endoglucanases during growth on cellobiose, as compared to cellulose (Figures 5, 6). This is in contrast to a recent study which reported increased expression of GH5 proteins during cellobiose growth [31]. However, our results are consistent with a transcript-level study by Dror et al., which showed a growth rate-dependent regulation of the celB and celD genes, with decreased transcript levels in cellobiose cultures as compared to cellulose [36]. On the other hand, no significant changes in expression of GH5 endoglucanases were observed during growth on Z-Trim, with the exception of decreased CelL expression (Figures 5, 6). Quantitative proteomics also showed increased expression of the GH5 endoglucanase Cthe0821 in switchgrass-grown cellulosomes, but lower levels of CelB and Cthe2193, as compared to cellulose (Figures 5, 6).
Endoglucanases belonging to the GH9 family also showed a trend toward decreased expression during growth on cellobiose, as compared to growth on cellulose (Figures 5, 6). These results are broadly in agreement with previous studies, which also reported decreased expression of GH9 endoglucanases in cellobiose cultures, both at the transcript [36] and the protein level [31]. In general, GH9 endoglucanases were also expressed at lower levels during growth on Z-Trim, with the exception of the products of two contiguous genes, Cthe2760 (CelV)-2761, which showed higher expression as compared to cellulose (Figures 5, 6). These results show that GH9 endoglucanases are specifically down-regulated in the absence of crystalline cellulose in the growth medium. On the other hand, the increased expression of several GH9 endoglucanases, CelN, CelF, CelV and Cthe0433, during growth on pretreated switchgrass (Figures 5, 6) highlights the important role of this family of endoglucanases in the degradation of natural plant biomass.
As discussed above, we observed a differentiation with respect to the expression of GH5, but not GH9, endoglucanases between cellobiose- and Z-Trim-grown cultures. While GH9 endoglucanases showed decreased expression during growth on both cellobiose and Z-Trim, GH5 endoglucanases showed decreased expression only during growth on cellobiose (Figures 5, 6). Put another way, while GH9 endoglucanases showed decreased expression in the absence of crystalline cellulose, GH5 endoglucanases showed decreased expression only in the absence of cellulose in general, whether amorphous or crystalline, in the growth medium. Taken together, these results suggest an important role for the GH9 endoglucanases in the decrystallization of crystalline cellulose. We propose that GH9 endoglucanases attack the crystalline surface of cellulose fibrils, aiding in the creation of amorphous cellulose regions that become targets for hydrolysis by GH5 endoglucanases.
Among the multimodular proteins, the increased expression of CelH (Figures 5, 6) during growth on cellobiose, Z-Trim and pretreated switchgrass may be attributable to the presence of a second, GH26 catalytic unit in the protein. CelE (GH5 and CE2 domains) showed increased expression during growth on combinations of cellulose, xylan and pectin in all three conditions, but not during growth on switchgrass, possibly due to the presence of its carbohydrate esterase catalytic unit.
Xylanases
Among the known cellulosomal xylanases, XynA, XynC, XynZ and XghA showed higher expression during growth on cellobiose, as compared to cellulose-grown cellulosomes (Figures 5, 6). In fact, xylanases accounted for 22% of the total spectral abundance in the cellulosomal fraction during growth on cellobiose, as compared to 12% during growth on cellulose (based on weighted-NSAF data, Figure 4). These results are consistent with the growth rate-independent regulation of xylanases reported in earlier studies [8,36], with higher expression of xylanases during growth on cellobiose as compared to cellulose [31].
C. thermocellum cannot grow on the pentose sugars [3] derived from the catalytic activities of xylanases and other hemicellulases. Therefore, these enzymes are thought to play a vital role in exposing the preferred substrate, cellulose, in plant cell walls through the degradation of hemicellulose and other polymeric substrates [1]. Interestingly, xylanases showed decreased expression during growth on pretreated switchgrass relative to growth on cellulose (Figures 5, 6), which could be detrimental to the unmasking of the preferred substrate, cellulose, during growth on natural plant biomass. However, it is possible that the residual hemicellulose (8% xylan) in pretreated switchgrass is buried under lignin-cellulose complexes, thus minimizing any potential xylanase-inducing effect. The lack of pectin in pretreated switchgrass may also explain the significant downregulation (>16-fold) of one of the few pectate-active enzymes (Cthe0246) in C. thermocellum during growth on plant biomass versus growth on cellulose.
To date, this study provides the most comprehensive comparison of cellulosomal compositional changes in C. thermocellum in response to different carbon sources. Up to 80% of the known dockerin-containing subunits were identified in this study. The quantitative results show a clear pattern in the regulation of cellulosomal components and their individual levels, better suiting the organism's needs for growth under different conditions. While the results highlight the importance of the glycoside hydrolase family 9 (GH9) exoglucanases and endoglucanases in the degradation of plant biomass, they also point to potential bottlenecks, such as the downregulation of xylanases and pectinases, that may compromise the cells' ability to unwrap the intertwined polymeric compounds in plant cell walls. Such studies are vital to engineering a strain that is best suited to grow on specific substrates of interest, and they provide the building blocks for constructing designer cellulosomes with tailored enzyme compositions for industrial ethanol production.
Nigral overexpression of alpha-synuclein in the absence of parkin enhances alpha-synuclein phosphorylation but does not modulate dopaminergic neurodegeneration
Alpha-synuclein is a key protein in the pathogenesis of Parkinson’s disease. Mutations in the parkin gene are the most common cause of early-onset autosomal recessive Parkinson’s disease, probably through a loss-of-function mechanism. However, the molecular mechanism by which loss of parkin function leads to the development of the disease and the role of alpha-synuclein in parkin-associated Parkinson’s disease is still not elucidated. Conflicting results were reported about the effect of the absence of parkin on alpha-synuclein-mediated neurotoxicity using a transgenic approach. In this study, we investigated the effect of loss of parkin on alpha-synuclein neuropathology and toxicity in adult rodent brain using viral vectors. Therefore, we overexpressed human wild type alpha-synuclein in the substantia nigra of parkin knockout and wild type mice using two different doses of recombinant adeno-associated viral vectors. No difference was observed in nigral dopaminergic cell loss between the parkin knockout mice and wild type mice up to 16 weeks after viral vector injection. However, the level of alpha-synuclein phosphorylated at serine residue 129 in the substantia nigra was significantly increased in the parkin knockout mice compared to the wild type mice while the total expression level of alpha-synuclein was similar in both groups. The increased alpha-synuclein phosphorylation was confirmed in a parkin knockdown cell line. These findings support a functional relationship between parkin and alpha-synuclein phosphorylation in rodent brain.
Background
Parkinson's disease (PD) is the second most common neurodegenerative disorder. Neuropathologically, it is characterized by the progressive loss of dopaminergic neurons in the substantia nigra (SN) and the presence of proteinaceous intracellular inclusions called Lewy bodies (LBs) and Lewy neurites in the surviving neurons [1]. Although the etiology of sporadic PD remains still unclear, the discovery of genes linked to familial forms of the disease has improved our understanding of the pathogenic mechanisms leading to PD.
Point mutations and multiplications of the α-synuclein (α-SYN) gene, SNCA, cause a rare familial autosomal dominant form of PD [2]. α-SYN is a small protein of 140 amino acids that is widely expressed in the brain and localizes predominantly to presynaptic terminals [3]. The biological function of α-SYN remains unknown, although its involvement in dopamine transmission and biosynthesis [4,5], synaptic plasticity [6] and the turnover of synaptic vesicles has been suggested [7]. Under pathological conditions, including mutations and increased expression levels, α-SYN has the propensity to adopt a β-sheet-rich conformation, which leads to the formation of oligomers and fibrillar aggregates [8]. This fibrillar form of α-SYN is the main protein component of LBs and Lewy neurites, indicating that α-SYN plays a crucial role in the pathogenesis of PD [9]. Moreover, animal models based on overexpression of wild type (WT) or mutant α-SYN recapitulate some of the main hallmarks of PD, including neurodegeneration, motor dysfunction and inclusion formation [10].
Mutations in the parkin gene are the major cause of early-onset autosomal recessive PD [11]. Parkin has been identified as an E3 ubiquitin-ligating enzyme, catalyzing the attachment of ubiquitin to substrate proteins, which consequently leads to their proteasomal degradation [12][13][14]. It was therefore suggested that the loss of parkin function due to disease-causing mutations might damage neurons through the accumulation of toxic proteins. On the other hand, evidence is accumulating that parkin ubiquitin ligase activity also contributes to non-degradative poly- and mono-ubiquitination, which are involved in alternative cellular pathways, including mitochondrial quality control [15] and the activation of signal transduction cascades [16,17]. However, the molecular mechanism by which loss of parkin function leads to the development of PD is still not completely elucidated. More specifically, the role of α-SYN in parkin-associated PD remains unclear, although there are lines of evidence suggesting a potential link between parkin and α-SYN. On the one hand, it has been reported that the unmodified form of α-SYN does not interact with parkin [18], and no accumulation of α-SYN was detected in the brain of parkin knockout (parkin −/− ) mice [19][20][21]. On the other hand, in about two thirds of the seventeen neuropathological reports of patients with parkin mutations published to date, no LBs were found, indicating that parkin might play a role in LB formation [22]. Furthermore, overexpression of parkin provides protection against α-SYN toxicity in a variety of cellular and animal models [23][24][25][26][27][28][29]. Conflicting results were published in a number of studies using a transgenic approach to investigate the effect of parkin on α-SYN-induced neurotoxicity. The neurodegenerative phenotype was unexpectedly delayed in the absence of parkin in A30P α-SYN transgenic mice [30], while no effect of the loss of parkin on neuropathology was found in A53T α-SYN transgenic mice [31]. In a third publication, an increase in damaged mitochondria in SN neurons and a reduction of complex I activity were reported in old double-mutant mice generated by crossing parkin −/− mice with mice overexpressing doubly mutated A53T/A30P α-SYN [32]. Overall, these results in transgenic mice showed that the absence of parkin did not clearly affect α-SYN-induced neurodegeneration, apart from some minor changes.
In the present study, we chose an alternative approach to investigate the effect of the absence of parkin on α-SYN-induced cell death in adult rodent brain using viral vector technology. We overexpressed human WT α-SYN in the SN of parkin −/− and wild type (parkin +/+ ) mice with recombinant adeno-associated viral (rAAV) vectors. Subsequently, we analyzed the degree of neurodegeneration and synucleinopathy in the SN of these mice.
Results
Absence of parkin does not increase the sensitivity to dopaminergic degeneration induced by a high dose of rAAV2/7-WT α-SYN

In a previous study, we showed that rAAV2/7-mediated overexpression of WT α-SYN in the SN of mice resulted in a dose-dependent, progressive dopaminergic neurodegeneration [33]. Therefore, to study the effect of the absence of parkin on α-SYN-induced dopaminergic cell death, in a first experiment we stereotactically injected a rAAV2/7 vector encoding WT α-SYN into the right SN of adult parkin −/− and parkin +/+ mice at a vector titer of 4E+11 genome copies (GC)/ml (high vector titer; see experimental design in Table 1). The animals were analyzed at 1 week and 4 weeks after injection. As an additional control, a group of parkin −/− and parkin +/+ mice was injected with a similar titer of rAAV2/7 vector coding for enhanced green fluorescent protein (eGFP).
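As a quick sanity check on the vector doses used here, the sketch below converts the reported titers and injection volume into total genome copies delivered per mouse; this is simple arithmetic implied by the text, not a calculation reported by the authors.

```python
# Total vector genome copies (GC) delivered per injection:
# dose = titer (GC/ml) x injected volume (ml). The 2 ul injection volume and
# the two titers come from the text; the derived totals are illustrative.

INJECTION_VOLUME_ML = 2e-3  # 2 ul

titers_gc_per_ml = {
    "high dose (4E+11 GC/ml)": 4e11,
    "low dose (1E+11 GC/ml)": 1e11,
}

for label, titer in titers_gc_per_ml.items():
    total_gc = titer * INJECTION_VOLUME_ML
    print(f"{label}: {total_gc:.1e} GC per mouse")
# -> high dose: 8.0e+08 GC, low dose: 2.0e+08 GC (a 4-fold difference, as stated)
```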
Immunohistochemical stainings for α-SYN and eGFP revealed high transgene expression in the SN for both vectors (Figure 1). With confocal analysis, we observed a transduction efficiency of the dopaminergic neurons of approximately 85% (Figure 2). To investigate the degree of dopaminergic neurodegeneration, we stereologically quantified the number of tyrosine hydroxylase (TH)-positive cells in the SN. At 4 weeks after injection, rAAV2/7-WT α-SYN had induced a dopaminergic lesion of 59 ± 6% compared to the non-injected side in the parkin +/+ mice, which is in agreement with our previous study [33] (Figures 1 and 3A). In the parkin −/− mice, a comparable dopaminergic cell loss of 52 ± 6% was observed. The rAAV2/7-eGFP injected mice did not show any loss of TH-positive cells (Figures 1 and 3B). The loss of dopaminergic terminals in the striatum was comparable between parkin +/+ (24 ± 5.2%) and parkin −/− (33 ± 6.9%) mice (Figure 3C). Thus, we conclude that the sensitivity of parkin −/− and parkin +/+ mice to dopaminergic degeneration induced by a high dose of rAAV2/7-WT α-SYN is similar.
Table 1. Experimental design. Adult 2- to 4-month-old parkin +/+ and parkin −/− mice were stereotactically injected in the right SN with 2 μl of rAAV2/7-WT α-SYN vector at a titer of 4E+11 GC/ml (high α-SYN dose) or 1E+11 GC/ml (low α-SYN dose), or rAAV2/7-eGFP vector at a titer of 4E+11 GC/ml. At the mentioned time points after injection, animals were perfused for immunohistochemical analysis.

Overexpression of α-SYN with rAAV2/7 increases phosphorylation of α-SYN at serine residue 129 in parkin −/− mice compared to parkin +/+ mice

In a next step, we determined the number of cells in the SN that were positive for α-SYN phosphorylated at serine residue 129 (P-S129), a form of α-SYN which is considered to be the pathological form of α-SYN and the most abundant modification of α-SYN in LBs [34,35]. Therefore, we performed an immunohistochemical staining with an antibody specifically recognizing this phosphorylated form of α-SYN [34] and stereologically quantified the number of positive cells in the injected side of the whole SN. No P-S129-positive cells were observed in the non-injected side of the SN, indicating that only phosphorylation of the overexpressed human α-SYN is within the limits of detection. At 1 week after injection, no difference was observed between parkin −/− mice (3563 ± 259 cells) and parkin +/+ mice (3371 ± 416 cells). At 4 weeks after injection, the number of cells positive for P-S129 α-SYN was increased in both groups when compared to 1 week. Interestingly, this number was significantly higher in the parkin −/− mice (7632 ± 291 cells) than in the parkin +/+ mice (6288 ± 495 cells) (p = 0.027) (Figure 4A-B). This higher number of P-S129 α-SYN-positive cells in the parkin −/− mice was not caused by higher expression levels of α-SYN, since the total number of α-SYN-positive cells in the SN was similar in the parkin +/+ and parkin −/− mice at both time points (Figure 4C). Differences in the affinities of the P-S129 α-SYN antibody and the α-SYN antibody explain the apparently lower number of α-SYN-positive cells compared to the number of P-S129 α-SYN-positive cells. We also stained the striatum for P-S129 α-SYN to check whether the dopaminergic terminals also contain phosphorylated α-SYN. We detected P-S129 α-SYN-positive neuritic inclusions (Figure 4E), but no difference in the number of these inclusions was observed between parkin +/+ and parkin −/− mice at 4 weeks after injection (Figure 4D).
A low dose of rAAV2/7-WT α-SYN induces slower but similar dopaminergic degeneration and increased α-SYN phosphorylation in parkin −/− mice

The enhanced phosphorylation of α-SYN in the parkin −/− mice observed in the previous experiment was not paralleled by an increase in dopaminergic cell death. We reasoned that this might be due to the very fast and robust degenerative process, precluding the detection of subtle differences. Therefore, we decided to repeat the experiment with a 4 times lower dose of rAAV2/7-WT α-SYN (1E+11 GC/ml, low vector titer). In this experiment, animals were analyzed at 1 week, 4 weeks, 8 weeks and 16 weeks after injection (see experimental design in Table 1). As expected, stereological quantification of the number of surviving dopaminergic neurons revealed a milder dopaminergic cell loss at 4 weeks (approximately 40%) compared to the high titer of rAAV2/7-WT α-SYN. At 8 weeks and 16 weeks, the dopaminergic degeneration was not further progressive in either group, suggesting that with the low titer of rAAV2/7-WT α-SYN the maximum amount of degeneration was already reached at 4 weeks after injection. However, the TH-positive cell loss was again comparable between the parkin −/− and parkin +/+ mice at all time points (e.g. at 4 weeks, 35 ± 7% and 41 ± 5% TH-positive cell loss, respectively, compared to the non-injected side) (Figure 5A-B). These results confirm that the absence of parkin does not alter the susceptibility to α-SYN-induced dopaminergic cell death.
We also investigated the effect of the absence of parkin on α-SYN phosphorylation in the low dose α-SYN set-up. The number of P-S129 α-SYN-positive cells in the injected side progressively increased over time until 8 weeks and remained stable at 16 weeks after viral vector delivery to the SN (Figure 6A-B). Here again, we observed a significantly higher level of α-SYN phosphorylation in the parkin −/− mice compared to the parkin +/+ mice at 8 weeks and 16 weeks after injection (9389 ± 337 versus 7263 ± 532 cells at 8 weeks, p = 0.0179; 7954 ± 518 versus 6000 ± 313 cells at 16 weeks, p = 0.009). Immunohistochemical staining for α-SYN confirmed expression of α-SYN up to 16 weeks after injection (Figure 6C). As seen before, the total number of α-SYN-positive cells was similar in the parkin −/− and parkin +/+ mice at the 4 different time points (Figure 6D).
To investigate the mechanism behind this increased α-SYN phosphorylation in the absence of parkin, we induced stable parkin knockdown using microRNA (miR)-based lentiviral vectors in human SH-SY5Y neuroblastoma cells overexpressing WT human α-SYN. Two miR-parkin lentiviral vectors induced efficient knockdown of parkin, as shown by Q-RT-PCR (data not shown) and Western blotting (Figure 7). In agreement with the in vivo data, we observed an increase in α-SYN phosphorylation at serine residue 129 without any effect on total α-SYN levels in cell culture (Figure 7). Since Polo-like kinase 2 (PLK2) was shown to phosphorylate α-SYN at S129 [36] and protein phosphatase 2A (PP2A) is a known α-SYN phosphatase at S129 [37], we verified the levels of PLK2 and PP2A in the parkin knockdown and control cell lines, but no differences were observed in the expression of either protein (Figure 7).
We can conclude that overexpression of α-SYN in the absence of parkin enhances phosphorylation of α-SYN at serine residue 129 but does not affect the degree of dopaminergic neurodegeneration.
Discussion
This study was designed to investigate the effect of the loss of parkin on α-SYN-induced neurotoxicity in rodent brain. We found that the absence of parkin did not alter the vulnerability of dopaminergic neurons to WT α-SYN-induced neurodegeneration. However, the number of P-S129 α-SYN-positive cells in the SN of parkin −/− mice was increased compared to parkin +/+ mice. This increase in the number of P-S129 α-SYN-positive cells in the parkin −/− mice was not due to differences in the expression level of α-SYN, since the total number of α-SYN-positive cells was similar in both groups. These results were reproduced in a second, independent experiment performed with a 4 times lower titer of rAAV2/7-WT α-SYN.
Figure 4. Increased phosphorylation of α-SYN at S129 in parkin −/− mice compared to parkin +/+ mice. (A) Representative images of P-S129 α-SYN expression in the SN of parkin −/− and parkin +/+ mice at 1 week and 4 weeks after injection with a high titer of rAAV2/7-WT α-SYN. Right panels are magnifications of the overviews of the injected side (middle panels). Scale bar overviews = 400 μm and magnifications = 50 μm. (B) Stereological quantification of the number of P-S129 α-SYN-positive cells in the injected side of the SN of parkin +/+ and parkin −/− mice at 1 week (n = 4) and 4 weeks (n = 15-16) after injection with a high titer of rAAV2/7-WT α-SYN. (Mean ± SEM, two-way ANOVA followed by Bonferroni post-hoc test, # p < 0.05 versus parkin +/+ , **p < 0.01 versus 1 week, ***p < 0.001 versus 1 week.)

A number of previous studies have already addressed the question whether the absence of parkin affects the development of α-synucleinopathy using a transgenic strategy. However, inconsistent results were reported: in one study, no effect was found in A53T α-SYN transgenic mice [31], whereas in a second study the loss of parkin unexpectedly mitigated the α-SYN phenotype in A30P α-SYN transgenic mice [30]. In the present study, we opted for an alternative approach with viral vector-mediated overexpression of WT α-SYN in the SN of parkin −/− and parkin +/+ mice. rAAV vectors are an attractive tool for gene delivery in the brain, since they provide several advantages: specific brain regions can be targeted, the transduction efficiency in dopaminergic neurons is high, and long-lasting, stable expression of the transgene at different doses can be achieved with a single delivery [38]. In addition, in a previous study in our own research group, we showed that rAAV2/7-mediated WT α-SYN overexpression in mouse SN leads to a dose-dependent, progressive dopaminergic cell death [33].
With our strategy, we did not observe a difference in sensitivity to WT α-SYN-induced dopaminergic cell death between parkin −/− and parkin +/+ mice. The strength of our study is that it allowed us to specifically investigate the sensitivity of the dopaminergic cell population to α-SYN toxicity in the absence of parkin. Since the majority of α-SYN transgenic mice do not develop a robust dopaminergic phenotype, they are less suitable for studying the effect of parkin deficiency on α-synucleinopathy in nigral dopaminergic neurons [39,40]. On the other hand, our finding that loss of parkin does not exacerbate dopaminergic degeneration is partly consistent with the results in the transgenic mice, since in those studies no difference in dopaminergic cell survival was found between the α-SYN overexpressing mice and the parkin −/− α-SYN double transgenic mice, indicating that the complete absence of parkin does not affect dopaminergic cell survival [30,31]. A possible explanation for this somewhat unexpected observation might be the presence of compensatory mechanisms in the parkin −/− mice, which may compensate for the total loss of parkin protein occurring already during embryonic development. Reports of an increased sensitivity of striatal metabotropic glutamate receptors and elevated levels of reduced glutathione in parkin −/− mice indicate that such compensatory adaptations exist [41,42]. Locoregional downregulation of parkin with viral vectors or the generation of conditional parkin knockout animals may be a valuable strategy to overcome this issue. Indeed, it was recently reported that adult depletion of parkin in the SN of conditional parkin −/− mice resulted in a progressive loss of dopaminergic neurons of up to 40%, a phenotype that has never been observed in the constitutive parkin −/− mice [21,43].
The lack of increased vulnerability of parkin −/− mice to WT α-SYN-induced dopaminergic cell death was observed with two different doses of WT α-SYN. In the second experiment, we opted for a lower dose, because we reasoned that the dramatic dopaminergic cell loss achieved with the highest dose of WT α-SYN might hide small differences in sensitivity between parkin −/− and parkin +/+ mice. Although the lower dose of WT α-SYN resulted in a milder dopaminergic cell death, the degree of degeneration was still considerable (approximately 40% at 4 weeks). Therefore, we cannot exclude that subtle differences in sensitivity might emerge when using even lower overexpression of α-SYN or at later time points than those analyzed in this study.
In a next step, we wondered whether the absence of parkin would influence the phosphorylation of α-SYN at serine residue 129, since the level of P-S129 α-SYN is highly elevated in the brains of PD patients [34,35]. We found that the number of nigral cells positive for P-S129 α-SYN was significantly higher in the parkin −/− mice compared to parkin +/+ mice. This was not the case for the dopaminergic terminals in the striatum. This discrepancy might suggest that mainly the non-dopaminergic neurons show an increased phosphorylation in the parkin −/− mice, or that the dopaminergic neurons with increased P-S129 α-SYN have relatively lower terminal densities. This increased phosphorylation is in apparent contradiction with the findings of Lo Bianco et al., who reported an increase in P-S129 α-SYN-positive aggregates after overexpression of parkin together with A30P α-SYN in the rat SN using lentiviral vectors [25]. That observation was associated with a protective effect of parkin overexpression on A30P α-SYN-induced dopaminergic degeneration. Furthermore, no increase in P-S129 α-SYN abundance was noticed in A30P α-SYN transgenic mice on a parkin −/− background [30]. On the contrary, and in agreement with our results, lentiviral vector-mediated co-expression of parkin with WT α-SYN in the striatum of rats reduced the levels of P-S129 α-SYN [24]. A similar result was found in the striatum of macaque monkeys when parkin and WT α-SYN were overexpressed by means of rAAV vectors, although in this study a decrease in total α-SYN was also reported [29]. In the study with the A53T α-SYN transgenic mice crossed with parkin −/− mice, phosphorylation of α-SYN was not investigated [31]. So far, we cannot clearly explain the inconsistencies between these reports, although the most obvious explanation is differences in experimental conditions. Altogether, the studies performed with WT α-SYN, including ours, point towards a correlation between decreased levels of parkin and increased phosphorylation of α-SYN at S129, whereas in the studies with mutant A30P α-SYN either the reverse or no effect is suggested. Thus, it is possible that the influence of parkin on S129 phosphorylation differs between WT α-SYN and A30P α-SYN, since the two forms also differ in other properties, including aggregation [44] and membrane-binding [45].
At this point, we can only speculate about the mechanism behind the increased phosphorylation of WT α-SYN in the absence of parkin. No direct binding of parkin to P-S129 α-SYN was observed in brain extracts of A30P α-SYN transgenic mice [30]. We also quantified the percentage of ubiquitin and α-SYN double-positive cells, but no difference was seen between parkin +/+ and parkin −/− mice (data not shown). This suggests that the increased phosphorylation of WT α-SYN is not a consequence of a difference in ubiquitination of α-SYN. It is possible that parkin modulates the kinases and phosphatases regulating the phosphorylation of α-SYN at S129. In agreement with this hypothesis, it was reported that overexpression of α-SYN in the brain of rats resulted in an increase in the level of Polo-like kinase 2 (PLK2), and that this increase was annihilated when parkin was co-expressed with α-SYN [24]. Absence of parkin might then result in a more pronounced increase in PLK2 levels and therefore increased S129 phosphorylation of α-SYN, since PLK2 is known to phosphorylate α-SYN at S129 [36]. We performed stainings for PLK2 on brain sections, but we failed to reliably detect endogenous expression levels of PLK2 (data not shown). Therefore, we induced stable parkin knockdown in human SH-SY5Y neuroblastoma cells overexpressing WT α-SYN. Interestingly, the increased S129 phosphorylation of α-SYN after parkin knockdown was replicated in this cell culture model, without any effect on the total α-SYN level. However, PLK2 protein levels were not altered after parkin knockdown in cell culture. Another potential mechanism involves protein phosphatase 2A (PP2A), which has been shown to dephosphorylate α-SYN at S129 [37]. Indeed, the level of PP2A was reportedly increased when parkin was co-expressed with α-SYN, compared to expression of α-SYN alone, in rat striatal extracts [24]. Absence of parkin might then decrease PP2A levels under these conditions, resulting in increased S129 phosphorylation of α-SYN. However, we failed to detect alterations in PP2A expression in the parkin knockdown cells. Thus, our data suggest that the mechanism behind the increased α-SYN phosphorylation might be independent of the PLK2 and PP2A pathways, although we cannot exclude changes in the activity of either PLK2 or PP2A.

Figure 7. Increased P-S129 α-SYN in human SH-SY5Y neuroblastoma cells after parkin knockdown. Western blotting against PLK2, parkin, PP2A, P-S129 α-SYN and α-SYN. The miRs against parkin induced both parkin knockdown and an increase in P-S129 α-SYN signal without affecting α-SYN, PLK2 and PP2A levels. A miR against firefly luciferase (Fluc) was used as a control.
Furthermore, it has been demonstrated that 26S proteasomal activity is decreased in parkin −/− mice and parkin-null Drosophila [46]. Proteasomal dysfunction and S129 phosphorylation of α-SYN have also been linked before. First, two independent studies describe that proteasomal inhibition increases the activity of casein kinase 2, another kinase mediating phosphorylation of α-SYN at S129 [47], resulting in enhanced S129 phosphorylation of α-SYN [48,49]. Second, it was demonstrated that inhibition of the proteasome pathway resulted in the accumulation of P-S129 α-SYN without alteration of the total levels of α-SYN, suggesting that P-S129 α-SYN specifically undergoes degradation by the proteasome pathway [50].
In the present study, the increased level of P-S129 α-SYN in the parkin −/− mice was not associated with increased dopaminergic degeneration, which is intuitively in contradiction with the knowledge that approximately 90% of the α-SYN within LBs from PD patients is phosphorylated at S129 [34,35]. The toxicity of P-S129 α-SYN in vivo has been studied extensively in recent years using mutated forms of α-SYN in which S129 is replaced either with an aspartate (S129D), to mimic phosphorylation, or with an alanine (S129A), to block phosphorylation. However, conflicting results were reported. Expression of S129D α-SYN in Drosophila resulted in enhanced toxicity, whereas no dopaminergic cell loss was observed when S129A α-SYN was expressed [51]. On the contrary, Caenorhabditis elegans overexpressing S129A α-SYN showed severe motor dysfunction and synaptic abnormalities, unlike worms overexpressing S129D α-SYN, which exhibited a nearly normal phenotype [52]. Two studies using rAAV vectors in rats found expression of S129A α-SYN to be more toxic than S129D α-SYN [53,54]. In a third study, there was no difference between the two forms of α-SYN [55]. In a recent study, S129D α-SYN expression in rat SN resulted in accelerated striatal dopaminergic fiber loss and an earlier appearance of motor deficits compared to S129A α-SYN, although the nigral degeneration was similar [56]. However, the S129D mutant might not completely mimic constitutively phosphorylated α-SYN. In another study, phosphorylation of A53T α-SYN was induced by rAAV-mediated overexpression of G-protein-coupled receptor kinase 6 in rats, which resulted in accelerated A53T α-SYN-induced neurodegeneration [57], in contrast to our data. Species differences might be involved, since we observed that rats are more sensitive to rAAV-α-SYN-induced neurodegeneration than mice [33,58]. Altogether, the role of S129 phosphorylation of α-SYN in neurodegeneration remains unclear.
Conclusions
In the present study, we have shown that the vulnerability of mouse dopaminergic neurons to α-SYN toxicity is not altered in the absence of parkin, but that the loss of parkin enhances phosphorylation of α-SYN at S129. Additional studies will be required to elucidate the molecular mechanism behind our findings and the significance for the pathogenesis of parkin-associated PD.
Methods

Cloning of rAAV vector plasmids
The plasmids for rAAV vector production have been described previously [38,59]. These include the construct for the AAV2/7 serotype, the AAV transfer plasmid and the pAdvDeltaF6 adenoviral helper plasmid. The transfer plasmids encoded eGFP or human WT α-SYN as the transgene, under the control of the CMVie-enhanced Synapsin1 promoter.
Recombinant rAAV vector production and purification
Vector production and purification were performed as previously described [38]. Briefly, the triple transfection into HEK293T cells was carried out using linear polyethylenimine solution. Vector particles were harvested from the supernatant and concentrated using tangential flow filtration. The concentrated supernatant was further purified using a discontinuous iodixanol step gradient. The final sample was aliquoted and stored at −80°C. Characterization of the rAAV stocks included real-time quantitative PCR analysis for genomic copy determination (presented as GC/ml) and silver-stained sodium dodecyl sulfate-polyacrylamide gel electrophoresis analysis for vector purity.
Animals and stereotactic neurosurgery
All animal experiments were carried out in accordance with the European Communities Council Directive of 24 November 1986 (86/609/EEC) and approved by the Bioethical Committee of the KU Leuven (Belgium).
The transgenic parkin −/− mice carry a homozygous deletion of exon 3 [41]. The parkin +/+ mice in the entire first 'high α-SYN dose' experiment (n = 19), including the rAAV2/7-eGFP group (n = 5), and part of the second 'low α-SYN dose' experiment (all mice at 1 week and 6 out of 10 at 4 weeks) were age- and gender-matched C57BL/6 J mice (Janvier, Le-Genest-Saint-Isle, France) (see Table 1). The remaining parkin +/+ mice in the 'low α-SYN dose' experiment were age- and gender-matched wild type littermates. Parkin genotyping was performed as described in [30]. No differences were observed in the numbers of TH-positive cells, P-S129 α-SYN-positive cells or α-SYN-positive cells between the C57BL/6 mice and the wild type littermates at 4 weeks after injection with the low titer of rAAV2/7-WT α-SYN (data not shown). Therefore, we concluded that potential minor differences in genetic background did not influence our outcome. The age of the mice was 2 months in the first experiment and varied between 2 and 4 months in the second experiment. The mice were housed under a normal 12-h light/dark cycle with free access to pelleted food and tap water. All surgical procedures were performed using aseptic techniques under ketamine (75 mg/kg IP, KETALAR®, Pfizer, Elsene, Belgium) and medetomidine (1 mg/kg, DORMITOR®, Pfizer) anesthesia. Following anesthesia, the mice were placed in a stereotactic head frame (Stoelting Co, Wood Dale, IL, USA). The skull was exposed by a midline incision and a small hole was drilled at the appropriate location, using bregma as the reference point. 2 μl of rAAV vector was injected at a rate of 0.25 μl/min with a 30-gauge needle on a 10 μl Hamilton syringe. The stereotactic coordinates used for the right mouse SN were antero-posterior −3.1 mm and medio-lateral −1.2 mm relative to bregma, and dorso-ventral −4.0 mm from the dura surface. The needle was left in place for an additional 5 min before being slowly retracted. After surgery, anesthesia was reversed with an intraperitoneal (IP) injection of atipamezole (0.5 mg/kg, ANTISEDAN®, Pfizer).
Stereological quantification
The total number of immunoreactive cells in the SN was estimated by stereological measurements using the optical fractionator method in a computerized system (StereoInvestigator; MicroBrightField, Magdeburg, Germany) and a Leica DMR optical microscope, as described before [60]. The SN pars compacta was delineated based on visual observation and morphology. Every fifth section throughout the rostro-caudal extent of the SN was analyzed, with a total of six sections for each animal. The coefficients of error, calculated according to the procedure of Schmitz and Hof as estimates of precision [61], varied between 0.07 and 0.16. The experimental conditions were blinded to the investigator. To determine the terminal density in the striatum, pictures of TH-stained sections were taken from 3 sections spaced 250 μm apart and analyzed using ImageJ. The number of P-S129 α-SYN-positive neuritic inclusions in the striatum was counted on a representative section at 4 weeks after injection from the animals of the 'high α-SYN dose' experiment.
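For readers unfamiliar with the optical fractionator, the sketch below illustrates the underlying estimator, N = ΣQ⁻ · (1/ssf) · (1/asf) · (1/tsf), and the derived lesion percentage. Only the section sampling fraction (every fifth section, ssf = 1/5) comes from the text; the other fractions and the counts are hypothetical.

```python
# Minimal sketch of an optical fractionator estimate. Counted cells (sum_q) are
# scaled by the inverses of the section (ssf), area (asf) and thickness (tsf)
# sampling fractions. Only ssf = 1/5 comes from the text; the rest is illustrative.

def fractionator_estimate(sum_q: int, ssf: float, asf: float, tsf: float) -> float:
    return sum_q * (1 / ssf) * (1 / asf) * (1 / tsf)

SSF = 1 / 5   # every fifth section (from the text)
ASF = 0.10    # hypothetical: fraction of delineated area covered by counting frames
TSF = 0.70    # hypothetical: dissector height / mean section thickness

injected = fractionator_estimate(sum_q=60, ssf=SSF, asf=ASF, tsf=TSF)
control = fractionator_estimate(sum_q=145, ssf=SSF, asf=ASF, tsf=TSF)

lesion_pct = (1 - injected / control) * 100  # loss relative to the non-injected side
print(f"TH-positive cells: injected ~{injected:.0f}, non-injected ~{control:.0f}")
print(f"Estimated lesion: {lesion_pct:.0f}%")  # ~59% with these illustrative counts
```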
Statistical analysis
Statistical analysis was performed using GraphPad Prism 5.0 (GraphPad Software, La Jolla, CA, USA). Results are expressed as means ± standard error of the mean (SEM). Statistical analysis of the number of TH-positive cells, the number of P-S129 α-SYN-positive cells and the number of α-SYN-positive cells was carried out using two-way analysis of variance (ANOVA) followed by the Bonferroni post-hoc test for intergroup comparisons. To compare the number of P-S129 α-SYN-positive cells at the different time points between the two genotypes in the experiment with the low titer of rAAV2/7-WT α-SYN, Student's t-tests were performed and p-values were adjusted with the Benjamini-Hochberg procedure, with the false discovery rate set at 0.05, to avoid accumulation of α-errors in multiple t-tests.
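A minimal sketch of the Benjamini-Hochberg step-up adjustment described above is given below; the raw p-values are placeholders, not the study's actual data.

```python
# Benjamini-Hochberg step-up procedure: sort p-values, compute p * m / rank,
# then enforce monotonicity from the largest rank downwards. Adjusted values at
# or below the FDR threshold (here 0.05) are declared significant.

def benjamini_hochberg(pvalues):
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])  # indices, ascending p
    adjusted = [0.0] * m
    running_min = 1.0
    for rank_from_top in range(m - 1, -1, -1):  # start at the largest p-value
        i = order[rank_from_top]
        value = pvalues[i] * m / (rank_from_top + 1)
        running_min = min(running_min, value)
        adjusted[i] = running_min
    return adjusted

# Hypothetical raw p-values for the four time points (1, 4, 8, 16 weeks):
raw = [0.60, 0.30, 0.0179, 0.009]
for p, q in zip(raw, benjamini_hochberg(raw)):
    status = "significant" if q <= 0.05 else "ns"
    print(f"p = {p:.4f} -> BH-adjusted q = {q:.4f} ({status})")
```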