The fabrication of a customized occlusal splint based on the merging of dynamic jaw tracking records, cone beam computed tomography, and CAD-CAM digital impression
OBJECTIVES: The aim of this case report was to present the procedure for fabricating a customized occlusal splint through revolutionary software that combines cone beam computed tomography (CBCT) with jaw motion tracking (JMT) data and superimposes a digital impression. MATERIALS AND METHODS: The case report was conducted on a 46-year-old female patient diagnosed with a temporomandibular disorder. A CBCT scan and an optical impression were obtained. The range of the patient's mandibular movements was captured with a JMT device. The data were combined in the SICAT software (SICAT, Sirona, Bonn, Germany). RESULTS: The software enabled the visualization of patient-specific mandibular movements and provided a real dynamic anatomical evaluation of the condylar position in the glenoid fossa. After assessment of the range of movements during opening, protrusion, and lateral movements, all the data were sent to SICAT and a customized occlusal splint was manufactured. CONCLUSIONS: The SICAT software provides a three-dimensional, real-dynamic simulation of mandibular movements relative to the patient-specific anatomy of the jaw; thus, it opens new possibilities for the management of temporomandibular disorders.
Introduction
Occlusion and its relationship to the function of the masticatory system is complex and remains a topic of great interest. The dento-periodontal complex, the temporomandibular joints (TMJs), and the masticatory muscles are interrelated components of the stomatognathic system [1] and are regulated by an intricate neurologic control system. Temporomandibular disorders (TMDs) include a number of conditions characterized by signs and symptoms involving the TMJ, the masticatory muscles, or both. [2] Approximately 33% of the population has at least one TMD symptom, and 3.6-7% of the population has TMD of sufficient severity to prompt them to seek treatment. [3,4] Many dental specialties have been involved in the diagnosis and treatment of TMDs, but most of their means have been based on empirical data and biased clinical experience. [5] Two decades ago, a significant US court judgment against a Michigan orthodontist for causing TMD in a 16-year-old girl prompted the orthodontic community to re-evaluate the relationship between orthodontic treatment and TMD. [5] Since then, many studies have been conducted to elucidate the relationship between orthodontics and TMD, but significant controversy still persists. The main issues raised are whether orthodontic treatment is capable of improving the symptoms of TMD and whether it predisposes to the development of TMD. A systematic review conducted in 2010 concluded that there is no evidence that orthodontic treatment can prevent or relieve TMDs. [6] However, the call for contemporary orthodontics to deal with the management of TMDs is evident, since the prevalence of mild TMD signs has been reported to be high in the general population.
The study of mandibular motion is essential to the management of TMDs. The need to duplicate mandibular movements extra-orally led to the employment of various methods to record and analyze them. [7][8][9][10][11][12][13] Patient information can be transferred to an articulator with mounted casts, and mandibular movements can thus be evaluated. Using centric relation as a reference point, interarch movements of the articulator are possible. [13] However, there are several limitations concerning bite registrations and articulators. Bite registrations are static recordings; thus, articulators are unable to record the real-life dynamics of occlusion during mandibular movements. Another basic limitation is the lack of visualization of condyle position, which is essential to the management of TMDs. There are also various difficulties with transferring the registration onto the articulator and mounting the casts accurately. [14] Cone beam computed tomography (CBCT) in conjunction with jaw tracking devices has enabled the virtual evaluation of the occlusion and the TMJs and helped substantially in overcoming these problems. In the early years, several methods using mechanical devices including marking needles were proposed, but all of them had the disadvantage of interfering with natural jaw movements. Later, an apparatus based on the principles of the pantograph was introduced; it consisted of recording styli and recording plates, but the transferred recordings were inaccurate. The Case gnathic replicator was presented, but its main downside was that jaw movements could be influenced by the weight of the apparatus. Many photographic methods were also used, including cinematographic methods and photo-anthropometry, but most of them proved unsatisfactory. Roentgenographic methods have also been used in the past, but their use is questionable due to radiation exposure. [15] Mechano-electronic recorders and optoelectronic recorders that register mandibular movement electronically have been developed to improve precision and efficiency. [13] In mechano-electronic recorders, mandibular movement is recorded by digital contact plates and processed by software. Optoelectronic systems have sensors that are optically tracked by cameras. Both are lightweight and require relatively little time, but their cost is high. [13] Several techniques can be used to image the TMJ, including panoramic radiography, plain radiography, conventional CT, and CBCT. The hard and soft tissue structures of the TMJ have been reconstructed by spiral and helical CT and magnetic resonance imaging (MRI). [16] Previous studies have merged these data with jaw movement recordings by ultrafast MRI, electromagnetic tracking devices, or optoelectronic measuring systems. [16,17] MRI's disadvantages are its high cost and the need for the patient to lie down during imaging, which might alter normal jaw movements; it is also contraindicated in patients with pacemakers and metallic heart valves. [16,18] The disadvantage of conventional CT is that it entails higher exposure values than CBCT. [19] CBCT's main advantage is the observation of bony joint structures in all three planes, in addition to possible image manipulation at different depths and three-dimensional (3D) reconstruction. [18]
The digital intraoral impression was introduced for single-unit restorations, but in recent years the accuracy of these systems has improved to capture larger areas, up to full-arch impressions. The main advantages of the digital impression are the time saved, the reproducibility of the method, and the fact that, without placing material inside the patient's mouth, the possibility of eccentric movements is reduced. [20] The purpose of this case report was to present a new technique for capturing the range of jaw motion and translating this information into a fully CAD-CAM fabricated occlusal splint.
Subject description
A 46-year-old female patient presented with a chief complaint of feeling that her teeth were moving and her bite was changing. She also complained about her front teeth chipping. She had a symmetric maxillofacial structure in the frontal view and a shorter lower facial height. Extra-orally, her profile was slightly convex with a retrusive lower lip. Intra-orally, her molar relationship was Class I on both sides and her overbite was 9 mm. Her upper dental midline was coincident with the facial midline, and the lower dental midline deviated 1 mm to the left. Cephalometric analysis indicated a skeletal Class II relationship, a mildly retrognathic mandible, and protrusive upper incisors. The patient mentioned that she had had TMJ issues her whole life, including symptoms of clicking and pain; in the past, pharmacologic therapy (muscle relaxants) had been prescribed. The patient had never had orthodontic treatment [ Figure 1]. Eventually, it was decided to fabricate a noninvasive, therapeutic splint in order to reduce the symptoms. This CAD-CAM fabricated splint was achieved through a new software application, SICAT Function (SICAT, Sirona, Bonn, Germany), which directly combines and merges 3D CBCT and electronic jaw motion tracking (JMT) data. The software also imports digital impressions taken with intraoral scanners and integrates them into functional movement displays. [21] The SICAT JMT+ system is an electronic recording system that is based on 3D ultrasound measurements and records the lower jaw movements of the patient in all degrees of freedom. A detailed description of the methods follows, and a dynamic clinical simulation of the patient's mandibular movement was performed and is presented.
Cone beam computed tomography
The CBCT device (Sirona, Galileos, Bensheim, Germany) was used; it acquires images with a scan time of 14 s, captures the maxillo-mandibular region in a 210° rotation, and has, according to the manufacturer, a reported radiation dose of 29 µSv to 54 µSv. The voxel size is between 0.15 mm and 0.30 mm, and the grayscale is 12 bit. The field of view is a spherical volume of 15 cm. The data from the CBCT were transferred from the scanner to a workstation, where GALAXIS 3D software (Sirona, Galileos, Bensheim, Germany) constructed 3D images. The data were saved in DICOM (Digital Imaging and Communications in Medicine) format.
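For readers who want to inspect such a DICOM export programmatically, the following is a minimal sketch that stacks a slice series into a 3D volume with pydicom and NumPy. The directory layout, the .dcm extension, and sorting by ImagePositionPatient are our assumptions, not details of the GALAXIS export.

```python
# Minimal sketch: load a CBCT DICOM series into a 3D NumPy volume.
import os
import numpy as np
import pydicom

def load_dicom_volume(directory):
    """Read every DICOM slice in `directory` and stack into (slices, rows, cols)."""
    slices = [pydicom.dcmread(os.path.join(directory, f))
              for f in os.listdir(directory) if f.lower().endswith(".dcm")]
    # Sort along the patient z-axis so the stack is spatially ordered.
    slices.sort(key=lambda s: float(s.ImagePositionPatient[2]))
    volume = np.stack([s.pixel_array.astype(np.float32) for s in slices])
    # Apply the rescale transform to obtain calibrated gray values.
    slope = float(getattr(slices[0], "RescaleSlope", 1.0))
    intercept = float(getattr(slices[0], "RescaleIntercept", 0.0))
    return volume * slope + intercept

# volume = load_dicom_volume("cbct_export/")  # hypothetical export folder
```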
A CBCT scan was taken for the assessment of the TMJs. The FusionBite reference tray is a transfer tray that enables the precise merging of the CBCT and JMT data sets. There are eight radiopaque markers on the FusionBite tray, which are used as landmarks for the fusion of CBCT and JMT data. Silicone impression material was placed on the maxillary and mandibular sides of the FusionBite tray, and the patient was asked to bite into the material and wear the tray during the CBCT scan.
Jaw motion tracking
SICAT JMT+ is the jaw motion tracker used to record and measure jaw position and movement. The system converts the propagation times of multiple acoustic signals into spatial information. [21] The paraocclusal attachment of the ultrasonic transmitter is attached to the patient without blocking the occlusal bite relationship. It was adjusted to the lower dental arch, supplemented with autopolymerizing composite on the bending part of the T-attachment, and adapted and hardened onto the tooth surfaces. Excess and sharp material was removed. As a result of this procedure, functional movement of the jaw into occlusion was undisturbed, since the maxillary teeth did not contact the attachment. The measurement sensor technology consists of a receiving sensor and a transmitting sensor.
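The conversion of propagation times into spatial information can be illustrated by a generic multilateration sketch: distances are obtained from time-of-flight measurements, and the transmitter position is recovered from receivers at known positions by Gauss-Newton least squares. This shows the general principle only, not SICAT's actual algorithm; the receiver geometry, speed of sound, and solver details are our assumptions.

```python
# Generic time-of-flight multilateration sketch (not the vendor's algorithm).
import numpy as np

SPEED_OF_SOUND = 343.0e3  # mm/s in air at ~20 °C (assumed)

def locate_transmitter(receivers, tof, iterations=20):
    """receivers: (n, 3) known positions [mm]; tof: (n,) propagation times [s]."""
    ranges = SPEED_OF_SOUND * tof           # measured distances [mm]
    x = receivers.mean(axis=0)              # initial guess: receiver centroid
    for _ in range(iterations):
        diff = x - receivers                # (n, 3) vectors to each receiver
        dist = np.linalg.norm(diff, axis=1)
        residual = dist - ranges            # range errors at current estimate
        jacobian = diff / dist[:, None]     # derivative of distance w.r.t. x
        step, *_ = np.linalg.lstsq(jacobian, residual, rcond=None)
        x = x - step                        # Gauss-Newton update
    return x
```

At least three (ideally four or more) receivers are needed for a well-posed 3D solution.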
The upper jaw sensor was positioned stably on the patient's head, making sure that the headband was seated on the patient's skull and that the nasion support did not stretch the skin in the nasion area. The elastic neckband was tightened comfortably for the patient. The FusionBite tray with the impression was positioned in the patient's mouth, and it was checked that the patient bit into the correct position.
The T-attachment was also placed in the patient's mouth. The SICAT JMT+ software was started and prepared for measurement. The lower jaw sensor is fitted with a special locking mechanism for fixing it to the attachment. The lower jaw sensor was attached to the SICAT FusionBite and "record" was clicked; the software then guided the operator through the whole calibration sequence. The lower jaw sensor was next attached to the paraocclusal T-attachment, and "record" was clicked again within the software. Subsequently, the SICAT FusionBite was removed and the sensor remained mounted on the attachment so that the process of functional analysis could begin. Patient jaw movements were recorded, including jaw opening and closing movements to and from the habitual intercuspal position, lateral and protrusive movements, and chewing [ Figure 2].
Merging cone beam computed tomography and jaw motion tracking data sets
The CBCT (DICOM format) and JMT data were loaded in the SICAT Function software. The software automatically aligned the data sets after radiopaque markers on the FusionBite tray were chosen. All jaw movements and position data that were recorded by the jaw motion tracker could be accessed directly after merging.
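Alignment from corresponding fiducials is classically a rigid point-set registration problem; the sketch below shows the standard Kabsch/SVD solution for the least-squares rotation and translation between the marker coordinates seen in the JMT and CBCT frames. It illustrates the general method rather than the software's internal implementation, and the marker arrays are placeholders.

```python
# Kabsch/SVD rigid registration between two sets of corresponding fiducials.
import numpy as np

def rigid_register(src, dst):
    """Return R, t minimizing sum ||R @ src_i + t - dst_i||^2; src, dst: (n, 3)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)         # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    # Correction term guards against a reflection (det = -1) solution.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = dst_c - R @ src_c
    return R, t

# With the eight FusionBite markers (coordinates hypothetical):
# R, t = rigid_register(markers_jmt, markers_cbct)
# markers_aligned = (R @ markers_jmt.T).T + t
```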
Hard tissue segmentation
In order to simulate virtual jaw tracking, the SICAT Function software was used to perform mandibular and glenoid fossa segmentation. After a segment of the mandible was selected by drawing marks on the radiographic sectional slices, the software extracted the CBCT data and merged it with the corresponding jaw motion data. The system then presented a 3D image of the patient-specific mandibular movement on the screen.
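A rough stand-in for this step is seed-based thresholding: binarize the volume at a bone-level gray value and keep the connected component containing a user-drawn mark. The threshold and seed below are hypothetical; the commercial tool's slice-wise interactive segmentation is considerably more sophisticated.

```python
# Simplified seed-based bone segmentation sketch (threshold is an assumption).
import numpy as np
from scipy import ndimage

def segment_from_seed(volume, seed, bone_threshold=1200.0):
    """volume: 3D gray-value array; seed: (z, y, x) index inside the mandible."""
    mask = volume >= bone_threshold
    # Default 6-connectivity; pass a structure arg for 26-connectivity if needed.
    labels, _ = ndimage.label(mask)
    # Keep only the component under the seed; the seed must lie inside bone.
    return labels == labels[tuple(seed)]

# mandible_mask = segment_from_seed(volume, seed=(40, 210, 180))
```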
Superimposition of the digital models
Full-arch optical impressions of the patient were obtained with an intraoral scanner (Sirona, CEREC Omnicam). Subsequently, they were imported and merged with the CBCT data in order to obtain metrically correct images. The SICAT Function software loaded the data and superimposed the arches on the CBCT images, simultaneously with the mandibular and condylar segmentation [ Figure 3]. This procedure makes it possible to assess dynamic occlusion and mandibular movement in interaction with TMJ function. Based on the assessment of the mandibular movements during opening and lateral movements, an occlusal splint was fabricated, intended to disengage the posterior teeth, eliminate their influence on the function of the masticatory system, and increase the patient's vertical dimension (anterior guidance). The splint was manufactured by SIRONA after all the data were sent [ Figure 4].
Results
Since the system can measure 3D position and rotation in all six degrees of freedom, all mandibular movements were recorded during the procedure, including mouth opening and closing, right and left lateralization [ Figure 5], protrusion [ Figure 6], and chewing. The ranges of these movements were automatically displayed. During opening, the maximum opening was recorded; during lateral movements and chewing, the range of movement of the mandibular incisors was recorded (the reference point is located at the interincisal point of the lower anterior teeth). During protrusion, the excursion of both condyles, the condylar inclination with reference to the Frankfurt plane, and the range of movement of the lower anterior teeth were recorded. All movements could be shown in different planes.
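Given a tracked reference point, such range summaries reduce to simple arithmetic on the recorded trajectory. The sketch below assumes a hypothetical (frames × 3) coordinate array in millimeters with x = lateral, y = anteroposterior, z = vertical axes; the device's actual export format and axis conventions may differ.

```python
# Range-of-motion summaries from a tracked interincisal-point trajectory.
import numpy as np

def movement_ranges(trajectory):
    """trajectory: (n_frames, 3) positions [mm]; frame 0 = intercuspal position."""
    disp = trajectory - trajectory[0]
    return {
        "max_opening_mm": float(-disp[:, 2].min()),    # largest downward excursion
        "left_lateral_mm": float(-disp[:, 0].min()),
        "right_lateral_mm": float(disp[:, 0].max()),
        "max_protrusion_mm": float(disp[:, 1].max()),  # largest forward excursion
    }
```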
The SICAT Function software performed mandibular segmentation after marks were drawn on the radiographic sectional slices. Segmentation of the glenoid fossa was also performed. After merging the CBCT and the JMT data, the software also displayed the interincisal point movement of the lower incisors during chewing. The path of any selected point of the mandible or the condyles could be displayed. After superimposition of the digital models, the software presented a real, dynamic simulation of mandibular movements, in addition to the visualization of the condyle and its movements in the glenoid fossa. All the above data allowed SIRONA to fabricate a custom-made splint to fit the needs of this individual patient. The appliance covered the maxillary teeth and rugae, providing anterior guidance. The patient reported immediate relief from TMJ symptoms and was still wearing the appliance every night after 3 months.
Discussion
The TMJ is a complex joint and has proved quite difficult to study and understand. An accurate evaluation of the mechanics of the TMJ is important to distinguish between healthy and diseased joints. Many devices and methods have been used in the past to measure mandibular movement, but contemporary computed technology is more accurate and has helped to improve our knowledge of temporomandibular disorders. [17] This article presented the use of the SICAT Function software, which enables an anatomically precise and real dynamic rendering of jaw movement. A real, patient-specific condylar position can be displayed within a 3D volume. The ranges of all mandibular movements are easily recorded and assessed, and even changes in the gap between the condyle and the fossa can be measured during movements or at the resting position. The advantage of this system is that it takes into account the forces of jaw movement and includes them in the overall analysis. [21] The production of 3D images brings a new level of diagnostic accuracy and detail to modern CT scanners. CBCT is a promising method to visualize hard tissue changes with a relatively low radiation dose. SICAT Function allows the fusion of three technologies that had not been used together: 3D imaging, JMT, and digital impressions. The 3D presentation of the positions and movement paths of occlusal points provides important information on the movement behavior of the mandibular joint and the teeth of the lower and upper jaws. Any dysfunction and limitation of movement can be analyzed and documented. The system records only natural mandibular movement relative to the head, so head movement and position do not affect the measurement results. The software provides a high degree of accuracy, and it is easy to use. After loading the CBCT data and JMT files in the software and superimposing the optical impression, fabrication of the splint is possible with little further work. The mandibular position for fabricating the splint was chosen to ensure disengagement of the posterior teeth, and anterior guidance was provided based on the lateral movements [ Figure 7]. The specific mandibular position that was chosen (which determines the thickness of the acrylic material) is associated with relaxed positioning of the elevator muscles, allowing the articular disc to obtain an anterior and superior position over the condylar head. The system provides a complete virtual articulation of the jaws and of the condyle in the glenoid fossa, in addition to a 3D simulation of the occlusion, lateral movements, and protrusion. Thus, the splint that is fabricated is accurate, adjustment time is reduced, and valuable chair time is saved. The splint also fits easily within the freeway space [ Figure 8].
The SICAT Function software enables the visualization of patient-specific jaw movement relative to the patient-specific anatomy of the jaw. It is also capable of visualizing the joint space during different movements, thus providing an anatomical evaluation of the condylar position in static jaw positions and in dynamic occlusion. Consequently, the software provides a significant opportunity for oral diagnostics and treatment. It can also be used in other fields of dentistry for creating mouth guards, dentures, and esthetic functional reconstructions with or without tooth implants, and even for simultaneous prosthetic and surgical planning.
Conclusion
The SICAT Function software can combine cone beam CT, electronic JMT data, and digital impressions; thus, it is capable of presenting a real, 3D simulation of mandibular movements relative to the patient-specific anatomy of the jaw. In addition, changes in the joint space at rest or in other positions can be recorded. Thus, the system can be used as a useful supporting tool in the diagnosis, treatment, and management of TMDs.
Declaration of patient consent
The authors certify that they have obtained all appropriate patient consent forms. In the form, the patient(s) has/have given his/her/their consent for his/her/their images and other clinical information to be reported in the journal. The patients understand that their names and initials will not be published and due efforts will be made to conceal their identity, but anonymity cannot be guaranteed.
Financial Support and Sponsorship
Nil.
Conflicts of Interest
There are no conflicts of interest.
Transthyretin amyloid cardiomyopathy disease burden quantified using 99mTc-pyrophosphate SPECT/CT: volumetric parameters versus SUVmax ratio at 1 and 3 hours
Background Various parameters derived from technetium-99m pyrophosphate (99mTc-PYP) single-photon emission computed tomography (SPECT) correlate with the severity of transthyretin amyloid cardiomyopathy (ATTR-CM). However, the optimal metrics and image acquisition timing required to quantify the disease burden remain uncertain. Methods and results We retrospectively evaluated 99mTc-PYP SPECT/CT images of 23 patients diagnosed with ATTR-CM using endomyocardial biopsies and/or gene tests. All patients were assessed by SPECT/CT 1 hour after 99mTc-PYP injection, and 13 of them were also assessed at 3 hours. We quantified 99mTc-PYP uptake using the volumetric parameters, cardiac PYP volume (CPV) and cardiac PYP activity (CPA). We also calculated the SUVmax ratios of myocardial SUVmax/blood pool SUVmax, myocardial SUVmax/bone SUVmax, and the SUVmax retention index. We assessed the correlations between uptake parameters and the four functional parameters associated with prognosis, namely left ventricular ejection fraction, global longitudinal strain, myocardial extracellular volume, and troponin T. CPV and CPA correlated more closely than the SUVmax ratios with the four prognostic factors. Significant correlations between volumetric parameters and prognostic factors were equivalent between 1 and 3 hours. Conclusions The disease burden of ATTR-CM was quantified more accurately by volumetric evaluation of 99mTc-PYP SPECT/CT than SUVmax ratios and the performance was equivalent between 1 and 3 hours. Supplementary Information The online version contains supplementary material available at 10.1007/s12350-023-03353-w.
INTRODUCTION
Transthyretin amyloid cardiomyopathy (ATTR-CM) is an increasingly recognized cause of heart failure (HF) resulting from the myocardial deposition of misfolded protein fibrils. [1] Disease-modifying therapies for ATTR-CM, ATTR stabilizers and gene-silencing pharmacotherapy, have recently been developed. [2,3] This has resulted in a clinical need for early detection and accurate quantitation of disease burden, prognosis, and response to treatment. ATTR-CM can be non-invasively diagnosed in patients without light-chain cardiac amyloidosis using bone scintigraphy with technetium-99m pyrophosphate (99mTc-PYP), 99mTc-3,3-diphosphono-1,2-propanodicarboxylic acid (DPD), or 99mTc-hydroxymethylene diphosphonate. [4,5][8-14] Accurate quantitation also enables interval monitoring of disease progression and responses to treatment. However, an optimal method has not been established. Uptake parameters such as the standardized uptake value (SUV), the SUV ratio (normalized SUV and the SUV retention index), as well as volumetric parameters have been investigated using images acquired at various timings. [8-11] Blood pool activity is particularly important for patients without ATTR-CM because it can result in false-positive diagnoses. However, the optimal method for diagnosis, including cohorts without ATTR-CM, is not necessarily the same as that for assessing disease burden. [14][18-20] Moreover, images acquired at 1 hour after radiotracer injection have a unique feature that is lacking in images acquired at 3 hours: myocardial abnormal uptake on bone scintigraphic images peaks at 1 hour, then slowly declines. [20] The aim of this study was to define whether 99mTc-PYP volumetric parameters or SUVmax ratios are more effective for quantifying ATTR-CM disease burden at 1 and 3 hours after radiotracer injection.
Study population
We retrospectively evaluated patients who were assessed by 99mTc-PYP SPECT/CT at Kanazawa University Hospital between September 2019 and February 2023 and diagnosed with ATTR-CM from endomyocardial biopsies (EMBs) and/or TTR gene tests. The diagnostic criteria for ATTR-CM were based on one or more of the following: (1) an EMB positive for ATTR and (2) a documented TTR genetic mutation and evidence of cardiomyopathy without apparent plasma cell dyscrasia (serum and urine immunofixation and serum free light-chain assays). When patients had undergone multiple 99mTc-PYP tests during follow-up, only one was included in the present study. The ethics committee at Kanazawa University approved this study and waived the requirement for written informed consent because the study retrospectively selected patient data.
Imaging acquisition
Patients were injected intravenously with approximately 740 MBq (20 mCi) of 99mTc-PYP. Thorax planar images were acquired for 2 minutes at 1 and 3 hours later using a hybrid SPECT/CT system (Symbia Intevo, Siemens Medical Solutions AG, Erlangen, Germany) with a low-energy high-resolution collimator. The acquisition time of 2 minutes was acceptable for 750,000 counts in Japanese patients. Before November 2021, SPECT/CT images of the thorax were acquired only 1 hour after radiotracer injection; thereafter, SPECT/CT images were acquired at both 1 and 3 hours after injection. The SPECT/CT parameters comprised step-and-shoot acquisition with a body-contour non-circular orbit, 120 steps at 15 s each with a total acquisition time of 20 minutes, and zoom 1.0.
Images were reconstructed to a 128 × 128 matrix using a dedicated iterative algorithm (xSPECT QUANT) with 72 iterations and 1 subset, and a 10-mm Gaussian filter. A low-dose, free-breathing, non-contrast CT image was acquired for attenuation correction and anatomical localization using the following parameters: 130 kV, 50 mAs with care dose, pitch 1.5, rotation time 0.6 s, collimation 16 × 1.2. The CT-based attenuation and scatter corrections were performed automatically.
Quantitative interpretation of images
Figure 1 shows an example of SPECT/CT volumetric evaluation. We used xSPECT Quant (Siemens) to calculate SUVmax, SUVmean, and volumetric parameters. We measured 99mTc-PYP activity in the aortic blood pool using SUVmax. A spherical volume of interest (VOI; diameter equivalent to half that of the aorta) was positioned in the center of the ascending aorta at the level of the pulmonary artery bifurcation on fused SPECT/CT images. [10,21] Total volumes of voxels in the myocardial regions with 99mTc-PYP uptake > 1.2, 1.4, and 1.6 × the SUVmax of the aortic blood pool were automatically evaluated using xSPECT Quant and defined as cardiac PYP volumes (CPV1.2, 1.4, and 1.6). [10] We visually confirmed that the abnormal uptake areas were in the myocardial regions. The threshold values of 1.2, 1.4, and 1.6 were based on our previous study. [10] We also defined cardiac PYP activity (CPA) as CPV × (myocardial SUVmean/aortic blood pool SUVmax), using the SUVmean in myocardial regions with uptake > 1.2, 1.4, and 1.6 × the aortic blood pool SUVmax (CPA1.2, 1.4, and 1.6). [8,9] CPA reflects both the volume and the intensity of abnormal uptake. We used SUVmean, which reflects the average uptake in the abnormal uptake areas, to evaluate CPA. [8,9,14,22] We also evaluated SUVmax within the entire left and right ventricular myocardium. We determined bone SUVmax by placing a spherical VOI at the center of an intact thoracic vertebral body (T12 unless identified as abnormal from a review of bone scan images). [21] If T12 was not intact, the spherical VOI was placed at the nearest normal vertebral body. We then determined the soft-tissue SUVmax by positioning a spherical VOI at the paraspinal muscle near T12. Myocardial SUVmax/aortic blood pool SUVmax, [19,20,23] myocardial SUVmax/vertebral SUVmax, [13,19,20,23,24] and the SUVmax retention index [16] = (myocardial SUVmax/vertebral SUVmax) × paraspinal muscle SUVmax were calculated, and we referred to them as SUVmax ratios.
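These definitions translate directly into a few lines of array arithmetic. The sketch below assumes a myocardial SUV map and a known voxel volume from the reconstruction; the variable names are ours, not xSPECT Quant's.

```python
# Volumetric uptake metrics as defined above (names and inputs assumed).
import numpy as np

def cpv_cpa(suv, blood_pool_suvmax, voxel_volume_ml, threshold=1.4):
    """CPV: volume of myocardial voxels with SUV > threshold x blood-pool SUVmax.
    CPA: CPV x (SUVmean of those voxels / blood-pool SUVmax)."""
    hot = suv > threshold * blood_pool_suvmax
    if not hot.any():
        return 0.0, 0.0
    cpv = float(hot.sum()) * voxel_volume_ml               # cm^3
    cpa = cpv * float(suv[hot].mean()) / blood_pool_suvmax
    return cpv, cpa

# for thr in (1.2, 1.4, 1.6):                   # the three thresholds used
#     print(thr, cpv_cpa(myocardial_suv, aorta_suvmax, 0.061, thr))
```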
We also evaluated the heart to contralateral lung (H/CL) ratio and the visual grading score. [5] The H/CL ratio was calculated as the fraction of the average counts in a circular region of interest (ROI) drawn over the heart on a planar image to that in a contralateral lung ROI of identical size. The visual grading score was determined by comparing myocardial uptake with rib uptake (grade 0, no uptake; grade 1, uptake less than rib; grade 2, uptake equal to rib; and grade 3, uptake greater than rib).
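As a sketch of the H/CL computation, mean counts inside two equally sized circular ROIs on the planar image are compared; the ROI centers and radius are placeholders that an operator would choose.

```python
# H/CL ratio from a 2D planar image using two circular ROIs (positions assumed).
import numpy as np

def circular_mean(image, center, radius):
    """Mean pixel value inside a circle; center = (row, col) in pixels."""
    yy, xx = np.ogrid[:image.shape[0], :image.shape[1]]
    mask = (yy - center[0]) ** 2 + (xx - center[1]) ** 2 <= radius ** 2
    return image[mask].mean()

def h_cl_ratio(planar, heart_center, lung_center, radius):
    return circular_mean(planar, heart_center, radius) / \
           circular_mean(planar, lung_center, radius)
```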
Cardiovascular magnetic resonance (CMR)
We acquired CMR images using a 3.0-T MRI scanner (Ingenia; Siemens Medical Solutions AG). Native and post-contrast T1 maps were acquired using a modified Look-Locker technique (with a 5-3-3 beat scheme) under the following parameters: repetition time, shortest; echo time, shortest; flip angle, 20°; slice thickness, 10 mm; matrix, 160 × 160; field of view, 30 cm. Post-contrast T1 maps were scanned 7-10 minutes after 10 mL (1.0 mol/L) of Gadovist (Bayer Pharma AG, Berlin, Germany) was injected. The myocardial extracellular volume (ECV) was calculated as the ratio of the change in myocardial to blood relaxation rates, adjusted by the fractional blood volume of distribution (1 − hematocrit). The T1 values were quantified on ROIs in the myocardium (septal midventricular wall from the short-axis slice) and the blood (left ventricular blood pool) on both native and post-contrast T1 maps. The largest possible ROIs were manually drawn, avoiding regions of misregistration (cross-hatched areas) between images at each inversion time. We then applied a copy-and-paste technique to place identically shaped ROIs at the exact same locations on the native and post-contrast T1 maps. Thereafter, we calculated the ECV using the average value of each ROI.
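The ECV calculation described above can be written out explicitly: relaxation rates R1 = 1/T1 are taken from the native and post-contrast maps, and the myocardial-to-blood ratio of their changes is scaled by (1 − hematocrit). The numerical example uses plausible but hypothetical values.

```python
# ECV from native and post-contrast T1 values (all inputs hypothetical).
def ecv(t1_myo_native, t1_myo_post, t1_blood_native, t1_blood_post, hematocrit):
    """T1 values in ms; returns ECV as a fraction."""
    d_r1_myo = 1.0 / t1_myo_post - 1.0 / t1_myo_native        # change in myocardial R1
    d_r1_blood = 1.0 / t1_blood_post - 1.0 / t1_blood_native  # change in blood R1
    return (1.0 - hematocrit) * d_r1_myo / d_r1_blood

# ecv(1250.0, 450.0, 1800.0, 350.0, 0.40)  # ~0.37, an amyloid-range value
```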
Other assessments
Left ventricular ejection fraction (LVEF) and global longitudinal strain (GLS) were assessed by echocardiography. High-sensitivity cardiac troponin T values, hematocrit (Ht), and estimated glomerular filtration rates (eGFR) were measured by blood examination.
Statistical analysis
Continuous variables summarized as means ± standard deviation (SD) were compared using Student's t-tests. Uptake parameters were compared between 1 and 3 hours using Student's t-tests for paired comparisons. Categorical variables were summarized as numbers (%). Visual grading scores were compared between 1 and 3 hours using the Wilcoxon signed-rank test. Correlations between parameters were assessed using Pearson correlation coefficients (Spearman rank-correlation coefficients for the visual grading score). Interobserver variability was assessed using an intraclass correlation coefficient (ICC) and Bland-Altman analysis. Values with P < .05 were considered statistically significant. All data were analyzed using JMP® Pro 17 (SAS Institute, Cary, NC, USA).
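For orientation, the sketch below reproduces the named tests with their SciPy equivalents (the study itself used JMP Pro 17, not Python); the arrays hold hypothetical paired measurements, not study data.

```python
# SciPy equivalents of the tests named above, on hypothetical data.
import numpy as np
from scipy import stats

uptake_1h = np.array([310.0, 190.0, 250.0, 420.0, 150.0])  # e.g., CPV1.4, cm^3
uptake_3h = np.array([260.0, 150.0, 210.0, 370.0, 120.0])
lvef = np.array([48.0, 61.0, 55.0, 40.0, 66.0])

t, p_paired = stats.ttest_rel(uptake_1h, uptake_3h)   # paired t-test, 1 h vs 3 h
w, p_wsr = stats.wilcoxon(uptake_1h, uptake_3h)       # signed-rank, for ordinal scores
r, p_r = stats.pearsonr(uptake_1h, lvef)              # correlation with a prognostic factor
rho, p_rho = stats.spearmanr(uptake_1h, lvef)         # rank-based variant
print(f"paired t: p={p_paired:.3f}; Pearson r^2={r**2:.2f} (p={p_r:.3f})")
```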
Patients

All patients were assessed by SPECT/CT 1 hour after 99mTc-PYP injection. Thirteen of them were also assessed by SPECT/CT at 3 hours because we changed the imaging protocol in November 2021, and they included patients who had been diagnosed with hereditary ATTR-CM and treated with tafamidis (Table 1). All 23 patients were proven positive for ATTR amyloid from biopsy specimens. None of the image data have been previously published.
Quantitative uptake parameters
Table 2 and Figure 2 summarize the uptake parameters in images of all 23 patients acquired 1 hour after 99mTc-PYP injection. The volumetric parameters CPV and CPA decreased as thresholds increased. The myocardial SUVmax/aortic blood pool SUVmax and myocardial SUVmax/vertebral SUVmax were 2.1 ± .7 and 1.5 ± .7, respectively. In a patient with low abnormal myocardial uptake, we evaluated SUVmax in the interventricular septum on fused SPECT/CT images.
Interobserver variability was assessed by SW and TK for CPV1.4 at 1 hour in all 13 patients with SPECT/CT images acquired at 1 and 3 hours. The ICC(2,1) was excellent at .990, and the interobserver variability was low (mean difference 8.3 cm³, 95% limits of agreement −4.2 to 20.9 cm³, P = n.s., Supplemental Figure 1). LVEF and ECV differed significantly between the two patient groups, but GLS, troponin T, and eGFR did not.
Correlations with prognostic factors
Table 3 summarizes correlations between uptake parameters at 1 hour and the four functional parameters associated with prognosis in all 23 patients. Figure 3 shows representative correlations between CPV and the four prognostic factors. Table 3 shows correlations decreasing in the order volumetric parameters (CPV1.4 and CPA1.4) > SUVmax ratios (myocardial SUVmax/aortic blood pool SUVmax, myocardial SUVmax/vertebral SUVmax, and SUVmax retention index) > myocardial SUVmax for the four prognostic factors, LVEF, GLS, ECV, and troponin T. Both CPV1.4 and CPA1.4 significantly correlated with the four prognostic factors (r² = .42-.63, P = .0006-.02), whereas the SUVmax ratios did not significantly correlate with GLS. Myocardial SUVmax/vertebral SUVmax and the SUVmax retention index did not significantly correlate with ECV and troponin T. Myocardial SUVmax did not significantly correlate with any of the prognostic factors (r² = .00-.32, P = .056-.876). The H/CL ratio and visual grading score did not correlate with LVEF, GLS, and troponin T.
We also evaluated correlations between uptake parameters at 1 hour and the four prognostic factors in the 18 patients not treated with tafamidis. The same ordering of correlation strength (volumetric parameters > SUVmax ratios > myocardial SUVmax) was observed for the four prognostic factors in these 18 patients (Supplemental Table 1). The H/CL ratio and visual grading score did not correlate significantly with LVEF, GLS, and troponin T. We also evaluated CPV1.4 using only the left ventricular region (CPV1.4LV), excluding right ventricular activity. CPV1.4 and CPV1.4LV correlated similarly with the four prognostic factors (Supplemental Table 2). Seven patients (30.4%) had right ventricular uptake of > 1.4 × the SUVmax of the aortic blood pool 1 hour after radiotracer injection.
The uptake parameters did not significantly correlate with eGFR (Supplemental Table 3).
Time dependence of uptake parameters and correlations
Table 4 and Figure 4 summarize the uptake parameters of the 13 patients who were assessed by SPECT/CT at both 1 and 3 hours after 99mTc-PYP injection. The CPV1.4 and 1.6, CPA, myocardial SUVmax, aortic blood pool SUVmax, SUVmax ratios, H/CL ratio, and visual grading score were significantly higher at 1 hour than at 3 hours. In contrast, vertebral SUVmax was significantly higher at 3 hours than at 1 hour. Paraspinal muscle SUVmax did not differ significantly between 1 and 3 hours.
Table 5 summarizes correlations between uptake parameters and the four prognostic factors in patients with SPECT/CT images acquired at 1 and 3 hours after 99mTc-PYP injection. Figure 5 shows representative correlations between CPV and the four prognostic factors. Table 5 shows significant correlations between the volumetric parameters CPV1.4 and CPA1.4 and the four prognostic factors at both 1 and 3 hours. Correlations between volumetric parameters and LVEF, ECV, or troponin T were equivalent between 1 and 3 hours. Correlations between volumetric parameters and GLS were slightly closer at 1 hour than at 3 hours in six patients. The SUVmax retention index closely correlated with LVEF, GLS, and ECV at 1 hour, but the correlation coefficients decreased at 3 hours.
Threshold dependence of correlations
Table 6 summarizes the threshold dependence of correlations between volumetric parameters at 1 hour and the four prognostic factors in all 23 patients. The threshold dependence of correlations was low among the six volumetric parameters, and all six significantly correlated with the four prognostic factors. CPV1.4 and CPA1.4 showed slightly higher correlation coefficients (r² = .42-.63) than CPV1.2 and CPA1.2 (r² = .37-.62).
Table 7 summarizes the threshold dependence of correlations between volumetric parameters and the four prognostic factors in patients with SPECT/CT images acquired at 1 and 3 hours after 99mTc-PYP injection. At 3 hours, neither CPV1.2 nor CPA1.2 significantly correlated with troponin T, whereas CPV1.4, CPV1.6, CPA1.4, and CPA1.6 correlated significantly and comparably with the four prognostic factors.
DISCUSSION
This study has two important messages. The volumetric parameters derived from 99mTc-PYP SPECT/CT, CPV and CPA, quantified the ATTR-CM disease burden more accurately than SUVmax and SUVmax ratios. The performance of the volumetric parameters was equivalent between 1 and 3 hours after injecting 99mTc-PYP.
Both CPV and CPA correlated more closely than the myocardial SUVmax and SUVmax ratios with all four prognostic factors (Table 3). One reason for this could be that the volumetric parameters reflect information about entire abnormal regions, whereas SUVmax and SUVmax ratios represent small areas. Furthermore, SUVmax is affected by various technical and physiological factors. Therefore, volumetric parameters could be useful imaging biomarkers for quantifying the ATTR-CM disease burden. Similarly, volumetric parameters derived from 18F-fluorodeoxyglucose positron emission tomography (18F-FDG PET) are more effective than SUVmax as prognostic predictors in cancer. [22] Volumetric parameters were also more effective than the H/CL ratio, which is valuable and widely applied but has inherent limitations due to being based on two-dimensional images. The H/CL ratio did not correlate with LVEF, GLS, and troponin T (Table 3). Vranian et al. also showed that the H/CL ratio did not correlate with LVEF and GLS in patients with ATTR-CM. [29] We previously showed that the H/CL ratio did not correlate with LVEF in patients with ATTR-CM. [10] We found here that CPV and CPA significantly correlated with the four prognostic factors, LVEF, GLS, ECV, and troponin T. Therefore, CPV and CPA should play objective and important roles in prognostic evaluation. An echocardiographic LVEF reduced to < 50% predicts mortality in patients with wild-type ATTR-CM (ATTRwt-CM). [25] Furthermore, GLS is an independent predictor of all-cause mortality (hazard ratio [HR]: 1.15 per 1% decrease) and a more effective prognostic factor than all other echocardiographic parameters, including LVEF, in patients with HF and reduced LVEF. [26] Impaired GLS is similarly a high risk factor for cardiovascular morbidity and mortality in patients with HF and preserved LVEF. [27] Furthermore, the ECV determined by CMR has been validated for measuring the amyloid burden, and it is an independent predictor of prognosis (HR: 1.155 per 3% increase). [28] Bone scintigraphy can be an alternative modality for patients who cannot undergo contrast-enhanced CMR. High-sensitivity cardiac troponin T is used to assess prognosis in patients with ATTRwt-CM. [11,25] Other studies similarly support the notion that 99mTc-PYP scintigraphy is a useful modality for evaluating prognosis. A volumetric parameter on 99mTc-PYP SPECT/CT images acquired 3 hours after injection correlated with ECV (r² = .76, P = .001, n = 11). [7] Even CPV and CPA derived from 99mTc-PYP SPECT without CT significantly correlated with LVEF and CMR parameters, and had prognostic value and low interobserver variability. [8,9] However, studies without CT did not correct for attenuation or scatter; they acquired images only at 3 hours and applied filtered back projection. A multicenter study found that even the two-dimensional H/CL ratio of 99mTc-PYP scintigraphy can deliver prognostic information. [30] Here, we revealed for the first time that myocardial abnormal 99mTc-PYP uptake quantified by SUVmax was significantly higher at 1 hour than at 3 hours (Table 4). The uptake of 99mTc-DPD, which is slightly different from that of 99mTc-PYP, [31] similarly peaks at 1 hour. [20]
The present findings supported the notion that time-efficient acquisition of 99mTc-PYP SPECT images at 1 hour could accurately quantify the ATTR-CM disease burden. Correlations between prognostic factors and CPV1.4 and CPA1.4 were equivalent between 1 and 3 hours in our small cohort of patients (Table 5); thus, larger studies are warranted. Determining whether 99mTc-PYP SPECT images acquired at 1 hour are useful for prognostic evaluation is important for three reasons.
First, myocardial abnormal uptake peaks at 1 hour, then slowly declines. Second, laboratory throughput and patient satisfaction are increased by the 1-hour protocol compared with the 3-hour protocol; some experienced centers have adopted 1-hour acquisitions for 99mTc-PYP imaging, particularly in the US. [32] Third, a close correlation has been determined between ECV and myocardial 99mTc-PYP SUVmax at 1 hour corrected for blood pool (r² = .83, P = .001, n = 9). [12,33-35] Delayed acquisition has been reported as a cause of false-negative diagnoses of ATTR-CM. [36] Oncological 18F-FDG PET image acquisition at 1 hour is popular, and delayed acquisition is optional. The expert consensus recommendation for image acquisition timing using 99mTc-PYP was 1 hour in 2019. [5] This consensus was revised to recommend image acquisition at 2 or 3 hours, with 1 hour as optional; the main reason for this was to avoid false-positive diagnoses caused by excessive blood pool activity. [37] Blood pool activity is particularly important in patients without ATTR-CM. [18-20] We included only patients with biopsies that were positive for ATTR amyloid; this is a strength of our study. Several studies have applied positive criteria for ATTR-CM, such as visual grading scores of 2 or 3 for myocardial uptake (≥ rib uptake). However, some patients with ATTR-CM have focal or absent 99mTc-PYP uptake, [4,6,10,33] which we identified in some of our patients with EMB-proven ATTR-CM (Figure 2B and Table 2). In addition, several causes of false-negative and false-positive bone scintigraphy results in patients with suspected ATTR-CM have been identified. [36] We evaluated the threshold dependence of correlations and found that the correlation coefficients of CPV1.4 and CPA1.4 were slightly closer than those of CPV1.2 and CPA1.2 for the four prognostic factors (Table 6). The optimal threshold is not necessarily identical between disease burden quantitation and diagnosis. The volumetric evaluation of bone scintigraphy could potentially be an objective marker for the diagnosis of ATTR-CM, especially in patients with focal abnormal uptake. We previously showed that CPV1.2 diagnosed ATTR-CM more effectively than CPV1.4 at 3 hours. [10] That finding and those of the images acquired at 1 and 3 hours herein (Figure 4B; Table 4) revealed that the threshold of 1.4 × aortic blood pool SUVmax might result in false-negative diagnoses in patients with focal abnormal uptake, which might occur during the early stage of ATTR-CM. [10] The present study showed that the SUVmax ratios correlated more closely than myocardial SUVmax with the four prognostic factors (Table 3). This means that accurate measurements of myocardial SUVmax might not directly reflect the disease burden, because myocardial SUVmax also depends on inter-organ competition for 99mTc-PYP uptake. The SUV retention index, which has been mainly investigated using 99mTc-DPD, [16,18,20] has not been studied in detail using 99mTc-PYP. [12] Although the SUV retention index has been proposed to account for competition among myocardial, bone, and soft-tissue radiotracer uptake, [16] the 99mTc-DPD and 99mTc-PYP uptake mechanisms slightly differ. [31] The present study showed that the SUVmax retention index was significantly higher at 1 hour than at 3 hours.
Similarly, the SUVpeak retention index using 99mTc-DPD was significantly higher at 1 hour than at 3 hours. [20]
Limitations
This retrospective analysis of a small patient cohort was conducted at a single institution. However, some of these limitations are frequent when investigating rare diseases. Furthermore, although the actual prognosis might be the ideal gold standard for prognostic studies, we did not evaluate this because of our small cohort of patients with diverse backgrounds, and the fact that some were evaluated only for short periods. Although the patients were assessed by SPECT, echocardiography, CMR, and blood tests on different days, this is unlikely to affect the conclusions because 99mTc-PYP uptake does not change substantially over time. [38] The mechanism through which bone-seeking radiotracers bind to an abnormal myocardium remains uncertain, and their relationships with the myocardial amyloid burden have not yet been histologically validated. We also need to further evaluate the reproducibility and variability of volumetric parameters, which may depend on the selection of thresholds and image acquisition timing.
CONCLUSIONS
Volumetric parameters derived from 99mTc-PYP SPECT/CT, CPV and CPA, quantified the ATTR-CM disease burden more accurately than the SUVmax, SUVmax ratio, and H/CL ratio. The performance was equivalent between 1 and 3 hours after injection. Larger studies are warranted to clarify an optimal nuclear imaging biomarker for managing patients with ATTR-CM.
NEW KNOWLEDGE GAINED
Volumetric evaluation of 99mTc-PYP SPECT/CT quantified the ATTR-CM disease burden more accurately than the SUVmax and SUVmax ratios, including the SUVmax retention index. Significant correlations between volumetric parameters and prognostic factors were equivalent between images acquired at 1 and 3 hours after 99mTc-PYP injection, whereas the myocardial SUVmax, retention index, and visual grading score were significantly higher at 1 hour than at 3 hours.
Disclosure

[...] which supplies 99mTc-PYP in Japan. All others have no relevant disclosures.
Open Access
This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Table 1.
Table 1 summarizes the values for LVEF, GLS, ECV, troponin T, and eGFR. We assessed the LVEF in 22 patients, GLS in 13, and ECV in 12 at 5.8 ± 52.3, 5.8 ± 68.3, and 5.8 ± 69.7 days, respectively, before 99mTc-PYP SPECT/CT. Ht was measured within 24 hours of CMR. We assessed troponin T in 18 patients and eGFR in 23 at 8.6 ± 32.8 days after and 2.7 ± 5.7 days before 99mTc-PYP SPECT/CT, respectively.
a All patients with ATTRv-CM had already been treated with tafamidis
b One patient did not undergo a myocardial biopsy, but an intestinal biopsy was positive for ATTR amyloid
Table 2.
Uptake parameters at 1 h after 99mTc-PYP injection
CPV, cardiac pyrophosphate volume; CPA, cardiac pyrophosphate activity; SUV, standardized uptake value; H/CL, heart to contralateral lung
a Retention index = (myocardial SUV/vertebral SUV) × paraspinal muscle SUV
b One patient had the Val30Met mutation
Table 3.
Correlations between uptake parameters at 1 h and four prognostic factors
CPA, cardiac pyrophosphate activity; CPV, cardiac pyrophosphate volume; ECV, extracellular volume fraction; GLS, global longitudinal strain; H/CL, heart to contralateral lung; LVEF, left ventricular ejection fraction; R, correlation coefficient; SUV, standardized uptake value
*P < .05
a Retention index = (myocardial SUV/vertebral SUV) × paraspinal muscle SUV
b Visual grading scores of all 12 patients were 3
Table 4.
Uptake parameters of images acquired from patients (n = 13) at 1 and 3 h after 99mTc-PYP injection
Table 5.
Correlations between uptake parameters and four prognostic factors in patients with SPECT/CT images acquired at 1 and 3 h
CPA, cardiac pyrophosphate activity; CPV, cardiac pyrophosphate volume; ECV, extracellular volume; GLS, global longitudinal strain; H/CL, heart to contralateral lung; LVEF, left ventricular ejection fraction; R, correlation coefficient; SUV, standardized uptake value
*P < .05
a Visual grading scores of all patients were 3 at 1 h
Procalcitonin as a marker of sepsis and outcome in patients with neurotrauma: an observation study
Background Procalcitonin (PCT) is a reliable biomarker of sepsis and infection. The level of PCT associated with sepsis and infection in patients with traumatic brain injury is currently unknown. The purpose of this study was to investigate the value of PCT and C-reactive protein (CRP) as diagnostic markers of sepsis and to evaluate the prognostic value of these markers in relation to the severity of injury, sepsis and mortality. Methods 105 adult patients with neurotrauma were enrolled in this study from June 2011 to February 2013. PCT and CRP were measured at admission and 2, 3, 5 and 7 days after admission. The sepsis criteria established by the American College of Chest Physicians/Society of Critical Care Medicine Consensus Conference were used to identify patients. The Injury Severity Score (ISS) and Glasgow Coma Score (GCS) were used to assess the severity of the injury. All patients were monitored for 28 days. Results At admission, the median level of PCT was consistent with the severity of brain injury as follows: mild 0.08 ng/ml (0.05 - 0.13), moderate 0.25 ng/ml (0.11 - 0.55) and severe 0.31 ng/ml (0.17 - 0.79), but the range of CRP levels varied greatly within a given severity of brain injury. Seventy-one (67.6%) patients developed sepsis. The initial levels of PCT at admission were significantly higher in patients with sepsis than in patients with systemic inflammatory response syndrome (SIRS), but there were no differences in the initial concentration of CRP between sepsis and SIRS. After adjusting for confounders, multivariate logistic regression analysis revealed that PCT was an independent risk factor for septic complications (p < 0.05). The areas under the ROC curves at admission for the prediction of mortality were 0.76 (p < 0.05) and 0.733 for PCT and CRP, respectively. Conclusions Increased levels of PCT during the course of the ICU stay could be an important indicator for the early diagnosis of sepsis after neurotrauma. In addition, high serum levels of PCT at admission in patients with neurotrauma indicate an increased risk of septic complications, and daily measurement of PCT assists in guiding antibiotic therapy in neurotrauma patients.
Background
Traumatic brain injury (TBI) accounts for a large proportion of injury-related deaths and disabilities in developed countries [1]. Patients with TBI have an increased risk of subsequent infection and sepsis, which require prompt diagnosis and treatment with appropriate antimicrobial agents to reduce associated morbidity and mortality [2][3][4]. Severe neurotrauma is a major potential cause of systemic inflammatory response syndrome (SIRS) [5], and the currently available markers, including fever, C-reactive protein (CRP), IL-6, and total leukocyte count, lack the sensitivity and specificity to distinguish SIRS from infectious disease [6]. Reducing the risk of infection and subsequent sepsis through adherence to infection control measures is essential to lower in-hospital deaths among patients with TBI.
In the last decade, several papers have reported procalcitonin (PCT) as a novel biomarker recommended for the assessment of bacterial infection [7][8][9][10][11]. PCT is composed of 116 amino acids and is physiologically synthesised by thyroid C cells, but in sepsis there is an extrathyroidal origin of PCT genesis [7,8]. Under normal conditions, serum PCT levels are negligible, but they become detectable at the onset of infection. PCT levels are closely related to the severity and evolution of infection, and they are thought to be associated with poor prognosis in patients with septicaemia [12,13]. PCT has been used to evaluate the evolution of infections and sepsis in patients with trauma and surgical conditions [5,14,15]. Changes in PCT level in response to therapeutic treatment have also been reported, suggesting prognostic significance in a variety of clinical settings [16,17]. A persistent increase in PCT level is associated with an increased length of ICU stay and mortality [18]. PCT-guided strategies have significantly reduced the use of antibiotics [19]. Among the proposed biomarkers of sepsis, PCT appears to be the most promising in terms of its diagnostic and prognostic benefits. Recently, a newly developed PCT assay with significantly higher discriminatory power has been put into practice [20]. However, there is a lack of substantial data regarding the use of PCT in traumatic brain injury [5,21].
The aim of this study was to investigate the levels of PCT and CRP that are used as diagnostic markers of sepsis in patients with traumatic brain injury at the time of hospital admission, and to evaluate the prognostic value of these markers related to the severity of injury, sepsis and mortality.
Patients and methods
After approval by the Ethics Committee of Huashan Hospital, Fudan University, Shanghai, China (approval number: 2011-281), informed consent was obtained from each patient or their representatives. A total of 105 patients with isolated traumatic brain injury who were admitted to the ICU of our tertiary university teaching hospital from June 2011 to February 2013 were enrolled in this prospective observational study. Patients were included if they fulfilled the following criteria: age over 18 years and admission within 24 hours of injury. Patients with pre-existing febrile illness, patients suffering from burns, patients with an Abbreviated Injury Scale (AIS) score ≥ 3 for any other body region, patients under immunosuppressive therapy, patients already on antibiotics for ≥ 3 days before admission, and patients who did not survive for 48 hours after admission were excluded from the study.
Clinical data collection
All patients in the ICU were monitored by ECG and arterial pressure monitoring as part of routine clinical practice. The clinical care of the patients was guided by the criteria established by the Brain Trauma Foundation and the American Association of Neurological Surgeons [22][23][24].
Endotracheal intubation was carried out, and mechanical ventilation was initiated as clinically required. The Glasgow Coma Score (GCS) and Injury Severity Score (ISS) were used to define injury severity [25,26]. GCS and ISS were calculated within the first 24 hours after admission and were repeated every day thereafter. The levels of PCT and CRP were measured on days 1 (admission day), 2, 3, 5 and 7. All data, including clinical signs, laboratory observations, microbiological pathogens, medication options, complications, 28-day survival rate, duration of treatment, and length of ICU stay, as well as the data necessary to evaluate the ISS, were documented.
Definitions
The main complication was systemic inflammation, and the various stages of sepsis were defined according to the criteria established by the American College of Chest Physicians/Society of Critical Care Medicine [6]. We evaluated the onset of sepsis during the first observation week. Patients were allocated to four groups post hoc: (1) NoSIRS (neither SIRS nor sepsis), (2) SIRS, (3) sepsis, and (4) severe sepsis (including severe sepsis and septic shock). The stratification of trauma severity in terms of the GCS score is generally recognised and accepted by professionals [25]. Patients were accordingly allocated to three groups post hoc: (1) GCS 13-15, (2) GCS 9-12, and (3) GCS 3-8. Patients who survived longer than 28 days were considered survivors.
Measurement of plasma PCT and CRP
According to the manufacturer's instructions, the PCT level was measured by an electrochemiluminescence immunoassay (ECLIA; B.R.A.H.M.S. PCT ELECSYS®) using automated Roche Elecsys and cobas e immunoassay analysers [20]. This new assay is more sensitive than conventional assays, with a sensitivity of 0.02 ng/ml. The CRP level was determined using a fully automated IMMAGE Immunochemistry System (Beckman Coulter, USA), based on a highly sensitive near-infrared particle immunoassay method. The lower limit of detection is 3.45 mg/L.
Statistical analysis
Normally distributed variables are presented as the mean ± standard deviation (SD), and nonparametric continuous variables are expressed as the median and inter-quartile range (IQR). Comparisons of PCT levels among sepsis groups or GCS groups were carried out using Kruskal-Wallis tests (the data were not normally distributed); when a statistically significant difference was detected, the Mann-Whitney U-test (nonparametric) was used for further pairwise comparisons between sepsis groups or GCS groups. We used the Pearson chi-square test (χ2 test) or Fisher's exact test to compare proportions. Multivariate logistic regression was used to assess the performance of the variables in the prediction of sepsis.
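As a rough illustration of this nonparametric pipeline, the Python sketch below runs a Kruskal-Wallis test across groups and, if significant, pairwise Mann-Whitney U-tests; the group labels and PCT values are hypothetical placeholders, not study data.

```python
from scipy import stats

# Hypothetical admission-day PCT values (ng/ml) per sepsis group.
groups = {
    "NoSIRS": [0.04, 0.05, 0.06, 0.09],
    "SIRS":   [0.09, 0.11, 0.21, 0.33],
    "Sepsis": [0.12, 0.27, 0.45, 0.61],
    "Severe": [0.23, 0.57, 0.98, 1.45],
}

# Global comparison across all groups (data not normally distributed).
h, p = stats.kruskal(*groups.values())
print(f"Kruskal-Wallis: H = {h:.2f}, p = {p:.4f}")

# If a significant difference is detected, follow up with pairwise
# Mann-Whitney U-tests between the groups.
if p < 0.05:
    names = list(groups)
    for i in range(len(names)):
        for j in range(i + 1, len(names)):
            u, p_ij = stats.mannwhitneyu(groups[names[i]], groups[names[j]])
            print(f"{names[i]} vs {names[j]}: U = {u:.1f}, p = {p_ij:.4f}")
```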
Based on the results of the univariate analysis, we selected three confounding variables (age (p = 0.789), sex (p = 0.779) and ISS (p < 0.05)) that required adjustment to minimise their influence on the results. Receiver operating characteristic (ROC) curves and the area under the curve (AUC) were employed to assess the predictive performance of the models. A two-tailed p value < 0.05 was considered statistically significant. Statistical evaluation of the data was performed using SPSS version 17.0.
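A hedged sketch of the adjusted prediction model and its ROC assessment, assuming scikit-learn is available; the covariate matrix (initial PCT, age, sex, ISS) and outcome vector are illustrative only.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, roc_curve

# Hypothetical rows: [initial PCT (ng/ml), age, sex (1 = male), ISS];
# y = 1 if the patient later developed sepsis.
X = np.array([[0.05, 45, 0, 20], [0.27, 60, 1, 29],
              [0.11, 52, 1, 25], [0.57, 70, 0, 34],
              [0.04, 38, 1, 18], [0.45, 66, 0, 31]])
y = np.array([0, 1, 0, 1, 0, 1])

# Multivariate logistic regression adjusting PCT for age, sex and ISS.
model = LogisticRegression().fit(X, y)
scores = model.predict_proba(X)[:, 1]

# ROC curve and area under the curve for the fitted model.
fpr, tpr, thresholds = roc_curve(y, scores)
print(f"AUC = {roc_auc_score(y, scores):.2f}")
```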
Demographics and clinical characteristics of patients
One hundred five patients with isolated traumatic brain injury, without underlying illness or long-term medication, were admitted within 24 hours of the accident. Among these patients, 90 (85.7%) were injured in motor vehicle accidents, whereas 15 sustained trauma from a fall from height. The mean age of the patients was 56 years (range 18-80 years), with 79 males and 26 females. The demographics and clinical characteristics of the patients are presented in Table 1.
Generation of PCT and CRP after trauma
The median levels of serum PCT and CRP in the overall patient cohort were 0.2 ng/ml and 23.1 mg/L, respectively. Comparisons among the groups defined by GCS score showed an incremental increase in the median PCT level consistent with increasing severity of brain injury. The median PCT level in patients with mild brain injury was significantly lower than in patients with moderate and severe brain injury. The dynamic changes of PCT and CRP levels after neurotrauma are depicted in Figure 1.
Development of sepsis
SIRS or sepsis occurred in 79% of patients with traumatic brain injury. Among the 105 patients, 71 (67.6%) were clinically diagnosed as septic, including 15 patients (14.3%) who suffered from severe sepsis or septic shock. Positive cultures were detected in 43 patients with pneumonia, 10 with peritonitis, 9 with urinary tract infection, 7 with wound infection and 8 with infection at other body sites. Sepsis, severe sepsis or septic shock was frequently diagnosed during the observation period in patients with initially high serum PCT levels; in contrast, the initial level of CRP was not associated with these categories of infection, as shown in Figure 2 and Table 2. For example, patients with an initial median PCT level of 0.05 ng/ml (quartiles <0.03-0.098 ng/ml) did not develop SIRS or sepsis during the entire course of observation. Conversely, patients with an initial median PCT level of 0.105 ng/ml (0.085-0.328 ng/ml) developed SIRS (p = 0.002), those with 0.27 ng/ml (0.12-0.61 ng/ml) developed sepsis (p = 0.001), and those with 0.57 ng/ml (0.23-1.45 ng/ml) developed septic shock (p < 0.001). There was a significant difference in the initial PCT level between SIRS and sepsis (p = 0.046). The PCT concentration remained elevated in patients with sepsis, severe sepsis or septic shock, but fell rapidly to a near-normal value in patients who did not develop sepsis (Figure 3). There was a significant difference in the median PCT level between admission and 7 days after admission (0.32 vs 0.2 ng/ml, p < 0.002). Univariate logistic regression indicated that the initial white blood cell count (WBC), CRP and PCT were risk factors for sepsis/severe sepsis or septic shock; after adjustment for these parameters, multivariate logistic regression indicated that the odds ratio for the development of sepsis/severe sepsis or septic shock increased when PCT was > 0.215 ng/ml, whereas the other markers showed no statistical significance (Table 3).
Serum PCT and prognosis
Of the 105 patients, 16 died of severe head trauma with a GCS score of 3-8 within 28 days after trauma, a mortality of 15.24%. The initial PCT and CRP levels after trauma were significantly higher in nonsurvivors than in survivors (p < 0.05).
Discussion
TBI patients present a particular challenge in the diagnosis of septic complications, because the trauma itself provokes a systemic inflammatory response that often masks the initial clinical symptoms of sepsis. Patients with TBI are therefore considered to be at high risk of septic complications, and it is difficult to distinguish sepsis from SIRS in a clinical setting using ordinary signs and symptoms such as WBC, high fever and perspiration. Alternatively, PCT is considered an acute-phase biomarker of the systemic inflammatory response [27]. We carried out this study to determine whether PCT and CRP can serve as diagnostic markers of sepsis or prognostic indicators of mortality in neurotrauma patients.
In the present study, the PCT level increased in the first 24 hours after trauma in patients with GCS ≤ 12; however, the median PCT level in patients with GCS 9-12 was lower than that of patients with GCS 3-8. Various levels of PCT and CRP were detected in patients with TBI, and the development of the various stages of sepsis was also observed. The variation of CRP levels at different intervals after trauma was a uniform response without significant association with trauma severity, and the CRP concentration remained elevated for several days after trauma. The variation of PCT levels, by contrast, was moderately consistent with the severity of the trauma, as previously reported by several authors [5,28,29]. Of particular note, the initial median PCT level was closely associated with the severity of TBI. This could be explained by the observation that SIRS is a common occurrence that is promptly activated in neurotrauma patients, or by the sensitivity of the instrument used to assay PCT; either way, this phenomenon diminishes the value of PCT as a specific marker of sepsis in these patients.
As noted, the median PCT, but not CRP, level was significantly higher in patients with sepsis, severe sepsis or septic shock than in patients with SIRS, after the patients were classified according to the ACCM/SCCM criteria. CRP is another acute inflammatory protein, but its levels rise slowly in response to inflammation. Studies have shown that the kinetics of CRP in multiple trauma are slower and more sustained than those of PCT [30]. These findings could be due mainly to the low sensitivity of CRP as an early response to trauma. According to the multivariate logistic regression analysis, after adjustment for age and gender, the PCT value was an independent risk factor for sepsis: patients with a high initial PCT level had a 290-fold higher risk of sepsis than those with a low initial PCT level.
There are several studies that have reported using PCT to guide antibiotic therapy in different settings [31][32][33]. Experts have reached a consensus and developed guidelines for the clinical interpretation of elevated PCT and for risk stratification according to different PCT levels; in particular, the negative predictive value (PCT < 0.1 ng/ml) is used to exclude a risk of sepsis. In the present study, we conventionally administered a single-shot antibiotic to neurotrauma patients on admission to prevent infection. We determined that the odds ratio for the development of sepsis increased when PCT was > 0.215 ng/ml; according to the daily PCT measurements, if PCT remained below 0.1 ng/ml, further antibiotic treatment was not required because of the low risk of sepsis. This finding is consistent with Marc and colleagues [33]. We also found that the PCT concentration rapidly decreased to a near-normal value in patients who did not develop sepsis; thus, during the further course of treatment, if the PCT level remained < 0.1 ng/ml and the clinical symptoms provided no evidence of sepsis, no antibiotics were required. The daily use of the negative predictive value of PCT would therefore be clinically helpful.
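The thresholds discussed here can be condensed into a small decision sketch; the 0.1 and 0.215 ng/ml cut-offs are those quoted above, and the rule is an illustration of the reasoning, not a clinical protocol.

```python
def antibiotic_guidance(pct_ng_ml: float) -> str:
    """Illustrative PCT-guided rule based on the cut-offs in this study."""
    if pct_ng_ml < 0.1:
        return "low sepsis risk: no further antibiotics required"
    if pct_ng_ml > 0.215:
        return "elevated odds of sepsis: consider antibiotic therapy"
    return "intermediate: re-measure PCT daily and follow clinical signs"

for pct in (0.05, 0.15, 0.32):
    print(f"PCT {pct} ng/ml -> {antibiotic_guidance(pct)}")
```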
Our study has several important implications for clinicians. Although the present study population is too small to establish the importance of serum PCT in neurotrauma patients, it indicates that serum PCT can be followed throughout the course of infection to facilitate the management of sepsis in critical care. With the newest assay method, serum PCT is detected with an accuracy that other currently available tests cannot provide. The serum PCT reference range is not perfect, but it can guide physicians in developing a clinical strategy and incrementally managing neurotrauma patients with sepsis. Additionally, the daily measurement of PCT aids physicians in guiding antibiotic therapy in neurotrauma patients. The test can be performed within 30 minutes and provides valuable information long before culture results are available.
There are some limitations to the present study. First, the sample size was relatively small, and consequently the power to demonstrate the interaction between serum PCT and prognosis was limited. Second, most patients were young males, which does not represent the entire demographic of neurotrauma patients. Third, the present study did not analyse the kinetics of the PCT level or its association with other inflammatory cytokine levels in these patients. Finally, the pathophysiology of neurotrauma is complex and is influenced by patient-specific factors (age, sex) and injury-specific factors (mechanism, severity); thus, no single biomarker will be able to accurately predict the clinical course of neurotrauma patients.
Conclusions
These data substantiate the hypothesis that increased levels of PCT during hospitalisation could be an important indicator for the early recognition of bacterial infection and sepsis, and could provide early information about the presence of complications after TBI. They also indicate that the daily measurement of PCT would be clinically useful. The validity and robustness of the findings obtained in our study need to be evaluated in future studies.
Effect of the amino acid proline on some growth characteristics of cowpea exposed to drought stress
The study was conducted in the Al-Youssoufia region, located north-west of Baghdad city, during the 2018-2019 growing season. The experiment aimed to determine the optimum concentration of the amino acid proline for reducing the effect of drought stress, using three irrigation intervals (4, 8 and 12 days), four proline concentrations (0, 20, 40 and 60 mg·L-1) and their interaction, on some growth characteristics: root length, dry weight, nitrogen, phosphorus and potassium content, carbohydrate percentage and peroxidase activity in the vegetative part. The data were statistically analysed to find the least significant difference (LSD) between treatments at the 0.05 level. The results indicated that increasing the proline concentration from 0 to 60 mg·L-1 significantly improved the average growth characteristics and decreased peroxidase activity (unit·mg plant-1), whereas lengthening the irrigation interval under drought stress reduced growth and increased peroxidase activity.
Introduction
Cowpea (Vigna unguiculata L.) is a small oval bean with a black dot, found in colors such as red, white, black and brown, and is very popular for its flavor and delicious taste [1]. Cowpea contains essential vitamins and minerals, including vitamins A, B and C, folic acid, iron, potassium, magnesium, calcium, selenium, sodium, zinc, copper and phosphorus [2]. People with diabetes can include cowpea in their diet because its high magnesium levels improve bone health and play a key role in carbohydrate metabolism [3], helping the body maintain balanced blood sugar levels. It is also valuable for people with colitis, being a rich source of fiber; it can improve the efficiency of digestion and help relieve urination problems [4]. It is used as a treatment for anemia and iron deficiency, is rich in the antioxidant vitamins A and C, which benefit the skin, and its solution prevents hair loss; it is also very rich in vitamin B1 (thiamine), and therefore plays a role in preventing heart failure and controlling the ventricles of the heart [5]. There is a positive correlation between proline accumulation and plant stress: proline plays three major roles during stress, acting as a metal chelator, an antioxidative defense molecule and a signaling molecule. The literature indicates that a stressful environment results in an overproduction of proline in plants, which in turn enhances stress tolerance by maintaining cell turgor or osmotic balance, stabilizing membranes and keeping concentrations of reactive oxygen species (ROS) within normal ranges, thus preventing an oxidative burst in the plant [6]. Proline can enhance stress tolerance when supplied exogenously at low concentrations, but has toxic effects when supplied exogenously at higher concentrations. Plants are subjected to various types of environmental stress, including salinity, water deficit, temperature extremes, toxic metal ion concentrations and UV radiation. Exogenously applied proline at the seedling or vegetative stage of Zea mays enhanced growth under a water-deficient environment [7]. The present study aimed to examine the effect of increasing proline concentrations and irrigation intervals, and their interaction, on the growth of cowpea, and to determine the proline concentration that can counteract the decrease in water content in the plants.
Materials and Methods
The experiment was conducted during the 2018-2019 growing season in the Al-Youssoufia region, north-west of Baghdad city. It combined four levels of proline with three irrigation intervals, using 30 plants in each treatment unit, arranged according to a Randomized Complete Block Design (RCBD) with three replications (4 × 3 × 3).
Soil analysis
Top-soil samples were taken from the 0-30 cm layer, air dried and sieved to a particle size of 2 mm for soil chemical analysis [8], which yielded the chemical and physical properties of the soil used in the experiment. Shoot dry weight was determined with a sensitive balance after drying in an oven at (60 ± 0.2) °C to constant weight.
Root length
Roots were kept in water until all clay was removed, and root length was measured from the junction with the stem to the furthest point of penetration into the soil.
Determination of the Nitrogen concentration (%)
To determine nitrogen, a known weight of plant sample was digested according to the method of [9]. Nitrogen was determined in the shoot by micro-Kjeldahl; the percentage of nitrogen (%N) was calculated from the titration values, with a factor of 1000 to convert mg to g.
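Since the printed formula is garbled, the sketch below assumes the standard micro-Kjeldahl calculation, in which the factor 1000 converts mg to g as stated above; the input values are hypothetical.

```python
def percent_nitrogen(titrant_ml: float, acid_normality: float,
                     sample_mg: float) -> float:
    """%N by micro-Kjeldahl (assumed standard form): mg of N recovered is
    V(ml) x normality x 14.007, divided by the sample weight in mg
    (the factor 1000 converts mg to g) and multiplied by 100."""
    mg_nitrogen = titrant_ml * acid_normality * 14.007
    return mg_nitrogen / sample_mg * 100.0

# Hypothetical titration of a 200 mg digested sample with 0.01 N acid.
print(f"%N = {percent_nitrogen(5.2, 0.01, 200.0):.3f}")
```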
Determination of soluble carbohydrate percentage
A stock solution of glucose and fructose was prepared by dissolving 50 g of glucose and 50 g of fructose in one litre of distilled water, from which a series of concentrations (0.0, 0.2, 0.4, 0.6, 0.8 and 1.0 mg·L-1) was prepared. One ml of each concentration was mixed with 1 ml of phenol indicator (5%), and the mixture was read with a spectrophotometer at a wavelength of 488 nm. A standard curve was drawn from the relationship between concentration and absorbance [10].
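A small sketch of the standard-curve step, assuming a linear least-squares fit of absorbance against concentration; the absorbance readings are hypothetical.

```python
import numpy as np

# Standard concentrations (mg/L) and hypothetical absorbances at 488 nm.
conc = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.0])
absorbance = np.array([0.00, 0.11, 0.23, 0.34, 0.44, 0.56])

# Least-squares calibration line: A = slope * C + intercept.
slope, intercept = np.polyfit(conc, absorbance, 1)

# Invert the calibration to estimate an unknown sample's concentration.
sample_abs = 0.29
sample_conc = (sample_abs - intercept) / slope
print(f"Estimated soluble carbohydrate: {sample_conc:.3f} mg/L")
```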
Determination of total phosphorus content (mg.plant -1 )
Phosphorus concentration in the digested samples was estimated with a spectrophotometer at a wavelength of 882 nm according to [11]; the concentration was multiplied by the plant dry weight to estimate the total phosphorus content.
Determination of total potassium content (mg.plant -1 )
Potassium concentration in the digested samples was determined with a flame spectrophotometer according to [8]; total potassium content was calculated by multiplying the potassium concentration by the plant dry weight.
Riboflavin: a 47.7 µmol solution was prepared by dissolving 0.0018 g of riboflavin in distilled water and completing the volume to 100 ml.
Method of work
One g of soft vegetative tissue from a 90-day-old sample was crushed with 10 ml of potassium phosphate buffer (0.1 M), kept refrigerated at 3 °C for 24 hours, and centrifuged at 1000 rpm for a quarter of an hour. A 1.5 ml aliquot of the supernatant was placed in tubes, 40 µl of the filtered solution was added, and the absorbance was read with a spectrophotometer at a 560 nm wavelength against a blank containing distilled water instead of plant tissue. The samples were then exposed to light from two 20 W lamps in a box for ten minutes, and the absorbance was read again at the same wavelength. A standard curve was drawn, and the inhibition ratio was calculated and expressed as the enzyme activity.
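Because the printed equation is garbled, the sketch below assumes the usual convention for photochemical assays of this type, expressing activity as the percentage inhibition of the blank reaction; the absorbance readings are hypothetical.

```python
def inhibition_percent(blank_abs: float, sample_abs: float) -> float:
    """Assumed convention: inhibition (%) of the photochemical reaction
    relative to a blank containing no plant tissue."""
    return (blank_abs - sample_abs) / blank_abs * 100.0

# Hypothetical 560 nm readings after 10 minutes of illumination.
print(f"Inhibition: {inhibition_percent(0.82, 0.47):.1f}%")
```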
Results and Discussion
Effect of proline on some growth characteristics of cowpea plants exposed to water stress
Dry weight
Table 3 shows a significant increase (P = 0.05) in dry weight with increasing proline concentration regardless of irrigation interval, and with irrigation interval regardless of proline. Mean dry weight increased as the proline level rose from 0 to 40 mg·L-1, a significant increase of 38.397% in the average dry weight. The results also indicated a significant increase in dry weight as the irrigation interval increased from 4 to 8 days, an increase of 7.710%. The two-way interaction of the studied factors was also significant: the highest dry weight, 2.704 g·plant-1, was recorded at 40 mg·L-1 proline and an 8-day irrigation interval, compared with 1.995 g·plant-1 at 0 mg·L-1 proline and a 4-day interval, an increase of 35.538%. We conclude that widening the irrigation interval affects dry weight by lowering the rate of photosynthesis and the absorption of important nutrients, and consequently metabolism; the reduced dry weight is related to a lower plant elongation rate and a lower leaf area, since dry matter is the net product of photosynthesis and depends on the balance between photosynthesis and respiration. Spraying with proline therefore increased the plant's photosynthetic capacity by controlling the opening and closing of the stomata and the plant's ability to build chlorophyll pigments and prevent their decomposition, which helped balance CO2 uptake against water loss during transpiration [16]. It is believed that water deficit causes accumulation of ABA in the leaves, closes the stomata, decreases CO2 assimilation and increases the concentration of the IAA-oxidase enzyme, whose action in the abscission zone causes accumulation of the ethylene hormone that degrades chlorophyll [17]. It is also believed that lack of water leads to increased accumulation of hydrogen peroxide (H2O2) and inhibition of energy production by the NADPH-oxidase enzyme [18].
Root length
Table 4 shows a significant increase in root length with increasing proline regardless of irrigation interval, reaching 5.93 cm at the 40 mg·L-1 concentration. For the irrigation intervals regardless of proline, root length increased significantly at the 8-day interval, which recorded 5.93 cm compared with the other intervals. The proline × irrigation interaction showed that the combination of 40 mg·L-1 proline and an 8-day interval led to a significant increase in root length, 6.17 cm, compared with the other means. This may be because increased dehydration disturbs the internal hormonal system, lowering the gibberellic acid level and leading to accumulation of abscisic acid in the plant, which reduces the division and size of cells in the apical areas [13]. Water stress is thought to increase the activity of free radicals of the reactive oxygen and nitrogen groups; when the plant is unable to inhibit and scavenge them in the chloroplasts and mitochondria, CO2 fixation and the accumulation of dry matter stop [14], which inhibits root growth. Water stress is also thought to decrease the oxygen content through the activity of the IAA-oxidase enzyme, which weakens root growth [15].
Effect of different levels of amino acid proline on content of macro elements (NPK) of cowpea plant which exposure to drought stress
Tables 5, 6 and 7 show a significant decrease in the mean nitrogen, phosphorus and potassium contents under drought stress as the irrigation interval widened from 4 to 12 days; the means decreased especially at 12 days compared with the 4-day interval. Dehydration is believed to disturb the metabolism of nucleic acids, amino acids and proteins by straining the polyribosomes and lowering ATP levels and nitrogen content; the metabolic disorder is due to the increase in reactive oxygen compounds, which increases nitrate-reductase activity and the hydrolysis of nucleic acids, affecting mineral absorption [19]. The low protein content is also believed to result from the increased activity of enzymes such as lipoxygenase, protease and RNase under drought stress, which reduce nucleic acid metabolism; likewise, the glutathione concentration increases under extreme stress to release the glutamate accumulating from the decomposition of organelles and cell membranes [20]. Polyamines, like glutathione and some amino acids such as proline, maintain the cell's osmotic pressure against the water loss caused by dehydration, which ultimately increases cell osmosis and water absorption [21]. As for the effect of proline regardless of irrigation interval, the effect was significant in increasing the percentage of carbohydrates at the 20 mg·L-1 concentration, an estimated increase of 58.87% compared with the zero concentration. The proline × irrigation interaction was significant at 40 mg·L-1 and 4 days, an estimated increase of 74.9% in carbohydrates compared with 0 proline and the 4-day interval. The reduction of carbohydrates is due to dehydration, which affects photosynthesis: water tension first closes the stomata, reducing the amount of carbon dioxide entering and fixed in the leaves, causing a significant drop in photosynthesis that ultimately affects the amount of nutrients and the overall growth of the plant [22]. The increase in the percentage of carbohydrates is due to the effect of proline in increasing leaf area, leaf number and leaf chlorophyll content, which increases the efficiency of photosynthesis and hence the production of carbohydrates [23].
Peroxidase enzyme
Table 9 shows significant effects of all treatments. The mean peroxidase enzyme activity increased with longer irrigation intervals, and the 12-day interval gave the highest value, an increase of 88.46 unit·mg plant-1 compared with the 4-day treatment.
With increasing proline concentration, the content of the peroxidase enzyme decreased, especially at the 60 mg·L-1 concentration, a decrease of 36.36 unit·mg plant-1. The interaction between proline spraying and the 8-day irrigation interval inhibited peroxidase activity, a decrease of 46.67% compared with 0 proline and the 4-day interval. Wider irrigation spacing increased the activity of the enzymatic oxidation system, which includes the superoxide dismutase (SOD) and peroxidase enzymes, as a result of the activity of oxidizing enzymes and the free radicals produced by water stress and reduced scavenging [24]. The increased activity of the peroxidase enzyme, through increased production of the hydroxyl radical and singlet oxygen under water stress, is also believed to raise the concentration and activity of malondialdehyde, a vital indicator that the plant is provoking antioxidant production [25].
Preparation, Characterization and Thermal Degradation of Polyimide (4-APS/BTDA)/SiO2 Composite Films
Polyimide/SiO2 composite films were prepared from tetraethoxysilane (TEOS) and poly(amic acid) (PAA) based on aromatic diamine (4-aminophenyl sulfone) (4-APS) and aromatic dianhydride (3,3,4,4-benzophenonetetracarboxylic dianhydride) (BTDA) via a sol-gel process in N-methyl-2-pyrrolidinone (NMP). The prepared polyimide/SiO2 composite films were characterized using X-ray diffraction (XRD), Fourier transform infrared spectroscopy (FTIR), scanning electron microscope (SEM) and thermogravimetric analysis (TGA). The FTIR results confirmed the synthesis of polyimide (4-APS/BTDA) and the formation of SiO2 particles in the polyimide matrix. Meanwhile, the SEM images showed that the SiO2 particles were well dispersed in the polyimide matrix. Thermal stability and kinetic parameters of the degradation processes for the prepared polyimide/SiO2 composite films were investigated using TGA in N2 atmosphere. The activation energy of the solid-state process was calculated using Flynn–Wall–Ozawa’s method without the knowledge of the reaction mechanism. The results indicated that thermal stability and the values of the calculated activation energies increased with the increase of the TEOS loading and the activation energy also varied with the percentage of weight loss for all compositions.
Introduction
Polyimides are a class of organic polymers that have been widely used in high-temperature applications, such as aerospace, the microelectronics industry [1,2], semiconductors and composites [3]. They demonstrate many advantages, including excellent heat and chemical resistance [4], as well as outstanding combinations of thermal, mechanical and electrically insulating properties [5,6]. Thermal behaviour is one of the most important properties of polymeric materials. Many researchers have studied the properties of polyimides, and most have reported excellent thermal stability [7]. Meng et al. (2007) reported the thermal properties of a polyimide based on 2,6-bis(p-aminophenyl)-benzo[1,2-d;5,4-d]bisoxazole, which showed excellent thermal stability, with a 5% weight loss temperature (T5%) of 572 °C and a glass transition temperature (Tg) of 283 °C in N2. Thermal stability and thermal degradation kinetics are significant to both production and application [8]. In particular, thermogravimetric analysis (TGA) has been widely employed to investigate the thermal degradation kinetics and thermal stability of polymers [9][10][11][12][13]. However, polyimides have several intrinsic weaknesses, such as a low thermal coefficient, poor corona resistance and comparatively high thermal expansivity, which restrict some of their applications [14]. To enhance polyimides and obtain the desired improvements, many research activities have incorporated inorganic components into the polymer matrix. The sol-gel process is an important method for preparing such hybrid materials, whereby the organic and inorganic components are mixed at the molecular level; the resulting intimate mixing provides properties different from those of traditional composites [4].
Chemical Analysis by FTIR Spectroscopy
The FTIR spectra of the prepared polyimide/SiO2 composite films with different silica contents are depicted in Figure 1. The characteristic absorption bands of the imide groups near 1780, 1720 and 1378 cm−1 were observed in the FTIR spectra of the prepared samples after thermal imidization of the poly(amic acid)/SiO2 precursor. Meanwhile, the characteristic absorption of the amide carbonyl at 1650 cm−1 did not appear in the spectra, indicating that the imidization reaction was complete [15]. The characteristic Si-O-Si vibration bands of the silica formed by hydrolysis were also observed at 477 cm−1 and near 1100 cm−1. As the content of SiO2 particles increased, the intensity of the Si-O-Si band gradually became stronger in the FTIR spectra of the polyimide/SiO2 composite films [16].
X-ray Diffraction Study of Polyimide/SiO 2 Composite Films Structure
The prepared composite films were also characterized by XRD. Figure 2a shows the XRD patterns of the polyimide/SiO2 composite films with various SiO2 contents, prepared according to the processing conditions in Section 3.3. Figure 2b shows the XRD pattern of SiO2 particles prepared under the same conditions but in the absence of PAA, where the average size of the obtained particles was 610 nm. As is clearly seen in Figure 2a, the diffractogram of the polyimide (curve I) shows a peak with a non-Gaussian distribution pattern, revealing a semi-crystalline polymer structure. This peak appears in all the diffractograms of the polyimide composite films. As the TEOS loading in the PAA precursor increased (curves II-IV), the shoulder of the peak beyond 2θ = 16° also heightened, suggesting the formation of SiO2 particles and an increasing SiO2 content in the polyimide matrix. The SEM photographs of the cross-section surfaces of the polyimide/SiO2 composite films at various percentages of TEOS as the SiO2 source are shown in Figure 3. The SiO2 particles, visible as white globules, were dispersed uniformly in the polymer matrix. The average sizes of the SiO2 particles in the composite films were estimated to be around 265, 374 and 580 nm for the films prepared with 10, 30 and 50 wt% TEOS loading, respectively; the particle sizes for the various TEOS loadings are also compared in Table 1. On the basis of the morphological observations, the SiO2 particle size increased with TEOS loading, as seen from the increased aggregation tendency of the SiO2 particles. The SEM images also revealed that with increasing TEOS loading the dispersion of the SiO2 particles in the hybrid became more uniform. The adhesion of the silica particles to the polyimide matrix is low, as the particles appear completely debonded from the surrounding matrix, indicating very poor interfacial adhesion between the particles and the matrix; comparison of the SEM images further indicates that the interfacial adhesion decreased at higher TEOS loadings. Overall, the distribution and dispersion of the SiO2 particles within the polyimide matrix are relatively uniform, and this factor can affect the thermal stability of the composite films.
Thermal Properties Study of Polyimide/SiO 2 Composite Films
The thermal stability of the prepared polyimide/SiO2 composite films can be evaluated by TGA. The TG curves of the polyimide/SiO2 composite films with various SiO2 contents at a heating rate of 5 °C/min are shown in Figure 4a. The TG curves indicate that water and solvent were successfully eliminated from the polyimide film and the polyimide/SiO2 composite films, since there is no weight loss below 100 °C. It can be clearly seen in Figure 4a that the residual weight of the polyimide/SiO2 composite films after thermal decomposition above 700 °C is higher than that of the polyimide film. The increase in the weight residue above 700 °C illustrates the successful incorporation of higher amounts of silica into the polyimide/SiO2 composite films and, ultimately, the increase in thermal stability. The thermal decomposition temperatures (Td) of the polyimide film and the polyimide composite films are compared in Table 1. The results show that the thermal decomposition temperature of the composite films increases with SiO2 content, supporting the view that inorganic components such as SiO2 can improve the thermal stability of organic materials. The improvement of the thermal stability of the prepared polyimide with SiO2 can be attributed to the inherently good thermal stability of these materials and to the strong interaction/chemical bonding between the polyimide and the silica [16,17].
Theoretical Background
One application of thermogravimetric analysis is the determination of kinetic parameters, such as the reaction order and the activation energy. In thermogravimetric analysis, the extent of reaction may be defined as the ratio of the actual weight loss to the total weight loss corresponding to the degradation process [8]:

X = (W0 − Wt)/(W0 − Wf)  (1)

where W0 is the initial weight of the sample, Wt is the actual weight of the sample, Wf is the final weight of the sample and X is the degree of decomposition. A typical kinetic process can be represented by the decomposition rate dX/dt, which is a function of temperature and sample weight:

dX/dt = k f(X)  (2)

where k is the rate constant and f(X) is the differential expression of a kinetic model function. The rate constant k is defined by the Arrhenius expression:

k = A exp(−E/RT)  (3)

where A is the pre-exponential factor (s−1), E is the activation energy of the degradation reaction (kJ/mol), R is the universal gas constant (8.314 J/mol·K) and T is the absolute temperature (K). The combination of Equations (2) and (3) leads to:

dX/dt = A exp(−E/RT) f(X)  (4)

In thermogravimetric analysis the sample temperature is changed at a constant heating rate β (β = dT/dt); with the introduction of β, Equation (4) becomes:

dX/dT = (A/β) exp(−E/RT) f(X)  (5)

Equation (5) is therefore the fundamental relation for determining kinetic parameters from thermogravimetric data. Based on the measured degree of conversion X and the heating rate β, several methods are available for calculating the apparent activation energy; the kinetic parameters obtained from thermogravimetric data therefore depend strongly on the method of calculation. Methods based on one or several heating rates include those of Ozawa, Kissinger, van Krevelen and Coats-Redfern [9]. In the present study, Ozawa's method was employed to calculate the apparent activation energy of the thermal degradation of the polyimide (4-APS/BTDA) and polyimide (4-APS/BTDA)/SiO2 composite films.
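A short sketch of Equation (1) applied to a hypothetical TGA trace; np.gradient approximates the decomposition rate with respect to temperature at a constant heating step.

```python
import numpy as np

# Hypothetical TGA trace: sample weight (mg) at evenly spaced temperatures.
weight = np.array([10.0, 9.8, 9.1, 7.6, 5.9, 4.3, 3.5, 3.4])

w0, wf = weight[0], weight[-1]
# Degree of decomposition X = (W0 - Wt) / (W0 - Wf), Equation (1).
X = (w0 - weight) / (w0 - wf)

# Approximate dX/dT for a uniform temperature step of 5 K.
dX_dT = np.gradient(X, 5.0)
print(X.round(3))
```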
Flynn-Wall-Ozawa Method
The activation energy of the decomposition process can be calculated by the Flynn-Wall-Ozawa method without knowledge of the reaction order or of differential TGA data [18,19]. Integration of Equation (5) from an initial temperature T0, corresponding to a degree of conversion X0, to the peak temperature Tp, where X = Xp, gives:

g(X) = ∫ dX/f(X) = (A/β) ∫ exp(−E/RT) dT  (6)

where g(X) is the integral function of conversion. Setting x = E/RT, Equation (6) can be written as:

g(X) = (AE/βR) p(x)  (7)

Ozawa's method is based on Doyle's approximation,

log p(x) ≈ −2.315 − 0.457x, or ln p(x) ≈ −5.330 − 1.052x,

valid for 20 < x < 60, so that Equation (7) can be written as:

log β = log(AE/(g(X)R)) − 2.315 − 0.457 E/(RT)  (8)

Here A and R are constant, and for a particular conversion g(X) is also constant. Hence the value of E can be computed by Ozawa's method for any particular degree of decomposition, being determined from the slope of the linear plot of log β versus 1/T at different heating rates, without knowledge of the reaction order.
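A minimal sketch of the Ozawa fit implied by Equation (8): at a fixed conversion X, the slope of log β versus 1/T yields E. The heating rates are those used in this study; the temperatures at fixed conversion are hypothetical placeholders for values read from the TGA curves.

```python
import numpy as np

R = 8.314  # universal gas constant, J/(mol K)

beta = np.array([5.0, 10.0, 15.0, 20.0])         # heating rates, degC/min
T_at_X = np.array([823.0, 835.0, 842.0, 848.0])  # T (K) at fixed conversion

# Equation (8): log(beta) = const - 0.457 * E / (R * T), so the slope of
# log(beta) versus 1/T gives the apparent activation energy E.
slope, intercept = np.polyfit(1.0 / T_at_X, np.log10(beta), 1)
E = -slope * R / 0.457
print(f"Apparent activation energy: {E / 1000:.1f} kJ/mol")
```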
To determine the apparent activation energy using Ozawa's method, several TGA curves at different heating rates (β) are essential. Hence, the dynamic thermogravimetric analysis of the polyimide and the prepared composite films was performed at heating rates of 5, 10, 15 and 20 °C/min in N2. Figure 5 shows the thermal degradation curves of the polyimide and of the polyimide/SiO2 composite film with 50% TEOS loading at the different heating rates of 5, 10, 15 and 20 °C/min. As depicted in the figure, the onset decomposition temperature increased with the heating rate for both compositions. The activation energies of the thermal degradation of the pure polyimide and the composite films were obtained using Ozawa's method, Equation (8). Figure 7 illustrates the values of the activation energies for the thermal degradation of the polyimide and the prepared polyimide/SiO2 composite films with different TEOS loadings versus the percentage of weight loss in the nitrogen atmosphere. As depicted, the values of the activation energies vary with the percentage of weight loss for all compositions. From these curves, mean activation energies of 258.7, 261.4, 266.5 and 272.4 kJ/mol were calculated for the pure polyimide and its composites with 10, 30 and 50 wt% TEOS loading, respectively. The activation energies increased gradually, with a smooth slope, up to 30% weight loss for the pure polyimide and up to 25% weight loss for the polyimide composite films; after these values, a jump in the activation energy was observed. This might be due to the residue formed during the thermal degradation. The jump occurred sooner in the polyimide composite films, which could be due to the SiO2 particles homogeneously dispersed in the polyimide matrix: the dispersed SiO2 particles in the structure of the prepared composite films can hinder the escape of volatile decomposition products from the polyimide.
Preparation of the Polyimide
As a representative procedure, the polyimide was prepared through thermal imidization. A solution of the dianhydride monomer BTDA (0.483 g, 1.5 mmol) in NMP (3.0 g) was gradually added to a stirred solution of the diamine monomer 4-APS (0.372 g, 1.5 mmol) in NMP (3.0 g) in a 50 mL round-bottomed flask equipped with a mechanical stirrer. The mixture was stirred at room temperature for 24 h to allow the viscosity to increase. The prepared PAA solution was then cast onto a clean glass plate. The cast film was dried in an oven at 80 °C for 5 h and then heated at different temperatures and durations (125 °C for 2 h, 150 °C for 2 h, 180 °C for 1 h, 200 °C for 1 h, 250 °C for 1 h and 300 °C for 0.5 h) to convert the PAA into a uniform, transparent yellow polyimide film with a thickness of 90 µm [20]. Temperature and time are both important factors in the thermal imidization of poly(amic acid): the imide ring is formed with the elimination of an H2O molecule from the amic acid and carboxylic acid groups in the poly(amic acid) chains. The results showed that this elimination reaction is relatively slow; hence it is necessary to select an appropriate combination of temperature and time in the thermal treatment to remove the solvent gradually and form the imide rings.
Preparation of the Polyimide/SiO 2 Composite by Sol-Gel Process
The sol-gel process was employed in the synthesis of the polyimide/SiO2 composite films, as depicted in Figure 8. 4-APS (0.372 g) was added to a round-bottomed flask and dissolved in NMP by stirring. An equimolar amount of BTDA (0.483 g) dissolved in NMP was then added to the prepared 4-APS solution. The mixture was stirred continuously for 24 h; finally, a mixture of TEOS and distilled water (4/1 on a TEOS molar basis) was added to the prepared 12 wt% PAA solution. Hydrochloric acid (HCl) was added to maintain a pH of 4, and the mixture was stirred at room temperature for 24 h to yield a transparent solution. The sol-gel reaction in the PAA proceeds in two steps: hydrolysis of the alkoxide to produce hydroxyl groups, and condensation of the residual alkoxide groups to form a three-dimensional network [4]. The obtained solution was cast on a clean glass plate and thermally treated in an oven, as explained in Section 3.2. The polyimide/SiO2 composite films were obtained as brownish films with a thickness of 90 µm. The polyimide/SiO2 composite films with various SiO2 contents were prepared according to Table 2.
Characterization
The prepared polyimide and polyimide/SiO2 composite films were characterized by FTIR spectroscopy (Perkin-Elmer, Model 100 Series). The SiO2 particles created in the polyimide matrix were investigated using an X-ray diffractometer (Shimadzu, Model XRD 6000); the XRD patterns were recorded at a scan speed of 4°/min. The fracture surface morphology of the polyimide composite films was observed by scanning electron microscopy (SEM) using a LEO 1455 VPSEM; the fracture surfaces were sputter-coated with gold before viewing to eliminate electron charging effects. The particle size distribution was determined using the UTHSCSA ImageTool software (Version 3.00). The thermal properties were determined using thermogravimetric analysis (Perkin-Elmer, Model TGA-7). Experiments were performed at heating rates of 5, 10, 15 and 20 °C/min in N2, over a temperature range from 35 to 800 °C.
Conclusions
Polyimide (4-APS/BTDA)/SiO2 composite films with various TEOS loadings were prepared by the sol-gel process. The synthesis of the polyimide (4-APS/BTDA) and the formation of SiO2 particles were confirmed by FTIR spectroscopy and X-ray diffraction. The SEM microphotographs of the cross-section surfaces of the polyimide/SiO2 composite films showed that the white globular SiO2 particles were dispersed evenly in the polyimide matrix; on the basis of the morphological observations, the average size of the SiO2 particles increased with the TEOS loading. The TG curves of the polyimide/SiO2 composite films with various SiO2 contents showed that the thermal stability of the prepared composite films increased with increasing SiO2 content. The thermogravimetric analysis also showed that the apparent activation energies of thermal decomposition gradually increased with increasing SiO2 content. The dispersed SiO2 particles in the structure of the prepared composite films may hinder the escape of volatile decomposition products from the polyimide.
On the convergence dynamics of the Sitnikov problem with non-spherical primaries
We investigate, using numerical methods, the convergence dynamics of the Sitnikov problem with non-spherical primaries, by applying the Newton-Raphson (NR) iterative scheme. In particular, we examine how the oblateness parameter $A$ influences several aspects of the method, such as its speed and efficiency. Color-coded diagrams are used for revealing the convergence basins on the plane of complex numbers. Moreover, we compute the degree of fractality of the convergence basins on the complex space, as a function of the oblateness, by using different computational tools, such as the fractal dimension and the (boundary) basin entropy.
Introduction
The simplest scenario, according to which the two primary bodies perform planar circular orbits (with zero eccentricity, e = 0), is known as the MacMillan problem [9], while this notion was originally introduced in [15]. Moreover, the first qualitative results on the Sitnikov problem have been conducted in [20] and [10].
Knowing the exact coordinates of the equilibrium points of a system is an important issue. However, this is not possible for many complicated dynamical systems, for which there are no analytical expressions for the positions of the equilibrium points. This automatically means that the locations of the libration points can be obtained only by numerical methods. As we know, in all numerical methods the initial conditions are very important. Indeed, for some starting points the numerical methods may converge relatively fast to a root, while for other initial conditions they may require a considerable number of iterations. Usually, points with fast convergence belong to the basins of convergence (Boc), while slowly converging points are situated in the vicinity of the fractal basin boundaries. Therefore, it is of great importance to know the location of the Boc of a dynamical system, because we then automatically know the optimal initial conditions for the numerical methods. Here, we would like to point out that the Boc of a dynamical system strongly depend on the chosen numerical method. In other words, different numerical methods yield completely different Boc for the same dynamical system (see e.g., [23,24]).
The Boc define complicated geometrical structures on the complex or the configuration (x, y) plane. Another important aspect is knowing the degree of fractality of the convergence regions. A quantitative estimation of the degree of fractality can easily be achieved by computing several numerical indicators, such as the uncertainty or fractal dimension (see e.g., [2,3]) or the basin entropy (see e.g., [4]). Both of these quantities can provide reliable results regarding the degree of fractality of a dynamical system.
Our article has the following layout: in Section 2 we describe the mathematical formulation of the dynamical model. The next Section 3 contains all the numerical outcomes of our work, about the properties of the Sitnikov problem with non-spherical primaries. In the last Section 4, we emphasize the conclusions of our computational analysis.
Mathematical formulation of the system
The system consists of two primaries whose dimensionless masses are m 1 = µ and m 2 = 1 − µ, where µ = m 2 /(m 1 + m 2 ) ≤ 1/2 is the well known mass parameter [21]. The centres of the two primaries lie on the horizontal Ox axis and in particular at (x 1 , 0, 0) and (x 2 , 0, 0), where x 1 = −µ and x 2 = 1 − µ. For each primary it is assumed that its shape resembles a spheroid, according to the value of the corresponding oblateness A i , i = 1, 2.
According to [1,7,19], the potential function of the restricted circular problem with two oblate primaries is given by

Ω(x, y, z) = (n²/2)(x² + y²) + (1 − µ)/r1 + µ/r2 + (1 − µ)A1/(2r1³) + µA2/(2r2³),  (1)

where

r1 = [(x − x1)² + y² + z²]^(1/2), r2 = [(x − x2)² + y² + z²]^(1/2)  (2)

are the respective distances between the test particle (third body) and the centers of the two primaries. Moreover, the mean motion is

n = [1 + 3(A1 + A2)/2]^(1/2).  (3)

The third body of the system moves according to the following equations of motion:

ẍ − 2nẏ = ∂Ω/∂x, ÿ + 2nẋ = ∂Ω/∂y, z̈ = ∂Ω/∂z.  (4)

For this system there is only one known integral of motion, which reads

C = 2Ω(x, y, z) − (ẋ² + ẏ² + ż²).  (5)

By setting x = y = 0, µ = 1/2, and A1 = A2 = A in Eq. (1) we obtain the potential function of the Sitnikov problem

Ω(z) = 1/r + A/(2r³),  (6)

where r = (z² + 1/4)^(1/2). Thus, the vertical motion of the test particle, along the z axis, is governed by the equation

z̈ = ∂Ω/∂z = −z/r³ − 3Az/(2r⁵),  (7)

while the corresponding Jacobi integral becomes

ż² = 2Ω(z) − C.  (8)

In Paper I we demonstrated that the equilibrium points (roots) of the Sitnikov problem can be obtained from the equation of motion (7). In addition, we saw that the value of the oblateness A greatly influences the nature of the roots. Specifically, for A < −1/18 there exist two pairs of real and imaginary roots. We also concluded that the levels A = {−1/18, 0, 5/6} are in fact critical levels of the oblateness.
Numerical results of the basins of convergence
The Boc on the plane of complex numbers can be determined by means of the Newton-Raphson (NR) numerical method. In Paper I we showed that the corresponding iterative scheme reads

z_{n+1} = z_n − f(z_n)/f′(z_n),

where f(z) is the function whose roots are the equilibrium points, given by the right-hand side of Eq. (7). At this point we should emphasize that from now on the coordinate z is treated as a complex variable, an approach also successfully followed in [8,[23][24][25]]. In [6] it was demonstrated that the use of complex variables is necessary, because all the beautiful and impressive Boc, with basin boundaries of fractal-like geometry, appear only on the plane of complex numbers. In Paper I we presented in detail the structure of the Boc in the four intervals of A. In Fig. 2 we remind the reader of the structure, geometry and shape of the convergence regions for four characteristic values of the oblateness A. It is seen that in all cases the Boc have finite area. Moreover, the majority of the plane of complex numbers is occupied by initial conditions (yellow regions) for which the NR iterative method quickly diverges to very large complex numbers, thus numerically indicating divergence to infinity. The distributions of the required iterations N are presented in Fig. 3. It is evident that for initial conditions (R, I) near the roots the required number of iterations is low (N ≈ 5), while near the basin boundaries the NR iterative scheme needs more than 15 iterations to obtain a root with the predefined accuracy. Fig. 4(a-d) shows the corresponding probability histograms. The probability is defined as P = N0/Nt, where N0 is the number of initial conditions (R, I) that display true convergence, and Nt is the total number of nodes on the plane of complex numbers.
The histograms shown in panels (a-d) of Fig. 4 can be used to extract more information regarding the convergence properties of the NR method. For instance, we can use the Laplace distribution to obtain best fits of the right-hand sides (tails) of the histograms (see blue solid lines). We chose the Laplace distribution because it is the most natural choice, particularly in systems displaying transient chaos (e.g., [11,17,18]).
The Laplace probability density function (PDF) is given by

P(N | l, d) = (1/(2d)) exp(−|N − l|/d),

where the parameters l and d > 0 are the location parameter and the diversity parameter, respectively. Of the PDF we need only the N ≥ l part, because the Laplace distributions refer only to the tails of the probability histograms.
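A small sketch of the tail fit, assuming SciPy's built-in Laplace distribution (its loc and scale correspond to l and d); the iteration counts are hypothetical.

```python
import numpy as np
from scipy import stats

# Hypothetical iteration counts N collected from one grid classification.
N = np.array([5, 5, 6, 6, 7, 7, 7, 8, 8, 9, 10, 12, 15, 21, 33])

# Maximum-likelihood Laplace fit: loc is the location parameter l,
# scale is the diversity d.
l, d = stats.laplace.fit(N)
print(f"location l = {l:.2f}, diversity d = {d:.2f}")

# Only the N >= l branch of the PDF is used for the tail fit.
tail = np.arange(int(np.ceil(l)), N.max() + 1)
pdf_tail = stats.laplace.pdf(tail, loc=l, scale=d)
```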
In Paper I we investigated, using numerical methods, the convergence dynamics of the Sitnikov problem with non-spherical primaries, but only for some individual values of the oblateness A. In the present work we perform a more systematic numerical analysis, in an attempt to determine how A affects the convergence properties of the system. For this task we classified 1000 grids of 1024 × 1024 starting points (R, I) inside the square region R = [−2, 2] × [−2, 2] on the plane of complex numbers, for the range A ∈ [−0.5, 1], following the pioneering works of Nagler [12,13]. In our calculations the desired accuracy regarding the coordinates of the attractor was set to 10−15, while the maximum allowed number of iterations was Nmax = 500.
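A NumPy sketch of this grid classification, using the tolerance and iteration cap quoted above; the force f(z) and its derivative follow the form reconstructed in Eq. (7) and should be treated as assumptions, to be replaced by the exact expressions of Paper I.

```python
import numpy as np

A = 0.5  # oblateness; the roots of f are the equilibria of Eq. (7)

def f(z):
    # Vertical force of the Sitnikov problem (as reconstructed above).
    r = np.sqrt(z**2 + 0.25)
    return -z / r**3 - 1.5 * A * z / r**5

def fprime(z):
    r = np.sqrt(z**2 + 0.25)
    return -(1 / r**3 - 3 * z**2 / r**5) \
           - 1.5 * A * (1 / r**5 - 5 * z**2 / r**7)

n, lim, tol, nmax = 1024, 2.0, 1e-15, 500
R, I = np.meshgrid(np.linspace(-lim, lim, n), np.linspace(-lim, lim, n))
z = R + 1j * I                        # starting points (R, I)
iters = np.zeros(z.shape, dtype=int)
active = np.ones(z.shape, dtype=bool)

for _ in range(nmax):
    step = f(z[active]) / fprime(z[active])
    z[active] -= step                 # NR update z -> z - f(z)/f'(z)
    iters[active] += 1
    undecided = np.abs(step) >= tol   # not yet converged to an attractor
    nxt = active.copy()
    nxt[active] = undecided
    active = nxt & (np.abs(z) < 1e6)  # drop points diverging to infinity
    if not active.any():
        break
# z now holds the attained roots; color-coding them by nearest attractor
# reproduces the basin diagrams, and iters gives the N distributions.
```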
The evolution of the most probable number of iterations N*, per grid, is illustrated in panel (a) of Fig. 5. It is seen that around the critical levels A = −1/18 and A = 5/6 there are two peaks, while around the critical level A = 0 we observe the lowest value of N*. In Paper I we discussed (see the bottom row of Fig. 4) that when A = −1/18 a set of starting points requires a significant number of iterations for the NR scheme to converge to the attractor z = 0; this is exactly why in part (a) of Fig. 5 the highest value of N* is measured near A = −1/18. In parts (b) and (c) of Fig. 5 we present the parametric evolution of the location parameter l and the diversity d, respectively, as a function of the oblateness A. In part (b) we also include, for comparison, the evolution of N* (in blue). One can observe that in general the location parameter almost coincides with the average number of iterations (almost always |l − <N>| ≤ 2). This implies that the Laplace probability density function (PDF) can satisfactorily fit the tails of the probability histograms. According to the plot shown in part (c) of Fig. 5, the diversity is in most cases low (d < 3), indicating that the dispersion of the values of N is very close to N*. On the other hand, in the vicinity of the critical levels of the oblateness A = −1/18 and A = 0, the diversity exhibits a local maximum and a local minimum, respectively. Part (d) of Fig. 5 illustrates the evolution of the differential entropy h = 1 + ln(2d), where d is the diversity; the evolutions of both d and h display similar overall patterns. The numerical analysis presented in Paper I revealed the fractal regions on the plane of complex numbers. One of the most convenient ways of measuring the degree of fractality of a system is by computing the uncertainty or fractal dimension D0 (see e.g., [14]), following the computational methodology used in [2,3]. Fig. 6 shows the parametric evolution of the uncertainty dimension as a function of the oblateness A. In the first interval of A the fractal dimension is very close to 1, which implies zero fractality. In the first two intervals D0 decreases, while in the third interval its value is reduced the most. D0 displays its maximum value near A = 0.7, while the lowest value is measured at A = 0.
Another efficient way of quantitatively measuring the degree of fractality of a system is by computing the so-called basin entropy [4,5]. This method determines the fractality of a basin diagram by examining its topological properties. The evolution of the basin entropy Sb, as a function of A, is illustrated in panel (a) of Fig. 7. Once more, we note that in the vicinity of all the critical values of A there are three local minima of Sb, mainly because at these values of A the total number of numerical attractors decreases from five to three (when A = −1/18 and A = 5/6) or one (when A = 0). The maximum value of Sb was measured near A = 0.7. Therefore, we may argue that two different methods (the uncertainty dimension and the basin entropy) indicate that the degree of fractality of the Boc on the plane of complex numbers is maximal near the same value of A.
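A hedged sketch of the box-counting computation of Sb and Sbb, following the definition of [4]; the box size eps is a free parameter, and labels is the array of attractor indices produced by a grid classification such as the one sketched above.

```python
import numpy as np

def basin_entropies(labels: np.ndarray, eps: int = 10):
    """Basin entropy Sb and boundary basin entropy Sbb of a square 2-D
    array of basin labels: partition into eps x eps boxes, compute the
    Gibbs entropy of the label proportions inside each box, and average
    over all boxes (Sb) or over boundary boxes only (Sbb)."""
    n = labels.shape[0] // eps
    total, boundary, n_boundary = 0.0, 0.0, 0
    for i in range(n):
        for j in range(n):
            box = labels[i*eps:(i+1)*eps, j*eps:(j+1)*eps].ravel()
            _, counts = np.unique(box, return_counts=True)
            p = counts / counts.sum()
            s = -(p * np.log(p)).sum()
            total += s
            if len(counts) > 1:       # box straddles a basin boundary
                boundary += s
                n_boundary += 1
    Sb = total / (n * n)
    Sbb = boundary / n_boundary if n_boundary else 0.0
    return Sb, Sbb
```

With this convention, the "log 2 criterion" of the next paragraph reads Sbb > ln 2 ≈ 0.693 for certainly fractal boundaries.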
Apart from the basin entropy there is also the boundary basin entropy Sbb [4], from which we can extract additional information about the degree of fractality of the Boc. The parametric evolution of Sbb is given in part (b) of Fig. 7. From this type of plot we can also deduce information regarding the fractality of the Boc on the plane of complex numbers. More specifically, we can use the so-called "log 2 criterion", according to which if Sbb > log 2 then the basin boundaries are certainly fractal (note that the converse statement is not valid). As seen in part (b) of Fig. 7, the basin boundaries are certainly fractal only when 0.65 < A < 0.78. Once more, the lowest values of Sbb are reported in the vicinity of the critical values of the oblateness.
At this point, we would like to briefly discuss the efficiency of the NR method. The classification of the 1000 grids of initial conditions suggested that, at least for this dynamical system, the numerical method displays ill behaviour for some specific sets of initial conditions. In particular, for a large number of initial conditions the iterator quickly diverges to extremely large complex numbers. In panel (a) of Fig. 8 we give the parametric evolution of the area R on the plane of complex numbers covered by starting points which diverge to infinity. It is seen that the highest value of R is observed at A = 0, while for higher values of A the area decreases.
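The divergence bookkeeping can be reproduced with a few lines of array code. Since the actual root equation of the Sitnikov problem is not restated here, the sketch below uses a placeholder polynomial f(z); the escape radius, grid size, and tolerances are likewise illustrative stand-ins. The same converged/diverged masks would also yield the extents R_max and I_max of the convergence regions discussed below.

```python
import numpy as np

# Placeholder analytic function; the actual Sitnikov root equation from
# the paper would be substituted here.
f  = lambda z: z**3 - 1.0
df = lambda z: 3.0 * z**2

def diverging_fraction(extent=2.0, n=1024, max_iter=500, escape=1e15):
    """Iterate Newton-Raphson on a square grid of complex starting points
    and return the fraction that escape beyond |z| = escape (counted as
    diverging to infinity), i.e. an estimate of the area R."""
    re = np.linspace(-extent, extent, n)
    im = np.linspace(-extent, extent, n)
    z = re[None, :] + 1j * im[:, None]
    diverged = np.zeros(z.shape, dtype=bool)
    active = np.ones(z.shape, dtype=bool)
    for _ in range(max_iter):
        zi = z[active]
        step = f(zi) / df(zi)          # points with df(z) = 0 blow up and
        zi = zi - step                 # are then flagged as diverged
        z[active] = zi
        newly = np.abs(zi) > escape
        idx = np.flatnonzero(active)
        diverged.flat[idx[newly]] = True
        done = newly | (np.abs(step) < 1e-15)   # stop escaped or converged points
        active.flat[idx[done]] = False
        if not active.any():
            break
    return diverged.mean()

# For the cubic placeholder essentially nothing diverges; with the actual
# Sitnikov function the returned fraction estimates the area R of Fig. 8(a).
print(f"diverging fraction = {diverging_fraction():.4f}")
```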
Before closing this section, it is illuminating to discuss how the oblateness influences the geometry of the Boc. As we have already seen, the convergence regions on the complex plane have finite area. We therefore define R_max and I_max as the maximum extents of the Boc along the real and imaginary axes, respectively, while the ratio r is defined as R_max/I_max. Panel (b) of Fig. 8 shows the evolution of R_max, I_max and r as functions of the oblateness. One can see that for negative values of A the overall structure of the Boc is elongated along the vertical axis. On the contrary, for A > 0 the value of R_max increases, and at about A = 0.15 we have R_max = I_max. Moreover, for A > 0.15 the value of R_max increases rapidly, which implies that the overall shape of the convergence regions becomes elongated along the horizontal axis.
Concluding remarks
The present paper can be considered a continuation of Paper I. The scope of this work was to study the convergence dynamics of the Sitnikov problem with non-spherical primaries. Using the NR iterative method, we revealed the Boc on the plane of complex numbers by means of color-coded diagrams. Moreover, we demonstrated how the oblateness A influences the speed and the accuracy of the method. At the same time, we determined how the same parameter affects the degree of fractality of the convergence regions by computing modern quantitative indices, such as the uncertainty dimension and the (boundary) basin entropy.
In this work we demonstrated for the first time how the oblateness of the Sitnikov problem with spheroidal primaries influences the overall properties of the system. Additionally, we related, also for the first time, different techniques for measuring the fractal degree of a dynamical system: more specifically, we computed and compared the results of both the fractal dimension and the (boundary) basin entropy. On this basis, we claim that the presented outcomes of the article are interesting and novel and add new information to the field of the convergence properties of Hamiltonian systems.
The most important findings of our numerical exploration are listed here:
1. For the majority of the studied cases, regarding the value of A, the NR iterative scheme requires on average about 8 iterations to reach one of the numerical attractors (roots), while only near the critical values of A does the average number of iterations increase.
2. Our numerical analysis reported the existence of starting points for which the numerical iterator quickly diverges to infinity. On the other hand, there is no indication of false or non-converging starting points on the plane of complex numbers.
3. The highest fractal degree (measured using the uncertainty dimension as well as the basin entropy) of the convergence diagrams on the plane of complex numbers corresponds to about A = 0.7, while the lowest values of the degree of fractality were measured near the critical levels of A, where the number of numerical attractors is reduced.
4. Exploiting the computation of the boundary basin entropy and the "log 2 criterion", we proved that in this system the basin boundaries of the convergence regions on the plane of complex numbers are certainly fractal for 0.65 < A < 0.78.
5. For negative values of the oblateness the overall shape of the convergence regions on the plane of complex numbers is elongated along the vertical axis, while for A > 0.15 the geometry changes and the shape becomes elongated along the horizontal axis.
The numerical routine of the NR iterative method was written in the standard version of FORTRAN 77 [16]. Classifying the nodes on the plane of complex numbers required about 2.5 minutes of CPU time per grid, using a Quad-Core Intel i7 vPro 4.0 GHz processor. Version 11.3 of Mathematica [22] was used to construct all the graphics of the paper.
Design and management of humanitarian supply chains: challenges, solutions, and frameworks
The design and management of the humanitarian supply chain are among the most critical aspects of humanitarian aid operations. Despite enormous interest among the academic community and practitioners, the design of a humanitarian supply chain is still not well understood. Most publications have attempted to address the mechanisms of humanitarian relief operations; however, the elements of humanitarian supply chain design are not well understood in an integrated manner. In this special issue, we have accepted articles based on six factors that shape the design and management of the humanitarian supply chain and the influencing factors (see Fig. 4). We have noted the research gaps and offer rich directions for future research.
Introduction
In the last decade, the humanitarian supply chain management field has gained significant attention from academics and policymakers (Kovacs & Spens, 2011a; Gupta et al., 2016; Altay et al., 2021). Humanitarian supply chain management gained a significant footing after the 2004 Indian Ocean Tsunami, whose disaster relief efforts received severe criticism from experts due to poor supply chain management (Kovacs & Spens, 2011b; Oloruntoba et al., 2019). Since then, natural disasters have been on the rise (Guha-Sapir & Scales, 2020). As more humanitarian crises are caused by disasters, supply chain research in humanitarian settings must continue to advance in such complex settings (Van Wassenhove, 2006; Starr & Van Wassenhove, 2014; Altay & Labonte, 2014). Following the definition of Burkart et al. (2016, p. 32), "the process of planning, implementing and controlling the efficient, cost-effective flow and storage of goods and materials, as well as related information, from the point of origin to the point of consumption to alleviate the suffering of vulnerable people", we argue that the design and management of the humanitarian supply chain is one of the most critical aspects of humanitarian supply chain management. Humanitarian organizations need to respond to crises urgently, providing aid to victims including shelter, food, and other necessary items to alleviate their suffering (Charles et al., 2016). In the past, humanitarian organizations have acted to gain maximum benefits in designing humanitarian supply chain networks: they have either positioned their inventories in the location where they are engaged in relief operations, or closer to the airport, or in the location where they gain a maximum tax advantage (Pettit & Beresford, 2009; Roh et al., 2015). These approaches might have limited the scope for exploring other possible locational advantages (Charles et al., 2016). Thus, the design and management of an optimal supply chain network for humanitarian organizations operating in a highly complex setup is a major challenge for humanitarian practitioners and policymakers.
Why focus on design and humanitarian supply chain?
In recent years the number of publications on humanitarian logistics/humanitarian supply chain management has increased significantly (see Fig. 1). Yet the few articles focusing on humanitarian supply chain design suggest a significant research gap (see Fig. 2). Kovacs & Moshtari (2019) suggest that humanitarian studies should be more realistic and focus on real-world problems with real data sets. Charles et al. (2016) argue that practitioners in humanitarian organizations find it difficult to grasp the underlying assumptions of complex optimization problems. Moreover, in the case of robust and stochastic optimization, practitioners find the models difficult to comprehend, as it is often hard to assign the probabilities. Given these challenges, we followed the recommendation of Boyer & Swink (2008) that a multi-method approach is the best way to tackle the complex challenges involved in the design and management of the humanitarian supply chain.
The need for the special issue
The special issue aimed to publish articles that help advance the theoretical debates on how humanitarian supply chain design can tackle the complex issues that often trouble humanitarian relief workers during disaster relief operations. The intent was to publish research articles that investigate humanitarian supply chain design issues, identify the various factors that influence the design and management of the humanitarian supply chain, and examine how these factors influence humanitarian supply chain performance. There was no constraint on the type of submission: submissions could be analytical, conceptual, or empirical studies relying on survey-based data, qualitative studies (i.e., multiple-case-based studies, action research, graph-theoretic approaches, grounded theory, or ethnographic approaches), or unique conceptual works that help push the theoretical boundary. We did, however, encourage authors to address the unique challenges faced by humanitarian organizations in the wake of the exponential rise in disasters across the globe. The result was a significant number of submissions, of which we finally accepted 44 articles after multiple rounds of major revision. We have classified the accepted articles based on methods (see Fig. 3). Next, we provide a synthesis of the 44 contributions to theory and practice.
Summary of contributions
Before summarising the accepted contributions, however, it is important to understand them in the context of the design and management of the humanitarian supply chain. To begin with, we first need to understand the main factors that shape humanitarian supply chain design (see Fig. 4).
Further, we have also received publications that provide a retrospective outlook on the humanitarian supply chain management field. For instance, articles A28, A29, A30, A31, A32, A33, and A34 offer many insights to humanitarian scholars and theories to test in future studies. Article A28 has attempted to address the human-related issues in the humanitarian supply chain. Similarly, articles A29 and A33 provide a retrospective review of the humanitarian supply chain literature published in reputable outlets and explain how the scholars and their scholarly output have shaped the evolution of humanitarian supply chain management as a discipline; the authors point out some research gaps that may be worth investigating. The articles A30, A31, and A32 provide detailed thematic reviews, covering disruptions and resilience (A30), the role of digital technologies in the humanitarian supply chain (A31), and quality management issues in the humanitarian supply chain (A32).

Table 1 Themes of the accepted articles, their contributions, and future research directions

Procurement (A1, A38, A41): In A1 the authors found that the options contract is one of the best ways of procuring relief material, whereas in A38 the authors propose an optimal solution to minimize the supply cost of relief materials procured from various sources. A41 presents how the innovativeness of suppliers helps tackle complexity. Future research: there is significant scope to examine and evaluate other contracts to understand their implications for the procurement of relief materials.

Transportation and warehousing (A2, A3, A4, A5, A6, A7, A8, A9, A35, A36): The authors have attempted to provide optimal solutions using a wide range of options to improve flexibility, reduce cost, and improve service during disaster relief operations. Future research: despite good efforts, there is a need for solutions that consider realistic conditions, and for far more dynamic and robust techniques that help tackle realistic situations.

Coordination/collaboration (A10, A11, A1): The authors discuss the role of coordination in reducing carbon emissions in a sustainable humanitarian supply chain (A10), the application of technologies in improving coordination among disaster relief actors in the I4.0 era (A11), and ways of improving coordination to improve the procurement strategy for relief items (A1). Future research: despite some good efforts, coordination among disaster relief actors remains one of the most pressing concerns; scholars may pay detailed attention to governing mechanisms, and a multi-method approach is needed to build a comprehensive understanding of the coordination mechanism.

Funding and policy (A12, A13, A14, A15, A37): The authors in A12 argue that, besides humanitarian assistance, the funds provided to victims may help alleviate suffering in the post-disaster phase. The authors in A13 propose a unique method to evaluate labor efficiency in the humanitarian sector, while A14, A15, and A37 offer implications for policy to smoothen disaster relief operations. Future research: empirical work on the policy front is called for, together with a far more integrated approach to viewing the micro and macro elements that shape the humanitarian fabric.

Supply chain properties (A16, A17, A18, A19, A20, A30): These articles contribute toward understanding agility, resilience, and the ripple effect in the humanitarian supply chain. Future research: in-depth studies are needed to shed more insight on how to build agile and resilient humanitarian supply chains.

ICTs and enabling technologies (A21, A22, A23, A24, A25, A26, A27, A31): The authors offer multiple perspectives that may influence the design and management of the humanitarian supply chain, including forecasting capability, displaced human beings, complexity, sustainability, pandemics, and culture. Future research: humanitarian scholars can further examine the role of culture in the design of the humanitarian supply chain.
Future research directions and opportunities
One of the main aims of organizing this special issue was to identify potential research gaps and further motivate scholars to advance the theoretical debates surrounding humanitarian supply chain design. In Table 1 we have attempted to identify some research gaps; however, future research should not be limited to these. It must help address the overall challenges that humanitarian organizations face while dealing with unpredictable events with limited resources, under a high level of scrutiny from the media and political organizations. We therefore provide a list of areas that can be tackled in future studies. First, coordination among humanitarian organizations has received significant attention from the humanitarian community (see Balcik et al., 2010; Dubey et al., 2019; Ruesch et al., 2022); yet coordination in the humanitarian supply chain context is still not well understood. Future research should explore the fit between different types of coordination and humanitarian supply chain strategies. Secondly, following the Haiti earthquake, the use of technology in the humanitarian aid supply chain has received significant attention (Besiou & Van Wassenhove, 2020); however, it still faces enormous challenges (Dubey, 2022), and future studies must help address technology and human interaction issues. Thirdly, innovation in the humanitarian supply chain has recently played a significant role in tackling the most complex humanitarian crises (Kovács & Falagara Sigala, 2021), yet such innovation is not well understood; we believe future studies must help address this research gap. Finally, the role of leadership has been recognized as an important driver in shaping the humanitarian aid supply chain (Salem et al., 2019; Dubey et al., 2021). However, leadership styles differ across situations, and the humanitarian supply chain literature has largely remained silent on this front, with some exceptions (see Salem et al., 2018, 2019; Dubey, 2022). There is a clear research gap that needs to be addressed to understand how different leadership styles can help tackle complex humanitarian relief operations (Fig. 5).
Concluding remarks
There is enormous interest in the design and management of humanitarian supply chains among the operations and supply chain management community. Yet the design of the humanitarian supply chain is not well understood. Hence, to bridge the potential gaps that exist in theory and practice, we accepted articles on various aspects of humanitarian supply chain design and management (see Table 1). Further, we have noted some potential research gaps that may help future scholars shape their research.
Fig. 5 Research opportunities
Acknowledgements: I would sincerely like to express my thanks to Editor-in-Chief Professor Endre Boros for giving me the opportunity to organize a special issue on this interesting topic. Moreover, I am thankful to Ann Pulido for her extensive support from the preparation of the call for papers to the final preparation of the editorial note; during the pandemic we all experienced quite a hard time, and I am grateful for her timely support. Third, the role of reviewers is highly important in scientific publications, and without the support of the reviewers this special issue would not have been possible.

Publisher's Note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Incompetent memory immune response in severe COVID-19 patients under treatment
Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2)-associated coronavirus disease 2019 (COVID-19) has affected millions of people worldwide and was declared a Public Health Emergency by the World Health Organization (WHO) on January 30, 2020. Although unprecedented efforts have been made by the scientific community to understand the pathophysiology of COVID-19, the host immune and inflammatory responses are not well explored in the Indian population. The continuous arrival of new variants has prompted scientists to understand the host immune processes in order to eradicate this deadly virus. The aim of this study was to assess the helper and cellular host immune responses, including memory and activated cell subsets, of COVID-19 patients admitted to the intensive care unit (ICU) at different time intervals during treatment. PBMCs separated from nine patients with SARS-CoV-2 infection were incubated with fluorescent-conjugated antibodies and acquired on a flow cytometer to analyze the T and B cell subsets. The results in COVID-19 patients versus healthy volunteers were as follows: elevated helper T cells (57.4% vs 44.9%); low cytotoxic T cell (42.8% vs 55.6%) and activated T cell (17.7% vs 21.2%) subsets. Both TREG (40.15% vs 51.7%) and TH17 (13.2% vs 24.6%) responses were substantially decreased, and high expression of TREG markers was observed in these patients compared with controls.
Introduction
The COVID-19 pandemic caused by severe acute respiratory syndrome coronavirus-2 (SARS-CoV-2) emerged first in Wuhan in December 2019 and has rapidly spread globally [1,2]. It was declared a Public Health Emergency of International Concern by the World Health Organization (WHO) on January 30, 2020 [3]. As of March 2022, a total of 4,30,20,723 laboratory-confirmed cases were identified in India, with 5,21,035 fatalities (1.21%), according to data from Indian government official reports [4]. Although the virological, epidemiological, and clinical patterns and management outcomes of COVID-19 patients have been defined [1,5-9], the host immune profile and inflammatory responses are not well explored in the Indian population.
A wide variety of clinical manifestations occurs in COVID-19 patients, varying from mild to severe disease causing acute respiratory distress syndrome (ARDS) and death [1,7-9]. Inflammation is alleged to buttress severe COVID-19, owing to the peaked levels of proinflammatory cytokines such as interleukin-1 beta (IL-1b) and tumor necrosis factor alpha (TNF-a) in severe COVID-19 patients [1]. However, the host response is dynamic and can oscillate extensively from one day to another; whether such variability is important to COVID-19 pathogenesis remains unknown.
Fatalities were higher in the elderly and in individuals with co-morbidities [10]; however, during the second wave of SARS-CoV-2 the maximum adverse impact was observed in younger adults in India. Though inflammation and host immune responses are important for curtailing the virus, a dysregulated response plays a key role in pathogenesis. Several studies have shown an induced inflammatory response and elevated pro-inflammatory cytokine and chemokine responses, which lead to severe COVID-19 manifestations [11,12]. It is well known that CD8+ T cells and NK cells are key subsets in preventing viral replication; lymphopenia and a gradual fall in CD8+ T cells can be detrimental in COVID-19 patients [13].
New variants like Delta plus and Omicron, which are highly contagious in nature, are currently being reported in several parts of India. Understanding the host immune process is therefore important to further comprehend the pathophysiology of this viral agent. Very little is known about the impact of different lymphocyte subsets on the immune response of patients with COVID-19 and its consequences. This study was planned to assess the inflammatory and immunological parameters and the expression of cell surface markers on lymphocyte subsets by flow cytometry in severe COVID-19 patients admitted to the intensive care unit (ICU) of a tertiary care hospital in New Delhi during the first wave.
Patients and sample collection
The study included nine patients with COVID-19 disease admitted to the intensive care unit (ICU) of the tertiary care hospital between October and December 2020, and five healthy volunteers as controls. All the patients were diagnosed with SARS-CoV-2 infection confirmed by RT-PCR as per the ICMR guideline, performed at the dedicated COVID-19 testing laboratory of the hospital. The study was approved by the Institutional Ethics Committee (IEC-HR) with reference number GTBHEC 2021/P-149. The study population, including the healthy volunteers, was unvaccinated, as the COVID-19 vaccination launch was still awaited in India, and had no history of previous SARS-CoV-2 infection. The criteria used for admitting COVID-19 patients to the ICU were as per the CLINICAL MANAGEMENT PROTOCOL: COVID-19, MoH&FW (EMR division), Govt. of India [14]. Detailed history, clinical findings, and relevant laboratory investigations of patients were collected at the time of enrollment in a pre-designed case record form.
Five ml of venous blood was collected from the patients and age-matched healthy controls in EDTA vials for the preparation of peripheral blood mononuclear cells (PBMCs), which were used for screening of molecular markers, and in plain vials for serum separation. The samples were collected on the day of admission (zero day), followed by the 3rd day and 6th day of ICU admission, after obtaining informed written consent. Serum was used for the detection of IL6 and ferritin by chemiluminescence immunoassay (CLIA) on an IMMULITE® 2000 XPI analyzer (Siemens).
PBMCs separation: PBMCs were isolated from peripheral venous blood by the Ficoll-Paque density gradient method. Briefly, whole blood was diluted with RPMI-1640 (Gibco, Life Technologies), gently overlaid on Ficoll reagent, and centrifuged at 1000 rpm for 20 min at room temperature. The cloudy interface was collected carefully, transferred to a fresh tube, and washed twice with ice-cold phosphate-buffered saline (PBS; HiMedia, India). The viability of the cells was confirmed by the Trypan Blue dead-cell exclusion assay under a light microscope using a Neubauer chamber.
Staining with fluorochrome-labeled monoclonal antibodies and flow cytometry: PBMCs were stained immediately with four panels of fluorochrome-conjugated anti-human monoclonal antibodies, and Th17, Treg, T, and B immune cells were studied by flow cytometry. Firstly, PBMCs were incubated with 100 μl of Fc block reagent at room temperature for 10 min; then 1 × 10⁶ cells were resuspended in 25 μl of ice-cold staining buffer and incubated with anti-human monoclonal antibodies against CD3, CD4, CD25, GITR, CD122, CD127, CD152, CCR4, CCR7, IL23R, CD161, HLA-DR, CD45RO, CCR5, CD8, and CD20 (BD Biosciences, CA, US) at room temperature for 20 min in the dark (Table 1). Cells were washed with PBS buffer and resuspended in PBS with 2% paraformaldehyde. Samples were acquired on a FACSAria III (BD Biosciences, USA) immediately or within 24 h of staining, and at least 30,000 events were recorded for each sample. Area versus height data were recorded on the FSC and SSC scales for doublet discrimination. Single-positive controls and an unstained control were used to analyze the data and to exclude the noise population. Briefly, PBMC cells were targeted, followed by singlet selection, and then helper T cells (CD3+CD4+) were gated from total T cells (CD3+). Data were analyzed with FlowJo software (BD Biosciences, CA, US) and the results are presented as percentage positivity.

Data analysis: The data are described as mean values and ranges. All analyses and bar and line diagrams were generated in MS Excel version 19. Figures for flow cytometry were plotted in FlowJo_v10 software (BD Biosciences, CA, USA).

Table 1. Details of the fluorochrome dyes allocated to anti-human monoclonal antibodies against cell surface markers used to study different lymphocyte subsets in COVID-19 patients.
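As an illustration of the gating arithmetic behind the reported percentage positivity, a minimal sketch follows, assuming a compensated event table with one column per channel. The channel names, thresholds, and the synthetic events are hypothetical stand-ins for gates that were actually drawn in FlowJo.

```python
import numpy as np
import pandas as pd

# Hypothetical compensated event table; names and cut-offs are illustrative.
rng = np.random.default_rng(1)
events = pd.DataFrame({
    "FSC_A": rng.normal(50_000, 8_000, 30_000),
    "FSC_H": rng.normal(48_000, 8_000, 30_000),
    "CD3":   rng.lognormal(7, 1, 30_000),
    "CD4":   rng.lognormal(6, 1, 30_000),
})

# Singlet gate: FSC area and height are roughly proportional for single cells.
singlets = events[np.abs(events.FSC_A / events.FSC_H - 1.04) < 0.15]

# Fluorescence cut-offs (in practice set from unstained/single-stain controls).
CD3_POS, CD4_POS = 1_500.0, 800.0

cd3 = singlets[singlets.CD3 > CD3_POS]        # total T cells
cd3_cd4 = cd3[cd3.CD4 > CD4_POS]              # helper T cells

# Percentage positivity, as reported in the paper's tables.
pct_helper = 100.0 * len(cd3_cd4) / len(cd3)
print(f"CD3+CD4+ helper T cells: {pct_helper:.1f}% of CD3+ events")
```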
Demographics and baseline characteristics of COVID-19 patients
A total of nine patients with COVID-19 disease confirmed by RT-PCR, admitted to the intensive care unit (ICU) during the study period, were enrolled in this study. All patients had severe COVID-19 illness; six patients were mechanically ventilated and three others were managed with either non-invasive ventilation or high-flow nasal oxygen therapy. The demographics and baseline characteristics of these patients are shown in Tables 2 and 3 and Figure 1A. With a median age of 53 years (45-76), 44% of the patients were male. Diabetes mellitus was the most common comorbidity, seen in 66.6% of patients, followed by hypertension (55%). Fever, breathlessness, cough, diarrhea, and nausea were the most frequent presenting symptoms.
Parameters of routine assessments and laboratory investigations were significantly deranged in all patients, who were categorized as severe COVID-19 patients, with high respiration rates during the ICU stay, as shown in Table 3. Routine investigations showed that total leukocyte count (TLC), liver function tests, kidney function tests, and serum electrolytes were raised in all these patients at the different time points, i.e., zero day, 3rd day, and 6th day, as shown in Table 3 and Figure 1A. Levels of urea, ferritin, and interleukin-6 were significantly high in these patients. All patients received systemic corticosteroids, Remdesivir, and low-molecular-weight heparin, as per the institutional management protocol. The stay in ICU was for a median of 18 days (14-23) and all patients succumbed to the illness.
Immune status of CD4 + and CD8 + T lymphocyte subsets in patients of COVID-19
Flow cytometric analysis revealed changes in the numbers of total CD3+ T cells, CD3+CD4+ helper T cells, CD3+CD8+ cytotoxic T cells, and CD20+ B cells in COVID-19 patients, as shown in Table 4, Figure 1B, 1C and 2(A-D). Analysis of the CD3+CD4+ and CD3+CD8+ subsets demonstrated a significant elevation in helper T cells, which remained increased during treatment in severe COVID-19 patients compared to healthy controls, while cytotoxic T cells were significantly diminished. This resulted in a raised CD4+/CD8+ ratio in COVID-19 patients, which became more prominent during the course of treatment compared to healthy controls. Further, an elevated percentage of CD3+CD4+CD45RA+ naïve helper T cells (TN) was observed in COVID-19 patients compared to healthy controls. Over the course of treatment TN cells became inflated, but the expression of CCR7 and CCR4 remained significantly low throughout the treatment received in the ICU. Furthermore, analysis of CD45RO expression (TEM) on CD3+, CD4+, and CD8+ T cells revealed an initially heightened memory response, up to 7-10 days of ICU stay, in severe COVID-19 patients, which later fell below the normal range (Figure 2B, 2C and 3A). In SARS-CoV-2-infected patients, evaluation of T cell activation showed statistically lower activation compared to healthy controls, as indicated by the percentage of HLA-DR+CD8+ T cells. The decline in CD3+CD4+HLADR+ helper T cells and CD3+CD8+HLADR+ cytotoxic T cells in severe COVID-19 patients was consistent and persisted throughout their ICU stay (Figure 3B).
TREG and TH17 subsets in patients of COVID-19
Compared with healthy controls, in SARS-CoV-2 infection there was a significant decline in CD3+CD4+CD25hiCD127lo TREG cells in patients at all time points. Additionally, significantly higher expression of cytotoxic T lymphocyte-associated protein-4 (CTLA-4 or CD152), CD122, and GITR was observed in COVID-19 patients, together with reduced expression of CCR4 and CCR7 on regulatory T cells, as shown in Table 4 and Figures 1D and 3A. Again, CD3+CD4+CD161+ TH17 cells were significantly reduced in severe COVID-19 patients versus controls. IL23R- and HLA-DR-expressing TH17 cells were recorded in low numbers in COVID-19 patients compared to controls and remained low at all time points of the study, as shown in Figures 1E and 3B.
Immune status of B lymphocyte subsets in patients of COVID-19
In COVID-19 patients, no significant change was recorded in the numbers of CD20+ B cells during treatment. Further, analysis of CD45RA and CD45RO expression showed that the numbers of CD45RA+CD20+ naïve B cells remained relatively unaltered, although higher numbers of CD45RO+CD20+ memory B cells were recorded in COVID-19 patients; later during the treatment, memory B cells declined below the normal range, as shown in Figures 1C and 2D.
Discussion
SARS-CoV-2, a member of the coronavirus family responsible for a global pandemic, started from Wuhan in December 2019 and spread worldwide [1]. To understand the immune response patterns of patients infected with SARS-CoV-2, we studied 9 cases of severe COVID-19 hospitalized in the ICU of a COVID-dedicated tertiary care hospital in Delhi. For immune profiling of T cells and B cells, we performed flow cytometry to observe the sequential immune changes during the infection. Additionally, clinical data were retrieved to understand the association between immune responses to SARS-CoV-2 and disease pathogenesis. Inflammatory biomarkers like interleukin-6 and CRP were found raised in severe COVID-19 patients. The immune responses observed in COVID-19 patients, with their myriad of cytokines and chemokines, lead to fatal cytokine storms and mortality [15]. The host immune response is critically regulated by immune cells like CD3+ T cells, CD4+ helper cells, CD8+ cytotoxic cells, CD20+ B lymphocytes, and their subsets. These cells constitute the host humoral and cell-mediated immunity against infections, including viral agents. Dysregulation of the phenotypes of these lymphocytes results in the pathogenesis of COVID-19 disease [16,17]. SARS-CoV-2 causes a rapid decline in the T cell population, which results in lymphopenia and disease progression; hence, understanding the T cell response to SARS-CoV-2 is critical to give insight into the management of severely ill COVID-19 patients. The participation of T cells in establishing long-lasting protective immunity against reinfection and the relevance of cross-reactive cellular immunity in future outbreaks are other important aspects of the T cell response that need to be explored [3].

The helper T cells are a key component of the host immune response in any disease condition. Further, the differentiation of the naïve helper T cell population into effector and memory subsets is one of the most fundamental facets of T cell-mediated immunity; thereby, the balance between naïve and memory CD4+ T cells is crucial for maintaining an efficient immune response. Our study examined CD3+CD4+ helper T cells, CD3+CD8+ cytotoxic T cells, CD20+ B cells, and their subsets, with further HLA-DR+ activated and CD45RO+ memory subsets, in order to delineate the underlying mechanism and pathogenesis of COVID-19 disease in patients with severe manifestations. We enrolled nine COVID-19 patients admitted to the ICU; at all time points, the T cell response was represented by reduced CD8+ cytotoxic T cells and increased numbers of CD4+ helper T cells, indicating an impairment of the protective immune response during SARS-CoV-2 infection in these patients. An increase in the CD4+/CD8+ ratio indicated a poor treatment response in severe COVID-19 patients, while no significant alteration was seen in the B cells of COVID-19 patients. These results are in agreement with previous studies: Cui et al. [18] reported reduced CD8+ T cells in 87% of patients with SARS, while Mazzoni et al. [19] found increased CD4+ T cells in COVID-19 patients. Similarly, several others have reported reduced CD8+ cells in severe COVID-19 illness compared to healthy controls, suggesting an underlying uncontrolled inflammatory response in such viral infections [20-22]. Lymphopenia is commonly triggered by virus attachment or indirectly by immune injury from inflammatory mediators. Further, to explore the role of CD4+ and CD8+ T subsets in the pathogenesis of severe COVID-19, the expression of naïve (CD45RA), memory (CD45RO), and activation (HLA-DR) markers was analyzed. Additionally, Th17- and Treg-associated markers, which are largely unexplored, were investigated in SARS-CoV-2-infected patients. The activated cell subsets, CD3+HLADR+, CD4+HLADR+, and CD8+HLADR+ cells, were exclusively low and declined further during the course of illness in COVID-19 patients, while the memory subsets, CD3+CD45RO+, CD4+CD45RO+, and CD8+CD45RO+, were high during the early course of illness and up to 10 days; however, during the later course of illness the numbers fell below the normal range of healthy controls. Similar results were reported by Qin et al. and Zhou et al., with reduced memory cells and activated T cells, respectively, in severe COVID-19 patients in China [23,24]. In contrast, a few studies have also reported induced activation of helper and cytotoxic T cells in COVID-19 patients compared to controls [23]. A memory B cell response (CD20+CD45RO+) was also achieved within a week, sustained for up to 2 weeks at most, and then declined in severe COVID-19 patients. Both B and T cell memory responses could not be maintained, conceivably due to the excessive reduction of CD8+ T cells in severe COVID-19 cases during the 18 days of ICU stay. With the persistence of COVID-19 disease, CD8+ T cells continued to decline compared to healthy controls. Hence, the results indicated that a declined host activation response with a temporary, early induced memory response in SARS-CoV-2-infected ICU patients is indicative of a poor outcome.
Th17 and Treg balance plays a critical role in maintaining immune homeostasis and the equilibrium between proinflammatory and suppressive host immune responses [25]. In our study we observed low CD4+CD25+ regulatory T cells (Tregs) and low proinflammatory helper T17 (Th17) cells in severe COVID-19 patients compared to controls [23]. Furthermore, analysis of surface molecules confirmed that although the number of Tregs decreases in severe COVID-19 patients and during treatment, their functional suppressive potential is high, with high expression of CD152 (CTLA4), CD122, and GITR. CD25+ helper T cells expressing CD122 are responsible for IL2 signal transduction and the activation of NK, B, and T cells [25]. Likewise, the pro-inflammatory CD4+CD161+ Th17 response was excessively low, and this reduction became more prominent with the ICU stay. Also, the expression of the molecular markers IL23R and HLA-DR on Th17 cells was decreased, which indicated that Th17 cells were functionally exhausted: low IL23R expression results in the absence of an IL23/IL23R-induced Th17 response, and poor expression of HLA-DR indicates an inability to mount a Th17 response. This dysregulated Th17/Treg balance paved the way to COVID-19 disease progression in severe COVID-19 patients. Though patients infected with SARS-CoV-2 initially have a high effector memory helper (TEM) response, the study of the expression of other molecules, like CCR7 and CD45RA, indicated an increased proportion of naïve helper T cells (TN) in severe COVID-19 patients and reduced CCR7-expressing central memory T cells (TCM) compared with HCs in SARS-CoV-2 infection.
The older age group is vulnerable and at higher risk owing to immune-senescence and poor lymphopoiesis due to higher IL6 production. Cui et al. observed significant peripheral depletion of T cells, including HLA-DR+, CD45RO+, and effector T cell subsets. The recovery of the T cell population is a cardinal component for harnessing recovery in patients with severe COVID-19 disease. The memory T and B cell responses were not so profound in severe COVID-19 patients.
There are limitations in our study which might introduce some potential bias. It was a single-center, small-sample study of patients admitted to the ICU of the hospital. Secondly, patients with secondary infections might affect the immune response of COVID-19 patients. Hence, data from a larger cohort of patients would be beneficial to assess the sequential changes in the immune responses after infection with SARS-CoV-2. Nevertheless, our study provides considerable novel information about the host immune response in COVID-19 patients, namely that SARS-CoV-2 might act on lymphocytes, particularly T lymphocytes, inducing a cytokine storm during early infection and a sequence of immune responses which eventually damage the host organs. Therefore, early screening of these specific parameters during critical illness is supportive in the diagnosis and treatment of COVID-19.
The relationship of disease severity in COVID-19 patients is multi-factorial. It appears that adaptive immunity during SARS-CoV-2 infection is dysregulated, with a high CD4:CD8 ratio suggesting a poor effector T cell response. High IL6 levels with reduced T and B cell responses and associated lymphopenia mark a deadly progression of COVID-19 disease. Excessive cytokine levels lead to severe inflammation, lung injury, and ARDS, and hence play a role in the progression of the disease.
Figure 1. Diagram showing the clinical characteristics and lymphocyte subsets in severe COVID-19 patients. [1-A] Bar diagram of the various clinical parameters on the day of admission (zero day), followed by the 3rd day and 6th day of ICU admission during treatment. [1-B] Line diagram of the naïve, memory, and activated cell subsets of helper and cytotoxic T cells in COVID-19 patients with respect to healthy volunteers at different time points of ICU treatment. [1-C] Line diagram of the naïve and memory B cell subsets. [1-D] Line diagram of Treg cells and the molecular markers CD122, CD152, GITR, CCR4, and CCR7 expressed on Treg cells in COVID-19 patients compared to controls at all points of ICU treatment. [1-E] Line diagram of Th17 cells and the molecular markers IL23R and HLA-DR expressed on Th17 cells in patients compared to controls during ICU treatment.
Figure 2. Flow cytometry plots of the T and B cell subsets in FlowJo software. [2-A] Gating strategy to select CD3+CD4+ helper T cells after selecting singlets. [2-B] Staggered histogram plot of naïve, memory, and activated helper T cells at the different time points of ICU treatment, with respect to healthy controls. [2-C] Staggered histogram plot of naïve, memory, and activated cytotoxic T cells in COVID-19 patients. [2-D] Staggered histogram plot of naïve and memory B cells. The green plot corresponds to healthy controls (HC); the red plot to COVID-19 patients on the zero day of ICU admission; the orange plot to the 3rd day of admission; and the pink plot to the 6th day of admission.
Figure 3. Flow cytometry plots of Treg and Th17 cells and their subsets in FlowJo software. [3-A] Half-offset histogram plot of Treg cells and subsets expressing CCR4 and CCR7 at different time points of ICU treatment; staggered histogram plots were generated for the CD122-, CD152-, and GITR-positive Treg subsets for better data presentation. [3-B] Half-offset histogram plots of Th17 cells, activated Th17 cells, and the IL23R-positive subset. The green plot corresponds to healthy controls (HC); the red plot to COVID-19 patients on the zero day of ICU admission; the orange plot to the 3rd day of admission; and the pink plot to the 6th day of admission.
Table 2. Details of subjects recruited in this study.
Table 3. Demographic and baseline characteristics of COVID-19 patients at different time points.
Table 4. Percentage positivity of different T and B cell subsets of COVID-19 patients at different time points of treatment. Data are presented as mean values.
Magnetic structure and local lattice distortion in giant negative thermal expansion material Mn3Cu1−xGexN
Magnetic and local structures in an antiperovskite system, Mn3Cu1−xGexN, with a giant negative thermal expansion have been studied by neutron powder diffraction measurements. We discuss (1) the importance of an averaged cubic crystal structure and a Γ5g antiferromagnetic spin structure for the large magneto-volume effect (MVE) in this itinerant electron system, and (2) the unique role of a local lattice distortion, well described by the low-temperature tetragonal structure of Mn3GeN, in the broadening of the MVE.
Introduction
Negative thermal expansion (NTE) materials are already used in a wide range of technical applications in which control of thermal expansion is desperately needed [1,2]. The NTE occurs as a result of the gradual volume expansion accompanying magnetic ordering, the so-called magneto-volume effect (MVE). The MVE of itinerant electron systems has been investigated since the discovery of the extraordinarily small thermal expansion in Invar alloys [3]. Antiperovskite manganese nitrides Mn3AN, where A is a metal or a semiconducting element, are well known for their large MVE [4]. However, this system had not been considered a practical NTE material, because all the MVEs reported for Mn3AN members were associated with first-order phase transitions. Recently, Takenaka and Takagi reported that the MVE is broadened against temperature T in Mn3Cu1−xGexN and leads to a giant negative thermal expansion coefficient [5,6,7]. At x ∼ 0.5, the linear thermal expansion ∆L/L is almost linear in T in the temperature range 270 ≤ T ≤ 350 K. The large negative coefficient of linear thermal expansion, α ≈ −2 × 10−5/K, is the largest value among all NTE materials. Clarification of the microscopic mechanism of the MVE may provide a useful guideline for designing NTE materials with better performance. In particular, clarifying the mechanism of the broadening of the MVE (the Invar problem) has been a challenge in solid state physics for over a century. We have studied the magnetic and local structures in Mn3Cu1−xGexN using neutron powder diffraction [8,9]. In this paper, we review the present understanding of the Ge-doping effect on the magnetic properties and MVE in Mn3Cu1−xGexN. The T-dependence of the lattice constants is consistent with the linear thermal expansions previously reported by Takenaka and Takagi [5]. Both the 1 0 0 magnetic reflection and the lattice constant exhibit sharp increases at x = 0.15 with decreasing T. For x = 0.5, they gradually increase with decreasing T in the temperature range from 360 to 320 K. The magnetic reflection intensity grows in a progressively wider T range with increasing Ge content. The width of the magnetic peaks is nearly resolution-limited, suggesting long-range Néel ordering. Based on the neutron and NMR results [9], we conclude that the systems with a gradual volume change exhibit a gradual change in the magnitude of the ordered moments.
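For orientation, the coefficient α quoted above is just the slope of the ∆L/L versus T curve in the linear NTE window. A minimal sketch of this estimate follows, using made-up dilatometric points that merely mimic the roughly linear contraction reported for x ∼ 0.5.

```python
import numpy as np

# Hypothetical (T, dL/L) points in the NTE window 270-350 K.
T = np.array([270.0, 290.0, 310.0, 330.0, 350.0])              # K
dL_over_L = np.array([0.0, -4.0e-4, -8.0e-4, -1.2e-3, -1.6e-3])

# Linear coefficient of thermal expansion: slope of dL/L versus T.
alpha = np.polyfit(T, dL_over_L, 1)[0]
print(f"alpha = {alpha:.1e} /K")   # about -2e-5 /K for these numbers
```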
Magnetic structure
Ge-doped samples have a cubic structure (space group Pm-3m) and magnetic ordering vector q = (0 0 0). On the basis of these conditions, three possible models were proposed by Fruchart and Bertaut [4]. Among the three magnetic structures, the so-called Γ5g antiferromagnetic (AF) structure, shown in the inset of Fig. 1, has a large intensity for the (1 0 0) reflection. The intensities of all reflections calculated assuming the Γ5g AF structure reproduce the observed intensities.
Let us now look at other antiperovskite materials. Mn3GaN and Mn3ZnN are well known for their large MVEs [10,11]. Their volumes show a sudden and pronounced increase with decreasing temperature at the first-order transition, and they exhibit the Γ5g antiferromagnetic structure in the cubic crystal structure below the phase transition temperature [4]. The MVE of an itinerant electron system has so far been discussed in terms of the amplitude of the magnetic moment. However, the intimate relationship between the Γ5g antiferromagnetic cubic structure and the large MVE in Mn3Cu1−xGexN indicates the necessity of a new theoretical framework for the MVE, in which the ordered magnetic structure is taken into account.
Local structure
The technological essence of NTE in Mn3AN is the discovery of Ge and Sn dopants that broaden the volume change. The strong preference for Ge and Sn in broadening the MVE in Mn3AN, revealed in previous studies [5,6,7], indicates the importance of a local distortion caused by the atomic and/or chemical characteristics of the dopants. We have therefore investigated the local structure by atomic pair distribution function (PDF) analysis [12,13]. Although the overall crystal structure remains cubic for 0.15 ≤ x ≤ 0.7 over the whole T region, the PDF interestingly shows considerable deviation of the short-range local structure from the average cubic structure. Figure 2(a) displays the experimental PDF obtained for x = 0.15, 0.5 and 0.7 at 300 K. Negative peaks at ∼1.9 Å and ∼2.6-3.0 Å correspond to atomic pair bond distances involving Mn atoms, because the Mn nucleus has a negative neutron scattering length. The first negative peak comes from the Mn-N correlation in the Mn6N octahedra, while the second comes mainly from the negative Mn-Cu (or Ge) contribution. The second negative peak, around 2.8 Å, is much wider than the first peak and has a double-peak structure, as shown in the figure, for all samples. In the ideal cubic structure, the second peak should be as sharp as the first peak at ∼1.9 Å. The observed PDF thus provides clear evidence for a local distortion of the Mn-Cu (or Ge) correlation against the relatively rigid Mn-N bonds. The observed local distortion is related to the low-temperature tetragonal structure of Mn3GeN, which shows a transition from the high-T cubic to the low-T tetragonal T4 (I4/mcm) phase at Tt ∼ 540 K [14,15]. The transition primarily involves alternate rotation of the Mn6N octahedra, as shown in Fig. 2(b). Through this rotation, short and long Mn-Cu(Ge) bonds are generated, resulting in the splitting of the second atomic pair correlation peak.
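How the rotation splits a single Mn-Cu(Ge) bond length into short and long distances can be seen in a toy calculation on one octahedron of an ideal cubic cell. The sketch below ignores the cell doubling and the cooperative, alternating sense of rotation of the real I4/mcm structure; it only illustrates the geometry of the splitting, with all distances in units of the cubic lattice constant a.

```python
import numpy as np
from itertools import product

def mn_a_distances(theta_deg, a=1.0):
    """Toy cubic antiperovskite cell (A at corners, N at the body centre,
    Mn at the face centres).  Rotate the four equatorial Mn of one Mn6N
    octahedron by theta about the c axis through N and return the sorted
    set of nearest Mn-A bond lengths (in units of a)."""
    th = np.radians(theta_deg)
    rot = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
    mn_eq = np.array([[0.0, 0.5, 0.5], [1.0, 0.5, 0.5],
                      [0.5, 0.0, 0.5], [0.5, 1.0, 0.5]])
    mn_eq[:, :2] = (mn_eq[:, :2] - 0.5) @ rot.T + 0.5   # rotate in-plane Mn
    corners = np.array(list(product([0.0, 1.0], repeat=3)))  # A sites
    d = np.linalg.norm(mn_eq[:, None, :] - corners[None, :, :], axis=-1) * a
    near = np.sort(d.ravel())[:16]          # 4 Mn x 4 nearest A corners
    return np.unique(np.round(near, 4))

print(mn_a_distances(0.0))   # single Mn-A distance a/sqrt(2) ~ 0.7071
print(mn_a_distances(4.5))   # splits into short and long Mn-A bonds
```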
From the structure refinement with the T4 model, we found that the rotation angle θ of the Mn6N octahedra is a good indicator of the broadening of the MVE. At 300 K, the θ values are in the order 2.3(3) (x = 0.15) < 4.1(2) (x = 0.5) < 4.6(3) (x = 0.7). In Mn3Cu1−xGexN, θ systematically increases with increasing Ge doping level x, corresponding to the increase of the splitting width of the second negative peak shown in Fig. 2(a). The local octahedral rotation angle clearly correlates with the T dependence of the ordered magnetic moment, and the observed structural instability seems to lead to an instability of the amplitude of the magnetic moment. For the Invar alloy system, the Invar effect, or an instability of the amplitude of the magnetic moment, also appears near the phase boundary between the fcc and bcc phases [16]. So far, the variation of the magnetic phase transition has been discussed on the basis of spatially averaged information. The present study provides the first experimental result emphasizing the importance of the local structure to the related Invar problem.
Summary
The system with the Γ5g antiferromagnetic cubic structure exhibits a large MVE in Mn3Cu1−xGexN. The present results establish a new MVE paradigm that will require a theoretical framework taking the ordered magnetic structure into account. A local T4 structure within the average cubic phase appears as Ge substitution proceeds. This strongly suggests that the gradual growth of the magnetic moment leading to the broadening of the MVE is related to the structural instability between the cubic and tetragonal phases induced by Ge substitution.
Hepato-biliary profile of potential candidate liver progenitor cells from healthy rat liver
AIM: To evaluate the presence of progenitor cells in healthy adult rat liver displaying the equivalent advanced hepatogenic profile as that obtained in human. METHODS: Rat fibroblastic-like liver derived cells (rFLDC) were obtained from collagenase-isolated liver cell suspensions and characterized, and their phenotype profile was determined using flow cytometry, immunocytochemistry, reverse transcription polymerase chain reaction, and functional assays. RESULTS: rFLDC exhibit a fibroblastoid morphology and express mesenchymal (CD73, CD90, vimentin, α-smooth muscle actin), hepatocyte (UGT1A1, CK8), and biliary (CK19) markers. Moreover, these cells are able to store glycogen and have glucose-6-phosphatase activity, but not UGT1A1 activity. Under the hepatogenic differentiation protocol, rFLDC display an up-regulation of hepatocyte marker expression (albumin, tryptophan 2,3-dioxygenase, G6Pase) correlated with a down-regulation of the expression of the biliary marker CK19. CONCLUSION: The advanced hepatic features observed in human liver progenitor cells could not be demonstrated in rFLDC. However, we demonstrated the presence of an original rodent hepato-biliary cell type, which is able to acquire some hepatic characteristics in the presence of differentiation medium. Applications: Isolation and characterization of progenitor/stem cells would be very useful to assay the in-vivo efficacy of liver mesenchymal progenitor cells in syngeneic animal models of liver metabolic diseases, particularly in the Gunn rat, a model of hyperbilirubinemia. Peer review: This study shows that the advanced hepatic features of human liver progenitor cells have not been demonstrated in rFLDC. Although it strengthens the unique specificity of these human liver progenitor cells, it also shows that homologous models for cell therapy cannot easily be developed even when the same isolation and culture protocols are applied. The authors should make a comparison of their cells with human
INTRODUCTION
Liver transplantation is considered to be the standard treatment for end-stage liver diseases. Unfortunately, clinical applications are restricted by the scarcity of organs and uncertainty about the very long-term success of the procedure.
In recent years, liver cell transplantation using hepatocytes was successfully performed in patients with inborn errors of metabolism as an alternative, or at least as a bridge, to orthotopic liver transplantation [1-5]. However, the success of such a therapeutic approach remains limited by the quality of the transplanted cells. In fact, cryopreservation procedures induce significant alterations at the morphological and functional levels of thawed hepatocytes [6,7].
To overcome these problems, several approaches to isolate and propagate liver stem or progenitor cells have been developed. In our laboratory, Najimi et al [8] isolated adult-derived human liver stem/progenitor cells (ADHLSC) with a hepato-mesenchymal profile. Under specific hepatogenic conditions, these cells exhibit hepato-specific functions such as glycogen storage, gluconeogenesis, urea synthesis, and glucuronoconjugation, as well as pharmacologic properties such as phase I and II enzyme activities [9]. These cells are also able to specifically engraft and differentiate into mature human hepatocytes in mouse liver parenchyma [8].
Preclinical studies using homologous animal models of human liver metabolic diseases are attractive. It is therefore a prerequisite to obtain homologous cells from syngeneic animals to perform such studies. The relevance of using human progenitor cells in immunosuppressed animal models is indeed questionable.
In this context, we evaluated the presence of a liver progenitor cell in adult rat liver that would express the same specifications as the previously reported human progenitor cell, referred to as ADHLSC.
In the current study we isolated and characterized rat fibroblastic-like liver derived cells (rFLDC) from healthy adult rats. Characterization included proliferation rate, phenotype, genotype and hepatic-specific functional assays.
Rat fibroblastic-like liver derived cells
Five male Wistar rats weighing approximately 200 g were purchased from UCL Animalerie Centrale (Brussels, Belgium) and treated in accordance with the internal Animal Ethics and Welfare Committee (UCL/MD/2009/003).
We isolated rat liver parenchymal cells by a two-step collagenase A (1100 units/L) (Roche, Mannheim, Germany) perfusion procedure according to the Seglen method [10]. We then obtained a hepatocyte-enriched cell fraction following low-speed centrifugation (160 r/min for 3 min).
Population doubling (PD) was evaluated after each passage using the following equation: PD = log(harvested cells/seeded cells)/log 2. Cumulative population doubling (CPD) was calculated as the sum of the PDs over all passages.
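A minimal sketch of this bookkeeping follows; the cell counts are invented for illustration only.

```python
import math

def population_doubling(seeded, harvested):
    """PD = log2(harvested / seeded) for one passage."""
    return math.log(harvested / seeded) / math.log(2)

# Hypothetical (seeded, harvested) counts over three passages:
passages = [(1.0e5, 4.2e5), (1.5e5, 5.1e5), (1.5e5, 4.8e5)]
pds = [population_doubling(s, h) for s, h in passages]
cpd = sum(pds)   # cumulative population doubling
print([f"{pd:.2f}" for pd in pds], f"CPD = {cpd:.2f}")
```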
At passages 2, 4 and 8, cells were analyzed using reverse transcription polymerase chain reaction (RT-PCR), immunocytochemistry and flow cytometry.
Bone marrow mesenchymal stem cells
We obtained bone marrow from Wistar rats by flushing the femur and tibia with ice cold phosphate-buffered saline (PBS) (Lonza, Verviers, Belgium) and isolated the cell fraction using Ficoll (GE Healthcare, Uppsala, Sweden) density gradient centrifugation at 340 r/min for 30 min.
Cells were then resuspended in α-MEM (Invitrogen) supplemented with 10% FBS (Perbio, Erembodegem, Belgium) and 1% P/S (Invitrogen) and seeded in 75 cm² culture flasks. We removed non-adherent cells after 1 d and then refreshed the medium every 3-4 d. When cultures had reached 80%-90% confluence, we harvested the cells with 0.05% trypsin-1 mmol/L EDTA solution and replated them at a density of 7 × 10³ cells/cm². These cells were used as the internal control in mesodermal differentiation studies.
Flow cytometry: We then washed and fixed the cells in cytofix/cytoperm (BD) until analysis with a FACSCanto Ⅱ flow cytometer (BD).
RT-PCR analysis:
We extracted total RNA from expanded or differentiated rFLDC using the TriPure isolation reagent (Roche) and synthesized cDNA with the Thermoscript RT-PCR system (Invitrogen) using 1 µg total RNA, according to the manufacturer's instructions. Rat-specific primers used for gene amplification are listed in Table 1. We thereafter electrophoresed the amplified cDNA on a 1% agarose gel (Invitrogen) followed by 0.01% ethidium bromide (Sigma) staining.
Mesodermal differentiation: At passages 0, 2, 4 and 8, rFLDC were plated at 1.5 × 10⁴ cells/cm² on six-well rat tail collagen I-coated plates. At confluency, we performed osteogenic differentiation with complete DMEM medium containing 0.1 µmol/L dexamethasone, 0.1 mmol/L ascorbate and 10 mmol/L β-glycerophosphate (Sigma). After 4 wk, calcium deposition was evidenced using alizarin red staining. For adipogenic differentiation, we incubated cells with expansion medium (complete DMEM) containing 1 µmol/L dexamethasone, 0.5 mmol/L isobutylmethylxanthine, 0.2 mmol/L indomethacin (Sigma) and 10 µg/mL insulin (Lilly). The medium was changed twice a week. After 4 wk, oil red O staining revealed the presence of lipid vesicles. As a control of mesodermal differentiation capacity, the differentiation procedure was validated with rat bone marrow mesenchymal stem cells using α-MEM complete medium.
Functional hepatic tests
Glycogen storage: Undifferentiated and differentiated rFLDC fixed with 3.5% formaldehyde (Sigma) were incubated for 10 min in 1% periodic acid (Sigma). After washing with distilled water, the cells were incubated with Schiff's reagent (Sigma) for 15 min. The preparations were then washed and mounted.
Glucose-6-phosphatase activity: We investigated glucose-6-phosphatase (G6Pase) activity in undifferentiated and differentiated rFLDC. After washing with PBS, cells were incubated for 4 h at 37 ℃ in 1.5 mL of 50 mmol/L Tris (Sigma) and 50 mmol/L maleate (Sigma) buffer (pH 6.7) solution containing 5 mmol/L glucose-6-phosphate (Sigma) and 0.03 g lead nitrate (Acros, Geel, Belgium). We obtained brownish precipitates of lead sulfide following incubation of the cells in a solution containing 0.1% ammonium sulfide (Sigma) [11]. Cells were then mounted and viewed by light microscopy (Leica DM IL, Groot-Bijgaarden, Belgium).
Bilirubin conjugation assay
Undifferentiated and differentiated rFLDC were incubated in William's medium and 1% FBS containing unconjugated bilirubin (Sigma) for 24 h and 48 h. Afterwards, we harvested the supernatant and added 2 µg/mL xantobilirubinic acid (used as internal standard: IS). We then submitted the product obtained in this reaction to an alkaline methanolysis followed by nitrogen evaporation as described by Muraca et al [12]. Precipitates were resuspended with 10 µL chloroform (Sigma) and 100 µL dimethyl sulfoxide (Sigma). We then injected ten microliters of this solution into the liquid chromatograph (Waters 515 HPLC pump) and eluted it with a C18 column (Macherey-Nagel, Düren, Germany). Elution flow started at 1 mL/min with methanol/water/tetrabutylammonium (solvent A) and ended after 11 min with methanol/ethanol/water/tetrabutylammonium (solvent B). Elution was continued for 6 min with solvent B, and the column was re-equilibrated with solvent A. The absorbance of the eluted pigments was monitored at 436 nm using a 996 photodiode array detector (Waters, Zellik, Belgium) and the area under each peak was integrated electronically (Millennium software, Waters). We calculated the concentration, in micromoles per liter, of each bilirubin fraction in samples using the following equation: (Area_pigment/Area_IS) × (IS/SV) × RF, in which IS corresponds to micrograms of internal standard added to the sample, SV to the volume of sample (mL), and RF to the response factor. Total bilirubin concentration was the sum of unconjugated and conjugated bilirubin.
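As a concrete illustration of the quantification step, the sketch below (our code; peak areas, response factors and volumes are hypothetical placeholders) applies the equation above to one unconjugated and one conjugated fraction and sums them into total bilirubin:

```python
def bilirubin_fraction_umol_l(area_pigment: float, area_is: float,
                              is_ug: float, sv_ml: float, rf: float) -> float:
    """One bilirubin fraction, umol/L: (Area_pigment / Area_IS) x (IS / SV) x RF."""
    return (area_pigment / area_is) * (is_ug / sv_ml) * rf

# Hypothetical integrated peak areas against 2 ug of internal standard in 1 mL of sample
unconjugated = bilirubin_fraction_umol_l(12500, 9800, is_ug=2.0, sv_ml=1.0, rf=1.15)
conjugated = bilirubin_fraction_umol_l(3100, 9800, is_ug=2.0, sv_ml=1.0, rf=1.22)
total = unconjugated + conjugated  # total bilirubin = unconjugated + conjugated
print(f"Total bilirubin: {total:.2f} umol/L")
```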
RESULTS

Isolation and expansion of rFLDC
An enriched population of hepatocytes obtained after collagenase A digestion and low speed centrifugation was plated on type Ⅰ collagen-coated 6-well plates.
During the first step of culture, mature hepatic cells present in the culture died due to their inability to proliferate ( Figure 1). After 7 to 12 d, cells with a fibroblastic-like shape emerged and proliferated ( Figure 1B and C). These cells demonstrated a high proliferative potential with a CPD of 294.55 ± 20.91 after 50 passages (Figure 2). rFLDCs were reproducibly isolated from at least five different liver cell suspensions.
Characterization of rFLDC
All isolated rFLDC were analyzed and characterized after passages 2, 4 and 8 using FACS analysis and RT-PCR. Furthermore, a stable expression profile was observed up to P50 (data not shown).
To further characterize our cell population, we performed immunocytochemistry (ICC) for vimentin, fibronectin and ASMA proteins and compared the findings with rat bone marrow-derived mesenchymal stem cells (rBM-MSC) (Figure 4). The results indicated positive staining for ASMA, vimentin and fibronectin as observed with rBM-MSC.
To confirm the phenotypic profile of isolated rFLDC we performed RT-PCR analysis using specific mesodermal, hepatocyte and cholangiocyte markers at passage 4 ( Figure 5).
In-vitro differentiation
First, we checked the ability of rFLDC to differentiate into adipocytes in the presence of specific media supplemented with dexamethasone, isobutyl-methylxanthine, indomethacin and insulin ( Figure 6). We noticed that at early passages (P0-P2) two out of five rat fibroblastic-like liver derived cell cultures demonstrated a weak localized adipocytic differentiation. This ability was lost in further passages. Under osteogenic induction, no calcium deposit was noted ( Figure 6).
In order to demonstrate their potential to differentiate into mature hepatocytes, we seeded 10⁴ cells/cm² from passage 4 in serum-free medium in the presence of several "hepatogenic" factors, as described in the Materials and Methods section. After 32 d, cells showed a slight morphology change and a few cells adopted a polygonal shape (Figure 7). Using RT-PCR, we compared the expression of immature and mature hepatocytic/biliary mRNA in undifferentiated and differentiated rFLDC (Figure 5). Despite a variation in serum concentration (10% vs 2%) between the expansion medium and hepatogenic control medium, respectively, no differences in mRNA expression were observed. To test their liver metabolic activity, we explored their ability to store glycogen, to perform gluconeogenesis (G6Pase activity) and their potential to conjugate bilirubin.
Glycogen storage, evidenced by periodic acid-Schiff staining, showed that, like rat hepatocytes (Figure 8A), undifferentiated and differentiated cells can store glycogen (Figure 8B and C).
As shown in Figure 8D-F, rat hepatocytes and rFLDC also revealed basal G6Pase activity. These results were corroborated by the expression of G6Pase at the mRNA level.
In addition to glycogen storage and G6Pase activity, we assessed the ability of differentiated and undifferentiated rFLDC to conjugate bilirubin.
DISCUSSION
Because preclinical studies use animal models mimicking human diseases, we tried to isolate from rodent liver a liver progenitor cell that would display characteristics reported for ADHLSC. The use of human derived cells in animal models is considered irrelevant, as they may not engraft and function similarly in a xenogenic rodent environment.
Like human cells, rFLDC were isolated and emerged in vitro after culture of liver cell suspension following enzymatic-mediated disaggregation of liver. However, many differences were observed: rFLDC demonstrated a higher proliferative potential and did not reach senescence after at least 50 passages in contrast to human cells which stopped proliferating after 10-12 passages [13] . rFLDC were able, at early passages, to differentiate into adipocytes, in contrast to ADHLSC.
Like human cells, rFLDC displayed a mesenchymal profile as evidenced by the expression of CD44, CD73, CD90 and CD105. The cell population was not contaminated by hematopoietic stem cells, as evidenced by the absence of CD45 expression. These results confirmed the presence of a new enriched cell population different from the freshly isolated hepatic cells. RT-PCR also revealed expression of CK8, UGT1A1 and G6Pase. In a hepatogenic differentiation medium, low numbers of rFLDC display the polygonal morphology of mature hepatocytes. Differentiated rFLDC express both albumin and TDO. However, we did not observe the expression of more specific hepatic markers such as HNF4 or TAT.
Differentiated rFLDC do not express CK19 or αFP, and therefore differ from small hepatocytes and epithelial cells also recovered from normal livers [17][18][19]. Recently, Sahin et al [19], using a 2-step collagenase protocol, reported a cell population derived from adult rat liver, called LDPCs (liver-derived progenitor cells). Regarding their oval morphology and expression of HNF3β, CD45, CD34 and CD90, these cells seem to be closely related to oval cells, despite the absence of CK7, CK8 and CK19 expression.
In conclusion, our results showed that rodent progenitor cells homologous to ADHLSC cannot easily be obtained even when the same isolation and culture protocol was applied using a rat model. However, this protocol allowed the isolation of a novel type of liver progenitor cell population with both hepatic and biliary phenotype, including G6Pase activity, glycogen storage, and CK8, UGT1A1 and CK19 expression.
In the presence of a hepatogenic differentiation medium, rFLDC lose the CK19 biliary marker, but do not acquire a more mature hepatic status, possibly due to the use of human cytokines and growth factors, which may not be appropriate for rodent precursors, stressing again the difficulty in generating homologous models.
Further characterization and in vitro hepatogenic differentiation improvement are required before their relevant use in preclinical studies.
Background
Liver cell transplantation using hepatocytes was successfully performed in patients with inborn errors of metabolism. However, the success of such a therapeutic approach remains limited by the quality of transplanted cells. To overcome these problems several approaches to isolate and propagate liver stem or progenitor cells have been developed. The capacity of those cells to restore a liver metabolic function must be demonstrated.
Research frontiers
Preclinical studies using homologous animal models of human liver metabolic diseases are attractive. It is therefore a prerequisite to isolate and propagate homologous liver stem or progenitor cells from syngeneic animals to perform such studies. In this study, the authors showed that rodent progenitor cells homologous to human adult-derived liver stem/progenitor cells cannot easily be obtained even when the same protocol was applied.
Innovations and breakthroughs
In this study, the authors reported the isolation of novel potential candidate liver progenitor cells isolated from healthy rat liver, called rat fibroblastic-like liver derived cells (rFLDC). These cells express both hepatic and biliary phenotype and are able to acquire some hepatic characteristics in the presence of differentiation medium.
Applications
Isolation and characterization of progenitor/stem cells would be very useful to assay the in-vivo efficacy of liver mesenchymal progenitor cells in syngeneic animal models of liver metabolic diseases, particularly in the Gunn rat, a model of hyperbilirubinemia.
Peer review
This study shows that the advanced hepatic features of human liver progenitor cells have not been demonstrated in rFLDC. Although it strengthens the unique specificity of these human liver progenitor cells, it also shows that homologous models for cell therapy cannot easily be developed even when the same isolation and culture protocols are applied. The authors should make a comparison of their cells with human established liver cells.
Analyzing the metabolic fate of oral administration drugs: A review and state-of-the-art roadmap
The key orally delivered drug metabolism processes are reviewed to aid the assessment of the applicability of current in vivo/in vitro experimental systems for evaluating drug metabolism and interaction potential. Oral administration is the most commonly used route for drug delivery owing to its ease of administration, high patient compliance and cost-effectiveness. The roles of gut metabolic enzymes and microbiota in drug metabolism and absorption suggest that the gut is an important site for drug metabolism, while the liver has long been recognized as the principal organ responsible for the metabolism of drugs and other substances. In this contribution, we explore various experimental models, from their development to their application in studying oral drug metabolism, and summarize their advantages and disadvantages. Undoubtedly, understanding the possible metabolic mechanisms of drugs in vivo and evaluating them with relevant models is of great significance for screening potential clinical drugs. With the increasing popularity and prevalence of orally delivered drugs, sophisticated experimental models with higher predictive capacity for the metabolism of oral drugs than those used in current preclinical studies will be needed. Collectively, this review seeks to provide a comprehensive roadmap for researchers in related fields.
Introduction
Globally, oral administration remains the delivery route for most clinical drugs. Despite the rapid development of intravenous, subcutaneous and intramuscular injection, oral drug delivery is still considered the preferred route in terms of good compliance and ease of administration (McConville, 2017; Manconi et al., 2020). In fact, professionals in the pharmaceutics field have made many new attempts with oral administration routes and doses to increase drug efficacy in recent years (Hens et al., 2018; Jash et al., 2021). The successful discovery and development of drug candidates includes evaluating the fluctuation of indicators in vivo to predict a candidate's effectiveness after delivery to the intended site. After entering the body, a drug is destined to be absorbed or excreted through various pathways; in other words, a drug's life begins with its invention and creation and ends once it has completed its therapeutic task. Drugs experience absorption, distribution, metabolism and excretion (ADME) while completing their therapeutic errand (Szakács et al., 2008). The term drug metabolism refers to the metabolic process in which the parent compound is converted into metabolites to facilitate elimination, also known as bioconversion. The metabolism of drugs after oral administration can be roughly divided into oxidation, reduction, hydrolysis and conjugation. Among them, oxidation, reduction and hydrolysis belong to phase I metabolism, and conjugation belongs to phase II metabolism (Yengi et al., 2007). There are two possible outcomes of drug bioconversion under the action of the various drug-metabolizing enzymes (especially liver enzymes). On the one hand, a drug may become pharmacologically inactive (inactivation). On the other hand, non-pharmacologically active substances may be converted into pharmacologically active metabolites, e.g., prodrugs, and even toxic metabolites may be produced (activation). By definition, prodrugs are derivatives or precursors of therapeutically active molecules, which undergo bioconversion into their active form inside the body, be it via spontaneous processes (e.g., hydrolytic degradation) or through a biocatalytic mechanism.
Drug bioconversion depends on a drug's peculiar physicochemical properties and the interaction between the drug carrier and different sites. Emphasis has routinely been placed on the liver due to the presence of all organelles and their associated drug-metabolizing enzymes (Underhill and Khetani, 2018). The phase I and phase II metabolic reactions of most clinical drugs occur with the participation of the hepatic drug-enzyme system. It can be seen that the liver plays an irreplaceable role in the metabolism process (Alves-Bezerra and Cohen, 2017; Xu et al., 2020). The metabolism and elimination of drugs in the gastrointestinal tract (GIT) is a complex and dynamic process involving many mechanisms and pathways. The GIT not only affects the metabolism of drugs, but also affects their absorption and transport due to the presence of multiple transporters (Shugarts and Benet, 2009). The gut flora has also gradually attracted attention in recent years for participating in drug metabolism through degradation, hydrolysis and reduction (Clarke et al., 2014). Assuredly, phase I drug-metabolizing enzymes (CYP2B6, CYP2C8, CYP2C9, CYP2C19, CYP2D6, CYP3A4, CYP3A5) and phase Ⅱ drug-metabolizing enzymes (UGTs, SULTs) are known to be expressed in the human intestine. These enzymes are able to participate in a wide variety of metabolic processes via oxidation, glucuronidation, sulfonation, etc. Metabolism in the liver and the GIT is an important determinant of the overall disposition of drugs, and the metabolites formed can have an impact on efficacy and safety in humans. Comprehending the factors and physiological barriers that influence drug metabolism and bioconversion is necessary to develop drug candidates with optimal therapeutic efficacy.
To advance the pharmaceutical industry, an in-depth comprehension and application of the hidden mechanisms and different factors involved in drug metabolism at different stages of drug development, as well as good predictive models, are required. Over the last few decades, in vivo/in vitro models have gradually been developed to study the delivery of drug molecules after oral administration. Considering that genetic polymorphisms as well as environment, diet and lifestyle vary widely among human subjects, human beings are generally used only for the final clinical validation phase. The metabolic pathways of cells are highly conserved among species but not universal; nevertheless, the advantages of cell models include low drug dosage, low cost, high speed, and suitability for high-throughput screening. Furthermore, it is relatively easy to obtain various biological samples from animals, and administrative treatment is highly operable. The conditions of animal experiments are easy to control, and inter-individual differences in animals are smaller than in human beings. Understanding the metabolic mechanisms of drug candidates using metabolic models greatly aids the development of drug delivery systems with optimal properties. This article focuses on the mechanism of drug metabolism and different experimental models. It is possible to simulate the in vivo environment by culturing cells from the relevant sites in vitro to explore the metabolic fate of drugs. In addition, there are bacterial bioconversion models, microsomes, ex vivo tissue sections, and gene-edited animal models for different experimental purposes. Herein, we evaluate the strengths and limitations of the various metabolism models. These models may have significant potential for exploitation in preclinical drug screening.
Drug metabolism process
The natural assimilation process of an orally administered drug involves the breakdown of its components, which are then metabolized primarily by the GIT and the liver owing to the high levels of metabolic enzymes present there. The metabolism of drugs in the GIT and liver is a complex process, as illustrated in Figure 1. Generally, drugs undergo complex metabolic reactions under the action of various drug-metabolizing enzymes (especially hepatic drug-metabolizing enzymes) and transporters. In most cases, the polarity of drug metabolites is greater than that of the original drug, to facilitate excretion. But there is also the opposite kind of metabolism, such as the acetylation of sulfonamides (Marshall, 1954; Shear et al., 1986) or the methylation of phenolic hydroxyls (Zhang et al., 2020). Interestingly, some drugs are not completely metabolized, and some metabolites are still excreted in the original form after many complex steps (Cai et al., 2022).
Drugs have distinct destinies due to their different physicochemical properties, including inactivation, decreased activity (Brandao, 1977), enhanced activity (Molet et al., 1997), activation (B'Hymer and Cheever, 2010) and the production of toxic metabolites (Brune et al., 2015). The metabolism of drugs is closely related to their efficacy and safety. Toxic and side effects can be reduced or even avoided by studying the properties and laws of drug metabolism to improve drug bioavailability and efficacy.
Phase I metabolism pathway
Most drugs undergo a series of reactions catalyzed by specific enzymes, which change their structure and physicochemical properties (Iyanagi, 2007). Drug metabolism is the major source of pharmacokinetic variability in human beings. At the root of this changeability are the phenotypic as well as genotypic differences in the expression of the enzymes involved in the metabolism of drugs (Callegari et al., 2013; Zhao et al., 2014). Nicotinamide adenine dinucleotide phosphate (NADPH)-cytochrome P450 (CYP450) is one of the most common phase I drug-metabolizing enzyme systems requiring NADPH as a cofactor (Hurst et al., 2007; Iyanagi, 2007). CYPs, also called hydroxylases and mixed function oxidases (MFO), catalyze the incorporation of one oxygen atom of O₂ into lipid-soluble substrates to form hydroxylates or epoxides, while the other oxygen atom is reduced to H₂O by NADPH. The reaction formula is illustrated in Figure 1B. CYP450 plays an extremely important role in the metabolism of exogenous and endogenous substances (Song et al., 2021). Human hepatocyte CYPs are divided into five major families, CYP1, CYP2, CYP3, CYP7 and CYP27, among which CYP1, CYP2 and CYP3 are mainly involved in the bioconversion of heterologous substances (Sarlis and Gourgiotis, 2005). The different CYP families are divided into A, B, C and other subfamilies according to amino acid sequence homology. Among them, CYP3A4, CYP2C9, CYP1A2 and CYP2E1 catalyze the hydroxylation reaction, which is the most important reaction for changing the solubility of exogenous substances. In addition, CYP2B6, CYP2C8, CYP2C9, CYP2C19, CYP2D6, CYP3A4 and CYP3A5 are known to be expressed in the intestine (Rendic and Guengerich, 2021). CYP3A4 and CYP3A5 are present in all regions of the GIT, and CYP3A4 is highly abundant in the duodenum and jejunum (Esteves et al., 2021). CYPs are associated with a variety of metabolic reactions in the body, including oxidation, sulfur oxidation, aromatic hydroxylation, aliphatic hydroxylation, N-dealkylation, O-dealkylation, and deamination. Oxidation, the primary reaction, converts non-polar lipid-soluble compounds (containing hydroxyl or aromatic groups) into compounds with polar oxygen-containing groups.
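Although Figure 1B is not reproduced here, the overall stoichiometry it refers to is the canonical CYP450 monooxygenase reaction, shown below in LaTeX for reference:

```latex
\mathrm{RH} + \mathrm{O_2} + \mathrm{NADPH} + \mathrm{H^+}
\;\xrightarrow{\text{CYP450}}\;
\mathrm{ROH} + \mathrm{H_2O} + \mathrm{NADP^+}
```

where RH is the lipid-soluble substrate: one oxygen atom of O₂ is incorporated into the hydroxylated product ROH, and the other is reduced to water by NADPH.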
Additionally, the phase I metabolic enzymes that can participate in oxidation reactions include flavoprotein monooxygenases (FMO), monoamine oxidase (MAO), diamine oxidase (DAO) and dehydrogenases (e.g., alcohol dehydrogenases (ADHs) and aldehyde dehydrogenases (ALDHs)). FMOs are widely used in the fields of medicine and the chemical industry because they often participate in compound hydroxylation, Baeyer-Villiger oxidation, sulfur oxidation, epoxidation, and halogenation reactions. MAO mainly metabolizes monoamines in organisms, such as adrenergic compounds including 5-hydroxytryptamine (5-HT) (Chen et al., 2015) and catecholamines (Goldstein et al., 2021). Inhibitors of such enzymes are widely used in depression, Parkinson's disease and other neurological diseases. DAO is an intracellular enzyme that catalyzes diamines in the mucosa or ciliated epithelial cells of the small intestine (Bounous et al., 1984). It can protect the mucosa by regulating intracellular ion balance and affecting conduction pathways. Other enzymes, including nitroreductases (NRTs), azoreductases (ARTs), esterases, amidases and glucosidases, also play an important role in the metabolism of substances (Almazroo et al., 2017). Reduction is another important pathway of phase I metabolism, which matters for the metabolism of aromatic nitro, nitroso, azo, and N-oxide compounds. Compounds obtain H from NADH and NADPH to form the corresponding amines. Esterases, amidases and glucosidases hydrolyze the ester bonds, amide bonds and glycosidic bonds of lipids, amides and glycosides, respectively, leading to reduced activity or even inactivation.
The principle of using enzymes expressed at targeted sites to carry out bioconversion is a strategy for designing prodrugs. The bioconversion of such new compounds, including but not limited to prodrugs, requires the participation of enzymes such as esterases (Walther et al., 2017). Prodrugs are cleverly designed derivatives of therapeutic agents intended to improve drug bioavailability. Most currently marketed prodrugs are ester prodrugs, which need to be activated by esterase hydrolysis of their ester bonds. A good example is provided by P. falciparum prodrug activation and resistance esterase (PfPARE), which activates both a potent peptidyl inhibitor of aspartic proteases and the open-source antimalarial compound MMV011438 (Istvan et al., 2017). Another representative prodrug, Prontosil, is reduced to sulfanilamide (SN), which has antibacterial activity (Almalki et al., 2022). Another promising new class of drugs, proteolysis-targeting chimeras (PROTACs), is also inseparable from the participation of metabolic enzymes (Békés et al., 2022). A PROTAC is a bifunctional small molecule bridging a ubiquitin ligase and a target protein. Since the concept was proposed in 2001 (Sakamoto et al., 2001), related industries have developed rapidly. PROTACs have become one of the hotspots of current pharmaceutical research, in which metabolic enzymes play an important role.
Phase II metabolism pathway
Most of the metabolites produced by the phase I pathway are excreted after phase II metabolism, while the other part is excreted directly. Compared to CYP450, phase II enzymes have received relatively little attention in clinical pharmacology; they are outlined in Table 1. The most common phase II drug-metabolizing enzymes are UDP-glucuronosyltransferases (UGTs), sulfotransferases (SULTs), N-acetyltransferases (NATs), glutathione S-transferases (GSTs), methyltransferases (thiopurine S-methyltransferases (TPMTs), catechol O-methyltransferases (COMTs)) and acyltransferases (Jancova et al., 2010; Almazroo et al., 2017). The phase II metabolic reaction is conjugation, which refers to the binding reaction of the drug or its phase I metabolite with endogenous substances. The polar groups of drug molecules are covalently bound to endogenous substances (such as glucuronic acid, sulfuric acid, acetic acid, glycine, etc.) to generate highly polar, highly water-soluble conjugates, which are easily excreted in urine and/or bile owing to being difficult to reabsorb (Sarlis and Gourgiotis, 2005).
UGTs are a superfamily of drug-metabolizing enzymes that require UDP-glucuronic acid (UDPGA) as a cofactor. The UGT superfamily consists of four families: UGT1, UGT2, UGT3 and UGT8. The glucuronidation reaction catalyzed by the UGT family accounts for the phase II metabolism of over 35% of clinical drugs (Figure 1C). It transfers glucuronic acid from UDPGA to hydroxyl, carboxyl, or amino groups, resulting in compounds that are more hydrophilic than the substrate (Jancova et al., 2010). The glucuronidation of phenobarbital is a representative example (Okada et al., 1969). Current studies have found that the SULTs are divided into the SULT1, SULT2, SULT4 and SULT6 families, of which SULT1 and SULT2 have been studied the most (Thelen and Dressman, 2009). The sulfonation reaction mediated by the SULT family is one of the primary pathways of phase II metabolism, participating in the detoxification and elimination of various substances in vivo. In the conjugation process catalyzed by SULTs, the sulfonate group (-SO₃H) provided by 3′-phosphoadenosine-5′-phosphosulfate (PAPS) is transferred to compounds containing hydroxyl and amino groups (Figure 1D). Sulfonation accounts for a large proportion of the metabolism of exogenous drugs such as acetaminophen. As far as intestinal phase II drug metabolism is concerned, UGT1A, UGT2B7, UGT2B15 and SULT1A have been shown to be expressed with functional relevance for many drugs, although little quantitative data are available so far (Fritz et al., 2019). GSTs are an important enzyme family involved in compound metabolism, catalyzing a large number of reactions including nucleophilic aromatic substitutions, Michael additions, isomerization and reduction of hydroperoxides, and the conjugation of hydrophobic and electrophilic compounds with reduced glutathione. GSTs are divided into two superfamilies: the soluble GST superfamily and the membrane-associated proteins in eicosanoid and glutathione metabolism (MAPEG, microsomal transferases) (Zarth et al., 2015). Soluble GSTs are subdivided into 8 separate classes designated α, κ, μ, Π, σ, θ, ζ and Ω (Salinas and Wong, 1999). The GST enzyme family is involved in almost all types of drug metabolism. Acetylation is an important transformation reaction for amine-containing substances, and acetyl-CoA is the direct donor of acetyl groups. The bioconversion of carboxyl-containing drugs mainly has glycine as a cofactor. The expression of these enzymes is polymorphic, resulting in different acetylation rates among individuals. NATs are involved in the bioconversion of aromatic amines and hydrazines by transferring the acetyl group of acetyl-coenzyme A to the free amino group of the parent compound (Sim et al., 2014). NATs are divided into two subfamilies, NAT1 and NAT2, of which NAT1 is expressed in most tissues and mainly acts on p-aminobenzoic acid, p-aminosalicylic acid and p-aminoglutamic acid. Meanwhile, NAT2 mainly mediates the metabolism of sulfamethazine, isoniazid, hydralazine and sulfonamides (Makarova, 2008). Therefore, an appropriate amount of sodium bicarbonate should be supplemented to improve solubility when taking sulfonamides. S-adenosylmethionine (SAM) is the active methyl donor for TPMT and COMT, the main methyltransferases mediating most of these reactions. TPMTs catalyze the S-methylation of aromatic and heterocyclic sulfur-containing compounds, such as 6-mercaptopurine (6-MP), azathioprine and 6-thioguanine, used in clinical disease treatment (Kouwenberg et al., 2020).
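Schematically (our shorthand, not taken from the original figures), the glucuronidation and sulfonation conjugations just described can be summarized for a hydroxyl-bearing substrate R-OH as:

```latex
\begin{aligned}
\text{UGT:}\quad & \mathrm{R{-}OH} + \mathrm{UDPGA} \longrightarrow \mathrm{R{-}O{-}glucuronide} + \mathrm{UDP} \\
\text{SULT:}\quad & \mathrm{R{-}OH} + \mathrm{PAPS} \longrightarrow \mathrm{R{-}O{-}SO_3H} + \mathrm{PAP}
\end{aligned}
```

In both cases an activated cofactor (UDPGA or PAPS) donates the polar group, yielding a more water-soluble conjugate.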
COMTs are the phase II enzymes responsible for the transfer of a methyl group from S-adenosylmethionine to their substrates. COMT inhibition, combined with L-dopa and a decarboxylase inhibitor, is among the most efficacious treatments for Parkinson's disease (Espinoza et al., 2012). COMT plays a key role in the regulation of catechol-dependent functions and in the metabolism of drugs with catechol functional groups attached to their structures (Volavka et al., 2004).
Microbiome metabolism pathway
The gut hosts a diverse bacterial community 10-fold larger than the human somatic and germ cell population, separated from the internal environment by epithelial cells. It has been estimated that the microbes collectively make up to 100 trillion cells, outnumbering host cells. The microbes, encoding unique genes, have a profound influence on human physiology (Qin et al., 2010). Gut microbial taxa include Actinobacteria, Bacteroidetes, Firmicutes, Fusobacteria, Lentisphaerae, Proteobacteria, Synergistetes, Tenericutes, Verrucomicrobia, Ascomycota, Euryarchaeota, Evosea, Fornicata, etc (Guarner and Malagelada, 2003; Adak and Khan, 2019). They are divided into predominant microflora and sub-dominant microflora according to quantity. The quantity of predominant microflora is generally above 10⁷~10⁸ cfu/g, including obligate anaerobic bacteria such as Bacteroides, Eubacterium, Bifidobacterium, Ruminococcus and Clostridium, which belong to the indigenous flora. The amount of sub-dominant microflora is less than 10⁷~10⁸ cfu/g, mainly aerobes or facultative anaerobic bacteria such as Escherichia coli and Streptococcus (Javdan et al., 2020). It is unavoidable that drugs spend a significant amount of time in the small and/or large intestine, whether prior to or after absorption. More and more evidence suggests that the gut microbiota has both direct and indirect impacts on the metabolism process; for example, the microbiome competes with related metabolic enzymes (Weersma et al., 2020). In addition, the following impact mechanisms are involved (Vernocchi et al., 2020): (a) chemical crosstalk between microbial and human metabolic compounds, (b) modulation of the immune system, (c) protection from pathogens, (d) enteric nervous system regulation, (e) colorectal cancer resistance, (f) neurological behavior, (g) reduction of lipid levels in serum and cholesterol balancing. Crosstalk refers to the influence of the environment (nutritional, social, behavioral, geographic) on host genetics and the subsequent adaptation of the gut microbiome, triggering the molecular mechanisms of communication between the microbiome and the host. The microbiome's self-derived enzymes reflect its direct impact on the bioconversion process. They are mainly involved in degradation, hydrolysis and reduction through hydrolysis, dehydroxylation, deamidation, decarboxylation and reduction of azide groups (Kararli, 1995). For example, the metabolic pathways of anthraquinones are mainly hydrolysis, glucuronidation and sulfation by intestinal flora and hepatic drug-metabolizing enzymes. Bacteroides species can hydrolyze steviol glycosides by β-glucosidase (Renwick and Tarka, 2008). This complex metabolic activity recycles valuable energy and absorbable substrates for the host, and also provides energy and nutrients for flora growth and proliferation (Mancini et al., 2018). Owing to the tremendous progress in the study of microbiota structure and function, its contribution to host physiology, metabolism and disease has gradually been understood and appreciated by relevant researchers in recent years (Fung et al., 2017).
Primary enteric models
The intestine plays a vital role in the absorption of orally ingested compounds, such as nutrients and drugs. Metabolism in the GIT is one of the important determinants of the overall disposition of drugs. However, the significance of the gut in the metabolic fate of drugs has long been underestimated, due to the difficulty of distinguishing between the roles of the gut and the liver in in vivo experiments and the lack of sufficiently viable in vitro models. The good news is that the intestine has increasingly been recognized by researchers as an important factor in determining the first-pass metabolic fate of drugs.
Microbiome-based model
The role of the microbiota was largely overlooked previously; hence the nickname "the forgotten endocrine organ". The ability of microbes in the human gut to metabolize drugs was discovered nearly a century ago. Oral drugs are exposed to the gut flora before being absorbed into the bloodstream. Abundant gut microbes affect compound absorption and metabolism by secreting bioactive molecules such as hydrolases, lyases, oxidoreductases, and transferases, altering drug efficacy and toxicity (Murphy, 2015). The gut microbiome has the ability to produce many kinds of substances. Bacterial culture is often used to study the metabolic effects of the intestinal flora on drugs: 1) frozen glycerol stocks are plated on brain-heart-infusion (BHI) blood agar and incubated at 37°C under anaerobic conditions; 2) single colonies are inoculated into pre-reduced Gut Microbiota Medium (GMM, 1% w/v arginine), to which gentamicin (200 mg/ml), erythromycin (25 mg/ml), and/or 5-fluoro-2′-deoxyuridine (FUdR) (200 mg/ml) are added; 3) bacteria are incubated anaerobically at 37°C for 24 h (Akkermansia muciniphila for 48 h), and samples are collected and stored at −80°C until further processing for analysis (Zimmermann et al., 2019). Han and his colleagues used isolated gut microbiota in combination with liquid chromatography mass spectrometry to investigate the bioconversion of rare protopanaxadiol saponins. The results showed that ginsenosides Rd, F2 and Rg3 were completely converted via deglycosylation. In addition, gut microbiota models for metabolism studies are generally developed based on animals (Clarke et al., 2014). Germ-free mice, which do not harbor other living organisms, are generally used as controls in experiments. Bäckhed and his co-workers compared the effect of gut microbiota on energy absorption in germ-free mice (C57BL/6) and conventionalized mice, and found that the presence of microbiota enhances monosaccharide absorption (Bäckhed et al., 2004). Microbial-based models have many advantages, including the use of affordable and convenient media and the capacity for large-scale cultivation. In addition, a large number of microbial metabolism studies can be assessed simultaneously. Another key advantage is that higher concentrations of target drugs can be added to microbial cultures compared to animal, enzyme and/or tissue systems. This facilitates the purification and isolation of metabolites, as well as toxicological testing, when higher drug concentrations are used (Lamb et al., 2013). Maria et al. developed a gnotobiotic mouse model to separate host and microbiota contributions to drug metabolism (the host-microbiome model) (Zimmermann-Kogadeeva et al., 2020). This model explicitly models gut microbiota activity in the large intestine to identify conditions that promote the microbiota's contribution to drug metabolism. It is undeniable that a systematic and standardized Microbiome-Derived Metabolism (MDM) map is still lacking, in contrast to liver-derived metabolism (Javdan et al., 2020). Currently, researchers have been trying to map the MDM of oral drugs using personalized gut microbiome-derived microbial communities (MDM-Screen) to reliably predict and ultimately interfere with the ability of the microbiome to adversely affect drug pharmacokinetics (PK) and pharmacodynamics (PD).
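In such culture experiments, microbial drug metabolism is often summarized by a first-order depletion rate constant for the parent compound. The sketch below is our generic illustration (not taken from the cited protocols; time points and concentrations are hypothetical), fitting k and the corresponding half-life by linear regression on log-concentrations:

```python
import math

def first_order_fit(times_h, conc):
    """Least-squares fit of ln(C) = ln(C0) - k*t; returns (k per h, half-life in h)."""
    logs = [math.log(c) for c in conc]
    n = len(times_h)
    t_mean = sum(times_h) / n
    y_mean = sum(logs) / n
    slope = (sum((t - t_mean) * (y - y_mean) for t, y in zip(times_h, logs))
             / sum((t - t_mean) ** 2 for t in times_h))
    k = -slope
    return k, math.log(2) / k

# Hypothetical parent-drug concentrations (uM) in an anaerobic culture over 24 h
times = [0, 4, 8, 12, 24]
conc = [50.0, 38.0, 29.5, 22.0, 10.5]
k, t_half = first_order_fit(times, conc)
print(f"k = {k:.3f} per h, t1/2 = {t_half:.1f} h")
```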
Cell-based enteric model
In the past 10 years, cell culture models have been extensively used to evaluate the behavior of drug candidates and nutrients in the GIT. The Caco-2 cell line model was first proposed in 1980 (Rousset et al., 1980) as an in vitro human intestinal epithelial cell assay system for predicting the gastrointestinal absorption (Fa) of orally administered compounds. The cell line, whose characteristics are similar to the small intestine, expresses functional P-gp and MRP2/canalicular multispecific organic anion transporter (cMOAT) at levels that allow reproducible absorption and efflux studies in cell culture. Although a gold standard in studies of the behavior of exogenous substances, Caco-2 cell monolayers exhibit certain limitations, such as a much higher transepithelial electrical resistance (TEER, up to 500 Ω·cm²) compared to the human intestine (12-69 Ω·cm²), overestimated P-gp-mediated efflux, low paracellular permeability, and a lack of mucus and metabolic enzymes at their apical side, which limits the relevance of the Caco-2 cell line in metabolism studies under standard culture conditions (Beloqui et al., 2016). CYP3A enzymes, the most abundant P450s present in human hepatocytes and intestinal enterocytes, are heme-containing monooxygenases responsible for the oxidative metabolism of >50% of current drugs on the market (Thummel, 2007). Of the four human CYP3A enzymes identified, CYP3A4 is the most relevant for drug metabolism, being involved in the metabolism of approximately 90% of drugs in the gut (van Herwaarden et al., 2009). Watkins and his coworkers discovered CYP3A4 in the human intestinal mucosa and demonstrated that it can operate independently of the liver as a highly efficient metabolic barrier during the uptake of various drugs from the intestine (Watkins et al., 1987; van Herwaarden et al., 2007). These findings illustrate the crucial role that intestinal CYP3A4 expression can have in determining the biological response to an orally dosed substrate. Paine et al. found that CYP3A4 is highly expressed in the gut and varies along the length of the small intestine. To predict the metabolism behavior of drugs in the small intestine accurately, it is necessary to develop a Caco-2 cell model expressing CYP3A4 (Thummel, 2007). Many teams have made various attempts to establish a Caco-2 cell platform that stably expresses CYP3A4. Figure 2A demonstrates the use of human artificial chromosome (HAC) vectors to develop Caco-2 cells co-expressing CYP3A4 and CYP450 reductase (CPR) (Takenaka et al., 2017). Specifically, the CYP3A4 and CPR genes were cloned into HAC vectors in CHO cells using the Cre-loxP system, and CYP3A4-CPR-HAC was then transferred to Caco-2 cells by chromosome transfer technology (Ohta et al., 2020). The piggyBac transposon isolated from Trichoplusia ni also serves as a tool to overexpress CYP3A4 in Caco-2 cells (Ichikawa et al., 2021). pPB-TRE3G-CYP3A4 and piggyBac transposase vectors were co-transfected into Caco-2 cells and subjected to immunofluorescence analysis (Figure 2). Researchers have made various explorations in establishing Caco-2 cell models expressing CYP3A4. In one approach, cDNAs for CYP450 were introduced into an extrachromosomal vector under the control of the cytomegalovirus immediate-early promoter, and vector-bearing Caco-2 cells were selected via resistance to hygromycin B to develop Caco-2 cells expressing high levels of CYP450 enzymes (Crespi et al., 1996).
The treatment of Caco-2 cells with 1α,25-dihydroxyvitamin D₃ (1α,25-(OH)₂-D₃), beginning at confluence, results in a dose- and duration-dependent increase in CYP3A4 mRNA and protein (Schmiedlin-Ren et al., 1997). CYP3A4-mediated metabolism was also enhanced in Caco-2 cells transduced with an Adenovirus-3A4 vector (Ad3A4) and an Adenovirus-P450 reductase vector (AdRed) (Brimer et al., 2000). Furthermore, more and more studies aim at enhancing multiple CYP isoforms in Caco-2 cells, including by creating Caco-2 cell lines expressing nuclear receptors (NR) (Korjamo et al., 2006), the constitutive androstane receptor (CAR) and the pregnane X receptor (PXR) (Burk et al., 2005). With the development of technologies such as gene editing and transfection, researchers can establish models suited to their experiments. Caco-2 cells highly expressing CYP3A4 can be established via these technologies and applied to PK studies. The lack of metabolic enzymes in the Caco-2 cell line has also been addressed by employing additional cell lines. The TC-7 cell line, one of the Caco-2 subclones, was isolated to overcome the major limitations of the parental line (Ferrec, 2012). There is a good correlation between this subclone and the Caco-2 cell line, indicating that it is an excellent stand-in for Caco-2 monolayers (Grès et al., 1998). Multiple brush border enzymes similar to those of human enterocytes, as well as CYP3A4, CYP3A5, UGT and the hydrolase sucrase-isomaltase, have been observed in the TC-7 cell line (Liu et al., 2007). Its profile of metabolic enzymes, expressed very similarly to the human jejunum, gives it an edge over the original model. Consequently, it is reasonable to consider TC-7 cells a useful option for studying intestinal first-pass metabolism.
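For context, transport data from Caco-2 or TC-7 monolayers of the kind discussed above are conventionally reduced to an apparent permeability coefficient, Papp = (dQ/dt)/(A·C0), and an efflux ratio Papp(B→A)/Papp(A→B). A minimal sketch follows (our illustration with hypothetical measurements, not a protocol from the cited papers):

```python
def papp_cm_per_s(dq_dt_nmol_s: float, area_cm2: float, c0_nmol_ml: float) -> float:
    """Apparent permeability, cm/s: Papp = (dQ/dt) / (A * C0).
    dQ/dt: receiver-side appearance rate; A: monolayer area; C0: initial donor conc."""
    return dq_dt_nmol_s / (area_cm2 * c0_nmol_ml)

# Hypothetical transwell run: 1.12 cm2 insert, 10 uM donor (= 10 nmol/mL)
papp_ab = papp_cm_per_s(2.0e-4, 1.12, 10.0)  # apical -> basolateral
papp_ba = papp_cm_per_s(8.5e-4, 1.12, 10.0)  # basolateral -> apical
efflux_ratio = papp_ba / papp_ab  # ratios above ~2 are commonly read as active efflux (e.g., P-gp)
print(f"Papp(A->B) = {papp_ab:.2e} cm/s, efflux ratio = {efflux_ratio:.1f}")
```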
Stem cells, which replenish their own population and maintain the potential to develop into more specialized cells, provide an option for metabolism studies by generating large amounts of mature enterocytes/hepatocytes of consistent quality (Bacakova et al., 2018). They can be divided into two groups based on differentiation potential and origin: (i) adult stem cells derived from host and non-host tissues, such as mesenchymal stem cells (MSC), and (ii) pluripotent stem cells, mainly including human embryonic stem cells (hESC) and human-induced pluripotent stem cells (hiPSC) (Ma, 2014). Jason and his co-workers established a robust and efficient process to direct the differentiation of hiPSC into intestinal tissue in vitro, using growth factor manipulations to mimic embryonic intestinal development. The resulting three-dimensional human intestinal organoids (HIOs) consisted of a polarized, columnar epithelium that was patterned into villus-like structures and crypt-like proliferative zones expressing intestinal stem cell markers (Spence et al., 2011). Janssen et al. assessed the expression of the most common CYP enzymes in a hiPSC-derived model. The study found relatively high gene expression levels of CYP enzymes in the hiPSC-induced HIO model, indicating that it is a useful in vitro gut model for studying chemical bioconversion (Janssen et al., 2021). Yoshida et al. established an in vitro differentiation procedure to generate matured small intestinal cells mimicking the human small intestine from iPSCs. The test results confirmed that these iPSC-derived enterocyte-like cells exhibit CYP3A4-mediated metabolism and can serve as a model for the evaluation of drug metabolism in the human small intestine (Yoshida et al., 2021). hiPSC-derived intestinal tissue should allow for unprecedented studies of intestinal wall metabolism.
Primary hepatic models
Although the study of drug metabolism in the gut is evolving, it still lags behind the established liver models to some extent. Most in vivo and in vitro assessments center around hepatic models nowadays. The liver has long been considered the principal site of drug metabolism because most phase I and phase II reactions occur with the participation of hepatic metabolic enzymes. It is mainly engaged in physiological processes such as compound metabolism, bile secretion and excretion, detoxification and the production of coagulation factors. Modern research employs a number of liver-derived in vitro systems, such as slices, primary and immortalized hepatocytes, microsomes and S9 fractions, to assess xenobiotic metabolism.
Hepatocytes model
Nowadays, there are several attempts to establish hepatocyte-based in vitro systems as alternatives to animal experiments. The contribution of the liver to oral drug metabolism is extensively assessed in drug discovery by using fresh or cryopreserved hepatocytes and hepatic subcellular fractions. Primary hepatocytes are considered a standard in vitro tool in drug metabolism studies (Sahi et al., 2010). Primary hepatocytes include primary human hepatocytes (PHHs) (Zeilinger et al., 2016) and primary mouse hepatocytes (PMHs) (Nagarajan et al., 2019), depending on the species. PHH/PMH are usually isolated from whole livers or resected liver tissue by a continuously modified two-step collagenase perfusion technique, first proposed by Seglen and Reith in 1976 (Seglen and Reith, 1976). The liver tissue is first separated from the body and rinsed. Then, 5 ml of collagenase V is injected and digested in situ for 10 min at room temperature. The liver is then cut into small pieces and placed in 5 ml of collagenase V for further digestion at 37°C for 30 min. Finally, the digested suspension is repeatedly pipetted to detach the hepatocytes, and the tissue debris is washed away with Dulbecco's modified Eagle's medium (DMEM) before culture. PHH/PMH have some disadvantages, including difficulty in culture, early phenotype change, easy inactivation of metabolic enzymes, and inter-donor variability, which limit their practicability and reliability as an in vitro model. Sandwich-cultured hepatocytes (SCHs) are an in vitro model widely used in studies of hepatobiliary drug transport. In this model, hepatocytes are sandwiched between two layers of artificial matrix gel, re-establishing cell polarity and forming a complete bile canalicular network in order to realistically mimic the in vivo environment of hepatocytes (Fardel et al., 2019). The SCH model expresses drug-metabolizing enzymes, especially CYP enzymes, making it valuable for application in drug metabolism research (Matsunaga et al., 2016). There is no doubt that the SCH model is deemed superior to traditional PHH. Mardal's group identified the metabolites of the cannabinoid compound 5F-PY-PICA using this model combined with liquid chromatography-high resolution mass spectrometry/mass spectrometry (LC-HR-MS/MS) (Mardal et al., 2018). In general, the co-culture of two or more cell types tends to mimic physiological conditions better. HepatoPac is a co-culture model of primary human hepatocytes and mouse fibroblasts that enables long-term hepatic metabolism and toxicity studies (Chan et al., 2019). This technique shows better in vitro-in vivo correlations than conventional hepatocyte models (Ramsden et al., 2014), especially for medium- and low-turnover compounds. The architectural organization of HepatoPac cultures has been empirically optimized to promote hepatocyte vitality and enable stable metabolic activity for weeks, rather than the hours or days typical of other culture systems (Kamel et al., 2021).
Hepatic cell lines generated from tumor tissue are widely used in ex vivo culture models due to their high proliferative capacity and stable metabolism. Common human hepatic cancer cell lines include HepG2, HepaRG, Huh7, Huh7.5, PLC, Hep3B, SMMC-7721, MHCC97-H and MHCC97-L, each of which has its own characteristics (Fukuyama et al., 2021). The SMMC-7721, MHCC97-H and MHCC97-L cell lines have been withdrawn from experimental use after being confirmed to be contaminated. Human hepatoma cell lines have biological properties similar to primary hepatic cells and can be passaged indefinitely, providing an ideal in vitro model for cancer and drug metabolism studies. Such cell lines are therefore used in various fields and are expected by professionals to replace primary hepatocytes. HepG2 and HepaRG, two hepatoma cell lines, are attractive tools for in vitro studies under standardized and reproducible conditions (Yokoyama et al., 2018). The HepG2 cell line is derived from human hepatoma tissue and can secrete a variety of plasma proteins. As shown in Table 2, the HepG2 cell line is widely used in liver physiology studies (Zhu et al., 2016), despite its lower levels of specific drug-metabolizing enzymes and transcription factors. The HepaRG cell line is a fascinating tool for studying drug metabolism owing to its expression of liver-specific functions, including CYP enzymes, transporters and nuclear receptors, during differentiation (Guillouzo et al., 2007). HepaRG cells express the mature hepatocyte marker aldolase B when highly differentiated into hepatocyte-like cells, at an mRNA expression level 20% of that of freshly isolated human hepatocytes, whereas it is not detected in HepG2 cells. The cell line is commonly used in metabolism and toxicity studies due to its high CYP450 enzyme expression (Zanelli et al., 2012). HepaRG expresses various CYPs (1A2, 2B6, 2C9, 2E1, 3A4), NR, CAR and PXR at levels comparable to PHH and significantly higher than HepG2 (Guillouzo et al., 2007). HepaRG cells are more economical, convenient and predictable than fresh or cryopreserved PHH (Tascher et al., 2019). Hydroxylation is favored in PHH, whereas the glucuronidation pathway is favored in HepaRG cells. The line is available in a proliferative state, to be expanded and differentiated in-house, or as cryopreserved, fully differentiated and ready-to-use hepatic cells. The Huh7 cell line is used for metabolism and toxicology studies, but it also lacks typical hepatic biochemical functions. Darnell and colleagues (Darnell et al., 2012) found that drugs exhibit markedly different bioconversion behaviors in different cellular systems. The long-term differentiated Huh-7 cell line is a promising tool for in vitro hepatotoxicity and endogenous compound metabolism testing. The Huh-7 cell line upregulates some transporters related to the farnesoid X receptor (FXR) and nuclear factor erythroid 2-related factor 2 (Nrf2) (Feng et al., 2018; Thomas et al., 2019). These data indicate that it may be used to study drug interactions with MRPs when it expresses some major drug transporters. Hepatocyte-like cells differentiated from hiPSC are of great interest for applications in pharmacological research, especially drug metabolism testing. Murayama's team found that the induction of typical CYP450s in hiPSC-derived hepatocytes after normal culture could facilitate the use of these cells for drug metabolism studies (Murayama and Yamazaki, 2018).
Hepatocyte subcellular fractions
The liver homogenate is subjected to differential centrifugation to obtain subcellular fractions, including hepatic microsomes, the S9 fraction and hepatic cytosolic fractions, as shown in Figure 3. Subcellular fractions can be stored at −80°C, remaining stable for many years, and are easily thawed before experiments. These advantages matter for studies in the earliest stages of drug metabolism, including screening programs (Parmentier et al., 2007). It should be noted that the corresponding cofactors need to be added to the reaction system to better simulate the physiological environment. The S9 fraction is the most similar to physiological conditions because it requires the fewest centrifugation steps. From a macroscopic point of view, the S9 fraction is a mixture of unfractionated microsomes and cytosol containing a wide variety of drug-metabolizing enzymes. It is widely used as a preferred test system in several in vitro ADME studies covering both phase I and phase II metabolism (Registre and Proudlock, 2016). The S9 fraction has a relatively complete metabolic function and provides a relatively comprehensive metabolic profile, which can better mimic the physiological state (Hamel et al., 2016). The cytosol is the fraction obtained by further centrifugation of the S9 fraction and consists of enzymes such as NAT, GST and SULT (Neal et al., 2019). This subfraction, containing mainly phase II metabolic enzymes, is used for studies of single soluble enzyme activities and specific metabolic pathways (Wahbeh and Christie, 2011).
FIGURE 3 | The preparation of the S9 fraction, hepatic cytosolic fractions and microsomes commonly used in drug metabolism studies.
Some exogenous cofactors such as PAPS can be added to stimulate phase II enzyme activity. Hepatic microsomes derive mostly from the endoplasmic reticulum (ER) and are obtained by differential centrifugation (100,000 g) (Liu et al., 2002). They contain important drug-metabolizing enzymes, such as CYPs, FMO, carboxylesterases and glucuronosyltransferases (GTs), which are responsible for 90% of drug metabolism reactions. Liver microsomes can be stored for a long time without inactivation and are deemed a vital vehicle providing a relatively stable environment for drug metabolism studies. The microsome is by far the most widely used in vitro model, providing an affordable way to conduct metabolism studies (metabolic profiling and prediction of hepatic clearance) and interaction studies (phenotyping and inhibitory potential studies). However, exogenous cofactors such as NADPH for CYP and FMO, and UDPGA/alamethicin for UGT, need to be added in many experiments. Although microsomes can easily be handled in large quantities, certain metabolites and metabolic pathways cannot be determined with them. Like other in vitro metabolic models, liver microsomes do not fully mimic the in vivo environment. To alleviate this dilemma, several methods have attracted increasing attention. Formerly, researchers attempted to achieve the best agreement with in vivo clearance values through the inclusion of both blood and microsome binding values (Obach, 1999). Precisely controlling the reaction by adding different substances to the liver microsomal system is a novel approach. To improve enzyme activity, Liu et al. added UDPGA, MgCl₂, alamethicin, saccharolactone and macelignan to the liver microsomal system (Liu et al., 2014). A specific probe-substrate (cocktail) assay coupled with fast liquid chromatography-tandem mass spectrometry (LC-MS/MS) analysis was developed in human liver microsomes (Zhou et al., 2022). The cocktail assay enables information on multiple metabolic pathways to be obtained in a single experimental procedure with minimal inter-individual effects (Kahma et al., 2021). Furthermore, multiple models can be combined for different experimental purposes (Mohutsky et al., 2006). Nasser and his colleagues analyzed zorifertinib metabolites using hepatocytes and liver microsomes (Al-Shakliah et al., 2022). Notably, the optimal pH in microsomes is important for the physiological interpretation and predictability of intrinsic clearance (CLint) (Al-Shakliah et al., 2022).
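To make the clearance-prediction workflow concrete, the sketch below (our illustration; the scaling factors and physiological constants are approximate, rat-like placeholder values) converts an in vitro substrate-depletion half-life into intrinsic clearance and then into a predicted hepatic clearance with the well-stirred model, in the spirit of Obach (1999):

```python
import math

def clint_ul_min_mg(t_half_min: float, vol_ul: float, protein_mg: float) -> float:
    """In vitro CLint from substrate depletion:
    CLint = (ln 2 / t1/2) * (incubation volume / mg microsomal protein)."""
    return (math.log(2) / t_half_min) * (vol_ul / protein_mg)

def clh_well_stirred(clint_ml_min: float, qh_ml_min: float, fu: float) -> float:
    """Well-stirred liver model: CLh = Qh * fu * CLint / (Qh + fu * CLint)."""
    return qh_ml_min * fu * clint_ml_min / (qh_ml_min + fu * clint_ml_min)

# Hypothetical assay: 30 min depletion half-life, 500 uL incubation, 0.5 mg protein
clint_vitro = clint_ul_min_mg(30.0, 500.0, 0.5)  # uL/min/mg protein

# Scale to whole liver (illustrative, approximate literature-style factors)
MG_MICROSOMAL_PROTEIN_PER_G_LIVER = 45.0
LIVER_WEIGHT_G = 10.0  # roughly a 250 g rat
clint_liver = clint_vitro * MG_MICROSOMAL_PROTEIN_PER_G_LIVER * LIVER_WEIGHT_G / 1000.0  # mL/min

clh = clh_well_stirred(clint_liver, qh_ml_min=14.0, fu=0.2)
print(f"Scaled CLint = {clint_liver:.1f} mL/min; predicted CLh = {clh:.2f} mL/min")
```

In practice, binding corrections (unbound fractions in blood and in the incubation) strongly affect such predictions, which is exactly the point made by Obach (1999).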
Others
Precision-cut tissue slices
None of the above cell cultures, however, can provide a complete intestine or liver model. Precision-cut tissue slice (PCTS) technology refers to cutting fresh tissue into slices of a reproducible and well-defined thickness with a microtome and incubating them with compounds during the experiment (de Kanter et al., 2002). The isolated tissue (intestine or liver) should immediately be flushed and cooled with ice-cold University of Wisconsin (UW) solution or ice-cold Krebs-Henseleit buffer (KHB) (Palma et al., 2019). It should be noted that KHB should be used alongside UW solution to prevent the inactivation of intestinal tissue that UW solution alone can cause. A tissue microtome (Krumdieck tissue slicer) is used for sectioning as soon as possible, within 3 h after the tissue is isolated. Practitioners can obtain precision-cut intestinal slices (PCIS) or precision-cut liver slices (PCLS) according to experimental requirements (Groothuis and de Graaf, 2013). Liver slices are typically prepared at a thickness of 250 μm, which allows the inner cell layers to be fully exposed to oxygen and nutrients. Generally, intestinal tissue is suitable for sectioning when filled and/or embedded with low-melting-point agarose. To maintain the activity of intestinal and/or liver tissues, they must be continuously gassed with 95% O2/5% CO2, and a medium supplemented with glucose and antibiotics (Williams medium E) is used simultaneously. The addition of insulin (30 nM), glucagon (100 nM), corticosterone (1 mM), epidermal growth factor (1 nM) and/or fetal calf serum (5%) may be beneficial for long-term culture (>48 h) (de Graaf et al., 2010; Groothuis and de Graaf, 2013; Palma et al., 2019). The technique preserves drug-metabolizing enzyme and organelle activity and maintains cell-to-cell and cell-matrix interactions (Ioannides, 2013). Moreover, the model can maintain metabolic activity for a relatively long time (8-12 h) and has a strong tolerance to environmental conditions. In recent years, this method has reached the level of precise cutting with the development of slicer technology. To date, it represents a robust and versatile ex vivo model that avoids separating cells and keeps the natural cellular environment with a full metabolic program (Othman et al., 2020). The gut is heterogeneous, with prominent structural and functional differences between the duodenum, jejunum, ileum and colon. PCIS are therefore particularly suitable for studying the metabolism of different intestinal regions and its effects on the metabolism or transport of drugs. PCLS obtained from rat livers are used in most experiments, although other animals, including mice, miniature pigs, monkeys and dogs, are also employed. Human livers have gradually been adopted by researchers in recent years. Human liver slices can be prepared from small pieces of human liver obtained as surgical waste after partial hepatectomy or from non-transplantable donor tissue, which allows interspecies comparisons and the interpretation of human-specific function. Although the high price of specially designed tissue slicers limits the model's application, it provides substantial technical support in scientific research. van Midwoud and co-workers integrated PCIS and PCLS obtained from rats into microfluidic chambers to demonstrate gut-liver communication and to mimic first-pass metabolism by transferring metabolites from the PCIS to the PCLS using connected flow (van Midwoud et al., 2010).
PCTS from rats and mice were used to determine the metabolic rate of CYP3A and the formation of 3-OH-quinidine (Martignoni et al., 2006). In another case, PCTS were shown to be a successfully established ex vivo model, suitable for drug transport and metabolism testing (van de Kerkhof et al., 2008).
Isolated tissue perfusion system
Compared with isolated and/or cultured cells, it is clearly more physiologically relevant to isolate the specific tissue where the metabolic enzymes are located for drug metabolism research. Practitioners can qualitatively and quantitatively analyze the concentration changes of parent drugs and their metabolites using an isolated tissue perfusion system (Andlauer et al., 2000). After the intestine or liver is removed from an anesthetized animal, the segment is placed in a bath filled with buffer and perfused with the drug (Hamed et al., 2021). This method preserves the integrity of tissue structure and function to a large extent and dynamically monitors disposition by the intestine or liver, while eliminating interference from other organs. Isolated hepatic perfusion is a procedure in which one catheter is placed into the artery to supply blood and another is placed into the vein to drain it. This temporarily separates the liver's blood supply from the rest of the circulation, which allows high doses of anticancer drugs to be directed to the target organ. Researchers need to isolate and maintain the tissue at 37°C, then rapidly circulate the perfusate and take samples at specific time points to determine the concentrations of the drug and its metabolites (Windmueller and Spaeth, 1977). Generally speaking, the tissue can essentially maintain a normal physiological state under perfusion. To ensure the activity of drug-metabolizing enzymes, cannulation and oxygenated perfusion should be established promptly. The isolated tissue perfusion technique is an effective way to study drug metabolism and its mechanisms, but the method requires sophisticated perfusion equipment and a high level of operating skill. Because it is based on the whole organ, metabolism studies with isolated tissue perfusion exclude interference from other tissues and organs and thus truly reflect the state of metabolism. The disposition profiles of three of the six major kavalactones (kavain, methysticin and desmethoxyyangonin) and their respective metabolites (p-hydroxykavain, m,p-dihydroxykavain and p-hydroxy-5,6-dehydrokavain) were examined in the perfusate and bile of the isolated perfused rat liver by Fu et al. (Fu et al., 2012). Building on perfusion culture approaches, Ma and his colleagues developed a biomimetic and reversibly assembled liver-on-a-chip (3D-LOC) platform and presented a proof of concept for long-term perfusion culture of 3D human HepG2/C3A spheroids (Ma et al., 2018). The model is beneficial for a variety of potential applications, including the development of bioartificial livers, disease modeling, and drug toxicity screening.
Recombinant enzyme system
With the development of molecular biology, recombinant enzymes have become more and more widely used in in vitro metabolism studies in recent years. A recombinant metabolic enzyme system is produced by using genetic and cell engineering to integrate the genes regulating the expression of metabolic enzymes into E. coli or insect cells. To facilitate the use of recombinant enzymes for testing substrate specificity in vitro, protein arginine N-methyltransferase (PRMT) was cloned in frame into pGEX vectors using standard molecular biology techniques; all nine PRMTs can be expressed in E. coli as GST fusion proteins (Cheng et al., 2012). High levels of metabolic enzymes can be expressed in the cell line after culture. The purity of the recombinant enzyme is monitored using SDS-PAGE (Srividya et al., 2016). A recombinant enzyme recovered from a Q-Sepharose anion-exchange column retains full activity for several months if stored at -80°C in phosphate buffer containing 20% (v/v) glycerol, pH 7.2 (Miziorko and Narasimhan, 2000). After the early stages of purification, recombinant enzymes exhibit a significant requirement for stabilizers such as glycerol or substrates. This is an important model for identifying the major isoenzymes involved in drug metabolism, drug metabolism polymorphisms and drug metabolic interactions. The recombinant CYP450 system correlates well with liver microsome experiments and is suitable for fine-grained, detailed research. The model is superior to other in vitro methods for specificity and selectivity studies of drug-enzyme induction. The recombinant enzyme system can be used to study how different metabolizing enzyme isoforms transform a compound into different metabolites. Using recombinant P450 enzymes and in vitro incubation with human liver microsomes, Asano et al. found that the three metabolites of emetine were epicrine, 9-O-demethylepicrine and 10-O-demethylepicrine, and that CYP3A4 and CYP2D6 catalyze the metabolism of emetine to epicrine and 9-O-demethylepicrine (Asano et al., 2001). This method makes it possible to study enzyme structure and function, as well as individual enzymes with their substrates and inhibitors. Additionally, it can clarify how certain drugs are metabolized without the interference of other enzymes. Using this technique, Lee and colleagues found that CYP2J2 is predominantly expressed in the small intestine and heart and is, to a certain extent, an underappreciated player in first-pass metabolism (Lee et al., 2010). The high purity of the expressed metabolic enzymes, the specificity and selectivity for different experiments, and an effective means for high-throughput screening and analysis of drugs are the advantages of this technology. However, it has a high application cost and cannot reflect the overall metabolism of a drug.
Animal models
No matter how close in vitro models come to the physiological environment, progress in biomedical research still relies on animal models as the experimental basis for testing experimental and clinical hypotheses. It is readily appreciated that the information obtained from in vitro experimental systems is limited. Advances in detection technology and equipment have made major strides, making animal models more and more reliably predictive. There are three types of animal models: homologous (identical to humans), isomorphic (resembling a human disorder) and predictive (allowing the prediction of human disease and treatment). The animal models employed to study the behavior of drugs are rats, mice, rabbits, pigs, canines and sheep, of which rats and mice are the most commonly used species (Shrestha and Préat, 2020). It must be mentioned that their gut microflora, higher metabolic activity and fecal reabsorption differ from humans, even though their GIT barrier is similar to that of human beings. An appropriate animal model should be selected on the basis of the study purpose, such as the expression of transport proteins and metabolic enzymes. Indeed, no species is identical to humans at the functional level for any metabolizing enzyme, but more similarities are found in higher species. Recent research provides novel evidence on these observed similarities and differences through molecular biology methods (Tang and Prueksaritanont, 2010). Non-human primates (rhesus (Twaddle et al., 2019) and cynomolgus monkeys (Shen et al., 2021)) have metabolic similarities to humans, especially for CYPs (Uno et al., 2018), and the chimpanzee has been characterized as a surrogate for drug oxidation and glucuronidation in humans and as a PK model for the selection of drug candidates (Yang et al., 2014). Interestingly, pigs can also be a good model for studying active compounds mainly metabolized by aldehyde oxidase (AOX1), NAT (NAT1 and NAT2) or cytochrome (CYP2C9-like) enzymes (Dalgaard, 2015). Bioanalytical technologies (liquid chromatography (Hsieh and Korfmacher, 2006), mass spectrometry (Hsieh, 2008), etc.) are very common methods for evaluating drug metabolism by collecting blood, bile, urine, feces and tissue samples after administration (Dunn et al., 2011; Chen and van Breemen, 2020). This approach can reflect and even quantify the metabolic fate of drugs in vivo on a macroscopic scale, but it cannot precisely attribute the contributions of the various organs. Obviously, the prerequisite for a valid animal model is similarity to humans in terms of orthologous target CYPs, substrate specificity, response to inhibitors and disposition mechanisms.
With the development of molecular biology and genetic engineering technologies, humanized animal models such as transgenic mice, gene knockout mice and chimeric mice have appeared one after another (Bandzar et al., 2013; Baker et al., 2019). More advanced gene-editing technologies were then developed, such as zinc-finger nucleases (ZFNs) (Geurts et al., 2009), transcription activator-like effector nucleases (TALENs) (Mak et al., 2012) and clustered regularly interspaced short palindromic repeats/CRISPR-associated protein (CRISPR/Cas9) (Hendriks et al., 2021), which are used for gene knock-in and knock-out in animals to construct gene-edited animal models. ZFN technology combines the DNA-binding domain of a zinc-finger protein with the DNA-cleavage domain of the FokI endonuclease, while TALENs combine a transcription activator-like effector (TALE) protein with a DNA-cleavage domain. ZFN design requires highly skilled experts and the screening of ZFN libraries, while TALENs have disadvantages such as large size, prokaryotic origin and cytotoxicity. Compared with other gene-editing technologies, CRISPR-Cas9 is simple, efficient and highly specific (Zhang et al., 2019). The technology involves two key components: a single guide RNA (sgRNA) matching the target gene and the Cas9 protein causing a double-strand DNA break (Lu et al., 2021). Various modifications can be made to the CRISPR-Cas9 cargo system, as shown in Figure 4, i.e., plasmid DNA encoding sgRNA and Cas9, the combination of sgRNA and Cas9 mRNA, and the combination of sgRNA and Cas9 protein (Sharma et al., 2021). CRISPR-Cas9 has developed into a general tool for genome editing, especially for generating robust animal models. An increasing number of engineered mouse/rat models are being used to study the effects of metabolizing enzymes, with the target genes mainly concentrated in the CYP450 family (Karlgren et al., 2018; Li et al., 2019). Since the first report of a CYP450 knockout mouse appeared in 1995, numerous CYP-knockout and CYP-cDNA transgenic mouse models have been created for experimental needs (Pineau et al., 1995). So far, CYP-knockout mouse models have served drug metabolism studies, especially for CYP gene families 1-4 (Wei et al., 2013). The roles of different CYP450 enzymes in the metabolism of acetaminophen (APAP), a drug commonly known for its hepatotoxicity, have been investigated in CYP450 knockout mouse models (Zaher et al., 1998). Abdelmegeed and colleagues found that APAP-induced liver damage and protein adduct formation were inhibited in CYP2E1 knockout mice as well as in CYP1A2/CYP2E1 double knockout mice (Abdelmegeed et al., 2010). Nevertheless, APAP still carried a risk of hepatotoxicity and death in CYP1A2 knockout mice, as in wild-type mice, which indicated that the involvement of CYP1A2 was minimal compared with that of CYP2E1. In addition, CYP-knockout mouse models have been applied to the study of chemical carcinogens. Of note, it is recognized that initial bioactivation by P450s or other bioconversion enzymes is the vital step. Thus, different types of CYP-knockout mouse models show protective or damaging outcomes after the metabolism of chemical carcinogens. 3-Methylindole, a lung and nasal chemical carcinogen in tobacco smoke, was studied in CYP2A5 knockout and CYP2F2 knockout mice, respectively. Zhou et al. demonstrated that although both enzymes metabolize 3-methylindole via either epoxidation or dehydrogenation pathways, CYP2F2 favors the production of reactive iminium ions while CYP2A5 favors stable derivatives, causing different degrees of injury in mice (Zhou et al., 2012). Numerous factors affect the accuracy of such tests over the long and complicated experimental process, including individual differences. Therefore, animal models are best combined with other in vitro models to accurately reflect the real fate of drugs in vivo.
Correlations between in vitro and in vivo studies
Increasing emphasis is being placed on using in vitro model results as a surrogate for the in vivo behavior of drugs. In vitro to in vivo extrapolation (IVIVE) can convert in vitro drug metabolism data into in vivo metabolism data (Algharably et al., 2022). During drug development, a variety of in vitro metabolic models are utilized to screen and study the metabolic properties of candidate compounds; based on the resulting data, researchers modify drugs to improve metabolic stability and bioavailability. The results of in vitro experiments generally serve to inform in vivo experiments, and the two types of data need to be combined before use. The in vivo drug clearance derived from in vitro data is often lower than the in vivo measured value, within a three- to ten-fold error range (Bowman and Benet, 2019). To overcome this dilemma, researchers have established models that extrapolate in vitro metabolic data to the in vivo situation. It is common practice to measure the CLint of drugs in vitro using microsomes or hepatocytes to predict the in vivo CL (Lam and Benet, 2004; Sohlenius-Sternbeck et al., 2012). On this basis, correction equations for hepatic CLint were established for in vitro metabolism experiments (Poulin and Haddad, 2013).
The prediction follows the well-stirred model of hepatic clearance, in which the in vitro intrinsic clearance is first scaled to the whole body, CLint(in vivo) = CLint(in vitro) × SF, and corrected for binding in the incubation; the individual methods then differ mainly in which unbound fraction enters the equation:

Conventional: CL = [Qliver × fu,b × CLint(in vivo)/fu,inc] / [Qliver + fu,b × CLint(in vivo)/fu,inc]
Conventional bias-corrected: the conventional prediction corrected by the average fold error (AFE)
Berezhkovskiy: as the conventional equation, with fu,b replaced by fu,b-app
Poulin: CL = [Qliver × fu,liver × CLint(in vivo)/fu,inc] / [Qliver + fu,liver × CLint(in vivo)/fu,inc]
Direct scaling: CLint(in vivo) = CLint(in vitro) × SF, used without further correction
Regression equation: log CLint(in vivo) = 0.670 × log CLint(scaled) + 0.633 (7)

where CLint and Qliver represent the intrinsic clearance and the liver blood flow rate, respectively. fu,b, fu,b-app and fu,liver are the unbound fraction in blood, the apparent unbound fraction considering the pH gradient, and the unbound fraction in liver considering protein-facilitated uptake and the pH gradient; fu,inc is the unbound fraction in the incubation medium (hepatocytes). SF is the scaling factor (i.e., the physiological SF is (99 × 10^6 cells/g liver) × (1799 g liver/70 kg body weight)). The AFE is obtained from the conventional method predictions for each dataset studied; therefore, a different AFE value is used for each dataset. In the regression equation, 0.633 is the intercept and 0.670 the slope. In general, liver microsomes give a higher CLint than hepatocytes; therefore, in the absence of significant active uptake transporter action, liver microsomes are more accurate (Bowman and Benet, 2019). These equations are of great significance in helping to assess the value of drug candidates in advance, providing guidance for in vivo experiments, and saving development time and costs (Jones et al., 2022).
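To make the arithmetic concrete, the following minimal Python sketch chains the scaling step and the well-stirred equation. All numerical inputs (the hepatic blood flow, the in vitro CLint and the unbound fractions) are illustrative assumptions, not values from the text; only the scaling factor follows the physiological SF quoted above.

```python
# Hedged sketch of the well-stirred IVIVE calculation; all inputs are
# illustrative assumptions except SF, which follows the physiological
# scaling factor quoted in the text.

Q_LIVER = 20.7            # assumed hepatic blood flow, mL/min/kg
SF = 99e6 * 1799 / 70     # (cells/g liver) x (g liver) / (kg body weight)

def clint_in_vivo(clint_in_vitro, fu_inc):
    """Scale in vitro CLint (uL/min/10^6 hepatocytes) to mL/min/kg and
    correct for binding in the incubation medium (fu_inc)."""
    return clint_in_vitro * (SF / 1e6) / 1000 / fu_inc

def well_stirred_cl(clint, fu):
    """Well-stirred model; Conventional, Berezhkovskiy and Poulin differ
    only in the unbound fraction passed here (fu_b, fu_b-app or fu_liver)."""
    return Q_LIVER * fu * clint / (Q_LIVER + fu * clint)

cl_int = clint_in_vivo(clint_in_vitro=15.0, fu_inc=0.8)   # assumed inputs
print(f"Predicted hepatic CL = {well_stirred_cl(cl_int, fu=0.1):.2f} mL/min/kg")
```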
Physiologically based pharmacokinetic (PBPK) models and metabolism patterns are used to predict and explain inter-individual differences in the ADME properties of drugs, which is crucial for simplifying drug formulation development and regulatory evaluation (Chow and Pang, 2013). Currently, PBPK models are a prominent in silico tool for predicting distribution and clearance (CL) and for reconstructing the overall PK profile, and they are applied constantly in the pharmaceutical industry (Kostewicz et al., 2014). PBPK integrates information such as system properties, drug properties, formulation properties, and the anatomical structures of tissues or organs through mathematical models, which can provide a comprehensive description of the in vivo PK behavior of drugs (Dubaj et al., 2022). The model can accurately reflect the drug concentration versus time in vivo, as well as the effects of disease and physiological factors on PK behavior. The in vivo ADME properties of drugs cleared primarily by the liver have been determined by PBPK modeling. Moreover, the results of in vitro tests have to be related back to the biological context where metabolism and redistribution occur, which can be accomplished in part using the IVIVE method (Hines et al., 2022). Thus, PBPK modeling and IVIVE are indispensable tools for drug development and interpretation. The compartment model, an abstract concept that does not necessarily represent a specific anatomical part, is the basic analytical method used in PK (Nestorov et al., 1998). The one-compartment model treats the host as a single kinetic unit, which is appropriate for situations where the distribution of a drug reaches dynamic equilibrium essentially instantaneously. According to the different transport rates exhibited by drugs between different parts of the body, the part with a rich blood supply and a higher transport rate is called the central compartment; the others are called peripheral compartments and are further divided into first peripheral compartments, second peripheral compartments, etc., giving rise to multi-compartment models. In addition, there are non-compartmental methods for cases where a compartmental structure cannot be defined from the available information (Jaki and Wolfsegger, 2012). While no single method has emerged as superior for all the compounds evaluated, combining multiple approaches provides greater value. For drugs with low pharmacological activity, the differences between compartments can be ignored; for targeted drugs, the classical compartment model cannot be applied. Notably, a drug can be described by different compartment models with significantly different parameters.
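As a small illustration of the one-compartment model mentioned above, the sketch below computes the concentration-time course after an intravenous bolus under first-order elimination; the dose, volume of distribution and elimination rate constant are made-up values.

```python
# One-compartment IV bolus model: C(t) = (Dose/V) * exp(-k*t).
# Dose, V and k are illustrative assumptions.
import math

def concentration(t, dose=500.0, v=42.0, k=0.1):
    """Plasma concentration (mg/L) at time t (h) for dose (mg),
    volume of distribution v (L) and elimination rate constant k (1/h)."""
    return dose / v * math.exp(-k * t)

for t in (0, 2, 4, 8, 12):
    print(f"t = {t:2d} h  C = {concentration(t):.2f} mg/L")
```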
Conclusion and future work
For scientists in specialized fields, exploring the metabolic processes and mechanisms of drugs in vivo is a significant and ongoing clinical challenge. Drug discovery and development is a costly and often time-consuming activity, and a tremendous amount of research effort has been devoted to using various models to evaluate the metabolic fate of drugs in vivo. A variety of drug metabolism models are presented in this article; although these models are of great significance for drug development, their inherent deficiencies limit the scope and accuracy of prediction in clinical studies. The benefits and limitations of these models are shown in Table 3 and Figure 5. Researchers should choose models and experimental techniques that have been validated by extensive experiments, which is of great value in improving data accuracy. To date, in vivo models remain the most important models able to predict the metabolic fate of drugs. Among in vitro models, the classic cell culture model has remained the most widely used tool for many years due to its low cost and ease of use. In vitro test systems are generally preferred, with the aim of reducing the use of animals. However, taking into account the relative impact of many biological parameters on the metabolic fate of drugs in the human body, more sophisticated models are necessary. Researchers should focus on improving existing models and/or creating new biomimetic models to improve model accuracy. The emergence of in silico models has promoted the development of drug metabolism research techniques. In silico models have become an ideal aid for the evaluation of metabolic experiments due to their rapidity and high throughput. In silico models related to metabolism mainly include: 1) metabolism enzyme substrate and inhibitor classification models, 2) metabolic site prediction models, 3) metabolite prediction models, and 4) liver clearance prediction models. Considering the importance of models in studying drug metabolism, their further study is also of great significance for the development and improvement of future metabolic models. Only by mastering the key information about drug metabolism can researchers choose the appropriate experimental method according to their needs. Undoubtedly, this will improve the accuracy of experiments as well as reduce costs, both in time and money. In order to develop more effective clinical drugs, a robust and representative combination of models is urgently needed. It will help reduce the high attrition rate in the drug research and development process and lay the foundation for the smooth launch of new drugs.
Author contributions
LL: Conceptualization, Writing-original draft, Writing-review and editing. YL: Investigation, Writing-review and editing. XZ: Graphic design, Writing-review and editing. ZX: Investigation, Writing-review and editing. YZ: Software, Data curation. LJ: Methodology. CH: Writing-review and editing, Supervision. CL: Conceptualization, Funding acquisition, Project administration, Writing-review and editing, Supervision.
Funding
The authors gratefully acknowledge support from the National Natural Science Foundation of China (No. 81673839, 82074304).
Acknowledgments
The content is solely the responsibility of the authors and does not necessarily represent the official views of the Zhejiang Chinese Medical University. In addition, all authors approved the final version of the manuscript.
Conflict of interest
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Outcomes of corneal transplantation using donor corneas retrieved from patients with chronic kidney disease
Purpose: To report the outcomes of corneal transplantation utilizing corneas retrieved from donors with chronic kidney disease (CKD). Methods: Outcomes of corneal transplantations (optical PK and EK) performed from January 2018 to December 2018 utilizing donor corneas retrieved from CKD patients were reviewed retrospectively. Results: Of the total of 233 donor corneas retrieved from CKD donors, 135 (57.9%) were utilized for transplantation after the routine screening protocol of the eye bank. The mean age of the donors was 56.2 ± 13.5 years. The mean endothelial cell density on specular microscopy of the donor corneas used for optical PK was 2685.7 ± 377.6 cells/mm2 (range, 2028-3448 cells/mm2) and for EK was 2731.7 ± 189.1 cells/mm2 (range, 2380-3194 cells/mm2). The overall primary graft failure rate was 5.1%. All grafts except one cleared in the PK group. In the EK group (6 DMEK and 16 DSAEK), one patient had a complete graft detachment and another had a primary graft failure after DMEK. Conclusion: Donor corneas retrieved from chronic kidney disease patients are safe and suitable for optical keratoplasty provided they meet the criteria for transplantation.
Chronic kidney disease (CKD) is a spectrum of disease associated with a gradual decline of renal function. [1,2] The condition is classified into stages 1 to 5, where stage 5 is end-stage renal disease managed by renal replacement therapy such as hemodialysis or kidney transplantation. Patients with CKD can develop various ophthalmic complications such as cataracts, band-shaped keratopathy, calcification of the conjunctiva, renal retinopathy, and retinal detachment. [3,4] The most reasonable explanation for these changes is the breakdown of the homeostasis of body fluids. As aqueous humor is an extracellular fluid, it is presumed that metabolic abnormalities in the aqueous humor can affect the health of the corneal endothelium and lead to endothelial alterations. Many studies have reported endothelial abnormalities in patients with CKD. [5][6][7] These changes were reportedly more prominent in patients undergoing hemodialysis and in those with elevated blood urea levels.
The contraindications for donor corneal transplantation on the grounds of the donor's medical history are extensive. Despite reports of endothelial changes in CKD patients, corneas retrieved from these donors have been used for transplantation if the donor corneal parameters fulfilled the requirements for corneal transplantation. The screening of donor corneas using specular microscopy gives an assessment of the morphological and quantitative parameters of the endothelium. The functionality of the corneal endothelium of a transplanted cornea can only be assessed through the recovery of graft clarity after transplantation.
There are no studies on the outcomes of corneal transplantation from CKD donors.
Hence, the purpose of this study is to evaluate the outcomes of corneal transplantation using donor corneas retrieved from CKD patients that were deemed suitable for transplantation after routine donor cornea evaluation.
Methods
This is a retrospective observational study conducted at a tertiary eye care center. All the donor corneas retrieved by the eye bank affiliated to the institute from January 2018 to December 2018 were screened and those with CKD being the cause of death were analyzed. Those donors where diabetes mellitus was the underlying cause of CKD were excluded. Primary graft failure was defined as persistent graft edema at 2 months after keratoplasty.
Donor cornea selection criteria at the Eye Bank
The general criteria for selecting corneas for keratoplasty are an endothelial cell density above 2000 cells/mm2 for optical penetrating keratoplasty (PK) and above 2200 cells/mm2 for endothelial keratoplasty (EK), which includes both Descemet's stripping automated endothelial keratoplasty (DSAEK) and Descemet's membrane endothelial keratoplasty (DMEK). Corneas with an endothelial cell density between 1500 and 2000 cells/mm2 are often selected for use in anterior lamellar keratoplasty or therapeutic penetrating keratoplasty, where eradication of infection is the primary goal of surgery.
Inclusion criteria
• All the donor corneas retrieved from CKD patients that were labeled suitable and utilized for corneal transplantation (either PK or EK) at our institute, after eye bank screening protocol.
Exclusion criteria
• Donor corneas which did not meet the suitability criteria for transplantation
• Donor corneas which were suitable for transplantation but were utilized at other hospitals, excluded due to lack of access to the post-keratoplasty medical records of those patients
• Donor corneas that were utilized for anterior lamellar keratoplasty and therapeutic penetrating keratoplasty
• Donor corneas where the underlying cause of CKD was diabetes mellitus.
The software Origin 7.0 (OriginLab Corporation, Northampton, MA, USA) was used to perform the statistical analysis. Normality of the continuous data was evaluated using the Shapiro-Wilk test. Mean (± standard deviation) and median (along with inter-quartile range [IQR]) were used to describe the parametric and nonparametric data, respectively.
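For readers who prefer open tooling, the same descriptive workflow can be sketched in Python with SciPy and NumPy instead of Origin; the endothelial cell density values below are made-up placeholders, and the 0.05 normality threshold is an assumption.

```python
# Hedged sketch of the reported statistics; data values are placeholders.
import numpy as np
from scipy import stats

ecd = np.array([2685, 2731, 2380, 3194, 2028, 3448, 2550, 2900])  # cells/mm2

w, p = stats.shapiro(ecd)                    # Shapiro-Wilk normality test
if p > 0.05:                                 # parametric summary
    print(f"mean = {ecd.mean():.1f} +/- {ecd.std(ddof=1):.1f} cells/mm2")
else:                                        # nonparametric summary
    q1, med, q3 = np.percentile(ecd, [25, 50, 75])
    print(f"median = {med:.0f} cells/mm2 (IQR, {q1:.0f}-{q3:.0f})")
```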
Results
A total of 233 donor corneas were retrieved from CKD donors from January 2018 to December 2018, of which 135 (57.9%) were utilized for transplantation after the routine screening protocol of our eye bank. The remaining 98 donor corneas could not be used for the following reasons: seropositive donors (n = 13), intraoperative reasons for non-utilization (n = 4), donor tissue earmarked for therapeutic keratoplasty (endothelial cell density <2000 cells/mm2) but not used (n = 51), and poor tissue quality for any kind of transplant (n = 30).
Of the 135 corneas, 61 donor corneas were utilized at the institute. The surgeries were performed by faculty and senior cornea fellows with adequate experience in performing keratoplasty. Of these 61 corneas, 40 were utilized for optical PK (n = 18) and EK (n = 22). Of the 22 EK, 6 were DMEK and 16 were DSAEK. The remaining 21 donor corneas were used for anterior lamellar keratoplasty (n = 5), therapeutic keratoplasty (n = 12), and keratoprosthesis or patch graft (n = 4) and, hence, were excluded from outcome analysis. Table 1 summarizes the baseline parameters and outcomes of the PK and EK group.
Recipient characteristics
The median age of recipients that had PK was 47.5 years (IQR, 35-61 years) and of those that had EK was 64 years (IQR, 56-68 years). The indications for PK were corneal scar (10 patients), failed prior PK (5 patients), disorganized anterior segment with anterior staphyloma (n = 1), anterior segment dysgenesis (n = 1), and primary congenital glaucoma (n = 1). Of the 18 patients that had PK, 11 had an additional procedure at the time of keratoplasty (amniotic membrane grafting in 2 patients, tarsorrhaphy in 3 patients, cataract surgery in 3 patients, anterior segment reconstruction in 2 eyes, and anterior vitrectomy in 1 eye). The indications for EK included pseudophakic bullous keratopathy.
Outcomes of keratoplasty
All grafts except one cleared at 1 month in the PK group. The indication for surgery in the patient who had primary graft failure was a disorganized anterior segment with anterior staphyloma. Three patients developed secondary graft failure following an episode of rejection, at 3 months in 2 patients and at 1 year in 1 patient.
In the EK group (6 DMEK and 16 DSAEK), one patient had a complete graft detachment and another had a primary graft failure after DMEK. In the case of primary graft failure after DMEK, a difficult unfolding was documented in the operative notes. Two patients had a secondary graft failure, following an episode of rejection at 6 months in one patient and microbial keratitis at 3 months in the other. The patient who had rejection had undergone DSAEK under a previous therapeutic penetrating keratoplasty. None of the patients had any acute infective episode following keratoplasty.
The median visual acuity in the PK group at 3 months was logMAR 1.05 (n = 11) and logMAR 1.30 at 6 months (n = 8). The median visual acuity in EK group was logMAR 0.39 at 3 and 6 months (n = 17).
Discussion
In most centers of the world, the demand for donor corneas far exceeds the supply. Donor corneas obtained from patients with a history of chronic medical conditions are harvested and, if deemed suitable after routine evaluation at the eye bank, are utilized for transplantation. There are many reports of endothelial affliction in patients with CKD. Although there are no reports of clinical corneal edema in patients with CKD, most studies have observed morphological abnormalities in the corneal endothelium of these patients. Ohguro et al. reported increased polymegathism and pleomorphism despite a normal endothelial cell density in patients with chronic renal failure. [6] Other authors have found that the endothelial alterations are more marked in those patients who had hemodialysis. [7] During routine donor cornea evaluation in the eye bank, we have observed some instances where both corneas procured from a CKD patient had low to borderline endothelial cell density. However, corneas procured from donors suffering from CKD more often meet the parameters needed for optical keratoplasty and, hence, were considered for use in optical keratoplasty. The purpose of this study was to evaluate the clinical outcomes of optical keratoplasty using donor corneas harvested from patients with CKD.
Of the total of 233 corneas procured from donors with CKD, 102 (43.8%) did not meet the criteria for optical keratoplasty. Among those that fulfilled the criteria for utilization in optical keratoplasty (PK and EK) on the basis of endothelial evaluation, all grafts except two (95%) cleared after keratoplasty. In the two eyes that had a primary graft failure (2/39, 5.1%; one after PK and another after DMEK), preoperative and intraoperative/iatrogenic factors could not be ruled out as contributing factors.
We could not evaluate an association of donor endothelial health with the stage of CKD and hemodialysis, as the data were collected retrospectively and there were inherent limitations in obtaining a clear history of the exact management of CKD patients with hemodialysis from the donor documents maintained at the eye bank. The other limitations of the study are the limited follow-up duration after keratoplasty and the lack of postoperative endothelial cell density measurements in most of the patients, which would have been useful in correlating the endothelial cell loss after keratoplasty with these donors.
Conclusion
In conclusion, even though literature suggests a reduction in endothelial health parameters in CKD patients, the donor corneas from these patients can be utilized for optical penetrating and endothelial keratoplasty provided they meet the evaluation criteria.
SparkBWA: Speeding Up the Alignment of High-Throughput DNA Sequencing Data
Next-generation sequencing (NGS) technologies have led to a huge amount of genomic data that need to be analyzed and interpreted. This fact has a huge impact on the DNA sequence alignment process, which nowadays requires the mapping of billions of small DNA sequences onto a reference genome. In this way, sequence alignment remains the most time-consuming stage in the sequence analysis workflow. To deal with this issue, state-of-the-art aligners take advantage of parallelization strategies. However, the existing solutions show limited scalability and have a complex implementation. In this work we introduce SparkBWA, a new tool that exploits the capabilities of a big data technology such as Spark to boost the performance of one of the most widely adopted aligners, the Burrows-Wheeler Aligner (BWA). The design of SparkBWA uses two independent software layers in such a way that no modifications to the original BWA source code are required, which assures its compatibility with any BWA version (future or legacy). SparkBWA is evaluated in different scenarios, showing noticeable results in terms of performance and scalability. A comparison to other parallel BWA-based aligners validates the benefits of our approach. Finally, an intuitive and flexible API is provided to NGS professionals in order to facilitate the acceptance and adoption of the new tool. The source code of the software described in this paper is publicly available at https://github.com/citiususc/SparkBWA, with a GPL3 license.
Introduction
The history of modern DNA sequencing starts more than thirty-five years ago. These years have seen amazing growth in DNA sequencing capacity and speed, especially after the appearance of next-generation sequencing (NGS) and massively parallel sequencing in general. NGS has led to an unparalleled explosion in the amount of sequencing data available. For instance, new sequencing technologies, such as Illumina HiSeqX™ Ten, generate up to 6 billion sequence reads per run. Mapping these data onto a reference genome is often the first step in the sequence analysis workflow. This process is very time-consuming and, although state-of-the-art aligners were developed to efficiently deal with large amounts of DNA sequences, the alignment process still remains a bottleneck in bioinformatics analyses. In addition, NGS platforms are evolving very quickly, pushing sequencing capacity to unprecedented levels.
To address this challenge we propose to take advantage of parallel architectures using big data technologies in order to boost performance and improve the scalability of sequence aligners. In this way, it will be possible to process huge amounts of sequencing data within a reasonable time. In particular, Apache Spark [1] has been considered as the big data framework in this work. Spark is a cluster computing framework which supports both in-memory and on-disk computations in a fault-tolerant manner using distributed memory abstractions known as Resilient Distributed Datasets (RDDs). An RDD can be explicitly cached in memory across cluster nodes and reused in multiple MapReduce-like parallel operations.
In this paper we introduce SparkBWA, a new tool that integrates the Burrows-Wheeler Aligner (BWA) [2] into the Spark framework. BWA is one of the most widely used alignment tools for mapping sequence reads to a large reference genome. It consists of three different algorithms for aligning short reads. SparkBWA was designed to meet three requirements. First, SparkBWA should outperform BWA and other BWA-based aligners both in terms of performance and scalability. Note that BWA has its own parallel implementation for shared-memory systems. The second requirement is to keep SparkBWA compatible with future and legacy versions of BWA. Since BWA is constantly evolving to include new functionalities and algorithms, it is important for SparkBWA to be agnostic regarding the BWA version. This is an important difference with respect to other existing tools based on BWA, which require modifications of the BWA source code. Finally, NGS professionals demand solutions to perform sequence alignments efficiently in such a way that the implementation details are completely hidden from them. For this reason SparkBWA provides a simple and flexible API to handle all the aspects related to the alignment process. In this way, bioinformaticians only need to focus on the scientific problem at hand.
SparkBWA has been evaluated both in terms of performance and memory consumption, and a thorough comparison between SparkBWA and several state-of-the-art BWA-based aligners is also provided. Those tools take advantage of different parallel approaches such as Pthreads, MPI, and Hadoop to improve the performance of BWA. Performance results demonstrate the benefits of our proposal.
This work is structured as follows: Section 2 explains the background of the paper. Section 3 discusses the related work. Section 4 details the design of SparkBWA and introduces its API. Section 5 presents the experiments carried out to evaluate the behavior and performance of our proposal together with a comparison to other BWA-based tools. Finally, the main conclusions derived from the work are explained in Section 6.
MapReduce programming model
MapReduce [3] is a programming model introduced by Google for processing and generating large data sets on a huge number of computing nodes. A MapReduce program execution is divided into two phases: map and reduce. In this model, the input and output of a MapReduce computation is a list of key-value pairs. Users only need to focus on implementing map and reduce functions. In the map phase, map workers take as input a list of key-value pairs and generate a set of intermediate output key-value pairs, which are stored in the intermediate storage (i.e., files or in-memory buffers). The reduce function processes each intermediate key and its associated list of values to produce a final dataset of key-value pairs. In this way, map workers achieve data parallelism, while reduce workers perform parallel reduction. Note that parallelization, resource management, fault tolerance and other related issues are handled by the MapReduce runtime.
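As a toy illustration of the map/reduce contract just described (independent of Hadoop), the following Python sketch counts words in a few input key-value pairs; the shuffle/sort between the two phases is emulated with an in-memory sort.

```python
# Single-process word count following the MapReduce contract:
# map: (key, value) -> list of intermediate (key, value) pairs
# reduce: (key, [values]) -> one output (key, value) pair
from itertools import groupby
from operator import itemgetter

def map_fn(offset, line):
    return [(word, 1) for word in line.split()]

def reduce_fn(word, counts):
    return (word, sum(counts))

inputs = [(0, "to be or not"), (1, "to be")]
intermediate = sorted(pair for k, v in inputs for pair in map_fn(k, v))
output = [reduce_fn(word, [c for _, c in group])
          for word, group in groupby(intermediate, key=itemgetter(0))]
print(output)   # [('be', 2), ('not', 1), ('or', 1), ('to', 2)]
```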
Apache Hadoop [4] is the most successful open-source implementation of the MapReduce programming model. Hadoop consists, basically, of three layers: a data storage layer (HDFS-Hadoop Distributed File System [5]), a resource manager layer (YARN-Yet Another Resource Negotiator [6]), and a data processing layer (Hadoop MapReduce Framework). HDFS is a block-oriented file system based on the idea that the most efficient data processing pattern is a write-once, read-many-times pattern. For this reason, Hadoop shows good performance with embarrassingly parallel applications requiring a single MapReduce execution (assuming intermediate results between map and reduce phases are not huge), and even for applications requiring a small number of sequential MapReduce executions [7]. Note that Hadoop can also efficiently handle jobs composed of one or more map functions by chaining several mappers followed by a reducer function and, optionally, zero or more map functions, saving the disk I/O cost between map phases. For more complex workflows, solutions such as Apache Oozie [8] or Cascading [9], among others, should be used.
The main disadvantage of these workflow managers is the loss of performance when HDFS has to be used to store intermediate data. For example, an iterative algorithm can be expressed as a sequence of multiple MapReduce jobs. Since different MapReduce jobs cannot share data directly, intermediate results have to be written to disk and read again from HDFS at the beginning of the next iteration, with the consequent reduction in performance. It is worth noting that each iteration of the algorithm could even consist of one or several MapReduce executions. In this case, the degradation in terms of performance is even more noticeable.
Apache Spark
Apache Spark is a cluster computing framework designed to overcome the Hadoop limitations in order to support iterative jobs and interactive analytics, originally developed at the University of California, Berkeley [1], and now managed under the umbrella of the Apache Software Foundation. Spark uses a master/slave architecture with one central coordinator (driver) and many distributed workers (executors). It supports both in-memory and on-disk computations in a fault-tolerant manner by introducing the idea of Resilient Distributed Datasets (RDDs) [10]. An RDD represents a read-only collection of objects partitioned across the cluster nodes that can be rebuilt if a partition is lost. Users can explicitly cache an RDD in memory across machines and reuse it in multiple MapReduce-like parallel operations. By using RDDs, programmers can perform iterative operations on their data without writing intermediary results to disk. In this way, Spark is well-suited, for example, to machine learning algorithms.
RDDs can be created by distributing a collection of objects (e.g., a list or set) or by loading an external dataset from any storage source supported by Hadoop, including the local file system, HDFS, Cassandra [11], HBase [12], Parquet [13], etc. On these RDDs, Spark supports two types of parallel operations: transformations and actions. Transformations are operations on RDDs that return a new RDD, such as map, filter, join, groupByKey, etc. The resulting RDD will be stored in memory by default, but Spark also supports the option of writing RDDs to disk whenever necessary. On the other hand, actions are operations that kick off a computation, returning a result to the driver program or writing it to storage. Examples are collect, count, take, etc. Note that transformations on RDDs are lazily evaluated, meaning that Spark will not begin to execute until it sees an action.
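The transformation/action distinction can be seen in a few lines of PySpark (the Python API mentioned below); this is a minimal local-mode sketch, not SparkBWA code.

```python
# Minimal PySpark sketch: transformations are lazy, actions trigger execution.
from pyspark import SparkContext

sc = SparkContext("local[2]", "rdd-demo")

nums = sc.parallelize(range(1, 1001), 4)        # RDD from a collection, 4 partitions
squares = nums.map(lambda x: x * x)             # transformation: nothing runs yet
even_sq = squares.filter(lambda x: x % 2 == 0)  # another lazy transformation
even_sq.cache()                                 # keep the result in memory

print(even_sq.count())   # action: the whole lineage executes now -> 500
print(even_sq.take(3))   # reuses the cached partitions -> [4, 16, 36]
sc.stop()
```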
A Spark application, at a high level, consists of a driver program which contains the application's main function and defines RDDs on the cluster, then applies transformations and actions to them. A Spark program implicitly creates, from defined transformations and actions over RDDs, a logical directed acyclic graph (DAG) of operations, which is converted by the driver into a physical execution plan. This plan is then optimized, e.g., merging several map transformations, and individual tasks are bundled up and prepared to be sent to the cluster. The driver connects to the cluster through a SparkContext. An executor or worker process is in charge of effectively running the tasks on each node of the cluster.
Apache Spark provides both Python and Scala interactive shells, which let the user interact with data that is distributed on disk or in memory across many machines. Apart from running interactively, Spark can also be linked into applications in either Java, Scala, or Python. Finally, we must highlight that Spark can run in local mode, in standalone mode on a cluster, or using a cluster manager such as Mesos [14] or YARN [6].
Burrows-Wheeler aligner (BWA)
Burrows-Wheeler Aligner (BWA) is a very popular open-source software package for mapping sequence reads to a large reference genome. In particular, it consists of three different algorithms: BWA-backtrack [2], BWA-SW [15] and BWA-MEM [16]. The first algorithm is designed for short Illumina sequence reads up to 100bp (base pairs), while the others are focused on longer reads. BWA-MEM, the most recent, is preferred over BWA-SW for 70bp or longer reads as it is faster and more accurate. In addition, BWA-MEM has shown better performance than several other state-of-the-art read aligners for mapping 100bp or longer reads.
As we have previously noted, sequence alignment is a very time-consuming process. For this reason BWA has its own parallel implementation, but it only supports shared-memory machines. Therefore, scalability is limited by the number of threads (cores) and the memory available in just one computing node.
Although BWA can read unaligned BAM [17] files, it typically accepts the FASTQ format [18] as input, which is one of the most common output formats for raw sequence reads. It is a plain text format in which every four lines describe a sequence or read. An example including two reads is shown in Fig 1. The information provided per read is: identifier (first line), sequence (second line), and the quality score of the read (fourth line). An extra field, represented by the symbol '+', is used as a separator between the data and the quality information (third line). BWA is able to use single-end reads (one input FASTQ file) and paired-end reads (two input FASTQ files). When considering paired-end reads, two sequences corresponding to both ends of the same DNA fragment are available. Both reads are included in different input files using the same identifier and in the same relative location within the files. In this way, considering our example, the corresponding pair of sequence #2 will be located in line 5 of the other input file. On the other hand, the output of BWA is a SAM (Sequence Alignment/Map) [17] file, which is the standard format for storing read alignments against reference sequences. This SAM file will subsequently be required, for example, for performing variant discovery analysis.
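To illustrate the 4-line record layout, a simple Python generator can turn a FASTQ file into the <read_id, read_content> pairs that SparkBWA builds its RDDs from; the file name is a placeholder and error handling is omitted.

```python
# Hedged sketch: stream (identifier, (sequence, quality)) pairs from FASTQ.
def fastq_records(path):
    with open(path) as f:
        while True:
            header = f.readline().rstrip()
            if not header:                 # end of file
                return
            seq = f.readline().rstrip()    # second line: the read itself
            f.readline()                   # third line: '+' separator, ignored
            qual = f.readline().rstrip()   # fourth line: quality scores
            yield header.lstrip("@"), (seq, qual)

# Example usage (placeholder file name):
# for read_id, (seq, qual) in fastq_records("sample_1.fastq"):
#     print(read_id, seq)
```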
Related Work
We can find in the literature several interesting tools based on the Burrows-Wheeler aligner which exploit parallel and distributed architectures to increase the BWA performance. Some of these works are focused on big data technologies like SparkBWA, but they are all based on Hadoop. Examples are BigBWA [19], Halvade [20] and SEAL [21]. BigBWA is a recent sequence alignment tool developed by the authors which shows good performance and scalability results with respect to other BWA-based approaches. Its main advantage is that it does not require any modification of the original BWA source code. This characteristic is shared by SparkBWA in such a way that both tools keep the compatibility with future and legacy BWA versions.
SEAL uses Pydoop [22], a Python implementation of the MapReduce programming model that runs on the top of Hadoop. It allows users to write their programs in Python, calling BWA methods by means of a wrapper. SEAL only works with a particular modified version of BWA. Since SEAL is based on BWA version 0.5, it does not support the new BWA-MEM algorithm for longer reads.
Halvade is also based on Hadoop. It includes a variant detection phase which is the next stage after the sequence alignment in the DNA sequencing workflow. Halvade calls BWA from the mappers as an external process which may cause timeouts during the Hadoop execution if the task timeout parameter is not adequately configured. Therefore, a priori knowledge about the execution time of the application is required. Note that setting the timeout parameter to high values causes problems in the detection of actual timeouts, which reduces the efficiency of the fault tolerance mechanisms of Hadoop. To overcome this issue, as it is explained in further sections, SparkBWA uses Java Native Interface (JNI) to call the BWA methods.
Another approach is applying standard parallel programming paradigms to BWA. For instance, pBWA [23] uses MPI to parallelize BWA in order to carry out the alignments on a cluster. We must highlight that pBWA lacks fault tolerant mechanisms in contrast to SparkBWA. In addition, pBWA, as well as SEAL, does not support the BWA-MEM algorithm.
Several solutions try to take advantage of the computing power of GPUs to improve the performance of BWA. This is the case of BarraCUDA [24], which is based on the CUDA programming model. It requires the modification of the BWT (Burrows-Wheeler Transform) alignment core of BWA to exploit the massive parallelism of GPUs. Unlike SparkBWA, which supports all the algorithms included in BWA, BarraCUDA only supports the BWA-backtrack algorithm for short reads. It shows improvements up to 2× with respect to the threaded version of BWA. It is worth mentioning that, due to some changes in the BWT data structure of the most recent versions of BWA, BarraCUDA is only compatible with BWTs generated with BWA versions 0.5.x. Other important sequence aligners (not based on BWA) that make use of GPUs are CUSHAW [25], SOAP3 [26] and SOAP3-dp [27].
Some researchers have focused on speeding up the alignment process using the new Intel Xeon Phi coprocessor (Intel Many Integrated Core architecture, MIC). For example, mBWA [28], which is based on BWA, implements the BWA-backtrack algorithm for the Xeon Phi coprocessor. mBWA allows the host CPU and the coprocessor to be used concurrently to perform the alignment, reaching speedups of 5× with respect to BWA. Another solution for MIC coprocessors can be found in [29]. A third aligner that takes advantage of the MIC architecture is MICA [30]. Its authors claim that it is 5× faster than threaded BWA using 6 cores. Note that, unlike SparkBWA, this tool is not based on BWA.
Other researchers exploit fine-grained parallelism in FPGAs (Field Programmable Gate Arrays) to increase the performance of several short-read aligners, including some based on the BWT [31][32][33].
Finally, a recent work uses Spark to increase the performance of one of the best-known alignment algorithms, the Smith-Waterman algorithm [34]. Performance results demonstrate the potential of Spark as a framework for this type of application.
SparkBWA
This section introduces a new tool called SparkBWA, which integrates the Burrows-Wheeler Aligner into the Spark framework. As stated in the Introduction, SparkBWA was designed with the following three objectives in mind:
• It should boost BWA and other aligners based on BWA in terms of performance and scalability.
• It should be version-agnostic regarding BWA, which assures its compatibility with future or legacy BWA versions.
• An intuitive and flexible API should be provided to NGS professionals with the aim of facilitating the acceptance and adoption of the new tool.
Next, a detailed description of the design and implementation of SparkBWA is provided, together with the specification of the high-level API.
System design
The SparkBWA workflow consists of three main stages: RDD creation, map, and reduce phases. In the first phase, input data are prepared to feed the map phase, where the alignment process is, strictly speaking, carried out. In particular, RDDs are created from the FASTQ input files, which are stored using HDFS. Note that, in this work, we assume HDFS as the distributed file system. In this way, data are distributed across the computing nodes so they can be processed in parallel in the map phase. The read identifier in the FASTQ file format is used as the key in the RDDs (see the example of Fig 1). In this way, key-value pairs generated from an input file have the following appearance: <read_id, read_content>, where read_content contains all the information of the corresponding sequence with identifier read_id. These RDDs will be used afterwards in the map phase. This approach works properly when considering single-end reads, that is, when there is only one FASTQ input file.
However, SparkBWA should also support paired-end reads. In that case, two RDDs will be created, one per input file, and distributed among the nodes. Spark distributes RDDs in such a way that it is not guaranteed that the i-th data split (partition) of both RDDs will be processed by the same mapper. In this way, a mapper cannot pair the reads, even though mates always occupy the same i-th data partition of both RDDs. This behavior can be observed in the RDD creation stage of the example displayed in Fig 2(a). Two solutions are proposed in order to overcome this issue:
• Join: This approach is based on using the Spark join operation, which is a transformation that merges two RDDs together by grouping elements with the same key. This solution is illustrated in Fig 2(a). Since the key is the same for paired reads in both input files, the result after the join operation will be a single RDD with the format <read_id, Tuple<read_content1, read_content2>> (RDD UNSORTED in the example). The resulting RDD after the join operation does not preserve the previous order of the reads from the FASTQ files. This is not a problem because mappers will process the paired-end reads independently from each other. However, Spark provides the sortByKey transformation to sort RDD records according to their key. In the example, the new RDD created after applying this operation is RDD SORTED. We must highlight that the sortByKey operation is expensive in terms of memory consumption.
For this reason, this step is optional in the SparkBWA dataflow, and users should enable it explicitly if they want a sorted output (a short sketch of this approach follows the list).
• SortHDFS: A new approach is presented in order to avoid the join and sortByKey operations (see Fig 2(b)). This solution can be considered a preprocessing stage that requires reading and writing to/from HDFS. In this way, the FASTQ input files are accessed directly using the HDFS Hadoop library from the Spark driver program. Paired-end reads (that is, those with the same identifier in the two files) are merged into one record in a new HDFS file. As BWA needs to distinguish between the two sequences in a pair, a separator string is used to facilitate the subsequent parsing process in the mappers. Afterwards, an RDD is created from the new file (RDD SORTED in the figure). In this way, key-value pairs have the following format: <read_id, merged_content>. This solution performs several time-consuming I/O operations, but saves a lot of memory in comparison to the join & sortByKey approach, as we illustrate in Section 5.
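As a rough sketch of the Join approach (assuming rdd1 and rdd2 already hold the <read_id, read_content> pairs built from each FASTQ file, as above), the whole approach reduces to two standard Spark transformations:

    // Join approach for paired-end reads (sketch; variable names are ours).
    // rdd1, rdd2: RDD[(String, String)] holding <read_id, read_content> pairs.
    val joined = rdd1.join(rdd2)      // <read_id, (read_content1, read_content2)>, i.e., RDD UNSORTED
    val sorted = joined.sortByKey()   // optional, memory-hungry step yielding RDD SORTED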
Once the RDDs are available, the map phase starts. Mappers apply the sequence alignment algorithm from BWA to the RDDs. However, calling BWA from Spark is not straightforward, as the BWA source code is written in the C language and Spark only runs code written in Scala, Java or Python. To overcome this issue, SparkBWA takes advantage of the Java Native Interface (JNI), which allows native code written in languages such as C and C++ to be combined with Java code.
The map phase was designed using two independent software layers. The first one corresponds to the BWA software package, while the other is responsible for processing the RDDs, passing the input data to the BWA layer, and collecting the partial results from the map workers. We must highlight that mappers only perform calls to the BWA main function by means of JNI. This design avoids any modification of the original BWA source code, which assures the compatibility of SparkBWA with future or legacy BWA versions. In this way, our tool is version-agnostic regarding BWA. Note that this approach is similar to the one adopted in the BigBWA tool [19]. Another advantage of the two-layer design is that the alignment process can be performed using two levels of parallelism. The first level corresponds to the map processes distributed across the cluster. In the second level, each individual map process is parallelized using several threads, taking advantage of the BWA parallel implementation for shared-memory machines. We refer to this mode of operation as the hybrid mode. This mode can be enabled by the user through the SparkBWA API.
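The bridge between the two layers can be pictured with the following Scala sketch; the class name, method name and native library name are our assumptions for illustration, not the actual SparkBWA symbols.

    // Schematic JNI bridge to the unmodified BWA C code (names are assumptions).
    class BwaBridge {
      System.loadLibrary("bwa")   // native library built from the BWA sources (repeat loads are no-ops)

      // The body of an @native method is discarded by the compiler; at run time
      // the JVM dispatches this call to the C side, which forwards argv to BWA's main().
      @native def bwaMain(args: Array[String]): Int = ???
    }

    // A mapper could then run BWA on its data partition, for example:
    //   new BwaBridge().bwaMain(Array("mem", "-t", "4", indexPrefix, chunkPath))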
On the other hand, BWA uses a reference genome as input in addition to the FASTQ files. All mappers require the complete reference genome, so it has to be shared among all computing nodes using NFS, stored locally at the same path on every node, or distributed to the workers (e.g., using Spark broadcast variables).
Once the map phase is complete, SparkBWA creates one output SAM file in HDFS per launched map process. Finally, users can merge all the outputs into a single file by choosing to execute an additional reduce phase (one possible realization of this merging step is sketched below).
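For instance, the per-mapper SAM files could be concatenated with the standard Hadoop copyMerge utility, as in the hedged sketch below; this is just one possible way to realize the optional merging step, not necessarily how SparkBWA implements its reduce phase, and the paths are hypothetical.

    import org.apache.hadoop.conf.Configuration
    import org.apache.hadoop.fs.{FileSystem, FileUtil, Path}

    // Merge all partial SAM outputs into a single HDFS file (illustrative).
    val conf = new Configuration()
    val fs   = FileSystem.get(conf)
    FileUtil.copyMerge(
      fs, new Path("hdfs:///results/sam_parts"),    // directory holding per-mapper SAM files
      fs, new Path("hdfs:///results/output.sam"),   // single merged output file
      false,                                        // keep the source files
      conf, null)                                   // no separator string between files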
SparkBWA API
One of the requirements of SparkBWA is to provide bioinformaticians with an easy and powerful way to perform sequence alignments using a big data technology such as Apache Spark. With this goal in mind, a basic API is provided. It allows NGS professionals to focus only on the scientific problem, while the design and implementation details of SparkBWA remain completely transparent to them.
SparkBWA can be used from the Spark shell (Scala) or console. Table 1 summarizes the API methods to set the SparkBWA options in the shell together with their corresponding console arguments. For example, it is possible to choose the number of data partitions, how RDDs are created, or the number of threads used per mapper (hybrid mode).
Table 1. SparkBWA options available through the shell API and as console arguments:

• -r (default: False): use a reducer to generate one output SAM file.
• -partitions <num> (default: Auto): by default, data is split into pieces of HDFS block size; otherwise, input data is split into num partitions.
• setNumThreads(int) / -threads <num> (default: 1): if num > 1, hybrid parallelism mode is enabled in such a way that each map process is executed using num threads.
• setIndexPath(string) / -index <prefix>: set the path to the reference genome (mandatory option).
• setInputPath(string) / positional argument: set the path (in HDFS) to the FASTQ input file (mandatory option for single-end and paired-end reads).
• setInputPath2(string) / positional argument: set the path (in HDFS) to the second FASTQ input file (mandatory option for paired-end reads).
• setOutputPath(string) / positional argument: set the location (in HDFS) where the output SAM file(s) will be stored.

1. Spark Shell: Spark comes with an interactive shell that provides a simple way to learn the Spark API, as well as a powerful tool to analyze data interactively. It is available in either Scala (which runs on the Java VM and is thus a good way to use existing Java libraries) or Python. The current SparkBWA version only supports the Scala shell. An example of how to perform an alignment using SparkBWA from the Spark shell is illustrated in Fig 3. First, the user should create a BwaOptions object to specify the options desired in order to execute SparkBWA (line 1). In this example only the mandatory options are set (lines 3-7). Refer to Table 1 for additional options.
Once the options are specified, a new BwaInterpreter should be created (line 9). At that moment, RDDs are created from the input files according to the implementation detailed previously in Section 4.1. It is worth mentioning that the RDD creation is lazily evaluated, which means that Spark will not begin to execute until an action is called. This action could be, for example, explicitly obtaining the input RDD using the getDataRDD method (line 10). This method is very useful in the sense that it allows users to apply to the input RDDs all the transformations and actions that the Spark API provides, in addition to user-defined functions. Note that calling the getDataRDD method is not necessary to perform the sequence alignment with SparkBWA. Another action that triggers the RDD creation is runAlignment, which executes the complete SparkBWA workflow including the map and reduce phases (line 11).
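Putting these calls together, a run from the Scala shell looks roughly like the sketch below, which mirrors the structure of Fig 3; the exact constructor signatures (for instance, whether BwaInterpreter receives the SparkContext) are assumptions on our side, and all paths are hypothetical.

    // Sketch of the Fig 3 workflow (constructor arguments are assumptions).
    val options = new BwaOptions()
    options.setIndexPath("/ref/hg38")                      // reference genome index (mandatory)
    options.setInputPath("hdfs:///data/sample_1.fastq")    // first FASTQ file (mandatory)
    options.setInputPath2("hdfs:///data/sample_2.fastq")   // second FASTQ file (paired-end)
    options.setOutputPath("hdfs:///results")               // where the SAM output is written

    val interpreter = new BwaInterpreter(options, sc)      // RDD creation is lazily evaluated
    val inputRDD    = interpreter.getDataRDD()             // optional: inspect/transform the input
    interpreter.runAlignment()                             // triggers the map (and reduce) phases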
2. Console: It is also possible to run SparkBWA from the console, that is, using the spark-submit command. An example is shown in Fig 4. spark-submit provides a variety of options that let the user control specific details about a particular run of an application (lines 2-6). In our case, the user also needs to pass the SparkBWA options as arguments to Spark (lines 7-11). All the flags supported by SparkBWA are detailed in Table 1. Therefore, SparkBWA provides an easy and flexible interface, in such a way that users can perform a sequence alignment by writing just a couple of lines of code in the Spark shell, or by using the standard spark-submit tool from the console.
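As an illustration, a console run could look like the following; the class and jar names and the resource flags are placeholders, while the SparkBWA-specific flags follow Table 1.

    spark-submit --class SparkBWA --master yarn \
      --num-executors 32 --executor-memory 11g \
      SparkBWA.jar \
      -index /ref/hg38 -threads 1 -r \
      sample_1.fastq sample_2.fastq output_folder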
Evaluation
In this section SparkBWA is evaluated in terms of performance, scalability, and memory consumption. First, a complete description of the experimental setup is provided. Next, SparkBWA is analyzed in detail paying special attention to the creation of RDDs and its different modes of operation (regular and hybrid). Finally, in order to validate our proposal, a comparison to several BWA-based aligners is also provided.
Experimental Setup
SparkBWA was tested using data from the 1000 Genomes Project [35]. The main characteristics of the input datasets are shown in Table 2. Number of reads refers to the number of sequences to be aligned to the reference genome. The read length is expressed in terms of the number of base pairs (bp).
As the alignment can be performed for single-end or paired-end reads, it is necessary to determine which one will be used during the evaluation. Since paired-end DNA sequencing reads provide superior alignment across DNA regions containing repetitive sequences, paired-end reads are considered in this work. In this way, each dataset consists of two FASTQ files.
Experiments were carried out on a six-node cluster. Each node consists of four AMD Opteron 6262HE processors (4×16 cores) with 256 GiB of memory (i.e., 4 GiB per core). Nodes are connected through a 10GbE network. The Hadoop and Spark versions used are 2.7.1 and 1.5.2, respectively, running on a CentOS 6.7 platform. OpenMPI 4.4.7 was used in the experiments that require MPI. The cluster was configured by assigning about 11 GiB of memory per YARN container (map and reduce processes), in such a way that a maximum of 22 containers per node can be executed concurrently. This memory configuration allows each SparkBWA container to execute one BWA process, including the memory required to store the reference genome index. Note that the master node in the cluster is also used as a computing node.
The behavior of SparkBWA is compared to several state-of-the-art BWA-based aligners. In particular, we have considered the tools detailed in Table 3. A brief description of these tools is provided in Section 3. pBWA and SEAL only support the BWA-backtrack algorithm because both are based on BWA version 0.5 (2009). For a fair comparison with these tools, SparkBWA obtains its performance results for the BWA-backtrack algorithm also using BWA version 0.5. In the case of BWA-MEM, three different aligners are evaluated: BigBWA, Halvade and BWA (shared-memory threaded version). For the BWA-MEM performance evaluation, the latest BWA version available at the time of writing is used (version 0.7.12, December 2014). We must highlight that all the time results shown in this section were calculated as the average value (arithmetic mean) of twenty executions.
Performance Evaluation
5.2.1 RDDs creation. The first stage in the SparkBWA workflow is the creation of the RDDs, which can include a sorting phase (see Section 4.1). Two different approaches were considered to implement this phase: Join and SortHDFS. The first one is based on the Spark join operation, and includes an additional optional step to sort the input paired-end reads by key (sortByKey operation). The latter approach requires reading and writing to/from HDFS. As we pointed out previously, this solution can be considered as a preprocessing stage. Both solutions have been evaluated in terms of the overhead considering different datasets. Results are displayed in Fig 5. The performance of the Join approach (with and without the sortByKey transformation) depends on the number of map processes, so this operation was evaluated using 32 and 128 mappers. As the number of mappers increases, the sorting time improves because the size of the data splits computed by each worker is smaller. This behavior was observed for all the datasets, especially when D3 is considered. The overhead for all the approaches, as it was expected, increases with the size of the dataset. However, the increment rate is higher for SortHDFS. For example, sorting D3 is 10× slower than sorting D1, while the Join approach with and without sortByKey is at most only 5× and 7× slower respectively. Note that D3 is more than 14× bigger than D1 (see Table 2).
The Join approach is always better in terms of overhead, especially as the number of map processes increases. For example, sorting D3 takes only 1.5 minutes with 128 mappers (join only), which means a speedup of 8.7× with respect to SortHDFS. It can also be observed that sorting the RDDs by key consumes extra time. In particular, sorting by key on average doubles the time required by the sorting process with respect to performing only the join transformation.
On the other hand, speed is not the only parameter that should be taken into account when performing the RDDs sorting. In this way, memory consumption has also been analyzed. In order to illustrate the behavior of both sorting approaches we have considered D3 as dataset. Fig 6 shows the memory used by a map process during the sorting operation period.
According to the results, the Join approach always consumes more memory than SortHDFS. This is caused by the join and sortByKey Spark operations on the RDDs, both of which are in-memory transformations. The differences observed when the elements of the RDDs are sorted by key, with respect to applying only the join operation, are especially relevant. In this way, the sortByKey operation consumes about 3 GiB extra per mapper for this dataset, which means increasing by more than 30% the memory required by SparkBWA in this phase. Note that when considering 32 workers the maximum memory available per container is reached. The memory used by 128 workers is lower because RDDs are split into smaller pieces with respect to considering 32 workers. On the other hand, SortHDFS requires a maximum of 4 GiB to preprocess the dataset in the example. In this way, SortHDFS is the best choice if the memory resources are limited or not enough to perform the Join operation (with or without sortByKey). Note that the overall behavior illustrated in Fig 6 agrees with the observations for the other datasets.
5.2.2 Hybrid mode. As stated in Section 4.1, the design of SparkBWA in two software layers makes it possible to use several threads per worker, in such a way that the alignment process takes advantage of two levels of parallelism. In this way, SparkBWA has two modes of operation: regular and hybrid. The hybrid mode refers to using more than one thread per map process, while in the regular mode each mapper executes sequentially.
The memory used by each mapper when hybrid mode is enabled increases with the number of threads involved in the computation. However, since the reference genome index required by BWA is shared among threads, this increase is moderate. This behavior is illustrated in Fig 7, where BWA-MEM is executed using different numbers of threads with a small split of D1 as input. It can be observed that the difference between the memory used by one SparkBWA mapper in regular mode and in hybrid mode with 8 threads is only 4 GiB. This means an increase of about 30% in the total memory consumed, while the number of threads per mapper grows by a factor of 8.
So, taking into account that our experimental platform allows 22 containers per node with 11 GiB of maximum memory, SparkBWA in hybrid mode could in this example use all 64 cores in the node, e.g., running 16 mappers with 4 threads/mapper. This is not the case for the regular mode, which only allows a maximum of 22 cores of the node to be used. Therefore, the hybrid mode can be very useful in scenarios where the computing nodes consist of a high number of cores but, due to memory restrictions, only a few of them can be used.
Next, we evaluate the performance of SparkBWA using both modes of operation. Experiments were conducted using the BWA-MEM algorithm and considering 2 and 4 threads per map process when hybrid mode is enabled. Performance results are shown in Fig 8 for all the datasets using different numbers of mappers. There are no results for the case of 128 mappers with 4 threads/mapper because it would require 512 cores for an optimal execution, while our cluster only consists of 384 cores. Several conclusions can be extracted from the performance results. SparkBWA shows good scalability with the number of mappers, especially in the regular mode (that is, when each mapper is computed sequentially). Assuming the same number of mappers, more threads per mapper in the hybrid mode is only beneficial for the biggest dataset (D3). This behavior points out that the benefits of using more threads in the computations do not compensate for the overhead caused by their synchronization. On the other hand, considering the cores used in the computation (#threads × #mappers), we can observe that the regular mode performs better than the hybrid one. For instance, points A, B and C in Fig 8(b) were obtained using the same number of cores. SparkBWA in regular mode (point C) clearly outperforms the hybrid version. This behavior is observed in most of the cases. In this way, as we have indicated previously, SparkBWA hybrid mode should be the preferred option only in those cases where memory limitations do not allow all the cores in each node to be used. Table 4 summarizes the results of SparkBWA in terms of performance for all the datasets. It shows the minimum time required by SparkBWA to perform the alignment on our hardware platform, the number of mappers used, the speed measured as the number of pairs aligned per second, and the corresponding speedup with respect to the sequential execution of BWA. The sequential times are respectively 258, 496 and 5,940 minutes for D1, D2 and D3. In the particular case of D3 this means more than 4 days of computation. It is worth noting that using SparkBWA this time was reduced to less than an hour, reaching speedups higher than 125×.
Finally, we verified the correctness of SparkBWA for the regular and hybrid modes by comparing their output with the one generated by BWA (sequential version). We only found small differences in the mapping quality scores (mapq) of some uniquely mapped reads (i.e., reads with quality greater than zero). Apart from this, the mapping coordinates are identical for all the cases considered. Differences affect from 0.06% to 1% of the total number of uniquely mapped reads. Small differences in the mapq scores are expected because the quality calculation depends on the insert size statistics, which are calculated on sample windows on the input stream of sequences. These sample windows are different for each read in BWA (sequential) and any other parallel implementation that splits the input into several pieces (SEAL, pBWA, Halvade, the BWA threaded version, SparkBWA, etc.). In this way, any parallel BWA-based aligner will obtain slightly different mapping quality scores with respect to the sequential version of BWA. For instance, SEAL reports differences on average in 0.5% of the uniquely mapped reads [21].

5.2.3 Comparison to other aligners. Next, a performance comparison among different BWA-based aligners and SparkBWA is shown. The evaluated tools are enumerated in Table 3 together with their corresponding parallelization technology. Some of them take advantage of classical parallel paradigms, such as Pthreads or MPI, while the others are based on big data technologies such as Hadoop. All the experiments were performed using SparkBWA in regular mode. For comparison purposes, all the graphs in this subsection include the corresponding results considering ideal speedup with respect to the sequential execution of BWA.
Two different algorithms for paired-end reads have been considered: BWA-backtrack and BWA-MEM. The evaluation of the BWA-backtrack algorithm was performed using the following aligners: pBWA, SEAL and SparkBWA. When paired reads are used as input data, BWA-backtrack consists of three phases. First, the sequence alignment is performed for one of the input FASTQ files. Afterwards, the same action is applied to the other input file. Finally, a conversion to the SAM output format is performed using the results of the previous stages. SparkBWA and SEAL take care of the whole workflow in such a way that it is completely transparent to the user. Note that SEAL requires a preprocessing stage to prepare the input files, so this extra time was included in the measurements. On the other hand, pBWA requires each phase of the BWA-backtrack algorithm to be launched independently, although each phase itself runs in parallel. In this way, pBWA times were calculated as the sum of the individual phase times. No preprocessing is performed by pBWA. As BWA-backtrack was especially designed for shorter reads (<100 bp), we considered D1 as the input dataset but, for completeness, D2 is also included in the comparison. Fig 9 shows the alignment times using different numbers of mappers. In this case, each map process uses one core, so the terms mappers and cores are equivalent. Results show that SparkBWA clearly outperforms SEAL and pBWA in all cases. As mentioned previously, SEAL times include the overhead caused by the preprocessing phase, which takes on average about 1.9 and 2.9 minutes for D1 and D2, respectively. This overhead has a large impact on performance, especially for the smallest dataset.
The corresponding speedups obtained by the aligners for BWA-backtrack are displayed in Fig 10. As a reference we have used the BWA sequential time. The results confirm the good behavior of SparkBWA with respect to SEAL and pBWA. For instance, SparkBWA reaches speedups of up to 57× and 77× for D1 and D2, respectively. The maximum speedups achieved by SEAL are only about 31× and 42×, while the corresponding values for pBWA are 46× and 59×. In this way, SparkBWA is on average 1.9× and 1.4× faster than SEAL and pBWA, respectively.
Finally, the BWA-MEM algorithm is evaluated considering the following tools: BWA, BigBWA, Halvade, and SparkBWA. Fig 11 shows the corresponding execution times for all the datasets varying the number of mappers (cores). BWA uses Pthreads to parallelize the alignment process, so it can only be executed on a single cluster node (64 cores). Both BigBWA and Halvade are based on Hadoop, and they require a preprocessing stage to prepare the input data for the alignment process. BigBWA requires, on average, 2.4, 5.8 and 23.6 minutes to preprocess each dataset, whereas Halvade spends 1.8, 6.6 and 22.7 minutes, respectively. Preprocessing is carried out sequentially by BigBWA, while Halvade is able to perform it in parallel. This overhead does not depend on the number of mappers used in the computations. For fairness of comparison, the overhead of this phase is included in the corresponding execution times of both tools, since the times for BWA and SparkBWA encompass the whole alignment process.
Performance results show that BWA is competitive with respect to the Hadoop-based tools (BigBWA and Halvade) when 32 mappers are used, but its scalability is very poor: using more threads in the computations does not compensate for the overhead caused by their synchronization unless the dataset is big enough. BigBWA and Halvade show a better overall performance with respect to BWA. Both tools behave in a similar way, and the differences in their performance are small. Finally, SparkBWA outperforms all the considered tools. To illustrate the benefits of our proposal, it is worth noting that, for example, SparkBWA is on average 1.5× faster than BigBWA and Halvade when using 128 mappers, and 2.5× faster than BWA considering 64 mappers.
Performance results in terms of speedup with respect to the sequential execution of BWA are shown in Fig 12. The scalability problems of BWA are clearly revealed in the graphs. The Hadoop-based tools show better scalability, but not enough to get close to SparkBWA. The average speedup is respectively 50× and 49.2× for BigBWA and Halvade using 128 workers. This value increases up to 72.5× for SparkBWA. Note that the scalability of SparkBWA is especially good when considering the biggest dataset (Fig 12(c)), reaching a maximum speedup of 85.6×. In other words, the parallel efficiency (the speedup divided by the 128 cores used, 85.6/128) is 0.67.
In this way, SparkBWA has proven to be very consistent in all the scenarios considered, improving the results obtained by other state of the art BWA-based aligners. In addition, we must highlight that SparkBWA behaves better as the size of the dataset increases.
Conclusions
In this work we introduce SparkBWA, a new tool that exploits the capabilities of a Big Data technology such as Apache Spark to boost the performance of the Burrows-Wheeler Aligner (BWA), a very popular tool for mapping DNA sequence reads to a large reference genome. BWA consists of several algorithms especially tuned to deal with the alignment of short reads. SparkBWA was designed in such a way that no modifications to the original BWA source code are required. In this way, SparkBWA remains compatible with any BWA software release, future or legacy.
The behavior of SparkBWA was evaluated in terms of performance, scalability and memory consumption. In addition, a thorough comparison between SparkBWA and several state-of-the-art BWA-based aligners was performed. Those tools take advantage of different parallel approaches such as Pthreads, MPI, and Hadoop to improve the performance of BWA. The evaluation shows that when considering the algorithm to align shorter reads (BWA-backtrack), SparkBWA is on average 1.9× and 1.4× faster than SEAL and pBWA. For longer reads and the BWA-MEM algorithm, the average speedup achieved by SparkBWA with respect to the BigBWA and Halvade tools is 1.4×.
Finally, it is worth noting that most next-generation sequencing (NGS) professionals are not experts in Big Data or High Performance Computing. For this reason, in order to make SparkBWA more suitable for these professionals, an easy and flexible API is provided, which will facilitate the adoption of the new tool by the community. This API makes it possible to manage the sequence alignment process from the Apache Spark shell, hiding all the computational details from the users.
PB1-F2 Attenuates Virulence of Highly Pathogenic Avian H5N1 Influenza Virus in Chickens
Highly pathogenic avian influenza virus (HPAIV) is a permanent threat due to its capacity to cross species barriers and generate severe infections and high mortality in humans. Recent findings have highlighted the potential role of PB1-F2, a small accessory influenza protein, in the pathogenesis process mediated by HPAIV in mammals. In this study, using a recombinant H5N1 HPAIV (wt) and its PB1-F2-deleted mutant (ΔF2), we studied the effects of PB1-F2 in a chicken model. Unexpectedly, when using a low inoculation dose we observed that the wt-infected chickens had a higher survival rate than the ΔF2-infected chickens, a feature that contrasts with what is usually observed in mammals. A high inoculation dose resulted in similar mortality rates for both viruses, and comparison of the bio-distribution of the two viruses indicated that the expression of PB1-F2 allows a better spreading of the virus within chicken embryos. Transcriptomic profiles of lungs and blood cells were characterized at two days post-infection in chickens inoculated with the wild-type (wt) or ΔF2 mutant viruses. In lungs, the expression of PB1-F2 during the infection induced pathways related to calcium signaling and repressed a large panel of immunological functions. In blood cells, PB1-F2 was associated with a gene signature specific for mitochondrial dysfunction and down-modulated leucocyte activation. Finally, we compared the effect of PB1-F2 in the lungs of chickens and mice. We found that a gene signature associated with tissue damage is a PB1-F2 feature shared by the two species; by contrast, the early inhibition of the immune response mediated by PB1-F2 observed in chickens is not seen in mice. In summary, our data suggest that PB1-F2 expression deeply affects the immune response in chickens in a way that may attenuate pathogenicity at a low infection dose, a feature differing from what was previously observed in mammalian species.
Introduction
Since 1997, H5N1 highly pathogenic avian influenza virus (HPAIV) has been an omnipresent public health threat [1]. This concern is based on the capacity of such viruses to cross the species barrier from their avian hosts to humans. Waterfowl and shorebirds constitute the natural hosts of H5N1 HPAIV; infected wild aquatic birds usually develop relatively mild symptoms and limited disease. In ducks, infection is usually asymptomatic [2,3]. Nevertheless, when the virus is transmitted to poultry, the mortality rate can reach 100%. The ability of H5N1 HPAIV to cross species barriers and spread to humans has been reported in 15 countries and confirmed by the World Health Organization (WHO). The total number of cases confirmed by the WHO is 650, with a case fatality rate of 60% [4]. However, a seroprevalence study among poultry workers suggested that H5N1 HPAIV could also cause infections with only mild symptoms in humans [5]. Human-to-human H5N1 HPAIV transmission has not been described yet, but recent works designed to address the questions of transmissibility and adaptation of H5N1 HPAIV to the human host identified important determinants in the HA-encoding segment [6,7].
Influenza A virus belongs to the Orthomyxoviridae family, and its genome consists of eight negative-strand RNA segments encoding up to 14 proteins [8,9]. Viral determinants conferring host adaptation can be acquired through segment exchange between strains. This gene reassortment ability provides genome plasticity facilitating the crossing of species barriers [10,11]. The PB1-encoding segment from the influenza viruses responsible for the 1957 and 1968 pandemics was shown to have an avian origin [10]. This segment was also demonstrated to exert an important role in the pathogenesis mediated by the 1918 pandemic influenza virus [12]. In addition to the PB1 component of the polymerase complex, segment 2 also encodes the N40 protein, an N-truncated version of PB1 lacking transcriptase function, and PB1-F2, a pro-apoptotic protein [13,14]. PB1-F2 has drawn attention in recent years since a number of reports associated its expression with an increase in virus pathogenicity [15][16][17][18][19][20][21]. When looking at the prevalence of a functional PB1-F2 in strains of various origins, its expression appears unequally distributed: for example, 96% of avian strains encode a full-length PB1-F2 while only 7% of human H1N1 isolates express a functional PB1-F2 [22,23]. Thus, several reports suggest that the loss of a functional PB1-F2 could be beneficial for the virus when it crosses the species barrier to spread through humans [22,[24][25][26]. Consequently, the loss of PB1-F2 could be an adaptation of the virus to mammalian hosts conferring optimized replicative efficiency and viral fitness [27]. On the contrary, PB1-F2 from a highly virulent avian strain could be acquired by a circulating human strain through segment exchange, and then contribute to the increased virulence associated with seasonal influenza viruses.
PB1-F2 was first described in 2001 [13]. This 90-amino-acid-long protein is encoded by an alternative open reading frame (ORF) overlapping the PB1 ORF. In mammals, PB1-F2 can induce apoptosis of immune cells [13] and promote inflammation in a strain-dependent manner [17,19,27,28]. PB1-F2 triggers apoptosis by disturbing the mitochondrial membrane potential, an event that can lead to cytochrome c release and subsequent activation of caspases [29]. Inflammation induction is triggered by an exacerbation of NF-κB transcription factor activity [19,30]. The molecular basis of this exacerbation is currently unknown, but the propensity of PB1-F2 to form β-sheet structures and amyloid fibers is suspected to play an important role in this process [31]. PB1-F2 has also been described to up-regulate viral polymerase activity [32], but this property is strain-specific and has no impact on pathogenesis [33].
In mammals, PB1-F2 behaves as a virulence factor [15]. A recombinant WSN/1933 (H1N1) mutant lacking PB1-F2 has been shown to be less virulent than its wild-type (wt) counterpart [19]. Such effects on mortality are not always visible, especially when highly pathogenic H5N1 strains are studied, due to the extreme 50% lethal doses (LD50) of these viruses [34]. However, in mouse models, the high LD50 can be by-passed using Mx+/+ transgenic mice [20]. Varga and co-workers described the inhibition of type I interferon (IFN) by PB1-F2; they showed that PB1-F2 antagonizes the function of MAVS through disruption of the mitochondrial membrane potential [18,35,36]. On the other hand, we observed an exacerbation of IFN expression when PB1-F2 was expressed during WSN/1933 infection [30]. This feature correlates well with the pro-inflammatory properties of PB1-F2. The sequence polymorphism of PB1-F2 that dictates its pro-inflammatory properties may explain the observed differences [28].
In this study, we compared the pathological process mediated by an H5N1 HPAIV unable to express PB1-F2 with its PB1-F2-expressing counterpart in chickens, to elucidate the role of PB1-F2 in an avian host. Survival curves, bio-distribution and host responses to both viruses were analyzed in order to determine the impact of PB1-F2 expression in this host. Unexpectedly, and in contrast to what is observed in mammals, PB1-F2 attenuates virulence in chickens. Using functional genomics tools, we delineated the impact of PB1-F2 in the lungs of infected chickens. Finally, we compared these effects with what we previously observed in mice infected with the same pair of H5N1 HPAIVs [34].
Viral distribution of wild-type and PB1-F2-deleted viruses in chicken embryos
The recombinant influenza A/duck/Niger/2090/2006 (H5N1) virus (herein named Nig06) and its PB1-F2 knocked-out counterpart (named ΔF2) were previously produced [34]. To study the effects of PB1-F2 expression in the chicken host, we first performed histological investigations to characterize the bio-distribution of the Nig06 and ΔF2 viruses within chicken embryos. Examination of embryos at 24 hours post-inoculation by immunohistochemistry showed that the viral antigen was more widespread in the wt group as compared to the ΔF2-virus group. In examined tissues of most of the embryos, including the brain, heart, spleen, liver, lung, kidney, spinal cord, bone and skeletal muscle, there was weak to moderate immunostaining in the ΔF2 group, whereas extensive to widespread immunostaining was observed in the wt group (Fig. 1). In both groups antigen was primarily observed in endothelial cells in all tissues. Some spread to surrounding parenchyma, including hepatocytes, neurons, skeletal muscle, cardiomyocytes and pneumocytes, was observed in both groups but was more pronounced in the wt group. In both groups there was consistently strong immunostaining in the chorioallantoic membrane. At the 18- and 48-hour time points post-inoculation, no differences were observed between the two groups. In summary, these data suggest that PB1-F2 expression may accelerate the systemic spreading of the virus, as previously described [20].
Mortality associated with PB1-F2 from Nig06
To determine the role of PB1-F2 in highly pathogenic AIV pathogenesis, we first aimed to characterize the impact of PB1-F2 on mortality in adult 8-week-old White Leghorn chickens. Chickens were challenged using 1000 plaque forming units (PFU) of wt Nig06 (n = 10) or ΔF2 Nig06 (n = 10). As shown in Fig. 2A, this amount of virus killed 90% of the chickens. Death of the animals occurred between days 3 and 8 post-infection (pi) and the mean time of death was 4 days pi. No differences could be observed between the 2 viruses. Remarkably, we found that wt Nig06-infected chickens (n = 20) had a higher survival rate than ΔF2 Nig06-infected chickens (n = 20): 50% vs. 15%, respectively, in survival experiments using a lower viral dose of 100 PFU (Fig. 2B). The chicken LD50 was estimated to be 10^2.05 PFU for the wt Nig06 and 10^1.62 PFU for the ΔF2 Nig06. These results suggest that the expression of PB1-F2 in birds infected with a low dose of HPAIV could be beneficial for avian host survival.
Replication and shedding of wt and ΔF2 Nig06
To further characterize the role of PB1-F2 in the pathogenic process exerted by the viral infection, we investigated whether PB1-F2 could influence the spreading of the virus by quantifying its tissue distribution using qRT-PCR at 2 days pi (Fig. 3A). The viral copy numbers of the wt and ΔF2 viruses were roughly the same in trachea, lung, spleen and thymus tissues. However, in the blood of ΔF2-infected chickens, although the difference is not statistically significant (p = 0.17), we observed a higher amount of virus as compared to wt-infected chickens. Such differences were also observed in heart, liver, kidney and brain, but to a lesser extent. In contrast, in the pancreas (not statistically significant) and in the intestine, we detected a higher amount of viral copies in the wt-infected chickens. Importantly, the difference observed in the intestine is statistically significant (p < 0.05) and suggests that PB1-F2 could influence the enteric tropism of the virus, though, as the difference between the two conditions is less than one log in viral copy number, this result could be statistically significant but biologically non-relevant. We next compared the shedding capacities of the wt and ΔF2 viruses. Oropharyngeal and cloacal swabs were taken at day 2 pi for virus copy number quantification. The analysis of swabs from wt-infected chickens revealed a higher number of positive samples (7/10 vs. 4/10 for the ΔF2 group); yet, when looking at the amount of viral RNA detected within the positive individual swabs, no differences could be evidenced between the two types of infection (Fig. 3B and 3C). In order to confirm the qRT-PCR viral RNA quantification data, we attempted virus isolation from oral swabs by the plaque assay method to quantify virus shedding (Figure S1).
Impact of PB1-F2 on the expression profiles of representative genes of the host response

Prior to the characterization of the global transcriptome of the infected chickens, we first explored the impact of PB1-F2 on the host response using qRT-PCR assays. We focused on several genes representative of the inflammatory and immune responses: STAT1, β2M, TLR4, IFNAR1, CTLA-4, CCL5, TLR7, HSPA2, IL2RG, TLR6 and BCL2L1. The two inoculum doses were then compared (Fig. 4). Surprisingly, despite the outcome of the survival curves, the differences between wt- and ΔF2-infected chickens were larger and statistically more significant in the 1000 PFU group (p value: 0.0006 vs. 0.0331). Consequently, the 1000 PFU inoculum dose was chosen for further global transcriptomic analysis.
Overview of differences in gene expression following infection with wt or ΔF2 Nig06
To determine how PB1-F2 modulates virus pathogenicity, we investigated the overall host response to wt and ΔF2 Nig06 infection. Microarray analyses of RNA samples extracted from lungs and blood of wt- or ΔF2-infected chickens were carried out. Two groups of 5 White Leghorn chickens were intranasally infected with 1000 PFU of each virus. At 2 days pi, blood samples were collected; chickens were euthanized the same day and lungs were collected. Total RNA was extracted, processed and analyzed using the Agilent Chicken (V2) Gene Expression Microarray (4×44K). Probe signals were normalized and statistically treated as described in the Materials and Methods section. To identify outlier arrays, hierarchical correlation clustering (uncentered, with average linkage) of the entire set of samples was carried out. As a consequence, one chicken was excluded from each group for the functional analysis. The 8 remaining chicken specimens showed a strong ''compartment'' effect: lung and blood samples displayed clearly distinct clustering (Fig. 5A). Remarkably, within each ''compartment'' cluster, the 2 viral infections also constitute 2 separate clusters, indicating a clear effect of PB1-F2 during Nig06 infection of chickens. In order to illustrate the global variance by which the different sample sets correlate, array data from each sample were subjected to principal component analysis (Fig. 5B). As with the hierarchical correlation clustering, the 4 sets of samples clustered in distinct localized areas, indicating different gene expression profiles in lung and blood in response to the wt or ΔF2 viruses. Collectively, these data indicate a potent effect of PB1-F2 on the chicken response to Nig06 infection, and also a tissue-specific effect of PB1-F2.
Analysis of the impact of PB1-F2 expression in the lungs of Nig06-infected chickens
The functional consequences of PB1-F2 expression during Nig06 infection of lungs were addressed by analyzing the functions associated with the differentially expressed genes. Regulated genes in the group of wt-infected chickens were directly compared to regulated genes from the group of ΔF2-infected chickens to generate 2 sets of genes: genes up-regulated and genes down-regulated in the presence of PB1-F2. We selected genes whose expression differed at least 2-fold between wt- and ΔF2-infected chickens (adjusted p-value < 0.05). Figure 6A represents the distribution of the genes; among the 3848 regulated genes, half appear up-regulated in the presence of PB1-F2 and half appear down-regulated. We then explored the functional consequences of PB1-F2 expression by using ontological annotations of the differentially expressed genes. The set of genes up-regulated during wt Nig06 infection revealed a strong association of PB1-F2 with ''Calcium Signaling'' pathways: Glutamate Receptor Signaling, GABA Receptor Signaling, and G-Protein Coupled Receptor Signaling (Fig. 6B). This suggests that PB1-F2 could disturb intracellular calcium stores and interfere with pathways in which calcium is involved. Consistently, when looking at the heat map representative of this pathway (Fig. 7A), we observed that most of the genes were down-regulated in the absence of PB1-F2, except a few genes playing important roles in calcium homeostasis: ITPR2, ATP2B1, ASPH and CREB3. ITPR2 and ATP2B1 are two Ca2+ pumps localized within the endoplasmic reticulum (ER) [37,38]. ASPH is an aspartate hydroxylase regulating Ca2+ release from the ER [39]. CREB3 is an ER-bound transcription factor activated by Ca2+ signaling and involved in inflammatory gene expression [40]. Overall, the analysis of this group of genes revealed that PB1-F2 expression modulates Ca2+ signaling pathways within the lungs of infected chickens.
The expression of PB1-F2 also down-regulates the pathways essential for the mobilization and activation of dendritic cells and lymphocytes: the CD40, OX40, iCOS, CTLA4 and CD28 signaling pathways. This strong effect is linked to the regulation of group #3 of genes, which is involved in ''Cell to Cell Signaling and Interaction''. It is composed of genes regulating the signal transduction of several cytokines, including the NF-κB and interferon signaling pathways. Heat maps of representative genes of these 2 main functions are shown in figures 7B and 7C.
The expression of PB1-F2 also down-regulates a cluster of genes functionally associated with ''cellular compromise''. Two canonical pathways are linked to these genes: the ''ER Stress Pathway'' and the ''Protein Ubiquitination Pathway''. These two functional categories illustrate the virulence factor properties of PB1-F2 during influenza virus infection. As shown in figure 7D, we found that the expression of PB1-F2 during the infection represses the entire gene cluster representative of the two pathways, with the exception of ERN1. Interestingly, ERN1 (also known as IRE1) is a transmembrane protein resident in the ER that is implicated in the sensing of unfolded proteins in the ER lumen. Activation of ERN1 leads to a potent transcriptional response triggering growth arrest and apoptosis [41]. Collectively, these transcriptomic data on the lung response associated with PB1-F2 revealed a potent effect of PB1-F2 on the host response within chicken airways, particularly on genes involved in Ca2+ homeostasis and ER stress pathways.
Analysis of the impact of PB1-F2 expression in blood of Nig06-infected chickens

As illustrated in Fig. 2A, HPAIV are able to spread systemically in chickens. To gain insight into the functions of PB1-F2 beyond the respiratory tract, we studied the host response of chicken blood cells. RNA samples were collected before infection and at day 2 pi. Each RNA sample from the blood of an infected chicken was directly compared to the RNA sample from the same chicken before infection, using a dual-color hybridization design. Four chickens in each group (wt- and ΔF2-infected) were used to identify the genes activated during the host response. The PB1-F2-dependent gene profile in blood cells was very different from the profile obtained in lungs: the set of up-regulated genes was restricted to 81 genes, representing only 0.5% of the analyzed genes (Fig. 8A). The number of down-regulated genes was comparable to that observed in the lungs (7.9%). The 81 PB1-F2-up-regulated genes are strongly associated with the ''Mitochondrial Dysfunction'' and ''Glucocorticoid Receptor Signaling'' pathways (Fig. 8B). The mitochondrial dysfunction pathway appears very interesting since PB1-F2 is mainly localized to the mitochondria [13,42,43] and is able to permeabilize mitoplasts (Christophe Chevalier, unpublished data). Among those 81 genes, a number of genes encoded by the mitochondrial genome are present (Fig. 8C, left panel). The relative expression levels of 3 mitochondrial genes, COX1, COX2 and COX3, were confirmed in blood samples using quantitative PCR (Fig. 8D). The ontological analyses of down-regulated genes showed that PB1-F2 down-regulates the activation of leukocytes (Leukocyte Extravasation Signaling, p38 MAPK Signaling, Wnt/β-catenin Signaling; Fig. 8B). A heat map of representative genes implicated in these biological processes is represented in Fig. 8C (right panel).
Taken together, these blood cell transcriptomic data suggest a strong induction of mitochondrial gene transcription mediated by PB1-F2 and a down-regulation of leukocyte activation, which could be due to an alteration of mitochondrial integrity. The loss of the mitochondrial membrane potential could also explain the inhibition of leukocyte activation, as previously described [36].
Comparison of Nig06 PB1-F2 functions in infected chicken and mouse lungs
Since Nig06 is able to infect mammals and birds, we compared the host response associated with PB1-F2 in the lungs of mice and chickens at 2 days pi. We used the chicken data obtained in the present study and the mouse data published in a previous work [34]. We generated two sets of genes by directly comparing wt- and ΔF2-infected animals. The up- and down-regulated genes were selected using an adjusted p-value < 0.05 (Fig. 9A and 9B); we then compared these two groups of genes in an ontological analysis. The functional classification of the genes regulated by PB1-F2 in chicken and mouse revealed major differences in the host response. In chickens, genes associated with ''inflammatory response'' and ''immunological disease'' correspond to 25% and 15% of the regulated genes, respectively, whereas in mice these functions are underrepresented, with only 2% and 5% (Fig. 9C and 9D). When comparing the canonical pathways associated with the genes deregulated by PB1-F2 in both species, we identified host-specific PB1-F2 functions. As shown in Fig. 9E, in chicken, PB1-F2 regulated numerous inflammatory and immune pathways. Importantly, most of these pathways were down-regulated by the expression of PB1-F2 during the infection. For example, in chicken, the majority of the genes implicated in ''CTLA4 Signaling in Cytotoxic T Lymphocytes'' were down-regulated, suggesting that this specific function is down-modulated by PB1-F2 or that the cell type involved in this pathway is depleted by infection with the PB1-F2-expressing virus (Fig 10A). On the contrary, in mouse, only a few canonical pathways were regulated by PB1-F2 at this time point. In particular, we found that PB1-F2 up-regulated the expression of genes associated with ''PPARα/RXRα Activation'', a pathway exerting anti-inflammatory functions (Fig. 10B). Presumably as a consequence of this, a delay in the triggering of the immune response is observed in mice infected by the PB1-F2-expressing Nig06 virus [34]. Our data also indicate that PB1-F2 expression triggers host responses shared by avian and mammalian hosts. Among these responses, we found the ''Aldosterone Signaling'', ''PDGF Signaling'' and ''eNOS Signaling'' pathways, which illustrate the damage caused by PB1-F2 within the epithelium, connective tissue and endothelium, respectively (Fig. 10C-E).
In summary, PB1-F2 from Nig06 exerts different functions in mouse and chicken. In the lungs of the avian host, PB1-F2 strongly decreases the inflammatory response, whereas its impact is mild in mice. However, the epithelial cell damage provoked by PB1-F2 expression appears to be a feature shared by both species.
Discussion
Highly pathogenic H5N1 AIV infections among domestic poultry have become endemic in several countries in Asia and in Egypt [44]. An AIV is defined as ''highly pathogenic'' when it causes at least 75% mortality in intravenously infected 8-week-old naïve chickens [2], or if its hemagglutinin contains a polybasic cleavage site [45]. Most highly pathogenic AIV provoke multi-organ failure, including hemorrhage in the intestinal and respiratory systems and lymphoid necrosis. In this work, we analyzed the contribution of PB1-F2 to the pathology of chickens infected by a highly pathogenic AIV isolated in Niger in 2006 [34]. Unexpectedly, at a low inoculum dose, infection with the PB1-F2-expressing virus resulted in reduced mortality in comparison to the ΔF2 virus. This feature contrasts with what is observed in mammals, since PB1-F2 usually increases pathology in mouse models [15,[17][18][19]27,34]. To characterize the functional causes of this observation, we explored the host responses of the infected chickens in two tissues: lung and blood. By comparing the genes differentially expressed in the presence or absence of PB1-F2 during infection, we were able to identify gene signatures associated with this protein.
In lungs, PB1-F2 expression increases the transcription of genes involved in calcium signaling and alters ER integrity, inducing an ER-stress pathway. The ER stress response, also known as the unfolded protein response, is regulated by several enzymes including ERN1 (also known as IRE1). ERN1 activity has been shown to be important during the viral cycle of influenza virus [46]. The up-regulation of ERN1 in the presence of PB1-F2 is related to the ability of PB1-F2 to misfold and aggregate in membrane environments [31]. Cellular Ca2+ concentration dynamics play a critical role in epithelium homeostasis, and a prolonged decrease of the Ca2+ concentration within the ER triggers multiple cellular cascades that can ultimately lead to cell death [47]. A multitude of factors regulate Ca2+ fluxes, including cytokines and reactive oxygen metabolites, but the strong membrane affinity of PB1-F2 and its capacity to alter membrane integrity [31,48] support the hypothesis that PB1-F2 itself could modify Ca2+ fluxes and thereby generate a gene signature related to ''calcium signaling''. Importantly, Ueda and colleagues previously characterized the induction of apoptosis in duck epithelial cells infected by a highly pathogenic H5N1 AIV through an extracellular Ca2+ influx mechanism [49]. This Ca2+ imbalance results in an excess of Ca2+ transport into the mitochondria, inducing a loss of mitochondrial membrane potential and apoptosis. Alternatively, mitochondrial Ca2+ overload can occur through direct ER-mitochondria transfer, a mechanism facilitated by ER sensitization [50].
Interestingly, we identified a mitochondrial dysfunction gene signature associated with PB1-F2 expression in the blood of infected chickens. Mitochondrial dysfunction occurs when ROS-mediated oxidative stress overpowers the antioxidant defense system. In parallel, inhibition of pathways related to leukocyte activation was also evidenced in the presence of PB1-F2 (Fig. 8C). These two concomitant events are reminiscent of the fact that PB1-F2 was shown to inhibit the immune response by decreasing the mitochondrial membrane potential [36]. Hence, PB1-F2 could trigger mitochondrial dysfunction in immune cells and may consequently deplete leukocytes through apoptosis induction in the infected host. This mechanism can be amplified through cytokine secretion such as CTLA-4 (Fig 4 and Fig 7). CTLA-4 is expressed by T-cells and has been described to play a crucial role in the immunomodulatory properties of these cells [51].
It is worth noting that the helicase RIG-I is absent from the chicken genome [52]. This is of prime importance since PB1-F2 has been described to inhibit type I interferon induction in mammals by binding to MAVS, the RIG-I adaptor protein allowing signal transduction [35,36]. Thus, it seems unlikely that PB1-F2 acts in this way in the chicken host. In contrast to chickens, ducks express a functional form of RIG-I [52]. This differential expression of RIG-I between chickens and ducks is probably a key component that could explain the opposite PB1-F2 phenotypes observed in these two species. Indeed, Schmolke and collaborators demonstrated that the deletion of PB1-F2 caused a delayed onset of pathological signs and systemic spreading of the virus [20]. These data, in conjunction with previous findings in other studies, support the hypothesis that the detrimental effect of PB1-F2 on the host could be mediated by the activated form of MAVS (i.e., the filamentous form, [53]). The absence of RIG-I in the chicken genome implies that MAVS is probably less activated during infection, which consequently reduces the negative effects of PB1-F2.
Essentially, no difference was observed in the outcome of the infection between the wt and the ΔF2 H5N1 virus at the dose of 1000 PFU. This is consistent with the highly pathogenic genotype and phenotype of the virus due to the multibasic cleavage site of the hemagglutinin. Interestingly, a notable difference in the outcome of the infection was observed at an inoculation dose of 100 PFU, with the wt-inoculated chickens being the survivors. Despite the molecular signatures associated with PB1-F2, it is still difficult to explain why the wt Nig06 is less virulent than the ΔF2 virus at a low dose. Analysis of gene regulation indicated that a number of genes are strongly down-regulated in the wt-inoculated birds. While some of these genes are linked to pathogenesis, the overall combined effect of this lack of up-regulation may have led to reduced mortality in this inoculation group of birds. It cannot be excluded that the role of PB1-F2 in birds is to attenuate the influenza virus to allow survival of the reservoir host, since almost all known avian isolates express this protein [22,23], which appears to be excluded upon adaptation of the virus to a mammalian host, where the gene up-regulation caused by PB1-F2 contributes to pathogenesis [17,19,27,28]. The down-regulation of immune and inflammatory pathways in the presence of PB1-F2 is associated with a better survival rate of infected chickens; this suggests that the host response could be implicated in the pathogenesis, as previously described in mouse models [54]. However, the molecular profiles of ΔF2-infected chickens and the cytokine analysis did not show aberrant cytokine production (data not shown) as described for the ''cytokine storm'' observed in human acute respiratory distress syndrome [55]. Nevertheless, a recent work by Kuchipudi and collaborators studied the kinetics of cell death induction in avian species infected with AIV and provided a link between the rapid induction of apoptosis and resistance to the virus [56]. As a consequence, AIV could have evolved a mechanism involving PB1-F2 to delay the death of the host in order to have time to spread efficiently.
By comparing the host responses regulated by PB1-F2 in chicken and mouse using the same H5N1 wt and mutant viruses, we were able to identify common and differentially regulated pathways in both hosts. The identified functions suggest that PB1-F2 expression provokes damage to the infected lungs in both species. PB1-F2 has affinity for hydrophobic environments and is able to disturb membrane integrity [31,48,57]; this property is likely to damage the epithelium structure and to alter the ion-channel control of fluid and electrolyte transport across the epithelium. Consequently, the pathway ''Aldosterone Signaling in Epithelial Cells'' is regulated by PB1-F2, since it integrates genes implicated in ion fluxes and water retention in epithelia. Genes such as ASIC1, SCNN1A and SCNN1B (ENaCα and β) encode ion channels; their expression is up-regulated by PB1-F2 in chicken lungs (Fig. 9C), suggesting a perturbation of the electrodiffusion across the apical membrane of epithelial cells.
As a consequence of the alteration of epithelial tissue integrity, the wt Nig06 is able to spread more efficiently in infected embryos and also in the intestine of infected chickens, even if the biological relevance of the intestinal viral replication data is questionable. Given that fecal-to-oral transmission is the most common mode of spread between birds, the capacity attributed to PB1-F2 to increase the dispersal of the virus through the host organs could provide an advantage to avian viruses that express PB1-F2. This may explain why 96% of avian virus genomes encode a functional PB1-F2. In contrast, as PB1-F2 promotes inflammation in mammals [19,27,34] and facilitates secondary bacterial pneumonia [17,21], the loss of PB1-F2 functions could be beneficial for adaptation of the virus to mammalian hosts. This loss of function is particularly visible in the H1N1 influenza viruses isolated from humans: only 7% of recently isolated viruses express a full-length PB1-F2 [23].
In summary, the present study sheds light on the complexity of PB1-F2 functions. PB1-F2 displays strain specificity, cell-type specificity and host specificity. Further research is needed to identify the relationship between specific PB1-F2 structural motifs and pathogenesis, together with a better understanding of chicken antiviral responses.
Ethics statement
All animal work was carried out in compliance with Canadian Council on Animal Care guidelines and was approved by the
Viruses
Influenza A/duck/Niger/2090/2006 (H5N1) was used in this study. Wild type (wt) and PB1-F2 knockout (ΔF2) viruses were produced by a reverse genetics system using a bidirectional transcription plasmid derived from pHW2000 [58]. The viruses were prepared as previously described [34] and titrated on Madin-Darby canine kidney (MDCK) cells.
Embryo histological analysis
Ten-day-old embryos were fixed in 10% neutral phosphate-buffered formalin, routinely processed and sectioned at the level of the brain, thorax, abdomen and legs, as well as the chorioallantoic membrane. For immunohistochemistry, paraffin tissue sections were quenched for 10 minutes in aqueous 3% H₂O₂ and then pretreated with proteinase K for 15 minutes. The primary antibody was a mouse monoclonal antibody specific for influenza A nucleoprotein (NP) (F26NP9, produced in-house) and was used at a 1:10,000 dilution for one hour. Sections were then visualized using a horseradish peroxidase-labelled polymer, Envision+ system (anti-mouse) (Dako, USA), reacted with the chromogen diaminobenzidine (DAB), and counterstained with Gill's hematoxylin. Slides were examined and the extent of immunostaining was scored as weak (fewer than 20 cells staining), mild 1+ (<25% of the section staining), moderate 2+ (25-50% of the section staining), extensive 3+ (51-75% of the section staining) or widespread 4+ (>75% of the section staining).
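As a minimal sketch of this scoring rubric (a hypothetical helper; the thresholds are taken directly from the text above):

```python
# Hypothetical helper encoding the semiquantitative IHC grades described above.
def ihc_score(positive_cells: int, fraction_stained: float) -> str:
    """Map a positive-cell count and stained fraction to the ordinal grade."""
    if positive_cells < 20:
        return "weak"
    if fraction_stained < 0.25:
        return "mild (1+)"
    if fraction_stained <= 0.50:
        return "moderate (2+)"
    if fraction_stained <= 0.75:
        return "extensive (3+)"
    return "widespread (4+)"

print(ihc_score(positive_cells=300, fraction_stained=0.60))  # -> extensive (3+)
```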
Infection of chickens with wt or ΔF2 Nig06 and sample collection
Specific-pathogen-free (SPF) white Leghorn chickens, 40 days old, were obtained from the Fallowfield, Ottawa CFIA laboratory. They were floor-housed in heated BSL3 animal cubicles and allowed 1 week of acclimatization before the start of experiments. Ten to 20 chickens per group were inoculated intranasally with 10² or 10³ PFU per chicken of either wt or ΔF2 Nig06 H5N1 in 0.5 ml sterile PBS distributed into both nares for the survival experiments. Cloacal and oropharyngeal swabs were collected from each chicken on day 0 (prior to challenge) and at predetermined time points post challenge. Chickens were monitored daily for clinical signs including depression, apathy, ruffled feathers, respiratory distress, and hemorrhages and necrosis on the face, eyes, nares, combs, wattles, feet and legs. Clinical signs were scored as normal, mildly depressed, depressed or severely depressed. For animal welfare reasons, severely depressed chickens were humanely euthanized with an intravenous injection of sodium pentobarbital (240 mg/ml). In a second set of experiments targeting the host response, five chickens per group were inoculated intranasally with 10³ PFU per chicken of either wt or ΔF2 Nig06 H5N1 in 0.5 ml sterile PBS distributed into both nares. Blood was collected from the wing vein of anaesthetized chickens on days 0 and 2 post infection (pi). At 2 days pi, chickens were euthanized with an intravenous injection of sodium pentobarbital (240 mg/ml), and tissue samples were collected. Swabs, trachea, lung, blood, heart, liver, kidney, spleen, brain, pancreas, intestine and thymus were stored at −70 °C.
RNA extraction and qRT-PCR
RNA was isolated from swabs, blood and lung homogenates using the TRIzol-chloroform method according to the manufacturer's instructions (Invitrogen). RNA quality was checked on a Bioanalyzer 2100 (Agilent Technologies) and samples with an RNA Integrity Number (RIN) score between 7 and 9 were used in microarray or qRT-PCR experiments. Viral loads in swabs and tissue homogenates were determined by a qRT-PCR assay specific for the influenza A M1 gene [59] with primers and probes modified to detect the Nig06 IAV: forward primer, 5′-CTT CTA ACC GAG GTC GAA ACG TA-3′; reverse primer, 5′-GGT GAC AGG ATC GGT CTT GTC TTT-3′; probe, 5′-TET-TCA GGC CCC CTC AAA GCC GAG-BHQ-3′, as previously described [34]. The mRNA levels of host genes were assayed using the Mastercycler realplex sequence detector (Eppendorf) and the double-strand-specific dye SYBR Green system (Applied Biosystems). Details of the primers are provided in Table 1. The PCR conditions and cycles were as follows: initial DNA denaturation for 10 min at 95 °C, followed by 40 cycles of 95 °C for 15 sec, annealing at 60 °C for 15 sec, and extension at 72 °C for 30 sec. Each point was performed in triplicate. To ensure that the primers produced a single and specific PCR amplification product, a dissociation curve was run at the end of the PCR cycle. Relative quantitative evaluation was performed by the comparative ΔΔCt method. The mean ΔCt obtained in mock-infected chickens for each gene was used as the calibrator, after normalization to the endogenous control β-actin. The results are presented as an n-fold difference relative to the calibrator (RQ = 2^−ΔΔCt).
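As a minimal sketch of this comparative quantification, assuming illustrative Ct values (β-actin as the endogenous control and the mock-group mean ΔCt as the calibrator, per the text):

```python
import numpy as np

def relative_quantity(ct_target, ct_actin, calibrator_dct):
    """Comparative ddCt method: RQ = 2^-(dCt_sample - dCt_calibrator)."""
    dct = np.asarray(ct_target) - np.asarray(ct_actin)  # normalize to b-actin
    ddct = dct - calibrator_dct                         # relative to mock mean
    return 2.0 ** (-ddct)

# Example: triplicate Ct values for one gene in one infected chicken.
ct_gene = [24.1, 24.3, 24.0]
ct_actin = [18.2, 18.1, 18.3]
mock_mean_dct = 7.5  # assumed mean dCt of the mock group for this gene
rq = relative_quantity(ct_gene, ct_actin, mock_mean_dct).mean()
print(f"n-fold change vs. mock: {rq:.2f}")
```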
Microarray experiments
Transcriptional profiling was performed using the Agilent-026441 Gallus gallus Oligo Microarray (v2), 4×44K slides (GEO accession: GPL15357). A dual-color design was used to provide direct comparisons between infected lungs [infected-by-wt-virus/infected-by-ΔF2-virus], and between infected and mock-infected blood samples [infected-by-wt-virus/mock-infected] and [infected-by-ΔF2-virus/mock-infected]. To reduce potential experimental biases, RNA samples were collected from 5 different chickens for each experimental condition (mock, wt-infected, ΔF2-infected). A total of 24 samples were analyzed, corresponding to 6 microarray slides. Fluorescently labeled cRNAs were obtained using the two-color Low Input Quick Amp Labeling Kit (Agilent Technologies), starting from 100 ng of total RNA for each sample supplemented with known synthetic RNA (two-color RNA spike-in kit, Agilent Technologies). cRNAs were subsequently purified using RNeasy Mini Spin columns (Qiagen) and the purified cRNAs were then run on a Bioanalyzer 2100 using an RNA 6000 Nano Chip (Agilent Technologies). cRNA yields and specific activities were measured using a NanoDrop 2000 (Thermo Scientific). Equal amounts (800 ng) of cyanine 3- and cyanine 5-labeled cRNA were hybridized for each sample. The hybridization and washing steps were performed following the manufacturer's recommendations. The arrays were scanned using an Agilent G2505C scanner, with a scan protocol using a resolution of 5 μm and a 20-bit dynamic range. The resulting .tiff images were analyzed with the Feature Extraction software version 10.7.3.1 (Agilent) using the GE2_107_Sep09 protocol. For each channel, the median of the signal intensity was used without background subtraction. Data were analyzed to define genes that are differentially expressed between wt-infected and ΔF2-infected samples. Differentially expressed genes were identified with a False Discovery Rate (FDR) of 5%. Microarray data are available in the Gene Expression Omnibus (GEO) database (http://www.ncbi.nlm.nih.gov/geo/) under accession number GSE56506.
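The text states a 5% FDR cutoff but does not name the multiple-testing procedure; the Benjamini-Hochberg step-up method is one common choice, sketched here purely for illustration:

```python
import numpy as np

def bh_reject(pvalues, fdr=0.05):
    """Benjamini-Hochberg: reject all hypotheses up to the largest rank i
    whose sorted p-value satisfies p(i) <= (i/m) * fdr."""
    p = np.asarray(pvalues)
    order = np.argsort(p)
    m = len(p)
    thresholds = fdr * np.arange(1, m + 1) / m
    passed = p[order] <= thresholds
    k = np.max(np.nonzero(passed)[0]) + 1 if passed.any() else 0
    reject = np.zeros(m, dtype=bool)
    reject[order[:k]] = True
    return reject

pvals = [0.001, 0.008, 0.039, 0.041, 0.2, 0.6]
print(bh_reject(pvals, fdr=0.05))  # first two genes called differential
```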
Transcriptomic analysis
For functional analysis, the data files resulting from the differential analysis were imported into GeneSpring GX 12.1 software (Agilent Technologies). Hierarchical clustering analysis was performed to analyze cellular genes that were differentially expressed during infection (Euclidean distance, average linkage). For further analysis, data files were uploaded into the Ingenuity Pathway Analysis (IPA) software (Ingenuity Systems).
Statistical analysis
Survival of chickens was compared using Kaplan-Meier analysis and the log-rank test. qRT-PCR quantifications are expressed as the mean ± standard error of the mean (SEM) of at least four chickens, and statistical analyses were performed using the paired Student's t test. Correlation clustering and principal component analysis were performed on normalized data using the FactoMineR package. Ontological analysis with IPA used the right-tailed Fisher's exact test to calculate a p-value determining the probability that each biological function and disease assigned to the data set is due to chance alone.

Figure S1. Virus isolation by plaque assay on MDCK cells from oral swabs of chickens infected with 1000 PFU, collected at 2 dpi (wt n = 15; ΔF2 n = 15) and at 4 dpi (wt n = 7; ΔF2 n = 5). No live virus could be detected in swabs from 6 chickens in the wt-infected condition at day 2 pi, or from 9 chickens in the ΔF2-infected condition at day 2 pi. (PDF)
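As an illustrative sketch of the Kaplan-Meier/log-rank comparison described under "Statistical analysis" (all survival times are invented, and the lifelines package is one possible tool, not necessarily the one used):

```python
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Invented example data: days post-infection and event flags (1 = death).
t_wt, e_wt = [2, 3, 3, 4, 10, 10], [1, 1, 1, 1, 0, 0]
t_mut, e_mut = [2, 2, 3, 3, 3, 4], [1, 1, 1, 1, 1, 1]

kmf_wt = KaplanMeierFitter().fit(t_wt, e_wt, label="wt Nig06")
kmf_mut = KaplanMeierFitter().fit(t_mut, e_mut, label="dF2 Nig06")
print("median survival (wt):", kmf_wt.median_survival_time_)

result = logrank_test(t_wt, t_mut, event_observed_A=e_wt, event_observed_B=e_mut)
print(f"log-rank p-value: {result.p_value:.3f}")
```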
Chinese Children's Choirs: The Artistic Style of the Northern Children's Choir
The aim of this study is to explore the artistic style of one branch of Chinese children's chorus, the northern children's chorus. First, the acoustic characteristics, repertoire selection and arrangement, and performance styles and techniques of the northern children's chorus are discussed. The study then analyses its song forms, stage images and costumes, and stage sets and lighting effects, followed by its teaching methods and materials, vocal training and skill development, and the cultivation of artistic quality and team consciousness. The development history of children's choirs in the north is then reviewed, successful cases are analysed and evaluated, and future trends and challenges are discussed. Finally, a conclusion summarizes the study and offers an outlook on the remaining problems.
Introduction
Chinese children's chorus, as a unique form of music, has long been loved and respected. As one of its important branches, the northern children's chorus has its own distinctive artistic style and characteristics. With the rapid development of China's economy and the strengthening of cultural exchange, the influence of the northern children's chorus at home and abroad is increasing, and it has attracted the attention of more and more scholars and artists. However, there are relatively few studies on the northern children's chorus at home or abroad. At present, research mainly focuses on its acoustic characteristics and performance skills, while in-depth exploration of its artistic style remains insufficient; research on its training and education, and on its development and prospects, remains largely blank. Therefore, this study aims to systematically explore the artistic characteristics, performance forms, training and education of children's chorus in the north, as well as its development and prospects, in order to provide theoretical guidance and practical reference for its further development. At the same time, by analysing and evaluating successful cases, the study explores future development trends and challenges, offering experience and inspiration for researchers and practitioners in related fields.
Artistic Characteristics of the Northern Children's Chorus
Sound Characteristics
First of all, the timbre of the northern children's chorus is distinct and unique. Since the vocal cords of children's choir singers are not yet fully developed, their timbre is purer and brighter than that of adults. This clear, innocent tone gives people a warm and pleasant feeling, which lends the northern children's chorus greater affinity when expressing pure and childlike emotions.
Secondly, the volume control of the northern children's chorus is very important. In the singing process, the children's choir needs to adjust the volume appropriately according to different repertoire and occasions. In melodious lyrical pieces, the chorus uses a soft and delicate volume to convey the beauty of the music, while in exciting and passionate pieces it needs a full and powerful volume to express their impact.
In addition, the northern children's chorus has a wide range. In the musical arrangement, different ranges are assigned according to the characteristics of the different vocal parts. Generally speaking, children's choruses have a relatively high range and can easily sing the treble part, but their performance in the bass part is relatively limited. Therefore, in the selection and arrangement of the repertoire, the register must be determined according to the characteristics of the children's chorus and the actual situation, to ensure the smoothness and harmony of the singing.
Finally, the sound quality of the northern children's choir is also one of its acoustic characteristics. Sound quality refers to the character of the musical sound itself, and the northern children's chorus is famous for its clear and bright sound. To ensure beautiful sound quality, the choir needs targeted vocal training focused on cultivating vocal skills. When singing, the choir should make appropriate use of breathing control, resonance and other techniques according to the repertoire and performance needs, to improve the purity of the sound and the beauty of the tone.
Selection and Arrangement of Repertoire
First of all, in the choice of repertoire, attention should be paid to the adaptability and challenge of the pieces. Adaptability means that the repertoire should be in line with the children's range and timbre characteristics; whether a children's solo or a choral ensemble, it should be able to reflect the children's clear and innocent tone. Challenge means that the repertoire should have a certain degree of difficulty, so that children can gain a sense of achievement and improve their musical skills in the process of singing.
Secondly, when arranging the repertoire, its unity and diversity must be considered. Unity means that the pieces share a similar artistic style and emotional expression, so that the whole performance has a consistent effect. Diversity means that there should be differences between the pieces, including changes in rhythm, key and style, to increase the richness and interest of the performance.
In addition, when selecting and arranging the repertoire, the audience's receptiveness and appreciation experience must also be considered. The choice of repertoire should take into account the audience's cultural background and level of musical knowledge, favouring pieces that the public knows and loves, so as to increase the audience's sense of participation and interest. At the same time, the arrangement of the repertoire should attend to ebb and flow and a sense of hierarchy; by sequencing pieces of different rhythms and emotions, the whole performance gains an artistic shape of rises and falls.
In conclusion, repertoire selection and arrangement is an important part of the artistic style of the northern children's chorus. By selecting adaptable and challenging repertoire and balancing unity with diversity, the performance becomes more vivid and engaging, and the children's musical literacy and team consciousness improve. In future practice, further research and exploration are needed to continuously refine the repertoire selection and arrangement strategies of the northern children's chorus.
Performance Style and Technique
In terms of performance style, the Northern Children's Choir pays attention to the combination of tradition and modernity, striving to incorporate modern musical elements while maintaining a traditional flavour, so as to make the singing more individual and contemporary. In traditional music, the choir favours a dignified and solemn singing style, pursuing the purity and beauty of the music. At the same time, the use of modern musical elements makes its performance style more diversified, lively and dynamic. When singing, the children's choir should pay attention to the matching of timbre and the details of the performance, so that the audience can feel the emotion and meaning contained in the music itself.
The northern children's choir also has certain characteristics in its use of technique. Firstly, in the handling of sound, attention should be paid to the choir's overall volume control and rhythm; because the chorus is a multi-part ensemble, the volume of each part must be balanced and the overall sound clear and bright. Secondly, pitch must be strictly controlled, maintaining accurate pitch and intervals to build a harmonious harmonic effect. Clarity of words and accurate pronunciation are also important, so that the audience can hear the lyrics clearly. In addition, the northern children's chorus emphasises the expression of body language, focusing on the overall image of the choir and the coordination of movement. When performing, attention should be paid to the creation of stage atmosphere, including the arrangement of stage movements and the coordination of dance, to make the singing more vivid and figurative.
Performance Forms of the Northern Children's Chorus
Song Form
First of all, among the song forms of the northern children's chorus, classic national songs are a highlight. These songs are selected from the traditional folk music of various nationalities, with sincere lyrics and simple melodies. Classics such as "Jasmine Flower" and "Red Plum Praise" have been widely sung by northern children's choruses. In singing them, the children's choir focuses on integrating the emotional expression of national music, conveying respect and love for traditional culture through delicate voices and pure timbre.
Secondly, the song forms of northern children's choirs also include the creation and interpretation of modern musical works. With the development of the times and the progress of music, many composers have begun to write modern works suitable for children's chorus, such as "Promise under the Stars" and "Memories of Childhood". When sung by the northern children's choir, these modern works show off the choir's uniquely clear, bright voice and lively, dynamic performance.
The song forms of the northern children's choir also include works with special themes or forms of expression. For example, in "My Country and I" and "Ode to Joy", colourful stage imagery and elaborate choreography perfectly match the children's choral voices, forming a double enjoyment of sound and vision.
Stage Presence and Costumes
Stage images and costumes play an important role in Northern Children's Choir performances. They not only show the overall style and characteristics of the children's choir but also strengthen the visual impact of the performance and enhance the viewing pleasure. This chapter focuses on the design and selection of the stage images and costumes of the Northern Children's Chorus.
The stage image of the Northern Children's Chorus focuses on showing childishness, cuteness and innocence. In designing stage images, the children's innocence is usually taken into account, so that they look cute and eye-catching on stage; stage images are therefore usually designed in a bright, lively and colourful way.
In the choice of costumes, the northern children's chorus usually follows the style and theme of the performance repertoire. The design of stage costumes should be in harmony with the repertoire and its performance style, and meet the aesthetic expectations of the audience. For example, when singing lively and cheerful songs, brightly coloured and exaggerated costumes can be chosen to increase the visual effect of the performance, while for soft and lyrical songs, light and soft costumes can highlight the emotion of the song.
In addition, the design of stage costumes also needs to take into account the age characteristics of children and the requirements of stage performance. The comfort and flexibility of the costumes are very important for children's performances; the children need to be able to execute their dance movements and singing freely.
Stage Set and Lighting Effects
In Northern children's chorus performances, the stage set plays a scene-setting role. Its design should take into account the venue and the audience's viewing angle, and also fit the requirements of the theme. For example, for grand epic songs, a magnificent landscape background can be designed, giving a grand and majestic feeling, while for youthful, energetic songs, a campus background full of vitality can be designed. The stage set should highlight the theme of the performance so that the audience can better understand the expression of the song.
Lighting effects are also crucial in Northern Children's Chorus performances. They bring more distinctive visual effects to the stage set and performers, and enhance the atmosphere and emotional expression of the scene. Lighting design should follow the style and rhythm of the pieces performed, precisely controlling the brightness, colour and changes of the light to create just the right light-and-shadow effects. For example, for passionate songs, bright red light can create a warm and fervent atmosphere, while for gentle lyrical songs, soft yellow light can create a warm and romantic one. Through the clever use of lighting, the performance becomes more varied and layered.
At the same time, the stage set and lighting effects need to be well coordinated. They should echo and set off each other, making the whole performance more harmonious and complete. When designing the stage set and lighting, the overall style and theme of the repertoire need to be taken into account to enhance the artistic effect and visual impact of the performance as much as possible.
In the future development of children's chorus in the north, the importance of stage scenery and lighting effects will become even more prominent. With the continuous progress of technology, the technical means of stage scenery and lighting will become richer and more diversified, bringing more wonderful and striking effects to performances. Therefore, stage set and lighting designers need to continue to learn and innovate, integrating more artistic elements into their designs and presenting audiences with more expressive and infectious performances.
Training and Education of the Northern Children's Choir
Teaching Methods and Teaching Materials
First of all, regarding the choice of teaching methods, diversified methods can better stimulate students' interest in learning and their potential. In the teaching of the Northern Children's Chorus, a variety of methods should be used, such as demonstration teaching, aural teaching and practical teaching. Demonstration teaching can be carried out through the singing of instructors or excellent choirs; students improve their singing skills and expressive ability through imitation and singing along. Aural teaching emphasizes the cultivation of students' musical perception, improving their appreciation and performance skills by listening to different types of musical works. Practical teaching combines theoretical knowledge with actual practice, consolidating what students have learned and improving the overall effect of the choir by letting them participate in singing and choral activities in person.
Secondly, the selection of teaching materials is a part of the teaching process that cannot be ignored. Good teaching materials should not only have a certain depth of knowledge but also suit the characteristics and requirements of children's chorus. In teaching children's chorus in the north, materials should be chosen that reflect the characteristics of Chinese culture and match students' interests. The content should contain a rich variety of children's choral repertoire, from classical choral works to modern compositions, focusing on the artistry and expressiveness of the songs while taking into account the students' age and vocal characteristics. At the same time, the teaching materials should provide detailed singing instructions and technique explanations to help students correctly understand and master the requirements of choral technique, pitch and rhythm.
In addition, teaching methods and materials need to be adjusted according to the characteristics of students of different ages. For younger students, a game-based teaching method can stimulate motivation and the desire to learn through games and competitions. For older students, inquiry-based methods can encourage independent thinking and creative expression and develop their musical literacy and teamwork skills.
Vocal Training and Skill Development
Firstly, vocal training is the foundation of a children's choir. Vocal training includes the cultivation of basic skills such as breathing, vocalization and resonance. Through scientific breathing training, the choir can master the correct breathing method, enhance lung capacity and breath control, and provide sufficient breath support for singing. At the same time, vocal training helps to develop the vocal stability and clarity of children's choir members, so that they can maintain good sound quality and pitch while singing. Resonance training helps chorus members develop a good resonance cavity, improving the penetration and colour of the voice and making the singing effect more outstanding.
Secondly, skill development is an important way for a children's choir to improve its singing level. Skill development includes range expansion, tone shaping and coordination. Range expansion means widening the range of chorus members through practice, extending their ability in both treble and bass and improving the richness and expressiveness of the singing. Tone shaping, through the joint efforts of chorus members, develops a unique choral tone style, giving the singing distinctive personality. In addition, coordination training helps chorus members collaborate and understand each other during singing, ensuring a polished overall musical effect.
In implementing vocal training and skill development, professional guidance and appropriate methods are crucial. Choirs need experienced vocal teachers to guide them and develop scientific, reasonable training programs. A combination of classroom teaching and individual coaching helps choir members strike a balance between individual skill development and overall collaboration. At the same time, vocal training and skill development should be systematic and progressive, targeted to the actual level and needs of the chorus members to achieve the best results.
Cultivation of Artistic Literacy and Team Awareness
First of all, to cultivate artistic literacy, the team should focus on each member's learning and mastery of basic musical knowledge and skills, including music theory, singing technique and vocal training. Choral conductors and music educators should formulate a systematic teaching plan, providing members with rich musical materials and training opportunities so that they can continuously improve their musical skills.
In addition, the team should strengthen the cultivation of art appreciation and performance skills. Members of choral teams should be able to understand and appreciate musical works, read lyrics accurately and interpret the appropriate emotions. They should also learn how to cooperate with others, play to their strengths and work well together in a team. Performance skills can be developed through regular group rehearsals and stage performance training.
A sense of teamwork is crucial to the success of a choral team. Team members should develop a sense of common purpose and the ability to collaborate to ensure the overall musical and performance effect. Choral conductors and music educators should set up team reward mechanisms to encourage members to cooperate and help each other, and organize regular collective activities to enhance team cohesion.
The cultivation of artistic literacy and team awareness also needs to balance individual development with overall development. While cultivating individual artistic literacy, the chorus team should focus on the overall performance ability and artistic level. Team members should keep the overall goal in mind and improve the performance level of the whole team through common effort and training.
The Development History of Northern Children's Choir
The development of children's choirs in the North can be traced back to the 1950s and 1960s, when children's choirs in the north started late due to the limitations of social and economic conditions and the lack of music education resources. However, through years of hard work and unremitting pursuit, the Northern Children's Choir gradually developed its unique artistic style and achievements.
In the early stage of development, the main goal of children's chorus in the north was to cultivate children's interest in music and their expressiveness. At first, the form was mainly based on school and community choirs, which inspired children's love of music through simple choral activities. Over time, more and more educational institutions began to pay attention to children's chorus in the north, opening relevant courses and training sessions to provide children with more professional music education.
In the decades after the reform and opening up, the Northern Children's Chorus made great progress. Driven by education reform, the popularity of music education gradually increased, and more and more children gained access to choral singing as an art form. Excellent choral conductors and music educators joined the cause of children's chorus in the north, bringing more innovation and development opportunities to the field.
At the same time, the repertoire of children's chorus in the north gradually became richer and more diversified. From the traditional children's songs of the early days, to chamber choral works and independent compositions, to today's cross-border collaborations and innovative experiments, the musical forms and expressions of the northern children's chorus have grown steadily richer, presenting a more unique and personalized artistic style.
In addition, the Northern Children's Choir focuses on cultivating the children's musical literacy and stage performance ability by continuously improving their choral skills and performance styles. The management and organization of the choirs have also become more and more refined, with attention to cultivating the children's team consciousness and collaborative ability. These efforts have brought the Northern Children's Choir remarkable results and recognition at home and even internationally.
With society's increasing emphasis on art education, the development prospects of the Northern Children's Choir are very promising. In recent years, more and more music colleges and art education institutions have begun to offer professional children's chorus courses to cultivate more talented children's chorus singers. At the same time, support from government and the community for art education is also increasing, providing a more favorable environment and opportunities for the development of children's chorus in the north.
However, the development of children's chorus in the north also faces some challenges and problems. On the one hand, due to the unbalanced distribution of educational resources and the insufficient popularisation of art education, the children's chorus cause in some areas remains relatively weak. On the other hand, the education and training system of children's chorus still needs further improvement, and professional teaching teams and standardized teaching methods urgently need upgrading.
Analysis and Evaluation of Successful Cases
Firstly, the Beijing Children's and Young People's Choir is one of the representative children's choirs in China. Founded in 1958, it has a history of several decades. Its performance repertoire is wide-ranging, covering Chinese classical, modern and ethnic music. In its acoustic characteristics it focuses on the integration and coordination of voices, showing the unique charm of Chinese children's chorus through the delicate expression of the vocal parts. In addition, the choir is very careful in the selection and arrangement of repertoire, making appropriate choices and reasonable arrangements according to different themes and occasions to show rich musicality and emotional expression.
Secondly, the Hebei Provincial Children's Choir is a highly regarded northern children's choir. It is committed to combining traditional northern music with choral art to create a unique and characteristic performance style. The choir focuses on the cultivation of performance skills and makes great efforts in the design of stage images and costumes. It emphasises the team's overall cooperative ability, showing a high degree of tacit understanding and team spirit in performance and bringing the audience both visual and aural enjoyment.
In addition, the Shanxi Provincial Children's Choir is also an important force in the field of northern children's chorus. It excels in the use of stage sets and lighting effects, presenting unique visual effects to the audience through stage arrangement and the deployment of lights, increasing the artistic impact of the performance. It also pays great attention to vocal training and skill development, and through careful teaching methods and appropriate teaching materials it has produced several outstanding child singers.
Overall, the analyses and evaluations of these success stories show that northern children's choirs have an important position and role in music education and performance. They have made positive contributions to the development of the Northern Children's Chorus through innovation in acoustic characteristics, repertoire selection and arrangement, performance styles and techniques, song forms, stage images and costumes, stage sets and lighting effects, teaching methods and materials, vocal training and skill development, and the cultivation of artistic quality and teamwork awareness.
Future Development Trends and Challenges
First of all, with the continuous development of society and the expansion of the children's chorus audience, the performance forms and repertoire selection of children's chorus will become more diversified and personalized. In the future, children's chorus will pay more attention to innovation and breakthroughs in artistic expression, constantly exploring new repertoire suited to children's vocal development, while also cultivating and showcasing children's unique artistic expressiveness.
Secondly, the training and education of children's chorus will pay more attention to specialization and systematization. To improve the artistic level of children's chorus and cultivate more excellent children's chorus teams, teaching methods and materials will become more scientific and professional. In addition, more attention will be paid to vocal training and skill development; through scientific training methods, children's artistic quality and team consciousness will improve.
Thirdly, the future development of children's choirs will pay more attention to international integration, exchange and cooperation. With intensifying globalization, children's choirs will have more frequent and closer exchanges and cooperation internationally. Through exchanges and cooperation with outstanding foreign choral teams, Chinese children's chorus can learn from and absorb advanced international experience, which will promote its development.
Finally, the future development of children's choirs will face some challenges. Firstly, the cost of professional training and education for children's choirs is high, so guaranteeing the supply of funds and resources will be an important challenge. Secondly, the diversity of children's chorus audiences and the individualization of their needs is also a challenge: how to better meet the needs of different audience groups and provide more diversified performance forms and repertoire choices will be a major question for the development of children's chorus in the future.
Research Summary
This study has comprehensively explored the artistic style of the Northern Children's Chorus. By analyzing the acoustic characteristics, repertoire selection and arrangement, performance styles and techniques, and stage images and costumes of the northern children's chorus, we have gained an in-depth understanding of its artistic characteristics and performance forms. Regarding the training and education of children's chorus in the north, we found that teaching methods and the selection of teaching materials, vocal training and skill cultivation, and the cultivation of artistic quality and team consciousness are the keys. In analyzing the development and prospects of children's chorus in the North, we reviewed its development history, evaluated successful cases, and looked ahead to future development trends.
Problems and Prospects
Firstly, the Northern Children's Chorus has certain deficiencies in terms of acoustic conditions. Due to limited equipment, the sound equipment in some performance venues is imperfect, resulting in unsatisfactory singing effects. Therefore, effort should be devoted to improving the quality and effect of sound equipment to enhance singing quality.
Secondly, there are problems in repertoire selection and arrangement. At present, the repertoire selection of children's chorus in the north is relatively monotonous, lacking diversity and innovation, and the arrangement of the repertoire is not rich and varied enough, lacking a sense of hierarchy and highlights. Therefore, the discovery and creation of repertoire should be strengthened, with a focus on diversity and uniqueness, to improve the artistry and ornamental value of the singing.
In addition, there are also problems in performance style and technique. The performance style of the northern children's chorus generally favors tradition, lacking modernity and innovation, and the performance skills of some choral groups are not mature enough and need further improvement. Therefore, new performance styles and techniques should be constantly explored to inject more vitality and charm into the Northern Children's Chorus.
The stage image and costumes of the Northern Children's Chorus also need improvement. At present, some choral teams lack unity and professionalism in stage image and costumes and have not formed a unique style and image. Therefore, attention should be paid to the design and presentation of the stage image, carefully choosing suitable costumes to create a recognizable and artistic image of the northern children's chorus.
The stage set and lighting effects also need strengthening. At present, the stage sets and lighting effects of some performance venues are not exquisite and professional enough, which affects the viewing experience of the whole performance. Therefore, the design and arrangement of stage scenery and lighting effects should be strengthened to enhance the visual effect and artistry of the performance.
In the training and education of children's chorus in the north, there are also problems with teaching methods and the choice of teaching materials. The teaching methods of some choral groups are relatively traditional, lacking diversity and pertinence, while the selection of teaching materials needs to be more careful and scientific to suit the characteristics and needs of children of different ages. Therefore, teaching methods should be continuously explored and innovated, suitable teaching materials actively sought, and the educational quality and achievements of children's chorus in the North improved.
In terms of vocal training and skill development, the Northern Children's Choir is encouraged to strengthen professional vocal training and improve control of pitch and tone, while focusing on cultivating the children's artistic qualities and team awareness and improving their singing skills and cooperative ability.
Finally, for the development and prospects of children's chorus in the north, the exchange and cooperation of choral teams should be actively advocated and supported to promote interaction and development. At the same time, close attention should be paid to relevant developments and trends at home and abroad, and development strategies adjusted and updated to maintain the leading position of the Northern Children's Chorus in the field of art.
Camptothecin bioprocessing from Aspergillus terreus, an endophyte of Catharanthus roseus: antiproliferative activity, topoisomerase inhibition and cell cycle analysis
Attenuation of camptothecin (CPT) productivity by fungi upon preservation and subculturing is the challenge that prevents fungi from becoming an industrial platform for CPT production. Thus, screening for novel endophytic fungal isolates with metabolic stability for CPT production was the objective of this work. Catharanthus roseus is a medicinal plant with diverse bioactive metabolites that could harbor a plethora of novel endophytes with unique metabolites. Among the endophytes of C. roseus, Aspergillus terreus EFBL-NV OR131583.1 had the highest CPT-producing potency (90.2 μg/l); the chemical identity of the putative CPT was verified by HPLC, FT-IR, NMR and LC-MS/MS. The putative A. terreus CPT had the same molecular mass (349 m/z) and molecular fragmentation patterns as the authentic one, as revealed by the MS/MS analyses. The purified CPT had a strong activity against MCF7 (5.27 μM) and UO-31 (2.2 μM) cells, with stronger inhibition of Topo II (IC50 value 0.52 nM) than of Topo I (IC50 value 6.9 nM). The CPT displayed a high wound-healing activity towards UO-31 cells, stopping their metastasis, matrix formation and cell migration. The purified CPT induced cellular apoptosis of UO-31 cells by ~17-fold and arrested their cell division at the S-phase, compared to control cells. Upon optimization with a Plackett-Burman design, the yield of CPT by A. terreus was increased ~2.6-fold compared to the control. The yield of CPT by A. terreus was sequentially suppressed with fungal storage and subculturing, losing ~50% of its CPT productivity by the 3rd month and 5th generation. However, the productivity of the attenuated A. terreus culture was completely restored by adding 1% surface-sterilized leaves of C. roseus, and the CPT yield was increased ~3.2-fold over the first culture (315.2 μg/l). The restoration of the CPT productivity of A. terreus in response to the indigenous microbiome of C. roseus points to A. terreus-microbiome interactions releasing a chemical signal that triggers the CPT productivity of A. terreus. This is the first report exploring the potency of A. terreus, an endophyte of C. roseus, as a platform for the industrial production of CPT, with affordable sustainability through the addition of the C. roseus microbiome. Supplementary Information: The online version contains supplementary material available at 10.1186/s12934-023-02270-4.
Introduction
Camptothecin (CPT) is a pentacyclic quinoline alkaloid first isolated from Camptotheca acuminata (Nyssaceae, happy tree) in the southwest provinces of China [1]. CPT displays a powerful broad-range antiproliferative activity towards various types of solid tumors [2]. CPT derivatives are among the most prescribed anticancer drugs, after Taxol and vincristine [3]. The powerful antiproliferative activity of CPT derivatives derives from their high affinity for binding topoisomerase I and II [4], causing subsequent inhibition of enzyme activity. DNA topoisomerases exist in all nucleated cells, maintaining the topology of DNA strands during DNA replication, RNA transcription, recombination, chromatin association and remodeling [4-6]. Topoisomerase I breaks only one strand of duplex DNA, giving a 3′-phosphotyrosine intermediate, whereas type II breaks both strands of the DNA duplex with the formation of a pair of 5′-phosphotyrosine covalent intermediates [4]. The topoisomerase causes single/double strand breakages, relaxing the DNA strands, and catalyzes the religation of the cleaved DNA [4]. In the presence of CPT, the DNA-topoisomerase I/II complex is stabilized, preventing the subsequent steps of religation and cleavage [7-10]. CPT is one of the monoterpenoid indole alkaloids (TIA) derived from the strictosidine precursor, which is produced by the combination of the monoterpenoid secologanin and the indole amine tryptamine [11,12].
Camptotheca acuminata is the major source of CPT; the seeds and bark of this plant contain ~0.2-0.3%, whereas the leaves contain up to 0.4% CPT [13-15]. The bark of C. acuminata contains about 0.5% of 10-hydroxycamptothecin [13]. However, the tiny yield, combined with the heavy demand for this compound, has caused drastic harvesting of the plant, with subsequent negative effects on its natural ecosystem, in addition to the restriction of this plant to certain geographical niches [16-18]. Additionally, the natural low abundance, diverse aromaticity and complexity of extraction of CPT from plants are major challenges [19,20]. Endophytic fungi from medicinal plants are considered an unexploited reservoir of numerous secondary metabolites with diverse activities, which could be due to horizontal gene transfer, sharing the diverse molecular biosynthetic machineries of the plant host and its endogenous microbiome [21-25]. The metabolic potency of fungi for CPT biosynthesis was first demonstrated by the ability of Entrophospora infrequens, an endophyte of Nothapodytes foetida, to produce CPT [26-28], after which a plethora of endophytic fungi from various plants with the ability to produce CPT were reported [21-25]. The biosynthetic potency of endophytic fungi for CPT raises the prospect of industrial applications of fungi, given their fast growth, the feasibility of bulk biomass production and their independence from environmental conditions [26,27,29]. However, the remaining obstacle to employing fungi for the commercial production of CPT is the attenuation of the CPT biosynthetic machinery, with an obvious subsequent reduction of CPT yield upon fungal storage and subculturing [14,26-30]. Several trials have been implemented to restore the biosynthetic potency of fungi via co-cultivation with the microbiome of the host plant or the addition of different plant extracts. The yield of CPT by Aspergillus terreus, an endophyte of Ficus elastica [23], Cestrum parqui [21] and Cinnamomum camphora [22], by A. flavus, an endophyte of Astragalus fruticosus [25], and by Penicillium chrysogenum, an endozoic fungus of Cliona sp. [24], was strongly reduced with fungal storage. Thus, the main objective of this study was to isolate a novel endophytic fungal isolate with plausible CPT biosynthetic stability and to assess its antiproliferative and biological activities and its topoisomerase inhibition activity.
Collection of the plant samples and isolation of endophytic fungi
Catharanthus roseus plants were collected from the Botanical Garden of Zagazig University, Zagazig, Alsharqia province, Egypt, in September 2021. Fresh parts, including leaves, stems and flowers, were brought to the lab in sterile plastic bags, washed thoroughly with sterile distilled water, and sectioned into small segments (1 × 1 cm). The plant parts were surface sterilized with 70% ethyl alcohol for 1 min and 2.5% sodium hypochlorite for 2 min, then washed with sterile distilled water to exclude any epiphytic microbial flora [31,32]. The sterilized plant segments were placed on the surface of potato dextrose agar (PDA) and Czapek's-Dox media with the antibacterial agent ampicillin (1 μg/ml) and incubated for 8 days at 30 °C [33,34]. The recovered fungal isolates were morphologically identified according to macroscopic and microscopic features following the universal keys of fungal identification [35-40].
Molecular identification of the isolated endophytic fungi
The most potent CPT-producing fungal isolate was molecularly confirmed based on its ITS1-ITS2 sequences [31,41,42]. The genomic DNA (gDNA) was extracted with cetyltrimethylammonium bromide (CTAB) reagent and used as a PCR template with the primers ITS5 5′-GGA AGT AAA AGT CGT AAC AAG G-3′ and ITS4 5′-TCC TCC GCT TAT TGA TAT GC-3′. The PCR reaction contained 10 μl of 2× PCR master mixture (i-Taq™, Cat. No. 25027, INTRON Biotech), 2 μl of the gDNA and 1 μl of each primer (10 pmol), and was completed to 20 μl with sterile distilled water. The PCR was programmed for initial denaturation at 94 °C for 2 min, then 35 cycles of denaturation at 94 °C for 30 s, annealing at 55 °C for 10 s and extension at 72 °C for 30 s, with a final extension at 72 °C for 2 min. The PCR products were analyzed on a 1.5% agarose gel in TBE buffer and sequenced on an Applied Biosystems Sequencer (HiSQV Bases, Version 6.0) with the same primers. The sequences were searched non-redundantly with the BLAST tool and aligned with the ClustalW muscle algorithm [43], and the phylogenetic relationship was constructed with the neighbor-joining method with 100 bootstrap replications [44].
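A minimal sketch of the alignment-to-tree step using Biopython (the file name and format are placeholders; the 100× bootstrap support used in the study is omitted for brevity):

```python
from Bio import AlignIO, Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

# Assumes the ITS sequences were already aligned (e.g. with ClustalW) and
# saved as "its_aligned.aln"; both name and format are hypothetical here.
alignment = AlignIO.read("its_aligned.aln", "clustal")

dm = DistanceCalculator("identity").get_distance(alignment)  # pairwise distances
tree = DistanceTreeConstructor().nj(dm)                      # neighbor-joining
Phylo.draw_ascii(tree)
```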
Screening and chromatographic analyses of CPT production by the recovered fungi
The CPT productivity of the recovered endophytic fungal isolates was screened by growing them on potato dextrose broth (PDB) (BD, Difco, Cat# DF0549-17-9) [22,23,45,46]. An agar plug from a 6-day-old PDA culture of each fungus was inoculated into 50 ml PDB per 250 ml Erlenmeyer flask and incubated at 30 °C for 15 days; the cultures were filtered, and CPT was extracted from the fungal filtrates with methylene chloride and concentrated on a rotary evaporator to an oily residue. The extract was fractionated by TLC (Merck pre-coated silica gel plates, Silica gel 60 F254, 1 mm, 20 × 20 cm; Merck KGaA, Darmstadt, Germany) with dichloromethane and methanol (9:1 v/v) as the solvent system. After running, the CPT spot was detected by illumination at λ 254 nm and normalized to the standard (Cat. 7689-03-4). Putative spots giving the same blue color and relative mobility as the standard were considered CPT. The intensity of the putative spots was determined with the ImageJ package, relative to known concentrations of authentic CPT. CPT was extracted from the CPT-containing silica spots [21-23] and analyzed by HPLC (YOUNG In, Chromass) on an RP-C18 column (Cat. #959963-902) with methanol/water (60:40 v/v) at a flow rate of 1.0 ml/min for 20 min. The CPT concentrations were assessed from the retention time and peak area at λ 360 nm, compared to authentic CPT [21-25], and the chemical identity was confirmed from the retention time and peak area relative to the authentic standard.
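As a minimal sketch of how a CPT concentration can be read off a calibration of authentic-standard peak areas at λ 360 nm (all numbers are illustrative, not the study's data):

```python
import numpy as np

# Illustrative calibration: peak areas of authentic CPT standards.
std_conc = np.array([5.0, 10.0, 25.0, 50.0, 100.0])  # CPT, ug/l
std_area = np.array([120, 250, 610, 1240, 2480])     # integrated peak areas

slope, intercept = np.polyfit(std_conc, std_area, 1)  # linear calibration fit

sample_area = 2230                                    # peak area of a fungal extract
cpt_conc = (sample_area - intercept) / slope
print(f"Estimated CPT concentration: {cpt_conc:.1f} ug/l")
```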
UV-Vis, FT-IR, NMR, LC-MS/MS analyses
The purified putative CPT samples were dissolved in methanol and scanned by UV-Vis spectrophotometry over the wavelength range λ 200-500 nm (RIGOL, Ultra-3000 Spectrophotometer), with methanol as the blank baseline. Authentic CPT was scanned under the same conditions, and the spectroscopic identity of the sample was assigned by comparison to the authentic one.
The FT-IR spectra of the CPT samples were acquired from 400 to 4000 cm⁻¹ with KBr discs, compared to authentic CPT. The chemical identity of the extracted CPT was further resolved by ¹H NMR (JEOL, ECA-500II) [21-23]. The chemical shifts (δ scale) were expressed in ppm and the coupling constants in Hz.
The chemical identity of the CPT samples was analyzed by liquid chromatography-tandem mass spectrometry (LC-MS/MS) (Thermo Scientific LCQ Deca mass spectrometer, equipped with an electrospray source operated in positive-ion mode) [23]. The mobile phases consisted of water with 0.1% formic acid (A) and acetonitrile with 0.1% formic acid (B). The samples were injected into a Thermo Scientific Hypersil Gold aQ (C18) column, and the elution system was a gradient of 2-98% mobile phase B over 30 min, at a flow rate of 0.2 ml/min, for a total run time of 40 min. The electrospray ionization (ESI) source operated with a spray voltage of 4 kV and a capillary temperature of 250 °C. The ion trap was scanned in positive-ion mode from m/z 300-2000. The chemical identity of the components was assigned from their mass-spectral fragmentation patterns and retention times. Further fragmentation analyses were performed on the selected peaks of the putative molecular mass corresponding to authentic CPT at 349.1 m/z. The identity of the extracted CPT was confirmed from the molecular fragmentation pattern corresponding to the authentic one.
Antifungal activity-guided assay of the putative CPT samples
The activity of the extracted CPT from the selected fungal isolates was assessed against various CPT-producing and non-CPT-producing fungal isolates recovered from the flowers of Catharanthus roseus. Different concentrations of the putative CPT extracts were loaded into 9 mm wells cut in PDA culture plates of the recovered endophytic fungal isolates from C. roseus. The plates were incubated at 30 °C for 5 days, and the diameters of the inhibition zones were measured, with 1% DMSO as the negative control.
Antiproliferative activity of the extracted CPT
The activity of the extracted CPT was assessed against breast carcinoma (MCF7) and renal cancer (UO-31) cell lines, compared to normal oral epithelial cells (OEC), with the MTT assay [47]. The MCF7 (ATCC HTB-22) and UO-31 (EZT-UO31-1) cell lines were obtained from the American Type Culture Collection and EZ-Biosystems, respectively. The cells were cultured in DMEM (Invitrogen/Life Technol.) supplemented with 10% FBS (Hyclone), 10 μg/ml insulin (Sigma), 50 U/ml penicillin, and 50 μg/ml streptomycin. All other chemicals and reagents were from Sigma or Invitrogen. A 96-well microtiter plate was seeded at 10³ cells/well, incubated overnight at 37 °C, amended with various concentrations (1.0, 2.0, 4.0, 8.0, and 10 μM) of the purified CPT dissolved in 2% DMSO as vehicle, and further incubated for 48 h under the same conditions. DMSO at 2% was used as the negative control. The MTT reagent was then added, and the purple formazan complex that developed was measured at λ 570 nm. The IC50 was defined as the concentration of CPT reducing tumor cell growth by about 50% relative to the drug-free controls.
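IC50 values of the kind reported here are typically obtained by fitting a sigmoidal dose-response curve to the MTT viability data. A hedged sketch with a four-parameter logistic fit is given below; only the tested concentrations are taken from the text, and the viability percentages are invented for illustration:

```python
# Sketch of an IC50 estimate from MTT data by a four-parameter logistic fit;
# only the concentrations come from the text, viabilities are invented.
import numpy as np
from scipy.optimize import curve_fit

conc = np.array([1.0, 2.0, 4.0, 8.0, 10.0])           # tested CPT levels (uM)
viability = np.array([88.0, 72.0, 45.0, 21.0, 15.0])  # % of control (assumed)

def four_pl(x, bottom, top, ic50, hill):
    """Four-parameter logistic dose-response curve."""
    return bottom + (top - bottom) / (1.0 + (x / ic50) ** hill)

params, _ = curve_fit(four_pl, conc, viability,
                      p0=[0.0, 100.0, 4.0, 1.0], maxfev=10000)
print(f"Estimated IC50 = {params[2]:.2f} uM")
```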
Kinetics of DNA topoisomerase I inhibition in response to the extracted CPT
The human topoisomerase I activity was assessed based on the conversion of supercoiled circular DNA into relaxed DNA [10]; relaxed DNA gives a lower fluorescence intensity than supercoiled DNA with the fluorescent dye H19 (Cat.# HRA020K, ProFoldin, Hu, USA). The Topo I reaction mixture contained HT buffer, 10 × supercoiled plasmid DNA, 1500 × dye H19, and 550 μl of 10 × H19 dilution buffer, and was incubated for 60 min at room temperature in the presence of different concentrations of the CPT. One unit of enzyme activity was defined as the amount of enzyme required to relax the supercoiled DNA in 30 min at 37 °C; the fluorescence emission intensity was measured at λ 535 nm with excitation at λ 485 nm [8].
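Because inhibited Topo I leaves the DNA supercoiled (high fluorescence) while the active enzyme relaxes it (low fluorescence), percent inhibition can be expressed relative to the two controls. A minimal sketch of that conversion follows; all fluorescence readings are assumed values:

```python
# Sketch of converting fluorescence readings (Ex 485 nm / Em 535 nm) into
# percent Topo I inhibition; the example readings are assumed values.
def percent_inhibition(f_sample: float,
                       f_enzyme_only: float,
                       f_no_enzyme: float) -> float:
    """
    f_enzyme_only : fully relaxed DNA, 0% inhibition (lowest fluorescence)
    f_no_enzyme   : fully supercoiled DNA, 100% inhibition (highest fluorescence)
    """
    return 100.0 * (f_sample - f_enzyme_only) / (f_no_enzyme - f_enzyme_only)

print(percent_inhibition(f_sample=820.0, f_enzyme_only=400.0, f_no_enzyme=1000.0))
# -> 70.0 (% inhibition)
```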
Wound healing of tumor cells in response to the extracted CPT
The wound healing and cell migration potency of the tested tumor cells in response to the extracted CPT was assessed [48,49]. Briefly, UO-31 cells were seeded at 5 × 10⁴ cells per 40-mm plate and incubated for 24 h to form a confluent monolayer (about 60,000 cells/cm²), and then a wound/scratch was made. The plates were rinsed with PBS and treated with the CPT extract; DMSO was used as the control. Wound closure due to cell migration was monitored and imaged with a phase-contrast microscope. The wound healing percentage was determined from the gap area of the treated cells compared to the control cells.
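The wound-healing percentage described above is a simple ratio of gap areas. A minimal sketch follows; the example areas (e.g., as measured in ImageJ) are assumed:

```python
# Sketch of the wound-closure percentage used above; gap areas (e.g., measured
# in ImageJ, in pixels^2) are assumed example values.
def wound_healing_percent(area_t0: float, area_t: float) -> float:
    """Percent closure of the scratch between time 0 and time t."""
    return 100.0 * (area_t0 - area_t) / area_t0

print(wound_healing_percent(area_t0=50000, area_t=22000))  # -> 56.0 % closure
```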
Apoptosis and cell cycle analyses of UO-31 cells in response to the extracted CPT
The apoptosis of UO-31 cells was detected with the Annexin V-FITC Apoptosis Kit (Cat #: K101-25) according to the manufacturer's instructions. The assay relies on the fact that, upon initiation of apoptosis, phosphatidylserine (PS) on the inner leaflet of the plasma membrane is externalized to the cell surface, where it is readily detected by the fluorescent stain Annexin V, which binds PS with high affinity; the Annexin V-PS interaction is then analyzed by flow cytometry [50]. Briefly, UO-31 cells were seeded into 12-well culture plates (2 × 10⁶ cells/well), amended with different concentrations of the extracted CPT, and incubated for 48 h under standard conditions. The cells were collected, washed with phosphate-buffered saline and annexin-binding buffer, and stained with Annexin V-FITC and PI according to the manufacturer's instructions. The assay was incubated in the dark for 15 min at room temperature, and annexin-binding buffer was added before flow cytometry analysis. Annexin V-FITC binding was detected by flow cytometry (Ex 488 nm; Em 530 nm) with the FITC signal detector, and PI staining with the phycoerythrin emission signal detector.
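Quadrant gating of the Annexin V-FITC/PI data into viable, early-apoptotic, late-apoptotic, and necrotic fractions can be sketched as below; the intensity distributions and gating thresholds are simulated, not instrument-derived:

```python
# Sketch of quadrant gating for Annexin V-FITC / PI events; the intensity
# distributions and cutoffs are simulated, not taken from the cytometer.
import numpy as np

def classify_events(fitc, pi, fitc_cut, pi_cut):
    """Return fractions of viable, early-, late-apoptotic, and necrotic cells."""
    n = len(fitc)
    early = np.sum((fitc > fitc_cut) & (pi <= pi_cut)) / n      # Annexin V+ / PI-
    late = np.sum((fitc > fitc_cut) & (pi > pi_cut)) / n        # Annexin V+ / PI+
    necrotic = np.sum((fitc <= fitc_cut) & (pi > pi_cut)) / n   # Annexin V- / PI+
    return {"viable": 1.0 - early - late - necrotic,
            "early": early, "late": late, "necrotic": necrotic}

rng = np.random.default_rng(0)
fitc = rng.lognormal(1.0, 0.8, 10_000)  # simulated FITC intensities
pi = rng.lognormal(0.5, 0.8, 10_000)    # simulated PI intensities
print(classify_events(fitc, pi, fitc_cut=5.0, pi_cut=4.0))
```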
The cell cycle of UO-31 cells was analyzed with the Propidium Iodide (PI) Flow Cytometry Kit (Cat#. ab139418) according to the manufacturer's instructions. UO-31 cells were seeded in a 12-well microtiter plate, incubated for 12 h at 37 °C, amended with the IC25 value of the extracted CPT, and incubated for a further 48 h. The cells were collected, fixed in 1 ml of ice-cold 70% ethanol for 2 h at 4 °C, rehydrated with 1 ml PBS, and stained with 500 μl of PI with RNase for 30 min at room temperature in the dark. The DNA content of the cells was analyzed by flow cytometry at Ex λ 493 nm and Em λ 636 nm, and the percentages of G0-G1, S, and G2-M cells were calculated with fluorescence-activated cell sorting (FACS) software.
Bioprocessing of the CPT yield by selected fungal isolates with Plackett-Burman Design
The nutritional requirements of the potent isolate were optimized to maximize the yield of CPT with the Plackett-Burman design [21-23,51-53]. Nineteen variables, namely malt extract, yeast extract, glucose, sucrose, salicylic acid, asparagine, glutamine, cysteine, tryptophan, glycine, phenylalanine, peptone, pH, incubation time, sodium acetate, citric acid, CaCl2, NaCl, and methyl jasmonate, were assessed, each represented at a high (+1) and a low (−1) level. Statistical nutritional optimization is used frequently to evaluate the interactions of the independent factors and their consequences on the response (CPT yield), unlike the traditional one-factor-at-a-time optimization method. The Plackett-Burman design is based on the first-order model Y = β0 + Σ βiXi, where Y is the predicted CPT production, Xi is an independent variable, βi is the linear coefficient, and β0 is the model intercept. All runs were conducted in triplicate, and the average CPT production was used as the response.
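Fitting this first-order Plackett-Burman model to coded (+1/-1) factor levels reduces to an ordinary least-squares problem. The sketch below uses a small illustrative 4-run, 3-factor matrix rather than the 19-factor design of the study, and the responses are invented:

```python
# Sketch of fitting the first-order Plackett-Burman model Y = b0 + sum(bi*Xi)
# on coded (+1/-1) levels; this toy 4-run, 3-factor matrix is illustrative
# (the study used 19 factors), and the responses are invented.
import numpy as np

X = np.array([[ 1, -1,  1],
              [-1,  1,  1],
              [ 1,  1, -1],
              [-1, -1, -1]], dtype=float)   # coded design matrix
y = np.array([255.5, 120.0, 90.0, 9.3])     # CPT responses, ug/l (invented)

A = np.hstack([np.ones((X.shape[0], 1)), X])  # add intercept column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print("intercept b0 =", coef[0], "| factor effects bi:", coef[1:])
```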
Metabolic biosynthetic stability of CPT productivity by the potent fungal isolates
The metabolic biosynthetic stability of CPT production by the potent fungal isolate was assessed upon fungal storage and subculturing. The axenic CPT-producing fungal culture was successively subcultured for nine generations; for each generation, a plug was centrally inoculated on a PDA plate and incubated at 30 °C for an 8-day lifespan [25,53,54]. The CPT productivity of each generation was determined by growing the fungus on the optimized medium under standard conditions, after which the CPT was extracted and quantified by HPLC.
In addition, the axenic first-generation fungal culture was stored as a PDA slope culture at 4 °C and tested monthly for CPT productivity over 7 months by growing on PDA media; the CPT was extracted and quantified as described above.
Restoring the CPT biosynthetic potency of A. terreus upon addition of organic extracts and the indigenous microbiome of C. roseus
To restore the metabolic biosynthetic potency of CPT production by A. terreus, different organic extracts of C. roseus (methylene chloride, methanol, ethyl acetate, petroleum ether, and isopropyl alcohol) were amended to the CPT production medium. Ten grams of fresh C. roseus leaves were pulverized in each solvent (100 ml) for 12 h, and the extracts were filtered, centrifuged, and concentrated to 20 ml. The plant extracts were added at 1, 5, and 10 ml to 3-day-old fungal pre-cultures, and the cultures were incubated for 15 days under the standard conditions. After incubation, CPT was extracted and quantified by HPLC.
The influence of the indigenous microbiome of C. roseus leaves on restoring the CPT biosynthetic potency of A. terreus was also assessed. Leaves of C. roseus were cut into small pieces, surface sterilized, and amended into a 3-day-old culture of A. terreus grown on PDB medium; incubation was continued for 15 days, after which the CPT was extracted and quantified by HPLC. Surface-sterilized leaves of C. roseus inoculated into blank PDB media at the same concentrations served as controls, along with an A. terreus culture without plant parts.
Fungal deposition
The ITS sequence of the isolate Aspergillus terreus EFBL-NV was deposited in GenBank under accession number OR131583.1.
Statistical analysis
The experiments were conducted in triplicate, and the results are expressed as mean ± SD. Statistical analyses were conducted by one-way ANOVA, and Tukey's HSD test was performed with CoStat software (CoStat 2005; Version 6.311).
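A minimal sketch of the triplicate statistics described above (one-way ANOVA followed by Tukey's HSD) is given below; the three groups and their replicate values are invented for illustration:

```python
# Sketch of the triplicate statistics: one-way ANOVA plus Tukey's HSD;
# group values are invented replicates, and SciPy >= 1.11 is assumed
# for scipy.stats.tukey_hsd.
import numpy as np
from scipy import stats

control = np.array([90.1, 88.5, 91.2])       # CPT yield, ug/l (assumed)
optimized = np.array([255.5, 250.3, 258.8])  # (assumed)
stored = np.array([68.1, 70.4, 66.9])        # (assumed)

f_stat, p_value = stats.f_oneway(control, optimized, stored)
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_value:.3g}")

print(stats.tukey_hsd(control, optimized, stored))  # pairwise post hoc table
```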
Isolation of the fungal endophytes of Catharanthus roseus, screening for CPT production, and molecular identification
Twenty-five fungal isolates were recovered from the twigs, leaves, and flowers of C. roseus and were morphologically identified based on their macroscopic and microscopic features according to universal identification keys. Ten fungal isolates were recovered on PDA medium and 15 isolates on Czapek's-Dox medium. These fungi belong to the genera Aspergillus, Penicillium, Alternaria, Rhizopus, and Trichoderma. Ten endophytic fungal isolates were recovered from the flowers and fifteen from the leaves of C. roseus (Additional file 1: Table S1). The recovered fungal isolates were grown on PDB media and incubated under the standard conditions, and the CPT was extracted and quantified by TLC and HPLC. From the screening profile (Fig. 1B-D), the highest CPT productivity was reported for Aspergillus terreus, an endophyte of C. roseus flowers (90.3 μg/l), followed by Alternaria brasicola (97.9 μg/l), A. fumigatus (69.9 μg/l), and A. flavus (67.6 μg/l). The yield of CPT was verified by HPLC; the HPLC chromatogram of the most potent CPT producer, A. terreus, together with a non-CPT producer as negative control, is shown in Fig. 1D. The putative sample gave the same retention time (4.7 min) as the authentic compound, supporting its chemical identity as CPT. The remaining endophytic fungal isolates from the flowers and leaves of C. roseus lacked the metabolic potency to produce CPT. Interestingly, the most potent CPT producer, A. terreus NV1, was recovered from the flowers, whereas three isolates, A. terreus NV2, NV3, and NV4, were recovered from the leaves of C. roseus with only tiny yields of CPT, suggesting that expression of the CPT biosynthetic genes depends more on the microbiome of the plant flowers than on that of the leaves. The marked fluctuation in yield between the flower-inhabiting A. terreus NV1 and the leaf-inhabiting A. terreus isolates underscores the key role of the fungal-plant host interaction and the host's physiological and biochemical identity in modulating the selective expression of CPT-encoding genes.
The morphologically identified potent CPT-producing endophyte of C. roseus, A. terreus NV1, was molecularly confirmed based on the sequence of its ITS region. The ITS amplicon of the fungal isolate was ~650 bp (Fig. 2). The PCR amplicon was sequenced, and the ITS sequence was searched non-redundantly by BLAST against the NCBI database. The ITS sequence of A. terreus EFBL-NV1 was deposited in GenBank under accession number OR131583.1. From the alignment profile and phylogenetic analysis of the ITS sequences, the isolate A. terreus EFBL-NV displayed 99% similarity with various A. terreus isolates, including accession # MG575483.
Chromatographic, spectroscopic, and LC-MS/MS analyses of the extracted CPT
The identity of the putative CPT from A. terreus was confirmed by UV-Vis, FTIR, 1H NMR, and LC-MS/MS analyses, compared to the authentic CPT. After incubation of the cultures, CPT was extracted and fractionated by TLC, and the spots of CPT-containing silica gel with the same mobility and color were scraped off and dissolved in methanol for chemical analysis (Fig. 3A). The purified CPT from A. terreus had the same UV-absorption pattern as the authentic CPT, with maximum absorbance at wavelength 360 nm (Fig. 3B). In the FTIR spectra, the purified CPT of A. terreus had peaks at 3406.6 and 3393.3 cm−1 assigned to the hydroxyl (OH) and amide group stretches, respectively, as well as distinct peaks at 2923.5, 1729.8, and 1604.5 cm−1 assigned to the aliphatic CH, ester group, and aromatic ring stretches, respectively. The stretching frequency peaks at 1268.9 and 1029.8 cm−1 were assigned to the COO group and aromatic C-H bends. Distinct CPT peaks were resolved at 3438, 1666, 1113, and 1035 cm−1, referring to the stretching of the OH, C=O, C=N, C-C(=O)-O, and C-O functional groups, respectively (Fig. 3C). From the FTIR spectrum, the purified CPT from A. terreus had the same functional group orientation and stretching patterns as the authentic compound, supporting the chemical identity of the purified sample as CPT. The chemical structure of the CPT from A. terreus was further resolved by 1H NMR, which displayed the same signals as the authentic compound, distributed between 1.0 and 8.0 ppm, with three proton signals resolved at 1.0-2.5 ppm corresponding to methyl, acetate, and acetylene groups, and signals for aromatic moieties resolved at 7.0-8.4 ppm (Fig. 3).
The molecular identity of the CPT was confirmed by LC-MS/MS analysis in positive mode. The CPT of A. terreus had the same molecular mass-to-charge ratio (349.2 m/z) as the authentic CPT from Camptotheca acuminata [1]. Moreover, the parent CPT molecule (349.2 m/z) was further fragmented by MS/MS with a collision energy of 35 electron volts (eV); fragments of molecular mass 57.0, 133.1, 167.9, 181.08, 220.009, 234.32, 248.94, 277.06, 303.2, and 305.01 m/z were recovered, matching the fragmentation pattern of the authentic CPT. In the first mass spectrum, a peak at retention time 5.67 min showed a molecular ion peak at m/z 349.12 [M + H]+, corresponding to the molecular formula C20H16N2O4. The peaks at retention times 5.67, 5.76, and 5.84 min all exhibited the protonated molecular ion peak [M + H]+ of CPT at m/z 349.12. Thus, the putative sample of A. terreus was chemically authenticated as CPT by normalization to the authentic compound.
Activity-guided assay of the putative CPT against CPT-producing and non-producing fungi
The antimicrobial activity of metabolites is frequently used as a preliminary indicator of antiproliferative activity, since the physiological behaviors of microbial cells are largely similar to those of tumor cells. The activity of the extracted CPT from A. terreus was assessed against the CPT-producing fungi, as well as against the non-producing fungal endophytes of C. roseus. The CPT of A. terreus was purified and assessed against the CPT-producing fungi A. terreus, A. fumigatus, A. flavus, and A. oryzae, in addition to the non-producing isolates Rhizopus oryzae, Mucor sp., T. atroviride, and P. polonicum. From the results (Fig. 4), the extracted CPT from A. terreus NV1 had no activity against the CPT-producing fungal isolates, in contrast to its dramatic activity against the non-CPT-producing fungi. The activity of A. terreus CPT against the non-CPT-producing fungi was concentration-dependent, as revealed by the inhibition zones (Fig. 4B, C). The diameters of the inhibition zones produced by A. terreus CPT against Rhizopus oryzae, Mucor sp., T. atroviride, and P. polonicum ranged between 27 and 37 mm at a CPT concentration of 12 μg/ml, normalized to 10% DMSO as a negative control. Interestingly, the lack of effect of CPT on the CPT-producing fungi, in contrast to its strong inhibitory effect on the non-producing fungi, suggests that the former possess a specific mechanism of resistance to CPT, perhaps by blocking transport of the compound into the fungal cytosol or by altering the orientation of the topoisomerase I and II targets so that they are inaccessible to CPT binding.
Antiproliferative, topoisomerase inhibition, and wound healing activities of the extracted A. terreus CPT
The activity of the extracted A. terreus CPT was assessed against the MCF7 and UO-31 cell lines at different CPT concentrations (1-10 μM). From the calculated IC50 values (Fig. 5A), the extracted A. terreus CPT had significant activity against the MCF-7 (5.2 μM) and UO-31 (2.25 μM) cell lines, compared to the authentic anticancer drug staurosporine, which gave IC50 values of 7.8 μM and 4.2 μM against the same cell lines, respectively. Thus, the extracted CPT of A. terreus displayed more powerful activity against MCF7 and UO-31 than the authentic anticancer drug staurosporine. From the IC50 values, the activity of A. terreus CPT against UO-31 was roughly two-fold higher than against MCF7, indicating the particular susceptibility of UO-31 to CPT, which might be due to easier entry into the cytosol and binding with the topoisomerases. The higher sensitivity of UO-31 to CPT might be related to the structure-activity relationships of CPT binding to the topoisomerases, in addition to the targeting of other metabolic processes and/or structural organelles.
The ability of the purified A. terreus CPT to inhibit DNA topoisomerases I and II was assessed. Different CPT concentrations were amended to the Topo I and II reaction assays, and the residual enzymatic activity was determined. From the results (Fig. 5B), the purified A. terreus CPT displayed about 13-fold stronger activity against topoisomerase II than topoisomerase I, matching the results obtained with staurosporine. The IC50 values of A. terreus CPT against topoisomerase I and II were 6.9 nM and 0.52 nM, respectively, and the inhibitory effects of A. terreus CPT and staurosporine were closely similar for topoisomerase II. The wound healing activity of UO-31 in response to A. terreus CPT treatment was assessed by inspecting gap closure after 24 and 48 h, compared to untreated (control) cells. The UO-31 cells were used for the further cell cycle and apoptosis analyses because of their sensitivity to the purified CPT (IC50 2.5 μM) compared to MCF-7 cells (IC50 5.2 μM). The percentage of scratch/gap closure was noticeably inhibited upon treatment with A. terreus CPT over the incubation time, compared to the control cells (Fig. 6A). The wound healing of the homogeneous UO-31 monolayer was approximately 55.5%, compared to 97% for the control cells, after 24 h (Fig. 6B). With prolongation of the incubation to 48 h, the wound closure of UO-31 cells was about 60.4% for the CPT-treated and 98% for the untreated cells, respectively. The remarkable suppression of wound healing indicates interference of CPT with the cell regeneration and matrix formation of the UO-31 tumor cells. Thus, the dual activity of CPT in binding topoisomerases I and II, in addition to preventing cellular matrix formation and motility, appears therapeutically advantageous.
Apoptosis and cell cycle analysis of UO-31 in response to CPT of A. terreus
The apoptotic process of UO-31 in response to the CPT of A. terreus was assessed by the Annexin V-PI assay, which is based on the externalization of membrane phosphatidylserine (PS) in the early stages of apoptosis, forming an Annexin V-PS complex that is readily analyzed by flow cytometry to resolve the different apoptosis stages. From the flow cytometry results (Fig. 7A-C), a significant shift of the cells into the apoptotic phase was observed in response to the CPT of A. terreus, compared to the untreated control cells. Upon treatment with A. terreus CPT, the percentages of UO-31 cells in early apoptosis, late apoptosis, and necrosis were ~15.5%, 12.02%, and 5.1%, respectively, whereas in the untreated controls these percentages were 0.58%, 0.1%, and 0.47%, respectively. Thus, upon addition of A. terreus CPT, the total apoptosis of UO-31 cells increased about 16-fold compared to the untreated cells.
The cell cycle of UO-31 was analyzed in response to the addition of A. terreus CPT by the propidium iodide assay. The cells were amended with the IC25 value (1.1 μM) of CPT, incubated, collected, and fixed in ice-cold ethanol, and the percentages of G0-G1, S, and G2-M cells were calculated. From the cell cycle analysis (Fig. 7E-G), the growth of UO-31 cells was maximally arrested at the S-phase, compared to the untreated control cells, whereas a similar effect was observed in the G0-G1 and G2-M phases for the treated and untreated cells. Overall, the CPT of A. terreus had a noticeable inhibitory effect on the cells at the S-phase, as revealed by the maximum growth arrest compared to the other cell cycle phases.
Bioprocess of CPT production by A. terreus using the Plackett-Burman design
The productivity of CPT by A. terreus was maximized by nutritional optimization, since the chemical components of the medium and their interactions are essential in controlling the biosynthesis of bioactive secondary metabolites [21-24, 33, 51]. The nutritional requirements for maximum CPT production by A. terreus were optimized with the Plackett-Burman design as a first-order model. The nineteen tested parameters, comprising the various carbon and nitrogen sources, growth elicitors, growth modulators, and physical factors for the growth of A. terreus, were studied at their lower and higher levels (Table 1). The impact of the tested variables on CPT productivity by A. terreus, with the predicted and corresponding actual responses and their residuals, is summarized in Table 2. The actual and predicted yields of CPT by A. terreus fluctuated markedly, from 9.3 to 255.5 μg/l, confirming the significance of the tested variables for CPT biosynthesis and the efficiency of the Plackett-Burman design. The F-value (9.8), p-value (<0.0007), and adjusted determination coefficient (Adj. R² = 0.92) indicate the adequacy of the model, as shown in Table 3. The main effects and normal probability of the tested factors were plotted (Fig. 8), revealing six independent factors, namely incubation time, yeast extract, glutamine, tryptophan, CaCl2, and methyl jasmonate, with a significant effect on CPT productivity by A. terreus. The 3D response surface plots of the most significant variables affecting the CPT productivity of A. terreus are illustrated in Fig. 9. The maximum yield of CPT (255.6 μg/l) by A. terreus was obtained at run #10, with the medium components malt extract (+1), yeast extract (−1), glucose (+1), sucrose (−1), salicylic acid (−1), asparagine (+1), glutamine (+1), cysteine (−1), tryptophan (+1), glycine (+1), phenylalanine (−1), peptone (−1), pH (+1), incubation time (+1), sodium acetate (+1), citric acid (+1), CaCl2 (−1), NaCl (+1), and methyl jasmonate (−1). The lowest CPT yields (9.1 μg/l) were recorded at runs #19 and #4. From the ANOVA analyses, the model was highly significant, as revealed by the Fisher's F-test value of 13.6 and the probability p-value of 0.0001. From the Plackett-Burman design, the most significant variables affecting CPT productivity by A. terreus were yeast extract, glutamine, tryptophan, incubation time, CaCl2, and methyl jasmonate. The actual yield of A. terreus CPT fluctuated from 9.1 to 255.1 μg/l, confirming the significance of the tested variables for CPT biosynthesis. Thus, the optimal composition for maximum CPT production by A. terreus contained yeast extract (−1), glutamine (+1), tryptophan (+1), methyl jasmonate (−1), and CaCl2 (−1), after 15 days of incubation; the first-order polynomial equation for camptothecin production by A. terreus with respect to the significant independent variables was derived accordingly. Upon Plackett-Burman optimization, the yield of CPT by A. terreus was therefore increased by about 2.8-fold (255 μg/l) compared to the control PDB medium (~90.1 μg/l).
Productivity of CPT by A. terreus upon subculturing and storage
The biosynthetic stability of A. terreus for CPT production upon subculturing and storage was assessed. The first isolate of A. terreus, preserved as slant cultures on PDA for 8 days at 30 °C, was subcultured to the 9th generation, and the CPT productivity of each generation was determined by TLC. A noticeable loss of CPT productivity by A. terreus was observed with successive subculturing (Fig. 10A), falling to 116 μg/l by the 5th generation. By the 7th subculture, the yield of CPT by A. terreus was reduced 3.7-fold (70 μg/l) compared to the first culture. Thus, attenuation of the CPT yield with successive subculturing of A. terreus was clearly recorded.
In addition, the effect of storage of A. terreus as a slope culture on PDA at 4 °C was evaluated at intervals over 7 months. Remarkably, A. terreus lost about 50% of its CPT productivity by the 3rd month of storage as a slope culture at 4 °C. The CPT yield of the first A. terreus culture (257.2 μg/l) decreased to 68.1 μg/l by the 5th month of storage, a ~3.7-fold reduction (Fig. 10B).
Effect of organic solvent extracts and indigenous microbiome of C. roseus on restoring the CPT productivity of A. terreus
Reduction of CPT productivity by fungi upon subculturing and storage is the major metabolic change limiting further industrial applications of fungi [21-24,33]. Several hypotheses have been proposed to unravel the fungal-plant interactions, suggesting that the biosynthetic machinery of CPT in the endophytic fungus depends on chemical signals from the plant or from its indigenous microbiome. Therefore, the 5th-generation A. terreus culture was amended with different organic solvent extracts of C. roseus, incubated under standard conditions, and the CPT was then extracted and quantified by TLC and HPLC. From the results (Fig. 10C), the organic solvent extracts of C. roseus (methanol, dichloromethane, ethyl acetate, petroleum ether, and isopropyl alcohol) had no obvious effect on restoring the biosynthesis of CPT by A. terreus. The lack of effect of this wide polarity range of solvents on the yield of CPT argues against the involvement of extractable prompting signals from the host plant, or suggests that such signals are weakened during downstream extraction processing.
The biosynthetic potency of CPT by the 5th-generation A. terreus culture was then assessed in response to the addition of different plant parts. Interestingly, the yield of CPT by A. terreus was strongly restored and enhanced upon addition of surface-sterilized parts of C. roseus flowers and twigs; plant parts without A. terreus were used as negative controls. The yield of CPT by the 5th culture of A. terreus was maximally increased by the addition of 1% C. roseus flowers (305 μg/l) and twigs (315 μg/l). Thus, the overall yield of CPT by the 5th-generation A. terreus was completely restored and increased about 1.3-fold above that of the first culture upon addition of 1% C. roseus leaves. With the addition of the indigenous C. roseus microbiome, the biosynthetic machinery of A. terreus CPT was therefore reinstated.
Discussion
CPT derivatives have been recognized as among the most prescribed anticancer drugs for most solid tumors, owing to their unique ability to bind topoisomerase I of tumor cells, thereby keeping the DNA supercoiled, preventing DNA relaxation, and leading to cell death [4]. The biosynthetic potency of endophytic fungi for CPT raises hope for the commercial production of this compound, given the fast fungal growth, accessibility of bulk biomass, independence from environmental conditions, and feasibility of metabolic engineering; however, the prospect of fungi for commercial CPT production has been challenged by the loss of CPT productivity upon storage and subculturing [14,22,23,29,55]. Thus, the objective of this work was to screen for a novel fungal endophyte with higher productivity and affordable biosynthetic CPT stability, motivating our screening for CPT production by fungal endophytes inhabiting medicinal plants with traditional pharmaceutical uses, especially Catharanthus roseus. Catharanthus roseus is one of the most important medicinal plants worldwide, possessing a wide range of phytochemicals with diverse biological activities, including antioxidant, antimicrobial, and anticancer properties [56,57]. Vinblastine and vincristine, common anticancer drugs, were isolated from C. roseus [58].
Among the recovered endophytic fungal isolates inhabiting the flowers of C. roseus, A. terreus EFBL-NV1 was recognized as the most potent CPT-producing isolate (~90.1 μg/l); it was molecularly confirmed based on the ITS sequence and deposited in GenBank under accession # OR131583.1. Consistently, isolates of A. terreus that are endophytes of F. elastica, Cestrum parqui, Astragalus fruticosus, and Cinnamomum camphora have been recognized as CPT producers, indicating that A. terreus harbors a distinct CPT biosynthetic machinery regardless of the plant host [21-25,59]. Remarkably, the common occurrence of A. terreus isolates with the potency for CPT production among various medicinal plants underscores the efficacy of the CPT biosynthetic machinery of A. terreus as a reciprocal mechanism for plant protection via fungal-plant interaction. The disparity in the yield of CPT among A. terreus isolates inhabiting different plant hosts might be attributed to fungal-microbiome interactions modulating the molecular expression of the CPT-encoding genes of A. terreus [21-24].
The chemical identity of the putative CPT from A. terreus was confirmed by UV-Vis, FTIR, 1H NMR, and LC-MS/MS analyses, establishing the purified sample as CPT. The putative CPT had the same molecular mass (349 m/z) and molecular fragmentation pattern, as revealed by MS and MS/MS, as the authentic CPT of C. acuminata [1,22]. The MS/MS fragmentation pattern of the present CPT sample coincided with the fragmentation patterns reported for CPT from Nothapodytes nimmoniana [55,60] and A. terreus [21,23,24,59]. Thus, from the FT-IR, 1H NMR, and LC-MS/MS analyses, the putative sample of A. terreus was chemically authenticated as CPT by normalization to the authentic compound.
The antimicrobial activity of metabolites has been used as a preliminary indicator of antiproliferative activity, since numerous physiological features of microbial cells are largely shared with tumor cells. The extracted CPT had no activity against the CPT producers A. terreus, A. fumigatus, A. flavus, and P. chrysogenum, in contrast to its dramatic, concentration-dependent activity against the non-CPT producers Rhizopus sp., Mucor sp., T. atroviride, and P. polonicum. The lack of inhibitory effect of CPT on the CPT-producing fungi suggests that these fungi possess specific resistance mechanisms, perhaps by blocking transport of the compound into the fungal cytosol or by altering the orientation of the topoisomerase target so that it is inaccessible to CPT binding [22]. The antiproliferative activity of the extracted A. terreus CPT was assessed against the MCF7 and UO-31 cell lines; the extracted CPT had significant activity against MCF-7 (5.2 μM) and UO-31 (2.24 μM) cells, compared to staurosporine as the reference drug. Consistently, the antiproliferative activity of the extracted A. terreus CPT coincided with that of CPT from various endophytic fungi against various cell lines [21,22]. The affinity of the purified A. terreus CPT to inhibit DNA topoisomerases I and II was assessed; the purified CPT displayed about 13-fold stronger activity against topoisomerase II than topoisomerase I. The higher affinity of A. terreus CPT for binding topoisomerase II as well as I could be a therapeutically favorable criterion, since topoisomerase II catalyzes cleavage of both DNA strands, and down-regulation of Topo I can be an adaptive mechanism of tumor cells to resist the CPT effect [5,7]. Topoisomerase II is able to catalyze the relaxation of both positively and negatively supercoiled DNA. The unique affinity of A. terreus CPT for inhibiting Topo II over Topo I could be due to specific structure-activity relationships (SAR) of the stereostructural conformation of the present CPT, so further molecular modeling is needed to explore the higher affinity of A. terreus CPT for Topo II than Topo I. Consistently, evodiamine, a natural product from C.
acuminata, is a dual catalytic Topo I/II inhibitor exhibiting enhanced inhibition relative to CPT [61]. Therefore, targeting both Topo I and II simultaneously should lower the potential for the development of resistance against such inhibitors [6]. Human Topo II is an effective target in the treatment of a wide spectrum of cancers by etoposide, doxorubicin, daunorubicin, and mitoxantrone [5,7]. Topo I and II have overlapping functions in DNA metabolism and are essential for the normal progression of the cell cycle, so targeting both enzymes simultaneously can lead to synergistic anticancer effects [5,7,62]. The dual activity of CPT as an efficient inhibitor of Topo I and II, in addition to its antifungal activity, is among its most intriguing biological criteria, since chemotherapy suppresses the immune system, permitting the opportunistic microbial flora to become pathogenic. In cancer patients, invasive fungal disease remains an important complication causing considerable mortality and morbidity. The activity of CPT against Rhizopus sp. is a very promising attribute, since Rhizopus is one of the causes of mucormycosis, an emerging invasive fungal infection in immunocompromised patients [63]. This assumption is supported by the common properties of tumor and fungal cells, such as replication rate, modalities of spreading within the host, rapid development of drug resistance, and a tendency to become more aggressive during disease progression [64].
The effect of the extracted A. terreus CPT on the wound healing activity of UO-31 was assessed after 24 and 48 h, compared to untreated cells. The wound healing of the UO-31 monolayer was reduced by about 60% compared to the control. This remarkable suppression indicates interference of CPT with the cell regeneration, cell division, and matrix formation of the UO-31 tumor cells; the strong antiproliferative activity of A. terreus CPT may thus arise from inhibition of Topo I and II, in addition to prevention of cellular matrix formation and motility. The wound healing assay is a standard in vitro approach for assessing collective cell migration in two dimensions [48], and cell migration is involved in several pathological processes such as tumor invasion, angiogenesis, and metastasis [48,65]. The apoptotic response of UO-31 to the CPT of A. terreus was assessed by the Annexin V-PI assay; a significant shift of the cells into the apoptotic phase was observed in response to A. terreus CPT, compared to the untreated control cells. Upon addition of A. terreus CPT, the total apoptosis of UO-31 cells increased 16-fold compared to the untreated cells, and the growth of UO-31 cells was maximally arrested at the S-phase compared to the untreated controls. Similar results for the effects of CPT on apoptosis and the cell cycle have been reported [66]. The productivity of CPT by A. terreus was maximized by the Plackett-Burman nutritional optimization design, the actual yield of CPT being increased to 255.5 μg/l at run #10, compared to the control cultures. Similar results for maximizing the CPT yields of A. terreus, A. flavus, and P. chrysogenum by Plackett-Burman bioprocessing have been reported [21-25,33]; thus, the yield of A. terreus CPT was increased ~2.5-fold compared to the control cultures. The biosynthetic stability of A. terreus for CPT production upon subculturing and storage was assessed: the yield of CPT was reduced ~2.2-fold by the 5th generation, and CPT productivity was reduced by ~50% by the 3rd month of storage. Such attenuation of fungal CPT productivity with subculturing and storage is the challenge that hinders the further industrial use of fungi as a CPT-producing platform [22,23,29,55]. Several hypotheses have been advanced to unravel the fungal-plant interactions, suggesting that the biosynthetic machinery of CPT in the endophytic fungus depends on chemical signals from the plant or from its indigenous microbiome [22,23,25,29,55,67,68]. The biosynthetic potency of CPT in the 5th culture of A. terreus was not only restored but even increased ~1.3-fold above that of the first culture in response to surface-sterilized leaves of C. roseus. Thus, with the addition of leaf parts of C. roseus, the biosynthetic machinery of A. terreus CPT was reinstated, suggesting that release of the indigenous microbiome of the plant tissues, microbiome cross-communication, and intimate growth with A. terreus trigger its CPT biosynthetic machinery [22,23,25,29,55,67,68].
In conclusion, A. terreus, an endophyte of C. roseus, was the most potent CPT producer; its CPT had strong activity against non-CPT-producing fungal isolates, whereas CPT-producing fungi showed obvious resistance to CPT toxicity. The purified A. terreus CPT displayed potent antiproliferative activity, inhibited Topo I and II, suppressed wound healing, and induced cellular apoptosis. The biosynthetic potency of A. terreus CPT was attenuated upon subculturing and storage; however, this biosynthetic machinery was completely restored upon addition of surface-sterilized leaves of C. roseus, confirming the release of specific signals from the plant tissues, or from their entire microbiome, that trigger expression of the CPT biosynthetic machinery of A. terreus. Further studies are ongoing to explore the molecular biosynthetic machinery of CPT in A. terreus with differential transcriptomic and proteomic approaches, to sustain its biosynthetic stability as a novel industrial platform for CPT production.
Fig. 1
Fig. 1 Screening for CPT production by the fungal endophytes of Catharanthus roseus. After incubation, CPT was extracted and screened by TLC, and the yield of the most promising CPT-producing isolates was quantified by HPLC. A Morphological view of the leaves and flowers of C. roseus. B Selected CPT-producing endophytic fungal isolates. C TLC profile of the recovered endophytic fungal isolates for CPT screening; 5 μl of each sample were spotted on the TLC plate, compared to the authentic CPT (5 μl at 50 μg/ml). D Yield of CPT from the most potent fungal isolates, quantified with the ImageJ software package. E HPLC chromatogram of the highest CPT-producing isolate (#14), normalized to sample #1, a non-CPT producer
Fig. 2
Fig. 2 Morphological and molecular identification of A. terreus as the most potent CPT producer. A Plate culture of A. terreus on PDA after 6 days of incubation at 30 °C. B Microscopical view of the conidial heads of A. terreus at 400× (scale bar 40 μm) and 1000× (scale bar 10 μm). C PCR amplicon of the ITS region of A. terreus, normalized to the DNA ladder (1 kb Nex-gene Ladder, Puregene, Cat.# PG010-55DI). D Molecular phylogenetic analysis of A. terreus, an endophyte of C. roseus, by the maximum likelihood method
Fig. 3
Fig. 3 Chromatographic and spectroscopic analysis of the purified CPT of A. terreus. A TLC chromatogram of the putative CPT; the target spots were scraped off the plates and used for further analyses. B UV spectra of the purified CPT compared to the authentic compound. C FT-IR spectra of the putative CPT and the authentic compound. D 1H NMR spectrum of the putative CPT of A. terreus. E LC-MS analysis of the putative CPT at 349 m/z. F MS/MS fragmentations of the parent CPT molecule (349 m/z). Ath refers to authentic CPT
Fig. 4
Fig. 4 Antimicrobial activity of the extracted A. terreus CPT against CPT-producing and non-producing fungi. The CPT spots were scraped off the TLC silica gel plates and eluted, and different concentrations of the CPT (5, 10, and 15 μg/ml) were applied to the tested fungal cultures; after 5 days of incubation, the diameters of the inhibition zones were measured. A The panel of CPT-producing fungi (A. terreus, A. fumigatus, A. flavus, A. oryzae). B The panel of non-CPT-producing fungi (Rhizopus sp., Mucor sp., Trichoderma sp., and Penicillium sp.). C The diameters of the inhibition zones of the CPT-producing and non-CPT-producing fungi in response to the purified A. terreus CPT
Fig. 5 and Fig. 6
Fig. 5 Antiproliferative activity and kinetics of inhibition of topoisomerases I and II by the purified CPT from A. terreus. A IC50 values of the purified A. terreus CPT against the MCF7 and UO-31 cell lines, compared to normal OEC. B Kinetics of inhibition of topoisomerases I and II by the CPT of A. terreus, compared to staurosporine as the reference drug
Fig. 7
Fig. 7 Apoptosis and cell cycle of UO-31 in response to the CPT of A. terreus. Annexin V-PI apoptosis analysis of UO-31 cells without CPT (A) and treated with A. terreus CPT (B), and the overall apoptotic ratios (C). Cell cycle analysis of UO-31 cells without CPT (E), with A. terreus CPT (F), and the overall cellular growth arrest (G) in response to CPT compared to control. The statistical results of the one-way ANOVA are summarized; values are represented by means, with letters a and b within the same column indicating a significant difference (one-way ANOVA, LSD test, p ≤ 0.05). ns, non-significant; *, significant difference; **, highly significant difference; LSD, least significant difference
Fig. 8
Fig. 8 The main effects of different variables on CPT production by A. terreus with the Plackett-Burman experimental design. A Pareto chart illustrating the order of significance of each variable. Normal plot (B) and half-normal plot (C) of probability with standardized effect. D Box-Cox plot of power transform. E Normal plot of the internally standardized residuals. F Plot of residuals versus the predicted response of CPT by A. terreus
Fig. 9
Fig. 9 Three-dimensional surface plots of the interactions of the variables for CPT production: glycine and tryptophan (A), sodium acetate and incubation time (B), asparagine and salicylic acid (C), phenylalanine and glycine (D), incubation time and pH (E), and salicylic acid and sucrose (F)
Fig. 10
Fig. 10 Metabolic stability of A. terreus for CPT production upon fungal subculturing and storage. The fungal isolate was grown on PDB for 8 days, and CPT was extracted and quantified. The yield of CPT of A. terreus in response to fungal subculturing (A) and storage for 7 months (B); the upper panels show the TLC and the lower panels the yield quantified with ImageJ. C The yield of CPT by the 5th generation of A. terreus amended with different organic solvent extracts of C. roseus. D The yield of CPT of the 5th culture of A. terreus amended with surface-sterilized leaves and flowers of C. roseus
Table 2
Matrix of the Plackett-Burman Design for optimization of CPT production from A. terreus
Table 3
ANOVA for selected factorial model, Analysis of variance table [Partial sum of squares-Type III]
Bilateral ureteroinguinal hernia into the scrotum repaired in a staged fashion
Abstract Bilateral ureteroinguinal hernia is a rare presentation with only five cases previously presented in the literature. We describe a 60-year-old male who was diagnosed with large bilateral hernias extending into the scrotum containing redundant ureters. Both hernias were repaired in staged fashion with retrograde temporary ureteral stent placement and modified Lichtenstein technique. Follow-up computerized tomography imaging with >5 years follow-up revealed intact repairs with no evidence of urinary obstruction.
INTRODUCTION
Inguinal hernia repair is one of the most common operations performed in the United States with over 800 000 surgeries annually [1]. Ureteroinguinal herniation is rare with <70 cases reported in English literature [2]. Our review of the literature identified five cases of bilateral ureteroinguinal hernia [3][4][5][6][7]. There is only one previous case in which bilateral repair has been reported [7].
Most patients diagnosed with herniation of urologic organs have no symptoms of urologic pathology. The most common diagnosis is in obese men in their 50s or 60s [4]. Some patients are diagnosed with urinary frequency or urgency, sepsis, hydronephrosis, obstruction and acute kidney injury. Most published cases were identified at the time of surgical exploration or post-operatively as the result of a complication [8]. It has been reported that 23.5% of patients with herniation of urologic organs are associated with complications [9].
We report a case of large bilateral ureteroinguinal scrotal hernia identified pre-operatively and repaired successfully in a staged fashion.
CASE REPORT
This is a 60-year-old male with a body mass index of 45.4 who was referred to the emergency room with a complaint of acute onset left flank pain. Past medical history was significant for hypertension and hyperlipidemia. Past surgical history was notable for appendectomy and multiple right lower extremity operations as a child to correct deformities from polio. He continued to require right lower extremity bracing. At the time of diagnosis, he had no additional gastrointestinal or urinary symptoms. Examination was notable for bilateral groin bulge suggestive of hernia versus hydrocele. The complete blood count, basic metabolic panel and urinalysis were unremarkable. A computerized tomography (CT) scan of the abdomen was performed and revealed bilateral inguinal hernias containing ureters and fatty tissue (Fig. 1). Mild right hydronephrosis was noted. Pain improved after the administration of ketorolac and patient was referred to general surgery for further evaluation.
The patient was taken to the operative suite where he underwent cystoscopy, retrograde pyelogram and temporary ureteral stent placement (Fig. 2). Right inguinal exploration was performed, and the patient was found to have a large direct hernia containing abundant adipose tissue. The ureter and cord structures were identified. Adipose tissue was amputated with an energy device. The ureter was reduced into the retroperitoneum. The hernia was then repaired with modified Lichtenstein technique using macroporous polypropylene mesh. Retrograde pyelogram was repeated at completion of the hernia repair following stent removal. A serpiginous course of the ureter was noted without obstruction (Fig. 3). He was observed overnight and discharged home the following day. Post-operative course was uneventful.
The patient was allowed to fully recover over the next 4 months. He then returned to the operating suite and a nearly identical procedure was performed on the left side. His post-operative course was uneventful. At a 6-month follow-up, the patient exhibited no recurrence of his hernias, markedly decreased size of the scrotum and significant improvement in urinary soiling due to retraction of the phallus.
CT abdomen performed >5 years after repair revealed intact bilateral inguinal hernia repair and no evidence of hydronephrosis.
DISCUSSION
Bilateral ureteroinguinal hernia has rarely been described in the literature and there are not clear guidelines for best management. There are two variations of ureteroinguinal hernias, paraperitoneal and extraperitoneal. Paraperitoneal comprises 80% and has an indirect sac that pulls on the ureter. Extraperitoneal occurs without a peritoneal sac, and the ureter moves with the retroperitoneal fat into the scrotum and is thought to be a congenital defect in which the Wolffian duct fails to separate from the ureteric bud [10].
Due to the high volume of inguinal hernia surgery and low incidence of herniated genitourinary organs, it is difficult to recommend preoperative imaging for all patients. Performing a thorough preoperative history may be beneficial, inquiring about two-stage micturition, hematuria, acute urinary obstruction or ipsilateral flank pain. Patients with a suspicious history may warrant preoperative imaging with ultrasound or CT scan [7]. Interestingly, Allam et al. reported in a case series that anterior displacement of the ureter by >1 cm anterior to the psoas at the L4 level on CT was associated with inguinoscrotal herniation of the ureter.
It has been consistently reported that ureteroinguinal hernia is associated with large cord lipomas and sliding fat, especially in obese patients in their 50s and 60s. It is important to consider this possibility to limit iatrogenic ureteral injury.
In patients who have ureteroinguinal hernia identified pre-and post-operatively management options that have been reported include anterior repair with reduction into the retroperitoneum, percutaneous nephrostomy and stenting, retrograde stenting, nephrectomy, exploratory laparotomy and resection of redundant ureter with re-implantation [8,11]. There is one reported case of addressing ureteroinguinal hernia with a laparoscopic approach [12].
In our patient, we elected to perform repair in a staged fashion to ensure the patient did not develop obstruction prior to proceeding with contralateral repair. Given the large size of the hernias, risk of ureter injury and the decision to perform a staged approach, we chose to perform open repair rather than minimally invasive. Retrograde stenting was beneficial to facilitate identification of ureters and decrease the risk of injury. Retrograde pyelogram after completion of repair confirmed no obstruction prior to leaving the operating suite. This approach resulted in a successful outcome with a greater than 5-year follow-up.
Redox Homeostasis in Poultry: Regulatory Roles of NF-κB
Redox biology is a very quickly developing area of modern biological sciences, and roles of redox homeostasis in health and disease have recently received tremendous attention. There are a range of redox pairs in the cells/tissues responsible for redox homeostasis maintenance/regulation. In general, all redox elements are interconnected and regulated by various means, including antioxidant and vitagene networks. The redox status is responsible for maintenance of cell signaling and cell stress adaptation. Physiological roles of redox homeostasis maintenance in avian species, including poultry, have received limited attention and are poorly characterized. However, for the last 5 years, this topic attracted much attention, and a range of publications covered some related aspects. In fact, transcription factor Nrf2 was shown to be a master regulator of antioxidant defenses via activation of various vitagenes and other protective molecules to maintain redox homeostasis in cells/tissues. It was shown that Nrf2 is closely related to another transcription factor, namely, NF-κB, responsible for control of inflammation; however, its roles in poultry have not yet been characterized. Therefore, the aim of this review is to describe a current view on NF-κB functioning in poultry with a specific emphasis to its nutritional modulation under various stress conditions. In particular, on the one hand, it has been shown that, in many stress conditions in poultry, NF-κB activation can lead to increased synthesis of proinflammatory cytokines leading to systemic inflammation. On the other hand, there are a range of nutrients/supplements that can downregulate NF-κB and decrease the negative consequences of stress-related disturbances in redox homeostasis. In general, vitagene–NF-κB interactions in relation to redox balance homeostasis, immunity, and gut health in poultry production await further research.
Introduction
Redox biology is a very quickly developing area of modern biological sciences, and roles of redox homeostasis in health and disease have recently received tremendous attention [1][2][3][4][5][6]. There are a range of redox pairs in the cells/tissues responsible for redox homeostasis maintenance/regulation. They include, but are not limited to, NAD+/NADH, NADP+/NADPH, GSSG/GSH (glutathione system), Trx(ox)/Trx(red) (thioredoxin system), and protein thiols(ox)/protein thiols(red). It is believed that redox signaling is tightly integrated with various homeostatic mechanisms [7] and all redox elements are interconnected and regulated by various means, including antioxidant and vitagene networks [1]. The redox status is responsible for maintenance of cell signaling and cell stress adaptation.
It is important to mention that canonical and noncanonical NF-κB pathways interact with each other. For example, in the classical NF-κB pathway, the first protein transcribed is IκBα. Therefore, it is believed that, in order to inhibit further transcription and restore the original latent state of NF-κB signaling, newly synthesized IκBα can enter the nucleus, remove NF-κB from DNA, and export the complex back to the cytoplasm [49]. Interestingly, the canonical NF-κB pathway is considered to be antiapoptotic, while the noncanonical pathway is proapoptotic [50].
It well known that NF-κB responds to a large variety of external and internal stress signals/stimuli including oxidative stress [62], playing essential roles in the development and maintenance of tissue homeostasis by regulating the transcription of an array of different genes, including proinflammatory cytokines, as well as adhesion molecules, antimicrobial peptides, and acute phase proteins [74][75][76]. In fact, NF-κB signaling can be considered as an emergency response system, since activation of NF-κB was shown to occur very quickly (within minutes) as a result of release from IκB or as a consequence of cleavage of the inhibitory ankyrin repeat domains of p100 and p105 [50]. Under physiological conditions, the majority of NF-κB-activated genes regulate biological processes associated with cell growth, protection, and repair. They are involved in T-cell maturation, DNA damage repair, tissue healing after injury, and orchestrating the fight against infections [60,67]. Indeed, it has been proven that activation of NF-κB is an evolutionarily conserved, effective mechanism of host defense against infection and stress [77]. However, excessive NF-κB activation in commercially relevant stress conditions in poultry and farm animal production systems can lead to detrimental consequences, including chronic inflammation, compromised health status, and decreased productive and reproductive performance. The repertoire of stimuli implicated in the NF-κB activation is very diverse and also includes inhibitory κB kinases, cell-surface receptors, and NF-κB-inducible inhibitor proteins (IB proteins), as well as factors regulating the post-translational modification of the Rel proteins, etc. [63,64,[74][75][76]. Furthermore, p65 and p50 are the targets of many other post-translational modifications such as ubiquitination, acetylation, methylation, phosphorylation, oxidation/reduction, and prolyl-isomerization, leading to a change in NF-κB transcriptional activity due to affecting the interaction with DNA or as a result of changes in the protein-protein association of NF-κB [78].
NF-κB signaling in numerous cell types is involved in the development of various metabolic disorders. In particular, it is thought that resident tissue cells activate NF-κB in response to stress associated with nutrient excess [58]. Oxidized lipids in the bloodstream can induce NF-κB in vascular endothelia, while, in adipocytes, hepatocytes, and neurons, NF-κB is induced by metabolic or oxidative stress in the ER due to overnutrition. An excess of free fatty acids can also activate NF-κB via TLR4 [58]. Some examples of NF-κB activation associated with regulation of downstream transcriptional antioxidant and pro-oxidant targets in the canonical pathway are shown in Figure 2.
Depending on physiological context, the activation of NF-κB can have different consequences. Indeed, NF-κB does not function alone but is part of various networks, including crosstalk with other transcription factors (Nrf2; signal transducer and activator of transcription 3, STAT3; Forkhead box O3, FOXO3; etc.), upstream kinases, sirtuins, Wingless-related MMTV integration site 4 (Wnt4), ROS, p53, and miRNAs, which determine the pattern of its effects on the expression of a battery of various genes [48,60,67]. Furthermore, there are regulatory mechanisms coordinating the association of NF-κB with various important pathways [50]. It has been suggested that NF-κB be considered a stress response factor, since NF-κB signaling is condition-dependent, and NF-κB-dependent cell death or survival depends on the stimulus and the cell type involved. It seems likely that this complexity is responsible for many apparent contradictions in the literature [79]. However, most research data indicate that the NF-κB signaling pathway enables cells to maintain homeostasis and survive under various stress conditions, including genotoxic stress [60,67].
NF-κB is involved in the modulation of many different molecular events, including inflammation, immune function, cellular growth, and apoptosis [46,70]. There are a range of NF-κB activators, including pathogen-derived substances (LPS) and inflammatory signals (TNF-α, IL-1), as well as other signals recognized by various receptors, including TNFRs, TLRs, T-cell receptors (TCRs), B-cell receptors (BCRs), and cytokine receptors, which lead to activation of IκB kinase (IKK) with subsequent phosphorylation of the NF-κB inhibitor. This leads to proteasomal degradation of IκB. As a result, the released NF-κB migrates into the nucleus and binds to its corresponding DNA-responsive elements in the presence of coactivators. This results in the transcription of antioxidant (anti-inflammatory) or pro-oxidant (proinflammatory) mediators [46,85]. It is believed that p65 can induce the expression of both negative regulators (IκBα, IκBε, etc.) and positive regulators (RelA, TNF-α, etc.) participating in tuning the NF-κB pathway [83]. It is important to mention that NF-κB can be directly activated or inhibited by ROS in a context-dependent manner, depending on the levels of ROS, the exposure, and the cell type [59,86,87]. Indeed, ROS-mediated oxidation of redox-sensitive cysteine residues of NF-κB subunits was shown to have dual effects (inducing or inhibiting) on NF-κB signaling depending on the level of ROS, the cell type, and the types of stimuli [81,88]. On the one hand, ROS can activate the NF-κB pathway by imposing disulfide bond formation between Cys54 and Cys347 in IKKγ [89].
On the other hand, ROS can have an opposite effect: inhibiting NF-κB activation as a result of restricting IκBα degradation, due to inactivation of the proteasome [90].
The NF-κB system integrates diverse upstream input signals (from various stresses to pathogen-related molecules) recognized by various receptors into varied downstream output responses. This function is mediated via promotion of the expression of a variety of genes responsible for the synthesis of antioxidant or prooxidant molecules, improving antioxidant defenses and redox homeostasis. Alternatively, NF-κB activation can also lead to synthesis of proinflammatory cytokines, imposing inflammation and causing detrimental health- and production-related consequences in poultry and farm animals. In particular, NF-κB and STAT3 regulate common processes and share regulatory binding sites of antiapoptotic, cell cycle and proliferation, tissue resistance, and repair genes. Furthermore, hypoxia-inducible factor (HIF) and NF-κB share common activating stimuli, regulators, and targets [61].
NF-κB and Oxidative Stress
Free-radical production is considered to be an important process in biological systems, responsible for the antibacterial action of the oxidative burst in phagocytes, cell signaling, and stress adaptation [7]. However, an excess of reactive oxygen and nitrogen species (RONS), due to a high level of stress or a compromised antioxidant system, leads to damage to major biological molecules (proteins, polyunsaturated fatty acids (PUFAs), DNA, etc.), associated with immunosuppression, gut health problems, and decreased productive and reproductive performance of poultry [30]. Therefore, a variety of protective mechanisms have developed during evolution to deal with RONS excess, and many transcription factors are involved in this process via regulation of vitagenes and a myriad of antioxidant enzymes in stress conditions [1].
There are a range of transcription factors acting cooperatively with NF-κB. For example, NF-κB and STAT3 are shown to regulate common pathways and share regulatory binding sites of various protective genes, while HIF and NF-κB are reported to share common activating stimuli, regulators, and targets [61]. Indeed, the redox balance is believed to be orchestrated by a range of transcription factors, including Nrf2, NF-κB, activator protein 1 (AP-1), FoxO, peroxisome proliferator-activated receptors (PPARs), peroxisome proliferator-activated receptor-gamma coactivator 1α (PGC-1α), p53, and mitogen-activated protein kinase (MAPK; Figure 3 [91,92]). It seems likely that transcription factors and vitagenes are involved in the regulation of redox status by effectively modulating the expression and activity of ROS-generating enzymes and antioxidant enzymes [93]. NF-κB has long been considered to be a prototypical proinflammatory signaling pathway stimulating the immune system in response to various stimuli, including physical, physiological, and/or oxidative stress. For example, NF-κB is a key target in receptor-independent hypothalamic microinflammation [95] associated with intracellular organelle stress, including the RNA stress response [96], endoplasmic reticulum (ER) stress [97], and defective autophagy [98]. NF-κB is involved in the regulation of many important physiological processes; however, its overactivation has been shown to be associated with increased risk of disease, while NF-κB suppression is associated with risk reduction [63]. Taking the former into account, understanding the role of NF-κB signaling in stress adaptation awaits further investigation. For example, HO-1 can improve cell protection from apoptosis by stimulating free heme catabolism. Interestingly, the HO-1 promoter region contains an NF-κB responsive element and, therefore, HO-1 expression is regulated by NF-κB, as well as by other transcription factors [99]. A central role for NF-κB in regulating mitochondrial respiration has been suggested [100]. In fact, by controlling the balance between glycolysis and respiration for energy provision, NF-κB is involved in energy homeostasis and metabolic adaptation [101]. The authors suggested considering NF-κB as an important checkpoint connecting cell activation and proliferation with energy sensing and metabolic homeostasis. Since mitochondria are the main ROS source in the cell, it could be that NF-κB signaling is involved in the regulation of ROS formation, detoxification, and the maintenance of redox homeostasis.
Nrf2 and NF-κB Interplay in Oxidative Stress
Evidence for the interaction and cooperative action of Nrf2 and NF-κB comes from experimental work with various model systems employing plant extracts, individual compounds in vitro and in vivo, pure chemicals, and some known toxicants [6]. In our recent review, a central role of Nrf2 in antioxidant defenses and vitagene regulation was described in detail [27], and it seems likely that, under oxidative stress, the transcription factors NF-κB and Nrf2 antagonize each other to coordinate a stress response [60,67,76]. For example, deletion of Nrf2 (Nrf2 knockout mice) enhanced inflammation, while Nrf2 upregulation was reported to decrease NF-κB-dependent proinflammatory and immune responses [62]. In fact, several known Nrf2 activators are able to inhibit the NF-κB pathway. There are many examples showing that activation and repression occur between members of the Nrf2 and NF-κB pathways through various mechanisms [102]. Some mechanisms of Nrf2-NF-κB interactions are summarized in Table 1.
Table 1. Possible mechanisms of Nrf2-NF-κB interactions.
The redox outcome of the NF-κB-Nrf2 interaction depends on the activation/inhibition of various antioxidant and prooxidant enzymes. It is known that some antioxidant enzymes are dependent on both Nrf2 and NF-κB. For example, expression of HO-1 is shown to be regulated by Nrf2, NF-κB, and HIF-1α signaling [60,67]. On the one hand, HO-1 was shown to possess a functional ARE that is activated by Nrf2 [141]. On the other hand, HO-1 was shown to have a functional NF-κB site [142]. HO-1 is known to be a stress-inducible enzyme providing AO protection in vertebrate systems, participating in the maintenance of redox balance and being responsible for adaptation to oxidative, inflammatory, and cytotoxic stress [25]. Similarly, the catalytic subunit of glutamate-cysteine ligase, the key enzyme of the cellular GSH biosynthetic pathway, also has an ARE and can be activated by Nrf2 [143], whereas it also possesses a κB site and can be induced by NF-κB [144]. Since GSH is a key physiological buffer responsible for redox homeostasis [145], regulation of its synthesis via the Nrf2 and NF-κB pathways is of great importance for redox homeostasis maintenance related to high immunocompetence. Furthermore, MnSOD is also a target for both NF-κB [146] and Nrf2 [27]. It is well established that MnSOD, a key enzyme of the first line of the AO network, is located in mitochondria and deals with the major biological ROS, namely superoxide radicals, and it is considered to be a major player in the establishment and maintenance of redox homeostasis [30]. Furthermore, glutathione peroxidase 1 (GPx1) and glutathione S-transferase (GST) expression and activities are also under strict control by NF-κB [81] and Nrf2 [13]. The important roles of these AO enzymes in AO defense and redox homeostasis have been previously discussed [13]. It seems likely that another redox balance regulator, namely thioredoxin, is also regulated by NF-κB [146,147] and Nrf2 [25]. It is interesting that HO-1, SOD, and thioredoxins belong to the vitagene family responsible for stress adaptation and redox homeostasis [24].
It should also be mentioned that the NF-κB pathway can induce free-radical production via activation of ROS-producing enzymes, including NADPH oxidase [148], cyclooxygenase-2 (COX-2) [102], cytochrome P450 enzymes, inducible nitric oxide synthase (iNOS), neuronal NOS (nNOS), and xanthine oxidase/dehydrogenase [81]. The impact of such ROS production on redox balance and adaptation to stress is still not well established, and it complicates the interpretation of results related to NF-κB-Nrf2 interactions in biological systems under various stress conditions. In many cases, activation of various transcription factors, including Nrf2, NF-κB, AP-1, HIF-1α, p53, PPAR-γ, and β-catenin/Wnt, has been associated with oxidative stress [149]. Therefore, the complex crosstalk between the Nrf2 and NF-κB pathways under various stress conditions [62] further complicates interpretation of results related to the relative impact of each pathway on the regulation of stress adaptation. Indeed, as mentioned above, Nrf2 and NF-κB affect each other's expression and activity to coordinate antioxidative and inflammatory responses; however, the molecular mechanisms of this interconnection are not yet known [62].
It is believed that condition-dependent, stress-associated changes in redox balance and in expressions/activities of transcription factors (e.g., Nrf2/Keap1 and NF-κB/IκB/IKK) are responsible for providing adaptive cell responses to a variety of stress stimuli through orchestrating the optimal expression of protective target genes [150]. A hypothetical scheme of the Nrf2-NF-κB crosstalk is shown in Figure 4.
In physiological conditions, a delicate balance between Nrf2 and NF-κB expression in various tissues is well coordinated and maintained. It seems likely that increased NF-κB expression as a result of low/moderate stresses can lead to a simultaneous increase in the expression of Nrf2, leading to improved antioxidant defenses. At the same time, decreased NF-κB expression can be observed as a feedback mechanism. This balance is also regulated by other transcription factors and vitagenes. In the case of high oxidative stress, when the ability of the AO defense network to deal with RONS production is overwhelmed, the Nrf2/NF-κB balance would be broken. In such conditions, redox status would be compromised with detrimental consequences to animal health. Furthermore, the productive and reproductive performance of poultry and farm animals would be decreased.
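To make the hypothesized balance described above more concrete, the following toy model sketches, in Python, how a mutual Nrf2-NF-κB coupling could stay balanced under low/moderate stress and lose balance under high stress. This is purely illustrative and is not taken from the cited literature: the equations, parameter values, and the abstract "stress" input are all hypothetical.

def simulate(stress, steps=5000, dt=0.01):
    # Toy two-variable model of the hypothesized Nrf2-NF-kB crosstalk.
    # All rate constants are arbitrary demonstration values.
    nfkb, nrf2 = 0.1, 0.1
    for _ in range(steps):
        # Stress activates NF-kB; Nrf2 represses NF-kB (feedback).
        d_nfkb = stress / (1.0 + nrf2) - 0.5 * nfkb
        # NF-kB activity co-induces Nrf2, but the induction saturates,
        # so a high stress input eventually overwhelms it.
        d_nrf2 = 0.8 * nfkb / (1.0 + nfkb) - 0.5 * nrf2
        nfkb += dt * d_nfkb
        nrf2 += dt * d_nrf2
    return nfkb, nrf2

for s in (0.2, 1.0, 5.0):  # low, moderate, and high stress inputs
    nfkb, nrf2 = simulate(s)
    print(f"stress={s}: NF-kB={nfkb:.2f}, Nrf2={nrf2:.2f}, ratio={nfkb / nrf2:.2f}")

Under this toy parameterization, the NF-κB/Nrf2 ratio stays close to one at low stress and grows severalfold at high stress, mirroring the qualitative "broken balance" scenario described above; real signaling, of course, involves many more components and timescales.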
NF-κB in Poultry Production
The regulatory roles of NF-κB in poultry are still poorly understood, but accumulating information clearly indicates that, similar to mammals, NF-κB is a main regulator of many important processes, including inflammation, in avian species. In 1993, complementary DNA (cDNA) clones encoding the chicken NF-κB p65 subunit were isolated, and, according to the information provided by the authors, chicken NF-κB can be briefly characterized as follows [151]:
• Chicken p65 was shown to be approximately 55% identical to the mouse and human p65 proteins. Similar to its mammalian counterpart, chicken p65 contains the Rel homology domain (RHD), consisting of 286 amino acids, in its N-terminal region and the putative transactivation domain in its C-terminal region;
• The RHD was shown to be highly conserved between the chicken and mammalian p65 proteins;
• The highest expression of a 2.6 kb transcript of p65 was detected in the spleen; it was also detected in other organs;
• A fusion protein containing the RHD of chicken p65 was reported to bind to a consensus kappa B-site;
• p65 was shown to form one or more complexes with various cellular proteins, including p50, p105, and c-Rel, in chicken spleen cells [151].
Furthermore, cDNA clones encoding chicken p50B/p97 were isolated [152]. The amino-acid sequence of the precursor protein p97 was found to have a conserved structure: it showed 86% identity in the RHD and lower (56%) identity in the ankyrin repeat domain (ARD) to human p50B/p97. Similar to previous findings, expression of this gene was also found to be highest in the chicken spleen [152]. In 1995, a clone containing the avian IκBα gene was isolated from a chicken genomic library [153]. The main characteristics of avian IκBα can be summarized as follows [153]:
• recognizable promoter elements (i.e., TATA and CAAT boxes) were not found;
• seven putative Rel/NF-κB binding sites were identified;
• a CAT reporter construct containing the 5′ upstream region of IκBα was expressed when transfected into cells that produce IκBα;
• the regulatory elements promoting IκBα expression were identified within 1000 nt of the transcription start site;
• IκBα was found to be a single-copy gene per haploid genome;
• the gene was expressed in avian hematopoietic tissues and in lymphoid cells transformed by avian reticuloendotheliosis virus.
It was suggested that, similar to mammals, in chicken, p65 and c-Rel comprise components of the protein complexes that are able to bind to the kappa B-like sequence. This binding could lead to the progressively activated expression of the chicken lysozyme gene observed during the terminal differentiation of macrophages [154].
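The identity figures quoted in this section (e.g., ~55% for p65, 86% in the RHD) come from pairwise sequence alignments. As a hedged illustration of how such percentages are typically computed, the following Python sketch calculates percent identity over the aligned (non-gap) positions of a pre-aligned pair; the toy sequences are placeholders rather than real chicken or human protein data, and published tools may use a different denominator (e.g., full alignment length).

def percent_identity(aln_a: str, aln_b: str) -> float:
    # Assumes the two sequences are already aligned, with '-' for gaps.
    assert len(aln_a) == len(aln_b), "aligned sequences must be equal length"
    matches = sum(a == b and a != '-' for a, b in zip(aln_a, aln_b))
    aligned = sum(a != '-' and b != '-' for a, b in zip(aln_a, aln_b))
    return 100.0 * matches / aligned  # identity over aligned positions only

seq1 = "MRHDAKL-QEST"  # placeholder alignment, not a real p65 fragment
seq2 = "MRHDGKLPQE-T"
print(f"{percent_identity(seq1, seq2):.1f}% identity")  # 90.0%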
In 2001, Piffat et al. constructed and characterized a composite cDNA encoding most of the chicken RelB transcription factor [155], and their results can be summarized as follows: within the RH domain, the chicken RelB (cRelB) protein was characterized by a high degree of sequence similarity to other vertebrate RelB proteins; however, outside this domain, cRelB was substantially less conserved. cRelB was found to be more widely expressed than mammalian RelB, and it was identified to have functional properties similar to other vertebrate RelB proteins. cRelB was reported to be unable to bind DNA as a homodimer; however, it could form DNA-binding heterodimers with NF-κB p50 or p52. Overexpressed cRelB was shown to be present in the nucleus in chicken embryo fibroblasts. The nonconserved C-terminal sequences of cRelB contained a transactivation domain identified in chicken and mouse fibroblasts [155]. Expression of a new isoform of chicken myeloid differentiation factor 88 (MyD88-2) was detected in a range of tissues tested, and its overexpression was found to significantly induce the activation of NF-κB in vitro [156]. Recently, the duck IKKα (duIKKα) gene was cloned and characterized. duIKKα was reported to encode a protein containing 757 amino acids with high sequence identity to goose IKKα. Duck liver and heart were characterized by high expression of duIKKα messenger RNA (mRNA), while its expression was detected in all tested tissues, including muscular stomach, spleen, heart, liver, lung, kidney, cerebellum, cerebrum, windpipe, muscle, glandular stomach, thymus, duodenum, cecum, pancreas, and bursa of Fabricius [157]. An important role of duIKKα in NF-κB regulation was demonstrated by increasing or inhibiting duIKKα expression. On the one hand, overexpression of duIKKα was shown to substantially increase NF-κB activity with subsequent induction of the cytokines interferon beta (IFN-β), IL-1β, IL-6, and IL-8 in duck embryo fibroblasts. On the other hand, knockdown of duIKKα was found to significantly decrease LPS-, poly(I:C)-, poly(dA:dT)-, duck enteritis virus (DEV)-, or duck Tembusu virus (DTMUV)-induced NF-κB activation [157]. It seems likely that IKKα is evolutionarily conserved. In fact, phosphorylation of Ser176 and Ser180 in the active center of IKKα is believed to be vital for IKKα activation, and these Ser residues were shown to be well conserved among mammals, birds, and fish [157].
It was shown that the NF-κB family of transcription factors contributes to activation-induced cytidine deaminase-mediated gene conversion in chickens [158]. Gallus heat-shock cognate protein 70 was shown to regulate RelA/p65 gene expression induced by Apoptin, a nonstructural protein of chicken anemia virus [159]. In chicken heterophils, bacterial TLR agonists were indicated to activate NF-κB-mediated leukotriene B4 and prostaglandin E2 production [160]. A switch-like response in NF-κB activity is based on the existence of a threshold in the NF-κB signaling module, and phosphorylation of the Ser-578 residue of the scaffolding protein caspase recruitment domain (CARD)-containing protein 1 (CARMA1) was shown to account for the feedback [161]. It is known that tumor necrosis factor receptor-associated factors (TRAFs) are responsible for activation of various signaling cascades, being key regulatory proteins in NF-κB signaling pathways [162]. It seems likely that avian TRAFs play important roles in defending against both RNA and DNA virus infection. In fact, chicken TRAF3 (chTRAF3) was shown to encode a protein of 567 amino acids with high identity to TRAF3 homologs from mammals, being abundantly expressed in the spleen, thymus, lung, and small intestine [163]. Of note, the authors showed that Newcastle disease virus F48E9 challenge was responsible for TRAF3 suppression in chicken embryo fibroblast cells. Recently, the full-length duck TRAF6 (duTRAF6) cDNA from embryo fibroblasts was cloned, and duTRAF6 was shown to be widely expressed in different tissues. Interestingly, overexpression of duTRAF6 was found to activate NF-κB and induce interferon-β expression [164]. Goose TRAF6 was shown to share similar features with the TRAF6 of other avian species, being an essential regulator for inducing the activity of NF-κB and playing important roles in the innate immune response [165]. The amino-acid sequence of pigeon TRAF6 (piTRAF6) was shown to share a strong identity with that of other birds. Furthermore, piTRAF6 expression was shown in all examined tissues, including heart, lung, spleen, thigh muscle, large intestine, caecum, kidney, small intestine, brain, bursa of Fabricius, rib, and muscular stomach [166]. The heart was characterized by the highest level of piTRAF6 transcript, and the muscular stomach had the lowest. On the one hand, overexpression of piTRAF6 was shown to induce NF-κB in a dose-dependent manner with increased IFN-β expression. On the other hand, piTRAF6 knockdown was reported to suppress NF-κB activation in HEK293T cells [166]. Furthermore, the pigeon TRAF3 (piTRAF3) gene was reported to be highly expressed in the spleen, lung, kidney, brain, thymus, and muscle, while moderate expression was observed in the small and large intestines, with relatively weak expression in the heart and liver [167].
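The threshold-dependent, switch-like NF-κB activation noted above [161] is commonly represented, in generic signaling models, by a steep Hill function. The short Python sketch below is purely illustrative and is not derived from the cited study; the threshold K and the steepness n are arbitrary demonstration values.

def nfkb_activity(stimulus, K=1.0, n=8):
    # Steep Hill function: activity switches from ~0 to ~1 around K.
    return stimulus**n / (K**n + stimulus**n)

for s in (0.5, 0.9, 1.0, 1.1, 2.0):
    print(f"stimulus={s:.1f} -> activity={nfkb_activity(s):.3f}")

With n = 8, activity is near zero at half the threshold and near one at twice the threshold, which is the kind of all-or-none behavior that the feedback described in [161] is proposed to account for.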
It seems likely that NF-κB is involved in the activation of avian antimicrobial peptides. For example, chicken intestinal defensins (e.g., AvBD13) were suggested to be endogenous ligands for TLR4, able to enhance the proliferation of monocytes via the NF-κB pathway [174]. It should be mentioned that cathelicidins (CATHs), short cationic host defense peptides, also act in close concert with NF-κB. Indeed, in macrophages primed by LPS, pigeon CATH2 was shown to act through the MAPK and NF-κB signaling pathways to enhance anti-inflammatory cytokine expression, while downregulating the expression of inducible nitric oxide synthase and proinflammatory cytokines and inhibiting the TLR4 pathway [175]. Furthermore, NK-lysin/granulysin (NKL), an antimicrobial cationic peptide expressed in natural killer cells and cytotoxic T lymphocytes, was identified in different avian species, including chicken, turkey, zebra finch, and quail, and the 5′ flanking region of quail NKL was shown to contain two NF-κB-binding sites [176], suggesting participation of NF-κB in the regulation of NKL activity.
In hen vaginal cells, NF-κB was shown to be the transcription factor responsible for the expression of various proinflammatory cytokines and chemokines. In fact, in response to the ligands of TLR3, 4, and 21, increased expression of IL1B, IL6, and CXCLi2 was observed, while IL1B expression was found in response to the ligands of TLR5 and 7 [177]. The authors suggested that NF-κB-dependent expression of cytokines might provide the important defense capability of vaginal tissue to bacterial and viral infections. Activation of TLR3 was shown to induce the expression of NF-κB and the production of type-I interferon [178]. IFN-κ (a type I IFN) in both chicken and duck was found to be constitutively expressed in a range of tissues, including spleen, skin, lung, and peripheral blood mononuclear cells (PBMCs), and it could be induced after treatment with virus in PBMCs [179]. The duck TLR4 (duTLR4) gene was shown to be strongly expressed in the liver, kidney, spleen, intestine, and brain [180].
Goose TLR3 was shown to be analogous to mammalian TLR3, recognizing double-stranded RNA with subsequent activation of NF-κB [178]. In fact, the goose TLR3 gene was shown to encode a protein containing 896 amino acids, sharing 46.7-84.4% homology with other species, with the highest expression in the pancreas and the lowest in the skin. The authors showed that geese infected with H5N1 were characterized by significant upregulation of TLR3 in various tissues, including the lung and brain [178]. The goose TLR5 (gTLR5) gene was shown to be expressed in all studied tissues, with high expression in the liver, spleen, and brain, moderate expression in the kidney, lung, heart, bone marrow, small intestine, large intestine, and PBMCs, and minimal expression in the cecum [181]. It was also shown that gTLR5 can detect flagellin from Salmonella Typhimurium with subsequent NF-κB activation in HEK293 cells. It seems likely that there is tissue-specific regulation of TLR expression in the process of orchestrating the immune response against bacterial pathogens [181]. Goose TLR2-1 was also shown to play an important role in the recognition of Mycoplasma fermentans lipopeptide, Mycoplasma gallisepticum (MG), and Salmonella enteritidis (SE), inducing the activation of NF-κB [182]. Furthermore, in HEK293T cells, flagellin was shown to induce pigeon NF-κB via TLR5 activation. This was associated with significant upregulation of IL-1β, IL-8, TNF-α, and IFN-γ. Importantly, the levels of TLR5, NF-κB, IL-6, IL-8, chemokine ligand 5 (CCL5), and IFN-γ mRNA were significantly upregulated as a result of flagellin stimulation of pigeon splenic lymphocytes. As could be expected, TLR5 knockdown was shown to be associated with significantly downregulated expression of NF-κB and related cytokines/chemokines [183]. Interestingly, the antiviral activity of pigeon IFN-α is believed to depend on the expression of NF-κB [184]. It is known that single-stranded viral RNAs and antiviral imidazoquinoline compounds can be recognized by TLR7 with subsequent NF-κB activation. Recently, it was shown that, in pigeon, the agonist R848 (an imidazoquinoline) can activate NF-κB via TLR7 [185].
It seems likely that chicken NOD1 activation in response to pathogenic invasion is of great importance for immune defense. In partridge chicken, NOD1 was shown to be widely distributed in various tissues, with the highest expression found in testes. Of note, as a result of S. enterica serovar Enteritidis infection, induced expression of chNOD1, as well as the effector molecule NF-κB, was observed in the spleen tissue [186]. Duck NOD1 (duNOD1) was shown to be widely distributed in various organs, including heart, liver, spleen, lung, kidney, cerebrum, cerebellum, colon, glandular stomach, thymus, and bursa of Fabricius tissue with the highest expression found in the liver. Of note, duNOD1 overexpression induced NF-κB, TNF-α, and IL-6 activation in duck embryo fibroblasts (DEFs), while silencing duNOD1 was indicated to decrease the activity of NF-κB in stimulated DEFs [187].
Chicken IL-26 was shown to regulate immune responses through the NF-κB and Janus kinase (JAK)/signal transducer and activator of transcription (STAT) signaling pathways [188]. Similarly, chicken IL-11 was shown to bind to IL-11R and activate the NF-κB, JAK/STAT, and MAPK signaling pathways, leading to modulation of T helper 1 (Th1)/Th17 and Th2 cytokine production in chicken cell lines [189]. Chicken interleukin-17B was shown to induce the NF-κB signaling pathway, leading to increased expression of proinflammatory cytokines, and thus plays a critical role in host defense against bacterial pathogens [190]. In eukaryotic and prokaryotic expression systems, recombinant chicken TNF-α was generated to demonstrate its biological activity. In particular, upon binding to TNF-α receptor 1, the cytokine was shown to induce a complex signaling cascade leading to induction of the classical NF-κB pathway [191].
In Gaoyou duck skeletal muscle (Anas platyrhynchos domesticus), NF-κB motifs (binding sites) were identified, which are believed to be responsible for transcriptional regulation of the slow skeletal muscle troponin I (TNNI1) gene [192]. It seems likely that chicken NF-κB plays a central role in antiviral defense. In fact, chicken tracheal epithelial cells were shown to initiate effective antiviral responses after stimulation with TLR ligands as a result of interferon regulatory factor 7 (IRF7) and NF-κB signaling pathways associated with activation of other cells, such as macrophages [193].
Receptor activator of NF-κB ligand (RANKL), a new member of the chicken TNF superfamily, was recently identified and characterized [170]. Chicken RANKL (chRANKL), sharing ~59-62% identity with mammalian RANKL, was shown to be ubiquitously expressed in chicken tissues. In nonlymphoid tissues, chRANKL mRNA expression levels were shown to be highest in muscle, while, in lymphoid tissues, the highest RANKL expression was found in the thymus, followed by the upper gut and the bone marrow [194]. The recently identified and functionally characterized chicken leukocyte immunoglobulin-like receptor A5 (LILRA5) was reported to activate/induce NF-κB, as well as other immunoregulatory pathways [195].
Thermal Stress
Continuous exposure of farm animals to an acute or gradual rise in habitat temperature was shown to induce oxidative stress, leading to reduced survivability and longevity [196], reduced growth, decreased productive and reproductive performance, and compromised health in poultry [197,198]. Intestinal damage due to thermal stress could lead to redox balance disturbances and inflammatory reactions regulated via NF-κB [12]. It seems likely that NF-κB expression in thermally stressed birds is condition-dependent, including temperature, exposure duration, and the bird's age. On the one hand, when quails at the age of 20 weeks were heat-stressed (34 °C for 4 h per day for 20 consecutive days), liver IL-1β and TLR4 mRNA levels were significantly increased, while NF-κB mRNA levels were significantly decreased in comparison to control birds kept in normal physiological conditions [199]. In contrast, heat stress (32 ± 1 °C, 6 h/day for 9 weeks) in 25 week old Roman egg-laying hens was shown to be associated with an increased serum inflammatory cytokine (IL-1β, IL-6, and TNF-α) response as compared to control nonstressed birds. Furthermore, heat stress was also responsible for significantly increased proliferating cell nuclear antigen (PCNA), TLR4, and NF-κB protein expression [200]. The authors showed protective anti-inflammatory effects of curcumin (100 and 200 mg/kg) in the heat-stressed layers. Similarly, in black-boned chickens exposed to cyclic heat stress, dietary supplementation with resveratrol (400 mg/kg) was shown to improve intestinal integrity and ameliorate the mRNA overexpression of HSP70, HSP90, and NF-κB on the 6th, 10th, and 15th days of stress [201].
It seems likely that cold stress can also impose oxidative stress and enhance in vivo proinflammatory cytokine gene expression in chickens [202]. In fact, the expression of inflammatory factors (iNOS, COX-2, NF-κB, TNF-α, and prostaglandin E synthases (PTGEs)) was shown to be increased in chicken heart due to cold stress [203]. Under cold stress in quail, SOD activity decreased, reflecting an oxidative stress state, while the mRNA expression of NF-κB increased in the duodenum, jejunum, and ileum [204]. The inflammatory factors (COX-2, PTGEs, iNOS, NF-κB, and TNF-α) and Hsp70 mRNA levels were shown to be increased in quail spleen as a result of acute and chronic cold stress (12 ± 1 °C) compared with birds in the control groups [205]. Increased malondialdehyde (MDA) content and upregulation of HSP27, HSP40, HSP70, NF-κB, COX-2, PTGEs, iNOS, TNF-α, and IL-4 mRNAs, as well as of the protein levels of HSP40, NF-κB, and iNOS, were observed in the heart due to acute cold stress (7 °C for 24 h) in broiler chickens [206]. Therefore, both heat and cold stress in poultry can be responsible for oxidative stress and inflammation, with NF-κB proven to play crucial roles in the regulation of those processes.
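Most of the expression changes cited in this and the following sections are relative mRNA levels from quantitative PCR, commonly calculated with the 2^(-ΔΔCt) (Livak) method. The Python sketch below shows that arithmetic; the Ct values and gene names are hypothetical illustrations, not data from the cited studies.

def fold_change(ct_target_treated, ct_ref_treated,
                ct_target_control, ct_ref_control):
    # Normalize the target gene to a reference (housekeeping) gene...
    dct_treated = ct_target_treated - ct_ref_treated
    dct_control = ct_target_control - ct_ref_control
    # ...then express the treated group relative to the control group.
    ddct = dct_treated - dct_control
    return 2.0 ** (-ddct)

# e.g., NF-kB mRNA in heat-stressed vs. control birds, GAPDH as reference
print(fold_change(ct_target_treated=22.0, ct_ref_treated=18.0,
                  ct_target_control=24.0, ct_ref_control=18.5))  # ~2.8-fold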
Mycotoxins
Mycotoxins are considered to be major nutritional stress factors in poultry production [1], imposing oxidative stress and immunosuppression [207], causing a low-grade inflammatory response in the chicken intestine [44], and compromising intestinal barrier functions [208]. Among feed-contaminating mycotoxins, aflatoxin B1 (AFB1) is considered to be the most toxic. A low level of AFB1 in the broiler diet (74 µg/kg) was shown to increase the serum levels of MDA, TNF-α, and IFN-γ. These changes were inhibited by dietary supplementation with alpha-lipoic acid (α-LA; 300 mg/kg). Interestingly, the activities of total SOD and GPx and the expression of NF-κB p65 and HO-1 were not affected by AFB1 [209]. In a similar experiment, an AFB1-contaminated diet (74 µg/kg) fed to chickens was associated with upregulation of the proinflammatory cytokine IL-6 and an increase in the protein expression of both NF-κB p65 and iNOS in the liver. These negative effects of dietary AFB1 were shown to be inhibited by dietary alpha-lipoic acid (300 mg/kg) [210].
In an experiment with chicken feed contaminated with 1 mg/kg AFB1 fed from day 1 until day 28, broilers exposed to AFB1 were characterized by increased serum concentrations and mRNA expression of TNF-α, IFN-γ, IL-1β, IL-10, and IL-6 as compared to the control group. In addition, AFB1 caused increased degradation of the IκBα protein and significantly elevated the phosphorylation of NF-κB (p65). Furthermore, AFB1 was responsible for a significant reduction in the mRNA level and protein expression of the Nrf2 gene. As a result, the mRNA and protein expression levels of Nrf2-dependent antioxidant genes (HO-1, GPx1, NQO1, and GCLC) in the AFB1 group were shown to be significantly downregulated [211]. Interestingly, the authors demonstrated that most of the aforementioned changes in NF-κB- and Nrf2-related parameters were partly alleviated by feeding grape seed proanthocyanidin extract (250 mg/kg) simultaneously with AFB1.
Mn excess (600-1800 mg/kg feed) was shown to be associated with upregulated mRNA expression of TNF-α, COX-2, NF-κB, and iNOS and increased NO content in chicken testis on the 60th and 90th days [212]. The inflammatory response, mitochondrial dynamics, and apoptosis under Cu exposure (300 mg/kg for 90 days) in the heart of chickens were also investigated. It was shown that Cu exposure induced NF-κB-mediated proinflammatory cytokines, and the mitochondrial network was suggested to be the cytosolic sensor responsible for the induction of NF-κB-mediated inflammatory responses under stress conditions [213].
In chickens, dietary Cu excess (220 and 330 mg of Cu/kg dry matter) was shown to increase the number and area of splenic corpuscles, as well as the ratio of cortex and medulla in the thymus and bursa of Fabricius. Furthermore, excessive Cu intake was associated with decreased AO defenses, indicated by the reduced activities of SOD, CAT, and GPx and increased content of MDA. There were also increased TNF-α, IL-1, and IL-1β concentrations, upregulated mRNA levels of TNF-α, IFN-γ, IL-1, IL-1β, IL-2, iNOS, COX-2, and NF-κB, and increased protein levels of TNF-α, IFN-γ, NF-κB, and p-NF-κB in immune organs due to Cu toxicity [214].
As and NF-κB
The proinflammatory activities of As were shown in different tissues of birds, including liver, heart, brain, muscles, and kidney. For example, in birds chronically treated with As2O3, the expression levels of NF-κB and of IL-6, IL-8, and TNF-α (critical mediators in the inflammatory response) in the liver were shown to be increased [215]. Indeed, As2O3 exposure (7.5-30 mg/kg for 90 days) led to oxidative stress, inflammatory response, and histological and ultrastructural damage, as reflected by altered levels of cardiac enzymes in chicken heart tissues. In addition, the mRNA levels of NF-κB and inflammatory cytokines (TNF-α, COX-2, NOS, and PTGEs) significantly increased due to As2O3 intoxication [216]. Similarly, when As2O3 (1.25 mg/kg body weight (BW), corresponding to 15 mg/kg feed) was added to a basal diet and fed to male Hy-line chickens (1 day old) for 4, 8, and 12 weeks, the expression of TNF-α, NF-κB, and iNOS in chicken heart was shown to be increased compared with the corresponding control group [217]. Arsenic (7.5, 15, or 30 mg/kg feed) was shown to increase NF-κB and proinflammatory cytokine expression in Gallus gallus brain tissues, including cerebrum, cerebellum, thalamus, brainstem, and myelencephalon [218]. The toxic effects of arsenic trioxide (As2O3, 7.5-30 mg/kg for 30-90 days) in the muscular tissues (wing, thigh, and pectoral) of chickens were also investigated. The results showed that As2O3 caused oxidative stress, as indicated by decreased activities of AO enzymes (catalase (CAT) and GPx) and increased MDA content. There was a significant upregulation of the mRNA levels of NF-κB, inflammatory cytokines (TNF-α, COX-2, iNOS, and PTGEs), and heat-shock proteins (HSPs) in muscular tissue in the As2O3 exposure groups [219]. In Hy-line chickens, As2O3 exposure (7.5, 15, and 30 mg/kg diet) was shown to induce oxidative stress and inflammation-mediated nephrotoxicity. In fact, elevated nuclear migration of NF-κB and inflammation-related phenotypes were observed, leading to marked renal injury and apoptosis through a mitochondrion-dependent pathway in chicken kidneys [220].
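The correspondence quoted above between a body-weight-based dose (1.25 mg/kg BW) and a feed-based dose (15 mg/kg feed) implicitly assumes a daily feed intake relative to body weight. The minimal Python sketch below makes that arithmetic explicit; the intake fraction is inferred here purely for illustration and is not stated in the cited study.

def dose_per_kg_bw(dose_feed_mg_per_kg, daily_feed_intake_frac_of_bw):
    # mg toxin per kg BW per day =
    #   (mg toxin / kg feed) * (kg feed eaten per kg BW per day)
    return dose_feed_mg_per_kg * daily_feed_intake_frac_of_bw

# 15 mg/kg feed matches 1.25 mg/kg BW if birds eat ~8.3% of their BW daily
print(dose_per_kg_bw(15.0, 1.25 / 15.0))  # 1.25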
Cu, As, and NF-κB
Oxidative stress-induced skeletal muscle injury due to Cu2+ (300 mg/kg feed) and/or arsenite (2.5 mg/kg BW, corresponding to 30 mg/kg feed) exposure in chickens was associated with inflammation in skeletal muscles induced via the NF-κB-mediated response pathway. Indeed, the increased protein and mRNA levels of NF-κB and TNF-α in skeletal muscles and the enhanced mRNA expression of IL-1β, IL-6, and IL-12β were indicative of proinflammatory responses occurring due to Cu and/or As exposure [221]. Arsenic (30 mg/kg) and/or copper (300 mg/kg for 12 weeks) were shown to induce oxidative stress, inflammation, and autophagy in chicken brains. In fact, the mRNA levels and protein expression of inflammation markers (NF-κB, TNF-α, COX-2, and PTGEs) were shown to be significantly increased due to As and Cu exposure [222]. Chicken exposure to As (30 mg/kg) and/or Cu (300 mg/kg for 4, 8, and 12 weeks) was shown to lead to oxidative stress, inflammatory response (an increase in expression of NF-κB and its downstream inflammation-related genes), and liver damage through mitochondrial and death receptor-dependent pathways [223]. In another study, arsenic trioxide (30 mg/kg) and/or copper sulfate (300 mg/kg) were added to the chicken basal diet for 12 weeks. Significantly reduced thymus weight and thymus index, hyperemia visible to the unaided eye, and inflammatory cell infiltration were observed. Concurrent administration of arsenic and copper significantly enhanced inflammation, as indicated by increased levels of NF-κB, COX-2, iNOS, PTGEs, and proinflammatory cytokines in chicken thymus. Additionally, oxidative stress imposed by As and Cu was associated with elevation of heat-shock protein levels [224].
Increased NF-κB expression and inflammation induction in chicken gizzard were also shown to result from dietary As2O3 and/or CuSO4 exposure [225]. Similarly, As and/or Cu exposure at the same doses was shown to induce immunotoxicity through triggering oxidative stress, inflammation (upregulation of NF-κB, inflammatory mediators, and proinflammatory cytokines, accompanied by depletion of anti-inflammatory cytokines), and immune imbalance (decreased ratio of IFN-γ/IL-4 and increased level of IL-17) in the bursa of Fabricius of chicken [226]. In chronic Cu and/or As poisoning, inflammation occurs in the chicken thalamus, as indicated by increased NF-κB expression, together with oxidative stress (MDA accumulation) and mitochondrial damage, leading to apoptosis [227]. Excessive intake of As (1.25 mg/kg BW) and/or Cu (CuSO4, 300 mg/kg feed) for 12 weeks was shown to lead to a significant reduction in the total antioxidant capacity (T-AOC), catalase level, and hydroxyl radical formation in chicken brain. In addition, an increase in the expression of HSPs and NF-κB, as well as NF-κB pathway-related proinflammatory mediators (COX-2, TNF-α, and iNOS), due to As/Cu intoxication was observed [228]. Therefore, the proinflammatory activities of Cu and As combinations were confirmed in the chicken liver, thymus, bursa of Fabricius, gizzard, thalamus, and brain.
Pb and NF-κB
Pb poisoning in chickens was shown to increase the mRNA expression of inflammation factors (NF-κB) and HSPs in chicken livers, simultaneously with the induction of NO content and iNOS activity [229]. Pb exposure was also shown to increase the Pb content in chicken serum, induce the NF-κB pathway, and increase the expression of selenoproteins in chicken neutrophils [230].
More data on Pb-associated modulation of the expression of NF-κB and related cytokines are discussed subsequently in the Se section.
Cd and NF-κB
It was shown that Cd significantly induced the expression of NF-κB, leading to activation of its downstream cytokines IL-1β, TNF-α, and IL-6 in chicken peripheral blood lymphocytes [231]. As a result of CdCl2 (10 mg/kg feed) administration to chickens for 90 days, the levels of NF-κB and of phosphorylated c-Jun N-terminal kinase (p-JNK)/JNK in the spleen increased significantly, while those of the mechanistic target of rapamycin (mTOR) and HSP70 decreased [232]. Exposure of 120-day-old layers to Cd (150 mg/kg for 120 days) was associated with oxidative stress, increased NO production and iNOS activity, and increased expression of inflammatory factors (iNOS, NF-κB, TNF-α, and PTGE) and heat-shock proteins (HSPs 27, 40, 60, 70, and 90) in the liver tissues of birds [233]. In the livers of ducks exposed to a combination of molybdenum and cadmium, the mRNA expression of Hsp60, Hsp70, Hsp90, TNF-α, NF-κB, and COX-2 was significantly upregulated [234]. Nickel chloride (NiCl2) was shown to cause inflammatory responses, indicated by activation of the NF-κB pathway and a reduction in the expression of anti-inflammatory mediators, in broiler chicken kidney [235].
Cd exposure was associated with oxidative stress, as indicated by increased MDA and reduced SOD and GPx in chicken peripheral blood lymphocytes. Interestingly, Astragalus polysaccharide was shown to inhibit Cd-induced cytotoxicity through regulation of NF-κB signaling [231]. It was shown that Agaricus blazei Murill polysaccharide (ABP) significantly reduced the accumulation of Cd in chicken spleens and reduced the expression of NF-κB and its downstream inflammatory cytokines (IL-1β, IL-6, TNF-α, and IFN-β). Interestingly, ABP ameliorated the Cd-induced increase in the protein levels of HSPs (HSP60, HSP70, and HSP90) in spleens. Furthermore, the activities of the main antioxidant enzymes (SOD and GPx) significantly increased, while lipid peroxidation (MDA) decreased, in the ABP + Cd group [236]. Therefore, as indicated by the above-presented data, the toxic effects of heavy metals (As, Pb, and Cd) and of Cu excess in poultry have been associated with oxidative stress and increased expression and activity of NF-κB in various tissues, leading to inflammation. It seems likely that the use of various protective nutrients can prevent oxidative stress and control/decrease NF-κB expression, as demonstrated by dietary supplementation with plant polysaccharides or selenium (see Section 7.1).
Other Toxic Stress Factors
Hydrogen peroxide (H2O2) was shown to cause oxidative stress and impair redox status in farm animals [237] and poultry [238]; therefore, intraperitoneal injection of H2O2 can be used as an important model of oxidative stress in poultry.
Air quality, especially increased ammonia (NH3) and hydrogen sulfide (H2S) concentrations, is an important factor influencing poultry health and bird performance, including feed efficiency, growth rate, carcass quality, and susceptibility to diseases. Indeed, harmful concentrations of NH3 and H2S can suppress/dampen adaptive immune responses [239].
H2O2
Oxidative stress in chickens induced by H2O2 injection was shown to suppress NF-κB signal activation and initiate autophagy in breast muscles [240]. In one experiment, Arbor Acres chickens were grown for 42 days; on days 16 and 37 of growth, control chickens were injected with saline, while experimental chickens received an intraperitoneal injection of H2O2 at 0.74, 1.48, or 2.96 mmol/kg BW.
It was shown that the two highest doses of H2O2 imposed oxidative stress (decreased SOD and GPx activity), disturbed the redox balance, and significantly decreased the expression of NF-κB and its subunits (p50 and p65) in the chicken liver on day 42, triggering apoptosis and autophagy [241]. Indeed, H2O2 is considered to be a central redox signaling molecule in physiological conditions, while increased concentrations of H2O2 (>100 nM) can cause oxidative stress [242].
NH3
Ammonia was shown to increase NF-κB expression in chicken trachea, associated with activation of downstream inflammation genes, including iNOS and COX-2, reflecting a respiratory inflammatory response [243]. The NH3-induced immunotoxic effects and inflammatory damage of broiler spleens were associated with a Th1/Th2 imbalance, the NF-κB pathway, and a compensatory response of HSPs. In particular, NH3 exposure led to inflammatory damage, indicated by decreased inflammation-related miRNAs (miR-133a and miR-6615), cytokines secreted by Th1 cells, and HO-1. Furthermore, increased expression of two target genes of the two miRNAs, three cytokines secreted by Th2 cells, seven inflammation-related factors, and five heat-shock proteins was observed in broiler spleens due to NH3 exposure [244]. In a broiler model of ammonia exposure, it was shown that NH3 excess was associated with reduced breast and thigh weight, histopathological changes in kidney tissues, and increased iNOS activity and NO content. Furthermore, the mRNA and protein expression of inflammatory factors, including NF-κB, COX-2, prostaglandin E synthases, and iNOS, increased. At the same time, T helper 1 and regulatory T cytokines were shown to be downregulated, with simultaneous upregulation of Th2 and Th17 cytokines [245]. A study was conducted to investigate NH3-induced inflammation in chicken bursa of Fabricius and thymus. Experimental chickens were divided into three groups: low (5.0 mg/m3), middle (10.0-15.0 mg/m3), and high (20.0-45.0 mg/m3) NH3 treatment. In comparison to the low NH3-treated group, high NH3 exposure was shown to induce inflammation, associated with increased nuclear debris and vacuoles in the cortex and medulla of thymus and bursal follicles. Furthermore, a reduced bursa of Fabricius and thymus index and increased NO content and iNOS activity due to high NH3 exposure for 14, 21, or 42 days were observed. Lastly, the inflammatory cytokine contents and mRNA levels of NF-κB, COX-2, TNF-α, IL-6, IL-10, IL-1β, IL-18, TLR-2A, and iNOS were also increased under high NH3 exposure [246].
The effect of ammonia (1 mmol/L and 5 mmol/L) on chicken splenic lymphocyte apoptosis was studied. The results showed that NH3 exposure imposed oxidative stress, indicated by the increased release of calcium (Ca2+) and ROS from mitochondria. Furthermore, increases in the mRNA levels of GPx, inflammation-related genes (NF-κB, COX-2, iNOS, TNF-α, and transforming growth factor-β (TGF-β)), and apoptosis-related genes (B-cell lymphoma 2, BCL-2; Bcl-2-associated X protein, BAX; cytochrome C; Caspase-9; and Caspase-3), as well as in the protein levels of NF-κB, iNOS, BAX, cytochrome C, Caspase-9, and Caspase-3, were also observed due to ammonia exposure. This was associated with decreased expression of GST and HO-1 in splenic lymphocytes exposed to ammonia [247]. In chickens, the spleen tissues were seriously injured due to exposure to a high ammonia concentration (45 ppm from day 22 for 3 weeks). In the same group of birds, there was increased expression of IL-4, IL-6, and IFN-γ and decreased expression of IL-2 in the spleen, showing an imbalance in the Th1/Th2 response. Furthermore, the proinflammatory factors, including NF-κB, COX-2, iNOS, and prostaglandin E (PGE), were also upregulated in the high ammonia-exposed chickens [248].
H2S
It is known that the decomposition of sulfur-containing organics in poultry houses is responsible for the production of a large amount of H2S, a highly toxic air pollutant with detrimental effects on poultry health, leading to extensive damage to the body. In poultry, H2S exposure is thought to damage the respiratory system and cause an inflammatory reaction. In particular, it was shown that H2S exposure can inhibit the anti-inflammatory and antioxidant effects of PPAR-γ/HO-1 and activate proinflammatory NF-κB pathway-related genes and downstream genes, leading to aggravation of pneumonia induced by LPS. In particular, the expression of IL-4, IL-6, TNF-α, and IL-1β was increased and that of IFN-γ decreased, while the level of PPAR-γ/HO-1 was significantly suppressed by H2S exposure. Furthermore, the increased expression of IκB-β and NF-κB genes confirmed that the NF-κB pathway was activated, with subsequent activation of COX-2, PGE, and iNOS [249].
Fourteen-day-old chickens were exposed to 30 ppm H2S for 14 days, and inflammation and oxidative stress indices were determined in lymphocytes from peripheral blood samples. An increase in the inflammatory response, associated with upregulation of heat-shock proteins, NF-κB, COX-2, and iNOS, was detected in the H2S group in comparison to the control untreated chickens [250]. Furthermore, H2S exposure (weeks 0-3: 4 ppm; weeks 4-6: 20 ppm of H2S gas) was shown to induce oxidative stress and energy metabolism dysfunction. It also led to necroptosis, activated the MAPK pathway, and triggered the NF-κB pathway, associated with promotion of an inflammatory response in chicken spleens [251]. To study the immunotoxicity of H2S, 1-day-old broiler chicks were exposed to atmospheric H2S for 42 days. As a result, H2S was shown to activate the TLR-7/MyD88/NF-κB pathway and the NOD-like receptor protein 3 (NLRP3) inflammasome to promote an inflammatory response, leading to tissue damage in broiler thymus and a Th1/Th2 imbalance. In fact, H2S was indicated to significantly induce IL-1β, IL-4, and IL-10 levels, while downregulating IL-12 and IFN-γ. In addition, mitochondria were shown to be swollen, chromatin was condensed, and nuclear structures were destroyed due to H2S exposure [252].
LPS-Induced Stress
The stimulating effect of LPS on NF-κB expression was shown in vitro in model systems and in vivo in poultry. For example, chicken thrombocytes responded to LPS through the TLR4, MAP kinase, and NF-κB pathways, associated with increased expression of IL-6 and cyclooxygenase-2 and enhanced production of prostaglandin E2 [253]. Similarly, in chicken thrombocytes, LPS-induced IL-6 production was shown to be mediated via activation of NF-κB, extracellular signal-regulated kinase 1/2 (ERK1/2), and MAPK [254]. Furthermore, LPS was shown to upregulate IL-6 and CXCLi2 gene expression in chicken heterophils via ERK1/2-dependent activation of the AP-1 and NF-κB signaling pathways [255]. In laying hens, NF-κB was shown to participate in the induction of mucin expression by LPS in the vaginal mucosa, improving barrier function against infections [256]. An LPS challenge led to increased mRNA abundance of TLR4, NF-κB, IL-1β, and IL-6 in the jejunal mucosa of broilers. However, these effects of LPS administration were ameliorated by dietary Astragalus polysaccharide [257]. Salmonella LPS injection was found to induce liver damage, as indicated by increased necrotic symptoms, severe fatty degeneration, increased alanine aminotransferase (ALT) activity, ballooning degeneration, congestion, and increased inflammatory cell infiltration in liver sinusoids. Significant upregulation of TLR4 expression and its downstream molecules (e.g., NF-κB, MyD88, TNF-α, IL-1β, and TGF-β), increased apoptosis, and decreased proliferation were also observed [258]. Acute spleen injury induced by LPS in young chicks was shown to be associated with significant upregulation of TLR4 at 36 h post LPS stimulation and a slight increase in the expression of NF-κB at 12 h post LPS treatment. The NF-κB-regulated cytokines (TNF-α and IL-6) were also shown to exhibit significant upregulation at 12 h following LPS stimulation [259]. The aforementioned data clearly indicate that LPS can activate NF-κB expression in vitro and in vivo.
An LPS-induced ileum injury model in chickens was established, and histological examination showed a fragmented structure of blood vessels in the ileum and the presence of necrotic tissue in the lumen in the LPS-treated chickens. In the LPS group, the structure of the villi was chaotic, with a rough and uneven surface [238]. Moreover, in comparison to the control group, LPS (60 mg/kg) induced an increase in TLR4 protein expression levels and the p-p65/p65 ratio, increased the mRNA expression of IL-6, IL-8, and TNF-α, and decreased the mRNA expression of IL-10 [260]. Dihydromyricetin (DHM), a natural flavonoid compound with anti-inflammatory activity (0.05% and 0.1%), was shown to have protective effects against LPS-induced inflammatory responses, including regulation of NF-κB expression [260]. Supplementation with leonurine hydrochloride (LH), an alkaloid isolated from Herba leonuri, attenuated LPS-induced intestinal inflammation and barrier dysfunction by significantly downregulating the mRNA expression of NF-κB, COX-2, and proinflammatory cytokines (TNF-α, IL-1β, and IL-6) in the jejunal mucosa. Furthermore, LH administration attenuated LPS-induced IκBα phosphorylation and nuclear translocation of NF-κB (p65) in the jejunal mucosa [261].
Diseases
Modern breeds/strains of commercially grown meat-type broiler chickens are characterized by increased body weight, improved meat yield, including the Pectoralis major (breast) muscle, improved feed conversion, and decreased time to processing. However, myopathies affecting meat quality, especially in the Pectoralis major muscle, are considered a major challenge for modern broiler production. It seems likely that broiler breast muscle myopathies are associated with inflammation [262]. The NF-κB signaling pathway was found to be induced, the mRNA expression levels of downstream inflammatory mediators were increased, and TLR levels were upregulated in the Pectoralis major of wooden breast myopathy-affected broiler chickens [263]. The authors also showed that the contents of IL-1β, IL-8, and TNF-α were increased in the serum of broilers with breast myopathies. At the same time, in breast muscle, the mRNA expression of inflammatory cytokines was dysregulated, indicating an association of this myopathy with an immune disorder and a systemic inflammatory response.
The regulatory roles of NF-κB in the development and pathogenesis of various bacterial and viral diseases have recently been studied. Among bacterial diseases, the roles of NF-κB and inflammation in the pathogenesis of pathogenic Escherichia coli, various Salmonella species, Mycoplasma gallisepticum, Eimeria tenella, and Clostridium perfringens have received the most attention. Infectious bursal disease and Newcastle disease have been at the forefront of understanding the roles of inflammation and NF-κB in viral pathology.
Escherichia coli
Escherichia coli is known to be a Gram-negative, facultatively anaerobic bacterium belonging to the Enterobacteriaceae family [264]. Certain E. coli strains, known as "avian pathogenic E. coli" (APEC), are responsible for colibacillosis, one of the most important causes of chicken mortality in the poultry industry worldwide [265]. To explore the host–pathogen interaction, the global gene expression response of chicken type II pneumocytes (CP II cells), responsible for secreting surfactants and modulating lung immunity, to avian pathogenic Escherichia coli (APEC-O78) infection was determined. In fact, CP II cells were shown to respond to APEC infection with marked changes in the expression of 1390 genes (out of 18,996 genes identified), with 803 downregulated mRNAs and 587 upregulated mRNAs [266]. The major enriched pathways were identified to be related to the NF-κB signaling pathway, the apoptosis pathway, tight junctions, and cytokine–cytokine receptor interaction. Furthermore, the top 15 upregulated biological process terms were found to include regulation of the Toll signaling pathway, the apoptotic process, and intracellular signal transduction [244]. The expression of phosphorylated NF-κB p65 and phosphorylated IκB was significantly upregulated in APEC-infected chicken type II pneumocytes compared with the control group. However, baicalin, a medicinal ingredient isolated from dry roots of Scutellaria baicalensis Georgi, was shown to significantly inhibit the expression of phosphorylated NF-κB p65 and phosphorylated IκB induced by APEC-O78 [267].
Furthermore, protective effects of baicalin against APEC-induced acute lung injury, associated with NF-κB activation and inflammation in chickens, were shown [268]. Artemisinin, a drug derived from the Asian plant Artemisia annua, was shown to alleviate Eimeria tenella infection in chickens by facilitating the apoptosis of host cells and suppressing the inflammatory response, via suppression of the increased mRNA expression of NF-κB and interleukin-17A in ceca during infection [269]. Schizandrin, a bioactive compound found in Schisandra chinensis, was shown to attenuate inflammation induced by APEC-O78 in chicken type II pneumocytes by decreasing the levels of IL-1β, IL-8, IL-6, and TNF-α via its inhibitory effect on NF-κB and MAPK activation [270]. Dietary treatment with both live yeast and mannan oligosaccharide was shown to alleviate E. coli-induced increases in ileal Toll-like receptor 4, NF-κB, and IL-1β expression in broilers [271].
Salmonella
Salmonella, a Gram-negative bacterium belonging to the Enterobacteriaceae family, is commonly found in the digestive tract of infected chickens. Furthermore, it is an important cause of foodborne human illnesses worldwide, and poultry meat is reported to be responsible for up to 25% of outbreaks caused by foodborne pathogens [272]. Infection of chicken TLR5-transfected cells with Salmonella enterica serovar Enteritidis was shown to activate NF-κB in a dose- and flagellin-dependent fashion [273]. In order to study the role of NF-κB in the signal transduction pathway of Salmonella enteritidis-challenged cells, the chicken macrophage HD11 cell line and small interfering RNAs (siRNA) specifically inhibiting NF-κB1 expression were used. In particular, it was found that a 36% inhibition of NF-κB1 expression was associated with increased gene expression of both TLR4 and IL-6 at both 1 h and 4 h following Salmonella challenge [274]. TLR4 was shown to activate NF-κB signaling during cerebral ischemia-reperfusion, leading to increased secretion of inflammatory cytokines and damage of brain tissue [275]. Nucleotide-binding oligomerization domain-containing protein-1 (NOD1), known as a cytoplasmic pattern recognition receptor (PRR), is considered a key member of the NOD-like receptor (NLR) family. As a result of recognition of various pathogens by NLRs, NF-κB signaling is modulated, leading to induction of the host innate immune response. In fact, following S. enterica serovar Enteritidis infection, induced expression of chicken NOD1 and NF-κB was demonstrated [186]. In carrier chickens challenged with Salmonella enterica serovar Pullorum, upregulation of NF-κB and NLRC5 signaling pathways at different persistence periods was observed [276].
Salmonella secreted factor L, a deubiquitinase that contributes to the virulence of Salmonella Typhimurium, was shown to suppress the intracellular NF-κB pathway, enhancing the virulence of Salmonella Pullorum in a chicken model [277]. In chickens, Salmonella Typhimurium was shown to significantly reduce performance, including feed intake and body weight gain, detrimentally affecting the feed conversion ratio. At the same time, Salmonella infection induced the inflammatory expression of NF-κB and MyD88 genes and decreased the expression of the claudin-1, occludin, and mucin-2 tight junction genes in the intestines. Furthermore, S. Typhimurium was reported to significantly decrease ileal bacterial diversity indices [278,279]. The invasion plasmid antigen J (IpaJ) from Salmonella Pullorum was reported to suppress NF-κB activation by inhibiting IκBα ubiquitination and modulating the subsequent inflammatory response [280].
Mycoplasma gallisepticum
Mycoplasma gallisepticum (MG), an avian pathogen belonging to the class Mollicutes, is known as the primary etiological agent of chicken chronic respiratory disease, causing inflammatory damage to the host respiratory system [281]. Initially, when live MG bacteria were incubated with primary chicken tracheal epithelial cells, inflammatory NF-κB-dependent genes were upregulated, while an NF-κB inhibitor abrogated the inflammatory response [282]. Furthermore, TLR2-2 and TLR6 were reported to be upregulated upon MG infection, followed by induction of NF-κB-mediated inflammatory responses [283]. At the next stage of research into the relationship between MG infection and NF-κB expression, microRNAs were employed. Indeed, noncoding RNAs, including microRNAs (miRNAs), are known to be involved in the regulation of various cellular processes, including gene expression at the post-transcriptional level [284]. Among them, miR-21 is an evolutionarily conserved miRNA found in a wide range of vertebrate species, including mammals and birds [285].
Recently, it was shown that, in order to provide an effective defense against MG infection, gga-miR-21 is involved in the activation of the MAPK and NF-κB signaling pathways, leading to increased production of inflammatory cytokines and suppressed cell apoptosis [286]. Similarly, upon MG infection, gga-miR-146c upregulation was shown to repress matrix metalloproteinase-16 (MMP16) expression and activate the TLR6/MyD88/NF-κB pathway; this was demonstrated in DF-1 chicken embryo fibroblast cells and was associated with inhibited cell apoptosis and promoted cell proliferation, important events in the defense against MG infection [287]. Furthermore, upregulation of gga-miR-16-5p was reported to decrease multiplication and cell-cycle progression and increase apoptosis of MG-infected DF-1 cells, by inhibiting the phosphatidylinositol 3-kinase (PI3K)/protein kinase B (Akt)/NF-κB pathway to exert an anti-inflammatory effect [288]. Upon MG infection, upregulation of miR-130b-3p was shown to activate the PI3K/Akt/NF-κB pathway and induce cell proliferation as a result of downregulating phosphatase and tensin homolog (PTEN). Importantly, inhibition of miR-130b-3p led to the opposite results [289]. In DF-1 cells exposed to Mycoplasma gallisepticum, lipid-associated membrane proteins were reported to induce IL-1β production through the NF-κB pathway [290]. Indeed, MG infection was shown to trigger an inflammatory response through the TLR-2/MyD88/NF-κB signaling pathway, leading to tissue damage in the chicken thymus [43,291].
Polydatin (PD), a resveratrol glycoside isolated from Polygonum cuspidatum with prominent anti-inflammatory activity, was used as a therapeutic means against MG-induced inflammation in chickens. First, histopathological studies clearly showed that PD treatment (15, 30, and 45 mg/kg) was able to alleviate MG-induced pathological changes in the chicken embryonic lung. Second, PD treatment (15, 30, and 60 µg/mL) was shown to significantly suppress the expression of IL-6, IL-1β, and TNF-α induced by MG in chicken embryo fibroblast (DF-1) cells. Furthermore, the MG-induced levels of TLR6, MyD88, and NF-κB were also significantly decreased by PD treatment, which restrained the MG-induced NF-κB p65 nuclear translocation [292].
As mentioned above, MG can target host cells and cause chronic respiratory disease in chickens. In fact, in chicken spleen and DF-1 cells, MG infection was shown to impose oxidative stress and inflammation. However, baicalin was reported to suppress the TLR2/NF-κB signaling pathway by inhibiting the phosphorylation of p65 and IκB [293]. Interestingly, baicalin was reported to restore the mRNA expression of mitochondrial dynamics-related genes and maintain the balance between the mitochondrial inner and outer membranes, as well as upregulate the Nrf2/HO-1 pathway and suppress the NF-κB pathway in the spleen of MG-infected chickens [294]. Similar protective effects of baicalin [295] and polydatin [292] against MG-induced inflammatory injury in the chicken embryonic lung were associated with their inhibition of the TLR6/MyD88/NF-κB pathway.
Puerarin (PUE), an isoflavone found in a number of plants and herbs, was shown to inhibit MG-induced inflammation and apoptosis via suppression of the TLR6/MyD88/NF-κB signaling pathway in chickens. In fact, compared to the MG-infected group, PUE was found to effectively inhibit the expression of MG-induced inflammatory genes, including TNF-α, IL-1β, IL-6, TLR6, MyD88, and NF-κB. In particular, PUE was reported to dose-dependently inhibit MG-induced NF-κB p65 translocation to the cell nucleus [296].
Eimeria tenella
Chicken coccidiosis is an enteric disease caused by Eimeria infection, leading to severe economic losses associated with immunosuppression and a high level of mortality in the poultry industry worldwide [297]. In fact, Eimeria tenella infection was shown to significantly increase the expression of NF-κB mRNA in chicken cecal tissue in vivo [269], and a similar increase in the expression level of NF-κB was observed in chicken intestinal epithelial cells in vitro after infection with E. tenella sporozoites [298].
There is a need for more research related to the regulatory roles of the NF-κB pathway in the development of a chicken coccidiosis prevention strategy.
Clostridium perfringens
Clostridium perfringens-induced necrotic enteritis in chickens has become an economically significant problem for the broiler industry [299,300], especially at farms that have stopped the use of antibiotic growth promoters [301]. It is known that the main cell-wall component of Clostridium perfringens, peptidoglycan, can be recognized by TLR2, with subsequent activation of the NF-κB signaling pathway to induce cytokine and chemokine production, leading to inflammation [302]. The authors conducted an in vitro study with primary intestinal epithelial cells to assess the chicken intestinal inflammatory responses to C. perfringens and showed increased cytokine expression related to NF-κB activation [302]. Furthermore, pathways affected by the infusion of C. perfringens culture supernatant into the duodenum of broilers included NF-κB signaling, death receptor signaling, and an inflammatory response [303].
Importantly, two Lactobacillus species were shown to reduce the growth of Clostridium perfringens and inhibit the upregulation of NF-κB p65 in C. perfringens-challenged chicken intestinal epithelial cells [304]. Indeed, inclusion of L. acidophilus in the chicken diet was shown to improve gut health and reduce the mortality of Clostridium-challenged broiler chicks suffering from necrotic enteritis [305].
Chlamydia psittaci
Chlamydia psittaci, a pathogen of poultry and pet birds, is known to have protective mechanisms to cope with proinflammatory mediators during the early host response, leading to effective immune evasion and causing psittacosis/ornithosis [306]. The polymorphic membrane protein D (PmpD) is a highly conserved outer-membrane protein helping the pathogen to evade immune system protection during Chlamydia psittaci infection. Therefore, the ability of the N-terminus of PmpD (PmpD-N) to regulate the functions of chicken macrophages was studied. In particular, it was shown that stimulation of HD11 macrophages with PmpD-N was associated with increased secretion of the Th2 cytokines IL-6 and IL-10 and upregulated expression of TLR2, TLR4, MyD88, and NF-κB. In contrast, inhibition of TLR2, MyD88, and NF-κB in HD11 cells was reported to significantly decrease IL-6 and IL-10 cytokine levels, associated with significantly enhanced NO production and phagocytosis [307]. The plasmid-encoded protein CPSIT_P7 of Chlamydia psittaci was shown to induce the TLR4/Mal/MyD88/NF-κB signaling axis and orchestrate the inflammatory cytokine response [308].
It is important to mention that, during host cell infection, NF-κB is activated by various pathogens, leading to the creation of a hostile environment for invading infectious agents; however, the pathogen can diminish the protective inflammatory response by blocking NF-κB translocation to the nucleus [309].
Infectious Bursal Disease
As mentioned above, NF-κB is involved in the pathogenesis of various virus-induced diseases. In fact, infectious bursal disease virus (IBDV) is known as the etiological agent of a highly contagious and immunosuppressive disease detrimentally affecting domestic chickens (Gallus gallus) in commercial poultry production. Indeed, IBD (Gumboro disease) can cause high morbidity and mortality in infected birds, leading to major economic losses in the poultry industry worldwide. The danger of IBD lies in its immunosuppressive action, associated with a loss of IgM-bearing B lymphocytes and the destruction of the bursa of Fabricius [72]. There is some evidence indicating that IBDV infection can cause oxidative stress in chickens [310], but the regulatory roles of the antioxidant defense network in IBD need further elucidation.
IBDV infection was found to induce spleen macrophage activation via p38 MAPK and NF-κB pathways [311]. However, the molecular mechanisms of IBDV development and pathogenicity are still poorly understood; nevertheless, poorly regulated cytokine production and B-cell depletion due to apoptosis are believed to be important contributing factors to the disease pathology and severity. In IBDV-infected chicken embryonic fibroblasts, a great number of target genes and inducers of NF-κB were reported to be upregulated, in comparison to noninfected cells. It could well be that IBDV may support its replication and facilitate viral spread by affecting host-cell survival and apoptosis through NF-κB activation [312]. Interestingly, exacerbated apoptosis of cells infected with IBD virus upon exposure to IFN-α was shown to be associated with double-stranded RNA-dependent protein kinase (PKR), TNF-α, and NF-κB expression. Indeed, their downregulation is reported to drastically reduce the extent of apoptosis [313]. Protocatechuic acid (PCA), a type of widely distributed naturally occurring phenolic acid, was found to activate NF-κB signal pathways in the early stage of IBDV infection, leading to apoptosis promotion [314].
Newcastle Disease
Newcastle disease (ND) is regarded as one of the most important avian diseases significantly affecting poultry production all over the world, posing a great threat to the poultry industry [315]. It is well known that ND epidemics can cause high chicken mortality with great economic losses [315]. There is no effective treatment for the disease, and poultry producers rely on vaccination and strict biosecurity as vital measures for controlling its spread [316]. It is known that NDV can cause oxidative stress in poultry [310]; however, the roles of redox homeostasis in ND development are poorly understood. In fact, intense inflammatory responses leading to excessive cellular apoptosis and tissue damage were shown to be a result of Newcastle disease virus (NDV) infection in poultry. However, the molecular mechanisms of such actions have not been fully elucidated [317]. In NDV-infected chickens, the glucocorticoid dexamethasone was shown to modulate NF-κB-dependent gene expression by upregulating FK506-binding protein 51 expression [318]. Furthermore, Newcastle disease virus-like particles were shown to induce dendritic cell maturation with synthesis of proinflammatory cytokines through the TLR4/NF-κB pathway [319]. Recently, it was shown that, during NDV infection, high-mobility group box 1 (HMGB1), a key member of the damage-associated molecular patterns (DAMPs), was responsible for NF-κB induction and a drastic increase in proinflammatory cytokine production in DF-1 and A549 cells [317]. It seems likely that activated NF-κB signaling can suppress NDV replication. This was shown experimentally with DF-1 cells (a chicken embryo fibroblast cell line) by using gga-miR-19b-3p, which enhanced NF-κB activity and led to increased inflammatory cytokine production and inhibition of NDV replication [320].
Expression of IFIT5 (interferon-induced protein with tetratricopeptide repeats 5), which possesses antiviral activity and enhances innate immunity, was studied in chickens. The relative mRNA expression level of chicken IFIT5 (chIFIT5) was shown to be the highest in spleen, and the expression level of chIFIT5 was found to be significantly upregulated following NDV infection. In particular, it was shown that overexpression of chIFIT5 could promote IRF7- and NF-κB-mediated gene expression following NDV infection [321]. The DNA-sensing pathway is known to induce innate immune responses against DNA virus infection, with NF-κB signaling being critical for the establishment of innate immunity.
Other Viral Diseases
It seems likely that Marek's disease virus (MDV) and reovirus infections also affect NF-κB signaling. In fact, Marek's disease (MD) is a neoplastic viral disease infecting chickens and frequently causing cancers in these animals [322]. It was shown that NF-κB is involved in MDV-induced neoplastic transformation of CD30-expressing chicken lymphocytes in vivo [323]. Furthermore, the chicken MD virus RLORF4 (an MDV-specific gene) was shown to inhibit type I interferon production by antagonizing NF-κB activation. In fact, RLORF4 binds to the Rel homology domains of the NF-κB subunits p65 and p50, interrupting their translocation to the nuclei and, thus, inhibiting IFN-β production [324].
Avian reoviruses are important pathogens causing infectious arthritis/tenosynovitis, stunting syndrome, respiratory disease, enteric disease, immunosuppression, and malabsorption syndrome in poultry [325,326]. Thus, avian reovirus can cause oxidative stress and disturb redox homeostasis in poultry [310]. Avian reovirus S1133 in cell cultures, in the early stages of infection, was shown to induce Akt/NF-κB and STAT3 signaling, leading to an inflammatory response and delayed apoptosis [327]. Furthermore, in avian reovirus-infected chickens, the expression peak for NF-κB in peripheral blood lymphocytes was shown to occur at 3 days post infection (dpi). Similarly, IFN-α, IFN-β, and IL-12 expression levels also peaked at 3 dpi, while IFN-γ, IL-6, IL-17, and IL-18 expression reached a maximum level earlier (at 1 dpi), whereas IL-8 (5 dpi) and IL-1β and TNF-α (7 dpi) peaked later [328]. Recently, the phosphoproteomic responses of duck spleen tissue to reovirus infection were studied, and 16 proteins involved in the intracellular signaling pathways of PRRs were shown to be phosphorylated. In particular, changes in the phosphorylation levels of NF-κB, as well as MyD88, receptor-interacting protein 1 (RIP1), MDA5, and IRF7, indicated an important role of protein phosphorylation in duck immune responses to viral antigens [329]. Pattern recognition receptor signaling during innate responses to influenza A viruses in the mallard duck was recently reviewed [330], and the fundamental roles of NF-κB in innate immune responses to duck Tembusu virus infection were discussed in detail [331].
As can be seen from the above-presented data, NF-κB plays a pivotal role in poultry protection against major microbial and viral diseases, by regulating immunity and inflammation; however, the molecular mechanisms of its regulation in avian species await further investigation.
Selenium
There is a great body of evidence indicating that the micronutrient selenium (Se) and selenoproteins are involved in the regulation of inflammatory signaling pathways, including NF-κB signaling, implicated in the pathogenesis of various diseases [13,332]. As a part of 25 selenoproteins, Se is involved in antioxidant defenses and the maintenance of redox balance [13].
The literature data related to the effect of Se on NF-κB and inflammation can be divided into three groups. Firstly, the detrimental effects of Se deficiency or excess on NF-κB signaling were shown. Secondly, the protective effects of Se in Pb and Cd toxicity were described. Thirdly, in LPS-induced models of oxidative stress and inflammation, Se was shown to be protective.
Se Deficiency
Se deficiency in chickens was shown to lead to activation of the NF-κB pathway, with a change in selenoprotein gene expression resulting in kidney dysfunction [333]. Furthermore, Se deficiency was reported to attenuate chicken duodenal mucosal immunity via activation of the NF-κB signaling pathway. In particular, Se deficiency enhanced the phosphorylation of IκB-α and of the inhibitor of nuclear factor kappa-B kinase subunit alpha (IKKα), and increased p50 and p65 DNA-binding activities. Furthermore, in Se deficiency, IKKα was elevated, but IκB-α was decreased [334]. The increasing levels of ROS in chicken duodenal mucosa due to Se deficiency could trigger NF-κB signal transduction [335]. In a recent experiment, the control group was fed a complete formula feed (0.2 mg Se/kg), while the experimental group of chickens was fed a low-Se diet (0.004 mg Se/kg), with sampling at 15, 25, 35, 45, and 55 days. In chicken spleen at 15-45 days, the relative expression of TLR4 mRNA was shown to be increased due to Se deficiency. The relative expression of NF-κB mRNA in the experimental group was also increased in comparison to that in the control group at 15-45 days. The relative expression of IL-6 mRNA and the protein expression level of TLR4 in the experimental group were increased due to Se deficiency at 15-45 days of age [336]. The authors concluded that Se deficiency is associated with inflammatory injury as a result of activation of the TLR4/TIR-domain-containing adapter-inducing interferon-β (TRIF)/NF-κB signaling pathway in chicken spleen.
Interestingly, the adverse effects of Se excess/toxicity (15 mg/kg Se for 45 days) on inflammatory and immune responses in chicken spleens were also associated with enhanced expression of NF-κB, iNOS, COX-2, PTGE, IL-6, TNF-α, and IL-4, but depressed FOXP3 and IFN-γ expression [337]. However, dietary Se supplementation at 2 mg/kg did not affect the mRNA levels of NF-κB, COX-2, PTGEs, and TNF-α in chicken kidneys.
Se and Pb Toxicity
Dietary Se has been shown to alleviate the Pb-induced increase in NF-κB and HSP expression in chicken livers [229]. Importantly, Se supplementation (1 mg/kg diet) was shown to reduce Pb concentration in serum, partly mitigate the Pb-induced activation of the NF-κB pathway, and further enhance selenoprotein expression induced by Pb exposure [230]. One-week-old male chickens were treated via drinking water with Pb (350 mg/L), provided with dietary Se (1 mg/kg), or given both Pb and Se. At the 4th, 8th, and 12th weeks, kidneys were used to assess oxidative stress indicators, the relative expression of cytokines, and other inflammatory factors. The results showed that Pb consumption imposed renal injuries associated with increased lipid peroxidation (MDA), as well as increased content and expression of IL-1β, IL-6, IL-17, NLRP3, caspase-1, NF-κB, COX-2, TNF-α, and PTGEs, and with reduced GSH content, as well as reduced GPx and SOD activities, in the chicken kidneys. Se administration was shown to alleviate the aforementioned changes [338].
Chicken Cd exposure (150 mg/kg) was shown to activate inflammation-related genes, including TNF-α, NF-κB, iNOS, COX-2, and prostaglandin E synthase (PTGEs), in chicken breast muscles [323,343]. Interestingly, Se (2 mg/kg as sodium selenite for 90 days) was reported to alleviate Cd-induced inflammation and meat quality deterioration via antioxidant and anti-inflammatory actions [343]. Supplementation with Se-yeast (0.5 mg/kg) was shown to have an antagonistic effect on Cd-induced inflammatory injury in chicken livers [233].
Se and LPS
By inhibiting the phosphorylation of NF-κB, Se was shown to reduce breast tissue inflammatory injury induced by LPS [344]. In laying hens, LPS stimulation (LPS injected into the abdominal cavity at the age of 8 months) imposed oxidative stress, indicated by decreased activity of SOD, GPx, and CAT, decreased GSH content, and increased H2O2 and MDA content in the chicken myocardium. LPS also increased the expression of p38 MAPK and NF-κB, as well as TNF-α, IL-1, PTGE, COX-2, and iNOS. Interestingly, the addition of dietary SeMet (0.5 mg/kg for 4 months) was found to alleviate the changes in the above inflammation indicators [345]. Similarly, SeMet (0.5 mg/kg) was shown to inhibit LPS-induced inflammation of liver tissue via suppression of the TLR4/NF-κB/NLRP3 signaling pathway in chickens [346].
In a recent experiment with 46-week-old ISA laying hens, birds were injected intraperitoneally with LPS (200 mg/kg), and, after 5 h, tracheal tissue was collected for various assays. In the LPS-treated group, the epithelial cells were shown to be degenerated, with necrotic changes accompanied by inflammation. The expression of the NF-κB pathway and related inflammatory factors, including TNF-α, iNOS, NF-κB, COX-2, and PTGEs, was significantly increased in the tracheal tissue due to LPS treatment. Under such conditions, increased (from 0.2 up to 0.5 mg/kg) SeMet supplementation for 90 days showed anti-inflammatory effects [347].
Amino Acids
Dietary L-arginine supplementation (1.05-1.9%) was shown to attenuate the LPS-induced inflammatory response in broiler chickens, as evidenced by the decreased expression of IL-1β, TLR4, and PPAR-γ mRNA in the spleen, and of IL-1β, IL-10, TLR4, and NF-κB mRNA in the cecal tonsils [348]. There were no significant interactions between immune stress caused by bovine serum albumin (BSA) and supplementation of threonine (0.49-0.76% for 21 days) for NF-κB gene expression in the jejunum or ileum of Pekin ducks [349].
Interestingly, NF-κB expression in the jejunum was twofold higher than that in the ileum. Leucine was reported to alleviate LPS-induced inflammatory responses as a result of downregulating the NF-κB signaling pathway. In particular, a model system employing the intestinal tissue from specific pathogen-free chick embryos cultured in the presence of LPS for 2 h was used. LPS was shown to increase the phosphorylation of NF-κB while decreasing the phosphorylation level of mTOR. In this system, leucine supplementation at 40 mM was reported to suppress the phosphorylation levels of NF-κB, while restoring the phosphorylation level of mTOR [350].
Phytogenic Supplements
Recently, various phytogenic supplements have received tremendous attention in poultry and animal nutrition, and the molecular mechanisms of their protective actions have in many cases been related to their antioxidant properties. However, our analysis of the current data in this area showed that polyphenolic compounds are poorly absorbed, and their concentrations in target tissues are several orders of magnitude lower than those used in in vitro studies [351]. Furthermore, their antioxidant properties are condition-dependent, and, in many cases, polyphenols can show pro-oxidant activities. Therefore, it was suggested that polyphenolic effects on NF-κB and Nrf2 expression could be a major molecular mechanism of their protective action in various model systems and in poultry nutrition in general [351,352]. Data presented in Section 4, showing the activating effects of polyphenol compounds on Nrf2 expression and activity with simultaneous suppression of the NF-κB pathway, confirm that idea. There is also a range of publications showing protective effects of phytogenic supplements in poultry nutrition under various stress conditions.
In chickens receiving conventional vaccinations, the relative mRNA expression of the NF-κB gene in hepatocytes decreased linearly as the dietary concentration of resveratrol, a plant-derived polyphenolic compound, increased from 200 to 800 mg/kg of diet [353]. Similarly, dietary resveratrol (200-600 mg/kg) was shown to reduce the protein expression of NF-κB, HSP70, and HSP90 in the jejunal chicken villi after 15 days of heat stress [201]. Dietary resveratrol (400 mg/kg) was also shown to protect quail hepatocytes against heat stress by decreasing the expression of NF-κB, Hsp70, and Hsp90, and increasing hepatic SOD, CAT, and GSH-Px activities [354]. Daidzein (DA), a soy isoflavone included in the breeder diet at 20 mg/kg, was shown to activate the NF-κB, MAPK, and Toll-like receptor signaling pathways of the offspring broilers. Furthermore, DA promoted lymphocyte development and differentiation and downregulated the expression of genes regulating lymphocyte apoptosis. It also increased the proportion of B cells, leading to promotion of Ig secretion, with increased serum IgA and IgG levels and serum ND virus antibody titers [355]. In healthy Arbor Acres broilers, quercetin supplementation (0.04% and 0.06% for 6 weeks) was shown to significantly increase the expression of TNF-α, TNF receptor-associated factor-2 (TRAF-2), TNF receptor superfamily member 1B (TNFRSF1B), nuclear factor kappa-B p65 subunit (NF-κB p65), and interferon-γ (IFN-γ) mRNA, while the expression of NF-κB inhibitor-alpha (IκB-α) mRNA was significantly decreased [356]. Ginsenosides, the major constituents of ginseng with unique biological activities, were shown to promote proliferation of chicken primordial germ cells through protein kinase C (PKC)-involved activation of NF-κB [357].
Tanshinone IIA (TIIA), a major lipophilic component extracted from the root of Salvia miltiorrhiza Bunge and used in Chinese medicine, was shown to have a protective effect against pulmonary arterial hypertension-related inflammatory responses [358]. Treatment with an extract of Hypericum perforatum L., also known as Saint John's wort, at doses of 120-480 mg/kg for 5 days was shown to reduce infectious bronchitis virus (IBV)-induced injury and the mRNA expression level of IBV in the chicken trachea in vivo. In particular, the expression of IL-6, TNF-α, and NF-κB was shown to be significantly decreased, while the mitochondrial antiviral signaling gene, IFN-α, and IFN-β mRNA levels were significantly induced in vitro and in vivo [359].
Other Nutrients and Probiotics
Retinoic acid, an active vitamin A metabolite, was indicated to activate the PI3K/Akt and NF-κB signaling pathways, leading to proliferation of cultured chicken primordial germ cells [360]. In LPS-challenged chickens, increased vitamin E supplementation (50 mg/kg vs. 10 mg/kg) was shown to decrease the expression of nuclear NF-κB p65 and increase the levels of IκBα in the liver [361]. It seems likely that the inhibitory effects of vitamin E on NF-κB expression can also be seen under physiological conditions, without any stress challenges. For example, in broilers fed increased vitamin E levels (14.11-14.91 mg/kg vs. 4.38-4.63 mg/kg) for 21 or 42 days, liver NF-κB p65 levels were significantly decreased, whereas liver IκB-α levels were significantly increased [362]. The NF-κB DNA-binding activity in high-density housing birds was shown to increase significantly compared with free-range and low-density housing birds. Dietary taurine (0.1%) was shown to significantly alleviate the NF-κB DNA-binding activity in chicken liver [363] and layer oviduct tissue [364], which was initially increased due to high-density housing systems. Interestingly, in chicken renal tissue, the same dose of dietary taurine was able to decrease the NF-κB DNA-binding activity only in the low-density housing environment [365].
Necrotic enteritis (NE) infection was shown to significantly upregulate the mRNA levels of the immune-related molecules TLR-2, IL-1β, IL-4, IL-10, IFN-γ, and iNOS and the growth factors TGF-β3 and insulin-like growth factor 2 (IGF2) in the jejunum of broiler chickens. However, NF-κB expression was not affected. Interestingly, compared with non-supplemented groups, the probiotic Enterococcus faecium NCIMB 11181 was shown to ameliorate necrotic enteritis-induced intestinal barrier injury in broiler chickens and increased gene expression levels of TLR2, MyD88, NF-κB, IL-4, iNOS, TGF-β3, PI3K, IGF-2, glucagon-like peptide-2 (GLP-2), and epidermal growth factor receptor (EGFR). There was also a significant interactive effect of NE infection and E. faecium treatment on the mRNA expression of various immune-inflammation factors, including NF-κB [366]. A dendritic cell (DC)-targeted mucosal vaccine using Lactobacillus plantarum as an antigen delivery system against genotype G57 H9N2 virus infection was developed. The vaccine was shown to confer efficient protection against G57 H9N2 infection, due to activation of DCs by the TLR-induced NF-κB pathway. This was associated with improved DC migration and improved presentation of the immunogen to T and B lymphocytes, causing changes in T-cell polarization toward Th1, Th2, and regulatory T cells (Treg cells) and inducing more efficient mucosal and adaptive immune responses [367]. Chickens infected with Salmonella enteritidis (0.5 mL of S. enteritidis bacterial suspension, 10⁹ CFU/mL, through the oral cavity at 9 days of age) were administered a probiotic Pediococcus pentosaceus microcapsule (1 g/kg). It was shown that the probiotic effects on proinflammatory indices were time-dependent; the probiotic significantly decreased NF-κB expression in spleen and cecum samples at 1 dpi, whereas the difference disappeared at 3 dpi, and NF-κB expression was significantly increased in samples from chickens fed the probiotic at 7 dpi before disappearing again at 14 dpi [368].
NF-κB and Inflammation in Poultry Production
Inflammation is known to be a highly orchestrated and tightly controlled protective mechanism dealing with infection and coordinating the repair and regeneration process. However, in the case of misregulation of inflammation or its chronic character, it can be responsible for severe tissue damage [369]. Therefore, elucidation of the molecular mechanisms underlying the regulation/control and resolution of inflammation is among the major topics of modern medical and veterinary sciences. Indeed, the relationship among inflammation, NF-κB signaling, and chronic diseases, including cancer [370], liver diseases [371], multiple sclerosis [372], immune-related disorders [69,373], and others [374], has received tremendous attention in recent years. Uncontrolled inflammation is shown to be associated with many widely occurring human diseases, including cardiovascular disease, neurodegenerative diseases, diabetes, obesity, asthma, arthritis, and periodontal diseases [375]. Therefore, modulating inflammation through the negative regulation of NF-κB signaling is an important approach in medical sciences and clinical practices [376].
In fact, the efficacy, duration, and outcomes of an inflammatory response in poultry were shown to be condition-dependent and determined by the triggering signal recognized by the innate immune receptors [377]. It seems likely that poultry metabolic diseases, including cardiovascular ailments responsible for a major portion of flock mortality, as well as musculoskeletal disorders that slow chicken growth and cause lameness [378] and leg/bone disorders [379], are associated with increased and misregulated inflammatory responses. Intestinal health is known to depend on a range of host-related factors (immunity, mucosal barrier), as well as nutritional, microbial, and environmental challenges. Thus, intestinal dysbiosis, a compromised mucosal barrier, and gut inflammation are among the major issues in commercial poultry production systems after the ban on growth-promoting antimicrobials in animal feed [380]. Furthermore, non-nutritive dietary ingredients, biosecurity, immunology, vaccine technology, and genetics are suggested to play an important role in antibiotic-free poultry production [40].
It was hypothesized that subclinical gut disorders leading to a chronic low-level inflammatory response in the gut, associated with oxidative stress and redox disbalance, could result in the disruption of digestive function and poor immune competence [44]. Furthermore, the authors also suggested that chronic intestinal inflammation is responsible for the decreased performance and increased incidence of intestinal problems observed in poultry production. In fact, there is a range of nutritional stress factors, including feed with a high content of nonstarch polysaccharides, crude protein excess, ingredient rancidity, and mycotoxin- and/or heavy-metal-contaminated feed, which could trigger gut inflammation in poultry species [30,44].
Inflammation and the susceptibility to inflammatory disorders are known to be regulated by nutrition, the gut microbiome, and genetics. In fact, altered nutrient status is shown to reprogram host inflammation via the gut microbiota [381]. Furthermore, the crosstalk between gut bacteria and host immunity is considered to be of great importance in intestinal inflammation [382], and the intestinal microbiota has emerged as a key player in metabolic inflammation and dysfunction [383]. The modulation of inflammatory responses associated with NF-κB regulation has been described for many nutrients, including selenium, taurine, carnitine, and silymarin [30]. Furthermore, commercially used feed additives, including probiotics, prebiotics, phytobiotics, organic acids, short- and medium-chain fatty acids, essential oils, and enzymes, can help maintain optimal host-microbiome interactions to support gut integrity and efficient growth and health [384,385].
NF-κB is known as a master regulator of the inflammatory response in the complex inflammatory network, playing a critical role in the host defense against pathogens, and it protects cells from apoptosis as a result of triggering the upregulation of proinflammatory genes [386]. Furthermore, in acute inflammation, NF-κB, as a redox-sensitive transcription factor, was shown to participate in condition-dependent selective regulation of the expression of many target genes. These include proinflammatory cytokines (TNF-α, IL-1β, IL-6, and IL-12), various antioxidants (e.g., glutamate cysteine ligase, SOD1, SOD2, and catalase) [386], proinflammatory enzymes (secretory phospholipase A2 (sPLA2), COX-2, and iNOS), chemokines (e.g., macrophage inflammatory protein 1α (MIP-1α) and monocyte chemoattractant protein-1 (MCP-1)), growth factors, cell-cycle regulatory molecules, anti-inflammatory molecules, and adhesion molecules [387]. Under stress conditions, the NF-κB network is responsible for balancing the effects of various intracellular stress signals to maintain homeostasis and cell/tissue integrity [30]. Therefore, a better understanding of the condition-dependent molecular events/mechanisms determining the point at which NF-κB responses switch from being protective to becoming damaging is of great importance [372,388] and awaits further investigation.
Conclusions
Today, NF-κB is known as a redox-sensitive, inducible nuclear transcription factor that regulates the expression of a number of genes associated with important biological processes, including innate and adaptive immune responses, cell growth, maturation, survival, and adaptive homeostasis establishment via interactions with other transcription factors and vitagenes [1]. Indeed, NF-κB signaling is an important element in cell adaptation to a diverse array of environmental stimuli, including oxidative stress. These stimuli can be recognized by various receptors, and the subsequent response involves specific adapter proteins. Under normal physiological conditions, the antioxidant defense network and the optimized redox balance in the cell/tissue are responsible for orchestrating important biological processes associated with cell protection, repair, and growth. This includes T-cell maturation, fighting against infections, DNA damage repair, and tissue healing and integrity restoration after injury. However, under various stress conditions, associated with redox disbalance, aberrant NF-κB activation can lead to detrimental changes, including the development of many age-related diseases. In fact, it was reported that activation of NF-κB is an important mechanism of host defense against infection and stress [77]. Indeed, the results from gene knockout experiments clearly showed that mice deficient in NF-κB-mediated transcriptional regulation were susceptible to a variety of infections and were characterized by compromised immunity: specifically depressed immunoglobulin expression, defective humoral immune responses, and decreased responses to LPS [389]. In fact, it was recently proposed that NF-κB is a vital regulator of inflammation, indicating that the dynamical attributes and the composition of the nuclear NF-κB complexes cumulatively instruct context-specific inflammatory gene patterns [48]. A variety of endogenous and exogenous stimuli in poultry have been characterized, including danger, damage, and survival signals from viral and bacterial components, proinflammatory cytokines, DNA damage, growth factors, and other stressors. As mentioned above, the stimuli can be recognized by a range of various receptors with subsequent activation of NF-κB, along with involvement of an array of specific adapter proteins [60,67].
Poultry production was shown to be associated with a range of stresses, which can be divided into four major categories, namely, environmental, technological, nutritional, and internal/biological stresses [28,29]. It was proven that RONS excess and a compromised antioxidant defense network are responsible for compromised health, as well as the decreased productive and reproductive performance of growing broilers, laying hens, and breeder birds [30,390]. Therefore, the antioxidant defense network in poultry is of great importance, and understanding the molecular mechanisms underlying its regulation to optimize redox balance in various tissues and in the body in general under commercially relevant stress conditions is an important topic of current research in avian biology and poultry health [26,30].
Our analysis of the recent literature related to the regulatory roles of NF-κB in poultry can be summarized as follows:
• Similar to mammalian species, in poultry, NF-κB plays a central role in the regulation of many physiological and pathological processes.
• In thermally stressed birds, NF-κB expression is condition-dependent, including the temperature, exposure duration, and bird's age.
• The effects of dietary AFB1 on NF-κB expression in chicken liver are also condition-dependent. In general, AFB1 was shown to compromise AO defenses and increase proinflammatory cytokine production via NF-κB induction.
• Mn or Cu excess in the chicken diet was shown to increase the expression of NF-κB in testes, heart, and immune organs.
• The proinflammatory effects of heavy metals (As, Cd, and Pb) in chickens were shown to be mediated via NF-κB pathway activation in various tissues.
• Increased concentrations of NH3 and H2S (main environmental stressors in poultry production) in the air during chicken housing were shown to impose oxidative stress and inflammatory responses via NF-κB activation.
• The stimulating effect of LPS on NF-κB expression was shown in vitro in model systems and in vivo in poultry.
• In many bacterial and viral diseases, NF-κB is activated to increase proinflammatory cytokine production and mount an inflammatory response, creating a hostile environment for pathogens.
• The main bacterial pathogens causing various diseases in poultry production, including Escherichia coli, various Salmonella species, Mycoplasma gallisepticum, Eimeria tenella, Clostridium perfringens, and Chlamydia psittaci, were shown to induce proinflammatory responses in birds associated with increased NF-κB expression and activity.
• In model systems based on the investigation of gene expression changes due to various infections, it was shown that the development of viral diseases, including infectious bursal disease, Newcastle disease, Marek's disease, and reovirus challenges, was associated with the induction of NF-κB and inflammatory responses.
• Nutritional modulation of NF-κB expression and activity was shown to be achievable using various antioxidants, including selenium, various polyphenols, taurine, retinoic acid, vitamin E, and some probiotics. In fact, under various stress conditions, these nutrients can ameliorate (partly or completely) the increased NF-κB expression and activity imposed by stressors.
An opportunity to use nutritional/pharmacological means to modulate NF-κB expression and activity should be exploited further to deal with inflammation-associated poultry disorders/diseases, including gut health problems associated with antibiotic-free poultry production. In fact, various nutrients, including taurine [391], carnitine [392][393][394], silymarin [352], vitamin E [21], selenium [13,20], carotenoids [22], and others [26], have been shown to affect antioxidant defenses and vitagenes, helping maintain the redox balance in various chicken tissues. There is a need for further research related to interactions of the antioxidant defense network with vitagenes and transcription factors, including NF-κB and Nrf2, which are responsible for the maintenance of redox homeostasis and stress adaptation under the commercial conditions of poultry production.
Author Contributions: P.F.S. wrote the manuscript; I.I.K. was involved in data analysis; M.T.K. was involved in data analysis and editing the manuscript. All authors have read and agreed to the published version of the manuscript.
Funding: P.F.S. and I.I.K. were supported by a grant from the Government of the Russian Federation (Contract No. 14.W03.31.0013).
Acknowledgments: The authors acknowledge financial support in the form of a grant from the Government of the Russian Federation (Contract No. 14.W03.31.0013) to P.F.S. and I.I.K.
Conflicts of Interest: The authors declare no conflict of interest.
Psychometric Evaluation of the Drive for Muscularity Scale and the Muscle Dysmorphic Disorder Inventory among Brazilian Cisgender Gay and Bisexual Adult Men
Despite high levels of muscularity concerns among sexual-minority men, most of the existing literature on the drive for muscularity and muscle dysmorphia focuses on heterosexual men and has mainly been conducted in Western and English-speaking regions. The present study aimed to evaluate the psychometric properties of the Drive for Muscularity Scale (DMS) and the Muscle Dysmorphic Disorder Inventory (MDDI) in Brazilian cisgender gay and bisexual adult men who were 18–50 years old. We evaluated the factor structure of both measures using a two-step, split-sample exploratory (EFA; n = 704) and confirmatory (CFA; n = 705) factor-analytic approach, which supported the original three-factor structure of the MDDI and resulted in a reduced two-factor solution with 13 items for the DMS. Convergent validity was supported through associations of the DMS and the MDDI with eating disorder symptoms, body-ideal internalization, self-objectification beliefs and behaviors, and body appreciation measures. Additionally, we found good internal consistency, and test–retest reliability of both measures. Results support the validity and reliability of the DMS and the MDDI in Brazilian cisgender gay and bisexual adult men and will support future studies exploring these constructs in Brazilian sexual-minority men.
Introduction
Muscle dysmorphia (MD) is characterized by a pathological preoccupation with one's degree of muscularity that involves distress and fear over the idea that one's body is too small or not sufficiently muscular [1]. Individuals with MD engage in rigid and obsessive behaviors that can have serious health consequences and impair psychosocial functioning [1]. Specifically, individuals with MD display attitudes and behaviors similar to people with a high drive for muscularity, including excessive exercise, muscularity-oriented disordered eating, and the use of performance-enhancing drugs (e.g., androgenic anabolic steroids) [2][3][4]. Indeed, a systematic review with meta-analysis pointed to the drive for muscularity as a risk factor for MD [5]. Furthermore, research using non-clinical samples has demonstrated associations between the drive for muscularity and MD symptoms, such as higher eating disorder (ED) symptoms [6][7][8][9], body-ideal internalization [10], depressive symptoms [11], self-objectification, and exercise dependence [12]. Moreover, higher muscularity concerns have been associated with lower self-esteem [13] and body appreciation [14,15].
Several instruments have been developed to study and assess MD symptoms in the absence of a clinical interview (i.e., one using the Diagnostic and Statistical Manual of Mental Disorders criteria). Notably, there is limited research in this area, and the present investigation could help to support more research into muscularity concerns and MD symptoms among sexual-minority men, an at-risk population. Indeed, there has been a dearth of research on MD in non-heterosexual men, despite increasing evidence of elevated muscularity concerns among sexual-minority men, including the drive for muscularity [8,31]. For example, a review that included five studies comprising more than 100,000 participants found that gay men were more likely than heterosexual men to report dissatisfaction with their physical appearance and muscle size/tone, and to experience objectification, surveillance, appearance-based social comparison, and pressure from the media to be attractive [32]. These results demonstrate that sexual orientation is an important factor to be considered in studies on body image and MD in men.
The studies by Convertino et al. [33] and Oshana et al. [34] support the idea that this elevation experienced by sexual-minority men may be due in part to minority stressors, such as fear of rejection, sexual orientation concealment, internalized homophobia, stigma, prejudice, and discrimination. Interestingly, recent research found that minority stressors can impair one's self-perception and may be risk factors for body dysmorphic disorder, including MD [34]. Again, these results indicate that sexual-minority men have idiosyncrasies in relation to their body image and physical appearance concerns, which should be considered in validation studies.
In addition to the small number of studies on muscularity concerns in non-heterosexual men, there is a paucity of studies on the drive for muscularity and MD symptoms in non-Western or non-English-speaking countries. In Latin America, although anti-discrimination laws exist, cisgender gay and bisexual men have suffered from high rates of discrimination and violence [35,36]. Furthermore, researchers have suggested that Brazilian culture promotes a "cult of the body," attributing importance to physical appearance and sculpted bodies for sexual-minority men and for men regardless of sexual orientation [37]. For example, in a study conducted with Brazilian gay men (n = 646), 69.7% of the participants reported body image dissatisfaction [38]. Research also supports that body-dissatisfied gay men are more likely to engage in health risk behaviors, such as condomless anal sex [39], anabolic-androgenic steroid use [9], and disordered eating [40]. A prior study highlighted that Brazilian gay and bisexual men have a higher prevalence of mental health concerns when compared to Brazilian heterosexual men, leading to greater demand for mental health services in this population [41].
Therefore, the objectives of the current study were: (1) to examine the factor structure of the MDDI and the DMS, using a two-step, split-sample exploratory (EFA) and confirmatory (CFA) factor-analytic approach, in a sample of Brazilian cisgender gay and bisexual adult men; (2) to evaluate the convergent validity of the MDDI and the DMS with ED symptoms, self-objectification beliefs and behaviors, body-ideal internalization, and body appreciation measures; and (3) to estimate the internal consistency and two-week test-retest reliability of the MDDI and the DMS in a sample of Brazilian cisgender gay and bisexual men. It was hypothesized that the DMS [25,26] and the MDDI [20] would replicate the original two- and three-factor structures, respectively (Hypothesis 1). Further, it was expected that the DMS and the MDDI would be positively correlated with measures of ED symptoms, self-objectification beliefs and behaviors, and body-ideal internalization (Hypothesis 2). Conversely, it was anticipated that the DMS and the MDDI would be negatively correlated with body appreciation (Hypothesis 2). Finally, adequate internal consistency and good test-retest reliability were expected for both measures (Hypothesis 3).
Participants and Procedures
As part of a larger study, the present study aimed to evaluate the psychometric properties of body image and MD measures among Brazilian cisgender gay and bisexual adult men. A total of 1418 Brazilian cisgender gay and bisexual adult men participated in the current study. For conducting the EFA and CFA, a 20:1 participant-per-item ratio was used [42]. Furthermore, after a two-week period, a random subset of participants (n = 188) was selected to respond to the DMS and the MDDI again to examine test-retest reliability.
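For readers less familiar with this design, the sketch below illustrates the split-sample and test-retest logic in Python. It is only a minimal illustration, not the authors' analysis code: the file names, column names, and the two-factor EFA are hypothetical, and it assumes the pandas, factor_analyzer, and pingouin packages are available.

```python
import pandas as pd
import pingouin as pg
from factor_analyzer import FactorAnalyzer

# Hypothetical item-level data: one row per participant, columns dms_1..dms_15
df = pd.read_csv("dms_items.csv")

# Randomly split the sample in half: one half for EFA, the other for CFA
efa_half = df.sample(frac=0.5, random_state=42)
cfa_half = df.drop(efa_half.index)  # the CFA half would be fit with an SEM package (e.g., semopy)

# Exploratory factor analysis on the first half; promax allows correlated factors
efa = FactorAnalyzer(n_factors=2, rotation="promax")
efa.fit(efa_half)
print(efa.loadings_)  # inspect which items load on which factor

# Two-week test-retest reliability on the retest subset (long format:
# one row per administration, with columns id, time, and dms_total)
retest = pd.read_csv("dms_retest_long.csv")
icc = pg.intraclass_corr(data=retest, targets="id", raters="time", ratings="dms_total")
print(icc)
```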
A relevant institutional review board was responsible for the ethical approval of the present study (Universidade Federal de Juiz de Fora, Brazil, approval number 4.690.224), and all procedures were in accordance with the principles specified in the Declaration of Helsinki. Potential participants were invited through websites, social networking, and online communities. Including the title of the study, the advertisement invited Brazilian cisgender gay and bisexual adult men to participate in a study on body appearance concerns. The specific inclusion criteria were: (a) being a Brazilian citizen, (b) self-identifying as a cisgender gay or bisexual man, (c) being aged 18-50, and (d) having the ability to read and respond to a questionnaire written in Brazilian Portuguese. An exclusion criterion (i.e., having any medical condition that may directly or indirectly influence one's physical appearance, including rheumatic or autoimmune diseases, cancer, or severe burns) was used to approximate the sampling criteria used in prior Brazilian studies [43].
Data were collected on a cloud-based, web-responsive, secure platform accessible from any smartphone, tablet, or computer (i.e., Google Forms), from August to December 2021. Participants provided digital informed consent and then responded to the research protocol (measures described below). To control for order effects, the scales were counterbalanced. Participants were volunteers and did not receive any benefit for participating in the study. To ensure that participants responded only once to the survey protocol, IP (internet protocol) addresses were checked so that the survey could not be accessed more than once.
Drive for Muscularity Scale (DMS)
The DMS is a 15-item self-report measure that aims to assess muscularity-oriented attitudes and behaviors [25]. The DMS factor structure is composed of two subscales: Muscularity-Oriented Behaviors (MB; 7 items) and Muscularity-Oriented Body Image (MBI; 8 items) [25,26]. The Brazilian version of the DMS [46] showed adequate factor validity, good internal consistency, and satisfactory evidence of convergent and discriminant validity, and was used in the present study. It is noteworthy that the factorial structure of the DMS proposed by Campana et al. [46] is composed of only 12 items and two subscales, MBI and MB. However, in the current study, the 15 Brazilian translated items were used [46]. Each item is scored on a six-point Likert-type scale (1 = always to 6 = never). Subscale scores are derived from the sums of all items that compose them. Scores for each item were reversed to calculate the subscale scores; higher scores reflect greater muscularity-oriented attitudes and behaviors.
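As a concrete illustration of this scoring rule, the following minimal Python sketch reverses and sums item responses. The item-to-subscale groupings below are hypothetical placeholders (the actual item memberships are specified in the original scale documentation [25,26]); only the reverse-and-sum logic reflects the description above.

# Minimal sketch of DMS subscale scoring, assuming responses are stored as a
# dict mapping item number -> raw response on the 1-6 scale described above.
# The item groupings are ILLUSTRATIVE ONLY, not the published assignments.
MBI_ITEMS = [1, 2, 3, 4, 5, 6, 7, 8]    # hypothetical 8-item MBI grouping
MB_ITEMS = [9, 10, 11, 12, 13, 14, 15]  # hypothetical 7-item MB grouping

def reverse_score(raw, scale_max=6):
    # Reverse a Likert response (1 = always ... 6 = never), so that higher
    # scores reflect greater muscularity-oriented attitudes and behaviors.
    return scale_max + 1 - raw

def dms_subscale_scores(responses):
    # Sum the reverse-scored items within each subscale.
    return {
        "MBI": sum(reverse_score(responses[i]) for i in MBI_ITEMS),
        "MB": sum(reverse_score(responses[i]) for i in MB_ITEMS),
    }

# Example: a respondent answering "2" to every item.
print(dms_subscale_scores({i: 2 for i in range(1, 16)}))  # {'MBI': 40, 'MB': 35}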
Muscle Dysmorphic Disorder Inventory (MDDI)
The MDDI is a 13-item self-report instrument designed to assess core MD attitudes and behaviors [20]. The MDDI factor structure is composed of three subscales, in which the Drive for Size (DFS) subscale is represented by five items, and the Functional Impairment (FI) and Appearance Intolerance (AI) subscales are represented by four items each. The Brazilian version of the MDDI [47] upheld the original factor structure and showed good convergent validity, internal consistency, and test-retest reliability, and was applied in the current study.
Each MDDI item is scored using a five-point Likert-type scale (1 = never to 5 = always). The score for each subscale is obtained by summing all the items that compose it. Higher scores indicate greater core MD attitudes and behaviors.
Eating Disorder Examination-Questionnaire (EDE-Q)
The EDE-Q is a 28-item self-reported measure developed to assess ED symptoms [48]. A set of 22 items are rated on a seven-point Likert-type scale (0 = no days/none of the times/not at all to 6 = every day/every time/markedly), and six items assess the frequencies of behaviors (i.e., items 13 to 18) occurring during the past 28 days [48]. The Brazilian Portuguese version of the EDE-Q was used in the present study [49,50]. The global score may be obtained by the average of the items. Higher scores are indicative of greater ED symptoms. In the present study, the EDE-Q demonstrated good internal consistency (McDonald's omega (ω) = 0.92 (95% confidence interval [CI] = 0.91, 0.93)).
Sociocultural Attitudes towards Appearance Questionnaire-4 Revised (SATAQ-4R)
The male version of the SATAQ-4R is a 28-item self-report measure developed to assess appearance-ideal internalization and appearance pressures [51]. The Brazilian SATAQ-4R appearance-ideal internalization subscales (i.e., Internalization: Thin/Low Body Fat (BF), Muscular (MUS), and General Attractiveness (GA)) were used in the present study [52]. The subscale items are scored on a five-point Likert-type scale (1 = definitely disagree to 5 = definitely agree). Each SATAQ-4R appearance-ideal internalization subscale score is obtained by averaging the items that compose it. Higher scores are indicative of greater appearance-ideal internalization. For the GA subscale, items #9 and #14 were scored in reverse. Regarding internal consistency, the SATAQ-4R MUS subscale proved to be adequate (ω = 0.93 (95% CI = 0.92, 0.94)). Inter-item correlations were used to estimate the reliability of the SATAQ-4R BF (r = 0.84) and SATAQ-4R GA (r = 0.72) subscales, given that both have only two items each [53].
Self-Objectification Beliefs and Behaviors Scale (SOBBS)
The SOBBS is a 14-item self-report measure developed to assess self-objectification beliefs and behaviors [54]. The Brazilian SOBBS subscales, Observer's Perspective (OP) and Valuing the body above other attributes and qualities and the body capable of representing itself (VB), were used in the present study [52,54]. The subscale items are scored on a five-point Likert-type scale (1 = strongly disagree to 5 = strongly agree), and each SOBBS subscale score is calculated by averaging the items. Higher scores indicate greater self-objectification beliefs and behaviors [52]. In the current study, the SOBBS subscales showed good internal consistency (SOBBS-OP subscale, ω = 0.91 (95% CI = 0.90, 0.92); SOBBS-VB subscale, ω = 0.85 (95% CI = 0.83, 0.86)).
Body Appreciation Scale-2 (BAS-2)
The BAS-2 is a 10-item self-report instrument designed to assess body appreciation [55]. The items are scored on a five-point Likert-type scale (1 = never to 5 = always). The total score is derived from the average of all items. Higher scores are indicative of greater body appreciation. The Brazilian version of the BAS-2 showed good factor validity (i.e., EFA and CFA), convergent validity, good internal consistency, and two-week test-retest reliability for a one-dimensional structure [15,56]. In the present sample, the BAS-2 showed adequate internal consistency (ω = 0.94 (95% CI = 0.93, 0.95)).
Statistical Analyses

The categorical variables were described by relative and absolute frequencies, and the numerical data were described by means (M) and standard deviations (SD). Univariate (i.e., asymmetry < 3 and kurtosis < 7) and multivariate normality (i.e., Mardia's coefficient < 5) were assessed [42]. To compare the sociodemographic data between the EFA and CFA samples and between the CFA and retest samples, the chi-squared test and Student's t test were used [42].
Factor Structure
EFAs with principal-axis factoring and oblique promax rotation were conducted to explore the factor structure of both the DMS and the MDDI [42]. The Kaiser-Meyer-Olkin measure (KMO > 0.80) and Bartlett's sphericity test (p < 0.05) were used to assess the suitability of the data for factor analysis [42]. To decide on the number of factors to retain in each EFA, we used multiple criteria, including the Kaiser-Guttman criterion (eigenvalue > 1), examination of the scree plot, and parallel analysis [59]. The factor loading (λ) matrix was analyzed to identify the correspondence of the items with their respective factors, with values ≥ 0.40 considered adequate; items loading ≥ 0.32 on more than one factor were considered cross-loading [58].
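For illustration, the following is a minimal Python sketch of this EFA workflow using the third-party factor_analyzer package, with randomly generated placeholder data standing in for the study responses. The settings approximate, but may not exactly reproduce, the procedures reported above: factor_analyzer's "principal" extraction is used as a stand-in for principal-axis factoring, and the parallel analysis is a crude simulation-based version.

import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import calculate_bartlett_sphericity, calculate_kmo

df = pd.DataFrame(np.random.randint(1, 7, size=(300, 15)))  # placeholder item data

chi2, p = calculate_bartlett_sphericity(df)  # suitability: want p < 0.05
_, kmo_total = calculate_kmo(df)             # suitability: want KMO > 0.80

# Crude parallel analysis: retain factors whose observed eigenvalues exceed
# the mean eigenvalues obtained from random data of the same shape.
obs_eig = np.linalg.eigvalsh(df.corr().values)[::-1]
rand_eig = np.mean(
    [np.linalg.eigvalsh(np.corrcoef(np.random.normal(size=df.shape), rowvar=False))[::-1]
     for _ in range(100)], axis=0)
n_factors = max(1, int(np.sum(obs_eig > rand_eig)))

fa = FactorAnalyzer(n_factors=n_factors, rotation="promax", method="principal")
fa.fit(df)
loadings = pd.DataFrame(fa.loadings_, index=df.columns)
print(loadings.round(2))  # flag items with a primary loading < 0.40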
Using data from the second half-sample, CFAs with the weighted least square mean and variance adjusted (WLSMV) estimator were performed to confirm each factor structure previously identified for the DMS and the MDDI. The existence of multivariate outliers was explored by the Mahalanobis squared distance (D²). The models' adequacy [58] was evaluated by the chi-squared test weighted by degrees of freedom (χ²/df < 3), root mean-square error of approximation (RMSEA < 0.08; 90% CI; p > 0.05), comparative fit index (CFI; values close to 0.95), Tucker-Lewis index (TLI; values close to 0.95), and standardized root mean-square residual (SRMR < 0.08). Model adjustment was performed using Lagrange multipliers when the score was greater than 11 [59].
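To make these cutoffs concrete, the helper below computes the main indices from model and baseline (null-model) chi-square statistics using the standard maximum-likelihood formulas. This is for illustration only: WLSMV estimation applies mean-and-variance corrections to these quantities, and all input values in the example are invented.

import math

def fit_indices(chi2_m, df_m, chi2_b, df_b, n):
    # chi2_m/df_m: tested model; chi2_b/df_b: baseline (null) model; n: sample size.
    rmsea = math.sqrt(max(chi2_m - df_m, 0) / (df_m * (n - 1)))
    cfi = 1 - max(chi2_m - df_m, 0) / max(chi2_b - df_b, chi2_m - df_m, 1e-12)
    tli = ((chi2_b / df_b) - (chi2_m / df_m)) / ((chi2_b / df_b) - 1)
    return {"chi2/df": chi2_m / df_m, "RMSEA": rmsea, "CFI": cfi, "TLI": tli}

# Invented example values: chi2/df < 3, RMSEA < 0.08, and CFI/TLI near 0.95.
print(fit_indices(chi2_m=180.0, df_m=62, chi2_b=4000.0, df_b=78, n=709))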
Internal Consistency and Test-Retest Reliability
To estimate the internal consistency of the measures, McDonald's omega (ω) was used, with values of 0.70 or higher considered to indicate acceptable internal consistency [61]. Using a two-week interval, test-retest reliability was evaluated through Spearman's rho and the intraclass correlation coefficient (ICC) [59]. Correlations of 0.10-0.29, 0.30-0.49, and above 0.50 were considered small, moderate, and large, respectively [60]. Following the cut-offs of Koo and Li [62], ICC values greater than 0.90 were considered to show excellent reliability, between 0.75 and 0.90 good, between 0.50 and 0.75 moderate, and less than 0.50 poor.
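As a minimal sketch of these reliability estimates (with invented values, not the study's data), McDonald's omega can be computed from standardized loadings under a single-factor model with uncorrelated errors, and test-retest association from paired subscale scores.

import numpy as np
from scipy.stats import spearmanr

def mcdonald_omega(loadings):
    # omega = (sum(lambda))^2 / ((sum(lambda))^2 + sum(1 - lambda^2)),
    # assuming standardized loadings and uncorrelated errors.
    s = loadings.sum() ** 2
    return s / (s + np.sum(1 - loadings ** 2))

print(mcdonald_omega(np.array([0.70, 0.65, 0.72, 0.60, 0.68])))  # ~0.80

# Two-week test-retest: Spearman's rho on paired scores (invented data).
test = np.array([18, 25, 30, 12, 22, 27])
retest = np.array([19, 24, 31, 14, 21, 28])
rho, p = spearmanr(test, retest)
print(rho)  # values >= 0.50 would be interpreted as large per the text

# The ICC could be obtained with, e.g., pingouin.intraclass_corr on
# long-format data (one row per participant x session).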
Descriptive Statistics
Demographic data regarding age, BMI, race/ethnicity, and sexual orientation for the two half-samples (i.e., EFA and CFA) can be seen in Table 1. Regarding demographic data, no statistically significant differences were found between the EFA and CFA samples (ps > 0.05). Participants (n = 188; 159 gay and 29 bisexual men) who responded to the two-week retest for the DMS and MDDI (i.e., retest sample; M age = 27.06, SD = 5.60) self-identified as White (60.63%), Brown (26.06%), Black (11.70%), and Other ethnic origins (1.61%). The BMI of the participants from the retest ranged from 17.49 to 45.84 kg/m² (M = 25.97, SD = 5.70). No statistically significant differences were found between the CFA and retest samples (ps > 0.06).

Note. EFA = exploratory factor analysis; CFA = confirmatory factor analysis; BMI = body mass index. The categories suggested by the Brazilian Institute of Geography and Statistics [45] were used to classify race/ethnicity. Namely, these were: White (branco), Brown (pardo or "Mixed"), Black (preto), Yellow (amarelo), Indigenous (indígena), and "Other" (outro). a Results expressed as mean and standard deviation, M (SD). b Results expressed as absolute and relative frequency. c Test result for numerical data (Student's t-test) or categorical data (chi-squared test).
Factor Structure
Regarding the DMS, the KMO (0.89) and the significance of Bartlett's sphericity test (χ²(105) = 6440.736, p = 0.001) showed adequate common variance to conduct the factor analysis. All retention criteria (i.e., eigenvalue, scree plot, and parallel analysis) showed that the two-factor structure was the most appropriate. Inspection of the factor loadings showed that item #12 (i.e., "I think that my weight-training schedule interferes with other aspects of my life") did not load adequately on its respective factor (λ = 0.29). Additionally, item #9 (i.e., "I think that I would look better if I gained 10 pounds in bulk") showed cross-loading. After excluding both items, the DMS presented an adequate two-factor structure (KMO = 0.89; Bartlett's sphericity test, χ²(78) = 5919.507, p = 0.001). The factor loadings, eigenvalues, and the total variance explained are described in Table 2.
Regarding the MDDI, the KMO (0.79) and the significance of Bartlett's sphericity test (χ²(78) = 4020.732, p = 0.001) indicated good suitability for factor analysis. All retention criteria demonstrated that a three-factor structure was the best fit to the data, replicating the original structure [20]. The factor loadings, eigenvalues, and the total variance explained are described in Table 3. The CFAs confirmed the structures identified in the EFAs; standardized factor loadings and residuals are presented in the Supplementary Materials (Figures S1 and S2). Furthermore, it was decided not to use the modification indices.
Convergent Validity
The convergent validity analyses of the DMS and the MDDI were performed with the CFA sample (Table 4). The DMS MB subscale exhibited positive and large correlations with the MDDI FI and SATAQ-4R MUS subscales, and a positive and moderate association with the MDDI DFS subscale. Positive and small associations were also found between the DMS MB subscale and the SATAQ-4R GA, SOBBS OP, and SOBBS VB subscales. Finally, a positive and small association was found between the DMS MB subscale and the EDE-Q.
In turn, the MDDI DFS subscale showed a positive and large association with the SATAQ-4R MUS subscale. Furthermore, a positive and moderate association was found between the MDDI DFS subscale and the SOBBS OP subscale, and positive and small associations with the SOBBS VB and the SATAQ-4R GA subscales. Finally, the MDDI DFS subscale showed a moderate negative association with the BAS-2.
The MDDI AI subscale showed large positive associations with the EDE-Q and the SOBBS OP subscale. Furthermore, moderate positive associations were found between AI and the SATAQ-4R BF and the SOBBS VB subscales, and small positive associations were found with the SATAQ-4R GA and the SATAQ-4R MUS subscales. The MDDI AI subscale showed a negative and large association with the BAS-2.
The MDDI FI subscale showed positive and moderate associations with the SATAQ-4R MUS and the SOBBS OP subscales, and the EDE-Q. Positive and small associations were found between the MDDI FI subscale and the SOBBS VB and the SATAQ-4R GA subscales. Finally, the MDDI FI showed a negative and small association with the BAS-2.
Internal Consistency and Test-Retest Reliability
The DMS and the MDDI demonstrated adequate internal consistency for the total and subscale scores, and good test-retest reliability, showing strong association between the test and retest scores. These results are presented in Table 5.
Discussion
Cisgender gay and bisexual men have been mostly neglected in the MD literature, despite evidence of greater levels of body dissatisfaction and drive for muscularity [8,31]. Perhaps this is in part due to uncertainty regarding the applicability and utility of existing measures cross-culturally [8,31]. Thus, in this study we evaluated the psychometric properties of the DMS and the MDDI among Brazilian cisgender gay and bisexual adult men. Results from the EFAs and CFAs confirmed the original three-factor solution for the MDDI [20] and supported a reduced, 13-item two-factor solution (excluding items #9 and #12) for the DMS [26]. Additionally, the DMS and the MDDI showed evidence of convergent validity, as well as good internal consistency and two-week test-retest reliability.
As previously hypothesized, our results supported the original three-factor solution of the MDDI (i.e., DFS, AI, and FI; [20]) with all 13 items. This structure has been shown to be stable across many countries and populations. For instance, it was replicated in a previous validation study with physically active Brazilian college men [47], as well as in three studies with sexual minorities living in the U.S. or its territories: cisgender gay men [8], gender-expansive individuals [7], and transgender men [9]. Interestingly, our results confirm previous data from MDDI validation studies, which suggests stability of the factorial structure of the MDDI regardless of gender identity and sexual orientation.
Results also confirm the original factor structure of the DMS [26], with the exclusion of items #9 and #12 in the EFA, resulting in a reduced two-factor structure with 13 items, which was then confirmed through CFA. Previous validation studies of the DMS among sexual-minority men have confirmed the two-factor structure with 14 items [10,29,30]. However, previous validation studies with men regardless of sexual orientation have failed to confirm the original factor structure of the DMS with all 14 items [46,64,65]. For example, the Canadian version of the DMS was tested in recreational weightlifters and non-weightlifters and resulted in a reduced version after excluding items #10 and #15 [65]. Keum et al. [64] found support for a reduced version of the DMS for Asian American men after excluding items #4, #5, and #10. Specifically, a previous validation study with Brazilian men [46] resulted in a reduced version after excluding items #7 (i.e., "I feel like I have too much body fat"), #9 (i.e., "I think that I would look better if I gained 10 pounds in bulk"), and #10 (i.e., "I think about taking anabolic steroids"). Our results indicated that item #9 may be particularly problematic in the Brazilian context. However, the reasons for the different findings across cultures are not clear. Perhaps some items of the DMS function differently for Brazilian men regardless of sexual orientation [46] and for Brazilian gay and bisexual men. Future studies may benefit from analyses of measurement invariance of the DMS between heterosexual and sexual-minority men.
Regarding item #10 (i.e., referring to the use of anabolic steroids), the original DMS study reported that the item did not load significantly on any of the lower-order factors [26]. McCreary et al. [26] suggested that item #10 can be included in or deleted from the DMS at the researchers' discretion; however, when included, it should not be considered in the calculation of the overall DMS score. For example, in the study of McPherson et al. [27], item #10 loaded onto the DMS MB subscale. All available validation studies of the DMS among sexual-minority men have omitted item #10 [10,29,30]. In particular, the Brazilian version of the DMS by Campana et al. [46] omitted item #10 because of its high residuals. Deblaere and Brewster [29] suggested that investigators allow the scope and intention of their research questions to guide their decision regarding the retention or exclusion of item #10, and that future studies should evaluate the validity of all 15 items of the DMS with sexual-minority men. In the present study, we chose to evaluate all 15 items of the DMS. We kept item #10 in the EFA due to its satisfactory factor loading (λ = 0.43) on its respective MB factor, and the CFA confirmed that item #10 should be retained.
Confirming our second hypothesis, the MDDI and DMS demonstrated good convergent validity, as evidenced through associations with ED symptoms, self-objectification beliefs and behaviors, body-ideal internalization, and body appreciation. Eik-Nes et al. [11] found an association between drive for muscularity and ED symptoms in a sample of 2460 gay, bisexual, and heterosexual men. As in our findings, Martins et al. [66] found a positive correlation between the drive for muscularity and self-objectification in gay and heterosexual men. Nerini et al. [10] validated the DMS for Italian heterosexual and gay men, finding convergent validity of the DMS with a measure of body-ideal internalization. Moreover, Alleva et al. [14] found small, negative correlations between body appreciation and drive for muscularity among gay and heterosexual men.
Regarding MD symptoms, the MDDI scores exhibited positive and small to strong associations with ED symptoms, self-objectification beliefs and behaviors, and body-ideal internalization. Moreover, the MDDI scores demonstrated a moderate, negative association with body appreciation. Our results are consistent with a recent meta-analysis that showed that MD and ED symptoms are positively associated (r = 0.36; 95% CI = 0.30, 0.41) [6]. Prior studies have also confirmed that MD symptoms are associated with self-objectification [39]. Furthermore, Klimek et al. [67] found a positive association between body-ideal internalization and MD symptoms in gay, bisexual, and straight men. Taken together, these results support the convergent validity of the DMS and the MDDI among Brazilian cisgender gay and bisexual men.
Both the DMS and the MDDI demonstrated good internal consistency and adequate two-week test-retest reliability, confirming our third hypothesis. The internal consistency of the MDDI scores found in the present study is consistent with findings from Compte et al. [8], who evaluated gay men living in the U.S. or its territories (ω ranging from 0.77 to 0.89). Although evaluating the temporal stability of a new measure is an important and indispensable step [58], none of the previous validation studies of the MDDI conducted with sexual-minority men [7][8][9] had examined temporal stability. Results from the present study show that MDDI scores demonstrated good test-retest reliability, supporting the use of the MDDI across time to assess MD symptoms in Brazilian cisgender gay and bisexual adult men.
The internal consistency of the DMS scores was good in the present study, with values similar to those found in previous studies with sexual-minority men from the U.S. [29,30]. It is worth mentioning that none of the previous validation studies of the DMS with sexual-minority men [10,29,30] evaluated the test-retest reliability of the measure. Thus, the current study provides new data on the temporal stability of the DMS among Brazilian cisgender gay and bisexual men.
Strengths of the present study include: (a) the focus on understudied and underrecognized populations in the MD literature; (b) recruitment of a large sample of gay and bisexual men; and (c) use of best practices in the translation and validation of body-image instruments [58]. Furthermore, to the best of our knowledge, this is the first study to assess the temporal stability of the MDDI and DMS in a sample of gay and bisexual adult men. However, certain limitations should also be noted. First, due to the non-probabilistic sample, the results may not be generalizable to all Brazilian cisgender gay and bisexual men. Second, assessments were limited to self-report, which may be subject to social desirability bias. Finally, sample recruitment was carried out through social networks (i.e., Facebook®, Twitter®, and LinkedIn®), which may result in an overrepresentation of certain groups in the sample, possibly limiting generalizability.
Conclusions
Taken together, results of the present study provide support for the MDDI and the DMS as appropriate measures to assess muscularity concerns in Brazilian cisgender gay and bisexual adult men. These findings provide a foundation for expanding the drive for muscularity and muscle dysmorphia literature to more diverse populations. Future studies are needed to yield further psychometric validation of the MDDI and the DMS among gender minorities (i.e., transgender and non-binary individuals, among others) and people from diverse racial or ethnic backgrounds (e.g., Black and other ethnic origins) in Latin America.
Supplementary Materials:
The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/ijerph20020989/s1. Figure S1: Confirmatory factor analysis results (standardized factor loadings and residuals) of the Muscle Dysmorphic Disorder Inventory (MDDI) for Brazilian cisgender gay and bisexual adult men. Figure S2: Confirmatory factor analysis results (standardized factor loadings and residuals) of the Drive for Muscularity Scale (DMS) for Brazilian cisgender gay and bisexual adult men.

Informed Consent Statement:
Informed consent was obtained from all subjects involved in the study.
Data Availability Statement:
The data presented in this study are available on request from the corresponding author. The data are not publicly available due to not obtaining consent from respondents to publish the data.
Predicting Human Computation Game Scores with Player Rating Systems
Human computation games aim to apply human skill toward real-world problems through gameplay. Such games may suffer from poor retention, potentially due to the constraints that using pre-existing problems place on game design. Previous work has proposed using player rating systems and matchmaking to balance the difficulty of human computation games, and explored the use of rating systems to predict the outcomes of player attempts at levels. However, these predictions were win/loss, which required setting a score threshold to determine if a player won or lost. This may be undesirable in human computation games, where what scores are possible may be unknown. In this work, we examined the use of rating systems for predicting scores, rather than win/loss, of player attempts at levels. We found that, except in cases with a narrow range of scores and little prior information on player performance, Glicko-2 performs favorably to alternative methods.
Introduction and Background
Human computation games (HCGs) have been shown to provide a unique lens into solving problems that are computationally hard or ill-defined [12,13]. Some notable examples include The ESP Game, which asks users to complete relatively simple image recognition tasks [1], and Foldit, which involves relatively complex protein folding problems [4].
One potential upside to leveraging a gaming environment when utilizing human intelligence is the potential to harness the motivational power of games. However, even with exemplar cases such as Foldit, human computation games generally have issues engaging and retaining players. Engagement is widely considered a foundational element in a good game. Additionally, the level of engagement experienced by the player can influence how motivated they are to play. The prime factor of engagement is the construct of flow [5], which embodies a range of subjective experiences, but most notably "is the idea that there should be an optimal match between the skills an individual possesses and the challenges presented by an activity" [2, p. 2]. Furthermore, HCGs have several design constraints which limit the extent to which the core task of the game can be edited or modified. Knowing the difficulty of each task within the game beforehand may not be possible, as determining the difficulty of each task by hand would circumvent the need to crowdsource the solution. It has been suggested in Cooper et al. [3] that dynamic difficulty adjustment through task ordering may be a logical solution, and that this could be accomplished through the use of player rating systems and matchmaking. They applied player rating systems to an HCG when examining the effect of the bipartiteness of the graph of matches on the prediction accuracy of player attempts at levels. To accomplish this, they put in place a somewhat ad-hoc threshold as a "target score", where going beyond the target score counted as a win and failing to do so counted as a loss.
Player rating systems were designed with the intent to give players more fair matches. Several rating systems exist, but the most noteworthy examples include Elo, Glicko-2, and TrueSkill. Elo [7] is a system created by Arpad Elo to rate the relative skill of chess players. His system revolves around a few key assumptions: mainly, that a player's performance in each match is a normally distributed random variable, and that the outcome of a match is the result of a pairwise comparison. Glickman developed the Glicko [9] and Glicko-2 [8] systems, which built upon this model by incorporating additional parameters, notably a rating deviation parameter and a volatility parameter, which capture the expected rating reliability and fluctuation of a given player. TrueSkill [10] is a rating system developed by Microsoft Research for the purposes of multi-player rating and matchmaking, encompassing both individuals and teams. This is important for their uses and an interesting development because it allows the use of virtually any match configuration (for example, team versus team or free-for-all).
In this work, we explore generalizing the use of player rating systems to predict outcomes of HCGs from win/loss to continuous scores. As a case study, we used player data from the HCG Paradox. Ultimately, when attempting to predict scores, we found that the Glicko-2 player rating system usually outperforms Elo and our baseline measure.
Data Collection
For this work, we collected data from the puzzle game Paradox [6], an HCG that draws on the maximum satisfiability problem (MAX-SAT) to create levels for the players to solve. The game, initially designed to crowdsource formal verification of software, provides various "brushes" for the player to use. These brushes are essentially player-guided algorithms to help solve the problems. A player's score is represented as a percentage of satisfied clauses (0%-100%), and the player is given a target score within that range. If a player can complete a level, they have contributed a solution to the underlying MAX-SAT problem. A screenshot of the version of Paradox used is given in Figure 1. Players were recruited to play Paradox through Amazon Mechanical Turk (MTurk), where we posted a Human Intelligence Task (HIT). We recruited 50 players, who were paid $1.50 when they completed the HIT. Upon accepting the HIT, players were given brief instructions about the HIT and game. They then had to complete 9 short tutorial levels meant to introduce gameplay. Data from tutorial levels were not used in our analysis. Players then proceeded to the challenge levels. We selected 33 challenge levels, each of which was either derived from SATLIB Benchmark Problems or randomly generated. These levels were served to the players in random order. Players would not see the same level a second time until they had seen each level at least once. For the challenge levels, players were given a target score of 100% (which is not always necessarily possible). Players were able to skip challenge levels without completing them, and upon skipping 3 levels they could then also exit to complete the HIT. We excluded data from one participant who merely skipped 67 matches without attempting any of them. This brought the participant count to 49 and the total number of matches played to 221.
Rating System Implementation
Our goal was to compare the error of different rating systems when predicting scores achieved by players attempting levels in Paradox. Since these systems are conventionally used for player-versus-player games, we had to treat both the players and levels as "players" in the rating system. A match is between a player and a level; players cannot play other players and levels cannot play other levels. The data extracted from the MTurk HIT were played back into the rating systems. We used our own implementation of Elo with a K factor of 24 and the pyglicko2 [11] implementation of Glicko-2. To predict score outcomes using the rating systems, we used the expected score of a match based on the ratings of the player and level in the match. For a baseline comparison, we used a simple system that used the average score of all preceding matches to predict the outcome of a match.
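The expected-score computation is the standard Elo formula; the sketch below shows it with K = 24, treating players and levels as entries in one rating table. The initial rating of 1500 is a common Elo convention that we assume here (the paper does not state its initialization), and feeding the continuous score directly into the update is one natural reading of the setup described above.

K = 24
ratings = {}  # player ids and level ids share one rating table

def expected_score(r_player, r_level):
    # Standard Elo expectation, a value in (0, 1) used as the score prediction.
    return 1.0 / (1.0 + 10 ** ((r_level - r_player) / 400.0))

def play_match(player, level, score):
    # Predict, then update both ratings from the observed score in [0, 1].
    rp = ratings.setdefault(player, 1500.0)
    rl = ratings.setdefault(level, 1500.0)
    pred = expected_score(rp, rl)
    ratings[player] = rp + K * (score - pred)
    ratings[level] = rl + K * ((1.0 - score) - (1.0 - pred))
    return pred

print(play_match("player1", "level7", score=0.88))  # 0.5 on a first meeting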
To measure the prediction error of each approach, we set up the playback simulation to predict match outcomes before playing them back on matches where both the player and level had been in at least some minimum number of matches M. This allowed us to examine the impact of the minimum number of matches played on the performance of the rating systems relative to the baseline. It also let us determine how many matches a player and level need to play before the rating system starts to outperform the baseline. We used M = 0, 3, and 6.
If M = 0, for example, we predicted the outcome of all matches, and if M = 3, we only predicted for matches where the player and level had been in at least 3 previous matches. The simulation is constructed this way because this is the desired use case for a rating system implemented in an HCG. The specific order of the matches played influences the early state of play for both players and levels.
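The loop below sketches this playback evaluation under stated assumptions: matches is a chronologically ordered list of (player, level, score) tuples (a structure we introduce for illustration), predictions are made only when both sides have at least M prior matches, every match updates the state, and error is accumulated as RMSD. The running-average baseline is included as an example predictor; the Elo functions above could be plugged in the same way.

import math
from collections import defaultdict

def playback_rmsd(matches, predict, update, M=3):
    # Return RMSD over the predicted matches for a given rating approach.
    n_played = defaultdict(int)
    sq_err, n_pred = 0.0, 0
    for player, level, score in matches:
        if n_played[player] >= M and n_played[level] >= M:
            sq_err += (predict(player, level) - score) ** 2
            n_pred += 1
        update(player, level, score)  # every match updates the state
        n_played[player] += 1
        n_played[level] += 1
    return math.sqrt(sq_err / n_pred) if n_pred else float("nan")

class Baseline:
    # Predict with the running mean score of all preceding matches
    # (0.5 is an arbitrary assumed default before any match is seen).
    def __init__(self):
        self.total, self.count = 0.0, 0
    def predict(self, player, level):
        return self.total / self.count if self.count else 0.5
    def update(self, player, level, score):
        self.total += score
        self.count += 1

b = Baseline()
# print(playback_rmsd(matches, b.predict, b.update, M=3))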
Results
Paradox scores can range from 0%-100% but were scaled to a range of 0.0-1.0 for use within the player rating systems. We examined scaling with absolute score (linearly mapping 0% to 0.0 and 100% to 1.0, which is the score shown to the player in game) and relative score (linearly mapping each level's starting % to 0.0 and 100% to 1.0, capturing player improvement over the starting score). The minimum, mean, and maximum absolute scores observed were 0.52, 0.88, and 1.0, respectively, and the minimum, mean, and maximum relative scores were 0.0, 0.53, and 1.0, respectively. The error between observed and predicted values was computed using root mean squared difference (RMSD). Generally speaking, RMSD is a good measure of accuracy, specifically for comparing models measuring the same variable, as the scales need to be the same. Results of our predictions are shown in Table 1. Glicko-2 performs the best in every case except for absolute score with M = 0, and improves error over our baseline predictions by up to 32%. As the absolute scores cover a smaller range of possible scores, it is unsurprising that for absolute score predictions the RMSDs are in general lower, and the baseline average score predictions are more accurate.
Conclusion
The fact that Glicko-2 outperforms our baseline measure as the system is fed more information about player performance (as M increases) suggests that utilizing a player rating system as the basis for a dynamic difficulty adjustment tool could work for HCGs. This is because both 3 and 6 minimum matches played seem like a reasonable requisite number of matches for HCGs before beginning to make predictions. This is especially true if, in the long run, a system such as Glicko-2 improves player retention. In this sense, the system works to improve player retention but also does its job better the longer players are retained.
Although we found that player rating systems reduced prediction error relative to the baseline, it remains to be determined whether the accuracy achieved is practically useful. Additionally, the impact of using a matchmaking system based on player rating system score predictions remains to be explored.
Utilizing continuous data as opposed to win/loss unlocks a lot of potential when serving levels to players in HCGs. The surface-level improvement is that there is no longer a need to implement a fixed "target score" to allow the rating system to function. Additionally, the precision of predicting and utilizing the score allows for a better determination of what levels are appropriate for which players. For example, if previously the target score was set at a threshold of 80%, we can now appropriately differentiate between players who barely beat that target score (i.e., 82%) and players who did far better than the target score (i.e., 98%). Additionally, this may allow for more fine-tuned matchmaking, where each potential player-level match combination has an individualized expected score, and the system recognizes a player who can potentially achieve a new record score on a given level, a very useful feature for HCGs seeking to find new solutions to problems.
Fig. 1. A screenshot of the game Paradox used in this work.

Table 1. RMSD error and percentage improvement over baseline. The lowest error in each row is shown in bold.
Destruction Law of Borehole Surrounding Rock of Granite under Thermo-Hydro-Mechanical Coupling
The destruction of the rock that surrounds boreholes under thermo-hydro-mechanical coupling is an important factor for borehole stability in hot dry rock (HDR) geothermal energy extraction. Failure experiments for granite under triaxial stress (σ1 > σ2 > σ3) were conducted as 500°C superheated steam was transported through the borehole. High-temperature steam leads to large thermal cracks in the surrounding rock, which are randomly distributed around the borehole and gradually expand outwards. The randomly distributed thermally induced microcracks increase the complexity of the initial fracture morphology around the borehole and contribute to the appearance of multiple branch fractures. Fracture development is negligibly affected by ground stresses during the initial stages. However, fractures are deflected towards the maximum horizontal principal stress under ground stresses during later periods. During fracture propagation, high-temperature steam more easily penetrates the rock because its viscosity is lower than water. Towards the end of the crack expansion, the steam loses heat and liquefies, which increases the elongation resistance, and results in the arrest and intermittent expansion of the cracks.
Introduction
The geothermal energy of hot dry rock (HDR) is an undeveloped, safe, and renewable energy that is abundant all around the world [1,2]. China has a large number of geothermal resources, with developmental potential, which are distributed primarily in Tengchong in the Yunnan province, Qiongbei in the Hainan province, Changbaishan in the Jilin province, Wudalianchi in the Heilongjiang province, and Yangbajing in Tibet [3][4][5].
From 1973 to 1982, a group of scientists from the USA, Japan, and Germany carried out successful HDR geothermal resource extraction tests at Fenton Hill, New Mexico [1,2,6,7]. The USA built the first HDR geothermal power plant in 1981. In the following decades, France, Germany, Japan, and other countries conducted experiments to extract HDR geothermal resources, and a series of research achievements were obtained [1,2,[8][9][10][11][12][13].
The geothermal energy resources of HDR are mainly stored in granite strata. Injection and production wells are needed in the granite to develop geothermal energy, where artificial reservoirs are built using hydrofracturing [14]. High temperatures trigger thermal cracking in the rock and induce changes in its internal structure. The physical and mechanical properties of rocks near wellbores change with higher temperatures, for example, decreased elastic modulus, a drop in mechanical strength, and increased permeability [15][16][17][18]. In addition, thermal stresses accelerate the crushing of the rock that surrounds the wellbore, which significantly reduces the drilling stability [19]. The failure, deformation of the neck, instability, and collapse of the surrounding rock cause a substantial increase in the drilling and maintenance cost of wellbores. Most enhanced geothermal system (EGS) wells are vertical and located in a normal or strike-slip faulting regime. Under these conditions, hydraulic fractures would be expected to form axially along the wellbore. However, wellbore observations from EGS projects indicate that flows have typically been localized at discrete zones along the wellbore and correlate with the locations of preexisting fractures. This may be caused by mechanical instabilities in the rock matrix during stimulation [13]. Therefore, the destruction and stability of the granite that surrounds boreholes under thermo-hydro-mechanical coupling are essential factors that restrict the efficient extraction of HDR geothermal energy.
Borehole damage is caused by stress concentration at the cross-section of the wellbore [20]. At room temperature, rock cut from its original location during the drilling process leads to a loss of support in the rock around the borehole. Intragranular and transgranular microcrack expansions are generated in the granite around boreholes, which can cause the exfoliation of local minerals and lead to borehole failure [20]. Under sustained loads such as hydraulic pressure and in-situ stress, time-dependent (rheological) fracturing of rock cracks can occur, which reduces the stability of the rock mass surrounding the borehole [21]. In high-temperature drilling processes, mud circulation is used to cool the bit. During the exploitation of HDR geothermal energy, water is injected into the reservoir for heat exchange, which rapidly cools the hot rock mass close to the injection wells while the peripheral rock mass undergoes slow cooling. The sudden decrease of temperature around the wellbore, i.e., thermal shock, causes thermal cracking. Normally, the degradation of the rock mechanical properties under thermal shock is greater than that under slow cooling [22]. Hence, during the extraction of HDR geothermal energy, boreholes are under conditions of high temperature and pressure, which increases the complexity of the destruction law of the granite surrounding the borehole.
Zhao [23] simulated the drilling process at different temperatures at a 4000 m depth (100 MPa) in his doctoral thesis. He found that there were no obvious cracks around the borehole at 150°C. When the temperature increased to 300°C, several radial cracks appeared around the borehole and rock particles on the inner wall of the borehole fell off. At 500°C, the radial crack width around the borehole increased significantly, and a severe fracture zone appeared in the borehole. Zhao et al. [24] performed stability experiments of granite drilling under high temperature and pressure. They found a large number of cracks around the drill hole during the experiments and presented deformation laws of the rock and boreholes as functions of temperature and pressure while noting that the critical conditions for drilling instability are 500°C and 150 MPa of hydrostatic pressure. Kumari et al. [25] studied the effect of reservoir depth, temperature, and sample heterogeneity during hydraulic fracturing and the influences of rock microstructure on fracture propagation. Zhou et al. [26] conducted a hydraulic fracturing experiment under high temperature and pressure with large samples and found that the mechanism for decreased crack initiation pressure is the thermal shock generated by the action of fracturing fluid of rock at high temperature. Brudy and Zoback [27] studied the failure of boreholes in two geothermal projects in Kontinentales Tiefbohrprogramm der Bundesrepublik (KTB) in Germany and Soultz in France. The research indicated that cold mud circulation during the drilling pro-cess leads to a cooling contraction of the rock surrounding the borehole, and the resulting thermal stresses induce tensile cracks. Cornet et al. [28] conducted large-scale water injection experiments in a geothermal reservoir at a 5000 m depth in Soultz. The test results and borehole images showed that when the pore pressure increases by more than 10% of the minimum principal stress, massive shear failure can occur in the injection wells and reservoirs.
The above research describes drilling failure during the development of typical HDR geothermal resources around the world and introduces the results of drilling, hydraulic fracturing, and borehole stability under triaxial stresses (σ1 > σ2 = σ3) in the laboratory. However, horizontal ground stress inequalities (σ2 ≠ σ3) have a significant influence on borehole failure for high-temperature rock masses. Owing to limitations in experimental equipment, laboratory studies on borehole failure under high temperatures and true triaxial stresses (σ1 > σ2 > σ3) have rarely been reported.
This work used the "true triaxial heat injection rock mechanics experimental system" developed by the Taiyuan University of Technology to perform destruction tests on a borehole under true triaxial stress (σ1 = 4.5 MPa, σ2 = 1.0 MPa, and σ3 = 0.5 MPa) while superheated 500°C steam was injected. The failure law and drilling morphology under the effect of solid-flow-heat coupling were analyzed to further study the failure mechanism of production wells for HDR geothermal energy extraction. The study showed how thermal cracks and fractures initiate and propagate in the rock surrounding a borehole under thermo-hydro-mechanical coupling. The results of this work can be applied to study hydraulic fracture propagation in geothermal reservoirs, and they are useful for assessing the stability of the rock surrounding boreholes in HDR geothermal resource extraction.
Materials and Methods
2.1. Experimental Equipment. The true triaxial heat injection rock mechanics experimental system (shown in Figure 1) consists of a high-temperature steam generation system, true triaxial pressure equipment, and a testing system. The high-temperature superheated steam is produced from a gas boiler, where the maximum operating temperature and the highest steam pressure are 600°C and 3 MPa, respectively. The true triaxial pressure equipment has a maximum axial load of 100 tons and a maximum lateral load of 60 tons. An acoustic emission (AE) measurement system (DISP-PCI2, Physical Acoustics Corporation (PAC)) was used to record all emissions generated during the destruction tests for the granite samples. A dial gauge was used to test the deformation of the sample.
2.2. Samples.
The samples were made of Shandong grey granite from Pingyi, Shandong province, China, and were in a natural water state with a density of 2.71 g/cm³, sized at 300 × 300 × 300 mm. A hole (diameter φ = 20 mm, height h = 180 mm) was drilled at the center of each sample, as shown in Figure 2 and Table 1.
2.3. Experiment Procedures.
Zhao et al. [16,24] reported that the critical conditions for drilling instability are 500°C and 150 MPa of hydrostatic pressure. Thermal cracking mainly occurs at grain boundaries as intergranular microcracks, along with apparent weaknesses that develop with rising temperatures. Intragranular cracks are observed when heating to 500°C, indicating that thermal cracking in granite under high temperature and pressure is induced by both intragranular and intergranular thermal stresses. Therefore, the superheated steam injected into the sample for the experiment was at 500°C. The specific loading methods are shown in Figure 3. Sensors that monitor the steam temperature and pressure were placed at the inlet. AE technology was employed to monitor thermal cracking, with four AE sensors placed on the four sides of the samples (Figure 3). Four holes were machined in the four-side pressure platen to ensure the AE sensors were in direct contact with the rock sample. Environmental noise (generated by the operation of the gas boiler and pressure equipment) was tested before the experiment. The AE-measured amplitude of the noise was between 20 and 40 dB. Therefore, the threshold value of the AE was set to 45 dB, the preamplifier gain was set to 40 dB, and the sampling rate was 5 MSPS (mega samples per second, 5 MSPS = 5 MHz).
Superheated steam at 500°C and 2.8 MPa was injected into and flowed through the borehole. To ensure constant steam pressure, a counterbalance valve was assembled at the end of the outlet. The superheated steam was first injected into the borehole via the inlet and then flowed directly into the bottom of the borehole via a steel tube to ensure there was no water in the borehole. The superheated steam flowed out of the borehole via the outlet after heating the surrounding rock, as shown in Figure 3.
An asbestos board and a high-temperature-resistant adhesive were used to seal the injection well during the experiments. The asbestos board was fixed on the samples with a high-temperature-resistant adhesive (see Figure 2) to ensure there was no steam leakage between the sample and the asbestos board. Several grooves were machined on top of the pressure platen (see Figure 1(c)) to ensure there was no steam leakage there either.

Figure 4 shows the experimental pressure-time curve. The experiment lasted 210 min. The superheated steam at 500°C was first introduced into the borehole while the pressure was increased at a slow rate of 0.08 MPa/min. The pressure reached its nominal value of 2.8 MPa after 36 min. During this process, the sample did not undergo macroscopic damage. Subsequently, the sample was macroscopically destroyed after approximately 200 min at a constant steam pressure of 2.8 MPa, at which point steam leaked from the side of the sample.

Figure 5 demonstrates the spatial pattern of the fracture propagation in the sample. Several fractures appeared during the fracture initiation period; these were all vertical fractures. Three of them penetrated the sample, and one changed direction and stopped after extending for a distance. Wing-A was initiated along the direction of the minimum horizontal principal stress. During the expansion process, Wing-A gradually deflected towards the maximum horizontal principal stress. The fractures then propagated to the side and bottom of the sample (Figure 5). Wing-B was initiated at 30 degrees from the maximum horizontal principal stress and gradually turned towards the maximum horizontal principal stress before going through the sample. Wing-C was initiated and extended along the maximum horizontal principal stress direction and also went through the sample. Wing-B and Wing-C only penetrated the side of the sample, and there was no fracture at the bottom. The expansion of the Wing-A, Wing-B, and Wing-C fractures along the height direction (300 mm) was much larger than the depth of the hole (180 mm). Wing-D stopped expanding relatively quickly after fracture initiation. Wing-C is a typical tensile crack, while the appearances of Wing-A, Wing-B, and Wing-D are the results of the joint action of shear and tension.
Sample Destruction.
There were several bifurcations in the fractures around the wellbore, especially for Fracture A (see Figure 5), which extends along the direction of the minimum horizontal principal stress. Figure 6 shows magnified views of the bifurcations, which are located on the side of the sample, as seen in Figure 5. Tomac and Gutierrez [29] have shown that when brittle rock is in a coupled solid-fluid-thermal field and convective heat transfer is considered, secondary cracks occur along the main fracture, and a low-viscosity fracturing fluid is more likely to cause additional finger-like cracks to appear near the main fracture. High-temperature superheated steam has a lower viscosity than water, and convective heat transfer occurred during the experiments. The high-temperature steam acts as the fracturing medium and is the main reason for the frequent crack bifurcations. In the fracture initiation stage (Stage I of Fractures A, B, and D, as seen in Figure 5), the extension direction of the fractures is affected less by the horizontal principal stress. In the later stage of the expansion (Stage II of Fractures A and B, as seen in Figure 5), the fractures are deflected towards the maximum horizontal principal stress due to the influence of the horizontal principal stress. The thermal stress caused by high temperatures is the main reason for the complicated fracture geometry.

Figure 7 indicates the deformation of the sample. As the superheated steam pressure in the hole increases from 0 to 2.8 MPa (0 to 33 min), the axial deformation is compressive and the lateral deformation is expansive. The axial compression deformation rate decreases with increasing pressure, and the lateral expansion deformation rate increases with pressure. Under constant steam pressure of 2.8 MPa from 33 to 108 minutes, the lateral deformation continues to climb, and the deformation is basically linear with time. The axial deformation gradually transforms from compression to expansion. At about 108-110 min, a 0.3 mm expansion occurs in the direction of the lateral maximum horizontal principal stress within 2 minutes, and a 0.6 mm expansion appears in the direction of the lateral minimum horizontal principal stress in the same period. From 110 to 200 minutes, the specimen keeps expanding, the lateral deformation rate becomes stable, and the axial deformation rate gradually decreases until the specimen is destroyed.
Acoustic Emission Feature

When the mineral particles in the rock are shifted or microcracking occurs, the strain energy is released in the form of elastic waves, which cause AE events. During the loading process, each AE event corresponds to the occurrence of a microfracture within the rock; hence, there is a corresponding relationship between AE events and fracture expansion [32,33].
The AE energy reflects the energy level of the stress waves produced by crack propagation, which characterizes the crack propagation activity within the rock samples. The accumulated energy of the AE events reflects the energy necessary to cause crack initiation and extension. The AE event and count reflect the frequency of microcracking. Figure 8 shows variations in the AE event, count, and energy collected during the experiment. The AE count and energy were low from 0 to 100 min, with quiet zones alternating with concentrated zones during this period. Thus, the accumulated count and energy of the AE increased slowly. The AE count and energy were high from 100 to 110 min, and the accumulated event, count, and energy of the AE increased rapidly. From 110 to 160 min, the AE count and energy slightly decreased but were still higher than at the initial stage of the experiment, which caused the accumulated event, count, and energy to increase more slowly. After 160 min, a second AE concentration area appeared, and the cumulative parameters increased rapidly until the sample was destroyed.
From 100 to 110 min, the AE event, count, and energy increased rapidly. At this point, the sample had not experienced macroscopic damage, but there was a substantial increase in the intensity and frequency of AE signals. This indicates that fracture initiation occurred around the borehole of the sample.

Macrocrack Initiation

In Figure 8, from 100 to 110 minutes, the AE count and energy increased, and the accumulated event, count, and energy of the AE increased rapidly. These AE activities are believed to be an indication of hydraulic fracture initiation [30]. From Figure 7, it can be seen that at 108-110 min, 0.3 mm of expansion deformation occurred in the direction of the lateral maximum principal stress of the rock, and 0.6 mm of expansion deformation took place in the direction of the lateral minimum horizontal principal stress. The deformation and AE results of the experiment indicate that large-scale destruction occurred within the sample during 100-110 min: cracks initiated and macrocracks developed.
AE localization events are microcrack damage events monitored in real time. They can reflect the accumulation of microfaults and the evolution of macrocrack formation. Deformation can only reflect the results of macrocrack development in the specimen but cannot reflect the process of microfault accumulation. Judging the moment of macrocrack initiation from deformation is therefore delayed compared with judging it from the acoustic emission characteristic parameters. Hence, AE technology is an effective method to predict fracture initiation.
Destruction Rules of the Borehole

The AE event locations recorded during the experiments were used to estimate the fracture propagation in the rock surrounding the borehole. The fundamental basis for calculating the location is the simple time-distance relationship implied by the velocity of the sound wave. Figure 9 shows the AE location map for the destruction, which reflects the fracture law of the sample at the different experimental stages; the acoustic locations obtained at the different stages are indicated by different colors. The destruction of the rock that surrounds the borehole can be divided into three stages. During the thermal cracking stage (33-100 min), the thermal cracks were randomly distributed around the borehole and gradually expanded outwards with the borehole at the center. During this process, the density of the thermal cracks around the borehole gradually increased.
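As an illustration of this time-distance principle, the sketch below inverts arrival times for a source location by nonlinear least squares, assuming a homogeneous medium with a known wave velocity. The sensor coordinates and velocity are placeholder values, not the actual experimental configuration; the unknowns are the source position (x, y, z) and the origin time t0.

import numpy as np
from scipy.optimize import least_squares

# Placeholder sensor positions on the sample faces, in mm (not the real layout).
sensors = np.array([[0, 150, 100], [300, 150, 200],
                    [150, 0, 120], [150, 300, 180]], dtype=float)
v = 4.5e6  # assumed P-wave velocity for granite, mm/s (~4500 m/s)

def residuals(params, t_obs):
    # Difference between observed and predicted arrival times.
    x, y, z, t0 = params
    dist = np.linalg.norm(sensors - np.array([x, y, z]), axis=1)
    return t0 + dist / v - t_obs

# Synthetic event near the borehole wall to check the round trip.
true_src, true_t0 = np.array([160.0, 150.0, 200.0]), 1.0e-3
t_obs = true_t0 + np.linalg.norm(sensors - true_src, axis=1) / v

fit = least_squares(residuals, x0=[150, 150, 150, 0.0], args=(t_obs,))
print(fit.x)  # should approximately recover [160, 150, 200, 1e-3]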
As can be seen from Figure 8, the AE energy of the thermal cracking is lower than that during fracture propagation, while quiet sections and dense sections of AE events appeared alternately. Thermal cracking is the result of the alternating accumulation and release of energy generated by the deformation mismatch between grains with different thermal expansivities. The thermal cracking of granite occurs mainly at grain boundaries, while cracks within the grains occur when the granite is heated to 500°C [15,16]. A large number of thermal cracks occurred around the borehole, and the accumulation of these microcracks promotes the formation of the initial fracture. Figure 9(g) indicates the location of the AE during the fracture initiation stage (100-110 min). After the thermal cracking stage, the density of thermal cracks around the borehole increased, and microcracks connected to each other and expanded to form the initial fractures. As seen in Figure 8, the energy of the AE was large at this stage, and the cumulative parameters increased rapidly. The location of the fracture initiation was random and nearly independent of the ground stress. There were four initial fractures randomly distributed around the borehole, as seen in Figure 6. Theoretically, initial fractures should be initiated along the maximum horizontal principal stress (e.g., Fracture C in Figure 5), but Fractures A, B, and D (in Figure 5) deviated from this pattern. Under thermo-hydro-mechanical coupling, the propagation of macrocracks was affected by the combined effects of the horizontal principal stress, the existing thermal cracks, and the shape of the mineral grains. At temperatures above 300°C, a macrocrack forms by connecting thermal cracks during its propagation [34]. Moreover, thermal fractures allow the high-pressure steam to penetrate and cause stress concentrations around the fracture tips, potentially pushing the thermal fractures to propagate further [35]. There were large thermal cracks around the borehole, as shown in Figure 9. Hence, the randomness of the spatial distribution of thermal cracking favors the formation of multiple weak sections around the borehole, which easily form multiple initial fractures that are barely affected by the horizontal principal stress.
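To see why heating to 500°C is so damaging, a rough order-of-magnitude estimate of the thermal stress under fully constrained expansion can be made with the standard relation σ = EαΔT/(1 - ν). The parameter values below are typical textbook values for granite, not measurements from this study; the resulting stress far exceeds typical granite tensile strengths of roughly 10-25 MPa, which is consistent with pervasive thermal microcracking.

# Back-of-the-envelope thermal stress for fully constrained expansion,
# sigma = E * alpha * dT / (1 - nu). Assumed, typical granite parameters.
E = 50e9        # Young's modulus, Pa
alpha = 8e-6    # coefficient of thermal expansion, 1/K
nu = 0.25       # Poisson's ratio
dT = 480.0      # heating from ~20 C to 500 C, K

sigma = E * alpha * dT / (1 - nu)
print(f"thermal stress ~ {sigma / 1e6:.0f} MPa")  # ~256 MPa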
4.2.3. Fracture Propagation Stage. Figures 9(g)-9(n) show the time-order characteristics of the AE localization events during fracture propagation (110-210 min) in the sample. The AE events were densely distributed around the borehole and along the fracture directions. Fracture A, which lies in the direction of the minimum horizontal principal stress, formed earlier (Figure 9(g)), and Fractures B and C, which lie in the direction of the maximum horizontal principal stress, appeared soon after (Figures 9(h)-9(j)). Because of the small size of Fracture D, no AE locations could be attributed to it. In the initial stage of fracture propagation, the extension direction of the fractures was little affected by the horizontal principal stresses; in the later stage, the fractures deflected towards the direction of the maximum horizontal principal stress under its influence.
The time-order characteristics of the AE localization events in Figures 9(h)-9(j) indicate that Fracture A underwent a process of expansion, crack arrest, and re-expansion. While Fracture A was arrested, the initial fractures B and C formed, and the three cracks thereafter expanded together (133-223 min). Crack expansion always occurs along the easiest direction. The intermittent expansion of Fracture A indicates that the energy available for cracking was insufficient to overcome the maximum horizontal principal stress, so energy had to accumulate before propagation could resume.
Thermal cracking favors the formation of multiple initial fractures around the borehole that are barely affected by the horizontal principal stress (see Figure 6). The high-temperature effect (thermal stress) is the main reason for the complexity of the fracture geometry of the sample, which manifests itself as a multiple-fracture failure mode.
Extension Resistance of Crack Propagation. Figure 5 shows that Fracture D stopped expanding shortly after initiation, and Figures 9(h)-9(j) demonstrate that Fracture A extended intermittently. Figure 6 shows that the fractures (especially Fracture A) bifurcated during expansion, which suggests that the extension resistance during fracture propagation is relatively large. Thermal stresses and pore pressures cannot continuously provide the energy for fractures to expand indefinitely. In particular, Fracture A began cracking along the direction of the minimum horizontal principal stress and needed to overcome the constraint of the maximum horizontal principal stress to continue expanding. This requires either the accumulation of energy or expansion in a direction of smaller energy consumption (crack bifurcation).
The multicrack propagation mode and the phase transition of the high-temperature steam during crack propagation are the main reasons for the increased extension resistance. During crack propagation, the high-temperature steam at the crack tip turned into water due to heat transfer, which reduced the thermal stress and the pressure within the fracture, and the added resistance inhibited the crack from expanding. At the same time, high-temperature vapor has low viscosity, which makes it easier for it to penetrate into the rock and increase the thermal damage around the crack; this reduces the pressure in the crack and delays its initiation and expansion [29].
Influence of Thermal Stress on the Destruction of the Rock Surrounding the Wellbore in HDR Geothermal Energy Recovery. Thermal stresses can damage rocks. The rock surrounding a production well is destroyed by rising temperatures (high-temperature steam passes through the ambient-temperature rock mass), whereas the rock surrounding an injection well is damaged by cooling (ambient-temperature water is injected into the high-temperature rock mass). Based on the experimental results, while a production well is in operation, the surrounding rock primarily undergoes expansion deformations, as shown in Figure 7. The failure is mainly due to the cracking of the rock that surrounds the wellbore and the extension of these cracks, as there is no significant contraction or collapse of the borehole (see Figure 5). Expansion deformations of the rock mass reduce the permeability [36] and hence the efficiency of geothermal energy extraction in an HDR.
Thermal cracking during cooling is more severe than during heating [22]. According to the theory of thermoelasticity, localized tensile thermal stresses are more likely to occur near the wellbore of the injection well, so failure of the injection wellbore is more severe than that of the production wellbore, resulting in greater crack widths [29]. In addition, the large temperature difference between the injected fluid and the rock mass produces a complex thermo-fluid-solid coupling between rock and liquid, which subjects the rock mass to intense thermal shock [26, 37]. Thermal shock causes elastic wave shock and thermal wave shock near the borehole, and the tensile stresses caused by these two shocks make it easier for the borehole to fracture [26]. Research has shown that the subcritical crack growth velocity of rock in water or an aqueous solution is much higher than that in air, and that the fracture toughness of rock decreases when the experimental environment changes from air to water [38]. Large shear failures of the injection well and reservoir occurred when a large amount of water was injected into the 5000-m-deep experimental geothermal reservoir at Soultz (France) [28]. Therefore, the destruction of the injection well is more severe than that of the production well and has a more complex destruction pattern. Large-scale shear damage in the reservoir is beneficial because it increases the permeability [39] and the heat transfer area, improving the efficiency of geothermal energy extraction in an HDR.
When the rock surrounding an injection well breaks, the water sits behind the fracture tip and changes into steam at the tip; the water pushes the steam and expands the fracture. Because steam is highly compressible, crack expansion can cause an instantaneous expansion of the steam, which exerts a dynamic effect on fracture propagation and induces larger fracture openings. For example, the use of supercritical carbon dioxide for HDR reservoir fracturing takes advantage of thermal effects and the instantaneous expansion of gas during crack propagation, forming fractures with larger widths that enhance reservoir permeability. In contrast, when the rock surrounding a production well breaks, the steam sits behind the fracture tip and changes into water at the tip. When the crack expands instantaneously, the vapor liquefies, the pore pressure in the fracture decreases, and energy must accumulate for crack expansion to continue; when the extension resistance increases, the cracks either arrest or expand intermittently. The viscosity of the injected fluid plays an important role in determining the crack expansion effect [25, 29, 40]. Superheated steam at the fracture tip induces faster crack expansion in HDR reservoirs than water at the fracture tip because of the high mobility of steam.
Conclusions
This study used the "true triaxial heat injection rock mechanics experimental system" to destroy a borehole under triaxial stresses through the injection of high-pressure (2.8 MPa), high-temperature (500°C) superheated steam. The following conclusions are drawn from the experiments:

(1) Thermal stresses cause thermal cracking that is randomly distributed around the borehole and gradually expands outwards with the borehole at the center. As a result, the orientation of fracture initiation around the borehole is barely constrained by the geostress. During fracture propagation, the direction of crack expansion is affected by the ground stress and gradually deflects towards the direction of the maximum horizontal principal stress.

(2) During fracture expansion along the direction of the minimum horizontal principal stress, the extension resistance is large, cracks are arrested, and intermittent expansion occurs. This is primarily because the high-temperature vapor has low viscosity, easily penetrates the cracks, and liquefies at their tips.

(3) The destruction of the rock surrounding the borehole can be divided into three stages: (a) thermal cracking, (b) fracture initiation, and (c) fracture propagation. AE technology is an effective method to estimate fracture initiation and propagation.
Data Availability
All data used to support the study is included within the article.
Conflicts of Interest
The authors declare that there is no conflict of interest regarding the publication of this paper.
Revisiting the Exchange Rate Pass Through: A General Equilibrium Perspective
A large literature estimates the exchange rate pass-through to prices (ERPT) using reduced-form approaches, whose results are an important input for analyses at central banks. We study the usefulness of these empirical measures for monetary policy analysis and decision making, emphasizing two main problems that arise naturally from a general equilibrium perspective. First, while the literature describes a single ERPT measure, in a general equilibrium model the evolution of the exchange rate and prices will differ depending on the shock hitting the economy. Accordingly, we distinguish between conditional and unconditional ERPT measures, showing that they can lead to very different interpretations. Second, in a general equilibrium model the ERPT crucially depends on the expected behavior of monetary policy, but the empirical approaches in the literature cannot account for this, providing a misleading guide for policy makers. We first use a simple model of a small and open economy to qualitatively show the intuition behind these two critiques. We then highlight the quantitative relevance of these distinctions by means of a DSGE model of a small and open economy with sectoral distinctions, real and nominal rigidities, and a variety of driving forces, estimated using Chilean data.

JEL Classification: E5, F3, F4.

We would like to thank Markus Kirchner, Martin Uribe, Stephanie Schmitt-Grohé, Andrés Sansone, Lucas Bertinatto, Mariano Palleja, Pablo Cuba-Borda, as well as seminar participants at the Central Bank of Chile, Columbia University, Cochabamba, LACEA and UTDT for useful comments. David Chernin and Francisco Pinto provided excellent research assistance. The views and conclusions presented in this paper are exclusively those of the authors and do not necessarily reflect the position of the Central Bank of Chile, Central Bank of Argentina or its Board members.

† E-mail: mcgarcia@bcentral.cl ‡ E-mail: javier.garcia@bcra.gob.ar
The opinions expressed in this paper are the sole responsibility of its authors and do not necessarily represent the position of the Central Bank of Argentina. The Working Papers series is comprised of preliminary material intended to stimulate academic debate and receive comments. This paper may not be referenced without authorization from the authors.
Introduction
The exchange-rate pass-through (ERPT) is a measure of the change in the price of a good (or basket of goods) after a change in the nominal exchange rate (NER), computed at different horizons after the initial movement in the NER. Its estimates are relevant not only for the international macroeconomics literature, but for actual monetary policy as well. For instance, when a country is a price taker in world markets, a change in the nominal exchange rate directly affects the local-currency price of the goods bought internationally, and thereby importable inflation. It may even affect other sectors of the economy, and for a prolonged period of time, if there are propagation mechanisms at play. In recent years this topic has received renewed interest, particularly since many countries experienced large depreciations after the Tapering announcements by the Fed in 2013.
The relevance of the topic for actual monetary policy making can be seen from three different perspectives. First, in the vast majority of Central Banks one can find studies estimating the ERPT for their particular country. Second, international institutions such as the International Monetary Fund (IMF), the Bank for International Settlements (BIS), and the Inter-American Development Bank (IDB), among others, actively participate in this discussion. For instance, some of the flagship reports of these institutions (such as the World Economic Outlook by the IMF or the Macroeconomic Report by the IDB) frequently include estimates of the ERPT and use them to draw policy recommendations. Moreover, a significant number of papers in this literature come from economists working at these institutions. Finally, it is easy to find references to the ERPT in many Monetary Policy Reports, proceedings from policy meetings, and speeches by board members at many Central Banks.
Policy-related institutions and Central Banks use estimates of ERPTs that are mostly computed using empirical, reduced-form approaches based on vector autoregressions (VARs) or single-equation models. The ERPT measures are generally used for two purposes. The first is to predict the effect that an observed depreciation will have on inflation. The second is ex-post analysis, after some time has passed, with the goal of understanding what happened and explaining differences, if any, with what was expected to happen. In light of this widespread use, in this paper we question the usefulness of the empirical ERPT measures for these purposes using a general equilibrium framework.
In particular, we highlight two shortcomings of using reduced-form estimates of ERPT for policy analysis that can be improved upon by using dynamic and stochastic general equilibrium (DSGE) models. First, empirical ERPT estimates do not (completely) control for the endogeneity of the NER: the evolution of the NER, as well as its relation with other prices, will depend on the shocks hitting the economy. Second, these empirical estimates do not control for the dependence of ERPT measures on the expected reaction of monetary policy, which can affect them significantly. While the first shortcoming has been discussed recently in the literature, as detailed below, the second has not been analyzed.
We distinguish between conditional and unconditional (or aggregate) ERPT measures. The former refers to the ratio of the percentage change in a price index, relative to that in the NER, that occurs conditional on a given shock. The unconditional or aggregate measure is the analogous ratio obtained from reduced-form methodologies. We show how they relate to each other and explore how they depend not only on the parameters of the model but also, and more importantly, on the reaction assumed for monetary policy.
Our analysis is based on two dynamic and stochastic general equilibrium (DSGE) models. The first is a simple small-open-economy model, with traded and non-traded goods and price rigidities. This model allows us to grasp the intuition behind the two shortcomings of the empirical literature that we highlight, but it is not built to assess their quantitative relevance. To that end, we then set up a fully-fledged DSGE model with sectoral distinctions and nominal and real rigidities, driven by a wide variety of structural shocks. We estimate it using a Bayesian approach with quarterly Chilean data from 2001 to 2016.

Our first contribution is to study the relationship between conditional and unconditional ERPTs. We first show analytically that, under certain assumptions in the context of linear, dynamic and stochastic models, the unconditional ERPT obtained using a VAR is a weighted average of the conditional ERPTs in the model. Thus, to the extent that the conditional ERPTs differ significantly depending on the shock, the empirical measures will provide a biased assessment of the expected relationship between the NER and prices at any point in time. In general, using the unconditional ERPT will systematically miss the expected evolution of the NER and prices.
Our second contribution is to define unconditional ERPT measures directly comparable to the estimates of the empirical literature. In general, the mapping between unconditional and conditional ERPTs cannot be obtained algebraically, so we define two measures that can be computed for any model to mimic what an econometrician from the empirical literature would obtain if the general equilibrium model were the true data-generating process.
Our third contribution is to study the dependence of ERPT measures on the reaction of monetary policy. Like any endogenous variable, the conditional and unconditional ERPTs depend on how monetary policy reacts and is expected to react. How this fundamental fact is captured in the empirical ERPT estimates is not clear. It might be argued that these estimates implicitly assume that monetary policy follows a rule capturing the "average" behavior of the central bank during the sample analyzed. However, as there is no explicit description of this rule, it is hard to know what the central bank is assumed to be doing (and expected to do) in the estimated ERPT coefficient. Thus, using reduced-form estimates to forecast the likely dynamics of inflation after a movement in the NER neglects the fact that monetary policy (both actual and expected) will influence the final outcome. With this in mind, one could instead compute several ERPT measures, one for each alternative expected path for monetary policy that a central bank might consider.
Our results show that the conditional ERPTs for the main drivers of the NER are in fact very different from each other, and that the unconditional measures lie between the conditional ones. The analysis is done for three different price indexes: the consumer price index (CPI), a tradable price index and a non-tradable price index. In the quantitative model, the two main drivers of the NER are found to be a common trend in international prices and shocks affecting the interest rate parity condition. The conditional ERPTs that each of them generates are quantitatively different, varying with the time period and the price index considered. At the same time, the unconditional ERPTs lie between the conditional ones and are comparable with empirical estimates. Overall, this evidence points to the importance of identifying the source of the shock that originates the NER change when discussing the likely effect on prices.
The results concerning the dependence of ERPTs on monetary policy are shown to be model-dependent. In principle, it is not clear how the ERPT will differ under alternative policy paths, since a more dovish policy will induce both higher inflation and a larger nominal depreciation. In our simple DSGE model, the effect of alternative policy paths is stronger after a shock that affects the interest rate parity condition than after a shock to external prices. The opposite is true in our fully-fledged DSGE model for Chile. While it remains an open question which conditional ERPT is more sensitive for other countries and other models, this emphasizes the importance of analyzing these issues with a model that can properly account for the observed dynamics.

In terms of the related literature, Shambaugh (2008) and Forbes et al. (2015) compute different ERPTs depending on shocks using VAR models. They use alternative identification assumptions to estimate how several shocks might generate different ERPTs, in the same spirit as our definition of conditional pass-through. Our work deepens their analysis in two ways. First, these studies do not show how these conditional ERPT measures compare with unconditional ones, a comparison that we explicitly perform to understand the bias that might be generated by relying on unconditional ERPTs. Second, they use structural VAR models whose identified shocks are still too general compared to the shocks in a DSGE model. Our approach can then provide a relatively more precise description of the relevant conditional ERPTs.
Two related papers using DSGEs are those by Bouakez and Rebei (2008) and Corsetti et al. (2008). The work by Bouakez and Rebei (2008) is, to the best of our knowledge, the only one that uses an estimated DSGE to compute conditional ERPTs (estimating the model with Canadian data) and that also provides a measure that would qualify as an unconditional ERPT. Our paper differs from theirs in that it provides an unconditional ERPT measure that is directly comparable to the methodology implemented in the empirical literature, and it also analyzes the specific relationship between the measures obtained in the reduced-form approaches and the dynamics implied by a DSGE model. Moreover, our estimated DSGE model has a richer sectoral structure, allowing us to characterize not only the ERPT for total inflation, but also that for different prices such as tradables and non-tradables. Corsetti et al. (2008) explore the structural determinants of the ERPT to import prices from a DSGE perspective and assess possible biases in single-equation empirical methodologies. While our paper shares common points with this study, we distinguish between conditional and unconditional ERPTs and provide a quantitative evaluation of the biases. Still, none of these studies explores the second shortcoming we highlight regarding expected monetary policy.
The relationship between monetary policy and the ERPT has been the topic of several studies, but none has explicitly analyzed how alternative expected paths of the monetary rate affect the ERPT, which is a crucial input for policy makers. For instance, Taylor (2000), Gagnon and Ihrig (2004) and Devereux et al. (2004) use dynamic general equilibrium models to see how monetary policy can alter the ERPT, proposing that a greater focus on inflation stabilization can explain why the empirical measures of ERPT seem to have declined over time in many countries. Others have analyzed how monetary policy should differ depending on structural characteristics associated with the ERPT, such as the currency in which international prices are set and the degree of nominal rigidities, among others. Some examples are Devereux et al. (2006), Engel (2009), Devereux and Yetman (2010), and Corsetti et al. (2010). The point we want to stress, although related to these previous papers, is different: the choice of the expected policy path can significantly influence the realized ERPT, an issue that is generally omitted in policy discussions.

Finally, this paper relates to the extensive literature comparing DSGEs and VARs in terms of their usefulness for different types of analyses. As discussed in Giacomini (2013), there are several reasons why the mapping between a DSGE and a VAR can break down, making the use of DSGEs beneficial in some cases and of VARs in others. On one hand, it is generally believed that VARs forecast variables better than DSGEs because they impose fewer restrictions, as can be seen in Schorfheide (2000). On the other hand, DSGEs are better suited to understand the intuition and mechanisms behind economic movements, since these can be traced back to the original structural shocks, as is the case in this paper. In addition, as our analysis of the dependence of ERPTs on the monetary policy reaction highlights, DSGEs are better suited to analyze counterfactual scenarios and to understand which parameter or mechanism is critical for a given result.
The rest of the paper is organized as follows. Section 2 describes the empirical strategies used in the literature and their relationship with DSGE models. The analysis based on a simple model is presented in Section 3. The quantitative DSGE model and the ERPT analysis based on it are included in Section 4. Conclusions are discussed in Section 5.
The Empirical Approach to ERPT and DSGE Models
In this section we first describe two methodologies generally used in the reduced-form literature to estimate the ERPT: single-equation and VAR models. We then use a general linearized DSGE model to introduce the concept of conditional ERPT. Finally, we discuss the relationship between conditional ERPTs from DSGE models and the measure obtained using a VAR approach.
The Empirical Approach
The two approaches most commonly used by the empirical literature are single-equation models and VARs. In the first, the estimated model takes the form

π^j_t = α + Σ_{i=0}^{L} β_i π^S_{t-i} + γ' c_t + v_t,   (1)

where π^j_t denotes the log-difference in the price of a good (or basket of goods) j, π^S_t is the log-difference of the NER, c_t is a vector of controls, v_t is an error term and L is the number of NER lags included. The parameters α, the β_i and γ are generally estimated by OLS, and the ERPT h periods after the movement in the NER is computed as Σ_{i=0}^{h} β_i, i.e., the percentage change in the price of good j generated by a 1% permanent change in the NER.
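As a concrete illustration of this regression, here is a minimal sketch; the function and variable names are ours, and the simulated data are only meant to show that the summed coefficients recover the cumulative pass-through:

```python
import numpy as np

def erpt_single_equation(dp, ds, c, h):
    """OLS of price inflation dp on the current value and h lags of the NER
    depreciation ds, plus a control c; the h-period ERPT is the sum of the
    coefficients on ds and its lags."""
    T = len(dp)
    X = np.column_stack(
        [np.ones(T - h)]
        + [ds[h - i:T - i] for i in range(h + 1)]  # ds_t, ds_{t-1}, ..., ds_{t-h}
        + [c[h:T]]
    )
    beta = np.linalg.lstsq(X, dp[h:T], rcond=None)[0]
    return beta[1:h + 2].sum()

# Illustration on simulated data (numbers are illustrative, not from the paper):
rng = np.random.default_rng(0)
T = 400
ds = rng.normal(size=T)
c = rng.normal(size=T)
dp = 0.3 * ds + 0.1 * np.r_[0.0, ds[:-1]] + 0.2 * c + 0.5 * rng.normal(size=T)
print(erpt_single_equation(dp, ds, c, h=4))  # close to 0.3 + 0.1 = 0.4
```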
The VAR strategy specifies a model for the vector of stationary variables x_t that includes π^S_t and π^j_t, as well as other control variables (of both domestic and foreign origin). The reduced-form VAR(p) model is

x_t = Σ_{j=1}^{p} Φ_j x_{t-j} + u_t,   (2)

where the Φ_j, for j = 1, ..., p, are matrices to be estimated, and u_t is a vector of i.i.d. reduced-form shocks with zero mean and variance-covariance matrix Ω. Associated with u_t, the "structural" disturbances w_t are defined by

u_t = P w_t,   (3)

where P satisfies Ω = P P', assuming the variance of w_t equals the identity matrix. In the empirical ERPT literature, P is assumed to be lower triangular, obtained from the Cholesky decomposition of Ω, and the ERPT h periods ahead is defined as

ERPT^V_{π^j}(h) ≡ CIRF^V_{π^j, π^S}(h) / CIRF^V_{π^S, π^S}(h),   (4)
where CIRF^V_{k,i}(h) is the cumulative impulse response of variable k, h periods after a shock in the position associated with variable i. In other words, the ERPT is the ratio of the cumulative percentage change in the price, relative to that in the NER, originated by the shock associated with the NER in the Cholesky ordering. While both approaches can be found in the literature, here we use the VAR as a benchmark for several reasons. First, in the most recent papers the VAR approach is generally preferred. Second, the ERPT obtained from (1) assumes that after the NER moves, it stays at that value forever; in contrast, the measure in (4) allows for richer dynamics in the NER after the initial change. Third, the OLS estimates from (1) will likely be biased, as most of the variables generally included on the right-hand side are endogenous. The VAR attempts to solve this problem by including lags of all variables and by means of the identification strategy, as long as the Cholesky decomposition is correct. Finally, the VAR model might, in principle, be an appropriate representation of the true multivariate model (as we will discuss momentarily), but this is not generally true for single-equation models.
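To make the mechanics of (2)-(4) concrete, the following sketch estimates the VAR by OLS, identifies the NER shock through a Cholesky factorization with the NER depreciation ordered first, and returns the ratio of cumulative responses. The function and its arguments are our own notation, and the degrees-of-freedom correction is one common convention:

```python
import numpy as np

def var_erpt(x, p, h, i_p, i_s=0):
    """ERPT at horizon h from a reduced-form VAR(p) estimated on x (T x k).
    The NER depreciation must be ordered first (i_s = 0) so the Cholesky
    shock in that position is the 'NER shock'; i_p indexes the price."""
    T, k = x.shape
    Y = x[p:]
    X = np.hstack([x[p - j:T - j] for j in range(1, p + 1)] + [np.ones((T - p, 1))])
    B = np.linalg.lstsq(X, Y, rcond=None)[0]        # stacked lag coefficients
    U = Y - X @ B
    Omega = U.T @ U / (T - p - k * p - 1)           # residual covariance
    P = np.linalg.cholesky(Omega)                   # lower-triangular factor
    Phi = [B[(j - 1) * k:j * k].T for j in range(1, p + 1)]
    Psi = [np.eye(k)]                               # moving-average coefficients
    for t in range(1, h + 1):
        Psi.append(sum(Phi[j - 1] @ Psi[t - j] for j in range(1, min(t, p) + 1)))
    CIRF = sum(Ps @ P for Ps in Psi)                # cumulative orthogonalized IRFs
    return CIRF[i_p, i_s] / CIRF[i_s, i_s]
```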
DSGE Models and Conditional ERPT
The linearized solution of a DSGE model takes the form

y_t = F y_{t-1} + Q e_t,   (5)

where y_t is a vector of the variables in the model (exogenous and endogenous, predetermined or not), e_t is a vector of i.i.d. structural shocks with mean zero and variance equal to the identity matrix, and the matrices F and Q are non-algebraic functions of the deep parameters of the model. Using this solution, the ERPT conditional on shock e_i for the price of good j is defined as

CERPT^M_{π^j,i}(h) ≡ CIRF^M_{π^j,i}(h) / CIRF^M_{π^S,i}(h);   (6)

i.e., the cumulative change in the price of good j, relative to the cumulative change in the NER, originated by shock e_i.
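Given the solution matrices F and Q from any linear solution method, computing (6) takes only a few lines; the row indices for price inflation and the NER depreciation are bookkeeping assumptions of this sketch:

```python
import numpy as np

def conditional_erpt(F, Q, i_shock, i_p, i_s, h):
    """CERPT^M at horizon h: cumulative response of price inflation (row i_p)
    over the cumulative response of NER depreciation (row i_s), after a unit
    realization of structural shock i_shock in y_t = F y_{t-1} + Q e_t."""
    y = Q[:, i_shock].copy()        # impact response
    cum_p, cum_s = y[i_p], y[i_s]
    for _ in range(h):
        y = F @ y                   # propagate forward one period
        cum_p += y[i_p]
        cum_s += y[i_s]
    return cum_p / cum_s
```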
The Relationship Between VAR- and DSGE-based ERPT
We want to explore the relationship between ERPT^V_{π^j}(h) and CERPT^M_{π^j,i}(h), in order to construct a measure of unconditional ERPT from the DSGE model that is comparable to ERPT^V_{π^j}(h). Relevant for this discussion is the work of Ravenna (2007), who explores conditions under which the dynamics of a subset of variables in the DSGE model can be represented with a finite-order VAR model. The general message is that it is not obvious that a DSGE model will meet these requirements, implying that the relationship we wish to find can only be obtained analytically for specific cases. (A related issue is analyzed by Fernández-Villaverde et al. (2007), who show conditions under which the shocks identified in a VAR for a subset of the variables in a DSGE can capture the same shocks featured in the DSGE model. However, as the empirical VAR literature on ERPT does not claim to identify any particular shock that can be interpreted from a DSGE model, this aspect is not as relevant for our discussion.) In Appendix A.1 we show that, if the assumptions for the existence of a finite VAR representation of the DSGE model hold, and if π^S_t is ordered first in the VAR, the following relationship holds:

ERPT^V_{π^j}(h) = Σ_{s=1}^{n_e} ω_s(h) CERPT^M_{π^j,s}(h),   (7)

where n_e is the number of shocks in the vector e_t and the ω_s(h) are weights associated with each shock. In other words, the ERPT obtained from the VAR is a weighted sum of the conditional ERPTs in the DSGE model. For h = 0, the weight ω_s(0) corresponds to the fraction of the forecast-error variance of the NER at horizon h = 0 explained by shock s. For h > 0, the weight ω_s(h) equals ω_s(0) adjusted by the change in the response of the NER at horizon h relative to the response at h = 0 (see Appendix A.1 for the precise expression for ω_s(h)). In simpler terms, the weights depend on the relative importance that each shock has in explaining the fluctuations in the NER. Moreover, the relative importance of a particular shock in accounting for the dynamics of inflation is not relevant for its weight in the unconditional ERPT. The relationship (7) is an important result because it implies that, to the extent that the conditional ERPTs are different, predicting the effect on a price of any movement of the NER with the unconditional measure will almost surely be inappropriate. It will only give a correct assessment of the likely dynamics of inflation if the combination of shocks hitting the economy in a given moment equals the weights implicit in the VAR-based ERPT; but in the context of shocks with continuous support, this event has zero probability. As we will see in the next sections, the conditional ERPTs are indeed very different, so this is an important disadvantage of using unconditional ERPTs.
The conditions behind (7) may not hold in general DSGE models. Thus, we propose two alternatives to compute the unconditional ERPT. The first assumes that the relationship in (7) holds in general. We label this as

UERPT^M_{π^j}(h) ≡ Σ_{s=1}^{n_e} CERPT^M_{π^j,s}(h) ω_s(h),

where CERPT^M_{π^j,s}(h) is computed as in (6), and ω_s(h) is analogous to the one in (7).
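A sketch of this weighted sum follows. The weights are built from each shock's impact loading on the NER and its cumulative NER response, which is consistent with the verbal description of ω_s(h) above; since the precise expression is in Appendix A.1, treat the weighting here as our reading rather than the paper's exact formula:

```python
import numpy as np

def unconditional_erpt_weighted(F, Q, i_p, i_s, h):
    """UERPT^M(h): weighted sum of conditional ERPTs across structural shocks.
    Weights combine each shock's impact loading on the NER with its cumulative
    NER response, normalized to sum to one (our reading of omega_s(h))."""
    n_e = Q.shape[1]
    cirf_p = np.zeros(n_e)          # cumulative price responses, one per shock
    cirf_s = np.zeros(n_e)          # cumulative NER responses, one per shock
    resp = Q.copy()                 # columns: impact responses to each shock
    for _ in range(h + 1):
        cirf_p += resp[i_p]
        cirf_s += resp[i_s]
        resp = F @ resp
    w = Q[i_s] * cirf_s
    w = w / w.sum()                 # omega_s(h); at h = 0 this is the FEVD share
    return (w * (cirf_p / cirf_s)).sum()
```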
The second measure of unconditional ERPT answers the following question: what ERPT would someone using the empirical VAR approach estimate if she had an infinite sample of the variables commonly used in that literature, generated by the DSGE model? We call this alternative the unconditional ERPT using a population VAR, labeled UERPT^{PV}_{π^j}(h); it is analogous to (4), but the matrices Φ_j and Ω are obtained from the population (i.e., unconditional) moments computed from the solution of the DSGE model (Appendix A.2 shows how this is computed).
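A sketch of the population-VAR measure: the autocovariances implied by (5) come from a discrete Lyapunov equation, the VAR(p) coefficients from the population Yule-Walker equations, and the ERPT from Cholesky-orthogonalized cumulative responses, mirroring (4). It assumes stationary, zero-mean observables (NER depreciation ordered first) and a nonsingular innovation covariance; Appendix A.2 contains the exact procedure:

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

def population_var_erpt(F, Q, obs, p, h, i_p, i_s=0):
    """UERPT^PV(h): the ERPT an econometrician would obtain from a VAR(p) on
    the observables `obs` estimated on an infinite sample generated by the
    model y_t = F y_{t-1} + Q e_t."""
    Sigma = solve_discrete_lyapunov(F, Q @ Q.T)       # Var(y_t)
    Gam = [np.linalg.matrix_power(F, j) @ Sigma for j in range(p + 1)]
    G = [g[np.ix_(obs, obs)] for g in Gam]            # autocovariances of observables
    k = len(obs)
    # Population Yule-Walker: A = [G_1 ... G_p] V^{-1}, V block (i,j) = G_{j-i}
    V = np.block([[G[j - i] if j >= i else G[i - j].T
                   for j in range(p)] for i in range(p)])
    C = np.hstack(G[1:p + 1])
    A = C @ np.linalg.inv(V)                          # stacked [Phi_1 ... Phi_p]
    Omega = G[0] - A @ C.T                            # innovation covariance
    P = np.linalg.cholesky(Omega)
    Phi = [A[:, (j - 1) * k:j * k] for j in range(1, p + 1)]
    Psi = [np.eye(k)]
    for t in range(1, h + 1):
        Psi.append(sum(Phi[j - 1] @ Psi[t - j] for j in range(1, min(t, p) + 1)))
    CIRF = sum(Ps @ P for Ps in Psi)
    return CIRF[i_p, i_s] / CIRF[i_s, i_s]
```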
In conclusion, for any particular DSGE model, we have two unconditional ERPTs to compare with the conditional ones, in order to assess their differences. In the following sections we apply these measures to both a simple and a quantitative DSGE model.
A Simple DSGE Model
In this section we develop a simple DSGE model to show the importance of differentiating between conditional and unconditional ERPT, as well as of accounting for the expected paths of monetary policy. The model is based on Schmitt-Grohé and Uribe (2017, sec. 9.16), extended to include a Taylor rule for the interest rate, indexation and external inflation.
Description of the Model
The model is relatively small and has only the ingredients necessary to highlight the differences in ERPTs that we want to show. It features three shocks (world interest rate, external inflation and monetary policy), chosen because of their importance in the larger model of the next section, to show the differences between conditional and unconditional ERPTs; and it features two sectors (tradable, T, and non-tradable, N) to show differences between the ERPTs of different prices. Monetary policy sets the short-term interest rate following a Taylor rule in the baseline case; this assumption is temporarily relaxed later on to evaluate the effects of alternative policy paths. Finally, the model includes Calvo pricing in sector N with indexation to past inflation, given its importance in the transmission of changes in the exchange rate to internal prices. In what follows we describe the different agents in the model, while Appendix B presents all the equilibrium conditions and the computation of the steady state.
Households
There is a representative household that consumes, works and saves. Her goal is to maximize

E_0 Σ_{t=0}^{∞} β^t [ C_t^{1-σ}/(1-σ) - ξ h_t^{1+φ}/(1+φ) ],

where C_t is consumption and h_t are hours worked, β is the discount factor, σ is the risk-aversion parameter, φ is the inverse of the Frisch elasticity of labor supply and ξ is a scale parameter. Her budget constraint is

P_t C_t + S_t B*_t + B_t = W_t h_t + S_t R*_{t-1} B*_{t-1} + R_{t-1} B_{t-1} + Π_t.

Here P_t is the price of the consumption good, S_t is the exchange rate, B*_t is the amount of external bonds bought by the household in period t, B_t is the analogous quantity for local bonds bought in t, W_t is the wage, R*_t is the external interest rate, R_t is the domestic interest rate, and Π_t collects all the profits from the firms in the economy, since households own the firms.
The consumption good is a composite of tradable consumption, C^T_t, and non-tradable consumption, C^N_t. Additionally, non-tradable consumption is an aggregate of non-tradable varieties, C^N_t(i). These technologies are described by

C_t = [ γ^{1/̺} (C^N_t)^{(̺-1)/̺} + (1-γ)^{1/̺} (C^T_t)^{(̺-1)/̺} ]^{̺/(̺-1)},   C^N_t = ( ∫_0^1 C^N_t(i)^{(ǫ-1)/ǫ} di )^{ǫ/(ǫ-1)},
where γ is the share of N in total consumption, ̺ is the elasticity of substitution between C^N_t and C^T_t, and ǫ is the elasticity of substitution between the varieties i ∈ [0, 1] of non-tradables. From the problem of choosing the minimum expenditure needed to obtain the consumption good, we obtain the consumer price level

P_t = [ γ (P^N_t)^{1-̺} + (1-γ) (P^T_t)^{1-̺} ]^{1/(1-̺)},

where P^T_t is the local price of the tradable good and P^N_t is a price index for the non-tradable composite.
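For completeness, the standard CES expenditure-minimization steps behind this index are as follows. Minimizing P^N_t C^N_t + P^T_t C^T_t subject to the aggregator above yields the demands

C^N_t = γ (P^N_t / P_t)^{-̺} C_t,   C^T_t = (1 - γ) (P^T_t / P_t)^{-̺} C_t,

and substituting these into the expenditure identity P^N_t C^N_t + P^T_t C^T_t = P_t C_t gives

P_t^{1-̺} = γ (P^N_t)^{1-̺} + (1 - γ) (P^T_t)^{1-̺},

which is the price index above.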
Firms
There are two sectors, tradable and non-tradable. The former is assumed to receive a fixed endowment, Y^T, each period, with local price P^T_t = S_t P^{T,*}_t, where P^{T,*}_t is the foreign price of the tradable good. In the non-tradable sector, each firm j ∈ [0, 1] produces using labor with the technology

Y_t(j) = h_t(j)^α,

where Y_t(j) is the production of firm j, h_t(j) are the hours hired and α ∈ (0, 1] is a parameter. Firm j faces the downward-sloping demand

Y_t(j) = ( P^N_t(j) / P^N_t )^{-ǫ} Y^N_t,

where ǫ is the elasticity of substitution among varieties, P^N_t(j) is the price of variety j in the N sector and Y^N_t is the non-tradable composite. Firms choose prices à la Calvo, with probability 1 - θ of choosing prices optimally each period. In the periods in which a firm does not choose its price optimally, it updates the price using a combination of past inflation, π_{t-1}, and the inflation target, π̄:

P^N_t(j) = P^N_{t-1}(j) π_{t-1}^ζ π̄^{1-ζ}.

Note that all prices that are not chosen optimally are indexed either statically to π̄ or dynamically to π_{t-1}. The overall degree of dynamic indexation in the model is given by θζ, since ζ is the fraction indexed to past inflation among the θ prices that are not chosen optimally. Note also that in the long run indexation is complete, in the sense that all prices grow at the same rate π̄. This eliminates the welfare cost of price dispersion in steady state (and in a first-order approximation).
Monetary Policy
We assume a simple Taylor rule for the domestic interest rate:

R_t / R = (π_t / π)^{α_π} (GDP_t / GDP)^{α_{gdp}} exp(e^m_t),

where variables without a time subscript denote steady-state values, GDP_t is gross domestic product (see the appendix for a definition) and e^m_t is the monetary shock, assumed to be i.i.d.
Foreign Sector
The rest of the world provides the external price of the tradable output, P^{T,*}_t, and the external interest rate. For the second, we assume that the external interest rate relevant for the country, R*_t, is given by

R*_t = R^W_t + φ_B ( e^{b̄ - b_t} - 1 ),

where R^W_t is the risk-free external interest rate, which follows an exogenous process, b_t is the real value of the household's net foreign asset position, and φ_B, b̄ > 0 are parameters. This equation is the closing device of the model.
Exogenous Processes and Parametrization
The model includes three shocks: the monetary policy shock, ǫ^m_t, foreign inflation, π*_t, and the risk-free external interest rate, R^W_t. Each shock x_t is assumed to follow an AR(1) process of the form

log(x_t / x̄) = ρ_x log(x_{t-1} / x̄) + ǫ^x_t.

For simplicity, we assume ρ_x = 0.5 for x = {π*, R^W} and ρ_{ǫ^m} = 0, which is the standard case in the literature. We allow the monetary shock to have a positive autocorrelation coefficient later on, to highlight the connection between different expected monetary paths and ERPTs. Table 1 shows the parametrization used, which closely follows Schmitt-Grohé and Uribe (2017, sec. 9.16). In the baseline parametrization we set the indexation parameter to zero, to later explore the role of different values for ζ.
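For concreteness, here is a minimal simulation of these exogenous processes under the stated autocorrelations; the innovation standard deviations are illustrative placeholders, not the values from Table 1:

```python
import numpy as np

# Simulate the three exogenous processes in log-deviations from steady state.
rng = np.random.default_rng(1)
T = 200
rho = {"pi_star": 0.5, "R_W": 0.5, "eps_m": 0.0}
sd = {"pi_star": 0.010, "R_W": 0.001, "eps_m": 0.0025}   # hypothetical scales

paths = {}
for name in rho:
    x = np.zeros(T)
    for t in range(1, T):
        x[t] = rho[name] * x[t - 1] + sd[name] * rng.normal()
    paths[name] = x
print({k: float(np.std(v)) for k, v in paths.items()})
```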
Conditional vs. Unconditional ERPTs
In this section we show that, even in this simple model, there are significant differences among the conditional ERPTs, depending on the shock hitting the economy and also on the price considered. Note first that, by construction, the reaction of tradable inflation and the nominal exchange rate depreciation is the same for the monetary shock and the shock to the external interest rate, implying a conditional ERPT for these shocks equal to one at all horizons. This is because prices in the tradable sector are given by the foreign price of the tradable good, which is exogenous, times the NER. Also note that, since the real exchange rate and all relative prices are stationary in the model, these shocks will also have a conditional ERPT of one in the long run for non-tradable and total prices. In contrast, this is not the case for the shock to foreign inflation, which does not require a complete ERPT to any domestic price at any horizon.
To understand the propagation of the different shocks, we first present the impulse-response analysis. A positive change in the external interest rate, shown in Figure 1, causes two effects: a negative income effect (because this economy is assumed to be a net debtor), and an intertemporal substitution effect that increases the incentives to save today. Both decrease the current demand for both goods, while increasing labor supply at the same time. The drop in the demand for non-tradables, as well as the increase in labor supply, tends to decrease the relative price of these goods, leading to a real depreciation. Due to sticky prices, the nominal exchange rate also increases. Inflation rises for both types of goods and, as a result, the policy rate increases.

A negative shock to external inflation, shown in Figure 2, affects the economy through several channels. In principle, this shock should affect export-related income, generating a wealth effect. However, as the domestic price of tradables is fully flexible, ceteris paribus, the relevant relative price (the price of exports over that of imports) does not change, so this channel is not active in this simple model. Another channel operates because foreign bonds are denominated in dollars: an unexpected drop in foreign prices will increase, ceteris paribus, the burden of interest payments on external debt in domestic currency units, generating a negative wealth effect. This channel tends to contract aggregate demand, which reduces consumption of both goods and increases labor supply. Since the non-tradable market has to clear, its relative price falls. Both a nominal and a real depreciation materialize, inflation rises for both types of goods and the policy rate increases. While qualitatively these effects are analogous to those originated by a rise in the world interest rate, there is an attenuation effect due to the drop in foreign inflation, which leads to a smaller conditional ERPT.
Finally, a negative shock to the policy rule, shown in Figure 3, generates a drop in the nominal interest rate for given values of inflation and output. This causes an intertemporal substitution effect towards current consumption. The higher demand for non-tradables causes an increase in their relative price as well as a rise in their output. This leads to both a real and a nominal depreciation, which increases inflation.
We now turn to the conditional ERPTs which, as can be seen in figure 4, significantly differ depending on the shock. First note that, as expected, the ERPTs of tradable prices is in general much higher than of non-tradable, since the former is not subject to price rigidities. For tradable prices, as discussed at the beginning of the section, the conditional ERPT given either a monetary or foreign Note: Each graph displays the percentage change, relative to steady state, originated by the shock, in the following variables: total, non-tradable and tradable inflation (π, π N and π T ), nominal depreciation (π S ), output (gdp), total, non-tradable and tradable consumption (c, c N and c T ), the (CPI-based) real exchange rate (rer), the policy rate (R), and the variable hit by the shock.
We now turn to the conditional ERPTs which, as shown in Figure 4, differ significantly depending on the shock. For tradable prices, as discussed at the beginning of this section, the conditional ERPT after either a monetary or a foreign interest rate shock equals one at all horizons. In contrast, the ERPT in response to foreign inflation is around 0.6 in the first period and decreases over time. This is in line with the distinction we made when analyzing the responses to a shock in foreign inflation. For non-tradable prices, it is also true that the conditional ERPTs in response to a foreign interest rate shock or a monetary shock are higher than after a foreign-inflation shock, but they are not equal to one. As seen in the figure, it is only for the monetary shock that the ERPT becomes close to one, around the 8th quarter, being much lower for the foreign interest rate. Note that in response to foreign inflation, the ERPT is only 0.02 even after 12 quarters.
Since the CPI is an average of the tradable and non-tradable price indices, its conditional ERPT lies between the conditional ERPTs of these two prices. For consumer prices, the highest ERPT is in response to the monetary shock, then to the foreign interest rate, and then to foreign inflation. Also note that it is increasing in the case of the monetary shock and the foreign interest rate, but decreasing in the case of foreign inflation.
Figure 5 displays the unconditional ERPTs for each price index, calculated using the two measures explained in the previous section. As can be inferred from comparing the unconditional ERPTs in Figure 5 with the conditional ones in Figure 4, the shock to foreign inflation explains a higher fraction of the changes in the nominal depreciation rate, and so it has a larger weight in the unconditional ERPT measures. This can be appreciated by noticing that the unconditional ERPTs of each price are closer to those of that shock than to those of the other shocks. As discussed in the introduction, much information is lost when using the unconditional ERPT measures to predict the effect on prices after a given shock. Only if "the given shock" is a specific combination of the three shocks of the model will the predicted movement in prices using the unconditional ERPTs be correct; in all other cases, it will be incorrect. How relevant this bias is will depend on which price is being predicted and which shock or shocks hit the economy. In this simple model, the mistakes from using the unconditional measures are less of a problem for tradables in the first quarters, since all the conditional ERPTs are relatively high; this is in part due to the assumption of complete pass-through to domestic tradable prices. In contrast, the unconditional measures are more misleading for non-tradables and consumer prices, particularly after a policy shock and at long horizons. In that specific example, one would use an ERPT of around 0.05 and 0.16 for non-tradables and consumer prices, respectively, while the actual values are around 0.9 and 0.95. Overall, even in this simple model, the differences between conditional and unconditional ERPT measures cannot be taken for granted.
Importance of Expected Monetary Policy for ERPTs
This subsection shows the importance of taking expected monetary policy into account when discussing ERPTs. As a first exercise, we change the autocorrelation of the policy shock, implying different policy paths relative to the baseline. The second exercise is closer to a real-world alternative: it compares the conditional ERPTs to foreign shocks and the unconditional ERPTs in the baseline model with cases in which the policy rate, instead of following the rule, is held fixed for a number of periods, starting at the same time the shock hits the economy. Figure 6 presents the conditional ERPTs to the monetary policy shock in the baseline calibration, as well as the alternatives in which the policy shock displays an autocorrelation of either 0.5 or 0.9. We can see that the ERPTs for non-tradables and total CPI change significantly with more persistent shocks. When the autocorrelation increases from 0 to 0.5, the ERPTs of P^N and P are not significantly affected in the very short run, but they decrease systematically starting from around the second quarter. This implies a slower convergence to 1 than in the baseline case. When the autocorrelation is further increased, the short-run ERPT increases, and then it also converges more slowly to 1, making the ERPT smaller than the baseline starting around the 3rd quarter.
The second exercise, shown in Figure 7, compares ERPTs under alternative policy paths. In the baseline, shown in blue, after each shock the policy rate follows the rule, as assumed in the impulse responses of the previous section. Alternatively, we assume that at the time of the shock the policy maker credibly announces that the policy rate will be kept fixed (at its steady-state value) for a given number of periods, returning to the Taylor rule afterwards. In the figure, the baseline is contrasted with alternatives in which the interest rate is fixed for 2 and 4 periods. A priori, the effects on ERPTs are not evident. On the one hand, fixing the rate following a nominal depreciation is more dovish, so inflation will likely be higher. On the other hand, a more dovish policy path induces a higher NER. Therefore, the effect on the ratio computed in the ERPT is unclear. Figure 7 shows that the effects of alternative policy paths are not monotone. When the interest rate is fixed for 2 periods, the conditional ERPTs are generally higher than when the interest rate follows the Taylor rule. In contrast, when the interest rate is fixed for 4 periods, the conditional ERPTs are lower not only than when the interest rate is fixed for 2 periods, but also than in the baseline. Moreover, alternative policy paths seem to affect the conditional ERPTs more after a foreign interest rate shock than after a shock to foreign inflation. As expected, the changes in unconditional ERPTs go in the same direction as the changes in conditional ERPTs.
Overall, we have shown that alternative policy paths can greatly influence ERPTs, both conditionally and unconditionally. Thus, it would be much more informative for policy makers to be presented with alternative ERPT measures, one for each choice of future policy path. The methodologies from the empirical literature cannot produce such an exercise. And while a DSGE model can be used to this end, as we mentioned in the introduction, no such analysis is yet available in the model-based literature.
Sensitivity of ERPTs to Different Parameters
ERPTs, like any other statistic, depend on the dynamics of the model and can change crucially with alternative parameter values. One of the parameters relevant for inflation dynamics in general, and for the ERPT in particular, is indexation to past inflation. The baseline version of the model assumes that the N sector, the only sector where prices are set locally, is indexed to the inflation target when prices are not chosen optimally. Here we show how the results change when the non-tradable sector instead indexes to its own inflation, π^N_{t-1}, or to total inflation, π_{t-1}.
When indexation is only to the target, the connection between non-tradable prices and the nominal exchange rate works only through a general equilibrium channel: for a given shock, the N market has to clear, and so prices move. If we add indexation to own-sector inflation when prices are not set optimally, there is an amplification mechanism at work for the same general equilibrium effect. This is because, after a given shock and for the same change in the nominal exchange rate, the change in non-tradable inflation is amplified by indexation. This can be seen in the dashed black lines in Figure 8. Compared to the baseline case, this model shows higher ERPTs in general, with the same general evolution for foreign shocks and an overreaction for the monetary shock. When firms in the N sector are indexed to total inflation, there is a significant change in price dynamics, making non-tradable inflation follow changes in total inflation with a lag. Because of this, in addition to the general equilibrium effect, changes in the exchange rate have a direct impact on non-tradable inflation, since the indexation of the N sector is now directly affected by the depreciation of the NER. As the ERPTs of tradable prices are generally very high, this change in the model brings a significant increase in the ERPTs of non-tradable prices, as well as for the CPI. This is true for both conditional and unconditional ERPTs, and it is particularly important for the ERPTs conditional on foreign shocks. There are other model features that can have a direct impact on ERPTs. Some of these are introduced in the quantitative model of the next section, such as imported inputs in the production of local goods, price rigidities in the imported sector, importable goods in investment, and nominal rigidities and indexation in wages, among others.
The Quantitative DSGE Model
As we have argued, the shortcomings of the empirical approach to ERPTs are of a quantitative nature, and therefore we need a model that satisfactorily matches the dynamics observed in the data. To that end, in this section we reproduce the analysis of the simple model using a DSGE model estimated for Chile. Given that the model is relatively large, here we present an overview, leaving the full description, the equilibrium conditions, the parametrization strategy and the goodness-of-fit analysis to Appendix D. We then analyze the main driving forces behind exchange rate fluctuations in the model and provide intuition on how these shocks propagate through the economy. The comparison between conditional and unconditional ERPTs is performed next, and we finish by analyzing how alternative policy paths influence ERPTs.
Model Overview
Our setup is one of a small open economy with both nominal and real rigidities, and incomplete international financial markets. There are three goods produced domestically: commodities (Co), non-tradables (N), and an exportable good (X). The first is assumed to be an exogenous endowment that is fully exported, while the other two are produced by combining labor, capital, imported goods (M, which are sold domestically through import agents) and energy (E). Consumption (both private and public) and investment goods are a combination of N, X and M goods. The model features exogenous long-run growth under a balanced growth path, although we allow for sector-specific trends in the short run. Households derive utility from consumption and leisure, borrow in both domestic- and foreign-currency-denominated bonds, and have monopoly power in supplying labor. Moreover, we assume imperfect labor mobility across sectors. Households' utility exhibits habits in consumption, and investment is subject to convex adjustment costs.
Firms in the X, N and M sectors are assumed to have price-setting power through a monopolistic-competition setup. The problem of choosing prices, as well as that of setting wages, is subject to Calvo-style frictions, with indexation to past inflation. As discussed above, the possibility of indexation to aggregate inflation is relevant for determining the ERPT to different goods, particularly non-tradables. Accordingly, we allow indexation to past CPI inflation, own-sector inflation, and the target, estimating the parameters that govern the relative importance of each of these indexations.
Monetary policy sets the interest rate on domestic bonds, following a Taylor-type rule that responds to the past policy rate (smoothing), deviations of CPI and core inflation from the target, and the growth rate of GDP relative to its long-run trend. Fiscal policy is assumed to finance an exogenous stream of consumption using lump-sum taxes and the proceeds from the ownership of part of the commodity production. The final relevant agent is the rest of the world, where international prices and interest rates are set exogenously, following the small-open-economy assumption.
The model features 24 shocks, both of domestic and foreign origin. These are:

• Domestic (15): consumption preferences, labor supply (X and N), stationary productivity (X and N), long-run trend, desired markups (M, X and N), endowment of commodities, relative prices of food and energy, efficiency of investment, government consumption, and monetary policy.
• Foreign (9): world interest rate (risk free), foreign premium (2 shocks, described later), international prices of commodities, imported goods and the CPI of trading partners (4 shocks, described later), demand for exports of X, and GDP of trading partners.
All these variables are assumed to be AR(1) processes, with the exception of international prices which we describe below.
The parameter values are chosen by a combination of calibration and Bayesian estimation. We use data for Chile at a quarterly frequency, from 2001.Q3 to 2016.Q3. The data include aggregate variables for activity, inflation, interest rates and the exchange rate, as well as sectoral series for activity, prices and wages. The dataset also includes international variables such as interest rates, prices and the GDP of trading partners. In the appendix we include a complete description of the model and the parametrization strategy, and we show that the estimated model can satisfactorily match second moments for the relevant observables in the data. All the results presented in the following subsections use the posterior mode as the parameter values.
Main Drivers of the NER and Implied Dynamics
As we discussed before, the analysis of the ERPT requires first identifying the main shocks driving the movements in the NER. While the model features a large number of shocks, the estimation indicates that five shocks can explain almost 95% of the variance of the nominal depreciation. Of these five, four are related to the uncovered interest rate parity condition in the model (described later): the world interest rate (R^W), two types of risk premia (country premium, C.P., and deviations from UIP), and monetary policy (M.P.). The other is a common trend in international prices denominated in dollars (ΔF*), which we describe in more detail below. In what follows, we first show the relative importance of each of these by means of a variance-decomposition exercise, and then provide intuition for their propagation mechanisms.
of each of these by means of a variance-decomposition exercise, and then provide intuition for their propagation mechanism. Table 2 shows the contribution of these five shocks to account for the unconditional variance of the NER depreciation (π S ). In addition, we show the contribution of these shocks in the variance decomposition for alternative inflation measures, the policy rate and the real exchange rate. Note: Each entry shows the % of the unconditional variance of the variable in each row, explained by the shock in each column, computed at the posterior mode. The shocks correspond to monetary policy (M.P.), world interest rate (R W ), country premium (C.P.), deviations from UIP (U IP ) and the trend in international prices (∆F * ). The variables are: nominal depreciation (π S ), total, tradable, imported and non-tradable inflation (respectively, π, π T , π M and π N ), the policy rate (R) and the real exchange rate (rer).
As can be seen, the shock that contributes most to NER fluctuations is the trend in international prices (∆F*), explaining almost 70% of its variance. The risk shock that emerges as deviations from the interest parity (UIP), as well as the world interest rate (R^W), also explain a non-trivial part of the volatility of π^S. Together, the three account for almost 90% of the variance of the NER. These shocks also play a non-trivial role in accounting for inflation variability, explaining around 50% of tradable inflation, almost 30% of non-tradable, and 30% of total CPI, as well as a non-trivial fraction of the variance of R and rer. Thus, while clearly not the only relevant factors, the determinants of the NER are important for inflation fluctuations as well.
A relevant distinction is that, while the shock to the trend in international prices is the most relevant for the NER, its relative contribution to inflation is smaller. This is because the flexible exchange rate acts as a buffer against nominal external shocks, insulating domestic variables, to a large extent, from their influence. This distinction will be crucial for the conditional vs. unconditional ERPT analysis below.
Next, we discuss how these shocks enter the model and the dynamics they generate. The model features three international prices denominated in dollars: commodities (P^{Co*}_t), imported goods (P^{M*}_t), and the CPI of commercial partners (P^*_t). These prices need to cointegrate because relative prices are stationary in the model. Specifically, we assume the following structure for these prices:

$$\log P^j_t = \Gamma_j \left( \log P^j_{t-1} + \log \pi^* \right) + (1-\Gamma_j)\log F^*_t + u^j_t, \qquad (8)$$

for j = {Co*, M*, *}, where F^*_t is a common stochastic trend, the price-specific components u^j_t follow exogenous processes, and the ǫ^j_t driving them are i.i.d. exogenous shocks. Under this specification, each price is driven by two factors: a common trend (F^*_t) and a price-specific shock (u^j_t). The parameter Γ_j determines how slowly changes in the trend affect each price. The presence of a common trend generates cointegration among prices (as long as Γ_j < 1), and the fact that the coefficients in (8) add up to one forces relative prices to remain constant in the long run (the usual assumption for these prices in DSGE models with nominal rigidities is obtained as a restricted version of this setup, explained in Appendix D). While in principle both the trend and the price-specific shocks can affect all variables in the model, according to the estimation only the trend is quantitatively relevant to explain fluctuations in the NER.
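As a quick numerical illustration of why Γ_j < 1 delivers cointegration, the sketch below simulates the price block in (8) under illustrative (not estimated) parameter values; the log relative price log P^j_t − log F*_t then behaves as a stationary AR(1), while the price levels inherit the common stochastic trend:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 2000
log_pi_star = 0.005                 # illustrative long-run world inflation (quarterly)
Gamma = {"Co*": 0.6, "M*": 0.8}     # illustrative adjustment parameters, Gamma_j < 1
sigma_F, sigma_u = 0.01, 0.01       # illustrative shock volatilities

# Common trend: random walk with drift in logs
logF = np.zeros(T)
for t in range(1, T):
    logF[t] = logF[t - 1] + log_pi_star + sigma_F * rng.standard_normal()

# Sector prices following equation (8)
for j, g in Gamma.items():
    logP = np.zeros(T)
    for t in range(1, T):
        logP[t] = (g * (logP[t - 1] + log_pi_star)
                   + (1 - g) * logF[t]
                   + sigma_u * rng.standard_normal())
    rel = logP - logF               # log relative price
    print(f"{j}: std of relative price = {rel.std():.3f} (stationary), "
          f"std of price level = {logP.std():.2f} (trending)")
```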
This specification for international prices is more complex than in the simple model of the previous section; however, ∆F* qualitatively resembles the shock to inflation of traded goods (π*). Thus, the intuition behind the effect of shocks to ∆F* is similar to that of π* in the simple model. Figure 9 shows impulse responses to a shock to ∆F*. After a negative shock to the international trend in prices, aggregate demand falls. As the market for non-tradable goods has to clear domestically, the shock generates a fall in the relative price of non-tradables, a real exchange rate depreciation, a drop in the production of N, an increase in the output of X, and an overall fall in GDP. Moreover, given the real depreciation and the presence of price rigidities, the nominal exchange rate depreciates as well.
To explain the dynamics of inflation, first note that, without indexation, the required fall in the relative price of non-tradables would lead to an increase in the price of tradables (due to the nominal depreciation) and a drop in the price of non-tradables, which can actually be observed in the very short run. But with indexation to aggregate inflation (in both wages and prices), inflation of non-tradables starts to rise after a few periods (the fraction of prices and wages in the N sector indexed to aggregate inflation per period is around 18% and 11%, respectively; one can numerically show that if these were set to zero, the response of π^N would be negative over the relevant horizon). Therefore, the indexation channel significantly affects the dynamics of inflation (and the ERPT) in the non-tradable sector. Finally, given the monetary policy rule, the domestic interest rate increases to smooth the increase in inflation.
The other shocks are associated with the uncovered interest rate parity, which up to first order can be written as

$$\hat{R}_t = \hat{R}^W_t + E_t \hat{\pi}^S_{t+1} + \phi_b \hat{d}^*_t + \hat{\xi}^{R1}_t + \hat{\xi}^{R2}_t,$$

where a hat denotes log-deviations relative to the steady state. Here R̂_t is the domestic rate, R̂^W_t is the foreign risk-free interest rate, E_t π̂^S_{t+1} is the expected nominal depreciation, and φ_b d̂^*_t is a premium elastic to foreign debt, d̂^*_t, which acts as the closing device. Additionally, there are two risk-premium shocks, ξ̂^{R1}_t and ξ̂^{R2}_t. They differ in that the first is matched with a measure of the country premium in the data (the JP Morgan EMBI index for Chile; specifically, the EMBI index is matched with φ_b d̂^*_t + ξ̂^{R1}_t), while the second is unobservable and accounts for all other sources of risk that explain deviations from the EMBI-adjusted interest rate parity. In the tables and figures, ξ̂^{R1}_t is labeled C.P. and ξ̂^{R2}_t is called UIP.

Figure 10 shows the responses to a positive realization of the UIP shock, which is qualitatively analogous to the influence of a world-interest-rate shock in the simple model (the responses to the R^W and C.P. shocks in the quantitative model are similar to those originated by a UIP shock, and are thus omitted to save space). This shock increases the cost of foreign borrowing, which triggers both income and substitution effects, leading to a contraction in aggregate demand. This leads to both real and nominal depreciations, and a reduction in all measures of activity, except for production in X, which is favored by the reallocation of resources from the N sector. All measures of inflation increase, and the role of indexation in explaining π^N is similar to what we described before. Accordingly, the policy rate rises after this shock.

We conclude by recalling that, as discussed before, even though both shocks have an impact through aggregate demand, the shock to ∆F* also has a direct impact on inflation that dampens the effect generated by NER changes. In this more complex model, this happens through two different channels. First, a drop in international prices puts downward pressure on the domestic price of imports. Second, given the presence of imported inputs in the production of both X and N, a reduction in world prices will, ceteris paribus, reduce marginal costs in these sectors, dampening also the response of X and N inflation. Thus, as in the simple model, shocks to international prices are expected to have lower conditional ERPTs than shocks to the interest rate parity condition.
Conditional vs. Unconditional ERPTs
We begin by computing the conditional ERPTs associated with the three main shocks behind fluctuations in the NER. We present the results for aggregate CPI (P), tradables (P^T), imported (P^M) and non-tradables (P^N), the last three excluding food and energy. In line with the previous discussion, and as can be seen in Figure 11, the conditional ERPTs generated by ∆F* are significantly different from those implied by shocks to the UIP and to the world interest rate R^W. For a horizon of 2 years, the conditional ERPT given a shock to international prices is less than 0.1 for total CPI, smaller than 0.05 for non-tradables, and close to 0.15 for both traded and imported goods. In sharp contrast, for the same horizon, the conditional ERPTs to the UIP shocks are much higher for all prices: close to 0.5 for CPI, larger than 0.8 for tradables and importables, and near 0.2 for non-tradables. For the world-interest-rate shock the conditional ERPTs are somewhat smaller, but still larger than those obtained after a shock to the trend in international prices.

Figure 12 displays both measures of unconditional ERPTs we introduced in Section 2: panel A shows the weighted average of conditional ERPTs, while panel B displays the measure obtained using the population VAR approach (the VAR uses the nominal depreciation and inflation for CPI (π), tradables (π^T), importables (π^M) and non-tradables (π^N), the series used in the empirical literature; the ERPT is computed using the shock for π^S in the Cholesky decomposition, and we ran a VAR(2) based on the BIC criterion). In line with our previous analysis, both measures of unconditional ERPT lie between the conditional measures reported before. Moreover, the empirical VAR literature using Chilean data estimates an ERPT close to 0.2 for total CPI after two years, with a similar value for tradables and close to 0.05 for non-tradables. These are close to the measures of unconditional ERPTs we report here.

Overall, the evidence presented in this section confirms the intuition developed with the simple model: conditional ERPTs are quite different from those obtained from aggregate ERPT measures comparable to those in the literature. Thus, using the results from the empirical literature will almost surely lead to biases in the estimated dynamics of inflation following movements in the NER. In turn, the analysis can be greatly improved by an assessment of which shocks are behind the particular NER change, and the use of conditional ERPT measures.
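Operationally, a conditional ERPT at horizon h is just a ratio of cumulative impulse responses, and the first unconditional measure is their variance-weighted average. A minimal sketch (the IRF arrays and weights below are placeholders; in practice they come from the estimated model):

```python
import numpy as np

def conditional_erpt(irf_price, irf_ner):
    """Cumulative price response over cumulative NER response,
    horizon by horizon, conditional on one structural shock."""
    return np.cumsum(irf_price) / np.cumsum(irf_ner)

def unconditional_erpt(cerpts, weights):
    """Weighted average of conditional ERPTs; weights[s, h] is the share
    of the NER forecast-error variance due to shock s at horizon h."""
    return (weights * cerpts).sum(axis=0)

# Placeholder IRFs for two shocks (columns: horizons), illustrative only
irf_pi  = np.array([[0.010, 0.008, 0.005],
                    [0.002, 0.001, 0.001]])
irf_ner = np.array([[0.050, 0.020, 0.010],
                    [0.040, 0.010, 0.005]])
cerpts = np.vstack([conditional_erpt(p, s) for p, s in zip(irf_pi, irf_ner)])
w = np.full_like(cerpts, 0.5)       # equal variance shares, illustrative only
print(cerpts)
print(unconditional_erpt(cerpts, w))
```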
ERPT and Expected Monetary Policy
Our second concern regarding the use of the ERPT obtained from the empirical literature is that it could mistakenly lead to the belief that actual and future monetary policy has little to say about the behavior of both the NER and prices. Conceptually, this discussion is independent from the potential differences between conditional and unconditional ERPTs, although we will see that, quantitatively, the source of the shock also matters for this discussion.
The starting point is to notice that, as discussed in Section 4.2, in the benchmark model the monetary policy rate increases (and is expected to remain high) in response to the main shocks that depreciate the currency. We compare the benchmark ERPTs, obtained assuming the policy rate follows the estimated rule, with alternative scenarios that deviate temporarily. In particular, as we did with the simple model, it is assumed that, when the shock hits the economy, the central bank announces that it will maintain the interest rate at its pre-shock level for T periods, and return to the estimated rule afterwards. Figure 13 shows how selected impulse-response functions change with these policy alternatives, for the main shocks that drive the NER. As in the simple model, the reaction of the ERPTs is not ex-ante evident, since the figure shows that a more dovish policy increases both inflation and the NER.
As shown in Figure 14, when the shock to the trend in international prices hits the economy, conditional ERPTs vary significantly depending on the reaction of monetary policy. For instance, after two years, the ERPT to total CPI almost doubles if the policy rate remains fixed for a year, and the difference is even larger for non-tradables. At the same time, conditional on shocks to either the UIP or the world interest rate, the ERPT measures do not seem to vary significantly as monetary policy changes, except for non-tradables, where we can see some differences.
In Figure 15 we compute the unconditional ERPT using the weighted-average measure as in (7). As can be seen, influenced mainly by the behavior of the ERPT after the shock to international prices, the unconditional ERPT also increases with a more dovish policy. This comparison provides yet another reason to properly account for the source of the shock and to compute conditional ERPTs, as the relevance of alternative policy paths will depend on the shock.
In sum, this analysis highlights that, in thinking about how monetary policy should react to shocks that depreciate the currency, a menu of policy options and their associated conditional ERPTs should be analyzed. For some shocks, monetary policy has a significant role in determining the final outcome of both inflation and the NER. As we have argued, this kind of analysis cannot be performed using the tools and results from the empirical literature, and the related literature using DSGE models has not analyzed the role of alternative policy paths for the ERPT.
Conclusions
This paper was motivated by the widespread use of ERPT measures generated by empirical, reduced-form methodologies for monetary-policy analysis. We highlighted two potential problems: the dependence of ERPTs on the shock hitting the economy (separating conditional and unconditional ERPTs), and the influence of alternative expected paths of monetary policy. We first established the relationship between ERPT measures used in the empirical literature and related objects obtained from general equilibrium models. We then used a simple model to understand conceptually how the two shortcomings that we highlight arise in any model. Finally, to assess the quantitative importance of making these distinctions, we used a DSGE model estimated with Chilean data. We found that these distinctions are indeed relevant, and that a policy maker using the results from the empirical literature alone is probably basing her decisions on inappropriate tools.

Another way to frame this discussion in a more general context is the following. From the point of view of general equilibrium models, one can define alternative measures of what "optimal" policy means and then fully characterize how monetary policy should respond to particular shocks hitting the economy in order to achieve the optimality criteria. In that discussion, structural parameters, the role of expectation formation, and the nature of alternative driving forces, among other details, will be relevant to determine the path that monetary policy should follow. However, as the empirical measure of the ERPT computed in the literature is, in one way or another, a conditional correlation and not a structural characteristic of the economy, all the relevant aspects of optimal monetary policy can be described without using the concept of ERPT at all. Thus, while the results of the empirical literature can be useful for other discussions in international macroeconomics, their relevance for monetary policy analysis is more limited.
Finally, it is our perception that the role of expected policy in determining the ERPT has not been properly considered in actual policy making. To a large extent, the realized ERPT after a given NER movement can be influenced by monetary policy. However, the widespread use of empirical measures of ERPT for policy analysis, which completely omits this issue, indicates that this is not the way policy makers think about the ERPT. A fruitful avenue for future research could thus be to study particular episodes of large depreciations, to estimate the extent to which the expected path of policy perceived at the time of the NER movement influenced the dynamics of inflation that followed.
A.1 Conditions for an Exact Relationship
The linearized solution of a DSGE model takes the form

$$s_t = A s_{t-1} + B e_t, \qquad (A.1)$$
$$c_t = C s_{t-1} + D e_t, \qquad (A.2)$$

where s_t is an n × 1 vector of predetermined variables, both endogenous and exogenous, c_t is an r × 1 vector of non-predetermined variables, e_t is an m × 1 vector of i.i.d. exogenous shocks (with E(e_t) = 0, E(e_t e_t') = I, and E(e_t e_j') = 0 for t ≠ j), while A, B, C and D are conformable matrices. The solution in (5) can be obtained by defining y_t ≡ [c_t', s_t']', so that y_t = F y_{t-1} + G e_t with

$$F \equiv \begin{bmatrix} 0 & C \\ 0 & A \end{bmatrix}, \qquad G \equiv \begin{bmatrix} D \\ B \end{bmatrix}.$$

Let x_t be a k × 1 vector collecting variables from either s_t or c_t, such that x_t = S[c_t' s_t']' = S y_t for an appropriate selection matrix S. From (A.1) and (A.2), x_t = S F y_{t-1} + S G e_t. If k = m (i.e. the same number of variables in x_t as shocks in the model), under certain conditions stated in Ravenna (2007) a finite VAR representation for the vector x_t exists and takes the form

$$x_t = \sum_{j=1}^{p} \Phi_j x_{t-j} + \bar{B} e_t, \qquad \bar{B} \equiv S G.$$

As long as the solution of the DSGE model is stationary, we can always find the MA(∞) representation of the vector x_t. Under the assumptions in Ravenna (2007), we can write it as

$$x_t = \sum_{j=0}^{\infty} F_j \bar{B}\, e_{t-j},$$

with F_0 = I and F_j = S F^j G \bar{B}^{-1} for j ≥ 1. Using this representation, the cumulative response of the variable in position k of vector x_t, h periods after a shock in position i of vector e_t, is given by [F(h)B̄]_{k,i}, where F(h) ≡ Σ_{j=0}^{h} F_j and the notation X_{ij} indicates the element in the i-th row, j-th column of matrix X. Thus, the conditional ERPT after a shock i, for variable k, h periods ahead, is given by

$$CERPT^M_{k,i}(h) = \frac{\left[F(h)\bar{B}\right]_{k,i}}{\left[F(h)\bar{B}\right]_{1,i}},$$

i.e. the ratio of the cumulative response of variable k to the cumulative response of the nominal depreciation (π^S_t, ordered first in x_t), after shock i.
At the same time, if the model (A.1)-(A.2) is the true data generating process, someone using the approach in the VAR-based literature will first estimate a reduced-form VAR given by

$$x_t = \sum_{j=1}^{p} \Theta_j x_{t-j} + u_t.$$

Clearly, if a finite VAR representation of the DSGE model exists and the lag length is chosen properly, we have Θ_j = Φ_j and Ω ≡ E(u_t u_t') = B̄B̄'. The MA(∞) representation of this reduced form is

$$x_t = \sum_{j=0}^{\infty} F_j u_{t-j},$$

with the same matrices F_j as before (since u_t = B̄ e_t). The Cholesky decomposition of Ω is a matrix P satisfying Ω = PP'. The cumulative IRF of variable k after a shock corresponding to the nominal depreciation equation is given by [F(h)P]_{k,1} (A.9), and the ERPT for variable k, h periods ahead, is computed as

$$ERPT^V_k(h) = \frac{\left[F(h)P\right]_{k,1}}{\left[F(h)P\right]_{1,1}},$$

i.e. the ratio of the cumulative response of variable k to the cumulative response of the nominal depreciation, after a shock in the equation of the nominal depreciation.
To study the relationship between ERPT^V_k(h) and CERPT^M_{k,i}(h), assume that the nominal depreciation (π^S_t) is ordered first in the vector x_t. Then, we can write the conditional ERPT as

$$CERPT^M_{k,s}(h) = \frac{\left[F(h)\bar{B}\right]_{k,s}}{\left[F(h)\bar{B}\right]_{1,s}}.$$

By the same token, the ERPT from the VAR is

$$ERPT^V_k(h) = \frac{\left[F(h)P\right]_{k,1}}{\left[F(h)P\right]_{1,1}}.$$

In addition, by the properties of the Cholesky decomposition, we have P_{11} = (Ω_{11})^{1/2} and P_{j1} = Ω_{j1}(Ω_{11})^{-1/2} for j = 2, ..., m. Thus, the ERPT from the VAR can be written in terms of the elements of Ω. Moreover, as Ω = B̄B̄', we have

$$\Omega_{ji} = \sum_{s=1}^{m} \bar{B}_{js}\bar{B}_{is}.$$

Combining these expressions, the VAR-based ERPT is a weighted average of the conditional ERPTs,

$$ERPT^V_k(h) = \sum_{s=1}^{m} \omega_s(h)\, CERPT^M_{k,s}(h).$$

To grasp some intuition on the weight ω_s(h), notice that at h = 0,

$$\omega_s(0) = \frac{\bar{B}_{1s}^2}{\Omega_{11}},$$

i.e. the fraction of the one-step-ahead forecast-error variance of the nominal exchange rate that is due to shock s. In other words, the weight of the conditional ERPT given shock s depends on how much of the fluctuations in the nominal exchange rate is explained by this shock. For h ≥ 1, the forecast-error variance is adjusted by the ratio of the response of the NER at period h relative to that at h = 0.
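The h = 0 case of this decomposition is easy to verify numerically. The sketch below draws an arbitrary impact matrix B̄, computes the Cholesky-based ERPT on impact, and checks that it coincides with the variance-weighted average of the shock-specific impact ERPTs (the numbers are random draws; the point is only the algebra above):

```python
import numpy as np

rng = np.random.default_rng(1)
m = 3                                   # shocks = variables; NER ordered first
B = rng.normal(size=(m, m))             # impact matrix B-bar (arbitrary draw)
Omega = B @ B.T                         # reduced-form innovation covariance
P = np.linalg.cholesky(Omega)

k = 1                                   # some price variable (second in x_t)
erpt_var = P[k, 0] / P[0, 0]            # Cholesky ERPT on impact

cerpt = B[k, :] / B[0, :]               # shock-specific impact ERPTs
omega = B[0, :] ** 2 / Omega[0, 0]      # NER forecast-error-variance shares
print(erpt_var, (omega * cerpt).sum())  # the two numbers coincide
```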
A.2 ERPT from the Population VAR
From the linearized solution of the DSGE model (5), provided stationarity, the variance-covariance matrix Σ_0 ≡ E(y_t y_t') satisfies

$$\Sigma_0 = F \Sigma_0 F' + G G',$$

which can be easily computed. In addition, the matrix containing the auto-covariances of order p is Σ_p ≡ E(y_t y_{t-p}') = F^p Σ_0 for p > 0. Finally, we are interested in a subset x_t of n variables from y_t that will be included in the VAR model, defined as x_t ≡ S y_t for an appropriate choice of S. In that case, we have

$$\Gamma_p \equiv E(x_t x_{t-p}') = S \Sigma_p S'$$

for p ≥ 0. The structural VAR(p) model for the vector x_t in (2)-(3) can be written in more compact form by defining the vector X_t ≡ [x_{t-1}', ..., x_{t-p}']', so that x_t = Φ X_t + u_t with Φ ≡ [Φ_1, ..., Φ_p]. Using (A.13), the IRF of the variable in position j of vector x_t to the shock associated with the variable in position i of the same vector, h periods after the shock, is given by the {j, i} element of the matrix Φ̃_h P, where the Φ̃_h are the MA coefficient matrices implied by the VAR. The cumulative IRF is the element {j, i} of the matrix Σ_{s=0}^{h} Φ̃_s P.
An econometrician would proceed by choosing a lag order p for the VAR and estimating (A.12) by OLS. If she had an infinite sample available, she could estimate (A.12) using population OLS, i.e. choosing Φ̂ to minimize

$$E\left[(x_t - \hat{\Phi} X_t)'(x_t - \hat{\Phi} X_t)\right].$$

This is equivalent to Φ̂ satisfying the first-order condition

$$\hat{\Phi} = E(x_t X_t')\left[E(X_t X_t')\right]^{-1}.$$

Similarly,

$$\hat{\Omega} = E(u_t u_t') = E(x_t x_t') - \hat{\Phi} E(X_t x_t').$$

In most applied cases, with finite samples, econometricians estimate the parameters of the VAR and use asymptotic theory to derive probability limits and limiting distributions to perform inference, such as hypothesis testing or computing confidence bands. The case we want to analyze here is different, as we assume the DSGE model is the true data-generating process, and we wish to compute the model that an econometrician would estimate with an infinite (population) sample. This is equivalent to computing Φ̂ and Ω̂ in (A.14)-(A.15) using the population moments from the DSGE. Given x_t = S y_t, and recalling the definition of X_t, we have

$$E(x_t X_t') = [\Gamma_1, \Gamma_2, ..., \Gamma_p], \qquad E(X_t X_t') = \begin{bmatrix} \Gamma_0 & \Gamma_1 & \cdots & \Gamma_{p-1} \\ \Gamma_1' & \Gamma_0 & \cdots & \Gamma_{p-2} \\ \vdots & \vdots & \ddots & \vdots \\ \Gamma_{p-1}' & \Gamma_{p-2}' & \cdots & \Gamma_0 \end{bmatrix},$$

which are all the elements required to compute Φ̂ and Ω̂.

A final comment relates to the usual practice in the VAR literature. In most papers the vector x_t contains foreign variables. If the small-open-economy assumption is used, it is generally assumed that the matrices Φ_j for j = 1, ..., p are block lower triangular: i.e. lags of domestic variables cannot affect foreign variables. In practice, this constraint is implemented by estimating the matrices Φ_j by FGLS or FIML, applying the required restrictions. Here, however, if the DSGE model assumes that foreign variables cannot be affected by domestic variables, the auto-covariance matrices Σ_j will have zeros in the appropriate places, so that Φ̂ will display the same zero constraints that an econometrician would impose.
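The population objects above can be computed directly from the state-space solution, for instance with a discrete Lyapunov solver. A sketch under illustrative matrices (here the true model is a stable VAR(1), so the population VAR(2) should recover Φ_1 = F, Φ_2 = 0 and Ω̂ = GG'):

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

# Illustrative stable state-space solution: y_t = F y_{t-1} + G e_t
F = np.array([[0.9, 0.1],
              [0.0, 0.5]])
G = np.eye(2)
S = np.eye(2)                # selection matrix: x_t = S y_t (all variables)
p = 2                        # VAR lag order chosen by the econometrician

Sigma0 = solve_discrete_lyapunov(F, G @ G.T)   # solves Sigma0 = F Sigma0 F' + GG'
Sigma = [Sigma0] + [np.linalg.matrix_power(F, j) @ Sigma0 for j in range(1, p + 1)]
Gam = [S @ s @ S.T for s in Sigma]             # autocovariances of x_t

# Population moments E(x_t X_t') and E(X_t X_t') (block-Toeplitz in Gam)
ExX = np.hstack(Gam[1:p + 1])
EXX = np.block([[Gam[j - i] if j >= i else Gam[i - j].T
                 for j in range(p)] for i in range(p)])

Phi_hat = ExX @ np.linalg.inv(EXX)             # population OLS coefficients
Omega_hat = Gam[0] - Phi_hat @ ExX.T           # population innovation covariance
print(Phi_hat)                                 # approx. [F, 0]
print(Omega_hat)                               # approx. GG' = I
```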
B.1.1 Household
From the decisions over final consumption, labor and bonds, and defining λ_t as the multiplier on the budget constraint, we obtain the first-order conditions for consumption, sectoral labor supply and bond holdings. In addition, the optimality conditions for the choice between tradable and non-tradable consumption deliver the corresponding relative demands.
B.1.2 Firms in N Sector
The aggregation across firms creates a price-dispersion variable ∆^N_t in this case, and the problem solved by firms when choosing prices is the standard Calvo program given their marginal cost. All markets clear, corresponding to the local bonds market and the goods market. The ∆^N_t variable is a measure of price dispersion in N, defined as

$$\Delta^N_t = \int_0^1 \left( \frac{P^N_t(i)}{P^N_t} \right)^{-\epsilon_N} di.$$

The rest of the equations correspond to the policy and foreign equations described in the text and to equations governing the evolution of the price indexes. In addition, we have the economy's resource constraint, together with the equations for the exogenous processes described in the text.
B.3 Steady state
C Additional IRFs

D.1 Baseline Model

D.1.1 Households

There is a representative household that consumes, works, saves, invests and rents capital to the producing sectors. Her goal is to maximize expected discounted lifetime utility, where C_t is consumption and h^J_t for J = {X, N} are hours worked in sector J. C̄_t denotes aggregate consumption (i.e. the utility exhibits external habits), and κ_t ≡ (C_t − φ_C C̄_{t−1})^{−σ}. There are three preference shocks: ξ^β_t and ξ^{h,J}_t for J = {X, N}.
D.1.2 Consumption Goods
Consumption C_t is composed of three elements: core consumption (C^{NFE}_t), food (C^F_t) and energy (C^E_t). For simplicity, food and energy consumption are assumed exogenous and normalized to one (so total and core consumption are equal). In contrast, the price of the consumption good is a composite of the price of core consumption (P^{NFE}_t), the price of food (P^F_t) and the price of energy (P^E_t). We further assume that the prices of both F and E relative to that of the tradable composite (T, defined below) follow exogenous processes (p^F_t and p^E_t, respectively). Core consumption is a composite of non-tradable consumption C^N_t and tradable consumption C^T_t, while the latter is composed of exportable C^X_t and importable C^M_t goods, with ϱ the elasticity of substitution between non-tradables and tradables. Exportable, importable and non-tradable consumption are in turn made of a continuum of differentiated goods in each sector, combined by an aggregator G, which we assume features a constant elasticity of substitution ε_J > 1 for J = {X, M, N}. Moreover, it is assumed that the aggregator is subject to exogenous disturbances (ξ^J_t), generating markup-style shocks in the pricing decisions of firms, as in Smets and Wouters (2007).
D.1.3 Capital and Investment Goods
The evolution of the capital stock in sector J is given by

$$K^J_{t+1} = (1-\delta) K^J_t + u_t \left[ 1 - \Gamma\!\left( \frac{I^J_t}{I^J_{t-1}} \right) \right] I^J_t.$$

It is assumed that installed capital is sector-specific, that there are adjustment costs to capital accumulation with Γ'(·) > 0 and Γ''(·) > 0, and that there is a shock u_t to the marginal efficiency of investment. The parameter δ ∈ (0, 1) is the depreciation rate. Households choose how much to invest in each type of capital, which constitutes the demand for investment. The supply of investment is provided by competitive firms with a technology similar to the consumption aggregator of households, but with different weights, γ_I and γ_{TI}, and elasticity of substitution ϱ_I. Similar to consumption, each investment good Ĩ^J_t for J = {X, M, N} is a continuum of the differentiated goods in each sector with the same aggregator G.
D.1.4 Firms
There are three sectors in addition to commodities (assumed to be an endowment): exportable, importable and non-tradable. Firms in the importable sector buy a homogeneous good from foreigners and differentiate it, creating varieties which are demanded by households and firms. Firms in the exportable and non-tradable sectors combine value added, created using labor and capital, with a composite of the varieties sold by the importable sector to produce their final product.
Each firm in each sector supplies a differentiated product, generating monopolistic power. Given their marginal cost, they set prices à la Calvo, with probability θ_J for J = {X, M, N} of not being able to choose their price optimally each period. When not chosen optimally, the price is updated according to

$$P^J_t(i) = P^J_{t-1}(i) \left[ \left( \pi^J_{t-1} \right)^{\varrho_J} \left( \pi_{t-1} \right)^{1-\varrho_J} \right]^{\zeta_J} \bar{\pi}^{\,1-\zeta_J},$$

with π^J_{t−1} being inflation of sector J in the previous period, and parameters {ϱ_J, ζ_J} ∈ [0, 1]. In this way, the indexation specification is flexible enough to accommodate both dynamic and static indexation, with a backward-looking feedback that can be related to either sector-specific or aggregate inflation; we let the data determine the preferred values of ϱ_J and ζ_J in each sector.
Sector M:
Each firm i in this sector produces a differentiated product from a homogeneous foreign input with the technology Y^M_t(i) = M_t(i). The price of the input is given by P_{m,t} = S_t P^{M*}_t, where P_{m,t} is the price of the imported good in local currency and P^{M*}_t is its exogenously given price in foreign currency.
Sector X and N:
All firms in both sectors share the same structure. Each firm i of sector J produces a differentiated product that is a combination of value added V^J_t(i) and a composite M^J_t(i) of importable varieties. The marginal cost of producing V^J_t(i) is the same for all firms, and P^{ME}_t is the price of a composite between a continuum of the importable goods sold by the M sector and energy,
with γ_{EM} ∈ [0, 1]. As in the case of the household with energy and food, M^J_t(i) can be interpreted either as only the continuum of importable goods or as the composite between energy and the importable goods, since firms take the quantity of energy as exogenous and it has been normalized to one.
Commodity:
The commodity is assumed to be an exogenous and stochastic endowment Y^{Co}_t, which has its own trend A^{Co}_t that cointegrates with the trends of the other sectors and follows an exogenous process. The endowment is exported at the international price P^{Co*}_t. It is assumed that a fraction ϑ of commodity production is owned by the government and the rest, (1 − ϑ), is owned by foreigners.
D.1.5 Fiscal and Monetary Policy
Fiscal policy introduces an exogenous expenditure that is completely spent on non-tradable goods. The government receives part of the profits of the commodity sector, can buy local bonds, B^G_t, and gives transfers to households, T_t; its budget constraint equates these uses and sources of funds. As with the household, government expenditure is the same composite of non-tradable varieties. We assume that g_t, government consumption expressed in detrended terms, follows an exogenous process.
Monetary policy follows a Taylor-type rule of the form described in the main text: the policy rate responds to its own lag, to deviations of CPI and core inflation from the target, and to GDP growth relative to its long-run trend.
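A functional form consistent with this description (the coefficient names below are ours for illustration; the estimated rule may differ in details, such as exactly which inflation measures enter) is:

$$\frac{R_t}{\bar{R}} = \left(\frac{R_{t-1}}{\bar{R}}\right)^{\rho_R} \left[ \left(\frac{\pi_t}{\bar{\pi}}\right)^{\alpha_\pi} \left(\frac{\pi^{NFE}_t}{\bar{\pi}}\right)^{\alpha_{NFE}} \left(\frac{\Delta y_t}{\bar{a}}\right)^{\alpha_y} \right]^{1-\rho_R} e^{\varepsilon^R_t},$$

where ρ_R captures interest-rate smoothing, π_t and π^{NFE}_t are CPI and core inflation, Δy_t is GDP growth, ā is the long-run trend growth rate, and ε^R_t is the i.i.d. monetary policy shock.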
D.1.6 Foreign Sector
The rest of the world sells the imported inputs at price P^{M*}_t, buys the commodity at price P^{Co*}_t and buys the exported products Y^X_t at the price set by local producers. For these last goods, the aggregator of the varieties is the same as for households, while the demand for the composite exportable is decreasing in its relative price and subject to an exogenous demand shifter. The foreign block thus features exogenous processes for the international prices of commodities (P^{Co*}_t), imported goods (P^{M*}_t) and the CPI of trade partners (P^*_t), the demand for exports of X (ξ^{X*}_t), and the GDP of trade partners (y^*_t). All these processes are assumed to be Gaussian in logs. Markup and monetary-policy shocks are i.i.d., while the rest, with the exception of international prices, are independent AR(1) processes.
As the model features a balanced growth path and preferences are such that relative prices are stationary, foreign prices must cointegrate, all growing at the same long-run rate. Defining inflation of the foreign CPI as π^*_t = P^*_t / P^*_{t−1}, with steady-state value π^*, we propose the following model for international prices:

$$\log P^j_t = \Gamma_j \left( \log P^j_{t-1} + \log \pi^* \right) + (1-\Gamma_j)\log F^*_t + u^j_t, \qquad j = \{Co^*, M^*, *\}, \qquad (D.2)$$

where the common trend F^*_t and the price-specific components u^j_t follow exogenous processes driven by i.i.d. shocks (equations (D.3)-(D.4)). Under this specification, each price is driven by two factors: a common trend (F^*_t) and a price-specific shock (u^j_t). The parameter Γ_j determines how slowly changes in the trend affect each price. The presence of a common trend generates cointegration among prices (as long as Γ_j < 1), and the fact that the coefficients on the trend and on the lagged price in (D.2) add up to one forces relative prices to remain constant in the long run. The usual assumption for these prices in DSGE models with nominal rigidities is obtained as a restricted version of this setup, imposing Γ_j = 0 for j = {Co*, M*} and σ²_* = 0. In other words, the relative prices of both commodities and imports are driven by stationary AR(1) processes, while the inflation of commercial partners is a stationary AR(1) process. The specification in (D.2)-(D.4) generalizes this usual assumption in several dimensions. First, in the usual setup the common trend of all prices is exactly equal to the CPI of commercial partners. This might lead to the wrong interpretation that inflation of commercial partners is a significant driver of domestic variables, while in reality this happens because it represents a common trend in all prices. Second, the usual specification imposes that every change in the common trend has a contemporaneous one-to-one impact on all prices, while in reality different prices may adjust to changes in this common trend at different speeds. Finally, for our specific sample the data favors the general specification (D.2)-(D.4) relative to the restricted model. Overall, the model features 24 exogenous disturbances, related to the 23 exogenous state variables previously listed plus the common trend in international prices.
D.2 Parametrization Strategy and Goodness of Fit
The values of the parameters in the model are assigned by a combination of calibration and estimation.
The resulting values are presented in tables D.2 to D.5. Parameters representing shares in the different aggregate baskets and production functions are calibrated using input-output tables for Chile. In addition, we target several steady-state ratios to sample averages of their observable counterparts. For parameters that are not properly identified in our data set, we rely on studies estimating DSGE models for Chile. Finally, the parameters characterizing the dynamics of some of the external driving forces are calibrated by estimating AR(1) processes.
The remainder of the parameters are estimated with a Bayesian approach, using the following series at quarterly frequency from 2001.Q3 to 2016.Q3: • Real growth rates of: GDP, GDP^X (agriculture, fishing, industry, utilities, transportation), GDP^N (construction, retail, services), GDP^Co (mining), private consumption (C), total investment (I), and government consumption (G).
• The ratio of nominal trade balance to GDP.
• The growth of nominal wages (π^{WX} and π^{WN}), measured as the cost per unit of labor (the CMO index), using sectors consistent with the GDP definitions.
• The nominal dollar exchange-rate depreciation (π S ) and the monetary policy rate (R).
• External: the world interest rate (R^W, LIBOR), the country premium (EMBI Chile), foreign inflation (π*, the inflation index for commercial partners, IPE), inflation of commodity prices (π^{Co*}, the copper price) and of imports (π^{M*}, the price index for imported goods, IVUM), and external GDP (Y*, the GDP of commercial partners).
All domestic observables are assumed to have measurement error, with calibrated variance equal to 10% of the variance of the observable (except for the interest rate). Priors and posteriors are shown in Tables D.3 to D.5. When possible, priors are set by centering the distributions around previous results in the literature. The estimated model is able to properly match the volatilities and first-order autocorrelation coefficients of the domestic observables, as can be seen in Table D.1.

Note: The variables are: the growth rates of GDP, private consumption, investment, and GDP in the X and N sectors; the trade-balance-to-output ratio; inflation for total CPI, tradables, non-tradables and imported goods; the growth rate of nominal wages in sectors X and N; the monetary policy rate; and the nominal depreciation. Columns two to four correspond to standard deviations, while five to seven are first-order autocorrelations. For each of these moments, the three columns shown are: point estimates in the data, GMM standard errors in the data, and unconditional moments in the model evaluated at the posterior mode.
D.2.1 Calibrated and Estimated Parameters
Notes to the parameter tables: (a) This includes public production and the taxes received by the government from the rest of production. (b) See, for example, recent DSGE models in Kirchner and Tranamil (2016) and García-Cicco et al. (2015).

D.3.1 Household

From the decisions over final consumption, labor, bonds and capital, and defining λ_t as the multiplier on the budget constraint, μ^J_t λ_t as the multiplier on the capital accumulation equation for J = {X, N}, and μ^{WJ}_t W^J_t λ_t as the multiplier on the equalization of labor demand and supply, we obtain the first-order conditions for consumption, bonds, capital, investment and wages, where J = {X, N} in the sector-specific conditions. The functional form of Γ(x) is written in terms of a, the steady-state value of trend growth. From the optimality conditions for wage setting, the first-order conditions determine the optimal wage W^{J,*}_t chosen, and hold for J = {X, N}. In addition, the optimality conditions for the choice between tradable and non-tradable consumption use the fact that C^{NFE}_t = C_t (core and total consumption coincide, since food and energy are normalized to one).
An analogous optimality condition holds for the allocation between exportable and importable consumption.
D.3.2 Investment Good Production
The first-order conditions for the choice between tradable and non-tradable investment define P^{TI}_t, the price index for tradable investment; the analogous first-order condition characterizes the choice between exportable and importable investment.
Effects of adenoidectomy on markers of endothelial function and inflammation in normal-weight and overweight prepubescent children with sleep apnea
BACKGROUND: This trial study aimed to assess the effects of adenoidectomy on the markers of endothelial function and inflammation in normal-weight and overweight prepubescent children with obstructive sleep apnea (OSA). METHODS: This trial study was conducted in Isfahan, Iran in 2009. The study population was comprised of 90 prepubescent children (45 normal-weight and 45 overweight children), aged between 4 and 10 years old, who volunteered for adenoidectomy and had OSA documented by a validated questionnaire. The assessment included filling a questionnaire, physical examination, and laboratory tests; it was conducted before the surgery and was repeated two weeks and six months after the surgery. RESULTS: Of the 90 children evaluated, 83 completed the 2-week evaluation and 72 patients continued with the study for the 6-month follow up. Markers of endothelial function, i.e., serum adhesion molecules including endothelial-leukocyte adhesion molecule (E-selectin), intercellular cell adhesion molecule-1 (ICAM-1), and vascular cell adhesion molecule-1 (sVCAM-1), and markers of inflammation, i.e., interleukin-6 and high-sensitivity C-reactive protein (hs-CRP), decreased significantly in both normal-weight and overweight children after both two weeks and six months. After six months, total and LDL-cholesterol showed a significant decrease in the overweight children. CONCLUSIONS: The findings of the study demonstrated that, irrespective of weight status, children with OSA had increased levels of endothelial function and inflammation markers, which improved after OSA treatment by adenoidectomy. This might be a form of confirmatory evidence on the onset of atherogenesis from the early stages of life, and on the role of inflammation in the process. The reversibility of endothelial dysfunction after improvement of OSA underscores the importance of primordial and primary prevention of chronic diseases from the early stages of life.
Obstructive Sleep Apnea (OSA) is a prevalent medical condition, with an estimated prevalence of 2-3% in children; it is characterized by repetitive upper airway obstruction, resulting in continued breathing effort with diminished airflow. [1][2][3][4] Although the main symptom of OSA is daytime hypersomnolence, patients with OSA are at a higher risk of metabolic disorders 5,6 and of cardiovascular disease (CVD) morbidity and mortality. 7 It was previously assumed that these complications are related to obesity; however, recent data suggest that OSA may have an independent association with cardiometabolic risk factors. 8 There is a growing body of evidence on the interaction of OSA with metabolic dysfunction, which is known as a risk factor for CVD in adults. 8,9 Although it is well documented that CVDs originate in the early stages of life and that CVD risk factors tend to track from childhood into adulthood, [10][11][12] limited experience exists on the association of OSA and cardiometabolic risk factors in the pediatric age group. Improvement of OSA by adenoidectomy might have beneficial effects on metabolic dysfunction. The current trial aimed to assess the effects of adenoidectomy on the markers of endothelial function and inflammation in normal-weight and overweight prepubescent children with OSA.
Methods
This clinical trial study was conducted among children who volunteered for adenoidectomy in Isfahan, the second-largest city in Iran, from May to December 2009.
Participants
The study population was comprised of 90 prepubescent children (45 normal-weight and 45 overweight children), aged between 4 and 10 years old, who volunteered for adenoidectomy and had OSA documented by a validated questionnaire. Children with syndromic obesity, endocrine disorders, any physical disability, and/or a history of chronic medication use were not included in the trial. The two groups of normal-weight and overweight children 13 were selected consecutively among the children who were referred for adenoidectomy.
The study was conducted according to the Declaration of Helsinki, and was approved by the Ethics Committee of the School of Medicine, Isfahan University of Medical Sciences. After providing detailed oral information to the children and their parents, written informed consents were obtained from the parents of eligible children.
OSA was documented by a widely used and validated questionnaire. 14 The questionnaire was extended with questions concerning (i) the child's demographic data (i.e., gender, age, height, weight, household smoking, and parental education), (ii) daytime behavior (e.g., hyperactive-inattentive behavior and tiredness), (iii) frequent sleep problems (i.e., sleep-onset delays, enuresis, night waking, nightmares, and sleep walking), and (iv) current health status (e.g., frequency of upper respiratory tract infections).
Anthropometric Measurement and Clinical Examination
All measurements were made by a trained team of general physicians and nurses under the supervision of the same pediatrician, using calibrated instruments and standard protocols. Weight (Wt) and height (Ht) were measured by a calibrated scale and stadiometer (Seca, Japan), with participants lightly clothed and barefooted, to the nearest 0.1 kg and 0.1 cm, respectively. Body Mass Index (BMI) was computed as Wt (kg) divided by Ht (m) squared. The BMI percentiles were compared to the BMI charts of the Centers for Disease Control and Prevention; BMI levels corresponding to the age- and gender-specific 5th-85th percentiles were considered normal weight, and BMI ≥ 85th percentile was considered overweight. 13 Blood pressure (BP) was measured using a mercury sphygmomanometer under the standard protocol. The readings at the first and fifth Korotkoff phases were taken as systolic and diastolic BP (SBP and DBP), respectively. The average of the two BP measurements was recorded. 15
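The BMI computation and the weight-status cut-offs used here are simple to express in code. A minimal sketch (the percentile itself must come from the CDC age- and sex-specific reference tables, which we treat as an input rather than re-implement):

```python
def bmi(weight_kg: float, height_cm: float) -> float:
    """Body Mass Index: weight (kg) divided by height (m) squared."""
    height_m = height_cm / 100.0
    return weight_kg / height_m ** 2

def weight_status(bmi_percentile: float) -> str:
    """Classify using the cut-offs of this study: 5th-85th percentile is
    normal weight; >= 85th percentile is overweight."""
    if bmi_percentile >= 85:
        return "overweight"
    if bmi_percentile >= 5:
        return "normal weight"
    return "underweight"

print(round(bmi(30.0, 130.0), 1))   # 17.8
print(weight_status(90))            # overweight
```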
Biochemical measurements
Participants were asked to fast for 12 hours before the screening, and compliance with fasting was determined by interview on the morning of the examination. While one of the parents accompanied the child, fasting blood samples were taken from the antecubital vein and, within 30 minutes after venipuncture, were centrifuged for 10 minutes at 3000 rpm. Fasting blood glucose (FBG), total cholesterol (TC), high-density lipoprotein cholesterol (HDL-C), low-density lipoprotein cholesterol (LDL-C), triglycerides (TG), and high-sensitivity C-reactive protein (hs-CRP) were measured using an auto-analyzer. The HDL-C level was determined after dextran sulphate-magnesium chloride precipitation of non-HDL-C. 16 Serum adhesion molecules, i.e., intercellular cell adhesion molecule-1 (ICAM-1), vascular cell adhesion molecule-1 (VCAM-1) and endothelial-leukocyte adhesion molecule (E-selectin), as well as interleukin-6 (IL-6), were measured by the enzyme-linked immunosorbent assay (ELISA) method using standard kits (Bender MedSystems GmbH, Vienna, Austria).
Comparisons
All the baseline assessments including filling the questionnaire, physical examination and laboratory tests were repeated within two weeks and six months after adenoidectomy to determine the short-term and long-term changes in both groups after the OSA treatment.
Statistical Analysis
The data were stored in a computer database. Statistical analyses were performed using SPSS for Windows (version 15.0, SPSS, Chicago, IL). Descriptive data are presented as mean ± standard deviation (SD). The normality of the distribution of variables was verified by the Kolmogorov-Smirnov test. The time trend of changes within and between the groups was analyzed by analysis of variance (ANOVA) and post-hoc tests. The significance level was set at p < 0.05.
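For reference, a within-subject comparison of one marker across the three time points could be set up as in the sketch below (using the AnovaRM class from the Python statsmodels package as a stand-in for the SPSS analysis; the column names and values are hypothetical, not the study data):

```python
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Hypothetical long-format data: one row per child per visit
data = pd.DataFrame({
    "child": [1, 1, 1, 2, 2, 2, 3, 3, 3],
    "visit": ["baseline", "2wk", "6mo"] * 3,
    "icam1": [310.0, 260.0, 240.0, 355.0, 300.0, 270.0, 330.0, 280.0, 255.0],
})

# Repeated-measures ANOVA for the time trend within subjects
res = AnovaRM(data, depvar="icam1", subject="child", within=["visit"]).fit()
print(res.anova_table)
```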
Results
As presented in Figure 1, among the 90 potential candidates who initially agreed to participate in the study, there were 41 children in group A (normal BMI) and 42 children in group B (BMI ≥ 85th percentile) at the two-week follow up, because some participants refused blood sampling or declined to come for the follow-up visits. At the 6-month follow up, the number of participants was reduced to 37 in group A and 35 in group B. Based on the data obtained from the questionnaires, the OSA symptoms disappeared in both study groups. Table 1 shows the metabolic and inflammatory changes in the normal-weight and overweight children before the operation and at two weeks and six months after undergoing adenoidectomy.
After six months, total and LDL-cholesterol showed significant decreases in the overweight children. The most remarkable changes were the declines in the levels of the markers of endothelial function and inflammation, i.e., ICAM-1, VCAM-1, E-selectin, IL-6, and hs-CRP, which decreased in both normal-weight and overweight participants after both two weeks and six months.
Discussion
This trial revealed an independent association between OSA and the levels of endothelial function and inflammation markers, which decreased after adenoidectomy in normal-weight and overweight children. These changes occurred in the absence of changes in most conventional cardiometabolic risk factors.
The relationship of inflammatory processes with the progress of atherosclerosis provides important links between the underlying mechanisms of atherogenesis and CVD risk factors. Therefore, inflammatory biomarkers are considered potential predictors of the present and future risk of CVD. Up-regulation of endothelial adhesion molecules, i.e., endothelial-leukocyte adhesion molecule (E-selectin), intercellular cell adhesion molecule-1 (ICAM-1), and vascular cell adhesion molecule-1 (sVCAM-1), might have a crucial role in the earliest phases of atherosclerosis. 17,18 Concentrations of inflammation markers and soluble adhesion molecules were found to be higher in obese children than in lean children. 19,20 These findings suggest early stages of endothelial dysfunction in children.
Atherosclerosis starts in fetal life, and its natural course consists of interrelations between traditional risk factors and inflammatory and endothelial biomarkers. The features of chronic inflammation can be detected in fatty streaks, i.e., the first stage of atherosclerotic lesions. 21 Childhood obesity has become a health problem among Iranian children, even in those as young as six years of age; 22 considering that many studies have documented the presence of atherosclerosis and inflammation surrogate markers as well as structural arterial changes among obese children, [23][24][25][26] the importance of preventing and controlling this type of nutritional disorder is underscored.
The findings of the current study suggested an independent association of OSA with inflammation marker levels in normal-weight and overweight children. The concentrations of these markers declined shortly after OSA treatment. Children with OSA experience a combination of oxidative stress, inflammation, autonomic activation, and disruption of sleep homeostasis. 27 The independent association of OSA with markers of inflammation in normal-weight and overweight prepubescent children documented in the current trial is consistent with the independent association of OSA with the metabolic syndrome in adults. 28 Our findings are in line with those of a previous study conducted in normal-weight children with OSA who underwent adenoidectomy, which reported a decrease in endothelial function marker levels. 29 It was also found that children with resolution of OSA abnormalities experienced a change in total and LDL-cholesterol levels, supporting the hypothesis that reversal of OSA may also reverse the progression of dyslipidemia over time, which has an important implication for future CVD risk. 30
The main limitation of this study was the questionnaire-based diagnosis of OSA, owing to the high costs of polysomnography (PSG). The main novelty of the study is the measurement of markers, such as adhesion molecules, that have not been previously examined in trials among children with OSA.
Conclusion
The findings of the study demonstrated that, irrespective of weight status, children with OSA had increased levels of endothelial function and inflammation markers, which improved after OSA treatment by adenoidectomy. This might be complementary evidence on the onset of atherogenesis from the early stages of life and on the role of inflammation in this process. The reversibility of endothelial dysfunction after OSA treatment underscores the importance of primordial and primary prevention of chronic diseases from the early stages of life. Future longitudinal studies documenting OSA by polysomnography (PSG) are recommended.
Source of funding
This trial was conducted as a thesis funded by the Vice-Chancellery for Research, Isfahan University of Medical Sciences, Isfahan, Iran.
STUDY OF RESISTANCE TO ANTIBIOTICS OF MICROORGANISMS ISOLATED FROM ANTARCTIC CLIFFS AND BLACK SEA BOTTOM SEDIMENTS
Resistance to antibiotics of bacteria isolated from extreme ecosystems was compared. Pseudomonas antarctica and Rothia sp. strains isolated from rocky lichen of the Antarctic Galindez Island were highly resistant to the antibiotics. The Rhodococcus fascians 181n3, Sporosarcina aquamarina 188n2 and Staphylococcus epidermidis 190n1 strains were found to be sensitive to all antibiotics studied. Plasmid localization of the genes coding resistance to Cr(VI) and Co was determined. The correlation between the antibiotic resistance of microorganisms isolated from an «ecologically pure» ecosystem demonstrated in this study and the resistance of these microorganisms to toxic metals shown by us earlier is a fundamentally new result, which attests to the phenomenon of simultaneous resistance of a microbial community to different extreme factors.
INTRODUCTION
Antarctica is a continent with unique climatic features. A combination of extreme factors is always present here (sharp daily temperature differentials, strong winds, a high level of UV radiation). Microbial communities on vertical rocky scarps (referred to as cliffs) are affected by these extreme factors to the fullest extent. Besides, an extremely uneven distribution of organic compounds is observed in biofilms on cliffs. This presumes competition between microorganisms for nutrient sources and, as a result, a pronounced phenomenon of antibiosis. Antibiosis shows itself in two ways: on the one hand, the synthesis of antibiotics by microorganisms; on the other hand, resistance to them. Furthermore, antibiosis is strengthened as a result of the high concentrations of microorganisms in biofilms. After all, lichens, bacteria and micromycetes, which are the components of cliff phytocenoses, produce organic acids which mobilize metals from effusive rocks. The presence of these factors leads to the formation of relevant adaptive or protective mechanisms in microorganisms. The ecosystem of the Black Sea abyssal-zone sediments is surely considered to be an extreme one, but it significantly differs from Antarctica in its conditions (physicochemical factors). Bottom deposits of the Black Sea lack O2, solar radiation and temperature differentials. A high concentration of microorganisms in biofilms on Antarctic cliffs and in the Black Sea bottom mud is a common feature of both ecosystems. This is supposed to be a source of antagonism and antibiosis phenomena in the microbial communities of these ecosystems, which differ so much from each other.
We consider antibiotics to be a separate class of extreme factors, together with UV radiation, toxic metals, etc. That is why the study of the resistance of the Antarctic and Black Sea microbial communities to antibiotics allows revealing common regularities in the resistance of microbial communities to these extreme biological factors. It is necessary to mention that microbial communities are subject to the simultaneous impact of the mentioned factors in natural extreme ecosystems. That is why it was interesting to compare the resistance of Antarctic cliff microorganisms to at least two types of extreme factors, i.e., to antibiotics and toxic metals. Moreover, it is well known that resistance to toxic metals correlates with resistance to antibiotics [1][2][3].
Earlier we found that microorganisms isolated from the Galindez Island cliff lichen samples possess high resistance to UV radiation [4], and that they are also highly resistant to the toxic metals Co2+, Ni2+, Cu2+, Hg2+ and Cr(VI) [5,6]. That is why we have studied the resistance profile of bacteria isolated from Antarctic cliffs and from a Black Sea bottom sediment sample (depth 806 m) to a wide spectrum of antibiotics, and have selected the strains which are resistant both to antibiotics and to toxic metals.
The evaluation of the resistance of microorganisms isolated from Antarctic cliffs and Black Sea bottom sediments to different antibiotics was the aim of this study.

MATERIALS AND METHODS

Strains of aerobic chemoorganotrophic bacteria isolated from soils and phytocenoses of Galindez Island were studied [7][8][9], including Microbacterium trichothecenolyticum 3208; these strains had earlier been isolated from rocky lichens, sampled from different stationary monitoring points on rocks, and from soils of Galindez Island. A Paenibacillus barcinonensis 3225 culture, isolated from a Black Sea bottom sample (depth 806 m), was also investigated. Microorganisms were cultivated on solidified Nutrient Agar medium (NA, HiMedia Laboratories Pvt. Ltd.) at 28 °C for 24 hours.
The sensitivity of the Antarctic strains to antibiotics was determined by means of the Kirby-Bauer disk diffusion test [10]. The results were evaluated by the diameter of the area free from microorganism growth.
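Interpreting disk-diffusion results amounts to comparing inhibition-zone diameters against breakpoints. A toy sketch of this logic (the breakpoint values below are placeholders, not CLSI/EUCAST standards, which are antibiotic- and organism-specific):

```python
# Hypothetical zone-diameter breakpoints in mm: (resistant_max, susceptible_min)
BREAKPOINTS = {"ampicillin": (13, 17), "tetracycline": (11, 15)}

def interpret(antibiotic: str, zone_mm: float) -> str:
    """Classify a growth-inhibition zone diameter as R/I/S."""
    resistant_max, susceptible_min = BREAKPOINTS[antibiotic]
    if zone_mm <= resistant_max:
        return "resistant"
    if zone_mm >= susceptible_min:
        return "susceptible"
    return "intermediate"

print(interpret("ampicillin", 20))   # susceptible
```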
Plasmid DNA was isolated from Microbacterium trichothecenolyticum 3208 (plasmid P08Co) and Enterobacter hormaechei 3202 (plasmid P02Cr). The cultures were grown in NB nutrient broth medium (HiMedia Laboratories Pvt. Ltd.) at a temperature of 28 °C on a shaker (200 r.p.m.) for 24 hours. Bacterial suspensions were centrifuged (2 min, 8000 r.p.m.) and plasmid DNA was isolated with NucleoSpin Plasmid columns (Macherey-Nagel, Germany) in accordance with the manufacturer's protocol. The concentration of the isolated DNA was determined spectrophotometrically (12.8 µg/ml for P08Co and 35 µg/ml for P02Cr).
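Spectrophotometric quantification of double-stranded DNA conventionally uses the factor of about 50 µg/ml per A260 absorbance unit. A small sketch (the absorbance readings and dilution factor are illustrative, chosen to reproduce the concentrations reported above):

```python
def dsdna_conc_ug_per_ml(a260: float, dilution: float = 1.0) -> float:
    """dsDNA concentration from absorbance at 260 nm, using the
    standard conversion of ~50 ug/ml per absorbance unit."""
    return a260 * 50.0 * dilution

print(dsdna_conc_ug_per_ml(0.256))   # 12.8 ug/ml, cf. P08Co
print(dsdna_conc_ug_per_ml(0.700))   # 35.0 ug/ml, cf. P02Cr
```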
The Escherichia coli JM103 and E. coli XL1 Blue competent cells were prepared in accordance with the protocol in [11]. Transformation of E. coli was performed using the standard method [11]. After transformation, the bacterial suspension was inoculated on nutrient agar with a selective agent (5 g/l Cr(VI) or 0.5 g/l Co2+ for transformation with the P02Cr and P08Co plasmids, respectively), as well as on medium without selection. In addition, the non-transformed E. coli strains were cultivated as a control under similar conditions (with selection and without it).
RESULTS AND DISCUSSION
The resistance to antibiotics of bacteria from different taxonomic groups, isolated from extreme ecosystems, was compared. The phylogenetic analysis and properties of these strains were described before [4-6, 8, 9, 12].
Antibiotics with different mechanisms of action were used for the evaluation of antibiotic sensitivity: inhibitors of the microbial cell wall (ampicillin, oxacillin, vancomycin, imipenem), inhibitors of protein synthesis at the ribosome level (tetracycline, chloramphenicol, lincomycin, gentamycin, oleandomycin) and an RNA polymerase inhibitor (rifampicin).
Pigmented strains isolated from rocky lichen of Galindez Island, Rhodococcus fascians 181n3 (orange pigment) and Sporosarcina aquamarina 188n2 (brown pigment), as well as the pigmentless strain Staphylococcus epidermidis 190n1, were found to be sensitive to all studied antibiotics. The Microbacterium foliorum 181n2 (yellow pigment) and Brevundimonas vesicularis 182n1 (pink pigment) strains were sensitive to the majority of antibiotics, with the exception of ampicillin, oxacillin and imipenem. At the same time, the Pseudomonas fluorescens 180n1 and Rothia sp. 190n2 strains demonstrated a high level of resistance to the studied antibiotics (Table 1).
Analysis of the antibiotic sensitivity of the strains isolated from soil samples of the biogeographic range showed the following results. M. luteus 3201 was sensitive to all antibiotics tested; the E. hormaechei 3202, B. antarcticum 3204 and P. barcinonensis 3225 strains were resistant only to oxacillin and oleandomycin. The Pseudomonas fluorescens 180n1, Rothia sp. 190n2 and Microbacterium trichothecenolyticum 3208 strains showed a high level of resistance (Table 1).
It is possible to presume that the revealed antibiotic resistance of the studied strains is caused by the extreme conditions under which the microorganisms of this ecosystem exist. These include a high level of UV radiation as a mutagenic factor, desiccation and sharp temperature fluctuations. Miller et al. [13] did not associate the antibiotic resistance of Antarctic microorganisms with plasmids and considered spontaneous mutations in structural genes to be the cause of this resistance. According to the authors, the antibiotic resistance of microorganisms can also be caused by human factors and depend on temperature sensitivity. However, one cannot exclude the likelihood that increased resistance to antibiotics occurs due to plasmid localization of the relevant genes.
It is also known that resistance to antibiotics can correlate with resistance to toxic metals [14,15]. In particular, Antarctic microorganisms resistant to chloramphenicol, ampicillin, streptomycin, tetracycline and kanamycin, as well as to toxic metal compounds (K2CrO4, CdCl2, ZnCl2 and HgCl2), have been isolated. In some cases this resistance can also be encoded by plasmid genes [16-18].
In our experiments, the E. hormaechei 3202 and M. trichothecenolyticum 3208 strains were used to study the nature of the resistance of Antarctic microorganisms to toxic metals and antibiotics. These strains are highly resistant to Cr(VI) and Co2+ compounds [6]. It was found that both strains carry plasmids larger than 20 kb. We transferred these plasmids to the E. coli XL1 Blue and E. coli JM103 strains, which were originally sensitive to Cr(VI) and Co2+. Colonies resistant to the mentioned metals were obtained after transformation of the bacteria. Moreover, the transformed E. coli bacteria became resistant to lincomycin. The results obtained therefore testify to the plasmid nature of the resistance of some of the studied microorganisms to both metals and antibiotics.
CONCLUSION
Resistance to antibiotics was compared among bacteria from different taxonomic groups isolated from extreme ecosystems. The Pseudomonas antarctica and Rothia sp. strains, isolated from rocky lichens of Antarctic Galindez Island, were highly resistant to the antibiotics studied, whereas the pigmented strains Rhodococcus fascians 181n3 (orange pigment) and Sporosarcina aquamarina 188n2 (brown pigment), as well as the pigmentless strain Staphylococcus epidermidis 190n1, were sensitive to all antibiotics studied. The E. hormaechei 3202, M. trichothecenolyticum 3208, and Rothia sp. 190n2 strains demonstrated a correlation between antibiotic resistance and toxic metal resistance.
We have confirmed that microorganisms isolated from natural ecosystems in some cases demonstrate resistance to antibiotics. Study of the nature of the resistance of these strains to toxic metals and antibiotics indicated plasmid localization of the genes coding for resistance to Cr(VI) and Co2+. The demonstrated resistance to antibiotics and toxic metals of microorganisms isolated from ecologically pure ecosystems not contaminated by anthropogenic factors [6] is fundamentally new data that testify to the phenomenon of simultaneous resistance of microbial communities to different extreme factors.
Table 1. Sensitivity of chemoorganotrophic bacteria isolated from cliff lichens and soil samples of Galindez Island to a broad spectrum of antibiotics. The strains, including Microbacterium trichothecenolyticum 3208, were isolated earlier from rocky lichens sampled at different stationary monitoring points on rocks and from soils of Galindez Island.
Thermal Motors with Enhanced Performance due to Engineered Exceptional Points
A thermal current, generated by a temperature gradient between two reservoirs coupled to a carefully designed photonic or (micro-)electromechanical circuit, might induce non-conservative forces that impel a mechanical degree of freedom to move along a closed trajectory. We show that in the limit of long, but finite, modulation periods, the extracted power and the efficiency of such autonomous motors can be maximized when an appropriately designed spatio-temporal symmetry violation is induced and when the motor operates in the vicinity of exceptional point (EP) degeneracies. These singularities appear in the spectrum of the effective non-Hermitian Hamiltonian that describes the combined circuit-reservoirs system when we judiciously tailor the coupling between them. In the photonic framework, these motors can be propelled by thermal radiation and can be utilized as autonomous self-powered microrobots, or as micro-pumps for microfluidics in biological environments. The same designs can also be implemented with electromechanical elements for harvesting ambient mechanical (or electrical) noise for powering a variety of auxiliary systems.
I. INTRODUCTION
The manipulation of microscopic objects via currents has become an indispensable tool in many disciplines of science and technology, revolutionizing a variety of applications in areas as diverse as micro-engineering and micro-robotics to biology and medicine [1-10]. Depending on the application, the source of these currents varies from thermal radiation and thermal vibrations to electrical and chemical energy extracted in biological processes. On the fundamental level, such applications require the development of design principles that will allow us to realize powerful and efficient engines that operate between two reservoirs at different "temperatures" (or chemical potentials) and produce useful work with maximum efficiency. In particular, in the framework of thermal engines, the question of maximum efficiency has been addressed by the pioneering work of Carnot, which pointed out that the efficiency of a thermal engine that performs a cycle between two reservoirs with temperatures T_H and T_C (T_H > T_C) is bounded by the so-called Carnot efficiency $\eta_C = 1 - T_C/T_H$ [11,12]. Of course, this thermodynamic bound is of limited practical importance, since the corresponding heat engine must work reversibly, and thus its output power is zero. A more practical direction is to identify conditions under which the power of irreversible thermal engines, working under finite-time Carnot cycles, is optimized while their efficiency is still high [13-18]. The situation is even more complex when one abandons the convenience of the macroscopic thermodynamics framework and delves into the challenges of modern nano-devices, where wave interferences and thermal fluctuations dominate their performance [19-22].
A prominent framework where many of these challenges meet is photonics. In this case, the near-field thermal radiation, emitted from a hot reservoir towards a cold reservoir, can be harvested by an optomechanical circuit as a non-conservative "wind-force". Under its influence, a (slow) mechanical degree of freedom (MDF) undergoes a closed-path periodic motion. We show that for a long, but finite, driving period of the MDF, these circuits act as autonomous radiative motors, whose extracted power and efficiency are maximized when they are designed to operate in a domain of their parameter space which is in the vicinity of an exceptional point (EP) degeneracy. The latter signifies a coalescence of the eigenvalues and the corresponding eigenvectors of the effective non-Hermitian Hamiltonian that describes the coupling of these motors with the thermal reservoirs. We have engineered such EPs via a judicious coupling contrast with the reservoirs, and we have harvested their influence in the enhancement of the performance of the motor by appropriate manipulation of its spatio-temporal symmetries and of the thermal emissivity of the attached reservoirs. Our predictions can guide the design towards optimal operational conditions of autonomous motors. Their applicability extends beyond the photonic framework to other platforms like electromechanical circuits for harvesting mechanical (e.g. vibrational) ambient noise for power supply of a variety of auxiliary systems [23-28].

FIG. 1. Schematic representation of our thermal motor. (a) A photonic circuit connected to two thermal baths at different temperatures T_H > T_C is able to divert part of the thermal radiative energy into useful work in the form of the motion of a mechanical degree of freedom (MDF) described by a rotor. (b) An electromechanical motor consisting of two coupled LC resonators. The capacitance plates of the LC circuits are coupled to pistons whose motion is out of phase with one another, thus exerting a torque on a rotor. When the system operates in the vicinity of EPs and violates specific spatio-temporal symmetries (or the baths are subjected to spectral filtering), the motor operates at optimal performance.
II. MATHEMATICAL FORMULATION
The system consists of two thermal reservoirs at different temperatures T_H > T_C, which are brought in contact via a circuit. The latter is coupled to an MDF from which we extract work. For simplicity, we will assume that the circuit incorporates two single-mode resonators whose frequencies ω_n (where n = 1, 2) are modulated by the motion of the MDF. The temperature gradient between the two reservoirs produces a thermal current through the circuit that, in turn, exerts a force on the MDF, engaging them in slow periodic motion along a given closed trajectory C in a parameter space. The design is chosen in such a way that the motion along the path C creates out-of-phase variations in ω_{1,2}, thus leading to a violation of spatio-temporal symmetries of the structure. One possible implementation of the above set-up is in the photonic framework (Fig. 1a), while a parallel proposal in the electromechanical framework is shown in Fig. 1b. Below, we will mainly use the photonic "language" associated with Fig. 1a, while we will also have the electromechanical scenario of Fig. 1b in mind.
In typical circumstances, the MDF $\boldsymbol{X} = \{X_1, X_2, \cdots, X_M\}$ describes a change in position or angle of the mechanical element due to a respective force or torque. For concreteness of our presentation, we will assume that M = 2. The coordinate $\boldsymbol{X}$ abides by the Langevin equation

$$\mathcal{M}\,\ddot{\boldsymbol{X}} = -\,\Gamma\,\dot{\boldsymbol{X}} + \boldsymbol{F} + \boldsymbol{\xi}(t), \qquad (1)$$

where $\mathcal{M}$ is the generalized inertia tensor, $\boldsymbol{\xi}(t)$ is a fluctuating force, and $\Gamma$ is the friction tensor, which satisfies a fluctuation-dissipation relation. In our analysis below, we will assume that the fluctuating force $\boldsymbol{\xi}$ can be neglected due to the large inertia of the MDF. Consequently, we can approximate the dynamics of $\boldsymbol{X}$ by its mean value $\boldsymbol{x} = \langle\boldsymbol{X}\rangle$, where $\langle\cdot\rangle$ indicates a thermal averaging. Finally, the mean "force" $\boldsymbol{F}_{av}$ drives the mechanical rotor, diverting energy from the "photonic" thermal current to produce mechanical work. In the photonic framework (Fig. 1a), $\boldsymbol{F}_{av}$ is analogous to the radiation pressure associated with the radiation inside the circuit. In the electromechanical framework of Fig. 1b, $\boldsymbol{F}_{av}$ is associated with a torque acting on the mechanical rotor. The interaction between the mechanical part and the radiation is obtained from the variation of the energy inside the photonic circuit due to a displacement $\boldsymbol{x}$ of the MDF. Specifically, the thermally averaged force is

$$\boldsymbol{F}_{av} = -\,\hbar\left\langle \Psi^{\dagger}\,\nabla_{\boldsymbol{x}} H_0(\boldsymbol{x})\,\Psi \right\rangle, \qquad (2)$$

where $\Psi = (\psi_1, \cdots, \psi_N)^T$, $\psi_n$ is the field amplitude at the n-th resonator of the circuit ($\hbar\omega_n|\psi_n|^2$ represents the energy density in the n-th mode/resonator), and $H_0(\boldsymbol{x})$ is the effective Hamiltonian of the circuit that provides a description of the dynamics of the radiation field in the single-mode resonators. The dynamics of the open system (circuit coupled with reservoirs) is described in terms of a temporal coupled-mode theory (CMT) [29],

$$\frac{d\Psi}{dt} = -\,i\,H_{\rm eff}(\boldsymbol{x})\,\Psi + D^{T}\theta^{(+)}, \qquad \theta^{(-)} = -\,\theta^{(+)} + D\,\Psi, \qquad H_{\rm eff} = H_0(\boldsymbol{x}) - i\,\frac{D^{T}D}{2}, \qquad (3)$$

where the matrix D, with elements $D_{n,\alpha} = \sqrt{2\gamma_\alpha}\,\delta_{n,\alpha}$, describes the coupling of the circuit with the thermal baths. The thermal excitations from (towards) the α-th reservoir are given by the incoming (outgoing) complex fields $\theta^{(+)}$ [$\theta^{(-)}$]. In the frequency domain (using the Fourier transform convention $f(t) = \int_0^\infty f(\omega)\,e^{-i\omega t}\,d\omega$), the incoming fields are fully characterized by their correlations,

$$\left\langle \theta^{(+)}_\alpha(\omega)^{*}\,\theta^{(+)}_\beta(\omega') \right\rangle = 2\pi\,\tilde{\Theta}_\alpha(\omega)\,\delta_{\alpha\beta}\,\delta(\omega - \omega'), \qquad (4)$$

where $\tilde{\Theta}(\omega) = \Phi(\omega)\cdot\Theta(\omega)$, with $\Phi(\omega)$ being a noise filter function and $\Theta_\alpha(\omega) = \{\exp[\hbar\omega/(k_B T_\alpha)] - 1\}^{-1}$ the Bose-Einstein statistics describing the mean number of photons which are emitted from reservoir α with frequency ω. Finally, $T_\alpha$ is the temperature of the α-th reservoir.
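As a quick numerical companion to the CMT formalism above, the sketch below assembles H_eff and the instantaneous scattering matrix (whose explicit form is quoted in Sec. III) for a two-resonator circuit and verifies flux conservation, i.e. the unitarity of S. All numerical values are illustrative placeholders, not parameters taken from the text.

```python
import numpy as np

# Two single-mode resonators, each coupled to its own bath (illustrative values).
w1, w2, kappa = 1.00, 1.02, 0.010   # resonance frequencies and inter-mode coupling
g1, g2 = 0.030, 0.010               # decay rates gamma_1 (hot side), gamma_2 (cold side)

H0 = np.array([[w1, kappa],
               [kappa, w2]], dtype=complex)
D = np.diag([np.sqrt(2 * g1), np.sqrt(2 * g2)])   # D[n, a] = sqrt(2*g_a) * delta_{n,a}
Heff = H0 - 0.5j * (D.T @ D)                      # effective non-Hermitian Hamiltonian

def S_matrix(w):
    """Instantaneous scattering matrix S(w) = -I + i D G(w) D^T."""
    G = np.linalg.inv(w * np.eye(2) - Heff)       # Green's function G = (w I - Heff)^-1
    return -np.eye(2) + 1j * (D @ G @ D.T)

S = S_matrix(1.01)
print("unitary:", np.allclose(S.conj().T @ S, np.eye(2)))   # flux conservation
```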
III. WORK
We assume that the dynamics of the MDF, Eq. (1), occurs on time scales much larger than the ones associated with the field dynamics, Eq. (3). Under this assumption, we can invoke the Born-Oppenheimer approximation and obtain the work performed by the motor along the path C as [30-36]

$$W = \hbar\sum_{\alpha}\int_{0}^{\infty}\frac{d\omega}{2\pi}\,\tilde{\Theta}_\alpha(\omega)\,P_\alpha(\omega), \qquad P_\alpha(\omega) = \oint_C d\boldsymbol{x}\cdot i\left[(S_{\boldsymbol{x}})^{\dagger}\,\nabla_{\boldsymbol{x}} S_{\boldsymbol{x}}\right]_{\alpha,\alpha}, \qquad (5)$$

where $S_{\boldsymbol{x}}(\omega) = -\mathbb{I}_{N_\alpha} + i\,D\,G_{\boldsymbol{x}}(\omega)\,D^{T}$ is the unitary instantaneous scattering matrix, and $G_{\boldsymbol{x}} = [\omega\,\mathbb{I}_N - H_{\rm eff}(\boldsymbol{x})]^{-1}$ is the Green's function associated with the effective Hamiltonian $H_{\rm eff}$ ($\mathbb{I}_m$ is the m × m identity matrix). In Eq. (5), the kernel $P_\alpha(\omega)$ indicates the spectral response of the system at a frequency ω. Since $P_\alpha(\omega)$ only involves a parametric integral along the path C, it is a geometric quantity [37,38]. It turns out that for the two-reservoir setup of Fig. 1, $P_\alpha$ can be written only in terms of the reflectance $R_{\boldsymbol{x}}$ and the corresponding reflection phase $\alpha_{\boldsymbol{x}}$ (see the right part of Eq. (5)). As a matter of convention, a positive W in Eq. (5) indicates that the dynamics of $\boldsymbol{x}$ follows the positive direction of the path C.
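The geometric kernel $P_\alpha$ can be evaluated numerically by discretizing one modulation cycle. The sketch below assumes the loop-integral form of Eq. (5) as reconstructed above and uses illustrative parameters; consistent with its geometric character, it returns zero whenever the cycle encloses no area.

```python
import numpy as np

# Out-of-phase modulation of the two resonance frequencies along the cycle C:
# omega_{1,2}(x) = w0*(1 + delta*cos(x + phi_{1,2})); all values illustrative.
w0, delta, kappa = 1.0, 0.005, 0.010
g1, g2 = 0.030, 0.010
phi1, phi2 = np.pi / 2, 0.0
D = np.diag([np.sqrt(2 * g1), np.sqrt(2 * g2)])

def S_matrix(w, x):
    H0 = np.array([[w0 * (1 + delta * np.cos(x + phi1)), kappa],
                   [kappa, w0 * (1 + delta * np.cos(x + phi2))]], dtype=complex)
    Heff = H0 - 0.5j * (D.T @ D)
    return -np.eye(2) + 1j * (D @ np.linalg.inv(w * np.eye(2) - Heff) @ D.T)

def P_alpha(w, alpha=0, N=2000):
    """Discretized loop integral P_a(w) = oint dx [ i S(x)^dag dS/dx ]_{aa}."""
    xs = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
    dx = xs[1] - xs[0]
    P = 0.0
    for x in xs:
        dS = (S_matrix(w, x + dx / 2) - S_matrix(w, x - dx / 2)) / dx  # centered diff.
        P += (1j * (S_matrix(w, x).conj().T @ dS))[alpha, alpha].real * dx
    return P

print(P_alpha(1.0))   # vanishes if phi1 - phi2 is 0 or pi (zero enclosed area)
```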
An analytically useful expression for $P_\alpha$ is achieved by substituting in Eq. (5) the scattering matrix in terms of the Green's function. Applying Green's theorem to the resulting loop integral turns $P_\alpha$ into an integral of a work density per unit area, $\mathcal{W}_\alpha(\omega)$, over the surface enclosed by C (Eqs. (6,7)), where we have used that $\left(\partial H_0/\partial x_p\right)_{n,m} = (\partial\omega_n/\partial x_p)\,\delta_{n,m}\,\delta_{n,p}$. Direct inspection of Eq. (5) allows us to establish the following two conditions for the implementation of our proposal as a motor: (a) the force has to be non-conservative, which means that $\nabla_{\boldsymbol{x}}\times\left[(S_{\boldsymbol{x}})^{\dagger}\,\nabla_{\boldsymbol{x}} S_{\boldsymbol{x}}\right]_{\alpha,\alpha} \neq 0$, and (b) the closed path C must enclose a non-zero area in the parameter space $\{x_1, x_2\}$. A by-product of the last condition is that variations of $x_1, x_2$ with a phase difference of 0 or π cannot produce work, as the short calculation below illustrates.
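To make condition (b) concrete, consider the harmonic modulation $x_{1,2}(x) = \delta\cos(x + \phi_{1,2})$ used later in the text. The enclosed area follows from a one-line integral (a worked check, not an expression quoted from the paper):

$$\mathcal{A} = \oint_C x_1\,dx_2 = -\,\delta^2\int_0^{2\pi}\cos(x + \phi_1)\,\sin(x + \phi_2)\,dx = \pi\,\delta^2\,\sin(\phi_1 - \phi_2),$$

which vanishes for in-phase ($\phi_1 - \phi_2 = 0$) or anti-phase ($\phi_1 - \phi_2 = \pi$) driving and is maximal for the quadrature choice $\phi_1 = \pi/2$, $\phi_2 = 0$ adopted in Sec. V.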
IV. ENGINEERING EP DEGENERACIES
In the frequency range near an EP degeneracy, the resolvent of the effective Hamiltonian $H_{\rm eff}$ can be approximated by a 2 × 2 subspace involving only the resonant modes associated with the EP. We therefore consider a minimal model consisting of two coupled modes with resonant frequencies $\omega_1, \omega_2$. Alternatively, one can consider, as a concrete example, the set-up of Fig. 1a. The system consists of two single-mode resonators coupled asymmetrically to two reservoirs at temperatures $T_{\alpha=1} = T_H$ and $T_{\alpha=2} = T_C$. The effective Hamiltonian of such a reduced system reads

$$H_{\rm eff} = \begin{pmatrix} \omega_1 - i\gamma_1 & \kappa \\ \kappa & \omega_2 - i\gamma_2 \end{pmatrix}, \qquad (8)$$

where κ describes the coupling between the two modes and $\gamma_1, \gamma_2$ are the (asymmetric) decay rates of the two modes due to their coupling with the two reservoirs. The spectrum of $H_{\rm eff}$ is

$$\omega_{\pm} = \frac{\omega_1 + \omega_2}{2} - i\,\gamma_0 \pm \frac{1}{2}\sqrt{(\Delta\omega - i\,\Delta\gamma)^2 + 4\kappa^2},$$

where $\gamma_0 = (\gamma_1 + \gamma_2)/2$, $\Delta\omega = \omega_1 - \omega_2$, and $\Delta\gamma = \gamma_1 - \gamma_2 \neq 0$. The corresponding (non-normalized) eigenvectors are $u_{1,2} = \left(2\kappa,\; -\Delta\omega + i\Delta\gamma \pm \sqrt{(\Delta\omega - i\Delta\gamma)^2 + 4\kappa^2}\right)^T$. It is easy to show that when $\Delta\omega = \Delta\omega_{EP} = 0$ and $\kappa_{EP} = \Delta\gamma/2$ the system supports an EP degeneracy, with $\omega_{\pm} = \omega_{EP} = \omega_0 - i\gamma_0$ (where $\omega_0 = \omega_1 = \omega_2$). In fact, under the condition $\Delta\omega = \Delta\omega_{EP}$, the Hamiltonian Eq. (8) respects a (pseudo-)parity-time (PT) symmetry that reveals itself after renormalizing the losses with respect to their mean value $\gamma_0$ [39]. Below we will be discussing in detail two distinct scenarios involving perturbations around the EP that violate this (pseudo-)PT-symmetry either spontaneously or explicitly. We will show that each of these cases affects in a dramatically different manner the characteristic features of the work density $\mathcal{W}_\alpha$.
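The EP condition can be verified directly from Eq. (8). A minimal numerical check, with illustrative decay rates, shows both the eigenvalues and the (normalized) eigenvectors coalescing at $\kappa = \kappa_{EP}$:

```python
import numpy as np

def H_eff(w1, w2, g1, g2, kappa):
    """Reduced 2x2 effective Hamiltonian, Eq. (8)."""
    return np.array([[w1 - 1j * g1, kappa],
                     [kappa, w2 - 1j * g2]], dtype=complex)

w0, g1, g2 = 1.0, 0.30, 0.10          # illustrative numbers
kappa_EP = (g1 - g2) / 2              # EP condition at Delta_omega = 0

evals, evecs = np.linalg.eig(H_eff(w0, w0, g1, g2, kappa_EP))
print("eigenvalues:", evals)          # both approach w0 - 1j*(g1 + g2)/2

# Eigenvector coalescence: the two normalized eigenvectors become parallel.
v1 = evecs[:, 0] / np.linalg.norm(evecs[:, 0])
v2 = evecs[:, 1] / np.linalg.norm(evecs[:, 1])
print("overlap |<v1|v2>| =", abs(np.vdot(v1, v2)))   # -> 1 at the EP
```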
V. WORK DENSITY IN THE PRESENCE OF AN EP
We analyze the extracted work density of the motor when the center of the modulation cycle is in the proximity of an EP. To this end, we consider a modulation cycle C associated with changes of the resonant frequencies $\omega_{1,2} = \omega_0 \pm \epsilon + \omega_0\,\delta\cos(x + \phi_{1,2})$, where ε describes a resonance detuning that displaces the unmodulated system Eq. (8) from the EP by violating explicitly its (pseudo-)PT-symmetry. In order to satisfy the criteria for non-zero work, we have assumed that the two resonances are modulated out of phase, i.e. $\phi_1 = \pi/2$, $\phi_2 = 0$. For such a modulation scenario, the associated enclosed area in the parameter space is non-zero. Next, we assume a generic perturbation p which displaces the center of the modulation cycle with respect to the EP. Using Eq. (7), we have evaluated the work density $\mathcal{W}_\alpha$ in terms of the Green's function $G_{\boldsymbol{x}}$. In fact, for the 2 × 2 case, the calculations for the Green's function can be carried out explicitly for any perturbation, giving

$$G_{\boldsymbol{x}}(\omega) = \left[\omega\,\mathbb{I}_2 - H_{\rm eff}\right]^{-1} \approx \frac{A}{\omega - \omega_{EP}} + \frac{B}{(\omega - \omega_{EP})^{2}}. \qquad (9)$$

In the above expression, the generic perturbation p is "hidden" in the parameters that define $H_{\rm eff}$, e.g. in the frequencies $\omega_{1,2} = \omega_{1,2}(p)$ and/or the coupling $\kappa = \kappa(p)$ between the two resonant modes. When p → 0, the Green's function can be approximated with the last expression, where A and B are frequency-independent matrices (see methods). It turns out that the functional dependence of $\mathcal{W}_\alpha$ on ω, in the vicinity of the EP, is dramatically affected by the presence of the square-Lorentzian term on the last part of Eq. (9). This unique spectral feature is a consequence of the degeneracy of the eigenvectors of $H_{\rm eff}$ at the EP. Furthermore, a squared Lorentzian lineshape implies a narrower emission/absorption peak and greater resonant enhancement in comparison with a non-degenerate resonance at the same complex frequency. We will show that the competition between the two terms appearing at the right equality of Eq. (9) determines the conditions under which $\mathcal{W}_\alpha$ acquires its maximum value (see below). A more elaborate treatment can extend the above analysis of $G_{\boldsymbol{x}}$, in order to include any number of modes, by using a degenerate perturbation theory that takes into consideration the singular nature of EPs. In this case, the standard modal decomposition of the Green's function is not applicable, since the bi-orthogonal eigenvectors of $H_{\rm eff}$ do not span the Hilbert space. Instead, one has to complete the eigenvectors of $H_{\rm eff}$ into a basis by introducing the associated Jordan vectors [40]. Following this approach, we can recover the last expression of $G_{\boldsymbol{x}}$ in Eq. (9).
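The origin of the squared Lorentzian in Eq. (9) can be seen in two lines. Exactly at a second-order EP, $H_{\rm eff}$ is similar to a Jordan block, i.e. it splits into $\omega_{EP}\,\mathbb{I}$ plus a nilpotent part N (a sketch under this standard assumption; the identification of N with the dyad $u_0 v_0^T$ follows the supplementary material):

$$G(\omega) = \left[(\omega - \omega_{EP})\,\mathbb{I} - N\right]^{-1} = \frac{\mathbb{I}}{\omega - \omega_{EP}} + \frac{N}{(\omega - \omega_{EP})^{2}}, \qquad N^{2} = 0,$$

where the geometric series truncates after one term because N is nilpotent; the $1/(\omega - \omega_{EP})^2$ piece is precisely the B-term of Eq. (9).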
Substituting the expression for the Green's function back in Eqs. (6,7), we obtain the work density in the vicinity of the EP (Eqs. (10-12) below), where for the evaluation of the contour integral in Eq. (6) we have explicitly written $\omega_{1,2}$ in terms of the parameters $\omega_0$ and ε that define the position of the path C. The constant $c = (\Delta\gamma/2)^2 - \kappa^2$ and/or the detuning ε indicate the degree of deviation from the EP. Let us exploit further Eq. (10) by considering two specific examples, corresponding to perturbations that preserve/violate the pseudo-PT symmetry of the effective unmodulated Hamiltonian $H_{\rm eff}$. In the first case, we displace the system away from the EP by varying the coupling $\kappa \neq \kappa_{EP}$ while keeping $\epsilon = 0$. We find that the work density takes the form of Eq. (11). In fact, by considering the EP condition c = 0, we are able to identify, in the denominator of $\mathcal{W}_1$, the signature of the square-Lorentzian anomaly associated with the collapse of the eigenvector basis. Equation (11) allows us to conclude that $\mathcal{W}_1$ is non-monotonic and antisymmetric with respect to the EP resonance frequency axis $\omega = \omega_0$ for all κ-values. Furthermore, $\mathcal{W}_1(\omega = \omega_0) = 0 = \mathcal{W}_1(\omega \to \pm\infty)$, while its extrema occur in the vicinity of the EP (see the filled magenta circle) at $\omega = \omega_0 \pm \gamma_0/\sqrt{7}$, see Fig. 2a. The situation is dramatically different when we choose to perturb the system away from the EP using a parameter that enforces an explicit (pseudo-)PT-symmetry violation of the unmodulated Hamiltonian $H_{\rm eff}$. An example case is when the resonances of the two coupled modes are detuned by ε. In this case, the diagonal elements of $H_{\rm eff}$ take the form $\omega_{1,2} = \omega_0 \pm \epsilon + \omega_0\,\delta\cos(x + \phi_{1,2})$. Furthermore, the work density does not have a definite symmetry with respect to $(\omega - \omega_0)$. To be concrete, we consider the particular case $\kappa = \kappa_{EP} = \Delta\gamma/2$, for which the work density takes the form of Eq. (12), where the denominator demonstrates the traces of the square-Lorentzian anomaly. The latter is better appreciated in the limit of $\epsilon = 0$ (EP condition). For $\epsilon \ll \omega_0$, we can further expand the denominator up to leading order in ε and get Eq. (13), where the term associated with the perturbation ε is an even function in $(\omega - \omega_0)$. We conclude, therefore, that the work density $\mathcal{W}_\alpha$ loses its parity as soon as ε is turned on, see also Fig. 2b. Below we will be discussing the consequences of such an effect on the power extraction of the autonomous motor.
VI. WORK IN THE PRESENCE OF EP
We are now ready to exploit the properties of $\mathcal{W}_\alpha$ for the design of autonomous motors with optimal performance. To this end, we recall that the extracted work W is essentially the frequency integral of $\mathcal{W}_\alpha$, weighted with the function $\tilde{\Theta}_\alpha(\omega)$, see Eq. (5).
Let us first discuss the family of perturbations that preserve the (pseudo-)PT-symmetry of the unmodulated effective Hamiltonian. In this case, the antisymmetric form of the work density $\mathcal{W}_\alpha$ with respect to the $\omega_0$-axis results in a near-zero total work, see Fig. 2c. The slight deviation from zero (towards positive W > 0) is due to the fact that Eq. (5) involves a product of $\mathcal{W}_\alpha$ with Θ(ω), which slightly de-symmetrizes the integrand towards smaller frequencies (see continuous blue line). We can remedy the situation by introducing a spectral filtering function Φ(ω), which enhances the imbalance between the contributions of positive and negative work densities in the integral of Eq. (5). The resulting extracted work, for the example case of a filter function $\Phi(\omega) = H(\omega - \omega_0)$, is reported in Fig. 2c with a black dashed line (H(x) is the Heaviside function). Our results indicate that such a spectral filtering approach can lead to an increase in W which is higher by two orders of magnitude with respect to the unfiltered case. The same data indicate that the maximum work occurs in the vicinity of the EP, where $\mathcal{W}_\alpha$ acquires its maximum value (violet vertical line) and where the de-symmetrization strategy via spectral filtering is most impactful.
An alternative way to induce an asymmetric integrand in Eq. (5) is by perturbing the system away from the EP via a perturbation that explicitly violates the (pseudo-)PT symmetry of the unmodulated effective Hamiltonian. In the previous section, we identified one such perturbation as the frequency detuning ε between the two resonators. In this case, the work density itself becomes asymmetric (see Fig. 2b), leading to a frequency integral Eq. (5) which is different from zero. In fact, the maximum W, occurring in the proximity of $\kappa_{EP}$, is again enhanced by two orders of magnitude in comparison to the $\epsilon = 0$ case, see the blue dashed line in Fig. 2c.
The enhancement of the extracted work W via engineered perturbations that violate the (pseudo-)PT-symmetry of the motor is better appreciated in Fig. 2d. Here, we report the extracted work W (for fixed $\kappa = \kappa_{EP}$) for both spectrally unfiltered and filtered noise versus the perturbation ε. For the unfiltered case (solid red line), we find that in the vicinity of the EP the total work is proportional to ε, a relation that is a direct consequence of the expansion Eq. (13) for the work density. Specifically, assuming for simplicity that $\Theta_1(\omega) \approx \Theta_1(\omega_0)$, the integration over ω leads to the conclusion that $W \approx \mathcal{A}\,\Theta_1(\omega_0)\int \frac{d\omega}{2\pi}\,\mathcal{W}_1 \propto \epsilon$. The same argument also applies in the case of spectral filtering with $\Phi(\omega) = H(\omega - \omega_0)$ (see dashed red line). In both cases, the extreme work $W_{max}$ occurs at perturbation strengths $\epsilon_{max}$ in the vicinity of the EP, where the linear approximation Eq. (13) breaks down. An additional conclusion that we extract from the above analysis is that the spectral filtering method, combined with perturbations that violate the (pseudo-)PT-symmetry, leads to a slight (two-fold) increase of the extracted work as compared to the unfiltered case (see solid red line).
A panorama of the extracted work W versus ε and κ is shown in Fig. 2e. Here we report only the unfiltered case, i.e. Φ(ω) = 1. The data demonstrate nicely that the extreme value of the extracted work occurs in the vicinity of $(\epsilon, \kappa) = (0, \kappa_{EP})$, where the EP is located. The case of spectral filtering with a function Φ(ω) (e.g. $\Phi(\omega) = H(\omega - \omega_0)$) shows the same qualitative features (with the only difference that W is flat in the negative semiplane due to the specific filter function) and is therefore not reported here.

FIG. 3. (a-c) The dynamics of the angular velocity Ω(t) for some representative values of the coupling coefficient κ, whose terminal velocity determines the work delivered by the photonic circuit. (d) Work performed by the two-resonator circuit setup versus the coupling parameter κ. The numerical evaluation of the work (dots with error bars) is based on the value of the terminal angular velocity, see Eq. (14). The TD simulations match nicely the theoretical predictions for the work (green line) given by Eq. (5). The blue dashed line reports the work predicted by the CMT modeling, see Eqs. (3,8) [48]. The vertical red dotted line indicates the position of the EP.
VII. TIME DOMAIN SIMULATIONS AND IMPLEMENTATION USING ELECTROMECHANICAL SYSTEMS
We validate the above proposal by performing time-domain (TD) simulations using COMSOL software [41] with a realistic electromechanical system, see Fig. 1b. The setup consists of a pair of capacitively coupled resonators with impedance $Z_0 = 70$ Ohm tuned at different frequencies, $\omega_{1,2} = \omega_0 \pm \epsilon$, which enforces violation of the (pseudo-)PT-symmetry of the unmodulated system. In our simulations, we have considered that $\omega_0 = 2\pi f_0$, with $f_0 = 1$ MHz, and $\epsilon = 0.0488\cdot\omega_0$. The capacitors $C_{1,2}$ are considered as a pair of conductive plates separated by a median air gap $d_0 = e_0\cdot A/C_0$, where $e_0$ is the vacuum permittivity, A is the plate area, and $C_0 = 1/(Z_0\cdot\omega_0)$ is a median capacitance. The upper plates of the capacitors are assumed to be attached to a wheel (the MDF) of radius $r = d_0/10$ in such a way that during the wheel rotation with angular velocity Ω the plates undergo a motion described by the displacements $d_{1,2} = d_0 + r\cdot\cos(\varphi_{1,2})$, with $\varphi_1 = \Omega t$ and $\varphi_2 = \varphi_1 + \pi/2$. The wheel is assumed to have mass m = 1 g, moment of inertia $I = 0.5\cdot m r^2 = 7.58\cdot 10^{-15}$ kg·m², and experiences friction with the ambient medium with friction coefficient $\Gamma = 2.5\cdot 10^{-13}$ N·m·s/rad. The coupling capacitance between the two LC resonators is $C_c = 2\kappa\cdot C_0$, where the coupling coefficient κ is a tunable parameter of the simulations. The left/right resonators are coupled to the hot/cold baths via capacitors $C_{e1} = 0.1\cdot C_0$ and $C_{e2} = 0.03\cdot C_0$, respectively, which yields the value $\kappa_{EP} = 0.001625$ for the critical coupling (red dotted line in Fig. 3d).
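For orientation, the quoted circuit values are mutually consistent, and the unstated plate area can be inferred from them. The short sketch below is our own cross-check, not a calculation from the text:

```python
import numpy as np

# All numbers below are quoted in the text; the plate area A is inferred.
f0 = 1.0e6                        # Hz
w0 = 2 * np.pi * f0               # rad/s
Z0 = 70.0                         # Ohm
eps0 = 8.854e-12                  # F/m, vacuum permittivity

C0 = 1.0 / (Z0 * w0)              # median capacitance, ~2.27 nF
m, I = 1.0e-3, 7.58e-15           # wheel mass (kg) and moment of inertia (kg m^2)
r = np.sqrt(2 * I / m)            # from I = 0.5*m*r^2 -> r ~ 3.9 um
d0 = 10 * r                       # median gap, since r = d0/10
A = d0 * C0 / eps0                # plate area consistent with d0 = eps0*A/C0

kappa_EP = 0.001625               # critical coupling quoted in the text
Cc = 2 * kappa_EP * C0            # coupling capacitance C_c = 2*kappa*C0
print(f"C0={C0:.3e} F, r={r:.2e} m, d0={d0:.2e} m, A={A:.2e} m^2, Cc={Cc:.3e} F")
```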
To further enhance the work extracted from the MDF we have introduced, in addition to the detuning ε, spectral filtering of the thermal baths. Specifically, the hot bath produces a noise signal consisting of 200 spectrally uniformly distributed harmonics, $V(t) = V_0\cdot\sum_{i=1}^{200}\sin(\tilde{\omega}_i\cdot t + \varphi_i)$, where $V_0 = 1$ V is the amplitude of the noise, $\tilde{\omega}_i$ is the frequency of each noise harmonic, and $\varphi_i$ is a random phase shift. The lowest noise frequency considered in the simulations is $\tilde{\omega}_1 = 2\pi\cdot 0.85$ MHz, with an upper limit of $\tilde{\omega}_{200} = 2\pi\cdot 1.1$ MHz.
In the simulations the wheel is given an initial angular velocity $\Omega_0 = 2.5\cdot 10^4$ rad/s. Its angular velocity is monitored as a function of time until it saturates at a certain value $\Omega_s$. From here, we evaluate the work per cycle via the relation

$$W_{TD} = \oint_C \tau\,dx = 2\pi\,\Gamma\,\Omega_s^{TD}, \qquad (14)$$

where τ is the torque produced by the capacitor plates on the wheel and x is the angular displacement; the second equality follows because at the terminal velocity the driving torque balances the friction torque $\Gamma\Omega_s$. The subindex TD indicates that the evaluated work is extracted from our time-domain simulations.
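Under our reading of Eq. (14), in which the driving torque balances friction at saturation, the work per cycle follows from the terminal velocity alone. A two-line check with the quoted friction coefficient and an illustrative terminal velocity:

```python
import numpy as np

Gamma = 2.5e-13        # N*m*s/rad, friction coefficient quoted in the text
Omega_s = 2.0e4        # rad/s, illustrative terminal angular velocity

W_TD = 2 * np.pi * Gamma * Omega_s   # work per cycle, Eq. (14) at saturation
print(f"W_TD = {W_TD:.3e} J per cycle")
```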
In Figs. 3a-c we show the transient dynamics of the angular velocity Ω(t) for three typical coupling constants κ. Notice that in some cases (e.g. Fig. 3a) the angular velocity Ω(t) acquires negative values, indicating that the wheel rotates opposite to the direction of the closed path C. We find that in the long-time limit the MDF reaches a terminal angular velocity $\Omega(t \to \infty) \equiv \Omega_s^{TD}$, which can be used in Eq. (14) for the numerical evaluation of $W_{TD}$. In each of the subfigures 3a-c, we also indicate (see dashed black line) the theoretical values of the saturation velocity $\Omega_s$. The latter has been extracted via Eq. (14), where the work W on the left-hand side has been calculated using Eq. (5). For the theoretical evaluation of W, we have extracted the elements of the instantaneous S-matrix of the circuit using a frequency-domain analysis in COMSOL [41].
In Fig. 3d we report a summary of the extracted $W_{TD}$ versus the coupling constant κ. The error bars reflect the fluctuations in the numerical evaluation of $\Omega_s^{TD}$ and are extracted from the temporal analysis of Ω(t) as $\Omega^{TD}_{min/max} = \min/\max(\Omega(t \in [t_1, t_{max}]))$, where $t_1$ is the time at which Ω(t) first reaches the theoretical value of $\Omega_s$ for a given value of κ, and $t_{max} = 0.13$ s is the maximum time used in the TD analysis. In the same figure, we also plot the theoretical predictions for the work W (green line) that have been derived using Eq. (5), with instantaneous scattering matrix elements given by the COMSOL frequency analysis of the electromechanical system. Finally, in the same figure, we present the predictions of the CMT modeling of Eqs. (3,8). In the latter case, the various parameters (coupling, resonance frequencies, linewidths, etc.) of the CMT model have been extracted from the transmission spectrum of the electronic circuit (see methods). The nice agreement between CMT and TD simulations confirms the validity of our CMT modeling and establishes the influence of the EP protocols in extracting maximum work from thermal autonomous motors.
VIII. EFFICIENCY
The temperature gradient between the two thermal reservoirs induces a thermal current that goes through the motor. Part of the associated input power is dissipated due to friction, resulting in a reduction in the amount of usable output power [35]. The latter can be used, e.g., for lifting a weight or charging a capacitor. The usable output power is

$$P_{out} = \frac{\Omega_s}{2\pi}\,W - \Gamma\,\Omega_s^{2}, \qquad (15)$$

where we have assumed that the MDF has large inertia, forcing the rotor to move with terminal velocity $\dot{x} \approx \Omega_s$. The optimal terminal angular velocity that maximizes the usable work is dictated by the parameters of the setup, and can be found from Eq. (15) to be $\Omega_s^{*} = W/(4\pi\Gamma)$, leading to $P_{out}^{*} = \left(\frac{W}{4\pi}\right)^{2}\frac{1}{\Gamma}$, which is half of the total "frictionless" power $\left(\frac{W}{4\pi}\right)^{2}\frac{2}{\Gamma}$. For circuit parameters such that $\Omega_s \gg \Omega_s^{*}$, the motor dissipates most of the incident energy, while in the other limiting case, where $\Omega_s \ll \Omega_s^{*}$, the friction can be neglected but the device does not generate much power. In both limits, the usable output power is nearly zero.
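For completeness, the optimal operating point quoted above follows from a one-line maximization of Eq. (15):

$$\frac{dP_{out}}{d\Omega_s} = \frac{W}{2\pi} - 2\,\Gamma\,\Omega_s = 0 \;\Longrightarrow\; \Omega_s^{*} = \frac{W}{4\pi\Gamma}, \qquad P_{out}^{*} = \frac{W}{2\pi}\,\Omega_s^{*} - \Gamma\,(\Omega_s^{*})^{2} = \left(\frac{W}{4\pi}\right)^{2}\frac{1}{\Gamma},$$

i.e. exactly half of the frictionless power $(W/2\pi)\,\Omega_s^{*} = (W/4\pi)^{2}\,(2/\Gamma)$.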
It is, therefore, useful to quantify the performance of an autonomous motor by introducing its efficiency η. The latter is defined as the ratio of the net usable average output power $P_{out}$ that is extracted from the motor during one period of the cycle $2\pi/\Omega_s^{*}$ when it operates under optimal conditions (i.e. $\Omega_s = \Omega_s^{*}$), to the total input power $P_{in}$ delivered to the photonic circuit. Specifically:

$$\eta = \frac{P_{out}}{P_{in}}, \qquad P_{in} = \bar{I}_b + \bar{I}_p, \qquad (16)$$

where for the evaluation of $P_{in}$ we have also considered the fact that the slow variation of the photonic network's parameters induces a pumping energy current $\bar{I}_p$ in addition to the energy current $\bar{I}_b$ due to the temperature bias [42]. Both currents above are measured at the hot reservoir. Since typically $\bar{I}_p \ll \bar{I}_b$, we can omit the pumped current from the denominator, while we can substitute in Eq. (16) the maximum usable power as $P_{out}^{*} \sim W^2/\Gamma$. Therefore $\eta^{*} \sim \left(\frac{W}{\bar{I}_b}\right)\frac{W}{\Gamma}$, which suggests that the maximum $\eta^{*} \leq \frac{\eta_C}{2}$ might be expected in the parameter domain where W acquires its maximum values, see Figs. 2,3. An efficient way to test the above expectations of the performance of our EP-influenced motor is by simultaneous evaluation of its efficiency, Eq. (16), together with the corresponding power $P_{out}^{*}$.
An efficient way to test the above expectations of the performance of our EP-influenced motor is by simultaneous evaluation of its efficiency Eq. (16) together with the corresponding power P * out . These quantities are plotted FIG. 4. Maximum power P * out (z-axis) and efficiency η * normalized with respect to the maximum efficiency ηC /2 (color-scale) at optimal operational conditions corresponding to Ωs = Ω * s . These quantities are plotted as a function of the perturbation parameters κ and for a fixed temperature gradient. The former perturbation respects the pseudo-PTsymmetric nature of the unmodulated system while the latter violates this symmetry. In these extensive simulations, we have used the CMT modeling with parameters associated with the circuit setup (see the previous section).
in Fig. 4 as a function of the perturbation parameters κ, associated with the coupling and the resonance detuning between the two LC resonators of the electromechanical system of the previous section. For these calculations, we have used the CMT modeling with parameters that reproduce the results of the direct TD simulations of COM-SOL for the electromechanical motor 48 (see Fig. 3d). Furthermore, we have ensured that the angular frequency Ω * s is small enough such that the Born-Oppenheimer approximation is valid. From Fig. 4 we see that both η * and P * out acquire their maximum values at the vicinity of the EP -albeit at slightly different (κ, )-parameter values. This is because of a natural trade-off between efficiency and extracted power which has triggered a number of recent studies to identify conditions where this trade-off is optimized 13,18,37,[44][45][46][47] . Our proposal shed new light in this direction since it identifies as an optimal domain for the design of cycles C, the parameter space in the proximity of an EP.
IX. CONCLUSIONS
We have theoretically proposed and numerically demonstrated a dramatic enhancement of the performance of thermal motors when they operate in a parametric domain in the proximity of an EP degeneracy. The latter appears in the spectrum of the effective non-Hermitian Hamiltonian that describes the open circuit, and it is achieved via a judicious (differential) coupling of the isolated circuit with the ambient baths. In the proximity of the EP, the eigenvector basis collapses (eigenvector degeneracy), leading to an enhanced spectral work density W(ω). In typical circumstances, W(ω) is anti-symmetric with respect to the position of $\omega_{EP}$, leading to a near-zero total work W. When, however, the spectral work density W(ω) is de-symmetrized, the total extracted power and the motor efficiency can acquire their maximum values in the domain of the parameter space in the vicinity of the EP. We have shown that this de-symmetrization can occur either via an explicit PT-symmetry violation of the unperturbed system or via a spontaneous symmetry violation, where, however, one needs to supplement it with additional spectral filtering of the radiation of the bath.
Our results pave the way towards the development of a new generation of optimal thermal motors that utilize engineered non-Hermitian spectral degeneracies. The proposed scheme can find applications in on-chip photonics (e.g. self-powered micro-robots or micro-pumps in microfluidics) and in electromechanical systems for harvesting ambient noise for powering a variety of auxiliary systems. It will be interesting to extend our study of motor efficiency to cases where the closed path in the parameter space is in the proximity of an EP degeneracy of higher order. It is plausible that the higher-order divergence of the resolvent will lead to a further enhancement of the total work. Similar questions emerge in the case where there is more than one EP in the proximity of the closed path in the parameter space. The possibility of extending these design schemes to the realization of optimal quantum motors [30] is also another promising direction. These, and other, questions will be addressed in a separate publication.
Supplementary Material: "Thermal Motors with Enhanced Performance due to Engineered Exceptional Points"
SI. Expressions for forces and work
In this section we derive the expressions for the forces and the work in terms of the instantaneous scattering matrix of the associated photonic network. In particular, we arrive at Eq. (5) of the main text.
The starting point is the definition of the force, Eq. (2) of the main text, which, in turn, requires knowledge of the field amplitude. The latter is given by the coupled-mode theory (CMT) in Eq. (3). Here, we assume that the dynamical time scales of the mechanical degrees of freedom (MDFs) $\boldsymbol{x}$ are much slower than the photonic time scales, i.e., we invoke the Born-Oppenheimer approximation. Under this approximation, we have the CMT in the frequency domain (we use the convention $f(t) = \int_0^\infty f(\omega)\,e^{-i\omega t}\,d\omega$ for the Fourier transform),

$$\Psi(\omega) = G_{\boldsymbol{x}}(\omega)\,D^{T}\theta^{(+)}(\omega), \qquad \theta^{(-)}(\omega) = S_{\boldsymbol{x}}(\omega)\,\theta^{(+)}(\omega), \qquad (S1)$$

where $\mathbb{I}_m$ is the m × m identity matrix and we assume that D is independent of ω and $\boldsymbol{x}$. Here, the field amplitude Ψ and the outgoing scattering field $\theta^{(-)}$ are dictated by the "frozen" or "instantaneous" effective Hamiltonian $H_{\rm eff} = H_0(\boldsymbol{x}) - i\,\frac{D^{T}D}{2}$, the Green's function $G_{\boldsymbol{x}}(\omega) = [\omega\,\mathbb{I}_N - H_{\rm eff}]^{-1}$, and the scattering matrix $S_{\boldsymbol{x}}(\omega) = -\mathbb{I}_{N_\alpha} + i\,D\,G_{\boldsymbol{x}}(\omega)\,D^{T}$. In order to keep the notation simple, from now on we will drop the index $\boldsymbol{x}$, and the dependence on $\boldsymbol{x}$ of $G_{\boldsymbol{x}} \equiv G$ and $S_{\boldsymbol{x}} \equiv S$ will remain implicit.
We obtain the generalized force by starting from Eq. (2) and using Eqs. (S1) and (4), where α labels both the reservoir and the resonator coupled to it. To arrive at the second line, we used Eq. (S1), noticing that the derivative of the Green's function satisfies $\nabla_{\boldsymbol{x}} G = G\,(\nabla_{\boldsymbol{x}} H_0)\,G$. These equations are adequate to predict the forces and, hence, the energy extraction capabilities, benefiting from different experimental situations: while the first expression requires the energy distribution inside the resonators of the photonic circuit via the Green's function, the second utilizes the scattering coefficients of the circuit.
Finally, we calculate the energy extraction capability of our motor, $W = \oint_C \boldsymbol{F}_{av}\cdot d\boldsymbol{x}$, by integrating the force as the generalized coordinates $\boldsymbol{x}$ move along a path C, resulting in Eq. (5) of the main text.
SII. Work density in terms of the Green's functions
In this section we derive the analytical expression for the work density, Eq. (7), in terms of the Green's functions. For simplicity, from now on we will consider that only the diagonal elements of the Hamiltonian change with $\boldsymbol{x}$, i.e. $(\nabla_{\boldsymbol{x}} H_0)_{n,m} = (\nabla_{\boldsymbol{x}} H_0)_{n,n}\,\delta_{n,m}$. Next, we consider a driving protocol such that only two resonators of our photonic network are driven, which we denote with indexes n = p, q; each of the resonant frequencies of those resonators, $\omega_n = (H_0)_{n,n}$, depends on only one coordinate $x_\nu$, i.e. $\partial\omega_n/\partial x_\nu = (\partial\omega_n/\partial x_n)\,\delta_{n,\nu}$. By using Eq. (S2), we calculate the work as

$$W = \oint_C \boldsymbol{F}_{av}\cdot d\boldsymbol{x} = \int_A \left(\nabla_{\boldsymbol{x}}\times\boldsymbol{F}_{av}\right)\cdot d\boldsymbol{A}, \qquad (S4)$$

where, to arrive at the second equality, we have used Green's theorem. It is useful to turn Eq. (S4) into a more compact expression by using the geometric integral $P_\alpha$ defined in Eq. (5). Finally, in the resulting work density per unit area (Eq. (S5)) we have used $\frac{\partial|G_{nm}|^2}{\partial x_j} = 2\,{\rm Re}\!\left[G^{*}_{nm}\,G_{nj}\,\frac{\partial\omega_j}{\partial x_j}\,G_{jm}\right]$, and, since $A = \int_A \frac{\partial\omega_p}{\partial x_p}\frac{\partial\omega_q}{\partial x_q}\,dx_p\,dx_q \to 0$, the Green's functions are evaluated at the center of the loop $\{x_p, x_q\}$.
SIII. Green's function near the EP
When the path C is traversed in the vicinity of an EP, the interplay of the coalescing resonances can dramatically affect the response of the system under small perturbations. This abrupt behavior can be related to the Green's function which, in the vicinity of a (second-order) EP, presents a sharp Lorentzian-squared resonance, evidenced through the modal expansion around the EP [S2],

$$G(\omega) = \sum_n \frac{\tilde{u}_n\,\tilde{v}_n^{T}}{\omega - \omega_n} \;\approx\; \frac{A}{\omega - \omega_{EP}} + \frac{B}{(\omega - \omega_{EP})^{2}}. \qquad (S6)$$

To arrive at the right-hand side, we assume $\omega \approx \omega_{EP}$, and then we can restrict the summation to the indexes n whose eigenfrequencies $\omega_n$, and the corresponding right (left) eigenvectors $\tilde{u}_n$ ($\tilde{v}_n$) of $H_{\rm eff}$, are associated with the EP. To simplify the notation, we denote them by $\omega_\pm$ and $\tilde{u}_\pm$ ($\tilde{v}_\pm$). Next, we use an expansion in a Newton-Puiseux series, invoking fractional powers of a perturbation parameter $p \ll 1$ [S3]:

$$\omega_{\pm} = \omega_{EP} \pm p^{1/2}\lambda_1 + p\,\lambda_2 \pm p^{3/2}\lambda_3 + \dots,$$
$$u_{\pm} = u_0 \pm p^{1/2}\lambda_1 u_1 + p\,w_2 \pm p^{3/2} w_3 + \dots,$$

where a similar expansion holds for $\tilde{v}_\pm$, and the Hamiltonian satisfies $H_{\rm eff} = H_{\rm eff}^{T} \approx H_0 + p\,H_1 + \cdots$. The EP occurs at p = 0, and there, the defective right (left) eigenvector $u_0$ ($v_0$) of $H_0$ and the associated Jordan vector $u_1$ ($v_1$) satisfy the Jordan chain relations, with the normalization conditions $v_0^T u_1 = 1$ and $v_1^T u_1 = 0$, and the properties $v_0^T u_0 = 0$ and $v_1^T u_0 = v_0^T u_1 = 1$. It follows that $\lambda_1 = \pm\sqrt{v_0^T H_1 u_0}$, which determines the leading order of the expansions above. By keeping these expansions up to leading order, we arrive at the right-hand side of Eq. (S6), where $A = u_1 v_0^T + u_0 v_1^T$ and $B = u_0 v_0^T$.
Two tumor types in a unilateral testis in a patient with severe oligozoospermia and a history of cryptorchidism surgery: A case report
Testicular cancer, the most common cancer among young male adults, is associated with infertility. A 38-year-old male patient was admitted to Dokkyo Medical University Saitama Medical Center, Japan, with infertility associated with severe oligozoospermia. Scrotal ultrasonography revealed two distinct tumors in the left testis: a mass with abundant blood flow on the cranial side and a mass with poor blood flow on the caudal side. Additional analysis revealed mild elevation of intact human chorionic gonadotropin (hCG) levels (tumor marker level assessment), high testosterone and low luteinizing hormone and follicle-stimulating hormone levels (hormonal level assessment), and severe oligoasthenozoospermia (semen assessment). The preoperative diagnosis was left-sided testicular cancer and severe oligoasthenozoospermia, and the patient underwent left high orchiectomy and oncological testicular sperm extraction. Based on the pathological assessment, the cranial tumor was diagnosed as a seminoma with syncytiotrophoblastic cells, whereas the caudal tumor had only scar tissue with germ cell neoplasia in situ in the adjacent parenchyma. Following surgery, the intact hCG and hormone levels of the patient were normalized, and the semen parameters (semen volume, sperm density, and motility) improved dramatically. To the best of our knowledge, the present case is the first report of two types of testicular tumor in a unilateral testis in a patient with a history of cryptorchidism surgery. The present case demonstrated that scrotal ultrasonography should be performed in patients with abnormal semen results to rule out testicular tumors.
Introduction
Testicular cancer is the most common type of cancer in men aged 14-44 years; its incidence has increased over the past two decades in Western countries (1). Approximately 50% of patients with testicular cancer are diagnosed with seminoma, whereas the remaining are diagnosed with various types of non-seminoma or mixed testicular germ cell tumors (1). The implementation of cisplatin-based chemotherapy regimens and refinement of surgical procedures have improved long-term survival. The cure rate in all patients with testicular cancer and those with metastatic disease is >95 and 90%, respectively (2). Undescended testis, contralateral testicular tumor, and familial testis cancer are established risk factors for testicular cancer (3). Moreover, there is a proven correlation between infertility and testicular cancer, and infertile men with semen abnormalities are 20 times more likely to develop testicular cancer (4). Future fertility is a concern for young patients undergoing cancer treatment (5). Oligozoospermia is present in >50% of patients with testicular tumors before treatment (6), and testicular tumors are sometimes identified during infertility examinations (7). Declining semen quality in testicular cancer could be due to mechanical loss of physical testicular volume in the affected testis, paracrine and endocrine effects on the ipsilateral and contralateral testis from the tumor, and congenital factors (5).
The present report describes a case in which a patient with infertility and a history of cryptorchidism surgery was diagnosed with two types of testicular tumor in one testis; semen parameters and hormonal status improved following high orchiectomy.
Case study
A 38-year-old male patient was referred to Dokkyo Medical University Saitama Medical Center (Saitama, Japan) in April 2021 with oligozoospermia, detected during investigation of infertility. The patient had a history of surgery for left cryptorchidism during infancy and no medication history. A physical examination revealed no abnormalities in the testes. Scrotal color Doppler ultrasonography showed that the right testis was normal; however, the left testis had a mass with clear margins and abundant blood flow in the cranial part and a mass with clear margins but poor blood flow in the caudal part (Fig. 1). Magnetic resonance imaging showed diffusion limitation in the cranial, but not the caudal, part of the left testis (Fig. 2). Chest-abdomen-pelvis computed tomography (CT; TSX-301C/3A, Canon Medical Systems) did not reveal any metastasis.
The preoperative diagnosis was left testicular cancer and severe oligoasthenozoospermia. The patient underwent left high orchiectomy and oncological testicular sperm extraction (onco-TESE). Gross examination of the extracted left testis revealed a reddish-brown mass in the cranial and a grayish-white mass in the caudal part (Fig. 3). Pathological assessment was performed on formalin-fixed, paraffin-embedded (FFPE) tissue blocks of the surgical specimen stained with hematoxylin and eosin. Immunohistochemical staining for octamer binding transcription factor (OCT)-3/4, D2-40, hCG, SALL4, and testosterone was performed on the FFPE tissue blocks. Samples were fixed in 10% neutral buffered formalin at room temperature for 24 to 48 h and sectioned at a thickness of 4 µm. Antigen retrieval was performed using EnVision FLEX Target Retrieval Solution, High pH (Agilent Technologies; 97°C, 20 min). Quenching was performed using EnVision FLEX Peroxidase-Blocking Reagent, a hydrogen peroxide solution (ready to use; Agilent Technologies). The following primary antibodies were used, incubated at room temperature for 30 min: OCT-3/4 (1:100; NCL-L-OCT3/4, Leica Biosystems), D2-40 (ready to use; 713451, Nichirei Biosciences), hCG (ready to use; IS508, Dako), SALL4 (1:1,000; H6271-6E3, Sigma-Aldrich), and testosterone (1:400; cat. no. ab217912, Abcam). The secondary antibody was EnVision FLEX/HRP (ready to use; K8000, Agilent Technologies), incubated at room temperature for 20 min. The EnVision FLEX DAB+ Substrate Chromogen System (Agilent Technologies) was used for chromogen detection, while Mayer's hematoxylin solution (room temperature, 30 sec) was used for counterstaining. Pathological assessment of the cranial tumor demonstrated a proliferation of tumor cells with round nuclei and well-defined nucleoli. The tumor cells were OCT-3/4+ and D2-40+, and had the characteristics of a seminoma with numerous hCG+ trophoblastic cells. The caudal tumor was composed of vitrified material with few cellular components and no evidence of malignancy. Dysplastic cells with round nuclei and well-defined nucleoli were observed in the adjacent intratubular parenchyma. The dysplastic cells were OCT-3/4+ and SALL4+ and had the characteristics of germ cell neoplasia in situ (GCNIS; Fig. 4). Both tumors were negative for testosterone. The pathological findings of onco-TESE were a small number of spermatocytes and spermatozoa in a few seminiferous tubules (Johnsen score, 5.4) (9). Based on the pathology, the cranial tumor was diagnosed as a seminoma with syncytiotrophoblastic cells, and the caudal tumor was diagnosed as a regressed germ cell (GC) tumor.
Following surgery, the patient was followed up without medication. At 1 month post-surgery, hormone level assessment demonstrated improvements in several hormone levels (testosterone, 5.24 ng/ml; estradiol, 10.5 pg/ml; LH, 6.5 mIU/ml; and FSH, 5.6 mIU/ml). Furthermore, the intact hCG at 10 months after surgery was almost undetectable (<0.5 mIU/ml). Semen analysis 2 months after surgery demonstrated an improvement in semen parameters (semen volume, 5.0 ml; sperm density, 29x10^6 sperm/ml; and motility, 44.8%). The patient and his partner achieved spontaneous conception 12 months after surgery, and a healthy baby was born 22 months post-surgery. As of November 2023, 30 months after surgery, the patient had no recurrence at CT follow-up checks and no elevation of serum tumor marker levels.
Discussion
The risk of testicular tumor is 4.8-fold higher in patients with a history of cryptorchidism, which is an established risk factor for testicular tumors (3); however, the mechanism underlying the association between cryptorchidism and testicular tumors remains unclear (10). In the present case, two distinct tumors were noted in the left testis of a patient with a history of cryptorchidism, suggesting that cryptorchidism may be associated with tumor development.
Patients with testicular tumors in one testis are more likely to develop contralateral testicular tumors than patients without testicular tumors (3). There are numerous reports of bilateral testicular tumor development (3,11), but no reports of two types of testicular tumors in one testis, to the best of our knowledge. Therefore, the present case is rare.
In the present case, the tumor on the caudal side had regressed, resulting in a lack of symptoms. Hence, it is possible that the tumor on the cranial side (the seminoma) would not have been detected until it increased in size. Early detection was achieved via scrotal ultrasonography during an infertility examination. As certain patients may have no symptoms, scrotal ultrasonography should be performed in those with abnormal semen results to rule out testicular tumors. The cranial seminoma may be considered a metastatic lesion of the caudal tumor, and the caudal tumor, which had only scar tissue, may be considered a regressed GC tumor. Pathological findings of regressed GC tumors typically include scarring, decreased spermatogenesis, and microlithiasis (12). Notably, the findings of GCNIS in the adjacent parenchyma, and coarse and large intratubular calcifications, have been suggested to be specific for GC tumor regression rather than non-neoplastic scarring (12). Non-neoplastic scarring secondary to ischemia, trauma, or infection is typically seen in testes lacking diffuse atrophy and is often multifocal. Additionally, non-neoplastic scarring may be associated with vascular lesions such as thrombi and vasculitis and is not associated with more specific features of regression such as GCNIS, and coarse and large intratubular calcifications. Nodular and stellate atrophy with interstitial fibrosis in testicular regressed GC tumors is distinguished from pure atrophy (13). The patient in the present case had a distinct nodular scar with GCNIS in the adjacent parenchyma, which may indicate a regressed GC tumor. The association between male infertility and testicular tumors is well-established, and ≤50% of patients with testicular tumors prior to high orchiectomy have abnormal semen parameters (4,14). In the present case, hormonal status (high testosterone and low LH and FSH levels) and semen parameters improved notably following resection of the testicular tumors. This indicated that the testicular tumors caused hormonal abnormalities and infertility. Pathological findings demonstrated no testosterone production in either tumor; however, the seminoma contained syncytiotrophoblast cells that were positive for hCG.
Previous studies have reported that in patients with testicular tumors and elevated blood β-hCG levels, hCG has an LH-like effect, gonadotropin production is suppressed, and blood testosterone and estradiol levels are increased (15,16). In the present case, the blood hCG-β levels were within the normal range; however, the intact hCG levels in the blood were mildly elevated, which may have contributed to the increase in testosterone levels. It is likely that hCG concentrations were higher in the left testis than in the blood, as hCG is produced by the cranial tumor, a seminoma with syncytiotrophoblast cells (17). The high hCG environment in the left testis may have stimulated the production of testosterone by Leydig cells, which in turn suppressed LH and FSH secretion by the pituitary gland via negative feedback. As a result, spermatogenesis may have been notably inhibited, causing severe oligoasthenozoospermia. In addition, an increase in blood estradiol levels has a negative feedback effect on the activity of the hypothalamic-pituitary-gonadal axis (18). In the present case, the estradiol levels were elevated, and suppression of gonadotropin production may have led to progressive dysfunction of spermatogenesis. Testicular tumors promote production of several hormones (e.g., hCG-β, estradiol, and prolactin) and cytokines (e.g., interleukin-1, interleukin-6, and tumor necrosis factor-α) that notably change the intratesticular environment (19). These changes cause spermatogenic dysfunction. The present case is a good clinical example of changes in multiple hormone levels due to testicular tumor treatment improving semen parameters.
In conclusion, the present report describes the first case, to the best of our knowledge, in which two types of testicular tumors were found in a unilateral testis in a patient with a history of cryptorchidism surgery. The present report demonstrated that scrotal ultrasonography should be performed in patients with abnormal semen results to rule out testicular tumors.
Figure 1. Preoperative ultrasonography of the left testis. (A) Ultrasonographic image shows an isoechoic mass with clear margins in the cranial part and an isoechoic mass with clear margins in the caudal part. (B) Color Doppler ultrasonographic image shows abundant blood flow in the cranial and poor blood flow in the caudal mass. The red and white arrows indicate the cranial mass and caudal mass, respectively.
Figure 2. Preoperative magnetic resonance imaging of the left testis. (A) Magnetic resonance image of two tumors at T2 in the left testis. (B) Diffusion-weighted image shows hyperintensity in the cranial and hypointensity in the caudal part. The red and white arrows indicate the cranial tumor and caudal tumor, respectively.
Figure 3. Intraoperative and gross examination findings. (A) Intraoperative image of the left testis and (B) cross-section of the formalin-fixed left testis show a reddish-brown mass in the cranial and a grayish-white mass in the caudal part. The red and white arrows indicate the cranial tumor and caudal tumor, respectively.
Figure 4. Pathological assessment. (A) Hematoxylin and eosin staining of the left testis indicated substantial growth of tumor cells with round nuclei and well-defined nucleoli in the cranial tumor (magnification, x400). (B) Vitrified material with few cellular components and no malignancy in the caudal tumor, with dysplastic cells with round nuclei and well-defined nucleoli in the adjacent parenchyma (magnification, x100). (C) Immunohistochemical assessment indicated numerous human chorionic gonadotropin-positive trophoblastic cells in the cranial tumor (magnification, x400) and (D) octamer binding transcription factor-3/4-positive cells in the parenchyma near the caudal tumor (magnification, x100).
Table I. Pre- and post-surgery blood and semen parameters. LDH, lactate dehydrogenase; AFP, α-fetoprotein; hCG, human chorionic gonadotropin; LH, luteinizing hormone; FSH, follicle-stimulating hormone. (a) Reference intervals were set by the Department of Clinical Laboratory at Dokkyo Medical University Saitama Medical Center.
Spontaneous Remission of Epileptic Seizures Following Norovirus Infection in a Patient With DNM1 Encephalopathy
Epileptic seizures can be worsened by infections; however, they sometimes disappear or decrease after an acute viral infection, although this is rare. We report the spontaneous remission of epileptic seizures following norovirus-induced viral gastroenteritis in a boy with DNM1 encephalopathy. He had clonic seizures daily from the age of two months and developed epileptic spasms at 14 months of age; he was admitted to the hospital at this time. A physical examination revealed hypotonia, strabismus, tongue protrusion with drooling, and widely spaced teeth. Although brain magnetic resonance imaging was unremarkable, electroencephalography revealed frequent occipital spikes. Three days after admission, the patient developed frequent diarrhea without a fever. A rapid immunochromatographic test for norovirus in a stool sample was positive. Immediately after the appearance of diarrhea, the epileptic seizures disappeared. Currently, at the age of five years, the patient has a profound psychomotor developmental delay; he has no verbal expression and is unable to walk. He has experienced involuntary movements of the myoclonus type since 10 months of age. Whole-exome sequencing of the patient's DNA revealed the presence of a heterozygous de novo variant of DNM1: c.709C>T (p.Arg237Trp). Although the findings from our patient suggest that underlying neural network abnormalities were ameliorated by immunological mechanisms as a result of the viral infection, further research is needed to clarify the mechanisms behind this spontaneous remission of seizures.
Introduction
In children with epilepsy, viral infections can often worsen seizures. One such viral infection is norovirus. Norovirus infection is a common cause of gastroenteritis, leading to vomiting and diarrhea. However, it has also been reported that epileptic seizures, especially those associated with infantile epileptic spasms syndrome, can disappear or decrease after viral infections, albeit rarely [1,2].
The DNM1 gene encodes dynamin 1, a GTPase that plays a crucial role in the catalysis of clathrin-mediated endocytosis and synaptic vesicle recycling, which is necessary for signaling pathway function and central nervous system development. Heterozygous variants in DNM1 are associated with epileptic encephalopathy, such as infantile epileptic spasms syndrome and Lennox-Gastaut syndrome. Patients with DNM1 variants reportedly have severe intellectual disability, a lack of speech, hypotonia, and an inability to walk [3].
We herein report the spontaneous remission of epileptic seizures following norovirus-induced viral gastroenteritis in a patient carrying a DNM1 variant.
Case Presentation
A boy was the first child of healthy, nonconsanguineous parents. He was born at 41 gestational weeks by normal vaginal delivery following an uneventful pregnancy. His father's sister had Turner syndrome. He experienced daily clonic seizures from two months of age; these were treated with carbamazepine and phenobarbital. Additionally, he had myoclonus from 10 months of age. His psychomotor development was delayed; he acquired head control at four months, rolled over at seven months, and was able to sit with support at 13 months. The patient was admitted to the hospital at 14 months because he developed frequent seizures suspected to be epileptic spasms, consisting of brief tonic contractions of the axial muscles. These occurred two to three times daily, with each series consisting of 10-30 seizures. After their onset, motor regression occurred, and the patient became unable to roll over or sit with support.
Physical examination revealed hypotonia, strabismus, tongue protrusion with drooping, and widely spaced teeth. Brain magnetic resonance imaging findings were unremarkable. An electroencephalogram revealed frequent occipital spikes without hypsarrhythmia (Figure 1a). Three days after admission for the treatment of epilepsy, the patient developed frequent diarrhea with no fever. A rapid immunochromatographic test of a stool sample was positive for norovirus. The diarrhea improved over approximately one week. Immediately thereafter, the patient's seizures disappeared spontaneously without changes to his antiepileptic drugs. To date, he has experienced no seizures for approximately four years, and an electroencephalogram at the age of three years and four months revealed no spikes (Figure 1b). However, myoclonus remained present and was treated with clonazepam and valproic acid. The patient regained the ability to roll over at one year and 11 months and crawled at four years. Currently, at the age of five years, he has a profound psychomotor developmental delay; he has no verbal expression and is unable to walk. Whole-exome DNA sequencing revealed that he is heterozygous for a known pathogenic DNM1 variant (NM_004408.4:c.709C>T, p.Arg237Trp). His parents lack the variant, indicating that it occurred de novo.
Discussion
In children with epilepsy, acute viral infection often worsens epileptic seizures. Moreover, both norovirus- and rotavirus-induced gastroenteritis are known inducers of seizures in children [4,5]. However, some viral infections, including exanthema subitum, rotavirus colitis, and measles or herpes stomatitis, may also improve intractable seizures [6]. For example, Hattori reported that 86% of spontaneous seizure remissions are preceded by viral infections such as exanthema subitum, rotavirus gastroenteritis, measles, or chickenpox [1]. Similarly, Yamamoto et al. reported the disappearance of epileptic seizures subsequent to viral infections such as exanthema subitum, rotavirus colitis, measles, or mumps [2]. In our patient with DNM1 encephalopathy, we observed the spontaneous remission of epileptic seizures following norovirus infection.
The mechanisms by which viral infection may affect the pathophysiology of epilepsy are unknown. Moreover, the mechanism by which epileptic seizures are kept in remission for a long time is also unclear. Individuals with norovirus infection exhibit an early elevation of chemokines such as IL-8 and monocyte chemoattractant protein-1 and a persistent elevation of IL-10 [7]. IL-10 expression protects neurons and glia in the brain, mainly by inhibiting proapoptotic cytokines and stimulating protective signaling reactions [8]. Although IL-10 might be related to the cessation of epileptic spasms, we did not perform immunological investigations in the present case. However, we speculate that underlying neural network abnormalities were ameliorated by immunological mechanisms caused by the viral infection rather than by a specific immune response to DNM1 encephalopathy. Pathogenic DNM1 variants affect brain development and function and cause epileptic encephalopathy with severe neurodevelopmental complications [3,9]. A c.709C>T variant in DNM1 was identified in our patient; this has previously been reported as a common variant. For example, Li et al. reported that c.709C>T was the most common variant in patients with DNM1 variants (identified in eight of 33 patients) [10], and von Spiczak et al. reported that it was the most common pathogenic variant in patients with DNM1 encephalopathy. Patients carrying this variant have homogeneous phenotypes, displaying infantile spasms with developmental delay before seizure onset and progressing to refractory epilepsy and movement disorders such as hyperkinetic movement and dystonia [11]. Similarly, our patient experienced epileptic seizures with suspected epileptic spasms, severe developmental delay, and myoclonus.
Epilepsy is generally intractable in patients carrying DNM1 variants, and the efficacy of antiepileptic drugs is limited. Notably, although many genetic variants have been identified in patients with epileptic encephalopathy using next-generation sequencing [12], there have been no reports of the genetic basis of patients who experience spontaneous remission of intractable seizures following infection. Clarification of this point may lead to new therapeutic options for epileptic seizures.
FIGURE 1: Drug-induced sleep electroencephalogram findings. (a) Electroencephalogram showing frequent occipital spikes (black arrow) at 14 months of age. (b) No spikes were evident at three years and four months of age.
Second-Harmonic Young Interference in Atom-Thin Heterocrystals
Optical second-harmonic generation (SHG) is a nonlinear parametric process that doubles the frequency of incoming light. Only allowed in non-centrosymmetric materials, it has been widely used in frequency modulation of lasers, surface scientific investigation, and label-free imaging in the biological and medical sciences. Two-dimensional crystals are ideal SHG materials not only for their strong light-matter interaction and atomic thickness defying the phase-matching requirement but also for their stackability into customized hetero-crystals with high angular precision and material diversity. Here we directly show that SHG in hetero-bilayers of transition metal dichalcogenides (TMDs) is governed by optical interference between two coherent SH fields with material-dependent phase delays, using spectral phase interferometry. We also quantify the frequency-dependent phase difference between MoS2 and WS2, which agrees with polarization-resolved data and first-principles calculations of the complex susceptibility. The second-harmonic analogue of Young double-slit interference shown in this work demonstrates the potential of custom-designed parametric generation by atom-thick nonlinear optical materials.
semiconducting TMDs 8, and single-photon emitters of insulating hBN 9. Their interaction with light is further diversified and strengthened not only by their widely varying electronic structures but also by their low dimensionality and reduced dielectric screening 10. The second-order polarization responsible for SHG is also greatly enhanced by the strong excitonic resonances in TMDs 11,12. The complex nature of the nonlinear susceptibility arising from light absorption 13 provides another control, the phase delay between the fundamental and SH waves, with which to manipulate the photonic process. In this work, we report the interferometric mixing of SH light generated in van der Waals (vdW) hetero-crystals consisting of single layers (1Ls) of MoS2 and WS2. We also show that the mixing is governed by SHG phase delays characteristic of the materials, and we quantify them by spectral phase interferometry. VdW stacks of 2D crystals are an excellent photonic system not only for their distinctive electronic structures but also for their facile integration into photonic structures including waveguides 14 and cavities 15.
As model systems (Fig. 1a & Fig. S1), homo-bilayers of MoS2/MoS2 (2LMoMo) and hetero-bilayers of MoS2/WS2 (2LMoW) were fabricated on fused quartz substrates by the deterministic dry transfer method 16,17 (see Methods). The stack angle (θs: 0–60°) between the two single layers (1Ls), as defined in Fig. 1b, could be controlled to within one degree during the transfer step using the crystallographic orientations of each layer determined by SHG measurements. The step height determined from the topographic AFM image in Fig. 1a was 1.0 ± 0.2 nm, which indicated that the vdW gap size of 2LMoW was close to that of 2H-type bilayers 18, and thus the two 1Ls were in good contact. The average gap size obtained for multiple samples of 2LMoW and 2LMoMo was 1.0 ± 0.1 nm (see Fig. S1 for more samples and Methods for post-stacking treatments). Raman and photoluminescence spectroscopy showed that the individual 1Ls were of high quality and that the artificial stacking did not induce significant changes (Fig. S2). As schematically shown in Fig. 1b, the frequency-doubling process was induced in the samples by a plane-polarized fundamental beam (frequency ω) focused with a refractive objective lens. I∥, the SHG signal parallel to the electric field E of the fundamental beam, was collected using a polarizer from the bilayers and their unstacked 1Ls while varying the azimuthal angle (φ) of E in the basal plane (Fig. 1b). The second-order susceptibility tensor of the D3h¹ space group, to which 1L MoS2 and WS2 belong, requires that I∥ be proportional to cos²(3φ) and reach a maximum when E is parallel to the armchair directions, as marked in Fig. 1b (see Supplementary Note A) 19. Indeed, the unstacked 1L areas of 2LMoMo (Fig. 1c) and 2LMoW (Fig. 1d) obeyed the predicted angular relation, exhibiting 6-fold symmetry with angular nodes as φ was varied by rotating the sample. The difference in the angles of maximum intensity, 33° (34°) for 2LMoMo (2LMoW), corresponded to θs. Between the two candidate angles (one < 30° and the other > 30°) for θs, the one bisected by the six lobes of the bilayers (blue circles in Fig. 1c & d) was defined as θs.
We found that the SHG response of the hetero-bilayers was distinct from that of the homo-bilayers. The polar graph of 2LMoMo, blue-shaded in Fig. 1c, also exhibited 6-fold symmetry with obvious nodes like those of the 1Ls. As the bottom and top MoS2 1Ls are coherently polarized by the fundamental pulse at 800 nm, the SHG signal is the superposition of the SH fields generated in both layers 19,20. This interpretation was directly confirmed by the fact that the data of 2LMoMo matched well with the blue dotted line representing the vectorial superposition of the SH fields from the two individual MoS2 1Ls (see Supplementary Note B). In contrast, 2LMoW (blue-shaded in Fig. 1d) lacked nodes despite its 6-fold symmetry, which could not be explained by the simple superposition (blue dotted line in Fig. 1d). Notably, its minimum intensity was substantially high (37% of the maximum), unlike that of 2LMoMo, which typically remained below 0.5% (Fig. 1e). The anomaly was observed in multiple hetero-bilayers with various stack angles. Whereas all the samples exhibited the 6-fold symmetry (Fig. S3), the minimum/maximum intensity ratio (R) was higher for larger θs, but the opposite held for θs > 40°, as shown in Fig. 1f.
The anomaly suggests that the SH light from hetero-bilayers contains complexity beyond a simple plane polarization. The SHG polar graphs remained unchanged (Fig. S4) after vacuum annealing, which drastically affected the interface quality (Fig. S1). This fact implied that the anomaly is induced by neither charge nor energy transfer. To anatomize the polarization state of the SHG signals, we performed polarization-resolved measurements by rotating the analyzing polarizer located in front of the detector (see Methods). As shown in Fig. 2a, 2LMoMo, obeying the Malus law, generated plane-polarized signals like the 1Ls, which is consistent with the tensor model (Supplementary Note A). The signals of 2LMoW, however, were elliptically polarized, with a ratio of 0.37 ± 0.03:1 between the minor and major axes irrespective of the sample orientation (φ). This observation is reminiscent of polarization mixing by a quarter-wave plate made of birefringent materials. As depicted in Fig. 2b, two plane-polarized light waves with zero phase difference generate another plane-polarized light wave. With a finite phase difference, however, the superposition leads in general to a light wave of elliptical polarization.
It can then be seen that the phase difference (ϕMoW = ϕMo − ϕW) between MoS2 and WS2 governs the SHG interference along with the stack angle, as illustrated in Fig. 2c. Note that ϕMo and ϕW represent the phase delays of the SH fields generated in MoS2 and WS2, respectively, with respect to the fundamental fields. Furthermore, one can determine the phase difference from the polarization-resolved data shown in Fig. 1d and Fig. S3 using the interference model of SH waves (Supplementary Note B). The minimum/maximum intensity ratios (R) in Fig. 1f were best described by the solid line representing ϕMoW = 61° at 800 nm. The value also agreed well with the average (61.0 ± 7.5°) obtained by fitting the data from multiple samples (Fig. S3). This finding reveals that the phase delay between the fundamental and SH waves depends substantially on the material. As described below, the phase difference also exhibited a strong dependence on photon energy.
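To make the interference picture concrete, the following minimal Python sketch models the parallel-polarized bilayer SHG as the coherent sum of two D3h single-layer fields with a stack-angle offset and a material phase delay. It is an illustration of the superposition idea, not the exact model of Supplementary Note B; the amplitudes and the 61° phase difference are illustrative values taken from the numbers quoted above.

```python
import numpy as np

def bilayer_shg_intensity(phi, stack_deg, dphi_deg, a_bot=1.0, a_top=1.0):
    """Parallel-polarized SHG intensity I(phi) of a stacked bilayer, modeled as
    the coherent superposition of two D3h single-layer SH fields.
    phi       : azimuthal angle of the fundamental polarization (rad, array)
    stack_deg : stack angle between the two layers (degrees)
    dphi_deg  : material-dependent SHG phase difference (degrees)"""
    ths = np.deg2rad(stack_deg)
    dphi = np.deg2rad(dphi_deg)
    e_bot = a_bot * np.cos(3 * phi)                              # bottom-layer SH field
    e_top = a_top * np.cos(3 * (phi - ths)) * np.exp(1j * dphi)  # phase-delayed top layer
    return np.abs(e_bot + e_top) ** 2

phi = np.linspace(0.0, 2.0 * np.pi, 3601)

# Homo-bilayer-like case: zero phase difference -> sharp angular nodes (R ~ 0).
i_homo = bilayer_shg_intensity(phi, stack_deg=33.0, dphi_deg=0.0)
# Hetero-bilayer-like case: finite phase difference -> nodes lifted (R > 0).
i_hetero = bilayer_shg_intensity(phi, stack_deg=34.0, dphi_deg=61.0)

for name, i in [("2LMoMo-like", i_homo), ("2LMoW-like", i_hetero)]:
    print(f"{name}: R = {i.min() / i.max():.3f}")  # minimum/maximum intensity ratio
```

Running the sketch shows the qualitative behavior reported above: the zero-phase case retains true nodes, whereas a finite ϕ lifts the minima to a substantial fraction of the maximum.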
Using spectral phase interferometry 13,21,22 as an independent and the most definitive probe, we directly measured the phase delay of the individual TMD layers of the heterostructures (Supplementary Note C). As shown in Fig. 3a (see Methods), the reference SHG pulse (2ω_ref) generated in an α-quartz crystal was delayed by τ (2.86 ps for Fig. 3b) behind the sample SHG pulse (2ω_sample) because of the finite optical dispersion between ω and 2ω induced by the optical materials shown in Fig. 3a. During diffraction by a grating in the spectrometer, the two coherent pulses, with a temporal width of ~100 fs, were stretched to ~300 ps and overlapped each other in space and time at the CCD detector plane (Supplementary Note C). Unlike conventional intensity spectra, the SHG interferograms contained prominent oscillations, as shown in Fig. 3b for 1LMo. Whereas the oscillation period of the interferograms in the frequency domain is inversely proportional to τ 21, the positions of the crests and valleys depend on the phase delay defined with respect to the reference SHG signal from α-quartz (Supplementary Note C). The interferograms in Fig. 3c present only the oscillating components, with the rest removed using Fourier transform analysis. We first confirmed that the interferograms shifted by half of one period when the 1LMo sample was rotated by 60° or its multiples, as in Fig. 3c (top) (also see Fig. S5a for the phase inversion near 30°). Because such rotations invert the lattice with respect to the polarization of the fundamental beam, this observation validates that the offset in energy corresponds to the phase difference between the two SHG signals. The phase values of 1LMo were also consistent within three degrees at 800 nm across the sample (Fig. S5b). Even the homo-bilayer area of 2LMoMo (θs = 33°) gave phase values identical to those of each monolayer (Fig. S5c).
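In schematic form, a standard spectral-interferometry expression (written here for illustration, not quoted from the paper) captures why the interferogram encodes the phase: the detected spectrum of the two delayed SH pulses is

$$
S(2\omega) \;=\; \left|E_{\mathrm{sam}}(2\omega)\right|^{2} + \left|E_{\mathrm{ref}}(2\omega)\right|^{2} + 2\left|E_{\mathrm{sam}}\right|\left|E_{\mathrm{ref}}\right|\cos\!\big(2\omega\,\tau + \Delta\phi(2\omega)\big),
$$

so the fringe period in the frequency domain is $2\pi/\tau$, and the positions of the crests shift with the sample phase $\Delta\phi$ measured against the α-quartz reference, exactly as described above.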
Remarkably, the interferograms of the 1LMo and 1LW areas in the 2LMoW sample showed a substantial offset in energy, as shown in Fig. 3c (middle) (see Fig. S6 for the optical micrograph and raw interferograms). The inter-material phase difference (ϕMoW) corresponding to the displacement was 61° at 800 nm and decreased significantly to 32° at 900 nm (Fig. 3c, bottom).
In Fig. 3d, we present two sets of ϕMoW values independently obtained from the interferometry (Fig. 3) and the polarized SHG measurements (Fig. 1) for a wide range of fundamental photon energies (ℏω). Above all, both methods yielded highly consistent results for ϕMoW, substantiating the interference model involving complex susceptibility (Fig. 2 and Supplementary Note B). The agreement indicates that interlayer interactions hardly affect ϕMoW, because the interferometry probed 1L regions, unlike the angle-resolved SHG. It is also notable that the phase difference decreased drastically and finally reached zero as the photon energy was lowered from 1.55 eV (800 nm) to 1.24 eV (1000 nm). For the highest energy that could be handled with the setup (1.70 eV, 730 nm), the phase was even larger than that for 1.55 eV.
To unravel the origin of the energy and material dependence, we performed first-principles calculations of the second-order susceptibility χ(2) (see Methods and Supplementary Note D for the details of the density functional theory calculations). The SHG phase values of both monolayers, extracted from the real and imaginary parts of χ(2) (Fig. S7), remained near zero for energies below 0.7 eV and exhibited a noticeable difference from each other for fundamental energies above 1.3 eV, which can be seen more clearly in the calculated ϕMoW shown in Fig. 3d. We also note that the theory predicted the experimental data reasonably well.
Viewing the amplitude of χ(2), which dictates the SHG intensities (Fig. S7d), together with the optical absorption 23, the rise of ϕMoW at 1.3 eV was attributed to the distinctive band structures and the unequal optical transitions, mostly by C excitons at ~2.8 eV (= 2ℏω), in the two materials 11,23. The energy region above 1.6 eV, where ϕMoW increased further, is occupied by intense optical transitions of D excitons 24. In the picture of a driven harmonic oscillator, finite damping (light absorption) at the second-harmonic frequency leads to a phase delay with respect to the fundamental driving field 13. Nonzero ϕMoW and its frequency dependence are then due to material-dependent resonance frequencies and damping.
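For illustration, a textbook Lorentz-oscillator expression (added here, not taken from the paper) makes the damping argument quantitative: a resonance at $\omega_0$ with damping rate $\gamma$, driven at the second-harmonic frequency $2\omega$, responds with a phase delay

$$
\delta(2\omega) = \arctan\!\left(\frac{\gamma\,(2\omega)}{\omega_{0}^{2} - (2\omega)^{2}}\right),
$$

which vanishes far below resonance and grows as $2\omega$ approaches $\omega_0$. Material-dependent resonance positions and damping, such as the different excitonic transitions of MoS2 and WS2, then translate directly into a material- and frequency-dependent SHG phase.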
In summary, we reported optical second-harmonic interference occurring in the two-dimensional limit of atomic thickness. The SHG signals from artificial 2D hetero-crystals of MoS2/WS2 underwent coherent superposition and exhibited complicated polarization behavior for varying stack angle and photon energy. Using spectral phase interferometry and polarized SHG, we directly measured the inter-material difference of the SHG phase originating from the differential interactions of the two materials with light. First-principles calculations of the second-order susceptibilities revealed its electronic origins, also verifying the superposition model. This work will contribute toward creating novel nonlinear optical and photonic applications using low-dimensional materials.
Figure and captions

Fig. 1.
Fig. 2. Elliptical polarization induced by material-dependent SHG phase. (a) Polarization analysis of SHG signals from 2LMoMo and 2LMoW. The sample was rotated in steps of 10° to give three different angles. The polar graphs are given as a function of the angle (α) between the analyzing polarizer and the polarization of the fundamental beam. Solid lines are fits to the data: plane polarization (cos²α) for 2LMoMo and elliptical polarization (cos²α + r² sin²α) for 2LMoW. (b) Superposition of two plane-polarized SH fields without (2LMoMo) and with (2LMoW) phase difference. (c) Schematic representations of phase-delayed SHG in 2LMoMo (left) and 2LMoW (right), respectively.
Effect of Community-Based Health Education Campaign Based on the Theory of Planned Behavior on Reduction of Hookah Smoking Among Women in Hormoz Island in the South of Iran
Background: Hookah smoking is considered a health problem in women, and interventions should be designed to reduce it. This study aimed to explore the effect of an educational campaign based on the theory of planned behavior (TPB) on the reduction of hookah smoking. Materials and Methods: This quasi-experimental research was conducted on 177 female hookah smokers above 15 years of age residing in Hormoz Island. The data collection instrument contained two sections: demographic information and the constructs of the TPB. The questionnaire was completed twice, before the educational intervention and 3 months after the intervention. The educational campaign was conducted by making cell phone calls, putting up billboards, distributing pamphlets, holding face-to-face classes, and forming a peer group. The data were analyzed using the paired-sample t-test and McNemar's test in SPSS version 23.0. Results: The mean age of the participants and the mean age at which the participating women began smoking hookah were 34.16 ± 10.47 and 22.20 ± 8.45 years, respectively. After the educational campaign, there was a significant increase in the scores of the TPB constructs (attitude, subjective norms, perceived behavioral control, behavioral intention) and a reduction in hookah smoking among the participants (P < 0.05). Moreover, the frequency of smoking hookah per month and per week showed a statistically significant reduction among the participants after the administration of the educational campaign (P < 0.05). During this time, 6 participants stopped smoking hookah. Conclusion: In the light of the present findings, it can be concluded that the educational campaign based on the TPB significantly reduced hookah smoking among girls and women.
has been increasing in many countries and there is evidence that hookah smoking by girls and women is significantly on the rise (5). In Iran, this disease is more prevalent in the south compared to other geographical areas (6).
The related literature showed that the prevalence of smoking hookahs was reported to be 14.8% in Hormozgan (7) and 13.6% in Bandar Abbas (8). Moreover, the related literature showed that hookah smoking is more prevalent among women than men (9).
Different factors have increased the rate of hookah smoking among women. These factors include a positive attitude towards hookah consumption, social and psychological needs, social and cultural acceptance of hookah, easy access, lack of prohibitory rules and regulations, and a low perceived risk of hookah smoking (9). In fact, the majority of hookah smokers believe that the adverse effects of hookah smoke are fewer than those of cigarettes. However, scientific evidence has shown the presence of carcinogenic and toxic materials in hookah smoke, such that a single hookah session can deliver carcinogenic aromatic hydrocarbons into the body at up to 50 times the level of one cigarette (10,11).
Considering the increasing trend of hookah smoking especially among women, if effective and appropriate changes are not made in order to prevent or reduce hookah smoking, along with the increasing prevalence of consumption, smoking can affect the whole population (12).
Education is one of the most applicable ways of reducing tobacco consumption (13). Therefore, designing and implementing educational programs to reduce hookah smoking among girls and young women seem essential.
Education is a fundamental basis of preventive measures that can cause stable changes in people's attitude and performance and can finally change their lifestyle (14). A study showed that educational campaigns were effective in reducing the rate of tobacco consumption (15).
A campaign refers to a series of informative, communicative, and educational activities, combined through different channels, that aim to convey target messages to a given population within a particular time limit to serve a particular purpose (16). Research findings have shown that an effective campaign carrying reliable messages influences public attitude and behavior. If a campaign addresses a particular sub-section of the population rather than the whole population, it can be much more effective (17).
The relevant findings show that the pattern of tobacco consumption is a function of complicated social and structural processes (18). Considering this complexity, it is essential to use theories of behavioral change to determine the effective factors involved in this behavior (19).
The theory of planned behavior (TPB), as described by Sharma (20), holds that the foremost factor involved in performing a behavior is behavioral intention (21), which is in turn predicted by three factors: attitude towards the behavior, subjective norms, and perceived behavioral control. The attitude towards a behavior is a positive or negative evaluation of performing the target behavior. It consists of two constructs, behavioral beliefs and evaluation of behavioral outcomes, which together shape a certain attitude towards a behavior (20). Subjective norms refer to the perceived social pressure one experiences while performing or not performing the target behavior (22). Perceived behavioral control is the degree to which one feels in control of a behavior. Behavioral intention represents the intensity of willingness and the will to perform a certain behavior. Behavior follows from behavioral intention and is closely connected with it (23,24).
Different studies have acknowledged the effectiveness of the TPB in predicting hookah smoking and reducing the rate of tobacco consumption (2,4,25). Considering the rising trend of hookah smoking among women (9) and the prominent role of women in the family as role models, reducing their use of hookah is essential to maintaining the health of the family and society. Furthermore, the effective educational role women play in families, often setting examples to follow, adds to the significance of reducing hookah smoking among them to maintain social health and guarantee the health of future generations (12). Therefore, the present research aimed to explore the effect of an educational campaign based on the TPB on the reduction of hookah smoking among women above 15 years of age living in Hormoz Island in the south of Iran.
Materials and Methods
This quasi-experimental study with a pre-test post-test design was conducted on women above 15 years of age living in Hormoz Island in 2020. The sample size was 177. Hormoz Island has a small population so it was not possible to have a control group in the study.
Sampling
To select the required sample through simple randomization, based on the household records, a list of women above 15 years of age was made. They were visited at home and were asked about hookah smoking. When any individual participant was selected according to the inclusion criteria, a brief explanation of the purpose of the study was provided and if the individual was willing to participate, the questionnaire was given. According to the related literature, a hookah smoker is one who has smoked hookah for at least one day during the past week (26).
Because the environment of Hormoz Island is limited and there was a possibility of communication between an intervention group and a control group, we did not include a control group. By referring to the Hormoz Island Health Center, a list of women over 15 years old was prepared, and they were asked about hookah use. The eligible subjects were identified according to the inclusion criteria and were included in the study if they were willing.
The inclusion criteria were being over 15 years of age, smoking hookah for at least a year, living in Hormoz Island, and willingness to participate in the research and complete the questionnaire. Additionally, illiterate participants were asked the questions orally. The exclusion criterion was not completing the questionnaire in the campaign programs.
Data Collection Tools
The required data were collected via a standard questionnaire made up of two sections. The first section explored demographic information such as age, education level, occupation, history of smoking hookahs, and attempts to stop smoking hookahs. The second section included items exploring the constituent constructs of the TPB and hookah smoking behavior. All items concerning the constructs were rated on a 5-point Likert scale. The behavioral intention construct was measured with 2 items. To score intention to reduce hookah smoking, the scores of these two items were added up. The minimum score for the intention to reduce hookah smoking was 2 and the maximum score was 10.
Attitude was measured with the sub-constructs: behavioral beliefs (4 items) and evaluation of outcomes (4 items). The minimum score for attitude was 8 and the maximum score was 40. Moreover, the minimum and maximum scores for behavioral beliefs and evaluation of outcomes were 4 and 20.
The subjective norms construct was measured with its sub-constructs including normative beliefs (4 items) and motivation to comply (4 items). The minimum and maximum subjective norms scores were 8 and 40. Besides, the minimum and maximum scores for normative beliefs and motivation to comply were 4 and 20, respectively.
Perceived behavioral control was measured with its sub-constructs including control beliefs (5 items) and perceived power (5 items). The minimum and maximum perceived behavioral control scores were 10 and 50. Moreover, the minimum and maximum scores for control beliefs and perceived power were 5 and 25.
In each of the above-mentioned constructs, first, the scores related to the subscales (matching pairs) were multiplied and then the results were added together.
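As a concrete illustration of the multiply-then-add scoring described above, the following minimal Python sketch computes an attitude score from the two matched 4-item subscales. The item responses are hypothetical, not study data.

```python
# Hypothetical 1-5 Likert responses for the two matched attitude subscales.
behavioral_beliefs  = [4, 3, 5, 4]   # behavioral beliefs (4 items)
outcome_evaluations = [5, 4, 4, 3]   # evaluation of outcomes (4 matched items)

# Multiply each matched pair, then sum the products (expectancy-value score).
attitude_score = sum(b * e for b, e in zip(behavioral_beliefs, outcome_evaluations))
print(attitude_score)  # -> 64 for these hypothetical responses
```

The same pattern applies to subjective norms (normative beliefs × motivation to comply) and perceived behavioral control (control beliefs × perceived power).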
Hookah smoking behavior (3 items) was measured as the frequency of smoking hookah within the past month, the frequency of smoking hookahs within the past week, and the act of stopping smoking. The reliability and validity of the questionnaire were confirmed in a study by Firoozabadi et al (4).
Steps of the Study
1. Pretest
The above-mentioned questionnaire was completed as a self-report by the participants.
Development of a Communicative Message for the Program
First, guided by the pretest results, the constructs of the TPB correlating with reduced hookah smoking were identified so as to develop the content of the educational messages. The messages were developed based on a review of credible scientific sources (27,28); in the next step, the selected messages were evaluated by a panel of 4 experts in health education and promotion as well as field specialists. Once confirmed, they were put to use. In order to develop a correct attitude, information was provided on the desirable effects of not smoking hookah, the prevention of premature aging and loss of physical attractiveness, the positive effect on lowering the risk of diseases, a higher quality of life, and so on. Concerning the effect on subjective norms, emphasis was put on the comments made by successful individuals about stopping tobacco consumption. Besides, educational pamphlets were distributed to the women, who were persistently asked to provide influential people around them with pamphlets and to encourage them to read through the pamphlets. This would adequately inform the influential people around the smoker about the detriments of hookah and the benefits of stopping hookah smoking, increasing the chances of their approving the target behavior. In order to increase perceived behavioral control and facilitate reducing the hookah smoking behavior, decisions were made on when and where to make the intervention, the promoters and inhibitors of the behavior, and how to overcome barriers to the behavior. Accordingly, the educational messages were conveyed to the participants.
Communicative strategies
The strategies used included face-to-face meetings, development and distribution of educational pamphlets, putting up educational banners and posters regarding the adverse effects of hookah smoking especially among women residing in the island with the help of city hall, use of successful models among women, creation of WhatsApp group, sending persuasive text messages every 10 days, and use of educated messengers and peer group.
Implementation of the Campaign
The campaign was held through the following channels:
• Cell phone: A group was formed in WhatsApp to include all participants. The educational content was posted as videos and questions and answers. Moreover, the educational pamphlets and posters were posted in the group.
• Face-to-face classes: During the two educational sessions, face-to-face instructions were provided along with questions and answers (Q & As) and group discussions. The content of the first session concerned the adverse effects of hookah smoking, and that of the second session was about the skills required to reduce the rate of hookah smoking.
• Peer group: Peers were used to encourage the participants to take part in the face-to-face educational classes. The peer group consisted of people who had successful experience in quitting hookah smoking. Moreover, they joined the WhatsApp group and informed the researcher and designer of the study of the questions and answers (Q & As) exchanged. They also cooperated in the distribution of questionnaires and educational pamphlets.
Evaluation of Campaign
During the present research, in order to evaluate the strengths and weaknesses and follow up the procedures based on the goals specified, phone calls were used as well as face-to-face talks with the participants to receive feedback. Moreover, in order to explore the effect of the educational campaign on the reduction of hookah smoking, the questionnaires were first distributed before the campaign and once again 3 months after the campaign and the results were cross-compared.
Data Analysis
The acquired data were analyzed in SPSS software version 22.0 using the paired-sample t-test and McNemar's test. The significance level was set at P < 0.05. The normality assumption was tested and confirmed by the Kolmogorov-Smirnov test.
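For readers who want to reproduce the two comparisons outside SPSS, a minimal sketch using SciPy and statsmodels might look like the following; all data values here are hypothetical stand-ins, not the study's data.

```python
import numpy as np
from scipy.stats import ttest_rel
from statsmodels.stats.contingency_tables import mcnemar

# Hypothetical pre/post construct scores for the same women (paired design).
pre  = np.array([18.0, 22.0, 15.0, 20.0, 17.0, 24.0, 19.0, 21.0])
post = np.array([24.0, 27.0, 20.0, 22.0, 23.0, 28.0, 21.0, 26.0])

# Paired-sample t-test for the change in a continuous construct score.
t, p = ttest_rel(post, pre)
print(f"paired t = {t:.2f}, p = {p:.4f}")

# McNemar's test for a paired binary outcome (e.g., attempted to quit: no/yes).
# Rows: before (no, yes); columns: after (no, yes); counts are hypothetical.
table = [[50, 40],
         [ 5, 82]]
print(mcnemar(table, exact=True))
```

The t-test addresses the construct-score changes, while McNemar's test addresses the paired yes/no outcomes such as attempting to stop hookah smoking.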
Results
The mean age of the participants and the mean age at which the participating women began smoking hookah were 34.16 ± 10.47 and 22.20 ± 8.45 years, respectively. The other demographic information is presented in Table 1.
The pre-test results showed that the three constructs including attitude towards reduced hookah smoking, subjective norms, and perceived behavioral control predicted the reduced hookah smoking behavior. Perceived behavioral control was a stronger predictor of reduced hookah smoking, followed by subjective norms and attitude towards the behavior, respectively. Therefore, in the development of the educational campaign, the three main constructs of the TPB were taken into account ( Table 2).
The frequencies of smoking hookah per month and per week by the participants were cross-compared before and after the educational intervention. After the educational campaign, both frequencies showed a statistically significant reduction (Table 3). Attempts to stop hookah smoking by the participants were also compared before and after the educational campaign. The results showed a statistically significant increase in attempts to stop smoking hookah (P < 0.001) (Table 4).
After conducting the educational campaign, 6 (3.4%) of the participants stopped smoking hookah.
Discussion
People smoking hookah justify their continued habit of smoking by erroneous perceptions of hookah and hookah smoking. These perceptions should be recognized and discarded. The present research explored the effect of an educational campaign based on the TPB on the reduction of the rate of hookah smoking among women above 15 years of age living in Hormoz Island.
Attitude
According to the findings of the study, the mean score of attitude towards reducing hookah smoking among the female participants significantly increased after the educational campaign. That is to say, when people perceived that stopping hookah smoking was accompanied by positive health outcomes, they adopted the healthy behavior and maintained it. Various studies have shown the predictive value of attitude for smoking and hookah use (4,29).
The research findings by Barati (2) were consistent with the findings of the present study as they indicated the effectiveness of educational interventions based on the TPB in promoting a negative attitude towards tobacco consumption.
In a study on women visiting healthcare centers, the results showed a significant reduction of the positive attitude towards hookah smoking from 10 to 5% after the educational intervention (32).
These findings are also similar to those of the research by Joveyni et al (33). Negative attitudes toward hookah easily prevent women from experiencing hookah use, and if they find themselves in situations where they are encouraged to smoke hookah, they can resist or leave. The results of a study conducted by Makvandi et al on students' attitudes towards hookah smoking showed that they continued hookah smoking because they thought it was not addictive, which is not consistent with our study (25).
Subjective Norms
The present research showed that the mean score of subjective norms significantly increased among the women participating in the educational campaign. That is to say, the more pressure the family members and influential people around the smoker exert, and the more they encourage her to give up the habit, the more likely the smoker is to adopt the healthy behavior in practice. Therefore, in order to reduce the rate of hookah smoking among girls and women, special attention needs to be paid to the influential people around them. On the one hand, smoking friends and peers and, on the other hand, the sense of belonging to a group, which is a key human need, can be among the effective factors involved. The strength of the effect would depend on every individual's living conditions (34).
Perceived Behavioral Control
The present research showed an increase in perceived behavioral control. Findings of studies by Momenabadi et al (35), Joveini et al (36), Jafari et al (37), and Barati et al (38) are similar to the results of our study, indicating an increase in perceived behavioral control. Similar studies have also shown a high predictive power of perceived behavioral control for the intention to consume addictive drugs (39). Perceived behavioral control is in turn affected by control beliefs and perceived competence to adopt a certain behavior. In other words, if people believe that they lack the capabilities and facilities needed for a certain behavior, they do not show that behavior even if it is approved by influential people around them (subjective norms). Moreover, a body of social psychological research has shown that the level of behavioral control is low in people with low self-confidence and self-efficacy. Therefore, such people are more prone to drug abuse under the influence of others. If these people are educated and enabled to confidently reject others' invitations to smoke, they will be less vulnerable to social threats and their perceived behavioral control will increase (21). Research findings reported by Fathi et al (2), who aimed to determine the effect of an educational program based on the TPB on preventing and reducing tobacco consumption among university students of medical sciences, were not consistent with the present findings. This contradiction can be explained by the different research populations, purposes, and types of interventions involved.
The present research also revealed that the mean score of intention to reduce hookah smoking among the women participants significantly increased after the educational campaign. This can point to the effectiveness of the educational campaign and can also result from an increase in the other constructs of the theory (attitude, subjective norms, and perceived behavioral control). According to the existing literature, intention plays a key role in forming or changing a certain behavior. Generally, a behavior follows from one's intention to show it; in other words, the behavior does not emerge unless it follows from the behavioral intention (40). A study (44) reported that implementing an educational program via a media campaign could reduce the rate of cigarette smoking (by 18%) among the participants. Yothasamut et al (48) evaluated the effectiveness of interventions including a TV and radio campaign and setting rules to limit the availability of alcohol in reducing alcohol consumption among construction workers. These researchers found that alcohol consumption stopped among workers receiving the messages. They also found that matching an educational campaign with religious beliefs can help motivate workers to stop consuming alcohol. Another finding of the present research was that 6 participants (3.4%) stopped smoking hookah after the educational campaign. A higher affective attitude and intention to stop smoking among these participants before the educational campaign, as well as their lower frequency of smoking compared with the others, can be among the reasons why they quit smoking. The results of studies by Lipkus et al (49), Dogar et al (50), and Mohlman et al (51) also agree with the present findings and indicate the stopping of hookah smoking by some members of the intervention groups.
One limitation of the present study is the use of self-reports to collect the required data, which carries the chance that the information provided lacks precision or is exaggerated. Other limitations include the conduct of the study in a limited region, the absence of a control group, the use of a small sample size, and the short-term follow-up.
Conclusion
In the light of the present findings, it can be concluded that the educational campaign designed based on the TPB effectively reduced the frequency of hookah smoking among girls and women living in Hormoz Island by influencing attitudes, subjective norms, and perceived behavioral control. This effectiveness can be accompanied by fewer physical, mental, and social adverse effects of hookah smoking. Therefore, considering the fact that health education programs are cost-effective interventions to promote health in society, and given the positive results obtained in the present research, it is suggested that this theory be used in educational programs at different preventive levels. It is also suggested that further studies with a longitudinal design be conducted, including both genders, to evaluate the present findings.
Predictive value of inflammatory factors on the efficacy of adjuvant Dexamethasone in the treatment of refractory purulent meningitis among pediatric patients
Background The aim of this study was to determine the predictive value of inflammatory factors for the efficacy of dexamethasone adjuvant therapy for refractory purulent meningitis in children. Methods In this study, a regression analysis method was employed to select a sample of 38 children with refractory purulent meningitis, 40 children with purulent meningitis, and 40 healthy children who visited Ganzhou People's Hospital for physical examination. These participants were then assigned to the dexamethasone, standard care, and control groups. The inflammatory factors in the three groups were compared, and a multivariate logistic regression analysis was conducted to examine the predictive indicators of the efficacy of dexamethasone treatment in children with refractory purulent meningitis. Results The levels of CRP, TNF-α, IL-6, PCT, and IL-1β were found to be significantly higher in the dexamethasone group than in both the standard care and control groups (P < 0.05). Through multivariate logistic regression analysis, it was determined that CRP, TNF-α, IL-6, PCT, and IL-1β were reliable predictors of the efficacy of dexamethasone treatment in children with refractory purulent meningitis. These biomarkers demonstrated good predictive performance, with CRP and IL-1β showing superior predictive performance. Conclusions Inflammatory factors have a certain predictive value for the efficacy of dexamethasone adjuvant therapy for refractory purulent meningitis in pediatric patients.
Introduction
Purulent meningitis is a common purulent infection of the central nervous system; its main clinical symptoms are persistently elevated intracranial pressure, meningeal irritation signs, fever, convulsions, etc. It most often occurs in neonates and children and is characterized by rapid onset, rapid progression, and severe illness. The disability rate associated with purulent meningitis ranges from 20% to 50%, while the fatality rate can reach 10% to 15%, posing a significant threat to the overall health of children (1). At present, a sufficient dose of appropriate antibiotics should be used as soon as possible to treat purulent meningitis. With the continuous maturation of antibiotic therapy, the cure rate of the disease has gradually increased, and the disability rate is decreasing. However, some children with purulent meningitis do not respond satisfactorily to conventional antibiotic treatment; symptoms such as fever and convulsions recur, and these children have high disability and mortality rates. This particular group of children is strongly associated with the specific pathogen causing the infection and is clinically categorized as having refractory purulent meningitis, which warrants attention from healthcare professionals. In recent years, inflammatory factors have been extensively employed in the clinical diagnosis, efficacy evaluation, and prognosis evaluation of refractory purulent meningitis in children. C-reactive protein (CRP), tumor necrosis factor-α (TNF-α), interleukin-1β (IL-1β), interleukin-6 (IL-6), and procalcitonin (PCT) are considered to be closely implicated in the presence and progression of purulent meningitis (2). Dexamethasone has been shown to have significant anti-inflammatory effects in children with refractory purulent meningitis (3). There is limited research on the predictive value of inflammatory factors in children with refractory purulent meningitis. On this basis, this study aims to investigate the potential of inflammatory factors in predicting the effectiveness of dexamethasone in the treatment of refractory purulent meningitis in children.
General information
From January 2020 to December 2021, 38 children with refractory purulent meningitis, 40 children with purulent meningitis, and 40 healthy children who were admitted to Ganzhou People's Hospital for physical examination during the same period were selected as the research subjects and assigned to the dexamethasone, standard care, and control groups, in turn, by regression analysis. This study was approved by the Institutional Review Committee of Ganzhou People's Hospital (Approval Number: 2019GZ1106). The experimental group consisted of 25 males and 13 females aged 1-6 years, with an average age of (3.48 ± 1.04) years. The control group comprised 24 males and 16 females aged 1-6 years, with an average age of (3.41 ± 1.01) years. The healthy group contained 23 males and 17 females aged 1-6 years, with an average age of (3.36 ± 1.11) years. There was no clear difference in general data such as gender and age among the three groups (P = 0.936, P = 0.873).
Diagnosis, inclusion and exclusion criteria
The symptoms and results of the subjects met the diagnostic criteria for purulent meningitis (Supplementary Table I). Diagnostic criteria: in light of the diagnostic criteria for refractory purulent meningitis in children in Zhu Futang Practical Pediatrics (4), pediatric patients who meet one or more of the following criteria are classified as having refractory purulent meningitis: (i) the main clinical symptoms are acute fever, convulsions, depressed mood, lethargy, irritability, etc.; (ii) signs such as a bulging fontanelle and meningeal irritation; (iii) abnormal brain parenchymal areas on CT or MRI of the head; (iv) accompanying persistent complications such as subdural effusion, ependymitis, and hydrocephalus; (v) sequelae at death or during the late follow-up period, such as secondary epilepsy, cranial nerve injury, and psychomotor delay; (vi) after one week of conventional treatment (penicillin, ceftriaxone, and cefotaxime), there are still fever or other recurrent symptoms of purulent meningitis; (vii) recurrent purulent intracranial infection of unknown origin.
Inclusion criteria: complete clinical data; no previous treatment with dexamethasone; aged 1-7 years; all children's guardians gave informed consent and signed an informed consent form.
Exclusion criteria: symptoms of other organ dysfunction; immunodeficiency or systemic infection; septic shock; fungal meningitis, tuberculous meningitis, and other non-bacterial central nervous system infections; intracranial hemorrhage, craniocerebral trauma, brain tumor, and other such diseases; children whose guardians were unwilling to participate in this research.
Methods
The control group was given an infusion of 20% mannitol (125 mL, once a day) to lower elevated intracranial pressure and improve cerebral perfusion pressure, in addition to conventional antibiotics (penicillin and cephalosporins). If penicillin and cephalosporin antibacterial therapy was ineffective, the children were given an intravenous infusion of meropenem (manufacturer: Sumitomo Pharmaceutical (Suzhou) Co., Ltd.; batch no. 20161123) at 40 mg/kg, once a day.
The experimental group was given dexamethasone (manufacturer: Tianjin Jinyao Pharmaceutical Co., Ltd.; batch number: 12091821) on the basis of the treatment of the standard care group. The first intravenous injection was 10 mg/m², after which dexamethasone 15 mg/m² was administered by intravenous drip in 5% glucose injection. All of the above treatments were continued for 7 days.
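For illustration only (this calculation is not part of the study protocol), the body-surface-area doses in the regimen above can be worked out with the Mosteller formula; the child's height and weight in the sketch are hypothetical.

```python
import math

def bsa_mosteller(height_cm: float, weight_kg: float) -> float:
    """Body surface area (m^2) by the Mosteller formula."""
    return math.sqrt(height_cm * weight_kg / 3600.0)

# Hypothetical 3-year-old: 95 cm, 14 kg.
bsa = bsa_mosteller(95.0, 14.0)     # ~0.61 m^2
first_injection_mg = 10.0 * bsa     # 10 mg/m^2 IV injection
drip_dose_mg = 15.0 * bsa           # 15 mg/m^2 IV drip in 5% glucose
print(f"BSA = {bsa:.2f} m^2, first dose = {first_injection_mg:.1f} mg, "
      f"drip dose = {drip_dose_mg:.1f} mg")
```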
Observation indicators
Correlation analysis was performed between general clinical indicators (gender, age, disease duration, APACHE II score, clinical symptoms) and refractory purulent meningitis in children. The inflammatory factors in the three groups were compared, the predictors of the efficacy of dexamethasone treatment in children with refractory purulent meningitis were analyzed, and the predictive efficacy of CRP, TNF-α, IL-6, PCT, and IL-1β for dexamethasone treatment in children with refractory purulent meningitis was observed.
After 7 days of treatment, 3 mL of fasting venous blood was drawn from all subjects. All samples were processed within 2-5 hours, and the supernatant was collected after centrifugation at 3000 r/min (centrifugation radius 13.5 cm). The inflammatory factors CRP, TNF-α, IL-6, PCT, and IL-1β were determined by ELISA. The reference ranges of the individual inflammatory parameters were: CRP < 8 mg/L, TNF-α < 50 ng/L, IL-6 < 10 pg/mL, PCT < 0.5 mg/L, and IL-1β < 15 ng/L.
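A small sketch of how one child's measured levels could be flagged against the reference ranges listed above; the readings for the hypothetical sample are invented for illustration.

```python
# Upper reference limits listed above, with their units.
upper_limits = {"CRP": 8.0, "TNF-alpha": 50.0, "IL-6": 10.0, "PCT": 0.5, "IL-1beta": 15.0}
units = {"CRP": "mg/L", "TNF-alpha": "ng/L", "IL-6": "pg/mL", "PCT": "mg/L", "IL-1beta": "ng/L"}

# Hypothetical ELISA readings for one child.
sample = {"CRP": 36.2, "TNF-alpha": 88.0, "IL-6": 41.5, "PCT": 2.1, "IL-1beta": 29.0}

for marker, value in sample.items():
    status = "ELEVATED" if value > upper_limits[marker] else "normal"
    print(f"{marker}: {value} {units[marker]} ({status}, ref < {upper_limits[marker]})")
```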
Statistical methods
SPSS 21.0 software was employed to analyze the data. Measurement data conforming to the normal distribution were expressed as x̄ ± s. The overall comparison of the data in each group was made by one-way analysis of variance, and pairwise comparisons of the data between and within groups were made by the LSD method; count data were presented as rates (%), and the chi-square (χ²) test was used. Parameters with statistically significant differences in the univariate analysis were included in the multivariate logistic regression model for analysis. Multivariate logistic regression was applied to analyze the predictors of the efficacy of dexamethasone treatment in children with refractory purulent meningitis. Receiver operating characteristic (ROC) curves were drawn to evaluate the predictive value of serum CRP, TNF-α, IL-6, PCT, and IL-1β for the efficacy of dexamethasone treatment in children with refractory purulent meningitis. An AUC of 0.75-1.00 was taken to indicate good predictive performance. P < 0.05 indicated statistical significance.
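A minimal scikit-learn sketch of the logistic-regression-plus-ROC pipeline described above; synthetic data stand in for the study's measurements, and only one marker is shown where the study used several.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Synthetic stand-in: one inflammatory marker (e.g., CRP) per child and a
# binary treatment-response label loosely correlated with it.
n = 78
crp = rng.normal(30.0, 10.0, n)
response = (crp + rng.normal(0.0, 12.0, n) > 30.0).astype(int)

# A multivariate model would stack several markers as columns; one is shown.
X = crp.reshape(-1, 1)
model = LogisticRegression().fit(X, response)

# AUC of the ROC curve for the fitted probabilities.
auc = roc_auc_score(response, model.predict_proba(X)[:, 1])
print(f"AUC = {auc:.3f}")   # 0.75-1.00 taken as good predictive performance
```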
Comparison of general data of three groups
There was no clear difference in gender or age in the general data of the three groups (P = 0.936, P = 0.873), and no distinct difference in clinical symptoms between the experimental group and the control group (P = 0.905). The course of disease and the APACHE score in the experimental group were higher than in the control group (P < 0.05, Table I).
Comparison of inflammation factors among the three groups
Next, we compared the differences in inflammatory factors between the groups. We found that CRP, TNF-α, IL-6, PCT, and IL-1β were higher in the experimental group than in the control and healthy groups, and higher in the control group than in the healthy group, with clear differences (P < 0.05, Table II).
Multivariate logistic regression analysis of predictive indicators of dexamethasone treatment effect in children with refractory purulent meningitis
The variables examined in the univariate analysis included the course of disease, APACHE II score, and CRP, TNF-α, IL-6, PCT, and IL-1β levels. The dependent variable of interest was the efficacy of dexamethasone treatment in children with refractory purulent meningitis. Subsequently, a multivariate logistic regression analysis was performed, revealing that CRP, TNF-α, IL-6, PCT, and IL-1β were significant predictors of the efficacy of dexamethasone treatment in this population (Table III: multivariate logistic regression analysis of predictors of therapeutic effect after dexamethasone treatment in children with refractory purulent meningitis).
Predictive efficacy of CRP, TNF-α, IL-6, PCT, and IL-1β for the efficacy of dexamethasone treatment in children with refractory purulent meningitis
The AUC values for CRP, TNF-α, IL-6, PCT, and IL-1β in predicting the efficacy of dexamethasone in children with refractory purulent meningitis were 0.877, 0.798, 0.765, 0.736, and 0.817, respectively. These values indicate good predictive performance. Notably, CRP and IL-1β exhibited superior predictive performance, as demonstrated in Table IV and Figure 1.
Discussion
Purulent meningitis, a familiar condition in the field of pediatrics, primarily arises from infections triggered by pathogenic bacteria, including Escherichia coli, Klebsiella pneumoniae, and Staphylococcus aureus, invading the pia mater. This invasion can have detrimental effects on the nervous system of children. In the absence of prompt and effective intervention, the condition may give rise to neurological sequelae, such as sensorineural deafness, cognitive impairment, and residual motor dysfunction, and may potentially culminate in death (5). However, even when timely treatment is administered, the presence of the blood-cerebrospinal fluid barrier hinders the accurate identification of cerebrospinal fluid pathogens, resulting in empirical, untargeted, and non-standardized selection of antibacterial drugs and, consequently, refractory purulent meningitis in some children. Pediatric refractory purulent meningitis has high morbidity and mortality rates, necessitating reliable indicators to assess treatment effectiveness in children with this condition. Dexamethasone, a corticosteroid commonly employed in clinical settings, exhibits favorable pharmacokinetic properties. Its pharmacological effects encompass antiviral, anti-inflammatory, and anti-allergic properties. It exerts its anti-inflammatory effect by inhibiting the synthesis and release of inflammatory factors from monocytes and T lymphocytes, and by suppressing the aggregation of leukocytes and macrophages at the site of inflammation. Furthermore, dexamethasone hinders the formation of chemical mediators during inflammatory and allergic processes. At present, dexamethasone is frequently utilized in the clinical treatment of refractory purulent meningitis in children, yielding satisfactory outcomes (6)(7)(8). Nowadays, clinical symptoms and signs, the Glasgow coma score, imaging examinations, and other indicators are frequently applied to evaluate the clinical efficacy of treatment in children with refractory purulent meningitis. However, there remains a dearth of objective and quantifiable evaluation indicators. This is particularly challenging in young children who lack typical clinical symptoms, making it difficult to determine their mental state and to evaluate the extent of inflammation and the efficacy of drug treatment in a timely manner.
Previous research indicates that, following infection with pathogenic bacteria, purulent meningitis stimulates the brain tissue to produce a variety of cytokines, which can affect the immune function of children and the efficacy of treatment (9)(10). Several studies have confirmed that CRP, TNF-α, IL-6, PCT, IL-1β, and other pro-inflammatory factors are elevated in the cerebrospinal fluid of children with refractory purulent meningitis and have significant diagnostic value (11)(12). For example, the research conducted by Freer et al. (13) points out that IL-6 and TNF-α are elevated in young children with purulent meningitis, suggesting their potential utility as diagnostic indicators for this condition. Additionally, CRP and PCT are proteins with specific functions. Following a severe infection, CRP levels rise rapidly, serving as an effective marker for inflammation within the body. PCT, by contrast, is primarily secreted by the thyroid gland and is typically maintained at minimal concentrations under normal physiological conditions. With its short half-life, its level is only slightly elevated, or not elevated at all, in patients with non-bacterial infections, whereas it is highly expressed in infectious diseases, making it an ideal indicator for diagnosing infectious diseases (14)(15). TNF-α serves both as the primary inducer of the immune inflammatory response and as a pivotal element in the inflammatory »cascade reaction«. It is capable of stimulating the production and release of pro-inflammatory factors such as IL-1β and IL-6, while also promoting the adhesion of inflammatory cells and increasing the permeability of the blood-brain barrier (16). IL-1β is an inflammatory factor closely implicated in diverse pathological injuries in the body and is the predominant form in which IL-1 exists in brain tissue. When epilepsy develops as a result of brain injury, craniocerebral injury, intracranial infection, or other pathologies, its level is clearly elevated (17). IL-6 is a pro-inflammatory factor that can facilitate the activation of matrix metalloproteinases and damage the blood-brain barrier (18). Therefore, CRP, TNF-α, IL-6, PCT, and IL-1β can be applied as crucial indicators to evaluate the curative effect of treatment for refractory purulent meningitis in children. The results of this study demonstrated that levels of CRP, TNF-α, IL-6, PCT, and IL-1β were significantly higher in both the cerebrospinal fluid and blood of the dexamethasone group compared with the standard care and control groups. These factors were also elevated in the control group compared with the healthy group, which aligns with previous research and indicates a clear elevation of CRP, TNF-α, IL-6, PCT, and IL-1β in children with refractory purulent meningitis.
Bedetti (19) and Keus et al. (29) report that hs-CRP, TNF-α, IL-6, and PCT in the serum and cerebrospinal fluid of neonates with purulent meningitis are elevated and closely linked with the children's prognosis. The findings of this study demonstrate that CRP, TNF-α, IL-6, PCT, and IL-1β serve as significant predictors for evaluating the effectiveness of dexamethasone treatment in children diagnosed with refractory purulent meningitis. Moreover, this study also examined the efficacy of CRP, TNF-α, IL-6, PCT, and IL-1β in predicting the effectiveness of dexamethasone treatment in this population. The AUCs of these factors in predicting the efficacy of dexamethasone in children with refractory purulent meningitis were 0.877, 0.798, 0.765, 0.736, and 0.817, respectively, all reflecting good predictive performance. Notably, CRP and IL-1β demonstrated superior predictive performance, suggesting that CRP, TNF-α, IL-6, PCT, and IL-1β can serve as effective indicators for evaluating the efficacy of dexamethasone treatment in children with refractory purulent meningitis.
Overall, children with refractory purulent meningitis exhibit a significant upregulation of inflammatory factors, which can serve as reliable indicators for assessing the effectiveness of dexamethasone treatment in this population. However, this study still has the following shortcomings: (1) the children included were all children with refractory purulent meningitis admitted to Qingtian County People's Hospital of Lishui City; the sample size is small and the follow-up time short, so future work should involve additional hospitals to expand the sample and extend the follow-up; (2) owing to the age of the subjects and the wishes of their parents, sampling was difficult and cerebrospinal fluid could not be tested. It is hoped that these aspects can be addressed at a later stage to further confirm the results of this study.
Table II Comparison of inflammatory factor levels among the three groups (x̄±s).
Figure 1
Figure 1 Predictive efficacy of CRP, TNF-α, IL-6, PCT and IL-1β in dexamethasone treatment of refractory purulent meningitis in children.
Table I
Comparison of general data of the three groups.
Table IV
Predictive efficacy of CRP, TNF-α, IL-6, PCT and IL-1β in children with refractory purulent meningitis treated with dexamethasone.
|
2023-12-21T16:21:42.464Z
|
2023-12-18T00:00:00.000
|
{
"year": 2024,
"sha1": "f25708e3539641b7f4d765f7e1bbee703eb6de37",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "ScienceParsePlus",
"pdf_hash": "f25708e3539641b7f4d765f7e1bbee703eb6de37",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
263828551
|
pes2o/s2orc
|
v3-fos-license
|
MCM4 acts as a biomarker for LUAD prognosis
Abstract MCM4 forms the pre‐replication complex (MCM2‐7) with five other minichromosome maintenance (MCM) proteins. This complex binds to replication origins at the G1 stage of the cell cycle, playing a critical role in DNA replication initiation. Recently, MCM4 has been reported to have a complex interaction with the progression of multiple cancers, including gastric, ovarian and cervical cancer. This study mainly focused on the expression of MCM4 and its value in lung adenocarcinoma (LUAD). MCM4 was highly expressed in LUAD tumours and cells, and had an important effect on overall survival. Overexpression of MCM4 promoted proliferation and suppressed apoptosis in LUAD cells, whereas MCM4 silencing led to the opposite results. In vivo, knockdown of MCM4 reduced tumour volume and weight in a xenograft mouse model. As a member of the DNA helicase complex, MCM4 knockdown caused cell cycle arrest at the G1 stage by inducing the expression of P21, a CDK inhibitor. These findings indicate that MCM4 may be a possible new therapeutic target for LUAD in the future.
| INTRODUCTION
Cancer is characterized by high heterogeneity and complexity, and is related to a series of genetic and epigenetic aberrations. 1,2 Lung cancer has the highest incidence and is the biggest cause of cancer mortality worldwide. It consists of two main subtypes, small cell lung cancer (SCLC) and non-SCLC (NSCLC). 3,5,6 LUAD develops from type II alveolar epithelial cells, 7-9 and constitutes a unique lung cancer subtype with a distinct mutational landscape and cellular origin. 8 Due to the absence of effective screening programmes and early clinical symptoms, most patients are at an advanced stage of disease when first diagnosed. However, this is not the best time for treatment, which is probably the most significant reason for the high death rate among lung cancer patients. 10,12,13 According to statistics, the 5-year survival rate for NSCLC is less than 15%. 3,4 Therefore, it is imperative to investigate the molecular mechanisms underlying lung cancer development in order to identify useful diagnostic markers and more effective treatments.
The minichromosome maintenance (MCM) proteins MCM2-7 belong to the AAA+ ATPase family. 14,15 MCMs interact with each other and form a six-membered replicative helicase complex. 16 This complex plays a key role in binding to the replication origin, melting double-stranded DNA (dsDNA) to initiate replication, and acting as a helicase on elongating DNA. 17 In early G1 stage, the MCM2-7 complex is loaded onto replication origins in a Cdt1- and Cdc18/Cdc6-dependent manner to form the pre-replicative complex. This complex then unwinds the origin DNA with the assistance of the CDC45 protein and the GINS complex, initiating DNA synthesis. 15 In addition, MCM is reported to be one of the most valuable biomarkers for cancer diagnosis due to its abnormally high expression in tumour cells. 18,19 Among the subunits of the MCM complex, MCM4 is considered the most conserved protein across evolution. 15 Its gene lies head-to-head with that of a DNA-activated protein kinase (PRKDC/DNA-PK), which plays a significant role in the repair of DNA double-strand breaks. 20 MCM4 is key to the initiation of eukaryotic genome replication, and has an important effect on replication fork formation and the recruitment of other DNA replication-related proteins. 21 MCM4 overexpression has been observed in multiple cancers.
Guo and colleagues 22 found that MCM4 expression was closely correlated with tumour stage, and that increased levels of MCM4 were related to better progression-free survival (PFS) and overall survival (OS) in human gastric cancer. In laryngeal squamous cell carcinoma (LSCC), MCM4 suppression markedly inhibited cell proliferation and induced apoptosis; furthermore, MCM4 overexpression was found in carcinoma tissues. 21 Huang et al. found that the positive rate of MCM4 was much higher in esophageal squamous cell cancer (ESCC) than in normal controls. Moreover, compared with stage T1 ESCC, the MCM4-positive rate was significantly higher in stage T3 ESCC. 23 Choy et al. claimed that MCM4 could act as a prognostic factor in oesophageal carcinoma, ovarian and cervical cancer. 24,25 Kikuchi and colleagues also showed increased MCM4 levels in NSCLC cells, with its abnormal expression being related to male gender, heavy smoking, poorer differentiation and non-adenocarcinoma histology. 26 However, the molecular mechanism underlying the relevance of MCM4 to NSCLC progression still urgently needs to be studied.
Here, the present study investigated the functions of MCM4 in LUAD development and explored its mechanism. High levels of MCM4 were observed in LUAD tissues and cells relative to normal controls, and there was a clear correlation between MCM4 and OS. In addition, MCM4 overexpression promoted proliferation, suppressed apoptosis and accelerated cell cycle progression, while MCM4 silencing produced the opposite results. Moreover, MCM4 silencing suppressed tumour growth in vivo.
| Cell culture
Four human LUAD cell lines (H441, H460, H522, A549) and two human embryonic lung fibroblast cell lines (WI38 and MRC5) were obtained from the American Type Culture Collection (ATCC). These cells were cultured in RPMI 1640 medium (Invitrogen Life Technologies, Inc.) supplemented with 10% fetal bovine serum (FBS, Gibco) and incubated at 37°C in a humidified atmosphere with 5% CO2.
| Plasmid construction
To overexpress MCM4 in H441 and A549 cells, the pcDNA3.1-MCM4 expression plasmid was constructed. The inserted sequences were verified by DNA sequencing.
| Cell transfection
Small interfering RNAs (siRNA) or short hairpin RNAs (shRNA) were used to inhibit MCM4 expression; the former was used for in vitro experiments and the latter for constructing the xenograft mouse model. Briefly, cells were seeded in six-well plates overnight and transfected with siMCM4, shMCM4 or pcDNA3.1-MCM4 mixed with Lipofectamine 2000 solution (Invitrogen). The siRNA and shRNA targeting MCM4 (target sequence: 5′-AAATGCATTCTTCAGCTATCCCTT-3′) were purchased from Santa Cruz Biotechnology.
| Western blotting assay
To extract protein from cells, RIPA buffer with 50× protease inhibitor was used to lyse the cells. Proteins were then separated in 10% sodium dodecyl sulfate (SDS) gels at 80 V initially and at 120 V once the samples entered the separating gel, and transferred onto nitrocellulose membranes (NC, Millipore).
| Colony formation assay
To assess the clonogenic ability of single cells, a colony formation assay was performed. After transfection, cells were seeded in six-well plates at 1000 cells/well and cultured for 10 days. Cells were then stained with 1% crystal violet for visualization. After washing and air drying, pictures of the colonies were taken and the number of colonies in each well was counted.
| MTT assay analysis
Cell viability was detected using the MTT assay. After transfection, cells were seeded in 96-well plates and incubated with 5 mg/mL MTT for 3 h at 37°C. The supernatant was then aspirated and discarded, and 150 μL DMSO was added to each well. The OD value of each well at 450 nm was recorded by a microplate reader (Thermo Fisher Scientific), and graphs were plotted to show the viability of the different cells.
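A minimal sketch of the normalisation this assay implies — an assumed downstream step, not taken from the paper — expressing relative viability as treated OD over control OD:

```python
# Normalising MTT OD450 readings to untreated control wells.
# Triplicate values are invented placeholders.
from statistics import mean

od_control = [0.82, 0.79, 0.85]   # OD450, untreated control wells
od_treated = [0.51, 0.47, 0.55]   # OD450, treated wells

viability = 100 * mean(od_treated) / mean(od_control)
print(f"relative viability: {viability:.1f}%")  # ~62.2%
```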
| Cell apoptosis detection
To test cell apoptosis, the activity of caspase 3/7 was examined. After transfection, cells were cultured in 96-well plates at 37°C with 5% CO2, then collected, washed with PBS and lysed in lysis buffer. The lysates were centrifuged, and the supernatant was mixed with reaction buffer and caspase 3/7 substrate and incubated at 37°C for 4 h. Finally, the OD value at 405 nm was measured using a microplate reader.
| Cell cycle detection
After transfection, cells were fixed with 70% ethanol overnight at 4°C. After washing, they were stained with PI and ribonuclease for 30 min at 37°C. A flow cytometer (FACSCanto II system, BD) was then used to measure the cell cycle distribution.
| Animal experiments
LSL-Kras G12D/+ mice are LUAD models carrying a conditionally activatable allele of oncogenic K-ras. The model is induced using a recombinant adenovirus expressing Cre recombinase (Adeno-Cre); Cre activates expression of K-ras G12D. 27 Tumour tissues and adjacent normal tissues were collected for western blotting.
The animal experiments were carried out following animal research guidelines. A total of 12 female Balb/c nude mice were used in this study and divided into two groups. One group of six mice was injected in the right flank with A549 cells transfected with shMCM4; the other group of six mice served as controls and was injected with untransfected A549 cells.
Tumour volume (mm³) was measured every 3 days with a digital calliper and calculated by the equation: volume = 1/2 × length × width². At the end of the experiments, the mice were sacrificed and tumour weights were measured.
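The stated formula translates directly into code; the snippet below is a transcription of the equation above, with illustrative measurements:

```python
# Caliper-based tumour volume: V = 1/2 * length * width^2 (mm^3).
def tumour_volume_mm3(length_mm: float, width_mm: float) -> float:
    return 0.5 * length_mm * width_mm ** 2

print(tumour_volume_mm3(10.0, 6.0))  # -> 180.0 mm^3 for a 10 x 6 mm tumour
```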
| Statistics analysis
Every experiment in this study was performed three times independently.
All data are presented as the mean ± standard error. Statistically significant differences before and after treatment were calculated by paired Student's t-test. SPSS software was used to analyse all statistics, and p < 0.05 was considered statistically significant.
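A minimal sketch of the stated test — a paired Student's t-test with p < 0.05 as the threshold — on invented placeholder values:

```python
# Illustrative paired t-test on matched before/after measurements.
# Values are invented placeholders, not experimental data.
from scipy import stats

before = [0.42, 0.55, 0.48]   # e.g. OD readings before treatment
after = [0.61, 0.70, 0.66]    # matched readings after treatment

t_stat, p_value = stats.ttest_rel(before, after)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}, significant: {p_value < 0.05}")
```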
| MCM4 was upregulated in LUAD cells and tumour samples
To investigate the possible involvement of MCM4 in LUAD, our study first detected MCM4 expression in patients with LUAD and evaluated the connection between its expression and OS by analysing the TCGA database. A total of 483 tumour samples from LUAD patients and 347 normal samples were analysed. High expression of MCM4 was found in tumour samples relative to normal controls (Figure 1A). Furthermore, a significantly higher survival rate was observed in patients with low MCM4 expression than in patients with high MCM4 expression (Figure 1B,C). LSL-Kras G12D/+ mouse models of LUAD, which carry a conditionally activatable allele of K-ras, 27 showed the same pattern: increased levels of MCM4 in LUAD samples relative to controls (Figure 1D). In addition, MCM4 levels in human embryonic lung fibroblast cell lines (MRC5/WI38) were significantly lower than in LUAD cell lines (H441/H460/H522/A549) (Figure 1E,F). These findings revealed that MCM4 is upregulated in LUAD samples and cells, and that its high expression is closely related to a low survival rate.
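A hypothetical sketch of this kind of TCGA survival comparison — Kaplan-Meier curves for MCM4-high versus MCM4-low patients with a log-rank test. The lifelines package, the file, and all column names are assumptions for illustration, not the authors' pipeline:

```python
# Hypothetical sketch: median-split Kaplan-Meier analysis of MCM4 expression.
# File and column names are assumptions, not the study's actual data export.
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

df = pd.read_csv("tcga_luad_clinical.csv")            # hypothetical export
high = df["MCM4_expr"] > df["MCM4_expr"].median()     # median split into groups

kmf = KaplanMeierFitter()
for label, mask in [("MCM4 high", high), ("MCM4 low", ~high)]:
    kmf.fit(df.loc[mask, "os_days"], df.loc[mask, "os_event"], label=label)
    kmf.plot_survival_function()                      # overlay the two curves

result = logrank_test(df.loc[high, "os_days"], df.loc[~high, "os_days"],
                      event_observed_A=df.loc[high, "os_event"],
                      event_observed_B=df.loc[~high, "os_event"])
print(result.p_value)                                 # log-rank comparison
```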
| MCM4 induced LUAD cell growth
MCM4 was knocked down or overexpressed in A549 and H441 cells to investigate its functions in LUAD cell growth. Its expression in these stable cells was verified by western blotting, confirming successful construction (Figure 2A). MTT assay results showed that the absorbance of cells with MCM4 overexpression was higher than that of normal control cells in both A549 and H441 lines, whereas cells with MCM4 knockdown had lower OD values (Figure 2B). Additionally, a colony formation assay was carried out: overexpression of MCM4 in A549 and H441 cells increased colony numbers compared with normal cells, and MCM4 knockdown led to the opposite result (Figure 2C,D). PCNA, a marker of cell proliferation, 28 was significantly upregulated in A549 and H441 cells by MCM4 overexpression, while inhibition of MCM4 suppressed PCNA protein levels (Figure 2E). We also investigated the correlation between MCM4 and PCNA expression, and PCNA was positively correlated with MCM4 (Figure 2F). These findings revealed that MCM4 overexpression significantly enhanced the growth and proliferation of LUAD cell lines.
| MCM4 inhibited LUAD cell apoptosis
Cell apoptosis was analysed in cells with MCM4 overexpression or silencing using caspase 3/7 activity, a reliable marker of apoptosis. Upregulation of MCM4 reduced caspase 3/7 activity in LUAD cells compared with control groups, while MCM4 knockdown caused the opposite result (Figure 3A,B). Caspase-3 is recognized as a critical enzyme in apoptosis. 29 Here we found that overexpression of MCM4 increased caspase-3 protein expression and decreased cleaved caspase-3 protein levels, while MCM4 knockdown led to the opposite results (Figure 3C,D).
| MCM4 accelerated cell cycle progression from G1 to S phase by suppressing P21 protein levels
The cell cycle distribution of LUAD cells with MCM4 overexpression or knockdown was investigated using flow cytometry. MCM4 upregulation accelerated cell cycle progression from G1 to S phase, whereas MCM4 silencing led to G1 phase arrest. Nevertheless, no significant difference was found at G2 phase (Figure 4A,B).
P21 is a well-known tumour suppressor contributing to G1 arrest. 28,30,31 In this study, western blotting showed that overexpression of MCM4 inhibited P21 protein levels, whereas MCM4 silencing upregulated them (Figure 4C,D). This may explain the G1 arrest in cells with MCM4 knockdown.
| Inhibition of MCM4 attenuated tumour growth in vivo
The data we obtained in vitro suggested that MCM4 loss inhibits LUAD progression. We further evaluated the potential effects of MCM4 on tumour growth using a xenograft mouse model, in which mice were injected subcutaneously with A549 cells or A549 cells with MCM4 knockdown. The results presented in Figure 5 show that MCM4 knockdown significantly decreased both tumour volume and weight after 24 days (Figure 5A-C).
Here we also tested PCNA and P21 levels and found that MCM4 silencing inhibited their protein levels in tumour tissues (Figure 5D).
PCNA is a protein generated mainly in proliferating and transforming cells, and is associated with DNA replication and replication-related pathways. 32 These findings suggested that MCM4 knockdown suppressed tumour growth via inhibition of PCNA levels.
| DISCUSSION
The MCM complex is composed of six subunits (MCM2-7) and plays a key role in DNA replication initiation. 15,16 Among MCM2-7, MCM4 is considered the most conserved protein, and its abnormal expression is associated with cancer progression, including mammary carcinoma, oesophageal and breast cancer. 18,25,33 However, few reports have focused on the role of MCM4 in lung cancer, especially LUAD.
Therefore, this study investigated the impact of MCM4 on LUAD development in vitro and in vivo. We found an oncogenic role of MCM4 in LUAD progression, suggesting that MCM4 could be used as a reliable marker for LUAD diagnosis and treatment.
In detail, we found that MCM4 was overexpressed in LUAD tumour samples and cells relative to their corresponding normal controls, and that high MCM4 levels were associated with a low survival rate in patients with LUAD. Previous research also reported excessive MCM4 expression in LSCC, 21 ovarian, 34 gastric, 22 ESCC 23 and breast cancer. MCM4 is highly expressed in breast cancer patients, and silencing MCM4 significantly inhibited the proliferation of breast cancer cells. E2F2 induced upregulation of MCM4 expression in ovarian cancer, which was significantly associated with poor prognosis. 34 In addition, excessive MCM4 expression is a potential prognostic marker for LSCC, related to poor patient prognosis. 21 These findings suggest that MCM4 may act as a pro-oncogenic factor in most tumours and may be involved in tumour formation and progression. Subsequently, this study found that MCM4 upregulation accelerated cell proliferation and suppressed apoptosis in LUAD cells, while MCM4 silencing caused the opposite results. In vivo, knockdown of MCM4 attenuated tumour growth in the xenograft mouse model. Our results are similar to the findings of Han et al. 21 and Junko et al. 26 These results suggest that MCM4 is a potential molecular target for LUAD. PCNA, a critical eukaryotic replication accessory factor, is highly conserved and interacts with multiple proteins. 35,36 It mediates DNA replication, apoptosis, repair and cell cycle control in vitro and in vivo. PCNA expression is reported to be upregulated in cancer cells and has served as a biomarker for cell proliferation in tumours. 31,37 In this study, PCNA protein levels were increased by MCM4 overexpression and decreased by MCM4 knockdown in A549 and H441 cell lines. The same results were observed in tumour tissues from the mouse model injected with LUAD cells with MCM4 knockdown.
FIGURE 1
Elevated expression of MCM4 was observed in LUAD tissues and cells, which was associated with a low overall survival rate. (A) MCM4 expression in TCGA datasets. MCM4 levels in LUAD tumour samples were higher than in adjacent normal tissues. (B) The overall survival rate of patients with high MCM4 expression was reduced compared with that of patients with low MCM4 expression. (C) Cox univariate regression analysis of age and histological grade in the TCGA cohort. (D) Western blotting showed that MCM4 levels were increased in tumour tissues relative to normal tissues in the LSL-Kras G12D/+ mouse model. (E) MCM4 was overexpressed in LUAD cells (H441, H460 and H522) compared with human embryonic lung fibroblast cells (MRC5 and WI38). *p < 0.05. N, adjacent normal tissues; T, tumour tissues. (F) Quantitative results of MCM4 expression.
FIGURE 2
MCM4 promoted cell growth and viability in A549 and H441 cells. (A) Stable cell lines with MCM4 overexpression (MCM4-OE) or knockdown (siMCM4) were constructed. MCM4 protein expression in these cells was validated using western blotting. (B) MTT assay analysis showed that MCM4 overexpression significantly increased the OD values of A549 and H441 cell lines, and MCM4 silencing reduced the OD values. (C, D) Overexpression of MCM4 significantly increased the colony numbers of LUAD cells (A549 and H441 cell lines); on the contrary, colony numbers were decreased in cells with MCM4 knockdown. The relative number of colonies was calculated by normalization to the untreated group as 100%. (E) PCNA is a biomarker for cell proliferation. MCM4 overexpression increased PCNA protein levels, whereas MCM4 inhibition reduced them. Data are expressed as mean ± S.E.M. ***p < 0.001. MCM4-OE, cells with MCM4 overexpression; NC, normal control; siMCM4, cells with MCM4 knockdown. (F) The expression correlation between MCM4 and PCNA.
FIGURE 4
MCM4 was involved in the cell cycle of LUAD cells. (A,B) Compared with normal cell lines, MCM4 overexpression accelerated cell cycle progression from G1 to S phase in both A549 (A) and H441 (B) cells, whereas MCM4 silencing resulted in G1 arrest. (C,D) P21, an inhibitor of CDKs, induces G1 arrest and blocks entry to S phase. Upregulation of MCM4 decreased P21 protein levels, and MCM4 silencing increased them compared with the control group. Data are expressed as mean ± S.E.M. **p < 0.01; ***p < 0.001. MCM4-OE, cells with MCM4 overexpression; NC, normal control; siMCM4, cells with MCM4 knockdown.
These findings further verified the carcinogenic role of MCM4 in LUAD. Cell proliferation is mediated by multiple mechanisms, and the uncontrolled self-replication of tumour cells enhances cancer development. 38 MCM2-7 is connected with DNA replication initiation.
FIGURE 5 MCM4 silencing inhibited tumour growth in vivo. (A) Female Balb/c nude mice were injected with A549 cells or cells with shMCM4. Tumour volume was measured every 3 days. After 24 days, tumours were removed and photographed. (B) MCM4 inhibition significantly decreased tumour volume compared with the control group. (C) Tumour weight was measured; MCM4 knockdown significantly reduced tumour weight. (D) Consistent with the in vitro results, MCM4 silencing also decreased PCNA and P21 protein levels. Data are expressed as mean ± S.E.M. **p < 0.01. shNC, mice injected with LUAD cells; shMCM4, mice injected with LUAD cells with MCM4 knockdown.
|
2023-10-12T06:18:02.737Z
|
2023-10-10T00:00:00.000
|
{
"year": 2023,
"sha1": "0d0f95f841d053c7f580f94869f58cf3bae73de7",
"oa_license": "CCBY",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/jcmm.17819",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "b4a6d7dbc19a1a18e60dbdf810a364421e06ed92",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
215790932
|
pes2o/s2orc
|
v3-fos-license
|
Bacterial DNAemia is associated with serum zonulin levels in older subjects
The increased presence of bacteria in blood is a plausible contributing factor in the development and progression of aging-associated diseases. In this context, we performed the quantification and the taxonomic profiling of the bacterial DNA in blood samples collected from forty-three older subjects enrolled in a nursing home. Quantitative PCR targeting the 16S rRNA gene revealed that all samples contained detectable amounts of bacterial DNA with a concentration that varied considerably between subjects. Correlation analyses revealed that the bacterial DNAemia (expressed as concentration of 16S rRNA gene copies in blood) significantly associated with the serum levels of zonulin, a marker of intestinal permeability. This result was confirmed by the analysis of a second set of blood samples collected from the same subjects. 16S rRNA gene profiling revealed that most of the bacterial DNA detected in blood was ascribable to the phylum Proteobacteria with a predominance of the genus Pseudomonas. Several control samples were also analyzed to assess the influence of contaminant bacterial DNA potentially originating from reagents and materials. The data reported here suggest that para-cellular permeability of epithelial (and, potentially, endothelial) cell layers may play an important role in bacterial migration into the bloodstream. Bacterial DNAemia is likely to impact on several aspects of host physiology and could underpin the development and prognosis of various diseases in older subjects.
Fig. S5. Abundance of 16S rRNA gene copies of taxonomic units detected in the second set of blood samples (n=42) that significantly correlated with the serum levels of zonulin; taxa shown include Pseudomonas, Arthrobacter, Acinetobacter, Escherichia-Shigella, Phyllobacterium, Paracoccus, Cluster_10, Solirubrobacterales 67-14 genus, Caulobacteraceae genus, Cluster_364, Burkholderia, and Clostridium. ρ, Spearman's rank correlation coefficient; P, P value of the Kendall's rank correlation. Taxa that also correlated significantly with zonulin in the analysis of the first set of blood samples are indicated in bold and red color.
Fig. S6. Correlations of the taxonomic units detected in blood (expressed as relative abundances) with age, BMI, and metabolic and functional markers determined in blood of the older subjects under study (n=43). This figure only includes taxa whose abundance significantly correlated with at least one parameter.
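For orientation, a minimal sketch of the rank-correlation statistics named in these legends — Spearman's ρ alongside a Kendall-test p-value — on invented illustrative arrays (neither the values nor the pairing reflect the study's data):

```python
# Illustrative sketch of the rank-correlation statistics in the legends above.
# The arrays are invented placeholders, not the study's measurements.
from scipy.stats import spearmanr, kendalltau

zonulin = [2.1, 3.4, 1.8, 4.0, 2.9, 3.7]        # serum zonulin, hypothetical units
taxon_16s = [120, 340, 90, 510, 260, 420]       # 16S rRNA gene copies, hypothetical

rho, _ = spearmanr(zonulin, taxon_16s)          # Spearman's rank correlation
tau, p = kendalltau(zonulin, taxon_16s)         # Kendall's rank correlation test
print(f"rho = {rho:.2f}, Kendall tau = {tau:.2f}, P = {p:.3f}")
```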
Technical issues concerning zonulin quantification
In this study, zonulin quantification in serum samples was carried out by means of the most commonly used commercial ELISA kit. Recently, the specificity of this and other ELISA assays has been questioned 2 and, consequently, it was suggested that data collected as a direct assessment of intestinal permeability be interpreted with caution 3 . In this context, it is noteworthy that Scheffer and colleagues 4 previously identified, through the use of the same kit, a variety of proteins structurally related to zonulin (in particular properdin).
Consequently, the authors suggested that although the assay is not specific for pre-haptoglobin2 quantification, other members of the permeability-regulating proteins belonging to the mannose-associated serine protease family could be determined 4 .
Technical issues concerning the detection and taxonomic profiling of bacterial DNA in blood
In a recent publication, circulating cell-free DNA isolated from human blood plasma was subjected to massive shotgun sequencing 5 ; more than half of the identified contigs had little or no homology with sequences in available databases and, interestingly, were assigned to hundreds of entirely novel microbial taxa. In our study, we did not find such a large presence of unknown microorganisms. Nonetheless, two main aspects distinguish the research by Kowarsky et al. from ours: (i) we performed 16S rRNA gene profiling and not shotgun metagenomic sequencing and (ii) we analyzed DNA isolated from whole blood and not plasma. This second aspect is particularly important considering the presence of bacterial DNA in blood cells such as erythrocytes and antigen-presenting cells 6,7 .
In this study, the bacterial DNA isolated from blood was taxonomically profiled through MiSeq sequencing of 16S rRNA gene amplicons. We presented above numerous similarities, both quantitative (i.e., abundance of 16S rRNA gene copies) and qualitative (i.e., detected taxa), between the results of our study and those reported in several other studies available in the literature. However, none of the papers we referenced above focused specifically on the evaluation of potential contaminant DNA originating from any possible experimental step. The use of 16S rRNA gene profiling for the bacterial taxonomic characterization of low microbial biomass samples, such as blood, has been criticized as being at high risk of microbial contamination, which may occur at any step of the protocol, from sample collection until sequencing 8,9 . In our study, we analyzed several control samples to assess the potential presence of contaminants in labware (e.g. vacutainer and EDTA tubes) and reagents (e.g. solutions used during extraction, library preparation, sequencing, and qPCR). According to the qPCR experiments, we always detected in control samples a quantity of bacterial DNA much lower than that quantified in blood samples, suggesting that potential contaminants should not have significantly affected the taxonomic profiling of blood samples. Moreover, the confirmation of a significant correlation between zonulin and 16S rRNA gene copies in blood (total and ascribed to Pseudomonas) also in the second set of blood samples investigated supports the conclusion that the bacterial DNA detected in blood largely does not derive from contamination. Nonetheless, it is also important to mention that most of the bacterial genera detected in blood in our study have been reported as contaminants occurring during microbiome research in other studies (reviewed in 8 ).
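A hedged sketch of the absolute-quantification arithmetic that such qPCR comparisons rely on; the slope and intercept below are illustrative placeholders, not the study's fitted standard curve:

```python
# Back-calculating 16S rRNA gene copies from a qPCR Ct value via a
# standard curve Ct = slope * log10(copies) + intercept.
# Slope and intercept are illustrative placeholders.
def copies_from_ct(ct: float, slope: float = -3.32, intercept: float = 38.0) -> float:
    return 10 ** ((ct - intercept) / slope)

print(f"{copies_from_ct(30.0):.0f} copies")  # ~257 copies for Ct = 30
```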
Considering the relative abundance of bacterial taxa detected in blood and control samples, we hypothesize that the most probable contaminants belong to the families Enterobacteriaceae, Micrococcaceae and Moraxellaceae (the second, third and fifth most abundant families detected in blood, respectively), whereas most of the DNA ascribed to Pseudomonadaceae (the most abundant family detected in blood) is less likely to derive from contaminants. Lists of bacterial taxa identified in negative controls during different independent studies have been proposed 8,10 , cataloging up to 70 different genera to be considered as potential contaminants 8 . These lists contain numerous Proteobacteria, including Pseudomonas, which was found to be the most prevalent and abundant bacterial genus in the blood samples investigated in our study. Pseudomonas is a ubiquitous bacterium, which colonizes numerous environments, such as soil, water and various plant and animal organisms, owing to its minimal survival requirements and remarkable adaptation ability 11 . Notably, Pseudomonas is also one of the microorganisms most frequently isolated from patients with bacteremia, particularly the species P. aeruginosa 12 . In this report, the partial sequence of the 16S rRNA gene belonging to the most prevalent and abundant OTUs found in the analyzed blood samples (Cluster 1 and Cluster 3, Fig. 5) shared 100% similarity with P. fluorescens and other species of the same phylogenetic lineage. Although far less pathogenic than P. aeruginosa, P. fluorescens has often been reported as the aetiologic agent of opportunistic infections of the lungs, mouth, stomach, urinary tract, skin and, most commonly, blood 13,14 . Notably, P. fluorescens is recognized as the most important cause of iatrogenic sepsis, attributed to contaminated blood transfusion or contaminated equipment used in intravenous infusions [15][16][17] .
Although the literature evidence discussed above suggests that P. fluorescens and related species can be contaminants (see Supplementary Discussion in Additional file 1), these bacteria have also been reported to possess numerous functional properties that support their survival and growth in mammalian hosts 14 . Furthermore, an interesting association was found between the presence of serum antibodies against the I2 peptide encoded by P. fluorescens and Crohn's disease 18 , celiac disease 19 , ankylosing spondylitis 20 , and chronic granulomatous disease 21 . In addition, P. fluorescens was reported to be regularly cultured from clinical samples even in the absence of acute infection 14 . Finally, P. fluorescens was demonstrated to induce zonulin expression and increase intestinal permeability in a time-dependent manner in an in vitro model of intestinal epithelium 22 . In the same study, the authors found increased zonulin levels and a higher abundance of Pseudomonas 16S rRNA gene copies (as determined through qPCR with genus-specific primers) in coronary artery disease (CAD) patients compared with non-CAD subjects 22 . Altogether, these reports support the hypothesis that human-adapted P. fluorescens strains constitute low-abundance indigenous members of the microbial ecosystem of various body sites, such as the lungs, mouth, and stomach 14,[23][24][25] . Contextually, we can speculate that certain P. fluorescens-related strains are highly adaptable and poorly pathogenic members of the microbiota of several body sites that may frequently translocate into the bloodstream, providing a dominant contribution to bacterial DNAemia. However, we are conscious that our results do not conclusively demonstrate the actual presence of Pseudomonas (cells or free DNA) in blood. We believe that DNA-
|
2020-04-16T09:19:21.357Z
|
2020-04-10T00:00:00.000
|
{
"year": 2021,
"sha1": "b6f8a8193f17bf9d3a1f3436dec2fd99449f0e40",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41598-021-90476-0.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "abb0e7dd9d9835f4d1ce1e501d1261358765b603",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology"
]
}
|
228082305
|
pes2o/s2orc
|
v3-fos-license
|
Connotation of Otological Symptoms in Temporomandibular Joint Disorder and Vice-Versa (Clinical Comparative Original Study)
Many articles have highlighted the association between otological symptoms and TMJD. Anatomic, neurologic and emotional relationships are considered the causative factors of this symptom association. According to multiple published reports, otalgia can be a common sign in TMJD subjects with tenderness and pain on the mandibular condyle. Aims of study: The temporomandibular joint disorder may be accompanied by a series of otological symptoms such as otalgia. The investigators aim in this clinical study at multiple points, such as analyzing the prevalence of different symptoms of TMJD and estimating the importance of cooperation between two different specialties, namely maxillofacial surgery and ENT; verifying the frequency of otologic signs and symptoms with TMJ disorder and vice versa; and specifying to what extent therapeutic intervention on the articular pathology may influence the manifestations. Furthermore, the authors aim to detect the influence of proper diagnosis and selection of a proper treatment plan, and to emphasize the correlations between the two specialties. Material and Method: One hundred participants took part in this research, divided equally between the two specialties: fifty patients attended the private clinic of a maxillofacial surgeon and fifty attended a private ear, nose and throat (ENT) clinic. All of these patients share temporomandibular joint disorders (TMJDS) and otological symptoms (OS) but attended different specialties. Result: One hundred patients with TMJDS and OS were analyzed according to different age groups as well as gender. Descriptive analyses for the cases in both specialties highlighted the highest incidences: the 31-40 age group was the largest in both specialties (38% and 30% for maxillofacial and ENT patients, respectively), and most participants were females (74%), equally in both branches. Comparisons between patients attending the maxillofacial clinic and patients attending the ENT clinic were analyzed according to age and gender with respect to symptoms, investigations, the site involved, and the question answers. Discussion: The management of patients with TMD is a challenge, and an approach that includes dentists and otorhinolaryngologists is necessary to rule out, for example, otological diseases. It is imperative to understand the need for interdisciplinary management between the doctor and the dental specialist in craniofacial pain, and to aim for a conservative approach in the treatment of muscular tension in the masticatory and auditory muscles. Specialists in a single discipline cannot always individually resolve the symptoms present in a patient without the invaluable support of multidisciplinary management. Each specialty contributes its specific knowledge to the differential diagnosis process that guides a correct treatment plan.
INTRODUCTION
Temporomandibular joint dysfunction is considered the third most common stomatological disease in terms of population illness, owing to its chronicity and widespread prevalence; E. Ferendiuk stated that "the disorders occur in more than 10% of population" [1].
Epidemiological data confirm that painful TMJ disease is increasing (to about 40%) while the age of affected patients is decreasing; furthermore, females are affected more than males [2]. In these circumstances, the problem can impair individuals' ability to perform everyday functions at home and work, making the discussion and resolution of patients' complaints a priority [1].
There is no doubt that the functional relevance between the stomatognathic system and the acoustic-vestibular apparatus is widely discussed from many viewpoints, such as the anatomical, physiological and symptomatic relations. Temporomandibular disorder (TMJD) is a term reflecting various clinical problems involving the masticatory muscles and the temporomandibular joint with its associated structures.
The etiology of TMJD is varied, ranging from mild stress and emotional problems, and muscle hyper- or hypoactivity, to more advanced causes such as occlusal disturbances, degenerative disorders, trauma and harmful habits that generate persistent strain [3].
Pain, clicking or crackling sounds in the TMJ, and limitation of mouth opening are the foremost clinical symptoms of TMJD. Pain can spread to different regions of the head, including the pre-auricular and auricular regions [4].
Many articles have highlighted the association between otological symptoms and TMJD [5][6][7]. Anatomic, neurologic and emotional relationships are considered the causative factors of this symptom association.
According to multiple published studies, otalgia can be a common sign in TMJD subjects, with tenderness and pain on mandibular condyle palpation [8,9]. In addition, tinnitus, vertigo and ear fullness also show a highly prevalent association [10,11].
AIMS OF STUDY
The TMJD may be accompanied by a series of otological symptoms such as otalgia. The investigators aim in this clinical study at multiple points, such as analyzing the prevalence of different symptoms of TMJD and estimating the importance of cooperation between two different specialties, namely maxillofacial surgery and ENT; verifying the occurrence of otologic signs and symptoms with TMJ disorder and vice versa; and specifying to what extent therapeutic intervention on the articular pathology may influence the manifestations. Furthermore, the authors aim to detect the influence of proper diagnosis and selection of a proper treatment plan, and to emphasize the correlations between the two specialties.
MATERIAL AND METHOD
One hundred participants took part in this research, divided equally between the two specialties: fifty patients attended the private clinic of a maxillofacial surgeon and fifty attended a private ear, nose and throat (ENT) clinic. All of these patients share temporomandibular joint disorders (TMJDS) and otological symptoms (OS) but attended different specialties.
Inclusion Criteria
Patients aged from 14 to 70 years, with no sex predilection, were eligible. All clinical records routinely kept for patients treated at these two private clinics over a six-month period were initially selected, for a total of 197 patient records. Ninety-seven patients were excluded for incomplete data or failure to meet the eligibility criteria; the final sample was 100 records. Demographic patient information was recorded, including identity details, medical and dental history, and physical and clinical examinations involving the temporomandibular joint area and the ear region. The muscles of mastication and of the head were included in the examination, and functional tests of the temporomandibular joint and occlusion details were also recorded.
Clinical signs and symptoms were recorded as appropriate to reach a definite diagnosis, in full detail according to each branch. For the maxillofacial surgeon, these were pain type, intensity, site, extension, causes, aggravating factors, relieving factors, duration and number of pain attacks, and previous treatment undergone; tenderness as well as clicking at the site was documented. In contrast, pain, hearing loss and tinnitus were recorded by the ENT surgeon. The line of treatment provided and the follow-up for both branches were also assessed.
The specific question shared in both branches was: "What was the first symptom that made you seek treatment: the ear or the joint?"
The data analyzed were gender (male and female) and age (up to 20 years old, 21-30, 31-40, 41-50, and 51 or older). Related symptoms such as dizziness, ear fullness, and imbalance were recorded but not included in the parameters.
Microsoft Excel was used to record the data, which were analyzed with the Statistical Package for the Social Sciences (SPSS) software, IBM version 20. Primarily, descriptive analyses were performed. The correlation between age and the symptoms in the different branches was estimated, and comparisons between age and symptoms in the different branches were performed; the same analysis was carried out for gender. The answers to the shared question were also evaluated, as were the correlation between otological symptoms and temporomandibular disorder and the temporomandibular site involved (right, left or both). Differences in the line of treatment and the percentage of referrals between the two specialties were also analyzed. The significance level was 5%.
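As a hedged illustration of the kind of between-group comparison reported here (the counts below are invented, not the study's tables), a chi-square test on a 2×2 symptom table could be run as follows:

```python
# Illustrative chi-square test on a 2x2 contingency table of symptom
# presence by clinic; counts are invented placeholders.
from scipy.stats import chi2_contingency

#        symptom+  symptom-
table = [[31, 19],          # maxillofacial clinic (n=50)
         [18, 32]]          # ENT clinic (n=50)

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}, significant: {p < 0.05}")
```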
Descriptive Analyses
One hundred patients with TMJDS and OS were analyzed according to different age groups as well as gender. Descriptive analyses for the cases in both specialties are presented in Table 1, with the highest incidences highlighted. The 31-40 age group was the largest in both specialties (38% and 30% for maxillofacial and ENT patients, respectively). Most of the participants were females (74%), equally in both branches.
Comparisons between patients attending the maxillofacial clinic and patients attending the ENT clinic were analyzed according to age and gender with respect to symptoms, investigations, the site involved, the question answers and follow-up, as explained below. Tables 2 and 3 show the symptom comparisons by age group in both specialties: pain, tenderness and clicking showed significant differences between maxillofacial and ENT patients (0.034, 0.048 and 0.049, respectively). The 31-40 years age group showed the highest percentage (38%) among maxillofacial cases. By contrast, pain, hearing loss and tinnitus did not show significant differences, and the 21-30 and 31-40 years age groups showed equal high percentages (26%). Gender showed no significant difference in either branch, whether for TMJD symptoms or otological ones (Figures 1 and 2).
Investigations Comparison
Thirty of the maxillofacial patients underwent radiographic assessment in addition to clinical examination, while none of the ENT patients were investigated radiographically; no significant difference was illustrated (0.218) (Table 4).
Site Involvement Comparison
Bilateral TMJ involvement displayed a significant difference between maxillofacial and ENT patients (0.012). Thirty-one patients showed bilateral joint symptoms, 38% of them in the 21-30 and 31-40 years age groups. The 21-30 and 31-40 age groups collectively accounted for more than half of the ENT patients in whom the left TMJ was involved (28% of ENT patients) (Table 5).
Question Answer
All participants were asked one question: which symptom made them seek treatment, joint pain or ear pain? Maxillofacial patients exhibited a significant result (0.004): thirty-nine patients answered clearly that pain in the TMJ area made them seek the maxillofacial clinic, while eleven patients had been referred by ENT surgeons for maxillofacial consultation. All patients attending the ENT clinic responded that ear pain was the priority (Table 6).
Follow up Comparison
The two-week follow-up did not show significant differences; by contrast, the four-week follow-up displayed a significant result (0.019). Thirty-seven maxillofacial patients responded to treatment in the first two weeks, in contrast to eight ENT patients (Table 7).
Gender Relations
Equivalent comparisons were also made for gender with respect to symptoms, investigations, the site involved, the question answers and the follow-up periods. No significant differences were exhibited in any of these, although gender showed a significant relation to TMJ pain (0.001, 0.001), tenderness (0.001, 0.004) and otological pain (0.001, 0.001) in both specialties, and to tinnitus (0.005) in ENT patients.
DISCUSSION
The academic and clinical knowledge of anatomy gathered over the years represents indisputable proof of the connection between the two entities, but there is a continuing argument about the extent to which TMJ pathology can generate otological symptoms. Temporomandibular joint disorder is a common condition displaying multiple symptoms, and varied terminology appears in the literature, such as "painful myofascial syndrome" or "occlusal-articular algodysfunctional syndrome". In some instances, it can affect patients' capabilities and even cause psychological disturbances, as it can manifest with tooth, muscle or joint symptoms, which can extend to affect and diminish the function of the acoustic-vestibular system [12].
About 60-70% of the general population has at least one sign of temporomandibular joint dysfunction (TMD), but only one out of four individuals is aware of these symptoms and reports them to a specialist [13]. Many etiological factors coincide to cause TMJD; these are highlighted in detail in a review article published by Chisnoiu in 2015. Occlusal conflicts, psychological disorders and hormonal disturbances can affect disease occurrence, and in some cases hereditary factors also play a role [14].
Pain is the most common symptom alarming patients and encouraging them to seek treatment. The severity and extension of pain are diverse; it may radiate to the preauricular, nasal, orbital, occipital, mastoid or supraclavicular level. Gabriela Musat (2017), an ENT surgeon, focused on the importance of the differential diagnosis of TMJ pain [12]. This is in line with the results of this study, as pain formed the most frequent symptom in both specialties.
Musat also concluded that TMJD is predominantly unilateral, besides its connotation with otic phenomena; this contrasts with the result of this study, which showed a significant result for bilateral site involvement (0.012*) in maxillofacial patients. This can be related to different hypotheses: Ren and Isberg showed a "significant correlation between the unilateral presence of tinnitus and the movement of the condyle from the TMJ" [13]; mechanical transmission of forces from the TMJ to the middle ear, via the disco-malleolar ligament, is one possible explanation for such tinnitus [15]. Likewise, irritation of the auriculotemporal nerve by the condylar part of the joint is another hypothesis explaining unilateral pain in the otic area [16] [17][18][19]. Fifty-nine patients complained of otalgia in both specialties collectively, forming more than half of all patients (59%); this result agrees with the previously mentioned articles.
Hearing loss occurred in 14 patients in this study, six of whom were below 20 years of age; Decker described a case series of hearing impairment manifesting in patients with deep occlusion or posterior localization of the mandibular condyle in the glenoid fossa [16]. Furthermore, Goodfriend (1933) noted the link between the incidence of tinnitus and stomatognathic system dysfunction, an argument also approached by Costen in 1934. Other authors consider that there is an increase in tinnitus intensity of up to 75% during voluntary movements of the temporomandibular joint [12].
Le Resche summarized the age prevalence of TMJD, describing it as following "an inverted U curve" [20]. Age incidences are controversial: the 45-64 year age group expressed the peak prevalence in some studies [21,22], while prevalence peaked in women of child-bearing ages in older studies [23]. Despite the relatively high prevalence of TMD in the elderly population, there are no current review articles that focus on this specific age group [24].
Conventionally, TMD is thought to trouble women. This was also confirmed in a recent prospective study discussing orofacial pain, which concluded that only the chronic form of TMD predominantly afflicts women. Moreover, various cross-sectional studies confirm the increased prevalence of TMD found in women [25,26]. This is in line with this study, as females may be more affected by psychological and hormonal factors.
Dentists or maxillofacial surgeons dealing with TMJD patients usually look for the etiology of the condition before proceeding to management, as clinical examination alone can be vague and indistinct; radiographs are highly important in the investigation protocol beforehand. ENT surgeons can usually discover the source of the patient's complaint using an otoscope to examine the ear, so radiographs may have no role. Nicholas L. (2020), in an ENT text on otoscopic examination, emphasized that most clinicians perform this type of examination to assess different ENT diseases involving the external auditory canal (EAC), the tympanic membrane (TM), and the middle ear [28].
Daniel et al. (2018) declared that not all patients need radiographic assessment; many cases can be treated on clinical examination alone, which is the most important step in the diagnosis of TMJ pathology, although "special imaging techniques are needed due to the complex anatomy and pathology" [29]. Epstein et al. [30] consider the "clinical findings of greater relevance than panoramic images for patients with TMD". Nevertheless, some authors have suggested panoramic radiography as a good imaging modality for TMJ visualization [31]. Joint function can be interpreted through TMJ imaging by comparing the condyle in the closed- and open-mouth positions, so individual selection criteria let the clinician properly decide which patients need special imaging techniques. In this study, the ENT surgeon depended on clinical examination only to manage patients, while thirty maxillofacial patients were treated on the basis of clinical and radiographic assessment.
There was no conflict in patients' choice of private clinic for treating their problems; this was settled by analyzing patients' answers about the first symptom that made them seek treatment, ear or joint. Otalgia led patients to the ENT clinic, whereas 39 patients attended the maxillofacial clinic complaining of TMJ pain; the remaining participants (11) attended the same clinic despite having ear pain rather than TMJ pain. This can be attributed to the connotation of the symptoms and their severity disturbing the patients' decisions.
One month of medical treatment is a sufficient period to cure simple to moderate TMJDS; in severe cases, other advanced management protocols can be used. Stephan et al. (2017) [32] published in their systematic review a proposed management pathway for suspected temporomandibular disorder with otological symptoms (Figure 4), and V. Santosh (2020) focused on special management considerations cited for both otolaryngologists and dentists, concluding that "since the symptoms of TMD are overlaid with otological symptoms, such patients may commonly visit otolaryngologists. A comprehensive evaluation as suggested in this article from the literature would be a good tool for the otolaryngologists as well as dentists to follow for better management of TMD patients" [33].
Recognizing and diagnosing TMJ disorder in patients with otalgia requires close teamwork between the dental specialist in craniofacial pain and the ENT surgeon. The otorhinolaryngologist should "confirm the absence of any significant or detectable auditory, genetic, drug-related, or trauma-related causes of otalgia" [34], then refer suspected TMJ disorder cases to a dentist experienced in treating TMJD.
This study calls for further research to assess the influence of proper diagnosis and selection of proper treatment, in cooperation with TMJ centers.
The management of patients with TMD is a challenge and should be approached through multidisciplinary teamwork to rule out, for example, otological diseases. Clinical evaluation is essential and should explore signs and symptoms, assessment of ergonomic posture, and visual and physical examination of the extraoral and intraoral regions (including palpation of the head and neck muscles) and the TMJ [35].
It is imperative to understand the need for interdisciplinary management between the doctor and the dental specialist in craniofacial pain, and to aim for a conservative approach in the treatment of muscular tension in the masticatory and auditory muscles. Specialists in a single discipline cannot always individually resolve the symptoms present in a patient without the invaluable support of multidisciplinary management. Each specialty contributes its specific knowledge to the differential diagnosis process that guides a correct treatment plan. Clinical success therefore depends on the ability of each specialist to analyze different aspects of the same problem [36].
CONCLUSION
TMJD is a common presentation that can be seen in general practice settings. Primary care for TMJD includes clinical and, if needed, radiographic assessment, ending with a diagnosis of TMJD. Conservative approaches are in most instances the best form of management, and in the majority of cases a trial of conservative therapy should be offered prior to referral to specialist care.
Patients' quality of life can be affected by TMJD symptoms, specifically pain and psychological discomfort. Identifying etiological factors as early as possible through multidisciplinary teamwork is crucial to providing an appropriate treatment plan that improves or eliminates the debilitating symptoms of TMD.
Author Contributions
The author contributed to the acquisition, statistical analysis, and interpretation of the data, drafted the manuscript, and critically revised it for important intellectual content. The author gave final approval and agrees to be accountable for all aspects of the work in ensuring that questions relating to the accuracy or integrity of any part of the work are appropriately investigated and resolved.
Conflict of Interest:
The authors report no conflicts of interest related to this review.
A microlocal approach to the enhanced Fourier-Sato transform in dimension one
Let $\mathcal{M}$ be a holonomic algebraic $\mathcal{D}$-module on the affine line. Its exponential factors are Puiseux germs describing the growth of holomorphic solutions to $\mathcal{M}$ at irregular points. The stationary phase formula states that the exponential factors of the Fourier transform of $\mathcal{M}$ are obtained by Legendre transform from the exponential factors of $\mathcal{M}$. We give a microlocal proof of this fact, by translating it in terms of enhanced ind-sheaves through the Riemann-Hilbert correspondence.
Let V be a one-dimensional complex vector space, with coordinate z, and V* its dual, with dual coordinate w. The Fourier transform, originally introduced as an integral transform with kernel associated to $e^{-zw}$, has various realizations. At the Weyl algebra level, it is the isomorphism $P \mapsto {}^{\mathsf{L}}P$ given by $z \mapsto -\partial_w$, $\partial_z \mapsto w$. This induces an equivalence between holonomic algebraic D-modules on V and on V*, that we still denote by $M \mapsto {}^{\mathsf{L}}M$.
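The following display is an added sanity check, not part of the original text: the Weyl-algebra rule above is precisely the one dictated by the kernel $e^{-zw}$, since differentiating and integrating by parts against the kernel give

\[
\partial_w\, e^{-zw} = -z\, e^{-zw}, \qquad \int e^{-zw}\,\partial_z u(z)\,dz = w\int e^{-zw}\,u(z)\,dz,
\]

the second identity holding for u decaying fast enough that the boundary term vanishes.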
At a microlocal level, the Fourier transform is attached to the symplectic transformation $\chi_\rho\colon T^*V \to T^*V^*$, $(z; z^*) \mapsto (z^*; -z)$ (sign conventions as in §7.3), where we used the identifications $T^*V = V\times V^*$ and $T^*V^* = V^*\times V$ of the cotangent bundles. This induces the Legendre transform from Puiseux germs on V (i.e., holomorphic functions on small sectors, which admit a Puiseux series expansion) to Puiseux germs on V*.
The exponential factors of a holonomic D V -module M are Puiseux germs on V describing the growth of its holomorphic solutions at irregular points. We are interested here in the stationary phase formula, which states that the exponential factors of L M are obtained by Legendre transform from the exponential factors of M.
The Riemann-Hilbert correspondence allows a restatement of this fact in terms of yet another realization of the Fourier transform, that is the Fourier-Sato transform for enhanced ind-sheaves. In this setting, we provide a microlocal proof of the stationary phase formula.
Let us explain all of the above in more detail.
1.2. Let a be a singular point of M (this includes $a = \infty$, where M is naturally extended as a meromorphic connection). Let $z_a$ be the local coordinate given by $z_a = z - a$ if $a \in V$, and $z_\infty = z^{-1}$. Denote by $S_aV$ the circle of tangent directions at a. Let $P_{S_aV}$ be the sheaf on $S_aV$ whose stalk at $\theta \in S_aV$ is the set of holomorphic functions on small sectors around θ, which admit a Puiseux series expansion at a. Its sections are called Puiseux germs. The sheaf $\overline{P}_{S_aV}$ is the quotient of $P_{S_aV}$ modulo bounded functions.
The Hukuhara-Levelt-Turrittin theorem describes both the formal and the asymptotic structure of M at a, as follows.
At the formal level, after a ramification $u^p = z_a$ and formal completion by $\mathbb{C}((u))$, M decomposes as a finite direct sum of modules of the form $E^f \otimes^{\mathcal{D}} R_f$. Here, $f \in \mathbb{C}\{u\}[u^{-1}]$ is a meromorphic germ, the D-module $E^f$ corresponds to the meromorphic connection $d + df$, we denote by $\otimes^{\mathcal{D}}$ the tensor product for D-modules, and $R_f$ is a regular holonomic D-module.
For a chosen determination of $z_a^{1/p}$ at $\theta \in S_aV$, let us still denote by f the Puiseux germ with expansion $f \in \mathbb{C}\{z_a^{1/p}\}[z_a^{-1/p}]$ as above. (We write $(a, \theta, f)$ instead of f if we need more precision.) As the isomorphism class of $E^f$ only depends on the equivalence class $[f] \in \overline{P}_{S_aV}$, we can assume that different summands correspond to different equivalence classes. Then, the set $N^{>0}_\theta$ of those f's with $R_f \neq 0$ is called a system of exponential factors of M at θ, and the rank N(f) of $R_f$ is called the multiplicity of f.
At the asymptotic level, the Hukuhara-Levelt-Turrittin theorem states that for any $\theta \in S_aV$ there is a basis $\{u_{f,i}\}_{f \in N^{>0}_\theta,\ i=1,\dots,N(f)}$ of holomorphic solutions to M on a small sector around θ, such that $e^{-f}u_{f,i}$ has moderate asymptotic growth at a.

1.3. Let $N = D_{V^*}/D_{V^*}Q$ be the $D_{V^*}$-module associated with the Airy operator $Q = \partial_w^2 - w$. It is regular everywhere except at $w = \infty$. Recall that the Airy equation $Q\psi = 0$ has two entire solutions $\psi_\pm$ with asymptotics at $\eta = 1\cdot\infty \in S_\infty V^*$ of the form $\psi_\pm(w) \sim c_\pm\, w^{-1/4} e^{\pm\frac{2}{3}w^{3/2}}$, where we chose the determination of $w^{1/4}$ with $1^{1/4} = 1$.

[Figure 1: The microlocal Fourier transform.]

Then, $g_\pm(w) = \pm\frac{2}{3}w^{3/2}$ are the exponential factors of N at η. At the level of the Weyl algebra, one has $Q = {}^{\mathsf{L}}P$, for $P = z^2 - \partial_z$. Hence $N \simeq {}^{\mathsf{L}}M$, where $M = D_V/D_VP$ is regular everywhere except at $z = \infty$. The equation $P\varphi = 0$ has the entire solution $\varphi(z) = e^{z^3/3}$. Here there is no ramification, and $f(z) = z^3/3$ is the only exponential factor of M at any $\theta \in S_\infty V$.
Let us show how to compute the exponential factors g ± of N from the exponential factor f of M.
As pictured in Figure 1, one has $C_f = \{(z; z^2)\,;\ z \in V\} \subset T^*V$, so that $\chi_\rho(C_f) = \{(z^2; -z)\,;\ z \in V\} \subset T^*V^*$, whose projection to V* is a ramified double cover of V*.
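As an added illustration of the example just discussed, the exponential factors $g_\pm$ can be computed from $f(z) = z^3/3$ using the Legendre-transform relations stated in §7.4 below ($w = f'(z)$ and $g(w) = f(z) - zw$ on each branch of the inverse of $f'$):

\[
w = f'(z) = z^2, \qquad z = \mp w^{1/2}, \qquad
g_\pm(w) = \tfrac{1}{3}z^3 - z\cdot z^2 = -\tfrac{2}{3}z^3 = \pm\tfrac{2}{3}w^{3/2},
\]

so the two branches of the ramified double cover produce exactly the two exponential factors of the Airy module N.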
1.4. Note that the above microlocal construction fails if the Puiseux germ f is linear, that is, if $f(z) = bz$ for some $b \in V^*$. In fact, in this case $C_f = \{w = b\}$, so that $\chi_\rho(C_f)$ is not of graph type. Let $a \in V \cup \{\infty\}$ and $\theta \in S_aV$. We say that a Puiseux germ $(a, \theta, f)$ is admissible if f is unbounded at a and (if $a = \infty$) not linear modulo bounded functions. The Legendre transform L establishes a one-to-one correspondence from admissible Puiseux germs on V to admissible Puiseux germs on V*.
We can now state the stationary phase formula. This result is classical (see §1.15 for some references). Let us translate it through the Riemann-Hilbert correspondence.
1.5. In order to keep track of the behavior at ∞, instead of V consider the pair $V_\infty = (V, \mathbb{P})$, where $\mathbb{P} = V \cup \{\infty\}$. Such a pair is called a bordered space. By definition, a holonomic $D_{V_\infty}$-module is a holonomic analytic $D_{\mathbb{P}}$-module M such that $M \simeq M(*\infty)$. Then, the category of algebraic holonomic D-modules on V is equivalent to that of holonomic $D_{V_\infty}$-modules.
The Riemann-Hilbert correspondence of [7] provides an embedding (i.e., a fully faithful functor) $\mathrm{Sol}^E_{V_\infty}$ from the triangulated category of holonomic $D_{V_\infty}$-modules to that of R-constructible enhanced ind-sheaves on $V_\infty$. Let us briefly recall the construction of the latter category, and the behavior of the "enhanced solution functor" $\mathrm{Sol}^E_{V_\infty}$. Note that, to simplify the presentation, some of the definitions that we give in §1.6 and §1.7 are different, but equivalent, to those that we recall in §2.
1.6. Denote by $\mathrm{D}^b_{\mathbb{R}\text{-c}}(\mathbb{C}_{V_\infty})$ the triangulated category of R-constructible sheaves on $V_\infty$, that is, the restrictions to V of R-constructible sheaves (in the derived sense) on $\mathbb{P}$.
Consider the bordered space $\mathbb{R}_\infty = (\mathbb{R}, \overline{\mathbb{R}})$, where $\overline{\mathbb{R}} = \mathbb{R}\cup\{-\infty,+\infty\}$, and denote by $t \in \mathbb{R}$ the coordinate. The category $\mathrm{E}^b_{\mathbb{R}\text{-c}}(\mathbb{C}_{V_\infty})$ of R-constructible enhanced sheaves on $V_\infty$ is obtained from R-constructible sheaves on $V \times \mathbb{R}_\infty$, as recalled in §2. Let U be an open subset of V, subanalytic in $\mathbb{P}$. Let $\varphi\colon U \to \mathbb{R}$ be a globally subanalytic function, that is, a function whose graph is subanalytic in $\mathbb{P} \times \overline{\mathbb{R}}$. Set $E^{\varphi}_{U|V}$ to be the associated exponential enhanced sheaf (see §3.1 and the display following this paragraph). It belongs to the heart of the natural t-structure. A structure theorem asserts that for any $F \in \mathrm{E}^b_{\mathbb{R}\text{-c}}(\mathbb{C}_{V_\infty})$ there is a subanalytic stratification of V such that F decomposes on each stratum U as a finite direct sum of shifts of objects of the form $E^{\varphi}_{U|V}$ or $E^{\varphi^+\rhd\,\varphi^-}_{U|V}$.
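A plausible form of the defining formulas, consistent with the description in Section 3 of exponential enhanced sheaves as constant sheaves on the epigraphs of subanalytic functions (the normalization of the second object is our assumption):

\[
E^{\varphi}_{U|V} := \mathbb{C}_{\{(x,t)\;;\; x\in U,\ t+\varphi(x)\ge 0\}}, \qquad
E^{\varphi^+\rhd\,\varphi^-}_{U|V} := \mathbb{C}_{\{(x,t)\;;\; x\in U,\ -\varphi^+(x)\le t<-\varphi^-(x)\}},
\]

which, for $\varphi^- \le \varphi^+$, fit into a natural exact sequence $0 \to E^{\varphi^+\rhd\,\varphi^-}_{U|V} \to E^{\varphi^+}_{U|V} \to E^{\varphi^-}_{U|V} \to 0$.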
1.7. The category $\mathrm{E}^b_{\mathbb{R}\text{-c}}(\mathrm{I}\mathbb{C}_{V_\infty})$ of R-constructible enhanced ind-sheaves on $V_\infty$ is the triangulated category with the same objects as $\mathrm{E}^b_{\mathbb{R}\text{-c}}(\mathbb{C}_{V_\infty})$, and with morphisms enlarged as recalled in §2. In order to avoid confusion, denote by $\mathbb{C}^E_{V_\infty}\overset{+}{\otimes} F$ the enhanced ind-sheaf corresponding to the enhanced sheaf F. As for enhanced sheaves, any $K \in \mathrm{E}^b_{\mathbb{R}\text{-c}}(\mathrm{I}\mathbb{C}_{V_\infty})$ locally decomposes as a finite direct sum of shifts of objects of the form $\mathbb{C}^E_{V_\infty}\overset{+}{\otimes} E^{\varphi}_{U|V}$ or $\mathbb{C}^E_{V_\infty}\overset{+}{\otimes} E^{\varphi^+\rhd\,\varphi^-}_{U|V}$. There is a natural embedding $e(F) := \mathbb{C}^E_{V_\infty}\overset{+}{\otimes}\pi^{-1}F$, where $\pi\colon V\times\mathbb{R} \to V$ is the projection.

1.8. The Riemann-Hilbert correspondence of [17] associates to a regular holonomic $D_{V_\infty}$-module L the $\mathbb{C}$-constructible complex of its holomorphic solutions $\mathcal{L}$. One has $\mathrm{Sol}^E_{V_\infty}(L) \simeq e(\mathcal{L})$. Let f be a meromorphic function on $\mathbb{P}$ with possible poles at $a \in \mathbb{P}$ and ∞. Denote by $\mathcal{E}^f_{V\setminus\{a\}|V_\infty}$ the holonomic $D_{V_\infty}$-module associated with the meromorphic connection $d + df$. Then, one has $\mathrm{Sol}^E_{V_\infty}(\mathcal{E}^f_{V\setminus\{a\}|V_\infty}) \simeq \mathbb{C}^E_{V_\infty}\overset{+}{\otimes} E^{\operatorname{Re} f}_{V\setminus\{a\}|V}$.

1.9. Let us translate the Hukuhara-Levelt-Turrittin theorem in terms of enhanced ind-sheaves. Let $a \in \mathbb{P}$. A multiplicity at a is a morphism of sheaves of sets $N\colon P_{S_aV} \to (\mathbb{Z}_{\geq 0})_{S_aV}$ (see §5.3 for the precise definition). Consider the induced multiplicity class $\mathcal{N}\colon \overline{P}_{S_aV} \to (\mathbb{Z}_{\geq 0})_{S_aV}$. Let us say that $F \in \mathrm{E}^b_{\mathbb{R}\text{-c}}(\mathbb{C}_{V_\infty})$ has normal form at a if there exists a multiplicity N such that any $\theta \in S_aV$ has an open sectorial neighborhood $V_\theta$ on which F decomposes as a direct sum of the exponential enhanced sheaves $E^{\operatorname{Re} f}_{V_\theta|V}$, each f appearing with multiplicity N(f). Note that the multiplicity N is uniquely determined by F. Let us say that $K \in \mathrm{E}^b_{\mathbb{R}\text{-c}}(\mathrm{I}\mathbb{C}_{V_\infty})$ has normal form at a if there exists such an F with $K \simeq \mathbb{C}^E_{V_\infty}\overset{+}{\otimes} F$ near a. (This definition is different, but equivalent, to the one we give in §5.4.) Note that, if N is the multiplicity of F, the multiplicity class $\mathcal{N}$ is uniquely determined by K. Let M be a holonomic $D_{V_\infty}$-module. In terms of enhanced solutions, the Hukuhara-Levelt-Turrittin theorem states that $K = \mathrm{Sol}^E_{V_\infty}(M)$ has normal form at any singular point of M.
More precisely, as was observed in [28], there is an equivalence of categories between germs of meromorphic connections M with poles at a, and germs of enhanced ind-sheaves K with normal form at a, such that $\pi^{-1}\mathbb{C}_{V\setminus\{a\}} \otimes K \simeq K$.
1.10. As a technical tool, to any admissible Puiseux germ $(a, \theta, f)$ on V we associate the multiplicity test functor $G_{(a,\theta,f)}$ (see §2.1), with values in the bounded derived category of $\mathbb{C}$-vector spaces.
If $V_\theta$ is a sectorial neighborhood of θ, the test functor computes the contribution of the exponential $E^{\operatorname{Re} f}_{V_\theta|V}$. Moreover, if $(a, \theta, h)$ is another Puiseux germ, the value of $G_{(a,\theta,h)}$ on the exponential attached to f vanishes unless $[h] = [f]$. In particular, if K has normal form at a with multiplicity class $\mathcal{N}$, then $G_{(a,\theta,f)}(K) \simeq \mathbb{C}^{\mathcal{N}(f)}$.
1.11. Consider the correspondence $V_\infty \xleftarrow{\,q\,} V_\infty \times V^*_\infty \xrightarrow{\,p\,} V^*_\infty$ given by the projections.
The Fourier-Laplace transform for D-modules and the Fourier-Sato transform for enhanced ind-sheaves are the integral transforms with kernel associated to $e^{-zw}$. More precisely, for $M \in \mathrm{D}^b_{hol}(D_{V_\infty})$ and $K \in \mathrm{E}^b_{\mathbb{R}\text{-c}}(\mathrm{I}\mathbb{C}_{V_\infty})$, set

\[
{}^{\mathsf{L}}M := Dp_*\bigl(Dq^*M \otimes^{\mathcal{D}} \mathcal{E}^{-zw}\bigr), \qquad
{}^{\mathsf{L}}K := Ep_{!!}\bigl(Eq^{-1}K \overset{+}{\otimes} \mathbb{C}^E\overset{+}{\otimes} E^{-\operatorname{Re}(zw)}_{V\times V^*|V\times V^*}\bigr),
\]

where $\otimes^{\mathcal{D}}$, $Dq^*$, $Dp_*$ and $\overset{+}{\otimes}$, $Eq^{-1}$, $Ep_{!!}$ denote the operations for D-modules and for enhanced ind-sheaves, respectively.
Since the enhanced solution functor is compatible with these operations, one has $\mathrm{Sol}^E_{V^*_\infty}({}^{\mathsf{L}}M) \simeq {}^{\mathsf{L}}\,\mathrm{Sol}^E_{V_\infty}(M)$, up to shift. We can thus deduce the stationary phase formula of Theorem 1.1 from the following analogue for enhanced ind-sheaves.
In fact, the statement still holds when replacing $\mathbb{C}$ with an arbitrary base field k.
As we now briefly explain, our proof of this result is based on microlocal arguments.
1.12. The microlocal theory of sheaves of [20] associates to an object H of $\mathrm{D}^b(\mathbb{C}_{V\times\mathbb{R}})$ its microsupport $SS(H) \subset T^*(V\times\mathbb{R})$, a closed conic involutive subset of the cotangent bundle. Denote by $(t, t^*) \in T^*\mathbb{R}$ the natural homogeneous symplectic coordinates. Recall that there is an equivalence realizing enhanced sheaves inside $\mathrm{D}^b(\mathbb{C}_{V\times\mathbb{R}})$, via the fully faithful functor $L^E$. In particular, considering the normalization map ρ recalled in §2.6, there is a well defined microsupport $SS^E_\rho(F)$ for enhanced sheaves F. To be more precise, the microsupport is a subset of the cotangent bundle to the real affine plane $V_{\mathbb{R}}$ underlying V. Here, for φ a real valued smooth function, we use the identification $(T^*V)_{\mathbb{R}} \times \mathbb{R} \simeq (T^*V_{\mathbb{R}}) \times \mathbb{R}$ of §2.6.

1.13. Using the same definitions as for enhanced ind-sheaves, there is a Fourier-Sato transform $\mathsf{L}$ for enhanced sheaves. It was proved in [33] that the enhanced Fourier-Sato transform transforms microsupports by $\chi_\rho$. Thus, one has the following link between the Fourier-Sato transform and the Legendre transform: if the admissible Puiseux germ $(b, \eta, g)$ is the Legendre transform of $(a, \theta, f)$, then $\chi_\rho$ interchanges the Lagrangians attached to $\operatorname{Re} f$ and $\operatorname{Re} g$.
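To spell out the link just stated (an added verification, with sign conventions inferred from the explicit formula for $\chi^{-1}$ in §7.3 and the Legendre relations of §7.4): writing

\[
\Lambda^{\varphi}_{U|M} := \{(x;\, d\varphi(x),\, -\varphi(x))\;;\ x \in U\} \subset (T^*M)\times\mathbb{R},
\]

as in Section 8, and letting $w = f'(z)$, $g(w) = f(z) - zw$, one checks

\[
\chi\bigl((z; f'(z)),\, -\operatorname{Re} f(z)\bigr)
= \bigl((f'(z); -z),\, -\operatorname{Re} f(z) + \operatorname{Re}(z f'(z))\bigr)
= \bigl((w; g'(w)),\, -\operatorname{Re} g(w)\bigr),
\]

so that χ interchanges $\Lambda^{\operatorname{Re} f}_{V_\theta|V}$ and $\Lambda^{\operatorname{Re} g}_{W_\eta|V^*}$.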
1.14. Our proof of Theorem 1.2 proceeds by the following arguments. Assume for example that $a = \infty$ and $b = \infty$ as in §1.3. (The other cases are treated in a similar way.) Since K has normal form at ∞, we can write $K \simeq \mathbb{C}^E_{V_\infty}\overset{+}{\otimes} F$ for an enhanced sheaf F with normal form at ∞, where the multiplicity test functor for enhanced sheaves has the same definition as the one for enhanced ind-sheaves. A microlocal argument similar to the one sketched below reduces the computation to F. We can take R big enough so that $\{|z| > R\}$ is covered by sectors $V_\theta$ where F decomposes as in (1.2). Moreover, since ${}^{\mathsf{L}}F$ is R-constructible, a generic $\eta \in S_\infty V^*$ has a sectorial neighborhood $W_\eta$ where there is a decomposition into a finite direct sum of objects $E^{\varphi_i}_{W_\eta|V^*}[d_i]$ and $E^{\varphi^+_j\rhd\,\varphi^-_j}_{W_\eta|V^*}[d_j]$, with $d_i, d_j \in \mathbb{Z}$, and $\varphi_i, \varphi^{\pm}_j$ analytic and globally subanalytic functions such that $\varphi^-_j < \varphi^+_j$. By (1.3), we are left to compare these two computations. On one hand, (1.2) implies a computation of the test functor up to a term Z due to spurious contributions from $\{|z| = R\}$.
On the other hand, (1.5) gives the corresponding computation in terms of the $\varphi_i$ and $\varphi^{\pm}_j$. Let φ be either $\varphi_i$ or $\varphi^{\pm}_j$. We can show that, with notations as in (1.4), the same holds for $C_\varphi = C_{L(\infty,\theta,f)}$ unless $L(\infty,\theta,f) = (\infty,\eta,g)$. It follows that only the index sets $I_g = \{i \in I\,;\ \varphi_i = \operatorname{Re} g\}$ and $J^{\pm}_g = \{j \in J\,;\ \varphi^{\pm}_j = \operatorname{Re} g\}$ contribute. For this, we use a result of [20] which allows one to keep track microlocally of multiplicities and shifts, by viewing the Fourier-Sato transform as a quantization of the symplectic transformation $\chi_\rho$.
1.15. Let us mention some related literature.
The Fourier-Laplace transform for holonomic D-modules in dimension one has been studied in [26], and more systematically in [27], where the Stokes phenomenon is also considered. See [32,16] for explicit computations in some special cases.
Classically, the stationary phase formula is stated in terms of the so-called local Fourier-Laplace transform for formal holonomic D-modules. This was introduced in [4] (see also [13,3]), by analogy with the ℓ-adic case treated in [25]. (For related results in the p-adic case, see e.g. [29] for analytic étale sheaves and [2] for arithmetic D-modules.) An explicit stationary phase formula was obtained in [30,11] (see also [14]) for D-modules, and in [12,1] for ℓ-adic sheaves.
The fact that the Riemann-Hilbert correspondence of [7] intertwines the Fourier-Laplace transform of holonomic D-modules with the enhanced Fourier-Sato transform was observed in [22], where the non-holonomic case is also discussed. In dimension one, the enhanced Fourier-Sato transform of perverse sheaves has been studied in [6], where the Stokes phenomenon is also considered.
In the present paper we do not discuss linear exponential factors. Note that a point a ∈ V gives a linear function w → aw on V * . Let M be a holonomic D V∞ -module. In [9] we relate, in the framework of enhanced ind-sheaves, the vanishing cycles of M at a with the graded component Gr aw Ψ ∞ ( L M) of its Fourier transform.
1.16. The contents of this paper are as follows.
After recalling some notations in Section 2, we study in Section 3 the objects of the form $E^{\varphi}_{U|M}$ and $E^{\varphi^+\rhd\,\varphi^-}_{U|M}$, which are the building blocks of R-constructible enhanced sheaves (resp. ind-sheaves).
We study their behavior on sectorial neighborhoods in Section 4. This is used in Section 5 to discuss the notion of enhanced (ind-)sheaves with normal form at a point. In Section 6, to an enhanced ind-sheaf with normal form we attach a filtered Stokes local system. In particular, we get a notion of exponential factor. We also introduce the multiplicity test functor, which detects the multiplicities of exponential factors. In Section 7 we recall the Legendre transform for Puiseux germs, highlighting its microlocal nature. We can then state the stationary phase formula in terms of enhanced ind-sheaves. Our proof of this formula uses techniques of the microlocal study of sheaves, which are detailed in Section 8.
2.1. A remark on inductive limits. Denote by Mod(k) the Grothendieck category of k-vector spaces, by D(k) its derived category, and by $\mathrm{D}^b(k) \subset \mathrm{D}(k)$ its bounded derived category, whose objects K satisfy $H^nK = 0$ for $|n| \gg 0$.
Since Mod(k) is semisimple, there is an equivalence of additive categories (2.1.1) $K \mapsto (H^nK)_{n\in\mathbb{Z}}$ between $\mathrm{D}(k)$ and the category of graded k-vector spaces. It follows from (2.1.1) that small filtrant inductive limits exist in $\mathrm{D}(k)$.

2.2. Enhanced sheaves. Denote by t the coordinate in $\mathbb{R}$, and consider the maps $p_1, p_2, \mu\colon M\times\mathbb{R}^2 \to M\times\mathbb{R}$ and $\pi\colon M\times\mathbb{R}\to M$, where $p_1, p_2, \pi$ are the projections, and $\mu(x, t_1, t_2) = (x, t_1 + t_2)$. The convolution functors with respect to the t variable in $\mathrm{D}^b(k_{M\times\mathbb{R}})$ are defined through these maps, as displayed below. Note that the object $k_{\{t\geq 0\}}$ is idempotent for $\overset{+}{\otimes}$. The triangulated category of enhanced sheaves is defined as a quotient of $\mathrm{D}^b(k_{M\times\mathbb{R}})$; the quotient functor has fully faithful left and right adjoints, respectively denoted $L^E$ and $R^E$. Enhanced sheaves are endowed with the six operations, and the exterior operations are defined via the associated morphism $f_{\mathbb{R}} = f \times \mathrm{id}_{\mathbb{R}}$. The natural t-structure of $\mathrm{D}^b(k_{M\times\mathbb{R}})$ induces by $L^E$ a t-structure for enhanced sheaves, and we consider its heart.
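The convolution displays can be reconstructed from the maps $p_1, p_2, \mu$ just introduced; the following formulas are the standard ones in this setting, and should be read as a reconstruction consistent with the replacement of $R\mu_!$ by $R\mu_{!!}$ mentioned in §2.4:

\[
F \overset{+}{\otimes} G := R\mu_{!}\bigl(p_1^{-1}F \otimes p_2^{-1}G\bigr), \qquad
\mathcal{H}om^{+}(F, G) := Rp_{1*}\,R\mathcal{H}om\bigl(p_2^{-1}F,\ \mu^{!}G\bigr),
\]

for $F, G \in \mathrm{D}^b(k_{M\times\mathbb{R}})$.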
2.3. Ind-sheaves on bordered spaces. Let M be a good topological space. Denote by $\mathrm{D}^b(\mathrm{I}k_M)$ the bounded derived category of ind-sheaves of k-vector spaces on M, that is, of ind-objects with values in the category of sheaves with compact support. There is a natural exact embedding of sheaves into ind-sheaves, which has an exact left adjoint α, which has in turn an exact fully faithful left adjoint β. The triangulated category of ind-sheaves on a bordered space $M_\infty$ is defined as a quotient; the quotient functor has a left adjoint l and a right adjoint r, both fully faithful. For $f\colon M_\infty \to N_\infty$ a morphism of bordered spaces, the six operations for ind-sheaves on bordered spaces are defined through the associated maps. As above, the natural embedding of sheaves has an exact left adjoint α.
2.4. Enhanced ind-sheaves. Let $\mathbb{R}_\infty = (\mathbb{R}, \overline{\mathbb{R}})$, with $\overline{\mathbb{R}} = \mathbb{R}\cup\{-\infty,+\infty\}$, and recall that $t \in \mathbb{R}$ denotes the coordinate. The triangulated category of enhanced ind-sheaves on $M_\infty$ is defined as a quotient, where the convolution functor $\overset{+}{\otimes}$ is defined as in §2.2, replacing $\mathbb{R}$ with $\mathbb{R}_\infty$ and $R\mu_!$ with $R\mu_{!!}$. The quotient functor has fully faithful left and right adjoints $L^E$ and $R^E$, respectively, defined as in §2.2.
The six operations for enhanced ind-sheaves are denoted as in §1.11 (e.g. $Ef^{-1}$, $Ef_{!!}$, for $f\colon M_\infty \to N_\infty$ a morphism of bordered spaces). As in §2.2, the exterior operations are defined via the associated morphism $f_{\mathbb{R}} = f \times \mathrm{id}_{\mathbb{R}_\infty}$. Denote by $\overset{+}{\boxtimes}$ the external tensor product.
There are also outer hom functors. The triangulated category $\mathrm{E}^b_+(\mathrm{I}k_{M_\infty})$ has a natural t-structure, and we denote by $\mathrm{E}^0_+(\mathrm{I}k_{M_\infty})$ its heart; the truncation functors are well defined. The following lemma will be of use later.
Proof. The statement follows from a chain of natural isomorphisms, together with a natural commutative diagram relating the operations involved.

The family of R-constructible enhanced sheaves is stable by the six operations, assuming semiproperness for direct images.
One sets the category of R-constructible enhanced ind-sheaves to be the image of the R-constructible enhanced sheaves under the quotient map Q of (2.4.1) and the embedding j. (The equivalence of this description with that in §1.7 follows from [7, Proposition 4.7.9].) The family of R-constructible enhanced ind-sheaves is stable by the six operations, assuming semiproperness for direct images.
There is a natural embedding $e\colon \mathrm{D}^b_{\mathbb{R}\text{-c}}(k_{M_\infty}) \to \mathrm{E}^b_{\mathbb{R}\text{-c}}(\mathrm{I}k_{M_\infty})$. Note that the canonical functor $\mathrm{E}^b_{\mathbb{R}\text{-c}}(k_{M_\infty}) \to \mathrm{E}^b_{\mathbb{R}\text{-c}}(\mathrm{I}k_{M_\infty})$ is essentially surjective but not fully faithful (see §1.7).
2.6. Microsupport. Let M be a real analytic manifold. To $L \in \mathrm{D}^b(k_M)$ one associates its microsupport $SS(L) \subset T^*M$, a closed conic involutive subset of the cotangent bundle. Denote by $(t, t^*) \in T^*\mathbb{R}$ the homogeneous symplectic coordinates. There is an equivalence realizing enhanced sheaves inside $\mathrm{D}^b(k_{M\times\mathbb{R}})$. Then, the space $(T^*M)\times\mathbb{R}$ has a contact structure given by $dt + \omega_M$, for $\omega_M$ the canonical 1-form.
Consider the maps $\gamma\colon \{t^* > 0\} \to (T^*M)\times\mathbb{R}$ and $\rho\colon (T^*M)\times\mathbb{R} \to T^*M$, where ρ is the projection.
For a complex manifold X, denote by $X_{\mathbb{R}}$ the underlying real analytic manifold. For $F \in \mathrm{E}^b_+(k_X)$, its microsupport $SS^E_\rho(F)$ is a subset of the cotangent bundle $T^*(X_{\mathbb{R}})$. In this paper, we use the identification $(T^*X)_{\mathbb{R}} \times \mathbb{R} \simeq (T^*X_{\mathbb{R}}) \times \mathbb{R}$.

2.7. D-modules. Let X be a complex manifold, and denote by $\mathcal{O}_X$ and $\mathcal{D}_X$ the sheaves of holomorphic functions and of differential operators. A morphism of bordered complex analytic manifolds is a morphism of bordered spaces such that the closure of its graph is a complex analytic subset. The operations for D-modules extend to bordered spaces. The family of holonomic D-modules is stable by the operations, assuming semiproperness for direct images.
Enhanced exponentials
As explained in [7,8], constant sheaves on the epigraphs of subanalytic functions are the building blocks of R-constructible enhanced (ind-)sheaves.
Here we consider their analogues on topological spaces, and state some of their properties. (See [28, §3] for similar results.)

3.1. Exponential enhanced sheaves. Let M be a good topological space, and $U \subset M$ an open subset. Let $\varphi, \varphi^+, \varphi^-\colon U \to \mathbb{R}$ be continuous functions with $\varphi^-(x) \le \varphi^+(x)$ for any $x \in U$. The associated exponential enhanced sheaves $E^{\varphi}_{U|M}$ and $E^{\varphi^+\rhd\,\varphi^-}_{U|M}$ are defined as in §1.6, by taking constant sheaves on the corresponding epigraphs, and they fit into a natural exact sequence.
Proof. (i) We have
Then, by (i), one has where ( * ) follows from the assumption, and the last isomorphism follows from the fact that Z ⊂ U and S = Int U (Z), the interior of Z relative to U .
Proof. (i) The only non trivial implication is the "if" part. Assume that there exists an epimorphism E ϕ (ii) follows from (i).
By the definitions, one has One has (i) For any n ∈ Z, one has where ( * ) follows from [7, Proposition 4.7.9] and ( * * ) from Lemma 3.1.1.
(ii) By (i) and Lemma 3.1.1 (ii), one has The statement follows.
The following result was stated in [8, §3.3] in the subanalytic case.
Real blow-up
In this section M is a smooth manifold, except in §4.4 where it is a real analytic manifold.
We use here the real blow-up of a point on a manifold to describe morphisms of exponential enhanced (ind-)sheaves on small sectors.
where $S_aM := \varpi_a^{-1}(a) \simeq S^{n-1}$ is the sphere of tangent directions at a. Let $\theta \in S_aM$ and $V \subset M$. One says that V is a sectorial neighborhood of θ if $V \subset M \setminus \{a\}$ and $S_aM \cup \varpi_a^{-1}(V)$ is a neighborhood of θ in $M_a$. This is equivalent to saying that $\varpi_a^{-1}(V) = U \setminus S_aM$ for some neighborhood U of θ in $M_a$.
For $\theta \in S_aM$, we will: • write for short $x \to \theta$ instead of $\varpi_a^{-1}(x) \to \theta$, • write $\theta \Subset V$ to indicate that V is a sectorial neighborhood of θ, • say that a property P(x) holds at θ, if there exists V with $\theta \Subset V$ such that P(x) holds for any $x \in V$. One says that $U \subset M \setminus \{a\}$ is a sectorial neighborhood of $I \subset S_aM$ if $\theta \Subset U$ for any $\theta \in I$.
Proof. One has , and similarly for E̟ ! a K 2 replaced with E̟ −1 a K 2 . It is then enough to remark that With the above notations, Lemma 2.4.1 implies (i) If ψ − ϕ is bounded from above at θ, then Proof. (a) Let us first prove the statements concerning E ϕ U |M and E ψ U |M , namely By Lemma 3.2.2, one has Up to shrinking, we may further assume that V is contractible. Under these conditions, one has Here, I, J are finite sets, d i , d j ∈ Z, and ϕ i , ϕ ± j : V − → R are analytic and globally subanalytic functions, such that 1 and the isomorphism (4.4.1) holds for any connected component V of U. However, in loc. cit. the functions ϕ i , ϕ ± j are not supposed to be analytic, and only satisfy the weak inequalities ϕ − j ϕ + j . We can assume that the functions ϕ i , ϕ ± j are analytic, after removing from V their singular loci. Then, we can remove those indices j ∈ J for which ϕ − j − ϕ + j ≡ 0 on V . Moreover, we can remove from V the zero loci of ϕ − j − ϕ + j for the remaining indices j ∈ J. This shrinking of the connected components of U leaves Z of dimension 1.
To conclude, note that θ∈ U if θ ∈ S a M is outside the finite set S a M ∩ ̟ −1 a Z \ S a M . Hence, θ∈ V for some connected component V of U.
Enhanced normal form
In this section, X denotes a smooth complex analytic curve.
As recalled in §1.2, the Hukuhara-Levelt-Turrittin theorem describes the formal and asymptotic structure of holonomic D-modules. We discuss here an analogous condition for enhanced ind-sheaves.

5.1. Puiseux germs. Let X be a smooth complex analytic curve, and take $a \in X$. Recall from Section 4.1 the notations relative to the real blow-up $X_a$ of X at a, i.e. the blow-up of the smooth real analytic surface underlying X. Here, $S_aX \simeq S^1$ is the circle of tangent directions at a. A local coordinate $z_a$ at a is a holomorphic function $z_a$ defined on a neighborhood of a such that $z_a(a) = 0$ and $(dz_a)(a) \neq 0$. (i) Let $\theta \in S_aX$ and $\theta \Subset U$. We say that $f \in \mathcal{O}_X(U)$ admits a Puiseux expansion at θ if there exist $p \in \mathbb{Z}_{>0}$, a local coordinate $z_a$ at a, an open subset $V \subset U$ with $\theta \Subset V$, and a determination of $z_a^{1/p}$ on V, such that $f(x) = h(z_a(x)^{1/p})$ for $x \in V$, where h is a section of $\mathcal{O}_{\mathbb{C}}(*0)$ in a neighborhood of 0.
(ii) We denote by P Xa the subsheaf of a * j −1 a O X whose sections on U ⊂ X a are holomorphic functions on −1 a U admitting a Puiseux expansion at any point of U ∩ S a X. (iii) The sheaf P SaX :=ĩ −1 a P Xa is called the sheaf of Puiseux germs on S a X. It is a locally constant sheaf. (iv) For λ ∈ Q, let P λ a|X ⊂ P a|X be the subsheaf of sections of pole order λ, i.e. of sections that locally belong to for some (hence, any) local coordinate z a at a, and some (hence, any) determination of z 1/p a at θ.
(v) Set $\overline{P}_{S_aX} := P_{S_aX}/P^{0}_{S_aX}$ and, for $f \in P_{S_aX}$, denote by $[f]$ its image in $\overline{P}_{S_aX}$. (vi) For $\lambda \in \mathbb{Q}$, let $\overline{P}^{\lambda}_{S_aX} \subset \overline{P}_{S_aX}$ be the subsheaf of sections of pole order λ, i.e. of sections that locally at any $\theta \in S_aX$ belong to the corresponding class, for some (hence, any) local coordinate $z_a$ at a, and some (hence, any) determination of $z_a^{1/p}$ at θ. (vii) Let $I \subset S_aX$ be an open connected subset. For non-zero $f \in \overline{P}_{S_aX}(I)$ we set $\mathrm{ord}_a(f) := \lambda$, where λ is the unique rational such that $f \in \overline{P}^{\lambda}_{S_aX,\theta}$ for some (hence, any) $\theta \in I$. It is called the pole order of f. We set $\mathrm{ord}_a(0) = -\infty$. (viii) For $K \subset \mathbb{R}$ an interval, set $\overline{P}^{K}_{S_aX} := \bigcup_{\lambda \in K\cap\mathbb{Q}} \overline{P}^{\lambda}_{S_aX}$.
The sheaves P SaX , and P λ SaX are locally constant sheaves on S a X, and one has SaX is a locally constant subsheaf of P SaX . Definition 5.1.2. Let θ ∈ S a X and Φ ⊂ P SaX,θ . One says that Φ is well In particular, this implies that there is a natural bijection between Φ and its class Φ ⊂ P SaX,θ . For example, for a choice of local coordinate z a at a, let P ′ SaX ⊂ P SaX be the subsheaf of sections that belong to for some (hence, any) local determination of z 1/p a . Then, P ′ SaX is representative.
Note that if P ′ SaX ⊂ P SaX is a representative subsheaf, then it is a locally constant sheaf, and its stalks are well separated. After shrinking U, we may assume that the Γ s 's do not cross, and that they are non singular with |z a | as parameter, for z a a local coordinate at a. Then St a (f, h) is the set of tangent directions at a of the Stokes curves.
Lemma 5.1.7. Let f, h ∈ P SaX,θ and assume that θ ∈ St a (f, h). Then, θ ∈ St a (f + k, h) for any k ∈ P SaX,θ such that ord a (k) < ord a (h − f ).
5.2. A lemma on nearby morphisms. Let $a \in X$ and $\theta \in S_aX$, and recall the maps (5.1.1). Proof. (i) By the definition, one has the stated identification; if $h \preceq_\theta f$, the statement follows from Lemma 4.3.1 (i).
(i-b) Otherwise $h - f \succ_\theta 0$, so that in particular $h - f \in P^{\lambda}_{S_aX,\theta}$ for some $\lambda \in \mathbb{Q}_{>0}$. After a ramification and the choice of a local coordinate $z_a$ at a, we can assume that $h - f = z_a^{-1}$ and θ is a direction in the closed half-space $\operatorname{Re} z_a \geq 0$. By Lemma 3.2.2 (i), one has the corresponding description of sections. For $c > 0$, $\{\operatorname{Re}(z_a^{-1}) > c\}$ is an open disc with center $1/2c$ and radius $1/2c$. Hence there is a cofinal system of sectorial neighborhoods V of θ such that the required inclusion holds for any $c \gg 0$. Indeed, we can take $V = \{z_a \in \gamma\,;\ |z_a| < \varepsilon\}$ for $\varepsilon > 0$ and γ an open convex proper cone intersecting $\{\operatorname{Re} z_a > 0\}$ (see Figure 3).
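The disc description used in the proof is elementary; the following computation, added for convenience, verifies it. Writing $z_a = x + iy$,

\[
\operatorname{Re}(z_a^{-1}) = \frac{x}{x^2+y^2} > c
\iff x^2 + y^2 < \frac{x}{c}
\iff \Bigl(x - \frac{1}{2c}\Bigr)^{2} + y^{2} < \frac{1}{4c^{2}},
\]

which is the open disc with center $1/2c$ and radius $1/2c$.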
(ii) The proof is similar to that of (i).
Let $I \subset S_aX$ be a connected open subset, and denote by $j_I\colon I \to S_aX$ the embedding. Then, Lemma 5.2.1 immediately implies the following lemma.

5.3. Normal form for enhanced sheaves. A multiplicity at $a \in X$ is a morphism of sheaves of sets $N\colon P_{S_aX} \to (\mathbb{Z}_{\geq 0})_{S_aX}$ such that its support $N^{>0}_\theta \subset P_{S_aX,\theta}$ is well separated and finite for some (hence, any) $\theta \in S_aX$.
A Puiseux germ $f \in N^{>0}_\theta$ is called an exponential factor of N at θ, and the positive integer N(f) is called its multiplicity. Definition 5.3.1. Let $a \in X$ and $F \in \mathrm{E}^b_{\mathbb{R}\text{-c}}(k_X)$. One says that F has a normal form at a if there exists a multiplicity $N\colon P_{S_aX} \to (\mathbb{Z}_{\geq 0})_{S_aX}$ such that any $\theta \in S_aX$ has an open sectorial neighborhood $V_\theta$ with $\pi^{-1}k_{V_\theta} \otimes F \simeq \bigoplus_{f \in N^{>0}_\theta} (E^{\operatorname{Re} f}_{V_\theta|X})^{\oplus N(f)}$. Note that the multiplicity N is uniquely determined by F. Note also that if F has a normal form at a, then there exists an open neighborhood Ω of a over which these decompositions patch.
Normal form for enhanced ind-sheaves.
A multiplicity class at a ∈ X is a morphism of sheaves of sets θ is a finite set for some (hence, any) θ ∈ S a X. For f ∈ P SaX , we write for short N (f ) = N([f ]). Definition 5.4.2. One says that K ∈ E b R-c (I k X ) has a normal form at a ∈ X if there exists a multiplicity N such that any θ ∈ S a X has an open sectorial neighborhood V θ such that Note that the class N of N is uniquely determined by K. We call it the multiplicity class of K.
Remark 5.4.3. If k = C and K corresponds to a holonomic D X -module by the Riemann-Hilbert correspondence, this definition corresponds to the classical notion of quasi-normal form (see [7, §7.3]). As we deal with Puiseux germs, we do not distinguish here between normal and quasinormal forms.
R-c (I k X ) has a normal form at a ∈ X if and only if for any θ ∈ S a X there exists a finite subset Φ θ ⊂ P SaX,θ and integers n θ (f ) ∈ Z >0 for f ∈ Φ θ such that for some open sectorial neighborhood V θ of θ.
Proof. As the "only if" part is clear, let us prove the "if" part.
Let P ′ SaX ⊂ P SaX be a representative subsheaf. As the isomorphism class of E Re f V θ |X only depends on [f ] ∈ P SaX,θ , we can assume that Φ θ ⊂ P ′ SaX,θ . Then, it follows from Corollary 5.2.3 that {Φ θ } θ gives a local system Φ ⊂ P ′ SaX , and that {n θ } θ gives a morphism of sheaves n : Φ − → Consider the multiplicity given by and N(f ) = 0 otherwise. Then the isomorphisms in the statement show that K has normal form at a with multiplicity class N .
R-c (I k X ) have normal form at a ∈ X with multiplicity class N . Let N be a multiplicity with class N . Then there exist an open neighborhood Ω of a, and F ∈ E 0 R-c (k X ) with normal form at a and multiplicity N, such that Proof. By definition, any θ ∈ S a X has an open sectorial neighborhood V θ such that Let Θ ⊂ S a X be a cyclically ordered finite subset such that (i) {a} ∪ θ∈Θ V θ is a neighborhood of a, (ii) for any θ, θ ′ ∈ Θ with θ = θ ′ the intersection V θ ∩ V θ ′ is non empty if and only if θ and θ ′ are consecutive, and in this case V θ ∩ V θ ′ is contractible. Let θ 1 < · · · < θ d < θ d+1 := θ 1 be the cyclic ordering of Θ. Write for short and denote by j a : Ω \ {a} − → X and j k : V k − → X the embeddings. Set The isomorphism (5.4.1) induces an isomorphism The isomorphisms u kl 's satisfy the usual cocycle condition u kl • u lm = u km , since so do the isomorphisms u E kl 's. Hence, they patch the F ′ k 's to an object F ′ ∈ E 0 R-c (k Ω\{a} ). This proves the statement, with F := Ej a ! (F ′ ).
Stokes filtered local systems
In this section, X denotes a complex analytic curve.
To an enhanced ind-sheaf K on X with normal form at a ∈ X, we attach a Stokes filtered local system on S a X. If K = Sol E X (M) for M a meromorphic connection, this coincides with the classical construction in [10,26] (see [19, §1 and §8] for a detailed explanation). Then, we introduce the multiplicity test functor, which allows to detect the multiplicities of exponential factors. 6.1. Stokes filtrations. Stokes filtered local systems were introduced in [10,26], and we refer to [31] for an exposition.
Let a ∈ X and L a local system of finite rank on S a X. (i) A pre-Stokes filtration F • L on L is the data of a subsheaf F f L ⊂ L| I for any open subset I ⊂ S a X and any f ∈ P SaX (I), such that for any θ ∈ S a X and any f, h ∈ P SaX,θ with f θ h, one has (F f L) θ ⊂ (F h L) θ .
(ii) A pre-Stokes filtration F • L on L is a Stokes filtration if there exists a multiplicity N : P SaX − → (Z 0 ) SaX at a such that for any θ ∈ S a X there are an open neighborhood I ⊂ S a X, and an isomorphism inducing for any η ∈ I and f ∈ P SaX,η an isomorphism Note that F f L only depends on the class of f in P SaX (I).
Notation 6.1.2. Let F • L be a Stokes filtration on L.
(i) For f ∈ P SaX (I), let F ≺f L ⊂ F f L be the subsheaf such that, for any θ ∈ I, one has Note that this defines indeed a subsheaf since, under (6.1.1), one has (ii) For f ∈ P SaX (I), set Note that F ≺f L and Gr f L only depend on the class of f in P SaX (I).
Note also that (6.1.1) implies Gr f L| I ≃ k N (f ) I , and in particular Gr f L is a locally constant sheaf.
6.2. Enhanced nearby cycles. Here, to an enhanced ind-sheaf K on X with normal form at a ∈ X, we associate a Stokes filtered local system on S a X.
For a ∈ X and I ⊂ S a X an open subset, consider the commutative diagram Definition 6.2.1. Let a ∈ X, I ⊂ S a X an open subset, f ∈ P SaX (I), with U ⊂ X a sectorial neighborhood of I where f is defined.
Note that R-c (I k X ) have normal form at a ∈ X with multiplicity class N. Then, with the above notations, (i) Ψ a (K) is concentrated in degree zero, and is a local system of finite rank on S a X; (ii) F f Ψ a (K) is concentrated in degree zero, and is an R-constructible sheaf on I; In particular, Gr f Ψ a (K) is a local system on I of rank N (f ).
By the Definition 5.4.2, this follows from the next lemma.
Proof. (i) Since h is locally bounded on U, one has
(ii) follows from Lemma 5.2.2 (i).
It was shown in [10,26] that there is an equivalence between the category of germs of meromorphic connections with pole at a, and the category of finite rank local systems on S a X endowed with a Stokes filtration.
Let Ω ⊂ X be a contractible open neighborhood of a. Let us consider the following conditions for K ∈ E 0 R-c (I k Ω ): (1) K| Ω\{a} ≃ e(L), for L a local system of finite rank on Ω \ {a}, K has normal form at a. Proposition 6.2.4. The functor Ψ a induces an equivalence between the full subcategory of E 0 R-c (I k Ω ) whose objects satisfy conditions (1)-(3) above, and the category of finite rank local systems on S a X endowed with a Stokes filtration.
Proof. Let I ⊂ S a X. Denote by S h the constant sheaf k I endowed with the Stokes filtration Then, for f, h ∈ P SaX (I), one has Hom where z a is a local coordinate at a, V runs over the open sectorial neighborhoods of θ, c − → +∞, and δ, ε − → 0+. This does not depend on the choice of the local coordinate z a .
Note that one has for U an open sectorial neighborhood of θ where f is defined.
Let (a, θ, f ) be a Puiseux germ on X, and K ∈ E b + (I k X ).
R-c (I k X ) have normal form at a with multiplicity class N. Then, Proof. The second isomorphism follows from Proposition 6.2.2. Then, decomposing K as in Definition 5.4.2, and using Lemma 6.3.2 (i), the statement follows from Lemma 6.3.4 below.
Proof. The first isomorphism follows from Lemma 6.3.2 (iii). Let us prove the second isomorphism. Let us set so that there is a distinguished triangle in D b (k) Using a local coordinate z a at a, let β ∈ C × be such that Re(θβ) > 0. Then, after shrinking U, there is a constant C > 0 such that 0 < Re z a β |β| |z a | C Re z a β on U. It follows that where we set f δ,ε (z) := f (z) − δ(z a β) −ε , and For this, we are going to use Lemma 5.2.1 (i).
δ,ε . 6.4. A vanishing result. Let V be a complex affine line with coordinate z. Let P = V∪{∞} be its projective compactification. In this subsection, with respect to the notations in (5.1.1), we consider X = P and a ∈ {0, ∞}. Let us take z 0 = z and z ∞ = z −1 as local coordinate at 0 and ∞, respectively. We shall write S ∞ V instead of S ∞ P.
Proof. Since the proofs are similar, let us only consider the case a = ∞.
After replacing p by one of its multiples, we can assume λ ∈ 1 p Z >0 and f (z) = cz λ 1 + O(z −1/p ) for z − → ∞, with c ∈ C × . Then, Re f (z) = s λ ψ(s −1/p , ζ), where ψ is a real valued real analytic function in a connected neighborhood of (0, θ • ) ∈ R × S ∞ V with ψ 0 (ζ) := ψ(0, ζ) ≡ 0 at θ • . It follows that, for a generic θ near θ • , one has ψ 0 (θ) = 0 and We will use arguments similar to those in the proof of Lemma 6.3.4. There is a distinguished triangle in D b (k) where we set Note that z − → ∞ in U is equivalent to s − → +∞.
Stationary phase formula
After recalling in some detail the Legendre transform, we state here the stationary phase formula in terms of enhanced ind-sheaves.

7.1. Fourier-Laplace and enhanced Fourier-Sato transforms. Let V be a one-dimensional complex vector space, with coordinate z, and let V* be its dual, with dual coordinate w. Let $\mathbb{P} = V \cup \{\infty\}$ and $\mathbb{P}^* = V^* \cup \{\infty\}$ be the associated projective lines, and consider the bordered spaces $V_\infty = (V, \mathbb{P})$, $V^*_\infty = (V^*, \mathbb{P}^*)$. Consider the morphisms $V_\infty \leftarrow V_\infty \times V^*_\infty \rightarrow V^*_\infty$ induced by the projections. The Fourier-Laplace transform for D-modules is defined as in §1.11. Note that $\mathrm{D}^b_{hol}(D_{V_\infty})$ is equivalent to the bounded derived category of algebraic $D_V$-modules with holonomic cohomologies. Then (see [24]), the above functors are compatible with the Fourier transform at the level of the Weyl algebra, given by the isomorphism $\mathbb{C}[z]\langle\partial_z\rangle \simeq \mathbb{C}[w]\langle\partial_w\rangle$, $z \mapsto -\partial_w$, $\partial_z \mapsto w$. In particular, $\mathsf{L}$ and $\mathsf{L}^r$ are quasi-inverse of each other, and interchange $\mathrm{Mod}_{hol}(D_{V_\infty})$ and $\mathrm{Mod}_{hol}(D_{V^*_\infty})$.
The Fourier-Sato transform for enhanced sheaves was introduced and studied in [33] (see also [5,22]). It extends to enhanced ind-sheaves as follows.
The functors $\mathsf{L}$ and $\mathsf{L}^r$ are quasi-inverse of each other and, since p and q are semiproper, they interchange $\mathrm{E}^b_{\mathbb{R}\text{-c}}(\mathrm{I}k_{V_\infty})$ and $\mathrm{E}^b_{\mathbb{R}\text{-c}}(\mathrm{I}k_{V^*_\infty})$. Note also that $\mathsf{L}$ and $\mathsf{L}^r$ interchange $\mathrm{E}^b_+(k_V)$ and $\mathrm{E}^b_+(k_{V^*})$, as well as their R-constructible subcategories. Recall that the Riemann-Hilbert correspondence of [7] provides a fully faithful embedding $\mathrm{Sol}^E_{V_\infty}$, and similarly for $\mathsf{L}^r$. Since, for $k = \mathbb{C}$, the enhanced solution functor intertwines the two transforms, the next proposition immediately follows from the functoriality of $\mathrm{Sol}^E$. This was first observed in [22], where the case of non-holonomic D-modules was also discussed.
The next lemma will be of use later. For a ∈ V, let τ a : V ∞ − → V ∞ be the morphism induced by the translation τ a (z) = z + a.
Proof. Since the proofs are similar, let us only consider the first isomorphism. Consider the maps R 2 − → R given by p ′ (t, s) = t, q ′ (t, s) = s, σ ′ (t, s) = s − t, and use the same notations for the associated maps Recall that we set p R = p × id R and q R = q × id R . Then, one has where ( * ) follows from [7, Lemma 4.1.4].
7.3. Microsupport and enhanced Fourier-Sato transform. Consider the symplectic coordinates $((z; z^*), (t; t^*))$ on $T^*(V\times\mathbb{R})$ and $((w; w^*), (s; s^*))$ on $T^*(V^*\times\mathbb{R})$. Recalling the definitions of $\Phi_{\mathsf{L}}$ and $\Phi_{\mathsf{L}^r}$ from §7.2, consider the associated Lagrangians $\Lambda_{\mathsf{L}}$ and $\Lambda_{\mathsf{L}^r}$. Let us concentrate first on $\Lambda_{\mathsf{L}}$. Let $\Lambda^a_{\mathsf{L}}$ be the image of $\Lambda_{\mathsf{L}}$ by the endomorphism of $T^*(V\times\mathbb{R}\times V^*\times\mathbb{R})$ changing the sign of $z^*$ and $t^*$. Then $\Lambda^a_{\mathsf{L}}$ is the graph of a homogeneous symplectic transformation χ. Recall the notations in §2.6. Then χ induces by γ the contact transformation $\overline{\chi}$ underlying the Legendre transform. In turn, this induces by ρ the symplectic transformation $\chi_\rho$. Similar considerations hold for $\Lambda_{\mathsf{L}^r}$, interchanging the roles of V and V*, and replacing χ, $\overline{\chi}$, and $\chi_\rho$ with their respective inverses. Note that the latter are explicitly given by

\[
\chi^{-1}\colon ((w; w^*), t) \mapsto ((-w^*; w),\ t + \operatorname{Re} ww^*), \qquad
\chi_\rho^{-1}\colon (w; w^*) \mapsto (-w^*; w).
\]

7.4. Legendre transform. Recall the notion of Puiseux germ from Definition 5.1.1. Recall that we write $S_\infty V$ instead of $S_\infty\mathbb{P}$.
Let a ∈ P, θ ∈ S a V and f ∈ P SaV,θ .
Definition 7.4.1. Let us say that the Puiseux germ $(a, \theta, f)$ is admissible if f is unbounded at a and (if $a = \infty$) not linear modulo bounded functions (cf. §1.4). Let $\tau \in \mathbb{C}$ be the coordinate, and consider the complex version $\chi_{\mathbb{C}}$ of the contact transformation χ from §7.3. We have a commutative diagram where the map Re is induced by $\mathbb{C} \ni \tau \mapsto \operatorname{Re}\tau \in \mathbb{R}$.
To the Puiseux germ $(a, \theta, f)$ on V, one associates the germ of Lagrangian $C_f = \{(z; f'(z))\}$ for z near θ. Note in particular that, with notations as in §1.3, one has a complex lift of $\chi_\rho(C_f)$, where $\rho_{\mathbb{C}}\colon (T^*V)\times\mathbb{C} \to T^*V$ is the projection.
(See Lemma 7.4.5 below for an explicit computation of b and η.) Let $z = \psi(w)$ be the inverse of $w = f'(z)$ for z near θ, and denote by g(w) a primitive of $-\psi(w)$. Thus g satisfies the relations $g'(w) = -\psi(w)$ and $w = f'(\psi(w))$, which are equivalent to $d(zw - f(z) + g(w)) = 0$. Choosing the only primitive g which satisfies $zw - f(z) + g(w) = 0$ provides the solution to (7.4.3). In other words, we obtained the Legendre transform $(b, \eta, g)$ of $(a, \theta, f)$. We shall prove the admissibility of $(b, \eta, g)$ in Lemma 7.4.5, by explicit calculation of $\mathrm{ord}_b(g)$.
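For convenience, the relations just described can be collected in one display (an added summary using only the identities stated above):

\[
w = f'(z), \qquad z = \psi(w), \qquad g'(w) = -\psi(w), \qquad g(w) = f(\psi(w)) - w\,\psi(w),
\]

so that $d\bigl(zw - f(z) + g(w)\bigr) = (w - f'(z))\,dz + (z + g'(w))\,dw$ vanishes along the graph $w = f'(z)$.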
With notations as in the above statement, we give in Lemma 7.4.5 below an explicit computation of b, η, and of the pole order of the difference between g and its linear part. (i) We fix z a = z − a as local coordinate at a ∈ V, and z ∞ = z −1 as local coordinate at ∞.
We regard a Puiseux germ $(a, \theta, f)$ as a point of the étalé space endowed with its natural topology.
Lemma 7.4.5. Let a ∈ V and b ∈ V * . At the level of étalé spaces, the Legendre transform gives the homeomorphisms L : More precisely, one has: Proof. The inverse L r of L is obtained by replacing χ C with χ −1 C in Definition 7.4.2.
Concerning L, recall that (7.4.3) is equivalent to (7.4.5) and implies (7.4.4). We will prove the statement using the relations (7.4.4).
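As an illustration of the kind of pole-order computation carried out in Lemma 7.4.5 (this leading-order calculation is an added sketch; the constant $c'$ is left implicit):

\[
f(z) \sim c\,z^{\lambda}\ \ (\lambda > 1)\ \text{at}\ \infty
\;\Longrightarrow\;
w = f'(z) \sim c\lambda\,z^{\lambda-1}, \qquad
g(w) = f(z) - zw \sim (1-\lambda)\,c\,z^{\lambda} \sim c'\,w^{\lambda/(\lambda-1)},
\]

so that $\mathrm{ord}_\infty(g) = \lambda/(\lambda-1)$ when $\mathrm{ord}_\infty(f) = \lambda > 1$; in the Airy example, $\lambda = 3$ gives $\mathrm{ord}_\infty(g) = 3/2$.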
Let us show that the Legendre transform is compatible with the equivalence ∼ θ . Lemma 7.4.6. For i = 1, 2, let (a, θ, f i ) be an admissible Puiseux germ on V, and set Proof. There are three possible situations: (i) a ∈ V and f 1 , f 2 ∈ P (0,+∞) Since the arguments are similar, let us only discuss case (i).
Since K has normal form at 0, by Proposition 6.3.3 (i) it is equivalent to prove the statement for the multiplicity class $\mathcal{N}$ of K. We will proceed by dévissage in K, in the category $\mathrm{E}^b_+(\mathrm{I}k_{V_\infty})$. We thus reduce to showing that, for some $r > 0$, one has $G_{(\infty,\eta,g)}\bigl({}^{\mathsf{L}}(\pi^{-1}k_{\{|z|<r\}} \otimes K)\bigr) \simeq k^{\mathcal{N}(f)}$.
(2) Consider the distinguished triangle induced by $\{0\} \subset \{|z| < r\}$. Since $\pi^{-1}k_{\{0\}} \otimes K$ is a finite direct sum of copies of $e(k_{\{0\}}[n])$ for $n \in \mathbb{Z}$, it follows that ${}^{\mathsf{L}}(\pi^{-1}k_{\{0\}} \otimes K)$ is a finite direct sum of copies of $k^E_{V^*}[n+1]$. Since $\mathrm{ord}_\infty(g) > 0$, Lemma 6.3.4 gives $G_{(\infty,\eta,g)}(k^E_{V^*}) \simeq 0$. We are thus left to show the statement for the punctured disc. (3) For $r > 0$ small enough, we can assume that $\pi^{-1}k_{\{0<|z|<r\}} \otimes K \simeq k^E_V \overset{+}{\otimes} F$, where $F \in \mathrm{E}^0_{\mathbb{R}\text{-c}}(k_{V_\infty})$ satisfies $F \simeq \pi^{-1}k_{\{0<|z|<r\}} \otimes F$ and has a normal form at 0 with multiplicity of class $\mathcal{N}$. Since ${}^{\mathsf{L}}(k^E_V \overset{+}{\otimes} F) \simeq k^E_{V^*} \overset{+}{\otimes} {}^{\mathsf{L}}F$, by Lemma 6.3.2 we are reduced to the corresponding statement for ${}^{\mathsf{L}}F$. Then, we conclude by Proposition 8.4.1 (i) below.
Microlocal arguments for the proof
We collect here some results used in the proof of the stationary phase formula, that we obtain using techniques from the microlocal study of sheaves of [20]. 8.1. Notations. Recall the identification (T * V) R × R ≃ (T * V R ) × R from §2.6. We will write (T * V) × R instead of (T * V) R × R, for short.
Proof. Since the arguments are similar, let us only discuss (i).
8.4. A microlocal approach to multiplicity test.
If $\mathcal{N}(f) > 0$, by replacing f with $\tilde f$ such that $[\tilde f\,] = [f]$ and $N(\tilde f\,) > 0$, we may assume from the beginning that $\mathcal{N}(f) = N(f)$.
We will use some arguments from the proof of Proposition 8.3.1.
Set for short
where I and J are finite sets, $d_i, d_j \in \mathbb{Z}$, and $\varphi_i, \varphi^{+}_j, \varphi^{-}_j\colon W \to \mathbb{R}$ are real analytic and globally subanalytic functions with $\varphi^{-}_j(w) < \varphi^{+}_j(w)$ for any $w \in W$ and $j \in J$.
By (8.4.1), one has On the other hand, since F has normal form at 0, for r > 0 small enough one has Using the fact that SS E (F ′ ) is Lagrangian, one deduces Take w 0 near η in W , and set q 0 = (w 0 , g ′ (w 0 ), − Re g(w 0 )) ∈ Λ Re g W |V * . Setting p 0 := χ −1 (q 0 ), one has p 0 ∈ χ −1 Λ Re g W |V * = Λ Re f V θ |V . Since F ′ has normal form at 0 with multiplicity N, we can decompose it as in Definition 5.3.1. Thus F ′ is of type k N (f ) with shift 1/2 at p 0 along Λ Re f V θ |V . It follows from Proposition 8.2.1 that L F ′ is of type k N (f ) with shift 1/2 at q 0 along Λ Re g W |V * . On the other hand, according to the possibilities (a)-(c) above, (8.4.1) implies that L F ′ is of type T with shift 1/2 at q 0 along Λ Re g W |V * , where Then, one has T ≃ k N (f ) . This implies (8.4.6).
|
2017-09-01T06:38:11.000Z
|
2017-09-01T00:00:00.000
|
{
"year": 2017,
"sha1": "945b2e2e5ccee5decaeb6cfed20f38e828900829",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1709.03579",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "945b2e2e5ccee5decaeb6cfed20f38e828900829",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
}
|
18867616
|
pes2o/s2orc
|
v3-fos-license
|
Triptolide attenuates cerebral ischemia and reperfusion injury in rats through the inhibition of the nuclear factor kappa B signaling pathway
Inflammation plays critical roles in the acute progression of the pathology of ischemic injury. Previous studies have shown that triptolide interferes with a number of pro-inflammatory mechanisms. In this study, we investigated whether triptolide has protective effects during acute cerebral ischemia/reperfusion (I/R) injury. Male Sprague Dawley rats received triptolide or vehicle at the onset of reperfusion following middle cerebral artery occlusion. Twenty-four hours after reperfusion, we evaluated neurological injuries, the expression of pro-inflammatory markers, and NF-κB activation. I/R rats treated with triptolide showed significantly better neurological deficit scores, decreased neural apoptosis, and reduced cerebral infarct volume and brain edema, and triptolide treatment suppressed the activation of NF-κB following I/R injury. Furthermore, the expression levels of pro-inflammatory cytokines at both the mRNA and protein levels were significantly decreased in rats receiving triptolide. These results indicate that the neuroprotective effects of triptolide during acute cerebral I/R injury are possibly related to the inhibition of both the NF-κB signaling pathway and inflammation.
Introduction
Stroke is one of the most common causes of death and long-term disability in the adult population worldwide. 1 Ischemic stroke, which is caused by a reduction in cerebral blood flow and can result in fatal brain damage, accounts for approximately 80% of all strokes. 2 Thrombolysis and interventional recanalization are regularly applied approaches for the management of acute ischemic stroke. 3,4 Although reperfusion leads to the temporary survival of cerebral cells in the ischemic region through the restoration of blood flow, these therapies have achieved limited clinical success because of their narrow therapeutic time window and the occurrence of reperfusion injury following recovery from ischemia. 4 Ischemia reperfusion injury (IRI) is a complex disorder caused by oxidative damage, inflammation, and cerebral edema. 5,6 In addition, both experimental and clinical evidence has indicated that inflammation is essential for the progression of cerebral IRI. 7,8 Previous studies have suggested that treatment with anti-inflammatory agents could reduce tissue edema and improve outcome in stroke models. 9 Triptolide, a bioactive ingredient extracted from Chinese medicinal plants, has been reported to exhibit anti-inflammatory activities in several disorders including arthritis, 10 pulmonary hypertension, 11 and traumatic brain injury. 12 In vitro studies have also shown that triptolide is able to suppress the production of inflammatory cytokines in various cell lines. 13,14 Those previous findings lend support to the possibility that triptolide might have anti-inflammatory and neuroprotective abilities when used to treat cerebral IRI. To date, no studies have investigated the possible protective effects of triptolide against cerebral IRI. In the present study, we used a rat middle cerebral artery occlusion (MCAO) model to investigate the protective effects of triptolide on cerebral IRI and to determine the possible mechanism for this effect.
Materials and methods

Animals and experimental protocol
This study was performed using specific pathogen-free (SPF) 7-week-old male Sprague-Dawley rats weighing 250-300 g. The animals were purchased from the Center for Animal Experiments, Zhongnan Hospital, Wuhan University (Wuhan, People's Republic of China). This study was approved by the research committee of Wuhan University, and all animal experiments were conducted in accordance with the guidelines of the Wuhan University Animal Experimentation Committee. Animal experiments were conducted in accordance with the Guidelines for the Care and Use of Laboratory Animals of Wuhan University (Wuhan, Hubei, China). The Institutional Ethic Committee approved the animal study (20140210ZN11). A total of 100 rats were randomly divided into five groups (n=20 in each group): 1) sham group (SHAM); 2) ischemia and reperfusion group (IRI + Vehicle); and 3) three triptolide groups (IRI + TL, 3 subgroups, n=20 in each group). A single dose of triptolide ( Figure 1A) (Sigma, Saint Louis, MO, USA, dissolved in pure dimethyl sulfoxide [DMSO]) was intraperitoneally (ip) injected at the onset of reperfusion in rats belonging to the three TL groups; a corresponding volume of vehicle (pure DMSO) was administered to the rats in the IRI group. The dosages used for triptolide administration (0.1 mg/kg, 0.5 mg/kg, and 5 mg/kg) were selected based upon our pilot study. At the conclusion of the reperfusion period, the neurological deficit scores were evaluated. The rats were then sacrificed for the collection of tissue samples.
Animal model
The rat MCAO model has been established previously. 15 Briefly, animals were anesthetized with an ip injection of pentobarbital sodium (50 mg/kg; Amresco, Cleveland, OH, USA) and were fixed in a supine position on a warming blanket to maintain their body temperature at 37°C-38°C. A 4/0 surgical nylon filament with a silicone-beaded tip was introduced into the right internal carotid artery (ICA) through the external carotid artery (ECA) to occlude the origin of the middle cerebral artery (MCA). After 2 hours of ischemia, the occlusion was released for a 24-hour reperfusion period. The arterial blood pressure (BP) and heart rate (HR) were monitored through the left femoral artery using a monitoring system (BL-420F; TaiMeng, Chengdu, People's Republic of China). Arterial blood samples for blood gas analysis were collected through the left femoral artery using an i-STAT ® 1 analyzer (Abbott, Kyoto, Japan) prior to (baseline) and 15 minutes after the onset of MCAO (ischemia) and reperfusion (reperfusion).
Measurement of cerebral infarct volume
At the end of the observation period, the rats were sacrificed, and their brains were collected. The assessment of infarct volume was carried out as previously described. 16 In brief, brains were sectioned at 2 mm intervals and stained with 2% 2,3,5-triphenyltetrazolium chloride (TTC, Amresco) for 15 minutes at 37°C. Images were digitalized, and the infarct areas were analyzed. The infarct volume is expressed as the percentage of the contralateral hemisphere.
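As an illustration only (not from the original paper), the infarct volume calculation described above can be expressed as a short script; the variable names, per-slice bookkeeping, and the absence of an edema correction are our assumptions:

```python
# Hypothetical illustration: infarct volume as a percentage of the
# contralateral hemisphere, from per-slice areas (mm^2) at 2 mm intervals.

SLICE_THICKNESS_MM = 2.0

def infarct_volume_percent(infarct_areas_mm2, contra_hemisphere_areas_mm2):
    """Sum area x thickness over slices, then normalize to the
    contralateral hemisphere volume."""
    infarct_volume = sum(a * SLICE_THICKNESS_MM for a in infarct_areas_mm2)
    contra_volume = sum(a * SLICE_THICKNESS_MM for a in contra_hemisphere_areas_mm2)
    return 100.0 * infarct_volume / contra_volume

# Example with made-up areas for six coronal slices:
print(infarct_volume_percent([0, 4, 9, 11, 6, 1], [28, 30, 32, 31, 29, 27]))
```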
Measurement of cerebral edema
Cerebral edema was evaluated by determining the water content of the brain 24 hours following reperfusion, as previously described 17 with modifications (n=5 in each group). Briefly, the brains were immediately weighed after removal to obtain the wet weight (WW) and then dried at 110°C overnight in an electric oven. The dried brains were weighed again to obtain the dry weight (DW). Brain water content was calculated as follows: water content (%) = [(WW − DW)/WW] × 100%.
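A minimal sketch of the wet-dry weight calculation just defined (illustrative variable names only):

```python
def brain_water_content_percent(wet_weight_g, dry_weight_g):
    # Water content (%) = [(WW - DW) / WW] * 100, as defined above.
    return (wet_weight_g - dry_weight_g) / wet_weight_g * 100.0

# Example: a 1.60 g brain drying to 0.35 g gives ~78% water content.
print(round(brain_water_content_percent(1.60, 0.35), 1))
```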
TUNEL staining
Neuronal apoptosis was assessed by TUNEL staining using a commercial in situ cell death detection kit (Nanjing KeyGEN Biotech Co. Ltd, Nanjing, People's Republic of China) according to the manufacturer's instructions. All sections were counterstained with 4′,6-diamidino-2-phenylindole (DAPI; Invitrogen, Carlsbad, CA, USA). In each case, ten fields in the infarcted cortex were randomly selected for apoptotic cell counting in a blinded manner using an Olympus IX51 reflected light fluorescence microscope (Olympus, Japan), and the percentage of TUNEL positive cells (TUNEL/ DAPI) was calculated.
Assessment of neurological deficit score
Neurological symptoms were assessed 24 hours after reperfusion using a neurological deficit score as previously described. 18 The neurological deficit score ranges from 0 to 4 (0, forelimb flexion and body twisting when rats were suspended by the tail [no observable neurological deficit]; 1, rats failed to extend the left forepaw; 2, rats circled to the left).
Real-time PCR analysis
The expression of pro-inflammatory cytokines, including tumor necrosis factor alpha (TNF-α), interleukin-1 beta (IL-1β), and IL-6, in the peri-infarct cortical tissue was detected using real-time polymerase chain reaction (PCR). Tissue collection and preparation for the extraction of total RNA were carried out as previously described. 19 Total RNA was extracted using TRI Reagent (Molecular Research Center, Inc., Cincinnati, OH, USA) and reverse-transcribed into cDNA according to the manufacturer's instructions. The levels of TNF-α, IL-1β, and IL-6 mRNA expression were measured with SYBR green, and RT-PCR was performed using an ABI Prism 7000 sequence detection system (ABI, Foster City, CA, USA). Expression values are shown as fold change relative to the control group. The following pairs of primers were used for PCR amplification: TNF-α primer
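The paper reports expression as fold change relative to the control group but does not state the calculation method; the 2^(−ΔΔCt) method sketched below is a common choice and is our assumption, not necessarily the authors':

```python
def fold_change_ddct(ct_target, ct_reference, ct_target_ctrl, ct_reference_ctrl):
    """Relative expression by the 2^(-ddCt) method (assumed, see text).

    ct_*      : cycle-threshold values in the treated sample
    ct_*_ctrl : cycle-threshold values in the control sample
    """
    d_ct_sample = ct_target - ct_reference            # normalize to reference gene
    d_ct_control = ct_target_ctrl - ct_reference_ctrl
    dd_ct = d_ct_sample - d_ct_control
    return 2.0 ** (-dd_ct)

# Example: a target amplifying ~2 cycles earlier than in controls ~ 4-fold increase.
print(fold_change_ddct(24.0, 18.0, 26.0, 18.0))       # -> 4.0
```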
Enzyme-linked immunosorbent assay
Cortical tissue was collected and homogenized 24 hours after reperfusion. The homogenate was centrifuged for 30 minutes at 10,000× g at 4°C. The protein concentration of the supernatant was detected with a BCA kit (Pierce, Rockford, IL, USA), and the levels of cytokines, including TNF-α, IL-1β, and IL-6, were determined using commercial Enzyme-linked immunosorbent assay kits (Abcam, Shanghai, People's Republic of China).
Western blot analysis

After washing with TBS containing 0.05% Tween-20, the membranes were probed with horseradish peroxidase (HRP)-labeled secondary antibodies (Boster, Wuhan, People's Republic of China), and the densities of the protein bands were analyzed using Quantity One software (Bio-Rad, Hercules, CA, USA). Protein levels were normalized to β-actin.
Statistical analysis
The data are expressed as the mean ± SD and were processed using the statistical analysis software SPSS version 18.0 (SPSS Inc., Chicago, IL, USA). Comparisons of several means were performed using one-way and repeated-measures two-way analysis of variance followed by the Tukey-Kramer test to identify significant differences between groups. The Kruskal-Wallis test was used for the neurological deficit scores. A P-value < 0.05 was considered significant.
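A minimal sketch of the group comparisons described above, using SciPy and statsmodels (our illustration with made-up data; the authors used SPSS 18.0, and pairwise_tukeyhsd implements the Tukey-Kramer adjustment for unequal group sizes):

```python
import numpy as np
from scipy.stats import f_oneway, kruskal
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
# Made-up infarct-volume data (%) for three of the groups.
groups = {
    "IRI+Vehicle": rng.normal(21, 3, 10),
    "IRI+TL 0.5":  rng.normal(15, 3, 10),
    "IRI+TL 5":    rng.normal(10, 3, 10),
}

# One-way ANOVA across groups, then Tukey's post hoc test.
print(f_oneway(*groups.values()))
values = np.concatenate(list(groups.values()))
labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])
print(pairwise_tukeyhsd(values, labels))

# Kruskal-Wallis for ordinal data such as neurological deficit scores.
print(kruskal([3, 3, 2, 3], [2, 2, 1, 2], [1, 1, 1, 2]))
```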
Results

Physiological parameters during MCAO and reperfusion
Physiological parameters, including mean arterial pressure (MAP), HR, rectal temperature, arterial blood pH, PaO 2 and PaCO 2 , during MCAO and reperfusion, are shown in Table 1. We observed no significant differences in MAP, HR, pH, PaO 2 , and PaCO 2 at any time point before or during MCAO or during reperfusion.
Triptolide ameliorates MCAO-induced cerebral injury
Two hours of MCAO followed by a 24-hour reperfusion period induced an infarct volume of 21%±3% and a brain water content of 83%±2% in vehicle-treated (IRI + Vehicle) rats. Treatment with triptolide significantly reduced the infarct volume in a dose-dependent manner (P < 0.05) (Figure 1B and C). In addition, the brain water content was also reduced in rats treated with triptolide (P < 0.05) (Figure 1D). At the conclusion of the observation period (24 hours following MCAO), rats in the IRI + Vehicle group showed markedly greater neurological deficit scores (Figure 1E), while rats receiving triptolide (0.5 mg/kg and 5 mg/kg) showed a significant decrease in their neurological deficits (P < 0.05).
Triptolide attenuates the production of pro-inflammatory cytokines
To evaluate the inflammatory response following MCAO, we examined the expression of pro-inflammatory cytokines, including TNF-α, IL-1β, and IL-6, in ischemic brain tissue 24 hours following reperfusion. We found that the relative mRNA (Figure 2A) and protein (Figure 2B) expression levels of these cytokines were significantly elevated in vehicle-treated rats (P < 0.05), while treatment with triptolide significantly attenuated the increase in TNF-α, IL-1β, and IL-6 mRNA and protein levels in a dose-dependent manner (P < 0.05).
Triptolide prevents neuronal apoptosis following cerebral ischemia/reperfusion injury

DNA fragmentation following MCAO was determined by TUNEL staining (Figure 3A). TUNEL-positive cells were rarely detected in SHAM animals. Conversely, MCAO rats treated with vehicle showed a significant increase in the number of TUNEL-positive cells within the ischemic penumbra. Triptolide significantly reduced the percentage of apoptotic cells in a dose-dependent manner (P < 0.05). We also evaluated the expression of the pro-apoptotic protein Bax (Figure 3B) and the anti-apoptotic protein Bcl-2 (Figure 3C) in ischemic brain tissue. MCAO-induced cerebral IRI led to a significant increase in Bax protein expression and a reduction in Bcl-2 expression when compared to sham-operated rats (P < 0.05); however, triptolide reverted the expression of these two proteins to basal levels.
Triptolide inhibits the activation of NF-κB following MCAO-induced cerebral injury
The activation of NF-κB was evaluated in both the cytosolic and nuclear fractions (Figure 4A). We found that the NF-κB p65 subunit was highly expressed in the cytosol but not in the nucleus of sham-operated rats. In MCAO rats, the ischemic tissue showed higher levels of NF-κB p65 in the nucleus compared to the cytosol, and this activation of NF-κB was inhibited by treatment with triptolide. In addition, we determined the total protein level of IκB-α and the phosphorylation of IκB-α. Our results showed that, following cerebral IRI, the overall level of IκB-α protein was decreased, along with an enhancement in IκB-α phosphorylation. Triptolide inhibited the I/R-induced phosphorylation and degradation of IκB-α in a dose-dependent manner (Figure 4B).
Discussion
The major finding of this study is that the administration of triptolide has protective effects against the cerebral IRI that is induced by MCAO. Treatment with triptolide improved neurological deficit scores, attenuated cerebral infarct volume, reduced brain edema, and decreased neuronal apoptosis and the production of pro-inflammatory cytokines including TNF-α, IL-1β, and IL-6. In addition, our results revealed that triptolide has marked inhibitory effects on the activation of NF-κB. These results suggest that the protective effects of triptolide against neuroinflammation likely act through the inhibition of the NF-κB signaling pathway.

NF-κB is a transcription factor that controls the expression of target genes involved in cell proliferation, apoptosis, and inflammation, and it is known to be highly activated in inflammatory disease states such as traumatic brain injury and cerebral ischemia. 21,22 It has been shown previously that the activation of NF-κB following MCAO-induced cerebral injury was accompanied by the elevated expression of pro-inflammatory cytokines including TNF-α, IL-1β, and IL-6. 23 These cytokines are produced by diverse cell types and serve as mediators of inflammation. 24 In fact, these pro-inflammatory cytokines, including TNF-α, IL-1β, and IL-6, can also stimulate NF-κB activation in inflammatory diseases. 25 In this study, we confirmed that both the mRNA and protein expression levels of TNF-α, IL-1β, and IL-6 were increased following cerebral IRI and that this increased production of pro-inflammatory cytokines was attenuated by the administration of triptolide. These findings are consistent with previous studies reporting the anti-inflammatory effects of triptolide. 10-14

NF-κB is activated at a very early stage of MCAO-induced cerebral injury. In an animal MCAO model, increased NF-κB DNA binding was detected after 30 minutes of reperfusion. 26 In addition, mice deficient in the p50 subunit of NF-κB showed a significant reduction in infarct volume in stroke models, suggesting that NF-κB plays a detrimental role in the response to cerebral ischemia. 27 Moreover, the selective inhibition of NF-κB in neurons significantly reduced the infarct size and the number of TUNEL-positive cells following MCAO. 28 However, the outcomes of direct NF-κB inhibition on focal cerebral ischemic injury are inconsistent, as Hill et al have shown that the inhibition of MCAO-induced NF-κB using diethyldithiocarbamate in rats increased cell death. 15

Previous studies have suggested that triptolide is absorbed very rapidly after oral administration in rats; the time to reach maximal plasma concentration ranges from 10.0 to 19.5 minutes with a half-life of 16.8-50.6 minutes. 29,30 In this study, only a single dose of triptolide was injected ip at the onset of reperfusion. Thus, the short half-life of triptolide might reduce its bioactivity. Regardless, our results still show that the administration of triptolide at the onset of reperfusion significantly suppressed the translocation of NF-κB and the phosphorylation of IκB-α following MCAO. Time course studies have suggested that NF-κB is activated within 3 hours following MCAO-induced injury in the rat brain. 23 It is possible that triptolide inhibits the early activation of NF-κB following cerebral ischemic injury. However, additional studies are needed to clarify the mechanism and to assess the best therapeutic window for triptolide following cerebral IRI.
In addition, the administration of triptolide significantly reduced the percentage of TUNEL-positive cells following MCAO. Thus, the protective effects of triptolide against cerebral IRI are possibly related to its inhibitory effects on the activation of NF-κB.
Conclusion
Our results suggest that the administration of triptolide following cerebral ischemia reduces infarct volume and apoptosis and improves neurological function by inhibiting the activation of NF-κB and the release of pro-inflammatory cytokines in the brain. Triptolide might be a promising therapeutic agent for the prevention and/or treatment of cerebral IRI.
Late Neoproterozoic–Silurian tectonic evolution of the Rödingsfjället Nappe Complex, orogen-scale correlations and implications for the Scandian suture
Abstract The Scandinavian Caledonides consist of disparate nappes of Baltican and exotic heritage, thrust southeastwards onto Baltica during the Mid-Silurian Scandian continent–continent collision, with structurally higher nappes inferred to have originated at increasingly distal positions to Baltica. New U–Pb zircon geochronological and whole-rock geochemical and Sm–Nd isotopic data from the Rödingsfjället Nappe Complex reveal 623 Ma high-grade metamorphism followed by continental rifting and emplacement of the Umbukta gabbro at 578 Ma, followed by intermittent magmatic activity at 541, 510, 501, 484 and 465 Ma. Geochemical data from the 501 Ma Mofjellet Group is indicative of arc magmatism at this time. Syntectonic pegmatites document pre-Scandian thrusting at 515 and 475 Ma, and Scandian thrusting at 429 Ma. These results document a tectonic history that is compatible with correlation with peri-Laurentian and/or peri-Gondwanan terranes. The data allow correlation with nappes at higher and lower tectonostratigraphic levels, including at least parts of the Helgeland, Kalak and Seve nappe complexes, implying that they too may be exotic to Baltica. Neoproterozoic fragmentation of the hypothesized Rodinia supercontinent probably resulted in numerous coeval, active margins, producing a variety of peri-continental terranes that can only be distinguished through further combined geological, palaeomagnetic and palaeontological investigations.
The Scandinavian Caledonides contain a record of Neoproterozoic tectonic activity at the Laurentian and/or Baltican and/or other, presently unidentified margins, followed by late Cambrian-Ordovician closure of the Iapetus Ocean. The current form of the orogen principally reflects Silurian continent-continent collision (Scandian Orogeny) and later extensional reworking, superimposed on a poorly known, highly variable Neoproterozoic-Ordovician evolution. Since the 1980s, the evolution and nappe architecture of the Scandinavian Caledonides have been conceptualized within the framework of a series of allochthons (Roberts and Gee 1985; Stephens et al. 1985) that are ordered tectonostratigraphically (Fig. 1). The Lower and Middle allochthons consist of low- to high-grade metasedimentary rocks interpreted to have been deposited on the western (present-day coordinates) margin of Baltica, tectonically interleaved with crystalline Baltican basement. The overlying Upper Allochthon contains variably dismembered ophiolites from the Iapetan realm, whereas the Uppermost Allochthon is interpreted as vestiges of the eastern margin of Laurentia that collided with Baltica (Stephens and Gee 1989). This rather straightforward framework has served as a powerful guide in investigations of Caledonian evolution for decades. In recent years, however, increased access to geochronological datasets has shown that many units, at several tectonostratigraphic levels, contain a pre-Caledonian tectonomagmatic and tectonometamorphic history that has complicated this picture (Corfu et al. 2007; Kirkland et al. 2007; Gasser et al. 2015). Corfu et al. (2014) discuss many of the issues related to the dual tectonostratigraphic/provenance allochthon framework and, following these authors, we generally refer to particular nappes or nappe complexes rather than allochthons, with allochthon affinity in parentheses where reference to this framework is pertinent.
Here, we present new U-Pb zircon geochronological and whole-rock geochemical and Sm-Nd isotopic data from the central parts of the Scandinavian Caledonides currently assigned to the Rödingsfjället Nappe Complex (RNC) of the Uppermost Allochthon (Fig. 1a). The data constrain the timing of magmatic, metamorphic and deformational events, and characterize the magmatic activity. This new information provides a basis for interpreting the Neoproterozoic-Ordovician evolution of the RNC and for discussing the potential for correlation with other, well-characterized units within the orogen, as well as the implications for orogenic architecture.
Geological background
The main study areas form part of the RNC near Mo i Rana and Røssvatnet (Figs 1 & 2) that, along with the structurally overlying Helgeland Nappe Complex (HNC), constitute the Uppermost Allochthon. Although distinct in many ways from the underlying Köli Nappe Complex (Upper Allochthon), particularly with respect to depositional, magmatic and metamorphic evolution , there are indications that all three nappe complexes formed at, or outboard of, the Laurentian margin and had largely been assembled prior to Early Silurian Scandian continent-continent collision (Pedersen et al. 1992;Meyer et al. 2003;McArthur et al. 2014;Slagstad and Kirkland 2018).
[Fig. 1 caption (residual): (a) Tectonostratigraphic framework after Roberts and Gee (1985). The red line indicates the boundary between units above (to the NW), which may arguably have an origin exotic to Baltica, and units below, which are most likely of Baltican ancestry. (b) Study areas in the Rödingsfjället Nappe Complex. Abbreviation: SIP, Seiland Igneous Complex.]

[Fig. 2 caption (residual): (a) Geological map of the Mo i Rana area, based on Søvegjarto et al. (1988, 1989), Gjelle et al. (1991) and Marker et al. (2012). (b) Simplified geological map of the study area east of northern Røssvatnet, based on Gustavson (1981) and Gjelle et al. (2003), and new mapping by NGU (Bjerkgård et al. 2018). The black stars show locations of dated samples.]

The supracrustal successions in the RNC and HNC comprise voluminous marbles, and associated iron formations and pelitic schists, deposited in platformal, shelf-edge and shelf-slope environments during the Late Neoproterozoic-Early Silurian (Melezhik et al. 2015, 2018; Barnes et al. 2007; Slagstad and Kirkland 2017). The metasedimentary rocks were intruded by voluminous, Ordovician-Early Silurian arc-type granitoid plutons and batholiths (Nordgulen et al. 1993; Barnes et al. 2007), and coeval, voluminous magmatism is recorded in the British, Irish and Greenland Caledonides (e.g. Flowerdew et al. 2005; Rehnström 2010). Early Caledonian, NW-vergent thrusting in the HNC (Yoshinobu et al. 2002) has been correlated with NW-vergent Taconian/Grampian deformation along the Laurentian margin (Prave et al. 2000). These geological features are compatible with, and generally interpreted to reflect, active-margin processes along the Laurentian margin of Iapetus, prior to continent-continent collision in the Late Silurian. The Köli Nappe Complex is characterized by numerous, Early Ordovician, fragmented arc and ophiolite complexes, most of which have compositions attributed to formation in oceanic arc and back-arc basins (Grenne et al. 1999; Slagstad et al. 2013). Fossil evidence suggests formation of these complexes outboard of the Laurentian margin (Bruton and Bockelie 1980; Pedersen et al. 1992), consistent with coeval and tectonically similar (i.e. active-margin-related) arc-related ophiolites in the Newfoundland and Quebec Appalachians (e.g. van Staal et al. 1998; Lissenberg et al. 2005a). These ophiolites are typically correlated with coeval, compositionally similar ophiolite complexes in the HNC (Furnes et al. 1988; McArthur et al. 2014), highlighting the pre-Scandian relationship between these nappe complexes.
The Seve and Särv nappe complexes (Middle Allochthon) comprise low- to high-grade metamorphic metasedimentary and meta-igneous rocks commonly interpreted to represent the Neoproterozoic-Early Paleozoic western margin of Baltica (Gee et al. 2017). The structurally highest Seve Nappe Complex (SNC) is usually inferred to represent the outermost margin of Baltica and locally preserves mafic dyke swarms dated at c. 615-600 Ma (Svenningsen 2001; Gee et al. 2017; Kjøll et al. 2019; Tegner et al. 2019), which is interpreted to reflect continental break-up and opening of the Iapetus Ocean, placing the SNC at the continent-ocean transition zone (Andréasson 1994; Andréasson et al. 1998; Gee et al. 2017). Two areas that preserve evidence of high-pressure-ultrahigh-pressure (HP-UHP) metamorphism have been subjected to numerous studies. One is in the Norrbotten region and the other is in Jämtland (Fig. 1). The Norrbotten SNC comprises quartzites, feldspathic and calc-silicate-bearing psammites, minor marble, and pelite. The Norrbotten SNC records a comparatively long tectonomagmatic and tectonometamorphic history, with magmatism at 945 ± 31 (Albrecht 2000) and 845 ± 14 Ma (Paulsson and Andréasson 2002), and c. 637 and 607 Ma titanite and 603 Ma monazite ages (Rehnström et al. 2002; Root and Corfu 2012; Barnes et al. 2019). These ages may be related to heating from the intrusion of mafic dyke swarms; however, the 637 Ma titanite age is too old to be related to mafic dyke emplacement, and the interpretation of the 603 Ma monazite is cryptic due to textural complexities (Barnes et al. 2019). Eclogite-facies mafic rocks (former dykes) in the Norrbotten SNC record pressure-temperature conditions of 26-27 kbar and 680-780°C (Barnes et al. 2019). Various attempts at dating this metamorphic event using Ar-Ar, Sm-Nd isochrons and U-Pb on monazite and zircon have yielded ages between c. 505 and 482 Ma (Mørk et al. 1988; Essex et al. 1997; Root and Corfu 2012; Barnes et al. 2019).
The Jämtland SNC comprises rocks similar to those in the Norrbotten SNC but lacks evidence of a Neoproterozoic tectonometamorphic history. Eclogite-facies rocks in the Jämtland SNC record roughly similar pressure-temperature conditions (25-26 kbar, 650-700°C: Fassmer et al. 2017) but yield Sm-Nd and Lu-Hf mineral ages of around 460 Ma (Brueckner and van Roermund 2007;Fassmer et al. 2017), and even younger, c. 446 Ma, zircon crystallization ages (Root and Corfu 2012). The Västerbotten area, between the Norrbotten and Jämtland SNC, is poorly exposed and not well investigated. Thus far, there is no evidence of similar (U)HP metamorphism in Västerbotten, possibly because of more extensive retrogression (Gee et al. 2013).
The Kalak Nappe Complex (KNC) in the northern Norwegian Caledonides (Roberts 1985) has traditionally been correlated with the SNC (Andréasson et al. 1998). The KNC dominantly comprises variably metamorphosed sedimentary rocks that were laid down in several depositional cycles between c. 1000 and 700 Ma (Slagstad et al. 2006; Kirkland et al. 2007), and record tectonometamorphic and tectonomagmatic events throughout much of the Neoproterozoic (Kirkland et al. 2007, 2016; Gasser et al. 2015), including emplacement of the mafic-ultramafic Seiland Igneous Province mostly at c. 570-560 Ma. The Seiland Igneous Province rocks have chemical compositions and field relationships compatible with formation in a possibly plume-influenced continental rift (Krill and Zwaan 1987; Grant et al. 2016; Larsen et al. 2018), similar to the c. 615-600 Ma mafic dyke magmatism in the SNC, interpreted to be related to the opening of the Iapetus Ocean and the Central Iapetus Magmatic Province. The youngest components of the Seiland Igneous Province are subordinate nepheline syenite dykes dated at c. 525 Ma (Pedersen et al. 1989). The complex Neoproterozoic evolution of parts of the KNC, with multiple phases of tectonometamorphic activity that correlate with events in the Mesoproterozoic-Neoproterozoic sequences of Scotland and East Greenland, has led several authors to suggest an origin exotic to Baltica (Corfu et al. 2007; Kirkland et al. 2007; Slagstad and Kirkland 2018).
Rödingsfjället Nappe Complex near Mo i Rana
The study area near Mo i Rana consists of three nappes with disparate metasedimentary and meta-igneous rock suites (Fig. 2a). The field observations that form the basis for the geochronological investigation in this area have been presented by us in various theses, reports and maps, including: Marker (1983), Søvegjarto et al. (1988, 1989), Gjelle et al. (2003), Marker et al. (2012), Bjerkgård et al. (2013), Høyen (2016) and Storruste (2017). This work has established a local tectonostratigraphy for this part of the Rödingsfjället Nappe Complex consisting of, from structural bottom to top: the Ravnålia, Plura and Slagfjellet nappes. Each of the nappes is subdivided into one or more 'groups' or 'formations'; however, as discussed below, it is possible that cryptic nappe boundaries between some of these groups and formations have gone undetected. The nappes are described in detail below, from structural bottom to top, with particular emphasis on the dated units.
Ravnålia Nappe
Kjerringfjell Group. The Kjerringfjell Group is dominated by variably migmatitic, garnet-biotite ± muscovite ± staurolite ± kyanite gneisses, locally with irregular layers of quartzite and garnet-bearing amphibolite, intruded by numerous pegmatite sheets, the Umbukta gabbro and associated fine-grained, mafic dykes (Høyen 2016). The mica gneisses are typically medium grained and preserve what appears to be centimetre-scale primary bedding, along with a tectonic foliation and migmatitic leucosomes (Fig. 3c, d). The rocks preserve evidence of several generations of deformation but the structures in the Kjerringfjell Group have not yet been mapped out in detail. The Umbukta gabbro is medium- to coarse-grained and generally undeformed (Fig. 3e), although local growth of garnet and a general retrogression of pyroxene to hornblende are telltale signs of metamorphic overprinting (Storruste 2017). Primary igneous textures are typically well preserved, and olivine is preserved locally, giving the rocks a characteristic brown, weathered surface.
Mafic dykes associated with the Umbukta gabbro are typically up to a few decimetres thick and cut the high-grade fabrics in the gneissic host rock (Fig. 3f); similar dykes are also found inside the gabbro. In addition, remnants of older mafic dykes, now thoroughly deformed and amphibolitized, are common, indicating pre-metamorphic mafic magmatism.
Ørtfjellet Group and Dunderland Formation. In addition to the Kjerringfjell Group, the Ravnålia Nappe consists of the Ørtfjellet Group and the Dunderland Formation. The main lithologies of these units include amphibolite-facies dolomite and calcite marble, compositionally variable mica schists and minor diamictite (Fig. 3g) interpreted to be of glacial origin (Melezhik et al. 2015), and minor intrusive bodies including tonalite. A characteristic feature of the Ørtfjellet Group and Dunderland Formation is numerous dismembered, stratiform iron formations in close proximity to the marbles. Carbon and Sr isotope chemostratigraphy of the marbles implies a depositional age of 800-730 Ma for the Dunderland Formation and a depositional age of c. 660 Ma for the Ørtfjellet Group (Melezhik et al. 2015).
Langfjell Shear Zone. The Langfjell Shear Zone separates the Plura and Ravnålia nappes. The shear zone consists of high-strain schists and gneisses intruded by locally abundant dismembered sheets of pegmatite (Fig. 3h). Detailed structural mapping of the Langfjell Shear Zone suggests that it reflects pre-Scandian shearing followed by Scandian folding (Marker 1983).
Slagfjellet Nappe
Mofjellet Group. The Mofjellet Group consists of complexly folded, fine- to medium-grained grey gneisses with persistent layers of amphibolite and aluminous biotite and muscovite gneisses. The grey gneisses are dominantly dacitic and rhyolitic with abundant quartz and plagioclase, and subordinate biotite and muscovite, and are probably of igneous (perhaps volcanic) origin (Fig. 3a). The commonly garnet-bearing amphibolites contain pods and stripes of calc-silicate rock and are interpreted by us to represent strongly deformed pillow lavas with small amounts of sediment infilling between the pillows. The more schistose and micaceous type of gneiss most likely represents greywacke-type metasediments. The biotite and muscovite gneisses or schists are generally rich in quartz and aluminosilicates in addition to mica. They may form separate, generally persistent layers but grade into each other with changing proportions of biotite and muscovite. Biotite-dominated types may also contain amphibole, and grade into hornblende-biotite gneisses. The biotite gneisses contain abundant kyanite in addition to garnet and staurolite, while the muscovite gneisses are mostly poor in these minerals. The biotite (+hornblende) and muscovite gneisses invariably contain disseminated pyrite as well as quartz-rich exhalites. The exhalite zones can be traced for several kilometres along strike and are important hosts for stratabound Zn-Pb-Cu sulfide mineralizations in the Mofjellet Group (Bjerkgård et al. 2013).
Rödingsfjället Nappe Complex east of northern Røssvatnet
The RNC is exposed on Hjartfjellet and the eastern and southern shoreline of northern Røssvatnet (Fig. 2b). Unfortunately, very little detailed mapping has been conducted in the region between our two study areas and we are currently unable to correlate units in the two areas. Our new mapping east of northern Røssvatnet shows that the rocks in this area comprise garnet and kyanite-garnet mica schists, and quartzo-feldspathic mica gneisses with intercalations of marble. The main foliation of the schists and gneisses represents an already transposed and isoclinally folded layering (Sn). This foliation is represented in thin sections by a compositional layering and discontinuous, probably isoclinally folded quartz layers and rootless isoclinal folds. Tight folding of this layering represents a subsequent deformation stage, Fn+1. Variably thick, foliation-parallel, leucocratic, fine-grained to pegmatitic granitic veins (Fig. 3i) were emplaced syntectonically parallel to the foliation (Sn+1) that formed during Fn+1 folding, leading to boudinage or folding of the intruding veins during ongoing deformation. Folding occurred at high temperatures and was accompanied by shearing along the fold limbs. Crenulation cleavage oblique to Fn+1 axial planes, and spatial variations in Fn+1 orientation, indicate a likely third pre-Scandian folding phase that requires verification and further studies. Different generations of granitic sheets, pods and larger bodies are common in the gneisses and schists (Fig. 3i, j); they formed at different stages in the polyphase tectonometamorphic evolution of the schists and gneisses. In the area east of Røssvatnet, the RNC is located in a regional-scale Scandian synform that also includes structurally underlying nappes traditionally assigned to the upper Köli Nappe Complex.
U-Pb zircon dating
Zircon crystals were separated using standard techniques (Wilfley or Rogers water table, heavy liquid, Frantz magnetic separation). Zircons from the non-magnetic fraction were picked under alcohol, mounted in 1 inch-diameter epoxy resin mounts and polished to expose an equatorial section through the grains.
The analyses were carried out at the Geological Survey of Norway (NGU) on an ELEMENT XR single-collector, high-resolution ICP-MS, coupled to a UP193-FX 193 nm short-pulse excimer laser ablation system from New Wave Research. The laser was set to ablate single, up to 60 µm-long lines, using a spot size of 20 or 15 µm, a repetition rate of 10 Hz and an energy corresponding to a fluence of 4-5 J cm^-2. Each analysis included 30 s of background measurement followed by 30 s of ablation. Masses 202, 204, 206, 207, 208, 232 and 238 were measured. The reference material GJ-1 (Jackson et al. 2004) was used for fractionation correction of isotopic ratios, whereas 91500 (Wiedenbeck et al. 1995) and an in-house standard (OS-99-14, 1797 ± 3 Ma: Skår 2002) were used to check precision and accuracy. The data were not corrected for common lead, but monitoring of the mass 204 signal allowed exclusion of data deemed to be influenced by common Pb from further calculations. The data were reduced using GLITTER® (Van Achterbergh et al. 2001) and plots were made using Isoplot (Ludwig 2003).
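For completeness, the conversion from a fractionation-corrected isotope ratio to an apparent age follows the standard decay equation, t = ln(1 + 206Pb*/238U)/lambda238. The sketch below applies it with the Jaffey et al. (1971) decay constant; the input ratio is an illustrative value chosen to fall near the Umbukta gabbro age, not a measured ratio from this study.

import math

LAMBDA_238 = 1.55125e-10  # decay constant of 238U, a^-1 (Jaffey et al. 1971)

def age_206_238(pb206_u238):
    """206Pb/238U age in Ma from the radiogenic 206Pb*/238U ratio:
    t = ln(1 + 206Pb*/238U) / lambda_238."""
    return math.log(1.0 + pb206_u238) / LAMBDA_238 / 1e6

# A ratio of ~0.0938 corresponds to roughly 578 Ma:
print(f"{age_206_238(0.0938):.0f} Ma")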
Whole-rock geochemistry
Whole-rock geochemical analyses were conducted at ALS Chemex in Sweden using methods ME-ICP06 (fused bead, acid digestion and inductively coupled plasma atomic emission spectroscopy (ICP-AES)) and ME-MS81 (fused bead, acid digestion and inductively coupled plasma mass spectrometry (ICP-MS)).
Sm-Nd isotopes
Sm-Nd isotope data on whole-rock samples from the Umbukta gabbro and its host rock were obtained in the Geology Laboratory of Université Blaise Pascal (Clermont-Ferrand, France) using isotope dilution thermal ionization mass spectrometry (ID-TIMS). Basaltic samples were decomposed by standard acid dissolution procedures with hydrofluoric acid (HF). Sample decomposition of metasedimentary rocks was achieved by fusion with a LiBO2 flux in an induction furnace at c. 1150°C, as described by Le Fèvre and Pin (2005). Isolation of Nd and Sm was carried out by cation exchange and extraction chromatography methods similar to (for samples dissolved with HF), or derived from (for samples fused with LiBO2), those described by Pin and Santos Zalduegui (1997). Sm and Nd concentrations were measured by isotope dilution using a mixed 149Sm-150Nd tracer and TIMS, allowing determination of 147Sm/144Nd ratios with a precision of 0.2%. Sm isotope dilution measurements were made in Clermont-Ferrand after sample loading in a droplet of c. 5 M phosphoric acid on single Ta filaments, using an automated VG54E mass spectrometer operated in single-collection mode. Nd isotopic ratios were determined with double Re filament assemblies using a Thermo Finnigan Triton TI instrument at Nîmes University, in static multi-collection mode, with normalization to 146Nd/144Nd = 0.7219.
During the period of analyses, five measurements of the JNdi-1 isotopic standard (Tanaka et al. 2000) gave a mean 143Nd/144Nd of 0.512102 (SD = 3 × 10^-6). The USGS rhyolite standard RGM-1 was analysed in duplicate, using the HF-dissolution and the LiBO2-fusion methods, in order to check the overall reproducibility and accuracy of the method.
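The measured 143Nd/144Nd and 147Sm/144Nd ratios feed directly into the age-corrected epsilon-Nd notation used later in this paper. A minimal sketch of that calculation follows, assuming the commonly used present-day CHUR parameters (143Nd/144Nd = 0.512638, 147Sm/144Nd = 0.1967; Jacobsen and Wasserburg 1980) and lambda-147Sm = 6.54 × 10^-12 a^-1; the sample ratios in the example are hypothetical.

import math

LAMBDA_147SM = 6.54e-12   # 147Sm decay constant, a^-1
CHUR_143_144 = 0.512638   # present-day CHUR 143Nd/144Nd
CHUR_147_144 = 0.1967     # present-day CHUR 147Sm/144Nd

def epsilon_nd(nd143_144, sm147_144, t_ma):
    """Age-corrected epsilon-Nd at time t (Ma): back-calculate the sample
    and CHUR 143Nd/144Nd to t, then express the deviation in parts per 10^4."""
    growth = math.exp(LAMBDA_147SM * t_ma * 1e6) - 1.0
    sample_t = nd143_144 - sm147_144 * growth
    chur_t = CHUR_143_144 - CHUR_147_144 * growth
    return 1e4 * (sample_t / chur_t - 1.0)

# Hypothetical gabbro sample (ratios invented for illustration):
print(f"eNd(575 Ma) = {epsilon_nd(0.51280, 0.17, 575):+.1f}")  # about +5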
U-Pb zircon age dating
The U-Pb zircon data are presented in Supplementary material 1. Table 1 presents a summary of the data and sample coordinates.
Detrital zircons, metasedimentary rocks, Kjerringfjell Group, Ravnålia Nappe. One hundred and eight detrital zircon grains from four samples collected just north of the Umbukta gabbro in the Kjerringfjell Group yield dominantly Mesoproterozoic and late Paleoproterozoic ages between 1.9 and 1.0 Ga, with sparse Archean grains (Fig. 4a). The youngest analysis is 97% concordant with an age of c. 865 Ma; a relatively low Th/U ratio of 0.15 may, however, suggest some effects of metamorphism, as discussed further below. The youngest identifiable population consists of four analyses that yield a concordia age of 1030 ± 17 Ma (MSWD = 0.13), considered the best estimate of the maximum age of deposition.
Neoproterozoic high-grade metamorphism, Kjerringfjell Group, Ravnålia Nappe. A leucosome from a migmatitic psammite, cut by a mafic dyke related to the Umbukta gabbro (Fig. 3f), was sampled for dating. The zircons from this sample are typically 100-150 µm and rounded to elongate. Internally, the zircons commonly display oscillatory-zoned cores with cathodoluminescence (CL)-dark (U-rich) mantles with faint oscillatory to irregular zoning, or CL-dark grains with variable oscillatory and irregular zoning (Fig. 5). Twenty-three analyses of oscillatory-zoned cores yield ages ranging between 1630 and 889 Ma, with a youngest population of c. 1034 Ma (Fig. 4b). Seventeen analyses of CL-dark rims and discrete grains yield several reversely discordant analyses due to high U content and, judging from the post-analysis CL images, core-rim mixtures cannot be confidently excluded in some cases. Nevertheless, 10 analyses yield a weighted mean 206Pb/238U age of 623 ± 6 Ma (MSWD = 2.1; Fig. 4b), interpreted to represent the crystallization age of the leucosome and thus the age of high-grade metamorphism in the Kjerringfjell Group.
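The weighted mean ages and MSWD values quoted throughout this section follow the standard inverse-variance formulation. The sketch below reproduces that arithmetic; the spot ages and uncertainties are fabricated for illustration and do not correspond to the analyses reported here.

import numpy as np

def weighted_mean_age(ages, sigmas):
    """Inverse-variance weighted mean of single-spot ages, with its
    1-sigma uncertainty and the MSWD (reduced chi-square)."""
    ages, sigmas = np.asarray(ages, float), np.asarray(sigmas, float)
    w = 1.0 / sigmas**2
    mean = np.sum(w * ages) / np.sum(w)
    mean_sigma = np.sqrt(1.0 / np.sum(w))
    mswd = np.sum(w * (ages - mean)**2) / (len(ages) - 1)
    return mean, mean_sigma, mswd

# Hypothetical 206Pb/238U spot ages (Ma) and 1-sigma errors:
ages = [618, 625, 630, 620, 624, 619, 627, 622, 626, 621]
sigmas = [6, 7, 8, 6, 7, 6, 8, 7, 6, 7]
m, s, mswd = weighted_mean_age(ages, sigmas)
print(f"{m:.0f} +/- {s:.0f} Ma (MSWD = {mswd:.1f})")

An MSWD near 1 indicates that the scatter is consistent with the analytical uncertainties alone; values well above 1, as for some samples below, indicate geological scatter or unresolved mixing.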
Late Neoproterozoic magmatism, Kjerringfjell and Ørtfjellet groups, Ravnålia Nappe. The zircons from the Umbukta gabbro, Kjerringfjell Group (sample 197692) are long prismatic, c. 250-300 µm, with well-developed, rather broad, oscillatory zoning. Nine of 10 analyses yield a concordia age of 578 ± 6 Ma (Fig. 4c; MSWD = 2.5), interpreted to represent the crystallization age of the mafic magma. This age is similar to an earlier date of 576 ± 7 Ma, interpreted as the age of magmatic crystallization (Senior and Andriessen 1990). The significance of the single excluded analysis is unclear.
The zircon crystals from a tonalitic gneiss in the Ørtfjellet Group (sample MM83786) resemble those in the Umbukta gabbro, both in size and internal zoning. Thirteen of 15 analyses yield a weighted mean 206Pb/238U age of 541 ± 6 Ma (Fig. 4d; MSWD = 1.2). The age is interpreted to reflect crystallization of the tonalitic magma. Of the two analyses not used in the age calculation, one is strongly discordant, whereas the other yields a c. 430 Ma age that is interpreted to reflect modification or growth during Scandian overprinting.
Cambrian and Ordovician magmatism. Sample MO_104 from the Mofjellet Group, Slagfjellet Nappe is a fine-grained, light-grey quartzofeldspathic orthogneiss with sparse migmatitic leucosomes. The zircons from this sample are comparatively small, typically around 100 µm, equidimensional to stubby, with well-developed oscillatory zoning. The zoning is commonly transected by CL-bright irregular veins, suggesting some later alteration. Thirty analyses cluster around 500 Ma, 20 of which yield a concordia age of 501 ± 3 Ma (Fig. 4e; MSWD = 0.19), interpreted to represent the crystallization of the orthogneiss protolith. The analyses that do not fit the concordia age are interpreted to reflect later Pb loss.

Sample MO_074 from the Plura Nappe is a foliated, fine-grained, light-grey homogeneous metagranite with disseminated biotite, garnet and muscovite. The zircons from this sample resemble those of sample MO_104, typically around 100 µm, stubby with well-developed oscillatory zoning. Of 13 analyses, three have large 207Pb/206Pb errors, one is strongly reversely discordant (16%) and one is a concordant outlier at c. 565 Ma, which may have some inherited component. The remaining eight analyses yield a concordia age of 510 ± 6 Ma (Fig. 4f; MSWD = 2.4), interpreted to represent the crystallization age of the metagranite on the basis of the magmatic zircon CL texture. The significance of the c. 565 Ma outlier is unknown, but the age is similar to that of magmatic activity in the Ravnålia Nappe.
Sample 92355, from east of Røssvatnet, is a medium-grained granitic gneiss layer. The zircons from this sample are 100-300 µm, stubby to prismatic, with CL-dark, oscillatory-zoned interiors. Five of 16 analyses are discordant, with the other 11 yielding a concordia age of 464 ± 4 Ma (Fig. 4g; MSWD = 1.7), interpreted to reflect crystallization of the granitic magma.

Cambrian-Silurian (Scandian) deformation. Several samples were dated in an attempt to constrain the age of deformation. The samples include dykes that show syntectonic relationships with the surrounding rocks or cross-cut tectonic fabrics in those rocks.
Sample 92352, from east of Røssvatnet (Fig. 2b), is a syntectonic pegmatite sheet. The sampled outcrop is located on the shoreline at Skittreskvika and comprises intensely folded quartzofeldspathic mica gneisses. The gneissic layering represents a transposed foliation (Sn) that has subsequently been affected by another folding phase (Fn+1). During this phase, porphyritic granite intruded sub-parallel to the layering, forming incoherent pinching and swelling veins, and lens-shaped boudins due to ongoing deformation. The layering and granite veins have been folded into tight, metre-scale, NW-facing folds with moderately WSW-plunging Fn+1 fold axes. One of the boudinaged granites was sampled for geochronology (sample 92352), inferred to provide an age of folding. The zircons from this sample are typically 200-300 µm, prismatic, and dark and featureless in CL due to very high U contents. One outlier yields a slightly reversely discordant age of c. 540 Ma, and another is strongly discordant and omitted from further discussion. The remaining 16 analyses plot in two groups, with three discordant analyses plotting partly between the two groups. The older group consists of nine analyses and yields a concordia age of 515 ± 5 Ma (Fig. 6a; MSWD = 0.38), whereas the other group yields a concordia age of 480 ± 8 Ma (MSWD = 4.5). The older age is interpreted to date crystallization of the syntectonic pegmatite, thus dating deformation, whereas the younger group is interpreted to reflect later Pb loss. The three discordant analyses between these concordant age components are interpreted to represent physical mixtures of different age domains, consistent with the interpretation of two distinct age components in this sample.
Sample MO_079 is a pegmatite from the Langfjell Shear Zone. The zircons from this sample are mostly dark in CL, but in many cases with clearly discernible oscillatory zoning. Seven of 16 analyses are strongly discordant and not considered further. The remaining nine analyses cluster around c. 475 Ma, with two analyses yielding slightly younger ages. A regression through all nine analyses yields an upper intercept of 469 ± 11 Ma (MSWD = 1.2), with an imprecise, geologically meaningless (future) lower intercept of −305 ± 930 Ma. Excluding the two slightly younger analyses, the remaining seven analyses yield a concordia age of 475 ± 5 Ma (Fig. 6b; MSWD = 0.32), interpreted as the best estimate of the crystallization age of the pegmatite, and thus dating deformation along the Langfjell Shear Zone. The two younger analyses are interpreted to have undergone recent radiogenic Pb loss.
Sample 92351 was collected at Skittreskvika (Fig. 2b), from a fine-grained granitic to granodioritic dyke, which cuts the foliation in the host mica gneiss at a high angle and clearly post-dates the main deformation (i.e. folding and foliation development, as well as high-grade metamorphism) in this outcrop. The zircon crystals from this sample are c. 100-150 µm, prismatic to irregular, with CL-bright, oscillatory-zoned cores surrounded by thick, CL-dark, oscillatory-zoned mantles; the latter also form separate grains. The cores yield ages between c. 1080 and 1805 Ma, whereas nine CL-dark mantles and grains yield ages that spread along concordia between c. 490 and 450 Ma; the older analyses are generally more concordant. Four discordant analyses are omitted from further discussion. Extracting an age from the young group of analyses is not straightforward. A regression through the group yields a discordia that is nearly parallel to concordia, resulting in apparently meaningless, very imprecise, upper and lower intercepts. Caledonian Pb loss is a distinct possibility and the three oldest analyses yield a concordia age of 484 ± 11 Ma (Fig. 6c; MSWD = 0.86), which may represent a best estimate of the age of crystallization of the granite dyke. This date would then also provide a minimum age of deformation of the host mica gneiss. The dispersion along concordia to younger ages is interpreted to reflect Caledonian Pb loss. The population of inherited detrital cores is similar to that documented from the Kjerringfjell Group, Ravnålia Nappe.
Sample MO_062 is a syntectonic granite sheet within the Mofjellet Group, Slagfjellet Nappe. The zircons from this sample are 100-150 µm and prismatic, with well-developed oscillatory zoning. Sixteen analyses yield ages dispersed along concordia from c. 430 to 400 Ma, with four discordant analyses excluded from further discussion. The five oldest analyses are concordant to somewhat discordant (up to 16%), whereas the seven younger analyses are between 10 and 31% discordant. The oldest population of five analyses yields a concordia age of 429 ± 4 Ma (Fig. 6d; MSWD = 1.3), considered the best estimate of the crystallization age of the syntectonic granite, corresponding to the Scandian tectonic event.
Whole-rock geochemistry and Sm-Nd isotopes
The whole-rock chemical and Sm-Nd isotopic data, along with sample coordinates, are presented in Electronic Supplements 2 and 3, respectively.
Mofjellet Group. The meta-igneous suite in the Mofjellet Group comprises grey gneisses and amphibolites that classify as dacite/rhyolite and basalt/basaltic andesite, respectively (Fig. 7a). The grey gneisses are calc-alkaline with enriched chondrite-normalized light REE (LREE) patterns, are enriched in large ion lithophile elements (LILEs) and Pb relative to high field strength elements (HFSEs), and are depleted in Nb, Ta and Ti when normalized to primitive mantle (Fig. 7b, c). In commonly used tectonic discrimination diagrams, the grey gneisses plot in the volcanic arc field (Fig. 7e).
The amphibolites straddle the line between tholeiitic and calc-alkaline compositions and have slightly LREE-depleted to LREE-enriched patterns (Fig. 7b). Like the grey gneisses, the amphibolites are depleted in Nb and Ta, and enriched in Pb in the primitive-mantle-normalized diagram, and enriched in LILEs relative to MORB (Fig. 7b-d). The amphibolites plot in the field of arc basalts in the tectonic discrimination diagrams (Fig. 7f).
Umbukta gabbro and related mafic dykes. The whole-rock chemical data include samples from the medium- to coarse-grained gabbro itself, in addition to fine-grained mafic dykes inside the gabbro and dykes cutting older metamorphic fabrics in the metasedimentary host rock around the gabbro. In general, there are no systematic differences in composition between the gabbro and the dykes, suggesting that the gabbro samples roughly reflect melt compositions; a few exceptions, where the gabbros have lower incompatible trace element concentrations, may represent a higher proportion of cumulate phases.
The mafic rocks range between 42 and 58 wt% SiO2, and dominantly correspond to 'basalt' and more rarely 'basaltic andesite' in the SiO2 v. K2O + Na2O diagram (Fig. 8a). They are enriched in incompatible trace elements, have fractionated REE patterns with chondrite-normalized REE values typically several tens to more than a hundred times chondrite, and show little or no Eu anomaly (Fig. 8b). The primitive-mantle-normalized diagram displays a relatively flat pattern with no particular anomalies, apart from a positive Pb anomaly (Fig. 8c), and the MORB-normalized diagram displays a characteristic 'hump' shape (Fig. 8d). Ratios of Zr/Nb are sensitive indicators of enrichment of the mantle source, with low ratios (c. 3) indicating high degrees of enrichment, and high ratios (c. 30-40) typically encountered in strongly depleted MORB (Weaver et al. 1983; Pin and Paquette 1997). Apart from one sample, the data from the Umbukta gabbro and related dykes have low Zr/Nb ratios between 5 and 9 (not shown in the figure). These results, together with results from several tectonic discrimination diagrams (Fig. 8e, f), are consistent with an enriched mantle source and a within-plate tectonic setting for the Umbukta gabbro, which, in turn, is consistent with emplacement into continental rocks that did not appear to undergo orogenic activity at the time.

[Fig. 7 caption (residual): chondrite and primitive mantle values are from Sun and McDonough (1989), and the MORB (mid-ocean ridge basalt) values are from Pearce (1983); tectonic discrimination diagrams in (e) and (f) are from Wood (1980) and Pearce et al. (1984), respectively.]
The shaded field in the diagrams represents the range of chemical compositions of gabbros from the Seiland Igneous Province (Roberts 2007), which is generally interpreted to have formed in a within-plate rift setting, possibly in association with plume activity (Grant et al. 2016; Larsen et al. 2018). The gabbroic rocks from the two areas are indistinguishable.
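Two of the quantities invoked above, the steepness of the chondrite-normalized REE pattern and the size of the Eu anomaly, reduce to simple ratios: (La/Yb)N and Eu/Eu* = EuN/sqrt(SmN × GdN). The sketch below computes both for a hypothetical Umbukta-type basalt; the normalizing values are quoted from memory after Sun and McDonough (1989) and should be checked against the original table, and the sample concentrations are invented.

import numpy as np

# CI-chondrite normalizing values (ppm) after Sun and McDonough (1989);
# quoted here from memory and worth verifying against the original table.
CHONDRITE = {"La": 0.237, "Sm": 0.148, "Eu": 0.0563, "Gd": 0.199, "Yb": 0.161}

def ree_indices(sample_ppm):
    """Chondrite-normalized (La/Yb)N slope and Eu anomaly
    Eu/Eu* = EuN / sqrt(SmN * GdN)."""
    n = {el: sample_ppm[el] / CHONDRITE[el] for el in CHONDRITE}
    la_yb_n = n["La"] / n["Yb"]
    eu_anom = n["Eu"] / np.sqrt(n["Sm"] * n["Gd"])
    return la_yb_n, eu_anom

# Hypothetical Umbukta-type basalt: enriched LREE, no Eu anomaly.
la_yb, eu = ree_indices({"La": 20.0, "Sm": 5.5, "Eu": 1.9, "Gd": 6.2, "Yb": 2.4})
print(f"(La/Yb)N = {la_yb:.1f}, Eu/Eu* = {eu:.2f}")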
Sm-Nd analyses of nine mafic samples and one garnet mica schist host-rock sample show a large spread in εNd(575 Ma) values (εNd values calculated for an age of 575 Ma), from 5.6 (i.e. close to depleted mantle at 575 Ma) to −2.5, probably reflecting contamination with the metasedimentary host rocks, one sample of which yields an εNd(575 Ma) value of −8.2 (Fig. 9a). Th/Nb ratios are sensitive indicators of crustal contamination in mantle melts because crustal rocks have significantly higher Th/Nb ratios than mantle melts. The contamination may result from assimilation during ponding and ascent through the crust or from contamination of the mantle source during subduction and recycling of continental detritus. Typically, uncontaminated mantle melts have very low Th/Nb ratios (<0.1), regardless of whether their source is depleted or enriched. The Umbukta gabbro and related mafic dykes have Th/Nb ratios between 0.04 and 0.12, whereas two samples with significantly lower εNd(t) values (−2.5 and −1.7) have significantly higher Th/Nb ratios of 0.32 and 0.38 (Fig. 9b). There is no apparent correlation between the εNd(575 Ma) value and proximity to the metasedimentary host rocks, indicated, for example, by a relatively high εNd(575 Ma) value of 4.5 for a mafic dyke intruding the host rock. This lack of correlation suggests that contamination took place at depth, consistent with the apparently relatively cold host rocks. Gabbros from the Seiland Province (Tegner et al. 1999) yield a similar range of εNd values (Fig. 9a) and a comparatively wide range of zircon Lu-Hf isotope values (Roberts 2007). Interestingly, there is a large contrast between the dominantly positive εNd values of the gabbro, indicative of a time-integrated depleted mantle source, and the enriched incompatible element signature. These features are indicative of enrichment of the source slightly before, or at the time of, the igneous event.

[Fig. 8 caption (residual): classification after Cox et al. (1979); the chondrite and primitive mantle values are from Sun and McDonough (1989), and the MORB (mid-ocean ridge basalt) values are from Pearce (1983); tectonic discrimination diagrams in (e) and (f) are from Pearce and Cann (1973) and Pearce and Norry (1979), respectively.]
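The contamination argument can be made semi-quantitative with a simple Nd mass-balance mixing model between an uncontaminated melt (εNd(575 Ma) = +5.6) and the metasedimentary host (εNd(575 Ma) = −8.2), the two end-member values reported above. The sketch below illustrates this; the Nd concentrations assigned to the end members are assumptions, so the mixing fractions are indicative only.

def mixed_epsilon_nd(f_crust, c_crust, e_crust, c_mantle, e_mantle):
    """Nd-concentration-weighted two-component mixing: eNd of a mantle
    melt assimilating a mass fraction f_crust of crustal material."""
    num = f_crust * c_crust * e_crust + (1 - f_crust) * c_mantle * e_mantle
    den = f_crust * c_crust + (1 - f_crust) * c_mantle
    return num / den

# End members based on the text; Nd concentrations (ppm) are assumed.
for f in (0.0, 0.1, 0.3, 0.5):
    e = mixed_epsilon_nd(f, c_crust=35.0, e_crust=-8.2,
                         c_mantle=20.0, e_mantle=5.6)
    print(f"f = {f:.1f}: eNd = {e:+.1f}")

With these assumed concentrations, lowering εNd from +5.6 to around −2.5 requires tens of per cent assimilation, illustrating why contamination at depth, rather than local wall-rock interaction, is the favoured interpretation.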
Discussion
Neoproterozoic depositional, tectonomagmatic and tectonometamorphic events in the Rödingsfjället Nappe Complex, north-central Norway

Figure 10 presents schematic cross-sections of the two study areas and summarizes the available data from the investigated units. A distinct feature of the Scandinavian Caledonides is metasedimentary rocks with indistinct detrital zircon populations between c. 1.8 and 1.0 Ga, with relatively minor late Archean input (Slagstad and Kirkland 2017). The detrital zircon dataset presented here is rather small and limited to the Kjerringfjell Group, the host rock to the Umbukta gabbro, but nonetheless seems to be similar to other metasedimentary units in the Caledonides, including previously published detrital zircon data from the Plura Nappe (Slagstad and Kirkland 2017). A maximum age of deposition for the metasedimentary protolith to the Kjerringfjell Group is given by the youngest, c. 1030 Ma, population. Strontium and carbon chemostratigraphic data on the widespread marbles in the RNC yield apparent depositional ages of 800-730 Ma in the Dunderland Formation, c. 660 Ma in the Ørtfjellet Group, constituting the uppermost unit in the Ravnålia Nappe, and 700-670 Ma in the Plura Nappe (Melezhik et al. 2015). The existence of a tectonic contact between the Dunderland Formation and the Ørtfjellet Group cannot be excluded, but the data seem to point towards a long, but not necessarily continuous, depositional history in the RNC. Migmatization at 623 Ma and magmatism at 578 and 541 Ma document a change from a depositional to a tectonically more active environment. The significance of this change, and the implications for correlation with other units in the Scandinavian Caledonides, are discussed further below.
Cambrian-Ordovician active-margin processes: the 501 Ma Mofjellet arc
Cambrian magmatism is generally unknown from the Scandinavian Caledonides, but detrital zircons imply the existence of one or more proximal arc sources. The 501 Ma age and chemical data from the Mofjellet Group suggest that it, and possibly rocks in the adjacent Plura Nappe (510 Ma), may represent a Cambrian arc capable of sourcing detrital zircon of this age. The 515 ± 5 Ma age of a syntectonic, boudinaged granite pegmatite in the RNC at northern Røssvatnet is indicative of a Cambrian contractional tectonic event, consistent with the arc-related magmatism at Mofjellet. The geographical extent of this event is currently poorly constrained. Deformation as early as 515 Ma is previously unknown from the Scandinavian Caledonides, with an age of 497 Ma from the suprasubduction-zone-related Leka ophiolite being the temporally closest recognized event (Dunning and Pedersen 1988; Furnes et al. 1988), interpreted to represent convergence within the Iapetus Ocean.
Elsewhere in the Caledonian-Appalachian system, the subduction-zone-related Little Port ophiolite in Newfoundland has yielded an age of 505 Ma (Jenner et al. 1991), suggesting Mid-Cambrian convergence in Iapetus. The 515 Ma age of deformation and the 510 Ma age of arc-related magmatism reported here may either record stages of a distinct pre-Caledonian/pre-Taconian Cambrian evolution or may, alternatively, mark the incipient stages of Taconian orogenic development. In either case, these 515-500 Ma ages push the onset of convergence in the Iapetan system further back in time. The 480 ± 21 Ma dyke (sample 92351) cross-cuts metamorphic fabrics, and clearly post-dates Cambrian folding and metamorphism (Fig. 10b). It provides evidence for a polyphase (at least two-stage) tectonothermal evolution of the wall rocks prior to c. 480 Ma. This dyke is coeval with the 475 Ma syntectonic pegmatite from the Langfjell Shear Zone (Fig. 10a), separating the Ravnålia and Plura nappes, whose age is similar to that of well-documented Taconian deformation in the HNC (Yoshinobu et al. 2002), and appears to overlap with ages of ophiolite obduction in the Scandinavian Caledonides, indicating coupling between the two nappe complexes from at least that time.
Lastly, the RNC preserves evidence of the climactic Scandian event at c. 429 Ma, reflecting a collision between Baltica and Laurentia. This event may have been widespread throughout the Caledonian tectonostratigraphy (Majka et al. 2012;Engvik et al. 2014;Froitzheim et al. 2016;Bender et al. 2019); however, considering that some nappes record several hundred million years of discontinuous tectonic activity (Kirkland et al. 2007;Gasser et al. 2015; this work), the sparsity of data constraining metamorphic and deformational events in the Caledonides suggests we have a long way to go before we are able to resolve Scandian from pre-Scandian effects.
Implications of correlating geological events across Caledonian tectonostratigraphy
The tectonostratigraphic subdivision of the Scandinavian Caledonides into four allochthonous levels (Roberts and Gee 1985) has served as a basis for studies of the pre-Caledonian tectonic evolution of the Iapetus Ocean, as well as of processes related to continent-continent collision, for decades. An important aspect of this framework is that units at increasingly higher tectonostratigraphic levels are increasingly exotic to Baltica. A growing geochronological database has, however, resulted in many workers questioning the relationship between structural level and provenance (Kirkland et al. 2007; Corfu et al. 2014), with some authors arguing that distinguishing Baltica-derived from Laurentia-derived terranes with any degree of confidence is far from trivial with currently available data (Slagstad and Kirkland 2017). Others have argued that terranes derived from other continents, such as Gondwana, may also be present (Corfu et al. 2007). Many authors assume that Baltica and Laurentia were conjoined until Iapetus opening at c. 600 Ma (but see Slagstad et al. 2019 for a discussion of alternative scenarios), in which case the various terranes making up the Caledonides only had about 160 myr to develop their own, distinct characteristics prior to collision. Determining the provenance (Baltica, Laurentia or some other continent) of units within the Scandinavian Caledonides has been a major effort for decades (Roberts 1988; Corfu et al. 2007; Gee et al. 2014). A major obstacle is that Laurentia and Baltica have had a similar evolution through much of geological history, which means there are few unique and, hence, diagnostic features by which to make a distinction. Nonetheless, faunal evidence from Early Ordovician sedimentary rocks deposited on top of eroded ophiolite fragments shortly after obduction in the central and southwestern Scandinavian Caledonides suggests derivation from the Laurentian side of Iapetus (Bruton and Bockelie 1980; Pedersen et al. 1992). This interpretation is consistent with the recent suggestion by Slagstad and Kirkland (2018) that a distinct suite of 438-434 Ma mafic layered intrusions in the Köli and correlative nappe complexes is only found in the upper plate of the Scandian continent-continent collision. Hence, if we accept that the Köli and correlative nappe complexes are Laurentia-derived, the presence of the Leka and other similar ophiolite fragments in the HNC (McArthur et al. 2014), along with overlapping ages and styles of Ordovician magmatic activity in both allochthons (Meyer et al. 2003), is consistent with the widely accepted Laurentian ancestry of the HNC. A characteristic feature of the HNC is Early Ordovician, c. 475 Ma, top-to-the-west thrusting, interpreted to reflect accretion and obduction of arc and back-arc assemblages (including ophiolites), typically correlated with the Taconian Orogeny in northeastern North America (Yoshinobu et al. 2002; Barnes et al. 2007; Roberts et al. 2007). The new data presented from the RNC suggest that evidence of a Taconian event is also present in other nappe complexes, and that some units may have undergone even earlier, Cambrian, tectonic events.
As shown in Figure 2a, the Ravnålia Nappe consists of two quite distinct units: the Kjerringfjell Group, consisting of high-grade metasedimentary rocks that underwent partial melting at 623 Ma prior to intrusion of the Umbukta gabbro at 578 Ma; and the Ørtfjellet Group/Dunderland Formation, consisting of lower-grade schists and voluminous marble and banded iron formations. It cannot be ruled out that these two units are separated by a tectonic contact, and in effect constitute two nappes (Fig. 10a). However, the Taconian-age Langfjell Shear Zone seems to link the Kjerringfjell Group and the RNC to the HNC, based on the similar ages of thrusting. The similarities between the Umbukta gabbro with its high-grade metasedimentary host rocks and the Seiland Igneous Province with its high-grade metasedimentary host rocks are quite compelling. Correlating these units means assigning the Seiland Igneous Province and its KNC host rocks an origin unrelated to Baltica (Corfu et al. 2007; Kirkland et al. 2007; Slagstad and Kirkland 2018).
The implication of assigning a non-Baltican origin to the KNC is that the correlative SNC (e.g. Andréasson et al. 1998) also comes under scrutiny, as it has in other recent contributions (Corfu et al. 2007; Kirkland et al. 2011). There is an apparent difference between the tectonometamorphic evolution of the SNC in Norrbotten and in Jämtland (Fig. 1). In Jämtland, the SNC rocks underwent UHP metamorphism at pressure-temperature conditions of 25-27 kbar and 650-760°C at c. 458 Ma (Brueckner and van Roermund 2007; Fassmer et al. 2017), whereas the SNC rocks in Norrbotten may have undergone eclogite-facies metamorphism at 12-15 kbar and 500-630°C at c. 505 Ma (Mørk et al. 1988) or, as suggested by later work, between c. 500 and 480 Ma (Root and Corfu 2012; Barnes et al. 2019). Other features of the Norrbotten SNC worth mentioning here include subordinate volcanism at 945 ± 31 Ma (Albrecht 2000), titanite ages at 637 and 607 Ma (Rehnström et al. 2002; Root and Corfu 2012), and a monazite age of 603 Ma (Barnes et al. 2019). As discussed by Barnes et al. (2019), the dated monazite preserves a patchy zoned texture consistent with a partial dissolution process, in which case the 603 Ma age may represent resetting of an even older generation of monazite. If correct, this would imply the presence of older Neoproterozoic tectonic events, as indicated by the 637 Ma titanite and even older volcanic activity. Thus, unlike the Jämtland SNC, the Norrbotten SNC preserves evidence, albeit limited, of a long Neoproterozoic-Ordovician magmatic and metamorphic history that predates the HP metamorphic evolution recorded in the Jämtland SNC, and shares many similarities with that of both the KNC and the RNC. Gee et al. (2013) and Barnes et al. (2019) noted that the (U)HP metamorphism in the Norrbotten SNC coincided with Early Ordovician obduction of arc/back-arc assemblages (ophiolites) recorded mainly in the Köli and Helgeland nappe complexes, and argued for a causative link between the two events. As discussed above, however, these oceanic assemblages almost certainly formed and were obducted on the Laurentian side of Iapetus; thus, for a causative link to work, the Norrbotten SNC would also have to be located on the Laurentian side.
The Jämtland SNC lacks a documented Neoproterozoic history, possibly because of a younger (<730 Ma) depositional age (Kirkland et al. 2011); however, the 458 Ma age of UHP metamorphism is conspicuously similar to the c. 460 Ma age for UHP metamorphism in the Tromsø Nappe (Corfu et al. 2003; Ravna et al. 2017), which is typically correlated with the HNC and RNC (Corfu et al. 2014) and their inferred Laurentian heritage. Based on the data from the Jämtland SNC and the Tromsø Nappe, Brueckner and van Roermund (2007) argued for coeval UHP metamorphism on both margins of Iapetus at this time, whereas Janák et al. (2012) argued that the structurally higher position of the Tromsø Nappe could reflect out-of-sequence thrusting, resulting in a 'mismatch' between tectonostratigraphic position and provenance. However, probable Ordovician eclogites are also known from the Newfoundland Appalachians (Jamieson 1990), suggesting that there is no reason per se that the Tromsø Nappe eclogites could not have formed near the Laurentian margin. The 460 Ma age for UHP metamorphism in the Jämtland SNC is interesting as it matches a period of reduced magmatic activity, between 460 and 450 Ma, in the HNC (Barnes et al. 2007), which may have been related to crustal thickening, possibly as a result of collision with a microcontinent. Thus, a case can be made that tectonothermal events recorded in the Jämtland and Norrbotten SNCs can be correlated with events in units that preserve geological and fossil evidence of formation at or outboard of the Laurentian margin.
Exotic components in the Scandinavian Caledonides and their origin
Classically, most units of the Scandinavian Caledonides were thought to have formed prior to, and synchronously with, the generation and destruction of the Iapetus Ocean between Laurentia and Baltica (Roberts 2003). Units of Gondwanan heritage, although well established in the British and Irish Caledonides and the Appalachians, were not considered to be components of the Scandinavian margin until rather more recently (Corfu et al. 2007). A brief summary of the Neoproterozoic and Early Paleozoic evolution of the 'Iapetus-facing' margins of Baltica, Laurentia and Gondwana is presented below, and is also shown schematically in Figure 11. This information highlights possible origins for some of the units making up the Scandinavian Caledonides. The positions and orientations of the continents essentially follow those proposed by Domeier (2016), but we highlight that complexities related to the choice of pole for Baltica exist (e.g. McCausland et al. 2007).
Iapetus break-up starting at c. 750-600 Ma is commonly understood with reference to the c. 1000 Ma Rodinia supercontinent (Li et al. 2008); however, although the proximity of Laurentia and Gondwana in Rodinia is relatively well established, the position and orientation of Baltica is not (Hartz and Torsvik 2002; Slagstad et al. 2019). Neoproterozoic tectonomagmatic events in Baltica are sparse. Extension of unknown magnitude in SW Baltica (present-day coordinates) at 616 Ma (Bingen et al. 1998) and in central Baltica at c. 580 Ma (Meert et al. 1998), together with Neoproterozoic sedimentation (Nystuen et al. 2008), implies that the Baltican margin was relatively quiescent to extensional through the Neoproterozoic (Fig. 11a). Only an enigmatic Timanian orogenic event in NE Baltica at c. 600-560 Ma might imply localized tectonic activity in this section of the margin (Gee and Pease 2004; Pease et al. 2008). Thus, Neoproterozoic tectonometamorphic activity in parts of the KNC, which reached temperatures and pressures of at least 750-800°C and 10-11 kbar at c. 710 Ma (Kirkland et al. 2016), is not readily explained as having taken place on Baltica. Slagstad et al. (2019) suggested that tectonic activity may have continued outboard of Baltica following retreat of the Sveconorwegian active margin; however, at present, there is little direct evidence to support this idea. In contrast, numerous papers argue that parts of the Laurentian margin were active until Laurentia-Gondwana break-up at around 600 Ma (Kirkland et al. 2007; Cawood et al. 2010; Strachan et al. 2013). Rifting and drifting of Laurentia and Gondwana at c. 615-530 Ma caused widespread extension-related magmatism along the incipient Iapetan margin of Laurentia (see the summary in McCausland et al. 2007); importantly, the duration of extension-related magmatism provides a nearly perfect match to the 615-525 Ma magmatic activity recorded in the Rödingsfjället, Kalak and Seve nappe complexes (Fig. 11a). In addition, Laurentia-derived units currently located in the northern Appalachians record Neoproterozoic (c. 765-680 Ma) magmatism interpreted, at least in part, to reflect extension, possibly related to incipient attempted rifting (Cawood et al. 2001; Tollo et al. 2004); this evolution is similar to at least part of the Neoproterozoic history recorded in the Rödingsfjället and Kalak nappe complexes. Mafic dykes in the Kjerringfjell Group, now thoroughly amphibolitized, must have intruded after c. 1030 Ma and before high-grade metamorphism at 623 Ma, and may record similar activity.
In Gondwana (Avalonia), a convergent-margin setting is recorded around 765 Ma by the appearance of juvenile arcs indicating subduction of oceanic lithosphere (Murphy et al. 2013). Arc magmatism probably continued until at least 540 Ma and records both contractional and extensional periods, and northwards drift of peri-Gondwanan fragments (illustrated by Avalonia and Ganderia) across the Iapetus probably started in the Early Ordovician (Fig. 11b) (Linnemann et al. 2008). These rocks are currently located in the Appalachians and UK Caledonides, where they accreted around 450-440 Ma or slightly thereafter, shortly before final closure of the Iapetus Ocean. The Laurentian margin was almost certainly active from the Late Cambrian through to the Ordovician (Fig. 11c) (van Staal et al. 1998; Lissenberg et al. 2005b; Zagorevski et al. 2006), with formation and accretion of arcs, ophiolites and rifted continental fragments of both peri-Laurentian and peri-Gondwanan origin, including episodes of HP metamorphism (Jamieson 1990).
The Appalachian-Caledonian Orogen is characterized by long, linear features that can be traced for up to several hundred kilometres. This linearity may, at least in part, be a result of one or several oblique accretionary events, both prior to, as well as during, final continent-continent collision. As discussed by van Staal et al. (1998), such oblique collision may result in a misleadingly simple linearity, concealing complexities that may render unique reconstructions all but impossible. Several workers have argued that oblique accretion and collision in the northern Appalachians and UK-Scandinavian Caledonides resulted in major sinistral shearing (Soper and Hutton 1984; Soper et al. 1992), possibly resulting in translation of nappes over distances of a few thousand kilometres (Pettersson et al. 2010). Although translation of nappes over such distances is not strictly required by the model presented in Figure 11, it provides an appealing process for transporting units that resemble those of the northern Appalachians and UK Caledonides to the northern parts of the Scandinavian Caledonides (Fig. 11d, e). Following sinistral shear, the collision may have become more orthogonal (Soper et al. 1992), possibly obscuring evidence of earlier lateral shearing.
The rocks in the Upper and Uppermost allochthons in the Scandinavian Caledonides, including at least parts of the Kalak and Seve nappe complexes, record a very similar tectonometamorphic and tectonomagmatic evolution to that observed in units formed at or outboard of the Laurentian and Gondwanan margins. Thus, in the absence of clear evidence of an active Baltican margin, these similarities warrant an interpretation where large parts of the Scandinavian Caledonides are exotic rather than endemic to Baltica, as we have illustrated in Figure 1a.
Conclusions
New U-Pb zircon geochronology from the Rödingsfjället Nappe Complex (RNC) reveals a record of Late Neoproterozoic, high-grade metamorphism followed by continental rifting and mafic magmatism at c. 575 Ma, with continued intermittent magmatic activity at c. 540, 510-500, 480 and 465 Ma. Dating of syntectonic pegmatite at c. 515 and 475 Ma demands pre-Scandian thrusting, as well as later Scandian thrusting at c. 430 Ma. Such constraints on deformation show that potentially significant nappe stacking had taken place well before terminal continent-continent collision.
The RNC has a tectonomagmatic history that prompts correlation with peri-Laurentian and/or peri-Gondwanan terranes, consistent with the nappes' high tectonostratigraphic level and Laurentian faunal assemblages.
Crystallization of the c. 575 Ma Umbukta gabbro was coeval with emplacement of the Seiland Igneous Province in the Kalak Nappe Complex (KNC). These magmatic units have strikingly similar and distinct geochemical and isotopic signatures, which, together with a comparable Neoproterozoic tectonic history recorded in their host rocks, present a robust basis for correlating the KNC with units at high tectonostratigraphic levels within the assembled nappe pile.
Components of the Seve Nappe Complex (SNC) preserve a comparable tectonic history to the KNC, suggesting that these nappes may all be exotic to Baltica.
The paradigm of Scandinavian Caledonide tectonostratigraphic position is steeped in connotations of palaeogeographical derivation, yet much of the conventionally considered Middle Allochthon, such as the SNC, is more robustly correlated with units of undisputed exotic origin.
Acknowledgements
Reviewers Deta Gasser and Fernando Corfu provided insightful comments and constructive criticism that helped clarify many of our arguments. Some of the work presented herein was conducted as part of the CAMOC (Centre for Advanced Mineral and Ore Characterisation) collaboration between the NGU and the Department of Geoscience and Petroleum at NTNU; this is CAMOC contribution No. 3.

Data availability statement
All data generated or analysed during this study are included in this published article (and its supplementary information files).
On the Detection Possibility of Extragalactic Objects ’ Redshift Change
This paper describes the detailed calculations of the expected redshift change in the light of galaxies or other distant objects after a certain amount of time between observations has elapsed. The detection of this phenomenon has been proposed since Hubble's discovery of the dependence of galaxies' redshift on their distance from Earth and their significant recession velocities. Various astrophysicists have performed such calculations for several cosmological models of the Universe, but not for the model introduced by the author of this paper. This is now addressed in this publication.
Introduction
Several astrophysicists have proposed measuring the change in galaxies' redshift with the elapsed time of observations. The most comprehensive and early evaluation of this phenomenon was published by Sandage (1962, 2010), with possibly the earliest version suggested by Tolman (1930, 1934). Several recent publications on this subject are by Loeb (1998) and by Lerner, Falomo, and Scarpa (2015). However, all of these publications use some variation of the classical models of the Universe, based either on the mainstream Big Bang (BB) model or on the Newtonian flat-space model.
In a previous publication, Hynecek (2012a) introduced a new model of the Universe that assumes the Universe is finite in size and filled with a repulsive and deformable Dark (transparent) Matter (DM). The DM is repulsive to visible radiating matter but attractive to itself. In this model the galaxies are treated only as small test bodies floating from the bulk of the Universe to the edge, where they explode and generate the well-known immense Gamma Ray Bursts (GRBs) detected here on Earth. The GRBs that are reflected back to the bulk of the Universe then contribute to the generation of new matter. This model is in line with the theory proposed by Hoyle, Burbidge & Narlikar (2000) of a steady-state Universe with constant matter creation. The new matter then condenses to stars and eventually to new galaxies, endlessly repeating the cycle of creation and destruction. The residue of the galaxies' explosions at the edge of the Universe also generates the Cosmic Microwave Background Radiation (CMBR), with its temperature of 2.725 K. The repulsive DM density is very small, but its gravitational effects dominate the visible matter gravitation at large distances. The gravitational field of galaxies is thus compensated and screened by the DM after a certain distance. The galaxies thus for the most part move in this Universe independently of each other.
One of the significant contributions of this model to the theory of the Universe, in comparison to other models, in particular the BB model, is the derivation of the relation between the Hubble constant and the CMBR temperature (Hynecek, 2013). These two parameters are typically considered independent of each other. In this model they have been found to be dependent, and one can be derived from the other. This is only possible in a finite and thermodynamically enclosed model such as this one, and not in open models such as the BB.
The details of the model's mathematical background and its excellent agreement with various observations have been published earlier by Hynecek (2012a, 2014). The important equations needed for the derivation of the redshift time dependence are presented in the next section.
Mathematical Background
Since the long-range gravitational effects of the visible radiating matter and all of the radiation can be neglected, the space-time metric can be considered static, spherically symmetric, and described by a differential metric line element (Hynecek, 2011, 2012b; Equation 1) in which dΩ^2 = dϑ^2 + sin^2ϑ dφ^2, g_tt = exp(2φ_v), g_tt g_rr = 1, and c is the local intergalactic speed of light. The cosmological Newton gravitational potential for the visible matter, φ_v, normalized to c^2, is calculated using the well-known equation (Equation 2), where κ is the Newton gravitational constant. Due to the deformation of the observed natural radius r by the DM gravity, the physical radius ρ(r) must be used in the formula instead of r; this parameter is found from the differential equation that follows from the metric (Equation 3). Since any particular galaxy now represents only a small test body in this Universe, the well-known and many-times-verified Lagrange formalism can be used to describe the motion of such galaxies (Equation 4). For purely radial motion the Lagrangian can be simplified, and the first integrals of the corresponding Euler-Lagrange equations (Equations 5 and 6) are easily found using the initial condition at the origin, where the recession velocity is zero and where dτ = dt. Eliminating the non-observable dτ from these first integrals then leads to the formula for the recession velocity (Equation 7). For relatively near objects, where the cosmological gravitational potential for the visible matter φ_v is still small, it holds that ρ(r) ~ r and m(ρ) = m_0, which simplifies Equation 7 (Equation 8). From this result it is clear that the recession velocity is linearly proportional to the natural coordinate distance r of such nearby objects from the origin, and that the Hubble constant H_0 is related to the DM density m_0 at the origin (Equation 9). The recession velocity and the Hubble constant are referenced to the DM coordinate system, so the value of the Hubble constant should be corrected and referenced to the Earth-centered coordinate system, from where it is actually measured. However, the correction is very small and will be neglected, since Earth and its Milky Way galaxy are located relatively near the center of the Universe in comparison to its immense size.
In order to proceed further in the model description it is necessary to find the relation for the DM density m(ρ) as a function of the physical radius. This is obtained by adapting the well-known approach described, for example, by Zel'dovich and Novikov (2011), where the DM pressure gradient is expressed as a function of the physical radial distance (Equation 10). After substituting for the DM pressure and defining the normalized mass density function m_n(ρ) = m(ρ)/m_0, Equation 10 can be rearranged with the help of the Green's function (Equation 11), where A_0 is a constant equal to A_0 = 4πκm_0/c^2. There is no known analytic closed-form solution for this equation, so it is necessary to use a numerical iterative approach or to find an approximating function. The approximating-function approach was selected for the next steps, to avoid very long computing times during iterations. The selected function, however, underestimates the true value of the DM mass density at large ρ, but the error has only a small overall effect. The first two iterations and the approximating function (Equation 12) are shown in the graph in Figure 1.

Figure 1. The graphs of the first two iterations (dashed and dot-dashed lines) and the approximating function describing the dark matter mass density as a function of the physical radius, where x = ρ/ρ_h. The introduced parameter ρ_h = 2c/H_0 is called the Hubble distance or the Hubble physical radius.

Another advantage of using the approximating function is that the DM concentration tail extending past the maximum radial distance can be easily cut off by suitably truncating the power-series expansion in the exponent. This feature is advantageous if it is considered that the visible matter debris from explosions of galaxies accumulates at the edge of the Universe and forms a loosely bound shell there. Of course, it is possible to add more terms than shown in Equation 12; however, this will not be pursued any further in this paper, since the accuracy of the approximation was found reasonable.
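As a quick numerical aside, using the value H_0 = 68.0 km/s/Mpc adopted later in this paper, and taking c ≈ 2.998 × 10^5 km/s for the light speed near the origin (an assumption for this estimate, since c varies with distance in this model), the Hubble physical radius evaluates to

\[
\rho_h \;=\; \frac{2c}{H_0} \;\approx\; \frac{2 \times 2.998\times 10^{5}\ \mathrm{km\,s^{-1}}}{68.0\ \mathrm{km\,s^{-1}\,Mpc^{-1}}} \;\approx\; 8.8\times 10^{3}\ \mathrm{Mpc} \;\approx\; 2.9\times 10^{10}\ \mathrm{Ly},
\]

which sets the scale of the dimensionless coordinate x = ρ/ρ_h used in Figure 1.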
Once the mass density function is known, it is easy to find the normalized gravitational potential for the visible matter using the formula in Equation 2, and for the dark matter using the Green's function formula derived also from the Gauss law. Both potentials are plotted in the graphs shown in Figure 2.
In the next step of the model description it is necessary to find the formula for the Z shift, since this is the parameter that is directly measured by astronomers. The Z shift typically consists of three components: the star-gravity-induced redshift, the cosmological-potential-induced redshift, and the Doppler redshift resulting from the recession velocity. The star-gravity-induced redshift does not have to be considered here, since after the star or galaxy explosions have occurred, when the Supernova explosions or the GRB data are compiled, most of the principal source of the gravitational field has been converted to radiation and radiated away, and only the remnants or the afterglow produce the light that is observed. The cosmological-potential-induced redshift does not have to be considered either, since in this model the galaxies are in a radial free fall and this compensates for the shift.
Figure 2. The graphs of dependencies of normalized gravitational potentials for the visible matter (solid line) and for the dark matter as functions of the physical radius. The integration constants were adjusted such that the potentials at infinity are zero.

The only remaining redshift component is thus the Doppler redshift resulting from the radial recession velocity v_r. The Doppler redshift observed on Earth is given by Equation 14, where c_r indicates the light speed at the galaxy location in reference to Earth. The graph of the Z dependency on the natural coordinate radius r is shown in Figure 3.
The radial distance r, also called in this paper the natural radial distance, which is the observable parameter, is calculated from Equation 3 (Equation 15). To complete the model description, the graphs of the galaxies' recession velocity and the speed of light as functions of the natural radial distance r are shown in Figure 4.
Figure 4. The graphs of the speed of light (dashed line) and the galaxies' recession velocity, in km/s, as functions of the natural radial distance r from the center of the Universe, in light-years.
Derivation of the Redshift Rate Change
The redshift Z dependency on time can be found by first differentiating Equation 14 with respect to distance and then differentiating the distance with respect to time. The result of differentiating Equation 14 with respect to distance is Equation 16; the differential of the physical distance with respect to the natural time t, found from Equation 3 and Equation 7, is Equation 17.
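Spelled out, this two-step differentiation is simply the chain rule, written here with the physical distance ρ as the intermediate variable (an inference from the wording above):

\[
\frac{dZ}{dt} \;=\; \frac{dZ}{d\rho}\,\frac{d\rho}{dt},
\]

with the first factor supplied by Equation 16 and the second by Equation 17.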
By combining Equations 16 and 17, the result for the rate of change of the redshift Z is obtained (Equation 18). To proceed further it is useful to find the expression for the potential of the visible matter in terms of the redshift Z; this is obtained again from Equation 14 (Equation 19). Using this formula in Equation 18 results in an expression suitable for numerical evaluation (Equation 20). It is also possible to find an approximation for the nearby galaxies, where the redshift is still small. For this case the DM density is still reasonably constant, and Equation 2 then yields simplified expressions (Equations 21 and 22). By combining these formulas, using also the logarithm of the formula in Equation 19 to substitute for the visible matter potential, the small-Z approximation for the redshift rate change is obtained (Equation 23). Both of these equations, 20 and 23, were graphically evaluated using convenient Mathcad 15 numerical and symbolic calculations, and the results are plotted in the graphs in Figure 5. The Hubble constant used in all these calculations was H_0 = 68.0 km/s/Mpc (Keel, 2007). This value is in excellent agreement with the value derived by Hynecek (2012a, 2013) from the precisely measured CMBR temperature. The previously derived formula relating these two parameters involves the Boltzmann constant k_B, the Planck constant h, and the visible-matter cosmological gravitational potential φ_v = -1.7436, calculated using Equations 2 and 12 at the Universe's edge. The time interval between the two consecutive observations was selected to be 10 years. From the graphs it is clearly seen that for small Z, less than unity, the approximation of constant DM density is reasonable, but it fails for larger values. The interesting result is that there is an optimum redshift range, 3 < Z < 4, where the redshift rate change is the largest. The observations should therefore focus on such cosmological objects. The galaxy deceleration, together with the slower speed of light at large distances, causes the redshift change rate for large-Z objects to be lower, eventually dropping to zero at the maximum Z shift, Z_mx = 10.35. This finding may seem somewhat counterintuitive, since one would expect a larger rate for larger redshifts. This result is an unavoidable consequence of the finite-size model of the Universe that is curved by the DM gravity. Such an interesting finding will most likely not be the same for the BB model or any other similar Universe model derived from the BB theory and, therefore, could be used to confirm the veracity of a particular model.
For better clarity, another plot of the redshift rate change, as a function of the natural radial distance r, is shown in Figure 6. From this graph it is clear that the redshift rate drops to zero at the edge of the Universe, even though the redshift itself is the largest there.
Figure 6. The plot of the dependency of the redshift change for the 10-year time period between observations as a function of the natural radial distance r, in light-years.
Discussion
It is clear from the graph in Figure 5 that the maximum of the redshift rate that can be observed is not much larger than 10^-9 per 10 years. This is an extremely small value that may be difficult to detect. However, with the unprecedented progress in observational astronomy and in the long-term stability of atomic clocks, the detection might perhaps be possible. It is, of course, necessary to also subtract the rate changes due to the various peculiar motions, such as Earth's motion in its orbit around the Sun, the Sun's motion in our galaxy, and our galaxy's motion in the Universe, but this should not present an insurmountable problem. The reference for subtraction can be either the local galaxy cluster background or the CMBR. Another problem could be the galaxies' intrinsic redshift change, but this effect could also be subtracted by observing nearby similar galaxies or selected objects. An advantage of the described measurement analysis is that it involves only one variable, the redshift, which can be relatively precisely detected in the light spectrum of distant objects of the Universe.
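For comparison with these model-specific rates, the corresponding drift in the conventional expanding-universe picture (the effect proposed by Sandage and Loeb, cited in the Introduction) is given by ż = (1 + z)H_0 − H(z). A minimal Python sketch of its magnitude over the same 10-year baseline follows; the flat-ΛCDM density parameters Ω_m = 0.3 and Ω_Λ = 0.7 are assumed values, not taken from this paper:

```python
# Magnitude of the redshift drift over a 10-year baseline in a flat
# LCDM universe, via the Sandage-Loeb relation zdot = (1+z)*H0 - H(z).
# Omega_m and Omega_L below are assumed, illustrative values.
import math

H0_KM_S_MPC = 68.0                      # Hubble constant used in this paper
MPC_KM = 3.0857e19                      # kilometres per megaparsec
H0 = H0_KM_S_MPC / MPC_KM               # H0 in 1/s
OMEGA_M, OMEGA_L = 0.3, 0.7             # assumed density parameters
TEN_YEARS = 10 * 365.25 * 24 * 3600     # 10 years in seconds

def hubble(z):
    """H(z) for a flat matter + Lambda universe."""
    return H0 * math.sqrt(OMEGA_M * (1 + z) ** 3 + OMEGA_L)

def dz_over_10_years(z):
    """Redshift drift accumulated over a 10-year baseline."""
    zdot = (1 + z) * H0 - hubble(z)     # drift rate in 1/s
    return zdot * TEN_YEARS

for z in (0.5, 1, 2, 3, 4, 5):
    print(f"z = {z}: dz(10 yr) = {dz_over_10_years(z):+.2e}")
```

With H_0 = 68.0 km/s/Mpc this also comes out at the 10^-10 to 10^-9 level per decade, so either class of model predicts a signal at the edge of current instrumental stability, and the sign and Z dependence of the drift would discriminate between them.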
The successful detection of the redshift rate maximum and the confirmation of the redshift rate dependency on the Z shift, as shown in Figure 5, would represent yet another confirmation of the correctness of the finite-size model of the Universe, in addition to the determination of the precise value of the Hubble constant from the CMBR temperature, and would prove the BB dogma, derived from a simplistic extrapolation of observations, to be wrong.
Conclusions
This article described the detailed calculations of the expected redshift change in the galaxies' light after a certain amount of time between observations has elapsed. The derivation was made for a new model of the Universe. The maximum value of the redshift rate was also derived, suggesting the best redshifts at which the observations should be carried out. The possibility of detecting this phenomenon has been discussed since Hubble's discovery of the dependence of galaxies' redshift on their distance from Earth and their significant recession velocities. The detection of this effect would represent a dramatic confirmation of an alternate model of the Universe that is static, finite in space, and infinite in time. A Universe without a Big Bang was also recently proposed by Ali and Das (2014).
Figure 3. The graph of the dependency of redshift on the natural radial distance r. The maximum redshift that can be observed is Z_mx = 10.35. The visible matter does not exist at distances larger than r_mx = 11.22 × 10^9 Ly, since it disintegrates at the Universe's edge.
Figure 5. The plots of the dependency of the redshift change for the 10-year time period between observations as a function of the galaxy redshift Z. The dotted line is the Z < 1 approximation.
Preliminary Examination of Olanzapine and Diet Interactions on Metabolism in a Female Macaque
1 Divisions of Reproductive and Developmental Sciences, Oregon National Primate Research Center, Beaverton, OR 97006, USA; 2 Division of Diabetes, Obesity, and Metabolism, Oregon National Primate Research Center, Beaverton, OR 97006, USA; 3 Department of Obstetrics and Gynecology, Oregon Health and Science University, Portland, OR 97201, USA
Abstract
Clinical data suggest that atypical antipsychotics such as Olanzapine (OLZ) induce significant metabolic changes that are serious side effects of their primary use. Since controlled human studies are problematic and rodent data may be poorly translatable, we have initiated development of a macaque model of OLZ-induced metabolic disease. In this preliminary feasibility study, we examined some metabolic effects of OLZ in a female macaque in the context of a standard low-calorie/fat monkey chow diet followed by a high-fat/sugar western-style diet (WSD). A female Japanese macaque was administered OLZ (1.25 mg/day) for 6 months, with dietary changes at 2-month intervals as follows: OLZ + Restricted chow, OLZ + Unrestricted chow, OLZ + WSD, and Placebo + WSD. Weight was assessed weekly. Glucose tolerance tests (GTT) and dexascans were performed at baseline and every 2 months. Omental (OM) and subcutaneous (SQ) adipose tissue biopsies were obtained at baseline, after OLZ + Unrestricted chow, and after OLZ + WSD to evaluate adipocyte size, lipolysis, and insulin-stimulated free fatty acid (FFA) uptake. A separate trial was conducted on two monkeys with 5 days of OLZ or no treatment, followed by RT-PCR on rostral and medial basal hypothalamus. Weight increased on administering OLZ + Restricted chow and stabilized on administering OLZ + Unrestricted chow. The OLZ + WSD diet did not significantly change the weight plateau. Weight declined upon withdrawal of OLZ with continued WSD. Body fat increased from 14% at baseline to 22%, 30%, 28% and 19% at 2, 4, 6 and 8 months (mo), respectively, indicating that body fat was elevated on OLZ administration regardless of diet and declined upon OLZ removal. Glucose tolerance and the insulin response during GTT were normal with OLZ + Restricted chow or OLZ + Unrestricted chow diets. Addition of WSD with OLZ impaired glucose clearance during GTT. Insulin remained in the normal range, but first-phase insulin secretion was reduced. After removal of OLZ, but continued WSD administration, glucose clearance returned to normal; however, this was associated with hyperinsulinemia. Adipocyte diameter was increased in OM and SQ fat by OLZ + chow and OLZ + WSD to a similar extent (p < 0.01, 2-way ANOVA). In OM, isoproterenol-stimulated lipolysis occurred at baseline. In both depots, isoproterenol-stimulated lipolysis occurred with OLZ + chow, but it was significantly blunted by addition of WSD (ANOVA p < 0.0001; post hoc p < 0.05). Insulin increased FFA uptake at baseline. OLZ + chow or OLZ + WSD increased basal FFA uptake, but insulin-induced FFA uptake was blunted in both depots (post hoc p < 0.05).

Introduction
The twin epidemics of obesity and diabetes threaten to overwhelm healthcare systems in the U.S. and across all other parts of the world [1]. Specifically, 2/3 of the US population is overweight or obese, while 40% exhibit symptoms of pre- or frank diabetes, and the combined direct and indirect costs of obesity and diabetes are now approaching $500 billion a year. Thus, a better understanding of factors that contribute to their incidence is critical in order to manage this major public health issue. Important contributors to obesity and diabetes are lack of exercise and consumption of a high-fat/calorie Western-Style Diet (WSD). Compounding those factors in a subset of mentally ill patients is the use of Second-Generation Antipsychotics (SGAs), which are known to induce weight gain and to exacerbate risk for metabolic disease and diabetes, especially in youth.

The patients most often prescribed SGAs have schizophrenia. Schizophrenia is a serious, non-curable mental illness with high morbidity and premature mortality [2]. It is generally estimated that today only approximately 10% to 15% of people who have schizophrenia are able to maintain full-time employment of any type, even with medication. The total indirect excess costs in the US were estimated to be $32.4 billion in 2005 [3]. Adding metabolic disease on top of mental illness may be devastating, and it decreases compliance. Currently, psychiatrists prescribe SGAs, while these patients typically see an endocrinologist separately to manage their metabolic disease.
Nonetheless, the development and widespread use of SGAs has significantly improved the treatment and management of schizophrenia, without the extra-pyramidal side effects of first-generation antipsychotics such as haloperidol [4]. The older antipsychotics mainly acted as antagonists of D2 dopamine receptors [5]. SGAs exhibit lesser action at dopamine receptors and act more as serotonin 5HT2A/C receptor antagonists [6-9]. One widely used SGA is olanzapine (OLZ), which remains one of the most efficacious psychiatric medications in spite of its nearly universal metabolic side effects [10]. The SGAs ziprasidone and aripiprazole have reduced metabolic side effects, but also poorer scores on tests of positive and negative symptom relief [11,12]. Often ignored, however, is the fact that in the US, OLZ is taken by patients who typically consume a high-calorie/fat WSD. Little is known of the interaction between WSD and SGAs, and this variable is nearly impossible to test in humans.
Although OLZ may act at a number of receptors, the paramount role of the 5HT2C receptor in the hypothalamic feeding and satiety neuronal systems is undisputed [13]. Thus, the antagonism of the hypothalamic 5HT2C receptor by OLZ probably plays a pivotal role. Most studies of the peripheral effects of OLZ in the area of metabolism have been conducted with rodents, but OLZ treatment has markedly different effects in rats and humans. OLZ regulates insulin secretion in islets and increases serum glucose in rats and humans, but there were serious inconsistencies in effects on weight gain, lipid concentrations, and leptin levels in drug-treated rats [14,15]. Male rats showed an increased preference for a high-fat or -sugar diet, but did not exhibit greater weight gain than diet-matched controls [14], although female rats appear to be more sensitive to OLZ-induced weight gain [15]. However, there are no sex differences in humans. Thus, the absence of reliable effects of SGAs in rats indicates the poor predictive value of the rodent models.
Nonhuman primates (NHPs), however, are an exceptional model for human neuropsychiatry and metabolism. Monkeys in captivity eat monkey chow, which is very low in fat and sugar and high in micronutrients. We hypothesize that OLZ acts through both central and peripheral mechanisms that involve antagonism of 5HT2C receptors. We further hypothesize that the metabolic reactions to OLZ differ depending on diet. We have collected preliminary data from one longitudinally treated macaque with multiple weighings, glucose tolerance tests (GTTs), dexascans, and adipose biopsies. The monkey responded to OLZ in a manner similar to humans, and the metabolic responses to OLZ differed significantly between a chow diet and a WSD. It appears that macaques are an excellent model for revealing the mechanisms by which OLZ induces metabolic syndrome in patients.
Materials and Methods
This experiment was approved by the IACUC of the Oregon National Primate Research Center and conducted in accordance with the NIH guidelines (Guide for the Care and Use of Laboratory Animals, Eighth Edition, NRC 2011).
Animal
An adult female Japanese macaque (Macaca fuscata) was housed in a large double cage for the duration of this study. The animal was maintained at a healthy weight and metabolic status, with a restricted, but adequate, diet of normal monkey chow (hereafter called 'chow') supplemented with fresh fruits and vegetables. Water was available ad libitum. Research veterinarians monitored the monkey continually.
Diets
Chow (Lab Diet 5000) contains 4.94 kcal/g, and calories are distributed as 27% from protein, 14% from fat, and 58% from carbohydrates. Of the total 2.87 kcal/g of carbohydrates, 26% is derived from starch and 2.8% from sucrose. WSD (Lab Diet, TAD 5L0P) contains 4.94 kcal/g, and the calories are distributed as 18% protein, 36% fat, and 46% carbohydrates. Of the total 3.36 kcal/g of carbohydrates, 19.5% is derived from starch, 8.81% from sucrose, 4.41% from glucose, and 4.61% from lactose.
Protocol
The animal was weighed weekly throughout the protocol. Prior to any medication or diet manipulations, baseline GTT, dexascan, and adipose tissue biopsies were obtained. After completing and recovering from biopsy surgery (described below), OLZ was administered at 1.25 mg/day by placing the dissolving form in a food treat, based on a human dose of 5 mg/day for a 70-kg human. After 2 mo of OLZ + Restricted chow, another GTT and dexascan were obtained, and the diet was changed to unrestricted monkey chow. This was accomplished by always maintaining extra chow biscuits in the cage. After 2 mo of OLZ + Unrestricted chow, another GTT, dexascan, and a second biopsy were obtained. Upon recovery from biopsy surgery, the food was changed to WSD. After 2 mo of OLZ + WSD, another GTT, dexascan, and a final (third) biopsy were obtained. She was then withdrawn from OLZ and maintained for an additional 2 mo on WSD alone. After 2 mo of WSD alone, another GTT and dexascan were obtained. The animal was then released from the protocol and returned to normal monkey chow.
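For reference, the phase structure just described can be summarized compactly. The listing below merely restates the paper's schedule (baseline GTT, dexascan, and biopsy 1 precede month 0); it adds no new information:

```python
# Restatement of the 8-month longitudinal design as (phase, span,
# end-of-phase assessments). Phase labels follow the paper's wording.
protocol = [
    ("OLZ + Restricted chow",   "months 0-2", ["GTT", "dexascan"]),
    ("OLZ + Unrestricted chow", "months 2-4", ["GTT", "dexascan", "biopsy 2"]),
    ("OLZ + WSD",               "months 4-6", ["GTT", "dexascan", "biopsy 3"]),
    ("Placebo + WSD",           "months 6-8", ["GTT", "dexascan"]),
]
for phase, span, tests in protocol:
    print(f"{span}: {phase} -> end-of-phase assessments: {', '.join(tests)}")
```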
Euthanasia
After 6 months on normal monkey chow, the same animal was administered OLZ for 5 days and euthanized by an expert veterinary pathologist in accordance with the American Veterinary Medical Association guidelines (AVMA Guidelines for the Euthanasia of Animals, 2013 edition). She was transported to the necropsy suite under sedation, given an overdose of pentobarbital (30 mg/kg, i.v.; Hospira, Lake Forest, IL), and exsanguinated with severance of the descending aorta. The brain was harvested, blocked, frozen in liquid N2, and stored at -80 °C. An additional hypothalamus was obtained from an untreated female Japanese macaque that was euthanized for clinical reasons.
Biopsy
All surgical procedures were conducted by trained Surgical Services Unit personnel under the supervision of surgical veterinarians in dedicated surgical facilities, using aseptic techniques and comprehensive physiological monitoring. All procedures took place in the operating rooms of the Surgical Services Unit. The animal had 22-gauge cephalic catheters placed and was intubated in dorsal recumbency with a 4.0-6.0 ET tube. The animal was administered intravenous fluids (Lactated Ringers solution, 10 ml/kg/hr). Continuous monitoring was performed during the surgery for body temperature via esophageal temperature probe; heart rate and pulse character (fast or slow) via pulse oximetry and electrocardiography; blood pressure via indirect blood pressure cuff or direct percutaneous arterial line; respiratory rate and pattern; end-tidal carbon dioxide; capillary refill time; absence of palpebral response to touching the medial canthus; jaw tone; and color of mucous membranes at gums or conjunctiva. A Verres needle was inserted via a 1-cm sub-umbilical skin incision, followed by insufflation to 15 mm Hg pressure with CO2 gas. The Verres needle was removed, and the 11-mm trocar/sheath and 10-mm telescope were inserted by puncture at the same site. A right paralumbar 5-mm accessory port was placed, through which a cutting biopsy grasper was inserted. Pinch biopsy forceps were used to retrieve two fat biopsies from the falciform ligament. Grasping forceps were used to grab a small section of omentum, which was pulled through the side port. A 1 × 2 × 1-cm block of omentum was removed via sharp and blunt dissection. The laparoscopic instruments were removed. A subcutaneous fat biopsy was then retrieved from the site of the scope incision. The tissue was placed in sterile culture medium for transport to the laboratory. The incisions were closed with interrupted 4-0 monocryl in the rectus fascia and skin. Recovery was on the OR table until extubation. Additional heat and oxygen support was provided as needed during the recovery period.
Adipose tissue explant protocol
The protocol for transport and processing of adipose tissue explants has been previously described [16-18]. The explants were incubated with and without insulin, in the presence of isoproterenol or free fatty acids (FFA). For isoproterenol-stimulated lipolysis studies, 100 ± 10 mg tissue explants were placed into a 24-well culture dish containing 0.5 mL incubation medium (phenol red-free DMEM [Invitrogen], 0.5% BSA [Sigma-Aldrich], 20 mM HEPES buffer [pH 7.4]) and incubated free-floating for 2 hours at 37°C in an atmosphere of 5% CO2. Glycerol release was determined using a colorimetric glycerol detection kit (Zen-Bio). For determination of FFA uptake, fluorescent FFAs (BODIPY-C12; Invitrogen) were added to the culture medium.

Data analysis
Statistical analyses were not performed on the data collected in vivo, since a single subject was studied for this preliminary examination and feasibility study. Multiple adipose explants were used for each measurement, enabling legitimate statistical comparisons. Differences between groups were determined with two-way ANOVA followed by Bonferroni post hoc pairwise comparisons using Prism version 5 (GraphPad Software, Inc., San Diego, CA).
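For readers without Prism, the same depot-by-treatment analysis can be reproduced with open-source tools. The sketch below uses the statsmodels formula interface; the column names and values are hypothetical, not the study's data, and Bonferroni pairwise tests would follow (e.g. via statsmodels.stats.multicomp.MultiComparison):

```python
# Hedged sketch: an open-source equivalent of the two-way ANOVA
# described above (depot x treatment, with interaction).
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "glycerol":  [1.2, 1.4, 2.1, 2.3, 0.9, 1.1, 1.8, 1.7],  # illustrative
    "depot":     ["OM"] * 4 + ["SQ"] * 4,
    "treatment": ["chow", "chow", "WSD", "WSD"] * 2,
})

# Two-way ANOVA with interaction: glycerol ~ depot * treatment
model = smf.ols("glycerol ~ C(depot) * C(treatment)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```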
Results
A longitudinal experimental design was implemented as described above. Figure 1 (top) illustrates the experimental design: the timing of OLZ and dietary treatments, adipose biopsies, GTTs, dexascans, and body weight. There was a rapid increase in weight on OLZ + Restricted chow that plateaued after OLZ + Unrestricted chow. Addition of WSD did not further increase weight. Unfortunately, there was a problem with palatability in this batch of WSD that was discovered subsequent to the experiment; therefore, this particular piece of the data must be interpreted with caution. Nonetheless, weight reached a plateau on OLZ + Unrestricted chow that may have been maintained, or even increased, with a more palatable WSD.

The Area Under the Curve (AUC) for glucose and insulin secretion from the GTT data are shown in Figure 1, middle and bottom panels, respectively. The raw GTT data are shown in Figure 2, and the quantitative values for each curve are shown in Table 2. Glucose and insulin AUCs did not change with OLZ + Restricted chow (baseline vs. 2 mo) or OLZ + Unrestricted chow (baseline vs. 4 mo). Nonetheless, after unrestricted chow, first-phase insulin secretion was blunted. Addition of WSD caused an increase in glucose AUC (baseline vs. 6 mo) and no change in the insulin AUC, but there was a marked change in first-phase insulin secretion, and second-phase insulin secretion was elevated and prolonged. After OLZ withdrawal, but continuation of WSD, weight and glucose AUC declined, accompanied by markedly increased insulin AUC (pre vs. 8 mo). Although insulin levels were elevated, the pattern of first- and second-phase insulin secretion appeared normal.
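The AUC values discussed here are conventionally obtained by numerical integration of the serial GTT samples. A minimal sketch of a trapezoidal-rule computation follows; the sample times and glucose values are hypothetical placeholders, not data from this study:

```python
# Trapezoidal-rule AUC for serial GTT samples (illustrative values only).
def trapezoid_auc(times_min, values):
    """AUC of (time, value) samples by the trapezoidal rule."""
    auc = 0.0
    for i in range(1, len(times_min)):
        dt = times_min[i] - times_min[i - 1]
        auc += 0.5 * (values[i] + values[i - 1]) * dt
    return auc

times = [0, 1, 3, 5, 10, 20, 30, 40, 60]            # minutes after glucose bolus
glucose = [60, 180, 160, 140, 120, 95, 80, 70, 65]  # mg/dL, hypothetical
print(f"glucose AUC = {trapezoid_auc(times, glucose):.0f} mg/dL*min")
```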
Examination of the GTT data (Figure 2) demonstrates in more detail that, at 2 mo after OLZ + Restricted chow, there was little change from baseline. However, with OLZ + Unrestricted chow (pre vs. 4 mo), there was an apparent decrease in first-phase insulin secretion. After 2 mo of OLZ + WSD (pre vs. 6 mo), both first- and second-phase insulin secretion were reduced, and this was associated with increased post-prandial glucose. AUC glucose was elevated (pre vs. 6 mo GTT, Figure 2), indicating that clearance declined. After withdrawal of OLZ, but maintenance on WSD (pre vs. 8 mo), the insulin secretory response was markedly elevated, and this hyperinsulinemia restored glucose disposal. Fasting glucose did not change with the different treatments.
Dexascans (Table 1) showed that % body fat increased from baseline with OLZ + Restricted chow and OLZ + Unrestricted chow. WSD did not change % body fat after 2 months. Removal of OLZ and continuation of the WSD reduced % body fat. In addition, dexascans showed that a predominant amount of fat was deposited in the midsection (Figure 3).
To evaluate adipose-specific effects, OM and SQ white adipose tissue biopsies were obtained at baseline (biopsy 1), after OLZ + Unrestricted chow (biopsy 2), and after OLZ + WSD (biopsy 3) to evaluate adipocyte cell size, lipolysis, and insulin-stimulated FFA uptake (red type in Figure 1). As shown in Figure 4, adipocyte size was significantly increased by OLZ in both depots, with SQ adipose tissue exhibiting a greater response, but WSD did not induce a further change.
Adipocyte hypertrophy (increased cell size) can affect lipolysis (represented by glycerol release into conditioned medium) under basal conditions or in response to the β-adrenergic agonist isoproterenol. At baseline, isoproterenol stimulated lipolysis in
OM, but not in SQ fat. Exposure to OLZ increased isoproterenol-stimulated lipolysis in both depots, but the in vivo addition of WSD with OLZ decreased isoproterenol-stimulated lipolysis in both depots (Figure 5).
As shown in Figure 6, insulin increased FFA uptake at baseline in both OM and SQ adipose tissue. OLZ + chow or OLZ + WSD increased basal FFA uptake in both OM and SQ adipose tissue (post hoc p < 0.05), with insulin no longer inducing an additional significant increase, suggesting the early development of adipose insulin resistance.
A separate preliminary trial was conducted on two monkeys with 5 days of OLZ or no treatment, followed by RT-PCR analysis of rostral preoptic (POA) and medial basal hypothalamus (MBH) gene expression (Figure 7). OLZ treatment decreased pro-opiomelanocortin (POMC) mRNA in the MBH, indicating that less alpha-melanocyte stimulating hormone (αMSH) would be produced and less satiety achieved. OLZ increased expression of neuropeptide Y (NPY) and agouti-related peptide (AgRP) mRNAs, both coding for appetite stimulants. OLZ also increased expression of 5HT2C, MCR4, and leptin receptor (LepR) mRNAs.
Discussion
The in vivo data from this study suggest that OLZ immediately increased appetite and consumption of chow, but that induction of whole-animal hyperglycemia required WSD as well. Withdrawal of OLZ allowed restoration of glucose clearance through increased insulin secretion. OLZ caused a rapid and marked change in hypothalamic gene expression related to satiety and feeding. POMC, the precursor to αMSH, which mediates satiety, was markedly downregulated. NPY and AgRP gene expression, which code for the feeding peptides NPY and AgRP, were clearly increased. In the anterior hypothalamus, where αMSH neurons project, there was an increase in gene expression for 5HT2C, MCR4, and LepR.
Adipocyte size was significantly increased with administration of OLZ for 4 months (mo) with a diet of normal monkey chow. Isoproterenol increased glycerol release when the animal was on monkey chow at baseline, or on OLZ + chow. However, with the addition of WSD to OLZ, isoproterenol-induced glycerol release was significantly reduced, indicating a loss of sensitivity to adrenergic stimulation. Insulin stimulated FFA uptake at baseline in both OM and SQ fat. After 4 mo of OLZ + chow, insulin did not stimulate FFA uptake over the elevated basal uptake in either depot. The elevated basal FFA uptake may be attributable to the increase in cell size that occurred over 4 mo of OLZ treatment with restricted or unrestricted chow. Thus, there was an indication of insulin resistance at the level of the adipocyte that preceded the hyperglycemia observed in the GTT after OLZ + WSD.
Clinical data show an increased diabetes risk in patients treated with clozapine or OLZ compared with untreated patients, and weight gain is more common in patients taking OLZ than with haloperidol or placebo [19]. Possible mechanisms include insulin resistance secondary to drug-induced weight gain, altered body fat distribution, or a direct effect on insulin-sensitive targets [20,21]. Our data support the involvement of all three. We have other data (not presented) showing that OLZ reduced insulin-stimulated FFA uptake and that this effect was reversed by addition of serotonin in 3T3-L1 adipocytes. Similar to our data obtained from experiments on the macaque monkey, OLZ reduced FFA uptake in rat peripheral tissues [22]. To date, studies in cell lines have shown negative effects of SGAs on adipocyte-type cell function, but this issue has not been adequately studied in humans with psychiatric illness [23].
An additional consequence of SGAs is their effect on serum lipids. Clozapine and OLZ, which produce the greatest weight gain, are associated with the greatest increase in levels of total cholesterol, LDL, and triglycerides, and with decreased HDL cholesterol. Measurements of lipids, adiponectin, or ghrelin were not obtained in this preliminary study, but there is reason to believe that the levels would be similar to those in humans. Future studies with more animals, and measurements of these and other endpoints, are awaiting funding. Hypothalamic 5HT2C receptors play a major role in feeding and satiety [13]. Clinically, serotonergic agonists such as fenfluramine/phentermine (fen-phen) and, more recently, lorcaserin, a 5HT2C agonist (Belviq; Eisai/Arena Pharmaceuticals), have been used for weight reduction in morbid obesity. Two independent serotonin systems are now known to exist, one in the brain and the other in the periphery. Serotonin is a well-known central nervous system (CNS) neurotransmitter that regulates feeding behavior, meal size, and body weight [24]. Briefly, two populations of neurons in the arcuate nucleus (ARC) play primary roles in the regulation of eating. Neuropeptide Y (NPY) and agouti-related peptide (AgRP) co-localizing neurons (NPY/AgRP) stimulate feeding, whereas POMC/CART co-localizing neurons mediate satiety and inhibit feeding [25-29]. Each of these populations is regulated by metabolic hormones such as insulin, leptin, and orexin, and by neurotransmitters such as serotonin [30]. POMC neurons produce the satiety peptide αMSH via post-translational processing. In rodents, different populations of POMC neurons express 5HT2C and LepRs, which, in turn, stimulate production and release of αMSH to decrease appetite as satiety is reached [24,31]. Our data indicate that by 5 days of OLZ treatment with a monkey chow diet, the expression of POMC is suppressed, whereas NPY and AgRP are markedly induced. Thus, OLZ has very rapid effects on the neural systems that govern feeding and satiety. There was also an increase in 5HT2C, MCR4, and LepR, possibly as an attempt to overcome OLZ antagonism and maintain homeostasis. Therefore, we hypothesize that the first action of OLZ is antagonism of 5HT2C receptors in hypothalamic neural systems, which causes an increase in appetite. The increase in 5HT2C receptor expression indicates that antagonism of the 5HT2C receptor by OLZ acts like serotonin denervation and consequently results in the classical increase in expression of the postsynaptic receptors. The increase in MCR4 suggests that a second classical homeostatic mechanism occurred to maintain satiety. Without serotonin stimulation of POMC neurons there would be decreased αMSH neurotransmission, which in turn leads to increased expression of MCR4. Interestingly, the increase in LepR occurred before weight gain and adiposity increased. This could be a third homeostatic mechanism, in which the sensitivity to leptin was increased to maintain POMC expression in the absence of serotonergic input. Nonetheless, these mechanisms were not sufficient to maintain normal body weight with continued administration of OLZ.
There is much less evidence for the role of serotonin as an endocrine hormone, particularly with respect to its effects on glucose and lipid metabolism. We now understand that murine pancreatic islets express serotonin system genes and that serotonin affects islet function [32-34]. Recent studies have shown that white adipose tissue also expresses serotonin receptors, tryptophan hydroxylase, and the serotonin reuptake transporter, and adipose cells secrete serotonin, which regulates leptin in mature adipocytes [35-37]. Furthermore, serotonin metabolites act as endogenous agonists for peroxisome proliferator-activated receptor (PPAR)-γ, and serotonin accelerates adipocyte differentiation via 5HT2A and C receptors [36,38]. Thus, serotonin acts in peripheral tissues central to insulin production and response.
Most studies of the peripheral effects of OLZ in the area of metabolism and serotonin have been conducted with rodents, but OLZ treatment has significantly different effects in rats and humans [14,15]. 5HT2A, 5HT2B, and 5HT2C receptors are expressed in adipose tissue [35,36,39]. Through these receptors, peripheral serotonin can modulate a variety of adipose functions, including adipocyte differentiation [36] and lipolysis [23]. By antagonizing serotonin action, OLZ and other SGAs impaired glucose and lipid disposal in fat and muscle [40], inhibited lipolysis, and increased lipogenesis in 3T3 adipocytes [23], consistent with our preliminary data. These properties of OLZ treatment correlate with the development of dyslipidemia and diabetes in the long term.
The results of the longitudinal GTTs indicated that induction of whole-animal hyperglycemia required the addition of WSD to OLZ, while withdrawal of OLZ allowed restoration of glucose clearance through increased insulin secretion. This observation raises the distinct possibility of a direct inhibitory action of OLZ on pancreatic β cells. Beta cells have several serotonin receptors, and insulin co-localizes with serotonin [41]. Knockout of TPH1, with inhibition of peripheral serotonin production, causes β cells to stop proliferating and leads to diabetes in adult mice [42]. It is attractive to speculate that serotonin is needed for an optimum response of β cells to glucose, and that in the absence or antagonism of serotonin, the insulin response to glucose is severely blunted. The exact mechanism of serotonin action on the β cell is unknown.
Our preliminary data suggest that the development of local adipocyte insulin resistance and adipocyte hypertrophy followed 4 mo of OLZ treatment, which occurred on unrestricted chow at a time when the whole-animal GTT was normal. The possibility exists that the adipocyte insulin resistance was due to adipocyte hypertrophy, which was a consequence of weight gain, rather than a direct effect of OLZ. The ability of serotonin to reverse the effect of OLZ on insulin-induced FFA uptake in 3T3-L1 cells argues against this possibility as being entirely responsible. A recent study by Teff et al. [43] demonstrated that short-term treatment of normal subjects with OLZ produced insulin resistance and hyperinsulinemia in conjunction with changes in glucagon and GLP-1 levels that were independent of weight gain. This observation further supports the notion that OLZ has immediate direct effects on peripheral tissues. On comparison of the cellular actions and in vivo actions of OLZ, it appeared that insulin resistance manifested at the level of the adipocyte prior to manifestation in the GTTs. It is possible that muscle use of glucose delayed the onset of hyperglycemia in the whole animal, although it has been suggested that SGAs may block the glucose transporter in muscle [44].
In summary, these preliminary data in our NHP model support the hypotheses that [i] OLZ acts on NHP peripheral tissues as well as in the CNS; [ii] rapid early changes in hypothalamic gene expression lead to decreased satiety and increased feeding that precede fat accumulation; [iii] adipose tissue exhibits insulin resistance prior to alterations in glucose tolerance and insulin secretion; [iv] addition of WSD to OLZ precipitates impaired glucose clearance without hyperinsulinemia; and [v] removal of OLZ with continued WSD results in normalized glucose tolerance and increased insulin secretion. Our data suggest complex and early responses to OLZ that may be exacerbated by WSD, and which need confirmation with more animals. In addition, better treatments need to be developed that block the metabolic effects of OLZ while maintaining its psychiatric benefits.
Figure 1: Illustration of the protocol for the preliminary data collection and the area under the curve for glucose and insulin in serial GTTs (circles). Biopsies are shown in red text and protocol changes are shown in blue text.

Figure 2: Serial GTTs. Dotted lines represent the baseline (before treatment) responses.

Figure 3: Dexascan pictures of the female Japanese macaque at baseline and after OLZ + Unrestricted chow. Note the increase in adiposity in the thoracic region.

Figure 4: Adipose cell size at [1] baseline, after [2] OLZ+chow, and after [3] OLZ+WSD. There was a significant effect of fat location and of biopsy number or in vivo treatment, as well as a significant interaction (all p < 0.0001, 2-way ANOVA). *Asterisks designate post hoc pair-wise differences (Newman-Keuls post hoc p < 0.05).

Figure 5: Isoproterenol-stimulated lipolysis at [1] baseline, [2] with OLZ+chow, and [3] with OLZ+WSD. There was a significant difference between the groups with OM and SQ explants (both p < 0.0001, ANOVA). *Asterisks designate post hoc pair-wise differences (Newman-Keuls post hoc p < 0.05).

Figure 6: FFA uptake at [1] baseline, [2] with OLZ+chow, and [3] with OLZ+WSD. There was a significant difference between the groups with OM and SQ explants (both p < 0.0001, ANOVA). *Asterisks designate post hoc pair-wise differences (Newman-Keuls post hoc p < 0.05).

Figure 7: Semi-quantitative RT-PCR analysis of transcripts in the medial basal hypothalamus (MBH) and rostral hypothalamic preoptic area (POA). An untreated control (Con) monkey was compared to a monkey treated with OLZ for 5 days.

Table 1: Summary of Dexascan results in the female Japanese macaque treated with and without OLZ and WSD in a longitudinal manner.

Table 2: Quantitative values (mg/dL) from the GTT curves at the two-month time-point from the start of each treatment.
Stem Cell Therapy in Critical Limb Ischemia
Critical limb ischemia (CLI), a serious outcome of peripheral artery disease, is frequently associated with morbid outcomes. The available treatment modalities do not provide satisfactory results, leading to marked morbidities such as joint contracture and amputations, resulting in a high economic burden. The peripheral vascular disease tends to cause more morbidity in patients with diabetes and atherosclerosis, given the pre-existing compromised perfusion of medium and small vessels in diabetic patients. With surgical procedures, the chance of vascular compromise further increases, inducing a significantly greater rate of amputation. Hence, the need for nonsurgical treatment modalities such as stem cell therapy (SCT), which promotes angiogenesis, is warranted. In CLI, SCT acts through neovascularization and the development of collateral arteries, which increases blood supply to the soft tissues of the ischemic limb, providing satisfactory outcomes. An electronic database search was performed in PubMed, SCOPUS, EMBASE, and ScienceDirect to identify published clinical trial data, research studies, and review articles on stem cell therapy in critical limb ischemia. The search resulted in a total of 2391 results. Screening for duplicate articles reduced this to 565 articles. In-depth screening of abstracts and research titles excluded 520 articles, yielding 45 articles suitable for full-text review. On review of the full text, articles with overlapping and similar results were filtered, ending in 25 articles. SCT promotes arteriogenesis, and bone marrow-derived mesenchymal stromal cells produce significant effects like reduced morbidity, improved amputation-free survival (AFS) rate, and improved distal perfusion even in "no-option" CLI patients. SCT is a promising treatment modality for CLI patients, even in those in whom endovascular and revascularization procedures are impossible. SCT assures a prolonged AFS rate, improved distal perfusion, improved walking distances, reduced amputation rates, and an increased survival rate, and is well-tolerated.
Introduction And Background
Critical limb ischemia (CLI) is a serious and potentially fatal form of peripheral artery disease (PAD), with little evidence of therapeutic success for the available treatment modalities [1,2]. Thromboembolism and atherosclerosis may induce CLI, with attendant short-term mortality and adverse cardiovascular effects [3,4]. CLI most commonly manifests in smokers as severe leg pain, ulcers, or a gangrenous toe. Despite advances in treatment, including surgical and interventional radiological procedures, patients still undergo major or minor amputation [5,6].
Approximately 20% of Americans over 65 and 50% of patients aged 75 and above are diagnosed with PAD [7], with eight million Americans diagnosed with limb ischemia [8]. In India, the incidence of PAD and/or CLI is two to three times higher, with 10-40% of patients requiring amputation [9] or developing gangrene [10]. Stem cell therapy is beneficial in this setting because it increases the number of new cells producing growth factors and stimulates neoangiogenesis [11][12][13].
Risk factors of PAD include: a) ankle-brachial index (ABI) <0.9, more prevalent in non-Hispanic blacks; b) hypertension; c) dyslipidemia; d) raised CRP in asymptomatic individuals; e) hyperviscosity and hypercoagulable states; f) hyperhomocysteinemia; and g) chronic renal insufficiency [14][15][16][17]. Revascularization is the cornerstone of treatment whenever possible, yet amputations and death remain common. Major amputation rates in the range of 10-40% have been observed in these patients, particularly those with unsuccessful revascularization or "no-option" CLI (NO-CLI) [12]. Exploring newer techniques for the revascularization of these ischemic limbs is essential. Cell-based therapeutics have emerged as a new area in this field, with bone marrow-derived mesenchymal stem cells (BM-MSCs) currently viewed as a promising new therapeutic approach. Numerous studies, including randomized trials, nonrandomized trials, and uncontrolled studies, have demonstrated the efficacy of stem cell therapy in CLI patients [11][12][13]. However, given study variability, small sample sizes, and a lack of large-scale placebo-controlled research, acceptance of this modality of therapy as the standard of care remains debatable. Transplantation of autologous BM-MSCs has also been tested with several implantation techniques, either intramuscular (IM) injection, intra-arterial (IA) injection, or a combination, and has yielded essentially identical results [12]. Generally, stem cell-based therapy is safe and effective, with modest and mostly temporary adverse reactions associated with local implantation. Moreover, preconditioning methods and prolonged growth factor release using bioactive microspheres may improve the therapeutic efficacy of cell treatment.
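To make the ABI criterion concrete: the index is simply the ratio of ankle to brachial systolic pressure. The following is a minimal Python sketch for illustration only; the cut-off bands are commonly cited approximate values and the function names are ours, not drawn from the cited studies.

    def ankle_brachial_index(ankle_systolic_mmHg, brachial_systolic_mmHg):
        """Return the ankle-brachial index (ABI), a unitless pressure ratio."""
        return ankle_systolic_mmHg / brachial_systolic_mmHg

    def classify_abi(abi):
        # Bands are approximate and vary slightly between guidelines.
        if abi > 1.3:
            return "non-compressible (calcified) vessels"
        if abi >= 0.9:
            return "normal"
        if abi >= 0.4:
            return "peripheral artery disease"
        return "severe ischemia, consistent with CLI"

    print(classify_abi(ankle_brachial_index(70, 140)))  # 0.5 -> peripheral artery disease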
The current review aims to evaluate and assess the clinical data and findings regarding the use of stem cell therapy in CLI patients with better therapeutic outcomes.
Search Criteria
A comprehensive search was conducted in PubMed, EMBASE, SCOPUS, and Web of Science, initially returning 1865 articles. The search was designed to identify articles with data on peripheral arterial obstructive disease (PAD), critical limb ischemia, angiogenesis, limb loss, and amputations, combined using the Boolean operators "OR," "AND," and "NOT". The search strategy adhered to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines (Figure 1). An additional search of the reference lists of the primary results was conducted to capture studies missed by the database search.
Inclusion Criteria
Only articles published in English were included and reviewed by the first author to identify relevant studies. We evaluated all potentially eligible studies through an in-depth review and consideration of the full text. Reference lists and relevant publications in the articles' bibliography were also searched. Data from clinical trials that mainly used stem cells in CLI patients were included.
Exclusion Criteria
Studies with primarily targeted indications other than CLI, such as carotid disease, aortic aneurysmal disease, inflammatory disease, cancer, nonvascular disease, intracranial vascular disease, and chemotherapy treatment, were excluded from the study.
Results
The initial database search yielded 2391 results, of which 393 studies were from PubMed, 526 from ScienceDirect, 759 from Web of Science, and 713 from EMBASE. Screening for duplicate articles left 565 articles. In-depth screening of abstracts and article titles excluded 520 articles, yielding 45 articles suitable for full-text review. On full-text review, articles with overlapping results were filtered out, leaving 25 articles included in this study. Some of the significant results from individual studies are as follows:
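As a quick arithmetic check, the screening counts reported above are internally consistent; a trivial Python sketch (the stage names are ours):

    records_identified = 393 + 526 + 759 + 713   # PubMed + ScienceDirect + Web of Science + EMBASE
    assert records_identified == 2391

    after_duplicate_screening = 565              # articles remaining after duplicate removal
    excluded_on_title_abstract = 520
    full_text_reviewed = after_duplicate_screening - excluded_on_title_abstract
    assert full_text_reviewed == 45

    included_in_review = 25                      # after filtering overlapping results
    print(records_identified, after_duplicate_screening, full_text_reviewed, included_in_review)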
Hybrid Revascularization
A CLI patient may undergo a combination of endovascular and open surgery, reducing the extent of invasive procedures and shortening operative time. Endovascular surgery addresses inflow, outflow, or a combination of both; it is performed percutaneously using a cross-over technique in the ipsilateral or contralateral femoral artery. Dougherty et al. studied 125 patients treated with hybrid therapy and found limited effectiveness in elderly, non-ambulatory patients; when multiple comorbidities were present, those patients needed therapeutic angiogenesis [18].
Vascular Stem Cells Biology
Thompson et al. reported that embryonic stem cells (ESCs) retain the ability to perpetually regenerate themselves while still possessing the capacity to differentiate into any human cell type [15]. The inner cell mass (ICM), as the name suggests, forms the innermost cellular component of the embryonic blastocyst and gives rise to the primitive endoderm and the epiblast, which in turn forms the three main germ layers (ectoderm, mesoderm, and endoderm) during the physiological process of embryogenesis [19].
Vasculogenesis and angiogenesis are two different processes. Vasculogenesis is the process by which new blood vessels form from endothelial progenitors and is primarily an embryonic process; the resultant capillaries are tiny and, as a consequence of the Hagen-Poiseuille law, cannot appropriately replace the larger vessels blocked in CLI. Arteriogenesis, also known as collateral growth, is the process by which pre-existing collateral arterioles are transformed into functioning collateral arteries. Human mesenchymal stromal cells derived from bone marrow trigger cellular events and paracrine processes, which have been shown to assist arteriogenesis [19].
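The strong dependence of flow on vessel calibre can be made explicit. For laminar flow in a rigid tube (an idealisation of real vessels, added here for context), Poiseuille's law gives, in LaTeX notation:

    Q = \frac{\pi \, \Delta P \, r^{4}}{8 \mu L}, \qquad \frac{Q(r/2)}{Q(r)} = \left(\frac{1}{2}\right)^{4} = \frac{1}{16}

where Q is volumetric flow, ΔP the pressure gradient, r the vessel radius, μ the viscosity, and L the vessel length. Because flow scales with the fourth power of the radius, halving the radius cuts flow sixteen-fold, which is why a bed of tiny new capillaries cannot substitute for a single occluded conduit artery and why arteriogenesis matters clinically.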
Elser et al. and Jaminon et al. demonstrated that ischemia increases plasma levels of activated cell cytokines such as thrombopoietin and soluble kit-ligand (sKitL) as well as progenitor cell cytokines like granulocyte-macrophage colony-stimulating factor (GM-CSF) and erythropoietin [20,21]. The release of stromal cell-derived factor-1 (SDF-1) by thrombopoietin and sKitL in ischemic limbs may enable revascularization by supporting hemangiocyte mobilization. In addition, immature vascular smooth muscle cells (VSMCs) play a vital role by proliferating and migrating to the site, followed by creating extracellular matrix (ECM) and vascular wall components such as collagen, elastin, and proteoglycans. All of these contribute to an efficient process of vascular morphogenesis [21].
Tateishi-Yuyama et al. performed a pilot study in which 25 patients with a unilateral ischemic leg received injections of bone marrow-mononuclear cells (BM-MNCs) in the gastrocnemius of the ischemic limb. Twenty-two patients with bilateral ischemic legs underwent injection of BM-MNCs in one leg and peripheral mononuclear cells in the other. The ankle-brachial index (ABI) was significantly improved at 24 weeks in those who had received BM-MNCs. Autologous implantation of BM-MNCs is effective for therapeutic angiogenesis due to the consistent supply of progenitor cells, angiogenic cytokines, and cluster of differentiation 34 (CD34)-positive cells [22].
Stromal Cells for Angiogenesis, Vasculogenesis, and Wound Healing
Increasing blood flow can theoretically be attained by increasing angiogenesis either pharmacologically or biologically, allowing therapeutic angiogenesis. Based on in-vitro studies, nitric oxide plays a major role in bone marrow mobilization and endothelial progenitor cell release. Hyperbaric oxygen therapy (HBOT) increases nitric oxide in cerebral cortex tissue, pulmonary tissue, and neutrophils, in turn increasing circulating progenitor cells, CD34, and the myeloid marker cluster of differentiation 14 (CD14) [23][24][25][26]. Hence, HBOT is a safe way to enhance the mobilization of bone marrow (BM)-derived progenitor cells into the circulation with minimal side effects.
Stromal Cell Mobilization
Hematopoietic stem cells (HSCs) are harbored in the BM, and several chemokines and cytokines promote HSC mobilization to the peripheral circulation. This mobilization of stem cells results from complex interactions between many important cell proteases, ligands, and receptors in the extracellular milieu [19]. A dose-dependent association exists between the increased number of endothelial progenitor cells (EPCs) in the peripheral circulation and the injection of granulocyte colony-stimulating factor (G-CSF) and GM-CSF. In addition, CXC chemokine receptor signaling leads to the mobilization of EPCs via increased matrix metallopeptidase 9 (MMP-9) activity in the BM [27,28].
GM-CSF mobilizes fewer cells than G-CSF, so it is rarely used in its place. In general, vascular endothelial growth factor (VEGF), fibroblast growth factors, and stromal cell-derived factors can promote EPC recruitment. Statins, parathormone, and cell-mobilizing ligands can be used alone or in conjunction with G-CSF. The use of BM-derived cells (BMDCs) in stem cell treatment has demonstrated reasonable safety; however, difficulties have been noted with cell collection and mobilization. A one-day culture of activated dendritic cells (DCs) has been reported to yield EPC-enriched stem cells [29].
Skeletal muscle satellite cells are another potential source of stem cells requiring further research and exploration [30]. After birth, muscles harbor precursor cells with myogenic potential, which aid muscle fiber repair and regeneration in adult tissues. These precursor cells are activated after muscle injury and initiate the healing process by producing new muscle fibers or integrating into the available muscle cells, mainly into their myonuclei, and promoting reparative processes. Satellite cells are devoted to myogenesis and are undifferentiated and mitotically dormant. The only source of new myoblasts in adult tissue is satellite cells, whose numbers decline with age. These cells can become active in ischemic circumstances and behave similarly to bone marrow stem cells in such conditions. Adipose tissue, the umbilical cord, and other sources can also be used to transfer stem cells for therapeutic purposes in addition to bone marrow [30].
Comparison of Intramuscular and Intra-arterial Stem Cell Administration
The route of administration of stem cells for CLI may be IM, IA, or both. The primary rationale for the intramuscular route is the establishment of a paracrine cellular depot in the ischemic region. Experimental investigations in animals show that BMDCs can aid in the repair of blood vessels and muscles by physically integrating growth factors into the tissue [31][32][33], and ischemic tissues can effectively neovascularize when bone marrow mononuclear cells are injected. They induce the regeneration of blood vessels and muscles by paracrine pathways acting through vascular endothelial cells, or by direct differentiation of the precursor cells into vessel and muscle, which would account for their angiogenic actions [31].
EPCs are among the many cell fractions found in BM-MNCs, and they secrete various angiogenic factors in vivo. The injected stem cells release a host of angiogenic factors, which improve blood flow and contribute to the incorporation of EPCs into newly formed capillaries, which is most likely how angiogenesis occurs. BMDCs function by physically integrating into the tissue or by secreting growth factors to aid in the repair of blood vessels and muscle tissue [34]. The gastrocnemius muscle and blocked native arteries are the best locations for collateral development because of the density of preformed collaterals and the greatest parallel orientation to axially aligned arteries [34].
The effects of the two routes of administration have been compared by assessing ABI and the transcutaneous partial pressure of oxygen (TcPO2). Both parameters increased considerably with intramuscular injection or combination therapy, but not with intra-arterial cell therapy alone. There was no difference in pain-free walking distance, although pain levels improved significantly. Unlike intra-arterial cell therapy, intramuscular cell therapy significantly improved ulcer healing [35][36][37].
With the administration of G-CSF, mainly at intramuscular sites, peripheral blood mononuclear cells were found to be mobilized to the site of the ischemic fibers. Clinical signs and symptoms of CLI patients improved considerably, though the effects were observed only in a limited number of patients in the pilot investigations [38]. Tateno et al. suggested that the incorporated peripheral blood mononuclear cells stimulate ischemic skeletal muscle cells to produce muscle-derived angiogenic factors [39].
Stromal Cell Treatment in CLI
Tateishi-Yuyama et al. [22] described the use of autologous stem cells in CLI. Injecting MSCs into the gastrocnemius muscle increases TcPO2 and ABI and decreases claudication. Eighteen of 25 clinical studies have shown good results, yet more studies are required to standardize the dosage and route of administration. Using autologous stromal cells becomes challenging in patients with anesthetic difficulties in whom bone marrow aspiration is impossible; this leads to the necessity of using mesenchymal stem cells as an allogeneic transplant. MSCs are less immunogenic and have successfully passed phase III trials. The indications for SCT are thromboangiitis obliterans [a clinical trial showed its efficacy with four-year follow-up] and atherosclerotic disease [the efficacy of progenitor cells has been shown in atherosclerotic disease, but a proper protocol for the manufacturing, standardization, storage, and delivery of these cells is still needed] [22].
Adverse Effects of SCT
Several studies report that after six months of treatment, patients who received BM-MNCs experienced improved rest pain, pain-free walking time, and tissue oxygen pressure; however, a significant effect was not reported after peripheral blood mononuclear cell injection [22,40]. In addition, studies report 15% mortality in patients undergoing autologous stem cell implantation for CLI. Recent studies have also demonstrated a high mortality rate in CLI patients in whom angiogenic cell treatment with IM BM-MNC implantation had been performed. However, studies also state that the risk is proportionally similar to that of traditional surgical revascularization procedures [22,[40][41][42].
Hemodialysis, diabetes mellitus, and pre-existing coronary artery disease (CAD) are conditions that severely affect angiogenesis or the ability to salvage limbs, as observed in both animal studies and human settings [43,44].
Role of Auto CD-34 Cell Therapy in CLI
In a double-blinded study, 28 CLI patients underwent cell mobilization with 5 micrograms/kg per day of G-CSF followed by leukapheresis on the fifth day. The CD34+ cells obtained were injected intramuscularly. At 12 months, there was a significant improvement in the mean Rutherford score (a reduction from baseline, implying an improved limb condition) and a reduced rate of amputation in the cell therapy group compared to control subjects [42].
Role of Neutrophil-lymphocyte Ratio (NLR) in CLI
The neutrophil-lymphocyte ratio, a simple marker of systemic inflammation, acts as a significant prognostic marker in chronic CLI. In a recent study, 561 CLI patients with a median age of 74 years were selected and their NLR was recorded. The study followed the subjects for 31 months, with 162 deaths and 148 major amputations during the study period. The Kaplan-Meier curve showed lower mortality in patients with an NLR <5, and a significant increase in the rate of death and major amputation in patients with an NLR >5 [45].
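The stratified survival comparison described above can be reproduced with standard tools. Below is a minimal sketch using the Python lifelines package; the data frame, file name, and column names are hypothetical and stand in for a real cohort.

    import pandas as pd
    from lifelines import KaplanMeierFitter
    from lifelines.statistics import logrank_test

    # df: one row per patient, with follow-up time (months), death indicator
    # (1 = died), and baseline neutrophil and lymphocyte counts.
    df = pd.read_csv("cli_cohort.csv")
    df["nlr"] = df["neutrophils"] / df["lymphocytes"]
    high = df["nlr"] > 5

    kmf = KaplanMeierFitter()
    for label, grp in [("NLR <= 5", df[~high]), ("NLR > 5", df[high])]:
        kmf.fit(grp["months"], event_observed=grp["died"], label=label)
        print(label, "median survival:", kmf.median_survival_time_)

    res = logrank_test(df.loc[~high, "months"], df.loc[high, "months"],
                       event_observed_A=df.loc[~high, "died"],
                       event_observed_B=df.loc[high, "died"])
    print("log-rank p =", res.p_value)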
Therapeutic Neovascularization Using Peripheral Blood Mononuclear Cells (PB-MNC) for CLI
The collection of PB-MNCs is a very safe and cost-effective mode of stem cell therapy for CLI. Twenty-nine patients with CLI from arteriosclerosis obliterans (ASO) or thromboangiitis obliterans (TAO) were enrolled in a study in which 80% of the patients had been advised amputation by their physicians. These subjects received PB-MNC implantation in the ischemic limbs and were reviewed at 2, 6, and 12 months. The authors noted decreased rest pain and improved claudication. In only three patients was the limb non-salvageable [39].
Patients with Limitations of Endovascular and Surgical Revascularization
CLI patients who have undergone previous unsuccessful attempts at revascularization, who have weak outflow vessels limiting surgical revascularization options, or who are medically unfit with substantial comorbidities conferring an unacceptable risk for revascularization/endovascular operations are grouped as NO-CLI and are ideal candidates for stem cell/progenitor cell therapy [46,47].
Previously, NO-CLI patients had high rates of limb loss and mortality, measured using amputation-free survival (AFS). AFS is a composite measure that combines hard outcomes such as amputation and death; it can be measured as a one-time event or as a cumulative accumulation of events. Stem cell therapy in NO-CLI has resulted in good clinical outcomes, viz., patients surviving with limbs that have not been amputated.
Delivery Methods to Enhance Cell Survival, Paracrine Effects, Engraftment
Various strategies are being employed to enhance the therapeutic effects of stem/progenitor cells. One such strategy is to improve the tissue viability of the implanted progenitor cells, mainly through the development of effective cell transport carriers that protect the cells or strengthen their overall survival. Wang et al. produced a methylcellulose (MC) hydrogel that responds to temperature and enables the temperature-regulated release of placental MSCs (P-MSCs) at lower temperatures [48]. The thermo-responsive MC hydrogel as a delivery system for P-MSCs in ischemic limbs increased cell survival, improved cell viability, and enhanced blood flow to the limbs, thereby preventing muscle atrophy [49].
Discussion
Surgical or endovascular revascularization has been the first choice of therapy for peripheral arterial obstructive disease (PAOD) and CLI. Up to 30% of patients develop complications from such procedures due to critical vascular involvement. The prognosis remains equivocal, with mortality rates up to 20% over six months [50,51].
For the last two decades, advances in regenerative medicine have been noted across modern medicine, with stem cell therapies proposed in different forms and varieties. Asahara et al. showed that circulating cells produced in the bone marrow can develop into endothelium and stimulate endothelial formation, progressing into new blood vessels. These were called EPCs: cells capable of endothelial and new vessel formation, thereby improving the perfusion of ischemic tissues, especially in myocardial and peripheral limb ischemia [52].
SCT is a very promising treatment modality in CLI patients, especially in NO-CLI, where surgical revascularization and endovascular procedures to re-establish vascular flow have failed or are impossible [53]. In such patients, SCT is a reliable salvage procedure that helps to lower or avoid amputations, increases ABI and TcPO2, and improves the AFS rate by avoiding or postponing amputation and death [54]. Studies have recorded an overall improvement in ischemic symptoms and quality of life, with potential relevance to future diabetic and coronary angiopathies.
Aspiration of bone marrow is generally well tolerated. Local pain is the most frequent adverse reaction and can be treated with non-steroidal anti-inflammatory medicines. Even G-CSF stimulation was well tolerated, with myalgia, fever, and bone soreness being the most frequent adverse effects. Intramuscular or intra-arterial administration of BM-MNCs has also produced encouraging results; the procedure is safe and well tolerated. Any unfavorable reactions observed mostly resulted from pre-existing disease [55,56].
Given the unique role of C-X-C chemokine receptor type 4 (CXCR4), which is crucial for stem cell homing, BM-MSCs appear more effective in initiating reparative processes than mobilized PB-MSCs. Hence, BM-MSCs may be a better alternative to PB-MSCs. The treatment-induced benefits last for two to three years. A multi-centric, large-scale, randomized controlled study is required to demonstrate the efficacy and safety of stem cell injection for PAOD and to establish this therapy as a standard mode of treatment for CLI patients [57].
The growing molecular understanding of the linkage between CLI and type 2 diabetes mellitus suggests the need for new medical therapies. miRNAs appear to be favorable therapeutic and diagnostic targets because they are particularly involved in neovascularization [58,59]. Preclinical, animal, and clinical studies all point towards an overall positive outcome of such therapy.
Limitations and Challenges of SCT
Despite the very encouraging results of several clinical trials, there are many concerns and barriers to SCT, and extensive research is needed to characterize the precise molecular pathways controlling its therapeutic effects. First, we must decide which cell type [BM-MSCs, peripheral blood-derived MSCs, adipose-derived stem cells, induced pluripotent stem cells, umbilical cord-derived MSCs, or dental pulp-derived MSCs] to use. Second, as stromal cells constitute a heterogeneous population, a deeper comprehension of the effective subset of stem cells is required.
The specific molecular mechanisms controlling the therapeutic benefits of progenitor cells are still being studied in order to identify the ideal cell type for cell-based treatment.
Both microvascular and macrovascular issues are related to PAOD/CLI. The current understanding of the mechanism of action of MSCs in CLI is increased microvascular density and stromal cell-mediated angiogenesis improving perfusion in CLI limbs. The ideal solution for an ischemic human leg is the production of collateral arteries or, even better, novel blood flow pathways. An in-depth analysis of the mechanisms behind SCT-induced arteriogenesis is required to develop an effective treatment that promotes arteriogenesis.
The ideal frequency of application, the best route of administration (IM vs. IA or other targeted delivery), and the right therapeutic dosage of cells are yet to be defined. The impact of autologous versus allogeneic stem cells and various stromal cell varieties must be standardized.
Conclusions
We conclude that SCT is a promising treatment modality for CLI patients, even in those in whom endovascular and revascularization procedures are not possible. SCT assures a prolonged AFS rate, improved distal perfusion, improved walking distances, reduced amputation rates, and an increased survival rate. Further research is required to characterize the mechanism of action, identify the optimal stem cell variety for CLI, and standardize the routes of administration and frequency of application, thus standardizing the procedure. Finally, the path from vascular biological principles to effective treatment of CLI requires sustained optimism and generosity in sharing data; open discussion of benefits and risks should help establish PAD as a treatable condition.
Conflicts of interest:
In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work.
A precedence-free approach to (de-)palatalisation in Japanese
Japanese exhibits two patterns involving palatality: palatalisation, which causes two adjacent segments to share palatality, and de-palatalisation, which renders one of those two adjacent segments unable to sustain the shared palatal property. These patterns are traditionally analysed by referring to the notions of adjacency and/or precedence. By contrast, in the context of Precedence-free Phonology (Nasukawa 2014, 2015a, b) this paper re-analyses these phenomena by referring to the head-dependency relations that are necessary for building structure, rather than by appealing to precedence relations. In this model, precedence is merely a natural result of interpreting the dependency relations that hold between units in hierarchical phonological structure.
Introduction
This paper analyses two opposing prevalent phenomena - palatal assimilation (e.g. si → ɕi) and palatal dissimilation (e.g. ji → i) - which frequently occur between adjacent positions and which are both typically analysed by referring (either explicitly or implicitly) to the syllable, e.g. the palatality of segment X must be realised in segment Y iff X and Y are linearly adjacent in the same syllable.
Adjacency is formally defined as a precedence relation that is lexically encoded in the segments forming a CV sequence, while syllables are taken to be constituents formed by dependency relations between C (onset) and V (nucleus), where C is a dependent of V.
The relational properties between units - precedence and dependency - are both regularly employed in phonology to explain recurrent phenomena and aspects of phonological architecture. In the interests of representational minimalism, however, some recent theories of representation dispense with one of these two relational properties and describe phonological phenomena by referring only to the other. There are two opposing views: (i) the strict CVCV model of Government Phonology (which may be dubbed Dependency-free Phonology) developed by Scheer and his colleagues, which abandons dependency and describes phonological phenomena by referring only to precedence; and (ii) Precedence-free Phonology developed by Nasukawa, which abandons precedence and describes phonological phenomena by referring only to dependency. The approach described in Nasukawa denies the existence of precedence relations between units of phonological representation, eliminating not only units such as CV units, skeletal positions and Root nodes (which have been assumed to carry properties relating to precedence) but also traditional prosodic units such as onsets, nuclei and codas (although these may still be informally referred to for ease of understanding). Instead, features are regarded as the units that play a central role in building phonological structure. This contrasts with orthodox phonological models, where features are merely inherent attributes of a segmental position and segments (more precisely, CV units or skeletal positions) are treated as the basic building blocks for constructing phonological structure. In this model features take the place of prosodic constituents like onset and nucleus, since features (which are, in phonological terms, the smallest units) themselves function as the basic building blocks of phonological structure. At the same time, a feature may also function as the head of a 'nuclear' expression, and by adding another feature to this head feature a complex expression is formed in which the additional feature takes the role of a dependent/complement. The feature model which most clearly illustrates this approach is the version of Element-based feature theory developed by Nasukawa, in which each feature, or element, is monovalent and fully interpretable on its own: to be phonetically realised it does not require the support of other elements. It follows that there is neither any universally fixed matrix of features nor any template-like feature organization. In accordance with certain principles, features can combine freely with one another.
As stated in Nasukawa (2015b: 213):

(1) a. Morpheme-internal phonological structure consists not of segment-based precedence information but of a set of features which are hierarchically concatenated.

b. Phonology is a module which not only interprets fully concatenated strings of morphemes but is also responsible for lexicalization (building the phonological structure of morphemes in the lexicon).

In element-based feature theory, melodic structure is represented using the six monovalent elements |A I U ʔ H N|. These are to be understood as mental objects which are active in all languages. Conceived of within the perception-based view of melodic structure employed in the work of Jakobson (Jakobson et al. 1952; Jakobson & Halle 1956), elements map onto phonetic exponents. The six elements are described in Table 1, along with their typical acoustic signatures (Harris & Lindsey 2000; Harris 2005).
Table 1: Typical acoustic exponence of elements. The first three elements |A I U| form a natural group of 'resonance' elements which typically describe vowel quality, prosodic phenomena such as pitch and intonation patterns, and also place of articulation (POA) in consonants. The other three elements |ʔ H N| are associated with non-resonance properties such as occlusion, aperiodicity and laryngeal-source effects.
In principle, the elements may be employed in both consonant and vowel expressions. Table 2 shows the different phonetic categories associated with each element according to whether it appears in a consonant or a vowel (Nasukawa & Backley 2008; Nasukawa 2014: 3).

In this model, what is traditionally assumed to be a nucleus is replaced by one of the three resonance elements |A|, |I| or |U|, this language-specific choice determining the phonetic quality of a melodically empty nucleus in the given language. (Traditionally, it is assumed that an empty nucleus is pronounced as one of the central vowels ə, i(ɨ) or ɯ, according to parametric choice.) English selects |A|, which is realised as ə in its acoustically weak form, while Yoruba chooses |I| (realised as i) and Japanese selects |U| (realised as ɯ in the east of Japan, as u in the west) (Figure 1). Thus languages divide into three types according to their baseline resonance: |A|-type (ə), |I|-type (i) and |U|-type (ɯ).
Given that the weak vocalic forms ə, i and ɯ are each represented by a single element |A|, |I| and |U| respectively, the question arises as to how the near-universal corner vowels a, i and u are represented structurally.In the case of |A|-type languages such as English, the baseline (which functions as a nucleus/V) takes another element as its dependent.If the baseline and |I| are concatenated, the whole expression is phonetically realised as i, and if the baseline and |U| form a set, the expression manifests itself as u.Furthermore, the set which consists of the baseline and |A| is phonetically interpreted as a.These structures may be represented as follows.
The leftmost structure in Figure 2 shows the representation of the English baseline, a sole |A|, which determines the quality of unstressed vowels and of the default epenthetic vowel, both of which are phonetically manifested as ə. On the other hand, the baseline resonance may also have the acoustic pattern of an additional (dependent) element superimposed onto it: for example, in the structures for a, i and u respectively, the dependents |A|, |I| and |U| have acoustic patterns with greater prominence than those of their baseline. These phonetic values a, i and u are the exaggerated forms of ə, i and ɯ respectively (where ə, i and ɯ are to be understood as the phonetic interpretation of |A|, |I| and |U| as bare heads). Following notational conventions, the head occupies the position at the top of the tree diagram and labels the entire structure. The same configuration also applies to |I|-type and |U|-type languages. One example is the |U|-type language Japanese, which will be discussed in the latter half of this paper. In the case of Japanese, |U| is the baseline (head), which is phonetically realised as the unrounded vowel ɯ when there is no dependent element.
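For concreteness, these head-dependent sets can be modelled as nested pairs. The following is a toy Python sketch; the encoding and the interpretation table are ours, not part of the theory's formalism.

    # An expression is either a bare element ("A") or a (head, dependent) pair,
    # where the dependent is acoustically more prominent than the head.
    A_TYPE_VOWELS = {
        "A": "ə",            # bare baseline of an |A|-type language (e.g. English)
        ("A", "I"): "i",     # baseline |A| with dependent |I|
        ("A", "U"): "u",     # baseline |A| with dependent |U|
        ("A", "A"): "a",     # baseline |A| with dependent |A|
    }

    def interpret(expr):
        """Phonetically interpret a baseline expression of an |A|-type language."""
        return A_TYPE_VOWELS.get(expr, "?")

    for expr in ["A", ("A", "I"), ("A", "U"), ("A", "A")]:
        print(expr, "->", interpret(expr))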
When the baseline takes |A|, |I| or |U| as a dependent, the acoustic pattern (phonetic exponence) of this dependent element overrides that of the baseline. As a result, the structures are phonetically realised as a, i, and ɯ respectively, as shown in Figure 3. The remaining two vowels of Japanese, e and o, are represented as follows. In the domain marked out with a dotted line, the set where |I| (solely phonetically interpreted as i) has |A| (interpreted as a) as a dependent is phonetically realised as the mid front vowel e: in acoustic terms, the additional (dependent) 'mass' pattern is added to the (structurally headed) 'dip' pattern. In this configuration, the dependent 'mass' pattern is more prominent than the head 'dip' pattern since |A| is the most embedded dependent, making it phonetically more prominent than the head. The same is true in the structure for o in Figure 4: in the |U|-headed set of |U| and |A|, the dependent |A| is acoustically more prominent than the head |U|.
Structures which are the reverse of those in Figure 4 are also employed in Japanese, as given in Figure 5: the |A|-headed set consisting of |A| and |I| is phonetically interpreted as the light diphthong ja (ĭa) rather than as a monophthong, while the |A|-headed set consisting of |A| and |U| phonetically manifests itself as the light diphthong ɰa (ɯ̆a).
Figure 4: Mid vowels e and o.
The remaining light diphthongs permitted in Japanese are represented as follows.
Figure 6 shows how the |U|-headed set consisting of |U| and |I| is phonetically interpreted as the light diphthong ju (ĭu), while the whole structure is realised as the light diphthong jo (ĭo), in which the |A|-headed set comprising |A| and |I| (phonetically interpreted as ja) is embedded in the dependent |A| part of the |U|-headed set consisting of |U| and |A| (phonetically interpreted as o).

As discussed in Nasukawa (2015a), the above structures find support in the observation that jV of CjV (rather than Cj) behaves as a constituent in phonological phenomena, as demonstrated below (where phonetic symbols in brackets are phonetically realised forms).
The pattern emerging from (2) is that a front vowel cannot follow a Cj sequence in Japanese. This is often taken to be a co-occurrence restriction which bans a sequence comprising the palatal glide j and a front (palatal) vowel (i/e). Yet in fact, not only CjV sequences but also jV sequences are subject to the same distributional restriction.
Given that the co-occurrence restriction works within a domain/constituent, as demonstrated by consonant clusters and diphthongs cross-linguistically, it follows that a CjV sequence must be syllabified as C-jV, where j is part of the nucleus rather than part of the onset (Cj-V). This is motivated by the fact that any consonant in the Japanese consonant inventory (except for j, w and the placeless nasal ɴ) may appear before a permitted jV sequence, i.e. with the same distributional freedom as a single consonant that precedes any of the five monophthong vowels a, i, u, e, o. To capture this distributional restriction involving jV sequences, Nasukawa claims that jV as a whole forms a nucleus rather than a CV sequence. That is, jV is a light diphthong (e.g. ĭa) of the kind which is also found in languages such as Korean and Chinese.
This view is also supported by the way these sounds are written in the Japanese syllabary, where kja is represented as a combination of ki and a subscript ja: i.e. ki is modified by the addition of ja.
Representing consonants in Japanese
Before proceeding to the analysis of Japanese palatalisation in §3, let us clarify how consonants are represented in the precedence-free model. It is assumed that consonants are structurally dependent on vowels, since vowels are generally taken to be obligatory in constituents such as 'syllable' and 'word' whereas consonants are optional. From this it follows that the vocalic part of the constituent forms its head (and is therefore unmarked and essential for structure-building), while the consonantal part takes the role of a dependent (and is therefore unimportant for structure-building).
On this basis - and in light of the above discussion on the relation between structural head-dependency and phonetic prominence - it may be claimed that consonants are more prominent than vowels, since consonantal properties tend to function as phonetic cues to prosodic information (e.g. English aspiration as a marker of foot-initial position) while vowels have no comparable function (e.g. despite being more sonorous than consonants, vocalic properties are unmarked and do not show any acoustically-defined abrupt changes). This is consistent with the point made in §2.2 that heads lack phonetic prominence while dependents are phonetically more prominent.
Let us return to the argument that the part of a constituent which is phonetically more prominent and/or contrastively richer should occupy a more deeply embedded position. We may represent the consonantal part using the structure in Figure 7, where elements under a vertical line are heads and those under a slanting line are dependents.
Figure 7
The structure of ta 'rice field'.
As illustrated above, the consonantal part is dominated by the vocalic part: in the left-hand structure in Figure 7, the consonantal |H|-headed set of three elements (which phonetically manifests itself as t) is dependent on the baseline |U| that is the ultimate head of the expression. And the |U|-headed set of |H|˝ and |U| (= |U|ˊ) takes |A| as its dependent at the next level down. As discussed in §2.2, the part consisting of |U| and |A| is phonetically interpreted as a since the head |U| is a resonance baseline, the acoustic quality of which is masked by that of its dependent element. As a whole, the structure on the right-hand side is realised phonetically as ta 'rice field'.
As mentioned earlier, and as discussed in Nasukawa and Nasukawa & Backley, representations of this kind make no reference to precedence relations between the units within phonological representations. There is therefore no difference between the two structures in Figure 7: both exhibit the same dependency relations between the units in their respective structures. In this model, as argued in Nasukawa, who discusses in detail two types of dependency (endocentric dependency and exocentric dependency), linear precedence is to be regarded as the natural result of performance systems interpreting the hierarchical structure present in phonological representations.
Referring to the configurations in Figure 7, the element structures permitted to appear in the consonantal part are given below.
Since the noise element |H| is present in all obstruents, it serves to define the class of obstruents. (Conversely, the absence of |H| indicates a sonorant expression.) |H| is deemed the head of the consonantal expression in which it appears, while the nature of the hierarchical relation whereby |I|, |U| or |A| is dominated by |H| determines any acoustic effects relating to place of articulation. In addition, a whole expression is identified as a stop or an affricate if the edge element |ʔ| is present, as in Figure 8, while the same expression without |ʔ| is interpreted as a fricative, as shown in Figure 9.

Figure 9: Fricatives in Japanese.
Let us limit the present discussion to palatality (in representational terms, the property associated with the |I| element), since this will be the focus of §3. The element |I| is found in ʨ/ʥ (Figure 8), in ɕ (Figure 9) and in ç (also Figure 9); in all of these, |I| is in the most deeply embedded part of the structure, and for this reason is interpreted as palatality.
Using the melodic structures just outlined, the next section describes two seemingly opposing phenomena involving palatality: palatal dissimilation and palatal assimilation.
Palatal dissimilation
The process of palatal dissimilation in Japanese (see §2.2) imposes a ban on sequences of a palatal glide j followed by a front (palatal) vowel (i/e).
The prohibited sequences *ji and *je are instead produced as i and e respectively (e.g. idiɕɕu < jɪdɪʃ 'Yiddish' and eritsiɴ < jeltsin 'Yeltsin (Boris)'). This process is typically seen as a co-occurrence restriction, which makes appeal to the OCP (Obligatory Contour Principle) or Identity Avoidance, since it disallows sequences of j plus i/e. In terms of element structure, Japanese *ji and *je are represented as follows.
Recall that the structural part containing the vocalic set has one of the three elements |I|, |U| or |A| as its head, and that in the case of Japanese it is |U| which dominates and provides the baseline for the entire structure. In this vocalic part, only a single |I| element can appear (*|I I|). Thus, by suppressing the |I| element which is more deeply embedded, as shown on the left in Figure 10, we arrive at a structure identical to the second from the right in Figure 3; this resulting structure is phonetically interpreted as i. The same applies to the right-hand structure in Figure 10: suppressing the most deeply embedded |I| leaves an expression which is interpreted as e, the same as in the left-hand structure in Figure 4.
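The repair can be stated as an operation on the vocalic set: if two tokens of |I| occur in the same domain (*|I I|), suppress the more deeply embedded one. A toy Python sketch over list-encoded domains follows; the flat encoding is ours and abstracts away from the tree structures in the figures.

    def depalatalise(vocalic):
        """Enforce *|I I|: keep only the least embedded token of |I|.

        `vocalic` lists elements from the head (index 0, the baseline)
        down to the most deeply embedded dependent."""
        seen_I = False
        out = []
        for el in vocalic:
            if el == "I":
                if seen_I:
                    continue           # suppress the deeper token
                seen_I = True
            out.append(el)
        return out

    print(depalatalise(["U", "I", "I"]))       # *ji -> i : ['U', 'I']
    print(depalatalise(["U", "I", "A", "I"]))  # one reading of *je -> e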
In loanword phonology, on the other hand, *je is occasionally accommodated as ie rather than e, as in ieti (*eti) < jeti 'Yeti' and iereɴ (*ereɴ) < jelən 'Yellen (Janet)'. The unpacking of je to ie may be analysed as follows.
Rather than suppressing the most deeply embedded element |I| as in the right-hand structure in Figure 10, the structure for ie is generated by breaking the input structure at the highest level and placing the most deeply embedded |I| (as in Figure 11, left-hand side) in the dependent position of the first dependent of the baseline |U|.
Figure 11: The unpacking of je (ĭe) to ie.
In addition to the alternations *je > e and *je > ie, je (unlike *ji) is occasionally allowed in recent loanwords (e.g. jesu < jes 'yes' and jeroo < jeləʊ 'yellow'). On the other hand, *ji is disallowed in all word types including loanwords (ijaa < jɪə 'year' and idiɕɕu < jɪdɪʃ 'Yiddish'). Under the proposed representations in Figure 10, the difference between *ji and *je is attributed to the presence/absence of |A|: the structure which consists of only |I|s in the domain in question (the case of ji in Figure 10, left-hand side) is strictly prohibited by the requirement of Identity Avoidance *|I I|. On the other hand, the structure which contains |A| in addition to the two |I|s (the case of je in Figure 10, right-hand side, and Figure 11) may be interpreted differently depending on various factors such as donor language and word frequency: in some recent loanwords, the existence of |A| flanked hierarchically by the |I|s protects the otherwise ill-formed *|I I| structure, while in others the existence of |A| is transparent and renders the entire structure ungrammatical. As we will see in the following section, |I| can also appear in the non-vocalic domain; specifically, |I| is allowed to occupy a position within the domain where the non-resonance element |H| is the head (i.e. |H| |ʔ| |L|). It is possible for the same element to appear twice in an expression if the two tokens of that element reside in different (vocalic and consonantal) parts of the structure (the reader is referred to the discussion preceding and following Figure 13).
Palatal assimilation as SEARCH |H| and COPY |I|
The palatal dissimilation process just described for Japanese is observed in the vocalic set, while the opposite process of palatal assimilation involves palatality in both the vocalic and the dependent consonantal sets. The process itself targets only coronal obstruents (s, z, t, d) and the glottal fricative (h) when they precede the front high vowel i or the light diphthong jV. This is illustrated in (5). (Note that Japanese word-initial z is realised as [ʣ], which requires further explanation that is beyond the scope of this paper. In (5b) and (6b), therefore, the issue is avoided by only showing examples in which z appears word-internally.) All the target segments of palatalisation are obstruents, which suggests that the noise element |H| (for obstruency) is crucial to the process. Note that this is quite unlike the palatal dissimilation discussed above, which takes place in the vocalic domain where |H| is absent. Also, from the observation that palatal assimilation targets only coronal obstruents (s, z, t, d) and the glottal fricative (h) when they precede i or jV, we can expect there to be something common to the internal structures of the target segments.
As seen in the three rightmost structures in Figure 12, the target segments (t, s/z, h) all have the noise element |H|, and in addition, all lack the rump element |U|, whereas non-target segments such as p/b and k/ɡ do contain |U|.In other words, palatalisation affects a consonantal set which has |H| but no |U| and which is dominated by a vocalic set containing |I| (the source of palatality).
Consider t-palatalisation as an example.
Figure 12: The target segments of palatalisation.
As illustrated above, the process which palatalizes a consonantal structure may be analysed as palatality-spreading/copying on condition that this structure is recognized as obstruent. In element terms, it can be said that the existence of |H| (which defines obstruency) forces the most deeply embedded |I| in the vocalic set to COPY itself to the most deeply embedded part of the consonantal set. This may be formally expressed as in (7).

(7) SEARCH |H| and COPY |I|: SEARCH |H| and COPY the V-dependent |I| into the most deeply embedded part of the |H| domain.
In the case of the sequence ti, the dependent |I| in the vocalic set (i.e. the only - and therefore the most deeply embedded - token of |I|) copies itself onto the most deeply embedded part of the consonantal set containing |H|.
At this point let us address the following questions arising from this analysis.
Regarding (8a), Element Theory shows a clear connection between |H| (obstruency) and |I| (palatality) in that both are united as members of the group of 'light' elements. As discussed in detail in Backley & Nasukawa and Backley, the 'light' elements comprise the set |I H ʔ| while the remaining elements |U A N| are 'dark'. Here it is claimed that palatalisation is driven by a mechanism in which the light element |I| seeks out another light element |H|, the former being copied onto a position where the latter is already present. The reason why |I| and |H| behave as a set in the process in question is that they freely appear in both consonantal and vocalic domains, whereas |ʔ| is typically limited to consonantal domains. Because the process in question (palatalisation) is an interaction between consonantal and vocalic domains, only elements which can function in this way naturally form a group. The same is true of the 'dark' group in some systems such as native (Yamato) Japanese, where |U| (labiality) and |L| (nasality/voicing) behave as a set: |U| can be employed in a single consonantal segment when it is accompanied by |L|, as in m (|U L ʔ|) and b (|U L ʔ H|), while |U| with no |L| can only appear in a geminate consonant, as in -pp- (|U ʔ H|). As for (8b), the property that is copied occupies the most deeply embedded position (terminal dependent) in the whole structure, being subject to three levels of embedding. As such, it is able to maximize the effects of |I| percolating through the entire domain to ensure the most effective agreement of the active property.
The operations SEARCH |H| and COPY |I| in (7) also apply to fricatives, as these also contain |H| in their structures. The same palatalisation process is observed in the case of fricatives, as illustrated below.
Unlike the stop t in Figure 13, the fricative s in Figure 14 has no |ʔ|; yet the V-dependent |I| is still copied to the most deeply embedded part of the |H| domain.Additionally, the glottal fricative h, consisting of a sole |H| element, is also a target for |I|-copying to its dependent position when the SEARCH and COPY operations apply, as shown in Figure 15.
By contrast, the |H|-headed set that has |U| is immune to palatalisation, as illustrated in Figure 16.

Even though the conditions for SEARCH and COPY are met, the presence of the rump element |U| prevents |I| from being copied to the |H|-headed domain. This is attributed to the following co-occurrence restriction, which is operative in Japanese.

(9) *|I U|: |I| and |U| may not co-occur as dependents within the same domain.

Notably, (9) applies not only to the consonantal (|H|-headed) domain but also to the vocalic (|U|-headed) domain. As discussed in §2.2, Japanese disallows the combination of |I| and |U| in a vocalic domain (note that the baseline element |U|, which is the ultimate head of the domain, does not count as a dependent |U|). And as shown below, what applies to p also applies in the case of k, since k also contains |U|.
The co-occurrence restriction in (9) prevents |I| from being copied to the most deeply embedded part of the |H|-headed domain of the velar stop k. (Note that k is phonetically palatalized, but the degree of palatalisation is perceptibly different from t, s, z, h.) In Standard Japanese, thus, the arguments of COPY and of the co-occurrence restriction are |I| and *|I U| respectively. Arguments for COPY are parametrically selected: |U| for rounding assimilation (e.g. round harmony in Turkish and Finnish), |A| for height assimilation (e.g. height harmony in Chichewa and Basque), |L| for nasal/voicing assimilation (e.g. postnasal voicing in Zoque and Japanese), |H| for voiceless assimilation (e.g. English and Swedish) and |ʔ| for stop gemination (e.g. Italian and Danish) (Harris 1994; Harris & Lindsey 1995). As for the co-occurrence restriction, not only *|I U| (both 'colour' elements) but also *|H L| (both 'source' elements) are observed cross-linguistically (Harris 1994). In principle, any combination of elements has the potential to act as a co-occurrence restriction, although in practice there are clear tendencies: *|ʔ H| is marked, although it does function when no other elements are present.
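The rule in (7) and the blocking effect of (9) can be stated together as a small procedure over consonant-vowel pairs of element sets. The following toy Python sketch is ours; the flat-set encoding abstracts away from the internal embedding shown in the figures.

    def palatalise(consonant, vowel, blocked=frozenset({"I", "U"})):
        """SEARCH |H| and COPY |I|: copy a dependent |I| from the vocalic set
        into the consonantal set, provided the consonant contains |H| (is an
        obstruent) and the copy would not create a banned combination (*|I U|)."""
        if "H" in consonant and "I" in vowel:
            result = consonant | {"I"}
            if not blocked <= result:      # *|I U|: block if both would co-occur
                return result
        return consonant

    t = {"H", "ʔ", "A"}         # coronal stop
    s = {"H", "A"}              # coronal fricative
    h = {"H"}                   # glottal fricative
    k = {"H", "ʔ", "U"}         # velar stop: contains |U|, so copying is blocked
    i_vowel = {"U", "I"}        # Japanese i: baseline |U| plus dependent |I|

    for c in (t, s, h, k):
        print(sorted(c), "->", sorted(palatalise(c, i_vowel)))

Running this yields palatalised sets for t, s and h but leaves k unchanged, mirroring the Standard Japanese pattern described above.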
Returning to the copying of |I| (palatalisation), unlike Figure 17, some dialects of Japanese exhibit palatalisation of velar stops: e.g. cɨŋko 'safe' in the Shiroishi dialect (kiŋko in Standard Japanese) and ɨɟɨ 'railway station' in the Morioka dialect (eki in Standard Japanese). This is illustrated as follows.
In these dialects, unlike Figure 17, COPY(|I|) requires |I| in the vocalic domain to be copied to the highest dependent part (rather than the most deeply embedded part), which forces |U| from its position. As a result, as shown on the right in Figure 18, the structure of the consonantal domain is phonetically interpreted as c. The same process can be found in the case of ci < ti (e.g. cɨkara 'power, force' in the Morioka dialect; ʨikara in Standard Japanese), where (according to the requirement of *|I U|) |A| in the highest dependent position of the consonantal domain is forced out and |I| in the vocalic domain is copied into that position (Figure 19). Like Standard Japanese, however, no bilabial stop palatalisation is observed in these dialects, since the two |U|s in the consonantal domain block COPY(|I|), as illustrated in Figure 20.
Another question to be addressed is why the vowel e, which also contains |I|, does not trigger palatalisation (Figure 21).
Figure 21: No palatalisation before e.
As illustrated in Figure 4 in §2.2, e has |I| as a head and |A| as a dependent, so the most deeply embedded element is |A| rather than |I|. Since the operation COPY in (7) targets the most deeply embedded |I| in the V domain, the head |I| in e cannot be a source for the copying operation. As such, e fails to palatalize a preceding coronal obstruent or glottal fricative. However, in some dialects of Japanese (e.g. in Kyushu) the sequence se manifests itself as ɕe, suggesting that in those dialects it does not matter whether the source |I| is in the most deeply embedded part of the structure or not: parametrically, any |I| is copied to the consonantal set if it is present in the dominant vocalic set.
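The deepest-token condition that distinguishes i from e can be made explicit with an ordered encoding. The following sketch complements the earlier flat-set version; again, the encoding is ours.

    def copy_deepest_I(consonant, vowel_path):
        """COPY targets only the *most deeply embedded* |I| in the vocalic set.

        `vowel_path` lists elements from the head down to the deepest
        dependent, so only a vowel whose final element is |I| can trigger
        palatalisation."""
        if "H" in consonant and vowel_path[-1] == "I":
            return consonant | {"I"}
        return consonant

    s = {"H", "A"}
    i_vowel = ["U", "I"]       # i: deepest dependent is |I| -> triggers
    e_vowel = ["U", "I", "A"]  # e: deepest dependent is |A| -> fails to trigger

    print(sorted(copy_deepest_I(s, i_vowel)))  # ['A', 'H', 'I'] -> ɕ
    print(sorted(copy_deepest_I(s, e_vowel)))  # unchanged       -> s

The Kyushu-type dialects mentioned above would correspond to relaxing the `vowel_path[-1]` check to membership anywhere in the vocalic set.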
A final point to note is that the string ɕe is permitted in loanwords, in which case ɕ is not the result of palatalisation triggered by the following e: rather, it is simply a sequence consisting of ɕ plus any vowel (a, i, u, e, o), which is possible in Japanese loanwords.
Conclusion
This paper has analysed two processes involving palatality: (i) palatal dissimilation and (ii) palatal assimilation. While the latter has traditionally been accounted for by referring to precedence relations between segments, in this paper it has been reanalysed within the context of Precedence-free Phonology, which makes no reference to precedence relations in representations and instead employs only head-dependency relations between units.
In this model, the traditional notions of progressive and regressive assimilation are interpreted in terms of different COPY movements: progressive (C to V) involves copying to a higher position in the hierarchical structure, while regressive (V to C) requires the opposite movement to a lower position. The latter is typologically more common, suggesting that it is more natural for movement to target a position in the same domain as the source.
Palatal dissimilation (de-palatalisation) takes place between sonorants: the process affects sequences of j followed by i or e. As a result of de-palatalisation the banned sequences *ji, *Cji, *je, *Cje are produced as i, Ci, e and Ce respectively. Palatal assimilation, on the other hand, targets coronal obstruents (s, z, t, d) and the glottal fricative (h) when they precede the front high vowel i. On this basis, the only difference between the two processes concerns the presence/absence of obstruency. In terms of element-based representations, segments with |H| (noise = obstruency) undergo palatalisation (COPY |I|, i.e. |I|-agreement) while the sonorant j, which has no |H|, is subject to de-palatalisation (*|I I|). In accordance with the general requirement of Identity Avoidance, the same element |I| cannot appear twice in a domain; so in the case of de-palatalisation two tokens of |I| in the sonorant j are disallowed and one of them (the dependent |I|) must be suppressed. However, another |I| is allowed to appear in the |H|-headed domain, since an element may be freely copied to a position outside of its own consonantal/vocalic domain. So under the operations SEARCH |H| and COPY |I|, |I| is specified in the |H|-headed domain if it is already present in the dependent part of the associated vocalic set; palatalisation is then established. This is consistent with the analyses of nasal and vowel harmony in Nasukawa, where a similar mechanism is discussed which refers to dependency relations in prosodic structure. The analysis also succeeds in accounting for phenomena in which no palatalisation takes place, i.e. when the segments concerned are labials or velars: employing the co-occurrence restriction *|I U|, which in the case of Japanese functions in the vocalic set (i.e. *|I U| bans segments such as y and ø, which contain both |I| and |U|), it has been claimed that the restriction also applies to the consonantal (|H|-headed) domain. In this way, the constraint may be said to apply across the board within a given language.
At no point have the above analyses made any reference to precedence relations between structural units.Further research will now be required on other phenomena that have traditionally been analysed by referring to precedence relations.
Notes
Alternatively, a precedence relation could be formed between neighbouring X slots, or root nodes, or between features within a contour segment such as an affricate or prenasalised obstruent.
The notion of precedence has also been questioned elsewhere in the literature, for example by Anderson, van der Hulst, Fujimura (the Converter/Distributor model) and Haraguchi. Like the present model, these all exclude the notion of precedence from representations, although unlike the present proposal they formally retain prosodic categories (e.g. onset, nucleus, syllable, foot).

As discussed in Nasukawa, all languages have one of the baseline resonance qualities (|I|, |U| or |A|), which appears as a default epenthetic vowel. The identity of this default vowel is typically revealed through loanword phonology.

Multiple appearances of the same element in a segment-sized structure are characteristic of Particle Phonology. The idea of head-dependency relations between elements can be traced back to Dependency Phonology.

Note that there is no phonetic difference between the manifestation of the sole baseline (ɯ) and the realisation (ɯ) of the set consisting of the baseline plus a dependent |U|. Phonologically, however, they display different behaviour: unlike the latter, the former is restricted to verb endings and is insensitive to phonological processes. The reader is referred to the detailed discussion in the literature cited above.
The consonants of Japanese are as follows (distinction between phonemic and allophonic not shown). 1
Excel | CSVmatrix of features nor any template-like feature organization.In accordance with certain principles, features can combine freely with one another.
first three elements |A I U| form a natural group of 'resonance' elements which typically describe vowel quality, prosodic phenomena such as pitch and intonation patterns, and also place of articulation (POA) in consonants.The other three elements |ʔ H N| are associated with non-resonance properties such as occlusion, aperiodicity and laryngeal-source effects.
Figure 2
Figure 2Vowels in the |A|-type language.
Figure 3Vowels in the |U|-type language.
shows how the |U|-headed set consisting of |U| and |I| is phonetically interpreted as the light diphthong ju ((ĭu), while the whole structure is realised as the light diphthong jo ((ĭo) in which the |A|-headed set comprising |A| and |I| (phonetically interpreted as ja) is embedded in the dependent |A| part of the |U|-headed set consisting of |U| and |A| (phonetically interpreted as o).
Figure 6
Figure 6 Figure 8Stops and affricates in Japanese.
).Under the proposed representations in Figure10, the difference between *ji and *je is attributed to the presence/absence of |A|: the structure which consists of only |I|s in the domain in question (the case of ji in Figure10, left-hand side) is strictly prohibited by the requirement of Identity Avoidance *|I I|.On the other hand, the structure which contains |A| in addition to the two |I|s in the domain in question (the case of je in Figure10, right-hand side and Figure11may be interpreted depending on various factors such as donor language and word frequency: in some recent loanwords, the existence of |A| flanked hierarchically by the |I|s (as in Figure10, right-hand side) protects the otherwise ill-formed *|I I| structure; while in others, the existence of |A| is transparent and renders the entire structure ungrammatical.
), therefore, the issue is avoided by only showing examples in which z appears word-internally.) 7) SEARCH |H| and COPY |I| SEARCH |H| and COPY the V dependent |I| in the most deeply embedded part of the |H| domain.Excel | CSV Original | PPT Original | PPT Regarding (8a), ET shows a clear connection between |H| (obstruency) and |I| (palatality) in that both are united as members of the group of 'light' elements.As discussed in detail in Backley & Nasukawa ( ) and Backley ( ), the 'light' elements comprise the set |I H ʔ| while the remaining elements |U A N| are 'dark'.Here it is claimed that palatalisation is driven by a mechanism in which the light element |I| seeks out another light element |H|, the former being copied onto a position where the latter is already present Table domains whereas |ʔ| is typically limited to consonantal domains.Because the process in question (palatalisation) is an interaction between consonantal and vocalic domains, only elements which can function in this way naturally form a group.The same is true in the 'dark' group in some systems such as native (Yamato) Japanese, where |U| (labiality) and |L| (nasality/voicing) behave as a set: |U| can be employed in a single consonantal segment when it is accompanied by |L| as in m (|U L ʔ|) and b (|U L ʔ H|) while |U| with no |L| can only appear in a geminate consonant as in -pp-(|U ʔ H|). Figure15 Figure 18Palatalisation: velar stop.
On this basis, the only difference between the two processes concerns the presence/absence of obstruency. In terms of element-based representations, segments with |H| (noise = obstruency) undergo palatalisation (COPY |I| (|I|-agreement)) while the sonorant j, which has no |H|, is subject to de-palatalisation (*|I I|). In accordance with the general requirement of Identity Avoidance, the same element |I| cannot appear twice in a domain; so in the case of de-palatalisation the two tokens of |I| in the sonorant j are disallowed and one of them (the dependent |I|) must be suppressed. However, another |I| is allowed to appear in the |H|-headed domain, since an element may be freely copied to a position outside of its own consonantal/vocalic domain. So under the operations SEARCH |H| and COPY |I|, |I| is specified in the |H|-headed domain if it is already present in the dependent part of the associated vocalic set; then palatalisation is established. This is consistent with the analyses of nasal and vowel harmony in Nasukawa ( ), where a similar mechanism is discussed which refers to dependency relations in prosodic structure. This analysis succeeds in accounting for phenomena in which no palatalisation takes place, i.e. when the segments concerned are labials or velars. Employing the co-occurrence restriction *|I U|, which in the case of Japanese functions in the vocalic set (i.e. *|I U| bans segments such as y and ø, which contain both |I| and |U|), I have claimed that the restriction also applies to the consonantal (|H|-headed) domain.
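As a rough, non-authoritative illustration of the SEARCH/COPY mechanism described above, the following Python sketch models domains as element sets with a head and dependents. The Domain class, the palatalise function, and the example segments are schematic assumptions for exposition, not an implementation proposed in the original analysis.

```python
from dataclasses import dataclass, field

@dataclass
class Domain:
    """A schematic |X|-headed domain: a head element plus dependents."""
    head: str
    dependents: set = field(default_factory=set)

    def elements(self):
        return {self.head} | self.dependents

def palatalise(consonant: Domain, vowel: Domain) -> None:
    """SEARCH |H| in the consonantal domain; if found, COPY the dependent
    |I| of the associated vocalic domain into the consonantal domain."""
    if "H" not in consonant.elements():
        return                      # sonorants lack |H|: no palatalisation
    if "I" not in vowel.dependents:
        return                      # no dependent |I| to copy
    if "U" in consonant.elements():
        return                      # co-occurrence restriction *|I U|
    consonant.dependents.add("I")   # copying across domains evades *|I I|

# Schematic segments; element sets are illustrative, not the paper's exact ones.
s = Domain(head="H", dependents={"ʔ"})            # a coronal obstruent
i_vowel = Domain(head="base", dependents={"I"})   # baseline plus dependent |I|
palatalise(s, i_vowel)
print(s.elements())   # {'H', 'ʔ', 'I'}: the obstruent is palatalised
```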
In this approach, what is traditionally assumed to be a nucleus is replaced by one of the three resonance elements |A|, |I| or |U|, this language-specific choice determining the phonetic quality of a melodically empty nucleus in the given language.
Hospital length of stay prediction tools for all hospital admissions and general medicine populations: systematic review and meta-analysis
Background: Unwarranted extended length of stay (LOS) increases the risk of hospital-acquired complications, morbidity, and all-cause mortality and needs to be recognized and addressed proactively.
Objective: This systematic review aimed to identify validated prediction variables and methods used in tools that predict the risk of prolonged LOS in all hospital admissions and specifically General Medicine (GenMed) admissions.
Method: LOS prediction tools published since 2010 were identified in five major research databases. The main outcomes were model performance metrics, prediction variables, and level of validation. Meta-analysis was completed for validated models. The risk of bias was assessed using the PROBAST checklist.
Results: Overall, 25 all admission studies and 14 GenMed studies were identified. Statistical and machine learning methods were used almost equally in both groups. Calibration metrics were reported infrequently, with only 2 of 39 studies performing external validation. Meta-analysis of all admissions validation studies revealed a 95% prediction interval for theta of 0.596 to 0.798 for the area under the curve. Important predictor categories were co-morbidity diagnoses and illness severity risk scores, demographics, and admission characteristics. Overall study quality was deemed low due to poor data processing and analysis reporting.
Conclusion: To the best of our knowledge, this is the first systematic review assessing the quality of risk prediction models for hospital LOS in GenMed and all admissions groups. Notably, both machine learning and statistical modeling demonstrated good predictive performance, but models were infrequently externally validated and had poor overall study quality. Moving forward, a focus on quality methods by the adoption of existing guidelines and external validation is needed before clinical application.
Systematic review registration: https://www.crd.york.ac.uk/PROSPERO/, identifier: CRD42021272198.
Background and significance
Hospital inpatient and outpatient services make up the bulk of health spending for all the Organization for Economic Co-operation and Development (OECD) countries (1). Australian health expenditure has increased by an average of 2.7% per year over the last 18-20 years, and the cost of hospital care accounted for 40% of the total, of which 61.7% was spent on acute admitted care (2,3). In 2020-2021, the cost of acute admitted care was AUD33.8 billion, with the average cost per admitted acute care separation being $5,315 (4). Length of stay (LOS) in an acute hospital is a significant influencer of the cost of delivering hospital-based care and is a key measure of hospital performance according to the Australian Health Performance Framework (5). Extended LOS increases the risk of hospital-acquired complications (HACs) and impacts patient access and flow (6). A recent report showed up to a 3- to 4-fold variation in the average LOS in Australian hospitals (3), often due to a complex interaction of multiple factors, including some unrelated to the patient's condition. HACs such as delirium can prolong hospital LOS by 6-7 days and increase mortality (7,8). Reducing unwanted variation in LOS is essential in Australia and globally to ensure the sustainability of economically viable health services for the future.
To utilize healthcare resources efficiently, studies have been undertaken globally utilizing existing data and applying statistical techniques such as machine learning (ML) to develop and validate predictive models identifying patients at risk of extended LOS (9-13). Prior studies have investigated LOS prediction in disease-specific groups such as heart failure (14), cardiac surgery (15), and thermal burns (16), or population-specific groups such as intensive care unit (ICU) and neonatal care (17,18). Other recent reviews have looked at this outcome from a risk adjustment perspective (19) or a broad epidemiological perspective (20).
Prediction of the risk of extended LOS in heterogeneous populations such as all hospital admissions and General Medicine is common but lacks impact (20,21). Accurate and timely risk prediction can enable targeted interventions to streamline care, reduce unwarranted extended LOS, and potentially impact system-level management of patient flow by providing high-level visibility of impending access issues and enabling proactive decision-making (2,22). A review of the literature published in 2019 examined methodologies applied to create LOS predictions. The authors found that approximately half of the included studies (36 of 74) did not restrict the studied population by diagnosis groups, and only a third had calculated the prediction at the time of admission or earlier (20). We aimed to extend this review by broadening the search, evaluating the risk of bias (ROB) (23) of the included studies, and adding data from the most recent 2 years to capture the emerging Artificial Intelligence (AI)/ML approaches. This review aims to identify validated prediction variables and methods used in tools that predict the risk of extended LOS in all hospital admissions and specifically General Medicine admissions. This is needed to advance the evidence base required by healthcare administrators and planners on possible future predictive tools supporting efficient resource utilization and patient flow.
Methods
"Prediction tools" or "tools" for this review can include any type of risk assessment tools/flags/factors or risk prediction models that used computerized statistical methods for predicting hospital LOS.This review was conducted according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines (24).Protocol was registered on the International Prospective Register of Systematic Reviews (PROSPERO) (https:// www.crd.york.ac.uk/PROSPERO/) (#CRD42021272198).
Search strategy
We searched CINAHL, EMBASE, OVID MEDLINE, OVID EMCARE, and Cochrane systematically on 31 August 2021 and updated the search on 28 June 2023, using a predefined search strategy guided by our library scientist (VD), as shown in Supplementary Table S2. The primary concepts searched were "risk factors", "statistical/prediction models", and "length of stay". Considering the rapidly advancing field of health data analytics, we narrowed the search to only include English-language articles from OECD-comparable countries published after 2010. Reference lists of included publications were examined to identify any additional potential studies. A gray literature search using key terms was completed in Google and Google Scholar in a time-limited way (20 h over 4 weeks).
Eligibility criteria
As shown in Supplementary Table S3, we included primary studies that reported LOS predictive tools for adults admitted to acute care hospitals and that reported prediction metrics (25), to inform what works in LOS prediction methods and in what context. No limits on publication types were applied. We excluded studies looking at day procedures (LOS < 24 h) and those describing or including admissions to nursing homes or subacute/rehabilitation facilities, due to the difference in their operational structure and purpose compared to the acute hospital setting.
Models for all admissions (mixed medical and surgical admissions) were the focus, based on recent reports suggesting the positive impact of identifying and managing acuity on hospital resource utilization (26). We also studied prediction tools for General Medicine admissions (2,3,5), due to their high LOS variation; these are summarized in a separate section.
Studies that were not primary research, including conference abstracts, unpublished studies, book chapters, and review articles, were excluded. We also excluded reports focusing on condition/procedure-specific LOS tools such as burns, joint replacements, cardiology, cancer, maternity, and pediatric admissions, as well as studies that did not assess LOS as an outcome.
Study screening and data extraction
Screening, full-text review, data extraction, and quality assessment were completed using the web-based data management platforms Covidence (27) and EndNote X9.3.3 (Clarivate). Title, abstract, and full-text screening was conducted by two reviewers (SG and JG), who were responsible for selecting studies for inclusion. In case of discrepancies, consensus was reached via discussion. SG extracted data based on the CHARMS and TRIPOD checklists (28,29) into a predefined data extraction table.
Quality assessment
The risk of bias was assessed independently by two reviewers (SG and YH) based on PROBAST recommendations. Disagreement was resolved by consulting a third reviewer (JE). Using the PROBAST tool (30), studies were rated as low/moderate/high concern for bias and applicability in each of the four domains: participants, predictors, outcomes, and analysis (23,29). We used guidance from the adaptation of the PROBAST tool for ML models (31).
Data synthesis
The data items extracted for each included article are provided in Supplementary Table S4. Data sources were classified as (1) administrative/registry/claims or (2) medical records, and prediction modeling methods as classic statistical methods, ML, or both. Model performance measures of discrimination and calibration were extracted and synthesized.
Discrimination measures, where possible, were presented as the Area Under the Receiver Operating Characteristic curve (AUROC) with a 95% confidence interval (CI) (21). We applied AUROC thresholds of 0.5 to suggest no discrimination (ability to identify patients with and without the risk under test), 0.7-0.8 as acceptable, 0.8-0.9 as excellent, and >0.9 as outstanding discrimination (32). Calibration was assessed using reported calibration plots, where available, or using calibration statistics (32,33).
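To make the banding concrete, a minimal Python sketch follows; the labels and predicted risks are hypothetical, scikit-learn is assumed available, and the band for values between 0.5 and 0.7 is not named explicitly in the review.

```python
from sklearn.metrics import roc_auc_score

def discrimination_band(auroc: float) -> str:
    """Map an AUROC onto the bands applied in this review (32)."""
    if auroc <= 0.5:
        return "no discrimination"
    if auroc < 0.7:
        return "below acceptable"   # band not named explicitly in the review
    if auroc < 0.8:
        return "acceptable"
    if auroc < 0.9:
        return "excellent"
    return "outstanding"

# Hypothetical prolonged-LOS labels and predicted risks for eight admissions.
y_true = [0, 0, 1, 1, 0, 1, 0, 1]
y_score = [0.10, 0.40, 0.35, 0.80, 0.20, 0.70, 0.30, 0.90]
auroc = roc_auc_score(y_true, y_score)
print(f"AUROC = {auroc:.3f} -> {discrimination_band(auroc)}")
```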
Predictor variables in the included LOS models were classified into categories adapted from the recent systematic review by Lequertier et al. (20), as shown in Supplementary Table S5. The level of validation (development with or without internal validation and/or external validation) was based on the PROBAST guideline (30).
Meta-analysis
Meta-analysis of prediction models is challenging, especially when models are specified differently and have heterogeneous predictors and outcome definitions (34). Conversely, it is also valuable to understand the impact of the underlying variation in case mix and population characteristics on the prediction estimates (35). As such, we present a random-effects meta-analysis using restricted maximum likelihood estimation for external validation studies of LOS prediction models. As guided by recent literature on meta-analysis of prediction model studies (36,37), models having comparable outcome types (binary) and predictors were included, and we report the 95% prediction interval of theta (21) to provide a range for the estimated performance of the model in a new population. Stata SE 17 was used for statistical analysis and calculation. When the standard error of the AUROC was unreported, it was estimated using the methods of Hanley and McNeil (38) and Kottas et al. (39). Heterogeneity was reported as I² (40). The number of eligible validation studies was small, and hence further investigation of sources of heterogeneity was not possible.
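For readers who want to reproduce the two estimation steps mentioned here, the sketch below implements the Hanley-McNeil standard error for an AUROC and a random-effects pooled estimate with a 95% prediction interval. The AUROC values and sample compositions are hypothetical, and DerSimonian-Laird estimation of the between-study variance is used for brevity, whereas the review itself used REML in Stata.

```python
import math
from scipy.stats import t

def hanley_mcneil_se(auc: float, n_pos: int, n_neg: int) -> float:
    """Standard error of an AUROC (Hanley & McNeil, 1982)."""
    q1 = auc / (2.0 - auc)
    q2 = 2.0 * auc * auc / (1.0 + auc)
    var = (auc * (1 - auc) + (n_pos - 1) * (q1 - auc**2)
           + (n_neg - 1) * (q2 - auc**2)) / (n_pos * n_neg)
    return math.sqrt(var)

def random_effects_prediction_interval(thetas, ses):
    """Pooled estimate and 95% prediction interval for a new population."""
    k = len(thetas)
    w = [1.0 / se**2 for se in ses]                       # fixed-effect weights
    theta_fe = sum(wi * ti for wi, ti in zip(w, thetas)) / sum(w)
    q = sum(wi * (ti - theta_fe) ** 2 for wi, ti in zip(w, thetas))
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)                    # DL between-study variance
    w_re = [1.0 / (se**2 + tau2) for se in ses]
    theta_re = sum(wi * ti for wi, ti in zip(w_re, thetas)) / sum(w_re)
    se_re = math.sqrt(1.0 / sum(w_re))
    half = t.ppf(0.975, k - 2) * math.sqrt(tau2 + se_re**2)
    return theta_re, (theta_re - half, theta_re + half)

# Hypothetical AUROCs and sample compositions from four validation studies.
aucs = [0.62, 0.66, 0.74, 0.78]
ses = [hanley_mcneil_se(a, n_pos=400, n_neg=1600) for a in aucs]
pooled, pi = random_effects_prediction_interval(aucs, ses)
print(f"pooled AUROC = {pooled:.3f}, 95% PI = ({pi[0]:.3f}, {pi[1]:.3f})")
```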
Publication bias
Forest plots showing effect sizes and confidence intervals were generated. Egger's regression was used for evaluating funnel plot asymmetry due to small-study effects (33,41).
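Egger's regression can be expressed compactly as a regression of the standardized effect on precision, with the intercept testing for asymmetry. The sketch below (hypothetical effects and standard errors; statsmodels assumed available) illustrates this form of the test.

```python
import numpy as np
import statsmodels.api as sm

effects = np.array([0.62, 0.66, 0.74, 0.78])   # hypothetical study effects
ses = np.array([0.015, 0.012, 0.010, 0.008])   # hypothetical standard errors

precision = 1.0 / ses
z = effects / ses                               # standardized effects
X = sm.add_constant(precision)                  # intercept + precision
fit = sm.OLS(z, X).fit()
print(f"Egger intercept = {fit.params[0]:.3f}, p = {fit.pvalues[0]:.4f}")
# A small intercept p-value suggests small-study effects / funnel asymmetry.
```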
Results
The search yielded 8,103 studies from OVID Medline (4,172), OVID Emcare (260), CINAHL (555), EMBASE (3,076), and Cochrane (40). Records were exported to Covidence, and 319 duplicates were removed. In total, 7,784 records were screened, which yielded 213 potential reports for full-text retrieval. Citation searching identified an additional 17 records, which were assessed for eligibility. A recent update identified a further nine studies for full-text review. Following the full-text review, 39 studies were selected for inclusion based on the eligibility criteria: 14 reporting on GenMed populations and 25 on all admissions. The PRISMA diagram in Figure 1 illustrates the search. Study characteristics are summarized in Supplementary Table S6.
All admissions prediction models
Of the 25 studies, the majority were published in the last 5 years; 11 were from the United States, six from the European Union, two from Australia, and one each from the United Kingdom, Canada, Japan, South Korea, Algeria, and Singapore. All studies were observational: two prospective, 22 retrospective, and a single cross-sectional study. The median study duration was 3.75 years (range 0.6-12), with a median sample size of 53,211 (range 332-42,896,026).
Data sources
There was greater use of medical records data (60%) compared to administrative data (40%). All studies either collected data at and during admission (84%) or used data collected post-discharge in addition to admission data. LOS was predicted categorically in 64%, continuously in 28%, and both categorically and continuously in 8% of studies. The cut-off for defining prolonged LOS ranged from 5 to 14 days, and two studies used a predefined diagnosis-specific increase of LOS tertile as their cut-off.
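As a simple illustration of the categorical outcome definitions reported here, the following pandas sketch derives a binary prolonged-LOS label from admission and discharge timestamps at an assumed 7-day cut-off; the admissions table is hypothetical, and the included studies used cut-offs between 5 and 14 days.

```python
import pandas as pd

# Hypothetical admissions table with admit and discharge timestamps.
admissions = pd.DataFrame({
    "admit": pd.to_datetime(["2022-01-01", "2022-01-03", "2022-01-05"]),
    "discharge": pd.to_datetime(["2022-01-04", "2022-01-20", "2022-01-06"]),
})
admissions["los_days"] = (admissions["discharge"] - admissions["admit"]).dt.days

CUTOFF_DAYS = 7  # assumed; included studies used cut-offs from 5 to 14 days
admissions["prolonged_los"] = (admissions["los_days"] > CUTOFF_DAYS).astype(int)
print(admissions)
```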
Predictive modeling methods
The level of validation was low, with only 2 of the 25 studies reporting validation (four models). Of the 45 models reported in the 25 studies, classical statistical approaches accounted for just under half (44%); ML methods such as ridge regression, random forest, gradient boosting machine algorithms, and generalized linear models were used in 32%; and deep learning approaches (24%) included stacked recurrent neural networks, channel-wise long short-term memory (LSTM), multi-modal deep learning, and ensemble-based neural networks. The greater prevalence of ML and deep learning approaches in this group likely reflects the number and complexity of the variables and the large sample sizes used in these studies.
Analytical pipeline
The median number of predictors used was 18 (range 2-714). Inclusion of all candidate predictors in multivariable modeling was common (96%); pre-selection of variables was done in a single study (42). Feature/predictor selection methods during multivariable modeling were poorly reported in 76% of studies. When reported, AIC (43-45), recursive feature elimination (46), and the full-model approach (47,48) were used for feature/predictor selection. Missing data were handled using imputation by various methods in 16% of studies but remained under-reported in the remaining studies (84%). Methods used to manage over-fitting and optimism were applied in 80% of studies and included combinations of random split, k-fold cross-validation, bootstrapping, hyper-parameter tuning and selection, and stochastic gradient descent techniques; they were not reported in 20% of studies. The more recent studies reported various hyperparameter optimization methods such as Bayesian (49) and Gaussian (50) based selection and tuning processes, gradient descent methods (51), and 10-fold cross-validation (52).
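A minimal sketch of two of the overfitting controls listed above, 10-fold cross-validation combined with hyper-parameter tuning for a gradient-boosting classifier, is shown below; the data are synthetic and the grid is illustrative, not taken from any included study.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import GridSearchCV, StratifiedKFold

# Synthetic, imbalanced data; 18 features mirrors the median predictor count.
X, y = make_classification(n_samples=2000, n_features=18, weights=[0.8, 0.2],
                           random_state=0)

param_grid = {"n_estimators": [100, 300], "max_depth": [2, 3]}  # illustrative
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
search = GridSearchCV(GradientBoostingClassifier(random_state=0),
                      param_grid, scoring="roc_auc", cv=cv)
search.fit(X, y)
print(f"best AUROC (CV) = {search.best_score_:.3f}, params = {search.best_params_}")
```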
Table 1 and Supplementary Table S8 show the key information for all admission LOS prediction models included in the systematic review.
Reported performance metrics and interpretation
The frequency of the various reported model performance measures is summarized in Figure 2 and Supplementary Table S7.
Discrimination
The AUROC was the most frequently reported metric of discrimination (42% of models), as outlined in Figure 2. The median AUROC was 0.7365 (range 0.63-0.832), indicating fair-to-good discriminative ability for the majority of the models.
Calibration
Calibration metrics (likelihood ratio index, HL goodness of fit, and calibration plots) were reported in only 20% of models. All the reported models appeared to be sufficiently calibrated.
Predictors/variables
The most frequently used predictors and predictor categories are outlined in Table 2 and Supplementary Figure 1. Variable/feature importance was reported in half the studies using diverse association metrics such as hazard ratio, incident rate ratio, and estimates/regression coefficients, making comparisons based on the strength of association of predictors imprecise.
The top three predictor categories used were risk scores (68%), demographic and anthropometric variables (68%), and admission characteristics (60%). Risk scores included illness severity scores, functional indices, co-morbidity scores, and neurocognitive screening tools. A wide range of demographic variables representing the social determinants of health (SDOH), such as ethnicity, socioeconomic index, anthropometric characteristics, and marital status, were used frequently. Admission characteristics, such as admission source, day/month of admission, need for ICU admission, admitting unit, procedure type, time and length of last admission, elapsed LOS, and discharge/transfer destination, were used widely, possibly owing to the predominant use of medical record data sources and ongoing data collection throughout the admission period. Many studies using electronic medical records used information about the number of tests, consults, assessments, medications, and investigations as proxy indicators of extended stay rather than the actual results of these events (47,51,58,66,68).
Physical examination parameters and diagnostic and administrative variables were included in 40% of studies, while documentation and clinical notes, medications, health professional characteristics, and hospital characteristics were included less frequently. Admission diagnoses such as cancer and mental health conditions were noted as important features having an impact on LOS.
Quality assessment
The quality assessment of the included studies is outlined in Table 3. Although many retrospective studies were done using secondary data sources, most were deemed to be from high-quality databases with evident reporting standards.
Of the 25 studies, the majority were at low ROB in the participants (76%), predictors (72%), and outcome (68%) domains, implying an overall low concern for applicability. Studies at moderate-to-high ROB in these domains demonstrated unclear reporting of data source quality, availability of predictors during implementation, determination, definition, and consistency of outcomes, and inappropriate participant inclusion/exclusion.
Quality assessment of the analysis methods showed that 68% of studies were at high and 16% at moderate or low risk of bias. Limitations in model analysis and methodology reporting in high-risk studies included a lack of comprehensive reporting of model performance measures (no calibration measures), overfitting and optimism, missing data, and handling of data complexity, potentially implying poor adoption/awareness of the TRIPOD reporting guideline (29).
Meta-analysis
We conducted a meta-analysis of four LOS validation models that used frailty risk scoring tools based on administrative data [Hospital Frailty Risk Score (48) and Global Frailty Score (65)] to predict LOS using logistic regression analysis. The meta-analysis reports a 95% prediction interval [shown in Figure 3 (forest plots) and Table 4] to account for varying model performance due to differences in case mix and other study-level factors (21). The random-effects meta-analysis showed a 95% prediction interval for theta of 0.596-0.798 (I² = 99.92%). Sources of heterogeneity were not explored further statistically due to the small sample size. However, Supplementary Table S13 outlines the differences in study populations and characteristics.
Publication bias
We observed no small-study effects on statistical testing (Egger's test p < 0.001), shown in Supplementary Table S11. In combination with the visual inspection of the funnel plots, we observed no publication bias in our included studies.
General medicine prediction models
The majority of the studies in this subgroup came from Europe (nine of 14) and the rest from the United States, Australia, and Japan (three, one, and one, respectively). The median study duration was 2.9 years (range 0.2-12), with a median sample size of 19,095 (range 33-2,997,249) and predominant use of administrative data (64%). The timing of prediction in most studies (13 of 14) was on admission, with a large range of prolonged-LOS cut-offs used (3-30 days).
Predictive modeling methods
None of the 30 models reported in the 14 studies was externally validated. Overall, 56% used classical statistical approaches such as multivariable logistic (n = 14) and Cox/Poisson (n = 3) regression. The rest were ML (37%) and deep learning (artificial neural network) (7%) models. Supervised ML methods commonly used were bagged regression trees (n = 3), random forest (n = 4), linear support vector machine (SVM) with a Chi-square filtering method and the synthetic minority over-sampling technique (SMOTE) (n = 3), and one decision tree (CHAID) model. Binary outcome modeling was more common (90% of models). The AUROC was the most frequently reported metric of discrimination (46%), as outlined in Supplementary Figure 2, followed by sensitivity, specificity, and C-statistic.
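One of the recipes reported in this group, chi-square feature filtering with SMOTE over-sampling feeding a linear SVM, can be sketched as follows. The pipeline below uses synthetic data and assumed hyper-parameters (k=12 selected features, default SMOTE settings), so it illustrates the general approach rather than any specific included model.

```python
from imblearn.over_sampling import SMOTE
from imblearn.pipeline import Pipeline
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import LinearSVC

# Synthetic, imbalanced data standing in for a GenMed admissions cohort.
X, y = make_classification(n_samples=1000, n_features=30, weights=[0.85, 0.15],
                           random_state=1)

pipe = Pipeline([
    ("scale", MinMaxScaler()),          # chi2 requires non-negative features
    ("filter", SelectKBest(chi2, k=12)),
    ("smote", SMOTE(random_state=1)),   # over-sample the prolonged-LOS class
    ("svm", LinearSVC(max_iter=5000)),
])
print(cross_val_score(pipe, X, y, cv=5, scoring="roc_auc").mean())
```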
Analytical pipeline
The median number of predictors used was 12. Most studies (64%) included all candidate predictors in multivariable modeling, and pre-selection of variables based on univariable analysis was noted in 35% of studies. Feature/predictor selection methods during multivariable modeling and the handling of missing data were poorly reported. In the remaining studies (45,69-72), p-value thresholds were used for feature/predictor selection, and patients with missing data were excluded (73-75). Methods used to manage overfitting and optimism were used frequently (64%) and included combinations of random split, k-fold cross-validation, bootstrapping, and sensitivity analysis.
Predictor variables
Frequently used predictor categories are shown in Supplementary Figure 1 and Supplementary Table S8. Predictor categories such as risk scores (86%), diagnoses (primary/secondary, including co-morbidities) (79%), and demographic and anthropometric variables (71%) were used most frequently. Commonly used risk scoring tools were illness severity scores/indices (71), the Charlson Co-morbidity Index and Manchester triage scores (70), the Brief Geriatric Assessment tool (74), the Exton-Smith scale (pressure injury risk), ADL score and nutritional risk tools (76), and COMPRI (care COMplexity PRediction Instrument) (77). Cardiovascular, respiratory, gastrointestinal, and neurological diagnostic groups were noted as significant predictors, in addition to demographic characteristics such as age, sex, and living situation. Physical/laboratory parameters (43%) such as serum markers and routine observations including oxygen requirements, medication variables such as >5 drugs/day (36%), and admission characteristics (14%) such as day/month of admission, elapsed LOS, and discharge destination were also included in prediction models, albeit less frequently. The predominant use of diagnostic categories in this group emphasizes the importance of clinical presentation in General Medicine admissions and of diagnostic complexity reflecting acuity and, by proxy, LOS.
Quality assessment
The risk of bias (ROB) was low in the predictors and outcome assessment domains of all studies (Supplementary Table S12). In total, 28% of studies were found to have a moderate-to-high bias in the participant selection domain due to unclear data source information. Bias was also noted to be high in the analysis domain of all included studies. The commonly observed pitfalls were a lack of comprehensive reporting of model performance measures (no calibration measures) (78%), overfitting and optimism (35%), missing data (85%), and handling of data complexity (71%). As a result, the overall ROB for all included General Medicine studies was high, suggesting that results should be interpreted and translated cautiously.
Discussion
This systematic review of risk prediction models for prolonged LOS in all admissions and General Medicine admissions showed a sharp increase in reporting of LOS prediction studies since 2018, with widespread use of ML methods. Most models calculated the risk on admission. Reported prediction models showed good discriminative ability; however, they lacked calibration information, limiting impact assessment. Only four external validation models were reported, with extensive use of electronic medical records and ML and AI methods. Overall, study reporting was poor, especially for model analysis and performance, impacting the ability to assess model quality and the potential for translation into practice. In addition to detailed reporting aligning with guidelines such as TRIPOD and PROBAST, the high-quality studies had large sample sizes and reliable data sources and used retrospective data. A meta-analysis demonstrated prediction intervals in the moderate-to-good discrimination range, demonstrating that these macro-level algorithms may have some utility for identifying inpatients at risk of prolonged LOS.
Observations about a shortage of external validation studies have been made by other researchers (78-80). Underreporting of external validation studies, which often perform poorly, may be contributing to this observation (80,81). Another factor may be the lack of consistency in the predictor variables used in the various LOS models. Consensus on a consistent set of predictor variables could assist researchers across the world in conducting external validations and working toward establishing transportable models predicting the risk of prolonged LOS. Increasing age, presence of multiple co-morbidities (assessed via diagnoses or risk scores), illness severity (assessed using risk scores or proxy indicators such as number of medications), and admission characteristics such as type, source, and day of admission were used most frequently in the GenMed admissions. In addition to these, all admissions models predominantly included physiological measurements (such as BP and oxygen saturation) and functional independence measures (risk scores or demographic variables such as living situation). The extensive use of non-clinical features may suggest that systemic and environmental factors have a considerable role alongside clinical factors in the prediction of LOS in heterogeneous populations.
Literature about procedure-specific prediction models with good prediction accuracy (82,83) is abundant, with models primarily predicting clinical outcomes such as 30-day mortality and postoperative pain. LOS prediction models for surgical populations have been analyzed and published in a separate manuscript (84). LOS predictions are considered to have a dual benefit, being a proxy measure of clinical outcomes as well as of hospital efficiency (1). As such, population-based LOS predictions are key enablers of organizational resource planning as well as of the daily access and flow issues managed by frontline staff. Hence, the purpose of prediction should guide the choice of procedure-specific vs. population-specific models.
SDOH are also associated with health outcomes such as longer acute LOS (85,86). Factors such as socioeconomic index, residential postcode, cohabitation status, and level of education are often considered proxies for SDOH and can be extracted from routinely collected data. Only two studies (47,57) in this review explicitly used these factors over and above the standard demographic variables of age, gender, ethnicity, and marital status. Levin et al. included predictors such as addiction treatment medications, psychotherapeutics, case management and social work consults, and clinical flags of substance abuse, which were correlates of SDOH. Notably, only seven of 39 studies clearly indicated the inclusion of other socioeconomic variables such as ethnicity, race, religion, language, or marital status. This could potentially be a limitation of the data sources used or of the capability for data linkage with other data sources which could provide this rich detail. Future models could benefit from the inclusion of reliable indicators of SDOH to identify cases where prolonged-LOS risk may be more ambiguous.
Clinical implementation and deployment of LOS prediction models continue to be a challenge despite extensive efforts in the development of such models (87-89). Low digital literacy levels, serious technological debt in healthcare infrastructure systems, and issues with the reliability of data and interoperability have been widely cited in the literature as potential roadblocks to the implementation of such predictive analytical decision support. In addition, successful implementation strategies must consider existing workflows and clinician perspectives on the utility and value of these predictive algorithms. As such, co-design and co-production with end-users is crucial to embed these tools as an integrated legacy framework for future use by the health service. Furthermore, in this process, external validations must be conducted in a large number of settings to show all stakeholders, including clinicians, administrators, and patients, that this type of decision support can add value and is trustworthy.
Strengths and limitations
The validated PROBAST quality assessment of the included studies was a strength of our review. It revealed a significant gap in the adoption of TRIPOD guidelines for prediction modeling studies, presenting evidence of moderate-to-high ROB. Poor reporting impacts implementation feasibility and external validation of existing prediction models. Many recent publications have implored the research community to attempt external validation before developing new models, while accepting the evident challenges in reporting and reproducibility (80,90). This review further strengthens this imperative to improve the reporting of prognostic prediction modeling studies in LOS.
The majority of the data sources in our systematic review were classified as secondary data sources. As per the PROBAST tool recommendation, secondary data sources are considered at high ROB due to a lack of data collection protocols, increasing the uncertainty about data validity (91) and limiting generalizability. Secondary data use is critical for long-term real-world evaluation of health interventions, system efficacy, and continuous improvement and monitoring of health service delivery (92). Transparent reporting of data quality issues such as missingness, inaccuracy, and inconsistency can provide some reassurance that routinely collected data can be used as a strategic resource for research to improve health system efficiency and effectiveness (91,93,94). We suggest that data hubs and repositories adopt evidence-based standardized frameworks to guide their data governance and evaluation practices (92,95) to ensure transferability and generalization of results of secondary analysis of routinely collected health data.
Broad recommendations
Future studies should (1) validate the prediction models on prospective data to enable near real-time LOS risk prediction and attempt external validation of existing models to test implementation feasibility, (2) use appropriate guidelines (23,29) to report prediction study findings, (3) utilize data available on and within 24 h of admission to enable prognostic prediction and proactive interventions, and (4) include variables and assessments that are available from routinely collected data to reduce the administrative burden on frontline clinicians.
Conclusion
To the best of our knowledge, this is the first systematic review assessing the quality of risk prediction models for prolonged LOS in all admissions and GenMed studies. Overall, LOS risk prediction models appear to show an acceptable-to-good ability to discriminate; however, transparent reporting and external validations are now required for the potential benefits of such macro-level prediction tools to be implemented inside hospitals to assist with early identification of inpatients at risk of a prolonged LOS.
Figure 1: PRISMA flow diagram demonstrating the systematic review of the literature for hospital length of stay prediction tools. PRISMA, Preferred Reporting Items for Systematic Reviews and Meta-Analyses; **based on exclusion criteria provided in the Supplementary Tables; OECD, Organization for Economic Co-operation and Development.
Of the two studies reporting comprehensive performance measures, including calibration, discrimination, and overall accuracy, both Harutyunyan et al. (LOS > 7 days) and Hilton et al. (LOS > 5 days) demonstrated excellent discriminative ability, with an AUROC of 0.84 (49,59) and good calibration, using ML/deep learning models (recurrent neural networks, LSTM, and gradient boosting machines) and data from electronic medical records.
Figure 2: Frequency of LOS prediction model performance metrics reported in all admissions LOS prediction models. AIC, Akaike information criterion. The following performance metrics were used fewer than three times and are not represented in the figure: Pred/z-score/MMRE (mean magnitude of relative error), model adequacy/model fit R²/adjusted R-squared, Cohen's kappa, explained variance/Nagelkerke's R-squared, Brier score, and median AE (absolute error).
Table 2: Most frequently used variables in risk prediction of prolonged LOS in all admissions, covering laboratory tests and risk scores, admission characteristics, physical examination (biological and physiological) parameters, and diagnoses (primary/secondary, including co-morbidities) and procedure types. APR-DRG, all patients refined diagnosis related groups; CCI, Charlson co-morbidity index; GCS, Glasgow coma scale; MUST, malnutrition universal screening tool; NRS, nutrition risk screening; PG-SGA, patient-generated subjective global assessment.
Table 3: Risk of bias assessment of all admissions studies using the PROBAST tool.
Figure 3: Meta-analyses of four externally validated models for LOS prediction in the all admissions group.
Table 4: Meta-analysis summary of four externally validated models for LOS prediction in the all admissions group.
Graphs of protein-water hydrogen bond networks to dissect structural movies of ion-transfer microbial rhodopsins
Microbial rhodopsins are membrane proteins that use the energy absorbed by the covalently bound retinal chromophore to initiate reaction cycles resulting in ion transport or signal transduction. Thousands of distinct microbial rhodopsins are known and, for many, three-dimensional structures have been solved, including entire sets of structures obtained with serial femtosecond crystallography. This sets the stage for comprehensive studies of large datasets of static protein structures to dissect structural elements that provide functional specificity to the various microbial rhodopsins. A challenge, however, is how to efficiently analyze intra-molecular interactions based on large datasets of static protein structures. Our perspective discusses the usefulness of graph-based approaches to dissect structural movies of microbial rhodopsins solved with time-resolved crystallography.
Introduction
Microbial rhodopsins belong to a large family of seven-helical membrane proteins in which photo-isomerization of the covalently bound retinal molecule triggers reaction cycles resulting in ion transfer or photo-sensing (Govorunova et al., 2017). The diversity of biological functions performed by microbial rhodopsins underlines their importance for dissecting sequence-structure-function relationships. Moreover, some microbial rhodopsins are used as optogenetic tools to control the membrane potential of excitable cells (Zhang et al., 2006; Gunaydin et al., 2010; Berndt et al., 2011; Kandori, 2020). Decades of studies have led to a detailed understanding of the general principles of action of microbial rhodopsins; for comprehensive reviews, see, e.g., refs. (Lanyi, 1999; Heberle, 2000; Balashov and Ebrey, 2001; Herzfeld and Lansing, 2002; Kandori, 2004; Wickstrand et al., 2015; Bondar and Smith, 2017; Govorunova et al., 2017; Kandori, 2020; Brown, 2022). Here, we focus on the usefulness of graph computations to evaluate structural changes along reaction cycles of ion-transfer microbial rhodopsins based on structural movies derived with structural biology.
During its reaction cycle, an ion-transfer microbial rhodopsin undergoes a sequence of protein structural changes that couple to the retinal isomeric state, relocation of discrete internal water molecules, and ion transfer. Time scales for inter-conversions among subsequent intermediate states can vary substantially; for example, in the case of the bacteriorhodopsin (BR) proton pump, the lifetimes of intermediate states characterized with spectroscopy range from femtoseconds to milliseconds (Lanyi, 1993). Structural biology has provided invaluable data on the architecture and structural dynamics of microbial rhodopsins, from the first electron microscopy structure of BR solved at a resolution of 7 Å (Henderson and Unwin, 1975), to crystal structures solved at resolutions of 1.05-1.6 Å for BR (Borshchevskiy et al., 2022), Acetabularia rhodopsin-1 (Furuse et al., 2015), and archaerhodopsin-3 (Bada Juarez et al., 2021). Recently, entire movies of structural changes along the reaction cycle, up to the millisecond time domain, were solved with time-resolved serial femtosecond crystallography (TR-SFX) for BR (Nango et al., 2016; Nass et al., 2019; Weinert et al., 2019), the sodium-pumping rhodopsin KR2 (Skopintsev et al., 2020), and the channelrhodopsin chimera C1C2 (Oda et al., 2021). For the chloride-ion pumping rhodopsin, CIR, TR-SFX resolved the early-stage dynamics within 100 ns after illumination (Yun et al., 2021).
Data from TR-SFX may be combined into sets of structures for given time intervals, such as an early picosecond time domain, nanosecond-, microsecond-, and millisecond-domains. Together, the ensembles of average protein structures provide an overview of the protein structural dynamics in the crystal environment, which are relevant to reaction cycles in physiological conditions-as verified for BR (Nango et al., 2016) and KR2 (Skopintsev et al., 2020). By providing a high-resolution view of structural rearrangements along key steps of the reaction cycle, datasets of TR-SFX structures are a unique opportunity to dissect the time evolution of protein conformational changes, and to understand how protein conformational changes ultimately lead to ion transfer.
Hydrogen (H)-bond networks are central to formulating hypotheses about reaction mechanisms of membrane transporters in general. In the particular case of ion-transfer microbial rhodopsins, H-bond networks of the retinal Schiff base, and of protein sidechains directly involved in ion transfer, are thought essential for functionality. For microbial rhodopsins whose structural movies have been solved with TR-SFX, the challenge is how to dissect the entire H-bond network of the protein, and to identify sites where H-bonds break or form as the protein passes from one intermediate state to the next. We argue here that graph-based algorithms that compute and compare H-bond networks in datasets of static protein structures enable us to dissect the structural movies captured in experimental data.
Graphs of H-bond networks computed from static protein structures of microbial rhodopsins. An H-bond graph consists of nodes (here, the H-bonding protein groups) and edges (here, direct or water-mediated H-bonds between protein groups). A local H-bond cluster, or local H-bond network, consists of a subset of nodes and edges that are all interconnected.
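In graph terms, this can be illustrated in a few lines of Python; the sketch below (networkx assumed available; residue and water labels are illustrative, not taken from a specific structure) builds a small H-bond graph and recovers its local clusters as connected components.

```python
import networkx as nx

g = nx.Graph()
g.add_edges_from([
    ("D85", "T89"),                              # direct sidechain H-bond
    ("D85", "w402"), ("w402", "SchiffBase"),     # water-mediated bridge
    ("E194", "E204"),                            # a separate local cluster
])

# Local H-bond clusters are the connected components of the graph.
for i, cluster in enumerate(nx.connected_components(g), start=1):
    print(f"local H-bond cluster {i}: {sorted(cluster)}")
```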
Let us consider the structure of the resting state of a microbial rhodopsin, which we label as a reference structure R, and two intermediate-state structures, I1 and I2. The conserved H-bond graph of structures R, I1, and I2 consists of the nodes (H-bonding groups) and edges (H-bonds) that are common to the three structures within a set conservation threshold (Bertalan et al., 2020; Bertalan et al., 2021). That is, for three static structures of microbial rhodopsins captured at distinct moments of time, the conserved H-bond graph contains the H-bonding protein groups and the H-bonds that remain part of the network. The difference H-bond graph of structure I2 relative to that conserved H-bond graph indicates which H-bonding groups and H-bonds are present in structure I2 but not in structures R or I1. The comparison H-bond graph of structures I2 and R indicates H-bonding groups and H-bonds present in both structures vs. only in I2/R.
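Because each H-bond graph can be reduced to a set of edges, the conserved, difference, and comparison graphs correspond to simple set operations. The sketch below uses hypothetical edge sets and a strict intersection, i.e., a 100% conservation threshold, whereas Bridge/C-Graphs allow a configurable threshold.

```python
# Hypothetical edge sets for a reference structure R and intermediates I1, I2.
edges = {
    "R":  {("D85", "T89"), ("E194", "E204"), ("D96", "T46")},
    "I1": {("D85", "T89"), ("E194", "E204")},
    "I2": {("D85", "T89"), ("E194", "E204"), ("T178", "W182")},
}

conserved = edges["R"] & edges["I1"] & edges["I2"]   # common to all structures
difference_I2 = edges["I2"] - conserved              # present only in I2
comparison_I2_R = {
    "both": edges["I2"] & edges["R"],
    "only_I2": edges["I2"] - edges["R"],
    "only_R": edges["R"] - edges["I2"],
}
print(conserved, difference_I2, comparison_I2_R, sep="\n")
```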
To compute H-bond graphs we used the graph-based algorithms Bridge (Siemers et al., 2019; Siemers and Bondar, 2021) and C-Graphs (Bertalan et al., 2021) with standard geometric criteria of ≤3.5 Å distance between the H-bond donor and acceptor hetero-atoms; we included water bridges of up to three H-bonded water molecules between protein sidechains. To examine the location of the H-bond networks, protein structures were pre-aligned and their H-bond graphs projected with C-Graphs along the membrane normal (Bertalan et al., 2021). From the projected H-bond graphs we estimated the length of the networks and identified sites where H-bond networks become interrupted or connected in a given structure.
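The distance criterion itself reduces to a pairwise check on donor and acceptor hetero-atom coordinates; a rough numpy sketch with hypothetical coordinates follows. The actual Bridge/C-Graphs implementations additionally handle donor/acceptor typing and the water bridges described above.

```python
import numpy as np

DONOR_ACCEPTOR_CUTOFF = 3.5  # Å, the standard geometric criterion used here

# Hypothetical hetero-atom coordinates (Å) for a few H-bonding groups.
atoms = {
    "D85:OD1": np.array([0.0, 0.0, 0.0]),
    "T89:OG1": np.array([2.9, 0.5, 0.3]),
    "W182:NE1": np.array([8.0, 1.0, 2.0]),
}

def h_bonded(a: str, b: str) -> bool:
    """Apply the distance-only criterion to a donor/acceptor pair."""
    return np.linalg.norm(atoms[a] - atoms[b]) <= DONOR_ACCEPTOR_CUTOFF

for a, b in [("D85:OD1", "T89:OG1"), ("D85:OD1", "W182:NE1")]:
    print(a, b, "H-bond" if h_bonded(a, b) else "no H-bond")
```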
We analyzed in total 35 structures of microbial rhodopsins, grouped into four datasets according to the corresponding experimental measurement. For each dataset, the resting state is considered as the reference. Protein structures were downloaded from the Protein Data Bank (PDB) (Rose et al., 2021), with the corresponding PDB IDs indicated in Figure 1A. To facilitate comparisons of the projections of the H-bond graphs of distinct microbial rhodopsins, we used Chimera (Pettersen et al., 2004) to overlap each structure onto the structure of the dark (resting) state of KR2, PDB ID 6tk6 (Skopintsev et al., 2020), oriented along the membrane normal with Orientations of Proteins in Membranes, OPM (Lomize et al., 2011).
From the H-bond graphs we extracted the total number of H-bond connections between sidechains, which can be direct or water-mediated H-bonds (Figure 1B). Separately, we extracted the number of direct H-bonds between protein sidechains, without water-mediated connections (Figure 1C), and the number of direct H-bonds between protein sidechains and protein backbone groups (Figure 1F). We computed the number of internal water molecules as the number of water oxygen atoms within the membrane plane indicated by OPM (Lomize et al., 2011).
Resolution of the structure and the number of water molecules impact H-bond networks. Internal water molecules are central to reaction mechanisms of ion-transfer microbial rhodopsins (Gerwert et al., 2014; Tomida et al., 2021) because they may, e.g., impact the relative orientation of the protonated retinal Schiff base and its carboxylic primary proton acceptor (Gat and Sheves, 1993), the energetics of proton transfer reactions (Hayashi and Ohmine, 2000; Bondar et al., 2004), the translocation of sodium ions by KR2 (Suomivuori et al., 2017), and the opening of CHR2 (Ardevol and Hummer, 2017).
For the dataset of 35 static structures considered here, the overall water content depends somewhat on the resolution (Figure 1D): CIR structures solved at resolutions of 1.65-1.85 Å have 106 water molecules each, and the two KR2 structures solved at 1.6 Å resolution have 60 and 99 water molecules, respectively (Figure 1E). Likewise, the number of internal water molecules depends on the resolution, but also on the protein and time domain. The higher-resolution CIR structures for 0 ps-100 ps have 32-35 internal waters each, and the BR structures for 0-1.725 ms, 10-17 internal waters each. For comparison, each of the 0-4 ms C1C2 structures has 13-15 internal waters, and each of the 800 fs-20 ms KR2 structures, 9-10 waters (Figures 1A, D).
Within a dataset of TR-SFX structures of the same protein, changes in the number of H-bonds of protein sidechains may indicate structural rearrangements leading to the loss/formation of H-bonds along the reaction coordinate of the protein. Typically, within a dataset, the number of sidechain-sidechain H-bonds (Figure 1C) follows the same trend as the total number of H-bond contacts of that protein's H-bond graph (Figures 1B, C, F): each of the BR structures has 17-22 sidechain-sidechain H-bonds (Figure 1C) and 26-39 direct and water-mediated contacts between sidechains (Figure 1B); relative to the resting state, the total number of direct H-bonds increases by 10 in the 1.725 ms structure (Figure 1F). The three KR2 structures solved at 2.25 Å resolution for 800 fs-150 µs, 1 ns + 16 ns, and 30 µs + 150 µs have 50-52 connections each (Figure 1B) and 27-29 sidechain-sidechain contacts (Figure 1C); the 1 ms and 20 ms structures (2.5 Å resolution) have similar numbers of sidechain-sidechain H-bonds (Figure 1C) and internal waters (Figure 1A), but noticeably different numbers of connections (Figure 1B), suggesting rearrangements in water-mediated bridges and/or sidechain-backbone contacts; though distinguished by 17 internal water molecules (Figure 1E), the two 1.6 Å resolution KR2 structures solved at acidic vs. neutral pH have rather similar numbers of H-bond connections (Figure 1B), suggesting rearrangements of the protein-water H-bond network. The precise contribution that a net loss or gain of H-bond connections might bring to the energy profile along the reaction coordinate is unclear. A rough estimation could be made based on double-mutant cycle analyses of BR indicating that, on average, most sidechain-sidechain H-bonds contribute about 0.6 kcal/mol to the stability of the protein (Joh et al., 2008). Such energetic penalties would be compatible with the energy profile of the first half of the reaction cycle.

Figure 1: Color code: gray, BR (Nango et al., 2016); blue, channelrhodopsin (Oda et al., 2021); green, CIR (Yun et al., 2021); magenta, KR2 (Skopintsev et al., 2020). (D-F) Number of internal water molecules (D), total number of waters (E), and number of direct H-bonds between sidechains and between sidechains and backbone (F) as a function of the resolution at which the structure was solved. In panels D-F, each dot represents a microbial rhodopsin structure, color-coded as in panel (A). (G-J) Molecular graphics of BR (G), C1C2 (H), CIR (I), and KR2 (J). Structures were downloaded from the Protein Data Bank, PDB (Berman et al., 2000), and aligned along the membrane normal using Chimera (Pettersen et al., 2004).

Figure 2: Time evolution of H-bond graphs of microbial rhodopsins. The vertical axis shows the projection along the membrane normal (z coordinate) of the Cα atoms of the amino acid residues that are part of the H-bond graphs; the horizontal axis shows the Principal Component Analysis (PCA) projection along the membrane plane. (A-P) Difference H-bond graphs: conserved H-bond graphs computed using Bridge (Siemers et al., 2019; Siemers and Bondar, 2021) within C-Graphs (Bertalan et al., 2021) for the BR (A-D), C1C2 (E-H), KR2 (I-L), and CIR (M-P) structures indicated in Figure 1A are compared to selected structures of each dataset. Nodes and edges colored gray are present in all structures of the corresponding dataset; other colors indicate nodes and edges present only in the corresponding structure, and numbers in the upper right corner indicate the total/conserved connections. (Q-T) Comparison H-bond graphs computed for selected pairs of structures of BR (Q), C1C2 (R), CIR (S), and KR2 (T).
The conserved H-bond graph computed for the 13 BR structures (Nango et al., 2016) has 26 H-bonds (Figures 2A-D). The BR resting state hosts an H-bond network with a linear length of ~18 Å, which includes the primary proton donor (the protonated retinal Schiff base, K216), the primary proton acceptor D85, and D212 (also implicated in proton transfer), as well as the extracellular proton release dyad E194/E204 (Brown et al., 1995; Dioumaev et al., 1999; Bondar et al., 2004; Phatak et al., 2009); the cytoplasmic proton donor D96, which H-bonds with T46, is within ~15 Å distance (Cα-Cα) of D85/D212 (Figure 2A), comparable with distances of ~10-13 Å between proton-transfer sites in unrelated proton transporters (Bondar, 2022); such distances could be bridged by 3-4 H-bonded water molecules (Bondar, 2022). Progressive changes in the H-bond connections are observed in the H-bond graphs of the later intermediates. A water-mediated bridge between T178 and W182 appears at 760 ns (Figure 2B) and remains present in the 36.2 µs (Figure 2C) and 1.725 ms structures (Figure 2D). These two latter structures add several water-mediated bridges to the extracellular H-bond network, such that this network, though with about the same projected length, has greater connectivity than in the resting state (Figures 2A, C, D), as also indicated by the comparison H-bond graph between the resting and 1.725 ms structures (Figure 2Q). Taken together, the H-bond graphs for the structural movie of BR (Figures 2A-D, Q) indicate that, within ~1.7 ms, the internal H-bond network that couples D85 to E194/E204 gains connections without extending along the membrane normal.
C1C2 loses H-bonds in the central protein core. TR-SFX structures of the C1C2 channelrhodopsin chimera solved at a resolution of 2.3 Å captured C1C2 in the dark state and at 1 µs, 50 µs, 250 µs, 1 ms, and 4 ms after illumination (Oda et al., 2021). The conserved H-bond graph computed for all C1C2 structures has 46 H-bonds. Relative to this conserved H-bond graph, the resting-state and 1 µs structures contain a handful more H-bonds. The H-bond graph of the resting state (Figure 2E) has additional H-bonds at the extracellular network of the protonated retinal Schiff base (K296, see also Figure 1H). H-bonding between the retinal Schiff base and the primary proton acceptor D292, and between E162 and T166, is present only in the resting state (Figure 1H, Figure 2E). The 250 µs structure has additional H-bonds at the extracellular side (see T155, Y160, and R213 in Figure 2G); the 1 ms structure has two additional H-bonds (R312-D319 and S55-N70 in Figure 2H). Overall, unlike the rather localized changes in BR (Figure 2Q), the C1C2 resting vs. 4 ms structures are distinguished by H-bond connections throughout much of the protein (Figure 2R).
An extensive cytoplasmic H-bond network of KR2 shrinks within 20 ms. The KR2 sodium pump is of interest for optogenetics applications for the control of neuronal activity (Kato et al., 2015). KR2 couples sodium transport with changes in the protonation of the retinal Schiff base and D116: the Schiff base proton is transferred to D116 and then, following transfer of the sodium ion, back to the retinal Schiff base (Kato et al., 2015). N106, N112, E160, and D251 (Figure 1J) are part of the sodium conductance path (Kato et al., 2015; Skopintsev et al., 2020).
The conserved H-bond graph for the seven KR2 structures of the dataset (Figure 1A) has 28 H-bonds (Figures 2I-L); in all structures, an H-bond network extends from Y45 some ~12 Å further to the cytoplasmic side, to N264 (Figures 2M-P). In the resting state, this H-bond network includes 14 H-bonds (Figure 2I), of which 10 are lost in the 20 ms structure (Figure 2L). The difference H-bond graph between the resting and 20 ms structures (Figure 2T) indicates loss of H-bonds in the latter, particularly at the cytoplasmic H-bond network of N264 and at the extracellular network of the retinal Schiff base (Figures 2I, T).
Extensive H-bond rearrangements of the CIR within 200 ps. In the Nonlabens marinus chloride pump CIR, the BR proton-transfer groups D85 and D96 (Figure 1G) are replaced by N98 and Q109 (Figure 1I) (Yun et al., 2021); BR T89 (Figure 2B), which can function as an intermediate carrier for the Schiff base proton (Bondar et al., 2004; Bondar et al., 2008), is conserved as T102 (Figure 2N). Structures of the CIR captured with TR-SFX for the dark, resting state and for 1 ps, 2 ps, 50 ps, and 200 ps after illumination suggested rapid structural perturbation, such that the chloride ion, which is close to the protonated Schiff base (K235) in the CIR resting state (Figure 1I), is close to T102 at 50 ps (Yun et al., 2021). Rapid signal propagation might be needed to ensure an inter-helical pore opens to allow the chloride ion to pass (Yun et al., 2021), and is compatible with the flexible opening of an inter-helical passage observed previously for halorhodopsin (Gruia et al., 2005).
H-bond graphs computed for the eight ClR structures (Figure 1A) have 34 H-bonds in common, but the number of H-bond connections in each of the four structures of the resting, 1 ps, 2 ps, and 50 ps states ranges from 43 to 62 (Figure 1B), and difference H-bond graphs for intermediate states indicate extended H-bond changes throughout the protein (Figures 2M-P). For the 100 ps state, a cross-validation of the signals was interpreted to suggest consistent conformational changes at four different power levels of the laser (Yun et al., 2021). The H-bond graph computations here indicate that the total number of sidechain-sidechain and water-mediated H-bonds between sidechains varies, among the four distinct 100 ps structures solved at different laser power levels, between 43 and 62, which is the same interval found for the 0-50 ps structures (Figure 1B). This suggests that structural rearrangements of the ClR H-bond network might depend on the laser power, which could also affect the interpretation of the other structures of the ClR dataset.
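The conserved and difference H-bond graphs used throughout this comparison come down to simple set operations on per-structure edge lists. As a rough illustration (not the actual analysis code of this study), the following Python/networkx sketch assumes that the H-bonds of each structure have already been extracted as residue-pair edges; the residue names in the usage lines are placeholders, not data taken from the deposited structures.

import networkx as nx

def hbond_graph(edges):
    # Build an undirected H-bond graph from residue-pair edges.
    g = nx.Graph()
    g.add_edges_from(edges)
    return g

def conserved_graph(graphs):
    # Edges present in every structure of a time-resolved dataset.
    common = {frozenset(e) for e in graphs[0].edges()}
    for g in graphs[1:]:
        common &= {frozenset(e) for e in g.edges()}
    return nx.Graph([tuple(e) for e in common])

def difference_graph(g_ref, g_state):
    # Edges gained (+1) or lost (-1) in g_state relative to g_ref.
    diff = nx.Graph()
    for u, v in g_state.edges():
        if not g_ref.has_edge(u, v):
            diff.add_edge(u, v, change=+1)  # H-bond gained
    for u, v in g_ref.edges():
        if not g_state.has_edge(u, v):
            diff.add_edge(u, v, change=-1)  # H-bond lost
    return diff

# Placeholder usage with made-up edge lists:
resting = hbond_graph({("K216", "D85"), ("D96", "T46"), ("E194", "E204")})
late_ms = hbond_graph({("K216", "D85"), ("E194", "E204"), ("T178", "W182")})
print(conserved_graph([resting, late_ms]).number_of_edges())   # conserved H-bonds
print(list(difference_graph(resting, late_ms).edges(data=True)))  # gained/lost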
Conclusion
Time-resolved coordinate snapshots of microbial rhodopsins provide invaluable information about the structural rearrangements along the reaction path. Discrete water molecules captured in the structures mediate internal H-bond networks that ensure conformational couplings between remote regions of the protein, and participate in ion transfer reactions.
The graph computations suggest common features in the propagation of structural changes via H-bonds and H-bond networks of microbial rhodopsins, but also important differences that could be related to function. Thus, the BR resting and ms-intermediate states are distinguished by connections within an H-bond network that extends through ~18 Å at the extracellular side, which becomes more inter-connected in the ms structure (Figure 2A). A similar number of H-bond connections distinguishes the internal H-bond network of the resting vs. the 4 ms C1C2 structures but, unlike BR, C1C2 gains H-bonds at the cytoplasmic side (Figure 2R). By contrast, the resting state of KR2 has more extended H-bond connections than the 20 ms structure, particularly at the extracellular and central H-bond clusters (Figure 2T). A caveat of the analyses of H-bond graphs based on TR-SFX structural movies is that each structure of the data set might represent mixtures of intermediate states (Yun et al., 2021; Barends et al., 2022). Moreover, resolution impacts the overall picture of the internal H-bond networks (Figure 1). We anticipate that future methodological developments in structural biology will allow more complete structural movies of microbial rhodopsins to be solved at high resolution, and that graph analyses as presented here could be used for an automated assessment of the H-bond fingerprints of intermediate states of microbial rhodopsins.
Data availability statement
The original contributions presented in the study are included in the article/Supplementary Material, further inquiries can be directed to the corresponding author.
Author contributions
EB conducted the computations, prepared the figures, read and provided comments on the manuscript. A-NB designed research and wrote the manuscript. EB and A-NB analysed the data.
Funding
Open-access publication funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation)-491111487.
Anyonic glueballs from an effective-string model
Relying on an effective-string approach in which glueballs (bound states of pure Yang-Mills theory) are modelled by closed strings, we give arguments suggesting that anyonic glueballs, i.e., glueballs with arbitrary spin, may exist in (2+1)-dimensional Yang-Mills theory. We then focus on the large-N_c limit of SU(N_c) Yang-Mills theory and show that our model leads to a mass spectrum in good agreement with lattice data in the scalar sector, while it predicts the masses and spins of anyonic glueball states.
I. INTRODUCTION
The appearance of quantum states with arbitrary spin, called anyons, is a fascinating feature of quantum mechanics in (2+1) dimensions [1] that has been explored in a considerable amount of works: The interested reader may find useful references in [2][3][4]. Actually, in 2+1 dimensions the spin s of a given state may be arbitrary because the Lorentz group SO(2,1), as a group manifold, contains a non-contractible circle S^1 whose covering R covers it infinitely many times. In the case of a Euclidean spacetime, the "Lorentz" group SO(3) is a compact, connected (albeit not simply connected) manifold that admits at most two-valued unitary representations.
It is known in field theory that coupling a matter field to a three-dimensional vector gauge field with a Chern-Simons term leads to the appearance of states with fractional statistics [5]. The equivalent result is obtained within an O(3) σ-model with Hopf term [6]. Note that a Chern-Simons term is not a necessary condition to produce anyons in field theory, as illustrated by the following examples:
• Composite quantum states with arbitrary spin or arbitrary exchange statistics can be built from the genuine Abelian Higgs model without Chern-Simons term [7,8];
• Within an Abelian gauge theory with matter field denoted by Ψ and $g^2$ a constant with dimension of mass, one defines the shifted connection $A^\theta_\mu = A_\mu + \frac{\theta}{g^2} F_\mu$, where $F_\mu = \epsilon_{\mu\nu\rho} F^{\nu\rho}/2$. The operator $\Psi(x)\, P\{\exp(i \int_x^y dz^\mu A^\theta_\mu)\}\, \Psi(y)$ then propagates an anyon with nontrivial statistics related to the arbitrary real number θ [9];
• The spectrum of closed Nambu-Goto strings in 2+1 dimensions necessarily contains fractional-spin fields after light-cone quantization [10].
More generally, it has to be stressed that the existence of fractional-spin fields in (2 + 1)-dimensional Minkowski spacetime arises from pure group theoretical arguments that are actually independent of the particular form of the action under consideration [11,12]. These arguments will be summarized in Sec. II, while the case of closed Nambu-Goto strings, particularly important for our present work, will be discussed in Sec. III.
The purpose of the present note is to investigate whether anyonic states exist or not in pure (2+1)-dimensional Yang-Mills theory. Such a problem has, to our knowledge, never been studied so far. If anyonic glueballs can be built, the next question is: What are their masses and spins? This problem can be addressed by resorting to a closed-string effective model of glueballs. The idea that Yang-Mills theory should be equivalent to some closed string theory at large N_c actually originates from 't Hooft and Veneziano's work on the large-N_c limit of QCD [13,14]. It has indeed been known since then that any amplitude in large-N_c Yang-Mills theory can be expressed as a sum over terms containing planar diagrams forming Riemann surfaces with various genus numbers, just as is the case in closed string theory.
From an effective model point of view, it is therefore tempting to assume that glueball dynamics has some stringy nonperturbative origin. The celebrated Isgur-Paton flux-tube model [15] is a first example of how, starting from a lattice-QCD-inspired approach, one is led to the conclusion that glueballs, or at least some of them, may be described by closed strings. Closed effective strings are often referred to as closed flux tubes since they are seen as particular configurations of the chromoelectric field whose dynamics is expected to be that of a closed string. Interestingly, lattice computations regarding Yang-Mills theory have given some support to this picture. The interested reader may find in [16,17] a discussion of the agreement between a string model of glueballs and the lattice data of [18].
The effective model we use, inspired in particular by that of Ref. [17], is presented in Sec. IV and numerical results are obtained in Sec. V; the relation with 't Hooft and Wilson loops is discussed in Sec. VI. Concluding comments are finally given in Sec. VII.
II. RELATIVISTIC ANYONS
Since the seminal works of Wigner and Bargmann [19,20], it has been known that the elementary particles in Minkowski spacetime of dimension D are associated with the unitary irreducible representations (UIRs) of the spacetime isometry group ISO(D-1,1), the latter group being the semi-direct product of the Lorentz group SO(D-1,1) with the translation group T_D. In 2+1 dimensions, the Poincaré group therefore is ISO(2,1) ≅ SO(2,1) ⋉ T_3. In this Section we first show how to build the UIRs of the (2+1)-dimensional Lorentz group SO(2,1); then we extend the discussion to the Poincaré group ISO(2,1).
A. The case of SO(2,1)

Let $L_{ab} = -L_{ba}$ be the generators of SO(2,1). Then so(2,1), the Lorentz algebra in 2+1 dimensions, is presented by the commutation relations $[J_a, J_b] = i\,\varepsilon_{abc}\,J^c$, where $J_a = \frac{1}{2}\varepsilon_{abc}L^{bc}$ and $\varepsilon_{012} = 1$. Using the Minkowski metric in Cartesian coordinates η = diag(-++) and bold fonts for 3-vectors, the scalar product of U and V reads $U \cdot V \equiv U^a \eta_{ab} V^b$, and the Casimir operator of SO(2,1) is taken to be $C_2 = J \cdot J$. We use the notation u, v, ... for 2-vectors in the planes at fixed values of the Minkowskian coordinate $x^0$.
An oscillator-based method for the classification of the UIRs of SO(2,1) was given in [11], which we closely follow since it has the advantage of building up the UIRs of SO(3) in complete analogy, thereby unifying the treatments of the various real forms of so(3,C). It is of importance for us in view of drawing the reader's attention to the differences between both groups, the latter being actually at the basis of lattice QCD because of the Wick rotation leading to a Euclidean rather than hyperbolic spacetime.
Defining, as usual, the ladder operators $J_\pm = J_1 \pm i J_2$ yields $[J_0, J_\pm] = \pm J_\pm$ and $[J_+, J_-] = -2 J_0$, valid for both so(2,1) and so(3). The authors of [11] considered the complex algebra so(3,C), thereby taking one and the same set of commutation relations for both so(2,1) and so(3).

Let $\xi = (\xi^\alpha)_{\alpha=1,2}$ be a commuting real spinor of SO(2,1) and consider the linear vector space spanned by normalised vectors $|\Phi, m\rangle$ built from the spinor components. The integer m unambiguously labels the vectors once Φ and $E_0$ are specified. The inner product is defined by $\langle\Phi, m|\Phi, m'\rangle = \delta_{m,m'}$. In this representation the generators $J_\pm$ and $J_0$ are realised as operators acting on the spinor variables. Acting with them on the basis vectors, one sees that $(E_0 + m)$ is the eigenvalue of the generator of spatial rotations $J_0$, while a quick calculation shows that $\Phi(\Phi+1)$ is the eigenvalue of the quadratic Casimir. Imposing the unitarity condition (6) for the SO(3) group leads to recursion relations that can be solved only if the values of m are bounded both from above and from below. This corresponds to the well-known result that the UIRs of SO(3) are finite-dimensional. The spectrum of the operator $J_0$ then shows that Φ = s is the spin of the corresponding SO(3) irreducible representation, with Re $E_0$ = 0.
For the non-compact group SO(2,1), one derives from the unitarity condition (6) a different set of constraints on Φ and $E_0$. Keeping the notation of [11], this gives the following UIRs: the UIRs $D(C_2, E_0)$, whose spectra for $J_0$ are neither bounded from above nor from below, contain the principal and complementary (or supplementary) UIRs of SO(2,1). As explained in the next Section, we shall focus on the other possible UIRs. The representations $D^\pm(\Phi)$ are called the discrete series, while the representation D(0) is the trivial, one-dimensional representation. For the discrete series $D^+(\Phi)$ (resp. $D^-(\Phi)$), the spectrum of $J_0$ is countably infinite, bounded from below (resp. above). Cases of particular interest for our purpose will be denoted by $D^\pm_s$. The spin of the discrete series $D^+_s$ representation is s (with s > 0), which can be an integer or even an arbitrary (albeit positive) real number.
Representations of SO(2,1) bounded from above and below, like those of SO(3), exist but are non-unitary [11]. The only UIR that SO(3) and SO(2,1) share is the trivial one, D(Φ = 0), which corresponds to scalar fields. It is finally worth mentioning that the statistical phase exp(2iπs) can still be associated with a state of arbitrary spin s by virtue of the spin-statistics theorem [4,21].
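Since the discrete series are central to what follows, a quick numerical sanity check can make the above concrete. The sketch below builds a truncated matrix realisation of a lowest-weight representation D+ with an arbitrary, possibly fractional, lowest J_0 eigenvalue and verifies the so(2,1) commutators and the constancy of the Casimir. The matrix elements used are the standard su(1,1) ≅ so(2,1) lowest-weight ones in the conventions [J_0, J_±] = ±J_±, [J_+, J_-] = -2J_0; they are our assumption for illustration purposes, not a quotation from Ref. [11].

import numpy as np

def dplus(mu, n=40):
    # Truncated lowest-weight module: J0|m> = (mu + m)|m>, m = 0..n-1,
    # J+|m> = sqrt((m+1)(m+2*mu)) |m+1>, and J- = (J+)^T for mu > 0.
    m = np.arange(n)
    j0 = np.diag(mu + m)
    jp = np.zeros((n, n))
    jp[m[1:], m[:-1]] = np.sqrt(m[1:] * (m[:-1] + 2 * mu))
    return j0, jp, jp.T

comm = lambda a, b: a @ b - b @ a
mu = 1.218          # fractional lowest J0 eigenvalue: an "anyonic" choice
j0, jp, jm = dplus(mu)
k = 35              # stay away from the truncation edge of the matrices
assert np.allclose(comm(j0, jp)[:k, :k], jp[:k, :k])
assert np.allclose(comm(jp, jm)[:k, :k], -2 * j0[:k, :k])
casimir = j0 @ j0 - 0.5 * (jp @ jm + jm @ jp)
# Constant on the irrep: mu*(mu - 1), i.e. Phi*(Phi + 1) with Phi = mu - 1.
print(np.allclose(np.diag(casimir)[:k], mu * (mu - 1)))   # True

Nothing in the construction forces mu to be integer or half-integer, which is the algebraic root of the anyonic representations discussed in the text.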
B. The case of ISO(2,1)

Let $P_a$ be the translation generators of $T_3$. Then iso(2,1), the Poincaré algebra in 2+1 dimensions, is presented by the so(2,1) relations above supplemented by $[J_a, P_b] = i\,\varepsilon_{abc}\,P^c$ and $[P_a, P_b] = 0$, and the two Casimir operators of ISO(2,1), built from $P^2$ and $P \cdot J$, respectively give, on massive irreducible representations, the squared mass and the spin of a state. It has been shown in [22] that states $|\Psi\rangle$ belonging to the complementary series $D(C_2, E_0)$ are such that $P^2|\Psi\rangle = 0$ and $P \cdot J|\Psi\rangle = 0$. Such states are not relevant in view of studying glueballs since we are looking for massive representations with nonzero spin, which will contain anyons. As also shown in [22], such "physical" states belong rather to the discrete series $D^+_s$ or $D^-_s$. Let us denote these states by $|M^2; \mathbf{p}; s; J_0\rangle$, the two series being distinguished by the signs of the eigenvalues of $P_0$ and $J_0$: positive (resp. negative) for $D^+_s$ (resp. $D^-_s$). Therefore, the two series $D^\pm_s$ can be seen as PT-conjugated to each other, P and T denoting respectively parity and time conjugation. In the rest frame, $\mathbf{p} = 0$, and s reduces to $J_0$ (resp. $-J_0$) for states in the $D^+_s$ (resp. $D^-_s$) representation; such rest-frame states will play a particular role in the rest of this work.
In 2+1 dimensions, the action of parity P is to revert one spatial direction; as a consequence, it flips the sign of $J_0$ and exchanges the two discrete series $D^\pm_s$. Eigenstates of both (19) and the parity can be built; they represent anyons, and in the rest frame they read as linear combinations of the $J_0 = +s$ and $J_0 = -s$ states, $\eta_P$ being the eigenvalue of the parity. This prescription is valid when s ≠ 0. For states belonging to D(0), eigenstates of the parity can still be obtained by application of the projector $\frac{1}{2}(1 + \eta_P P)$, but both values of $\eta_P$ cannot necessarily be reached, as we will see in Sec. IV by explicit computation.
III. ANYONS FROM CLOSED STRINGS
As shown in [10], fractional spins do appear in the spectrum of closed (2+1)-dimensional Nambu-Goto strings in the light-cone gauge. More precisely, the authors of [10] have performed the light-cone quantization of a Hamiltonian version of the Nambu-Goto action, in which σ is the string tension and the string coordinates X are functions of τ and φ ∈ [0, 2π]. This Hamiltonian action is equivalent to the standard Nambu-Goto action provided l, the Lagrange multiplier accounting for the S^1-diffeomorphism invariance, is nowhere vanishing. The other Lagrange multiplier, u, stands for the τ-reparametrization invariance. The reader can find in [23] a detailed and rigorous presentation of the Hamiltonian quantization of the Polyakov action for the (super)string, where the Hamiltonian action (23) appears upon fixing the constraint related to the Weyl invariance of the classical Polyakov string.
A first observation made in [10] is that the mass spectrum of the theory is linear in the usual number operators N and N̄, with an intercept a (Eq. (24)). The constraint equating the numbers of left- and right-movers, a consequence of the S^1-diffeomorphism invariance, must be added to Eq. (24). The constant a is actually not constrained by the theory. Indeed, it is well known that light-cone quantization in a D-dimensional spacetime would have led to the critical value a = (D-2)/12, necessary to restore Lorentz invariance at the quantum level. However, the authors of [10] have fixed D = 3 a priori, which has a strong impact: The problematic commutators are de facto absent and Poincaré invariance is satisfied at the quantum level without having to fix a, unless the theory is supersymmetric, a case that we are not dealing with here.
The spectrum can be built by requiring the string states to be simultaneous eigenstates of M^2 and s, the latter given by (19). This last operator is cubic in the α's and couples the different states with the same N. The eigenvalues of the operator s finally give the spins of the closed-string states with a given mass. Inspection of these eigenvalues shows that there necessarily are fractional-spin fields in the spectrum of the first-quantized closed string in 3D. This is the key result of [10]. More precisely, the first levels of the closed-string spectrum contain states with the following spins:
• Only s = 0 for N = 0 and N = 1;
• Two s = 0 states and two states with $s = 3/\sqrt{4-a}$ for N = 2.
States with s ≠ 0 actually appear in doublets of opposite helicities, standing for the two discrete series $D^\pm_s$. We recall that both discrete series are characterised by the same eigenvalue of the operator on the right-hand side of the second equation of (19), but differ by the sign of $J_0$. The interested reader will find the explicit expression of all the above states in terms of the string oscillators in Ref. [10].
There is actually an infinite but countable set of closed-string states, some of which have fractional spin, since there is no value of a leading to only integer or half-integer spins. In view of what we recalled in Sec. II, this result is natural: Imposing Poincaré invariance on the first-quantised closed string in 3D should logically lead to states belonging to anyonic representations. Note however that the non-critical nature of the bosonic string in (2+1) dimensions enters through the light-cone quantisation prescription. BRST quantisation, on the other hand, forbids low-dimensional, non-critical Polyakov strings, see [23].
A. Glueballs and closed strings
Beyond the pioneering work [14], the relevance of relating Yang-Mills theory at large N_c to a closed string theory has been studied also in [24], where the following picture is developed. On the one hand, at large N_c, Yang-Mills dynamics can be reformulated in terms of a reduced model, typically a quenched Eguchi-Kawai model [25]. On the other hand, an appropriate N_c → ∞ limit of su(N_c) [24,26] is isomorphic to the algebra of area-preserving diffeomorphisms.
Both results allow one to reformulate the quenched Eguchi-Kawai action as a Nambu-Goto action. However, SU(∞) Yang-Mills theory is not fully equivalent to a Nambu-Goto string, since the integration measure of its partition function is not that of a Nambu-Goto string [24]. Other approaches clearly show that a closed Nambu-Goto string can only be a leading-order approximation of Yang-Mills theory even at large N_c, see e.g. [27] and references therein.
Another point of view is that of [28], in which Yang-Mills theory in 2+1 dimensions is reduced to a (1+1)-dimensional Yang-Mills theory with scalar adjoint matter. The spectrum of the latter theory is shown to contain bound states (glueballs) that can be interpreted as closed strings. Nevertheless, as observed in [24], the Nambu-Goto string alone cannot provide an effective description of Yang-Mills theory. A better-known reason is the standard result that Poincaré invariance is fulfilled at the quantum level for D = 26 only. This issue was solved in [29], where it was shown that adding a term to the Polyakov Lagrangian restores Poincaré invariance for any spacetime dimension D. The Polchinski-Strominger term has been computed in conformal gauge [29] and recovered in static gauge [30]. Note that such an extra term is not needed in the case we focus on since, within the light-cone gauge quantisation scheme used in [10], Poincaré invariance is already satisfied at the quantum level for the 3D Nambu-Goto action.
Another reason to go beyond the Nambu-Goto string may then be to reach a more accurate description of the dynamics of the effective QCD string. For example, as seen from a semiclassical expansion around a closed folded string, the Polchinski-Strominger term produces corrections to the well-known mass formula M^2 ∝ J, J being the string angular momentum. The corrections appear as powers of J smaller than one and have been computed in [31].
More generally, the analysis performed in [32] of the terms allowed by classical Lorentz invariance reveals that the first nontrivial correction to the Nambu-Goto Lagrangian in 2+1 dimensions is a term involving the induced worldsheet metric h and the scalar curvature R constructed from it. However, in the present exploratory work, we are mainly interested in a qualitative description of the glueball spectrum, so it is worth asking whether adding such a term brings relevant information or not. It appears from Ref. [33] that, expanding the energy of an effective closed string in terms of its classical length L, the energy formula is universal up to 1/L^5 terms in 2+1 dimensions and deviations from universality only appear at order 1/L^7. According to lattice computations [34], the mass of the lowest-lying glueball at large N_c is given by M/√σ ∼ 4, which provides the estimate √σ L ∼ 4, a length range such that 1/(√σ L)^7 corrections to the standard Nambu-Goto energy formula are negligible [35].
We aim at building an effective model in which the nonperturbative dynamics of (2 + 1)-dimensional YM theory is that of a closed bosonic string. From what we have just been arguing, it is thus sufficient to adopt, in a first approach, the quantization scheme of [10] that will allow us to reach this goal.
B. Glueball states
In order to match string states and glueball states according to standard terminology, one has to associate s^{PC} quantum numbers to a given string state. On top of reversing any spatial momentum, the parity operator P for closed strings is defined in such a way that it anticommutes with the helicity operator [10]. As a consequence, for any given eigenvalue of N (equivalently M^2), states with nonzero spin form parity doublets (22). The s = 0 cases must be treated separately, see below.
Charge conjugation C has to be introduced by hand by recalling that, in 2+1 dimensions, a closed flux tube is actually a loop of fundamental color flux that closes on itself. Hence it has an intrinsic orientation, which is that of the chromoelectric field [17]. So a given state in the closed-string spectrum can correspond either to a flux tube with clockwise orientation or to one with anticlockwise orientation. The action of the charge conjugation is to revert this orientation, basically by turning fundamental color charges into conjugated ones [17], while parity also flips $J_0$. Note that, in our framework, time reversal would just flip $J_0$.
In summary, starting from a closed-string state $|M^2; s; J_0 = s\rangle$ with a definite orientation, as found in [10], one can build an $s^{\eta_P \eta_C}$ glueball with mass M^2 provided that the corresponding linear combination of orientations and helicities is nonzero. At this stage, charge conjugation just adds an additional Z_2 degree of freedom to the spectrum.
The explicit form of the eigenstates of M^2 and s is given in [10] and will not be recalled here for the sake of brevity.
We have checked that, from these oriented $|M^2; s; J_0 = s\rangle$ states, one can form multiplets with definite $s^{\eta_P \eta_C}$ quantum numbers.
A * is used to distinguish excited states of a given $s^{\eta_P \eta_C}$. It is readily seen that, if glueball dynamics is that of a closed string, the low-lying spectrum should be filled by (pseudo)scalar states, while the first states with nonzero spin are expected to arise at higher masses, corresponding to level 2 in our formalism. At this stage, the state with $s = 3/\sqrt{4-a}$ can still be a boson with spin $n \in \mathbb{N}_0$ provided that $a = 4 - 9/n^2$. However, n > 1 leads to a > 0, implying unphysical glueball states with M^2 < 0 at level 0. Even if the N = 2 glueball with nonzero s is then not an anyon but a spin-1 boson (the case n = 1, i.e., a = -5), anyons necessarily appear at level 3, so they cannot be avoided in the glueball spectrum.
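The statement above is easy to check explicitly. The following short computation is a plain numerical restatement of s = 3/√(4-a) and a = 4 - 9/n^2 (nothing beyond what the text already asserts), showing that n = 1 is the only integer spin compatible with a ≤ 0 at level 2:

for n in range(1, 6):
    a = 4 - 9 / n**2           # value of a making the level-2 spin equal n
    s = 3 / (4 - a) ** 0.5     # level-2 spin, which indeed comes out as n
    status = "unphysical (a > 0 gives M^2 < 0 at level 0)" if a > 0 else "allowed"
    print(f"n = {n}: a = {a:+.3f}, s = {s:.3f}  ->  {status}")
# Only n = 1 (a = -5, s = 1) survives; any higher integer spin at level 2
# would force a > 0 and hence tachyonic level-0 states.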
A. Numerical results
Glueball states obtained in the previous section follow the simple mass formula (24). Hence the glueball spectrum is completely known from our model once the value of a is fixed. As usually done in the field, this can be achieved by comparing our results to the (2 + 1)-dimensional glueball spectrum computed in pure gauge lattice QCD in Refs. [18,34] and further analyzed in [16,36].
A clear feature of the lattice spectrum is the appearance of Regge trajectories, i.e., a linear dependence between the squared mass M^2 and the spin s of a glueball, with a slope compatible with the value 8πσ of a classical closed string [16,36]. However, the spin "measured" on the lattice is necessarily integer due to the Euclidean spacetime induced by the Wick rotation. That is why, as discussed in Sec. II, comparisons between our model and lattice results should be restricted to s = 0 states: SO(2,1) and SO(3) only share the D(0) UIR. These states are listed in Table I. It is readily seen that, as predicted by the closed-string picture, the lightest states with C = + (resp. C = -) are 0++ (resp. 0--) ones, while the first 0-+ (resp. 0+-) glueball is much heavier.
As pointed out in [17], the lattice spectrum shows a large splitting between C = + and C = - states, which are degenerate according to the mass formula (24). As argued in [17], this is the stage at which it has to be remembered that flux tubes may be more complex objects than Nambu-Goto strings because of their intrinsic orientation. Processes that induce a mixing between the two orientation states can be figured out: One can think of a flux tube shrinking to a "ball-like" configuration where information about the orientation is lost, then expanding into a flux tube of either orientation. The simplest way of implementing such a mixing is to add a constant coupling between the two orientations, which amounts, for the C = - states, to shifting the intercept a by a constant b in the mass formula (24). The effect of the mixing introduced is thus simply to shift the intercept of C = - states with respect to that of C = + states.
The model built here is obviously very simple and should be regarded as valid only in a first approximation. Spin-dependent corrections, in particular, should be present in a more refined model. It is nevertheless interesting to notice the good agreement between our mass formula and existing lattice data once a and b are fitted, see Table I. A prediction of the present model is that there should exist two degenerate $1.218^{\pm+}$ glueballs with a mass around 8.18 in units of $\sqrt{\sigma}$, as well as $1.22^{\pm-}$ glueballs with a mass around 9.26.
For completeness we mention that an attempt to compute the large-N_c glueball spectrum in (2+1) dimensions by resorting to a formulation of lattice gauge theory in the light-cone gauge has been made previously [37]. Among other results, the ratios $M_{0^{--}}/M_{0^{++}} = 1.35(5)$ and $M_{0^{--*}}/M_{0^{++}} = 1.82(6)$ are found, while our approach leads to the similar values 1.46 and 1.90 respectively, keeping the same values of a and b. Anyonic states were not built in Ref. [37]; to our knowledge, it is an open question whether anyonic states can be built in light-cone gauge lattice theory or not.
[Table I caption: comparison between the present model and the lattice data of [18,34] in the large-N_c limit; masses are given in units of the string tension.]
B. Comments on the mass spectrum
Although the present flux-tube model is close to the one proposed in [17], a fundamental difference occurs at the level of the quantization of the closed string. Indeed, in [17], a spectrum was found in agreement with lattice data by using the Isgur-Paton closed flux-tube model [15]. This is not surprising since the authors of [17] perform a nonrelativistic, Schrödinger-like quantisation of the fluctuations of a closed circular string, and in such a scheme the spin of a state is identified with s = |N - N̄|, so it is necessarily an integer and the constraint N = N̄ is not present. Only the constraint N + N̄ = 1 is imposed by the model [17]. Hence, the angular momentum appearing in the resulting Hamiltonian is an integer and matches existing lattice data.
When N_c is finite, our main assumption, i.e., identifying glueballs with closed flux tubes, may appear less sound. It has to be noticed however that the quantum numbers and mass hierarchy of the glueball states are identical whatever N_c is [18,34]. The case N_c = 2 is special since the fundamental representation is real. Then, no orientation can be given to a flux tube, and only the C = + sector is present. The universal structure of the glueball spectrum for N_c > 2 may suggest that the stringy picture developed here is still relevant at finite N_c and thus that anyonic glueballs are a generic feature of SU(N_c) Yang-Mills theory in (2+1) dimensions. Even the SU(2) lattice scalar mass spectrum can be recovered by using b = 0 (no C = - sector) and a = -1.9 in our model. Note that the spectrum obtained in the present section is expected to be the same in the large-N_c limit of SU(N_c), SO(N_c) and Sp(N_c) Yang-Mills theories, which have been proven to be equivalent in the strong coupling limit [38].
VI. RELATION WITH 'T HOOFT AND WILSON LOOPS
It is now worth wondering how much the existence of anyonic states in YM theory relies on our effective closed-string description. There exist other ways to build anyons. One of the simplest, at the nonrelativistic level, is to minimally couple a particle to a vortex-like vector potential: The resulting vortex-plus-particle system constitutes an anyon [2]. This coupling can be achieved in Yang-Mills theory too. Let us start from the 3D 't Hooft operator $\phi(\vec x)$ defined through the nonstandard commutation relation [39] $\phi(\vec x)\, W(C_t) = W(C_t)\, \phi(\vec x)\, e^{2\pi i\, n(\vec x;\, C_t)/N_c}$, where $W(C_t) = \mathrm{Tr}\, P \exp(i g \oint_{C_t} A)$ is a standard Wilson loop with $C_t$ a closed spacelike curve. By "spacelike" it is meant that all the points of $C_t$ have the same temporal coordinate $x^0 = t$. Moreover, in the equation above, $n(\vec x; C_t)$ is the number of times that the closed curve $C_t$ winds around $\vec x$ in a clockwise fashion minus the number of times it winds around $\vec x$ anticlockwise. Note also that $[\phi(\vec x), \phi(\vec y)] = 0$, which reflects the locality of the operator φ [39].
We now define the operator $G_{C_t}(\vec z)$, built by attaching 't Hooft operators at the point $\vec z$ to the Wilson loop $W(C_t)$, where $\vec z$ may or may not be enclosed by $C_t$, a closed spacelike curve fixed once and for all. Since spacelike Wilson loops commute at equal time [39], it is readily shown that $G_{C_t}(\vec z)$ may have a nontrivial statistical phase: It is indeed such that, for two separated points $\vec z_1$ and $\vec z_2$, $G_{C_t}(\vec z_1)\, G_{C_t}(\vec z_2) = e^{\frac{2\pi i k}{N_c}\,[n(\vec z_2;\, C_t) - n(\vec z_1;\, C_t)]}\; G_{C_t}(\vec z_2)\, G_{C_t}(\vec z_1)$.
The statistical phase will be nontrivial as soon as $n(\vec z_2; C_t) \neq n(\vec z_1; C_t)$. From the generalized spin-statistics theorem [21], it can be concluded that the operator $G_{C_t}(\vec z)$ creates a color-singlet state with spin $s = (k/N_c) + n$ with $k, n \in \mathbb{N}$, that is, a value that can be nonzero and neither integer nor half-integer.
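The winding number n(x; C_t) appearing in these relations is a purely geometric quantity and can be illustrated numerically. The sketch below computes it for a discretized planar loop by summing signed angle increments; note that it counts anticlockwise windings as positive, i.e., it returns minus the clockwise-minus-anticlockwise convention adopted above.

import numpy as np

def winding_number(curve, x):
    # curve: (N, 2) array of points along a closed spacelike loop C_t.
    v = curve - np.asarray(x)               # vectors from x to the curve
    ang = np.arctan2(v[:, 1], v[:, 0])
    d = np.diff(np.append(ang, ang[0]))     # angle increments, loop closed
    d = (d + np.pi) % (2 * np.pi) - np.pi   # wrap each increment to (-pi, pi]
    return int(round(d.sum() / (2 * np.pi)))

t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
loop = np.stack([np.cos(t), np.sin(t)], axis=1)  # unit circle, anticlockwise
print(winding_number(loop, (0.0, 0.0)))   # 1: the point is enclosed once
print(winding_number(loop, (2.0, 0.0)))   # 0: the point lies outside the loop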
Just as the correlator of spacelike Wilson loops contains scalar glueballs [18], it can be expected that the correlator $\langle 0|\, G^\dagger_{C_t}(\vec z)\, G_{C_0}(\vec z)\, |0\rangle$ will propagate anyonic glueballs with spin k/N_c. If that turned out to be true, this would show that our main result is not fully dependent on the model used. In the context of the Abelian Higgs model with Chern-Simons term, the propagation of anyonic states is described in [4], where in particular it is shown that the physical Hilbert space of 1-anyon states is decomposed into orthogonal sectors labelled by the vorticity q, where µ/4π is the coefficient multiplying the Chern-Simons term A ∧ dA in the action and the vorticity eigenvalue q labels the homotopy classes of the map S^1 → S^1 expressing the asymptotic behaviour of the complex scalar field at spatial infinity. Note that the spin of a state is then given by µq^2/2 mod Z. The previous considerations on anyon propagation can even be made more rigorous on the lattice in 3D Euclidean space, see Sec. 7 of [4].
VII. SUMMARY AND OUTLOOK
In this note, we have developed a closed-string model of glueballs in (2+1) dimensions based on the light-cone quantization of the Nambu-Goto string performed in [10]. Since closed strings are actually used to model the dynamics of the Yang-Mills field, the orientation has been added as an extra quantum number in order to account for the fact that we are dealing with effective rather than fundamental strings. This addition has two consequences: the possibility of defining the charge conjugation of a state, and the addition of a mixing mechanism eventually splitting the masses of states with different eigenvalues under charge conjugation. Our model has two free parameters that, once fitted, allow us to satisfactorily reproduce the masses of the 8 zero-spin glueballs currently observed in large-N_c lattice calculations.
As a consequence of our model, anyonic glueballs must be present, with a mass and spin that both depend on the fitted parameters. We believe that the existence of such states is not an artifact of the closed-string picture proposed, but rather that it is a generic property of Yang-Mills theory in (2+1) dimensions. Hence, the existence of anyonic glueballs could be confirmed (or not) in the future by resorting to lattice calculations, either in light-cone gauge or in the more standard temporal gauge, provided that appropriate correlators are built. As a starting point for future calculations, an inspiring explicit form for the 't Hooft operator can be found in [40], while similar results have been proposed in the framework of the Abelian Higgs model in [41].
Clinical application of comprehensive genomic profiling panel to thoracic malignancies: A single‐center retrospective study
Abstract Background The usefulness of comprehensive genomic profiling (CGP) panels for thoracic malignancies after completion of the standard treatment is unclear. Methods The results of CGP panels for malignant thoracic diseases performed at our hospital between December 2019 and June 2022 were collected. We examined whether CGP panel results led to new treatment, correlated with the effectiveness of immune checkpoint inhibitors (ICIs), or revealed secondary findings related to hereditary tumors. Results A total of 60 patients were enrolled, of whom 52 (86.6%) had lung cancer. In six (10%) patients, the panel results led to treatment with insurance-listed molecular-targeted agents; four patients had EGFR mutations not detected by the real-time polymerase chain reaction assay and two had MET ex.14 skipping mutations. In small-cell lung cancer, the tumor mutation burden was high in 4/6 (66.7%) patients, making pembrolizumab available. In addition, MET ex.14 skipping mutations were detected in two cases with EGFR-tyrosine kinase inhibitor resistance. ICI efficacy lasted ≤1 year in patients with STK11, KEAP1, and NFE2L2 mutations. A BRCA2 mutation with a high probability of being a germline mutation was detected in one patient. A thymic carcinoma with no detectable oncogenic mutation has responded to second-line treatment with Tegafur-Gimeracil-Oteracil Potassium (TS-1) for ≥9 years. Conclusions CGP panels are useful in thoracic malignancies, especially lung cancer, because they can detect overlooked driver mutations and genetic alterations. Performing a CGP panel before the start of treatment may also be valuable, as it may help predict the efficacy of ICI treatment.
INTRODUCTION
A genetic test in which multiple regions of multiple genes are simultaneously analyzed using next-generation sequencing (NGS) is called a cancer gene panel test. [1][2][3] Conventional genetic testing can analyze a limited area at a time, but NGS allows the analysis of several to several hundred genes at a time. The cancer gene panel test can simultaneously detect base substitution/insertion/deletion mutations, gene amplification/deletion, and gene fusion in all or part of the carried genes. In addition, there are gene panel tests that can estimate tumor mutation burden (TMB) and microsatellite instability. 4 The number of target genes and the type of nucleic acids (DNA and RNA) used for analysis differ for each cancer gene panel test. Furthermore, some tests use only tumor-derived nucleic acids, while others also use nucleic acids derived from normal specimens, such as peripheral blood, as controls. 3,5 In addition, a test method that analyses tumor-derived cell-free DNA in peripheral blood is under development. 6,7 This test is expected to be implemented in clinical practice in the future because specimen collection is minimally invasive.
Similar to conventional gene tests, gene panel tests include a companion-diagnosis function to determine the appropriateness of administering molecular-targeting drugs, 8,9 in addition to comprehensive genomic profiling (CGP) to determine the genetic abnormalities involved for appropriate treatment selection. The former does not require interpretation of the results because the results obtained are positive/negative for a specific genetic biomarker. However, the latter requires decisions regarding the pathological significance of the genetic abnormality detected and the availability of the corresponding candidate drug. Therefore, a review by an "expert panel" or "molecular tumor board" is required for insurance purposes. 10 Although cancer gene panel tests have been used for >2 years in Japan, many problems persist, including how to make the best use of gene panel test results for treatment. According to domestic and international reports, the percentage of patients who receive treatment after a gene panel test is currently about 10-20%. 3 Lung cancer is a malignant disease with a poor prognosis and is the leading and second leading cause of death among men and women in Japan, respectively. 11 However, among solid tumors, lung cancer has the highest number of identified druggable driver mutations. 12 In advanced-stage lung cancer, it is recommended to identify epidermal growth factor receptor (EGFR), ALK fusion, ROS1 fusion, BRAF, RET fusion, and MET ex.14 skipping mutations before starting treatment. 13 To identify these mutations simultaneously, an NGS-based gene panel (Oncomine Dx target test) is recommended. 13,14 Lung cancer is the second most likely solid tumor after malignant melanoma to respond to immune checkpoint inhibitors (ICIs), 15 and genetic mutations associated with ICI efficacy are being identified. 16,17 However, the usefulness of CGP panel tests in advanced-stage lung cancer after standard treatment has been completed is unclear.
In Japan, CGP panels are covered by health insurance only after completion of the standard treatment defined by each guideline. Therefore, this study aimed to examine the impact of CGP panels conducted after the completion of standard treatment on actual clinical practice at our center for malignant thoracic diseases and the current usefulness of such panels.
PATIENTS AND METHODS
Patients and analysis procedure for CGP panels

All patients with malignant thoracic disease who underwent CGP panel tests at the Osaka International Cancer Institute between December 2019, when the CGP panel was approved for reimbursement in Japan, and June 2022 were included in this study. Patients' age, sex, disease, and number of lines of treatment at the time of CGP panel evaluation were collected. All participants were asked whether they wished to disclose the results of CGP panel analysis to parties other than themselves and whether they wished to disclose information related to any hereditary tumors prior to test submission. When tumor tissue was used, the attending physician decided whether to perform a Foundation One panel (F1 panel) or an OncoGuide NCC oncopanel (NCC panel). When tissue specimens were used, after obtaining consent, a pathologist determined whether they could be submitted for CGP panel testing based on the tumor area, tumor content, and specimen storage period. The percentage of patients whose specimens could not be submitted because of the pathologist's decision and whether these patients subsequently underwent reexamination were investigated. The sampling method of tumor specimens was also investigated. For patients treated after August 2021, when F1 liquid was introduced, if tissue specimens were not available, we proposed the use of F1 liquid, and CGP panel testing was performed using F1 liquid for consenting patients. The time between obtaining consent and disclosing the CGP panel results after expert panel review to the patient was calculated as the turnaround time.
Expert panel for CGP panels
The results of all CGP panel analyses were reviewed by an expert panel within the Osaka International Cancer Institute and then explained to the patients. The expert panel consisted of an oncologist for each organ, a clinical geneticist, a genetic counselor, a pathologist, a clinical trial coordinator, and a pharmacist. For detected alterations, oncogenicity was annotated based on the reports of each gene panel and Center for Cancer Genomics and Advanced Therapeutics (C-CAT) guidelines, 18 and treatment for oncogenic mutations was recommended based on the results of clinical trials mainly in Japan and the recommendation level (A to F) in C-CAT. 3,8 The possibility of drug treatment through the patient offer system was also proposed. We considered genetic mutations that may be associated with hereditary tumors as secondary findings, and if these were disclosed we considered referring the patient for genetic counseling. All considerations were based on individual patients' medical history, with a focus on treatment history.
Heat map of reported oncogenic mutations
Reported oncogenic alterations with a frequency of >5% in the cohort were included in the heatmap. The heat map was created using custom R programming scripts with graphics modules of ggplot2 v.3.3.6 and cowplot v.1.1.1. Cluster classification was performed for each malignant thoracic disease.
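The heat map itself was produced with custom R scripts (ggplot2 and cowplot), which are not reproduced in the article. Purely to illustrate the filtering-and-plotting logic described above, here is a hedged Python sketch; the column names and the toy records are hypothetical placeholders, not the study's data.

import pandas as pd
import matplotlib.pyplot as plt

alt = pd.DataFrame({  # one row per reported oncogenic alteration
    "patient": ["P1", "P1", "P2", "P3", "P3", "P4"],
    "gene":    ["TP53", "EGFR", "TP53", "KEAP1", "TP53", "EGFR"],
})
n_patients = alt["patient"].nunique()

# Keep only genes altered in >5% of the cohort, as described in the paper.
freq = alt.groupby("gene")["patient"].nunique() / n_patients
keep = freq[freq > 0.05].index

mat = (alt[alt["gene"].isin(keep)]
       .assign(hit=1)
       .pivot_table(index="gene", columns="patient",
                    values="hit", fill_value=0))
plt.imshow(mat, aspect="auto", cmap="Greys")  # binary gene-by-patient heatmap
plt.yticks(range(len(mat.index)), mat.index)
plt.xticks(range(len(mat.columns)), mat.columns)
plt.show()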
Correlation of STK11, KEAP1, and NFE2L2 mutations with the effects of ICIs

Cases with STK11, KEAP1, and NFE2L2 mutations were extracted based on the results of the CGP panel, and in patients with a history of ICI administration the effect was evaluated in terms of progression-free survival (PFS). PFS was defined as the time from the start of ICI administration to its discontinuation due to tumor progression or toxicity, based on medical records.
Patient characteristics
During the study period, 63 patients consented to CGP panel testing, of whom eight (8/63, 12.7%) were determined to have insufficient specimens; of these, four (50%) patients underwent re-biopsy for the CGP panel, one (12.5%) specimen was submitted for F1 liquid analysis, and three (37.5%) patients declined to resubmit tests. Finally, 60 (95.2%) results were available for analysis (Figure 1). The clinical characteristics of the patients are shown in Table 1. The participants included 38 (63.3%) men and 22 (36.7%) women, with a median age of 69 (range 44-82) years. Histopathologically, there were 33 (55.0%) lung adenocarcinomas, 10 (16.6%) lung squamous cell carcinomas, three (5.0%) non-small-cell lung carcinomas (NSCLC), not otherwise specified, six (10.0%) small-cell lung carcinomas (SCLC), six (10.0%) thymic carcinomas, one (1.7%) thymoma, and one (1.7%) malignant pleural mesothelioma. The median number of treatment lines at the time of CGP panel submission was three. Submitted specimens included 21 (35.0%) surgical biopsies, 12 (20.0%) computed tomography-guided biopsies, two (3.3%) pleural biopsies, nine (15.0%) bronchoscopic specimens, six (10.0%) endobronchial ultrasound-guided transbronchial needle aspiration specimens, and 10 (16.7%) plasma samples. The F1 panel, NCC panel, and F1 liquid were used to analyze 47, three, and 10 specimens, respectively. The median turnaround time from obtaining consent to explaining the results was 48 (range 33-118) days. Eight (13.3%) patients did not want the results of the CGP panel to be disclosed to anyone other than themselves, and three (5.0%) did not want the results of inherited tumor-associated mutations to be disclosed. No case could be registered in a clinical trial based on the genetic alterations detected in the CGP panel.
Landscape of genomic alterations in 60 patients
Of the mutations detected in the gene-panel analyses of the 60 cases, only those mutations or copy-number alterations that were considered oncogenic in the report and found in >5% of cases are shown in the heatmap image (Figure 2). The top 10 alterations detected were headed by TP53 (30%) (see Figure 2 for the full list).

EGFR mutation cases not detected by the first RT-PCR test but detected by the CGP panel

All four patients with major activating EGFR mutations detected using the CGP panel had undergone RT-PCR EGFR-detection tests at diagnosis, but no EGFR mutation was detected and they were treated as EGFR-mutation-negative cases. The clinical courses and characteristics of the four cases are presented in Table 2. In all cases, CGP panels were performed after at least 2 years of chemotherapy. In case 1, the RT-PCR test for EGFR gene mutation was submitted using bronchoscopy-forceps washout. In case 2, RT-PCR was performed using a section from a paraffin-embedded block of a tumor-tissue specimen from a bronchial biopsy. The EGFR L858R in case 2 was a two-base substitution mutation, EGFR c.2572_2573 CT>AG (Figure 3: the two-base substitution changes the codon encoding the leucine at position 858 from "CTG" to "AGG", which codes for arginine). The variants in cases 3 and 4 were likely not detected because they were variants not covered by RT-PCR testing.
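A small, self-contained illustration of why an allele-specific assay can miss such a variant: the common L858R (c.2573T>G) and the dinucleotide substitution reported here (c.2572_2573 CT>AG) both turn codon 858 into an arginine codon, but they differ at the nucleotide level, so a primer or probe designed against the single-base change need not hybridize to the two-base change. The snippet below only restates this codon arithmetic; it is not part of the study's pipeline.

codon_table = {"CTG": "L", "CGG": "R", "AGG": "R"}  # relevant codons only

wild_type = "CTG"            # codon 858, leucine
common_l858r = "CGG"         # c.2573T>G, single-base substitution
dinucleotide_l858r = "AGG"   # c.2572_2573CT>AG, two-base substitution

for name, codon in [("wild type", wild_type),
                    ("c.2573T>G", common_l858r),
                    ("c.2572_2573CT>AG", dinucleotide_l858r)]:
    print(f"{name:18s} {codon} -> {codon_table[codon]}")
# Both variants read out as L858R at the protein level, yet only the first
# matches a probe specific to the single T>G substitution.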
CGP panel testing for resistance mutations after treatment with tyrosine kinase inhibitor
A total of six patients underwent CGP panel testing to search for resistance mutations after tyrosine kinase inhibitor (TKI) treatment. Four patients had major EGFR mutations and two had ALK fusions; all six patients had undergone TKI therapy for their respective mutations. A MET ex.14 skipping mutation was detected as a resistance mutation in two of the four patients with EGFR mutations. In one patient, a G724S mutation was detected as a new compound mutation in addition to the original Ex.19 deletion, leading to a change in TKI based on the EGFR structure. 19 In one case of ALK fusion, after first-, second-, third-, and fourth-line treatment with alectinib, lorlatinib, ceritinib, and a combination of CBDCA, paclitaxel, bevacizumab, and atezolizumab, respectively, and a fifth-line lorlatinib rechallenge, a CGP panel was performed, and a BRAF-KIAA1549 fusion was detected in addition to the multiple ALK resistance mutations G1269A, L1196M, and F1174C.
ICI-resistant mutations
We studied STK11, KEAP1, and NFE2L2 mutations as ICI-resistance gene mutations and examined the effect of ICIs on tumors with these mutations. Of the 60 patients, eight had oncogenic mutations in these three genes and seven had received ICI treatment. The age, sex, TMB, PD-L1 tumor proportion score (%), ICI administered, and ICI treatment line were evaluated, and their correlation with PFS in these patients is summarized in Table 3. PFS was <1 year in all patients receiving ICIs, and in patients with KEAP1 and NFE2L2 mutations, PFS was <3 months despite administration as first-line therapy, indicating primary resistance to ICIs.
Secondary finding associated with hereditary tumor
One of the 60 patients had a BRCA2 Q1361* mutation as a secondary finding associated with hereditary breast and ovarian cancer, identified after checking the family history of neoplastic diseases. The patient was a 70-year-old woman with squamous cell carcinoma of the lung; her clinical course is shown in Figure 4. The BRCA2 Q1361* mutation was considered to be associated with hereditary breast and ovarian cancer in the analysis of tumor tissue alone, but the patient did not wish to receive genetic counseling, and a germline BRCA2 Q1361* mutation was not examined using normal tissue.
No oncogenic mutation in a thymic cancer exhibiting an exceptional response to TS-1

Seven of the 60 patients had thymic tumors. Among them, in one case no genetic alterations were detected in the CGP panel. The patient received CBDCA plus paclitaxel as first-line treatment, but the disease progressed after 7 months; he therefore received tegafur + gimeracil + oteracil (TS-1) as second-line treatment, and his disease has currently been in remission for 9 years.
DISCUSSION
Among patients with malignant thoracic diseases, mainly lung cancer, six (10%) patients with an insurance-approved indication at evidence level A, based on CGP panel results, received molecularly targeted drugs. In addition, high TMB was detected in 4/6 (66.7%) small-cell carcinomas, making pembrolizumab a new treatment option for TMB-high small-cell carcinomas, where treatment options are limited. In five cases, HER2 mutations eligible for trastuzumab deruxtecan treatment at evidence level B were detected. MET ex.14 skipping mutations were detected as new driver mutations in specimens with EGFR-TKI resistance, leading to the introduction of a new therapeutic agent. Compared to other cancer types, CGP panel testing has a higher probability of leading to promising treatments in lung cancer, and CGP panels may therefore be more useful for lung cancer. In this study, we clarified the significance of multiple NGS panels, because a CGP panel in clinical practice has a high probability of detecting high TMB in SCLC, and because a druggable driver mutation can be detected in cases with EGFR mutations that were previously screened by assays other than NGS.
Of the six cases that led to molecularly targeted agents at evidence level A, two cases of MET ex.14 skipping mutations were detected by the CGP panel because the same mutation had not been searched for using RT-PCR. As pre-treatment NGS-based gene panels become more prevalent in the future, such cases are expected to become less frequent. All four cases in which EGFR mutations were detected had undergone RT-PCR-based EGFR testing at least once. Cases 1 and 2 demonstrated an Ex.19 deletion and an L858R mutation, respectively, which are major activating EGFR mutations and therefore variants covered by RT-PCR. 20 Case 1 results may have been false negative because the specimen used was the biopsy-forceps washing fluid, which probably contained a low percentage of cancer cells. In case 2, the mutation was caused by a two-base substitution, therefore it is possible that the primers specific for the L858R mutation could not bind and RT-PCR was not successful. 21 Cases 3 and 4 demonstrated variants that were not covered by the RT-PCR-based EGFR assay and by the Oncomine Dx target test, respectively. Therefore, these mutations could only be detected by the CGP panel. [22][23][24] All four patients survived for more than 2 years, and up to nearly 8 years, without EGFR-TKIs, suggesting that EGFR mutation is a favorable prognostic factor regardless of EGFR-TKI use. 25 ALK fusion has also been reported to be a favorable prognostic factor. 26 Long-term survivors of advanced lung cancer are not likely to have undergone an NGS panel at presentation and should be actively considered for a CGP panel.
The number of approved chemotherapeutic regimens for SCLC is smaller than for NSCLC. 27 Recently, the use of pembrolizumab was approved for TMB-high solid tumors. 28 With regard to the efficacy of pembrolizumab in SCLC treated with >2 lines of chemotherapy, a previous study reported an objective response rate of 19.3% (95% confidence interval 11.4-29.4); two of 83 patients showed a complete response, and 14 patients showed a partial response. The median duration of response was not reached (range 4.1-35.8+ months, the plus sign indicating an ongoing response). 29 This trial excluded cases of SCLC in which the anti-PD-L1 antibody drugs atezolizumab 30 and durvalumab, 31 which are currently approved for first-line induction, were used. However, pembrolizumab, an anti-PD-1 antibody drug, may be effective in cases in which anti-PD-L1 antibody drugs are ineffective. 32,33 The CGP panel is useful because it adds a new treatment option, pembrolizumab. Interestingly, in this study, the CGP panel detected high TMB in 66.7% of cases with SCLC.
ICIs are approved for all malignant thoracic diseases except thymic tumors and are recommended unless there is a specific reason to avoid their use. 34 As genome profiling progresses, mutations that negatively correlate with the effects of ICIs have been reported, STK11 and KEAP1 being representative of such mutations. 16,17 NFE2L2 has been reported to form a complex with KEAP1 and to exhibit intracellular bioactivity. 35 NSCLCs with these oncogenic mutations have been reported to be resistant to ICIs. 16,17,36 Similarly, in the present study, we observed a trend toward reduced efficacy of ICIs in patients with these mutations. If unnecessary ICI administration can be avoided by genome profiling, it may be possible to avoid a reduction in quality of life due to immune-related adverse events. Therefore, we believe it is worthwhile to conduct the CGP panel prior to the start of treatment, rather than after.
The availability of sufficient specimen volume is an issue in CGP panel testing in patients with advanced-stage lung cancer who have had an NGS panel performed at the time of initial diagnosis. At diagnosis, physicians rely on bronchoscopic biopsy specimens in nearly 60% of patients with advanced-stage lung cancer. 37,38 F1 CDx requires at least 1 mm^3 of tissue, and the NCC OncoPanel requires 10 unstained slides with a minimum size of 4 mm^2 (16 mm^2 is recommended). 14 These tumor volumes are often difficult to obtain from bronchoscopic biopsy specimens, and the possibility of obtaining specimens that can withstand two NGS panels is much lower. Surgical biopsy specimens accounted for 35% of specimens in this study, while bronchoscopic biopsy specimens accounted for only 25% (Table 1). In eight cases, a re-biopsy for gene panel evaluation was required, suggesting that specimen collection is an important issue in thoracic malignancies. In addition, a surgical biopsy specimen may be able to withstand multiple gene panels, 23 and it is important to consider a biopsy policy that aims not only at diagnosis but also at genomic analysis. 38
CONCLUSION
In the present study, the CGP panel detected actionable genetic alterations, including druggable mutations, in 12 (20%) of 60 patients. TMB-high SCLC responded to pembrolizumab, whereas MET ex.14 skipping mutations emerged as a mechanism of resistance to EGFR-TKIs. Compared with other cancer types, lung cancer is rich in molecular-targeted agents, therefore the usefulness of a CGP panel may be greater. Mutations in STK11, KEAP1, and NFE2L2 may be useful for predicting the effect of ICIs, and our findings suggest the importance of conducting a CGP panel before the start of treatment in clinical practice.
ACKNOWLEDGMENTS
We would like to thank all patients with thoracic malignancies who received CGP panels at our hospital. The study was approved by the ethics committee of the Osaka International Cancer Center (#22046).
Around the world in 500 years: Inter-regional spread of alien species over recent centuries
Aim: The number of alien species has been increasing for centuries world-wide, but temporal changes in the dynamics of their inter-regional spread remain unclear. Here, we analyse changes in the rate and extent of inter-regional spread of alien species over time and how these dynamics vary among major taxonomic groups. Location: Global. Time period: 1500-2010. Methods: Our analysis is based on the Alien Species First Record Database, which comprises >60,000 entries describing the year when an alien species was first recorded in a region (mostly countries and large islands) where it later established as an alien species. Based on the number and distribution of first records, we calculated metrics of spread between regions, which we termed "inter-regional spread", and conducted statistical analyses to assess variations over time and across taxonomic groups. Results: Almost all (>90%) species introduced before 1700 are found in more than one region today. Inter-regional spread often took centuries and is ongoing for many species. The intensity of inter-regional spread increased over time, with particularly steep increases after 1800. Rates of spread peaked for plants in the late 19th century, for birds and invertebrates in the late 20th century, and remained largely constant for mammals and fishes. Inter-regional spread for individual species showed hump-shaped temporal patterns, with the highest rates of spread at intermediate alien range sizes. Approximately 50% of widespread species showed signs of declines in spread rates.
| INTRODUCTION
The numbers of alien species are rising continuously across all continents and in most taxonomic groups (Seebens et al., 2017). These increases in regional alien species richness are driven by increasing human activities that facilitate biological invasions, such as trade, travel, intentional introductions and habitat modifications (Ellis et al., 2013; Levine & D'Antonio, 2003; Pyšek et al., 2020).
The relative importance and the rate of these drivers in shaping the long-term dynamics of alien species are likely to have changed over recent centuries (Essl et al., 2011). Thus, we might expect that the rate at which the global distributions of individual alien species have expanded across the globe has also changed during this period.
Despite recent insights into the long-term dynamics of alien species richness world-wide, we lack a good understanding of spatio-temporal trajectories in changes of the distribution of alien species world-wide. The intensification of global trade and transport over recent decades might have increased the number of alien individuals released (i.e., increased propagule pressure) (Hulme, 2009), resulting in increasing rates at which new alien populations establish (Lockwood et al., 2005). Moreover, the intensification of land use can favour the establishment of alien species (Pauchard & Alaback, 2004), which might also result in more frequent new occurrences of populations of alien species. As a result, we might expect not only an increase in overall numbers of alien species (Seebens et al., 2017), but also an increase in the rate of the spread of individual alien species. Conversely, the expansion of alien geographical ranges must eventually reach a point of saturation owing to environmental constraints, which should slow the spread of individual species (Seebens et al., 2016; Shigesada & Kawasaki, 1997; Wilson et al., 2007). Hence, although the overall dynamics of the spread of alien species across the globe can be expected to have accelerated in recent times owing to increased human pressures, the rate of spread might have decelerated for some alien species as they reach the limits to the number of regions in which they can establish.
However, it remains unclear how these processes have developed over time at a global scale, how they interact and how temporal trends in proliferation differ among species and across taxonomic groups.
Spatio-temporal dynamics have largely been investigated either at regional scales or globally for single alien species (Roura-Pascual et al., 2010; Wilson et al., 2007). A well-known relationship between the spatial and temporal dimensions of biological invasions shows that species introduced earlier are more widespread today (Gassó et al., 2010), but it remains unclear whether this regional relationship holds true at the global scale and how it has developed historically.
Furthermore, invasion dynamics have been reconstructed for only a few species, again mostly at regional scales, owing to the lack of comprehensive data at larger scales (Pyšek & Hulme, 2005). As a consequence, comparatively little is known about inter-regional dynamics of alien species spread and how they have changed over time.
To analyse the temporal development of inter-regional spread across a spectrum of established alien species, we use the Alien Species First Record Database, which is the most comprehensive cross-taxonomic source of data on the first detections of alien species in regions world-wide (Seebens et al., 2017). The first record database has been used previously to analyse long-term trends in alien species accumulation across taxonomic groups and continents (Seebens et al., 2017). For most taxonomic groups and regions, the number, and often also the rate of increase, of alien species has risen continuously, particularly since 1800, with further accelerations after 1950. A subsequent study of temporal dynamics of newly recorded alien species showed a surprisingly high proportion of so-called emerging alien species in recent years, which could be related to a continuous increase in the sizes of the source pools from which the species originated. However, these studies focused on total numbers of alien species, and it remains unclear how the dynamics of inter-regional spread of individual species have changed over time. Recently emerging alien species, in particular, might have greater potential to spread, whereas species first introduced a longer time ago might have reached their environmental limits and therefore slowed their rate of range expansion. Disentangling both processes should help in explaining the observed long-term trends of alien species accumulation.
Here, we used the first record database to analyse how widespread individual alien species were and how frequently they were recorded at different times. The frequency of records, although affected by sampling intensities, should provide indications of how species proliferated at various times and how likely it is that newly introduced species will start to spread in the future. We analysed the temporal development of inter-regional spread for major taxonomic groups (vascular plants, mammals, birds, fishes, arthropods and other invertebrates) over recent centuries. Specifically, we ask the following questions.
1. How quickly did the geographical distribution of alien species change over recent centuries?
2. How are the numbers of regions occupied by alien species today related to the year of first recording globally?
3. How long has the spread of individual species continued, and do we see indications of slowing?
4. How has the rate of spread changed over time, both for individual alien species and for whole taxonomic groups?
| Data
Our analyses are based on the Alien Species First Record Database (Seebens et al., 2017). The database contains years of first records of established (naturalized; i.e., forming permanent self-sustaining populations; for a definition, see Blackburn et al., 2011) alien populations in regions of the world. The regions largely correspond to countries; however, large islands administered politically by a mainland country but located in biogeographically distinct locations or with a particularly high number of alien species (e.g., Hawaiian Islands) are considered as different regions. The database has been updated and revised recently. It is now based on 164 individual data sources (22 online databases, 126 scientific articles and reports, and 16 unpublished data sets from individual researchers). The information about occurrences, years and taxon names was standardized and integrated, as explained in detail by Seebens et al. (2017). Altogether, the database contains 63,807 records of 22,320 alien species occurrences in 280 non-overlapping regions with a median size of 33,523 km² (range 0.43–16,921,565 km²). In comparison to the previous version of the database, this represents 5% more records and 13% more taxa. All versions of the Alien Species First Record Database are available online (https://doi.org/10.5281/zenodo.3690748).
| Definitions
For analyses, we calculated a number of measures to capture dynamics of changing ranges and frequencies of first records and provide the following definitions of certain key terms used.
• Inter-regional spread: We refer to inter-regional spread as the temporal sequence of first records of an established alien species in geographical regions. This definition can include both autonomous dispersal of species without human assistance and human-mediated introductions.
• Rate of spread: We define rate of spread as a function of the number of first records per unit time. Thus, it refers to inter-regional spread and should be distinguished from local spreading dynamics of, for example, expanding individual populations. We interpret the frequent recording of an alien species as an indication of a high rate of inter-regional spread. The rate of spread was measured for each individual species separately as the inverse of the time elapsed between consecutive first records of that species (a short computational sketch follows this list).
• Invaded or alien range: The invaded or alien range is the number of regions for which the alien species has been recorded with a first record in the database.
• Global first record: The global first record denotes when the species was first recorded as an alien anywhere in the world, as documented in the Alien Species First Record Database. This record is a proxy for the onset of the inter-regional spread of a species in its alien range world-wide. The global first record of a species is used to define the global minimum invasion time of an alien species.
• Global extent of new occurrences: The global extent of new occurrences was measured as the variation of coordinates of the centroids of regions where the first records occurred. More specifically, it was calculated as the circular variance of longitudes and latitudes, respectively, of first records for individual species recorded during 10-year intervals since 1500. A large variance indicates more widely distributed first records, whereas a low variance shows a narrow distribution of new first records.
• Minimum invasion time: Invasion time describes the time elapsed since the first record of a species in a new region. However, given that there are often substantial time lags involved between the establishment of an alien species and its documentation (Crooks, 2005), the dates of first records used here provide information only on minimum invasion time, because the true (but undocumented) onset of inter-regional spread of a species might have started earlier. The minimum invasion time can be regarded as the global counterpart of the more commonly used "minimum residence time" (Gassó et al., 2010), which, however, is not applicable at the global scale, where all species are resident somewhere.
• Invasion curve: Invasion curves describe the increase in the number of invaded regions for individual species over time (Pyšek & Prach, 1993). A steep increase in the invasion curve shows that the species was recorded frequently from new regions in a short period of time, which might indicate rapid inter-regional spread or frequent introductions across regions.
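The rate-of-spread and circular-variance metrics defined above are straightforward to compute from a first-record table. The following is a minimal Python sketch, not the authors' code; the first records (year, longitude, latitude) for a single species are invented for illustration:

import numpy as np

# Hypothetical first records for one species: (year, lon_deg, lat_deg)
records = [(1548, 10.0, 51.0), (1553, -1.5, 52.5), (1770, 151.2, -33.9)]

def spread_rates(years):
    # Inverse of the time elapsed between consecutive first records.
    years = sorted(years)
    return [1.0 / (t1 - t0) for t0, t1 in zip(years, years[1:]) if t1 > t0]

def circular_variance(angles_deg):
    # One minus the mean resultant length of the angles, as used for the
    # 'global extent of new occurrences'.
    a = np.radians(angles_deg)
    return 1.0 - np.hypot(np.mean(np.cos(a)), np.mean(np.sin(a)))

years, lons, lats = zip(*records)
print(spread_rates(years))                           # [0.2, ~0.0046] per year
print(circular_variance(lons), circular_variance(lats))

In the paper the circular variance is computed per 10-year interval; here it is applied to all records at once purely for illustration.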
| Data analyses
We used linear regression analysis and generalized additive models (GAMs) to analyse temporal dynamics of inter-regional spread.
Given that we were also interested in the functional forms of observed trends, we fitted different functional relationships, such as linear [y = a + bx], quadratic [y = a − ((x − b)/c)²] and saturating [y = a(1 − e^(−bx))] forms, to observed long-term trends in y with time x, with a, b and c denoting parameters defining the shape and scale of the functions. We evaluated the goodness-of-fit using Akaike's information criterion (AIC) for individual fits and identified the best-fitting functional relationship by the lowest AIC. According to common standards (Burnham & Anderson, 2004), we considered an improvement in fit as ΔAIC > 5. Where appropriate, we calculated standard errors of the mean or interquartile ranges to highlight the variation of the underlying data and performed resampling of subsets of data to obtain measures of variation. More details are provided together with the presented results.
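To make the model-selection step concrete, the following sketch (invented data, not the authors' script) fits the three functional forms named above and ranks them by AIC; under the stated rule, one form is preferred only if its AIC undercuts the alternatives by more than five units:

import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)
x = np.linspace(1800, 2000, 50)
y = 30 * (1 - np.exp(-0.01 * (x - 1800))) + rng.normal(0, 1, x.size)

models = {
    "linear":     (lambda x, a, b: a + b * (x - 1800),                [0.0, 0.1]),
    "quadratic":  (lambda x, a, b, c: a - ((x - b) / c) ** 2,         [30.0, 2000.0, 100.0]),
    "saturating": (lambda x, a, b: a * (1 - np.exp(-b * (x - 1800))), [30.0, 0.01]),
}

for name, (f, p0) in models.items():
    p, _ = curve_fit(f, x, y, p0=p0, maxfev=20000)
    rss = np.sum((y - f(x, *p)) ** 2)
    # Gaussian-error AIC, up to an additive constant shared by all models
    aic = x.size * np.log(rss / x.size) + 2 * len(p0)
    print(f"{name}: AIC = {aic:.1f}")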
| RESULTS
The median number of invaded regions is generally low for all taxonomic groups (Supporting Information Figure S1). On average, individual species of birds and mammals tend to have more first records compared with other taxonomic groups. For all taxonomic groups, the vast majority of species have a low number of first records, with medians of one or two regions. A few species in all taxonomic groups, however, are widespread. For vascular plants, arthropods and other invertebrates, c. 1% of all species occupy ≥ 20 regions world-wide, and this is true for 6% of birds, 6% of mammals and 3% of fishes. However, a few insect species have very large alien ranges, covering > 100 regions, such as the longhorn crazy ant (Paratrechina longicornis).

Invasion curves were relatively flat for nearly all species with first records before 1800 (Figure 1). In general, alien species with global first records between 1500 and 1700 did not occupy >10 regions during that time, although undersampling is likely to be an issue for this period. Given our data, only a few conspicuous species, such as the brown rat (Rattus norvegicus), domestic pig (Sus scrofa), common pheasant (Phasianus colchicus) or common guava (Psidium guajava), were already found in many regions before 1800. In contrast, steep invasion curves were nearly always observed after 1800 and mostly for species with their global first record after 1800 (blue and turquoise lines in Figure 1). Particularly steep increases in invasion curves were observed for birds and arthropods. Interestingly, none of the arthropods introduced before 1800 spread widely, whereas nearly all arthropods widespread today were first introduced during the 19th century.
Across all taxonomic groups, the number of invaded regions increased continuously with longer minimum invasion times (i.e., the earlier in time a species was initially introduced somewhere in the world, the more first records it has today). For most taxa, the percentage of species with a global first record between 1500 and 1700 that are still found in only one region is far below 10% (Supporting Information Figure S2). In contrast, of all species with their global first record between 1950 and 2000, >50% are still found in only a single region. This pattern is particularly pronounced for vascular plants, for which up to 80% of species recorded recently for the first time globally are found in a single region only, but similar patterns are also apparent for vertebrates and invertebrates (Supporting Information Figure S2).
In general, the median alien range size of species increased with minimum invasion time over the last 500 years (Figure 2), which was supported by significant (linear regression models, p < .001) relationships between year and alien range size for all taxonomic groups. To test for potential effects of saturation of alien range sizes in time, we compared the fits of a negative exponential (i.e., saturating) and a linear function, but for all taxonomic groups the linear function was not clearly outperformed, giving no indication of saturation at the group level (Supporting Information Figure S4).
Although the rate of spread tended to increase over time, around half of the widespread alien species showed declining spread rates towards the end of their time series (i.e., the rate for their last three steps was lower compared with the mean of the preceding three steps), indicating that the inter-regional spreading dynamics of these species tended to slow down (Table 1). The proportion of species with declining rates of spread among all widespread species was c. 50% for all taxonomic groups except for fishes, for which the proportion was 90% (Table 1).
Examples of widespread alien species with declining spread rates are given in Table 1.
Species reached their maximum spread rates at different time periods, but within individual taxonomic groups the species showed a tendency toward earlier or later timings (Supporting Information).
| DISCUSSION
Here, we have shown that the inter-regional spread of individual alien species has extended historically over long time periods, often >100 years (Figure 1). The process of inter-regional spread is ongoing for the majority of species, including those that were first introduced to new regions as early as several hundred years ago (Figure 2). Furthermore, our data suggest that spreading dynamics intensified after 1800, resulting in higher numbers of first records per species (Figure 4), with a wider distribution (Figure 3), although this result might also reflect the paucity of first record data from earlier centuries. Although it is known that within a region, such as an individual country, range expansion is a long-lasting process that often takes many decades or even longer (e.g., Gassó et al., 2010; Hudgins et al., 2017; Mandák et al., 2004; Roques et al., 2016), here we document that similar and even much longer time spans are also required for inter-regional spread at a global scale.
Most of the species that were observed as alien in recent times were found in only a single region. However, the vast majority (>90%; Supporting Information Figure S2) of species introduced before 1700 occur in more than one region today.

The calculation of spread rates is certainly affected by varying sampling and recording intensities through time. As has been shown in other studies (Costello & Solow, 2003), an increase in first records does not necessarily mean that rates of spread accelerated when, for instance, sampling intensities increased in parallel. Disentangling the influence of sampling intensities requires knowledge and data about the underlying drivers to construct appropriate models (Costello et al., 2007) or to include data about species introduction efforts.
Unfortunately, neither option is possible owing to the lack of data on drivers and propagule pressures. However, it seems valid to assume that sampling intensity and research effort have increased over time; hence, we would expect an increase in records simply because of that. This might be the case for birds, but we observe constant or even declining rates of spread in other well-investigated taxonomic groups, such as mammals and vascular plants (Figure 4), which are difficult to explain with increased sampling intensities.
We, therefore, acknowledge that records are certainly influenced by varying sampling intensities, but we believe that the observed dynamics were not driven predominantly by that.
Interestingly, rates of spread varied among most of the well-sampled groups (Figure 4). Rates for alien birds increased distinctly over time, which indicates rapid and widespread range expansions by many species, particularly during the last 50 years. In contrast, inter-regional spread has tended to slow for mammals since 1900, probably owing to more stringent regulations on species movements across international borders and a rising appreciation that the introduction of such species can be highly detrimental (e.g., New Zealand; McDowall, 1994). For alien vascular plants, spread rates peaked in the late 19th century. During that time, many plants were exchanged world-wide as a consequence of a distinct increase in horticultural activities (van Kleunen et al., 2018). However, many records for plants originate from herbarium records, which were sampled intensively during that time, and it is not clear how this might have affected the overall trends. Although overall spreading dynamics seem to have accelerated, rapid spreading events were already observed in early times for certain species (Supporting Information Table S1). For mammals, for example, the highest spread rates were recorded before 1800. This might, however, be affected by sampling effort, because individual surveys might result in a high frequency of first records.

[Table 1. Widespread species with slowed inter-regional spread, n (%), with examples of such species per taxonomic group.]

Spread rates are expected to peak at intermediate range sizes, when species are already established in several regions that can act as sources for further spread (Drury et al., 2007; Seebens et al., 2019). At the same time, there are still enough unoccupied and suitable regions that the species can colonize. At large range sizes, most of the suitable regions are already occupied, and the spread rate has to slow.
The same pattern was found in a study of marine invasions, which showed that the highest probabilities of spread into new areas were predicted to happen at intermediate range sizes (Seebens et al., 2016). For taxonomic groups other than vascular plants and mammals, the phase of slowing spread has seemingly not yet been reached. An alternative explanation is that rates of spread increase with the maximum potential range size of a species. This means that species with the potential to occupy large ranges are also fast spreaders, whereas species dispersing slowly can occupy only a small number of regions. However, this is less likely because it implies that there are no slow-spreading alien species that occupy large ranges, which is contrary to our findings.
Clearly, given that first records of alien species are an amalgamation of true inter-regional spread and recording intensity, and that only a fraction of alien species first records are included in the Alien Species First Record Database, the metrics applied here can only represent proxies for the true rate of spread.
However, we believe that the results we show here are robust.
Sensitivity analyses indicate that even under the assumption of very large changes in sampling rates, such as misclassification of first records by up to 100 years, similar time series result, albeit at reduced rates and for lower species numbers.
In addition, the observed variation in spread among well-investigated groups, such as vascular plants, mammals and birds, is difficult to explain with changes in sampling rate alone (Figure 4). It would require nonlinear variation in sampling rates specific to individual taxonomic groups (i.e., a peak for plants in the 19th century, a peak for birds in the 20th century and a constant rate for mammals), which is unlikely to be the case. Although there is certainly a spatial bias towards Europe inherent in the data, repeating the analyses using first records only from Europe revealed very similar patterns (Supporting Information Figure S6). This also shows that the high variation of region sizes in our database did not affect our conclusions. Thus, although several biasing factors are likely to have affected the observed dynamics, the overall results are robust to these gaps and uncertainties.
In conclusion, the vast majority of species have expanded their ranges after their global first record, although this process can take centuries. Some species of vascular plants and mammals show signs of declining spread rates as they reach large range sizes, indicating that at least some widespread species in these groups are saturating their potential global ranges defined by environmental constraints.
We expect many new records of alien populations to occur in the future for the many alien species currently found in only a single region, because most of them were recorded only recently. As a consequence, even if the introduction of new alien species is stopped completely, an increase in their numbers per region will be observed for many decades to come owing to the spread of species already established.
Surgical techniques, open versus minimally invasive gastrectomy after chemotherapy (STOMACH trial): study protocol for a randomized controlled trial
Background Laparoscopic surgery has been shown to provide important advantages in comparison with open procedures in the treatment of several malignant diseases, such as less perioperative blood loss and faster patient recovery. It also maintains similar results with regard to tumor resection margins and oncological long-term survival. In gastric cancer, the role of laparoscopic surgery remains unclear. Current recommended treatment for gastric cancer consists of radical resection of the stomach, with a free margin of 5 to 6 cm from the tumor, combined with a lymphadenectomy. The extent of the lymphadenectomy is considered a marker for radicality of surgery and quality of care. Therefore, it is imperative that a novel surgical technique, such as minimally invasive total gastrectomy, should be non-inferior with regard to radicality of surgery and lymph node yield. Methods/Design The Surgical Techniques, Open versus Minimally invasive gastrectomy After CHemotherapy (STOMACH) study is a randomized, clinical multicenter trial. All adult patients with primary carcinoma of the stomach, in whom the tumor is considered surgically resectable (T1-3, N0-1, M0) after neo-adjuvant chemotherapy, are eligible for inclusion and randomization. The primary endpoint is quality of oncological resection, measured by radicality of surgery and number of retrieved lymph nodes. The pathologist is blinded towards patient allocation. Secondary outcomes include patient-reported outcome measures (PROMs) regarding quality of life, postoperative complications and cost-effectiveness. Based on a non-inferiority model for lymph node yield, with an average lymph node yield of 20, a non-inferiority margin of −4 and a 90% power to detect non-inferiority, a total of 168 patients are to be included. Discussion The STOMACH trial is a prospective, multicenter, parallel randomized study to define the optimal surgical strategy in patients with proximal or central gastric cancer after neo-adjuvant therapy: the conventional ‘open’ approach or minimally invasive total gastrectomy. Trial registration This trial was registered on 28 April 2014 at Clinicaltrials.gov with the identifier NCT02130726.
Background
Laparoscopic surgery has been shown to provide important advantages in comparison with an open approach in the treatment of gastrointestinal malignant diseases, such as less perioperative blood loss, faster patient recovery and shorter hospital stay. It also maintains similar outcomes with regard to tumor resection margins and oncological long-term survival [1,2]. In gastric cancer, the role of laparoscopic surgery remains unclear.
The current recommended treatment for gastric cancer consists of radical resection of the stomach, with a free margin of 5 to 6 cm from the tumor, combined with a lymphadenectomy. The extent of the lymphadenectomy, performed according to the guidelines of the Japanese Gastric Cancer Association, is considered a marker for radicality of surgery and quality of care [3]. Therefore, it is imperative that a novel surgical approach such as laparoscopic total gastrectomy should be non-inferior with regard to radicality of surgery and lymph node yield.
Several studies have focused on laparoscopic versus open gastrectomy. These studies are predominantly conducted in Asian countries [4,5], where the incidence of gastric cancer is higher in comparison to Western countries [6,7]. The screening program in Japan, which started in 1983, has enabled important advances in the detection and treatment of early gastric carcinomas in this country [8]. As such, tumor stages are lower at the time of diagnosis compared to Western countries, and it is difficult to translate the results of Asian studies to a population for which no screening program exists, and in which the stages of the tumors at diagnosis are higher [9].
Only a few Western studies, one randomized controlled trial and some cohort analyses, have compared laparoscopic and open approaches for gastric cancer [10-14]. In the randomized controlled trial, Huscher et al. found that laparoscopic partial gastrectomy showed results similar to open gastrectomy with regard to quality of oncological resection, as measured by the number of retrieved lymph nodes, and five-year survival rate, whereas patient recovery was faster and admission duration shorter [10]. However, these studies are small and underpowered and have been overtaken by changes in neo-adjuvant therapies. Further research is indicated in order to establish the optimal surgical strategy.
Moreover, neo-adjuvant chemotherapy has been widely implemented following the outcomes of several studies on this subject [15,16]. Nowadays, the use of neo-adjuvant treatment followed by gastric resection is extensive and applies to stage Ib to IVa gastric cancer [17]. The effect of neo-adjuvant chemotherapy on laparoscopic gastrectomy in comparison with open resection remains unclear. For instance, in rectal and breast cancer, neo-adjuvant chemotherapy has been associated with tumor response and a lower number of lymph nodes found in the specimen [18]. In gastric cancer, preoperative chemotherapy has been associated with a lower number of tumor-positive lymph nodes; however, no difference in total lymph node yield was reported [19]. In other series, laparoscopic gastrectomy has shown non-inferior results with regard to lymph node yield in comparison to open gastrectomy, but these studies were conducted before the implementation of neo-adjuvant chemotherapy [12,20,21]. Moreover, the difficulty of dissection and resection, and the quality of a laparoscopically performed esophagojejunostomy, remain technical challenges. Considering all these factors, such as the differences in populations, the number of retrieved lymph nodes, the location of lymph nodes in anatomical stations, the increased use of neo-adjuvant chemotherapy and the technical difficulties of laparoscopic total gastrectomy, a randomized controlled trial comparing open and laparoscopic total gastrectomy after neo-adjuvant therapy is warranted. Such a trial could provide an answer to the question, 'Is a minimally invasive total gastrectomy justified in the era of neo-adjuvant chemotherapy?'.
Study objectives
The objective of this study is to establish the optimal surgical strategy in the treatment of patients with gastric cancer. The STOMACH trial is a prospective, international, multicenter, parallel randomized clinical trial. Patients with gastric cancer selected to undergo a total gastrectomy, who have received neo-adjuvant chemotherapy, are randomized between a conventional 'open' and a minimally invasive group.
Endpoints
The primary endpoint is quality of oncological resection with regard to radicality of surgery and lymph node dissection in all the appointed stations. Both the total number of resected lymph nodes and the resected lymph node stations will be examined. After surgery, the surgeon will attach tags with numbers corresponding with the dissected lymph node stations to the specimen. This will allow for a more extensive assessment of the feasibility of minimally invasive versus open resection.
Secondary endpoints include postoperative complications, which are monitored for 30 days postoperatively. Overall length of hospital stay and Intensive Care Unit (ICU) stay will also be recorded. Survival will be monitored for up to three years postoperatively. Quality of life is assessed with patient-reported outcome measures (PROMs): the Euro-Quality of Life-5D (EQ-5D) questionnaire, the European Organization for Research and Treatment of Cancer Quality of Life Questionnaire 30 (EORTC QLQ-C30) and the Stomach 22 module (STO22).
Assessment of quality of life will be performed preoperatively and at five days, three months, six months and one year postoperatively. Cost-effectiveness will be assessed from a hospital and societal perspective.
Power of the study
The number of dissected lymph nodes in gastric cancer surgery is an important marker for radicality of surgery and quality of care [22-24]. Therefore, the primary outcome in this study is the number of retrieved lymph nodes in laparoscopic surgery compared to an open procedure.
It is anticipated that laparoscopic gastric resection will show similar surgical resection specimen quality [19], based on the results of the Dutch Cancer Registry (NKR). The sample size calculation is set to achieve 90% power to detect non-inferiority using a one-sided, two-sample t-test. With a margin of non-inferiority at −4.0 and a significance level (α) of 0.05, the sample size requires 66 patients to be included per group, with a total sample size of 132 patients. A non-inferiority margin of −4.0 is deemed feasible, since the current average lymph node yield at the VU University Medical Center (VUmc, Amsterdam) is around 20, meaning a lymph node yield of 16 is acceptable.
Since lymph node yield is of interest in cases of radical resection, further correction is necessary for radicality of surgery. The NKR showed that a radical resection was achieved in 79% of patients, although palliative resection figures are not given separately. After correction for radicality, a total of 168 patients are to be included. In other, similar prospective studies, no loss to follow-up was recorded; therefore, we do not include an allowance for loss to follow-up [25,26].
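The stated sample sizes can be reproduced with a standard normal-approximation formula for a one-sided non-inferiority test of two means. The common standard deviation is not given in the protocol, so the value used below (sd = 7.8 lymph nodes) is an assumption chosen to reproduce the reported 66 patients per group:

from math import ceil
from scipy.stats import norm

alpha, power = 0.05, 0.90   # one-sided significance level and power
margin = 4.0                # non-inferiority margin (lymph nodes)
sd = 7.8                    # assumed common SD (not stated in the protocol)

z_a, z_b = norm.ppf(1 - alpha), norm.ppf(power)
n_per_group = ceil(2 * (z_a + z_b) ** 2 * (sd / margin) ** 2)
print(n_per_group)          # -> 66, as reported

# Inflate for the ~79% radical-resection rate reported by the NKR:
print(ceil(2 * n_per_group / 0.79))   # -> 168 patients in total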
Inclusion criteria
All adult patients with primary carcinoma of the stomach, where the tumor is considered surgically resectable (T1-3, N0-1, M0) after neo-adjuvant chemotherapy, are eligible. Only patients with an indication for total gastrectomy are included, in order to exclude bias due to different surgical approaches. Written informed consent is obligatory.
Exclusion criteria
Exclusion criteria are previous surgery of the stomach and a previous history of cancer or a co-existing cancer. To allow for appropriate inclusion and randomization, patients operated on in an acute setting are excluded. Patients with an American Society of Anesthesiologists (ASA) classification of four or higher are excluded.
Participating surgeons and clinics
Complication rate, duration of operation and morbidity can be influenced by the learning curve of the operating surgeon, and this might bias results. In order to prevent surgeon bias, participating surgeons must have sufficient experience in open and minimally invasive total gastrectomy. Based on the literature and the Dutch guidelines for gastric carcinoma [27,28], it is required that the participating surgeon has performed at least 20 open and minimally invasive total gastrectomies.
Randomization and blinding
Information regarding the study will be provided to the patient at the outpatient clinic. When informed consent is obtained, the patient will be randomized at the outpatient clinic. Randomization occurs via an online module. The participating surgeon can login via a secured module on the STOMACH trial website. Upon filling out the randomization form, an immediate response is obtained, containing a code number and the allocated type of operation.
The study design is unblinded with regard to patient and physician. The patient will be informed about the type of procedure they are allocated to. Patients who do not agree to participate in the study will receive the standard treatment in the corresponding department. The pathologist assessing the specimen is blinded to the operating technique, since radicality of surgery and the number of assessed lymph nodes and lymph node stations are the primary outcome in this trial.
Data collection and statistics
Data is collected via a secured Internet module and via datasheets on paper. A secured online module has been especially designed for the STOMACH trial, using OpenClinica, version 3.3. © 2015 OpenClinica, LLC. Paper datasheets, such as completed questionnaires, will be sent to the VUmc by mail, where they are kept in a secured room. Data are collected daily until discharge. PROMs are collected preoperatively, five days postoperatively, at three months, six months and one year postoperatively.
One research fellow at the VUmc will monitor the data of all included patients and maintain regular contact with all participating centers. All required parameters will be collected in an SPSS data file (SPSS version 22, IBM Statistics, Chicago, Illinois, USA). Data analysis will be performed according to the intention-to-treat principle. Continuous variables will be compared with a t-test or Mann-Whitney U test as appropriate, and frequencies will be compared with a chi-square or McNemar test as appropriate.
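As an illustration of the planned comparisons, a minimal sketch with invented toy data is shown below (the trial itself will use SPSS rather than Python):

from scipy import stats

open_nodes = [18, 22, 25, 19, 21, 24]   # lymph node yield, open arm (toy data)
mis_nodes  = [20, 23, 17, 22, 26, 21]   # minimally invasive arm (toy data)

print(stats.ttest_ind(open_nodes, mis_nodes))      # if approximately normal
print(stats.mannwhitneyu(open_nodes, mis_nodes))   # distribution-free alternative

# 2x2 frequency table, e.g. radical resection achieved (yes/no) per arm;
# McNemar's test would apply instead for paired frequencies.
print(stats.chi2_contingency([[52, 14], [54, 12]]))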
Ethics
The study is conducted in accordance with the principles of the Declaration of Helsinki and 'good clinical practice' guidelines. The independent medical ethics committee of the VUmc (Medisch Ethische Toetsingscommissie VU Medisch Centrum, Amsterdam, the Netherlands) has approved the final version of the protocol prior to the start of the study (approval number: 2014.354 -NL51293.029.14). Written informed consent will be obtained from all participating patients. This trial was registered on 28 April 2014 at Clinicaltrials.gov with the trial number NCT02130726.
Surgical technique

Preoperative preparation
All patients will receive the same preoperative preparation, regardless of allocated treatment. All participating patients will receive standard preoperative prophylactic antibiotics consisting of a single dose of cefuroxime at 1,500 mg and a single dose of metronidazole at 500 mg. Antithrombotic prophylaxis will be administered according to local protocol.
Open gastrectomy
For the open gastrectomy, the patient is placed in the supine position. Access to the abdomen is obtained via a median laparotomy. The Omni-Tract® retractor system (Omni-Tract Surgical, St. Paul, Minnesota, USA) is placed over the incision in order to secure exposure of the stomach.
Minimally invasive gastrectomy
For the minimally invasive gastrectomy, the patient is placed in the reverse Trendelenburg position and the legs are abducted. The surgeon is positioned between the legs of the patient. The first trocar, for the laparoscope, is inserted at the umbilicus. After insertion, a pneumoperitoneum is created. The following trocars are placed with the aid of the laparoscope. The overview of trocar placement is depicted in Figure 1. A Nathanson Hook Liver Retractor® may be placed in order to retract the liver from the operation area.
Gastrectomy
After the placement of trocars or opening of the abdomen, the abdomen is inspected for signs of tumor progression. The greater omentum is mobilized and dissected from the transverse colon, and access to the lesser sac is obtained. The right gastro-epiploic artery is identified and clipped, and the corresponding lymph node stations are dissected. This is followed by further dissection and ligation of the right gastric artery and harvesting of the hepatic lymph node stations. The duodenum is dissected up to 5 cm distal to the pylorus, followed by transection of the duodenum.
Dissection continues with mobilization of the left part of the stomach. After identification of the left gastric artery, the artery is clipped and the corresponding lymph node stations are harvested. Further dissection continues towards the hiatus, where the paracardial lymph nodes are harvested. After the gastro-esophageal junction is identified and dissected, it is transected using a linear stapling device. With regard to transection, a proximal margin of 6 cm from the tumor is recommended [3]. After en bloc resection, the specimen is removed, but not yet stored. After completion of the surgery, the surgeon attaches tags with numbers corresponding to each lymph node station, allowing for separate analysis of each lymph node station.
Reconstruction is performed with a Roux-en-Y anastomosis. First, the jejunum is mobilized upwards in a retrocolic fashion and an anastomosis between the esophagus and jejunum is performed. Next, a jejunojejunostomy is fashioned. A final inspection of the abdomen is performed, with control of hemostasis. Lastly, a silicone drain is placed in the operated area, if deemed necessary by the operating surgeon, and the abdomen is closed.
Postoperative management
Irrespective of open or minimally invasive gastrectomy, patients will receive similar postoperative management. Depending on local protocol, a nasogastric tube may be positioned. Oral diet is initiated. Postoperative pain control consists of patient-controlled analgesia (PCA), which is monitored daily by an anesthesiologist. PCA pumps will remain in situ for a maximum of three days. Patients are encouraged to be out of bed and walking around the ward, under the guidance of a physiotherapist or nurse. Patients will be discharged when they pass stool, are able to drink, can walk and are comfortable with oral analgesia. A delay in discharge due to 'social' reasons will be recorded. Follow-up occurs at the outpatient clinic; patients are seen routinely at three, six and 12 months postoperatively.
Discussion
Laparoscopic surgery has been shown to provide important advantages in comparison with open procedures in surgery of the rectum and colon. Since the first minimally invasive total gastrectomy in 1996 by Azagra et al. [29], several comparative studies between open and minimally invasive approaches of the stomach have been published. Short-term results show less perioperative blood loss, faster patient recovery and earlier discharge from the hospital. One study reported long-term results with similar survival and disease-free survival rates in the open and minimally invasive approach [10].
Most studies are conducted in Asian countries, where a screening program has enabled early detection and treatment. The results of these studies cannot be translated to the Western population. Western studies have deemed minimally invasive gastrectomy to be feasible, although the numbers are small and the studies often underpowered. Furthermore, these studies were conducted before the implementation of neo-adjuvant therapy. Currently, in the Netherlands, less than 10% of patients are operated on via a minimally invasive approach [30]. A prospective, randomized clinical trial is considered necessary in order to establish the optimal surgical technique in gastric cancer: open or minimally invasive gastrectomy.
Trial status
The Scientific Research Committee of the Cancer Centre Amsterdam, NL, approved the design of the STOMACH trial. The Medical Ethical Committee of the VUmc has approved the protocol (approval number: 2014.354 -NL51293.029). The trial has been open for recruitment since January 2015.
Syntheses, crystal structures and thermal properties of catena-poly[cadmium(II)-di-μ-bromido-μ-pyridazine-κ²N¹:N²] and catena-poly[cadmium(II)-di-μ-iodido-μ-pyridazine-κ²N¹:N²]
In the crystal structures of the title compounds, the cadmium cations are octahedrally coordinated by four halide anions and two pyridazine ligands in a trans-CdX₄N₂ (X = Br, I) arrangement and are linked into chains by the halide anions and the pyridazine ligands.
The reactions of cadmium bromide and cadmium iodide with pyridazine (C₄H₄N₂) in ethanol under solvothermal conditions led to the formation of crystals of [CdBr₂(pyridazine)]ₙ (1) and [CdI₂(pyridazine)]ₙ (2), which were characterized by single-crystal X-ray diffraction. The asymmetric units of both compounds consist of a cadmium cation located on the intersection point of a twofold screw axis and a mirror plane (2/m), a halide anion that is located on a mirror plane and a pyridazine ligand, with all atoms occupying Wyckoff position 4e (mm2). These compounds are isotypic and consist of cadmium cations that are octahedrally coordinated by four halide anions and two pyridazine ligands and are linked into [100] chains by pairs of μ-1,1-bridging halide anions and bridging pyridazine ligands. In the crystals, the pyridazine ligands of neighboring chains are stacked onto each other, indicating π–π interactions. Larger amounts of pure samples can also be obtained by stirring at room temperature, as proven by powder X-ray diffraction. Measurements using thermogravimetry and differential thermoanalysis (TG-DTA) reveal that upon heating all the pyridazine ligands are removed in one step, which leads to the formation of CdBr₂ or CdI₂.
Chemical context
Coordination polymers based on transition-metal halides show versatile structural behavior and can form networks of different dimensionalities (Peng et al., 2010). This is especially valid for compounds based on Cu(I), which show different CuX substructures (X = Cl, Br, I) such as, for example, dimeric units, chains or layers that can be additionally connected by bridging neutral coligands (Peng et al., 2010). These compounds are of additional interest because of their luminescence behavior (Gibbons et al., 2017; Mensah et al., 2022). For one particular metal halide and coligand, compounds of different stoichiometry are frequently observed. In most cases they were synthesized in the liquid state, but in some cases the coligand-deficient phases cannot be obtained from solution or are obtained only as mixtures with coligand-rich phases.
We have been interested in the structural properties of such compounds for several years and have found that upon heating most of the coligand-rich compounds lose their coligands stepwise and transform into new coligand-deficient compounds that show condensed copper-halide networks (Näther & Jess, 2004; Näther et al., 2001, 2007). The advantage of this method is the fact that this reaction is irreversible, and that the new compounds are obtained in quantitative yields. Moreover, in some cases, metastable polymorphs or isomers can also be obtained, and this method can also be used for the synthesis of new coordination polymers with other bridging anionic ligands such as, for example, thio- or selenocyanates (Werner et al., 2015; Wriedt & Näther, 2010).
We subsequently found that transition-metal halide compounds with twofold positively charged cations such as Cd(II), which also show a pronounced structural variability, can be obtained by this route (Näther et al., 2017; Jess et al., 2020). In most cases, discrete CdX₂ complexes are observed (Ghanbari et al., 2017; Liu, 2011), but these units can also condense into dinuclear (Santra et al., 2016; Xie et al., 2003) and tetranuclear units (Zhu, 2011) or polymers (Nezhadali Baghan et al., 2021; Satoh et al., 2001), where the latter can be further linked by the coligands into layers (Hu et al., 2009; Marchetti et al., 2011).
In this context, we have reported on CdX₂ coordination polymers with 2-chloro- and 2-methylpyrazine with the composition CdX₂(L)₂ (X = Cl, Br, I and L = 2-chloro- or 2-methylpyrazine). These compounds consist of CdX₂ chains in which the Cd cations are linked by two pairs of μ-1,1-bridging halide anions (Näther et al., 2017). Surprisingly, upon heating, the compounds with 2-chloropyrazine lose all the coligands in one single step, whereas decomposition of the 2-methylpyrazine compounds leads to the formation of compounds with the composition CdX₂(2-methylpyrazine), in which the CdX₂ chains are linked into layers by the 2-methylpyrazine ligands. These compounds can also be obtained if the discrete complex CdI₂(2-methylpyrazine)₂(H₂O) is thermally decomposed. In further work we investigated similar compounds with 2-cyanopyrazine as coligand, where we observed a different thermal reactivity as a function of the nature of the halide anions (Jess et al., 2020).
In the course of our investigations we also became interested in compounds with pyridazine as coligand. A search in the CCDC database revealed that several transition-metal halide coordination compounds with this ligand have already been reported in the literature (see Database survey). With cadmium, one compound with the composition CdCl₂(pyridazine) is reported, in which the Cd(II) cations are linked by μ-1,1-bridging chloride anions into chains, in which each two Cd(II) cations are additionally connected by the pyridazine ligands (Pazderski et al., 2004a). As this compound is isotypic to many other MX₂(pyridazine) coordination compounds, one can assume that this structure represents a very stable arrangement. On the other hand, compounds with this composition have also been reported with ZnX₂. In contrast to the bromide and iodide compounds, the chloride analog crystallizes in three different modifications, which indicates that the structural behavior also depends on the nature of the halide anion (Bhosekar et al., 2006a,b, 2007; Pazderski et al., 2004b). Moreover, even if in the majority of compounds pyridazine acts as a bridging ligand, some examples have been reported in which this ligand coordinates to metal cations through only one of its two N atoms, thereby forming discrete complexes, which also include transition-metal halide complexes (Handy et al., 2017; Boeckmann et al., 2011; Laramée & Hanan, 2014; Yang, 2017; Harvey et al., 2004).
Based on all these findings, we reacted CdBr₂ and CdI₂ in different molar ratios with pyridazine in several solvents to investigate whether compounds with a different ratio between CdX₂ and pyridazine can be prepared, which might also include pyridazine-rich discrete complexes that upon heating transform into new compounds with a more condensed network. However, independent of the reaction conditions and the stoichiometric ratio, we always obtained the same crystalline phases, as proven by powder X-ray diffraction (PXRD). Crystals of both compounds were obtained at elevated temperatures, and structure analysis proves that compounds with the composition CdBr₂(pyridazine) (1) and CdI₂(pyridazine) (2) were obtained. Comparison of the experimental PXRD patterns with those calculated from the results of the structure determinations proves that both compounds were obtained as pure phases (Figs. S1 and S2). Measurements using thermogravimetry and differential thermoanalysis reveal that both compounds decompose in one step, which is accompanied by an endothermic event in the DTA curve (Figs. S3 and S4). The experimental mass losses of 22.9% for 1 and 18.1% for 2 are in good agreement with those calculated for the removal of one pyridazine ligand (Δm calc. = 22.7% for 1 and 17.9% for 2), indicating that CdBr₂ and CdI₂, respectively, have formed.
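The calculated mass losses quoted above follow directly from the molar masses. A quick check (our own arithmetic, not taken from the paper's supporting information):

# Mass loss for removing one pyridazine (C4H4N2) from CdX2(pyridazine)
M = {"Cd": 112.41, "Br": 79.90, "I": 126.90, "C": 12.011, "H": 1.008, "N": 14.007}
m_pyd = 4 * M["C"] + 4 * M["H"] + 2 * M["N"]   # ~80.09 g/mol

for X in ("Br", "I"):
    m_total = M["Cd"] + 2 * M[X] + m_pyd
    print(f"X = {X}: {100 * m_pyd / m_total:.1f}%")
# -> 22.7% for the bromide (1) and 17.9% for the iodide (2), as quoted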
In this context, it is noted that the formation of a more pyridazine-deficient compound with a more condensed network is not expected, because for M²⁺ cations the network would have to be negatively charged, which is impossible in this case. It is noted, however, that one compound with CdCl₂ and a more condensed metal-halide network has been reported in the literature (Jin et al., 2014).
Structural commentary
The reaction of cadmium dibromide or cadmium diiodide with pyridazine leads to the formation of crystals of CdBr₂(pyridazine) (1) and CdI₂(pyridazine) (2). Both compounds are isotypic to their CdCl₂ analog already reported in the literature (Pazderski et al., 2004a). In this context, it is noted that for compound 2 a pseudo-translation along the crystallographic b-axis is detected, leading to half of the unit cell and space group Cmmm, but the refinement clearly shows that the present unit cell and space group are correct (see Refinement). Both compounds are also isotypic to a number of other metal-halide coordination polymers, indicating that this is a very stable arrangement (see Database survey).
The asymmetric units of compounds 1 and 2 consist of a cadmium cation located on the intersection point of a twofold screw axis and a mirror plane (Wyckoff site 4c, symmetry 2/m), as well as a bromide or iodide anion lying on a mirror plane (Wyckoff site 8h) and a pyridazine ligand, with all atoms located on Wyckoff position 4e (mm2) (Fig. 1). In both compounds, the Cd(II) cations are octahedrally coordinated in a trans-CdX₄N₂ arrangement by four halide anions and two pyridazine ligands, and are linked by pairs of μ-1,1-bridging halide anions into chains that propagate in the crystallographic a-axis direction (Fig. 2). The pyridazine ligands also act as bridging ligands, connecting two neighboring Cd(II) cations (Fig. 2). Within the chains, all of the pyridazine ligands are coplanar (Fig. 2).
The Cd-N bond lengths to the pyridazine ligand are slightly longer in the iodide compound 2 than in compound 1, which might be traced back to some crowding by the bulky iodide anions. In agreement with this, the distance is shortest in the corresponding chloride compound (Pazderski et al., 2004a) reported in the literature (Tables 1 and 2). The N-Cd-Br and N-Cd-I bond angles are comparable, which is also valid for those in the chloride compound (Pazderski et al., 2004a). As expected, the intrachain Cd···Cd distance increases from Cl [Cd···Cd = 3.5280 (5) Å] to Br and I.
Figure 2. Fragment of a [100] polymeric chain in the crystal structure of 1.
Figure 3. Arrangement of the chains in the crystal structure of 1 in a view along the crystallographic b-axis direction.
The arrangement of neighboring chains is shown in a view along the crystallographic b-axis direction (Fig. 3). The angle between two neighboring pyridazine ligands is 180° in both compounds, which is also the case for the chloride analog (Pazderski et al., 2004a). The distance between the centroids of adjacent pyridazine rings is 3.724 Å for the chloride, 3.8623 (1) Å (slippage = 0.095 Å) for the bromide and 4.1551 (1) Å (0.226 Å) for the iodide, consistent with π–π interactions (Fig. 4), although they must be weak for the iodide. There are no directional intermolecular interactions such as intermolecular C-H···X hydrogen bonding. As mentioned above, this structure type is common for the majority of transition-metal pyridazine coordination compounds with such a metal-to-pyridazine ratio, indicating that π–π interactions might also be responsible for this obviously very stable arrangement.
Database survey
A search in the CCDC database (version 5.43, last update November 2022; Groom et al., 2016) revealed that some compounds with the general composition MX₂(pyridazine) (M = transition metal and X = halide anion) have already been reported in the literature. The compounds with NiCl₂ (CSD refcode POPCIG) and NiBr₂ (POPCOM) were structurally characterized by Rietveld refinements using laboratory X-ray powder diffraction data and are isotypic to the title compounds (Masciocchi et al., 1994). In that contribution, the compounds of Mn, Fe, Co, Cu and Zn with chloride and bromide as anions were also synthesized and their lattice parameters determined from their powder patterns, indicating that the compounds with Mn, Fe and Co are isotypic to the Ni compound, which is not the case for the compounds with Cu and Zn (Masciocchi et al., 1994). The structures of MCl₂(pyridazine) with Mn (LANJEQ) and Fe (LANJAM) were later determined by single-crystal X-ray diffraction, which definitely proves that they crystallize in space group Immm (Yi et al., 2002). In this context, it is noted that three compounds containing diamagnetic Zn(II) cations have been reported, which consist of discrete complexes with a tetrahedral coordination, viz. ZnI₂(pyridazine)₂ (MENSUU; Bhosekar et al., 2006a), ZnBr₂(pyridazine)₂ (VEMBEV; Bhosekar et al., 2006b) and three modifications of ZnCl₂(pyridazine)₂ (YAFYOU, YAFYOU01, YAFYOU02 and YAFYOU03; Pazderski et al., 2004b and Bhosekar et al., 2007). Surprisingly, none of these forms are isotypic to the chloride and bromide compounds reported by Masciocchi et al. (1994) on the basis of XRPD patterns.
Synthesis
CdBr₂, CdI₂ and pyridazine were purchased from Sigma-Aldrich. All chemicals were used without further purification.
Colorless single crystals of compounds 1 and 2 were obtained by the reaction of 0.500 mmol of CdBr₂ or 0.500 mmol of CdI₂ with 0.500 mmol of pyridazine in 1 ml of ethanol. The reaction mixtures were sealed in glass tubes, heated at 388 K for 1 d and finally cooled to room temperature.
Larger amounts of microcrystalline powders of 1 and 2 were obtained by stirring the same amounts of reactants in ethanol or water at room temperature for 1 d. For the IR spectra of 1 and 2, see Figs. S5 and S6.
Experimental details
The IR spectra were measured using an ATI Mattson Genesis Series FTIR spectrometer (control software: WINFIRST, from ATI Mattson). The PXRD measurements were performed with Cu Kα1 radiation (λ = 1.540598 Å) using a Stoe Transmission Powder Diffraction System (STADI P) equipped with a MYTHEN 1K detector and a Johansson-type Ge(111) monochromator. Thermogravimetry and differential thermoanalysis (TG-DTA) measurements were performed in a dynamic nitrogen atmosphere in Al2O3 crucibles using a STA-PT 1000 thermobalance from Linseis. The instrument was calibrated using standard reference materials.

Figure 4. Arrangement of neighboring pyridazine rings in 1 showing π-π stacking interactions.
Refinement
Crystal data, data collection and structure refinement details are summarized in Table 3. The C-bound hydrogen atoms were positioned with idealized geometry and refined with Uiso(H) = 1.2Ueq(C). For compound 2, PLATON (Spek, 2020) suggested a pseudo-translation along the b-axis with a fit of 80%. If the structure is determined in a unit cell with half of the b-axis, space group Cmmm is suggested. The structure can easily be solved in this space group, but the refinement leads to very poor reliability factors (R1 = 11.5%). Moreover, in this case disorder of the nitrogen atoms of the pyridazine ring is observed, because the N atoms of the pyridazine rings of neighboring chains are superimposed.
Adoption of innovations in harvesting methods of the grape: A case study in Charikar and Bagram districts of Parwan province, Afghanistan
The main objectives of this study were to determine the extent of innovation in the grape harvest and the rate of familiarity with and usability of innovations among farmers in Parwan province. The data were collected as primary data through face-to-face interviews with 120 grape growers and local authorities in 20 villages spread across the two districts of Charikar and Bagram in Parwan province, Afghanistan. The data were analyzed with the SPSS 22 package. According to the results, the size of agricultural land and the land allocated to grape production show the most similarities; however, the findings indicate that grape yield was influenced by the application of farmers' innovations and knowledge through the use of innovations at the harvest stage of grape production. Farmers' membership in agricultural organizations is very weak, with only 8.2% of farmers belonging to an organization. In addition, the advantages and disadvantages of using innovations were evaluated. The advantages were evaluated through six options (saving time, increasing demand for the product, waste reduction in the product, better management, easy harvest, and employment of fewer laborers), and all of them were given high importance (HI). The disadvantages were evaluated through four options, of which only the item ''not economic'' was given HI, while the remaining three disadvantages were in the low importance (LI) category. Familiarity and usability of innovations gave different results; most of the farmers are familiar with the innovations, but the application of innovations is lower than the familiarity with them.
Introduction
The topic of this study is the adoption of innovations in grape harvesting methods. The case study was conducted in the Charikar and Bagram districts of Parwan province. We discuss the adoption of innovations in the grape harvest, applicable and inapplicable innovations in Afghanistan, and the role of government and NGOs in introducing, spreading, and supporting these innovations, which is one of the major and controversial issues.
In Afghanistan, traditional production, harvesting, and post-harvest systems are among the most fundamental problems and have a very negative effect on the production and standardization of grapes. Afghanistan's grapes are a major source of export, both as fresh produce and dried as raisins, and because of this, the use of innovations at the production, harvest, and post-harvest stages is crucial. To solve the problems at the production, harvest, and post-harvest steps, both the government and various NGOs run supporting activities. These include the activities of CAD-F in the case of vineyard chewing cases and house raisins, farmer training in the correct use of innovations at the grape harvesting stage, and training on supplying products to national and international markets. NHLP has been working on changing the traditional vineyard cultivation systems into the T system and on training farmers in new grape harvesting innovations. GIZ also works on projects for agribusiness and rural markets and conducts visual education in the grape value chain. AMIP operates in the development of small processing facilities and the bundling of the market for horticultural products (Anonymous 2019).
Afghanistan's farmers are not in a good economic state. Many face poverty in their daily lives, and the cultivation season brings additional challenges, such as borrowing to meet some needs. In recent years, the majority of farmers have learned about innovations and their utilization from the government and NGOs; however, most are not able to buy new harvesting equipment. The lack of sales markets for the product is one of the main reasons farmers have little interest in adopting innovations. Farmers who are in a better economic state and have better knowledge use innovations at the harvest stage and believe in the effectiveness of innovations (Khaliq & Boz, 2018).
Material and Methods
Multiple analytical methods were used to analyze the data in this study, including descriptive and inferential statistics. The descriptive analysis involved measures of central tendency; the inferential statistics included correlation analysis. The statistical analysis of the data was carried out with the Statistical Package for the Social Sciences (SPSS). The analysis covered technological capabilities and innovations. The innovation score used in this work follows the basic idea of the innovation index: to assign a single numerical value to the set of innovations of every farmer. Such a numerical valuation must assign higher numbers to innovations that push the technological frontier or to innovations that are relatively rare within the subsector; here it refers to the degree of adoption of a particular innovation among the farmers (Ariza et al., 2013).
Demiryurek et al. (2014) developed the innovation sustainability index of Dasgupta (1968) further by using not only the number of innovations but also the years of adoption. As the innovation index value increases, the sustainability of the innovations that a farmer has adopted increases accordingly; therefore, farmers with higher index values can be said to be more innovative (Demiryurek et al., 2015). In this study, we calculated the innovation score of the grape harvesters in Parwan, Afghanistan, as:

Innovation Score = (Number of years of adoption × Number of adopted innovations) / Total number of innovations
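As a concrete illustration, the short Python sketch below computes this score. It is not the authors' code (the study used SPSS); the function name, variable names and example values are purely illustrative.

```python
def innovation_score(years_of_adoption: float,
                     n_adopted: int,
                     n_total_innovations: int) -> float:
    """(Years of adoption x number of adopted innovations) / total innovations."""
    if n_total_innovations <= 0:
        raise ValueError("total number of innovations must be positive")
    return years_of_adoption * n_adopted / n_total_innovations

# Hypothetical example: a farmer who adopted 3 of 8 surveyed innovations
# 5 years ago obtains a score of 5 * 3 / 8 = 1.875.
print(innovation_score(5, 3, 8))  # 1.875
```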
Research findings

Age
As shown in Table 3.1, 120 farmers were interviewed. These farmers are divided into three age groups: the first group comprises those 40 years and younger, the second those in the 40-60 years age range, and the third those 60 years and older. Participants ranged in age from 18 to 80 years. Of the 120 farmers surveyed as grape producers, 29.2% are in the first age group, 57.5% in the second, and 13.3% in the third. The second group has the highest percentage, and the average age is 47.15 years. According to the findings of this study, most grape producers are over 40 years old, and young producers represent a smaller demographic.
Education level
Knowledge is a key principle in the agriculture sector: farmers can be more productive through agricultural knowledge and the precise use of production factors (Fane 1975). The education level of grape producers is divided into six groups. The first group includes illiterate producers; the second, producers who can read and write but have no formal education; the third, producers who graduated from elementary school; the fourth, producers who graduated from secondary school; the fifth, producers who graduated from high school; and the sixth, producers who graduated from university. According to Table 3.2, 41.7% of producers are illiterate and belong to the first group. Farmers of the second group, who can read and write, form about 5.8% of the total, and farmers of the third group, who have finished elementary school, form about another 5.8%. About 11.7% of farmers have graduated from secondary school and form the fourth group. The fifth group includes high school graduates, at 27.5%, the highest percentage after the first group. The remaining 7.5% belong to the sixth group, who have graduated from university and have higher education.
Membership in Agricultural Organizations
Social units are organizations that work collectively to meet their needs. All of these organizations have a management structure that divides responsibilities and power among their members. These organizations have an open environment and act according to group decisions (MacDonald 1963).
Health Insurance
Insurance, in the simplest definition, is a method of transferring risk (DeNavas-Walt 2010). Insurance is one of the most important tools in the agricultural sector because it can encourage farmers to produce with greater confidence. Agriculture is a risky sector, and its exposure to natural disasters and environmental stresses is very high (Club 2018).
Farm Size and the Land Ownership Status
Land is the basis of natural resources (Rasmussen 1996). The efficiency of each ecosystem depends on the type and quality of land use. Land serves many different purposes, whether as residential land, agricultural land (water fields and rugged land), forests, or sometimes unusable land. In general, the natural resources of a country can predict the future of that nation (Douroudian 2017).
In this research, farmers' lands are categorized into five groups: personal land, rented land, land given out for rent, land kept in partnership, and partnership land. After evaluation, two categories, land given out for rent and land kept in partnership, were found to be zero.
According to Figure 3.1, the agricultural lands are thus divided into three groups. The highest percentage belongs to personal land, which accounts for 72% of the land. The second group is rented land, which accounts for 24%, and the third is partnership land, which forms 4% of the agricultural land.
Familiarity and Usability of Innovations
Table 3.8 discusses the familiarity with and usage of innovations for grape harvesting. These innovations are used at different stages of harvest, including testing for sugar level, berry size, color, cutting, field packing, clipping, packing, and packaging houses.

The findings show that 38.3% of farmers are familiar with testing for sugar level at the harvesting stage; however, only 6.7% of farmers apply it. Berry size is familiar to 43.3% of farmers, but this innovation is applied by only 11.7% of them. Given that the color of grapes is an essential criterion at the harvesting stage, it is familiar to the highest percentage of farmers, 96.7%, with 87.5% of farmers applying it. The cutting stage is familiar to 98.3% of farmers and applied by 94.2%. Field packing is done primarily in the field; 92.5% of farmers are familiar with this stage, and 89.2% of them pack their products on the farm. The clipping stage is one of the basic stages of harvesting for good marketing and for preventing product rot; it is familiar to 89.2% of farmers and applied by 85.8% of them. The packing stage is most often completed in the field; 95% of farmers are familiar with it, and 87.5% apply it. Packing houses are among the innovations least accessible to farmers. In the past two years, a packaging and processing center for produce was opened in Parwan province. The results of this study show, however, that because of the lack of available packaging houses, many farmers resort to packing their products in the field. Of the farmers surveyed, 69.2% are familiar with packaging houses, but only 7.5% use this innovation.
In this research, the minimum usage of an innovation is 6.7%, for testing for sugar level. The maximum usage was at the cutting stage, at 94.2%.
Familiarity and Usability of New Harvest Tools
Table 3.9 discusses the familiarity with and usability of new harvest tools. These tools are used at different stages of harvest and include the handheld refractometer, digital refractometer, caliper, sizing rings, cutting shears, food thermometer, basket, carton, clipping scissors, and tarpaulin.

The findings show that 37.5% of farmers are familiar with the handheld refractometer; however, only 5.8% of farmers use it. The digital refractometer is familiar to 36.7% of farmers, but this tool is used by only 4.2% of them.

Table 3.10 presents the advantages of applying innovation (Objective 1). Based on the advantage-of-applying-innovation scale described above, all six items were given high importance (HI). Based on the disadvantage-of-applying-innovation scale of four items, the first one, ''not economic'', is in the HI category; the others, ''takes more time'', ''the need of many workers'', and ''digress in production'', are in the LI category. The remaining category is zero.
Customer's Channels
Figure 3.2 examines the farmers' first market after crop production. After harvesting, farmers sell to five types of buyers: merchants, retailers, wholesalers, exporters, and packagers. Of the total buyers, 31% were merchants, 30% were retailers, 27% were wholesalers, 10% were exporters, and the remaining 2% were packagers. Lack of sufficient access to these markets is the main problem farmers face in terms of sales. The absence of government support during the market season also harms farmers in terms of product sales.
Conclusion
The main objective of this research was to determine the extent of innovation in the grape harvest and the rate of familiarity with and usability of innovations among farmers in Parwan province. This study can help producers make informed decisions on the use of innovations in the production and harvest of grapes. It covers the main issues related to the extension of harvest methods and farmers' familiarity with and use of innovations. Regarding the socio-economic characteristics of the farmers, most were middle-aged and the majority were illiterate. All of the grape producers lived in villages, and none of them had any off-farm occupation.
In terms of experience in agriculture, annual income, and household size, there were no significant differences among farmers; however, none of the farmers had health insurance. Regarding farm size and land ownership status, most of the land was the farmers' own; land values differed, and the location and productivity of the land may be the reasons for this. The farmers are familiar with the innovations at the harvest stage and believe in their effectiveness. Farmers' knowledge of tools and innovations varies: most farmers have been introduced to the harvesting tools and are familiar with them, but most do not use these innovations and methods.
Farmers' customer channels comprise merchants, retailers, wholesalers, exporters, and packagers.
Z-Score Linear Discriminant Analysis for EEG Based Brain-Computer Interfaces
Linear discriminant analysis (LDA) is one of the most popular classification algorithms for brain-computer interfaces (BCI). LDA assumes a Gaussian distribution of the data, with equal covariance matrices for the concerned classes; however, this assumption does not usually hold in actual BCI applications, where heteroscedastic class distributions are usually observed. This paper proposes an enhanced version of LDA, namely z-score linear discriminant analysis (Z-LDA), which introduces a new decision boundary definition strategy to handle heteroscedastic class distributions. Z-LDA defines the decision boundary through z-scores utilizing both the mean and standard deviation information of the projected data, which can adaptively adjust the decision boundary to fit the heteroscedastic distribution situation. Results derived from both a simulation dataset and two actual BCI datasets consistently show that Z-LDA achieves significantly higher average classification accuracies than conventional LDA, indicating the superiority of the newly proposed decision boundary definition strategy.
Introduction
Brain-computer interfaces (BCI) provide a direct connection channel between the brain and the external world without any peripheral muscular activity [1]. A BCI translates brain activity into signals that control external devices, and many augmentative communication and control systems based on BCI [2-6] improve the lives of people with severe neuromuscular disorders.
Generally, an EEG-based BCI consists of four modules [1]: 1) a signal acquisition module to record and amplify EEG signals; 2) a feature extraction module to extract signal features that encode the user's intent; 3) a translation module to translate features into device commands; and 4) a feedback and control module to synchronize the user's actions and achieve control of external devices. A high-performance EEG amplifier with a suitable reference strategy [7] will increase the quality of the recorded EEG signal, and employing innovative paradigms in the feedback and control module may yield higher quality features and better control strategies [8-11]. Once the EEG amplifier, reference strategy, and feedback and control module are determined, feature extraction and translation algorithms play important roles in improving the performance of the BCI. Currently, the conventional features used in scalp-EEG-based BCI can be attributed to event-related potentials, the sensorimotor rhythm, transient visual potentials and steady-state potentials (both visual and auditory). To refine those specific features, many feature extraction algorithms have been proposed [12-15]. However, as an input-output system, the final translation module directly determines whether the subject's intention is correctly decoded [16]. Compared with conventional pattern recognition problems, a BCI system requires the translation module to handle the small-sample-size training problem, the heteroscedastic class distribution problem and nonstationary physiological signals, among others. Therefore, effective translation algorithms specifically suited to BCI applications are still required in the BCI discipline [17][18].
Linear discriminant analysis (LDA) is one of the most popular classification algorithms for BCI applications, and has been successfully used in a great number of BCI systems such as motor imagery based BCI [19], the P300 speller [20] and steady-state movement-related potentials based BCI [21]. The original LDA has two derivations [22], Fisher LDA (FLDA) and least squares LDA (LSLDA). FLDA is based on the Fisher-Rao criterion [22-24], which is to find the projection w that maximizes the objective function J(w) = |w^T S_b w| / |w^T S_w w|, i.e. the ratio of between-class to within-class variances. LSLDA is derived from a linear discriminant function y(x) = w^T x, where the weight vector w is supposed to minimize the mean squared error between w^T x and the target label [25]. The solution of LSLDA is equivalent to that of FLDA when a proper label coding scheme is adopted in LSLDA [25].
Both kinds of LDA rest on the homoscedasticity assumption that the different classes follow Gaussian distributions with the same covariance matrix. However, EEG data recorded from actual BCI systems usually have heteroscedastic class distributions, which violates the fundamental assumption of LDA and notably degrades recognition performance. Heteroscedastic LDA (HLDA) is an extension of FLDA whose between-class scatter is generalized from the Euclidean distance to the Chernoff distance, with the effects of both the class means and their covariance matrices considered [26]; thus HLDA does not need the homoscedasticity assumption. Nonparametric discriminant analysis (NDA) is another extension of FLDA [27], which makes no prior assumption about the class distributions; the parameters can be estimated by the k-nearest-neighbor method and are then used to define the between-class scatter [28].
In essence, LDA linearly transforms data from a high dimensional space to a low dimensional space, where the final decision is made; thus the definition of the decision boundary plays an important role in recognition performance.
Conventional LDA defines the mean of the projected data as the decision boundary owing to the homoscedasticity assumption [25]. The nearest neighbor of the classes has also been proposed to serve as the decision boundary [29]. Different from LDA, the support vector machine (SVM) first maps the data to a high dimensional space, and then finds a hyperplane in that space such that the distance from the hyperplane to the nearest data point on each side is maximized [30]; theoretically the hyperplane is determined only by a small number of training samples, which are called support vectors. During the classification procedure of LDA, the heteroscedastic class distributions are still retained in the projected space. Therefore, we argue that if the mean and variance of the projected data are both considered in the definition of the decision boundary, LDA can be extended to deal with practical heteroscedastic distribution data, which is the starting point for the Z-LDA proposed in this paper.
The paper is organized as follows. Section Methods and Materials provides a detailed description of z-score LDA (Z-LDA); the results on a simulation dataset and motor imagery EEG datasets are shown in section Results; section Discussion gives a general discussion of the proposed algorithm; and section Conclusion summarizes this work.
Linear Discriminant Analysis
To simplify the description of the algorithm, we only consider the case of two classes. Assume (x_11, x_12, ..., x_1m) ∈ C_1 and (x_21, x_22, ..., x_2n) ∈ C_2, with m and n being the numbers of samples in the two class sets C_1 and C_2. Let X = (x_11, x_12, ..., x_1m, x_21, x_22, ..., x_2n); then the simplest representation of a linear discriminant function is obtained by taking a linear function of the input vector, so that

y(x) = w^T x + w_0,   (1)

where w is called a weight vector and w_0 is a bias. Using vector notation, equation (1) can be converted to

y(x) = W̃^T X̃,   (2)

where W̃ = (w^T, w_0)^T and X̃ is the corresponding augmented input vector (x^T, 1)^T with a dummy input x_0 = 1. Accordingly, the least squares solution of equation (2) is [25]

W̃ = (X̃^T X̃)^{-1} X̃^T c,   (3)

where X̃ here collects the augmented training samples row-wise and c is the vector of class labels. With W̃ estimated from equation (3), the corresponding weight sum y(x) can be obtained. For conventional LDA, classification of an input x is based on comparing y(x) with a threshold, i.e., the decision boundary. If we take c_1 as the label of class C_1 and c_2 as the label of class C_2, the corresponding decision boundary is defined by c = (c_1 + c_2)/2.
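For concreteness, equations (1)-(3) can be written out as a short Python sketch. This illustrates the reconstruction above rather than reproducing the authors' code (their implementation was in Matlab), and all function and variable names are ours.

```python
import numpy as np

def lslda_fit(X1, X2, c1=-1.0, c2=1.0):
    """Least-squares LDA: X1, X2 are (n_samples, n_features) arrays per class."""
    X = np.vstack([X1, X2])
    c = np.concatenate([np.full(len(X1), c1), np.full(len(X2), c2)])
    Xa = np.hstack([X, np.ones((len(X), 1))])   # augmented inputs (x, 1)
    W, *_ = np.linalg.lstsq(Xa, c, rcond=None)  # solves equation (3)
    return W

def weight_sum(W, X):
    """Evaluate the weight sum y(x) = W^T (x, 1) for each row of X."""
    X = np.atleast_2d(X)
    Xa = np.hstack([X, np.ones((len(X), 1))])
    return Xa @ W

# Conventional LDA then classifies by comparing y(x) with (c1 + c2) / 2.
```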
Z-score Linear Discriminant Analysis
Theoretically, the decision boundary of LDA is derived by assuming a homoscedastic distribution for the two classes. Thus it may not be competitive for heteroscedastic distributions, and we therefore develop the following strategy to define a more robust decision boundary. Based on the estimated W̃ obtained from equation (3), the weight sum y(x) for each training sample can be calculated from equation (2), and then the parameters of the Gaussian distributions of the weight sum y(x) for the two classes can be estimated as

m_k = mean(y(x)), s_k = SD(y(x)), x ∈ C_k, k = 1, 2,   (4)

where m_k and s_k (k = 1, 2) are the corresponding mean and standard deviation (SD) of the weight sum y(x) for training set C_k. During classification, when a new sample x* is input, first calculate the weight sum y(x*) by equation (2), then perform the following normalization procedure:

z_k = (y(x*) - m_k) / s_k, k = 1, 2.   (5)

In essence, z_1 and z_2 are the transformed z-scores measuring how close the weight sum y(x*) of the newly input sample is to the two weight sum distributions predefined by the training set; the method is therefore called z-score linear discriminant analysis (Z-LDA). Finally, if |z_1| < |z_2|, the sample is classified into C_1; otherwise, the sample belongs to C_2. Assuming the weight sums of samples in the two classes follow Gaussian distributions with parameters m_k, s_k (k = 1, 2), the proposed decision boundary is the intersection of the two Gaussian distribution curves. The above description is based on LSLDA; since the only difference between LSLDA and FLDA is the way the weight vector w is estimated, and their solutions are substantially equal, the proposed decision boundary definition strategy can be extended to FLDA, too.
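A self-contained sketch of this decision rule follows (again illustrative Python under the same assumptions; the use of ddof=1 for the sample SD is our choice, as the paper does not specify the estimator):

```python
import numpy as np

def zlda_fit(W, X1, X2):
    """Estimate (m_k, s_k) of the projected training data for each class."""
    def project(X):
        Xa = np.hstack([X, np.ones((len(X), 1))])
        return Xa @ W
    y1, y2 = project(X1), project(X2)
    return (y1.mean(), y1.std(ddof=1)), (y2.mean(), y2.std(ddof=1))

def zlda_classify(W, X, params, labels=(-1.0, 1.0)):
    """Assign each row of X to the class with the smaller |z-score|."""
    (m1, s1), (m2, s2) = params
    Xa = np.hstack([X, np.ones((len(X), 1))])
    y = Xa @ W
    z1 = np.abs((y - m1) / s1)
    z2 = np.abs((y - m2) / s2)
    return np.where(z1 < z2, labels[0], labels[1])
```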
Relationship between LDA and Z-LDA
Theoretically, the decision boundary of conventional LDA is defined by

y(x) = (c_1 + c_2)/2.   (6)

Based on equation (6), the decision boundary of conventional LDA is the mean of the labels of the two classes, i.e. c = (c_1 + c_2)/2. When the SDs are incorporated into the classification, the decision boundary of Z-LDA is instead defined by

(c* - m_1)/s_1 = (m_2 - c*)/s_2,   (7)

which deduces the value

c* = (s_2 m_1 + s_1 m_2)/(s_1 + s_2).   (8)

Apparently, the decision boundary c* of Z-LDA is defined by both the means and SDs of the weight sums of the two classes.
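As a worked illustration (with hypothetical numbers echoing the simulation below, where the two classes have SDs of roughly 0.3 and 1.0 and labels -1 and 1): taking m_1 = -1, m_2 = 1, s_1 = 0.3 and s_2 = 1.0, equation (8) gives

c* = (1.0 × (-1) + 0.3 × 1)/(0.3 + 1.0) = -0.7/1.3 ≈ -0.54,

so the boundary shifts from 0 toward the class with the smaller SD, which is exactly the behavior discussed for Figure 1 below.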
In binary classification, the expectations of the means of the weight sum y(x) over the training set are E(m_1) = c_1 and E(m_2) = c_2, so the decision boundary of conventional LDA is theoretically equal to c = (m_1 + m_2)/2. When the weight sums of the two classes have equal SDs, the decision boundary of Z-LDA also reduces to c* = (m_1 + m_2)/2. Therefore, conventional LDA is a particular case of Z-LDA.

To compare the two classifiers on simulated data, we generated two-class Gaussian datasets, kept the SD of the first class fixed, and changed the SD of the second class step by step. A training set and a test set, each consisting of 200 two-dimensional samples (100 for each class), were generated for each setting.
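A minimal simulation driver in the spirit of this setup, reusing the lslda_fit, weight_sum, zlda_fit and zlda_classify helpers sketched earlier (the class means, like all other specifics here, are assumptions, since the paper does not state them):

```python
import numpy as np

rng = np.random.default_rng(0)

def make_set(sd2, n=100, sd1=0.3):
    """Two 2-D Gaussian classes; the means (-1,-1) and (1,1) are assumed."""
    X1 = rng.normal(-1.0, sd1, size=(n, 2))
    X2 = rng.normal(1.0, sd2, size=(n, 2))
    return X1, X2

for sd2 in (0.3, 1.0):
    X1tr, X2tr = make_set(sd2)
    X1te, X2te = make_set(sd2)
    W = lslda_fit(X1tr, X2tr)
    params = zlda_fit(W, X1tr, X2tr)
    Xte = np.vstack([X1te, X2te])
    true = np.concatenate([np.full(100, -1.0), np.full(100, 1.0)])
    lda_pred = np.where(weight_sum(W, Xte) < 0.0, -1.0, 1.0)  # boundary (c1+c2)/2 = 0
    zlda_pred = zlda_classify(W, Xte, params)
    print(sd2, (lda_pred == true).mean(), (zlda_pred == true).mean())
```

With equal SDs the two classifiers should agree; with sd2 = 1.0 the Z-LDA boundary shifts toward class 1 and its accuracy should be higher, mirroring the pattern reported below.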
Classification performance of LDA and Z-LDA

After the simulated datasets with heteroscedastic class distributions were generated, the training models of LDA and Z-LDA were estimated from the training set, and the models were then applied to classify the samples in the test set. The above procedure was repeated 100 times to lower the random effect, and a paired t-test was performed to investigate whether a statistical difference exists between the two classifiers. Table 1 lists the means and standard deviations of the classification accuracies over the 100 runs. Figure 1 visually gives the decision boundary definition procedure for the two classifiers when the standard deviation of the first class is (0.3, 0.3) and that of the second class is (1.0, 1.0). Figure 2 intuitively shows the difference in recognition performance on the test dataset based on the two decision boundaries in Figure 1. When the SDs of the two classes are the same, LDA and Z-LDA achieve equal classification accuracy. But when we changed the SD of the second class while keeping that of the first class, although the classification accuracies of both classifiers decreased, Z-LDA achieved higher classification accuracy than LDA. The paired t-test revealed that when a difference in SD between the two classes exists, Z-LDA achieves significantly higher classification accuracy than LDA (p < 0.05), with the more obvious improvements observed for the simulations with larger differences in SD.
Evaluations on Real BCI Dataset

2.1. Datasets. Two motor imagery datasets were used: Dataset IVa of BCI Competition III and a dataset recorded by our own BCI system. For the latter, participants were asked to read and sign an informed consent form before participating in the study, and after the experiment all participants received monetary compensation for their time and effort. Subjects sat in a comfortable armchair in front of a computer screen and were asked to perform motor imagery with the left or right hand according to the instructions appearing on the screen. Each motor imagery period lasted 5 seconds and was followed by a 5-second rest. 15 Ag/AgCl electrodes covering the sensorimotor area were used for EEG recording with a Symtop amplifier (Symtop Instrument, Beijing, China); the signals were sampled at 1000 Hz and band-pass filtered between 0.5 Hz and 45 Hz. 4 runs were recorded for each subject on the same day, each run consisting of 50 trials, 25 per class, with a 3-minute break between consecutive runs. The first 2 runs were treated as the training set and the last 2 runs as the test set.
2.2. Preprocessing. For the first dataset, we used the EEG segments recorded from 0.5 s to 3.75 s after the visual cue for the following analysis, in accordance with [32]. For the second dataset, all EEG segments recorded during motor imagery were selected for analysis, and trials with an absolute amplitude above a 300 μV threshold were considered contaminated by strong ocular artifacts and removed from analysis. Next, the specific optimal frequency band for each subject was obtained by r² [33] and used to design a band-pass filter for the selected EEG segments.
2.3. Feature extraction. Common spatial pattern (CSP) analysis was used to estimate the spatial projection matrix, which projects the EEG signal from the original sensor space to a surrogate sensor space [13,19]. Each row vector of the projection matrix is a spatial filter that maximizes the variance of the spatially filtered signal under one task while minimizing it under the other task. The 3 most discriminative pairs of optimal spatial filters in the projection matrix were selected to transform the band-pass filtered EEG signal, and the logarithms of the variances of the transformed surrogate-channel EEG signals served as the final features for task recognition. In general, each EEG segment was transformed into a 6-dimensional feature vector after the above procedure.
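The standard CSP computation can be sketched as follows (an illustrative implementation of the textbook algorithm, not the authors' code; normalizing each trial covariance by its trace is a common convention that we assume here):

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(trials1, trials2, n_pairs=3):
    """trials*: arrays of shape (n_trials, n_channels, n_samples), band-pass filtered."""
    def mean_cov(trials):
        covs = [t @ t.T / np.trace(t @ t.T) for t in trials]
        return np.mean(covs, axis=0)
    C1, C2 = mean_cov(trials1), mean_cov(trials2)
    # Generalized eigenproblem C1 w = lambda (C1 + C2) w; eigenvalues ascend.
    evals, evecs = eigh(C1, C1 + C2)
    order = np.argsort(evals)
    picks = np.concatenate([order[:n_pairs], order[-n_pairs:]])
    return evecs[:, picks].T            # (2 * n_pairs, n_channels) spatial filters

def csp_features(filters, trial):
    """Log-variance of the surrogate channels, as described above."""
    Z = filters @ trial                 # (2 * n_pairs, n_samples)
    return np.log(Z.var(axis=1))        # 6-dimensional feature for n_pairs = 3
```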
2.4. Classification results. In this section, we compare the classification performance of Z-LDA with that of LDA, SVM, NDA and HLDA. LIBSVM with default parameters served as the SVM classifier [34]. NDA as in reference [28], with k = 5 nearest neighbors, and HLDA as in reference [26] were used for evaluation in the current work.
The classification results for Dataset IVa of BCI Competition III are summarized in Table 2. Z-LDA and NDA achieved higher average accuracies than LDA, SVM and HLDA. Although the average accuracy of NDA is slightly higher than that of Z-LDA, Z-LDA performed better for 4 of the 5 subjects, with the exception of subject ay, and the paired t-test did not show a statistical difference between them (p = 0.4146). There are only 28 training samples for subject ay, which poses a small-sample-size training problem. NDA is good at dealing with the small-sample-size training problem, resulting in the obvious improvement for subject ay. Across the 5 subjects, when LDA is regarded as the baseline for evaluation, only Z-LDA showed a consistent improvement for all 5 subjects, and the paired t-test also revealed that only the accuracies obtained by Z-LDA are significantly higher than those of LDA (p = 0.0293).
The classification results for the dataset recorded by our BCI system are summarized in Table 3. Z-LDA achieved the highest mean accuracy among the five tested classifiers. Paired t-tests also showed that the accuracies obtained by Z-LDA are significantly higher than those of LDA (p = 0.0004), NDA (p = 0.0006) and HLDA (p < 10⁻⁵), with no statistical difference between Z-LDA and SVM (p = 0.0654).
The overall mean accuracies obtained by Z-LDA are 1.4% higher than those of LDA on both BCI datasets. As shown in Tables 2 and 3, the accuracies obtained by Z-LDA are consistently better than (or at least equal to) those of LDA for all subjects.
Subject 1 from the dataset recorded by our BCI system is used as an example to briefly reveal why the classification performance of Z-LDA is better than that of conventional LDA in the actual situation. The distributions of the weight sum y(x) when subject 1 performed motor imagery with the left hand and the right hand are plotted in Figure 3 for the training and test sets, respectively. The decision boundaries of Z-LDA and LDA are also marked in Figure 3.
Discussion
The translation module of a BCI receives features from the preceding feature extraction module and translates them into device commands using certain classification algorithms. In practical BCI situations, the concerned tasks may have heteroscedastic class distributions, so considering the effect of the distribution variances may provide a more robust ability to recognize the tasks. However, conventional LDA assumes homoscedastic class distributions, which may not be competitive in handling actual BCI datasets with heteroscedastic class distributions. Inspired by this, we developed Z-LDA by including the variance information in the classification procedure in order to provide more robust classification of BCI tasks.
As shown in section Methods and Materials, the decision boundary of conventional LDA is decided by the class labels alone, while the decision boundary of Z-LDA is defined by both the mean and SD of the weight sum, which has more potential to capture the distribution information of the classes and provide better classification performance in heteroscedastic distribution situations.
The difference between the decision boundaries of the two classifiers can be clearly observed in Figure 1. If we define the labels of the two classes as -1 and 1, the decision boundary of LDA is fixed at c = 0, while the decision boundary of Z-LDA is determined by equation (8). If the SDs of the two classes are the same, the decision boundary of Z-LDA is also c* = 0, but when the SDs of the two classes differ, the decision boundary of Z-LDA moves toward the class with the smaller SD. From Figure 1 we can see that because of the small SD of the first class, the SD of its weight sum y(x) is also small, resulting in a more concentrated distribution compared with the relatively divergent distribution of class 2 with its larger SD. Considering the areas under the two Gaussian curves between the two decision boundaries, the area corresponding to the second class is obviously larger than that of the first class, which means that with the newly defined decision boundary, more samples can be correctly recognized. Figure 2 further reveals that if the decision boundary of LDA is used on the test dataset, many samples belonging to the second class will be incorrectly assigned to the first class. But if we use the decision boundary of Z-LDA to classify the samples, the number of samples incorrectly assigned to the first class is reduced, at the cost that some samples belonging to the first class will be incorrectly assigned to the second class.
When applied to the actual BCI datasets, Z-LDA consistently shows the best average accuracies among the five classifiers considered, as shown in Tables 2 and 3. Figure 3 clearly shows that the weight sums for the two types of tasks actually follow different Gaussian distributions in practical BCI applications. In this case, the decision boundary of Z-LDA obtained from the training set is the green solid line, which is smaller than 0, and the decision boundary of LDA is the green dashed line, which equals 0. The black solid line in Figure 3 denotes the theoretical boundary for the test dataset. Obviously, the decision boundary of Z-LDA determined from the training set is closer to the theoretical boundary of the test dataset, leading to the better classification achieved by Z-LDA compared with LDA. Therefore, we can conclude that the proposed decision boundary definition strategy outperforms the conventional strategy in actual BCI applications, where the concerned samples usually have heteroscedastic distributions.
Another aspect of concern is the algorithm complexity for online BCI systems. In the current work, the algorithm was implemented in Matlab R2011b running on Windows 7 Ultimate SP1 64-bit with an Intel Core i5-3470 CPU at 3.2 GHz. The mean time for 200 two-dimensional samples in the simulation study was 0.0004 s using Z-LDA and 0.0001 s using LDA, indicating that Z-LDA is applicable in practical real-time BCI.
Conclusion
Both the simulation and the actual BCI datasets confirm that Z-LDA is a more robust classification method. In essence, Z-LDA is an enhanced version of LDA, and it reduces to conventional LDA under the assumption of homoscedastic class distributions.
Moreover, a probability indicating how reliably the classification is performed can be derived from the z-score transformed weight sum, which may be helpful for handling the adaptive calibration problem [17,33,35].
Various algorithms based on LDA have been proposed for BCI applications, such as regularized LDA [36,37], Bayesian LDA (BLDA) [38] and enhanced BLDA [33]. Unlike Z-LDA, these algorithms improve LDA's performance from other aspects, such as regularization and Bayesian frameworks. It is possible to combine the proposed decision boundary definition strategy with these algorithms, which is our future work. Moreover, we will also implement the proposed Z-LDA in our online BCI system.
Heart rate and swimming activity as indicators of post-surgical recovery time of Atlantic salmon (Salmo salar)
Background: Fish telemetry using electronic transmitter or data storage tags has become a common method for studying free-swimming fish both in the wild and in aquaculture. However, fish used in telemetry studies must be handled, anaesthetised and often subjected to surgical procedures to be equipped with tags, processes that will shift the fish from their normal physiological and behavioural states. In many projects, information is needed on when the fish have recovered after handling and tagging so that only the data recorded after the fish have fully recovered are used in analyses. We aimed to establish recovery times of adult Atlantic salmon (Salmo salar) after an intraperitoneal tagging procedure featuring handling, anaesthesia and surgery. Results: Based on ECG and accelerometer data collected with telemetry from nine individual Atlantic salmon during the first period after tagging, we found that heart rate was initially elevated in all fish, and that it took an average of ≈ 4 days and a maximum of 6 days for heart rate to return to an assumed baseline level. One activity tag showed no consistent decline in activity, and two others did not show strong evidence of complete recovery by the end of the experiment; baseline levels of the remaining tags were on average reached after ≈ 3.3 days. Conclusion: Our findings showed that the Atlantic salmon used in this study required an average of ≈ 4 days, with a maximum interval of 6 days, of recovery after tagging before tag data could be considered valid. Moreover, the differences between recovery times for heart rate and activity imply that recovery time recommendations should be developed based on a combination of indicators and not just on e.g. behavioural observations.
Background
Fish telemetry/biologging is a method of monitoring free-swimming fish where individual animals are equipped with electronic tags that often contain sensors for collecting data on the conditions within or near the fish (Cooke et al., 2011; Thorstad et al., 2013). Such tags may either be transmitter tags transferring data wirelessly to the user (see Føre et al., 2011 for details on the structure of an electronic transmitter tag) or data storage/archival tags (DSTs) that store data in internal storage media accessible only after the fish (and tag) has been recaptured (Thorstad et al., 2013). Irrespective of tag type, most studies using such methods aim to assess the status of wild fish in ecological settings (e.g. Welsh et al., 2013; Taylor et al., 2017), to evaluate how fish communities respond to man-made structures (e.g. Cooke et al., 2004), or as a tool to provide knowledge for fisheries management (reviewed by Crossin et al., 2017). The interest in using this approach in aquaculture is also increasing, both because ongoing technological advances are rapidly expanding the possibilities (Hussey et al., 2015), and because new production philosophies such as Precision Fish Farming promote monitoring at an individual level (Føre et al., 2018a). Example uses of telemetry/biologging in aquaculture include studies to assess fish responses during welfare-critical operations such as crowding (e.g. Føre et al., 2018b) and transport (e.g. Brijs et al., 2018), and responses to environmental variability such as temperature variations (e.g. Johansson et al., 2009).
In animal monitoring, it is essential to ensure that the observed animals are representative of the targeted population. When using telemetry, the fish selected for tagging must therefore be representative both before and after the tags are deployed. Ideally, this means that the selection of fish should be truly random and representative, and that the tags do not influence physiology or behaviour in such a way that the tagged fish differ significantly from untagged fish (e.g. Wright et al., 2018). In addition, tagging procedures include several steps (e.g. handling, anaesthesia and surgical procedures) that may induce stress, which in turn may lead to physiological and/or behavioural changes in the fish (Thoreau and Baras, 1997; Jepsen et al., 2001; Connors et al., 2002; Campbell et al., 2005; Thorstad et al., 2013). Acute (short term) followed by chronic (long term) stress in farmed fish may lead to undesirable effects such as reduced disease resistance, reduced growth rates, impaired health, and increased mortality (Wedemeyer, 1997; Pickering, 1998; Schreck, 2000; Ellis et al., 2002). Stress responses in fish are described by primary responses that include the release of stress hormones such as catecholamines and cortisol into the circulatory system, followed by secondary responses such as changes in glucose levels, electrolyte balance and heart rate and, finally, tertiary (whole animal) responses. If the fish is unable to acclimate to the stressor at this stage, effects such as behavioural changes and decreased reproductive capacity and growth may occur, sometimes even resulting in the death of the animal (see Iwama et al., 2006 and references therein). If such changes are chronic, the fish cannot be considered representative of the population and should be excluded from further analyses (Mulcahy, 2003; Cooke et al., 2011). Conversely, if the changes are transient, the fish may be considered fully recovered once the response patterns return to those expected from an untagged fish. This means that tagged fish can be used in analyses if the data from the period of recovery are excluded. However, this also raises the question: how can we define when a fish is properly recovered after a tagging procedure? Jepsen et al. (2001) sought to identify the duration of post-surgery recovery for Chinook salmon (Oncorhynchus tshawytscha) by studying changes in commonly used blood indicators of the primary (cortisol) and secondary (glucose and lactate) stress responses in teleosts. The authors found that all measured parameters decreased from initially elevated levels to within normal ranges within 7 days post-surgery, with glucose and lactate (substrate and by-product, respectively, of elevated anaerobic metabolism) normalising during the first 24 h, a recovery time resembling that seen in several studies (e.g. Martinelli et al., 1998; Bridger and Booth, 2003). Coping with stress is also an energy-demanding process (Barton and Schreck, 1987), and one of the most common indicators of metabolic effects due to stress is an increase in plasma glucose concentration (Iwama et al., 2006). Such changes have recently been shown to lead to increased heart rates in fish as well (Svendsen et al., 2020). Other studies have aimed to evaluate post-surgery recovery by comparing the behaviour of the tagged fish with their behaviour before surgery or with that of untagged cohabitant fish. This method has for instance been applied in laboratory experiments with tilapia (Tilapia sp.),
which appeared fully recovered 24 h post-surgery after displaying loss of equilibrium and reduced swimming activity and feeding just after tagging (Thoreau and Baras, 1997). Swimming activity was then assessed by measuring the posture of the fish, presented as the percentage of time the fish was resting (assuming an oblique angle with the snout towards the surface) or actively swimming (horizontal orientation or snout pointing toward the bottom).
Recovery after tagging may also be studied with sensor telemetry. The information conveyed by the tag must then reflect the state of the fish, and typical sensor values for unstressed fish should be available as a baseline for comparison. Previous studies using this approach include using heart rate tags to compare tagging methods for black cod (Paranotothenia angustata; Campbell et al., 2005), and more recently to study post-surgery stress responses (Brijs et al., 2019b) and potential effects of antibiotics on post-surgical recovery (Hjelmstedt et al., 2020) in rainbow trout (O. mykiss). While Brijs et al. (2019b) implied a recovery from surgical implantation of >72 h, Hjelmstedt et al. (2020) demonstrated a decrease in heart rate to within baseline levels 72-96 h after anaesthesia and surgery. Other sensor measurements that could potentially be used in this way include tri-axial accelerometers, as previous studies have identified links between accelerometer-based activity proxies, which are particularly sensitive to tail beat frequency and amplitude and orientation changes, and stress in salmon (Kolarevic et al., 2016; Føre et al., 2018).
Although Atlantic salmon (Salmo salar) has been frequently studied using telemetry, there is still a lack of detailed quantitative information on the post-surgery recovery of this species. We therefore sought to identify the recovery time of Atlantic salmon after intraperitoneal tagging. This was done using heart rate and acceleration data collected with intraperitoneally implanted electronic tags, meaning that data could be collected without introducing the additional handling stress that would accompany other methods such as blood sampling. These parameters were chosen because they have previously been found to be linked with stress (e.g. Brijs et al., 2019a; Brijs et al., 2019b; Føre et al., 2019) and welfare (Hvas et al., 2020a) in salmonids and are available in commercial archival and telemetry tags. The data were collected in a controlled tank experiment studying how stress responses in Atlantic salmon can be measured using state-of-the-art technology. The stress response part of this experiment is described in greater detail by Svendsen et al. (2020).
Experimental site and fish
The experiments were conducted at the NINA Ims Research Station near Stavanger, Norway, between January and March 2019, using 60 hatchery-reared adult Atlantic salmon of the Aqua Gen strain (mean fork length 55.5 ± stdev 5.7 cm, mean weight 2100 g). The experiment started on January 28th by stocking four square tanks (tanks 1-4; 215 cm side, 122 cm depth, 5600 l) with seven fish each. The fish were then allowed to habituate to the tanks for a period of 21 days until February 18th, when three fish in each of tanks 1-4 were selected at random and equipped with tags, resulting in 12 tagged fish in total (Table 1). The tanks were set up in a flow-through configuration, with filtered freshwater from the nearby Imsa river mixed with small amounts (3-6 ppt, average 5 ppt) of seawater supplied from seawater inlets at 30 m depth to ensure a stable and homogeneous water quality and avoid the introduction of parasites and pathogens to the tanks. Consequently, tank water properties followed the ambient conditions in the river, with temperatures increasing from 3.9 to 5.0 °C and DO varying between 93.8 and 101.2% between the start and end of the experiment (March 15th). Oxygen sensors and oxygenation were also used to prevent unfavourable DO levels. The fish were fed once per day between 08:00 and 10:00 in the morning throughout the entire experimental period, each meal consisting of 2 dl tank⁻¹ (Skretting Røye Vitalis 600-60A 7 mm pellets). The fish were not subjected to any fasting during the experimental period.
Biotelemetry/logging systems and surgical procedures

All 12 tagged fish (Table 1) were equipped with one of three different types of heart rate monitoring Data Storage Tags (DSTs, Star Oddi Ltd.): 4 x DST milli-HRT (39.5 x 13 mm, 11.8 g in air); 4 x DST centi-HRT (46 x 15 mm, 19 g); 4 x DST centi-HRT ACT (46 x 15 mm, 19 g). Using different DST types rather than equipping all fish with the same tag type allowed us to also investigate whether all three tag varieties would be suitable for experiments with Atlantic salmon, which is relevant because this is one of the first applications of this technology on this species. Furthermore, since all three tag types were from the same provider, contained the same type of heart rate sensor with comparable sampling frequencies (80 Hz over 7.5 s per HR sample point for the centi tags and 100 Hz over 15 s per HR sample point for the milli tags), and applied the same post-processing methods to the resulting data, they provided heart rate data sets that were comparable among tags. The milli-HRT type was set with a longer sample storage interval (10 min) than the others (5 min) as it used more of its internal storage medium for raw ECG traces. All data were timestamped using the tags' internal clocks to facilitate comparison, and eventual clock drift between individual clocks was negligible compared with the time scale of the experiment. One tag type (DST centi-HRT ACT) also measured activity using an embedded tri-axial accelerometer (1 Hz sampling rate).
In addition to the DSTs, a total of 4 tagged fish (two fish each from tanks 1 and 2, Table 1) were fitted with acoustic tags (A MP-9, 24.4 x 9 mm, 3.6 g; Thelma Biotel AS) that contained tri-axial accelerometers (5 Hz sampling rate) and transmitted an activity proxy derived from the accelerometer measurements every 40 s. These tags compute the proxy by first high-pass filtering the accelerations from all three axes using a cutoff frequency of 0.2 Hz to remove low-frequency acceleration components due to gravity and body orientation. The remaining high-frequency components then mainly contain accelerations caused by features related to bodily movement that are of interest when evaluating activity levels, such as tail beats (frequency and amplitude) and rapid changes in attitude/orientation. The Euclidean norm of the three high-pass filtered accelerometer axes is then computed to yield the magnitude of the total high-pass filtered 3D acceleration sensed by the accelerometer. Although Føre et al. (2018) used the same activity proxy with a maximum value of 3.465 m s⁻², we chose to limit the proxy to 0-2.1 m s⁻² in our study as this gave us a higher resolution and hence precision for the activity measures.
Moreover, Føre et al. (2018) observed very few activity values above 2 m s⁻² in Atlantic salmon during stressing, implying that using a lower range would not compromise the ability to capture the dynamics associated with salmon swimming activity. To be comparable with the data from the acoustic tags, the activity data from the centi-HRT ACT DSTs were analysed similarly by applying filtering and computing the Euclidean norm as explained for the acoustic tags (see Svendsen et al., 2020 for more details). Adding the acoustic tags thus allowed us to compare their activity proxies with those based on the acceleration data from the DSTs, and resulted in the experiment producing 12 data sets on heart rate and 8 data sets on swimming activity. With a mean fish weight of 2100 g and a maximum total tag weight carried by an individual of 22.6 g (DST centi-HRT + A MP-9), the tag vs. fish weight ratios of all fish were well within the informal rule of thumb of 2% for maximum tag mass relative to fish mass (Thorstad et al., 2013).
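For illustration, the post-processing chain described above can be sketched in Python as follows (an assumed implementation; the filter order and the use of scipy are our choices, not details given in the paper):

```python
import numpy as np
from scipy.signal import butter, filtfilt

def activity_proxy(acc_xyz, fs, cutoff_hz=0.2, order=4, clip=2.1):
    """acc_xyz: (n_samples, 3) accelerations in m/s^2; fs: sampling rate in Hz.

    High-pass filter each axis at 0.2 Hz to remove gravity/orientation
    components, then take the Euclidean norm and limit it to 0-2.1 m/s^2.
    """
    b, a = butter(order, cutoff_hz, btype="highpass", fs=fs)
    hp = filtfilt(b, a, acc_xyz, axis=0)
    norm = np.linalg.norm(hp, axis=1)
    return np.clip(norm, 0.0, clip)
```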
Each tag implantation started by capturing a random fish from an experiment tank using a knotless dip net and immediately transferring it to an anaesthetic bath (Benzoak Vet, 70 mg/L), where the fish was kept until it lost its equilibrium and stage III anaesthesia (Coyle et al., 2004) was reached (average time 7.7 min). The fish was then carefully placed with its ventral side up on a specialised surgical table with a v-shaped mid-section designed such that the head of the fish was immersed in water throughout the whole procedure. A hose circulating anaesthetic (Benzoak Vet, 35 mg/L) through the orobranchial cavity of the fish was inserted into its mouth, and the head was covered with a moist cloth (Figure 1).
A 2-3 cm incision was made along the sagittal plane starting slightly more than one tag length (i.e. the length of the tag to be implanted) posterior from the transverse pericardial septum.
A finger was inserted through the incision to locate the transverse pericardial septum. While retaining the finger inside the peritoneal cavity for support, a needle was positioned in the skin just posterior to the transverse septum and slightly laterally from the sagittal plane. The finger was withdrawn, and a smooth plastic spoon inserted through the incision until it was just below the needle insertion point. The needle was then pushed through the peritoneal wall while simultaneously withdrawing the spoon, to extract the needle out through the incision while protecting the viscera. One end of a suture threaded through the end of the tag was inserted into the tip of the needle. The needle was then withdrawn to pull the suture out through the needle's entry point. This procedure was then repeated on the other side of the sagittal plane. The tag was then inserted through the incision and anchored anteriorly in the peritoneal cavity using the suture and an (external) surgical knot. For the four fish also equipped with separate acoustic tags, the second tag was inserted into the peritoneal cavity through the same incision. Finally, the incision was closed using interrupted sutures. The fish was then transferred to a recovery tank with circulating seawater, where it was kept until it regained consciousness, upon which it was transferred back into the tank it was collected from. See Table 1 for anaesthesia bath and surgery durations for all tagged fish.
Timeline and experimental design
Since the present study focused on investigating post-tagging recovery, the analyses only included data from the two weeks following tagging. To avoid inducing other stress effects that could disturb their recovery, the fish were sheltered in this period from all potential stress factors except those necessary to feed and provide for the fish.
None of the fish exhibited signs of adverse health after tagging or during the trials, and all fish were euthanised after the conclusion of the experiment. Post-mortem pathology of all remaining experimental fish at the end of the experiment (19 female, 23 male) revealed that about one third of these fish (14 in total, 8 F, 6 M) exhibited signs of sexual maturation through the experimental period, including 5 of the tagged individuals (Table 1). Although this appeared to have little direct impact on the fish in three of the tanks, the data from the fish in one of the tanks (tank 3) were excluded from the statistical data analyses due to perpetual inter-individual aggression between two matured males in that tank throughout the experimental period. This left nine fish tagged with DSTs measuring heart rate, six of which also measured activity. Since two of these fish carried both a DST and an acoustic tag measuring activity, this resulted in a total of eight time-series of activity.
Data processing and statistics
Heart rate data were used as downloaded from the DSTs. Outliers were removed using the Median Absolute Deviation (MAD) approach (Leys et al., 2013), with a MAD decision criterion of 3, which is a conservative value (see Miller, 1991). The MAD decision criterion denotes the number of median absolute deviations from the sample median beyond which samples are rejected; it typically ranges from 2 (poorly conservative) to 3 (very conservative). In this study, the choice of 3 is justified by the measured heart rate ranges compared with typical heart rates published in the literature (15 < HR < 80) for Atlantic salmon and comparable species (Lucas, 1994; Brijs et al., 2019). Activity data from the DST centi-HRT ACT tags were downloaded as raw acceleration values along all three axes and then subjected to post-processing similar to that used to compute the activity proxy in the A-MP-9 acoustic transmitter tags, yielding a comparable measure of activity between the two tag types. In a non-decomposed time-series, circadian variation (between day and night) and irregular variation (short-term variation that is neither circadian nor long-term) had the potential to obscure long-term trends in heart rate and activity. Time-series of heart rate and activity were therefore first decomposed into circadian, long-term trend, and irregular components. Decomposing each series and removing its circadian and irregular components leaves a long-term component (showing the long-term growth or decline of the time-series values over the temporal extent of the series), which allowed for examination of the form of the long-term trends towards recovery. To decompose each time-series, it was first binned into 15-min intervals (each interval represented by its mean heart rate or activity) and then converted into a time-series object (R function ts {stats}; Becker et al. (1988)). Time-series objects were then decomposed using the Seasonal Decomposition of Time Series by Loess, R function stl {stats} (B.D. Ripley; Fortran code by Cleveland et al. (1990) from "netlib"). Long-term trend components were then analysed for a systematic change in heart rate or activity that could be indicative of post-surgery recovery, by first modelling the temporal relationship and then compartmentalising it into pre- and post-recovery phases.
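A minimal R sketch of these two pre-processing steps, assuming a numeric vector hr of heart-rate samples and a binning index bin_15min (both hypothetical names); the criterion of 3 and the 96 bins per day follow the description above:

med    <- median(hr, na.rm = TRUE)
mad_hr <- mad(hr, na.rm = TRUE)                       # scaled MAD (SD-consistent)
hr_ok  <- ifelse(abs(hr - med) / mad_hr > 3, NA, hr)  # reject outliers

hr_15  <- tapply(hr_ok, bin_15min, mean, na.rm = TRUE)  # 15-min bin means
hr_ts  <- ts(as.numeric(hr_15), frequency = 96)  # 96 fifteen-min bins per day
dec    <- stl(hr_ts, s.window = "periodic")      # circadian, trend, irregular
trend  <- dec$time.series[, "trend"]             # long-term trend component

Note that stl() requires a gap-free series, so any empty bins would need to be interpolated before decomposition.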
The relationship between the long-term trend component of heart rate or activity (y) and time post-tagging (t) was modelled using an exponential decay model,

y(t) = y_p + (y_0 − y_p) e^(−αt),

where α defines the decay constant from y_0 (at time zero) to y_p, the model plateau. Models were fitted with the nls {stats} R function (D.M. Bates and S. DebRoy; D.M. Gay for the Fortran code used by algorithm = "port"), using the self-starting asymptotic regression function SSasymp {stats} (J. Pinheiro and D.M. Bates). Most trend components followed an exponentially decaying pattern, ensuring model convergence, but some included parts that were inconsistent with an exponential decay. Firstly, some tags (three heart-rate tags and four activity tags) showed a short initial post-surgery increase in registered values at the beginning of the experiment. Secondly, some tags (one heart-rate and two activity tags) showed an increase in registered values after ≈ 5-6 d. This late increase in activity or heart rate was likely a result of a separate, post-recovery change in behaviour of these individuals. To ensure model convergence, these parts of the long-term trend components were removed prior to model fitting. That is, the exponential model was only fitted to parts of the long-term trend component that were consistent with a post-surgery exponential decline. One activity tag (fish F4 in tank 8) did not show an exponential decline with time and was thus not fitted with a model.
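Fitting this model in R could look like the sketch below (trend_df and its column names are hypothetical). SSasymp uses the parameterisation Asym + (R0 − Asym)·exp(−exp(lrc)·t), so the plateau y_p corresponds to Asym, y_0 to R0, and α to exp(lrc):

# trend_df: data frame with time post-tagging t (days) and trend value y
fit   <- nls(y ~ SSasymp(t, Asym, R0, lrc), data = trend_df)
coef(fit)                         # Asym = plateau, R0 = value at t = 0, lrc
alpha <- exp(coef(fit)[["lrc"]])  # decay constant alpha
y_hat <- predict(fit)             # fitted exponential decay curve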
Identification of breakpoints between pre- and post-recovery phases was done on an individual basis. The breakpoint between pre- and post-recovery for each tag was set where the heart rate or activity reached a recovery threshold, defined as the heart rate or activity level delimiting pre- from post-recovery. A recovery threshold was defined for each tag as the mean + 2 SD of the long-term trend component values calculated from the final three days of the fitted series. Inspection of the tags showed that trend components were approaching asymptotes in the final three days, so it was reasonable to assume that values from these days represented the post-recovery signature. Thresholds were established on an individual basis to allow post-recovery heart rate or activity to vary between individuals.
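The threshold and breakpoint logic can be expressed compactly, continuing from the trend component above (time_days is a hypothetical vector of days post-tagging aligned with trend, and the sketch assumes the trend actually reaches the threshold):

# Recovery threshold: mean + 2 SD of the trend over the final three days
final3    <- trend[time_days >= max(time_days) - 3]
threshold <- mean(final3) + 2 * sd(final3)

# Breakpoint: first time the declining trend reaches the threshold
breakpoint <- time_days[min(which(trend <= threshold))]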
Post-surgery recovery
Daily heart rate significantly declined from a mean of 36.0 bpm (range = 24.6-45.6, SD = 5.6, n = 9) on the day of surgery to a mean of 22.3 bpm (range = 17.5-26.6, SD = 2.6, n = 9) 13 days later (one-sided test; Figure 2). Activity showed a similar decline, although individual variation in activity was high (Figure 2B). Both heart rate and activity displayed circadian variation. Heart rate was greater during daytime (mean = 25.8 bpm, range = 22.2-26.7, SD = 1.9, n = 9) than during night (mean = 22.7 bpm, range = 19.6-24.9, SD = 1.9, n = 9). The heart rate trend component showed a decline that could be modelled with an exponential decay function (Figure 3). However, the trend component still showed considerable temporal variation, depending on the tagged individual. For example, the trend component for fish F4 showed a sharp decline during the first day after tagging, but then fluctuated for the remainder of the two-week post-tagging period. The activity trend component also showed a pattern consistent with an exponential decay (Figure 4), except for one fish (fish F8), where an exponential decay model could not be fitted due to the activity trend component peaking ≈ 7 d after tagging. Two fish (fish F1 and F2) showed an exponential decline in activity but did not reach a plateau during the study period, suggesting that these fish had not fully recovered in terms of activity.
Time to recovery (as defined by the location of the breakpoint between pre- and post-recovery phases) varied between individuals and with the metric used (heart rate or activity; Figure 3, Figure 4, Table 2). The mean threshold value for heart rate in a 'recovered' individual was 23.8 bpm (range = 21.2-26.0, SD = 1.18, n = 9). The mean time to reach this threshold (i.e. the breakpoint between pre-recovery and post-recovery) was 4.1 d (range = 1.3-5.8, SD = 1.7, n = 9). The threshold for activity recovery was greater for the acoustic tags (mean = 0.44 m s⁻², n = 2) than the DSTs (mean = 0.29 m s⁻², n = 3), reflecting the higher activity values registered by the acoustic tags. For the activity tags where there was evidence of recovery, the mean time taken to reach the threshold was similar to that for the heart rate tags (mean = 3.3 d, range = 2.1-5.7, SD = 0.09). For the two individuals that were each tagged with two activity tags, the identified breakpoints between the parts of the time series classified as pre- and post-recovery depended on the tag: in both individuals, the threshold to reach post-recovery occurred later for the acoustic tag than for the DST.
Although raw values of mean heart rate on the day of anaesthesia and surgery (mean = 36.0 bpm, range = 24.6-45.6, SD = 5.6, n = 9) varied more than the recovery threshold (mean = 23.8 bpm, range = 21.2-26.0, SD = 1.8, n = 9, Table 2), there was a clear declining trend for all tagged individuals. With the exception of one individual (F8, Figure 4), there was a similar trend for activity: day of anaesthesia and surgery, mean = 0.64 m s⁻², range = 0.39-0.92, SD = 0.20, n = 7; recovery threshold, mean = 0.36 m s⁻², range = 0.28-0.43, SD = 0.07, n = 7. For both heart rate and activity, raw values pre-recovery were significantly greater than those post-recovery (one-sided Wilcoxon signed rank test: heart rate, V = 45, p = 0.002, n = 9; activity, V = 28, p = 0.008, n = 7). Table 2: Recovery based on heart rate and activity sensors. Activity sensors with a * suffix indicate acoustic tags. "No fit" indicates that the long-term component of the time-series did not follow an exponential decline and that an exponential model could not be fitted; "No rec" indicates that it was possible to fit an exponential model to the time-series but that recovery thresholds and times were not assigned because the fitted exponential model did not plateau.
Discussion
The current study showed plateauing of most time-series, indicative of recovery, within the 14 d of the experiment. Two activity tags, F1(Aco) and F2(Aco), however, did not show plateauing, suggesting that the tagged fish had not fully recovered in terms of activity during this period. Other time-series showed gentle gradients even after the recovery breakpoint (for example, the F6 heart rate tag), so classifying such individuals as fully recovered at the breakpoint is less robust. However, identified breakpoints generally corresponded with systematic changes in the time-series. For example, the breakpoint on the F6 heart rate tag occurred in a trough separating the sharp initial decline over the first 5.75 d from the gentle gradient afterwards, so it is reasonable to infer that the identified breakpoint corresponded to the transition to post-recovery. The modelling approach used here allowed for a consistent method for establishing the time until recovery among a group of time-series. It should be noted, however, that estimated times until recovery depend on the modelling approach used. For instance, fitting an exponential model to raw rather than detrended time-series, or using a different method to establish a breakpoint between pre- and post-recovery parts of the time-series, would yield different estimates. The exponential model used in this study is a well-validated method for modelling physiological recovery (Bartels-Ferreira et al., 2016), but alternative approaches may also be considered (e.g. Svendsen et al., 2020). The sample size of fish in this study was small (n = 9); a larger sample size would allow a better quantification of the range of behaviour during recovery and a better-informed selection of the modelling approach.
The heart rate data suggest that the tagged Atlantic salmon in our study could only be considered fully recovered from the anaesthesia and surgical procedure of intraperitoneal tag implantation after an average of ≈ 4 and up to a maximum of 6 days post-surgery. While some studies have indicated longer recovery times post-tagging (Hvas et al., 2020a), our observations concur with several previous studies that have reported recovery lengths post-tagging similar to ours (Martinelli et al., 1998; Jepsen et al., 2001; Bridger and Booth, 2003; Brijs et al., 2018; Brijs et al., 2019b). Although some data series from the tagged fish in our study may visually appear to continue declining after fulfilling the recovery threshold criteria, these changes were not found to be statistically significant. Recovery results based on activity data varied more in both the recovery threshold criteria and the time to recovery than those based on heart rate. Moreover, both the temporal patterns and absolute values changed less for activity than for heart rate between the post-tagging and post-recovery periods, implying a lower ratio between the baseline pattern (i.e. circadian variations) and the changes in activity caused by the tagging procedure. Together, these factors suggest that heart rate is a more sensitive and consistent indicator of post-tagging recovery than activity.
It is also important to note that there were individual variations in the recovery time assessed from heart rate. Although inter-individual variation in recovery time might be an inherent effect one should expect when tagging Atlantic salmon, we did find that mature fish had a shorter heart-rate recovery time than immature fish. However, the low sample size did not provide enough statistical power to robustly test influences on recovery time, so we recommend further studies with larger sample sizes to increase power in analyses of potential influences.
Based on these results, we urge caution in using telemetry data collected after anaesthesia and surgery without first ensuring that the fish are fully recovered (Mulcahy, 2003; Cooke et al., 2011). Biosensors that measure heart rate and/or activity can be potent tools in such evaluations, as they provide quantitative, high-resolution data that will be more consistent, precise and objective in capturing the full post-anaesthesia/surgery effects than, for example, comparing behavioural observations of tagged vs. untagged fish.
Alternative parameters that could be used to assess post-tagging recovery in individual fish include blood glucose, lactate or pulse oximetry/photoplethysmography (PPG). These could provide a more direct assessment of stress levels in salmon, but we are not aware of any commercial electronic tags able to sense such parameters in live fish. Other techniques based on measuring cortisol in faecal matter (Cao et al., 2017) or bioelectric field monitoring akin to that used by sharks (Kalmijn, 1972) could potentially result in future solutions that could be used to evaluate recovery in a less invasive and independent manner, where the fish are monitored before, during and after the procedure. However, these methods have yet to be developed to a stage where they can be applied to free-swimming fish, at least in large groups under commercial production conditions, and would only be able to provide information at the group level.
All three DST types tested in this experiment appeared to be suitable for applications on Atlantic salmon, as all tagged fish provided valid heart rate data. Moreover, the activity proxies computed from the DSTs containing accelerometers were found to be comparable to those measured by the acoustic tags (see Svendsen et al., 2020 for details on this comparison). The lower absolute amplitude of the activity proxies computed from the DST data was probably caused by the DSTs sampling at a lower frequency (1 Hz) than the acoustic tags (5 Hz), thereby capturing fewer high-frequency components. The surgical procedure used to implant the heart rate tags was much simpler than the procedure needed for the multivariate implants recently used in rainbow trout by Brijs et al. (2019a), but was more comprehensive and invasive than that used for conventional intraperitoneal tag placement. It is likely that less complex surgical procedures would lead to shorter recovery times in Atlantic salmon, as previously found for rainbow trout (Altimiras and Larsen, 2000; Gräns et al., 2014). However, it is probably reasonable to be conservative with respect to recovery times, especially if the data are to be used, for example, as a management tool in aquaculture applications or to evaluate stress effects on fish in conjunction with ecological studies. Using data from fish that are still recovering from post-anaesthesia/surgery effects in such applications could result in sub-optimal management decisions or erroneous conclusions that could have ramifications beyond the study itself.
The fish included in the analyses exhibited heart rates that gradually stabilised at daily means between 21 and 26 bpm (daily variations between 15 and 30 bpm, similar to that observed for adult Atlantic salmon of 62.3 cm mean fork length at 4 °C by Lucas, 1994). Due to the similarities across tanks and individuals, this range in heart rate may be typical for Atlantic salmon of this size and at the prevailing temperatures. Moreover, all individuals in tanks 1, 2 and 4 had similar circadian rhythms (higher heart rates during daytime than at night) and gradual post-surgery declines in mean daily heart rate (from more than 30 bpm after surgery to 21-26 bpm after up to six days). This implies a regularity across individuals that increases the likelihood that heart rate may function as a consistent stress indicator in Atlantic salmon that may be used to assess fish recovery after tagging. The tagged fish in tank 3 were excluded from the study due to inter-individual aggression. These individuals exhibited measured heart rates that differed from the others in both individual and aggregate values. Although these fish also showed signs of circadian variation in heart rate, the mean value did not appear to decline over the days following tagging, an effect attributed to the inter-individual aggression. This may indicate that the stress induced by the aggression between the two males in this tank overrode the stress response due to recovery. A potential interpretation is that the aggressive encounters caused chronically elevated stress levels that masked the recovery stress caused by handling, anaesthesia and surgery. This could further mean that recovery stress can be difficult to monitor if the fish are simultaneously influenced by independent external events, such as individual interactions due to dominance hierarchies (Sloman et al., 2001; Cubitt et al., 2008).
Based on established knowledge of how salmon swimming speeds are affected by variations in light intensity (Oppedal et al., 2011), as well as previous telemetry studies applying similar activity proxies to salmon in sea-cages (e.g. Føre et al., 2018), we expected to see a circadian rhythm in activity in the present study, with higher activity during day than at night. Contrary to these expectations, the circadian trends in the activity of our fish were on average higher during night-time than during the day. A similar "inverse circadian" rhythm was observed in salmon reared in fish tanks during the period after tagging by Kolarevic et al. (2016) and could imply that a "normal circadian" activity rhythm may arise only after the salmon have recovered from tagging in tanks. Conversely, the circadian rhythm in heart rate was more as expected (higher during daytime), meaning that the fish generally displayed higher heart rates when measured activity was low than when activity was high. This may seem counter-intuitive, as one would expect more active fish to display higher heart rates, since salmon tend to display increased heart rates with increased swimming activity (Hvas et al., 2020b). However, it is possible that the higher heart rates during daytime were caused by effects such as feeding activity (Eliason et al., 2008; Gräns et al., 2009) or perceived increased predation risk due to higher light levels (Johnsson et al., 2001). These results are unexpected and interesting, but further interpretation would require additional experiments and data.
Although this study underlines the importance of critical evaluation with regard to recovery from anaesthesia and surgery when using telemetry, the data collected also highlight the importance of telemetry as a method for studying free-swimming fish. The heart rate and activity values for all tagged fish eventually plateaued, possibly indicating that they all recovered from the anaesthesia/surgery, and post-mortem pathology revealed no inflammation or other apparent morphological signs of reduced welfare due to the surgical procedures. Even though the low water temperatures during the experiment may have led to handling and surgery having less impact on the fish, the tagging procedure used here was more complex than conventional intraperitoneal tagging. It is thus reasonable to conclude that fish carrying telemetry tags can be considered representative members of the group they were selected from once they are fully recovered from anaesthesia and surgery, provided that they were a representative selection to begin with. However, this also requires that recommendations on the ratio between tag size and fish size are not exceeded (e.g. "the 2% rule", Thorstad et al., 2013). Since we worked with adult salmon with a mean weight of 2100 g, and the maximum tag weight carried by the fish was 22.6 g (around 1% of the fish body mass), this was not a challenge in our study.
Future research and potential technological improvements
Since this study only focused on Atlantic salmon exposed to one set of environmental conditions, it is difficult to assess if these concerns are also relevant for other species and/or fish under different conditions. Similar studies on rainbow trout using the same tag type found that they recovered 72-96 h after surgery (Brijs et al., 2019b), which was shorter than for the Atlantic salmon in the present study. Moreover, wounds in Atlantic salmon are known to heal faster in warmer temperatures than in cold water (Jensen et al., 2015), suggesting that the low water temperatures in the present study may have contributed to longer recovery periods. These elements suggest that species-specific effects and differences in external environmental conditions are important to consider when studying recovery times.
Future studies on the relationship between heart rate and post-anaesthesia/surgery recovery time should therefore be conducted for other species of interest, across relevant temperature ranges, to obtain a more complete picture of this relationship.
In the present experiment, the fish were kept in groups in small tanks. To investigate how recovery time is affected by possible scaling effects and social/inter-individual effects arising from group dynamics, future studies addressing post-tagging effects should be done with a larger number of tagged fish at larger spatial scales. This would also enable closer scrutiny of individual variation in recovery, as a higher number of tagged fish would provide a good foundation for finding statistical relationships at the individual level. Although our present results imply that inter-individual variations are a prominent feature of the recovery time of tagged salmon, a larger sample size will be necessary to properly conclude on the nature of such variations. To increase the relevance of a larger follow-up study, it could be done in fish cages in the marine environment, perhaps first by using meso-scale cages containing fewer fish than a commercial cage but at similar densities, and then moving to full-scale studies to cover all steps in the transition from lab to industrial scale.
Conclusion
The main conclusion from this study is that the Atlantic salmon in these experiments required an average of ≈ 4 and up to a maximum of 6 days of recovery after anaesthesia and surgery before their heart rates returned to assumed baseline routine values. Moreover, although observation of behaviour and/or activity alone may be insufficient to establish that the fish have physiologically recovered, activity measurements indicated recovery periods similar to those based on heart rate, although with a longer maximum period of 10 days. We therefore urge caution when using data collected after surgery and anaesthesia in studies using biologging/telemetry tags. Assuming that we want all individuals to be recovered, our study thus implies that only data collected after 6 days of recovery time should be used for further analyses. However, this recommendation is only applicable to studies featuring Atlantic salmon reared in experimental conditions similar to ours. Since recovery time will vary with factors such as fish species, water temperature, invasiveness of the surgery, anaesthesia time, fish density and physical scale, it is difficult to make general recommendations on when one can assume the fish to be recovered from tagging and the data to be safe for use in biological analyses. However, by conducting experiments similar to the present study where these parameters are varied, a more complete picture of how to account for fish recovery after tagging in telemetry studies may be obtained.
Declarations
Ethics approval and consent to participate
All fish handling and surgery were performed in compliance with the Norwegian animal welfare act and were approved by the Norwegian Animal Research Authority (permit no. 18/18431).
Consent for publication
Not applicable.
Availability of data and materials
Lifting the veil on disrespect and abuse in facility-based child birth care: findings from South West Nigeria
Background Eliminating disrespect and abuse in health care facilities during childbirth could be a contributory factor in improving pregnancy outcomes and avoiding preventable illnesses and deaths. This study aims to provide evidence of disrespect and abuse in this community in order to create awareness about its occurrence. Methods A cross-sectional survey was carried out on 384 recently delivered women who visited the postnatal and immunization clinics of a primary and a tertiary health facility in Ile-Ife. Information was sought about awareness of disrespect and abuse, the prevalence and forms of disrespect and abuse, and opinions on improvements which can be made in maternity services. Univariate analysis was used to summarise the data. Results About half of the respondents were in their fourth decade of life and had tertiary education. Overall, the majority (98.4%) of respondents agreed that it was their right to be treated with respect and dignity during childbirth, while about one-fifth (19%) had ever experienced some form of disrespect and abuse. The commonly identified forms of disrespect and abuse were: non-dignified care (12.8%), discrimination (8.1%), and detention and abandonment (6%). However, the majority (81%) of the respondents did not have any suggestions for improvements in delivery services. Conclusions Although most of the respondents knew it was their right to be treated with respect, some reported that they had experienced disrespect and abuse during childbirth in varying forms. The evidence from this survey draws attention to the need for interventions to address the health system factors hindering health service utilization.
Background
In March 2010, the United States Agency for International Development (USAID)-funded Translating Research into Action (TRAction) project called for a meeting of public health and human rights governmental and non-governmental organisations active in maternal health to review the subject of respectful and disrespectful birth care, including abusive maternal care [1]. This was motivated by the understanding that disrespect and abuse during childbirth not only involve human rights violations but are also one of the barriers to the utilization of a skilled birth attendant at delivery [1][2][3]. Over the last twenty years, efforts have gone into increasing the number of deliveries attended by a skilled birth attendant in order to reduce morbidity and mortality. These efforts have yielded positive results, as the proportion of deliveries attended by a skilled birth attendant in developing countries increased from 56% in 1990 to 62% in 2012; however, 800 women and 7700 newborns still die daily from complications during pregnancy, delivery and the postpartum period [4].
For a long time, the focus regarding barriers to skilled care was on access to care, until it was realised that improved access does not necessarily translate into active use. This shifted the focus to quality of care, in terms of not just the skill of health workers but also patients' rights and their perceptions of the quality of care they received [2,3]. Ongoing research reveals that pregnancy outcomes are also linked to the experience of care; to reduce preventable illnesses and deaths, health care delivery should involve not only good infrastructure and skill but also proper delivery in a gentle, caring way and with the right attitude [4].
The World Health Organization (WHO) has defined quality of care and prepared a framework for providing optimum care for mothers and newborns around the time of pregnancy, delivery and the postpartum period, recognising that adequate care in this period contributes maximally to saving lives [4]. According to the WHO Standards for improving the quality of maternal and newborn care in health facilities [4]: "The quality of care for women and newborns is therefore the degree to which maternal and newborn health services (for individuals and populations) increase the likelihood of timely, appropriate care for the purpose of achieving desired outcomes that are both consistent with current professional knowledge and take into account the preferences and aspirations of individual women and their families".
The framework divides quality of care into two parts: the provider's provision of care and the patient's experience of care. Embedded in the patient's experience of care is a section on standards of care [4], with one of the domains stating that "women and newborn receive care with respect and preservation of their dignity" [p.3]. This is meant to address all forms of abuse, discrimination, neglect, detainment, and denial of services [2,4].
There have been reports of disrespect and abuse during labour and delivery from around the world, and this occurrence has been defined "as interactions or facility conditions that local consensus deems to be humiliating or undignified, and those interactions or conditions that are experienced as or intended to be humiliating or undignified" [5]. The seven categories of disrespectful maternity care highlighted by Bowser and Hill in their landmark paper on the analysis of disrespect and abuse are: physical and verbal abuse, non-consented clinical care, non-dignified care, discrimination, non-confidential care, abandonment, and detainment in a health facility. These may be due to behavioural and structural factors. Evidence from the literature suggests that behavioural factors may be linked to learned behaviour from pre-service training, the belief that healthcare workers are acting in the best interest of patients, and a possible lack of commitment to ethics and respect for the human rights concerns of their patients [1,6,7]. Furthermore, many authors have also noted that structural challenges, including under-staffing, poor pay and poor facility space, may lead to disrespectful maternity care [2]. Disrespect and abuse have been recorded in sub-Saharan Africa. Findings from Kenya revealed that 20% of the women polled reported some form of disrespect and abuse during their maternity care, ranging across six categories which include: non-consented care, abandonment and detainment, non-dignified care, physical abuse and non-confidential care. They perceived they were humiliated when receiving care during their labour and deliveries [8].
In Ghana, a study on exposure to disrespectful maternal care among final-year student midwives found that 72% considered maltreatment during labour a problem, and they reported that the occurrence was more common in government facilities than in private facilities. About 80% of respondents also reported that the way women were treated during their labour and delivery influenced their choice of a delivery place, a probable reason why many women decided to deliver at home [9].
Disrespectful and abusive care has also been recorded in Nigeria. Okafor, Ugwu, and Obi in 2012 noted that 98% of their respondents reported some form of disrespectful and abusive care in their last delivery. This ranged from non-consented and non-dignified care to abusive care and abandonment, and the commonest forms were non-consented care and physical abuse. Non-consented care was carried out for procedures such as episiotomies, blood transfusions and caesarean sections, and physical abuse included being beaten, slapped, restrained and tied, as well as incidences of sexual abuse by health workers [10]. In Abuja, north-central Nigeria, Bohren et al. interviewed patients, health workers, and other hospital staff who observed patients being slapped, threatened, shouted at and physically restrained while on admission. Although some viewed this type of behaviour as unacceptable, a few believed actions like these were necessary in order to motivate the women to comply with care and ensure healthy outcomes. Some alleged that slapping was necessary to give the woman "more strength to push the baby out" [7].
In the face of evidence which proves the presence of disrespect and abuse in maternity care, the concept of Respectful Maternal Care (RMC) has evolved from the safe motherhood initiative and is supported by USAID along with the White Ribbon Alliance. It is "an approach centred on the individual, based on principles of ethics and respect for human rights, and promotes practices that recognize women's preferences and women's and newborns needs" [11].
A few studies have shown that some victims of disrespectful and abusive maternity care accept the abuse they receive as rightfully deserved or acceptable, and may not recognize any infringement of their fundamental human rights. Some regard this kind of behaviour as "normal" and come to expect it [1,7,9].
In spite of available evidence of disrespect and abuse of women in facility-based childbirth care, there are few interventions geared towards reducing disrespectful and abusive care and promoting respectful maternal care in this environment. There is a need to create awareness by providing data and proof of its occurrence. This study is aimed at providing empirical evidence to support the growing body of reports from research carried out in other parts of the country and in the West African sub-region. Studies done in Nigeria are few, and none of them provides information about the South West geopolitical zone. The study also investigates the existence and magnitude of disrespect and abuse of women during childbirth, from the client's perspective, in health facilities in Ile-Ife, South West Nigeria.
Methods
This was a descriptive cross-sectional survey. Respondents were recruited at the postnatal and immunization clinics of a primary and a tertiary public health facility in Ile-Ife, Osun State; these were women who had booked for antenatal care and had a facility-based delivery in the selected facilities. Using the WINPEPI software's formula for estimating a proportion, with a prevalence of 20% from a similar study in Tanzania, a 95% confidence level, 5% precision and a 20% attrition rate, a minimum sample size of 303 was determined. For a robust analysis, 400 questionnaires were eventually administered. As women presented at the immunization or postnatal clinic, they were recruited by trained research assistants until the sample size of 400 was met. Of the 400 questionnaires administered, 384 were returned completed, while 16 were discarded because the mothers did not complete the interview and hurried to leave the facility after receiving clinical care, giving a response rate of 96%. Inclusion criteria were all women who had had a baby within the last three months, while women who did not deliver in the health facilities were excluded. Interviews were conducted with eligible respondents, who were serially recruited over a period of two weeks in the order of their registration at the selected clinics.
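As a rough arithmetic check (not the exact WINPEPI computation, whose internal corrections may differ slightly and account for the reported 303), the standard large-sample formula for estimating a proportion with p = 0.20, precision d = 0.05 and z = 1.96 gives

\[
n_0 = \frac{z^2\, p\,(1-p)}{d^2} = \frac{1.96^2 \times 0.20 \times 0.80}{0.05^2} \approx 246,
\qquad
n = \frac{n_0}{1 - 0.20} \approx 307,
\]

of the same order as the minimum sample size used here.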
A pre-tested, interviewer-administered, semi-structured questionnaire was used to collect information on respondents' awareness of disrespect and abuse among women who had facility-based childbirth and on the prevalence of this disrespect and abuse. Prevalence was defined as "ever experience" of disrespect during childbirth for each respondent interviewed. Questions were also asked about the different forms of disrespect and abuse they had experienced, as well as their opinion on areas for improvement in maternity services. The questionnaires were administered by trained interviewers, and the questions were translated into Yoruba and back-translated into English to ensure accuracy in conveying the intended meaning. Interviews were conducted in Yoruba for those who could not speak English. Participation was voluntary, confidentiality was assured, and verbal consent was obtained from respondents before interviews were conducted.
Data were field-edited, then entered and analysed using SPSS version 17. Univariate analysis to generate frequency tables was performed for socio-demographic characteristics, awareness of disrespect and abuse in facility-based childbirth, the prevalence of disrespect and abuse, the different forms of disrespect and abuse experienced, and opinions regarding areas of improvement in maternity services. Responses to open-ended questions were grouped into similar categories and transformed into quantitative variables, after which frequencies were determined.
Results
A total of 384 women were recruited into the study. Their ages ranged from 15 years to more than 50 years, with more than half of them (51%, N = 177) in the 30-39 year age group. Slightly more than half (52.7%, N = 202) had tertiary education, while almost half (49.2%, N = 189) were either artisans or traders. More than 7 in 10 were Christians and more than 8 in 10 were Yoruba (Table 1).
As shown in Table 2, more than 9 in 10 of them said that it was not culturally acceptable to be disrespected or abused during childbirth. About the same proportion agreed that it was their right to be treated with respect and dignity, while 11% (N = 44) reported knowing someone who had been treated disrespectfully or abused during childbirth. Table 3, which was developed from the question "Have you ever been disrespected or abused during maternity care?", revealed that one-fifth (19%, N = 73) of respondents had ever experienced disrespect and abuse during childbirth. In Table 4, respondents identified the different forms of disrespect and abuse they had experienced. The commonest form was non-dignified care (12.8%, N = 49), followed by discrimination (8.1%, N = 31); the least identified was physical abuse (1.6%, N = 6).
Different forms of physical abuse were experienced by 6 of the 384 respondents: being pinched or beaten (50%, N = 3 each), and being slapped or sexually abused (33.3%, N = 2 each) (Table 5).
The forms of non-consented care, as experienced by 18 of the 384 respondents, are shown in Table 6 and most commonly concerned a lack of information on the care received (77.8%, N = 14). Table 7 describes the non-confidential care received by respondents. The commonest was being asked private questions in the presence of other patients or of healthcare workers not directly providing services to the respondent (60%), while the least common was having the delivery taken in the presence of other patients and their relatives (15%). Table 8 describes the different forms of abandonment and detention: 7 in 10 of the respondents who experienced disrespect and abuse said they were left unattended during delivery and were denied necessary care which they expected, in the form of a prompt response when they requested medical attention or the constant presence of healthcare workers in the delivery room. Of the 23 women who responded to the question on being detained due to inability to pay hospital fees, 2 said they had been detained.
In Table 9, the form of non-dignified care which occurred most was being shouted at (59.2%), while the use of a harsh tone and words was a close second (49%). The least common form was being threatened (6.1%).
Regarding forms of discrimination, Table 10 shows that perceived discrimination occurred most commonly due to economic status (38.7%) and least commonly due to HIV/AIDS status (6.5%). Table 11 summarises the suggestions by respondents on improvement of maternity services in three areas: antenatal care (ANC) services, delivery services, and attitude of staff. Across the three areas, 20 to 25% of respondents suggested promptness of services, improved care and increased staffing as ways to improve maternity services.
Discussion
As demonstrated by this study, obstetric abuse is a reality for women delivering in the identified facilities in Ile-Ife that might hinder achieving the coverage goals for reducing maternal mortality. This study found the prevalence of disrespect and abuse during childbirth in a local government area in Ile-Ife to be 19%, similar to findings in other studies conducted in Tanzania [12] and Kenya [6] reporting prevalences of 19.5% and 20%, respectively. However, a study conducted in Enugu, South Eastern Nigeria [10] reported a prevalence of 98%, suggesting a possible regional or cultural link with respectful maternal care.
This prevalence occurred despite a high level of awareness among respondents that abuse during childbirth is culturally unacceptable (97.0%), a finding in congruence with Nigeria's adopted National Standard of Care for public health facilities, the Charter on Universal Rights of Childbearing Women, which does not condone disrespect and abuse in childbearing [2]. An even higher proportion (98.4%) considered respectful and dignifying maternal care a right during childbirth, again a typical finding in other studies, where respectful maternal care is considered a fundamental right of women [13,14].
Just over 10% of respondents reported knowing someone who had been abused during childbirth, much lower than the rate reported by midwifery students in Ghana, 78.0% of whom had witnessed obstetric abuse in the maternity rooms [9]. This difference may be due, in part, to the proximity of midwifery students to the childbirth process and to low reporting by women who may consider the process normal or try to move on from an unpleasant experience. This study found non-dignified care to be the most frequently reported form of abuse, at 12.8%, typified in the opinion of respondents by being shouted at, the use of harsh words or tone, or being insulted during the birth process. Intentional humiliation and threats were present but much less reported. Because treatment with dignity largely determines utilization of these facilities, women who have received non-dignified care may prefer to deliver elsewhere. Being discriminated against on account of socio-demographic characteristics such as economic status, age, educational level, ethnicity, marital and HIV status was also a key finding in this study. This is supported by findings in nearby Ibadan, Nigeria, which showed an association between the experience of violence and socioeconomic status [15]; however, Innocent et al. in Enugu, Nigeria, did not find any significant association between maternal socio-demographic characteristics and disrespectful maternal care. Despite being reported as a distasteful practice in facility-based delivery [4,10], abandonment was also reported by 6% of the respondents, generally as being left unattended or denied necessary care, with fewer reports of a lack of encouragement during delivery and of detainment on account of inability to pay for services. Non-confidential care, which is an unacceptable breach of the code of ethics in healthcare, was found in this study to include being asked private questions publicly, physical examination without a screen, and delivery in public view. Reports of non-consented care due to inadequate information about care and procedures were low, which may be a result of increasing awareness among healthcare providers that non-consented care may result in litigation. Physical abuse was the least reported form (1.6%), mainly as being pinched, beaten, slapped or sexually abused; nevertheless, even a single occurrence of sexual abuse is an important issue which needs to be highlighted. These findings are proportionately similar to findings in Kenya [8] and Tanzania [12,16], though the levels in this study seem considerably lower than those found in these settings, again likely due to rising awareness of litigation among care providers.
Respondents in this study suggested kindness from care providers and prompt service as ways of improving maternal services, particularly antenatal and delivery services; such measures are important for not normalising violence as a part of obstetric care, as suggested in other studies [14]. Increased staffing was also suggested, in keeping with identified policy approaches to reducing disrespectful maternal care [2,4]. Exit interviews were conducted with women who had live-born children and were attending postnatal clinics or who brought their children for immunization. Possible limitations of this include the exclusion of the experiences of mothers who had a stillbirth or an early neonatal death, recall bias, and social-desirability bias, whereby respondents are likely to give information they consider favourable. Selection bias may also have occurred during recruitment, as some of the respondents were recruited from postnatal clinics, which mothers who encountered disrespect and abuse during their deliveries may be unwilling to attend. As women deliver in both formal and informal delivery facilities in Ile-Ife, the demographics of women attending the selected clinics may not completely reflect the demographics of pregnant women in the city, a possible limitation to the generalizability of the study. Also, considering disrespect and abuse as a single item may have affected study validity. A qualitative study could also have been conducted to provide in-depth information on the topic beyond the a priori list of categories of disrespect and abuse used for the survey. Lastly, the discarded responses from the few respondents who were rushing to leave the clinic after receiving services may also pose a limitation.
Conclusion
This survey revealed varying forms of disrespect and abuse during childbirth in health facilities in this community. It adds to the growing evidence that poor access to health care services might also result from the treatment women receive in these facilities, and draws attention to the need for interventions to address the system factors hindering health services utilization.
Environmental Inequality in Four European Cities: A Study Combining Household Survey and Geo-Referenced Data
Combining individual-level survey data and geo-referenced administrative noise data for four European cities (Bern, Zurich, Hanover, and Mainz; n = 7,450), we test the social gradient hypothesis, which states that exposure to residential noise is higher for households in a lower socioeconomic position (measured by income and migration background). In addition, we introduce and test the 'environmental shielding hypothesis', which states that, given environmental 'bads' in the neighbourhood, privileged social groups have better opportunities to shield themselves against them. Our results show that, for many residents of the four cities, observed road traffic and aircraft noise levels are above World Health Organization limits. Estimates of spatial error regression models only partly support the social gradient hypothesis. While we find significant but relatively small income effects and somewhat stronger effects of having a (non-Western) migration background, these effects are not significant in all cities. However, especially high-income households are more capable of avoiding exposure to indoor noise. Due to their residence characteristics and having the resources to maintain high standards of noise protection, these households have more capabilities to shield themselves against environmental bads in their neighbourhood. This supports the environmental shielding hypothesis.
Introduction
While, in the last decades, research on environmental inequality has expanded rapidly in the United States (Mohai, Pellow and Roberts, 2009; Mohai and Saha, 2015a,b), corresponding research in European countries has remained much sparser (Elvers, Gross and Heinrichs, 2008; Laurent, 2011; Preisendörfer, 2014). Early studies on 'the social gradient'1 of local environmental threats in the European context were mainly conducted by health scientists, epidemiologists, and medical researchers, some of them especially alarmed by the detrimental effects of noise and air pollution on the health and cognitive functioning of children (for a review, see World Health Organization and European Centre for Environment and Health, 2005). It is well known that noise and air pollution, which are often strongly correlated, have an enormous impact on people's health (Basner et al., 2014; European Commission, 2016; European Environmental Agency, 2019). They affect the respiratory and cardiovascular systems and ultimately lead to increased mortality rates (Forastiere et al., 2007; Qi Gan et al., 2012; Basner et al., 2014). Hence, environmental inequality contributes to the inequality in life expectancy resulting from socioeconomic status and social class. Moreover, the monetary costs of residential environmental bads increase the 'real' income inequality as measured by the Gini coefficient (Muller, Matthews and Wiltshire-Gordon, 2018).
Social scientists interested in environmental inequality most often examine how local environmental conditions vary according to socioeconomic characteristics and which mechanisms generate their unequal distribution. The typical study on the social gradient is cross-sectional and uses aggregate spatial data at the level of census blocks, communities, or otherwise spatially defined areas. Indicators of socioeconomic status composition and measures of environmental conditions are then simply correlated or analysed by multivariate regression methods. Few studies use individual-level data, and even fewer use longitudinal individual-level data to investigate the mechanisms presumably generating environmental inequalities (Crowder and Downey, 2010; Pais, Crowder and Downey, 2014; Mohai and Saha, 2015a,b; Best and Rüttenauer, 2018).
In the present study, we do not have longitudinal data and hence cannot uncover causal mechanisms of environmental inequality. However, we can avoid the 'ecological fallacy' that occurs when using aggregate data. The term 'ecological fallacy', originally coined by Robinson (1950), denotes the problem in empirical research that one cannot infer correlations at the individual level from correlations at the aggregate level. A positive correlation between the share of migrants in a district and the district's average level of noise exposure, for example, does not necessarily prove that migrants have to endure more noise than non-migrants. Studies using individual-level data that avoid the problem of a possible ecological fallacy are rare. Where they do exist, the data pertaining to local environmental threats are often based on subjective measures, i.e. perceptions and evaluations of residential environmental conditions by survey respondents (e.g. Best and Rüttenauer, 2018). These subjective measures (noise annoyance, health worries, etc.) may be biased by respondent characteristics, such as education, environmental concern, or, more specifically, noise sensitivity. The main strength of our study is that it combines individual-level survey data with 'objective' data for residential environmental bads, i.e. administrative data on road traffic and aircraft noise. Via geo-referencing, these objective noise data were matched to survey data. Based on the combined survey and 'objective' noise data, we examine social inequalities in the exposure to road traffic and aircraft noise for the Swiss cities of Bern and Zurich and for the German cities of Hanover and Mainz.
Our analyses are guided by two main hypotheses. The first is the social gradient hypothesis, which states that exposure to residential noise is higher for households in a lower socioeconomic position (in our study measured by income and migration background). In addition, we introduce and test a second hypothesis, which we refer to as the 'environmental shielding hypothesis'. This hypothesis states that, if there are environmental bads in the neighbourhood (such as road traffic and aircraft noise), privileged social groups have more and better opportunities to shield themselves against them.
In the 'Theoretical and Empirical Background' section, we discuss these two hypotheses based on findings from previous studies on environmental inequality. The next section then describes our data and methods. In the 'Empirical Results' section, we present descriptive results and estimates from multivariate spatial error regression models (SEM) examining the two hypotheses. The last section draws conclusions.
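For readers unfamiliar with spatial error models, the following R sketch illustrates the general form of such an estimation. It is a hedged illustration only: the packages (spdep/spatialreg), the k-nearest-neighbour weights, and all variable names (coords, hh, noise_db, log_income, migrant) are assumptions made for exposition, not the authors' actual specification.

library(spdep)       # neighbour lists and spatial weights
library(spatialreg)  # errorsarlm(): maximum-likelihood spatial error model

# coords: matrix of household locations; hh: household-level data frame
# (both hypothetical). Row-standardised weights from 10 nearest neighbours.
nb <- knn2nb(knearneigh(coords, k = 10))
lw <- nb2listw(nb, style = "W")

# Spatial error model: noise exposure regressed on household characteristics,
# with spatially autocorrelated error term u = lambda * W u + e
sem <- errorsarlm(noise_db ~ log_income + migrant, data = hh, listw = lw)
summary(sem)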
Theoretical and Empirical Background
The Social Gradient Hypothesis

The basic hypothesis of environmental inequality research suggests a negative correlation between socioeconomic status and unfavourable local environmental conditions (Ringquist, 2005). There are two main processes that might explain this correlation (Mohai and Saha, 2015a; see Rüttenauer, 2018: Chap. 1.2 for an overview). The first process can be termed 'disparate siting' (Mohai and Saha, 2015a) and supposes that investors, politicians, and other decision makers might prefer to locate industrial sites or other unwanted facilities near low-income areas because land and real estate prices are low and other similar facilities are already there. They might also expect less opposition from disadvantaged groups to decisions to locate polluting facilities in their neighbourhood. The second process can be called 'post-siting demographic change' (Mohai and Saha, 2015a) and assumes that low-income groups tend to settle in areas with unfavourable environmental conditions because rents are lower than in less-exposed neighbourhoods. Minority or migrant groups may also experience discrimination in the housing market. This describes a selective move-in process. An additional selective move-out process predicts that privileged social groups have a higher probability of leaving areas with unfavourable environmental conditions. We cannot test these causal mechanisms in the present study, but we can examine the basic social gradient hypothesis that socioeconomic status matters for exposure to environmental conditions using rather fine-grained, geo-referenced noise data. This is important because, on the one hand, many studies in the United States (for reviews, see Ringquist, 2005; Brulle and Pellow, 2006; Mohai, Pellow and Roberts, 2009; Banzhaf, Ma and Timmins, 2019; for a critical assessment, see Bowen, 2002) and Europe (for Germany, see Mielck and Heinrich, 2002; Best and Rüttenauer, 2018; Rüttenauer, 2018, 2019a; for France, see Padilla et al., 2014; for Switzerland, see Braun-Fahrländer, 2004; Diekmann and Meyer, 2010; for the United Kingdom, see Evans and Kantrowitz, 2002; Mitchell and Dorling, 2003; Agyeman and Evans, 2004; for comparisons of European cities, see Pasetto, Mattioli and Marsili, 2019; Samoli et al., 2019) provide evidence for the existence of a social gradient with respect to social class, income, education, foreign origin, etc. On the other hand, empirical results vary greatly between studies. Evidently, the strength of the association between socioeconomic status and environmental quality can depend on the status dimension (race, income, social class, migration background, etc.), on the type of environmental burden (different kinds of noise and air pollution, distance to landfills, toxic waste, industrial sites, etc.), on the area under study, on the details of area demarcations (the modifiable areal unit problem), and on the statistical methods used.
For example, for French cities, Padilla et al. (2014) were puzzled by the seemingly paradoxical phenomenon of a positive social gradient in Paris at the census block level. Air pollution in the French capital (measured by mean concentrations of nitrogen dioxide) is significantly more severe in city blocks populated by people of high socioeconomic status. The reverse, however, is true for the French cities of Marseille and Lille; and in Lyon, the pattern turned out to be curvilinear, i.e. the middle social categories experienced the highest exposure levels. Contrary to common belief among researchers, the 'Paris irregularity' does not seem to be an isolated exception, and the empirical validity of the social gradient hypothesis is far from certain.
In a study pertaining to census blocks in Rome, Forastiere et al. (2007) also observed a positive association between exposure to traffic-induced air pollution (measured by particulate matter PM10) and both income and socioeconomic status. Rüttenauer (2019a) explored the association between industrial sites, air pollution, and environmental inequality in German cities (combining data at the city and grid cell level). He found that the burden for foreigners is higher than for German citizens in most cities. However, there are also cities where this relation is reversed. Many cities in Europe and other parts of the world have highly attractive, often historic, and expensive inner-city districts that face serious overcrowding problems accompanied by above-average levels of noise and air pollution. Nevertheless, young professionals and high-income people often prefer an urban lifestyle and therefore choose busy and noisy neighbourhoods located in the inner city. Individual residence decisions are complex, and there is a multitude of factors that people trade off when deciding where to move to and, finally, where to live (Clark et al., 2002; Clark, Deurloo and Dieleman, 2006).
Regarding road traffic noise, one of our explananda, Carrier, Apparicio and Séguin (2016) found slight evidence for the social gradient hypothesis relating to income and minority status based on data from 14 boroughs in Montreal, Canada. In line with the social gradient hypothesis, Casey et al. (2017) also document disparities in overall noise pollution (measured at natural/rural sites, urban sites, and near airports) across ethnic and socioeconomic groups at the level of census blocks in the United States. At the city block level, Lagonigro, Martori and Apparicio (2018) report higher noise exposure, including road traffic noise, for unemployed and older people in Barcelona, Spain; yet they do not find differences in noise exposure relating to income and young age (children). Based on data for 201 statistical sectors in Ghent, Belgium, Verbeek (2019) observes a positive association between income and exposure to road, railway, and industry noise.
Contrary to the above-mentioned studies, which all refer to aggregate-level data on local environmental disamenities, our approach allows us to test the social gradient hypothesis at the individual level and thus to avoid the problem of ecological fallacy. Such studies are rare. For example, Diekmann and Meyer (2010) conducted a nationwide survey in Switzerland and linked household data to geo-referenced data on noise and air pollution. Although they found a negative social gradient, the 'slope' was very small in comparison with other factors explaining the variance in exposure to emissions. Living in an urban area rather than in the countryside, for instance, increased air pollution (nitrogen dioxide) by a factor much higher than a hypothetical doubling of income. Swiss census data yielded similar estimates (Diekmann and Meyer, 2017). The authors speculate that the surprisingly weak social gradient may result from Swiss particularities, i.e. the low level of residential segregation, the absence of landfills, and the minimal presence of heavy industry in Switzerland. In the present study, we follow a similar approach, focusing on road traffic and aircraft noise.
The Environmental Shielding Hypothesis
Higher exposure to air pollution notwithstanding, Forastiere et al. (2007) find in their above-mentioned study about Rome that the negative health effects of air pollution are less pronounced for people with a higher income and socioeconomic status than for those with lower income and socioeconomic status. The authors mainly explain this finding by arguing that low-income and low-status groups are more likely to suffer from chronic diseases (with the strongest differences existing for diabetes mellitus, hypertension, heart failure, and chronic obstructive pulmonary diseases) and are therefore more susceptible to the health effects of air pollution. They further conjecture that rich people are less often outside their residences and frequently have second homes in the countryside. Another possible explanation, which we will follow in our contribution, is the more general assumption that households with a higher level of economic and other resources are more capable of protecting and shielding themselves against environmental bads at their place of residence. We will call this the environmental shielding hypothesis.
'Objective' measures of environmental conditions, such as the road traffic and aircraft noise data provided by administrations, usually capture emissions outside a building. However, for a given level of outside noise, inside noise levels can vary greatly. For living comfort, subjective well-being, and health effects, indoor rather than outdoor noise is crucial, and there are more or less effective ways to prevent outside noise from intruding inside a building and thus becoming subjectively annoying. Inspired by psychological stress research (e.g. Aldwin, 2007; Biggs, Brough and Drummond, 2017), we may denote such preventive measures as 'coping strategies'. Whereas coping strategies in stress research typically refer to subjective modes of dealing with stress factors, what we have in mind here are rather 'structural' coping strategies referring to housing characteristics.
A first and very basic factor that enables coping with disturbances originating from outside is a spacious home with several rooms. The more rooms a dwelling has, the higher the chance of having rooms that are less exposed to emissions. This should particularly apply to noise, which we are interested in here, but less so to air pollution. Furthermore, focusing on noise, this rule should also apply more to road traffic than to aircraft noise. If there is residential aircraft noise, it is likely to be uniformly distributed, whereas road traffic may be loud in front of a building and much less so behind it. For our empirical analyses, we expect that a spacious home (measured in square metres) is much more often an advantage enjoyed by high-income households than by low-income households, and by people without than with a migration background.
When the residence of a household consists of two or more rooms, it is a reasonable strategy to choose those rooms for sleeping that are least exposed to noise and other environmental bads. Sleeping is a human activity that takes up about one-third of the day and is very important for recreation, subjective well-being, and health. More generally, a household with a spacious home can arrange indoor living routines in a way that minimizes potentially annoying outdoor noise. Noise-exposed rooms are good for purposes that do not involve a lengthy stay, whereas relatively quiet rooms are good for sleeping, relaxing, working, or studying. It can be assumed that the opportunity at home to move to quieter indoor zones makes noise exposure subjectively less annoying and reduces potential stress reactions. In stress research, it is well known that personal control over a situation facilitates coping with stress-prone circumstances (Aldwin, 2007; Biggs, Brough and Drummond, 2017).
A further structural coping strategy aims at building features and construction measures. Independent of dwelling size and indoor arrangements of daily activities, the intrusion of noise and other environmental bads can be reduced by features that improve the construction of a building. Over the last decades, many new and efficient techniques of noise and energy insulation of buildings have been developed and implemented (e.g. McMullan, 2018). With respect to noise, the quality of the windows is particularly important because windows are the weak spots, i.e. the most evident gateway for noise inflow. While there are various industry standards for windows with minimum requirements, modern high-quality soundproofed windows can absorb high noise levels, including potentially annoying aircraft noise. Based on the environmental shielding hypothesis, we expect that the residences of high-income and non-migrant households are more often equipped with better-quality and hence soundproofed windows than those of low-income and migrant households. The main reason for this expectation is the simple fact that fitting an apartment with high-quality windows requires financial resources. 2
Data and Methods
The main empirical data for the following analyses come from surveys in two Swiss cities, Bern and Zurich, and two German cities, Hanover and Mainz. We chose these four cities for several reasons. In terms of population and economic and environmental conditions, the four cities are not too different, although there is some variation concerning the institutional and cultural context. Furthermore, environmental issues arising from aircraft traffic were a special topic of our research project, and this motivated the selection of Zurich and Mainz (see below in this section). Moreover, members of our research group were affiliated with the universities in three of the four cities, and this proved helpful both for the survey sampling and for access, guidance, and validation concerning the 'objective' environmental data.
Except for some local adaptations, the surveys in the four cities were strictly comparable in terms of research design (sampling procedure, etc.) and questionnaire program. The surveys were carried out as mail questionnaires and were conducted between October 2016 and March 2017. They were based on random samples of the adult population (18-70 years old) selected from the official population registers managed and maintained by the city administrations. The samples included not only people of Swiss or German nationality but also foreigners and migrants living in the cities.
With some variations in detail, the subjects selected for participation in the study were approached using Dillman's (2007) tailored design method: they received a first invitation to participate in the survey, a postcard after one week, a second invitation after three weeks, and a third invitation after seven weeks. It is important to note that the surveys were not introduced as an environmental survey, but as a survey entitled 'Housing and Living in [City]'. Starting with 4,000 addresses in each city, the survey yielded a response rate of 55.2 per cent in Bern, 48.4 per cent in Zurich, 35.9 per cent in Hanover, and 45.2 per cent in Mainz (standard RR2 for postal surveys to specifically named persons, AAPOR, 2016). In total, 7,540 respondents participated in the survey (for further methodological details of the study, including issues of sample selectivity, see Bruderer Enzler et al., 2019).
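For readers unfamiliar with AAPOR response-rate conventions, the following minimal sketch shows how an RR2-type rate is computed. The case-disposition counts are invented for illustration (chosen so the result matches Bern's reported 55.2 per cent); they are not the study's actual dispositions.

```python
def rr2(complete, partial, refusals, non_contacts, other, unknown_eligibility):
    # AAPOR RR2: complete plus partial interviews, divided by all cases that
    # are eligible or of unknown eligibility. Simplified sketch of the formula.
    interviews = complete + partial
    return interviews / (interviews + refusals + non_contacts + other
                         + unknown_eligibility)

# Hypothetical dispositions for 4,000 mailed invitations (illustration only):
print(round(rr2(2100, 108, 900, 600, 92, 200) * 100, 1))  # -> 55.2
```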
Our empirical analyses use several variables from the mail survey. The indicators of socioeconomic status are household income and migration background. We measure the household's income situation by the net equivalent monthly household income using the new OECD scale (OECD, 2009). To make incomes comparable between Switzerland and Germany, we convert Swiss Francs into Euros and account for the countries' different purchasing power parity (PPP). 3 Migration background denotes whether the respondent, or at least one of his/her parents, was born abroad. Thus, a respondent is assigned a migration status independent of citizenship. We distinguish migration background concerning (a) European and other Western countries (North America, Australia) from (b) Africa, Asia, and South America. Three indicators capture how well a household can shield itself from outside noise: the size of the apartment/house in square metres; a dummy variable indicating whether no bedroom window faces the street versus having at least one window facing the street ('bedroom street-side' for short); and the window quality measured on a scale from 1 to 5. Control variables are the respondent's age, gender, highest educational level (tertiary versus all others), household size, subjective noise sensitivity (an index based on five items of the Weinstein (1978) scale; see Benfield et al., 2014), and a summary index of environmental awareness (with values from 1 to 5, for low to high awareness). People vary in how sensitive they are to noise, which in turn might affect their choice of residence. Age, gender, education, household size, and environmental awareness may also have an impact on the decision where to reside. Therefore, we include these variables as 'controls' in the regression equations (for details about the measurement of these variables, see Table A4). We do not, however, want to specify prediction equations by maximizing 'explained' variance. Our main goal is to test the hypotheses elaborated above, and we have tried to avoid 'overcontrolling' for covariates.
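To make the income measure concrete, here is a minimal sketch of the net equivalent income computation under the OECD-modified ('new OECD') scale; the PPP conversion factor below is a placeholder, not the value used in the study.

```python
OECD_FIRST_ADULT = 1.0   # weight for the first adult in the household
OECD_OTHER_14PLUS = 0.5  # each additional household member aged 14 or older
OECD_CHILD = 0.3         # each child under 14

def equivalent_income(net_income, n_members_14plus, n_children_under_14):
    # Net equivalent income: household income divided by the sum of weights.
    weight = (OECD_FIRST_ADULT
              + OECD_OTHER_14PLUS * max(n_members_14plus - 1, 0)
              + OECD_CHILD * n_children_under_14)
    return net_income / weight

CHF_PER_EUR_PPP = 1.6  # hypothetical PPP rate for converting CHF to EUR

# Two adults and one small child with 4,500 EUR net income: 4500 / 1.8 = 2500.0
print(equivalent_income(4500, n_members_14plus=2, n_children_under_14=1))
# A Swiss household's income would first be converted: income_chf / CHF_PER_EUR_PPP
```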
In addition, we estimated reduced form equations with (i) income, age, and gender and (ii) migration background, age, and gender to assess the total effects of income and migration background on road traffic and aircraft noise exposure (see Supplementary Tables C1 to C4).
Using the respondents' postal addresses, we were able to determine the spatial coordinates of their places of residence. For the two Swiss cities, spatial coordinates were taken from the Federal Register of Buildings and Dwellings (Swiss Federal Statistical Office, 2017). For Hanover, the software QGIS with the plug-in MMQGIS was used to geocode the addresses based on OpenStreetMap data. For Mainz, the geocoding was carried out using a web-based service that extracts coordinates from Google Maps (www.gpsvisualizer.com/geocoder).
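The geocoding step can be sketched in a few lines. This example uses the geopy client against OpenStreetMap's Nominatim service rather than the QGIS/MMQGIS and Google Maps tools the study actually used, and the address is a made-up example.

```python
from geopy.geocoders import Nominatim

geocoder = Nominatim(user_agent="housing-survey-geocoding")

def geocode(address):
    # Returns (latitude, longitude) for a postal address, or None if no match.
    location = geocoder.geocode(address)
    return (location.latitude, location.longitude) if location else None

print(geocode("Kramgasse 49, 3011 Bern, Switzerland"))  # example address only
```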
Based on the spatial coordinates, very fine-grained 'objective' administrative data on local road traffic and aircraft noise were merged with the survey data (for more information about these administrative data, see Supplementary Section A). Fine-grained means that these data refer directly to the building where the respondents live. For both road traffic and aircraft noise, Lden (the day-evening-night level) is used. Noise in dB(A) is thus assessed as the A-weighted long-term average sound level, applying the usual penalties for evening and night-time noise of 5 and 10 dB, respectively (Brink et al., 2018).
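The Lden indicator can be written out explicitly. Below is a minimal sketch of the standard EU formula (Directive 2002/49/EC), with the default 12-, 4-, and 8-hour day, evening, and night periods and the 5 and 10 dB penalties mentioned above; the example levels are invented.

```python
import math

def l_den(l_day, l_evening, l_night):
    # Energetic 24-hour average with a 5 dB evening and a 10 dB night penalty:
    # Lden = 10*log10((12*10^(Ld/10) + 4*10^((Le+5)/10) + 8*10^((Ln+10)/10))/24)
    return 10 * math.log10(
        (12 * 10 ** (l_day / 10)
         + 4 * 10 ** ((l_evening + 5) / 10)
         + 8 * 10 ** ((l_night + 10) / 10)) / 24
    )

print(round(l_den(55, 52, 48), 1))  # hypothetical period levels -> about 56.6 dB(A)
```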
Whereas the road traffic noise data were provided for all four cities, aircraft noise data were only available for Zurich and Mainz. In the regions of Bern and Hanover, there are only small local airports, while Zurich and Mainz are located near international airports. Zurich is affected by Zurich Airport, which is about 10 kilometres north of the city. Mainz is affected by Frankfurt Airport, which is about 25 kilometres east of the city. Zurich Airport has about 750 aircraft movements (take-offs and landings) each day, Frankfurt Airport about 1,300. Not all these movements directly affect the cities, and with respect to noise abatement, detailed administrative regulations exist on flight routes, night flights, and take-off and landing procedures, regulations that are more or less continuously in flux. Given this, the development of the airports and, in particular, aircraft noise have been controversial public and political issues in both cities for many years (for Zurich, see e.g. Wirth, 2004; and Bröer and Duyvendak, 2009; for Mainz, see e.g. Schreckenberg et al., 2010; and Wiebusch, 2014).
As we employ geo-referenced individual data, our analytical strategy takes into account that road traffic and aircraft noise pollution levels may be influenced by spatially clustered variables. Concerning the independent variables of the SEM equations, only income is significantly spatially autocorrelated, but there are significant autocorrelations pertaining to the dependent variables of road traffic and aircraft noise, as confirmed by Moran's I tests (see Supplementary Section B). Therefore, we apply SEM with robust standard errors (for a recent summary, see Rüttenauer, 2019b). SEM assumes that the spatial autocorrelation between the units is caused by unobserved factors such as building density, building heights, or neighbourhood topography, and explicitly models the spatial dependence among the error terms. Note that we refrain from autoregressive models because we want to predict the observed values of street and aircraft noise and are not interested in spatial spill-over effects among these variables. For the SEM, we created spatial weight matrices based on inverse distances, with a cut-off distance of 200 m.
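As an illustration of this estimation strategy, the following sketch builds inverse-distance weights with a 200 m cut-off, runs a Moran's I test, and fits a spatial error model using the PySAL stack (libpysal, esda, spreg). The data are simulated placeholders, and unlike the models reported below, this sketch does not compute robust standard errors.

```python
import numpy as np
from libpysal.weights import DistanceBand
from esda.moran import Moran
from spreg import ML_Error

rng = np.random.default_rng(42)
coords = rng.uniform(0, 1000, size=(300, 2))      # residence coordinates in metres
X = rng.normal(size=(300, 3))                     # income, migration dummy, control
y = (55 + X @ np.array([-0.6, 1.5, 0.8])          # noise level in dB(A)
     + rng.normal(scale=5, size=300)).reshape(-1, 1)

# Inverse-distance weights with a 200 m cut-off, as described in the text.
w = DistanceBand(coords, threshold=200, binary=False, alpha=-1.0,
                 silence_warnings=True)
w.transform = "r"  # row-standardization

print("Moran's I of noise:", round(Moran(y.flatten(), w).I, 3))

sem = ML_Error(y, X, w=w)   # maximum-likelihood spatial error model
print(sem.betas.flatten())  # intercept, covariate effects, spatial lambda
```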
Taken together, our study uses detailed individual-level survey data pertaining to sociodemographic and other characteristics of the respondents and their households, as well as to 'objective' measures of road traffic and aircraft noise focused directly on the buildings the respondents live in. Compared to previous research, this design avoids problems usually connected with aggregate data of geographical areas, and it circumvents biases accompanying subjective measures of residential environmental conditions. Nevertheless, our data are cross-sectional, and we are well aware that longitudinal data would be preferable.

Empirical Results

Table 1 shows descriptive statistics of all variables in the analyses. The socioeconomic composition of the two Swiss and the two German samples is similar concerning age and gender (average age 43-45 years, proportion of females 53-56 per cent), but different with respect to education, income, and the proportion of citizens with a migration background. There are more respondents with a tertiary education in Bern and Zurich than in Hanover and Mainz. The average income (net monthly equivalent household income per capita in Euro, PPP adjusted) in the Swiss cities is about one-third to one-half larger than in the German cities. The proportion of respondents with a migration background is also higher in the Swiss than in the German cities. In all four cities, the migration background is more frequently related to Africa, Asia, or South America (between 15 and 25 per cent of respondents per city).
Noisy Cities
The World Health Organization (WHO) (2018) strongly recommends not surpassing a road traffic noise level Lden of 53 dB. In all cities, the average values are close to the WHO threshold, ranging from 52 dB (Bern) to 55 dB (Hanover). Furthermore, we find remarkably high proportions of residents suffering from potentially detrimental levels of road traffic noise in all cities (Table 1): the share of respondents enduring road traffic noise above 60 dB is 11.6 per cent in Bern, 19.5 per cent in Zurich, 24.4 per cent in Hanover, and 22.0 per cent in Mainz; the WHO limit of 53 dB is surpassed by 36.9 per cent in Bern, 46.8 per cent in Zurich, 54.3 per cent in Hanover, and 38.5 per cent in Mainz.

Turning to aircraft noise, the WHO strongly recommends an average Lden of no more than 45 dB. As mentioned above, we only have data for Zurich and Mainz. The average level of aircraft noise is higher in Mainz (48 dB) than in Zurich (45 dB). In Mainz, a much higher share of the respondents (30.1 per cent) suffers from very high aircraft noise levels of more than 50 dB compared to Zurich (11.2 per cent). Regarding the WHO recommendation of 45 dB, 51.5 per cent of the respondents in Zurich and 72.5 per cent in Mainz suffer from aircraft noise above this limit.
Residential Environmental Noise by Income and Migration Background
A visual inspection of the city maps in Figure 1 gives an impression of districts confronted with high road traffic noise, such as inner-city districts in Zurich or parts of the south of Hanover. However, there is no consistent evidence of a negative relation between income and road traffic noise at the aggregate level of city districts. Notably, in all four cities there are districts that are both noisy and affluent. At the district level, the correlations between average income and average road traffic noise level point in different directions (Bern 0.55, Zurich 0.15, Hanover −0.21, and Mainz −0.22) and none of them is significant. As we will see, working with individual-level data yields different results. Clearly, individual data are more informative.

For each city, we estimated two SEMs to analyze the relationship between road traffic noise and the two indicators of social stratification, household income and migration background (see Table A1; for the specification of the estimated equations, see Supplementary Section E). Model 1 includes only these two central independent variables; Model 2 adds control variables: respondent's age, gender, and education, as well as her/his noise sensitivity and environmental awareness. We included the latter two variables as indicators of individual preferences that might influence the location of the residence chosen by the respondents. Figure 2 displays the estimates of Model 2 for each of the cities. Including the control variables, we find a significant negative relation between income and road traffic noise at the respondent's place of residence in the cities of Bern, Zurich, and Mainz. In Hanover, however, the coefficient is close to zero and not significant.
What about the substantive relevance of the coefficients in Bern, Zurich, and Mainz? 4 The coefficient in Mainz, for example, is −0.615 (Model M1 in Table A1); this means that a hypothetical increase in the monthly income by 1,000 € is associated with a reduction of the noise level by about 0.6 dB. Although the magnitude of this effect is not negligible, it is relatively small. A difference of up to 0.7 dB is usually inaudible. It may be helpful to compare the size of the coefficient to the difference in noise levels between cities and the countryside. In the above-mentioned study with Swiss data, Diekmann and Meyer (2010) estimated urban-rural differences of 5.37 dB for road traffic noise during the daytime and 4.21 dB during the night-time; the difference is about seven to nine times larger than the income coefficient in the Mainz sample.
Do persons with a migration background face more road traffic noise near their residence? The sign of the coefficient for non-Western migration background is positive and significant in all cities except Hanover. In Bern, Zurich, and Mainz, inhabitants with a non-Western migration background live with noise levels that are elevated by about 1.1-1.8 dB compared to natives. Note that this is an additional burden that adds to that of low income. In Zurich, Western migrants also experience an increased noise level of similar size to that experienced by residents with a non-Western migration background. 5

Estimates of income and migration background change only slightly when age, gender, education, noise sensitivity, and environmental awareness are taken into account (Models B2, Z2, H2, and M2 in Table A1). Age is negatively associated with road traffic noise at the place of residence, and the estimate is similar across all samples. All other things being equal, a resident 10 years older is, on average, exposed to 0.5 dB less road traffic noise than a younger resident. This might reflect a life cycle effect, because particularly young respondents (such as students or young employees) live in areas with affordable low rents in noisier neighbourhoods. The noise level of households of female respondents is 0.56 dB lower in Bern and 0.81 dB lower in Mainz than the noise level of households of male respondents. We do not find additional contributions by education. Whereas exposure to road traffic noise tends to show a negative relationship with noise sensitivity, there is no consistent relationship with environmental awareness.
One may argue that the migration coefficients are biased downwards because we have controlled for income. Removing income from the regression equation does not change the overall picture much, although there is a small indirect effect via income (see the reduced form estimates in Supplementary Table C2). We also observe only slightly increased income effects when we estimate the reduced form equation without the migration dummies (Table C1). Again, the income coefficients are small but significant in all cities except Hanover.
As we have argued earlier, aircraft noise constitutes a relevant environmental threat in Mainz and Zurich. However, in Mainz, neither household income nor a non-Western migration background is significantly related to exposure to aircraft noise (Figure 3 and Table A2). 6 In Zurich, the situation is different. According to Model 1 of Table A2, the coefficients for income and migration background are significant, albeit small: a 1,000 € income increase is associated with a 0.337 dB decrease in the aircraft noise level, and having a non-Western migration background raises the noise level by 0.753 dB. Coefficients are slightly reduced when controlling for age, gender, education, noise sensitivity, and environmental awareness. Excluding income and estimating the reduced form equation (see Supplementary Table C4) leads to a slightly higher estimate for migration background in Zurich. The aircraft noise level for non-Western migrants in Zurich is on average about 1 dB above the level of native citizens, due to the indirect effect of migration background via the migration-income correlation.
Shielding against Residential Environmental Noise
In contrast to the rather weak associations between residential environmental noise and income, there is a clear dependency of apartment size on household income in all four cities, even when taking into account several control variables (Figure 4 and Table A3). Income effects are smaller in the Swiss than in the German cities. An additional income of 1,000 € is associated with an increase in apartment size of 9.3 m² in Bern and 7.1 m² in Zurich, and of 10.9 m² in Hanover and 11.2 m² in Mainz, although part of the difference is due to the higher rent levels in Switzerland. Apartment size is, however, a somewhat indirect indicator of a household's ability to protect itself from external noise. The position of bedrooms in the house and the window quality are more direct indicators. The results for having bedroom windows not facing the street exhibit significant coefficients for Zurich, Hanover, and Mainz (when including control variables), indicating that households with higher incomes are more likely to have bedrooms facing away from the street in these cities (Figure 5 and Table A3). The results for window quality are again in line with our expectations. They are most directly related to noise protection, be it from road traffic or aircraft noise: with increasing income, households have, on average, significantly better window quality in all four cities (Figure 6 and Table A3). 7

Turning to the associations with migration background, we find a disadvantage in apartment size for respondents with a non-Western migration background in all cities, even though household income (alongside other control variables) is included in the model (Figure 4 and Table A3). Thus, over and above financial reasons, this population group lives in smaller apartments, which we assume to provide less noise protection than bigger apartments. However, looking at the more specific indicators of noise protection (bedroom(s) facing away from the street and, in particular, window quality), not all associations are sizable and significant (Figures 5 and 6 and Table A3): for non-Western migrant households, there is a higher probability of having bedroom windows facing the street only in Zurich, while window quality is significantly lower in Bern, Zurich, and Hanover compared to households without a migration background. Thus, there is some evidence that respondents with a non-Western migration background are less able to shield themselves from noise intrusion into their living space, even when their income situation is taken into account. When income is excluded, the reduced form effects are, as expected, more pronounced. Non-Western households have apartments smaller by about 16, 17, 21, and 25 m² in Bern, Zurich, Hanover, and Mainz, respectively, compared to native inhabitants; the probability of having a bedroom that is not on the street side is significantly lower than for native inhabitants in Bern and Zurich; and the window quality is significantly reduced in all four cities (Supplementary Table C6).
To summarize, the relationships (in particular with household income) reveal a clear pattern: high-income households can afford to live in more spacious homes that are more likely to provide options for locating living rooms and bedrooms in the quiet part of the apartment. They also enjoy better window quality than households with a lower level of resources. Thus, our data yield evidence supporting the environmental shielding hypothesis.
Discussion
We have explored the strength of the association between income, migration background, and other sociodemographic characteristics and the environmental burden of road traffic noise in four urban areas in Switzerland and Germany: Bern, Zurich, Hanover, and Mainz. In addition, we have included aircraft noise in Zurich and Mainz. Four random samples of inhabitants were drawn, resulting in 7,540 completed questionnaires. Households were linked to geo-referenced data on road traffic and aircraft noise. The burden of road traffic noise is high in all four cities: between 12 and 24 per cent of the citizens in Bern, Zurich, Hanover, and Mainz have to endure road traffic noise levels of more than 60 dB, and 37-54 per cent endure noise levels above the WHO limit of 53 dB. European Union regulations (Directive 2002/49/EC) stipulate that member states should compile noise maps and develop action plans to mitigate noise emissions. However, European communities are far from meeting targets and implementing action plans for noise reduction (Cancik, 2013; European Commission, 2016).

Our first research question focused on the strength of the social gradient of the environmental burden, while controlling for noise sensitivity and environmental awareness as indicators of preferences. As expected, the sign of the income-noise relation was negative in all four cities. The SEM coefficients were small and statistically significant in Bern, Zurich, and Mainz but failed to reach significance in the Hanover sample. Respondents with a non-Western migration background were exposed to more road traffic noise in three of the four cities, and the corresponding effects were statistically significant in all except the Hanover sample. Moreover, income and non-Western migration background were also significantly associated with aircraft noise in Zurich, but not in Mainz. Thus, there is evidence for the social gradient hypothesis, but this relationship is not generally valid for all urban areas under study and it is not strong, especially for income. However, there might be cumulative disadvantages for parts of the population, such as young people with low incomes and a non-Western migration background, for whom our regression models predict a greater exposure to high, unhealthy noise levels.
Overall, we infer that the impact of the socioeconomic characteristics of income and migration status on the noise level measured outside buildings varies across the cities under investigation. The social gradient is non-existent in Hanover, and the effect is weak to moderate in the other three cities. This is in line with the results of other studies, which have produced only weak evidence or even contradictory findings on the social gradient hypothesis (e.g. Diekmann and Meyer, 2010; Padilla et al., 2014). We believe that local specifics (such as historically grown urban structures; see Elliott and Frickel, 2015) affect how far noise patterns are linked to social disparities (see also Rüttenauer, 2018, 2019a). Interestingly, Rüttenauer's (2019a) study on the emissions of industrial sites even found a negative association between the share of foreigners and air pollution in Hanover, while the relation was reversed in the city of Mainz. This result is well in accordance with our findings, but the complexity of factors driving the choice of residence also needs to be taken into account. Not only environmental aspects but also rent levels and the attractiveness of inner cities in terms of cultural life, infrastructure (such as public transportation), apartment type and size, and many other characteristics might be considered when choosing where to settle. The relative importance of these factors might also vary over the life course.
In some cities, urban areas and inner cities are attractive, but also noisy. With this in mind, we formulated the environmental shielding hypothesis. When low-income households and households with a higher level of resources alike live in noisy neighbourhoods, we assume that the latter have better capabilities to protect themselves against noise. The geo-referenced emission data yield valuable information on the noise level outside buildings, but they do not inform us about the noise level inside apartments or about the variation of the noise level within larger apartment buildings. We found evidence for the environmental shielding hypothesis, given the consistent correlation between household income, apartment size, and window quality in all urban areas under study. Thus, households with a higher level of economic resources have a much better chance of reducing noise levels, not just the noise of a busy street in front of their building but also aircraft noise, if they have good window quality.
In sum, our contribution casts doubt on the hypothesis that there is always a social gradient linking economic resources and social position (indicated by household income and migration background) to exposure to road traffic and aircraft noise in large cities. At the same time, our results warn against the idea of a 'democratic' exposure to environmental bads: economic resources clearly matter when it comes to who is able to shield himself/herself against local noise. Thus, Beck's (1986: p. 48) well-known statement that 'poverty is hierarchical, smog is democratic' needs to be qualified. Individual and household resources shape actual living conditions, which are also related to environmental bads.
Although our study contributes to a better understanding of social disparities in terms of exposure to road traffic and aircraft noise in urban areas, it still has a number of limitations that call for further research. First, we consider only four cities in two higher-than-average income countries in Europe, and we focus only on the indicators of road traffic and aircraft noise. Second, our investigation is mainly descriptive; our cross-sectional data do not make it possible to disentangle the specific mechanisms leading to, or compensating for, social disparities in noise exposure. For such an endeavour, we would need longitudinal data that traced moving histories of households and included information on decision processes, combined with fine-grained geographical data. To our knowledge, such data are currently not available for Switzerland and Germany. Third, it would be worthwhile acquiring a deeper understanding of how specific historically grown city types, as well as different strategies in urban transport and road construction policies, relate to social disparities in noise exposure. Fourth and finally, we were only able to scratch the surface of how households with different resources are able to shield themselves from external noise in their homes. It would be an interesting research avenue to collect detailed data on the living conditions and coping strategies of households in noisy urban districts. Further research should dig deeper into the relevance of monetary as well as non-monetary resources and capacities to shield one's household from environmental bads.
Supplementary Data
Supplementary data are available at ESR online.
Funding
This work was supported by the German Research Foundation (projects PR 237/7-1 and KU 1926/3-1) and the Swiss National Science Foundation (project 100017E-154251).
Notes

1 The 'social gradient' is a standard concept in health science and epidemiology. It denotes a negative correlation between health-risk indicators and socioeconomic status.

2 We are aware that homeownership might be a factor related to the environmental shielding hypothesis. High-income and non-migrant households are more often homeowners, i.e. more likely to live in family-owned houses or apartments, whereas low-income and migrant households tend to belong to the group of house and apartment renters (Andrews and Sánchez, 2011; Garcia and Figueira, 2021). This means homeownership is a mediator of socioeconomic status effects on noise exposure and shielding possibilities such as window quality. In this article, however, we are interested in total status effects.

4 As can be seen in Table A3, the regression equations yield a very low level of explained variance. As stated above, our goal is not to specify equations to maximize 'explained variance'.

5 Large standard errors for Western migration background in the Hanover and Mainz samples are due to the very small proportion of this group in our samples.

6 In Mainz, there is a tendency for respondents with a Western migration background to be exposed to a lower aircraft noise level than those without a migration background. The coefficient of 1.46 is significant in Model M2 (Table A2) but fails to reach significance in Model M1 and in Models M1 and M2 of the reduced form estimations (Supplementary Table C4). The lower noise level in comparison to natives might be due to a higher concentration of residents with a Western migration background living in the inner city. This part of the city is less exposed to aircraft noise than the south-eastern periphery of Mainz.

7 The income coefficients are more pronounced when 'migration background' is excluded from the equation (see Supplementary Table C5). However, a causal interpretation of income 'effects' should include the migration variable. Depending on the model specification, migration background is a confounding factor that partially explains the income coefficients.
Table A4 (excerpt): measurement of noise sensitivity and environmental awareness (own questionnaire).

Noise sensitivity (adapted from the Weinstein (1978) scale; see also Benfield et al., 2014): (1) I get annoyed when my neighbours are noisy. (2) I get used to most noises without much difficulty. (3) I find it hard to relax in a place that's noisy. (4) I get mad at people who make noise that keeps me from falling asleep or getting work done. (5) I am sensitive to noise. Additive mean index from 1 = not sensitive to 5 = sensitive; coding for item (2) reversed.

Environmental awareness ('Please indicate how much you agree with the following statements'): (1) The thought of the environmental conditions under which our children and grandchildren will probably have to live worries me. (2) If we continue as we are, we are heading for an environmental catastrophe. (3) The majority of the population in our country is too little environmentally conscious. (4) Environmental problems are greatly exaggerated by many environmentalists. (5) Politicians in our country do far too little for environmental protection. (6) For the sake of the environment, we should all be prepared to limit our standard of living. Additive index from 1 = do not agree at all to 5 = fully agree.
Elder abuse in the COVID-19 era based on calls to the National Center on Elder Abuse resource line
Background: The COVID-19 pandemic has exacerbated circumstances that place older adults at higher risk for abuse, neglect, and exploitation. Identifying characteristics of elder abuse during COVID-19 is critically important. This study characterized and compared elder abuse patterns across two time periods: a one-year period during the pandemic and a corresponding one-year period prior to the start of the pandemic. Methods: Contacts (including social media contacts and email; all referred to as "calls" for expediency) made to the National Center on Elder Abuse (NCEA) resource line were examined for differences in types of reported elder abuse and characteristics of alleged perpetrators prior to the pandemic (Time 1; March 16, 2018 to March 15, 2019) and during the pandemic (Time 2; March 16, 2020 to March 15, 2021). Calls were examined for whether or not abuse was reported, the types of reported elder abuse (including financial, physical, sexual, emotional, and neglect), and characteristics of callers, victims, and alleged perpetrators. Chi-square tests of independence compared frequencies of elder abuse characteristics between time periods. Results: In Time 1, 1401 calls were received, of which 795 calls (56.7%) described abuse. In Time 2, 1009 calls were received, of which 550 calls (54.5%) described abuse. The difference between time periods in the frequency of abuse to non-abuse calls was not significant (p = 0.28). Time periods also did not significantly differ with regard to caller, victim, and perpetrator characteristics. Greater rates of physical abuse (χ² = 23.52, p < 0.001) and emotional abuse (χ² = 7.12, p = 0.008) were reported during Time 2 after adjustment for multiple comparisons. An increased frequency of multiple forms of abuse was also found in Time 2 compared to Time 1 (χ² = 23.52, p < 0.001). Conclusions: Findings suggest differences in specific elder abuse subtypes and in the frequency of co-occurrence between subtypes across time periods, pointing to a potential increase in the severity of elder abuse during COVID-19. Supplementary Information: The online version contains supplementary material available at 10.1186/s12877-022-03385-w.
Background
The COVID-19 pandemic has exacerbated circumstances that place older adults at higher risk for abuse, neglect, and exploitation. Consequences of elder abuse are grave and include negative physical, psychological, and social effects for victims, families/loved ones, communities, and society [1,2]. In this study, we sought to characterize elder abuse patterns during a one-year period of the COVID-19 pandemic, and to compare these patterns to a corresponding one-year period prior to the pandemic.
For several reasons, older adults may have been at increased risk of elder abuse during the height of the COVID-19 pandemic [3][4][5][6]. Limited interpersonal contact in order to prevent or slow virus transmission may lead to social isolation [5], a known risk factor for elder abuse [4,7]. The pandemic may also increase the burden that caregivers experience and perceive in caring for older adults [5,6,8]. Moreover, older adults may be at higher risk for financial instability due to changes in money-earning opportunities [5,6], a factor linked to increased vulnerability to scams [9]. All of these factors may become particularly salient when concern about virus transmission grows and becomes widespread.
Few studies to our knowledge have examined rates and characteristics of elder abuse during the COVID-19 pandemic. While two survey studies found increased rates of elder abuse during the pandemic [10,11] compared to a pre-pandemic period, one study [12] found evidence for decreased rates of elder abuse and age discrimination. A methodological limitation of the first two of these studies [10,11] is that both utilized comparison datasets that were different from the pandemic dataset examined. More studies are needed to fully understand the scope of the impact of COVID-19 on elder abuse patterns.
In this study, we utilized contacts made to the National Center on Elder Abuse (NCEA) resource line to examine patterns of reported elder abuse over a one-year period after the United States federal government issued a stay-at-home order on March 16, 2020. We compared these patterns to patterns of reported elder abuse over a corresponding one-year period prior to the pandemic (March 16, 2018 to March 15, 2019). The NCEA resource line serves as a unique frontline source of data to investigate elder abuse characteristics across the United States [13]. Calls, emails, and social media messages to the NCEA during the two timeframes were descriptively examined for reports of elder abuse, the types of elder abuse described, and characteristics of the callers, the alleged perpetrators, and the victims. We expected that there would be an increase in elder abuse calls made to the NCEA resource line during the second time period, consistent with two recent studies [10,11]. We also expected that there would be a shift in the distribution of reported abuse types and perpetrator characteristics across these two time periods. Given increased time at home and decreased access to home- and community-based services during the second time period, we hypothesized that there would be a rise in reports of emotional and physical abuse in comparison to the prior time period. Due to increased economic vulnerability incurred by the pandemic, we predicted that a greater percentage of calls during the pandemic would report financial abuse. Additionally, we predicted that elder abuse during both time periods would be most commonly committed by a family member, consistent with previous work [13][14][15][16][17].
Methods
The National Center on Elder Abuse (NCEA)

The NCEA (https://ncea.acl.gov/) provides information and resources to individuals and community groups through multiple outlets, including a telephone line, website, and social media pages (all referred to as the NCEA resource line for purposes of this study). Individuals can contact the NCEA through these various outlets (for expediency, all forms of contact made to the NCEA resource line will be referred to as "calls," consistent with previously published work [13]). Calls made to the NCEA are summarized and logged into a database by NCEA staff. Responses to the calls by NCEA staff are also logged. A detailed description of the methodology for coding NCEA calls has been provided in previous work [13]. In brief, prior to coding calls, an NCEA staff member de-identified the call narratives. Two independent raters then coded the calls with regard to whether or not abuse was reported; caller, victim, and perpetrator characteristics; the types of abuse alleged; whether multiple subtypes of abuse were alleged; and who perpetrated the alleged abuse. Call narratives and NCEA staff responses were utilized to code whether or not abuse was alleged. Single calls that reported two completely unique scenarios of abuse (two different victims, or two different and unrelated perpetrators) were coded separately for each scenario of abuse and considered unique "calls" for analysis purposes. After identifying whether abuse was alleged, abuse calls were categorized into one or more of five elder abuse subtypes: financial, physical, sexual, emotional, and neglect. Calls were also coded for the number of abusers reported per call (one abuser, more than one abuser, staff of a company or facility, or unable to determine) and the relationship of the abuser to the victim (family; non-family, non-medical caretaker; non-family, medical caretaker; caretaker, relationship unknown; an individual or entity known to the victim who does not fit the other categories; a stranger such as a telephone solicitor; or unable to determine). Two study co-authors (GHW, ACL) resolved any disagreements between the two independent raters.
Procedure
Procedures for rating the calls followed the same codebook developed in previously published work [13]. The codebook was developed through a review of the scientific literature and expert knowledge on elder abuse. The two raters agreed on 85.35% of the initial codes (353 disagreements out of the 2410 total calls received) for overall alleged abuse prior to resolution of disagreements. Percent disagreement between raters on subtypes of abuse was calculated based on the number of times the raters disagreed about the subtype of abuse being reported out of the 1365 total calls reporting abuse across the two time periods. Disagreement was highest for financial and emotional abuse (11.2% and 11.8%, respectively), followed by neglect (9.01%), physical abuse (4.1%), and sexual abuse (0.4%). Non-abuse calls mostly consisted of general requests for information about the NCEA and elder abuse services.
Elder abuse and its subtypes were defined based on a Centers for Disease Control and Prevention (CDC) report [18]. Per the CDC report, elder abuse is defined as "an intentional act or failure to act by a caregiver or another person in a relationship involving an expectation of trust that causes or creates a risk of harm to an older adult." An older adult is defined as an individual 60 years of age or older. Consistent with previous work [13], we chose to include "strangers" when classifying abuser-victim relationships, a modification consistent with the U.S. Department of Justice's Elder Justice Roadmap definition [19]. The general definition of elder abuse, the definitions of the specific subtypes applied in this study, and descriptions of the relationships coded are described in detail in previous work [13] and in Supplemental Table 1.
Additional criteria for coding abuse
Per CDC guidelines, alleged abuse between residents of long-term care facilities was not considered abuse. Calls reporting suboptimal living situations due to low income were not considered to be abuse for the purposes of this study.
Calls that reported abuse of an individual who is now deceased were only considered to be abuse if the death was presumed to be a result of the alleged abuse. This was done to ensure that abuse occurred within the two time periods of interest. Calls alleging abuse of victims residing outside of the United States or its territories and calls that alleged an abusive event that occurred prior to the windows of time under consideration were excluded from descriptive analyses. In the case of vague call narratives, abuse was only considered if the NCEA response narrative provided Adult Protective Services (APS) or police numbers to the caller or referenced a specific case of abuse.
Analyses of calls
Total calls identifying alleged abuse for each time period were tallied and characteristics of the calls were summarized separately for each of the two time periods. Descriptive analyses procedures were as follows. If a call described two or more unique and unrelated instances of abuse, these instances were counted separately into the total. Percent of each abuse subtype was calculated by dividing the number of calls alleging a specific abuse subtype by the total number of calls reporting abuse. This same procedure was done to determine other characteristics of the calls, including caller, perpetrator, and victim characteristics, abuser-victim relationships and number of abusers. Calls that identified more than one subtype of abuse or relationship were included within each relevant descriptive analysis, such that some calls were represented more than once. In cases in which calls reported more than one subtype of abuse or other characteristic of interest (e.g., perpetrator relationship), the denominator remained the total number of calls reporting abuse, or the total number of calls reporting a subtype of abuse, in the case of subtype analyses (e.g., examining perpetrator relationships separately by subtype).
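A minimal sketch of this tabulation logic in pandas, with invented 0/1 subtype flags rather than the actual NCEA call records; note that because calls can carry multiple flags, subtype percentages need not sum to 100.

```python
import pandas as pd

# Toy data: one row per abuse call, one 0/1 flag per alleged subtype.
calls = pd.DataFrame({
    "financial": [1, 0, 1, 1, 0],
    "physical":  [0, 1, 1, 0, 0],
    "sexual":    [0, 0, 0, 0, 0],
    "emotional": [1, 1, 0, 0, 1],
    "neglect":   [0, 0, 0, 1, 0],
})

n_abuse_calls = len(calls)                 # denominator: all calls alleging abuse
pct = calls.sum() / n_abuse_calls * 100    # per-subtype percentages
print(pct.round(1))
```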
To investigate whether there were statistical differences in call characteristics between the two time periods, a series of chi-square tests of independence were conducted. Adjustments for multiple comparisons were made using Bonferroni corrections.

Results

Comparing the frequency of the five abuse subtypes (financial abuse, physical abuse, sexual abuse, emotional abuse, and neglect) between Time 1 and Time 2 revealed a greater frequency of physical abuse calls (χ² = 23.52, p < 0.001) and emotional abuse calls (χ² = 7.12, p = 0.008) in Time 2 compared to Time 1. There were no significant differences in rates of alleged financial abuse (p = 0.09), sexual abuse (p = 0.32), and neglect (p = 0.71) between the two time periods.
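The Bonferroni-corrected chi-square procedure described in the 'Analyses of calls' subsection can be sketched as follows. The contingency counts are invented, not the study's actual frequencies, and whether a continuity correction was applied is not stated in the paper.

```python
from scipy.stats import chi2_contingency

# One 2x2 table per subtype: rows are time periods, columns are
# "subtype alleged" vs "not alleged". Counts below are hypothetical.
tables = {
    "physical":  [[110, 685], [135, 415]],   # [[T1 yes, T1 no], [T2 yes, T2 no]]
    "emotional": [[290, 505], [240, 310]],
}

alpha = 0.05 / 5  # Bonferroni correction across the five subtype tests
for subtype, table in tables.items():
    chi2, p, dof, _ = chi2_contingency(table, correction=False)
    verdict = "significant" if p < alpha else "not significant"
    print(f"{subtype}: chi2 = {chi2:.2f}, p = {p:.4f} ({verdict})")
```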
Subtypes of alleged abuse
Frequencies of abuse subtypes between the two time periods can be viewed in Fig. 1.
Caller, victim, and perpetrator characteristics
Calls were assessed for specific characteristics of the caller, the victim, and the perpetrator (Table 2). With regard to caller characteristics, in both time periods most callers were not the victims themselves, and this did not significantly differ between time periods (p = 0.15). Victims were most commonly reported as female in both time periods. With regard to the sex of the alleged perpetrator, the vast majority of calls did not specify sex.
Of the calls that specified sex, there were slightly more calls that reported female perpetrators for both time periods. There were no significant differences in sex breakdown between the two time periods (both ps ≥ 0.21). The number of perpetrators discussed in each call was also assessed. Both time periods indicated one abuser for the majority of calls, followed by abuse by a company or facility, and more than one abuser. There were no significant differences in breakdown of number of abusers reported between the two time periods (p = 0.77).
Calls were also assessed for the relationship reported between the perpetrator and the victim (see Table 2 and Fig. 2). For both time periods, family members were the most commonly alleged perpetrators, followed by relatively equal rates of calls reporting an individual known to the victim (non-family, non-caretaker) and a non-family medical caretaker. Strangers were the next most common alleged perpetrators for both time periods, followed by non-family, non-medical caretakers and unspecified caregivers. Differences did not arise with regard to the frequency of relationships reported across the two time periods (p = 0.36).
To further describe differences in the patterns of calls between the two time periods, we examined alleged victim-abuser relationships separately for the four most common abuse types: financial abuse, physical abuse, emotional abuse, and neglect (Supplemental Table 2 and Supplemental Fig. 1a-d). For both time periods, family members were the most commonly alleged perpetrators of financial, physical, and emotional abuse. For neglect, the most commonly alleged perpetrators in both time periods were medical caretakers. A pattern arose for physical abuse calls such that the percent of calls alleging a family member was over 15% lower in Time 2 than in Time 1. Co-occurrences of abuse subtypes are summarized in Table 3. For both time periods, financial abuse and physical abuse most commonly co-occurred with emotional abuse, and neglect most commonly co-occurred with financial abuse and emotional abuse.
Discussion
In this study, we examined calls made to the NCEA over a one-year period during the COVID-19 pandemic and compared them to calls made during a one-year period prior to the pandemic. Consistent with previous work by our group [13] and others [9, 20], during both time periods, financial abuse was the most commonly reported abuse subtype, followed by emotional abuse. Additionally, family members were the most commonly alleged perpetrators of abuse across both time periods [13-15, 17]. Other characteristics also did not differ between time periods, including caller, perpetrator, and victim characteristics and the number of perpetrators reported.
Differences between time periods arose when investigating frequencies of subtypes of abuse. Consistent with our prediction, a greater frequency of physical abuse and emotional abuse calls was reported in Time 2 compared to Time 1. This is consistent with a study that reported alarming increases in rates of domestic violence during the COVID-19 pandemic [21]. The authors [21] discuss that for individuals already in a vulnerable home situation, pandemic circumstances may exacerbate vulnerabilities. Older adults are more likely to be dependent on others for completion of daily activities due to physical and cognitive limitations that increase with age. All adults, including older adults, must be more reliant on technological forms of communication given physical distancing recommendations, and this greater dependency is increasingly being exploited by bad actors [22]. Greater dependencies on others and on technology can increase vulnerabilities of older adults during the pandemic, especially given increased pressures on caregivers and reduced access to outside supportive resources [4, 6]. The finding of increased physical and emotional abuse directly contrasts with a study reporting a decrease in physical and psychological abuse during the pandemic compared to a pre-pandemic period in a representative community sample of older women in Hong Kong [12]. Our findings also diverge slightly from those of Chang et al. [10], who found increases in rates of physical abuse and financial abuse reported during the pandemic period, but not in verbal abuse (a type of emotional abuse). In their study, the authors compared results of an elder abuse survey administered during a two-week period during the COVID-19 pandemic to two nationally representative surveys conducted prior to the pandemic. Elder abuse subtypes were assessed using single-item questions and were based on self-report. Thus, differences between that study and our findings may be due to differences in how elder abuse is measured and/or the specific data sources utilized (i.e., survey questions versus an elder abuse resource line). Differences between studies may also reflect the complexity of measuring elder abuse during the COVID-19 pandemic. For example, Yan et al. [12] discuss that the reduction in elder abuse found in their study may reflect a true reduction in elder abuse as a result of changing living situations, or may reflect a change in the willingness to report elder abuse during a period when more victims are trapped at home with perpetrators of violence. Thus, different data sources (i.e., survey, resource line) may yield vastly different results.
[Table 3 caption: Number of calls that reported more than one abuse subtype for Time 1 and Time 2. The total number of calls reporting each subtype was used as each row's denominator to calculate percentages. Note that row sums of Supplemental Table 3 (frequencies of co-occurrences between each subtype) differ from the corresponding total number of calls listed in Table 3.]
Contrary to our hypothesis and two previous studies [10, 11] that found an increase in elder abuse rates during the COVID-19 pandemic, we did not find an increase in elder abuse calls during the pandemic period (Time 2). Importantly, the two previous studies utilized data sources that diverged from their pre-pandemic comparison datasets, which may have contributed to differences in elder abuse rates for reasons other than pre-post pandemic changes. Although the difference in the ratio of abuse to non-abuse calls between pre-pandemic and pandemic time periods was not significant in our study, there was an overall decrease in contacts made to the NCEA during the pandemic period. It is possible that aspects of the pandemic affected individuals' initiative to call the NCEA resource line to receive elder abuse related services and support. Consistent with this notion, a recent Adult Protective Services (APS) report found that many APS programs received fewer reports at the beginning of the pandemic [23]. One possible explanation for fewer reports during the pandemic is that COVID-19 preventative measures such as social distancing and isolation reduce social contact, which may subsequently decrease the opportunities for abuse to be detected and reported [24]. This may be particularly relevant for older adults who are isolating with perpetrators of abuse, such as family members, as perpetrators may control what is seen or heard by others [24].
During both time periods, physical and emotional abuse were most likely to co-occur with other abuse subtypes. This finding is consistent with previous work [13,25,26]. We additionally found a greater proportion of calls alleging more than one abuse subtype in Time 2 compared to Time 1. This may suggest increased severity of abuse during the pandemic period, a possibility suggested by Makaroun et al. [5]. During the pandemic, many older adults may be sharing living arrangements with family members who may be home more often and more available due to changes in work schedules and shifts in social activities. Moreover, older adults may be spending significantly more time with family members or caretakers due to a lack of other supportive resources. Such drastic lifestyle changes may consequently increase mood disorders and substance use both in caregivers and older adults [5]. Furthermore, increased tensions brought on by reduced economic stability, shared living spaces, and fears/anxieties related to COVID-19 transmission may also be risk factors for increased frequency and severity of abuse [10], thereby increasing the likelihood that perpetrators commit additional forms of abuse (i.e., emotional abuse progressing to physical abuse).
This study has several limitations. Findings in this study are based on calls or messages made by individuals who contacted the NCEA resource line to receive information or seek advice about elder abuse. This self-selection bias may skew findings, and precludes determinations of elder abuse incidence and prevalence during the COVID-19 era. Relatedly, the COVID-19 pandemic may impact older adults' contact with outside supportive systems that may assist in the detection of elder abuse. As such, any assessment of the degree of elder abuse during the COVID-19 pandemic may underestimate the issue. Finally, because investigation is not part of the NCEA resource line protocol, we were unable to substantiate the veracity of abuse claims made by callers, though there is no reason to believe calls were made disingenuously.
Nevertheless, findings of this study have important research and clinical implications. Future studies examining changes in elder abuse characteristics longitudinally and in concert with shifting social distancing patterns and virus transmission rates may further shed light on the complexities of the issue. Additionally, enhanced awareness (e.g., within healthcare organizations and amongst healthcare providers) of elder abuse risk factors such as social isolation, mental illness, and substance use that may change alongside evolving virus transmission rates and social distancing measures is critical [5,6]. This will ultimately help identify those older adults most at risk and put in place protective measures so that abusive situations can be avoided.
Conclusions
This is one of the only studies to compare elder abuse characteristics during the COVID-19 pandemic to a pre-pandemic period, and the only study to our knowledge to do so using the same data source for comparison. Findings suggest differences in specific elder abuse subtypes and frequency of co-occurrence between subtypes between time periods. Future studies are needed to investigate elder abuse characteristics in larger and more representative samples of older adults, and across different time periods of the pandemic, to further clarify the impact of the COVID-19 pandemic on patterns of elder abuse.
Abbreviations NCEA: National Center on Elder Abuse; CDC: Centers for Disease Control and Prevention; APS: Adult Protective Services.
Supplementary Information
The online version contains supplementary material available at https://doi.org/10.1186/s12877-022-03385-w. Supplemental Table 1. (a) Definitions used to code calls made to the NCEA helpline. Definitions are based on CDC guidelines with some modifications. (b) Descriptions of the types of relationships coded for. Adapted from Weissberger et al. [13]. Supplemental Table 2. Breakdown of perpetrator relationships to victim separately by the four most commonly reported abuse subtypes. Some calls reported more than one relationship, thus the percentages may exceed 100% for each time period. Supplemental Table 3. Number of calls that report co-occurring subtypes for Time 1 (Panel A) and Time 2 (Panel B). Supplemental Figure 1a-d. Visual display of breakdown of perpetrator relationships to victim separately by the four most commonly reported abuse subtypes: (a) financial abuse, (b) emotional abuse, (c) neglect, and (d) physical abuse. Some calls reported more than one relationship, thus the percentages may exceed 100% for each time period.
Mycobacterium bovis naturally infected calves present a higher bacterial load and proinflammatory response than adult cattle
Granulomas are characteristic bovine tuberculosis lesions; studying this structure has improved our understanding of tuberculosis pathogenesis. However, the immune response that develops in granulomas of young cattle naturally infected with Mycobacterium bovis (M. bovis) has not been fully studied. Our previous work described an atypical pattern in granulomatous lesions of cattle younger than 4 months (calves) naturally infected with M. bovis that did not correspond to the histological classification previously proposed. Histologically, granulomas from calves lack a connective tissue capsule and have fewer multinucleated giant cells (MGCs) and more acid-fast bacilli (AFB) than the classic tuberculosis lesions found in cattle older than 1 year (adults); this suggests a deficient immune response against M. bovis infection in young animals. Therefore, we used IHC and digital pathology analysis to characterize the in situ immune response of granulomas from young and adult cattle. The immunolabeling quantification showed that granulomas from calves had more mycobacteria, CD3+ cells, IFN-γ, TNF-α, and inducible nitric oxide synthase (iNOS) than those of adult cattle. Furthermore, calf granulomas showed lower immunolabeling of MAC387+, CD79+, and WC1+ cells, lacked connective tissue surrounding the lesion, and were associated with less vimentin, alpha smooth muscle actin (α-SMA), and TGF-β compared with granulomas from adult cattle. Our results suggest that the immune responses in granulomas of cattle naturally infected with M. bovis may be age dependent. This implies that an exacerbated proinflammatory response may be associated with active tuberculosis, producing more necrosis and a lower microbicidal capacity in the granulomas of calves naturally infected with M. bovis.
Introduction
Bovine tuberculosis caused by Mycobacterium bovis affects different mammals, including humans. In the livestock industry, M. bovis causes losses of approximately 3 billion dollars per year (1, 2). This disease mainly affects cattle's lymph nodes and lungs, where granulomas are formed. These structures isolate and control mycobacteria and restrict tissue damage by preventing chronic inflammation of the surrounding tissue (3). Granuloma formation depends on the recruitment and activation of the cellular immune response and the persistence of the mycobacterial antigenic stimulus. These factors will determine the histology and development of the lesion (4). The structure of the granuloma has been associated with the progression or control of the disease; granulomas with little necrosis and well delimited by cellular and connective tissue capsules are associated with a lower bacillus number than inadequately formed granulomas with extensive necrosis and limited encapsulation. The latter type of lesion has been reported in humans and experimental monkeys with active tuberculosis as well as in individuals co-infected by Mycobacterium tuberculosis/HIV (5-7).
We have previously characterized granulomas of cattle naturally infected with M. bovis and older than 1 year, observing lesions comparable to those reported by Wangoo et al. (8). In contrast, bovines younger than 4 months presented "atypical" granulomas with many bacilli, necrosis, and absence of a connective tissue capsule, suggesting that this group of calves developed a response that was unable to form a granuloma to control the infection (9). Nonetheless, the immune response present in the granulomas of these animals is unknown. To better understand the immune response of granulomas induced by M. bovis at the cellular and molecular levels, we used immunohistochemistry (IHC) and digital pathology analysis to characterize granulomas from cattle older than 1 year and calves younger than 4 months. Our results suggest that calf granulomas present more mycobacteria, a greater proinflammatory response, and a lack of connective tissue capsule compared to the granulomas from adults.
Sample collection
Mediastinal lymph node samples were collected from 25 naturally infected cattle; 15 were adult Holstein-Friesian dairy cows between 1 and 5 years of age, and 10 were calves between 1 week and 4 months of age. These tissues were collected, with owner consent, from cattle that exhibited lesions suggestive of tuberculosis on post-mortem examination. All cattle died from conditions other than tuberculosis; the main circumstances of death (euthanasia, emergency slaughter, or unassisted death) were metabolic/digestive disorders, pneumonia, traumatism, and mastitis/udder problems. The causes of death of each animal are described in Supplementary Table 1. The samples were collected in a dairy basin in the central region of Mexico with a bovine tuberculosis prevalence higher than 16% (10).
Histopathological analyses of paraffin-embedded tissues
Samples of lymph nodes, lung tissue, and individual organs that exhibited tuberculosis-suggestive lesions were collected during necropsy. Tissues were divided for histopathology and bacteriological culture.
For histopathological analysis, the tissue was fixed in 10% formaldehyde and embedded in paraffin. Serial 4-μm sections were obtained from the formalin-fixed, paraffin-embedded (FFPE) tissues and stained with Hematoxylin and Eosin (H&E), Masson's trichrome, Ziehl-Neelsen (ZN), and Von Kossa. Granulomas with fibrous tissue capsules, acid-fast bacilli, and calcification were identified in these sections. Granulomas were identified and staged according to Wangoo et al. (8) and using a new classification of granulomas in the group of young bovines (9).
Mycobacterium bovis identification
Mycobacterium bovis was identified by bacteriological isolation and by PCR of FFPE tissues. Briefly, part of the collected tissue was used for bacteriological isolation after Petroff's decontamination method under biosecurity conditions (11). To confirm the presence of M. bovis, we extracted genomic DNA from the bacteriological isolates and from FFPE tissues that contained granulomas. In the case of paraffin blocks, 10 to 12 μm sections were obtained with a microtome and added to a 1.5 ml centrifuge tube; microtome blades were cleaned with 70% alcohol between slices to avoid cross-contamination of samples. Then, 1 ml of xylol was added to each tissue section, which was vortexed and incubated for 5 min. The xylol was subsequently decanted, and the tissue section was washed twice with absolute ethyl alcohol, allowed to dry, resuspended in 400 μl of TE with 50 μl of lysozyme (10 mg/ml), and incubated overnight at 37°C. We then used the CTAB (N-cetyl-N,N,N-trimethylammonium bromide)/chloroform-isoamyl alcohol protocol described by Van Helden et al. (12). Next, a nested PCR was performed to amplify the mpb70/m22 genes and identify members of the Mycobacterium tuberculosis complex, using a commercial kit (TopTaq Master Mix Kit) following the manufacturer's instructions. Primers for the mpb70 gene, which amplify a product of 372 bp, were mpb70 F (5′-GAACAATCCGGAGTTGACAA-3′) and mpb70 R (5′-AGCACGCTGTCAATCATGTA-3′). For a second reaction, a 208 bp product from the same gene was obtained using the M22 F (5′-GCTGACGGCTGCACTGTCGGGC-3′) and M22 R (5′-CGTTGGCCGGGCTGGTTTGGCC-3′) primers. Finally, PCR of the RD9 and RD4 regions was used to specifically identify M. bovis. For RD9, the selected primers were RD9 F (GTGTAGGTCAGCCCCATCC), RD9 I (CAATGTTTGTTGCGCTGC), and RD9 R (GCTACCCTCGACCAAGTGTT), with a product of 333 bp for M. tuberculosis and 206 bp for M. bovis; for RD4, the primers were RD4 F (ATGTGCGAGCTGAGCGATG), RD4 I (TGTACTATGCTGACCCATGCG), and RD4 R (AAAGGAGCACCATCGTCCAC), with a product of 268 bp for M. bovis and M. bovis BCG, while a 172 bp product is amplified for the rest of the members of the M. tuberculosis complex (12-14).
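The product sizes above imply a simple decision rule for calling the species from the RD9/RD4 bands. The minimal Python sketch below only restates that rule; the function name and the idea of encoding the rule in software are illustrative additions, not part of the authors' protocol.

# A sketch of the differential identification rule implied by the RD9/RD4
# amplicon sizes quoted above; function name and labels are hypothetical.
def interpret_rd_pcr(rd9_bp: int, rd4_bp: int) -> str:
    """Map observed amplicon sizes (base pairs) to a species call.

    RD9: 333 bp -> M. tuberculosis, 206 bp -> M. bovis.
    RD4: 268 bp -> M. bovis / M. bovis BCG, 172 bp -> other MTBC members.
    """
    if rd9_bp == 206 and rd4_bp == 268:
        return "M. bovis"
    if rd9_bp == 333 and rd4_bp == 172:
        return "M. tuberculosis"
    return "other M. tuberculosis complex member / indeterminate"

print(interpret_rd_pcr(206, 268))  # -> M. bovis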
Immunohistochemistry
IHC procedures are summarized in Table 1. Briefly, FFPE tissues were cut into 4-5 μm sections and placed on electrocharged slides (Kling-On Slides, Biocare Medical). The sections were deparaffinized at 60°C for 30 min, rehydrated, and placed in 3% hydrogen peroxide for 15 min to eliminate endogenous peroxidase activity; then, epitope demasking was performed using both physical and chemical methods according to the primary antibody (Table 1). The tissues were washed with distilled water and placed in Sequenza cover plates (Shandon Scientific, Loughborough, UK) for immunolabeling. The sections were washed after each step of the staining procedure with phosphate-buffered saline (PBS: 138 mM NaCl, 3 mM KCl, 8.1 mM Na2HPO4, 1.5 mM KH2PO4, pH adjusted to 7.4) and Tris-buffered saline with Tween (TBST: 0.005 mM Tris-buffered saline, pH 7.6, with 0.05% Tween 20). A universal blocking reagent (Background Sniper BS966L10) was added to reduce nonspecific background staining; then samples were incubated with the primary antibody. Antibody concentration and incubation time were standardized for each antibody. After washing twice, MACH 1 Universal HRP-Polymer Detection (micro-polymer detection) was used following the manufacturer's instructions. A probe (mouse antibodies only) was added to the sections and incubated for 15 min at room temperature. The polymer was then added and incubated for 30 min at room temperature, followed by 3,3′-diaminobenzidine tetrahydrochloride (DAB) for visualization. The slides were rinsed in purified water, counterstained in Mayer's hematoxylin, dehydrated, and mounted with resin.
Digital image analysis in granulomas
The slides immunolabeled with the different antibodies were digitized with a scanning microscope (Aperio ScanScope CS, Aperio, CA, USA), generating 40× images with a spatial resolution of 0.45 μm/pixel. Images were analyzed with the ImageScope software (Aperio, CA, USA), and granulomas were delimited by removing the areas of necrosis composed of cell debris and calcification. Various algorithms were used to standardize an adequate detection level for quantification of the brown staining obtained from IHC. This methodology enabled the quantification of the proteins labeled in the different IHC tests. Supplementary Figure 1 summarizes the experimental procedures used in this study, and Supplementary Table 2 shows the antibodies used and the number of granulomas analyzed in each group.
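The quantification here relied on Aperio's proprietary algorithms. As a rough open-source analogue of the same idea (separating the DAB channel by color deconvolution and measuring the positively stained fraction), one could use scikit-image; the sketch below is illustrative only, and the file name and the 0.02 threshold are hypothetical.

# A minimal sketch of DAB-positive area quantification, analogous in spirit
# to the Aperio pipeline used by the authors (not their actual algorithm).
from skimage import io
from skimage.color import rgb2hed

img = io.imread("granuloma_roi.png")[..., :3]  # hypothetical ROI image, RGB
hed = rgb2hed(img)                             # color deconvolution: H, E, DAB
dab = hed[..., 2]                              # DAB (brown) channel

mask = dab > 0.02                              # illustrative positivity threshold
positive_fraction = mask.mean()                # fraction of DAB-positive pixels
print(f"DAB-positive area fraction: {positive_fraction:.1%}")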
Statistical analyses
Statistical analysis was performed using the PASW Statistics 18 program and GraphPad Prism 7.0. The Shapiro-Wilk test and normal Q-Q plots were used to assess normality. Comparisons of immunostaining in granulomas between adult and young cattle were performed with the nonparametric Mann-Whitney test. Differences were considered significant when p < 0.05.
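For readers wishing to reproduce this comparison outside PASW/GraphPad, a minimal Python sketch of the same two steps (normality check, then the Mann-Whitney comparison) is shown below; the immunolabeling values are hypothetical placeholders.

# A sketch of the group comparison described above, with hypothetical
# immunolabeling values (e.g., percent positive area per granuloma).
from scipy.stats import mannwhitneyu, shapiro

adult_granulomas = [1.2, 0.8, 1.5, 0.9, 1.1, 0.7]  # hypothetical values
calf_granulomas = [2.9, 3.4, 2.2, 4.1, 3.0, 2.7]

# Normality check, mirroring the paper's Shapiro-Wilk step
w_a, p_a = shapiro(adult_granulomas)
w_c, p_c = shapiro(calf_granulomas)
print(f"Shapiro-Wilk: adults p = {p_a:.3f}, calves p = {p_c:.3f}")

# Nonparametric comparison; p < 0.05 taken as significant
u, p = mannwhitneyu(adult_granulomas, calf_granulomas, alternative="two-sided")
print(f"Mann-Whitney U = {u:.1f}, p = {p:.4f}")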
Granulomas of calves naturally infected by Mycobacterium bovis exhibit high numbers of bacteria
Formalin-fixed, paraffin-embedded lymph node sections from 25 Holstein-Friesian cattle naturally infected with M. bovis were used. IHC was performed on these tissues to identify cell populations, cytokines, and the presence of mycobacteria. A total of 3,439 granulomas were analyzed, of which 31.3% (1,077) were from adult cattle and 68.6% (2,362) were from calves. Using Ziehl-Neelsen staining, we had previously observed more AFB in granulomas of the mediastinal lymph nodes of calves compared with adult cattle. To confirm this observation, mycobacteria were immunolabeled by IHC (Figure 1). The higher sensitivity of this technique allowed us to detect not only the presence of bacilli but also cellular remains in the form of vacuoles and cytoplasmic dust, possibly associated with cell debris from mycobacterial processing and phagocytosis. This staining was observed mainly in the cytoplasm of macrophages (MΦs), epithelioid MΦs, and multinucleated giant cells (MGCs). Positive staining was scarce in granulomas from adult cattle, which presented necrotic centers with mineralization and were circumscribed by a connective tissue capsule; in these types of lesions, positive immunostaining was only observed in the MGCs (Figure 1C). In calf granulomas, positive staining was extracellular and predominant in necrotic areas (Figure 1D). Interestingly, the cytoplasm of different types of cells outside the granulomas showed cytoplasmic dust immunolabeling in both groups. These results suggested that the higher bacterial burden found in granulomas of calves compared with adult cattle may be correlated with the type of immune response. Therefore, we sought to identify the major cell populations and cytokines associated with the immunopathology of tuberculosis.
Calf granulomas do not develop a fibrous capsule
One of the main findings of the histopathological analysis in the tissue stained with Masson's trichrome was the absence of a connective tissue capsule in the granulomas of calves, even in the presence of necrosis and calcification. Fibroblasts and myofibroblasts have been reported as the main cell populations that form the connective tissue capsules in granulomas caused by M. bovis; these capsules are mainly composed of type I collagen (8). We observed greater vimentin (fibroblast) and α-SMA (myofibroblast) immunolabeling in adult cattle compared with calves (Figure 2). Vimentin staining was identified in epithelioid MΦs, MGCs, and mostly in cells with fibroblast characteristics, which were interspersed in the cellular area of the granulomas. Vimentin-positive fibroblasts formed cell layers of varying thickness comprising the connective tissue capsule around stage III and IV granulomas of adult cattle; this distribution was not observed in granulomas from calves.
Mycobacterium bovis granulomas from calves and adult cattle have different cell proportions
Granulomas from calves have more bacteria, no connective tissue capsule, and fewer fibroblasts and myofibroblasts than those from adult cattle. Taken together, these results suggest a functional difference in the immune response that is probably related to the type of cells that form the granulomas. We used IHC to identify the main cell populations of granulomas from adult cattle and calves. Granulomas from adult cattle showed more MAC387+ (MΦ/monocyte), WC1+ (γδ T cell), and CD79+ (B lymphocyte) cells, and fewer CD3+ (T lymphocyte) cells, compared to granulomas from calves. In addition, we evaluated the differences between granuloma stages and observed the same pattern (Figure 3; Supplementary Figure 3). Adults' granulomas showed more MΦs, epithelioid MΦs, and MGCs than calves' granulomas (Figures 3A,D). γδ T+ cells were present in granulomas, as well as in the parenchyma of the lymph nodes. An increasing number of positive cells correlated with the granuloma stage in adult cattle; stages III-IV showed many positive cells in the cellular area and around the connective tissue capsule in adults' granulomas, whereas few γδ T+ cells were observed in all granuloma stages from calves (Figures 3E,H). Adults' granulomas showed more B lymphocytes, located within the cellular area in the initial stages and around lesions with calcification and necrosis, than calves' granulomas. Interestingly, B cell niches were found in some late granulomas in both groups (Figures 3I,L). Finally, T lymphocytes were observed in both groups surrounding the tissue capsule; they were interspersed in the initial stages and surrounded the necrosis area in later stages. A slight increase in these cells was observed in calves compared to adults' granulomas (Figures 3M,P).
A higher proinflammatory response was observed in granulomas from calves compared to adults
We hypothesized that the differences in cell populations observed in the granulomas of calves and adult cattle are related to a distinct type of immune response. To further explore this hypothesis, we evaluated cytokines and inflammatory mediators associated with tuberculosis immunopathology in humans and cattle, such as interferon gamma (IFN-γ), the inducible form of nitric oxide synthase (iNOS), transforming growth factor beta (TGF-β), and tumor necrosis factor alpha (TNF-α).
Concentration of IFN-γ and γδ T cells is granuloma stage-dependent
When the granulomas were analyzed by stage and compared between adults and calves, variation in the labeling of IFN-γ and γδ T cells was observed. Although the global average of IFN-γ was higher in granulomas of calves, in adult cattle we observed a higher concentration of this cytokine in stage III granulomas. The immunolabeling of γδ T cells was higher in adults compared to calves. It is interesting to note that the highest concentration of γδ T cells was observed in stage IV granulomas. The amount of IFN-γ did not correlate with the number of γδ T cells observed in these lesions (Figures 5A,B).
Discussion
Granulomas are the characteristic lesions of bovine tuberculosis. The development, morphology, and fate of this structure depend on several factors, including chronic stimulation by the virulent mycobacteria and the host's immune response associated with the type of cell population, cytokines, chemokines, and cell activation. The immune response and morphological characteristics of M. bovis granulomas have been studied mainly in cattle older than 6 months of age (4). However, very little information has been reported on the immunology of tuberculosis in young animals (17).
Our results evidenced differences in the histological structure, number of bacteria, and immune response in granulomas from calves and adult cattle. In summary, granulomas in calves have more bacteria, no connective tissue capsule, a disorganized structure, fewer fibroblasts, myofibroblasts, epithelioid MΦs, MGCs, γδ T cells, and B cells, and less TGF-β immunoreactivity than those of adult cattle. Taken together, these data suggest an exacerbated proinflammatory process that is inefficient in the control of M. bovis infection in naturally infected cattle (Figure 6). In a previous study from our group, we observed histological differences in granuloma architecture, such as the absence of the connective tissue capsule, more necrosis, and a greater number of AFBs in the granulomas of calves compared to adult cattle (9). To better understand the immune response and the number of bacteria found in these lesions, we used IHC and digital pathology analysis. We confirmed higher mycobacterium immunolabeling in granulomas from calves, mainly in extracellular and necrotic areas and in the cytoplasm of epithelioid MΦs and MGCs.
Our observations differ from previous reports in cattle experimentally infected with virulent strains of M. bovis, where the number of AFBs is low and located in the cytoplasm of giant cells (4,18). However, we found a similar result in granulomas from adult cattle, where positive immunolabeling was mainly found in the cytoplasm of epithelioid MΦs and MGCs. In some lesions, it was impossible to detect positive staining, especially in stage III and IV granulomas. In monkeys, these types of lesions are capable of sterilizing mycobacteria in latent infections (19).
One important feature of this study was the use of polyclonal antibodies in the IHC, since the protocol not only stained bacilli but also cellular remains. These remains were observed as vacuoles and cytoplasmic dust, possibly associated with cell debris due to mycobacterial processing and phagocytosis. Interestingly, both groups also showed mycobacteria immunolabeling in cells outside the granuloma. This had already been noted in previous studies, suggesting that mycobacteria are present outside the lesion, probably as the remains of phagocytosed bacteria (20-22).
The presence of a fibrotic capsule around the granuloma is a hallmark of bovine tuberculosis. The capsule is mainly composed of type I collagen, produced by fibroblasts and myofibroblasts. Using Masson's trichrome staining, we previously observed fibrosis in granulomas from adult cattle, in agreement with previous reports (8, 23). However, the capsule was absent in the calf granulomas. To confirm whether calf lesions lacked fibroblasts, we performed IHC for vimentin and α-SMA; as expected, immunolabeling was detected in fibroblasts and myofibroblasts forming the surrounding fibrous tissue capsule in late granulomas and intercalated in the cellular area in stage I and II granulomas of adult cattle. Surprisingly, vimentin and α-SMA immunolabeling of early-stage granulomas from calves was similar to that of adult cattle. Nevertheless, calf lesions with necrosis and calcification showed disorganized fibroblasts and myofibroblasts that did not form a capsule around the lesion. The amount of fibrosis correlated with the presence of TGF-β in granulomas and the differentiation of fibroblasts into collagen-producing myofibroblasts (24, 25). Using IHC, we observed less TGF-β in the granulomas of calves compared to those of adult cattle.
[Figure legend fragment: immunolabeling by granuloma stage in adult cattle and calves; Mann-Whitney test, *p < 0.05, **p < 0.01, and ***p < 0.001; ND, not determined because of the lack of stage II granulomas in adults.]
This cytokine has been associated with the development of fibrosis in granulomas of cattle infected by M. bovis (8). The function of fibrotic capsules in the pathogenesis of tuberculosis and their formation around granulomas are incompletely understood, but some studies have associated them with the chronicity of the lesion, better control of bacteria, limitation of tissue damage, and latent infection (19, 25-27). In bovine tuberculosis, the fibrotic capsule is characteristic of stage III-IV granulomas and is composed mainly of type I collagen produced by fibroblasts. In naturally infected cattle, the thickness of the connective tissue capsule has been associated with fewer bacteria in granulomas (25). The absence of capsules in granulomas from calves with high bacterial burden suggests that capsules protect cattle naturally infected with M. bovis, but the factors that determine the formation and deposition of fibrous tissue in the external part of the lesions are still unclear. Interestingly, recent studies suggest that the fibroblasts forming a connective tissue capsule may be MΦs that undergo macrophage-myofibroblast transition (28). Our IHC results showed that MAC387, a protein found in MΦs and monocytes, was present in the cytoplasm and membrane of fibroblasts that formed the connective tissue capsule. This observation suggests the possibility of macrophage-myofibroblast transition in cattle granulomas. Granulomas with higher bacterial burdens and no peripheral fibroblasts have been found in active tuberculosis infections in monkeys and humans, associated with more proinflammatory cytokines (6, 19). In this study, higher immunolabeling of IFN-γ, TNF-α, and iNOS with less TGF-β was observed in granulomas from calves compared with adult cattle, suggesting a greater proinflammatory response. The high amount of IFN-γ in the calf granulomas suggests abundant CD3+ T cells responding to mycobacterial antigens. Strong whole-blood IFN-γ responses have been reported in calves as early as 1 month after M. bovis infection, and TNF-α production has been shown in BCG-vaccinated calves (29, 30). These two cytokines are essential for activating antimycobacterial mechanisms and inducing reactive nitrogen intermediates by activated MΦs, which play a crucial role in the intracellular killing of mycobacteria. However, despite higher iNOS production in granulomas from calves, they presented fewer epithelioid MΦs and MGCs compared to adult cattle. The cytotoxic activity induced by high iNOS concentrations might explain this contradictory result (31). Another possibility is that the MΦs present in calf lesions were mainly derived from circulating blood. These MΦs are more proinflammatory and short-lived, and they depend more on glycolysis to produce energy than resident tissue MΦs (32). This is consistent with the idea that the MAC387 antibody detects an epitope on the calcium-binding protein MRP14 found in monocytes/MΦs that have recently infiltrated acutely inflamed tissues (33, 34). Similarly, we identified more MAC387+ cells in the uninjured tissue surrounding lymph node granulomas of calves compared to adult cattle, suggesting that the MΦs and monocytes detected in calves come mostly from the bloodstream and are, therefore, more proinflammatory.
We demonstrated that granulomas from calves present a more proinflammatory response than those of adult cattle; this type of microenvironment is associated with less TGF-β, possibly resulting in the lack of connective tissue observed. Granulomas are dynamic, spatially organized structures with a proinflammatory center that may present necrosis and a periphery of cells with an anti-inflammatory profile. From this study, we can infer that the granulomas of young bovines infected with M. bovis have a reduced anti-inflammatory response, which is why they lack adequate encapsulation by fibrous tissue. The role of B cells and γδ T cells in the pathogenesis of bovine tuberculosis is incompletely understood. However, the presence of B cells in granulomas has been associated with better control of the infection, since they are numerous in granulomas with fewer bacteria (23). In this study, CD79+ lymphocytes were observed among the rest of the cells and around the lesion at different granuloma stages. Multifocal aggregates of CD79+ cells were also observed in some late stages. Finally, more CD79+ cells were found in the granulomas of adult cattle compared with those of calves (which had the highest bacterial burden). This result agrees with the finding that granulomas with more B lymphocytes tend to have fewer mycobacteria. Conversely, γδ T cells play a critical role in connecting innate and adaptive immunity in response to M. bovis. In peripheral blood, γδ T cells represent up to 70% of the lymphocytes in young animals and decline to an average of 10-20% in adult bovines (35). Moreover, WC1+ γδ T cells from neonatal calves express high levels of IFN-γ in response to IL-12 and IL-18 compared with adult animals (36). The higher percentage of γδ T cells in calves suggests that they participate importantly in the immune response. In granulomas, γδ T cells are the first to arrive at the infection sites; they have been observed as early as 7-15 days after experimental infection with M. bovis, suggesting that they play a role in granuloma formation. Likewise, it has been reported that the number of γδ T cells is positively correlated with the stage of the granuloma, which agrees with our observation that the number of γδ T cells was higher in the granulomas of adult bovines; it is interesting to note that the highest concentration of γδ T cells was observed in stage IV. Although adult granulomas showed a higher concentration of γδ T cells, this did not correlate with the expression of IFN-γ observed in these lesions (37, 38). Surprisingly, although these cells are increased in the circulation of calves, we found a small number in the granulomas. A possible explanation for this result is that γδ T cells in young cattle remain mainly in the bloodstream and are less present in the interstitium.
These observations highlight the involvement of γδ T cells in the pathogenesis of tuberculosis in young animals.
One limitation of this study is that several factors that could affect the type of lesion remain unknown, including the degree of bacterial virulence, route of infection, bacterial dose, and time of infection. It is widely recognized that the pathology of bovine tuberculosis is multifactorial. However, this study emphasizes age as an important factor in the type of immune response and granuloma formation in cattle naturally infected with M. bovis. Granulomas from calves displayed a greater number of bacteria, lacked the connective tissue capsule, were associated with fewer and more disorganized fibroblasts and myofibroblasts, showed a predominance of proinflammatory cytokines (IFN-γ, TNF-α, and iNOS), and had fewer epithelioid MΦs, MGCs, γδ T cells, and B lymphocytes and less TGF-β, compared to the granulomas from adult cattle. Our results suggest that calves have active-like tuberculosis with an exacerbated proinflammatory response that may be associated with more necrosis and a lower microbicidal capacity, making them more permissive to infection and dissemination of mycobacteria. This study highlights the importance of understanding the immune response and pathogenesis of bovine tuberculosis in young animals.
Data availability statement
The original contributions presented in the study are included in the article/Supplementary material, further inquiries can be directed to the corresponding author.
Ethics statement
The animal study was reviewed and approved by Ethics and Animal Welfare Committee of the Facultad de Medicina Veterinaria y Zootecnia, Universidad Nacional Autónoma de México (CICUA, FMVZ-UNAM), and complied with the Mexican guidelines for animal research (JAGP-074).
Author contributions
JC-U and JAG-P conceived the experiments and wrote the original draft. JAG-P provided resources, project administration, and funding acquisition. JC-U and MAB-A collected and prepared samples. RH-P and CL-M advised on field data acquisition and analysis and provided scientific guidance during the experiment and drafting of the manuscript. JC-U, MJ-R, and GB-G performed the experiments and analyzed the data. SH-Y performed validation, writing-review, and editing. All authors contributed to the article and approved the submitted version.
Funding
This study was financially supported by the Project of Programa de Apoyo a Proyectos de Investigación e Innovación Tecnológica (PAPIIT) IG201521 of the Universidad Nacional Autónoma de México.
The Benefit of Belt and Road Initiative for Central Africa and China: A Case Study of Sub-Saharan African Countries
On a historical account, the apparent lack of documented economic data (accurate information) on research budgets and flexible schedules hinders economic growth and development. When the gravity model has been used for analysis, a positive, statistically significant relationship has been found between transport facilities, continuity, and bilateral trade. However, the connection between transport facilities, continuity, and bilateral commerce on the one hand and the availability of documented economic data or information on the other hand was missing. To determine how the availability of standard documented economic data or information constrains economic growth and development, as well as the relevance of this relationship, the authors analyzed this relationship. The BRI, China's ambitious idea of an economic belt created from the old Silk Road, covers almost all routes across Asia, Europe, and Africa. In the BRI area, the development of sea, air, and road transport links among trading partners is relevant, with a large-scale influence on improving commerce. This brings to the fore the second most important influence, which is a testament to road and sea transport quantity and consistency. Transport service quality, which has an important influence on bilateral commerce, was also studied. Our results suggest that, with standard investment in roads, total commerce in the BRI member countries (the Central African countries (CAC) included) could become more valuable. Hence, improving transport facilities could lead to a win-win situation with a strong influence on commerce.
Introduction
The BRI is an ambitious transformational plan of China to enhance connectivity and trade flow between Asia, Europe, and Africa (Lim et al., 2018). The BRI is a transport facilities project created to influence the lives of about 4.4 billion people with a total GDP of US$21 trillion. Such massive transport projects are not new; illustrations can be drawn from the American Gilded Age, when railroad lines were constructed to link different areas together, decreasing transport costs, catalyzing the spread of modern goods, encouraging economic flows and investment, and providing job opportunities (Lu et al., 2018). Over the past ten years, international trade and cooperation have played a vital role in African economic development, particularly in East Africa. Deloitte stated that African development projects, particularly in construction, have increased yearly by 59.1% and that their total value increased by 53.3%. The total number of projects in East Africa has reached about 139, the largest among BRI members. About 43 out of 54 African countries have undertaken BRI projects, although North Africa received the largest share of the benefits, amounting to about US$148.3 billion. In Egypt, the projects were valued at about US$79.2 billion, the most valuable in the continent, comprising about forty-six construction projects. Egypt therefore leads the African continent in terms of infrastructural projects, followed by South Africa and then Nigeria. The 2018 Deloitte analysis placed the majority of the BRI constructions between US$50m and US$500m, which was considered a lower value range. In summary, a total of 80 infrastructure projects exceeded US$1.1bn in value, and about 14 of them were valued at more than US$10bn. This finding coincides with the 2018 Construction Trends Report on structuring, financing, and delivering huge projects for the African continent. Abe et al. (2009) stated that the two most vital economic divisions where massive investments are made in construction projects are the oil and gas and the energy and power segments. According to Rolland (2017), infrastructure projects in the oil and power sectors were valued at about US$62 billion, about 13.0% of the African projects under the BRI. Das (2017) cited the major corridors of the BRI projects, which comprise the following: China-Mongolia-Russia, China-Central Asia-West Asia, China-Pakistan, Bangladesh-China-India-Myanmar, and the China-Indochina Peninsula. These networks comprise routes, bridges, rivers, pipelines, and information highways, as well as sea and air links between industrial clusters, and include maritime components.
Research Objectives
The main objective of our research is to trace the effect of achieving multi-modal transport continuity on multilateral commerce in the BRI zones, specifically in the Central African countries. Both qualitative and quantitative analyses were used to reach this objective. Firstly, to identify potential obstacles to these transnational projects under the BRI, literature reviews and interviews were used, and the resulting task factors were included to extend the model. Secondly, we developed an econometric framework to measure the accumulated facilities of the BRI projects. To assess the reliability of the estimated effect of transport facilities, we discussed the model and subjected it to a series of policy scenario tests.
Some Graphs Showing the Trend of Growth in Both African Countries Under BRI and Those Without BRI
Figures 1 and 2 below show the trade composition in Africa in 2016. China holds the biggest share of trade with Africa worldwide. Statistically, Chinese-African trade was estimated at US$170 billion in 2017. Chinese exports to Africa and Chinese imports from Africa were US$94.74 billion and US$75.26 billion, respectively, with a trade surplus estimated at US$19.48 billion in favor of China (Statistics, 2018). Only five African countries had positive trade balances with China, so almost all African countries suffer from a structural trade deficit with China. The bulk (70%) of Sub-Saharan African exports to China consisted of fuel, metal, and mineral raw materials, whilst China exports ready-made commodities to these countries (Chen & Nord, 2017).
The trade structure described above conditions economic growth in China's favor. The case of Sub-Saharan African raw-material exporters demonstrates the imbalance: African traders rely on international prices for raw resources. Accordingly, the slowing of the Chinese economy and lower prices for raw goods in 2015-2016 negatively affected the value of Chinese-African trade. Figures 1 and 2 show the trade composition in Africa.
China Benefits and Problems from BRI
China derives domestic and international benefits from the BRI. The domestic ones include trade and investment; energy and natural resources; and issues surrounding the Taiwan passage. On the other hand, the international benefits of the BRI for China include the following: China used its culture and influence, rather than war, to persuade African countries to trade with China (soft power). This soft power has increased drastically in Africa; China seized the chance of the global economic recovery to internationalize the renminbi while internationalizing its factories (Zhongyan, 2019). The OBOR also enables China to play a bigger role in global governance, as part of the global agenda. However, the problems include the following: more than ten thousand Chinese enterprises have gone out to invest in Africa and do good business, but they face many problems. Chinese state-owned enterprises (SOEs) and Chinese private enterprises vary greatly in their goals; they take into consideration both financial and non-financial incremental performance. Some Chinese SOEs had a profit margin of more than 20%, whereas 25% of them realized losses (Sun, 2017). Nonetheless, some SOEs lack clear strategic planning, deployment, and research programs. In addition, they did not evaluate their markets before starting operations, which may cause them failures in BRI areas.
Benefits of BRI for Africa and Risks
The BRI has many benefits for African states. At the beginning of 2016, a Memorandum of Understanding on the BRI was signed between China and Egypt to widen the Suez Canal over the coming ten years. The project cost was estimated at US$230 million, financed by China (Bagwandeen, 2017). The Suez project generates about ten thousand jobs for Egyptian people. In the Djibouti case, the first Chinese overseas military base and a container terminal at the port of Doraleh were built by the Chinese government, sitting amongst many foreign military bases in Djibouti, where the USA and some European countries have positioned soldiers. That is one of the initiatives that emerged from the China-Africa Cooperation Beijing Action Plan (2019-2021). Forum (2018) stated that: "to progress the current global legal system; the stakeholders of the project (China and African countries) should have to reinforce interactions and collaboration in semi-governance, improve joint trust and exchanges in this admiration, provide legal support and guarantee for China-Africa cooperation and to cooperate on the BRI, and work together."
Despite their incorporation in the BRI, Kenya and Ethiopia benefited more from China. China constructed the new Mombasa-Nairobi railway for Kenya, considered the biggest infrastructure investment since its independence. In the future, it may link Kenya with Uganda, and it can link the Indian Ocean to other countries such as Rwanda, Burundi, and the Democratic Republic of the Congo, expanding the geographical scope of the BRI projects. Kenya gained the following from this railway: GDP growth of about 1.5%, about 46,000 jobs for Kenyan residents, and a 40% reduction in transport costs (Xianfa, 2018). The BRI also carries many risks for African states. For example, misuse or poor performance of funded infrastructure may result from overrating the benefits of the infrastructural projects themselves. A highly idle project was Hambantota Port in Sri Lanka, built by China: although the port has little container traffic, Sri Lanka had to give China a 99-year lease for debt relief. A second example was the Mattala Rajapaksa International Airport in Sri Lanka, planned to carry a million passengers yearly but wound down because of its big losses.
Good Transport Facilities Facilitate Commerce Expansion
International commerce theory defines transportation costs as the wedge between traded and non-traded goods. Transportation costs can thus be treated as exogenous variables in commerce models, depending on geographical factors. Rationally, the cost of transport may rely on the quality of transportation services, so to predict transportation costs we can use transport facilities across states as a guide. Transport facilities can also be used to explain differences in competitiveness. Different ways can be used to work out the cost of transport. The cost for a certain type of transportation, such as seaport, shipping, and route, can be disclosed as a direct measure or be determined as a cost per mile or kilometer. The decline in the cost of moving commodities over the past decades, attributable to conceivable improvements in transportation facilities, is worth documenting. Hummels (1999) cited that the cost of air freight fell by a factor of about 12.5 between the 1950s and the 2000s, while sea freight costs stayed in place. Glaeser and Kohlhase (2004) found that the cost of shipping and route transportation diminished by a factor of about 8 over 10 years. A similar finding was reached by Redding and Turner (2015): the transportation cost per ton-mile of (sea, air, and road) cargo fell from about US$0.2 in 1890 to US$0.02 in 2000.
Moreover, valuable transportation facilities can affect the cost of transport. Countrywide transport facilities represented 40% of transport costs for coastal countries, whereas national and transit-country transport facilities accounted for sixty percent of transportation costs for non-coastal states (Limão, 2001). Also, a conceivable improvement from the 25th to the 75th percentile in route, sea, and air transport facilities was predicted to overcome more than half the disadvantage of being a landlocked state. Clark et al. (2004) cited that the cost of sea transport to and from the United States was equal to about 5.25% of the value of freight, so port efficiency contributes hugely to the total cost. They estimated that a worsening in seaport quality from the 75th to the 25th percentile raises freight costs by 12%.
Transportation time is a critical third factor for a company that outsources activities and operates a supply network. A day saved in journey times is equivalent to reducing tariff rates by about 0.4 to 1% for exports and 0.8 to 1.5% for imports (Hummels, 2007). Delivery time is determined in part by the remoteness between dealing allies, but more notably by geography and the quality of transportation capacity. A good example is that meager, incompetent anchorage handling processes can result in long deferrals that are not necessarily reflected in the monetary costs of transportation. Wilson (2003) calculated the time wasted waiting at a seaport; in this time, a vehicle could travel about 1,600 km overland. These interruptions can be attributed to both poor seaports and bureaucratic procedures at ports.
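To make the Hummels (2007) equivalence concrete, the short Python sketch below converts a port delay into a tariff-equivalent cost range using the per-day percentages quoted above; the delay length and shipment value are hypothetical.

# A sketch converting transit delays into tariff-equivalent trade costs,
# using the per-day ranges quoted above from Hummels (2007).
delay_days = 6                # hypothetical port delay
shipment_value = 100_000.0    # hypothetical shipment value, US$

# Per-day ad valorem equivalents of a day of delay
per_day = {"exports": (0.004, 0.010), "imports": (0.008, 0.015)}

for flow, (low, high) in per_day.items():
    lo_cost = delay_days * low * shipment_value
    hi_cost = delay_days * high * shipment_value
    print(f"{flow}: {delay_days}-day delay costs "
          f"US${lo_cost:,.0f} to US${hi_cost:,.0f}")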
African Countries Involved in the BRI and Their Date of Integration
The BRI scheme started in Africa with the provision of US$10 billion in 2015 by China Exim Bank for infrastructure projects in Egypt; Egypt was thus the first African state to join BRI agreements. In June 2017, many African states joined the BRI as forthcoming members; these countries include South Africa, the Republic of Sudan, and Madagascar. Later, in 2019, the group embraced some African states like Tunisia, Guinea, and Côte d'Ivoire. The BRI is focusing on the Horn of Africa countries, namely Kenya, Egypt, Djibouti, Tanzania, and Ethiopia; subsequently, Nigeria, Cameroon, and Namibia were marked as an expansion of the BRI scheme.
BRI Gain for African Countries
The African continent gains a lot from the BRI. In East Africa, the BRI appears to be making progress in infrastructure building, especially roads, highways, and railway connectivity, implemented by Chinese companies. The Republic of Kenya has obtained many constructions from the BRI, including the following: the building of a modern seaport at Lamu; the standard gauge railway structure; and the enhancement of Mombasa harbor. The structure of the pipeline that links Ugandan and South Sudanese oilfields to Kenyan ports will be the coming project for Kenya. In Djibouti, the benefits include the transportation of drinking water from Ethiopia to Djibouti through a pipeline costing approximately three hundred million dollars, and the provision of $4 billion to modernize the 752.7 km Ethiopia-Djibouti Railway, with the Ethiopian section costing $3.4 billion and China's Exim Bank bearing about seventy percent of the total cost of this project. Moreover, the African gateway for Europe, linking Africa to both Europe and Asia through the Mediterranean Sea, is reachable through the Suez Canal; all of this is facilitated by Djibouti's proximity to Egypt.
Objectives of the BRI for China
The main objectives of the BRI for China were set out in an action plan issued by the Chinese government in March 2015. First, China seeks to secure sustainable growth for itself in all respects, including more balanced regional growth. Second, the country aims to upgrade its manufacturing and promote greener economic growth at home. Third, it intends to provide other countries with cheaper and less environmentally harmful energy sources.
Economic Growth and Transportation Infrastructure
Numerous economists have examined the impact of transportation infrastructure on economic growth. Some authors found that transportation infrastructure had positive impacts on the economic growth of many countries, whilst others found ambiguous, insignificant, or even negative effects. Transportation and communication infrastructure may produce both positive and negative effects in the region where it is located, as well as positive or negative spillovers to other regions.
Holtz-Eakin and Lovely (1996) stated that transport construction can benefit industry, exerting a positive influence. Moreno et al. (1997) reported that in Spain, during the period studied, public capital played a greater role in industrial productivity than in any other economic area, owing to processes of growth and liberalization. Public capital can widen the accessibility of economic organizations and lower costs, and in combination with economic liberalization can make expansion possible.
A negative influence of transportation, however, arises from the dependence of a region's output on the standard of transportation infrastructure in other regions. Kelejian and Robinson (1997) and Boarnet (1998) found that this negative impact of public transport capital can be attributed to the fact that one region can draw industrial production away from other regions because of the mobility of input factors. Thus, in the initial phases of deepening and integration, public investment may widen regional differences, since it disadvantages regions with weak competitive positions.
Methodology in Brief
To measure the influence of removing physical barriers by improving transport facilities across the BRI area, a quantitative analysis was developed. To measure transport facilities and connectivity, we used a series of broad-based indicators and incorporated them into a gravity model to determine their effect on international trade. First, we provide a brief review of the theoretical issues related to the gravity model; then the resulting empirical models are explained and interpreted; and finally, counterfactual improvements in transport facility connectivity are simulated to demonstrate the potential commerce effects for the BRI zones.
The Theory of the Gravity Model
As reported by Anderson (2011) and Shepherd (2012), the gravity model is widely used in analyzing commerce patterns and effects. The model is derived from Newton's universal law of gravitation, which states that particles attract one another with a force proportional to the product of their masses and inversely proportional to the square of the distance between them. Analogously, commerce between countries is proportional to their market size and proximity. Samuelson (1939) stated that distance and commerce costs are both critical for commerce between states. In 1979, Anderson derived the gravity model from a set of economic foundations, explaining that consumers have preferences over different goods and that commodities are differentiated by source. Economists commonly assume that only a fraction of transported goods arrives at the destination, the rest being absorbed by commerce costs, known as 'iceberg' costs. More recently, Anderson and van Wincoop (2003), Arkolakis et al. (2012), and Eaton (2002) have developed the economic foundations underlying the gravity equation. They assume that relative commerce costs, rather than commerce costs taken in isolation, determine bilateral trade flows. The gravity equation takes the form:

X_ab = (Y_a Y_b / Y_W) (t_ab / (Q_a W_b))^(1-σ)    (1)

where X_ab denotes exports from country a to country b, Y_a is the GDP of country a, Y_b is the GDP of country b, Y_W is the world's GDP, σ is the elasticity of substitution among product varieties, and t_ab is the bilateral commerce cost of sending products from country a to country b (Clark et al., 2004). Q_a and W_b denote the outward and inward multilateral resistance (multilateral commerce resistance, MTR) terms, which capture the fact that exports from country a to country b are determined by commerce costs across all possible export and import markets. A reduction in the bilateral commerce cost between China and a third country, such as a Central African country (CAC) like Gabon, would reduce China's MTR (Eisenman, 2012). Even though the bilateral commerce cost between China and another CAC remains unchanged, the fall in China's MTR (due to the reduction of commerce costs between China and Gabon) would lead to a diversion of commerce away from China-CAC trade towards China-Gabon trade. Failure to account for the multilateral resistance effects would lead to an upward bias in the estimates of gains from the counterfactual improvements. Given its multiplicative nature, the gravity equation outlined in (1) can be transformed by taking logarithms into a log-linear form:

ln X_ab = ln Y_a + ln Y_b - ln Y_W + (1 - σ)(ln t_ab - ln Q_a - ln W_b)    (3)

Due to the lack of a direct measure of commerce cost, t_ab is usually specified empirically as a function of observable variables that are seen as directly correlated with commerce cost. In the literature, a log-linear specification is often applied:

ln t_ab = β1 ln D_ab + β2 COT_ab + β3 COL_ab

where D_ab (distance) is the geographical distance between countries a and b, COT is a categorical variable equal to one if the countries share a common land border, and COL (colony) is equal to one if countries a and b had a colonial relationship. These factors reflect the hypotheses that transport costs increase with distance but are lower for neighboring countries. Indicators of colonial history are related to the information costs of commerce; such costs are presumably lower for commerce between countries whose culture and business practices are known to each other.
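As a purely numerical illustration of equation (1), the following minimal Python sketch plugs hypothetical GDPs, a trade cost factor, and multilateral resistance terms into the formula; every value is invented for illustration and none comes from the paper's data.

```python
# Toy illustration of gravity equation (1); all values are hypothetical.
Y_a, Y_b, Y_W = 2.0e12, 5.0e11, 8.0e13   # GDPs of a, b, and the world (US$)
t_ab = 1.4                               # bilateral (iceberg) trade cost factor
Q_a, W_b = 1.2, 1.1                      # outward / inward multilateral resistance
sigma = 5.0                              # elasticity of substitution

X_ab = (Y_a * Y_b / Y_W) * (t_ab / (Q_a * W_b)) ** (1.0 - sigma)
print(f"Predicted exports from a to b: US${X_ab:,.0f}")
```

Because 1 - σ is negative, higher bilateral trade costs relative to the multilateral resistance terms shrink predicted exports, which is the mechanism the counterfactual analysis later exploits.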
Empirical Method and Data
The parameters of a gravity model that captures the commerce patterns of countries within the BRI area were estimated, and Central African countries (CAC) were selected for this research. In total, 18 countries were included in the model (8 BRI countries/areas and 10 Central African countries, among which 5 were chosen). Some states were excluded due to a lack of documented information. The detailed country list is shown in Appendix A.
Baseline Gravity Model
Anderson and van Wincoop (2003) and Head and Mayer (2000) put forward a structural specification of the model to capture the multilateral resistance effect. However, convergence is often difficult because of the non-linear estimation involved, and can be sensitive to the initial choice of parameters. A simple but effective solution, taking a linear approximation (by a first-order Taylor series expansion) of the multilateral resistance terms, is suggested by Baier and Bergstrand (2009) to avoid the difficulty of the non-linear procedure in the above model.
Following Baier and Bergstrand (2009), the multilateral resistance terms are approximated by first-order Taylor expansions:

ln Q_a ≈ Σ_k θ_k ln t_ak - (1/2) Σ_k Σ_m θ_k θ_m ln t_km    (6)

ln W_b ≈ Σ_m θ_m ln t_mb - (1/2) Σ_k Σ_m θ_k θ_m ln t_km    (7)

where θ_k = Y_k / Y_W is the GDP share of country k, and k and m index the country pairs in the research. Substituting equations (6) and (7) into equation (3), we then get:

ln X_ab = ln Y_a + ln Y_b - ln Y_W + (1 - σ)(ln t_ab - Σ_k θ_k ln t_ak - Σ_m θ_m ln t_mb + Σ_k Σ_m θ_k θ_m ln t_km)    (9)

Each transport index is then defined using an analogous transformation to account for the MTR. For example, the road distance variable is included in the model as:

ln D̃^road_ab = ln D^road_ab - Σ_k θ_k ln D^road_ak - Σ_m θ_m ln D^road_mb + Σ_k Σ_m θ_k θ_m ln D^road_km    (10)

In this way, a change in road distance affects commerce between the two countries relative to commerce costs across all markets, and the relative size of exporters/importers (the GDP shares) is used to calculate the MTR. Our research hypothesis is that basic transport facilities and multi-modal connectivity among countries influence the commerce cost t_ab, and thereby bilateral trade. The trade cost specification in equation (11) therefore includes measures of transport facility connectivity (maritime connectivity, i.e., seaports and sea transport (Nikkei Asian Review, 2016); road connectivity; and airport connectivity), distance by each mode, and a 'no link' term for sea, air, and road transport. Using the parameters estimated from the gravity model, the effect of a change in one or more of these variables on bilateral commerce was then predicted (counterfactual analysis). The static geographical variables, contiguity and colony, were kept as simple terms as in the baseline gravity model above.
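The MTR adjustment in equation (10) amounts to demeaning each log bilateral variable by GDP-share-weighted averages over all partners. A minimal sketch, assuming an n-by-n matrix of log distances for one mode and a vector of GDP shares as inputs (illustrative, not the paper's data):

```python
import numpy as np

def mtr_adjust(ln_D: np.ndarray, theta: np.ndarray) -> np.ndarray:
    """Apply the Baier-Bergstrand MTR adjustment of equation (10).

    ln_D  : (n, n) matrix of log bilateral distances for one transport mode
    theta : (n,) vector of GDP shares, summing to one
    """
    row_avg = ln_D @ theta             # sum_m theta_m ln D_{a m} (exporter term)
    col_avg = theta @ ln_D             # sum_k theta_k ln D_{k b} (importer term)
    world_avg = theta @ ln_D @ theta   # sum_k sum_m theta_k theta_m ln D_{k m}
    return ln_D - row_avg[:, None] - col_avg[None, :] + world_avg
```

The same transformation is applied to each mode-specific distance and connectivity variable before estimation, so the estimated coefficients already embody the multilateral resistance correction.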
Data
Standard data sources were used in the gravity model; Table 1 summarizes them. Bilateral commerce data came from UN Comtrade, the most widely used public source of data on commerce disaggregated by good. Data from 2013 were used in the research, with commerce values converted from national currencies into US$. The data were available through the United Nations website (UN Comtrade, 2015-2018). Data on GDP were taken from the World Bank's World Development Indicators database (World Bank, 2018), and data on bilateral tariffs from the United Nations Conference on Trade and Development (UNCTAD) database. Tariff rates are effective bilateral rates that take into account regional and preferential agreements; for a few countries the average was calculated using commerce weights, and where 2013 values were missing we imputed 2014 values. The following sections discuss the sources of the transport facility and connectivity measurements. Table 2 provides a descriptive analysis of the economic data for the BRI and Central African countries (CAC); AVRG in the table stands for average. We observed a great extent of variation in GDP and exports within each area: within the research area, the CAC had the lowest GDP in 2013 at US$1.76 million, while China had the highest at US$9.61 trillion. Unlike computable general equilibrium (CGE) models, which capture linkages among various elements of economies across sectors at a more disaggregated level (Walker et al., 2009) and provide explicit links between changing production-consumption patterns and changes to commerce, the gravity model can only identify the static effects of transport facilities on bilateral commerce, keeping all other factors constant (i.e., it generates first-order commerce effects; Plummer et al., 2011). It also does not explicitly take into account the balance between supply and demand for goods and services, or production in the long term. Moreover, how firms and households respond to changes in transport costs is not accounted for. In applying the gravity model outlined in equation (9), the only effects observed are changes in patterns of bilateral commerce resulting from transport connectivity, tariffs, and other commerce characteristics, such as the presence of a common border and historical antecedents. Tables 3 and 4 show descriptive statistics on the exports and trade flows of Chinese and Sub-Saharan African products. As such, results from the empirical model show the relationship between transport costs and bilateral commerce. Not all variables that influence commerce flows and commerce barriers can be addressed in the empirical model; by controlling for many variables and incorporating multilateral resistance terms, issues hindering the measurement of commerce flows may be mitigated.
Model Estimation
In the first phase, parameters of the gravity model in equation (9) were estimated with Ordinary Least Squares (OLS) and Poisson Pseudo-Maximum Likelihood (PPML) estimators (Silva, 2006), taking into account clustering of the error term within country pairs. Moulton (1990) emphasized that failure to account for clustering can result in understated standard errors, since errors are likely to be correlated within country pairs. The model specification of transport facilities and related variables of interest, following equation (10), includes the MTR terms. The PPML approach, which uses a quasi-Poisson distribution with a log link, is a generalized linear method for estimating gravity models; with this method, zero commerce flows can be included in the estimation. In the OLS approach, by contrast, we added unity to commerce values equal to zero to avoid dropping the zero flows under the logarithmic transformation. The model results are shown in Table 5. Models 1 and 2 are baseline gravity models containing distance, tariff, and other control variables. Most of the terms in models 1 and 2 were significantly estimated with the expected sign. The relationship between the distance between trading partners and commerce flows was estimated: the greater the distance between trading partners, the smaller the commerce flows, so the distance terms were negatively estimated. In both the OLS and PPML methods, distance was a commerce deterrent, although the elasticity is smaller in the PPML model. There was a positive effect on export flows for countries with a common border as well as for those with a colonial history. Models 3 and 4 report the estimated parameters including transport connectivity (distance by mode) and transport facility quantity and service variables. Between the OLS and PPML methods, the signs of the estimates do not differ, except for the tariff term, which is (incorrectly) positively estimated in the OLS model.
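For concreteness, the following hedged sketch shows how the two estimators described above could be run in Python with statsmodels (the paper does not state its estimation software): OLS on log(1 + trade) and PPML on trade levels, both with standard errors clustered by country pair. The column names are illustrative placeholders, not the paper's variable names.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def estimate_gravity(df: pd.DataFrame):
    """Estimate the gravity model by OLS and PPML with pair-clustered errors."""
    X = sm.add_constant(df[["ln_gdp_a", "ln_gdp_b", "ln_dist",
                            "ln_tariff", "contig", "colony"]])
    cluster = {"groups": df["pair_id"]}

    # OLS: unity is added to zero flows so the log transform keeps them.
    ols = sm.OLS(np.log(df["trade"] + 1), X).fit(
        cov_type="cluster", cov_kwds=cluster)

    # PPML: Poisson quasi-likelihood with a log link on trade levels;
    # zero flows enter the estimation directly (Silva & Tenreyro, 2006).
    ppml = sm.GLM(df["trade"], X, family=sm.families.Poisson()).fit(
        cov_type="cluster", cov_kwds=cluster)
    return ols, ppml
```

The PPML estimator is generally preferred here because it is consistent under heteroskedasticity and does not require the ad hoc log(1 + trade) transformation.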
Summary and Conclusions
Research has shown that multi-modal transport facilities and connectivity are critical for promoting international commerce and economic growth. More specifically, well-functioning transport facilities reduce costs and facilitate the expansion of commerce (Snyder et al., 2012).
An efficient industrialization process enables more efficient regional and global production networks, supports integration, and fosters the expansion of national welfare. This research identified and discussed the physical and soft barriers/facilitators relating to transport connectivity and commerce more generally in the BRI area. The physical barriers include inadequate equipment capacity, the speed and cost of transporting goods, and inhospitable terrain. The soft barriers include legal and regulatory barriers, project financing, security, and the tracking of goods, as well as security surrounding commerce routes (Abe et al., 2009). Building upon the qualitative analysis, the following hypothesis was formed: removing the physical barriers (by improving connectivity) would facilitate commerce and have a broader positive effect on economic growth in the BRI area. To examine the research hypothesis, a gravity model was developed to test the relationship between transport connectivity and commerce. First, a series of indices was developed to measure transport facilities and connectivity (covering sea, air, and road transport connectivity and airport logistics performance), using distance by different modes as a proxy for journey cost. Countries in the BRI area and Central African countries (CAC) were included. The descriptive analysis of the transport measures showed the following: 1) Transport facilities in the BRI area were assessed in terms of sea, air, and road transport connectivity, as well as airport connectivity.
2) Within the BRI area, the standard of transport facilities varies across countries. Some South and West Asian countries suffered from poor sea, air, and road transport connectivity, and there was relatively low road and maritime (seaport and sea transport) connectivity in some areas. The models were developed to estimate bilateral commerce within the research area and to determine the relative commerce costs among trading countries; the MTR modeling framework was therefore also used. A positive relationship was found among transport facilities, connectivity, and bilateral commerce. Additionally, having a sea, air, and road transport link in the BRI area gives the largest-scale influence on improving commerce (raising total exports by 0.08% in the research area). Logistics performance (e.g., the LPI) also showed a significant and relatively strong influence on bilateral commerce flows.
Copyrights
Copyright for this article is retained by the author(s), with first publication rights granted to the journal.
This is an open-access article distributed under the terms and conditions of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/4.0/).
Correlation between synthesis of α2-macroglobulin as acute phase protein and degree of hepatopathy in rats
The degree to which hepatopathy affects the synthesis of α2-macroglobulin (α2M) as an acute phase protein in rats was investigated. Hepatopathy was induced in Sprague-Dawley rats by intravenous administration of galactosamine at a dose of 300 mg/kg daily for 7 days. Inflammation was induced by intramuscular injection of turpentine oil at a dose of 2 mL/kg. Blood was collected before turpentine oil injection and at 24, 48, 72 and 96 h after injection. Serum concentrations of α2M were measured by enzyme-linked immunosorbent assay. Mean values of aspartate aminotransferase (AST) and alanine aminotransferase (ALT) in rats administered galactosamine were significantly higher than in controls, while mean values of body weight and total protein were significantly lower than in controls. Serum concentrations of α2M in the galactosamine group were significantly lower than in controls, as were the kinetic parameters area under the concentration-time curve (AUC0-96) and maximum serum concentration (Cmax). The cut-off value for detecting effects on the synthesis of α2M in the liver was 46.9 mg·h/mL. Seven rats (77.8%) were assessed as showing decreased synthesis of α2M due to hepatopathy. Two rats showed no influence on the synthesis of α2M despite administration of galactosamine; AST and ALT in these two rats were ≤ 285 and ≤ 174 U/L, respectively. In conclusion, the synthesis of α2M in rats is evidently suppressed in the severe stages of hepatopathy.
Introduction
α2-macroglobulin (α2M) is a protease inhibitor with broad specificity in humans [1][2][3][4]. For example, chymase, a mast cell serine protease, is inhibited by α2M [5]. Although α2M is not an acute phase protein in humans [6][7][8], it is a typical acute phase protein in rats [9][10][11][12][13]. Serum concentrations of α2M show greater sensitivity than α1-acid glycoprotein in rats in response to inflammatory stimulation [10]; thus, α2M is a useful inflammatory marker in rats [12,13]. α2M is synthesized in the liver, and its production decreases with hepatic impairment [14,18]. Many candidate drug substances are reported to induce hepatopathy [15][16][17]. Evaluation of the degree of inflammation using serum concentrations of α2M may therefore give inaccurate results when assessing candidate substances that induce hepatopathy. Serum biochemical parameters, such as AST and ALT, show abnormally high values in rats with hepatopathy, while serum concentrations of α2M are lower than in normal rats [18]. However, the correlation between the extent of liver function failure and the decrease in α2M synthesis in the liver has not been clarified, and it has not been investigated how much liver damage affects the synthesis of α2M. Thus, the cut-off value for reductions in the serum concentration of α2M in rats with hepatopathy was determined from receiver-operating characteristic (ROC) curve analysis. Moreover, correlations between serum biochemical parameters and α2M were investigated in order to clarify how much liver damage affects the synthesis of α2M.
Animals
Twenty male Sprague-Dawley rats (age, 6 weeks) were purchased from CLEA Japan, Inc. (Tokyo, Japan). The rats were divided into two groups: the galactosamine group and the control group. Rats were kept in isolators at a temperature of 23 ± 2°C on a 12/12 h dark/light cycle (6:00-18:00). Rats were fed MF (Oriental Yeast Co., Ltd., Tokyo, Japan) and allowed free access to water.
Animal experimental designs
The animal experimental protocol of this study is shown in Fig. 1. Hepatopathy was induced in 10 rats by intravenous injection of D(+)-galactosamine hydrochloride (Wako Pure Chemical Industries, Ltd., Osaka, Japan) at 300 mg/kg (5 mL/kg) daily for 7 days. The other ten rats (control group) were intravenously injected with sterilized saline. Turpentine oil is known to induce acute inflammation and has been used for this purpose in rats [19]; in this study, turpentine oil (Wako Pure Chemical Industries, Ltd.) was injected intramuscularly at 2.0 mL/kg body weight the day after the end of galactosamine administration. Blood (0.3 mL) was collected from the venae cervicalis superficialis under anesthesia induced by inhalation of isoflurane (Wako Pure Chemical Industries, Ltd.) before turpentine oil injection and at 24, 48, 72 and 96 h after injection. Serum was obtained by centrifugation (1600×g, 15 min) and stored at −80°C until use. All experiments were approved by the Institutional Review Board of Azabu University (approval No. 170324-1).
Measurement of serum concentrations of α2M
Serum concentrations of α2M were measured by enzyme-linked immunosorbent assay (ELISA) according to the procedure described by Honjo et al. [20].
Serum biochemical analysis
Aspartate aminotransferase (AST) and alanine aminotransferase (ALT) were measured by the ultraviolet method. Total protein (TP) was measured by the Biuret method.
Statistics
Data were analyzed using GraphPad Prism 7.0 software (La Jolla, CA, USA). All values are expressed as means ± SEM. The area under the concentration-time curve (AUC) for α2M was calculated according to the trapezoidal rule [21,22]. Differences in serum concentrations of α2M, AST, ALT and TP were assessed using the unpaired Student's t-test. P-values < 0.05 were considered significant. Cut-off values of α2M for detecting hepatopathy were determined from ROC curve analysis.
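As a sketch of the two computations named above (not the authors' code), the trapezoidal AUC0-96 and a Youden-index ROC cut-off can be reproduced with numpy and scikit-learn; the concentration profile and group values below are invented placeholders.

```python
import numpy as np
from sklearn.metrics import roc_curve

# Trapezoidal AUC(0-96) for one animal; times in h, concentrations in mg/mL.
t = np.array([0, 24, 48, 72, 96])
conc = np.array([0.9, 1.8, 1.5, 0.7, 0.4])        # hypothetical alpha2M profile
auc_0_96 = np.trapz(conc, t)                      # units: mg*h/mL
print(f"AUC(0-96) = {auc_0_96:.1f} mg*h/mL")

# ROC cut-off: label 1 = galactosamine (hepatopathy), 0 = control.
labels = np.array([1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0])
aucs = np.array([30, 35, 40, 42, 44, 45, 46, 50, 55,          # hypothetical
                 48, 52, 58, 60, 62, 65, 70, 72, 75, 80])
fpr, tpr, thr = roc_curve(labels, -aucs)   # negate: lower AUC -> hepatopathy
cutoff = -thr[np.argmax(tpr - fpr)]        # threshold maximizing Youden's J
print(f"ROC cut-off for AUC(0-96): {cutoff:.1f} mg*h/mL")
```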
Results
The serum biochemical analysis results are shown in Table 1. One rat in the galactosamine group died at 48 h after turpentine oil injection due to the adverse effects of galactosamine. The mean values of AST and ALT in the galactosamine group were significantly higher than in the control group, while body weight and TP in the galactosamine group were significantly lower than in the control group. Changes in serum concentrations of α2M in the hepatopathy and control groups are shown in Fig. 2, and the kinetic parameters of α2M in Table 2. Mean serum concentrations of α2M at 24, 48 and 72 h after injection of turpentine oil in the galactosamine group were significantly lower than in the control group, as were the mean maximum serum concentration (Cmax) and AUC0-96. The correlations between AUC0-96 and AST, ALT or TP are shown in Fig. 3. Significant negative correlations were observed between AUC0-96 and AST and ALT (AST: r = -0.644, p < 0.05; ALT: r = -0.652, p < 0.05), and a significant positive correlation between AUC0-96 and TP (r = 0.589, p < 0.05). Individual data for AUC0-96, AST, ALT and TP in the galactosamine and control groups are shown in Figs. 4 and 5. The cut-off value for AUC0-96 to detect hepatopathy was 46.9 mg·h/mL by ROC analysis. Seven rats (77.8%, 95% CI: 0.78-1.05) in the galactosamine group were assessed as showing decreased synthesis of α2M in the liver (Fig. 4).
Discussion
We evaluated how the hepatopathy induced by galactosamine affected the synthesis of α2M in rats. Inflammation was induced by injection of turpentine oil in this study. The effects of hepatopathy on the synthesis of α2M in rats could be evaluated after a single administration of turpentine oil [18]; thus, a single injection of turpentine oil was not considered to have influenced the synthesis of α2M, and turpentine oil was used to induce inflammation in this study.

[Fig. 2 caption: Changes in serum concentrations of α2-macroglobulin (α2M) in rats intravenously injected with galactosamine at 300 mg/kg once a day for 7 days. Data are means ± SEM (galactosamine, n = 9; control, n = 10). Differences were compared using the unpaired Student's t-test; *significantly different from controls (p < 0.05).]

Significant differences between the galactosamine group and controls were observed in AST, ALT and TP; administration of galactosamine was therefore considered to have induced hepatopathy in this study. Significant differences were also observed in serum concentrations of α2M, AUC0-96 and Cmax between the galactosamine group and controls, suggesting that the synthesis of α2M changed in rats after hepatopathy was induced by administration of galactosamine. Moreover, a decrease in α2M synthesis was also indicated by the significant negative correlations between AUC0-96 and AST and ALT. Individual data were then evaluated to clarify the degree to which hepatopathy affects the synthesis of α2M. Seven rats were judged to have shown changes in the synthesis of α2M based on the cut-off value of AUC0-96; AST and ALT levels in these seven rats were more than 609 and 482 U/L, respectively. Galactosamine is known to induce hepatopathy in experimental animals [23,24], and hepatopathy model rats have been generated by administration of galactosamine in many studies [23][24][25]. AST and ALT in rats administered a single dose of galactosamine at 1100 mg/kg have been reported to be 100.86 and 121.57 U/L, respectively [26], and AST and ALT in rats administered galactosamine at a dose of 800 mg/kg to be 96 and 199 U/L, respectively [27]. AST and ALT thus showed higher values in this study than in previously reported hepatopathy model rats. On the other hand, AST and ALT in the two rats that showed no effect on the synthesis of α2M were less than or equal to 285 and 174 U/L, respectively. From these results, the synthesis of α2M was considered to be inhibited in the severe stages of hepatopathy. The use of α2M as an inflammatory marker will therefore need to be carefully evaluated in non-clinical studies, particularly toxicological studies that use high dosages or evaluate substances that induce severe hepatopathy.
Inflows of foreign-born physicians and their access to employment and work experiences in health care in Finland: qualitative and quantitative study
Background In many developed countries, including Finland, health care authorities customarily consider the international mobility of physicians as a means for addressing the shortage of general practitioners (GPs). This study i) examined, based on register information, the numbers of foreign-born physicians migrating to Finland and their employment sector, ii) examined, based on qualitative interviews, the foreign-born GPs’ experiences of accessing employment and work in primary care in Finland, and iii) compared experiences based on a survey of the psychosocial work environment among foreign-born physicians working in different health sectors (primary care, hospitals and private sectors). Methods Three different data sets were used: registers, theme interviews among foreign-born GPs (n = 12), and a survey for all (n = 1,292; response rate 42%) foreign-born physicians living in Finland. Methods used in the analyses were qualitative content analysis, analysis of covariance, and logistic regression analysis. Results The number of foreign-born physicians has increased dramatically in Finland since the year 2000. In 2000, a total of 980 foreign-born physicians held a Finnish licence and lived in Finland, accounting for less than 4% of the total number of practising physicians. In 2009, their proportion of all physicians was 8%, and a total of 1,750 foreign-born practising physicians held a Finnish licence and lived in Finland. Non-EU/EEA physicians experienced the difficult licensing process as the main obstacle to accessing work as a physician. Most licensed foreign-born physicians worked in specialist care. Half of the foreign-born GPs could be classified as having an ‘active’ job profile (high job demands and high levels of job control combined) according to Karasek’s demand-control model. In qualitative interviews, work in the Finnish primary health centres was described as multifaceted and challenging, but also stressful. Conclusions Primary care may not be able in the long run to attract a sufficient number of foreign-born GPs to alleviate Finland’s GP shortage, although speeding up the licensing process may bring in more foreign-born physicians to work, at least temporarily, in primary care. For physicians to be retained as active GPs there needs to be improvement in the psychosocial work environment within primary care.
Background
An increasing shortage of general practitioners (GPs) threatens the effective functioning of primary health care in many countries. In the USA, more than 30% of all rural counties have a shortage of GPs [1]. Australia, too, suffers from a GP shortage in rural areas and increasingly also in metropolitan areas [2,3]. In Finland, health care is mainly publicly funded, and responsibility for running the health care system is delegated to local government. Public primary health care is provided by district health centres; local authorities have their own health centres or form joint municipal boards. Primary health care services provided by the private health care sector account for 16% of outpatient physician visits. In addition, occupational health services account for about 13% of outpatient physician visits; these are mainly provided by private sector firms [4]. In Finland, the shortfall from the required number of physicians at primary care health centres was 6% in 2010 [5], considering both domestic and foreign-born physicians.
However, of the total number of GPs, 18% were substitutes and nearly 6% were hired from labour-leasing companies [5]. Foreign-born physicians fill the gap in the medical workforce in many developed countries. For example, in the USA, Australia, and Canada in 2004, nearly 25% of practising physicians were foreign-born [6]. International mobility of physicians to Finland has been low, although the inflow has increased since the end of the 1990s [7]. In 2010, Finland had 1,297 foreign-born physicians licensed to practise, about 10% of the total GP workforce.
More attractive alternatives, such as better working conditions or pay, have been shown to motivate physicians to seek employment abroad [8]. Opportunities for employment are influenced by health and migration policies and the health care system of the receiving country. Physicians may practise their profession only if their qualifications are recognised, if they obtain a residence permit, and if there are jobs available. Even where qualifications are recognised, foreign-born physicians may face a language barrier and lack of knowledge of clinical procedures and the wider organisational culture (e.g., [9]).
To settle in Finland, EU/EEA citizens must have a residence permit issued by the police, while non-EU/EEA citizens must have a residence permit issued by the Finnish Immigration Service [10]. Moreover, physicians must be licensed by the National Supervisory Authority for Welfare and Health (Valvira) in order to be allowed to practise in Finland. Within the European Union, the qualifications of physicians trained in the EU/EEA are recognised according to an EU Directive [11], but there are no such standard procedures within the EU for physicians trained outside the EU/EEA. According to the Directive, recognition of the professional qualification may be granted without consideration of the appropriate language skills. In Finland, however, employers are required to ensure that their employees have sufficient proof of language skills. Physicians whose qualifications were obtained outside the EU/EEA have to produce evidence of sufficient language skills (e.g., qualifications awarded by a recognised language institution), complete additional studies, and/or pass an examination in Finnish in order to be licensed to practice their profession. The examination consists of three parts covering basic knowledge of clinical medicine and health care, basic knowledge of the health-care system in Finland (including issues central to the practice of medicine in Finland), and clinical skills.
There is some evidence from the USA that foreign-born physicians are more likely to practise in underserved areas and in primary health care than native physicians [12]; however, the opposite is demonstrated by the findings of Baer et al. [13]. An earlier study conducted in the USA found that foreign-born GPs were less satisfied with primary health care work than native GPs [14], and a previous Finnish study found that the intent to leave a job is more prevalent among foreign-born GPs than among Finnish GPs [15]. It has repeatedly been shown that the psychosocial work environment, pertaining to interpersonal and social interactions in the workplace, plays a central role in the work-related wellbeing and job satisfaction of employees. According to Karasek's [16] widely used demand-control model, a job with low control combined with high demands, such as time pressure, is particularly distressing, and such high-strain jobs have been shown to have negative impacts on both individual employees and the organisation [17,18]. Linzer found that time pressure and chaotic workplace environments, low work control, and an unfavourable organisational culture were associated with physician dissatisfaction, stress, burnout, and intent to leave a job among GPs [19]. A previous study in Finland showed GPs to be less committed to their job than other physicians because of poorer working conditions in primary health care [20]. A previous follow-up study in Finland showed that patient-related stress and frustration with electronic patient record systems increased in Finland between 2006 and 2010 [21]. Another earlier Finnish study suggested that public-sector physicians were less satisfied with and committed to their job than private-sector physicians; private-sector physicians also described fewer psychosocial disorders and sleep problems [22]. In Sweden, too, public-sector physicians seemed less satisfied with their workplace environment than private-sector physicians [23]. However, little is known about the experiences of foreign-born physicians in a receiving country with regard to the psychosocial workplace environment or job satisfaction, or about whether there are differences in these between the various health care sectors. The aim of the present study was, thus, i) to examine, based on register information, the numbers of foreign-born physicians migrating to Finland and their employment sector; ii) to examine, based on qualitative interviews, the foreign-born GPs' experiences of accessing employment and work in primary health care in Finland; and, finally, iii) to compare experiences of the psychosocial workplace environment among foreign-born physicians working in various health care sectors (primary health care, hospitals, and the private sector). Three data sources were used in order to establish a comprehensive picture of foreign-born physicians' migration, access to employment, and work experiences in Finland. The term 'foreign-born physician' refers here to a physician born and trained outside Finland, whether a foreign national or a person born abroad who now holds Finnish citizenship. The definition does not include physicians who were born in Finland but trained abroad.
Data
Three different data sets were used in this study to answer the study questions above. The study questions are linked to the data sources so that the register data answer the first study question, the qualitative data the second, and the survey questionnaire the third.
Study question 1
The numbers of foreign-born physicians migrating to Finland and their employment sectors were obtained from administrative registers, such as those of the tax administration, which form the basis of the population censuses maintained by Statistics Finland. The data obtained from Statistics Finland came from the publicly available databases on their website (http://www.tilastokeskus.fi/). The number of new licences for foreign-born physicians was obtained from the National Supervisory Authority for Welfare and Health (Valvira). The contact information for foreign-born physicians was applied for and obtained from Valvira for the purpose of conducting this study. The numbers of foreign-born physicians reported in the results section were calculated from the data obtained.
Study question 2
Qualitative data were collected from foreign-born GPs working in Finland in order to document their experiences of the licensing process and of employment and work in primary care. The qualitative findings were also partly used to design measures for the questionnaire survey. The absence of previous research on this issue in Finland led us to choose theme interviews as the research method, with the aim of forming hypotheses about the potential problems encountered by foreign-born GPs in employment and working life in Finland. The 12 interviews provided us with enough information on the topics of interest. The interviews were carried out at six health centres between September 2009 and January 2010. The themes of the interviews related to how the GPs came to Finland, their experiences of the licensing process and integration into the Finnish health care system, job satisfaction, language skills, and their career choices and future plans. The interviews lasted from 45 to 90 minutes, and were audio-recorded with the interviewees' permission and transcribed verbatim. The transcripts consisted of 106 pages of single-spaced text. The interviewees, of whom seven were women, varied in age from 30 to 60. The length of their stay in Finland ranged from 4 to 19 years. Six originally came from Russia, two from EU/EEA Member States, and the remaining four from countries outside these areas.
Study question 3
Once the interviews were complete, we conducted a web-based questionnaire survey to examine foreign-born physicians' experiences of the psychosocial work environment and job satisfaction. Although the measures in the survey partly arose from the interviews, we also included validated measures from previous studies among native physicians. The invitation went to all foreign-born physicians licensed to practise and living in Finland in 2010 (n = 1,297). This number is smaller than the register-based figure because of stricter selection rules: we included only those physicians who had been licensed and lived in Finland. While intended to be answered in Finnish, the questionnaire was also translated into English, Swedish, Russian, and Estonian. A link to the electronic questionnaire was sent to the physicians by email during autumn 2010, with up to three reminders. After the first round, printed questionnaires were mailed to non-responders with one reminder. Altogether 553 of the original 1,297 foreign-born physicians responded, giving a response rate of 42%. For the present analysis the sample was restricted to those working in the health care sector (public primary or specialized care or the private sector, n = 498).
Ethical approval for the study was obtained from the Ethics Committee of the National Institute of Health and Welfare (Approval number 7/2010).
Measures
High job demand was measured by a 5-item scale (e.g., 'Constant rush and pressure due to non-completed work'; response scale 1 = never, 2 = seldom, 3 = occasionally, 4 = quite often, 5 = all the time) derived from Harris' (1989) stress index [24] (α = 0.87). Job control was measured by the decision authority scale (3 items, α = 0.76) derived from Karasek's Job Content Questionnaire [25]. An item example is 'I can make independent decisions in my work', with response alternatives 1 = completely disagree, 2 = somewhat disagree, 3 = undecided, 4 = somewhat agree, 5 = strongly agree. For descriptive purposes, the job demand and job control scales were categorized so that scores of 3 or lower indicated low job demand/control and scores above 3 high job demand/control. From these categorized variables we computed a combined variable representing Karasek's demand-control model: passive work (1 = low job demand and low control, 0 = other combinations), active work (1 = high job demand and high control, 0 = other combinations), high strain work (1 = high job demand and low control, 0 = other combinations), and low strain work (1 = low job demand and high control, 0 = other combinations).
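A minimal sketch of the categorization just described, dichotomizing the scale means at the midpoint of 3 and combining them into Karasek's four job types (the column names are illustrative, not the study's variable names):

```python
import pandas as pd

def karasek_type(df: pd.DataFrame) -> pd.Series:
    """Classify respondents into Karasek's four demand-control job types."""
    high_demand = df["job_demand"] > 3    # scale mean above midpoint 3
    high_control = df["job_control"] > 3

    def label(d: bool, c: bool) -> str:
        if d and c:
            return "active"          # high demand, high control
        if d:
            return "high strain"     # high demand, low control
        if c:
            return "low strain"      # low demand, high control
        return "passive"             # low demand, low control

    return pd.Series(list(map(label, high_demand, high_control)),
                     index=df.index, name="karasek")
```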
Patient-related stress was measured with a 3-item scale (α = 0.84) derived from the health care stress questionnaire [26] (item example: 'Patients are unwilling to co-operate and are passive'). Stress related to patient information systems was measured with a self-developed 2-item scale (α = 0.82), the two items being 'constantly changing data-systems' and 'poorly working tele-informatic programmes'. Lack of professional support was also measured with a self-developed 2-item scale (item example: 'possibility to consult', α = 0.62). Stress related to teamwork was measured with a 4-item scale (e.g., 'Human relationship problems in the workplace', α = 0.85) derived from Harris' stress index [24]. These stress scales used the same response scale as for job demands, and were also categorized using score 3 as a cut-off point.
Job satisfaction was measured by 3 items from Hackman and Oldham's (1975) Job Diagnostic Survey [27] (e.g., 'I am satisfied with my work', α = 0.79). Job involvement was measured by 3 items developed by Lawler and Hall (1970) [28] (e.g., 'The most important things that happen to me involve my job', α = 0.83). Team climate was measured by a 4-item Team Climate Inventory [29], an item example being 'We have a "we are together" attitude' (α = 0.88). The response scale for the job satisfaction, job involvement, and team climate scales was 1 = completely disagree, 2 = somewhat disagree, 3 = undecided, 4 = somewhat agree, 5 = strongly agree. For descriptive purposes, the variables were categorized using score 3 as the cut-off (scores above 3 indicating strong job satisfaction/job involvement and a good team climate).
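Each scale above is summarized with Cronbach's α; for reference, a minimal sketch of the statistic, where items is a DataFrame with one column per scale item (illustrative, not the study's code):

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a multi-item scale (rows = respondents)."""
    k = items.shape[1]                         # number of items
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the sum score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)
```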
Analysis
Frequency tables were calculated for trends in migration from Statistics Finland for the years 1990, 1995, 2000, 2005, and 2009. The total numbers of foreign-born physicians were compared to the total numbers of Finnish physicians in order to calculate the proportion of foreign-born physicians. The numbers of new licences were obtained from the National Supervisory Authority for Welfare and Health (Valvira). In addition, foreign-born physicians' countries of origin were obtained from the Medical Association database for the year 2013.
We chose qualitative content analysis as the analysis method for the theme interviews. The analysis proceeded inductively from smaller categories to major categories. The data were coded using Atlas.ti software. After several readings, the data were classified independently by three of the authors (HK, RL, KM) into 81 subcategories. The results were compared, discrepancies discussed, and data merged into seven categories representing different aspects of the licensing process for foreign-born physicians and their experiences in primary care work.
We analyzed differences in the psychosocial work environment between foreign-born GPs and other foreign-born physicians using ANCOVA for the continuous variables, adjusting for background factors (age, gender, country of origin, length of stay in Finland, reason for migration, and specialization); we present adjusted means and F-statistics for these differences. Bonferroni correction was applied in multiple pairwise comparisons. For the categorical variables (Karasek's job strain typology), unadjusted differences between GPs and other physicians were first tested by the χ² test. We also used multivariate logistic regression analyses to examine the differences between foreign-born GPs and other foreign-born physicians, adjusting for the background variables. All statistical analyses were conducted using SPSS software, version 19.0.
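A hedged sketch of the two analyses named above, using Python's statsmodels rather than SPSS: the ANCOVA as an OLS model with a categorical sector term plus covariates, and a logistic regression for a dichotomized Karasek outcome. Here df is assumed to be a DataFrame of survey responses, and all variable and level names are illustrative placeholders, not the study's data.

```python
import statsmodels.formula.api as smf

# ANCOVA: sector differences in a continuous scale, adjusted for
# background factors; the F-test is on the sector term.
ancova = smf.ols(
    "job_demand ~ C(sector) + age + C(gender) + C(origin)"
    " + years_in_finland + C(migration_reason) + C(specialist)",
    data=df).fit()
print(ancova.f_test("C(sector)[T.hospital] = 0, C(sector)[T.private] = 0"))

# Logistic regression: odds of 'active' work by sector, adjusted for
# the same background variables.
logit = smf.logit(
    "active_work ~ C(sector) + age + C(gender) + C(origin)"
    " + years_in_finland + C(migration_reason) + C(specialist)",
    data=df).fit()
print(logit.summary())
```

Bonferroni correction for the pairwise sector comparisons can then be applied by multiplying the pairwise p-values by the number of comparisons.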
Results from the registers
The number of foreign-born physicians in Finland remained low until the end of the 1990s, but has increased significantly over the last 10 years. According to Statistics Finland, in the year 2000 a total of 980 foreign-born physicians held a Finnish licence and lived in Finland, accounting for less than 4% of the total number of practising physicians. In 2009, their proportion was 8%, and a total of 1,750 foreign-born practising physicians held a Finnish licence and lived in Finland (Figure 1).
According to Valvira statistics, in 1990 nine foreign-born physicians were granted a licence to practise medicine in Finland; in 2009, licences were granted to 270 foreign-born physicians (Figure 1). In 2009, most of the practising foreign-born physicians (82%) were employed in health and social services. Public hospitals were the biggest employer (47% of the physicians working in health and social services), while public primary care employed 22% and the private sector 9% of the foreign-born physicians. In comparison, 82% of Finnish-born physicians were employed in health and social services, of whom 43% worked in public hospitals and 18% in primary care; the private sector employed 16% of Finnish-born physicians.
Interview results
One of the major concerns, especially among physicians trained outside the EU/EEA trying to enter the profession in Finland, was that access to work in the health care sector was made problematic by a difficult licensing process. We identified four themes of problems: lack of information, bureaucratic difficulties, lack of support with language studies, and unfair test requirements. Information about language courses, the licensing process, and available jobs was lacking from official sources, such as employment offices, and from superiors. This was especially true for physicians trained outside the EU/EEA. "I've found that chief physicians, they don't know how it [the licensing process] works for foreign physicians, they don't know the difference between qualifying in Europe or outside Europe" (P1). It appears that the Finnish system failed to support the foreign GPs' language training, and their lack of language skills prevented them from entering the Finnish system. The Finnish language is difficult to learn, and language courses were described as being in short supply or of poor quality. "The students were at very different levels, some did not even know the alphabet, so we spent time just learning letters" (P10). GPs from outside the EU/EEA described the test requirements for obtaining a licence as excessive, and test requirements and practices were not even consistent over time. "I remember the hall was so full, about 100 people in there, and three months later they announced that only one person had passed the test. What kind of a test is it that an entire hall full of people can fail it? It's just not fair" (P12). Even though foreign GPs were struggling to make decisions based on fragmented information, they had also come up with various strategies for gaining experience in order to achieve their goal of practising medicine in Finland. "Then I went to Meilahti Hospital and said, I'm a doctor, I want to get to know the system, how it works, I don't have a licence yet but could come here to learn" (P3).
Once the foreign-born physicians were licensed and employed, they seemed relatively satisfied at work. We identified four themes that describe work in primary health care. The heavy workload seemed to be a common concern and a 'chronic condition' among foreign-born GPs. The other themes were mixed; we described them as 'the role of the orientation to work' and 'patient work is satisfying yet stressful'.
Comprehensive work orientation and consultation opportunities were highly valued and often, but not always, experienced at the workplace. Foreign physicians were sometimes treated like doctors doing specialist training in general practice, which was much appreciated because they were assigned fewer patients per day, and they also had a senior GP as a support person. Peer support from other foreign physicians was also considered valuable by GPs of foreign origin, especially those who had little working experience: "There might be a question about treatment where he [a foreign colleague] might have more experience of how they do things at the health centre" (P4). GPs felt that they had good social relationships with colleagues and other team members at the workplace. However, it was also suggested that contacts with colleagues were superficial, and that the Finnish way of communicating could be strange for a foreigner.
Positive feedback from patients increased the GPs' job satisfaction and encouraged them in their work. Negative feedback from patients naturally bothered the GPs, even outside work. The GPs valued no-nonsense patients who were willing to cooperate with the physician. Certain patient groups were found to be challenging, e.g., patients with multiple diseases, whose treatment would have required more time than allocated. "Chronic diseases are the tough ones, there are elderly people who have a lot of different medications and complaints, and the appointment time is limited, and in that time you should be able to find out exactly what the problem is" (P12). Consumer-oriented patients who had specific expectations and directions for the GP consultation, or who just wanted a referral to a specialist, were found to be demanding and not always appreciative of the work and expertise of the GP. The heavy workload resulting from the shortage of GPs was considered 'a chronic condition' at health centres. Foreign-born GPs worked long hours and sometimes continued working after the health centre had closed. This could be down to a sense of duty, or an inability to cover the workload and paperwork in the time allocated. "It just wasn't possible. Everything else [but patient work] had to be done in the evening, sometimes until 19.00, they were really long days" (P3). The foreign-born GPs viewed the general GP shortage as the cause of the increased workload and an important negative factor for job satisfaction. "Physicians may leave for whatever reason, or a position remains unfilled, and the work is just piled on to all the others. We're busy all the time and people are waiting, patients become impatient and angry, they're given a 20-minute appointment and they have a hundred and one things to sort out. I've never treated them before, and in that time I can't solve all their problems, but since I do want to help them, the appointment drags into overtime […]" (P9). GPs maintained that too much time was spent on paperwork, e.g., making entries in the electronic patient record system, or on administrative meetings during on-call duties. They also found patient information systems complex because of the many different systems in use, depending on the health centres in which they worked.

Survey results

Table 1 shows the characteristics of the survey sample. Most foreign-born physicians were female, and the mean age was 44.6 years (range 24-69 years, SD = 10.6). Most of the foreign-born GPs came from the Russian Federation, while other physicians, such as those working in hospitals or in the private sector, came from Estonia or other countries. Most foreign-born GPs and private-sector physicians reported having obtained their residence permit in Finland for family reasons; the most common reason among physicians in specialized care was work-based migration.
Foreign-born public-sector physicians (GPs and medical specialists) experienced higher job demand than foreign-born private-sector physicians (Table 2). Private-sector physicians experienced higher job control than medical specialists. Foreign-born GPs experienced more patient-related stress than foreign-born medical specialists and private-sector physicians. Foreign-born public-sector physicians' frustration with electronic patient record systems and stress related to teamwork were higher than among private-sector physicians. Job satisfaction differed significantly between foreign-born GPs and foreign-born private-sector physicians. No significant differences were found in professional support, job involvement, or team climate among foreign-born GPs, medical specialists, and private-sector physicians.
In the combined measure for the job strain typology [16], half of the GPs were classified as having active work, one third as having low strain work, 16% as having high strain work, and 2% as having passive work (Table 3). GPs differed significantly from physicians working in specialized care or in the private sector in more often having active work and less often low strain work (Table 3). Furthermore, 16% of GPs and 17% of physicians in specialized care had high strain work, while none of the physicians in the private sector was classified in the high strain group. In logistic regression analyses (adjusted for background factors), the differences between GPs and other physicians in active work remained significant (P < 0.001), as did the differences between GPs and private-sector physicians in low strain jobs (P < 0.001). The small number of cases in the cells meant that adjustment was not possible in the models for passive and high strain work.
Discussion
This study examined how foreign-born GPs entered the profession and how they experienced working in Finnish health centres. Three different data sets were used: register information, theme interviews, and survey data. The study showed that the number of foreign-born physicians has increased dramatically in Finland since the year 2000. The shortage of physicians has often been seen as a powerful spur for the international migration of physicians [30]. The increasing inflow of physicians in the Finnish context was partly enabled by Finland joining the EU in 1995. A change in the Finnish policy environment, from a mainly humanitarian-based immigration policy to one of enhancing work-related immigration [31] in response to the challenges of an ageing population and workforce shortages, may also have contributed to the increased inflow. Most foreign-born physicians immigrate to Finland from the Russian Federation or Estonia. Russian physicians appear to be drawn to Finland by kinship ties and family already living there, while Estonian physicians arrive mainly on account of work-related factors. The importance of 'family proximity' has also been demonstrated among overseas-trained physicians in Australia [32], and of geographical proximity in some European countries [33]. Our results also support previous findings from the USA that proximity of the destination country and GDP per capita are the two main predictors of physicians' migration [34]. Higher salaries and better working conditions have been the main emigration factors for Estonian physicians [35] and for physicians in some other countries [36]. Increasing mobility from Estonia to Finland may have a negative impact on health system performance in Estonia; however, we lack information on the size of this phenomenon. It seems that Finland fails to attract large numbers of physicians from other European countries or overseas, probably because of its geographical location and language difficulties. The slow licensing process for non-EU/EEA physicians hindered access to work in Finland, and was experienced as an uninviting, inconsistent, and confusing process with many obstacles. Key issues were insufficient information on the licensing process, lack of support with language studies, and test requirements viewed as difficult. Similar difficulties have been experienced in Canada with the licensing of physicians trained abroad [37,38]. Within Europe, the qualifications of physicians trained in the EU/EEA are recognised by EU Directive [11], but there are no consistent practices for physicians trained outside the EU/EEA [39]. According to the present study, the challenge in the licensing process lies in increasing the availability of information concerning the test requirements and in providing language courses, especially for physicians trained outside the EU/EEA. According to Haukilahti, only one out of five non-EU/EEA physicians who took the licensing examination in Finland between 1994 and 2009 passed the examination on their first attempt [40]. This may indicate that many foreign-born physicians work in areas other than the health care sector or are unemployed.
While most foreign-born physicians worked in specialized care, that sector is large in Finland and is the biggest employer of the medical workforce regardless of origin. According to a physician survey from 2009, almost half of working-age physicians worked in specialized health care and 21% in health centres [41]. This may indicate that foreign-born physicians prefer applying to specialties that are popular among Finnish physicians as well. However, there may be characteristics of hospital work that make specialized care particularly attractive to foreign-born physicians. The language requirement for some hospital physicians may be less strict than for GPs, because hospitals have several specialties (e.g., surgery or anaesthesiology) that may not require comprehensive language skills. Foreign-born physicians may also prefer hospitals because of previous working experience in their country of origin, an ambition to become a specialist, or poorer career options and working conditions in primary care. On the other hand, this may also suggest that primary care is not able to attract foreign-born GPs and that there is thus a need to consider other options for making work in primary care attractive to foreign-born physicians.
The psychosocial working environment of foreign-born GPs was mainly characterized by high job demand and broad opportunities for controlling their own work. In previous studies, primary care work has been discussed in terms of high strain work or its components, high workload and low job control, which have been associated with poor well-being [14], high absenteeism [15], low commitment [18], and retirement intentions [42]. However, in the present study, half of the foreign-born GPs in the survey could be classified as having an 'active' job profile according to Karasek's demand-control model [22]. Active work was even more common among GPs than among other foreign-born physicians. This kind of work is associated with positive outcomes such as job challenge and satisfaction [43]. The divergent findings could be a result of methodological differences. Usually the demand-control model has been used for predictive purposes, and high vs. low conditions in demand and control have been defined based on the distribution of responses in specific data sets. In this study, we used Karasek's typology for descriptive purposes, and therefore defined high time pressure and high job control based on the initial response alternatives of the scales. An alternative explanation could be that foreign-born physicians assess their working environment by different standards from native physicians, for example comparing their experiences to those from their country of origin.
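To make the descriptive use of the typology concrete, the following minimal sketch (in Python) classifies a respondent into the four quadrants of the demand-control model. The cut-off values here are hypothetical placeholders: in the study, 'high' demand and control were defined from the response alternatives of the survey's own scales, which are not reproduced here.

```python
# Minimal sketch of Karasek's demand-control typology used descriptively.
# The cut-offs and scale ranges below are hypothetical placeholders, not
# the study's own scale definitions.

def job_strain_type(demand: float, control: float,
                    demand_cutoff: float = 3.0, control_cutoff: float = 3.0) -> str:
    """Classify a respondent into one of Karasek's four quadrants."""
    high_demand = demand >= demand_cutoff
    high_control = control >= control_cutoff
    if high_demand and high_control:
        return "active"        # challenging but controllable work
    if high_demand and not high_control:
        return "high strain"   # the risk profile for poor well-being
    if not high_demand and high_control:
        return "low strain"
    return "passive"

# Example: a GP reporting high time pressure (4) and high job control (4)
print(job_strain_type(4, 4))  # -> "active"
```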
Our results from the theme interviews mirrored the survey results: on top of the recurrent theme of job demand, the interviews also revealed that the multifaceted work in health centres, and the diversity of expertise it requires, may be experienced as inspiring by some foreign-born physicians. This finding is in line with an earlier interview study among Finnish GPs, in which the comprehensive work in primary care was experienced as both stressful and positively challenging [44]. Information systems in health care have been severely criticized by physicians [45], and poorly functioning electronic patient information (ICT) systems were also a major source of stress among foreign-born physicians in the present study, particularly in public primary or specialized care. The interviewees described the ICT systems as complicated to use, with entries taking a long time.
Patient-related stress was more common among foreign-born GPs than among other foreign-born physicians working in other health care sectors. This has also been found in previous studies among native physicians [46] and native GPs. The foreign-born GPs interviewed experienced particular challenges with patients with multiple conditions or mental health problems, and with demanding, consumer-oriented patients.
Another theme recurring in the interviews was the importance of social relationships and support from colleagues, and foreign-born physicians evaluated team climate positively in the survey, regardless of the health care sector in which they worked. The need for support among foreign-born physicians has also been recognized elsewhere, and orientation and mentoring programmes for foreign-born physicians [47,48] have been found useful in promoting their integration.
This study used three different data sets to look at the same phenomenon from different perspectives, which may increase the credibility of our results. The qualitative data were based on a relatively small number of interviews. With the survey data, we were able to compare the themes and issues that emerged from the interviews with the responses of foreign-born physicians working in Finnish health care. The results are specific to the Finnish health care context and cannot be generalised directly to other countries. In addition, the qualitative data were gathered in the Helsinki metropolitan area, so the results may not be comparable with those for foreign-born physicians living in rural areas. Moreover, although six of the interviewed physicians came from Russia, which is the largest source country of foreign-born physicians in Finland, the remaining interviewed physicians do not reflect the whole range of nationalities in the total population of foreign-born GPs in Finland.
In this study, we have only been able to capture the licensed foreign-born physicians who live in Finland, and we thus lack information on temporary workers and on physicians who live in Finland but have not been licensed there. Our results regarding the licensing process therefore reflect only the experiences of those who had passed through the process or were still involved with it. The interviews were conducted in Finnish, which was not the interviewees' native language. Although the respondents in the interviews spoke good Finnish, some personal experiences may be easier to discuss in one's native language. The respondents were able to choose from several language versions of the survey. The response rate among foreign-born physicians was relatively low (42%), and employed and female physicians were over-represented among the respondents [49]. Furthermore, the cross-sectional design of the survey prevents us from making causal interpretations, while results may be inflated through the use of self-reported data. To minimize problems with self-reports, we used well-known validated measures that have shown good reliability.
Conclusions
While the number of foreign-born physicians has increased rapidly in Finland since the year 2000, the licensing process was experienced as exclusionary and particularly unfair by foreign-born physicians trained outside the EU/EEA. Most foreign-born physicians seemed to work in specialized medical care, followed by primary care. Those working in primary care seemed to experience problems similar to those reported by native Finnish GPs in previous studies, such as high job demand and high patient-related stress. However, demands were often associated with high job control among foreign-born GPs, and the comprehensive work in health centres was found both stressful and inspiring. Improvements are necessary in employee retention and in the management of the primary care function if we are to keep physicians working as GPs, whether foreign-born or native.
Negative impacts of job demands could be decreased, for example, by dividing tasks between nurses and GPs. This study also suggests that investing more in professional support could make the job of GPs easier. In addition, ICT systems could be unified and made easier to use in everyday work. Effective solutions are often context-related, and thus priority should be given to the local and organizational level.
This study indicates that primary care may not be able, in the long run, to attract a sufficient number of foreign-born GPs to alleviate Finland's GP shortage. One way to ease foreign-born physicians' employment in Finland is to speed up the licensing process, e.g., by providing easier access to training and language courses. This could be implemented in workplaces such as health care centres. Speeding up the licensing process may bring more foreign-born physicians to work, at least temporarily, in primary care.
Robust output convergence of heterogeneous networks via nonsmooth hard-threshold couplings
We study a set-valued maximal monotone coupling law achieving robust output convergence in heterogeneous networks of dynamical systems with uncertainties and persistent disturbances. The coupling consists of an adaptable strategy built from normal cones to convex time-dependent sets (hard-threshold maps). To guarantee the convergence of the output mismatches to a neighborhood of the origin, only connectivity of the underlying graph is required (knowledge of the graph's algebraic connectivity is not required), while only the outputs of the associated systems are used. Numerical simulations illustrate the effectiveness of the proposed coupling scheme.
Introduction
The study of interacting dynamical systems showing synchronized trajectories has received much attention in recent years; see, e.g., [5,17,19,21,22,47] and references therein. Such intense focus has been motivated by the wide range of applications coming from almost all fields of science: power grids in engineering [36]; biological and artificial neural networks in neuroscience and computer science [32]; gene regulatory networks in biology [26]; distributed resource allocation in operations research [57]; and opinion formation in sociology [9] are just some examples from the growing list of applications reported in the literature. In the control and systems community, synchronization has become a pivotal property, studied from different perspectives and under different contexts. At first, ideal situations were considered, in which disturbances and uncertainties were absent [44,41,45,52]. In such contexts, linear and, more generally, smooth approaches (such as the master-stability function and contraction theory; see, e.g., [4,50,41]) are the dominant tools.
Nowadays, the complexity of the studied networks has increased, as more complicated dynamics are considered together with a lack of global uniformity in the models describing the individual systems [34,28,39]. Such heterogeneity in the network may be present for different reasons. For instance, each system in the network may be described by a mathematical model different from that of its neighbors, possibly with a state space of different dimension. In such a case, full synchronization is not possible. Nevertheless, synchronization of some of the state variables (output synchronization) may still be attainable, depending on the structure of the systems and their interactions. In other cases, even if each system belongs to the same model class, variations or uncertainties in the parameter values give rise to heterogeneity. Heterogeneity may also arise from external persistent disturbances affecting some of the agents. It is clear that when all the aforementioned sources are combined in a network, the analysis of such systems becomes a challenging task.
Some recent works dealing with the synchronization of heterogeneous networks include [12,16,28,30,34,39,55]. These works focus on one of the following types of interconnection: i) diffusive coupling; ii) nonlinear (high-gain) time-varying coupling; or iii) discontinuous coupling. For the case of diffusive coupling, it is shown in [28,34,39,55] that under suitable assumptions (such as connectivity of the network, and semipassivity or QUAD-ness of the vector fields), the synchronization error diminishes as the amplitude of the coupling signal increases. It is shown that, in general, an interconnection signal of infinite amplitude is needed in order to force the exact invariance of the synchronization manifold, so that asymptotic synchronization cannot be attained in practice. For the case of nonlinear coupling, [12,30] showed, under assumptions similar to those of the diffusive case, that a better performance is obtained with high-gain time-varying interconnections. In [30], nonlinear coupling strategies inspired by the literature on funnel control are studied. It is shown there that such strategies achieve high precision in the presence of heterogeneity. However, the coupling applied requires knowledge of the full state. Similar interaction laws were proposed in [12] regarding adaptable dead-zone maps for the case of identical systems under external noise perturbations. In such a case, the dead-zone induces an implicit funnel for the synchronization error, and the adaptation mechanism drives the funnel towards a neighborhood of the origin, showing once again high precision for the synchronized trajectories. Finally, for the case of discontinuous coupling, [16] established interesting properties of the network that are not shared by its smooth counterparts. Indeed, [16] showed that discontinuous couplings can achieve the asymptotic convergence of the full state with finite coupling strength. Their approach is reminiscent of sliding-mode control techniques for lumped systems, where the synchronization manifold is seen, to some extent, as a sliding surface for the synchronization error. However, it is noteworthy that the asymptotic convergence in [16] holds only at the theoretical level, as the coupling is implemented in a regularized fashion to avoid the so-called chattering effect, thus adding a boundary layer around the synchronization manifold and leading to practical synchronization.
Notably, nonsmooth control techniques have shown remarkable performance when dealing with uncertainties and disturbances, endowing the closed loop with interesting properties such as finite-time convergence and model reduction; see, e.g., [31,33,51]. In contrast, to achieve such distinctive performance, special attention must be paid at the implementation level: without an appropriate implementation, those techniques may lead to the appearance of chattering, which degrades performance and the useful life of components. Regarding networks of systems, the prevalent source of nonsmoothness concerns discontinuous (switched) and impulsive couplings, which have been addressed in recent studies such as [14,15,24,27,42,49,54,56]. There, the main concern is the synchronization of systems under interactions that change abruptly, or in cases where the communication channels put constraints on the coupling between systems. Beyond such cases, however, very little is known about the performance and the implementation of more general nonsmooth strategies. Thus, it becomes natural to consider the robust output synchronization problem under nonsmooth couplings, as a way to counteract the effects caused by disturbances and uncertainties affecting the network.
In this paper we study the robust output convergence of heterogeneous networks using tools from the theory of differential inclusions and convex analysis. The term convergence is used in this paper, instead of the term "synchronization", as the amplitude of the coupling signals is not constrained to be small [43, Section 1.2.1]. First, some theoretical results are presented regarding perfect output convergence in the presence of uncertainties and persistent disturbances. Later, practical convergence is studied by considering a discrete-time implementation of the ideal coupling. The proposed coupling scheme has a strong connection with funnel coupling [30], adaptable dead-zone coupling [12], constrained differential inclusions [40], and perturbed Moreau sweeping processes [35]. Intuitively, it can be seen as a nonsmooth generalization of the funnel coupling in [30] and the dead-zone coupling in [12]. It consists in applying a correction each time the mismatches between the outputs of neighbors reach the boundary of a control set S(t), so that S(t) is rendered positively invariant. Such correction terms are implemented via hard thresholds to the set S(t), so that the coupling action is inactive whenever the mismatches reside in the interior of S(t). Additionally, S(t) is designed so that it contracts asymptotically towards zero at a rate depending on the aforementioned mismatches. In this way, output convergence is easily achieved by means of high-gain couplings. The main problem that arises with the proposed strategy concerns the well-posedness (existence and uniqueness of absolutely continuous solutions) of the interconnected systems, as the operators characterizing the hard-threshold mapping are set-valued and locally unbounded on the boundary of the set S(t). That is, the associated differential inclusion is not of Filippov type [48, Definition 2.2]. Thus, special attention is paid to the well-posedness of the problem, and sufficient conditions for the existence of solutions are presented in Theorem 6. Finally, it is shown that the proposed strategy is robust against parametric uncertainties, as well as external disturbances. The main contributions of the paper are summarized as follows: • The proof of existence of absolutely continuous solutions for set-valued heterogeneous networks with nonsmooth hard-threshold couplings, where the dimension of the state at each vertex is not necessarily the same.
• The robust output convergence in the presence of unmatched, persistent disturbances.
• The presentation of two numerical approaches (one centralized and one fully distributed) for the implementation of the proposed nonsmooth schemes in digital computers.
• The extension of the funnel-coupling proposed in [30] to the multivariable, nonsmooth case.
The paper is organized as follows. The next section recalls some results from convex analysis and fixes the notation used throughout the paper. Section 3 formulates the problem and introduces the proposed coupling strategy. The formal proof concerning the well-posedness of the coupled system is deferred to Section 6. Section 4 studies the asymptotic properties of the interconnected system in a robust framework, whereas Section 5 presents two specific numerical approaches for the discrete-time implementation of the proposed schemes in digital computers. The paper ends with Section 7, where conclusions are presented.
Notation and preliminaries
The Euclidean inner product is represented as ⟨•, •⟩ and the associated norm as ∥•∥. The p-norm of a vector is denoted as ∥•∥_p, for p ∈ [1, +∞]. For the sake of simplicity we drop the subindex for the 2-norm. In the cases when the argument of the norm is a matrix, A ∈ R^{r×l}, the induced norm is considered, that is, ∥A∥ = sup_{∥x∥=1} ∥Ax∥. The null and range subspaces of A are represented as null A and rge A, respectively. The matrix A^† ∈ R^{l×r} denotes the Moore-Penrose pseudoinverse of A. The identity matrix is denoted as I_l ∈ R^{l×l}, whereas Id : R^l → R^l denotes the identity map. The set B_l denotes the closed unit ball in R^l with center at zero (in the cases where the dimension is clear from the context we will drop the subindex l). For any two nonempty sets U, V ⊆ R^l, U + V = {u + v | u ∈ U, v ∈ V}, and AU = {Au | u ∈ U}. The interior of a set U is denoted as int U. The spaces L^1([0, T]; R^l) and L^∞([0, T]; R^l) correspond, respectively, to the Lebesgue spaces of all absolutely integrable and essentially bounded functions from [0, T] into R^l. For a sequence {f_k}_{k∈N} ⊂ L^1([0, T]; R^l), the notation f_k ⇀ f denotes convergence in the weak topology of L^1([0, T]; R^l), whereas if {f_k}_{k∈N} ⊂ L^∞([0, T]; R^l), then the notation f_k *⇀ f denotes convergence in the weak* topology of L^∞([0, T]; R^l). The reader is referred to [10, Chapter 3] for further details on these notions of convergence.
Elements from convex analysis
Let U, V ⊂ R^l be two nonempty, closed, convex sets. The distance from a point x ∈ R^l to U is given as dist(x, U) = min_{y∈U} ∥x − y∥ = ∥x − proj(U; x)∥, where proj(U; x) denotes the projection of x onto the set U, i.e., proj(U; x) = argmin_{y∈U} ∥x − y∥. (1) Let φ : R^l → R ∪ {+∞} be a proper, convex, lower semicontinuous function. The convex subdifferential of φ at x is defined as the set-valued map ∂φ(x) = {w ∈ R^l | φ(y) ≥ φ(x) + ⟨w, y − x⟩ for all y ∈ R^l}. One of the most important properties of convex subdifferentials, which is central in the developments that follow, is their maximal monotonicity. A set-valued map M : R^l ⇒ R^l is monotone (in the sense of Minty-Browder) if, for any two pairs (x_1, w_1), (x_2, w_2) in its graph, ⟨w_1 − w_2, x_1 − x_2⟩ ≥ 0. In addition, M is maximal monotone if it is monotone and its graph is not strictly contained in the graph of any other monotone map. Maximal monotonicity guarantees that, for any r > 0, the map (Id + rM)^{−1} is single-valued and defined on the whole of R^l. The proximal map to φ is given as prox(φ; x) = argmin_{y∈R^l} {φ(y) + (1/2)∥x − y∥²}, (2) and it is related to the convex subdifferential of φ via the following expression (see, e.g., [7, Theorem 16.34]): prox(φ; x) = (Id + ∂φ)^{−1}(x). (3) The right-hand side of (3) is also called the resolvent of ∂φ at x; see, e.g., [7, Chapter 23]. It is clear from (1) and (2) that the proximal map is a generalization of the projection onto a closed, convex set, as proj(U; x) = prox(ψ_U; x), where ψ_U : R^l → R ∪ {+∞} is the indicator function of the set U, such that ψ_U(x) = 0 for all x ∈ U and ψ_U(x) = +∞ otherwise. The set-valued map N(U; •) : U ⇒ R^l denotes the normal cone to U, given by N(U; x) = {w ∈ R^l | ⟨w, y − x⟩ ≤ 0 for all y ∈ U}. Thus, it follows from (3) that proj(U; x) = (Id + N(U; •))^{−1}(x). (4) The function σ(U; u) = sup_{p∈U} ⟨p, u⟩ denotes the support function of the set U at u.
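As a concrete illustration of these objects (a sketch, not part of the original development), the following Python snippet computes the projection, the distance, and a normal-cone membership test for a box U = [−r, r]^l, the prototype of the sets r·C used later; it assumes only numpy.

```python
# Illustrative sketch: projection, distance, and normal-cone test
# for the box U = [-r, r]^l.
import numpy as np

def proj_box(x, r):
    """proj(U; x) for U = [-r, r]^l (componentwise clipping)."""
    return np.clip(x, -r, r)

def dist_box(x, r):
    """dist(x, U) = ||x - proj(U; x)||."""
    return np.linalg.norm(x - proj_box(x, r))

def in_normal_cone(w, x, r, tol=1e-9):
    """Check w ∈ N(U; x), i.e. <w, y - x> <= 0 for all y in U.
    For a box this reduces to a componentwise sign condition."""
    if np.any(np.abs(x) > r + tol):
        return False                              # x must lie in U
    for wi, xi in zip(w, x):
        if abs(xi - r) <= tol and wi < -tol:      # upper face: w_i >= 0
            return False
        if abs(xi + r) <= tol and wi > tol:       # lower face: w_i <= 0
            return False
        if abs(xi) < r - tol and abs(wi) > tol:   # interior: w_i = 0
            return False
    return True

x, r = np.array([2.0, -0.5]), 1.0
print(proj_box(x, r))                                        # [ 1.  -0.5]
print(dist_box(x, r))                                        # 1.0
print(in_normal_cone(np.array([3.0, 0.0]), np.array([1.0, 0.0]), r))  # True
```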
Elements from graph theory
A graph G(V, E) is a mathematical structure consisting of a set of vertices V = {ν_1, . . ., ν_N}, where ν_i ∈ V represents the i-th vertex, together with its connections, represented via a set of edges E ⊂ V × V. Thus, if {ν_i, ν_j} ∈ E, then the vertices ν_i and ν_j are connected (adjacent). Throughout the paper, adjacency between vertices ν_i and ν_j is also denoted as ν_i ∼ ν_j. In addition, each edge {ν_i, ν_j} ∈ E is incident with the vertices ν_i and ν_j. When the set of vertices and edges is clear from the context, we will denote the graph simply as G.
Note that, in the set notation used, {ν_i, ν_j} = {ν_j, ν_i}; that is, the graph under consideration is undirected. Let ν_i, ν_j ∈ V; a path of length m from ν_i to ν_j is a sequence of m + 1 vertices {ν_k}_{k=0}^m ⊆ V such that ν_0 = ν_i, ν_m = ν_j, and {ν_k, ν_{k+1}} ∈ E for k ∈ {0, . . ., m − 1}. The graph G(V, E) is connected if for any pair of distinct vertices (ν_i, ν_j) ∈ V × V there exists a path from ν_i to ν_j. For each edge ϵ_k = {ν_i, ν_j} ∈ E, a sign is assigned to each end of ϵ_k; such a sign assignment provides an orientation to the graph G(V, E). Throughout the manuscript, it is assumed that an orientation has been chosen and is fixed. Thus, the oriented incidence matrix Θ ∈ R^{|V|×|E|} is given as (see, e.g., [23]) Θ_{ik} = +1 if ν_i is the positive end of ϵ_k, Θ_{ik} = −1 if ν_i is the negative end of ϵ_k, and Θ_{ik} = 0 otherwise. If the graph G has |V| vertices and c connected components, then any associated incidence matrix has rank |V| − c. Moreover, the graph Laplacian satisfies L = ΘΘ^⊺, and Θ^⊺1_N = 0.
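For illustration (again a sketch, not taken from the paper), the snippet below builds an oriented incidence matrix for a small path graph and verifies the stated properties rank Θ = |V| − c, L = ΘΘ^⊺, and Θ^⊺1 = 0; the chosen orientation is arbitrary but fixed.

```python
# Sketch: oriented incidence matrix and graph Laplacian of a path graph.
import numpy as np

def incidence_matrix(n_vertices, edges):
    """edges: list of pairs (i, j); column k gets +1 at i and -1 at j."""
    Theta = np.zeros((n_vertices, len(edges)))
    for k, (i, j) in enumerate(edges):
        Theta[i, k] = 1.0
        Theta[j, k] = -1.0
    return Theta

# Path graph on 4 vertices: v0 - v1 - v2 - v3 (connected, c = 1)
edges = [(0, 1), (1, 2), (2, 3)]
Theta = incidence_matrix(4, edges)
L = Theta @ Theta.T                          # graph Laplacian

print(np.linalg.matrix_rank(Theta))          # |V| - c = 3
print(np.allclose(Theta.T @ np.ones(4), 0))  # True: Θ^T 1 = 0
print(L)                                     # tridiagonal path Laplacian
```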
Problem formulation and proposed coupling strategy
Let G(V, E) be an undirected and connected graph, with vertices ν_i ∈ V and edges {ν_i, ν_j} ∈ E characterizing the connection structure between vertices. Each vertex is associated with a nonlinear system of the form (5), where, for each i ∈ {1, . . ., |V|}, x_i(t) ∈ R^{n_i} denotes the state of the i-th system at time t, and u_i(t), y_i(t) ∈ R^m denote the variables available for interconnection with other systems, as indicated by the graph G. The term ζ_i(t, x_i(t)) ∈ R^{p_i} is in general unknown, as it takes into account parametric uncertainties of the model, as well as external disturbances affecting the i-th system. Finally, all the matrices are constant and of the appropriate dimensions, whereas each vector field f_i is Lipschitz continuous. Throughout the paper it is assumed that each function t ↦ ζ_i(t, x_i(t)) is of bounded variation, and that there exists a set-valued map Z_i : R × R^{n_i} ⇒ R^{p_i}, measurable in its first argument and upper semicontinuous in its second argument, with compact and convex images, such that ζ_i(t, x_i(t)) ∈ Z_i(t, x_i(t)) for almost all times. In addition, the following assumption on the norm of ζ_i is considered.
Assumption 1. For each function ζ_i there exist non-negative constants M_{i,1} and M_{i,2} such that ∥ζ_i(t, x_i)∥ ≤ M_{i,1}∥x_i∥ + M_{i,2} for all t ≥ 0. As stated in the introduction, we will focus on the study of a particular nonsmooth coupling law, described below, enforcing the robust agreement (against the heterogeneity of the network and the external disturbances ζ_i) of all vertex outputs y_i(t) towards a single common trajectory or a small neighborhood of it. These two properties are formalized in the following definition.
Definition 1. The family of systems (5) achieves robust asymptotic output convergence if there exist coupling laws u_i(t) such that, for any two indices i, j ∈ {1, . . ., |V|}, lim_{t→∞} ∥y_i(t) − y_j(t)∥ = 0. Likewise, it achieves practical output convergence if for any ε > 0 there exist coupling laws u_i(t, ε), i ∈ {1, . . ., |V|}, such that, for any two indices i, j ∈ {1, . . ., |V|}, lim sup_{t→∞} ∥y_i(t) − y_j(t)∥ ≤ ε. Problem formulation: Given a family of systems (5) and an interconnection graph G(V, E), our target consists in designing coupling laws u_i(t) = u_i(t, y_{j_1}(t), . . ., y_{j_{p_i}}(t)) (where each j_k is such that ν_{j_k} ∼ ν_i and i ∈ {1, . . ., |V|}) such that robust asymptotic output convergence holds in the presence of non-vanishing disturbances.
Notice that for each vertex ν_i, the external disturbances ζ_i are not necessarily matched with the coupling input; that is, rge G_i ⊄ rge B_i for some i ∈ {1, . . ., |V|}. In addition, the dimension of the state space may also differ from vertex to vertex, leading to a heterogeneous dynamical network. In such a context, it is well known that smooth coupling, such as diffusive coupling, cannot solve the robust convergence problem exactly, since it can provide at most practical convergence, with coupling gains growing towards infinity as the mismatches between trajectories approach a neighborhood of the origin; see, e.g., [34,39]. On the other hand, nonsmooth control laws are capable of removing, theoretically at least, all the effects caused by uncertainties and external disturbances, whilst maintaining all control signals in a bounded region; see, e.g., [16,33,51].
Motivated by the robustness properties of the set-valued hard-threshold controllers studied in [33], it is proposed to use the following set-valued and time-dependent coupling law:

u_i(t) ∈ −Σ_{j∼i} N(S_{(i,j)}(t); y_i(t) − y_j(t)), (6)

where j∼i indicates that the sum is taken over all indices j such that the vertex ν_j is adjacent to vertex ν_i, and N(S_{(i,j)}(t); •) : S_{(i,j)}(t) ⇒ R^m denotes the normal cone to the time-dependent set S_{(i,j)}(t). At each time instant t ≥ 0, each set S_{(i,j)}(t) = S_{(j,i)}(t) in (6) is given as

S_{(i,j)}(t) = r_{(i,j)}(t) C_{(i,j)}, (7)

where the sets C_{(i,j)} are constant and satisfy the following standing assumption.
Assumption 2. For each edge {ν_i, ν_j} ∈ E, the set C_{(i,j)} ⊂ R^m is compact, convex, and symmetric (C_{(i,j)} = −C_{(i,j)}), with 0 ∈ int C_{(i,j)}.

Note that the symmetry condition on C_{(i,j)} guarantees that N(S_{(i,j)}(t); y_i(t) − y_j(t)) = −N(S_{(j,i)}(t); y_j(t) − y_i(t)), so that the coupling function in (6) is undirected. Figure 2 below depicts the evolution of the graph of the normal cone over time for the scalar case. The scalar r_{(i,j)}(t) > 0 in (7) is used to control the size of each set S_{(i,j)}(t), and it obeys the adaptation law

ṙ_{(i,j)}(t) = −γ(∥y_i(t) − y_j(t)∥) r_{(i,j)}(t), (8)

with r_{(i,j)}(0) > 0, where γ : R_+ → R_+ is a class-K function, that is, γ is strictly increasing and γ(0) = 0. In addition, γ is chosen such that there are constants M̄_γ, M_γ ≥ 0 such that, for any two outputs y_i(t), y_j(t), γ(∥y_i(t) − y_j(t)∥) ≤ M̄_γ ∥y_i(t) − y_j(t)∥ + M_γ. (9) Note that with such an initial condition, r_{(i,j)}(t) > 0 for all t ∈ [0, +∞). Thus, for each t ≥ 0, each set S_{(i,j)}(t) is compact and convex, and it contracts asymptotically to the singleton {0} as time grows towards infinity.
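To fix ideas, the following minimal sketch (illustrative only) evaluates the set of admissible coupling values −N(S(t); e) in the scalar case S(t) = r(t)[−1, 1]: the map returns {0} in the interior of S(t) and a half-line on the boundary, which is exactly the hard-threshold behavior described above.

```python
# Sketch of the scalar hard-threshold (normal-cone) coupling for
# S(t) = r(t) * [-1, 1]: inactive in the interior, set-valued on the
# boundary.  Returns an interval (lo, hi) bounding -N([-r, r]; e).
import math

def hard_threshold_values(e: float, r: float):
    """Admissible coupling values u ∈ -N([-r, r]; e) at mismatch e."""
    if abs(e) > r:
        raise ValueError("mismatch outside S: normal cone is empty")
    if abs(e) < r:
        return (0.0, 0.0)              # interior: coupling inactive
    if e == r:
        return (-math.inf, 0.0)        # upper boundary: u ∈ (-inf, 0]
    return (0.0, math.inf)             # lower boundary: u ∈ [0, inf)

print(hard_threshold_values(0.3, 1.0))   # (0.0, 0.0): no action
print(hard_threshold_values(1.0, 1.0))   # (-inf, 0.0): push e back inside
```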
The following example shows that the proposed coupling strategy (6) is not only a mathematical abstraction; it can emerge, for instance, in the context of electrical circuits.
Example 3. Let us consider the nonsmooth diode network in Figure 1 carrying out the interconnection between vertices ν_i and ν_j. Let us assume that each diode satisfies an ideal complementarity condition (see, e.g., [1, Section 1.1]), where the notation 0 ≤ P ⊥ Q ≥ 0 is the short form of the following three conditions: i) P ≥ 0, ii) Q ≥ 0, iii) PQ = 0; I_{D_k} denotes the current flowing through the k-th diode D_k; V_{D_k} denotes the voltage across the terminals of D_k; and V* denotes the threshold voltage of D_k, for k ∈ {1, 2}. The ideal complementarity condition captures the behavior of the ideal diode in a simple way, since either the current through the diode is zero and the voltage is less than or equal to V*, or the voltage is equal to V* and the current is nonnegative. Setting each voltage-controlled source in Figure 1 as s(t) = r_{(i,j)}(t) − V*, it follows from Kirchhoff's laws that the relations (10a)-(10d) hold (in what follows, the explicit dependence on time is omitted in all variables for the sake of simplicity). The substitution of (10b) into the complementarity conditions (10c)-(10d) yields (11). It is clear from (11) that, for any value of the currents I_{D_1} and I_{D_2}, the mismatch between the outputs of the i-th and j-th vertices satisfies −r_{(i,j)} ≤ y_j − y_i ≤ r_{(i,j)}. Hence: 1. If y_j − y_i = −r_{(i,j)}, then it follows from (11b) and (10a) that I_{D_2} = 0 and u_{(i,j)} = −I_{D_1} ≤ 0.
Putting all cases together, we arrive at the expression

u_{(i,j)} ∈ −N(r_{(i,j)}[−1, 1]; y_i − y_j). (12)

Figure 2 displays the time evolution of the graph of the normal cone to the set r_{(i,j)}(t)[−1, 1] for some function r_{(i,j)} such that r_{(i,j)}(t) → 0 as t ↑ ∞. Finally, joining all vertices of G(V, E) with coupling circuits like the one in Figure 1 produces a coupling law of the form (6), so that (12) indeed coincides with (6) for the scalar case where each set C_{(i,j)} = [−1, 1]. It is worth emphasizing that the general coupling strategy (6) is not exclusively implementable via analog electrical circuits. Indeed, in Section 5 we present two algorithms for the implementation of such coupling laws in a digital computer. Before that, the robustness of the coupled network (5)-(6) is studied in the following section.
Robust output convergence of heterogeneous networks
Let Θ ∈ R^{|V|×|E|} be the associated oriented incidence matrix of the graph G(V, E). The complete network with the nonsmooth coupling (6) is written in compact form as (13). Finally, the set S(r(t)) is given by (14); that is, S(r(t)) is the Cartesian product of all the sets S_{(i,j)}(t) indexed by the edges of the graph G.
We first formulate sufficient conditions for the existence of solutions of (13), deferring the formal proof to Section 6. By a solution of (13) we mean a pair of absolutely continuous functions x : R_+ → R^n and r : R_+ → R^{|E|} satisfying the inclusion (13) for almost all times t. To that end, we impose the following assumption.
Assumption 4. There exists a non-singular matrix R ∈ R^{n×n} such that RB̄ = C̄^⊺.

Assumption 4 constrains the relative degree (with respect to the input-output pair (u(t), y(t))) of the network (13) to {1, . . ., 1}, which is a necessary condition for passivity of the connected system (13); see, e.g., [11]. However, since at this point the only assumption we make on the vector fields f_i concerns their Lipschitz continuity, the composed network is not necessarily passive. This lack of passivity allows us to consider the robust agreement of systems with interesting behaviors, such as self-sustained oscillations or chaotic solutions. It is easy to see that if for each i ∈ {1, . . ., |V|} there exists a symmetric, positive definite matrix P_i ∈ R^{n_i×n_i} such that P_i B_i = C_i^⊺, (15), then R = diag(P_1, . . ., P_{|V|}) satisfies Assumption 4. Notice that in a more general setting we may have some systems for which there is no symmetric, positive definite matrix P_i satisfying (15). In addition to Assumption 4, we also consider the following standing assumption. Assumption 5. The set of admissible initial conditions is compact, so that there is a constant R̄ > 0 such that ∥r(0)∥ ≤ R̄, and (Θ^⊺ ⊗ I_m) y(0) ∈ S(r(0)). Assumption 5 is necessary for the existence of absolutely continuous solutions; otherwise, a state jump would take place at t = 0 so that (Θ^⊺ ⊗ I_m) y(0^+) ∈ S(r(0)). We are now ready to state the main result concerning the existence of solutions; the proof of Theorem 6 is postponed until Section 6. We now change focus to the consequences of Theorem 6 concerning the robust output convergence problem.
Remark 1. Note that in Theorem 6 the final time T is arbitrary but finite, and some upper bounds in the proof depend on such T. In terms of behavior, this translates into the fact that, even though there are no finite escape times, it is possible for the trajectories to grow unbounded as time evolves. An alternative approach consists in considering extra conditions on the vertex dynamics in order to guarantee the existence of a compact, positively invariant region, obtaining in this way a uniform upper bound on the state variables. For instance, if each vertex is assumed semipassive [44], then the trajectories of (13) are guaranteed to be uniformly bounded on the whole domain [0, +∞). Namely, the i-th vertex is semipassive if there exist ρ > 0 and a continuously differentiable, radially unbounded function V_i : R^{n_i} → R_+ such that, for any admissible input u_i(t), V̇_i(x_i) ≤ ⟨y_i, u_i⟩ − H_i(x_i), where H_i(x_i) ≥ 0 whenever ∥x_i∥ ≥ ρ. Thus, the derivative of the positive definite function V = Σ_{i=1}^{|V|} V_i along the trajectories of (13) satisfies (16), where the last inequality follows from the maximal monotonicity of the normal cone map. Since the right-hand side of (16) is strictly negative whenever ∥x_i∥ > ρ for some i ∈ {1, . . ., |V|}, and the function V is radially unbounded, all trajectories converge to the largest invariant region contained in the compact sublevel set Ω_L of V, where L is such that |V|ρB ⊂ Ω_L, and boundedness of the trajectories of (13) follows.
The well-posedness result in Theorem 6 conveys important consequences regarding the output convergence of general heterogeneous networks in which each agent may be subject to external persistent disturbances. Namely, well-posedness of the network (13) implies that (Θ^⊺ ⊗ I_m) y(t) ∈ S(r(t)) for all times t ≥ 0. Moreover, it follows from the dynamics of r(t) in (13), together with the strict positivity of Γ, that r(t) → 0 as t → ∞. Therefore, ∥y_i(t) − y_j(t)∥ → 0 as t → ∞ for every edge {ν_i, ν_j} ∈ E. The following corollary is thus an immediate consequence of Theorem 6.
Corollary 7. Under the assumptions of Theorem 6, the coupling law (6) achieves the robust asymptotic output convergence of the family of systems (5).
Note that, in the general heterogeneous case, the full convergence of the state is not guaranteed since, in principle, the dimension of the state of each individual system might differ from that of its neighbors. Nevertheless, when all individual systems have the same state dimension, extra conditions can guarantee full convergence.
Corollary 8. Let all assumptions of Theorem 6 hold. If, in addition, the dimension of the state is the same for each individual system and the incremental dynamics of the network is asymptotically zero-state detectable, then the coupling (6) achieves the full-state convergence of the family of systems (5).
The results concerning asymptotic output convergence of the network are valid only when the ideal set-valued map in (6) is used. In practice, however, a real-world implementation of the ideal coupling as depicted in Section 3 is not possible, since it requires the use of ideal components; in real-life experiments there are losses due to parasitic resistance effects and unmodeled dynamics. In the next section an implementable coupling law is proposed via the implicit discretization of the continuous-time coupling (6), so that the convergence of the outputs towards a unique trajectory is maintained with a precision depending on the sampling time h (practical output convergence).
Numerical implementation of the coupling strategy
Nowadays, owing to the performance and accessibility of digital electronic devices, it may be useful to implement the coupling law discussed above using a discrete-time scheme on digital microcomputers. To make such a digital implementation, extra attention must be paid to the discretization used: as is well known, explicit discretization schemes are prone to the appearance of chattering, which leads to degradation of closed-loop performance or even loss of stability; see, e.g., [1,6]. In this section we study two approaches for computing a discrete-time coupling law based on (6).
Implementation via implicit discretization
The discretization considered in this subsection is based on the scheme presented in [2], where the set-valued component is discretized implicitly and the disturbances are ignored in the selection process. It is similar to the discretization used in the proof of Theorem 6 (see Section 6.2), but in the present context we are interested only in the output dynamics, as the full state is assumed unknown. Concretely, in order to compute a selection of the generalized equation (6), we consider the output dynamics (17), where x(t) is the state of the network (13) at time t ≥ 0. The discretized output dynamics is given by (18). The new variable ỹ^{k+1} in (18) plays the role of a nominal output. Its role consists in making the selection strategy independent of unknown data (i.e., independent of C̄(F(x^k) + Ḡζ^k)). Note that, in the cases when C̄ is invertible, as for instance C̄ = I_n, the difference F(x^k) − F(C̄^†y^k) = 0 for all x^k and all y^k. It is also worth recalling that, in general, the complete state is not available, so that the term F(x^k) is unknown.
A possible way to reduce the level of uncertainty consists in the design of state observers, as is done in [3,37].
In this work the coupling strategy is static, so no observer design is discussed; this keeps the numerical implementation of the coupling law as simple as possible.
The following corollary establishes the well-posedness of the implicitly defined coupling (18b)-(18d), as well as its effectiveness for solving the practical output convergence problem in a discrete-time context. Corollary 9. Let all assumptions of Theorem 6 hold. For the closed-loop heterogeneous network (18), the coupling action is given explicitly by (19), where L = (Θ^⊺ ⊗ I_m)Λ and Λ = (C̄B̄)^{1/2}. Moreover, the "nominal output" ỹ^{k+1} ∈ S(r^{k+1}) for all k ≥ 0, and if the function F and ζ^k are uniformly bounded, then the network (18)-(19) achieves practical output convergence.
Proof. It follows from Assumption 4 that the product C̄B̄ is symmetric and positive definite. Let Λ = Λ^⊺ ≻ 0 be such that Λ² = C̄B̄. Then the change of variables w^k = Λ^{−1}y^k yields (20), where p^k is assumed to be known, as it depends only on nominal parameters and the measurable output y^k = Λw^k. Setting w^k_⊥ = L^†Lw^k and w̄^k = (I − L^†L)w^k as the projections onto (null L)^⊥ and null L, respectively (similarly for p^k_⊥ and p̄^k), it follows that (21) holds, where Ŝ(r^{k+1}) = {s ∈ R^{|V|m} | Ls ∈ S(r^{k+1})} = {s ∈ R^{|V|m} | s ∈ L^†S(r^{k+1})}. It follows from (21b) and (21e) that (22) holds, and the substitution of (22) back into (21b) gives an explicit expression for v^{k+1} as (23), from which (19) follows. Also note that (22), together with (18a), leads to a bound on the output mismatch: if the map F and ζ^k are uniformly bounded, then there is a finite M > 0 such that the resulting error is proportional to the sampling time h. Finally, as (21d) is Schur stable, practical output convergence follows. This concludes the proof.
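The computational core of such implicit schemes is the resolvent identity (4): one implicit Euler step of a dynamics of the form ẏ ∈ f(y) − N(S; y) reduces to a projection. The following hedged sketch (not the paper's exact scheme (18)-(19), which additionally splits the dynamics along null L and its complement) illustrates this for a box-valued S, together with an implicit update of r of the same form as (36c), here with a constant rate standing in for Γ.

```python
# Hedged sketch: one semi-implicit Euler step of  ẏ ∈ f(y) - N(S; y)
# with S = [-r, r]^m reduces to  y_{k+1} = proj(S; y_k + h f(y_k)),
# and the unique coupling selection is recovered a posteriori as
#     u_{k+1} = (y_{k+1} - y_k - h f(y_k)) / h  ∈ -N(S; y_{k+1}).
import numpy as np

def implicit_step(y, f_val, r_next, h):
    """One step; the set-valued part is handled by projection."""
    y_nominal = y + h * f_val                      # explicit part
    y_next = np.clip(y_nominal, -r_next, r_next)   # implicit set-valued part
    u_next = (y_next - y_nominal) / h              # normal-cone selection
    return y_next, u_next

# Example: unstable drift f(y) = y kept inside a shrinking box
y, r, h = np.array([0.9]), 1.0, 0.1
for k in range(5):
    r *= 1.0 / (1.0 + h)            # implicit r-update, constant rate (cf. (36c))
    y, u = implicit_step(y, y, r, h)
    print(k, y, u, r)
```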
Remark 2. The assumption regarding the uniform boundedness of F and ζ is sufficient but not necessary. For instance, if F is not uniformly bounded but Lipschitz continuous instead, then it follows from (23) that, for some finite M_1, M_2 ≥ 0, the outputs remain close to each other whenever the trajectories rest in an invariant compact region and the sampling time h is small enough. Further assumptions on the individual vector fields, such as semipassivity [44], incremental dissipativity [45], or QUAD-ness of the vector field [18], are common ways to ensure the existence of a compact invariant region for the state, allowing the uniform boundedness of both F and ζ to be relaxed; see Remark 1 in Section 4.
It is worth pointing out that the implicit discretization scheme in (18) is centralized, as it uses the pseudoinverse of L: in order to compute the individual input u^{k+1} to each agent, it is necessary to have access to the output of the entire network at each time step. The following subsection presents a simple way of generating a distributed approach to the practical output convergence of the network, via a simple regularization of the ideal coupling.

Example 10. Let us consider a heterogeneous small-world network [38] consisting of 16 vertices, where each vertex is one of the following four types of systems: A) Chua chaotic system, B) FitzHugh-Nagumo oscillator, C) Lorenz chaotic system, and D) Rössler chaotic system. Figure 3 displays the network configuration.
Each vertex system is of the form (5). Let A, B, C, D be the sets of indices for systems of type A), B), C), and D), respectively. The set of nominal parameters A_i, B_i, C_i, E_i, and F_i is shown in Table 1 for each class of vertex system, i ∈ A ∪ B ∪ C ∪ D. Note that the dimension of the internal states may vary from vertex to vertex, so that in this case full-state convergence is not achievable. The terms ΔA_i, ΔE_i, ΔF_i denote parametric uncertainties for each model class. Each uncertainty term is taken as a sample from a normal distribution, of appropriate dimensions, with zero mean and standard deviation σ = 0.1. The function γ : R_+ → R_+ setting the rate of descent of r^k is given by (25), where δ = 0.5. In this setting, the input to each vertex is computed in a centralized fashion via (19). Figure 4 shows the state x_i(t), the output y_i(t), and the input u_i(t) of each vertex in the network, with a sampling time of h = 10 ms.
Implementation via regularization
The coupling law (6) is largely inspired by the electrical network in Figure 1. Thus, with the aim of obtaining a distributed numerical scheme, we consider a regularized version of the ideal electrical coupling discussed in Example 3. Specifically, let us consider the circuit shown in Figure 5. After simple computations, similar to those shown in Example 3, we obtain the input-output relation of the regularized coupling circuit.
Table 1: Nominal parameters for the four classes of systems in the network of Figure 3.
Recalling that N(S; y) = ∂ψ_S(y) is a convex cone, it follows that for any R > 0, −Ru_{(i,j)} ∈ N(S_{(i,j)}; y_i − y_j + Ru_{(i,j)}).
Hence, y_i − y_j ∈ (Id + N(S_{(i,j)}; •))(y_i − y_j + Ru_{(i,j)}), and from (4) we retrieve the explicit expression for u_{(i,j)} as

u_{(i,j)} = (1/R)(proj(S_{(i,j)}; y_i − y_j) − (y_i − y_j)). (26)

The coupling (26) is single-valued, and it is shown below that it achieves practical output convergence for values of R sufficiently small. Figure 6 depicts the coupling function (26) for the scalar case.
Figure 6: Time evolution of the graph of the coupling law (26). In this case, as t ↑ ∞, R_t ↓ 0 and the graph of the coupling map approaches the graph of the normal cone to the set S(t); see Figure 2.
Note that (26) is independent of the parameters of the vertex dynamics, as only the associated outputs and the set S_{(i,j)} are needed. Thus, (26) can be implemented in a distributed fashion. It is also important to remark that (26) is a Lipschitz continuous function of the output mismatch y_i − y_j, and therefore the network (13)-(26) does not have finite escape times; that is, it is well-posed on the entire domain [0, +∞). Moreover, in the limit as R ↓ 0 the network remains well-posed, as in that case the coupling (24) coincides with the original coupling (6) via (12); see Theorem 6 and the remark after it.
Corollary 11. Under the assumptions of Theorem 6, the regularized coupling (27), where u^{k+1}_{(i,j)} is given by (28) and the regularization parameters R^k_{(i,j)} are chosen as in (29) for some δ_i > 0, i ∈ {1, 2}, achieves the practical output convergence of (13) whenever the time step h is sufficiently small.
Proof. It follows from (28) that ∥R^k_{(i,j)} u^{k+1}_{(i,j)}∥ = dist(y^k_i − y^k_j; S(r^{k+1}_{(i,j)})). Consequently, in the limit as h ↓ 0, ∥u^{k+1}_{(i,j)}∥/∥u^k_{(i,j)}∥ → 1 (notice that this limit is well-defined, since in the limit we retrieve the strategy (6), whose well-posedness is guaranteed by Theorem 6), so that for h sufficiently small the output mismatch y^k_i − y^k_j lies in a neighborhood of S(r^{k+1}_{(i,j)}), and practical output convergence follows. This concludes the proof.
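As an illustration (a sketch under the simplifying assumption of a box set S = [−r, r]^m, rather than a general C_{(i,j)}), the regularized coupling (26) can be evaluated in closed form with a projection, and its magnitude grows like dist(e, S)/R as the regularization parameter decreases:

```python
# Sketch of the single-valued regularized coupling (26) for a box set:
# from  -R u ∈ N(S; e + R u)  one gets  e + R u = proj(S; e), i.e.
#     u = (proj(S; e) - e) / R,
# a Yosida-type approximation of the hard-threshold map.  As R ↓ 0 the
# graph of this map approaches the graph of -N(S; ·) (cf. Figure 6).
import numpy as np

def regularized_coupling(e, r, R):
    """u for S = [-r, r]^m, mismatch e = y_i - y_j, parameter R > 0."""
    return (np.clip(e, -r, r) - e) / R

e = np.array([1.4])                   # mismatch outside S = [-1, 1]
for R in (1.0, 0.1, 0.01):
    print(R, regularized_coupling(e, 1.0, R))   # grows like -(|e|-r)/R
```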
Example 12. Let us consider again the heterogeneous network of systems described in Example 10 above. This time each coupling is implemented in a distributed fashion using (28). The nominal parameters of each vertex are the same as shown in Table 1, and the function γ describing the rate of descent of the variables r_{(i,j)} also remains unchanged. The disturbances are taken in the same manner as in Example 10. In this example, the sampling time is decreased to h = 2.5 ms (since for a sampling time of h = 10 ms, the coupling (28) fails to achieve the ultimate boundedness of the mismatch y^k_i − y^k_j). Note that, in this case, no knowledge of the vector field F is used for the computation of the coupling signals. The regularization parameters R^k_{(i,j)} are set as in (29), with hδ_1 = 25 and δ_2 = 0.1. Figure 7 shows the time trajectories of the state, control input, and output of each vertex system, illustrating the practical output convergence of the heterogeneous network of Figure 3. Finally, Figure 8 displays the sum of squared error signals for the coupling schemes (19) and (28) under the same sampling rate. As expected, there is a trade-off between decentralization and accuracy, since the centralized scheme (19) leads to a lower error compared to the fully distributed scheme (28).
Existence of solutions of the coupled network
This section presents the proof of Theorem 6 regarding the existence of absolutely continuous solutions of the differential inclusion (13). The proof is an adaptation and extension of the proof in [25] and is presented here for completeness. It is divided into several subsections. First, a change of coordinates and a useful decomposition of the dynamics are performed, putting the model into a structure more suitable for analysis. Second, a sequence of approximate solutions is constructed based on an associated discrete-time model. Third, it is shown that these sequences of approximate solutions converge in appropriate spaces. Finally, it is shown that the limit functions are indeed trajectories of the continuous-time model.
State transformation and model decomposition
Let us consider the change of variables (30), where R satisfies Assumption 4. Thus, (13) is transformed into (31), where the set S̄(r(t)) is closed and convex. Hence, (31) transforms into (35). In what follows, we denote the orthogonal projection of any η ∈ R^n onto the subspace null Θ as η̄, and the complementary projection onto (null Θ)^⊥ as η_⊥. That is, let Π be the matrix representation of the projection onto null Θ. Then, for any η ∈ R^n, η = η_⊥ + η̄, where η_⊥ = (I − Π)η and η̄ = Πη.
Construction of approximate solutions
Clearly, the well-posedness of (13) is equivalent to that of (35). Thus, we continue by discretizing the dynamics (35) in time using a semi-implicit Euler method.
where h := t_{k+1} − t_k > 0 is the time step. It thus follows from the maximal monotonicity of the subdifferential map and Minty's theorem [7, Theorem 21.1] that there exists a unique selection z^{k+1}_⊥ satisfying (36a). Indeed, the rearrangement of terms in (36a) and the use of (4) (see also [7, Theorem 3.14]) give an explicit expression for z^{k+1}_⊥ satisfying (36a) as (37), where the second equality is a consequence of Proposition 13-ii) in Appendix A. It is worth remarking that the selection (37) is unique, and it is computable precisely because of the implicit discretization of the set-valued term in (36a). From (37), and in view of Assumption 5 and Proposition 13, it follows that z^k_⊥ ∈ Θ^†S(r^k) and z^k ∈ S(r^k) for all k ≥ 0.
It is also worth mentioning that, as Γ(Θz^k_⊥) is a diagonal matrix with non-negative entries, the inverse in (36c) is well-defined. Moreover, since Γ(Θz^k_⊥) is a non-negative diagonal matrix, all eigenvalues of I + hΓ(Θz^k_⊥) lie outside the unit ball. Recalling that for matrices we consider the induced norm, it is clear that ∥(I + hΓ(Θz^k_⊥))^{−1}∥ ≤ 1. Hence, the recursion on r^k given by (36c) is Schur stable regardless of the value of z^k. Let us consider the following family of piecewise linear functions parametrized by h.
It is clear from (39)-(40) that z_h and r_h are differentiable almost everywhere in [0, T]. In what follows it is shown that, for h sufficiently small, the sequences {z_h}_{h>0}, {r_h}_{h>0}, {ż_h}_{h>0}, {ṙ_h}_{h>0} converge in suitable spaces.
To that end, let t ∈ [0, T] \ {t_0, t_1, . . ., t_N} and consider the inequality (41). Let us focus on the first term on the right-hand side of the inequality. It follows from (36)-(37) and the definition of projection that (42) holds, where we used the fact that z^k_⊥ ∈ S(r^k) on the last line. Making use of Lemma 14 in Appendix A yields an upper bound, and the substitution of (42) into (41) yields the estimate (43), where L_F denotes the Lipschitz constant of F and ∥ζ̄(t_k, z^k)∥ ≤ M_ζ̄∥z^k∥ + M_ζ follows from Assumption 1.
Let U, V ⊂ R^l be two nonempty, compact sets. The Pompeiu-Hausdorff distance between U and V, denoted as d_H(U, V), is given as d_H(U, V) = max{sup_{u∈U} dist(u, V), sup_{v∈V} dist(v, U)}, or equivalently, d_H(U, V) = inf{ε ≥ 0 | U ⊆ V + εB_l and V ⊆ U + εB_l}. Thus, it follows from (60) that, for any η ∈ R^l, |dist(η, U) − dist(η, V)| ≤ d_H(U, V). The Pompeiu-Hausdorff distance measures how far two sets are from each other, and it provides a notion of continuity for the motion of time-varying sets. Roughly speaking, the following lemma shows that the set S in (33) "moves in a Lipschitz continuous way".
Finally, the combination of (64) and (65) leads us to (63). The proof is thus complete.
Finally, taking the lim sup on both sides of (69) leads us to the desired result.
Figure 1: Ideal electrical circuit realizing the coupling (6) between systems ν_i and ν_j.
Figure 2: Time evolution of the graph of the normal cone to the time-varying set S(t) = [−r(t), r(t)].
Figure 3: Heterogeneous network of systems. At each time, the input to the i-th vertex is computed from the outputs of its neighbors. The parameters of each vertex system are different, even if they belong to the same class.
Figure 4: (Upper left) Time trajectories of the states of the vertex systems ν_i. (Upper right) Time trajectories of the coupling inputs u_i for each vertex. (Bottom) Time trajectories of the output signals of each vertex system. The coupling is given by (19).
Figure 7: (Upper left) Time trajectories of the states of the vertex systems ν_i. (Upper right) Time trajectories of the coupling inputs u_i for each vertex. (Bottom) Time trajectories of the output signals for each vertex.
Figure 8: Time evolution of the sum-of-squares error signal e_SOS(t) := (1/2) Σ_{i=1}^{|V|} Σ_{j∼i} ∥y_i(t_k) − y_j(t_k)∥² for the coupling schemes (19) (black) and (28) (gray), with a common sampling period of h = 2.5 ms.
Possible applications of the theory of transaction costs in corporate management
Objective – The transaction costs theory is one of three new – neo-institutional – concepts of an enterprise, constituting the opposition to classical theories perceiving a company as a "black box", that is, a closed body whose aim is to effectively transform inputs into outputs, and whose effectiveness is analysed through the prism of production costs. In that view, market mechanisms regulate the functioning of the market, since companies operate on a market perceived as a sphere of confrontation between supply and demand (price equilibrium). The neoclassical approach did not take into account the pressure of the different institutions regulating the business environment, whereas neo-institutional theories perceive a company more broadly than as a "black box" viewed through the prism of the production function. The aim of the article is to present scientific studies regarding the possibility of using the theory of transaction costs in enterprise management. Methodology of research – For the purposes of this study the following research methods were used: a critical analysis of professional literature, a diagnostic survey (scenario) and a comparative studies method. Moreover, methods such as synthesis, deduction and induction were used. Result – The studies demonstrated that the comparison of transaction costs should decide whether activities are coordinated by a company or by the market: if transaction costs are lower inside a company, it is profitable to organize production within this company; if they are higher in the company than on the market, production of the goods by the company will be unprofitable. In this context, it is important to take into account the internal solutions in companies – assuming that most of the factors of production are actually used by companies and that the use of these assets is subject to the decisions of companies, and not directly to the activities of market forces. The functioning of the market depends, to a large extent, not on market forces, but on the manner in which companies operate. Originality/value – The conducted analysis provides the foundation for a discussion on the implementation of the transaction costs theory by, among others, the creation of purchasing groups in order to improve the negotiating position with partners (reducing the costs of searching for information) or the use of a third party's employees (temporary work – reducing the cost of production preparation). Keywords: cost management, economics of transaction costs, information asymmetry, institutional economics, contractual theory of an enterprise
Introduction
Transaction cost theory is one of three new – neo-institutional – concepts of an enterprise, constituting opposition to classical theories that perceive an enterprise as a "black box", that is, a closed body whose aim is to effectively transform inputs into outputs, with the efficiency analysis conducted through the prism of production costs. Market mechanisms regulate the functioning of the market, since enterprises operate on a market treated as a sphere of confrontation between supply and demand (price equilibrium). The neoclassical approach, on the other hand, has not taken into account the pressure of the different institutions regulating the conditions of conducting business activity.
In the case of the neo-institutional theories, the focal point is the perception of the enterprise more broadly than as a "black box" viewed through the prism of the production function. Ronald Coase, the precursor of one of the neo-institutional theories – the aforementioned theory of transaction costs – argued that the neoclassical approach did not take transaction costs into account. These are costs directly related to the presence of the enterprise on the market: primarily the costs of searching for information on potential customers and partners for the goods manufactured by the enterprise, the cost of preparing contracts and supplies, and the cost of enforcing contracts. These are the costs of maintaining the market system, that is, transaction costs incurred by the market, rather than transaction costs incurred by the enterprise. In the theory of transaction costs the enterprise is treated as a governance structure, not as a production function. The comparison of transaction costs should decide whether activities are coordinated by the company or by the market: if transaction costs are lower inside the company, it is profitable to organize production within this company; if they are higher in the company than on the market, the production of goods by the company will be unprofitable. In this context it is important to take into account the internal solutions in companies – assuming that most of the factors of production are actually used by enterprises and are subject to the decisions of companies, not directly to the activities of market forces. The functioning of the market depends, to a large extent, not on market forces, but on the manner in which enterprises operate. Examples of the implementation of the transaction costs theory include the creation of buying groups in order to improve the negotiating position with partners (reducing the costs of searching for information) or the use of a third party's employees (temporary work – reducing the cost of production preparation).
The theory of transaction costs was rediscovered in the 1990s thanks to O. Williamson, who studied the conditions for achieving harmonious cooperation between enterprises and eliminating the conflicts related to this issue. According to this scholar, agreement could be achieved through properly functioning institutions and a microeconomic analysis of the enterprise and its manner of management aimed at minimizing transaction costs. Transaction costs are assessed according to the criteria of their optimization. New institutional economics refers to Coase's theory of the enterprise. It focuses on institutions at a microeconomic level, in other words contracts perceived as limited cooperation institutions (Zbroińska 2013, p. 164).
Apart from Coase, K. Arrow, who was the first to use the term "transaction costs", contributed to the theory, as did H. Simon, who developed the behavioural concept of the company. Simon also introduced the concept of the bounded rationality of business entities, which is important for the transaction costs theory. A significant figure in the development of this theory was O. Williamson as well. His economics of transaction costs constitutes an institutional approach to the study of an organization, in which the main unit of analysis is the transaction (Lichtarski 2007, p. 34). J.R. Commons made a particular contribution to the development of the institutional trend, which focused on the impact that institutions had on the decisions and functioning of enterprises. He was also one of the first to move the center of gravity of the analysis from the factors of production to transactions.
Currently, the transaction cost theory is incorporated, alongside the agency theory and the property rights theory, into the canon of the new institutional economics. The genesis of its creation, however, suggests a close relationship both with the neoclassical theory of the enterprise and with the works of Neo-Keynesians from the 1950s and 1960s. Its unquestionable advantage is its multi-dimensional character, manifested in its applicability in many areas of economics and management. This concept raises issues of property rights, the forms of organizing transactions and the limits on the operation of companies, but also the mathematical and statistical models of growth present in the economy (Klaes 2000, p. 192).
The transaction cost theory emerged from criticism of the neoclassical theory of the company, which was accused of not taking into account the impact of many factors (the issue of the accepted assumptions). According to the neoclassical theory, an enterprise operates on a market perceived as a place of confrontation between supply and demand, which leads to the determination of prices ensuring market balance. Such an approach ignores the impact of the various institutions that create the environment for conducting business activity and, in particular, completely ignores transaction costs (the costs of searching for and collecting information on potential partners, the cost of preparing the contract, the cost of enforcing the provisions of contracts), which are closely related to the presence of the company on the market (Wrońska 2012, p. 139).
In the transaction costs theory, which is one of the trends of institutional economics incorporated in the mainstream of the neoclassical approach, institutions, contracts and transaction costs gain fundamental importance. The research focuses on enterprises and thus on the microeconomic environment and manner of management. The neo-institutional theory is based on the notion of the transaction (contract) as the unit of analysis. Such an understanding is associated with Williamson's hypothesis, according to which any problem can be interpreted as a contract. In turn, its conclusion and implementation require incurring transaction costs. These costs become an effective measure of the efficiency of a whole institution, and their use extends far beyond the relations of exchanging goods in a given market. Transaction costs are therefore present in every sphere of human activity, not only in market conditions. This is due to the fact that man exists in a world of contracts marked by varying degrees of uncertainty (Zbroińska 2013, p. 165).
The notion and significance of institutions for the transaction costs theory
In the theory of transaction costs, the concept of an institution denotes the rules of the game: humanly devised constraints that shape interaction and form the structure of incentives for the interacting participants of exchange, making the world more predictable (Klimczak 2005, p. 16). The organization is, therefore, an entity equipped with resources, pursuing goals with the help of these institutions. Simply put, the organization is a player and the institutions are the rules of the game (Ząbkowicz 2003, pp. 795-823). The largest organization with resources is the state, which creates the institutional environment, the legal system being one of its elements.
In the theory of transaction costs, institutions can be divided into formal ones, understood as the internal standards of organizations, and informal ones, perceived as moral standards not enforced as law. Informal institutions therefore include customary norms and certain patterns of conduct that shape human behaviour. Changes in these institutions require time, because items such as habits, procedures, ethics and honesty undergo slow modification. Among the characteristics of such institutions one should also count personality traits, including ambition, diligence, perseverance, and entrepreneurship.
Formal institutions are treated differently in the transaction costs theory. They are constituted by legal standards, the policy of the state, the adopted financial and tax system, and administrative procedures. The functions of these institutions are generally performed by specific organizations. Currently, the imperfection of the bureaucratic system is the biggest barrier to the development of companies, at the same time raising their transaction costs. The introduction of comprehensive solutions facilitating economic activity could improve their management (Chotkowski 2010, pp. 107-108).
Apart from formal and informal institutions, the transaction costs theory also covers civil society institutions and market infrastructure. This refers to social capital and the activity of citizens, as well as to market intermediaries whose aim is to deliver the object of sale to the customer with the help of promotional, marketing, transport, information and many other tools. Without effective cooperation between the institutions implementing these steps it is difficult to imagine the realization of market transactions (Kowalska 2005, p. 56).
The concept and division of transaction costs
The creation, maintenance and enforcement of institutions, with contracts as a form of cooperation, are accompanied by transaction costs. So far, a universal and comprehensive definition of the term has not been developed. For the purposes of research, successive interpretations of this term are made, which is often regarded as a methodological weakness of neo-institutional economics (Hardt 2006, pp. 1-23).
Initially, transaction costs related mainly to the burden connected with intermediation in transactions. In the 1960s the concept was expanded to the costs associated with searching for and obtaining market information, and from the 1970s it has also covered contracting, contract supervision and the functioning of market institutions (Klaes 2001, p. 179). Some theorists believe that transaction costs are separable from the costs of production, while others point out that they should be considered a component of the function of production and distribution costs (Gorynia 2007, p. 174).
According to Coase, the enterprise and the market do not operate side by side, but constitute alternative ways of allocating resources. In turn, transaction costs are the factor determining the size, type and structure of the enterprise created. Initially, transaction costs were defined as the costs of using the price mechanism. In the area of microeconomics, transaction costs concern the separation of production costs from the total value of operating costs. In the area of macroeconomics, on the other hand, transaction costs are heterogeneous and include the functioning of the institutional environment of the state regulating all planes of social life.
Despite the multiplicity of definitions in use, economists agree that transaction costs are always related to the conclusion of transactions, the transfer of ownership and the institutional system (Zbroińska 2013, p. 165). The common denominator in the definitions of transaction costs is their non-productive origin and their institutional place of formation.
Specific features of transaction costs include the frequent omission or ignorance of their existence, the failure to account for them economically, difficulty in quantification, and their treatment as a side effect of transactions and of the transfer of property rights. As a category, transaction costs are not recognized in accounting records for methodological reasons, despite their impact on the results of operations. These costs are generated by information asymmetry, which induces the parties to negotiate and, because of the high risk associated with it, to insure the contracts under which they arise. Transaction costs also arise where resources are used for the creation, maintenance and running of an institution, as well as for its usage and changes (Ząbkowicz 2003, p. 811).
Overall, transaction costs can be divided into three types: market, intercompany and public costs. Market transaction costs are primarily the costs of searching for and collecting information, conducting negotiations and making decisions concerning the conclusion of an agreement, monitoring the agreed deadlines, quantity and quality of the product, as well as the possible costs of enforcing the rights and provisions of the contract. The intercompany transaction costs include, in turn, partly fixed costs and variable costs related to the functioning of the company. The public transaction costs, on the other hand, are associated with organizing, maintaining and upgrading the formal and informal public order, and with the costs of the functioning of society (Małysz 2003, pp. 315-340).
A similar classification was proposed by Ząbkowicz. The association of transaction costs with institutions of different operating range results in the diversity of these costs, which constituted the basis for their classification according to the type of cooperation between the contracting parties. Following this criterion, market (tender), executive (management) and public (rationalizing) transaction costs were distinguished (Ząbkowicz 2003, p. 811).
Although transaction costs are difficult to quantify, one can distinguish several types of them, including (Kowalska 2005, p. 56):
– the costs of searching for information (on prices, contract partners, exchange sites, and all aspects of carrying out the transaction, including, in particular, the quality of goods and services, the available factors of production and the potential behaviour of contract partners within the existing institutional structure),
– the costs of negotiations that would disclose the real position of the contractual partners, assuming that prices are an endogenous variable depending on the outcome of negotiations,
– the costs of formulating, recording (often with the use of costly legal opinions) and authenticating (e.g. notaries) the contract,
– the costs of activities protecting against risk (securing the agreement),
– the costs of monitoring the behaviour of contracting partners and the degree of implementation of the contract,
– the costs of implementing and enforcing the implementation of the contract's provisions,
– the costs of resolving disputes by consensus,
– the costs of judicial enforcement of a contract, or the internalisation of costs where the parties cannot fulfil their contractual obligations (the case in which the costs of judicial enforcement exceed the value of the contract),
– the costs of negotiating contracts,
– the costs of securing and protecting property rights against unauthorized persons.

Stankiewicz provides another division of costs. According to him, one should distinguish the following types of transaction costs (Stankiewicz 2007, p. 151):
– the costs of searching for alternatives,
– the costs of settlements,
– the costs of concluding contracts,
– the costs of required procedures,
– the costs of the specification and protection of property rights,
– the costs of opportunistic behaviour.
The lack of proper information when concluding a contract is a source of uncertainty related to its implementation, and thus it can generate unpredictable costs. The scarcity of information means that obtaining or disclosing it has a cost; therefore, all economic processes associated with acquiring and processing information, or with its persistent shortage, generate a variety of transaction costs (Kowalska 2005, p. 56).
Management of transaction costs as a realization of the theory of transaction costs
The conclusion of each contract is aimed at reducing transaction costs. The principle of this criterion is based on a comparison of two options with similar production costs and final results, and the option in which the transaction costs turn out to be smaller is selected. In making the choice one must be guided by asset specificity, frequency and uncertainty, which affect the degree of detail of a particular contract. The decision simultaneously influences the level of transaction costs in both the ex ante and the ex post conceptualizations described by Williamson (1998, p. 65). In addition, he presented a fundamental division between two areas of research in the field of transaction costs: the field of management, which defines the domain of application of the theory, and the area of measurement, in which the empirical testing of theoretical concepts takes place.
The transactions referred to in the theory are carried out under conditions of uncertainty associated with risk. The greater the degree of asset specificity, the greater the risk and the owner's pursuit of safeguards as early as at the stage of concluding the contract. One should also take into account that such agreements are accompanied by uncertainty as regards the loyalty of business partners, termed behavioural uncertainty. Furthermore, the size of transaction costs is also affected by the frequency of transactions: an increase in frequency reduces them. To identify the possibilities of reducing transaction costs, the table below lists the institutions and elements which may have an influence.
Table 1. Factors associated with lower and higher transaction costs

| Lower transaction costs | Higher transaction costs |
| --- | --- |
| Goods and services to be exchanged are standard | Goods and services to be exchanged are of a special kind |
| Clearly defined rights of the parties | Unclear rights of the parties |
| A small number of contracting parties | Many contracting parties |
| Friendly relations between the parties | Unfriendly relations between the parties |
| The contractual parties know each other | The contractual parties don't know each other |
| The mutual services of the parties are implemented simultaneously | Mutual services of the parties are not implemented at the same time (exchange is postponed in time) |
| The exchange is unconditional | The exchange is dependent on additional conditions or terms |
| Low cost of monitoring the implementation of the agreement | High costs of monitoring the implementation of the agreement |
| Judicial execution of contractual rights is cheap and easy to implement | Judicial execution of contractual rights is costly and difficult to implement |

Source: author's own work based on Kowalska (2005), Stankiewicz (2007).
Progress in communication technology and easier access to information through electronic means significantly reduce market transaction costs. On the other hand, the mass character and anonymity of market transactions generate demand for advisory, financial, market research and insurance services, the need for information management, and many other factors essential to the success of a transaction. In the end, all of this contributes to a sharp increase in transaction costs on the scale of the economy as a whole (Zbroińska 2013, p. 168).
Temporary work in the outline of the theory of transaction costs
Temporary work involves an independent enterprise employing a jobholder and transferring him or her, with the consent of that individual, to a third party in order to provide temporary or permanent work. Hiring out employees is a system involving three parties:
– a lender offering the temporary contribution of its employees,
– an entity declaring the demand for labour,
– employees performing the required work at the premises of the borrowing party.
This relationship, which is unusual and interesting, is shown in Figure 1. The employee is linked by a contract of employment with a temporary work agency, but in fact performs services for third parties. No contractual relationship arises between the hired-out employees and the user enterprise in the course of performing the work. Outsourced staff are drawn into the organizational structure of the company declaring the demand for labour and perform the stipulated tasks, while being obliged to carry out the orders of the lender.
Figure 1. Entities involved in temporary work: a temporary work agency and an employer-user linked by an agreement on lending employees, with the employee contracted to the agency but working for the user. Source: author's own work based on Storrie (2002).
It is worth looking at temporary work from the perspective of one of the three main components of the new institutional economics, namely the transaction costs theory. As O. Williamson noted: "Each problem which can be directly or indirectly expressed as a contracting problem, should be rather discussed in terms of reducing the transaction costs" (Williamson 1998, p. 54). Developing this idea, one could put forward the thesis that the expansion of temporary work is conditioned, at least in part, by an attempt by business entities to save on transaction costs. It is therefore appropriate to ask: what kinds of transaction costs are associated with the conclusion of employment contracts, and which of them can be reduced through temporary work? According to one of the divisions present in the professional literature, one can distinguish the following types of transaction costs: market, intercompany and public. Costs incurred prior to the conclusion of the employment contract, on the labour market, fall into the category of market transaction costs. Examples of intercompany costs include the implementation of employment contracts between the company and the persons it employs, the cost of measuring the efficiency of employees, and the costs of information processing. Another division of transaction costs distinguishes ex ante and ex post costs. Ex ante costs arise during the preparation and negotiation of contracts; they vary depending on the type of goods and services produced (Małysz 2003, p. 323). According to Williamson, ex post costs include the costs of creating and operating the management structure, the costs of monitoring, dispute resolution and others.
What benefits can a company obtain thanks to using the services of a temporary work agency?
The lack of perfect market information hinders the parties in concluding a contract of employment. An agency reduces the search costs which would be incurred by an enterprise wishing to find a certain number of employees with the right qualifications. Saving time is also of considerable importance: the required personnel can generally be obtained almost immediately. The agency also reduces the aforementioned costs for the employee by releasing him or her from the effort of finding employment. Of course, the employee could look for vacancies himself or herself and consecutively conclude fixed-term contracts with different employers; the agency takes over these responsibilities. It collects information on vacancies from the market and matches labour supply with demand, which reduces the risk of the employee remaining without a job for a long time. Moreover, by committing in the agreement to provide staff with specific qualifications, the agency in some way vouches for the employees and their skills, making it easier for them to get a job. Intermediation in the labour market is undoubtedly a step towards the economization of information. An additional link in the data channel poses the threat of distortion of the provided information, which is why it is important for the company-user and the temporary work agency to determine precisely which employees the company needs. The enterprise, in turn, saves on the costs of checking application documents and references, negotiation and the preparation of contracts. Instead of conducting negotiations with every candidate for a job, it concludes only one agreement – with the temporary work agency. The agency therefore minimizes the number of transactions, especially when a larger number of workers would otherwise be employed directly. These savings concern ex ante transaction costs, that is, those paving the way for an agreement and those related to its conclusion (Table 2).
What about the ex post costs, that is, those of implementing and monitoring the agreement? Temporary work (like fixed-term contracts) has a major advantage over permanent contracts in that the company does not bear the costs of termination. Moreover, it does not bear the costs associated with training, apprenticeship, the risk of rotation or motivating employees. The company declaring the demand for temporary workers knows that if they do not meet its requirements or fall ill, it may request other workers. The company in some way buys a resource of work of a certain "quality", with a guarantee of replacement. Individuals working under this form of employment tend to be seen as a multi-establishment reserve personnel who can be reached as needed. Of course, the company has to pay the agency for maintaining this readiness; however, it is a cheaper alternative than continuously maintaining such workers within the enterprise itself. In addition, the employment of temporary workers does not require the administrative work from the HR and accounting departments which accompanies autonomous employment by the company.

Among the other reasons why companies resort to temporary workers one should indicate the desire to achieve measurable savings on labour costs. A significant burden of such costs is the rule in many countries, and it is not surprising that employers look for ways to reduce it. Although companies do not always have a decisive impact on the level of direct wages, one method of reducing the non-wage elements of remuneration is the use of atypical forms of employment. In the case of temporary work, a company using hired-out staff, not formally acting as the employer, avoids non-wage labour costs such as holiday pay, additional paid holidays, sick leave, additional bonuses fixed by a tariff, and social security contributions. Besides, the salaries of such employees are in fact lower than those of permanent employees. Above all, however, temporary employees allow the company to cushion unexpected fluctuations in the demand for labour. Apart from the advantages of this type of agreement for employers, one should also indicate some shortcomings: research shows that these employees are on average less productive and less motivated, and greater expenditures must be incurred on their induction into the work and on control.
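To make this comparison concrete, here is a minimal sketch that totals ex ante and ex post hiring transaction costs for the two routes; every cost item and number is a hypothetical illustration, not data from the article:

```python
# Hypothetical per-hire transaction costs (arbitrary illustrative units).
direct_hiring = {
    "ex_ante": {"candidate_search": 30, "screening_and_references": 15, "per_candidate_negotiation": 20},
    "ex_post": {"hr_and_payroll_administration": 25, "termination_and_rotation_risk": 40},
}

agency_hiring = {
    "ex_ante": {"single_agency_contract": 10},                  # one agreement replaces many negotiations
    "ex_post": {"agency_fee": 45, "replacement_guarantee": 0},  # replacement risk borne by the agency
}

def total_cost(option: dict) -> int:
    """Sum all ex ante and ex post items for one hiring route."""
    return sum(sum(stage.values()) for stage in option.values())

for name, option in (("direct", direct_hiring), ("agency", agency_hiring)):
    print(name, total_cost(option))
# direct 130, agency 55 -> under these assumed figures the agency route is cheaper
```

The point of the sketch is the structure of the decision, not the numbers: whichever route yields the lower total of ex ante plus ex post transaction costs is the one the theory predicts the enterprise will choose.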
Conclusions
The creation of the concept of new institutional economics was caused by a focus on the legal-institutional environment and its role, alongside the market mechanism (characteristic of classical economics), in shaping many economic processes. Institutions, as understood in the theory of transaction costs, differ from the everyday meaning of the word: they comprise various elements of the environment and its rules, as well as the personality traits of the people taking decisions on the market; they affect the efficiency of the economy and the pace of change and adaptation. Markets also constitute an institution and should evolve in the direction of an efficient market. The greatest institution of all, after all, is the state. Although the concept of transaction costs uses the achievements and theoretical principles of neoclassical economics, it also contains the elements of new institutional economics which it has incorporated. According to this theory, institutions, as the rules of behaviour of all market players and the structures managing them, undergo modifications which lead to lower costs for the whole economic system. The reduction of transaction costs constitutes one dimension of the increase in the effectiveness of operating on the market, and it is a positive phenomenon (Chotkowski 2010, pp. 107-108).
The weaknesses of the neo-institutional approach include, first of all, the problem of the operationalization of transaction costs, which are most often immeasurable. Their identification, however, is simplified by a contractual approach. The enterprise in this context is a party to contracts or agreements, both externally in relations with other market participants and the state, and as a party in transactions with staff (Zbroińska 2013, p. 172).
Although the principal elements of the institutional system (e.g. legal regulations, the economic and administrative system, including the tax system) are shaped within the framework of state policy decisions, a large part of the decisions affecting the reduction of transaction costs depends on the enterprises themselves. The main problem in implementing the guidelines of the new institutional economics in business practice is to find the right balance, in the contracting process, between free-market mechanisms and hierarchical management structures. These relationships depend on four key dimensions of a transaction: the specificity of the assets at the disposal of the market participants, uncertainty, complexity, and the frequency of transactions. Transaction cost economics emphasizes that, especially under conditions of high asset specificity, uncertainty and transaction frequency, the costs of internal coordination can be lower than free-market transaction costs.
To sum up the above considerations, one can conclude that temporary work agencies allow companies to achieve savings in the transaction costs related to hiring personnel. One has to realize, however, that an accurate analysis and empirical account of these costs is difficult to conduct; this is a basic limitation of the transaction costs theory as a research tool. Another disadvantage is its excessively descriptive character and low degree of formalization. The transaction costs theory explains the blurring of the boundaries of the enterprise and the flattening of structures within concepts such as outsourcing, downsizing, and lean management. It also helps explain the expansion of organizational innovation and new forms of employment.
Table 2. Transaction costs ex post, ex ante. Source: author's own work based on Małysz (2003).
Graphene-Doped Polymethyl Methacrylate (PMMA) as a New Restorative Material in Implant-Prosthetics: In Vitro Analysis of Resistance to Mechanical Fatigue
Background and Purpose: Provisional prostheses in restorations over several implants with immediate loading in completely edentulous patients carry an increased risk of structural fractures. An analysis was performed of the resistance to fracture of prosthetic structures with cantilevers using graphene-doped polymethyl methacrylate (PMMA) resins and CAD-CAM technology. Methods: A master model was produced with four implants measuring 4 mm in diameter and spaced 3 mm apart, over which 44 specimens representing three-unit fixed partial prostheses with an 11 mm cantilever were placed. These structures were cemented over titanium abutments using dual-cure resin cement. Twenty-two of the 44 units were manufactured from machined PMMA discs, and 22 from PMMA doped with graphene oxide nanoparticles (PMMA-G). All of the samples were tested in a chewing simulator with a load of 80 N until fracture or 240,000 load applications. Results: The mean number of load applications until fracture of the temporary restoration was 155,455 in the PMMA-G group versus 51,136 in the PMMA group. Conclusions: Resistance to fracture under cyclic loading was three times greater in the PMMA-G group than in the PMMA group.
Introduction
Immediate loading in the case of full-arch prostheses in completely edentulous patients has been regarded as a safe and reliable technique, provided that primary implant stability is ensured, with passive fit, splinting and the elimination of micromovements capable of interfering with the implant osseointegration process [1,2]. Immediate loading allows the immediate restoration of chewing function, speech and aesthetics, with clear improvements to patient satisfaction and quality of life [3].
In these cases, the utilization over 3-6 months of a temporary prosthesis is considered necessary in order to complete the healing and osseointegration process [4,5]. Such provisional restorations are often made from polymethyl methacrylate (PMMA), due to its good properties. Classically, PMMA has been the material of choice because of its simplicity of use and its low elastic modulus, which cushions occlusal loading stress. PMMA has traditionally been employed using heat-curing techniques, though at present the material is also used in the form of discs that can be machined using CAD-CAM technology, resulting in structures with better mechanical and biological properties [6][7][8].
Distal cantilevers are used in many of these provisional restorations in order to expand the occlusal surface. This may cause fractures of the restoration that pose a serious risk to implant stability (bone loss under occlusal loading; screw loosening), since they may result in an increase in micromovements that prevent osseointegration [9].
In order to avoid this problem, the addition of reinforcing materials to the main structure has been proposed, including metal beams, the incorporation of fibers and glass meshes, silica, carbon, polyamide, etc., [10][11][12][13]. One such reinforcing material is graphene, added to the composition of the PMMA discs in the form of small amounts of graphene particles. This material affords improved mechanical properties (fracture resistance) and has antibacterial activity with minimal cytotoxicity [14,15].
The main aim of the present study was to compare the performance and durability of samples of PMMA created with CAD-CAM technology and PMMA created with CAD-CAM technology doped with graphene oxide, regarding fatigue load across 240,000 cycles (12 months clinical life). The null hypothesis would be that the machined PMMA and the graphene-doped PMMA samples exhibited the same mechanical response.
Materials and Methods
Two titanium implants measuring 4 mm in diameter and 11.5 mm in length (BOST411 Zimmer Biomet, Palm Beach Gardens, FL, USA) were placed in a cylindrical nylon tube affixed with Exakto-Form ® epoxy resin (Bredent, Senden, Germany), following the specifications of standard UNE-EN ISO 14801:2017. The implants were spaced 3 mm apart, parallel to each other, and with an inclination of zero degrees.
Two premolar crowns measuring 7 mm in mesiodistal (M-D) width were designed. Likewise, a molar measuring 11 mm in M-D width was included in extension, joined distal to the crown over the implant by means of a 4 × 4 mm connector (Figure 1).
Graphene nanofibers have diameters between 10 and 100 nanometers and lengths of 1000 nanometers. In transmission electron microscopy (TEM) analysis, they present a stacked-cup structure in which the inside and outside of the fibers are exposed, with different heterogeneity. The chemical composition of the graphene nanofibers was analyzed using X-ray photoelectron spectroscopy (XPS): they are composed of 91% carbon, 2.5% silicon, and 6.5% oxygen. PMMA was doped with graphene in the range of 0.15-0.175% parts per million (Table 1). Graphene was dispersed into the resin monomer (liquid phase) by means of ultrasonic dispersion processes. The functional groups of graphene were opened to allow bonding by chemical bonds to the monomer (Table 2). Additional tabulated characteristics of the nanofibers include a thermal oxidation temperature of 350-680 °C (maximum oxidation at 520-640 °C), with CO and CO2 as the main products of thermal oxidation; the number of graphene planes in the crystal was expressed as npg = Lc/d002 (d002 being the interlaminar spacing and Lc the average crystal size perpendicular to the basal graphene planes), crystallinity as the ID/IG ratio of the intensities of the D and G bands in the Raman spectrum, and fiber dimensions were determined by counting at least 200 GNFs in TEM micrographs.
After curing the cement (auto- and photo-polymerized), the crowns were screwed onto the implants with a torque of 20 N cm, following the manufacturer's instructions.
The tubes were stored in saline solution until fatigue testing with a cyclic loading machine (Chewing Simulator CS-4.2, Mechatronik, Feldkirchen-Westerham, Germany). The simulator applied blocks of 15,000 applications of an 80 N load to the center of the occlusal surface of the molar, 10 mm from the center of the most distal implant. Loading was applied by a steel ball affixed to the mobile axis of the machine, with a displacement of 2.5 mm. The simulator operated at a frequency of 2 Hz and a vertical speed of 40 mm/s. No lateral loading was applied. The loading cycles were applied until fracture or the completion of 240,000 load applications, simulating 12 months of wear life according to our immediate loading protocols.
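As a quick plausibility check on these parameters, a hedged sketch (the 20,000-cycles-per-month equivalence is derived from the stated 240,000 cycles ≈ 12 months; the machine-time figures follow from the 2 Hz frequency) converts cycle counts into machine time and simulated clinical time:

```python
FREQUENCY_HZ = 2.0                    # chewing simulator frequency
CYCLES_PER_SIMULATED_YEAR = 240_000   # per the protocol: 240,000 cycles ~ 12 months of wear

def machine_hours(cycles: int, freq_hz: float = FREQUENCY_HZ) -> float:
    """Wall-clock hours the simulator needs to apply the given number of cycles."""
    return cycles / freq_hz / 3600.0

def simulated_months(cycles: int) -> float:
    """Equivalent months of clinical wear under the study's equivalence assumption."""
    return 12.0 * cycles / CYCLES_PER_SIMULATED_YEAR

for cycles in (52_500, 120_000, 240_000):  # PMMA median, PMMA-G median, full test
    print(f"{cycles:>7} cycles -> {machine_hours(cycles):5.1f} h machine time, "
          f"{simulated_months(cycles):4.1f} simulated months")
# 52,500 cycles correspond to roughly 2.6 simulated months; the full 240,000-cycle
# test represents 12 months and takes about 33.3 h of continuous machine time.
```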
An analysis was conducted of the variables time to fracture and length, with the calculation of the mean, standard deviation (SD), maximum, minimum, median and 25th and 75th percentiles.
The Kolmogorov-Smirnov test showed that the variable number of cycles did not exhibit a normal distribution; a nonparametric analytical approach was therefore adopted. The Mann-Whitney U-test was used for comparison of the maximum number of cycles between the two groups. The chi-square test was used to compare the fracture rates or the extent to which a certain threshold of load applications (120,000) was reached in the two study groups. The survival curves corresponding to the number of cycles applied until fracture were plotted using the Kaplan-Meier method, with the log-rank test for comparing the curves between the groups. The level of statistical significance was established as 5% (α = 0.05). Based on the Mann-Whitney U-test, with a confidence level of 95% and considering an effect size f = 1.8, the statistical power reached was 99.9% for the detection of statistically significant differences.
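A minimal sketch of this analysis pipeline in Python follows; the cycle counts below are placeholder values (not the study's raw data), and the scipy and lifelines packages are assumed to be available:

```python
import numpy as np
from scipy.stats import mannwhitneyu, chi2_contingency
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Placeholder cycles-to-fracture data; 240,000 marks a censored (unfractured) run.
pmma   = np.array([30_000, 45_000, 52_500, 60_000, 75_000])
pmma_g = np.array([90_000, 120_000, 150_000, 240_000, 240_000])
event_pmma   = pmma < 240_000        # True = fracture observed before the test ended
event_pmma_g = pmma_g < 240_000

# Nonparametric comparison of the number of cycles between the two groups.
u_stat, p_mw = mannwhitneyu(pmma, pmma_g, alternative="two-sided")

# Chi-square test on reaching the 120,000-cycle threshold without fracture.
table = [
    [int((pmma >= 120_000).sum()), int((pmma_g >= 120_000).sum())],  # reached the threshold
    [int((pmma < 120_000).sum()),  int((pmma_g < 120_000).sum())],   # fractured earlier
]
chi2, p_chi2, _, _ = chi2_contingency(table)

# Kaplan-Meier survival curves and log-rank comparison, censoring unfractured runs.
kmf_pmma = KaplanMeierFitter().fit(pmma, event_observed=event_pmma, label="PMMA")
kmf_g = KaplanMeierFitter().fit(pmma_g, event_observed=event_pmma_g, label="PMMA-G")
lr = logrank_test(pmma, pmma_g, event_observed_A=event_pmma, event_observed_B=event_pmma_g)
print(p_mw, p_chi2, lr.p_value)
```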
Results
Each provisional restoration underwent cyclic loading until fracture, recording the number of cycles at which fracture took place, or until the completion of 240,000 load applications in the event of no fracture.
• In this regard, the specimens in the PMMA group recorded a median of 52,500 load applications (interquartile range [IQR]: 30,000-60,000) versus 120,000 (IQR: 120,000-240,000) in the PMMA-G group (Table 3).
• In descriptive terms, the difference between the two groups is significant, and the box plots show the number of cycles to be significantly greater in the graphene-doped specimen group (p < 0.001) (Figure 2).
• Likewise, the number of specimens that exceeded 120,000 load applications without fracture was significantly greater in the PMMA-G group than in the PMMA group (p < 0.001), and the corresponding fracture rate was significantly lower (p = 0.009).
• Lastly, the cumulative survival curves corresponding to the number of cycles up to fracture showed significant differences between the two groups (p < 0.001, log-rank test). Specifically, in the PMMA group, the median survival was 45,000 cycles (95% confidence interval [95%CI]: 33,508-56,491), versus 120,000 in the PMMA-G group (95%CI: 70,063-169,937) (Figure 3). This survival curve simulates a clinical situation, because the aim of the study was to analyze the resistance of this material in a fatigue load test. The great dispersion in the resistance values highlights the unpredictability of the behavior of the material, as it fractures at very different values. However, PMMA-G has, on average, better fracture resistance values.
• Most fractures occurred in both groups in the area of the junction with the titanium abutment, where the thickness of the PMMA was less than 2 mm (Figures 4 and 5). The fatigue study was carried out either until the fracture of the material (which occurred in 100% of the PMMA samples) or up to 240,000 cycles; 68.2% of the graphene-doped PMMA samples fractured.

The results obtained show that, from multiple perspectives, PMMA reinforced with graphene oxide proves effective in the manufacture of provisional restorations.
Discussion
The results obtained in this in vitro study indicate that provisional restorations manufactured with CAD-CAM technology and screwed over implants with cantilevers exhibit greater resistance to fractures when made from graphene-doped PMMA than when made from plain PMMA. This refutes our working hypothesis that the machined PMMA and graphene-doped PMMA samples exhibit the same mechanical response.
One of the materials used is graphene, which is added to the acrylic resin during the manufacturing process. Good homogenization of the graphene oxide within the PMMA matrix is crucial in this process in order to secure the best mechanical and biological properties, such as inhibition of the adherence of microorganisms. As a result of these characteristics, the use of graphene is recommended in dental prostheses and orthodontics [16][17][18].
The specimens used in the present study were intended to reproduce the clinical situation of a cantilever in a provisional restoration with immediate loading over implants. For this purpose, the specimens were designed to resemble a real-life prosthesis with anatomically shaped teeth [19].
Most authors do not perform tests on structures of this kind, but rather on bars of different shapes [20,21]. On the other hand, we applied occlusal loading to the center of the occlusal surface of the cantilever (a molar), 10 mm from the center of the most distal implant [22] (Figure 6).
Although much has been written about the convenience or otherwise of cantilevers and their size [23,24], we decided to use a first molar, which reflects a frequent clinical scenario and affords a sufficient occlusal surface in the provisional prosthesis. The connector between the most distal abutment and the cantilever measured 4 × 4 mm, in order to simulate a minimum thickness similar to that recommended by Jemt [25], though some authors consider that a greater thickness is needed in order to avoid fractures at this level [19] (Figure 7). Aluminum implant replicas were used, since the mechanical resistance of this metal is far greater than that of acrylic resins, thus suggesting that they would have no impact upon the results obtained [26].
The implant replicas were placed in the nylon tubes of the chewing simulator using a splint for placement within the epoxy resin, in a stable and fully vertical position as recommended by standard ISO 14801:2017.
The applied load was 80 N, which is similar to the force used by Rosentritt [27] and represents a conventional occlusal force. This is an important but not a fundamental factor, since although some authors such as Suarez-Feito and Shen left the cantilevers free of occlusion, they still recorded an important incidence of fractures [24,28].
In general, there are two ways to produce immediate loading provisional prostheses. One approach is to use the patient's complete prosthesis, perforating it where the implants and their provisional abutments have been positioned, and joining both structures with self-polymerizing acrylic resin; the prosthesis is then trimmed and polished appropriately [29][30][31][32]. The alternative approach involves obtaining an impression at the time of surgery and manufacturing the PMMA restoration within a few hours, joining it to provisional titanium abutments [33]. In the present study, we used this second approach, because it allows the laboratory to produce the machined structures and to bond them to the provisional abutments using cementing materials of greater quality. For such bonding, we used a technique similar to that described by Pitta, joining the provisional abutments to the PMMA and PMMA-G structures with dual-cure resin after sandblasting with 30 µm silica particles at a pressure of 2 bar within the tubes of the acrylic structure [34]. This resulted in very stable bonding and, in contrast to other studies such as that of Angelara et al., no specimen was decemented [21].
These provisional restorations must remain in the mouth for several months, and in accordance with Soriano et al., we considered it important to know how they behave in response to fatigue after the repeated application of stress, thus leading us to perform cyclic fatigue testing. According to Steiner et al., 240,000 loading applications would be equivalent to one year of function in the mouth. As a result, the 6-month period during which the provisional restorations must remain in the maxilla would be represented by 120,000 load applications. In the case of the mandible, 60,000 applications would be representative of the three months of required presence in the mandible [35,36].
In clinical practice, there are several situations that require advanced surgical procedures with a high biological cost. In this in vitro study, we simulated an adverse clinical situation, where the limitations of bone availability require the use of prostheses with a distal cantilever [37].
The limitations of this study are those inherent to in vitro testing. This study, with low statistical power, is a preliminary analysis of the behavior of these materials in an extreme situation such as distal cantilevers. PMMA-G proves to be a suitable material for use in the prosthesis; however, more clinical studies with a long follow-up period are necessary to analyze its biomechanical behavior.
Conclusions
The statistical data obtained in this in vitro study clearly reflect the benefits of doping PMMA with graphene oxide in the manufacture of immediate loading provisional restorations with a cantilever molar using CAD-CAM technology. Despite the solidity of the results, since this is an in vitro study, caution is required in extrapolating the findings to a real-life clinical setting. Furthermore, the data obtained should be complemented by thermocycling studies.
Non-attendance at urgent referral appointments for suspected cancer: a qualitative study to gain understanding from patients and GPs
Background The 2-week-wait urgent referral policy in the UK has sought to improve cancer outcomes by accelerating diagnosis and treatment. However, around 5–7% of symptomatic referred patients cancel or do not attend their hospital appointment. While subsequent cancer diagnosis was less likely in non-attenders, those with a diagnosis had worse early mortality outcomes. Aim To examine how interpersonal, communication, social, and organisational factors influence a patient’s non-attendance. Design and setting Qualitative study in GP practices in one Northern English city. Method In-depth, individual interviews were undertaken face-to-face or by telephone between December 2016 and May 2018, followed by thematic framework analysis. Results In this study 21 GPs, and 24 patients who did not attend or had cancelled their appointment were interviewed, deriving a range of potential explanations for non-attendance, including: system flaws; GP difficulties with booking appointments; patient difficulties with navigating the appointment system, particularly older patients and those from more deprived areas; patients leading ‘difficult lives’; and patients’ expectations of the referral, informed by their beliefs, circumstances, priorities, and the perceived prognosis. GPs recognised the importance of communication with the patient, particularly the need to tailor communication to perceived patient understanding and anxiety. GPs and practices varied in their responses to patient non-attendance, influenced by time pressures and perceptions of patient responsibility. Conclusion Failure to be seen within 2 weeks of urgent referral resulted from a number of patient and provider factors. The urgent referral process in general practice and cancer services should accommodate patient perceptions and responses, facilitate referral and attendance, and enable responses to patient non-attendance.
INTRODUCTION
Introduced in 2000, the 2-week-wait (2WW) policy sought to improve cancer outcomes by accelerating diagnosis and treatment. All NHS patients in England and Wales with suspected cancer should be seen within 2 weeks of GP referral. 1 Though intended to reduce waiting times, the policy also had potential to reduce social inequalities and geographical variation in outcomes. 2 More than 1.9 million 2WW referrals are made annually. 3 Almost half of all cancers are identified through this route, though for 92% of patients, referral will exclude cancer. 4 Hospital trusts face penalties if <93% of referred patients are seen within 2 weeks. Breaches of this target are commonly caused by patient appointment non-attendance, including repeated non-attendance. Around 5-7% of symptomatic referred patients cancel or do not attend their hospital appointment. 5 Help-seeking with symptoms and patient non-attendance have been investigated extensively in other patient pathways, [6][7][8][9][10] revealing sociodemographic patterning of non-attendance. It has not yet been researched in the 2WW pathway, despite referred patients being symptomatic and the potential impact of non-attendance on diagnostic interval 11 and cancer outcomes. Recent quantitative research by the current authors, using a dataset of 109 433 patients (including 5673 non-attenders), found both patient and practice factors predicted non-attendance. 5 Rates were highest in the youngest (aged 18-28 years) and oldest (aged >85 years) patients; in males; in patients living in more deprived areas or further from the hospital; and in those with specific suspected cancers (highest among upper gastrointestinal [GI] referrals). While cancer diagnosis was less likely in non-attenders, early mortality outcomes were worse in this group compared to attenders.
With this in mind, this study sought to gain an in-depth understanding of patients' and referring GPs' experiences of non-attendance for urgent referral appointments using qualitative research methods to examine how interpersonal, communication, social, and organisational factors can mediate decision making and influence non-attendance. By triangulating patient and GP views, the authors aimed to identify and understand a range of possible barriers to attendance in this patient group and identify potential solutions.
METHOD
This was a qualitative study that interviewed patients referred for suspected cancer and GPs. All practices in one large Northern English city were invited to help recruit participants and identify GP interviewees.
Sampling
While a purposive sampling strategy was planned, to gain maximum variation
in key factors (rates of 2WW referral and non-attendance, practice Index of Multiple Deprivation [IMD], 12 location, GP sex, and years of experience), difficulties in recruiting these hard-to-reach patients meant that opportunistic sampling was used. Opportunistic sampling is often necessary for recruiting difficult-to-reach patients, such as those who do not attend appointments. 13 The authors selected 24 patients for interview from the 29 consenting and eligible patients (Figure 1). To achieve this pool of patients, the hospital trust identified, on a weekly basis, patients from participating practices who had not attended their appointment without warning ('did not attend': DNA) or had cancelled it (cancelled at least two appointments or cancelled the referral completely). Patients dissenting from the use of their health records for research were excluded. Practices determined patient eligibility according to mental health problems, learning disability, limited English, or any other known factor potentially affecting ability to consent and/or undertake an interview.
Participant recruitment
The authors wrote to all 105 general practices in the city, requesting one or both forms of participation: assistance with patient recruitment and GP interview participation.
Eligible patients were invited by personalised GP letter and recruitment pack. GPs opting for further information about participation were invited by personalised letter from the study team and recruitment pack.
Patients were contacted for in-depth face-to-face interviews; interviews were held within 12 weeks of cancellation or DNA. GP interviews were conducted face-to-face or by telephone (n = 2). Patients were given a 25 GBP honorarium. Practices were reimbursed for GP time at National Institute for Health Research Comprehensive Regional Network rates.
Data collection
Participants were interviewed between December 2016 and May 2018 by one of two experienced researchers using a topic guide (available from the authors on request), which had been informed by relevant published research, the study Patient and Public Involvement Group, and aspects of the quantitative analysis. 5 Patient interviews focused on the following: recollection of symptoms provoking the GP appointment; how the referral was explained; and reasons for not attending. GP interviews focused on: 2WW referral decisions; why some patients do not attend; and how non-attendance is managed by the practice. GPs were asked to discuss 2WW referrals both generally and with reference to individual scenarios. Potential interventions were discussed with both groups.
Recordings were transcribed verbatim and all transcriptions checked for accuracy.
Data analysis
Framework analysis was used. 14 Following data familiarisation, a coding framework was developed for emergent themes and subthemes by two experienced qualitative researchers using a grounded theory approach. 15 GP and patient data were analysed concurrently to draw comparisons and enrich interpretations from multiple viewpoints (Figure 2). Elements of consensus and differences between individual patients and individual GPs, as well as between demographic groups, were explored. This particularly focused on exploring the effect of participant age, sex, GP years' experience, level of deprivation and, to some extent, suspected cancer pathway, though this was limited owing to the large number of categories. Analysis continued until saturation occurred within the evolving themes, a concept Saunders et al term 'inductive thematic saturation'. 16 Investigator triangulation was employed whereby a random sample of 20% of transcriptions was second-coded and independently checked to ensure consistency in the use of the coding framework and interpretations made.

How this fits in

Previous research into patient non-attendance at appointments has mostly focused on primary care, with a concern about wasted time and resources. To the authors' knowledge, this is the first study of non-attendance by symptomatic patients referred owing to suspected cancer. The study found that a range of patient and provider factors were associated with non-attendance, including several to which healthcare organisations and individual practitioners may be able to respond.
RESULTS
A total of 34 practices participated in patient identification, from which 21 GPs were interviewed (from 16 different practices). GPs had a wide range of experience (Table 1), and three were current or former local clinical cancer leads. Out of 251 patient eligibility checks requested, 143 (57%) were eligible for interview and 138 patients were contacted (consented 29; dissented 15; no response 94) (Figure 1, Tables 1 and 2), and 24 were interviewed. Patients had been referred with a range of suspected cancers, though patients with suspected skin cancer were over-represented and the authors were only able to interview one patient with suspected breast cancer. Patients with suspected lung cancer were not included as these are not routinely referred via the 2WW process in the city in which this study was based. As would be expected in this patient group, most patients tended to be older (median age 60.5 years), though several younger patients were also interviewed (age range 22-77 years).

(Figure 2 subthemes included: use of the local leaflet; an assumption that the patient wants to attend; time constraints; and administrative staff making referrals.)

Interviews generated explanations for non-attendance ranging from system flaws and referral processes to patient expectations and social context, in addition to communication within the consultation (Figure 2). This study explored referral processes and how they impacted on attendance, and how patients and GPs balanced notions of personal responsibility and paternalistic care. Finally, the authors summarise themes arising.
System flaws
The requirement to be seen within 2 weeks of referral presented logistical challenges that impacted attendance. Patients described receiving appointment letters after the intended appointment date or with 1 day's notice. One patient was registered blind and needed an interpreter to read the letter; this was not possible in the timeframe. Other examples included incorrect patient contact details; errors in the hospital's mobile phone text cancellation system; and a transport ambulance arriving several hours late.
Expectations
Patients' beliefs about their symptoms, understanding of tests for which they have been referred, and prognostic expectations were key, often interrelated and mutually reinforcing. This may be further mediated by the communication with the GP, itself influenced by the doctor's expectations about the symptoms and possible diagnosis. Patient and GP interviews identified three interacting factors that mediated attendance: patients' circumstances and priorities; patients' beliefs (including emotions, such as fear); and perceived severity.
Patients' circumstances and priorities. Some patients are disproportionately exposed to challenging social factors, creating further barriers to health care. 17 Patients often had multiple comorbidities or significant caring responsibilities. Patients with multiple health conditions described confusion over appointment times and sometimes had difficulty recalling specific circumstances of missed appointments, suggesting they may find their lives difficult to manage. Patients also described how mental health and financial problems created difficulties when prioritising competing demands.
Several doctors commented on the broader difficulties faced by patients (GP01, GP10, GP22). One such patient was also receiving financial benefits and lived in an area in the lowest IMD quintile nationally.
GP and patient interviews suggested that deprivation may be strongly related to patients' decisions. Two-thirds (8 out of 12) of patients commenting on these wider life struggles lived in areas ranked in the lowest IMD quintile. GPs described some possible effects:

'[Some patients] lead such chaotic lives and we do have a significant number of patients who, people are really down, they're kind of at the bottom of the chaos ladder of life who literally are in so much debt that they will just not open any letters.' (GP01, M, age 45 years, 15 years' experience)

Deprivation may also lead to more immediate difficulties; patients may struggle to negotiate time away from work, and transport costs can be prohibitive. Navigating the appointment system appeared a particular problem for older patients and those of lower socioeconomic status; all patients who described navigation difficulties were from areas ranked in the lowest quintile of the IMD and aged >60 years. GPs suggested that patients from minority ethnicities and cultures may struggle to navigate the system, partly owing to language barriers, but also because their expectations of the NHS may differ from those of other health systems.
Many GPs commented on how time constraints within consultations prevented them from discussing details. Some, however, considered the practicalities of patient attendance:

'You cannot give an appointment to a 92-year-old at 9 o'clock … in general they are obviously in pain and they usually have arthritis and they're slow in the morning. Afternoon is best.' (GP09, female [F], age 52 years, 20 years' experience)

Emotional responses to testing. Most patients described relief that they would be seen quickly. For some patients, however, fear and anxiety affected attendance, with patients describing (at times extreme) hesitation about further testing because they feared the procedure and/or a potential cancer diagnosis. GPs commonly cited this as a reason for patients not attending for investigative tests (12 out of 21 GPs). Fear was especially common among patients referred through upper and lower GI routes, reflecting the invasive tests. Other emotions potentially influencing attendance, such as embarrassment or disgust, were also touched on, for example in the earlier quote regarding colonoscopy from Pt65.
By discussing these concerns, patients may become more informed about the test. However, this relies on them being sufficiently confident or being able to raise concerns. If raised, GPs could prescribe medication for anxiety or discuss alternative scenarios:

'I do remember cancelling one [gynaecological test] because I was scared … So that was when I went back to the GP and then they fast tracked me and I agreed to have it under general [anaesthetic].' (Pt72, F, age 67 years)

It was clear that some patients were not aware that sedatives could be provided for invasive tests, such as colonoscopy. Hospitals are using 'straight-to-test' (STT) appointments to reduce the total number of appointments, along with guidance that GPs should assess the patient's fitness to do this; however, the STT route may inhibit opportunities to allay fears and correct misconceptions.

Fear may also relate to the patient's broader concerns about cancer diagnosis and prognosis. Missing an appointment can protect them from the stress of a potential diagnosis. By presenting with symptoms it may seem that patients are actively engaging in managing their health. However, they may also be seeking reassurance and may not expect a cancer referral. Some GPs described this as a form of denial:

'That's news that they didn't want to hear and they don't want to process that and so they just kind of put the shutters up and ignore it.' (GP03, F, age 43 years, 15 years' experience)

Some patients revealed instances of denial. They were adamant that they did not have cancer, and their own or their relatives' wider experiences of illness influenced these judgements. Avoiding assessment for cancer was a means of coping with these stressful events:

'I'm telling you it's not cancer ... I says, don't complicate matters.' (Pt106, M, age 65 years)

Some older patients may not want to seek diagnosis or treatment for potential cancer. This was commented on by some GPs:

'They accept that there's something wrong, they accept they've probably got cancer, but actually they're not sure they really want to do anything about it.' (GP28, F, age 55 years, 27 years' experience)

Perceived severity. Patients' perception of symptom severity shaped decision making. While perceptions may be complex and influenced by previous experiences and beliefs, doctor-patient communication was influential. Some patients were unaware of the potential appointment urgency and reported that their GP did not explain:

'[The GP] didn't sort of explain that much. I didn't really know what it was all about.' (Pt111, F, age 22 years)

Many patients were surprised by the speed of the referral letter and appointment; consequently several were not available for appointments. This is surprising since local digital systems prompt GPs to confirm that referred patients are available for the next 14 days. Few GPs, however, reported completing this because of time constraints:

'If they've got a cancer I don't want it to be the fact their holiday delays their diagnosis. I've told [the hospital trust] they're on holiday, they need an appointment when they get back.' (GP34, F, age 39 years, 10 years' experience)
Doctor-patient communication
Many GPs commented on how conversations with patients had become more difficult as National Institute for Health and Care Excellence guidance for certain cancer referrals had lowered referral thresholds. 18 GPs believed this may dilute messages given to patients, and held concerns about increased pressure on hospitals. There was a sense that referral thresholds were often lower among more recently qualified GPs, described as a 'tick-box generation'. Some GPs suggested that a growing fear of litigation (GP05, GP08, GP21) may create 'soft' referrals, used to reduce uncertainty.
Most GPs acknowledged the careful negotiation required when balancing the potential cancer risk with patient anxiety, but felt this may be misunderstood:

'It's quite a difficult balance to say, you must attend, it could be cancer but it's probably not … it's a quite hard, a hard kind of dynamic for the patient to grasp.' (GP21, M, age 35 years, 7 years' experience)

GPs described the importance of rapport, particularly when a patient held concerns about a procedure or perhaps misunderstood the implication of symptoms. Patients may be less open to raising concerns unless trust has been established:

'[The patient will] give you the bombshell at the end, you know … oh by the way whilst I'm here and then they say, oh I've had altered bowel habit.' (GP26, F, age 46 years, 18 years' experience)

Four GPs, all female, said they actively chose not to use the 'C [Cancer] word' with some referred patients:

'Sometimes I don't mention the word cancer ... that's deliberate and it's normally with somebody who is already so anxious … we still talk about the fact that it could be something very serious or it could be something sinister.' (GP28, F, age 55 years, 27 years' experience)

All GPs described tailoring communication to patients, with prior knowledge of a patient being key. Most commented on the importance of being clear and expressing risks, partly to ensure 2WW attendance and partly to prepare patients for a potential cancer diagnosis:

'It's a real shock if a patient goes for a 2-week-wait appointment and they're suddenly hit with it could be cancer, well, "Why didn't you say that when you referred?"' (GP02, M, age 55 years, 27 years' experience)
Negotiating responsibility
GPs described moves away from paternalism, with increased onus on the patient to take responsibility for their health and health care. However, this varied. Some practices used 'fail safes' whereby administrative staff checked that appointments had been made within the 2-week timeframe, whereas other GPs emphasised that patients should contact the practice if they did not receive an appointment. This onus of responsibility was not always well understood by patients; some were surprised there was no GP follow-up, despite ongoing symptoms. A GP reflected on the challenges:

'I think nowadays we have to, unfortunately, rely more on the patients telling us they do rather than checking that they do or don't. They have to take some responsibility for their health care, I suppose.' (GP36, F, age 43 years, 16 years' experience)

To encourage 2WW attendance, some GPs delayed referrals if patients were about to go on holiday, while GP28 described collecting an older patient from home and attending the hospital appointment with them, thinking that fear of the test would otherwise prevent attendance.
Many patients were concerned about wasted resources through non-attendance, related to a sense of complying with normative assumptions of a 'good patient'. Some described being concerned that non-attendance would be noted as a 'black mark' on their health records:

'I would hate for people to then slow down or, think oh he's already cancelled once or whatever, so I just want to do things right.' (Pt95, M, age 44 years)

GPs described drawing on similar 'good patient' notions, and the guilt patients may feel for non-attendance, to encourage patients to attend rearranged appointments. Most GPs described follow-up whereby practice receptionists telephoned non-attending patients; at times GPs made these calls themselves, partly to enable medical discussion and partly to reflect the greater respect patients were felt to have for GPs. This could be seen as an example of 'safety netting', 19 though none of the GP participants used that term. Meanwhile, some GPs felt that the onus should be on the patient to attend and that an informed decision not to attend should be respected:

'If they don't take you up on that offer and they've already been offered an appointment for a test and they've declined that, then I think at some point you might need to respect the patient's autonomy.' (GP05, M, age 38 years, 10 years' experience)
Referral processes
GPs described struggling to undertake appointments involving suspected cancer referrals in 10-minute timeslots. This explains why online referrals were either completed at the end of a clinic or by practice administrators. A few GPs completed the referral online themselves during the consultation and, in some instances, completed a choose-and-book appointment booking with the patient: 'So you can actually do [a choose-and-book appointment] on the system with the patient in the room and they walk out with the date ... I personally do that, but I'm the only one in my practice that does, my colleagues use our secretaries.' (GP22, M, age 40 years, 12 years' experience) Within the city in which this study took place, an information leaflet had been developed for patients being referred on the 2WW pathway. It had been agreed between the hospital trust, local clinical commissioning group, practices, and a patient representative group, but only 7 out of 21 participating GPs used it. Two patients (Pt23, Pt94) commented that they would have found a leaflet useful.
The online 2WW referral includes a prompt to ensure the GP has given this leaflet; however, since GPs rarely complete the referral process during a consultation, this may not be done. Some GPs deliberately did not give the leaflet as they felt the reference to cancer would worry patients:

'It's treading that fine line, isn't it, between wanting them to know it's important they get followed up and not wanting to scare.' (GP36, F, age 43 years, 16 years' experience)
Interventions
Potential interventions to increase attendance were raised by patients and GPs. These tended to be relatively straightforward. For example, making appointments with patients by phone rather than letter could reduce communication delays, and ensure patient availability. Administrative support was used in some practices to check contact details and help patients navigate the choose-and-book system, and some GPs booked appointments with the patient during the consultation. Text messaging is being used increasingly to send appointment and reminder notifications. Increased vigilance in ensuring patients were suitable for STT appointments was suggested, particularly a need for further discussions around investigations that patients would find acceptable.
Practices varied in their responses to information from the hospital about non-attendance at urgent referral appointments; while some GPs described telephoning non-attending patients to stress the importance of re-referral and attendance, others did not follow up.
DISCUSSION

Summary
Interviews with patients and GPs offer several potential explanations for non-attendance at urgent referral appointments for suspected cancer. System flaws explained some instances. GPs talked about practical difficulties experienced with booking appointments and time pressures that restrict them. 20 Patients' expectations of referral were complex, informed by beliefs, circumstances and priorities, and the perceived prognosis. These were often mediated by communication with the GP. GPs' recognition of the importance of communication was evident in their acknowledgement of the need to tailor communication to perceived patient need and worry. GPs have the inherently difficult task of communicating the importance of the referral while not causing unnecessary anxiety. 21 GPs and practices varied in their responses to non-attendance, influenced by time pressures and perceptions of patient responsibility.
Strengths and limitations
This qualitative study design allowed the generation of in-depth accounts of participants' experiences. Drawing on the two sets of accounts (GPs and patients) enabled greater contextual understanding of the various factors that may influence non-attendance, with triangulation both across these groups and also between researchers improving the rigour of this study. The authors expected patient recruitment to be challenging and wrote to many potential interviewees to achieve their sample. Patients living in the most deprived areas were particularly hard to recruit and several cancelled on the day of the interview, highlighting the difficulties these patients face. However, the achieved samples were sufficiently diverse to suggest that some of the identified themes are potentially universal.
The authors planned patient sampling criteria but finally applied only one sample limitation (not to recruit any more patients who had cancelled rather than not attended appointments). Relatively few patients from ethnic minorities were recruited and only one whose first language was not English. Youngest and oldest age groups were also relatively under-represented, given that they have the highest rates of non-attendance, 5 though many older potential participants were assessed by GPs as not fit for interview.
Given the complex and varied reasons for non-attendance at appointments, this relatively small study may not have captured some explanations. Indeed, there may have been common determinants of decisions not to engage with either health care or a research interview, and so an important subset of patient views was possibly unavailable. However, the accounts and explanations generated by patient and GP participants were diverse.
Comparison with existing literature
This is the first reported qualitative investigation of patient non-attendance at urgent referral appointments for suspected cancer. Some findings are consistent with studies of non-attendance and use of services in other healthcare settings, such as the influences of deprivation and 'difficult lives'; 17,22 the effects of health literacy on ability to understand and navigate healthcare systems; [23][24][25] and diagnostic and procedure fear as determinants of patient behaviour. 6,26,27 Varied notions of paternalism, largely as a result of workload pressures, have also been reported in a qualitative study of GPs' practice of 'safety netting' for potential cancer presentations. 19 The range of explanations for non-attendance and their potential to increase the diagnostic interval were consistent with elements of the Andersen model of total patient delay. 11 The importance of system flaws is magnified by time pressures within an urgent referral process with performance targets. Thresholds for referral have been lowered over the past decade, 18 with the intention of reducing rates of late cancer diagnosis, resulting in increased referrals and a greater proportion of referred patients without cancer. This may impact on GP communication and patients' receipt of referral news. Practices varied in their response to non-attendance; some actively monitored attendance, while others did not judge this feasible owing to many other demands.
A local patient information leaflet had been developed to communicate the importance of attending, given that cancer was suspected. A significant minority of GPs did not use the leaflet, some because they forgot, while others had decided not to use it, questioning the value of universal information that cannot be adjusted to patient needs and circumstances, or to provider preferences.
Patients' and GPs' accounts suggest that the challenging circumstances of some patients' lives mean that they may not treat a referral for suspected cancer as a priority. It may be possible for GPs, practices, and hospitals to provide support to improve the chances of these patients attending. Recent research exploring patients' views of the planned introduction of the Faster Diagnosis Standard (FDS) for cancer also highlights the perceived importance of GPs offering reassurance and support to patients being referred. 28 Remedies to help patients struggling to navigate the healthcare system have been suggested previously; 29-32 however, such interventions are 'downstream' and may fail to address the substantive causes of difficult circumstances, which require 'upstream' interventions.
Furthermore, patient agency is expressed through choices and preferences as patients make sense of the possibility of a cancer diagnosis. 33 Individual agency is realised socially, 34 and experience is defined and realised through social negotiation, which includes social and economic barriers. 35 Overcoming these is important in facilitating the success of the 2WW pathway. Patients must understand that the referral is urgent and concerns suspected cancer, yet some GPs attempted to reduce patient anxiety by not using the 'C word'. Effective communication is key, through which individual responses to what is happening can be appropriately negotiated to achieve a desirable outcome. 36
Implications for research and practice
This study of the 2WW urgent referral system and the linked quantitative study 5 illustrate the importance of policy evaluation, particularly the need to examine its implications through the lens of users, whether patients or practitioners. While this qualitative study provides in-depth accounts and explanations across a relatively broad group of patients and GPs, further research would be valuable in other locations and with patients from minority ethnic groups, those without English as their first language, and older patients. The authors did not explore patients' previous attendance and engagement with health care; it would be useful to know if these patterns and explanations are specific to suspected cancer referrals or more general. Lastly, interviews concerned referrals across all suspected cancers and some disaggregation in future would be helpful, not least to permit the development and evaluation of targeted interventions.
Non-attendance of urgent referral appointments affects a small proportion of patients, but within a high-volume patient pathway, and can result from a range of patient and provider factors. 5 It may impact on short-term mortality outcomes. 5 Patient responses, especially the worry provoked, influence decision making; they occur within a social context and need to be negotiated by referring GPs. Further barriers include low levels of health literacy, lack of patient access to material resources, practical demands of travelling to hospital, comorbidities (particularly among older patients), and fear of the diagnostic procedure (particularly among patients with suspected GI cancer). The urgent referral process, therefore, needs to accommodate patient circumstances, perceptions, and responses, while ensuring an appropriate infrastructure in both general practice and cancer services to facilitate referral, patient attendance, and responses to non-attendance.
Funding
This study was part of a wider project funded by the charity Yorkshire Cancer Research (Y390). The views expressed in this article are those of the authors and not necessarily those of the funders.
Ethical approval
Approvals were obtained for this study from: the NHS research ethics committee (16/NE/0146), the HRA (IRAS ID: 201398), the HRA Confidentiality Advisory Group (16/CAG/0060), and the University of York departmental research ethics committee.
Management of Left Ventricular Aneurysm: A Study from Iraq
Background: The most appropriate surgical approach for post-myocardial infarction left ventricular aneurysm (LVA) is controversial. This study aims to display the results of surgical treatment of LVA in a major Iraqi cardiac surgical center. Methods: The surgical management of LVAs over the period 2001 to 2011 was retrospectively reviewed. The presenting signs and symptoms, results of investigations, operative findings, and outcomes of patients were determined. Results: Twenty-seven true LVAs associated with 4 ventricular septal defects (VSDs) were treated surgically. During the same period, 1136 coronary artery bypass graft (CABG) operations were done; thus LVA represented 2.4%. Males constituted the majority (74.1%). The mean age was 54.6 years. The typical ECG changes were seen in 42.1%. Apical and antero-apical locations predominated. The majority of patients (84.2%) had subnormal values of ejection fraction (EF). Most patients had multi-vessel coronary artery disease (CAD). The most frequently diseased artery was the left anterior descending artery (LAD). All patients had CABG except 3. Linear repair and the Dor technique were used equally. The commonest postoperative complication was bleeding (38.4%). The overall hospital mortality was 18.5%. Conclusion: Concomitant CABG improves the early postoperative course and must be added when significant lesions are present in the coronary arteries, particularly the LAD.
Introduction
LVAs have long been described at autopsy, but LVA was not recognized to be a consequence of coronary artery disease until 1881 [1]. The angiographic diagnosis of LVA was first made in 1951 [1]. The surgical treatment of ventricular aneurysm was introduced in 1944, when Claude S. Beck reinforced the wall of an LVA with fascia lata aponeurosis in order to reduce excessive dilatation and avoid LVA rupture. In 1955, Likoff and Bailey performed closed ventriculotomy by placing a large vascular clamp on the beating heart tangentially across the base of the LVA, followed by resection and suture [2]. In 1958, Cooley and colleagues performed the first postinfarction aneurysm resection and linear repair of the left ventriculotomy with the use of cardiopulmonary bypass. The technique offered by Levinsky and colleagues in 1979 proposed left ventricular reconstruction with a woven Dacron patch after resection of an anterior postinfarction aneurysm. In 1985, Jatene and Dor independently presented a fundamentally new, anatomic left ventricular reconstruction method with endoventricular circular reduction and a patch stitched into the formed ventriculotomy orifice [3].
The aim of this paper was to study the management of true LVA following MI in a major Iraqi cardiac surgical centre, noting the methods of diagnosis, the surgical treatment options, and the outcomes in view of the relevant literature.
Materials and Methods
Twenty-seven patients (20 males and 7 females) with LVA who were admitted to the Ibn-Albitar Centre for Cardiac Surgery (IBCCS) over the period from May 1st, 2001 to December 31st, 2011 were retrospectively studied. Patients' informed consents and the approval of the Hospital Ethics Committee were obtained. The case sheets of these patients were reviewed. Information such as age, sex, place of residency, presenting symptoms and signs, and past medical history, particularly ischemic heart disease (IHD), was sought. The diagnostic work-up was reviewed, looking for specific investigations such as electrocardiography (ECG), chest radiography (CXR), echocardiography, cardiac catheterization, and coronary angiography. All patients in this study were initially seen and thoroughly investigated by cardiologists. Thereafter, surgical candidates were referred for surgery.
All patients had repair of the aneurysm (mostly linear repair or the Dor procedure) together with myocardial revascularization (CABG), except 3 patients who had repair of the aneurysm alone. Four patients with an associated VSD had closure of this defect as well. The operative notes and the perfusionists' notes were all reviewed to get an idea of the conduct of the operation. An intra-aortic balloon pump (IABP) was used selectively (either before induction of anesthesia or at the end of the operation).
Operative procedure: The aneurysmal wall was incised and thrombi were removed if present. Repair was done either by the linear method for small aneurysms or by the Dor procedure for big ones. In the linear method, after excising the aneurysm, the edges of viable myocardium were sutured together with interrupted pledgeted sutures over Teflon felts. In the Dor technique, a purse-string (2-0) polypropylene suture was used to narrow the defect. The remaining ventriculotomy was closed with a Dacron patch sutured with interrupted pledgeted polypropylene sutures over Teflon strips.
The postoperative morbidity and mortality were studied. Follow-up data were unfortunately not available apart from the interval between surgery and discharge.
Results
The male to female ratio of patients was 2.8:1. The youngest patient was a 36-year-old male and the oldest a 67-year-old lady. The mean age was 54.6 years. The age and sex distribution of these patients is displayed in Table 1. Most of the patients (96.3%) were older than 40 years.

All patients had a history of IHD; therefore, LVA was a complication of MI.

We could obtain ECG recordings from 19 patients only. These were studied carefully. The typical LV aneurysm morphology (ST elevation seen >2 weeks following acute MI, most commonly in the precordial leads, with a concave or convex morphology, associated with well-formed Q- or QS-waves and relatively small-amplitude T-waves) was observed in 8 patients (42.1%).

During the period of this study, 1136 patients with IHD had CABG operations in IBCCS; thus the overall rate of LVA repair to CABG operations was 2.4%. Regarding the distribution of patients over the years of the study, we found that 62% of them were seen in the last three years.

Echocardiography was done in 25 patients. The ejection fraction values are shown in Table 2. The vast majority (84.2%) had subnormal values. Other echocardiographic findings are shown in Table 3.
The sites of aneurysm are displayed in Table 4: apical and antero-apical locations were the commonest.
Most patients in this study had multi-vessel CAD. Review of the coronary angiograms revealed about 46 stenotic or occlusive lesions in 27 patients, as shown in Table 5. The most frequently diseased artery was the LAD.

The perfusion charts of 25 patients were reviewed. Almost one-third of patients (8) required IABP support, mostly at the end of the operation.

The operative procedures done on the patients in this study are shown in Table 6. Twenty-four patients (88.9%) had CABG. Four patients (14.8%) with an associated VSD had closure of these defects.

The type of repair of the LV aneurysms is shown in Table 7. Linear repair and the Dor technique were used almost equally. The details of myocardial revascularization are shown in Table 8. Most of the patients had multi-vessel coronary artery disease and received complete revascularization.

Regarding the outcome of patients with an associated VSD, one patient died, giving a mortality of 25%.

In regard to the duration of postoperative hospitalization, most patients (19; 70.4%) stayed for 1-2 weeks.

The complications are displayed in Table 9. The commonest was bleeding.

Five out of twenty-seven patients who were managed surgically died (18.5%). This represents death during the first hospitalization only. The mortality after discharge might be higher, as there was no recorded follow-up.
Discussion
The 27 patients with LVA repair represented 2.4% of the total 1136 patients who underwent CABG operations in IBCCS during the study period.
Symptoms
With regard to presenting symptoms, the absence of angina pectoris and of dyspnea as the predominating symptom was associated with early mortality according to a study by Vural et al. [11]. Moreover, lack of angina was an independent predictor of operative mortality [11]. It can be speculated that preoperative angina may, though not necessarily, indicate the existence of viable myocardial tissue, which is capable of generating power during systole, is more compliant during diastole, and has a less compromising effect on ventricular geometry when compared with a totally fibrotic aneurysmal sac [11].
Electrocardiography
The typical LV aneurysm morphology in the ECG, described earlier in the Results, was observed in only 8 patients (42.1%). This ECG pattern has a sensitivity of 38% and a specificity of 84% for the diagnosis of ventricular aneurysm [7].
Incidence
62% of patients were seen in the last three years of the study. This could be related to a real increase in the incidence of IHD and its complications such as LVA, a concomitant rise in CABG operations, better diagnosis of LVA, increased awareness among cardiologists of the role of surgery in LVA and thus more referral of cases, and a significant increase in the number of efficient young coronary surgeons capable of performing LVA repair with concomitant CABG.
Diagnosis
Echocardiography has a sensitivity and specificity of 93% and 94%, respectively, for detecting LV aneurysm, representing the most frequent and easily applied test for such an anatomic abnormality. Left ventriculography, however, remains the gold standard for the diagnosis [9].
Ejection Fraction
The vast majority of our patients (84.2%) had subnormal EF values. Results of many studies have shown that the ejection fraction of the total left ventricle is an important predictor of the outcome of open heart surgery [11,12].
Ventricular ejection fraction improves following aneurysm repair whether linear or patch technique is used [1].
Other Echocardiographic Findings
In the present study, echocardiography was very useful.This is evident by looking at Table 4 which displayed many studied parameters.Mural thrombi were found in 20% of patients exactly the same as reported by Mangschau et al. [13] and were removed surgically.Four postischemic VSDs were accurately diagnosed and fixed surgically thereafter.
Location
In this study, anterior (11; 40.7%) and apical aneurysms (10; 37.1%) were the commonest. In clinical reports, LVA is usually located in the anterior wall, whereas infero-posterior or postero-lateral aneurysms are less common. Postinfarction LVA follows pathology of the LAD-diagonal system (anterior aneurysm), circumflex branches (posterolateral aneurysm), or right coronary artery (inferoposterior aneurysm) [14]. In our series, a significant LAD lesion was present in 21 patients (77.8%). The prevalence of inferoposterior aneurysms is significantly higher in autopsy series than in clinical reports [14]. This may be due to the extensive infarction necessary for LVA formation. When this occurs in the inferoposterior wall, the result is often acute, severe mitral regurgitation, and the patient dies in the acute phase rather than developing LVA [14].
Linear vs. Patch Repair
In the present series, linear and Dor repair were used almost equally (11 patients, 40.7%, in each group), whereas the method of repair was not clear in 4 patients (14.8%). The basis on which patients were offered either type of repair is not known. Generally, the choice of repair technique should not be made randomly, but should depend on factors such as the size and extension of the scar tissue [11].
Although aneurysmectomy has been performed for almost five decades, the most appropriate surgical approach to a patient with a dyskinetic LVA is still controversial [4]. Antunes et al. believed that the technique of repair of postinfarction dyskinetic LVAs should be adapted in each patient to the cavity size and shape and the dimension of the scar [4]. Unduly wide excision of the scar area and linear closure of the LV defect might lead to deformation of the LV chamber and a reduction in LV diastolic volume [15].
Impact of Coronary Revascularization
CABG is one of the important components of LVA surgery, and the revascularization rate varies in the literature from 68% to 100% [3]. Twenty-four patients in this study (88.9%) had CABG, which is in line with the international standard. Most of the patients had multi-vessel coronary artery disease and received complete revascularization, as shown in Table 8. Although the surgical risk is increased, patients with low LVEF and multi-vessel disease have a particular survival benefit after CABG [14]. The biological basis for this is recruitment of hibernating myocardium [14].
When CABG Is Not Added!
Three patients in this study had LVA repair without CABG. Two patients survived while the third (who had total occlusion of the RCA) died. It is noteworthy that RCA disease is associated with low cardiac output (CO) on multivariate analysis [3], which probably was the cause of death in this patient. In a study of 303 patients with LVA from Sweden, Stahle et al. reported an early mortality of 23% in patients who underwent aneurysm resection alone and 8.1% in cases of aneurysm resection with CABG [6]. This emphasizes the importance of concomitant CABG with LVA repair. Vural et al., in a retrospective analysis of 248 patients, also found that concomitant CABG reduced the incidence of a low CO state [11].
LVA + VSD
This is an important complication of MI that has been commonly associated with progression to death [16]. Surgery is routinely performed in patients with acute VSD during MI [16]. Schlichter et al. reported a 3% incidence of VSD in a series of 102 patients with LVA [17]. Acquired VSD was first surgically repaired by Cooley et al. (1957) using CPB and hypothermia [17]. In 1962, Collis et al. described the repair of a VSD and LVA in a man of 59 [17]. When LVA coexists with a VSD, there is an obvious route of access to the septum, since the left ventricular myocardium is already damaged [17]. The approach from the left which is thus afforded is ideal: first, the septum on this side is smoother and the defect is thus more easily defined; and secondly, the left ventricular pressure keeps the patch in contact with the septum, whereas a patch applied from the right side may be forced away from the septum [17]. Lazopoulos et al. described a case of giant LVA and VSD following a silent MI managed by an endoventricular circular plasty (Dor procedure), interrupted suturing of the ventricular septum, and CABG [18].
Morbidity
The commonest postoperative complication was bleeding.
In other studies [3,4], low CO was at the top of the list.
Overall Mortality
Five out of 27 patients who were managed surgically died (18.5%). In a collection of 3439 operations for LVA performed between 1972 and 1987, hospital mortality was 9.9% and ranged from 2% to 19% [1], and it has fallen to 3% to 7% in the last decade [1]. In view of these figures, our mortality is obviously high. The most likely reason is the small number of patients and the limited experience in the management of this condition. In regard to the possible causes of death, postoperative bleeding was blamed in 2 patients and low CO in 1, while the cause remained unknown in 2 patients. Low CO accounted for 34% of early mortality in Vural et al.'s series [11]. Mukkadirov et al. also believe that severe low CO is one of the main causes of early mortality after aneurysmectomy [3].
Perfusion Time and IABP
The mean perfusion time in this study was 100 minutes, higher than that reported by Antunes et al. (82.7 minutes) [4]. Long periods of aortic cross-clamping have detrimental effects during weaning from CPB [20].
IABP was used in 8 patients (32%) in this study, whereas Eid used it in 26.7% of his patients [20]. IABP has become a prerequisite for surgical repair of LVA [20], especially in patients with compromised LV function [20].
SURFACE SOLAR RADIATION IN A TROPICAL AREA ESTIMATED FROM DIFFERENT MODELS
Global solar radiation (Rg) is considered the main energy source regulating biophysical processes at the surface-atmosphere interface. Many regions of the Earth do not have such measurements available; consequently, models that allow confident estimates are necessary. Thus, the objective of this article was to evaluate the performance of the models proposed by Angström-Prescott (1940), Hay (1979) and Bristow & Campbell (1984) for estimating global solar radiation in the city of Sinop, Mato Grosso state. Data from the National Institute of Meteorology (INMET), from 2006 to 2012, were used to validate and parameterize the models. Global solar radiation (Rg), air temperature (maximum and minimum), the atmospheric clearness index (kt) and the relative sunshine duration (Ir) showed seasonality under the influence of cloudiness. The Rg estimates of the models did not differ significantly from the Rg measurements, and all the models showed a Pearson's correlation coefficient classified as "strong", except for the estimates of the Bristow & Campbell model. However, Hay's model presented smaller errors and higher coefficients of correlation and accuracy than the other models. The results indicate that all the models are applicable to the region, but additional surveys with other models, covering all of Mato Grosso, are necessary to improve the adjustment of the Rg estimation models.
INTRODUCTION
Brazil is one of the biggest grain producers in the world, with annual production estimated at about 207 million tons in the 2014/2016 crops. Among the Brazilian states that produce grains, Mato Grosso is the second biggest agricultural producer, representing 24% of the entire Brazilian production (CONAB, 2014, 2016). The area available for grain production in Mato Grosso covers about 8% of the total area of the state, mainly in the municipalities of Sapezal, Campo Novo do Parecis, Nova Mutum, Primavera do Leste and Sinop (IBGE, 2015).
The municipality of Sinop was created during the colonization of Amazonia and the Central-Northern region of Brazil in the 1970s, fostered by the Programa de Integração Nacional - PIN (National Integration Program) of the federal government (BOTELHO and SECCHI, 2014). The main purpose of this program was to support immigrants who were looking for fertile land and economic prosperity. Sinop gained prominence in the regional context as its population increased and socioeconomic investments diversified through agricultural expansion (BOTELHO and SECCHI, 2014). However, agricultural expansion has been related to intense deforestation, which causes many regional climate impacts (SHEIL and MURDIYARSO, 2009), such as alterations in the energy balance pattern, reduction of evapotranspiration, increases in surface albedo and temperature, and reduction of surface roughness (BIUDES et al., 2015).

Solar radiation is the main source of energy for all energy fluxes in the soil-plant-atmosphere system (BORGES et al., 2011). All radiation originating from the Sun that reaches the Earth's surface is called global solar radiation (Rg) (QUERINO et al., 2006). Knowledge of Rg and its components is crucial to understand the total energy available in the soil-plant-atmosphere system and, consequently, how physical, chemical and biological processes, such as photosynthesis and thunderstorm development, happen at the surface-atmosphere interface (SOUZA et al., 2005; QUERINO et al., 2011).
The amount of Rg data available is limited, since few weather stations register this variable owing to the high cost and frequent maintenance of the sensors (THORNTON and RUNNING, 1999; WEISS et al., 2001). Therefore, many mathematical models, mainly based on empirical relations, have been suggested to estimate Rg from meteorological variables such as relative humidity (YANG and KOIKE, 2002), rainfall (LIU and SCOTT, 2001; RIVINGTON et al., 2005), sunshine fraction (TRNKA et al., 2005; CHEN et al., 2006), satellite data (MEHARRAR and BACHARI, 2014) and others. Among the models used to estimate Rg, those of Angström-Prescott (1940) and Hay (1979), both based on the sunshine duration (n), stand out as the most popular. The Rg estimates of these models are derived from the coefficients a and b, which are determined by linear regression between the atmospheric clearness index (kt) and the relative sunshine duration. Nevertheless, Hay's model differs from Angström-Prescott's because it considers the surface reflection factor in its structure. The Bristow and Campbell (1984) model assumes that the incoming solar radiation is a function of the local thermal amplitude, as well as of the solar radiation at the top of the atmosphere (Ro).
There are many models to estimate Rg, but most studies use two types: temperature-based and sunshine-based models. The most famous sunshine-based model is the Angström-Prescott model (LI et al., 2013). This model is commonly used because it suggests a linear relationship between the ratio of the average daily Rg and the sunshine ratio (DUZEN and AYDIN, 2012). According to the same authors, the Angström-Prescott model performs better than temperature-based or cloud-based models, because the latter must be calibrated to local parameters. However, the limitations of these models are due to the simplifying assumptions they apply (MEHARRAR and BACHARI, 2014).
The estimate of Rg varies according to the period of the year, which modifies the complexity of the estimation and the adjustment of coefficients needed to reach the best result (BURIOL et al., 2012). Therefore, given the agricultural importance of the municipality of Sinop and the need to understand global solar radiation in the region, it is necessary to find alternative ways to estimate Rg. Thus, the objective of this paper was to evaluate the performance of the Angström-Prescott (1940), Hay (1979) and Bristow and Campbell (1984) models, and to adjust their respective coefficients, to estimate Rg in Sinop, MT.
STUDY AREA
The study was carried out in the municipality of Sinop (Figure 1), located in the Central-Northern region of Mato Grosso state on the margin of the Cuiabá-Santarém highway (BR-163). The site lies at 12°07'53" S and 55°35'57" W, about 500 km from Cuiabá (the capital of Mato Grosso state). With a total area of 3942.231 km² and an estimated population of about 126,000 inhabitants, Sinop is considered the biggest urban center of the Northern region of Mato Grosso state (IBGE, 2015).
The climate of the city is hot and humid, with an average annual temperature around 24°C. The rainfall pattern is equatorial, characterized by a dry period during the austral winter and a wet season during the summer. The annual rainfall is about 2091.6 mm year⁻¹; the highest precipitation occurs in January, February and March, while the lowest rainfall is observed in June, July and August (BIUDES et al., 2014).
DATA ACQUISITION AND TREATMENT
The data were acquired from the National Institute of Meteorology (INMET) station in Sinop, covering the period from 2006 to 2012. To ensure data quality, all negative values of Rg, as well as values higher than the solar constant, were eliminated.
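The screening routine is not detailed in the paper; a minimal sketch of this quality-control step, assuming the daily records sit in a pandas DataFrame with a hypothetical column rg holding measured Rg in MJ m⁻² d⁻¹, could look as follows.

```python
import pandas as pd

# Upper bound used for screening: the solar constant expressed per day,
# 0.0820 MJ m-2 min-1 * 60 min * 24 h (an assumption about the cut-off).
SOLAR_CONSTANT_DAILY = 0.0820 * 60 * 24  # about 118 MJ m-2 d-1

def clean_rg(df: pd.DataFrame, col: str = "rg") -> pd.DataFrame:
    """Drop physically impossible daily Rg records: negative values
    or values above the solar constant."""
    valid = (df[col] >= 0) & (df[col] <= SOLAR_CONSTANT_DAILY)
    return df.loc[valid].copy()
```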
MODELS FOR ESTIMATING GLOBAL SOLAR RADIATION
Due to the absence of historical solar radiation data in Brazil, especially in the central and northern regions, three models were parameterized: two sunshine-based models and a temperature-based model. The choice of these models considered the most common data available at Brazilian meteorological stations, as well as the simplicity and precision of the models.
ANGSTRÖM-PRESCOTT'S MODEL
The Angström-Prescott equation estimates Rg from the relative sunshine duration (Ir) and from the solar radiation at the top of the atmosphere (Equation 1) (DORNELAS et al., 2006):

Rg = Ro (a + b n/N)    (1)

where Rg is the daily global solar radiation (MJ m⁻² d⁻¹), n is the effective number of hours during which the solar disc was exposed throughout the day (sunshine duration, h d⁻¹), N is the potential sunshine duration (h d⁻¹) determined by Equation (2), a and b are the linear and angular coefficients, respectively, and Ro is the solar radiation incident at the top of the atmosphere (MJ m⁻² d⁻¹) estimated by Equation (5):

N = (2/15) hp    (2)

where hp is the hour angle at sunset (Equation 3), calculated from the local latitude (φ) and from the solar declination δ (Equation 4):

hp = arccos(−tan φ tan δ)    (3)

δ = 23.45 sin[360 (284 + JD)/365]    (4)

Ro = 37.6 dr [(π/180) hp sin φ sin δ + cos φ cos δ sin hp]    (5)

where JD is the Julian day and dr is the correction for the eccentricity of the Earth's orbit (Equation 6):

dr = 1 + 0.033 cos[(2π/365) JD]    (6)
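The paper provides no source code; purely as an illustration, the minimal Python sketch below implements Equations (1)-(6) as reconstructed above and fits the coefficients a and b by ordinary least squares on paired clearness-index (kt) and relative-sunshine (Ir) observations. All function and variable names are the author's own, not part of the original study.

```python
import numpy as np

def declination(jd):
    """Solar declination in degrees (Equation 4); jd is the Julian day."""
    return 23.45 * np.sin(np.radians(360.0 * (284 + jd) / 365.0))

def sunset_hour_angle(lat_deg, jd):
    """Sunset hour angle hp in degrees (Equation 3)."""
    phi = np.radians(lat_deg)
    delta = np.radians(declination(jd))
    return np.degrees(np.arccos(-np.tan(phi) * np.tan(delta)))

def photoperiod(lat_deg, jd):
    """Potential sunshine duration N in hours (Equation 2)."""
    return (2.0 / 15.0) * sunset_hour_angle(lat_deg, jd)

def top_of_atmosphere(lat_deg, jd):
    """Daily solar radiation at the top of the atmosphere Ro,
    in MJ m-2 d-1 (Equations 5 and 6)."""
    dr = 1.0 + 0.033 * np.cos(2.0 * np.pi * jd / 365.0)
    phi = np.radians(lat_deg)
    delta = np.radians(declination(jd))
    hp = np.radians(sunset_hour_angle(lat_deg, jd))
    return 37.6 * dr * (hp * np.sin(phi) * np.sin(delta)
                        + np.cos(phi) * np.cos(delta) * np.sin(hp))

def fit_angstrom_prescott(ir, kt):
    """Fit kt = a + b * Ir by ordinary least squares; returns (a, b)."""
    b, a = np.polyfit(ir, kt, 1)
    return a, b

def rg_angstrom_prescott(ro, n, big_n, a, b):
    """Estimate Rg by Equation (1)."""
    return ro * (a + b * n / big_n)
```

For Sinop (latitude about -12.13 degrees), top_of_atmosphere(-12.13, jd) gives the daily Ro series from which kt = Rg/Ro is computed and, after fitting, the Angström-Prescott estimate of Rg follows.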
HAY'S MODEL
Hay's model is given by Equation (7); it estimates Rg from Ro and from the relative sunshine duration, and also takes into account the multiple reflections of the atmosphere.
where a' and b' are the linear and angular coefficients, respectively, and A is the adjustment factor associated with multiple reflections (Equation 8).
where α is the albedo of grass (0.20), defined by the World Meteorological Organization (WMO) as the standard surface on which a weather station should be installed; αc is the albedo of cloud bases, usually equal to 0.60; β is the scattering coefficient of a clear atmosphere (0.25); and N' is the potential sunshine duration for a specific day which, considering that the heliograph only registers when the Sun's elevation is greater than 5° (BROOKS and BROOKS, 1947), is estimated by Equation (9).
BRISTOW AND CAMPBELL'S MODEL
The Bristow and Campbell model estimates the daily global solar radiation (Rg, MJ m⁻² d⁻¹) as a function of the daily solar radiation incident at the top of the Earth's atmosphere (Ro, MJ m⁻² d⁻¹) and of the daily thermal amplitude (ΔT, °C), the difference between the daily maximum and minimum temperatures (Equation 10):

Rg = Ro A [1 − exp(−B ΔT^C)]    (10)

The empirical constants A, B and C have a physical meaning: the coefficient A represents the maximum expected solar radiation for a given day under clear-sky conditions, while B and C control the variation of A as the temperature difference increases. The original values of the coefficients are A = 0.7, B between 0.004 and 0.010, and C = 2.4 (QUEIROZ et al., 2000). The Bristow-Campbell model was parameterized by using daily values, and we expect that new values of the coefficients can be obtained by applying the model to monthly data.
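Again as an illustration only, the sketch below implements Equation (10) and fits A, B and C by nonlinear least squares, starting from the original values cited above (B is started at 0.007, the midpoint of the quoted range); the use of SciPy's curve_fit and all names are the author's assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def bristow_campbell(x, a_coef, b_coef, c_coef):
    """Equation (10): Rg = Ro * A * (1 - exp(-B * dT**C))."""
    ro, dt = x
    return ro * a_coef * (1.0 - np.exp(-b_coef * dt ** c_coef))

def fit_bristow_campbell(ro, tmax, tmin, rg_measured):
    """Fit A, B and C to measured daily Rg; dT is the daily
    thermal amplitude (Tmax - Tmin)."""
    dt = tmax - tmin
    p0 = (0.7, 0.007, 2.4)  # original coefficients as a starting point
    popt, _ = curve_fit(bristow_campbell, (ro, dt), rg_measured,
                        p0=p0, maxfev=10000)
    return popt  # fitted (A, B, C)
```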
ATMOSPHERIC CLEARNESS INDEX (KT) AND RELATIVE SUNSHINE DURATION (IR)
The atmospheric clearness index (kt) is the ratio between Rg and Ro (Equation 11) (RENSHENG et al., 2004), and the relative sunshine duration (Ir) is the ratio between the sunshine duration (n) and the potential sunshine duration (N) (Equation 12):

kt = Rg/Ro    (11)

Ir = n/N    (12)
STATISTICAL ANALYSIS
The monthly, seasonal and annual averages (± 95% confidence intervals) of the meteorological variables and of the measured and estimated Rg were calculated by 1000 iterations of bootstrapping with random resampling with replacement (EFRON and TIBSHIRANI, 1993). The values of Rg estimated by the models were compared with the Rg measured at the weather station by means of the Willmott concordance index (d), Pearson's correlation coefficient (r), the mean square root error (MSRE) and the mean absolute error (MAE), where Pi are the estimated values of Rg, P̄ is the average of the estimated Rg, Oi are the measured values of Rg, Ō is the average of the measured Rg, and n is the number of observations. The Willmott index expresses performance through the distance between the estimated and measured values; it varies from 0 (no agreement) to 1 (perfect agreement). Pearson's correlation coefficient measures the strength and sign of the correlation between two variables and ranges from -1 to 1. The MSRE shows how the model fails to reproduce the variability of the measurements around the average and gives the spread of the estimated values about the measured values, while the MAE indicates the absolute mean deviation. The lower limit of both the MSRE and the MAE is 0, which would represent perfect agreement between the measured data and the model estimates. The classification proposed by Devore (2006) (Table 1) was used to classify the correlation between the measured Rg and that estimated by the models.
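The formulas themselves are not reproduced in the paper; the sketch below assumes the standard form of Willmott's index and takes MSRE as the root of the mean squared error, while the bootstrap follows the resampling-with-replacement scheme described above.

```python
import numpy as np

def willmott_d(pred, obs):
    """Willmott's index of agreement: 0 (none) to 1 (perfect)."""
    obs_mean = obs.mean()
    num = np.sum((pred - obs) ** 2)
    den = np.sum((np.abs(pred - obs_mean) + np.abs(obs - obs_mean)) ** 2)
    return 1.0 - num / den

def pearson_r(pred, obs):
    """Pearson's correlation coefficient."""
    return np.corrcoef(pred, obs)[0, 1]

def msre(pred, obs):
    """Root of the mean squared error (assumed meaning of MSRE)."""
    return np.sqrt(np.mean((pred - obs) ** 2))

def mae(pred, obs):
    """Mean absolute error."""
    return np.mean(np.abs(pred - obs))

def bootstrap_mean_ci(x, n_iter=1000, alpha=0.05, seed=0):
    """Mean and 95% CI of the mean by resampling with replacement."""
    rng = np.random.default_rng(seed)
    means = np.array([rng.choice(x, size=len(x), replace=True).mean()
                      for _ in range(n_iter)])
    lo, hi = np.quantile(means, [alpha / 2.0, 1.0 - alpha / 2.0])
    return x.mean(), lo, hi
```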
ANALYSIS OF THE METEOROLOGICAL VARIABLES
The major accumulation of precipitation happened during the seven months of the wet season (October to April), corresponding to 95% of the annual total rainfall, while the lowest rainfall was observed during the five months of the dry season (May to September), equivalent to 5% of the annual total (Table 2 and Figure 2a). The month with the highest rainfall was January, corresponding to 19% of the total precipitation, while the lowest was registered in July, corroborating Biudes et al. (2012; 2015). The rainfall regime of the study region is governed mainly by large-scale phenomena such as the Bolivian High and the South Atlantic Convergence Zone (SACZ). The SACZ is characterized by a huge cloud cover that extends from the south of the Amazon to the Southeast of Brazil, influencing the amount of rainfall in the Central-Western region (ESCOBAR, 2014). The Bolivian High acts during the summer and creates a meridional flow that helps to create instability zones in the central area of the country, inducing rain formation (VIANELLO and MAIA, 1986; CARVALHO and JONES, 2009). The absence of these phenomena during the winter results in the lowest rainfall in the study area (ESCOBAR, 2014).

Table 2 - Monthly, seasonal and annual totals of precipitation and averages (± 95% confidence interval) of the solar radiation at the top of the atmosphere (Ro; MJ m⁻² d⁻¹), global solar radiation (Rg; MJ m⁻² d⁻¹), average (Tavg; °C), maximum (Tmax; °C) and minimum (Tmin; °C) air temperature, atmospheric clearness index (kt) and relative sunshine duration (Ir).

The maximum Ro occurred in January and the minimum in July (Table 2 and Figure 2b). The interval with the lowest monthly averages of Ro was from April to August, and the highest averages occurred from September to March. These intervals correspond, respectively, to the periods when the Sun, in its apparent position, is located in the Northern Hemisphere (autumn and winter) and in the Southern Hemisphere (spring and summer), producing lower and higher solar radiation at the top of the atmosphere (DALLACORT et al., 2004).
In general, Rg showed less pronounced variation than the temporal dynamics of Ro. The period of maximum average Rg was observed between October and April (rainy season), when Rg was roughly 2% higher than the values recorded between May and September (dry season) (Table 2 and Figure 2d). Nevertheless, the highest single monthly average of Rg occurred in August (dry season), when cloud cover in the region is at its lowest (BIUDES et al., 2015). The high wet-season averages of Rg are related to the high intensity of Ro in that period: even though cloudiness reduces Rg during the rainy months, the greater intensity of Ro keeps the average Rg above that of the dry period. Comparing Rg and Ro, Rg corresponded to 49% of Ro in January (maximum monthly average of Ro) and to 65% in June (minimum monthly average of Ro). These results suggest that the oscillation of Rg in Sinop depends not only on the intensity of Ro but also on the local atmospheric clearness (kt), which determines the fraction of Ro attenuated by the atmosphere (QUERINO et al., 2011).
The mean air temperature during the study period oscillated between 24 and 27ºC, with dry-season values 4% higher than those of the rainy period (Table 2 and Figure 2c). Between October and April (rainy season) the smallest differences between maximum and minimum air temperature were observed, with the thermal amplitude varying from 8 to 12ºC, while from May to September (dry period) the thermal amplitude oscillated between 12 and 17ºC.
The lower and more uniform thermal amplitudes during the wet season are a consequence of atmospheric moisture. Water acts as a moderating factor on air temperature because of its high specific heat, inhibiting abrupt increases and decreases. The large daily amplitudes of the dry period, in contrast, result from the scarcity of clouds (and consequently of water vapor) in the atmosphere: despite strong daytime warming, most of the infrared radiation emitted by the surface is rapidly released to the atmosphere and escapes to space, cooling the air at night. Another factor that can influence the thermal amplitude is the phenomenon known as "friagens", common in the region, which typically produces marked drops in air temperature (MARENGO and NOBRE, 2009; BIUDES et al., 2012).
The maximum value of Ir occurred in July and the minimum in December (Table 2 and Figure 2e). The lowest values of Ir fell in the period from October to April (rainy period) and the highest from May to September (dry period). This is probably because during the dry season the sky is less cloudy (high insolation, n) and the potential sunshine duration (N) is at its lowest (autumn and winter), so Ir tends to be higher.
The kt showed patterns similar to those of Ir (Table 2 and Figure 2f), with its monthly maximum in July and minimum in December. This corresponds to a 28% reduction in the incoming solar radiation reaching the surface between the months of maximum and minimum kt. The months with the highest kt values were concentrated from May to September and the lowest from October to April, the dry and wet seasons respectively.
We observed an inverse relation between the lowest kt values and the months with the largest rainfall amounts, owing to the greater concentration of clouds in this period. Nebulosity, defined as the cloud cover at a given place, is the main attenuating factor of the incoming solar radiation at the surface (MENEZES and DANTAS, 2002). Another element to consider is that during the rainy season the Sun is in the Southern Hemisphere, which yields the highest values of Ro. Nevertheless, despite the larger radiative flux at the top of the Earth's atmosphere, only a small portion of the solar radiation reaches the surface, because the concentration of clouds tends to scatter the radiation and reduce the atmospheric transmissivity (QUERINO et al., 2011).
APPLICATIONS OF THE MODELS
The lowest values of the coefficient a of the Angström-Prescott model occurred in January, April and December, and the highest in July, with smooth seasonal variation over the period (Table 3). The coefficient b, in turn, showed its highest value in December and its lowest in July, also with low seasonal variation. The coefficient a indicates the kt of a completely overcast day, which explains why its lowest values occur in the rainy period (CAMPELO, 1998; DALLACORT et al., 2004). The sum of the coefficients a and b corresponds to the maximum kt, reached as Ir tends to 1. Hence, using the annual coefficients, kt equals 0.32 under complete cloud cover (Ir = 0) and reaches 0.75 as Ir tends to 1.
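A minimal sketch of calibrating the Angström-Prescott coefficients by ordinary least squares is shown below, assuming daily series of measured Rg, Ro and the sunshine-duration ratio n/N are available as arrays:

```python
import numpy as np

def fit_angstrom_prescott(rg, ro, n_over_N):
    """Fit kt = Rg/Ro = a + b * (n/N) by least squares; return (a, b)."""
    kt = np.asarray(rg, dtype=float) / np.asarray(ro, dtype=float)
    b, a = np.polyfit(np.asarray(n_over_N, dtype=float), kt, 1)  # slope, intercept
    return a, b

def estimate_rg_ap(ro, n_over_N, a, b):
    """Estimate Rg from the calibrated Angström-Prescott coefficients."""
    return np.asarray(ro) * (a + b * np.asarray(n_over_N))
```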
The lowest values of a in Hay's model occurred in January, April and December, while the highest were observed in July (Table 3). The maximum coefficient b was observed in December and the minimum in July. The angular coefficients determined with Hay's equation are similar to those obtained with the Angström-Prescott equation. This similarity results from the compensatory effect of the differences between the two models: the introduction of an insolation ratio that can effectively reach 1 has the opposite effect to the insertion of parameters that account for multiple reflection of shortwave radiation (CAMPELO, 1998).
The coefficient A of the Bristow & Campbell equation was fixed in all fits, with only the coefficients B and C varying (Table 3). The lowest values of B occurred in January, June, July, August and October, and the highest in May. The coefficient C showed its lowest value in May and its highest in January. The Rg estimated by the models did not differ significantly from the Rg measured in the study area, except for the Bristow & Campbell model in the monthly and annual analyses, which underestimated Rg by 3.1% and overestimated it by 5.5%, respectively (Table 4). The Angström-Prescott and Hay models produced good estimates over the whole study period (monthly, seasonal and annual), with agreement index (d) between 0.86 and 0.87 and correlation coefficient (r) between 0.76 and 0.79, classified as strong correlation (Figure 3 and Table 4).
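For completeness, the Bristow & Campbell calibration can be sketched as follows, using the model's usual form kt = A[1 − exp(−B·ΔT^C)] with A held fixed as described above; the fixed value of A shown here is an assumed placeholder (the value actually used is given in Table 3).

```python
import numpy as np
from scipy.optimize import curve_fit

A_FIXED = 0.75  # assumed placeholder for the fixed A; see Table 3 for the real value

def bristow_campbell(delta_t, B, C, A=A_FIXED):
    """kt = Rg/Ro as a function of the daily thermal amplitude (Tmax - Tmin)."""
    return A * (1.0 - np.exp(-B * delta_t ** C))

def fit_bristow_campbell(delta_t, rg, ro):
    """Fit B and C by nonlinear least squares with A held fixed."""
    kt = np.asarray(rg, dtype=float) / np.asarray(ro, dtype=float)
    # p0 with two entries tells curve_fit to fit only B and C
    (B, C), _ = curve_fit(bristow_campbell, np.asarray(delta_t, dtype=float),
                          kt, p0=(0.1, 1.5))
    return B, C
```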
The Bristow & Campbell model had an agreement index below 0.80 in the seasonal analysis and of 0.83 in the annual and monthly analyses (Figure 3 and Table 4). Its correlation coefficient was below 0.65 in the seasonal evaluations (moderate correlation) and varied between 0.72 and 0.70 (strong correlation) in the annual and monthly comparisons, respectively. The difference between the Bristow & Campbell model and the other two, especially in winter, is associated with the thermal amplitude. The linear Angström-Prescott model tends to be more accurate because of the established relationship between Rg and relative sunshine duration, together with the calibration of local physical parameters (DUZEN and AYDIN, 2012). The poorer performance of the Bristow & Campbell model may stem from its inability to capture the variability of Rg from a daily thermal amplitude whose variation was not significant over the study period (BELÚCIO et al., 2014).
CONCLUSIONS
The variables Rg, maximum and minimum temperature, kt and Ir showed seasonality, driven mainly by cloudiness.
The Rg estimates of the models did not differ significantly from the measured Rg, and all models except Bristow & Campbell's had a Pearson correlation coefficient classified as "strong". Among them, Hay's model showed the best performance, probably owing to the adjustment factor "A", which accounts for multiple reflections of Rg in the atmosphere.
Even though the Rg estimated by the Bristow & Campbell model differed significantly from the measured Rg, the difference averaged around 5%. This means that, in the absence of sunshine-duration measurements, Rg in the study region can also be estimated with the Bristow & Campbell model.
Our results suggest that further research should be carried out, such as applying the models to other areas of Mato Grosso state, to improve the estimation of Rg both in the study region and across the whole state.
Formation of the First Two Black Hole-Neutron Star Mergers (GW200115 and GW200105) from Isolated Binary Evolution
In this work we study the formation of the first two black hole-neutron star (BHNS) mergers detected in gravitational waves (GW200115 and GW200105) from massive stars in wide isolated binary systems - the isolated binary evolution channel. We use 560 BHNS binary population synthesis model realizations from Broekgaarden et al. (2021a) and show that the system properties (chirp mass, component masses and mass ratios) of both GW200115 and GW200105 match predictions from the isolated binary evolution channel. We also show that most model realizations can account for the local BHNS merger rate densities inferred by LIGO-Virgo. However, to simultaneously also match the inferred local merger rate densities for BHBH and NSNS systems we find we need models with moderate kick velocities ($\sigma\lesssim 10^2\,\rm{km}\,\rm{s}^{-1}$) or high common-envelope efficiencies ($\alpha_{\rm{CE}}\gtrsim 2$) within our model explorations. We conclude that the first two observed BHNS mergers can be explained from the isolated binary evolution channel for reasonable model realizations.
The main formation channel leading to merging BHNS systems (and BHBH and NSNS) is still under debate. A widely studied channel is the formation of BHNS mergers from massive stars that form in (wide) isolated binaries and evolve typically including a common envelope (CE) phase (e.g. Neijssel et al. 2019; Belczynski et al. 2020; Shao & Li 2021). Other possible channels include formation from close binaries that can evolve chemically homogeneously (Mandel & de Mink 2016; Marchant et al. 2017), metal-poor Population III stars that formed in the early universe (e.g. Belczynski et al. 2017), stellar triples or multiples (Fragione & Loeb 2019; Hamers & Thompson 2019; Hamers et al. 2021), or from dynamical or hierarchical interactions in globular clusters (Clausen et al. 2013; Arca Sedda 2020; Ye et al. 2020), nuclear star clusters (Petrovich & Antonini 2017; McKernan et al. 2020) and young and/or open star clusters (e.g., Ziosi et al. 2014; Rastello et al. 2020; Arca Sedda 2021). We refer the reader to Mandel & Broekgaarden (2021) for a living review of these various formation channels.
In this Letter we address the key question: Could GW200115 and GW200105 have been formed through the isolated binary evolution scenario?
To investigate this we use the simulations from Broekgaarden et al. (2021a) to study the formation of merging BHNS systems from pairs of massive stars that evolve through the isolated binary evolution scenario. This Letter is structured as follows. In §2 we describe our method and models. In §3.1 we show that most of our models do match the inferred BHNS rate densities, but that only models with higher CE efficiencies or moderate supernova (SN) kicks are also consistent with the inferred R BHBH and R NSNS . In §3.2 we compare the properties of GW200115 and GW200105 to the overall expected GW-detectable BHNS population. We end with a discussion in §4 and present our conclusions in §5.
METHOD
We use the publicly available binary population synthesis simulations from Broekgaarden et al. (2021a, presented in Broekgaarden et al., in preparation) to study the formation of GW200115 and GW200105 from the isolated binary evolution channel. The simulations used in this work add new model realizations compared to Broekgaarden et al. (2021b), and also consider merging BHBH and NSNS systems. The simulations are performed using the rapid binary population synthesis code COMPAS (Stevenson et al. 2017b; Barrett et al. 2017; Vigna-Gómez et al. 2018; Broekgaarden et al. 2019; Neijssel et al. 2019), which is used to model the evolution of the binary systems and determine the source properties and rates of the double compact object mergers. The BHNS population data set contains a total of 560 model realizations to explore the uncertainty in the population modeling: 20 different binary population synthesis variations (varying assumptions for common envelope, mass transfer, supernovae, and stellar winds) and 28 model variations in the metallicity-specific star formation rate density model, SFRD(Z, z) (varying assumptions for the star formation rate density, mass-metallicity relation, and galaxy stellar mass function), which is a function of birth metallicity (Z) and redshift (z). The population synthesis simulations are labeled A, B, C, ... T, with each variation representing one change in the physics prescription compared to the fiducial model 'A' (see Table 1 in Broekgaarden et al., in preparation); the SFRD(Z, z) models are labelled 000, 111, 112, ... 333 (see Table 3 of Broekgaarden et al. 2021b). To obtain high-resolution simulations, Broekgaarden et al. (in prep.) simulated a million binaries for each population synthesis model across 53 Z bins and used the adaptive importance sampling algorithm STROOPWAFEL (Broekgaarden et al. 2019) to further increase the number of BHNS systems in the simulations. This resulted in a total dataset of over 30 million BHNS systems, making it the most extensive simulation of its kind to date.
We define BHNS systems in our simulations to match the observed GW200115 and GW200105 if their m chirp , m tot , m 1,f , m 2,f and q f lie within the inferred 90% credible intervals (§1). We note that Abbott et al. (2021c) also inferred 90% credible intervals for the spins of both BHNS systems, but due to the large uncertainties in the measurements and in the theory of spins we leave this topic for discussion in §4 and do not explicitly take spins into account for the BHNS system selection. We calculate R BHNS using Equation 2 in Broekgaarden et al. (2021b), where we assume a local redshift z ≈ 0, and discuss these intrinsic merger rates in §3.1. We obtain the detection-weighted distributions for the BHNS mergers using Equation 3 from Broekgaarden et al. (2021b) and discuss these in §3.2. To calculate the detectable GW population we assume the sensitivity of a GW-detector network equivalent to advanced LIGO in its design configuration (Aasi et al. 2015; Abbott et al. 2016, 2018), a reasonable proxy for O3. For the purpose of comparison, we use the LIGO-Virgo posterior samples for GW200115 and GW200105 from Abbott et al. (2021d).
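For concreteness, the matching step can be sketched as below; the chirp mass formula is standard, while the credible-interval values themselves must be supplied from Abbott et al. (2021c) and are deliberately not hard-coded here.

```python
def chirp_mass(m1, m2):
    """Chirp mass (m1*m2)**(3/5) / (m1+m2)**(1/5), same units as inputs."""
    return (m1 * m2) ** 0.6 / (m1 + m2) ** 0.2

def matches_event(m_bh, m_ns, intervals):
    """Return True if every derived quantity falls inside the event's 90%
    credible interval. `intervals` maps parameter name -> (lo, hi); fill it
    with the published values from Abbott et al. (2021c)."""
    vals = {
        "m_chirp": chirp_mass(m_bh, m_ns),
        "m_tot": m_bh + m_ns,
        "m_1": m_bh,
        "m_2": m_ns,
        "q": m_ns / m_bh,
    }
    return all(lo <= vals[name] <= hi for name, (lo, hi) in intervals.items())
```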
Local BHNS merger rates
In Figure 1 we show the predicted local merger rate densities from our 560 model realizations for the overall BHNS population, in comparison to the 90% credible intervals from Abbott et al. (2021c). We find that the majority of the 560 model realizations match one of the two observed BHNS merger rate densities. Model realizations that underpredict the observed rates include most SFRD(Z, z) variations of model G (α CE = 0.1) corresponding to inefficient CE ejection, which increases the number of stellar mergers during the CE phase (our fiducial model uses α CE = 1), and about half of the SFRD(Z, z) variations of model D, which assumes a high mass transfer efficiency (β = 0.75), as opposed to our fiducial model that assumes an adaptive β based on the stellar type and thermal timescale and typically results in β ≲ 0.1 for systems leading to BHNS mergers. Conversely, some model realizations overpredict the observed rates, in particular about half of the SFRD(Z, z) variations of models P, Q and R. These models have moderate or low SN natal kick magnitudes, increasing the number of BHNS systems that stay bound during the SNe. The SFRD(Z, z) variations that overpredict the observed rates correspond to lower average metallicities, thereby increasing the formation efficiency of BHNS mergers (Broekgaarden et al. 2021b).
On the other hand, we find that only a small subset of the 560 model realizations (shown with red crosses in Figure 1) also match the inferred 90% credible intervals of the observed BHBH and NSNS merger rate densities (§1), namely models I, J, P and Q in conjunction with a few of the SFRD(Z, z) variations. Both the higher α CE values in models I and J (α CE ≳ 2), and the low SN natal kicks in models P and Q (σ ≈ 30 or 100 km s −1 , where σ is the one-dimensional rms velocity dispersion of the Maxwellian distribution used to draw the SN natal kick magnitudes), result in relatively higher NSNS rates that can match the high observed R NSNS (see Footnotes 3 and 4 below). Requiring a match with the observed R BHBH mostly constrains the SFRD(Z, z) models to those with moderate average star formation metallicities, as our models with typically lower Z (Footnote 5) overestimate the inferred R BHBH . Similar results were found by earlier work including Giacobbo & Mapelli (2018) and Santoliquido et al. (2021).
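For intuition, natal kick magnitudes like those of models P and Q are drawn from a Maxwellian; a minimal sketch, assuming SciPy's parameterization in which the scale parameter equals the one-dimensional rms dispersion σ:

```python
from scipy import stats

def draw_natal_kicks(sigma_kms, n, seed=None):
    """Draw n SN natal kick magnitudes (km/s) from a Maxwellian whose
    one-dimensional rms velocity dispersion is sigma_kms
    (e.g. sigma ~ 30 or 100 km/s for models Q and P)."""
    return stats.maxwell.rvs(scale=sigma_kms, size=n, random_state=seed)
```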
Within the matching models, models I, P and Q match the inferred R BHNS that is based on a broader BHNS mass distribution, whereas the matching model J variations overlap only with the observed rate based on a GW200115- and GW200105-like population. We note, however, that our binary population synthesis models in all cases predict a broader mass distribution compared to just GW200115- and GW200105-like events. We investigate this in detail in Figure 2, where we plot the cumulative BHNS chirp mass distributions of our model variations against the chirp-mass range spanned by GW200115 and GW200105, 2.35 ≲ m chirp /M⊙ ≲ 3.49. We find that ≈60% of the GW-detectable BHNS systems in model J are expected to have m chirp outside of this range, while for the matching models I, P and Q this is about 60%, 50%, and 50%, respectively. For models I, P and Q this result is expected, since they match the R BHNS range that is based on a broader mass distribution, but for model J the low percentage of 60% conflicts with its match to the R BHNS based on a BHNS population defined by GW200115- and GW200105-like events. Figure 2 also shows that, besides models I, J, P and Q, all other model realizations generally predict BHNS populations with chirp mass distributions broader than the range spanned by GW200115 and GW200105 alone. The models using the rapid supernova prescription (model L) predict the highest fraction (≈75%) of BHNS systems with 2.35 ≲ m chirp /M⊙ ≲ 3.49, whereas the model assuming that case BB mass transfer is always unstable (model E) yields the lowest fraction (≈8%).
Footnote 3: Most isolated binary evolution predictions (including most of our model variations) underestimate the inferred NSNS merger rate (e.g., Chruslinska et al. 2018; Mandel & Broekgaarden 2021).
Footnote 4: We note that several recent studies support common-envelope efficiencies α CE ≳ 2 (e.g. Fragos et al. 2019; García et al. 2021; Schreier et al. 2021).
Footnote 5: E.g., the models that assume a galaxy mass-metallicity relation based on Langer & Norman (2006) (all SFRD(Z, z) models xyz with z = 1), which maps to lower average stellar birth metallicities (for example, model 231 has an average star formation Z of ≈ Z⊙/10 near redshift z ≈ 2, whereas for our higher-Z models this is closer to Z ≈ Z⊙ around the same redshift).
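The detection-weighted fractions quoted above reduce to a weighted selection; a minimal sketch, assuming arrays of simulated chirp masses and their detection weights are available:

```python
import numpy as np

def fraction_in_range(m_chirp, det_weights, lo=2.35, hi=3.49):
    """Detection-weighted fraction of simulated BHNS mergers whose chirp
    mass lies inside [lo, hi] M_sun, the range spanned by GW200115 and
    GW200105."""
    m = np.asarray(m_chirp, dtype=float)
    w = np.asarray(det_weights, dtype=float)
    sel = (m >= lo) & (m <= hi)
    return w[sel].sum() / w.sum()
```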
Properties of the BHNS systems
In the following discussion we focus on the specific model 'P112', as an example of a model realization that matches all of the various observed merger rate densities. We take this approach for simplicity, but note that we are not claiming that only this model realization represents the correct isolated binary evolution pathway to the observed GW mergers. Below we examine the properties of the systems at the time of merger (chirp mass, component masses and mass ratio), as well as at the time of formation on the zero-age main sequence (ZAMS) (component masses, mass ratio and semimajor axis).
BHNS properties at merger
In Figure 3 we show the 1D and 2D distributions of the predicted properties for the GW-detectable BHNS population for all BHNS systems (gray contours and 1D distributions) and for GW200115-and GW200105like BHNS systems (blue and orange scatter points and dotted histograms, respectively). The LIGO-Virgo inferred posterior samples for GW200115 and GW200105 are shown with orange and blue 90% credible contours in the 2D histograms and with filled histograms in the 1D plots, respectively. We show m chirp , m BH , m NS and q f . In the top panels we normalize each 1D distribution to peak at a value of 1.
Overall, we find that model P112 predicts the majority of the GW-detectable BHNS population to have properties consistent with the two observed events. In detail, we find several interesting features in the model distributions compared to the observed BHNS mergers. First, we note that the inferred properties of GW200115 and GW200105 lie well within the predicted population of the GW-detectable BHNS population. In particular, the GW200115 and GW200105 credible intervals typically overlap with the highest-probability region of the corresponding distribution of the predicted BHNS population. We stress that this result does not follow trivially from the match of model P112 with the inferred R BHNS (§3.1), as the properties of the intrinsic and detectable BHNS populations could differ significantly owing to the strong bias of GW detectors toward more massive systems; the underlying intrinsic mass distributions can therefore be significantly different from the observed mass distributions. Only for m NS do the posterior samples of GW200115 reach well below the predicted distribution of our models, but this is due to the remnant mass prescription, which has an artificial lower m NS limit of about 1.3 M⊙. The overlap between our predictions and the inferred posterior distributions can also be seen from the matches between the LVK distributions and our model-weighted distributions for GW200115 and GW200105.
Second, we find that model P112 suggests the existence of a small, positive m BH -m NS correlation in the GW-detectable BHNS population (a similar correlation is also visible in the m chirp -m NS distribution, though we note that the chirp mass depends on m NS ). This means that, on average, BHNS with more massive BHs are expected to have more massive NSs. Interestingly, this correlation also holds for GW200115 and GW200105. It is visible in most of our other model variations and was also noted in earlier work, including Kruckow et al. (2018) and Broekgaarden et al. (2021b). The correlation is due to the preference in the isolated binary evolution channel for more equal-mass binaries. BHNS with a more massive BH typically form from binaries with a more massive primary (the initially more massive star), and such systems also have, on average, more massive secondaries at ZAMS (see Sana et al. 2012). In addition, the more massive secondaries at ZAMS typically lead to binaries with more equal mass ratios at the moment of the first mass transfer, making it more likely to be stable and to successfully lead to a BHNS (Broekgaarden et al. 2021b). This results, on average, in a more massive NS in binaries with a more massive BH.
Figure 2 (caption fragment): We also show the 90% credible intervals for GW200115 and GW200105 (vertical bars; Abbott et al. 2021c). The legend indicates the label names of the matching models, while the arrows point to models E and L, which predict the lowest and highest fraction of BHNS mergers within the chirp mass range spanning GW200115 and GW200105, respectively.
Finally, we note that several of the panels in Figure 3 show sharp gaps or peaks in the distributions, particularly visible in the scatter points and 1D histograms. These gaps are artificial discontinuities present in some of the prescriptions in our COMPAS model (see Broekgaarden et al. 2021b, and references therein).
BHNS properties at ZAMS
In Figure 4 we show the ZAMS properties of the binary systems that successfully form detectable BHNS mergers: primary mass (m 1,ZAMS ), secondary mass (m 2,ZAMS ), semimajor axis (a ZAMS ), and mass ratio (q ZAMS ≡ m 2,ZAMS /m 1,ZAMS ). In blue (orange) we show the ZAMS properties of binaries in our simulation that eventually form BHNS matching the inferred credible intervals of GW200115 (GW200105). The distributions are weighted for the sensitivity of a GW-detector network. Several features can be seen that we describe below.
First, we find that GW200115- and GW200105-like GW mergers form from binaries with 1D distributions (90th percentiles) in the ranges 26 ≲ m 1,ZAMS /M⊙ ≲ 112, 13 ≲ m 2,ZAMS /M⊙ ≲ 25, 10^0.04 ≲ a ZAMS /AU ≲ 10^1.5 , and 0.15 ≲ q ZAMS ≲ 0.75. The histograms show that the initial conditions of the binaries that form GW200115- and GW200105-like mergers are representative of the overall BHNS-forming population.
Second, when comparing GW200115 and GW200105, we find that our model predicts that both systems formed from binaries with similar primary star masses. For the other ZAMS properties, however, the model predicts that GW200105-like BHNS mergers form from binaries with slightly larger m 2,ZAMS , a ZAMS and q ZAMS compared to GW200115-like BHNS mergers. The larger secondary masses for GW200105 are required to form the more massive NS in this system. The larger secondary mass also drives the slight preference for larger a ZAMS , as the increased secondary mass affects the timing of mass transfer in several ways, including the time at which the primary fills its Roche lobe, and the common-envelope phase later on (more or less orbital shrinking due to a different envelope mass). As a result, we find that GW200105-like mergers form from slightly larger a ZAMS compared to GW200115-like mergers.
Third, several of the distributions in Figure 4 show small gaps in ZAMS space corresponding to binaries that form BHNSs with combinations of BH and NS masses that do not match GW200115 or GW200105. These are mostly a consequence of small regions in m 1,ZAMS , m 2,ZAMS and q ZAMS that map to specific BH masses in our stellar evolution prescriptions that do not match GW200115 or GW200105. Finally, in the a ZAMS -q ZAMS plane, we note a small population of BHNS systems around log(a ZAMS ) ∼ −1 and q ZAMS ≳ 0.6 that do not form GW200115- and GW200105-like mergers. These are a small subset of BHNS systems that form through an early mass transfer episode initiated by the primary star while it is still core-hydrogen burning (case A mass transfer). These systems are the main contributor to the small population of BHNSs in which the NS forms first and with m BH ≳ 10 M⊙ (see Broekgaarden et al. 2021b for further details).
Figure 3. Corner plot showing the 1D and 2D distributions of the properties of the detectable BHNS mergers from our binary population synthesis model P112. We show the chirp mass, BH mass, NS mass, and the mass ratio at the time of merger. In gray we show the overall BHNS population, whereas in blue (orange) we show BHNS systems that have properties matching GW200115 (GW200105). Our GW200115 (GW200105) predictions are shown with blue (orange) scatter points and dotted histograms, whereas the posterior samples from Abbott et al. (2021c) are shown with 90% contour levels in the 2D plots and with filled histograms in the 1D panels. The gray contours show the percentage of the detectable BHNS systems enclosed. All distributions are weighted using the GW-detection probability. The 1D distributions are normalized such that the peak is equal to one.
Predicted Merging BHNS Distribution Shapes for Models besides P112
In §3.2 we showed for model P112 the predicted GW-detectable BHNS distribution shapes; we now discuss how these are affected when considering the 560 model realizations varying SFRD(Z, z) and binary population synthesis assumptions.
First, changes in SFRD(Z, z) do not drastically impact the detectable BHNS distribution shapes for m chirp , m tot , m NS , m BH and q f , as shown in Broekgaarden et al. (2021b, see Figures 14 and 15). The predicted BHNS merger rate density, on the other hand, is significantly impacted by the choice of SFRD(Z, z) (by factors of ∼10×; Figure 1). The SFRD(Z, z) choice in particular also impacts the predicted BHBH merger rate density, which puts the strongest constraints (out of all compact object merger flavors) on the matching SFRD(Z, z) models in Figure 1 (Broekgaarden et al. 2021, in preparation).
Second, variations in binary stellar evolution assumptions do significantly impact the shape of the detectable BHNS distributions for m chirp , m tot , m NS , m BH and q f , as shown in Broekgaarden et al. (2021b, see Figures 14 and 15). Among the binary population synthesis models that match in rate (I, J, P and Q; §3.1), model J (α CE = 10) stands out as it predicts detectable BHNS distributions that peak at low-mass events (m chirp ≲ 2 M⊙) compared to models I, P and Q (which peak near m chirp ≈ 3 M⊙). Models I (α CE = 2) and Q (σ = 30 km s −1 ) have BHNS distributions similar to model P112, with small differences mainly in the tail of the mass distributions. We provide corner plots for the interested reader for these models in our Github repository and refer the reader to Broekgaarden et al. (2021b) for more details. Future GW observations might discriminate between these models.
Black Hole Spins and Neutron Star Tidal Disruption
Abbott et al. (2021c) report the inferred 90% credible interval for the primary spin magnitude χ 1 (i.e., the spin of the BH) of GW200115 (GW200105) to be χ 1 = 0.33 +0.48 −0.29 (χ 1 = 0.08 +0.22 −0.08 ), while the spins of the NSs are unconstrained. Both reported χ 1 values are consistent with zero. However, for GW200115 the authors report moderate support for a negative effective inspiral spin χ eff = −0.19 +0.23 −0.35 , indicating a spin anti-aligned with the orbital angular momentum axis. Theoretical studies of spins in BHNS systems formed through isolated binary evolution are still inconclusive. It has been argued that the black holes are expected to have χ 1 ≈ 0 due to efficient angular momentum transport during the star's evolution (e.g. Fragos & McClintock 2015; Fuller & Ma 2019). Typically, no anti-aligned spins are expected (but see, e.g., the discussion in Wysocki et al. 2018; Stegmann & Antonini 2021). Studies including Qin et al. (2018) and Bavera et al. (2021) argue that if the BH is formed second, it can tidally spin up as a Wolf-Rayet (WR) star if the binary evolves through a tight BH-WR phase. The same might be true for tight NS-WR systems, which can form a BHNS with a spun-up BH if the BH forms after the NS (e.g. if the system inverts its masses early in its evolution). However, we find that none of the GW200115- and GW200105-like BHNS mergers in model P112 do so, and hence we predict χ 1 = 0 for both events, consistent with the LIGO-Virgo inferred credible intervals.
Using the ejecta mass prescription from Foucart et al. (2018, Equation 4) and the BHNS properties from model P112, we can crudely calculate whether our simulated BHNS systems tidally disrupt the NS outside the BH innermost stable circular orbit and, if so, the amount of baryon mass remaining outside the BH. We find that, when assuming χ 1 = 0, none of the GW200115- and GW200105-like BHNS systems have ejecta masses of ≳10 −6 M⊙ (see Abbott et al. 2021c; Zhu et al. 2021) for reasonable R NS = 11−13 km.
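As an illustration, a sketch of the Foucart et al. (2018) remnant-mass fit for non-spinning BHs is given below; the fit coefficients and the crude baryonic-mass conversion are quoted from memory and are assumptions that should be verified against the paper before quantitative use.

```python
# Fit coefficients of the Foucart, Hinderer & Nissanke (2018) formula,
# quoted from memory (assumption; check against the paper).
ALPHA, BETA, GAMMA, DELTA = 0.406, 0.139, 0.255, 1.761

def remnant_mass(m_bh, m_ns, r_ns_km, mb_ns=None):
    """Baryon mass (M_sun) remaining outside a non-spinning BH after a BHNS
    merger; zero means the NS plunges in without tidal disruption."""
    g_msun_c2_km = 1.476                     # G*M_sun/c^2 in km
    c_ns = g_msun_c2_km * m_ns / r_ns_km     # NS compactness
    q = m_bh / m_ns
    eta = q / (1.0 + q) ** 2                 # symmetric mass ratio
    risco_hat = 6.0                          # r_ISCO / (G M_BH / c^2) for chi_1 = 0
    if mb_ns is None:
        mb_ns = 1.1 * m_ns                   # crude ~10% binding-energy correction (assumption)
    x = (ALPHA * (1.0 - 2.0 * c_ns) / eta ** (1.0 / 3.0)
         - BETA * risco_hat * c_ns / eta + GAMMA)
    return mb_ns * max(x, 0.0) ** DELTA

# Example: a 6 + 1.4 M_sun system with R_NS = 12 km returns 0, i.e. no
# disruption, consistent with the negligible ejecta quoted in the text.
print(remnant_mass(6.0, 1.4, 12.0))
```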
Other Formation Channels
Previous predictions for R BHNS from isolated binary evolution and alternative formation pathways have been made (see Mandel & Broekgaarden 2021 for a review). The various isolated binary evolution studies have predicted rates ranging from a few tenths to ∼10 3 Gpc −3 yr −1 , and a subset can match one of the LIGO-Virgo inferred BHNS rates (e.g. Neijssel et al. 2019; Belczynski et al. 2020). For the other formation channels, some studies predict agreeable rates for formation from triples (e.g. Hamers & Thompson 2019), formation in nuclear star clusters (McKernan et al. 2020, but see also Petrovich & Antonini 2017; Hoang et al. 2020), dynamical formation in young star clusters (Rastello et al. 2020; Santoliquido et al. 2020) and primordial formation (Wang & Zhao 2021). On the other hand, much lower BHNS rates (R BHNS ≲ 10 Gpc −3 yr −1 ), which do not match the observed rate, are expected from binaries that evolve chemically homogeneously (Marchant et al. 2017), from Population III stars (Belczynski et al. 2017) and through dynamical formation in globular clusters (Clausen et al. 2013; Arca Sedda 2020; Hoang et al. 2020; Ye et al. 2020). GW observations of BHNS might therefore provide a useful tool to distinguish between formation channels. We stress, however, that models should not only match the rates, but also the inferred mass and spin distributions of BHNS mergers. This is particularly valuable as some of the formation channels predict BHNS distributions with distinguishable features (e.g. a tail with larger BH masses, m BH ≳ 15−20 M⊙, in dynamical formation; Arca Sedda 2020; Rastello et al. 2020) that could help constrain formation channels (e.g. Stevenson et al. 2017a).
Other Potential BHNS Merger Events
Besides GW200115 and GW200105, LVK reported four potential BHNS candidates (Abbott et al. 2021b,e): 1. GW190425 is most likely an NSNS merger, but a BHNS origin cannot be ruled out. If it is a BHNS, then m BH = 2.0 +0.6 −0.3 M⊙ and m chirp = 1.44 +0.02 −0.02 M⊙ are uncommon in our simulated BHNS population (e.g., Figure 3 and Broekgaarden et al. 2021b).
2. GW190814 is most likely a BHBH merger, but a BHNS origin cannot be ruled out. If so, it has m NS = 2.59 +0.08 −0.09 M⊙. In Broekgaarden et al. (2021b) we noted that only our model K (which assumes a maximum NS mass of 3 M⊙) produces such heavy NS masses, but that it does not form many GW190814-like BHNS systems, as GW190814's reported m chirp = 6.09 +0.06 −0.06 M⊙, m tot = 25.8 +1.0 −0.9 M⊙ and m BH = 23.2 +1.1 −1.0 M⊙ are rare within the model BHNS population.
3. GW190426_152155 is a BHNS candidate event, but with a marginal detection significance. If this event is real, it is inferred to have BHNS properties very similar to GW200115 (see Figure 4 in Abbott et al. 2021c). We therefore predict it to be (similarly) common in our simulations.
4. GW190917 is reported in the GWTC-2.1 catalog, but the nature of its less massive component cannot be confirmed from the current data, and it was only classified as a BHBH event (i.e., p BHNS = 0) by the pipeline that detected it. If real, it might be a BHNS with m chirp = 3.7 +0.2 −0.2 M⊙, m tot = 11.4 +3.0 −2.9 M⊙, m BH = 9.3 +3.4 −4.4 M⊙, m NS = 2.1 +1.5 −0.5 M⊙ and q f = 0.23 +0.52 −0.09 . These properties are somewhat similar to GW200105 (although the medians of both m BH and m NS for GW190917 are slightly heavier), and we therefore predict it to be (similarly) common in our simulations.
CONCLUSIONS
In this Letter we studied the formation of the first two detected BHNS systems (GW200115 and GW200105) in the isolated binary evolution channel using the 560 binary population synthesis model realizations presented in Broekgaarden et al. (2021a). We investigated the predicted R BHNS , as well as the BHNS system properties (at merger and at ZAMS), and compared these with the data from LIGO-Virgo (Abbott et al. 2021c). Our key findings are: 1. We find that the majority of our 560 model realizations can match one of the inferred credible intervals for R BHNS from Abbott et al. (2021c). We further find that models with higher CE efficiency (α CE ≳ 2; models I and J) or moderate SN natal kick velocities (σ ≲ 100 km s −1 ; models P and Q) also match the inferred 90% credible intervals for R BHBH and R NSNS .
2. Using model P112 as an example, we find that the isolated binary evolution channel predicts a GWdetectable BHNS population that matches the observed properties (chirp mass, component masses, and mass ratios) of GW200115 and GW200105, although we expect a somewhat broader population than just GW200115-and GW200105-like BHNS systems.
4. We note that if GW200115 and GW200105 were formed through isolated binary evolution, then we expect their BH to have a spin of ≈ 0, their BH to have formed first, and neither system to have produced an electromagnetic counterpart.
5. We discuss the four other BHNS candidates reported by LIGO-Virgo, and find that the properties of GW190425 and GW190814 do not match our predicted BHNS population, making them instead more likely to be NSNS and BHBH mergers, respectively. On the other hand, the properties of the BHNS candidates GW190426 152155 and GW190917 do match our predicted BHNS population, but were reported by LIGO-Virgo with low signal-to-noise ratios.
We thus conclude that GW200115 and GW200105 can be explained from formation through the isolated binary evolution channel, at least for some of the model realizations within our range of exploration. With a rapidly increasing population of BHNS systems expected in Observing Run 4 and beyond, it will be possible to carry out a more detailed comparison to model simulations, and to eventually determine the evolutionary histories of BHNS systems.
iPTF Search for an Optical Counterpart to Gravitational Wave Trigger GW150914
The intermediate Palomar Transient Factory (iPTF) autonomously responded to and promptly tiled the error region of the first gravitational wave event GW150914 to search for an optical counterpart. Only a small fraction of the total localized region was immediately visible in the Northern night sky, due both to sun-angle and elevation constraints. Here, we report on the transient candidates identified and rapid follow-up undertaken to determine the nature of each candidate. Even in the small area imaged of 126 sq deg, after extensive filtering, 8 candidates were deemed worthy of additional follow-up. Within two hours, all 8 were spectroscopically classified by the Keck II telescope. Curiously, even though such events are rare, one of our candidates was a superluminous supernova. We obtained radio data with the Jansky Very Large Array and X-ray follow-up with the Swift satellite for this transient. None of our candidates appear to be associated with the gravitational wave trigger, which is unsurprising given that GW150914 came from the merger of two stellar-mass black holes. This end-to-end discovery and follow-up campaign bodes well for future searches in this post-detection era of gravitational waves.
INTRODUCTION
The direct detection of gravitational waves (GW) marks the dawn of a new era (Abbott et al. 2016b). It is widely agreed that the detection and study of the anticipated electromagnetic (EM) counterparts will vastly enrich the science returns for the field of GW astronomy. The photometric discovery of the EM counterpart will give a precise location, and a spectrum of the host galaxy will give a precise redshift. This will enable a more accurate measurement of basic astrophysical properties such as the luminosity and energetics of this strong-field gravity event. If the spectrum is timely, it may also solve the long-standing mystery of the unknown sites of r-process nucleosynthesis.
The inherent challenge is that the two advanced GW interferometers, due to their low frequency of operation, give very poor on-sky localization (Kasliwal & Nissanke 2014; Singer et al. 2014; Berry et al. 2015; Abbott et al. 2013). Nevertheless, the prospect of finding electromagnetic counterparts by searching large sky areas is promising, as the search methodology is steadily improving - from early efforts in the enhanced LIGO S6 run (Aasi et al. 2014), to proof-of-concept localizations of coarse Fermi gamma-ray bursts (Singer et al. 2013), to a score of EM facilities promptly responding to GW150914 (Abbott et al. 2016a).
At the time of the GW150914 trigger, there was no information disclosed on the nature of the event i.e. whether it was a binary black hole merger or binary neutron star merger or something else (GCN 18330). Many facilities undertook a search for an electromagnetic counterpart (e.g., Connaughton et al. 2016;Evans et al. 2016;Smartt et al. 2016;Soares-Santos et al. 2016). Months later, after offline analysis, the event was identified as a binary black hole merger (GCN 18858).
Here, we present the intermediate Palomar Transient Factory (iPTF) follow-up effort. iPTF uses the Samuel Oschin 48-inch telescope on Palomar mountain equipped with the CFH12K camera, with a field-of-view of 7.1 deg 2 (Law et al. 2009). Our motivation was to look for an optical counterpart powered by free neutron decay, or by heavy element radioactive decay (Metzger & Fernández 2014). We describe the sky area coverage, candidate identification, spectroscopic classification and panchromatic follow-up. We conclude with our plans for a way forward.
IDENTIFYING CANDIDATES
On UT 2015 September 16 03:17, the iPTF Target of Opportunity Marshal automatically responded to the gravitational wave trigger alert G184098 (later named GW150914). It immediately notified the team via phone calls and SMS alerts that there had been a GW trigger. It also computed that due to the sun angle constraint and elevation constraints, Palomar would only be able to access 2.5% of the enclosed probability by tiling 126 deg 2 just before sunrise at high airmass ( Figure 1). This total area calculation takes into account the two non-working CCDs and the gaps between the CCDs. The small containment probability was because the southern mode of the updated ("LIB") localization was too far south to be observable from Palomar, whereas most of the northern mode rose only after 12 degree morning twilight. Clouds did not co-operate and the Palomar 48-inch dome remained closed the first night after trigger. However, the next night (UT Sep 17), we imaged 18 fields covering this area with exposures of 1 min (See details in Table 5; GCN 18337). The scheduling and choice of tiles was further optimized applying the algorithm described in Rana et al. 2016. A second epoch with a baseline separation of 30 min (± 1 min) was obtained for 13 fields.
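For intuition, the enclosed-probability calculation reduces to summing the localization probability over the pixels inside the observed tiles; a minimal healpy sketch, in which the file name and the simple square-tile geometry are illustrative assumptions (the actual calculation accounts for the CCD gaps and the two dead CCDs, as noted above):

```python
import healpy as hp
import numpy as np

# Per-pixel localization probability (sums to 1); file name is a placeholder.
prob = hp.read_map("skymap.fits")
nside = hp.get_nside(prob)
theta, phi = hp.pix2ang(nside, np.arange(len(prob)))
ra_pix = np.degrees(phi)
dec_pix = 90.0 - np.degrees(theta)

def tile_probability(ra_deg, dec_deg, half_width_deg=1.3):
    """Enclosed probability of an idealized square field centered on (ra, dec)."""
    dra = (ra_pix - ra_deg + 180.0) % 360.0 - 180.0   # wrapped RA offset
    in_tile = ((np.abs(dra * np.cos(np.radians(dec_deg))) < half_width_deg)
               & (np.abs(dec_pix - dec_deg) < half_width_deg))
    return prob[in_tile].sum()

# Total containment of a tiling = sum over tiles (ignoring overlaps).
tiles = [(150.0, 40.0), (152.6, 40.0)]  # hypothetical field centers
print(sum(tile_probability(ra, dec) for ra, dec in tiles))
```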
Within minutes of obtaining the data, our automated real-time image subtraction pipeline started loading candidates into our database. We have two independent real-time pipelines - one running at the National Energy Research Scientific Computing Center (NERSC) using the HOTPANTS image subtraction algorithm (Nugent et al. 2015) and the other running at the Infrared Processing and Analysis Center (IPAC) using the PTFIDE algorithm (Masci 2016). Due to the dynamic nature of the optical sky, the candidate list was dominated by false-positive transients unrelated to the gravitational wave trigger. A total of 127676 candidates were loaded into the NERSC database and 32576 into the IPAC database. Our automated machine-learning-aided filtering algorithms rejected moving objects in our solar system, variable stars in the Milky Way, as well as subtraction artifacts. A list of 13 candidates was presented on a dynamic web portal for human vetting.
We have been refining our software algorithms that quickly sift through the large number of candidates during our Fermi Gamma-ray Burst Monitor afterglow search effort. The EM-GW challenge has some similarities and some differences. The similarities are that we need to continue to reject foreground asteroids/variable stars and background supernovae/active galactic nuclei. The differences are that, compared to a Gamma Ray Burst afterglow, the EM-GW counterpart may be relatively fainter and/or slower and/or redder. Knowing that the EM counterpart is relatively nearby, due to the advanced LIGO sensitivity, helps further reduce false positives.
The following are some rejection criteria:
1. Movement in detections in two epochs separated by at least 15 min, suggesting the candidate is an asteroid.
2. Past history of eruption in PTF/iPTF data (baseline of six years), suggesting the candidate is an old transient.
3. A previously known radio source or X-ray source, suggesting the candidate is an active galactic nucleus.
4. A previously known optical or infrared point source underneath the position, suggesting the candidate is a stellar flare.
The following criteria lead to flags for follow-up spectroscopy, additional photometry and/or multi-band follow-up:
1. Host galaxy (within 100 kpc of transient) with spectroscopic redshift <0.05 (or photometric redshift <0.1) - this is motivated by advanced LIGO's sensitivity limit to binary neutron star mergers.
2. Photometric evolution on an hour timescale (>0.2 mag), day timescale (>0.5 mag) or one-week timescale (>1 mag) - this serves as a strong discriminant against old supernovae. We note that this flag was not applied for GW150914 as all candidates of interest were spectroscopically classified within two hours.
3. Hostless candidates with no counterpart in deep iPTF reference co-adds -even though these are unlikely to be local, we flag these events as they are relatively rare.
To quantify the relative efficacy of each criterion, we discuss the most severe cuts in order of severity by applying each criterion independently. Of the 127676 candidates in our NERSC pipeline, only 1007 candidates (0.8% selection) are selected as being coincident with a galaxy within 200 Mpc; hence this is the most severe cut. 5803 candidates (4.5%) are selected as passing our machine learning cuts (we now have three generations of machine learning algorithms; see details in Rebbapragada 2014; Brink et al. 2013). 15624 candidates (12.2%) are selected as having two detections separated by 30 min in the same night. 78951 candidates (62% selection) are selected as not having an optical point source in the reference image. Similarly, in our IPAC pipeline, we had a total of 32576 candidates. Of these, 24699 did not match a star (75.8% selection), 5302 had two detections (16.2% selection) and 1964 passed our machine learning cut (6.0% selection).
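To make the filtering logic concrete, a schematic version of these cuts is sketched below; the record field names are hypothetical stand-ins rather than the actual iPTF database schema:

```python
def passes_vetting(cand):
    """Apply the rejection criteria above to one candidate record."""
    if cand["moved_between_epochs"]:   # asteroid: motion over epochs >= 15 min apart
        return False
    if cand["has_ptf_history"]:        # old transient: eruption in the 6-yr PTF baseline
        return False
    if cand["known_radio_or_xray"]:    # likely active galactic nucleus
        return False
    if cand["stellar_counterpart"]:    # likely stellar flare on a known point source
        return False
    return True

def flag_for_followup(cand):
    """Flag survivors for spectroscopy / multi-band follow-up."""
    nearby_host = (cand.get("host_spec_z", 1.0) < 0.05
                   or cand.get("host_photo_z", 1.0) < 0.1)
    return nearby_host or cand.get("hostless", False)

# Usage: candidates = [c for c in candidates if passes_vetting(c)]
```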
In practice, these criteria are not all applied simultaneously, and the candidates selected for human vetting are the result of a more complex database query. For example, prior to human vetting, we do not require coincidence with a nearby galaxy and we do not require any light curve properties. For the five fields where a second epoch was not completed on the same night, we did a manual search requiring a local universe match and found 2 candidates, both rejected as known asteroids. After human vetting of 13 candidates, 5 candidates were rejected as they showed a past history of variability in the PTF data. In summary, our team flagged 8 candidates for further follow-up in our marshal database (see Table 5). Next, we describe the prompt follow-up that was undertaken to investigate whether any of the candidates was associated with GW150914 (GCN 18341).
SPECTROSCOPIC FOLLOW-UP
Since Hawaii is west of Palomar Observatory, sunrise was three hours later and we were able to obtain spectra of all eight candidates in less than 2 hours from discovery (Figure 2). We emphasize that iPTF has routinely been obtaining spectroscopic classification on the same night as discovery, totaling 165 transients with spectra within 12 hours, thus far. We observed with the DEep Imaging Multi-Object Spectrograph (DEIMOS; Faber et al. 2003) mounted on the Keck II telescope. We used the low resolution 600 ZD grating, giving spectral coverage between 4650Å and 9600Å with a resolution of 3.5Å (full width at half maximum). Our spectra are shown in Figure 2. A priori, since we searched 126 deg 2 to a depth of 20.5 mag, we expect ≈3.2 supernovae using the rates in Li et al. 2011 (and assuming that supernovae are brighter than −17 mag for 1 month i.e. a volume out to z=0.075).
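This expectation can be reproduced to order of magnitude as follows; the volumetric rate is an assumed round number standing in for the Li et al. (2011) rates, and the cosmology is a modern default rather than necessarily the one used in the paper:

```python
from astropy.cosmology import Planck15 as cosmo

# Comoving volume of a 126 deg^2 cone out to z = 0.075
area_deg2 = 126.0
frac_sky = area_deg2 / 41253.0                            # fraction of the full sky
v_mpc3 = frac_sky * cosmo.comoving_volume(0.075).value    # Mpc^3

rate = 1e-4          # SNe Mpc^-3 yr^-1 (assumed; see Li et al. 2011 for real values)
window_yr = 1.0 / 12  # transients visible for ~1 month

n_expected = rate * v_mpc3 * window_yr
print(n_expected)    # of order a few, consistent with the ~3.2 quoted above
```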
We cross-matched our spectra with a library of supernovae spectra augmenting the superfit software (Howell et al. 2005). Our classifications are in Table 5. We found two Type Ia supernovae (SN Ia), two hydrogen-rich core-collapse supernovae (SN II), three nuclear candidates (e.g. weak AGN where the spectrum is dominated by the host galaxy), and one hostless transient with initially unclear classification (iPTF15cyk). Offline processing of the three nuclear candidates also shows past history of photometric variability in the PTF data, which is consistent with the AGN hypothesis. The spectrum of iPTF15cyk was dominated by a blue continuum, with narrow lines suggesting a redshift of 0.539 (which would imply a very luminous transient). Since the nature of the GW source was unclear, we decided to obtain additional spectroscopic and multi-wavelength follow-up.
RADIO AND X-RAY FOLLOW-UP
We observed iPTF15cyk and the necessary calibrators with the Karl G. Jansky Very Large Array (VLA; Perley et al. 2009) in its D and DnC configurations. The observations were performed in C-band (≈ 6 GHz central frequency) under our Target of Opportunity program (VLA/15A-339; PI: Corsi). VLA data were reduced and imaged using the Common Astronomy Software Applications (CASA) package. In Table 5, we report the 3σ upper-limits derived for iPTF15cyk using the full 2 GHz bandwidth (GCN 18914).
If the host galaxy redshift was confirmed, iPTF15cyk could be a super luminous supernova (SLSN) with absolute magnitude brighter than −22 mag. Radio and X-ray emission from super luminous SNe may arise from interaction with the circumstellar medium (CSM; see e.g. Ofek et al. 2013). In an alternate model, superluminous supernovae could be powered by the spin-down of a nascent magnetar inside the supernova ejecta (Kasen & Bildsten 2010), which may also produce X-ray emission.
However, such emission is likely to be very sensitive to the exact properties of the CSM including density profile and homogeneity. In dense CSM environments, free-free absorption can suppress the radio emission at early times. Thus, chances for a detection are maximized by observing after maximum light (Ofek et al. 2013). Hence, we observed iPTF15cyk thrice between 1 month and 4 months after discovery.
We also observed the location of iPTF15cyk with the Swift satellite (Gehrels et al. 2004) beginning at 18:12 UT on 2015 September 18 (∆t = 4.3 d after the GW trigger). We do not detect any emission with the on-board X-Ray Telescope (XRT; Burrows et al. 2005) to a 3σ limit of < 3.2 × 10 −3 ct s −1 . Assuming a power-law spectrum with a photon index of Γ = 2, this corresponds to an upper limit on the unabsorbed flux (0.3-10.0 keV) of f X < 1.3 × 10 −13 erg cm −2 s −1 .
Simultaneously, we obtained images of the field. The follow-up spectra classified iPTF15cyk as a hydrogen-poor super luminous supernova (Figure 5), similar to LSQ12dlf at +16 d (Nicholl et al. 2014). The radio and X-ray upper limits were consistent with this classification. Given the high redshift, we concluded that this event was unrelated to GW150914. We note that the odds of finding a super luminous supernova were lower than the odds of finding other core-collapse or thermonuclear supernovae. The snapshot rate is only ∼0.2 using the volumetric rate in Quimby et al. 2013 (and assuming that SLSN are brighter than −21 mag for 1 month). Moreover, we have a total of only 6 events with z>0.5 (out of 2650 spectroscopically classified supernovae) in the six years of operating PTF/iPTF.
A WAY FORWARD
The post-detection era promises to be one of routine GW detections of binary neutron star mergers. With routine detections, the joint probability of ∼1/3 that the sun (∼2/3), clouds (∼2/3), and latitude (∼3/4) simultaneously co-operate to identify the optical counterpart is not discouraging. Furthermore, given the location of Palomar Observatory in Southern California relative to the location of the advanced LIGO interferometers, the time lag to respond is inherently less than an hour, as we do not need to wait for the earth to rotate (Kasliwal & Nissanke 2014). Most of the GW150914 localization was not accessible from the Northern night sky. But, based on our simulations (Singer et al. 2014), iPTF would include the true position of the GW source for an average of ≈1 out of 2 events assuming a total of 100 iPTF observations (see Figure 4; each observation is two 60 s exposures of 7.1 deg 2 ).
As advanced LIGO ramps up in GW sensitivity, we are undertaking both hardware and software upgrades to improve EM sensitivity. In 2017, we plan to commission the Zwicky Transient Facility (ZTF; Kulkarni 2012; Bellm 2014), a 47 deg 2 camera on the Palomar 48-inch with a twelve times higher volumetric survey speed than iPTF. This increase in survey speed enables a faster cadence and deeper search for the optical counterpart (e.g., 22 mag in 10 min). The larger field-of-view may also be more robust to a shifting localization (e.g., for GW150914, our enclosed probability went from 2.5% in the initial map to 0.2% in the final map; see Abbott et al. 2016a). We are continuing to improve our software algorithms, e.g., better candidate filtering, image co-addition and more optimal image subtraction (Zackay et al. 2016). We are continuing to complete our census of the local universe (CLU; Cook et al. in prep). Among the various models for electromagnetic emission from binary neutron star mergers, free neutron decay gives the most luminous optical counterpart (Figure 3). Varying free neutron mass and opacity suggests that this counterpart may fade quickly, by as much as 4 mag in 24 hours. Thus, we are also systematizing our follow-up with the Global Relay of Observatories Watching Transients Happen (GROWTH) program. The combination of a longitudinally distributed network of telescopes as well as multiwavelength follow-up (VLA and Swift) should effectively filter candidates on a 24 hour timescale. Obtaining a timely light curve, spectra and spectral energy distribution will unravel both the astrophysics and the astrochemistry of the EM counterpart. With this first gravitational wave detection, the 21st century gold rush (Kasliwal 2013) has begun.
Figure 2. Keck II/DEIMOS classification spectra of eight iPTF candidates obtained within 2 hours of discovery. Also shown, from left to right, are the P48 discovery image, reference image, subtraction image and SDSS thumbnail around each candidate location. Colors denote spectroscopic class: SN Ia (red), SN II (blue), Nuclear (purple), SLSN I (green). Overplotted in gray lines is the best match from a supernova spectra library (SN1996X for iPTF15cyo, SN2004eo for iPTF15cys, SN1999M for iPTF15cym, SN2004et for iPTF15cyq). Additional follow-up data were needed to classify iPTF15cyk as a SLSN I (see Figure 5).
Figure 5. Spectral evolution of iPTF15cyk. The spectra show narrow lines from the host galaxy corresponding to z=0.539. The second spectrum matches a hydrogen-poor super luminous supernova, LSQ12dlf at +16 d.
A repeated IMP-binding motif controls oskar mRNA translation and anchoring independently of Drosophila melanogaster IMP
Zip code–binding protein 1 (ZBP-1) and its Xenopus laevis homologue, Vg1 RNA and endoplasmic reticulum–associated protein (VERA)/Vg1 RNA-binding protein (RBP), bind repeated motifs in the 3′ untranslated regions (UTRs) of localized mRNAs. Although these motifs are required for RNA localization, the necessity of ZBP-1/VERA remains unresolved. We address the role of ZBP-1/VERA through analysis of the Drosophila melanogaster homologue insulin growth factor II mRNA–binding protein (IMP). Using systematic evolution of ligands by exponential enrichment, we identified the IMP-binding element (IBE) UUUAY, a motif that occurs 13 times in the oskar 3′UTR. IMP colocalizes with oskar mRNA at the oocyte posterior, and this depends on the IBEs. Furthermore, mutation of all, or subsets of, the IBEs prevents oskar mRNA translation and anchoring at the posterior. However, oocytes lacking IMP localize and translate oskar mRNA normally, illustrating that one cannot necessarily infer the function of an RBP from mutations in its binding sites. Thus, the translational activation of oskar mRNA must depend on the binding of another factor to the IBEs, and IMP may serve a different purpose, such as masking IBEs in RNAs where they occur by chance. Our findings establish a parallel requirement for IBEs in the regulation of localized maternal mRNAs in D. melanogaster and X. laevis.
Introduction
mRNA localization is a commonly used intracellular trafficking mechanism that provides the means to restrict the translation of specific proteins to discrete cytoplasmic regions (for review see St Johnston, 2005). Localized mRNAs are directed to their destinations by cis-acting localization elements (LEs) that generally reside in the transcript's 3′ untranslated region (UTR). These elements must be recognized by specific RNA-binding proteins (RBPs) that link the mRNA to the localization machinery. However, this has only been clearly established in yeast, where four stem-loops in Ash1 mRNA are recognized by She2p, which links the mRNA to the myosin Myo4p through She3p (Gonzalez et al., 1999; Bohl et al., 2000; Long et al., 2000; Takizawa and Vale, 2000).
Apart from yeast, a definitive relationship has not been established between a localization signal, its cognate RBP, and the protein's function. Although genetic screens indicate that RBPs such as Staufen (Stau; St Johnston et al., 1991), HRP48 (Huynh et al., 2004), and Squid (Norvell et al., 1999) are required for localizing particular mRNAs, the elements that these proteins recognize are not well defined. On the other hand, RBPs such as hnRNPI (Cote et al., 1999), 40LoVe (Czaplinski et al., 2005), hnRNPA2 (Hoek et al., 1998), and VgRBP71 and Prrp (Zhao et al., 2001; Kroll et al., 2002) bind specific LEs, but in these cases, it remains to be conclusively proven that the protein is actually responsible for localizing the RNA.
One of the best candidates for an RBP that plays a direct role in mRNA transport is chicken and rat zip code-binding protein 1 (ZBP-1), as well as its Xenopus laevis homologue Vg1 RNA and endoplasmic reticulum-associated protein (VERA)/Vg1 RNA-binding protein (RBP), because it is highly conserved and binds to the localization signals of several different localized mRNAs (Ross et al., 1997; Deshler et al., 1998; Havin et al., 1998). ZBP-1 was first identified because it binds
specifically to a 54-nt LE in chicken β-actin mRNA, called the zip code, and colocalizes with actin mRNA in the leading lamellae of motile fibroblasts (Kislauskis et al., 1994; Ross et al., 1997). Several lines of evidence support the hypothesis that this interaction is important for β-actin mRNA localization. The overexpression of a truncated version of ZBP-1 reduces the proportion of cells in which the RNA is localized, and the introduction of ZBP-1 into cells that do not express it can induce β-actin mRNA localization (Farina et al., 2003; Oleynikov and Singer, 2003). ZBP-1 colocalizes with β-actin mRNA in the growth cones and dendrites of cultured neurons, and both the localization of the mRNA and its colocalization with ZBP-1 are reduced by antisense oligonucleotides directed against either the zip code or ZBP-1 RNA (Zhang et al., 2001; Eom et al., 2003; Tiruchinapalli et al., 2003).
The X. laevis ZBP-1 homologue VERA/Vg1RBP was identified through its binding to the Vg1LE (Deshler et al., 1997, 1998; Havin et al., 1998). VERA/Vg1RBP recognizes a motif, UUCAC (called E2), which is repeated in the Vg1LE, where it is required for the RNA's localization to the vegetal pole of the oocyte. The same E2 motif occurs five times in the VegT LE, and these sites are likewise required for the accumulation of VegT mRNA at the vegetal pole (Bubunenko et al., 2002; Kwon et al., 2002). Consistent with a role for VERA binding, the injection of anti-VERA antibodies inhibits the localization of both Vg1 and VegT mRNAs by 50% (Kwon et al., 2002).
Although there is convincing evidence that mRNA localization requires the motifs recognized by VERA and ZBP-1, it is much harder in these experimental systems to demonstrate conclusively that the proteins themselves are required. Antisense treatments, antibody injections, and dominant-negative constructs against ZBP-1/VERA appear to inhibit RNA localization, but the effects are partial and variable (Kwon et al., 2002; Eom et al., 2003; Farina et al., 2003). Therefore, we have addressed whether the ZBP-1/VERA orthologue, insulin-like growth factor II mRNA-binding protein (IMP), is required for RNA localization in Drosophila melanogaster, where it is possible to evaluate mRNA localization in mutants that lack the protein completely.
One of the best systems to examine mRNA localization in D. melanogaster is in the oocyte, where the localizations of bicoid (bcd), oskar (osk), gurken (grk), and nanos (nos) mRNAs define the anterior-posterior and dorsal-ventral axes of the embryo (St Johnston et al., 1989; Ephrussi et al., 1991; Kim-Ha et al., 1991; Gavis and Lehmann, 1992; Neuman-Silberberg and Schupbach, 1993). The most relevant to our study is osk mRNA, which localizes to the oocyte posterior pole. Once there, Osk protein nucleates assembly of the polar granules, which contain the posterior determinant nos mRNA, as well as the germline determinants (Ephrussi et al., 1991; Kim-Ha et al., 1991; Ephrussi and Lehmann, 1992). osk RNA accumulates in the oocyte from early oogenesis onwards, localizes transiently to the anterior at stage 8, and then translocates to the posterior pole over a period of several hours during stages 8-9 (Ephrussi et al., 1991; Kim-Ha et al., 1993). Posterior localization involves two substeps, initial transport and long-term anchoring (Rongo and Lehmann, 1996). osk mRNA anchoring requires Osk protein, whose synthesis is triggered upon the RNA's arrival at the posterior pole (Markussen et al., 1995; Rongo and Lehmann, 1996; Gunkel et al., 1998; Vanzo and Ephrussi, 2002).
Premature translation of oskar mRNA produces a bicaudal phenotype in which an ectopic abdomen develops in place of the head and thorax, illustrating the importance of restricting translation to the posterior pole (Ephrussi et al., 1991; Smith et al., 1992). This is achieved by repressing the translation of unlocalized mRNA and relieving this repression once the mRNA reaches the posterior pole (Kim-Ha et al., 1995; Gunkel et al., 1998). Many gene products are required for repressing the translation of unlocalized osk RNA (Wilhelm and Smibert, 2005). In contrast to repression, very little is understood about the localization-dependent translational activation of osk, other than the potential involvement of the Aubergine, Orb, and Stau proteins and the requirement of sequences at the 5′ end of the mRNA (Wilson et al., 1996; Gunkel et al., 1998; Chang et al., 1999).
We have addressed whether the D. melanogaster ZBP/VERA homologue IMP is required for maternal mRNA localization in the oocyte. Upon finding that IMP localizes at the posterior with osk mRNA, we focused our analysis on the role of the protein and its binding sites in the regulation of osk mRNA localization and translation.
Results

IMP localization within the oocyte coincides with and depends on osk RNA
IMP contains the four signature KH-type RNA-binding domains and the glutamine-rich COOH terminus (Fig. 1 H) that are present in the vertebrate orthologues (Nielsen et al., 2000; Git and Standart, 2002). Affinity-purified antibodies against IMP reveal the protein in nurse cells and the oocyte early in oogenesis. However, the high concentration of IMP in the follicle cells blocks the penetration of the antibody into the oocyte after stage 4. Therefore, we evaluated IMP localization in a homozygous, viable, and fertile GFP-IMP protein trap line (Morin et al., 2001). GFP-IMP is enriched around the nurse cell nuclei (Fig. 1 A, inset) and accumulates in the oocyte as soon as it is specified in the germarium, where it shows a uniform distribution until stage 7 (Fig. 1 A). IMP accumulates transiently at the anterior of the oocyte during stages 8-9 and then localizes in a crescent at the posterior pole at stage 9, where it remains for the duration of oogenesis (Fig. 1, A-C). This pattern of localization is very similar to that observed for osk mRNA and Stau protein, which colocalize with IMP throughout oogenesis (Fig. 1 A; Ephrussi et al., 1991; Kim-Ha et al., 1991; St Johnston et al., 1991).
To ascertain whether IMP localization depends on osk, we examined whether it is perturbed in various mutants that affect the posterior accumulation of osk mRNA and protein. IMP does not localize to the posterior of the oocyte in staufen, barentsz, and hrp48 mutants, which block the transport of oskar mRNA to the posterior pole (Fig. 1 D and not depicted; Ephrussi et al., 1991; Kim-Ha et al., 1991; St Johnston et al., 1991). Furthermore, IMP colocalizes with osk RNA to an ectopic dot in the center of the oocyte in a par-1 mutant that disrupts microtubule polarity (Fig. 1 E; Shulman et al., 2000). Together, these results demonstrate that the localization of IMP to the oocyte posterior pole requires the localization of osk mRNA.
IMP could localize to the posterior through a direct interaction with osk mRNA or protein or could be recruited to the posterior by a downstream component of the pole plasm. To distinguish between these possibilities, we examined IMP localization in a strong vasa hypomorph (vasa PD/Df(2L)TW2), which prevents the posterior recruitment of Vasa by Osk and disrupts all subsequent steps in pole plasm assembly (Hay et al., 1990; Lasko and Ashburner, 1990; Breitwieser et al., 1996). IMP localizes normally to the posterior of these oocytes (Fig. 1 F), suggesting that its posterior accumulation depends on osk directly. Finally, we addressed whether IMP localization depends on Osk protein rather than osk mRNA by examining a nonsense mutation (osk 54/Df) that disrupts the anchoring, but not the initial localization, of osk mRNA (Ephrussi et al., 1991; Kim-Ha et al., 1991; Markussen et al., 1995; Rongo et al., 1995). IMP still localizes to the posterior of these oocytes at stage 9, but the posterior crescent is weaker than in wild type (WT) and disappears at stage 10 (Fig. 1 G). Thus, IMP behaves like osk mRNA in every mutant combination examined, suggesting that it localizes to the posterior in association with the mRNA.
Identification of IMP's RNA targets using systematic evolution of ligands by exponential enrichment (SELEX)
KH domains recognize short, single-stranded RNA motifs (Lewis et al., 1999, 2000; Jensen et al., 2000) similar to the motifs that are required for localization of RNAs in chicken embryo fibroblasts (Kislauskis et al., 1993; Farina et al., 2003) and X. laevis oocytes (Deshler et al., 1997; Kwon et al., 2002; Lewis et al., 2004). To identify the motifs recognized by IMP, we performed in vitro selection experiments on a large pool of ∼7 × 10^14 RNAs containing random 25-nt sequences. Because we were unable to obtain the first and second KH domains of IMP as soluble proteins, we selected RNAs that bind to the third and fourth KH domains. This seemed justified, as the vertebrate homologue ZBP-1 binds the β-actin zip code primarily through its third and fourth KH domains (Farina et al., 2003).
The structural basis of RNA recognition by KH domains was established through biochemical and x-ray diffraction studies of the KH domains from another protein, NOVA (Jensen et al., 2000; Lewis et al., 2000). Those studies used 11 rounds of in vitro selection against their isolated KH domain to identify its preferred recognition element, which is a particular sequence of four bases. On this basis, we chose to evaluate the 11th and 12th round "winner sequences" selected by the IMP KH domains in respect to the frequency of all tetramers. The most common tetramer retained by either KH3 or KH4 was UUUA, which occurred in 43% of the winning KH3 sequences and 46% of the KH4 winners. The base preferred by the IMP KH domains after the principal tetranucleotide was most frequently C (35%) or U (32%). Thus, SELEX indicates that the optimal binding sequence for both IMP KH3 and KH4 is UUUAY, which was present in 36% of clones bound by KH3 and 37% of clones bound by KH4 (Fig. 2 A).
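The tetramer analysis described above is, computationally, a simple counting exercise; a minimal sketch of the idea follows (the winner sequences here are invented placeholders, not the actual SELEX output).

from collections import Counter

def tetramer_frequencies(sequences):
    """Count in how many winner sequences each tetramer occurs at least once."""
    counts = Counter()
    for seq in sequences:
        tetramers = {seq[i:i + 4] for i in range(len(seq) - 3)}  # unique per sequence
        counts.update(tetramers)
    return counts

# Hypothetical 25-nt winner sequences standing in for the real SELEX clones.
winners = [
    "AUUUACGGAUUUAUCCAAUUUACGG",
    "GGUUUAUAACCGGUUUACUUAAGGC",
    "CCAAUGGUUUAUAAGGCCAAUUGGC",
]

freqs = tetramer_frequencies(winners)
n = len(winners)
for tet, c in freqs.most_common(5):
    print(f"{tet}: present in {c}/{n} sequences ({100 * c / n:.0f}%)")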
To quantify the binding of KH3 and full-length IMP to UUUAY-containing RNA, we performed filter-binding assays using three tandem repeats of the 25-nt winner RNA, 4-12-13 (Fig. 2, C and D). When all five nucleotides of the motif are changed to GGGCG, the affinity of the RNA for the KH3 domain diminishes by an order of magnitude; and even a single nucleotide change, UUUAY to UUgAY, decreases the affinity significantly (Fig. 2 C). Full-length IMP binds to the UUUAY-containing RNA with an even higher affinity, and the mutations in the motif decrease binding to a similar extent to that observed for the single KH domains (Fig. 2 D). Electrophoretic mobility shift assays confirm the results of filter-binding assays; IMP shifts the mobility of RNAs with UUUAY, but not the mutant motifs (Fig. 2 E). These results indicate that IMP's KH domains 3 and 4 specifically recognize the UUUAY motif, which we refer to as an IMP-binding element (IBE).
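Filter-binding data of this kind are typically quantified by fitting the fraction of RNA bound as a function of protein concentration to a simple 1:1 binding isotherm, fraction bound = Bmax · [P] / ([P] + Kd). Below is a hedged sketch of such a fit; the data points are invented for illustration and do not come from Fig. 2.

import numpy as np
from scipy.optimize import curve_fit

def binding_isotherm(protein_nm, kd_nm, bmax):
    """Simple 1:1 binding model: fraction bound = Bmax * [P] / ([P] + Kd)."""
    return bmax * protein_nm / (protein_nm + kd_nm)

# Invented example data: protein concentration (nM) vs. fraction of RNA bound.
protein = np.array([10, 30, 100, 300, 900, 2700], dtype=float)
bound = np.array([0.08, 0.20, 0.42, 0.65, 0.82, 0.90])

(kd, bmax), _ = curve_fit(binding_isotherm, protein, bound, p0=[100.0, 1.0])
print(f"fitted Kd ~ {kd:.0f} nM, Bmax ~ {bmax:.2f}")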
IMP binds specifically to repeated IBEs in the osk 3′UTR
The IBE motif occurs 13 times in the 3′UTR of osk mRNA (Fig. 3 A), which is significantly more frequent than would be expected by chance. This contrasts with the 3′UTRs of other localized mRNAs, such as bicoid, which contains only two copies of the motif. Indeed, osk mRNA associates specifically with IMP in vivo because it coimmunoprecipitates with IMP from ovary extracts, whereas bicoid mRNA does not (Fig. S1, available at http://www.jcb.org/cgi/content/full/jcb.200510044/DC1).
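Counting IBEs in a 3′UTR, and asking whether an observed count exceeds chance expectation, can be done with a short script. The sketch below uses a regular expression for UUUAY (Y = C or U) and a naive shuffle-based expectation; the example sequence is a random placeholder, not the real osk 3′UTR.

import re
import random

def count_ibes(utr: str) -> int:
    """Count UUUAY motifs (Y = C or U), allowing overlapping matches."""
    return len(re.findall(r"(?=UUUA[CU])", utr))

def expected_by_chance(utr: str, trials: int = 1000) -> float:
    """Estimate the expected count in shuffled sequences of identical composition."""
    bases = list(utr)
    total = 0
    for _ in range(trials):
        random.shuffle(bases)
        total += count_ibes("".join(bases))
    return total / trials

utr = "".join(random.choice("ACGU") for _ in range(1120))  # placeholder 1,120-nt UTR
print(f"observed: {count_ibes(utr)}, expected by chance: {expected_by_chance(utr):.1f}")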
To characterize the interaction between IMP and the osk 3′UTR, we performed UV cross-linking assays with ovary extracts and a 32P-labeled RNA probe of the osk 3′UTR (Fig. 3, B-E). Using a procedure that optimized the binding of VERA/Vg1RBP to the LEs of Vg1 and VegT in X. laevis oocyte extracts (Deshler et al., 1997), we found that a single 65-kD polypeptide cross-links to the osk 3′UTR, but not to the bcd 3′UTR. This polypeptide co-migrates with the band detected by anti-IMP antibodies on immunoblots (Fig. 3, B-D). Similar cross-linking experiments, using extracts from embryos expressing the GFP-IMP fusion protein, labeled a second polypeptide, whose slower mobility corresponds to that expected of the GFP-IMP fusion (Fig. 3 E). This confirms that the protein cross-linked in the experiments is IMP.
To address whether the binding of IMP to the osk 3′UTR depends on the IBEs, we mutated all 13 copies of the motif to UUgAY or gggcg. Both mutant osk RNAs are significantly impaired in their ability to compete the UV cross-linking of the WT osk 3′UTR to IMP in ovary extracts (Fig. 3 F). The predicted secondary structures (Mathews et al., 1999) of the WT and UUgAY osk 3′UTRs are virtually identical, suggesting that the single-base substitutions in osk's IBEs inhibit IMP binding not through a nonspecific effect on the RNA's folding, but instead through abrogation of sequence-selective binding of the IBEs by IMP's KH domains 3 and 4. The very specific effects of IBE base substitutions on osk RNA localization and translation provide additional, much stronger, evidence that the mutations do not affect RNA folding significantly.
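Such a comparison of predicted WT and mutant secondary structures can be reproduced with any standard RNA-folding package. The sketch below assumes the ViennaRNA Python bindings (an assumption; the authors used the Mathews et al. algorithm) and a toy sequence carrying three IBEs.

# Assumes the ViennaRNA Python bindings are installed (pip install ViennaRNA);
# this is an illustration, not the folding method used in the paper.
import RNA

wt = "GGGUUUACCCAAAUUUAUGGGCCCUUUACGGG"  # toy sequence with three IBEs
mut = wt.replace("UUUA", "UUGA")          # UUUAY -> UUgAY single-base mutants

for name, seq in [("WT", wt), ("UUgAY", mut)]:
    structure, mfe = RNA.fold(seq)        # MFE structure in dot-bracket notation
    print(f"{name}: {structure}  ({mfe:.1f} kcal/mol)")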
Posterior IMP localization depends on the IBEs in the osk 3′UTR
To address whether the IBEs in the osk 3′UTR are required for the posterior localization of IMP, we created transgenic lines in which all 13 copies of the IBE are mutated from UUUAC/U to UUgAC/U (osk 13TTgAY) in an otherwise WT genomic osk fragment. Because we wanted to avoid the complication of mutant transgenic osk mRNAs localizing to the posterior by hitchhiking on the endogenous WT mRNA (Hachet and Ephrussi, 2004), we introduced this transgene, or a control unmutated transgene (osk 13TTTAY), into an osk RNA-null background (osk A87/oskDf(3R)pXT103; unpublished data). Both the osk 13TTgAY and the control unmutated transgene (osk 13TTTAY) rescue the stage 6 oocyte-arrest phenotype of osk RNA-null flies completely (unpublished data). Through stage 9, the distribution of the mutant osk 13TTgAY mRNA is comparable to that of endogenous WT osk mRNA (Fig. 4 A). Thus, the IBEs are not necessary for osk mRNA's initial transport to the posterior pole.
To examine IMP localization, we introduced the GFP-IMP protein trap into osk A87/oskDf(3R)pXT103 flies carrying the osk 13TTgAY or osk 13TTTAY transgenes. Whereas GFP-IMP colocalizes with endogenous Stau at the posterior of oocytes containing the WT osk 13TTTAY transgene, it never accumulates at the posterior of oocytes expressing osk 13TTgAY mRNA, although Stau still localizes normally (Fig. 4 B). The IBEs in the osk 3′UTR are therefore essential for the posterior localization of IMP, confirming that these UUUAY motifs are bona fide IMP-binding sites in vivo.
Although the localization of osk 13TTgAY mRNA is similar to that of the control osk 13TTTAY mRNA until the end of stage 9, the mutant mRNA disappears from the posterior at stage 10b (Fig. 4 A). Furthermore, Stau protein displays an identical phenotype; it forms a WT posterior crescent at stage 9 and then disappears from the posterior at stage 10b (Fig. 4 B). The IBEs in the osk 3′UTR are therefore necessary for the anchoring of osk mRNA at the posterior cortex.
The osk IBEs are required for the translational activation of osk mRNA
The failure to maintain osk 13TTgAY mRNA at the posterior at stage 10 could reflect a direct role for the IBEs in the anchoring of the mRNA. However, the maintenance of osk mRNA at the posterior requires Osk protein, which is only translated once the mRNA has been localized (Gunkel et al., 1998). Thus, an alternative possibility is that the IBEs are required for the activation of osk mRNA translation at the posterior, and that the anchoring defect is secondary to a lack of Osk protein. To address the effect of the IBE mutations on Osk protein synthesis, we stained osk A87/oskDf(3R)pXT103; osk 13TTgAY or osk 13TTTAY ovaries with an anti-Osk antibody. The mRNA from a single copy of the WT osk transgene produces a robust posterior crescent of Osk protein from stage 9 onwards, whereas no Osk protein can be detected at any stage in the lines expressing osk 13TTgAY mRNA (Fig. 5, A and B). Thus, the IBEs are essential for the derepression of osk mRNA translation at the posterior pole. The embryos from osk A87/oskDf(3R)pXT103; osk 13TTgAY mothers display a fully penetrant osk maternal-effect phenotype in which the abdomen fails to form, consistent with the failure to translate Osk protein (Fig. 5, C and D). The absence of Osk protein was further confirmed by Western blot of ovarian extracts from osk A87/oskDf(3R)pXT103 flies that express either the WT or osk 13TTgAY transgene (Fig. 5 E).
Multiple copies of the IBE are necessary for osk mRNA translation
Although the phenotype of the osk 13TTgAY suggests that the IBEs are important for osk mRNA translation and anchoring, an alternative possibility is that one of the IBE mutations prevents translation for some other reason; e.g., by chance one IBE might overlap the actual translational control element or all 13 IBE mutations might alter the folding of osk RNA. We therefore created four sets of transgenic lines in which nonoverlapping subsets (A-D) of three or four consecutive IBEs are mutant (Fig. 6 A). Three of these mutant lines (osk TTgAY A, C, and D) have phenotypes that are very similar to that of osk 13TTgAY. They display a fully penetrant osk maternal-effect defect; Stau and the mutant osk RNAs localize to the oocyte posterior pole at stage 9, but appear dislodged from the posterior or disappear altogether during stage 10; and Osk protein is absent (Fig. 6, C, E, and G; and not depicted). In contrast, the fourth construct (osk TTgAY B) rescues the osk mRNA-null phenotype completely, and the localizations of osk mRNA and protein and Stau are normal (Fig. 6, B, D, and F; and not depicted). These findings support the hypothesis that multiple IBEs, and not some other control element that overlaps one IBE, are responsible for osk RNA translational activation and anchoring.
Creation and analysis of imp mutants
To test whether IMP is required for osk mRNA translation and anchoring, we generated null mutations in the protein through imprecise P excision. Screening by PCR revealed that three of these lines, imp 2, imp 7, and imp 8, correspond to imprecise excisions that specifically removed parts of the IMP-coding region (Fig. 7 A). Both imp 7 and imp 8 remove a large portion of the IMP-coding region and are presumably null alleles, whereas imp 2 removes both the alternate initiation codons, but may produce some protein from downstream in-frame ATGs (Fig. 7, A and B). Furthermore, there is no detectable IMP staining in mutant germline clones, marked by the absence of GFP (Fig. 7 C).
Although imp mutants are zygotic lethal, the complete removal of IMP from the germline has no obvious effect on oogenesis. Most importantly, osk mRNA localizes normally to the posterior of the oocyte at stage 9 in germline clones of all three alleles and remains anchored there throughout oogenesis (Fig. 7 D). Furthermore, the mRNA is translated at the posterior pole and produces a normal crescent of Osk protein (Fig. 7 D). Thus, despite being a bona fide component of the osk RNA localization complex and binding to the motifs required for osk translation and anchoring, IMP plays no essential role in the assembly or function of the pole plasm. However, maternal IMP is essential for embryogenesis, as 100% of the embryos from imp germline clones die in late embryogenesis and this phenotype is not rescued by a WT paternal copy of the gene.
Discussion
Our objective was to address whether D. melanogaster IMP is required for mRNA localization, as previous studies of its vertebrate homologues, ZBP-1 and VERA/Vg1RBP, had not resolved this question definitively (Zhang et al., 2001; Kwon et al., 2002; Eom et al., 2003; Farina et al., 2003; Tiruchinapalli et al., 2003). We have demonstrated that IMP binds directly to osk mRNA at well defined sites that are required for osk translation and anchoring. The best evidence that these sites are bona fide IBEs is that IMP is not recruited to the posterior by osk mRNA in which all 13 IBEs have been mutated with a single base change. Indeed, this is one of the only cases we are aware of where it has been possible to demonstrate that an RBP interacts in vivo with well defined elements identified biochemically in vitro. In vitro, mutant RNA still competes for binding of IMP, albeit less effectively than the WT osk RNA, suggesting that the 3′UTR may contain other lower affinity sites. However, these sites are not involved in the recruitment of IMP to the posterior in vivo, nor are they sufficient for translational activation. Although the IBEs are thus bona fide in vivo IMP-binding sites, their role in osk RNA translation and anchoring is independent of IMP, which is not required for these activities.
Two outcomes of this investigation seem particularly surprising. First, IBEs are required not for the initial localization of osk mRNA, but instead for its translational activation once it is localized and its subsequent anchoring at the posterior pole. Second, osk mRNA localization-dependent translation and anchoring require the IBEs in its 3′UTR, but not IMP itself.
Because Osk protein defines where the pole plasm forms, and hence where the pole cells and abdomen develop, it is essential that osk mRNA is only translated at the oocyte posterior. Indeed, translational control is arguably more important than localization in restricting Osk to the posterior, as normally only 18% of osk mRNA is actually localized (Bergsten and Gavis, 1999), and osk mRNA localization mutants such as barentsz (van Eeden et al., 2001) produce a normal abdomen. The translational repression of unlocalized osk mRNA occurs in different ways, depending on the stage of oogenesis. Mutants in RNA interference pathway components cause premature translation of osk mRNA during early oogenesis (Cook et al., 2004). Repression at later stages does not depend on these components, but instead requires the binding of Bruno and Hrp48 to three elements in the 3′UTR called Bruno response elements (Kim-Ha et al., 1995; Gunkel et al., 1998; Yano et al., 2004). This repression may occur at the level of translation initiation through the binding of Bruno to Cup protein and of Cup to the Cap-binding protein eIF4E, implying that the 5′ and 3′ ends of the mRNA are linked (Wilhelm et al., 2003; Nakamura et al., 2004).
Much less is known about how osk mRNA translation is derepressed at the posterior, apart from the findings that a 297-nt element at the 5′ end is required for the localization-dependent activation of a reporter RNA fused to the osk 3′UTR (Gunkel et al., 1998) and that the osk 3′UTR, although sufficient to repress the translation of heterologous coding sequences, is insufficient to activate their translation at the posterior (Rongo et al., 1995). Our data now provide direct evidence that the osk 3′UTR, through its IBEs, is required for translational derepression. Therefore, like repression, activation involves both the 5′ and 3′ ends. Moreover, three osk transgenes (osk TTgAY A, C, and D) with only 3 out of 13 sites mutated at a single base prevent osk translational derepression. These are much more subtle mutations than the deletions that have previously been used to define osk derepression elements (Gunkel et al., 1998) and will be useful for identifying the corresponding derepressor proteins.
Although the CPEB homologue, Orb, and the RISC component, Aubergine, have been proposed to play a role in osk translational activation (Wilson et al., 1996; Chang et al., 1999), mutants in these proteins also affect the initial localization of osk mRNA to the posterior, and this may account for the observed reduction in Osk protein levels (Castagnetti and Ephrussi, 2003; Martin et al., 2003). The only mutant combination that produces a similar phenotype to osk 13TTgAY is stau-null mutants that have been rescued by a transgene expressing Stau protein that lacks the fifth double-stranded RNA-binding domain (Micklem et al., 2000). However, Stau is unlikely to be the putative factor that interacts with the IBEs in the osk 3′UTR to activate translation, both because it recognizes double-stranded RNA rather than short-sequence motifs (Ramos et al., 2000) and because the IBE mutations prevent osk mRNA translation without affecting Stau localization to the posterior pole at stage 9.
This brings us to the most significant outcome of our investigation: osk RNA translational activation and anchoring is disrupted by mutants in the IBEs, but not by the loss of IMP itself. The possibility that the IBE mutations prevent osk mRNA derepression and IMP localization indirectly by altering the structure of the RNA seems extremely unlikely, as single-base substitutions within three nonoverlapping sets of three IBEs in widely separated regions of the >1-kb osk 3′UTR produce an identical and very specific defect in translation, without affecting any of the earlier functions of the 3′UTR, such as the maintenance of oocyte fate, the transport of the mRNA from the nurse cells into the oocyte, the translational repression of unlocalized mRNA, or its localization to the posterior pole. Thus, none of these mutations disrupt the binding of any of the factors that mediate these earlier steps, including Staufen, which is thought to recognize the secondary structure of the RNA through the interaction of its double-stranded RNA-binding domains with multiple stem loops. This strongly argues against the possibility that the single base changes to the IBEs inhibit osk RNA translation through a nonspecific effect on RNA folding. This leads us to conclude that the IBEs play a direct role in the derepression of osk mRNA translation.
Because IMP itself is not necessary for derepression, this implies that the IBEs are also recognized by another factor, which we will call factor X. IMP and factor X could function redundantly to derepress osk translation, i.e., the two proteins might share osk's IBEs and compensate for each other's loss. However, factor X cannot be a ZBP-1/VERA family member because, unlike mammals, no such relatives are evident in the D. melanogaster genome.
Alternatively, IMP and factor X might function independently, i.e., osk derepression might occur exclusively through factor X binding. Rather than implementing osk's translational derepression, IMP's actual function might be to compete with factor X for IBE binding. In support of this, we have found that overexpression of IMP reduces Osk protein levels at the posterior (Fig. S2, available at http://www.jcb.org/cgi/content/full/jcb.200510044/DC1). Although the purpose of IMP competition is presently unclear, one possibility is that IMP serves to bind, and thereby mask, IBEs that occur by chance in RNAs for which factor X binding would be unnecessary or even detrimental. According to this view, competition with IMP would restrict factor X binding to those mRNAs, such as osk, that contain many copies of IBEs clustered within a restricted region. In the absence of IMP, factor X could bind to mRNAs with fewer IBEs and inappropriately regulate their translation. This may explain why embryos from imp-null oocytes always die, but from defects that appear unrelated to Osk function.
Our analysis of the interaction of IMP with osk mRNA closely parallels that of ZBP-1 and VERA/Vg1RBP with β-actin and Vg1 mRNA, respectively. (a) In each case, the protein has been shown to colocalize with the localized mRNA and can be UV cross-linked to it in extracts; (b) the precise binding sites of each protein have been determined and reveal that it recognizes a repeated motif in the target mRNA; (c) the function of these sites has then been analyzed by introducing specific point mutations that abrogate the binding of the protein, and these have been found to have a dramatic effect on translation or localization. In this study, we have gone one step further, and have compared the phenotype of the IBE mutants with that of mutations in IMP itself. The observation that the former gives a fully penetrant defect in osk mRNA translation, whereas the latter has no phenotype in the germline, conclusively demonstrates that IMP is not responsible for the function of the IBEs in the osk 3′UTR. This is important in light of the observation that many RBPs have been implicated in the posttranscriptional regulation of particular mRNAs by studying the effects of mutations in their binding sites. Our results highlight the potential limitations of this approach by demonstrating that one cannot necessarily infer the function of a protein from the phenotype of mutations in the cis-acting sequences that it recognizes.
The clear similarities between the localizations and functions of Vg1 and VegT mRNAs in X. laevis oocytes, and of osk mRNA in D. melanogaster oocytes, suggest that binding motifs for ZBP-1 proteins have a fundamental role in embryogenesis. Vg1, VegT, and osk localize as mRNAs to one pole of the oocyte, which is the site where the germ or pole plasm forms, and all three proteins play key roles in the formation of the primary body axis (Melton, 1987; Ephrussi et al., 1991; Kim-Ha et al., 1991; Zhang and King, 1996). Our findings extend this parallel by showing that the localized expression of all three proteins also depends on a repeated RNA motif, defined by its interaction with IMP or its homologues. Because our results rule out a function for IMP in the regulation of osk mRNA, this calls into question the role of VERA/Vg1RBP in the localization of Vg1 and VegT mRNAs, and it may therefore be worth considering the possibility that there is also a factor X in X. laevis.
Materials and methods

SELEX
The cDNAs encoding IMP KH3 (residues Leu 301-Ala 396) and KH4 (residues Val 387-Gln 482) were subcloned into ProEX HTb (Life Technologies). These KH constructs included the canonical KH domain, as well as 20 additional residues at the COOH termini that, in a previous study of a different protein, were found essential for high affinity binding of the RNA recognition element (Jensen et al., 2000). The constructs were expressed in Escherichia coli and recovered by extraction of the bacteria with a solution of 8 M urea, 100 mM NaH2PO4, and 10 mM Tris-Cl, pH 8.0. The fusion proteins were bound to Ni-NTA agarose (QIAGEN), eluted at pH 4.5, and dialyzed against 50 mM NaH2PO4, pH 8.0, 300 mM NaCl, 5% glycerol, and 2 mM DTT.
To create a random 25mer RNA pool for SELEX, we used 1 nmol of the oligonucleotide 5′-GCGAATTCAGATAGTAAGTGCAATCT{25N}AATTGAATAAGCTGGTATCTCCC-3′ (Invitrogen), where N indicates the incorporation of nucleotides at random. EcoRI sites and sequences for RT-PCR amplification are included. This provided an oligonucleotide pool with an estimated complexity of 7.2 × 10^14 sequences. To generate a double-stranded DNA library suitable for in vitro transcription, we PCR amplified the pool using 5′-GCGAAGCTTTAATACGACTCACTATAGGGAGATACCAGCTTATTCAATT-3′ and 5′-GCGAATTCAGATAGTAAGTGCAATCT-3′ as the forward primer containing the T7 promoter and HindIII sites and the reverse primer containing an EcoRI site. We synthesized RNA from the double-stranded DNA using T7 RNA polymerase in the presence of α-32P-UTP and then purified the RNA pool on an acrylamide gel run under denaturing conditions.
To select for RNAs that bind the IMP KH domains, the gel-purified pool was split and each half applied to either KH3 or KH4 that was immobilized on separate Ni-NTA agarose. Bound and unbound RNAs were separated by centrifuging the beads. RNAs retained by the beads were extracted with phenol/chloroform and precipitated with ethanol. These eluted RNAs, which were enriched in sequences recognized by KH3 or KH4, were subjected to RT-PCR and in vitro transcription, thereby generating the RNAs for the next round of selection. After the 1st, 11th, and 12th rounds of selection, aliquots of the cDNAs corresponding to the RNAs selected by KH3 or KH4 were cloned and sequenced using standard techniques.
RNA-binding assays
32P-RNAs consisting of three tandem repeats of the winner sequence 4-12-13 (Fig. 2) were synthesized in vitro using the oligonucleotide 5′-GTTGAAAAAATAAAAATAATAAAAAGTTGAAAAAATAAAAATAATAAAAAGTTGAAAAAATAAAAATAATAAAAACTATAGTGAGTCGTATTA-3′ annealed to 5′-TAATACGACTCACTATAG-3′, which contains the T7 promoter. To create the corresponding motif mutants (UUgAY and gggcg) in the same context, the nucleotides encoding the IBEs (underlined) were altered (5′-ATcAA-3′ or 5′-CGCCC-3′) accordingly. RNA synthesis was performed with α-32P-UTP and an AmpliScribe T7 transcription kit (Epicentre Biotechnologies).
For electrophoretic mobility shift assays, 60 fmol of 32P-RNA was incubated with Histidine-tagged IMP (30, 100, 300, and 900 nM) at RT for 30 min before electrophoresis under nondenaturing conditions using 8% acrylamide (37.5:1) gels. The gels were run at 100 V for 4 h at 4°C. Gels were dried and imaged on a PhosphorImager SI (Molecular Dynamics). UV cross-linking assays were performed as described previously (Kwon et al., 2002).
Transgenes
We engineered GFP fusions of IMP using imp cDNA that we obtained either from ESTs (provided by K. Korey and D. Van Vactor, Harvard Medical School, Boston, MA) or a D. melanogaster ovarian cDNA library (provided by N. Brown, The Gurdon Institute, Cambridge, UK). We cloned the imp ORF into pUMAT-GFP downstream of the maternal α-4-tubulin promoter, which drives expression in the germline (Micklem et al., 1997), or into pUAS-p, which allows for the tissue-specific expression of the transgene using the Gal4/UAS system (Brand and Perrimon, 1993; Rorth, 1998). The osk constructs were made from a 10-kb XhoI-ApaI fragment of genomic DNA (a gift from U. Irion, The Gurdon Institute, Cambridge, UK). The osk 3′UTR TTTAY motifs were mutated using the Transformer site-directed mutagenesis kit (CLONTECH Laboratories, Inc.).
Antibodies and histology
We generated rabbit antisera against full-length recombinant IMP or peptides. Antibodies were affinity purified against peptide immobilized to Sulfolink resin (Pierce Chemical Co.) or recombinant IMP immobilized to CNBr-Sepharose (Roche). Dilutions for immunoblots were as follows: 1:300 for anti-IMP antibodies, 1:3,000 for anti-Osk antibody (a gift from A. Ephrussi, European Molecular Biology Laboratory, Heidelberg, Germany), and 1:5,000 for anti-tubulin antibody (Sigma-Aldrich). Dilutions for immunostaining were 1:100 for anti-IMP, 1:500 for anti-Osk, and 1:500 for anti-Stau (St Johnston et al., 1991). Secondary antibodies were obtained from Jackson ImmunoResearch Laboratories. We performed osk RNA in situ hybridization as previously described, using dig-UTP-labeled RNA (Roche) and Cy3-anti-Dig (Jackson ImmunoResearch Laboratories; Huynh et al., 2001). Cuticle preparations were mounted in 1:1 Hoyers/lactic acid, and images were collected using a SPOT camera and software (Diagnostic Instruments) on an Axioplan microscope (Carl Zeiss MicroImaging, Inc.) using a 10× objective at RT. Fluorescent samples were mounted in Vectashield (Vector Laboratories). Images were collected on a confocal system (models 1024 or Radiance 2100; Bio-Rad Laboratories) with Lasersharp 2000 software (Bio-Rad Laboratories), attached to a microscope (Eclipse E800; Nikon) using a 40×, 1.3 NA, objective at RT. Images were subsequently processed with Photoshop (Adobe).

P element excision mutants imp 2, imp 7, and imp 8

We used standard methods to generate and isolate P element excision lines that lack IMP gene segments. We obtained the EP(X)760 P-insertion line (w 1118, P{w +mC = EP}IMP EP760) generated by the Berkeley Gene Disruption Project from the Bloomington Stock Center. To characterize the excisions molecularly, we extracted DNA from homozygous mutant larvae and performed PCR with primers designed to identify lines that lack regions of the imp gene.
Online supplemental material

Fig. S1 depicts an experiment showing that oskar RNA is specifically immunoprecipitated with IMP. Fig. S2 shows that overexpression of IMP in the germline decreases the amount of Oskar protein at the posterior, as well as causing actin defects late in oogenesis. Online supplemental material is available at http://www.jcb.org/cgi/content/full/jcb.200510044/DC1.
Figure 1. IMP localization during oogenesis. (A-G) Localization of IMP during oogenesis visualized using G080, a GFP protein trap line (A, B, and E-G), or a GFP-IMP fusion construct specifically expressed in the germline (C and D). (A) IMP localizes to the future oocyte in the germarium, accumulates in the oocyte through stages 7-8, appears enriched at both the anterior and posterior of the oocyte at stage 8, and is restricted to a crescent at the oocyte posterior pole by stages 9-10. These localizations are similar to those of Staufen protein, which marks the localization of osk mRNA. Within nurse cells, IMP is primarily cytoplasmic, but also rims the nucleus (inset). (B-G) IMP localization (green) and actin (red) in oocytes at stage 9 (B) or 10 (C) in WT and mutant backgrounds (D-G). (D) IMP is absent from the posterior crescent in a stau-null mutant that fails to localize osk RNA. (E) Like osk RNA (not depicted), IMP localizes ectopically in a transheterozygous par-1 allele combination that disrupts the polarity of the egg chamber. (F) IMP localizes normally in a vasa mutant that interferes with pole plasm formation. (G) IMP localizes to the posterior at stage 9 in an osk nonsense mutant (osk 54/Df(3R)pXT103) egg chamber, indicating its localization depends on osk RNA, not protein. (H) IMP contains four KH-type RNA-binding domains and a glutamine-rich COOH terminus. Numbers indicate the percentage of amino acid identity between IMP's KH domains and those of its homologues. Bars, 25 μm.
Figure 2. Characterization of the IBE UUUAY. (A and B) Representative RNA sequences selected after 12 rounds of SELEX against IMP's KH3 (A) and KH4 (B) domains. IBEs are in red. (C and D) Filter-binding assays between three tandem repeats of the winner sequence 4-12-13 (or the same RNA with IBEs mutated to UUgAU or gggcg) and either IMP KH3 (C) or the entire protein (D). (E) Electrophoretic mobility shift assay between IMP and three tandem copies of the winner sequence 4-12-13 containing either WT or mutant IBE motifs. Only RNAs with the WT (UUUAU) motifs induced a band shift.
Figure 3. The osk 3′UTR contains 13 IBEs and UV cross-links to IMP. (A) Positions of the 13 IBEs in the osk 3′UTR. (B) A 65-kD protein (IMP) in D. melanogaster ovary extracts specifically cross-links the 1,120-nt osk 3′UTR, but not the 817-nt bcd 3′UTR, which contains only two UUUAY motifs. (C and D) Anti-IMP immunoblot of ovary extract (C) labels the same band that UV cross-links to the 32P-osk 3′UTR (D). (E) The 32P-osk 3′UTR cross-links to a ∼59-kD polypeptide (GFP-IMP) in embryo extracts of the protein trap line G080. (F) Cross-linking reactions between the 32P-osk 3′UTR and oocyte extracts in the presence of increasing concentrations of cold, competitor RNAs, including WT and mutant osk 3′UTRs and the bicoid 3′UTR.
Figure 4. osk mRNA, IMP, and Stau distributions in osk 13TTgAY flies. (A) Fluorescent in situ hybridizations comparing WT and mutant transgenic osk RNAs in flies that otherwise lack endogenous osk RNA. Mutant osk RNA localizes normally through stage 9 (middle), but by stage 10 (bottom), is evident as diffuse fluorescence fanning out from the posterior pole. (B) IMP (green) and Stau (blue) proteins in WT and mutant osk oocytes before stage 9 (top), at stage 9 (middle), and at stage 10 (bottom). Actin is visualized with rhodamine-phalloidin (red). At stage 9, Stau is localized normally at the posterior pole in oocytes that express the mutant osk transgene, whereas IMP is diffuse and not concentrated at the posterior. Both Stau and IMP are missing from the posterior pole in stage 10 oocytes that express the mutant osk RNA. Bars, 25 μm.
Figure 5. IBE mutations abolish osk RNA translational activation. (A and B) Anti-Osk immunostaining of osk 87/Df(3R)pXT103 egg chambers expressing a WT osk transgene, showing a crescent of Osk protein (A). Osk protein is missing in egg chambers from osk TTgAY flies (B). (C and D) Cuticle preparations of larvae from osk 87/Df(3R)pXT103 that express the WT (C) or mutant osk TTgAY (D) transgene. (E) Western blot of ovarian protein extracts probed with anti-Osk antibody, followed by an anti-tubulin antibody. Extracts were from WT flies with no osk transgene (WT) or from osk 87/Df(3R)pXT103 flies expressing either a WT or the IBE mutant osk TTgAY transgene, or the nonsense mutant osk 54 transgene. Neither long nor short Osk is present in the osk TTgAY or osk 54 mutants. Bars, 25 μm.
Figure 6. Mutations to nonoverlapping subsets of osk's IBEs and their effects on osk RNA and protein distributions in the oocyte. (A) Four subsets of IBEs in the osk 3′UTR: A, B, C, and D. (B-E) osk in situ hybridizations at stages 9 (B and C) and 10 (D and E). (F and G) Osk protein detected by immunofluorescence at stage 10. Oocytes that express osk RNA with mutations to IBE subset B display normal localization of osk mRNA and protein. Mutations to subset D, or to subsets A and C (not depicted), cause osk RNA delocalization from the posterior pole at stage 10; Osk protein is absent in these oocytes. Bars, 25 μm.
Figure 7. Analysis of oocytes that lack IMP protein. (A) The imp gene is flanked by sesB, Ant2, and sbr. The diagram shows alternatively spliced isoforms (exons in blue), the location of GFP in protein trap line G080, and the P element insertion EP(X)760 used to create three mutant alleles of IMP. The positions of the two alternate ATGs are shown in green. (B) Immunoblots of preblastoderm embryos laid by mothers with IMP mutant germline clones. IMP 2 may produce a truncated protein, whereas IMP 7 and IMP 8 are protein nulls. (C, top) A germline clone (center egg chamber) marked by the absence of GFP. (bottom) IMP is absent only from the mutant clone. (D) The distributions of osk mRNA and Osk protein in IMP-null egg chambers (middle and right) are indistinguishable from WT (left) in IMP mutant germline clones generated using the FLP/FRT OvoD1 DFS technique. Bars, 25 μm.
|
2014-10-01T00:00:00.000Z
|
2006-02-13T00:00:00.000
|
{
"year": 2006,
"sha1": "1774122b051617c894ce17fa422f7e44e20b95b9",
"oa_license": "CCBYNCSA",
"oa_url": "https://rupress.org/jcb/article-pdf/172/4/577/1323404/577.pdf",
"oa_status": "BRONZE",
"pdf_src": "CiteSeerX",
"pdf_hash": "292f82169001de4b7170992b84e70122b9d739b2",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
}
|
247377155
|
pes2o/s2orc
|
v3-fos-license
|
LABORATORY AND ANTHROPOMETRIC PARAMETERS IN THE ASSESSMENT OF THE RISK OF CARDIOVASCULAR DISEASE
Introduction: Laboratory and anthropometric parameters for assessing lipid metabolism disorders are important for atherogenesis and the occurrence of cardiovascular disease. Material and Methods: The study was designed as a prospective longitudinal study, meant to assess the risk of cardiovascular disease, which included initial measurement of lipid status, CRP, and BMI, and repeated measurement after the DASH diet and exercise. It was conducted on a sample of 60 female respondents. Results: Following the WHO categorization of BMI, the study found that 62% of respondents were overweight, 26% were obese, and only 12% of respondents were at ideal body weight. After the DASH diet and exercise program, the average value of BMI was M = 27.02. Analysis of the CASTELLI 1 index found high values (M = 5.3) in 95.9% of respondents, which indicates a high risk of CVD. The study results indicate that the average values of cholesterol, triglycerides, and LDL-C decreased significantly after two months of adherence to the DASH diet and exercise. Spearman's rank correlation coefficient indicated the existence of a positive relationship between the CASTELLI 1 index and total cholesterol, triglycerides, and HDL-C. In the initial analysis, CRP had a high value (M = 10 mg/L); after the program, the CRP value decreased to M = 4 mg/L, and a significant negative correlation (p < 0.01) was observed between CRP and HDL-C, indicating that the value of HDL-C, a lipoprotein protective of blood vessels, increased while CRP decreased after two months of DASH diet and exercise. Conclusion: With this research, we aim to draw attention to the importance of promoting healthy lifestyles and creating adequate risk assessment models with a well-developed strategy that will include anthropometric, laboratory, and other multidisciplinary aspects to combat cardiovascular disease.
Introduction
Cardiovascular diseases are chronic non-communicable diseases and one of the most important causes of death. [1] According to the study on the state of health of the adult population conducted by the Institute of Public Health of the Federation of BiH (2016), there is an unfavourable trend of all risk factors for morbidity and mortality from cardiovascular disease. [2] Therefore, a strategy for prevention and early detection is needed; recommended target values are LDL (low-density lipoprotein) ≤ 3.0 mmol/L and triglycerides ≤ 1.7 mmol/L. [6] The PROCAM heart study found that the most serious risk factor was an LDL value of 5 mmol/L and that triglyceride levels greater than 3 mmol/L increased the incidence of myocardial infarction. [7] The same study also pointed to the importance of total cholesterol levels greater than 5.3 mmol/L with increased triglyceride levels in women, indicating a threefold higher risk of developing CVD. [7,8] It has also been proven that the atherogenic index parameter is used in the risk assessment for CVD development, the calculation of which has predictive significance. According to frequency, atherogenic risk factors leading to cardiovascular disease can be classified into frequent and uncommon factors. [8,9] Common risk factors include increased levels of total cholesterol, LDL, and triglycerides, decreased HDL (high-density lipoprotein), improper diet and consumption of atherogenic foods, physical inactivity, hyperglycemia, gender, age, smoking, and genetic predisposition. On the other hand, uncommon risk factors relate to the composition and size of lipoproteins and other parameter values (glucose, inflammatory markers, homocysteine, etc.). It is estimated that about 60% of the coronary circulation is covered with atherosclerotic plaques in most people during life, even when other risk factors such as hypertension, smoking, obesity, and genetic predisposition are not present. [10] Obesity strongly impacts lipoprotein metabolism, regardless of ethnic group. Increased BMI is considered a determinant of higher triglyceride and LDL-C concentrations and decreased HDL-C concentrations.
In all countries of the European Union, there are national programs for the prevention of cardiovascular diseases, and they include various programs of physical activity, regulation of diet, prevention and treatment of hyperlipidemia and hypertension. [11] The objectives of our study were to examine the values of lipid status, CRP and BMI before and after the application of the DASH diet and five-day exercise.
Methods
The study was designed as a prospective longitudinal study, which lasted from March to November 2018. The research was designed to assess the risk of cardiovascular disease, which included initial measurement of lipid status, CRP, and BMI, and then repeated measurement after the DASH diet and exercise. Laboratory analysis of lipid status and CRP was performed using spectrophotometric and immunochemical methods. The risk was estimated empirically using the CASTELLI 1 risk index, which was obtained through the formula: CASTELLI 1 index = total cholesterol / HDL-C. In order to detect the degree of nutrition, we calculated BMI. After measuring body height and body weight, we obtained the BMI value via the WHO formula: BMI = body weight (kg) / height (m)². After the initial measurement, we recommended to all subjects a two-month application of the DASH diet and daily exercise, which consisted of breathing exercises (abdominal and thoracic), coordination, equilibrium, and balance exercises, exercises to improve circulation, stretching exercises to strengthen the upper and lower extremities, and exercises for fine and gross motor skills. In addition, all respondents were educated about the benefits and the principle of keeping a diet diary (https://www.nhlbi.nih.gov/files/docs/public/heart/dash_brief.pdf), and about the body's needs for a healthy diet and exercise.
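As a worked example of the two formulas above, the short sketch below computes the CASTELLI 1 index and BMI for a hypothetical respondent; the input values are illustrative, chosen near the study averages, and are not study data.

def castelli_1_index(total_cholesterol_mmol_l: float, hdl_c_mmol_l: float) -> float:
    """CASTELLI 1 risk index: total cholesterol divided by HDL-C."""
    return total_cholesterol_mmol_l / hdl_c_mmol_l

def bmi(weight_kg: float, height_m: float) -> float:
    """WHO body mass index: weight (kg) divided by height squared (m^2)."""
    return weight_kg / height_m ** 2

# Hypothetical respondent close to the reported averages.
print(f"CASTELLI 1: {castelli_1_index(7.0, 1.3):.1f}")  # ~5.4, a high-risk value
print(f"BMI:        {bmi(76.6, 1.637):.1f}")            # ~28.6, overweight range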
Respondents
The research was conducted on a sample of 60 female respondents who are beneficiaries of the health program of the Center for Health Promotion and Improvement "Generation" Stari Grad. Consent for the research was obtained from the Center and from all respondents involved in the study. The study included subjects who did not suffer from cardiovascular disease, did not use therapy to regulate lipid status, and had blood pressure values > 140/90 mmHg. We excluded from the study persons who suffered from cardiovascular diseases, who used therapy to regulate lipid status, and subjects who had blood pressure values < 140/90 mmHg.
Statistical analysis
Data were collected and entered into the database in the IBM SPSS Statistics program. Data analysis was performed in the same program, using the Kolmogorov-Smirnov and Shapiro-Wilk tests to test the normality of the distribution. In addition, nonparametric tests were used: the Spearman test was used to prove correlation, and the Wilcoxon test was used to examine the differences between the parameters. Statistical significance was established at the 95% confidence level, i.e., for values of p < 0.05.
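A minimal sketch of the same analysis pipeline, using SciPy rather than SPSS (an assumption made purely for illustration; the paired data arrays are placeholders, not the study measurements):

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
bmi_before = rng.normal(28, 3, 60)               # placeholder paired data, n = 60
bmi_after = bmi_before - rng.normal(1, 0.5, 60)  # simulated post-program values

# Normality check (Shapiro-Wilk), paired comparison (Wilcoxon), correlation (Spearman).
print("Shapiro-Wilk p =", stats.shapiro(bmi_before).pvalue)
print("Wilcoxon p     =", stats.wilcoxon(bmi_before, bmi_after).pvalue)
rho, p = stats.spearmanr(bmi_before, bmi_after)
print(f"Spearman rho = {rho:.2f}, p = {p:.3g}")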
Results
According to the criteria, the research included 60 female respondents who joined the survey. The analysis of the data established that the respondents were aged 50-65 years. During the initial anthropometric measurement, the average height of the subjects was x̄ = 163.7 cm, and the average body weight was x̄ = 76.6 kg. The median value of BMI in the subjects was M = 28, which indicates overweight. Following the WHO (World Health Organization) categorization of BMI, the study found that 62% of respondents were overweight, 26% were obese, and only 12% of respondents had ideal body weight. After the DASH diet and exercise program, the median value of BMI was M = 27. The Wilcoxon test revealed a statistically significant difference (p < 0.05) between the initial BMI and the BMI measured after two months.

Graph 1. Measurement of BMI before and after two months of DASH diet and exercise.
By analyzing and categorizing the values of the CASTELLI 1 index, high values of M = 5 were observed in 95.9% of respondents, which indicates a high risk of developing cardiovascular diseases. The maximum value of the CASTELLI 1 index in the study is 10 (Table 1). The study results indicate that the average value of cholesterol, after two months of adherence to the DASH program of diet and exercise, decreased significantly compared to the initial measurement and was M = 4 mmol/L. Using a nonparametric Wilcoxon test, a statistically significant difference between the two measurements was confirmed at the level of p < 0.01. After two months of adherence to the program, triglyceride values were also significantly lower than the initial value, at M = 1.5 mmol/L; again, the Wilcoxon test confirmed a statistically significant difference between the two measurements at the level of p < 0.01. The effectiveness of the DASH diet and exercise is also indicated by the fact that the value of HDL-C increased after two months of application of the program, to a median of M = 2 mmol/L (Graph 2). LDL-C in the initial analysis was M = 6 mmol/L, while after two months of the DASH diet and exercise, it was M = 1.5 mmol/L.
Table 3. Spearman's correlation coefficient for cholesterol values.

Parameter: cholesterol after two months of DASH diet and exercise vs. initial value of total cholesterol: r = -0.53 (p < 0.01).

Table 4. Spearman's correlation coefficient for triglyceride values.

Parameter: triglycerides after two months vs. initial triglyceride value: r = -0.69 (p < 0.01).

Table 5. Spearman's correlation coefficient of the CRP and HDL-C parameters.

There was a negative correlation between the initial triglyceride value and the value after two months of DASH diet and exercise (r = -0.69; p < 0.01) (Table 4). A negative correlation was also shown with the second CRP value, after two months of DASH diet and exercise (r = -0.42; p < 0.01) (Table 5).
Discussion
According to the 2016 report of the FBiH Institute of Public Health on the population's health status, the leading morbidities in women are cardiovascular diseases, which are ultimately also the leading cause of mortality. [2] According to the inclusion criteria, this research included 60 female respondents who joined the survey. In a study by Dekker and associates on metabolic syndrome and the ten-year risk of CVD, the age group of respondents ranged from 50-75 years. [12] That study does not correlate with ours due to the presence of metabolic syndrome in this age group.
In the work of Anderson and associates, high blood pressure was present in 52% of respondents around the age of 73, and 72% of respondents had blood pressure values higher than the reference value. [13] This work is comparable with our study, which included subjects with blood pressure values > 140/90 mmHg.
The impact of the two-month DASH diet and exercise program on BMI is evident. The reduction of the endpoint BMI relative to baseline was statistically significant (p < 0.05), as confirmed by the Wilcoxon test. By categorizing the BMI values in this paper, it was found that 62% of respondents were overweight, 26% were obese, and only 12% had an ideal body weight. Estruch and associates, in their study, found that 45% of respondents were overweight (BMI > 25) and 47% were obese (BMI > 30). [14] Given the percentage of overweight respondents, that paper agrees with ours in terms of the increased representation of overweight people, although its number of respondents is higher than in our study. Obesity is closely linked to cardiovascular risk, as confirmed by a study establishing a link between anthropometric measurements and the risk of cardiovascular disease. [15,16] In the study by Blumenthal and associates, the effects of a dietary regimen (DASH) and aerobic exercise on changes in insulin and lipid levels were investigated. Subjects who combined the DASH diet with aerobic exercise lost weight and showed lower total cholesterol and triglycerides than the control group. [17] Considering the values of lipid status in this study, the CASTELLI 1 index in 95.9% of respondents indicated the presence of a risk of cardiovascular disease (M = 5). The maximum value in the study reached a degree with a very high risk of cardiovascular disease (M = 10). The Spearman correlation coefficient indicated a positive relationship between the CASTELLI 1 index and total cholesterol (r = 0.29; p < 0.05), the CASTELLI 1 index and triglycerides (r = 0.59; p < 0.01), and the CASTELLI 1 index and HDL-C (r = 0.79; p < 0.01), meaning that an increase in the value of one parameter is followed by an increase in the value of the other. In their paper, Bhardwaj S and associates cite the importance of the CASTELLI 1 index in the prediction of cardiovascular disease and the direct impact of the CASTELLI 1 index, together with cholesterol and triglyceride values, on cardiovascular disease. [18] In the work of the author Andić, cholesterol values were x = 3.86 mmol/L, triglycerides x = 1.64 mmol/L and glucose x = 5.44, but these values cannot be compared with our work because the sample consisted of respondents with an already established diagnosis of CVD. [19] In terms of lipid status, in this research, the initial cholesterol measurement showed increased values (M = 7 mmol/L), while after the second month of the DASH diet and exercise, cholesterol decreased (M = 4 mmol/L). The reduction of the cholesterol value relative to the initial value was statistically significant (p < 0.01), as confirmed by the Wilcoxon test. The Spearman correlation coefficient indicated a negative relationship between the initial cholesterol values and the cholesterol value after two months of applying the above program (r = -0.53; p < 0.01). A negative correlation between the parameters indicates that high baseline cholesterol values were accompanied by a decrease after two months of DASH diet and exercise.
A retrospective study by Wilson and associates demonstrated high triglyceride levels in 48% of subjects (x = 3.32 mmol/L). [20] That research cannot be directly compared with ours because it refers to a follow-up period of 12 years. The initial measurement of triglycerides in this study indicated increased values of M = 2 mmol/L, whereas the values after two months of the program were lower, at M = 1.5 mmol/L. The decrease in triglyceride values compared to the initial value was statistically significant (p < 0.01), as confirmed by the Wilcoxon test. Spearman's correlation coefficient indicated a negative relationship between the initial triglyceride values and the triglyceride value after two months of program application (r = -0.69; p < 0.01). A negative correlation between the parameters indicates that high initial triglyceride values were accompanied by a decrease in values after applying the DASH diet and exercise program.
According to Ridker P. and associates, the ratio of total cholesterol to HDL-C was a good predictor of cardiovascular disease. [21] In addition to the parameters for determining obesity, blood pressure and lipids, the authors cite high-sensitivity CRP as a parameter that also provides prognostic information. While the measurement of total cholesterol, low-density lipoprotein cholesterol (LDL-C) and high-density lipoprotein cholesterol (HDL-C) is recommended in most modern cardiovascular screening algorithms, CRP has further proven to be a predictive parameter in all groups. [21] In our study, CRP had a high value in the initial analysis (M = 10 mg/L), while after the implementation of the entire program the CRP value decreased to M = 4 mg/L. A significant negative correlation (p < 0.01) was observed between the values of CRP and HDL-C, indicating that the value of HDL-C, as a lipoprotein protective for blood vessels, increased while CRP decreased after two months of DASH diet and exercise.
Conclusion
Based on the study, we concluded that the initial lipid status values before the two-month program were high, and that after two months of the DASH diet and exercise they decreased significantly, returning within the reference intervals. Examination of BMI showed that baseline values before the program indicated overweight, and a significant reduction in BMI was found after two months of the DASH diet and exercise. The Castelli 1 index, a useful parameter for predicting the risk of cardiovascular diseases, showed the presence of risk in the subjects, which decreased at the follow-up determination. In addition, the inflammatory marker CRP decreased to reference values after the program's application and is considered a useful screening parameter in risk assessment. With this research, we want to draw attention to the importance of creating adequate risk-assessment models with a well-developed strategy that includes anthropometric, laboratory and other multidisciplinary aspects in order to combat cardiovascular disease. Our research also points to the significant role of creating guidelines and programs that will reduce lipid status and BMI values and promote healthy lifestyles as indispensable tools in the fight against cardiovascular disease. Further research on a larger number of respondents, with an expanded panel of laboratory parameters and non-laboratory aspects of risk assessment, is recommended.
Funding
This work did not receive any grant from funding agencies in the public, commercial, or not-for-profit sectors.
Design check against the construction code (DNV 2012) of an offshore pipeline using numerical methods
The production of oil and gas from offshore oil fields is nowadays more and more important. As a result of the increasing demand for oil, and with shallow-water reserves no longer sufficient, the industry is pushed to develop and exploit more difficult fields in deeper waters. In this paper, the new design code DNV 2012 is deployed to check an offshore pipeline for compliance with the requirements of this new construction code, using Bentley AutoPIPE V8i. The August 2012 revision of the DNV offshore standard DNV-OS-F101, Submarine Pipeline Systems, is supported by AutoPIPE version 9.6. This paper provides a quick walk-through for entering input data, analyzing and generating code compliance reports for a model with the piping code selected as DNV Offshore 2012. As seen in the present paper, the simulations comprise a geometrically complex pipeline subjected to various and variable loading conditions. At the end of the designing process, the engineer has to answer a simple question: is the pipeline safe or not? The pipeline set as an example has some sections that do not comply, in terms of size and strength, with the DNV 2012 offshore pipelines code. Obviously, those sections have to be redesigned in a manner that meets those conditions.
Introduction
The production of oil and gas from offshore oil fields is nowadays more and more important. As a result of the increasing demand for oil, and with shallow-water reserves no longer sufficient, the industry is pushed forward to develop and exploit more difficult fields in deeper waters [1].
Deepwater pipelines are used to carry oil and gas from wellheads and manifolds to platforms or to shore. Figure 1 shows a simple representation of a deep-water installation, with the flow lines on the seabed and the risers, a section of pipeline from the seabed to platforms or ships.
As a consequence of the extremely severe working conditions, the constructors of deep-water pipelines need tubular products with enhanced resistance to withstand all the loads that will be applied to the pipeline, both during its construction and in operation; among them: internal and external pressure, bending, fatigue, tension, compression, concentrated loads, and impact and thermal loads.
If a pipeline is not stable then it will move under the actions of waves and currents. This is a problem since the movement will cause bending stresses in the pipeline, which may then cause the pipe to fatigue and fail. Alternatively, it may cause damage to pipeline coatings, such as cracking of concrete [2]. Submarine pipeline stability is governed by the fundamental balance of forces between loads and resistances.
This approach to stability design of pipelines was incorporated into DNV's Rules for Submarine Pipeline Systems issued in 1976 and was the basis of design for many pipelines around the world [3].
It was known from experimental research that the hydrodynamic loads on a pipeline could be very much higher than in the DNV '76 model. In 1981, DNV's revised rules incorporated a much more realistic hydrodynamic model.
This created an anomaly: the new approach suggested that many of the existing pipelines designed to DNV '76 were unstable. However, annual surveys showed no evidence of a widespread problem. The explanation lay in the lateral resistance of a pipeline to movement also being very much higher than predicted by the simple model. It was shown experimentally that during a storm a pipeline undergoes small displacements under the action of wave forces, gradually digging itself into the seabed. The pipeline therefore had small soil berms on either side, providing increased resistance to movement and greater hydrodynamic shielding. The results of this research were incorporated into AGA's suite of stability design software, providing a state-of-the-art approach. The first-pass approach to pipeline stability is a simple force-balance model in two dimensions. It is the basis of the design methodology used in DNV '76 and '81 and in the AGA Level 1 stability software. In this paper, we deploy the new design code DNV 2012 to check an offshore pipeline for compliance with the requirements of this new construction code, using Bentley AutoPIPE V8i. The August 2012 revision of the DNV offshore standard DNV-OS-F101, Submarine Pipeline Systems, is supported by AutoPIPE version 9.6. This paper provides a quick walk-through for entering input data, analyzing and generating code compliance reports for a model with the piping code selected as DNV Offshore 2012.
Structure geometry selection
In order to input the geometry of the offshore pipeline, the Bentley AutoPIPE V8i software is used. The structure geometry shall be selected based on various requirements such as routing, sizing of the pipeline considering various process parameters, thermal design, etc. The pipeline is part of an offshore field development, as seen in figure 1 below [4]. The model contains a pipeline with two vertical legs and a buried horizontal pipe representing the pipeline resting on the seabed. The pipe has three segments: one at the end of Riser 2 (nominal diameter 200 mm), the second is Riser 2 itself, and the rest is the flowline with ND = 300 mm. The Riser 1 pipe has ND = 250 mm. The material of the pipes is CMN-415 steel (as per DNV 2012).
The load cases are as per the construction code, as follows:
- Operating pressure and temperature data for 3 'T' cases
- Earthquake loading cases: E1 and E2
- Wind loading cases: W1 and W2
- Wave loading cases: Wave 2 and Wave 3 (one case for accidental)
- User loads: U1 and U2 (interference loads, possibly from trawling)
- Soil properties: SND11A
Pressures and temperatures
The depth of the water is taken as 70 m and the external pressure exerted upon the pipe is calculated as a consequence. The fluid circulating inside the pipe follows three distinct cases:
Case 1 - pressure 0 MPa (r) and temperature 20°C, corresponding to the pipeline at rest with no fluid circulating inside.
Case 2 - pressure 1.379 MPa (r) and temperature 60°C, corresponding to the normal operation of the pipeline.
Case 3 - pressure 2.7579 MPa (r) and temperature 90°C, corresponding to the upset operating condition of the pipeline.
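As a back-of-the-envelope check of the external pressure at 70 m depth, the hydrostatic relation p = ρgh can be evaluated directly; AutoPIPE performs this internally. The seawater density below is an assumed standard value, and the net pressure differential shown is a simplistic sketch (internal gauge pressure minus external hydrostatic pressure).

```python
rho_sea = 1025.0   # kg/m^3, assumed seawater density
g = 9.81           # m/s^2
depth = 70.0       # m, water depth from the model

p_ext = rho_sea * g * depth / 1e6      # convert Pa to MPa
print(f"External pressure: {p_ext:.3f} MPa")   # ~0.704 MPa

# Net pressure differential for the three operating cases above
for case, p_int in [("Case 1", 0.0), ("Case 2", 1.379), ("Case 3", 2.7579)]:
    print(case, f"dp = {p_int - p_ext:+.3f} MPa")
```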
Soil properties
The model of soil is SND11A, which is a sandy type of soil (figure 2). The process of defining a buried piping system is a combination of user-defined piping points and internally generated (by AutoPIPE) soil points. The user only needs to define piping points for identifying the following critical parts of a buried piping system:
- as required by changes in the system geometry;
- for the specification of piping components (e.g. valves, reducers, flanges, anchors, etc.);
- where soil properties change;
- where the maximum spacing (between the internally generated soil points) defined for the current soil identifier is to be changed.
Earthquake load cases
AutoPIPE can define a series of forces acting on a structure to represent the effect of earthquake ground motion. This method assumes that the structure responds in its fundamental mode. For this to be true, the structure must be low-rise and must not twist significantly when the ground moves. The acceleration is typically calculated from the natural period of the structure and applied to the mass of the structure to obtain a force. Static seismic loads are given in factors of gravity, g. As an example, if a static seismic acceleration of 0.5 g is applied on the x-axis, a force equal to half the system's weight is turned into a uniform load in the x-direction.
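A minimal sketch of that conversion follows: a seismic coefficient in g's times the system weight yields the equivalent uniform load. The pipe mass per metre is an assumed illustrative value, not taken from the model.

```python
g = 9.81                # m/s^2
mass_per_m = 120.0      # kg/m, assumed pipe + contents + coating mass per metre
seismic_coeff = 0.5     # static seismic acceleration of 0.5 g along the x-axis

weight_per_m = mass_per_m * g               # N/m, gravity load
u_load_x = seismic_coeff * weight_per_m     # N/m, equivalent uniform load in x
print(f"Equivalent uniform seismic load: {u_load_x:.0f} N/m")
```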
AutoPIPE supports the custom creation of these accelerations in the X-, Y- and Z-axes, or can generate accelerations automatically using, for instance, the ASCE 2010 code (figure 3).
Wave loads
The Load/Wave is defined inside the simulation to model the effect of ocean waves impacting a partially submerged piping system.
The following fields/parameters are provided in the Wave Load dialog: wave data name, wave type, load case, water elevation, water depth, water density, phase, wave height and period.
Buoyancy loads
The Load Buoyancy command enables us to model the piping system as partially or fully submerged in a fluid (usually sea water) by defining the height of the fluid (and its related properties) in which the piping system is submerged. The buoyant force applies an upward pressure on the system, effectively reducing the weight of the submerged piping. AutoPIPE includes the buoyancy load in the gravity load case (GR) for analysis.
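The bookkeeping AutoPIPE folds into the gravity case can be sketched as follows: the upward force per unit length on a fully submerged pipe equals the weight of displaced seawater. The outer diameter and dry weight below are assumed illustrative values.

```python
import math

rho_sea = 1025.0    # kg/m^3, assumed seawater density
g = 9.81            # m/s^2
od = 0.324          # m, assumed outer diameter (ND 300 pipe with coating)

# Buoyancy per unit length = weight of displaced water per metre of pipe
buoyancy_per_m = rho_sea * g * math.pi * od**2 / 4.0    # N/m, upward

pipe_weight_per_m = 1200.0                              # N/m, assumed dry weight
submerged_weight = pipe_weight_per_m - buoyancy_per_m   # N/m, effective weight
print(f"Buoyancy: {buoyancy_per_m:.0f} N/m, "
      f"submerged weight: {submerged_weight:.0f} N/m")
```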
Results and discussion
The goal of all the calculations is to identify whether or not there are sections of the offshore pipeline with poor behaviour under the load combinations set by the design standard.
The stress inside pipeline sections
Axial stresses act normal to the member section and are therefore normal stresses. For the analysed pipeline, these maximum stresses are shown in the figure below: the maximum values are in the range of 400 MPa, far above the allowable stresses imposed by the code (figure 6).
The displacements
The calculated displacements follow the load cases considered in the simulation.
For instance, for thermal loading case 3 with the upset conditions, the displacements are given in figure 7 below: for point A41, the displacement in the OX direction is 14 mm and in the OY direction 73 mm.
Mode shapes
The pipeline structure has its own natural frequencies and mode shapes. For instance, the first natural frequency is 1.23 Hz and its mode shape is given in figure 8 below:
Soil reactions
Under the various loads acting upon the pipeline, the soil opposes different reactions, mainly at the anchor points. For example, for the anchor point A32 near the critical zone, the maximum reaction is 9587 N/mm in the longitudinal direction, as seen in figure 9 below:
Conclusions
Offshore pipeline design is an intricate enterprise following very demanding design codes, since at stake is the integrity of multi-million-dollar investments in offshore oil and gas exploitation facilities. The rupture of a live oil pipeline can have disastrous effects on the environment and sea biota, with serious penalties coming from the regulatory authorities.
As seen in the present paper, the simulations comprise a geometrically complex pipeline subjected to various and variable loading conditions. At the end of the designing process, the engineer has to answer a simple question: is the pipeline safe or not?
The pipeline set as an example has some sections that do not comply, in terms of size and strength, with the DNV 2012 offshore pipelines code. Obviously, those sections have to be redesigned in a manner that meets those conditions.
The response of dual‐species bacterial biofilm to 2% and 5% NaOCl mixed with etidronic acid: A laboratory real‐time evaluation using optical coherence tomography
Abstract Aim The addition of etidronic acid (HEDP) to sodium hypochlorite (NaOCl) could increase the antibiofilm potency of the irrigant, whilst maintaining the benefits of continuous chelation. Studies conducted so far have shown that mixing HEDP with NaOCl solutions of relatively low concentration does not compromise the antibiofilm efficacy of the irrigant. However, the working lifespan of NaOCl may decrease, resulting in a reduction of its antibiofilm efficacy over time (efficiency). In this regard, continuous irrigant replenishment needs to be examined. This study investigated the response of a dual-species biofilm when challenged with 2% and 5% NaOCl mixed with HEDP for a prolonged timespan and under steady laminar flow. Methodology Dual-species biofilms comprised of Streptococcus oralis J22 and Actinomyces naeslundii T14V-J1 were grown on human dentine discs in a constant depth film fermenter (CDFF) for 96 h. Biofilms were treated with 2% and 5% NaOCl, alone or mixed with HEDP. Irrigants were applied under steady laminar flow for 8 min. The biofilm response was evaluated by means of optical coherence tomography (OCT). Biofilm removal, biofilm disruption, the rate of biofilm loss and disruption as well as bubble formation were assessed. One-way ANOVA, Wilcoxon's signed-rank test and the Kruskal–Wallis H test were performed for statistical analysis of the data. The level of significance was set at α ≤ .05. Results Increasing NaOCl concentration resulted in increased biofilm removal and disruption, a higher rate of biofilm loss and disruption and increased bubble formation. Mixing HEDP with NaOCl caused a delay in the antibiofilm action of the latter, without compromising its antibiofilm efficacy. Conclusions NaOCl concentration dictates the biofilm response irrespective of the presence of HEDP. The addition of HEDP resulted in a delay in the antibiofilm action of NaOCl. This delay affects the efficiency, but not the efficacy, of the irrigant over time.
Keywords: biofilm, etidronate, HEDP, irrigants, optical coherence tomography, sodium hypochlorite
INTRODUCTION
The role of biofilms in the development and perpetuation of apical periodontitis has been established (Ricucci & Siqueira, 2010). Recently, biofilm viscoelasticity has been acknowledged as a virulence factor; it facilitates biofilm survival by influencing its response to mechanical and chemical biofilm stresses (Peterson et al., 2015). In this respect, investigating in vitro the capacity of endodontic irrigants to remove biofilms with certain viscoelastic features bears clinical relevance.
Biofilm removal can be assessed in vitro by means of optical coherence tomography (OCT). OCT provides quantitative measurements of biofilm removal, whilst revealing the biofilm structure at the mesoscale level (Busanello et al., 2019; Hou et al., 2019; Wagner & Horn, 2017). Indeed, OCT biofilm visualization has led to the identification of distinct biofilm layers of different cohesive and adhesive strength that seem to form when biofilms are exposed to several biocides (Petridis et al., 2019a, 2019b). This has clear clinical implications, as by accumulating data on the cohesive and adhesive failure patterns of biofilms subjected to various endodontic irrigants, more effective strategies aiming at maximum biofilm removal can be devised. However, the studies conducted so far using OCT analysis have only assessed end-point outcomes (Busanello et al., 2019; Petridis et al., 2019a, 2019b). End-point outcomes provide a snapshot of the biofilm status at an arbitrary time point specified by the investigators. Consequently, the dynamic response elicited by the chemical and mechanical action (flow) of the irrigants is not explored at all.
Sodium hypochlorite (NaOCl) is effective against biofilms as it both kills the biofilm microorganisms and breaks down the biofilm polymeric matrix (Chávez de Paz et al., 2010;Tawakoli et al., 2017). NaOCl concentration dictates biofilm removal, as it has been shown that a short-term static administration of 5% NaOCl enhanced both biofilm disruption and dissolution of bacterial dense biofilms compared to 2% NaOCl (Petridis et al., 2019b). NaOCl concentration determines the release of available free hypochlorite anions (OCl − ) from NaOCl (Moorer & Wesselink, 1982). A NaOCl solution of higher concentration provides more OCl − that can diffuse into the bulk biofilm and break down the glycosidic bonds present in the polymeric biofilm matrix (Tawakoli et al., 2017). A limitation of the static NaOCl administration (Petridis et al., 2019b) is that the reactive components of NaOCl are progressively deactivated when in contact with organic substrate, such as the biofilm (Baker, 1947;Haapasalo et al. 2000;Moorer & Wesselink, 1982). Thus, the conclusions drawn are considered valid only for the short time periods indicated in the studies and the clinically relevant convective interaction of NaOCl with the biofilm over time, under continuous replenishment is disregarded (Pereira et al. 2021).
Ethylenediaminetetraacetic acid (EDTA) is invariably included in root canal irrigation regimens aiming at enhanced cleanliness and disinfection of the root canal system (Basrani & Haapasalo, 2012;Zehnder, 2006). However, the loss of the active chlorine when NaOCl and EDTA interact (Grawehr et al., 2003), raises serious concerns about the dissolving and antimicrobial efficacy of NaOCl, should the irrigants be mixed during treatment (Rossi-Fedele et al., 2012;Zehnder et al., 2005). In practice, this limits the use of EDTA as a final flush, as the alternating use of EDTA and NaOCl results in impractical irrigation protocols The use of etidronic acid (1-hydroxyethane 1,1-diphosphonic acid or HEDP) seems to overcome this problem. HEDP is a weak chelator that can be mixed with NaOCl without compromising the antimicrobial/ antibiofilm and tissue dissolving properties of the latter (Arias-Moliz et al., 2014;Arias-Moliz et al., 2015;Arias-Moliz et al., 2016;Giardino et al., 2019;Morago et al., 2016;Morago et al., 2019;Neelakantan et al., 2015;Tartari et al., 2015;Tejada et al., 2019;Ulusoy et al., 2018). In addition, continuous chelation with mixtures of NaOCl/ HEDP during instrumentation results in less debris and smear layer accumulation (Paqué et al., 2012) and causes less dentine demineralization compared to NaOCl/EDTA (Tartari et al., 2018). Importantly, a non-inferiority clinical trial has provided preliminary evidence on the lack of any adverse impact of HEDP on the clinical efficacy of NaOCl, a finding further substantiated by the microbiological results that revealed no distinct differences in the microbiota recovered from post-irrigation samples between NaOCl and NaOCl/HEDP (Ballal et al., 2019).
Laboratory studies investigating the antibiofilm efficacy of NaOCl/HEDP mixtures have focused on the short-term bacterial killing effects of mixtures containing relatively low NaOCl concentrations (1% or 2.5%) that are statically applied on mono-species biofilms. Despite their undisputed value, direct translation of their findings is difficult, as clinical parameters such as irrigant flow and replenishment ought to be taken into account. In addition, the strength of mono-species biofilms against hydrodynamic shear stresses is expected to differ considerably from that of multi-species biofilms (Paramonova et al., 2009), which may skew the data on biofilm removal. Lastly, in order to examine events occurring in the entire biofilm depth, rather than evaluating bacterial killing only in the top biofilm layer, the use of OCT is more suitable than the frequently employed analysis of images acquired with confocal laser scanning microscopy.
The aim of this study was to investigate the prolonged effect of 2% and 5% NaOCl, mixed with a commercial CE-marked HEDP product for endodontic usage, on a dual-species biofilm comprised of clinical isolates of the Streptococcus oralis and Actinomyces naeslundii bacterial species. The biofilm response was expressed in terms of quantifiable biofilm disruption and dissolution, whilst the real-time monitoring of the response allowed for the assessment of the antibiofilm efficiency of the solutions tested over time. The working hypothesis was that mixtures of HEDP and increased NaOCl concentration would lead to superior and faster biofilm disruption and dissolution, based on accumulated evidence indicating that EDTA, higher NaOCl concentration and extended exposure have a considerable impact on this biofilm model (Busanello et al., 2019; Petridis et al., 2019a, 2019b).
MATERIALS AND METHODS
The manuscript of this laboratory study has been written according to Preferred Reporting Items for Laboratory studies in Endodontology (PRILE) 2021 guidelines (Nagendrababu et al., 2021). The PRILE 2021 flowchart summarizes the key steps in reporting the present laboratory study (Appendix S1).
Bacterial strains and growth conditions
Clinical bacterial isolates Streptococcus oralis J22 and Actinomyces naeslundii T14V-J1 were streaked on blood agar plates and incubated in an aerobic incubator, at 37°C, for 24 h, and in an anaerobic incubator, at 37°C, for 48 h, respectively. A single colony was used to inoculate separate glass tubes containing 10 ml of modified brain heart infusion broth (BHI) (37 g/L BHI, 1.0 g/L yeast extract, 0.02 g/L NaOH, 0.001 g/L Vitamin K1, 5 mg/L L-cysteine-HCl, pH 7.3) (BHI, Oxoid Ltd.) (pre-cultures). S. oralis were cultured in an aerobic incubator, at 37°C, for 24 h and A. naeslundii in an anaerobic incubator, at 37°C, for 48 h. Following, pre-cultures were mixed with 190 ml of fresh BHI and incubated aerobically for S. oralis and anaerobically for A. naeslundii for another 16 h (main cultures). Next, bacteria were harvested by means of centrifugation (6500 g), with two washing steps of the bacterial pellets with sterile adhesion buffer (0.147 g/L CaCl 2 , 0.174 g/L K 2 HPO 4 , 0.136 g/L KH 2 PO 4 , 3.728 g/L KCl, pH 6.8) in between (van der Mei et al., 2008). Bacterial pellets were finally suspended in 10 ml of sterile adhesion buffer and sonicated intermittently in iced water for three times 10 s at 30 W (Vibra cell model 375, Sonics and Materials Inc.) to break bacterial chains. Bacterial concentrations were determined by counting using a Bürker-Türk counting chamber (Marienfeld-Superior). Both bacterial suspensions were diluted in 200 ml of adhesion buffer resulting in a dual-species suspension containing 6 × 10 8 bacteria/ ml for S. oralis and 2 × 10 8 bacteria/ml for A. naeslundii.
Biofilm growth
To ensure reproducible and standardized development of bacterial cell-dense biofilms, a constant depth film fermenter (CDFF) was used (Busanello et al., 2019; Kinniment et al., 1996; Petridis et al., 2019a, 2019b; Rózenbaum et al., 2017). The CDFF was equipped with 15 sample holders. Each holder contained 5 height-adjustable platforms. Each platform could accommodate 1 saliva-coated dentine disc that served as the substrate for biofilm growth. Dentine discs were prepared from the crowns of freshly extracted human molars. The use of extracted teeth was approved for research purposes by the Institutional Review Board of the University Medical Center Groningen. Based on the condition that the donors did not participate in any other part of the experimental protocol, the study was judged as not falling under the scope of the Medical-Scientific Act for research with humans. A diamond-coated core drill (6 mm, CARAT N.V.) was used to cut out dentine cylinders of 5 mm diameter, from which dentine discs of 2-mm thickness were obtained with the aid of a water-cooled diamond blade (IsoMet, Diamond Wafering blades 102 × 0.3 mm, Buehler) mounted in a circular cutting machine. The dentine discs were treated with 17% EDTA (Pulpdent) for 3 min for smear layer removal in a sonication bath and subsequently were autoclaved (121°C, 20 min). For the saliva coating, freeze-dried whole saliva was used. Briefly, human whole saliva from 20 healthy volunteers of both sexes was collected into ice-chilled Erlenmeyer flasks after stimulation induced by chewing Parafilm® (Pechiney, Plastic Packaging) (van der Mei et al., 2008). All volunteers gave their informed consent for saliva donation, in agreement with the rules set out by the Institutional Review Board of the University Medical Center Groningen, Groningen, The Netherlands. After the saliva was pooled and centrifuged twice (10 000 g, 15 min, 4°C), phenylmethylsulfonyl fluoride was added to a final concentration of 1 mM as a protease inhibitor. Afterwards, the solution was centrifuged again, dialysed (24 h, 4°C) against demineralized water and freeze-dried for storage. The lyophilized saliva was dissolved in 30 ml of adhesion buffer (1.5 g/L), stirred for 2 h and centrifuged at 6500 g, 10°C, for 5 min. The dentine discs were exposed to the reconstituted saliva under static conditions, at 4°C, for 14 h. It has to be noted that the lyophilized saliva was not submitted to any sterilization process; saliva lyophilization does not guarantee sterilization. Nevertheless, prior to lyophilization the saliva is centrifuged twice to remove any micro-sized debris, including bacterial cells, which decreases the bacterial load considerably. After salivary protein adsorption, the substrate is inoculated with a large number of bacterial cells, which will eventually predominate over any salivary bacterial cells present on the surface. Thus, we expect no (or minimal) interference with the formation of the dual-species CDFF biofilms (long-term observations from studies conducted in our laboratory have never raised any concerns related to the reconstitution of the freeze-dried saliva with buffer and the overnight saliva conditioning of the HA discs). After placing the saliva-coated dentine discs on each platform, the height was set at a 250 μm distance between the disc and the rim of the holder, thus allowing for the development of biofilms of standardized thickness (250 μm). Two hundred millilitres of the dual-species bacterial suspension were used to inoculate the CDFF.
Inoculation was performed dropwise, at a rate of 1.67 ml/min, whilst the CDFF table was slowly rotating. After inoculation, table rotation was stopped to allow for further bacterial adhesion onto the dentine discs. Finally, rotation was resumed and the biofilms were grown under continuous supply of modified BHI at a rate of 45 ml/h, at 37°C, for 96 h. Before proceeding to biofilm treatment with the irrigants, the thickness of each sample was measured with the aid of OCT. Only samples reaching the thickness of 250 μm (pre-set height in the CDFF holders) were used in the experiment.
Preparation of irrigant solutions
Four irrigant solutions were used to challenge the biofilm, namely 2% NaOCl, 2% NaOCl combined with HEDP (DualRinse® HEDP, Medcem), 5% NaOCl and 5% NaOCl combined with HEDP. Prior to the experiments, iodometric titration was carried out to determine the concentration of the stock NaOCl solution (Sigma-Aldrich), and the desired NaOCl concentrations were prepared by diluting the stock NaOCl with demineralized water; the NaOCl solutions used in the experiments had a pH of 12. Finally, for the preparation of the NaOCl/HEDP irrigant solutions, 4.5 g of DualRinse HEDP was mixed with 50 ml of either 2% or 5% NaOCl for 2 min. After mixing, the solution was drawn back into a 50-ml polypropylene syringe with a Luer Lock opening and used immediately.
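The dilution step above follows the standard relation C1·V1 = C2·V2. A minimal sketch is given below; the stock concentration is an assumed placeholder, since in practice it is whatever the iodometric titration returns.

```python
def stock_volume_needed(c_stock: float, c_target: float, v_target: float) -> float:
    """Volume of stock (ml) to dilute with demineralized water up to v_target ml."""
    return c_target * v_target / c_stock

c_stock = 10.0  # % available chlorine, assumed value from iodometric titration
for c_target in (2.0, 5.0):
    v_stock = stock_volume_needed(c_stock, c_target, 100.0)
    print(f"{c_target}% NaOCl: {v_stock:.1f} ml stock + "
          f"{100 - v_stock:.1f} ml demineralized water")
```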
Biofilm treatment, evaluation and outcome measures
Four experimental groups, each representing one of the irrigant solutions, were formed. Nine biofilm-carrying dentine discs (independent samples) were submitted to treatment with each irrigant (N per group = 9). Three independent experiments were carried out, during which 3 independent biofilm-carrying dentine discs from each irrigant group were treated with the corresponding irrigant (N total = 36). Sample size was determined based on findings from preliminary investigations and data previously published (Busanello et al., 2019; Petridis et al., 2019a, 2019b). More specifically, the following key parameters were imported into a dedicated tool used to compute statistical power analyses and required sample size (G*Power 3.1.9.7): effect size f = 0.625 (calculated based on the minimum mean difference in the primary outcome, namely biofilm removal, set at 25%, and the standard deviation (SD) σ within each group set at 20%), α value = .05, power = 0.9 and number of groups = 4. The biofilm-carrying dentine discs were placed in a parallel plate flow chamber (PPFC), letting only the biofilm on top of the dentine disc be exposed to the bulk of the irrigant. Irrigant was passed through the PPFC using a peristaltic pump at a flow rate of 0.05 ml/s. During irrigation, 2D real-time cross-sectional recordings were acquired by means of optical coherence tomography (OCT, Thorlabs). Recordings were taken in 2 separate time intervals, named Phases I and II. In Phase I, the short-term effect of the irrigant on the biofilm was recorded (0-180 s exposure). In Phase II, the long-term effect of the irrigant on the biofilm was recorded (300-480 s exposure). The time intervals were chosen based on preliminary experiments on biofilms exposed to the plain NaOCl solutions, showing a high NaOCl activity in the first 180 s that gradually decreased, until it plateaued after 480 s. Real-time imaging was performed at 25 frames/s (1 frame taken at 0.04 s intervals), by setting the field of view at a span of 5 mm and the refraction index at 1.33. Each acquired OCT biofilm image represents a series of consecutive xz-plane images taken along the diameter of the circular sample (the so-called 'OCT B-scan' optical cross-section), that is, a series of axial intensity profiles containing depth-resolved structural information along the longest distance from one end of the circular biofilm-carrying dentine disc to the other (5 mm) (Wagner & Horn, 2017) (Figure 1). Images were processed with ThorImage OCT software (Thorlabs).
Figure 1. Graphical illustration of the optical coherence tomography (OCT) real-time cross-sectional imaging.
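The G*Power computation reported earlier in this section can be reproduced outside G*Power; a minimal sketch using statsmodels with the same inputs follows.

```python
from statsmodels.stats.power import FTestAnovaPower

# Same parameters as reported: f = 0.625, alpha = .05, power = 0.9, 4 groups
n_total = FTestAnovaPower().solve_power(
    effect_size=0.625, alpha=0.05, power=0.9, k_groups=4
)
print(f"Required total sample size: {n_total:.1f}")  # ~36, i.e. 9 discs per group
```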
The open source image processing package Fiji was used to analyse the cross-sectional images acquired during the two phases of the OCT recordings. The image stacks, consisting of 3800 images with a resolution of 1000 × 376 pixels, were reduced to 380 measurements.
This yielded a data point for every 0.4 s. A multilevel Otsu threshold was used to segment biofilm from the background (Liao et al., 2001; Otsu, 1979), as previously described (Busanello et al., 2019; Petridis et al., 2019a, 2019b). This resulted in the identification of different layers within the biofilm, namely a layer exhibiting lower greyscale pixel intensity (that easily detaches from the bulk of the biofilm, hereafter called the disrupted layer) and a layer exhibiting higher greyscale pixel intensity (undisturbed and firmly attached to the dentine disc, hereafter called the coherent layer). The total number of pixels measured after background noise subtraction accounted for the total biofilm present. The number of pixels measured within the disrupted and coherent layers, obtained after background noise subtraction and multilevel thresholding, accounted for the disrupted and coherent biofilm present, respectively. The outcome measures used to assess the biofilm response to the different irrigant solutions were the following:
(i) Percentage total biofilm at each measurement point t_x:
% total biofilm (t_x) = [total biofilm present (t_x) / total biofilm present (t_0 = 0 s)] × 100,
where t_0 = 0 s is defined as the time point at which interaction between the irrigant and the biofilm takes place, namely immediately after the introduction of the irrigant solution into the parallel plate flow chamber.
(ii) Percentage disrupted biofilm layer at each measurement point t_x:
% disrupted biofilm layer (t_x) = [disrupted biofilm layer (t_x) / total biofilm present (t_x)] × 100.
(iii) Rate of percentage total biofilm loss (%/s), indicated by the percentage total biofilm loss over time, normalized against the starting point t_0 = 0 s (baseline measurement).
(iv) Rate of percentage disrupted biofilm forming (%/s), indicated by the percentage disrupted biofilm present over time, normalized against the starting point t_0 = 0 s (baseline measurement).
(v) Number of bubbles formed during Phase I (0-180 s) and Phase II (300-480 s) (OCT slide-by-slide imaging). A bubble was counted as an 'event' when it could be visualized from the moment of its initiation until its rupture (before the end of the observation period) or when it was visible throughout the observation period.
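A minimal sketch of this image-analysis step is given below, assuming a grayscale OCT B-scan array and assuming, per the description above, that the lower-intensity foreground class corresponds to the disrupted layer and the higher-intensity class to the coherent layer. The function names are illustrative, not the Fiji pipeline itself.

```python
import numpy as np
from skimage.filters import threshold_multiotsu

def segment_biofilm(bscan: np.ndarray):
    """Multilevel Otsu split into background (0), disrupted (1), coherent (2)."""
    thresholds = threshold_multiotsu(bscan, classes=3)
    labels = np.digitize(bscan, bins=thresholds)
    disrupted = np.count_nonzero(labels == 1)  # lower-intensity biofilm pixels
    coherent = np.count_nonzero(labels == 2)   # higher-intensity biofilm pixels
    return disrupted, coherent

def outcome_measures(bscan_t0: np.ndarray, bscan_tx: np.ndarray):
    """Outcome (i) and (ii): % total biofilm and % disrupted layer at t_x."""
    d0, c0 = segment_biofilm(bscan_t0)
    dx, cx = segment_biofilm(bscan_tx)
    pct_total = 100.0 * (dx + cx) / (d0 + c0)   # relative to t_0 baseline
    pct_disrupted = 100.0 * dx / (dx + cx)      # share of biofilm at t_x
    return pct_total, pct_disrupted
```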
Statistical analysis
Statistical analysis was performed using the R statistical package (version 3.6.3).
Irrigant effect over time on biofilm removal
Biofilms exposed to 2% NaOCl demonstrated an increase in the percentage total biofilm within the first 60 s. This increase remained stable throughout Phase I, whilst a decline became evident during Phase II. The percentage total biofilm declined below the starting point of total biofilm only in the last 60 s of Phase II (Figure 2). The percentage total biofilm after exposure to 2% NaOCl/HEDP never declined below the starting point of total biofilm. On the contrary, an increase in percentage total biofilm occurred within the first 30 s of exposure, which remained stable over time (Figure 2). The percentage total biofilm after exposure to both 5% NaOCl and 5% NaOCl/HEDP showed a decline below the starting point of total biofilm at each time point measured (Figure 2). For both irrigants, a significant decline in percentage total biofilm was noted only at the end of Phase II (480 s), as compared to the starting point (Table 1). Comparing the percentage total biofilm present after the irrigation procedure, 5% NaOCl solutions, with or without HEDP, significantly decreased the percentage total biofilm in comparison to 2% NaOCl solutions, with or without HEDP. Between 2% NaOCl and 2% NaOCl/HEDP, as well as between 5% NaOCl and 5% NaOCl/HEDP, no significant differences in percentage total biofilm were noted at any time point measured (Table 1).
Table 1. Percentage total biofilm present over time and comparisons within (horizontal) and between (vertical) irrigant groups. Note: one-way repeated measures analysis of variance (rANOVA) with Bonferroni post hoc pairwise analysis (horizontal comparisons) and one-way analysis of variance (ANOVA) with Tukey's post hoc pairwise analysis (vertical comparisons) were performed; the same small letters (horizontal direction) and the same capital letters (vertical direction) indicate significant differences between the respective groups (p ≤ .05).
Irrigant effect over time on biofilm disruption
For all irrigants applied, biofilm disruption was evidenced already from the starting point t_0 = 0 s (the first point of interaction between irrigant and biofilm after introduction of the irrigant into the parallel plate flow chamber). Within the first 180 s, both 5% NaOCl and 5% NaOCl/HEDP had disrupted a significant amount of biofilm, which was not the case for 2% NaOCl and 2% NaOCl/HEDP. For the rest of the observation period, no further disruption was caused by 5% NaOCl and 5% NaOCl/HEDP. Two percent NaOCl started causing significant biofilm disruption after 300 s, whilst 2% NaOCl/HEDP elicited significant biofilm disruption only towards the end of the observation period (480 s) (Table 2). Comparing biofilm disruption between the irrigants applied, 5% NaOCl and 5% NaOCl/HEDP started disrupting significantly more biofilm even by the time they were introduced into the parallel plate flow chamber (0 s), keeping the same significant disruptive potential for 300 s, compared to 2% NaOCl and 2% NaOCl/HEDP. At the end of the observation period (480 s), 2% NaOCl/HEDP had caused the least biofilm disruption, without any significant difference compared to the other irrigants (Table 2).
Figure 2. Overview of the response of the biofilm to the irrigant solutions over time. Real-time measurement data points were binned into 30 s intervals. The black line represents the percentage total biofilm, green bars the percentage disrupted biofilm and purple bars the percentage coherent biofilm at each time interval.
Rate of biofilm loss
During the first 180 s (Phase I), 5% NaOCl decreased the percentage total biofilm the fastest, at a rate significantly higher than for 2% NaOCl and 2% NaOCl/HEDP and considerably higher than 5% NaOCl/HEDP. Five percent NaOCl/HEDP decreased the percentage total biofilm faster only when compared to 2% NaOCl during Phase I. During the last 180 s (Phase II), all irrigant solutions decreased the percentage total biofilm faster compared to 2% NaOCl/HEDP (Figure 3).
Rate of biofilm disruption
During the first 180 s (Phase I), 5% NaOCl and 5% NaOCl/ HEDP induced biofilm disruption in a significantly higher rate compared to 2% NaOCl and 2% NaOCl/HEDP (Figure 4). During the last 180 s (Phase II), an overall decrease in the rate at which biofilm disruption occurred was noted, with all irrigant solutions disrupting biofilm at a similar, lower rate (no significant differences detected) (Figure 4).
Bubble formation
During the first 180 s (Phase I), exposure of biofilms to 5% NaOCl and 5% NaOCl/HEDP led to a significantly higher bubble count compared to 2% NaOCl and 2% NaOCl/HEDP (Table 3). A considerable reduction in bubble count in the
5% NaOCl and 5% NaOCl/HEDP groups during the last 180 s (Phase II) was noted, with 5% NaOCl exhibiting a significant reduction compared to Phase I. No significant differences were detected between the irrigant groups during Phase II. Overall, throughout the observation period, exposure of biofilms to 5% NaOCl, with and without HEDP, led to a significantly higher bubble count compared to 2% NaOCl, with and without HEDP.
Figure 3. Rate of percentage total biofilm loss (%/s) during Phase I (0-180 s) and Phase II (300-480 s) of exposure to the irrigant solutions. Significant differences between the irrigant solutions applied were determined using one-way analysis of variance (ANOVA) and are indicated by * for p ≤ .05, ** for p ≤ .01 and *** for p ≤ .001. Non-significant differences are indicated by NS.
Figure 4. Rate of percentage disrupted biofilm layer forming (%/s) during Phase I (0-180 s) and Phase II (300-480 s) of exposure to the irrigant solutions. Significant differences between the irrigant solutions applied were determined using one-way analysis of variance (ANOVA) and are indicated by * for p < .05, ** for p ≤ .01, *** for p ≤ .001 and **** for p ≤ .0001. Non-significant differences are indicated by NS.
DISCUSSION
The response of a bacterial cell-dense dual-species biofilm to relatively low and high NaOCl concentrations, either with or without the addition of HEDP, under laminar irrigant flow, was analysed. We showed that NaOCl concentration, irrespective of the addition of HEDP, was the driving factor of biofilm disruption and removal, enhancing efficacy and boosting the early efficiency of the irrigant. Remarkably, a slight volumetric increase (swelling) of the biofilm was observed in the first minutes following application of 2% NaOCl, with or without HEDP. On the other hand, 5% NaOCl solutions, with or without HEDP, caused a significant biofilm disruption immediately after coming in contact with the biofilms. Mixing NaOCl with HEDP resulted in a delayed antibiofilm effect of the irrigant, a finding that was more prominent in the 5% NaOCl solutions. Finally, a distinct pattern of bubble formation was evident between the 2% and 5% NaOCl groups. Biofilms exposed to 5% NaOCl demonstrated a rapid growth of numerous large bubbles compared to the small-sized and considerably fewer bubbles formed within the biofilms exposed to 2% NaOCl. NaOCl was administered dynamically in an attempt to approximate the flow dynamics of root canal irrigation. Generally, in vitro models measure biofilm removal after 'one-off' NaOCl application, thus neglecting the merit of irrigant refreshment and volume, which are a vital part of endodontic irrigation as it is clinically practised. In our model, a steady laminar flow was generated in a parallel plate flow chamber. The flow allowed for a more efficient transport of the NaOCl by the convective motion of the fluid (Incropera & Dewitt, 1990), ensuring refreshment, whilst retaining an adequate concentration of active OCl− (Macedo et al., 2010).
In addition, the constant flow rate of 0.05 ml/s over the 8 min duration of this experiment resulted in the administration of a total volume of 26.4 ml of irrigant. This is admittedly a realistic volume of irrigant applied during a root canal treatment. Considering that monitoring and quantifying biofilm removal under a constant irrigant flow rate over time was the aim of the study, the effect of volume on biofilm removal could not be examined simultaneously and the flow rate was kept constant. Given that irrigant volume plays a role in biofilm removal (Petridis et al., 2019a), we cannot exclude its confounding effect on our findings. This is an inherent limitation of the study, which could be resolved by manipulating the flow rate and measuring the biofilm response at different time intervals, with the risk, however, of introducing additional confounding factors into the model (flow rate).
The biofilm model employed in this study fails to reproduce the geometry and configuration met in artificial or natural root canal system models. Moreover, it does not take into consideration the chemo-mechanical process in its entirety. In that respect, the inability of a flowing irrigant solution to chemically or mechanically (shear stress) affect biofilms can be compensated to a large extent by mechanical instrumentation, irrigant agitation or the use of intracanal medicaments. Especially for mechanical debridement, its role in reducing the bacterial load is well-established (Byström & Sundqvist, 1981;Dalton et al., 1998;Siqueira et al., 2000) and, as far as the main canal is concerned, the scraping action of the instruments will result, theoretically, in nearly complete biofilm removal (the fact that areas of the main canal will remain untouched by the instruments should not be overlooked). Instrumentation has however less impact on biofilms residing in the finest anatomical spaces and irregularities of the root canal system. Bearing in mind the limitations of this study, it should be noted that clinical extrapolation of the findings needs balanced consideration. Admittedly, testing the irrigants in root canal models bearing lateral morphological features filled with biofilms and possibly dentine debris will provide more clinically relevant findings and calls for further investigation.
Nonetheless, this biofilm model consists of clinically relevant species (Chávez de Paz et al., 2003), has a well-defined architecture and strength (Busanello et al., 2019; Paramonova et al., 2009) and clinically relevant viscoelastic properties (He et al., 2013). These features make it suitable for studying its response under continuous, laminar flow of the irrigant solutions by means of real-time optical coherence tomography (OCT) imaging. Taking into consideration that the viscoelastic profile of root canal biofilms is completely unknown, investigating the response of biofilms with viscoelastic features similar to oral biofilms is as clinically relevant as we can feasibly get at the moment. Moreover, this model provides a reproducible in vitro platform for the investigation of the biofilm response to the chemical and mechanical stress resulting from the flowing NaOCl/HEDP mixture, whilst giving the opportunity to study the working mechanisms of the irrigants whilst interacting with the biofilms (Petridis et al., 2019b). Real-time OCT was chosen because it allows a time-resolved assessment on the same biofilm samples, without compromising the biofilm (Busanello et al., 2019). This resolves the issue associated with biological variation that is typically encountered for individual biofilm samples. It also circumvents the limitations associated with end-point measurements on individual biofilm samples (Petridis et al., 2019a). Perhaps more importantly, it allows us to distinguish biofilm layers, namely the coherent and disrupted layer (Busanello et al., 2019), and monitor their fate over time, under the continuous chemical and mechanical stresses exerted by a flowing irrigant.
The initial volumetric expansion of the biofilms as a response to their exposure to 2% NaOCl and 2% NaOCl/ HEDP can be attributed first, to the mild anti-biofilm action of the low NaOCl concentration that fails to remove a considerable amount of biofilm (as opposed to the 5% NaOCl concentration), and secondly to the biofilm viscoelastic properties (Busanello et al., 2019;Busscher et al., 2003;Pereira et al., 2020;Petridis et al., 2019a). Viscoelastic behaviour seems to be a trait common to biofilms that are submitted to a shear force imposed by the laminar flow. Biofilms respond dynamically to these conditions by stretching elastically without detaching (Gloag et al., 2020). This results in biofilm expansion, but not removal (Busscher et al., 2003). Interestingly, the addition of HEDP to the 2% NaOCl solution seemed to cause slightly less biofilm expansion compared to the 2% NaOCl. This was followed by biofilm retraction starting at an earlier time point (30 s instead of the 60 s noted for 2% NaOCl). Lastly, expansion remained stable, until the end point of the experiment. Based on these findings, further investigation is warranted to study the effect of HEDP on the viscoelastic properties of biofilms that are submitted to a low, but constant flow rate.
Higher NaOCl concentrations, irrespective of the presence of HEDP, removed biofilm at a higher rate compared to the less concentrated NaOCl solutions. This effect was especially prominent in the 5% NaOCl group (without HEDP) within the first 180 s, whilst slowing down thereafter. Five percent NaOCl solutions contain a higher quantity of reactive OCl − compared to less concentrated solutions. As a result, an accelerated dissolution of the biofilm is expected at the first moments of their interaction (Alves et al., 2011;Cunningham & Balekjian, 1980;Gordon et al., 1981;Koskinen et al., 1980;Moorer & Wesselink, 1982;Spanó et al., 2001;Stojicic et al., 2010;Thé, 1979;Trepagnier et al., 1977). Naturally, this burst removal is followed by a decelerated organic dissolution (Moorer & Wesselink, 1982), as the reactive compound gets consumed. However, in this study, the continuous NaOCl replenishment achieved by the constant flow of new irrigant in the system compensated for NaOCl consumption. Thus, the reduced biofilm removal following the prominent biofilm removal noted in Phase 1 can be attributed to the decreasing amount of biofilm left to participate in the reaction with NaOCl.
During the last 180 s, 2% NaOCl without HEDP showed an increase in the biofilm removal rate, approaching the rate of the higher concentrated solutions. At first, this supports the idea that the assumingly compromised antibiofilm efficiency of the lower concentrated NaOCl solutions can be compensated by more frequent exchange, a larger volume and longer exposure times (Alves et al., 2011;Gazzaneo et al., 2019;Petridis et al., 2019a;Siqueira et al., 2000). However, this conclusion should be viewed with some caution as biofilm removal rate and net biofilm removal are different outcomes. Biofilm removal rate could be viewed as a measure of antibiofilm efficiency (time-dependent effect), whilst net biofilm removal as a measure of anti-biofilm efficacy of the irrigant (timeindependent effect). Only looking at the similar biofilm removal rates between the 2% NaOCl and the 5% NaOCl (with or without HEDP) noted towards the end of the observation period fails to recognize the fact that higher NaOCl concentrations removed considerably more biofilm and induced considerably more biofilm disruption than the lower NaOCl concentrations overall. Therefore, it may be true that continuous replenishment of lower NaOCl concentrations may compensate for their increasingly diluted antibiofilm effect as a result of the fast consumption of their reactive component whilst reacting with the biofilm, but the fact that higher NaOCl concentrations induce more pronounced biofilm disruption and removal should not be overlooked.
Previous studies have shown that combining 5% NaOCl with HEDP results in some loss of the free available OCl− within the first hour after mixing, which limits the working lifespan of the solution (Biel et al., 2017; Tartari et al., 2015; Zollinger et al., 2018). As OCl− is consumed when NaOCl comes in contact with biofilms (de Beer et al., 1994), the additional loss from the presence of HEDP over time could explain the extra time needed for the combined 5% NaOCl/HEDP to bring about a similar effect as the plain 5% NaOCl. In addition, the interaction between NaOCl and HEDP affects the calcium chelation ability of the latter (Biel et al., 2017). Whether and to what extent mixing HEDP with 5% NaOCl compromises the calcium-complexing capacity of HEDP, thereby affecting the antibiofilm capacity of the chelator itself (EDTA, for example, shows considerable antibiofilm capacity against dense bacterial biofilms; for further information see Busanello et al., 2019), is not known and warrants further investigation. In addition, adding HEDP to the 2% NaOCl solution did not bring about any change to the rate of biofilm removal, which was remarkably low during the whole time this experiment was conducted. Again, this indicates that the chelator affected the working action of the solution, not allowing NaOCl to optimally manifest its (time-dependent) antibiofilm efficacy.
NaOCl reacts with the proteinaceous and polysaccharidic content of the biofilm matrix (Hawkins et al., 2003; Tawakoli et al., 2017). This leads to the formation of gas bubbles filled with carbon dioxide and chloroform compounds (Mohmmed, 2017). Bubble formation is a dynamic process shown to be dictated by the reactivity between the oxidative solution and the biofilm (Petridis et al., 2019b). A closer look at the biofilms within the first minutes of their interaction with the irrigants revealed a rapid and abundant formation of large bubbles in the 5% NaOCl treatment group. The high rate of biofilm removal associated with 5% NaOCl solutions reflects the high reactivity between these solutions and biofilms. In addition, the penetration capacity of 5% NaOCl is higher than that of the less concentrated NaOCl solutions (Stewart, 2003). The formation of a dense 'bubble cloud' increases the likelihood of coalescence of neighbouring bubbles, leading to the formation of even larger bubbles. These large bubbles have increased buoyancy and, when formed deeper in the biofilm, may cause higher stresses that eventually lead to massive structural disruption and cohesive failure of the biofilm. The disrupted biofilm debris is then easily removed from the bulk biofilm, as evidenced by the significant biofilm reduction noted after exposure to 5% NaOCl (Video S1).
When 2% NaOCl is applied, the lower reactivity between the irrigant and the biofilm does not seem to allow large bubbles to form. Also, taking into account that these bubbles form in the upper biofilm layers, owing to the limited penetration of the 2% NaOCl compared to the 5% NaOCl, their buoyancy is initially low. Nonetheless, small bubbles also cause disruption in the biofilm structure. This disruption is associated with the structural re-arrangement occurring naturally as a result of the volumetric expansion caused by the formed bubble. Consequently, the biofilm coherence is reduced, albeit below the critical failure level, similar to what happens to a hydrogel as it swells (Macedo et al., 2014). This is evidenced by the evolution of the biofilm from a coherent visco-elastic structure to a more viscous, fluid-like state (disrupted layer), as observed in the OCT real-time recordings (Video S1). We hypothesize that this behaviour stems from the viscoelastic properties of the biofilms. Measuring real-time changes in biofilm mechanical properties (viscoelasticity) during the application of 2% NaOCl would make validation of this hypothesis feasible; towards this end, advanced OCT imaging of biofilms undergoing deformation has shown great promise (Picioreanu et al., 2018). Ultimately, the continuous and stable administration of fresh 2% NaOCl applied in the present study seems to have an additive effect on the reactivity of the biocide with the biofilm. This eventually leads to critical cohesive failure and biofilm removal, as evidenced between the 300 and 480 s time points.
Contrary to our working hypothesis, the continuous presence of a chemically 'inert' chelator did not have a synergistic effect on the antibiofilm capacity of NaOCl against CDFF biofilms. Within the limitations of this study, NaOCl concentration seems to be the driving factor that determines biofilm response (Petridis et al., 2019b). EDTA has been shown to remove biofilm more effectively than 2% NaOCl in a previous study employing the same biofilm model (Busanello et al., 2019). Accordingly, a superior antibiofilm effect of the combined NaOCl/HEDP irrigant solutions was anticipated. This was, however, not corroborated by our findings. EDTA has a higher stability constant than etidronate, forming stronger bonds with metal ions (Smith & Martell, 1989; Wright et al., 2020). The nitrogen present in EDTA (CRC Press, 2017) is more likely to react with the divalent cations present in the extracellular polymeric substances (EPS) of the biofilm matrix and with the receptors on the cell wall of gram-positive bacteria, thus destabilizing the biofilm matrix. HEDP is a non-nitrogenous chelator containing phosphorus instead, which is less electronegative than nitrogen. That could explain its weaker antibiofilm action compared to EDTA. As EDTA was not used in this study, this inductive reasoning remains to be confirmed by direct comparison.
CONCLUSIONS
Based on the findings and within the limitations of this study, the following conclusions can be drawn:
1. HEDP slows down the biofilm removal/disruption achieved by NaOCl, but leads to similar results when biofilms are exposed to combined NaOCl/HEDP solutions for longer periods.
2. NaOCl concentration affects the rate of biofilm disruption and removal.
3. Bubble formation resulting from the reaction between NaOCl and the biofilm contributes to the disruption of the biofilm structure and to the biofilm volumetric expansion associated with the less concentrated NaOCl solutions.
4. Bubble formation parameters, such as growth rate and final size, depend on the concentration of the irrigant solution.
5. Optical coherence tomography is a valuable imaging tool for real-time monitoring of interactions between reactive solutions and biofilms.
Rotation Measure Evolution of the Repeating Fast Radio Burst Source FRB 121102
The repeating fast radio burst source FRB 121102 has been shown to have an exceptionally high and variable Faraday rotation measure (RM), which must be imparted within its host galaxy, likely by or within its local environment. In the redshifted ($z=0.193$) source reference frame, the RM decreased from $1.46\times10^5$~rad~m$^{-2}$ to $1.33\times10^5$~rad~m$^{-2}$ between January and August 2017, showing day-timescale variations of $\sim200$~rad~m$^{-2}$. Here we present sixteen FRB 121102 RMs from burst detections with the Arecibo 305-m radio telescope, the Effelsberg 100-m radio telescope, and the Karl G. Jansky Very Large Array, providing a record of FRB 121102's RM over a 2.5-year timespan. Our observations show a decreasing, though non-linear, trend in RM, dropping by an average of 15\% year$^{-1}$ to $\sim9.7\times10^4$~rad~m$^{-2}$ at the most recent epoch of August 2019. Erratic, short-term RM variations of $\sim10^3$~rad~m$^{-2}$~week$^{-1}$ were also observed between MJDs 58215--58247. A decades-old neutron star embedded within a still-compact supernova remnant, or a neutron star near a massive black hole and its accretion torus, have been proposed to explain the high RMs. We compare the observed RMs to theoretical models describing the RM evolution for FRBs originating within a supernova remnant. FRB 121102's age is unknown, and we find that the models agree for source ages of $\sim6-17$~years at the time of the first available RM measurements in 2017. We also draw comparisons to the decreasing RM of the Galactic center magnetar, PSR J1745--2900.
INTRODUCTION
Fast radio bursts (FRBs) are millisecond-duration radio transients whose origins are still unknown (Petroff et al. 2019); their large dispersion measures point to extragalactic origins. Some FRBs have also been observed to repeat; the first discovered, and most observed so far, is FRB 121102 (Spitler et al. 2016), and more repeating FRBs have been detected by the Canadian Hydrogen Intensity Mapping Experiment (CHIME) radio telescope (CHIME/FRB Collaboration et al. 2019a,b; Fonseca et al. 2020) and the Australian Square Kilometre Array Pathfinder (ASKAP, e.g. Kumar et al. 2019).
Polarisation properties of FRBs can reveal the nature of their local environment, as well as the FRB emission process and its geometry, thus adding constraints to progenitor theories. The rotation of the plane of linear polarisation of a signal, induced by the line-of-sight (LoS) magnetic field, is called Faraday rotation. The rate of this rotation across frequency is quantified by the rotation measure (RM), calculated as the LoS integral of the product of the magnetic field strength and the electron density. Polarisation fractions and RMs have been determined for 20 FRBs (Petroff et al. 2016). Linear polarisation fractions ranging from ~0 to ~100% have been measured, and the absolute RM values are in the range ~10−500 rad m⁻², with the exception of FRB 121102, which has an exceptionally high RM of ~10⁵ rad m⁻². FRB 121102's RM has also proven to be highly variable, with a decrease of ~10% between epochs separated by seven months (Michilli et al. 2018a). To be able to observe such a high RM, a narrow channel bandwidth or a high observing frequency is required in order to avoid intra-channel depolarisation. Typical pulsar instrumentation has channel bandwidths of ~1 MHz, so high-frequency observations are required to observe high RMs.
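To make the depolarisation constraint concrete: averaging the complex linear polarisation over one frequency channel suppresses the polarised fraction by roughly |sinc(RM Δλ²)|, with Δλ² ≈ 2c²Δν/ν³ the channel width in wavelength-squared space. A short Python sketch with an FRB 121102-like RM and the channelisations quoted in this paper (the numbers are illustrative):

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def chan_depolarisation(rm, nu_ghz, chan_bw_mhz):
    """Fraction of linear polarisation surviving channel averaging,
    |sin(x)/x| with x = RM * d(lambda^2) across one channel."""
    nu = nu_ghz * 1e9
    dlam2 = 2.0 * C**2 * (chan_bw_mhz * 1e6) / nu**3  # channel width in m^2
    x = rm * dlam2
    return abs(np.sinc(x / np.pi))  # numpy sinc(y) = sin(pi y)/(pi y)

rm = 1.0e5  # rad/m^2
for nu_ghz, bw_mhz in [(1.4, 1.0), (4.5, 0.976562)]:
    print(f"{nu_ghz} GHz, {bw_mhz} MHz channels: "
          f"{chan_depolarisation(rm, nu_ghz, bw_mhz):.2f}")
# ~0.04 at 1.4 GHz (almost fully depolarised) versus ~0.99 at 4.5 GHz,
# which is why high-RM sources must be observed at high frequencies.
```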
In the original discovery of FRB 121102, the dispersion measure (DM) was found to be 557 ± 2 pc cm⁻³ (Spitler et al. 2014), where the DM is defined as the column density of free electrons along the LoS. In more recent observations, FRB 121102 has exhibited an increase in the measured DM: 560.6 ± 0.1 pc cm⁻³ in Hessels et al. (2019) and 563.6 ± 0.5 pc cm⁻³ in Josephy et al. (2019), revealing an average increase of roughly 1 pc cm⁻³ per year.
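This DM sets the frequency-dependent arrival-time delay that the search pipelines described below must remove. A quick illustration using the standard cold-plasma delay formula (the dispersion constant ≈4.1488 ms GHz² pc⁻¹ cm³ is the conventional value; the band edges follow the 4−8 GHz observations in this work):

```python
def dispersive_delay_ms(dm, nu_lo_ghz, nu_hi_ghz):
    """Arrival-time delay (ms) between two observing frequencies for a
    given DM (pc cm^-3)."""
    k_dm = 4.1488  # ms GHz^2 / (pc cm^-3), conventional dispersion constant
    return k_dm * dm * (nu_lo_ghz**-2 - nu_hi_ghz**-2)

# FRB 121102's DM of ~560.6 pc cm^-3 across a 4-8 GHz band:
print(f"{dispersive_delay_ms(560.6, 4.0, 8.0):.1f} ms")  # ~109 ms
```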
Bursts from FRB 121102 have been detected at frequencies spanning ~0.3−8 GHz (Chawla et al. 2020; Gajjar et al. 2018). The bursting activity of FRB 121102 does not seem to follow a Poissonian process, but rather goes through phases of bursting activity and quiescence, which can be better explained with a Weibull distribution (Oppermann et al. 2018). This dichotomy in activity could also be explained by the recently discovered apparent periodicity of FRB 121102 of 161 days with an active window of 54% (Rajwade et al. 2020; Cruces et al. 2020); similar behaviour has been detected in the repeating FRB 180916.J1058+65, with a period of 16 days and a 31% activity window.
FRB 121102 is the first repeating FRB to be unambiguously localised to a host galaxy (Chatterjee et al. 2017), which is a low-metallicity dwarf galaxy at a redshift of z = 0.193, with a stellar mass of M* ~ 1.3 × 10⁸ M⊙ and a star formation rate of 0.23 M⊙ per year. FRB 121102 is also coincident with a compact persistent radio source whose projected offset is < 40 pc (Marcote et al. 2017).
The properties of FRB 121102 and its persistent radio source have motivated a number of FRB models. Among the leading scenarios, FRBs are generated by flaring magnetars within supernova remnants (SNRs). Here, the magnetar flares collide with the surrounding medium, producing shocks that create synchrotron maser emission, resulting in FRB generation. The main difference between these models lies in the nature of the shocked material, which is dominated either by the magnetar wind nebula (e.g. Lyubarsky 2014) or by previous magnetar flares (e.g. Beloborodov 2017, 2019; Margalit & Metzger 2018).
In this work we have observed FRB 121102 with the 305-m William E. Gordon Telescope at the Arecibo Observatory (AO), the Effelsberg 100-m Radio Telescope, and the Karl G. Jansky Very Large Array (VLA) to obtain RMs from its bursts in order to investigate its long-term RM evolution. In §2 we describe our observations, data acquisition and search analysis. In §3 we report sixteen new RM measurements of FRB 121102, a long-term average FRB 121102 burst rate from our Effelsberg observations, and discuss the properties of the detected bursts. §4 is dedicated to comparing our results to the theoretical prediction of the RM evolution of an SNR from the works of Piro & Gaensler (2018) and Margalit & Metzger (2018), as well as the Galactic center (GC) magnetar, PSR J1745−2900 (Desvignes et al. 2018), and in §5 we interpret those results. Finally, in §6 we summarise our findings.
OBSERVATIONS
The telescopes used for observations were the Arecibo Observatory 305-m William E. Gordon Telescope in Puerto Rico, USA; the Effelsberg 100-m Radio Telescope in Effelsberg, Germany; and the Karl G. Jansky Very Large Array in New Mexico, USA. The observational setup and data processing of each telescope is detailed in their respective subsections below.
We anticipated extremely high RM values from FRB 121102 bursts, and have thus observed at frequencies higher than the 1.4-GHz band in order to avoid intra-channel depolarisation.
Effelsberg
We have used the Effelsberg 100-m radio telescope to observe FRB 121102 at 4-8 GHz using the S45mm receiver with a roughly two-week cadence for 2-3 hours each session from late 2017 to early 2020, totaling 115 hours.
The data were recorded with full Stokes information using two ROACH2 backends, each capturing 2 GHz of the band. The channel bandwidth is 0.976562 MHz across 4096 channels, with a 131 µs sampling rate. The recorded data were in the Distributed Acquisition and Data Analysis (DADA) format. Before processing, Stokes I was extracted from the data into a SIGPROC filterbank format in order to perform the initial burst searching.
Observations on 22nd October 2018 encountered a receiver issue, forcing us to use the S60 mm receiver instead.
The S60 mm receiver has an SEFD of 18 Jy, 500 MHz of bandwidth from 4.6 to 5.1 GHz, 0.976562 MHz channel bandwidth across 512 channels, and an 82 µs sampling rate. The data were recorded as SIGPROC filterbanks.
The data were searched for single pulses using the PRESTO software package (Ransom 2011). We used rfifind to identify radio frequency interference (RFI) in the data over two-second intervals and to make an RFI mask, which was applied to the data during searching. We used PRESTO to create dedispersed time-series of the data from 0−1000 pc cm⁻³ in steps of 2 pc cm⁻³, which were searched for single pulses using single_pulse_search.py, convolving the time-series with boxcar filters of varying widths to optimise the signal-to-noise ratio of a burst. A pre-determined list of boxcar widths from PRESTO was used, where the widths are multiples of the data sampling time. We searched for burst widths up to 19.6 ms and applied a signal-to-noise threshold of 7. DM-time and frequency-time plots of candidates were visually inspected to search for bursts.
For further RFI mitigation we calculated the modulation index of candidates. The modulation index assesses a candidate's fractional variations across the frequency channels in order to discriminate between narrowband RFI and an actual broadband signal (Spitler et al. 2012). We applied this thresholding following Hilmarsson et al. (2020).
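As an illustration of this statistic, the modulation index of a candidate's on-pulse spectrum is m = sqrt(⟨I²⟩ − ⟨I⟩²)/⟨I⟩ over frequency channels (Spitler et al. 2012). The sketch below uses synthetic spectra to show why it separates broadband bursts from narrowband RFI; the actual threshold is survey-specific and not reproduced here:

```python
import numpy as np

def modulation_index(spectrum):
    """sqrt(<I^2> - <I>^2) / <I> across frequency channels.
    Broadband bursts -> low m; narrowband RFI -> high m."""
    s = np.asarray(spectrum, dtype=float)
    return np.sqrt(np.mean(s**2) - np.mean(s) ** 2) / np.mean(s)

rng = np.random.default_rng(0)
broadband = 1.0 + 0.1 * rng.standard_normal(512)  # burst-like: power everywhere
rfi = np.zeros(512)
rfi[100:104] = 50.0                               # RFI-like: a few hot channels
print(modulation_index(broadband))  # ~0.1
print(modulation_index(rfi))        # ~11
```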
If a burst was detected, we performed polarisation calibration in order to obtain the RM, polarisation angle (PA), and degree of polarisation of the burst. We used the psrfits_utils package to create a PSRFITS file containing the burst, and used PSRCHIVE to calibrate the data: first dedispersing the burst data using pam, then using pac to polarisation-calibrate the data with noise diode observations. To get the RM value, we used RMsyn.py, which fits the variation of Stokes Q and U as a function of frequency.
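The principle behind such a fit can be sketched with a toy rotation-measure synthesis (Burn 1966; Brentjens & de Bruyn 2005): the complex linear polarisation P = Q + iU is transformed from λ²-space to Faraday depth, and the RM is read off the peak of the Faraday dispersion function. This is a simplified, noiseless stand-in for RMsyn.py, not its actual implementation:

```python
import numpy as np

C = 299_792_458.0  # m/s

def rm_synthesis(freqs_hz, q, u, phis):
    """Evaluate |F(phi)| on a grid of trial Faraday depths and return the
    depth that maximises it."""
    lam2 = (C / freqs_hz) ** 2
    lam2_0 = lam2.mean()
    p = q + 1j * u
    fdf = np.array([np.abs(np.mean(p * np.exp(-2j * phi * (lam2 - lam2_0))))
                    for phi in phis])
    return phis[np.argmax(fdf)]

# Synthetic 4.1-4.9 GHz band with an injected RM of 1.0e5 rad/m^2:
freqs = np.linspace(4.1e9, 4.9e9, 512)
rm_true = 1.0e5
lam2 = (C / freqs) ** 2
q, u = np.cos(2 * rm_true * lam2), np.sin(2 * rm_true * lam2)
phis = np.linspace(9.0e4, 1.1e5, 2001)  # trial Faraday depths, rad/m^2
print(rm_synthesis(freqs, q, u, phis))  # recovers ~1.0e5 rad/m^2
```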
Arecibo
Data from the 305-m William E. Gordon Telescope at the Arecibo Observatory were acquired using the C-band receiver at an observing frequency between 4.1 and 4.9 GHz. The Puerto Rican Ultimate Pulsar Processing Instrument (PUPPI) backend recorded dual-polarisation data every 10.24 µs in 512 frequency channels, each coherently dedispersed to DM = 557 pc cm⁻³ to reduce intra-channel dispersive smearing to < 2 µs. The time and frequency resolution were reduced to 81.92 µs and 12.5 MHz, respectively, before searching for bursts. We used PRESTO to create 200 dedispersed time-series between 461 and 661 pc cm⁻³, which were searched by single_pulse_search.py with boxcar filters ranging from 81.92 µs to 24.576 ms. A large fraction of detections due to noise and RFI were excluded by using dedicated software (Michilli et al. 2018b). A 'waterfall' plot of signal intensity as a function of time and frequency was produced and visually inspected for the rest of the detections. The DSPSR package (van Straten & Bailes 2011) was used to create PSRCHIVE files containing the full-resolution data recorded by PUPPI.
For each observation, PSRCHIVE utilities were used to calibrate the burst polarisation by using a scan of a noise diode. RM values and their uncertainties were calculated with the RM-tools package by using rotation measure synthesis (Burn 1966; Brentjens & de Bruyn 2005) and a cleaning deconvolution algorithm (Heald 2009). The resulting Faraday dispersion function for bursts detected on MJDs 58222 and 58712 (bursts 8, 19 and 20) shows signs of poor polarisation calibration, namely symmetric peaks around the origin. We were not able to identify a cause for this and, while the RM measurements are still valid, the resulting polarisation fractions should be considered unreliable. PA curves were calculated by de-rotating the data with PSRCHIVE at the RM value obtained for each burst.
VLA
FRB 121102 was observed with the VLA as part of a monitoring project (VLA/17B-283) from 2017 November to 2018 January. Ten 1-hr observations were conducted at 2−4 GHz using the phased-array pulsar mode. Data were recorded with full Stokes information with 8096 × 0.25 MHz channels and 1024 µs time samples. Each observation had ≈30 min on-source. Data were dedispersed at 150 trial DMs from 400−700 pc cm⁻³, and the resulting time-series were searched for pulses using the PRESTO single_pulse_search.py.
Polarisation calibration was done using the 10-Hz injected noise calibrator signal. After polarisation calibration, the RMs were measured using the PSRCHIVE task rmfit which finds the RM that maximizes the linear polarisation fraction of the burst.
OBSERVATIONAL RESULTS
From our observations we have sixteen new RM measurements from FRB 121102 bursts: 1 from Effelsberg, 2 from the VLA, and 13 from Arecibo. The details of our detections, along with previously reported RM values, are listed in Table 1. The previously reported RM values from Arecibo (Michilli et al. 2018a) and the GBT (Gajjar et al. 2018) listed in Table 1 are a global fit to multiple bursts from the same epoch. Each burst is also assigned a numerical value for clarity. The burst DMs in Table 1 are obtained through a linear interpolation of DMs from bursts detected at L-band with Arecibo (Seymour et al., in prep). The L-band burst DMs are determined by maximising the structure of the bursts and their sub-components. That sample contains more bursts and shows more complex burst structures than the bursts presented here, resulting in more accurate and consistent DMs.
Long-term Burst Rate at C-band at Effelsberg
Previous surveys of FRB 121102 at frequencies between 4−8 GHz reported rates based on fewer observed hours (Spitler et al. 2018) and anomalously high burst rates (Gajjar et al. 2018). Spitler et al. (2018) detected three bursts from observing at 4.6−5.1 GHz for 22 hours, consisting of 10 observing epochs spanning five months using the Effelsberg telescope. Gajjar et al. (2018) detected 21 bursts in a single six-hour observation, observing at 4−8 GHz with the Green Bank Telescope. Furthermore, Zhang et al. (2018) re-searched the data from Gajjar et al. (2018) using a convolutional neural network and detected an additional 72 bursts within the data.
Our Effelsberg survey spans over two years of observing FRB 121102 for 2−6 hrs at a time at 4−8 GHz with a two-week cadence, amounting to 115 hours of observations. Included here are 10 hours of observations presented in Caleb et al. (2020). We can therefore report a robust, long-term average burst rate of FRB 121102 in this frequency range of 0.21 (+0.49/−0.18) bursts/day (1-sigma error) above a fluence of 0.04 (w/ms)^(1/2) Jy ms for a burst width of w ms. We list the details of the surveys discussed here in Table 2.
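Asymmetric uncertainties of this kind follow naturally from small-number Poisson statistics. As a sketch (assuming, purely for illustration, a single detection over the ~115 h of exposure; the count actually entering the published rate is not restated here), the exact frequentist interval can be computed from the chi-squared distribution:

```python
from scipy.stats import chi2

def poisson_rate_ci(n, t_days, cl=0.6827):
    """Exact Poisson confidence interval on an event rate,
    given n events in t_days of exposure."""
    a = 1.0 - cl
    lo = 0.5 * chi2.ppf(a / 2, 2 * n) / t_days if n > 0 else 0.0
    hi = 0.5 * chi2.ppf(1 - a / 2, 2 * n + 2) / t_days
    return n / t_days, lo, hi

rate, lo, hi = poisson_rate_ci(1, 115 / 24)  # hypothetical: 1 burst in 115 h
print(f"{rate:.2f} (+{hi - rate:.2f} / -{rate - lo:.2f}) bursts/day")
# -> roughly 0.21 (+0.48 / -0.17), close to the quoted rate
```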
A caveat to our observed burst rate is the suspected periodic activity of FRB 121102 (Rajwade et al. 2020). Roughly 40% of our Effelsberg observations were performed during suspected inactivity of FRB 121102, which, if true, would affect the observed burst rate. Including only observations while FRB 121102 is active, the average burst rate becomes 0.35 (+0.80/−0.29) bursts/day above a fluence of 0.04 (w/ms)^(1/2) Jy ms.
The observed burst rates of FRB 121102 also seem to be frequency dependent, with the rate being lower at higher frequencies. At 1.4 GHz the FRB 121102 burst rate has been observed to be 8 ± 3 bursts/day above a fluence of 0.08 Jy ms for 1 ms burst widths (Cruces et al. 2020).
Burst Properties
We plot the dynamic spectra, polarisation profiles, and polarisation angles (PAs) of our detected bursts in Fig. 1. The PA is equal to RM λ² + PA_ref, where λ is the observing wavelength and PA_ref is a reference angle at a specific frequency (the central observing frequency in our case). The bursts are mostly ~100% linearly polarised, with no circular polarisation detected. Bursts from FRB 121102 have been consistently ~100% linearly polarised since its first polarisation measurement in late 2016 (Michilli et al. 2018a), which suggests a stability in its emission process. The Arecibo bursts at MJDs 58222, 58247, and 58712 (bursts 8, 15, 19, and 20) are not fully linearly polarised, which is uncharacteristic for FRB 121102 and can be attributed to polarisation calibration issues (see §2.2). The lack of circular polarisation indicates that no Faraday conversion, whereby linear polarisation is converted to circular in a magneto-ionic environment, occurs at our observing frequencies (Vedantham & Ravi 2019; Gruzinov & Levin 2019).
The PAs are flat across each burst, as has been seen previously from FRB 121102 (Michilli et al. 2018a; Gajjar et al. 2018). The flat PAs indicate the burst timescales are intrinsic, and not from a beam sweeping across the LoS of an observer. We do not discuss PA changes over time, as we did not observe an absolute calibrator for polarisation; in the absence of an absolute calibrator we cannot compare PAs across multiple telescopes, and this discussion is outside the scope of this work. The VLA burst on MJD 58075 (burst 6) exhibits a triple-component profile. The second and third components exhibit a downward drift in frequency, a feature predominantly observed from repeating FRBs (e.g. Hessels et al. 2019; The CHIME/FRB Collaboration et al. 2020). The first component has an apparent upward drift in frequency, which is rarely seen, and a different PA than the two other components. While the temporal spacing between the components is not large, the difference in PAs between the first component and the other two might suggest that these are in fact two separate bursts.

Table 1. Burst detections of FRB 121102 with measured RMs in chronological order. From left to right: burst number, barycentric burst arrival time in MJD (referenced to infinite frequency), width (w, full-width at half-maximum), flux density (S), fluence (F), observed RM, DM, observing frequency, and telescope used. The burst DMs are obtained through linear interpolation of L-band bursts detected at Arecibo, whose DMs are determined by maximising their burst and sub-component structure.
The Effelsberg burst at MJD 58228 (burst 10) was only detected between 4−5.2 GHz of the 4−8 GHz bandwidth. We were affected by strong edge effects in the bandpass, resulting in an uneven frequency response across the bandwidth. Thus we are uncertain whether the burst's frequency envelope is inherent to the burst or due to the bandpass.
Dispersion and Rotation Measures of FRB 121102
The RMs we obtained from our bursts are listed in Table 1. We plot the RMs over time in Fig. 2. The observed RM of FRB 121102 has dropped by 34% over 2.6 years, from ~10⁵ rad m⁻² to ~6.7 × 10⁴ rad m⁻². As Fig. 2 shows, the drop in RM has not been steady over time. From MJD 57757 to MJD 58215 (bursts 1−7), the RM decreased rapidly to ~7 × 10⁴ rad m⁻², and has declined only slightly (~5000 rad m⁻²) since then.
Within a 32-day timespan the observed RM of FRB 121102 exhibited significant short-timescale variations (bursts 7−15). At epochs separated by a week, the RM increased by ~1000 rad m⁻² (bursts 7−8). For three epochs during the following week, the RM remained stable between bursts 8−10, before increasing again by ~1000 rad m⁻² a week later (bursts 11−12). During three epochs in the following two weeks, the RM was observed to drop rapidly by a total of ~4500 rad m⁻² (bursts 12−15). This short-timescale behaviour can be seen in the inset of Fig. 2. No RM measurement is available between MJDs 58247 and 58677 (430 days), but the RMs at these dates are consistent with each other (bursts 15−16). Another drop in RM, of ~2000 rad m⁻², can be seen between bursts 18 and 19, separated by 28 days.
Only minor changes in DM have been observed during the observed RM evolution of FRB 121102. While the RM decreased significantly, the DM has increased by ~4 pc cm⁻³, from 559.7 ± 0.1 pc cm⁻³ (Michilli et al. 2018a) up to 563.3 pc cm⁻³ from the aforementioned linear interpolation of L-band burst DMs used in this work.
An increase in DM means an increase in the LoS electron density. There are many contributing factors to the DM along the LoS, so a smaller fractional change in DM is not surprising; the Faraday-rotating medium contributes only a fraction of the total DM, and its amount is unknown. A decrease in RM implies a decrease in the magnetic field strength or the electron density along the LoS, or both. The opposing RM and DM evolution thus has two possible scenarios: the changes in RM and DM arise from different media; or the changes arise from the same medium, implying that the LoS magnetic field strength must be decreasing. Michilli et al. (2018a) constrained the average magnetic field along the LoS in the region where Faraday rotation occurs, ⟨B∥⟩, between 0.6 mG and 2.4 mG, using their measured FRB 121102 RM in the source frame of RM_src ~ 1.4 × 10⁵ rad m⁻² and the estimated host DM contribution of DM_host ≈ 70−270 pc cm⁻³. From a measured DM and RM, ⟨B∥⟩ can be calculated, ignoring sign reversals, as

⟨B∥⟩ = 1.23 (RM_src / DM_host) µG,    (1)

with RM_src in rad m⁻² and DM_host in pc cm⁻³. The most recent DM and RM values in our sample yield ⟨B∥⟩ = 0.4−1.6 mG. This is a lower limit, as the DM in the Faraday-rotating region could be much lower.
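Equation (1) is simple enough to evaluate directly. The sketch below reproduces the quoted bounds using the most recent source-frame RM and the estimated host DM range (the exact inputs used for the published 0.4−1.6 mG figure may differ slightly):

```python
def b_parallel_mG(rm_src, dm_host):
    """<B_||> in mG from Eq. (1): 1.23 uG * RM_src / DM_host,
    ignoring field reversals along the line of sight."""
    return 1.23e-3 * rm_src / dm_host  # 1.23 uG = 1.23e-3 mG

rm_src = 9.7e4                     # rad/m^2, most recent source-frame RM
for dm_host in (270.0, 70.0):      # estimated host DM contribution bounds
    print(f"DM_host = {dm_host:.0f} pc/cm^3: "
          f"<B_||> ~ {b_parallel_mG(rm_src, dm_host):.1f} mG")
# -> ~0.4 and ~1.7 mG, roughly bracketing the 0.4-1.6 mG quoted above
```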
IMPLICATIONS FOR SOURCE SCENARIOS
We explore two models that estimate the RM evolution over time within an SNR. The first, from Piro & Gaensler (2018), estimates both the RM and DM evolution for three different scenarios: a supernova expanding into a constant-density ISM; a progenitor wind shaping the circumstellar medium and dominating the RM contribution; and a supernova expanding into a wind-affected ISM. The second model is a one-zone magnetar nebula expanding spherically at a constant radial velocity (Margalit & Metzger 2018).

Additionally, we consider an environment near a massive black hole by comparing to the GC magnetar, PSR J1745−2900. The RM magnitude and trend of FRB 121102 seem to be analogous to those of PSR J1745−2900, which has undergone rapid changes in RM in recent times (Desvignes et al. 2018).
Using Bayesian inference, we fit the RM evolution predictions from the aforementioned SNR models to the observed RM of FRB 121102. A Markov-chain Monte Carlo (MCMC) method is used to estimate the posterior of the model parameters and the age of the FRB 121102 bursting source, t_age, at the time of its first RM measurement. The models considered here predict that DM decreases over time, while the observed DM is increasing; we therefore do not perform a similar analysis on the DM evolution. To perform the MCMC we used the emcee Python package (Foreman-Mackey et al. 2013; emcee.readthedocs.io). MCMC deploys random walkers around the initial estimates of the parameters, and the walkers explore the parameter space in order to reconstruct the posterior probability of the parameters.

Figure 1. Dynamic spectra of the bursts detected with Arecibo, VLA, and Effelsberg in chronological order, dedispersed to their respective DMs listed in Table 1. On top of each spectrum is plotted the profile of the burst (in black), linear polarisation (red), and circular polarisation (blue), as well as the polarisation angle (PA). Each panel is labeled with the corresponding burst number from Table 1 and the telescope at which the burst was detected. Bursts 8, 15, 19, and 20 suffer from poor polarisation calibration, resulting in unreliable polarisation fractions (see §2.2).
To obtain initial estimates for our parameters we used the scipy (Virtanen et al. 2020) stochastic least-squares module differential_evolution. An initial guess is also required for differential_evolution, for which we used the parameters of each model variety in Piro & Gaensler (2018) and Margalit & Metzger (2018). For our MCMC we randomly scattered 10 walkers around each parameter (up to 10% away), where each walker was made to walk 1.5 × 10³ steps. We used uninformative uniform priors for all our model parameters.
The observed RMs can be affected by instrumental or other noise processes, which are unaccounted for in the observed uncertainties. We introduced an error added in quadrature, Σ, in order to account for underestimation of the uncertainties of the observed RMs. Σ enters our Gaussian likelihood function as an underestimation of the variance σ² (the observed RM uncertainties in this case), i.e. the effective variance becomes σ² + Σ². The measured RM uncertainties, σ, are on the order of 10² rad m⁻². Henceforth, all values mentioned will be in the reference frame of the source, unless otherwise stated. This requires a conversion of the observed values to the source frame: DM_source = DM_obs (1 + z), RM_source = RM_obs (1 + z)², and t_source = t_obs (1 + z)⁻¹, where z ≈ 0.2 is the redshift of FRB 121102. This means that the minimum t_age possible in the source frame is just over 3 years, owing to the time elapsed between the first detection of FRB 121102 (Spitler et al. 2014) and its first RM measurement (Michilli et al. 2018a). In the case of DM, we will only consider the contribution local to the source, i.e. local to the bursting source and the host galaxy. For each model we first describe it in more detail before comparing it to our results.
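A minimal sketch of this setup is shown below, with a generic power-law RM decline standing in for the actual model equations (which are not reproduced here); the data values, priors, initial guesses and walker settings are all illustrative:

```python
import numpy as np
import emcee

def toy_model(t_yr, t_age, alpha):
    # Generic power-law decline anchored to the first epoch -- a stand-in
    # for the Piro & Gaensler / Margalit & Metzger predictions.
    return 1.4e5 * ((t_age + t_yr) / t_age) ** (-alpha)

def log_likelihood(theta, t, rm, err, model):
    # Sigma (as log10) is added in quadrature to the measured uncertainties.
    *pars, log_sigma = theta
    var = err**2 + (10.0**log_sigma) ** 2
    resid = rm - model(t, *pars)
    return -0.5 * np.sum(resid**2 / var + np.log(2 * np.pi * var))

def log_posterior(theta, *args):
    t_age, alpha, log_sigma = theta
    # Uninformative uniform priors within broad, illustrative bounds.
    if not (0 < t_age < 50 and 0 < alpha < 5 and 2 < log_sigma < 5):
        return -np.inf
    return log_likelihood(theta, *args)

# Illustrative source-frame epochs (years) and RMs (rad/m^2):
t = np.array([0.0, 0.6, 1.3, 2.0, 2.6])
rm = np.array([1.40e5, 1.15e5, 1.03e5, 1.00e5, 0.97e5])
err = np.full_like(rm, 200.0)

nwalkers, ndim = 10, 3
p0 = np.array([8.0, 1.3, 3.9]) * (1 + 0.05 * np.random.randn(nwalkers, ndim))
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_posterior,
                                args=(t, rm, err, toy_model))
sampler.run_mcmc(p0, 1500, progress=False)
print(np.median(sampler.get_chain(discard=500, flat=True), axis=0))
```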
Model Description
Piro & Gaensler (2018) model the temporal evolution of both RM and DM of an expanding SNR. They consider three cases of evolutionary environments, which we expand upon below.
The first evolutionary case is an SNR that expands into an ISM of constant density. The shocked, ionized regions of the SN ejecta and ISM, as well as ionized material from the pulsar wind nebula close to the SNR center, provide sufficient free electrons to disperse an FRB. The Faraday rotation arises from the magnetic fields generated by the forward and reverse shocks during the SNR expansion. The SNR dominates both the DM and RM contributions at early times, until the ISM takes over on a timescale of ~10²−10³ years. The free parameters in this model are the number density of the uniform ISM, n, and the SN ejecta mass, M. The energy of the explosion is kept constant at E = 10⁵¹ erg for all cases.

The second case is one where the stellar wind of the massive progenitor affects the circumstellar environment. The magnetized wind provides another source of magnetic field, as well as altering the DM evolution. The DM is much higher initially compared to the previous scenario, due to the high density of the wind adjacent to the SN, but the DM decreases more rapidly because of the wind's decreasing density. The wind environment produces an ordered magnetic field, which is swept up by the SNR. This is the focal point of RM generation in this scenario, as opposed to the shock-generated magnetic fields of the previous scenario. The RM also drops rapidly, due to the steep decline with time of the wind's density and magnetic field. Here the free parameters are the ejecta mass, M, and the wind mass loading parameter, K, which is a function of the mass-loss rate, Ṁ, and the wind velocity, v_w, and is given in units of g cm⁻¹.

The third case is a mixture of the first two scenarios: an SNR expands into an ISM affected by a constant-velocity wind, with M and K as free parameters. For all three cases they assume supernova ejecta masses of 10 M⊙ (red supergiant progenitor) and 2 M⊙ (stripped-envelope SN).
Results
Using an MCMC we can estimate the posterior of n, K, t_age, and Σ for each model variety (Piro & Gaensler 2018, Eqs. 26, 57, and Appendix) using the measured RM values of FRB 121102 (Table 1). Our initial guesses are the median values of n (1 cm⁻³) and K (10¹³ g cm⁻¹) from Piro & Gaensler (2018), t_age = 5 years, and Σ = 10³ rad m⁻² (roughly 1% of the observed RM magnitude). We plot our 2D posterior corner plots in Fig. 3 and list our results in Table 3.
For the constant-ISM model we obtain a t_age of 1.4 years at the time of the first RM detection. For the wind and wind-plus-SNR evolution models we obtain t_age between ~6−8 years. The range of RM from our results (1-sigma error) for each model and mass is plotted as a function of time in Fig. 4, overplotted with the observed RM values of FRB 121102.
We also plot the local DM versus RM for the models in Piro & Gaensler (2018) (Fig. 5).
Model Description
Margalit & Metzger (2018) consider a magnetar surrounded by a magnetar nebula. Flares and winds from the magnetar inject particles and magnetic energy into the nebula, which is in turn responsible for the large observed RM. Their model is a one-zone magnetar nebula model, in which they assume a spherical, freely expanding nebula with a constant radial velocity, v_n. The free magnetic energy of the magnetar, E_B*, is released into the nebula at a rate following a power-law in time, Ė ∝ t⁻ᵅ (Margalit & Metzger 2018, Eq. 4), where α ≳ 1. The Faraday rotation occurs in non-relativistic electrons ejected earlier in the nebula's history and cooled by radiation and adiabatic expansion.
In this model, the RM can be approximated as a function of E_B*, v_n, t, and t₀ (Margalit & Metzger 2018, Eq. 19; values normalised to 1 are omitted for clarity), where RM₅ ≡ RM/10⁵ rad m⁻², E_B* is in erg, v_n is in cm s⁻¹, t is the time in seconds since the SN explosion, and t₀ is the time in seconds since the onset of the active period of the magnetar's energy release into the nebula. We extract t_age from Eq. 3 by replacing t with t_age + t′, where t′ is the time elapsed in seconds of each RM measurement since the first one. For completeness, the estimated DM contribution from the Faraday-rotating medium follows from the same model (Eq. 4). In their analysis, Margalit & Metzger (2018) consider three variations of their model, each having its own set of values for E_B*, t₀, v_n, and α. They call these variations 'models A, B, and C', and we keep the same notation to avoid confusion. Margalit & Metzger (2018) use models A, B, and C to estimate t_age of FRB 121102 from Eq. 3 using the RM measurements from Michilli et al. (2018a) and Gajjar et al. (2018). Their choice of parameters and their results are shown in Table 4.
Results
Again, we used MCMC to estimate the posterior of α, t_age, and Σ. The initial guesses are the parameters of models A, B, and C and t_age in Margalit & Metzger (2018).

Table 3. Model parameters and results for the scenarios of Piro & Gaensler (2018). From left to right: model scenario, supernova explosion energy (E), supernova ejecta mass (M), number density of the surrounding uniform ISM (n), wind mass loading parameter (K), age of the bursting source (t_age), and the underestimation factor of the measured rotation measure, Σ. The parameters n, K, t_age, and Σ were obtained in this work (§4.1.2). Uncertainties are 1-sigma.

Table 4. Model parameters and results for Margalit & Metzger (2018). From left to right: model, free magnetic energy of the magnetar (E_B*), onset of the magnetar's active period (t₀), radial velocity of the expanding nebula (v_n), power-law parameter (α) and age of the bursting source (t_age) used in Margalit & Metzger (2018), and α, t_age, and the underestimation factor of the measured rotation measure, Σ, obtained in this work (§4.2.2). Uncertainties are 1-sigma.

Figure 5. Source frame DM versus RM of each model scenario presented in Piro & Gaensler (2018). Shown are the ranges of the predicted DMs and RMs of each scenario using the parameters obtained in this work with 1-sigma uncertainties (Table 3). The grey shaded area shows the RM and estimated local source DM contribution of FRB 121102 in the reference frame of the bursting source.
A similar t_age of ~15−17 years was obtained for all the models. Our obtained α values lie in the range 1.1−1.6 and are consistent with the values in Margalit & Metzger (2018). The resulting RM range (1-sigma error), overplotted with the observed FRB 121102 RM values, is plotted in Fig. 7.
Galactic Center Magnetar PSR J1745−2900
The GC magnetar PSR J1745−2900 has exhibited behaviour similar to FRB 121102 regarding changes in RM. Since its first RM measurement of −67000 rad m⁻² (Eatough et al. 2013), it showed RM variations of a few hundred rad m⁻² per year for a few years, until its RM suddenly exhibited a steep drop in absolute magnitude (Desvignes et al. 2018). This drop in RM is similar to that of FRB 121102, albeit not as intense: PSR J1745−2900 had a 5% drop in RM over the course of a year, while the RM of FRB 121102 has dropped by an average of 15% yr⁻¹ over roughly two years. Both PSR J1745−2900 and FRB 121102 exhibit short-term variations in their observed RMs, although the magnitude of the FRB 121102 variations is greater. Desvignes et al. (2018) also report a constant DM and attribute the RM evolution to the changing line of sight towards the moving magnetar, along which either the projected magnetic field or the GC free electron content varies. Desvignes et al. (2018) use the measured proper motion of PSR J1745−2900 to estimate the characteristic size of magneto-ionic fluctuations to be ~2 astronomical units (AU). Assuming the bursts from FRB 121102 originate from the magnetosphere of a neutron star with a speed of ~100 km s⁻¹, the source moves a distance of ~20 AU per year. The observations of PSR J1745−2900 show that spatial variations on the scale of a few to tens of AU are possible in the vicinity of a massive black hole. If the host of FRB 121102 also harbors a massive black hole, the variations seen in the RM of FRB 121102 could be caused by the changing medium in its accretion disk. The velocity of the medium could be much higher than in the Galactic center, contributing to the observed fluctuations.
DISCUSSION
We compared our measured RM sample to the theoretical RM predictions of Piro & Gaensler (2018) and Margalit & Metzger (2018) by obtaining MCMC posteriors of the model parameters and of t_age, the age of the FRB 121102 bursting source at the time of its first RM measurement.
For the model variations in Piro & Gaensler (2018), we obtain t_age ~ 1.5 years for the uniform-ISM scenario, and 6−9 years for the progenitor wind and progenitor wind-plus-SNR evolution scenarios. Based on observations, the minimum possible t_age is 3 years, so we exclude the uniform-ISM scenario. A drawback of the wind-only scenario is that it requires a high wind mass loading parameter (K > 10¹⁵ g cm⁻¹) to be consistent with the data.

We also compare our sample to the predicted DM versus RM evolution in Piro & Gaensler (2018). The excluded uniform-ISM scenario predicts DM values consistent with FRB 121102, but both wind scenarios predict much higher DMs than observed. Moreover, all the model variations predict a decrease, rather than an increase, in DM at the observed source-frame RMs of FRB 121102.
Our results here show that origin scenarios with standard supernovae have difficulties explaining both the RM and DM of FRB 121102. A caveat is that the models assume uniform media, while the ISM, SNR, and wind environments most likely have spatial structures such as filaments.
For the models in Margalit & Metzger (2018) we obtain a t_age of ~15−17 years and α of 1.1−1.6. Our results show that the observed RM evolution of FRB 121102 is consistent with these models. The estimated DM contribution from the nebula in Margalit & Metzger (2018) is ~2−20 pc cm⁻³ for models A, B, and C (Eq. 4). The measured increase in DM of ~4 pc cm⁻³ is difficult to reconcile with the RM decrease if it originates from the same electrons.

The DM and RM might not necessarily be coupled. Metzger et al. (2019) estimate that photoionization just outside the outward-propagating shock could contribute on the order of 10 pc cm⁻³, with an increase of a few pc cm⁻³ possible over several years. Therefore, the RM decrease and DM increase are likely occurring in different regions.
The SNR is initially optically thick at radio frequencies due to free-free absorption. According to Piro (2016), the SNR becomes optically thin at radio frequencies on a timescale of centuries if the SNR is solely ionised by the reverse shock. However, if the SNR is also photoionised from within by the magnetar wind nebula the SNR becomes optically thin at our observed frequencies on a timescale of 10 years (Metzger et al. 2017).
A by-product of our MCMC calculations is the error added in quadrature, Σ, which characterises the underestimation of the observed RM uncertainties. We find Σ to be consistent with ~10^3.9 rad m⁻² for all models and their variations, or roughly 10% of the observed RMs. This underestimation could be due to unaccounted-for noise processes. Alternatively, the large Σ could be explained by deviations of the observed RMs from the RM evolution models considered in this work, which are inherently power-laws. These deviations could be due to LoS variations across observing epochs, as is seen for PSR J1745−2900, whose similarly drastic RM changes are attributed to variations in the projected magnetic field or the GC free electron content along the changing line of sight to the moving magnetar (Desvignes et al. 2018). FRB 121102 is located outside of its host dwarf galaxy's center, but we cannot exclude a similar scenario, given that AGNs can be found offset from the optical centers of dwarf galaxies (Reines et al. 2020).
A comparison can be made between FRB 121102 and another localised repeating FRB, FRB 180916.J1058+65, which has no discernible associated persistent radio source and whose RM is three orders of magnitude smaller than that of FRB 121102. It can nonetheless still fit within the SNR framework if the persistent radio source has faded and the RM has dropped to its observed level because the source is a few hundred years old (Marcote et al. 2020).
The observed RMs of FRB 121102 show large-scale variations of ~10⁴ rad m⁻² over year timescales and small-scale variations of ~10³ rad m⁻² over week timescales. There is no obvious periodicity in the observed RM variations at the proposed FRB 121102 periodicity of 161 days (Cruces et al. 2020).
Future polarisation measurements will show whether the RM of FRB 121102 has "leveled-off" at its current magnitude or will continue to vary. If the RM continues to decrease, the parameters of the SNR models considered in this work can be constrained further. On the other hand, if the RM will stay the same, the models can be rejected or will require adjustments. If the RM increases significantly, it would strongly challenge the SNR models.
Investigating the RM and DM evolution of repeating FRBs is certainly helpful in constraining source models. If FRBs, especially repeating ones, continue exhibiting vast differences from FRB 121102, such as host galaxy type, RM magnitude, and DM evolution, one must consider the possibility that FRB 121102 is a unique FRB source, likely residing locally to an AGN.
CONCLUSIONS
We present sixteen new RMs from bursts of FRB 121102 using observations taken with Arecibo, Effelsberg, and VLA.
Our Effelsberg survey consists of over 100 observing hours spanning more than two years at 4−8 GHz (Table 2). An FRB 121102 survey of this magnitude in this frequency range is unprecedented, and thus enables us to present a robust, long-term average burst rate of 0.21 (+0.49/−0.18) bursts/day above a fluence of 0.04 (w/ms)^(1/2) Jy ms.
Along with previously reported RM values of FRB 121102 (Michilli et al. 2018a; Gajjar et al. 2018), we have an RM sample spanning roughly 2.5 years. During that time, the source-frame RM has decreased significantly. From the first RM measurement at MJD 57747 to MJD 58215, the RM declined rapidly from 1.4 × 10⁵ rad m⁻² to 1.0 × 10⁵ rad m⁻². From that point onward, the RM has stayed relatively constant, with only a slight decrease down to 9.7 × 10⁴ rad m⁻². However, short-term RM variations of ~1000 rad m⁻² per week have been observed during that period.
We fit the observed RM of FRB 121102 to theoretical models of RM evolution within SNRs from Piro & Gaensler (2018) and Margalit & Metzger (2018). The results yield a source age estimate of 6−17 years for FRB 121102 at the time of its first RM measurement in late 2016. Conventional SNR models do not agree with our data, but models that include a pulsar wind nebula are compatible with them.
Self-care behaviour and associated factors among heart failure patients in Ethiopia: a systematic review and meta-analysis
Objective This study aimed to estimate the pooled level of self-care behaviour among heart failure patients in Ethiopia. Design Systematic review and meta-analysis. Data sources PubMed/MEDLINE, HINARI, Web of Science, Scopus, Google Scholar, Science Direct, African Journals Online and university repositories were searched from 1 January 2000 to 1 November 2023. Eligibility criteria We included studies that examined self-care behaviour among heart failure patients, studies that reported factors associated with self-care behaviour, and observational studies (cross-sectional, case-control and cohort) with full text available. Data extraction and synthesis The data were extracted with Microsoft Excel and analysed using STATA V.11 software. The weighted inverse-variance random-effects model at 95% CI was used to estimate the pooled level of self-care behaviour and its associated factors among heart failure patients. Tests of heterogeneity, tests of publication bias and subgroup analyses were also employed. Results Thirteen cross-sectional studies with 4321 study participants were included, and the pooled level of good self-care behaviour among heart failure patients in Ethiopia was found to be 38.3% (95% CI 31.46 to 45.13). Only 68.8% of heart failure patients were knowledgeable about heart failure. Knowledge about heart failure (adjusted odds ratio (AOR)=3.39; 95% CI 2.42 to 4.74) and absence of comorbidity (AOR=2.69; 95% CI 1.35 to 5.37) were significantly associated with good self-care behaviour among heart failure patients in Ethiopia. Conclusion The majority of heart failure patients in Ethiopia did not adhere to the recommended self-care behaviours. Nearly one-third of heart failure patients were not knowledgeable about heart failure. Knowledge about heart failure and the absence of comorbidities were significantly associated with good self-care behaviour. Therefore, efforts should be devoted to increasing knowledge and preventing comorbidities among heart failure patients. PROSPERO registration number CRD42023394373.
INTRODUCTION
Heart failure is a complex clinical syndrome characterised by impaired ventricular filling, decreased pumping or inadequate cardiac output arising from functional or structural impairment of the heart. 1 It is one of the rapidly growing cardiovascular disorders, affecting more than 64 million people worldwide. 2 Heart failure is a major health issue in Africa, associated with significant rates of morbidity, mortality and hospitalisation. 3 In Sub-Saharan African countries, heart failure is responsible for about 25.6% to 30% of hospital admissions, posing a substantial burden on the healthcare system. 4 The mortality associated with heart failure is higher in Africa than in other regions of the world. 5 Furthermore, heart failure affects the young and productive population. Hence, it can negatively affect the economic growth of a country by reducing productivity. 4 In Ethiopia, heart failure is a significant public health burden, affecting middle-aged and economically productive individuals,
STRENGTHS AND LIMITATIONS OF THIS STUDY
⇒ A strength of this systematic review and meta-analysis is its inclusion of eligible articles from wide geographical areas across the country, which increases the accuracy of the findings.
⇒ This systematic review and meta-analysis was conducted with a robust methodology.
⇒ This study provides an overall level of self-care behaviour among heart failure patients in Ethiopia.
⇒ Declaring the absence of publication bias by computing Egger's test may increase the reliability of the findings.
⇒ All studies included in this systematic review and meta-analysis were cross-sectional, which limits the opportunity to establish causal links between variables.
⇒ The included articles were conducted in governmental health institutions, so this study does not consider heart failure patients who have follow-up in private health facilities or home-dwelling patients. Moreover, there is high heterogeneity among the included studies.
resulting in significant premature mortality, disability and loss of economic productivity. 6 Evidence from this country showed that about 30% to 40% of heart failure patients die within 1 year, and around 60% to 70% die within 5 years of diagnosis. 7 Moreover, the majority of heart failure patients in Ethiopia had poor health-related quality of life, and most of them suffered from depression. 8 9 Fortunately, various treatment modalities have been developed and are being implemented to reduce morbidity and mortality due to heart failure. Self-care behaviour is the practice in which patients are involved in health maintenance activities and decide about managing their signs or symptoms. 10 It mainly includes taking prescribed medications, reducing salt intake, reducing alcohol consumption, engaging in regular exercise, cessation of smoking and weight reduction. 1 11 Good adherence to these self-care activities may be important for the effective management of heart failure. 12 Patients with good self-care behaviour were found to have a better quality of life, reduced mortality and a lower burden of rehospitalisation than patients with lower levels of self-care behaviour. 13 14 On the other hand, poor adherence to recommended self-care behaviours was associated with extended hospital stays, increased depressive symptoms and worse outcomes. 15 16 Therefore, self-care should receive the same attention and status as medications, and clinicians should give priority to self-care behaviour when caring for heart failure patients. 14 Assessing self-care behaviour is crucial for determining the level of self-care practice among heart failure patients. Identifying factors associated with good adherence is also vital for policymakers, planners and clinicians to design factor-oriented strategies targeted at increasing self-care practice in heart failure patients. This in turn will help to reduce the morbidity and mortality of heart failure victims. In fact, a number of studies assessing the level of self-care behaviour and its associated factors among heart failure patients have been reported from different regions of Ethiopia. However, the reported results were inconsistent and, as far as the investigators know, there was no national-level study estimating the overall self-care behaviour among heart failure patients in Ethiopia. Therefore, this study aimed to estimate the pooled level of good self-care behaviour and to identify factors associated with good self-care behaviour among heart failure patients in Ethiopia.
METHODS AND MATERIALS

Study protocol and registration
This systematic review and meta-analysis is reported following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guideline. 17 The completed PRISMA checklist for this study is provided as a supplementary file (online supplemental table S1). The protocol for this study was registered in the PROSPERO database; the registration number is CRD42023394373.
Databases and search strategy
In this systematic review and meta-analysis, we checked databases without restriction on study design or publication year. The records were searched in international databases such as PubMed/MEDLINE, Web of Science, HINARI, Scopus, Google Scholar, Science Direct, African Journals Online and repositories of Ethiopian universities. Furthermore, the reference lists of established articles were checked to incorporate additional articles. Articles reporting self-care behaviours/practices among heart failure patients and/or factors associated with self-care behaviours/practices in Ethiopia were included in the final analysis. To identify the articles, we used the search terms in combination and/or independently, using the "AND" and "OR" Boolean operators. The search terms/keywords were "heart failure", OR "congestive heart failure", OR "cardiac failure", AND "self-care practice", OR "self-care behaviour", AND "factors", OR "associated factors", OR "determinants", OR "predictors", AND "Ethiopia" (table 1). Moreover, we reviewed the primary studies' reference lists to assess unpublished studies. The search period was from 20 November 2022 to 25 December 2022, and articles retrieved up to 25 December 2022 were included in the final systematic review and meta-analysis. The retrieved articles
Inclusion and exclusion criteria
In this study, we included studies that examined self-care behaviour among heart failure patients, studies that reported factors associated with self-care behaviour, and observational studies (cross-sectional, case-control and cohort) with full text available. Studies published from 1 January 2000 to 1 November 2023 were included in this systematic review and meta-analysis. However, interventional studies, systematic reviews, narrative reviews, qualitative studies, case reports, policy statements and articles with inaccessible full text were excluded.
Outcome measurement
This study had two main outcome measurements. The first outcome is the pooled level of good self-care behaviour among heart failure patients; the second is the predictors of self-care behaviour among heart failure patients in Ethiopia.
Heart failure self-care behaviour was classified as good or poor. In this meta-analysis, we measured heart failure self-care behaviour using five domains: medication adherence, low-sodium diet, exercise, fluid restriction and weight monitoring. Data regarding the extent of participants' adherence to self-care behaviour were collected using a 5-point Likert scale (1=strongly disagree, 5=strongly agree). Higher scores indicated a higher level of adherence to self-care behaviour, with scores of 4 or 5 indicating good adherence to heart failure self-care behaviour. Participants who scored 3 or less were considered to have poor adherence to heart failure self-care behaviour.
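In code, this scoring rule is a simple threshold; a minimal Python sketch (the example scores are hypothetical):

```python
def adherence_level(score):
    """Classify a 5-point Likert score: 4-5 -> good, <=3 -> poor."""
    return "good" if score >= 4 else "poor"

# One respondent's five domains: medication, low-sodium diet, exercise,
# fluid restriction, weight monitoring (hypothetical values).
print([adherence_level(s) for s in (5, 4, 2, 3, 4)])
# -> ['good', 'good', 'poor', 'poor', 'good']
```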
Quality evaluation
In this systematic review and meta-analysis, the quality of the remaining studies (methodological and result validity) was examined using the Joanna Briggs Institute (JBI) critical quality assessment checklist for cross-sectional studies. 18 Two investigators (AW and MB) independently assessed the quality of each full-text article, and any disagreement between the authors was resolved by taking the mean of the two authors' scores. The parameters of the JBI critical quality assessment criteria include clear inclusion and exclusion criteria, details of the study population and appropriate statistical analysis. The maximum score of the JBI critical quality assessment checklist for cross-sectional studies is 8. Finally, original studies with a score of 5 or above were considered high quality, with the cut-off point determined after reviewing relevant literature. The completed quality assessment checklist is provided as a supplementary file (online supplemental table S2).
Data extraction
The data from the final included studies were extracted independently by two authors (AW and MB) using a standard data extraction format in Microsoft Excel. The extraction checklist included the first author's name, publication year, study design, study region, sample size, mean age, response rate, sex, and heart failure self-care behaviour.
Data processing and statistical analysis
The data were extracted, cleaned and exported into STATA V.11.0 (Stata Corporation, College Station, TX, USA) for quantitative analysis. Considering the variation in true effect sizes across populations, the DerSimonian and Laird random-effects model was used for the analysis at a 95% CI. Cochran's Q statistic (χ2) and I2 (%), with their corresponding p values, were used to determine the presence of heterogeneity between studies.19 20 Furthermore, the source of heterogeneity was examined through subgroup analysis, and sensitivity analysis was executed to investigate the potential source of heterogeneity observed in the pooled level of heart failure self-care behaviour. Publication bias was evaluated using Egger's test21 and presented with a funnel plot. The pooled level of self-care behaviour among heart failure patients was reported with a 95% CI, with p<0.05 considered statistically significant. The strength of the association between self-care behaviour and its predictors was determined by computing the OR.
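To make the pooling procedure concrete, the following Python sketch implements DerSimonian-Laird random-effects pooling of study-level proportions, together with Cochran's Q and the I2 statistic, using the standard formulas. This is an illustrative re-implementation, not the authors' STATA code, and the input values are toy numbers rather than the actual study data.

import numpy as np

def dersimonian_laird_pool(p, n):
    # p: study-level proportions (e.g., share with good self-care behaviour)
    # n: study sample sizes
    p, n = np.asarray(p, float), np.asarray(n, float)
    var = p * (1 - p) / n                      # within-study variances
    w = 1 / var                                # fixed-effect (inverse-variance) weights
    p_fixed = np.sum(w * p) / np.sum(w)
    q = np.sum(w * (p - p_fixed) ** 2)         # Cochran's Q
    df = len(p) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)              # between-study variance estimate
    w_re = 1 / (var + tau2)                    # random-effects weights
    p_re = np.sum(w_re * p) / np.sum(w_re)
    se = np.sqrt(1 / np.sum(w_re))
    i2 = 100 * max(0.0, (q - df) / q) if q > 0 else 0.0
    return p_re, (p_re - 1.96 * se, p_re + 1.96 * se), i2

pooled, ci, i2 = dersimonian_laird_pool([0.17, 0.38, 0.54], [300, 350, 400])  # toy data
print(f"pooled = {pooled:.3f}, 95% CI = ({ci[0]:.3f}, {ci[1]:.3f}), I2 = {i2:.1f}%")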
Patient and public involvement
Neither patients nor the public were involved in the review protocol, the proposal development, or the design and analysis of the study.
RESULTS
As illustrated in figure 1, the literature search yielded 1068 articles from electronic databases including PubMed/MEDLINE, HINARI, Web of Science, Scopus, Google Scholar, Science Direct, African Journals Online and university repositories. Of these, 394 duplicate records were removed. We then excluded 603 irrelevant articles after reviewing their titles and abstracts. We assessed 71 full-text articles for eligibility based on predetermined criteria, and 58 articles were further excluded. Finally, 13 studies fulfilled the eligibility criteria and were included in this systematic review and meta-analysis (figure 1).
Characteristics of included studies
The current study included 13 cross-sectional studies with 4321 participants. The studies were published between 2014 and 2022 in international peer-reviewed journals. The sample sizes of the included studies ranged from 22922 to 424.23 The overall response rate was 98.71%. The mean age of the participants was 49.01 years, and 54.18% of the participants were female.
In this study, four regions and one city administration of the country were represented. One study was from Tigray regional state,24 five studies were from Oromia regional state,23 25-28 one study was from Sidama regional state,22 four studies were from Amhara regional state8 29-31 and two studies were from Addis Ababa city administration.32 33 Five regions and one city administration, namely the Benishangul Gumuz, Somali, Afar and Gambella regions, the Southern Nations, Nationalities and Peoples' region and Dire Dawa city administration, were not represented due to the absence of articles (table 2).
Self-care behaviour among heart failure patients in Ethiopia
All included studies reported the level of self-care behaviour among heart failure patients. We computed the analysis using the DerSimonian-Laird random-effects model. The results showed that the estimated pooled level of good self-care behaviour among heart failure patients in Ethiopia was 38.3% (95% CI 31.46 to 45.13; I2=95.8%; p<0.001) (figure 2). The highest level of self-care behaviour was observed in a study done in the Bale and East Bale zones (53.6%), Oromia regional state, in 2022.28 The lowest level of heart failure self-care behaviour was reported from a study conducted in Jimma town (17.4%), Oromia regional state, in 2015.27

Adherence to recommended self-care behaviours among heart failure patients in Ethiopia

Five studies were included to determine the pooled level of adherence among heart failure patients regarding medication, salt restriction and weight monitoring.
Heterogeneity and publication bias
In the current meta-analysis, there was evidence of significant heterogeneity across the included studies according to the Cochrane Q-test (p<0.001) and the I2 test (I2=95.8%). Therefore, the meta-analysis was computed using the random-effects model to estimate the overall level of self-care behaviour among heart failure patients in Ethiopia. The presence of publication bias was assessed using the funnel plot and Egger's regression test. The funnel plot indicated no publication bias among the included studies (figure 3). Egger's weighted regression statistic (p=0.104) and Begg's rank correlation statistic (p=0.246) showed no evidence of publication bias.
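For readers unfamiliar with Egger's test, the sketch below shows one common formulation: the standardised effect size is regressed on precision, and a significantly non-zero intercept suggests funnel-plot asymmetry. This is a generic illustration with toy inputs, assuming SciPy is available; it is not the software routine used in this analysis.

import numpy as np
from scipy import stats

def eggers_test(effects, ses):
    # Regress standardised effect (effect/SE) on precision (1/SE);
    # a non-zero intercept indicates funnel-plot asymmetry.
    effects, ses = np.asarray(effects, float), np.asarray(ses, float)
    z, precision = effects / ses, 1 / ses
    fit = stats.linregress(precision, z)
    n = len(effects)
    resid = z - (fit.intercept + fit.slope * precision)
    s2 = np.sum(resid ** 2) / (n - 2)          # residual variance
    sxx = np.sum((precision - precision.mean()) ** 2)
    se_int = np.sqrt(s2 * (1 / n + precision.mean() ** 2 / sxx))
    t = fit.intercept / se_int
    p = 2 * stats.t.sf(abs(t), df=n - 2)       # two-sided p value for the intercept
    return fit.intercept, p

intercept, p = eggers_test([0.17, 0.38, 0.45, 0.54], [0.02, 0.03, 0.025, 0.03])  # toy data
print(f"Egger intercept = {intercept:.2f}, p = {p:.3f}")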
Subgroup analysis
In this study, there was evidence of significant heterogeneity across the included studies. Therefore, subgroup analysis was conducted by sample size, publication year, region where the study was conducted and the scales used to assess adherence to self-care behaviour.
The subgroup analysis showed that the level of heart failure self-care behaviour was higher in studies conducted in Oromia regional state (42.56%) than in studies conducted in Amhara regional state (36.15%). Additionally, subgroup analysis based on the publication year of the included studies showed that the highest level of self-care behaviour was observed in articles published between 2021 and 2022 (40.36%) (table 3).
Participants' level of knowledge on heart failure
Of the 13 included articles, 8 assessed the participants' level of knowledge regarding heart failure. The results of this systematic review and meta-analysis showed that more than two-thirds of the study participants had poor knowledge about heart failure; the pooled level of poor knowledge was 68.82% (95% CI 60.41 to 77.23). Five of the included articles used the Japanese heart failure knowledge scale, whereas three used the Dutch heart failure knowledge scale to measure the participants' level of knowledge. We conducted a subgroup analysis to determine any variation between the scales used. The result showed that the level of poor knowledge about heart failure was higher in studies that used the Dutch scale (70.53%) than in studies that used the Japanese scale (68.82%).
Factors associated with heart failure self-care behaviour in Ethiopia

A total of seven cross-sectional studies were included to determine the factors significantly associated with heart failure self-care practice in Ethiopia. Two predictors were identified: participants' knowledge about heart failure and the absence of comorbidity. Seven cross-sectional studies were analysed to determine the association between the participants' level of knowledge on heart failure and self-care behaviour.22 24 29-33 The results showed that patients who had good knowledge about heart failure were 3.39 times more likely to have good adherence to self-care recommendations than those with poor knowledge (adjusted odds ratio (AOR)=3.39; 95% CI 2.42 to 4.74).
Additionally, the association between comorbidity and self-care behaviour among heart failure patients was computed using three studies.24 29 33 Participants without comorbidity were 2.69 times more likely to have good self-care behaviour than those with comorbidity (AOR=2.69; 95% CI 1.35 to 5.37).
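As an illustration of how such study-level adjusted ORs can be combined, the sketch below pools log-ORs with inverse-variance weights, recovering each standard error from the reported 95% CI. The weighting scheme shown (fixed-effect) and the numbers are illustrative assumptions, not the exact procedure or data of this review.

import numpy as np

def pool_odds_ratios(ors, ci_low, ci_high):
    log_or = np.log(ors)
    se = (np.log(ci_high) - np.log(ci_low)) / (2 * 1.96)  # SE recovered from the 95% CI width
    w = 1 / se ** 2                                       # inverse-variance weights
    pooled = np.sum(w * log_or) / np.sum(w)
    se_pooled = np.sqrt(1 / np.sum(w))
    lo, hi = pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled
    return np.exp(pooled), np.exp(lo), np.exp(hi)

# Toy ORs with their CI bounds, not the actual study values:
print(pool_odds_ratios([3.1, 3.6, 3.5], [1.9, 2.2, 2.1], [5.0, 5.9, 5.8]))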
DISCUSSION
Good self-care behaviour may be helpful in the effective management of heart failure and provides the opportunity for patients to understand the benefits of self-monitoring and management of their illness.12 Patients with good and effective self-care behaviour have a better quality of life, lower hospital readmission rates, improved clinical outcomes and reduced costs associated with hospitalisation.34 Estimating the overall level of self-care behaviour among heart failure patients may therefore be important for heart failure treatment and the prevention of related complications. The findings of this systematic review and meta-analysis showed that the estimated level of good self-care behaviour among heart failure patients in Ethiopia was 38.3% (95% CI 31.46 to 45.13). This implies that more than three-fifths of heart failure patients in Ethiopia had poor self-care behaviour. This finding is comparable with studies done in America (35.7%)35 and Brazil (36.5%).36 However, it is lower than studies conducted in Kenya (50.8%),37 Slovenia (49%),38 the Netherlands (48%)39 and Vietnam (54.5%).40 On the other hand, it is higher than studies done in Egypt (30%)41 and Sudan (28.95%).42 Possible explanations for this variation include differences in socioeconomic status, knowledge level, accessibility of healthcare facilities and lifestyle of the study participants; in this study, more than two-thirds of the participants had poor knowledge about heart failure. There are also methodological differences: the current study analysed the level of self-care behaviour with a large sample size (4321), whereas the above studies were conducted at specific sites and evaluated the level of self-care behaviour with smaller sample sizes.
In this study, more than two-thirds (68.82%) of the participants were found to have a poor level of knowledge about heart failure. This finding is higher than a study done in France (43.5%).43 A possible explanation for this variation is the difference in socioeconomic status between France and Ethiopia. A prospective cohort study done in the Faroe Islands showed that disease-specific education for heart failure patients can persistently improve self-care behaviours.44 Self-care behaviours among heart failure patients are affected by awareness about the disease, and self-care may be improved by increasing patients' knowledge.45 Providing information that takes into account the educational status and health literacy of heart failure patients may lead to better treatment outcomes.12

In this systematic review and meta-analysis, the pooled level of medication adherence among heart failure patients in Ethiopia was 84.77%. This result is higher than a study done in Saudi Arabia (46.4%).46 The possible justification for this variation might be differences in study methodology. Evidence shows that heart failure patients who adhere to prescribed medications have fewer heart failure symptoms, decreased mortality and reduced hospital readmission.47 48 Similarly, 65.55% and 16.75% of heart failure patients in Ethiopia adhered to the recommended salt restriction and weight monitoring, respectively. The European Society of Cardiology recommends that heart failure patients avoid excessive salt intake (>5 g/day) and maintain a healthy body weight for the treatment and prevention of heart failure.12

This study identified significant factors associated with self-care behaviour among heart failure patients in Ethiopia: knowledge about heart failure and the absence of comorbidities. Patients who had good knowledge about heart failure were 3.4 times more likely to adhere to the recommended self-care behaviour than those with poor knowledge. This finding is congruent with studies conducted in Egypt,41 Iran,49 Singapore50 and the Netherlands.39 Heart failure patients who have a good understanding of the disease may perform the recommended self-care behaviour better than patients with inadequate knowledge. Providing disease-specific education to heart failure patients may improve their knowledge about the disease, which in turn improves self-care behaviour and reduces complications of heart failure.44

Finally, the results of this study showed that heart failure patients without comorbidity were 2.7 times more likely to have good self-care behaviour than those with comorbidity. Evidence shows that comorbidities account for the majority of the loss of life expectancy and negatively influence treatment outcomes in heart failure patients.51 52 Comorbidities among heart failure patients may lead to self-care difficulties because patients may be challenged by multiple medications and may be given different dietary recommendations for different conditions. Moreover, patients may have difficulty differentiating the symptoms of comorbidities and may lack knowledge about how to manage multiple comorbidities.53 Clinicians should identify specific comorbidities in heart failure patients and provide individualised, patient-centred care that targets specific comorbidities and associated symptoms.54
Conclusion
In conclusion, the majority of heart failure patients in Ethiopia did not adhere to the recommended self-care behaviours. Good adherence to the recommended self-care behaviour among heart failure patients in Ethiopia was significantly associated with a good level of knowledge about heart failure and the absence of comorbidity. Based on these results, the authors recommend that clinicians, policymakers and programmers give special attention to improving the knowledge of heart failure patients and to the early diagnosis and treatment of comorbidities among heart failure patients. Finally, nationwide population-based studies and qualitative exploratory studies should be conducted to assess the factors contributing to poor self-care behaviour among heart failure patients in Ethiopia.
Contributors AW and MB designed the study. AW and MB designed and ran the literature search. All authors (AW, AG and MB) acquired data, screened records, extracted data and assessed the risk of bias. AG and AW did the statistical analysis and wrote the report. All authors provided critical conceptual input, analysed and interpreted the data, and critically revised the report. All authors read and approved the final manuscript. AW, as guarantor, accepts full responsibility for the finished article, has access to the data and controlled the decision to publish.
Funding The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.
Competing interests None declared.
Patient and public involvement Patients and/or the public were not involved in the design, or conduct, or reporting, or dissemination plans of this research.
Patient consent for publication Not applicable.
Figure 1 Flow chart of study selection for systematic review and meta-analysis to estimate the pooled level of good self-care behaviour among heart failure patients in Ethiopia.
Figure 2 Forest plot of included studies that assess the level of good self-care behaviour among heart failure patients in Ethiopia.
Figure 3 Funnel plot to assess publication bias for self-care behaviour among heart failure patients in Ethiopia.
Table 1
Database searches for PubMed, Google Scholar and other databases for the level of self-care behaviour among heart failure patients in Ethiopia.
Table 2
Characteristics of included studies to assess the level of self-care behaviour among heart failure patients in Ethiopia
Table 3
Subgroup analysis of the level of self-care behaviour among heart failure patients in Ethiopia (n=13)
Implicit Learning of Viewpoint-Independent Spatial Layouts
We usually perceive things in our surroundings as unchanged despite viewpoint changes caused by self-motion. The visual system therefore must have a function to process objects independently of viewpoint. In this study, we examined whether viewpoint-independent spatial layout can be obtained implicitly. For this purpose, we used a contextual cueing effect, a learning effect of spatial layout in visual search displays known to be an implicit effect. We investigated the transfer of the contextual cueing effect to images from a different viewpoint by using visual search displays of 3D objects. For images from a different viewpoint, the contextual cueing effect was maintained with self-motion but disappeared when the display changed without self-motion. This indicates that there is an implicit learning effect in environment-centered coordinates and suggests that the spatial representation of object layouts can be obtained and updated implicitly. We also showed that binocular disparity plays an important role in the layout representations.
INTRODUCTION
Self-motion changes the viewpoint from which we see objects in our surroundings. Despite such viewpoint changes, we perceive the objects and the scene around us as unchanged. For stable visual perception, there must be a mechanism for dealing with objects in extra-retinal/physical coordinates (spatial or environment-centered coordinates). Whether the visual system has environment-centered representations can be examined by investigating whether perception depends on viewpoint. Viewpoint-independent perception would indicate that the visual system has representations that can be handled in environment-centered coordinates, at least functionally.
There have been several studies on the viewpoint dependence/independence of processes for object recognition. Some researchers have shown that object perception is dependent on viewpoint (Ullman, 1989; Bulthoff et al., 1995; Diwadkar and McNamara, 1997; Hayward and Williams, 2000), whereas others have shown that object perception is independent of it (Biederman, 1987; Hummel and Biederman, 1992). Although the question is still open, there could be different processes with differences in viewpoint dependence (Burgund and Marsolek, 2000).
Layout perception is also important, not only for moving around a space but also for object perception. We need spatial layout information for getting to a place, searching for things and interacting with objects, as well as for object perception itself. Appropriate spatial layouts facilitate object perception in a variety of conditions (Loftus and Mackworth, 1978; Shioiri and Ikeda, 1989; Hollingworth and Henderson, 1999; Sanocki, 2003; Bar, 2004; Davenport and Potter, 2004; Sanocki and Sulman, 2009).
The effect of viewpoint on the perception of object layouts differs from that on the perception of objects in terms of the potential contribution of self-motion. Viewing location is a critical factor in layout perception because self-motion changes the retinal image of the whole scene in a systematic manner. This is different from object perception, where object motion also causes a change in the retinal image of each object. Simons and Wang reported little effect of viewpoint change due to self-motion on perceiving scenes or object layouts (Simons and Wang, 1998; Wang and Simons, 1999). This contrasts with the clear viewpoint dependency found without self-motion (McNamara and colleagues). They suggest that self-motion is a critical factor, although self-motion may not be required when rich visual cues are present (Riecke et al., 2007; Mou et al., 2009; Zhang et al., 2011). The visual system likely has representations of scenes and layouts expressed in environment-centered coordinates and compensates for viewpoint changes that are associated with self-motion.
On the one hand, several studies have suggested that the locations of object representations are automatically encoded (Hasher and Zacks, 1984; Ellis, 1990; Pouliot and Gagnon, 2005), although they did not investigate the effect of viewpoint changes. On the other hand, an automatic process has been suggested to update location representations with self-motion (Farrell and Thomson, 1998). Farrell and Thomson reported an interference effect of self-motion on a pointing task without vision. Localization error in pointing to memorized locations increased when participants rotated their bodies but were asked to point as if they had not rotated, compared with the condition without self-motion. Since there was no visual information, the difference should be attributed to self-motion. This interference effect indicates that an automatic process updates spatial representations, which the participants could not ignore, although it does not rule out contributions of other processes (Naveh-Benjamin, 1987, 1988; Caldwell and Masson, 2001).
Our question is whether layout information in environment-centered coordinates can be processed implicitly and automatically. An implicit process that allows us to memorize scenes and object layouts is expected to be essential in everyday life, particularly if it works in environment-centered coordinates. To move around and to interact with objects, we should recognize objects as fixed in environment-centered coordinates. There is normally no difficulty in such tasks, and we need no explicit effort: no intention to memorize the locations of objects and no consideration of self-motion to update the locations of object representations (including ignoring it). Considering the previous studies, we predicted that layout information is collected and updated implicitly and automatically with self-motion.
To examine whether the representation of a layout is implicitly obtained and used from different locations after self-motion (i.e., updated), we adopted the contextual cueing effect. The contextual cueing effect is a learning effect of spatial layout in visual search displays and is known to be an implicit learning effect (Chun and Jiang, 1998). In that experiment, participants repeatedly searched for a target among distractors in a visual search task. Half of the display layouts, that is, the locations of the target and distractors, were unique to each trial, whereas the remaining layouts were repeated throughout the experiment. The contextual cueing effect makes target detection faster in the repeated layouts. This is implicit learning, since the participants are unaware of the repetitions and the layouts are not recognized correctly (the reported recognition rate of layouts is no better than chance).
We conducted experiments to examine whether the contextual cueing effect survives across a viewpoint change caused by participant self-motion. The stimuli were located in a 3D space presented by a virtual reality system, and the effect of the viewpoint change was investigated. After learning layouts from a particular viewpoint, images from a new viewpoint were presented with or without participant self-motion. If the contextual cueing effect is found under conditions where the viewpoint change is caused by self-motion, we can conclude that an implicit process is responsible for learning the layouts and updating them, so that the layout information is represented in environment-centered coordinates, at least functionally. Although little or no contextual cueing effect has been reported with viewpoint changes (Chua and Chun, 2003), no study has investigated the effect of self-motion.
We also examined the influence of binocular depth cues. Without depth perception, appropriate spatial representations cannot be achieved. No previous study of spatial memory and updating has seriously considered the effect of depth perception. More precise 3D information from binocular disparity might aid in building reliable representations across viewpoint changes, although pictorial depth cues of shading, linear perspective and size changes may be sufficient. Investigation of the effect of depth cues on spatial perception should provide important insights for the use of virtual reality systems.
We use the term "environmental-centered coordinate representation" simply in contrast to "retinal coordinate representation." Environmental-centered coordinate representation here indicates a system that can functionally cope with changes in retinal image due to self-motion, whatever the underlying mechanism is. Two systems are often assumed for spatial layout representations: the allocentric and egocentric systems (Burgess, 2006;Mou et al., 2006;Wang et al., 2006). We discuss the relationship between our findings and these two systems in Section "General Discussion."
EXPERIMENT 1
In Experiment 1, we investigated the transfer of the contextual cueing effect to a stimulus viewed from a different location.
PARTICIPANTS
The participants were 20 naive males, all of whom had normal or corrected-to-normal visual acuity.
APPARATUS
The participant wore a head-mounted 3D display (Z800 3DVisor, eMagin) that presented the stimuli. Participants with glasses had no difficulty wearing the head-mounted display. The display size was 32˚ × 24˚ with a resolution of 800 × 600 pixels. A computer generated stereo images of 3D CG objects on the display. Two chairs were used to set the two viewing positions, based on which the corresponding stimuli were calculated. The experiment was performed in a dark room.
STIMULI
Each stimulus display consisted of 1 T as the target and 11 L's as distractors (Figure 1). Each letter object consisted of two square poles and was within a 1.0 cm × 1.0 cm × 0.2 cm box. The display size for a view was about 2.0˚× 2.0˚. The target orientation was either 90˚or 270˚(the foot of the T pointed either left or right), and the distractor orientation was either 0˚, 90˚, 180˚, or 270˚. The color of the objects was randomly assigned from red, blue, green, yellow, cyan, and magenta. The background was gray. The target and distractors were positioned at randomly chosen intersections of an invisible 6 × 4 × 4 grid with jitters (about 0.2˚on the display) to avoid linear arrangements. The images were drawn on the display assuming a viewing distance of 60 cm to the center of the front surface of the virtual grid.
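As a rough illustration of this placement rule, the following Python sketch samples 12 distinct intersections of an invisible 6 × 4 × 4 grid and adds a small random jitter to each coordinate to avoid linear arrangements. The function and units are hypothetical; the authors' actual stimulus-generation code is not reported.

import random

GRID = [(x, y, z) for x in range(6) for y in range(4) for z in range(4)]
JITTER = 0.2  # roughly the ~0.2 degree display jitter mentioned above (arbitrary units here)

def make_layout(n_items=12, seed=None):
    rng = random.Random(seed)
    cells = rng.sample(GRID, n_items)  # 12 distinct grid intersections
    return [(x + rng.uniform(-JITTER, JITTER),
             y + rng.uniform(-JITTER, JITTER),
             z + rng.uniform(-JITTER, JITTER)) for (x, y, z) in cells]

layout = make_layout(seed=1)  # e.g., item 0 serves as the target, items 1-11 as distractors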
There were two stimulus conditions: 2D and 3D stimuli. The 3D stimulus was a pair of stereo images calculated for the 3D letters, and the 2D stimulus was the same image presented to each eye. There was no depth information of disparity variations among letters in the 2D stimulus, whereas depth was perceived in the 3D stimulus based on disparity. Pictorial cues of shading, linear perspective, and size changes were in both the 2D and 3D stimuli.
PROCEDURE
A session consisted of a learning phase and a transfer phase. In the learning phase, the participant searched for the target over a number of trials, through which the display layouts were learned. The participant indicated the target direction (the foot of the T pointing either left or right) by pressing one of two buttons. There were two types of layouts, as in a typical contextual cueing experiment. One was the repeated layout, which was used once in each block, and the other was a novel layout, which was used only once throughout a session.
At the beginning of each trial, a fixation point was presented for a randomly chosen period between 0.75 and 1.5 s. At the termination of the fixation point, a search display was presented; it was terminated when the participant responded. The computer measured the reaction time (RT) for target detection. Each block consisted of 20 trials: 10 repeated and 10 novel layouts. After 20 learning blocks, two transfer blocks were tested.

FIGURE 1 | Stimulus example (top) and experimental setup (bottom). In the move condition, the viewpoint of the learning phase was 20˚ right from the surface normal of the virtual display in a location, and that of the transfer phase was 20˚ left from the surface normal. In the stay condition, the image condition was the same as in the move condition, whereas the participant did not change location.

There were two transfer conditions: the "move" condition and the "stay" condition. In the move condition, the participant moved from one location to the other after training, whereas in the stay condition the participant stayed at the same position. The two chairs were positioned such that the head was directed ±20˚ from the surface normal of the virtual display (Figure 1). All letters faced along the surface normal. The viewpoint in the learning phase was 20˚ left from the surface normal (see the stimulus example in Figure 1), and that of the transfer phase in the move condition was 20˚ right. The participants moved from one chair to the other. There was a 1-min interval between the learning and transfer phases, during which the participants executed the movement. During the movement, the fixation point was shown on the display and no retinal information of self-motion was provided. In the stay condition, the images of the learning and transfer phases were the same as in the move condition, while the participant stood up from and sat down in the same chair (20˚ left) during the interval between the two phases. The order of the conditions was counterbalanced across participants.
A recognition test was performed at the end of each session to examine whether the participant had explicit memory of each repeated layout. In the recognition test, a distractor replaced the target in each repeated layout. Ten repeated layouts from the search experiment were mixed with 10 novel layouts that had not been shown before. For each of the 20 layouts, the participant was asked three questions: whether he had seen the layout in the search blocks, where the target was located if the layout was recognized, and his confidence in the first answer. Since the recognition rate was not higher than chance level in any condition, we report only the recognition rates below.

RESULTS AND DISCUSSION

Figure 2 shows RT in the learning phase of the move condition. Each data point shows RT as a function of epoch, i.e., the average RT over two blocks of 20 trials each. RT was shorter for the repeated layouts than for the novel layouts after several epochs of learning, replicating previous results (Chun and Jiang, 1998). Figure 3 shows the difference in mean RT between the novel and repeated layouts, which we define as the contextual cueing effect. The contextual cueing effect in the transfer phase was calculated from the average of the two transfer blocks, and that in the learning phase from the average of the last two learning blocks. Results of a one-tailed t-test for the difference between the novel and repeated layouts (i.e., whether RT for repeated layouts was shorter) are shown by asterisks and pluses in the RT figures: ** for p < 0.01, * for p < 0.05, and + for p < 0.1 (the same in Figures 5, 7 and 9). We used a one-tailed test here because the contextual cueing effect is well documented to be an RT-shortening effect, whereas two-tailed tests were used in the other cases below.
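The following Python sketch illustrates, with simulated reaction times, how the contextual cueing effect and the one-tailed paired t-test described above can be computed. The data are randomly generated toy values, not the measured RTs.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
rt_novel = rng.normal(1.30, 0.15, size=20)      # toy per-participant mean RTs (s), novel layouts
rt_repeated = rng.normal(1.15, 0.15, size=20)   # toy per-participant mean RTs (s), repeated layouts

cueing_effect = rt_novel - rt_repeated           # positive values = faster on repeated layouts
t, p_two = stats.ttest_rel(rt_novel, rt_repeated)
p_one = p_two / 2 if t > 0 else 1 - p_two / 2    # one-tailed: repeated faster than novel
print(f"mean effect = {cueing_effect.mean():.3f} s, t = {t:.2f}, one-tailed p = {p_one:.4f}")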
A difference was found between the learning and transfer phases in the stay condition, whereas little difference was seen in the move condition. A three-way within-participants ANOVA was conducted: two disparity conditions (with and without disparity) × two phases (learning and transfer) × two self-motion conditions (stay and move). Significant main effects of phase and self-motion were found: F(1, 19) = 5.51, p < 0.05 for phase and F(1, 19) = 18.06, p < 0.001 for self-motion. A significant interaction was found between phase and self-motion: F(1, 19) = 4.80, p < 0.05. Our primary interest was in the transfer effect, and planned comparisons were performed on the contextual cueing effect between the learning and transfer phases. A t-test revealed a significant difference in the contextual cueing effect between the two phases in the stay condition [t = 2.11, p < 0.05 (2D); t = 2.29, p < 0.05 (3D)], whereas no such difference was found in the move condition [t = 0.02, p = 0.49 (2D); t = 0.13, p = 0.45 (3D)]. These results indicate that the effect of viewpoint changes depends on self-motion.
Importantly, the result in the stay condition, where the same stimulus as in the move condition was used without self-motion, showed little or no contextual cueing effect in the transfer phase, a result consistent with previous studies (Chua and Chun, 2003). The contextual cueing effect in the move condition must therefore rely on a layout representation that can be updated by self-motion. This indicates that there is a mechanism to represent objects in environment-centered coordinates, or a functionally equivalent mechanism. The result also rules out any interpretation based on stimulus image similarities between the two viewpoints.
For the data collected in the recognition test, a t-test showed that the recognition rate was not significantly different from chance in any condition: 49.3% (t = −0.57, p = 0.72), 51.8% (t = 1.02, p = 0.15), 49.0% (t = −0.46, p = 0.68), and 51.3% (t = 0.72, p = 0.23) for 2D stay, 2D move, 3D stay, and 3D move, respectively. In addition, none of the participants reported noticing the repetitions or trying to memorize the configurations. The layout representations thus appear to have been learned implicitly.
EXPERIMENT 2
In Experiment 2, we examined whether the visual system has a retinal representation in addition to an environment-centered one. If there is only an environment-centered coordinate system, the contextual cueing effect should vanish in the move condition when the same retinal images are used in the learning and transfer phases. In other words, no contextual cueing effect would be expected when the same retinal image is used at different positions between the learning and transfer phases. However, if the visual system also has a retinal coordinate system for layout learning, the contextual cueing effect should remain after self-motion.
PARTICIPANTS AND PROCEDURE
Sixteen males (including 12 from Experiment 1) participated in Experiment 2. All participants were naive to the purpose of the experiment and had normal or corrected-to-normal visual acuity. The stimuli and procedure were the same as in Experiment 1 with two exceptions. First, the stimulus image in the transfer phase was the same as the image in the learning phase. That is, the image from the original viewpoint, not from the viewpoint at the second location, was used (Figure 4). Second, we used only the move condition.
FIGURE 3 | Left and right panels show the results of 2D and 3D stimuli, respectively. The vertical axis indicates the difference in reaction time between the novel and repeated configurations, that is, the contextual cueing effect. The black bar shows the results of the last two epochs of the learning phase, and the white bar shows the results of the epochs in the transfer phase. Error bars represent standard error across participants. Asterisks and pluses show the results of a one-tailed t-test for the difference between the novel and repeated layouts: ** for p < 0.01, * for p < 0.05, and + for p < 0.1 (the same in Figures 5, 7 and 9).
RESULTS AND DISCUSSION
The results in the learning phase showed a contextual cueing effect in both the 2D and 3D conditions, as in Experiment 1 (Figure 5). In the transfer phase, a contextual cueing effect was also found in both conditions. A two-way within-participants ANOVA (two disparity conditions × two phases) showed no main effect of disparity [F(1, 15) = 0.001, p = 0.98] or phase [F(1, 15) = 0.004, p = 0.95] and no interaction [F(1, 15) = 0.038, p = 0.85]. A t-test revealed no significant difference in the contextual cueing effect between the learning and transfer phases [t(15) = 0.17, p > 0.1 for 2D; t(15) = 0.13, p > 0.1 for 3D]. The contextual cueing effect was observed when the participant moved while the stimulus was unchanged. This suggests that a retinal coordinate system is involved in the contextual cueing effect, in addition to the environment-centered coordinate system.
The recognition test after each session showed that participants recognized the repeated layouts no better than chance [t(15) = 1.04, p > 0.1 for 2D and t(15) = 0.59, p > 0.1 for 3D].
FIGURE 4 | Experimental conditions in Experiment 2. There was only the move condition. The viewpoint of the learning phase and that of the transfer phase was 20˚ right from the surface normal of the virtual display. In the transfer phase, the participant and the virtual display moved by the same amount of 40˚ so that the retinal image was the same as in the learning phase.
Note that 12 participants were from Experiment 1, and this was the second time for them to participate in the contextual cueing experiment. Nonetheless, implicit learning was robustly found in those participants. For them, the contextual cueing effect in the learning phase showed no difference between Experiments 1 and 2 [t(11) = 0.54, p = 0.74], although RT was slightly shorter in Experiment 2.
EXPERIMENT 3
In Experiment 1, the contextual cueing effect was found with a viewpoint change of 40˚. In Experiment 3, we examined whether the contextual cueing effect transfers across a larger angle, namely 90˚. The contextual cueing effect may not transfer across large angle differences, because spatial updating has been reported to depend on viewpoint with large viewpoint changes.
PARTICIPANTS AND PROCEDURE
The participants in Experiment 3 were the 12 men who took part in Experiments 1 and 2. All participants were naive to the purpose of the experiment. The stimuli and procedure were the same as in Experiments 1 and 2, except that the viewpoint change was 90˚ instead of 40˚ (Figure 6). There were two stimulus images in the transfer phase: the retinal image from the new viewpoint, as in Experiment 1 (different view images), and the retinal image from the viewpoint of the learning phase (no image change).
RESULTS AND DISCUSSION
The results in the learning phase showed the contextual cueing effect in all conditions (Figure 7). A three-way within-participants ANOVA was conducted for the data in the different view image condition, as in Experiment 1: two disparity conditions (with and without disparity) × two phases (learning and transfer) × two self-motion conditions (stay and move). A significant main effect of phase was found, F(1, 11) = 21.11, p < 0.001, while no significant interaction was found between phase and self-motion, F(1, 11) = 0.82, p = 0.39. This contrasts with the significant interaction found between phase and self-motion in Experiment 1. A t-test for planned comparisons revealed a significant difference in the contextual cueing effect between the learning and transfer phases for the 2D stimuli (t = 2.51, p < 0.05 for the 2D stay condition and t = 2.77, p < 0.05 for the 2D move condition). The trend of the average data for the 3D stimuli was similar to that for the 2D stimuli, although the difference was not significant (t = 1.80, p < 0.1 for the 3D stay condition and t = 0.59, p = 0.59 for the 3D move condition). These results suggest that the contextual cueing effect disappears in the transfer phase with 2D stimuli, and possibly also with 3D stimuli, in this condition. There is little transfer of the contextual cueing effect when the viewpoint changes by 90˚, even with corresponding self-motion.
There is an interesting difference between 2D and 3D stimuli although it is not statistically significant. The contextual cueing effect, the difference between the repeated and novel layouts, for the transfer phase is close to zero for 3D stimuli, whereas the effect for 2D stimuli is below zero. We do not have any interpretation of this pattern of results although this may be related to depth perception and memory of 3D layouts. We leave this issue for future studies.
For the results of the condition with no image change (the same retinal images), a two-way within-participants ANOVA (two disparity conditions × two self-motion conditions) was performed. The result showed no main effect of disparity [F(1, 11) = 0.26, p = 0.62] or self-motion [F(1, 11) = 0.79, p = 0.39] and no interaction between disparity and self-motion [F(1, 11) = 0.00, p = 0.99]. A t-test revealed no significant difference in the contextual cueing effect between the learning and transfer phases [t(11) = 1.03, p = 0.32 for 2D and t(11) = 0.64, p = 0.54 for 3D]. These results indicate that the impairment caused by 90˚ of self-motion is much less for retinal coordinate representations than for environment-centered representations (compare the top and bottom panels in Figure 7).
The recognition rate was not significantly different from chance level (p > 0.1 for all conditions): 50.4, 47.1, and 50.4% for the stay, move, and move (same retinal image) conditions with 2D stimuli and 48.3, 49.7, and 52.5% for the stay, move, and move (same retinal image) conditions with 3D stimuli.
EXPERIMENT 4
The previous experiments showed similar results with the 2D and 3D stimuli. This may be surprising given the difference in depth cues, because 3D perception is indispensable for layout identification from multiple viewpoints. One possible interpretation for finding no difference is that the depth information in the 2D images was sufficient for the viewpoint-independent contextual cueing effect in the previous experiments. Minimizing the effect of monocular depth cues should differentiate the 2D and 3D stimulus conditions, because less precise representations in environment-centered coordinates would then be obtained with the 2D stimuli. In Experiment 4, we used variable distractor sizes so that size variations in the 2D images would confuse the relative depth among stimulus items.
PARTICIPANTS AND PROCEDURE
Twenty new naive participants took part in Experiment 4. All had normal or corrected-to-normal visual acuity. The stimuli and procedure were the same as in Experiment 1, except that the size of the distractor letter, L, varied (Figure 8). The distractor size varied from −10 to 25% of the size used in the original condition. The target size was the same as in the previous experiments.
RESULTS AND DISCUSSION
The results revealed a difference in the transfer of the contextual cueing effect between the 2D and 3D conditions (Figure 9). A three-way within-participants ANOVA was conducted: two disparity conditions × two phases × two self-motion conditions. Significant main effects of disparity and phase were found: F(1, 19) = 7.81, p < 0.02 for disparity and F(1, 19) = 10.68, p < 0.01 for phase. A significant interaction was found between disparity and phase: F(1, 19) = 6.48, p < 0.02. A t-test revealed no statistically significant difference in the contextual cueing effect between the learning and transfer phases in the 3D move condition [t(19) = 0.33, p = 0.75], as in Experiment 1. However, the same t-test showed a significant difference between the learning and transfer phases in the 2D move condition [t(19) = 2.22, p < 0.05]. Transfer of the contextual cueing effect disappeared for the 2D stimuli with reduced monocular depth cues. This indicates that disparity information contributes to building 3D layout representations that can be used with self-motion. The general trend of the results in the stay conditions was similar to that in Experiment 1. However, the t-test revealed no statistically significant difference in the contextual cueing effect between the learning and transfer phases in either the 2D or the 3D condition [t(19) = 2.01, p = 0.059 for 2D and t(19) = 1.22, p = 0.235 for 3D]. These results may imply that, for some reason, the experiment was not sensitive enough to detect the transfer of the contextual cueing effect in this condition.
The recognition rate was not significantly different from chance level (p > 0.1 for all conditions): 51.8, 50.5, 51.5, and 52.0% for the 2D stay, 2D move, 3D stay, and 3D move conditions.
GENERAL DISCUSSION
In the transfer phase of Experiment 1, the contextual cueing effect decreased drastically in the stay condition but remained unchanged in the move condition. These results indicate that viewpoint-independent memory of spatial layouts requires self-motion, as indicated in the previous studies by Simons and Wang (Simons and Wang, 1998; Wang and Simons, 1999). The present study revealed that spatial layouts are represented and updated implicitly and automatically in the visual system, as shown by the transfer of the contextual cueing effect across different viewpoints. The change in the layout display did not reduce the contextual cueing effect when the change was consistent with the viewpoint change due to participant self-motion. The memory should be obtained implicitly, since the contextual cueing effect is implicit, as shown in previous studies and in our recognition test.
Mou and colleagues pointed out the importance of a spatial reference in spatial memory across self-motion (Mou et al., 2009; Zhang et al., 2011). They conducted experiments similar to those of Simons and Wang and found that cueing a reference direction provided similar or better memory performance without self-motion compared with the condition with self-motion. The study revealed that the self-motion-related signal is not always the best information for spatial updating, and that tracking a reference direction can be a better cue. This is not necessarily inconsistent with the present results.
Rather, their and our results suggest that different cues are used in different processes for spatial updating.
Experiment 2 suggested that a retinal coordinate system, or snapshots of the retinal image, contributes to layout memory. In the condition where the participant moved while the stimulus display did not change (the no image change condition), the contextual cueing effect remained. This is inconsistent with the assumption that there is only an environment-centered coordinate system. If there were only an environment-centered coordinate system, the contextual cueing effect should have been reduced similarly to the effect in the condition where the display changed without self-motion (the stay condition in Experiment 1). However, it should be noted that there is a hint of an influence of self-motion: Experiment 3 showed some reduction of the contextual cueing effect in the no image change condition after a self-motion of 90˚ rotation, although the effect was not statistically significant.
An alternative interpretation is possible. Self-motion information may not be used to update layout representations, even with self-motion, when the participant notices that the layouts are images from the viewpoint of the learning phase. There is no reason to expect a reduction in performance on such occasions. The visual system may have only layout representations in environment-centered coordinates, which may or may not be updated by self-motion depending on the stimulus layouts. Such a system would provide the same function as a retinal image representation, and the present experiments cannot differentiate the two possibilities. Whichever system the visual system has, Experiment 2 led us to conclude that the visual system can access retinal coordinate images.
In Experiment 3, we found that the contextual cueing effect does not survive self-motion as large as 90˚, but that the retinal coordinate system was influenced much less. This indicates that the implicit memory system assessed in the present study has a limitation for self-motion, as previously pointed out for automatic updating. Note that richer visual information and/or visual motion during self-motion could potentially extend the limits of viewpoint-independent processing (Riecke et al., 2007; Mou et al., 2009; Zhang et al., 2011).
In Experiment 4, we found that the disparity cue plays a critical role in the viewpoint-independent contextual cueing effect under conditions where pictorial depth cues do not provide sufficient depth information. To obtain representations in environment-centered coordinates, depth perception is necessary. The present results showed that manipulation of depth cues influenced the viewpoint-independent effect. Clearly, the implicit viewpoint-independent effect is related to depth perception. This is not surprising, but it shows the importance of 3D perception when investigating the memory of spatial layouts.
For spatial layout representations, two systems are often assumed: the allocentric and egocentric systems (Burgess, 2006; Mou et al., 2006; Wang et al., 2006). Researchers appear to agree that both systems contribute to spatial memory and updating with self-motion (Waller, 2006; Waller and Hodgson, 2006; Sargent et al., 2008). In general, studies in the field of spatial memory focus on much larger-scale spatial layouts than those used in the present study. However, there is no reason to believe that the present results and experimental results with larger-scale layouts reflect different mechanisms. We discuss the possible relationship between our results and the allocentric and egocentric systems. On the one hand, two facts suggest that the implicit learning/memory phenomenon we found is not related to the allocentric system. First, the allocentric system is suggested to have representations that can be used from any viewpoint (e.g., Waller and Hodgson, 2006). As shown in Experiment 3, the contextual cueing effect could not be maintained or updated with a rotation of 90˚. Second, a salient feature that can be used as a reference appears to be critical for memorization and retrieval in the allocentric system (Mou et al., 2009; Zhang et al., 2011). No salient feature contributed to our experiments: the layouts consisted of randomly arranged items, and the recognition test revealed that there were no memorable salient features.
On the other hand, we think that the implicit learning effect we found is perhaps more closely related to the egocentric system, for the following reasons. First, the egocentric system is usually characterized by its requirement of self-motion (Wang and Spelke, 2002; Burgess et al., 2004; Rump and McNamara, 2007). The spatial memory system pertaining to the present experiments also required self-motion to update memorized layouts. Second, egocentric memory is reported to be viewpoint dependent for large viewpoint changes. We found a contextual cueing effect with the 40˚ viewpoint change, but not with 90˚. While the angle of 90˚ is smaller than the 135˚ reported by Waller and Hodgson, the difference is likely attributable to the differences in stimulus and task between the two studies. The experiment of Waller and Hodgson used objects in a room and pointing to objects based on an intentionally memorized map of the room without visual information. It is not likely that our participants obtained the whole 3D structure of the layout, because the image from only one view was presented. We therefore did not expect quantitative similarity in viewpoint dependency between their experiments and ours. Based on these considerations, we suggest that the contextual cueing effect we found reflects a characteristic of the egocentric spatial memory system.
There are, however, two issues with the presumption that the egocentric memory system is responsible for the contextual cueing effect we found. First, some previous studies have suggested that the egocentric memory system is not automatic and requires attention and effort (Wang and Brockmole, 2003a,b; Wang et al., 2006). This appears to contradict the implicit nature of the contextual cueing effect. The difference in experimental technique could explain the apparent contradiction. We measured the effect on layouts to which the participants did not have to devote explicit attention. This technique allowed us to investigate the effect of stimuli that were not given attention. In contrast, previous studies of spatial memory measured the effect on stimuli to which the participants had to devote explicit attention. In such experimental conditions, attention very likely influences the measurements, in addition to any possible implicit process. Consequently, both explicit and implicit processes could influence the results. The previous studies perhaps obtained results that indicate the influence of an attention-requiring system; however, such results do not exclude the possibility that an implicit process also contributes to the performance.
Second, updating in the egocentric process is assumed to depend on the number of objects to memorize, for the following reason (Wang et al., 2006). A vector expresses the location of an object in egocentric coordinates. The number of vectors to update increases with the number of objects, in order to maintain location information during body movements. Therefore, more processing is required to update the locations of a larger number of objects. This assumption contradicts the fact that the contextual cueing effect shows little limitation of information capacity: the number of layouts learned is often more than 10, and the number of distractors in each layout is usually also more than 10. Spatial updating of implicitly learned layouts is thus expected to be independent of object number. If the body location is updated in environment-centered coordinates, as assumed for allocentric spatial representations, no effect of object number is expected. However, we do not discuss this issue in detail because we see no clear support for an effect of object number in spatial memory: Wang et al. (2006) showed an effect of object number, while Hodgson and Waller (2006) did not.
Based on the above discussion, we claim that the visual system has the ability to obtain environment-centered representations and to update them implicitly and automatically with self-motion. Such representations are important for moving around and interacting with objects without specific effort in everyday life. The finding of a retinal representation system in our study also has an important implication for another aspect of spatial perception. A viewpoint-specific system has been suggested to play a non-trivial role in spatial perception (Hermer and Spelke, 1996; Lee and Spelke, 2010). When a participant loses his or her egocentric location in the environment (i.e., in environment-centered coordinates), he or she uses visual information viewed from a particular point (snapshots) to estimate his or her own location (Wang and Spelke, 2002). For this estimation, the implicitly learned retinal images can be used. This suggests that the mechanism underlying the contextual cueing effect from a specific viewpoint is as important for spatial perception while moving in one's surroundings as that of a viewpoint-independent, egocentric system.
Online and computer-assisted career guidance: Are students prepared for it?
Career services should be a priority for any higher education institution. These structures should be prepared to support the design, implementation, and evaluation of academic and career plans and decisions, providing services properly aligned with students’ needs, preferences, and characteristics. The present study addressed these questions by analyzing data from an online survey administered to 361 Portuguese higher education students. Results indicate that most students do not seek support in these structures, despite having various needs, particularly in the scope of exploring the world of work, developing goals and implementing career strategies. Most students are open to the use of online and computer-assisted career guidance, although their preference is for in-person interventions. These results allow Career Offices to identify new directions and opportunities to work with students, while demonstrating their value to the academic community and the labor market.
Introduction
Unless further education is attained, the common goal of most (if not all) students upon graduation is employment. Over 80% of students claim that employment after graduation is a critical factor in their decision to enter university (Stolzenberg et al., 2019). Therefore, under modern economic conditions, universities have found themselves competing in the developing market of educational services. Almost every higher educational institution nowadays faces the question of its students' employability (e.g., Vinson et al., 2014; Crowne et al., 2020), and Career Offices, Career Services or Employability Centers have been established in many universities worldwide to support students with their career development and improve their competitiveness in the labor market.
Throughout the years, the scope of career guidance provided at universities has reflected changes in economic, political, social, generational, and cultural norms (Dey and Cruzvergara, 2014). In a VUCA (Volatile, Uncertain, Complex, and Ambiguous) environment, characterized by economic downturns, labor market changes (e.g., the overall effect of technological advances and the elimination of particular required skills; OECD, 2019), high levels of unemployment among college graduates (U.S. Department of Education, 2020), and, more recently, the impact of the COVID-19 pandemic on the probable prevalence of remote jobs in the future (Bartik et al., 2020), it appears critical for universities to rethink the ways they help graduates transition into careers. Experts agree that nowadays the purposes of Career Offices should be more comprehensive, meeting college students' wide range of vocational concerns to better prepare them for a highly changing and competitive job market (Gallagher et al., 1992; Gallup, 2016). To help career offices develop, students' specific requirements need to be assessed so that their expectations can be understood and fulfilled.
However, finding out students' career needs might not be enough on its own. Career offices are competing in the attention economy like everyone else, and for students to fully capitalize on the career services provided by their university, the right resources to maximize engagement are required. Thus, identifying which specific types of career interventions students find useful and necessary is equally important for adapting the delivery (Hayden and Ledwith, 2014). Indeed, assessing students' career needs and intervention preferences, and then aligning these expectations with the skills development at university career centers, has vital implications for all parties involved: universities, students, and future employers in the pursuit of a better workforce (e.g., Vinson et al., 2014; Crowne et al., 2020).
Several past studies have addressed the career needs assessment of university students at a national level in China (Li and Jung, 2021); at a single-university level in the United States (Fouad et al., 2006) and Romania (Crişan et al., 2015); and at a faculty level in Turkey (Güneri et al., 2016) and Taiwan (Yang and You, 2010). Moreover, overall counseling needs were explored in Finland (Lairio and Penttinen, 2006), Greece (Giovazolias et al., 2010), and Portugal (Pinto et al., 2016; Pinto, 2019). Results indicate that, with differences depending on the population, gender, age, race, and socioeconomic status, the needs for help in academic and professional topics seem to surpass the needs for personal and social support (Giovazolias et al., 2010). In general, most students search for and receive career counseling or guidance in the final years of their studies (Pereira et al., 2020). Commonly stated career needs include obtaining information about the world of work, the transition from university to working life (e.g., job search strategies; Pinto, 2019), career planning, and stress management (Güneri et al., 2016), as well as strategies for overcoming procrastination and time management skills (Pinto, 2019). Also, students are quite passive about using these services (Crişan et al., 2015), preferring employers to come to the university and advertise open vacancies rather than having to seek the information themselves. Finally, students indicate a preference for face-to-face rather than online interventions, and for group rather than individual interventions (Crişan et al., 2015). Most students are not very satisfied with the services provided by their career offices (Pereira et al., 2020).
Indeed, researchers agree on the need for an accurate and regular assessment of college students' needs (Gallagher et al., 1992) to maintain high standards in the services provided. Yet, despite the importance of developing the services of career offices and aligning them with students' expectations, so as to increase the competitiveness and employability of graduates, it appears that not all higher education institutions in Portugal have either fully operating career centers or developed and accessible sets of services (Khurumova, 2022). With significant annual growth in the number of tertiary education students in the country, both local and foreign (https://www.dgeec.mec.pt/np4/1109.html), and a paradoxically high level of youth unemployment (18.2% as of the third quarter of 2019, against an EU average of 14.4%), the provision of career services might be even more important in the country.
Considering that the provision of face-to-face career guidance services is universally limited by time allowances and personnel numbers (e.g., on average one counselor for 2,500 students at Master's level, according to NACE, 2017), online services seem beneficial in that they can be provided to a larger number of students and at times convenient to them. Nowadays, the Internet has steadily become an important aspect of everyone's life regardless of age, as a source of communication, entertainment, and information. As a result, this increasing use of and reliance on the Internet and technology has created opportunities for career counseling professionals to rethink and develop their services (Zainuddin et al., 2020). CAEL (2018), in its report, noted several ways of reinventing career services, and incorporating technology to better assist students in engaging with career-related activities and connecting with employers is, in fact, one of them. According to the report, technology enables the provision of virtual career services and tools, targeted outreach to students, and new means of connecting with employers and business owners. Today, it is impossible to imagine effective university entities such as Career Offices that do not use Internet resources in their work. One advantage of technology-assisted or -mediated career counseling is that it is available 24 h a day, 7 days a week, and provides instant feedback, which appeals to the younger generation (Zainuddin et al., 2020). Other advantages of online interventions may include their dynamism and the reduced time needed to update information (Venable, 2011), possible interaction with employers (e.g., through incorporated social media; Venable, 2011), wider outreach (Zainuddin et al., 2020), and a vast number of methods for providing the guidance (Zainuddin et al., 2020). Nevertheless, for all of the above, it is very important to understand whether these added values of online career counseling are perceived in the same way by its users (i.e., the students).
The present work is an exploratory study aiming to identify and prioritize the current career needs of students enrolled in Portuguese higher education. More specifically, it aims to understand how career offices should adapt in order to align with the needs of the service recipients, and whether online interventions are currently in demand among students. The findings of the study thus contribute to understanding what could subsequently be used in the development of such interventions and/or in the adaptation of the existing services. This should result in a service that meets students' needs more effectively.
Instrument
Career Offices in Higher Education: Needs Assessment Questionnaire was specifically developed for this study, taking into consideration the Career Self-Management Model of Greenhaus et al. (2009) (Pinto and Taveira, 2010), which considers career management as a decision-making and problem-solving process that encompasses four main stages: self-knowledge; exploration of the environment; development of goals; and development and implementation of action plans. The development of the items also considered other previously existing questionnaires published in scientific research (e.g., McBride and Muffo, 1994;Briscoe, 2002;Yang and You, 2010;Güneri et al., 2016;Pinto, 2019). This questionnaire was submitted for (i) evaluation of the content validity with a group of experts; (ii) evaluation of the clarity of the questionnaire content, and completion of the questionnaire with a 30-day interval for calculation of the Intraclass Correlation Coefficient, with a group of students; and (iii) study of validity and internal consistency through Confirmatory Factor Analysis (Khurumova, 2022).
This instrument is organized as follows: (i) Knowledge about the Careers Office of your Higher Education Institution: three questions assessing whether students are aware of the existence of a Career Office at their HEI, how they obtained this knowledge, and whether they have ever used its services; (ii) Preferred Career Intervention Modality: a list of 10 options from which students select their preferred three in terms of career intervention. The options include, for example, individual or group career counseling sessions, in-person or online career counseling sessions, employability workshops, and mentoring; a further question allows students to indicate another preferred type of support not mentioned in the list; (iii) Career Needs of Higher Education Students: a list of 23 career needs, including, for example, the need for support in preparing a CV (curriculum vitae), in using social platforms for job search, and in negotiating job offers. Responses to each item are made on a four-point Likert scale (1 = no need, 4 = high need); (iv) Own Career Needs: the previous list of 23 career needs, from which students select the five that best represent their current support needs; (v) Joining Online Career Services: a single item (0-10 points, Net Promoter Score) assessing the probability of the students' adherence to an online or computer-assisted career counseling service, if it were made available by their HEI; and (vi) Other Additional Comments: an open-ended question in which students were asked to indicate any additional comments regarding their university's Careers Office and the services it provides.
Data collection procedures
All research projects developed within the scope of the Faculty of Humanities (FCH) of the Catholic University of Portugal (UCP) had to be, at the time of the development of this study, submitted for approval to the Católica Research Center for Psychological, Family and Social Wellbeing (CRC-W). This submission implied a presentation of the study in terms of pertinence, theoretical basis, objectives, methodology, and procedures and ethical care in the data collection process (namely, questions regarding anonymity and confidentiality of data, withdrawal from participation without any penalty). Data were collected between January and August 2021. The Careers Office of the FCH-UCP sent by email an invitation to all students to collaborate in this research and the Office of Communication and Marketing (GCM) released it on various digital platforms. Moreover, each of the Careers Offices of a total of 89 Portuguese HEIs were also contacted via email or through their social platforms (e.g., Facebook page) with a request to distribute the questionnaire among their students. For this purpose, an Ethical Declaration from the CRC-W was submitted upon request. The invitation email contained information regarding the purpose of the study, as well as the link to the assessment protocol inserted in the Qualtrics platform. 3 The assessment protocol included a more detailed explanation of the goal of the study, an informed consent, the assessment instrument previously presented, and a brief sociodemographic questionnaire. The average time to complete the assessment protocol was 8 min.
Data analysis procedures
Data were entered into a database and processed using statistical analysis software (IBM SPSS Statistics for Windows, Version 26.0; IBM Corp, 2019). Exploratory data analyses were performed to examine whether there were problems in the data, such as outliers, non-normal distributions, coding problems, and/or missing values, and to examine the extent to which the assumptions of the planned statistics were met. We used descriptive statistics to analyze the students' needs and intervention preferences. In addition, correlational analyses between preferred career intervention modality and higher education students' career needs were performed. Several decision trees were also conducted to predict the likelihood that students would use an online career service if it were provided by the Career Office of their respective universities/colleges. The decision tree procedure produces a tree-based classification model that assigns cases to groups, or predicts values of a dependent (target) variable, based on independent variables (predictors; IBM SPSS Decision Trees 26). The decision trees were carried out using the CHAID method (chi-squared automatic interaction detector algorithm). The risk and classification tables were analyzed to evaluate how well the models work. Results were considered statistically significant at p < 0.05.
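Since SPSS's CHAID procedure may be unfamiliar, the following is a minimal sketch of its core step, reimplemented in Python (the study itself used IBM SPSS Decision Trees 26, not this code): for each candidate predictor, cross-tabulate it against the target category and select the predictor with the strongest chi-squared association. The data frame and column names are hypothetical placeholders.

```python
# Sketch of a CHAID-style first split: pick the categorical predictor
# whose contingency table with the target has the smallest chi-squared
# p-value. Full CHAID also merges predictor categories and applies a
# Bonferroni adjustment, which this sketch omits.
import pandas as pd
from scipy.stats import chi2_contingency

def best_chaid_split(df: pd.DataFrame, predictors: list, target: str):
    """Return (predictor, chi2, p) for the strongest association with target."""
    results = []
    for col in predictors:
        table = pd.crosstab(df[col], df[target])       # observed contingency table
        chi2, p, dof, expected = chi2_contingency(table)
        results.append((col, chi2, p))
    return min(results, key=lambda r: r[2])            # smallest p-value wins

# Hypothetical usage mirroring the first analysis:
# best_chaid_split(data, ["gender", "age_group", "degree", "institution_type"],
#                  "nps_category")
```

Repeating this split recursively on each child node, until no predictor reaches significance, yields the tree; nodes with no significant child split are the "terminal nodes" referred to in the Results.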
Knowledge about the career offices at HEIs
Results indicate that most students, namely 277 participants (76.7%), had heard about their Careers Office, while 84 students (23.3%) had not. This information was obtained through emails from the Careers Offices (n = 16, 4.7%); other sources stated were other students (n = 30, 8%), referral by a lecturer (n = 29, 8%), and the institutional webpage (n = 22, 6%). In terms of usage frequency, among those who knew about the Career Office, 263 (72.9%) never used its services, while 74 (20.5%) and 15 (4.2%) used the services 1-2 or 3-4 times a year, respectively.
Preferred career intervention modality
The most preferred career intervention modalities were (see Table 1; highlighted in bold): online information about internship and/or job opportunities (n = 230, 64%), in-person individual guidance and counseling sessions (n = 187, 52%), career mentoring programs/sessions (n = 185, 51%), career events (n = 15, 44%), and in-person workshops (n = 147, 41%).

Own career needs

Table 3 presents the list of 23 career needs from which participants were asked to indicate the five they considered to best represent their own current support needs. The majority (n = 189, 52%) indicated that their priority in terms of current career needs was identifying the type of job I'm best fitted for; the second most popular option (n = 133, 37%) was exploring different career options, such as determining my own choice between advanced studies or employment after graduation, followed by gaining experience through internships (n = 123, 34%). The lowest number of participants (n = 21, 6%) expressed the need for using social media platforms to search for job offers.
Joining online career services
A mean score of 7.53 (SD = 2.197, range 0-10) was found for the students' likelihood of joining an online or computer-assisted career counseling service, were it made available by their college. Since a Net Promoter Score was used, 98 participants (27%) were classified as having a detractor attitude, 128 (36%) a passive attitude, and 135 (37%) a promoting attitude toward joining this type of service.
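For readers unfamiliar with the Net Promoter Score convention, here is a minimal sketch of the bucketing assumed above (the standard NPS cut-offs are 0-6 detractor, 7-8 passive, 9-10 promoter; the paper does not spell out its exact cut-offs, so these are an assumption):

```python
# Standard NPS bucketing of a 0-10 likelihood rating; the exact cut-offs
# used in the study are assumed, not stated in the text.
def nps_category(score: int) -> str:
    if score <= 6:
        return "detractor"
    if score <= 8:
        return "passive"
    return "promoter"

ratings = [7, 9, 3, 8, 10]
print([nps_category(s) for s in ratings])
# ['passive', 'promoter', 'detractor', 'passive', 'promoter']
```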
Career needs of higher education students and preferred career intervention modalities: Relationships exploration
The relationship between the career needs of higher education students and the preferred career intervention modalities was analyzed. There is a preference for in-person and individual career intervention modalities (vs online or group ones). More specifically, in-person individual career guidance is the preferred modality for determining own interests and developing new ones (r = 0.162, p = 0.002), determining own skills and developing new ones (r = 0.190, p < 0.001), determining own values and lifestyle (r = 0.138, p = 0.009), determining personality traits and their relationship with specific environmental contexts (r = 0.100, p = 0.043), identifying the type of job I'm best fitted for (r = 0.112, p = 0.034), developing clear, specific and realistic career goals (r = 0.110, p = 0.038), and negotiating job offers (r = 0.104, p = 0.040).
Also, in-person workshops are the preferred modality for exploring different career options, such as determining my own choice between advanced studies or employment after graduation (r = 0.126, p = 0.017), learning job search strategies (r = 0.176, p = 0.001), discussing career strategies that increase the likelihood of achieving my career goal (r = 0.119, p = 0.023), selecting a new academic degree (r = 0.108, p = 0.041), gaining experience through internships (r = 0.144, p = 0.006), developing job interview skills (r = 0.106, p = 0.044), using social media platforms to search for a job (r = 0.151, p = 0.004), negotiating job offers (r = 0.133, p = 0.011), transferring skills gained in the course to the workplace (r = 0.154, p = 0.003), and supporting soft-skills development (r = 0.147, p = 0.005). In-person group career guidance is also preferred for learning how to be a freelancer or start one's own business (r = 0.105, p = 0.046).
When the topic concerns developing clear, specific and realistic career goals, or discussing career strategies that increase the likelihood of achieving my career goal, students prefer online career guidance: individual sessions for the former and group sessions for the latter.
Probability of attending an online career guidance or intervention program
Next, we present the results of several decision trees conducted to predict the likelihood that students would use an online career service if it were provided by the Career Office of their respective universities/colleges. The dependent variable, "If the Career Office at your school/university had an online and computer-assisted career guidance intervention or program, what would be the probability for you to attend it?", being a Net Promoter Score-type variable, was organized into three categories: students who are detractors, students who are passive, and students who are promoters of using these types of services. Three independent analyses were performed, using as independent variables: (i) sociodemographic variables (gender, age, academic degree, public vs. private institution, area of study); (ii) the higher education students' career needs; and (iii) own career needs.
Decision tree with sociodemographic variables
This tree diagram (Figure 1) shows that gender is the best predictor of the probability of attending an online career guidance or intervention program [χ²(2) = 8.577, p = 0.041]. Female students are more likely to be promoters (40.3%) of this type of service (vs 36% passive and 23.7% detractor). In contrast, male students are more likely to be detractors (39.7%, vs 33.3% passive and 26.9% promoters).
The risk estimate of 0.598 (error = 0.026) indicates that the risk of misclassifying a student is approximately 60%; the model classifies approximately 40.2% of the students correctly. For students with a passive behavior, the model predicts 0% of them correctly; these are often classified as promoters. For students with a detractor behavior, it predicts their behavior in only 31.6% of the cases, meaning 68.4% of these students are inaccurately classified. For students with a promoter behavior, the model predicts 84.4% of them accurately.
Decision tree with the career needs of higher education students
This tree diagram (Figure 2) shows that the personal concern networking effectively is the best predictor of the probability of attending an online career guidance or intervention program [χ²(2) = 23.272, p < 0.001]. For the category of those with a reduced need, this is a terminal node, since there are no child nodes. Within this category, 40.4% of students take a passive stance on the likelihood of seeking online and computer-assisted career guidance, while 35.4% take a detractor stance. For the category of students with a moderate to high need, the model includes one more predictor: gaining experience through internships. Of the students who indicate a low need for internships, 33.9% show a promoting behavior (vs 33.9% detractor and 32.1% passive), while of the students who indicate a moderate to high need, 53.5% show a promoting behavior (vs 15.3% detractor and 31.5% passive).
The risk estimate of 0.554 (error = 0.026) indicates that the risk of misclassifying a student is approximately 55%, which means that the model classifies approximately 44.6% of the students correctly. Students with a passive behavior are wrongly classified as promoters almost 50% of the time, and those with a detractor behavior are never (0%) correctly identified. For students with a promoter behavior, the model predicts 71.1% of them accurately.
Decision tree with the variables of own career needs
This tree diagram (Figure 3) shows that the personal concern supporting soft-skills development is the best predictor of the probability of attending an online career guidance or intervention program [χ²(2) = 6.157, p = 0.046]. For both groups, those who feel the need for soft-skills development support and those who do not, this is a terminal node, since there are no child nodes. However, those who do not feel this type of need behave mostly passively toward using an online career service (37.5%), while those who feel this need tend to adopt mostly a promoting behavior (51.9%). The risk estimate of 0.604 indicates that the category predicted by the model is wrong in 60% of the cases (error = 0.026), i.e., the model classifies approximately 39.6% of the students correctly. For students with a detractor behavior, the model predicts 0% of them correctly; for students with a promoter behavior, it predicts their behavior in only 20% of the cases, which means that 80% of students with a promoter behavior are inaccurately classified as passive. Students with a passive behavior, however, are accurately identified in 90.6% of the cases.
Discussion
This paper focuses on a study aimed at identifying the current career needs of students enrolled in Portuguese higher education and whether online interventions would be in demand among those students. Based on the results of the analysis, several conclusions can be drawn.
Even though most students are aware of the existence of a career office at their university, only a small proportion of the participants have ever used its services. Compared to the study by Gallup (2016), which stated that 52% of students in the US visit Career Offices at least once over their undergraduate studies, our findings are clearly lower than expected. Most students claim they need career support, yet, based on our findings, they rarely seek it. These results are congruent with those obtained by Crişan et al. (2015), which indicate a passive attitude by most students about seeking support on career topics. Career offices should invest in more intensive and comprehensive marketing plans that bring them closer to their target audience.
Considering the career needs, it is noteworthy that the ones most highlighted by students fall into two major areas: exploring the world of work and developing strategies that favor the achievement of career goals. Such a range of services (e.g., internship experiences and job vacancies; job interview preparation sessions) is quite common at career offices, and the majority of offices do provide job interview preparation sessions on demand and distribute information on internships and employment. What is interesting, however, is that when asked about current personal needs, students report needing counseling support in understanding whether they should continue studying or find employment after graduation and, with the latter, in identifying the type of employment that would be the best fit. But there is hardly any focus on self-knowledge, with aspects such as knowing one's life values and personality traits being two of the least valued needs. Compared to previous studies, there is a tendency to focus on aspects such as job search strategies and career decision-making, devaluing the self-exploration dimension that should underpin the career management process (Yang and You, 2010; Crişan et al., 2015; Pinto, 2019). It therefore becomes necessary to raise students' awareness of the importance of self-knowledge.
The results from the correlation analysis allow us to draw conclusions about which concerns students want to tackle with specific intervention types. It is clear that in-person individual career guidance sessions can be used to address several personal needs, such as determining and developing one's own interests, skills, values, and goals, whereas in-person workshops can be used for topics such as learning job search strategies, negotiating job offers, and developing soft skills.
Lastly, despite most students being interested in receiving online career information (e.g., regarding internships and job vacancies), the most preferred modalities of intervention are nevertheless in-person ones. This is consistent with results from previous studies (e.g., Crişan et al., 2015; Pinto et al., 2016; Pinto, 2019), although, in contrast to the study by Crişan et al. (2015), Portuguese students systematically indicate a preference for individual sessions or workshops over group counseling sessions (Pinto et al., 2016; Pinto, 2019).
In terms of possible online career interventions, most participants welcome such an option; yet again, out of the listed intervention modalities, online individual counseling sessions are the top preference among respondents. This is congruent with prior research stating a possible need for human interaction (with a counselor) alongside an online guidance intervention (Venable, 2010; Galliott, 2017). What is different, however, is that similar international studies on needs assessment (Crişan et al., 2015; Güneri et al., 2016) indicated a low preference for online counseling and low usage of the Internet and other online tools for career support. In this study, however, students recognize that their preferred modes of intervention, when it comes to developing career goals and discussing goal-oriented career strategies, are individual online sessions and group online sessions, respectively. Moreover, the likelihood that students would use an online career service, if it were provided by the Career Office of their respective universities, might depend on gender and on specific personal career needs, with networking and soft-skills development being the top possible predictors.

There are some limitations to this study that must be taken into consideration. First, this is an exploratory study focused on identifying current students' career needs and whether online interventions are in demand among Portuguese higher education students. In an increasingly global academic world, it would have been important to also analyze the needs of international students who choose to pursue their studies in Portugal. Second, the sample could have been larger and the students' profiles more varied, but we had some difficulty in securing the collaboration of other national universities in the dissemination of the study. And finally, we had a significant drop-out rate, probably due to the length of the evaluation protocol.
Although our findings provide various insights that could be used by Career Offices to tailor their existing programs or to develop new ones in accordance with students' preferences, the results on the topic of online and computer-assisted career interventions suggest possible difficulties in using the online format with higher education students. It is interesting to note that, despite the obvious benefits of online career counseling (Venable, 2011; Zainuddin et al., 2020), authors state that its use is not exempt from difficulties, such as distraction problems among students (Feng et al., 2019; Chen et al., 2020; Rana et al., 2021), privacy issues (Gogus and Saygın, 2019), and frustration due to technological errors (Borko et al., 2008) or unstable connections (Rossing et al., 2012). In this regard, online and computer-assisted career guidance is likely to be more effective if combined with face-to-face counseling (Galliott, 2017), even though providing services "with the right mix of technology and human contact" (Venable, 2010, p. 94) can be a difficult goal to achieve.
Conclusion
Although much has been discussed in the literature and done at international and national levels to promote the importance of career counseling for students and to increase its availability, service provision seems not to be fully developed in a large number of Portuguese HEIs, and students do not use the services as expected. Since there is clearly a demand for career support, one hypothesized to increase even further, universities need to rethink the way they market career services to students and whether their websites provide easy access to all necessary information. Thus, raising awareness of the career services provided and improving the accessibility of information are of great importance.
Due to the growing popularity of online career counseling (Bright, 2015) and the fact that most career offices employ a very limited number of personnel, online career interventions seem to be a perfect solution. Yet, without meaningful consultation with the service recipients, i.e., the students, the creation of an online career intervention proves to be impossible. The career needs assessment has a crucial role in understanding the concerns and preferences of students and translating these into action. Only in that way would the efforts and financial investments yield a service that meets students' needs more effectively and would, in the end, actually be used. Although there are several advantages to online career counseling (Zainuddin et al., 2020) and most students would welcome such an intervention mode, more research is needed in order to establish the possible specific concerns that could be addressed in such a format. Since there is a clear preference among students for individual career guidance, universities might need to adapt the intervention types and/or add such sessions regardless of the type.

In his historical survey, Shapin (2012, p. 13) refers to universities as once being "ivory towers," distant from the problems of the societies that created and sponsored them. It is important that Career Offices do not become such disengaged and out-of-reach entities, but instead remain open to dialog with service recipients in order to contribute to the promotion of students' employability by adapting the range of services and creating targeted career guidance programs, including online interventions. In the pursuit of a better workforce, identifying student career needs and intervention preferences, and then connecting these expectations with skills development at university Career Offices, has critical implications for all parties involved: institutions, students, and prospective employers.
Data availability statement
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
Ethics statement
The studies involving human participants were reviewed and approved by CRC-W Católica Research Center for Psychological, Family, and Social Wellbeing. The patients/participants provided their written informed consent to participate in this study.
A semifluorinated alkane (F4H5) as novel carrier for cyclosporine A: a promising therapeutic and prophylactic option for topical treatment of dry eye
Purpose Cyclosporine A (Cs) has been used as an effective topical therapy for inflammatory dry-eye disease for more than a decade. However, due to its lipophilic character, Cs is formulated as emulsions or oily solutions for topical application. This experimental study aimed to test whether the use of semifluorinated alkanes (SFAs) as a preservative-free, well-tolerated, non-stinging and non-burning vehicle maintains or even improves the benefits of Cs in the topical therapy of dry-eye disease. Methods Desiccating stress was applied to C57BL/6 mice for 14 consecutive days to induce experimental dry eye. Cs dissolved in SFA (perfluorobutylpentane = F4H5, with 0.5% ethanol), F4H5 with 0.5% ethanol only, 0.05% Cs (Restasis®), and dexamethasone (Monodex®) were applied three times daily, beginning either at day 4 or day 11 of desiccating stress, for up to 3 weeks after the end of dry-eye induction. Results In comparison to the other groups, Cs/F4H5 demonstrated high efficacy and an earlier reduction of corneal staining. In this study, Cs/F4H5 maintained conjunctival goblet cell density when applied from day 4. Flow cytometry analysis of cervical lymph nodes demonstrated significantly fewer CD4+ and CD8+ T cells in the Cs/F4H5 group following 3 weeks of therapy than at baseline, but no difference in regulatory T cells from regional lymph nodes was seen. Conclusions Overall, compared to a commercially available Cs formulation (Restasis®) and dexamethasone, Cs/F4H5 was shown to be equally effective but with a significantly faster therapeutic response in reducing signs of dry-eye disease in an experimental mouse model.
Introduction
Dry-eye disease (DED) is one of the most common disorders of the ocular surface, associated with dysfunction of the lacrimal functional unit, changes in tear fluid, corneal and conjunctival epitheliopathy, and consecutive inflammation [1,2]. Milder cases of DED and the associated ocular discomfort are mainly managed with artificial tears, while therapeutic treatment of more severe and chronic cases of dry eye and the underlying inflammation includes topical steroids or cyclosporine (Cs), topical or oral antibiotics, topical autologous serum drops, and even systemic immunosuppressives. However, some of these therapeutic strategies cause a wide range of side-effects, e.g., cataract, glaucoma, or infections, but also a strong burning sensation during topical application [3,4]. With regard to the use of immunosuppressives, currently the only FDA-approved (U.S. Food and Drug Administration) medication for dry-eye disease is a 0.05% cyclosporine emulsion (Restasis®, Allergan Inc., Irvine, CA, USA), whereas in Europe a 0.1% cyclosporine formulation has recently been approved by the EMA (European Medicines Agency) for severe keratitis in DED (Ikervis®, Santen).
Cyclosporine is a calcineurin inhibitor targeting specifically the T-cell response, and has been described to increase tear secretion, decrease epithelial damage, increase goblet cell density and visual acuity, and also to improve subjective symptoms in dry-eye patients [5-7]. However, in many countries Restasis® or Ikervis® are not available or are restricted to only severe cases, and alternatively Cs eye drops have to be compounded by pharmacies using several non-standardized formulations. Furthermore, as the lipophilic Cs has to be formulated using oils and/or surfactants, e.g., castor oil or polysorbate 80, this often leads to intolerance, burning sensation, or visual disturbance. Therefore, application is frequently discontinued [4,8].
As an alternative to existing formulations, semifluorinated alkanes (SFAs) were introduced as a new delivery platform, enabling a simple and preservative-free formulation of Cs.
SFAs (e.g., perfluorobutylpentane = F4H5) are linear molecules composed of a hydrocarbon and a perfluorocarbon segment holding special features such as a certain degree of lipophilicity, low surface and interface tension, and high biocompatibility. They have the potential to dissolve water-insoluble substances, e.g., the lipophilic Cs [9,10]. Using an ex-vivo eye irritation test (EVEIT) it was previously shown that the SFAs F4H5 and F6H8 are well tolerated and cause no toxic effects on enucleated rabbit corneas [11]. Also, a recently conducted postmarketing surveillance study using F6H8 as artificial tears demonstrated the safety and tolerability of SFAs in clinical treatment of hyperevaporative DED [12]. F6H8 is now marketed as EvoTears® (Ursapharm Arzneimittel GmbH, Saarbruecken, Germany) in Germany and Switzerland.
In this study, a mouse model of experimental dry eye disease was used to investigate the effect of the semifluorinated alkane F4H5 as a novel carrier for Cs as topical treatment for DED during early and late therapeutic applications.
All animals were treated according to the German Animal Protection Law (LANUV), the local regulations of the University of Cologne and the ARVO statement for the use of animals in ophthalmic research.
Readout parameters

Clinical signs of dry eye (production of tear fluid and corneal epitheliopathy) were measured once a week as follows: time point [TP] 1: baseline, day 0; TP 2: day 11; TP 3: day 18; TP 4: day 25; TP 5: day 32 (Fig. 1a and b). For measurement of tear production, phenol red threads (Zone Quick Thread, Oasis Medical, USA) were placed into the inferior cul-de-sac for 30 s and the wetting recorded in millimeters. Corneal damage was detected by fluorescein staining: 5% fluorescein in normal saline solution was applied to the eye, carefully wiped off after 30 s, and graded under blue light using a modified Oxford grading scheme with severities ranging from grade 0 to grade 5 (Fig. 2a) [14].
At day 35, all mice were sacrificed and the eyes including conjunctiva were removed. For quantification of goblet cells, the lower lid was paraffin-embedded and sectioned, and goblet cells were stained with PAS (periodic acid-Schiff) dye. Images were taken using a brightfield microscope (Olympus BX53; Olympus Deutschland GmbH, Hamburg, Germany) and a color camera (Olympus UC10, Olympus Deutschland GmbH, Hamburg, Germany). Goblet cells were counted manually from the lid border to the fornix and stated as cells/100 μm using ImageJ software (National Institutes of Health, Bethesda, MD, USA). One representative slide from the central region of the conjunctiva was analyzed from seven to 12 eyes/group, depending on the availability of exactly aligned cross-sections (Fig. 2b).
Flow cytometry analysis (FACS)

FACS analyses were performed in one experiment following the late therapeutic regimen. Draining lymph nodes of three control mice and three mice receiving F4H5 or Cs/F4H5 were collected at TP1, TP3, and TP5. For T-cell and regulatory T-cell (Treg) analysis, single-cell suspensions were stained with FITC-conjugated anti-CD8, APC-conjugated anti-CD4, PE-conjugated anti-CD25 (all Biolegend, San Diego, CA, USA), and a FITC-conjugated anti-FoxP3 (BD Biosciences, Heidelberg, Germany) antibody according to the manufacturers' instructions. Stained samples were examined on a Guava easyCyte™ HT (Merck Millipore, Darmstadt, Germany) and analyzed using FlowJo software (FlowJo LLC, Tree Star Inc., Ashland, OR, USA).
Statistical analysis

Results are presented as mean ± SD of n = 10 eyes of five mice in each experiment (FACS analysis: three mice/group). All experiments were performed twice; the data presented here are unpooled from a single experiment. Since all data tested positive for a Gaussian distribution (Kolmogorov-Smirnov test), the statistical analyses were performed by univariate ANOVA with a post hoc test (LSD) using SPSS (software version 21, IBM). P-values of p ≤ 0.05 were considered significant.
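As an illustration of this pipeline, the following is a minimal sketch in Python of the described normality check, one-way ANOVA, and LSD-style post hoc comparisons (the authors used SPSS; the group names and values below are hypothetical placeholders, and the pairwise t-tests only approximate Fisher's LSD, which pools the ANOVA error term):

```python
# Sketch of the reported analysis chain: Kolmogorov-Smirnov normality
# check per group, one-way ANOVA across groups, then unadjusted pairwise
# t-tests as an LSD-style post hoc (Fisher's LSD proper uses the pooled
# within-group error from the ANOVA).
from itertools import combinations
from scipy import stats

groups = {                       # hypothetical tear-production values (mm)
    "Cs/F4H5":  [18.2, 17.5, 19.0, 18.8, 17.9],
    "Restasis": [15.1, 14.8, 16.0, 15.5, 14.9],
    "Control":  [12.3, 13.0, 12.8, 11.9, 12.5],
}

for name, vals in groups.items():                # normality per group
    p_norm = stats.kstest(stats.zscore(vals), "norm").pvalue
    print(f"{name}: KS normality p = {p_norm:.3f}")

f_stat, p_anova = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")

for a, b in combinations(groups, 2):             # LSD-style pairwise tests
    t, p = stats.ttest_ind(groups[a], groups[b])
    print(f"{a} vs {b}: p = {p:.4f}")
```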
Late therapy regimen
Tear production

Tear production measured by phenol red threads demonstrated a significant increase in all groups after termination of desiccating stress and following 7 days of treatment, in comparison to TP2. Overall levels of tear production were similar to TP1 prior to EDE induction. Comparative group analysis for TPs 3-5 demonstrated that Cs/F4H5-treated mice had a significantly stronger increase of tear production after EDE compared to F4H5, Restasis®, dexamethasone, and the untreated control. At TP5, the effect of Cs/F4H5 compared to the control group was less pronounced, although still measurable compared to F4H5 alone and Restasis® (Fig. 3a).
Corneal fluorescein staining
Analysis of corneal damage using fluorescein staining following late therapy demonstrated a significant increase of staining in all groups following EDE at TP2. Following 1 week of therapy (TP3), a significant decrease of fluorescein staining was observed only in the Cs/F4H5 group (Fig. 3b). Restasis® and dexamethasone treatment resulted in decreased fluorescein staining at TP4 at the earliest, whereas the reduction of staining in the Cs/F4H5 group increased further at TP5. Only Cs/F4H5 demonstrated a lasting significant decrease of corneal staining in comparison to TP2 (onset of therapy).
Goblet cell density
In the late therapeutic regimen, naïve mice demonstrated a significantly higher goblet cell (GC) density at TP5 compared to all groups (Fig. 3c, d). Dexamethasone-treated mice had a significantly lower number of GC compared to all other groups after late therapy. Cs/F4H5-treated mice also demonstrated a significantly lower goblet cell count than the naïve control, but no difference to any treatment group except dexamethasone.

[Fig. 3 caption: Tear production, fluorescein staining, and goblet cell density under the late therapy regimen before (TP1: baseline), during EDE (TP2), and during subsequent topical treatment (TP3-TP5). (a) Tear production in mm per group, mean ± SD (n = 10 eyes/group); late therapy with Cs/F4H5 led to a higher increase of tear production compared between groups at every time point; comparative group analyses at each time point are indicated above (asterisks in grey squares). (b) Fluorescein staining score per group, mean ± SD (n = 10 eyes/group); late therapeutic treatment with Cs/F4H5 led to a significantly earlier improvement of epithelial staining at TP3. (c) Goblet cell density: all groups showed decreased GC density compared to naïve mice; treatment with dexamethasone resulted in a lower number of GC compared to F4H5, Cs/F4H5, and the control group (mean ± SD, n = number of investigated eyes). P-values ≤ 0.05 were considered significant (* p ≤ 0.05, ** p ≤ 0.001, *** p ≤ 0.0001); significances refer to TP2 (a + b). (d) Representative images of conjunctival goblet cell distribution (PAS staining) in all treatment groups at TP5; all treatment groups demonstrated reduced goblet cells in comparison to the naïve untreated control.]
Tear production
In the early treatment regimen, all groups demonstrated a significant decrease of tear production at TP2, after EDE induction and 1 week of concomitant therapy, compared to TP1. Thereafter, at TP3, after 2 weeks of concomitant application of drugs and carrier, tear production increased again significantly.
Comparative group analysis at TP3 demonstrated that tear production was significantly greater in mice receiving F4H5 in comparison to dexamethasone. At TP4, mice receiving Cs/F4H5 for 3 consecutive weeks had significantly higher tear production than the Restasis® and dexamethasone groups. At TP5, no differences between groups were present, and tear production levels were comparable to those at TP1 (Fig. 4a).
Corneal fluorescein staining
At TP2, following 2 weeks of desiccating stress and 1 week of concomitant therapy, all groups except the Cs/F4H5 group demonstrated a significant increase of corneal fluorescein staining. The between-group comparison revealed that corneal staining was significantly lower in this group compared to all other groups. At TP3 and TP4, only Restasis® demonstrated a significant decrease of corneal staining compared to TP2; at TP5, only F4H5 showed significantly reduced corneal staining in comparison to TP2. In the Cs/F4H5 group, no change of corneal staining in comparison to baseline levels at TP1 was detectable at any time point (Fig. 4b).

[Fig. 4 caption: Early therapy regimen. (a) Tear production before (TP1: baseline), during EDE (TP2), and during subsequent topical treatment (TP3-TP5) of EDE, in mm per group, mean ± SD (n = 10 eyes/group). (b) Fluorescein staining grade before (TP1: baseline), during EDE (TP2), and after subsequent topical treatment (TP3-TP5), mean ± SD (n = 10 eyes/group); early therapy resulted in significantly less epithelial staining in the Cs/F4H5 group already at TP2. (c) Goblet cell density: after early treatment with Cs/F4H5, GC density remained comparable to naïve mice, whereas in the untreated control, F4H5, Restasis®, and dexamethasone groups the number of GC was decreased. P-values ≤ 0.05 were considered significant (* p ≤ 0.05, ** p ≤ 0.001, *** p ≤ 0.0001); significances refer to TP2 (a + b); comparative group analyses at every time point are indicated above (asterisks in grey squares). (d) Representative images of conjunctival goblet cell distribution (PAS staining) in all treatment groups at TP5; only in the group treated with Cs/F4H5 was no goblet cell loss visible.]
Goblet cell density
Analysis of the goblet cell (GC) density after early therapy in the bulbar and palpebral conjunctiva of the lower lid showed preservation of a normal GC density in Cs/F4H5-treated animals (Fig. 4c, d) compared to naïve mice. Untreated controls and the groups receiving F4H5 and Restasis® showed a significantly decreased number of GC compared to naïve mice. Mice that received dexamethasone showed no difference from either naïve or control mice.
FACS analysis
FACS analysis was performed in the late-treatment regimen, comparing controls with F4H5- and Cs/F4H5-treated groups (Fig. 5). Analysis of CD4+ and CD8+ lymphocytes from lymph nodes demonstrated no alterations between the groups at TP3 and TP5. Furthermore, no differences were detectable in the percentage of CD4+ T cells following 7 days of treatment (TP3) with topical Cs/F4H5 in comparison to control and baseline. At day 35 (TP5), compared to naïve mice, the percentage of CD4+ T cells was significantly increased in the control and F4H5 groups, but not in the Cs/F4H5 group (Fig. 5b). In addition, the percentage of CD8+ T cells in cervical lymph nodes was increased at TP5 in the control and F4H5 groups compared to naïve mice. At TP3 and TP5, the CD4:CD8 T-cell ratio was significantly lower in all groups in comparison to baseline (Fig. 5c). No differences between the groups were detected. FACS analysis of CD4+CD25+FoxP3+ Tregs of draining lymph nodes showed levels between 3 and 6% of cells in draining lymph nodes, with no differences between groups or time points (Fig. 6).
Discussion
Topical cyclosporine (Cs) is an established immunomodulatory medication indicated for the treatment of DED accompanied by inflammation of the ocular surface. It is additionally used in vernal and atopic conjunctivitis, blepharitis, and meibomian gland dysfunction, as well as in LASIK-associated dry eye and ocular graft-versus-host disease [7]. Cs inhibits the activation of T cells and the apoptosis of epithelial cells and reduces proinflammatory cytokines such as IL-6. Thereby, Cs clinically decreases corneal staining, increases tear-film break-up time as well as tear production, and enables patients to decrease their frequency of artificial tear supplementation [7].
Cs is a highly lipophilic substance that is typically formulated as emulsions, which often result in side-effects such as burning and stinging sensations [15,16], in part attributable to the vehicle used [17]. Since the introduction of SFAs, a novel drug carrier system has been available that allows Cs to be formulated as a preservative- and surfactant-free clear solution. For these reasons, Cs formulated in an SFA may be a better-tolerated alternative to already available Cs formulations. Furthermore, a solution, in combination with the spreading properties of the SFAs, might lead to increased delivery of Cs to the site of action.
In our study, scopolamine was steadily applied for 14 days via subcutaneous pumps, which, together with controlled environmental stress, resulted in a reliable dry-eye phenotype during acute EDE, even after removal of desiccating stress. Previous studies have shown that Th17 effector T cells maintain the chronic phase of EDE, with increased corneal epitheliopathy lasting several weeks after an acute phase of EDE [18]. Therefore, the model used enabled the investigation of the therapeutic effect of Cs/F4H5 in acute as well as in chronic EDE for at least 3 weeks, until the control groups returned to baseline parameters.

[Fig. 5 caption: FACS analysis of CD4+ and CD8+ T cells of draining lymph nodes after EDE following topical therapy at TP3 and TP5. (a) Representative flow cytometry dot plot. (b) Percentages of CD4+ and CD8+ cells as a proportion of total live cells; at TP5, the total number of CD4+ and CD8+ cells was increased in the control and F4H5 groups compared to naïve mice. (c) Calculated CD4:CD8 ratio, significantly reduced compared to baseline (naïve mice). Data represent mean ± SD of n = 3 mice/group; statistics were calculated using ANOVA; p-values ≤ 0.05 were considered significant (* p ≤ 0.05, *** p ≤ 0.0001).]
In this study, the therapeutic regimen of 0.05% Cs dissolved in F4H5 was highly effective in reducing corneal staining and increasing tear production. Compared to the commercially available Cs (Restasis®), Cs/F4H5 demonstrated at least a comparable therapeutic effect, but a significantly faster response. Notably, early therapy with Cs/F4H5 starting at day 4 protected mice from developing dry eye, whereas all other groups showed a significant increase of staining compared to baseline. Consistently, this treatment regimen was the only one that maintained the number of conjunctival goblet cells in EDE, clearly demonstrating a prophylactic effect specific to Cs/F4H5. No side-effects such as blepharitis, corneal vascularization, etc., were noted in any of the experimental groups.
In a recent phase 1 study with 18 healthy volunteers, repeated applications of Cs/F4H5 (CyclASol®, Novaliq, NCT02113293, http://www.novaliq.de/fileadmin/Downloads/CYS-001_E_final.pdf) were well tolerated. No stinging or burning sensation, irritation, dryness, foreign-body sensation, or further discomfort of the mucosa or tearing was reported.
A loss of goblet cells (GC) after EDE has been described previously, although the level of GCs varied strongly between these studies [13,19,20]. In the study presented, the investigation of GC was performed only at the end of the experiment, at day 35. Topical Cs is already well known to increase goblet cell density in murine models of dry eye [5] as well as in patients [21]. As stated above, early therapy with Cs/F4H5 resulted in prevention of goblet cell loss in comparison to untreated controls, the carrier F4H5, and Restasis®. An effect on goblet cells in the late-treatment regimen was not observed, probably due to a prolonged regeneration phase of goblet cells after the initial desiccating stress.
It is known that CD4+ T cells play a primary role in the development and progression of dry-eye disease. Desiccating stress leads to infiltration of activated T cells into ocular surface tissues [1]. Such autoreactive CD4+ cells are sufficient to induce a dry-eye phenotype once adoptively transferred into T-cell-deficient but otherwise healthy nude mice [20]. Since lymph nodes serve as a reservoir for lymphoid cells and are essential for the antigen-presenting cell (APC)-driven activation of autoreactive CD4+ T cells [22], draining cervical lymph nodes were investigated in this study. During dry-eye disease, an increase of activated CD69+ and CD154+ T cells has been reported previously [22,23]. In the study presented, following 3 weeks of therapy, no increase of CD4+ and CD8+ T cells was observed only in the Cs/F4H5 group, in contrast to F4H5 and controls, which might point to a therapeutic effect of Cs on the regional lymph node in the late phase of experimental dry-eye disease.
Previous studies [20,24] further demonstrated that the number of CD4+CD25+FoxP3+ Tregs plays a crucial role in the pathology of dry eye. Specifically, Tregs attenuate effector T-cell function and in this way dampen dry eye. Experimentally, a depletion of Tregs led to an exacerbation of adoptively transferred dry-eye disease, whereas the reconstitution with Tregs in athymic mice resulted in protection against transfer of EDE [20,24]. Furthermore, it has been described that BALB/c mice, containing a larger pool of Tregs, develop milder EDE than other mouse strains, for example C57BL/6 mice [25]. For this reason, the number of Tregs was investigated in this study, but no difference was detected in any of the groups or time points investigated.

[Fig. 6 caption (partially recovered): FACS analysis of CD4+CD25+FoxP3+ Tregs resulted in no significant differences in the percentage of CD4+CD25+FoxP3+ cells compared to naïve and untreated control mice. Data represent mean ± SD of n = 3 mice/group.]

This study has some limitations due to its experimental character: (i) Desiccating stress was applied for 14 days; this rather long duration might result in metaplasia of the conjunctival and corneal epithelium, with a consequent impact on the therapeutic effect and readouts, e.g., the goblet cell count.
(ii) In contrast to earlier publications, the commercial Cs did not show a strong therapeutic effect, which might be due to differences in the experimental setup of desiccating stress [20, 26-30].
(iii) The very recently approved Cs product (Ikervis®) could not be used as a control drug; therefore, no conclusions can be drawn in this respect. Future experiments will therefore also include a shorter desiccating-stress period (e.g., 7-9 days) and further controls, such as the recently approved Cs product. As all experiments were performed at least twice, with sufficient numbers of animals and repeatedly stable clinical phenotypes, the established setup is considered applicable for further investigations. In addition, the pharmacokinetics of F4H5 alone and of the combined product Cs/F4H5 are currently being tested in ex-vivo and in-vivo models. These studies will be supplemented by a phase II clinical trial currently being performed in patients with DED, testing the efficacy and safety profiles of 0.05 and 0.1% Cs/F4H5 in comparison to Restasis® (NCT02617667).
In summary, this experimental study clearly demonstrated a significantly faster and equally effective topical treatment of experimental dry eye using Cs/F4H5 compared to Restasis®. Due to the limitations stated, further experiments will include comparison with other newly available Cs products using a modified protocol of EDE.
Supplementary Information for Identification and tunable optical coherent control of transition-metal spins in silicon carbide
Color centers in wide-bandgap semiconductors are attractive systems for quantum technologies since they can combine long-coherent electronic spin and bright optical properties. Several suitable centers have been identified, most famously the nitrogen-vacancy defect in diamond. However, integration in communication technology is hindered by the fact that their optical transitions lie outside telecom wavelength bands. Several transition-metal impurities in silicon carbide do emit at and near telecom wavelengths, but knowledge about their spin and optical properties is incomplete. We present all-optical identification and coherent control of molybdenum-impurity spins in silicon carbide with transitions at near-infrared wavelengths. Our results identify spin $S=1/2$ for both the electronic ground and excited state, with highly anisotropic spin properties that we apply for implementing optical control of ground-state spin coherence. Our results show optical lifetimes of $\sim$60 ns and inhomogeneous spin dephasing times of $\sim$0.3 $\mu$s, establishing relevance for quantum spin-photon interfacing.
INTRODUCTION
Electronic spins of lattice defects in wide-bandgap semiconductors have come forward as an important platform for quantum technologies, 1 in particular for applications that require both manipulation of long-coherent spin and spin-photon interfacing via bright optical transitions. In recent years this field showed strong development, with demonstrations of distribution and storage of non-local entanglement in networks for quantum communication [2][3][4][5][6] and quantum-enhanced field-sensing. [7][8][9][10][11] The nitrogen-vacancy defect in diamond is the material system that is most widely used 12,13 and best characterized [14][15][16] for these applications. However, its zero-phonon-line (ZPL) transition wavelength (637 nm) is not optimal for integration in standard telecom technology, which uses near-infrared wavelength bands where losses in optical fibers are minimal. A workaround could be to convert photon energies between the emitter-resonance and telecom values, [17][18][19] but optimizing these processes is very challenging.
This situation has been driving a search for similar lattice defects that do combine favorable spin properties with bright emission directly at telecom wavelength. It was shown that both diamond and silicon carbide (SiC) can host many other spin-active color centers that could have suitable properties [20][21][22][23] (where SiC is also an attractive material for its established position in the semiconductor device industry 24,25 ). However, for many of these color centers detailed knowledge about the spin and optical properties is lacking. In SiC the divacancy [26][27][28] and silicon vacancy 10,[29][30][31] were recently explored, and these indeed show millisecond homogeneous spin coherence times with bright ZPL transitions closer to the telecom band.
We present here a study of transition-metal impurity defects in SiC, which exist in great variety. [32][33][34][35][36][37] There is at least one case (the vanadium impurity) that has ZPL transitions at telecom wavelengths, 33 around 1300 nm, but we focus here (directed by availability of lasers in our lab) on the molybdenum impurity with ZPL transitions at 1076 nm (in 4H-SiC) and 1121 nm (in 6H-SiC), which turns out to be a highly analogous system. Theoretical investigations, 38 early electron paramagnetic resonance 33,39 (EPR), and photoluminescence (PL) studies [40][41][42] indicate that these transition-metal impurities have promising properties. These studies show that they are deep-level defects that can be in several stable charge states, each with a distinctive value for its electronic spin S and near-infrared optical transitions. Further tuning and engineering possibilities come from the fact that these impurities can be embedded in a variety of SiC polytypes (4H, 6H, etc., Fig. 1a). Recent work by Koehl et al. 37 studied chromium impurities in 4H-SiC using optically detected magnetic resonance. They identified efficient ZPL (little phonon-sideband) emission at 1042 nm and 1070 nm, and their charge state as neutral with an electronic spin S = 1 for the ground state.
Our work is an all-optical study of ensembles of molybdenum impurities in p-type 4H-SiC and 6H-SiC material. The charge and spin configuration of these impurities, and the defect configuration in the SiC lattice that is energetically favored, was until our work not yet identified with certainty. Our results show that these Mo impurities are in the Mo⁵⁺ (4d¹) charge state (we follow here conventional notation: 33 the label 5+ indicates that of an original Mo atom 4 electrons participate in bonds with SiC and that 1 electron is transferred to the p-type lattice environment). The single remaining electron in the 4d shell gives spin S = 1/2 for the ground state and optically excited state that we address. While we will show later that this can be concluded from our measurements, we assume it as a fact from the beginning since this simplifies the explanation of our experimental approach.
In addition to this identification of the impurity properties, we explore whether ground-state spin coherence is compatible with optical control. Using a two-laser magneto-spectroscopy method, 28,43,44 we identify the spin Hamiltonian of the S = 1/2 ground state and optically excited state, which behave as doublets with highly anisotropic Landé g-factors. This gives insight into how a situation with only spin-conserving transitions can be broken, and we find that we can use a weak magnetic field to enable optical transitions from both ground-state spin levels to a common excited-state level (Λ level scheme). Upon two-laser driving of such Λ schemes, we observe coherent population trapping (CPT, all-optical control of ground-state spin coherence and fundamental to operating quantum memories 45,46). The observed CPT reflects inhomogeneous spin dephasing times comparable to that of the SiC divacancy 28,47 (near 1 μs).
In what follows, we first present our methods and results of single-laser spectroscopy performed on ensembles of Mo impurities in both SiC polytypes. Next, we discuss a two-laser method where optical spin pumping is detected. This allows for characterizing the spin sublevels in the ground and excited state, and we demonstrate how this can be extended to controlling spin coherence.
Both the 6H-SiC and 4H-SiC (Fig. 1a) samples were intentionally doped with Mo. There was no further intentional doping, but near-band-gap photoluminescence revealed that both materials had p-type characteristics. The Mo concentrations in the 4H and 6H samples were estimated 41,42 to be in the range 10¹⁵-10¹⁷ cm⁻³ and 10¹⁴-10¹⁶ cm⁻³, respectively. The samples were cooled in a liquid-helium flow cryostat with optical access, which was equipped with a superconducting magnet system. The set-up geometry is depicted in Fig. 1b. The angle ϕ between the direction of the magnetic field and the c-axis of the crystal could be varied, while both of these directions were kept orthogonal to the propagation direction of the excitation laser beams. In all experiments where we resonantly addressed ZPL transitions the laser fields had linear polarization, and we always kept the direction of the linear polarization parallel to the c-axis. Earlier studies 38,41,42 of these materials showed that the ZPL transition dipoles are parallel to the c-axis. For our experiments we confirmed that the photoluminescence response was clearly the strongest for excitation with linear polarization parallel to the c-axis, for all directions and magnitudes of the magnetic fields that we applied. All results presented in this work come from photoluminescence (PL) or photoluminescence excitation (PLE) measurements. The excitation lasers were focused to a ~100 μm spot in the sample. PL emission was measured from the side. A more complete description of experimental aspects is presented in the Methods section.
RESULTS
For initial characterization of Mo transitions in 6H-SiC and 4H-SiC we used PL and PLE spectroscopy (Methods). Figure 1c shows the PL emission spectrum of the 6H-SiC sample at 3.5 K, measured using an 892.7 nm laser for excitation. The ZPL transition of the Mo defect visible in this spectrum will be studied in detail throughout this work. The shaded region indicates the emission of phonon replicas related to this ZPL. 41,42 While we could not perform a detailed analysis, the peak area of the ZPL in comparison with that of the phonon replicas indicates that the ZPL carries clearly more than a few percent of the full PL emission. Similar PL data from Mo in the 4H-SiC sample, together with a study of the temperature dependence of the PL, can be found in Supplementary Information (Fig. S1).
For a more detailed study of the ZPL of the Mo defects, PLE was used. In PLE measurements, the photon energy of a narrow-linewidth excitation laser is scanned across the ZPL part of the spectrum, while the resulting PL of phonon-sideband (phonon-replica) emission is detected (Fig. 1b; we used filters to keep light from the excitation laser from reaching the detector, Methods). The inset of Fig. 1c shows the resulting ZPL for Mo in 6H-SiC at 1.1057 eV (1121.3 nm). For 4H-SiC we measured the ZPL at 1.1521 eV (1076.2 nm, Supplementary Information). Both are in close agreement with literature. 41,42 Temperature dependence of the PLE from the Mo defects in both 4H-SiC and 6H-SiC can be found in Supplementary Information (Fig. S2).
The width of the ZPL is governed by the inhomogeneous broadening of the electronic transition throughout the ensemble of Mo impurities, which is typically caused by non-uniform strain in the crystal. For Mo in 6H-SiC we observe a broadening of 24 ± 1 GHz FWHM, and 23 ± 1 GHz for 4H-SiC. This inhomogeneous broadening is larger than the anticipated electronic spin splittings, 33 and it thus masks signatures of spin levels in optical transitions between the ground and excited state.
[Figure 1 caption, continued: b Photoluminescence (PL) is collected and detected out of another side facet of the SiC crystal. c PL from Mo in 6H-SiC at 3.5 K and zero field, resulting from excitation with an 892.7 nm laser, with labels identifying the zero-phonon-line (ZPL, at 1.1057 eV) emission and phonon replicas (shaded and labeled as phonon sideband, PSB). The inset shows the ZPL as measured by photoluminescence excitation (PLE), where the excitation laser is scanned across the ZPL peak and emission from the PSB is used for detection.]
In order to characterize the spin-related fine structure of the Mo defects, a two-laser spectroscopy technique was employed. 28,43,44 We introduce this for the four-level system sketched in Fig. 2a. A laser fixed at frequency f0 is resonant with one possible transition from ground to excited state (for the example in Fig. 2a, |g2⟩ to |e2⟩), and causes PL from a sequence of excitation and emission events. However, if the system decays from the state |e2⟩ to |g1⟩, the laser field at frequency f0 is no longer resonantly driving optical excitations (the system goes dark due to optical pumping). In this situation, the PL is limited by the (typically long) lifetime of the |g1⟩ state. Addressing the system with a second laser field, detuned in frequency from the first by an amount δ, counteracts optical pumping into off-resonant energy levels if the detuning δ equals the splitting Δg between the ground-state sublevels. Thus, for specific two-laser detuning values corresponding to the energy spacings between ground-state and excited-state sublevels, the PL response of the ensemble is greatly increased. Notably, this technique gives a clear signal for sublevel splittings that are smaller than the inhomogeneous broadening of the optical transition, and the spectral features now reflect the homogeneous linewidth of the optical transitions. 28,47

In our measurements, a 200 μW continuous-wave control laser and a 200 μW probe laser were made to overlap in the sample. For investigating Mo in 6H-SiC, the control beam was tuned to the ZPL at 1121.32 nm (f_control = f0 = 267.3567 THz), while the probe beam was detuned from f0 by a variable detuning δ (i.e., f_probe = f0 + δ). In addition, a 100 μW pulsed 770 nm re-pump laser was focused onto the defects to counteract bleaching of the Mo impurities due to charge-state switching 28,48,49 (which we observed to only occur partially without the re-pump laser). All three lasers were parallel to within 3° inside the sample. A magnetic field was applied to ensure that the spin sublevels were at non-degenerate energies. Finally, we observed that the spectral signatures due to spin disappear in a broad background signal above a temperature of 10 K (Fig. S4), and we thus performed measurements at 4 K (unless stated otherwise).

Figure 2b shows the dependence of the PLE on the two-laser detuning for the 6H-SiC sample (4H-SiC data in Supplementary Information Fig. S6), for a range of magnitudes of the magnetic field (here aligned close to parallel with the c-axis, ϕ = 1°). Two emission peaks can be distinguished, labeled line L1 and L2. The emission (peak height) of L2 is much stronger than that of L1. Figure 2c shows the results of a similar measurement with the magnetic field nearly orthogonal to the crystal c-axis (ϕ = 87°), where four spin-related emission signatures are visible, labeled as lines L1 through L4 (a very small peak feature left of L1, at half its detuning, is an artifact that occurs due to a leakage effect in the spectral filtering that is used for beam preparation, see Methods). The two-laser detuning frequencies corresponding to all four lines emerge from the origin (B = 0, δ = 0) and evolve linearly with magnetic field (we checked this up to 1.2 T). The slopes of all four lines (in Hertz per Tesla) are smaller in Fig. 2c than in Fig. 2b. In contrast to lines L1, L2, and L4, which are peaks in the PLE spectrum, L3 shows a dip.
In order to identify the lines at various angles ϕ between the magnetic field and the c-axis, we follow how each line evolves with increasing angle. Figure 2d shows that as ϕ increases, L1, L3, and L4 move to the left, whereas L2 moves to the right. Near 86°, L1 and L2 cross. At this angle, the left-to-right order of the emission lines is swapped, justifying the assignment of L1, L2, L3, and L4 as in Fig. 2b, c. Supplementary Information presents further results from two-laser magneto-spectroscopy at intermediate angles ϕ (section 2a).
We show below that the results in Fig. 2 indicate that the Mo impurities have electronic spin S = 1/2 for the ground and excited state. This contradicts predictions and interpretations of initial results. 33,38,41,42 Theoretically, it was predicted that the defect associated with the ZPL under study here is a Mo impurity in the asymmetric split-vacancy configuration (Mo impurity asymmetrically located inside a Si-C divacancy), where it would have a spin S = 1 ground state with zero-field splittings of about 3-6 GHz. 33,38,41,42 However, this would lead to the observation of additional emission lines in our measurements. Particularly, in the presence of a zero-field splitting, we would expect to observe two-laser spectroscopy lines emerging from a non-zero detuning. 28 We have measured near zero fields and up to 1.2 T, as well as from 100 MHz to 21 GHz detuning (Supplementary Information section 2c), but found no more peaks than the four present in Fig. 2c. A larger splitting would have been visible as a splitting of the ZPL in measurements as presented in the inset of Fig. 1c, which was not observed in scans up to 1000 GHz. Additionally, a zero-field splitting and corresponding avoided crossings at certain magnetic fields would result in curved behavior for the positions of lines in magneto-spectroscopy. Thus, our observations rule out that there is a zero-field splitting for the ground-state and excited-state spin sublevels. In this case the effective spin Hamiltonian 50 can only take the form of a Zeeman term,

$H_{g(e)} = g_{g(e)} \mu_B \mathbf{B} \cdot \tilde{\mathbf{S}}$,

where $g_{g(e)}$ is the g-factor for the electronic ground (excited) state (both assumed positive), $\mu_B$ the Bohr magneton, $\mathbf{B}$ the magnetic field vector of an externally applied field, and $\tilde{\mathbf{S}}$ the effective spin vector. The observation of four emission lines can be explained, in the simplest manner, by a system with spin S = 1/2 (doublet) in both the ground and excited state. For such a system, Fig. 3 presents the two-laser optical pumping schemes that correspond to the observed emission lines L1 through L4. Addressing the system with the V-scheme excitation pathways from Fig. 3c leads to increased pumping into a dark ground-state sublevel, since two excited states contribute to decay into the off-resonant ground-state energy level while optical excitation out of the other ground-state level is enhanced. This results in reduced emission, observed as the PLE dip feature of L3 in Fig. 2c (for details see Supplementary Information section 5).
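As a worked illustration of this assignment, the minimal sketch below computes the expected two-laser detunings of L1-L4 from the ground- and excited-state Zeeman splittings (Δg = g_g μ_B B / h and Δe = g_e μ_B B / h). The g-factor values passed in are placeholders, not the fitted values of Table 1; in particular the excited-state value is an assumption here.

```python
MU_B_OVER_H = 13.996e9  # Bohr magneton / Planck constant, in Hz/T

def line_positions(g_g, g_e, B):
    """Two-laser detunings (Hz) of emission lines L1-L4 at field B (T)."""
    d_g = g_g * MU_B_OVER_H * B  # ground-state Zeeman splitting
    d_e = g_e * MU_B_OVER_H * B  # excited-state Zeeman splitting
    return {"L1 (Lambda)": d_g,           # detuning = ground splitting
            "L2 (Pi)": abs(d_g - d_e),    # difference of the splittings
            "L3 (V)": d_e,                # excited splitting
            "L4 (X)": d_g + d_e}          # sum of the splittings

# g_g ~ 1.7 near phi = 0 (from the text); g_e = 1.2 is an assumed value
print(line_positions(1.7, 1.2, 0.3))
```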
We find that for data as in Fig. 2c the slopes of the emission lines are correlated by a set of sum rules,

$\Theta_{L_1} + \Theta_{L_3} = \Theta_{L_4}$, $\quad |\Theta_{L_1} - \Theta_{L_3}| = \Theta_{L_2}$.

Here $\Theta_{L_n}$ denotes the slope of emission line Ln in Hertz per Tesla. The two-laser detuning frequencies for the pumping schemes in Fig. 3a-d are related in the same way, which justifies the assignment of these four schemes to the emission lines L1 through L4, respectively. These schemes and equations directly yield the g-factor values g_g and g_e for the ground and excited state (Supplementary Information section 2). We find that the g-factor values g_g and g_e strongly depend on ϕ, that is, they are highly anisotropic. While this is in accordance with earlier observations for transition-metal defects in SiC, 33 we did not find a comprehensive report on the underlying physical picture. In Supplementary Information section 7, we present a group-theoretical analysis that explains the anisotropy g_g ≈ 1.7 for ϕ = 0° and g_g = 0 for ϕ = 90°, and similar behavior for g_e (which we also use to identify the orbital character of the ground and excited state). In this scenario the effective Landé g-factor 50 is given by

$g(\phi) = \sqrt{g_{\parallel}^2 \cos^2\phi + g_{\perp}^2 \sin^2\phi}$, (4)

where $g_{\parallel}$ represents the component of g along the c-axis of the silicon carbide structure and $g_{\perp}$ the component in the basal plane. Figure 4 shows the ground- and excited-state effective g-factors extracted from our two-laser magneto-spectroscopy experiments for 6H-SiC and 4H-SiC (additional experimental data can be found in Supplementary Information). The solid lines represent fits to Eq. (4) for the effective g-factor. The resulting $g_{\parallel}$ and $g_{\perp}$ parameters are given in Table 1.
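The angular dependence of Eq. (4) can be fitted in a few lines of Python; the sketch below uses synthetic data in place of the measured slope-derived g-factors, so the numbers are illustrative only.

```python
import numpy as np
from scipy.optimize import curve_fit

def g_eff(phi_deg, g_par, g_perp):
    """Effective g-factor of Eq. (4) at angle phi to the c-axis."""
    phi = np.deg2rad(phi_deg)
    return np.sqrt((g_par * np.cos(phi))**2 + (g_perp * np.sin(phi))**2)

# synthetic data standing in for measured g-factors versus angle
rng = np.random.default_rng(0)
phis = np.array([1.0, 20.0, 40.0, 60.0, 80.0, 87.0])
g_meas = g_eff(phis, 1.7, 0.0) + rng.normal(0.0, 0.02, phis.size)

popt, pcov = curve_fit(g_eff, phis, g_meas, p0=[1.5, 0.1],
                       bounds=([0.0, 0.0], [3.0, 3.0]))
print("g_par = %.2f, g_perp = %.2f" % tuple(popt))
```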
[Figure 3 caption: Two-laser pumping schemes with optical transitions between S = 1/2 ground and excited states. a Λ scheme, responsible for the L1 emission feature: two lasers are resonant with transitions from both ground states |g1⟩ (red arrow) and |g2⟩ (blue arrow) to a common excited state |e2⟩. This is achieved when the detuning equals the ground-state splitting Δg. The gray arrows indicate a secondary Λ scheme via |e1⟩ that is simultaneously driven in an ensemble when it has inhomogeneous values for its optical transition energies. b Π scheme, responsible for the L2 emission feature: two lasers are resonant with both vertical transitions. This is achieved when the detuning equals the difference between the ground-state and excited-state splittings, |Δg − Δe|. c V scheme, responsible for the L3 emission feature: two lasers are resonant with transitions from a common ground state |g1⟩ to both excited states |e1⟩ (blue arrow) and |e2⟩ (red arrow). This is achieved when the laser detuning equals the excited-state splitting Δe. The gray arrows indicate a secondary V scheme that is simultaneously driven when the optical transition energies are inhomogeneously broadened. d X scheme, responsible for the L4 emission feature: two lasers are resonant with the diagonal transitions in the scheme. This is achieved when the detuning is equal to the sum of the ground-state and the excited-state splittings, Δg + Δe.]

The reason why diagonal transitions (in Fig. 3a, c), and thus the Λ and V schemes, are allowed lies in the different behavior of g_e and g_g. When the magnetic field direction coincides with the internal quantization axis of the defect, the spin states in both the ground and excited state are given by the basis of the S_z operator, where the z-axis is defined along the c-axis. This means that the spin-state overlap for vertical transitions, e.g., from |g1⟩ to |e1⟩, is unity. In such cases, diagonal transitions are forbidden as the overlap between, e.g., |g1⟩ and |e2⟩ is zero. Tilting the magnetic field away from the internal quantization axis introduces mixing of the spin states. The amount of mixing depends on the g-factor, such that it differs for the ground and excited state. This results in a tunable non-zero overlap for all transitions, allowing all four schemes to be observed (as in Fig. 2c where ϕ = 87°). This reasoning also explains the suppression of all emission lines except L2 in Fig. 2b, where the magnetic field is nearly along the c-axis. A detailed analysis of the relative peak heights in Fig. 2b, c compared to wave function overlap can be found in Supplementary Information (section 4).
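A minimal numerical check of this mixing argument is to diagonalize spin-1/2 Zeeman Hamiltonians with different g-tensors for the ground and excited doublets and inspect their overlap matrix. The values below are assumptions chosen for illustration (only g_par ≈ 1.7 for the ground state comes from the text; the fitted parameters are in Table 1): with different g-tensors, tilting B makes the off-diagonal overlaps non-zero, opening the Λ, V, and X schemes.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]]) / 2   # spin-1/2 operators
sz = np.array([[1, 0], [0, -1]]) / 2

def eigvecs(g_par, g_perp, phi):
    """Eigenvectors of H ∝ g_par*cos(phi)*Sz + g_perp*sin(phi)*Sx."""
    H = g_par * np.cos(phi) * sz + g_perp * np.sin(phi) * sx
    return np.linalg.eigh(H)[1]

phi = np.deg2rad(87)                  # field nearly in the basal plane
Vg = eigvecs(1.7, 0.2, phi)           # ground doublet (g_perp assumed)
Ve = eigvecs(1.2, 0.7, phi)           # excited doublet (values assumed)
overlap = np.abs(Ve.conj().T @ Vg)**2 # |<e_i|g_j>|^2 transition strengths
print(overlap)  # off-diagonal entries > 0: diagonal transitions allowed
```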
The Λ driving scheme depicted in Fig. 3a, where both ground states are coupled to a common excited state, is of particular interest. In such cases it is possible to achieve all-optical coherent population trapping (CPT), 45 which is of great significance in quantum-optical operations that use ground-state spin coherence. This phenomenon occurs when two lasers address a Λ system at exact two-photon resonance, i.e., when the two-laser detuning matches the ground-state splitting. The ground-state spin system is then driven toward a superposition state that approaches $|\Psi_{CPT}\rangle \propto \Omega_2 |g_1\rangle - \Omega_1 |g_2\rangle$ for ideal spin coherence. Here $\Omega_n$ is the Rabi frequency for the driven transition from the $|g_n\rangle$ state to the common excited state. Since the system is now coherently trapped in the ground state, the photoluminescence decreases.
In order to study the occurrence of CPT, we focus on the two-laser PLE features that result from a Λ scheme. A probe field with variable two-laser detuning relative to a fixed control laser was scanned across this line in frequency steps of 50 kHz, at 200 μW. The control laser power was varied between 200 μW and 5 mW. This indeed yields signatures of CPT, as presented in Fig. 5. A clear power dependence is visible: when the control beam power is increased, the depth of the CPT dip increases (and can fully develop at higher laser powers or by concentrating laser fields in SiC waveguides 47). This observation of CPT confirms our earlier interpretation of lines L1-L4, in that it confirms that L1 results from a Λ scheme. It also strengthens the conclusion that this system is S = 1/2, since otherwise optical spin pumping into the additional (dark) energy levels of the ground state would be detrimental for the observation of CPT. Using a standard model for CPT, 45 adapted to account for strong inhomogeneous broadening of the optical transitions 47 (see also Supplementary Information section 6), we extract an inhomogeneous spin dephasing time T₂* of 0.32 ± 0.08 μs and an optical lifetime of the excited state of 56 ± 8 ns. The optical lifetime is about a factor of two longer than that of the nitrogen-vacancy defect in diamond, 12,51 indicating that the Mo defects can be applied as bright emitters (although we were not able to measure their quantum efficiency). The value of T₂* is relatively short but sufficient for applications based on CPT. 45 Moreover, the EPR studies by Baur et al. 33 on various transition-metal impurities show that the inhomogeneity probably has a strong static contribution from an effect linked to the spread in mass for Mo isotopes in natural abundance (nearly absent for the mentioned vanadium case), compatible with elongating spin coherence via spin-echo techniques. In addition, their work showed that the hyperfine coupling to the impurity nuclear spin can be resolved. There is thus clearly a prospect for storage times in quantum memory applications that are considerably longer than T₂*.
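For readers who want to reproduce the qualitative CPT lineshape, the following is a minimal steady-state Lindblad sketch of a bare three-level Λ system, not the inhomogeneous-broadening-adapted model used for the fits above. The decay and dephasing rates follow the quoted ~60 ns lifetime and T₂* ≈ 0.32 μs, while the Rabi frequencies are assumed values.

```python
import numpy as np

GAMMA = 1 / 60e-9          # excited-state decay rate (1/s)
GAMMA_S = 1 / 0.32e-6      # ground-state spin dephasing rate ~ 1/T2*
OMEGA1 = OMEGA2 = 2 * np.pi * 2e6   # Rabi frequencies (assumed, rad/s)
I3 = np.eye(3, dtype=complex)

def rho_ee(delta):
    """Excited-state population vs two-photon detuning delta (rad/s)."""
    # rotating-frame Hamiltonian for {|g1>, |g2>, |e>}, one-photon resonance
    H = np.array([[0,          0,          OMEGA1 / 2],
                  [0,          delta,      OMEGA2 / 2],
                  [OMEGA1 / 2, OMEGA2 / 2, 0         ]], dtype=complex)
    # collapse operators: |e> decays to |g1> and |g2>, plus spin dephasing
    cs = [np.sqrt(GAMMA / 2) * np.outer([1, 0, 0], [0, 0, 1]),
          np.sqrt(GAMMA / 2) * np.outer([0, 1, 0], [0, 0, 1]),
          np.sqrt(GAMMA_S / 2) * np.diag([1.0, -1.0, 0.0])]
    # Liouvillian as a superoperator on the row-major-vectorized rho
    L = -1j * (np.kron(H, I3) - np.kron(I3, H.T))
    for C in cs:
        CdC = C.conj().T @ C
        L += np.kron(C, C.conj()) \
             - 0.5 * (np.kron(CdC, I3) + np.kron(I3, CdC.T))
    # solve L vec(rho) = 0 together with the trace constraint Tr(rho) = 1
    A = np.vstack([L, I3.reshape(1, 9)])
    b = np.zeros(10, dtype=complex); b[-1] = 1.0
    rho = np.linalg.lstsq(A, b, rcond=None)[0].reshape(3, 3)
    return rho[2, 2].real

deltas = 2 * np.pi * np.linspace(-5e6, 5e6, 201)
pl_signal = [rho_ee(d) for d in deltas]  # PL shows the CPT dip at delta = 0
```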
DISCUSSION
The anisotropic behavior of the g-factor that we observed for Mo was also observed for vanadium and titanium in the EPR studies by Baur et al. 33 (they observed g∥ ≈ 1.7 and g⊥ = 0 for the ground state). In these cases the transition metal has a single electron in its 3d orbital and occupies the hexagonal (h) Si substitutional site. We show in Supplementary Information section 7 that the origin of this behavior can be traced back to a combination of a crystal field with C3v symmetry and spin-orbit coupling for the specific case of an ion with one electron in its d-orbital. The correspondence of this behavior with what we observe for the Mo impurity identifies that our materials have Mo impurities present as Mo⁵⁺ (4d¹) systems residing on a hexagonal (h) silicon substitutional site. In this case of a hexagonal (h) substitutional site, the molybdenum is bonded in a tetrahedral geometry, sharing four electrons with its nearest neighbors. For Mo⁵⁺ (4d¹) the defect is then in a singly ionized +|e| charge state (e denotes the elementary charge), due to the transfer of one electron to the p-type SiC host material.
An alternative scenario for our type of Mo impurities was recently proposed by Ivády et al. 35 Based on theoretical work, 35 they proposed the existence of the asymmetric split-vacancy (ASV) defect in SiC. An ASV defect in SiC occurs when an impurity occupies the interstitial site formed by adjacent silicon and carbon vacancies. The local symmetry of this defect is a distorted octahedron with a threefold symmetry axis, in which the strong g-factor anisotropy (g⊥ = 0) may also be present for the S = 1/2 state. 50 Considering six shared electrons for this divacancy environment, the Mo⁵⁺ (4d¹) configuration occurs for the singly charged −|e| state. For our observations this is a highly improbable scenario as compared to one based on the +|e| state, given the p-type SiC host material used in our work. We thus conclude that this scenario by Ivády et al. does not occur in our material. Interestingly, niobium defects have been shown to grow in this ASV configuration, 52 indicating that there indeed exist large varieties in the crystal symmetries involved with transition-metal defects in SiC. This defect displays S = 1/2 spin with several optical transitions between 892 and 897 nm in 4H-SiC and between 907 and 911 nm in 6H-SiC. 52 Another defect worth comparing to is the aforementioned chromium defect, studied by Koehl et al. 37 Like Mo in SiC, the Cr defect is located at a silicon substitutional site, thus yielding a 3d² configuration for this defect in its neutral charge state. The observed S = 1 spin state has a zero-field splitting parameter of 6.7 GHz. 37 By employing optically detected magnetic resonance techniques they measured an inhomogeneous spin coherence time T₂* of 37 ns, 37 which is considerably shorter than observed for molybdenum in the present work. Regarding spin-qubit applications, the exceptionally low phonon-sideband emission of Cr seems favorable for optical interfacing. However, the optical lifetime for this Cr configuration (146 μs 37) is much longer than that of the Mo defect we studied, hampering its application as a bright emitter. It is clear that there is a wide variety in optical and spin properties throughout transition-metal impurities in SiC, which makes up a useful library for engineering quantum technologies with spin-active color centers.
We have studied ensembles of molybdenum defect centers in 6H and 4H silicon carbide with 1.1057 eV and 1.1521 eV transition energies, respectively. The ground-state and excited-state spin of both defects was determined to be S = 1/2 with large g-factor anisotropy. Since this is allowed in hexagonal symmetry but forbidden in cubic symmetry, we find this to be consistent with theoretical descriptions that predict that Mo resides at a hexagonal lattice site in 4H-SiC and 6H-SiC, 35,38 and our p-type host environment strongly suggests that this occurs for Mo at a silicon substitutional site. We used the measured insight into the S = 1/2 spin Hamiltonians for tuning control schemes where two-laser driving addresses the transitions of a Λ system, and observed CPT for such cases. This demonstrates that the Mo defect and similar transition-metal impurities are promising for quantum information technology. In particular for the highly analogous vanadium color center, engineered to be in SiC material where it stays in its neutral V⁴⁺ (3d¹) charge state, this holds promise for combining S = 1/2 spin coherence with operation directly at telecom wavelengths.
Materials
The samples used in this study were ~1 mm thick epilayers grown with chemical vapor deposition, and they were intentionally doped with Mo during growth. The PL signals showed that a relatively low concentration of tungsten was present due to unintentional doping from metal parts of the growth setup (three PL peaks near 1.00 eV, outside the range presented in Fig. 1a). The concentration of various types of (di)vacancies was too low to be observed in the PL spectrum that was recorded. For more details see ref. 42.

Cryostat

During all measurements, the sample was mounted in a helium flow cryostat with optical access through four windows and equipped with a superconducting magnet system.
Photoluminescence (PL)
The PL spectrum of the 6H-SiC sample was measured by exciting the material with an 892.7 nm laser and using a double monochromator equipped with an infrared-sensitive photomultiplier. For the 4H-SiC sample, we used a 514.5 nm excitation laser and an FTIR spectrometer.
Photoluminescence Excitation (PLE)
The PLE spectrum was measured by exciting the defects using a CW diode laser tunable from 1050 nm to 1158 nm with a linewidth below 50 kHz, stabilized within 1 MHz using feedback from a HighFinesse WS-7 wavelength meter. The polarization was linear along the sample c-axis. The laser spot diameter was ~100 μm at the sample. The PL exiting the sample sideways was collected with a high-NA lens and detected by a single-photon counter. The peaks in the PLE data were typically recorded at a rate of about 10 kcounts/s by the single-photon counter. We present PLE count rates in arb.u. since the photon collection efficiency was not well defined, and it varied with changing the angle ϕ. For part of the settings we placed neutral density filters before the single-photon counter to keep it from saturating. The excitation laser was filtered from the PLE signals using a set of three 1082 nm (for the 4H-SiC case) or 1130 nm (for the 6H-SiC case) longpass interference filters. PLE was measured using an ID230 single-photon counter. Additionally, to counter charge-state switching of the defects, a 770 nm re-pump beam from a tunable pulsed Ti:sapphire laser was focused at the same region in the sample. Laser powers were as mentioned in the main text.
Two-laser characterization
The PLE setup described above was modified by focusing an additional, detuned laser beam onto the sample. The detuned laser field was generated by splitting off part of the stabilized diode laser beam. This secondary beam was coupled into a single-mode fiber and passed through an electro-optic phase modulator in which an RF signal (up to ~5 GHz) modulated the phase. Several sidebands were created next to the fundamental laser frequency; the spacing of these sidebands was determined by the RF frequency. Next, a Fabry-Pérot interferometer was used to select one of the first-order sidebands (and it was locked to the selected mode). The resulting beam was focused on the same region in the sample as the original PLE beams (diode laser and re-pump) with similar spot size and polarization along the sample c-axis. Laser powers were as mentioned in the main text. Small rotations of the c-axis with respect to the magnetic field were performed using a piezo-actuated goniometer with 7.2 degrees of travel.
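The sideband generation by the phase modulator follows the standard Jacobi-Anger expansion: a field with modulation depth β acquires sidebands at offsets n times the RF frequency with amplitudes J_n(β). A short sketch (β is an assumed value, chosen only for illustration):

```python
import numpy as np
from scipy.special import jv

# Phase modulation E(t) = E0*exp(i*(w*t + beta*sin(W_rf*t))) expands into
# sidebands at offsets n*W_rf with amplitudes J_n(beta)
beta = 1.0                       # modulation depth (assumed value)
orders = np.arange(-3, 4)
rel_power = jv(orders, beta)**2  # relative optical power per sideband
for n, p in zip(orders, rel_power):
    print(f"sideband n = {n:+d}: relative power {p:.3f}")
# the Fabry-Perot interferometer then transmits a single first-order
# sideband (n = +1 or -1), giving the detuned probe field
```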
Data processing
For all graphs with PLE data a background count rate is subtracted from each line, determined by the minimum value of the PLE in that line (far away from resonance features). After this a fixed vertical offset is added for clarity. For each graph, the scaling is identical for all lines within that graph.
DATA AVAILABILITY
The data sets generated and analyzed during the current study are available from the corresponding author upon reasonable request.
Discovery of SARS-CoV-2 RNA-dependent-RNA-polymerase (RdRp) Inhibitor from Sambiloto (Andrographis paniculata) Based on Molecular Docking and ADMET Prediction Approach
The rapid spread of the coronavirus disease 2019 (COVID-19) has led to the development of therapeutic inhibitor drugs of SARS-CoV-2, which can inhibit the viral enzyme RNA-dependent-RNA-polymerase (RdRp), thereby preventing the replication, transcription, and synthesis of RNA virus in the host cells. Previous in-vitro studies revealed that Andrographis paniculata has the potential to inhibit the virus. Therefore, this study aims to identify the specific compounds of Andrographis paniculata which play a role in inhibiting SARS-CoV-2 RdRp using molecular docking. A total of 19 compounds were identified in previous literature studies, while remdesivir and favipiravir were used as the positive controls. All compounds and proteins were subjected to energy minimization and optimization. Furthermore, the docking method was carried out using Autodock 4.2.6 software with a specific grid box containing the active site of RdRp (ID: 6M71), and the Lamarckian Genetic Algorithm was used to determine the conformation. The best docking results were screened by ADMET prediction and the binding energy was evaluated. There are 18 compounds of Andrographis paniculata, including the top three, namely andrographolactone (∆G = -8.86 kcal/mol), andrographolide (∆G = -7.74 kcal/mol), and andrographidine-A (∆G = -7.68 kcal/mol), which showed stronger binding affinity to the SARS-CoV-2 RdRp protein than other compounds and the positive controls remdesivir (∆G = -5.73 kcal/mol) and favipiravir (∆G = -5.20 kcal/mol). Furthermore, active amino acids play a role in this interaction by forming strong hydrogen bonds, such as TYR 619, LYS 621, ASP 760, and ASP 623. Andrographolactone has the highest potential as a SARS-CoV-2 RdRp inhibitor; hence, it can be used as a novel therapeutic candidate.
INTRODUCTION
A new pandemic known as Coronavirus Disease 2019 was first reported in December 2019 in Wuhan, China, and is caused by the Severe Acute Respiratory Syndrome Coronavirus-2 (SARS-CoV-2) (Gorbalenya, et al., 2020). The disease has spread rapidly around the world with millions of victims, and it also has a major social and economic impact (da Silva, et al., 2020; Nimgampalle, et al., 2021). To overcome the infection and viral replication, it is important to understand the proteins involved in the process. The viral spike protein binds to a human receptor, the metallopeptidase Angiotensin-Converting Enzyme 2 (ACE2) (Borse, et al., 2021; Dong, et al., 2020). After the virus enters the host cell, its positive genomic RNA attaches to the ribosome to translate two large terminal polyproteins, which are processed by proteolysis into components for packaging new virions. The 3CLpro and PLpro proteases cleave the enormous viral polyproteins translated on the host ribosomes, after which RNA-dependent-RNA-polymerase (RdRp) replicates the SARS-CoV-2 RNA genome (Morse, et al., 2020).
Previous studies revealed that RdRp plays a vital role in SARS-CoV-2 replication as a potential drug target (Lung et al., 2020; Parvez et al., 2020). It has also been highlighted as a fundamental target in computational strategies, such as molecular docking, due to its importance in the viral replication stage. Molecular docking is a robust, rational, and inexpensive method, which provides an understanding of how the critical NSP interacts with ligands at the active site. Therefore, it supports the design and screening of novel antiviral agents against COVID-19 (Yu, et al., 2020). Another study predicted the binding of andrographolide and its derivatives to RdRp using molecular docking and simulation, and the results showed that the analog oxo-andrographolide has strong binding energy compared to andrographolide (Sharma et al., 2020). Neoandrographolide (AGP3) had significant binding to the catalytic site of RdRp, thereby inhibiting the target therapeutically (Murugan et al., 2021).
Herbal medicines play an important role in the control and prevention of infectious diseases (Kwon, et al., 2019). Several clinical studies revealed that they have beneficial effects in treating and preventing epidemic diseases, such as the SARS Coronavirus (SARS-CoV). COVID-19, caused by SARS-CoV-2, is a new type of coronavirus, which has 80% similarity to SARS-CoV and belongs to the Sarbecovirus subgenus of the Betacoronavirus genus (J. F.-W. Lipsitch et al., 2020; Lung et al., 2020; Wrapp et al., 2020). Due to these similarities, several studies showed that the use of herbal medicines in the treatment of COVID-19 has a beneficial effect (K. W. ). Z. Wang & Yang (2021) revealed the potential activity of Chinese Herbal Medicine against SARS-CoV-2 in a China Clinical Trial, including Herba andrographiti (Xiyanping Injection), Sophora flavescens Ait. (Matrine-sodium chloride injection), and diammonium glycyrrhizinate enteric-coated capsules. In China's experience of fighting the COVID-19 pandemic, the Chinese Herbal Medicine (CHM) therapy schedule was included in the treatment guidelines because it has played an indispensable role (Du, et al., 2021).
Sambiloto (Andrographis paniculata), also known as the "King of Bitters", belongs to the Acanthaceae family and has been used for centuries in Asia as medicine. In the official book of Indonesian medicinal plants, bitter herbs are used as diuretic and antipyretic agents (Patin et al., 2018; Ratnani et al., 2012). The in-silico study by Murugan et al. (2020) reported that several compounds from A. paniculata have promising binding affinity to RdRp with proper binding to the catalytic site, which helps to inhibit the target, using SARS-CoV-1 PDB ID: 6NUR. Thailand openly declared its pilot project in January 2021 for administering and investigating the effectiveness of A. paniculata extract in patients diagnosed with COVID-19 (Lim, et al., 2021). An RCT study also recommended that the regimen of the extract therapy, namely oral 60 mg, t.i.d., for five days, can be given to adult patients diagnosed with the disease. Furthermore, the adverse effects caused by the therapy are limited and benign (Wanaratna et al., 2021).
Indonesia has several traditional medicinal compounds that can be used to treat SARS-CoV and SARS-CoV-2, but the mechanism of their activity and efficiency remains unclear. Therefore, a comprehensive computational approach with molecular docking was used to predict the activity of A. paniculata compounds against RdRp as one of the mechanisms against SARS-CoV-2 infection. In Rafi et al. (2020)'s study, the compounds were putatively identified using LC-MS/MS and classified based on the plant part, namely stems and leaves, extracted by sonication with 70% ethanol. Principal Component Analysis (PCA) was also used to separate and classify the compounds in the leaves, such as andrographanine, 14-deoxyandrographolide, andrographolactone, and dehydroandrographolide, as well as in the stem extracts, including andrographolide, apigenin-7,4′-dimethylether, 5-hydroxy-7,8-dimethoxy flavanone, and andrografidin A, by observing the values of the peak area. Andrographolactone was found in A. paniculata leaves; hence, the leaves can be a target plant part for the isolation of compounds. A total of 31 metabolites were identified in the stem and leaf extracts with different intensities, and they were divided into groups of diterpene lactones, flavonoids, and phenolic acids.
Materials
Molecular docking was performed on an ASUS ROG GL-552VX laptop with an Intel® Core™ i7-6700HQ processor, 12 GB RAM, CPU @ 2.60 GHz (~2.59 GHz), and Microsoft Windows 10 as the operating system. The software used in this study includes Avogadro for ligand energy minimization and Swiss PDB Viewer for protein optimization. Molecular docking was carried out using Autodock 4.2.6, after which Discovery Studio Visualizer and PyMOL were used for visualizing protein-ligand interactions. The Lipinski screening and ADMET prediction were carried out using the SCFBio and pkCSM web tools.
Methods
Detailed docking studies are necessary to predict candidate SARS-CoV-2 RdRp inhibitors among the A. paniculata compounds, and the schematic of the molecular docking study process is presented in Figure 1.
Ligand and Protein Preparation
The crystal structure of SARS-CoV-2 RdRp (ID: 6M71) was downloaded from the RCSB Protein Data Bank. The target protein was prepared by removing water molecules and adding all hydrogens using the Discovery Studio Visualizer. Its structure was optimized with the Swiss PDB Viewer software using a GROMOS96 force field and then saved as a .pdb file. A total of 19 compounds of A. paniculata were generated as ligands from the previously collected data, while Rafi et al. (2020)'s study was used for screening to find potential anti-SARS-CoV-2 compounds. The compounds were downloaded from PubChem and passed the Lipinski screening using the SCFBio webserver. Remdesivir and favipiravir were used as positive controls. Hydrogen was added to all ligands, followed by energy minimization using the MMFF94 force field in the Avogadro software.
Molecular Docking
Molecular docking studies of the A. paniculata compounds and positive controls against RdRp (6M71) were carried out using AutoDock. The protein and ligand were uploaded to AutoDock Tools, and torsions were detected for the ligands, allowing the rotation of all rotatable bonds. The Gasteiger partial and Kollman charges were added automatically. A suitable grid box on the active site of RdRp (ID: 6M71) from Parvez et al. (2020) was used after evaluating other studies. In the semi-flexible docking method, the ligand is set as flexible while the protein is kept in a rigid conformation. The molecular docking was performed using the Lamarckian Genetic Algorithm with a population size of 150, a maximum of 2,500,000 energy evaluations, and 100 independent runs. The best pose was evaluated using the lowest binding energy score (∆G) together with the inhibition constant (Ki) value, as well as the crucial functional amino acids detected to play a role in the docking interaction by Discovery Studio Visualizer.
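For concreteness, the GA settings quoted above map onto AutoDock 4 docking-parameter-file (DPF) keywords roughly as in the sketch below. The file name is hypothetical, and a complete DPF contains many more entries (grid map names, ligand initial state, local-search parameters), so this only shows how the three parameters from the text would be written.

```python
# Sketch: append the GA settings from the text to an AutoDock 4 DPF.
# Keyword names follow AutoDock 4 DPF conventions (assumption: the study
# used the standard keywords; the file name below is hypothetical).
ga_params = [
    ("ga_pop_size", 150),       # population size (from the text)
    ("ga_num_evals", 2500000),  # max energy evaluations (from the text)
]
with open("rdrp_ligand.dpf", "a") as dpf:
    for key, value in ga_params:
        dpf.write(f"{key} {value}\n")
    dpf.write("set_ga\n")       # select the genetic algorithm
    dpf.write("ga_run 100\n")   # 100 independent Lamarckian GA runs
```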
ADMET Prediction
The Absorption, Distribution, Metabolism, Excretion, and Toxicity (ADMET) filtering can be used to predict the pharmacological properties, metabolism, and toxicity of oral drug candidates. The Predicting Small Molecule Pharmacokinetic Properties Using Graph-Based Signatures (pkCSM) online server was used to reduce costs as well as to obtain highly accurate predictions. All the ADMET prediction model parameters on pkCSM were obtained from Pires et al. (2015). The 19 compounds that demonstrated good binding affinity were screened for ADMET by converting the PDB format into SMILES and then uploading them to the pkCSM online server.
RESULTS AND DISCUSSION
Figure 1. Schematic of the molecular docking study of SARS-CoV-2 RdRp inhibitors

The viral polymerase RdRp, also known as nsp12, can be a crucial target for inhibiting the replication, transcription, and synthesis of RNA virus in host cells (Gao et al., 2020). The RdRp protein can cause a high mutation rate, thereby leading to the emergence of new viruses that affect the disease profile, such as escaping host immunity or increasing resistance to antiviral therapeutics (Wabalo et al., 2021). Cheminformatics and computational drug repurposing are tangible strategies for developing antivirals against SARS-CoV-2 in a shorter period compared to new drug development techniques. Furthermore, several computational drug methods, such as molecular docking, have identified potential drug inhibitors for SARS-CoV-2 infection. The success of the docking process was evaluated with the scoring function of the free binding energy (∆G) and the main role of amino acid interactions. Several in-silico studies of A. paniculata molecules on SARS-CoV-2 RdRp were carried out using homology modeling, such as Srikanth et al. (2021), who docked andrographolide compounds to the target protein with AutoDock and MOE software. Murugan et al. (2021) used a molecular docking and molecular dynamics (MM-GBSA) approach on five compounds of A. paniculata, and the best results were obtained with neoandrographolide (AGP3). Sharma et al. (2020) showed that oxo-andrographolide had the best binding to the SARS-CoV-2 RdRp protein (ID: 6M71) through molecular docking, molecular dynamics (MM-GBSA), clustering of conformations, PCA, and drug-likeness/ADME prediction. This study's results are expected to improve on previous findings through a more comprehensive set of A. paniculata compounds using a molecular docking study approach.
Lipinski Rules Screening
Lipinski's rule was used to predict a compound's bioavailability, i.e., its ability to be absorbed and circulated in the body when administered orally. The results showed that the 19 compounds of A. paniculata passed the criteria of Lipinski's rule, as shown in Supplementary Table S1. The 19 compounds passed the Log P value screening, which indicates that the drugs can easily be absorbed and have an ideal lipophilicity to pass through the lipid bilayer of the intestine, making it easy for them to reach the target protein and interact. The ideal molecular weight of the A. paniculata compounds indicates that the drugs can diffuse through the cell membrane, thereby increasing their oral bioavailability. Consistent with this, the good evaluation results for hydrogen-bond donors and acceptors showed that the compounds can quickly be absorbed due to the presence of weak hydrogen bonds with cells, and minimal energy is required for the drugs to enter the blood vessels. Therefore, the 19 compounds of A. paniculata can become safe orally active drugs for humans based on the good absorption and bioavailability predicted by Lipinski's rule.
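The Lipinski screening itself is easy to reproduce programmatically. The sketch below uses RDKit rather than the SCFBio web server used in the study, and aspirin's SMILES as a stand-in molecule, so it illustrates the rule-of-five check rather than the exact screening pipeline.

```python
from rdkit import Chem
from rdkit.Chem import Descriptors, Lipinski

def passes_rule_of_five(smiles):
    """True if the molecule satisfies Lipinski's rule of five."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        raise ValueError("could not parse SMILES")
    return (Descriptors.MolWt(mol) <= 500          # molecular weight
            and Descriptors.MolLogP(mol) <= 5      # lipophilicity (Log P)
            and Lipinski.NumHDonors(mol) <= 5      # hydrogen-bond donors
            and Lipinski.NumHAcceptors(mol) <= 10) # hydrogen-bond acceptors

# aspirin used as a stand-in test molecule (not an A. paniculata compound)
print(passes_rule_of_five("CC(=O)Oc1ccccc1C(=O)O"))  # True
```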
Validation of Positive Control (Remdesivir and Favipiravir) as RdRp (6M71) Inhibitor
The RdRp protein (ID: 6M71) used in this study has no native co-crystallized ligand; hence, existing antiviral drugs that are highly selective inhibitors of the viral RNA polymerase were used instead (Loza-Mejía & Salazar, 2020). Some of them are still in the clinical trial stage, such as remdesivir and favipiravir, which were used as positive controls in place of a native ligand. They also served as inhibitors in Gao et al. (2020)'s studies of the RdRp 6M71 protein; hence, the analysis of remdesivir's amino acid interactions is very crucial for validating the docking active site.
The grid box size preferences of some previous studies (Borse et al., 2021; da Silva et al., 2020; Nimgampalle et al., 2021; Parvez et al., 2020) that used the same protein code 6M71 with remdesivir as a positive control for docking were tested. These studies referred to Gao et al. (2020)'s structure of the 6M71 RdRp polymerase, which comprises nsp12 (RdRp) in complex with its cofactors nsp7 and nsp8. These two cofactors help catalyze the synthesis of viral RNA, which plays a central role in the replication and transcription cycle of the COVID-19 virus. This protein has three subdomains that are close to the active site. Its active site in motif A contains residues 611 to 626, which include a classic divalent-cation-binding residue, while motif C consists of residues 753 to 767, including the catalytic residues 759 to 761 (Gao et al., 2020). The catalytic residues are important for the stability of the enzyme-inhibitor complex and for viral replication (Sharbidre et al., 2021). Gao et al. (2020) reported that PRO 620 is responsible for stabilizing the protein's active site in the palm subdomain, and interaction of inhibitors with it can destabilize protein expression. Other amino acids, CYS 813, GLU 811, SER 814, and LYS 798, were found in the palm subdomain and contribute to the polymerase activity of the enzyme (Gao et al., 2020).
The critical amino acids that played a role in the primary interaction of the positive controls with RdRp were also consistent with other studies. This indicates that the grid box area used contains several active amino acids that play an important role in the inhibitory activity against SARS-CoV-2 RdRp (6M71). Subsequently, a docking study of the 19 A. paniculata compounds against RdRp was carried out using the validated grid box.
Evaluation of RdRp (6M71) Inhibitor Activity on Chemical Constituents A. paniculata
Based on the docking results, some potential antiviral compounds were detected against RdRp, with the best conformations having the lowest energy (largest negative Gibbs free energy of binding (∆G) score), as shown in Table 1. The top three docking scores of the bioactive A. paniculata compounds were ∆G = -8.86 kcal/mol for andrographolactone (the lowest binding energy), followed by andrographolide with ∆G = -7.74 kcal/mol and andrographidine-A with ∆G = -7.68 kcal/mol. Furthermore, these binding energies were lower than those of the positive controls remdesivir and favipiravir, namely -5.73 kcal/mol and -5.20 kcal/mol, respectively. The large negative free energy of binding (∆G) scores indicate the presence of spontaneous protein-ligand binding that stabilizes the protein-ligand interaction.
They are also better inhibitors according to the molecular docking prediction (Xing Du et al., 2016; Nusantoro & Fadlan, 2021). The lowest binding score correlated with the lowest inhibition constant (Ki) value. Based on the results, the lowest Ki value was obtained for andrographolactone, namely 321.37 nM (nanomolar), followed by andrographolide at 2.10 μM (micromolar) and andrographidine-A at 2.34 μM (micromolar). These values indicate the concentration required to produce half-maximal inhibition, where a lower value shows a stronger ligand affinity for the macromolecule (Yasin et al., 2020).
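The reported Ki values follow directly from the binding energies through the relation Ki = exp(∆G/RT) used by AutoDock (R in kcal/(mol·K), T = 298.15 K); the short sketch below reproduces the three quoted values to within rounding.

```python
import math

R = 1.98720e-3   # gas constant in kcal/(mol*K)
T = 298.15       # temperature (K) used in AutoDock's Ki estimate

def inhibition_constant(delta_g_kcal):
    """Ki in molar units from binding free energy, Ki = exp(dG / RT)."""
    return math.exp(delta_g_kcal / (R * T))

for name, dg in [("andrographolactone", -8.86),
                 ("andrographolide", -7.74),
                 ("andrographidine-A", -7.68)]:
    print(f"{name}: Ki = {inhibition_constant(dg) * 1e6:.2f} uM")
# -8.86 kcal/mol gives ~0.32 uM (about 321 nM), matching the reported value
```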
The docking results showed that andrographolactone can stabilize and has a stronger affinity for the active site of RdRp (ID: 6M71) compared to andrographolide and andrographidine-A. This indicates that it has the potential to be a candidate anti-SARS-CoV-2 drug by blocking RdRp, which replicates the positive-sense RNA genome, thereby preventing viral replication (Ahmad et al., 2020).
The top three compounds with good binding energy, namely andrographolactone, andrographolide, and andrographidine-A, were identified as terpenoids.

Andrographolactone showed the strongest binding to the RdRp protein. It is a novel diterpene with an unprecedented skeleton that was isolated from the aerial part of A. paniculata (G. C. Wang et al., 2009). Andrographolactone also has the potential to act as an anti-inflammatory agent by inhibiting TNF-α (Firdayani & Srijanto, 2012), which can prevent cytokine storms. This is the first study to show its activity as an anti-SARS-CoV-2 agent, specifically as an RdRp inhibitor.
Andrographolide is a bioactive and major constituent of the leaf extract of A. paniculata, and it has anti-inflammatory, antiviral, antitumor, and hepatoprotective activities (Jayakumar et al., 2013). Sa-Ngiamsuntorn et al. (2021) revealed that A. paniculata extract and andrographolide have IC50 values similar to remdesivir against SARS-CoV-2 infection. This evidence can be used for future antiviral development. Furthermore, previous in-vitro studies on the anti-SARS-CoV-2 activity of A. paniculata extract and andrographolide in Calu-3 cells showed high inhibition at the late phases of the viral life cycle, including viral assembly and maturation, with IC50 values of 0.036 μg/mL and 0.034 μM, respectively. Another study using Vero E6 cells reported that andrographolide had a stronger effect than the extract. The chemical structures of andrographolide and remdesivir share a common naphthalene-ring-containing functional group active against SARS-CoV-2 infection. Srikanth et al. (2021) showed a very strong affinity of andrographolide to RdRp (6M71) compared to the RBD of SARS-CoV-2, which indicates its potential activity as an RdRp inhibitor. These compounds have anti-inflammatory properties by inhibiting the Th1/Th17 response and stabilizing cytokine expression. Nie et al. (2017) showed that andrographolide derivatives can inhibit the TNF-α/NF-κB and TLR4/NF-κB signaling pathways to suppress pro-inflammatory cytokines, which prevents the cytokine storm that is often observed in COVID-19 patients. Swaminathan et al. (2021) revealed the potential activity of A. paniculata phytocompounds against ten structural and non-structural SARS-CoV-2 proteins using molecular docking and dynamic simulation. The results showed that andrographidine-A has good binding energy with the membrane protein, NSP15, and the spike protein of SARS-CoV-2. The limitation of these studies was that the RdRp target was not tested.
The top three compounds with the best docking scores, namely andrographolactone, andrographolide, and andrographidine-A, have a relatively higher chemical potential, leading to increased reactivity, compared to remdesivir and favipiravir, indicating that they are strong inhibitors of SARS-CoV-2 RdRp based on molecular docking. They also have anti-SARS-CoV-2 potential based on previous wet- and dry-lab studies.
Analysis of Amino Acid Interaction
This study identified critical amino acids that play a role in hydrogen interactions at the active site of RdRp, namely TYR 619, LYS 621, ASP 760, and ASP 623, as shown in Figure 3 and Supplementary Table S2. The hydrogen bond is a non-covalent interaction that plays a significant role in the docking score, complex formation, and the strength of binding modes (Fikrika et al., 2016). Andrographolactone has strong binding energy with the RdRp protein, although its amino acid interactions did not include hydrogen bonding. Instead, it formed two pi-alkyl interactions with ARG 349 and PRO 461 in the finger domain of RdRp, as shown in Figure 3a.
This study showed that TYR 619 had the highest number of hydrogen bonds, namely 11, which contribute to the second-ranked docking compound, andrographolide, as shown in Figure 3b. Previous studies reported that it also formed a hydrogen bond in the remdesivir-RdRp (ID: 6M71) complex docked with natural bioactive compounds (Abd El-Aziz et al., 2021). Eweas et al. (2021) showed that TYR 619 is present at the predicted active site of the protein, which formed a complex with remdesivir and hydroxychloroquine. TYR 619 (Y619) is an essential amino acid in the active site of Motif A of RdRp (ID: 6M71) (Gao et al., 2020).
Figure 3. Visualizing protein-ligand interactions of the top three best docking scores: (a) andrographolactone, (b) andrographolide, and (c) andrographidine-A
LYS 621 in the active site also contributes to the formation of three hydrogen interactions with andrographidine-A, which was ranked third based on the docking score, as shown in Figure 3c. This study's results showed that this amino acid formed eight hydrogen bonds in total. Previous studies on the remdesivir-RdRp (ID: 6M71) interaction revealed that LYS 621 is one of the amino acids responsible for hydrogen-bond formation (Eweas et al., 2021; Jang et al., 2021; Pintilie et al., 2020). Gao et al. (2020) reported that it is present in Motif A (fingers subdomain) and appeared in the prediction of the active site of the RdRp protein. The active amino acid residues ASP 760 and ASP 623 formed 8 and 4 hydrogen bonds, respectively. This is comparable with previous studies, such as Pintilie et al. (2020), which studied remdesivir-RdRp (ID: 6M71) and revealed hydrogen interactions of ASP 760 (D760) and ASP 623. Pirzada et al. (2021) identified potential inhibitors of RdRp (ID: 6M71) using FDA-approved antiviral drugs, such as remdesivir, ledipasvir, and paritaprevir.
The results showed that ASP 760 and ASP 623 formed bonds in the catalytic binding site of RdRp, a finding corroborated by Gao et al. (2020).
The four amino acids formed hydrogen bonds that were responsible for the binding-energy score and that stabilized ligand-protein inhibition of RdRp. The amino acid interactions were comparable with those of the positive-control-bound protein complex. This indicates that the docking positions of the ligands on RdRp coincide with the active site containing the enzyme's catalytic area, which is essential for viral replication and transcription.
ADMET Prediction
The ADMET predictions of the top three A. paniculata compounds obtained with pkCSM are presented in Table 2, while those of all compounds are shown in Supplementary Table S3, using the parameter models of Pires et al. (2015). The absorption predictions showed that andrographidine-A has low Caco-2 permeability, while all three compounds show good intestinal absorption and thus have the potential to be absorbed across the intestinal membrane. Andrographidine-A may therefore require a drug delivery system designed to improve its absorption and penetration and achieve adequate therapeutic bioavailability. Andrographolactone has lower skin permeability than andrographidine-A and andrographolide, indicating that it is unsuitable for topical delivery but can be used for oral delivery.
For distribution, andrographidine-A is predicted to have a low VDss, to be poorly distributed to the brain, and to be unable to penetrate the CNS; its low lipophilicity limits crossing of the blood-brain barrier into the central nervous system. In contrast, andrographolactone and andrographolide, being more lipophilic, can penetrate and distribute widely in these regions, which may reduce side effects and toxicity or improve the pharmacological activity of the drugs.
Most metabolic reactions, primarily in the liver, are mediated by cytochrome P450 enzymes, several of which promote prodrug activation or detoxification by converting drugs into more polar, excretable molecules. The results predicted andrographolactone and andrographolide to be substrates of the CYP3A4 enzyme, so they are readily metabolized and excreted from the body, preventing toxicity. CYP3A4 is the major cytochrome because it metabolizes most types of drugs. The predictions further indicated that andrographolactone can act as a CYP3A4 substrate and as a CYP1A2 and CYP2C19 inhibitor.
Based on the excretion predictions, andrographolactone, andrographolide, and andrographidine-A are not renal OCT2 substrates, so they should not interact when administered concurrently with OCT2 inhibitors. The toxicity predictions indicate that these compounds are neither mutagenic on the AMES test nor hERG inhibitors, which could otherwise lead to heart problems. Based on T. pyriformis toxicity, all compounds can be toxic to the protozoan. The hepatotoxicity prediction showed that andrographolactone and andrographidine-A are non-toxic, while the skin sensitisation prediction showed that andrographolactone may affect the skin.
Overall, the pkCSM ADMET predictions for the top three docking compounds showed that andrographolactone has the most favorable ADMET profile. This result should be confirmed by comparison with in-vitro and in-vivo studies.
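The pkCSM predictions above come from a web server, but quick drug-likeness screens of the same kind can be reproduced locally. The sketch below, assuming the RDKit package is installed, computes Lipinski-type descriptors that underlie many absorption predictions; the SMILES string is a placeholder (aspirin), since the exact input structures used in this study are not reproduced here.

```python
from rdkit import Chem
from rdkit.Chem import Descriptors

# Placeholder structure (aspirin); substitute the SMILES of andrographolactone,
# andrographolide, or andrographidine-A to screen the actual compounds.
smiles = "CC(=O)Oc1ccccc1C(=O)O"
mol = Chem.MolFromSmiles(smiles)

descriptors = {
    "MolWt": Descriptors.MolWt(mol),        # molecular weight (Lipinski: < 500)
    "LogP": Descriptors.MolLogP(mol),       # lipophilicity (Lipinski: < 5)
    "HBD": Descriptors.NumHDonors(mol),     # hydrogen-bond donors (<= 5)
    "HBA": Descriptors.NumHAcceptors(mol),  # hydrogen-bond acceptors (<= 10)
    "TPSA": Descriptors.TPSA(mol),          # polar surface area (permeability proxy)
}
for name, value in descriptors.items():
    print(f"{name}: {value:.2f}")
```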
CONCLUSION
The results showed that the top three bioactive A. paniculata compounds, namely andrographolactone, andrographolide, and andrographidine-A, have the lowest binding energies (∆G) and may inhibit the replication, transcription, and synthesis activity of SARS-CoV-2 RdRp (ID: 6M71) by forming hydrogen interactions with TYR 619, LYS 621, ASP 760, and ASP 623. The ADMET prediction revealed that andrographolactone has low toxicity, which is ideal for an orally active drug in humans. However, further studies on andrographolactone as a novel therapeutic candidate are needed, using molecular dynamics or QSAR, followed by pre-clinical in-vitro and in-vivo SARS-CoV-2 studies.
Wavefront shaping in multimode fibers by transmission matrix engineering
One of the greatest challenges in utilizing multimode optical fibers is mode-mixing and inter-modal interference, which scramble the information delivered by the fiber. A common approach for canceling these effects is to tailor the optical field at the input of the fiber to obtain a desired field at its output. In this work, we present a new approach which relies on modulating the transmission matrix of the fiber rather than the incident light. We apply computer-controlled mechanical perturbations to the fiber to obtain a desired intensity pattern at its output. Using an all-fiber apparatus, we demonstrate focusing light at the distal end of the fiber and conversion between fiber modes. Since in this approach the number of degrees of control can be larger than the number of fiber modes, it allows simultaneous control over multiple inputs and multiple wavelengths.
Introduction
In recent years, multimode optical fibers (MMFs) have been the focus of numerous studies aiming to enhance the capacity of optical communication and endoscopic imaging systems [1,2]. Ideally, one would like to utilize the transverse modes of the fiber to deliver information via multiple channels simultaneously. However, inter-modal interference and coupling between the guided modes of the fiber result in scrambling between channels. One of the most promising approaches for unscrambling the transmitted information is shaping the optical wavefront at the proximal end of the fiber in order to obtain a desired output at the distal end. Demonstrations include compensation of modal dispersion [3][4][5], focusing at the distal end [6][7][8][9][10], and delivering images [11][12][13] or an orthogonal set of modes [14,15] through the fiber.
Typically in wavefront shaping, the incident wavefront is controlled using spatial light modulators (SLMs), digital micromirror devices (DMDs), or nonlinear crystals. In all cases, the shaped wavefront sets the superposition of guided modes that is coupled into the fiber. For a fixed transmission matrix (TM) of the fiber, this superposition determines the field at the output of the fiber, as depicted in Fig. 1(a). Hence, in a fiber that supports N guided modes, wavefront shaping provides at most N complex degrees of control. However, many applications require the number of degrees of control to be larger than the number of modes. For example, one of the key ingredients for spatial division multiplexing is mode converters, which require simultaneous control over the output fields of multiple incident wavefronts. To this end, complex multimode transformations were previously demonstrated by applying phase modulations at multiple planes [16][17][18][19]. However, this requires free-space propagation between the modulators, thus limiting the stability of the system and increasing its footprint.
In this work we propose and demonstrate a new method for controlling light at the output of an MMF, which does not rely on shaping the incident light and can be implemented in an all-fiber configuration. Inspired by the ongoing efforts to generate on-chip mode converters by manipulating modal interference in multimode interferometers [20][21][22], we directly control the light propagation inside the fiber to manipulate its TM, allowing us to generate a desired field at its output (Fig. 1(b)). Since the TM is determined by O(N²) complex parameters, TM-shaping provides access to many more degrees of control than shaping the incident wavefront.
To control the fiber's TM, we apply computer-controlled bends at multiple positions along the fiber. Since the stress induced by the bends changes the boundary conditions of the system, it modifies the TM such that different bends yield different speckle patterns at the distal end (Fig. 1(c)). We can therefore obtain a desired field at the output of the fiber by imposing a set of controlled bends, without modifying the incident wavefront. Since in this approach the input field is fixed, it does not require an SLM or any other free-space component. Such an all-fiber configuration is especially attractive for MMF-based applications that require high throughput and efficient control over the field at the output of the fiber. As a proof-of-concept demonstration of TM-shaping, we demonstrate focusing at the distal end of the fiber and conversion between the fiber modes.
Figure 1: Shaping the transmission matrix of multimode optical fibers. (a) The conventional method for wavefront shaping in complex media, performed e.g. by using an SLM and free-space optics to tailor the incoming wavefront at the proximal end of the multimode fiber. (b) The proposed method for light modulation, in which the transmission matrix of the medium is altered, e.g. by performing perturbations on the fiber itself. (c) Illustration of the sensitivity of the output pattern to the fiber geometry. Three different configurations of the fiber (depicted by red, green, and blue curves) correspond to three different speckle patterns at the output of the fiber. Since the input field coupled into the fiber is fixed, the different output patterns correspond to different transmission matrices of the fiber.
Experimental Techniques
Principle
Our method relies on applying controlled weak local bends along the fiber. To this end, we use an array of computer-controlled piezoelectric actuators to locally apply pressure on the fiber at multiple positions [23,24].
The TM of the fiber depends on the curvatures of the bends, which are determined by the travel of each actuator. To obtain a target pattern at the distal end, we compare the intensity pattern recorded at the output of the fiber with a desired target pattern. Using an iterative algorithm, we search for the optimal configuration of the actuators, i.e., the optimal travel of each actuator, that maximizes the overlap of the output and target patterns.
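As a minimal illustration of this feedback loop, the Python sketch below evaluates one candidate actuator configuration; set_actuators and grab_frame are hypothetical stand-ins for the piezo-driver and camera interfaces, which are not specified in the text.

```python
import numpy as np

def configuration_cost(voltages, set_actuators, grab_frame, target):
    """Evaluate one candidate actuator configuration: apply the voltages,
    record the output intensity pattern, and return a cost that decreases
    as the overlap with the target pattern grows."""
    set_actuators(voltages)                 # hypothetical piezo-driver call
    frame = grab_frame().astype(float)      # hypothetical camera call
    frame /= frame.sum()                    # normalize total recorded power
    tgt = target / target.sum()
    return -float(np.sum(frame * tgt))      # minimizing this maximizes overlap
```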
Experimental Setup
The experimental setup is depicted in Fig. 2. A HeNe laser (wavelength λ = 632.8 nm) is coupled to an optical fiber, overfilling its core. We placed 37 piezoelectric actuators along the fiber. By applying a set of computer-controlled voltages, we controlled the vertical displacement of each actuator. Each actuator bends the fiber via a three-point contact, creating a bell-shaped local deformation of the fiber, with a curvature that depends on the vertical travel of the actuator (see Figs. 2(b,c)). For the maximal curvature we applied (R ≈ 10 mm), we measured an attenuation of a few percent per actuator due to bending loss. The spacing between neighboring actuators was set to at least 3 cm, which is larger than d²/λ, where d is the core's diameter, such that the interference patterns inside the fiber at two adjacent actuators are uncorrelated. At the distal end, a CMOS camera records the intensity distribution of both the horizontally and vertically polarized light.
We used two types of multimode fibers: a fiber supporting a few modes for demonstrating mode conversion, and a fiber supporting numerous modes for demonstrating focusing. For the focusing experiment, we used a 2 meter-long graded-index (GRIN) multimode optical fiber with a numerical aperture (NA) of 0.275 and a core diameter of d_MMF = 62.5 µm (InfiCor OM1, Corning). The fiber supports approximately 900 transverse modes per polarization at λ = 632.8 nm (V ≈ 85), yet we used weak focusing at the fiber's input facet to excite only ≈ 280 modes. For the experiments with the few-mode fiber (FMF), we used a 5 meter-long step-index (SI) fiber with an NA of 0.1 and a core diameter of d_FMF = 10 µm (FG010LDA, Thorlabs). In principle, at our wavelength the fiber supports 6 modes per polarization (V ≈ 5).
Optimization Process
The curvature of the bends, set by the travel of each actuator, modifies how light propagates through the fiber and thus determines the speckle pattern received at the distal end. We can therefore define an optimization problem: finding the voltages that should be applied to the actuators to receive a given target pattern at the output of the fiber. The distance between the target and each measured pattern is quantified by a cost function, which the algorithm iteratively attempts to minimize.
Figure 2: Experimental setup. The fiber is pressed by two pins attached to each actuator and one pin placed below it, creating a three-point contact. A computer-controlled voltage applied to each actuator sets its travel and defines the curvature of the local deformation it imposes on the fiber. L, lens; M, mirror; PBS, polarizing beamsplitter; CMOS, camera.
For M actuators, the solution space is an M-dimensional subspace, defined by the voltage range and the algorithm's step intervals, and can be searched using an optimization algorithm. While the optical system is linear in the optical field, the response of the actuators, i.e., the modulation they impose on the complex light field, is not linear in the voltages.
Moreover, since a change in the curvature of an actuator at one point along the fiber affects the interference pattern at all subsequent actuator positions, the actuators cannot be regarded as independent degrees of control. A similar nonlinear dependence between degrees of control is obtained, for example, for wave control in chaotic microwave cavities [25]. Out of the wide range of iterative optimization algorithms that can efficiently solve such nonlinear optimization problems, we chose Particle Swarm Optimization (PSO) [26], as on average it achieved the best results of the algorithms we tested (see the Supplementary Material for more details on the use of PSO).
Focusing at the Distal End of the Fiber
To illustrate the concept of shaping the intensity patterns at the output of the fiber by controlling its TM, we first demonstrate focusing the light to a sharp spot at the distal end of the fiber. We excite a subset of the fiber modes by weakly focusing the input light on the proximal end of the fiber. Due to inter-modal interference and mode mixing, the modes interfere in a random manner at the output of the fiber, exhibiting a fully developed speckle pattern (Fig. 3(a)). Based on the number of speckle grains in the output pattern, we estimate that we excite the first 280 guided fiber modes.
To focus the light to some region of interest (ROI) in the recorded image, we run the optimization algorithm to enhance the total intensity at the target area. We define the enhancement factor η by the total intensity in the ROI after the optimization, divided by the ensemble average of the total intensity in the ROI before the optimization. The ensemble average is computed by averaging the output intensity over random configurations of the actuators, and applying an additional azimuthal integration to improve the averaging.
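A minimal sketch of this definition, assuming the camera frames are available as NumPy arrays (the additional azimuthal integration used for the ensemble average is omitted here):

```python
import numpy as np

def enhancement(frame_after, roi_mask, frames_random):
    """Enhancement factor eta: total ROI intensity after optimization divided
    by the ensemble average of the ROI intensity over random actuator
    configurations."""
    i_opt = frame_after[roi_mask].sum()
    i_ref = np.mean([f[roi_mask].sum() for f in frames_random])
    return i_opt / i_ref
```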
We start by choosing an arbitrary spot in the output speckle pattern of one of the polarizations. We define a small ROI surrounding the chosen position, covering roughly the area of a single speckle grain, and run the optimization scheme to maximize the total intensity in that area. Fig. 3 depicts the output speckle pattern of the horizontal polarization before (Fig. 3(a)) and after (Fig. 3(b)) the optimization, using all 37 actuators. The enhanced speckle grain is clearly visible and has a much higher intensity than its surroundings, corresponding to an enhancement factor of η = 25.
We repeat the focusing experiment described above with a varying number of actuators M. When a subset of actuators is used, the remaining ones are left idle throughout the optimization. Fig. 3(d) summarizes the results of this set of experiments, showing that the obtained enhancement factor η grows linearly with the number of active actuators M. It is instructive to compare this linear scaling with the well-known results for focusing light through random media using SLMs or DMDs. Vellekoop and Mosk have shown that when the number of degrees of control (i.e., independent SLM or DMD pixels) is small compared to the effective number of transverse modes of the sample, the enhancement scales linearly with the number of degrees of control. The slope of the linear scaling α depends on the speckle statistics and on the modulation mode [27][28][29]. For Rayleigh speckle statistics, as in our system (see Supplementary Material), the slopes predicted by theory are α = 1 for perfect amplitude and phase modulation and α = π/4 ≈ 0.78 for phase-only modulation [29]. Experimentally measured slopes, however, are typically smaller, mainly due to technical limitations such as the finite persistence time of the system, unequal contributions of the degrees of control, and statistical dependence between them. Interestingly, we measure a slope of α ≈ 0.71, which is close to the theoretical value for phase-only modulation of Rayleigh speckles, and higher than typical experimentally measured slopes (e.g., α ≈ 0.57 in [30]). Naively, one could expect a lower slope in our system since, as discussed above, in our configuration the degrees of control are not independent. The large slope values we obtain may indicate that the bends change not only the relative phases between the guided modes (corresponding to phase modulation), but also their relative amplitudes (corresponding to amplitude modulation), via mode-mixing and polarization rotation.
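For reference, the ideal phase-only enhancement predicted by Vellekoop and Mosk for Rayleigh statistics can be written (with M denoting the number of degrees of control, matching the notation above; this formula is quoted from the wavefront-shaping literature rather than from this paper):

$$\eta \;=\; \frac{\pi}{4}\,(M-1) + 1 \;\approx\; \frac{\pi}{4}\,M \qquad (1 \ll M \ll N).$$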
To further study the linear scaling, we performed a set of numerical simulations. We used a simplified scalar model for the light propagating in a GRIN fiber, in which the fiber is composed of multiple sections, each made of a curved and a straight segment. The curved segments simulate the bend induced by an actuator, and the straight segments simulate the propagation between actuators (see Supplementary Material for more details). As in the experiment, we use the PSO algorithm to focus the light at the distal end of the fiber. The numerical results exhibit a clear linear scaling, with slopes in the range of 0.57-0.64 (see Fig. S3 in the Supplementary Material). Simulations for fibers supporting N = 280 modes, roughly the number of modes we excite in our experiment, exhibit a slope of α ≈ 0.64, slightly lower than the experimentally measured slope.
As in experiments with SLMs, focusing is not limited to a single spot. To illustrate this, we used the optimization algorithm to simultaneously maximize the intensity at two target areas. Fig. 3(c) shows a typical result, exhibiting an enhancement which is half of the enhancement obtained when focusing to a single spot, as expected by theory [28]. In principle, it is possible to focus the light to an arbitrary number of spots, yet in practice we are limited by the number of available actuators.
Mode Conversion in a Few Mode Fiber
In the previous section, we demonstrated the possibility of using our system as an all-fiber SLM, i.e., shaping an output complex field by modifying the relative complex weights of the propagating modes. In the following, we go further by studying the feasibility of TM-shaping for tailoring output patterns in the few-mode regime, where the number of fiber modes is comparable to the number of actuators. Specifically, we are interested in converting an arbitrary superposition of guided modes to one of the linearly-polarized (LP) modes supported by the fiber. To this end, we utilize the PSO optimization algorithm to find the configuration of actuators that maximizes the overlap between the output intensity pattern and the desired LP mode. The target LP modes of the step-index fiber were computed numerically for the parameters of our fiber and scaled to match the transverse dimensions of the fiber image. Fig. 4 presents a few examples of conversions between LP modes using 33 and 12 actuators. A mixture of LP01 and LP11 at two different polarizations can be converted to LP11 in one polarization (Fig. 4(a)). Alternatively, a horizontally polarized LP11 mode can be converted to a superposition of a horizontally polarized LP01 and a vertically polarized LP11 (Fig. 4(b)). The Pearson correlation between the target and final patterns in these examples is 0.93. Similar results are obtained when we run the optimization with fewer active actuators, with a negligible reduction in the correlation between the target and final patterns. For example, with 12 actuators we observe a correlation of 0.90 for the conversion presented in Fig. 4(c). Optimization with fewer than 12 actuators shows poorer performance, as the number of actuators becomes comparable to the number of guided modes.
Discussion
Controlling the transmission matrix of a multimode fiber, rather than the wavefront that is coupled into it, opens the door to unprecedented control over the light at the output of the fiber. Since the number of degrees of control, i.e., the number of actuators in our implementation, is not limited by the number of fiber modes N, it can allow simultaneous control over orthogonal inputs and/or spectral components. In fact, if O(N²) degrees of control are available, one can expect to generate arbitrary N × N transformations between the input and output modes. Over the past two decades there has been ever-growing interest in realizing reconfigurable multimode transformations for a wide range of applications, such as quantum photonic circuits [22,[31][32][33][34], optical communications [18,35], and nanophotonic processors [20,36]. These realizations require strong mixing of the input modes, as the output modes are arbitrary superpositions of the input modes. The mixing can be achieved, for example, by diffraction in free-space propagation between carefully designed phase plates [16][17][18][19], a mesh of Mach-Zehnder interferometers with integrated modulators [22], engineered scattering elements in multimode interferometers [20,21], or scattering by complex media [25,37]. In our implementation, we rely on the natural mode mixing and inter-modal interference in multimode fibers, allowing implementation using standard commercially available fibers.
The main limitation of our current proof-of-concept is the achievable modulation rate, which is restricted by the piezo-based implementation. The response time of the system to abrupt changes of the piezos is approximately 30 ms (see Supplementary Material), in principle allowing modulation rates as high as 30 Hz. In practice, our system works at slower rates (≈5 Hz), mainly due to the latency of the piezoelectric actuators and the camera. The total optimization time is 50 minutes for the focusing experiments and 12-15 minutes for the mode conversion experiments. Faster electronics and the development of a stiffer and more efficient bending mechanism would allow higher modulation rates, limited by the resonance frequency of the piezo benders (≈ 300-500 Hz). To achieve even faster rates, a different technology should be used for applying perturbations to the fibers, e.g., utilizing all-fiber acousto-optical modulators [38] or the 'smart fibers' technology with integrated modulators [39]. Optical fibers with built-in modulators could also be utilized for a scalable implementation of our method.
Conclusions and Outlook
In this work we proposed a novel technique for controlling light in multimode optical fibers by modulating the TM using controlled perturbations. We presented proof-of-principle demonstrations of focusing light at the distal end of the fiber and conversion between guided modes, without utilizing any free-space components. Since our approach to modulating the TM of the fiber is general and not limited to mechanical perturbations, it could be directly transferred to other types of actuators, e.g., in-fiber electro-optical or acousto-optical modulators, to achieve all-fiber, loss-less, fast, and scalable implementations. The all-fiber configuration and the possibility of controlling more degrees of freedom than the number of guided modes make our method attractive for fiber-based applications that require control over multiple inputs and/or wavelengths. Moreover, the possibility of achieving high-dimensional complex operations opens the way to the implementation of optical neural networks. Our system can provide an important building block for linear reconfigurable transformations, which can be further used in combination with fibers and lasers that exhibit strong gain and/or nonlinearity for deep learning applications.
Response Time
To measure the typical response time of the system, we introduced abrupt changes to the voltages applied to a subset of the piezoelectric actuators and recorded the speckle pattern obtained at the distal end of the fiber. We then calculated the 2D Pearson correlation coefficient between each of the captured frames and the first frame. The measurements were repeated using different subsets of piezos. Examples of a few of these measurements, for subsets that include between one and four actuators, are shown in Fig. S1(a). The abrupt voltage change causes a fast change in the recorded speckle pattern, yielding a sharp decrease in the computed correlation coefficient. As expected, the bigger the subset of piezos, the stronger the correlation drop. This sharp decrease is the result of the change in the actuator configuration (the bend it imposes), which manifests as a change in the captured speckle pattern. Once the actuator positions stabilized, the correlation settled at a lower value. To ensure that the patterns with lower correlation with respect to the first frame are correlated with one another (thus ensuring that the plateau is not a result of the statistical properties of speckles), we also calculated the 2D correlation coefficient of each frame with the last acquired frame. These results are shown in Fig. S1(b) for the same groups of actuators. The high correlation after the configuration change indeed verifies that the speckle pattern did not change further. Based on such measurements, we estimated the response time of the system at 30 ms, which corresponds to a modulation rate of 33 Hz.
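A minimal sketch of this correlation metric, assuming the camera frames are NumPy arrays:

```python
import numpy as np

def frame_correlation(frame_a, frame_b):
    """2D Pearson correlation coefficient between two camera frames,
    computed by flattening the images and correlating the pixel values."""
    a = frame_a.ravel().astype(float)
    b = frame_b.ravel().astype(float)
    return float(np.corrcoef(a, b)[0, 1])
```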
Decorrelation Time
To estimate the stability of the system, we calculated the 2D correlation coefficient of the speckle pattern at the distal end of the fiber over time while the system was idle, i.e., with no changes to the states of the actuators. This loss of correlation is attributed to the sensitivity of bare optical fibers to thermal fluctuations in the room and pressure changes due to air flow. With the GRIN MMF, we found that the system remained highly correlated (corr ≥ 0.99) for 10 minutes. The correlation then decreased slowly and linearly for 55 minutes, reaching corr = 0.976, after which it decreased faster, reaching corr = 0.883 after two hours. With the SI FMF, the system remained stable and highly correlated (corr ≥ 0.996) over the course of 15 hours.
Rayleigh Statistics
The slope of the linear scaling of the focusing enhancement factor η as a function of the number of degrees of control depends on the intensity statistics of the generated speckle patterns. The theoretical values reported in the main text are derived for Rayleigh intensity statistics [1]. It is therefore important to compare the intensity statistics of the speckle patterns we obtain in our system with the predictions of Rayleigh statistics. Such a comparison is depicted in Fig. S2, which shows excellent agreement with theory.
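For reference, the Rayleigh intensity PDF that the linear fit in Fig. S2 tests is

$$P(I) \;=\; \frac{1}{\langle I\rangle}\,\exp\!\left(-\frac{I}{\langle I\rangle}\right), \qquad \ln P(I) \;=\; -\frac{I}{\langle I\rangle} \;-\; \ln\langle I\rangle,$$

so a plot of ln P(I) versus normalized intensity is a straight line with slope −1/⟨I⟩.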
Optimization Technique
As described in the main text, the results were obtained by finding solutions to optimization problems. These problems used a feedback loop: at each iteration, the speckle pattern at the distal end of the fiber was recorded using the CMOS camera. This pattern was evaluated according to its similarity to a target pattern, and this score was given to the optimization algorithm as a cost, which it tried to minimize by changing the configuration of bends applied to the fiber segments. Lower costs were obtained for bend configurations that yielded patterns with high similarity to the target.
The optimization algorithm we chose is Particle Swarm Optimization (PSO), a population-based stochastic algorithm. It randomly initializes a population of points (referred to as particles) in an M-dimensional search space, representing the voltages assigned to the M actuators. These positions are iteratively improved according to the particles' local and global memory from previous iterations. The algorithm's stochastic nature helps avoid local extrema in non-convex problems. An open-source implementation of PSO [2] was modified to fit our experimental setup and simulation. We defined a single run as a single instance of the optimization process, i.e., achieving a single optimized target speckle pattern, such as the example shown in Fig. 3(b) of the main text. With the GRIN MMF, each such run consisted of 80 iterations with the following hyper-parameters: population size of 120, inertia weight w = 1, inertia damping ratio w_damp = 0.99, personal learning coefficient c1 = 1.5, and global learning coefficient c2 = 2. With the SI FMF, each run used 86-108 iterations with a population of size 50; the values of the other hyper-parameters were unchanged.
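A sketch of such a run using the open-source pyswarms package is shown below for illustration only; the original work used a different, modified open-source implementation, and pyswarms' GlobalBestPSO does not include the inertia damping schedule (w_damp) quoted above. The voltage bounds and the cost_of_configuration helper are assumptions.

```python
import numpy as np
import pyswarms as ps

M = 37                                         # number of actuators
options = {"c1": 1.5, "c2": 2.0, "w": 1.0}     # learning/inertia coefficients above
bounds = (np.zeros(M), 150.0 * np.ones(M))     # assumed piezo voltage range [V]

def swarm_cost(x):
    # x has shape (n_particles, M); return one cost per candidate configuration.
    # cost_of_configuration is a hypothetical helper wrapping the camera
    # feedback loop described in the main text.
    return np.array([cost_of_configuration(v) for v in x])

optimizer = ps.single.GlobalBestPSO(n_particles=120, dimensions=M,
                                    options=options, bounds=bounds)
best_cost, best_voltages = optimizer.optimize(swarm_cost, iters=80)
```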
Simulation
Since our system is linear in the optical field, it is natural to describe the propagation of light in it with a matrix formalism. We divided the fiber into multiple segments, calculated the transmission matrix (TM) of each segment, and computed the total TM of the fiber by multiplying them. To represent our experimental system, we composed bent segments (which mimic the effect of actuators) and straight segments (for the propagation between actuators). A bent segment was approximated by a circular arc with a defined curvature. To find the guided modes and propagation constants of the different segments, we used a numerical module [3] which solves the scalar Helmholtz equation under the weakly guiding approximation [4]. We used 10 radii of curvature, simulating 10 different vertical positions of the actuators and thus 10 different perturbations. These radii were linearly spaced between maximal and minimal values estimated from the experimental system.
Figure S1: Response time of the experimental system. The 2D correlation coefficient of each frame with (a) the first and (b) the last of the acquired frames, when a configuration of actuators (the voltage applied to these piezos) is changed. Blue lines show a change of configuration of a single actuator, red of two actuators, yellow of three, and purple of four.
Figure S2: Intensity distribution of the speckle patterns at the end of the fiber. The natural logarithm of the probability distribution function (PDF) of the speckle pattern intensities as a function of normalized pixel intensity (dots), and a linear fit (line). The data points correspond to experimental readouts, without background noise subtraction.
Mode-mixing in short GRIN fibers mostly occurs within groups of degenerate modes. To mimic this phenomenon, we introduced unitary block matrices, whose block sizes were determined according to the mode degeneracy expressed in the propagation constants, allowing mixing between modes with the same propagation constant. It is noteworthy that without introducing this feature, we were unable to achieve focusing.
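A minimal sketch of this matrix bookkeeping, with toy stand-ins for the per-segment matrices (the real ones come from the modal solver and bend model described above; all numerical values below are arbitrary placeholders):

```python
import numpy as np

def total_tm(segment_tms):
    """Total TM of the fiber: the ordered product of the per-segment
    matrices, applied in the order light traverses them."""
    tm = np.eye(segment_tms[0].shape[0], dtype=complex)
    for seg in segment_tms:
        tm = seg @ tm
    return tm

rng = np.random.default_rng(0)
N = 30                                          # toy number of guided modes
betas = rng.uniform(7.0e6, 7.5e6, N)            # toy propagation constants [rad/m]
straight = np.diag(np.exp(1j * betas * 0.03))   # 3 cm straight segment
# Toy 'bend': straight-segment phases plus random unitary mixing, standing in
# for the bend-induced coupling between (degenerate groups of) modes.
H = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
Q, _ = np.linalg.qr(H)                          # random unitary matrix
bend = Q @ straight
tm = total_tm([straight, bend] * 10)            # 10 actuator sections
```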
We used the same discrete set of possible curvatures for all of the actuators in all runs, and the same optimization mechanism as the experimental setup to achieve a focus. The optimization process assigned one of the possible curvatures to each of the bent fiber segments. In runs where not all of the actuators were used, the remaining ones were set as straight segments (with no curvature) to maintain the same propagation distance in all runs. Fig. S3 shows the enhancement factor η as a function of the number of actuators whose curvatures were optimized for the simulated fibers. The enhancement factor scales linearly with the number of simulated actuators, with slopes between 0.57 and 0.64 for the displayed fiber parameters at a wavelength of λ = 632.8 nm.
Figure S3: Simulation of focusing in a multimode optical fiber. The average enhancement factor achieved (circles) and the standard error (bars), as a function of the number of actuators whose curvatures were modified as part of the optimization process in the simulation. Results obtained for different fiber parameters are shown in different colors, along with a linear curve (gray dashed line).
Monitoring geological storage of CO2 using a new rock physics model
To mitigate the global warming crisis, one of the effective ways is to capture CO2 at an emitting source and inject it underground into saline aquifers, depleted oil and gas reservoirs, or coal beds. This process is known as carbon capture and storage (CCS). With CCS, CO2 is considered a waste product that has to be disposed of properly, like sewage and other pollutants. During and after CO2 injection, monitoring of the CO2 storage site is necessary to observe CO2 plume movement and detect potential leakage. For CO2 monitoring, various physical property changes are employed to delineate the plume area and migration pathways, each with its pros and cons. We introduce a new rock physics model to facilitate the time-lapse estimation of CO2 saturation and possible pressure changes within a CO2 storage reservoir, based on physical properties obtained from prestack seismic inversion. We demonstrate that CO2 plume delineation and the estimation of saturation and pressure changes are possible using a combination of Acoustic Impedance (AI) and the P- to S-wave velocity ratio (Vp/Vs) inverted from time-lapse or four-dimensional (4D) seismic. We assumed a scenario over a period of 40 years comprising an initial 25-year injection period. Our results show that monitoring the CO2 plume, in terms of extent and saturation, can be carried out using our rock physics-derived method. The suggested method, without going to the elastic moduli level, handles the elastic property cubes commonly obtained from prestack seismic inversion. Quantification of pressure changes is also possible within un-cemented sands; however, the stress/cementation coefficient in our proposed model needs further study to relate it to effective stress in various types of sandstones. The three-dimensional (3D) seismic usually covers the section from the reservoir's base to the surface, making it possible to detect the CO2 plume's lateral and vertical migration. However, the comparatively low resolution of seismic, the inversion uncertainties, and lateral variations in mineral and shale properties are limitations that warrant consideration. This method can also be applied to hydrocarbon exploration and production monitoring.
Subsurface CO 2 storage is not a new concept. For decades, the oil and gas industry has been re-injecting the CO 2 produced along with the hydrocarbon gases 1,2 . CO 2 injection has also been used for enhanced oil recovery 3,4 . Carbon capture and storage (CCS) has the potential to significantly reduce CO 2 build-up in the atmosphere from fossil fuel use; however, large-scale subsurface CO 2 storage still may pose different technical and social challenges 5 .
Buoyancy trapping is the key process for CO 2 storage during injection and the early stage of storage 5 . Therefore, the CO 2 is injected at the base of the reservoir, and the plume migrates laterally within the most permeable beds until it finds a vertical passage (a fault or fracture) to move upwards and accumulate below the base of the caprock. The plume behavior is a function of the horizontal and vertical heterogeneities within the reservoir. Thin clay and silt layers or carbonate laminations may facilitate the lateral distribution of CO 2 in the storage reservoir. For example, in the Sleipner CCS project, four-dimensional (4D) or time-lapse seismic made it possible to trace the migration path and subsequent accumulation of the CO 2 plume 6 . The other CO 2 trapping mechanisms are residual gas trapping, solubility trapping, and mineral trapping. Time-lapse or 4D seismic is carried out to monitor the CO 2 plume migration within the storage reservoir (for example, in a saline aquifer) and to identify possible vertical CO 2 leakage into shallower strata or to the surface.
There are several methods in use for seismic fluid prediction 7 . Many provide qualitative hydrocarbon indications, whereas a few techniques are quantitative. The qualitative methods comprise Amplitude-Variation-with-Offset (AVO) analysis [8][9][10][11] , AVO cross plotting 12,13 , Lambda-Mu-Rho (LMR) 14 , Extended Elastic Impedance (EEI) 15 , and Curved Pseudo Elastic Impedance (CPEI) 16,17 . Examples of quantitative methods are the Acoustic Impedance versus P- to S-wave velocity ratio (AI-versus-Vp/Vs) rock physics template [18][19][20] , the Multi-Attribute Rotation Scheme (MARS) 21 , Inverse Rock Physics Modelling (IRPM) 22,23 , and a technique to discriminate saturation and pressure from 4D seismic using near and far offset stacks 24 . A practical approach suggested for fluid saturation discrimination 25 using seismic data employed a method similar to LMR 14 . Lame parameters were calculated; however, the fluid saturation was estimated on a ρ/μ versus λ/μ plane, as opposed to the LMR method where a λρ versus μρ plane was used to differentiate various facies (ρ is bulk density, λ is incompressibility, and μ is shear modulus). Two-dimensional permeability modelling 26 of CO 2 saturation, distribution, and seismic response showed that CO 2 trapping and the relationship between P-wave velocity (Vp) and water saturation (Sw) were mostly a function of the Dykstra-Parsons 27 coefficients. A workflow for forward modeling 28 of time-lapse seismic data indicated that a high signal-to-noise ratio is needed to detect CO 2 leakage at the model site. Both studies 26,28 used the Gassmann equations 29 for fluid substitution. Another three-dimensional (3D) modelling study 30 related AI changes to water saturation (Sw) and quantitatively demonstrated that seismic amplitudes can be more precise than seismic impedances for quantifying Sw changes with 4D seismic data.
A seismic profile can be defined as an array of processed seismic traces. Each trace represents the convolution of a source wavelet with an input reflectivity sequence, where each reflectivity spike depicts the contrast in acoustic impedance (AI = P-wave velocity × bulk density) across a geological interface. A seismic inversion is carried out to convert the interface property (reflectivity) to a physical rock property such as AI 31,32 . With the advent of AVO/prestack inversion, it became possible to also obtain shear wave (Vs) information, usually in the form of shear impedance (SI), from the AVO far-offset data. Various forms of Fatti's equation 33 are used for AVO inversion; one of them is 34 :

$$R_P(\theta) = \left(1 + \tan^2\theta\right)\frac{\Delta AI}{2AI} - 8\left(\frac{V_s}{V_p}\right)^2 \sin^2\theta \,\frac{\Delta SI}{2SI} \quad (1)$$

where R P (θ) is the P-wave reflectivity at an angle θ (the average of the incidence and transmission angles), Vp is the P-wave velocity, Vs is the S-wave velocity, and ΔAI/2AI and ΔSI/2SI are the acoustic impedance and shear impedance reflectivities, respectively.
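As an illustration, the two-term form of Eq. (1) can be evaluated directly; the sketch below assumes the reflectivities and a background Vs/Vp ratio are supplied by the user, and the numerical values are placeholders:

```python
import numpy as np

def fatti_reflectivity(theta_deg, r_ai, r_si, vs_vp):
    """Two-term Fatti approximation (Eq. 1): P-wave reflectivity at angle
    theta from the AI reflectivity (r_ai = dAI/2AI), the SI reflectivity
    (r_si = dSI/2SI), and a background Vs/Vp ratio."""
    t = np.radians(theta_deg)
    return (1.0 + np.tan(t) ** 2) * r_ai - 8.0 * vs_vp ** 2 * np.sin(t) ** 2 * r_si

# Illustrative values: a soft interface, at normal incidence and at 30 degrees.
print(fatti_reflectivity(0.0, -0.05, -0.03, 0.5))   # -> -0.05 (equals r_ai)
print(fatti_reflectivity(30.0, -0.05, -0.03, 0.5))  # AVO effect at far offset
```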
Rock physics models represent the link between reservoir properties (e.g., porosity, clay content, sorting, lithology, saturation) and seismic-derived elastic properties (e.g., AI, SI, or Vp/Vs ratio). One existing model comprised a hybrid modeling approach 19 using the AI versus Vp/Vs RPT, applied specifically to sandstones, employing a physical-contact theory, i.e., the Hertz-Mindlin model 35 , combined with theoretical elastic bounds, e.g., the Hashin-Shtrikman bounds 36 , simulating the porosity reduction trend associated with depositional sorting and diagenesis. For soft shales, the seismic properties were estimated as a function of pore shape. Gassmann fluid substitution 29 was carried out to estimate the effect of varying gas versus water saturation in the sand layers, whereas the Backus average 37 was used to predict the effective seismic properties for changing net-to-gross (N/G) ratios 19 . However, it has been demonstrated 22 that even with the standard rock physics template (RPT) of AI versus Vp/Vs [18][19][20] , it is difficult to know whether the model is adequately calibrated to the data or how it should be interpreted. Furthermore, there are nonunique solutions: various combinations of porosity, lithology, and fluid saturation can have the same Vp/Vs ratio and AI under the same rock physics model 22 .
In this study, we introduce a new interactive rock physics model that directly relates AI to the Vp/Vs ratio for predicting fluid saturation (S fl ). The model can be calibrated with well-log data interactively, without using the Hertz-Mindlin model 35 , the Hashin-Shtrikman bounds 36 , or Gassmann fluid substitution 29 . The suggested model is nonlinear, similar to CPEI 16,17 , but has physical meaning and a flexibility that allows it to be applied readily to the seismic-derived AI and Vp/Vs cubes to estimate S fl . We derived a similar equation in a previous publication 38 to calculate shale volume (Vsh) in the AI, Vp/Vs ratio domain.
The proposed model estimates the target fluid saturation (S fl ), in fraction, from the AI and Vp/Vs ratio data obtained by AVO inversion (Eq. 2), where V Pma and V Pw are the P-wave velocities of the mineral matrix and brine, respectively, V Pfl is the apparent P-wave velocity of the target fluid, ρ ma is the density of the mineral grains, ρ fl is the apparent density of the target fluid, ρ w is the density of brine, AI is acoustic impedance, G is the mineralogy/shaliness coefficient, α is the Vs/Vp ratio of the mineral/rock matrix, and n is the stress/cementation coefficient. The water saturation (S w ) can be calculated subsequently (S w = 1 − S fl ).
As mentioned previously, the AI and Vp/Vs ratio are obtained by inverting seismic data (Fig. 1a). AI typically increases, and the Vp/Vs ratio decreases, with increasing burial depth due to a decrease in porosity. If a low-density fluid (hydrocarbon or CO 2 ) replaces the in-situ brine, a reduction in both AI and Vp/Vs values is expected, depending upon the substituted fluid's density. We devised Eq. (2), which relates AI to the Vp/Vs ratio, to isolate the target fluid saturation from the brine-saturated sandstone compaction trend on the AI versus Vp/Vs ratio plane (Fig. 1b, c). One can calibrate the model using nearby well data (Well-A in this case; see the "Methods" section).
This technique will help monitor the lateral and vertical migration of a CO 2 plume in the subsurface. For saturation estimation of a particular CO 2 phase (e.g., gas, supercritical, or liquid), the input V Pfl (apparent P-wave velocity of the target fluid) and ρ fl (apparent density of the target fluid) can be defined accordingly. The proposed method will be useful for reliable control of the CO 2 injection and sequestration processes. Other uses include oil and gas production monitoring and hydrocarbon exploration. Similar to our previous study 39 , we used synthetic elastic property data from the Norwegian Geotechnical Institute (NGI). NGI generated Vp, bulk density, and resistivity 40 properties using grids from a reservoir model by the Northern Light project 41 (Fig. 2a). Additionally, we calculated Vs data to generate the Vp/Vs ratio cubes (see details in the "Methods" section). The reservoir model simulated one of the potential CO 2 storage sites in the northern North Sea, called Smeaheia (Fig. 2b). The Smeaheia area is bounded by a fault array separating the Troll oil and gas field in the west from the Basement Complex in the east 38 . The primary CO 2 storage reservoir in the Smeaheia area is the Sognefjord Formation (Upper Jurassic) sandstone, capped by the Draupne and Heather Formation (Upper Jurassic) shales 38,42 (Fig. 3). The amount of CO 2 to be stored was 1.3 Mt/year over an injection period of 25 years, with an injection rate of 200 tons/hr. We sliced out the AI and Vp/Vs ratio cubes covering only the injection and storage area to reduce computation time, and converted the cubes to a depth-domain seismic format with inline and crossline profiles (Fig. 2c). We assumed that the AI and Vp/Vs cubes were the actual values obtained from seismic inversion (Fig. 2d).
We assumed a monitoring scenario over 40 years, with injection starting in 2020 and lasting 25 years, and assumed that time-lapse seismic surveys were acquired every 10 years. This study also has implications for hydrocarbon exploration and the monitoring of oil and gas production. The anisotropy of physical properties, CO 2 dissolution, and chemical reactions with the rock grains, and their effects on the AI and Vp/Vs ratio, are not taken into account.
Results and discussion
We demonstrate a scenario where we have time-lapse/4D seismic data from 2020, before injection, to the year 2060. The top of the Sognefjord Formation reservoir lies between 1020 and 1370 m below mean sea level (Fig. 4a). The reservoir is brine saturated before CO 2 injection in 2020 (Fig. 4b). Both the reservoir AI and Vp/Vs ratio, assumed to be obtained from prestack inversion, decrease where the CO 2 plume replaces the in-situ brine. Therefore, the saturations estimated from AI and Vp/Vs ratio clearly define the plume boundaries and reservoir inhomogeneity (Fig. 4c-f). We can also see the plume boundary systematically expanding over the years and moving towards the southwest in the up-dip direction. The injection stopped in 2045; therefore, a water breach within the plume along the northeastern boundary is apparent as the plume migrates southwestwards in the panel showing the year 2060 (Fig. 4f). For comparison, we used the Curved Pseudo Elastic Impedance (CPEI) 17 attribute to observe the CO 2 plume effect (Fig. 5). CPEI is a fixed function with coefficients controlling the wet-rock trend and grain mineralogy. Qualitatively, the CPEI fluid-related anomalies are almost identical to those estimated using Eq. (2) (Fig. 4) for the respective survey years, as both functions are essentially non-linear. In theory, CPEI values less than 6.9 (km/s × g/cm 3 ), here denoted by hot colours, should represent the fluid softening due to CO 2 replacing the in-situ brine 16,17 . However, the CPEI anomaly values extend above 6.9 (km/s × g/cm 3 ), making it difficult to relate them to the actual CO 2 saturation within the reservoir.
Discrimination between pressure and fluid saturation effects. On the AI versus Vp/Vs crossplot,
there is a systematic decrease in water saturation within the reach of the CO 2 plume from 2020 to 2060 (Fig. 6). The CO 2 injection started in 2020 and was completed in 2045. In the panels representing the year 2050, the gas-saturated points show a little scatter, which increases in 2060. This scatter could be due to the diffusion and up-dip migration of gas.
With the increase in time from 2020 to 2040, there is a subtle shift in the brine-sand trend (Fig. 6a-c) in direction '4' shown in the inset of Fig. 1b. We calibrated the brine-sand trend for the saturation calculations by changing the value of the stress/cement coefficient 'n'. This change in 'n' values is a good indication of reduced effective stress due to an increase in pore pressure (approximately 10 bar/1 MPa). The brine-saturated sand trend stays roughly the same in the panel covering the end-of-injection year, 2045 (Fig. 6d), and the subsequent survey in 2060 (Fig. 6e). One should bear in mind that the Sognefjord Formation sandstone reservoir is predominantly un-cemented 38 . We cannot expect a similar change of the brine trend with a change in effective stress within deeper quartz-cemented sandstones. Relating the change in 'n' values to the effective stress in various un-cemented sands needs further study.
Advantages of our suggested rock physics model. In the traditional AI-Vp/Vs rock physics template 18,19 , the dry sandstone is modeled by combining Hertz-Mindlin contact theory 35 and Hashin-Shtrikman 36 interpolation, and finally Gassmann fluid substitution 29 is performed to estimate the effect of varying fluid saturation in the sand layers. The modelling typically starts from the high-porosity end member, interpolated to the zero-porosity matrix mineral point, employing equations that use the rock bulk (K) and shear (μ) moduli as input. The model we suggest (Eq. 2) does not require computations at the elastic moduli level. The matrix pole/point is defined on the AI versus Vp/Vs plane on the basis of the coefficient α, the Vs/Vp ratio of the mineral/rock matrix (Fig. 7). While keeping the matrix point at the same position, the gradient of the line interpolating between the matrix point and the high-porosity end member can be changed using the coefficient 'n'. This interpolation defines the brine-sand (100% Sw) line, which can be adjusted to calibrate with the stress or cementation condition of the target layer. Changing the shale/mineralogy coefficient 'G' results in a static vertical shift of the brine-sand line that helps adjust to the N/G ratio of the target layer data. The saturation contours adjust themselves with respect to the brine line according to the given apparent P-wave velocity and density of the target fluid (V Pfl and ρ fl , respectively). This procedure does not require Gassmann substitution 29 , as the traditional AI-Vp/Vs rock physics template does. Also, the model works for both un-cemented and cemented sandstones. In the case of Extended Elastic Impedance (EEI) 15 , the calculated properties (for instance, Sw) appear linear on the AI-Vp/Vs ratio plane; however, actual sandstone exhibits a non-linear curvature 16 . This nonlinearity is captured by our model, as by the curved pseudo-elastic impedance (CPEI) 16,17 (Fig. 5); however, our suggested model is quantitative and, as discussed above, flexible in terms of grain mineralogy and fluid density. The LambdaRho-MuRho 14 calculations to differentiate lithology and fluid content introduce error and bias because of the squaring of the impedances 18 . The equation we present does not contain any squared factors, thus preventing such additional errors. For subsurface storage, CO 2 is injected in the supercritical phase to a depth where the temperature and pressure keep the gas in that phase. This approach maximizes the use of the available storage volume in the pore spaces within a reservoir; therefore, the optimum depth for storage is 1 to 3 km 5 . Quartz cementation starts approximately 2000 m below the seafloor in the North Sea, where the temperature reaches roughly 70 °C. We demonstrated the possibility of quantifying the change in pressure within un-cemented reservoir sands, where our suggested model will be helpful. In both un-cemented and cemented sandstone reservoirs, if the supercritical CO 2 plume converts to gas at some point in time due to a decrease in pore pressure, the subsequent time-lapse S w calculations using our model will yield a value less than zero, indicating a pressure drop.
Limitations and pitfalls. This method can be applied only in siliciclastics, as carbonates exhibit a different Vp to Vs relationship. There is a difference in resolution between wireline log data and seismic; therefore, calibrating the model using wireline logs often yields an up-scaled profile in seismic.
Most of the method's uncertainties are associated with the inversion procedure itself 45 . First of all, the inversion is nonunique, i.e., several different solutions (combinations of elastic parameters) may yield the same seismic response. Moreover, the need for an initial low-frequency model poses a main uncertainty during simultaneous AVO inversion. If the low-frequency model is far from the truth, the inversion cannot predict the correct answer. Since the low-frequency model is generated from well-log data and seismic velocities, it becomes more uncertain away from well control, affecting offset-to-angle calculations 45 . To verify the predictions of our suggested technique in CO 2 storage monitoring, saturation calculations from monitoring wells with time-lapse logging can be employed. In the case of a hydrocarbon field, comparison with existing wells (not used for model calibration) can help examine the accuracy of the model-derived saturation, as in the case of Well-B in Fig. 1. Using this procedure in frontier areas to predict hydrocarbons may be complemented by our proposed method that combines seismic with Controlled Source Electro-Magnetic (CSEM) data 39 .
The other uncertainties are the lateral changes in mineralogy or shale volume within the reservoir, which result in a slight change in the reference brine-saturated trend compared to the original calibration. A stochastic approach can be used to address these uncertainties, taking, for example, a normal distribution of the input parameters. Two fluids present in a reservoir, i.e., oil with a gas cap, are difficult to distinguish; therefore, calibration with gas parameters can be employed to represent the combined influence of the two fluids. A surface draped on an Sw cube may exhibit an 'aliasing pattern' (Fig. 4d-f) depending on the data sampling frequency. The stochastic solution will also resolve this imaging problem.
Conclusions
The seismic method generally provides the subsurface structural and stratigraphic information. Prestack seismic data can be inverted to provide quantitative information on physical properties such as acoustic impedance (AI), shear impedance (SI), and Vp/Vs ratio. Though seismic velocities are moderately sensitive to the change in saturation, using a combination of AI and Vp/Vs ratio can discriminate fluids and their saturations in many situations.
We introduced a new rock physics model that calculates fluid saturations onto the AI versus Vp/Vs ratio plane directly using the cubes inverted from seismic. Without going into the elastic moduli level and Gassmann substitution, the model can be calibrated using well log data by comparing the S w calculated from AI and Vp/Vs curves with the Archie-derived S w . We demonstrated using this model that the elastic properties inverted from seismic help predict CO 2 saturation in a reservoir during and after injection in a subsurface geological CO 2 storage.
Modeling using our proposed approach showed that CO 2 saturation estimation and the plume area delineation is possible using acoustic impedance (AI) and Vp/Vs ratio. The change in pore-pressure estimation is also possible by quantifying the change in brine-sand trend using the stress/cementation coefficient 'n' in un-cemented sand reservoirs. The relation of 'n' with different effective stresses in various uncemented sands warrants further investigation. www.nature.com/scientificreports/ One can also use the suggested procedure to monitor oil and gas production and for hydrocarbon exploration. The main uncertainties and pitfalls of the method come from the inherent inversion problems. We expect with the improvement in prestack inversion technology, the predictability of our rock physics model will increase.
Methods
We generated a rock physics model assuming that a reservoir consists of a rock matrix and pore spaces containing salt water (brine) and other fluids (e.g., CO 2 or hydrocarbon). Under this assumption, the total volume of rock comprising the matrix and the fluids in the pore spaces is equal to 1. Wyllie 46 approximated the relation between velocity and volumes in sedimentary rocks with the following expression:

$$\frac{1}{V_p} = \frac{1-\phi}{V_{p\,ma}} + \frac{\phi\,(1-S_{fl})}{V_{p\,w}} + \frac{\phi\,S_{fl}}{V_{p\,fl}} \quad (3)$$

where Vp is the P-wave velocity of the saturated rock; Vp ma , Vp fl , and Vp w are the P-wave velocities of the rock grains, the pore fluid (other than saltwater), and saltwater (brine), respectively; ∅ is the pore space volume; and S fl is the target fluid saturation. This equation is often called the time-average equation. It is heuristic and not theoretically justifiable; however, it is useful for estimating P-wave velocity directly, without calculating the elastic moduli components. The bulk density (ρ b ) is a volumetric average of the densities of the rock constituents that can be related to the various rock volume components by:

$$\rho_b = (1-\phi)\,\rho_{ma} + \phi\,(1-S_{fl})\,\rho_w + \phi\,S_{fl}\,\rho_{fl} \quad (4)$$

where ρ ma , ρ fl , and ρ w are the densities of the rock grains, target fluid, and brine, respectively. Combining Eqs. (3) and (4) through AI = Vp × ρ b , we obtain an expression for the pore-space volume (∅) in terms of AI, the acoustic impedance (Eq. 5). Employing a relation between the S-wave velocity and the P-wave velocity 47 (Eq. 6), we can calculate the Vp/Vs ratio for a given AI by substituting ∅ from Eq. (5). Changing the mineralogy/shaliness coefficient 'G' results in a vertical static shift of the curved iso-saturation lines; α is the Vs/Vp ratio of the mineral/rock matrix, which defines the matrix-mineral pole on the AI versus Vp/Vs ratio plane. The stress/cementation coefficient 'n' controls the slope of the curved iso-saturation lines and may be selected in a formation zone depending on the level of stress, compaction, or cementation. The relevant constants may be taken from the literature 48 and from vendors' logging chart books.
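As an illustration of Eqs. (3) and (4), the sketch below computes AI for a brine-filled and a partially CO2-saturated sand; the rock and fluid values are illustrative placeholders, not those used in the study:

```python
def acoustic_impedance(phi, s_fl, vp_ma, vp_w, vp_fl, rho_ma, rho_w, rho_fl):
    """Forward model: AI = Vp * rho_b, with Vp from the time-average
    equation (Eq. 3) and rho_b from the volumetric density average (Eq. 4)."""
    slowness = (1 - phi) / vp_ma + phi * (1 - s_fl) / vp_w + phi * s_fl / vp_fl
    rho_b = (1 - phi) * rho_ma + phi * (1 - s_fl) * rho_w + phi * s_fl * rho_fl
    return rho_b / slowness  # rho_b / slowness = Vp * rho_b

# Illustrative values (velocities in m/s, densities in g/cm3): quartz-rich
# matrix, brine, and a supercritical-CO2-like fluid, at 30% porosity.
rock = dict(vp_ma=5500.0, vp_w=1500.0, vp_fl=450.0,
            rho_ma=2.65, rho_w=1.03, rho_fl=0.60)
print(acoustic_impedance(0.30, 0.0, **rock))  # brine-saturated sand
print(acoustic_impedance(0.30, 0.6, **rock))  # 60% CO2: markedly lower AI
```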
From this function (Eq. 6), we can define a set of lines representing different fluid saturations, converging at the 100% matrix-mineral pole on the AI versus Vp/Vs ratio plane (Fig. 7a). By iterating the values of 'G' and 'n', one can calibrate the wet trend of the well data with the 100% S_w line (Fig. 7a). Finally, we find the values of the target fluid's apparent density (ρ_fl) and apparent P-wave velocity (V_Pfl) by iterating their values until the S_w computed using Eq. (2) calibrates against the Archie S_w 49 (Fig. 7b). The apparent fluid velocity (V_Pfl) and density (ρ_fl) values may be fictitious, as their difference from the actual values can depend on factors such as the mode of saturation (continuous 50 or patchy 51), etc.
The calibrated model can then be applied by inputting the seismic-derived AI and Vp/Vs cubes to obtain an S_w cube. A similar approach with different initial assumptions leads to the derivation of a rock physics relation for estimating shale volume (V_sh) from inverted data 38 .
The original reservoir simulation model was conceived by the Northern Lights project 41 . The model simulated one of the potential CO2 storage sites, "Smeaheia," in the northern North Sea. The injection rate was 1.3 Mt/year over a 25-year injection period (2020 to 2045), and the post-injection period was simulated for 100 years. Subsequently, using the reservoir simulation results, the Norwegian Geotechnical Institute (NGI) generated Vp, bulk density, and resistivity 40 properties. For the present study, we additionally generated Vs data to obtain Vp/Vs ratio cubes by applying Castagna's equation 52 to the baseline Vp. We assumed no change in shear modulus as gas injection proceeded, while the density change within the plume area was substituted accordingly. Finally, we used the AI (Vp × bulk density) and Vp/Vs property cubes in the present study.
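As a small illustration of the Vs generation step, the sketch below applies the published mudrock-line coefficients of Castagna's equation to a toy baseline Vp cube; the paper's actual coefficient choice (reference 52) may differ, so treat the numbers as assumptions.

```python
import numpy as np

def castagna_vs(vp_km_s):
    """Mudrock-line estimate of Vs from Vp (Castagna et al., 1985), in km/s.
    These are the published mudrock-line coefficients; a lithology-specific
    variant may have been used in the study."""
    return 0.8621 * vp_km_s - 1.1724

vp_base = np.array([3.2, 3.5, 3.8])   # toy baseline Vp values, km/s
vs_base = castagna_vs(vp_base)        # Vs held fixed during injection (constant shear modulus assumed)
rho = np.array([2.30, 2.35, 2.40])    # bulk density, g/cc
ai = vp_base * rho                    # AI = Vp x bulk density
print(np.round(vp_base / vs_base, 3)) # Vp/Vs ratio values
```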
Fair Community Detection and Structure Learning in Heterogeneous Graphical Models
Inference of community structure in probabilistic graphical models may not be consistent with fairness constraints when nodes have demographic attributes. Certain demographics may be over-represented in some detected communities and under-represented in others. This paper defines a novel $\ell_1$-regularized pseudo-likelihood approach for fair graphical model selection. In particular, we assume there is some community or clustering structure in the true underlying graph, and we seek to learn a sparse undirected graph and its communities from the data such that demographic groups are fairly represented within the communities. In the case when the graph is known a priori, we provide a convex semidefinite programming approach for fair community detection. We establish the statistical consistency of the proposed method for both a Gaussian graphical model and an Ising model for, respectively, continuous and binary data, proving that our method can recover the graphs and their fair communities with high probability.
Introduction
Probabilistic graphical models have been applied in a wide range of machine learning problems to infer dependency relationships among random variables. Examples include gene expression (Peng et al., 2009;Wang et al., 2009), social interaction networks (Tan et al., 2014;Tarzanagh and Michailidis, 2018), computer vision (Hassner and Sklansky, 1981;Laferté et al., 2000;Manning and Schutze, 1999), and recommender systems (Kouki et al., 2015;Wang et al., 2015). Since in most applications the number of model parameters to be estimated far exceeds the available sample size, it is necessary to impose structure, such as sparsity or community structure, on the estimated parameters to make the problem well-posed. With the increasing application of structured graphical models and community detection algorithms in human-centric contexts (Tan et al., 2013;Song et al., 2011;Glassman et al., 2014;Burke et al., 2011;Pham et al., 2011;Das et al., 2014), there is a growing concern that, if left unchecked, they can lead to discriminatory outcomes for protected groups.
For instance, the proportion of a minority group assigned to some community can be far from its underlying proportion, even if detection algorithms do not take the minority sensitive attribute into account in decision making (Chierichetti et al., 2017). Such an outcome may, in turn, lead to unfair treatment of minority groups. For example, in precision medicine, patient-patient similarity networks over a biomarker feature space can be used to cluster a cohort of patients and support treatment decisions on particular clusters (Parimbelli et al., 2018;Lafit et al., 2019). If the clusters learned by the algorithm are demographically imbalanced, this treatment assignment may unfairly exclude under-represented groups from effective treatments.
To the best of our knowledge, the estimation of fair structured graphical models has not previously been addressed. However, there is a vast body of literature on learning structured probabilistic graphical models. Typical approaches to imposing structure in graphical models, such as $\ell_1$-regularization, encourage sparsity that is uniform throughout the network and may therefore not be the most suitable choice for many real-world applications in which the data have clusters or communities, i.e., groups of graph nodes with similar connectivity patterns or stronger connections within the group than to the rest of the network. Graphical models with these properties are called heterogeneous.
It is known that if the goal is structured heterogeneous graph learning, structure or community inference and graph weight estimation should be done jointly. In fact, performing structure inference before weight estimation results in a sub-optimal procedure (Marlin and Murphy, 2009). To overcome this issue, some of the initial work focused on either inferring connectivity information or performing graph estimation in case the connectivity or community information is known a priori (Danaher et al., 2014;Guo et al., 2011b;Gan et al., 2019;Ma and Michailidis, 2016;Lee and Liu, 2015).
In this paper, we develop a provably convergent penalized pseudo-likelihood method to induce fairness into clustered probabilistic graphical models. More specifically, we:

• Formulate a novel version of probabilistic graphical modeling that takes fairness/bias into consideration. In particular, we assume there is some community structure in our graph, and we seek to learn an undirected graph from the data such that demographic groups are fairly represented within the communities of the graph.

• Provide a rigorous analysis of our algorithms, showing that they can recover fair communities with high probability. Furthermore, we show that the estimators are asymptotically consistent in high-dimensional settings for both a Gaussian graphical model and an Ising model under standard regularity assumptions.

• Conclude by giving experimental results on synthetic and real-world datasets where proportional clustering can be a desirable goal, comparing the proportionality and objective value of standard graphical models with those of our methods. Our experiments confirm that our algorithms tend to estimate graphs and their fair communities better than standard graphical models do.
The remainder of the paper is organized as follows: Section 2 gives a general framework for fair structure learning in graphs. Section 3 gives a detailed statement of the proposed fair graphical models for continuous and binary datasets. In Sections 4 and 5, we illustrate the proposed framework on a number of synthetic and real data sets, respectively. Section 6 provides some concluding remarks.
Notation. For a set S, |S| is the cardinality of S, and S^c is its complement. The fields of reals and nonnegative reals are denoted R and R_+, respectively. We use lower-case and upper-case bold letters such as x and X to represent vectors and matrices, respectively, with x_i and x_ij denoting their elements. If all coordinates of a vector x are nonnegative, we write x ≥ 0. The notations x > 0, as well as X ≥ 0 and X > 0 for matrices, are defined similarly. For a symmetric matrix X ∈ R^{n×n}, we write X ≻ 0 if X is positive definite, and X ⪰ 0 if it is positive semidefinite. I_p, J_p, and 0_p denote the p × p identity matrix, matrix of all ones, and matrix of all zeros, respectively. We use Λ_i(X), Λ_max(X), and Λ_min(X) to denote the i-th, maximum, and minimum singular values of X, respectively. For any matrix X, we define $\|X\|_\infty := \max_{ij} |x_{ij}|$, $\|X\|_1 := \sum_{ij} |x_{ij}|$, $\|X\| := \Lambda_{\max}(X)$, and $\|X\|_F := \sqrt{\sum_{ij} |x_{ij}|^2}$.
Fair Structure Learning in Graphical Models
We introduce a fair graph learning method that simultaneously accounts for fair community detection and estimation of heterogeneous graphical models.
Let Y be an n × p matrix, with columns y_1, . . . , y_p. We associate to each column in Y a node in a graph G = (V, E), where V = {1, 2, . . . , p} is the vertex set and E ⊆ V × V is the edge set. We consider a simple undirected graph, without self-loops, whose edge set contains only distinct pairs. Graphs are conveniently represented by a p × p matrix, denoted by Θ, whose nonzero entries correspond to edges in the graph. The precise definition of this matrix usually depends on modeling assumptions, properties of the desired graph, and the application domain.
In order to obtain a sparse and interpretable graph estimate, many authors have considered the problem

$$\min_{\Theta}\; L(\Theta; Y) + \rho_1 \|\Theta\|_{1,\text{off}} \quad \text{subj. to } \Theta \in M. \tag{1}$$

Here, L is a loss function; ρ_1‖Θ‖_{1,off} is the $\ell_1$-norm regularization applied to the off-diagonal elements of Θ, with parameter ρ_1 > 0; and M is a convex constraint subset of R^{p×p}. For instance, in the case of a Gaussian graphical model, we could take L(Θ; Y) = −log det(Θ) + trace(SΘ), where $S = n^{-1}\sum_{i=1}^{n} y_i y_i^\top$ and M is the set of p × p positive definite matrices. The solution to (1) can then be interpreted as a sparse estimate of the inverse covariance matrix (Banerjee et al., 2008; Friedman et al., 2008). Throughout, we assume that L(Θ; Y) is a convex function and M is a convex set.
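As an illustration of problem (1) in the Gaussian case, the following sketch fits the off-the-shelf graphical lasso from scikit-learn on synthetic data; here `alpha` plays the role of ρ_1, and the data are randomly generated for the example.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)
Y = rng.standard_normal((200, 10))        # n = 200 samples, p = 10 variables

# Solves: minimize -logdet(Theta) + trace(S Theta) + alpha * ||Theta||_{1,off}
model = GraphicalLasso(alpha=0.1).fit(Y)
Theta_hat = model.precision_              # sparse inverse-covariance estimate
print((np.abs(Theta_hat) > 1e-8).sum(), "nonzero entries")
```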
Model Framework
We build our fair graph learning framework using (1) as a starting point. Let V denote the set of nodes. Suppose there exist K disjoint communities of nodes, V = C_1 ∪ · · · ∪ C_K, where C_k is the subset of nodes from G that belong to the k-th community. For each candidate partition of the nodes into K communities, we associate a partition matrix Q ∈ [0, 1]^{p×p} such that q_ij = 1/|C_k| if and only if nodes i and j are assigned to the k-th community. Let Q_pK be the set of all such partition matrices, and Q̄ the true partition matrix associated with the ground-truth clusters {C_k}_{k=1}^K. Assume the set of nodes contains H demographic groups such that V = D_1 ∪ · · · ∪ D_H, potentially with overlap between the groups. Chierichetti et al. (2017) proposed a model for fair clustering requiring the representation in each cluster to preserve the global fraction of each demographic group:

$$\frac{|D_h \cap C_k|}{|C_k|} = \frac{|D_h|}{p}, \quad \text{for all } h \in [H],\; k \in [K]. \tag{2}$$

Let R ∈ {0, 1}^{p×p} be such that r_ij = 1 if and only if nodes i and j are assigned to the same demographic group, with the convention that r_ii = 1 for all i. One will notice that (2) is equivalent to $R(I - \mathbf{1}\mathbf{1}^\top/p)\,Q = 0$. Let $A_1 := R(I - \mathbf{1}\mathbf{1}^\top/p)$ and $B_1 := \operatorname{diag}(\varepsilon)\, J_p$ for some ε > 0 that controls how close we are to exact demographic parity. Under this setting, we introduce a general optimization framework for fair structured graph learning via a trace regularization and a fairness constraint on the partition matrix Q:

$$\min_{\Theta \in M,\; Q \in Q_{pK}} \; L(\Theta; Y) + \rho_1 \|\Theta\|_{1,\text{off}} + \rho_2\, \operatorname{trace}\big((S + Q)\, G(\Theta)\big) \quad \text{subj. to } |A_1 Q| \le B_1. \tag{3}$$

Here, G(Θ) : M → M is a function of Θ (introduced in Sections 3.1 and 3.2).
We clarify the purpose of each component of the minimization (3). The term ρ_1‖Θ‖_{1,off} shrinks small entries of Θ to 0, enforcing sparsity in Θ and consequently in G. This term controls the presence of edges between any two nodes irrespective of the communities they belong to, with higher values of ρ_1 forcing sparser estimates. The polyhedral constraint is the fairness constraint, enforcing that every community contains the ε-approximate proportion of elements from each demographic group D_h, h ∈ [H], matching the overall proportion. The term ρ_2 trace((S + Q)G(Θ)) enforces community structure in a similarity graph G(Θ). A similar linear trace term, trace(QΘ), is used as an objective function in (Cai et al., 2015; Amini et al., 2018; Hosseini and Lee, 2016; Pircalabelu and Claeskens, 2020; Eisenach et al., 2020) when estimating network communities.
However, perturbing the membership matrix with either a sample covariance whose population inverse covariance satisfies Assumption (A2), or with some positive definite matrix, is necessary for developing a consistent fair graphical model.
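To make the parity condition concrete, the toy check below (with hypothetical node, group, and community assignments) verifies numerically that a perfectly balanced partition satisfies A_1 Q = 0.

```python
import numpy as np

# p = 4 nodes, 2 demographic groups, 2 communities; each community contains
# one node from each group, so exact demographic parity holds.
p = 4
group = np.array([0, 1, 0, 1])     # demographic assignment D_h
comm  = np.array([0, 0, 1, 1])     # community assignment C_k

R = (group[:, None] == group[None, :]).astype(float)   # r_ij = 1 iff same group
Q = np.zeros((p, p))
for k in range(2):
    idx = np.where(comm == k)[0]
    Q[np.ix_(idx, idx)] = 1.0 / len(idx)               # q_ij = 1/|C_k|

A1 = R @ (np.eye(p) - np.ones((p, p)) / p)
print(np.abs(A1 @ Q).max())                            # ~0: exact parity
```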
Relaxation
The problem (3) is in general NP-hard due to its constraint on Q. However, it can be relaxed to a computationally feasible problem. To do so, we exploit algebraic properties of the community matrix Q. By definition, Q must have the form $Q = \Psi \Gamma \Psi^\top$, where Γ is block diagonal with p_k × p_k blocks on the diagonal associated with the k-th community, Ψ is some permutation matrix, and the number of communities K is unknown. The set of all matrices Q of this form is non-convex. The key observation is that any such Q satisfies several convex constraints: (i) all entries of Q are nonnegative, (ii) all diagonal entries of Q are 1, and (iii) Q is positive semidefinite (Cai et al., 2015; Amini et al., 2018; Li et al., 2021). Without loss of generality, we assume that the permutation matrix corresponding to the ground-truth communities is the identity, i.e., Ψ = I. Now, let N denote the convex set of matrices satisfying (i)-(iii) together with the fairness constraint |A_1 Q| ≤ B_1. We propose the relaxation of (3) obtained by replacing the non-convex set Q_pK with N:

$$\min_{\Theta \in M,\; Q \in N} \; L(\Theta; Y) + \rho_1 \|\Theta\|_{1,\text{off}} + \rho_2\, \operatorname{trace}\big((S + Q)\, G(\Theta)\big). \tag{4}$$

The solution of (4) jointly learns the fair community matrix Q and the network estimate Θ. We highlight the following attractive properties of the formulation (4): (i) the communities are allowed to have significantly different sizes; (ii) the number of communities K may grow as p increases; (iii) knowledge of K is not required for fair community detection; and (iv) the objective function (4a) is convex in Θ given Q, and conversely.
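A hedged sketch of the fair community detection step over the relaxed set N, for the case where the graph is known a priori (the SDP setting mentioned in the abstract): the placeholder matrices S_fix and A1, and the trace objective in the style of Cai et al. (2015), are illustrative assumptions rather than the paper's exact program.

```python
import numpy as np
import cvxpy as cp

p, eps = 4, 1e-3
S_fix = np.eye(p)                  # placeholder similarity/precision matrix
A1 = np.zeros((p, p))              # fairness operator (see the previous snippet)

Q = cp.Variable((p, p), symmetric=True)
constraints = [Q >> 0,             # (iii) positive semidefinite
               Q >= 0,             # (i)  nonnegative entries
               cp.diag(Q) == 1,    # (ii) unit diagonal
               cp.abs(A1 @ Q) <= eps]  # eps-approximate demographic parity
prob = cp.Problem(cp.Maximize(cp.trace(S_fix @ Q)), constraints)
prob.solve()
print(np.round(Q.value, 2))
```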
Algorithm
In order to solve (4), we use an alternating direction method of multipliers (ADMM) algorithm (Boyd et al., 2011). ADMM is an attractive algorithm for this problem, as it allows us to decouple terms in (4) that are difficult to optimize jointly. In order to develop an ADMM algorithm for (4) with guaranteed convergence, we reformulate it by introducing a copy Ω of Θ in the trace term:

$$\min_{\Theta,\, \Omega,\, Q} \; L(\Theta; Y) + \rho_1 \|\Theta\|_{1,\text{off}} + \rho_2\, \operatorname{trace}\big((S + Q)\, G(\Omega)\big) \quad \text{subj. to } \Theta = \Omega,\; \Theta \in M,\; Q \in N. \tag{5}$$
The scaled augmented Lagrangian function for (5) takes the form

$$L_\gamma(\Theta, \Omega, Q; W) = L(\Theta; Y) + \rho_1 \|\Theta\|_{1,\text{off}} + \rho_2\, \operatorname{trace}\big((S + Q)\, G(\Omega)\big) + \frac{\gamma}{2}\, \|\Theta - \Omega + W\|_F^2 - \frac{\gamma}{2}\, \|W\|_F^2, \tag{6}$$

where Θ ∈ M, Ω, and Q ∈ N are the primal variables; W is the (scaled) dual variable; and γ > 0 is a dual parameter. The algorithm iterates until the stopping criterion $\max\{\|\Theta^{(t+1)} - \Theta^{(t)}\|_F,\, \|Q^{(t+1)} - Q^{(t)}\|_F\} \le \epsilon$ is met, where Θ^{(t)} and Q^{(t)} are the values of Θ and Q obtained at the t-th iteration. We note that the scaled augmented Lagrangian can be derived from the usual Lagrangian by adding a quadratic term and completing the square (Boyd et al., 2011, Section 3.1.1).
The proposed ADMM algorithm alternates minimization of (6) over Q (step S1), Ω (step S2), and Θ (step S3), followed by the dual update for W. A general algorithm for solving (4) is provided in Algorithm 1. Note that the updates for Θ, Q, and Ω depend on the form of the functions L and G and are addressed in Sections 3.1 and 3.2. We also note that the Q sub-problem in S1 can be solved via a variety of convex optimization methods such as CVX (Grant and Boyd, 2014) and ADMM (Cai et al., 2015; Amini et al., 2018). In the following sections, we consider special cases of (4) that lead to the estimation of a Gaussian graphical model and an Ising model for, respectively, continuous and binary data.
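The skeleton below shows the overall control flow of Algorithm 1 under scaled ADMM. The three subproblem solvers are passed in as callables because their exact form depends on L and G; the placeholder updates in the demo call are illustrative stubs, not the paper's actual update rules.

```python
import numpy as np

def admm(p, update_Q, update_Omega, update_Theta, max_iter=100, tol=1e-6):
    """Generic scaled-ADMM loop for the splitting Theta = Omega."""
    Theta, Omega, Q = np.eye(p), np.eye(p), np.eye(p)
    W = np.zeros((p, p))                       # scaled dual variable
    for _ in range(max_iter):
        Q = update_Q(Theta, Omega, W)          # S1: SDP-type step over the set N
        Omega = update_Omega(Theta, Q, W)      # S2: step for the coupled trace term
        Theta_new = update_Theta(Omega, Q, W)  # S3: prox step for L + l1 penalty
        W = W + (Theta_new - Omega)            # scaled dual ascent
        if np.max(np.abs(Theta_new - Theta)) < tol:
            break
        Theta = Theta_new
    return Theta, Q

# Placeholder updates that keep the iterates fixed (for illustration only).
identity = lambda a, b, c: np.eye(3)
print(admm(3, identity, identity, identity)[0])
```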
We have the following global convergence result for Algorithm 1.
Theorem 1. Algorithm 1 converges globally for any sufficiently large γ; i.e., starting from any initial point, it generates a sequence of iterates that converges to a stationary point of (6).
In Algorithm 1, S1 dominates the computational complexity of each ADMM iteration (Cai et al., 2015; Amini et al., 2018). In fact, an exact implementation of this subproblem requires a full SVD, whose computational complexity is O(p³). When p is as large as hundreds of thousands, the full SVD is computationally impractical. An open question is how to facilitate the implementation, or whether there exists a computationally inexpensive surrogate. A possible remedy is an iterative approximation method in which, in each ADMM iteration, the full SVD is replaced by a partial SVD that computes only the leading eigenvalues and eigenvectors. Although this type of method may get stuck in local minimizers, given that the SDP implementation can be viewed as preprocessing before K-means clustering, such a low-rank iterative method might be helpful. It is worth mentioning that when the number of communities K is known, the computational complexity of ADMM is much smaller than O(p³); see Remark 4 for further discussion.
Related Work
To the best of our knowledge, the fair graphical model proposed here is the first model that can jointly learn fair communities and the structure of a conditional dependence network.
Related work falls into two categories: graphical model estimation and fairness.
Estimation of graphical models. There is a substantial body of literature on methods for estimating network structures from high-dimensional data, motivated by important biomedical and social science applications (Liljeros et al., 2001;Robins et al., 2007;Guo et al., 2011c;Danaher et al., 2014;Friedman et al., 2008;Tan et al., 2014;Guo et al., 2015;Tarzanagh and Michailidis, 2018). Since in most applications the number of model parameters to be estimated far exceeds the available sample size, the assumption of sparsity is made and imposed through regularization of the learned graph.
An $\ell_1$ penalty on the parameters encoding the network edges is the most common choice (Friedman et al., 2008; Meinshausen et al., 2006; Karoui, 2008; Cai and Liu, 2011; Xue et al., 2012; Khare et al., 2015; Peng et al., 2009). This approach encourages sparse, uniform network structures, which may not be the most suitable choice for real-world applications that are not uniformly sparse. As argued in (Danaher et al., 2014; Guo et al., 2011b; Tarzanagh and Michailidis, 2018), many networks exhibit different structures at different scales. An example is a densely connected community in the social-networks literature. Such structures in social interaction networks may correspond to groups of people sharing common interests or being co-located (Tarzanagh and Michailidis, 2018); in biological systems, to groups of proteins responsible for regulating or synthesizing chemical products; and in precision medicine, to patients with common disease susceptibilities. An important part of the literature therefore deals with the estimation of hidden communities of nodes, meaning that certain nodes are linked more often to similar nodes than to dissimilar ones. In this way, the nodes form communities that are more homogeneous within the community than between communities, where there is a larger degree of heterogeneity.
Some of the initial work focused on either inferring connectivity information (Marlin and Murphy, 2009) or performing graph estimation in case the connectivity or community information is known a priori (Danaher et al., 2014;Guo et al., 2011b;Gan et al., 2019;Ma and Michailidis, 2016;Lee and Liu, 2015), but not both tasks simultaneously. Recent developments consider the two tasks jointly and estimate the structured graphical models arising from heterogeneous observations (Kumar et al., 2020;Hosseini and Lee, 2016;Hao et al., 2018;Tarzanagh and Michailidis, 2018;Kumar et al., 2019;Gheche and Frossard, 2020;Pircalabelu and Claeskens, 2020;Cardoso et al., 2020;Eisenach et al., 2020). (2018) and references therein. Our paper adds to the literature on fair methods for unsupervised learning tasks (Chierichetti et al., 2017;Celis et al., 2017;Samadi et al., 2018;Tantipongpipat et al., 2019;Oneto and Chiappa, 2020;Caton and Haas, 2020;Kleindessner et al., 2019). We discuss the work on fairness most closely related to our paper. Chierichetti et al. (2017) proposed the notion of fairness for clustering underlying our paper: namely, that each cluster has proportional representation from different demographic groups (Feldman et al., 2015;Zafar et al., 2017). Chierichetti et al. (2017) provides approximation algorithms that incorporate this fairness notion into K-center as well as K-median clustering. Kleindessner et al. (2019) extend this to K-means and provide a provable fair spectral clustering method; they implement K-means on the subspace spanned by the smallest fair eigenvectors of Laplacian matrix. Unlike these works, which assume that the graph structure and/or the number of communities is given in advance, an appealing feature of our method is to learn fair community structure while estimating heterogeneous graphical models.
The Fair Graphical Models
In the following subsections, we consider two special cases of (4) that lead to estimation of graphical models for continuous and binary data.
Fair Pseudo-Likelihood Graphical Model
Suppose $y_i = (y_i^1, \ldots, y_i^p)$, i = 1, . . . , n, are i.i.d. observations from N(0, Σ). Denote the sample of the i-th variable as $y^i = (y_1^i, \ldots, y_n^i)$. Let ω_ij = −θ_ij/θ_ii for all j ≠ i. We note that the set of nonzero coefficients ω_ij coincides with the set of nonzero entries in the i-th row of Θ, which defines the set of neighbors of node i. Using an $\ell_1$-penalized regression, Meinshausen et al. (2006) estimate the zeros in Θ by fitting a separate Lasso regression for each variable y^i given the other variables:

$$\hat{\omega}_i = \arg\min_{\omega_i}\; \frac{1}{2}\Big\| y^i - \sum_{j \neq i} \omega_{ij}\, y^j \Big\|_2^2 + \lambda \sum_{j \neq i} |\omega_{ij}|.$$

These individual Lasso fits give neighborhoods that link each variable to others. Peng et al. (2009) improve this neighborhood selection method by taking the natural symmetry of the problem into account (i.e., θ_ij = θ_ji) and propose a joint objective function (called SPACE) in terms of the partial correlations $\dot{\omega}_{ij} = -\theta_{ij}/\sqrt{\theta_{ii}\theta_{jj}}$ between the i-th and j-th variables. It is shown in (Khare et al., 2015) that this expression is not convex. Setting $w_i = \theta_{ii}^2$ and placing the $\ell_1$ penalty on the partial covariances θ_ij instead of on the partial correlations $\dot{\omega}_{ij}$, they obtain a convex pseudo-likelihood approach with good model selection properties, called CONCORD, whose objective takes the form

$$L_n(\Theta; Y) = -n \sum_{i=1}^{p} \log \theta_{ii} + \frac{1}{2} \sum_{i=1}^{p} \Big\| \theta_{ii}\, y^i + \sum_{j \neq i} \theta_{ij}\, y^j \Big\|_2^2. \tag{10}$$

The penalized matrix version of the CONCORD objective is obtained by adding ρ_1‖Θ‖_{1,off} to (10). Our proposed fair graphical model formulation (called FCONCORD) is a fair version of CONCORD from (10). In particular, letting G(Θ) = Θ² and $M = \{\Theta \in \mathbb{R}^{p \times p} : \theta_{ij} = \theta_{ji} \text{ and } \theta_{ii} > 0 \text{ for every } 1 \le i, j \le p\}$ in (4), our problem takes the form

$$\min_{\Theta \in M,\; Q \in N}\; L_n(\Theta; Y) + \rho_1 \|\Theta\|_{1,\text{off}} + \rho_2\, \operatorname{trace}\big((S + Q)\, \Theta^2\big). \tag{11}$$

Here, M and N are the graph adjacency and fairness constraint sets, respectively.
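A minimal sketch evaluating the CONCORD part of objective (11), i.e., (10) plus the off-diagonal $\ell_1$ penalty, without the trace or fairness terms. It uses the identity that, for symmetric Θ, the i-th column of YΘ equals $\theta_{ii}y^i + \sum_{j\neq i}\theta_{ij}y^j$; the demo values are arbitrary.

```python
import numpy as np

def concord_objective(Theta, Y, rho1):
    """CONCORD pseudo-likelihood (10) plus l1 penalty on the upper off-diagonal.
    Theta must be symmetric with a positive diagonal."""
    n = Y.shape[0]
    fit = 0.5 * np.sum((Y @ Theta) ** 2)            # 0.5 * sum_i ||(Y Theta)_{:,i}||^2
    log_term = -n * np.sum(np.log(np.diag(Theta)))
    penalty = rho1 * np.sum(np.abs(np.triu(Theta, k=1)))  # sum over i < j
    return log_term + fit + penalty

Theta0 = np.eye(4)
Y0 = np.random.default_rng(0).standard_normal((50, 4))
print(round(concord_objective(Theta0, Y0, rho1=0.1), 2))
```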
Remark 2. When ρ_2 = 0, i.e., without the fairness constraint and the second trace term, the objective in (11) reduces to that of the CONCORD estimator and is similar to those of SPACE (Peng et al., 2009), SPLICE (Rocha et al., 2008), and SYMLASSO (Friedman et al., 2010). Our framework generalizes these methods to fair graph learning and community detection when demographic group representation is required.
Problem (11) can be solved using Algorithm 1. The updates for Ω and Θ in S2 and S3 can be derived by minimizing (6) with respect to Ω and Θ, respectively.
For each (i, j), T_ij(Ω) and T_ij(Θ) update the (i, j)-th entry with the minimizer of (12a) and (12b) with respect to ω_ij and θ_ij, respectively, holding all other variables constant. Given T_ij(Ω) and T_ij(Θ), the updates for Ω and Θ in S2 and S3 can be obtained by a coordinate-wise descent algorithm similar to those proposed in Peng et al. (2009) and Khare et al. (2015). Closed-form updates for T_ij(Ω) and T_ij(Θ) are provided in Lemma 3.
Lemma 3. Let γ_n := γn. For 1 ≤ i ≤ p, the quantities defining the coordinate updates, and the resulting closed forms for T_ij(Ω) and T_ij(Θ), are derived in Appendix A.5. This shows that when the number of communities is known, the computational cost of each iteration is much smaller than O(p³).
Large Sample Properties of FCONCORD
We show that under suitable conditions, the FCONCORD estimator achieves both model selection consistency and estimation consistency.
The following standard assumptions are required. Assumption A: (A1) the random vectors y_1, . . . , y_n are i.i.d. sub-Gaussian for every n ≥ 1, i.e., there exists M > 0 bounding the sub-Gaussian norm; (A2) there exist constants τ_1, τ_2 ∈ (0, ∞) bounding the relevant eigenvalues, where the indices 1 ≤ i, j, t, s ≤ p satisfy i < j and t < s. Assumptions (A2)-(A3) guarantee that the eigenvalues of the true graph matrix Θ̄ and those of the true membership matrix Q̄ are well-behaved. Assumption (A4) links how H, K, and p can grow with n. Note that K is limited in order for the fairness constraints to be meaningful; if K > p − H + 1, then there can be no community with H nodes among which we enforce fairness. Assumption (A5) corresponds to the incoherence condition in Meinshausen et al. (2006), which plays an important role in proving model selection consistency of $\ell_1$-penalization problems. Zhao and Yu (2006) show that such a condition is almost necessary and sufficient for model selection consistency in Lasso regression, for some finite M(θ_o).
Theorem 5. Suppose the above assumptions hold and ε = 0. Then, there exist finite constants C(θ_o) and D(q_o) such that, for any η > 0, the following events hold with probability at least 1 − O(exp(−η log p)), where q and Ψ(p, H, K) are defined in (15).
Fair Ising Graphical Model
In the previous section, we studied the fair estimation of graphical models for continuous data. Next, we focus on estimating an Ising Markov random field (Ising, 1925), suitable for binary or categorical data. Let y = (y_1, . . . , y_p) ∈ {0, 1}^p denote a binary random vector. The Ising model specifies the probability mass function

$$p_{\Theta}(y) = \frac{1}{W(\Theta)}\, \exp\Big( \sum_{j=1}^{p} \theta_{jj}\, y_j + \sum_{1 \le j < j' \le p} \theta_{jj'}\, y_j\, y_{j'} \Big). \tag{18}$$

Here, W(Θ) is the partition function, which ensures that the density in (18) sums to one; Θ is a p × p symmetric matrix that specifies the graph structure: θ_{jj'} = 0 implies that the j-th and j'-th variables are conditionally independent given the remaining ones.
Several sparse estimation procedures for this model have been proposed. Lee et al. (2007) considered maximizing an $\ell_1$-penalized log-likelihood for this model. Because the log-likelihood, with its expensive partition function, is difficult to compute, alternative approaches have been considered. For instance, Ravikumar et al. (2010) proposed a neighborhood selection approach that solves p logistic regressions separately (one for each node in the network), which leads to an estimated parameter matrix that is in general not symmetric. In contrast, others have considered maximizing an $\ell_1$-penalized pseudo-likelihood with a symmetry constraint on Θ (Guo et al., 2011c,a; Tan et al., 2014; Tarzanagh and Michailidis, 2018). Under the probability model above, the negative log-pseudo-likelihood L_n(Θ; Y) for n observations takes the form of a sum of node-conditional logistic losses (19). We propose to additionally impose the fairness constraints on Θ in (19) in order to obtain a sparse binary network with fair communities. This leads to the criterion

$$\min_{\Theta \in M,\; Q \in N}\; L_n(\Theta; Y) + \rho_1 \|\Theta\|_{1,\text{off}} + \rho_2\, \operatorname{trace}\big((S + Q)\, \Theta\big). \tag{20}$$

Here, M and N are the graph and fairness constraint sets, respectively.
We refer to the solution of (20) as the Fair Binary Network (FBN). An interesting connection can be drawn between our technique and a fair variant of the Ising block model discussed in Berthet et al. (2016), which is a perturbation of the mean-field approximation of the Ising model known as the Curie-Weiss model: the sites are partitioned into two blocks of equal size, and the interaction between sites within the same block is stronger than across blocks, to account for more order within each block. One can easily see that the Ising block model is a special case of (20).
An ADMM algorithm for solving (20) is given in Algorithm 1. The update for Ω in S2 can be obtained from (12a) by replacing Ω² with Ω. We solve the update for Θ in S3 using a relaxed variant of the Barzilai-Borwein method (Barzilai and Borwein, 1988); the details are given in (Tarzanagh and Michailidis, 2018, Algorithm 2).
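For contrast with FBN, the sketch below implements the non-symmetric neighborhood-selection baseline for the Ising model: one $\ell_1$-penalized logistic regression per node, symmetrized post hoc. It omits the symmetry constraint, the trace term, and the fairness constraints that FBN enforces, and the data here are random.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def ising_neighborhoods(Y, C=0.5):
    """One l1-penalized logistic regression per node; returns a symmetrized
    edge-weight matrix (zero diagonal)."""
    n, p = Y.shape
    Theta = np.zeros((p, p))
    for j in range(p):
        if len(np.unique(Y[:, j])) < 2:   # skip constant nodes
            continue
        X = np.delete(Y, j, axis=1)
        clf = LogisticRegression(penalty="l1", solver="liblinear", C=C).fit(X, Y[:, j])
        Theta[j, np.arange(p) != j] = clf.coef_.ravel()
    return 0.5 * (Theta + Theta.T)        # post-hoc symmetrization

rng = np.random.default_rng(1)
Y = rng.integers(0, 2, size=(300, 8))
print((np.abs(ising_neighborhoods(Y)) > 1e-8).sum(), "nonzero entries")
```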
Large Sample Properties of FBN
In this section, we present the model selection consistency property of the separate regularized logistic regressions. The spirit of the proof is similar to that of Ravikumar et al. (2010), but since their model does not include the membership matrix Q and the fairness constraints, there are significant differences. As in Section 3.1.1, let θ_o = (θ_ij)_{1≤i<j≤p} and q_o = (q_ij)_{1≤i<j≤p} denote the vectors of off-diagonal entries of Θ and Q, respectively, and let θ_d and q_d denote the vectors of diagonal entries of Θ and Q, respectively; θ̄_o and q̄_o denote the corresponding true values. Let B denote the set of non-zero entries in the vector θ̄_o, and let q = |B|.
Denote the log-likelihood for the i-th observation by ℓ_i. The population Fisher information matrix of L at (θ̄_d, θ̄_o) is denoted by H̄, and its sample counterpart by H̄_n, where 0_n is an n-dimensional column vector of zeros. Let X̄^{(i,j)} be the [(j − 1)n + i]-th row of X̄, and X̄^{(i)} = (X̄^{(i,1)}, . . . , X̄^{(i,p)}). Let T = E[X̄^{(i)}(X̄^{(i)})^⊤], with T_n = (1/n) Σ_{i=1}^n X̄^{(i)}(X̄^{(i)})^⊤ as its sample counterpart.
Theorem 6 provides sufficient conditions on the quadruple (n, p, H, K) and the model parameters for FBN to succeed in consistently estimating the neighborhood of every node in the graph and the communities simultaneously. We note that if H = 1 (no fairness) and K = p (no clustering), we recover the results of Ravikumar et al. (2010).
Consistency of Fair Community Labeling in Graphical Models
In this section, we aim to show that our algorithms recover the fair ground-truth community structure of the graph. Let V and V̄ contain as columns the orthonormal eigenvectors corresponding to the K leading eigenvalues of Q and Q̄, respectively. It follows from (Lei et al., 2015, Lemma 2.1) that if any two rows of the matrix V̄ are the same, then the corresponding nodes belong to the same cluster.
Consequently, we want to show that, up to some orthogonal transformation, the rows of V are close to the rows of V̄, so that we can simply apply K-means clustering to the rows of the matrix V. In particular, we consider the K-means approach of Lei et al. (2015), defined as

$$(\hat{\Xi}, \hat{X}) = \arg\min_{\Xi \in M_{p,K},\; X \in \mathbb{R}^{K \times K}} \|\Xi X - V\|_F^2, \tag{24}$$

where M_{p,K} is the set of p × K matrices that have a single 1 in each row, indicating the fair community to which the node belongs, with all other entries of the row set to 0, since a node belongs to only one community. Finding a global minimizer of (24) is NP-hard (Aloise et al., 2009); however, there are polynomial-time approaches (Kumar et al., 2004) that find an approximate solution. Next, similar to (Lei et al., 2015, Theorem 3.1), we quantify the errors incurred when performing (1 + ξ)-approximate K-means clustering on the rows of V to estimate the community membership matrix.
To do so, let E k denote the set of misclassified nodes from the k-th community.
By C̄ we denote the set of all nodes correctly classified across all communities, and by V̄_C̄ the submatrix of V̄ formed by retaining only the rows indexed by the set C̄ of correctly classified nodes (and all columns). Theorem 7 relates the sizes of the sets of misclassified nodes for each fair community and specifies conditions on the interplay between n, p, H, and K.
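A small sketch of the labeling step discussed above: extract the K leading eigenvectors of an estimated membership matrix Q̂ and run K-means on their rows, as in (24). The toy Q̂ below has two exact blocks, so the recovered labels are exact.

```python
import numpy as np
from sklearn.cluster import KMeans

def labels_from_Q(Q_hat, K):
    """K-means on the rows of the K leading eigenvectors of Q_hat."""
    vals, vecs = np.linalg.eigh(Q_hat)   # eigenvalues in ascending order
    V = vecs[:, -K:]                     # K leading eigenvectors as columns
    return KMeans(n_clusters=K, n_init=10).fit_predict(V)

Q_hat = np.block([[np.full((3, 3), 1/3), np.zeros((3, 2))],
                  [np.zeros((2, 3)), np.full((2, 2), 1/2)]])
print(labels_from_Q(Q_hat, K=2))         # e.g. [0 0 0 1 1] up to label order
```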
Here, |Θ̄| is the cardinality of Θ̄, i.e., the number of unique non-zeros in Θ̄. Using a similar idea, we consider minimizing the following BIC-type criterion for selecting the set of tuning parameters (ρ_1, ρ_2) in (3), where Θ̂_k is the k-th estimated inverse covariance matrix. Note that when the constant c is large, BIC(Θ̂, Q̂) favors sparser estimates of Θ̂.
Notation and Measures of Performance
We define several measures of performance that will be used to numerically compare the various methods. To assess clustering performance, we compute the clustering error (CE), which measures the distance between an estimated community assignment ẑ_i and the true assignment z_i of the i-th node. To measure estimation quality, we calculate the proportion of correctly estimated edges (PCEE). Finally, we use balance as a fairness metric to reflect the demographic distribution of the clustering (Chierichetti et al., 2017). Let N_i = {j : r_ij = 1} be the set of neighbors of node i in R. The balance coefficient, called simply the balance, quantifies how well the selected edges eliminate discrimination: the selected edges are considered fairer if they lead to a balanced community structure that preserves the proportions of the protected attributes.
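A minimal balance computation is sketched below. The paper's precise formula was lost in extraction, so this uses the common Chierichetti et al. (2017)-style variant (minimum over communities of the min/max group-count ratio); treat the exact definition as an assumption.

```python
import numpy as np

def balance(labels, groups):
    """Min over communities of the min/max demographic-group count ratio.
    Returns 1 for perfectly balanced communities, 0 for fully segregated ones."""
    out = []
    for k in np.unique(labels):
        counts = np.array([np.sum((labels == k) & (groups == h))
                           for h in np.unique(groups)])
        out.append(counts.min() / max(counts.max(), 1))
    return min(out)

labels = np.array([0, 0, 0, 1, 1, 1])
groups = np.array([0, 1, 0, 1, 0, 1])
print(balance(labels, groups))   # 0.5: each community has a 1:2 group split
```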
Data Generation
In order to demonstrate the performance of the proposed algorithms, we create several synthetic datasets based on a special random graph with community and group structures. The baseline and proposed algorithms are then used to recover the graphs (i.e., graph-based models) from the artificially generated data. To create a dataset, we first construct a graph; its associated matrix Θ is then used to generate independent data samples from the distribution N(0, Θ⁻¹). A graph (i.e., Θ) is constructed in two steps. In the first step, we determine the graph structure (i.e., connectivity) based on a random modular graph, also known as the stochastic block model (SBM) (Holland et al., 1983; Kleindessner et al., 2019).
The SBM takes as input a function π_c : [p] → [K] that assigns each vertex i ∈ V to one of K clusters. Then, independently for all node pairs (i, j) with i > j, P(a_ij = 1) = b_{π_c(i)π_c(j)}, where B ∈ [0, 1]^{K×K} is a symmetric matrix. Each b_{kℓ} specifies the probability of a connection between two nodes belonging to clusters C_k and C_ℓ, respectively. A commonly used variant of the SBM assumes b_{kk} = ξ_1 and b_{kℓ} = ξ_2 for all k, ℓ ∈ [K] with k ≠ ℓ. Let π_d : [p] → [H] be another function that assigns each vertex i ∈ V to one of H protected groups. We consider a variant of the SBM whose connection probabilities depend on both community and group co-membership, governed by probabilities 1 ≥ ζ_{i+1} ≥ ζ_i ≥ 0 used for sampling edges. In our implementation, we set ζ_i = 0.1i for all i = 1, . . . , 4. We note that when vertices i and j belong to the same community, they have a higher probability of connection for a fixed value of π_d; see (Kleindessner et al., 2019) for further discussion.
In the second step, the graph weights (i.e., node and edge weights) are drawn uniformly at random from the interval [0.1, 3], and the associated (Laplacian) matrix Θ is constructed. Finally, given the graph matrix Θ, we generate the data matrix Y by drawing y_1, . . . , y_n i.i.d. from N(0, Θ⁻¹).
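A toy generator for the group-aware SBM variant is sketched below. The mapping of the four (community, group) co-membership cases to ζ₁ ≤ ⋯ ≤ ζ₄ is an assumption consistent with the text (same community implies higher connection probability), and the sizes are illustrative.

```python
import numpy as np

def group_sbm(comm, grp, zetas, rng):
    """Sample an adjacency matrix where P(edge) depends on whether the pair
    shares a community and/or a demographic group."""
    p = len(comm)
    A = np.zeros((p, p))
    for i in range(p):
        for j in range(i):
            same_c = comm[i] == comm[j]
            same_g = grp[i] == grp[j]
            prob = zetas[2 * same_c + same_g]   # indexes zeta_1 .. zeta_4
            A[i, j] = A[j, i] = rng.random() < prob
    return A

rng = np.random.default_rng(0)
comm = np.repeat([0, 1], 10)        # two communities of 10 nodes
grp = np.tile([0, 1], 10)           # two interleaved demographic groups
A = group_sbm(comm, grp, zetas=[0.1, 0.2, 0.3, 0.4], rng=rng)
print(int(A.sum() // 2), "edges")
```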
Comparison to Graphical Lasso and Neighbourhood Selection Methods
We consider four setups for comparing our methods with community-based graphical models (GMs): I. A three-stage approach in which we (i) use a GM to estimate the precision matrix Θ̂, (ii) apply a community detection approach (Cai et al., 2015) to compute the partition matrix Q̂, and (iii) employ K-means clustering to obtain the clusters.
II. A two-stage approach in which we (i) use (4) without the fairness constraint to simultaneously estimate the precision and partition matrices and (ii) employ K-means clustering to obtain the clusters.
FI. A three-stage approach in which we (i) use a GM to estimate the precision matrix Θ̂, (ii) apply a community detection approach (Cai et al., 2015) to compute the partition matrix Q̂, and (iii) employ fair K-means clustering (Chierichetti et al., 2017) to obtain the clusters.
FII. A two-stage approach in which we (i) use (4) to simultaneously estimate the precision and partition matrices and (ii) employ K-means clustering to obtain the clusters.
The main goal of Setups I. and II. is to compare the community detection errors without fairness constraints under different settings of L and G functions.
We consider three types of GMs in Setups I.-FII., the first of which (type A) is a graphical Lasso-type method (Friedman et al., 2008). In our results, we use the name "GM-Type-Setup" to refer to the GM type and the setup above. We repeated the procedure 10 times and report the averaged clustering error, proportion of correctly estimated edges, and balance. Tables 1 and 2 report the results for the SBM with p = 300, and Table 3 reports the averaged clustering error, proportion of correctly estimated edges, and balance for the SIBM with p = 100. The standard GM-C.I. method has the largest clustering error and the worst balance, owing to its ignorance of the network structure in the precision matrices. GM-C.FI. improves on the clustering performance of GM-C.I. through better precision matrix estimation and the robust community detection approach of (Cai et al., 2015) for computing the partition matrix Q̂. GM-C.FII. achieves the best balance and clustering performance thanks to its simultaneous fair clustering and heterogeneous GM estimation.
Real Data Application
Recommender systems (RSs) model user-item interactions to provide personalized item recommendations suited to each user's taste. Broadly speaking, two types of methods are used in such systems: content-based and collaborative filtering. Content-based approaches model interactions through user and item covariates. Collaborative filtering (CF), on the other hand, refers to a set of techniques that model user-item interactions based on users' past responses.
A popular class of methods in RSs is based on clustering users and/or items (Ungar and Foster, 1998; O'Connor and Herlocker, 1999; Sarwar et al., 2001; Schafer et al., 2007). Indeed, it is natural to model the users and the items using clusters (communities), where each cluster includes a set of like-minded users or the subset of items they are interested in. The overall procedure, called cluster CF (CCF), consists of two main steps. First, it finds clusters of users and/or items, where each cluster includes a group of like-minded users or a set of items that these users are particularly interested in. Second, within each cluster, it applies traditional CF methods to learn users' preferences over the items in that cluster. Despite the efficiency and scalability of these methods, in many human-centric applications, using CCF in its original form can produce unfavorable and even harmful clustering and prediction outcomes for some demographic groups in the data.
It is shown in Schafer et al. (2007) and Mnih and Salakhutdinov (2008) that item-item similarities based on "who-rated-what" information are strongly correlated with how users explicitly rate items. Hence, using this information as user covariates helps improve predictions of explicit ratings. Further, one can derive an item graph whose edge weights represent movie similarities based on the global "who-rated-what" matrix (Kouki et al., 2015; Wang et al., 2015; Agarwal et al., 2011; Mazumder and Agarwal, 2011). Imposing sparsity on such a graph and finding its fair communities is attractive, since an item is intuitively related to only a few other items; this can be achieved through our fair GMs. Such a graph gives a fair neighborhood structure that can also help better predict explicit ratings. In addition to providing useful information for predicting ratings, who-rated-what information also supports studying the fair relationships among items based on user ratings.
The goal of our analysis is to understand the balance and prediction accuracy of fair GMs on RS datasets, as well as the relationships among the items in these datasets. We compare the performance of our fair GMs, implemented in the framework of standard CCF, against its fair K-means variant. In particular, we consider the following algorithms:

• FGLASSO (FCONCORD)+CF: A two-stage approach in which we first use FGLASSO (FCONCORD) to obtain the fair clusters and then apply traditional CF to learn users' preferences over the items within each cluster. We set ρ_1 = 1, ρ_2 = 0.05, γ = 0.01, and ε = 1e−3 in our implementations.
• CCF (Fair CCF): A two-stage approach in which we first use K-means (respectively, fair K-means; Chierichetti et al. (2017)) clustering to obtain the clusters and then apply CF to learn users' preferences within each cluster (Ungar and Foster, 1998).
MovieLens Data
We use the MovieLens 10K dataset 2 . Following previous works (Koren, 2009; Kamishima et al., 2012; Chen et al., 2020), we use the year of the movie as a sensitive attribute and consider movies released before 1991 as old movies; more recent ones are considered new movies. Koren (2009) showed that older movies tend to be rated higher, perhaps because only masterpieces have survived.
When adopting year as a sensitive attribute, we show that our fair graph-based RS enhances neutrality with respect to this masterpiece bias. Clustering balance and root mean squared error (RMSE) are used to evaluate the different modeling methods on this dataset. Since reducing RMSE is the goal, statistical models assume the response (ratings) to be Gaussian for these data (Kouki et al., 2015; Wang et al., 2015; Agarwal et al., 2011). Experimental results are shown in Figure 1. As expected, the baseline with no notion of fairness (CCF) attains the best overall RMSE, with our two approaches (FGLASSO+CF and FCONCORD+CF) performing fairly close to CCF. Figure 1 (right) demonstrates that, compared to fair CCF, FGLASSO+CF and FCONCORD+CF significantly improve the clustering balance. Hence, our fair graph-based RSs successfully enhance neutrality without seriously sacrificing prediction accuracy.
Fair GMs also provide information to study the relationships among items based on user ratings.
To illustrate this, the top-5 movie pairs with the highest absolute partial correlations are shown in Table 4. When we look for movies highly related to a specific movie in the precision matrix, we find that FCONCORD enhances balance by assigning higher correlations to more recent movies such as "The Wrong Trousers" (1993) and "A Close Shave" (1995).
In addition, the estimated communities for two sub-graphs of movies are also shown in Figure 2.
From both networks, we can see that the estimated communities mainly consist of mass-marketed commercial movies, dominated by action films. These movies are usually characterized by high production budgets, state-of-the-art visual effects, and famous directors and actors. Examples in these communities include "The Godfather" (1972), "Terminator 2" (1991), "Return of the Jedi" (1983), and "Raiders of the Lost Ark" (1981). As expected, movies within the same series are the most strongly associated. Figure 2 (right) shows that FCONCORD reduces the bias toward old movies by replacing them with newer ones such as "Jurassic Park" (1993), "The Wrong Trousers" (1993), and "A Close Shave" (1995).
Music Data
Music RSs are designed to give personalized recommendations of songs, playlists, or artists to a user, thereby reflecting and further complementing individual users' specific music preferences. Although accuracy metrics have been widely applied to evaluate recommendations in the music RS literature, evaluating a user's music utility from other impact-oriented perspectives, including the potential for discrimination, is still a novel evaluation practice (Epps-Darling et al., 2020; Chen et al., 2020; Shakespeare et al., 2020). Next, we center our attention on artist gender bias and estimate whether standard music RSs may exacerbate its impact.
To illustrate the impact of artist gender bias in RSs, we use the freely available binary LFM-360K music dataset 3 . The LFM-360K consists of the listening histories of approximately 360,000 Last.fm users, collected during Fall 2008. We generate recommendations for a sample of all users for whom gender can be identified; due to computational constraints, we limit this sample to a random 10% of all male and female users in the dataset. Let U be the set of n users, I the set of p items, and Y the n × p input matrix, where y_ui = 1 if user u has selected item i, and zero otherwise. Given the matrix Y, the input preference ratio (PR) of user u toward an artist group is the fraction of the user's selected items belonging to that group. Only around 20% of users have a PR toward male artists lower than 0.8; on the contrary, 80% of users have a PR lower than 0.2 toward female artists. This shows that commonly deployed state-of-the-art CF algorithms may act to further increase or decrease artist gender bias in user-artist RSs.
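A minimal computation of the preference ratio from the binary selection matrix Y is sketched below. Since the paper's exact PR formula was reconstructed from context, treat the definition in the code as an assumption.

```python
import numpy as np

def preference_ratio(Y, item_group, g):
    """Per-user fraction of selected items that belong to artist group g.
    Users with no selections get a PR of 0 by convention."""
    sel = Y.sum(axis=1)                               # items selected per user
    in_group = (Y * (item_group == g)).sum(axis=1)    # selections in group g
    return np.divide(in_group, sel,
                     out=np.zeros(len(sel)), where=sel > 0)

Y = np.array([[1, 1, 0, 1],
              [0, 1, 1, 0]])
item_group = np.array([0, 0, 1, 1])                   # 0 = male artist, 1 = female
print(np.round(preference_ratio(Y, item_group, g=0), 3))  # [0.667 0.5]
```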
Next, we study the balance and prediction accuracy of fair GMs on music RSs. Figure 4 indicates that the proposed FBN+CF has the best combined performance in terms of RMSE and balance. As expected, the baseline with no notion of fairness (CCF) attains the best overall precision. Of the two fairness-aware approaches, the fair K-means based approach (Fair CCF) performs considerably below FBN+CF. This suggests that recommendation quality can be preserved, but leaves open the question of whether fairness can be improved. Hence, we turn to the impact of the three approaches on fairness. Figure 4 (right) presents the balance. We can see that the fairness-aware approaches (Fair CCF and FBN+CF) have a strong impact on the balance in comparison with standard CCF. For RMSE, FBN+CF achieves a much smaller ratings difference than Fair CCF, indicating that we can induce aggregate statistics that are fair between the two sides of the sensitive attribute (male vs. female).
Conclusion
In this work, we developed a novel approach to learning fair GMs with community structure. Our goal is to motivate a new line of work on fair community learning in GMs that can begin to alleviate fairness concerns in this important subtopic of unsupervised learning. Our optimization approach used the demographic parity definition of fairness, but the framework is easily extended to other definitions of fairness. We established the statistical consistency of the proposed method for both a Gaussian GM and an Ising model, proving that our method can recover the graphs and their fair communities with high probability.
We applied the proposed framework to the tasks of estimating a Gaussian graphical model and a binary network. The proposed framework can also be applied to other types of graphical models, such as the Poisson graphical model or the exponential family graphical model (Yang et al., 2012).
Wang, P., D. L. Chao, and L. Hsu (2009). Learning networks from high dimensional binary data: An application to genomic instability data. arXiv preprint arXiv:0908.3882 .
Supplementary Material to "Fair Structure Learning in Heterogeneous Graphical Models"
Davoud Ataee Tarzanagh, Laura Balzano, and Alfred O. Hero

• Appendix A.1 provides some preliminaries used in the proofs of the main theorems.
• Appendix A.2 provides large sample properties of FCONCORD, i.e., the proof of Theorem 5.
• Appendix A.3 provides large sample properties of FBN, i.e., the proof of Theorem 6.
• Appendix A.4 gives the consistency of fair community labeling in graphical models.
• Appendix A.5 provides the detailed derivation of the updates for Algorithm 1.
A.1 Preliminaries
Let the notation F_n(θ_d, θ_o, q_d, q_o; Y) stand for F_n in (11). We introduce a restricted version of criterion (11), denoted (A.1). We define a linear operator A : R^{p(p−1)/2} → R^{p×p}, w ↦ Aw; an example of Aw for a vector w = [w_1, w_2, · · · , w_6] is given below. We derive the adjoint operator A* of A by requiring that ⟨Aw, z⟩ = ⟨w, A*z⟩; see Kumar et al. (2020) for more details.
Since by our assumption ε = 0, we obtain the exact parity constraint. It is easy to see that rank(A_1) = H − 1. Let N ∈ R^{p×(p−H+1)} be a matrix whose columns form an orthonormal basis of the nullspace of A_1. We can substitute accordingly and then, using that N^⊤N = I_{(p−H+1)}, Problem (A.3) becomes (A.4). Throughout, we use ḡ_n and H̄_n to denote the gradient and the Hessian of L_n(θ_d, θ_o; Y), and we define the population gradient and Hessian analogously, for 1 ≤ i < j ≤ p and 1 ≤ t < s ≤ p.
A.2 Large Sample Properties of FCONCORD
We list some properties of the loss function.
Lemma 8 (Peng et al., 2009). The following holds for the loss function: (L4) there exists a constant 0 < M_6(θ_o) < ∞ such that the stated bound holds for all (i, j) ∈ B. Lemma 9 (Peng et al., 2009). Suppose Assumptions (A1)-(A2) hold; then for any η > 0, there exist constants c_0-c_3 such that, for any v ∈ R^q, the following events hold with probability at least 1 − O(exp(−η log p)) for sufficiently large n. Lemma 10. There exist constants C_1(θ_o) and D_1(q_o) such that, for any η > 0, there exists a (local) minimizer of the restricted problem (A.1) within the disc displayed below, with probability at least 1 − O(exp(−η log p)) for sufficiently large n.
From Assumption (A2), we have the population eigenvalue bounds; hence, for sufficiently large n, the corresponding sample bounds hold. Now, combining (A.8)-(A.10), for sufficiently large n we obtain the desired inequality; the first inequality uses ρ_1n = √(log p / n), and the last follows by setting C_1 > 2/M_1. Now, let S_{w,z} = {(w, z) : w_{B^c} = 0, ‖w‖ = C_1, ‖z‖ = D_1}. Then, for sufficiently large n, the following holds with probability at least 1 − O(exp(−η log p)). Lemma 11. Assume the conditions of Lemma 12 hold and ρ_2n < δρ_1n/(τ_2τ_3). Then, there exists a constant C_2 > 0 such that, for any η > 0 and sufficiently large n, the following event holds with probability at least 1 − O(exp(−η log p)) for any θ_o satisfying the bound below.
Proof. The proof follows the idea of (Peng et al., 2009, Lemma S-4). For θ_o ≠ θ̄_o satisfying (A.11), we have θ_o = θ̄_o + μ_1n w, with w_{B^c} = 0 and ‖w‖ ≥ C_2. The second inequality in the display follows from a Taylor expansion. Now, let μ_1n = √q ρ_1n. By the triangle inequality and proof strategies similar to those in Lemma 10, for sufficiently large n we obtain the stated bound with probability at least 1 − O(exp(−η log p)); the first inequality uses Lemma 9, and the last follows from Lemma 8, where we have ‖H̄_{B,B} w_B‖ ≥ M_1‖w_B‖. Now, taking the constant appropriately for some ε > 0 completes the proof.
Proof. By the KKT conditions, any solution (θ̂_o, r̂_o) of (A.1) satisfies the stationarity system displayed above. Thus, for sufficiently large n, we have the stated bound. Let C(θ_o) := C_2. Using (A.14) and Lemma 11, we obtain the claim with probability at least 1 − O(exp(−η log p)).
Lemma 13 below shows that no wrong edge is selected with probability tending to one.
Lemma 13. Suppose that the conditions of Lemma 12 and Assumption (A5) are satisfied, and suppose further that p = O(n^κ) for some κ > 0. Then for any η > 0 and sufficiently large n, the solution of (A.1) selects no wrong edge with high probability. On E_{n,k}, by the KKT condition and the expansion of F_n at (θ̄_d, θ̄_o, r̄_o), we can write the following. Let θ̃_o denote a point on the line segment connecting θ̂_o and θ̄_o; applying a Taylor expansion, utilizing the fact that θ̂_o solves (A.1), and using that H̄_{n,BB} is invertible by assumption, we obtain an explicit expression for the error. Using the results of Lemmas 14 and 16, we obtain the corresponding bounds. Now, (i) by the incoherence condition of Assumption (A5), for any (i, j) ∈ B^c we have the stated inequality. Thus, following (with the modification that we consider each B instead of B̄) the proofs of (Peng et al., 2009, Theorem 2), the remaining terms in (A.21) can be shown to be o(ρ_1n), and the event holds with probability at least 1 − O(exp(−η log p)) for sufficiently large n and ρ_2n ≤ δρ_1n/(16(1 + M_7(θ̄))τ_2τ_3). Thus, we have proved that, for sufficiently large n, no wrong edge is included for the true edge set B.
A.2.1 Proof of Theorem 5
Proof. By Lemmas 12 and 13, with probability tending to 1, there exists a local minimizer of the restricted problem that is also a minimizer of the original problem. This completes the proof.
A.3 Large Sample Properties of FBN
The proof bears some similarities to the proofs of Ravikumar et al. (2010) and Guo et al. (2010) for the neighborhood selection method, who in turn adapted the proof of Meinshausen et al. (2006) to binary data; however, there are also important differences, since all conditions and results here concern fair clustering and joint estimation, and many of our bounds need to be more precise than those given by Ravikumar et al. (2010) and Guo et al. (2010).
Following the literature, we prove the main theorem in two steps: first, we prove that the result holds when assumptions (B1') and (B2') hold for H̄_n and T_n, the sample versions of H̄ and T. Then, we show that if (B1') and (B2') hold for the population versions H̄ and T, they also hold for H̄_n and T_n with high probability (Lemma 17).
(B2') There exists a constant δ ∈ (0, 1] such that the incoherence bound below holds. We first list some properties of the loss function.
This implies that $\|V - \hat{V}O\|_F \le \kappa\,\varphi(p, H, K)\sqrt{K/n}$ for some κ > 0. The rest of the proof follows as in the proof of (Lei et al., 2015, Theorem 1) for some π > 0, which gives (14b). Note that since θ_ii > 0, the positive root has been retained as the solution.
The proof for updating Ω follows similarly.
Papillary oncocytoma of eye lid- A rare case
Papillary oncocytoma is an uncommon tumor arising from the ductular cell lining of glandular structures. These tumors contain transformed epithelial cells with eosinophilic granular cytoplasm containing densely packed abnormal mitochondria. Ocular oncocytomas are usually benign, but malignant transformation, with both local and distant spread, can occasionally occur. We report a case of papillary oncocytoma managed with surgery, with no recurrence on follow-up.
Introduction
Papillary oncocytomas, or oxyphil adenomas, are uncommon tumors arising from the ductular cell lining of glandular structures. In these tumors we find transformed epithelial cells with eosinophilic granular cytoplasm containing densely packed abnormal mitochondria. Oncocytomas have been found in various organs, including the thyroid, adrenal gland, kidney, liver, and breast. Ocular oncocytomas are usually benign in nature, but malignant transformation, with both local and distant spread, can occasionally occur. This depends on the location: oncocytomas of lacrimal or caruncular origin are generally benign, 1 whereas those with orbital involvement may be malignant. The commonest site of oncocytoma is the caruncle, whereas oncocytoma of the eyelid margin is very rare. 1-4
Case Report
A 50-year-old male patient presented to us with a two-year history of a slowly enlarging cystic lesion of the right lower eyelid. The lesion was painless and slowly progressive. There was no relevant medical history. On clinical examination, the lesion was a painless, pink, cystic, circumscribed mass measuring 3 mm × 2 mm with a smooth surface (Figure 1). The growth was not adherent to the overlying skin but appeared to arise from the tarsal border. On eyelid eversion, the overlying palpebral conjunctiva appeared congested. Regional lymph nodes were not clinically palpable. Systemic examination was also unremarkable. Initial differential diagnoses included sebaceous cyst, epidermoid cyst, and cyst of Moll. In toto excision of the cyst, with minimal excision of the tarsal plate, was performed under local anesthesia. The excised tissue was sent for histopathological examination, which showed a tubulo-papillary architecture in which the constituent polygonal cells had regular vesicular nuclei with abundant, finely granular eosinophilic cytoplasm (Figures 2 and 3). Four years after the surgery, the patient has had no recurrence of the tumor.
Discussion
Oncocytomas are benign tumors that can occur at various sites. Ocular adnexal oncocytomas are usually situated in the lacrimal drainage apparatus, and the commonest site is the caruncle, although cases occurring at other sites such as the eyelids have been reported. They occur most commonly in elderly females. Clinically, these tumors tend to present as slow-growing, asymptomatic lesions that are often red in color. 5 The differential diagnosis includes melanocytic nevus, benign epithelial tumors, pyogenic granuloma, and hemangioma. 6 Oncocytoma of the eyelid margin is rare. Ocular oncocytomas are generally benign, but can occasionally become malignant, with both local and distant spread; this again depends on the site of involvement, and tumors with orbital involvement may be malignant. As mentioned, oncocytomas are benign tumors that usually require only excision. However, they can recur in cases of incomplete excision, 7 can be locally aggressive, and can very rarely become malignant.
Conclusion
In our case, the oncocytoma arose from the lower eyelid, which is a very rare site. On follow-up, there has been no recurrence for four years.
Source of Funding
None.
Conflict of Interest
The author(s) declare(s) that there is no conflict of interest.
A pyroptosis-related gene signature for prognosis and immune microenvironment of pancreatic cancer
Pancreatic cancer is one of the most lethal tumors owing to its unspecific symptoms during the early stage and multiple treatment resistances. Pyroptosis, a newly discovered gasdermin-mediated cell death, exerts anti- or pro-tumor effects in a variety of cancers, whereas the impact of pyroptosis in pancreatic cancer remains unclear. Therefore, we downloaded RNA expression and clinical data from the TCGA-PAAD cohort and were surprised to find that most pyroptosis-related genes (PRGs) are not only overexpressed in tumor tissue but also strongly associated with overall survival. Given their remarkable prognostic value, Cox regression analysis and LASSO regression were used to establish a five-gene signature. All patients were divided into low- and high-risk groups based on the median value of the risk score, and we discovered that low-risk patients had better outcomes in both the testing and validation cohorts using time-dependent receiver operating characteristic (ROC) curves, nomograms, survival analysis, and decision curve analysis. More importantly, a higher somatic mutation burden and less immune cell infiltration were found in the high-risk group. Following that, we predicted tumor response to chemotherapy and immunotherapy in both the low- and high-risk groups, which suggested that low-risk patients were more likely to respond to both immunotherapy and chemotherapy. To summarize, our study established an effective model that can help clinicians better predict patients' drug responses and outcomes, and we also present basic evidence for future pyroptosis-related studies in pancreatic cancer.
Introduction
Pancreatic cancer (PAAD), which is primarily composed of pancreatic ductal adenocarcinoma, is one of the most fatal malignancies in the United States, with a survival rate of about 10% (Siegel et al., 2021). The poor prognosis and stable incidence rates of PAAD cases were not only associated with increased exposure to risk factors such as obesity, diabetes, tobacco use, and alcohol consumption, but also with nonspecific symptoms at the early stage (Stolzenberg-Solomon et al., 2013; Rebours et al., 2015; Walter et al., 2016). Worse still, only modest progress has been achieved in reducing the mortality rate of PAAD. Though immunotherapy has proved to be a promising treatment in many other malignancies, few PAAD patients have benefited from immune checkpoint inhibitors (ICIs) (Torphy et al., 2018; Galluzzi et al., 2020). The "cold" tumor microenvironment is one of the primary reasons for its immunotherapy resistance (O'Donnell et al., 2019). The tumor microenvironment of PAAD is mainly composed of immunosuppressive cells, such as tumor-associated macrophages, myeloid-derived suppressor cells, and regulatory T cells (Clark et al., 2007). Additionally, it is believed that an unusually intense desmoplastic reaction surrounding PAAD contributes to the formation of a barrier that prevents immune infiltration and chemotherapy exposure (Provenzano et al., 2012; Ho et al., 2020). Therefore, it is critical to investigate the molecular pathways related to the PAAD microenvironment.
Pyroptosis is defined as the caspase (CASP) family-driven programmed necrotic cell death mediated by gasdermin (GSDM) (Shi et al., 2015). When triggered by bacteria, viruses, toxins, or chemotherapy, pyroptosis can release pro-inflammatory cytokines and immunogenic material, promoting the activation and infiltration of immune cells (Loveless et al., 2021; Yu et al., 2021). Pyroptotic cell death is characterized by cellular swelling and bubble-like protrusions forming on the cell membrane surface, as well as the release of IL1 and IL18 (Loveless et al., 2021; Yu et al., 2021). Cancers of all forms are closely related to pyroptosis (Yu et al., 2021). On the one hand, inducing pyroptosis was originally considered a promising therapeutic strategy for increasing the anti-tumor immune response. On the other hand, the activation of multiple signaling pathways and the release of cytokines can lead to tumorigenesis and drug resistance (Xia et al., 2019). The connection between PAAD and pyroptosis is still unclear. Recent work demonstrated that STE20-like kinase 1 slowed PAAD progression by triggering ROS-mediated pyroptosis, implying that pyroptosis may be a potential therapeutic target for PAAD (Cui et al., 2019).
One possible reason for the depressing outcomes of immunotherapy is that PAAD cells can evade cell death induction. Thus, we sought to advance our understanding of the pyroptotic pathway in PAAD and construct a pyroptosis-related gene (PRG) prognostic signature. Our study provides an effective prognostic model as well as basic evidence for subsequent pyroptosis-related studies in PAAD.
Data extraction
The workflow of our study is shown in Figure 1. UCSC Xena (Goldman et al., 2020) (Xena, http://xena.ucsc.edu/) was used to obtain the RNA sequencing profiles and clinical follow-up data of the TCGA-PAAD cohort and the GTEx cohort. Xena was also used to integrate normalized counts from the TCGA-PAAD and GTEx cohorts, owing to the limited number of matched controls in the TCGA-PAAD cohort. All PAAD patients without survival follow-up were excluded from this study. In this cohort, there are 177 PAAD patients and 167 normal pancreatic tissues. The GISTIC copy number dataset and DNA methylation data for all selected patients were obtained from cBioPortal (https://www.cbioportal.org/), while the somatic mutation data of patients were downloaded from TCGA (https://portal.gdc.cancer.gov/). Additionally, we downloaded two extra GEO datasets (GSE28735 and GSE62452, https://www.ncbi.nlm.nih.gov/geo/) and ICGC sequencing profiles from ICGC (https://daco.icgc.org/) as independent validation cohorts (Zhang et al., 2012; Yang et al., 2016).
Identification of differentially expressed genes and functional analysis
The 33 PRGs were selected from a previously published study and are listed in Supplementary Table S1 (Ye et al., 2021).
FIGURE 1
The workflow of our study.
The "DESeq2" package was used to identify differentially expressed genes (DEGs) (Love et al., 2014). Additionally, we conducted correlation analyses of gene expression and methylation using cBioPortal (http://cbioportal.org) (Cerami et al., 2012). The Mann-Whitney or unpaired t-test was used to investigate gene expression differences across distinct copy number variations (CNV). The function of DEGs was analyzed using KEGG enrichment analysis and gene set enrichment analysis (GSEA) via the "clusterProfiler" R package (Yu et al., 2012). p-values < 0.05 were defined as statistically significant.
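As a minimal sketch of this step, the following R code illustrates how a DESeq2 differential-expression call followed by a clusterProfiler KEGG enrichment could be run; the object names (counts, coldata, deg_entrez) and the fold-change cut-off are placeholders for illustration, not values taken from our pipeline.

```r
# Sketch: DEG calling with DESeq2, then KEGG over-representation analysis.
# 'counts' is a raw count matrix (genes x samples); 'coldata' holds a
# 'condition' column ("tumor"/"normal"); 'deg_entrez' are Entrez IDs of DEGs.
library(DESeq2)
library(clusterProfiler)

dds <- DESeqDataSetFromMatrix(countData = counts,
                              colData   = coldata,
                              design    = ~ condition)
dds <- DESeq(dds)  # size factors, dispersion estimation, Wald tests
res <- results(dds, contrast = c("condition", "tumor", "normal"))
degs <- subset(as.data.frame(res), padj < 0.05 & abs(log2FoldChange) > 1)

# KEGG enrichment on the DEGs (human pathways: organism = "hsa")
ekegg <- enrichKEGG(gene = deg_entrez, organism = "hsa", pvalueCutoff = 0.05)
head(ekegg)
```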
The construction of prognostic prediction models
To begin, univariate Cox regressions were utilized to examine the relationships between each of the 33 PRGs and overall survival (OS) in the TCGA cohort. A p-value < 0.05 was set as the threshold to identify prognosis-related PRGs. LASSO regression analysis was then used to select significant PRGs and minimize the likelihood of overfitting. Based on these selected PRGs, the prognostic model was constructed using multivariate Cox regression analysis. The risk score for OS was computed with the following formula:

$$\text{risk score} = \sum_{i} \beta_i X_i$$

where $X_i$ represents the expression level of gene $i$ and $\beta_i$ represents its regression coefficient calculated by multivariate Cox regression. All patients were separated into high- and low-risk groups based on the median value of the risk score.
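A minimal R sketch of this three-stage selection is shown below; it assumes an expression matrix expr (samples x 33 PRGs) and a clinical table clin with os_time/os_event columns, and the thresholds mirror the ones stated in the text. It is an illustration of the approach, not our exact scripts.

```r
# Sketch: univariate Cox screen -> LASSO Cox -> multivariate Cox risk score.
library(survival)
library(glmnet)

y <- Surv(clin$os_time, clin$os_event)

# 1) univariate Cox screening of each PRG (keep p < 0.05)
uni_p <- apply(expr, 2, function(g)
  summary(coxph(y ~ g))$coefficients[, "Pr(>|z|)"])
cand <- colnames(expr)[uni_p < 0.05]

# 2) LASSO Cox with 10-fold cross-validation to limit overfitting
cvfit <- cv.glmnet(as.matrix(expr[, cand]), y, family = "cox", nfolds = 10)
keep  <- cand[as.numeric(coef(cvfit, s = "lambda.min")) != 0]

# 3) multivariate Cox; the linear predictor sum(beta_i * X_i) is the risk score
fit   <- coxph(y ~ ., data = as.data.frame(expr[, keep]))
risk  <- predict(fit, type = "lp")
clin$group <- ifelse(risk > median(risk), "high", "low")  # median split
```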
Validation of the prognostic prediction model
To evaluate the accuracy of the prediction model, time-dependent receiver operating characteristic (ROC) curves, nomograms, Kaplan-Meier survival curves, and decision curves were established in the TCGA cohort and the ICGC validation cohort. The ROC curves at 1, 3, and 5 years were generated using the R package "timeROC" (Blanche et al., 2013). The Kaplan-Meier survival curve was generated using the R package "survival" (Grambsch, 2000). The decision curve and the accompanying clinical impact curve were produced with the R package "rmda" (Brown, 2018), and the R package "regplot" (Marshall, 2020) was used to perform the nomogram analysis.
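A rough sketch of the time-dependent ROC and Kaplan-Meier steps is given below, continuing from the risk and clin objects assumed above; the time points are in days and are placeholders.

```r
# Sketch: time-dependent AUC at 1/3/5 years plus a log-rank-tested KM plot.
library(timeROC)
library(survival)

roc <- timeROC(T = clin$os_time, delta = clin$os_event,
               marker = risk, cause = 1,
               times = c(1, 3, 5) * 365, iid = TRUE)
roc$AUC  # AUC at each requested time point

km <- survfit(Surv(os_time, os_event) ~ group, data = clin)
survdiff(Surv(os_time, os_event) ~ group, data = clin)  # log-rank test
plot(km, col = c("red", "blue"), xlab = "Days", ylab = "Overall survival")
```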
Molecular variation analysis and tumor mutation burden between subgroups
After combining the copy number dataset with the somatic mutation dataset of TCGA, we visualized the top 15 genes with the highest mutational frequencies and compared their somatic mutation status across subgroups using the R package "maftools" (Mayakonda et al., 2018). The TMB value of each patient was also calculated through "maftools", and the Mann-Whitney or unpaired t-test was used to compare TMB values across subgroups (Mayakonda et al., 2018). p-values < 0.05 were considered statistically significant.
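The maftools calls involved are roughly as follows; the MAF file name and the capture size are assumptions for illustration.

```r
# Sketch: oncoplot of the 15 most frequently mutated genes and per-sample TMB.
library(maftools)

maf <- read.maf(maf = "TCGA-PAAD.somatic.maf.gz")  # hypothetical file name
oncoplot(maf, top = 15)

# tumor mutation burden; captureSize is in Mb (50 Mb is the package default)
tmb_tbl <- tmb(maf, captureSize = 50, logScale = TRUE)
head(tmb_tbl)
```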
Comprehensive immune characteristics analysis between subgroups
By relating gene expression data to cell purity data, the "ESTIMATE" R package was utilized to determine the activities of tumor cells, immune cells, and stromal cells inside the tumor environment (Yoshihara et al., 2013). We next used single-sample GSEA (ssGSEA) through the "GSVA" R package to determine the relative proportions of 28 different types of tumor-infiltrating immune cells (Hanzelmann et al., 2013). Supplementary Table S2 contains all the gene sets for the targeted immune cells. Apart from that, the relative expression levels of the ICI-targeted genes were determined using FPKM values and compared using the Mann-Whitney or unpaired t-test.
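A minimal sketch of the ssGSEA scoring is shown below; expr_mat (a log-expression matrix, genes x samples) and immune_sets (a named list of marker-gene vectors for the 28 cell types, per Supplementary Table S2) are assumed inputs, and the classic gsva() call is used (newer GSVA releases wrap the same computation in a parameter object).

```r
# Sketch: ssGSEA enrichment scores for 28 immune cell types, then a
# per-cell-type comparison between the risk groups.
library(GSVA)

ss <- gsva(expr_mat, immune_sets, method = "ssgsea")  # 28 x n score matrix
# 'Effector memory CD8 T cell' is an assumed set name from immune_sets
wilcox.test(ss["Effector memory CD8 T cell", clin$group == "high"],
            ss["Effector memory CD8 T cell", clin$group == "low"])
```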
Immunotherapy and chemotherapeutic response prediction
The TIDE (Tumor Immune Dysfunction and Exclusion) web tool (http://tide.dfci.harvard.edu/) was used to predict immunotherapy responses (Jiang et al., 2018). Patients with a lower TIDE score were considered to have a better response to immunotherapy. Besides, based on the GDSC (Genomics of Drug Sensitivity in Cancer) database, the R package "oncoPredict" was used to perform ridge regression analysis on each sample to predict IC50 values for targeted drugs (Maeser et al., 2021). A Mann-Whitney or unpaired t-test was used to compare TIDE scores and IC50 values across subgroups. p-values < 0.05 were considered statistically significant.
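The drug-response step can be sketched with oncoPredict's calcPhenotype(); the GDSC2 training matrices are assumed to have been downloaded as described in the package documentation, and tumor_expr is a placeholder for the cohort expression matrix.

```r
# Sketch: ridge-regression imputation of IC50 values from GDSC training data.
library(oncoPredict)

calcPhenotype(trainingExprData = GDSC2_Expr,   # genes x cell lines
              trainingPtype    = GDSC2_Res,    # cell lines x drugs (IC50)
              testExprData     = as.matrix(tumor_expr),
              batchCorrect     = "eb",
              powerTransformPhenotype = TRUE,
              removeLowVaryingGenes   = 0.2,
              removeLowVaringGenesFrom = "rawData",  # sic: package spelling
              minNumSamples    = 10,
              printOutput      = TRUE)
# predictions are written to ./calcPhenotype_Output/DrugPredictions.csv
```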
Alterations of pyroptosis-related gene RNA expression in pancreatic cancer
To begin, we identified differentially expressed PRGs between PAAD tissue and normal pancreatic tissue from the TCGA-GTEx integrated cohort. The heatmap of PRGs revealed that nearly all PRGs are significantly overexpressed within PAAD tissue (Figure 2A). More specifically, the expression of AIM2, CASP1, CASP3, CASP5, GSDMA, GSDMC, IL1B, IL6, IL18, NLRP1, NLRP2, NLRP3, NLRP7, NOD2, TNF, GPX4, and PYCARD increased more than twofold, whereas CASP9 expression decreased (Figure 2B). Following that, we analyzed two additional GEO datasets (GSE28735 and GSE62452) to see whether this differential expression is widespread; these showed a markedly weaker trend of increase (Supplementary Figures S1A,B) (Zhang et al., 2012; Yang et al., 2016). Considering that the samples of GSE28735 and GSE62452 were taken from tumor and paired adjacent normal tissue, while control samples for the TCGA cohort were derived from healthy pancreas samples from a different cohort, batch effects may partially account for the difference. Nevertheless, all three cohorts revealed unequivocally that PRGs were activated in PAAD, and 18 of these PRGs were overexpressed in all of the datasets when setting p < 0.05 as the threshold. We next mapped these 18 PRGs onto pyroptosis signaling pathways and discovered that caspase-1, -3, and -8-dependent pyroptosis, as well as gasdermin B-mediated pyroptosis, were all closely associated with pancreatic cancer (Supplementary Figure S1C). In general, multiple pyroptosis mechanisms are commonly activated in pancreatic cancer. Then, principal component analysis was performed to identify PRG expression characteristics between normal pancreatic tissue and PAAD, which revealed a clear distinction among samples (Figure 2C). To achieve a better understanding of the relationships among PRGs, a correlation matrix was constructed by calculating the Pearson correlation coefficient between each pair of genes within either normal samples from the GTEx cohort or PAAD samples from the TCGA cohort. In normal pancreatic tissue, the majority of PRGs were found to be remarkably positively correlated with each other, while only five genes were shown to be negatively correlated with other PRGs, namely NLRP2, GSDMA, CASP5, NLRP1, and NOD2 (Figure 2D). Among the PAAD samples, the expression of PRGs was likewise positively correlated, which suggested that the co-interaction of PRGs may have a role in PAAD development (Figure 2E).
DNA methylation and copy number variation affect pyroptosis-related gene expression
To elucidate possible explanations for the increased expression of PRGs in the TCGA cohort, we analyzed DNA methylation and CNV.
Both DNA methylation and CNV have been implicated in the regulation of gene expression in a variety of cancers (Stranger et al., 2007; Daniel et al., 2011). To ascertain if CNV influences PRG expression, we divided the TCGA cohort into five or fewer groups based on their copy number for each gene, comprising deletion, shallow deletion, diploid, gain, and amplification. We discovered that copy number is positively correlated with gene expression in more than half of the PRGs, suggesting a significant role for CNV in gene regulation. Besides that, copy number is negatively correlated with gene expression in 10% of PRGs and shows no clear correlation for the remainder (Figure 3A; violin plots for the remaining PRGs are presented in Supplementary Figure S2). Since CNV alone could not fully account for the increased PRG expression, we performed a correlation analysis between DNA methylation and PRG expression, revealing that the expression of 28/33 PRGs is negatively correlated with DNA methylation (Figure 3B). This indicates that both DNA demethylation and copy number increase contribute to the overexpression of PRGs in PAAD.

[Figure 3. DNA methylation, CNV, and gene expression correlation analysis. (A) Correlations between CNV and PRG expression: a positive correlation means expression increased as copy number rose, a negative correlation means expression decreased as copy number rose, "uncertain" means both directions were observed, and "unknown" means no significant difference between CNV groups. (B) Pearson correlation between methylation (HM450) and mRNA expression z-scores relative to all samples for each PRG. (C) Violin plots of example positively correlated PRGs; the remaining PRGs are presented in Supplementary Figure S2. Significance was determined using the Mann-Whitney or unpaired t-test. Data shown are means ± SD, *p < 0.05, **p < 0.01, ***p < 0.001, ****p < 0.0001.]
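The per-gene methylation-expression correlation can be sketched as follows, assuming matched matrices meth (beta values) and expr_z (expression z-scores), both genes x patients with identical column order; these names are illustrative.

```r
# Sketch: Pearson correlation between methylation and expression per PRG.
cors <- sapply(rownames(expr_z), function(g)
  cor(meth[g, ], expr_z[g, ], method = "pearson",
      use = "pairwise.complete.obs"))
sum(cors < 0)  # the text reports a negative correlation for 28/33 PRGs
```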
Construction of a prognostic gene signature
The ROC curves for each PRG revealed that the majority of PRGs had a high predictive value for diagnosis, implying that they may contribute to PAAD tumorigenesis (Figure 4A). To further assess their prognostic potential, we performed a univariate Cox analysis between each PRG and OS, and 22 genes were screened out (with p < 0.05) (Figure 4B). LASSO regression analysis was then used to identify the most prognostic genes, and 5 genes were chosen at the vertical grey line in Figure 4D (Figures 4C,D). Finally, the model was determined by multivariate Cox regression on the selected PRGs. Among them, GSDMC, IL18, and NLRP2 are associated with an increased risk, while the other two confer a protective effect (Figure 4E). The formula of the risk score was: risk score = (GSDMC*0.2302) − (ELANE*0.4664) + (IL18*0.3341) − (NLRP1*0.4324) + (NLRP2*0.1297). Taking the median risk score as the cut-off value, we classified all TCGA patients into low- and high-risk groups. Detailed clinical information is presented in Table 1. Regardless of histologic stage, disease type, or OS, the majority of clinicopathological characteristics are evenly distributed between the two groups. An increased risk score, on the other hand, may indicate a higher histological grade and a greater likelihood of ductal and lobular origins.
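For clarity, the published signature can be applied directly as below; expr is assumed to be a patient x gene matrix of normalized expression for the five PRGs, named as in the formula.

```r
# Sketch: compute the five-gene risk score and split patients at the median.
risk_score <- function(expr) {
  0.2302 * expr[, "GSDMC"] - 0.4664 * expr[, "ELANE"] +
  0.3341 * expr[, "IL18"]  - 0.4324 * expr[, "NLRP1"] +
  0.1297 * expr[, "NLRP2"]
}
rs    <- risk_score(expr)
group <- ifelse(rs > median(rs), "high", "low")  # median cut-off, as in the text
table(group)
```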
Prognostic value of the pyroptosis-related gene signature in the TCGA and validation cohorts

To assess the prognostic efficacy of this signature, we calculated the probability of 3-year OS in the TCGA cohort (Figure 5A) and in a validation cohort, ICGC (Supplementary Figure S3A). The results indicated that the model had a high predictive capacity in both cohorts. Additionally, time-dependent ROC analysis was used to assess the sensitivity and specificity of this model. For the TCGA cohort, besides the 1-year value, both the 3-year and 5-year areas under the curve (AUC) are over 0.75 (Figure 5B), whereas the ICGC cohort's accuracy is lower, with a 1-year AUC of 0.661 and a 3-year AUC of 0.528 (Supplementary Figure S3B). However, its poorer performance in predicting longer-term survival status may be explained by the fact that only 10% of patients in the ICGC cohort survive to the third year. Following that, similar to the TCGA cohort, all 89 patients in the ICGC cohort were divided into low- and high-risk groups based on their risk score, and we observed that the low-risk group again showed better overall survival.
The pyroptosis-related gene model outperforms clinical characteristics in prognosis
Following that, we compared the predictive accuracy of our model to that of clinicopathological characteristics. Both univariate and multivariate analyses indicated that the risk score is an independent predictor; moreover, age and disease type also demonstrated independent predictive ability with p < 0.1 as the threshold value (Table 2). After combining these three variables, a nomogram model was built to evaluate its clinical utility (Figure 6A). Then, we performed decision curve and ROC analysis to compare the clinical benefit of the composite nomogram to that of the risk score or clinical characteristics alone. While the composite model performed better than the basic clinical factors in terms of prognostic accuracy, it demonstrated limited clinical net benefit compared to the risk score (Figure 6B). Additionally, the time-dependent AUCs of the risk score model were consistently greater than those of the composite model at each time point, suggesting that the risk score possessed the greatest clinical utility (Figure 6C).
Bioinformatic analysis based on the pyroptosis-related gene model
We identified 365 genes with increased expression and 1,514 genes with decreased expression in the high-risk group as compared to the low-risk group (Figures 7A,B). These DEGs were then used to conduct KEGG enrichment and GSEA analysis to further investigate the biological pathways correlated with the risk score. Interestingly, DEGs were predominantly enriched in organismal systems such as the endocrine, nervous, and circulatory systems (Figure 7C). Meanwhile, the GSEA results demonstrate that several pathways, including the calcium signaling, cAMP signaling, and cGMP-PKG signaling pathways, are downregulated in the high-risk group (Figure 7D). Apart from functional analysis, we then looked at the somatic mutation status of TCGA patients. As expected, high-risk individuals have a considerably higher somatic mutation burden, typically in the genes KRAS and TP53, which are known to be the primary drivers of PAAD (Kleeff et al., 2016) (Figure 7E; Supplementary Figure S4A). Consistently, the tumor mutation burden (TMB) was also found to be considerably greater in the high-risk group than in the low-risk group (Supplementary Figure S4B).
Given that KRAS and TP53 have been linked to other cell death processes such as apoptosis and ferroptosis, we attempted to identify the specific correlation between oncogenes and pyroptosis by comparing the expression of PRGs between KRAS- or TP53-mutated and unmutated individuals. Despite the fact that GSDMC, NOD2, and IL18 were modestly elevated while NLRP1 and NLRP6 were lowered, the majority of PRGs did not differ significantly between the mutant and non-mutant groups (data not shown). The link between pyroptosis and gene mutation is not evident based on the existing findings, and more research is needed to understand the particular interaction between the two.
Immune features underlying the pyroptosis-related gene model
We further characterized the immune environment heterogeneity by elucidating the association between risk score and immune state. The ESTIMATE web tool was first used to determine cell distribution, and it revealed that the high-risk group had significantly less stromal cell and immune cell infiltration. Meanwhile, the ICGC testing cohort presented a similar trend, though without a statistically significant difference (Figure 8A; Supplementary Figure S4C). Additionally, the compositions of specific cell types were determined through ssGSEA, showing that the infiltration of a considerable number of immune cell types was reduced in the high-risk group, including effector memory CD4+ T cells, effector memory CD8+ T cells, and type I helper cells, which are known to have anti-tumor effects. Apart from these, eosinophils, macrophages, mast cells, monocytes, myeloid-derived suppressor cells, and plasmacytoid dendritic cells were also found to be negatively associated with the risk score (Figures 8B,C).
Therapy response features underlying the pyroptosis-related gene model
We suspected that a higher risk score would be correlated with a weaker response to immunotherapy and other bioagents, given that patients in the high-risk group exhibited reduced immune cell infiltration. The TIDE analysis corroborated our hypothesis, demonstrating that individuals at low risk are more likely to respond to ICI treatment, although without statistical significance (Figure 9A). Moreover, patients in the high-risk group have a higher exclusion score but a lower dysfunction score, suggesting that immunological exclusion was the primary cause of their poor outcomes (Figure 9A). Notably, while both increased and decreased expression of ICI target genes can be observed, the link between specific ICIs and the risk score requires further investigation (Supplementary Figure S4D). Apart from that, we used oncoPredict to predict the IC50 values of FDA-approved drugs in high- and low-risk patients. Among the six most commonly used drugs, the low-risk group had considerably lower projected IC50 values for olaparib, irinotecan, and gemcitabine, implying that lower risk is associated with better outcomes from these chemotherapeutic drugs (Figure 9B). Overall, patients in the high-risk group were less sensitive to both immunotherapy and chemotherapy in general, which may have contributed to their poor prognosis.
Discussion
PAAD is usually diagnosed at an advanced stage because of the lack of identifiable symptoms, and only a minority of patients can benefit from conventional surgical treatment or cytotoxic chemotherapy (Von Hoff et al., 2013; Walter et al., 2016). As a result, PAAD is currently one of the top 10 most lethal tumors (Rahib et al., 2014). The immunosuppressive and desmoplastic milieu of PAAD is a substantial impediment to optimizing therapeutic efficacy, including difficulties in drug transport and limited responses to ICI-based immunotherapy (Li et al., 2020). Stimulating the immunogenic cell death of tumor cells is regarded as an efficient method of converting the "cold" tumor microenvironment into a "hot" one (Kroemer et al., 2013). Given that tumor cells show intrinsic resistance to apoptosis, targeting pyroptosis might be a more efficient strategy for boosting immunotherapy (Huang et al., 2018). Our study investigated the combined effects of various PRGs in PAAD and developed a prognostic model capable of reliably predicting patient survival status and response to prospective targeted therapy.
In this study, we were surprised to find that the majority of PRGs are expressed significantly differently between normal pancreatic tissue and PAAD, reflecting a fundamental change in pyroptosis activity. Gene overexpression can occur for a variety of reasons, including gene amplification, activating mutation, or epigenetic modification (Stranger et al., 2007; Daniel et al., 2011). In our case, most of these upregulations occur in part as a result of increased copy number or demethylation. Additionally, the majority of overexpressed PRGs are strongly associated with poor prognosis, indicating that they may contribute to survival-state prediction. Thus, using univariate Cox and LASSO regression to avoid overfitting, five prognostic PRGs were chosen. Following that, we generated a signature comprised of five PRGs (ELANE, GSDMC, IL18, NLRP1, and NLRP2) by multivariate Cox regression, which we named the risk score, and validated its accuracy in both the training and validation cohorts. Among these core genes, higher ELANE and NLRP1 expression suggested a favorable prognosis for the patients. Consistently, Cui et al. (2021) recently demonstrated that neutrophil-derived active neutrophil elastase (ELANE) not only kills numerous types of cancer cells while sparing proximal non-cancer cells, by liberating the CD95 death domain that interacts with histone H1 isoforms, but also inhibits metastasis via a CD8+ T cell-mediated abscopal effect. Furthermore, it has been discovered that NLRP1 downregulation promotes tumorigenesis, including in lung adenocarcinoma and colorectal cancer (Chen et al., 2015; Shen et al., 2021). On the other hand, overexpression of GSDMC, IL18, and NLRP2 was associated with a poor prognosis in patients with PAAD. Hou et al. (2020) showed that GSDMC mediates non-canonical pyroptosis upon caspase-8 activation and that high GSDMC expression correlated with poor survival. It is difficult to thoroughly elucidate the role of IL18 in cancer: a high level of IL18 in pancreatic tumor tissue was associated with a shorter survival time, increased invasion, and metastasis, whereas a high IL18 level in plasma was correlated with a longer survival time (Guo et al., 2016). By combining our signature with previous studies, we were able to confirm and illustrate the predictive usefulness of these core PRGs. Additionally, the signature revealed differences in several pathways between the two groups. Because the number of downregulated genes was much greater than the number of upregulated genes, the majority of pathways, such as GABAergic synapse and insulin secretion, were enriched among downregulated genes, and these pathways may be correlated with PAAD progression and prognosis. For example, GABA suppresses PAAD by inhibiting the β-adrenergic cascade and nicotine-induced cell proliferation (Al-Wadei et al., 2011; Al-Wadei et al., 2013; Al-Wadei et al., 2016), while cAMP has both pro- and anti-tumor effects in malignancies (Tagliaferri et al., 1988; Ligumsky et al., 2012; Almahariq et al., 2015). To our surprise, the calcium signaling pathway and the neuroactive ligand-receptor interaction pathway, both of which are associated with a poor prognosis (Bettaieb et al., 2021; Qian et al., 2021), were downregulated in the high-risk group. However, the link between pyroptosis and these pathways is currently unknown and needs further investigation. The pro- or anti-tumor effects of pyroptosis are partly determined by the surrounding microenvironment (Hou et al., 2021). Several investigators reported that the pyroptosis of tumor cells can induce an inflammatory response in the microenvironment and attract CD4+ and CD8+ T cell populations. In our case, although multiple PRGs are robustly overexpressed within PAAD, the pancreatic tumor microenvironment evidently exhibits an immunosuppressive condition (Zhu et al., 2014; Jiang et al., 2016; Kumar et al., 2022). One possible explanation is that, unlike acute pyroptosis induction, chronic induction of pyroptosis in some tumors can result in chronic inflammation, which leads to a tumor-promoting microenvironment (Tsuchiya, 2021). Besides, extracellular ATP released from pyroptotic cells can be rapidly broken down into adenosine, an immunosuppressive substance, so the gradual release of modest amounts of ATP from pyroptotic tumor cells may impair antitumor immunity (Vultaggio-Poma et al., 2020; Tsuchiya, 2021). Apart from that, pyroptosis occurring in the central region of the tumor could result in chronic tumor necrosis, which suppresses anti-tumor immunity and accelerates tumor progression (Hou et al., 2020). In our model, patients with lower risk scores were infiltrated with more immune cells, including several anti-tumor immune cell types. Therefore, if therapy-induced pyroptosis is expected to improve the pancreatic tumor microenvironment, it may be important to determine the appropriate extent of pyroptosis induction, which should be neither too strong nor too weak (Tsuchiya, 2021).

[Figure 9. Therapy response features underlying the PRGs model. (A) Comparison of TIDE score, T-cell dysfunction ("Dysfunction") score, and T-cell exclusion ("Exclusion") scores between the high- and low-risk groups of the TCGA cohort. (B) Predicted IC50 for olaparib, irinotecan, gemcitabine, fluorouracil, erlotinib, and paclitaxel for low-risk and high-risk groups. Data shown are means ± SD. Symbols represent individual patients. Significance was determined using the Mann-Whitney or unpaired t-test. *p < 0.05, **p < 0.005, ***p < 0.0005, and ****p < 0.00005.]
Apart from the immune cell landscape, this signature also showed a significant correlation with somatic mutation status and therapeutic response. Patients with higher risk scores carried a greater mutation burden, with more mutations in KRAS, TP53, ADAMTS12, SMAD4, FAT4, DCHS1, and CDKN2A. Among these genes, KRAS, CDKN2A, TP53, and SMAD4 are the four major genes involved in the progression of PAAD (Kleeff et al., 2016). However, it is unclear whether these oncogenes are involved in pyroptosis. Moreover, TIDE analysis revealed that PAAD patients with lower risk scores had a higher likelihood of achieving durable benefit from immunotherapy. PAAD is also characterized by a remarkable tolerance to chemotherapy (Kleeff et al., 2016). Thus, to test the PRG signature's predictive utility in clinical practice, we next predicted the sensitivity to FDA-approved PAAD chemotherapeutic drugs based on gene expression profiles. Similar to immunotherapy, a low risk score was associated with a better response to olaparib, irinotecan, and gemcitabine. In general, our findings demonstrated that patients with low risk scores were more likely to have a reduced mutation burden and to benefit from both immunotherapy and chemotherapy.
In this study, we created a valuable PRG signature and thoroughly explored its correlations with prognosis, immune infiltration, somatic gene mutation, and treatment response. Our model performs well in predicting patient prognosis and treatment response. Moreover, we laid the groundwork for a more complete understanding of pyroptosis's role in PAAD. However, our work is still at an early stage, and the limitations of this study are clear: further clinical trials need to be conducted to fully verify the accuracy of this model, the true involvement of pyroptosis in cancer remains a mystery, and additional research is required.
Data availability statement
The original contributions presented in the study are included in the article/Supplementary Material, further inquiries can be directed to the corresponding authors.
Author contributions
Study concept and design: ST, YS, and XW. Data analysis and interpretation: ST, YS, and LT. Manuscript writing: ST and YS. Final approval of manuscript: All authors.
Synthesis and performance evaluation of adsorbents derived from sewage sludge blended with waste coal for nitrate and methyl red removal
Low-cost adsorbents were synthesized using two types of sewage sludge: D, which was obtained during the dissolved air flotation stage, and S, which was a mixture of primary and secondary sludge from the digestion and dewatering stages. The sewage sludge was mixed with waste coal before being activated with potassium hydroxide (KOH) and oxidized with ammonium persulfate (APS). The nitrate and methyl red removal capacities of the synthesized adsorbents were evaluated and compared to those of industrial activated charcoal. The surface area of adsorbents derived from sludge S shrank sixfold after modification, i.e., from 281.72 m2/g (unoxidized) to 46.573 m2/g for the adsorbent oxidized with a 2M ammonium peroxydisulfate solution, while that of adsorbents derived from D varied only narrowly, from 312.72 to 282.22 m2/g; surface modification had no effect on inorganic composition in either case. The adsorption of nitrate and methyl red (MR) was performed in batch mode, and the removal processes followed the pseudo-second-order kinetic model and the Langmuir isotherm fairly well. The adsorption capacities of nitrate and MR were higher at pH = 2 and pH = 4, respectively.
A waste coal sample was collected (using the ply sampling method) from a coal beneficiation facility located 15 kilometers south-west of Witbank in the South African province of Mpumalanga. The coal sample was dried in the laboratory for 48 h and then blended and divided using a rotary splitter (Eriez) before being placed in an airtight bag. A portion of the coal sample was pulverized to a particle size of 0.150-0.250 mm for further physicochemical tests, while another portion was crushed to a particle size of −1 mm (80%) for petrographic examination. The physicochemical analyses of the waste coal sample utilized in this study are summarized in Table 2 below. Sewage sludge rejected from wastewater treatment facilities can be designated as primary or secondary sewage sludge based on its origin. The first type is the underflow from settling vessels following inlet physical treatment (sedimentation, flocculation, filtration, etc.) and is predominantly inorganic; the second, also known as biological sludge, is generated from the biological (activated sludge) process. In this paper we employ two types of sewage sludge collected at the East Rand Water Care Company (ERWAT) wastewater treatment plant: D, which was acquired during the dissolved air flotation stage, and S, which was a combination of primary and secondary sludge from the digestion and dewatering stages, respectively.
Methods. Raw sewage sludges were sun-dried for 24 h before being oven-dried. The following nomenclature is used throughout this paper to refer to all synthesized adsorbents: the letter C denotes the combination of sludge D or S with discard coal C, i.e. DC or SC, followed by the reagent concentration and finally the pyrolysis temperature. DC-5-750, for example, refers to sewage sludge D combined with waste coal C, a reagent concentration of 5M KOH, and a pyrolysis temperature of 750 °C, as stated in Table 2. The capital letter M denotes that the adsorbent has been modified, while the numbers 1 or 2 indicate the oxidation conditions using 1M or 2M ammonium peroxydisulfate (APS), respectively. SBAC (Sludge-Based Activated Carbon) refers to all activated carbons synthesized in this study using sewage sludge alone or in conjunction with waste coal. Dried and crushed precursors (sewage sludge with/without waste coal in proportions of 0% and 50%) were impregnated with KOH solution at concentrations of 3, 5, and 7 mol/l, with a ratio of activating reagent to sludge of 1.5:1. After impregnation, the mixture was dried for 24 h at 105 °C. The dried impregnated precursors were crushed to 100% passing through a 150 µm sieve, then placed in a crucible and pyrolyzed in an inert atmosphere for 90 min in a muffle furnace connected to a nitrogen bottle at a flow rate of 50 ml/min. The furnace regulator was turned off when the dwell time was reached, and the pyrolyzed sewage sludge was cooled to room temperature and then washed. The washing stage is divided into two steps: acidic and water. The former was performed using a 3 M HCl solution to remove excess activating reagent and soluble ash, whilst the latter was performed using distilled water until the filtrate reached a neutral pH value of between 6 and 7. The washed SBAC was dried for 24 h at 105 °C, then crushed and sieved to a mesh size of 125 μm before being kept in an airtight container in a desiccator for further testing. The dried sludge was crushed to 100% passing through 125 µm sieves and pyrolyzed at different temperatures (DC-5-750, DC-7-900, SC-3-600, and SC-5-900). The products from the pyrolysis were mixed with discard coal in a proportion of 1:1 and activated with KOH under an N2 atmosphere at a flow rate of 5 l/min. To achieve surface modification via oxidation, 1 g of pyrolyzed sewage sludge was incubated in 25 ml of 1M or 2M APS dissolved in 1M H2SO4 at 60 °C for 240 min before being washed with distilled water until a neutral pH value was reached. Batch adsorption experiments were carried out with an adsorbent dose of 0.5% (5 mg of adsorbent in 10 ml of solution), and the performance of the adsorbents was compared to commercial activated carbon. The best oxidized and/or unoxidized SBACs were chosen based on preliminary evaluation test results conducted at 303 K in an incubator shaker. The initial concentration of nitrate was 50 mg/l, with contact times of 180 and 360 min, while the initial concentration of MR was 75 ppm, with contact times of 90 and 180 min; the initial pH was 2 and 4 for nitrate and methyl red, respectively. The initial pH value of 2 in the case of nitrate was chosen because adsorbent surfaces are prone to being positively charged as pH decreases, and nitrate ions then do not have to compete with hydroxyl anions 25.
Chemical analysis. SEM and EDS analyses were done using a ZEISS SIGMA FESEM 03-39 apparatus on samples coated on a carbon tape support using the gold-palladium technique. TGA was performed with Pioneer (SDT-Q500) equipment. Proximate analysis of the sewage sludge was performed using a Perkin-Elmer STA 6000 simultaneous thermal analyser. FT-IR analyses were done using a Perkin-Elmer FT-IR Spectrometer Spectrum 2. Ultimate analysis was conducted with a Thermo Scientific Flash 2000. The XRD study was done using a Bruker 2D Phaser instrument. The NH4OAc method 27 was used to determine cation exchange capacity. The Boehm titration method was used to determine the acidity of SBAC 28. The pH point of zero charge was determined using the procedure described by Leng et al. 29: 50 mg of adsorbent were placed in vials with 40 ml of 0.01 M NaCl solution and shaken for 48 h, the initial solution pH ranging from 2 to 12 in increments of 2. Surface functionalities and the graphitization or carbon disorder structure of the adsorbents were assessed using FT-IR analysis and Raman spectroscopy, respectively. The concentrations of nitrate and MR were determined using an IC Dionex-120 and a UV-vis Shimadzu 1800, the latter at a wavelength of 480 nm, as per Ding et al. 30.
Results and discussion
Preparation and characteristics of precursors. The precursors were sun-dried, and the moisture content of each sample was determined from the mass loss after oven drying at 105 °C for 24 h, as shown in Table 3. The elemental analysis (Table 4) reveals only a minor difference between the two samples, indicating that digestion has little effect on the organic content of sewage sludge, despite the slightly different volatile solid contents. While the surface area of sludge D was approximately 1.5 m2/g, the surface area of sludge S could not be determined, possibly due to a lack of porosity in the precursor material.
In Figure 1, the TGA curves of the dried sludges show a slight mass loss between 30 and 190 °C, roughly 5% for both sludges, possibly due to adsorbed water. Most of the mass loss occurred between 200 and 600 °C, with the mass losses of D and S increasing from about 5% at 200 °C to about two-thirds (64%) and half (46%) at 600 °C, respectively. The mass loss from 600 to 1000 °C is less pronounced for both sludges, at 17% and 13% for D and S, respectively. As a result, 600 °C was chosen as the minimum temperature for pyrolysis. Based on the TGA curves, S appears to be slightly more thermally stable than D.
Both samples D and S show heterogeneous morphology and composition, as revealed by SEM analysis in Fig. 2a-d and EDS analysis in Figs. 3a,b and 4a,b, respectively. The sponge-like morphology shown in Fig. 2b may be attributed to an organic water-repellent compound that adhered to the bubbles during the dissolved air flotation process. Figures 3a,b and 4a,b reveal that the main elements in the sewage sludge samples were carbon and oxygen, with silica and aluminium at trace concentrations. In comparison to sludge D, energy dispersive X-ray (EDX) analysis of sample S indicated a high inorganic content, most likely contributed by domestic wastewater particles recovered during decantation or solid-liquid separation. This can be explained by the fact that excess air is released during the dissolved air flotation (DAF) process in the form of microbubbles that adhere to the dispersed phase, causing the particles to float and resulting in the recovery of aerophobic compounds that are predominantly organic, whereas digestion reduces the amount of organic compounds, most likely due to the production of biogas. Figure 5 shows the FT-IR spectra of the raw sewage sludges with wavenumbers ranging from 450 to 4000 cm−1. Reading the peak wavelengths from left to right, the peak observed at 3688-3619 cm−1 is attributed to OH-kaolinite and gibbsite lattice stretching 31,32, 2988-2901 cm−1 to -C-H group vibration 7,33, and 1631 cm−1 and 1538 cm−1 to sulphur and nitrogen functional groups, respectively 33. The shoulder peaks at 1050-1090 cm−1 were attributed to Si-C or Si-O-Si bands (Liang et al. 15) and C-O-C vibration 33; finally, below 1000 cm−1, the peaks at 749 cm−1, 535 cm−1, and 467 cm−1 were attributed to silica or calcium carbonate stretching 32,34. Discard coal shows lower transmittance at its 1498 cm−1 peak than the sludge spectra at 1007-1030 cm−1, implying that waste coal contains more mineral elements, whereas sludge S has lower transmittance than D, implying that the former contains more functional groups.
Characteristics of synthesized activated carbons. Figure 6a shows the microscopic examination of the sorbent SC-3-600 after activation. The pyrolysis and washing stages may have facilitated the formation of well-developed cavities owing to the substantial depletion of inorganic and organic components 35. Furthermore, some particles (Fig. 6a,b) lack cavities or have insignificant ones, which may be attributable to a lack of volatile and decomposed matter escaping to create porosity 9. The EDS qualitative analysis (Fig. 6c) of the local particle (Fig. 6b) revealed inorganic characteristics as well as the presence of K and Cl from activation and washing, respectively. The FT-IR spectra of the various SBACs (as per Table 4) exhibited almost identical shapes and peaks with different intensities regardless of the parameters involved in the synthesis process, as shown in Figs. 7 and 8. The spectra share main peaks at 3676 cm−1, 2901-2998 cm−1, 1394 cm−1, 1225 cm−1, and 892 cm−1 in all SBAC samples.
In comparison to the feedstock spectra, the disappearance of the peaks at 470 cm−1 and 500 cm−1 in SBAC could be due to inorganic matter solubilization during the acid washing phase [36-38], the disappearance of the 798 cm−1 peak to dehydrogenation reactions 24, and that of the 1631 cm−1 and 1538 cm−1 peaks to thermal degradation of protein-related nitrogen compounds or sulphur 33.
The peaks associated with C-H group stretching not only shifted slightly from 2859-2922 cm−1 in the feedstock to higher values (2901-2988 cm−1) in SBAC, which can be related to the presence of saturated groups 11, but their transmittance also increased after activation for all samples, in contrast to some literature 7,33,39 in which the disappearance of these peaks was attributed to the decomposition of fatty organic matter and dehydration 7. Organometallic formation may be a possible explanation for the increased transmittance of SBAC pyrolyzed at lower temperatures, followed by depletion as the temperature rises; for example, the abundance of functional groups (hydroxyl, carboxyl) on the surface of biochar synthesized from sewage sludge pyrolyzed at 300 °C reduced extractable cations due to the formation of organometallic compounds 40,41. Based on these observations, the functional groups present in SBAC can be summarized as O-H, C=C, C=O, aliphatic C-H, Si-C, Si-O-Si, phosphate, and carbonate. Com-AC has broad peaks (2θ = 25.3° and 2θ = 44.6°) linked to its amorphous phases, while the XRD patterns (Fig. 9a,b) demonstrated the transformation of mineral phases from broad peaks in the precursors to sharp peaks in the manufactured adsorbent, which confirms a transition from amorphous to crystalline phases due to pyrolysis. XRD verified the existence of minerals such as wustite, quartz, illite, and feldspars in SBAC. It is worth noting that the presence of alkaline earth elements in the minerals (feldspars and illite) led to magnesium, calcium, and iron being classified as exchangeable cations. After oxidation, new bands appeared (Fig. 10), which can be linked to symmetric COO- and aromatic C=O stretching involved in the formation of carboxylic functional groups, or to the C=C bond of the aromatic skeleton ring of the adsorbent 13. Also noticeable is the reduction of the -CH (2901-2988 cm−1) and -OH (3640 cm−1) stretching bands, which may be ascribed to the solubility/affinity of these substances with H2SO4. On the other hand, Com-AC exhibited no such functional groups. The Raman spectra are shown in Fig. 11. The peaks located at 1587 cm−1 (G band) and 1357 cm−1 (D band) are associated with sp2-bonded carbon and disordered carbon structure, respectively, and their intensities IG and ID reveal the adsorbent's degree of graphitization and carbon disorder, with the ID/IG ratio indicating the prevalence of disordered carbon structure over graphitization and vice versa 13.

[Figure 6. SEM microscopic analysis of SBAC SC-3-600: (a) multiple particles, (b) focus on a particle without cavities after activation, and (c) qualitative analysis of the targeted particle.]
As shown in Table 5, the graphitic structure is more prevalent than the disordered carbon structure and increased with oxidation; partial graphitization of activated carbon has been reported for acidic oxidation treatment with HNO3 42. The surfaces of the oxidized adsorbents (Fig. 12b) exhibited fewer irregularities and smoother surfaces than the unoxidized adsorbent (Fig. 12a) and Com-AC (Fig. 12c), probably as a result of corrosive H2SO4-adsorbent surface interaction 18 and also the disintegration of pore structure situated at the carbon edges 17. The XRD spectra of the adsorbents after oxidation treatment are represented in Fig. 13: the unoxidized sorbent exhibited a broader peak (2θ = 25.3°) associated with amorphous or graphitic structure, as supported by the Raman spectra, while the increase in the peak at 2θ = 30° might be a consequence of the solubilization/decline of other minerals (feldspar and illite).
The oxidation with APS did not significantly impact the carbon composition according to the ultimate analysis results, since the biggest difference was 6.771% (from 23.612% for DC-5-750 to 30.383% for DC-5-750M2), while for the other sorbents (SC-3-600, SC-5-900, DC-7-900) the variation was less than 2%; a similar tendency was reported by Ang et al. 22. Furthermore, the textural properties in Table 6 receded severely after modification in the case of adsorbents derived from sludge S: for instance, prior to oxidation SC-3-600 had a surface area of 281.72 m2/g, which shrank to 68.22 m2/g and 46.673 m2/g when treated with solutions of 1M and 2M APS, respectively, probably due to the thinness of the pore walls, which are prone to collapse 43, and/or micropore occlusion 44. In the case of DC-5-750, the surface area varied only slightly (by 7.28-30.57 m2/g), and the micropore area changed from 124.14 m2/g to about 157.36 m2/g and 176.11 m2/g after modification with 1M and 2M APS, respectively. After modification, the surface area of DC-7-900 (247.57 m2/g) increased, probably as a result of micropore and mesopore collapse arising from etching of the activated carbon surface by the reagents; it was 285.59 m2/g for DC-7-900M1 and 295.31 m2/g for DC-7-900M2.
Adsorption kinetics.
To better understand the kinetics order and intraparticle diffusion, the effect of time was investigated using the following relationships:

$$q_e = \frac{(C_0 - C_e)\,V}{m} \quad (1)$$

$$\text{Removal}\,(\%) = \frac{C_0 - C_e}{C_0} \times 100 \quad (2)$$

where $q_e$ (mg/g), $C_0$ (mg/l), $C_e$ (mg/l), $m$ (mg), and $V$ (ml) represent the adsorption capacity, initial concentration, equilibrium concentration, adsorbent mass, and volume of liquid in contact with the adsorbent, respectively. The removal (%) is calculated via Eq. (2). The first- and second-order kinetic models, as well as intraparticle diffusion, are commonly used to understand the adsorption mechanism of pollutants on activated carbon 45.
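As a quick numerical illustration of Eqs. (1)-(2), the R sketch below uses the batch conditions stated in the methods (5 mg of adsorbent in 10 ml of solution); the equilibrium concentration of 42 mg/l is a made-up example value, not a measured result.

```r
# Sketch: adsorption capacity (mg/g) and percentage removal from Eqs. (1)-(2).
q_e <- function(C0, Ce, V_ml, m_mg) (C0 - Ce) * (V_ml / 1000) / (m_mg / 1000)
removal <- function(C0, Ce) 100 * (C0 - Ce) / C0

q_e(C0 = 50, Ce = 42, V_ml = 10, m_mg = 5)  # 16 mg/g for this example
removal(C0 = 50, Ce = 42)                   # 16 % removed
```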
Adsorption kinetics can be expressed under the hypothesis that adsorbate removal obeys first-order kinetics:

$$\frac{dq_t}{dt} = k_1 (q_e - q_t) \quad (3)$$

where $q_t$ and $q_e$ are the amounts of pollutant adsorbed per mass of adsorbent (mg/g) at the targeted time and at equilibrium, respectively, and $k_1$ is the rate constant (min−1). After integration with the condition that $q_t = 0$ at $t = 0$, Eq. (3) can be written:

$$\ln(q_e - q_t) = \ln q_e - k_1 t \quad (4)$$

or alternatively

$$\ln\!\left(1 - \frac{q_t}{q_e}\right) = -k_1 t \quad (5)$$

The adsorption capacity at equilibrium ($q_e$) and the first-order sorption rate constant ($k_1$) can be evaluated from the slope and the intercept, respectively, of a plot of $\ln(1 - q_t/q_e)$ vs $t$.
The pseudo-second-order kinetics is defined by the equation:

$$\frac{dq_t}{dt} = k_2 (q_e - q_t)^2 \quad (6)$$

where $k_2$ is the second-order rate constant. Integration of Eq. (6) with the initial condition $q_t = 0$ at $t = 0$ leads to:

$$\frac{t}{q_t} = \frac{1}{k_2 q_e^2} + \frac{t}{q_e} \quad (7)$$

$k_2$ and $q_e$ can be deduced from the slope and the intercept of the plot of $t/q_t$ vs $t$, where $q_t$ is the adsorption capacity at a specific time. The intraparticle diffusion model is a convenient means of depicting the diffusion mechanism and examining whether intraparticle diffusion is the rate-limiting step in the adsorption process. The intraparticle diffusion model is represented by:

$$q_t = K_{int}\, t^{1/2} + C \quad (8)$$

$K_{int}$ and $C$ are determined from the slope and intercept of $q_t$ vs $t^{1/2}$, where $K_{int}$ is the intraparticle diffusion rate constant (mg g−1 min−1/2) and $C$ is the boundary layer effect intercept; the larger $C$, the greater the contribution of surface sorption to the rate-controlling step.
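The linearized fits of Eqs. (5), (7), and (8) can be reproduced with ordinary least squares, as in the R sketch below; the uptake data are invented for illustration, and the experimental qe is taken as the plateau value (slightly inflated to keep the PFO logarithm finite).

```r
# Sketch: linearized PFO, PSO, and intraparticle-diffusion fits (t in min).
t  <- c(15, 30, 60, 120, 180, 360)
qt <- c(2.1, 5.0, 9.8, 15.1, 16.0, 16.3)   # hypothetical uptake, mg/g
qe_exp <- max(qt) * 1.001                  # avoid log(0) at the plateau

pfo <- lm(log(1 - qt / qe_exp) ~ t)        # Eq. (5): slope = -k1
pso <- lm(I(t / qt) ~ t)                   # Eq. (7): slope = 1/qe
ipd <- lm(qt ~ I(sqrt(t)))                 # Eq. (8): slope = K_int, intercept = C

k1     <- -coef(pfo)["t"]
qe_pso <- 1 / coef(pso)["t"]
k2     <- coef(pso)["t"]^2 / coef(pso)["(Intercept)"]  # intercept = 1/(k2*qe^2)
sapply(list(PFO = pfo, PSO = pso, IPD = ipd),
       function(m) summary(m)$r.squared)   # compare goodness of fit
```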
The "preferred" oxidized and unoxidized SBAC were chosen based on removal evaluation tests performed at a constant temperature of 303 K for 180 and 360 min with an initial concentration of 50 mg/l of nitrate, 90 min and 180 min for MR with 75 ppm as an initial concentration and an initial pH of 2 for nitrate and 4 for methyl red. The initial pH of 2 in the case of nitrate was chosen based on the hypothesis that adsorbent surfaces become positively charged as pH decreases, preventing competition with hydroxyl anions and pH 4 in the case of methyl red to avoid competition with H + cations while attempting to keep the surface deprotonated (below the pH pzc for the oxidized adsorbent) as MR pKa = 5.1. Initially, unoxidized SBAC outperformed oxidized SBAC as shown in Fig. 14, probably due to the former's proclivity for having a highly deprotonated surface at lower pH. The adsorption capacity decreased as time progressed, with the exception of Com-AC, where it changed slightly from 16.32 to 13.2 mg/g for contact times of 180 and 360 min, respectively. Furthermore, SBAC (DC-5-750M1 t q e (8) q t = K int t 1/2 + C Table 6. Structural properties and ultimate analysis results of SBAC after and before oxidation.
Modification conditions
S BET (m 2 /g) S meso (m 2 /g) S micro (m 2 /g) V tot (cm 3 /g) V micro (cm 3 /g) www.nature.com/scientificreports/ and SC-5-900M2) released a significant amount of nitrate ion in solution after 360 min of agitation, while this amount was lower when the process was carried out with other adsorbents. This situation may be related to the release of compound contained in the adsorbent ash in the liquid phase and/or the adsorption equilibrium phenomenon, in which optimum equilibrium contact was reached. DC-5-750 and SC-3-600, as well as the oxidized sorbents DC-5-750M2 and SC-3-600M2, were chosen for further nitrate adsorption experiments. The adsorbent had mesopores, making it ideal for removing medium-sized substances from liquid process 1 . Despite having a larger surface area, SC-5-900 (422.09 m 2 /g) and SC-5-900M2 (313.05 m 2 /g) had a lower adsorption capacity than other SBAC. This may be attributed to lower carbon content (Table 1) or the disappearance of acidic functional groups as temperatures rose 36 , and this result confirmed the significance of functional group presence. Based on the test results in Fig. 6, DC-5-750 M1 and DC-7-900 M1, with adsorption capacities of 127.634 mg/g and 124.376 mg/g, respectively, were chosen for additional experiments in relation to Com-AC (121.10 mg/g) performances, as well as unoxidized SBAC (DC-5-750 and DC-7-900). Figure 7a depicts the nitrate removal pattern over time. Prior to 30 min of contact time, all adsorbents had negligible adsorption power, most likely due to film diffusion resistance (external diffusion) 46 , but it increased significantly up to 120 min for Com-AC, DC-5-750, and DC-5-750M1 at 15.132 mg/g, 17.46 mg/g, and 10.72 mg/g, respectively.
Pore size (nm) C (%) H (%)
As illustrated in Fig. 15, adsorption rises with time except for Com-AC, where the change was insignificant (from 123. 8 mg/g at 90 min to 121. 2 mg/g at 180 min), owing to the fact that equilibrium had already been established. Despite having the highest surface area, SC-5-900 and SC-5-900M2 had a lower q e than other sorbents, most likely due to a lower carbon content (Table 6) or a lack of acidic functional groups due to their depletion during temperature augmentation 14 . This finding emphasizes the critical nature of functional group presence.
As depicted in Figure 15, the synthesised sorbents with the highest adsorption potential were DC-5-750 M1 (127.6 mg/g) and DC-7-900 M1 (124.4 mg/g), and they were therefore chosen for future experiments, along with the unmodified sorbents (DC-5-750 and DC-7-900).The upward tendency may be associated to pore diffusion and surface reaction which are deemed to present less component resistance than external diffusion 46 , while stagnant trend after 120 min may be ascribed to occupation of available adsorption site by nitrate ions as the process progress 25,47,48 , which weaken the interaction between sorbate and adsorbent surface 48 .
Preliminary tests shown in Fig. 6 revealed that the adsorption capacity increased significantly with time, with only a small fluctuation for Com-AC (from 123.8 mg/g at 90 min to 121.172 mg/g at 180 min), implying that equilibrium had already been reached.

Although SC-5-900 (422.1 m²/g) and SC-5-900M2 (313.1 m²/g) had higher surface areas, their adsorption capacities were lower than those of the other SBAC. This could be due to lower carbon content (Table 5) and/or the depletion of acidic functional groups, which vanished as temperature rose 28. This observation further corroborates the importance of functional group presence. Based on the results in Fig. 6, DC-5-750M1 and DC-7-900M1, with adsorption capacities of 127.6 mg/g and 124.4 mg/g, respectively, were chosen for additional investigations, in comparison with the performance of Com-AC (121.1 mg/g). Figure 16a depicts the nitrate removal trend as a function of time. Initially, all adsorption capacities were negligible before 30 min of contact time, possibly due to film diffusion resistance (external diffusion), and thereafter rose significantly for Com-AC (15.1 mg/g), DC-5-750 (17.5 mg/g), and DC-5-750M1 (10.72 mg/g). Figure 16b depicts the change in dye adsorption capacity with contact time. The adsorption capacity of Com-AC increased slightly from 109.63 mg/g at 30 min to 123.868 mg/g at 90 min and 121.107 mg/g at 180 min, while those of DC-5-750 and DC-5-750M1 increased from 83.08 mg/g and 113 mg/g at 30 min to 113.126 mg/g and 123.8 mg/g at 90 min, and to 114.36 mg/g and 127 mg/g at 180 min.
The rapid increase in adsorption capability at the start of the process could be attributed to the abundance of adsorption sites 26,49 .
Nitrate pseudo-first-order (PFO) and pseudo-second-order (PSO) plots are shown in Fig. 17a, b, and the corresponding parameters are recorded in Table 7. From the results in Fig. 17a, b, Com-AC fitted the PFO model (R² = 0.9693) better than the PSO model (R² = 0.8569), while the other sorbents fitted the PSO kinetic model better, albeit with lower correlation coefficients (R² = 0.7859 and 0.793 for SC-3-600 and SC-3-600M1, respectively). Figure 18 depicts the intraparticle diffusion model line plots for nitrate adsorption, which did not pass through the axis origin, indicating that the nitrate removal mechanism was not solely controlled by intraparticle diffusion. Similar observations for nitrate removal have been recorded by others 25,50.
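As a minimal sketch of how these linearized fits can be reproduced, the snippet below fits the PSO model and the intraparticle diffusion model (Eq. 8) by linear regression. The time/uptake values are synthetic placeholders, not data from this study.

```python
import numpy as np

# Illustrative (synthetic) kinetic data: contact time in min, uptake q_t in mg/g.
t  = np.array([15, 30, 60, 90, 120, 180], dtype=float)
qt = np.array([4.1, 7.9, 12.6, 15.0, 16.4, 17.3])

# Pseudo-second-order, linearized: t/q_t = 1/(k2*qe^2) + t/qe.
# A straight-line fit of t/q_t against t gives slope = 1/qe, intercept = 1/(k2*qe^2).
slope, intercept = np.polyfit(t, t / qt, 1)
qe_pso = 1.0 / slope
k2 = 1.0 / (intercept * qe_pso**2)

# Intraparticle diffusion (Eq. 8): q_t = K_int * t^(1/2) + C.
# A nonzero intercept C means intraparticle diffusion is not the sole rate-controlling step.
k_int, c = np.polyfit(np.sqrt(t), qt, 1)

# R^2 of the PSO linear fit, used to compare models as done in the paper.
pred = slope * t + intercept
ss_res = np.sum((t / qt - pred) ** 2)
ss_tot = np.sum((t / qt - np.mean(t / qt)) ** 2)
r2 = 1 - ss_res / ss_tot

print(f"PSO: qe = {qe_pso:.2f} mg/g, k2 = {k2:.4f} g/(mg*min), R^2 = {r2:.4f}")
print(f"Intraparticle: K_int = {k_int:.3f} mg/(g*min^0.5), C = {c:.2f} mg/g")
```

A nonzero fitted intercept C, as in Fig. 18, is exactly the "plot does not pass through the origin" criterion invoked in the text.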
Nitrate PFO and PSO parameters are shown in Table 7. Com-AC fitted the PFO model (R² = 0.9693) better than the PSO model (R² = 0.8569), whereas the other sorbents fitted the PSO kinetic model better, albeit with lower correlation coefficients (R² = 0.7859 and 0.793 for SC-3-600 and SC-3-600M1, respectively). The MR PFO and PSO kinetic model plots are depicted in Fig. 19a, b. The MR adsorption process could not be described by the PFO kinetic model because of insufficient linearity of the fit, an artifact of the equation's form, in which q_e (the adsorption capacity) is simultaneously being fitted to the data and determined by it. The MR PFO, PSO, and intraparticle model parameter values are presented in Table 8.

pH dependence. The effect of pH was measured by varying the solution pH from 2 to 10, as shown in Fig. 20. The process was pH dependent, as evidenced by a decrease in adsorption with increasing pH: for DC-5-750 (pH_pzc = 6.6), from 20.56 mg/g at pH 2 to 6.24 mg/g at pH 6 and 4.2 mg/g at pH 10; for Com-AC (pH_pzc = 10.3) and DC-5-750M2 (pH_pzc = 3.1), the adsorption potential was 16.32 mg/g and 12.24 mg/g at pH 2, 11.12 mg/g and 9.6 mg/g at pH 6, and 6 mg/g and 1.8 mg/g at pH 10, respectively. This pattern was most likely caused by (1) favorable conditions for nitrate removal, accentuated by electrostatic attraction as the adsorbent surface bears a positive charge at lower pH, and (2) competition between nitrate ions and hydroxyl ions in basic solution, as also reported in other works using carbon-based adsorbents 47,48,51.
In addition to the above-mentioned justifications, it may be further hypothesized that the underperformance of oxidized SBAC is due to the introduction of acidic surface functional groups: on deprotonation, when pH > pH_pzc, more binding sites for cationic sorbates are created on the surface than on an unoxidized adsorbent 44,52.
Taking into account that MR is negatively charged if pH > pKa and positively charged if pH < pKa 26, the introduction of acidic functional groups caused a shift in pH_pzc from near-neutral (6.6 for DC-5-750) to acidic after oxidation (3.1 for DC-5-750M1), while Com-AC was basic (pH_pzc = 10.2); surface functional groups deprotonate when pH_pzc < pH and the adsorbent surface becomes negatively charged 53. In contrast to the SBAC, Com-AC (pH_pzc = 10.2) has a wider pH range over which the surface's net charge is positive. Adsorption of MR with the modified SBAC, which had a lower pH_pzc than the unoxidized materials, was more pH dependent because of electrostatic attraction between the negatively charged adsorbent and positively charged MR below pH 4 26. At pH 4, the adsorption potential of DC-5-750M1 and DC-7-900M1 was 127.634 mg/g and 117.176 mg/g, respectively; from pH 6 through pH 8 and pH 10, the adsorption capacity decreased, falling from 97.488 mg/g for DC-5-750M1 and 109.278 mg/g for DC-7-900M1 at pH 6 to 81.854 mg/g and 71.028 mg/g, respectively.

Figure 19. MR PFO (a) and PSO (b) kinetic models.

The results show that, for the oxidized adsorbent at pH 4, the adsorption mechanism was related to electrostatic attraction together with hydrophobic interaction and/or hydrogen bonding 54. Similarly, the adsorption potential of unoxidized SBAC (DC-5-750 and DC-7-900) decreased from 123.868 mg/g and 111.76 mg/g at pH 2 to 81.854 mg/g and 90.32 mg/g at pH 10. The poorer performance in basic solution could be attributed to rivalry for adsorption sites between hydroxyl ions and negatively charged MR ions 49. Similarly, in acidic solution at pH 2, the rivalry may have been between H+ and positively charged MR 55 and/or a repulsive force between the protonated adsorbent surface and MR 26.
However, solution pH variation only slightly affected the MR adsorption capacity of Com-AC, from 101.894 mg/g at pH 2 to 113.04 mg/g at pH 10, possibly because the electrostatic attraction mechanism was not very pronounced in this process: below pH 5.1 the dye was positively charged and the protonated adsorbent also had a positive net charge. Similar results were recorded for adsorption of a cationic dye (methyl blue) 56.
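Because the pH_pzc/pKa reasoning above recurs for every adsorbent–sorbate pair, a small helper that classifies the expected electrostatic interaction can make the rule concrete. This is a simplified sketch of the qualitative argument used in the discussion (surface positive below pH_pzc and negative above; MR cationic below its pKa of 5.1 and anionic above), not a quantitative model, and the function names are my own.

```python
def surface_charge(ph: float, ph_pzc: float) -> str:
    """Net surface charge sign from the point-of-zero-charge rule."""
    if ph < ph_pzc:
        return "positive"   # protonated surface
    if ph > ph_pzc:
        return "negative"   # deprotonated surface
    return "neutral"

def mr_charge(ph: float, pka: float = 5.1) -> str:
    """Methyl red charge: cationic below pKa, anionic above."""
    return "positive" if ph < pka else "negative"

def electrostatic_interaction(ph: float, ph_pzc: float) -> str:
    s, d = surface_charge(ph, ph_pzc), mr_charge(ph)
    if "neutral" in (s, d):
        return "weak/none"
    return "attraction" if s != d else "repulsion"

# Example: oxidized DC-5-750M1 (pH_pzc = 3.1) vs. MR at the working pH of 4:
# surface negative, dye positive -> attraction, matching the observed optimum.
print(electrostatic_interaction(ph=4, ph_pzc=3.1))   # attraction
print(electrostatic_interaction(ph=10, ph_pzc=3.1))  # repulsion (both negative)
```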
Initial concentration. The adsorption capacity increased as the initial concentration increased because the driving force of the concentration gradient prevailed and tended to overcome the mass transfer resistance barrier at the solid–liquid interface. Conversely, the proportion of nitrate extracted decreased owing to adsorbent site saturation, so a fraction of the sorbate remained in solution 14. Likewise, as the initial pollutant concentration increased, the adsorbent's dye adsorption capacity increased, but the proportion of dye removed decreased, owing to the stronger driving force overcoming the mass transfer resistance on the one hand and fewer available adsorption sites on the other 55,57. For example, the adsorption capacity of Com-AC, DC-5-750, DC-5-750M2, and SC-3-600 at the lowest initial concentration (10 mg/l) was 5.78 mg/g, 8.50 mg/g, 6.28 mg/g, and 8.51 mg/g, respectively; at 50 mg/l it was 16.32 mg/g, 20.56 mg/g, 12.24 mg/g, and 18.32 mg/g, respectively; and at the highest initial concentration (90 mg/l) it was 13 mg/g, 23.98 mg/g, 14.65 mg/g, and 18.778 mg/g, while the proportion of nitrate extracted displayed the opposite pattern: at the lowest initial concentration (10 mg/l) it was 27.78%, 40.87%, 30.19%, and 40.91%, and it then decreased at an initial concentration of 50 mg/l to 16.32%, 20.56%, 12.4%, and 18.32%.

Isotherms. Figures 21a, b and 22a, b display the Langmuir and Freundlich isotherms fitted to elucidate the nitrate and MR removal processes, respectively. Tables 9 and 10 provide the fitting parameters. For all adsorbents, the Langmuir isotherm suited the process better, with a higher R² than the Freundlich model. R_L values between 0 and 1 suggested that nitrate and MR adsorption were favorable on all sorbent surfaces. It is worth noting, however, that as the R_L value approached zero, MR adsorption became more irreversible with increasing concentration 12,35. Nitrate Langmuir and Freundlich isotherms were used to investigate the nitrate removal process, and the findings are presented in Table 11.
As predicted, the Langmuir isotherm, which assumes monolayer adsorption on the surface, described the process better than the Freundlich model, with a greater R² for all adsorbents. An R_L value between 0 and 1 indicated that adsorption of both sorbates was favourable on all sorbent surfaces under the Langmuir model (adsorption is regarded as unfavourable if R_L > 1) 36. For the Freundlich model, the adsorption intensity (1/n) was less than 0.5 in all cases, indicating that the sorbate was easily adsorbed (it is hardly adsorbed if 1/n > 2) 42.

It is worth noting that MR adsorption became more irreversible with increasing concentration, since the R_L value approached zero at higher concentrations 25,26. The published data are also compared with the present study's nitrate adsorption capacities in Table 12.
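To make the isotherm comparison concrete, the sketch below fits the linearized Langmuir form (C_e/q_e = 1/(q_m K_L) + C_e/q_m) and the linearized Freundlich form (ln q_e = ln K_F + (1/n) ln C_e), then computes the separation factor R_L = 1/(1 + K_L C_0). The equilibrium data are synthetic placeholders, not values from this study.

```python
import numpy as np

# Illustrative (synthetic) equilibrium data: Ce in mg/l, qe in mg/g.
ce = np.array([2.1, 6.3, 14.8, 27.5, 41.0])
qe = np.array([5.6, 11.2, 15.9, 18.4, 19.6])

# Linearized Langmuir: Ce/qe = 1/(qm*KL) + Ce/qm -> slope = 1/qm, intercept = 1/(qm*KL).
slope, intercept = np.polyfit(ce, ce / qe, 1)
qm = 1.0 / slope
kl = 1.0 / (intercept * qm)

# Linearized Freundlich: ln(qe) = ln(KF) + (1/n)*ln(Ce).
inv_n, ln_kf = np.polyfit(np.log(ce), np.log(qe), 1)
kf = np.exp(ln_kf)

# Separation factor RL = 1/(1 + KL*C0); 0 < RL < 1 means favorable adsorption,
# and RL -> 0 at high C0 means increasingly irreversible adsorption.
for c0 in (10.0, 50.0, 90.0):
    rl = 1.0 / (1.0 + kl * c0)
    print(f"C0 = {c0:5.1f} mg/l -> RL = {rl:.3f}")

print(f"Langmuir: qm = {qm:.2f} mg/g, KL = {kl:.3f} l/mg; "
      f"Freundlich: KF = {kf:.2f}, 1/n = {inv_n:.2f}")
```

Running this over the three initial concentrations mirrors the trend noted above: R_L stays between 0 and 1 (favorable) but shrinks toward zero as C_0 grows.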
Ionic strength. To assess the effect of ionic strength, MR was dissolved in NaCl solutions with concentrations of 0.01 M and 0.05 M, and the pH was adjusted to 4. As shown in Fig. 23, the observed effect is presumably related to Na+ cation competition, among other factors.

Environmental consideration: toxicity characteristic leaching procedure. The toxicity characteristic leaching procedure (TCLP) was carried out as described elsewhere 39,62, and the element concentrations were determined using a Perkin-Elmer AA spectrometer. Table 13 shows the results of the TCLP test. In general, the concentration of leachable heavy metals in the pyrolyzed adsorbents was lower than in the precursors, owing to the higher thermal stability that heavy metals acquire through pyrolysis 62. However, after pyrolysis, SC-5-900 released more heavy metals (Fe, Cr, Co, and Ni) than its precursors. This may be due to the disintegration of some stable inorganic minerals (primarily silicates and carbonates) in the sludge during pyrolysis as temperature rose, which liberated the fixed metals from the lattice 17.
Conclusion
Two kinds of sewage sludge were used in this work to create low-cost adsorbents: D, collected during the dissolved air flotation stage, and S, a combination of primary and secondary sludge from the digestion and dewatering phases. After mixing the sewage sludge with waste coal, it was activated with KOH and oxidized with APS. We evaluated and compared the capacity of the synthesized adsorbents to remove nitrate and MR with that of commercial activated carbon. Oxidation with APS (1) had a greater negative effect on the textural properties of the adsorbents derived from sludge S than on those derived from sludge D, (2) had a negligible effect on the organic composition, as revealed by ultimate analysis, and (3) introduced acidic functional groups, as revealed by FT-IR and Raman spectroscopy. The adsorption capacity of the adsorbents increased with time and with the initial contaminant concentration. The nitrate and MR removal processes followed the pseudo-second-order kinetic model and the Langmuir isotherm rather well. Nitrate and MR adsorption capacities were greatest at pH 2 and pH 4, respectively.
Data availability
Some of the data will be made available upon journal acceptance.
Role of the preoperative usefulness of the pathological diagnosis of pancreatic diseases
Pancreatic cancer is the fifth leading cause of cancer death and has the lowest survival rate of any solid cancer. Endoscopic ultrasound-guided fine-needle aspiration biopsy (EUS-FNA) is currently capable of providing a cytopathological diagnosis of pancreatic malignancies with high diagnostic power, with a sensitivity and specificity of 85%-89% and 98%-99%, compared with pancreatic juice cytology (PJC), whose sensitivity and specificity are only 33.3%-93% and 83.3%-100%. However, EUS-FNA is not effective in cases of carcinoma in situ and minimally invasive carcinoma, because both are undetectable by endoscopic ultrasonography, whereas PJC is able to detect them. As for the frequency of complications such as post-endoscopic retrograde cholangiopancreatography pancreatitis, EUS-FNA is safer than PJC. To diagnose pancreatic cancer appropriately, it is necessary to master both procedures so that we can select the best method of sampling tissue while considering the patient's safety and condition.
INTRODUCTION
Pancreatic ductal adenocarcinoma (PDAC) currently ranks fifth among causes of cancer death and, among solid cancers, has the lowest survival rate [1,2]. The current 5-year survival rate for patients with PDAC is less than 3.5% [3,4]. An early diagnosis is crucial to improve the prognosis. However, for a number of reasons, including the inaccessibility of the pancreas and the highly malignant nature of the disease, an early diagnosis is still difficult to obtain despite constant improvements in diagnostic imaging. Furthermore, it is especially difficult to distinguish between PDAC and a pancreatic inflammatory lesion, including chronic pancreatitis (CP) or a benign stricture of the main pancreatic duct (MPD), and between intraductal papillary mucinous carcinoma (IPMC) and intraductal papillary mucinous neoplasm (IPMN). Being able to differentiate PDAC from other conditions is important, not only because the treatments for each of these conditions differ, but also because the prognosis for CP and other rare tumors is better than that for PDAC. A cytopathological diagnosis is desirable before beginning therapy in cases in which a qualitative diagnosis of a pancreatic mass cannot be made by imaging studies. In fact, in 5%-10% of pancreatoduodenectomies performed for a preoperative diagnosis of a primary pancreatic or periampullary malignancy, the lesion is later proven histopathologically to be CP, a benign fibrous common bile duct stricture, or a similar benign condition [5-7]. Even after performing endoscopic ultrasound-guided fine-needle aspiration biopsy (EUS-FNA) for a pancreatic mass, the frequency of PDAC does not reach 80% [8,9]. This matters because the treatment strategies for resectable and unresectable cases differ between PDAC and pancreatic neuroendocrine tumors [10-14].
There are many cytopathological diagnostic procedures, including abdominal ultrasound-guided fine-needle aspiration biopsy, computed tomography-guided fine-needle aspiration biopsy, EUS-FNA, pancreatic juice cytology (PJC), and endoscopic pancreatography-guided biopsy. In this review, I will mainly outline EUS-FNA and PJC.
PROCEDURE OF AN EUS-FNA
Vilmann et al [15] first described EUS-FNA of a pancreatic mass in 1992. These days, EUS-FNA is the preferred method for sampling pancreatic mass lesions, having for the most part replaced other methods, because it is considered the best diagnostic modality for pancreatic masses, with a higher accuracy than biopsy under CT or US guidance. Two puncture techniques are used in EUS-FNA: the door-knocking method and the fanning method. The door-knocking method is useful for obtaining a specimen from a mass, especially one with fibrotic tissue, and the utility of the fanning method has been proved by a randomized controlled trial [16]. FNA needles are available commercially in sizes from 19 to 25 gauge (G). A recent meta-analysis of 1292 patients diagnosed with pancreatic malignancies suggests that 22G and 25G needles have a similar specificity [17]. The same study showed that the 25G needle appeared to have a higher sensitivity than the 22G needle. Another study found that 25G needles seemed to be more advantageous than 22G needles in terms of the adequacy of passes; no difference in accuracy, number of passes, or complications was found [18]. However, 25G needles should be considered first when sampling pancreatic head or uncinate process lesions, as some studies have shown that the 25G needle has a reduced chance of technical failure compared with 22G needles in such situations [19,20]. 19G needles, on the other hand, are not often used in the duodenum because of their natural rigidity. Recently, however, a more flexible needle made of nitinol has been developed to improve performance (Flex 19, Boston Scientific, Natick, MA). An initial study using these improved needles included 38 patients, 32 of whom had pancreatic head/uncinate lesions. The needles provided adequate samples for cytological analysis in all 32 patients, with no reported technical failures or procedure-related complications [21]. Ramesh et al [22] reported no significant difference in the performance of flexible 19G and 25G needles, although the procurement of histological core tissue with the flexible 19G needles was significantly higher (88% vs 44%, P < 0.001). As for aspiration, studies have compared non-aspiration, 10 mL aspiration, the slow-pull method, and 10-20 mL aspiration, but no consistent conclusion about sampling rate or accuracy has been reached [23-27]. EUS-FNA accuracy is also affected by the operator's skill level and by whether a cytopathologist is available [28-30]. A recent meta-analysis covering 34 studies showed that rapid on-site evaluation (ROSE) was a significant determinant of the accuracy of EUS-FNA in the diagnosis of pancreatic masses [28]. Two studies have evaluated the optimal number of EUS-FNA passes to be 5-7 for pancreatic masses in order to obtain the best diagnostic yield [29,31]. This information may prove useful in situations in which rapid pathology interpretation is not possible. White specimens in EUS-FNA samples are considered to include histological evidence, whereas red specimens are thought to be the blood component. When a 19G needle was used, a histologic core was found to be present in white specimens 78.9% of the time, and in red specimens 9.3% of the time [32].
Multiple meta-analyses have reported that ROSE is useful in solving the problem mentioned above [28,33]. Given that a meta-analysis suggests 25G needles have a higher sensitivity than 22G needles for diagnosing pancreatic malignancy [17], it is expected that EUS-FNA using a 25G needle will become more mainstream in the future because of its ease of puncture. In that case, re-examination may be required if immunohistochemical staining must be performed after ROSE, because the smaller sample size means a decreased chance of there being a histologic core in the sample. Furthermore, there is a fundamental problem in that, globally, there are not enough pathologists capable of performing ROSE.
We developed the target sample check illuminator (TSCI) as a device to solve the above problem [34]. The mean number of needle punctures was 2.4 (range, 1-5), and the agreement rate between TSCI and histopathology in 142 samples was 93.7% (133/142). No differences in detection capacity were observed between cancerous and noncancerous lesions. When the presence of the target specimen was confirmed by TSCI, 91.4% (53/58) of the patients were able to finish the tests, and the mean number of needle punctures was 1.2 (67/58).
DIAGNOSTIC POWER OF EUS-FNA
Two recent studies reported a sensitivity of 85% and 89%, respectively, based on cytology for the diagnosis of malignancy, with specificities of 98% and 99% [28,35]. Genetic analysis of EUS-FNA samples is useful for improving the diagnostic ability of EUS-FNA. A recent meta-analysis reported that combining K-ras mutation analysis with routine cytology moderately improves the ability of EUS-FNA to differentiate between PDAC and pancreatic inflammatory masses. In a total of eight studies, with 696 cases of PDAC and 138 cases of pancreatic inflammatory masses, the pooled sensitivity, specificity, positive likelihood ratio, and negative likelihood ratio of K-ras mutation analysis combined with cytopathology for the diagnosis of PDAC vs pancreatic inflammatory masses were 90%, 95%, 13.45, and 0.13, respectively. Notably, among 123 patients whose EUS-FNA results were inconclusive or negative, fifty-nine had K-ras mutations and were finally diagnosed with PDAC (48%, 59/123) [36]. In addition, there are several possible means of processing aspirated samples obtained by EUS-FNA for molecular and other ancillary tests [37].
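As an illustration of how these pooled test characteristics relate to each other, the snippet below derives sensitivity, specificity, and likelihood ratios from a 2 × 2 contingency table. The counts are invented for demonstration only and are not the data underlying the meta-analysis.

```python
# Hypothetical 2x2 table for a diagnostic test (counts are illustrative only).
tp, fn = 90, 10   # diseased patients: test-positive / test-negative
fp, tn = 5, 95    # non-diseased patients: test-positive / test-negative

sensitivity = tp / (tp + fn)               # probability of a positive test in disease
specificity = tn / (tn + fp)               # probability of a negative test in health
lr_pos = sensitivity / (1 - specificity)   # positive likelihood ratio
lr_neg = (1 - sensitivity) / specificity   # negative likelihood ratio
ppv = tp / (tp + fp)                       # positive predictive value (prevalence-dependent)
npv = tn / (tn + fn)                       # negative predictive value (prevalence-dependent)

print(f"Sensitivity = {sensitivity:.2f}, Specificity = {specificity:.2f}")
print(f"LR+ = {lr_pos:.2f}, LR- = {lr_neg:.2f}, PPV = {ppv:.2f}, NPV = {npv:.2f}")
```

Note that the pooled likelihood ratios quoted above (13.45 and 0.13) are not simply sensitivity/(1 − specificity) of the pooled estimates, since a meta-analysis pools each metric across studies separately.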
COMPLICATIONS OF EUS-FNA
Complications of EUS-FNA include pain, bleeding, fever, and infection. Rare complications such as acute portal vein thrombosis [38], peritoneal seeding of tumor cells [39], and ruptured pseudoaneurysm of the splenic artery [40] have also been reported. A recent systematic review by Wang et al [41], who identified 51 articles with a total of 10941 patients, reported that the mortality rate attributable to EUS-FNA-specific morbidity was 0.02% (2/10941) and that, of 8246 patients with pancreatic lesions, only 60 (0.82%) reported any complications. About 36 of the 8246 patients had pancreatitis; 75% of those cases were mild, and one patient with severe pancreatitis died. The overall rates of pain, bleeding, fever, and infection were 0.38%, 0.10%, 0.08%, and 0.02%, respectively. Peritoneal seeding of tumor cells after EUS-FNA has been reported in 2.2% of patients; however, this appears to be lower than the rate caused by CT-guided FNA (16.3%) [42]. No increase in the risk of peritoneal carcinomatosis was found for pancreatic masses [43]. Beane et al [44] found no difference in the survival rate of patients with PDAC who underwent EUS-FNA compared with those who did not. Moreover, a recent study that examined the risk of gastric/peritoneal recurrence in cases where EUS-FNA was performed found that EUS-FNA was not associated with increased needle-track seeding [45]. Furthermore, preoperative EUS-FNA was evaluated in 498 patients and was found to have no negative effect on the survival rate of patients with resected pancreatic cancer [46].
LIMITATIONS OF EUS-FNA
Even though EUS-FNA has excellent accuracy and a low incidence of major complications, it does have several limitations. First, we cannot perform EUS-FNA when we cannot detect a tumor on EUS; in fact, carcinoma in situ (CIS) cannot be identified on EUS [47]. Second, even though EUS-FNA has a very high sensitivity for pancreatic tumors, its negative predictive value is only 55%-65% [35,48]; as such, EUS-FNA does not allow us to rule out the possibility of a malignancy. Third, the diagnostic accuracy of EUS-FNA decreases if the patient has CP [49,50], which might also hinder cytological interpretation of pancreatic FNA, giving EUS-FNA a decreased sensitivity [51]. Fourth, EUS-FNA for pancreatic cancer has a false-positive rate of 1.1%, usually in patients with CP [52]. Fifth, we may not be able to perform EUS-FNA when the use of an antithrombotic drug cannot be discontinued.
PROCEDURE OF PJC
McCune et al [53] developed the ERCP procedure in 1968. As for sampling of pancreatic lesions, Endo was the first to collect pancreatic juice under ERP [54]. PJC encompasses all of the following procedures: brushing cytology, cytodiagnosis with pancreatic duct lavage fluid (PDLF), cytodiagnosis using endoscopic nasopancreatic drainage (ENPD), and cytodiagnosis using secretin. Below, I present the methods, diagnostic results, and complications of each procedure.
Brushing cytology
A cytopathological diagnosis using brushing cytology is easier than with conventional aspiration cytology because fresh cells can be collected.
However, the sensitivity (33.3%-65.8%) and accuracy (46.7%-76.4%) are not very good, because the procedure is difficult to perform and it is hard to collect enough cells [55,56]. Recently, scraping cytology with a guidewire yielded 71.4%-93% sensitivity, 100% specificity, 100% positive predictive value, 75%-84.4% negative predictive value, and 88.8%-94% accuracy [8,57]. This diagnostic rate has been shown to improve as the procedure is mastered [56]. When a CIS is diagnosed by PJC in a case of pancreatic duct stenosis in which no pancreatic mass can be seen on imaging studies, and the lesion is resected, there is usually no cancer at the site of stenosis in the MPD; the stenosis is caused by inflammation due to a CIS derived from a branch duct. For this reason, the diagnostic power of brushing cytology is uncertain. As for the complication rate of brushing cytology, acute pancreatitis has been reported at a rate of 4.2%-33.3% [8,55-57].
ENPD method
The ENPD method places a 5- or 6-French ENPD tube in the patient for up to 2-3 d [58,59]. Iiboshi et al [58] diagnosed 15 CIS using this method. The sensitivity, specificity, positive predictive value, negative predictive value, and overall accuracy of the ENPD method for pancreatic cancer were 80%-100%, 83.3%-100%, 93.3%-100%, 71%-100%, and 87%-95%, respectively, revealing significantly higher sensitivity than the conventional method (P = 0.0001) [58]. As for complications of the ENPD method, post-endoscopic pancreatitis (PEP) occurs at a rate of 7.5%. In particular, the incidence of PEP with the ENPD method for BD-IPMN is higher than with the conventional method [60].
Cytodiagnosis by using secretin
Because secretin stimulates pancreatic exocrine function, more pancreatic juice can be obtained with secretin than without it, and thus pancreatic epithelial cells can be obtained. Administration of secretin was conventionally performed before collecting pancreatic juice for cytodiagnosis [54,60]. Secretin may be required in cases in which a sufficient amount of material cannot be obtained by conventional methods, or it may be needed to aspirate mucous fluid in intraductal papillary mucinous neoplasm [57]. Nakaizumi reported that the sensitivity for PDAC was 76% in PJC using secretin [61]. As for the complications of secretin, the package insert reports a rate of 1.9% for nausea, 0.7% for flushing, and 0.5% for stomachache and vomiting [62].
We have not experienced any adverse events with secretin administration. We also confirmed that the quantity of pancreatic juice increases significantly even when the secretin load is diluted to 1/32.
Cytodiagnosis with pancreatic duct lavage fluid
Imamura's procedure involves injecting saline through the injection lumen while simultaneously aspirating it under negative pressure through the guidewire lumen with a separate syringe, using a double- or triple-lumen cannula, after brushing cytology during ERP. The sensitivity of pancreatic cancer diagnosis by this procedure is 83%, and pancreatitis was not a side effect of PDLF [63]. We choose PJC using secretin if a catheter is able to pass through the narrow segment of the MPD, and PDLF if the catheter cannot pass.
If secretin is used in cases where a catheter cannot pass the stenosis of the MPD, the pancreatic ductal pressure in the caudal portion of the duct beyond the stenosis increases, and this causes pancreatitis.
GENETIC ANALYSIS WITH PANCREATIC JUICE
Genetic and molecular analysis of PJC samples is useful for improving the diagnostic ability of PJC in cases in which only a small quantity of specimen is obtained and the cytodiagnosis is negative. In the diagnosis of pancreatic cancer, sensitivity improves by adding K-ras mutation analysis to routine cytology [64]. There are also some reports on the utility of measuring telomerase activity [65], DNA methylation [66], Smad4 [67], and KL-6 [68,69] in pancreatic juice.
LIMITATIONS OF PJC
First, the accuracy of PJC is generally only around 40%-70% [55,56], except at some institutions [8,57,58]. Second, we cannot diagnose pancreatic neuroendocrine tumor, solid pseudopapillary neoplasm, or pancreatic acinar cell carcinoma, because they are not connected to the MPD. Third, it is hard to perform immunostaining, because it is more difficult to obtain a specimen than with EUS-FNA. Fourth, complications such as PEP occur after PJC in around 4.2%-33.3% of cases [8,55-57,60], but it has been reported that the risk of PEP decreases with diclofenac administration. Elmunzer et al [70] reported that post-ERCP pancreatitis developed in 27 of 295 patients (9.2%) in the indomethacin group and in 52 of 307 patients (16.9%) in the placebo group (P = 0.005). Moderate-to-severe pancreatitis developed in 13 patients (4.4%) in the indomethacin group and in 27 patients (8.8%) in the placebo group (P = 0.03).
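To put the quoted trial figures in context, the short sketch below recomputes the incidence rates and the relative risk from the counts reported by Elmunzer et al. Only the published counts are used; the chi-square test shown is a standard approximation for illustration, not the trial's own analysis.

```python
from scipy.stats import chi2_contingency

# Post-ERCP pancreatitis counts reported by Elmunzer et al:
# 27/295 with rectal indomethacin vs 52/307 with placebo.
events = {"indomethacin": (27, 295), "placebo": (52, 307)}

rates = {arm: e / n for arm, (e, n) in events.items()}
rr = rates["indomethacin"] / rates["placebo"]   # relative risk

# 2x2 table: [events, non-events] per arm, for a chi-square test of association.
table = [[27, 295 - 27], [52, 307 - 52]]
chi2, p, _, _ = chi2_contingency(table)

print(f"Incidence: indomethacin {rates['indomethacin']:.1%}, "
      f"placebo {rates['placebo']:.1%}")
print(f"Relative risk = {rr:.2f} "
      f"(~{1 - rr:.0%} relative risk reduction), chi-square P = {p:.3f}")
```

This reproduces the 9.2% vs 16.9% incidence quoted above and makes explicit that the NSAID arm carried roughly half the risk of the placebo arm.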
USE OF EUS-FNA AND PJC
Generally, EUS-FNA is better than PJC in terms of both diagnostic ability and adverse events. Therefore, if we can perform EUS-FNA, we should choose it, and it is desirable to choose PJC only in the following cases: (1) a mass cannot be detected on EUS; (2) it is difficult to perform EUS-FNA while avoiding blood vessels and the MPD; (3) it is difficult to stop antithrombotic medicine; or (4) there is no nearby institution capable of performing EUS-FNA. Furthermore, some reports indicate that the diagnostic accuracy of EUS-FNA and/or PJC combined is significantly higher than that of EUS-FNA or PJC alone [8,71].
In conclusion, although there are some complications, such as acute pancreatitis and dissemination, when the frequency of complications and the physical burden of surgery on the patient are taken into consideration, it is perhaps better to obtain tissue before treatment begins. Since there are various methods of sampling tissue, it is important to choose the procedure while considering the patient's condition and safety.
Neutrophil and T-Cell Homeostasis in the Closed Eye
Purpose This study sought to examine the changes and phenotype of the tear neutrophil and T-cell populations between early eyelid closure and after a full night of sleep. Methods Fourteen healthy participants were recruited and trained to wash the ocular surface with PBS for at-home self-collection of ocular surface and tear leukocytes following up to 1 hour of sleep and a full night of sleep (average 7 hours), on separate days. Cells were isolated, counted, and incubated with fluorescently labeled antibodies to identify neutrophils, monocytes, and T cells. For neutrophil analysis, samples were stimulated with lipopolysaccharide (LPS) or calcium ionophore (CaI) before antibody incubation. Flow cytometry was performed. Results Following up to 1 hour of sleep, numerous leukocytes were collected (2.6 × 10⁵ ± 3.0 × 10⁵ cells), although significantly (P < 0.005) more accumulated with 7 hours of sleep (9.9 × 10⁵ ± 1.2 × 10⁶ cells). Neutrophils (65%), T cells (3%), and monocytes (1%) were identified as part of the closed eye leukocyte infiltration following 7 hours of sleep. Th17 cells represented 22% of the total CD4+ population at the 7-hour time point. Neutrophil phenotype changed with increasing sleep, with a downregulation of membrane receptors CD16, CD11b, CD14, and CD15, indicating a loss in the phagocytic capability of neutrophils. Conclusions Neutrophils begin accumulating in the closed eye conjunctival sac much earlier than previously demonstrated. The closed eye tears are also populated with T cells, including a subset of Th17 cells. The closed eye environment is more inflammatory than previously thought and is relevant to understanding ocular homeostasis.
Keywords: neutrophils, flow cytometry, T cells, closed eye, sleep

Eyelid closure is essential for maintenance of both ocular and diurnal homeostasis. At the level of the retina, eyelid closure is essential for circadian rhythm regulation through melatonin production. 1 Eyelid closure during sleep results in increases in proinflammatory cytokines, complement activation products, and matrix metalloproteinases in the tears within the conjunctival sac. 2 An additional hallmark of closed eye tears is the influx of hundreds of thousands of neutrophils onto the ocular surface and into the conjunctival sac, and prior studies have examined the role of these neutrophils and demonstrated that they have an aberrant phenotype compared with regular, blood-isolated neutrophils. 3 Notably, the neutrophils of the closed eye are more activated, meaning that they have upregulated many of the cell surface receptors corresponding to inflammation, yet they remain largely quiescent or nonresponsive when presented with an inflammatory stimulus. This implies that closed eye neutrophils have been primed and/or activated by a prior stimulus, and that they may have undergone some degranulation, as an example, but do not continue to degranulate on further stimulation. The present investigation sought to determine if neutrophil phenotype was consistent throughout eyelid closure, and thus required revisiting prior work to determine the time course for neutrophil recruitment to the ocular surface.
In early studies of tear neutrophils, the number of neutrophils reported from the closed eye was approximately 2000 to 8000 cells. 4,5 Using our technique of adding additional volume with the ocular surface wash, we previously reported that there are closer to 350,000 leukocytes recovered after a full night of sleep. 3 Tan et al. 4 performed a time course study on neutrophil recruitment in the closed eye, and observed that neutrophils were recruited in very low numbers (tens) between 0 and 3 hours of sleep, and only reached thousands of cells following 5 hours of sleep. Preliminary analyses performed with our improvements in cell collection demonstrated that up to 500,000 leukocytes could be recovered as early as 1 hour after eyelid closure. 6 Given the quiescent, yet activated, nature of closed eye tear neutrophils following a full night (average 7 hours) of sleep, the goal of this project was to determine if neutrophil phenotype changes with increasing/ prolonged eyelid closure, and thus we compared cellular phenotype following 1 and 7 hours of sleep.
Neutrophil recruitment is hypothesized to be driven by either trapped bacteria or lipopolysaccharide (LPS) in the closed eye tears, homeostatic mechanisms, or neurogenic mechanisms (potentially related to sleep). 2 Recruitment of neutrophils also may be coordinated through the actions of T cells, specifically IL-17-producing T-helper cells (Th17 cells). 7,8 Specifically, IL-17 is known to stimulate epithelial cells to produce inflammatory mediators such as IL-8 (CXCL8), which in turn leads to recruitment of neutrophils. 7 The presence of T cells in the closed eye leukocyte accumulation has yet to be elucidated, but T cells have been previously observed in open eye human tears, 9-11 and this population of T cells has been shown to increase in vernal conjunctivitis, 11 allergic conjunctivitis, 10 and atopic keratoconjunctivitis. 9 It is also known that CD3+ T cells populate the palpebral conjunctiva in the conjunctival-associated lymphoid tissue (CALT). 12,13 This CALT region is proposed to play a pivotal role in closed eye regulation, as the overlying lid is primarily in contact with the lymphocyte-deficient cornea. 13 There is also a small population of T cells and Th17 cells in the bulbar conjunctiva, 14-16 and biopsies of the bulbar conjunctiva show increased T-cell presence in dry eye disease and Sjögren's syndrome. 16,17 Further, T cells, and specifically Th17 cells, are considered to be involved in dry eye pathogenesis at the ocular surface, 13,14 which further highlights the importance of these cells in ocular surface immune regulation. Altogether, it is hypothesized that T cells, including Th17 cells, play a role in the closed eye leukocyte infiltration.
With the development of ocular therapeutics that focus on combatting inflammation, 18 it is imperative to understand the role that closed eye leukocytes play in ocular surface homeostasis, given their presence in tears. The purpose of this investigation was 2-fold: to determine phenotypic and population changes in the closed eye neutrophil population, and to determine the relative presence of T cells in the closed eye leukocyte mélange, following 1 and 7 hours of sleep in a cohort of healthy human subjects.
Subjects
The study was conducted in accordance with the tenets of the Declaration of Helsinki and received ethics clearance from the University of Alabama at Birmingham Institutional Review Board. A total of 14 healthy subjects were enrolled, with an average age of 34 ± 9 years, and subjects were evenly split male versus female.
Cell Collection
All participants were trained using a previously established method 3 to self-collect their closed eye ocular surface and tear leukocytes using a polyethylene pipette containing sterile PBS. Briefly, on eye opening, the leukocytes are irrigated from the ocular surface and conjunctival sac, and include the resident tear film. This population of leukocytes is hereafter referred to as "tear leukocytes." Subjects collected tears following either up to 1 hour of sleep, or after a full night of sleep (7.33 ± 0.91 hours, range 5-9 hours). For simplicity, the former is referred to as the 1-hour time point and the latter is hereafter referred to as the 7-hour time point. An important caveat for the 1-hour time point is that there is no exact measure of the length of sleep, although all subjects self-reported that they had fallen asleep by the time the alarm went off for the 1-hour collection. The presence of a diurnal effect on leukocyte recruitment has yet to be compared, but a 1-hour collection time point during nighttime sleep was used to compare with the 7-hour time point.
After awakening, participants were instructed to gently irrigate their ocular surface, with 5 mL PBS for each eye, with normal blinking, and the eye wash was collected in one sterile polypropylene tube (pooled sample, 10 mL). The 1-hour sample was collected following a normal evening bedtime. Collected samples were brought to the laboratory within 2 hours of each collection and were processed immediately. The cell collection was centrifuged at 270g and the supernatant was removed. Cells were counted, and average cell size was obtained using a Moxi Z automated cell counter (ORFLO, Hailey, ID, USA).
Reagents and Monoclonal Antibodies
General methods for cell processing and stimulation have been described previously. 3
Cell Stimulation
To evaluate the activation state of neutrophils (i.e., whether neutrophils are quiescent, primed, or activated), the closed eye leukocytes were presented with two stimuli that are recognized to induce an inflammatory response in leukocytes, namely LPS and calcium ionophore (CaI). LPS stimulates neutrophils through toll-like receptor 4 (TLR4), 20 whereas calcium ionophore induces cellular activation through increases in cytosolic calcium ions. 21 For LPS stimulation, cells were incubated at a final concentration of 6 μg/mL LPS in PBS, which should induce a significant stimulation. 22 For CaI stimulation, cells were incubated at a final concentration of 2 μM CaI in PBS. A higher concentration (5 μM) with a shorter incubation time (5 minutes) has been shown to be sufficient to induce metabolite release by neutrophils. 23 A third aliquot was reserved for unstimulated samples that were left to rest. All samples were incubated for 30 minutes at room temperature following addition of stimulus. Importantly, T cells and other leukocytes were part of the stimulation, although only membrane receptor analysis of the neutrophils was conducted.
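Reaching a specified final stimulus concentration in a fixed sample volume is a simple C1·V1 = C2·V2 calculation; the tiny helper below sketches it. The stock concentration and volumes used here are hypothetical, chosen only to illustrate the arithmetic, and are not values from this protocol.

```python
def stock_volume(c_stock: float, c_final: float, v_final: float) -> float:
    """Volume of stock to add so that C1*V1 = C2*V2 holds (units must match)."""
    return c_final * v_final / c_stock

# Hypothetical example: reach a 6 ug/mL final LPS concentration in 1.0 mL
# of cell suspension, starting from a 1 mg/mL (1000 ug/mL) LPS stock.
v1 = stock_volume(c_stock=1000.0, c_final=6.0, v_final=1.0)  # in mL
print(f"Add {v1 * 1000:.1f} uL of stock per 1.0 mL final volume")  # 6.0 uL
```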
Expression of Membrane Receptors on Neutrophils and T Cells
After incubation with stimulus, tear samples for neutrophil and monocyte phenotyping were transferred into tubes containing fluorescently labeled antibodies against CD11b, CD14, CD15, CD16, CD45, CD66b, and C3aR. The use of several markers to positively identify neutrophils is necessary in human work, as there is no single marker analogous to Ly6G in mice to identify neutrophils in humans. 24 Cells were incubated with antibodies for 30 minutes at room temperature, and were then washed twice by spinning down and resuspending in 700 μL of PBS, before fixation in 2% paraformaldehyde.
For the tear samples for T-cell phenotyping, unstimulated tear collections were transferred into tubes containing fluorescently labeled antibodies against CD161 and CD196, and cells were incubated for 30 minutes at 37°C. This first incubation was performed to improve the specificity and sensitivity of measurement for these nonabundant receptors, given the temperature dependence of membrane receptor internalization and subsequent recycling. 25 Following the first incubation, cells were then stained with FVS and fluorescently labeled antibodies against CD3, CD4, CD8, CD25, and CD45 for 30 minutes at room temperature. Cells were then washed twice in PBS, filtered using a 35-μm cell-strainer cap (Corning, Corning, NY, USA), and then fixed in 2% paraformaldehyde.
Flow Cytometry
All samples were acquired on an LSR II flow cytometer (BD Biosciences) within 8 hours of fixation using BD FACS Diva software, version 8.0.1 (BD Biosciences). Neutrophils were defined by stepwise exclusion of doublets and cell clumps and of nongranulocytes or non-neutrophils, using flow cytometric analyses (Supplementary Fig. S1). Specifically, neutrophils were specified as being CD45+ (leukocyte common antigen), with the forward scatter and side scatter profile associated with neutrophils. Doublets were also excluded to ensure only single cells were used for analysis. Identification of neutrophils in humans is complex, as no single membrane receptor can be used to identify neutrophils in humans. 26 Positive staining for CD11b (Mac-1), CD15 (Lewis X), and CD16 (FcγRIII), along with low staining for CD14 (LPS receptor common on monocytes/macrophages), can be used in combination to identify neutrophils. 26 A recent study of 374 clusters of differentiation (CD) antibodies on blood versus oral neutrophils, both in health and disease, suggested that all neutrophils, regardless of activation state or location, stain positively for CD11b, CD16, and CD66b (degranulation). 27 For this study, all the above markers were chosen to identify the neutrophils and characterize their activation state. The complement receptor for C3a (C3aR) was also used as a measure of complement activation.
Similarly, viable T cells were defined by stepwise exclusion of doublets, cell clumps, and dead cells, and of granulocytes or nonlymphocytes, and stained positively for CD3 (Supplementary Fig. S2). T cells were further broken down into CD4 (helper T cell) versus CD8 (cytotoxic T cell) lineage, and Th17 cells were gated following selection of CD4+ T cells using double-positive selection for CD161 and CD196 (CCR6). CD161 has been reported to be a marker of all human IL-17-producing T lymphocytes, 28,29 and the addition of CD196 has shown improved specificity in the selection of Th17 cells. 29 Appropriate compensation controls and fluorescence-minus-one controls were used to determine gating strategies. Interdaily variations in flow cytometry acquisition were controlled for using the Application Settings feature in BD FACS Diva software. All data were analyzed post acquisition using FlowJo V10 (Ashland, OR, USA).
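As a rough illustration of the stepwise gating hierarchy described above, the sketch below applies sequential boolean masks to a table of per-event fluorescence intensities. The threshold values, distributions, and column names are invented placeholders, since real gates are set per experiment from compensation and fluorescence-minus-one controls.

```python
import numpy as np
import pandas as pd

# Synthetic events: each row is one recorded event with per-marker intensities.
rng = np.random.default_rng(0)
n = 10_000
events = pd.DataFrame({
    "FSC-A": rng.normal(50_000, 8_000, n), "FSC-H": rng.normal(48_000, 8_000, n),
    "SSC-A": rng.normal(30_000, 9_000, n),
    "CD45": rng.lognormal(7, 1, n), "CD3": rng.lognormal(5, 1.5, n),
    "CD4": rng.lognormal(5, 1.5, n), "CD161": rng.lognormal(4, 1.5, n),
    "CD196": rng.lognormal(4, 1.5, n),
})

# Stepwise gates (thresholds are arbitrary for this synthetic data):
singlets = (events["FSC-H"] / events["FSC-A"]).between(0.8, 1.2)  # doublet exclusion
leukocytes = events["CD45"] > 1_500                               # CD45+ events
lymphs = leukocytes & singlets & (events["SSC-A"] < 35_000)       # low side scatter
t_cells = lymphs & (events["CD3"] > 400)                          # CD3+ T cells
cd4_t = t_cells & (events["CD4"] > 400)                           # CD4+ helper subset
th17 = cd4_t & (events["CD161"] > 300) & (events["CD196"] > 300)  # double-positive

print(f"T cells: {t_cells.sum()}, CD4+: {cd4_t.sum()}, "
      f"Th17 as % of CD4+: {100 * th17.sum() / max(cd4_t.sum(), 1):.1f}%")
```

Each mask here mirrors one gate in Supplementary Figures S1/S2: later populations are always subsets of the earlier ones, which is what "stepwise exclusion" means in practice.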
Statistical Analysis
All results are reported as means ± SD. To evaluate the significance of differences in expression between 1 and 7 hours, nonparametric analysis was performed using the paired Wilcoxon signed rank test in Statistical Analysis Software (SAS, Inc., Cary, NC, USA).
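For readers who want to reproduce this style of paired comparison, a minimal sketch using SciPy's Wilcoxon signed-rank test is shown below (the study itself used SAS). The paired cell counts are fabricated stand-ins, not the study's data.

```python
from scipy.stats import wilcoxon

# Hypothetical paired leukocyte counts (cells) for the same subjects
# at the 1-hour and 7-hour collections; values are illustrative only.
counts_1h = [1.2e5, 3.4e5, 0.8e5, 2.1e5, 5.6e5, 1.9e5, 2.7e5, 4.0e5]
counts_7h = [6.5e5, 9.8e5, 3.2e5, 7.4e5, 2.1e6, 5.5e5, 8.8e5, 1.3e6]

# Paired, nonparametric test of the within-subject difference.
stat, p = wilcoxon(counts_1h, counts_7h)
print(f"Wilcoxon signed-rank: statistic = {stat}, P = {p:.4f}")
```

The test ranks the within-subject differences rather than assuming normality, which suits the highly variable cell counts reported below.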
Cell Count and Size
Following 7 hours of sleep, the average number of cells recovered from a pooled sample of left and right eyes was 9.88 × 10⁵ ± 1.16 × 10⁶ (Fig. 1). There were significantly fewer cells recovered at the 1-hour time point, with an average total recovery of 2.57 × 10⁵ ± 3.00 × 10⁵ cells (P < 0.005). Cell size, however, was unchanged between the 1- and 7-hour time points, with averages of 9.58 ± 0.60 μm and 9.46 ± 0.41 μm, respectively (P = 0.47). Age-related differences were not investigated in this study given the relatively small sample size and the small standard deviation in age of the patients enrolled. Sex-related differences in cell count were investigated, and on average, there were more neutrophils isolated from females both at 1 hour (1.6 times as many) and 7 hours (1.8 times as many); however, this did not achieve statistical significance at either time point (P = 0.73 at 1 hour, P = 0.18 at 7 hours). More detailed analyses of sex-related differences in neutrophil and T-cell expression were not investigated given the smaller sample size.
Tear Neutrophil Analysis
Phenotypic changes in the neutrophil population could be observed between 1 and 7 hours of sleep (Fig. 2A). Between 1 and 7 hours of sleep, it was observed that membrane receptor expression of CD14 was decreased (P = 0.08) and CD16 was significantly decreased (P < 0.03), as shown in Figure 2B. There were also decreases in expression of CD15 (P = 0.05) and CD11b (P = 0.08) between 1 and 7 hours of sleep. There was a slight increase in expression of C3aR, although this did not achieve statistical significance (P = 0.31). CD66b and CD45 remained mostly unchanged between 1 and 7 hours of sleep.
Stimulation with LPS or CaI resulted in no significant changes to tear neutrophil phenotype, and the activation ability remained unchanged between 1 and 7 hours (Supplementary Fig. S3).
Monocytes, as identified by CD14+ and CD16+ staining, 30 were minimally observed in the samples, representing approximately 1% of total recovered leukocytes, similar to prior reports. 5
T-Cell Analysis
As with the neutrophil population, T cells also accumulate with eyelid closure (Fig. 3A). CD3+, and consequently CD4+ and CD8+, cell counts were all elevated between 1 and 7 hours of sleep (P < 0.01). There were more Th17 cells at the 7-hour time point, but this did not reach significance (P = 0.25). The relative percentage of CD4+ cells out of all CD3+ T cells was also compared between the 1- and 7-hour time points, demonstrating that the CD4+/CD3+ ratio was slightly lower at the 7-hour time point; however, this did not reach statistical significance (P = 0.16, Fig. 3B). Similarly, there was no difference in the relative percentage of CD8+ T cells out of all CD3+ T cells between the two time points (P = 0.43). However, the relative percentage of Th17 cells out of all CD4+ T cells was significantly different between the 1- and 7-hour time points (P < 0.03, Fig. 4C), implying that Th17 cells are recruited or present early on after eyelid closure, but do not accumulate at the same rate as all other T cells by the 7-hour time point.
Altogether, the leukocyte composition in the eye at awakening is summarized in Figure 4. Following a full night of sleep, most of the leukocyte infiltration consists of neutrophils, with a small percentage of CD3+ T cells and an even smaller percentage of monocytes. B cells were not stained for, but there was a large population of CD3− lymphocytes with appropriate forward scatter/side scatter characteristics that would suggest their presence. Natural killer (NK) cells likely constitute missing fractions in the current breakdown. Last, there is a population of CD3+ T cells that are CD4−CD8− and are known as double-negative T cells. Double-negative T cells may be involved with memory, and may be either pathogenic or regulatory in nature; these cells also may represent γδ T cells or NK T cells. 31 Prior studies have shown that the normal physiological blood composition of double-negative T cells is approximately 1% to 5%. 32 It has also been demonstrated that this population can be greatly elevated in certain healthy tissues and organs, such as the female genital tract 33 and the kidneys, 34 or in certain infections and diseases. 31
DISCUSSION
As many as 1 million leukocytes may be recovered from the closed eye within 1 hour of sleep. This number is significantly larger than the previously reported value of tens of neutrophils, which was likely a result of 2 to 3 μL microcapillary tear collection, in contrast to our improved collection techniques. 4 Our wash technique may remove neutrophils that are adherent to mucins or the glycocalyx on the ocular surface. This adhesion could occur through binding of galectin-3 to CD66b on the surface of the neutrophils. 35 Ultimately, this could be responsible for the increased yield observed with our technique versus the microcapillary collection. Altogether, the early leukocyte accumulation highlights that leukocyte recruitment is an active, and not passive, process, given the short kinetics.
Following a full night of sleep, the average number of leukocytes increased in the closed eye, to an average of almost 1 million leukocytes. This number is larger than our previously reported average of 3.50 × 10⁵, but may arise from a difference in counting methods (hemocytometer versus Moxi Z cell counter). 3 Importantly, the previous value is still well within 1 SD of the current estimate. There is a large variability in the number of cells collected, which could be a result of the eye wash method itself, as different volumes are recovered from the washes. Sex and age also have not been controlled for, to determine if these factors affect cellular recovery, although anecdotal observations do not seem to support either of those as major contributory factors. However, age is likely to be of importance, as Th17 cells increase in peripheral tissues with age, 36 and mouse models have demonstrated an increased population of Th17 cells in the lacrimal gland with age. 37 Our prior work demonstrated that closed eye tear neutrophils have increased expression of surface membrane receptors CD66b, CD11b, and CD54, and have lost expression of L-selectin (CD62L), 3 all of which is consistent with neutrophil extravasation, or migration of neutrophils out of the bloodstream into tissue. Incubation of blood-isolated neutrophils following exposure to any permutation of artificial tear fluid, hypoxic conditions, or co-incubation with human corneal epithelial cells, in vitro, is also unable to replicate this phenotype, highlighting the importance of extravasation. 38 Although not directly compared with blood-isolated neutrophils, the tear neutrophils investigated in this study at the 7-hour time point demonstrated high positive staining for CD66b, CD11b, CD16, and CD15, which is consistent with our previous results, 3 and also correlates with other aberrant neutrophil populations in healthy tissues, such as those in the mouth, 27 lung, 39 nose, 40 placenta, 41 and spleen. 42 Our neutrophil results demonstrate a change in phenotype with increasing duration of eyelid closure, predominated by a downregulation of membrane receptors CD14 and CD16. Loss of CD14 and CD16 has been shown to correlate with increased CD63 and primary granule release. 43 Primary (azurophil) granule release results in the release of neutrophil elastase, 44 a potent serine protease, which is known to cleave CD16 from the surface of neutrophils. 43,45 Neutrophil elastase is known to be upregulated in the closed eye, specifically as a result of neutrophil primary granule release. 46 CD16 also may be shed from neutrophils as they undergo apoptosis, 47,48 along with CD15 and CD66b, 48 but our prior results demonstrated that fewer than 2% of closed eye tear neutrophils were apoptotic and stained negatively for Annexin V, 3 so the observed CD16 membrane receptor downregulation is likely not mediated by apoptosis.
It has been suggested that the CD14 and CD16 downregulatory phenotype reduces the phagocytic capability of neutrophils, 43 as both CD14 and CD16 are phagocytic receptors on the surface of neutrophils. Although commonly thought of as a marker for monocytes and macrophages, CD14 is a glycoprotein that is also expressed on neutrophils; it binds LPS and is a receptor for phagocytosis of several microbial species. 49 CD16 is an FcγRIII receptor, which binds the immunoglobulin molecule IgG connected to opsonized bacteria. 50 CD16 downregulation has been associated with decreased phagocytosis of pathogens by peripheral blood neutrophils in elderly individuals, potentially resulting in an increased risk of sepsis. 51 Interestingly, it was shown that closed eye neutrophils after a full night of sleep have an impaired phagocytic ability. 52 We hypothesize that neutrophils, as they arrive on the eye, may be more phagocytic, helping to clear pathogens that accumulate during the day, and that as they remain on the ocular surface, they release their granule contents to effectively sterilize the closed eye tears. Future studies are required to better understand phagocytosis in the closed eye, and to examine primary granule release and CD63 surface expression on closed eye neutrophils with increasing sleep.
CD15 is also downregulated with eye closure, which is normally associated with neutrophil apoptosis, akin to CD16. 48 CD15 membrane receptor expression is reported to not be affected or cleaved by neutrophil elastase, nor are CD11b and CD45. 53 The precise mechanisms that result in CD15 downregulation in the closed eye remain unknown.
Neutrophil activation is a complex process that involves different, often sequential, phases of priming and activation, leading to different stages of degranulation. 54 Priming often occurs through stimulation of neutrophils by proinflammatory cytokines like IL-8, and this may simply lead to neutrophil entry into the tissue through release of secretory vesicles and gelatinase (tertiary) granules. 55,56 Activation of neutrophils often requires a more potent stimulus, such as LPS, to release specific (secondary) granules and ultimately azurophilic (primary) granules. 55,56 This process is oversimplified and there are many caveats, but generally, CD11b activation begins early in the priming process. CD11b pairs with CD18 to form the transmembrane receptor, part of the β2 integrin family, known as Mac-1 or complement receptor 3 (CR3). 57 CD11b is known to be expressed on gelatinase granules, 58 and the observed high CD11b membrane receptor expression at 1 hour may indicate that gelatinase granules have been released, facilitating the transfer of neutrophils into the tears. Its downregulation by the 7-hour time point may simply be explained by membrane receptor recycling into the neutrophil. However, each membrane receptor serves multifactorial roles, and CD11b is no exception. As CR3, the CD11b/CD18 complex is a receptor for phagocytosis of Bordetella pertussis. 59 CD11b also has demonstrated an important role in mediating TLR4 endocytosis in dendritic cells, in response to LPS stimulation. 57 Although the precise mechanism of CD11b internalization is not yet known, these results suggest that the phagocytic capability of closed eye tear neutrophils is altered between 1 and 7 hours of sleep. Future studies should investigate expression of TLR4 on the closed eye tear neutrophils to observe changes in expression with increasing sleep. Importantly, we observe no upregulation in CD66b between 1 and 7 hours of sleep. CD66b is involved later in neutrophil activation and is a hallmark of degranulation of specific granules. 55 This implies that neutrophil activation is not increased with increasing sleep, and may challenge the notion that azurophilic granules are released, hence requiring additional studies examining the role of CD63 membrane receptor expression.
Our results indicate that there is a slight upregulation of C3aR with increasing sleep. The closed eye is known to have an increase in complement components, and both the classical and alternative pathways are activated in the closed eye. 2,4,60 The observed increase in C3aR may simply be a result of this complement activation, but a larger sample size is necessary to better understand these changes.
Our initial hypothesis was that the closed eye tear neutrophil response to inflammatory stimuli soon after eyelid closure, and therefore early in recruitment, would be measurably higher than after a full 7 hours in the closed eye environment, since neutrophils collected at awakening do not respond to stimulation under normal physiological conditions either. 3 However, this hypothesis was not confirmed. Both at 1 hour and 7 hours of sleep, neutrophils demonstrated minimal ability to respond to stimulation with LPS or calcium ionophore. The combined hypothesized pathway for neutrophil recruitment and activation in the closed eye is summarized in Figure 5.
Our results demonstrate that there are few monocytes in the closed eye, which is consistent with prior observations. 5 This result is surprising, given the high concentration of leukotriene B4, which should induce chemotaxis of both neutrophils and monocytes. 5 To our knowledge, this is the first report of T cells in the closed eye tears, although it is to be expected given the noted presence of T cells in the open eye. 9,10 Like the neutrophil profile, T cells accumulate with increasing sleep. There is a population of both CD4+ helper T cells and CD8+ cytotoxic T cells in the closed eye tears, and Th17 cells also may be identified, as shown with double-positive staining for CD161 and CD196. Interestingly, the proportion of Th17 cells out of all CD4+ T cells following 1 hour of sleep was much greater than at 7 hours of sleep. Some subjects had more than 60% of all CD4+ T cells as Th17 cells, suggesting that they are recruited early on to be active in the closed eye. It is possible, however, that the recruitment of Th17 cells and neutrophils is cooperative, and that Th17 cells are not responsible for neutrophil recruitment, given the short kinetics observed for neutrophil recruitment.
Neutrophil recruitment to a direct subcutaneous injection of chemokines may be as fast as an hour, 61 but recruitment to peripheral sites usually involves a longer time course in inflammatory processes. Following LPS stimulation 62 or wounding, 63 neutrophil recruitment to peripheral sites usually takes approximately 4 hours to produce a significant response. In corneal wound healing, peak neutrophil recruitment occurs roughly 12 hours after abrasion, although some neutrophils are recruited as early as 6 hours. 64 Therefore, it is hypothesized that the closed eye tear neutrophils reside in a periocular surface tissue, enabling rapid migration to the ocular surface.
It is feasible that neutrophils control and regulate the recruitment of T cells to the ocular surface. In blood, T cells represent approximately 26% of all leukocytes, whereas neutrophils represent approximately 60% of all leukocytes (monocytes contribute an additional 6% of cells). 65,66 This ratio of T cells to neutrophils is drastically different at the ocular surface, where T cells represent only approximately 3% of the total recruited leukocytes. A similar imbalance of T cells to neutrophils has been reported in paraffin-stimulated human saliva, where approximately 96% of the total leukocyte accumulation are neutrophils, with monocytes and T cells representing approximately 2% and 1% of the total accumulation, respectively. 65 Induced sputum also has a distinct imbalance, with 51% neutrophils and 2% total lymphocytes. 67 Interestingly, the ratio of CD4+ to CD8+ cells remained similar between the closed eye and blood, at 3.2 and 2.7, 66 respectively. Therefore, the neutrophils may suppress T-cell recruitment, a possibility supported by recent research demonstrating interactions between these two cell types. 8,68 Both mouse and human neutrophils may act as antigen-presenting cells under certain conditions, through expression of the major histocompatibility complex type II, which could promote the differentiation of T cells. 69,70 Neutrophils may recruit Th17 cells through the production of CCL2 and CCL20, which bind to CCR2 and CCR6, respectively. 7 Neutrophils are capable of IL-17a production and of self-stimulation through the IL-17 receptor IL-17RC, and this mechanism has been shown to be important in fungal killing at the ocular surface. 71 Lastly, neutrophils can inhibit T-cell responses and perform immunosuppressive actions through proximal interactions involving Mac-1. 24 The exact mechanisms that drive the interaction of T cells and neutrophils in the closed eye tears have yet to be elucidated and warrant further study.
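To make these compartment comparisons concrete, here is a minimal sketch in Python using hypothetical cell counts chosen only to reproduce the percentages and ratios cited above; none of these numbers are measured data from this study.

```python
# Hypothetical leukocyte counts illustrating the proportions cited above;
# not measured data from this study.
def proportions(counts):
    """Return each cell type's fraction of the total leukocyte count."""
    total = sum(counts.values())
    return {cell: n / total for cell, n in counts.items()}

blood = {"neutrophils": 600, "t_cells": 260, "monocytes": 60, "other": 80}
tears = {"neutrophils": 950, "t_cells": 30, "monocytes": 10, "other": 10}

for name, compartment in (("blood", blood), ("closed eye tears", tears)):
    p = proportions(compartment)
    print(f"{name}: T cells {p['t_cells']:.0%}, neutrophils {p['neutrophils']:.0%}")

# The CD4+:CD8+ ratio is a simple quotient of the two subset counts; the
# text reports ~3.2 in the closed eye versus ~2.7 in blood.
cd4, cd8 = 320, 100  # hypothetical closed eye subset counts
print(f"CD4+:CD8+ = {cd4 / cd8:.1f}")
```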
The results of our study suggest that the closed eye is a very dynamic cellular and inflammatory environment, with both neutrophils and T cells, even IL-17-producing T-helper cells, recruited to the ocular surface within 1 hour of sleep. Examination of neutrophil presence at an early time point following eye closure demonstrates neutrophil phenotype changes across 7 hours of sleep, suggesting that neutrophils are originally more phagocytic, but following primary granule degranulation and release of neutrophil elastase, neutrophils may lose some membrane receptor expression and become less activated. The drastic difference between neutrophils and T cells in the closed eye versus blood suggests that the closed eye tear neutrophils could be T-cell suppressive, and may imply an additional new role for these neutrophils in ocular surface T-cell regulation. Altogether, the closed eye is an active inflammatory environment with numerous leukocytes that play a functional, although as yet poorly understood, role in ocular surface homeostasis. Future studies are required to determine how and if these leukocytes are involved in ocular surface disease pathogenesis.

FIGURE 5. Neutrophil dynamics in the closed eye. Following eyelid closure, neutrophils are recruited in large numbers to the ocular surface. These neutrophils are hypothesized to be capable of phagocytosis through the actions of CD14, which binds LPS, and CD16, which binds IgG on the surface of microbes. This is important for the clearance of microbes and pathogens that have accumulated in the eye throughout the day. Neutrophils also have high expression of CD11b, which implies that they have undergone tertiary granule release. As the neutrophils remain on the eye, they begin to release their primary granules, which contain a large amount of neutrophil elastase (NE). As a potent serine protease, neutrophil elastase cleaves the surface membrane receptors CD14 and CD16, and the cells lose the potential to phagocytose. Reduction in CD11b expression may also imply a decrease in phagocytic capability. CD15 downregulation is poorly understood. Throughout eyelid closure, neutrophils do not appear to be activatable by LPS and CaI, and are hypothesized to suppress T-cell recruitment to the ocular surface.
Wnt expression and canonical Wnt signaling in human bone marrow B lymphopoiesis
Background: The early B lymphopoiesis in mammals is regulated through close interactions with stromal cells and components of the intracellular matrix in the bone marrow (BM) microenvironment. Although B lymphopoiesis has been studied for decades, the factors that are implicated in this process, both autocrine and paracrine, are inadequately explored. Wnt signaling is known to be involved in embryonic development and growth regulation of tissues and cancer. Wnt molecules are produced in the BM, and we here ask whether canonical Wnt signaling has a role in regulating human BM B lymphopoiesis. Results: Examination of the mRNA expression pattern of Wnt ligands, Fzd receptors and Wnt antagonists revealed that BM B progenitor cells and stromal cells express a set of ligands and receptors available for induction of Wnt signaling, as well as antagonists for fine tuning of this signaling. Furthermore, different B progenitor maturation stages showed differential expression of Wnt receptors and co-receptors, β-catenin, plakoglobin, LEF-1 and TCF-4 mRNAs, suggesting canonical Wnt signaling as a regulator of early B lymphopoiesis. Exogenous Wnt3A induced stabilization and nuclear accumulation of β-catenin in primary lineage-restricted B progenitor cells. Also, Wnt3A inhibited B lymphopoiesis of CD133+CD10− hematopoietic progenitor cells and CD10+ B progenitor cells in coculture assays using a supportive layer of stromal cells. This effect was blocked by the Wnt antagonists sFRP1 or Dkk1. Examination of early events in the coculture showed that Wnt3A inhibits cell division of B progenitor cells. Conclusion: These results indicate that canonical Wnt signaling is involved in human BM B lymphopoiesis, where it acts as a negative regulator of cell proliferation in a direct or stroma-dependent manner.
Background
In mammals, the early antigen-independent phase of B lymphopoiesis takes place in the intersinusoidal spaces of the bone marrow (BM). Here, the B cell progeny mature from hematopoietic stem cells (HSC) via early lymphoid progenitors (ELP, comprising common lymphoid progenitors and early B), pro-B, pre-B and immature B developmental stages characterized by successive steps in the rearrangement of immunoglobulin genes and consecutive expression of cellular markers [1][2][3]. Using immunohistochemical double-staining, we have revealed earlier that all developmental stages of the B cell lineage in human BM tissue are in close contact with slender CD10+ stromal cells or their extensions [4]. This finding correlates with the consensus that B lymphopoiesis is tightly regulated by signals provided by mesenchymal stromal cells and components of the intracellular matrix in the BM microenvironment in vivo [4][5][6]. However, the elements of this signaling are as yet inadequately identified; stromal factors like IL-7, Flt3 ligand [7], IL-3 [8,9] and SDF1 [10,11] are essential, but not sufficient, for BM B lymphopoiesis [2]. Clearly, there is a need for further characterization of both the stromal phenotype and the autocrine and paracrine factors that participate in the regulation of BM B lymphopoiesis.
Wnt proteins belong to a large and highly conserved family of secreted, cysteine-rich glycoprotein signaling molecules, consisting of 19 members. They are likely to act locally because of their limited solubility [12] and tendency to associate with the cell surface extracellular matrix [13]. Signaling is initiated by Wnt proteins binding to receptors of the Frizzled family (Fzd) on the cell surface. This binding is promiscuous and the ligand/receptor specificities are not yet properly determined. Depending on the particular Wnt/Fzd combination, at least three signaling cascades may be activated. Most studied is the canonical Wnt pathway, which is activated by members of the Wnt1 class (such as Wnt1, Wnt2, Wnt3 and Wnt8) [14]. A key regulatory molecule in this pathway is β-catenin, which in the absence of a Wnt signal is kept low through continuous phosphorylation by glycogen synthase kinase-3β (GSK-3β), resulting in subsequent proteasome-dependent destruction of β-catenin. Binding of Wnt ligands to Fzd receptors and the co-receptors LRP5/6 leads to inactivation of GSK-3β and thereby accumulation of non-phosphorylated β-catenin, which enters the nucleus. Here, β-catenin acts as a coactivator of members of the lymphoid enhancer factor-1 (LEF-1)/T-cell factor (TCF) family of transcription factors to stimulate transcription of Wnt target genes [15]. Activation of Wnt signaling can be inhibited by soluble antagonists, including the Dickkopf (Dkk) family and the soluble Fzd related proteins (sFRP) [16].
Recently, Wnt proteins have drawn attention as a set of factors operating in embryonic development, growth regulation of adult tissues and cancer formation [15,[17][18][19][20]. Moreover, Wnt signaling plays a central role in the communication between HSC and stromal cells [21] as well as in several other stem cell niches [22,23]. Several observations have established direct roles for Wnt signaling in the maturation process where hematopoietic stem cells lose their pluripotency and commit to specific lineages [24][25][26]. LEF-1 and Fzd9 knockout mice show defective B lymphopoiesis [24,27], and Wnt signaling seems to be involved in the development of leukemia [28][29][30] and malignant myeloma [31]. Moreover, in murine B lymphopoiesis this signaling pathway has a stimulatory effect on pro-B cells from fetal liver [24]. As early B lymphopoiesis in mice and humans shows to a certain extent distinct factor dependency [32], and since fetal and adult lymphopoiesis take place in different maturation niches, the aim of the present study was to investigate Wnt signaling in human BM B lymphopoiesis in more detail. We have examined which Wnt signaling pathway molecules are expressed in B progenitor cells and stromal cells from human BM, and analyzed the regulated expression of several Wnt receptors (Fzd and LRP), β-catenin and plakoglobin as well as the central transcription factors LEF-1 and TCF-4 during early B lymphopoiesis. Furthermore, we have investigated the effect of recombinant Wnt3A on progenitor B cells. We found that Wnt3A induced β-catenin stabilization and inhibited in vitro B lymphopoiesis in a coculture with stromal cells by suppression of initial cell proliferation. Thus, canonical Wnt signaling may be involved in human BM B lymphopoiesis.
A distinct set of Wnt ligands, Fzd receptors and Wnt antagonists is expressed in B progenitor cells and stromal cells from human BM
Previous work has demonstrated expression of Wnt5A, Wnt2B and Wnt10B in pooled human BM populations [26]. However, the expression pattern of Wnt ligands, Fzd receptors and Wnt antagonists in human B lineage cells has not been explored. In the absence of available antibodies to detect these large families of proteins, we performed conventional RT-PCR on RNA isolated from FACS-sorted B progenitor cells (CD10+IgM−CD45+) pooled from three different donors, using primers designed specifically to detect mRNA expression of all known Wnt ligands and Fzd receptors as well as the Wnt antagonists Dkk1, Dkk4, sFRP1-4 and WIF1 (fig. 1 and table 1). In B progenitor cells, Wnt 2B, 5B, 8A, 10A and 16 mRNAs were readily detected. Interestingly, the Wnt16 PCR product had two bands of 520 bp and 233 bp, respectively (fig. 1). The 520 bp band represents the full-length form and the 233 bp band represents a possible splice variant lacking exon 3, potentially giving rise to a truncated Wnt16 form. In addition, expression of several other Wnt mRNAs was detectable, however less readily (table 1). The Fzd receptors showed on average much higher mRNA expression levels than the Wnts; Fzd2, 3, 4, 5, 6 and 9 mRNAs were easily detectable in the B progenitor population, as demonstrated by strong PCR bands. Fzd1 and Fzd7 mRNA expression was also demonstrated, but at lower levels than the other Fzds (table 1). We also detected expression of the Wnt antagonists Dkk1, Dkk4, sFRP4 and WIF1 mRNAs in the BM B progenitor cells (fig. 1 and table 1). Of these, sFRP4 mRNA was most readily detectable, suggesting the highest expression level. sFRP2 and sFRP3 mRNAs were variably detected (table 1), suggesting low expression levels.

Figure 1: mRNA expression analyses of Wnt ligands, Fzd receptors and Wnt antagonists.
RT-PCR performed on RNA from BM stromal cells showed expression of Wnt2B, Wnt5A, Wnt5B and Wnt8B. mRNA expression of Wnt9B was also demonstrated in these cells, although at lower levels. Moreover, Fzd3, 4 and 6 mRNAs were detected in BM stromal cells, as well as expression of the Wnt antagonists Dkk1, sFRP2 and sFRP3 mRNAs (table 1).
The results showed regulation of several of the important Wnt-signaling molecules, and different expression profiles were recognizable (fig. 2). mRNA levels of the plasma membrane receptors LRP5, LRP6, Fzd5 and Fzd6 dropped considerably as the cells developed from small pre-B cells into immature B cells. Furthermore, Fzd5 mRNA levels were strongly up-regulated as the cells committed to the B lineage (from ELP to pro-B), with a further up-regulation as the cells differentiated to pre-B cells. Fzd2 and Fzd9 mRNA levels, on the other hand, seemed to increase somewhat throughout the differentiation, with highest levels in small pre-B and immature B cells. In small pre-B cells, the mRNA levels of LRP5 and Fzd9 were about two-fold higher than in the large cycling pre-B cells. The expression levels of all receptors were low compared to the expression levels of, e.g., LEF-1 and β-catenin, indicating relatively low mRNA expression levels. Fzd3 and Fzd4 mRNAs were not detectable with the amount of RNA template used in these assays.
The mRNA expression of β-catenin and plakoglobin showed little variation as the cells differentiated. β-catenin mRNA was evenly expressed in ELP, pro-B, large pre-B and immature B cells, with a small increase (near two-fold) in small pre-B cells. Plakoglobin mRNA levels, in contrast, decreased after the pro-B stage (fig. 2).

Figure 2: Real-time PCR analysis of relative mRNA expression levels of Wnt pathway molecules in BM B progenitor sub-populations.

LEF-1 and TCF-4 mRNA expression is highly regulated during early B lymphopoiesis, as shown previously by microarray analysis (Hystad ME et al., manuscript in preparation, and [33]). Our results showed a strong up-regulation of LEF-1 mRNA as the cells commit to the B lineage, and the expression was kept continuously high until the cells became immature B cells, where the level was reduced to the same as in uncommitted progenitors. Here, low LEF-1 expression was further confirmed by the absence of LEF-1 protein in B lymphocytes from peripheral blood (results not shown). The relative TCF-4 mRNA levels, on the other hand, were high in both ELP and pro-B cells, and decreased (up to 5-fold) as the cells passed through Ig rearrangement (pre-B to immature B cells) (fig. 2). It should be noted that LEF-1 mRNA expression was detected 5-8 cycles earlier than TCF-4 mRNA expression, indicating that LEF-1 mRNA is much more abundant than TCF-4 mRNA.
Wnt3A induces β-catenin stabilization and accumulation in BM B progenitor cells
Our data demonstrated that human BM B progenitor cells express a set of central players in the canonical Wnt signaling pathway, potentially allowing a Wnt signal to be conveyed. To further examine whether B progenitor cells can respond to treatment with Wnt proteins, we looked for the stabilization and subsequent accumulation of the vital signaling molecule β-catenin in CD10+ B progenitor cells. When these cells were treated with Wnt3A, the amount of β-catenin increased substantially compared to the very low levels in untreated cells (fig. 3). Although there were some donor variations, the results showed that the B progenitor cells are able to receive and transduce a signal through the canonical Wnt pathway.
Wnt3A inhibits human in vitro B lymphopoiesis
Having identified expression of central molecules of the canonical Wnt pathway in BM B progenitor cells, we performed two variants of B lymphopoiesis assays to investigate whether Wnt signaling (using recombinant Wnt3A) had a functional effect on B lymphopoiesis in vitro. Both assays were based on coculture with the murine stromal cell line MS-5. In assay 1, hematopoietic progenitor cells (HPC) were tested for their capacity to develop into B lineage cells, whereas in assay 2, the survival and expansion of B progenitor cells were measured. At the endpoint of the assays, each sample was subjected to quantitative flow cytometry and the total number of cells positive for the pan B cell marker CD19 was measured. In assay 2, analysis of the differentiation marker CD34 was included.
Initial analyses demonstrated that Wnt3A had an inhibitory effect when BM HPC (CD133+CD10−) were grown on stromal cells for 3 weeks under conditions that favored B lymphopoiesis (assay 1). The number of CD19+ cells in the samples treated with Wnt3A was five times lower than the number measured in the control samples (fig. 4A). The inhibited B lymphopoiesis could result from Wnt3A suppressing differentiation of the HSC pool found in the HPC population [34], from an indirect effect mediated by the stromal cells [35], or, alternatively, from Wnt3A targeting more committed lymphoid progenitor cells. To examine the latter possibility in more detail, we tested whether Wnt3A acted on later stages of in vitro B lymphopoiesis. BM B progenitor cells (CD10+) were grown on stromal cells in the presence of Wnt3A or medium only for 2 weeks (assay 2). In accordance with the results from the assays using HPC, a near 50% reduction on average in the total number of CD19+ cells was demonstrated in samples treated with Wnt3A compared with control (fig. 4B). When added every third day, both sFRP1 and Dkk1 were able to counteract the effect of Wnt3A almost completely, demonstrating a specific effect of Wnt3A on in vitro B lymphopoiesis (fig. 4B). Similar results were obtained using Wnt3A protein from another source, Wnt3A conditioned medium (table 2). Moreover, the effect was independent of the source of stromal cells, as the use of primary human BM stromal cells (BMS) as a supportive layer did not change the outcome of the experiment (table 2). To check whether Wnt3A affected distinct early B subpopulations differently, the cells in assay 2 were additionally analyzed for expression of the CD34 differentiation marker to distinguish between pro-B and pre-B cells. The relative frequency of CD34+ cells (pro-B) decreased from 38% before culturing (day 0) to approximately 30% and 15% after one and two weeks of culturing, respectively. This decrease was independent of treatment with or without Wnt3A (fig. 4C). Furthermore, separation of the pre-B population into large cycling and small resting pre-B cells by surface expression of CD20 [33] revealed an inhibitory effect of Wnt3A on all subpopulations (results not shown). Thus, we conclude that Wnt3A does not affect the relative proportions of different BM B subpopulations, but has a general inhibitory effect on pro-B, pre-B and immature B cells in a stromal coculture.
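As a simple illustration of how the fold reduction and percent inhibition above are derived from the quantitative flow cytometry counts, the short sketch below uses hypothetical CD19+ cell numbers; the study's actual values are shown in fig. 4.

```python
# Hypothetical CD19+ cell counts per well; the study's actual values are
# shown in fig. 4A/B.
def fold_reduction(control, treated):
    """Fold decrease in cell number caused by treatment."""
    return control / treated

def percent_inhibition(control, treated):
    """Percentage of cells lost relative to the untreated control."""
    return 100 * (1 - treated / control)

# Assay 1 (HPC): roughly 5-fold fewer CD19+ cells with Wnt3A.
print(fold_reduction(control=25_000, treated=5_000))     # -> 5.0

# Assay 2 (CD10+ B progenitors): near 50% reduction with Wnt3A.
print(percent_inhibition(control=8_000, treated=4_200))  # -> 47.5
```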
Wnt3A inhibits BM B progenitor cell division in vitro
The inhibitory effect of Wnt3A on in vitro B lymphopoiesis could be explained by increased apoptosis, an inhibitory effect on proliferation, or both. However, measurements of apoptosis in cells cultured without stromal cells for 1, 2 or 3 days showed no effect of Wnt3A (results not shown), suggesting an effect on proliferation only. To verify this, we used high-resolution cell division tracking to study the initial effects of Wnt3A on B progenitor cells grown on a stromal layer. Sorted, CFSE-labeled CD10+ B progenitor cells were cocultured with MS-5 for 3 days in the presence of Wnt3A or medium only, and examined by flow cytometry for the number of cell divisions as well as the surface markers CD34 and CD19. The data clearly demonstrated that Wnt3A inhibited the initial divisions of B progenitor cells taking place in the coculture (fig. 5A).
Discussion
Several studies have identified the canonical Wnt pathway as a regulator of the homeostasis of human and murine HSC and hematopoietic progenitor cells [26,34,36]. Furthermore, knockout studies (LEF-1 and Fzd9) in mice have indicated a central role for Wnt signaling in B lymphopoiesis [24,27]. The Wnt pathway also seems to be involved in the development of leukemia [28][29][30]. In the present work, we wanted to study in more detail the implications of canonical Wnt signaling in human BM B lymphopoiesis. Here, we describe that a set of Wnt ligands, Fzd receptors and Wnt antagonists is expressed in BM B progenitor cells, allowing a Wnt signal to be conveyed and modulated in these cells. We demonstrate regulated expression of several Wnt receptors, β-catenin and plakoglobin as well as the transcription factors LEF-1 and TCF-4 mRNAs during early differentiation steps in the B cell lineage, supporting the hypothesis that Wnt signaling is active in BM B lymphopoiesis. Furthermore, we show that canonical Wnt signaling, as measured by the accumulation of β-catenin, can be induced in human BM B progenitor cells. Finally, we demonstrate that Wnt3A inhibits human stroma-dependent B lymphopoiesis and that this effect is a consequence of decreased cell proliferation.
We show that CD10+ human B progenitor cells express a set of Wnt ligand mRNAs (2B, 5B, 8A, 10A and 16), of which Wnt16 is of particular interest, since this gene is activated by the E2A-Pbx1 translocation in some cases of acute lymphocytic leukaemia (ALL) [28]. However, several pre-B leukemia cell lines studied [28] do not express Wnt16, suggesting a distinct role for this factor in early B lymphopoiesis that is turned off during leukemiagenesis, except in cases where Wnt16 is aberrantly activated by the E2A-Pbx1 fusion protein. Further, we demonstrate that primary BM stromal cells express mRNA of several Wnt ligands, including Wnt2B, Wnt5A, Wnt5B, Wnt8B and Wnt9B. This is partly in accordance with previous studies [24,26]. Taken together, these results show that both B progenitor cells and stromal cells express Wnt ligands, allowing autocrine as well as paracrine Wnt signaling in the BM microenvironment. Fzd9 knockout mice show defective B lymphopoiesis, particularly at the pre-B cell stage [27]. In contrast to this, our results show that the large cycling pre-B cells express lower levels of LRP5, LRP6, Fzd6, Fzd9, β-catenin and plakoglobin than the small resting pre-B cells.
Although one should be cautious in trying to predict functional consequences from mRNA expression data, this trend suggests that Wnt signaling is not likely to be involved in a positive regulation of cycling of the large pre-B cells after Ig heavy chain rearrangement. And even though the absolute expression levels of the receptor mRNAs are low, these data suggest that during a narrow window of development comprising pro- and pre-B cells, B progenitor cells might be targets for Wnt signaling through these receptors.
To be able to convey a Wnt signal, the cells have to express either of the two important molecules β-catenin or plakoglobin. Our results show that levels of β-catenin mRNA change little during the differentiation. Although it has been demonstrated that levels of cytoplasmic β-catenin protein may vary throughout the development of thymocytes [37], these variations may not necessarily be reflected by the mRNA levels. In fact, as β-catenin is needed both for signaling purposes as well as for adhesion purposes, the mRNA levels may have to be kept relatively stable. Plakoglobin mRNA, on the other hand, decreases after the pro-B differentiation level. This corresponds to the observations made in developing murine thymocytes [37], where plakoglobin is down-regulated at the level of immature single positive thymocytes, suggesting that plakoglobin may play a central, but hitherto unexplored role in conveying a Wnt signal during lymphopoiesis. In fact, the lack of effect of knocking down β-catenin in early hematopoiesis, including B and T lymphopoiesis [38], prompted the authors to suggest that plakoglobin may stand in for β-catenin in this respect.
The LEF-1/TCFs are directly activated by canonical Wnt signaling, and LEF-1 knockout mice show defects in pro-B cell proliferation and survival [24]. However, it cannot yet be ruled out that this effect might be a result of abolishment of the repressive functions or other non-Wnt related activities of LEF-1 [15]. Here, we have verified microarray data showing regulation of LEF-1 and TCF-4 during B lymphopoiesis (Hystad ME et al., manuscript in preparation, and [33]). Interestingly, it has been reported that LEF-1 is a target gene for the B lymphopoiesis key transcription factor Pax-5 [39]. Moreover, LEF-1 interacts with Pax-5 and c-Myb to activate the Rag-2 promoter [40], but the precise role of LEF-1 in B lymphopoiesis is still elusive. In contrast to LEF-1, we found TCF-4 mRNA levels to be high in ELP and pro-B cells, and lower in the more mature pre-B and immature B populations. Although expressed at lower levels, one could speculate that TCF-4 steps in for LEF-1 in the earliest lymphoid progenitors before LEF-1 is properly switched on, potentially conveying a Wnt signal or, alternatively, acting as a transcriptional repressor of B lineage genes before commitment. These are topics for further studies.
Wnt antagonists play important roles in preventing or fine tuning the Wnt signal [16]. Our data show expression of the Wnt antagonists Dkk1, Dkk4, sFRP4 and WIF1 mRNAs in B progenitor cells. Dkk1, sFRP2 and sFRP3 were expressed in bone marrow stromal cells. Of these factors, Dkk1 in particular is known to be involved in a feedback loop to adjust or shut down canonical Wnt signaling [41]. It is likely that these factors are important in adjusting the incoming Wnt signals in the bone marrow microenvironment, where several cell types are able to express a wide range of ligands and Wnt receptors.
The inhibitory effect of Wnt3A on the generation and cell division of B progenitor cells in vitro, with regard to both pro- and pre-B cells, is in contrast to several reports on the functional effects of canonical Wnt signaling in mice. In murine HSC [34], developing thymocytes [25] and a wide range of cancer cells [31,42], elevated levels of β-catenin lead to increased cell proliferation. Furthermore, in fetal murine pro-B cells [24], Wnt3A conditioned medium leads to increased BrdU incorporation. Our divergent results may be due to different species, microenvironments and/or cell contexts. For instance, murine and human B lymphopoiesis show, to a certain extent, differing factor dependencies [32]. However, by culturing murine BM B progenitor cells, we have not been able to demonstrate increased cell proliferation in the presence of Wnt3A (results not shown). Thus, we suspect the Wnt response to be different in fetal and adult B progenitor cells, potentially affected by the cellular microenvironment and/or context. Indeed, the fetal pro-B cells are exposed to the microenvironment of the liver, which is very different from that of the BM. For instance, several regulators of the Wnt pathway are more highly expressed in fetal liver stroma than in BM stroma [43], which suggests that Wnt signaling might be regulated in a different manner and have a different role in the fetal liver than in the BM. Another important aspect that has to be taken into consideration is that different Wnt ligands, although able to activate canonical Wnt signaling, show distinct activities [44]. In addition, there may also be species and location differences. However, as mentioned above, Cobas et al. have demonstrated a lack of an essential role for β-catenin in BM hematopoiesis, including proliferation of B lymphocytes [38]. Thus, in contrast to findings in the fetal liver, our results may very well represent a physiological situation in the adult organism, where Wnt signaling via β-catenin is not essential for B lymphocytes, but may be used to fine tune the delicate balance between proliferation, differentiation and apoptosis taking place during early BM B lymphopoiesis.
In support of our data on an inhibitory effect of Wnt3A on cell division, it has been reported that canonical Wnt signaling hampers fibroblast cell proliferation through cell cycle blocks, potentially mediated via p53 [45]. Moreover, Wnt signaling inhibits proliferation and regulates cell-cycle arrest at distinct stages of Drosophila wing development [46]. Thus, it is likely that the cellular context, in some cases represented by the ability of a central regulatory molecule like p53 to respond, will affect how the cells react to vital stimuli like Wnt. It has been speculated that aberrant p53 is necessary to convey the strong tumor promoting effect of abnormal Wnt signaling seen in colon cancer [47,48]. It is also interesting that Wnt5A has been found to inhibit B cell proliferation and can function as a tumor suppressor in hematopoietic tissue, albeit via the non-canonical Wnt/Ca2+ pathway [49].

Alternatively, the inhibitory effect of Wnt3A on B lymphopoiesis could be indirect, mediated via the supporting stromal cells [35]. The BM microenvironment is composed of a heterogeneous population of cells including fibroblasts, adipocytes, endothelial cells and osteoblasts, all derived from a common mesenchymal precursor [50]. In particular, the role of Wnt signaling in adipogenesis may be relevant here, as it has been demonstrated that Wnt10B [51,52] inhibits adipogenesis, and there seems to be a positive correlation between adipogenesis and hematopoiesis [52]. This emphasizes the complexity of the interactions in the B lymphopoiesis maturation niche and opens the possibility that B progenitor cells may manipulate the stromal support via these Wnt factors. However, it is not uncommon in developmental niches that morphogenic signals have the potential to act on several cells in the microenvironment. Therefore, it has been suggested that Wnt signaling might influence the HSCs both directly and indirectly by maintenance of the cellular elements of the stem cell niche [21]. In line with this theory, several studies have demonstrated expression of multiple Wnt mRNAs in thymocytes and the thymic microenvironment. It is likely that particular Wnts serve distinct roles; thus, cell-specific effects may be achieved by "playing the Wnt repertoire" as well as through combinations with other signaling events.
Conclusion
In this study, we have demonstrated mRNA expression of several Wnt ligands, Fzd receptors and Wnt antagonists in human BM B progenitor cells, and regulated expression of Fzd receptors and co-receptors, β-catenin, plakoglobin, LEF-1 and TCF-4 mRNAs in these cells during differentiation. Furthermore, we found that Wnt3A induced an accumulation of β-catenin in the BM B progenitor cells and inhibited in vitro B lymphopoiesis. These results suggest the Wnt/β-catenin pathway as a negative regulator of human stroma-dependent B lymphopoiesis. This is in contrast to observations on Wnt effects in fetal murine pro-B cells, and may represent a distinction between the fetal liver and adult BM microenvironments.
Primary cells and cell lines
BM aspirates were obtained from the iliac crest of normal adult volunteers (approved by the Regional Ethical Committee). Mononuclear cells (MNC) were separated by Ficoll-Hypaque density gradient centrifugation (Lymphoprep, Nycomed, Norway). CD10+ B progenitor cells (ELP, pro-B and pre-B cells) were isolated from BM MNC using Dynabeads® M-450 Epoxy (Dynal, Oslo, Norway) directly coated with anti-CD10 mAb (clone RFAL-3, Sigma-Aldrich, UK) followed by detachment using CD4/CD8 DETACHaBEAD (Dynal, Norway) according to the producer's protocol. The CD10+ cells were of 90-95% purity; they were CD45+ and contained 4-7% IgM+ cells (immature B cells). CD34+ and CD19+ cells were isolated in a similar manner from MNC using Dynabeads® M-450 conjugated with anti-CD34 or anti-CD19 mAb and CD34 or CD19 DETACHaBEAD (Dynal, Norway), respectively. CD133+CD10− cells (HPC) were isolated from the CD10− fraction of BM MNC (see above) using the MACS system (magnetic cell sorting of human cells) and a CD133 Cell Isolation Kit (Miltenyi Biotec, Germany). Briefly, the mononuclear cells were magnetically labeled with CD133 MicroBeads and separated on a column placed in the magnetic field of a MACS Separator. The magnetically labeled CD133+ cells were retained in the column while the unlabeled CD133− cells passed through. After removal of the column from the magnetic field, the retained CD133+ cells were eluted as the positively selected cell fraction. The CD133+ cells were typically of 97-98% purity. In monoculture, the cells were kept in X-VIVO 15™ (BioWhittaker, Walkersville, USA) with 0.1% detoxified BSA.
The murine stromal cell line MS-5 [53] was cultured in α-MEM with 10% FCS and 100 µg/ml of penicillin and streptomycin (PAA Laboratories, Pasching, Austria) and was passaged twice a week. Cultures of human BM stromal (BMS) cells were established as previously described [54]. Briefly, total BM MNC depleted of CD34+ cells were seeded into 75-cm² tissue culture flasks in RPMI-1640 with 10% FCS, penicillin and streptomycin. Non-adherent cells were washed off after 2 hours at 37°C, and the adherent cells were cultured in EX-CELL 610 (JRH Biosciences, USA) with 10% FCS, penicillin and streptomycin. The BMS cells were passaged twice before they were used for experiments.
PCR analysis
Total RNA from freshly isolated and sorted BM CD45+CD10+IgM− cells was isolated using the Absolutely RNA™ RT-PCR Mini-prep kit (Stratagene Europe, Amsterdam, Netherlands) according to the manufacturer's instructions. RNA from human fetal brain was purchased from BioChain Institute, Inc., USA. cDNA was synthesized from 1 µg total RNA primed with random hexamers in a 50 µl reaction using TaqMan Reverse Transcription Reagents (Applied Biosystems, Foster City, CA, USA). Control reactions lacking reverse transcriptase were always included. RT-PCR of 20 ng of total RNA was performed with a Titanium polymerase (BD Biosciences, USA) in a 25 µl reaction for 37 cycles of 95°C for 30 seconds, 60°C for 30 seconds, and 68°C for 30 seconds. The primer sequences used to identify Wnt, Fzd and Wnt antagonist gene expression are listed in Table 3. The primer sequences were partly designed specifically for this work and partly copied from previous expression analyses [55]. For all mRNAs expressed, the amplified products have been sequenced and confirmed to represent the correct target gene.
Real-time PCR
Total RNA from 5-20,000 freshly isolated and sorted BM B progenitor cells (ELP, pro-B, large pre-B, small pre-B and immature B cells) was purified using the Absolutely RNA™ RT-PCR Micro-prep kit (Stratagene Europe, Amsterdam, Netherlands) according to the manufacturer's instructions. cDNAs were synthesized from total RNA primed with random hexamers using TaqMan Reverse Transcription Reagents (Applied Biosystems, Foster City, CA, USA). LEF-1 and TCF-4 (gene name TCF7L2) mRNA expression was analyzed by real-time quantitative RT-PCR using TaqMan technology according to the manufacturer's procedure (Applied Biosystems). Predeveloped assay reagents including primers and probes for LRP5 (Hs00182031_m1), LRP6 (Hs00233935_m1), Fzd2 (Hs00361432_s1), Fzd5 (Hs00361869_g1), Fzd6 (Hs00171574_m1), Fzd9 (Hs00268954_s1), β-catenin (CTNNB1, Hs00170025_m1), plakoglobin (JUP, Hs00158408_m1), LEF-1 (Hs00212390_m1) and TCF-4 (Hs00181036_m1) mRNAs as well as the endogenous control phosphoglycerate kinase 1 (PGK1) (Hs99999906_m1) were supplied by Applied Biosystems, and the PCR reactions were performed according to the manufacturer's instructions using TaqMan Universal PCR Master Mix. Each measurement was performed in duplicate and the expression level for each gene was calculated using the standard curve method for relative quantitation of gene expression, as described by the manufacturer (ABI Prism 7700 Sequence Detection System, User Bulletin 2, PE Applied Biosystems, Foster City, CA). Total RNA from the ALL cell lines Reh and Nalm6 as well as total RNA from human fetal brain were used for standard curves. Expression values for PGK1 mRNA, initially determined to be a suitable endogenous control for BM populations, were used for normalization of the expression levels. The expression level of each gene in pro-B cells was used as a calibrator, and the expression in the other populations was calculated relative to the expression in pro-B cells.
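For readers unfamiliar with the standard curve method of relative quantitation, the sketch below (Python with NumPy) illustrates the calculation scheme: quantities are interpolated from a dilution-series curve, normalized to the endogenous control PGK1, and expressed relative to the pro-B calibrator. This is not the authors' pipeline, and all Ct values and dilutions are hypothetical.

```python
import numpy as np

# All Ct values and dilutions below are hypothetical; only the calculation
# scheme follows the standard-curve method described in the text.

def fit_standard_curve(log10_input, ct_values):
    """Fit Ct = slope * log10(input amount) + intercept from a dilution series."""
    slope, intercept = np.polyfit(log10_input, ct_values, 1)
    return slope, intercept

def quantity(ct, slope, intercept):
    """Interpolate a sample's relative input quantity from its Ct value."""
    return 10 ** ((ct - intercept) / slope)

# Ten-fold dilution series of standard RNA (e.g. from the Reh cell line).
log10_dilutions = np.log10([100.0, 10.0, 1.0, 0.1])
target_curve = fit_standard_curve(log10_dilutions, [22.1, 25.5, 28.9, 32.3])
pgk1_curve = fit_standard_curve(log10_dilutions, [18.0, 21.4, 24.8, 28.2])

def normalized(ct_target, ct_pgk1):
    """Target quantity normalized to the endogenous control PGK1."""
    return quantity(ct_target, *target_curve) / quantity(ct_pgk1, *pgk1_curve)

# Express each population relative to the pro-B calibrator.
pro_b = normalized(ct_target=27.0, ct_pgk1=22.0)
small_pre_b = normalized(ct_target=25.9, ct_pgk1=22.1)
print(f"small pre-B vs pro-B: {small_pre_b / pro_b:.2f}-fold")
```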
Western blot analysis
The cells were treated with Wnt3A or vehicle only (PBS with 0.1% detoxified BSA) for 3 hours, and total cell lysates were analyzed by Western blot using 10% SDS polyacrylamide gels from Pierce (Rockford, USA) as described earlier [56]. The filters were pretreated with PBS containing 0.1% Tween-20 (PBS-T) and 5% dry milk, incubated overnight with anti-β-catenin Ab or for 1 hour with anti-β-actin Ab, and then washed 2 × 15 min in PBS-T. The filters were then incubated with the secondary Ab, rabbit anti-mouse IgG1-HRP or rabbit anti-goat IgG-HRP, respectively, for 60 minutes at room temperature and washed 2 × 15 min in PBS-T before the proteins were visualized using ECL+ Western Blotting Detection Reagents from Amersham Biosciences (Piscataway, NJ, USA).
Hematopoietic cell-stromal cell coculture
Assay 1: HPC (CD133+CD10−) were cultured in 24-well tissue plates (2000 cells/well) pre-seeded with MS-5 (2.5 × 10^4 cells/well). Assay 2: B progenitor cells (CD10+) were cultured in 96-well tissue plates (8000 cells/well) pre-seeded with MS-5 (1 × 10^4 cells/well). Both sets of cocultures were in α-MEM containing 1% FCS and 100 µg/ml of penicillin and streptomycin, supplemented with cytokines (for HPC: SCF, 25 µg/ml and G-CSF, 2.5 µg/ml; for B progenitor cells: IL-7, 50 ng/ml, IL-3, 20 ng/ml and FL, 50 ng/ml). In one additional experiment, the wells were pre-seeded with BMS (1 × 10^4 cells/well) in EX-CELL 610 with 1% FCS, 100 µg/ml of penicillin and streptomycin, and cytokines (IL-7, 50 ng/ml, IL-3, 20 ng/ml and FL, 50 ng/ml). Where indicated, Wnt3A (10-100 ng/ml), Dkk1 (500 ng/ml) or sFRP1 (2 µg/ml) was added to the cultures. 50% of the medium was replaced weekly. After 3 (HPC) or 2 (B progenitor cells) weeks of culturing, single wells were harvested by trypsination and the B progenitor cells were immunophenotyped using the pan B cell marker CD19 as well as the CD34 differentiation marker and subjected to quantitative analyses (see above). Wnt3A conditioned medium and control medium were collected from L-Wnt3A cells and control non-transfected L-cells, respectively (purchased from the American Type Culture Collection (ATCC), Manassas, USA), according to the manufacturer's procedure.

High-resolution cell division tracking
BM CD34+ and CD19+ cells were labeled with 5- and 6-carboxyfluorescein diacetate succinimidyl ester (CFSE; Molecular Probes, Eugene, OR, USA) as described earlier [57]. To allow unbound dye to diffuse from the cells, labeled cells were seeded on a confluent layer of MS-5 and incubated for 18-24 hours at 37°C in α-MEM with 1% FCS. Subsequently, the cells were stained with a CD10-APC mAb and CD10+CFSE-mean cells were sorted on a BD FACSDiVa flow cytometer (Becton Dickinson). Sorted cells (1.5-2 × 10^4/well) were cultured in 48-well tissue plates pre-seeded with MS-5 (2 × 10^4 cells/well) supplemented with IL-7 (50 ng/ml) and FL (50 ng/ml) and treated with Wnt3A (25-400 ng/ml), Wnt3A + sFRP1 (2 µg/ml) or medium only. IL-3 was left out of these cultures, because earlier experiments showed that IL-7 and FL were sufficient to support survival and proliferation of the B progenitor cells (data not shown). After three days the cells were harvested by trypsination and analyzed on a FACSCalibur flow cytometer for the number of cell divisions as well as expression of the cell surface markers CD34 and CD19.
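The arithmetic underlying CFSE-based division tracking is simple: CFSE fluorescence halves with every cell division, so a cell's generation can be estimated as log2 of the ratio between the undivided-parent intensity and the observed intensity. The sketch below illustrates this with hypothetical fluorescence values; it is not the gating strategy actually applied on the FACSCalibur.

```python
import math
from collections import Counter

# Hypothetical flow-cytometry CFSE intensities; each division halves the dye.
def generation(parent_mfi, cell_mfi):
    """Estimate how many divisions a cell has undergone from CFSE dilution."""
    return max(0, round(math.log2(parent_mfi / cell_mfi)))

parent_mfi = 10_000.0  # undivided (generation 0) intensity of sorted cells
observed = [10_200, 5_100, 2_600, 2_400, 1_250, 9_800]

divisions = Counter(generation(parent_mfi, mfi) for mfi in observed)
for gen in sorted(divisions):
    print(f"generation {gen}: {divisions[gen]} cells")
```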
Statistical analysis
The statistical significance of differences between groups was determined using the paired, two-tailed Wilcoxon nonparametric test in SPSS 11.5.
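As an illustration only, the same paired two-tailed Wilcoxon test can be run in SciPy instead of SPSS 11.5; the paired well counts below are invented for the example.

```python
from scipy.stats import wilcoxon

# Hypothetical paired CD19+ counts from matched wells (medium vs. Wnt3A).
control = [5200, 4800, 6100, 5500, 4900, 5700]
wnt3a = [2600, 2500, 3300, 2700, 2400, 3100]

stat, p = wilcoxon(control, wnt3a, alternative="two-sided")
print(f"W = {stat}, p = {p:.3f}")
```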
The Host-Pathogen Interactions and Epicellular Lifestyle of Neisseria meningitidis
Neisseria meningitidis is a gram-negative diplococcus and a transient commensal of the human nasopharynx. It shares and competes for this niche with a number of other Neisseria species including N. lactamica, N. cinerea and N. mucosa. Unlike these other members of the genus, N. meningitidis may become invasive, crossing the epithelium of the nasopharynx and entering the bloodstream, where it rapidly proliferates causing a syndrome known as Invasive Meningococcal Disease (IMD). IMD progresses rapidly to cause septic shock and meningitis and is often fatal despite aggressive antibiotic therapy. While many of the ways in which meningococci survive in the host environment have been well studied, recent insights into the interactions between N. meningitidis and the epithelial, serum, and endothelial environments have expanded our understanding of how IMD develops. This review seeks to incorporate recent work into the established model of pathogenesis. In particular, we focus on the competition that N. meningitidis faces in the nasopharynx from other Neisseria species, and how the genetic diversity of the meningococcus contributes to the wide range of inflammatory and pathogenic potentials observed among different lineages.
Nme has also been associated with urethritis (Vienne et al., 2003;Bazan et al., 2017). Some reports also indicate that Nme may induce meningitis via infection of the olfactory nerve, bypassing the bloodstream and leading to meningitis in the absence of bacteraemia (Sjölinder and Jonsson, 2010;Delbaz et al., 2020).
Transmission of Nme between people occurs via large respiratory droplets inhaled directly by other individuals in close proximity, although minor modes of transmission via urogenital and anorectal secretions have recently been detected (Ladhani et al., 2020). It is unclear whether fomites play a role in transmission; however, under laboratory conditions the meningococcus can survive on surfaces for up to a day (Swain and Martin, 2007). Human organoid models have shown that, upon acquisition, meningococci preferentially bind to the microvillous surface of non-ciliated cells of the human nasopharynx, located at the back of the nose and above the oropharynx (Stephens et al., 1983). After initial contact, the bacteria form microcolonies which stably colonise the epithelial surface. Carriage of a single meningococcal isolate may persist for 5-6 months before clearance, depending on the host and isolate in question (Caugant and Maiden, 2009). The mechanism by which meningococci are cleared from the nasopharynx is unknown but includes the induction of natural immunity (Pollard and Frasch, 2001). Carriage prevalence varies by age, peaking at approximately 20% in the 15-20-year-old age bracket before gradually declining in later adulthood (Christensen et al., 2010). The greatest risk factors for Nme carriage are age, high-density living situations such as those observed in military and university accommodation and at mass gatherings (Peterson et al., 2018), sore throat, season (Cooper et al., 2019), and behaviours including smoking, alcohol consumption, nightclub attendance, and having multiple kissing partners (MacLennan et al., 2021). In the meningitis belt of sub-Saharan Africa, the arid conditions experienced during the dry season significantly increase the risk of meningococcal carriage (Cooper et al., 2019). Viral infection, particularly with Influenza virus A, also predisposes an individual to carriage (Tuite et al., 2010).
The prevalence of IMD is correlated with meningococcal carriage in adolescents, who drive transmission in the wider population due to increased participation in risk behaviours (MacLennan et al., 2021). The prevalence of IMD also fluctuates within populations and between geographic regions. Endemic disease (age-standardised rate <10/100,000 population/year) is sporadic IMD caused by unrelated strains as they circulate in the population (Jafri et al., 2013). Epidemics (age-standardised rate >100/100,000 population/year) typically occur upon the introduction of a strain that is antigenically distinct from local carriage isolates (Jafri et al., 2013). This manifests as outbreaks characterized by transmission networks of close contacts, with IMD stemming from infection by the same strain, or as waves of hyper-endemicity in which increased incidences of IMD may last for a decade or more in a given area.
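The endemic and epidemic thresholds quoted above translate directly into a classification rule, sketched below. The label for the intermediate 10-100 band is our reading of the hyper-endemicity described in the text, not a formal definition.

```python
# Age-standardised IMD incidence thresholds from the text, in cases per
# 100,000 population per year. The middle band's label is our reading of
# the "hyper-endemicity" described above, not a formal definition.
def classify_imd_incidence(rate_per_100k):
    if rate_per_100k < 10:
        return "endemic (sporadic disease)"
    if rate_per_100k > 100:
        return "epidemic"
    return "elevated / hyper-endemic"

for rate in (2.5, 45.0, 250.0):
    print(f"{rate}/100,000/year -> {classify_imd_incidence(rate)}")
```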
Strains are typed according to two schemes: into serogroups based on capsular polysaccharide composition (Harrison et al., 2013), and into sequence types (ST) based on the alleles of seven housekeeping genes using multi-locus sequence typing (MLST) (Maiden et al., 1998). Lineages with STs sharing four or more alleles are grouped into a single clonal complex (cc). Of the twelve known serogroups (A, B, C, E, H, I, K, L, W, X, Y, and Z), six (A, B, C, W, X, and Y) are associated with strains causing the majority of IMD outbreaks (Acevedo et al., 2019). Certain clonal complexes are also associated with epidemics and outbreaks on a global scale. These lineages, of which there are eleven, have been termed the hyperinvasive lineages (Caugant and Maiden, 2009). For example, serogroup A isolates from cc5 (MenA:cc5) were the cause of disease in the African meningitis belt from 1988 to 2001 (Nicolas et al., 2005;Caugant and Brynildsrud, 2020), MenB:cc32 caused outbreaks in the UK during the 1980s (Abbott et al., 1985), and MenB:cc41/44 was responsible for outbreaks in New Zealand during the 1990s (Oster et al., 2005). More recently, MenW:cc11 has been the cause of a global outbreak which began during the Hajj pilgrimage in 2000 (Taha et al., 2000), leading to subsequent epidemics in the African meningitis belt and increased outbreaks in South Africa, South America, Europe, and Australia (Caugant and Brynildsrud, 2020). These MenW:cc11 strains resulted from a capsule switching event in which the genes for the synthesis of a serogroup W capsule were acquired by an ancestral MenC:cc11 isolate following a sequence of homologous recombination events (Mustapha et al., 2016). Capsule switching events represent a potential mechanism by which meningococcal lineages may evade vaccine-derived immunity against capsule (Lucidarme et al., 2017).
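The MLST grouping rule described above, in which STs sharing four or more of the seven housekeeping alleles belong to the same clonal complex, can be sketched as follows. The allelic profiles are invented for illustration; real assignments use the central genotypes curated in PubMLST.

```python
# Hypothetical 7-locus allelic profiles; real cc assignment uses the central
# genotypes curated in PubMLST.
def shared_alleles(profile_a, profile_b):
    """Count loci at which two STs carry the same allele."""
    return sum(a == b for a, b in zip(profile_a, profile_b))

def assign_clonal_complex(st_profile, cc_centers, threshold=4):
    """Assign an ST to the first clonal complex sharing >= threshold alleles."""
    for cc, center in cc_centers.items():
        if shared_alleles(st_profile, center) >= threshold:
            return cc
    return "unassigned"

cc_centers = {
    "cc11": (2, 3, 4, 3, 8, 4, 6),
    "cc41/44": (1, 1, 2, 1, 1, 1, 1),
}
query_st = (2, 3, 4, 3, 9, 17, 6)  # shares 5/7 alleles with the cc11 center
print(assign_clonal_complex(query_st, cc_centers))  # -> cc11
```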
Genetic Diversity of N. meningitidis
Large natural-history population-based surveys of meningococcal carriage show that strains belonging to certain clonal complexes are over-represented in IMD versus carriage (Caugant and Maiden, 2009). The IMD disease/carriage ratio (D/C ratio) is used to stratify clonal complexes by their propensity to cause disease. The eleven hyperinvasive lineages, which have been responsible for the majority of IMD epidemics, were found to have an increased D/C ratio compared to other lineages (Caugant and Maiden, 2009). Although the D/C ratio is an observational metric of the association of a genetic lineage with IMD in humans, it demonstrates that lineages differ in their ability to colonise the host and cause invasive disease. This hypothesis was modelled mathematically by Stollenwerk et al. (2004), who predicted that differences in the metabolism and virulence of meningococcal clonal complexes could explain these observations. Studies examining small genome datasets suggest that Nme has a large common array of genomic islands, but that these are present in unique combinations in each clonal complex (Snyder et al., 2001;Stabler et al., 2005;Hotopp et al., 2006;Schoen et al., 2008;Marri et al., 2010). While a subset of these genomic islands has confirmed roles in virulence, the majority have unknown functions or are proposed to have roles in metabolism. Schoen et al. (2014) proposed a model of nutritional virulence in which differences in key metabolic pathways in each clonal complex contribute to niche adaptation. They identified lactate metabolism, the oxidative stress response, glutathione metabolism and the denitrification pathway as key indicators of the involvement of metabolism in virulence. Lactate is generated by anaerobic glycolysis in host cells in response to stress, to the extent that during bacterial meningitis lactate concentrations rise to 13.6 mM, almost 7-fold above the levels in healthy tissue (Llibre et al., 2021). This acts as a carbon source that accelerates bacterial growth. Meningococcal carriage isolates that are not associated with IMD are genetically diverse and distinct from the hyperinvasive lineages. Comparisons of their transcriptional responses to growth in blood, saliva and CSF by Ampattu et al. (2017) have shown that although these isolates retain genetic similarity to invasive isolates, regulation of the pathways involved in energy, glutamine, and cysteine metabolism is quite distinct.
A recent study of approximately 4000 genomes of both hyperinvasive and non-virulent genetic lineages by Mullally et al. (2021) identified a cohort of 93 genomic islands with associations across nine hyperinvasive lineages and one non-virulent lineage (cc53). When clustered by the presence or absence of these islands, the hyperinvasive lineages fell into two large but distinct groups, termed genogroup I (GGI) and genogroup II (GGII) (Figure 1). Under this scheme, the possession of genomic islands was correlated with the D/C ratio, with GGI (cc5, cc22, cc23, and cc60) possessing fewer genomic islands and a D/C ratio < 0.5 and GGII (cc32, cc41/44, cc213, cc269, and cc461) possessing more genomic islands and a D/C ratio > 0.5.
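The D/C stratification reduces to a ratio and a cut-off, as sketched below. The isolate counts are illustrative only, and note that GGI and GGII were actually defined by clustering on genomic-island presence; the 0.5 D/C boundary is the correlated property reported by Mullally et al. (2021), not the defining criterion.

```python
# Illustrative isolate counts only; GGI/GGII membership is actually defined
# by genomic-island clustering, with the D/C cut-off a correlated property.
def dc_ratio(disease_isolates, carriage_isolates):
    """Disease/carriage ratio for a clonal complex."""
    return disease_isolates / carriage_isolates

def correlated_genogroup(dc):
    return "GGII (D/C > 0.5)" if dc > 0.5 else "GGI (D/C < 0.5)"

for cc, (disease, carriage) in {"cc23": (12, 40), "cc32": (30, 25)}.items():
    r = dc_ratio(disease, carriage)
    print(f"{cc}: D/C = {r:.2f} -> {correlated_genogroup(r)}")
```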
Meningococcal Interaction With Mucosal Host Defenses
The nasopharyngeal respiratory epithelium is covered by a 10-12 µm thick two-layer surface liquid composed of a low viscosity periciliary liquid and a high viscosity mucus that faces the lumen (Lillehoj et al., 2013). The low viscosity periciliary liquid facilitates ciliary beating, which continually transports mucus from the lower respiratory tract to the pharynx where it is swallowed, to remove microorganisms and other debris. Mucins are a diverse family of high molecular weight, heavily glycosylated proteins which are secreted into the periciliary fluid or are anchored to the epithelial surface to capture microbes and prevent access to the host cell surface (Derrien et al., 2010). Models of meningococcal colonisation of the nasopharynx had presumed that the meningococcus would make direct contact with the epithelium (Virji, 2009). However, Audry et al. (2019) showed using an air interface culture (AIC) model that the likely niche of Nme in the carriage state is within the mucus layer rather than on the epithelial cell surface, as it lacks swimming motility and the mucin-degrading enzymes found in other bacteria. The position of Nme within the mucosal secretions of the epithelium likely reflects a need for protection against desiccation while providing access to nutrients (Audry et al., 2019). Eventual clearance of Nme from the mucosal layer is mediated by the actions of various host defences including secretory IgA (sIgA) (Brandtzaeg, 2013), cationic antimicrobial proteins (CAMPs) (Ganz, 2002) and nutrient restriction (Figure 2A).

FIGURE 1 | Presence of known genes associated with pathogenesis among meningococcal lineages. The genes associated with pathogenesis mentioned in this review are marked as either present (gene is present in >80% of isolates from the lineage) or variably present (gene is present in 20-80% of isolates from the lineage).
Human sIgA is the most abundant antibody class at mucosal surfaces and inhibits microbial-host adhesion by nonspecifically coating the bacterial surface, resulting in enhanced opsonophagocytosis by IgA receptor-bearing phagocytes (Brandtzaeg, 2013). To counteract human sIgA, Nme expresses an IgA1 protease that cleaves the exposed hinge region of sIgA1 (Mulks et al., 1980). Cleavage at these sites results in the separation of the two antigen-binding Fab fragments from the Fc tail. Cleaved Fab fragments retain antigen-binding activity and bind to surface epitopes on Nme, competitively inhibiting bactericidal antibody binding (Mansa and Kilian, 1986). Nme IgA1 proteases can be divided into two classes (type 1 and type 2) based on their cleavage specificities (Mulks et al., 1980). IgA1 proteases of cleavage type 1 have the additional capacity to degrade IgG3 antibodies, which are typically bactericidal and activate opsonophagocytosis (Spoerry et al., 2021). Interestingly, cc11 isolates exclusively encode cleavage type 1 IgA1 proteases (Spoerry et al., 2021).
CAMPs are a class of short peptides secreted by host cells which bind negatively charged bacterial surfaces, disrupting membrane integrity and leading to bacterial lysis (McCormick and Weinberg, 2010). CAMPs are very diverse in function and origin, and not all forms of CAMPs have been tested against Nme. Binding of CAMPs to gram-negative bacteria relies on the overall negative charge of the bacterial surface, which is conferred by the phosphorylated headgroups of the lipid A molecules making up 70% of the outer membrane. In Nme, this negative charge can be ablated by the substitution of the lipid A headgroups with phosphoethanolamine (PEA) by the lipid A ethanolamine transferase, EptA (formerly LptA) (Kahler et al., 2018). Although EptA is considered the primary mechanism conferring resistance to CAMPs in Nme, there are secondary systems of resistance. Capsular polysaccharide and the binding of human factor H by factor H-binding protein (fHbp) on the bacterial surface interfere with the electrostatic interactions between CAMPs and lipid A, thus reducing their effectiveness (Spinosa et al., 2007;Seib et al., 2009). Mutation in the pilin apparatus reduces the influx of CAMPs, and the multiple resistance efflux pump, encoded by the mtrCDE operon, effluxes CAMPs. In N. gonorrhoeae, the MtrR repressor and MtrA activator are responsible for regulation of the mtrCDE operon. However, in Nme the insertion of a Correia element into the promoter has resulted in the loss of regulation by this system. Instead, the Correia element contains an integration host factor (IHF) binding site, and repression of mtrCDE expression is carried out via binding of IHF and post-transcriptional cleavage of the Correia element (Rouquette-Loughlin et al., 2004). MtrCDE efflux pump and capsule expression are induced via an unknown mechanism in the presence of sub-lethal concentrations of cathelicidin LL-37 and protegrin-1 (Spinosa et al., 2007), and both EptA expression and capsule synthesis are regulated by the two-component system MisRS (Tzeng et al., 2008;Bartley et al., 2013). Shedding of the outer membrane as blebs also removes CAMPs bound to the bacterial outer membrane (Tzeng and Stephens, 2015). Additionally, secreted extracellular DNA (eDNA) may bind CAMPs, reducing the effective concentration of CAMPs at the bacterial cell surface (Wassing et al., 2021).
At mucosal surfaces, iron and zinc, which are required for bacterial growth, are sequestered by an array of human proteins, suppressing bacterial growth in these environments in a process termed nutritional immunity (Cornelissen, 2018). Iron is sequestered and transported throughout the body by a variety of carrier proteins including transferrin, haemoglobin and haptoglobin. Lactoferrin is the primary protein secreted onto mucosal surfaces to sequester free iron, thus restricting the growth of microbes in this compartment (Kell et al., 2020). Lactoferrin binding protein A (LbpA) binds holo-lactoferrin, extracting the bound iron and releasing apo-lactoferrin. The LbpB lipoprotein acts in concert with LbpA, enhancing the ability of LbpA to bind lactoferrin. Additionally, lactoferrin is proteolytically processed by host enzymes to release a 47 amino acid peptide, lactoferricin, which acts as a CAMP (Gifford et al., 2005).
Resistance of Nme to lactoferricin is conferred by the binding of lactoferricin to two negatively charged repeat regions of LbpB (Morgenthau et al., 2014). Zinc is similarly sequestered by calprotectin and psoriasin, and Nme expresses the receptors CbpA (TdfH in N. gonorrhoeae) and ZnuD (TdfJ in N. gonorrhoeae), which respectively bind these proteins in order to acquire host zinc (Maurakis et al., 2019). Nme does not express its own siderophores but is capable of utilising the xenosiderophores of other species via the FetABC transporter (Cornelissen, 2018).
Synergism and Antagonism With the Human Nasopharyngeal Microbiome
The mucus layer is colonised by a microbial community that may have both synergistic and antagonistic interactions with Nme. During early life, stable microbial communities are established which are dominated by one of six bacterial genera: Moraxella, Streptococcus, Corynebacterium, Staphylococcus, Haemophilus and Alloiococcus (Durack and Christophersen, 2020). In these communities, Neisseria spp. are transient, low-abundance members. Both antagonistic and synergistic interactions have been noted between Streptococcus spp. and Nme. S. pneumoniae can successfully limit and eliminate competing flora in co-culture experiments via the production of hydrogen peroxide, which is bactericidal against Nme despite Nme possessing catalase activity (Pericone et al., 2000). Additionally, S. pneumoniae produces a neuraminidase which desialylates the lipooligosaccharide (LOS), sensitising Nme to complement-mediated killing (Shakhnovich et al., 2002). Conversely, direct synergism between Nme and S. mitis has been observed by Audry et al. (2019). Using the AIC model, they showed that S. mitis degrades mucins, enabling Nme to reach the epithelial surface, initiating stable colonisation and potentiating growth. This result seems somewhat paradoxical given that both S. mitis and S. pneumoniae produce H₂O₂ through the action of the pyruvate oxidase SpxB (Redanz et al., 2018). Such discrepancies may be due to Nme strain variation in H₂O₂ sensitivity, variable amounts of H₂O₂ produced by the streptococcal isolates, or the fact that the study by Pericone et al. (2000) was performed in rich-media co-culture, in the absence of host cells and mucus. Infection with respiratory viruses, including respiratory syncytial virus (RSV) and Influenza virus A, has also been correlated with an increased risk of IMD (Cartwright et al., 1991;Brundage, 2006;Jacobs et al., 2014;Salomon et al., 2020). Infection modelling suggests that the neuraminidase of Influenza virus A can degrade the bacterial sialic acid capsule and therefore enhance the adhesion of Nme to the host epithelium (Rameix-Welti et al., 2009). Dysregulation of the host immune system also plays a role in increasing susceptibility to IMD in a mouse model of co-infection (Alonso et al., 2003). The evidence supporting a correlation between RSV and an increased risk of IMD is conflicting: Tuite et al. (2010) found an association between RSV and IMD, while Stuart et al. (1996) and Jacobs et al. (2014) did not.
Apart from Nme, there are at least nine other commensal Neisseria spp. which inhabit the nasopharyngeal mucosa (Liu et al., 2015;Diallo et al., 2019). Colonisation with N. lactamica is inversely correlated with the carriage of Nme in humans. N. lactamica is the dominant Neisseria species during early life but is replaced by Nme after the first year of age (Cartwright et al., 1987). A pharyngeal carriage study from Africa examining six Neisseria spp. did not detect any relationships between Nme and the other five Neisseria spp. in its test panel (Diallo et al., 2016). In laboratory models of infection, N. lactamica and N. cinerea measurably inhibit colonisation of immortalised epithelial cells by Nme (Evans et al., 2011;Deasy et al., 2015;Wörmann et al., 2016). It is hypothesised that N. lactamica reduces meningococcal carriage both by competitive displacement of resident Nme and by preventing further acquisition (Deasy et al., 2015). N. cinerea adheres to the epithelium and forms microcolonies with close associations in a manner similar to Nme (Wörmann et al., 2016). N. cinerea significantly impairs meningococcal-host association, microcolony formation and microcolony expansion on host epithelial cells (Custodio et al., 2020). This has been attributed to a reduction in meningococcal motility by an unknown mechanism (Custodio et al., 2020).
Two mechanisms of interference between Neisseria spp. are mediated by Type IV pilus-dependent competence and Type VI secretion systems. Kim et al. (2019) showed that all commensal Neisseria spp. can kill the pathogenic Neisseria spp. through a mechanism based on competence and DNA methylation. Nme is transformable and possesses multiple restriction-modification systems designed to methylate the host chromosome while degrading foreign DNA (Budroni et al., 2011). Kim et al. (2019) found that high concentrations of foreign DNA from commensal Neisseria spp. can overcome this protective mechanism, resulting in the recombination of under-methylated sequences into the host genome, which are subsequently nicked by host restriction endonucleases, resulting in the abortion of cell division. It is currently unclear whether this type of interference by commensal Neisseria spp. occurs between lineages of Nme; however, the diversity and lineage-restricted nature of restriction-modification systems in Nme suggests this may also be a mechanism for fratricidal killing (Claus et al., 2000;Srikhanta et al., 2009;Srikhanta et al., 2010;Budroni et al., 2011). Type VI secretion systems have been identified in multiple commensal Neisseria spp. (Calder et al., 2020;Custodio et al., 2021). Custodio et al. (2021) demonstrated in competition assays using N. cinerea that the Type VI secretion system resulted in a 50-100-fold reduction in wild-type survival of Nme. The expression of the polysaccharide capsule enhanced meningococcal survival, and the mechanism of susceptibility required the expression of Type IV pili (Tfp), which are necessary for competence (Custodio et al., 2021).
Although carriage studies rarely report co-colonisation by multiple meningococcal isolates, longitudinal studies of asymptomatic meningococcal carriage have revealed the exchange of strains in the nasopharyngeal compartment over time (Barnes et al., 2017), implying that there are fratricidal mechanisms of competition between Nme strains. Two mechanisms have been examined: secreted bacteriocins and contact-dependent killing mediated by the TpsABI and MafABI systems. Allunans et al. (2008) identified an Nme isolate that inhibited the growth of other strains of Nme on solid media. They showed that this effect was due to the secretion of a bacteriocin (aka meningocin), encoded on a genetic island termed IHT-A2. Some lineages possess IHT-A2 but have non-functional genes for secretion of the bacteriocin (Mullally et al., 2021), suggesting that these strains may retain immunity to killing (Allunans et al., 2008). The TpsBAI(C) and MafABI systems are unrelated two-partner secretion systems that secrete polymorphic toxin components, TpsA and MafB respectively, which enter strains not expressing the cognate immunity factor (TpsI or MafI), resulting in cell death (Tommassen and Arenas, 2017). TpsB mediates the secretion of a cognate TpsA protein (Arenas et al., 2013b), while MafB secretion appears to be independent of MafA. In some strains, the tps island contains short repeating cassettes downstream of the tpsI gene, termed tpsC, which are proposed to enable recombination with the variable 3' end of tpsA to generate new variants of the TpsA toxin (Arenas et al., 2013b). Overexpression of one of the four MafB toxins of strain NEM8013 provided an advantage in competition assays, suggesting a role in niche adaptation (Jamet et al., 2015). Possession of the tpsC array and the Maf system is characteristic of the hyperinvasive lineages but not of the commensal Nme lineage cc53 (Mullally et al., 2021). However, the cc53 lineage encodes a potential secreted bacteriocin, suggesting that this lineage has evolved fratricidal mechanisms distinct from those of the hyperinvasive lineages.
COLONISATION OF THE NASOPHARYNGEAL EPITHELIUM
Following the acquisition of Nme by the host, meningococci must undergo several discrete phases of colonisation to become invasive and cause IMD. These are long-distance attachment by Tfp, retraction of the pilus, stable colonisation and microcolony formation, and intimate adhesion to the epithelial surface. During intimate adhesion, meningococcal adhesins initiate remodelling of the epithelial cell architecture, resulting in engulfment and transcytosis of Nme to the sub-epithelial layers (Figure 2B).
Initial Attachment by the Type IV Pilus
The Tfp is a long filamentous structure composed of the pilin monomer PilE and the minor pilins ComP, PilV, and PilX (Carbonnelle et al., 2009). Pilus biogenesis is a complex process involving over 20 different proteins, ultimately resulting in the assembly of pilin polymers in the cytoplasm and their extrusion through the outer membrane by PilQ (Carbonnelle et al., 2006;Brown et al., 2010). The retraction of pili is mediated by PilT and is counterbalanced by several proteins, including PilX, PilV, and the pilus-associated adhesin PilC, which regulate the number of pili per bacterial cell (Imhaus and Duménil, 2014). Piliation is required for self-aggregation, adhesion to host cells, and signalling to host cells (Imhaus and Duménil, 2014). Initial adhesion of Tfp to epithelial cells is mediated by the tip-adhesin, PilC, and along the shaft via PilE (Kennouche et al., 2019). Meningococci express two forms of PilC, PilC1 and PilC2, which are regulated independently of one another and modulate pilus function. While both forms of PilC can mediate adhesion to epithelial cells and induce the formation of cortical plaques, PilC1-based adhesion results in a sharp reduction in the expression of epidermal growth factor receptor (EGFR), which signals epithelial cells to detach from the substratum. This suggests that the variant forms of PilC allow meningococci to fine-tune host cell behaviour during infection. The search for the receptor for Tfp on epithelial cells has been inconclusive. Early studies identified membrane cofactor protein, also known as CD46, as the Tfp receptor on epithelial cells (Källström et al., 1997). CD46 is a transmembrane glycoprotein found abundantly on nearly all human cells and tissues, including cells of the respiratory tract. However, the role of CD46 in meningococcal adhesion has been challenged by several studies, leaving the identity of the cognate receptor for Tfp at the epithelial surface in question (Tobiason and Seifert, 2001;Johansson et al., 2003;Kirchner et al., 2005;Sutherland et al., 2010).
Pili are post-translationally modified with glycans, phosphocholine, phosphoethanolamine or phosphoglycerol (Bartley and Kahler, 2014;Mubaiwa et al., 2017). The glycans can be di- or tri-saccharides which are variably decorated with O-acetyl groups. These glycans are O-linked galactose (α1-3)-N,N'-diacetylbacillosamine (Gal-diNAcBac) or the tri-saccharides Gal(β1-4)Gal-diNAcBac or Gal(β1-4)Gal-GATDH (glyceroamido acetamido trideoxyhexose) (Bartley and Kahler, 2014). Pilin glycosylation may slightly alter pili density and modulate epithelial attachment (Virji et al., 1993b;Marceau et al., 1998). Pilin glycosylation and phosphocholine modifications on pilin have been shown to be necessary for interaction with the platelet activating factor receptor (PAFr), a key early receptor in the interaction between Nme and host bronchial epithelial cells (Jen et al., 2013). In N. gonorrhoeae, the pilin glycan is also essential for the interaction of the pilus with the I-domain of the CR3 receptor, a key mediator of attachment of N. gonorrhoeae to primary human cervical epithelial cells (Jennings et al., 2011), and presumptively has a similar role in Nme attachment to CR3-expressing host cells.
Microcolony Formation
Following Tfp-mediated attachment, meningococci form bacterial aggregates on the apical surface of the epithelium termed microcolonies (Hélaine et al., 2005). Microcolony formation increases attachment at the epithelial surface and allows meningococci to weakly resist the shear stress generated by mucociliary flow (Lécuyer et al., 2012). Microcolony aggregation is dependent upon the minor pilins PilX, PilV, and PamA (Pilus associated molecule A), which are required for twitching motility (Imhaus and Duménil, 2014;Takahashi et al., 2020). Microcolonies may progress in two ways: they may evolve into biofilms that result in stable colonisation of the epithelium, or they may disperse. The formation of a biofilm is a trait associated with Nme isolates that have lost the capacity to express capsule (Lappann and Vogel, 2010). Phase-variation of the polysialyltransferase of serogroup B strains, insertion of mobile genetic elements into the promoter of capsule synthesis genes, the transcriptional regulator CrgA, the MisRS two-component system, and temperature have all been shown to play a role in regulating capsule expression (Loh et al., 2013;Tzeng et al., 2016). eDNA is a major component of meningococcal biofilms, and microarrays examining gene expression in microcolonies identified increased expression of the membrane-bound lytic transglycosylases A and B (MltA/B), which are necessary for autolysis and the release of eDNA (Lappann and Vogel, 2010). Some lineages, including the hyperinvasive lineages cc11 and cc8, form eDNA-independent biofilms. Instead, cc11 possesses multiple copies of the prophage designated MDAΦ (meningococcal disease associated island) (Bille et al., 2005;Bille et al., 2008), which encodes a functional filamentous phage. The MDAΦ phage aids microcolony formation by stabilising inter-bacterial interactions through the formation of phage bundles that extend from the bacterial surface (Bille et al., 2017). These bacteria-bacteria interactions increase the overall biomass of encapsulated Nme interacting with the host epithelium, leading to an increased bacterial load at the site of attachment, which in turn enhances the likelihood of bacterial translocation into the bloodstream (Bille et al., 2017). Dispersal of the biofilm is necessary for transmission, and two mechanisms have been proposed: a host signal in the form of lactate, which is a signal for increased inflammation (Sigurlásdóttir et al., 2017), and the induction of the PilE phosphoglycerol transferase B (PptB), which decorates surface proteins, thus changing the dynamics of the bacterial-bacterial interactions (Chamot-Rooke et al., 2011).
Aggregation and biofilm formation by Nme are supported by the minor adhesins IgA1 protease, App, HrpA, and NHBA (Tommassen and Arenas, 2017). IgA1 protease and App (Adhesion and penetration protein) both belong to the family of chymotrypsin-like serine proteases and possess conserved, positively charged α-domains. These α-domains bind eDNA, contributing to biofilm formation. App is highly conserved in meningococci and is expressed by all Neisseria spp. (Hadi et al., 2001). HrpA (haemagglutinin/haemolysin-related protein A) is a large exoprotein secreted from Nme via a two-partner secretion system involving the HrpB protein (Schmitt et al., 2007). HrpA contains a highly conserved TPS domain and a variable functional domain (Schmitt et al., 2007). HrpA has been shown to play a key role in biofilm formation on human bronchial epithelial cells (Neil and Apicella, 2009). NHBA (Neisseria heparin binding antigen) is a surface-exposed lipoprotein ubiquitously expressed by Nme, which can also bind DNA (Arenas et al., 2013a).
Intimate Adherence and Endocytosis
Intimate association of Nme with the epithelial cell results in extensive remodelling of the host cell, creating a meshwork of filopodia-like cellular protrusions in which Nme replicates (Duménil, 2011). By subverting the microtubule-dependent pathway which controls the morphology and function of epithelial cells, Nme enables transcytosis through the host cell without disrupting the tight junctions between cells (Sutherland et al., 2010;Lécuyer et al., 2012). The recruitment of ezrin and the activation of Src tyrosine kinases and cortactin result in restructuring of the host plasma membrane into a cortical plaque enriched in transmembrane proteins such as CD44, ICAM1, VCAM1 and epidermal growth factor receptor, together with the molecular-linker proteins ezrin and moesin, and characterised by localised polymerisation of cortical actin (Carbonnelle et al., 2009;Barrile et al., 2015). Some studies suggest that Nme localises within intracellular vacuoles, adopting a facultative intracellular lifestyle which would normally result in replication and release onto the polar surfaces of the epithelium for further dispersal (Barrile et al., 2015). In support of this pathway, Barrile et al. (2015) observed that Nme usurps small GTPases such as Rab22a and Rab3, which control the endocytosis and exocytosis pathways usually associated with the polarised transport of transferrin. Eventually, the asymmetrical distribution of the host cell receptors is dysregulated to such an extent that cell polarity is lost, and Nme exits across the basolateral surface of the epithelial cell into the sub-epithelial tissues, where it can cross into the capillaries to cause systemic disease (Barrile et al., 2015).
The process of intimate adhesion is governed by the interaction of the Opa and Opc invasins and an array of minor adhesins with their cognate receptors on epithelial cells. Intimate adhesion does not occur until the expression of Tfp and of the polysaccharide capsule is downregulated (Virji, 2009) by the CrgA transcriptional regulator, which enables the switch from Tfp-dependent attachment to Tfp-independent intimate adhesion (Deghmane et al., 2000;Deghmane et al., 2002). Induced in a CREN-dependent manner upon cell contact, CrgA negatively regulates the expression of pilC1, pilE and the capsule biosynthesis genes cssABC (Deghmane et al., 2000;Deghmane et al., 2002).
The Major Invasin: Opacity Proteins
Opa proteins are structurally variable and highly diverse, with different variants exhibiting tropism for different cell types (Sadarangani et al., 2011). These proteins consist of an eight-stranded transmembrane β-barrel with four surface-exposed loops, of which two are hypervariable and one is semi-variable (Sadarangani et al., 2011). These adhesins are encoded by four loci, opaA, opaB, opaD, and opaJ, which are subject to independent phase-variation and homologous recombination, contributing to meningococcal antigenic variation (Aho et al., 1991). Opa alleles have been regularly observed at the same locus during global spread spanning decades, indicating that particular meningococcal genotypes encode distinct Opa repertoires (Callaghan et al., 2006).
The majority of Opa alleles bind the carcinoembryonic antigen-related cell adhesion molecules (CEACAMs) expressed on the surface of a variety of host cell types. Of the host CEACAM repertoire, Nme Opa proteins bind CEACAM1, CEACAM3, CEACAM5 and CEACAM6 (Sadarangani et al., 2011). The binding specificity is governed by ligand interactions between the conserved CEACAM N-domain and two hypervariable loops on the Opa adhesin (Martin et al., 2016). CEACAM1, CEACAM3 and CEACAM6 are expressed on the apical surface of epithelial cells and, due to their GPI-anchor, are directed to cholesterol- and sphingolipid-enriched membrane microdomains (lipid rafts) (Schmitter et al., 2007). Meningococcal binding of CEACAMs initiates membrane microdomain-mediated uptake, which avoids maturation into acidic lysosomes, thus potentiating the development of vacuoles that sustain Nme in the host cell and eventual apical-to-basolateral transport in polarised epithelia (Schmitter et al., 2007). CEACAMs can modulate integrin-mediated cell adhesion at the basolateral surface of the host cell and control exfoliation of host cells from the basement membrane, which is a protective mechanism to remove infected host cells. Although it has been shown that Opa-dependent CEACAM engagement prevents exfoliation from the basement membrane in gonococcal models of infection (Tchoupa et al., 2014), this has not been confirmed for Nme.
Some Opa alleles interact with cell-surface-associated HSPGs (heparan sulfate proteoglycans), which belong to either the GPI (glycosylphosphatidylinositol)-linked glypican family or the transmembrane syndecan family (Hill et al., 2010). HSPG binding regulates many cell functions in a context-dependent manner, but in epithelial cells it triggers endocytosis via multiple pathways which are currently undefined (Sarrazin et al., 2011). Opc, which is a 10-stranded β-barrel with five surface-exposed loops, can also initiate invasion by binding to HSPGs (Olyhoek et al., 1991). The expression of Opc is controlled at the transcriptional level by phase-variation of a polycytidine tract in the promoter region, but the locus is missing from certain lineages, including cc11 (Schubert-Unkmeir, 2017). However, the Opa proteins are the predominant invasins at the epithelial surface, while Opc has a more dominant role during systemic disease and engagement with endothelial cells (Schubert-Unkmeir, 2017).
Minor Adhesins
The minor adhesins NadA, NhhA, App, MspA, HrpA and NHBA are also involved in nasopharyngeal colonisation and invasion. Although the roles of these minor adhesins are not fully understood, it appears that they reinforce signalling via the endocytic pathway for bacterial uptake into the host cell. NadA binds the epithelial cell receptor β1 integrin, which has an important role in the initiation of endocytosis (Nägele et al., 2011). The nadA gene is lineage-restricted, being present in only 5.1% of carriage isolates but in almost all isolates from cc11, cc8, and cc32 (Comanducci et al., 2004). The expression of NadA is regulated by the NadR (aka FarR) repressor, integration host factor (IHF), the ferric uptake regulatory protein Fur, and a phase-variable tract in the promoter (Metruccio et al., 2009;Cloward and Shafer, 2013). The nadR gene is itself regulated by the MtrR repressor (Cloward and Shafer, 2013). NhhA (Neisseria hia/hsf homologue A) was shown by Scarselli et al. (2006) to promote adherence of a recombinant NhhA-expressing E. coli strain to the epithelium by binding to laminin and heparan sulfate and the subsequent binding of these molecules to their epithelial receptors. Additionally, it was shown that adhesion of an nhhA null mutant of strain MC58 to epithelial cells was significantly reduced compared to wild-type meningococci (Scarselli et al., 2006). MspA is a third chymotrypsin-like protease, which is present in only a subset of lineages (Oldfield et al., 2013).
Unlike the related IgA1 protease and App, MspA has no role in biofilm formation, but like App, it has been shown to bind to epithelial cells (Serruto et al., 2003;Turner et al., 2006) and to the mannose and transferrin receptors of dendritic cells (DCs) (Khairalla et al., 2015). NHBA and HrpA, two proteins involved in bacterial aggregation, have also been shown to mediate attachment to epithelial cells via HSPGs (Schmitt et al., 2007;Vacca et al., 2016).
SYSTEMIC DISEASE
Once Nme crosses the nasopharyngeal barrier, it encounters a radically different environment from the nasopharynx. In the bloodstream, Nme must contend with different sources of iron and other metabolites, antibody- and complement-mediated killing, circulating immune cells, and the shear stress produced by blood flow. To cause meningitis or septicaemia, meningococci must attach to endothelial cells in the blood-brain barrier (BBB) or the peripheral vasculature, respectively. Once attached, Nme resists the influx of phagocytic cells to infected sites, modulates the local thrombotic response, and causes the blood vessels to become leaky, allowing dissemination into the meninges or surrounding tissues and thus leading to the syndromes of meningitis and purpura fulminans, respectively. The interactions of Nme with the host once inside the systemic circulation are detailed in Figure 3 and in the following sections.
Survival in the Bloodstream
The complement system is the critical host defence against meningococci once they cross the epithelium, as evidenced by the fact that complement-deficient individuals are at a highly increased risk of IMD and that an intact complement system is required for the killing of Nme by whole blood (Lewis and Ram, 2020). In addition, activation of the complement pathway is essential for an efficient anti-bacterial response by host neutrophils (Krüger et al., 2018). The mammalian complement system is activated via three pathways, all of which converge on the production of a C3 convertase; the subsequent cleavage of C3 and C5 into their active fragments leads to the downstream formation of the membrane attack complex (MAC), which disrupts bacterial cell membranes. The three pathways are termed the classical pathway (CP), which proceeds from the binding of specific IgG and IgM antibodies to bacterial targets; the lectin pathway (LP), which proceeds from the binding of mannose binding lectin (MBL) to surface carbohydrates; and the alternative pathway (AP), which results from spontaneous 'tickover' of C3 into C3(H₂O), which is subsequently converted into the C3 convertase C3(H₂O)Bb by factor B and factor D (Lewis and Ram, 2020). The primary targets of complement deposition on the meningococcal surface are the LOS, porins, and Opa proteins (Ram et al., 2003;Lewis et al., 2008). MBL is also capable of directly binding PorB and Opa proteins in order to activate the LP (Estabrook et al., 2004).
The expression of capsule is required for resistance to complement, and meningococci lacking a capsule are rarely recovered from disease settings in immunocompetent patients. In particular, expression of one of the four sialic acid-containing capsules (serogroups B, C, W, or Y) has been shown to reduce the deposition of C4b and activation of the CP by blocking the binding of IgG and IgM antibodies to multiple surface-expressed proteins (Agarwal et al., 2014). Expression of sialic acid on erythrocytes is a known mechanism for blocking complement deposition on host cells, and the expression of sialic acid in capsule presumably functions in a similar fashion (Langford-Smith et al., 2015). Different capsular polysaccharides modulate complement in different ways, with expression of capsule from serogroups B, C, W, and Y reducing CP activation, serogroup A capsule having no effect on CP or AP activation, and serogroup W and Y capsules somewhat paradoxically increasing AP activation through deposition of C3b onto the capsule itself (Ram et al., 2011). O-acetylation, which occurs in multiple capsule types and is phase-variable in some serogroups, has also been shown to modify serum bactericidal activity, having a protective effect in serogroup C isolates but sharply enhancing the immunogenicity of the serogroup A capsule (Tuomanen et al., 2001;Berry et al., 2002). Meningococcal strains over-expressing capsule display increased serum resistance, and variation of capsule expression may represent a mechanism of immune evasion in Nme. Capsule expression is upregulated by temperature via a thermosensor secondary structure in the 5'UTR of the mRNA of the cssABC operon (Loh et al., 2013). Variation in the repeats comprising the stem-loop of the thermosensor and insertion of the IS1301 element at this location modulate the expression of sialic acid synthesis, affecting capsular polysaccharide and LOS sialylation (Uria et al., 2008;reviewed in Tzeng et al., 2016).
Although capsule is essential for resistance to human serum, multiple studies have demonstrated that variations in LOS structure are also responsible for modulating resistance to complement. Meningococcal LOS can adopt one of 12 structures, termed immunotypes, based on the presence and phase-variation states of the glycosyltransferase genes involved in the synthesis of the α-chain and the genes involved in the decoration of the LOS inner core (Bartley and Kahler, 2014;Mubaiwa et al., 2017). The LOS α-chain is partially decorated with sialic acid (Neu5Ac) by the Lst sialyltransferase, which uses CMP-Neu5Ac scavenged from host serum or endogenously synthesised by strains expressing sialic acid-containing capsules (i.e. serogroups B, C, W, and Y). The two LOS α-chain structures which may be sialylated in Nme are LNnT [Gal(β1-4)GlcNAc(β1-3)Gal(β1-4)Glc], via an α2-3 linkage, and the Pk-like antigen (the L1 immunotype), via an α2-6 linkage (Wakarchuk et al., 1998;Gulati et al., 2005). A sialylated LNnT α-chain has been shown to enhance the resistance of encapsulated strains to human serum (Kahler et al., 1998). Although lst expression in N. gonorrhoeae is regulated by CrgA, the insertion of a Correia element in the promoter region of meningococcal lst has resulted in an alternate promoter not subject to CrgA-based regulation (Matthias and Rest, 2014). Sialylation of LOS is controlled in part by the availability of CMP-Neu5Ac and is therefore subject to regulatory mechanisms similar to those controlling capsule expression in serogroup B, C, W and Y strains (see above). Co-regulation of sialic acid synthesis and lst expression occurs upon temperature shift, as the 5'UTR of the lst mRNA contains a thermosensitive riboswitch (Loh et al., 2013). In addition to α-chain structure and sialylation, decoration of the LOS inner core also modulates the complement response. Substitution of the heptose II residue of the LOS inner core (HepII) with O-6-linked PEA, but not O-3-linked PEA, carried out by the PEA transferases Lpt6 and Lpt3 respectively, is associated with increased deposition of C4b on LOS when an LNnT α-chain is present (Ram et al., 2003). HepII substituted with O-3-linked PEA may also undergo C4b deposition when the α-chain is truncated (Ram et al., 2003). Since both lgtG and lpt6 are found on genomic islands, strain variation in LOS inner core structure may contribute to the difference in pathogenicity between meningococcal lineages (Mackinnon et al., 2002;Kahler et al., 2005). Phase-variation of LgtG, which preferentially adds a Glc residue in place of the O-3-linked PEA added by Lpt3, might also contribute to varying serum sensitivity in Nme (Berrington et al., 2002;Kahler et al., 2005).
In addition to surface carbohydrate expression, Nme possesses a number of surface proteins that are able to modulate complement deposition and contribute to serum resistance (reviewed in Lewis and Ram, 2020). The NalP protease is able to cleave human C3 in both its membrane-bound and secreted forms, resulting in the degradation of the generated C3 fragment by host factors and reduced C3 deposition on the meningococcal surface (Del Tordello et al., 2014). The host complement inhibitor C4b-binding protein (C4bp) is recruited by meningococcal PorA, resulting in the inactivation of C4b, irreversible dissociation of the C4b2a convertase, and inhibition of the CP. PorA-expressing strains are more resistant to serum; however, C4bp recruitment is inhibited by capsule expression (Jarva et al., 2005). NHBA has been shown to increase serum resistance via binding to host heparin (Serruto et al., 2010). Both NalP and lactoferrin are capable of cleaving NHBA, and both the membrane-bound and secreted forms have similar heparin-binding activity (Serruto et al., 2010). Another component of the extracellular matrix, vitronectin, has been shown to inhibit complement activation and is bound by multiple meningococcal antigens, including NhhA and Opc, reducing the formation of the MAC on meningococcal cells and increasing serum resistance (Sa et al., 2010;Griffiths et al., 2011).
A particularly important feature of meningococcal complement resistance is the ability to bind human factor H (fH), which acts as a cofactor in the cleavage of C3b into its inactive form by factor I and carries out irreversible inactivation of the C3bBb C3 convertase, thereby playing a large role in the inhibition of the AP (Schneider et al., 2006). In gonococci, direct binding of fH to α2-3-sialylated LNnT is observed in strains expressing gonococcal PorB; however, meningococcal PorB cannot stabilise this interaction, and thus direct binding of fH is not observed (Madico et al., 2007;Lewis and Ram, 2020). Instead, fH may bind C3b fragments deposited on sialylated meningococcal LOS in a manner similar to its binding of glycosaminoglycans and C3 fragments on host cells (Lewis et al., 2012). Despite a low binding affinity, PorB binds fH at a rate that is clinically relevant (Lewis et al., 2013;Giuntini et al., 2015). Neisserial surface protein A (NspA) is also capable of binding fH in a manner influenced by LOS structure: a truncated α-chain or sialylation of LOS is associated with increased fH binding by NspA (Martin et al., 1997;Vandeputte-Rutten et al., 2003;Lewis et al., 2010). Finally, Nme expresses a fH-binding protein (fHbp), responsible for the recruitment of fH to the meningococcal surface to inhibit AP activation (reviewed in Principato et al., 2020). Significant structural variation of fHbp exists among Nme isolates, with three major families (variant 1, variant 2, and variant 3) having been described. Hundreds of sub-variants within each family exist, many of which are associated with sequence type (ST) (Masignani et al., 2003;Bambini et al., 2009;Brehony et al., 2009). fHbp is expressed from two independent promoters: a bicistronic promoter upstream of the gene proximal to fhbp, and a dedicated monocistronic fhbp promoter which is under the control of the fumarate and nitrate reduction (FNR) regulator and responds to anaerobic conditions/decreased oxygen concentrations (Oriente et al., 2010). Expression of fhbp has also been shown to increase under iron-replete conditions in most strains (Sanders et al., 2012). Expression of fHbp can vary up to 15-fold between strains based on the genetic sequence of the promoter region and is correlated with serum bactericidal activity (Biagini et al., 2016). SNPs in the signal peptide sequence of fhbp have also been shown to modulate trafficking of the mature protein to the membrane, altering the levels of surface-available fHbp and resistance to antibody-mediated killing (da Silva et al., 2019).
Lastly, glycosylation of surface-exposed proteins, especially the PilE subunit of Tfp, is another means of avoiding adaptive immunity by masking the surface of the bacteria from opsonisation. Meningococcal Tfp exists as one of two major classes, class I and class II, the former of which undergoes rapid antigenic variation by recombination of pilE with repeating cassettes of pilS pseudogenes, and the latter of which is invariant (Aho et al., 1997;Wörmann et al., 2014). PilE in class I-expressing strains possesses a single glycosylation site, whereas class II pilins display 2-5 glycosylation sites depending on the proteoform (Gault et al., 2015). The additional glycosylation sites on class II pili may provide an alternate form of immune evasion given the invariant nature of PilE in these strains (Gault et al., 2015). The specific glycans added to PilE are determined by the presence or absence of the genes encoding the synthesis and transfer of glycan residues onto the glycan chain extending from the UDP lipid carrier on the inner membrane (Bartley and Kahler, 2014). Synthesis of the initial sugar in the glycan chain is carried out by PglC, PglD, and either PglB or PglB2, with the allele of pglB determining whether the sugar added is diNAcBac or GATDH, respectively (Bartley and Kahler, 2014). Subsequent extension of the glycan chain into di- or tri-saccharides is carried out by PglA and PglE, resulting in a digalactose addition, or by PglH/H2, which results in the addition of either Glc or GlcNAc, respectively (Power et al., 2003;Børud et al., 2014). Both di- and tri-saccharide glycans may be mono- or di-acetylated by the PglI O-acetyltransferase (Anonsen et al., 2017). The mature glycan is transferred onto PilE or other proteins by the PglO (aka PglL) oligosaccharyltransferase (Musumeci et al., 2013). Microheterogeneity of the proteoglycome is generated through phase-variation and polymorphisms in the pgl locus of Nme (Børud et al., 2018). Such heterogeneity is proposed to play an important role in immune evasion, and the pilin glycans expressed by meningococcal strains have been demonstrated to differ both before and after accidental human passage and between strains carried by the same individual within a short time period (Omer et al., 2011;Børud et al., 2018).
In addition to bacterial mimicry of host structures and recruitment of host immunoregulatory proteins, meningococcal metabolism also plays an important role in impeding complement deposition and activation during systemic infection with Nme. Lactate uptake by the lactate permease LctP has been shown to be critical for resistance to complement, as lactate is an entry metabolite into the sialic acid biosynthesis pathway required for capsule synthesis and LOS sialylation (Exley et al., 2005a;Exley et al., 2005b). Sulfur metabolism also plays multiple roles in virulence: a mutant lacking the cysteine binding protein was internalised by endothelial cells at a rate 100-fold lower than that of wild-type, and depletion of cysteine and other sulfur sources triggers increased membrane blebbing (Gerritzen et al., 2018;Takahashi et al., 2018). Nme sheds its outer membrane as blebs, which play multiple roles in virulence. Shedding the outer membrane rapidly and irreversibly removes bound complement from the bacterial surface, preventing MAC insertion and lysis. Shed blebs are also known to fuse with the membranes of surrounding host cells, delivering cytotoxins which further induce the inflammatory response and misdirect phagocytic cells to locations distant from the microcolony (Kaparakis-Liaskos and Ferrero, 2015).
Endothelial Colonisation
In the bloodstream, meningococcal microcolony formation and adherence phenotypes are opposed by the high-pressure environment and shear stress exerted by blood flow. Mairey et al. (2006) demonstrated using a laminar-flow model that the only vessels in which shear stress levels are low enough to allow microcolony formation are capillaries and small conducting vessels. At these sites, the transient and heterogeneous nature of perfusion allows meningococci to undergo initial attachment to endothelial cells and to form microcolonies in a manner similar to colonisation at the nasopharyngeal epithelium (see above). This evidence is supported by post-mortem histology performed on an untreated meningitis case in which microcolonies of Nme were observed in the cerebral capillaries (Pron et al., 1997). Similarly, the colonisation of peripheral capillaries by Nme has been shown to occur in skin lesions of patients with purpura fulminans and in human skin-graft models in mice (Sotto et al., 1976;Harrison et al., 2002;Join-Lambert et al., 2013).
Paracellular Transport
In contrast to interactions at the epithelial surface, where the receptor for Tfp is still unknown, the interactions between the pilus and human brain endothelial cells are well established (Lécuyer et al., 2012). The meningococcal major pilin, PilE, and minor pilin, PilV, bind the CD147 receptor via recognition of a triantennary sialylated poly-N-acetyllactosamine-containing N-glycan (Bernard et al., 2014;Le Guennec et al., 2020). CD147 is complexed to the β2-adrenoceptor (β2AR) on endothelial surfaces, and Tfp binding induces biased activation of the β2AR and subsequent activation of β-arrestins, stimulating rapid recruitment of cytoskeleton-associated and signalling proteins to remodel the plasma membrane underneath the newly forming microcolony (Coureuil et al., 2009;Mikaty et al., 2009). In particular, recruitment of ezrin and moesin to the site of adhesion results in actin polymerisation and the formation of microvilli-like structures, and recruitment of α-actinin 4 drives increases in the local density of CD147-β2AR complexes in order to increase the strength of microcolony adhesion (Maissa et al., 2017). The recruitment of β2AR signalling partners such as Src tyrosine kinases; p120-catenin and VE-cadherin (adherens junction proteins); zonula occludens-1, claudin-5, and occludin (tight junction proteins); and the Par3/Par6/PKCζ polarity complex to the site of bacterial attachment to form a cortical plaque results in weakening and eventual failure of the tight junctions between endothelial cells, allowing the passage of Nme paracellularly into the meninges in the case of meningitis, or into the peripheral tissues in the case of meningococcaemia (Maissa et al., 2017). In addition to CD147, Tfp bind laminin receptor precursor 1 (LAMR1/37LRP) co-localised with galectin-3 on the surface of human brain microvascular endothelial cells (hBMVECs) via the major pilin, PilE, and the PilQ secretin (Alqahtani et al., 2014). The mature laminin receptor (67LR) is recognised by PilQ and PorA and is a common receptor shared with S. pneumoniae and H. influenzae (Orihuela et al., 2009). Nme has also been shown to recruit both forms of fibroblast growth factor receptor 1 (FGFR1), which co-localise with 37LRP and, to a lesser extent, 67LR. Knockdown of FGFR1 using siRNA resulted in a significant reduction in the adherence and invasion of Nme into endothelial cells, suggesting an important role for this protein in meningococcal virulence (Azimi et al., 2020).
Transcellular Transport
Once attached to the endothelial surface, microcolony formation proceeds in a manner similar to epithelial binding (see above), and meningococcal aggregates begin to occlude the vessels they occupy (Manriquez et al., 2021). In addition to the paracellular route of invasion resulting from the tight junction breakdown initiated by Tfp, meningococci can cross the endothelium via transcytosis. The most important mediator of this process at the endothelial surface is Opc, in contrast to the shared role played by Opc and the Opa proteins at the epithelial surface (Virji et al., 1993a). Opc binds the endothelial surface via vitronectin and fibronectin, following which binding of αvβ3-integrin or α5β1-integrin, respectively, occurs (Unkmeir et al., 2002b). Vitronectin is the preferred substrate for Opc binding. Bacterial uptake by endothelial cells via the integrin pathway follows and is mediated by an interplay between Src, focal adhesion kinase, and cortactin (Slanina et al., 2010;Slanina et al., 2012). The binding of vitronectin occurs via a heparin bridge (De Vries et al., 1998;Tuomanen et al., 1999) or directly via the sulfated tyrosines on these proteins (Sa et al., 2010). Tfp-based binding of meningococci induces transient increases in cytosolic Ca²⁺ in endothelial cells, resulting in the translocation of acid sphingomyelinase (ASM) to the surface of the cell and the development of ceramide-rich lipid microdomains at attachment sites (Simonis et al., 2014;Peters et al., 2019). Opc-mediated internalisation of Nme has been shown to be directly dependent on the levels of ASM and ceramide in these microdomains, and the ability of Nme to induce microdomain formation is higher in more invasive strains (Simonis et al., 2014).
The minor adhesins have also been demonstrated to play a role in the transcytosis of meningococci across endothelial barriers. The App and NadA autotransporters have both been shown to increase adhesion to hBMVECs, and a recent study demonstrated that meningococci treated with anti-NadA antibodies exhibit reduced transcytosis across a model of the BBB (Turner et al., 2006;Serruto et al., 2010;Kulkarni et al., 2020). Meningococcal IgA1 protease has been shown to cleave LAMP-1, a major integral glycoprotein of human lysosomes. During attachment, Tfp- and Opc-induced Ca²⁺ transients trigger exocytosis of lysosomes, bringing LAMP-1 to the surface where it may be cleaved by IgA1 protease (Ayala et al., 2001). A major outer membrane protein, P.IB, has been shown to interact with endothelial cells, but the mechanism is as yet unknown (Kańová et al., 2018). Recently, a role for dynamin- and clathrin-mediated endocytosis in the uptake of Nme by endothelial cells has been observed (Herold et al., 2021). Interestingly, this process was only dependent on dynamin in the absence of the meningococcal capsule, whereas Arp2/3-mediated actin polymerisation was shown to be more important for the uptake of wild-type cells.
Immune Stimulation by the Meningococcus
In late-stage IMD cases, systemic infection by the meningococcus causes rapid and exacerbated activation of the host's innate immune response, producing unregulated systemic inflammation, dysregulation of coagulation, and severe widespread vascular injury (Pathan et al., 2003). This systemic inflammatory cascade is ultimately what leads to the progression of fulminant sepsis and meningitis in IMD patients, and eventually multi-organ failure and death. Recognition of Nme by multiple human cell types is mediated by pattern recognition receptors (PRRs), which recognise pathogen-associated molecular patterns (PAMPs) common to multiple species of pathogen.
The most important and well-studied PRRs on human cells which recognise Nme are the Toll-like receptors (TLRs) (reviewed in Johswich, 2017). Meningococcal LOS is a classical activator of the inflammatory response and is recognised by TLR4 (Pridmore et al., 2001;Zughaier et al., 2004). The affinity of lipid A for the TLR4 receptor, and hence the stimulation of the cytokine response, is dependent upon the decoration of the lipid A headgroups with PEA and the distribution and length of the fatty acyl chains (Kahler et al., 2018). Examination of various strain collections indicates that there is considerable micro-heterogeneity in lipid A pyrophosphorylation, which corresponds to the inflammatory potential of lipid A (John et al., 2020). TLR4 is also capable of recognising several meningococcal surface proteins, including NhhA and PBP2 (Plüddemann et al., 2009;Hill et al., 2011;Sjölinder et al., 2012). TLR2, in complex with TLR1, recognises meningococcal capsule and surface proteins including PorB, NhhA, and fHbp (Massari et al., 2003;Zughaier et al., 2004;Luo et al., 2016;Wang et al., 2016). TLR9 is located within endosomes and recognises CpG DNA, which is common in bacteria but not in mammalian cells (Mogensen et al., 2006;Magnusson et al., 2007). The intracellular Nod-like receptors (NLRs) recognise fragments of peptidoglycan liberated from the meningococcal cell wall upon phagocytosis of Nme (Girardin et al., 2003a;Girardin et al., 2003b). Recognition of additional Nme surface structures by host cells is mediated by a variety of other receptors. Binding of carbohydrate structures is mediated by receptors including MBL, the mannose receptor, DC-SIGN, surfactant proteins, siglecs, ficolins, and galectins, whereas meningococcal proteins and peptide fragments may be recognised by the N-formyl peptide receptor and scavenger receptors (Johswich, 2017). Upon the binding of PRRs to their corresponding PAMPs, activation of intracellular signalling pathways (primarily via NF-κB signalling) results in the upregulation of genes for the expression of cytokines and chemokines, maturation of immune cells such as DCs, initiation of phagocytosis, and modulation of cell death via apoptotic pathways, depending on the cell type.
The ultimate result of PRR activation by meningococcal PAMPs varies during different stages of meningococcal disease. During colonisation, a controlled local inflammatory response is elicited by the interaction of Nme with both epithelial cells and resident DCs, resulting in the production of neutrophil chemoattractants including IL-8, C5a and hepoxilin A3, which initiate firm adhesion of circulating neutrophils and infiltration of the epithelium in order to clear the infection (Stephens et al., 1983;Johswich et al., 2013;Filippi, 2019). The inflammatory response produced by the body in response to systemic infection during IMD is, by contrast, enormous (Johswich, 2017). High levels of pro-inflammatory cytokines (including IL-1α, IL-1β, IL-2, IL-6, MIF), chemokines (including IL-8, MCP-1, MIP-1α, MIP-1β), factors stimulating neutrophil and monocyte activation and maturation (including G-CSF, GM-CSF, IFN-γ, TNF-α), and complement components and activation products (including C1q, MBL, C3a, iC3b, C5a, sC5b-9, CFH) are detectable in both the CSF and serum of patients during meningococcal sepsis and meningitis (Mook-Kanamori et al., 2014;Johswich, 2017).
LOS is a key activator of inflammation, causing high levels of cytokine release in multiple cell types including DCs, macrophages, and meningeal cells (Clements et al., 2001;Christodoulides et al., 2002;Zughaier et al., 2004). Several meningococcal surface proteins, including PorB, the autotransporter NadA, and the MafA component of the MafAB toxin-antitoxin system, have also been shown to directly stimulate the production of cytokines in human cell lines (Singleton et al., 2005;Massari et al., 2006;Mazzon et al., 2007;Franzoso et al., 2008;Kańǒvá et al., 2019). In addition to cytokine release, meningococcal proteins may modulate apoptosis in host immune cells. The autotransporters MspA and App have both been shown to be internalised by DCs, following which they are trafficked to the nucleus, causing a dose-dependent increase in caspase-mediated apoptosis (Khairalla et al., 2015). In contrast, meningococcal PorB has been demonstrated to insert itself into the mitochondrial membrane of host cells, altering mitochondrial depolarisation and protecting cells from apoptosis (Massari et al., 2003;Peak et al., 2016). NhhA has been shown to have multiple anti-inflammatory effects. When NhhA is used to stimulate monocyte maturation, a profile of cytokines geared towards an anti-inflammatory and pro-Th2 response (including IL-10, CCL17, CCL18, CCL22) is released (Wang et al., 2016). NhhA has also been shown to increase the rate of macrophage apoptosis (Sjölinder et al., 2012).
Professional Phagocytes and Resistance to Phagocytosis
A key consequence of the inflammatory response triggered by infection with Nme is the maturation and recruitment of immune cells. The majority of mononuclear phagocytes resident in the human nasopharynx are plasmacytoid and myeloid DCs, with a smaller population of resident monocytes/macrophages (Vangeti et al., 2018). During colonisation of the nasopharynx by Nme, bacteria that successfully cross the nasopharyngeal epithelium engage basolateral Toll-like receptors (TLRs), activating NF-κB signalling and the release of chemoattractant chemokines including IL-8 (Filippi, 2019). Assembly of the MAC on bacterial membranes results in the conversion of C5 and the release of C5a, which also has chemoattractant properties (Filippi, 2019). An increasing chemoattractant gradient stimulates the activation of circulating neutrophils, increasing expression of CD11b and CD18, leading to firm adhesion to local endothelial cells, the formation of endothelial docking structures by reorganisation of ICAM-1 and JAM-A on the endothelial surface, and neutrophil diapedesis into local tissues by either the paracellular or transcellular route (van Buul et al., 2007). Once transmigration has occurred, neutrophils and tissue-resident DCs and monocytes are activated by contact with Nme through multiple pathways and initiate bacterial killing by phagocytosis; the production of ROS, nitric oxide and CAMPs; and, in the case of neutrophils, the production of neutrophil extracellular traps (NETs) (Urban et al., 2006;Filippi, 2019).
Dendritic Cells and Macrophages (Monocyte-Derived Cells)
DCs activate upon contact with Nme, stimulating the release of the proinflammatory cytokines IL-1β, IL-6, IL-8, TNF-α, IFN-γ, and GM-CSF (Kurzai et al., 2005). Neisserial LOS has been identified as a major mediator of the DC proinflammatory response. The expression of LOS containing sialylated LNnT reduces the adherence and subsequent phagocytosis of Nme (Clements et al., 2001;Unkmeir et al., 2002b;Kurzai et al., 2005). Capsule expression has also been shown to inhibit phagocytosis by DCs. Interestingly, capsule expression and variation in LOS structure have not been shown to alter the release of pro-inflammatory cytokines, although capsule expression has been shown to reduce the level of the regulatory cytokine IL-10 (Unkmeir et al., 2002b;Kurzai et al., 2005). Multiple meningococcal surface proteins have been shown to play a role in modulating the response of DCs to infection. The porins PorA and PorB have been shown to induce the maturation of monocyte-derived DCs, inducing release of the chemokine RANTES and the expression of the DC markers CD40, CD54, CD80, CD86 and MHC-II (Singleton et al., 2005;Khairalla et al., 2015). PorA also increased the capacity of DCs to activate both naïve and memory T-cells but inhibited the production of IL-12p70, thereby directing activated T-cells towards a Th2 response (Al-Bader et al., 2004). The response to PorB was shown to be dependent on recognition by TLR2/1 and subsequent activation of MyD88 signalling (Singleton et al., 2005;Massari et al., 2006). The minor adhesins App and MspA have both been shown to bind the mannose receptor and transferrin receptor on DCs, traffic to the nucleus, and induce a dose-dependent increase in DC death via caspase-dependent apoptosis (Khairalla et al., 2015). NadA, which is expressed predominantly by hyperinvasive lineage cc11 isolates, has also been shown to interact with DCs. Stimulation of DCs with NadA strongly upregulated the DC maturation markers CD83, CD86, CD80 and HLA-DR and resulted in moderate cytokine secretion (Mazzon et al., 2007).
Tissue-resident macrophages represent a critical component of the innate immune response owing to their roles in antigen presentation and the initial cellular antibacterial response (Escobar et al., 2018). Macrophages are activated by recognition via PRRs, including the TLRs and scavenger receptors such as scavenger receptor-AI/II (SR-A) and the macrophage receptor with collagenous domain (MARCO). Activation of macrophages by Nme occurs via binding of the KDO residues of meningococcal LOS to TLR4, binding of PorB to TLR2, and binding of multiple surface-exposed proteins to SR-A and MARCO (Johswich, 2017). Opsonisation of Nme by MBL, a key activator of the LP of complement, has also been shown to accelerate the uptake of Nme into macrophages (Jack et al., 1998;Jack et al., 2005). Nme has several adaptations to resist killing by macrophages. As with most cell types, expression of the capsule has been shown to reduce phagocytosis and inhibit the initial fusion of the phagosome with the lysosome (Read et al., 1996). Resistance to the production of NO by macrophages is mediated by the nitric oxide reductase NorB and, to a lesser extent, cytochrome c' (CycP) (Stevanin et al., 2005). Detoxification of NO by NorB has also been shown to downregulate the production of pro-inflammatory cytokines by macrophages, likely contributing to survival in these cells (Stevanin et al., 2007). Multiple surface proteins of Nme have been found to downregulate the apoptosis of macrophages, including NadA, NhhA, NorB, CycP, and PorB, the last of which inhibits apoptosis in multiple cell types by inserting into the mitochondrial membrane, preventing mitochondrial depolarisation and activation of caspase-9- and caspase-3-dependent apoptosis (Massari et al., 2000;Massari et al., 2003;Tunbridge et al., 2006;Franzoso et al., 2008;Wang et al., 2016). Although Nme prefers aerobic respiration, several studies have indicated that the expression of a denitrification pathway allows the meningococcus to utilise nitric oxide as an energy source and that this ability may aid the survival of Nme intracellularly (Tunbridge et al., 2006;Stevanin et al., 2007).
Recruited Neutrophils
Neutrophils recruited to infected vessels are a key part of the host defence against systemic bacterial infections, and an inflammatory infiltrate consisting primarily of neutrophils and macrophages is diagnostic for a range of bacterial meningitis pathogens, including Nme (Sotto et al., 1976; Harrison et al., 2002; Coureuil et al., 2017; Shahan et al., 2021). Clinical IMD cases are marked by early signs of neutrophil activation, including increased CD11b and CD18 expression and shedding of CD62L (L-selectin) (Peters et al., 2003). Neutrophils are recruited to arterioles, capillaries, and venules containing attached Nme; however, it was recently demonstrated that the neutrophil populations at these sites are heterogeneous (Manriquez et al., 2021). Manriquez et al. (2021) showed that while neutrophils were recruited in large numbers to venules in human skin grafts in a mouse model of IMD, the level of adherent neutrophils in arterioles and capillaries was greatly reduced, leading to insufficient clearance of Nme from these vessels. The colonisation of these sites therefore represents a mechanism by which Nme may evade killing by neutrophils. Nme may also evade killing by directly reducing the recruitment of neutrophils to infected vessels. While both encapsulated and unencapsulated meningococci induce shedding of L-selectin by neutrophils (leading to increased adhesion at peripheral sites), a meningococcal mutant lacking a long-chain LOS induced greater neutrophil adhesion than the wild-type strain, suggesting that LOS may play a role in inhibiting neutrophil recruitment (Klein et al., 1996).
Phagocytosis of Nme by neutrophils is primarily triggered by the deposition of complement factors or opsonising antibodies on the bacterial surface and is resisted by capsule, LOS, and surface proteins (described in detail in Section 5.2.1). Non-opsonic phagocytosis can be carried out by direct binding of neisserial surface structures to receptors on neutrophil surfaces. In the gonococcal model, binding of gonococcal Opa proteins to CEACAM3 (but not CEACAM1 or CEACAM6) results in non-opsonic phagocytosis followed by oxidative burst and degranulation (Sarantis and Gray-Owen, 2007; Sanders et al., 2012). Since meningococcal Opa proteins are capable of binding CEACAM3, it is probable that Nme is taken up by neutrophils in a similar manner (Sarantis and Gray-Owen, 2007; Sanders et al., 2012). Sialylation of meningococcal LOS has been shown to inhibit non-opsonic phagocytosis in some Nme strains (Estabrook et al., 1998). Neisserial porins play a key role in resistance to neutrophils, inhibiting opsonic phagocytosis, degranulation, and phago-lysosome fusion (Bjerknes et al., 1995). The inhibition of apoptosis by PorB observed in epithelial cells and DCs is likely to occur in neutrophils as well, representing a probable mechanism by which those meningococci that survive within neutrophils may extend their lifespan (Criss and Seifert, 2012; Peak et al., 2016). NETs produced by neutrophils in response to infection are deployed to immobilise and kill bacteria by depriving them of critical nutrients and by CAMP- and ROS-mediated killing. The binding of meningococci to NETs is partially mediated by Tfp (Lappann et al., 2013); given the high affinity of meningococcal pilin for DNA via the ComP subunit (Cehovin et al., 2013), this interaction is likely mediated by ComP. Both Nme and outer membrane blebs are capable of inducing the production of NETs, and the release of blebs results in the misdirection and depletion of NETs, protecting meningococci (Lappann et al., 2013). EptA-mediated modification of lipid A headgroups with PEA is also important for resistance to NETs (Lappann et al., 2013). Such resistance is not due to decreased binding to NETs, but rather to resistance to the action of NET-bound cathepsin G. Capsule plays a role in resistance to NETs, as indicated by the increased binding of capsule mutants by NETs. Zinc uptake via ZnuD is also important for survival in NETs, which may withhold zinc from bacteria bound to them (Lappann et al., 2013).
The production of ROS and reactive nitrogen species (RNS) by both neutrophils and macrophages causes extensive damage to bacterial proteins, lipids, and DNA, ultimately resulting in the destruction of phagocytosed pathogens (Kozlov et al., 2003; Imlay, 2013). To resist killing by ROS and RNS, meningococci possess a range of proteins that detoxify ROS, including catalase, cytochrome c peroxidase, and two superoxide dismutases (reviewed in Criss and Seifert, 2012). In addition to detoxification, quenching of ROS is also used to protect Nme against ROS-mediated damage. Nme has two glutamate uptake systems, GltT and GltS, which work in tandem with the glutathione synthetase GshB to acquire L-glutamate and convert it into glutathione in order to further quench ROS (Talà et al., 2011). Manganese has been shown to scavenge the superoxide radical and to dismutate H2O2 in the presence of bicarbonate (Archibald and Fridovich, 1982; Stadtman et al., 1990), and the meningococcal Mn uptake system MntABC has been shown to play a significant role in the resistance of Nme to ROS (Seib et al., 2004). As noted above, resistance to the RNS produced by macrophages is mediated by the nitric oxide reductase NorB and, to a lesser extent, cytochrome c' (CycP) (Stevanin et al., 2005), and detoxification of NO by NorB also downregulates the production of pro-inflammatory cytokines by macrophages, likely contributing to survival in these cells (Stevanin et al., 2007).
In addition to detoxification, the repair of damaged DNA and proteins is critical to cell viability and to the survival of Nme in phagocytes. Neisserial exonuclease (NExo) and neisserial apurinic/apyrimidinic endonuclease (NApe) have both been shown to contribute to survival in human neutrophils via their ability to remove damaged abasic residues from DNA (Carpenter et al., 2007). DNA repair by these enzymes is backed up by a redundant network of enzymes, including the bifunctional DNA glycosylase/AP lyases Nth and MutM, making the meningococcus robust to DNA damage by ROS (Nagorska et al., 2012). The DinG helicase has also been shown to increase survival under oxidative stress through its role in double-stranded break repair (Frye et al., 2017). Repair of damaged proteins occurs via several pathways. Methionine sulfoxide residues on damaged proteins are repaired by an outer membrane lipoprotein called PilB, which comprises two methionine sulfoxide reductase domains (MsrA/B) fused to an N-terminal thioredoxin (Trx) domain. Electrons required to reduce methionine sulfoxide to methionine are channelled through the Trx domain via the inner membrane protein DsbD (Brot et al., 2006). Damage to cysteine residues occurs primarily through the breakage of thiol-disulphide bonds critical for protein structure and function. The Dsb proteins, involved in the oxidation and isomerisation of thiol-disulphide bonds, repair this damage in Nme and ensure correct folding of their target proteins (Piek and Kahler, 2012). Nme contains three DsbA homologues: DsbA1; DsbA2, which among other roles is involved in the formation of disulphide bonds in the PilE and PilQ subunits of Tfp; and DsbA3, which catalyses disulphide bond formation in the LOS PEA-transferase EptA (Sinha et al., 2004; Tinsley et al., 2004; Sinha et al., 2008; Piek et al., 2014). Each of these DsbA proteins is re-oxidised by the inner membrane protein DsbB (Piek and Kahler, 2012). The isomerisation pathway consists of DsbD, which transfers electrons to DsbC, allowing DsbC to reshuffle thiol-disulphide bonds in proteins containing multiple cysteine residues (Piek and Kahler, 2012). Interestingly, DsbD has been identified as essential in Nme (Kumar et al., 2011).
EVOLUTION OF COMMENSALISM AND PATHOGENICITY
Nme is a useful species in which to examine the evolution of virulence, as it contains both non-invasive genetic lineages and hyperinvasive lineages that differ in their capacity to cause IMD. The evolution of virulence in a pathogen is a dynamic continuum between the acquisition of patho-adaptive mutations and fitness in any given environmental niche (Diard and Hardt, 2017). Adaptation towards virulence may give a pathogen an ecological advantage, such as improved colonisation and transmission through the human population, and thereby provide a competitive advantage over strains without this feature. In the case of Nme, IMD is considered a dead-end in the transmission cycle and provides no obvious competitive advantage to genetic lineages with this trait. In theory, this should ultimately result in the slow extinction of the hyperinvasive lineages over time. However, two evolutionary forces oppose this process: the acquisition of traits via horizontal gene transfer (HGT) and the evolution of hypervariable and hypermutable loci (De Ste Croix et al., 2020). Depending on the traits involved, such loci result in a mixed population with strongly or weakly adaptive phenotypes that provide a subset of cells with a survival advantage within a given niche. Hypermutable loci in Nme are typically phase-variable loci (Figure 1), in which the expansion and contraction of simple sequence repeats (SSR) result in stochastic expression of a trait within a population of bacterial daughter cells derived from a single progenitor. Hypervariable loci typically contain both conserved functional regions and variable regions carrying variable epitopes that misdirect the host immune response (De Ste Croix et al., 2020). Such hypervariable regions are derived from contingency loci, some of which are partial and silent (such as the pilS cassettes), and some of which exist as multiple intact loci in the bacterial chromosome (such as the Opa-encoding loci) (Nassif et al., 1993; Callaghan et al., 2006). Both hypervariable and hypermutable loci are considered mechanisms of "short-sighted" evolution, typically driving in-host evolution during colonisation and IMD. Despite the abundance of both hypervariable and hypermutable loci in Nme, multiple studies comparing hyperinvasive and commensal lineages have not detected an association between the phasome (the entire cohort of loci containing SSRs) and hyperinvasiveness (Wanford et al., 2018; Wanford et al., 2020; Mullally et al., 2021). The study by Mullally et al. (2021) proposed a model in which the acquisition and loss of genomic islands correlated with the propensity of a lineage to cause IMD. Although the majority of genomic islands were hypothetical, where functions were known they conferred traits associated with survival in host cells (e.g. resistance to host killing mechanisms) and competitive colonisation traits (e.g. bacteriocins and fratricidal competition mechanisms).
[Table 1 appeared here; only fragments survived extraction. Recoverable entries pair models with inflammatory read-outs: one row ends with (Deghmane et al., 2009; Deghmane et al., 2011; Besbes et al., 2015); whole blood: ↑TNF-α, IL-6, IL-10, oxidative burst, ↑TLR2, TLR4, HLA-DR, CD14 (Potmesil et al., 2014; Aass et al., 2018); dendritic cells: ↑IFN-α, TNF-α, IL-6, IL-8, ↑CD86 (Unkmeir et al., 2002a; Michea et al., 2013); mouse sepsis model: ↑IL-6, TNF-α, KC (Plant et al., 2006); invasive GGI isolates (cc5, cc60, cc22, cc23) in epithelial cells: ↑TNF-α, IL-6, IL-8, IFN, IL-1β, inflammatory, low number of studies (Guo et al., 2020); a final dendritic-cell row is truncated.]
In contrast to the hyperinvasive lineages, the commensal lineage cc53 possessed only 33 of the 93 genomic islands found in the pangenome. Interestingly, cc11 was an outlier in this scheme, possessing the largest number of genomic islands (48/93) and by far the highest disease-to-carriage (D/C) ratio, suggesting that this lineage may be uniquely adapted to a pathogenic lifestyle (Mullally et al., 2021).
One possible explanation for these observations is the theory of coincidental evolution, in which virulence factors arise as a result of environmental selection pressures not directly associated with causing disease in the host per se (Sun et al., 2018). In this model, the first bottleneck encountered by Nme is colonisation of host mucosal surfaces and the need to out-compete the established microbiome. An epicellular lifestyle, in which the bacteria invade epithelial host cells, replicate, and recycle to the apical surface, has a dual purpose: to avoid competition from the microbiome, but also to subvert nutritional immunity and evade host innate immunity. In this context, the accidental acquisition of the ability to cause IMD may potentially be an outcome of acquiring traits that improve bacterial growth for further transmission. One hypothetical pathway by which this could be achieved is the stimulation of inflammation and the subsequent dysregulation of nutritional immunity, especially in the form of high lactate production. Conversely, stimulation of the inflammatory cascade results in the activation of adaptive immune responses, and in these circumstances Nme would need to develop resistance to host adaptive immunity mechanisms in order to take advantage of this carbon source. Although there are limited studies comparing genetic lineages and their ability to cause inflammation, a meta-analysis of the currently published work (Table 1) suggests that there are trends supporting this hypothesis. Typically, commensal strains of Nme (such as cc53) or strains isolated from carriage are less inflammatory than isolates from hyperinvasive lineages. Of the hyperinvasive lineages, cc11 has the strongest ability to stimulate inflammatory markers in whole blood, DCs, and epithelial cells, and induces increased host cell apoptosis compared to GGI and GGII isolates (Unkmeir et al., 2002a; Plant et al., 2006; Deghmane et al., 2009; Deghmane et al., 2011; Michea et al., 2013; Potmesil et al., 2014; Besbes et al., 2015; Aass et al., 2018) (Table 1). The capacity to induce increased levels of apoptosis and inflammation compared to other lineages is associated with the acquisition of multiple unique genetic islands, including NadA, and the possession of virulence-associated genomic islands associated with both GGI and GGII. In addition, the recent adaptation of cc11 to the human urogenital tract and its subsequent capability to cause epidemic outbreaks of urethritis provide an exciting opportunity to examine this hypothesis in real time (Tzeng et al., 2017). In this case, adaptation to the urogenital niche included the loss of capsule production and the acquisition of anaerobic metabolism by genetic transfer from N. gonorrhoeae, enabling improved colonisation and growth, respectively, in the male urogenital tract. As further work is performed on this new pathotype, comparisons of inflammatory potential may inform further thinking on how Nme has evolved in the past.
While relatively little experimental biological work has been done on cc53, there is evidence of a distinct strategy of co-existence with the host. These isolates lack a capsule and the broad protection it provides against complement, antibody-mediated opsonisation, and phagocytosis. They also lack many of the other features common to the hyperinvasive lineages, including the Opc invasin, the HpuAB system for iron acquisition from heme, the MDA phage, O-acetylated pilin glycans, and an IgA1 protease capable of cleaving IgG3. cc53 and other carriage-restricted lineages are less inflammatory and induce reduced cytokine production, apoptosis, and differentiation in a range of immune cells, indicating an overall strategy of persistence within the nasopharynx in a manner similar to the commensal Neisseria species.
CONCLUSIONS
Nme has proven to be an exciting model for understanding the evolution of epicellular bacterial colonisation in humans. The remarkable plasticity of the meningococcal genome has allowed this species to develop both commensal and pathogenic lifestyles in multiple host niches. Future work on understanding the interference between Nme and the human microbiome, how Nme interacts with the epithelial surface at a molecular level, and how these processes differ between genetic lineages will enable a greater understanding of the commensalism and virulence of Neisseria spp.
AUTHOR CONTRIBUTIONS
AM and NM contributed equally to research and drafting of the manuscript. AM, NM, and CK contributed to editing the manuscript. All authors contributed to the article and approved the submitted version.
Trans-kingdom Cross-Talk: Small RNAs on the Move
This review focuses on the mobility of small RNA (sRNA) molecules from the perspective of trans-kingdom gene silencing. Mobility of sRNA molecules within organisms is a well-known phenomenon, facilitating gene silencing between cells and tissues. sRNA signals are also transmitted between organisms of the same species and of different species. Remarkably, in recent years many examples of RNA-signal exchange have been described between organisms of different kingdoms. These examples are predominantly found in interactions between hosts and their pathogens, parasites, and symbionts. However, they may only represent the tip of the iceberg, since the emerging picture suggests that organisms in biological niches commonly exchange RNA-silencing signals. If so, such signals need to be taken fully into account to understand how a given biological equilibrium is obtained. Despite many observations of trans-kingdom RNA signal transfer, several mechanistic aspects of these signals remain unknown. Such RNA signal transfer is already being exploited for practical purposes, though. Pathogen genes can be silenced by plant-produced sRNAs designed to target these genes. This is known as Host-Induced Gene Silencing (HIGS), and it has the potential to become an important disease-control method in the future.
Introduction
Since the discovery of gene silencing induced by inverse transcripts in the 1980s [1] and Fire and Mello's discovery in 1998 that double-stranded RNA (dsRNA) can activate gene silencing in Caenorhabditis elegans [2], our understanding of the complex role of RNA in gene regulation has increased considerably. Different types of small RNA (sRNA) molecules have been identified over the years, of which microRNAs (miRNAs) and small-interfering RNAs (siRNAs) are the main types. In this review, we will primarily use the shared term, sRNA, and generally not distinguish between miRNAs and siRNAs.
sRNAs are typically 19-25 nt long, and they are produced from larger dsRNA or hairpin RNA (hpRNA) molecules by DICER (DCR) or DICER-like (DCL) proteins. They bind to complementary mRNA targets with the help of an Argonaute (AGO) protein, leading to transcriptional and post-transcriptional gene silencing. This complex of an sRNA and an AGO protein is called the RNA-Induced Silencing Complex (RISC). The components of the RNA-silencing machinery are widely conserved in eukaryotes (reviewed in [3][4][5]). sRNA-guided transcriptional gene silencing in the nucleus involves RNA polymerase II release, epigenetic histone modifications, typically introducing the H3K9 methylation mark, and DNA methylation. Post-transcriptional gene silencing in the cytosol involves mRNA cleavage and inhibition of translation (reviewed in [3,6,7]). These modes of sRNA-guided gene silencing are often referred to as RNA interference (RNAi). The essential components of the RNAi mechanism appear to have been present in the last eukaryotic common ancestor, although species in several super-groups of the eukaryotic tree seem to have lost components of the RNAi machinery independently. These include Saccharomyces cerevisiae (Unikonta), Trypanosoma cruzi and Leishmania major (Excavata), Cyanidioschyzon merolae (Archaeplastida), and Plasmodium falciparum (Chromalveolata) [8,9]. It is widely believed that RNAi evolved as a measure to control viruses and transposable elements. However, as we review here, RNAi also functions in communication between hosts and more advanced pathogens and parasites. Moreover, it has come to play essential roles in gene regulation important for endogenous life processes, including the fine-tuning of mechanisms for innate immunity [10].
RNA molecules have been found to be mobile within organisms, and numerous cases in which RNA-silencing signals travel between different organisms have now been described. These organisms can be of the same species, where breast-feeding of infants may provide an example of RNA-mediated gene regulation [11], or of different species, for instance between plants parasitized by other plants [12]. In recent years, both animals and plants have been found to exchange sRNA with closely interacting pathogenic, parasitic, or symbiotic organisms [13][14][15]. Trans-kingdom movement of RNA-silencing signals has been reported to occur between a wide range of species: from humans to the malaria-causing chromist, P. falciparum [16], from bacteria to nematodes [17], from plants to pathogenic and symbiotic microbes [18][19][20][21], from plants to nematodes [22], from fungal pathogens to plants [23], and from plants to insects [24]. These examples are detailed in Figure 1 and Table 1. The method of Host-Induced Gene Silencing (HIGS) exploits the silencing effect of sRNA signals in interacting organisms, and involves host expression of sRNA-generating constructs directed against genes in associated pathogens, parasites, or symbionts [18][19][20]22,[24][25][26][27][28].
Many aspects of these trans-kingdom silencing phenomena remain poorly understood. These include how specific sRNAs are selected for transport, how sRNAs are transported outside the cell, the way they recognize and enter their target cell, and the mode by which these sRNAs use the target cells' RNAi machinery to convey their silencing effect. Here, we will deliberate on the mechanisms that could be involved in the transfer of these silencing signals and address some of the many questions surrounding the intriguing phenomenon of trans-kingdom sRNA mobility.
The Biological Context of RNA Trans-kingdom Transfer
The examples in Table 1 and Figure 1 suggest that there is a framework, widely conserved in eukaryotes, that allows production, transfer, and perception of RNA signals between very distantly related organisms across the branches of different kingdoms of the tree of life. In the HIGS examples, sRNA-producing constructs are designed to target genes in the interacting organisms, often of different kingdoms. However, evidence is available that natural, endogenous sRNAs also target genes in a trans-kingdom manner. For instance, the plant pathogen, Botrytis cinerea, exploits siRNAs to target defense genes in Arabidopsis and tomato, thereby enhancing its pathogenicity (Figure 1A) [23]. Another example comes from human erythrocytes, which use miRNAs to target P. falciparum genes and thereby counteract malaria (Figure 1B) [16]. This indicates at the same time that sRNA signaling can be transmitted in both directions between host and invader. Similarly, the parasitic flatworm, Schistosoma japonicum, was found to produce miRNAs that could be retrieved from the plasma of rabbits that host it, but it is not clear whether this miRNA has a function in rabbits [15].
Through evolution, hosts and their invaders have undergone remarkable arms races involving the appearance of receptors and downstream response mechanisms for detection and defense on the host side and, e.g., defense-suppressing effectors on the side of the invaders. Hitherto, these interactions have been described as based on the transfer of proteins and low-molecular-weight molecules between the organisms. However, the results of LaMonte et al. [16] and Weiberg et al. [23] indicate that RNA can be added to this list of communication molecules.
Even though the occurrence of RNA signal transfer is widespread, it is not surprising that there are organisms that may not be influenced by incoming RNA. The oomycete plant pathogen Phytophthora parasitica appears not to be sensitive to sRNA coming from the plant host [29], even though the closely related Phytophthora capsici is [28]. If this distinction can be confirmed, it would be very interesting to determine what fundamental difference could account for the susceptibility to exogenous sRNA molecules in one and not the other Phytophthora species. This could potentially reveal an essential mechanism of sRNA transfer or RNAi, which would suggest that P. parasitica, by being insensitive, has added another level to the molecular arms race between host and pathogens.
The HIGS method provides us with a potential means to decrease the success rate of pathogens and parasites. This can be achieved by engineering host-produced sRNAs to silence essential pathogen transcripts, which under laboratory conditions has been documented to be very efficient [19,22,24]. It will be interesting to see how efficient and durable this will be under conditions outside the laboratory. Another way of obtaining host resistance may be based on the fact that pathogens and perhaps parasites also make use of sRNAs in the interaction with hosts. Therefore, the host genes targeted by them could be re-coded to make them insensitive.
Considerations When Assessing Inter-specific sRNA Transfer
As listed in Table 1 and Figure 1, many species have now been suggested to exchange sRNA signals. However, several of these examples are largely based on correlated phenotypic effects in the target organism after expression of an sRNA-generating construct in the interacting organism (e.g., [20,27,30]), and direct evidence for sRNA functioning in the target organism is not given. One reason is the difficulty of detecting the sRNA molecule specifically in the target organism without risk of contamination from the transmitting organism. However, the example of Tinoco et al. [26] offers convincing evidence. Here, GUS enzyme activity was reduced in a transgenic Fusarium verticillioides strain after it had attacked a tobacco host plant expressing a GUS hairpin construct. The observation was made during in vitro cultivation after the fungus had been recovered from the plant and occurred together with a reduced GUS transcript level and the presence of a GUS sRNA in the fungus, the latter detected by northern blot. Furthermore, it was noteworthy that this GUS gene silencing could last for an extended period of in vitro growth, i.e., in the absence of hpGUS from tobacco, while subsequently resuming initial GUS expression levels. In vitro cultivation of one of the two organisms following the interaction overcomes the obvious contamination problem when determining the presence of transferred sRNA. Weiberg et al. [23] used the plant RNAi machinery to support the hypothesis that the fungus-induced plant gene suppression is indeed caused by fungal sRNA functioning in the plant. Plant RNAi in general is required for fungal resistance, and by knocking out DCL1, Weiberg et al. showed that this is also the case for B. cinerea. However, knocking out AGO1 has the opposite effect on B. cinerea, even though these two components are in the same RNAi pathway. This supports the idea that plant AGO1 is used by the fungal sRNA in host gene silencing.
Figure 1. Overview of different situations in which sRNA transfer occurs. A, Botrytis cinerea can transfer Bc-siRNA to its host. This process has been shown to be dependent on AGO1 in the host, Arabidopsis thaliana, and on both Dcl1 and Dcl2 in Botrytis cinerea [23]. B, Human miRNAs can be translocated to the malaria parasite, P. falciparum, where they interfere with translation [16]. C, The nematode C. elegans has been shown to take up E. coli-produced ncRNAs that subsequently influence its foraging behavior. This is dependent on the C. elegans protein RDE-2, which is essential for RNAi [17]. D, The Chagas disease-causing parasite, T. cruzi, produces tRNA-derived sRNAs (tsRNAs) that are exported from the cell in vesicles. These vesicles have been shown to increase the infectability of host cells, suggesting that this might be caused by the tsRNAs, although this has not been shown directly [14]. E, The expression of sRNA-generating constructs to silence genes in pathogens, or other closely associated species, has now been demonstrated for many species combinations. This process is suggested to be dependent on Dcl1, since Dcl2, 3, and 4 seem to be dispensable for silencing induced by an Arabidopsis-expressed hairpin in the insect, Helicoverpa armigera [24].
Alternative Mechanisms of Gene Silencing
In most described instances, both species involved in the exchange of sRNA possess the canonical RNAi machinery. However, trans-kingdom RNA silencing does not necessarily require this. The malaria parasite, which receives human miRNA [16], does not possess homologues of AGO and DCR proteins [31]. The translocated miRNAs were instead found to form chimeric dsRNAs with P. falciparum transcripts, thereby inhibiting translation (Figure 1B) [16]. It has been found that the Chagas disease parasite T. cruzi, although it also lacks components of the canonical RNAi pathway, produces vesicles that are loaded with both tRNA-derived sRNAs and an Argonaute protein. It seems likely, but has not been directly shown, that these signals could influence host gene expression (Figure 1D) [14]. Gene expression of the nematode, C. elegans, can be influenced by non-coding RNA produced by the bacterium Escherichia coli. This RNA is taken up and feeds into the RNAi machinery of the worms, down-regulating the che-2 gene, which impairs their ability to find food (Figure 1C) [17]. Future studies will show how common such alternative mechanisms are compared to classical RNAi mechanisms.
Table 1 (fragment; the opening rows were lost in extraction, and one surviving row retains only its reference [23]). Kingdom codes inferred from the original layout: P = plant, F = fungus, C = chromist, A = animal, B = bacterium.
Triticum aestivum (P) → Puccinia triticina (F): RNA produced from RNA virus in planta leads to gene down-regulation in target species [62]
Hordeum vulgare (P) → Blumeria graminis (F): Hairpin expression in planta leads to gene down-regulation in target species [18]
Medicago truncatula (P) → Glomus intraradices (F): Hairpin expression in planta leads to gene down-regulation in target species [21]
Musa paradisiaca (P) → Fusarium oxysporum (F): Phenotype of fungus grown in vitro on medium containing sRNA; hairpin expression in planta leads to gene down-regulation in target species [20]
Arabidopsis thaliana (P) → Fusarium graminearum (F): Hairpin expression in planta leads to gene down-regulation in target species and suppresses fungal growth [19]
Nicotiana tabacum (P) → Phytophthora capsici (C): Hairpin expression in planta leads to gene down-regulation in target species [28]
Glycine max (P) → Meloidogyne incognita (A): Hairpin expression in planta leads to gene down-regulation in target species [22]
Arabidopsis thaliana, Nicotiana benthamiana (P) → Helicoverpa armigera (A): Hairpin expression in planta leads to gene down-regulation in target species [24]
Zea mays (P) → Diabrotica virgifera and other coleopteran spp. (A): Hairpin expression in planta leads to gene down-regulation in target species [30]
Arabidopsis thaliana (P) → Myzus persicae (A): Hairpin expression in planta leads to gene down-regulation in target species [63]
Escherichia coli (B) → Caenorhabditis elegans (A): Bacterial ncRNAs down-regulate genes and alter nematode behavior [17]
Extracellular Transport of sRNA
With the exception of the situation for intracellular symbionts and pathogens, e.g., P. falciparum, a prerequisite for cross-species RNA signaling is extracellular mobility of RNA. Many organisms have been shown to contain extracellular sRNA, and several distinct forms of sRNAs have now been found to be mobile in different organisms. We believe that RNA signals that travel between organisms rely on mechanisms similar to those observed for extracellular transport within an organism (Figure 2). In humans, sRNAs have been found to be present in extracellular fluids. This is a hostile environment for RNA, which needs to be protected from degradation. Exported sRNA has been found inside extracellular vesicles and in association with High-Density Lipoprotein (HDL) cholesterol particles (Figure 2) [32][33][34]. How sRNAs are selected for extracellular transport is currently not clear, but the profile of exported sRNA appears to be different from the population of cellular sRNA. This suggests an active selection process [32,35,36].
Figure 2 (legend, partially recovered). Uptake of this RNA is depicted in a manner that resembles SID-1/SID-2-mediated uptake [39]: dsRNA is bound by a receptor and internalized, after which it is taken up into the cytosol by a transmembrane channel, such as SID-1. In the middle, transfer of sRNAs through MVB-mediated exosomes is depicted. A model for loading of sRNA into intraluminal vesicles of multi-vesicular bodies (MVBs) is suggested [49]. These vesicles are released into the intercellular space as exosomes after fusion of MVBs with the plasma membrane (PM). Exosomes are taken up by endocytosis into the receiving cell. It is unknown how sRNA is released into the cytosol, but one could envisage a fusogenic protein (F) to be involved, which facilitates fusion of the endosomal and exosomal membranes. On the right, transfer of sRNA in shedding vesicles (SV), which are generated directly from the PM, is depicted. How RNA is loaded into SVs is unknown. The recipient cell takes up the sRNA after fusion of the SV with the PM in a process that requires fusogenic proteins. SVs might be taken up in an endocytosis-dependent manner, and exosomes might be taken up in a membrane fusion event. In the cytosol of the recipient cell, the sRNA is recognized by the RNAi machinery and triggers gene silencing, either through post-transcriptional gene silencing (PTGS) or transcriptional gene silencing (TGS). During PTGS, amplification of the sRNA signal is provided by RNA-dependent RNA polymerases (RdRP), which give rise to secondary sRNAs that can target the same or other transcripts. doi:10.1371/journal.pgen.1004602.g002
Non-vesicular extracellular sRNA
As indicated, sRNA not enveloped in a membrane can be found in the extracellular space. It is still unknown how this free sRNA is secreted, not to mention how it is selected for secretion. However, outside the cell it can be associated with HDL and proteins, such as AGO [34,37]. Ideas on how free sRNA might be taken up by target cells come from studies in C. elegans, which revealed key components for the uptake of free extracellular dsRNA. These are the transmembrane protein channel, Systemic RNAi-Deficient (SID)-1 [38], and the single-pass transmembrane receptor, SID-2 [39]. It is thought that free dsRNA is internalized from the intestinal lumen by SID-2-receptor-mediated endocytosis, after which the dsRNA can escape from the endosome into the cytoplasm via SID-1 (Figure 2) [39]. Unlike SID-1, which is conserved in animals, SID-2 is poorly conserved [40].
An alternative to this protein seems to be scavenger receptors, which mediate clathrin-dependent endocytosis of cholesterol-conjugated lipoproteins. In cultured human hepatocytes, extracellularly applied cholesterol-conjugated lipoprotein-associated sRNA has been found to be able to induce RNAi [41]. This and other examples indicate that scavenger receptors are required for RNA uptake [34,41,42].
Vesicular extracellular sRNA
RNA in extracellular vesicles has attracted increasing interest as a means of intercellular communication. When first discovered, extracellular vesicles were merely considered to result from stressed cells shedding waste products [43]. However, after the discovery of nucleic acid sequences in these vesicles, they were considered much more interesting, as this suggested that they might facilitate genetic signaling [32]. sRNA-containing vesicles in human plasma are either shedding vesicles, formed by outward budding at the plasma membrane, or exosomes, formed by inward budding of intraluminal vesicles (ILV) at endosomal membranes of multi-vesicular bodies (MVBs). ILV formation is generally considered to require the Endosomal Sorting Complexes Required for Transport (ESCRT) machinery. The exosomes are subsequently released into the environment when MVBs fuse with the plasma membrane [44]. A subset of the sRNA population may enter the ILVs of the MVBs, possibly directing this subset onto the exosomal excretion pathway (Figure 2) [45,46]. The ESCRT requirement for secretion of sRNA-containing exosomes is uncertain, since an alternative ceramide-dependent ILV formation mechanism, regulated by neutral sphingomyelinase 2 activity, has been proposed [34,47,48]. It has been suggested that vesicle loading of sRNAs can depend on their binding to complementary mRNA, their sequence motifs, and their 3' modifications. miRNAs in human primary T-lymphocyte-derived exosomes have been found to share four-base EXOmotifs, which bind the protein hnRNPA2B1 after its sumoylation. This ribonucleoprotein complex is sorted into the MVB ILVs and subsequently secreted in exosomes [49]. sRNAs in exosomes are not only protected by a membrane: in the mammalian bloodstream, sRNAs in exosomes have been found to form a complex with Ago2 [33], as has also been found for sRNA not enveloped by a membrane [37].
RNA signals in extracellular vesicles are envisaged to enter target cells in one of two ways. The intact vesicle can be endocytosed at the plasma membrane, after which the RNA will end up behind two membranes in an endosome (Figure 2). RNA escape to the cytosol will require a fusion of the two membranes by an unknown mechanism. Alternatively, the extracellular vesicles can fuse directly with the plasma membrane and thereby release the RNA into the cytosol. This process is also poorly understood. Intracellularly, membrane fusion processes are mediated by SNARE proteins. However, fusion of extracellular vesicles to plasma membranes will require other fusogenic proteins. This process will be similar to the membrane fusions occurring, for instance, during oocyte fertilization, infection by membrane-enveloped viruses, and cell-cell fusion events. A number of extracellular fusogenic proteins, such as syncytin and AFF-1, have been implicated in these fusion processes [50,51], and it will be interesting to learn about the role of such proteins in the fusion of RNA-carrying exosomes and shedding vesicles with target cells. Fusogenic proteins may also mediate fusion between the two membranes of the endosome resulting from endocytosis of a vesicle.
Trans-kingdom RNA Transfer
RNA secretion is believed to generate the extracellular RNA that is transported between hosts and parasites. For instance, T. cruzi-produced vesicles are loaded with an Argonaute protein and sRNAs, which potentially influence host gene expression [14]. Extracellular vesicular transport of sRNA is a candidate mechanism to facilitate trans-kingdom RNA transfer between other species as well. Plant leaves attacked by the powdery mildew fungus deliver both shedding vesicles and exosomes at the fungal attack site, and interference with the latter hampers plant defense [52][53][54]. This supports a possible role for vesicular transport of the RNA-silencing signal and suggests a means of RNA delivery not only during HIGS but also for wild-type plants to transfer RNA to the fungus as a defense strategy [18,54,55].
Trans-kingdom RNAi could also depend on the transfer of non-vesicular RNA. However, to our knowledge, functional homologs of SID-1 and SID-2, for instance, or an alternative direct RNA uptake system, have not been described in plants. Therefore, given the accumulation of vesicular material at the interface between plant and pathogen, we deem it likely that the dissemination of gene-silencing RNA during HIGS in plants relies on vesicle-mediated transport, much as in the mammalian circulation, where the spread of membrane-enveloped endogenous miRNA signals through the bloodstream requires the selective uptake of these signals by the recipient cells [18,52] (Figure 2).
After entering the target cell, it is likely that sRNAs will make use of the RNAi machinery of that cell. For instance, when the fungus B. cinerea exploits sRNAs to silence defense genes in Arabidopsis and tomato, this process is dependent on plant AGO1 (Figure 2) [23]. This protein controls the cytosolic RNAi pathways, suggesting target mRNA cleavage or translational inhibition. However, as mentioned before, it has been shown that sRNAs can reduce gene expression in species that lack the canonical RNAi mechanisms. P. falciparum does not possess homologs of AGO and DCR proteins [31], but the miRNAs translocated from human cells were found to form chimeric dsRNAs with P. falciparum transcripts, inhibiting translation (Figure 2) [16]. In plants, sRNA signals that are mobile through the phloem can induce marked reductions in gene expression in remote target cells, even though their concentration is very low (down to 10 parts per million) [56]. This is most likely also the case in trans-kingdom transfer of sRNA, since large-scale transfer of sRNA seems infeasible. Studies of long-distance signaling in plants using grafting revealed a necessity for the RNA-dependent RNA polymerase (RdRP), RDR6, in sRNA-recipient cells, which is thought to amplify the incoming silencing signal [57,58]. It is plausible that in trans-kingdom transfer, sRNA signals will also be amplified in their target cells to be able to induce gene silencing. It has been suggested that organisms ingesting plant material that contains sRNAs amplify these signals in the cells lining the digestive gut [30]. However, this might not be achieved using RdRP in all species. For instance, insects do not possess this enzyme [59], but they can still be affected by HIGS [24,30]. Therefore, they are likely to have a different system to amplify incoming RNA signals.
Direct evidence showing that trans-kingdom RNAi can also feed into nuclear chromatin-based silencing pathways remains to be found. Yet an enduring silencing effect has been recorded [26], which might suggest that such chromatin-based mechanisms can be activated.
sRNA Sequence-Complementarity Requirements
In order to achieve efficient gene silencing in the target organism, the delivered sRNA signals should meet the sequence-complementarity requirements specific to the receiving cell. These requirements vary between kingdoms, for instance being less stringent in animals than in plants, and they also vary according to the silencing pathway [4,60,61]. This is essential when designing hairpin constructs to target a transcript in an interacting organism. Generally, these constructs are made with complete sequence identity, but the complementarity requirements are important for the prediction of off-target transcripts. Natural sRNAs able to target transcripts in a trans-kingdom manner, such as those identified by Weiberg et al. [23], obviously obey the stringency criteria of the target kingdom, in this case plants. Here, the B. cinerea fungus produces 73 sRNAs with potential targets in Arabidopsis and tomato. Of these, three 21-nt retrotransposon-derived siRNAs target four plant transcripts important for pathogen defense, despite three to five mismatches. Such mismatch tolerance increases the chance of sRNAs being functional in an interacting organism and leads to speculation on whether such mechanisms have arisen fortuitously. Since the presence of matching sRNAs can provide a clear selective advantage, it is likely not a random occurrence. Furthermore, the retrotransposon origin of these sRNAs could indicate that these elements contribute to relatively rapid evolution of the sequence of host-directed sRNAs, which is an advantage in the host-pathogen arms race.
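To make the complementarity arithmetic concrete, the short Python sketch below scans a transcript for candidate target sites of a given sRNA and reports every site within a chosen mismatch tolerance. It is a minimal illustration rather than a real target-prediction tool: the 21-nt siRNA, the mRNA fragment, and the cut-off of five mismatches (echoing the three to five mismatches tolerated in the B. cinerea example) are hypothetical choices of ours, and genuine prediction pipelines additionally apply kingdom-specific, position-dependent rules such as seed-region matching.

# Minimal, hypothetical sketch of mismatch-tolerant sRNA target-site scanning.
# Sequences and threshold are illustrative; they are not data from the cited studies.

COMPLEMENT = str.maketrans("ACGU", "UGCA")

def reverse_complement(rna: str) -> str:
    """Return the reverse complement of an RNA sequence."""
    return rna.translate(COMPLEMENT)[::-1]

def find_target_sites(srna: str, transcript: str, max_mismatches: int = 5):
    """Slide the sRNA's perfectly complementary target sequence along the
    transcript and report every site within the mismatch tolerance."""
    site = reverse_complement(srna)
    n = len(site)
    hits = []
    for start in range(len(transcript) - n + 1):
        window = transcript[start:start + n]
        mismatches = sum(a != b for a, b in zip(site, window))
        if mismatches <= max_mismatches:
            hits.append((start, mismatches, window))
    return hits

srna = "UAGACGUUCAGAAGCUUAGCA"  # hypothetical 21-nt siRNA
transcript = ("AUGGCUAGCUGCUAAGCUUCUGAACGUUUAGCC"
              "AAUGCUAAGCAUCUGAACGUCUAAGG")  # hypothetical mRNA fragment
for start, mismatches, window in find_target_sites(srna, transcript):
    print(f"site at {start}: {window} ({mismatches} mismatches)")

Setting max_mismatches to zero recovers the complete-identity criterion generally used when designing hairpin constructs, while relaxing it mimics the tolerance observed for natural trans-kingdom sRNAs and is the regime in which off-target prediction becomes important.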
Common Emerging Concepts
Trans-kingdom RNA signaling is now a documented phenomenon with intriguing implications for our understanding of biological interactions. In line with ideas proposed by Sarkies and Miska on "social RNA" [40], the evidence presented for trans-kingdom RNAi suggests that genetic interaction between organisms at the RNA level is common. Organisms become genetically programmed according to endogenous and environmental input. Hitherto, we have known these inputs to include physical and chemical stimuli from other organisms. However, now we see that genetic programming is also influenced by genetic stimuli in the form of environmental RNA. Obvious biological niches where such RNA communication could be prolific would be the soil, where evidence for this has already been seen between plants and symbiotic mycorrhizal fungi [18], and the skin and the gut of animals.
Even though convincing, the data available for trans-kingdom RNAi are fragmented and mostly based on input sRNA sequences and phenotypic effects in the receiving organisms. No example provides information on the whole RNA-signaling chain, which conceptually should involve sRNA production, secretion, uptake, perception, amplification, and manifestation. For each of these steps, evidence for alternative mechanisms has been presented. However, between eukaryotic organisms with the canonical RNAi mechanisms intact, it appears from the evidence available that most trans-kingdom RNAi signaling follows the route: (1) sRNA production, (2) RNA secretion in MVB-based exosomes, (3) fusion of RNA-containing exosomes to the plasma membrane, and (4) RdRP-dependent amplification integrated with transcript cleavage and inhibition (Figure 2). We think of alternative mechanisms for each step as variations of these.
Perspectives
It is now documented by many examples that eukaryotic organisms of different kingdoms exchange RNA sequences as signals affecting gene expression, and we may only have seen the tip of the iceberg of this phenomenon. Future studies investigating the mechanisms of this trans-kingdom RNA transfer more systematically will most likely identify "the usual suspects" of the canonical silencing machinery (e.g., Dicers and Argonautes) as being required for the production of mobile RNA and the hijacking of the target-cell RNA-silencing machinery. The biggest revelations may come in the form of factors involved in RNA export from the producing cell, its physical extracellular transport, and its import into target cells. These mechanisms are very enigmatic at this point, and we can only speculate by comparison to analogous phenomena within organisms. So far, HIGS has focused on the function of target genes, but we foresee that it could be used to dissect the process of trans-kingdom RNA-silencing transfer by setting up carefully designed screens. We think that HIGS systems, in which plants express hpRNA directed against genes in the pathogen, hold big promise as a mechanism for pest control, since the approach has been described to work effectively in an increasing number of species [15,18,21,22,25]. Targeting essential invader genes would appear advantageous compared to the current exploitation of endogenous defense mechanisms, in that it should not influence other processes in the host and the invader may have greater difficulty in overcoming it.
A new framework for designing programmes of assessment
Research on assessment in medical education has strongly focused on individual measurement instruments and their psychometric quality. Without detracting from the value of this research, such an approach is not sufficient for high-quality assessment of competence as a whole. A programmatic approach is advocated, which presupposes criteria for designing comprehensive assessment programmes and for assuring their quality. The paucity of research with relevance to programmatic assessment, and especially its development, prompted us to embark on a research project to develop design principles for programmes of assessment. We conducted focus group interviews to explore the experiences and views of nine assessment experts concerning good practices and new ideas about theoretical and practical issues in programmes of assessment. The discussion was analysed, mapping all aspects relevant to design onto a framework, which was iteratively adjusted to fit the data until saturation was reached. The overarching framework for designing programmes of assessment consists of six assessment programme dimensions: Goals, Programme in Action, Support, Documenting, Improving, and Accounting. The model described in this paper can help to frame programmes of assessment; it not only provides a common language, but also a comprehensive picture of the dimensions to be covered when formulating design principles. It helps to identify areas of assessment in which ample research and development has been done. More importantly, it also helps to detect underserved areas. A guiding principle in the design of assessment programmes is fitness for purpose: high-quality assessment can only be defined in terms of its goals.
Introduction
For a long time, research on assessment in medical education has strongly focused on individual measurement instruments and their psychometric quality. This is not illogical given the prevailing view of medical competence as consisting of separate elements (knowledge, skills, attitude, and problem solving) and the quest for the single best measurement instrument for each. Good examples of this approach are the established position of the Objective Structured Clinical Examination as the preferred instrument for skills measurement (Van der Vleuten and Swanson 1990) and the key-feature approach as the method of choice for problem-solving skills (Page et al. 1995; Schuwirth 1998). Without detracting from the value of psychometric criteria and the focus on single instruments, which has provided valuable insights into the strengths and weaknesses of instruments as well as into the trade-offs that have to be made (Newble et al. 1994; Schuwirth and Van der Vleuten 2004; Van der Vleuten 1996), such an approach is not sufficient for high-quality assessment of competence as a whole. From the point of view that medical competence is not the sum of separate entities but an integrated whole, it is only logical to conclude that no single instrument, however psychometrically sound, will ever be able to provide all the information needed for a comprehensive evaluation of competence in a domain as broad as medicine.
A currently popular model, Miller's pyramid (Miller 1990), frames assessment of "professional services by a successful physician" using a four-layered pyramid. While it is a useful aid in selecting appropriate instruments for discrete elements of competence, Miller's pyramid does not describe the relationships between the layers or within combinations of instruments. Unfortunately, little is known about relations, compromises, and trade-offs at this highly integrated level of assessment. Of course, not just any mix of instruments will suffice: a purposeful arrangement of methods is required to measure competence comprehensively. Just as a test is more than a random sample of items, a programme of assessment should be more than a random selection of instruments. An optimal mix of instruments would be the best possible match between a programme of assessment and the goals of assessment (and/or the curriculum at large).
So a programmatic approach to assessment design is advocated (Schuwirth et al. 2002; Van der Vleuten and Schuwirth 2005). It is not easy to provide a single definition of such a "programme of assessment", but central to the concept is a design process that starts with a clear definition of the goals of the programme. Based on this, well-informed, literature-based, and rational decisions are made about the different assessment areas to be included, the specific assessment methods, the way results from various sources are combined, and the trade-offs that have to be made between the strengths and weaknesses of the programme's components. In this way, we do not regard just any set of assessment methods as the result of a programmatic approach to assessment, but reserve the term programme of assessment for the result of the design approach described above.
In this, the design and development of assessment programmes must be underpinned by ideas and decisions on how to reconcile the strengths and weaknesses of individual instruments and how to complement and synthesise different kinds of information. Programmatic assessment can only be studied at the level of comprehensive competence, framing medicine as an integrated whole task. This contrasts with the view of competence as split into separate entities, or even as the sum of these entities. From a holistic perspective on assessment, a programmatic approach offers several theoretical advantages.
- It can help to create an overview of what is and what is not being measured. This promotes the balancing of content and other aspects of competence and counteracts the pitfall of overemphasising easy-to-measure elements, like unrelated factual knowledge.
- It allows for compensation for the deficiencies of some instruments by the strengths of other instruments, resulting in a diverse spectrum of complementary measurement instruments that can capture competence as a whole.
- Matching instruments can increase efficiency by reducing redundancy in information gathering. When data on a subject are already available from another test, test time and space are freed for other subjects.
- In high-stakes examinations, information from different sources (tests or instruments) can be combined to achieve well-informed and highly defensible decisions.
Of course, many examples of programmes of assessment already exist, many of which are based on extensive deliberation and good expertise and are probably of high quality (Dannefer and Henson 2007). Unfortunately, however, there is little research in this area that would help to support or improve their quality.
In our notion of a programmatic approach to assessment, we presupposed that criteria for designing comprehensive assessment programmes and for assuring their quality would already be available in the literature, but when we searched the literature for guidelines for designing assessment programmes, the results were disappointingly scant. One of the early developments in this area, based on the notion that assessment drives learning, was the alignment of objectives, instruction, and assessment to achieve congruent student behaviour (Biggs 1996). Although in theory it might encompass an entire assessment programme, probably due to the complexity of educational environments the application of this alignment has rarely extended beyond the content of measurement (Webb 2007), i.e. blueprinting assessment based on curriculum objectives. Another approach, which focused on the application of psychometric criteria to combinations of methods (Harlen 2007), resulted in a framework for quality analysis that relied heavily on a "unified view of validity" (Birenbaum 2007) and in research into high-stakes assessment programmes for the certification of physicians aimed at high composite reliability (Burch et al. 2008; Knight 2000; Wass et al. 2001). Neither achieved a coherent programmatic approach to assessment, however.
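For readers unfamiliar with the term, the composite reliability pursued in such certification programmes can be illustrated with a standard classical-test-theory result, often attributed to Mosier. This worked expression is our illustrative addition, not taken from the studies cited above, and it assumes that the measurement errors of the k component instruments are uncorrelated:

\rho_C = 1 - \frac{\sum_{i=1}^{k} w_i^2 \sigma_i^2 (1 - \rho_i)}{\sigma_C^2}, \qquad \sigma_C^2 = \sum_{i=1}^{k} w_i^2 \sigma_i^2 + 2 \sum_{i<j} w_i w_j \sigma_i \sigma_j r_{ij},

where w_i, \sigma_i, and \rho_i are the weight, standard deviation, and reliability of instrument i, and r_{ij} is the correlation between instruments i and j. The formula makes explicit why a purposeful combination of moderately reliable instruments can still support highly reliable composite decisions: weighted error variances accumulate in the numerator, whereas positive correlations between instruments inflate \sigma_C^2 in the denominator.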
Not only the search for single best instruments, but also the strong and almost exclusive reliance on psychometric quality in assessment can be challenged (Schuwirth and Van der Vleuten 2006). Undeniably, psychometric quality is important, but so are the practical feasibility of instruments, educational goals, and the context and environment of assessment. Baartman (2008) recently proposed adding education-based criteria, such as authenticity and meaningfulness. Her set of criteria for competence measurement was a valuable theoretical step with strong practical relevance, but its exclusive focus on competence (although cost and efficiency were considered too) disregarded the relationship of assessment programmes with their environment. Likewise, little attention was given to integrating or weighting the criteria.
This paucity of research with relevance to programmatic assessment, and especially its development, prompted us to embark on a research project to develop design principles for programmes of assessment. Fearful of the pitfalls of a blunderbuss technique, we first set out to develop a model to frame programmes of assessment and to determine which dimensions have to be covered in formulating design criteria, before we could, in a subsequent study, start defining the individual design criteria. Because of the absence of a common language for programmatic assessment and uncertainty about criteria, we used an exploratory, open, qualitative method to probe the views and ideas of experts in assessment (in medical education). From this resulted an overarching model for programmatic assessment, which we present in this paper.
Study design
We conducted focus group interviews to explore the experiences and views of assessment experts concerning good practices and new ideas about theoretical and practical issues in programmes of assessment. The focus group approach was chosen because it allows participants to freely express ideas without having to reach consensus and leaves room for issues not previously considered in research (Hollis et al. 2002). Prior to data collection, the research team devised a rough-and-ready framework (a list of topics) as a starting point for the discussions. The framework consisted of six elements of assessment relating to theoretical issues as well as practical suggestions for an assessment programme (see Fig. 1). The overall purpose of the assessment (Goals) and the objectives of the curriculum determine what needs to be tested (Collecting information) to gain data about the medical competence of students. The data from different tests or sources need to be merged (Combining information) into an overview which can be distributed among various stakeholders (Reporting). Based on the goals and data, further action needs to be taken (Decision taking). Finally, in order to ensure high-standard assessment, a system of quality checks and measures should be in place (Quality control). An email giving details of the objectives and the topics of the focus groups invited 12 experts with extensive experience of the difficulties and problems associated with programmes of medical assessment to participate in the study. A total of nine experts voluntarily took part in two focus groups; three had to decline because of diary or health problems. The experts, five from North America and four from Europe, fulfilled different (and some multiple) roles in their assessment practice, i.e. Programme Directors (5) and National Committee Members (6). The experts represented different domains, ranging from undergraduate and graduate education (4) to national licensing (5) and recertification (2), and had published extensively on assessment. Purposeful selection based on the experts' longstanding involvement in different assessment organisations ensured heterogeneity of the focus groups. To facilitate participation, we organised the sessions directly after the 2007 AMEE conference in Trondheim and paid all related expenses.
Procedure
The meeting was divided into four sessions on one day: a plenary introductory session in which the guiding (initial) framework was presented; two sessions split into groups, the first on theoretical issues and the second on practical recommendations; and a plenary retrospective session summarising the discussions. It was explained to the participants that we were interested in a variety of views and that there were no correct or incorrect answers. Dissent was encouraged. All sessions were semi-structured using the framework. Two of the researchers (LS & CvdV) each moderated the sessions of one group. A third researcher (JD) took field notes.
Data analysis
All sessions were audio recorded, transcribed, and read by the research team. One coder (JD) analysed the transcripts, starting with the categories from the initial framework. Because this exploratory research requires an informed but open mind, the framework, including concepts and theories, was further developed in a continuous process of checking and refinement, without strict adherence to the pre-set framework. Furthermore, the data were analysed by identifying and labelling newly emerging themes and issues. When the research team met to evaluate the resulting themes and issues, they were forced to conclude that the first draft of the model (the framework guiding the discussions) was overly simplistic, causing ambiguities in coding and occasionally precluding coding altogether. The model was revised until the research team reached consensus that coding saturation had been achieved and no new topics emerged. Finally, the model was sent to the participants to check whether it reflected the discussion correctly and whether our interpretation of the discussion was accurate. No major revisions were suggested by the participants; only a minor suggestion about the specific captions in English was made by a native English-speaking participant.
Results
Because of the differences between the initial framework and the end result, there is a risk that this section confuses rather than clarifies. Therefore, some thoughts on the development from the initial framework to the final framework are provided first. Next, the frameworks are compared at the top level, and similarities and differences are briefly described, before the dimensions of the final model are described in more detail and illustrated with quotes from the discussion to clarify some terminology. The selected quotes are accompanied by a (randomly assigned) number corresponding to a specific participant. This selection of quotes is not a quantitative reflection of participation during the focus group discussions, as only the clearest and most illustrative quotes are included. Some quotes were edited for reasons of clarity without changing the meaning and/or intention of the participant.
Coding the transcripts with the initial framework was complicated by the fact that this framework covered only a small proportion of the topics of assessment programmes that were discussed, and by the interrelatedness of the different elements, which had initially been conceived of as discrete. The distinction between theory and practice proved problematic as well, with theoretical issues often requiring adjustment due to practical considerations and practical suggestions requiring translation into general guiding principles, which could become increasingly theoretical. The alternative framework (see Fig. 2) is based on the refinement of the initial framework and new themes which emerged. It is more interrelated and comprehensive than our initial framework, but is less sequential in nature.
Comparing the frameworks, the dimension Goals is central in both. Next, the four elements from the initial framework (Collecting, Combining, Reporting, and Decision Taking) are closely related activities that are represented in one dimension in the new framework, named Programme in Action. With the exception of some changes in definition, the two frameworks are similar in this respect. In contrast, the analysis yielded a huge amount of information on Quality Control. It appeared that our first framework did not do justice to the diversity of activities related to quality and the importance the experts placed on this issue. Quality turned out to be multi-layered and integrated with Goals and the Programme in Action instead of being a single element at the end of the process. In the final framework, four layers (dimensions) were identified and placed on the same level as Goals and Programme in Action: supporting, documenting, improving, and accounting.
Goals
Goals dominated the discussions, with experts typically linking ideas and suggestions to specific programme goals.
I think another way to think about the goal at the top level is eh, that there should be a purpose statement to the assessment programme just as there should be a purpose statement to each of the components.
[…] there should be a purpose of the assessment system that guides the whole of planning. (P8)

… did you meet your goals, there has to be some sort of relationship between the quality control and the purpose and the goals of what you are trying to do (P4)

Although goals were also part of our initial framework, we were struck by their unexpected centrality in almost every discussion on the other programme elements. Apparently, it was impossible to consider these elements in isolation from the goals of the assessment. The content of goals seemed to be of lesser importance, however.
… they are implied in goals which themselves will have a dynamic relationship to each other and to the context within it's being applied… (P6)

… cause the ones where they run into problems are where they're not agnostic where there is a religious devotion to a particular tool [and everything else has to fit in] and it is used for everything where it's not appropriate. (P2)

Regardless of educational concept (e.g. traditional education, problem-based learning) or the specific function of assessment (e.g. learning tool, licensing decisions), the quality of assessment programmes was framed in terms of fitness for purpose. This implies that clearly defined programme goals are a prerequisite for high-quality programmes.
As fitness for purpose was regarded as the central premise of programme design, care should be taken to avoid an overly normative view of design principles and quality criteria. Not all programmes are based on identical educational ideas. Today's popularity of competence-based programmes does not imply that a competence-based design should be the universal standard. Assessment aimed at selecting candidates uses different principles, but that does not detract from its fitness for purpose.
Programme in action
The focus group discussions concentrated predominantly on Programme in Action, or, in other words, on all the activities minimally required to have a running assessment programme. These range from collecting information to taking action based on that information.
Emerging themes that were similar to elements of the initial framework were collecting information, combining information, reporting, and decision making, which were regarded as core activities of virtually any assessment programme. Collecting information was understood as referring to all activities for gathering the various kinds of information about assessees' abilities, including e.g. numeric (quantitative) data as well as descriptive (qualitative) data. Topics of consideration could be assessment content, selection of test formats, use of instruments, scoring systems, and scheduling of assessment.
With regard to combining information, an interesting distinction was made between technical and meaningful aspects. Technical aspects relate to combining data from multiple sources and combining different kinds of data. Combining data often seems a lot like comparing apples and oranges. For example, many programmes of assessment employ a compensatory test model (compensation of results across different items of a test or across OSCE stations) alongside a conjunctive model disallowing compensation between tests (e.g. between an OSCE and an MCQ test on the same subject); a minimal sketch of the two decision rules follows below.
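To make the distinction concrete, the following sketch contrasts the two decision rules on a toy set of component scores. The component names, scores, and pass marks are illustrative assumptions, not values from any actual assessment programme.

```python
# Sketch: compensatory vs. conjunctive combination of assessment scores.
# All names and thresholds below are hypothetical illustrations.

def compensatory_pass(scores: dict[str, float], overall_pass_mark: float = 55.0) -> bool:
    """A strong result on one component can offset a weak result on
    another: only the average matters."""
    return sum(scores.values()) / len(scores) >= overall_pass_mark

def conjunctive_pass(scores: dict[str, float], component_pass_marks: dict[str, float]) -> bool:
    """No compensation between components: every component must clear
    its own pass mark."""
    return all(scores[name] >= component_pass_marks[name] for name in component_pass_marks)

candidate = {"OSCE": 48.0, "MCQ": 72.0}        # hypothetical component scores
pass_marks = {"OSCE": 50.0, "MCQ": 50.0}       # hypothetical component pass marks

print(compensatory_pass(candidate))            # True: the MCQ result offsets the OSCE
print(conjunctive_pass(candidate, pass_marks)) # False: the OSCE falls below its own mark
```

The same candidate can pass under one rule and fail under the other, which is exactly why combining data from different sources is a design decision rather than a purely technical step.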
Using multiple instruments often results in a large amount of data from different sources. In order to take action based on such a versatile and rich data set, interpretation of the data is needed to add value to the information collected. Meaningful aspects refer to the use of combined information, including interpreting, valuing, and selecting data. Although closely linked to, and sometimes intertwined with, combining data, valuing data was regarded as a separate element. So, in the new framework, valuing information is presented alongside combining information.
Another common problem is that lots of sources of information are gathered but the system is not set up so that they are all considered […] they're not integrating and considering all of the material that is gathered… (P2) … the problem is how you can make it, so that you can get it in one place and that you can relate it to each and that you can understand the importance of different things and you can come to a judgment […] Don't inappropriately combine things which shouldn't be combined to force them together when they shouldn't be. (P6) According to the experts, valuing information involved not only setting a pass-fail score, but also determining candidates' strengths and weaknesses or prioritising which learning goals to distil from the information provided by the assessment.
With regard to fitness for purpose, our initial definitions of reporting and decision making were too restrictively tied to common (summative) purposes of assessment, which-although general-are not necessarily universally applicable.
But … there is an issue … about considering which stakeholders need to have this information or appropriate to have this information, so it is not a way of never giving it out. (P1) … but I don't agree either with the idea that every test provides feedback to every stakeholder, that to me, no… [Mod: It's depending on the goals]… the nature of the test will be greatly influenced by the feedback that will be given. (P2) Based on these views, reporting and decision making were merged into a more generic element in the new model, taking action, which includes all activities resulting from the collected, combined and valued information relating to assessments. Without taking action, information from previous activities was considered pointless. Taking action implies closing the loop, and may vary from go/no-go decisions to feedback or even remediation. Taking action attaches consequences to assessments.
As Programme in Action focuses on core activities that have practical consequences and are essential to determine students' abilities, it deserves extensive attention. In Action signifies that conducting the activities is indispensable for any assessment. In summary, the four core activities of Programme in Action are: Collecting Information, Combining Information, Valuing Information and Taking Action.
Supporting the programme
Although the elements of Programme in Action suffice to establish a programme of assessment, they cannot guarantee a high standard. The activities contributing to the quality of the programme of assessment were more often than not related to, if not interwoven with, the activities categorised under Programme in Action. In other words, a major part of the activities classified as quality control in the initial framework appeared to be more appropriately qualified as activities in support of the programme in action.
For an activity to support the programme in action and contribute to overall programme quality it should be directed at the goals of the assessment programme. Supporting activities must ensure that the programme in action is of sufficient quality to contribute optimally to the purpose of the assessment programme.
Two support-related themes matched the concept of quality as fitness for purpose. One is technical support, contributing to the quality of assessment materials. A distinction was made between proactive activities before an assessment is conducted (e.g. item review panels, faculty development) and monitoring after the assessment (e.g. psychometric and other analyses). Test quality depends on review, which determines whether test items or elements meet the required characteristics. Psychometric and other analyses serve to determine the quality of an assessment and whether steps are needed to make improvements. As the success of an assessment depends largely on its users, faculty development is important to promote the quality of assessment programmes. The term technical also captures the knowledge, skills, and attitudes necessary for designing and conducting an educationally sound assessment system.
It was also pointed out that even a technically sound design of an assessment programme does not preclude the risk of failure due to resistance from stakeholders.

you have to establish providence… do you have the right to do what you are doing […] you need to identify the people that are involved within that and then they need to go through a process by which there is agreement within those people and that could be stakeholders (P5)

The second support-related theme concerned political and legal support, targeted at increasing the acceptability of the assessment by early involvement of stakeholders and by putting in place an appeal procedure to avoid unfair conduct. Without acceptability, support will likely be insufficient to achieve high quality. Stakeholder involvement in the design of assessment programmes not only promotes the input of creative ideas, but also ensures a certain fitness for practice. It can give stakeholders a sense of ownership of the programme, thereby gaining their support, without which goals can remain elusive. Issues related to (inter)national or local legal requirements must also be taken into account and can influence the degrees of freedom in programme design.
in court when you stand up and you go through this whole due process business it's whether or not every body was treated in equal manner, did everybody have an opportunity to demonstrate their abilities… (P5)

… well the government has just passed a law that says every doctor will have a 360 degree appraisal every 5 years whether you need it or not. (P6)

Support-related actions have an immediate effect on the currently running assessment practice. Together with Programme in Action, supporting the programme forms a cyclic process aimed at optimising the internal assessment system.

Documenting the programme

Documenting assessment serves two purposes. Firstly, documentation facilitates organisational learning by allowing the cyclic system of optimising the programme in action to function properly. Secondly, it enhances the clarity and transparency of the programme.
That is an important point. Disclosure … about exactly what the procedures are going to be like and exactly how scores are going to be combined in psychometric characteristics I don't know whether that goes on reporting or something else… (P4) Thus all the elements of programme in action and supporting the programme, including responsibilities, rights, obligations, rules, and regulations, must be recorded to ensure that the assessment process is unambiguous and defensible. Three elements deserve special attention in this respect.
Because assessment programmes do not function in a vacuum, it is of vital importance to address the first element, the (virtual) learning environment and context of a programme, which must be linked to the purpose of the assessment programme.
I was thinking about the importance … eh, the purpose and the setting and the context in which this is occurring to a range of stakeholders who might very well have a view about how important it was, […] I think eh, in different circumstances of acceptability to quite a wide range of stakeholders as well. (P1) The context and applicability of an assessment programme have to be clearly described. Stakeholders must be able to determine for themselves if and how the programme affects them.
Secondly, rules and regulations establish a reference for stakeholders to review the purpose of the assessment and the rights and duties of all stakeholders in relation to Programme in Action and supporting the programme. Often the conditions under which the assessment is to be conducted and the specific demands on stakeholders can be captured in rules. Regulations describe the consequences and actions to be taken in specific (standard) situations. Responsibilities can be clearly defined and allocated at all levels of the programme, so that the proper person is approached in cases of errors or mistakes. Clear documentation of regulations can prevent the shirking of responsibilities.
Obviously, in assessment design at any level, content is part of the equation. Although there can be no assessment without content, the specific content does not influence the general design process. Because content is strongly related to assessment goals, it should nevertheless be recorded for future reference. So the third element, blueprinting, is a tool to map content to the programme and to the instruments used in the programme. In this respect, it is strongly tied to the design principles relating to collecting information. Blueprinting can also be regarded as a tool to sample the domain efficiently; a minimal sketch of such a mapping follows below.
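The following sketch illustrates how a blueprint can be represented as an explicit mapping from content domains to instruments, so that gaps in coverage become visible. The domain and instrument names are hypothetical examples, not taken from any real programme.

```python
# Sketch: a blueprint as a mapping from content domains to the
# instruments that sample them. All domain and instrument names
# are hypothetical illustrations.

blueprint = {
    "cardiology":      ["MCQ", "OSCE"],
    "pharmacology":    ["MCQ"],
    "communication":   ["OSCE", "mini-CEX"],
    "professionalism": [],          # flagged below: nothing samples this domain
}

# Report domains that no instrument covers, and how often each
# instrument is used, to support balancing the programme.
uncovered = [domain for domain, instruments in blueprint.items() if not instruments]
usage: dict[str, int] = {}
for instruments in blueprint.values():
    for instrument in instruments:
        usage[instrument] = usage.get(instrument, 0) + 1

print("Uncovered domains:", uncovered)   # ['professionalism']
print("Instrument usage:", usage)        # {'MCQ': 2, 'OSCE': 2, 'mini-CEX': 1}
```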
To summarise, documenting the programme is about recording information that can help to establish a defensible programme of assessment and support improvement.
Improving the programme

Two different types of quality activities can be distinguished. We have described activities aimed at optimising the programme in the dimensions supporting and documenting. Another type is aimed at improving the programme in response to critical appraisal from a more distant perspective. Activities in this dimension generally have no immediate effect on the currently running programme; they only take effect as they become apparent in the (re)design of (parts of) the programme, usually at a later date.
Most improvement activities involve research and development aimed at careful evaluation of the programme to ascertain problematic aspects. It is imperative, however, that the evaluation loop should not stop at data gathering: it must be closed by the actual implementation of measures to address diagnosed problems.
… the goals change because the professional needs change and if it's frozen in time …, that's not good; so it means … some concept of periodically revisiting the effectiveness of the whole system somehow (P2)

Is there something also about closing the loop, I mean there is no point in evaluating side-effects if you never have some mechanisms in place for putting it right. (P7)

Apart from measures to solve problems in a programme, political change or new scientific insights can also trigger improvement. A concept that cropped up in relation to improvement was change management, comprising procedures for change and activities to cope with potential resistance to change. (Political) acceptance of changes refers to changes in (parts of) the programme.

we haven't had the concept, yet… but it is so important in assessment systems is this idea of change management and how you, you know, move from one approach to another if it's starting the evidence is starting show a good idea eh who says what when and how and the impact. (P6)

… eh implementation is part of change management to me, take something from nothing and you implement it but they actually test the administration (P5)

Improvement is driven by the purpose of the assessment programme, which determines whether a change is an improvement or not. What may be an improvement for a licensing institute may be a change for the worse in an educational programme and vice versa.
Accounting for the programme

While the previous dimensions of the framework related to internal aspects of the institution or organisation responsible for the assessment programme, Accounting for the programme relates to the increasing demand for public accountability. The purpose of activities in this dimension is to defend the current practices of the programme in action and to demonstrate that the overarching programme goals are met. Accounting for the programme deals with the rationale of the programme.
Four major groups of accounting activities can be distinguished. The experts identified a need for scientific research, frequently attributing uncertainty about assessment activities to a lack of research findings and calling for research to support practices with sound evidence, which is in line with the prominence in medicine of the drive for evidence-based practice.
well we said everything had to be evidence-based I mean if you don't have some sort of research programme or you don't have some sort of reporting mechanism then I'll never be able to prove to you that was right so I agree […] things should be either proven or being in a research mode or some research and development. (P5) The influence of scientific research is also manifest in the application of new scientific insights to assessment programmes.
Accountability also requires external review of programmes of assessment. A common method is external review by outside experts, who judge information on the programme and in some cases visit an institution to verify information and hear the views of local stakeholders. External review is generally conducted for accreditation and benchmarking purposes.
Actually that is a good principle from time to time, the processes put in place, should be reviewed by an outside body or somebody who is less associated with… (P5) Assessment programmes are also shaped by the needs and wishes of external stakeholders. As assessment programmes do not exist within a vacuum, political and legal requirements often determine how (part of) the programme of assessment has to be (re)designed and accounted for.
In every institution or organisation, resources-including those for assessment programmes-are limited. Cost-effectiveness is regarded as a desirable goal. Although fitness for purpose featured prominently in the discussions, the experts thought more attention should have been paid to accountability and especially to costs, which can be a formidable obstacle to new ideas. The success of assessment programmes often hinges on the availability of resources. Obviously, greater efficiency is desirable but there is a cost-benefit trade-off. In other words, the quality of a programme is also defined in terms of the extent to which it enables the attainment of the goals, despite the boundaries of available resources.
Discussion
The main purpose of this study was to produce a framework for programmes of assessment with appropriate dimensions for design. The model that resulted from the focus group discussions with experts was far more comprehensive and integrated than the model used to guide the discussions. The quality of assessment in particular turned out to be a much broader dimension than we had envisaged. During the focus group meetings it became clear that, even though there was general agreement on topics with relevance to programmes of assessment, a shared frame of reference for programmatic assessment was glaringly absent. As a consequence, while some elements of assessment received a lot of attention, others remained underexposed. We believe the model described in this paper can help to frame programmes of assessment, because it not only provides a common language (shared mental model) for programme developers and users but also a more comprehensive picture of the dimensions to be covered when formulating design principles. This, however, makes it hard to relate our findings to previous research. Where research has been done on design criteria for assessment, it focuses on specific, isolated elements, and where research has been done at the level of assessment programmes, it does not focus on design but, for example, on quality in terms of content, validity, reliability, or alignment with education (Biggs 1996; Harlen 2007; Baartman 2008). This is not to say that all elements of the model we propose are completely new. There is, for example, good research on combining information from various assessment methods: not only on conjunctive versus compensatory combinations, but also on the finding that scores correlate more strongly between tests with identical content than between tests with identical format (Van der Vleuten et al. 1989). Yet most assessment programmes still allow for full compensation between format-similar elements (the separate stations in an OSCE) and not between format-dissimilar elements (e.g. combining scores on an OSCE station with scores on a content-similar written test). Such a paradox cannot be resolved when one designs an assessment programme starting from the individual methods; only a programmatic design perspective can resolve it.
A central concept was that high quality assessment and the activities needed to achieve it can only be defined in terms of the goals of an assessment programme. Goals underpin the guiding principle of programme design: fitness for purpose. Quality is inextricably interwoven with goals, which are closely tied to all activities related to assessment. Achieving appropriate interrelatedness of goals and activities requires design principles that are prescriptive, but take into account context and/or specific goals. Thus normative statements can only be included in design principles with explicit reference to specific purposes.
To explain and support this argument further, we come back to our most important and perhaps most obvious finding: the quality of an assessment programme can only be judged in light of its purpose. The purpose of an assessment programme is often not included in research on the relations between separate elements of an assessment system. In studying these relations, the outcome measure should be the configuration that contributes optimally to the programme's goals.
Initially, we took a similarly isolated approach when drawing up our initial model to guide the focus groups, in which we defined discrete and sequential steps. The new model values the interrelatedness and complexity of assessment, although, undeniably, an intuitively logical sequence remains, for example within Programme in Action (first collect, then combine and value, and finally take action); this sequence can also be reversed, however, especially from the design point of view. Key is the interrelatedness of the elements within the framework for the design of assessment programmes that resulted from this study.
Remarkably, the prime focus of the discussions was the programme in action and, within this dimension, collecting information. This is not surprising since this dimension deals with the core activities of assessment and the visible aspects of the assessment process. The experts disapproved of what they regarded as an obsession with assessment tools in the assessment literature, whereas elements like accreditation standards tended to be neglected. We think that our model can attenuate this obsession by raising awareness that programmatic assessment consists above all of variegated components which are integrated and interconnected and bear no resemblance whatsoever to an assessment toolkit with different instruments suited to specific tasks.
When we looked at the literature from the perspective of the new model, a similar picture presented itself. It seems that in terms of our model the topics of the literature on assessment can largely be categorised as collecting information and as the major elements of programme in action and supporting the programme. Regrettably, the interrelatedness of these elements is largely ignored, which is only to be expected as they are generally considered in isolation, an approach that has also characterised the search for the one superior instrument for each type of test to which we referred earlier.
The focus group approach fitted the purpose of this study, which was to explore experts' experiences and ideas on the largely uncharted topic of programmatic assessment. The experts agreed that so far little work had been done on programmatic approaches to assessment, also by themselves, and that the discussions had been enlightening. However, the focus groups had limitations as well. The selection of experts was biased by our social network and field of educational expertise (medical education), and the group was small. Although we are convinced that the experts were open minded, their long-standing experience and fields of interest may have given rise to some blind spots. Although they had been instructed to think outside the box, during the wrap-up evaluation the experts expressed concern that the discussion had been heavily dominated by what they were most comfortable with or where their experience was. Their fear was that the discussion had resulted in more traditional ideas than intended. Yet the data gave rise to many new insights and ideas, reinforcing our resolve to move this research forward. Experts are only one source of information, so we will have to triangulate the results by tapping into other sources of information, such as the opinions of teachers and medical students as end-users of assessment programmes.
Although the new model is comprehensive, it is possible that relevant issues were overlooked in the discussions leaving gaps in our model that need to be filled by further research. The question is how. It was suggested that incorporating ideas from other cultures and practices could generate fresh ideas, admittedly with a concomitant risk of reduced generalisability as was illustrated during the discussions. These were sometimes less general than intended due to cultural differences between educational settings (undergraduate, postgraduate and continuing education) and countries of origin of the experts. So this note of caution on generalisability applies equally to our model because the experts' experiences and views were inevitably contextual. Although we strove to keep the model general and applicable to different contexts, it would be interesting to investigate its applicability (robustness) in different cultural contexts. A further concern about the application of criteria in different contexts led to the recommendation to look to a wider context (for example society at large) as a possible framework to make the general criteria transferable to different contexts.
Numerous ideas worth pursuing were produced by our study, pointing the way to topics of further research. One obvious next step would be to apply this model to an existing assessment programme and determine whether all the dimensions and elements are identifiable and relevant. Further steps could also include producing concrete design criteria and validating them by application to existing programmes of assessment.
Mesoscopic eigenvalue statistics for Wigner-type matrices
We prove a universal mesoscopic central limit theorem for linear eigenvalue statistics of a Wigner-type matrix inside the bulk of the spectrum with compactly supported twice continuously differentiable test functions. The main novel ingredient is an optimal local law for the two-point function $T(z,\zeta)$ and a general class of related quantities involving two resolvents at nearby spectral parameters.
Introduction
In the study of the eigenvalue distribution of large random matrices, the most celebrated analog of the Law of Large Numbers is the Wigner semicircle law [25]. It states that the empirical density of eigenvalues converges to a deterministic limit known as the semicircle distribution ρ sc . More explicitly, if H is an N × N Wigner matrix and f is a sufficiently smooth test function, then the linear eigenvalue statistics N −1 Tr f (H) converge in probability to R f (x)ρ sc (x)dx in the large N limit.
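As a numerical illustration of this convergence (not part of the original argument), the following sketch samples a GOE-normalized Wigner matrix and compares the linear eigenvalue statistic N^{-1} Tr f(H) with the semicircle integral; the matrix size and test function are arbitrary choices.

```python
# Sketch: the semicircle law for linear eigenvalue statistics.
# Matrix size and test function are illustrative choices.
import numpy as np
from scipy.integrate import quad

rng = np.random.default_rng(0)
N = 2000

# Real symmetric Wigner matrix, normalized so the spectrum fills [-2, 2].
A = rng.standard_normal((N, N))
H = (A + A.T) / np.sqrt(2 * N)

f = lambda x: np.exp(-x**2)                      # a smooth test function
eigs = np.linalg.eigvalsh(H)

empirical = np.mean(f(eigs))                     # N^{-1} Tr f(H)
rho_sc = lambda x: np.sqrt(max(4 - x**2, 0.0)) / (2 * np.pi)
limit, _ = quad(lambda x: f(x) * rho_sc(x), -2, 2)

print(empirical, limit)                          # the two values nearly agree
```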
The corresponding Central Limit Theorem (CLT) asserts that the asymptotic fluctuations of the linear eigenvalue statistics Tr f (H)−E [Tr f (H)] are Gaussian. The absence of the N −1/2 normalization factor, appearing in the classical CLT, can be viewed as a manifestation of the strongly-correlated nature of the eigenvalues. For the special case of f (x) = (x − z) −1 with Im z = 0, this result was obtained by Khorunzhy, Khoruzhenko and Pastur [16]. Johansson obtained the CLT for invariant ensembles with arbitrary polynomial potentials in [15]. In [4], Bai and Yao used martingale CLT to establish the result for Wigner matrices with analytic test functions. The proof for bounded test functions f with bounded derivatives appeared in the work of Lytova and Pastur [22]. In subsequent works, different moment conditions on the matrix and regularity conditions on the test function were studied extensively by many authors, e.g., [6,18,23,24].
While fixed test functions represent macroscopic averaging in the spectrum, one can introduce N-dependent scaling and consider scaled test functions of the form f(x) = g(η_0^{-1}(x − E_0)), where E_0 is a fixed reference energy in the bulk, η_0 ≡ η_0(N) ≪ 1 is a scaling parameter, and g is compactly supported. Then Tr f(H) involves only about Nη_0 eigenvalues of H. In particular, on mesoscopic scales, corresponding to N^{-1} ≪ η_0 ≪ 1, the limiting variance is given by the square of the Ḣ^{1/2} norm of g. Mesoscopic test functions were first studied by Boutet de Monvel and Khorunzhy in [7] for the Gaussian Orthogonal Ensemble, with a subsequent extension to real Wigner matrices in [8] for N^{-1/8} ≪ η_0 ≪ 1. In [13], He and Knowles proved the CLT for Wigner matrices with general mesoscopic test functions for all scaling parameters N^{-1} ≪ η_0 ≪ 1. Our main object of interest is the two-point function T(z, ζ) defined in (1.1), where G(z) is the resolvent of H. The corresponding result in the simpler setting of generalized Wigner matrices was obtained in [21]. Using the optimal local law for T(z, ζ), we prove the bulk mesoscopic CLT for Wigner-type matrices in the full range of scales N^{-1} ≪ η_0 ≪ 1 for compactly supported C^2 scaled test functions (Theorem 2.2). Our proof relies entirely on resolvent methods, circumventing the DBM dynamics used in [17].
Understanding T(z, ζ) is the crucial ingredient for the CLT, as was realized in [17]. In fact, a suboptimal entry-wise local law for T_{xy}(z, ζ) was proved in Proposition 5.1 of [17]. If one relies solely on resolvent methods, this local law provides sufficient control for the mesoscopic CLT only on scales η_0 ≫ N^{-1/5}. The main reason for this limitation is that the error term in [17] contains the norm of the inverted stability operator (defined in (4.5)). In the present paper, we show that this factor can be removed by separating the destabilizing eigendirection corresponding to the smallest eigenvalue of the stability operator. Using this method, we prove a local law for a general class of quantities involving two resolvents (Theorem 3.2) and deduce the optimal averaged and entry-wise local laws for T(z, ζ). In particular, this allows us to obtain the CLT on all mesoscopic scales without relying on DBM.
The main difficulty lies in the fact that the deterministic approximation of the resolvent for Wigner-type matrices is not a multiple of the identity matrix, contrary to the generalized Wigner case [21]. Consequently, the destabilizing direction is no longer parallel to the vector of ones, and generally, no closed-form expression is known for the corresponding eigenprojector. It is important to note that for the deformed Wigner matrices studied in [20], the deterministic approximation is also not a multiple of the identity, but S_{jk} = N^{-1}. Therefore, the two-point function can be expressed as the square of the resolvent and can be studied using the local law, similarly to the standard Wigner case.
Instead of approximating the destabilizing direction to circumvent this difficulty, we use a contour integral representation for the eigenprojector. It allows us to extend the decomposition approach of [21] to the Wigner-type ensembles. This method has the benefit of yielding an integral representation for the variance on all mesoscopic scales, under weaker regularity conditions on the test function than in [17], while relying only on resolvent methods.
The paper is organized in the following way. Section 2 contains the precise definition of the model and the statement of our main mesoscopic CLT result, Theorem 2.2. In Section 3, we present our main technical result, the optimal local law for two-point functions in Theorem 3.2. In Section 4, we collect notations and preliminary results to which we refer throughout the paper. In Section 5, we deduce Theorem 2.2 from Propositions 5.1 and 5.2, and prove Proposition 5.1 using a local law for T(z, ζ) (Corollary 3.3) as an input. The proofs of Theorem 3.2 and Corollary 3.3 are presented in Section 6. In Section 7, we prove Proposition 5.2, which relates the variance of the linear eigenvalue statistics to the Ḣ^{1/2}-norm.
Model and Main Result
We begin with the definition of Wigner-type matrices originally introduced in Section 1.1 of [2].
Definition 2.1. A Wigner-type matrix is a real symmetric or complex Hermitian N × N random matrix H = H* whose entries H_{jk} are independent (up to the symmetry constraint) and centred, E H_{jk} = 0. Its matrix of variances S_{jk} := E|H_{jk}|^2 satisfies

(A)  c_inf/N ≤ S_{jk} ≤ C_sup/N

for all j, k ∈ {1, . . . , N} and some strictly positive constants C_sup, c_inf. We assume a uniform bound on all other moments of √N H_{jk}, that is, for any p ∈ ℕ there exists a positive constant C_p such that

E|√N H_{jk}|^p ≤ C_p

holds for all j, k ∈ {1, . . . , N}. Additionally, we assume that S satisfies a Hölder regularity condition 1, that is,

(B)  N|S_{jk} − S_{j'k'}| ≤ L ((|j − j'| + |k − k'|)/N)^{1/2}

for all j, j', k, k' ∈ {1, . . . , N} and some positive constant L. The constants c_inf, C_sup, C_p and L are independent of N.
Central Limit Theorem for Mesoscopic Linear Eigenvalue Statistics
Theorem 2.2. (c.f. Theorem 2.5 in [17]) Let g be a C^2_c(ℝ) test function, let ε_0 be a small fixed constant, let N^{-1+ε_0} ≤ η_0 ≤ N^{-ε_0}, and let E_0 be a fixed reference energy in the bulk of the spectrum, that is, ρ(E_0) ≥ ε_0 (here ρ is the density of states to be defined in (3.3) below). Define the scaled test function f to be

f(x) := g(η_0^{-1}(x − E_0)).  (2.3)

Then the centred linear statistics Tr f(H) − E[Tr f(H)] converge in distribution to a centred Gaussian random variable whose variance (2.4) is proportional to β^{-1}‖g‖²_{Ḣ^{1/2}}, where β = 1 and β = 2 correspond to real symmetric and complex Hermitian H, respectively.

Remark 2.3. We remark that the universal limiting variance in (2.4) coincides with the corresponding formulas for standard Wigner matrices [13], where S_{jk} = N^{-1}, m_j(z) = m_sc(z) for all j, k ∈ {1, . . . , N}, and m_sc(z) is the Stieltjes transform of the semicircle law.
1 As stated in [2], assumption (B) can be weakened to a piece-wise 1/2-Hölder regularity condition, for some positive constant L, on finitely many intervals, in the sense that (B) is only required to hold when the indices belong to common blocks of a fixed finite partition {I_a}_{a=1}^n of [0, 1] into smaller intervals; here (N I_a) denotes the set of positive integers j such that j/N lies in I_a.
Local Laws for the Two-point Functions
In this section, we introduce our main technical result, local laws for quantities that involve two resolvents of a Wigner-type matrix. Our prime motivation is to study the function T(z, ζ) defined in (1.1), but our methods allow us to estimate a more general class of quantities, namely weighted sums of the form

∑_{a≠y} w_a G_{αa}(z) G_{aβ}(ζ)  and  ∑_{a≠y} W_{ax} G_{αa}(z) G_{aβ}(ζ)

for fixed indices α, β, x, y and deterministic weights w_a, W_{ab} satisfying |w_a|, |W_{ab}| ≤ cN^{-1} for some constant c > 0. Here G(z) := (H − z)^{-1} denotes the resolvent of H. Objects of this type were first studied in [11] in the setting of random band matrices. We obtain the estimates in the sense of stochastic domination.

Definition 3.1. (Definition 2.1 in [12]) Let X = X^{(N)}(u) and Y = Y^{(N)}(u) be two families of random variables possibly depending on a parameter u ∈ U^{(N)}. We say that Y stochastically dominates X uniformly in u if for any ε > 0 and D > 0 there exists N_0(ε, D) such that for any N ≥ N_0(ε, D),

sup_{u ∈ U^{(N)}} P[ X^{(N)}(u) > N^ε Y^{(N)}(u) ] ≤ N^{-D}.

We denote this relation by X ≺ Y or X = O_≺(Y).
We consider spectral parameters z lying in the domain D, defined by

D := { z = E + iη ∈ ℂ : |E| ≤ τ^{-1}, N^{-1+τ} ≤ η ≤ τ^{-1} }

for a fixed τ > 0. As in Theorem 2.2, our analysis is limited to the bulk of the spectrum, which we define via the self-consistent density of states ρ(E) ≡ ρ_N(E). The density ρ(E) is recovered by the Stieltjes inversion formula,

ρ(E) := lim_{η↓0} (1/π) Im m(E + iη),  (3.3)

where m(z) := N^{-1} ∑_{j=1}^N m_j(z), and m(z) = (m_j(z))_{j=1}^N is the unique (Theorem 4.1 in [2]) solution to the vector Dyson equation

−1/m(z) = z + Sm(z),  Im m(z) Im z > 0.  (3.4)

Let I be the set on which ρ(E) is positive. Theorem 4.1 of [2] guarantees that I consists of a finite union of open intervals (a^{(j)}, b^{(j)}). Then for κ > 0, we define the bulk domain by

I_κ := { E ∈ I : dist(E, ℝ\I) ≥ κ },  D_κ := { z ∈ D : Re z ∈ I_κ }.

In particular, for all z ∈ D_κ, ρ(z) ≥ C(κ) for some constant C(κ) > 0. Given E_0 as in Theorem 2.2, we choose κ so that E_0 ∈ I_{2κ}.
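For intuition, the vector Dyson equation (3.4) can be solved numerically by fixed-point iteration, after which (3.3) yields the self-consistent density of states. A minimal sketch follows; the variance profile, grid, damping, and tolerances are illustrative assumptions, not part of the paper's setup.

```python
# Sketch: solving the vector Dyson equation -1/m = z + S m by damped
# fixed-point iteration and recovering the density of states via
# Stieltjes inversion (3.3). The variance profile S is an arbitrary example.
import numpy as np

N = 400
j = np.arange(N)
# An example Wigner-type variance profile with entries of order 1/N.
S = (1.0 + 0.5 * np.cos(np.pi * (j[:, None] + j[None, :]) / N)) / N

def solve_m(z, tol=1e-10, damping=0.5, max_iter=10_000):
    """Damped fixed-point iteration for m = -1/(z + S m); for Im z > 0
    the equation has a unique solution with Im m > 0."""
    m = np.full(N, -1.0 / z)
    for _ in range(max_iter):
        m_new = -1.0 / (z + S @ m)
        if np.max(np.abs(m_new - m)) < tol:
            return m_new
        m = damping * m_new + (1 - damping) * m
    return m

eta = 1e-3                                 # small positive Im z
for E in (-1.0, 0.0, 1.0):
    m = solve_m(E + 1j * eta)
    rho = np.mean(m.imag) / np.pi          # Stieltjes inversion of <m>
    print(E, rho)
```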
Theorem 3.2. There exists a positive constant ǫ = ǫ_κ, independent of N, such that for all z, ζ in D_κ with |Re ζ − Re z| ≤ ǫ and all deterministic vectors w ∈ ℂ^N satisfying ‖w‖_∞ ≤ cN^{-1}, the estimate (3.6) holds, where the vector m is identified with the diagonal operator diag(m).

Under the same conditions on z, ζ, for any deterministic N × N matrix W satisfying |W_{ab}| ≤ cN^{-1} for all a, b, the estimate (3.7) holds. Here Ψ(z) and Θ(z) denote the control parameters defined in (3.8). Theorem 3.2 implies the following averaged and entry-wise local laws for T(z, ζ) from (1.1).
Corollary 3.3. Let z, ζ satisfy the assumptions of Theorem 3.2. The entries T_{xy}(z, ζ) admit the estimate (3.9). Furthermore, for all deterministic N × N matrices A, the identity (3.10) holds.

Remark 3.4. The error estimates in the entry-wise local law (3.6), and hence in (3.9), are optimal. Indeed, for S_{jk} := N^{-1}, which corresponds to the standard Wigner matrices, and ζ = z̄, a simple calculation using the Ward identity shows that the error in (3.9) is attained. The error estimate in (3.7) is not optimal; it can be improved to (3.12). However, (3.7) is sufficient for establishing the CLT, so for the sake of brevity, we do not present the proof of (3.12) in full detail. We only indicate the necessary ingredients in Remark 6.8 below.
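For the reader's convenience, the Ward identity invoked in Remark 3.4 is the standard consequence of the resolvent identity G − G* = (z − z̄) G G*:

$$\sum_{a=1}^{N} |G_{xa}(z)|^{2} = \big(G(z)G(z)^{*}\big)_{xx} = \frac{\operatorname{Im} G_{xx}(z)}{\operatorname{Im} z}, \qquad \operatorname{Im} z \neq 0.$$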
Notations
For a vector x = (x_j)_{j=1}^N ∈ ℂ^N we use the standard definitions of the ℓ^2 and ℓ^∞ norms, namely,

‖x‖_2 := ( ∑_{j=1}^N |x_j|^2 )^{1/2},  ‖x‖_∞ := max_{1≤j≤N} |x_j|.

For a linear operator T : ℂ^N → ℂ^N, we denote its matrix norms induced by the ℓ^2 and ℓ^∞ norms, respectively, by

‖T‖_{ℓ^2→ℓ^2} := sup_{‖x‖_2=1} ‖Tx‖_2,  ‖T‖_{ℓ^∞→ℓ^∞} := sup_{‖x‖_∞=1} ‖Tx‖_∞.

For two vectors x, y ∈ ℂ^N we use angle brackets to denote the ℓ^2 scalar product, while for a single vector x ∈ ℂ^N angle brackets denote the average of its coordinates,

⟨x, y⟩ := ∑_{j=1}^N x̄_j y_j,  ⟨x⟩ := N^{-1} ∑_{j=1}^N x_j.

We use xy to denote the coordinate-wise product of the vectors x and y, (xy)_j := x_j y_j. Similarly, for a given vector x with non-zero entries, 1/x denotes the coordinate-wise multiplicative inverse, (1/x)_j := 1/x_j. We use 1 to denote the vector of ones (1, . . . , 1)^t in ℂ^N.
For a measurable function f : ℝ → ℝ we use the standard definition of the L^p norms for p ≥ 1, and the following definition of the Ḣ^{1/2} norm,

‖f‖²_{Ḣ^{1/2}} := ∫_ℝ |ξ| |f̂(ξ)|² dξ.

For non-negative quantities X and Y, we write X ≲ Y if X ≤ CY for some N-independent constant C > 0, and X ∼ Y if both X ≲ Y and Y ≲ X hold. We use C and c to denote constants, the precise value of which is irrelevant and may change from line to line.
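Assuming the unitary Fourier convention f̂(ξ) = (2π)^{-1/2} ∫_ℝ f(x) e^{-iξx} dx (the constant below changes under other conventions), the Ḣ^{1/2} norm defined above admits the equivalent real-space representation

$$\|f\|_{\dot H^{1/2}}^{2} = \int_{\mathbb{R}} |\xi|\,|\hat f(\xi)|^{2}\,d\xi = \frac{1}{2\pi}\int_{\mathbb{R}}\int_{\mathbb{R}}\Big(\frac{f(x)-f(y)}{x-y}\Big)^{2}\,dx\,dy,$$

which is the form in which the limiting variance of mesoscopic linear statistics is usually written.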
Local Law for the Resolvent
In this subsection, we summarize the facts about Wigner-type matrices that we use throughout our proofs. The majority of these results were obtained in [1] (see also [3]), but we refer to their concise versions from [2], adapted for the Wigner-type setting.
(2) If the matrix of variances S satisfies conditions (A) and (B), then for all z ∈ ℂ\ℝ the solution admits the upper and lower bounds stated in (4.2). We now state the optimal averaged and isotropic local laws for Wigner-type matrices.
Theorem 4.2. (Corollary 1.8 in [2]) Let w, x, y be deterministic vectors in ℂ^N satisfying ‖w‖_∞ = 1 and ‖x‖_2 = ‖y‖_2 = 1. Then the averaged local law (4.3) and the isotropic local law (4.4) hold uniformly in z ∈ D, where the vectors m and w are identified with the corresponding diagonal matrices.
Preliminary Bounds on the Stability Operator
A significant part of our proof revolves around the stability operator, originally introduced in [1], which emerges when studying the two-point function T(z, ζ) defined in (1.1). In this subsection, we collect the known bounds on the stability operator and related operators. The stability operator (1 − Sm(z)m(ζ)) is defined by the matrix with entries

(1 − Sm(z)m(ζ))_{jk} := δ_{jk} − S_{jk} m_k(z) m_k(ζ).  (4.5)

Throughout this paper we use m (and various functions of m, such as Im m, |m|, m^{-1}, m′) to denote both a vector (m_j)_{j=1}^N and the corresponding multiplication operator, i.e., diag((m_j)_{j=1}^N). Note that this notation agrees with the point-wise multiplication of two vectors if the first multiplicand is interpreted as an operator. We stress which interpretation is used whenever ambiguity may arise.
The analysis of the stability operator relies on the corresponding saturated self-energy operator F, studied in [17], which depends on two spectral parameters z, ζ and is defined as

F_{jk}(z, ζ) := |m_j(z)| S_{jk} |m_k(ζ)|.  (4.6)

The following statements encompass the main properties of F and preliminary bounds on the stability operator.

Lemma 4.3. ([17], c.f. Proposition 7.2.9 and Lemma 7.4.4 in [9]) For any z, ζ ∈ ℂ, the principal eigenvalue of F defined in (4.6) is positive and simple, and the corresponding ℓ^2-normalized eigenvector v(z, ζ) has strictly positive entries. The norm of F admits the upper bound (4.7). If |z|, |ζ| ≲ 1, then the entries of v(z, ζ) are comparable in size, that is, (4.8) holds; moreover, letting Gap(F) denote the difference between the two largest eigenvalues of F, the lower bound (4.9) holds, where δ is a constant that depends only on the constants in conditions (A), (B) and κ. Furthermore, for a fixed κ > 0 and z, ζ ∈ D_κ there exists a positive constant c_κ such that the norm bound (4.10) holds.

Lemma 4.4. ([17]) Let z, ζ ∈ ℂ be such that |z|, |ζ| ≲ 1 and Re z, Re ζ ∈ I_κ; then the stability bound (4.11) holds. If additionally Im z Im ζ > 0, the estimate is improved to (4.12), where C_κ > 0 is a positive constant depending on κ.
Finally, we state the bounds on the stability operator in the special case ζ = z, which is related to the derivative of m via the (vector) identity m′(z) = (1 − m²(z)S)^{-1} m²(z), obtained by taking the derivative of (3.4). Therefore, for all z ∈ ℂ\ℝ with Re z ∈ I_κ, the bound (4.14) holds.
Cumulant Expansion Formula
Lemma 4.6. (Section II in [7], Lemma 3.1 in [13]) Let h be a real-valued random variable with finite moments, and let f be a C^∞(ℝ) function. Then for any ℓ ∈ ℕ the following expansion holds,

E[h f(h)] = ∑_{j=0}^{ℓ} (c^{(j+1)}/j!) E[f^{(j)}(h)] + R_{ℓ+1},  (4.15)

where c^{(j)} is the j-th cumulant of h, defined by

c^{(j)} := (−i)^j (d^j/dt^j) log E[e^{ith}] |_{t=0},

and the remainder term R_{ℓ+1} satisfies

|R_{ℓ+1}| ≤ C_ℓ E[|h|^{ℓ+2}] sup_{|x|≤M} |f^{(ℓ+1)}(x)| + C_ℓ E[|h|^{ℓ+2} 1_{|h|>M}] ‖f^{(ℓ+1)}‖_∞

for any M > 0.
We apply formula (4.15) with h equal to the matrix element H_{jk}. Correspondingly, in the real case (β = 1), C^{(p)} denotes the matrix of p-th cumulants of H, C^{(p)}_{jk} := C^{(p)}(H_{jk}). In the complex case (β = 2), C^{(p)} is used as a notational shortcut and denotes the sum of the matrices of p-th cumulants of the real and imaginary parts of H, that is, C^{(p)}_{jk} := C^{(p)}(Re H_{jk}) + C^{(p)}(Im H_{jk}).
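As an illustration of (4.15) (a consistency check, not part of the argument): if h is a centred Gaussian with variance σ², then c^{(1)} = 0, c^{(2)} = σ², and all higher cumulants vanish, so the expansion terminates after the first term and reduces to Stein's identity,

$$\mathbb{E}[h f(h)] = \sigma^{2}\, \mathbb{E}[f'(h)],$$

with the remainder R_{ℓ+1} vanishing for every ℓ ≥ 1.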
Proof of the Main Result
Proof of Theorem 2.2. We divide the proof into two parts, contained in the following propositions. We indicate their analogues in the settings of [21] and [17] in parentheses.
Proposition 5.1. Let φ(λ) denote the characteristic function of the centred linear eigenvalue statistics. Then its derivative φ′(λ) satisfies the equation (5.2), provided c ≤ V(f) ≤ C for some positive N-independent constants c and C.
Here the variance V(f) for a scaled test function f is defined by (5.3), where for z, ζ ∈ ℂ\ℝ the kernel K(z, ζ) is defined by (5.4), with C^{(4)} denoting the matrix of fourth cumulants C^{(4)}_{jk} := C^{(4)}(H_{jk}).
Proposition 5.2 implies that V(f) satisfies the condition of Proposition 5.1; hence Theorem 2.2 follows from the two propositions.

Remark 5.3. We restrict the proof to the real symmetric (β = 1) matrices for the sake of presentation. The complex Hermitian (β = 2) case differs solely in replacing the cumulant expansion formula (Lemma 4.6) with its complex analogue. The obvious modifications are left to the reader.
Characteristic Function of Linear Eigenvalue Statistics
Proof of Proposition 5.1. Using standard techniques of the characteristic function method imported from, e.g., Section 5.2 of [17] (see also Section 4.2 of [19] and references therein), we can obtain the following series of estimates on the characteristic function of the linear eigenvalue statistics φ(λ) and its derivative φ ′ (λ). The proof is a relatively straightforward modification of similar arguments in [17], so we defer it to Appendix A.
Furthermore, for all z ∈ D_κ, we have the representation (5.11), where the random function T(z, ζ) is defined in (5.12). We now proceed to estimate the first two terms on the right-hand side of (5.11) in such a way that E[e(λ)] factors out. By the definition of the scaled test function (2.3), the support of f is contained inside a vertical strip centered at E_0 of width ∼ η_0; hence we limit the further analysis to the regime |Re ζ − Re z| ≲ η_0 ≪ ǫ, where ǫ is defined in the statement of Theorem 3.2. We estimate the function T(z, ζ) using Corollary 3.3 with the weight matrix A := m′(z)/m(z). It follows from the bounds (4.2) and (4.14) that ‖A‖_{ℓ^∞→ℓ^∞} ≲ 1; hence for all z, ζ ∈ D_κ with Re z, Re ζ ∈ supp(f), the approximation (5.13) holds, where the error term E(z, ζ) is analytic in both variables and admits the bound (5.14). It follows from (5.13) and (5.14) for ζ = z̄ that the bound (5.15) holds, yielding the desired bound on the first term on the right-hand side of (5.11). We now estimate the second term in (5.11). Fix z ∈ D_κ, and consider ζ lying in Ω′_0 defined in (5.5). Differentiating (5.13) with respect to ζ yields (5.16). To bound the derivative of the error term E(z, ζ), we use the following technical lemma.
Lemma 5.5. (Lemma 5.5 in [17]) Let K(z) be a holomorphic function on ℂ\ℝ. Then for all z ∈ ℂ\ℝ and any p ∈ ℕ, the derivative bound (5.17) holds, where C_p > 0 is a constant depending only on p.
Lemma 5.5 applied to the estimate (5.14) implies that the error term ∂_ζ E(z, ζ) admits the bound (5.18). To proceed, we require another technical lemma.
Lemma 5.6. Let Ω be a domain of the form

Ω := { z ∈ ℂ : cN^{-τ′} η_0 < |Im z| < 1, a < Re z < b },  (5.19)

such that supp(f) ⊂ (a, b), where τ′, c are positive constants. Let K(z) be a holomorphic function on Ω satisfying

|K(z)| ≤ C|Im z|^{-s},  z ∈ Ω,  (5.20)

for some 0 ≤ s ≤ 2. Then there exists a constant C′ > 0, depending only on g in (2.3), χ in (5.6), and s, such that the bound (5.21) holds.

Proof of Lemma 5.6. It follows from (2.3) that the estimate (5.22) holds. In the case 1 ≤ s ≤ 2, the inequality (5.21) follows from Lemma 4.4 in [19]. For 0 ≤ s < 1, the proof is conducted along the same lines, except that the integration by parts is performed twice in the regime η_0 ≤ |Im z| ≤ 1.

Finally, from (5.11) and (5.22), combined with (5.9), we conclude that φ′(λ) = −λ V(f) E[e(λ)] + E(λ), where V(f) is defined in (5.3), and E(λ) is the total error term collected from the previous derivations and integrated over dz dz̄. Lemma 5.6, together with the error estimates in (5.9), (5.11), (5.15) and (5.18), provides the bound (5.23) on the error term E(λ). Under the conditions of Proposition 5.1, V(f) is bounded; hence we conclude from the first estimate in (5.9) and (5.23) that (5.2) holds. This concludes the proof of Proposition 5.1.
Proof of the Local Laws for Two-point Functions
In this section, we derive all the tools necessary to prove Theorem 3.2 and its specification for the two-point function T(z, ζ), Corollary 3.3. To make the notation more concise, we introduce the convention that quantities decorated with a tilde are evaluated at the second spectral parameter: G̃ := G(ζ), m̃ := m(ζ), Ψ̃ := Ψ(ζ). For a deterministic matrix W with entries |W_{ab}| ≲ N^{-1}, the quantity ∑_{a≠y} W_{ax} G_{αa} G̃_{aβ} can be readily estimated in two special cases. First, if each column of W is proportional to the vector of ones, i.e., W_{ab} = w_b depends only on b, then the summation over a yields w_x([G G̃]_{αβ} − G_{αy} G̃_{yβ}), and the estimate follows from the resolvent identity and the local laws in Theorem 4.2. Second, if the entries of X := (1 − S m m̃)^{-1} W are bounded by CN^{-1}, then one can obtain the estimate from Lemma 6.1 below. We show that these two special cases are exhaustive in the sense that any W can be represented as their linear combination with controlled coefficients.
To this end, we prove that in the relevant regime, the operator (1 − S m m̃) has a very small destabilizing eigenvalue and an order one spectral gap above it. Moreover, if Π is the eigenprojector corresponding to the principal eigenvalue of (1 − S m m̃), then the ℓ^∞ → ℓ^∞-norm of the restriction of (1 − S m m̃)^{-1} to the kernel of Π is also an order one quantity. Finally, we show that the vector of ones 1 is sufficiently separated from the kernel of Π.
Stable Direction Local Law
For any N × N deterministic matrix W, and any indices x, y, α, β, we define the quantities

F^{xy}_{αβ}(W) := ∑_{a≠y} W_{ax} G_{αa} G̃_{aβ}.  (6.1)

We prove the following estimate.
Lemma 6.1. For any z, ζ ∈ D_κ and any deterministic N × N matrix X, the estimate (6.2) holds.

We use the following self-improving mechanism for stochastic domination bounds, borrowed, e.g., from [14].

Lemma 6.2. (Lemma 6.3 in [14]) Let X be a random variable such that 0 ≤ X ≺ N^C for some C > 0, and let Ξ ≥ 0 be a deterministic quantity. Suppose there exists a constant q ∈ [0, 1) such that for any Φ satisfying Ξ ≤ Φ ≤ N^C, and any d ∈ ℕ, we have the implication (6.3). Then X ≺ Ξ.

Let Φ be a deterministic control parameter admitting the bounds (Ψ + Ψ̃)Λ ≤ Φ ≤ Λ, such that (6.4) holds. It follows trivially from (6.4) and (6.5) that (6.6) holds. Let ∂_{jk} denote the partial derivative with respect to the matrix element H_{jk}; then the partial derivatives of F^{xy}_{αβ} are given by (6.7). We combine the vector Dyson equation (3.4) and the resolvent identity zG = HG − 1 to obtain (6.8). Plugging (6.8) into the definition (6.1) and applying the cumulant expansion formula of Lemma 4.6, we obtain the expansion (6.9), where R_2 is the total error coming from the higher-order cumulants, and all unrestricted summations run from 1 to N.

We successively bound the terms (6.9b)-(6.9e) appearing on the right-hand side of (6.9). By condition (A), the local law (4.4), the upper bound (4.2), and (6.5), it follows that the terms (6.9b) and the first term in (6.9c) are bounded by O_≺((Ψ + Ψ̃)ΛM^{2d−1}). Similarly, the term (6.9d) and the first term in (6.9e) are bounded by O_≺(‖X‖_max (Ψ + Ψ̃)M^{2d−1}). We now bound the second term in (6.9e). It follows from (A), (4.4), the bounds (4.2), (6.6), and the identity (6.7) that

| ∑_b S_{ab} G̃_{bβ} ∂_{ab} P | ≺ (Ψ + Ψ̃ + δ_{αa} + δ_{aβ}) Ψ̃ Φ M^{2d−2}.  (6.10)

Hence, the second term in (6.9e) is bounded by O_≺((Ψ + Ψ̃)ΛΦM^{2d−2}). Finally, it is easy to check, using the estimates (4.16), (6.6) and the identity (6.7), together with condition (A) and (4.2), that the error term R_2 satisfies an analogous bound. Observe that the first term on the right-hand side of (6.9a) can be expressed as

∑_{a≠y} m_a m̃_a X_{ax} E[F^{ay}_{αβ}(S)P] = E[F^{ay}_{αβ}(X)P] − E[F^{ay}_{αβ}(Y)P] − m_y m̃_y X_{yx} E[F^{yy}_{αβ}(S)P],  (6.11)

where the last term is bounded by O_≺(N^{-1}ΛM^{2d−1}). Combining (6.9) and (6.11) yields the implication required in Lemma 6.2 for any control parameter Φ_{αβ,y} satisfying (6.5). Hence, by Lemma 6.2, the estimate (6.2) follows, which concludes the proof of Lemma 6.1.
Stability Operator Analysis
In this subsection we obtain all the properties of the stability operator $(1 - S m(z) m(\zeta))$ that we use in combination with Lemma 6.1 to finish the proof of Theorem 3.2 for $z, \zeta$ lying in opposite half-planes, as outlined in the beginning of Section 6. For two spectral parameters $z, \zeta$, let $\eta := \operatorname{Im} z$ and $\widetilde{\eta} := \operatorname{Im} \zeta$. Without loss of generality, we assume in the following that $\operatorname{Re} z \in I_\kappa$, $\eta > 0$ and $\operatorname{Re} \zeta \in I_\kappa$, $\widetilde{\eta} < 0$. For the remainder of this subsection, we use the notation introduced in (6.14). We view the operator $B$ as a perturbation of $B_0 := B(z, \bar{z})$, since $|\zeta - \bar{z}|$ is small. We deduce the desired properties of $B$ from those of $B_0$, which, in turn, follow from the lower bound on the spectral gap of $F$ found in (4.9).
Let $\{\psi_j\}_{j=1}^N$ denote the eigenvalues of $F$ (with multiplicity) in descending order. Then, by the Perron-Frobenius theorem, the principal eigenvalue $\psi_1$ is real, and it coincides with the spectral radius $\|F\|_{\ell^2 \to \ell^2}$. Furthermore, by taking the imaginary part of the vector Dyson equation (3.4) and multiplying both sides by $|m|$ coordinate-wise, we obtain the identities (6.15) and (6.16). Therefore, by (6.15) and (6.16), $1 - \psi_1 \lesssim \eta$. Together with the upper bound (4.10) on $\|F\|_{\ell^2 \to \ell^2}$, this implies that $1 - \psi_1 \sim \eta$. It follows from (4.9) that the principal eigenvalue of $F$ is separated from the rest of the spectrum by an annulus, i.e., there exist $r > 0$ and $\delta > 0$ independent of $z$ and $N$ such that

$$|1 - \psi_1| < r - \delta, \quad \text{and} \quad |1 - \psi_j| > r + \delta, \quad j \in \{2, \ldots, N\}. \qquad (6.17)$$

In the remainder of this subsection, we show that for all $\zeta$ sufficiently close to $\bar{z}$, the eigenvalue of $B$ with the smallest modulus is also separated from the rest of the spectrum by an annulus of order-one width.
Using the argument principle and Jacobi's formula, one can express the number of eigenvalues (with multiplicity) of a matrix $X$ inside a domain $\Omega$ by the contour integral (6.18). To show the eigenvalue separation for $B$, we begin by estimating the norm of the resolvent of $B$ inside the annulus

$$A_{r,\delta} := \{w \in \mathbb{C} : r - 3\delta/4 \le |w| \le r + 3\delta/4\}, \qquad (6.19)$$

with $r$ and $\delta$ as in (6.17).
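For reference, the standard identity behind (6.18), obtained by combining the argument principle with Jacobi's formula $\partial_w \log\det(w - X) = \operatorname{Tr}[(w - X)^{-1}]$, presumably reads

$$\#\{\text{eigenvalues of } X \text{ in } \Omega\} = \frac{1}{2\pi i} \oint_{\partial\Omega} \operatorname{Tr}\big[(w - X)^{-1}\big]\, dw,$$

valid whenever $X$ has no eigenvalues on the contour $\partial\Omega$.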
Claim 6.4 implies that, for any sufficiently large fixed $N$, the integrand in (6.18) with $X := B$ is uniformly bounded in $\Omega := A_{r,\delta}$ for all $\zeta$ such that $|\zeta - \bar{z}| \le \varepsilon_1$; hence (6.24) holds by analyticity. Since the eigenvalues of $B(z, \zeta)$ are continuous in $\zeta$, (6.24) implies that no eigenvalue can move between the two connected components of $\mathbb{C} \setminus A_{r,\delta}$, which together with (6.17) yields the following claim (Claim 6.5): the separation bounds (6.25) hold for any $\zeta$ such that $\operatorname{Re} \zeta \in I_\kappa$, $\operatorname{Im} \zeta < 0$ and $|\zeta - \bar{z}| \le \varepsilon_1$.
Claim 6.5 now allows us to define the principal eigenprojector $\Pi$ of $B$ as a contour integral (6.26). Claim 6.5 asserts that the contour $\{|\xi| = r\}$ encircles exactly one eigenvalue of $B$ (with multiplicity), hence $\Pi$ is a rank-one eigenprojector. We now prove that the restriction of $B^{-1}$ to the range of $(1 - \Pi)$ is bounded by a constant.
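The contour-integral definition in (6.26) is presumably the standard Riesz projector,

$$\Pi := \frac{1}{2\pi i} \oint_{|\xi| = r} (\xi - B)^{-1}\, d\xi,$$

which is well defined by Claim 6.5 since $B$ has no eigenvalues on the circle $\{|\xi| = r\}$, and which projects onto the spectral subspace of the eigenvalues of $B$ enclosed by the contour.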
Claim 6.6. For all $z, \zeta$ such that $\operatorname{Re} z, \operatorname{Re} \zeta \in I_\kappa$, $\operatorname{Im} z \operatorname{Im} \zeta < 0$ and $|\zeta - \bar{z}| \le \varepsilon_1$, we have $\|B^{-1}(1 - \Pi)\|_{\ell^\infty \to \ell^\infty} \le c$, where $c$ depends only on the constants in conditions (A), (B) and $\kappa$.
Proof. By the expression (6.26) for $\Pi$ we have (6.27). Hence the norm of $B^{-1}(1 - \Pi)$ is bounded by applying the bound of Claim 6.4 on the circle $\{|\xi| = r\}$, which lies inside $A_{r,\delta}$.
Finally, we show that the vector of ones is sufficiently separated from the kernel of $\Pi$. This ensures a stable decomposition of the space into the direct sum of the range of $(1 - \Pi)$ and the span of $\mathbf{1}$, so we can apply the local laws to each of the components separately.

Claim 6.7. There exists $\varepsilon > 0$ independent of $N$ and $z$ such that for all $\zeta$ with $\operatorname{Re} \zeta \in I_\kappa$, $\operatorname{Im} \zeta < 0$ and $|\zeta - \bar{z}| \le \varepsilon$, the bound $\|\Pi\mathbf{1}\|_\infty \ge c$ holds, where $c > 0$ is a constant independent of $N$ and $z$.
Proof. Define the projector $\Pi_0$ corresponding to $B_0$ via (6.26). Then $\Pi_0 = |m|^{-1} \widehat{\Pi}_0 |m|$, where $\widehat{\Pi}_0$ is the orthogonal projector corresponding to the principal eigenvalue of the Hermitian operator $F$. Since $|m| \sim 1$ we have $\|\Pi_0\|_{\ell^\infty \to \ell^\infty} \le C_0$. Moreover, by Proposition 4.3, the $\ell^2$-normalized eigenvector $v$ corresponding to the principal eigenvalue of $F$ has entries $v_j \ge 0$ with $v_j \sim N^{-1/2}$, hence $\|\Pi_0\mathbf{1}\|_\infty \ge c_0$, where $c_0 > 0$ is a constant independent of $N$ and $z$.
Finishing the Proof of Theorem 3.2
Proof of Theorem 3.2. Recall that the objective is to estimate the quantities defined in (3.1). Instead of estimating $\sum_{a \neq y} w_a G_{\alpha a} \widetilde{G}_{a\beta}$ directly, it is more convenient to work with objects of the type $\sum_{a \neq y} W_{ax} G_{\alpha a} \widetilde{G}_{a\beta}$, since they generalize quantities appearing in both (3.6) and (3.7). The redundant index $x$ can be eliminated by setting $W_{ax} := w_a$.
Since $\Pi$ has rank one and Claim 6.7 asserts that $\Pi\mathbf{1} \neq 0$, the kernel of $\Pi$ together with $\mathbf{1}$ spans $\mathbb{C}^N$. Therefore we can decompose each column of the matrix $W$ into a linear combination of $\mathbf{1}$ and an element of $\ker \Pi$; that is, there exist an $N \times N$ matrix $Y$ and a vector $s \in \mathbb{C}^N$ such that the decomposition (6.36) holds. We multiply the first equality in (6.36) by $\Pi$ from the left, apply both sides to the $a$-th standard basis vector $e_a$ of $\mathbb{C}^N$, and take the $\ell^\infty$-norm to deduce

$$\|\Pi W e_a\|_\infty = |s_a|\, \|\Pi\mathbf{1}\|_\infty, \quad a \in \{1, \ldots, N\}. \qquad (6.37)$$

By assumption, $\|W\|_{\max} \lesssim N^{-1}$, hence $\|W e_a\|_\infty \lesssim N^{-1}$. Using Claim 6.7 we get (6.38). We combine (6.36) and the resolvent identity in the form (6.39). Furthermore, the estimates $\|W\|_{\max} \lesssim N^{-1}$, (6.36), and (6.38) imply that $|Y_{ab}| \lesssim N^{-1}$ for all $a$ and $b$. Since by Claim 6.6 $\|(1 - S m \widetilde{m})^{-1}(1 - \Pi)\|_{\ell^\infty \to \ell^\infty} \lesssim 1$, we conclude that (6.40) holds.

First, using (6.40), we can apply Lemma 6.1 to the first term in (6.39) to obtain the estimate (6.41). Using (6.36), we proceed by computing the identity (6.42). Finally, subtracting the vector Dyson equations (3.4) for $z$ and $\zeta$ yields (6.43). Next, we estimate the second term in (6.39). Applying the local law in the form (4.4), we obtain (6.44), where we used that $|z - \zeta| \ge |\eta| + |\widetilde{\eta}|$, since $\eta\widetilde{\eta} < 0$. Combining (6.38), (6.39), and (6.41)-(6.44) yields the bound (6.45), which proves (3.6) by setting $W_{ax} := w_a$.

To prove (3.7), we observe that setting $x = y = \alpha = \beta = b$ in (6.39) and summing over $b$ yields (6.46). To estimate $\langle s, g \rangle$, we use (6.38) and the averaged local law (4.3) to obtain (6.47), where we again used that $|z - \zeta| \ge |\eta| + |\widetilde{\eta}|$, since $\eta\widetilde{\eta} < 0$. Setting $x = y = \alpha = \beta = b$ in (6.41), summing over $b$, using the identities (6.42) and (6.43), and combining the result with (6.47), we deduce the claimed estimate, where we used that $(|\eta| + |\widetilde{\eta}|)^{-1}(\Theta + \widetilde{\Theta}) = N\Theta\widetilde{\Theta}$. This establishes (3.7) and concludes the proof of Theorem 3.2.
Remark 6.8. We outline the steps needed to achieve the optimal error estimate (3.12). First, one needs to adapt the proof of Theorem 3.2: more specifically, replace the decomposition (6.36) with a refined decomposition adapted to $\Pi(z, \zeta)$, the destabilizing eigenprojector defined in (6.26). The terms involving $s$ and $q$ are handled using the averaged local law (4.3), similarly to (6.47).
For the remaining term, $R := \sum_y F^{yy}_{yy}$, we adapt the mechanism of Lemma 6.1 by using the following iterative scheme. In the first step, we apply an expansion similar to (6.9) to the partial derivative $\partial_{jk} R$. This improves the error in the estimate on $R$ by a factor of $(\Psi + \widetilde{\Psi})^{1/2}$. If we expand $\partial_{lp}\partial_{jk} R$ in a similar manner, we gain another $(\Psi + \widetilde{\Psi})^{1/4}$. Iterating this approach, we can estimate $R$ with an error stochastically dominated by $N\Psi\widetilde{\Psi}(\Psi + \widetilde{\Psi})^{2 - 2^{-d}}$ for any given integer $d$ (where $d$ is the maximal order of the expanded partial derivatives). By Definition 3.1, this is sufficient to establish (3.12). Similar arguments in the context of random band matrices can be found in [10].
Proof of Corollary 3.3. Estimate (3.9) on $T_{xy}(\zeta, z)$ follows from (3.6) by setting $\alpha = \beta = y$ and $w_a := S_{xa}$. Estimate (3.10) on $\operatorname{Tr}[A\,T(z, \zeta)]$ follows from (3.7) by setting $W := S A^t$, which satisfies $|W_{ab}| \lesssim N^{-1}\|A\|_{\ell^\infty \to \ell^\infty}$. This concludes the proof of Corollary 3.3.

Remark 6.9. Note that estimates (3.6) and (3.7) (also with the improved error term (3.12)) hold without omission of indices in the $a$ summation. Indeed, this follows from Theorems 3.2 and 4.2.

In this section, we compute the variance $V(f)$ defined in (5.3) for mesoscopic $C^2_c$ test functions $f$. In [17], the limiting variance was computed for several types of $C^\infty$ test functions, including compactly supported ones; however, $V(f)$ is computed there with an $O(1)$ error (see, e.g., Lemma 6.7 in [17]), which is not negligible in the setting of the present paper. To obtain effective error bounds, we augment the proof laid out in [17] by performing further integration by parts in the integral representation of $V(f)$, thus eliminating the $f'$ terms and improving the error by a factor of $O(\eta_0)$.
Furthermore, by (4.9), the operator $F$ admits the decomposition (7.2), where $(\psi_1, v)$ is the principal eigenvalue-eigenvector pair of $F$, and $\delta$ is the constant in (4.9). Let $R \equiv R(z, \zeta)$ denote $(U^*(z, \zeta) - A(z, \zeta))^{-1}$. In the sequel, we drop the arguments and write $A \equiv A(z, \zeta)$. The lower bound (4.8) and the inequality in (7.2) imply a corresponding bound on $R$. In the following lemma, we collect the perturbative estimates on the saturated self-energy operator $F$ and related quantities established in [17].
Moreover, there exists $\varepsilon > 0$ independent of $N$, such that for all $x, y \in I_\kappa$ satisfying $|x - y| \le \varepsilon$,

$$|\omega(z, \zeta)| \gtrsim \eta + |x - y|. \qquad (7.8)$$

Finally, for $z := x + i\eta$ with $x \in I_\kappa$, the identity (7.9) holds. By our choice of $\kappa$, the point $E_0$ lies in the interior of the bulk interval $I_\kappa$ defined in (3.5); hence if we define $\hat{\varepsilon} := \min\{\varepsilon/4, \operatorname{dist}(E_0, \mathbb{R}\setminus I_\kappa)\}$, then $\hat{\varepsilon} \sim 1$. Furthermore, since the function $g$ is compactly supported, we may assume that $\operatorname{supp}(f) \subset [E_0 - \hat{\varepsilon}, E_0 + \hat{\varepsilon}]$ for large $N$. Here $\log$ denotes the principal branch of the complex logarithm, and $C^{(4)}$ is the matrix of the fourth cumulants of $H$. By Jacobi's formula for the derivative of the determinant, it follows from the definitions of $L$ and $K$ that for all $z, \zeta \in \mathbb{C}\setminus\mathbb{R}$,

$$\frac{\partial^2}{\partial\zeta\,\partial z} L(z, \zeta) = K(z, \zeta).$$

The partial derivatives of $L_1$ contribute only sub-leading terms to $L$. Indeed, the corresponding estimates follow by using the moment condition (2.2) to bound $S_{jk}$ and $C^{(4)}_{jk}$, (4.2) to get the upper bound $\langle m, m \rangle \lesssim 1$, and (4.14) to obtain $\langle m', m' \rangle \lesssim 1$, since $[E_0 - \hat{\varepsilon}, E_0 + \hat{\varepsilon}] \subset I_\kappa$.
We write $z := x + i\eta$, $\zeta := y + i\widetilde{\eta}$ and plug (7.13) into the expression (7.17) for $V(f)$. Using the fact that $\partial_z u = -i\partial_\eta u$ for any holomorphic function $u(z)$, and integrating by parts in $\eta$, we obtain (7.21). The second estimate in (7.16), the expression (7.18), and the estimates on $f''$ imply that the resulting boundary term is dominated by $O_\prec(N^{-\varepsilon_0})$. Similarly, integrating the first term on the right-hand side of (7.21) by parts in $\widetilde{\eta}$, we get (7.22). It follows from (7.14) and the expression (7.18) that the boundary term (the second line of (7.22)) is again dominated by $O_\prec(N^{-\varepsilon_0})$. We apply Stokes' theorem to (7.22) twice: once in $z$ and once in $\zeta$. Considering that $\partial_\eta f(z)$ vanishes on the boundary of $\Omega^*$ except for the lines $\{\operatorname{Im} z = \pm\eta^*\}$, this results in (7.23), where

$$L(x, y) := \operatorname{Re}\big[L(x + i\eta^*, y + i\eta^*) - L(x + i\eta^*, y - i\eta^*)\big]. \qquad (7.24)$$

We restrict the integrations in (7.23) to $[E_0 - \hat{\varepsilon}, E_0 + \hat{\varepsilon}]$, since this interval contains the support of $f$. Furthermore, for all $y \in \operatorname{supp}(f)$ we have $y - E_0 \lesssim \eta_0$, hence $|y - E_0 \pm \hat{\varepsilon}| \sim 1$. By the symmetry of $L(z, \zeta)$ and the second estimate in (7.16), it follows that

$$\frac{\partial}{\partial y} L(E_0 \pm \hat{\varepsilon}, y) \lesssim 1, \quad y \in \operatorname{supp}(f). \qquad (7.25)$$

We write $f'(y) = \partial_y(f(y) - f(x))$, perform integration by parts in $y$, and integrate the boundary term by parts in $x$ to obtain (7.26). Since $\|f\|_2^2 \lesssim \eta_0$, it follows from (7.25) that the second integral in (7.26) is $O(\eta_0)$. Similarly, integrating (7.26) by parts in $x$ and using (7.26) to substitute one of the emerging integrals for $-V(f) + O(N^{-\varepsilon_0} + \eta_0)$, we get (7.27), where we again used (7.25) to estimate the boundary term. Finally, in view of the first estimate in (7.16), $\partial_z\partial_\zeta L_{\log}(x + i\eta^*, y + i\eta^*) \lesssim 1$, so its contribution is also bounded by $O_\prec(\eta_0\|g\|_2^2 + \eta_0^2\|g\|_1^2)$. Moreover, it follows from the last estimate in (7.15) that we can replace $K(x + i\eta^*, y - i\eta^*)$ by $\partial_z\partial_\zeta L_{\log}(x + i\eta^*, y - i\eta^*)$, since the contribution of the remaining terms is bounded by $O_\prec(\eta_0\|g\|_2^2 + \eta_0^2\|g\|_1^2)$. This concludes the proof of Lemma 7.2. Once Lemma 7.2 is established, we can follow the method of Lemma 6.7 in [17] to finish the proof of Proposition 5.2.
Using Lemmas A.2 and 5.6, we proceed to estimate the third term on the right-hand side of (A.4), which contains the quantity

$$S_{jk} G_{kj}(z)\, \frac{\partial e(\lambda)}{\partial H_{jk}} = -\frac{2i\lambda}{\pi}\, \mathbb{E}\, e(\lambda) \cdots$$

Proof of Lemma A.3. In view of (1.1), multiplying (A.6) by $S_{jk} G_{kj}(z)$, summing over $k \neq j$ and taking expectations gives the first term on the right-hand side of (A.4). For the remaining $k = j$ term, observe that the function $K(\zeta) := G_{jj}(\zeta) - m_j(\zeta)$ is analytic in $\mathbb{C}\setminus\mathbb{R}$ and is stochastically dominated by $\Psi(\zeta)$ in $D$. Applying Lemma 5.5 with $p = 1$ to $K(\zeta)$, we obtain

$$\frac{\partial G_{jj}(\zeta)}{\partial\zeta} = m_j'(\zeta) + O_\prec\big(|\operatorname{Im}\zeta|^{-1}\Psi(\zeta)\big).$$
Dependent Session Protocols in Separation Logic from First Principles (Functional Pearl)
We develop an account of dependent session protocols in concurrent separation logic for a functional language with message-passing. Inspired by minimalistic session calculi, we present a layered design: starting from mutable references, we build one-shot channels, session channels, and imperative channels. Whereas previous work on dependent session protocols in concurrent separation logic required advanced mechanisms such as recursive domain equations and higher-order ghost state, we only require the most basic mechanisms to verify that our one-shot channels satisfy one-shot protocols, and subsequently treat their specification as a black box on top of which we define dependent session protocols. This has a number of advantages in terms of simplicity, elegance, and flexibility: support for subprotocols and guarded recursion automatically transfers from the one-shot protocols to the dependent session protocols, and we easily obtain various forms of channel closing. Because the meta theory of our results is so simple, we are able to give all definitions as part of this paper, and mechanize all our results using the Iris framework in less than 1000 lines of Coq.
INTRODUCTION
Message passing is a commonly used abstraction for concurrent programming, with languages such as Erlang and Go having native support for it, and languages such as Java, Scala, Rust, and C# having library support. Session types offer powerful type systems for message passing concurrency [Honda 1993; Honda et al. 1998], and have been extended with a number of exciting features: (1) Dependent protocols: The key ingredient of a session type system is the notion of a session protocol, which describes what data should be exchanged. For example, the session protocol !Z.!Z.?B.end expresses that two integers are sent, after which a Boolean is received, and the channel is closed. In vanilla session types, protocols are meant to specify the types of the exchanged data. They cannot be used to express that the right values are exchanged (i.e., functional correctness), nor to express data-dependent protocols where the remaining protocol can depend on prior messages.
There have been two lines of work that extend session protocols with logical conditions to remedy this shortcoming. Bocchi et al. [2010]; Toninho et al. [2011]; Zhou et al. [2020]; Thiemann and Vasconcelos [2020] develop type systems that combine concepts from the theory of dependent and refinement types with session types. Lozes and Villard [2012]; Craciun et al. [2015]; Hinrichsen et al. [2020] develop program logics that combine concurrent separation logic [O'Hearn 2004; Brookes 2004] with concepts from session types. Separation logic (instead of a type system) is used to enforce affine use of a channel library, and Hoare triple specifications (instead of typing rules) are provided for channel operations.
(2) Integration in functional languages: While session types were originally developed in the context of the π-calculus, a tempting direction is to combine session types with functional programming. In such languages, session-typed channels are considered first-class data, and can be stored in data types and sent over channels (similar to first-class mutable references in ML). The GV family by Gay and Vasconcelos [2010]; Wadler [2012] extends linear lambda calculus with channels. The SILL family by Toninho et al. [2013]; Pfenning and Griffith [2015]; Toninho [2015] uses a monadic embedding of session types into an unrestricted language.
(3) Session channels as a library: Session types are typically a language feature, but a recent trend is to embed channels with session types as a library in an existing language [Hu et al. 2008; Scalas and Yoshida 2016; Pucella and Tov 2008]. Often, either the host language or the encoding supports substructural types, to enforce the affine use of session channels [Kokke and Dardha 2021; Lindley and Morris 2016; Jespersen et al. 2015; Chen et al. 2022]. (4) Minimalistic calculi: Session-typed languages add a large number of additional constructs to the types and expressions of their base languages. Already in the early days of session types, Kobayashi [2002] showed that session types can be encoded into π-types; an approach formalized by Dardha et al. [2012, 2017], and applied to GV-style languages by Jacobs [2022].
To our knowledge, there is no prior work that combines all these features under a single roof. The goal of this functional pearl is thus to do exactly that. We will develop an account of dependent session protocols for a GV-style language in a concurrent separation logic. We start from first principles, enabling us to take a minimalistic approach. Our results have been mechanized in the Coq proof assistant using the Iris framework for concurrent separation logic [Jung et al. 2015, 2016; Krebbers et al. 2017a; Jung et al. 2018; Krebbers et al. 2018, 2017b]. In the remainder of the introduction, we give a teaser of our approach and list some of our key insights.
Key idea #1: Implicit buffers through one-shot channels. The first step to formalizing a language with message-passing concurrency is to decide on the semantics of channels. A common approach is to use an asynchronous semantics where the sender enqueues messages in a buffer, from which the receiver dequeues them. In such a semantics, the receive operation can block if no message is present, but the send operation will always succeed immediately. To model the notion of a buffer, one typically incorporates a linked list in the formal definition of the language, and extends the language with operations to send (enqueue) and receive (dequeue) messages.
To be minimalistic, we want to avoid having to explicitly model the notion of a linked list in our semantics. Inspired by Kobayashi [2002]; Dardha et al. [2017]; Jacobs [2022], we instead build on the notion of one-shot channels. These come with functions new1 (), which creates a new channel c; send1 c v, which sends a message v on channel c (without blocking); and recv1 c, which receives a message from c (blocking until a message has been sent). On top of the one-shot channels, we define regular multi-shot session channels. For example, the send operation of session channels is defined as send c v ≜ let c' := new1 () in send1 c (v, c'); c'. This operation not only sends the message v, but also creates a new channel c' for the remainder of the communication, and sends the new channel paired with the message. While there is no explicit notion of a buffer or linked list in the semantics of one-shot channels, nor in the definition of session channels, we will show that the buffer arises implicitly from the preceding definition.
Key idea #2: Dependent session protocols via one-shot protocols. Program logics for message-passing concurrency typically come with a channel points-to connective c ↣ p, which provides unique ownership of a channel endpoint c that has to obey the protocol p. These protocols typically have a sequenced structure, describing a dependent session of multiple exchanges. An example of a dependent separation protocol in the Actris logic by Hinrichsen et al. [2020, 2022] is !(n m : N)⟨(n, m)⟩{n ≤ m}. ?⟨m − n⟩. end. This protocol expresses that two natural numbers n ≤ m are sent, and the difference m − n is returned. Similar to our desire to avoid explicitly modeling the buffers that underpin channels as linked lists, we would like to avoid having to inductively define such dependent session protocols. In our system, the channel points-to connective for the one-shot channels is simply c ↣ (tag, Φ), where tag ∈ {Send, Recv} and Φ is a predicate over the exchanged value. While our protocols only describe a single message, dependent session protocols that can describe session channels are simply defined as combinators. This is achieved by recursively using the channel points-to connective for describing the channel continuation inside the base protocol Φ. Due to Iris's support for impredicativity [Svendsen and Birkedal 2014], we can use its fixpoint combinator to define recursive (and dependent) protocols by guarded recursion.
Key idea #3: Layered session channel library design and verification. We implement session channels in terms of one-shot channels, and our dependent session protocols as combinators of one-shot protocols, but we wish to go further by layering our design from below and from above. The layered design is shown in Fig. 1.
From below, we do not start with a language that has channels as primitive. We build on top of a functional language with mutable references as found in languages of the ML family (with allocation, deallocation, store, and load). One-shot channels are implemented on top of primitive mutable references, and verified using (Iris's) separation logic rules for the verification of concurrent programs with mutable shared-memory references. Building on top of a language with mutable references has other tangible benefits. First, we can write and verify programs that transfer data by reference. Second, we can define both functional versions of session channels (that return a new endpoint) and imperative versions of channel endpoints (that mutate the channel).
From above, we demonstrate the flexibility of our solution by implementing multiple methods for closing a session. Session types and protocols are often terminated with an explicit end-tag, and it is non-trivial to extend the range of termination tags in settings where the protocols are defined inductively. Since our session channels are defined as combinators on top of the one-shot channels, which do not inherently include a method for closing, we can freely choose how to close our channels, after the fact. Initially, we implement asymmetric closing, where one endpoint initiates the closing of a channel (protocol !end), while the other waits and actually deallocates the memory backing the channel (protocol ?end). We later provide two alternatives with a self-dual end protocol: symmetrically closing the channel with the same closing operation on both endpoints, where the last call deallocates the channel, and a combined send-close operation, which sends a last message but does not create a continuation channel.
Key idea #4: Mechanization using a subset of Iris. Our layered design proved beneficial for the meta theory and mechanization of our results. We only need the usual points-to connective ℓ ↦ v for ownership of locations ℓ with value v in separation logic, a simple form of ghost state (unique tokens), and Iris's impredicative invariants. By comparison, the Actris logic by Hinrichsen et al. [2020, 2022] relies on a non-trivial model of recursive protocols using the technique from America and Rutten [1989] for solving recursive domain equations, and uses Iris's mechanism for higher-order ghost state [Jung et al. 2016] to define its channel points-to connective c ↣ p. Since the meta theory of our results is so simple, we are able to give all definitions as part of this paper (there is no appendix) and mechanize all our results in less than 1000 lines of Coq.
Contributions. This paper makes the following contributions:
• A layered implementation of higher-order shared-memory session channels, starting from mutable references, on which we build one-shot channels, session channels, and imperative channels (§2).
• A layered development of separation logic specifications for our channels. We start from a small subset of Iris, developing specifications for one-shot channels, which are then treated as a black box upon which we build high-level dependent separation protocols (§3).
• Support for subprotocols (§3.3) and guarded recursion (§4), which transfers automatically from one-shot protocols to dependent session protocols.
• A demonstration of the extensibility obtained by building on first principles, through various methods for closing session channels (§5).
• A small and intuitive mechanization in the Coq proof assistant, comprised of less than 1000 lines of Coq code (§7). The paper is annotated with mechanization icons that link to the relevant Coq code, and a cross-reference sheet is provided (§A).
LAYERED IMPLEMENTATION OF CHANNELS
In this section we implement message-passing channels in terms of low-level operations. We build these channels in several layers:
• We start by describing the base language and its low-level operations (§2.1).
• We then build a library of one-shot channels (§2.2).
• On top of this we build functional multi-shot session channels (§2.3).
• As a final layer, we have imperative session channels (§2.4).
• We show that linked lists (buffers) implicitly emerge (§2.5).
In the subsequent §3, we develop specifications and proofs for each of the layers, and demonstrate how to verify the correctness of the example.
Base Language
We use HeapLang, a low-level concurrent language that comes with the Iris separation logic framework, as our base language. HeapLang has the purely functional operations that one would expect, such as arithmetic and conditionals, and also includes products and sums. For the purpose of this paper, the following operations on mutable memory locations are the most relevant:

ref v    Allocate a new memory location that initially stores value v.
! ℓ      Read the value from memory location ℓ.
ℓ ← v    Write value v to location ℓ.
free ℓ   Free the memory location ℓ.

HeapLang additionally includes a primitive fork {e} for spawning a new thread: the forked program e is allowed to refer to variables in the surrounding lexical context. The following is a grammar of the most notable constructs that we will use:
One-Shot Channels
At the base of our development lie one-shot channels, which communicate a single message from a sender to a receiver. The API consists of the following operations:

new1 ()     Create and return a new one-shot channel c.
send1 c v   Send message v on channel c (non-blocking).
recv1 c     Receive a message from channel c (blocks until a message is sent).
The channels are one-shot: only one value is sent over the channel, after which point the channel is deallocated as part of recv1.
Example of using one-shot channels. These channels enable us to set up a communication between child and parent threads, as in the example sketched below. The main thread creates a one-shot channel c, which is shared between the main thread and a forked-off thread. The forked-off thread then dynamically allocates a reference to 42, and sends the location over the channel. Finally, the main thread receives the reference, reads it, and asserts that the stored value is 42. To communicate several times, we could share several channels, but an interesting alternative style that allows unbounded communication is to send a new channel along with the message, as we shall see in §2.3.
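A minimal sketch of this example in HeapLang-style notation (the concrete listing is our reconstruction of the described program; variable names are ours):

  let c := new1 () in
  fork { let ℓ := ref 42 in send1 c ℓ } ;
  let ℓ := recv1 c in
  assert (! ℓ = 42)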
In the HeapLang semantics, assert gets stuck if the condition is false. Safety (the fact that the assert does not fail) crucially depends on the forked-off thread not modifying the reference after it has sent it. This example is safe, as the exclusive permission to write and read the reference first belongs to the forked-off thread, after which it is transferred to the main thread. We verify this safe transfer of ownership in §3.2. This goes beyond standard session types due to reference ownership and the verification of the assert.
Implementation of one-shot channels. In our development, channels are not primitive but implemented in terms of low-level mutable references. A channel is represented as a mutable reference that initially contains the value None. To send a value v to the channel, we set the mutable reference to Some v. To receive from the channel, we read the value of the mutable reference in a loop, until we see the None change to Some v. We then deallocate the mutable reference, and return v. A sketch of the resulting implementation is given below. The implementation shows that safety also depends on the fact that clients only call recv1 once, and do not call send1 after a completed recv1. These would otherwise result in a double-free and a use-after-free, which get stuck in the HeapLang semantics.
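Following the description above, the implementation is, in HeapLang-style notation (a sketch; the paper's actual listing may differ in minor syntactic details):

  new1 () ≜ ref None
  send1 c v ≜ c ← Some v
  recv1 c ≜ match ! c with
            | None ⇒ recv1 c          (* spin until a message arrives *)
            | Some v ⇒ free c; v      (* deallocate the channel, return the message *)
            end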
HeapLang has a sequentially consistent memory model. In a weaker memory model, the store/load instructions should use release/acquire memory order (or stronger). Similar to most literature on Iris, with the exception of papers specifically focused on weak memory [Mével and Jourdan 2021; Kaiser et al. 2017; Dang et al. 2020], we ignore these concerns.
Session Channels
A session channel facilitates sequences of messages between two channel endpoints, which is useful for implementing client-server style concurrency.
The new function allocates an initial one-shot channel and returns it as the session channel. The send function allocates a new one-shot channel, and sends it along the original channel with the given message v, after which the new channel is returned. The recv function receives the value and continuation channel pair using the original one-shot channel receive function. The close function sends a final termination flag, without allocating a new one-shot channel, to terminate the session. The wait function receives the final termination flag, which deallocates the channel. A sketch of these definitions is given below.
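In HeapLang-style notation, the description above corresponds to the following sketch (our reconstruction of the paper's definitions; the choice of () as the termination flag is an assumption):

  new () ≜ new1 ()
  send c v ≜ let c' := new1 () in send1 c (v, c'); c'
  recv c ≜ recv1 c              (* returns the pair (v, c') *)
  close c ≜ send1 c ()          (* send the final termination flag *)
  wait c ≜ recv1 c              (* receive the flag; recv1 deallocates the channel *)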
For session channels to be used safely, i.e., to not cause memory errors such as use-after-free or double-free, it is crucial that channel endpoints are used in a dual way. That is, if there is a send on one endpoint, there should be a matching receive on the other endpoint, and vice versa. Similarly, a close should match up with a wait. We discuss other options for closing channels in §5.
Example of using session channels. An example of using the session channels (prog_add, sketched below) proceeds as follows. The main thread initially creates a session channel c, which is shared between the main thread and a forked-off 'worker' thread. The main thread dynamically allocates a reference ℓ to 40, after which it sends the reference over the channel. The worker thread receives the reference, adds 2 to it, and sends a flag back, to signal that the reference has been updated. The main thread receives the flag, then reads the updated value stored in the reference, and asserts that it is 42 (assert (! ℓ = 42)). Finally, the main thread sends the closing signal, which is received by the worker thread. Each operation on the channel binds the channel continuation to an overshadowing name c, to intuitively capture that the threads keep working on the same session.
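A sketch of prog_add in HeapLang-style notation (reconstructed from the description above; the flag value () is our assumption):

  prog_add ≜
    let c := new () in
    fork { let (ℓ, c) := recv c in
           ℓ ← ! ℓ + 2;
           let c := send c () in    (* signal that the reference was updated *)
           wait c } ;
    let ℓ := ref 40 in
    let c := send c ℓ in
    let (_, c) := recv c in
    close c;
    assert (! ℓ = 42)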
Similar to the example presented in §2.1, this program is safe if the assert succeeds and there are no memory errors due to improper use of the channel API. Intuitively, this example achieves safe access to the reference via ownership delegation over the channel. We verify this in §3.4.
Imperative Channels
Although session channels are more convenient to use than one-shot channels, they still require us to continuously pass around new channel references.On top of session channels we therefore define imperative channels, which have a traditional imperative channel API: new_imp () Create a new imperative channel, and return a pair ( 1 , 2 ) of two endpoints..send( ) Send message on channel .Return nothing.
.recv()
Receive a message from channel .Return only the message.
.close()
Send termination message and close the channel.
.wait()
Wait for termination message and close the channel.
(Fig. 2. The summation example using imperative channels. The comments in the listing read: create channel between main and worker; start the worker thread; receive count and answer reference; sum received numbers; signal that we are done; wait for closing signal; mutable reference ℓ to store the sum; we will send 100 numbers to be summed into ℓ; send the numbers 1..100; wait until the worker is done; send closing signal; assert that the received answer is correct.)

We implement imperative channels in terms of session channels by storing a session channel in a mutable reference:

  c.close() ≜ close (! c); free c
  c.wait() ≜ wait (! c); free c
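The remaining operations wrap a session channel in a mutable reference in the same way; a plausible sketch, consistent with the close and wait definitions above (our reconstruction, not the paper's verbatim listing):

  new_imp () ≜ let c := new () in (ref c, ref c)
  c.send(v) ≜ c ← send (! c) v                       (* store the continuation channel *)
  c.recv() ≜ let (v, c') := recv (! c) in c ← c'; v  (* update the reference, return the message *)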
Emerging Linked List Buffers
We demonstrate the imperative API with the example from Fig. 2. The example creates a channel to communicate between the main thread and the forked-off 'worker' thread. The main thread allocates a reference ℓ and sends the message (100, ℓ) to the worker thread, which indicates that the main thread is going to send 100 further number messages to the worker thread. The worker thread receives each of these numbers, and mutates ℓ to keep track of their sum. Finally, the worker thread sends an empty acknowledgment message () to the main thread, indicating that it is done with ℓ and will not mutate ℓ further. The main thread closes the session by sending the closing signal, which the worker thread waits for. The main thread then reads the value of the sum from ℓ, and asserts that it is correctly computed.
The linked structures that emerge during execution are displayed in Fig. 3. In the picture, the main thread has sent the numbers [1, ..., 9], while the worker thread has so far only received [1, 2, 3]. At run time, the worker thread has a reference to its endpoint c2, which points to the head of a linked list structure. When the worker thread receives the next message (4), it updates c2 to point to the next linked list element, and adds the value of the message to ℓ. The main thread also has a reference to ℓ, but it will not use it until the worker thread has sent the completion signal back, to avoid race conditions. Instead, the main thread is still busy working on the other end of the linked list. Each time the main thread sends a message, it allocates a new memory location, puts its message into the tail, and updates the tail of the existing linked list to point to the new location. This emergence of the linked list occurs because the send operation allocates a new one-shot channel, represented as a memory location, and sends it along with the message. At a lower level of abstraction, this results in a linked list buffer of messages, where each message is a pair of a value and a continuation channel.
If the worker thread were to catch up with the main thread, it would wait until it sees a message. When the main thread is done, it tries to receive a message using the last linked list node it has created, which is initially still empty. When the worker thread reaches that node, it puts the acknowledgment () into it, signaling that the main thread may now read from ℓ. More generally, the threads switch roles when the polarity of the protocol changes: the thread that used to consume list cells now creates new list cells, and vice versa.
Note that the emergence of the buffer as a bi-directional linked list is somewhat implicit. We have built several layers of channels, but at no point did we have to think about the linked-list run-time structure as a whole. We will see a similar phenomenon when doing the proofs: we never need to think about the run-time structure as a whole. Instead, we will develop specifications in a layered way, following the layers of the implementation.
In the remainder of this paper, we develop specifications for these different layers (corresponding to §2.1 to §2.4), and prove the correctness of the channel implementations with respect to these specifications. We then use the specifications to verify this example in §3.5.
LAYERED SPECIFICATIONS AND VERIFICATION
As the reader may have noticed, the implementations in the preceding section are untyped. Rather than assigning types to the channel APIs, we will provide separation logic specifications. These allow us to prove functional correctness of programs that make use of the channel API. We prove partial correctness, which guarantees that if a program satisfies a separation logic specification with a trivial precondition, then the program is safe, i.e., does not get stuck in the semantics due to run-time type errors, use-after-free or double-free bugs, or failing assert expressions. In terms of session types, our result should be compared with type safety and session fidelity. As is standard in papers that use Iris, we do not prove deadlock freedom or termination (which would only be true when assuming a fair scheduler, as the spin-loop in recv1 could otherwise trivially loop).
The Iris Separation Logic
To specify and verify the channel implementations and example clients, we use the Iris separation logic. Fig. 4 shows the grammar and a selection of rules of the subset of Iris that we use. Iris provides a program logic for HeapLang with Hoare triples {P} e {Φ}, which express that given the precondition (P : iProp), the program (e : Expr) is safe to execute, and yields the postcondition (Φ : Val → iProp).

Points-to: Iris is a separation logic [O'Hearn et al. 2001], meaning that propositions assert ownership over resources, such as references. This is made precise by the separation logic connectives, such as the separating conjunction P ∗ Q, which describes that the propositions P and Q hold for separate parts of the heap. In particular, this lets us derive exclusivity of references; it is impossible to separately own the same reference twice: ℓ ↦ v ∗ ℓ ↦ w −∗ False. Here −∗ is the "separating implication" connective. It acts similarly to the regular implication, but for separation logic.
Separation logic facilitates modular verification, by virtue of the framing rule Ht-frame, which states that we can verify programs in the presence of separate resources R. Non-structured concurrency is supported by the Ht-fork rule. Finally, Iris enjoys the conventional rules for mutable references Ht-alloc, Ht-load, Ht-store, and Ht-free, which respectively allow allocating, reading, updating, and freeing mutable references.
We use Iris's impredicative invariants, ghost state tokens tok γ, and the later modality ⊲ P. We further discuss the meaning and importance of these connectives throughout the section.
One-Shot Channels
In Fig. 5 we show separation logic specifications for the one-shot channel implementation from §2.2. These specifications make use of one-shot protocols that describe the protocol for a one-shot channel. As a one-shot channel communicates a single value, the protocol carries a predicate describing which values are allowed to be communicated over that channel. Additionally, the protocol says whether we are allowed to send or receive. Therefore, we represent one-shot protocols as a pair (tag, Φ) where tag ∈ {Send, Recv} and Φ ∈ Val → iProp. The predicate Φ is a separation logic predicate, so that protocols can express transfer of ownership.
To link protocols to actual channels, we shall define a channel points-to predicate c ↣base (tag, Φ). The channel points-to provides unique ownership of one end of the channel and says that channel c satisfies the protocol (tag, Φ). The channel points-to is analogous to the normal points-to ℓ ↦ v of separation logic, in the sense that a points-to assertion is required to verify an invocation of a channel operation. The definition can be found in Fig. 6, but we will first discuss how it is used in the Hoare rules for the channel operations.
When we create a new channel using new1 (), we may choose the protocol predicate Φ, and we get two channel points-tos: c ↣base (Send, Φ) and c ↣base (Recv, Φ). Note that we get both channel points-tos for the same channel c, because the same memory location is used for both ends of the channel, and the two channel points-tos represent ownership of the two ends of the channel, which give two different views of the same memory location. As we shall see in §3.2, this is achieved by moving the ownership of the primitive heap points-to of the memory location into an invariant, which allows us to share it. In accordance with session types, and to state the specification of new1 () in a symmetric manner (Fig. 5), we introduce the dual function on protocols, which maps (Send, Φ) to (Recv, Φ) and (Recv, Φ) to (Send, Φ).
Once we have the two channel points-to predicates, we may give one of them to another thread, and keep the other in the current thread. This way we ensure that the two threads use the protocol to agree on how the channel will be used.
We may then use the send1 and recv1 operations to perform the communication. The send1 operation requires ownership of c ↣base (Send, Φ) as well as Φ v in its precondition. Dually, the recv1 operation requires ownership of c ↣base (Recv, Φ) in its precondition. Its postcondition guarantees that recv1 returns a value that satisfies Φ. With these specifications we can verify the example presented in §2.2 using the protocol sketched below. This protocol expresses that the exchanged value is a location ℓ; we transfer the ownership of the exchanged reference ℓ along with the message. With this, we can symbolically apply the one-shot channel specifications, and finally assert that the value read from the received reference is 42.
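For instance, the following one-shot protocol matches this description (a sketch in our notation, not necessarily the paper's verbatim definition):

  (Send, λ v. ∃ ℓ. (v = ℓ) ∗ ℓ ↦ 42)

where the forked-off thread holds the Send side and the main thread holds the dual Recv side.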
Verifying the implementation with respect to the specification. We now prove that the one-shot channel implementation satisfies its specification. To do this, we define the channel points-to ↣base in terms of Iris logic primitives (namely, the ordinary points-to, ghost state, and invariants). We then prove that the specifications for new1, send1, and recv1 follow from the rules of Iris. We first present the two key concepts from Iris needed for our proof: ghost state and invariants.
Ghost state. Ghost state is logical state that we can use to logically coordinate between parallel threads. Compared to the standard approach to ghost state in concurrency verification [Owicki and Gries 1976], ghost state in Iris is not part of the program text. It is introduced and manipulated solely in proofs. Just as the physical heap keeps track of the values of memory locations, Iris has a ghost heap that keeps track of the values of ghost locations. In our case we only need the very simplest form of ghost state: we need pure ownership over ghost heap locations; we do not need to store further information in the ghost locations. Given the ghost location γ, we have the ghost resource tok γ, which is analogous to ℓ ↦ (), i.e., a location that points to a unit value. It may seem a bit puzzling that ghost locations that do not store any interesting contents can be helpful in a proof. The key is that ghost locations have the same exclusivity as memory locations. That is, we have the Tok-excl rule that says it is impossible to have ownership of two ghost locations with the same name: tok γ ∗ tok γ −∗ False. We shall see why this is useful in a moment. Finally, we can always allocate new pieces of ghost state, using the Ht-ghost-alloc rule.
Invariants.The points-to resource ℓ ↦ → is an affine resource, and cannot be duplicated.This is a problem for verifying concurrent programs, where we would like to use the same memory location from multiple threads: when we fork off a child thread, we would like to keep ownership over the memory location in both the main thread and the child thread.
To solve this issue, concurrent separation logic has the notion of invariants. At any moment in the proof where we have ownership over P ∈ iProp, we can choose to establish P as an invariant. This is formally described by the Ht-inv-alloc rule. The advantage of an invariant is that it can be freely duplicated. In turn, we cannot directly access the P inside the invariant. Instead, we can only temporarily access it when the program takes an atomic step, such as a memory load ! ℓ or store ℓ ← v. After the atomic step has happened, we must immediately put P back into the invariant. This is formally described by the Ht-inv-open-close rule, where the resources ⊲ P are the resources that are temporarily removed from the invariant. In the precondition of the rule, we obtain access to the resources ⊲ P taken out of the invariant, and in the postcondition we have to give back the resources ⊲ P, which represents putting them back into the invariant. The proposition inside an invariant is typically a disjunction of several states, where the states may assert ownership over memory locations using ℓ ↦ v, and may assert that v has certain properties in that state. A state may also assert ownership over ghost resources.
Iris's invariants are impredicative [Svendsen and Birkedal 2014], which effectively lets us nest invariants inside of invariants, because the invariant assertion for P is itself an iProp for every P ∈ iProp, including the case where P is itself an invariant assertion. Nesting of invariants is critical for the verification of our session channels, as will be covered in §3.4. To maintain soundness of the Iris logic, resources extracted from an invariant are guarded by a later modality ⊲ [Nakano 2000; Appel et al. 2007]. This later can be seen in the Ht-inv-open-close rule. Resources ⊲ P behind a later modality can only be used after the program does the next step of execution. This is formally expressed by the Ht-later-frame rule, which states that one can frame resources under a later, if the program has not terminated. Another means of stripping laters is if the guarded resources are timeless (Ht-later-timeless). Pure propositions, reference ownership (ℓ ↦ v) and ghost ownership (tok γ) are timeless, which means that when we open an invariant, we can immediately remove the later from these connectives.
The one-shot channel invariant. To verify the one-shot channels, we need to define the connective ↣base, whose key ingredient is an invariant. To explain the invariant, we start with a key observation. The one-shot channel can be in three different states: (1) no message has been sent (ℓ ↦ None), (2) a message has been sent but not received (ℓ ↦ Some v), and (3) the message has been both sent and received (ℓ has been deallocated). These states are reflected in the invariant chan_inv γ1 γ2 ℓ Φ defined in Fig. 6. The arguments γ1 and γ2 are two ghost locations, whereas ℓ is the physical memory location where the channel is located, and Φ is the predicate associated with the protocol. The invariant captures each state with a separate disjunct. By virtue of the exclusivity of the ghost resources, it is then possible to exclude possible states, based on local ghost ownership.
In particular, if one owns tok γ1, the invariant must be in the first state (as the other states assert ownership of the token). Similarly, if one owns tok γ2, the invariant cannot be in the final state. The proof then follows by letting the sender own tok γ1 and the receiver own tok γ2, to let them locally determine which state the invariant is in, by the exclusivity rule of the ghost resources.
More formally, with the invariant in place, we can define the channel points-to c ↣base (tag, Φ), as presented in Fig. 6. The definition captures (1) that c is a reference (c = ℓ), (2) that the invariant chan_inv γ1 γ2 ℓ Φ is established, and (3) that the endpoint has ownership of either tok γ1 or tok γ2, depending on whether it is the sender or the receiver, respectively. The later modalities (⊲) in the definition of ↣base are needed to support infinite protocols via guarded recursion (§4).
Initially, when creating a channel, we establish the invariant in the first state, using the Ht-inv-alloc rule. We then duplicate the invariant, and create c ↣base (Send, Φ) and c ↣base (Recv, Φ) using the two copies of the invariant, as well as tok γ1 and tok γ2, respectively, which are created by two applications of the Ht-ghost-alloc rule.
When the sender wants to send their message v, they temporarily open the invariant using the Ht-inv-open-close rule, and determine that they are in the first state, based on their tok γ1 token. They then get ownership over the reference ℓ ↦ None. The sender then modifies the location to contain the sent value Some v, and transfers the ownership back into the invariant. The sender also puts the token tok γ1 into the invariant, as well as the resources Φ v captured by the protocol. The invariant is restored in the second state.
When the receiver wants to receive, it temporarily opens up the invariant, using the Ht-inv-open-close rule, to get ownership over the reference. It reads the location, and if the value is None, it determines that it is in the first state, and so it loops. Once a value Some v is read, it is determined that we are in the second state, and so the receiver deallocates the reference. The receiver additionally takes the resource Φ v out of the invariant, and re-establishes the invariant by putting its token tok γ2 into the invariant, which restores it in the third state.
The rule for new1 is then proven as follows. We obtain ownership over the location ℓ ↦ None because new1 allocates the reference. We also allocate two new ghost tokens tok γ1 and tok γ2, obtaining the identifiers γ1 and γ2. We establish the invariant using the first disjunct, by putting ℓ ↦ None into the invariant, and allocate it with the Ht-inv-alloc rule. We then duplicate the invariant, and create c ↣base (Send, Φ) and c ↣base (Recv, Φ) using the two copies of the invariant, as well as tok γ1 and tok γ2, respectively.
Subprotocols
We define a subprotocol relation on dependent separation protocols as introduced by Actris [Hinrichsen et al. 2022], analogous to subtyping on session types [Gay and Hole 2005]. Whereas subtyping between session types is established by subtyping between the messages, the subprotocol relation between protocols is established by implications between the separation logic predicates. The subprotocol relation is denoted p ⊑ q, where p and q are protocols. This relation is reflexive and transitive, and p ⊑ q iff the dual of q is a subprotocol of the dual of p. We layer subprotocols on top of our specification for one-shot channels by defining a new channel points-to c ↣ p that is explicitly closed under subprotocols, as sketched below. We do not use a superscript on ↣ because we consider it to be the main channel points-to, whereas we view ↣base as an internal notion. This channel points-to satisfies a subsumption-like rule: (c ↣ p) ∗ ⊲ (p ⊑ q) −∗ (c ↣ q), which is proved by transitivity of ⊑. The use of the later modality (⊲) is discussed in §4.
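A plausible definition of the subprotocol-closed points-to (a sketch; the paper's figure-level definition may differ in details):

  c ↣ p ≜ ∃ q. c ↣base q ∗ ⊲ (q ⊑ p)

With this definition, the subsumption rule follows from transitivity of ⊑, and c ↣base p entails c ↣ p by reflexivity.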
We can prove versions of the specifications for new1, send1, and recv1 for ↣. These proofs are straightforward, because we can prove these specifications using the existing specifications for ↣base from Fig. 5, by using ⊑ at appropriate points to convert one protocol into another. In particular, we apply this conversion in the send rule just before sending the message, and in the receive rule just after receiving the message. We also trivially have (c ↣base p) −∗ (c ↣ p), which is used to prove the new1 rule for ↣.
Session Channels
Now that we have established the specifications for the one-shot channels, we move on to the next layer: multi-shot session channels. A prominent approach to specifying and verifying multi-shot channels is the concept of session types [Honda 1993], which lets a user ascribe session channel endpoints with a sequence of obligations to send or receive messages of certain types. More recently, the session type approach has been adopted in the separation logic setting [Craciun et al. 2015; Hinrichsen et al. 2022]. One such adaptation is dependent separation protocols [Hinrichsen et al. 2022]. Rather than ascribing types to each exchange, dependent separation protocols ascribe logical variables, physical values, and propositions. The dependent separation protocols and the specifications for the session channels can be seen in Fig. 7.
The dependent separation protocols consist of four constructors: !(x : τ)⟨v⟩{P}. p, ?(x : τ)⟨v⟩{P}. p, !end, and ?end. The first two constructors describe the permission to send or receive the logical variable x, the value v, and the resources P, respectively, after which they follow the protocol tail p. Here, x binds into all of the remaining constituents. We often omit the binder when it is of the unit type: e.g., !⟨v⟩{P}. p. We similarly often omit the proposition if it is True: e.g., !⟨v⟩. p. The last two constructors specify that the protocol has ended, meaning that no further operations can be made on the channel, and the channel can be closed. We further detail alternative specifications for closing and deallocation in §5.
The protocols are subject to the same notion of duality as presented in §3.2. The dual of a protocol is the same sequence of obligations, where the polarity has been flipped, i.e., all sends (!) become receives (?), and vice versa, as made precise by the rules of the figure. Finally, we use the same channel endpoint ownership c ↣ p as for the one-shot channels, as the dependent separation protocols share the same type as the one-shot protocols, as will be seen momentarily.
The dependent separation protocols can be used to specify and verify session channels. As an example, the dependent separation protocol prot_add (sketched below) specifies the interactions of the prog_add example from §2.3. The protocol says that one must first send a reference to a number (captured by the logical variable (ℓ, n) : Loc × Z), along with the ownership of the reference ℓ ↦ n. Afterwards, the updated reference can be reacquired, followed by the protocol termination. The dual of the protocol starts with the matching receive ?((ℓ, n) : Loc × Z), and flips the polarity of the remaining steps. The notion of duality is used in the specification for new. The specification states that we obtain separate exclusive ownership of the returned endpoint c, once with a freely picked protocol p and once with its dual. This mimics the intuition from the one-shot channel, in which one endpoint had to release the specified resources, while the other could acquire them. The specification for send states that in order to send, the channel endpoint must have a sending protocol, and we must give up the specified resources P, for a specific instantiation of the variable x. Additionally, the sent value must correspond to the protocol, for the same variable instantiation. As a result, the returned channel endpoint follows the protocol tail p, for the same variable instantiation. Conversely, the specification for recv states that we can receive if the channel endpoint has a receiving protocol. As a result we obtain an instance of the logical variable x, and the resources P specified by the protocol. Additionally, the returned value is exactly the value v specified by the protocol, and the new endpoint follows the protocol tail p. The prog_add example can now be verified using the prot_add protocol.
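A plausible rendering of prot_add, reconstructed from the description above (in particular, the flag value () is our assumption):

  prot_add ≜ !((ℓ, n) : Loc × Z) ⟨ℓ⟩ {ℓ ↦ n}. ?⟨()⟩ {ℓ ↦ n + 2}. !end

Its dual is ?((ℓ, n) : Loc × Z) ⟨ℓ⟩ {ℓ ↦ n}. !⟨()⟩ {ℓ ↦ n + 2}. ?end, which governs the worker thread's endpoint.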
Verification of the session channel specifications. The definitions of the dependent separation protocols and the specification rules presented in Fig. 7 are derived directly on top of the one-shot channel definitions and specifications. In particular, the type of dependent separation protocols is the same as the one for the one-shot channel protocols, namely Prot. The definition of the receiving protocol is sketched below. The recv_prot constructor takes four arguments, and constructs a receiving one-shot channel protocol. In particular, the constructor takes the type τ of its logical variable, the exchanged value v, the exchanged proposition P, and the protocol tail p. The latter three arguments all abstract over the protocol variable, which is existentially quantified in the protocol body. The second projection captures that the actual exchanged value is a tuple of the value specified by the protocol (v x) and the continuation (c). It additionally includes ownership of the resources specified by the protocol (P x), and finally one-shot channel ownership of the continuation with the protocol tail (c ↣ p x). The notation ?(x : τ)⟨v⟩{P}. p then simply lets us instantiate the receiving constructor, without explicitly repeating the variable abstraction for the three constituents.
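Following this description, the definition is presumably of the following shape (a sketch in our notation):

  recv_prot τ v P p ≜ (Recv, λ w. ∃ (x : τ) (c : Val). (w = (v x, c)) ∗ P x ∗ c ↣ p x)

so that ?(x : τ)⟨v⟩{P}. p ≜ recv_prot τ (λ x. v) (λ x. P) (λ x. p).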
The duality function of the session channels is the same as the one for the one-shot channels. We define the sending constructor in terms of the receiving one, using the duality function, and to specify the close and wait operations we define the two session protocols !end and ?end (see the sketch below). Finally, the channel endpoint ownership c ↣ p is identical to the one for the one-shot channels, as the type of the protocols is the same; they simply carry channel continuations now. This immediate reuse of the one-shot ownership is made possible by the higher-order nature of Iris. In particular, the internal invariant of the endpoint ownership refers to the session protocols, which internally include a nested endpoint ownership, and so on. By virtue of the step-indexing of Iris, this is sound as we always take a step for each unfolding of the nested invariants.
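Writing dual p for the duality function, the sending constructor and the end protocols plausibly take the following shape (our reconstruction; the dual tail in the send case reflects that the sender keeps the dual endpoint of the freshly allocated continuation channel):

  !(x : τ)⟨v⟩{P}. p ≜ dual (?(x : τ)⟨v⟩{P}. dual p)
  !end ≜ (Send, λ w. w = ())
  ?end ≜ (Recv, λ w. w = ())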
With these definitions, the soundness of the session channel specifications (Fig. 7) follows almost immediately from the sound specifications of the one-shot channel operations send1 and recv1.
Subprotocols for session protocols. We have a notion of subprotocols for one-shot protocols (§3.3), but what about dependent session protocols? Because we have defined session protocols as particular forms of one-shot protocols, we get the appropriate notion of subprotocols for session protocols for free. The corresponding lemmas for session subprotocols (and the imperative derivation on top of them) are already true and easily derived from the subprotocol rules in §3.3 (a reconstruction is sketched below). At a high level, these lemmas state that a session protocol is a subprotocol of another if, for each logical message in the first protocol, there exists an appropriate logical message in the second protocol, such that we have a separating implication between the separation logic assertions, and the tails of the protocols are in a subprotocol relationship. The stated lemmas are somewhat stronger than this high-level description; for instance, the user of the lemmas gets access to the assertion P₁ x₁ before having to provide the corresponding logical message x₂ for the other protocol. As an example, this strengthening allows one to perform a form of framing of resources within a protocol: if a resource R is provided by an earlier send and needed by a later receive, we can frame these two resources (i.e., remove both from the protocol by canceling them out). This property can be illustrated by the following rules:

    !⟨v⟩{P}. ?⟨w⟩{Q}. p ⊑ !⟨v⟩{P ∗ R}. ?⟨w⟩{Q ∗ R}. p
    ?⟨v⟩{P ∗ R}. !⟨w⟩{Q ∗ R}. p ⊑ ?⟨v⟩{P}. !⟨w⟩{Q}. p
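Concretely, the derived lemmas could take roughly the following Actris-style form (our reconstruction of the elided display, not the paper's exact statement; ⌜·⌝ embeds pure facts):

    ?(x₁ : τ₁)⟨v₁⟩{P₁}. p₁ ⊑ ?(x₂ : τ₂)⟨v₂⟩{P₂}. p₂
        if  ∀x₁. P₁ −∗ ∃x₂. ⌜v₁ = v₂⌝ ∗ P₂ ∗ ⊲(p₁ ⊑ p₂)

    !(x₁ : τ₁)⟨v₁⟩{P₁}. p₁ ⊑ !(x₂ : τ₂)⟨v₂⟩{P₂}. p₂
        if  ∀x₂. P₂ −∗ ∃x₁. ⌜v₁ = v₂⌝ ∗ P₁ ∗ ⊲(p₁ ⊑ p₂)

The receive case matches the description above: one first obtains P₁ for a given x₁ and only then has to pick x₂; the send case is mirrored, i.e., contravariant in the sent resources.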
Imperative Channels
Because our session channels create new pointers at each step, they return new channels, and are thus inconvenient to work with. For that reason, we have our final layer: the imperative channels from §2.4. These channels put a session channel in a mutable reference, so that we can use the same mutable reference throughout, and use mutating operations to change the reference to a new session channel upon send and receive operations. To handle these channels, we introduce a new channel points-to connective c ↣imp p. The specifications for the imperative channels can be found in Fig. 8. We note a couple of differences with respect to the session channels (a sketch of the wrappers follows this list):
• The new_imp operation returns a pair of channels, so the points-to connectives in the postcondition are for the two components of the pair.
• The send operation does not return a value. The new channel points-to in the postcondition refers to the original channel instead.
• The recv operation only returns one value: the message. The channel points-to in the postcondition once again refers to the original channel.
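To make the layering concrete, here is a minimal sketch of all three layers in OCaml 5. This is our illustration, not the paper's HeapLang code: Atomic cells stand in for heap cells, busy-waiting stands in for the receive loop, OCaml's garbage collector replaces the explicit free in recv1, and all names (oneshot, sess, ichan, and so on) are invented for this sketch.

    (* Layer 1: one-shot channels, a single mutable cell. *)
    type 'a oneshot = 'a option Atomic.t

    let new1 () : 'a oneshot = Atomic.make None

    let send1 (c : 'a oneshot) (v : 'a) : unit =
      Atomic.set c (Some v)                    (* publish the value *)

    let rec recv1 (c : 'a oneshot) : 'a =
      match Atomic.get c with
      | Some v -> v                            (* in the paper: also free c *)
      | None -> Domain.cpu_relax (); recv1 c   (* spin until sent *)

    (* Layer 2: session channels; every message carries a fresh
       continuation cell, so send and recv return a new endpoint. *)
    type 'a sess = Sess of ('a * 'a sess) oneshot

    let new_sess () : 'a sess * 'a sess =
      let c = new1 () in (Sess c, Sess c)

    let send (Sess c : 'a sess) (v : 'a) : 'a sess =
      let c' = new1 () in
      send1 c (v, Sess c');                    (* ship value + continuation *)
      Sess c'

    let recv (Sess c : 'a sess) : 'a * 'a sess =
      recv1 c

    (* Layer 3: imperative channels; a mutable reference hides the
       threading of continuations. *)
    type 'a ichan = 'a sess ref

    let new_imp () : 'a ichan * 'a ichan =
      let (a, b) = new_sess () in (ref a, ref b)

    let send_imp (r : 'a ichan) (v : 'a) : unit =
      r := send !r v                           (* step to the continuation *)

    let recv_imp (r : 'a ichan) : 'a =
      let (v, s) = recv !r in
      r := s; v

Note how send_imp and recv_imp have exactly the shape of the specifications above: they return no fresh endpoint, because the reference is updated in place.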
Verifying the imperative channel specifications. To verify the imperative channels we first define a new connective for channel endpoint ownership: the new imperative channel ownership connective c ↣imp p simply lifts the original connective c′ ↣ p to assert ownership of a mutable reference c pointing to a session endpoint c′. With this definition in hand, verifying the specification is straightforward. We simply use the Iris rules for allocating, reading, and updating the reference, along with the specifications for the original channel endpoint ownership, to resolve the operations on the channel.
Because the new channel-points-to is defined in terms of the old one, the results of subprotocols easily lift to the imperative channels.
Verifying the example. We now explain how these specifications can be used to verify the example from Fig. 2. The example starts by allocating a new channel, so we use the specification for new_imp. In order to use this specification, we have to choose the session protocol p; we use the protocol prot_sum (sketched after this paragraph). The protocol prot_sum says that we will first send the pair (n, ℓ) of a number and a location, and the assertion that ℓ ↦ 0. We then continue with the protocol prot_sum′ 0 n, which is recursively defined. Its first argument keeps track of the sum of the messages sent so far, and the second argument keeps track of how many messages we still have to send. When the counter reaches 0, we stop sending and instead receive a unit value, as well as the assertion that ℓ points to the sum of the messages sent.
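The elided prot_sum definition can be reconstructed along these lines (our sketch; in a fully formal version the location ℓ would be threaded through the recursive definition rather than left free):

    prot_sum ≜ ! ((n, ℓ) : Z × Loc) ⟨(n, ℓ)⟩{ℓ ↦ 0}. prot_sum′ 0 n
    prot_sum′ total 0       ≜ ?⟨()⟩{ℓ ↦ total}. end
    prot_sum′ total (k + 1) ≜ ! (m : Z) ⟨m⟩{True}. prot_sum′ (total + m) k

The first argument of prot_sum′ accumulates the sum sent so far; the second counts down the remaining messages.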
After the channel allocation, we have c₁ ↣imp prot_sum and c₂ ↣imp dual(prot_sum). We verify the first interaction using the first step of prot_sum. We prove the loops correct using induction: the main thread does induction on 100, and the child thread does induction on the received message (which will be 100, but the child thread does not know this). After the final synchronization, the ownership over ℓ has been transferred back to the main thread. According to the protocol, the location points to the value 1 + 2 + · · · + 100, which is equal to 5050 by mathematical reasoning.
As the reader can see, the reasoning about the pointer structure of the buffers is completely encapsulated in the higher-level session specifications. The nondeterminism present due to the asynchronous semantics of the send operation does not need to be reasoned about explicitly: although the depth of the linked-list buffer changes non-deterministically according to the thread scheduling of the sends and receives, the proof does not explicitly reason about this at all.
GUARDED RECURSION
As we have seen in the example in §3.4, we can already create some recursive protocols by employing recursion over natural numbers (or other inductively defined data types in Coq). Recursion over natural numbers lets us verify the example from Fig. 2, where one side sends a number n and then sends n further messages. Although recursion on inductive types is powerful, it does not allow us to create protocols for truly infinite interactions with services that run forever. We can create protocols that support truly infinite interactions with Iris's operator for guarded recursion.
Iris models guarded recursion via step-indexing [Appel and McAllester 2001; Ahmed 2004], meaning that separation logic propositions iProp are internally monotone predicates over a natural number n, the step index. Intuitively, the meaning of such a proposition is given by taking the limit to ever higher step indices. This allows us to model infinite protocols as step-indexed protocols of unboundedly increasing depth. Iris does not expose the step index to the user of the logic, so we cannot define protocols by direct recursion over n. Instead, Iris provides a logical account of step-indexing [Appel et al. 2007; Dreyer et al. 2011] through the later modality ⊲ [Nakano 2000], and a guarded recursion operator μx. t for constructing recursive predicates. The body t must be contractive, in the sense that recursive occurrences of x in t must only occur under a later ⊲. This ensures that creating such a recursive predicate does not result in any logical paradoxes. Our protocols Prot ≜ (Send | Recv) × (Val → iProp) contain separation logic predicates over values, so we can make direct use of Iris's guarded recursion mechanism to define recursive protocols.
The reader may have noticed that we have already inserted the later modality ⊲ in certain places in our definitions, such as in the definition of base (§3.2). This is to make sure that base is contractive in the protocol tail, which in turn means that !⟨v⟩{P}. p and ?⟨v⟩{P}. p are contractive in p.
We are therefore able to take guarded fixpoints of protocols, to create unbounded or infinite protocols, such as the recursive variant of prot_add sketched after this paragraph. A second component of guarded recursion is Iris's support for Löb induction. Löb induction allows us to verify unbounded or infinitely recursive programs that use recursive protocols. Ordinary induction only gives us an induction hypothesis for recursive calls where some measure is decreasing, and hence only works for terminating loops. Löb induction, on the other hand, gives us an induction hypothesis for any recursive call (not necessarily decreasing), but this induction hypothesis will be guarded under a later (⊲). These laters maintain logical consistency, but the resources guarded by them may only be accessed after the next primitive program step. In this manner, Löb induction allows us to verify partial correctness of a program that sends a stream of messages in an infinite tail-recursive loop, by instantiating the channel with such a recursive protocol.
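A plausible reconstruction of the elided recursive protocol, assuming the service updates the reference by adding 2 as in our earlier prot_add sketch (that update step is our assumption):

    prot_add_rec ≜ μ p. ! ((ℓ, x) : Loc × Z) ⟨ℓ⟩{ℓ ↦ x}. ?⟨ℓ⟩{ℓ ↦ x + 2}. p

The recursive occurrence p sits under the two protocol constructors, each of which is contractive in its tail, so the guarded fixpoint is well-defined.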
The recursive protocols combined with Löb induction allow us to verify recursive programs, such as a recursive variant of the prog_add program from §2.3 (sketched after this paragraph). Here, rec f x = e is a recursive function, where the recursive occurrence is bound to f. Verifying the program is straightforward. Notably, the main thread unfolds the recursive protocol prot_add_rec twice to verify its code. The forked-off thread is resolved using Löb induction: it unfolds the recursive protocol once, verifies one iteration, after which it uses the Löb induction hypothesis to verify the recursive call. Similar to Actris [Hinrichsen et al. 2022, §9.1], recursion is not only permitted via the tail p, but also via the proposition P in the protocols ?⟨v⟩{P}. p and !⟨v⟩{P}. p, making it possible to construct recursive protocols such as μp. !⟨c⟩{c ↣ p}. !end. We are allowed to construct such protocols because c ↣ p is contractive in p. Also similar to Actris [Hinrichsen et al. 2022, §6.4], we can use Löb induction to prove that an infinitely recursive protocol is a subprotocol of another. The later modalities (⊲) in the rules for subprotocols (page 17) make it possible to remove a later from the Löb induction hypothesis. The same approach applies to protocols such as μp. !⟨c⟩{c ↣ p}. !end, because the subsumption rule c ↣ p ∗ ⊲(p ⊑ q) −∗ c ↣ q contains a later modality. Making recursion and Löb induction interact properly requires careful placement of later modalities in the definitions of the channel points-to connectives. For example, to prove the subsumption rule c ↣ p ∗ ⊲(p ⊑ q) −∗ c ↣ q for the other forms of channel closure in §5, we need to consider the case that p = end and q ≠ end. We only obtain ⊲ False from ⊲(p ⊑ q), instead of an immediate contradiction (⊲ False is not equivalent to False). Due to the later modalities in the definition of c ↣ p, however, ⊲ False is sufficient to complete the proof.
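For illustration, a recursive service in this style can be written against the imperative-channel sketch given earlier (the names add_service, recv_imp, and send_imp, and the + 2 step, are ours):

    (* An infinite add-2 service over prot_add_rec; partial correctness
       of the non-terminating loop is what Löb induction buys us. *)
    let rec add_service (c : int ref ichan) : unit =
      let l = recv_imp c in   (* receive a reference *)
      l := !l + 2;            (* perform the update the protocol promises *)
      send_imp c l;           (* hand the reference back *)
      add_service c           (* recurse forever *)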
SELF-DUAL END
In the preceding sections, we had separate close and wait operations, with dual !end and ?end protocols. In this section we investigate alternative operations to deallocate or close a channel, which result in a self-dual end protocol. We have two different options for achieving this:
• Symmetric close. Define one close operation, with protocol end, that both sides call, and which dynamically determines who deallocates the channel (§5.1).
• Send-close. Define a combined send-close operation that sends the last message and closes the channel. The other side performs a recv that obtains no continuation channel (§5.2).
Symmetric Close
Suppose that we want only one sym_close operation, that both sides of the channel call. Because the channel consists of one memory location, we need to dynamically decide which caller gets to free the memory. We use compare-and-swap to achieve this effect:

    sym_close c ≜ if CAS(c, None, Some ()) then () else free c

To see how this works, consider two parallel close operations on the same channel: sym_close c ∥ sym_close c. The thread that does its CAS first will successfully set c from None to Some (), and return () from its sym_close. The second thread will then fail its CAS, since the value stored in c is no longer None. It will then go to the else branch and free c.
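In the OCaml sketch from earlier, sym_close can be mimicked as follows (OCaml is garbage-collected, so the free in the else branch degenerates to a no-op; the comments mark where deallocation would happen):

    let sym_close (c : unit oneshot) : unit =
      if Atomic.compare_and_set c None (Some ()) then
        ()   (* first closer: mark the cell and return *)
      else
        ()   (* second closer: here the cell would be freed *)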
To verify this version of close, we need to make a change to our notion of protocols. So far, our protocols have all been one-shot protocols p ∈ Prot ≜ (Send | Recv) × (Val → iProp) under the hood; even the protocols !end, ?end ∈ Prot. For the symmetric sym_close, this does not work. We now have to explicitly distinguish end in the protocols:

    p ∈ Prot_end ::= end | q        where q ∈ Prot

We also need to extend duality with dual(end) ≜ end and subprotocols with end ⊑ end.
With this additional protocol, we have the following specification for sym_close:
Close operation:

    {c ↣sym end} sym_close c {True}

Because our set of protocols has been extended, we need an extended channel points-to c ↣sym p, defined analogously to the previous one. The channel invariant now distinguishes three states of the shared cell ℓ (reconstructed here from the description that follows):

    (ℓ ↦ None)                          no one has closed
    ∨ (ℓ ↦ Some () ∗ (tok₁ ∨ tok₂))     one side has closed
    ∨ (tok₁ ∗ tok₂)                     both sides have closed

Like the one-shot send-receive protocol, this invariant uses two tokens tok₁ and tok₂, which belong to the two c ↣sym end assertions. Initially, the invariant states that the location ℓ points to None. When one side has successfully closed, the invariant states that ℓ points to Some (), and the invariant has collected the token of the side that called close first (because this is nondeterministic, the invariant uses a disjunction tok₁ ∨ tok₂). When both sides have closed, the invariant holds both tokens and no memory points-to (because the memory location has been deallocated). As before, we add later modalities (⊲) in front of the equation c = ℓ and in front of tok₁, to support infinite protocols via guarded recursion (§4). With these definitions, we can prove the Hoare specification for the symmetric sym_close in a similar way to how we verified send1 and recv1.
Send-Close
From an operational point of view, the previous two methods for channel closing are a tiny bit disappointing, because for the last step, a memory location is allocated but not used to communicate any useful message. In this section we develop a channel closing mechanism where the close operation is integrated with the last message send.
This may sound strange at first sight, but upon investigating how channel closing typically works in examples, it hopefully starts to make more sense. Consider an example where party A is communicating a stream of messages to another party B, and A may at every point decide to end the stream. This can be accomplished by sending an additional Boolean along with each message, which determines whether this is the last message or not. When it is the last message, the sender does not allocate a continuation channel, and sends () in place of the continuation channel. When the receiver receives a message, they have to inspect the Boolean to determine whether they got a continuation channel or not. This saves one memory allocation and synchronization compared to the previous methods. Similarly, in the example of Fig. 2, we can eliminate the last interaction and synchronization by integrating the final acknowledgment with the closing of the channel.
While this saving is minor, we argue in favor of it for aesthetic reasons. If one wants to implement the one-shot API on top of the previous session channel API (i.e., the other way around compared to what we have done so far), then a single-shot communication would involve one real communication and then one extra allocation and communication to close the channel. We now present a channel closing mechanism with which one can implement one-shot channels on top of session channels with no additional synchronizations or allocations. Therefore, with this channel closing mechanism, session channels become a purely logical layer over one-shot channels. The implementation of this closing mechanism is very simple, namely a send_close operation (sketched after this paragraph) whose final send does not allocate a continuation channel. There is no corresponding wait operation for the other side: as send_close simply does not allocate a continuation channel, the other side can use recv, which already deallocates the memory location. For the specification and verification of send_close, we use the same Prot_end protocols:

    p ∈ Prot_end ::= end | q        where q ∈ Prot

We extend duality with dual(end) ≜ end and subprotocols with end ⊑ end. As before, we define a new channel points-to, this time for the send-close version. For the end protocol, the channel points-to asserts that there is no channel, i.e., the channel is a unit value instead of a pointer to a memory location (this could also be implemented as a null pointer). These are the specifications for the channel operations with send_close:

    Send-close: {c ↣scl (!⟨v⟩{P}. p) ∗ P} send_close c v {True}    where p = end
    Receive:    {c ↣scl (?⟨v⟩{P}. p)} recv c {w. ∃x, c′. w = (v, c′) ∗ c′ ↣scl p ∗ P}

The send operation now requires that the tail p is not end, whereas the send_close operation requires that p is end. The specification of recv does not concern itself with end. Instead, the received message will contain information about whether the protocol ended or not (such as a Boolean, as described previously). Using logical reasoning about the message, we can then conclude whether the tail protocol p is end or not. If it is, then we obtain c′ = (), and we do not need to do anything. If it is not end, we obtain c′ ↣scl p and can continue the protocol.
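Continuing the OCaml sketch, the send-close variant can be modeled by making the continuation slot optional, so that the last send stores no continuation (our encoding; the paper instead sends a unit value in place of the continuation channel):

    (* Session channels with send-close: the payload's second component
       is an optional continuation; None signals the end of the session. *)
    type 'a sess_scl = S of ('a * 'a sess_scl option) oneshot

    let new_scl () : 'a sess_scl * 'a sess_scl =
      let c = new1 () in (S c, S c)

    let send_scl (S c : 'a sess_scl) (v : 'a) : 'a sess_scl =
      let c' = new1 () in
      send1 c (v, Some (S c'));    (* ordinary send: ship a continuation *)
      S c'

    let send_close (S c : 'a sess_scl) (v : 'a) : unit =
      send1 c (v, None)            (* final send: no continuation cell *)

    let recv_scl (S c : 'a sess_scl) : 'a * 'a sess_scl option =
      recv1 c                      (* None means the protocol has ended *)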
Unlike close with symmetric channel closing from §5.1, the send_close operation has been defined in terms of send1. The proofs of the specifications therefore also follow straightforwardly from the specifications of send1 and recv1, unlike the proofs for symmetric channel closing.
OTHER SUPPORTED FEATURES
In this section, we briefly discuss some other features of our framework. Similar to Actris, we get these features for free by building on top of Iris.

Delegation and channel passing. We support delegation, i.e., sending channels over channels as messages, due to Iris's support for impredicative (i.e., nested) invariants. This allows the channel points-to resource to be used in a protocol such as !⟨c⟩{c ↣ p}. q. This protocol enables us to send a channel as well as its associated channel points-to over another channel, which then allows the receiver to use the received channel at protocol p.

Choice protocols. We support choice protocols, where a thread can choose between multiple different continuation protocols. This can be encoded as a special case of dependent session protocols, where the sender makes the choice by sending a Boolean value, and the continuation protocol is chosen based on that value: p₁ ⊕ p₂ ≜ !(b : B)⟨b⟩{True}. if b then p₁ else p₂.

Shared memory. Channels are not the only way to communicate information between threads: we can also use shared memory directly. We can use all of the features of Iris to reason about shared memory, we can send mutable references as messages over channels (as in Fig. 2), and we can store channels in mutable references.
Locks and shared sessions. We support the combination of locks with channel communication. For instance, we can use a lock to protect a channel endpoint, which can then be used by multiple threads. This is useful for implementing shared sessions, where multiple threads can send and receive messages on the same channel endpoint, which is common in client-server protocols.
MECHANIZATION
The implementations of channels (§2), the proof that they satisfy their separation logic specifications (§3), the different methods for closing channels (§5), and the verification of all the examples have been fully mechanized using the Coq proof assistant [Coq Team 2021], making use of the Iris separation logic framework.
The mechanization follows the layered design as presented in Fig. 1. The layered design allows our proofs to be simpler compared to previous work on Actris [Hinrichsen et al. 2020]. Only the proofs for the one-shot operations new1, send1, recv1 (and the symmetric sym_close) involve concurrent separation logic concepts such as ghost state and invariants. All the other proofs are done on top of these specifications, treating the one-shot operations as a black box.
Our protocol definitions are simple compared to Actris. We do not need to solve an intricate recursive domain equation [Hinrichsen et al. 2022, §9.7]. At no point do we have to reason about more than one cell in the buffer structure; the multi-shot session protocols simply emerge automatically using composition. Despite this simplification to the Actris model, the different extensions such as subprotocols, guarded recursion, and the different forms of channel closing work seamlessly together. For instance, we can show that an infinitely recursive protocol is a subprotocol of another infinitely recursive protocol, by using guarded recursion and Löb induction.
In total, our Coq mechanization consists of less than 1000 lines of Coq code (including the verification of all examples). The mechanization is referenced throughout the paper by dedicated link symbols. The mechanization has also been archived on Zenodo [Jacobs et al. 2023].
RELATED WORK
The origins of our line of work trace back to session types. More directly, our work is inspired by encodings of session types in terms of one-shot synchronization in particular [Kobayashi 2002; Dardha et al. 2017; Jacobs 2022]. Our work is also directly related to dependent protocols and program logics for session protocols. Most notable is the work on Actris [Hinrichsen et al. 2020, 2022], which introduced the notion of dependent separation protocols, which we use to specify our session channels. We go over each of these points in more detail below.
One-shot channels. The encoding of session channels in terms of sequenced one-shot channels originated in the π-calculus. This encoding sends a continuation channel in each message, so that the communication can continue. Kobayashi [2002] showed that session types can be encoded into π-types, and Dardha et al. [2012, 2017] later extended Kobayashi [2002]'s approach. Jacobs [2022] presented a bidirectional version in a λ-calculus.
Similar one-shot primitives have also been used in the implementation of message passing libraries, such as in the work of Scalas and Yoshida [2016]; Padovani [2017]; Kokke and Dardha [2021]; Niehren et al. [2006]. Our implementation of session channels in terms of one-shot channels uses a similar strategy.
Unlike this earlier work, which is either untyped or type-based, we use session protocols in separation logic to verify (partial) functional correctness. Our one-shot channels are not primitive and not built in to the language, but implemented in terms of low-level memory operations. We take inspiration from the preceding work and subsequently build session channels on top of one-shot channels, and we build session protocols on top of one-shot protocols.
Dependent protocols and session logics. Bocchi et al. [2010] and Toninho et al. [2011] both developed versions of (multi-party) session types which incorporate logical binders into the protocols, alongside a first-order decidable assertion language for specifying properties about them. Later, Toninho and Yoshida [2018] and Thiemann and Vasconcelos [2020] expanded on this work by allowing similar binders to determine the structure of the remaining protocol, similar to what we do in §2.4. Compared to our work, their assertion languages are limited in the sense that they cannot describe the delegation of resources (e.g., sending a reference to another thread). Later work [Craciun et al. 2015; Costea et al. 2018] addressed the issue of specifying resource delegation through the development of a session logic based in separation logic. Their logic allows ascribing channel endpoints with protocols, which in turn can specify resources to be shared, such as other channel endpoints. Compared to our work, they do not support binders, which for one means that they cannot specify protocols referring to dynamically allocated references, as we do in §2.2. Actris protocols support binders, delegation, and protocols referring to dynamically allocated references and ghost resources [Hinrichsen et al. 2020], as our protocols do.
Actris. Actris introduced a shared-memory implementation of higher-order session channels, and the notion of dependent separation protocols for the verification of message passing concurrency using program logics, mechanized on top of Iris. Our work focuses primarily on developing a framework in the style of Actris, but with a focus on layered design, elegance, and simplicity. This results in the following key differences between Actris and our work:
• Actris channels implement bi-directional communication using a pair of buffers that are protected by a lock. Our one-shot channels are implemented directly using load and store memory operations, and our session channels and imperative channels are implemented in terms of one-shot channels.
• As a result of this, Actris's dependent separation protocols are defined by solving an intricate recursive domain equation. By contrast, our definition of Prot ≜ (Send | Recv) × (Val → iProp) is itself non-recursive, yet Actris-style dependent separation protocols can be defined as inhabitants of Prot, and automatically support recursive protocols.
• Our notion of subprotocols for one-shot channels is very simple and non-recursive, but automatically lifts to (recursive) session protocols, because session protocols are defined as one-shot protocols. Actris's notion of subprotocols is recursive and more complicated than ours, but also stronger: Actris's implementation of channels with a pair of buffers admits swapping sends over receives (akin to asynchronous subtyping [Mostrous et al. 2009; Mostrous and Yoshida 2015]). Such a transformation is not sound for our single-buffer implementation of channels.
• We achieve a simpler approach by making use of nested invariants, but Actris's solution gave rise to the "Actris ghost theory" [Hinrichsen et al. 2022, §9.4] for reasoning about session protocols in a way that is disconnected from specific implementations. The Actris ghost theory has been used to develop specifications based on dependent session protocols for distributed systems [Gondelman et al. 2023].
• Actris contains a number of convenience features, such as multi-binders and associated tactics, to ease verification of message passing programs in Coq. While such features can be integrated in our Coq development, we preferred to keep the protocols (and verification thereof) simpler, to focus on the layering of channel variants. Even so, our single binders can simulate multi-binders using tuples, as has been demonstrated throughout the paper.
• While Actris relies on a garbage collector for channel deallocation, we present several manually memory-managed solutions for channel closing.
In short, Actris has more features (asynchronous subtyping, ghost theory) and a more convenient implementation in Coq (multi-binders, tactics), but our design achieves the key feature of Actris (dependent separation protocols) in a conceptually simpler and layered manner: once we have defined and verified one-shot channels (which are quite simple and require only the simplest form of ghost resources to verify), we treat them as a black box and develop Actris-style protocols with relative ease and without any further use of ghost state or invariants. An application of Actris is the verification of the soundness of a session type system via the method of semantic typing [Hinrichsen et al. 2021]. Since our separation logic specifications for session channels are the same as Actris's, a similar result could be achieved with our development.
Imperative session channels. Related to §2.4, there has also been work on type systems for imperative channels, which free the user from having to thread channel variables through their program [Saffrich and Thiemann 2022a,b]. The advantage of a type system compared to a program logic is that type checking is automatic, but an advantage of a program logic is its ability to verify functional correctness. Hinrichsen et al. [2021] combine advantages of both approaches via the method of semantic typing in Iris, which allows one to combine separation logic verification for intricate parts of the program with type checking for the rest.
Fig. 1. Layered design of our development.
Fig. 5. Separation logic specifications for one-shot channels.
Fig. 6. The channel invariant and channel points-to definition.
Graphene Oxide Topical Administration: Skin Permeability Studies
Nanostructured carriers have been widely used in pharmaceutical formulations for dermatological treatment. They offer targeted drug delivery, sustained release, improved biostability, and low toxicity, usually presenting advantages over conventional formulations. Due to its large surface area, small size and photothermal properties, graphene oxide (GO) has the potential to be used for such applications. Nanographene oxide (GOn) presented average sizes of 197.6 ± 11.8 nm and a surface charge of −39.4 ± 1.8 mV, being stable in water for over 6 months. 55.5% of the mass of a GOn dispersion (at a concentration of 1000 µg mL−1) permeated the skin after 6 h of exposure. GOn dispersions have been shown to absorb near-infrared radiation, reaching temperatures up to 45.7 °C, within the mild photothermal therapy temperature range. Furthermore, GOn in amounts superior to those which could permeate the skin was shown not to affect human skin fibroblast (HFF-1) morphology or viability after 24 h of incubation. Due to its large size, no skin permeation was observed for graphite particles in aqueous dispersions stabilized with Pluronic P-123 (Gt–P-123). Altogether, for the first time, GOn's potential as a topical administration agent and for the delivery of photothermal therapy has been demonstrated.
Introduction
Skin diseases are one of the leading causes of global disease burden, affecting millions of people worldwide. In the United States of America (USA), nearly 85 million people are seen by a physician for at least one skin disease every year. This leads to an estimated direct health care cost of USD 75 billion and an indirect lost opportunity cost of USD 11 billion. Further, mortality was noted in half of the 24 skin disease categories. The costs and prevalence of skin disease are comparable with or exceed those of other diseases with significant public health concerns, such as cardiovascular disease and diabetes. GO allows surface functionalization and coupling with other molecules, such as chemotherapeutic drugs or photosensitizers, which makes possible its utilization as a drug carrier. Thus, several biomedical applications of GO have also been studied, including biosensing/bioimaging, drug delivery, antibacterial applications, and cancer photothermal therapy [34][35][36][37][38].
Beyond their polarity, materials' size is of key importance in biomedicine. Considering that biological systems such as membranes and protein complexes are natural nanostructures, the utilization of nanomaterials has a clear advantage in the interaction with these structures, making cellular uptake, penetration into blood vessels and renal clearance possible [39]. Thus, the successful application of GO in the biomedical field requires size reduction to the nanoscale.
The administration of nanographene oxide (GOn) in in vivo models to test the efficacy of these materials as platforms for the treatment of cancer or infections is generally done by intravenous or intratumoral injection [40][41][42][43][44]. However, these approaches present some disadvantages, since they are invasive procedures that are more susceptible to triggering adverse reactions [45,46]. Thus, the topical application of GOn to treat skin diseases, including skin cancer, local infections, or other diseases for which treatment can be delivered through this route, is positioned as an interesting approach, since it is a non-invasive procedure that allows a localized material distribution, preventing systemic side effects [45][46][47][48][49].
In view of these aspects, for the first time to our knowledge, the permeability of human skin to water dispersions of single-layer GO with nanometric lateral dimensions (GOn) and of micrometric graphite stabilized with Pluronic P-123 (Gt-P-123) has been studied. The influence of the materials' lateral dimensions and exfoliation procedure on skin permeation is also discussed. Finally, GOn's photothermal therapy potential and biocompatibility were evaluated.
GOn Dispersions Production
Graphite oxide (GtO) was produced by Gt oxidation (size ≤ 20 µm, Sigma Aldrich, St. Louis, MO, USA) using the modified Hummers method, as described elsewhere [18,50]. Briefly, 4 g of graphite was added to a mixture of 40 mL of phosphoric acid (H3PO4, Chem-Lab, Zedelgem, Belgium) and 160 mL of sulfuric acid (H2SO4, VWR, Frankfurt, Germany) under stirring, and cooled using an ice bath. Then, 24 g of potassium permanganate (KMnO4, JMGS, Odivelas, Portugal) was added gently under stirring. Subsequently, 600 mL of H2O was slowly added, controlling the temperature using an ice bath. Finally, hydrogen peroxide (H2O2, 26.5 mL, VWR, Frankfurt, Germany) was added and the mixture was left to rest overnight. Afterwards, the solution was decanted to separate the solid phase from the acidic solution, centrifuged at 4000 rpm for 20 min and redispersed in distilled water. The process was repeated until the pH of water was achieved in the supernatant. The pellet was recovered, redispersed in distilled water and sonicated for 8 h using a high-power ultrasonic probe (UIP1000hd, Hielscher Ultrasonics GmbH, Teltow, Germany) to simultaneously exfoliate the GtO and break up the sheets to lateral sizes close to a hundred nanometers, yielding the final product, nanographene oxide (GOn), at a concentration of 7 mg mL−1, which was further diluted for testing.
Optical Microscopy
Gt-P-123 dispersions at a Gt concentration of 1000 µg mL −1 were placed in a 48-well cell culture plate (500 µL) and observed under an inverted optical microscope (CKX41, Olympus, Tokyo, Japan) coupled with a digital camera (SC30, Olympus, Tokyo, Japan).
Transmission Electron Microscopy
GOn sheets' morphology and dimensions were evaluated using transmission electron microscopy (TEM, JEOL JEM 1400, Tokyo, Japan). An amount of 10 µL of GOn dispersed in water (50 µg mL−1) was placed on a carbon-coated TEM grid and left to stand for one minute. The surplus of the dispersion was removed by capillarity using filter paper. GOn lateral dimensions were measured from several different TEM images using ImageJ 1.53a software [51].
Dynamic Light Scattering and Zeta Potential Measurements
The size of GOn particles, polydispersity index (PDI) and zeta potential values were assessed using a Zetasizer (Nano-ZS, Malvern Instruments, Malvern, UK) by dynamic light scattering (DLS) and electrophoretic light scattering (ELS). GOn (25 µg mL−1) was tested using a disposable Zetasizer cuvette (Malvern Instruments, Malvern, UK), at room temperature and pH 6. Measurements were done in triplicate and results are presented as the average and standard deviation.
Ultraviolet-Visible Spectroscopy
Absorption spectra in the range of 200-850 nm for GOn, Gt-P-123, and P-123 (only) were obtained using a spectrophotometer (Lambda 35 UV/Vis, Perkin-Elmer, MA, USA). Samples at a concentration of 25 µg mL−1 were analyzed in a 50 µL quartz cuvette (Hellma Analytics, Müllheim, Germany) with a 10 mm light path length. All measurements were subjected to baseline correction using water as a blank control, at room temperature.
Human Samples
Human skin samples with a thickness of 0.8 mm were obtained from one healthy woman subjected to abdominal surgery in the Department of Plastic Surgery of the São João Hospital (Porto, Portugal). A written informed consent form was provided to the donor, and the Bioethics Committee of the São João Hospital approved the experimental protocol (protocol code: 90_17). The skin sample was washed with ultrapure water, after which the hair and subcutaneous adipose tissue were removed using scissors. The skin was kept at −20 °C wrapped in aluminum foil until use [52], as recommended by the European Centre for the Validation of Alternative Methods, the International Programme on Chemical Safety and the EU Scientific Committee on Consumer Products.
Skin Permeation Assays
Human skin permeability to Gt-P-123 and GOn was assessed using 9 mm clear jacketed Franz diffusion cells with a flat ground joint, 0.785 cm2 of permeation area, and a 5 mL receptor compartment (PermeGear, Inc., Hellertown, PA, USA).
The skin previously prepared was mounted in the Franz cells with the stratum corneum (SC) facing the donor compartment. The receptor chamber was filled with 0.1 M phosphate buffer (PBS) at pH 7.4, and maintained at 37 °C under stirring at 300 rpm, ensuring sink conditions. Afterwards, 500 µL of Gt-P-123 (1000 µg mL−1) or GOn dispersions (300, 400, 500, and 1000 µg mL−1) were added to the donor compartment, which was sealed with paraffin film to ensure occlusive conditions, in order to prevent loss of sample from the surface of the skin and to keep the human skin hydrated during the assay [53]. Then, at 1, 2, 3, 4, 5 and 6 h, a receptor medium aliquot (100 µL) was recovered to determine, by absorbance, the amount of material that had permeated through the skin. The same volume of PBS was then re-added to the same compartment. A calibration curve for each material was prepared to extrapolate Gt-P-123 or GOn concentrations in the receptor compartment. The materials' permeated mass was obtained by multiplying the sample concentration by the volume of the receptor compartment. Results are presented as the cumulative mass and percentage of material that permeated through the skin. All assays were performed in triplicate.
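As a worked note on this arithmetic (ours, not the authors': the text does not say whether the small dilution from replacing each 100 µL aliquot with fresh PBS was corrected for), the cumulative mass and percentage at each sampling time follow directly from the measured receptor concentration:

    m_permeated = C_receptor × V_receptor        % permeated = m_permeated / (C_donor × V_donor) × 100

with V_receptor = 5 mL and V_donor = 0.5 mL. For example, for GOn at 1000 µg mL−1, 500 µL × 1000 µg mL−1 = 500 µg was applied, so a permeated mass of 276.7 µg corresponds to approximately 55.3%.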
GOn Photothermal Therapy Potential
In order to evaluate the ability of GOn to convert light into heat, 500 µL of the dispersion at different concentrations (300-1000 µg mL−1) was placed in a 48-well cell culture plate. A control was performed by filling wells with water only. Sample irradiation was performed using a LED-based source with 150 mW cm−2 of irradiance and a peak emission in the NIR region (810 nm) [54]. The samples' temperature increment induced by irradiation was monitored for 30 min using a type K thermocouple (Hanna Instruments, Póvoa de Varzim, Portugal) placed at half-height and centered in the liquid. Assays were performed in 3 different experiments, with 3 replicates for each condition, and results are reported as the average and standard deviation of the absolute temperature. For the cell assays described below, cells were kept at 37 °C in a humidified atmosphere with 5% CO2.
Resazurin Assay
The effect of GOn on HFF-1 cells' viability was assessed using different material amounts (180-600 µg/well, corresponding to 300-1000 µg mL−1); each well has an area of 0.91 cm2. Cells at a density of 1 × 10⁴ cells/well were seeded in 48-well cell culture plates and incubated for 24 h at 37 °C and 5% CO2. Afterwards, the cell medium was removed and GOn dispersions were added in a final volume of 600 µL per well (in complete DMEM). After 24 h of incubation, cell viability was quantified using the resazurin assay. GOn dispersions were removed, cells were washed 3 times with PBS and then incubated at 37 °C and 5% CO2 for 2 h with 10% (v/v) resazurin (Sigma-Aldrich, St. Louis, MO, USA) previously prepared in cell culture medium. The supernatant fluorescence (λex/em = 530/590 nm) of each well was determined using a Synergy Mx microplate reader (Bio-Tek Instruments, VT, USA). Positive and negative controls for decreased cell viability were performed by incubating HFF-1 cells with 10% (v/v) dimethyl sulfoxide (DMSO) in complete DMEM and with complete DMEM only, respectively. Results for each condition were normalized to the negative control (cells in complete DMEM only) and reported as % of the control. Three independent experiments were performed, with six replicates for each condition.
Live/Dead Assay
The effect of GOn on cell morphology and viability was evaluated by performing a live/dead assay. Cells were seeded and exposed to GOn as described for the resazurin assay. After 24 h, cells were washed 3 times with PBS and incubated with calcein (1 µM) and propidium iodide (2 µg mL−1) in PBS for 15 min at 37 °C in the dark. Then, cells were washed twice with PBS and analyzed using an inverted fluorescence microscope (Axiovert 200, Zeiss, Jena, Germany).
Statistical Analysis
Statistical analyses were performed using GraphPad Prism software version 8.4.2 (GraphPad Software, San Diego, CA, USA). One-way analysis of variance (ANOVA) with Tukey's test for multiple comparisons was performed. Differences between experimental groups were considered significant whenever p < 0.05.
Gt and GOn Dispersions Physico-Chemical Characterization
Graphite (Gt) was dispersed by sonication in water; however, it precipitated due to its large size (≤20 µm) and hydrophobicity. Therefore, it was stabilized with Pluronic P-123 (P-123), a non-ionic surfactant composed of poly(ethylene oxide) and poly(propylene oxide) blocks [55]. Nanosized GO (GOn) was produced by Gt oxidation and exfoliation using a modified Hummers method, followed by high-power ultrasonication. Figure 1 shows Gt, Gt-P-123, and GOn aqueous dispersions at a concentration of 1000 µg mL−1.
The presence of P-123 at a concentration of 0.5% (w/v) stabilized Gt in water, allowing the attainment of homogeneous blackish dispersions without the formation of any precipitate. Such dispersions are stable for a 12 h period, after which sedimentation becomes visible; however, it is possible to easily redisperse them by manual shaking. GOn water dispersions presented a typical brownish appearance and good stability, with a shelf-life of at least 6 months (the longest observation period tested).
Gt-P-123 water dispersions were observed by optical microscopy (Figure 2), revealing small particles with sizes from a few µm up to large agglomerates of 200 µm.

The morphology and size of GOn nanosheets were evaluated by TEM. Figure 3 shows that the high-power sonication size reduction method yielded well-exfoliated GOn single-layer particles with an average size of 190 ± 144 nm. Furthermore, 70% of the particles measured presented sizes below 200 nm (Figure 3B), 90% were under 290 nm, and all particles measured presented lateral sizes below 450 nm. In addition, no agglomeration was observed (Figure 3A), confirming the good dispersibility and degree of exfoliation of the GOn particles.

Table 1 shows the particle size, polydispersity index (PDI), and surface charge of GOn measured using a Zetasizer by DLS and ELS, respectively. Gt-P-123 dispersions presented a particle size too large to be analyzed using a Zetasizer; however, their size was already clearly observed in the optical microscopy images (Figure 2), and this is a commercial material whose particle size has already been described. GOn presented hydrodynamic diameters of 197.6 ± 11.8 nm, with a PDI of 0.396 ± 0.013. The average particle size determined by DLS for GOn is consistent with the TEM measurements. The smaller size distribution range observed might be due to particles being stabilized and folded by intraplanar hydrophilic interactions when well dispersed in water, as opposed to when adsorbed on the TEM grid's surface [56]. The surface charge was −39.4 ± 1.8 mV, a high value that explains the excellent aqueous dispersion stability visually observed for more than 6 months for this material [57].

The absorbance spectra of GOn, Gt-P-123, and P-123 were determined by UV/Visible spectroscopy (Figure 4). The GOn spectrum presented an absorbance peak at λmax = 230 nm, attributed to π-π* electronic transitions in sp2 clusters, and a shoulder peak at 300 nm, corresponding to n-π* transitions of free electron pairs in oxygen atoms in C=O bonds from carboxyl and carbonyl groups [51]. Gt-P-123 presented a typical spectrum for graphitic materials, with peaks at 223 and 273 nm [58]. Only residual absorbance was detected for the Pluronic-only sample at the same concentration used to stabilize Gt in water.
Table 1. Size, surface charge and polydispersity index of GOn aqueous dispersions diluted at a concentration of 25 µg mL−1 and pH 6 (n = 3).

Material | Size (nm)    | Polydispersity Index | Surface Charge (mV)
GOn      | 197.6 ± 11.8 | 0.396 ± 0.013        | −39.4 ± 1.8
Skin Permeability of GOn and Gt-P-123
The permeation through human skin of GOn and Gt-P-123 was evaluated by immobilizing the skin samples between the donor and receiver compartments of Franz cells (Figure 5A). The donor compartment was filled with 500 µL of GOn (300-1000 µg mL−1) or Gt-P-123 (1000 µg mL−1). Samples were collected from the receptor compartment every hour for 6 h. The amount of material that permeated the skin was quantified by UV/Visible spectroscopy. GOn and Gt-P-123 concentrations were determined from the absorbance values at the wavelengths corresponding to the maximum absorption peaks in their spectra (230 nm for GOn and 223 nm for Gt-P-123), by matching the absorbance values obtained with calibration curves prepared with a range of known concentrations of both materials.
Gt-P-123 was not detected in the receptor compartment even after 6 h, indicating that it cannot permeate through the skin sample. This reaffirms the relevance of reaching nanometric size to achieve and maximize skin permeation of nanoparticles [59]. This material has therefore no use as a possible vehicle for drug delivery or phototherapy in skin diseases, and was not further characterized.
Results for GOn skin permeation are presented in Figure 5B. GOn was capable of permeating across the skin in a time-dependent manner for all concentrations tested. It is relevant to notice that, besides presenting a lateral size below 200 nm, GOn is formed by a single layer of carbon atoms, therefore presenting a very low thickness and high flexibility, which facilitates transport through skin. On the other hand, Gt is composed of numerous stacked graphene layers.
Skin permeability of GOn at concentrations of 300, 400, 500, and 1000 µg mL−1 was evaluated every hour for a period of 6 h (Figure 5B). The cumulative percentage of GOn that permeated from the donor to the receptor compartment was found to decrease as concentration increased. After 6 h in contact with the skin, 55.3, 91.4, 99.3 and 99.8% of the GOn placed in the donor compartment at 1000, 500, 400 and 300 µg mL−1, respectively, had reached the receptor compartment. For high concentrations, permeation is hindered by the deposition of larger particles and agglomerates over time.
After 1 h of the experiment, the percentages of GOn at 1000 and 500 µg mL−1 in the receptor compartment were 18.4 and 20.2%, respectively. After 3 h, the permeation values were 42.6 and 54.0%, in the same order. After 4 h, the permeation of GOn at 1000 µg mL−1 started to stabilize due to surface deposition and agglomeration; the values observed were 49.0% and 61.6% for GOn at 1000 and 500 µg mL−1, respectively. After 6 h, the percentage of permeation obtained at 1000 µg mL−1 was 1.65-, 1.79- and 1.80-fold lower than when using GOn at 500, 400 and 300 µg mL−1. However, using GOn at 1000 µg mL−1 allowed a higher absolute mass of material to reach the receptor compartment (276.7 µg of the 500 µg initially applied), when compared to GOn at 500 (228.5 µg), 400 (198.6 µg) and 300 µg mL−1 (150 µg).
GOn Photothermal Therapy Potential
Since GOn particles' ability to permeate through human skin has been demonstrated, they might have the potential to be used in dermatological applications, such as photothermal therapy of skin cancer [13][14][15][16]60]. For that reason, the ability of GOn (300-1000 µg mL−1) to convert NIR light into thermal energy was evaluated (Figure 6). GOn heating by NIR light irradiation was demonstrated to be concentration- and time-dependent. At a concentration of 300 µg mL−1, GOn reached temperatures of 36.0 and 40.2 °C after 15 and 30 min of irradiation, respectively. For 400 and 500 µg mL−1, similar values of around 38 and 42 °C were obtained after 15 and 30 min. Finally, at a concentration of 1000 µg mL−1, GOn dispersions reached temperatures of 40.3 °C and 45.7 °C after 15 and 30 min of NIR irradiation, respectively. These values corresponded to an increment of up to 10 °C in relation to water only (control). Therefore, GOn dispersions were confirmed to be effective agents to induce a temperature increase within the mild photothermal therapy temperature range, which has been reported to induce death of skin cancer cells [54,60]. The concentrations and times applied can be adjusted according to the specific patient and desired treatment.
In Vitro Biocompatibility of GOn
Since GOn has the potential to be used for applications such as skin cancer phototherapy and topical drug delivery [13][14][15][16]54,61], it is important to ensure that the particles used are non-toxic towards healthy skin cells. For that reason, human foreskin fibroblasts (HFF-1) were incubated with increasing concentrations (300-1000 µg mL−1) of GOn for 24 h, and cell viability was assessed through the resazurin assay (Figure 7).
Conclusions
In order to evaluate their potential for dermatological applications, two different carbon materials were studied in terms of physicochemical characteristics and human skin permeation. Graphite particles in aqueous dispersions stabilized with Pluronic P-123 (Gt-P-123) presented sizes between a few and hundreds (agglomerates) of microns. The presence of P-123 at a concentration of 0.5% (w/v) stabilized Gt in water, allowing the attainment of homogeneous blackish dispersions without sedimentation. Such dispersions are stable for 12 h, a period after which they precipitate; however, they can be easily redispersed. Gt-P-123 presented a typical spectrum for graphitic materials, with peaks at 223 and 273 nm. Due to its large size, no skin permeation was observed for Gt-P-123.
Nanographene oxide (GOn) particles presented average lateral sizes of 197.6 ± 11.8 nm and a surface charge of −39.4 ± 1.8 mV, being stable in water dispersion for up to 6 months. GOn spectra presented an absorbance peak at λmax = 230 nm, attributed to π-π* electronic transitions in sp2 clusters, and a shoulder peak at 300 nm, corresponding to n-π* transitions of free electron pairs in oxygen atoms in C=O bonds from carboxyl and carbonyl groups.
GOn was capable of permeating across skin in a time-dependent manner. An amount of 20.3% of the mass of GOn (1000 µg mL−1) put in contact with the skin sample permeated after 1 h, while 55.5% permeated after 6 h. Lower concentrations of GOn (300–500 µg mL−1) presented faster permeation to the receptor compartment; however, the total mass of material that permeated was lower. Furthermore, GOn dispersions were shown to absorb near-infrared radiation, causing the local temperature to reach up to 45.7 °C, within the mild photothermal therapy temperature range. The concentrations and application times can be adjusted according to the specific patient and desired treatment.
Finally, GOn in amounts greater than those that could permeate the skin was shown not to affect human skin fibroblast (HFF-1) morphology or viability after 24 h of incubation.
GOn's potential as a topical administration agent and as a vehicle for photothermal therapy has been demonstrated. This material can also be considered a drug delivery vehicle for drugs used in skin disease, potentially improving drug stability and penetration, allowing reduced therapeutic doses, and avoiding the side effects of systemic therapy and of high topical doses.
Informed Consent Statement:
The experimental protocol for the utilization of skin samples was approved by the Bioethics Committee of the São João Hospital, and a written informed consent form was provided to the volunteer.
Data Availability Statement:
The data presented in this study are available on request from the corresponding author.
Exploring Dysregulated Ferroptosis-Related Genes in Septic Myocardial Injury Based on Human Heart Transcriptomes: Evidence and New Insights
Introduction: Sepsis is currently a common condition in emergency and intensive care units, and is defined as life-threatening organ dysfunction caused by a dysregulated host response to infection. Cardiac dysfunction caused by septic myocardial injury (SMI) is associated with adverse prognosis and has significant economic and human costs. The pathophysiological mechanisms underlying SMI have long been a subject of interest. Recent studies have identified ferroptosis, a form of programmed cell death associated with iron accumulation and lipid peroxidation, as a pathological factor in the development of SMI. However, the current understanding of how ferroptosis functions and is regulated in SMI remains limited, particularly in the absence of direct evidence from the human heart.
Methods: We performed a sequential, comprehensive bioinformatics analysis of human sepsis cardiac transcriptome data obtained through the GEO database. The lipopolysaccharide-induced mouse SMI model was used to validate the ferroptosis features and the transcriptional expression of key genes.
Results: We identified widespread dysregulation of ferroptosis-related genes (FRGs) in SMI based on the human septic heart transcriptomes, explored the underlying biological mechanisms and crosstalks in depth, and then identified key functional modules and hub genes through the construction of a protein-protein interaction network. Eight key FRGs that regulate ferroptosis in SMI (HIF1A, MAPK3, NOX4, PPARA, PTEN, RELA, STAT3 and TP53) were identified, as well as the ferroptosis features. All the key FRGs showed excellent diagnostic capability for SMI, some of them were associated with the prognosis of sepsis patients and with immune infiltration in the septic hearts, and potential ferroptosis-modulating drugs for SMI were predicted based on the key FRGs.
Conclusion: This study provides human septic heart transcriptome-based evidence and brings new insights into the role of ferroptosis in SMI, which is significant for expanding the understanding of the pathobiological mechanisms of SMI and exploring promising diagnostic and therapeutic targets for SMI.
Introduction
Sepsis, one of the leading causes of death in critically ill patients worldwide, is a life-threatening organ dysfunction caused by a dysregulated host response to infection. 1,2 Although the prognosis of sepsis patients has improved with the development of therapeutic measures such as intensive care and antibiotic application, sepsis is still considered a major public health problem with significant health care and social impact due to its high morbidity and mortality. 3–5 The pathogenesis of sepsis is complex and involves not only systemic inflammation but also dysfunction of multiple organs. 1 Cardiac dysfunction caused by septic myocardial injury (SMI) is a common and lethal manifestation of sepsis, and is associated with septic shock and increased mortality. 6,7 Clinical and pathological research related to SMI is progressing worldwide; however, there is still a lack of characteristic biomarkers and precise clinical diagnostic and therapeutic strategies. 8–11 Therefore, it is crucial to elucidate the molecular basis of SMI to identify promising targets for its prevention, diagnosis and treatment. 12 Disruption in iron homeostasis is one of the critical pathological features of sepsis and SMI, and entails increased iron transport and uptake into cells and decreased iron export. 13,14 There is evidence of iron homeostasis disorder in both circulating blood and organ tissues of septic patients, which has been found to be related to the clinical prognosis. 15–17 Since ferroportin is the only known iron exporter in vertebrate cells, cellular iron overload is susceptible to occur when iron homeostasis is disordered. 14 Iron is an important trace element involved in multiple biological processes such as DNA synthesis and energy production. However, the accumulation of unstable iron ions can lead to oxidative damage and cell death when cellular iron is overloaded. 13 Ferroptosis is a form of programmed cell death driven by iron-dependent lipid peroxidation, which has unique morphological, genetic, and biochemical characteristics. 17,18 Recent studies have reported ferroptosis in both in vivo and in vitro SMI models, and inhibition of ferroptosis by small molecule compounds has been found to be protective in SMI models. 19–21 Despite increased research focus on the role of ferroptosis in SMI, the current understanding of its molecular biology is still scattered and unclear; further in-depth exploration is necessary and urgent, and may lead to promising targets for diagnosis and treatment.
Deep transcriptomic analyses based on human pathological tissues help bring insights closest to reality when exploring the molecular biological mechanisms of various diseases, and many such analyses have translated into clinical benefits. 22–25 However, current studies on the mechanisms of ferroptosis in SMI still lack important information from human heart samples. Here, we performed an in-depth analysis of the human septic heart transcriptomes, which identified the variations of ferroptosis-related genes (FRGs) in SMI, and further explored their potential biological functions and pathways. A protein-protein interaction (PPI) network was constructed to identify key functional modules and hub genes. The expression of hub genes and ferroptosis features was then validated in the mouse SMI model, and the diagnostic capability and prognostic relevance of key FRGs for SMI were subsequently evaluated. The expression and distribution of key FRGs were determined through human heart single-cell transcriptome data. Furthermore, we performed immune infiltration correlation analyses of the identified key FRGs in septic hearts, given the tight association between SMI, ferroptosis, and immune infiltration. 8,26–28 Finally, we predicted potential ferroptosis-modulating drugs for SMI based on the drug-target correlation of key FRGs, and performed molecular docking for further exploration.
Data Collection
As described in our previous study, 29 the microarray datasets GSE79962 and GSE54514 were retrieved from the GEO database (https://www.ncbi.nlm.nih.gov/geo/). The transcriptomic data of 11 control hearts (sourced from non-failing donors) and 20 septic hearts (sourced from patients who died from sepsis) were obtained from the GSE79962 dataset, while the transcriptomic data of whole blood samples (within 24 hours of admission to the intensive care unit) from 26 sepsis survivors and 9 sepsis non-survivors were obtained from the GSE54514 dataset. After probe merging and ID conversion, all expression data were log2 transformed and quantile normalized before further analyses.
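Both preprocessing steps named here, log2 transformation followed by quantile normalization, are standard. A minimal pandas/NumPy sketch is given below, assuming `expr` is a probes-by-samples matrix after probe merging and ID conversion.

```python
import numpy as np
import pandas as pd

def quantile_normalize(df: pd.DataFrame) -> pd.DataFrame:
    """Force every sample (column) to share the same value distribution."""
    sorted_vals = np.sort(df.values, axis=0)
    reference = sorted_vals.mean(axis=1)                  # mean distribution
    ranks = df.rank(axis=0, method="first").astype(int) - 1
    out = df.copy()
    for col in df.columns:
        out[col] = reference[ranks[col].values]
    return out

# expr: probes/genes x samples matrix of positive intensities
# expr_norm = quantile_normalize(np.log2(expr + 1.0))
```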
Gene Set Enrichment Analysis (GSEA)
To evaluate the overall correlation between FRGs and septic hearts, GSEA was performed using the Sangerbox online tool (http://vip.sangerbox.com/). 33 Genes in the ferroptosis-associated gene set obtained as mentioned above were scored and ranked by expression value in the GSE79962 dataset. Normalized enrichment score (NES) was calculated and FDR < 0.05 was considered significant.
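The core of GSEA is a weighted running-sum statistic over a ranked gene list. A compact sketch of that enrichment-score calculation is shown below; the NES additionally requires permutation-based normalization, which is omitted, and variable names are illustrative.

```python
import numpy as np

def enrichment_score(ranked_genes, metric, gene_set, p=1.0):
    """GSEA weighted running-sum enrichment score (Subramanian-style).
    ranked_genes: gene IDs sorted by the ranking metric (descending);
    metric: the corresponding values; gene_set: e.g. the FRG set."""
    hits = np.array([g in gene_set for g in ranked_genes])
    if not hits.any():
        return 0.0
    weights = np.abs(np.asarray(metric, dtype=float)) ** p
    p_hit = np.cumsum(np.where(hits, weights, 0.0))
    p_hit /= p_hit[-1]                       # fraction of hit weight seen so far
    p_miss = np.cumsum(~hits) / (~hits).sum()  # fraction of misses seen so far
    running = p_hit - p_miss
    return float(running[np.argmax(np.abs(running))])
```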
Gene Ontology (GO) and Kyoto Encyclopedia of Genes and Genomes (KEGG) Enrichment Analyses
As described in our previous study, 29 the identified DEFRGs were subjected to GO and KEGG enrichment analyses using the clusterProfiler package in the R software. 34 FDR < 0.05 was the criterion for terms significantly enriched by DEFRGs.
PPI Network and Identification of Key Modules and Hub Genes
The STRING database (https://string-db.org/) and Cytoscape software were used to establish and visualize a PPI network of DEFRGs as described in our previous studies. 29 Functional key modules were identified from the PPI network by the MCODE plugin using the K-means clustering algorithm (degree cutoff = 2, node score cutoff = 0.2, K-core = 2). The genes in the PPI network were scored and ranked by the cytoHubba plugin's built-in MCC (Maximal Clique Centrality) algorithm, and the top ten genes were screened as hub genes.
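The MCC score used here has a simple closed form: for each node, it sums (|C| − 1)! over the maximal cliques C containing that node. A small networkx sketch of this ranking follows; the STRING edge list is assumed to be available as `string_edges`.

```python
import math
import networkx as nx

def mcc_scores(g: nx.Graph) -> dict:
    """Maximal Clique Centrality, the metric behind cytoHubba's MCC ranking:
    MCC(v) = sum over maximal cliques C containing v of (|C| - 1)!."""
    scores = {v: 0 for v in g.nodes}
    for clique in nx.find_cliques(g):          # enumerates maximal cliques
        weight = math.factorial(len(clique) - 1)
        for v in clique:
            scores[v] += weight
    return scores

# g = nx.Graph(string_edges)                   # PPI edges exported from STRING
# scores = mcc_scores(g)
# hub_genes = sorted(scores, key=scores.get, reverse=True)[:10]
```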
Animals and Establishment of the SMI Model
Male BALB/c mice (8-10 weeks old) were adopted, provided by Charles River Laboratories. Mice were acclimatized for 1 week at 23±1°C under a 12-hour light/dark cycle, housed with bacteria-free water and food provided ad libitum. Twenty-one mice were used in the control group, while twenty-four mice per group were used in the experimental groups. To induce SMI, the mice were injected intraperitoneally with lipopolysaccharide (LPS, Sigma-Aldrich, USA) (10 mg/kg). The sham-operated controls received an equal volume of PBS. The LPS+Fer-1 group was pre-treated with Ferrostatin-1 (Fer-1, Sigma-Aldrich, USA) (5 mg/kg) 30 min before LPS injection. Twenty-four hours after LPS injection, mice received transthoracic echocardiography to assess heart function. The mice were then euthanized, and the blood and heart samples were collected for subsequent experiments. All animal experiments were approved by the Animal Experimentation Ethics Committee of the First Affiliated Hospital of Nanchang University (ethics number: CDYFY-IACUC-202209QR004), and all laboratory procedures followed the "Laboratory Animals-Guideline of welfare and ethics" of the State Standard of P.R. China for the welfare of animals.
Echocardiography
The mice were anesthetized with 1.5% isoflurane, and cardiac function was evaluated by two-dimensional transthoracic echocardiography using a Vevo2100 imaging system (VisualSonics, Canada). All measurements were performed by an experienced operator blinded to the study.
Measurement of Cardiac Iron Content
The myocardia (with atria removed) were weighed, homogenized and lysed. The iron content in the myocardium was measured using a kit according to the manufacturer's instructions (Pulilai Gene Technology, China). The absorbance was measured at 550 nm using the Spark® multimode microplate reader (Tecan, Switzerland).
Detection of Radical Oxygen Species (ROS)
Fresh isolated hearts were frozen sectioned after OCT embedding. The sections were incubated with a dihydroethidium (DHE) probe (Yeasen, China) at 37°C for 60 min away from light, and imaged under a fluorescence microscope (Nikon Eclipse Ci, Japan) fitted with a digital camera (Nikon DS-U3, Japan).
Transmission Electron Microscopy (TEM) Imaging
Fresh mouse left ventricular myocardium (1 mm × 1 mm × 1 mm) was rapidly harvested. After fixation, washing, dehydration, embedding, sectioning and staining, the ultrastructure of myocardial mitochondria was observed by TEM (Hitachi 7800, Japan), and the Flameng score method was used to evaluate the ultrastructural damage of mitochondria. 35
Western Blot Analysis
Western blotting was performed as described in our previous study. 29 Briefly, the myocardium (with atria removed) was homogenized and lysed, and the extracted proteins were separated by 10% sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE) and transferred to PVDF membranes. After blocking with 5% non-fat dry milk at room temperature for 2 hours, the blots were incubated overnight with primary antibodies against PTGS2 (Proteintech #12375-1-AP, China, 1:1000) and β-actin (Biosharp #BL005B, China, 1:2000) at 4°C. Subsequently, the membrane was incubated with the secondary antibody (Beyotime #A0208, China, 1:2000) for 2 hours at room temperature. The positive bands were visualized using the Ultra High Sensitivity ECL kit (Beyotime, China) and imaged using FluorChem FC3 (ProteinSimple, USA). The blots were densitometrically scanned using the ImageJ software (NIH, USA), and β-actin was selected as the internal reference according to previous studies. 36,37
Quantitative Real-Time PCR (qRT-PCR)
The protocol for qRT-PCR has been described in our previous study. 29 Briefly, myocardia (with atria removed) were homogenized in TRIzol (Invitrogen, USA) to extract total RNA. After removal of genomic DNA by DNase treatment, total RNA was reverse-transcribed into cDNA using RevertAid MM (ThermoFisher Scientific, USA) according to the manufacturer's instructions. qRT-PCR was performed using the Power SYBR Green PCR Master Mix (ThermoFisher Scientific, USA) on a real-time PCR system (StepOnePlus Real-Time PCR System, Applied Biosystems, USA). According to previous studies, 38,39 ACTB levels were used as an internal reference for normalization, and mRNA levels relative to control were calculated according to the 2^−ΔΔCT method. The primers were designed using NCBI Primer-BLAST and synthesized by Sangon Biotech Co. Ltd. (Shanghai, China); the sequences are listed in Table S2.
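The Livak 2^−ΔΔCT calculation referenced here is a one-liner once mean Ct values are in hand. A minimal sketch with hypothetical Ct readings (Ptgs2 normalized to Actb) is shown below.

```python
import numpy as np

def relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Livak 2^-ddCt: target mRNA level relative to the control group,
    normalized to the internal reference gene (ACTB here)."""
    d_ct_sample = np.mean(ct_target) - np.mean(ct_ref)
    d_ct_control = np.mean(ct_target_ctrl) - np.mean(ct_ref_ctrl)
    return 2.0 ** -(d_ct_sample - d_ct_control)

# Hypothetical Ct values: Ptgs2 in LPS hearts vs PBS controls (Actb reference)
print(relative_expression([24.1, 24.3], [17.0, 17.1], [26.9, 27.0], [17.1, 17.0]))
# -> about 6.7-fold upregulation
```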
Receiver Operating Characteristic (ROC) Curve Analyses
To evaluate the diagnostic accuracy of genes, ROC curves were plotted as described in our previous study. 29 The area under the ROC curve (AUC) was used to quantify classification performance. Genes with AUC > 0.6 were considered diagnostic, and those with AUC > 0.8 were considered excellently diagnostic.
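The AUC criterion described above can be computed directly with scikit-learn; a minimal sketch follows, with toy expression values. The orientation flip accommodates down-regulated markers (e.g. PPARA), which separate the septic class with low expression values.

```python
from sklearn.metrics import roc_auc_score

def diagnostic_auc(expression, is_septic):
    """AUC per the thresholds above: >0.6 diagnostic, >0.8 excellent."""
    auc = roc_auc_score(is_septic, expression)
    return max(auc, 1.0 - auc)   # orientation-free for down-regulated markers

# Toy values: one gene's expression in 2 control and 3 septic hearts
print(diagnostic_auc([2.1, 2.3, 3.9, 4.2, 4.0], [0, 0, 1, 1, 1]))  # -> 1.0
```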
Single-Cell Sequencing Analyses
The Single Cell Portal (https://singlecell.broadinstitute.org/single_cell) was availed of to obtain single-cell sequencing data for key FRGs. The single-cell sequencing data of the human fetal heart and adult heart from SCP498 and SCP1021 were used. 40,41
Immune Infiltration Analyses
The Sangerbox platform (http://vip.sangerbox.com/) was used to perform immune infiltration analyses as previously described. 29 The proportion of 22 immune cell species was calculated using the CIBERSORT algorithm based on normalized gene expression data.
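CIBERSORT proper fits a nu-SVR model against the LM22 signature matrix; as a simplified stand-in that illustrates the same linear-deconvolution idea, non-negative least squares can be used. The sketch below assumes a genes-by-cell-types signature matrix and a bulk expression vector; it is not the CIBERSORT implementation itself.

```python
import numpy as np
from scipy.optimize import nnls

def deconvolve(signature: np.ndarray, bulk: np.ndarray) -> np.ndarray:
    """Estimate immune-cell fractions from a bulk expression profile.
    signature: genes x cell-types matrix (LM22-like); bulk: genes vector."""
    coef, _ = nnls(signature, bulk)            # non-negative mixing weights
    total = coef.sum()
    return coef / total if total > 0 else coef  # proportions summing to 1

# fractions = deconvolve(lm22_matrix, septic_heart_profile)
```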
Potential Therapeutic Drug Prediction and Molecular Docking
The potential ferroptosis-regulating drugs for SMI were predicted by the Enrichr online enrichment analysis tool (http://amp.pharm.mssm.edu/Enrichr) based on the protein-drug interaction data from the DSigDB database (http://dsigdb.tanlab.org/). 42,43 The corresponding FDR and the combined score were calculated as described in the study of Avi Ma'ayan et al. 43 FDR < 0.05 and combined score > 10,000 were set as cut-offs for valid drug candidates, while higher combined scores were considered to indicate a stronger interaction between the drug and target genes.
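The combined score reported by Enrichr is, per the cited methodology, the product of the log of the Fisher exact p-value and the z-score of a term's rank deviation from its expected rank. A minimal sketch with hypothetical inputs is shown below.

```python
import math

def combined_score(p_value: float, z_score: float) -> float:
    """Enrichr combined score c = ln(p) * z, where z is the z-score of a
    term's rank deviation from its expected rank."""
    return abs(math.log(p_value)) * abs(z_score)

print(combined_score(1e-12, 8.5))   # hypothetical inputs, ~235
```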
Molecular docking was used to model the combination between drug and target proteins, and to predict the extent of the interaction. The chemical structures of the predicted drugs and target proteins were obtained from the PubChem database (http://pubchem.ncbi.nlm.nih.gov), 44 the RCSB protein data bank database (https://www.rcsb.org), 45 and the AlphaFold protein structure database (https://alphafold.ebi.ac.uk/). 46 After collating the receptor protein and ligand small molecule compounds separately, molecular docking was performed using the CDOCK module in Discovery Studio 2019 software (BIOVIA, USA).
Statistical Analyses
All the statistical analyses not mentioned above were performed using the Sangerbox platform and GraphPad Prism (version 8.0.2, La Jolla, CA). Comparisons between two groups in the transcriptomic analyses that are not mentioned above were performed using the signed-rank test. Ordinary one-way ANOVA was used for comparisons between multiple groups in the animal experiments. The Spearman correlation test was used for correlation analysis. Data are presented as mean ± SD. The Benjamini-Hochberg correction method was applied to adjust the P values for FDR, and statistical significance was considered at FDR < 0.05. P values are shown as indicated: *P < 0.05, **P < 0.01, ***P < 0.001, ****P < 0.0001, ns = not significant.
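The Benjamini-Hochberg adjustment used throughout is short enough to write out; a minimal NumPy sketch follows (statsmodels' multipletests with method='fdr_bh' produces the same output).

```python
import numpy as np

def benjamini_hochberg(p_values):
    """Benjamini-Hochberg adjusted p-values (FDR)."""
    p = np.asarray(p_values, dtype=float)
    m = len(p)
    order = np.argsort(p)
    scaled = p[order] * m / (np.arange(m) + 1)             # p_(i) * m / i
    adjusted = np.minimum.accumulate(scaled[::-1])[::-1]   # enforce monotonicity
    out = np.empty(m)
    out[order] = np.clip(adjusted, 0.0, 1.0)
    return out

print(benjamini_hochberg([0.001, 0.008, 0.039, 0.041, 0.600]))
```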
Overall Study Protocol
The illustrative and detailed protocol of our study is summarized ( Figure 1A and B). All the raw data were normalized before further analyses, as shown in Figure S1.
Widespread Dysregulation of FRGs in the Human Septic Heart Transcriptomes
GSEA is a commonly utilized method to evaluate the overall relevance of specific characteristic gene set in the disease transcriptome to the disease phenotype. 22 To evaluate whether the variation of the FRG set in the human septic heart transcriptome is significant, the GSEA of total FRG set in the GSE79962 dataset was performed. The result of GSEA showed that FRGs were dramatically differentially expressed in the two groups, and the FRG set showed significant positive correlation with the septic heart phenotype compared to the control (FDR < 0.05, NES = 1.489) ( Figure S2). This result suggests that the widespread dysregulation of FRGs is present in the human septic heart transcriptomes as an essential feature, which indicates a functional correlation between ferroptosis and SMI.
Identification of DEGs and DEFRGs
There were significant transcriptomic differences between control and septic hearts, as shown in the clustered heatmap of genes in the top and last 250 of the log2FC ranking ( Figure S3). There were 2316 DEGs identified from the GSE79962 dataset, as listed in Table S3, of which 1031 were upregulated and 1285 were downregulated in the septic hearts ( Figure 2A).
Furthermore, we identified DEFRGs in the GSE79962 dataset, since the GSEA result showed a meaningful association between the FRG set and the septic heart phenotype. Of the 388 FRGs obtained from FerrDB, 71 overlapped with the DEGs and were identified as DEFRGs for subsequent analyses (Figure 2B), as listed in Table S4. These 71 DEFRGs were significantly differentially expressed between control and septic hearts, as shown in the clustered heatmap (Figure 2C). Further analysis of expression correlations between DEFRGs in septic hearts revealed numerous significant correlations, as shown in the correlation heatmap (Figure 2D). For example, HIF1A was significantly positively correlated with SAT1 (r = 0.87), and POR was significantly positively correlated with ELOVL5 (r = 0.84) and negatively correlated with NR1D2 (r = −0.86) and TSC1 (r = −0.82). These extensive correlations between genes suggest that intricate functional networks may exist among the DEFRGs.
GO and KEGG Enrichment Analyses
To explore the functions and related pathways of the DEFRGs, we performed GO and KEGG enrichment analyses. The GO enrichment analyses showed that the DEFRGs are mainly involved in the biological processes (BP) of cellular iron ion homeostasis and response to nutrient, oxygen, and chemical stress (Figure 3A). Furthermore, the cellular components (CC) to which the DEFRGs localize include the apical part, basal plasma membrane, and autophagosome or autolysosome (Figure 3B). Among the molecular functions (MF), the DEFRGs were associated with the binding of DNA-binding transcription factors, ubiquitin protein ligase, transcription coregulators and coactivators, and heat shock proteins, and with acyltransferase and oxidoreductase activities (Figure 3C). The KEGG enrichment analysis showed that the DEFRGs are mostly involved in ferroptosis, reactive oxygen species, autophagy, and cancer-related pathways (Figure 3D). The cross-talks between gene functions and pathways were further analyzed, as shown in Figure 3E-H. The DEFRGs have an intricate web-like relationship with the different BP, CC, MF, and KEGG pathways, and the results suggest that FRGs may regulate the progression of SMI via cross-talk among multiple gene functions and pathways rather than individually.
Construction of PPI Network and Identification of Key Modules and Hub Genes
Given the cross-talk of multiple genes and pathways in FRG-mediated regulation of SMI, we subsequently constructed a PPI network of DEFRGs using the STRING database to identify functional gene clusters and individual genes among the DEFRGs (Figure 4A). Two key modules (Figure 4B) and ten hub genes (STAT3, NOX4, TP53, HIF1A, NFE2L2, MAPK3, RELA, PTEN, EGFR, PPARA) (Figure 4C) were identified. Notably, the identified hub genes exactly overlapped with one of the key modules, which further demonstrates the important pivotal role of these genes as a major functional cluster among the DEFRGs. Figure 4D and E show the multiple correlations among hub genes and between hub genes and other DEFRGs.
Ferroptosis Features in LPS-Induced Mouse Septic Myocardial Injury Model
The LPS-induced mouse SMI model is a commonly used experimental model for exploring SMI in vivo, which we used here to observe the ferroptosis features and hub gene expression in the SMI myocardium. 19,21,47 Cardiac dysfunction in the SMI mice was manifested by reduced left ventricular ejection fraction and fractional shortening (Figure 5A-C), along with increased serum inflammatory factors (IL-6, IL-1β, TNF-α) and myocardial injury markers (LDH, CK-MB, cTnT), which could be partially rescued by the ferroptosis inhibitor Fer-1 (Figure 5D-I). The ferroptosis marker PTGS2 was upregulated in the myocardium of SMI mice at both mRNA and protein levels (Figure 5J-L). Consistent with this, the inflamed myocardial tissues of SMI mice exhibited the biochemical and morphological features of ferroptosis, such as high levels of ROS and 4-HNE, and extensive mitochondrial damage characterized by atrophy and matrix loss (Figure 5M and N). Furthermore, we also observed increased levels of iron and MDA, decreased levels of SOD and GSH-Px, and a reduced GSH/GSSG ratio in the myocardia of SMI mice (Figure 5O-S). Fer-1 protected against most of these pathological alterations, although there was no significant decrease in cardiac iron content, most likely because Fer-1 only chelates Fe2+ rather than expelling iron from cells. 48 These results suggest that ferroptosis is a crucial pathological driver of SMI, and targeting ferroptosis pathways can be a promising therapeutic strategy for SMI.
Identification of Key FRGs and Exploration of Their Diagnostic Capability and Prognostic Relevance
We sequentially validated the expression of hub genes in the myocardium of SMI mice. As shown in Figure 6A-J, most of these genes showed trends consistent with those in the human septic heart transcriptome, such as increased expression of HIF1A, MAPK3, NOX4, RELA, STAT3 and TP53, and decreased expression of PPARA and PTEN. The Fer-1 treatment could partially but not completely reverse these alterations, suggesting that these genes may participate in other biological processes alongside ferroptosis. Interestingly, EGFR expression was similar in the myocardium of SMI and control mice, while NFE2L2 even showed an opposite expression trend, which may be attributed to the presence of individual differences or different biological circumstances; we excluded them as key genes given these uncertain expression alterations.
We finally identified HIF1A, MAPK3, NOX4, PPARA, PTEN, RELA, STAT3 and TP53 as key FRGs, and further ROC analyses revealed that all of them have excellent diagnostic capability for the septic heart (AUC > 0.8) (Figure 6K-R). Furthermore, the expression of several key FRGs (PPARA, HIF1A, MAPK3, RELA, TP53 and STAT3) was significantly lower in the whole blood of sepsis non-survivors on the first day of ICU admission compared to survivors, indicating their relevance to the prognosis of sepsis (Figure 6S). Further analysis revealed that PPARA and TP53 expression in the whole blood of sepsis patients was significantly negatively correlated with APACHE II score-quantified severity (Figure 6T and U, Figure S4). Interestingly, while SMI in general usually leads to poor prognosis in sepsis, we found that the variability trends of key FRGs in the septic hearts do not fully coincide with those in the whole blood of sepsis non-survivors, probably due to the comprehensive pathological condition of sepsis as a complex systemic disease. To conclude, these key genes regulating ferroptosis in SMI have the potential to serve as biomarkers for SMI.
Distribution of Key FRGs in Human Heart
To explore the distribution of key FRGs in the human heart, we investigated key FRG expression in single-cell sequencing data using the Single Cell Portal. All key FRGs were expressed to varying degrees in both human fetal and adult hearts (Figure 7A and B). Notably, while key FRGs are predominantly expressed in cardiomyocytes, they are also expressed in other cell types such as immune cells.
Immune Infiltration Analyses
Immune cells play an important role in many cardiac diseases. 49,50 The immune and inflammatory responses are critical to the pathological progression of SMI. 8,51 Since single-cell sequencing analysis revealed that the key FRGs are also expressed in immune cells, while a close association between ferroptosis and immune cell function and activity has been reported in other biological environments, immune infiltration analyses were further performed in our study to explore the cross-talks between ferroptosis and immune cells in SMI. 28,52,53 We estimated the relative proportions of infiltrating immune cells in each heart sample from the GSE79962 dataset using the CIBERSORT algorithm, as shown in Figure 8A, while the clustered heatmap of infiltrating immune cells shows the differences between control and septic hearts (Figure 8B). M2 macrophages were found at the highest proportion among immune cells in these heart samples, and their proportion was lower in septic hearts (0.35 ± 0.08 vs 0.37 ± 0.08, septic hearts vs control), corresponding to a higher proportion of M1 macrophages (0.02 ± 0.04 vs 0.005 ± 0.01, septic hearts vs control). However, the M1 and M2 macrophage differences did not reach significance due to large individual differences (p > 0.05). The proportions of infiltrating neutrophils and resting NK cells were significantly increased in the septic hearts, while those of resting mast cells and CD8+ T cells were significantly decreased; these were identified as differentially infiltrating immune cells (DIICs) in the SMI myocardium (Figure 8C).
Further analyses revealed correlations between immune cells (Figure S5A) and between immune cells and several key FRGs in the septic hearts (Figure 8D): the proportion of resting mast cells was significantly positively correlated with PPARA expression and significantly negatively correlated with HIF1A expression; the proportion of neutrophils was significantly negatively correlated with TP53, HIF1A, and STAT3 expression; and that of resting NK cells was significantly positively correlated with PTEN and RELA expression (Figure S5B). In contrast, none of these correlations was significant in control hearts, suggesting that the correlation between key FRGs and immune cells may be pathological rather than physiologically present.
The top ten predicted potential therapeutic drugs, ranked according to the combined score, are shown, and resveratrol (C14H12O3) was the drug with the strongest predicted drug-target correlation for the ferroptosis-related key genes (combined score = 2,975,802) (Figure 9B). Further molecular docking simulations showed that resveratrol formed stable complexes with the key FRG-encoded proteins, as shown in Figure 9D.
Discussion
Given the diagnostic uncertainty and serious prognostic impact of SMI, it is crucial to elucidate the underlying pathophysiological mechanisms, and identify reliable diagnostic markers and therapeutic targets for clinical practice.
Ferroptosis has recently been reported as a noteworthy pathological mechanism in SMI; however, the underlying functional and regulatory mechanisms remain unclear. Rapid advances in transcriptomic studies have improved our understanding of disease. In this study, we comprehensively analyzed the dysregulated FRGs in the human septic heart transcriptomes, including their regulatory mechanisms, functional pathways, key genes and their diagnostic capacity, immune infiltration relevance, and potential ferroptosis-targeting drugs, and further validated the prognostic relevance and key gene expression through an independent cohort of sepsis patients and an in vivo SMI model, which provides new insights for exploring the role of ferroptosis in the development of SMI.
Widespread dysregulation of ferroptosis-related genes in the septic hearts was identified by GSEA, suggesting that ferroptosis is an essential pathological mechanism of SMI, which corroborates previous studies. 14,47,54 Further GO enrichment analyses revealed that the cellular component localization, molecular functions, and biological processes of dysregulated FRGs in SMI have mostly been demonstrated or observed to be associated with ferroptosis in different biological circumstances. Ferroptosis is a form of programmed cell death caused by iron-dependent lipid peroxidation and massive accumulation of ROS, and its cellular sensitivity has been demonstrated to be controlled by a combination of energy metabolism, iron homeostasis and oxidative stress responses. 18,55 Lipid peroxidation of the plasma membrane is an essential part of the ferroptosis process, and ferritinophagy is also being focused on as a regulatory process of intracellular iron metabolism and a cascade response to ferroptosis. 18,56,57 Transcription factor regulation, protein ubiquitination, and regulation of oxidoreductase activity have been reported as key functions of many ferroptosis-regulating molecules that have a strong influence on the ferroptosis process. 58–60 Apart from ferroptosis and ROS-related pathways, the dysregulated FRGs in SMI were also enriched in some cancer-related pathways, suggesting that some key molecules and pathways that regulate ferroptosis in SMI may not yet be defined. We believe that these enrichment results and the crosstalks between them warrant further investigation to clarify the specific mechanisms that affect and are affected by ferroptosis in SMI.
We identified eight key dysregulated FRGs in the septic hearts by constructing a PPI network, and validated them in an in vivo model of SMI. All key FRGs had convincing evidence to support their regulation of ferroptosis in different biological circumstances, 32 in which NOX4, PTEN and MAPK3 are positive regulators, 61–63 PPARA and RELA are negative regulators, 64,65 and HIF1A, STAT3 and TP53 are bidirectional regulators. 66–68 Interestingly, while septic hearts were generally in a ferroptosis-activated state, the expression of the positive regulator PTEN was significantly reduced while the expression of the negative regulator RELA was significantly increased. We consider that this may be attributable to their hitherto unrecognized bidirectional regulatory effects on ferroptosis or to their simultaneous involvement in biological processes independent of ferroptosis in SMI. HIF1A and TP53 were identified in a previous study as key genes that regulate autophagy in SMI, and our enrichment analysis also found that dysregulated FRGs were enriched in the autophagy pathway, suggesting that autophagy and ferroptosis may be closely associated in SMI. Some key FRGs have been identified as essential targets in the development of SMI pathology, and regulation of ferroptosis may be their as yet unidentified molecular biological function in SMI; further studies are required to elucidate the specific regulatory mechanisms. 69–73 The relationship between immune cell infiltration and SMI is complex and poorly understood. Immune and inflammatory responses are a double-edged sword in the pathological development of SMI. 8,74,75 The correlations between ferroptosis and immune cells have been demonstrated. 53 While immune cells can influence cellular ferroptosis by regulating iron metabolism or releasing cytokines, ferroptosis can also regulate the proliferation, function, activation and death of immune cells. However, the correlation between ferroptosis and immune infiltration in SMI is of interest but remains unclear. M2 macrophages, as important anti-inflammatory immune cells, were the most abundant immune cells in the heart samples we studied. Their important functionality and abundant proportion in the heart have been reported in previous studies, 76–78 but they did not show differences between the control and septic hearts we studied, suggesting that there may be individual differences in their role in SMI that require further research. The proportions of infiltrating neutrophils and resting NK cells were increased in the septic hearts we analyzed, while those of infiltrating resting mast cells and CD8+ T cells were decreased, which has been partially observed in previous research. 79–82 Several significant correlations between key FRGs and infiltrating immune cells in the septic hearts were identified, which may reflect pathological crosstalk between ferroptosis and abnormal immune responses in SMI, as these correlations were not present in the healthy counterparts. Nevertheless, some immune cells can exhibit both pro-inflammatory and anti-inflammatory biological functions, hyper-inflammation may coexist with immunoparesis in SMI, and immune infiltration in SMI also varies by individual and disease stage; thus, further investigation of the relationship between ferroptosis and immune infiltration in SMI is required. 8,83,84
There is considerable interest in exploring effective therapies for diseases based on essential pathobiological targets. 3,85 Here, we predicted potential ferroptosis-modulating drugs for SMI based on the key FRGs, and resveratrol in particular showed high drug-target relevance. Echoing our study, resveratrol has been shown to alleviate SMI through multiple mechanisms, such as enhancement of SERCA2a activity by promoting phospholamban oligomerization, 86 activation of the PI3K/AKT/mTOR and Nrf2 pathways, 87,88 and inhibition of the NF-κB signaling pathway and of iron transport from plasma to myocardium. 88,89 Ferroptosis has recently been identified as one of the mechanisms by which resveratrol alleviates SMI, 21 and our study provides the molecular basis and potential targets for further investigation of how resveratrol regulates ferroptosis in SMI.
Our study provides evidence and new insights into the dysregulation of FRGs in the human septic heart transcriptome to explore the role of ferroptosis in SMI. Nevertheless, some limitations ought to be considered. Although the occurrence of ferroptosis has been conclusively observed in various experimental models of SMI and targeting ferroptosis has shown a protective effect in alleviating SMI, most studies in human SMI hearts, including ours, have only identified the occurrence of ferroptosis in terms of biochemical indicators or biomarkers, while the corresponding morphological evidence is lacking. The data on the immune cell proportions of heart samples were derived from the indirect evidence provided by the CIBERSORT algorithm, which may introduce bias. Several protein structures used for molecular docking were predicted based on the available incomplete PDB structures, as complete experimentally observed protein structures were not yet available. In addition, the human septic heart transcriptomes we used for analyses may only be representative of terminal SMI patients or those with poor prognoses, as they were derived from patients who died of sepsis. To identify more reliable diagnostic and therapeutic targets and translate our findings into clinical benefit, further mechanistic and functional studies of the identified key FRGs are needed to adequately understand their regulation of ferroptosis in SMI.
Conclusion
We identified the widespread dysregulation of FRGs in the human septic heart transcriptomes, and further bioinformatic analysis showed that dysregulated FRGs are mainly localized to the apical part, basal plasma membrane, and autophagosome/autolysosome, and regulate iron ion homeostasis and response to nutrient, oxygen, and chemical stress by affecting the binding of DNA-binding transcription factors, ubiquitin protein ligase, transcription coregulator and coactivator, heat shock protein and the activities of acyltransferase and oxidoreductase. The ferroptosis features and expression of key FRGs (HIF1A, MAPK3, NOX4, PPARA, PTEN, RELA, STAT3 and TP53) were validated in an in vivo SMI model, and key FRGs showed excellent diagnostic capability, along with significant correlation with the prognosis and immune infiltration. Furthermore, we predicted resveratrol as a potential therapeutic drug by regulating ferroptosis in SMI. Our study provides human septic heart transcriptome-based evidence and new insights into the role of ferroptosis in SMI, which is significant for deepening the understanding of the pathobiological mechanisms of SMI and exploring diagnostic and therapeutic targets for SMI.
The dual formalisms of nonextensive thermodynamics for open systems with maximum entropy principle
We study the nonextensive thermodynamics for open systems. On the basis of the maximum entropy principle, the dual power-law q-distribution functions are re-deduced by using the dual particle number definitions and assuming that the chemical potential is constant in the two sets of parallel formalisms, where the fundamental thermodynamic equations with dual interpretations of thermodynamic quantities are derived for the open systems. By introducing parallel structures of Legendre transformations, other thermodynamic equations with dual interpretations of quantities are also deduced for the open systems, and several dual thermodynamic relations are then inferred. One can easily find correlations between the dual relations, from which an equivalent rule is found: the Tsallis factor is invariable in partial-derivative calculations at constant volume or constant entropy. Using this rule, more correlations can be found, and the statistical expressions of the Lagrange internal energy and pressure are easily obtained.
Introduction
In recent years, nonextensive statistical mechanics (NSM) has been developed, based on the q-entropy first proposed by Tsallis in 1988 [1]. Different from classical statistics, this new paradigm of statistical theory suggested a type of power-law distribution functions, which have been applied to many interesting research fields, such as self-gravitating astrophysical systems [2–6], astrophysical and space plasmas [7–10], chemical reaction systems [11,12], biological systems [13–15] and so on. Due to its success in dealing with the non-Maxwellian distribution functions of complex systems, NSM is being widely accepted and applied by many authors in very broad science and technology fields. On the other hand, because the fundamental laws of thermodynamics originated from phenomenological observations and experiments, people always expect that they should be appropriate to any system, no matter what its statistical basis is. This is reasonable in logic, yet very difficult in the actual theoretical development of thermodynamics in NSM. For example, in order to define an appropriate pressure, many authors [16–19] abandoned the standard Legendre transformation and suggested a modified free energy in which the Tsallis factor [20] occurred explicitly. This results in a direct obstacle to the further introduction of other Legendre transformations.
In order to develop the nonextensive thermodynamic formalism with the same mathematical structure (i.e., the same thermodynamic relations) as the traditional one, we proposed the dual interpretations of the physical quantities and fundamental thermodynamic equations based on the temperature duality assumption [20]. And then, the nonextensive thermodynamic formalism consisting of two parallel Legendre transformation structures was presented. One is called the physical set (P-set), the other is called the Lagrange set (L-set). In each set of thermodynamic formalism, the Legendre transformation is obtained directly from the classical thermodynamics. The only difference is that there are two sets of transformation structures in our treatment. The Tsallis factor does not appear in any set of the formalism, but these two sets of formalisms are linked through the Tsallis factor. It can be seen that thermodynamic relations in the proposed thermodynamics are almost the same as those in traditional thermodynamics and therefore can be applied to any realistic systems. The key point is that the quantities in complex systems should be interpreted in the dual ways.
In this work, we further study the nonextensive thermodynamic equations in open systems on the basis of the maximum entropy principle, where the particle number is also a state variable. The remainder of this paper is organized as follows. In section 2, the dual power-law distribution functions are re-deduced by recourse to the maximum entropy principle. In section 3, the first and the zeroth laws of thermodynamics are revisited for the open systems. In section 4, the links between the fundamental thermodynamic equations in the nonextensive realm and the balance conditions are studied. In section 5, the Legendre transformations are introduced and the corresponding nonextensive thermodynamic relations are deduced for the open systems. In section 6, the correlations of the nonextensive thermodynamic relations within the different sets of formalisms for the open systems are found. In section 7, we give the conclusions and discussions.
The dual power-law q-distribution functions with maximum entropy principle
Firstly, let us start from the Tsallis q-entropy, expressed [1] by

$S_q = k\,\dfrac{1-\sum_i p_i^{\,q}}{q-1}$,  (1)

where $k$ is the Boltzmann constant, $p_i$ is the probability of the $i$th microstate of the system and $q$ is the nonextensive parameter. For convenience, we define the Tsallis factor $c_q \equiv \sum_i p_i^{\,q}$. In the nonextensive thermodynamics, two different internal energies were proposed [20]. One is the physical internal energy (P-internal energy), defined as the unnormalized q-average [21]

$U_q^{(2)} = \sum_i p_i^{\,q}\,\varepsilon_i$;  (2)

the other is the Lagrange internal energy (L-internal energy), defined as the normalized q-average [22]

$U^{(3)} = \dfrac{\sum_i p_i^{\,q}\,\varepsilon'_i}{\sum_i p_i^{\,q}}$.  (3)

Here, we should notice that the microstate energy levels, $\varepsilon_i$ and $\varepsilon'_i$, in the above two definitions are different from each other. The main difference between them is that they obey different composite rules, which will be discussed later on. It is apparent that these two definitions, (2) and (3), have been employed in the literature; however, in the present work we endow them with new physical senses. On the other hand, because the particle number is a variable in an open system, we propose the dual interpretations of the averaged particle number: the physical particle number (P-number),

$N_q^{(2)} = \sum_i p_i^{\,q}\,N_i$,  (4)

and the Lagrange particle number (L-number),

$N^{(3)} = \dfrac{\sum_i p_i^{\,q}\,N'_i}{\sum_i p_i^{\,q}}$.  (5)

Again we notice that the microstate particle numbers, $N_i$ and $N'_i$, in the two definitions are also different from each other; as we will see later on, they obey different composite rules. In order to apply the maximum entropy principle, we define the functional in the P-set of formalism for the open system as

$\Phi^{(2)} = \dfrac{S_q}{k} - \alpha\sum_i p_i - \beta'\sum_i p_i^{\,q}\,\varepsilon_i - \gamma'\sum_i p_i^{\,q}\,N_i$,  (6)

where the quantity $\alpha$ is a multiplier related to the normalization of probability, $\beta'$ is the generalized Lagrange multiplier, and $\gamma'$ is a generalized multiplier related to the chemical potential. According to the maximum entropy principle, the extreme value of the functional (6) should satisfy the variational condition $\delta\Phi^{(2)}=0$ [23]. The power-law distribution function is then obtained:

$p_i = \dfrac{1}{Z_q}\bigl[1-(1-q)\,(\beta'\varepsilon_i+\gamma' N_i)\bigr]^{1/(1-q)}$,  (9)

with $Z_q$ fixed by normalization. From this distribution, the statistical relations (10)–(12) connecting $S_q$, $U_q^{(2)}$, $N_q^{(2)}$ and the generalized partition function follow [23,24]. Similarly, in order to apply the maximum entropy principle to the open system in the L-set of formalism, the functional can be written as

$\Phi^{(3)} = \dfrac{S_q}{k} - \alpha\sum_i p_i - \beta\,\dfrac{\sum_i p_i^{\,q}\,\varepsilon'_i}{c_q} - \gamma\,\dfrac{\sum_i p_i^{\,q}\,N'_i}{c_q}$,  (13)

where $\beta$ is the Lagrange multiplier and $\gamma$ is a multiplier related to the chemical potential. According to the variational principle, the extreme value of the functional (13) also leads to a power-law distribution,

$p_i = \dfrac{1}{\bar Z_q}\Bigl[1-(1-q)\,\dfrac{\beta(\varepsilon'_i-U^{(3)})+\gamma(N'_i-N^{(3)})}{c_q}\Bigr]^{1/(1-q)}$.  (14)

It is difficult to deduce the statistical expressions of the L-internal energy and the L-number directly, yet the relations (15) and (16) can be proved (see Appendix A) [22]. In the next section, by recourse to the power-law distribution functions presented in this section, we discuss the first and the zeroth laws of thermodynamics for the open systems.
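For completeness, the stationarity condition behind the P-set distribution can be written out explicitly. The short derivation below assumes the q-entropy (1) and the P-set constraints in the forms reconstructed above:

```latex
\begin{aligned}
\frac{\partial \Phi^{(2)}}{\partial p_i}
 &= \frac{q\,p_i^{\,q-1}}{1-q} - \alpha
    - q\,p_i^{\,q-1}\left(\beta'\varepsilon_i + \gamma' N_i\right) = 0 \\[4pt]
 &\Longrightarrow\quad
 p_i^{\,q-1}\,\frac{q}{1-q}\Bigl[\,1-(1-q)\left(\beta'\varepsilon_i
    + \gamma' N_i\right)\Bigr] = \alpha ,
\end{aligned}
```

so that $p_i \propto \bigl[1-(1-q)(\beta'\varepsilon_i+\gamma' N_i)\bigr]^{1/(1-q)}$; absorbing the constants into the generalized partition function $Z_q$ recovers the distribution (9).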
The first and zeroth laws of nonextensive thermodynamics for open systems
In order to rebuild the first law of thermodynamics in the P-set of formalism for the open systems in nonextensive thermodynamics, we should consider the variations of both the internal energy and the averaged particle number, namely

$dU_q^{(2)} = \sum_i p_i^{\,q}\,d\varepsilon_i + \sum_i \varepsilon_i\,d(p_i^{\,q})$, $\quad dN_q^{(2)} = \sum_i p_i^{\,q}\,dN_i + \sum_i N_i\,d(p_i^{\,q})$,  (17)

from which we can deduce that

$dU_q^{(2)} - \mu\,dN_q^{(2)} = \sum_i p_i^{\,q}\,(d\varepsilon_i - \mu\,dN_i) + \sum_i (\varepsilon_i - \mu N_i)\,d(p_i^{\,q})$.  (18)

For an open system, the action exerted by an external force can lead to a shift of the energy level and a change of the particle number of each microstate at the same time. Therefore, it is reasonable to judge that the first term on the right-hand side of the equality (18) is the work done by the reservoir. Similarly, in an open system the heat conduction process always changes the distribution function of the system with fixed microstate energy levels and particle numbers, so the second term on the right-hand side of (18) is the heat absorbed by the system from the reservoir. Then we have the work

$\delta W_q = \sum_i p_i^{\,q}\,(d\varepsilon_i - \mu\,dN_i) = -P_q\,dV$,  (19)

where $P_q$ is the P-pressure. We also have the heat

$\delta Q_q = \sum_i (\varepsilon_i - \mu N_i)\,d(p_i^{\,q})$.  (20)

On the other hand, the variation of the q-entropy gives out the relation (21), which leads to (22). Now we introduce the generalized Lagrange relations,

$\beta' = \dfrac{1}{kT_q}$, $\quad \gamma' = -\dfrac{\mu}{kT_q}$,  (23)

where $T_q$ is the P-temperature and $\mu$ is the chemical potential. Here we generally suppose that the chemical potential is identical in the two sets of formalisms. According to the relations in (23), we get

$dU_q^{(2)} = T_q\,dS_q - P_q\,dV + \mu\,dN_q^{(2)}$,  (24)

which is the fundamental nonextensive thermodynamic equation for the open systems in the P-set of formalism. In order to obtain its counterpart in the L-set of formalism, we consider the variations of the corresponding L-internal energy and L-number [18], which leads to

$dU^{(3)} - \mu\,dN^{(3)} = \dfrac{1}{c_q}\sum_i p_i^{\,q}\,(d\varepsilon'_i - \mu\,dN'_i) + \sum_i (\varepsilon'_i - \mu N'_i)\,d\!\Bigl(\dfrac{p_i^{\,q}}{c_q}\Bigr)$.  (27)

Likewise, for an open system, the external force changes the microstate energy levels and particle numbers, whereas heat conduction changes the distribution function with unchanged energy levels and particle numbers. So we identify the first term on the right-hand side of equation (27) as the work done by the reservoir, $\delta W = -P\,dV$ (28), where $P$ is the L-pressure, and the second term as the heat $\delta Q$ absorbed by the system (29). Moreover, the variation of the q-entropy in the L-set of formalism can be written as (30) [18], from which we find (31). Now let us introduce the following Lagrange relations,

$\beta = \dfrac{1}{kT}$, $\quad \gamma = -\dfrac{\mu}{kT}$,  (32)

where $T$ is the L-temperature. According to the relations in (32), we obtain the fundamental thermodynamic equation for the open systems in the L-set of formalism, namely

$dU^{(3)} = T\,dS_q - P\,dV + \mu\,dN^{(3)}$.  (33)

In order to make clear the physical meanings of the temperatures in (24) and (33), we need to revisit the zeroth law of thermodynamics in each set of formalisms. For this aim, we should know the addition rules of the microstate energy levels and particle numbers. Noticing that they are different in each set of the formalisms, we generally suggest the composite rules

$\varepsilon_{ij}(1+2) = \varepsilon_i(1) + \varepsilon_j(2)$, $\quad N_{ij}(1+2) = N_i(1) + N_j(2)$,  (34)

$\varepsilon'_{ij}(1+2) = \varepsilon'_i(1) + \varepsilon'_j(2)$, $\quad N'_{ij}(1+2) = N'_i(1) + N'_j(2)$,  (35)

where "1" and "2" represent different subsystems and "i" and "j" represent different microscopic states. It can be proved that the above two addition rules guarantee the validity of the probability independence assumption, $p_{ij}=p_i p_j$. According to this assumption, the macroscopic physical quantities in nonextensive thermodynamics obey the addition rules (36)–(38): the q-entropy, the P-internal energy, and the P-number are all nonadditive. On the contrary, now that the chemical potential is indefinite, equation (38) suggests the additive L-internal energy and L-number, i.e.,

$U^{(3)}(1+2) = U^{(3)}(1) + U^{(3)}(2)$, $\quad N^{(3)}(1+2) = N^{(3)}(1) + N^{(3)}(2)$.  (39)

We regard subsystem 1 as the researched object, and subsystem 2 as its reservoir; between these two subsystems there exists an exchange of internal energy and particle number.
Now we consider the variations of (36), (37) and (39), respectively, which give the relations (40)–(42); in the calculation of (41), the relation (12) is taken into account. Similarly, now that the chemical potential is indefinite, equation (41) actually implies two independent nonadditive relations, (43) and (44). When the whole system arrives at the q-equilibrium state, taking the entropy variation (46) into account we get the balance relation (47); on the other hand, setting the corresponding L-set variation (48) to zero we obtain (49). Since equations (47) and (49) denote the same q-equilibrium state, generally we have

$T_q = c_q\,T$.  (50)

This relation is called the assumption of temperature duality [20], which at the same time gives physical reality to the P-temperature and the L-temperature. Here we should emphasize that although these two temperatures are both physically real, the P-temperature is related to the global properties of the system, such as its balance condition and evolution process, while the L-temperature is related to the local nature of the system, such as local thermal equilibrium. Therefore, in experiments the P-temperature is unmeasurable, while the L-temperature is measurable. In the next section, we study the links between the basic nonextensive thermodynamic equations and analyze the balance conditions for the q-equilibrium.
The links of the nonextensive thermodynamic equations to the balance conditions
Consider the two works given in (19) and (28), respectively. If we neglect the difference between the microstate energy levels and between the particle numbers of the two formalisms, we have $\delta W_q = c_q\,\delta W$ (51), leading to the pressure link

$P_q = c_q\,P$.  (52)

In view of (50) and (52), from (18) and (27) we get the corresponding heat link, $\delta Q_q = c_q\,\delta Q$ (53). Comparing (43) and (45), we find the relation (54), which directly results in the link (55) between the two fundamental nonextensive thermodynamic equations (24) and (33). Moreover, if we ignore the difference between the microstate energy levels in (2) and (3), and between the microstate particle numbers in (4) and (5), we also have

$U_q^{(2)} = c_q\,U^{(3)}$, $\quad N_q^{(2)} = c_q\,N^{(3)}$.  (56)

In the deductions of (37) and (38), we have assumed the generalized Lagrange multiplier $\beta'$ and the chemical potential to be constant. This means that an invariable P-temperature and an invariable chemical potential are balance conditions for the q-equilibrium. Apart from these two quantities, an invariable P-pressure is also a balance condition. In order to prove this, we assume [25] that the volume is also nonadditive and satisfies the composite rule (57). Then, in the P-set of formalism, in view of (58) and taking into account (59) for the q-equilibrium, we obtain (60); furthermore, in view of (61) and letting (62) vanish, we obtain (63). It is obvious that, in light of the links in (50), (52), (54) and (55), the equations (60) and (63) produce the same balance conditions, namely

$T_q(1) = T_q(2)$, $\quad P_q(1) = P_q(2)$, $\quad \mu(1) = \mu(2)$.  (64)

It should be noticed that in the P-set of thermodynamic formalism the balance conditions are deduced more directly. Although the inference of the conditions (64) depends on the assumption (57) of volume nonadditivity, the latter is compatible with the other thermodynamic relations; actually, if we take the conditions (64) as the logical starting point, the assumption (57) can be inferred easily. In the next section, we discuss the Legendre transformations for the open systems.
The Legendre transformations and the nonextensive thermodynamic relations for open systems
In our treatment, all the Legendre transformations can be derived directly from classical thermodynamics in each set of nonextensive thermodynamic formalisms. Therefore we can directly write down the free energies, enthalpies and Gibbs functions, as follows:

$F_q = U_q^{(2)} - T_q S_q$, $\quad H_q = U_q^{(2)} + P_q V$, $\quad G_q = F_q + P_q V$,  (65)–(67)

$F = U^{(3)} - T S_q$, $\quad H = U^{(3)} + P V$, $\quad G = F + P V$.  (68)–(70)

It is easy to find the links between these two sets of fundamental thermodynamic functions, according to the fundamental links in equations (50), (52) and (56), namely

$F_q = c_q F$, $\quad H_q = c_q H$, $\quad G_q = c_q G$.  (71)–(73)

Besides, according to (12), the statistical expression (74) of the P-free energy can be written down, which differs from the P-free-energy expression for a closed system [20] with constant particle number. Actually, just as in traditional thermodynamics, we can further introduce the grand thermodynamic potential $J$ in each set of formalisms, that is,

$J_q = F_q - \mu\,N_q^{(2)}$, $\quad J = F - \mu\,N^{(3)}$,  (75)–(76)

and then $J_q = c_q J$ (77). The statistical expression (78) of the P-grand potential follows directly from (74) and is similar to the expression of the grand potential in traditional thermodynamics. Based on the Legendre transformations mentioned above, we symmetrically write down the differential expressions of the thermodynamic functions in each set of the nonextensive formalisms:

$dU_q^{(2)} = T_q\,dS_q - P_q\,dV + \mu\,dN_q^{(2)}$ (79), $\quad dU^{(3)} = T\,dS_q - P\,dV + \mu\,dN^{(3)}$ (80),

$dF_q = -S_q\,dT_q - P_q\,dV + \mu\,dN_q^{(2)}$ (81), $\quad dF = -S_q\,dT - P\,dV + \mu\,dN^{(3)}$ (82),

$dH_q = T_q\,dS_q + V\,dP_q + \mu\,dN_q^{(2)}$ (83), $\quad dH = T\,dS_q + V\,dP + \mu\,dN^{(3)}$ (84),

$dG_q = -S_q\,dT_q + V\,dP_q + \mu\,dN_q^{(2)}$ (85), $\quad dG = -S_q\,dT + V\,dP + \mu\,dN^{(3)}$ (86),

$dJ_q = -S_q\,dT_q - P_q\,dV - N_q^{(2)}\,d\mu$ (87), $\quad dJ = -S_q\,dT - P\,dV - N^{(3)}\,d\mu$ (88).

By recourse to the theory of partial derivatives, it is easy to prove the Maxwellian relations, now that the Legendre transformations are directly derived from traditional thermodynamics. Here we write down the Maxwellian relations in the L-set only; those in the P-set can be obtained directly by the simple symbol substitution:

$\Bigl(\dfrac{\partial T}{\partial V}\Bigr)_{S_q,N^{(3)}} = -\Bigl(\dfrac{\partial P}{\partial S_q}\Bigr)_{V,N^{(3)}}$, $\quad \Bigl(\dfrac{\partial T}{\partial P}\Bigr)_{S_q,N^{(3)}} = \Bigl(\dfrac{\partial V}{\partial S_q}\Bigr)_{P,N^{(3)}}$,

$\Bigl(\dfrac{\partial S_q}{\partial V}\Bigr)_{T,N^{(3)}} = \Bigl(\dfrac{\partial P}{\partial T}\Bigr)_{V,N^{(3)}}$, $\quad \Bigl(\dfrac{\partial S_q}{\partial P}\Bigr)_{T,N^{(3)}} = -\Bigl(\dfrac{\partial V}{\partial T}\Bigr)_{P,N^{(3)}}$.

Furthermore, we can also write out the standard thermodynamic relations in the L-set, such as $T = (\partial U^{(3)}/\partial S_q)_{V,N^{(3)}}$, $P = -(\partial U^{(3)}/\partial V)_{S_q,N^{(3)}}$ and $\mu = (\partial U^{(3)}/\partial N^{(3)})_{S_q,V}$. In the next section, we discuss the correlations between these nonextensive thermodynamic relations derived from the two sets of formalisms.
The correlations between the nonextensive thermodynamic relations for open systems
It is interesting to check the correlations between the nonextensive thermodynamic relations from the different sets of formalisms. From the fundamental thermodynamic equations for the P-internal energy and the L-internal energy, that is, (79) and (80), a first pair of correlations follows. From (81) and (82) we obtain a further pair; from (83) and (84), another; then from (85) and (86), and finally from (87) and (88), the remaining correlations are obtained.

From the correlations presented above, an interesting equivalent rule is found: in partial-derivative calculations at constant entropy or constant volume, the Tsallis factor is invariable, and it can be extracted directly from the partial derivative. By use of this rule, more correlations can be found; it is not necessary to list them all, so we give only several examples. From (99), it can be seen that the heat capacities at constant volume in the two sets satisfy the corresponding relation; from (100) and (101), further relations follow.

By application of the rule, it is easy to find the statistical expressions of the L-internal energy from (10) and the L-number from (11). Moreover, by recourse to the links (50), (56), (71), and (77), the expressions of the L-free energy and the L-grand potential are obtained, respectively; based on these two expressions, from (109) or (118) the expression of the L-pressure is easily obtained.

It is difficult to calculate the grand partition function for open systems in the above equation; nevertheless, following the closed-system case [20], we have an approximation for a nonextensive gas in which a is an unknown function of the L-number and the parameter q. From this approximation the P-internal energy and the L-internal energy of the nonextensive gas follow, and it can be seen that their heat capacities coincide. Considering the heat cut-off of the power-law q-distribution functions (9) and (14), we have 1−q>0; the heat capacity in the resulting expression is therefore always positive for a normal nonextensive gas. Further work will be carried out in other papers, and we do not give more discussion here, beyond the schematic form of the equivalent rule recorded below.
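The equivalent rule can be sketched as follows, assuming the Tsallis factor takes its usual form $c_q = 1 + (1-q)S_q/k$ (an assumption on our part, since the definition appears earlier in the original). Because $c_q$ depends on the entropy alone, any two set-dependent quantities linked by $X^{(3)} = c_q X^{(2)}$ satisfy

$$\left(\frac{\partial X^{(3)}}{\partial V}\right)_{S_q,\,N} = c_q \left(\frac{\partial X^{(2)}}{\partial V}\right)_{S_q,\,N},$$

so the factor passes unchanged through partial derivatives taken at constant entropy; the text extends the same extraction to derivatives taken at constant volume.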
Conclusions and discussions
In this paper, we further studied the nonextensive thermodynamics of open systems based on the maximum entropy principle. The temperature duality assumption is adopted, and all the nonextensive thermodynamic relations and thermodynamic quantities are endowed with dual interpretations. By doing this, a nonextensive thermodynamic formalism consisting of two sets of parallel Legendre transformations is proposed. For an open system, the particle number is a state variable, which should be defined in both sets of the nonextensive formalisms. So we presented two particle-number definitions: one is called the P-number and the other the L-number, and they are linked through the Tsallis factor. The physical meanings of these two particle numbers are given clearly in our treatment. For an open complex system, the L-number is the number of the non-creation particles restricted to the system, while the P-number is the sum of the numbers of three kinds of particles: the non-creation particles and creation particles inside the system, and the particles existing on the system surface, which take part in the interactions between the system and its surroundings. Of course, the creation particles can also be annihilated for reasons that remain unknown. In any case, according to (12), because the partition function in the P-set of formalism is greater than unity and also 1−q>0, the P-number should exceed the L-number. The nonadditivity of the P-number is reasonable, since it includes the number of creation particles. In our treatment, the two sets of nonextensive formalisms share a common entropy, volume, and chemical potential. Beyond this, the chemical potential also participates in the chemical balance between different subsystems or phases. This makes the chemical potential special in that it is the only intensive quantity that does not require a dual interpretation, which is rational because chemical reactions all occur in local regions of open complex systems.
All the Legendre transformations in this paper originate directly from traditional thermodynamics. This means that the thermodynamic relations in the nonextensive thermodynamics are identical in form to those in traditional thermodynamics. The main difference is that in the proposed treatment there exist two sets of parallel Legendre transformation structures, which are linked through the Tsallis factor.
The Maxwellian relations and other thermodynamic relations are easily deduced in the proposed nonextensive thermodynamic formalism, with the particle numbers included as state variables. By studying the basic nonextensive thermodynamic equations, correlations between the nonextensive thermodynamic relations of the different sets of formalisms are found, and an equivalent rule is deduced: in calculations of partial derivatives at constant volume or entropy, the Tsallis factor is invariable and can be extracted from the partial derivative.
By use of the equivalent rule, more correlations are found, and these two sets of nonextensive formalisms are shown to share the same heat capacity (see (121)). The expressions of the internal energies are easily obtained for a nonextensive gas, and with the rule, the heat capacity of the nonextensive gas is shown to be always positive. However, the equivalent rule might not be valid in some situations, such as nonextensive systems of small size, or self-gravitating systems, where the long-range interactions are regarded as a kind of "phase transition" and the heat capacity might be negative.
Effects of Moxa (Folium Artemisiae argyi) Smoke Exposure on Heart Rate and Heart Rate Variability in Healthy Young Adults: A Randomized, Controlled Human Study
Objective. To determine the effects of moxa smoke on human heart rate (HR) and heart rate variability (HRV). Methods. Fifty-five healthy young adults were randomly divided into experimental (n = 28) and control (n = 27) groups. Experimental subjects were exposed to moxa smoke (2.5 ± 0.5 mg/m3) twice for 25 minutes in one week. ECG monitoring was performed before, during, and after exposure. Control subjects were exposed to normal indoor air in a similar environment and similarly monitored. Follow-up was performed the following week. Short-term (5 min) HRV parameters were analyzed with HRV analysis software. SPSS software was used for statistical analysis. Results. During and after the first exposure, comparison of percentage changes or changes in all parameters between groups showed no significant differences. During the second exposure, the percentage decrease in HR, the percentage increases in lnTP, lnHF, lnLF, and RMSSD, and the increase in PNN50 were significantly greater in the experimental group than in control. Conclusion. No significant adverse HRV effects were associated with this clinically routine 25-minute exposure to moxa smoke, and the data suggest that short-term exposure to moxa smoke might have positive regulating effects on human autonomic function. Further studies are warranted to confirm these findings.
Introduction
Moxibustion, one of the classical therapies of Traditional Chinese Medicine (TCM), uses the heat generated by burning moxa floss (usually made by Folium Artemisiae argyi) to stimulate acupuncture points [1]. It is used widely in acupuncture clinics in China and other Asian countries to treat various diseases, especially chronic conditions such as osteoarthritis, asthma, gastrointestinal disorders, and insomnia [2][3][4][5].
Smoke is an unavoidable aspect of the therapy. The aim of this study was to evaluate the effects of moxa smoke exposure on human heart rate variability (HRV) parameters.
A moxibustion session typically lasts 20-30 minutes, and patients are often treated several times a week for several weeks. Patients are exposed to the smoke during treatment, while acupuncturists are commonly exposed for prolonged periods during clinical practice. Because of recent concerns as to the safety of the therapy, specifically the potential toxicity of the smoke, many clinics no longer use moxibustion, thus depriving patients of the benefits of this unique treatment. Evaluation of the safety and the effects of moxa smoke is imperative.
Concerns about moxa smoke are similar to those regarding tobacco smoke and air pollutants. Many studies show that exposure to tobacco smoke and air pollutants is positively associated with adverse effects in the respiratory, immune, nervous, and cardiovascular systems [6][7][8][9][10][11]. Active and passive exposures to tobacco smoke have been found to increase sympathetic nervous system activity and reduce parasympathetic nervous system modulation and HRV [12][13][14]. Particulate air pollutants affect heart rate (HR), blood pressure, vascular tone, blood coagulation, the progression of atherosclerosis [15], and HRV [16][17][18][19]. To our knowledge, the influence of moxa smoke on HRV in the human body has not been sufficiently investigated.
HRV refers to the time variation coefficient between successive heart beat cycles. It is one of the most promising quantitative markers of autonomic nerve system activity [20]. There is a growing recognition of the role of HRV abnormalities in cardiovascular disease [21][22][23], and HRV decrease is a strong predictor of mortality [24]. HRV measurement in time and frequency domains is a convenient, noninvasive tool for autonomic nervous physiology evaluation, and short-term (5-minute) recording gives reliable measurements [25]. The purpose of this study was to determine whether exposure to moxa smoke influences HR and HRV in healthy subjects.
Subjects.
Participants, most of them students of Beijing University of Chinese Medicine or other nearby universities plus some residents of the area around the University, were recruited between March 2012 and July 2012. The study protocol was approved by the Human Medical Ethics Committee of Beijing University of Chinese Medicine and was registered in the Chinese Clinical Trial Registry (ChiCTR-TRC-12002445). Written informed consent was secured from all participants.
Inclusion criteria required that subjects be normal and healthy according to the American Society of Anesthesiologists Physical Status Classification System, that is, that they have no organic, physiologic, biochemical, or psychiatric disorders, smoke <5 cigarettes per day [26], and are between the ages of 18 and 50.
Individuals were excluded if they (1) had a history of addiction to alcohol or drugs, (2) had had contact with moxa smoke within one month of the test, (3) had used medications within two weeks of the test, (4) had had a cold or other illnesses within one week of the test, (5) had ingested food or drink containing caffeine or alcohol, smoked, or done strenuous exercise within four hours of the test, and (6) were pregnant or lactating.
Participants were instructed to refrain from tobacco, alcohol, medications, and strenuous exercise and to avoid contacting moxa smoke or any other abnormal gas during the two-week test.
Study Protocol
2.2.1. Equipment and Setting. The trial was performed at the Beijing University of Chinese Medicine in two adjacent, bright, quiet, and similarly laid-out rooms equipped with beds. Ambient temperature and humidity were kept at 24°C-26°C and 40%-50%, respectively, and monitored by a meteorological parameter recorder (Kestrel NK3000, USA).
Room 1 had normal indoor air. In Room 2, moxa smoke was generated by burning moxa sticks (three-year-old pure moxa, 1.8 cm × 20 cm, Nanyang Hanyi Moxa Co., Ltd., China). A digital dust indicator (P5L2C, Binta Green Technology Co., Ltd., Beijing, China) that detects levels of particulate matter <10 μm in diameter (PM10) and a volatile organic compound (VOC) detector (model no. PGM-7320, kit MiniRAE3000, Rae Systems, Inc., USA) were set beside the participant to monitor the air. In the moxa room, PM10 and total VOC levels were kept at 2.5 ± 0.5 mg/m3 and 4.2 ± 1.3 mg/m3, respectively, which accord with average moxa smoke levels in acupuncture clinics [27]. In Room 1, the PM10 and total VOC levels detected in this trial were lower than 0.01 and 0.2 mg/m3, respectively.
This was a two-arm, open, randomized study (n = 55). After reading and signing the consent form, each participant was assigned to the experimental or control group by computer-generated random allocation. Group assignments were performed by a statistician blinded to the study design. Each assignment was sealed in an opaque envelope that was opened for the respective participant by the investigator prior to treatment.
Experimental Group.
Testing consisted of three phases, one immediately after the other. In phase 1, subjects entered Room 1 and were encouraged to relax in a supine position. After 5-10 minutes of rest, ECG monitoring was performed for 5 minutes. In phase 2, they entered Room 2. After a 5-minute rest in a supine position, ECG monitoring was performed for 20 minutes. In phase 3, they returned to Room 1 for another 5-minute ECG recording (see Figure 1). After each ECG, subjects recorded their subjective sensations and emotions on a questionnaire. The questions asked whether they had experienced drowsiness, shortness of breath, cough, choking, irritation of the nose, pharynx, or eyes, body temperature changes, or any other discomfort that might be associated with moxa smoke exposure.
Control Group.
Control subjects were similarly monitored but remained in Room 1 during phase 2 (see Figure 1).
The test was performed on each subject twice in a single week to accord with routine clinic practice. One week later, subjects in both groups returned for another 5-minute ECG (see Figure 2).
ECG Monitoring and Short-Term (5 min) HRV Data
Analysis. With the subjects supine, three ECG electrodes were placed on their right subclavian and double costal arch regions. A data acquisition instrument (DATAQ Instrument Inc., MODEL:DI-720-USB, USA) was connected to the electrodes and a computer. To allow the heart beat to become steady, ECG recording was started 5-10 minutes after they lay down.
ECGs were analyzed by a specialist blinded to group assignment. After removal of extraneous noise, normal-to-normal beat intervals were analyzed for time- and frequency-domain parameters in 5-minute epochs using standard algorithms and HRV analysis software (Catholic University of Leuven). Time-domain analysis estimates the variation of differences between successive RR intervals through statistically developed indices. Frequency-domain analysis estimates respiratory-dependent, high- and low-frequency power through spectral analysis. Widely used HRV parameters [25,28,29] were employed in this study (Table 1). A sketch of how such parameters are computed is given below.
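The indices themselves are standard. As a minimal illustrative sketch (this is not the Leuven software used in the study; the LF band of 0.04-0.15 Hz, HF band of 0.15-0.40 Hz, and 4 Hz resampling rate are conventional assumptions), the main time- and frequency-domain parameters can be computed from a 5-minute series of normal-to-normal (NN) intervals as follows:

import numpy as np
from scipy.interpolate import interp1d
from scipy.signal import welch

def hrv_time_domain(nn_ms):
    # Time-domain indices from NN intervals given in milliseconds.
    nn = np.asarray(nn_ms, dtype=float)
    d = np.diff(nn)                                  # successive NN differences
    return {
        "HR": 60000.0 / nn.mean(),                   # mean heart rate (beats/min)
        "RMSSD": np.sqrt(np.mean(d ** 2)),           # root mean square of successive differences
        "PNN50": 100.0 * np.mean(np.abs(d) > 50.0),  # % of successive pairs differing by >50 ms
    }

def hrv_frequency_domain(nn_ms, fs=4.0):
    # Spectral powers from an evenly resampled RR tachogram (Welch's method).
    nn = np.asarray(nn_ms, dtype=float)
    t = np.cumsum(nn) / 1000.0                       # beat times in seconds
    grid = np.arange(t[0], t[-1], 1.0 / fs)
    tachogram = interp1d(t, nn, kind="cubic")(grid)
    f, pxx = welch(tachogram - tachogram.mean(), fs=fs, nperseg=256)

    def band_power(lo, hi):
        sel = (f >= lo) & (f < hi)
        return np.trapz(pxx[sel], f[sel])

    lf, hf = band_power(0.04, 0.15), band_power(0.15, 0.40)
    tp = band_power(0.0, 0.40)                       # total power (assumed here as 0-0.4 Hz)
    return {"lnTP": np.log(tp), "lnLF": np.log(lf), "lnHF": np.log(hf), "LF/HF": lf / hf}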
Statistical
Analysis. SPSS 17.0 statistical software was used for data analysis. The paired t-test was used to compare HR data of different phases in the same group; the independent two-sample t-test was used to compare data from different groups.
A related-samples nonparametric test (Wilcoxon signed-rank) was used to compare all HRV parameter data of different phases in the same group, and an independent-samples nonparametric test (Mann-Whitney U test) was used to compare data from different groups. A sketch of this testing scheme is given below.
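The study ran these tests in SPSS; purely to make the comparison scheme concrete, the same tests can be expressed with SciPy as in the sketch below (the data arrays are synthetic placeholders, not data from the study):

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic placeholder data standing in for the study's measurements:
hr_phase1, hr_phase2 = rng.normal(70, 5, 28), rng.normal(68, 5, 28)
rmssd_change_exp, rmssd_change_ctrl = rng.normal(5, 10, 28), rng.normal(0, 10, 27)

t_stat, p_hr = stats.ttest_rel(hr_phase1, hr_phase2)          # within-group HR: paired t-test
w_stat, p_w = stats.wilcoxon(hr_phase1, hr_phase2)            # within-group HRV: Wilcoxon signed-rank
t_g, p_t = stats.ttest_ind(rmssd_change_exp, rmssd_change_ctrl)   # between-group changes: t-test
u_stat, p_u = stats.mannwhitneyu(rmssd_change_exp, rmssd_change_ctrl,
                                 alternative="two-sided")     # between-group HRV: Mann-Whitney U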
TP, HF, and LF data were transformed into natural logarithms (ln) for better analysis. The four data segments of phase 2 were averaged into one mean value for comparison with the data from the other two phases. Percentage changes, computed as ((mean value in phase 2 or value in phase 3 − value in phase 1)/value in phase 1) × 100%, or absolute changes (mean value in phase 2 or value in phase 3 − value in phase 1) were used for comparisons between the groups. A small sketch of these derived quantities is given below.
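The derived quantities reduce to simple arithmetic; a minimal sketch follows (the numeric values are placeholders, not data from the study):

import numpy as np

def pct_change(baseline, later):
    # Percentage change of a phase-2 mean (or phase-3 value) vs. the phase-1 baseline.
    return (later - baseline) / baseline * 100.0

# Placeholder values illustrating the pipeline:
tp_phase1 = 2500.0
tp_phase2_segments = [2600.0, 2700.0, 2650.0, 2750.0]
tp_phase2_mean = np.mean(tp_phase2_segments)   # the four 5-minute segments are averaged
ln_tp = np.log(tp_phase2_mean)                 # natural-log transform before analysis
print(pct_change(tp_phase1, tp_phase2_mean))   # percentage change used for group comparison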
Baseline Characteristics of Study Subjects.
There were no statistically significant baseline differences between the groups (Table 2; values expressed as mean ± standard deviation or median (interquartile range); no significant differences between the groups by independent two-sample t-test or Mann-Whitney test).
Comparisons: Second Test.
In phases 2 and 3, during and after exposure, each group had significant reductions in HR (P < 0.05) and significant increases in all HRV parameters (P < 0.05; Figures 4(a)-4(h)) except for the LF/HF ratio in phase 3 of the experimental group (Figure 4(h)).
Comparisons: Follow-Up Test.
Mean HR (P = 0.039) in the experimental group was significantly lower than that in control; other indicators were not significantly different between the two groups (Table 5).
Participants' Sensations.
During moxa smoke exposure, seventeen experimental group subjects felt sleepy and relaxed. One felt refreshed; stomach and bowel movement improved in another. Ten complained of choking and irritation in nose, pharynx, and eyes. One had difficulty in breathing. Eight had no unusual sensations.
In the control group, two subjects felt sleepy; one had neck discomfort; one had numbness in the hand. Twenty-three felt nothing unusual.
Discussion
No harmful HR and HRV effects were observed during exposure to clinical levels of moxa smoke. Evidence for this is that there were no differences in HR and HRV, either immediately (after 10 minutes) or at followup a week after exposure, between the experimental group exposed to moxa smoke and the control group without such exposure. These results might explain why reports of adverse reactions associated with smoke produced in this ancient therapy are so rare.
In contrast to retrospective studies based on clinical observation, our present study was a well-controlled, randomized, and prospective study to examine possible adverse effects of moxa smoke. The study is unique in moxa smoke concentration, length of exposure to the smoke, and its carefully controlled and monitored experimental environment, all of which mimic actual clinical moxibustion practice. The sample size is comparable to those reported in similar studies on exposure to other types of potentially hazardous smoke [12][13][14][30].
The HRV effects that we observed in this moxa smoke study contrast with findings of air pollution and tobacco smoke studies, which show harmful effects on human health [12][13][14][15][16][17][18][19]. This difference might be the result of the unique constituents of moxa smoke. Moxa floss (burning material of moxibustion) is made from the mugwort leaf (Folium Artemisiae argyi), and its smoke contains multiple essential oils, suspended particulate matters, and products of chemical oxidation [31]. Wheeler et al. [32] tested the chemical products of moxibustion in clinically common dosages and found that neither carbon monoxide nor volatile compounds that present safety hazards are produced under clinical conditions. Air pollution studies show that the suspended particles in polluted air can reduce HRV by affecting the neurological system and consequently affecting the cardiovascular system by increasing HR and blood coagulation and decreasing hemoglobin to cause oxidative stress [33][34][35]. The respirable particles in moxa smoke mainly consist of unknown, ultrafine particles [27], which might be one reason why we observed no adverse HRV effects from clinical levels of the smoke. However, further investigation is needed to confirm and refine our finding.
Interestingly, in the second test we observed positive HR and HRV parameter changes in the experimental group compared to control. These include a decrease in mean HR and increases in both time-domain (RMSSD, PNN50) and frequency-domain (TP, HF, and LF) HRV parameters during the 25-minute moxa smoke exposure (Table 4). HRV has been widely applied as a marker of autonomic nervous activity. Tension of the autonomic nervous system is maintained by opposing actions of the sympathetic and parasympathetic systems. RMSSD, PNN50, and HF are primarily thought to reflect parasympathetic influences. LF has been shown to reflect both sympathetic and parasympathetic influences. The LF/HF ratio is widely used as a relative marker of sympathetic nervous activity or sympathovagal balance [25,36]. The HRV changes found in this study appear to be linked to the restorative functions of the autonomic system. These include an increase in total variability shown by increased TP and an increase in parasympathetic nervous activity shown by increased RMSSD, PNN50, and HF. The LF/HF increase after moxa smoke exposure was significantly lower in experimental subjects than in control, which may indicate that moxa smoke drives autonomic nervous activity toward a balanced state. These findings are consistent with those of our previous pilot study, in which 24 healthy volunteers exposed to moxa smoke had significant reduction in HR and increase in total HRV during and after 20 minutes of exposure to moxa smoke [37]. This suggests that moxa smoke has a regulating effect on human autonomic system function and that moxa smoke inhalation might have short-term stress-alleviating effects.
Moxa smoke effects and mechanisms have not been well investigated. We speculate that the effects are similar to those of aromatherapy, as a number of studies [38][39][40][41][42] show that inhalation of certain aromas can induce HRV increase and HR reduction, indicating beneficial autonomic nervous system regulation. Mechanisms of these effects might be pharmacological and/or psychological [43]. The pharmacological hypothesis is that the odor directly interacts with and affects the autonomic nervous system/central nervous system and/or endocrine systems. On the one hand, the pharmacological compound might enter the bloodstream by way of nasal or lung mucosa; on the other, the odor might stimulate the olfactory nerves and the limbic system of the brain. In the clinic, moxibustion is often used to treat insomnia, anxiety, and depression [4,5]. However, it is unclear whether the treatment effects are induced by heat at the acupuncture point or by the moxa smoke. The present study provides some information for distinguishing the respective roles of heat and smoke. Further studies to elucidate the mechanisms of moxibustion are warranted.
We are aware of the limitations of the present study. Our data only show HRV effects from short-term exposure to moxa smoke; in normal acupuncture practice, patients usually receive multiple moxibustion treatments, and practitioners are usually exposed to the smoke for years. These factors warrant a long-term observational study. Furthermore, because the participants in our study were not blinded, we cannot rule out the possibility of placebo effect. Additionally, our subjects were young and healthy; these results might not reflect how moxa smoke affects the elderly or chronically ill.
Nevertheless, this study is an important step toward understanding the effects of moxa smoke. Our results provide useful information on the feasibility of a future, larger trial and will make it possible to calculate adequate sample sizes for such research.
In conclusion, our data show that short-term moxa smoke exposure at clinical concentrations poses no hazards to patients' HR and HRV and suggest that moxa smoke has a positive regulating effect on human autonomic function. Future studies are needed to further investigate the effects and the safety of moxa smoke.
Feasibility of waste cooking oil as biodiesel feedstock
Waste Cooking Oil (WCO) has potential as a biodiesel feedstock since it contains triglycerides. However, it contains many impurities and requires several purification steps. Thus, this study aims to evaluate the feasibility of WCO as a biodiesel feedstock in terms of physicochemical properties such as free fatty acid content (%FFA) as palmitic acid, moisture content, and peroxide number. Samples are collected from fast-food fried chicken restaurants in Padang, West Sumatera, Indonesia. WCO is processed by filtering, degumming, centrifugation, neutralization, and adsorption. FFA content, moisture content, and peroxide number are examined based on ISO 660, ISO 665, and ISO 3960, respectively. The findings show that the purified WCO has 2.01% FFA, 0.65% moisture content, and a peroxide number of 1.02 mg O2/100g. These findings show that WCO is feasible as a biodiesel feedstock after several purification steps. The results of this study are expected to serve as basic information for biodiesel production using waste cooking oil as the feedstock.
Introduction
Oil-based fuel scarcity is common not only in Padang but throughout Indonesia. The provision of alternative fuel feedstock has been done through the utilization of biodiesel that is officially sold at various gas stations. The government's commitment in encouraging the utilization of biodiesel is implemented by increasing the level of biodiesel mixture from Biodiesel 20 percent (B20) to Biodiesel 30 percent (B30). In fact, President Joko Widodo has inaugurated the B30 program at Pertamina MT Haryono Gas Station, South Jakarta on December 23, 2019. Biodiesel B40 is also ready to be tested in March 2020 and is planned to come into effect in 2021 [1].
However, biodiesel from cooking oil is based on palm oil, which raises the issue of food security. Thus, it is necessary to find alternative feedstocks other than palm oil, such as algae, waste, or non-edible oils. One of the wastes that can be utilized is waste cooking oil from fried chicken restaurants, which is abundant in Padang, West Sumatera, Indonesia.
Based on data from the Padang City Cultural and Tourism Office, there are currently 512 culinary/restaurant industries, both fast food and regular restaurants, in operation [2]. These restaurants produce Waste Cooking Oil on a large scale. As a waste, used cooking oil is dangerous if recycled as recycled cooking oil (RCO), because it can increase the risk of Parkinson's disease, coronary heart disease, stroke, and cancer. Surprisingly, some 18 to 20% (3.56 million tons) is circulated on the market as RCO [3].
WCO has the potential to be used as a feedstock for biodiesel production. However, used cooking oil has low quality [4]. The process of frying chicken continuously and repeatedly at high temperatures (160-180°C), accompanied by contact with air and water during frying, results in complex degradation reactions in the oil. As a result, WCO contains many impurities such as solid waste, colloidal waste, gum, a high content of Free Fatty Acids (FFAs) of more than 15%, and a blackish color [5,6]. This low quality means that untreated WCO does not qualify as a biodiesel feedstock, which requires a maximum FFA content of 3% [6].
Thus, this study aims to evaluate the feasibility of waste cooking oil as a biodiesel feedstock after several treatments. The significance of this study lies in the diversification of biodiesel feedstock, in supporting waste management at fast-food fried chicken restaurants, and in avoiding the transformation of WCO into RCO, which can harm health. Furthermore, this study is expected to increase the added value of WCO.
1. Samples
For the experiments, Waste Cooking Oil was purchased from several fast-food fried chicken restaurants in Padang, West Sumatera, Indonesia.
Purification stages
WCO was purified in several stages: preparation, centrifugation, neutralization, and adsorption. The preparation of the WCO samples consists of three processes, namely sedimentation, filtering, and degumming. This preliminary treatment stage aims to eliminate insoluble and colloidal suspensions (gum) in the sample by the sedimentation, filtering, and degumming methods, using phosphoric acid (1% v/v of WCO), as well as heating to eliminate moisture.
Centrifugation is carried out with a centrifuge to remove residual impurities in the form of a colloidal suspension in the oil. Meanwhile, the neutralization process separates Free Fatty Acids (FFA) from the oil by reacting them with alkali to form soap. In this study, KOH is used as the alkali in order to produce liquid soap, which does not clog the outlet pipe. In addition, the KOH can later be neutralized with phosphoric acid, producing potassium phosphate that can be used as fertilizer. In the neutralization process, 1% w/v KOH is added to the oil. The neutralization process can also reduce oil color substances [8], which reduces the quantity of adsorbent required in the adsorption process. A schematic of this chemistry is given below.
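Schematically, the neutralization step saponifies the free fatty acids with KOH, and the spent alkali can then be neutralized with phosphoric acid (R denotes the fatty-acid hydrocarbon chain; this is a generic textbook description rather than a reaction scheme given by the study):

RCOOH + KOH → RCOOK (soap) + H2O

3 KOH + H3PO4 → K3PO4 (potassium phosphate) + 3 H2O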
Analysis of physicochemical properties
Samples were examined for physicochemical properties, namely FFA content as palmitic acid, moisture content, and peroxide number. Samples were analyzed twice: first before the purification process, and second after the purification process. Data from before and after purification and from unused cooking oil were compared to examine the effectiveness of the purification process and the feasibility of WCO as a biodiesel feedstock.
The methods for examining FFA content as palmitic acid, moisture content, and peroxide number are based on ISO 660, ISO 665, and ISO 3960, respectively, as presented in Table 1. A sketch of the underlying calculations follows below.
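Both determinations reduce to simple arithmetic once the titration and drying data are in hand. The sketch below illustrates the logic of the ISO 660 and ISO 665 calculations; the reagent volume and concentration shown are hypothetical, chosen only to reproduce the order of magnitude of the reported results, and are not values from this study.

M_PALMITIC = 256.4  # g/mol, molar mass of palmitic acid (C16H32O2)

def ffa_percent_as_palmitic(v_koh_ml, c_koh_mol_per_l, sample_mass_g):
    # ISO 660 logic: one mole of KOH neutralizes one mole of free fatty acid,
    # and the result is expressed as mass percent of palmitic acid.
    mol_ffa = v_koh_ml * c_koh_mol_per_l / 1000.0
    return mol_ffa * M_PALMITIC / sample_mass_g * 100.0

def moisture_percent(mass_before_g, mass_after_g):
    # ISO 665 logic: moisture and volatile matter as oven-drying mass loss.
    return (mass_before_g - mass_after_g) / mass_before_g * 100.0

# Hypothetical example: 2.8 mL of 0.1 mol/L KOH to neutralize a 3.57 g sample
print(ffa_percent_as_palmitic(2.8, 0.1, 3.57))  # ~2.01 %FFA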
Result and Discussion
The physicochemical properties of the oil before the purification process are presented in Table 2, which shows that the FFA content, moisture content, and peroxide number of WCO before purification are 14.2%, 2.31%, and 9.12 mg O2/100g, respectively. These values far exceed the free fatty acid content and peroxide number of unused cooking oil, which are 1.04% and 0.91%, respectively. Waste Cooking Oil before and after the purification process is presented in Figure 1. Before purification, the oil is cloudy with a blackish color due to high levels of impurities such as gum, colloidal waste, and solid waste [6]. After filtration, sedimentation, degumming, centrifugation, neutralization, and adsorption, the samples turn clear with a yellowish color, as presented in Figure 2; the purification is thus effective in converting the dark, blackish oil into clear, yellowish oil. Purification also reduces the free fatty acid content, moisture content, and peroxide number of the oil; these physicochemical properties are presented in Table 3, which shows that WCO after purification has 2.01% FFA and 0.65% moisture content. A maximum free fatty acid content of 3% and a maximum moisture content of 1% are allowed for a feasible biodiesel feedstock [6]. Thus, WCO after purification is feasible as a biodiesel feedstock, whereas WCO before purification is not.
Conclusion
Waste Cooking Oil from fast-food chicken restaurants has high levels of impurities such as solid waste, colloidal waste, and gum, as well as a high free fatty acid content and high moisture content. However, it is feasible as a biodiesel feedstock after purification. The purification processes are filtration, sedimentation, degumming, centrifugation, neutralization, and adsorption. This study is expected to provide basic information for biodiesel production using waste cooking oil as the feedstock.
Lean Manufacturing Machine using Value Stream Mapping
The production process of mid-scale manufacturing in Indonesia commonly uses a conventional sequential process. Some wasteful process steps slow down production lead time and create additional expenses for the manufacturer. The need to eliminate useless steps in production development for cost cutting leads to Value Stream Mapping (VSM) as the mapping tool. The acceptance of VSM has already been proven in many research studies that use manufacturers of many scales as case studies. This paper aims at using VSM as a tool for examining mid-scale manufacturing production development. It also evaluates how VSM is put into real-world practice in mid-scale manufacturing in Indonesia. Based upon observation and interviews, a future state map can be created that eliminates waste from production development, shortens lead time, and proposes a new production development flow. Overall, more than 500 minutes can be eliminated from the production process time after remapping, while at least 141 minutes are reduced in value-added time (VAT). However, non-technical effort is still needed to ensure that the company uses the resulting map in its production development, since most mid-scale manufacturers are family businesses that still trust traditional processes rather than modern ones. Thus, future research should include industrial psychology aspects in VSM implementation in Indonesia.
Introduction
The production process of mid-scale manufacturing in Indonesia commonly uses a conventional sequential process; some wasteful process steps slow down production lead time and create additional expenses for the manufacturer. This research study took place at a mid-scale manufacturer that produces sheller machines for post-harvest farming. While mid-scale manufacturers in Indonesia commonly focus on adding more labor to production development, this risks inefficient production costs, which lead to a more expensive product price.
The need to eliminate useless steps in production development for cost cutting leads to Value Stream Mapping (VSM) as the mapping tool. The acceptance of VSM has already been proven in many research studies that use manufacturers as case studies [1]-[5]. Thus, it is chosen as the mapping tool for eliminating wasted processes within sheller machine production.
Wasted processes in production development lead to a more expensive sales price; therefore, they should be evaluated in order to obtain a better sales price for the customer. VSM is a powerful mapping tool that can evaluate processes from the customer's perspective and redraw a shorter timeline and production lead time [6], so it can hopefully leverage a competitive advantage for mid-scale manufacturers in the marketplace.
On the other hand, the need for an efficient manufacturing process, which leads to a lean production process, can reduce the material flow within it. Since the actors in mid-scale manufacturing in Indonesia are mostly conventional, "old school" persons, drawing diagrams using VSM should help simplify their understanding of the need for an efficient process. Since VSM offers a simple yet powerful step-by-step approach, it should also help the actors draw a future map toward a better process in production development. This paper aims at using VSM as a tool for examining mid-scale manufacturing production development. It also evaluates how VSM is put into real-world practice in mid-scale manufacturing in Indonesia, since many Indonesian small- to mid-scale manufacturers are merely organized using a traditional approach in their production flow. Therefore, VSM in this case study is used to identify and eliminate wasted processes and to create a new future-state process map for more efficient production development.
This research also evaluates the effectiveness of real-world VSM practice in Indonesia as an effort toward lean production, especially for mid-scale manufacturing. Because applied research about VSM in Indonesia is rarely found, there are few references for VSM research in Indonesia. Since Indonesia is a developing country, this research can also serve as a valuable reference for similar VSM research in other developing countries.
Literature Review
Lean Manufacturing (LM), which originated in Japan, employs methods such as JIT (Just In Time), TPM (Total Productive Maintenance), and Kanban [5]. The aim of LM is to decrease production cost and to create a better production timeline, achieving efficiency with less effort [7]-[9].
One of the LM tools that has already been proven theoretically and empirically successful for mid-scale production processes is Value Stream Mapping (VSM) [6], [11], [12]. Several researchers have already shown that VSM fits many SME (Small and Medium Enterprise) production processes (in this paper referred to as mid-scale manufacturing), and its efficiency gains include time-waste reduction and defect reduction [12]. Thus, the decision to use VSM in this case should fulfil the research purpose outlined in the background above.
VSM consists of five core steps [6], [13]: (1) selection of a product family, (2) drawing the current state map, (3) drawing the future state map, (4) defining a working plan, and (5) achieving the working plan. Other researchers state that the fifth step is about experimenting with the future map that has been drawn, or simply implementing what has already been drawn [12], [14].
Each process in VSM should be: (1) valuable, (2) capable, (3) available, (4) adequate, and (5) flexible, in order to create lean thinking in product development [15]. Therefore, each step in product development should be evaluated in the current state map and then redrawn into the future state map. VSM itself can be applied effectively to manufacturers of many scales, from large manufacturers [1], [16] through small- to mid-scale manufacturers [11], [15], [17]. Especially for small- to mid-scale companies with limited investment, VSM offers potential improvement [11].
Result and Discussion
The normal time measurement method used in this study was the continuous time study. Measurements were performed by breaking down each part of production into elemental activities. Time measurement was then performed on these activities, and the samples were also tested for data adequacy. This approach was taken because of the length of time of the majority of the activities and the number of existing activities. Having obtained the normal times, the next step was to create the initial map, which represents the company's condition before improvement.
The current state map is shown in Figure 2, depicting product development based upon observation and preliminary interviews. There are 14 processes, excluding the supply process and product shipment. However, these steps were assumed to be inefficient, since products stayed in the warehouse before shipment. In fact, all of the products are made to customer demand using batch processing, not continuous product development. The on-demand decision is made because the manufacturer cannot handle all of the requests in its annual product schedule; thus, a change in the product development cycle is needed. After examining the order process and the finishing statement, it is clear that product development should use batch processing based upon customer orders, not continuous production as the current state map shows. Details of all production development stages are shown in Table 1. The real conditions show that some of the processes are not done efficiently, because their lead times deviate from the normal times, so they should be rearranged for better results and shorter times. Observation took at least two weeks in order to create a detailed time calculation for each process and to discuss which processes count as waste and which should be extended. This observation was not done by the researchers alone; it also included the production manager, as the person who best understands what result the company wants in order to fulfil customer demand.
At least four major improvements can easily shorten production development time. First of all, shortening the distance from spare-part storage to the assembly place saves more than 20 seconds. It is also suggested that spare parts from suppliers should not be stored in the warehouse, because they will be ordered on demand for a cheaper storage fee; docking loads of spare parts can then go straight to the assembly place. Production development can also gain more than 100 minutes of value-added time. This can be done using online shared electronic data for customer orders, from customer service (which also takes the marketing role) to the production division and purchasing division. It is also supported by reducing some of the welding processes, which likewise reduces lead time across the whole production development. Based on observation, the takt time owned by the company is 1723.2 minutes, and all of the production processes of the sheller machine are still below the takt time; therefore, the work activities can be modified so that their times may approach the takt time. In the assessment given to the seven kinds of waste in the value stream analysis tools matrix (VALSAT), the most suitable tool selected was process activity mapping (PAM). In the initial PAM, it was found that there are 45 NNVA work elements, 27 VA elements, and 19 NVA work elements.
In the proposed PAM, it was found that there are 46 NNVA work elements, 23 VA elements, and 12 NVA work elements; the total number of work elements was reduced from 91 to 81. In the initial current state mapping, value-added time and production lead time are 2590.219 minutes and 7010.953 minutes, respectively. In the future state mapping, value-added time and production lead time are 2455.94 minutes and 6248.105 minutes, respectively. In the current state mapping of the implementation, the value-added time is 2448.811 minutes. Process Cycle Efficiency (PCE) in the initial current state mapping amounted to 0.3695, PCE in the future state mapping is estimated at 0.3931, and PCE after implementation amounted to 0.3974. The improvement of the post-implementation PCE over the initial PCE indicates that the production process has been running more efficiently, that is, work processes that do not add value are minimized and even eliminated; thus, the product can be processed more quickly and reach consumers sooner. With the mapping process done, the future state map is shown in Figure 3, which clearly shows that the production development process and its steps have been reduced. A sketch of these efficiency calculations is given below.
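To make the arithmetic behind these figures explicit, a minimal sketch follows; the numbers are the ones reported above, while the takt-time formula is the standard lean definition and is included here only as an assumption, since the paper reports the resulting value (1723.2 minutes) without its inputs.

def takt_time(available_work_time, customer_demand):
    # Standard lean definition: available working time per unit of customer demand.
    return available_work_time / customer_demand

def process_cycle_efficiency(value_added_time, lead_time):
    # PCE = value-added time / production lead time.
    return value_added_time / lead_time

# Values reported in the case study, in minutes:
print(process_cycle_efficiency(2590.219, 7010.953))  # initial current state -> ~0.3695
print(process_cycle_efficiency(2455.940, 6248.105))  # future state          -> ~0.3931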
While the future state map reveals the waste elimination and shorter times, there is still resistance from the company. Since most small- to mid-scale manufacturers in Indonesia are run as family businesses, the opinion of the elder generation still carries great weight in every decision. Thus, greater effort is still needed to convince company management to use this map as their production flow, which means that all of the numbers and diagrams should be presented with simple explanations. When the observation ended, some data could be reported from the VSM process: the value-added classification and its details, which show how many activities were analyzed. The data are shown in the following Table 4.
NNVA (Necessary but Not Value-Added): activities that must be done but do not provide added value.
Conclusion and Future Research
Based upon the result and discussion, it can be concluded that there are some processes whose normal and standard times can be shortened. The help of simple technology for small- to mid-scale manufacturers, such as online data sharing for customer orders and purchasing orders, really helps in reducing production development time.
It is also clearly shown that minimizing the distance between the temporary spare-part storage and the assembly process can eliminate waste, while some welding processes can be modified so that they approach or fall below the takt time. Even though there is one additional process, the whole product development time is still better than the previous one.
Using the VSM result, the product manager can be assured that efficiency really matters for the company: the whole process really does create added value for customer orders and, in the end, increases customer satisfaction. While some steps need big improvements, others merely need small improvements that can nevertheless reduce waste across production development.
However, the whole mapping result still needs great commitment from management, since small- to mid-scale manufacturers in Indonesia are commonly family businesses. Thus, implementing improvements such as the lean concept in their production development usually meets resistance from the elder generation, even though the approach has been proven better and more efficient.
Thus, future research on VSM in small- to mid-scale manufacturing should also include the managerial point of view in implementing it and in ensuring that the company actually uses it. This managerial point of view will not only produce efficiency numbers; it should also cover how to create standard operating procedures and how to run focus group discussions with the company's top management. Therefore, an industrial psychology expert should also be included in the team in future research.
Blood–brain barrier genetic disruption leads to protective barrier formation at the Glia Limitans
Inflammation of the central nervous system (CNS) induces endothelial blood–brain barrier (BBB) opening as well as the formation of a tight junction barrier between reactive astrocytes at the Glia Limitans. We hypothesized that the CNS parenchyma may acquire protection from the reactive astrocytic Glia Limitans not only during neuroinflammation but also when BBB integrity is compromised in the resting state. Previous studies found that astrocyte-derived Sonic hedgehog (SHH) stabilizes the BBB during CNS inflammatory disease, while endothelial-derived desert hedgehog (DHH) is expressed at the BBB under resting conditions. Here, we investigated the effects of endothelial Dhh on the integrity of the BBB and Glia Limitans. We first characterized DHH expression within endothelial cells at the BBB, then demonstrated that DHH is down-regulated during experimental autoimmune encephalomyelitis (EAE). Using a mouse model in which endothelial Dhh is inducibly deleted, we found that endothelial Dhh both opens the BBB via the modulation of forkhead box O1 (FoxO1) transcriptional activity and induces a tight junctional barrier at the Glia Limitans. We confirmed the relevance of this glial barrier system in human multiple sclerosis active lesions. These results provide evidence for the novel concept of “chronic neuroinflammatory tolerance” in which BBB opening in the resting state is sufficient to stimulate a protective barrier at the Glia Limitans that limits the severity of subsequent neuroinflammatory disease. In summary, genetic disruption of the BBB generates endothelial signals that drive the formation under resting conditions of a secondary barrier at the Glia Limitans with protective effects against subsequent CNS inflammation. The concept of a reciprocally regulated CNS double barrier system has implications for treatment strategies in both the acute and chronic phases of multiple sclerosis pathophysiology.
Introduction
In a healthy individual, the central nervous system (CNS) parenchyma is protected from the peripheral circulation by the blood-brain barrier (BBB), which tightly regulates the entry and exit of soluble factors and immune cells [1]. Importantly, during multiple sclerosis, the abnormal permeability of the BBB allows penetration into the CNS parenchyma of inflammatory cells and soluble factors such as autoantibodies, cytokines, and toxic plasma proteins, which drive lesion formation and acute disease exacerbation [2,3]. Therefore, identifying key mechanisms that promote BBB tightness is currently considered to be a main strategy for controlling leukocyte and humoral entry, preventing acute relapse and disability progression in multiple sclerosis.
Previous studies have identified the Hedgehog (HH) pathway as a regulator of BBB integrity in multiple sclerosis, HIV, and stroke [4][5][6][7]. Desert hedgehog (DHH) is expressed constitutively at the BBB in adults [8] and belongs, together with Sonic hedgehog (SHH) and Indian hedgehog (IHH), to the HH family of morphogens, identified nearly 4 decades ago in Drosophila as crucial regulators of cell fate determination during embryogenesis [9]. The interaction of HH proteins with their specific receptor Patched-1 (PTCH1) derepresses the transmembrane protein Smoothened (SMO), which activates downstream pathways including the canonical HH pathway leading to the activation of Gli family zinc finger (Gli) transcription factors, and the so-called noncanonical HH pathways, which are independent of SMO and/or Gli [10].
Interestingly, a wealth of literature published during the last decades has enabled a change in the vision of BBB structure and integrity, which has expanded to include contributions from both the barrier properties of the vascular endothelial cells and the astrocytic end feet of the neurovascular unit. Within the neurovascular unit, a substantial intercellular communication network involves the vascular endothelial cells and astrocytic end feet, as well as the pericytes and basement membranes within the perivascular space (PVS) [11][12][13]. How these signals regulate the passage of soluble factors and cells into and out of the CNS is not completely understood and is of considerable translational interest to the field of neuroimmunology. Regulatory mechanisms at the BBB include solute transporters and receptor-mediated transcytosis, and immune cells are actively prevented from crossing the BBB by low levels of immune receptors that normally permit immune trafficking. Once soluble factors and immune cells penetrate the BBB, they circulate within the PVS, a region between the basal basement membrane of the endothelial cell wall and the parenchymal basement membrane abutting the astrocyte end feet [14,15]. While it is now well established that BBB breakdown leads to soluble factor and inflammatory cell infiltration into the PVS during neuropathology, the role of the Glia Limitans is more complex. Indeed, astrocytes described as reactive may demonstrate opposing roles in both recruiting and restricting neuroinflammatory infiltration depending on the context [16]. Specific reactive astrocyte behaviors are likely determined by signaling events that vary with the nature and severity of CNS injury or disease. Specifically, in multiple sclerosis as well as Alzheimer's and Parkinson's diseases, it has been shown that reactive astrocytes produce pro-inflammatory and pro-permeability factors on the one hand and neuroprotective factors on the other [17][18][19].
Astrocyte barrier properties are not as well characterized as those of the BBB. However, several groups have highlighted barrier properties at the Glia Limitans [20][21][22]. Notably, endfoot-endfoot clefts, similar to those observed between endothelial cells at the BBB, have been described at the Glia Limitans and shown to be responsible for the sieving effect observed between the distributions of small and large dextrans [23]. Moreover, under neuroinflammatory conditions, immune cell trafficking across the Glia Limitans is necessary for clinical experimental autoimmune encephalomyelitis (EAE) [21]; indeed, matrix metalloproteinase (MMP)-2 and MMP-9 proteolytically cleave dystroglycan, which anchors astrocyte end feet to the Glia Limitans basement membrane via binding to extracellular matrix molecules, allowing infiltrating leukocytes to penetrate the parenchyma [24]. Additionally, it has been shown that the scavenger receptor CXCR7 is up-regulated on the inflamed BBB endothelium, facilitating mobilization of T cells from the PVS into the CNS parenchyma [25]. Altogether, these data demonstrate that both the endothelial BBB and its basement membrane, along with the Glia Limitans and the parenchymal basement membrane, are required for immune cell trafficking across the neurovascular unit. Strikingly, our recent work has given considerable attention to a new property of reactive astrocytes: the expression of tight junction proteins (notably Claudin4 (CLDN4)) under inflammatory conditions [26]. This result provides yet another argument in favor of astrocytic barrier properties.
The first objective of our study was to decipher the role of the morphogen DHH in maintaining BBB tightness. The second objective was to demonstrate that a double barrier system comprising both the BBB and Glia Limitans is implemented in the CNS and regulated by a crosstalk going from endothelial cell to astrocytes using endothelial Dhh knockdown as a model of permeable BBB.
Here, we first demonstrate that endothelial DHH expression is down-regulated during neuroinflammation and is necessary to maintain BBB tightness. We then show that BBB opening, induced by Dhh knockdown, drives astrocyte CLDN4 expression, conferring barrier properties to the Glia Limitans, which results in the PVS entrapment of plasma proteins and inflammatory cells, both under physiological conditions and during pathology. Together, these data identify the neurovascular unit as a double barrier system whose function is controlled by the crosstalk between endothelial cells and astrocytes.
In conclusion, this work strengthens the concept of CNS double barrier system, unveiling how signals at the endothelium drive astrocyte barrier properties to protect the parenchyma during neuropathology. Consequently, taking into account both components of the neurovascular unit is of translational interest and could open the way for new therapeutic strategies notably to limit progressive multiple sclerosis pathology.
DHH, but not SHH or IHH, is expressed by CNS microvascular endothelial cells and down-regulated during chronic neuroinflammation
First, we showed that DHH is expressed at the BBB in vitro using mouse CNS microvascular endothelial cells (MECs) (Fig 1A). CNS MEC purity was assessed using platelet/endothelial cell adhesion molecule 1 (PECAM1) and zonula occludens 1 (ZO1) as positive endothelial markers and smooth muscle actin (SMA), neural/glia antigen 2 (NG2), cluster of differentiation 45 (CD45), ionized calcium binding adaptor molecule 1 (IBA1), and glial fibrillary acidic protein (GFAP) as markers of contamination by smooth muscle cells, pericytes, leukocytes, microglia, and astrocytes, respectively (Fig 1A and S1A Fig). DHH expression at the BBB was verified in vivo using human cortical sections from healthy donors (Fig 1B) and brain sections from C57BL/6 mice (S1B Fig). SHH and IHH are not expressed in the healthy BBB, and DHH is known to be stored intracellularly as well as being secreted [7]. Therefore, we here infer the CNS endothelial cells as the source of DHH within the neurovascular unit. Next, we demonstrated that Dhh is severely down-regulated at the BBB under inflammatory conditions both in vitro, using human brain microvascular endothelial cells (HBMECs) treated with interleukin-1β (IL-1β), one of the main pro-inflammatory cytokines implicated in multiple sclerosis pathophysiology (Fig 1C and S1 Data), and in vivo (Fig 1F and S1 Data), using a preclinical model of multiple sclerosis (MOG-induced EAE) to induce chronic neuroinflammation in C57BL/6 mice. For this experiment, isolated spinal cord microvessels underwent a digestion step followed by a CD45+ cell depletion step to discard inflammatory cell contamination induced by EAE (S1C and S1D Fig). Dhh down-regulation at the BBB is associated with the up-regulation of the endothelial activation markers intercellular adhesion molecule 1 (Icam1) (Fig 1D-1G and S1 Data) and vascular cell adhesion molecule 1 (Vcam1) (Fig 1E-1H and S1 Data) and with down-regulation of mRNA markers of tight junctions (claudin5 (Cldn5) and Zo1) (Fig 1I, 1J and S1 Data).
Together, these data identify DHH as the only HH expressed in adults at the endothelial BBB. Moreover, they highlight the fact that DHH expression is down-regulated at the BBB during neuroinflammatory pathology.
Endothelial-specific Dhh inactivation induces down-regulation of adherens junction CDH5 and tight junction CLDN5 ex vivo
To test the importance of endothelial DHH expression at the BBB, we conditionally disrupted DHH expression in endothelial cells and examined the consequences on BBB integrity. To do so, we used CNS MEC cultures isolated from Cdh5-Cre ERT2 ;Dhh Flox/Flox mice (Dhh ECKO mice) and Dhh Flox/Flox control littermates, 2 weeks after inducing knockdown by intraperitoneal injection of tamoxifen. There is no difference in cell culture viability between Dhh ECKO and control mice (S1E, S1F Fig and S2 Data).
We first verified the efficiency of the knockout by measuring Dhh expression in primary CNS MEC cultures obtained from Dhh ECKO and littermate controls and showed that Dhh expression is strongly down-regulated in the knockout mice (Fig 2A and S1 Data). Moreover, CDH5, CLDN5, and ZO1 junctions are disorganized in Dhh ECKO endothelial cells: In controls, CDH5, CLDN5, and ZO1 display a well-defined pattern of sharp contours at endothelial cell-cell contacts. In contrast, in Dhh ECKO cultures, a broader, more irregular pattern is detected at endothelial cell-cell contacts (Fig 2D-2F). This result is consistent with the previously documented phenotype using small interfering RNA (siRNA) for DHH [8]. CDH5 and CLDN5 but not ZO1 are down-regulated in Dhh ECKO CNS MECs compared to controls (Fig 2B, 2C and 2G-2I and S1 Data).
We concluded that endothelial DHH expression is necessary to maintain endothelial adherens and tight junction mRNA and protein expression levels at the BBB and to maintain a well-defined CDH5, ZO1, and CLDN5 pattern of sharp contours at endothelial cell-cell contacts.
It has been shown that CDH5 acts together with CTNNβ1 to inhibit the transcription factor forkhead box O1 (FOXO1) [27] via PI(3)K-AKT-dependent phosphorylation, which thereby up-regulates the expression of the endothelial tight junction protein CLDN5 [27]. Therefore, we next measured the expression level of the phosphorylated form of FOXO1 (p-FOXO1) in CNS MEC cultures from Dhh ECKO and control (Dhh Flox/Flox) mice and demonstrated that p-FOXO1 is down-regulated in Dhh ECKO mice compared to controls (Fig 2G, 2J-2K). We then treated CNS MEC cultures from Dhh ECKO mice with a cell-permeable inhibitor of the transcription factor FOXO1 (AS1842856), which blocks the transcriptional activity of FOXO1, and measured Cldn5 mRNA expression. We demonstrated that Cldn5 mRNA expression in Dhh ECKO CNS MEC cultures treated with the FOXO1 inhibitor returns to the expression level of control CNS MEC cultures, unlike in Dhh ECKO CNS MEC cultures treated with DMSO (Fig 2L and S1 Data). Additionally, in Dhh ECKO CNS MEC cultures treated with the FOXO1 inhibitor, CLDN5 displays a well-defined pattern of sharp contours at endothelial cell-cell contacts, whereas in Dhh ECKO CNS MEC cultures treated with DMSO, a broader, more irregular pattern is detected at endothelial cell-cell contacts (Fig 2M).
We concluded that endothelial autocrine DHH expression at the BBB maintains the pool of CDH5-CTNNβ1 signaling in endothelial cells, which promotes Cldn5 mRNA expression and maintains a well-defined CLDN5 pattern of sharp contours at endothelial cell-cell contacts through the inhibitory phosphorylation of the transcription factor FOXO1 ( Fig 2N).
In the white matter, endothelial-specific Dhh inactivation induces BBB permeability associated with endothelial and astrocytic activation in vivo
In vivo, on spinal cord sections from control and Dhh ECKO mice, we confirmed that the expression of the adherens junction CDH5 and the tight junction CLDN5, when normalized for the number and length of blood vessels, is down-regulated under resting conditions (Fig 3A-3D, S1 Data and S3A Fig), and demonstrated that this is associated with an increased accumulation of serum proteins (fibrinogen (FGB) and albumin (ALB)) [28] around the vessels (Fig 3B, 3E, 3F, S1 Data, S3B and S3C Fig), suggesting BBB opening.
We then analyzed the activation status of both the spinal cord endothelium and the Glia Limitans in Dhh ECKO mice and control littermates. We chose ICAM1 as a marker of endothelial activation and GFAP as a marker of astrocyte reactivity because both are widely used in the context of inflamed CNS tissues and represent strong indicators of a reactive response of the CNS endothelium and Glia Limitans.
Using spinal cord sections, we revealed that ICAM1 is up-regulated at the endothelium in Dhh ECKO mice compared to littermate controls (Fig 3G, 3I and S1 Data) and is associated with a regionalized up-regulation of GFAP, a marker of astrocyte activation, in the white matter (Fig 3H, 3J and S1 Data) but not the gray matter (Fig 3H, 3K and S1 Data).
Dhh ECKO -induced BBB breakdown is sufficient to induce a secondary CNS protective barrier at the Glia Limitans
As we already demonstrated in Fig 3, Dhh ECKO mice display BBB leakage, whereas control littermates feature a tight BBB.
Although we demonstrated BBB leakage in Dhh ECKO mice (Fig 3B, 3E, 3F and S1 Data), we noticed that infiltrating plasmatic proteins are concentrated around the vascular area in arterioles and venules and are not seamlessly distributed within the parenchyma (see the note on capillaries at the end of this section). To verify this observation, we quantified the distribution of immunoglobulin G (IgG) and a smaller sized dye (70 kDa fluorescein isothiocyanate (FITC) Dextran) in the 3 compartments (lumen, PVS, and parenchyma). Specifically, in control mice, there is no significant endothelial permeability, with more than 95% of IgG and 97% of 70 kDa FITC Dextran contained in the lumen of blood vessels, 5% of IgG and 3% of 70 kDa FITC Dextran segregated in the PVS area delimited by the astrocytic end feet (aquaporin 4 (AQP4) or laminin (LAM) antigen) on one side and the vessel wall (PECAM1 or LAM antigen) on the other side, and none found in the parenchyma (S1 Data, S4A-S4E Fig and S2 Data). The quantification protocol for IgG and 70 kDa FITC Dextran distribution within the lumen, PVS, and parenchyma is described in S4 Fig. In Dhh ECKO mice, vascular leakage is significant, but strikingly, 50% of IgG and 50% of 70 kDa FITC Dextran are contained in the PVS, while none is found in the parenchyma, indicating the presence of a secondary barrier at the Glia Limitans (S1 Data, S4A-S4E Fig and S2 Data). Interestingly, some overlap is observed between the IgG signal and the AQP4 signal, concentrated within the internal surface of the Glia Limitans (Fig 4A). This might reflect a potential interaction between the inner face of astrocyte end feet and IgG accumulated in the PVS, as astrocytes express Fc receptors (cell surface receptors for IgG), which play a role in both CNS health and disease [29,30]. The perivascular trapping of IgG in the Dhh ECKO mouse CNS was confirmed using co-immunostaining of IgG and LAM antigen, which marks the endothelial and astrocytic basement membranes (S5A and S5B Fig).
We have previously found that reactive astrocytes express tight junctions, notably CLDN4, under inflammatory conditions in a mouse model of multiple sclerosis (EAE) [26]. Here, we found that these data are relevant to human disease since CLDN4 is also expressed by reactive astrocytes, with stronger CLDN4 labeling intensity at the Glia Limitans, in active cortical lesions from multiple sclerosis patients ( Fig 4D). Based on the above results, we investigated whether the PVS entrapment of plasmatic proteins observed in Dhh ECKO mice is linked to the expression of the tight junction CLDN4 at the Glia Limitans in response to BBB permeability ( Fig 4E-4H and S1 Data). We showed that CLDN4 is expressed at the Glia Limitans in Dhh ECKO mice but not control littermates (Fig 4E-4H and S1 Data) using isolated neurovascular enriched fractions. Small intestine samples were used as a positive control for the quantification of CLDN4 by western blot (S6A Fig).
Altogether, these results suggest that, in Dhh ECKO mice, spontaneous BBB permeability leads to the establishment of a physical barrier at the Glia Limitans, characterized by the expression of the tight junction protein CLDN4. Therefore, in Dhh ECKO mice, astrocytic end feet at the Glia Limitans are "preconditioned" to form a secondary barrier protecting the parenchyma. (It is important to note that this study focuses on arterioles and venules but not capillaries: CLDN4 is only up-regulated in neurovascular fractions from Dhh ECKO mice enriched for vessels 100 μm and larger in diameter (Fig 4F-4H and S1 Data); in lysates obtained from enriched capillary fractions, astrocyte reactivity is observed but astrocytic CLDN4 is not up-regulated (S6 Fig).)
Endothelial signals can drive astrocyte barrier properties at the Glia Limitans
Given the above results, we wanted to determine whether astrocyte barrier formation requires signals from the endothelial BBB or from the perivascular infiltrate of plasmatic proteins. To do so, we first studied in vitro the response of normal human astrocytes (NHAs) to HBMEC-conditioned media versus plasmatic proteins from healthy donors. The HBMECs used to produce the conditioned media were treated with either the osmotic agent Mannitol or the pro-permeability factor vascular endothelial growth factor A (VEGFA), inducing BBB breakdown through two distinct mechanisms (S7A Fig and S2 Data).
We demonstrated that Gfap (Fig 5A, 5E-5I and S1 Data) and aldehyde dehydrogenase 1 family, member L1 (Aldh1l1) (Fig 5B and S1 Data), both markers of astrocyte reactivity, as well as Cldn4 mRNA expression (Fig 5D, 5E-5H, 5J and S1 Data), are up-regulated in NHAs treated with HBMEC-conditioned media but not in NHAs treated with plasma from healthy donors. Vimentin (Vim), another marker of astrocyte reactivity, was not modulated at the mRNA level in any condition (Fig 5C and S1 Data).
To confirm this observation in vivo, we delivered murine VEGFA or murine plasmatic proteins into the left cerebral cortex of adult mice and evaluated the consequences on CLDN4 expression by astrocytes. PBS stereotactic administration was used as a control. Importantly, VEGFA cortical stereotactic injection has already been shown to efficiently induce BBB breakdown in mice [17,31].
In vivo, GFAP (Fig 5K-5N and S1 Data) and CLDN4 (Fig 5K-5M, 5O and S1 Data) are induced in mouse cortex after murine VEGFA and murine plasmatic protein treatments, with VEGFA having a much stronger effect than plasmatic proteins.
We concluded that permeable endothelial monolayers produce signals that can drive astrocyte reactivity and tight junction expression. Plasmatic protein involvement in controlling astrocyte barrier behavior is, however, less clear as astrocyte reactivity and tight junction expression are up-regulated in vivo but not in vitro when treated with plasma; further investigations will be necessary to identify the mechanisms involved.
Mice with endothelial Dhh inactivation display reduced disability in a model of multiple sclerosis during the onset of the disease
To examine the impact of these findings on disease severity, we investigated the phenotype of induced EAE, a model of multiple sclerosis, in Dhh ECKO and control mice.
In control mice, neurologic deficits appeared from day 9 and increased in severity until day 18, when the clinical score stabilized at a mean of 3.2, representing hind limb paralysis. In contrast, the onset of clinical signs in Dhh ECKO mice was first seen 4 days later, and the clinical course was much milder. In Dhh ECKO mice, disease reached a plateau at day 21 at a mean score of 2.3, indicating hind limb weakness and unsteady gait, a mild phenotype (Fig 6A and S1 Data). The EAE peak score (Fig 6B and S1 Data) and the average score during the period of disability (Fig 6C and S1 Data) were both decreased in Dhh ECKO mice, but there were no significant changes in survival and mortality rates (Fig 6D and S1 Data).
Interestingly, the reduced clinical course of EAE in Dhh ECKO mice was much more marked during the onset of the disease (between day 12 and day 20 post EAE induction). However, once the plateau phase was reached (after day 21 post EAE induction), the clinical score difference between the Dhh ECKO and control groups was greatly reduced and coincided with an acceleration of the mortality rate in both groups (Fig 6A-6D and S1 Data). One explanation for this observation is that, in the Dhh ECKO group, inflammatory cells accumulated in the PVS end up degrading astrocytic tight junctions by secreting proteases; this previously described phenomenon [26] explains how inflammatory cells trapped in the PVS can eventually pass through the astrocytic barrier at the Glia Limitans and thereby enter the CNS. Thus, inactivating Dhh at the BBB slows disease progression until the perivascular accumulation of inflammatory cells causes the degradation of the astrocyte secondary barrier, leading to the deterioration of the clinical course of EAE in Dhh ECKO mice.
Importantly, the clinical course in the Dhh ECKO mice is correlated with strikingly decreased areas of demyelination as compared to the control cohort (Fig 6E, 6F and S1 Data). Critically, these studies reveal that the clinical course and pathology of EAE are strongly reduced in Dhh ECKO mice during the onset of the disease.
We concluded that endothelial Dhh knockdown-induced BBB opening is associated with a clinical protective effect during the onset of the disease in a model of multiple sclerosis.
Mice with endothelial Dhh inactivation display a reinforced barrier at the Glia Limitans, restraining access to the parenchyma to inflammatory infiltrate in a model of multiple sclerosis
Although Dhh ECKO mice display equivalent FGB densities (Fig 7A and S1 Data) as well as numbers of CD45+ leukocytes (Fig 7B and S1 Data) in lesions compared to those in control mice, neuropathology in both cohorts appeared very different.
We found that while the BBB is permeable in both groups, with plasmatic protein extravasation associated with equivalent CDH5 densities, astrocyte reactivity in EAE lesions is greatly increased in Dhh ECKO mice, with GFAP immunoreactivity strongest at the Glia Limitans (Fig 7C, 7E and S1 Data). Moreover, infiltrating plasmatic proteins in Dhh ECKO mice show less CNS parenchymal dispersion (Fig 7E, 7F and S1 Data), with 68.0% of FGB trapped within the Glia Limitans in the Dhh ECKO cohort versus 32% in the control cohort (Fig 7F, S7B Fig and S1 Data).
In previous work from our laboratory [8], we showed that endothelial-specific deletion of Dhh in the peripheral vasculature is associated with vascular permeability and endothelial activation, notably in the lung, and that lipopolysaccharide (LPS) injection increased pulmonary neutrophil infiltration in Dhh ECKO mice compared to control littermates. Therefore, we can hypothesize that endothelial Dhh knockdown increases the peripheral recruitment and activation of inflammatory cells that need to travel to the CNS. However, in our study, we do not observe any difference in the total number of CD45+ cells in CNS lesions between the two cohorts (Fig 7B and S1 Data). What we do observe is a significant difference in the repartition of these CD45+ cell populations between the groups, with 77.1% of CD45+ cells trapped in the PVS in the Dhh ECKO cohort versus 25.1% in the control cohort (Fig 7G, 7H, S1 Data and S7C Fig).
Altogether, these data suggest reduced access across the Glia Limitans in Dhh ECKO mice compared to littermate controls. Finally, we demonstrated that CLDN4 expression in spinal cord EAE lesion lysates is up-regulated in Dhh ECKO mice as compared to control mice (Fig 7I, 7J and S1 Data).
Collectively, data from Fig 4 to Fig 7 reveal that conditional loss of a key structural component of endothelial integrity at the BBB in Dhh ECKO mice leads to increased astrocyte reactivity and implementation of barrier properties at the Glia Limitans, allowing for less diffusion of plasmatic proteins and immune cells into the CNS parenchyma than in control mice. Therefore, in Dhh ECKO mice, astrocytic end feet at the Glia Limitans are "preconditioned" to form a barrier, explaining their ability to protect the parenchyma more efficiently during neuropathology than in controls, leading to the protective effect observed clinically. Thus, we identify BBB leakage, induced by the down-regulation of Dhh endothelial expression, as an important mechanism controlling Glia Limitans reactivity and barrier properties, and subsequently, tissue damage and clinical deficits in a model of human disease (Fig 8).
Discussion
While it is now well established that BBB breakdown leads to soluble factor and inflammatory cell infiltration into the PVS during neuropathology [4], the function of the Glia Limitans barrier is only just starting to be unraveled [15,20,24,26]. In the present study, we provide a different perspective on CNS barrier organization, unveiling the existence of 2 independent, dissociable states of the astrocyte and endothelial barriers in the neurovascular unit. Indeed, we confirmed that, just like the BBB, the Glia Limitans can form a protective barrier. For the first time, to our knowledge, we have demonstrated that BBB breakdown is sufficient to induce chronic barrier properties at the Glia Limitans, and we uncovered crosstalk from endothelial cells to astrocytes that restricts access of plasmatic proteins and inflammatory cells to the parenchyma during multiple sclerosis. Moreover, we showed that in Dhh ECKO mice, which display an open BBB, astrocytes express the tight junction protein CLDN4 under resting conditions. Therefore, under neuroinflammatory conditions, the Glia Limitans in Dhh ECKO mice is primed with stronger barrier properties, protecting against the onset and severity of EAE symptoms compared to control littermates.
Fig 8. Schematic of the BBB and Glia Limitans in Dhh ECKO versus control mice, in health and inflammatory disease.
Under resting conditions, in control mice, ECs express CLDN5 and CDH5, which maintain a closed BBB. In Dhh ECKO mice, CLDN5 and CDH5 are disrupted, leading to BBB permeability; in turn, astrocytes of the Glia Limitans up-regulate CLDN4, closing the Glia Limitans and restricting incoming plasmatic proteins to the PVS. Under inflammatory conditions, in both control and Dhh ECKO mice, CLDN5 and CDH5 are disrupted, leading to BBB permeability, astrocyte reactivity, and up-regulation of CLDN4. However, in Dhh ECKO mice, astrocytic end feet of the Glia Limitans have been "preconditioned" to form a barrier, explaining their ability to trap plasmatic proteins and inflammatory cells in the PVS, and thus to protect the parenchyma more efficiently than in controls. BBB, blood-brain barrier; CDH5, cadherin 5; CLDN4, claudin 4; CLDN5, claudin 5; CTNNβ1, catenin β1; DHH, desert hedgehog; EC, endothelial cells; FOXO1, forkhead box O1; PTCH1, Patched-1; PVS, perivascular space.
A model of pericyte-deficient mice featuring BBB breakdown has been described; in this model, however, endothelial permeability is due to increased transcytosis and not to junction degradation as in the Dhh ECKO mouse model. It would therefore be interesting to study the behavior of astrocytes in this model, notably their capacity to express CLDN4 and to restrain parenchymal access to plasmatic proteins. Should astrocyte barrier properties be observed in the pericyte-deficient mouse model, this would be suggestive of a more generalized role of BBB breakdown in driving astrocyte barrier properties, through either parallel or convergent mechanisms.
The expression of CLDN4 by astrocytes has so far only been identified in mouse models of CNS inflammation (an acute CNS inflammation model based on stereotactic injection of the pro-inflammatory cytokine IL-1β, and the EAE MOG 35-55 model of multiple sclerosis) [26]. However, in the present study, we demonstrated that CLDN4 is expressed by astrocytes under resting conditions in response to BBB opening in the Dhh ECKO mouse model and after cortical stereotactic injection of the pro-permeability factor VEGFA. The next step will be to examine astrocytic CLDN4 expression in other pathologies of the CNS, notably Alzheimer's disease, stroke, or amyotrophic lateral sclerosis, in which BBB permeability has been identified as a critical pathophysiological player. This could implicate CLDN4-mediated barrier function in astrocytes as a more generalized defense against BBB opening in other chronic diseases of the CNS.
The critical role of Hh signaling in CNS neuroinflammation was first highlighted in 2011; this study revealed that during EAE, the morphogen SHH is expressed by reactive astrocytes and participates in the maintenance of BBB integrity [4]. Following this discovery, our group found that DHH is physiologically expressed at the BBB in adults [8]. Here, we demonstrated for the first time that DHH is down-regulated at the BBB during EAE and that DHH knockdown is sufficient to induce BBB permeability by inhibiting CDH5 and CLDN5 expression through the modulation of FOXO1 activity, strengthening the idea that HH signaling is essential to control BBB integrity both physiologically and under multiple sclerosis conditions. Based on our results and the literature, we hypothesize that DHH is necessary to maintain BBB tightness under physiological conditions and that DHH down-regulation under inflammatory conditions might be offset by astrocytic SHH secretion to maintain BBB homeostasis during disease progression.
Over the past years, the field has begun to acknowledge the fact that the BBB is not the sole line of defense of the CNS and that the astrocytic end feet of the Glia Limitans might play a role in restricting access to the parenchyma. Indeed, it was first described that in spinal cord injury, astrocyte scar borders corral inflammatory cells within areas of damaged tissue [32,33]. Moreover, we have found that during EAE, reactive astrocytes of the Glia Limitans form tight junctions of their own containing CLDN4 [26], a junction protein also expressed in tightly sealed epithelia [34,35]. Noteworthy is the fact that down-regulation or reorganization of CLDNs and other tight junction proteins has been implicated in permeability in various tissues, particularly the gut [19,36,37]; however, reports of dynamic tight junction protein induction resulting in functional barrier formation have been rare [38,39]. Here, we have shown for the first time that genetically induced disruption of endothelial junctions is sufficient to induce CLDN4 expression at the Glia Limitans under resting condition, identifying an inducible astrocyte barrier mediated fully by signals transmitted by the open BBB. It has already been described, by our group and others, that astrocytes can send signals, notably VEGFA [18], thymidine phosphorylase (TYMP) [17], and SHH [4], to the BBB to modulate its state (tight versus permeable). In this study, we identify a reciprocal signaling pathway demonstrating that BBB endothelial junction disruption leads to CLDN4 expression at the Glia Limitans. Interestingly, endothelial cell capacity to send signals to neighboring cells has been previously identified, notably in the context of pericyte (mural cells associated with arterioles, capillaries, and venules) recruitment at the vascular wall [40]. Specifically, it has been shown that platelet-derived growth factor subunit b (PDGFB) is secreted from angiogenic sprout endothelium where it serves as an attractant for co-migrating pericytes, which in turn express platelet-derived growth factor receptor beta (PDGFRβ) [41]. Based on these arguments and our results, it appears highly likely that endothelial signals can be sent to astrocytes. Identifying such signals will be the aim of future studies by our group.
We then showed that in animals exhibiting an open BBB (Dhh ECKO mice), the astrocytic end feet of the Glia Limitans form a barrier more efficiently than in littermate controls, leading to the protective effect observed clinically in the model of multiple sclerosis. This is somewhat reminiscent of brain ischemic preconditioning, in which a mild non-lethal ischemic episode (preconditioning) can produce resistance to a subsequent, more severe ischemic insult [42,43]. Here, inducing BBB opening and PVS plasmatic protein accumulation produces resistance to the subsequent massive inflammatory infiltration induced by multiple sclerosis development. This could account for the infrequency of recurrent multiple sclerosis relapse/lesion formation at the same location in the CNS. Interestingly, among neuronal and non-neuronal cells, astrocytes are considered increasingly important in regulating cerebral ischemic tolerance [44], and a parallel can easily be drawn between those results and ours, which show a major role for "preconditioned" astrocytes in the control of "chronic neuroinflammation tolerance" and protection against further relapse.
In light of the above observations, we may assume that the CNS has the ability to protect itself against isolated BBB leakage episodes through a secondary barrier at the Glia Limitans that takes over once the BBB is open. Moreover, this suggests that manipulating the BBB and Glia Limitans in combination may have greater potential than targeting either alone to control CNS entry of leukocytes and pro-inflammatory soluble factors in conditions such as multiple sclerosis, and perhaps more widely. Indeed, taking into account both components of the neurovascular unit is of translational interest, notably to limit CNS parenchymal access of pathogenic agents by strengthening the Glia Limitans once the BBB is open, in cardiovascular diseases such as brain ischemic stroke [45], neuroinfections [46], and neurodegeneration (Parkinson's and Alzheimer's diseases and vascular dementia) [47], or to facilitate parenchymal access of drugs, by opening the BBB and Glia Limitans together, in CNS tumor treatment [48]. Along similar lines, it is unknown how the barrier properties of the Glia Limitans may impact the pharmacokinetics of drugs that must enter the CNS parenchyma in conditions such as multiple sclerosis, which may account for treatment failures.
In summary, our study first demonstrates the critical role of DHH in maintaining BBB integrity. We find that DHH is down-regulated during the animal model of multiple sclerosis and that Dhh knockdown leads to BBB opening. Using Dhh knockdown as a tool to cause BBB opening, we then show that BBB permeability is sufficient to induce a secondary barrier at the Glia Limitans, mediated by CLDN4 and astrocyte reactivity. These findings not only highlight the capacity for bidirectional signaling between the endothelial BBB and the astrocytic Glia Limitans in modulating the double barriers of the CNS but also provide support for a novel concept of "chronic neuroinflammatory tolerance", in which chronic induction of Glia Limitans barrier properties by BBB opening may lead to a protective effect against neuroinflammatory disease activity and progression.
Human tissues
Cortical sections from multiple sclerosis patients (active lesions) and healthy controls (frontal cortex) were obtained from the NeuroCEB bio bank (https://www.neuroceb.org/fr). The sections were 30 μm thick and obtained from fresh frozen samples.
Mice
Dhh Floxed (Dhh Flox ) mice were generated at the "Institut Clinique de la Souris" through the International Mouse Phenotyping Consortium (IMPC) from a vector generated by the European conditional mouse mutagenesis program (EUCOMM), as described before [8].
The Cre recombinase in cadherin5 (Cdh5)-Cre ERT2 mice was activated by intraperitoneal injection of 1 mg of tamoxifen (Sigma Aldrich, St. Louis, Missouri, United States of America) for 5 consecutive days at 8 weeks of age. Mice were phenotyped 2 weeks later. Successful and specific activation of the Cre recombinase was verified by measuring recombination efficacy in Cdh5-Cre ERT2 ;Rosa26 mTmG mice (S2A Fig). Importantly, Dhh endothelial knockdown impacts neither CNS angiogenesis (S2B and S2C Fig and S2 Data) nor brain angioarchitecture (S2D Fig). The Cdh5-Cre ERT2 mice and C57BL/6 mice were purchased from Jackson Laboratories (Bar Harbor, Maine, USA).
Neurovascular fraction enrichment from mouse CNS
Mice were humanely killed by cervical dislocation, and the head was cut and rinsed with 70% ethanol. The brain and spinal cord were then harvested, and the cerebellum, olfactory bulb, and white matter were removed from the brain with sterile forceps. Additionally, the meninges were eliminated by rolling a sterile cotton swab over the surface of the cortex. The cortex and spinal cord were then transferred into a Potter homogenizer containing 2 mL of buffer A (HBSS 1X without phenol red (Gibco, Waltham, Massachusetts, USA), 10 mM HEPES (Gibco), and 0.1% bovine serum albumin (BSA) (Sigma Aldrich)), and the CNS tissue was homogenized; the homogenate was collected in a 15-mL tube. The homogenizer was rinsed with 1 mL of buffer A, which was added to the 2-mL homogenate. Cold 30% dextran solution was then added to the tube (V:V) to obtain a 15% dextran working solution, which was centrifuged for 25 minutes at 3,000 g and 4 ˚C without brakes. After centrifugation, the pellet (neurovascular components and red cells) was collected, and the supernatant (dextran solution and neural components) was centrifuged again to recover the residual vessels. Neurovascular components were then pooled and resuspended in 4 mL of buffer B (HBSS 1X Ca2+/Mg2+-free with phenol red (Gibco), 10 mM HEPES (Gibco), and 0.1% BSA (Sigma Aldrich)).
Neurovascular fraction enrichment for RT-PCR, western blots, or immunohistochemistry
After centrifugation of the cell suspension, the pellet was washed 3 times with buffer B and filtered through a 100-μm nylon mesh (Millipore Corporation, Burlington, Massachusetts, USA). The nylon mesh was washed with 7 mL of buffer B to collect the retained enriched neurovascular fractions. The suspension was then centrifuged for 10 minutes at 1,000 g, and the pellet was suspended in 300 μL of radioimmunoprecipitation assay (RIPA) lysis buffer for western blot analysis or in 1,000 μL of Tri-Reagent (MRC, Cincinnati, Ohio, USA) for quantitative reverse transcription polymerase chain reaction (qRT-PCR) analysis. For immunohistochemistry, the pellet was suspended in 3 mL of a 1:80 solution of matrigel (Corning, Steuben, New York, USA) in Dulbecco's Modified Eagle Medium (DMEM, 1 g/L glucose, Mg2+, Ca2+; Gibco), distributed on a labtek (Sarstedt, Nümbrecht, Germany) (1 mouse brain is needed to seed 1 labtek), and incubated for 30 minutes at 37 ˚C. Finally, the enriched neurovascular fraction embedded in the matrigel (Corning) solution was fixed with 10% formalin for 10 minutes.
Cytokines/growth factors/chemicals
Human IL-1β was purchased from PeproTech (Rocky Hills, New Jersey, USA), and human and mouse VEGF-165 (VEGFA) were purchased from CliniSciences (Nanterre, France). Based on previous studies, human IL-1β and human VEGF-165 were routinely used at 10 ng/mL [19,49]. Mouse VEGF-165 was used at a concentration of 20 ng/μL. The FOXO1 inhibitor AS1842856 was purchased from Merck (Kenilworth, New Jersey, USA) and used at 100 nM [50]. D-Mannitol was purchased from Sigma Aldrich (St. Louis, Missouri, USA) and used at 100 mM [51].
Quantitative RT-PCR
The relative expression of each mRNA was calculated by the comparative threshold cycle method and normalized to β-actin mRNA expression.
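As a hedged illustration of the comparative threshold cycle method named above, a minimal 2^(-ΔΔCt) calculation might look as follows; all Ct values are hypothetical placeholders, not data from the study.

```python
# Minimal sketch of the comparative threshold cycle (2^-ddCt) method,
# normalizing each target gene to beta-actin. Values are hypothetical.

def relative_expression(ct_target, ct_actin, ct_target_ctrl, ct_actin_ctrl):
    """Return the fold change of a target mRNA versus the control condition."""
    d_ct_sample = ct_target - ct_actin            # normalize to beta-actin
    d_ct_control = ct_target_ctrl - ct_actin_ctrl
    dd_ct = d_ct_sample - d_ct_control            # compare to the control condition
    return 2 ** (-dd_ct)

# Example: a target Ct shifted by 4 cycles versus control (made-up numbers)
fold = relative_expression(ct_target=28.0, ct_actin=18.0,
                           ct_target_ctrl=24.0, ct_actin_ctrl=18.0)
print(f"fold change vs. control: {fold:.2f}")     # 2^-4 = 0.06, i.e., down-regulated
```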
Western blots
Protein expression was evaluated by SDS-PAGE. Protein loading was controlled using a rabbit monoclonal anti-β-actin antibody (Cell Signaling). Secondary antibodies were from Invitrogen. The signal was revealed using an Odyssey Infrared imager (LI-COR, Lincoln, Nebraska, USA). For quantification, the mean pixel density of each band was measured using ImageJ software (NIH, Bethesda, Maryland, USA); data were standardized to β-actin, and the fold change versus control was calculated.
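A minimal sketch of this densitometric normalization, with made-up band densities standing in for ImageJ measurements:

```python
# Hedged sketch of the western blot quantification described above: band mean
# pixel densities are standardized to beta-actin, then expressed as fold
# change versus the control condition. All values are illustrative.

def fold_change(band, actin, band_ctrl, actin_ctrl):
    return (band / actin) / (band_ctrl / actin_ctrl)

# e.g., a target band in knockdown vs. control lysates (made-up densities)
print(fold_change(band=1200.0, actin=900.0, band_ctrl=300.0, actin_ctrl=850.0))
```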
Stereotactic injection
Ten-week-old C57BL/6 mice (4 mice per condition) were anaesthetized using isoflurane (3% induction and 1% maintenance) (Virbac Schweiz, Glattbrugg, Germany) and placed in a stereotactic frame (Stoelting Co., Illinois, USA). To prevent eye dryness, an ophthalmic ointment was applied to the ocular surface to maintain eye hydration during surgery. The skull was shaved, and the skin was incised over 1 cm to expose the skullcap. A hole was then drilled into the skull, using a pneumatic station (S001+TD783, Bien Air), until reaching the dura mater. A total of 3 μL of murine VEGFA (20 ng/μL), 3 μL of healthy mouse plasma, or 3 μL of vehicle control (PBS) was then delivered at 0.01 μL/s into the frontal cortex at coordinates 1 mm posterior to bregma, 2 mm left of the midline, and 1.5 mm below the surface of the cortex [36].
Mice received a subcutaneous injection of buprenorphine (0.05 mg/kg) (Ceva Santé Animale, Libourne, France) 30 minutes before surgery and again 8 hours post-surgery to ensure constant analgesia during the procedure and postoperatively. Mice were humanely killed by pentobarbital (Richter Pharma, Wels, Austria) overdose at 24 hours post injection. For histological assessment, the brain of each animal was harvested.
Immunohistochemistry
Prior to tissue collection and staining, mice were transcardially perfused with PBS (10 mL) followed by 10% Formalin (10 mL) to remove intravascular plasma proteins. Brain and spinal cord samples were either fixed in 10% formalin for 3 hours, incubated in 30% sucrose overnight, OCT embedded and cut into 9-μm thick sections, or directly OCT embedded and cut into 9 μm thick sections. Cultured cells were fixed with 10% formalin for 10 minutes. Human frozen sections were used directly without any prior treatment. Concerning the fixed sections, for CLDN4, prior to blocking, sections were soaked in Citrate (pH 7.5; 100˚C). For CLDN5, prior to blocking, sections were soaked in EDTA (pH 6.0; 100˚C). For CD45, sections were treated with 0.5 mg/mL protease XIV (Sigma Aldrich) at 37˚C for 5 minutes. Primary antibodies were used at 1:100 except CLDN4 (1:50), FGB (1:1,000), and ALB (1:1,000). Samples were examined using a Zeiss Microsystems confocal microscope (Oberkochen, Germany), and stacks were collected with z of 1 μm. For immunofluorescence analyses, primary antibodies were resolved with Alexa Fluor-conjugated secondary polyclonal antibodies (Invitrogen), and nuclei were counterstained with DAPI (1:5000) (Invitrogen). For all immunofluorescence analyses, negative controls using secondary antibodies only were done to check for antibody specificity.
Morphometric analysis
Morphometric analyses were carried out using NIH ImageJ software (NIH).
BBB permeability was evaluated by measuring tight junction integrity and plasmatic protein extravasation. Brain and spinal cord sections were immunostained for the expression of CLDN5/CDH5 and FGB/IgG/ALB, respectively. For each brain or spinal cord section, CLDN5+, CDH5+, FGB+, IgG+, and ALB+ areas were quantified in 20 pictures taken at the margins of the lesion area under 40× magnification. One section was quantified per spinal cord and per mouse (3 different zones are displayed on the same section: 1 cervical, 1 thoracic, and 1 lumbar, to give a global view of the lesion).

Leukocyte densities were evaluated in sections stained for the CD45 leukocyte population. For each brain or spinal cord section, CD45+ leukocytes were counted in 20 pictures randomly taken under 40× magnification. One section was quantified per spinal cord and per mouse (3 different zones are displayed on the same section: 1 cervical, 1 thoracic, and 1 lumbar, to give a global view of the inflammatory lesion).

Demyelination was evaluated in spinal cord sections stained for the expression of MBP. For each spinal cord section, the MBP+ area was quantified in 10 pictures taken in and around inflammatory lesion sites under 20× magnification. One section was quantified per spinal cord and per mouse (3 different zones are displayed on the same section: 1 cervical, 1 thoracic, and 1 lumbar, to give a global view of the lesion).
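A minimal sketch of this per-field area morphometry, assuming simple intensity thresholding of the acquired fields; the images and cutoff below are synthetic stand-ins:

```python
import numpy as np

# Hedged sketch of the area-based morphometry (e.g., MBP+ or CLDN5+ area):
# threshold each field, take the percentage of positive pixels, then average
# over the fields quantified per animal. Data and threshold are synthetic.

rng = np.random.default_rng(1)
fields = [rng.random((512, 512)) for _ in range(20)]   # stand-ins for 40x images
threshold = 0.8                                        # illustrative intensity cutoff

pct_positive = [100.0 * (img > threshold).mean() for img in fields]
print(f"positive area: {np.mean(pct_positive):.1f}% +/- {np.std(pct_positive):.1f}%")
```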
Plasmatic protein and leukocyte infiltrate distribution at the neurovascular unit was evaluated in brain or spinal cord sections (1) triple stained for PECAM1 or CDH5 (markers of the BBB), IgG (plasmatic proteins), and AQP4 or GFAP (markers of the Glia Limitans); or (2) double stained for IgG (plasmatic proteins), FITC Dextran (exogenous tracer), or CD45 (leukocyte infiltrate), together with LAM (marker of basement membranes). For each section, the distribution (between the lumen, the PVS, and the parenchyma) of IgG, 70 kDa FITC Dextran, or leukocyte infiltrate was quantified for 5 to 6 neurovascular units randomly taken under 60× magnification, each one from a different animal. We used negative working images to highlight the endothelial BBB and the astrocytic Glia Limitans and outlined them using dotted lines. The dotted lines were then transferred to the plasmatic protein or leukocyte infiltration images so that their distribution within the 3 compartments (lumen, PVS, and parenchyma) could be quantified.
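The compartmental quantification can be sketched as follows, assuming the dotted outlines have already been converted to binary masks; the masks and signal image below are synthetic and the function name is illustrative:

```python
import numpy as np

# Hedged sketch of the three-compartment quantification described above: given
# binary masks (derived from the endothelium and Glia Limitans outlines) and a
# signal image (IgG, 70 kDa FITC Dextran, or CD45), compute the percentage of
# signal falling in the lumen, PVS, and parenchyma.

def compartment_distribution(signal, masks):
    total = sum(signal[m].sum() for m in masks.values())
    return {name: 100.0 * signal[m].sum() / total for name, m in masks.items()}

rng = np.random.default_rng(0)
signal = rng.random((64, 64))                      # stand-in for an IgG image
labels = np.zeros((64, 64), dtype=int)
labels[:, 20:40], labels[:, 40:] = 1, 2            # 0 = lumen, 1 = PVS, 2 = parenchyma
masks = {"lumen": labels == 0, "PVS": labels == 1, "parenchyma": labels == 2}
print(compartment_distribution(signal, masks))
```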
Statistical analyses
Results are reported as mean ± SEM. Comparisons between groups were analyzed for significance with the nonparametric Mann-Whitney U test; the nonparametric Kruskal-Wallis test followed by the Dunn multiple comparison test when more than 2 groups were compared; the chi-squared test for the distribution of plasmatic proteins and inflammatory cells in the neurovascular unit; or a nonlinear regression (Boltzmann sigmoidal) for the EAE scoring analysis, using GraphPad Prism v8.0.2 (GraphPad, San Diego, California, USA). Differences between groups were considered significant when P ≤ 0.05 (*P ≤ 0.05, **P ≤ 0.01, ***P ≤ 0.001).
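A hedged sketch of this workflow in Python with scipy, for readers who prefer a scriptable equivalent of the Prism analyses; all data below are placeholders, not values from the study:

```python
import numpy as np
from scipy import stats
from scipy.optimize import curve_fit

# Two-group comparison (e.g., EAE peak scores); made-up values
a = np.array([3.0, 3.5, 2.8, 3.4, 3.1])   # control
b = np.array([2.0, 2.5, 2.1, 2.6, 2.3])   # knockdown
print(stats.mannwhitneyu(a, b, alternative="two-sided"))

# Kruskal-Wallis for >2 groups (a Dunn post hoc would follow, e.g. scikit-posthocs)
print(stats.kruskal(a, b, a + 0.2))

# Chi-squared test on a compartmental distribution (illustrative counts)
table = np.array([[25, 75],    # control: PVS vs. parenchyma
                  [77, 23]])   # knockdown
chi2, p, dof, expected = stats.chi2_contingency(table)
print(chi2, p)

# Boltzmann sigmoidal fit of a mean clinical-score time course
def boltzmann(t, bottom, top, t_half, slope):
    return bottom + (top - bottom) / (1 + np.exp((t_half - t) / slope))

days = np.arange(8, 22, dtype=float)
scores = boltzmann(days, 0.0, 3.2, 14.0, 1.5)      # noise-free synthetic curve
params, _ = curve_fit(boltzmann, days, scores, p0=[0, 3, 14, 1])
print(params)                                       # ~[0, 3.2, 14, 1.5]
```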
Supporting information

S1 Text. Supporting information file containing the S1 Data and S2 Data legends and DOI links (A), the Supporting Methods (B), and the associated References (C). (DOCX)

S1 Raw Images. Supporting information file containing the original, uncropped, and minimally adjusted images supporting all blot and gel results reported in Fig 2 panel G, Fig 4 […].

S2 Fig. (Related to Fig 3, Fig 4, Fig 6 and Fig 7). Cadherin5-Cre ERT2 recombinase activation in blood vessels is successful and specific. (A) Brain and spinal cord sections were harvested from Cadherin5-Cre ERT2 ;Rosa26 mTmG mice and littermate controls and immunostained with anti-GFP (in green) and anti-PECAM1 (in red) antibodies. Dhh endothelial knockdown does not impact CNS angiogenesis. (B) Spinal cord sections were harvested from Dhh ECKO mice and littermate controls and immunostained with an anti-IB4 (in green) antibody. The IB4-positive area was quantified (Dhh ECKO n = 7, control n = 6). (C) Cortical sections were harvested from Dhh ECKO mice and littermate controls and immunostained with an anti-IB4 (in green) antibody. The IB4-positive area was quantified (Dhh ECKO n = 6, WT n = 6). Dhh endothelial knockdown does not impact brain angioarchitecture. (D) The vascular network in the brain of Dhh ECKO mice and control littermates was imaged by micro-computed tomography (micro-CT). NS, Mann-Whitney U test. The underlying data for S2 Fig can be found in S1 Data (individual numerical data (Excel file)) and S2 Data (statistical analysis (Prism file)) (https://doi.[…]). (TIF)

S5 Fig. (Related to Fig 4). Dhh ECKO-induced BBB breakdown is sufficient to induce a secondary CNS protective barrier at the Glia Limitans. (A) Spinal cord sections were harvested from Dhh ECKO mice and littermate controls and immunostained with anti-LAM (in green) and anti-IgG (in red) antibodies (nuclei were stained with DAPI (in blue)). Representative LAM/IgG staining is shown. (B) Negative working images of the LAM channel were used to highlight the endothelial (EBM) and astrocyte (ABM) basement membranes, using orange dotted lines. The outlines were then transferred to the IgG images to discriminate the distribution of IgG between the lumen, PVS, and parenchyma. (TIF)

S6 Fig. (Related to Fig 4). Small intestine samples are used as a positive control for the quantification of CLDN4 expression by western blot. (A) Representative blots of the CLDN4 expression level in control mouse neurovascular unit lysates and mouse small intestine lysates are shown. There is astrocyte reactivity but no astrocytic CLDN4 up-regulation at the capillary level in Dhh ECKO mice […]. (TIF)

S7 Fig. (Related to Fig 5 and Fig 7). Both VEGFA and Mannitol induce HBMEC permeability in vitro. (A) Cultured HBMECs were treated with PBS, VEGFA, or Mannitol for 6 h, and HBMEC monolayer permeability to 70 kDa FITC Dextran was quantified. Mice with endothelial Dhh knockdown display a reinforced barrier at the Glia Limitans, restraining access of the inflammatory infiltrate to the parenchyma in a model of multiple sclerosis: (B) Negative working images of the GFAP/CDH5 channels were used to highlight the endothelial (EBM) and astrocyte (ABM) basement membranes, using orange dotted lines. The outlines were then transferred to the FGB images to discriminate the distribution of FGB between the lumen, PVS, and parenchyma. (C) Negative working images of the LAM channel were used to highlight the endothelial (EBM) and astrocyte (ABM) basement membranes, using blue dotted lines.
The outlines were then transferred to the CD45 images to discriminate the distribution of leukocytes between the lumen, PVS, and parenchyma. *P ≤ 0.05, ****P ≤ 0.0001. (TIF)
Biochemical and Biophysical Characterization of Recombinant Yeast Proteasome Maturation Factor Ump1
Protein degradation is essential for maintaining cellular homeostasis. The proteasome is the central enzyme responsible for non-lysosomal protein degradation in eukaryotic cells. Although proteasome assembly is not yet completely understood, a number of cofactors required for proper assembly and maturation have been identified. Ump1 is a short-lived maturation factor required for the efficient biogenesis of the 20S proteasome. Upon the association of the two precursor complexes, Ump1 is encased and is rapidly degraded after the proteolytic sites in the interior of the nascent proteasome are activated. In order to further understand the mechanisms behind proteasomal maturation, we expressed and purified yeast Ump1 in E. coli for biophysical and structural analysis. We show that recombinant Ump1 is purified as a mixture of different oligomeric species and that oligomerization is mediated by intermolecular disulfide bond formation involving the only cysteine residue present in the protein. Furthermore, a combination of bioinformatic, biochemical and structural analysis revealed that Ump1 shows characteristics of an intrinsically disordered protein, which might become structured only upon interaction with the proteasome subunits.
At present, no three-dimensional structure of Ump1 proteins is available, although there is some information on their functional domains. The C-terminal region, encompassing residues 51-147 of yeast Ump1 or 61-141 of human Ump1, is required and sufficient for interaction with proteasome precursor complexes. Residues 68 to 72 of hUmp1 are essential for this interaction [17]. By contrast, the region containing the first 50 amino acid residues of Ump1 is neither sufficient nor required for incorporation of Ump1 into precursor complexes [17]. These Ump1 regions are likely to operate by interacting with distinct substructures of different proteasome subunits. Indeed, hUmp1 binds directly to several α and β subunits and associates with α rings in vitro [19,22,24]. In line with this finding, hUmp1 appears to be essential for the binding of the β2 subunit to α ring precursor complexes, and therefore for the initiation and assembly of β rings [24]. In vitro experiments showed that hUmp1 binds directly to the β5 subunit [24]. Interestingly, the yeast β5 propeptide becomes dispensable in cells lacking Ump1, but is essential for viability in its presence [10,25]. In vivo depletion of hUmp1 by siRNA experiments, however, prevented the incorporation of β5 into nascent proteasome precursor complexes [19].
Here we report the biochemical and biophysical characterization of recombinant yeast Ump1. Ump1 purified as a heterogeneous mixture of monomers and dimers. Dimer formation is mediated by Cys115. Mutation of this single cysteine to serine abolished dimer formation, leading to preparations enriched in monomeric Ump1. Nevertheless, the purified mutated monomers were conformationally too heterogeneous to crystallize. A comparative biophysical analysis showed that Ump1 displays characteristics of a natively disordered protein. This biophysical property is independent of the oligomeric state of the protein and suggests that the Ump1 structure might be stabilized upon interaction with proteasomal subunits and concomitant incorporation into proteasomal precursor complexes.
Materials and methods
The plasmid pJD492-UMP1 was designed to express UMP1-6xHis, yielding a non-cleavable, C-terminally 6His-tagged version of S. cerevisiae Ump1. A PCR fragment containing the nucleotide sequence of the complete UMP1 ORF was cloned into pET11a using XbaI and BamHI restriction sites. The plasmid encodes the full-length Ump1 followed by the additional amino acid sequence GYHHHHHH. This plasmid was used as a template for construction of the mutant plasmid pJD492-UMP1-C115S by PCR. The following primers were synthesized to introduce the mutated sequence: 5′-CTA CTG AAC AAA GAG TCC AGC ATC GAT TGG GAG-3′ and 5′-CTC CCA ATC GAT GCT GGA CTC TTT GTT CAG TAG-3′ (bold represents the mutation site). The mutation was confirmed by sequencing (Eurofins). Both plasmids, encoding the 6His-tagged versions of yeast Ump1 under the control of a T7 promoter, were used to transform E. coli BL21 CodonPlus (Stratagene) competent cells.
For expression of S. cerevisiae Ump1 and Ump1-C115S, E. coli BL21 CodonPlus (Stratagene) cells transformed with the expression plasmids were grown in lysogeny broth (LB) medium containing ampicillin and chloramphenicol (final concentrations of 100 µg/mL and 34 µg/mL, respectively) and incubated at 37 ºC until OD600 reached approximately 0.3. The incubation temperature was reduced to 24 ºC before induction of protein expression by addition of IPTG (Biosynth) to a final concentration of 2 mM. Cells were harvested by centrifugation 4 h after induction, and the cell pellet from each liter of culture was resuspended in 20 mL of lysis buffer (0.1% (v/v) Tween 20, 300 mM NaCl, 10 mM imidazole in PBS (phosphate buffered saline: 137 mM NaCl, 2.7 mM KCl, 10 mM Na2HPO4, 4 mM KH2PO4, pH 8.0)) supplemented with 1 mg/mL of lysozyme, and stored at -20 ºC. Upon thawing, complete EDTA-free protease inhibitor cocktail (Roche), 5 µg/mL DNAse I, and 10 mM MgCl2 (final concentration) were added to the cell lysate, which was centrifuged, and the supernatant was loaded onto a 5 mL HisTrap column (GE Healthcare) previously equilibrated with buffer A (20 mM sodium phosphate pH 8.0, 500 mM NaCl, and 10 mM imidazole). The column was washed with 10 column volumes of buffer A, and bound proteins were eluted with 100 mM imidazole in buffer A.
Fractions containing recombinant Ump1 were pooled and desalted on a HiPrep 26/10 column (GE Healthcare) previously equilibrated in 50 mM Tris-HCl pH 7.5. The desalted Ump1 fraction was further purified on an anion-exchange column (MonoQ; GE Healthcare) using a linear 0 to 1 M NaCl gradient in 50 mM Tris-HCl pH 7.5. The oligomeric state of the protein was verified by size-exclusion chromatography on a Superdex 75 column (GE Healthcare) equilibrated with 50 mM Tris pH 7.5, 100 mM NaCl. The column was calibrated using aprotinin (6.5 kDa), ribonuclease A (13.7 kDa), chymotrypsinogen (25.0 kDa), and ovalbumin (43.0 kDa) as standards. The void volume (Vo) was calculated by determining the elution volume of dextran blue. The partition coefficient (Kav) for each protein was obtained with the following equation: Kav = (Ve - Vo)/(Vt - Vo), where Ve is the elution volume and Vt is the total bed volume. A standard calibration curve of Kav versus log(MW) was used to calculate the apparent molecular mass of the distinct recombinant Ump1 molecular species. The Stokes radius (Rs) for the globular protein standards was calculated with the equation log(Rs) = -(0.204 ± 0.023) + (0.357 ± 0.005)·log(MW) [26]. These values were used to create a calibration curve (1000/Ve vs. Rs), which allowed the determination of the Rs for the distinct Ump1 molecular species. The theoretical relationships between Rs and MW for proteins in native (Native), natively unfolded pre-molten globule (nu-PMG), and urea-unfolded (un) conformations were plotted according to the equations given in ref. [26].

Hydrodynamic radius (RH) measurements were made at 25 °C with a Zetasizer Nano ZS DLS apparatus (Malvern Instruments). A sample (50 µl) containing 0.5 mg/ml protein in 50 mM Tris-HCl pH 7.5, 100 mM NaCl was centrifuged and filtered through a 0.2 μm filter to remove suspended particles, and placed in a quartz cuvette. Particle diffusion coefficients were calculated from autocorrelated light intensity data and converted to RH with the Stokes-Einstein equation (Dt = kBT/6πηRH, where kB is the Boltzmann constant, T is the temperature in Kelvin, η is the solvent viscosity, and RH is the hydrodynamic radius of the protein). A histogram of the percentage of the scattering mass versus RH was calculated using DTS (nano) 6.0 software (Malvern Instruments). Data represent an average of 3 measurements for each sample.

Figure 2. Multiple sequence alignment of Ump1 orthologs. The alignment (see Table S1 for protein % similarities) was performed with ClustalW2 and rendered with Aline [35]. Disorder was predicted with RONN [30] for the selected amino acid sequences, and a consensus line for disorder prediction (http://www.bioinformatics.nl/~berndb/ronn.html) is printed below the alignment: the black line highlights residues where disorder is predicted for all the displayed sequences, and the blue line represents regions where disorder is predicted for at least 80% of the represented Ump1 orthologs. The position of the non-conserved Cys115 is indicated by a red star, the conserved motif HPLE is indicated by red triangles, and the Cys37 residue conserved in mammalian orthologs is boxed. The blue-boxed arrow above Arg84 points to one of the trypsin-cleavage sites identified by N-terminal sequencing after limited proteolysis experiments (Figure S1). SCHCE, Ump1 from Saccharomyces cerevisiae (UniProt accession code P38293); SCHPO, Ump1 from Schizosaccharomyces pombe (O74416); MOUSE, Ump1 from Mus musculus (Q9CQT5); PONAB, Ump1 from Pongo abelii (Q5R9L9); HUMAN, Ump1 from Homo sapiens (Q9Y244); BOVIN, Ump1 from Bos taurus (Q3SZV5); DICDI, Ump1 ortholog from Dictyostelium discoideum (Q55G18); DRMEG, Ump1 from Drosophila melanogaster (Q9VIJ5).
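To make the calibration arithmetic from the Materials and Methods concrete, the following sketch reproduces the Kav and Stokes-radius calibrations and the Stokes-Einstein conversion; the elution volumes, Vo, Vt, and the diffusion coefficient are hypothetical placeholders, not values from this study:

```python
import numpy as np

# SEC calibration sketch: standard MWs are from the text; Ve, Vo, Vt are made up.
standards_mw = np.array([6.5e3, 13.7e3, 25.0e3, 43.0e3])   # Da
ve = np.array([14.2, 12.8, 11.6, 10.3])                    # mL, hypothetical
vo, vt = 8.0, 24.0                                          # mL, hypothetical

kav = (ve - vo) / (vt - vo)

# Linear calibration of Kav vs. log10(MW), inverted for an unknown peak
slope, intercept = np.polyfit(np.log10(standards_mw), kav, 1)
def apparent_mw(ve_unknown):
    k = (ve_unknown - vo) / (vt - vo)
    return 10 ** ((k - intercept) / slope)

# Stokes radii of the standards from log(Rs) = -0.204 + 0.357*log(MW)  (ref [26])
rs_standards = 10 ** (-0.204 + 0.357 * np.log10(standards_mw))

# Second calibration: 1000/Ve vs. Rs, then Rs for an unknown peak
m, c = np.polyfit(1000.0 / ve, rs_standards, 1)
def stokes_radius(ve_unknown):
    return m * (1000.0 / ve_unknown) + c

print(apparent_mw(11.0), stokes_radius(11.0))

# DLS: Stokes-Einstein conversion of a diffusion coefficient to RH
kB, T, eta = 1.380649e-23, 298.15, 8.9e-4     # J/K, K, Pa*s (water at 25 C)
Dt = 6.0e-11                                   # m^2/s, hypothetical
rh_nm = kB * T / (6 * np.pi * eta * Dt) * 1e9
print(f"RH = {rh_nm:.1f} nm")
```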
Figure 1. Recombinant Ump1 is purified as a mixture of molecular species with different charges and oligomeric states.
A) Ion-exchange chromatographic profile of the metal-affinity purified Ump1 fraction shows that this protein is further separated into two peaks corresponding to species with different isoelectric points (peak 1 and peak 2). Conductivity is represented by a dotted line. B) Electrophoretic analysis of Ump1 fractions corresponding to peak 1 (monomer) and peak 2 (dimer) of the ion-exchange chromatography. The wild-type Ump1 monomer is frequently contaminated with dimers under nonreducing conditions (first lane). The Ump1-C115S mutant elutes from the ion-exchange column as a single peak (data not shown) and migrates as the wild-type Ump1 monomer. Proteins were loaded in sample buffer without (-) or with (+) 10 mM DTT prior to electrophoresis in a 15% SDS-PAGE (here stained with Coomassie Blue). MW, Molecular weight marker; values in kDa.
Figure 3. Determination of Ump1 apparent molecular mass and Stokes radii (Rs).
A) Size-exclusion chromatography of wild-type Ump1 dimer and Ump1-C115S monomer. Superdex 75 calibration was performed with the following molecular weight protein standards: 1 - aprotinin (6.5 kDa), 2 - ribonuclease A (13.7 kDa), 3 - chymotrypsinogen (25.0 kDa), and 4 - ovalbumin (43.0 kDa). Ump1 wild-type dimer and C115S monomer display atypical mobility, eluting with apparently higher molecular masses of 65 and 40 kDa, respectively (calculated with the calibration equation log(MW) = -2.0693·Kav + 4.9698, R² = 0.99607, obtained after column calibration). Using these data, the apparent Rs values calculated for wild-type Ump1 dimer and C115S monomer are 34 and 27 Å, respectively (as calculated from the equation Rs = 0.3467·(1000/Ve) - 5.7834, R² = 0.99061; Ve = elution volume). B) Logarithmic plot of Rs versus molecular mass (MW) of the corresponding proteins. The straight lines represent the average theoretical Rs for the proteins used as standards, assuming a native conformational state (native), a natively unfolded pre-molten globule-like conformation (nu-PMG), or a non-native urea-denatured conformation (un), according to the equations given in ref. [26]. The error bars represent the standard deviation for each plot as calculated from ref. [26]. Ump1 monomer (C115S) and Ump1 dimer correspond to the orange and red circles, respectively, and fall within the range of values expected for a natively unfolded pre-molten globule conformation. For comparison purposes, experimentally determined Rs values [36] are shown for pre-molten globule conformations of proteins with molecular masses of 43 kDa (MMP-1 interstitial collagenase, orange triangle), 28 kDa (tryptophan synthase, blue circle), and 15 kDa (tumor suppressor p16, blue rhombus).

Limited proteolysis assays were performed by incubating the purified recombinant protein with trypsin at a ratio of 1000:1 (w/w) in 50 mM Tris-HCl pH 8.0 and 100 mM NaCl at 37 ºC. Aliquots were collected at specific time points (10 and 30 min), and reactions were stopped by incubation at 95 ºC for 5 minutes in standard sample buffer without or with 10 mM DTT. The cleavage products were separated by SDS-PAGE (17.5% acrylamide gel), transferred onto a PVDF membrane, and analysed by Edman degradation.
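As a worked check of the reasoning in Figure 3B, the sketch below compares the observed Rs values (27 and 34 Å) with the Rs expected for compact native proteins of the same mass, using only the native-state relation quoted in the Methods (ref. [26]); the gap between expected and observed values illustrates the non-compact behavior of both species:

```python
import numpy as np

# Native-state Rs expectation from log(Rs) = -0.204 + 0.357*log(MW)  (ref [26])
def rs_native(mw_da):
    return 10 ** (-0.204 + 0.357 * np.log10(mw_da))

for label, mw, rs_obs in [("Ump1-C115S monomer (18 kDa)", 18e3, 27.0),
                          ("wild-type Ump1 dimer (36 kDa)", 36e3, 34.0)]:
    # Expected ~21 and ~27 A for compact folds; both observed values exceed this
    print(f"{label}: native Rs ~{rs_native(mw):.0f} A, observed {rs_obs:.0f} A")
```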
For analysis of the secondary structure content of the N- and C-terminal peptides, recombinant Ump1-C115S was treated with trypsin for 30 min, and the solution obtained after limited proteolysis was applied to a 1 mL HisTrap column (GE Healthcare) previously equilibrated with buffer A; the unbound N-terminal fragment was collected by washing with 2 column volumes of buffer A. The bound proteins were eluted with 100 mM imidazole in buffer A and contained a mixture of full-length Ump1 and the His-tagged C-terminal peptide. The purified N-terminal fragment was dialysed against 50 mM Tris-HCl pH 7.5, 100 mM NaCl, concentrated to 5 mg/mL, and used for CD analysis.
The secondary structure content of full-length Ump1 was assessed by far-UV circular dichroism (CD) spectroscopy. Measurements were performed on a Jasco J-815 spectrometer equipped with a Peltier-controlled thermostated cell support. Ump1 solutions were 0.1 mg/ml in 50 mM Tris-HCl pH 7.5, 100 mM NaCl, with or without 1 mM DTT (freshly prepared and incubated for 1 h at 4 ºC). CD spectra were acquired at 25 ºC, with the instrument set to 2 nm bandwidth, 1 s response, 200 nm/min scanning speed, and 10 accumulations. Spectra were deconvoluted with CDNN 2.1 [27]. Thermal unfolding was performed by raising the temperature at a rate of 1 ºC/min between 25 and 90 ºC while monitoring the CD signal at 205 nm. The unfolded protein fraction was calculated by normalizing the CD signal variation.
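A minimal sketch of the normalization used for the thermal unfolding data, assuming the folded and unfolded baselines can be approximated by the low- and high-temperature plateaus; the melting curve below is synthetic:

```python
import numpy as np

# Synthetic CD melt at 205 nm (mdeg vs. temperature), Tm ~60 C
temps = np.linspace(25, 90, 66)
signal = -12.0 + 8.0 / (1 + np.exp((60.0 - temps) / 4.0))

# Normalize the CD signal variation to an apparent unfolded fraction
folded, unfolded = signal[0], signal[-1]          # plateau values
f_unfolded = (signal - folded) / (unfolded - folded)
tm = temps[np.argmin(np.abs(f_unfolded - 0.5))]   # midpoint of the transition
print(f"Apparent Tm ~ {tm:.0f} C")
```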
For analysis of the secondary structure content of Ump1-C115S and of the N-terminal fragment by far-UV CD in buffer without DTT and with low NaCl concentration, the proteins were diluted to a final concentration of 0.1 mg/mL in 1 mM Tris-HCl pH 7.5, 2 mM NaCl, and measurements were performed at 20 ºC on a Jasco J-815 spectrometer fitted with a Peltier temperature controller. Spectra were acquired between 190 and 260 nm, with the instrument set to 1 nm bandwidth, 1 s response, 500 nm/min scanning speed, and 3 accumulations. Each spectrum was the average of two scans corrected for buffer background. The spectra were deconvoluted with the CONTIN program using the online software Dichroweb [28,29].
Prediction of disorder for Ump1 was performed on multiple sequence alignments with RONN (http://www.bioinformatics.nl/~berndb/ronn.html), which uses a modification of the Bio-Basis Function Neural Network (BBFNN) [30], and with FoldIndex [31], based on the algorithm of Uversky and coworkers [32]. For comparison with other available disorder prediction servers, the yeast Ump1 sequence was also analysed with the Meta Protein DisOrder prediction System (http://prdos.hgc.jp/cgi-bin/meta/top.cgi), an online webserver that predicts the disorder tendency of each residue by combining the prediction results of seven independent disorder predictors [33] (Figure S2).
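For illustration, the charge/hydropathy criterion underlying FoldIndex can be sketched as follows; the score is the published FoldIndex formula, computed here over a whole sequence rather than the sliding window used by the server, and the example peptide is arbitrary, not the Ump1 sequence:

```python
# Charge/hydropathy disorder criterion (FoldIndex [31], after Uversky [32]):
# FI = 2.785*<H> - |<q>| - 1.151, where <H> is the mean Kyte-Doolittle
# hydropathy rescaled to [0,1] and <q> the mean net charge; FI < 0 predicts
# intrinsic disorder.

KD = {"A": 1.8, "R": -4.5, "N": -3.5, "D": -3.5, "C": 2.5, "Q": -3.5,
      "E": -3.5, "G": -0.4, "H": -3.2, "I": 4.5, "L": 3.8, "K": -3.9,
      "M": 1.9, "F": 2.8, "P": -1.6, "S": -0.8, "T": -0.7, "W": -0.9,
      "Y": -1.3, "V": 4.2}

def fold_index(seq):
    h = sum((KD[a] + 4.5) / 9.0 for a in seq) / len(seq)          # mean hydropathy
    q = sum({"K": 1, "R": 1, "D": -1, "E": -1}.get(a, 0) for a in seq) / len(seq)
    return 2.785 * h - abs(q) - 1.151

print(fold_index("MDSKTEQLLKEDEHRRS"))  # illustrative charged peptide: FI < 0
```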
Results
Yeast Ump1, expressed in E. coli and containing a C-terminal 6His tag, was efficiently purified by metal affinity chromatography. In a subsequent ion-exchange chromatography step, two Ump1-containing peaks eluted at different NaCl concentrations (Figure 1A). This elution profile, together with isoelectric focusing (data not shown), indicated that recombinant Ump1 purified by metal affinity chromatography was heterogeneous and contained at least two differently charged species. Analysis by SDS-PAGE showed that, under reducing conditions, the proteins eluting in the different peaks after ion-exchange chromatography were indistinguishable (Figure 1B). However, when no reducing agent was added, the protein eluting at the lower NaCl concentration migrated faster (apparent MW 18 kDa, corresponding to the predicted value for the tagged protein, and from here on referred to as monomer; Figure 1) than the protein eluting at higher NaCl concentrations (apparent MW 36 kDa, from here on referred to as dimer; Figure 1). Taken together, these data indicated that Ump1 was purified as a mixture of presumably monomers and dimers (under non-reducing conditions), and that self-association was mediated by formation of an intermolecular disulfide bond.
Analysis of the Ump1 amino acid sequence (Figure 2) shows that disulfide bond formation likely involves the single non-conserved cysteine residue at position 115. Interestingly, previous work with the recombinant human Ump1 ortholog revealed that it also self-assembles and that oligomerization is likely to be mediated by a cysteine residue (Cys37) located in the N-terminal region of the protein [34].
Analysis of the two peaks obtained by size-exclusion chromatography (Figure 3) confirmed that the two Ump1 fractions correspond to different oligomeric states of the recombinant protein.
Purification under reducing conditions (addition of 1-5 mM DTT to all chromatography and protein storage buffers) increased the yield of the Ump1 species with lower molecular weight (monomer, data not shown), but this protein slowly converted back to a mixture of the two forms, rendering the sample too heterogeneous for further biophysical and structural studies.
In an attempt to obtain homogeneous protein, and to confirm the involvement of cysteine 115 in Ump1 dimerization, we mutated this residue to a serine. The purified Ump1-C115S mutant was analyzed by SDS-PAGE (Figure 1B), size-exclusion chromatography (Figure 3) and DLS (Table 1), and compared to wild-type Ump1 purified under non-reducing conditions. The mutant protein purified as a single peak on the ion-exchange column (data not shown) and in analytical size-exclusion chromatography (Figure 3A). The C115S mutant eluted with a lower apparent molecular weight than the wild-type disulfide-bonded Ump1 dimer, supporting the hypothesis that Cys115 is responsible for the oligomerization of wild-type Ump1. However, both Ump1 species eluted with apparent molecular masses (40 and 65 kDa for the lower and higher molecular mass species, respectively) that are larger than the theoretical values for monomeric (18 kDa) or dimeric (36 kDa) tagged Ump1. The apparent molecular mass determined for the lower molecular weight species is larger than that of a monomer and approaches the value expected for a non-covalently associated dimer. Similarly, the higher molecular mass species displays an intermediate size, closer to a tetramer. Since this atypical mobility is characteristic of intrinsically disordered proteins [37], one hypothesis to explain these results is that the purified Ump1 species represent a mixture of monomers (with elution profiles identical to Ump1-C115S) and covalently associated dimers (the wild-type Ump1 higher molecular mass species) with non-compact, elongated shapes, resulting in anomalous migration in size-exclusion chromatography.
The Stokes radii calculated from the standard calibration curve (Figure 3) were 27 Å and 34 Å for the lower and higher molecular mass species, respectively. To obtain an independent estimate of the hydrodynamic dimensions of the protein in solution, the diffusion coefficient was measured by dynamic light scattering (DLS). All samples had high polydispersity indices and showed a heterogeneous distribution of particles with different molecular sizes in solution (Table 1), with ~50% of the scattering volume attributed to particles ranging between 18 and 24 Å for monomeric wild-type Ump1, and between 24 and 32 Å for dimeric Ump1. These data reinforce the view that both recombinant wild-type Ump1 and the C115S mutant are highly heterogeneous in solution.
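The DLS measurement yields a translational diffusion coefficient, from which the hydrodynamic radius follows via the Stokes-Einstein relation (the standard conversion that DLS software typically applies internally). A minimal sketch, assuming 20ºC, water-like buffer viscosity and an invented diffusion coefficient chosen to land near the monomer's 27 Å:

```python
import math

K_B = 1.380649e-23   # Boltzmann constant, J/K

def stokes_radius_from_dls(d_m2_s: float, temp_k: float = 293.15,
                           eta_pa_s: float = 1.002e-3) -> float:
    """Stokes-Einstein: Rs = kT / (6*pi*eta*D), returned in Angstrom.

    Defaults assume 20 C and the viscosity of pure water; a real buffer
    would need its measured viscosity.
    """
    return K_B * temp_k / (6.0 * math.pi * eta_pa_s * d_m2_s) * 1e10

# An invented diffusion coefficient near 8e-11 m^2/s lands close to the
# 27 A radius reported for the Ump1 monomer.
print(f"{stokes_radius_from_dls(8.0e-11):.1f} A")
```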
The logarithmic plot of these calculated Rs values versus the molecular masses of the corresponding monomeric and dimeric Ump1 variants indicates that these proteins do not behave as natively folded globular proteins in solution, and fall very close to the line representing the behaviour of molecules with a natively unfolded molten globule conformation (Figure 3B). All results indicate that the Rs values of the recombinantly expressed Ump1 molecular species are significantly larger than expected for globular proteins of similar molecular mass. Despite the current experimental evidence, however, it cannot be excluded that non-covalent oligomerization contributes to the higher-than-predicted apparent molecular masses of the monomeric and dimeric Ump1 species in solution. The data suggest that this protein is at least partially unfolded and alternates between multiple extended conformations with variable hydrodynamic radii. In addition, the C115S mutation, although eliminating the heterogeneity attributed to the formation of covalently associated wild-type Ump1 oligomers, did not prevent the appearance of molecules with variable sizes, as clearly seen in the DLS data (Table 1) and likely attributable to conformational exchange between slightly more compact and more extended conformations.
In agreement with the hypothesis that Ump1 is at least partially unfolded, leading to the apparently higher hydrodynamic radii of the different molecular species of recombinant Ump1, analysis of its primary sequence shows that 33% of its amino acid residues are predicted to be disordered (Figure 2). These residues are mainly distributed in the N-terminal half of the protein, comprising amino acids 2-38 and 47-63 (Figures 2 and S2). The prediction of disorder extends to the sequences of Ump1 orthologs, indicating that the regions predicted to be partially unfolded might have functional significance.
Circular dichroism (CD) spectra were recorded to compare the secondary structure content of the wild-type and mutant Ump1 oligomeric species, and thus to probe their folding state. The CD spectra of all proteins (monomeric and dimeric wild-type Ump1, as well as the Ump1-C115S mutant) exhibit isodichroic curves, with a minimum at 201 nm and a shoulder around 222 nm (Figure 4A). The negative peak is characteristic of random coil structures. The spectral similarity indicates a similar secondary structure content in all Ump1 preparations. These results provide evidence for the presence of both structured and unstructured regions in Ump1, in agreement with the disorder predictions. The secondary structure content, however, is not significantly affected by the oligomeric state of the protein or by the C115S mutation.
To gain complementary insight into the folding properties of Ump1, we performed thermal denaturation assays of the monomeric and dimeric wild-type versions (in the presence or absence of DTT) as well as of the C115S mutant, while monitoring the CD signal at 205 nm (Figure 4B). Interestingly, all preparations of wild-type Ump1 exhibit a very gradual, almost constant, CD signal variation with temperature, from 25 to 90ºC. This is unlike the typical behaviour of small, single-domain folded globular proteins, where unfolding is highly cooperative and occurs over a very narrow temperature range. Moreover, even at 90ºC Ump1 does not appear to be fully denatured, since the CD signal does not plateau at high temperature. This behaviour is what one would expect from a protein harbouring unstructured regions, since the inability to maintain a compact hydrophobic core would (i) hinder the establishment of the interaction network responsible for folding cooperativity, and (ii) substantially increase the conformational entropy and therefore the resistance to full unfolding. The C115S mutant exhibits higher unfolding cooperativity, but the overall considerations made for the wild-type protein still apply.
Limited proteolysis experiments show that Ump1 is cleaved by trypsin at Arg84 (Figure S1), leaving an N-terminal fragment that includes most of the region predicted to be unstructured as well as the conserved HPLE sequence (Figure 2) required for proteasome interaction [17]. The CD spectra of this Ump1 N-terminal proteolytic fragment confirm that, in accordance with the theoretical disorder predictions (Figures 2 and S2), the N-terminal region is largely unstructured (Figure 5). Spectral deconvolution of full-length Ump1-C115S reveals that it contains 19% α-helices, 20% β-strands, 19% turns and around 42% random coil. The N-terminal segment is predominantly composed of random coil (~50%), with 24% β-strands, 22% turns and a negligible amount of α-helices (~1%). The C-terminal spectrum, obtained by subtracting the N-terminal Ump1 spectrum from that of Ump1-C115S, provides an estimate of the secondary structure content of the C-terminal region and suggests that this region has significant secondary structure, with a relatively lower percentage of coil (27% coil, 17% α-helices, 37% β-strands and 19% turns).
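The C-terminal difference spectrum is a simple pointwise subtraction once both spectra sit on a common wavelength grid in a concentration-independent unit such as mean residue ellipticity. The sketch below illustrates this; the shared-grid and matched-unit requirements are assumptions about the preprocessing, which the text does not detail, and the example spectra are placeholders.

```python
import numpy as np

def difference_spectrum(full_mre, nterm_mre):
    """Estimate the C-terminal CD spectrum as full-length minus N-terminal.

    Both inputs must be mean residue ellipticities sampled on the same
    wavelength grid; otherwise the pointwise subtraction is meaningless.
    """
    full_mre = np.asarray(full_mre, dtype=float)
    nterm_mre = np.asarray(nterm_mre, dtype=float)
    if full_mre.shape != nterm_mre.shape:
        raise ValueError("spectra must share one wavelength grid")
    return full_mre - nterm_mre

# Illustrative placeholder spectra (190-260 nm), not the measured data:
# coil-like curves with a minimum near 201 nm.
wl = np.arange(190, 261)
full  = -4.0 - 3.0 * np.exp(-((wl - 201) / 8.0) ** 2)
nterm = -3.0 - 2.0 * np.exp(-((wl - 201) / 8.0) ** 2)
cterm = difference_spectrum(full, nterm)
print(f"C-terminal estimate at 201 nm: {cterm[wl == 201][0]:+.2f}")
```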
Discussion
Biochemical characterization of S. cerevisiae Ump1 was performed using a variety of techniques. The results obtained by size-exclusion chromatography and CD, together with amino acid sequence analysis, show that recombinant Ump1 is a natively unfolded protein. Accurate identification of such disordered regions in proteins, which confer conformational heterogeneity on samples but are often mediators of protein-protein interactions, is crucial for structural and functional studies.
The recombinantly expressed and purified Ump1 consists of a heterogeneous mixture of molecules with variable isoelectric points and hydrodynamic radii. In particular, the non-conserved single cysteine at position 115 is partly responsible for this heterogeneity, leading to Ump1 self-assembly by disulfide-bond formation. Conceivably, dimerization may play a role in proteasome biogenesis, a process that could be modulated by the local redox state of the cell. Indeed, disulfide-mediated virion assembly in the cytosol, catalyzed by virus-encoded redox-regulated proteins, has been demonstrated previously [39]. However, the lack of evolutionary conservation of this cysteine residue (Figure 2) may indicate that cysteine-mediated dimerization does not have a key role in Ump1 function in vivo.
Mutation of Cys115 to serine eliminates the formation of covalently associated Ump1 oligomers, but the anomalously large Stokes radius of this monomeric form indicates that the protein is not globular; its conformation is instead consistent with a natively unfolded molten globule.
The intrinsic disorder of Ump1 is supported by CD analysis of the secondary structure content, which indicated that ~42% of its structure is dominated by a random coil conformation (Figure 4A, Figure 5). These data are in agreement with the theoretical prediction of disorder, particularly relevant in the N-terminal half of the protein (Figure 2, Figure S2), which was shown to be ~50% random coil (Figure 5). Moreover, the low unfolding cooperativity and high resistance of Ump1 to thermal unfolding (Figure 4B) are additional fingerprints of structurally disordered proteins. In this context, it is worth noting that the Ump1 region 51-147, starting at the conserved HPLE motif, which is predicted to be unstructured (Figure 2), is sufficient for interaction with proteasome precursor complexes [17]. The flexibility of its N-terminal domain may give the protein the ability to bind multiple targets during proteasome assembly. One possibility is that the N-terminal region of Ump1 engages in interactions with components of a second 15S complex during their dimerization [16]. Another important aspect of the lack of regular secondary structure is that it might provide Ump1 with the capability to adjust to steric restrictions upon enclosure in the newly formed proteasome following dimerization of 15S precursor complexes [13].
There is currently growing awareness of the fundamental importance of disordered regions of proteins in many biological and pathological processes [38,40]. These regions, characterized by the absence of a well-defined three-dimensional structure and by structural flexibility, are highly abundant in eukaryotic proteomes. These features are proposed to provide a functional advantage by enabling proteins to interact with multiple binding partners and to behave as intracellular hubs [41]. The inherent plasticity of intrinsically disordered regions allows them to play fundamental roles in macromolecular recognition and assembly, and to be active players in molecular events such as intracellular signalling, which require transient interactions and shuttling between different macromolecular assemblies.
The mechanism of action of Ump1 is not yet completely understood, and its known interaction partners are limited to some proteasome subunits. Ump1 has been proposed to provide a checkpoint that prevents premature dimerization of precursor complexes until their assembly is completed [13]. The propeptides of proteasome subunits β5 and β6, as well as the β7 C-terminal extension, might contribute to overcoming this checkpoint after incorporation of β7, by displacing Ump1 or changing its conformation [10,13,16]. The structural flexibility of Ump1 might be a key characteristic enabling these adjustments.
Characterization of Ump1, a key factor in proteasome biogenesis, may open a window of opportunity for the development of new proteasome inhibitors. Since the proteasome has been shown to be a suitable target in cancer therapy [42], the development of alternative or additional inhibitors that interfere with proteasome assembly might contribute substantially to cancer treatment.

Figure 5. The N-terminal region of Ump1 is highly disordered. Far-UV CD spectra of Ump1-C115S and its isolated N-terminal fragment. The difference spectrum for the C-terminal peptide was obtained by subtracting the N-terminal Ump1 spectrum from that of full-length Ump1-C115S. Upon deconvolution, the secondary structure of the Ump1 N-terminal fragment is 1% α-helix, 25% β-strand, 22% turns and 50% coil. The C-terminal peptide secondary structure corresponds to 18% α-helix, 37% β-strand, 19% turns and 27% coil.
Advancing mobile learning in Australian healthcare environments: nursing profession organisation perspectives and leadership challenges
Background Access to, and use of, mobile or portable devices for learning at point of care within Australian healthcare environments is poorly governed. An absence of clear direction at systems, organisation and individual levels has created a mobile learning paradox: although nurses understand the benefits of seeking and retrieving discipline- or patient-related knowledge and information in real-time, mobile learning is not an explicitly sanctioned nursing activity. The purpose of this study was to understand the factors influencing mobile learning policy development from the perspective of professional nursing organisations. Methods Individual semi-structured interviews were undertaken with representatives from professional nursing organisations in December 2016 and January 2017. Recruitment was by email and telephone. Qualitative analysis was conducted to identify the key themes latent in the transcribed data. Results Risk management, perceived use of mobile technology, connectivity to information and real-time access were the key themes that emerged from the analysis, collectively identifying the complexity of innovating within an established paradigm. Despite understanding the benefits and risks associated with using mobile technology at point of care, nursing representatives were reluctant to exert agency and challenge traditional work patterns to alter the status quo. Conclusions The themes highlighted the complexity of accessing and using mobile technology for informal learning and continuing professional development. Mobile learning cannot occur at point of care until the factors identified are addressed. Additionally, a reluctance by nurses within professional organisations to advance protocols governing digital professionalism needs to be overcome. For mobile learning to be perceived as a legitimate nursing function, a more holistic approach to risk management is required, one that includes all stakeholders at all levels. The goal should be to develop revised protocols that establish a better balance between the costs and benefits of real-time access to information technology by nurses.
Background
The use of mobile technology to access information in real-time is ubiquitous in modern life. Digital knowledge transfer is an outcome of using mobile technology that is currently underutilised in Australian healthcare settings [1]. Uptake of mobile learning by stakeholders, especially nurses, to augment traditional andragogies in healthcare environments has been slow. Previous studies have explored the lack of mobile learning at point of care by nurses [1][2][3][4]. Focus group studies with nurse supervisors and online surveys with students have uncovered barriers, challenges, risks and benefits for nurses and undergraduate students of being able to access and use mobile technology for learning at point of care [3][4][5]. Analysis of the Registered Nurse Standards for Practice [6] and professional Codes of Conduct [7,8] has revealed an absence of guidance to support this adjunct method of learning.
The aim of this study was to explore the factors influencing the governance of mobile technology at point of care for informal learning and continuing professional development (CPD) from the perspectives of representatives of professional nursing organisations. Barriers, risks, challenges and benefits of nurses using mobile technology at point of care have previously been identified in the international literature at individual, organisational and systems levels [9][10][11][12]. Inadequate governance and lack of understanding within the registered health professions regarding the potential of accessing and using mobile technology for learning has created further disruption to healthcare provision both within Australia and internationally [13][14][15]. The resultant inability of nurses to use mobile technology for informal learning and CPD at point of care in Australia hinders them in meeting the annual learning requirements for registration as a nurse [16,17]. Additionally, the lack of legitimate access to mobile learning prevents nurses from guiding and supporting student nurses and from modelling digital professionalism while students undertake work integrated learning within healthcare environments.
Clear direction regarding governance of mobile technology for leisure and learning within healthcare settings remains unaddressed at a systems level in nursing, with flow-on effects impacting at the organisation and individual levels. While nursing informatics is now an essential component of the undergraduate nursing curriculum [18,19], students and registered nurses are not formally or consistently taught digital professionalism. In the resulting confusion regarding appropriate and safe use of mobile technology at point of care, opportunities arise for advertent and inadvertent professional transgressions to occur [20]. The blurring of public-private boundaries in healthcare environments generates organisational risks and potential adverse media attention if nurses make poor choices regarding access and use of mobile technology. Fear of litigation has negatively impacted the ability of nurses to access mobile technology in the workplace as organisations have dissuaded its use. Paradoxically, however, nursing is consistently reported to be the most trustworthy profession [21], with nurses depended on to provide complex nursing care and administer controlled substances, yet not trusted with carrying a mobile device to access information at point of care [2,22].
Nurses are the largest group within the registered health professions in Australia [23], making it costly for organisations to educationally prepare their nursing workforce to become proficient in using mobile technology at point of care. However, a digitally capable workforce will also be able to guide the new generation of nurses to become digitally professional and to minimise the potential risks associated with using digital media. Upskilling the nursing workforce will also contribute to lessening the current confusion whereby undergraduate students can use mobile technology for learning [11,24] except during work integrated learning [25]. Promotion of congruency in mobile learning opportunities across the profession is now necessary if nursing is to remain contemporary and continue to be viewed by the public as a trustworthy profession [21,26].
Mobile technology enables individuals to seek and retrieve information in real-time that can aid decision-making and could potentially improve patient outcomes [34,35]. Access to information at point of care also has the potential to improve workflow. Westbrook and colleagues [36] quantified patterns of task time distribution and found nurses completed an average of 72.3 tasks per hour, which over time became more fragmented and interrupted, creating potential safety concerns. Deployment of mobile learning has the potential to reduce this fragmentation by enabling continuity of patient care, as nurses would not need to leave the bedside to check or clarify information. This study targeted representatives from nursing profession organisations to better understand, from their perspective, the factors influencing the use of mobile technology for informal learning and CPD.
Design
This research uses interpretive description as discussed by Thorne [37]. It draws on the work of Creswell [38], and Strauss and Corbin [39] by using purposive sampling and employing a reflexive approach within a systematic framework to code, label and categorise the data to enable analysis.
Participants and recruitment
Purposive sampling was used to recruit participants from a range of nursing profession organisations. Inclusion criteria for interview were being a nurse employed or belonging to a nursing profession organisation senior enough to be able to represent the organisation from a policy or guideline perspective and having expertise in nursing practice. A potential list of organisations was generated (CM and EC) that included National (n = 7) and Coalition of National Nursing and Midwifery Organisations (CoNNMO) member (n = 55) organisations. Invitations were sent to the contact emails provided via the national organisation or CoNNMO website (n = 52). If no response was received within two weeks, a follow-up telephone call was made. If there were no telephone contact details available, a further email was sent to the same address. A reminder email was despatched one month after the initial email invitation. An information sheet was provided as an attachment to the email invitation and consent to participate was recorded prior to the beginning of the recorded interview as per ethics protocol for approval H0016097.
Data collection
Interviews with participants were conducted and recorded using Skype for Business™, at a mutually agreeable time, using a semi-structured schedule as a guide. The interview schedule was informed by previous research [4] and developed by two researchers (CM and EC). Prompts and potential probing questions were included in the schedule to maintain congruency of questioning (Table 1). Interview questions were designed to establish whether the nursing profession organisations had a policy position on mobile technology for informal learning and CPD and then to explore factors impacting the use of mobile technology for learning at point of care.
The interviews were conducted during December 2016 and January 2017, took between 17.29 and 54.29 min (mean 34.05 min) and were transcribed verbatim. Variations in interview length were due to the depth of knowledge of the topic of investigation by individual participants.
Data analysis
A systematic and organised process was developed consisting of trial coding with member checking and development of a codebook that provided a framework of codes. Auditing of codes and reviewing previous interviews to ensure consistency of application of labels across interviews was conducted during the process of coding. Inductive thematic analysis was undertaken by coding 'meaning units' as 'open codes' as described by Elliot and Timulak [40]. 'Meaning units' were tabulated in Microsoft Excel (2016), from which data was labelled and reduced from open to axial and finally to selective codes to enable the sub-themes to be revealed. This process of labelling and reducing the phrases by coding enabled further refinement of the data to become four core themes. Constant comparison was undertaken by two of the authors (CM and EC).
Rigour
The interviewer (CM) familiarised herself with the schedule to ensure the interview process flowed and that probing questions and prompts were less rehearsed. The interviewer was aware of the lack of body language cues and maintained a neutral but encouraging dialogue with participants [41]. At the conclusion of each interview, interviewees were asked if they had any further information they would like to add. This opportunity enabled participants to raise any issues or information that had not been discussed during the interview. The accuracy of the transcriptions was confirmed by the interviewer reading the transcripts while listening to the audio recordings. At the conclusion of each interview, participants were offered the opportunity to check the transcription for errors. This process minimised the potential for error and ensured the accuracy of the data transcription.
Ethics
Ethics approval was gained from The University of Tasmania Social Sciences Human Research Ethics Committee (H0016097) prior to commencement of the study as required under Australia's National Statement on Ethical Conduct in Human Research [42].
Participant demographics
Six interviews were conducted during the study period (Table 2). Participants were senior registered nurses holding executive positions who through their careers had gained a broad range of nursing experience in a variety of healthcare settings. They were paid employees or volunteers within Australian nursing specialty organisations that were members of CoNNMO. Gaining access to appropriate nursing representatives proved problematic owing to the complexity of the national organisations targeted or the voluntary nature of membership of nursing specialty organisations. The lead time required to obtain national organisation permission to interview varied. Requests to interview a representative from an organisation needed to be taken to the appropriate internal meetings for consideration. Feedback from organisations was sought after meetings were held to discuss the interview request. However, reaching an appropriate representative for interview remained complicated. One organisation declined to participate owing to a decision made by the organisation's Director. Access to nursing speciality organisation representatives affiliated with CoNNMO was ad hoc, owing to the volunteer nature of many nursing specialty organisations. This voluntary nature was apparent in the irregular monitoring of email accounts, so non-acknowledgement or non-response at the point of entry was common. Nevertheless, initial and follow-up contact was undertaken as per the ethics protocol. The complexity of gaining access to national representatives and the poor response from voluntary organisations limited the capacity to recruit interviewees. In addition, the release of the Australian College of Nursing, Health Informatics Society of Australia and Nursing Informatics Australia joint draft position statement on health informatics in February 2017 resulted in cessation of recruitment, as the researchers believed it could influence the responses of future participants.
Themes
Four key themes emerged from the data analysis, revealing the complexity of factors that influence governance of mobile learning at point of care in healthcare environments in Australia. These themes were: 1) risk management; 2) perceived use of mobile technology; 3) connectivity to information; and 4) real-time access. Addressing all four themes was found to be imperative for enabling mobile learning at point of care.
Risk management
Participants identified numerous potential risks in employing mobile technology at point of care that required management to minimise adverse or unintended consequences. Participants acknowledged there was a lack of governance at a wider systems level that negatively impacted their capacity to use mobile technology at point of care. They indicated the belief that mobile technology was not allowed within healthcare settings. The belief was expressed that the non-use of mobile technology had developed historically, with one participant stating: "We also had, I think we've still got some of the misconceptions around the risks with mobile devices and medical devices" (Participant 2).
Another influence on the lack of direction regarding mobile learning within organisations was attributed to generational cohorts. One participant reported: "But we have to overcome the establishment, the bureaucracy in the health system that actually sees this as a bad thing, that oh no, they're going to be on social media and they're all going to be doing bad things and this instant thought that the internet is just this bad place and no good will come of it. I think some of the older directors of nursing and all that sort of stuff, who are all basically starting to retire now sort of are making way for a younger generation of directors of nursing who we hope is going to have a better or a more positive approach to this" (Participant 5).
Representatives described factors that have influenced organisations to implement policies or local rules within organisations excluding the use of mobile technology. Participants cited organisations formally and informally dissuading nurses from using mobile technology at point of care. This was expressed by one representative who stated: "But the nurses I find, whether it's just that they're more regulated, are not encouraged to use their phones in the actual clinical environment" (Participant 4).
Interviewees reported there was inconsistency of access and use, which created confusion for nurses within organisations, as shown by this comment: "But unfortunately, it's such a reactive approach rather than proactive approach, in that they're not - it's actually," "Well, the technology's great, most people are using it appropriately, but you can't stop every - you know, don't stop everybody from using it because some people have been not doing the right thing" (Participant 3).
They acknowledged incongruency with using mobile technology for patient care, clinical decision-making and the lack of capacity for seeking and retrieving information at point of care. Nurses expressed concern over the lack of direction provided to the profession at a National level, which then impacted at an organisation level. Participants provided examples of other health professionals' expectations of nurses being able to access mobile technology even when organisational policy precluded its use. One participant stated: "But then, as I said, there's that conflict between, we're encouraged to have those things on our phone, but we're not allowed to really use them on the ward. So, there is an issue around that, that you will send a photo to a consultant and actually, that is written into policy that that's a breach of that particular policy; you're not allowed to send patient's photos on personal devices" (Participant 4).
Interviewees revealed that although there was little formal direction at a systems level on whether mobile technology could be used, some nursing staff were beginning to challenge the apparent edict to drive change: "What they have now, so we're living in a bit of a fantasy world at the moment where people say there's no mobile phones allowed, when in fact everyone has a mobile phone in their pocket" (Participant 2).
A participant indicated another influence on practice was previous breaches of patient confidentiality or privacy, which motivated organisations to limit access to mobile technology: "I think it will be when -we've had -the reason it's come about unfortunately, is because of the opposite reason, in that people's photos have got out onto Facebook and to general internet public forums and there's been people that have been sued" (Participant 4).
Cyberloafing behaviour was cited as a reason for preventing legitimate access to mobile technology while at work. One representative expressed: "And I don't know whether it's a different generation or different -that people think that they might be checking Facebook, or they might be misusing their mobile devices rather than using them for education" (Participant 4).
Participants offered potential workarounds to resolve the current impasse regarding legitimate use of mobile technology at point of care, indicating they believed nurses were capable of discerning when mobile learning could be deployed: "We're good at coming up with solutions to things. And I think that's part of our learning" (Participant 6).
One participant summed up the current situation related to guidance of mobile learning at point of care within Australian healthcare environments by stating: "So, it is a real messy minefield" (Participant 2).
Interviewees raised the importance of appropriate use of mobile technology for informal learning and CPD. This addresses the concept of digital professionalism, which embodies ethical use and maintenance of professional boundaries when using mobile technology within healthcare environments. One participant suggested learning about safe and appropriate use was a risk management strategy to ameliorate the current circumstances: "But of course, that's again, I don't think -I think that's the risk but I -my philosophy is let's train people, let's have a policy, let's train people in safe, responsible mobile use" (Participant 2).
Another participant pointed out nurses need to know their professional boundaries regarding seeking and retrieving information, and users must be able to critically reason when it is appropriate to use mobile technology for learning: "But as I said, nurses need to learn what they need to learn when they need to learn it. This can augment that process but again we're not going to learn how to do open heart surgery just because we've got a new device that's got it there for us. We still have to have appropriate use" (Participant 6).
Within the theme of risk management, it became apparent that nurse representatives belonging to nursing profession organisations acknowledged there was an issue in the workplace. However, owing to the volunteer nature of these organisations, or the absence of priority given to enabling informal learning and CPD at point of care, there was a lack of agency to drive the change needed for mobile learning to become a legitimate nursing function. A representative indicated: "I think that we could - and we're doing it at the moment, slowly, as you know, these volunteer organisations and colleges are slow-moving ships but we are trying to develop a policy, not so much about - it probably won't be specific about mobile learning" (Participant 2).
Interviewees indicated from their comments that despite the lack of congruency about mobile technology use at point of care, they did not view themselves as responsible for solving the current paradox. Representatives discussed the issue as though it was outside the aim and scope of their professional organisation to effect change. There was no acknowledgement of the capacity of their organisations to advocate for a change in the status quo or to show leadership in the National arena relating to accessing mobile learning even though it could potentially benefit their members and patients. One participant stated: "But we specifically don't have a position statement on it, it's just something that we recognise is a minimum standard that it must be" (Participant 5).
Perceived use of mobile technology
From the comments by representatives of nursing profession organisations it was clear that non-use of mobile learning in nursing healthcare environments is commonplace. Statements by interviewees indicated that healthcare lags behind other industries in harnessing emerging technology and that nursing is hindered by groups within and external to the profession. Representatives provided a range of examples where other stakeholders, including the medical profession, were using mobile technology. Stakeholders in this context were individuals and organisations that interact with, or impact, the opportunities of participants to access or use mobile technology at point of care. For example, one participant indicated junior medical officers (JMOs) could access mobile learning at point of care: "Well yeah, it certainly seems to be that it's - I see certainly - I guess, I'm getting a little bit older - see a lot new, younger - you know, the JMOs and even some of the residents coming though and they use their phones constantly and it doesn't seem to be seen as an issue" (Participant 5).
Participants indicated there was a need to address how patients perceive mobile learning by nurses if mobile technology is to be used at point of care. A participant indicated they perceived patients were unaccepting of nurses using mobile technology: "Because I think that there is perhaps a perception and as I said particularly from older people out there that we're using phones merely to communicate with our friends as opposed to actually looking up things that are useful for the conversation at hand" (Participant 3).
Another representative commented they believed patients would be accepting if the purpose of using the technology was explained: "However, from a patient's perspective, also from the perspective on a personal level, if you use it with them and you explain what you're doing they'll often be quite accepting of that" (Participant 4).
Additionally, interviewees indicated there was fear of reputational damage, if nurses were accused of misuse by other stakeholders. This risk influenced whether nurses accessed mobile technology at point of care. One representative indicated: "I think they've felt -well, there's been complaints from a patient perspective that nurses seem to be on their phones, using their phones. They see it as patient perception, that nurses in particular aren't working, they're using their mobile devices for personal use in the workplace rather than using it for work purposes" (Participant 4).
Participants indicated generational cohorts of patients and co-workers behaved differently and this behaviour needed to be taken into account when using mobile technology for learning: "But we've got -it's perception from a different generation that doesn't see it the same way necessarily, so there needs to be education around 'this is what's happening with these mobile devices' as well. Whereas, I certainly see the younger generations now -so, gen Y will often use online learning. So, not necessarily mobile technology as such but they will use Internet learning far more readily" (Participant 4).
Furthermore, they suggested that access to mobile technology varied depending on the role of the nurse. One interviewee stated: "But the nurses, just generalist nurses, certainly aren't able to use their -or are discouraged from having their phones on them when they're with patients" (Participant 4).
Participants believed historical circumstances contributed to the current situation where the nursing profession trails other health professions in using mobile learning. For example, one representative stated: "And I think that while there might be a little bit of a backlash from people who are yearning for a bygone time, the reality going forward is that this reflects well on nursing, showing that nursing is very professional, that they are engaging in and embracing technology" (Participant 6).
Connectivity to information
Connectivity to information was viewed by interviewees as crucial for enabling informal learning and CPD at point of care. Connectivity to information in this context includes the tangible and intangible consequences of stakeholder interactions using mobile technology for information transfer. Representatives indicated they believed it was detrimental to the work of nurses to block access to information transfer for the purpose of connecting with others, or for seeking and retrieving information via the Internet. Nurses needed to demonstrate they were professional, capable and contemporary in their role as one participant stated: "If in the aviation industry, if our bookings were done by paper we'd be going what's going on here?…I think most people prefer, to have a nurse turn up with a digital device or something to be accessing information" (Participant 6).
Statements by participants indicated Internet connectivity to undertake their clinical role was hidden. For example, one participant provided an explanation about why they perceived nurses were unable to harness mobile learning at point of care: "I wonder whether nurses tend to be seen as giving that hands on physical care, so they can't pull their phone out and use it, whereas doctors if they're consulting and so it's all right for them to be looking at their phone and that they're being seen to use it for work purposes" (Participant 4).
The inability of nurses to promote their knowledge and skills hinders their access to this vital resource in the new learning age. Nurses are viewed as caring and compassionate, while their high level of clinical skill, which can be augmented by knowledge management through connection to the Internet, is less overt. As one participant indicated, the need for access to mobile learning tools and resources to improve patient outcomes is invisible.
"Anyway, it's actually detrimental because it's a really useful tool, these mobile devices, for our staff" (Participant 2).
Real-time access
Real-time access refers to whether participants have the ability to connect at the actual time to transfer information using mobile technology. Interviewees were enthusiastic about the potential of mobile learning at point of care from the perspective that information was available when required. One participant stated: "I mean, I've worked in nursing for a hell of a long time and I think I would have given my left arm for that type of ability to look things up then and there at the time" (Participant 3).
Participants recognised the convenience of being able to access information as required without leaving the patient. One representative reported: "Because we're busy working. We haven't got time to be always stopping to do things. We're busy. And the modern life is busy. And I actually think that nurses find out what they need to know when they need to know it" (Participant 6).
Similarly, another representative revealed the belief that slow acceptance of mobile learning into healthcare environments hindered the advance of nursing practice: "And in health, I think technology generally in health is really underutilised and I think that we could become far more efficient with education and in improved patient care by using it more appropriately" (Participant 4).
Comments about the inability to harness mobile learning at point of care indicated that stakeholders were missing vital information and interactions that could improve patient outcomes. One example demonstrates the broad scope of mobile learning for clinicians in practice: "Whereas we are looking up, we should be looking up blood results and then checking it on an app on your phone and finding out what that could be and looking at with your patient symptoms would be fantastic if nurses were doing that and I think would save a huge amount of patient deterioration and improve care" (Participant 4).
Additionally, participants recognised the benefits for nursing students of being able to learn in real-time: "So, it is really that point at which they know that it's going to be significant for them [students] and if they were to like make a note for themselves like we used to do when we were on clinical placement to go and look it up at home. Well sometimes you don't get there, don't do it for one reason or another you forget" (Participant 3).
Interviewees also realised that over time learning in real-time at point of care will become more commonplace: "Yes, I think so. You've got to move with the times. I realise that, over the next decade or so, we've got an older patient group but my mother's downloaded recipes off the internet. I think that's an excuse. I think we have to move. It's in the banking industry. Every other industry That's just part of society. I don't think it's any different in nursing. I think though that nurses in the public image are a little bit caught in time, in a bit of a time capsule. And we're not allowed to grow up" (Participant 6).
Participants recognised that access to learning in real-time will take leadership and concerted effort by stakeholders: "But I think there's still a lot of work to be done in being able to do it, I don't think there's a magic bullet that will make it happen but rather a sort of concerted effort over a period of time" (Participant 5).
One representative summed up the future of mobile learning by stating: "Easily accessible up to date information on the device in your hand at the time you're standing by the patient" (Participant 1).
Discussion
The emergent themes of risk management, perceived use of mobile technology, connectivity to information and real-time access confirm Fixsen and colleagues' [43] framework that mobile learning is stalled at the adoption point in the Stages of Implementation (Fig. 1). The four themes support the contention that nurses within nursing profession organisations are currently unwilling to lead the installation of mobile learning at a systems level. This reluctance to advance access to, and use of, mobile technology for learning within national healthcare environments then flows on to the organisational and individual levels. The absence of clear direction within the Registered Nurse Standards for Practice [6] and the new draft Codes of Professional Standards [44] illustrates the issue and compounds the problem of lack of governance. The leadership vacuum, within and outside the nursing profession, in pressing for reform is perpetuating the mobile learning paradox.
Action across all four themes is necessary to enhance governance for mobile learning at point of care. As long as the identified limitations persist, nurses will be hindered in their access to mobile learning for informal learning and CPD. Additionally, nurses cannot support, guide or model digital professionalism to nursing students undertaking work integrated learning. The unwillingness of senior nurses to lead on mobile learning is a cause for concern since it is required to overcome the observed stalled implementation [3,43]. For progress to be made, developing protocols that address the four identified themes will be required.
The release of Australia's National Digital Health Strategy [45] and review of Registered Nurse Accreditation Standards [46] has created opportunities to remedy the current situation by establishing a governance structure within organisations that individuals can implement. Strategy 6 of the Digital Health Strategy acknowledges that Australia requires a health workforce that can confidently use digital health technologies to deliver health and care [45]. Support for change management, training, resources and clear direction are outlined. Additionally, the Australian Nursing and Midwifery Accreditation Council Consultation Papers 1 and 2 provide opportunity to feed forward information about supporting health informatics and mobile learning within the undergraduate nursing curriculum [46,47].
Nurses are bound by National Standards and Codes which provided detailed cues about expected knowledge, skills, attitudes and behaviour of nurses. The new Registered Nurse Standards for Practice [6] and revised Codes [44] are more generic, giving organisations and individuals more autonomy to determine expectations of nursing practice [16]. However, the lack of explicit information regarding mobile technology in these documents appears to be discouraging its use in healthcare environments because nurses are not yet conversant with the new Standards and Codes and the level of autonomy they provide [48]. The research has demonstrated that senior nurses are unwilling to lead workplace change and have little enthusiasm for being involved in the change process.
Most participants did not view themselves as playing an advocacy role within their nursing profession organisation with regard to mobile learning. Those who did thought that change within professional organisations was slow because it usually relied on volunteer labour, which waxed and waned depending on individual circumstances. Volunteer 'burnout' led to inconsistency in progressing the aims and objectives of the nursing profession organisation. In addition, the main focus of specialty organisations is advancing specific clinical information; advocating for new platforms to convey that information is not envisaged. Finally, nurses who hold executive positions within nursing profession organisations often do not provide direct care and thus lack contemporary experience of the new ways information can be integrated into nursing practice and transferred at point of care. Thus, until there is a greater appreciation of the issues, the current lack of leadership will continue to hinder progress towards implementation [41].
It is also evident there is a lack of consistency in knowledge, attitudes and behaviour within the nursing profession regarding the use of mobile technology. Resistance to changing workflows [36] owing to inadequate educational preparation and fear of inappropriate use of mobile technology was reported [3,20].
Representatives provided examples where inappropriate behaviour resulted in the 'banning' of mobile technology at the workplace, and anecdotal evidence of previous inappropriate behaviour by health professionals [20,49,50] has shaped the current situation. Interviewees justified the inequity of access by claiming adverse media attention was responsible [20,51,52]. Participants mentioned that cyberloafing and unprofessional behaviour, such as using social media while at the workplace, contributed to the inability to use mobile technology [53,54]. All representatives narrated stories of inappropriate behaviour by nurses while admitting they had not witnessed it themselves.
Direct care nurses were unable to access mobile technology, whereas nurses in other roles were allowed to carry a mobile device. This shifting access to mobile technology perpetuates confusion between leisure and learning and will only be ameliorated when mobile learning becomes a legitimate nursing function [55]. Continuance of the lack of governance that supports the mobile learning paradox will impede implementation of mobile learning at point of care. Since further innovation in mobile technology is predicted [56,57], the current mobile learning gap will continue to widen if the status quo remains unchallenged. Access to learning resources within healthcare environments is an important imperative. Currently, however, seeking and retrieving relevant information in real-time by nurses is hidden. While nurses are viewed by the public as caring and compassionate individuals, their advanced critical thinking and capacity for managing complex nursing care are more covert and less recognised [58]. Therefore, clinical skill enhancement through accessing information in real-time is underappreciated by organisations and nurses. The difficulty in demonstrating the value of access to information transfer in real-time is also arresting progress towards the implementation phase.
As highly skilled clinicians, nurses are constantly analysing and altering their planned schedule of care as new information or events require [36]. Constant interruptions to established workflows require critical thinking and an ability to be flexible. As interruptions to workflow increase, the fragmentation of nursing care creates the need for workarounds. Nurses modify the way they think and behave when practices no longer work as intended, become redundant, or when opportunities occur to incorporate new work practices that benefit workflow. This adaptation process includes recognising the new intervention's benefits and investing in learning about the new process to enable integration into routine work patterns. Change is sustained when the benefits of use outweigh those of non-use [59]. This process is being attenuated with regard to mobile technology and mobile learning, however. From the interviews, nurse leaders appear to absolve themselves of responsibility for advocating within the profession to advance nursing practice. Nurses continue to support a historically hierarchical system that justifies their lack of inclusion in decision-making and are consequently unable to articulate the importance of mobile learning for enabling informal learning and CPD [58]. This apparent inability to communicate the value of access to mobile technology is hindering nurses' capacity to demonstrate how mobile learning improves workflow, promotes continuity of care and potentially improves patient outcomes. It also prevents the modelling of digital professionalism to undergraduate nurses, perpetuating the status quo. The current deficiency in the capacity of nurses to influence the direction of mobile learning policy at system and organisation levels further marginalises them within the registered health professions [16,60].
The casualties of this failure to embrace the mobile learning era include a range of stakeholders. An inability to engage in mobile learning at point of care is a lost opportunity for experienced nurses to lead learning in real-time at the workplace. Being able to legitimately access information at the bedside has the potential to build capacity with other health professionals, students and patients [61]. Moreover, accessing mobile technology at point of care could strengthen the nurse-patient relationship by increasing mutuality of understanding [1], enable continuity of care and reduce time away from the patient. Nurse supervisors could capitalise on real-time learning moments by supporting students at point of care by using mobile learning when it is safe and appropriate to do so. Currently, nurses support students in practice because they believe 'it is the right thing to do' [48]. However, although they understand the risks, challenges and benefits, they do not advocate for access to mobile learning to support this activity. This unwillingness to lobby for access to learning resources confirms the noted absence of agency by nurses to contemporise their nursing practice by maximising opportunities for informal learning, CPD [62] and teaching students undertaking work integrated learning.
The inclusion of mobile learning early in the nursing curriculum in the classroom will enable modelling of digital professionalism to occur prior to undertaking work integrated learning [24]. Consistency between learning on campus and being able to continue to use mobile technology during work integrated learning will promote safe and appropriate use by the next generation of nurses. The ADHA Digital Health Strategy [45] acknowledges the need for preparation of the nursing health workforce to become digitally literate. As the nursing workforce is the largest of the registered health professions it is imperative that resources are channelled to upskill the current workforce [63]. It is also imperative that nursing profession organisations recognise that knowledge management relies on connectivity to information and that they have a responsibility to advocate for appropriate governance of mobile learning at point of care for the benefit of all stakeholders. Only when there is greater equity of access to mobile technology will nurses be fully able to participate in informal learning, CPD, and training nursing students in digital professionalism and thus to deliver contemporary nursing practice in real-time.
Impact statement
Lack of governance guiding the use of mobile technology at point of care at a systems level negatively impacts the ability of nurses to legitimately incorporate mobile learning into their nursing practice. The current 'mobile learning paradox' needs to be resolved from within the profession of nursing and healthcare organisations. Perpetuation of the mobile learning paradox has implications for the profession internationally, where governance structures regarding access and use of mobile technology in healthcare environments have not been addressed.
Limitations
Limitations of this study include the timing of interviews, which, due to the short recruitment period, took place during December 2016 - January 2017. Recruitment in the lead-up to the Christmas period may have reduced participation: potential participants may have organised annual leave during the Australian summer, been required to complete work by the end of the year, or worked during the traditional holiday shut-down period, and so may not have responded when they might have done so at another time of year. Recruitment ceased when the Health Informatics Society of Australia, Nursing Informatics Australia and the Australian College of Nursing released their joint draft position statement on nursing informatics in February 2017, as this could have changed the perspectives of future interviewees by raising awareness of the topic.
Strengths
Although recruitment numbers were low, participants were senior nurses who, during their careers, had experienced clinical, administration, education and research roles within the nursing profession. This wealth of knowledge was demonstrated during the interviews. The timing of the interviews was a limitation, but also a strength: the study was undertaken before the draft position statement on nursing informatics was released, providing a baseline understanding of the field that can give direction for further research.
Future directions
The nursing profession is the largest of the registered health professions. As such, it is in a strong position to lead mobile learning at point of care. However, this ascendancy will only be accomplished when nurses marshal their mobile learning agency by taking responsibility for leadership within healthcare environments.
There is an opportunity to achieve this aim by embracing the ADHA National Digital Health Strategy [64] and demanding that the profession of nursing be included in decision-making at systems and organisation levels. Involving nurses in systems design and creating positive and supportive environments is instrumental to the sustainability of the health workforce [65]. Safe and appropriate use of mobile learning needs further investigation through trials of its use. The inclusion of digital professionalism early within the undergraduate nursing curriculum is necessary, as is the educational preparation of undergraduate nurses and of nurses currently employed within healthcare settings. It is imperative that nurses develop the requisite skills to seamlessly undertake patient care and to guide and support students in using mobile technology for learning at point of care.
Further research into mobile learning at point of care is necessary to ensure standards, guidelines and codes of conduct reflect safe and appropriate use. Usability trials to evaluate quality and safety issues may assist in providing evidence to guide risk management for the implementation of mobile learning at point of care. This research will also provide rich data to guide undergraduate nursing curriculum development. Gaining the patient perspective on nurses using mobile learning will be beneficial to all stakeholders; findings can be used to guide patient education about mobile learning and to guide its deployment in healthcare environments.
Conclusions
There is a gap in the governance of mobile technology for learning by nursing profession organisations. At systems and organisation levels, there is a lack of leadership providing direction for the professional conduct of nurses, which is expressed as the inability of nurses to implement mobile technology for learning as a legitimate nursing function. This shortage of support stalls the capacity of individual nurses to implement and model digital professionalism at point of care. Additionally, there is a deficiency of agency within the nursing profession and healthcare organisations that further hinders the deployment of mobile learning at an individual level within healthcare environments.
Through their narratives, participants indicated that an absence of governance within nursing organisations is perpetuated by a lack of inclusion in decision-making at a systems level. It is evident from this study that there is insufficient agency among nurses in leadership positions to influence the deployment of mobile technology for informal learning and CPD at point of care in healthcare environments. However, the inclusion of nurses in healthcare decision-making at a systems level, coupled with promoting digital professionalism within organisations and higher education institutions, will foster a more inclusive culture that will contribute to improving patient outcomes.
The deployment of digital technology for mobile learning, enabling informal learning and CPD to be undertaken at point of care, challenges traditional work patterns. The lack of leadership by nurses within professional organisations to advance the governance of digital professionalism needs to be ameliorated. Empowerment of members within nursing profession organisations will support mobile learning in becoming a legitimate nursing function.
Addressing energy challenges in Iraq: Forecasting power supply and demand using artificial intelligence models
The global surge in energy demand, driven by technological advances and population growth, underscores the critical need for effective management of electricity supply and demand. In certain developing nations, a significant challenge arises because the energy demand of their population exceeds their capacity to generate, as is the case in Iraq. This study focuses on energy forecasting in Iraq, using a previously unstudied dataset from 2019 to 2021, sourced from the Iraqi Ministry of Electricity. The study employs a diverse set of advanced forecasting models, including Linear Regression, XGBoost, Random Forest, Long Short-Term Memory, Temporal Convolutional Networks, and Multi-Layer Perceptron, evaluating their performance across four distinct forecast horizons (24, 48, 72, and 168 hours ahead). Key findings reveal that Linear Regression is a consistent top performer in demand forecasting, while XGBoost excels in supply forecasting. Statistical analysis detects differences in the models' performances on both datasets, although no significant differences are found in pairwise comparisons for the supply dataset. This study emphasizes the importance of accurate energy forecasting for energy security, resource allocation, and policy-making in Iraq. It provides tools for decision-makers to address energy challenges, mitigate power shortages, and stimulate economic growth. It also encourages innovative forecasting methods, the use of external variables like weather and economic data, and region-specific models tailored to Iraq's energy landscape. The research contributes valuable insights into the dynamics of electricity supply and demand in Iraq and offers performance evaluations for better energy planning and management, ultimately promoting sustainable development and improving the quality of life for the Iraqi population.
Introduction
Introducing the context of this research requires presenting the global issue of energy demand and the specific situation in the country of the case study. It is also important to describe the commonly used forecasting models before exploring the energy forecasting literature, which enables an accurate statement of the problem addressed by this research and of the approaches for overcoming it. These aspects are covered in this section.
Global energy demand
The demand for energy has become a fundamental requirement for the development of nations due to the continuous growth of technological devices and the significant increase in the global population. Consequently, there has been a significant increase in the demand for energy worldwide. The production of large electrical appliances, the proliferation of factories in urban areas, and the rising population have all contributed to this trend.
In addition to the continued growth of technological devices and the rising global population, the negative impact of nonrenewable energy sources on the climate has become increasingly apparent. As a result, there is a growing demand for renewable energy sources such as hydro, geothermal, wind, and solar. Many countries aim to transition to using only renewable energy by 2050 [1]. However, the generation of energy from renewable sources is only part of the solution. There is also a need for effective utilization of this energy through proper planning and distribution. Grid systems seek to supply energy based on demand to avoid storage costs or oversupply of energy in certain regions while other regions experience a shortage [2]. One of the reasons for the energy shortage is that traditional grid systems cannot accurately estimate energy demand. Moreover, fluctuations in energy demand cause traditional grid systems to store large amounts of energy at certain times of the year and run out of energy supply at other times [3]. To solve this problem, it is crucial to accurately estimate the energy demand at all times. Forecasting energy demand would help with accurate planning and the proper distribution of energy to endpoints. Given the significant investment required for network reinforcements and expansions, it is appropriate to forecast future load and demand to ensure proper planning. Economic conditions, time of day, weather patterns, and other random factors all have an impact on the system load. On the other hand, energy demand typically follows general consumption patterns in the economy and is subject to fluctuations based on changes in demographics, industry activity, and weather conditions [4].
Smart grid systems come as solutions to these problems [5]. Energy distribution and utilization can be monitored and controlled. The advent of modern systems, such as smart meters and other advanced metering frameworks, allows data on the bidirectional flow of energy to be obtained [6,7]. Such data can be analyzed and utilized for future prediction and forecasting.
Energy demand issue in Iraq
The unstable security situation in Iraq has had a negative impact on electric power generation, resulting in a shortage of supply. Additionally, newly introduced technologies, the lack of strategic planning, mismanagement, and weak infrastructure together increase the energy demand in Iraq. Other factors, such as low gas supply rates, the use of traditional grid systems, the exposure of power stations and transmission lines to terrorist attacks, the failure to use smart meters, and the control of violators over distribution lines, have also had a great impact on the stability of the power grid in Iraq. Currently, the demand for electricity exceeds the supply and the capacity to produce electricity in Iraq, as shown in Fig. 1. Iraq has recently made efforts to upgrade and develop its infrastructure to keep pace with the latest technological developments in the electricity sector. After 2003, Iraq was opened to the global energy market. The 2014 report from the United Nations Development Program (UNDP) shows that 35% of Iraqis demand the provision of electricity and consider it a top priority [8]. The electricity grid in Iraq was severely damaged by wars, successive conflicts, and the economic sanctions of the 1990s. To date, there are no studies that address the issue of electrical energy in Iraq in terms of forecasting demand and prices.
Many power plants were built in Iraq between the mid-1970s and the 1980s, with a few small gas-fired plants operating in 2003. Most current power plants are thermal, using crude oil supported by gas and hydropower plants. The unserved demand is currently met by privately owned distributed diesel generators. According to a statement by the Iraqi Ministry of Electricity on 9 January 2021, some estimations indicated that Iraq produces and imports 19 to 21 thousand megawatts of electricity, while the actual need exceeds 30 thousand megawatts. Therefore, Iraq needs to nearly double its production capacity to secure stable levels of electrical energy, while its population may double by 2050. This means that its energy consumption will increase by a higher percentage than the increase in electricity production.
Despite the increase in electricity production during 2021 in Iraq, which amounted to about 20 thousand megawatts, electricity cuts continue, especially during the peak (in summer), when the temperature exceeds fifty degrees Celsius. The shortage in the supply of electric power in Iraq exceeds 10,000 megawatts due to several factors:
• The increasing targeting of electric power systems and towers by sabotage.
• The decline in gas supplied by Iran to operate the stations.
• The governorates' lack of commitment to the quotas approved for them in terms of the amount of energy supplied.
• The continued emergence of informal agricultural and squatter areas, which adds new burdens to the system.
• The rise in temperature and the technical faults that accompany it.
• The obsolescence of transmission and distribution networks.
The energy system in Iraq is currently hierarchical, with the Ministry of Electricity exercising control over every aspect of the process, including providing electricity and equipment to consumers as well as billing and accounting services. This approach to control causes confusion and internal conflicts within the ministry, resulting in substandard service. Furthermore, the ministry functions as a policymaker, operator, regulator, and supplier, creating a potential conflict of interest. In addition, the electricity sector lacks a formal regulatory framework, and despite the issuance of invoices, there is no interaction with consumers regarding electricity services.
Energy forecasting
Energy forecasting is a crucial factor for any energy utility company. It helps guide decisions about whether there is a need for infrastructural development, the energy supply per time, load switching, or the cost of energy, to mention a few. Accurate forecasting of energy demand is essential in preparing for the future and ensures that consumers do not experience energy shortages; people's opinions can even be incorporated into the model to improve performance [9]. Load forecasting can be classified into four categories according to the time horizon over which the forecast is made [10]:
• Long-Term Load Forecasting (LTLF): A class of load forecasting whose time horizon is measured in months or perhaps years. It is mostly used for price or risk management assessments.
• Midterm Load Forecasting (MTLF): A class of load forecasting whose time horizon ranges from a couple of days to a few months. It is useful when stakeholders want to evaluate the financial implications of their systems, when the energy price needs to be fixed, or when a risk management assessment is necessary.
• Short-Term Load Forecasting (STLF): A class of load forecasting whose time horizon is between a few minutes and a few days. It is critical when the utility company needs a robust understanding of the energy consumption behavior of its end-users.
• Very Short-Term Load Forecasting (VSTLF): A class of forecasting whose time horizon is in minutes or a few hours, typically not more than 3 hours.
In recent years, researchers have been using machine learning techniques and deep learning models to predict energy demand [11,12]. Deep learning models consist of layers of interconnected units called "perceptrons" that are trained on data to make accurate predictions [13]. However, traditional artificial neural networks are designed to work with static data, which is not typically found in smart grid systems. Instead, smart grid data usually takes the form of time-series data, which changes over time and follows patterns based on past events [14,15]. To effectively learn from this type of data, a neural network needs to forget unimportant information and retain important information for future use. This is where recurrent neural networks (RNNs) come in; they are designed to handle sequential data by selectively retaining information. A specific type of RNN, known as the Long Short-Term Memory (LSTM) model, uses input gates, forget gates, and output gates to selectively store and retrieve information, making it well suited for processing time-series data.
Problem statement and contribution
Predicting power supply and demand in an unstable country like Iraq is a challenging task for two main reasons. First, it is difficult to collect time-series data from traditional power grid systems. Second, the inconsistent operation of power plants and the varying amounts of imported power from neighboring nations throughout the year have a significant impact on the completeness and accuracy of the data collected. These factors make it even more challenging for Iraqi officials to design plans to improve the current power grid and to make decisions about how to handle the growing population and the increasing demand for power driven by advances in technology.
The existing literature on the Iraqi power grid network indicates a severe lack of comprehensive studies or datasets related to time-series-based supply and demand. Furthermore, most existing studies rely on traditional approaches for analyzing supply and demand, which are insufficient considering the rapid growth in population and technology. Therefore, this work addresses a critical issue in Iraq by focusing on forecasting electricity supply and demand. The novelty of the paper lies in its empirical analysis of various machine learning and deep learning models for predicting electricity supply and demand. It also distinguishes itself from the existing literature by addressing the limitations of previous works and offering insights into the unique challenges faced by Iraq's energy sector. Therefore, this study aims to make the following contributions:
• Collecting a time-series-based dataset for the years 2019 to 2021, encompassing a range of electricity demand and supply values. The dataset is novel and was officially collected with the support of the Operation and Control Office, Ministry of Electricity, Baghdad, Iraq. The minimum electricity demand value is 6336 MW/day and the maximum is 29059 MW/day; the minimum supply value is 5399 MW/day and the maximum is 18233 MW/day.
• Using the collected dataset to predict the supply and demand of electricity in 15 provinces in Iraq. The structural time-series modeling approach was applied to data for the period between 2019 and 2021, using estimated equations and value assumptions. To make the predictions, the study used a range of machine learning and deep learning models, including Linear Regression (LR), XGBoost (XGB), Random Forest (RF), Long Short-Term Memory (LSTM), Temporal Convolutional Networks (TCN), and Multilayer Perceptron (MLP). The study also used various metrics for benchmarking and statistical analysis to verify the differences between the models involved.
The remainder of this article is organized as follows: Section 2 reviews the related literature. Section 3 outlines the research methodology, including details of the data collection process, the models used, the experimental settings in terms of model parameters, and the metrics involved in the performance evaluation. Section 4 presents and discusses the results obtained and assesses the differences in the models' performance using statistical testing approaches. Finally, Section 5 concludes the article and presents future directions as well as the limitations of this research.
Literature review
This section explores the related energy forecasting literature and presents the state of the art.
Threat to validity
Before delving into the literature, it is essential to acknowledge potential threats to the validity of the literature search. The search for related literature was conducted using specific search strings and databases to ensure comprehensive coverage. However, it is important to recognize that, despite the efforts spent in this work, some relevant sources may not have been included. The search strings employed included variations of terms such as "Energy Forecasting", "Electricity Demand Prediction", "Machine Learning", "Deep Learning", and "Iraq Energy Sector". These strings were designed to capture a wide range of relevant articles and studies. The publishers explored in this research encompassed many academic publishers, including but not limited to Elsevier, IEEE, MDPI, and Springer. While the aim was to be as exhaustive as possible, the vast and dynamic nature of the energy forecasting literature may still lead to some omissions. To mitigate potential biases and ensure the relevance of the selected works, the focus was on peer-reviewed articles published in highly reputable journals and conferences [16].
Additionally, state-of-the-art publications were considered, prioritizing those from the last decade to ensure the applicability of the findings to contemporary energy forecasting challenges. Despite these measures, it is important to recognize that the landscape of energy forecasting is continually evolving; new methods, data sources, and insights emerge regularly. Therefore, this work represents a snapshot of the literature available up to the authors' knowledge cutoff date in September 2022. While diligent efforts were made to provide a comprehensive review of relevant literature, the limitations inherent in any literature search may have influenced the selection of sources. Readers are encouraged to consider this context when interpreting the findings and conclusions presented in the following sections. Table 1 presents a summary of the threats to validity of this work.
Related work
Over the last few decades, researchers have relied on statistical methods to forecast energy demand, energy balance, or energy supply. A commonly used method is the autoregressive integrated moving average (ARIMA), which has been successful in predicting energy demand in stable load situations. However, this method is not always effective in real-world scenarios, where extreme peak loads can occur intermittently [17].
Moreover, with the advent of neural networks, better solutions are now available to researchers. Neural networks can learn hidden patterns from data, which is a significant improvement over purely statistical methods. In fact, neural networks operate similarly to how humans learn: by making a prediction, receiving feedback, and then adjusting the prediction accordingly. A deep neural network that accounts for sequential data is useful for applications that involve time.
The literature includes a large number of studies on predicting electricity supply and demand. Researchers have employed a variety of methods, ranging from statistically based approaches to machine learning and deep learning methods. The choice of a particular method depends on the specific characteristics of the dataset, such as whether it is a time series, has seasonality, or is stationary. Consequently, researchers typically test their dataset before selecting an appropriate algorithm. For example, Dittmer et al. [18] forecast the demand for electricity in a rural region of Germany. They first examined their data for seasonality and trends and subsequently used ARIMA and other statistical models to forecast 48 hours ahead. The results showed that the models they employed were suitable for performing the forecasting task and allowed predictions to be made up to 14 days in advance. In another study, Kim et al. [19] developed a hybrid deep learning model using LSTM and CNN for the prediction of power demand. The study used a real-world dataset, and the results indicated that the hybrid model was more accurate in predicting power demand than each model used individually.
The authors of [20] investigated the issue of forecasting short-term electricity demand in Uruguay over the period 2010 to 2019. They employed a variety of models, including linear regression, ridge, KNN, random forest, gradient boosting, MLP, and ExtraTrees, and used benchmarking metrics such as MAE, MAPE, and RMSE. The results indicated that these models were suitable for forecasting hourly power demand. Similarly, Velasquez et al. [21] analyzed the time series of power demand in Brazil for the period 2014 to 2019 using various forecasting approaches. They found that incorporating regression and seasonality with mixed time-series approaches can help reduce forecasting errors.
Other researchers aim to test different approaches and determine the most appropriate method for their dataset. Pallonetto et al. [22] recently compared deep neural networks and the Support Vector Machine (SVM) approach. Their results indicated that LSTM provided more accurate forecasting when the load data used in training was sufficient, while SVM performed better when the load data was insufficient. The two approaches were applied to one-hour-ahead and one-day-ahead load forecasting. Similarly, Banga et al. [23] compared the power demand forecasting performance of ten models: ARIMA, Prophet, LR, SVM, XGBoost, RF, KNN, RNN, LSTM, and GRU. They evaluated the performance of these models using metrics such as RMSE, MAE, MAPE, and R2. Their findings suggested that at the hourly and daily levels, the Prophet model provided more accurate forecasting than the other models.
Additionally, several studies have investigated the energy demand and supply of different countries through prediction processes. For example, Raza et al. [24] focused on Pakistan and aimed to create a balance between power demand and supply for economic purposes. They used the Long-Range Energy Alternatives Planning System (LEAP) to perform forecasts, and the results suggested that Pakistan can generate more power to meet future needs. Similarly, Jaramillo et al. [25] studied the case of Ecuador and used the SARIMA modeling approach for the monthly forecast of power demand, which proved to be efficient. These studies demonstrate the importance of forecasting in the achievement of sustainable energy systems worldwide.
Table 2 summarizes the aforementioned studies in terms of the models used, datasets, limitations, and advantages.
Research methodology
This section describes the data collection process as well as the forecasting models involved in this research. The setup of the experiments, in terms of optimizing the models' parameters, and the evaluation metrics are also explained.
Dataset collection
The data used in this work were officially collected from the Department of Operations and Control of the Ministry of Electricity in Baghdad, Iraq, for the period 2019 to 2021. The data consisted of hourly time-series data on the supply and demand of 15 provinces in Iraq. The collection process was strictly regulated due to governmental procedures in Iraq and took approximately 4 months to complete. The data were then preprocessed and cleaned to address missing values and outliers. In total, the dataset consisted of 26,352 rows and 15 columns, each column corresponding to a different province.
Time series forecasting models
One of the main considerations in the analysis of time-series data is the examination of its inherent seasonality [26]. A rigorous Dickey-Fuller analysis of the dataset (see Figs. 2 and 3) made it evident that both demand and production exhibit non-stationary behavior. Consequently, the decision was made to employ forecasting models that are robust to the absence of stationarity in the series. This choice not only enhances the robustness of the analysis but also confers adaptability by bypassing the strict stipulation that the data must conform to stationarity, a conventional prerequisite in many established statistical models.
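For reference, this stationarity check can be reproduced with the augmented Dickey-Fuller test available in statsmodels. The following is a minimal sketch, assuming the hourly demand series is available as a pandas Series; the file and column names are illustrative, not taken from the original dataset.

```python
# Augmented Dickey-Fuller stationarity check on the hourly demand series.
import pandas as pd
from statsmodels.tsa.stattools import adfuller

demand = pd.read_csv("demand.csv")["demand"]  # hypothetical file/column names

adf_stat, p_value, used_lag, n_obs, crit_values, _ = adfuller(demand.dropna())
print(f"ADF statistic: {adf_stat:.3f}, p-value: {p_value:.4f}")
# A p-value above 0.05 fails to reject the unit-root null hypothesis, i.e.
# the series is treated as non-stationary, as reported for both datasets.
```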
In this study, an extensive analysis was conducted using six distinct prediction models. Three of them are deep learning-based models: TCN (Temporal Convolutional Network), MLP (Multi-Layer Perceptron), and LSTM (Long Short-Term Memory). The remaining three are machine learning models, namely linear regression, XGBoost, and random forest. In the following, a brief description of each model is provided.
Deep learning models
• Long Short-Term Memory Network. The Long Short-Term Memory (LSTM) algorithm is a deep learning method used for prediction that can handle individual data points or a sequence of data points. It has proven to be an effective algorithm that provides accurate predictions based on recent information in the data. LSTM is capable of retaining information for a long period of time in order to predict, process, and classify time-series datasets. It primarily utilizes four neural networks and memory blocks (cells), which store information and control the information flow using three gates produced by a sigmoid function: the input gate is used to include useful information, the forget gate is used to discard information that is no longer useful, and the output gate is used to extract relevant information from the cell state [27] (see the sketch after this list).
• Temporal Convolutional Network. The Temporal Convolutional Network (TCN) places more emphasis on temporal series. The TCN is a time-series processing algorithm developed by Bai et al. in 2018 to address the challenge of extracting long-term time-series information. It combines causal convolution, dilated convolution, and residual blocks. The TCN requires little memory for training due to its shared convolutional filters and can process long input sequences through parallel convolutions, making for a more stable training scheme [28,29].
• Multi-Layer Perceptron. The Multi-Layer Perceptron (MLP) is one of the most popular neural networks used to train deep learning models. In this network, the input is presented together with the desired output, and the weights are adjusted so that the network attempts to produce the desired output. The MLP consists of three kinds of layers: the input layer, which contains the input neurons that feed information to the hidden layers; the hidden layers, which perform calculations based on the input data and forward the output to the output layer; and the output layer, which represents the model results. The number of hidden layers determines the depth of the network, which is why an MLP with more hidden layers is considered a deep learning model [30].
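As a rough illustration of how such a deep learning forecaster can be assembled, the following sketch builds an LSTM model in Keras that maps a window of past hourly values to one prediction per forecast step. The layer size is illustrative and does not correspond to the tuned values reported in Table 6.

```python
# Minimal LSTM forecaster: a window of past hourly values in, one
# prediction per forecast step out.
import tensorflow as tf

past_history, horizon = 72, 24

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(past_history, 1)),  # one value per hour
    tf.keras.layers.LSTM(64),                        # gated memory over the window
    tf.keras.layers.Dense(horizon),                  # one output per forecast step
])
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
              loss="mae")
```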
Machine learning models
• Linear Regression. Regression analysis is a statistical approach that allows the strength of the relationship between one or more variables to be determined; it can therefore help predict unknown values based on these relationships. Simple linear regression uses one independent variable to model a linear relationship with a dependent variable, while multiple linear regression uses multiple independent variables to predict the dependent variable [31]. Although linear regression is not inherently suited to modeling non-linear time-series data, there are justifications for using it as a prediction model in this context: linear regression models are simple and easy to interpret and can be a good starting point for modeling time-series data, providing a baseline for understanding the data's structure. After evaluating its performance, linear regression proved an appropriate choice for the data under study, as shown by the experimental results in Section 4 (see the sketch after this list).
• XGBoost. XGBoost is a popular gradient boosting algorithm used for machine learning tasks. It was developed by Tianqi Chen [32] as an improvement on the GBM algorithm, using a more regularized model to prevent overfitting. XGBoost is known for its efficiency, flexibility, and portability and has been shown to outperform other algorithms in tasks such as classification, regression, and ranking. The algorithm combines multiple weak learners to create a strong learner, with each weak learner trained on a subset of the data; it works by training decision trees and combining their predictions to make a final prediction [33].
• Random Forest. Random forest models are a popular type of nonparametric machine learning model used for both classification and regression tasks. They belong to the ensemble method category, specifically bagging methods. Ensemble methods use a group of weak learners to create a stronger and more accurate model. Random forests are a collection of many decision trees, which on their own are known to be prone to overfitting. By combining multiple trees, random forest models are able to mitigate this issue and provide a more flexible and powerful model with lower variance. This allows larger and more predictive trees to grow, resulting in better performance on both training and unseen data. Additionally, random forest models retain the simplicity and interpretability of decision trees [34].
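A minimal sketch of these three baselines follows, assuming windowed arrays X_train, y_train, and X_val built as in the experimental setup described below; the hyperparameter values are illustrative, not the tuned ones from Table 6.

```python
# The three machine learning baselines used in the study.
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor
from sklearn.multioutput import MultiOutputRegressor
from xgboost import XGBRegressor

models = {
    "LR": LinearRegression(),
    "RF": RandomForestRegressor(n_estimators=100, max_depth=5),
    # XGBRegressor predicts one target at a time, so it is wrapped to
    # emit one output per forecast step.
    "XGB": MultiOutputRegressor(XGBRegressor(n_estimators=100, max_depth=5)),
}
for name, model in models.items():
    model.fit(X_train, y_train)       # windowed inputs, multi-step targets
    predictions = model.predict(X_val)
```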
Experimental setup
The sliding or rolling window approach is used in time-series forecasting to handle the sequential and temporal nature of the data. It involves training the forecasting model on a fixed-length window of past observations and then using the model to make predictions for a specific forecast horizon. The window slides forward in time, and the process is repeated at regular intervals. In this context, the forecast horizon refers to how far into the future predictions are to be made based on historical data. The past history horizon, also known as the historical data window, refers to the length of the time-series data used to make predictions over the forecast horizon.
Both values are essential because they influence the complexity and accuracy of the forecasting model. Furthermore, for the past history, it is important to strike a balance between using enough historical data to capture relevant patterns and trends and not including so much data that noise or outdated information is introduced.
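A minimal sketch of this sliding-window construction, together with the 80/20 chronological split used in the study, is shown below; `series` is assumed to be a one-dimensional NumPy array of hourly values.

```python
# Build (past window -> future window) training pairs from a 1-D series.
import numpy as np

def make_windows(series, past_history, horizon):
    X, y = [], []
    for start in range(len(series) - past_history - horizon + 1):
        X.append(series[start:start + past_history])            # inputs
        y.append(series[start + past_history:
                        start + past_history + horizon])        # targets
    return np.array(X), np.array(y)

X, y = make_windows(series, past_history=72, horizon=24)
split = int(0.8 * len(X))                # 80/20 chronological split
X_train, y_train = X[:split], y[:split]
X_val, y_val = X[split:], y[split:]
```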
The purpose of the study is to analyze the behavior of six different prediction models when forecasting the demand and supply of energy in Iraq for four different horizons ahead: 24, 48, 72, and 168 hours. To do so, the experiments were carried out in two phases, as shown in Fig. 4: the first phase was designed to optimize the hyperparameters of each model, while in the second phase, the optimized values were used to train the models on different scenarios, according to the past history and forecast horizons, on both datasets.
The datasets were split into two parts, training and validation, with proportions of 80% and 20%, respectively. The first subdivision was used to train the models, while the second was used to evaluate and compare their performance in both phases. This approach makes it possible to assess each model's performance on data entirely independent of the data used for training, minimizing the risk of overfitting. The use of a test set not seen during the training process ensures that the model not only fits the training data but can also effectively generalize to new observations. This validation approach provides a solid foundation for confidence in the models' ability to make accurate predictions in real-world scenarios, as their performance has been comprehensively evaluated on previously unseen data.
The first phase of the experimentation consists of establishing the best parameter combination for each model by running an experiment over the entire parameter grid with the prediction horizon set to 24 hours and the past history to 72 hours. Tables 3 and 4 show the parametrization used in the first phase, including the parameter names together with all the values tested for the three deep learning models. Similarly, Table 5 shows the parametrization for the three machine learning models. In both cases, parameters that are not specified were left at their default values.
Various configurations for batch size, number of epochs, maximum steps per epoch, optimizer, and learning rate were tested for the deep learning models. For the batch size, the commonly used values (32 and 64) were selected, along with the number of epochs (200), maximum steps per epoch (10000), and learning rates (0.001 and 0.01). The Adam optimizer was chosen due to its suitability for a broad range of machine learning problems, as reported in the literature [35]. Furthermore, for data preprocessing, two widely used normalization techniques were employed: mean (z-score) normalization and min-max scaling, as shown in Table 3. These normalization methods were used in both the deep learning and machine learning models. Using different normalization methods serves several purposes: 1) it enhances the robustness of the results by accounting for variations in data characteristics, 2) it enables comparisons to identify the most effective normalization technique, and 3) it assesses the generalization of the forecasting models to different data representations and aids data exploration by revealing specific data patterns. Table 6 contains the best parameters found for each model for demand and supply. The best parameter combination was then used in the second phase of the experimentation, where experiments were conducted for each of the remaining forecast and past history horizons.
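For reference, the two normalization schemes can be expressed as follows; this is a simplified sketch that computes global statistics on the training portion only, an assumption not stated explicitly in the text.

```python
# Mean (z-score) normalization and min-max scaling of the training windows.
mu, sigma = X_train.mean(), X_train.std()
X_train_z = (X_train - mu) / sigma        # z-score normalization

lo, hi = X_train.min(), X_train.max()
X_train_mm = (X_train - lo) / (hi - lo)   # min-max scaling to [0, 1]
```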
Table 6. The best parametrization for each model and dataset.
Evaluation metrics
The purpose of this study is to evaluate and compare the performance of the previous models under different scenarios. To achieve this, five widely recognized metrics from the forecasting literature [36,37] were selected: mean absolute error (MAE), mean squared error (MSE), root mean squared error (RMSE), mean absolute percentage error (MAPE), and weighted average percentage error (WAPE). The chosen metrics were used as the basis for evaluating and comparing the effectiveness of the algorithms in each scenario. The respective formulas for these metrics are defined in Equations (1), (2), (3), (4), and (5).
• Mean Absolute Error (MAE). MAE reflects the average of the absolute differences between the actual and predicted observations in the test sample:
$$\mathrm{MAE} = \frac{1}{n}\sum_{i=1}^{n}\left|y_i - \hat{y}_i\right| \quad (1)$$
where $i$ is the observation index, $n$ is the number of observations, and $y_i$ and $\hat{y}_i$ are the actual and predicted values, respectively.
• Mean Squared Error (MSE). It measures the average of the squared differences between actual and predicted observations. Using the same symbols as in MAE:
$$\mathrm{MSE} = \frac{1}{n}\sum_{i=1}^{n}\left(y_i - \hat{y}_i\right)^2 \quad (2)$$
• Root Mean Squared Error (RMSE). It measures the square root of the average of the squared differences between the actual and predicted observations:
$$\mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(y_i - \hat{y}_i\right)^2} \quad (3)$$
• Mean Absolute Percentage Error (MAPE). This metric evaluates the accuracy of the prediction of the forecasting model:
$$\mathrm{MAPE} = \frac{1}{n}\sum_{i=1}^{n}\left|\frac{y_i - \hat{y}_i}{y_i}\right| \quad (4)$$
• Weighted Average Percentage Error (WAPE). This metric evaluates the accuracy of the prediction of the forecasting model, taking into account the weights of the observations:
$$\mathrm{WAPE} = \frac{\sum_{i=1}^{n} w_i \left|y_i - \hat{y}_i\right|}{\sum_{i=1}^{n} w_i \left|y_i\right|} \quad (5)$$
where $w_i$ represents the weight assigned to each observation. A NumPy implementation of these metrics is sketched after this list.
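The five metrics translate directly into NumPy; the following sketch assumes y and y_hat are arrays of actual and predicted values, with WAPE weights defaulting to 1.

```python
# The five evaluation metrics defined in Equations (1)-(5).
import numpy as np

def mae(y, y_hat):
    return np.mean(np.abs(y - y_hat))

def mse(y, y_hat):
    return np.mean((y - y_hat) ** 2)

def rmse(y, y_hat):
    return np.sqrt(mse(y, y_hat))

def mape(y, y_hat):
    return np.mean(np.abs((y - y_hat) / y))

def wape(y, y_hat, w=None):
    w = np.ones_like(y) if w is None else w
    return np.sum(w * np.abs(y - y_hat)) / np.sum(w * np.abs(y))
```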
Results and discussions
This section discusses the results achieved using the chosen forecasting models and the collected dataset. The objective was to evaluate and compare the performance of the different models on the demand and supply datasets in various scenarios; specifically, to select the best prediction model for each forecast horizon (24 h, 48 h, 72 h, and 168 h). For this purpose, the models were tested with seven different past history periods. The experiments can be reproduced with the code in the public repository located at [38].
Consistent with previous studies, five widely recognized metrics from the forecasting literature were selected to evaluate the models; all of them were explained in the previous section. All implementations were carried out using the Python programming language.
Before the experiments were carried out, it was necessary to test the trends and seasonality of the data. Fig. 5 depicts the trend and seasonality of the demand data. According to Fig. 5, the data exhibits both trend and seasonality features. The demand trend shows the general direction in which demand for power moves over time; clearly, some periods show an increase or decrease in demand depending on the period. Seasonality, in turn, refers to patterns in demand that repeat over time, such as higher demand at certain times during a season. The residuals represent the differences between the actual demand for power and the predicted demand; they capture the variability in demand that cannot be explained by trend or seasonality. The same behavior is observed in the supply data, as shown in Fig. 6. Visualizing demand and supply shows the fluctuation of power in Iraq. As announced in Section 3, the methodology is divided into two parts. The first step was designed to determine which hyperparameters are best for each model with a specific past history of 72 hours and a forecast horizon of 24 hours. Once these parameters, and thus the best configuration of each model, were identified, the models were trained on different scenarios in which the prediction horizon and past history vary.
To make a more extensive comparison and to observe the behavior of these data with different prediction models, a total of six models were used, three of which are machine learning models and the other three deep learning models: linear regression (LR), random forest (RF), XGBoost (XGB), long short-term memory network (LSTM), temporal convolutional network (TCN), and multi-layer perceptron (MLP). The performance of these models is evaluated using the metrics MSE, RMSE, MAE, WAPE, and MAPE.
The proposed methodology aims to establish an experimental framework for the prediction of energy supply and demand in Iraq. The following sections present the results separately for the two time series analyzed in this study, beginning with demand and then supply.
Energy demand forecasting
Table 7 provides the results for the 24 h forecast horizon and the demand dataset. The results are divided into two main categories: more classical machine learning algorithms and deep learning models. The models were benchmarked and compared with respect to past history and five performance metrics. The two best-performing models are linear regression among the machine learning models and TCN among the deep learning models: the first obtains a MAE of 450 and a MAPE of 0.025%, while the second obtains a MAE of 493 and a MAPE of 0.026%. Regarding the worst results, the worst model in the first category was random forest, with a MAE of 870 and a MAPE of 0.040%; in the second category, MLP was the worst, with a MAE of 521 and a MAPE of 0.028%.
Table 8 provides the results for the 48 h forecast horizon and the demand dataset. The two best-performing models are linear regression for machine learning and LSTM for deep learning: the first obtains a MAE of 643 and a MAPE of 0.035%, while the second obtains a MAE of 678 and a MAPE of 0.036%. As for the worst results, the worst model in the first category was random forest, with a MAE of 1002 and a MAPE of 0.048%; in the second category, TCN was the worst, with a MAE of 685 and a MAPE of 0.037%.
Table 9 provides the results for the 72 h forecast horizon and the demand dataset. The two best-performing models are linear regression for machine learning and LSTM for deep learning: the first obtains a MAE of 784 and a MAPE of 0.042%, while the second obtains a MAE of 815 and a MAPE of 0.043%. As for the worst results, the worst model in the first category was random forest, with a MAE of 1096 and a MAPE of 0.053%; in the second category, MLP was the worst, with a MAE of 849 and a MAPE of 0.045%.

Table 10 provides the results for the 168 h forecast horizon and the demand dataset. The two best-performing models are linear regression for machine learning and MLP for deep learning: the first obtains a MAE of 1123 and a MAPE of 0.060%, while the second obtains a MAE of 1162 and a MAPE of 0.062%. As for the worst results, the worst model in the first category was random forest, with a MAE of 1368 and a MAPE of 0.068%; in the second category, LSTM was the worst, with a MAE of 1187 and a MAPE of 0.063%.

Fig. 7 provides an overview of the previous tables with respect to the MAE metric, grouping the results by forecast horizon and comparing each of the models used in the experimentation. It is interesting to note at first glance how, as the prediction horizon increases, the results get worse for all models; this degradation is mainly because longer prediction horizons imply greater difficulty in prediction. Based on this figure, the linear regression model outperformed the other models, with the lowest MAE across all four forecast horizons, while random forest obtained the worst results both in the individual experiments and in the median of its distribution. Finally, it is worth noting a certain difference between the DL and ML models: the ML models show greater variability between models, which may be because the DL models are better suited to capturing the temporal characteristics of this dataset.

Fig. 8 shows the comparison between the predicted and actual values for the last month of the study. The best model among all the experiments' predictions served as the basis for the comparison; in this case, the best result was obtained with a prediction horizon of 24 hours and the linear regression model.
The Friedman test was applied to assess the overall significance of differences among the performances of the six models. The Friedman test is a non-parametric test used to compare three or more matched groups (in this case, models) without assuming that the data follow a specific distribution; the objective is to determine whether there are significant differences between the groups. In this context, each model provides 28 evaluation values (7 different past history values for 4 different forecast windows) from Tables 7-10. The Friedman test determined a significant difference in the models' performances, for both MAE and RMSE values, with p-values equal to $6.49 \times 10^{-24}$ and $7.53 \times 10^{-24}$, respectively. Afterwards, the test used for pairwise comparisons was the Mann-Whitney U test (also known as the Wilcoxon rank-sum test), with the p-values corrected using the Bonferroni-Dunn method. In the pairwise comparisons between the six models, significant differences were found in some but not all cases. The presence of significant differences in at least some comparisons indicates that the models are not equivalent in terms of their performance under the circumstances evaluated. This allows the following ranking of model performance to be established: the best-performing model is LR, followed by LSTM, TCN, and MLP, which are considered equivalent; third place goes to XGB, and last place (worst-performing model) to RF.
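A minimal sketch of this testing pipeline with SciPy is shown below, assuming `scores` maps each model name to its 28 MAE values (7 past-history settings x 4 forecast horizons); a plain Bonferroni threshold is used here as a stand-in for the Bonferroni-Dunn correction applied in the study.

```python
# Friedman omnibus test followed by pairwise Mann-Whitney U tests.
from itertools import combinations
from scipy.stats import friedmanchisquare, mannwhitneyu

stat, p = friedmanchisquare(*scores.values())
print(f"Friedman p-value: {p:.2e}")

pairs = list(combinations(scores, 2))
alpha = 0.05 / len(pairs)                 # Bonferroni-corrected threshold
for a, b in pairs:
    _, p_ab = mannwhitneyu(scores[a], scores[b])
    print(a, "vs", b, "significant" if p_ab < alpha else "n.s.")
```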
An overfitting analysis was also carried out using learning curves for the best model in the ranking. Learning curves provide insight into how the model's performance evolves as the training set size increases. By plotting the model's training and validation metrics against the number of training instances, it is possible to identify whether overfitting or underfitting is occurring.
Fig. 9 shows the learning curves for the best experiment, corresponding to the LR model with a 24-hour forecast horizon and 120 hours of past history. The x-axis represents the iteration number, where more than 20000 subsets of increasing size were used; the y-axis represents the normalized MAE values for both the training and validation subsets. As can be seen from this figure, the convergence of both curves, as well as the level at which it occurs, indicates a well-generalized model that generalizes well to unseen data.
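For readers who wish to reproduce this diagnostic, scikit-learn offers a learning-curve utility; the sketch below applies it to the linear regression model, noting that the study itself used increasing-size subsets rather than the cross-validation scheme assumed here.

```python
# Learning curves for the LR model: training vs. validation MAE as the
# training set grows.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import learning_curve

sizes, train_scores, val_scores = learning_curve(
    LinearRegression(), X, y,
    train_sizes=np.linspace(0.1, 1.0, 10),
    scoring="neg_mean_absolute_error", cv=5)
# Converging training and validation curves at a low error level indicate
# a well-generalized model, as observed in Fig. 9.
```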
Energy supply forecasting
Table 11 provides the results for the 24 h forecast horizon and the supply dataset. The two best-performing models are XGBoost for machine learning and LSTM for deep learning: the first obtains a MAE of 509 and a MAPE of 0.035%, while the second obtains a MAE of 541 and a MAPE of 0.040%. As for the worst results, the worst model in the first category was random forest, with a MAE of 604 and a MAPE of 0.042%; in the second category, TCN was the worst, with a MAE of 556 and a MAPE of 0.041%.
Table 12 provides the results for the 48 h forecast horizon and the supply dataset. The two best-performing models are XGBoost for machine learning and LSTM for deep learning: the first obtains a MAE of 588 and a MAPE of 0.041%, while the second obtains a MAE of 644 and a MAPE of 0.048%. As for the worst results, the worst model in the first category was random forest, with a MAE of 690 and a MAPE of 0.049%; in the second category, MLP was the worst, with a MAE of 658 and a MAPE of 0.049%.
Table 13 provides the results for the 72 h forecast horizon and the supply dataset. The two best-performing models are XGBoost for machine learning and LSTM for deep learning: the first obtains a MAE of 637 and a MAPE of 0.045%, while the second obtains a MAE of 704 and a MAPE of 0.053%. As for the worst results, the worst model in the first category was random forest, with a MAE of 747 and a MAPE of 0.053%; in the second category, TCN was the worst, with a MAE of 736 and a MAPE of 0.054%.
Table 14 provides the results for the 168 h forecast horizon and the supply dataset. The two best-performing models are XGBoost for machine learning and TCN for deep learning: the first obtains a MAE of 875 and a MAPE of 0.061%, while the second obtains a MAE of 917 and a MAPE of 0.069%. As for the worst results, the worst model in the first category was random forest, with a MAE of 914 and a MAPE of 0.066%; in the second category, LSTM was the worst, with a MAE of 923 and a MAPE of 0.069%.
From Fig. 10, it can be noted that the behavior of the results across forecast horizons follows the same pattern as for demand: the longer the forecasting horizon, the worse the results obtained. Overall, the XGBoost model apparently obtains the best results on the four horizons; however, it also obtains some of the worst results, so it would not necessarily be the most appropriate model to rely on. Observing the behavior of LR, it can be considered one of the best models: it ranks second, and the variability of its results is very low. As for the deep learning models, LSTM outperforms the TCN and MLP models on the four forecast horizons, while the performance of MLP is generally weaker than that of the other models. Finally, as was the case for demand, when examining the median of each model, the worst is again RF, so it is definitely not a model to be considered for this type of data.

Fig. 11 shows the comparison between predicted and actual values for the last month of the study. The best model among all the experiments' predictions served as the basis for the comparison; in this case, the best result was obtained with a prediction horizon of 24 hours and the XGBoost model.
As in the case of the demand dataset, the Friedman test was applied to assess the overall significance of differences among the performances of the six models, using both the MAE and RMSE values in Tables 11-14. The Friedman test determined a significant difference in the models' performances, for both MAE and RMSE values, with p-values equal to $4.76 \times 10^{-10}$ and $1.21 \times 10^{-8}$, respectively.
Nevertheless, when performing pairwise comparisons, no significant differences were found in any case; therefore, it is not possible to establish a ranking of the models on this dataset. This may be due to the correction for multiple comparisons: when applying the Bonferroni correction to pairwise comparisons, the significance threshold becomes more stringent, meaning that differences must be more pronounced to reach significance in individual comparisons. In addition, variability in the data can influence the ability to detect significant differences; if the data are highly variable, it is more difficult to detect differences with confidence. In this sense, higher variability can be observed for both measures than in the case of the demand dataset, which may explain this result.
As in the previous section, an overfitting analysis was also carried out for the best-performing model. In this case, as it was not possible to obtain a ranking of the models, XGB was selected based on the best results reported in Tables 11 to 14.
The learning curves in Fig. 12 correspond to the XGB model, with a 24-hour forecast horizon and 72 hours of past history. The x-axis represents the iteration number, where more than 20000 subsets of increasing size were used; the y-axis represents the normalized MAE values for both the training and validation subsets. Similar to what occurred with the LR model for demand, the convergence of both curves, as well as the level at which it occurs, indicates a well-generalized model that generalizes well to unseen data.
In both the demand and supply forecasting results, the TCN models stand out as the most complex and time-intensive deep learning models due to their specific architecture and extended training times. The LSTM models exhibit moderate complexity with shorter training durations, while the MLP models are the simplest with the briefest training periods. Among the machine learning models, Random Forest and XGBoost are more complex than Linear Regression due to their ensemble nature. XGBoost, in particular, requires the most extensive training time, while Linear Regression remains the simplest and least time-intensive model [39].
Discussions
In the realm of energy forecasting, benchmarking the performance of methodologies against state-of-the-art techniques and ground truth data is essential to evaluate their effectiveness. This section discusses the performance of the models with respect to both existing state-of-the-art methods and ground truth data.

Comparison with State-of-the-Art: An extensive evaluation was performed on the six forecasting models in comparison to well-established state-of-the-art methods that are commonly utilized in the field of energy forecasting. These benchmark methods encompass a range of approaches, including deep learning and machine learning architectures. The comparison was performed across various forecasting horizons, including 24 hours, 48 hours, 72 hours, and 168 hours, to account for short-term and long-term prediction requirements. The metrics used for benchmarking encompass widely recognized error measures such as MAE, MSE, RMSE, WAPE, and MAPE. These metrics offer a holistic view of forecasting accuracy, accounting for errors of different magnitudes and providing a balanced assessment of model performance.
Comparison with Ground Truth Data:
To further validate the reliability of the forecasting models, their predictions were compared with ground truth data obtained from authoritative sources, namely the actual energy consumption and supply measurements observed during the evaluation period. Because this dataset has not been used in prior work, direct comparison with previously published results is not possible. The findings confirm that the models replicate actual energy demand and supply trends with a high degree of accuracy, indicating a strong alignment between predictions and ground truth data.
Finally, the favorable comparison against state-of-the-art methods and the alignment with ground truth data validate the robustness and efficacy of the proposed forecasting approach. These findings underscore the potential of this research to significantly enhance the precision and reliability of energy forecasting in the context of the Iraqi energy sector.
Conclusions
In conclusion, this study has significantly advanced the understanding of energy demand and supply forecasting in the complex landscape of a liberalized energy market. It addressed the critical challenge of maintaining a real-time balance between energy demand and supply in a distributed environment, underscoring the need for continuous model maintenance to ensure reliable forecasts. A novel time-series dataset covering the years 2019 to 2021 was collected, encompassing a range of Iraqi electricity demand and supply values. The study carefully compared different architectural models and how they could be used to predict Iraq's power supply and demand, and it identified ways to improve the accuracy and usefulness of these predictions by optimizing the models' parameters. Using the collected dataset, six prominent models were employed for forecasting: LSTM, TCN, MLP, LR, XGB, and RF. Their performance was rigorously assessed using key metrics such as MAE, MSE, RMSE, WAPE, and MAPE.
Furthermore, the findings unveiled crucial insights into demand and supply forecasting. For demand forecasting, LR emerged as the standout performer across multiple forecast horizons, demonstrating its strength as a machine learning-based model; LSTM excelled among the deep learning models for specific horizons, while TCN and MLP displayed their strengths in other contexts. For supply forecasting, XGB and LSTM led the way as the best machine learning and deep learning approaches, respectively, whereas RF and MLP lagged behind, revealing their limitations in modeling intricate temporal relationships.
Additionally, the results underscored the pivotal role of data preprocessing techniques in shaping forecasting performance. The study also highlighted the concentration of the highest electricity demand in Baghdad, driven by factors such as population growth and industrial expansion. While this increase in demand contributed to environmental problems, various regions also saw improvements in energy efficiency. The study holds considerable promise for assisting the Iraqi government in tackling energy issues, offering valuable insights for selecting the most effective forecasting models. As the research highlights, precise energy demand predictions are indispensable for ensuring a stable and dependable energy supply, thereby bolstering economic development and enhancing the well-being of Iraq's population.
With future demand anticipated to reach between 30,000 and 35,000 megawatts per day, the ability to predict and manage this demand becomes paramount. This research therefore not only addresses Iraq's pressing energy problems but also establishes a robust foundation for future investigations in this domain, contributing to more effective and sustainable energy resource management, Iraq's economic growth, and the overall welfare of its people.
As future work, the study aims to explore hybrid forecasting models that combine machine learning and deep learning, with the goal of further improving the accuracy and reliability of energy demand and supply predictions in Iraq. The study also plans to introduce exogenous variables, such as weather data and economic indicators, into the models to enhance their predictive capabilities. Future research may also examine how different time frames affect model performance and develop region-specific forecasting models tailored to the energy needs of Iraq's provinces. In addition, extending the dataset by two more years will support a more accurate discussion of the limitations. Collectively, these endeavors hold the potential to shape more effective strategies for managing Iraq's energy resources, ultimately fostering the nation's economic prosperity and the well-being of its citizens.
Fig. 8. Comparison between actual and predicted values for demand in the last month of the dataset. The results shown are those obtained with the best model.
Fig. 11. Comparison between actual and predicted values for supply in the last month of the dataset. The results shown are those obtained with the best model.
Table 1. Summary of the threats to validity of this research.
Table 2. A summary of the literature.
Table 3. Training parameters used for the deep learning models.
Table 4. The parameters used for the LSTM, TCN, and MLP deep learning models.
Table 5. The parameters used for the Random Forest, Linear Regression, and XGBoost machine learning models.
Table 7. Forecasting demand with a forecast horizon of 24 h and different past-history scenarios. The optimal outcomes of each model are emphasized in bold.
Table 8. Forecasting demand with a forecast horizon of 48 h and different past-history scenarios. The optimal outcomes of each model are emphasized in bold.
Table 9. Forecasting demand with a forecast horizon of 72 h and different past-history scenarios. The optimal outcomes of each model are emphasized in bold.
Fig. 7. Distribution of demand results in terms of MAE, grouped by forecast horizon.
Table 10. Forecasting demand with a forecast horizon of 168 h and different past-history scenarios. The optimal outcomes of each model are emphasized in bold.
Table 11. Forecasting supply with a forecast horizon of 24 h and different past-history scenarios. The optimal outcomes of each model are emphasized in bold.
Table 12. Forecasting supply with a forecast horizon of 48 h and different past-history scenarios. The optimal outcomes of each model are emphasized in bold.
Table 13. Forecasting supply with a forecast horizon of 72 h and different past-history scenarios. The optimal outcomes of each model are emphasized in bold.
Fig. 10. Distribution of supply results in terms of MAE, grouped by forecast horizon.
Table 14. Forecasting supply with a forecast horizon of 168 h and different past-history scenarios. The optimal outcomes of each model are emphasized in bold.